Introduction of Parallel GPGPU Acceleration Algorithms for the Solution of Radiative Transfer
NASA Technical Reports Server (NTRS)
Godoy, William F.; Liu, Xu
2011-01-01
General-purpose computing on graphics processing units (GPGPU) is a recent technique that allows the parallel graphics processing unit (GPU) to accelerate calculations performed sequentially by the central processing unit (CPU). To introduce GPGPU to radiative transfer, the Gauss-Seidel solution of the well-known expressions for 1-D and 3-D homogeneous, isotropic media is selected as a test case. Different algorithms are introduced to balance memory use and GPU-CPU communication, critical aspects of GPGPU. Results show that speed-ups of one to two orders of magnitude are obtained when compared to sequential solutions. The underlying value of GPGPU is its potential extension to other radiative solvers (e.g., Monte Carlo, discrete ordinates) with a minimal learning curve.
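The Gauss-Seidel iteration used as the test case above can be sketched in a few lines. This is a minimal sequential version for a generic diagonally dominant system; the matrix `a` and vector `b` are illustrative placeholders, not the radiative transfer expressions from the paper:

```python
def gauss_seidel(a, b, sweeps=100):
    """Solve a x = b with Gauss-Seidel sweeps (dense, illustrative).

    Each update uses the newest available values, which makes a sweep
    inherently sequential; GPU versions therefore reorder or colour
    the unknowns to expose parallelism.
    """
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / a[i][i]
    return x

# Small diagonally dominant system with exact solution x = [1, 2].
a = [[4.0, 1.0],
     [1.0, 3.0]]
b = [6.0, 7.0]
x = gauss_seidel(a, b)
```

The data dependency inside each sweep is exactly what the paper's different algorithms must work around to keep the GPU busy.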
Use of general purpose graphics processing units with MODFLOW
Hughes, Joseph D.; White, Jeremy T.
2013-01-01
To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized.
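Two of the building blocks named in the abstract, compressed sparse row storage and a Jacobi preconditioner, can be sketched as follows. This is an illustrative sequential CPU version, not the UPCG solver itself; the small 3×3 matrix is a made-up example:

```python
def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a matrix held in compressed sparse row (CSR) form.

    Each row's dot product is independent of the others, which is what
    makes the operation a natural one-thread-per-row GPU kernel.
    """
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(y)):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# The 3x3 matrix [[4, 1, 0], [1, 4, 1], [0, 1, 4]] in CSR form.
data = [4.0, 1.0, 1.0, 4.0, 1.0, 1.0, 4.0]
indices = [0, 1, 0, 1, 2, 1, 2]
indptr = [0, 2, 5, 7]

y = csr_matvec(data, indices, indptr, [1.0, 1.0, 1.0])

# A Jacobi preconditioner simply scales each entry by 1/diagonal.
diag = [4.0, 4.0, 4.0]
z = [yi / di for yi, di in zip(y, diag)]
```

The Jacobi step is trivially parallel, which matches the abstract's observation that easily parallelized preconditioners benefit most from the GPGPU.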
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loughry, Thomas A.
As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must likewise scale. This competency development effort explored using the General Purpose Graphics Processing Unit (GPGPU) to accomplish high-rate Rice decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU. Distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication data signal processing, with three times or better performance improvements and up to ten times reduction in cost over custom hardware, at least in the case of Rice decompression and Reed-Solomon decoding.
Capability of GPGPU for Faster Thermal Analysis Used in Data Assimilation
NASA Astrophysics Data System (ADS)
Takaki, Ryoji; Akita, Takeshi; Shima, Eiji
A thermal mathematical model plays an important role in on-orbit operations as well as in spacecraft thermal design. The thermal mathematical model has some uncertain thermal characteristic parameters, such as thermal contact resistances between components and effective emittances of multilayer insulation (MLI) blankets, which degrade the efficiency and accuracy of the model. A particle filter, one of the successive data assimilation methods, has been applied to construct spacecraft thermal mathematical models. This method conducts a large number of ensemble computations, which require substantial computational power. Recently, General Purpose computing on Graphics Processing Units (GPGPU) has attracted attention in high performance computing. Therefore, GPGPU is applied here to increase the computational speed of the thermal analysis used in the particle filter. This paper shows the speed-up results obtained by using GPGPU as well as the method of applying it.
Orthorectification by Using GPGPU Method
NASA Astrophysics Data System (ADS)
Sahin, H.; Kulur, S.
2012-07-01
Thanks to the nature of graphics processing, newly released products offer highly parallel processing units with high memory bandwidth and computational power exceeding a teraflop. Modern GPUs are not only powerful graphics engines but also highly parallel programmable processors with very fast computing capabilities and much higher memory bandwidth than central processing units (CPUs). Data-parallel computation can be briefly described as mapping data elements to parallel processing threads. The rapid development of GPU programmability and capability has attracted the attention of researchers dealing with complex problems that require heavy calculation. This interest gave rise to the concepts of "General Purpose Computation on Graphics Processing Units (GPGPU)" and "stream processing". Graphics processors are powerful yet cheap and affordable hardware, so they have become an alternative to conventional processors. Graphics chips, once fixed-function application hardware, have been transformed into modern, powerful and programmable processors to meet these broader needs. Especially in recent years, the use of graphics processing units for general-purpose computation has drawn researchers and developers to this field. The biggest obstacle is that graphics processing units use a programming model different from current methods: efficient GPU programming requires re-coding the existing algorithm while considering the limitations and structure of the graphics hardware, since these many-core processors cannot be programmed with traditional event-driven procedural methods. GPUs are especially effective at repeating the same computing steps over many data elements when high accuracy is needed, providing results both quickly and accurately. Compared to GPUs, CPUs, which perform one computation at a time according to flow control, are slower for such workloads. This structure can be exploited in various applications of computer technology. This study covers how the general-purpose parallel programming model and the computational power of GPUs can be used in photogrammetric applications, especially direct georeferencing. The direct georeferencing algorithm was coded using the GPGPU method and the CUDA (Compute Unified Device Architecture) programming language, and the results were compared with a traditional CPU implementation. In a second application, projective rectification was coded using the GPGPU method and CUDA; sample images of various sizes were processed and the results evaluated. The GPGPU method is especially useful for repeating the same computations on highly dense data, thus finding the solution quickly.
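The per-pixel mapping at the heart of projective rectification can be sketched as a plain homography application; on a GPGPU one thread would evaluate this for each output pixel. This is an illustrative sketch, not the authors' CUDA code, and `h` is a hypothetical 3×3 row-major homography:

```python
def apply_homography(h, x, y):
    """Map image coordinates (x, y) through a 3x3 homography h.

    Projective rectification evaluates this mapping once per output
    pixel; the pixels are independent, hence one GPU thread per pixel.
    """
    xp = h[0][0] * x + h[0][1] * y + h[0][2]
    yp = h[1][0] * x + h[1][1] * y + h[1][2]
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return xp / w, yp / w

# The identity homography leaves coordinates unchanged.
identity = [[1.0, 0.0, 0.0],
            [0.0, 1.0, 0.0],
            [0.0, 0.0, 1.0]]
pt = apply_homography(identity, 10.0, 20.0)
```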
GPU-based real-time trinocular stereo vision
NASA Astrophysics Data System (ADS)
Yao, Yuanbin; Linton, R. J.; Padir, Taskin
2013-01-01
Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been shown to provide higher accuracy in stereo matching, which could benefit applications like distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse disparities computed in different directions, following various image processing techniques, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVidia CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.
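The winner-take-all step can be sketched as a per-pixel minimum over a cost volume. This is an illustrative sequential version, not the paper's GPGPU implementation; the toy cost volume below is invented for the example:

```python
def winner_take_all(costs):
    """Pick, for each pixel, the disparity with the lowest fused cost.

    costs[d][i] is the aggregated matching cost of disparity
    hypothesis d at pixel i; each pixel's minimum is independent of
    the others, so the step maps to one GPU thread per pixel.
    """
    n_disp = len(costs)
    n_pix = len(costs[0])
    return [min(range(n_disp), key=lambda d: costs[d][i])
            for i in range(n_pix)]

# Toy cost volume: 3 disparity hypotheses over 4 pixels.
costs = [
    [5.0, 1.0, 9.0, 2.0],   # d = 0
    [1.0, 4.0, 2.0, 8.0],   # d = 1
    [7.0, 6.0, 1.0, 0.5],   # d = 2
]
disp = winner_take_all(costs)
```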
Multi-Threaded Algorithms for GPGPU in the ATLAS High Level Trigger
NASA Astrophysics Data System (ADS)
Conde Muíño, P.; ATLAS Collaboration
2017-10-01
General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located at the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with Level-1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPUs. The High Level Trigger reduces the trigger rate from the 100 kHz Level-1 acceptance rate to 1.5 kHz for recording, requiring an average per-event processing time of ∼ 250 ms for this task. The selection in the high level trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significant challenge that will grow with future LHC upgrades. During the LHC data taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further to 7.5 times the design value in 2026 following LHC and ATLAS upgrades. Corresponding improvements in the speed of the reconstruction code will be needed to provide the required trigger selection power within affordable computing resources. Key factors determining the potential benefit of including GPGPUs as part of the HLT processor farm are: the relative speed of the CPU and GPGPU algorithm implementations; the relative execution times of the GPGPU algorithms and the serial code remaining on the CPU; the number of GPGPUs required; and the relative financial cost of the selected GPGPUs. We give a brief overview of the algorithms implemented and present new measurements that compare the performance of various configurations exploiting GPGPU cards.
Performance Testing of GPU-Based Approximate Matching Algorithm on Network Traffic
2015-03-01
A Large Scale, High Resolution Agent-Based Insurgency Model
2013-09-30
Compute Unified Device Architecture (CUDA) is NVIDIA Corporation's software development model for General Purpose Programming on Graphics Processing Units (GPGPU).
HRLSim: a high performance spiking neural network simulator for GPGPU clusters.
Minkovich, Kirill; Thibeault, Corey M; O'Brien, Michael John; Nogin, Aleksey; Cho, Youngkwan; Srinivasa, Narayan
2014-02-01
Modeling of large-scale spiking neural networks is an important tool in the quest to understand brain function and subsequently create real-world applications. This paper describes a spiking neural network simulator environment called HRL Spiking Simulator (HRLSim). This simulator is suitable for implementation on a cluster of general purpose graphical processing units (GPGPUs). Novel aspects of HRLSim are described and an analysis of its performance is provided for various configurations of the cluster. With the advent of inexpensive GPGPU cards and growing compute power, HRLSim offers an affordable and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks.
Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters
NASA Astrophysics Data System (ADS)
Fluke, Christopher J.; Barnes, David G.; Barsdell, Benjamin R.; Hassan, Amr H.
2011-01-01
General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks and make the investment of time and effort to become early adopters of GPGPU in astronomy stand to reap great benefits.
NASA Astrophysics Data System (ADS)
Fehr, M.; Navarro, V.; Martin, L.; Fletcher, E.
2013-08-01
Space Situational Awareness[8] (SSA) is defined as the comprehensive knowledge, understanding and maintained awareness of the population of space objects, the space environment and existing threats and risks. As ESA's SSA Conjunction Prediction Service (CPS) requires the repetitive application of a processing algorithm against a data set of man-made space objects, it is crucial to exploit the highly parallelizable nature of this problem. Currently the CPS system makes use of OpenMP[7] for parallelization purposes using CPU threads, but only a GPU with its hundreds of cores can fully benefit from such high levels of parallelism. This paper presents the adaptation of several core algorithms[5] of the CPS for general-purpose computing on graphics processing units (GPGPU) using NVIDIA's Compute Unified Device Architecture (CUDA).
On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms.
Chen, Chunlei; He, Li; Zhang, Huixiang; Zheng, Hao; Wang, Lei
2017-01-01
Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering place high demands on the computing power of the hardware platform. Parallel computing is a common solution to meet this demand, and the General Purpose Graphic Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, incremental clustering algorithms face a dilemma between clustering accuracy and parallelism when powered by a GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering, such as evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity, and analyzes the upper and lower bounds of different-to-same mis-affiliation; fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity; smaller work-depth means superior parallelism. Through the proofs, we conclude that the accuracy of an incremental clustering algorithm is negatively related to evolving granularity while parallelism is positively related to the granularity. These contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm, and experimental results verified the theoretical conclusions.
Accelerating image recognition on mobile devices using GPGPU
NASA Astrophysics Data System (ADS)
Bordallo López, Miguel; Nykänen, Henri; Hannuksela, Jari; Silvén, Olli; Vehviläinen, Markku
2011-01-01
The future multi-modal user interfaces of battery-powered mobile devices are expected to require computationally costly image analysis techniques. Graphics processing units are very well suited for parallel processing, and the addition of programmable stages and high-precision arithmetic provides opportunities to implement energy-efficient, complete algorithms. At the moment the first mobile graphics accelerators with programmable pipelines are available, enabling the GPGPU implementation of several image processing algorithms. In this context, we consider a face tracking approach that uses efficient gray-scale invariant texture features and boosting. The solution is based on Local Binary Pattern (LBP) features and makes use of the GPU in the pre-processing and feature extraction phases. We have implemented a series of image processing techniques in the shader language of OpenGL ES 2.0, compiled them for a mobile graphics processing unit and performed tests on a mobile application processor platform (OMAP3530). In our contribution, we describe the challenges of designing on a mobile platform, present the performance achieved and provide measurement results for the actual power consumption in comparison to using the CPU (ARM) on the same platform.
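The Local Binary Pattern feature underlying the face tracker can be sketched for the basic 8-neighbour case. This is an illustrative version, not the authors' OpenGL ES shader code; the 3×3 image is a made-up example:

```python
def lbp_8(image, x, y):
    """Basic 8-neighbour Local Binary Pattern code at pixel (x, y).

    Each neighbour contributes one bit: 1 if its value is greater than
    or equal to the centre pixel. Thresholding against the centre is
    what makes the code invariant to monotonic gray-scale changes.
    """
    centre = image[y][x]
    # Neighbours in clockwise order starting at the top-left.
    offsets = [(-1, -1), (0, -1), (1, -1), (1, 0),
               (1, 1), (0, 1), (-1, 1), (-1, 0)]
    code = 0
    for bit, (dx, dy) in enumerate(offsets):
        if image[y + dy][x + dx] >= centre:
            code |= 1 << bit
    return code

# Bright top row, dark elsewhere: only the three top neighbours fire.
img = [[9, 9, 9],
       [1, 5, 1],
       [1, 1, 1]]
code = lbp_8(img, 1, 1)
```

In practice a histogram of such codes over an image region forms the texture descriptor, and each pixel's code can be computed by an independent shader or GPU thread.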
Real-time traffic sign recognition based on a general purpose GPU and deep-learning.
Lim, Kwangyong; Hong, Yongwon; Choi, Yeongwoo; Byun, Hyeran
2017-01-01
We present a General Purpose Graphics Processing Unit (GPGPU) based real-time traffic sign detection and recognition method that is robust against illumination changes. There have been many approaches to traffic sign recognition in various research fields; however, previous approaches faced several limitations under low illumination or wide variance of light conditions. To overcome these drawbacks and improve processing speed, we propose a method that 1) is robust against illumination changes, 2) uses GPGPU-based real-time traffic sign detection, and 3) performs region detection and recognition using a hierarchical model. This method produces stable results in low illumination environments. Both detection and hierarchical recognition are performed in real time, and the proposed method achieves a 0.97 F1-score on our collected dataset, which follows the Vienna Convention traffic rules (Germany and South Korea).
JDiffraction: A GPGPU-accelerated JAVA library for numerical propagation of scalar wave fields
NASA Astrophysics Data System (ADS)
Piedrahita-Quintero, Pablo; Trujillo, Carlos; Garcia-Sucerquia, Jorge
2017-05-01
JDiffraction, a GPGPU-accelerated JAVA library for numerical propagation of scalar wave fields, is presented. Angular spectrum, Fresnel transform, and Fresnel-Bluestein transform are the numerical algorithms implemented in the methods and functions of the library to compute the scalar propagation of a complex wavefield. The functionality of the library is tested with the modeling of easy-to-forecast numerical experiments and also with the numerical reconstruction of a digitally recorded hologram. The performance of JDiffraction is contrasted with a library written in C++, showing great competitiveness in the apparently less complex environment of the JAVA language. JDiffraction also includes easy-to-use JAVA methods and functions that take advantage of the computing power of graphics processing units to accelerate processing times, reaching up to 74 frames per second on 2048×2048-pixel images.
A Comparison of FPGA and GPGPU Designs for Bayesian Occupancy Filters.
Medina, Luis; Diez-Ochoa, Miguel; Correal, Raul; Cuenca-Asensi, Sergio; Serrano, Alejandro; Godoy, Jorge; Martínez-Álvarez, Antonio; Villagra, Jorge
2017-11-11
Grid-based perception techniques in the automotive sector, based on fusing information from different sensors to obtain a robust perception of the environment, are proliferating in the industry. However, one of the main drawbacks of these techniques is the high computing performance they require, traditionally prohibitive for embedded automotive systems. In this work, the capabilities of new computing architectures that embed these algorithms are assessed in a real car. The paper compares two ad hoc optimized designs of the Bayesian Occupancy Filter; one for a General Purpose Graphics Processing Unit (GPGPU) and the other for a Field-Programmable Gate Array (FPGA). The resulting implementations are compared in terms of development effort, accuracy and performance, using datasets from a realistic simulator and from a real automated vehicle.
Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)
NASA Astrophysics Data System (ADS)
Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.
2016-05-01
This study introduces a practical approach to developing a real-time signal processing chain for a general phased array radar on NVIDIA GPUs (graphics processing units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from CPUs. Performance, benchmarked as computation time for various input data cube sizes, is compared across GPUs and CPUs. Through this analysis, it is demonstrated that real-time GPGPU (general-purpose GPU) processing of array radar data is possible with relatively low-cost commercial GPUs.
A GPU-based calculation using the three-dimensional FDTD method for electromagnetic field analysis.
Nagaoka, Tomoaki; Watanabe, Soichi
2010-01-01
Numerical simulations with numerical human models using the finite-difference time domain (FDTD) method have recently been performed frequently in a number of fields in biomedical engineering. However, the FDTD calculation runs too slowly. We focus, therefore, on general purpose programming on the graphics processing unit (GPGPU). The three-dimensional FDTD method was implemented on the GPU using the Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C1060 as a GPGPU board. The performance of the GPU is evaluated in comparison with the performance of a conventional CPU and a vector supercomputer. The results indicate that three-dimensional FDTD calculations using a GPU can significantly reduce run time compared with a conventional CPU, even for a straightforward GPU implementation of the three-dimensional FDTD method, although the GPU/CPU speed ratio varies with the calculation domain and thread block size.
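The structure of an FDTD update loop can be sketched in one dimension. This is an illustrative free-space version with normalized units and a soft Gaussian source, not the three-dimensional CUDA implementation described above; grid size, source position, and pulse parameters are arbitrary choices:

```python
import math

def fdtd_1d(steps, size=200, source_pos=100):
    """One-dimensional FDTD update loop (free space, normalized units).

    Every E-field sample, and then every H-field sample, is updated
    only from the previous half-step's neighbour values, so each grid
    cell can be assigned to its own GPU thread.
    """
    ez = [0.0] * size   # electric field
    hy = [0.0] * size   # magnetic field
    for t in range(steps):
        for k in range(1, size):
            ez[k] += 0.5 * (hy[k - 1] - hy[k])
        ez[source_pos] += math.exp(-((t - 30.0) / 10.0) ** 2)  # soft Gaussian source
        for k in range(size - 1):
            hy[k] += 0.5 * (ez[k] - ez[k + 1])
    return ez

ez = fdtd_1d(100)
```

In the 3-D case the same two sweeps run over a volume, and the thread block size mentioned in the abstract controls how that volume is tiled across GPU threads.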
BarraCUDA - a fast short read sequence aligner using graphics processing units
2012-01-01
Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General purpose computing on graphics processing units (GPGPU) extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computation-intensive alignment component of BWA to the GPU to take advantage of its massive parallelism. As a result, BarraCUDA offers an order-of-magnitude performance boost in alignment throughput when compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions BarraCUDA is designed to take advantage of the parallelism of the GPU to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net PMID:22244497
Shorebird Migration Patterns in Response to Climate Change: A Modeling Approach
NASA Technical Reports Server (NTRS)
Smith, James A.
2010-01-01
The availability of satellite remote sensing observations at multiple spatial and temporal scales, coupled with advances in climate modeling and information technologies, offers new opportunities for the application of mechanistic models to predict how continental scale bird migration patterns may change in response to environmental change. In earlier studies, we explored the phenotypic plasticity of a migratory population of Pectoral sandpipers by simulating the movement patterns of an ensemble of 10,000 individual birds in response to changes in stopover locations as an indicator of the impacts of wetland loss and inter-annual variability on the fitness of migratory shorebirds. We used an individual based, biophysical migration model, driven by remotely sensed land surface data, climate data, and biological field data. Mean stop-over durations and stop-over frequency with latitude predicted from our model for nominal cases were consistent with results reported in the literature and available field data. In this study, we take advantage of new computing capabilities enabled by recent GP-GPU computing paradigms and commodity hardware (general purpose computing on graphics processing units). Several aspects of our individual based (agent modeling) approach lend themselves well to GP-GPU computing. We have been able to allocate compute-intensive tasks to the graphics processing units, and now simulate ensembles of 400,000 birds at varying spatial resolutions along the central North American flyway. We are incorporating additional, species specific, mechanistic processes to better reflect the processes underlying bird phenotypic plasticity responses to different climate change scenarios in the central U.S.
Efficient and automatic image reduction framework for space debris detection based on GPU technology
NASA Astrophysics Data System (ADS)
Diprima, Francesco; Santoni, Fabio; Piergentili, Fabrizio; Fortunato, Vito; Abbattista, Cristoforo; Amoruso, Leonardo
2018-04-01
In recent years, the increasing number of space debris objects has triggered the need for a distributed monitoring system for the prevention of possible space collisions. Space surveillance based on ground telescopes allows the monitoring of the traffic of Resident Space Objects (RSOs) in Earth orbit. This space debris surveillance has several applications such as orbit prediction and conjunction assessment. This paper proposes an optimized, performance-oriented pipeline for source extraction intended for the automatic detection of space debris in optical data. The detection method is based on morphological operations and the Hough transform for lines. Near real-time detection is obtained using General Purpose computing on Graphics Processing Units (GPGPU). The high degree of processing parallelism provided by GPGPU allows data analysis to be split over thousands of threads in order to process big datasets within a limited computational time. The implementation has been tested on a large and heterogeneous image data set, containing satellites from different orbit ranges imaged in multiple observation modes (i.e. sidereal and object tracking). These images were taken during an observation campaign performed from the EQUO (EQUatorial Observatory) observatory at the Broglio Space Center (BSC) in Kenya, which is part of the ASI-Sapienza Agreement.
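The Hough voting step named above can be sketched as follows; each edge pixel votes independently in (theta, rho) space, which is exactly the per-thread work a GPGPU implementation distributes (with atomic increments on the accumulator). This is an illustrative sequential version, not the paper's pipeline, and the bin counts and toy point set are invented for the example:

```python
import math

def hough_lines(points, n_theta=180):
    """Accumulate Hough-transform votes in (theta, rho) space.

    Every edge pixel votes in every theta bin independently of the
    others; collinear pixels pile their votes into the same bin.
    """
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return acc

# Edge pixels on the horizontal line y = 5: they should all agree on
# the bin theta = 90 degrees, rho = 5.
points = [(x, 5) for x in range(20)]
acc = hough_lines(points)
```

Peaks in the accumulator then correspond to candidate debris streaks in the image.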
Workflow of the Grover algorithm simulation incorporating CUDA and GPGPU
NASA Astrophysics Data System (ADS)
Lu, Xiangwen; Yuan, Jiabin; Zhang, Weiwei
2013-09-01
The Grover quantum search algorithm, one of only a few representative quantum algorithms, can speed up many classical algorithms that use search heuristics. No true quantum computer has yet been developed, so for the present, simulation is one effective means of verifying the search algorithm. In this work, we focus on the simulation workflow using a compute unified device architecture (CUDA). Two simulation workflow schemes are proposed. These schemes combine the characteristics of the Grover algorithm and the parallelism of general-purpose computing on graphics processing units (GPGPU). We also analyzed the optimization of memory space and memory access from this perspective. We implemented four programs on CUDA to evaluate the performance of the schemes and optimizations. Through experimentation, we analyzed the organization of threads suited to Grover algorithm simulations, compared the storage costs of the four programs, and validated the effectiveness of the optimizations. Experimental results also showed that the best-performing CUDA program outperformed the serial libquantum program on a CPU with a speedup of up to 23 times (12 times on average), depending on the scale of the simulation.
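The amplitude-level operations being simulated are simple: an oracle sign flip followed by reflection about the mean, repeated roughly (π/4)·√N times. A minimal CPU state-vector sketch (our illustration, not the paper's CUDA code; the function name and toy sizes are ours) shows what the GPU threads would compute in parallel, one amplitude per thread:

```python
import math

def grover_search(n_qubits, marked):
    """Classical state-vector simulation of Grover's algorithm.
    Simulates the full 2^n amplitude vector on the CPU; a CUDA
    implementation parallelizes these amplitude-wise updates."""
    n = 2 ** n_qubits
    amp = [1.0 / math.sqrt(n)] * n               # uniform superposition
    iterations = int(round(math.pi / 4 * math.sqrt(n)))
    for _ in range(iterations):
        amp[marked] = -amp[marked]               # oracle: flip marked sign
        mean = sum(amp) / n
        amp = [2 * mean - a for a in amp]        # diffusion: reflect about mean
    return amp[marked] ** 2                      # probability of marked state

print(grover_search(10, marked=3))  # probability close to 1
```

The oracle and diffusion steps are embarrassingly parallel over the 2^n amplitudes, with only the mean requiring a reduction, which is why memory access patterns dominate the optimization, as the paper discusses.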
Progress in a novel architecture for high performance processing
NASA Astrophysics Data System (ADS)
Zhang, Zhiwei; Liu, Meng; Liu, Zijun; Du, Xueliang; Xie, Shaolin; Ma, Hong; Ding, Guangxin; Ren, Weili; Zhou, Fabiao; Sun, Wenqin; Wang, Huijuan; Wang, Donglin
2018-04-01
High performance processing (HPP) is an innovative architecture that targets high-performance computing with excellent power efficiency and computing performance. It is suitable for data-intensive applications such as supercomputing, machine learning, and wireless communication. An example chip with four application-specific integrated circuit (ASIC) cores, the first generation of HPP cores, has been taped out successfully in the Taiwan Semiconductor Manufacturing Company (TSMC) 40 nm low-power process. The innovative architecture shows great energy efficiency over the traditional central processing unit (CPU) and general-purpose computing on graphics processing units (GPGPU). Compared with MaPU, HPP has made great improvements in architecture. A chip with 32 HPP cores is being developed in the TSMC 16 nm FFC technology process and is planned for commercial use. The peak performance of this chip can reach 4.3 teraFLOPS (TFLOPS) and its power efficiency is up to 89.5 gigaFLOPS per watt (GFLOPS/W).
Advantages of GPU technology in DFT calculations of intercalated graphene
NASA Astrophysics Data System (ADS)
Pešić, J.; Gajić, R.
2014-09-01
Over the past few years, the expansion of general-purpose graphics-processing unit (GPGPU) technology has had a great impact on computational science. GPGPU is the utilization of a graphics-processing unit (GPU) to perform calculations in applications usually handled by the central processing unit (CPU). The use of GPGPUs to increase computational power in the materials sciences has significantly decreased the cost of already highly demanding calculations. The level of acceleration and parallelization depends on the problem itself. Some problems, such as the finite-difference time-domain (FDTD) algorithm and density-functional theory (DFT), can benefit from GPU acceleration and parallelization, while others cannot take advantage of these modern technologies. A number of GPU-supported applications have emerged in the past several years (www.nvidia.com/object/gpu-applications.html). Quantum ESPRESSO (QE) is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale, based on DFT, a plane-wave basis, and a pseudopotential approach. Since QE version 5.0, GPU support has been implemented as a plug-in component for the standard QE packages that allows exploiting the capabilities of Nvidia GPU graphics cards (www.qe-forge.org/gf/proj). In this study, we have examined the impact of GPU acceleration and parallelization on the numerical performance of DFT calculations. Graphene has been attracting attention worldwide and has already shown some remarkable properties. We have studied intercalated graphene using the GPU-enabled QE package PHonon. The term 'intercalation' refers to a process whereby foreign adatoms are inserted onto a graphene lattice. By intercalating different atoms between graphene layers, it is possible to tune their physical properties.
Our experiments have shown that there are benefits to using GPUs: we reached an acceleration of several times compared to standard CPU calculations.
Multi-GPGPU Tsunami simulation at Toyama-bay
NASA Astrophysics Data System (ADS)
Furuyama, Shoichi; Ueda, Yuki
2017-07-01
Accelerated multi-GPGPU (General Purpose Graphics Processing Unit) computation of tsunami run-up was achieved over a wide area (the whole of Toyama Bay in Japan) using a faster computation technique. Toyama Bay has active faults on the sea bed, so there is a high possibility of earthquakes and, in the case of a huge earthquake, tsunami waves; predicting the area of tsunami run-up is therefore important for reducing the damage a disaster inflicts on residents. However, the simulation is a very hard task because of the computer resources it requires. A resolution on the order of several meters is needed for run-up simulation, because artificial structures on the ground such as roads, buildings, and houses are very small, while at the same time a huge area must be simulated. In the Toyama Bay case the area is 42 km × 15 km; with 5 m × 5 m computational cells, over 26,000,000 cells are generated, and a normal desktop CPU took about 10 hours for the calculation. Reducing this calculation time is an important problem for an immediate tsunami run-up prediction system, which would in turn help protect residents of the coastal region. This study reduced the calculation time by using a multi-GPGPU system equipped with six NVIDIA Tesla K20X cards, with InfiniBand connections between the compute nodes via the MVAPICH library. As a result, the calculation on six GPUs was 5.16 times faster than the single-GPU case, corresponding to 86% parallel efficiency relative to linear speedup.
Utilizing GPUs to Accelerate Turbomachinery CFD Codes
NASA Technical Reports Server (NTRS)
MacCalla, Weylin; Kulkarni, Sameer
2016-01-01
GPU computing has established itself as a way to accelerate parallel codes in the high performance computing world. This work focuses on speeding up APNASA, a legacy CFD code used at NASA Glenn Research Center, while also drawing conclusions about the nature of GPU computing and the requirements for making GPGPU worthwhile on legacy codes. Rewriting and restructuring of the source code was avoided to limit the introduction of new bugs. The code was profiled and investigated for parallelization potential, and OpenACC directives were then used to mark the parallel parts of the code. The use of OpenACC directives was not able to reduce the runtime of APNASA on either the NVIDIA Tesla discrete graphics card or the AMD accelerated processing unit. Additionally, it was found that in order to justify the use of GPGPU, the amount of parallel work being done within a kernel would have to greatly exceed the work done by any one portion of the APNASA code. It was determined that for an application like APNASA to be accelerated on the GPU, it should not be modular in nature, and the parallel portions of the code must contain a large fraction of the code's computation time.
NASA Astrophysics Data System (ADS)
Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek
2009-09-01
High Performance Computing (HPC) hardware solutions such as grid computing and General-Purpose computing on a Graphics Processing Unit (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming commonplace, and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near-real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: (1) critical information can be provided faster, and (2) more elaborate automated processing can be performed before the critical information is provided. In our particular case, we test the use of the PANTEX index, which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant grey-level co-occurrence matrix (GLCM) statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs, and (2) a CUDA-enabled GPU workstation. The reference platform is a dual-CPU quad-core workstation, and the total computing time of the PANTEX workflow is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring the various hardware solutions and the related software coding effort are presented.
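As a rough illustration of the texture measure involved, the sketch below computes GLCM contrast for one displacement vector and takes the minimum over several directions as a rotation-invariant score. This is a simplified stand-in, not the actual PANTEX definition: the displacement set, grey-level quantization, and window handling here are our illustrative assumptions.

```python
def glcm_contrast(img, dx, dy, levels=8):
    """Contrast of the grey-level co-occurrence matrix for one
    displacement (dx, dy). img is a 2D list of ints in [0, levels)."""
    h, w = len(img), len(img[0])
    glcm = [[0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                glcm[img[y][x]][img[y2][x2]] += 1
                total += 1
    # Contrast weights co-occurrences by squared grey-level difference.
    return sum(glcm[i][j] * (i - j) ** 2
               for i in range(levels) for j in range(levels)) / total

def texture_score(img):
    """Rotation-invariant score: minimum GLCM contrast over a small
    set of displacement directions (hypothetical set, for illustration)."""
    displacements = [(1, 0), (0, 1), (1, 1), (1, -1)]
    return min(glcm_contrast(img, dx, dy) for dx, dy in displacements)
```

Evaluating such a score in a moving window over a large scene is what makes the workflow both expensive and highly parallel: each window is independent, so it maps naturally onto either blade cores or GPU threads.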
NASA Astrophysics Data System (ADS)
Hobson, T.; Clarkson, V.
2012-09-01
As a result of continual space activity since the 1950s, there are now a large number of man-made Resident Space Objects (RSOs) orbiting the Earth. Because of the large number of objects and their relative speeds, the possibility of destructive collisions involving important space assets is now of significant concern to users and operators of space-borne technologies. As a result, a growing number of international agencies are researching methods for improving techniques to maintain Space Situational Awareness (SSA). Computer simulation is a method commonly used by many countries to validate competing methodologies prior to full-scale adoption. The use of supercomputing and/or reduced-scale testing is often necessary to effectively simulate such a complex problem on today's computers. Recently the authors presented a simulation aimed at reducing the computational burden by selecting the minimum level of fidelity necessary for contrasting methodologies and by utilising multi-core CPU parallelism for increased computational efficiency. The resulting simulation runs on a single PC while maintaining the ability to effectively evaluate competing methodologies. Nonetheless, the ability to control the scale and expand upon the computational demands of the sensor management system is limited. In this paper, we examine the advantages of increasing the parallelism of the simulation by means of General Purpose computing on Graphics Processing Units (GPGPU). As many sub-processes pertaining to SSA management are independent, we demonstrate how parallelisation via GPGPU has the potential to significantly enhance not only research into techniques for maintaining SSA, but also the level of sophistication of existing space surveillance sensors and sensor management systems. Nonetheless, the use of GPGPU imposes certain limitations and adds to the implementation complexity, both of which require consideration to achieve an effective system.
We discuss these challenges and how they can be overcome. We further describe an application of the parallelised system where visibility prediction is used to enhance sensor management. This facilitates significant improvement in maximum catalogue error when RSOs become temporarily unobservable. The objective is to demonstrate the enhanced scalability and increased computational capability of the system.
NASA Astrophysics Data System (ADS)
Zakirov, Andrey; Belousov, Sergei; Valuev, Ilya; Levchenko, Vadim; Perepelkina, Anastasia; Zempo, Yasunari
2017-10-01
We demonstrate an efficient approach to numerical modeling of optical properties of large-scale structures with typical dimensions much greater than the wavelength of light. For this purpose, we use the finite-difference time-domain (FDTD) method enhanced with a memory efficient Locally Recursive non-Locally Asynchronous (LRnLA) algorithm called DiamondTorre and implemented for General Purpose Graphical Processing Units (GPGPU) architecture. We apply our approach to simulation of optical properties of organic light emitting diodes (OLEDs), which is an essential step in the process of designing OLEDs with improved efficiency. Specifically, we consider a problem of excitation and propagation of surface plasmon polaritons (SPPs) in a typical OLED, which is a challenging task given that SPP decay length can be about two orders of magnitude greater than the wavelength of excitation. We show that with our approach it is possible to extend the simulated volume size sufficiently so that SPP decay dynamics is accounted for. We further consider an OLED with periodically corrugated metallic cathode and show how the SPP decay length can be greatly reduced due to scattering off the corrugation. Ultimately, we compare the performance of our algorithm to the conventional FDTD and demonstrate that our approach can efficiently be used for large-scale FDTD simulations with the use of only a single GPGPU-powered workstation, which is not practically feasible with the conventional FDTD.
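For readers unfamiliar with the baseline being accelerated, a conventional FDTD update in one dimension looks like the following sketch (normalized units; the grid size, step count, and source placement are arbitrary choices of ours, not from the paper). The DiamondTorre/LRnLA algorithm reorders exactly these stencil updates across space and time for better memory locality on the GPU:

```python
import math

def fdtd_1d(steps=200, n=400):
    """Minimal 1-D FDTD (Yee) scheme in normalized units
    (c = 1, dx = 1, Courant number 0.5): leapfrogged E and H field
    updates plus a soft Gaussian source at the grid center."""
    ez = [0.0] * n      # electric field
    hy = [0.0] * n      # magnetic field
    cn = 0.5            # Courant number
    for t in range(steps):
        for i in range(n - 1):
            hy[i] += cn * (ez[i + 1] - ez[i])
        for i in range(1, n):
            ez[i] += cn * (hy[i] - hy[i - 1])
        # Soft Gaussian pulse injected at the center of the grid.
        ez[n // 2] += math.exp(-((t - 30.0) / 10.0) ** 2)
    return ez

fields = fdtd_1d()
```

In the conventional form above, each time step sweeps the whole grid, so performance is bound by memory bandwidth; locally recursive asynchronous traversals trade this for much better cache and register reuse.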
Zhou, Lili; Clifford Chao, K S; Chang, Jenghwa
2012-11-01
Simulated projection images of digital phantoms constructed from CT scans have been widely used for clinical and research applications, but their quality and computation speed are not optimal for real-time comparison with radiography acquired with x-ray sources of different energies. In this paper, the authors performed polyenergetic forward projections using the Open Computing Language (OpenCL) in a parallel computing ecosystem consisting of a CPU and a general purpose graphics processing unit (GPGPU) for fast and realistic image formation. The proposed polyenergetic forward projection uses a lookup table containing the NIST-published mass attenuation coefficients (μ/ρ) for different tissue types and photon energies ranging from 1 keV to 20 MeV. The CT images of the sites of interest are first segmented into different tissue types based on the CT numbers and converted to a three-dimensional attenuation phantom by linking each voxel to the corresponding tissue type in the lookup table. The x-ray source can be a radioisotope or an x-ray generator with a known spectrum described as weight w(n) for energy bin E(n). The Siddon method is used to compute the x-ray transmission line integral for each E(n), and the x-ray fluence is the weighted sum of the exponential of the line integral over all energy bins, with added Poisson noise. To validate this method, a digital head and neck phantom constructed from the CT scan of a Rando head phantom was segmented into three regions (air, gray/white matter, and bone) for calculating the polyenergetic projection images for the Mohan 4 MV energy spectrum. To accelerate the calculation, the authors partitioned the workloads using task parallelism and data parallelism and scheduled them in a parallel computing ecosystem consisting of a CPU and GPGPU (NVIDIA Tesla C2050) using OpenCL only. The authors explored the task overlapping strategy and the sequential method for generating the first and subsequent digitally reconstructed radiographs (DRRs).
A dispatcher was designed to drive the high-degree parallelism of the task overlapping strategy. Numerical experiments were conducted to compare the performance of the OpenCL/GPGPU-based implementation with the CPU-based implementation. The projection images were similar to typical portal images obtained with a 4 or 6 MV x-ray source. For a phantom size of 512 × 512 × 223, the time for calculating the line integrals for a 512 × 512 image panel was 16.2 ms on the GPGPU for one energy bin, compared to 8.83 s on the CPU. The total computation time for generating one polyenergetic projection image of 512 × 512 was 0.3 s (141 s on the CPU). The relative difference between the projection images obtained with the CPU-based and OpenCL/GPGPU-based implementations was on the order of 10^-6, making them virtually indistinguishable. The task overlapping strategy was 5.84 and 1.16 times faster than the sequential method for the first and subsequent digitally reconstructed radiographs, respectively. The authors have successfully built digital phantoms using anatomic CT images and NIST μ/ρ tables for simulating realistic polyenergetic projection images, and optimized the processing speed with parallel computing using a GPGPU/OpenCL-based implementation. The computation time (0.3 s per projection image) is fast enough for real-time IGRT (image-guided radiotherapy) applications.
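The core of the forward model described above, the weighted sum over energy bins of the exponential of the Siddon line integral, can be sketched as follows. The attenuation values, spectrum weights, and function names are illustrative placeholders of ours, not the NIST data or the authors' code:

```python
import math
import random

# Hypothetical attenuation lookup, mu [1/cm] per tissue per energy bin,
# and spectrum weights w(n) -- placeholders, not NIST values.
MU = {"air":  [0.0002, 0.0001],
      "soft": [0.20,   0.05],
      "bone": [0.50,   0.10]}
WEIGHTS = [0.7, 0.3]   # spectrum weights summing to 1

def detector_signal(path, photons=10**6, noisy=False):
    """Detector counts along one ray. `path` lists
    (tissue, intersection_length_cm) pairs, as a Siddon-style ray
    trace through the segmented phantom would produce. The signal is
    the weighted sum over energy bins of exp(-line integral)."""
    signal = 0.0
    for n, w in enumerate(WEIGHTS):
        line_integral = sum(MU[t][n] * length for t, length in path)
        signal += w * math.exp(-line_integral)
    counts = photons * signal
    if noisy:
        # Gaussian approximation of Poisson noise (valid for large counts).
        counts = random.gauss(counts, math.sqrt(counts))
    return counts
```

Each detector pixel's ray is independent, so one GPU work-item per ray (or per ray and energy bin) is the natural decomposition, which is what makes the 16.2 ms per-bin timing above possible.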
High performance hybrid functional Petri net simulations of biological pathway models on CUDA.
Chalkidis, Georgios; Nagasaki, Masao; Miyano, Satoru
2011-01-01
Hybrid functional Petri nets are a widespread tool for representing and simulating biological models. Due to their potential for providing virtual drug testing environments, biological simulations have a growing impact on pharmaceutical research. Continuous research advancements in biology and medicine lead to exponentially increasing simulation times, thus raising the demand for performance acceleration through efficient and inexpensive parallel computation solutions. Recent developments in the field of general-purpose computation on graphics processing units (GPGPU) have enabled the scientific community to port a variety of compute-intensive algorithms onto the graphics processing unit (GPU). This work presents the first scheme for mapping biological hybrid functional Petri net models, which can handle both discrete and continuous entities, onto compute unified device architecture (CUDA) enabled GPUs. GPU-accelerated simulations are observed to run up to 18 times faster than sequential implementations. Simulating cell boundary formation by Delta-Notch signaling on a CUDA-enabled GPU results in a speedup of approximately 7x for a model containing 1,600 cells.
Spiking neural networks on high performance computer clusters
NASA Astrophysics Data System (ADS)
Chen, Chong; Taha, Tarek M.
2011-09-01
In this paper we examine the acceleration of two spiking neural network models on three clusters of multicore processors representing three categories of processors: x86, STI Cell, and NVIDIA GPGPUs. The x86 cluster utilized consists of 352 dual-core AMD Opterons, the Cell cluster consists of 320 Sony PlayStation 3s, and the GPGPU cluster contains 32 NVIDIA Tesla S1070 systems. The results indicate that the GPGPU platform outperforms the Cell and x86 platforms examined. From a cost perspective, however, the GPGPU is more expensive in terms of neuron/s throughput. If the cost of GPGPUs goes down in the future, this platform will become very cost-effective for these models.
Real-time Scheduling for GPUs with Applications in Advanced Automotive Systems
2015-01-01
…Architecture of GPU tasklet scheduling infrastructure… …throughput. This disparity is even greater when we consider mobile CPUs, such as those designed by ARM. For instance, the ARM Cortex-A15 series processor as… …stub library that replaces the GPGPU runtime within each virtual machine. The stub library communicates API calls to a GPGPU backend user-space daemon…
3D SPH numerical simulation of the wave generated by the Vajont rockslide
NASA Astrophysics Data System (ADS)
Vacondio, R.; Mignosa, P.; Pagani, S.
2013-09-01
A 3D numerical model of the wave generated by the Vajont slide, one of the most destructive ever to occur, is presented in this paper. A meshless Lagrangian Smoothed Particle Hydrodynamics (SPH) technique was adopted to simulate the highly fragmented, violent flow generated by the slide falling into the artificial reservoir. The speed-up achievable via General Purpose Graphics Processing Units (GP-GPU) made it possible to adopt a resolution adequate to describe the phenomenon. Comparison with the data available in the literature showed that the results of the numerical simulation satisfactorily reproduce the maximum run-up, as well as the water surface elevation in the residual lake after the event. Moreover, the 3D velocity field of the flow during the event and the discharge hydrograph of the overtopping of the dam were obtained.
Topical perspective on massive threading and parallelism.
Farber, Robert M
2011-09-01
Unquestionably, computer architectures have undergone a recent and noteworthy paradigm shift that now delivers multi- and many-core systems with tens to many thousands of concurrent hardware processing elements per workstation or supercomputer node. GPGPU (general-purpose graphics processing unit) technology in particular has attracted significant attention as new software development capabilities, namely CUDA (Compute Unified Device Architecture) and OpenCL™, have made it possible for students as well as small and large research organizations to achieve excellent speedup for many applications over more conventional computing architectures. The current scientific literature reflects this shift, with numerous examples of GPGPU applications that have achieved one, two, and in some special cases three orders of magnitude increased computational performance through the use of massive threading to exploit parallelism. Multi-core architectures are also evolving quickly to exploit both massive threading and massive parallelism, as in the 1.3-million-thread Blue Waters supercomputer. The challenge confronting scientists in planning future experimental and theoretical research efforts -- be they individual efforts with one computer or collaborative efforts proposing to use the largest supercomputers in the world -- is how to capitalize on these new massively threaded computational architectures, especially as not all computational problems will scale to massive parallelism. In particular, the costs associated with restructuring software (and potentially redesigning algorithms) to exploit the parallelism of these multi- and many-threaded machines must be considered along with application scalability and lifespan. This perspective is an overview of the current state of threading and parallelism, with some insight into the future. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Ohene-Kwofie, Daniel; Otoo, Ekow
2015-10-01
The ATLAS detector, operated at the Large Hadron Collider (LHC) at CERN, records proton-proton collisions every 50 ns, resulting in a sustained data flow up to PB/s. The upgraded Tile Calorimeter of the ATLAS experiment will sustain about 5 PB/s of digital throughput. These massive data rates require extremely fast data capture and processing. Although there has been a steady increase in the processing speed of CPUs/GPGPUs assembled for high-performance computing, the rate of data input and output, even under parallel I/O, has not kept up with the general increase in computing speeds. The problem, then, is whether one can implement an I/O subsystem infrastructure capable of meeting the computational speeds of advanced computing systems at the petascale and exascale level. We propose a system architecture that leverages the Partitioned Global Address Space (PGAS) model of computing to maintain an in-memory data store for the Processing Unit (PU) of the upgraded electronics of the Tile Calorimeter, which is proposed to be used as a high-throughput general-purpose co-processor to the sROD of the upgraded Tile Calorimeter. The physical memory of the PUs is aggregated into a large global logical address space using RDMA-capable interconnects such as PCI-Express to enhance data processing throughput.
Fukunishi, Yoshifumi; Mashimo, Tadaaki; Misoo, Kiyotaka; Wakabayashi, Yoshinori; Miyaki, Toshiaki; Ohta, Seiji; Nakamura, Mayu; Ikeda, Kazuyoshi
2016-01-01
Computer-aided drug design is still a state-of-the-art process in medicinal chemistry, and the main topics in this field have been extensively studied and well reviewed. These topics include compound databases, ligand-binding pocket prediction, protein-compound docking, virtual screening, target/off-target prediction, physical property prediction, molecular simulation, and pharmacokinetics/pharmacodynamics (PK/PD) prediction. However, there are also a number of secondary or miscellaneous topics that have been less well covered. For example, methods for synthesizing and predicting the synthetic accessibility (SA) of designed compounds are important in practical drug development, and hardware/software resources for performing the computations in computer-aided drug design are crucial. Cloud computing and general-purpose graphics processing unit (GPGPU) computing have been used in virtual screening and molecular dynamics simulations. Not surprisingly, there is a growing demand for computer systems that combine these resources. In the present review, we summarize and discuss these various topics of drug design.
Besozzi, Daniela; Pescini, Dario; Mauri, Giancarlo
2014-01-01
Tau-leaping is a stochastic simulation algorithm that efficiently reconstructs the temporal evolution of biological systems modeled according to the stochastic formulation of chemical kinetics. The analysis of the dynamical properties of these systems in physiological and perturbed conditions usually requires the execution of a large number of simulations, leading to high computational costs. Since each simulation can be executed independently of the others, a massive parallelization of tau-leaping can bring substantial reductions of the overall running time. The emerging field of General Purpose Graphics Processing Unit (GPGPU) computing provides power-efficient high-performance computing at a relatively low cost. In this work we introduce cuTauLeaping, a stochastic simulator of biological systems that makes use of GPGPU computing to execute multiple parallel tau-leaping simulations, fully exploiting Nvidia's Fermi GPU architecture. We show how a considerable computational speedup is achieved on the GPU by partitioning the execution of tau-leaping into multiple separated phases, and we describe how to avoid some implementation pitfalls related to the scarcity of memory resources on the GPU streaming multiprocessors. Our results show that cuTauLeaping largely outperforms the CPU-based tau-leaping implementation as the number of parallel simulations increases, with a break-even point depending directly on the size of the biological system and on the complexity of its emergent dynamics. In particular, cuTauLeaping is exploited to investigate the probability distribution of bistable states in the Schlögl model, and to carry out a two-dimensional parameter sweep analysis of the oscillatory regimes in the Ras/cAMP/PKA pathway in S. cerevisiae.
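The essential tau-leaping update, firing each reaction channel a Poisson-distributed number of times per leap, can be sketched for a toy two-species system (our example, not one of the models studied in the paper):

```python
import math
import random

def poisson(lam):
    """Poisson sample via Knuth's product-of-uniforms method
    (adequate for the modest leap means used here)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= l:
            return k - 1

def tau_leap_step(state, tau, rates):
    """One tau-leaping step for the toy reversible isomerization
    A <-> B. Each reaction channel fires a Poisson number of times
    with mean propensity * tau."""
    k_fwd = poisson(rates[0] * state["A"] * tau)   # A -> B firings
    k_bwd = poisson(rates[1] * state["B"] * tau)   # B -> A firings
    # Naive clamp against negative populations; real implementations
    # shrink tau instead of clamping.
    k_fwd = min(k_fwd, state["A"])
    k_bwd = min(k_bwd, state["B"])
    return {"A": state["A"] - k_fwd + k_bwd,
            "B": state["B"] + k_fwd - k_bwd}
```

Running many independent replicates of this loop, one trajectory per GPU thread, is exactly the parallelism a simulator like cuTauLeaping exploits.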
NASA Astrophysics Data System (ADS)
Mohammed, F.
2016-12-01
Landslide hazards such as fast-moving debris flows, slow-moving landslides, and other mass flows cause numerous fatalities, injuries, and damage. Landslide occurrences in fjords, bays, and lakes can additionally generate tsunamis with locally extremely high wave heights and runups. Two-dimensional depth-averaged models can successfully simulate the entire lifecycle of the three-dimensional landslide dynamics and tsunami propagation efficiently and accurately under the appropriate assumptions. Landslide rheology is defined using viscous fluids, visco-plastic fluids, and granular material to account for the possible landslide source materials. Saturated and unsaturated rheologies are further included to simulate debris flows, debris avalanches, mudflows, and rockslides. The models are obtained by reducing the fully three-dimensional Navier-Stokes equations, with the internal rheological definition of the landslide material and the water body, under appropriate scaling assumptions to obtain the depth-averaged two-dimensional models. The landslide and tsunami models are coupled to include the interaction between the landslide and the water body for tsunami generation. The reduced models are solved numerically with a fast semi-implicit, finite-volume, shock-capturing algorithm. The well-balanced, positivity-preserving algorithm accurately accounts for the wet-dry interface transition for the landslide runout, the landslide-water body interface, and the tsunami wave flooding on land. The models are implemented as a General-Purpose computing on Graphics Processing Units (GPGPU) based suite of models, either coupled or run independently within the suite. The GPGPU implementation provides up to a 1000-fold speedup over a CPU-based serial computation. This enables simulations of multiple scenarios of hazard realizations, providing a basis for probabilistic hazard assessment.
The models have been successfully validated against experiments, past studies, and field data for landslides and tsunamis.
Construction of the Fock Matrix on a Grid-Based Molecular Orbital Basis Using GPGPUs.
Losilla, Sergio A; Watson, Mark A; Aspuru-Guzik, Alán; Sundholm, Dage
2015-05-12
We present a GPGPU implementation of the construction of the Fock matrix in the molecular orbital basis using the fully numerical, grid-based bubbles representation. For a test set of molecules containing up to 90 electrons, the total Hartree-Fock energies obtained from reference GTO-based calculations are reproduced within 10^-4 Eh to 10^-8 Eh for most of the molecules studied. Despite the very large number of arithmetic operations involved, the high performance obtained made the calculations possible on a single Nvidia Tesla K40 GPGPU card.
Feasibility evaluation of a motion detection system with face images for stereotactic radiosurgery.
Yamakawa, Takuya; Ogawa, Koichi; Iyatomi, Hitoshi; Kunieda, Etsuo
2011-01-01
In stereotactic radiosurgery we can irradiate a targeted volume precisely with a narrow high-energy x-ray beam, so motion of the targeted area may cause side effects in normal organs. This paper describes our motion detection system with three USB cameras. To reduce the effect of changes in illuminance in the tracking area, we used an infrared light and USB cameras sensitive to infrared light. Motion detection was performed by tracking the patient's ears and nose with the three USB cameras, where pattern matching between a predefined template image for each view and the acquired images was performed with an exhaustive search method using general-purpose computing on a graphics processing unit (GPGPU). The results of the experiments showed that the measurement accuracy of our system was better than 0.7 mm, less than half the error of our previous system.
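An exhaustive-search template match of the kind described can be sketched as follows; the sum-of-squared-differences (SSD) criterion and the array layout are our illustrative assumptions, since the abstract does not specify the matching score:

```python
def match_template(image, template):
    """Exhaustive-search template matching with the sum of squared
    differences (SSD) criterion; returns the (row, col) of the best
    match. Every candidate offset is scored independently, which is
    what lets the search map naturally onto GPGPU threads
    (one thread per candidate position)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

Because the per-offset scores share no state, the GPU version needs no synchronization beyond a final reduction to find the minimum, which is why an exhaustive search becomes affordable at camera frame rates.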
Acceleration of spiking neural network based pattern recognition on NVIDIA graphics processors.
Han, Bing; Taha, Tarek M
2010-04-01
There is currently a strong push in the research community to develop biological-scale implementations of neuron-based vision models. Systems at this scale are computationally demanding and generally utilize more accurate neuron models, such as the Izhikevich and the Hodgkin-Huxley models, in place of the more popular integrate-and-fire model. We examine the feasibility of using graphics processing units (GPUs) to accelerate a spiking neural network based character recognition network to enable such large scale systems. Two versions of the network utilizing the Izhikevich and Hodgkin-Huxley models are implemented. Three NVIDIA general-purpose (GP) GPU platforms are examined, including the GeForce 9800 GX2, the Tesla C1060, and the Tesla S1070. Our results show that the GPGPUs can provide significant speedup over conventional processors. In particular, the fastest GPGPU utilized, the Tesla S1070, provided speedups of 5.6 and 84.4 over highly optimized implementations on the fastest central processing unit (CPU) tested, a quad-core 2.67 GHz Xeon processor, for the Izhikevich and the Hodgkin-Huxley models, respectively. The CPU implementation utilized all four cores and the vector data parallelism offered by the processor. The results indicate that GPUs are well suited for this application domain.
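As context for the neuron models compared above, a minimal forward-Euler sketch of the Izhikevich model (regular-spiking parameters; the step size and input current are illustrative choices, not taken from the paper):

```python
def izhikevich_step(v, u, I, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """One forward-Euler step of the Izhikevich neuron model with
    regular-spiking parameters. Returns (v, u, spiked)."""
    v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u = u + dt * a * (b * v - u)
    if v >= 30.0:          # spike: reset membrane potential and recovery
        return c, u + d, True
    return v, u, False

# Drive one neuron with a constant current for 500 ms (1000 x 0.5 ms steps)
v, u, spikes = -65.0, -13.0, 0
for _ in range(1000):
    v, u, fired = izhikevich_step(v, u, I=10.0)
    spikes += int(fired)
print(spikes > 0)  # -> True: at I = 10 the model has no resting state
```

The per-neuron update is independent of all other neurons, which is why such networks map naturally onto one-thread-per-neuron GPU execution.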
GPU acceleration of Runge Kutta-Fehlberg and its comparison with Dormand-Prince method
NASA Astrophysics Data System (ADS)
Seen, Wo Mei; Gobithaasan, R. U.; Miura, Kenjiro T.
2014-07-01
There is a significant reduction of processing time and speedup of performance in computer graphics with the emergence of Graphics Processing Units (GPUs). GPUs have been developed to surpass the Central Processing Unit (CPU) in terms of performance and processing speed. This evolution has opened up a new area of computing and research where highly parallel GPUs are used for non-graphical algorithms. Physical or phenomenal simulations and modelling can be accelerated through General Purpose Graphics Processing Unit (GPGPU) and Compute Unified Device Architecture (CUDA) implementations. These phenomena can be represented with mathematical models in the form of Ordinary Differential Equations (ODEs), which capture the rate of change of dependent variables with respect to independent variables. ODEs are numerically integrated over time in order to simulate these behaviours. The classical Runge-Kutta (RK) scheme is the common method used to numerically solve ODEs. The Runge-Kutta-Fehlberg (RKF) scheme has been specially developed to provide an estimate of the principal local truncation error at each step, known as the embedded estimate technique. This paper delves into the implementation of the RKF scheme for GPU devices and compares its results with the Dormand-Prince method. A pseudo code is developed to show the implementation in detail. Hence, practitioners will be able to understand the data allocation in the GPU, the formation of RKF kernels, and the flow of data to/from the GPU and CPU upon RKF kernel evaluation. The pseudo code is then written in the C language and two ODE models are executed to show the achievable speedup as compared to a CPU implementation. The accuracy and efficiency of the proposed implementation method are discussed in the final section of this paper.
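The embedded error estimate behind RKF can be sketched as a single step that returns both the 4th- and 5th-order solutions, whose difference is the local error estimate (a plain Python sketch using the standard Fehlberg coefficients, not the paper's GPU kernels):

```python
import math

def rkf45_step(f, t, y, h):
    """One Runge-Kutta-Fehlberg step for y' = f(t, y): returns the
    4th- and 5th-order estimates and the embedded error estimate."""
    k1 = f(t, y)
    k2 = f(t + h/4, y + h*k1/4)
    k3 = f(t + 3*h/8, y + h*(3*k1 + 9*k2)/32)
    k4 = f(t + 12*h/13, y + h*(1932*k1 - 7200*k2 + 7296*k3)/2197)
    k5 = f(t + h, y + h*(439*k1/216 - 8*k2 + 3680*k3/513 - 845*k4/4104))
    k6 = f(t + h/2, y + h*(-8*k1/27 + 2*k2 - 3544*k3/2565
                           + 1859*k4/4104 - 11*k5/40))
    y4 = y + h*(25*k1/216 + 1408*k3/2565 + 2197*k4/4104 - k5/5)
    y5 = y + h*(16*k1/135 + 6656*k3/12825 + 28561*k4/56430
                - 9*k5/50 + 2*k6/55)
    return y4, y5, abs(y5 - y4)

# Check against y' = y, y(0) = 1, whose exact solution is exp(t)
y4, y5, err = rkf45_step(lambda t, y: y, 0.0, 1.0, 0.1)
print(round(y5, 6))  # -> 1.105171, close to exp(0.1)
```

In an adaptive integrator, `err` is compared against a tolerance to accept the step and choose the next step size; on a GPU, many independent ODE systems take such steps in parallel.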
Accelerating a three-dimensional eco-hydrological cellular automaton on GPGPU with OpenCL
NASA Astrophysics Data System (ADS)
Senatore, Alfonso; D'Ambrosio, Donato; De Rango, Alessio; Rongo, Rocco; Spataro, William; Straface, Salvatore; Mendicino, Giuseppe
2016-10-01
This work presents an effective implementation of a numerical model for complete eco-hydrological Cellular Automata modeling on Graphical Processing Units (GPU) with OpenCL (Open Computing Language) for heterogeneous computation (i.e., on CPUs and/or GPUs). Different types of parallel implementations were carried out (e.g., use of fast local memory, loop unrolling, etc.), showing increasing performance improvements in terms of speedup, while also adopting some original optimization strategies. Moreover, numerical analysis of the results (i.e., comparison of CPU and GPU outcomes in terms of rounding errors) has proven to be satisfactory. Experiments were carried out on a workstation with two CPUs (Intel Xeon E5440 at 2.83 GHz), one AMD R9 280X GPU and one nVIDIA Tesla K20c GPU. Results have been extremely positive, but further testing should be performed to assess the functionality of the adopted strategies on other complete models and their ability to fruitfully exploit parallel systems resources.
Flexible, fast and accurate sequence alignment profiling on GPGPU with PaSWAS.
Warris, Sven; Yalcin, Feyruz; Jackson, Katherine J L; Nap, Jan Peter
2015-01-01
To obtain large-scale sequence alignments in a fast and flexible way is an important step in the analyses of next generation sequencing data. Applications based on the Smith-Waterman (SW) algorithm are often either not fast enough, limited to dedicated tasks or not sufficiently accurate due to statistical issues. Current SW implementations that run on graphics hardware do not report the alignment details necessary for further analysis. With the Parallel SW Alignment Software (PaSWAS) it is possible (a) to have easy access to the computational power of NVIDIA-based general purpose graphics processing units (GPGPUs) to perform high-speed sequence alignments, and (b) retrieve relevant information such as score, number of gaps and mismatches. The software reports multiple hits per alignment. The added value of the new SW implementation is demonstrated with two test cases: (1) tag recovery in next generation sequence data and (2) isotype assignment within an immunoglobulin 454 sequence data set. Both cases show the usability and versatility of the new parallel Smith-Waterman implementation.
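The core Smith-Waterman recurrence underlying PaSWAS can be sketched as a minimal scalar scoring routine (a Python sketch with linear gap penalties and illustrative scoring values; PaSWAS itself runs on NVIDIA GPGPUs and additionally reports alignment details):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment: fill the DP matrix H and return
    the best local score (cells are clamped at zero)."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # match / mismatch
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            best = max(best, H[i][j])
    return best

print(smith_waterman("GATTACA", "GATTACA"))  # -> 14 (7 matches x 2)
print(smith_waterman("AAAA", "GGGG"))        # -> 0 (no local similarity)
```

On a GPU, cells on the same anti-diagonal of H are independent (each cell depends only on its three upper-left neighbors), which is what parallel Smith-Waterman implementations exploit.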
CUDA GPU based full-Stokes finite difference modelling of glaciers
NASA Astrophysics Data System (ADS)
Brædstrup, C. F.; Egholm, D. L.
2012-04-01
Many have stressed the limitations of using the shallow shelf and shallow ice approximations when modelling ice streams or surging glaciers. Using a full-Stokes approach requires either large amounts of computer power or time and is therefore seldom an option for most glaciologists. Recent advances in graphics card (GPU) technology for high performance computing have proven extremely efficient in accelerating many large scale scientific computations. The general purpose GPU (GPGPU) technology is cheap, has a low power consumption and fits into a normal desktop computer. It could therefore provide a powerful tool for many glaciologists. Our full-Stokes ice sheet model implements a Red-Black Gauss-Seidel iterative linear solver to solve the full Stokes equations. This technique has proven very effective when applied to the Stokes equation in geodynamics problems, and should therefore also perform well in glaciological flow problems. The Gauss-Seidel iterator is known to be robust, but several other linear solvers have a much faster convergence. To aid convergence, the solver uses a multigrid approach where values are interpolated and extrapolated between different grid resolutions to minimize the short wavelength errors efficiently. This reduces the iteration count by several orders of magnitude. The run-time is further reduced by using the GPGPU technology, where each card has up to 448 cores. Researchers utilizing the GPGPU technique in other areas have reported between 2 and 11 times speedup compared to multicore CPU implementations on similar problems. The goal of these initial investigations into the possible usage of GPGPU technology in glacial modelling is to apply the enhanced resolution of a full-Stokes solver to ice streams and surging glaciers. This is an area of growing interest because ice streams are the main drainage conduits for large ice sheets. It is therefore crucial to understand this streaming behavior and its impact up-ice.
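The Red-Black Gauss-Seidel idea mentioned above, where all points of one color depend only on the other color and can therefore be updated in parallel, can be sketched for a model 2-D Poisson problem (a plain Python/NumPy sketch, not the glacier solver itself):

```python
import numpy as np

def red_black_gauss_seidel(u, f, h, sweeps=1):
    """Red-Black Gauss-Seidel sweeps for -laplacian(u) = f on a square
    grid with Dirichlet boundaries. Each half-sweep (one color) is an
    embarrassingly parallel update, which is why it suits GPUs."""
    for _ in range(sweeps):
        for color in (0, 1):                      # 0 = red, 1 = black
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    if (i + j) % 2 == color:
                        u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                          + u[i, j - 1] + u[i, j + 1]
                                          + h * h * f[i, j])
    return u

def max_residual(u, f, h):
    """Max-norm of f + laplacian(u) over interior points."""
    lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
           - 4.0 * u[1:-1, 1:-1]) / (h * h)
    return float(np.abs(f[1:-1, 1:-1] + lap).max())

n = 17
h = 1.0 / (n - 1)
u = np.zeros((n, n))            # zero Dirichlet boundary, zero guess
f = np.ones((n, n))
r0 = max_residual(u, f, h)      # = 1.0 for the zero initial guess
u = red_black_gauss_seidel(u, f, h, sweeps=100)
print(max_residual(u, f, h) < r0)  # -> True: the sweeps reduce the residual
```

In the paper's setting, this smoother is embedded in a multigrid cycle so that the smooth error modes, which plain Gauss-Seidel damps slowly, are removed on coarser grids.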
NASA Astrophysics Data System (ADS)
Yarovyi, Andrii A.; Timchenko, Leonid I.; Kozhemiako, Volodymyr P.; Kokriatskaia, Nataliya I.; Hamdi, Rami R.; Savchuk, Tamara O.; Kulyk, Oleksandr O.; Surtel, Wojciech; Amirgaliyev, Yedilkhan; Kashaganova, Gulzhan
2017-08-01
The paper deals with the problem of insufficient productivity of existing computing means for large image processing, which do not meet the modern requirements posed by resource-intensive computing tasks of laser beam profiling. The research concentrated on one of the profiling problems, namely, real-time processing of spot images of the laser beam profile. The development of a theory of parallel-hierarchic transformation made it possible to produce models for high-performance parallel-hierarchical processes, as well as algorithms and software for their implementation based on a GPU-oriented architecture using GPGPU technologies. The analyzed performance of the suggested computerized tools for processing and classification of laser beam profile images allows real-time processing of dynamic images of various sizes.
Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi
2011-11-01
Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.
Strbac, V; Pierce, D M; Vander Sloten, J; Famaey, N
2017-12-01
Finite element (FE) simulations are increasingly valuable in assessing and improving the performance of biomedical devices and procedures. Due to high computational demands such simulations may become difficult or even infeasible, especially when considering nearly incompressible and anisotropic material models prevalent in analyses of soft tissues. Implementations of GPGPU-based explicit FEs predominantly cover isotropic materials, e.g. the neo-Hookean model. To elucidate the computational expense of anisotropic materials, we implement the Gasser-Ogden-Holzapfel dispersed, fiber-reinforced model and compare solution times against the neo-Hookean model. Implementations of GPGPU-based explicit FEs conventionally rely on single-point (under) integration. To elucidate the expense of full and selective-reduced integration (more reliable) we implement both and compare corresponding solution times against those generated using underintegration. To better understand the advancement of hardware, we compare results generated using representative Nvidia GPGPUs from three recent generations: Fermi (C2075), Kepler (K20c), and Maxwell (GTX980). We explore scaling by solving the same boundary value problem (an extension-inflation test on a segment of human aorta) with progressively larger FE meshes. Our results demonstrate substantial improvements in simulation speeds relative to two benchmark FE codes (up to 300× while maintaining accuracy), and thus open many avenues to novel applications in biomechanics and medicine.
GPU COMPUTING FOR PARTICLE TRACKING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nishimura, Hiroshi; Song, Kai; Muriki, Krishna
2011-03-25
This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize an accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performance, issues, and challenges from introducing the GPU are also discussed. General purpose Computation on Graphics Processing Units (GPGPU) brings massive parallel computing capabilities to numerical calculation. However, the unique architecture of the GPU requires a comprehensive understanding of the hardware and programming model to be able to well optimize existing applications. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time consuming part of accelerator modeling and simulation, can benefit from the GPU due to its embarrassingly parallel feature, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multi-processors (MP) with 32 cores on each MP, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ [2] is developed to demonstrate the possibility of using the GPU to speed up the dynamic aperture calculation by having each thread track a particle.
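The thread/block/grid decomposition described above can be illustrated with a CPU emulation of the launch model, where each "thread" tracks one particle (a Python sketch; the trivial drift map stands in for the actual Tracy++ tracking, and all names are illustrative):

```python
def launch_kernel(kernel, grid_dim, block_dim, *args):
    """CPU emulation of the CUDA launch model: each (block, thread) pair
    gets a global index, as in blockIdx.x * blockDim.x + threadIdx.x."""
    for block in range(grid_dim):
        for thread in range(block_dim):
            kernel(block * block_dim + thread, *args)

def track_particle(tid, pos, vel, dt, n_steps, n_particles):
    """Each 'thread' advances one particle independently; a drift map
    stands in for the real accelerator tracking."""
    if tid >= n_particles:     # guard: more threads than particles
        return
    for _ in range(n_steps):
        pos[tid] += vel[tid] * dt

pos = [0.0] * 5
vel = [1.0, 2.0, 3.0, 4.0, 5.0]
# 2 blocks x 3 threads = 6 threads covering 5 particles
launch_kernel(track_particle, 2, 3, pos, vel, 0.5, 2, 5)
print(pos)  # -> [1.0, 2.0, 3.0, 4.0, 5.0]
```

The guard on `tid` mirrors the standard CUDA idiom for launches whose thread count exceeds the data size.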
Astrophysical data mining with GPU. A case study: Genetic classification of globular clusters
NASA Astrophysics Data System (ADS)
Cavuoti, S.; Garofalo, M.; Brescia, M.; Paolillo, M.; Pescape', A.; Longo, G.; Ventre, G.
2014-01-01
We present a multi-purpose genetic algorithm, designed and implemented with GPGPU/CUDA parallel computing technology. The model was derived from our CPU serial implementation, named GAME (Genetic Algorithm Model Experiment). It was successfully tested and validated on the detection of candidate Globular Clusters in deep, wide-field, single band HST images. The GPU version of GAME will be made available to the community by integrating it into the web application DAMEWARE (DAta Mining Web Application REsource, http://dame.dsf.unina.it/beta_info.html), a public data mining service specialized on massive astrophysical data. Since genetic algorithms are inherently parallel, the GPGPU computing paradigm leads to a speedup of a factor of 200× in the training phase with respect to the CPU based version.
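As the abstract notes, genetic algorithms are inherently parallel: every individual's fitness can be evaluated independently. A minimal serial GA sketch illustrates the structure (tournament selection, one-point crossover, bit-flip mutation on a toy "one-max" objective; all parameters are illustrative, not GAME's):

```python
import random

def genetic_search(fitness, n_bits=16, pop_size=30, generations=80,
                   p_mut=0.02, seed=1):
    """Minimal genetic algorithm over bit-string individuals: tournament
    selection, one-point crossover, and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)         # one-point crossover
            child = p1[:cut] + p2[cut:]
            nxt.append([1 - g if rng.random() < p_mut else g
                        for g in child])
        pop = nxt
    return max(pop, key=fitness)

# Toy objective: maximize the number of 1-bits ("one-max")
best = genetic_search(fitness=sum)
print(sum(best))
```

The fitness evaluations inside each generation are the part that a GPGPU version parallelizes, which is where the reported 200x training speedup comes from.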
Fast generation of computer-generated hologram by graphics processing unit
NASA Astrophysics Data System (ADS)
Matsuda, Sho; Fujii, Tomohiko; Yamaguchi, Takeshi; Yoshikawa, Hiroshi
2009-02-01
A cylindrical hologram is well known to be viewable in 360 deg. This hologram requires high pixel resolution, so a Computer-Generated Cylindrical Hologram (CGCH) requires a huge amount of calculation. In our previous research, we used a look-up table method for fast calculation on an Intel Pentium 4 at 2.8 GHz. It took 480 hours to calculate a high resolution CGCH (504,000 x 63,000 pixels, with an average of 27,000 object points). To improve the quality of the CGCH reconstructed image, the fringe pattern requires higher spatial frequency and resolution. Therefore, to increase the calculation speed, we have to change the calculation method. In this paper, to reduce the calculation time of the CGCH (912,000 x 108,000 pixels), we employ a Graphics Processing Unit (GPU). It took 4,406 hours to calculate this high resolution CGCH on a Xeon at 3.4 GHz. Since the GPU has many streaming processors and a parallel processing structure, it works as a high performance parallel processor. In addition, the GPU performs best on 2-dimensional data and streaming data. Recently, GPUs can be utilized for general purpose computing (GPGPU). For example, NVIDIA's GeForce 7 series became a programmable processor with the Cg programming language, and the subsequent GeForce 8 series has CUDA, a software development kit made by NVIDIA. Theoretically, the calculation ability of the GPU is announced as 500 GFLOPS. From the experimental result, we achieved calculation 47 times faster than our previous work, which used the CPU. Therefore, the CGCH can be generated in 95 hours, and the total time to calculate and print the CGCH is 110 hours.
NASA Astrophysics Data System (ADS)
Woodbury, D.; Kubota, S.; Johnson, I.
2014-10-01
Computer simulations of electromagnetic wave propagation in magnetized plasmas are an important tool for both plasma heating and diagnostics. For active millimeter-wave and microwave diagnostics, accurately modeling the evolution of the beam parameters for launched, reflected or scattered waves in a toroidal plasma requires that calculations be done using the full 3-D geometry. Previously, we reported on the application of GPGPU (General-Purpose computing on Graphics Processing Units) to a 3-D vacuum Maxwell code using the FDTD (Finite-Difference Time-Domain) method. Tests were done for Gaussian beam propagation with a hard source antenna, utilizing the parallel processing capabilities of the NVIDIA K20M. In the current study, we have modified the 3-D code to include a soft source antenna and an induced current density based on the cold plasma approximation. Results from Gaussian beam propagation in an inhomogeneous anisotropic plasma, along with comparisons to ray- and beam-tracing calculations will be presented. Additional enhancements, such as advanced coding techniques for improved speedup, will also be investigated. Supported by U.S. DoE Grant DE-FG02-99-ER54527 and in part by the U.S. DoE, Office of Science, WDTS under the Science Undergraduate Laboratory Internship program.
Development of High-speed Visualization System of Hypocenter Data Using CUDA-based GPU computing
NASA Astrophysics Data System (ADS)
Kumagai, T.; Okubo, K.; Uchida, N.; Matsuzawa, T.; Kawada, N.; Takeuchi, N.
2014-12-01
After the Great East Japan Earthquake on March 11, 2011, intelligent visualization of seismic information is becoming important to understand the earthquake phenomena. At the same time, the quantity of seismic data has become enormous with the progress of high-accuracy observation networks; we need to treat many parameters (e.g., positional information, origin time, magnitude, etc.) to efficiently display the seismic information. Therefore, high-speed processing of data and image information is necessary to handle enormous amounts of seismic data. Recently, the GPU (Graphics Processing Unit) has been used as an acceleration tool for data processing and calculation in various study fields. This movement is called GPGPU (General Purpose computing on GPUs). In the last few years the performance of GPUs has kept improving rapidly, and GPU computing gives us a high-performance computing environment at a lower cost than before. Moreover, use of the GPU has an advantage for visualization of processed data, because the GPU was originally designed as an architecture for graphics processing: in GPU computing, the processed data is always stored in the video memory. Therefore, we can directly write drawing information to the VRAM on the video card by combining CUDA and a graphics API. In this study, we employ CUDA and OpenGL and/or DirectX to realize a full-GPU implementation. This method makes it possible to write drawing information to the VRAM on the video card without PCIe bus data transfer, enabling high-speed processing of seismic data. The present study examines GPU computing-based high-speed visualization and its feasibility for a high-speed visualization system for hypocenter data.
Implementation and optimization of ultrasound signal processing algorithms on mobile GPU
NASA Astrophysics Data System (ADS)
Kong, Woo Kyu; Lee, Wooyoul; Kim, Kyu Cheol; Yoo, Yangmo; Song, Tai-Kyong
2014-03-01
A general-purpose graphics processing unit (GPGPU) has been used for improving computing power in medical ultrasound imaging systems. Recently, mobile GPUs have become powerful enough to deal with 3D games and videos at high frame rates on Full HD or HD resolution displays. This paper proposes a method to implement ultrasound signal processing on a mobile GPU available in a high-end smartphone (Galaxy S4, Samsung Electronics, Seoul, Korea) with programmable shaders on the OpenGL ES 2.0 platform. To maximize the performance of the mobile GPU, optimization of the shader design and load sharing between the vertex and fragment shaders was performed. The beamformed data were captured from a tissue-mimicking phantom (Model 539 Multipurpose Phantom, ATS Laboratories, Inc., Bridgeport, CT, USA) by using a commercial ultrasound imaging system equipped with a research package (Ultrasonix Touch, Ultrasonix, Richmond, BC, Canada). The real-time performance is evaluated by frame rates while varying the range of signal processing blocks. The implementation of ultrasound signal processing on OpenGL ES 2.0 was verified by analyzing the PSNR against a MATLAB gold standard that has the same signal path; the CNR was also analyzed to verify the method. From the evaluations, the proposed mobile GPU-based processing method shows no significant difference from the MATLAB processing (i.e., PSNR<52.51 dB). Comparable CNR results were obtained from both processing methods (i.e., 11.31). From the mobile GPU implementation, frame rates of 57.6 Hz were achieved. The total execution time was 17.4 ms, which was faster than the acquisition time (i.e., 34.4 ms). These results indicate that the mobile GPU-based processing method can support real-time ultrasound B-mode processing on the smartphone.
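The PSNR figure of merit used above to compare the mobile-GPU output against the MATLAB reference can be computed as follows (a minimal Python sketch with toy pixel data, not the paper's evaluation code):

```python
import math

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel
    sequences; higher values mean the two images are more alike."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0.0:
        return float("inf")        # identical images
    return 10.0 * math.log10(peak * peak / mse)

ref = [10, 20, 30, 40]
out = [11, 20, 29, 40]             # two pixels off by one grey level
print(round(psnr(ref, out), 2))    # -> 51.14
```

Values in the 50 dB range, as reported in the abstract, indicate that the two processing chains produce nearly indistinguishable images.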
Welch, M C; Kwan, P W; Sajeev, A S M
2014-10-01
Agent-based modelling has proven to be a promising approach for developing rich simulations of complex phenomena that provide decision support functions across a broad range of areas including the biological, social and agricultural sciences. This paper demonstrates how high performance computing technologies, namely General-Purpose Computing on Graphics Processing Units (GPGPU), and commercial Geographic Information Systems (GIS) can be applied to develop a national scale, agent-based simulation of an incursion of Old World Screwworm fly (OWS fly) into the Australian mainland. The development of this simulation model leverages the combination of massively data-parallel processing capabilities supported by NVidia's Compute Unified Device Architecture (CUDA) and the advanced spatial visualisation capabilities of GIS. These technologies have enabled the implementation of an individual-based, stochastic lifecycle and dispersal algorithm for the OWS fly invasion. The simulation model draws upon a wide range of biological data as input to stochastically determine the reproduction and survival of the OWS fly through the different stages of its lifecycle and the dispersal of gravid females. Through this model, a highly efficient computational platform has been developed for studying the effectiveness of control and mitigation strategies and their associated economic impact on livestock industries. Copyright © 2014 International Atomic Energy Agency. Published by Elsevier B.V. All rights reserved.
Uncluttered Single-Image Visualization of Vascular Structures using GPU and Integer Programming
Won, Joong-Ho; Jeon, Yongkweon; Rosenberg, Jarrett; Yoon, Sungroh; Rubin, Geoffrey D.; Napel, Sandy
2013-01-01
Direct projection of three-dimensional branching structures, such as networks of cables, blood vessels, or neurons, onto a 2D image creates the illusion of intersecting structural parts and poses challenges for understanding and communication. We present a method for visualizing such structures, and demonstrate its utility in visualizing the abdominal aorta and its branches, whose tomographic images might be obtained by computed tomography or magnetic resonance angiography, in a single two-dimensional stylistic image, without overlaps among branches. The visualization method, termed uncluttered single-image visualization (USIV), involves optimization of geometry. This paper proposes a novel optimization technique that exploits an interesting connection between the optimization problem underlying USIV and the protein structure prediction problem. Adopting the integer linear programming-based formulation for the protein structure prediction problem, we tested the proposed technique using 30 visualizations produced from five patient scans with representative anatomical variants in the abdominal aortic vessel tree. The novel technique can exploit commodity-level parallelism, enabling use of general-purpose graphics processing unit (GPGPU) technology that yields a significant speedup. Comparison of the results with the other optimization technique previously reported elsewhere suggests that, in most aspects, the quality of the visualization is comparable to that of the previous one, with a significant gain in the computation time of the algorithm. PMID:22291148
Toward GPGPU accelerated human electromechanical cardiac simulations
Vigueras, Guillermo; Roy, Ishani; Cookson, Andrew; Lee, Jack; Smith, Nicolas; Nordsletten, David
2014-01-01
In this paper, we look at the acceleration of weakly coupled electromechanics using the graphics processing unit (GPU). Specifically, we port to the GPU a number of components of Heart, a CPU-based finite element code developed for simulating multi-physics problems. On the basis of a criterion of computational cost, we implemented on the GPU the ODE and PDE solution steps for the electrophysiology problem and the Jacobian and residual evaluation for the mechanics problem. Performance of the GPU implementation is then compared with single core CPU (SC) execution as well as multi-core CPU (MC) computations with equivalent theoretical performance. Results show that for a human scale left ventricle mesh, GPU acceleration of the electrophysiology problem provided speedups of 164× compared with SC and 5.5× compared with MC for the solution of the ODE model. Speedups of up to 72× compared with SC and 2.6× compared with MC were also observed for the PDE solve. Using the same human geometry, the GPU implementation of the mechanics residual/Jacobian computation provided speedups of up to 44× compared with SC and 2.0× compared with MC. © 2013 The Authors. International Journal for Numerical Methods in Biomedical Engineering published by John Wiley & Sons, Ltd. PMID:24115492
Comparison of a 3-D GPU-Assisted Maxwell Code and Ray Tracing for Reflectometry on ITER
NASA Astrophysics Data System (ADS)
Gady, Sarah; Kubota, Shigeyuki; Johnson, Irena
2015-11-01
Electromagnetic wave propagation and scattering in magnetized plasmas are important diagnostics for high temperature plasmas. 1-D and 2-D full-wave codes are standard tools for measurements of the electron density profile and fluctuations; however, ray tracing results have shown that beam propagation in tokamak plasmas is inherently a 3-D problem. The GPU-Assisted Maxwell Code utilizes the FDTD (Finite-Difference Time-Domain) method for solving the Maxwell equations with the cold plasma approximation in a 3-D geometry. Parallel processing with GPGPU (General-Purpose computing on Graphics Processing Units) is used to accelerate the computation. Previously, we reported on initial comparisons of the code results to 1-D numerical and analytical solutions, where the size of the computational grid was limited by the on-board memory of the GPU. In the current study, this limitation is overcome by using domain decomposition and an additional GPU. As a practical application, this code is used to study the current design of the ITER Low Field Side Reflectometer (LSFR) for the Equatorial Port Plug 11 (EPP11). A detailed examination of Gaussian beam propagation in the ITER edge plasma will be presented, as well as comparisons with ray tracing. This work was made possible by funding from the Department of Energy for the Summer Undergraduate Laboratory Internship (SULI) program. This work is supported by the US DOE Contract No.DE-AC02-09CH11466 and DE-FG02-99-ER54527.
Efficient implementation of the many-body Reactive Bond Order (REBO) potential on GPU
NASA Astrophysics Data System (ADS)
Trędak, Przemysław; Rudnicki, Witold R.; Majewski, Jacek A.
2016-09-01
The second generation Reactive Bond Order (REBO) empirical potential is commonly used to accurately model a wide range of hydrocarbon materials. It is also extensible to other atom types and interactions. The REBO potential assumes a complex multi-body interaction model that is difficult to represent efficiently in the SIMD or SIMT programming models. Hence, despite its importance, no efficient GPGPU implementation had been developed for this potential. Here we present a detailed description of a highly efficient GPGPU implementation of a molecular dynamics algorithm using the REBO potential. The presented algorithm takes advantage of rarely used properties of the SIMT architecture of a modern GPU to solve difficult synchronization issues that arise in computations of multi-body potentials. Techniques developed for this problem may also be used to achieve efficient solutions of different problems. The performance of the proposed algorithm is assessed using a range of model systems and compared to a highly optimized CPU implementation (both single core and OpenMP) available in the LAMMPS package. These experiments show up to a 6x improvement in forces computation time using a single processor of the NVIDIA Tesla K80 compared to a high-end 16-core Intel Xeon processor.
NASA Technical Reports Server (NTRS)
Plante, Ianik; Cucinotta, Francis A.
2011-01-01
Radiolytic species are formed approximately 1 ps after the passage of ionizing radiation through matter. After their formation, they diffuse and chemically react with other radiolytic species and neighboring biological molecules, leading to various kinds of oxidative damage. Therefore, the simulation of radiation chemistry is of considerable importance for understanding how radiolytic species damage biological molecules [1]. The step-by-step simulation of chemical reactions is difficult, because the radiolytic species are distributed non-homogeneously in the medium. Consequently, computational approaches based on Green functions for diffusion-influenced reactions should be used [2]. Recently, Green functions for more complex types of reactions have been published [3-4]. We have developed exact random variate generators for these Green functions [5], which will allow us to use them in radiation chemistry codes. However, simulating chemistry using the Green functions is computationally very demanding, because the probabilities of reactions between each pair of particles should be evaluated at each timestep [2]. This kind of problem is well adapted for General Purpose Graphics Processing Units (GPGPU), which can handle a large number of similar calculations simultaneously. These new developments will allow us to include more complex reactions in chemistry codes and to reduce the calculation time. This code should be of importance for linking radiation track structure simulations and DNA damage models.
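The per-timestep, all-pairs evaluation that motivates the GPGPU approach can be sketched with the classic Smoluchowski result for an irreversibly reacting pair, where the probability that a pair initially separated by r reacts within time t is W(r, t) = (R/r) erfc((r - R)/sqrt(4Dt)). This is a stand-in for the more complex Green functions of the cited work; particle counts and rate parameters below are invented.

```python
import numpy as np
from math import erfc, sqrt

def reaction_probability(r, R, D, t):
    """Smoluchowski pair-reaction probability (absorbing boundary at r = R)."""
    if r <= R:
        return 1.0
    return (R / r) * erfc((r - R) / sqrt(4.0 * D * t))

rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 10.0, size=(50, 3))
R, D, dt = 0.5, 1.0, 0.01

# O(N^2) all-pairs loop: this is the work the GPU parallelizes.
reacting_pairs = []
for a in range(len(positions)):
    for b in range(a + 1, len(positions)):
        r = float(np.linalg.norm(positions[a] - positions[b]))
        if rng.random() < reaction_probability(r, R, D, dt):
            reacting_pairs.append((a, b))

n_pairs_checked = 50 * 49 // 2
```

Each pair test is independent of the others, which is why the workload maps naturally onto one GPU thread per pair.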
Real-Time Imaging System for the OpenPET
NASA Astrophysics Data System (ADS)
Tashima, Hideaki; Yoshida, Eiji; Kinouchi, Shoko; Nishikido, Fumihiko; Inadama, Naoko; Murayama, Hideo; Suga, Mikio; Haneishi, Hideaki; Yamaya, Taiga
2012-02-01
The OpenPET and its real-time imaging capability have great potential for real-time tumor tracking in medical procedures such as biopsy and radiation therapy. For the real-time imaging system, we intend to use the one-pass list-mode dynamic row-action maximum likelihood algorithm (DRAMA) and implement it using general-purpose computing on graphics processing units (GPGPU) techniques. However, it is difficult to make consistent reconstructions in real time because the amount of list-mode data acquired in PET scans may be large depending on the level of radioactivity, and the reconstruction speed depends on the amount of list-mode data. In this study, we developed a system to control the amount of data used in the reconstruction step while retaining quantitative performance. In the proposed system, the data transfer control system limits the event counts to be used in the reconstruction step according to the reconstruction speed, and the reconstructed images are properly intensified by using the ratio of the used counts to the total counts. We implemented the system on a small OpenPET prototype system and evaluated its real-time tracking ability by displaying reconstructed images in which the intensity was compensated. The intensity of the displayed images correlated properly with the original count rate, and a frame rate of 2 frames per second was achieved with an average delay time of 2.1 s.
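The count-limiting and intensity-compensation idea can be sketched as follows: if only `budget` events per frame can be reconstructed in time, reconstruct from a subset and scale the image by total/used so displayed intensity still tracks the true count rate. The `reconstruct` function here is a trivial stand-in (an event histogram), not list-mode DRAMA; all sizes are invented.

```python
import numpy as np

def reconstruct(events, shape=(8, 8)):
    # Toy "reconstruction": histogram event coordinates into an image.
    img = np.zeros(shape)
    for x, y in events:
        img[x % shape[0], y % shape[1]] += 1.0
    return img

def limited_frame(events, budget):
    used = events[:budget]                     # cap events used per frame
    img = reconstruct(used)
    scale = len(events) / max(len(used), 1)    # intensity compensation
    return img * scale

rng = np.random.default_rng(1)
events = list(zip(rng.integers(0, 8, 1000), rng.integers(0, 8, 1000)))
frame = limited_frame(events, budget=250)      # total image intensity ~ 1000
```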
RELIABILITY, AVAILABILITY, AND SERVICEABILITY FOR PETASCALE HIGH-END COMPUTING AND BEYOND
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chokchai "Box" Leangsuksun
2011-05-31
Our project is a multi-institutional research effort that adopts the interplay of RELIABILITY, AVAILABILITY, and SERVICEABILITY (RAS) aspects for solving resilience issues in high-end scientific computing in the next generation of supercomputers. Results lie in the following tracks: failure prediction in large-scale HPC; investigation of reliability issues and mitigation techniques, including in GPGPU-based HPC systems; and HPC resilience runtime and tools.
Benchmark of Client and Server-Side Catchment Delineation Approaches on Web-Based Systems
NASA Astrophysics Data System (ADS)
Demir, I.; Sermet, M. Y.; Sit, M. A.
2016-12-01
Recent advances in internet and cyberinfrastructure technologies have provided the capability to acquire large-scale spatial data from various gauges and sensor networks. The collection of environmental data has increased demand for applications which are capable of managing and processing large-scale and high-resolution data sets. Given the amount and resolution of the data sets provided, one of the challenging tasks in organizing and customizing hydrological data sets is delineation of watersheds on demand. Watershed delineation is a process for creating a boundary that represents the contributing area for a specific control point or water outlet, with the intent of characterization and analysis of portions of a study area. Although many GIS tools and software for watershed analysis are available on desktop systems, there is a need for web-based and client-side techniques for creating a dynamic and interactive environment for exploring hydrological data. In this project, we demonstrated several watershed delineation techniques on the web, implemented on the client side using JavaScript and WebGL, and on the server side using Python and C++. We also developed a client-side GPGPU (General Purpose Graphical Processing Unit) algorithm to analyze high-resolution terrain data for watershed delineation, which allows parallelization using the GPU. The web-based real-time analysis of watershed segmentation can be helpful for decision-makers and interested stakeholders while eliminating the need to install complex software packages and deal with large-scale data sets. Utilization of client-side hardware resources also eliminates the need for servers due to its crowdsourced nature. Our goal for future work is to improve other hydrologic analysis methods, such as rain flow tracking, by adapting the presented approaches.
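A minimal sketch of the delineation task on a toy DEM, assuming a D8-style rule (each cell drains to its lowest 8-neighbour; the catchment of an outlet is every cell whose flow path reaches it). Real implementations, including the GPGPU version discussed above, must additionally handle pits, flats and far larger rasters.

```python
import numpy as np

# Toy 3x3 DEM sloping toward the bottom-right corner.
dem = np.array([[5, 4, 3],
                [4, 2, 1],
                [3, 1, 0]], dtype=float)

def downstream(r, c):
    """Return the steepest-descent neighbour of (r, c), or None at a pit/outlet."""
    best, drop = None, 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr or dc) and 0 <= rr < 3 and 0 <= cc < 3:
                d = dem[r, c] - dem[rr, cc]
                if d > drop:
                    best, drop = (rr, cc), d
    return best

def catchment(outlet):
    """All cells whose flow path reaches the outlet."""
    cells = set()
    for r in range(3):
        for c in range(3):
            path = (r, c)
            while path is not None and path != outlet:
                path = downstream(*path)
            if path == outlet:
                cells.add((r, c))
    return cells

area = catchment((2, 2))   # every cell of this DEM drains to the corner
```

Tracing each cell's flow path is independent of every other cell, which is what makes the per-cell work amenable to GPU parallelization.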
Radio-Frequency Emissions from Streamer Collisions: Implications for High-Energy Processes.
NASA Astrophysics Data System (ADS)
Luque, A.
2017-12-01
The production of energetic particles in a discharge corona is possibly linked to the collision of streamers of opposite polarities [Cooray et al. (2009), Kochkin et al. (2012), Østgaard et al. (2016)]. There is also experimental evidence linking it to radio-frequency emissions in the UHF frequency range (300 MHz-3 GHz) [Montanyà et al. (2015), Petersen and Beasley (2014)]. Here we investigate these two links by modeling the radio-frequency emissions emanating from an encounter between two counter-propagating streamers. Our numerical model combines self-consistently a conservative, high-order Finite-Volume scheme for electron transport with a Finite-Difference Time-Domain (FDTD) method for electromagnetic propagation. We also include the most relevant reactions for streamer propagation: impact ionization, dissociative attachment and photo-ionization. Our implementation benefits from massive parallelization by running on a General-Purpose Graphical Processing Unit (GPGPU). With this code we found that streamer encounters emit electromagnetic waves predominantly in the UHF range, supporting the hypothesis that streamer collisions are essential precursors of high-energy processes in electric discharges. References Cooray, V., et al., J. Atm. Sol.-Terr. Phys., 71, 1890, doi:10.1016/j.jastp.2009.07.010 (2009). Kochkin, P. O., et al., J. Phys. D, 45, 425202, doi: 10.1088/0022-3727/45/42/425202 (2012). Montanyà, J., et al., J. Atm. Sol.-Terr. Phys., 136, 94, doi:10.1016/j.jastp.2015.06.009, (2015). Østgaard, N., et al., J. Geophys. Res. (Atmos.), 121, 2939, doi:10.1002/2015JD024394 (2016). Petersen, D., and W. Beasley, Atmospheric Research, 135, 314, doi:10.1016/j.atmosres.2013.02.006 (2014).
JESPP: Joint Experimentation on Scalable Parallel Processors Supercomputers
2010-03-01
…were for the relatively small market of scientific and engineering applications. Contrast this with GPUs that are designed to improve the end-user experience in mass-market arenas such as gaming. In order to get meaningful speed-up using the GPU, it was determined that the data transfer and… [fragment; associated publications listed in the report include "Effectively using a Large GPGPU-Enhanced Linux Cluster" (HPCMP UGC 2009) and "FLOPS per Watt: Heterogeneous-Computing's Approach"]
A multi-port 10GbE PCIe NIC featuring UDP offload and GPUDirect capabilities.
NASA Astrophysics Data System (ADS)
Ammendola, Roberto; Biagioni, Andrea; Frezza, Ottorino; Lamanna, Gianluca; Lo Cicero, Francesca; Lonardo, Alessandro; Martinelli, Michele; Stanislao Paolucci, Pier; Pastorelli, Elena; Pontisso, Luca; Rossetti, Davide; Simula, Francesco; Sozzi, Marco; Tosoratto, Laura; Vicini, Piero
2015-12-01
NaNet-10 is a four-port 10GbE PCIe Network Interface Card designed for low-latency real-time operations with GPU systems. To this purpose the design includes a UDP offload module, for fast and clock-cycle-deterministic handling of the transport layer protocol, plus a GPUDirect P2P/RDMA engine for low-latency communication with NVIDIA Tesla GPU devices. A dedicated module (Multi-Stream) can optionally process input UDP streams before data is delivered through PCIe DMA to their destination devices, reorganizing data from different streams so as to optimize the subsequent computation. NaNet-10 is going to be integrated into the NA62 CERN experiment in order to assess the suitability of GPGPU systems as real-time triggers; results and lessons learned while performing this activity will be reported herein.
NASA Astrophysics Data System (ADS)
Rizki, Permata Nur Miftahur; Lee, Heezin; Lee, Minsu; Oh, Sangyoon
2017-01-01
With the rapid advance of remote sensing technology, the amount of three-dimensional point-cloud data has increased extraordinarily, requiring faster processing in the construction of digital elevation models. There have been several attempts to accelerate the computation using parallel methods; however, little attention has been given to investigating different approaches for selecting the parallel programming model best suited to a given computing environment. We present our findings and insights identified by implementing three popular high-performance parallel approaches (message passing interface, MapReduce, and GPGPU) on time-demanding but accurate kriging interpolation. The performances of the approaches are compared by varying the size of the grid and input data. In our empirical experiment, we demonstrate significant acceleration by all three approaches compared to a C-implemented sequential-processing method. In addition, we also discuss the pros and cons of each method in terms of usability, complexity of infrastructure, and platform limitations to give readers a better understanding of utilizing those parallel approaches for gridding purposes.
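The per-node work that all three parallel approaches must perform can be sketched as an ordinary-kriging solve for a single grid node, assuming an exponential variogram (the variogram model, sample points and parameters below are invented for illustration). Each grid node's system is independent, which is what the MPI, MapReduce and GPGPU variants exploit.

```python
import numpy as np

def variogram(h, sill=1.0, rng_=5.0):
    """Assumed exponential variogram model."""
    return sill * (1.0 - np.exp(-h / rng_))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
target = np.array([0.5, 0.5])

# Ordinary-kriging system: variogram matrix bordered by the
# unbiasedness constraint (weights sum to one).
n = len(pts)
A = np.ones((n + 1, n + 1))
A[-1, -1] = 0.0
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
A[:n, :n] = variogram(d)
b = np.ones(n + 1)
b[:n] = variogram(np.linalg.norm(pts - target, axis=1))

w = np.linalg.solve(A, b)[:n]      # kriging weights (Lagrange mult. dropped)
estimate = float(w @ vals)         # 2.5 by symmetry of this configuration
```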
Large-scale ground motion simulation using GPGPU
NASA Astrophysics Data System (ADS)
Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.
2012-12-01
Huge computational resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of various simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumptions of the source models for future earthquakes. To overcome the problem of restricted computational resources, we introduced the use of GPGPU (General purpose computing on graphics processing units), the technique of using a GPU as an accelerator for computation that has traditionally been conducted on the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the function for GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as the preprocessor tools (parameter generation tool) and postprocessor tools (filter tool, visualization tool, and so on). The computational model is decomposed in two horizontal directions and each decomposed model is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME2.0, one of Japan's fastest supercomputers, operated by the Tokyo Institute of Technology. First, we performed a strong scaling test using a model with about 22 million grids and achieved speed-ups of 3.2 and 7.3 times using 4 and 16 GPUs, respectively. Next, we examined a weak scaling test where the model sizes (number of grids) are increased in proportion to the degree of parallelism (number of GPUs). The result showed almost perfect linearity up to a simulation with 22 billion grids using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number of cores. Finally, we applied GPU calculation to the simulation of the 2011 Tohoku-oki earthquake. The model was constructed using a slip model from inversion of strong motion data (Suzuki et al., 2012), and a geological- and geophysical-based velocity structure model comprising all the Tohoku and Kanto regions as well as the large source area, which consists of about 1.9 billion grids. The overall characteristics of observed velocity seismograms for periods longer than 8 s were successfully reproduced (Maeda et al., 2012 AGU meeting). The turnaround time for the 50 thousand-step calculation (corresponding to 416 s of seismograms) using 100 GPUs was 52 minutes, which is fairly short, especially considering that this is the performance for a realistic and complex model.
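The decomposition idea (split the grid across GPUs and exchange halos each step) can be sketched in 1-D with an explicit heat-equation update standing in for the seismic FDM kernel: two subdomains, each carrying one ghost cell from its neighbour, reproduce the undecomposed run exactly. The real code decomposes in two horizontal directions and uses discontinuous grids, which is not shown here.

```python
import numpy as np

def step(u, alpha=0.25):
    # Explicit finite-difference update on interior points (toy kernel).
    v = u.copy()
    v[1:-1] += alpha * (u[2:] - 2 * u[1:-1] + u[:-2])
    return v

n, nsteps = 64, 50
u0 = np.zeros(n)
u0[n // 2] = 1.0

# Reference: single-domain run.
ref = u0.copy()
for _ in range(nsteps):
    ref = step(ref)

# Decomposed run: each half owns n/2 cells plus one ghost cell.
left, right = u0[: n // 2 + 1].copy(), u0[n // 2 - 1 :].copy()
for _ in range(nsteps):
    left[-1], right[0] = right[1], left[-2]     # halo exchange per step
    left, right = step(left), step(right)

merged = np.concatenate([left[:-1], right[1:]])  # identical to `ref`
```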
CUDA-based high-performance computing of the S-BPF algorithm with no-waiting pipelining
NASA Astrophysics Data System (ADS)
Deng, Lin; Yan, Bin; Chang, Qingmei; Han, Yu; Zhang, Xiang; Xi, Xiaoqi; Li, Lei
2015-10-01
The backprojection-filtration (BPF) algorithm has become a good solution for local reconstruction in cone-beam computed tomography (CBCT). However, the reconstruction speed of BPF is a severe limitation for clinical applications. The selective-backprojection filtration (S-BPF) algorithm was developed to improve the parallel performance of BPF by selective backprojection. Furthermore, the general-purpose graphics processing unit (GP-GPU) is a popular tool for accelerating the reconstruction, and much work has been performed aiming at the optimization of the cone-beam back-projection. As the cone-beam back-projection process becomes faster, data transportation accounts for a much bigger proportion of the reconstruction time than before. This paper focuses on minimizing the total reconstruction time of the S-BPF algorithm by hiding the data transportation among hard disk, CPU and GPU. Based on an analysis of the S-BPF algorithm, several strategies are implemented: (1) asynchronous calls are used to overlap execution on the CPU and GPU, (2) an innovative strategy is applied to obtain the DBP image so as to hide the transport time effectively, and (3) two streams for data transportation and calculation are synchronized by cudaEvent in the inverse finite Hilbert transform on the GPU. Our main contribution is a reconstruction of the S-BPF algorithm in which the GPU calculates continuously and data transportation costs no extra time. A 512³ volume is reconstructed in less than 0.7 s on a single Tesla K20 GPU from 182 projection views with 512² pixels per projection. The time cost of our implementation is about half of that without the overlap behavior.
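The transfer/compute overlap can be sketched as a double-buffered pipeline: a producer thread stands in for host-to-device transfers while the consumer stands in for GPU kernels, so "transfer" of chunk k+1 overlaps "compute" of chunk k. In CUDA this is done with streams and cudaEvent synchronization; the chunk contents and timings below are invented.

```python
import threading
import queue
import time

def transfer(chunk_id):
    time.sleep(0.001)               # pretend PCIe copy
    return list(range(chunk_id, chunk_id + 4))

def compute(data):
    time.sleep(0.001)               # pretend GPU kernel
    return sum(data)

def pipeline(n_chunks):
    q = queue.Queue(maxsize=2)      # two buffers "in flight"
    results = []

    def producer():
        for k in range(n_chunks):
            q.put(transfer(k))      # overlaps with consumer's compute()
        q.put(None)                 # sentinel: no more chunks

    t = threading.Thread(target=producer)
    t.start()
    while (data := q.get()) is not None:
        results.append(compute(data))
    t.join()
    return results

out = pipeline(8)                   # chunk k sums to 4*k + 6
```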
NASA Astrophysics Data System (ADS)
Timchenko, Leonid; Yarovyi, Andrii; Kokriatskaya, Nataliya; Nakonechna, Svitlana; Abramenko, Ludmila; Ławicki, Tomasz; Popiel, Piotr; Yesmakhanova, Laura
2016-09-01
The paper presents a method of parallel-hierarchical transformations for rapid recognition of dynamic images using GPU technology. The direct parallel-hierarchical transformations are based on a cluster hardware platform oriented to both CPUs and GPUs. Mathematical models of training the parallel-hierarchical (PH) network for the transformation are developed, as well as a training method of the PH network for recognition of dynamic images. This research is most topical for problems of organizing high-performance computations on very large arrays of information designed to implement multi-stage sensing and processing, as well as compaction and recognition of data in informational structures and computer devices. The method has such advantages as high performance through the use of recent advances in parallelization, the ability to work with images of very high dimension, ease of scaling when the number of nodes in the cluster changes, and automatic scanning of the local network to detect compute nodes.
On-line range images registration with GPGPU
NASA Astrophysics Data System (ADS)
Będkowski, J.; Naruniec, J.
2013-03-01
This paper concerns the implementation of algorithms for two important aspects of modern 3D data processing: data registration and segmentation. The solution proposed for the first topic is based on 3D space decomposition, while the latter is based on image processing and local neighbourhood search. Data processing is implemented using NVIDIA compute unified device architecture (NVIDIA CUDA) parallel computation. The result of the segmentation is a coloured map where different colours correspond to different objects, such as walls, floor and stairs. The research is related to the problem of collecting 3D data with an RGB-D camera mounted on a rotated head, to be used in mobile robot applications. The data registration algorithm is aimed at on-line processing. The iterative closest point (ICP) approach is chosen as the registration method. Computations are based on parallel fast nearest neighbour search. This procedure decomposes 3D space into cubic buckets and, therefore, the matching time is deterministic. The first data segmentation technique uses accelerometers integrated with the RGB-D sensor to obtain rotation compensation, and an image processing method for defining prerequisites of the known categories. The second technique uses the adapted nearest neighbour search procedure to obtain normal vectors for each range point.
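The cubic-bucket decomposition for deterministic-time nearest-neighbour queries can be sketched as follows: hash points into a uniform grid, then search only the query's bucket and its 26 adjacent buckets. Cell size and point counts are invented; the GPU version would run one such query per thread.

```python
import numpy as np

CELL = 1.0

def bucket_of(p):
    return tuple(np.floor(np.asarray(p) / CELL).astype(int))

def build_grid(points):
    grid = {}
    for idx, p in enumerate(points):
        grid.setdefault(bucket_of(p), []).append(idx)
    return grid

def nearest(grid, points, q):
    """Nearest neighbour of q, searching only the 27 surrounding buckets."""
    bq = np.array(bucket_of(q))
    best, best_d = -1, np.inf
    for off in np.ndindex(3, 3, 3):
        cell = tuple(bq + np.array(off) - 1)
        for idx in grid.get(cell, []):
            d = float(np.linalg.norm(points[idx] - q))
            if d < best_d:
                best, best_d = idx, d
    return best, best_d

rng = np.random.default_rng(2)
pts = rng.uniform(0, 5, size=(500, 3))
grid = build_grid(pts)
q = np.array([2.5, 2.5, 2.5])
idx, dist = nearest(grid, pts, q)
brute = int(np.argmin(np.linalg.norm(pts - q, axis=1)))   # exhaustive check
```

Because every query visits a bounded number of buckets, the per-query cost is deterministic, which is the property the paper exploits for on-line ICP matching.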
Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang
2012-02-01
A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 × 512 pixels from a CT dataset of 53 MB at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches, namely so-called wobbled splatting, to sub-sampling of the DRR image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were utilized throughout. Rendering quality and performance, as well as the influence on the quality and performance of the overall registration process, were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. Copyright © 2011. Published by Elsevier GmbH.
Acceleration of discrete stochastic biochemical simulation using GPGPU.
Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira
2015-01-01
For systems made up of a small number of molecules, such as a biochemical network in a single cell, simulation requires a stochastic approach instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using the SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU, with hybrid parallelization, is about 16 times faster than a sequential simulation on a CPU: each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also improved the memory access and reduced the memory footprint in order to optimize the computations on the GPU, and implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
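The "many realizations at once" structure can be sketched by vectorizing SSA realizations over a NumPy axis (standing in for one realization per GPU thread), here for the simplest possible model, an irreversible decay A → ∅ with rate k. The ensemble mean at the time horizon should approach the analytic n0·exp(-k·t); model and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_real, n0, k, t_end = 2000, 100, 1.0, 1.0

counts = np.full(n_real, n0)
t = np.zeros(n_real)
active = counts > 0
while active.any():
    a0 = k * counts[active].astype(float)      # total propensity per realization
    tau = rng.exponential(1.0 / a0)            # Gillespie waiting times
    t_new = t[active] + tau
    fire = t_new <= t_end                      # reactions inside the horizon
    idx = np.flatnonzero(active)
    t[idx[fire]] = t_new[fire]
    counts[idx[fire]] -= 1                     # one A molecule decays
    active[idx[~fire]] = False                 # realization reached t_end
    active[counts == 0] = False                # realization ran out of A

mean_final = counts.mean()
expected = n0 * np.exp(-k * t_end)             # analytic ensemble mean
```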
Monitoring tumor motion by real time 2D/3D registration during radiotherapy.
Gendrin, Christelle; Furtado, Hugo; Weber, Christoph; Bloch, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Bergmann, Helmar; Stock, Markus; Fichtinger, Gabor; Georg, Dietmar; Birkfellner, Wolfgang
2012-02-01
In this paper, we investigate the possibility of using X-ray-based real-time 2D/3D registration for non-invasive tumor motion monitoring during radiotherapy. The 2D/3D registration scheme is implemented using general-purpose computation on graphics hardware (GPGPU) programming techniques and several algorithmic refinements in the registration process. Validation is conducted off-line using a phantom and five clinical patient data sets. The registration is performed on a region of interest (ROI) centered around the planned target volume (PTV). The phantom motion is measured with an rms error of 2.56 mm. For the patient data sets, a sinusoidal movement that clearly correlates with the breathing cycle is shown. Videos show a good match between X-ray and digitally reconstructed radiograph (DRR) displacements. Mean registration time is 0.5 s. We have demonstrated that real-time organ motion monitoring using image-based markerless registration is feasible. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kroll, Christine; von der Werth, Monika; Leuck, Holger; Stahl, Christoph; Schertler, Klaus
2017-05-01
For Intelligence, Surveillance, Reconnaissance (ISR) missions of manned and unmanned air systems, typical electro-optical payloads provide high-definition video data which has to be exploited with respect to relevant ground targets in real time by automatic/assisted target recognition software. Airbus Defence and Space has been developing the required technologies for real-time sensor exploitation for years and has combined the latest advances in Deep Convolutional Neural Networks (CNN) with a proprietary high-speed Support Vector Machine (SVM) learning method into a powerful object recognition system, with impressive results on relevant high-definition video scenes compared to conventional target recognition approaches. This paper describes the principal requirements for real-time target recognition in high-definition video for ISR missions and the Airbus approach of combining invariant feature extraction using pre-trained CNNs with the high-speed training and classification ability of a novel frequency-domain SVM training method. The frequency-domain approach allows for a highly optimized implementation for General Purpose Computation on a Graphics Processing Unit (GPGPU) and also an efficient training on large training samples. The selected CNN, which is pre-trained only once on domain-extrinsic data, yields a highly invariant feature extraction. This allows for significantly reduced adaptation and training of the target recognition method for new target classes and mission scenarios. A comprehensive training and test dataset was defined and prepared using relevant high-definition airborne video sequences. The assessment concept is explained and performance results are given using the established precision-recall diagrams, average precision and runtime figures on representative test data. 
A comparison to legacy target recognition approaches shows the impressive performance increase by the proposed CNN+SVM machine-learning approach and the capability of real-time high-definition video exploitation.
CUDA-based real time surgery simulation.
Liu, Youquan; De, Suvranu
2008-01-01
In this paper we present a general software platform that enables real-time surgery simulation on the newly available compute unified device architecture (CUDA) from NVIDIA. CUDA-enabled GPUs harness the power of 128 processors which allow data-parallel computations. Compared to previous GPGPU approaches, CUDA is significantly more flexible, with a C language interface. We report implementations of both collision detection and consequent deformation computation algorithms. Our test results indicate that CUDA enables a twenty-fold speedup for collision detection and about a fifteen-fold speedup for deformation computation on an Intel Core 2 Quad 2.66 GHz machine with a GeForce 8800 GTX.
Kalman Filter Tracking on Parallel Architectures
NASA Astrophysics Data System (ADS)
Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava; Lantz, Steven; Lefebvre, Matthieu; McDermott, Kevin; Riley, Daniel; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi
2016-11-01
Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors such as GPGPU, ARM and Intel MIC. In order to achieve the theoretical performance gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High-Luminosity Large Hadron Collider (HL-LHC), for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on a Kalman filter approach. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are in use today at the LHC. Given the utility of the Kalman filter in track finding, we have begun to port these algorithms to parallel architectures, namely Intel Xeon and Xeon Phi. We report here on our progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a simplified experimental environment.
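The data-parallel structure exploited when porting Kalman-filter tracking to vector units can be sketched as the same predict/update step applied to many track candidates at once, here for a toy 1-D constant-velocity model vectorized over tracks in NumPy (the actual work targets full detector geometries on Xeon and Xeon Phi; all model parameters below are invented).

```python
import numpy as np

n_tracks, n_steps, dt = 256, 40, 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # we measure position only
Q = 1e-4 * np.eye(2)                    # process noise
R = np.array([[0.25]])                  # measurement noise (std 0.5)

rng = np.random.default_rng(4)
true_vel = rng.uniform(0.5, 1.5, n_tracks)
x = np.zeros((n_tracks, 2))             # per-track state estimates
P = np.tile(np.eye(2), (n_tracks, 1, 1))

for step in range(1, n_steps + 1):
    z = true_vel * step * dt + rng.normal(0.0, 0.5, n_tracks)  # measurements
    # Predict: identical matrices for every track -> fully vectorizable.
    x = x @ F.T
    P = F @ P @ F.T + Q
    # Update: innovation covariance, gain, and correction per track.
    S = (H @ P @ H.T + R)[:, 0, 0]
    K = (P @ H.T)[:, :, 0] / S[:, None]
    resid = z - x[:, 0]
    x = x + K * resid[:, None]
    P = P - K[:, :, None] * (H @ P)

vel_err = float(np.abs(x[:, 1] - true_vel).mean())
```

Every track runs the same arithmetic with no cross-track dependencies, which is why the step maps well onto wide vector units and many-core processors.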
GeantV: from CPU to accelerators
NASA Astrophysics Data System (ADS)
Amadio, G.; Ananya, A.; Apostolakis, J.; Arora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Sehgal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.
2016-10-01
The GeantV project aims to research and develop the next-generation simulation software describing the passage of particles through matter. While modern CPU architectures are being targeted first, resources such as GPGPUs, Intel® Xeon Phi, Atom or ARM cannot be ignored anymore by HEP CPU-bound applications. The proof-of-concept GeantV prototype has been engineered mainly for CPUs with vector units, but we have foreseen from the early stages a bridge to arbitrary accelerators. A software layer consisting of architecture- and technology-specific backends currently supports this concept. This approach allows us to abstract out basic types such as scalar/vector, but also to formalize generic computation kernels that transparently use library- or device-specific constructs based on Vc, CUDA, Cilk+ or Intel intrinsics. While the main goal of this approach is portable performance, it comes with the bonus of insulating the core application and algorithms from the technology layer. This keeps our application maintainable in the long term and versatile to changes on the backend side. The paper presents the first results of basket-based GeantV geometry navigation on the Intel® Xeon Phi KNC architecture. We present the scalability and vectorization study, conducted using Intel performance tools, as well as our preliminary conclusions on the use of accelerators for GeantV transport. We also describe the current work and preliminary results for using the GeantV transport kernel on GPUs.
NASA Astrophysics Data System (ADS)
Abramov, G. V.; Gavrilov, A. N.
2018-03-01
The article deals with the numerical solution of a mathematical model of particle motion and interaction in multicomponent plasma, using the electric arc synthesis of carbon nanostructures as an example. The large number of particles and of their interactions requires significant machine resources and computation time. Applying the large-particle (macro-particle) method makes it possible to reduce the amount of computation and the hardware requirements without affecting the accuracy of the numerical calculations. GPGPU parallel computing with Nvidia CUDA allows all general-purpose computation to be organized on the graphics card's processor. A comparative analysis of different parallelization approaches was carried out to speed up the calculations, leading to the choice of an algorithm that uses shared memory while preserving the accuracy of the solution. A numerical study of the influence of the particle density within a macro particle on the motion parameters and the total number of particle collisions in the plasma has been carried out for different synthesis modes. A rational range for the coherence coefficient of particles in a macro particle is computed.
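The macro-particle idea can be sketched in a few lines (a hypothetical illustration; the coherence coefficient k and physical constants are not taken from the article): k physical particles are lumped into one computational particle whose charge and mass are both scaled by k, so the charge-to-mass ratio, and hence the trajectory in a given field, is unchanged, while the particle count (and the pairwise-interaction cost) drops by a factor of k.

```python
def macro_particles(n_physical, charge, mass, k):
    """Return (count, macro_charge, macro_mass) for coherence coefficient k."""
    return n_physical // k, charge * k, mass * k

# One million electrons represented by 1000 macro particles (toy numbers).
n, q, m = 1_000_000, 1.6e-19, 9.1e-31
n_macro, q_macro, m_macro = macro_particles(n, q, m, k=1000)
print(n_macro)  # 1000 computational particles instead of a million
# Acceleration q*E/m in a given field E is identical for the macro particle:
assert abs(q_macro / m_macro - q / m) < 1e-6 * (q / m)
```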
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plautz, Tia E.; Johnson, R. P.; Sadrozinski, H. F.-W.
Purpose: To characterize the modulation transfer function (MTF) of the pre-clinical (phase II) head scanner developed for proton computed tomography (pCT) by the pCT collaboration. To evaluate the spatial resolution achievable by this system. Methods: Our phase II proton CT scanner prototype consists of two silicon telescopes that track individual protons upstream and downstream from a phantom, and a 5-stage scintillation detector that measures a combination of the residual energy and range of the proton. Residual energy is converted to water equivalent path length (WEPL) of the protons in the scanned object. The set of WEPL values and associated paths of protons passing through the object over a 360° angular scan is processed by an iterative parallelizable reconstruction algorithm that runs on GP-GPU hardware. A custom edge phantom composed of water-equivalent polymer and tissue-equivalent material inserts was constructed. The phantom was first simulated in Geant4 and then built to perform experimental beam tests with 200 MeV protons at the Northwestern Medicine Chicago Proton Center. The oversampling method was used to construct radial and azimuthal edge spread functions and modulation transfer functions. The spatial resolution was defined by the 10% point of the modulation transfer function in units of lp/cm. Results: The spatial resolution of the image was found to be strongly correlated with the radial position of the insert but independent of the relative stopping power of the insert. The spatial resolution varies between roughly 4 and 6 lp/cm in both the radial and azimuthal directions depending on the radial displacement of the edge. Conclusion: The amount of image degradation due to our detector system is small compared with the effects of multiple Coulomb scattering, pixelation of the image and the reconstruction algorithm. Improvements in reconstruction will be made in order to achieve the theoretical limits of spatial resolution.
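The oversampling workflow described above can be sketched end to end: sample an edge-spread function (ESF), differentiate it to a line-spread function (LSF), Fourier transform to the MTF, and read off the 10% point. The Gaussian edge blur (sigma) and sampling pitch below are assumed values, not the scanner's data.

```python
import numpy as np
from math import erf

dx = 0.01                                   # cm per oversampled bin (assumed)
x = np.arange(-2, 2, dx)
sigma = 0.05                                # cm, assumed edge blur
# ESF of an ideal edge blurred by a Gaussian of width sigma:
esf = np.array([0.5 * (1 + erf(xi / (sigma * 2 ** 0.5))) for xi in x])

lsf = np.gradient(esf, dx)                  # LSF = d(ESF)/dx
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                               # normalize so that MTF(0) = 1
freqs = np.fft.rfftfreq(len(lsf), d=dx)     # cycles/cm, i.e. lp/cm

f10 = freqs[np.argmax(mtf < 0.1)]           # first frequency below 10%
print(f10, "lp/cm")
```

For a Gaussian blur the MTF is exp(-2 π² σ² f²), so the recovered 10% point can be checked analytically against the chosen sigma.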
Genetically improved BarraCUDA.
Langdon, W B; Lam, Brian Yee Hong
2017-01-01
BarraCUDA is an open source C program which uses the BWA algorithm in parallel with nVidia CUDA to align short next generation DNA sequences against a reference genome. Recently its source code was optimised using "Genetic Improvement". The genetically improved (GI) code is up to three times faster on short paired end reads from The 1000 Genomes Project and 60% more accurate on a short BioPlanet.com GCAT alignment benchmark. GPGPU BarraCUDA running on a single K80 Tesla GPU can align short paired end nextGen sequences up to ten times faster than bwa on a 12 core server. The speed up was such that the GI version was adopted and has been regularly downloaded from SourceForge for more than 12 months.
Coded-aperture Compton camera for gamma-ray imaging
NASA Astrophysics Data System (ADS)
Farber, Aaron M.
This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh have been presented. A discussion of the limitations and potential areas of further study is also presented.
Optimizing Cone Beam Computed Tomography (CBCT) System for Image Guided Radiation Therapy
NASA Astrophysics Data System (ADS)
Park, Chun Joo
The cone beam computed tomography (CBCT) system is the most widely used imaging device in image-guided radiation therapy (IGRT), where a 3D volumetric image of the patient can be reconstructed to identify and correct position setup errors prior to the radiation treatment. A CBCT system can significantly improve the precision of on-line patient position setup and tumor target localization prior to treatment. However, a number of issues with CBCT systems still need to be investigated, such as 1) progressively increasing defective pixels in imaging detectors due to frequent usage, 2) hazardous radiation exposure to patients during CBCT imaging, 3) degradation of image quality due to patients' respiratory motion during acquisition, and 4) poor visibility of certain anatomical features, such as the liver, due to a lack of soft-tissue contrast, which makes tumor motion verification challenging. In this dissertation, we explore optimizing the use of the CBCT system under such circumstances. We begin by introducing the general concept of IGRT. We then present the development of an automated defective-pixel detection algorithm, based on wavelet analysis, for the X-ray imagers used in CBCT. We next investigate fast and efficient low-dose volumetric reconstruction techniques, including 1) fast digital tomosynthesis reconstruction using general-purpose graphics processing unit (GPGPU) programming and 2) fast low-dose CBCT image reconstruction based on the Gradient-Projection-Barzilai-Borwein formulation (GP-BB). We further developed two efficient approaches that reduce the degradation of CBCT images from respiratory motion. First, we propose reconstructing four-dimensional (4D) CBCT and DTS using a respiratory signal extracted from fiducial markers implanted in the liver.
Second, a novel motion-map constrained image reconstruction (MCIR) method is proposed that allows reconstruction of high-quality, high-phase-resolution 4DCBCT images with no more than the imaging dose used in a standard free-breathing 3DCBCT (FB-3DCBCT) scan. Finally, we demonstrate a method to analyze motion characteristics of the liver that are particularly important for image-guided stereotactic body radiation therapy (IG-SBRT). It is anticipated that all the approaches proposed in this study, which are both technically and clinically feasible, will allow much improvement in the IGRT process.
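The gradient-projection-with-Barzilai-Borwein idea named above can be illustrated on a toy nonnegative least-squares problem, min ‖Ax − b‖² subject to x ≥ 0: take a gradient step, project onto the constraint set, and choose the step length from the BB formula. The problem data and safeguards below are illustrative, not the dissertation's code.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, 2.0, 0.0, 0.5])
b = A @ x_true                          # consistent data, so x_true is optimal

x = np.zeros(5)
g = A.T @ (A @ x - b)                   # gradient of 0.5 * ||Ax - b||^2
step = 1e-3
for _ in range(500):
    x_new = np.maximum(x - step * g, 0.0)        # gradient step, then project
    g_new = A.T @ (A @ x_new - b)
    s, y = x_new - x, g_new - g
    if s @ y > 0:
        step = min(max((s @ s) / (s @ y), 1e-4), 1e4)  # safeguarded BB1 step
    x, g = x_new, g_new

print(np.round(x, 3))
```

The BB step adapts to the local curvature without a line search, which is one reason the formulation maps well onto fast GPU reconstruction.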
Multi-GPU accelerated three-dimensional FDTD method for electromagnetic simulation.
Nagaoka, Tomoaki; Watanabe, Soichi
2011-01-01
Numerical simulation with a numerical human model using the finite-difference time-domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and realize large-scale computing with the numerical human model, we adapted three-dimensional FDTD code to a multi-GPU environment using the Compute Unified Device Architecture (CUDA). In this study, we used NVIDIA Tesla C2070 cards as GPGPU boards. The performance of multiple GPUs is evaluated in comparison with that of a single GPU and a vector supercomputer. The calculation speed with four GPUs was approximately 3.5 times faster than with a single GPU, and was slightly (approx. 1.3 times) slower than with the supercomputer. The calculation speed of the three-dimensional FDTD method using GPUs improves significantly as the number of GPUs increases.
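The FDTD kernel being offloaded is a leapfrog stencil: electric and magnetic fields live on staggered grids and are updated alternately from each other's spatial differences. A one-dimensional toy version (grid size, Courant number and source are arbitrary illustrative values, not the authors' setup):

```python
import numpy as np

nz, nt = 200, 400
ez = np.zeros(nz)        # electric field samples
hy = np.zeros(nz - 1)    # magnetic field, staggered half a cell
c = 0.5                  # Courant number (must be <= 1 for stability)

for t in range(nt):
    hy += c * np.diff(ez)                 # update H from the curl of E
    ez[1:-1] += c * np.diff(hy)           # update E from the curl of H
    ez[nz // 2] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source

print(round(float(np.max(np.abs(ez))), 3))  # field stays bounded (stable)
```

Because every cell's update reads only its immediate neighbors, the 3-D version parallelizes naturally across GPU threads and across multiple GPUs by domain decomposition.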
Higher-order ice-sheet modelling accelerated by multigrid on graphics cards
NASA Astrophysics Data System (ADS)
Brædstrup, Christian; Egholm, David
2013-04-01
Higher-order ice flow modelling is a very computationally intensive process, owing primarily to the nonlinear influence of the horizontal stress coupling. When applied to simulating long-term glacial landscape evolution, ice-sheet models must consider very long time series, while both high temporal and spatial resolution is needed to resolve small effects. Higher-order and full-Stokes models have therefore seen very limited use in this field. However, recent advances in graphics card (GPU) technology for high-performance computing have proven extremely efficient in accelerating many large-scale scientific computations. General-purpose GPU (GPGPU) technology is cheap, has a low power consumption and fits into a normal desktop computer. It could therefore provide a powerful tool for many glaciologists working on ice flow models. Our current research focuses on utilising the GPU as a tool in ice-sheet and glacier modelling. To this end we have implemented the Integrated Second-Order Shallow Ice Approximation (iSOSIA) equations on the device using the finite difference method. To accelerate the computations, the GPU solver uses a non-linear Red-Black Gauss-Seidel iterator coupled with a Full Approximation Scheme (FAS) multigrid setup to further aid convergence. The GPU finite difference implementation provides the inherent parallelization that scales from hundreds to several thousands of cores on newer cards. We demonstrate the efficiency of the GPU multigrid solver using benchmark experiments.
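The red-black Gauss-Seidel smoother named above can be sketched on a 2-D Poisson problem: grid cells are colored like a checkerboard, so every cell of one color depends only on cells of the other color and can be updated simultaneously. That independence is exactly what makes the sweep data-parallel on a GPU. Problem size and right-hand side below are toy values.

```python
import numpy as np

n = 33
u = np.zeros((n, n))                 # solution, zero Dirichlet boundary
f = np.ones((n, n))                  # right-hand side of -laplace(u) = f
h2 = (1.0 / (n - 1)) ** 2

ii, jj = np.meshgrid(np.arange(1, n - 1), np.arange(1, n - 1), indexing="ij")
for sweep in range(2000):
    for color in (0, 1):             # red cells first, then black cells
        m = ((ii + jj) % 2) == color
        i, j = ii[m], jj[m]
        # All same-color cells update at once; neighbors are the other color.
        u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                          + u[i, j - 1] + u[i, j + 1] + h2 * f[i, j])

# The discrete residual of -laplace(u) = f should now be tiny.
res = (4 * u[1:-1, 1:-1] - u[:-2, 1:-1] - u[2:, 1:-1]
       - u[1:-1, :-2] - u[1:-1, 2:]) / h2 - f[1:-1, 1:-1]
print(float(np.max(np.abs(res))))
```

In the paper's solver this smoother sits inside an FAS multigrid cycle, which removes the slow convergence of plain Gauss-Seidel on fine grids.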
GPU-accelerated algorithms for many-particle continuous-time quantum walks
NASA Astrophysics Data System (ADS)
Piccinini, Enrico; Benedetti, Claudia; Siloi, Ilaria; Paris, Matteo G. A.; Bordone, Paolo
2017-06-01
Many-particle continuous-time quantum walks (CTQWs) represent a resource for several tasks in quantum technology, including quantum search algorithms and universal quantum computation. In order to design and implement CTQWs in a realistic scenario, one needs effective simulation tools for Hamiltonians that take into account static noise and fluctuations in the lattice, i.e. Hamiltonians containing stochastic terms. To this aim, we suggest a parallel algorithm based on the Taylor-series expansion of the evolution operator, and compare its performance with that of algorithms based on the exact diagonalization of the Hamiltonian or a fourth-order Runge-Kutta integration. We prove that both the Taylor-series expansion and Runge-Kutta algorithms are reliable and have a low computational cost, the Taylor-series expansion showing the additional advantage of a memory allocation that does not depend on the precision of calculation. Both algorithms are also highly parallelizable within the SIMT paradigm, and are thus suitable for GPGPU computing. We have benchmarked 4 NVIDIA GPUs and 3 quad-core Intel CPUs for a 2-particle system over lattices of increasing dimension, showing that the speedup provided by GPU computing, with respect to the OpenMP parallelization, lies in the range between 8x and (more than) 20x, depending on the frequency of post-processing. GPU-accelerated codes thus allow one to overcome concerns about execution time, and make simulations with many interacting particles on large lattices possible, limited only by the memory available on the device.
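The Taylor-series propagation the authors describe can be shown in miniature: |ψ(t+dt)⟩ = exp(−iH dt)|ψ⟩ is approximated by summing Taylor terms, each of which is just a matrix-vector product, the GPU-friendly primitive. The two-site hopping Hamiltonian and truncation order below are illustrative choices; for this H the exact answer is known in closed form, so the truncation error can be checked.

```python
import numpy as np

H = np.array([[0.0, -1.0], [-1.0, 0.0]])    # single hop between two sites
psi = np.array([1.0, 0.0], dtype=complex)   # particle starts on site 0
dt, order = 0.1, 10

def taylor_step(H, psi, dt, order):
    """Apply exp(-i H dt) to psi via a truncated Taylor series."""
    term, out = psi.copy(), psi.copy()
    for k in range(1, order + 1):
        term = (-1j * dt / k) * (H @ term)  # next Taylor term, one matvec
        out = out + term
    return out

for _ in range(20):                         # evolve to total time t = 2.0
    psi = taylor_step(H, psi, dt, order)

# For this H, |<0|psi(t)>|^2 = cos(t)^2 exactly.
print(abs(psi[0]) ** 2)
```

As the abstract notes, the memory footprint is fixed by the state vector and one scratch term, independent of the requested precision (only the truncation order changes).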
NASA Astrophysics Data System (ADS)
Loring, B.; Karimabadi, H.; Rortershteyn, V.
2015-10-01
The surface line integral convolution (LIC) visualization technique produces a dense visualization of vector fields on arbitrary surfaces. We present a screen-space surface LIC algorithm for use in distributed-memory, data-parallel, sort-last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.
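LIC itself is simple to state: convolve a white-noise texture along the streamlines of the vector field, so that pixels along the same streamline become correlated and the flow shows up as streaks. In the simplest possible setting, a uniform horizontal field, streamlines are image rows and LIC reduces to averaging noise along each row (a pedagogical sketch, not the paper's screen-space algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
noise = rng.random((32, 64))              # input white-noise texture
L = 9                                     # convolution kernel length

lic = np.empty_like(noise)
for r in range(noise.shape[0]):
    # Streamlines of a uniform horizontal field are the rows themselves,
    # so "integrate along the streamline" is a 1-D convolution per row.
    row = np.pad(noise[r], L // 2, mode="wrap")
    lic[r] = np.convolve(row, np.ones(L) / L, mode="valid")

# Averaging along the flow reduces contrast, producing flow-aligned streaks.
print(noise.var() > lic.var())  # True
```

The general algorithm traces curved streamlines per pixel; the screen-space formulation in the paper does this after projection, which is what lets it run inside a sort-last parallel renderer.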
Johnson, Robert P.; Bashkirov, Vladimir; DeWitt, Langley; Giacometti, Valentina; Hurley, Robert F.; Piersimoni, Pierluigi; Plautz, Tia E.; Sadrozinski, Hartmut F.-W.; Schubert, Keith; Schulte, Reinhard; Schultze, Blake; Zatserklyaniy, Andriy
2016-01-01
We report on the design, fabrication, and first tests of a tomographic scanner developed for proton computed tomography (pCT) of head-sized objects. After extensive preclinical testing, pCT is intended to be employed in support of proton therapy treatment planning and pre-treatment verification in patients undergoing particle-beam therapy. The scanner consists of two silicon-strip telescopes that track individual protons before and after the phantom, and a novel multistage scintillation detector that measures a combination of the residual energy and range of the proton, from which we derive the water equivalent path length (WEPL) of the protons in the scanned object. The set of WEPL values and the associated paths of protons passing through the object over a 360° angular scan are processed by an iterative, parallelizable reconstruction algorithm that runs on modern GP-GPU hardware. In order to assess the performance of the scanner, we have performed tests with 200 MeV protons from the synchrotron of the Loma Linda University Medical Center and the IBA cyclotron of the Northwestern Medicine Chicago Proton Center. Our first objective was calibration of the instrument, including tracker channel maps and alignment as well as the WEPL calibration. Then we performed the first CT scans on a series of phantoms. The very high sustained rate of data acquisition, exceeding one million protons per second, allowed a full 360° scan to be completed in less than 10 minutes, and reconstruction of a CATPHAN 404 phantom verified accurate reconstruction of the proton relative stopping power in a variety of materials. PMID:27127307
NASA Astrophysics Data System (ADS)
Mills, R. T.; Rupp, K.; Smith, B. F.; Brown, J.; Knepley, M.; Zhang, H.; Adams, M.; Hammond, G. E.
2017-12-01
As the high-performance computing community pushes towards the exascale horizon, power and heat considerations have driven the increasing importance and prevalence of fine-grained parallelism in new computer architectures. High-performance computing centers have become increasingly reliant on GPGPU accelerators and "manycore" processors such as the Intel Xeon Phi line, and 512-bit SIMD registers have even been introduced in the latest generation of Intel's mainstream Xeon server processors. The high degree of fine-grained parallelism and more complicated memory hierarchy considerations of such "manycore" processors present several challenges to existing scientific software. Here, we consider how the massively parallel, open-source hydrologic flow and reactive transport code PFLOTRAN - and the underlying Portable, Extensible Toolkit for Scientific Computation (PETSc) library on which it is built - can best take advantage of such architectures. We will discuss some key features of these novel architectures and our code optimizations and algorithmic developments targeted at them, and present experiences drawn from working with a wide range of PFLOTRAN benchmark problems on these architectures.
Restoring canonical partition functions from imaginary chemical potential
NASA Astrophysics Data System (ADS)
Bornyakov, V. G.; Boyda, D.; Goy, V.; Molochkov, A.; Nakamura, A.; Nikolaev, A.; Zakharov, V. I.
2018-03-01
Using GPGPU techniques and multi-precision calculation we developed code to study the QCD phase transition line in the canonical approach. The canonical approach is a powerful tool to investigate the sign problem in lattice QCD. The central part of the canonical approach is the fugacity expansion of the grand canonical partition function. The canonical partition functions Zn(T) are the coefficients of this expansion. Using various methods we study the properties of Zn(T). At the last step we perform a cubic-spline fit of the temperature dependence of Zn(T) at fixed n and compute the baryon number susceptibility χB/T² as a function of temperature. After that we compute ∂χ/∂T numerically and restore the crossover line in the QCD phase diagram. We use improved Wilson fermions and the Iwasaki gauge action on a 16³ × 4 lattice with mπ/mρ = 0.8 as a sandbox to check the canonical approach. In this framework we obtain the coefficient in the parametrization of the crossover line Tc(µB²) = Tc(C − ĸµB²/Tc²) with ĸ = −0.0453 ± 0.0099.
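The fugacity expansion at the heart of the canonical approach can be illustrated in miniature: Z_GC(µ, T) = Σ_n Z_n(T) exp(nµ/T), and the baryon number susceptibility follows from the second µ-derivative of log Z_GC. The canonical coefficients Z_n below are toy values, not lattice data.

```python
import numpy as np

T = 1.0
n_max = 10
ns = np.arange(-n_max, n_max + 1)
Zn = np.exp(-0.5 * ns ** 2)              # toy Z_n, symmetric in n (zero net density)

def log_Z(mu):
    """log of the grand canonical partition function via fugacity expansion."""
    return np.log(np.sum(Zn * np.exp(ns * mu / T)))

# chi = d^2 log Z / d mu^2 equals the variance <n^2> - <n>^2 of the
# baryon number; estimate it by a central finite difference at mu = 0.
mu, h = 0.0, 1e-4
chi = (log_Z(mu + h) - 2 * log_Z(mu) + log_Z(mu - h)) / h ** 2
print(round(float(chi), 3))
```

For these toy coefficients the weights are nearly a unit-width discrete Gaussian in n, so chi comes out close to 1; on the lattice the same derivative, taken at many temperatures, is what traces out the crossover line.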
NASA Astrophysics Data System (ADS)
Zellmann, Stefan; Percan, Yvonne; Lang, Ulrich
2015-01-01
Reconstruction of 2-d image primitives or of 3-d volumetric primitives is one of the most common operations performed by the rendering components of modern visualization systems. Because this operation is often aided by GPUs, reconstruction is typically restricted to first-order interpolation. With the advent of in situ visualization, the assumption that rendering algorithms are in general executed on GPUs is however no longer adequate. We thus propose a framework that provides versatile texture filtering capabilities: up to third-order reconstruction using various types of cubic filtering and interpolation primitives; cache-optimized algorithms that integrate seamlessly with GPGPU rendering or with software rendering that was optimized for cache-friendly "Structure of Array" (SoA) access patterns; a memory management layer (MML) that gracefully hides the complexities of extra data copies necessary for memory access optimizations such as swizzling, for rendering on GPGPUs, or for reconstruction schemes that rely on pre-filtered data arrays. We prove the effectiveness of our software architecture by integrating it into and validating it using the open source direct volume rendering (DVR) software DeskVOX.
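The third-order reconstruction mentioned above goes beyond the GPU's built-in linear filter by blending four neighboring samples with a cubic kernel. A common choice is Catmull-Rom interpolation (shown here as an illustrative one-dimensional sketch, not the framework's API):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Cubically interpolate between samples p1 and p2 at t in [0, 1],
    using the two outer samples p0 and p3 to shape the curve."""
    return (
        p1
        + 0.5 * t * (p2 - p0)
        + t * t * (p0 - 2.5 * p1 + 2 * p2 - 0.5 * p3)
        + t ** 3 * (-0.5 * p0 + 1.5 * p1 - 1.5 * p2 + 0.5 * p3)
    )

samples = [0.0, 1.0, 4.0, 9.0, 16.0]   # x^2 sampled at x = 0..4
# Reconstruct at x = 2.5 from the four samples around it:
value = catmull_rom(samples[1], samples[2], samples[3], samples[4], 0.5)
print(value)  # 6.25, i.e. 2.5^2 reproduced exactly
```

Linear interpolation between 4.0 and 9.0 would give 6.5 here; the cubic filter recovers the underlying quadratic exactly, which is the kind of reconstruction-quality gain the framework makes available on both GPUs and SoA-optimized CPU renderers.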
Manycore Performance-Portability: Kokkos Multidimensional Array Library
Edwards, H. Carter; Sunderland, Daniel; Porter, Vicki; ...
2012-01-01
Large, complex scientific and engineering application codes have a significant investment in computational kernels that implement their mathematical models. Porting these computational kernels to the collection of modern manycore accelerator devices is a major challenge in that these devices have diverse programming models, application programming interfaces (APIs), and performance requirements. The Kokkos Array programming model provides a library-based approach to implement computational kernels that are performance-portable to CPU-multicore and GPGPU accelerator devices. This programming model is based upon three fundamental concepts: (1) manycore compute devices, each with its own memory space, (2) data-parallel kernels and (3) multidimensional arrays. Kernel execution performance is, especially for NVIDIA® devices, extremely dependent on data access patterns. The optimal data access pattern can be different for different manycore devices, potentially leading to different implementations of computational kernels specialized for different devices. The Kokkos Array programming model supports performance-portable kernels by (1) separating data access patterns from computational kernels through a multidimensional array API and (2) introducing device-specific data access mappings when a kernel is compiled. An implementation of Kokkos Array is available through Trilinos [Trilinos website, http://trilinos.sandia.gov/, August 2011].
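The access-pattern point can be made concrete: the same logical 2-D array can map to memory row-major (Kokkos "LayoutRight", natural for CPU caches) or column-major ("LayoutLeft", natural for GPU coalescing), and Kokkos selects the mapping per device when the kernel is compiled. Below, both index maps are written out explicitly; this is a simplified sketch of the concept, not the Kokkos API.

```python
def layout_right(i, j, rows, cols):
    """Row-major offset: consecutive j values are adjacent in memory."""
    return i * cols + j

def layout_left(i, j, rows, cols):
    """Column-major offset: consecutive i values are adjacent in memory."""
    return i + j * rows

rows, cols = 3, 4
# Stepping the second index by one moves 1 element under LayoutRight
# but `rows` elements under LayoutLeft:
print(layout_right(1, 2, rows, cols) - layout_right(1, 1, rows, cols))  # 1
print(layout_left(1, 2, rows, cols) - layout_left(1, 1, rows, cols))    # 3
```

A kernel written against the abstract (i, j) interface never sees which map is in use, which is how the same source achieves good memory behavior on both device classes.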
PuReMD-GPU: A reactive molecular dynamics simulation package for GPUs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kylasa, S.B., E-mail: skylasa@purdue.edu; Aktulga, H.M., E-mail: hmaktulga@lbl.gov; Grama, A.Y., E-mail: ayg@cs.purdue.edu
2014-09-01
We present an efficient and highly accurate GP-GPU implementation of our community code, PuReMD, for reactive molecular dynamics simulations using the ReaxFF force field. PuReMD and its incorporation into LAMMPS (Reax/C) is used by a large number of research groups worldwide for simulating diverse systems ranging from biomembranes to explosives (RDX) at an atomistic level of detail. The sub-femtosecond time-steps associated with ReaxFF strongly motivate significant improvements to per-timestep simulation time through effective use of GPUs. This paper presents, in detail, the design and implementation of PuReMD-GPU, which enables ReaxFF simulations on GPUs, as well as various performance optimization techniques we developed to obtain high performance on state-of-the-art hardware. Comprehensive experiments on model systems (bulk water and amorphous silica) are presented to quantify the performance improvements achieved by PuReMD-GPU and to verify its accuracy. In particular, our experiments show up to 16× improvement in runtime compared to our highly optimized CPU-only single-core ReaxFF implementation. PuReMD-GPU is a unique production code, and is currently available on request from the authors.
Polydisperse sphere packing in high dimensions, a search for an upper critical dimension
NASA Astrophysics Data System (ADS)
Morse, Peter; Clusel, Maxime; Corwin, Eric
2012-02-01
The recently introduced granocentric model for polydisperse sphere packings has been shown to be in good agreement with experimental and simulational data in two and three dimensions. This model relies on two effective parameters that have to be estimated from experimental/simulational results. The non-trivial values obtained allow the model to take into account the essential effects of correlations in the packing. Once these parameters are set, the model provides a full statistical description of a sphere packing for a given polydispersity. We investigate the evolution of these effective parameters with the spatial dimension to see if, in analogy with the upper critical dimension in critical phenomena, there exists a dimension above which correlations become irrelevant and the model parameters can be fixed a priori as a function of polydispersity. This would turn the model into a proper theory of polydisperse sphere packings at that upper critical dimension. We perform infinite temperature quench simulations of frictionless polydisperse sphere packings in dimensions 2-8 using a parallel algorithm implemented on a GPGPU. We analyze the resulting packings by implementing an algorithm to calculate the additively weighted Voronoi diagram in arbitrary dimension.
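The additively weighted Voronoi diagram used in the analysis differs from the ordinary one in a single respect: a point x belongs to the sphere i minimizing ‖x − c_i‖ − r_i, so larger spheres claim proportionally more space. A minimal sketch of that assignment rule (illustrative, not the authors' arbitrary-dimension implementation):

```python
import numpy as np

def aw_voronoi_owner(x, centers, radii):
    """Index of the sphere owning point x in the additively weighted sense."""
    d = np.linalg.norm(centers - x, axis=1) - radii
    return int(np.argmin(d))

centers = np.array([[0.0, 0.0], [4.0, 0.0]])
radii = np.array([2.0, 0.5])
# The midpoint is equidistant from both centers (ordinary-Voronoi tie),
# but the larger sphere wins once radii are subtracted:
print(aw_voronoi_owner(np.array([2.0, 0.0]), centers, radii))  # 0
# Close enough to the small sphere, ownership flips:
print(aw_voronoi_owner(np.array([3.5, 0.0]), centers, radii))  # 1
```

Cell volumes computed under this rule are what the granocentric model's local statistics are compared against in a polydisperse packing.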
CaLRS: A Critical-Aware Shared LLC Request Scheduling Algorithm on GPGPU
Ma, Jianliang; Meng, Jinglei; Chen, Tianzhou; Wu, Minghui
2015-01-01
Ultra-high thread-level parallelism in modern GPUs typically generates numerous memory requests simultaneously, so plenty of memory requests are always waiting at each bank of the shared LLC (L2 in this paper) and of global memory. For global memory, various schedulers have already been developed to adjust the request sequence, but little work has focused on the service order at the shared LLC. We measured that requests in a large number of GPU applications queue at the LLC banks for service, which provides an opportunity to optimize the service order at the LLC. By adjusting the service order of GPU memory requests, we can improve the schedulability of the SMs. We therefore propose a critical-aware shared LLC request scheduling algorithm (CaLRS). The priority assigned to each memory request is central to CaLRS: we represent the criticality of each warp by the number of memory requests that originate from the same warp but have not yet been serviced when they arrive at the shared LLC bank. Experiments show that the proposed scheme can effectively boost SM schedulability by promoting the scheduling priority of memory requests with high criticality, indirectly improving GPU performance. PMID:25729772
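The warp-criticality heuristic described above can be sketched in a few lines. This is an illustrative software model only; the request records, field names, and tie-breaking are assumptions, not the paper's hardware implementation:

```python
from collections import Counter

def calrs_order(queue):
    """Reorder an LLC bank queue by warp criticality (simplified).

    A warp's criticality is the number of its memory requests still
    waiting in the queue; CaLRS promotes requests from high-criticality
    warps, so we sort by that count in descending order.  Python's sort
    is stable, so arrival order is preserved within a warp.
    """
    outstanding = Counter(req["warp"] for req in queue)
    return sorted(queue, key=lambda req: -outstanding[req["warp"]])

# A toy bank queue: warp 3 has three outstanding requests, warp 7 one.
queue = [
    {"id": 0, "warp": 3},
    {"id": 1, "warp": 3},
    {"id": 2, "warp": 7},
    {"id": 3, "warp": 3},
]
ordered = calrs_order(queue)  # warp-3 requests are served first
```

In hardware this reordering would happen per LLC bank as requests arrive, rather than as a batch sort.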
Progress Towards a Rad-Hydro Code for Modern Computing Architectures LA-UR-10-02825
NASA Astrophysics Data System (ADS)
Wohlbier, J. G.; Lowrie, R. B.; Bergen, B.; Calef, M.
2010-11-01
We are entering an era of high performance computing where data movement is the overwhelming bottleneck to scalable performance, as opposed to the speed of floating-point operations per processor. All multi-core hardware paradigms, whether heterogeneous or homogeneous, be it the Cell processor, GPGPU, or multi-core x86, share this common trait. In multi-physics applications such as inertial confinement fusion or astrophysics, one may be solving multi-material hydrodynamics with tabular equation of state data lookups, radiation transport, nuclear reactions, and charged particle transport in a single time cycle. The algorithms are intensely data dependent, e.g., EOS, opacity, nuclear data, and multi-core hardware memory restrictions are forcing code developers to rethink code and algorithm design. For the past two years LANL has been funding a small effort referred to as Multi-Physics on Multi-Core to explore ideas for code design as pertaining to inertial confinement fusion and astrophysics applications. The near term goals of this project are to have a multi-material radiation hydrodynamics capability, with tabular equation of state lookups, on cartesian and curvilinear block structured meshes. In the longer term we plan to add fully implicit multi-group radiation diffusion and material heat conduction, and block structured AMR. We will report on our progress to date.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baeza, J.A.; Ureba, A.; Jimenez-Ortega, E.
Purpose: Several radiotherapy research platforms already exist, such as CERR, the most widely used and referenced; SlicerRT, which allows treatment plan comparison from various sources; and MMCTP, a full MCTP system. However, a full MCTP toolset is still needed that gives users complete control of calculation grids, interpolation methods and filters in order to "fairly" compare results from different TPSs, supporting verification with experimental measurements. Methods: This work presents CARMEN, a MatLab-based platform including multicore and GPGPU accelerated functions for loading RT data, designing treatment plans, and evaluating dose matrices and experimental data. CARMEN supports anatomic and functional imaging in DICOM format, as well as RTSTRUCT, RTPLAN and RTDOSE. It also contains numerous tools to accomplish the MCTP process, managing egs4phant and phase space files. CARMEN's planning mode assists in designing IMRT, VMAT and MERT treatments via both inverse and direct optimization. The evaluation mode contains a comprehensive toolset (e.g. 2D/3D gamma evaluation, difference matrices, profiles, DVH, etc.) to compare datasets from commercial TPSs, MC simulations (i.e. 3ddose) and radiochromic film in a user-controlled manner. Results: CARMEN has been validated against commercial RTPs and well-established evaluation tools, showing coherent behavior of its multiple algorithms. Furthermore, the CARMEN platform has been used to generate competitive complex treatments that have been published in comparative studies. Conclusion: A new research-oriented MCTP platform with a customized validation toolset has been presented. Despite being coded in a high-level programming language, CARMEN is agile due to its use of parallel algorithms. The widespread use of MatLab gives most researchers straightforward access to CARMEN's algorithms. Similarly, the platform can benefit from scientific developments of the MatLab community, such as filters and registration algorithms. Finally, CARMEN highlights the importance of grid and filtering control in treatment plan comparison.
A Flexible CUDA LU-based Solver for Small, Batched Linear Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Gawande, Nitin A.; Villa, Oreste
This chapter presents the implementation of a batched CUDA solver based on LU factorization for small linear systems. This solver may be used in applications such as reactive flow transport models, which apply the Newton-Raphson technique to linearize and iteratively solve the sets of nonlinear equations that represent the reactions for tens of thousands to millions of physical locations. The implementation exploits somewhat counterintuitive GPGPU programming techniques: it assigns the solution of a matrix (representing a system) to a single CUDA thread, does not exploit shared memory, and employs dynamic memory allocation on the GPUs. These techniques enable our implementation to simultaneously solve sets of systems with over 100 equations and to employ LU decomposition with complete pivoting, providing the higher numerical accuracy required by certain applications. Other currently available solutions for batched linear solvers are limited by size and only support partial pivoting, although they may be faster under certain conditions. We discuss the code of our implementation and present a comparison with the other implementations, discussing the various tradeoffs in terms of performance and flexibility. This work will enable developers who need batched linear solvers to choose whichever implementation is most appropriate to the features and requirements of their applications, and even to implement dynamic switching approaches that choose the best implementation depending on the input data.
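A reference sketch of the core numerical step, LU solution with complete pivoting for one small system, may help clarify what each CUDA thread performs in the batched solver. This pure-Python version is illustrative only and is not the chapter's GPU code:

```python
def solve_complete_pivoting(A, b):
    """Solve A x = b by LU decomposition with complete pivoting.

    Complete pivoting searches the whole remaining submatrix for the
    largest pivot (row AND column swaps), giving better numerical
    accuracy than partial pivoting on small, ill-conditioned systems --
    the property the batched CUDA solver targets.
    """
    n = len(A)
    A = [row[:] for row in A]      # work on copies
    b = b[:]
    col_perm = list(range(n))      # tracks column swaps (unknown reordering)
    for k in range(n):
        # Find the entry of largest magnitude in the trailing submatrix.
        p, q = max(((i, j) for i in range(k, n) for j in range(k, n)),
                   key=lambda ij: abs(A[ij[0]][ij[1]]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for row in A:
            row[k], row[q] = row[q], row[k]
        col_perm[k], col_perm[q] = col_perm[q], col_perm[k]
        # Eliminate entries below the pivot.
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution, then undo the column permutation.
    y = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(A[i][j] * y[j] for j in range(i + 1, n))
        y[i] = s / A[i][i]
    x = [0.0] * n
    for k in range(n):
        x[col_perm[k]] = y[k]
    return x

x = solve_complete_pivoting([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```

In the batched solver, one CUDA thread would run a loop like this per system over thousands of independent small matrices.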
Maia, Julio Daniel Carvalho; Urquiza Carvalho, Gabriel Aires; Mangueira, Carlos Peixoto; Santana, Sidney Ramos; Cabral, Lucidio Anjos Formiga; Rocha, Gerd B
2012-09-11
In this study, we present some modifications in the semiempirical quantum chemistry MOPAC2009 code that accelerate single-point energy calculations (1SCF) of medium-size (up to 2500 atoms) molecular systems using GPU coprocessors and multithreaded shared-memory CPUs. Our modifications consisted of using a combination of highly optimized linear algebra libraries for both CPU (LAPACK and BLAS from Intel MKL) and GPU (MAGMA and CUBLAS) to hasten time-consuming parts of MOPAC such as the pseudodiagonalization, full diagonalization, and density matrix assembling. We have shown that it is possible to obtain large speedups just by using CPU serial linear algebra libraries in the MOPAC code. As a special case, we show a speedup of up to 14 times for a methanol simulation box containing 2400 atoms and 4800 basis functions, with even greater gains in performance when using multithreaded CPUs (2.1 times in relation to the single-threaded CPU code using linear algebra libraries) and GPUs (3.8 times). This degree of acceleration opens new perspectives for modeling larger structures which appear in inorganic chemistry (such as zeolites and MOFs), biochemistry (such as polysaccharides, small proteins, and DNA fragments), and materials science (such as nanotubes and fullerenes). In addition, we believe that this parallel (GPU-GPU) MOPAC code will make it feasible to use semiempirical methods in lengthy molecular simulations using both hybrid QM/MM and QM/QM potentials.
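The kind of delegation described, replacing hand-written loops with optimized LAPACK/BLAS calls for diagonalization and density-matrix assembly, can be illustrated in a few lines of NumPy. The matrix data and function name are illustrative, not MOPAC's actual code:

```python
import numpy as np

def density_matrix(fock, n_occ):
    """Diagonalize a symmetric Fock-like matrix and assemble the
    closed-shell density matrix P = 2 C_occ C_occ^T.

    np.linalg.eigh dispatches to an optimized LAPACK symmetric
    eigensolver, and the matrix product to BLAS -- the same pattern of
    offloading used to accelerate MOPAC's pseudodiagonalization, full
    diagonalization, and density matrix assembly.
    """
    eigvals, C = np.linalg.eigh(fock)   # LAPACK symmetric eigensolver
    C_occ = C[:, :n_occ]                # lowest-energy (occupied) orbitals
    return 2.0 * C_occ @ C_occ.T        # BLAS matrix multiply

# Tiny illustrative symmetric matrix with one "occupied" orbital.
F = np.array([[-1.0, 0.2],
              [ 0.2, -0.5]])
P = density_matrix(F, n_occ=1)
```

The same three calls scale to the thousands of basis functions mentioned in the abstract, which is where library-backed routines (MKL, MAGMA, CUBLAS) pay off.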
Programmable partitioning for high-performance coherence domains in a multiprocessor system
Blumrich, Matthias A [Ridgefield, CT; Salapura, Valentina [Chappaqua, NY
2011-01-25
A multiprocessor computing system and a method of logically partitioning a multiprocessor computing system are disclosed. The multiprocessor computing system comprises a multitude of processing units, and a multitude of snoop units. Each of the processing units includes a local cache, and the snoop units are provided for supporting cache coherency in the multiprocessor system. Each of the snoop units is connected to a respective one of the processing units and to all of the other snoop units. The multiprocessor computing system further includes a partitioning system for using the snoop units to partition the multitude of processing units into a plurality of independent, memory-consistent, adjustable-size processing groups. Preferably, when the processor units are partitioned into these processing groups, the partitioning system also configures the snoop units to maintain cache coherency within each of said groups.
Cher, Chen-Yong; Coteus, Paul W; Gara, Alan; Kursun, Eren; Paulsen, David P; Schuelke, Brian A; Sheets, II, John E; Tian, Shurong
2013-10-01
A processor-implemented method for determining aging of a processing unit in a processor, the method comprising: calculating an effective aging profile for the processing unit, wherein the effective aging profile quantifies the effects of aging on the processing unit; combining the effective aging profile with process variation data, actual workload data, and operating conditions data for the processing unit; and determining aging through an aging sensor of the processing unit using the effective aging profile, the process variation data, the actual workload data, architectural characteristics and redundancy data, and the operating conditions data for the processing unit.
NASA Astrophysics Data System (ADS)
Wang, Ying; Krafczyk, Manfred; Geier, Martin; Schönherr, Martin
2014-05-01
The quantification of soil evaporation and of soil water content dynamics near the soil surface are critical in the physics of land-surface processes on many scales and are dominated by multi-component and multi-phase mass and energy fluxes between the ground and the atmosphere. Although it is widely recognized that both liquid and gaseous water movement are fundamental factors in the quantification of soil heat flux and surface evaporation, their computation has only started to be taken into account using simplified macroscopic models. As the flow field over the soil can be safely considered as turbulent, it would be natural to study the detailed transient flow dynamics by means of Large Eddy Simulation (LES [1]) where the three-dimensional flow field is resolved down to the laminar sub-layer. Yet this requires very fine resolved meshes allowing a grid resolution of at least one order of magnitude below the typical grain diameter of the soil under consideration. In order to gain reliable turbulence statistics, up to several hundred eddy turnover times have to be simulated which adds up to several seconds of real time. Yet, the time scale of the receding saturated water front dynamics in the soil is on the order of hours. Thus we are faced with the task of solving a transient turbulent flow problem including the advection-diffusion of water vapour over the soil-atmospheric interface represented by a realistic tomographic reconstruction of a real porous medium taken from laboratory probes. Our flow solver is based on the Lattice Boltzmann method (LBM) [2] which has been extended by a Cumulant approach similar to the one described in [3,4] to minimize the spurious coupling between the degrees of freedom in previous LBM approaches and can be used as an implicit LES turbulence model due to its low numerical dissipation and increased stability at high Reynolds numbers. 
The kernel has been integrated into the research code Virtualfluids [5] and delivers up to 30% of the peak performance of modern general-purpose graphics processing units (GPGPU, [6]), allowing the simulation of several minutes of real time for an LES LBM model. In our contribution we will present detailed profiles of the velocity distribution for different surface roughnesses, describe our multi-scale approach for the advection-diffusion and estimate water vapour fluxes from transient simulations of the coupled problem. REFERENCES [1] J. Fröhlich and D. von Terzi. Hybrid LES/RANS methods for the simulation of turbulent flows. Progress in Aerospace Sciences, 44(5):349-377, 2008. [2] S. Chen and G. D. Doolen. Annual Review of Fluid Mechanics, 30:329, 1998. [3] S. Seeger and K. H. Hoffmann. The cumulant method for computational kinetic theory. Continuum Mech. Thermodyn., 12:403-421, 2000. [4] S. Seeger and K. H. Hoffmann. The cumulant method applied to a mixture of Maxwell gases. Continuum Mech. Thermodyn., 14:321-335, 2002. [5] S. Freudiger, J. Hegewald and M. Krafczyk. A parallelisation concept for a multi-physics Lattice Boltzmann prototype based on hierarchical grids. Progress in Computational Fluid Dynamics, 8(1):168-178, 2008. [6] M. Schönherr, K. Kucher, M. Geier, M. Stiebler, S. Freudiger and M. Krafczyk. Multi-thread implementations of the Lattice Boltzmann method on non-uniform grids for CPUs and GPUs. Computers & Mathematics with Applications, 61(12):3730-3743, 2011.
Device and method to enhance availability of cluster-based processing systems
NASA Technical Reports Server (NTRS)
Lupia, David J. (Inventor); Ramos, Jeremy (Inventor); Samson, Jr., John R. (Inventor)
2010-01-01
An electronic computing device including at least one processing unit that implements a specific fault signal upon experiencing an associated fault, a control unit that generates a specific recovery signal upon receiving the fault signal from the at least one processing unit, and at least one input memory unit. The recovery signal initiates specific recovery processes in the at least one processing unit. The input memory buffers input data signals input to the at least one processing unit that experienced the fault during the recovery period.
Maritime Domain Awareness: C4I for the 1000 Ship Navy
2009-12-04
[Fragmentary extraction from the report's functional decomposition and list of figures: functions include "provide unit sensed contacts", "coordinate unit operations", "process unit information", "release image", and "release contact report" (Figure 33); list-of-figures entries include Figure 1, "Functional Problem Sequence Process Flow".]
Processing device with self-scrubbing logic
Wojahn, Christopher K.
2016-03-01
An apparatus includes a processing unit including a configuration memory and self-scrubber logic coupled to read the configuration memory to detect compromised data stored in the configuration memory. The apparatus also includes a watchdog unit external to the processing unit and coupled to the self-scrubber logic to detect a failure in the self-scrubber logic. The watchdog unit is coupled to the processing unit to selectively reset the processing unit in response to detecting the failure in the self-scrubber logic. The apparatus also includes an external memory external to the processing unit and coupled to send configuration data to the configuration memory in response to a data feed signal outputted by the self-scrubber logic.
Estimation of Global 1km-grid Terrestrial Carbon Exchange Part I: Developing Inputs and Modelling
NASA Astrophysics Data System (ADS)
Sasai, T.; Murakami, K.; Kato, S.; Matsunaga, T.; Saigusa, N.; Hiraki, K.
2015-12-01
The global terrestrial carbon cycle depends strongly on the spatial pattern of land cover, which is heterogeneously distributed at regional and global scales. However, most studies that aimed at estimating carbon exchanges between ecosystem and atmosphere have remained at grid resolutions of several tens of kilometres, which is insufficient for understanding the detailed pattern of carbon exchanges at the level of ecological communities. Improving the spatial resolution is clearly necessary to enhance the accuracy of carbon exchange estimates. Moreover, such improvement may contribute to climate-change awareness, policy making, and other societal activities. In this study, we present global terrestrial carbon exchanges (net ecosystem production, net primary production, and gross primary production) at 1 km grid resolution. As methodology for computing the exchanges, we 1) developed a global 1 km-grid climate and satellite dataset based on the approach in Setoyama and Sasai (2013); 2) used the satellite-driven biosphere model (Biosphere model integrating Eco-physiological And Mechanistic approaches using Satellite data: BEAMS) (Sasai et al., 2005, 2007, 2011); 3) simulated the carbon exchanges using the new dataset and BEAMS on a supercomputer with 1280 CPU cores and 320 GPGPU cores (the GOSAT RCF of NIES). As a result, we developed a globally uniform system for realistically estimating terrestrial carbon exchange and evaluated net ecosystem production at the community level, yielding a highly detailed understanding of terrestrial carbon exchanges.
15 CFR 971.209 - Processing outside the United States.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 15 Commerce and Foreign Trade 3 2011-01-01 2011-01-01 false Processing outside the United States... Applications Contents § 971.209 Processing outside the United States. (a) Except as provided in this section... contravenes the overriding national interests of the United States. (b) If foreign processing is proposed, the...
15 CFR 971.209 - Processing outside the United States.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 15 Commerce and Foreign Trade 3 2014-01-01 2014-01-01 false Processing outside the United States... Applications Contents § 971.209 Processing outside the United States. (a) Except as provided in this section... contravenes the overriding national interests of the United States. (b) If foreign processing is proposed, the...
Distributed and recoverable digital control system
NASA Technical Reports Server (NTRS)
Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)
2010-01-01
A real-time multi-tasking digital control system with rapid recovery capability is disclosed. The control system includes a plurality of computing units comprising a plurality of redundant processing units, with each of the processing units configured to generate one or more redundant control commands. One or more internal monitors are employed for detecting data errors in the control commands. One or more recovery triggers are provided for initiating rapid recovery of a processing unit if data errors are detected. The control system also includes a plurality of actuator control units each in operative communication with the computing units. The actuator control units are configured to initiate a rapid recovery if data errors are detected in one or more of the processing units. A plurality of smart actuators communicates with the actuator control units, and a plurality of redundant sensors communicates with the computing units.
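The error-detection idea behind redundant control commands can be modelled as a simple majority vote across lanes. This sketch is an illustrative simplification of the internal monitors described in the patent, not its actual mechanism:

```python
from collections import Counter

def vote(commands):
    """Majority-vote across redundant control commands.

    Returns the agreed command and the indices of disagreeing lanes;
    a non-empty list of disagreeing lanes models a detected data error
    that would fire a recovery trigger for those processing units.
    """
    winner, _ = Counter(commands).most_common(1)[0]
    faulty = [i for i, c in enumerate(commands) if c != winner]
    return winner, faulty

# Three redundant processing units emit a command; lane 2 disagrees.
cmd, bad_lanes = vote([5, 5, 7])
```

With triple redundancy, any single corrupted command is both outvoted and localized to the lane that must undergo rapid recovery.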
Environmental Engineering Unit Operations and Unit Processes Laboratory Manual.
ERIC Educational Resources Information Center
O'Connor, John T., Ed.
This manual was prepared for the purpose of stimulating the development of effective unit operations and unit processes laboratory courses in environmental engineering. Laboratory activities emphasizing physical operations, biological, and chemical processes are designed for various educational and equipment levels. An introductory section reviews…
40 CFR 63.100 - Applicability and designation of source.
Code of Federal Regulations, 2010 CFR
2010-07-01
... manufacturing process unit has two or more products that have the same maximum annual design capacity on a mass... subject to this subpart. (3) For chemical manufacturing process units that are designed and operated as... chemical manufacturing process units that are designed and operated as flexible operation units shall be...
15 CFR 971.427 - Processing outside the United States.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 15 Commerce and Foreign Trade 3 2014-01-01 2014-01-01 false Processing outside the United States... outside the United States. If appropriate TCRs will incorporate provisions to implement the decision of the Administrator regarding the return of resources processed outside the United States, in accordance...
40 CFR 63.1275 - Glycol dehydration unit process vent standards.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 12 2014-07-01 2014-07-01 false Glycol dehydration unit process vent... Storage Facilities § 63.1275 Glycol dehydration unit process vent standards. (a) This section applies to each glycol dehydration unit subject to this subpart that must be controlled for air emissions as...
40 CFR 63.1275 - Glycol dehydration unit process vent standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 12 2013-07-01 2013-07-01 false Glycol dehydration unit process vent... Storage Facilities § 63.1275 Glycol dehydration unit process vent standards. (a) This section applies to each glycol dehydration unit subject to this subpart that must be controlled for air emissions as...
40 CFR 63.1275 - Glycol dehydration unit process vent standards.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 12 2012-07-01 2011-07-01 true Glycol dehydration unit process vent... Facilities § 63.1275 Glycol dehydration unit process vent standards. (a) This section applies to each glycol dehydration unit subject to this subpart with an actual annual average natural gas flowrate equal to or...
40 CFR 63.1275 - Glycol dehydration unit process vent standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 11 2010-07-01 2010-07-01 true Glycol dehydration unit process vent... Facilities § 63.1275 Glycol dehydration unit process vent standards. (a) This section applies to each glycol dehydration unit subject to this subpart with an actual annual average natural gas flowrate equal to or...
40 CFR 63.765 - Glycol dehydration unit process vent standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 11 2013-07-01 2013-07-01 false Glycol dehydration unit process vent... Facilities § 63.765 Glycol dehydration unit process vent standards. (a) This section applies to each glycol dehydration unit subject to this subpart that must be controlled for air emissions as specified in either...
40 CFR 63.765 - Glycol dehydration unit process vent standards.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 10 2011-07-01 2011-07-01 false Glycol dehydration unit process vent... Facilities § 63.765 Glycol dehydration unit process vent standards. (a) This section applies to each glycol dehydration unit subject to this subpart with an actual annual average natural gas flowrate equal to or...
40 CFR 63.765 - Glycol dehydration unit process vent standards.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 11 2014-07-01 2014-07-01 false Glycol dehydration unit process vent... Facilities § 63.765 Glycol dehydration unit process vent standards. (a) This section applies to each glycol dehydration unit subject to this subpart that must be controlled for air emissions as specified in either...
40 CFR 63.765 - Glycol dehydration unit process vent standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 10 2010-07-01 2010-07-01 false Glycol dehydration unit process vent... Facilities § 63.765 Glycol dehydration unit process vent standards. (a) This section applies to each glycol dehydration unit subject to this subpart with an actual annual average natural gas flowrate equal to or...
40 CFR 63.1275 - Glycol dehydration unit process vent standards.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 11 2011-07-01 2011-07-01 false Glycol dehydration unit process vent... Facilities § 63.1275 Glycol dehydration unit process vent standards. (a) This section applies to each glycol dehydration unit subject to this subpart with an actual annual average natural gas flowrate equal to or...
40 CFR 63.765 - Glycol dehydration unit process vent standards.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 11 2012-07-01 2012-07-01 false Glycol dehydration unit process vent... Facilities § 63.765 Glycol dehydration unit process vent standards. (a) This section applies to each glycol dehydration unit subject to this subpart with an actual annual average natural gas flowrate equal to or...
Failure detection in high-performance clusters and computers using chaotic map computations
Rao, Nageswara S.
2015-09-01
A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
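The underlying principle, that chaotic maps amplify tiny arithmetic errors exponentially, so a faulty node's trajectory quickly diverges from healthy nodes started from the same seed, can be illustrated with the logistic map. The map choice, parameters, and median comparison below are illustrative assumptions, not the patented method:

```python
def logistic_trajectory(x0, steps, r=3.99):
    """Iterate the chaotic logistic map x <- r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def detect_faulty_nodes(trajectories, tol=1e-6):
    """Flag nodes whose trajectory deviates from the per-step median.

    All nodes iterate the same map from the same seed, so healthy nodes
    produce bit-identical trajectories; any arithmetic fault is
    amplified exponentially and soon exceeds the tolerance.
    """
    steps = len(trajectories[0])
    medians = [sorted(t[k] for t in trajectories)[len(trajectories) // 2]
               for k in range(steps)]
    return [i for i, t in enumerate(trajectories)
            if any(abs(t[k] - medians[k]) > tol for k in range(steps))]

healthy = logistic_trajectory(0.3, 60)
# Model a faulty node as a tiny (1e-13) perturbation of its arithmetic.
perturbed = logistic_trajectory(0.3 + 1e-13, 60)
faulty = detect_faulty_nodes([healthy, perturbed, healthy])
```

Sixty iterations are ample here: at the map's positive Lyapunov exponent, a 1e-13 error grows past any reasonable tolerance well before the trajectory ends.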
Adaptive-optics optical coherence tomography processing using a graphics processing unit.
Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T
2014-01-01
Graphics processing units are increasingly being used for scientific computing because of their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper we have used a general-purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving this super-high-resolution technology closer to clinical viability.
NASA Astrophysics Data System (ADS)
Crinière, Antoine; Dumoulin, Jean; Mevel, Laurent; Andrade-Barosso, Guillermo; Simonin, Matthieu
2015-04-01
Over the past decades, the monitoring of civil engineering structures has become a major field of research and development in the domains of modelling and integrated instrumentation. This growing interest can be attributed partly to the need to control the aging of such structures and partly to the need to optimize maintenance costs. From this standpoint the project Cloud2SM (Cloud architecture design for Structural Monitoring with in-line Sensors and Models tasking) has been launched to develop a robust information system able to support the long-term monitoring of civil engineering structures as well as to interface various sensors and data. The specificity of this architecture is that it is based on the notion of data processing through physical or statistical models. Thus the data processing, whether material or mathematical, can be seen here as a resource of the main architecture. The project can be divided into various items: - The sensors and their measurement process: these items provide data to the main architecture and can embed storage or computational resources. Depending on onboard capacity and the amount of data generated, heavy and light sensors can be distinguished. - The storage resources: based on the cloud concept, this resource can store at least two types of data, raw data and processed data. - The computational resources: this item includes embedded "pseudo real time" resources such as the dedicated computer cluster or computational resources. - The models: used for the conversion of raw data to meaningful data. These resources inform the system of their needs and can be seen as independent blocks of the system. - The user interface: this item can be divided into various HMIs for maintenance operations on the sensors and for presenting information to the user. - The demonstrators: the structures themselves. This project follows previous research works initiated in the European project ISTIMES [1].
It includes the infrared thermal monitoring of civil engineering structures [2-3] and/or the vibration monitoring of such structures [4-5]. The chosen architecture is based on the OGC standard in order to ensure interoperability between the various measurement systems. This concept is extended to the notion of physical models. Last but not least, a main objective of this project is to explore the feasibility and reliability of deploying mathematical models and processing a large amount of data using the GPGPU capacity of a dedicated computational cluster, while studying how OGC standardization applies to these technical concepts. References [1] M. Proto et al., "Transport Infrastructure surveillance and Monitoring by Electromagnetic Sensing: the ISTIMES project", Sensors, 10(12):10620-10639, December 2010, doi:10.3390/s101210620. [2] J. Dumoulin, A. Crinière, R. Averty, "Detection and thermal characterization of the inner structure of the "Musmeci" bridge deck by infrared thermography monitoring", Journal of Geophysics and Engineering, Volume 10, Number 2, 17 pages, November 2013, IOP Science, doi:10.1088/1742-2132/10/6/064003. [3] J. Dumoulin and V. Boucher, "Infrared thermography system for transport infrastructures survey with inline local atmospheric parameter measurements and offline model for radiation attenuation evaluations", J. Appl. Remote Sens., 8(1), 084978 (2014), doi:10.1117/1.JRS.8.084978. [4] V. Le Cam, M. Doehler, M. Le Pen, L. Mevel, "Embedded modal analysis algorithms on the smart wireless sensor platform PEGASE", in Proc. 9th International Workshop on Structural Health Monitoring, Stanford, CA, USA, 2013. [5] M. Zghal, L. Mevel, P. Del Moral, "Modal parameter estimation using interacting Kalman filter", Mechanical Systems and Signal Processing, 2014.
The Ortho-Syllable as a Processing Unit in Handwriting: The Mute E Effect
ERIC Educational Resources Information Center
Lambert, Eric; Sausset, Solen; Rigalleau, François
2015-01-01
Some research on written production has focused on the role of the syllable as a processing unit. However, the precise nature of this syllable unit has yet to be elucidated. The present study examined whether the nature of this processing unit is orthographic (i.e., an ortho-syllable) or phonological. We asked French adults to copy three-syllable…
26 CFR 1.924(d)-1 - Requirement that economic processes take place outside the United States.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 26 Internal Revenue 10 2011-04-01 2011-04-01 false Requirement that economic processes take place... Citizens of United States § 1.924(d)-1 Requirement that economic processes take place outside the United... any transaction only if economic processes with respect to such transaction take place outside the...
Portable brine evaporator unit, process, and system
Hart, Paul John; Miller, Bruce G.; Wincek, Ronald T.; Decker, Glenn E.; Johnson, David K.
2009-04-07
The present invention discloses a comprehensive, efficient, and cost-effective portable evaporator unit, method, and system for the treatment of brine. The evaporator unit, method, and system require a pretreatment process that removes heavy metals, crude oil, and other contaminants in preparation for the evaporator unit. The pretreatment and the evaporator unit, method, and system process metals and brine at the site where they are generated (the well site), saving significant money for producers, who can avoid present and future increases in transportation costs.
Jordan, A; Chen, D; Yi, Q-L; Kanias, T; Gladwin, M T; Acker, J P
2016-07-01
Quality control (QC) data collected by blood services are used to monitor production and to ensure compliance with regulatory standards. We demonstrate how analysis of QC data can be used to highlight the sources of variability within red cell concentrates (RCCs). We merged Canadian Blood Services QC data with manufacturing and donor records for 28 227 RCCs produced between June 2011 and October 2014. Units were categorized based on processing method, bag manufacturer, donor age and donor sex, then assessed based on product characteristics: haemolysis and haemoglobin levels, unit volume, leucocyte count and haematocrit. Buffy-coat method (top/bottom)-processed units exhibited lower haemolysis than units processed using the whole-blood filtration method (top/top). Units from female donors exhibited lower haemolysis than those from male donors. Processing method influenced unit volume and the ratio of additive solution to residual plasma. Stored red blood cell characteristics are influenced by prestorage processing and donor factors. Understanding the relationship between processing, donors and RCC quality will help blood services to ensure the safety of transfused products. © 2016 International Society of Blood Transfusion.
40 CFR 63.764 - General standards.
Code of Federal Regulations, 2011 CFR
2011-07-01
... in paragraphs (c)(1) through (3) of this section. (1) For each glycol dehydration unit process vent... requirements for glycol dehydration unit process vents specified in § 63.765; (ii) The owner or operator shall... requirements for glycol dehydration unit process vents specified in § 63.765; (ii) The monitoring requirements...
40 CFR 63.764 - General standards.
Code of Federal Regulations, 2014 CFR
2014-07-01
... in paragraphs (c)(1) through (3) of this section. (1) For each glycol dehydration unit process vent... requirements for glycol dehydration unit process vents specified in § 63.765; (ii) The owner or operator shall... requirements for glycol dehydration unit process vents specified in § 63.765; (ii) The monitoring requirements...
40 CFR 63.764 - General standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
... in paragraphs (c)(1) through (3) of this section. (1) For each glycol dehydration unit process vent... requirements for glycol dehydration unit process vents specified in § 63.765; (ii) The owner or operator shall... requirements for glycol dehydration unit process vents specified in § 63.765; (ii) The monitoring requirements...
40 CFR 63.764 - General standards.
Code of Federal Regulations, 2012 CFR
2012-07-01
... in paragraphs (c)(1) through (3) of this section. (1) For each glycol dehydration unit process vent... requirements for glycol dehydration unit process vents specified in § 63.765; (ii) The owner or operator shall... requirements for glycol dehydration unit process vents specified in § 63.765; (ii) The monitoring requirements...
40 CFR 63.764 - General standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
... in paragraphs (c)(1) through (3) of this section. (1) For each glycol dehydration unit process vent... requirements for glycol dehydration unit process vents specified in § 63.765; (ii) The owner or operator shall... requirements for glycol dehydration unit process vents specified in § 63.765; (ii) The monitoring requirements...
EDExpress, 2000-2001: Direct Loan.
ERIC Educational Resources Information Center
Department of Education, Washington, DC. Student Financial Assistance.
This workbook covers all the processes needed to administer the federal direct loan program in schools; it requires familiarity with the basic concepts found in the "Direct Loan School Guide." The eight units of instruction include: Unit 1: an overview; Unit 2: processing loan records, including the EDExpress setup, the processing cycle,…
40 CFR 63.1016 - Alternative means of emission limitation: Enclosed-vented process units.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 10 2010-07-01 2010-07-01 false Alternative means of emission limitation: Enclosed-vented process units. 63.1016 Section 63.1016 Protection of Environment ENVIRONMENTAL... § 63.1016 Alternative means of emission limitation: Enclosed-vented process units. (a) Use of closed...
40 CFR 63.1016 - Alternative means of emission limitation: Enclosed-vented process units.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 10 2011-07-01 2011-07-01 false Alternative means of emission limitation: Enclosed-vented process units. 63.1016 Section 63.1016 Protection of Environment ENVIRONMENTAL... § 63.1016 Alternative means of emission limitation: Enclosed-vented process units. (a) Use of closed...
15 CFR 971.209 - Processing outside the United States.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false Processing outside the United States... THE ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR COMMERCIAL RECOVERY PERMITS Applications Contents § 971.209 Processing outside the United States. (a) Except as provided in this section...
Estimating Missing Unit Process Data in Life Cycle Assessment Using a Similarity-Based Approach.
Hou, Ping; Cai, Jiarui; Qu, Shen; Xu, Ming
2018-05-01
In life cycle assessment (LCA), collecting unit process data from the empirical sources (i.e., meter readings, operation logs/journals) is often costly and time-consuming. We propose a new computational approach to estimate missing unit process data solely relying on limited known data based on a similarity-based link prediction method. The intuition is that similar processes in a unit process network tend to have similar material/energy inputs and waste/emission outputs. We use the ecoinvent 3.1 unit process data sets to test our method in four steps: (1) dividing the data sets into a training set and a test set; (2) randomly removing certain numbers of data in the test set indicated as missing; (3) using similarity-weighted means of various numbers of most similar processes in the training set to estimate the missing data in the test set; and (4) comparing estimated data with the original values to determine the performance of the estimation. The results show that missing data can be accurately estimated when less than 5% data are missing in one process. The estimation performance decreases as the percentage of missing data increases. This study provides a new approach to compile unit process data and demonstrates a promising potential of using computational approaches for LCA data compilation.
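The estimation step described above can be sketched as a similarity-weighted mean over the k most similar training processes. This is a minimal Python illustration, not the authors' implementation; representing each unit process as a flow vector and using cosine similarity are assumptions made here for concreteness:

```python
import math

def cosine_similarity(a, b):
    # Similarity between two unit-process flow vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def estimate_missing(target, training, missing_idx, k=3):
    """Estimate target[missing_idx] as the similarity-weighted mean of
    the corresponding entry in the k most similar training processes,
    where similarity is computed over the known entries only."""
    known = [i for i in range(len(target)) if i != missing_idx]
    sim = lambda p: cosine_similarity([target[i] for i in known],
                                      [p[i] for i in known])
    # Pick the k most similar processes from the training set.
    scored = sorted(training, key=sim, reverse=True)[:k]
    sims = [sim(p) for p in scored]
    total = sum(sims)
    if total == 0:
        # No informative neighbours: fall back to a plain mean.
        return sum(p[missing_idx] for p in scored) / len(scored)
    return sum(s * p[missing_idx] for s, p in zip(sims, scored)) / total
```

A process whose known flows closely match those of its neighbours inherits a value close to theirs, which mirrors the intuition stated in the abstract that similar processes tend to have similar inputs and outputs.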
Zhang, Chundong; Jun, Ki-Won; Ha, Kyoung-Su; Lee, Yun-Jo; Kang, Seok Chang
2014-07-15
Two process models for a carbon dioxide-utilized gas-to-liquids (GTL) process (CUGP), mainly producing light olefins and Fischer-Tropsch (F-T) synthetic oils, were developed in Aspen Plus. Both models are mainly composed of a reforming unit, an F-T synthesis unit and a recycle unit; the main difference between them is the feeding point of fresh CO2. In the reforming unit, CO2 reforming and steam reforming of methane are combined to produce syngas of flexible composition. Meanwhile, CO2 hydrogenation is conducted via the reverse water gas shift on Fe-based catalysts in the F-T synthesis unit to produce hydrocarbons. After F-T synthesis, the unreacted syngas is recycled to the F-T synthesis and reforming units to enhance process efficiency. The simulation results showed that the carbon efficiencies of both CUGP options were successfully improved and total CO2 emissions were significantly reduced compared with conventional GTL processes. Process efficiency was sensitive to the recycle ratio, and more recycle appeared beneficial for improving process efficiency and reducing CO2 emissions. However, process efficiency was rather insensitive to the split ratio (recycle to reforming unit/total recycle), and the optimum split ratio was determined to be zero.
Internode data communications in a parallel computer
Archer, Charles J.; Blocksome, Michael A.; Miller, Douglas R.; Parker, Jeffrey J.; Ratterman, Joseph D.; Smith, Brian E.
2013-09-03
Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.
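The buffering scheme described in the abstract can be sketched in software. The class and method names below are illustrative only; the patent describes a hardware messaging unit, and this Python analogue merely shows the lifecycle of a message that arrives before its target process is initialized:

```python
class MessagingUnit:
    """Sketch of the described scheme: at boot, pre-allocate one message
    buffer per process expected on this compute node; hold early messages
    until the process initializes and copies them into main memory."""

    def __init__(self, expected_ranks):
        # Predefined number of message buffers, one per expected process.
        self.buffers = {rank: [] for rank in expected_ranks}
        self.initialized = set()

    def receive(self, rank, message):
        # A message may arrive before the target process exists;
        # it is stored in that process's pre-allocated buffer.
        self.buffers[rank].append(message)

    def init_process(self, rank):
        # On initialization, the process establishes its own buffer in
        # main memory and drains the messaging unit's copy into it.
        self.initialized.add(rank)
        main_memory_buffer = list(self.buffers[rank])
        self.buffers[rank].clear()
        return main_memory_buffer
```

The key property, matching the claim, is that no message addressed to a not-yet-initialized process is lost: it waits in the messaging unit's buffer until `init_process` copies it out.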
Internode data communications in a parallel computer
Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E
2014-02-11
Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.
2017-08-01
The Performance Improvement of the Lagrangian Particle Dispersion Model (LPDM) Using Graphics Processing Unit (GPU) Computing, by Leelinda P Dawson. Approved for public release; distribution unlimited. CUDA provides access to the GPU for general-purpose processing and is designed to work easily with multiple programming languages, including Fortran.
Process Development Unit. NREL's Thermal and Catalytic Process Development Unit can process 1/2 ton of biomass per day into fuels and chemicals.
Improvement for enhancing effectiveness of universal power system (UPS) continuous testing process
NASA Astrophysics Data System (ADS)
Sriratana, Lerdlekha
2018-01-01
This experiment aims to enhance the effectiveness of the Universal Power System (UPS) continuous testing process of the Electrical and Electronic Institute by applying work scheduling and time study methods. Initially, the standard time of the testing process had not been established, which resulted in inaccurate testing targets, and time wasting was also observed. To monitor and reduce wasted time and so improve the efficiency of the testing process, a Yamazumi chart and job scheduling theory (the North West Corner Rule) were applied to develop a new work process. After the improvements, the overall efficiency of the process increased from 52.8% to 65.6%, an improvement of roughly 12.7 percentage points. Moreover, wasted time was reduced from 828.3 minutes to 653.6 minutes, or 21%, while the number of units tested per batch increased from 3 to 4. The number of units tested per month would therefore increase from 12 to 20, which would also contribute to a 72% increase in the net income of the UPS testing process.
Testing a model of componential processing of multi-symbol numbers-evidence from measurement units.
Huber, Stefan; Bahnmueller, Julia; Klein, Elise; Moeller, Korbinian
2015-10-01
Research on numerical cognition has addressed the processing of nonsymbolic quantities and symbolic digits extensively. However, magnitude processing of measurement units is still a neglected topic in numerical cognition research. Hence, we investigated the processing of measurement units to evaluate whether typical effects of multi-digit number processing such as the compatibility effect, the string length congruity effect, and the distance effect are also present for measurement units. In three experiments, participants had to single out the larger one of two physical quantities (e.g., lengths). In Experiment 1, the compatibility of number and measurement unit (compatible: 3 mm_6 cm with 3 < 6 and mm < cm; incompatible: 3 cm_6 mm with 3 < 6 but cm > mm) as well as string length congruity (congruent: 1 m_2 km with m < km and 2 < 3 characters; incongruent: 2 mm_1 m with mm < m, but 3 > 2 characters) were manipulated. We observed reliable compatibility effects with prolonged reaction times (RT) for incompatible trials. Moreover, a string length congruity effect was present in RT with longer RT for incongruent trials. Experiments 2 and 3 served as control experiments showing that compatibility effects persist when controlling for holistic distance and that a distance effect for measurement units exists. Our findings indicate that numbers and measurement units are processed in a componential manner and thus highlight that processing characteristics of multi-digit numbers generalize to measurement units. Thereby, our data lend further support to the recently proposed generalized model of componential multi-symbol number processing.
Jurado, Marisa; Algora, Manuel; Garcia-Sanchez, Félix; Vico, Santiago; Rodriguez, Eva; Perez, Sonia; Barbolla, Luz
2012-01-01
Background The Community Transfusion Centre in Madrid currently processes whole blood using a conventional procedure (Compomat, Fresenius) followed by automated processing of buffy coats with the OrbiSac system (CaridianBCT). The Atreus 3C system (CaridianBCT) automates the production of red blood cells, plasma and an interim platelet unit from a whole blood unit; interim platelet units are pooled to produce a transfusable platelet unit. In this study the Atreus 3C system was evaluated and compared to the routine method with regard to product quality and operational value. Materials and methods Over a 5-week period 810 whole blood units were processed using the Atreus 3C system. The attributes of the automated process were compared to those of the routine method by assessing productivity, space, equipment and staffing requirements. The data obtained were evaluated in order to estimate the impact of implementing the Atreus 3C system in the routine setting of the blood centre. The yield and in vitro quality of the final blood components processed with the two systems were evaluated and compared. Results The Atreus 3C system enabled higher throughput while requiring less space and employee time by decreasing the amount of equipment and the processing time per unit of whole blood processed. Whole blood units processed on the Atreus 3C system gave a higher platelet yield, a similar amount of red blood cells and a smaller volume of plasma. Discussion These results support the conclusion that the Atreus 3C system produces blood components meeting quality requirements while providing high operational efficiency. Implementation of the Atreus 3C system could result in a large organisational improvement. PMID:22044958
NASA Technical Reports Server (NTRS)
Choi, Michael K.
2017-01-01
A thermal design concept of using propylene loop heat pipes to minimize survival heater power for NASA's Evolutionary Xenon Thruster power processing units is presented. It reduces the survival heater power from 183 W to 35 W per power processing unit. The reduction is 81%.
40 CFR 61.348 - Standards: Treatment processes.
Code of Federal Regulations, 2013 CFR
2013-07-01
... enhanced biodegradation unit shall not be included in the calculation of the total annual benzene quantity, if the enhanced biodegradation unit is the first exempt unit in which the waste is managed or treated. A unit shall be considered enhanced biodegradation if it is a suspended-growth process that...
40 CFR 61.348 - Standards: Treatment processes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... enhanced biodegradation unit shall not be included in the calculation of the total annual benzene quantity, if the enhanced biodegradation unit is the first exempt unit in which the waste is managed or treated. A unit shall be considered enhanced biodegradation if it is a suspended-growth process that...
40 CFR 61.348 - Standards: Treatment processes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... enhanced biodegradation unit shall not be included in the calculation of the total annual benzene quantity, if the enhanced biodegradation unit is the first exempt unit in which the waste is managed or treated. A unit shall be considered enhanced biodegradation if it is a suspended-growth process that...
40 CFR 61.348 - Standards: Treatment processes.
Code of Federal Regulations, 2014 CFR
2014-07-01
... enhanced biodegradation unit shall not be included in the calculation of the total annual benzene quantity, if the enhanced biodegradation unit is the first exempt unit in which the waste is managed or treated. A unit shall be considered enhanced biodegradation if it is a suspended-growth process that...
Natural Gas Processing Plants in the United States: 2010 Update
2011-01-01
This special report presents an analysis of natural gas processing plants in the United States as of 2009 and highlights characteristics of this segment of the industry. The purpose of the paper is to examine the role of natural gas processing plants in the natural gas supply chain and to provide an overview and summary of processing plant characteristics in the United States, such as locations, capacities, and operations.
NASA Technical Reports Server (NTRS)
Pan, Jing; Levitt, Karl N.; Cohen, Gerald C.
1991-01-01
Discussed here is work to formally specify and verify a floating point coprocessor based on the MC68881. The HOL verification system developed at Cambridge University was used. The coprocessor consists of two independent units: the bus interface unit, used to communicate with the CPU, and the arithmetic processing unit, used to perform the actual calculation. Reasoning about the interaction and synchronization among processes using higher order logic is demonstrated.
40 CFR 63.1360 - Applicability.
Code of Federal Regulations, 2014 CFR
2014-07-01
... process unit. If the greatest input to and/or output from a shared storage vessel is the same for two or... not have an intervening storage vessel. If two or more PAI process units have the same input to or... process unit that sends the most material to or receives the most material from the storage vessel. If two...
40 CFR Appendix B to Part 63 - Sources Defined for Early Reduction Provisions
Code of Federal Regulations, 2010 CFR
2010-07-01
.... All valves in gas or light liquid service within a process unit b. All pumps in light liquid service within a process unit c. All connectors in gas or light liquid service within a process unit d. Each...-ended valve or line i. Each sampling connection system j. Each instrumentation system k. Each pump...
78 FR 21862 - Revision to United States Marshals Service Fees for Services
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-12
... the United States Marshals Service for service of process in federal court proceedings. DATES: Written... 28 CFR 0.114(a) as follows: For process forwarded for service from one U.S Marshals Service office or... process, the United States Marshals Service is proposing to charge $65 per hour (or portion thereof) for...
NASA Astrophysics Data System (ADS)
Santi, S. S.; Renanto; Altway, A.
2018-01-01
The energy use system in a production process, in this case heat exchanger networks (HENs), is one element that affects the smoothness and sustainability of the industry itself. Optimizing heat exchanger networks built from process streams can have a major effect on the economic value of an industry as a whole, so solving design problems with heat integration is an important requirement. In a plant, heat integration can be carried out internally within a unit or in combination between process units; however, determining a suitable heat integration technique conventionally requires long calculations and considerable time. In this paper, we propose an alternative procedure for determining the heat integration technique by investigating 6 hypothetical units using a Pinch Analysis approach, with the energy target and the total annual cost target as objective functions. The six hypothetical units, A through F, differ in the location of their process streams relative to the pinch temperature. The result is a potential heat integration (ΔH') formula that trims the conventional procedure from 7 steps to just 3. The preferred heat integration technique is then determined by calculating the potential heat integration (ΔH') between the hypothetical process units. The calculations were implemented in the MATLAB programming language.
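The targeting step that underlies this kind of pinch study can be illustrated with the standard problem table algorithm for minimum utility targets. This is a generic textbook sketch in Python rather than the authors' MATLAB code, and it does not reproduce their ΔH' formula; the four-stream example in the test is a classic benchmark, not data from the paper:

```python
def problem_table(streams, dt_min=10.0):
    """Problem table algorithm for minimum hot/cold utility targets.
    streams: list of (t_supply, t_target, cp); hot if t_supply > t_target.
    Returns (q_hot_min, q_cold_min) in the same energy units as cp * T."""
    shifted = []
    for ts, tt, cp in streams:
        if ts > tt:   # hot stream: shift temperatures down by dt_min/2
            shifted.append((ts - dt_min / 2, tt - dt_min / 2, cp, 'hot'))
        else:         # cold stream: shift temperatures up by dt_min/2
            shifted.append((ts + dt_min / 2, tt + dt_min / 2, cp, 'cold'))
    # Temperature interval boundaries, hottest first.
    bounds = sorted({t for s in shifted for t in s[:2]}, reverse=True)
    cascade = [0.0]
    for hi, lo in zip(bounds, bounds[1:]):
        net_cp = 0.0
        for ts, tt, cp, kind in shifted:
            top, bot = max(ts, tt), min(ts, tt)
            if top >= hi and bot <= lo:   # stream spans this interval
                net_cp += cp if kind == 'hot' else -cp
        cascade.append(cascade[-1] + net_cp * (hi - lo))
    q_hot_min = max(0.0, -min(cascade))     # minimum hot utility
    q_cold_min = cascade[-1] + q_hot_min    # minimum cold utility
    return q_hot_min, q_cold_min
```

Running this on the classic four-stream example with ΔTmin = 10 reproduces the well-known targets of 7.5 (hot utility) and 10 (cold utility); the most negative point of the cascade locates the pinch, which is exactly the quantity a stream's position "relative to the pinch" refers to in the abstract.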
Judicial Process, Grade Eight. Resource Unit (Unit V).
ERIC Educational Resources Information Center
Minnesota Univ., Minneapolis. Project Social Studies Curriculum Center.
This resource unit, developed by the University of Minnesota's Project Social Studies, introduces eighth graders to the judicial process. The unit was designed with two major purposes in mind. First, it helps pupils understand judicial decision-making, and second, it provides for the study of the rights guaranteed by the federal Constitution. Both…
Macizo, Pedro; Herrera, Amparo
2010-03-01
This study explored the processing of two-digit number words by examining the unit-decade compatibility effect in Spanish. Participants were required to choose the larger of two two-digit number words presented in verbal notation. In compatible trials the decade and unit comparisons led to the same response (e.g., 53-68), while in incompatible trials the decade and unit comparisons led to different responses (e.g., 59-74). Participants were slower on compatible trials than on incompatible trials. In Experiments 2 and 3, we evaluated whether this reverse compatibility effect in Spanish was due solely to a pure left-to-right encoding, which favours decade processing in this language (decade-unit order). When participants processed two-digit number words presented in reverse form (in the unit-decade order), the same reverse compatibility effect was found. This pattern of results suggests that participants have learnt a language-dependent process for analysing written numbers which is used irrespective of the specific arrangement of units and decades in the comparison task. © 2010 APA, all rights reserved.
Solar thermochemical processing system and method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wegeng, Robert S.; Humble, Paul H.; Krishnan, Shankar
A solar thermochemical processing system is disclosed. The system includes a first unit operation for receiving concentrated solar energy. Heat from the solar energy is used to drive the first unit operation. The first unit operation also receives a first set of reactants and produces a first set of products. A second unit operation receives the first set of products from the first unit operation and produces a second set of products. A third unit operation receives heat from the second unit operation to produce a portion of the first set of reactants.
On Tour... Primary Hardwood Processing, Products and Recycling Unit
Philip A. Araman; Daniel L. Schmoldt
1995-01-01
Housed within the Department of Wood Science and Forest Products at Virginia Polytechnic Institute is a three-person USDA Forest Service research work unit (with one vacancy) devoted to hardwood processing and recycling research. Phil Araman is the project leader of this truly unique and productive unit, titled "Primary Hardwood Processing, Products and Recycling." The...
Associative list processing unit
Hemmert, Karl Scott; Underwood, Keith D.
2013-01-29
An associative list processing unit and method comprising employing a plurality of prioritized cell blocks and permitting inserts to occur in a single clock cycle if all of the cell blocks are not full. Also, an associative list processing unit and method comprising employing a plurality of prioritized cell blocks and using a tree of prioritized multiplexers descending from the plurality of cell blocks.
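A software analogue of the insert behaviour can make the idea concrete. This is illustrative only; the patent describes a hardware structure in which the priority selection across cell blocks happens combinationally, so an insert completes within a single clock cycle whenever some block has free space:

```python
class AssociativeList:
    """Software analogue of the described unit: a fixed set of prioritized
    cell blocks; an insert succeeds (conceptually in one 'cycle') as long
    as not every block is full."""

    def __init__(self, num_blocks, block_size):
        self.blocks = [[] for _ in range(num_blocks)]
        self.block_size = block_size

    def insert(self, key, value):
        # Take the highest-priority block with free space, mirroring
        # the tree of prioritized multiplexers selecting among blocks.
        for block in self.blocks:
            if len(block) < self.block_size:
                block.append((key, value))
                return True
        return False   # every block full: the insert is rejected

    def lookup(self, key):
        # Associative match: search the blocks in priority order.
        for block in self.blocks:
            for k, v in block:
                if k == key:
                    return v
        return None
```

In hardware the per-block full/not-full flags feed the multiplexer tree directly, which is what makes the single-cycle guarantee possible; the sequential loop here only models the priority order.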
Technical options for processing additional light tight oil volumes within the United States
2015-01-01
This report examines technical options for processing additional LTO volumes within the United States. Domestic processing of additional LTO would enable an increase in petroleum product exports from the United States, already the world’s largest net exporter of petroleum products. Unlike crude oil, products are not subject to export limitations or licensing requirements. While this is one possible approach to absorbing higher domestic LTO production in the absence of a relaxation of current limitations on crude exports, domestic LTO would have to be priced at a level required to encourage additional LTO runs at existing refinery units, debottlenecking, or possible additions of processing capacity.
Leroy, Sabine; Giammarinaro, Philippe; Chacornac, Jean-Paul; Lebert, Isabelle; Talon, Régine
2010-04-01
The staphylococcal community of the environments of nine French small-scale processing units and their naturally fermented meat products was identified by analyzing 676 isolates. Fifteen species were accurately identified using validated molecular methods. The three prevalent species were Staphylococcus equorum (58.4%), Staphylococcus saprophyticus (15.7%) and Staphylococcus xylosus (9.3%). S. equorum was isolated in all the processing units in similar proportion in meat and environmental samples. S. saprophyticus was also isolated in all the processing units with a higher percentage in environmental samples. S. xylosus was present sporadically in the processing units and its prevalence was higher in meat samples. The genetic diversity of the strains within the three species isolated from one processing unit was studied by PFGE and revealed a high diversity for S. equorum and S. saprophyticus both in the environment and the meat isolates. The genetic diversity remained high through the manufacturing steps. A small percentage of the strains of the two species share the two ecological niches. These results highlight that some strains, probably introduced by the meat, will persist in the manufacturing environment, while other strains are more adapted to the meat products.
40 CFR 63.7499 - What are the subcategories of boilers and process heaters?
Code of Federal Regulations, 2013 CFR
2013-07-01
... process heaters, as defined in § 63.7575 are: (a) Pulverized coal/solid fossil fuel units. (b) Stokers designed to burn coal/solid fossil fuel. (c) Fluidized bed units designed to burn coal/solid fossil fuel... liquid fuel. (r) Units designed to burn coal/solid fossil fuel. (s) Fluidized bed units with an...
40 CFR 63.7499 - What are the subcategories of boilers and process heaters?
Code of Federal Regulations, 2014 CFR
2014-07-01
... process heaters, as defined in § 63.7575 are: (a) Pulverized coal/solid fossil fuel units. (b) Stokers designed to burn coal/solid fossil fuel. (c) Fluidized bed units designed to burn coal/solid fossil fuel... liquid fuel. (r) Units designed to burn coal/solid fossil fuel. (s) Fluidized bed units with an...
Legislative Process, Grade Eight. Resource Unit (Unit IV).
ERIC Educational Resources Information Center
Minnesota Univ., Minneapolis. Project Social Studies Curriculum Center.
This resource unit, developed by the University of Minnesota's Project Social Studies, introduces eighth graders to the legislative process. The unit uses case studies such as the Civil Rights Acts of 1960 and 1964 and attempts to change the Rules Committee in 1961. It also uses much data on background of congressmen and on distribution of…
The Executive Process, Grade Eight. Resource Unit (Unit III).
ERIC Educational Resources Information Center
Minnesota Univ., Minneapolis. Project Social Studies Curriculum Center.
This resource unit, developed by the University of Minnesota's Project Social Studies, introduces eighth graders to the executive process. The unit uses case studies of presidential decision making such as the decision to drop the atomic bomb on Hiroshima, the Cuba Bay of Pigs and quarantine decisions, and the Little Rock decision. A case study of…
Method of up-front load balancing for local memory parallel processors
NASA Technical Reports Server (NTRS)
Baffes, Paul Thomas (Inventor)
1990-01-01
In a parallel processing computer system with multiple processing units and shared memory, a method is disclosed for uniformly balancing the aggregate computational load in, and utilizing minimal memory by, a network having identical computations to be executed at each connection therein. Read-only and read-write memory are subdivided into a plurality of process sets, which function like artificial processing units. Said plurality of process sets is iteratively merged and reduced to the number of processing units without exceeding the balanced load. Said merger is based upon the value of a partition threshold, which is a measure of the memory utilization. The turnaround time and memory savings of the instant method are functions of the number of processing units available and the number of partitions into which the memory is subdivided. Typical results of the preferred embodiment yielded memory savings of from sixty to seventy-five percent.
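The merge-and-reduce idea described in this abstract can be sketched as a greedy grouping heuristic. This is an illustrative reconstruction only, not the patented method: the function names, the descending-sort heuristic, and the way the partition threshold is applied are all assumptions.

```python
# Hypothetical sketch of up-front load balancing: memory is first subdivided
# into many small "process sets", which are then greedily merged down to the
# number of physical processing units, with a partition threshold capping the
# load assigned to any one unit when possible.

def balance(process_sets, n_units, threshold):
    """Merge process-set loads into n_units groups, keeping each group's
    total at or below `threshold` when a qualifying group exists."""
    groups = [[] for _ in range(n_units)]
    loads = [0] * n_units
    # Place large sets first (greedy heuristic).
    for load in sorted(process_sets, reverse=True):
        # Prefer the least-loaded group that stays under the threshold;
        # fall back to the globally least-loaded group otherwise.
        candidates = [i for i in range(n_units) if loads[i] + load <= threshold]
        i = (min(candidates, key=loads.__getitem__) if candidates
             else min(range(n_units), key=loads.__getitem__))
        groups[i].append(load)
        loads[i] += load
    return groups, loads

# Eight process sets merged onto three processing units.
groups, loads = balance([8, 7, 6, 5, 4, 3, 2, 1], n_units=3, threshold=13)
```

With these inputs the heuristic yields three groups whose totals stay within the threshold, illustrating how the number of initial partitions (here eight) trades off against balance quality.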
Method and Apparatus for Improved Spatial Light Modulation
NASA Technical Reports Server (NTRS)
Soutar, Colin (Inventor); Juday, Richard D. (Inventor)
2000-01-01
A method and apparatus for modulating a light beam in an optical processing system is described. Preferably, an electrically-controlled polarizer unit and/or an analyzer unit are utilized in combination with a spatial light modulator and a controller. Preferably, the spatial light modulator comprises a pixelated birefringent medium such as a liquid crystal video display. The combination of the electrically controlled polarizer unit and analyzer unit make it simple and fast to reconfigure the modulation described by the Jones matrix of the spatial light modulator. A particular optical processing objective is provided to the controller. The controller performs calculations and supplies control signals to the polarizer unit, the analyzer unit, and the spatial light modulator in order to obtain the optical processing objective.
Method and Apparatus for Improved Spatial Light Modulation
NASA Technical Reports Server (NTRS)
Soutar, Colin (Inventor); Juday, Richard D. (Inventor)
1999-01-01
A method and apparatus for modulating a light beam in an optical processing system is described. Preferably, an electrically-controlled polarizer unit and/or an analyzer unit are utilized in combination with a spatial light modulator and a controller. Preferably, the spatial light modulator comprises a pixelated birefringent medium such as a liquid crystal video display. The combination of the electrically controlled polarizer unit and analyzer unit make it simple and fast to reconfigure the modulation described by the Jones matrix of the spatial light modulator. A particular optical processing objective is provided to the controller. The controller performs calculations and supplies control signals to the polarizer unit, the analyzer unit, and the spatial light modulator in order to obtain the optical processing objective.
NASA Astrophysics Data System (ADS)
Chi, Xiao-Chun; Wang, Ying-Hui; Gao, Yu; Sui, Ning; Zhang, Li-Quan; Wang, Wen-Yan; Lu, Ran; Ji, Wen-Yu; Yang, Yan-Qiang; Zhang, Han-Zhuang
2018-04-01
Three push-pull chromophores comprising a triphenylamine (TPA) electron-donating moiety and functionalized β-diketone electron-acceptor units are studied by various spectroscopic techniques. Time-correlated single-photon counting data show that increasing the number of electron-acceptor units accelerates the photoluminescence relaxation rate of the compounds. Transient spectral data show that intramolecular charge transfer (ICT) takes place from the TPA unit to the β-diketone units after photo-excitation. Increasing the number of electron-acceptor units prolongs the generation of the ICT state, and accelerates both the excited-molecule reorganization process and the relaxation of the ICT state.
Advancing perinatal patient safety through application of safety science principles using health IT.
Webb, Jennifer; Sorensen, Asta; Sommerness, Samantha; Lasater, Beth; Mistry, Kamila; Kahwati, Leila
2017-12-19
The use of health information technology (IT) has been shown to promote patient safety in Labor and Delivery (L&D) units. The use of health IT to apply safety science principles (e.g., standardization) to L&D unit processes may further advance perinatal safety. Semi-structured interviews were conducted with L&D units participating in the Agency for Healthcare Research and Quality's (AHRQ's) Safety Program for Perinatal Care (SPPC) to assess units' experience with program implementation. Analysis of interview transcripts was used to characterize the process and experience of using health IT for applying safety science principles to L&D unit processes. Forty-six L&D units from 10 states completed participation in SPPC program implementation; thirty-two (70%) reported the use of health IT as an enabling strategy for their local implementation. Health IT was used to improve standardization of processes, use of independent checks, and to facilitate learning from defects. L&D units standardized care processes through use of electronic health record (EHR)-based order sets and use of smart pumps and other technology to improve medication safety. Units also standardized EHR documentation, particularly related to electronic fetal monitoring (EFM) and shoulder dystocia. Cognitive aids and tools were integrated into EHR and care workflows to create independent checks such as checklists, risk assessments, and communication handoff tools. Units also used data from EHRs to monitor processes of care to learn from defects. Units experienced several challenges incorporating health IT, including obtaining organization approval, working with their busy IT departments, and retrieving standardized data from health IT systems. Use of health IT played an integral part in the planning and implementation of SPPC for participating L&D units. 
Use of health IT is an encouraging approach for incorporating safety science principles into care to improve perinatal safety and should be incorporated into materials to facilitate the implementation of perinatal safety initiatives.
Automated image alignment for 2D gel electrophoresis in a high-throughput proteomics pipeline.
Dowsey, Andrew W; Dunn, Michael J; Yang, Guang-Zhong
2008-04-01
The quest for high-throughput proteomics has revealed a number of challenges in recent years. Whilst substantial improvements in automated protein separation with liquid chromatography and mass spectrometry (LC/MS), aka 'shotgun' proteomics, have been achieved, large-scale open initiatives such as the Human Proteome Organization (HUPO) Brain Proteome Project have shown that maximal proteome coverage is only possible when LC/MS is complemented by 2D gel electrophoresis (2-DE) studies. Moreover, both separation methods require automated alignment and differential analysis to relieve the bioinformatics bottleneck and so make high-throughput protein biomarker discovery a reality. The purpose of this article is to describe a fully automatic image alignment framework for the integration of 2-DE into a high-throughput differential expression proteomics pipeline. The proposed method is based on robust automated image normalization (RAIN) to circumvent the drawbacks of traditional approaches. These use symbolic representation at the very early stages of the analysis, which introduces persistent errors due to inaccuracies in modelling and alignment. In RAIN, a third-order volume-invariant B-spline model is incorporated into a multi-resolution schema to correct for geometric and expression inhomogeneity at multiple scales. The normalized images can then be compared directly in the image domain for quantitative differential analysis. Through evaluation against an existing state-of-the-art method on real and synthetically warped 2D gels, the proposed analysis framework demonstrates substantial improvements in matching accuracy and differential sensitivity. High-throughput analysis is established through an accelerated GPGPU (general purpose computation on graphics cards) implementation. Supplementary material, software and images used in the validation are available at http://www.proteomegrid.org/rain/.
Li, Kangkang; Yu, Hai; Feron, Paul; Tade, Moses; Wardhaugh, Leigh
2015-08-18
Using a rate-based model, we assessed the technical feasibility and energy performance of an advanced aqueous-ammonia-based postcombustion capture process integrated with a coal-fired power station. The capture process consists of three identical process trains in parallel, each containing a CO2 capture unit, an NH3 recycling unit, a water separation unit, and a CO2 compressor. A sensitivity study of important parameters, such as NH3 concentration, lean CO2 loading, and stripper pressure, was performed to minimize the energy consumption involved in the CO2 capture process. Process modifications of the rich-split process and the interheating process were investigated to further reduce the solvent regeneration energy. The integrated capture system was then evaluated in terms of the mass balance and the energy consumption of each unit. The results show that our advanced ammonia process is technically feasible and energy-competitive, with a low net power-plant efficiency penalty of 7.7%.
An Investigation of the Role of Grapheme Units in Word Recognition
ERIC Educational Resources Information Center
Lupker, Stephen J.; Acha, Joana; Davis, Colin J.; Perea, Manuel
2012-01-01
In most current models of word recognition, the word recognition process is assumed to be driven by the activation of letter units (i.e., that letters are the perceptual units in reading). An alternative possibility is that the word recognition process is driven by the activation of grapheme units, that is, that graphemes, rather than letters, are…
High Input Voltage, Silicon Carbide Power Processing Unit Performance Demonstration
NASA Technical Reports Server (NTRS)
Bozak, Karin E.; Pinero, Luis R.; Scheidegger, Robert J.; Aulisio, Michael V.; Gonzalez, Marcelo C.; Birchenough, Arthur G.
2015-01-01
A silicon carbide brassboard power processing unit has been developed by the NASA Glenn Research Center in Cleveland, Ohio. The power processing unit operates from two sources: a nominal 300 Volt high voltage input bus and a nominal 28 Volt low voltage input bus. The design of the power processing unit includes four low voltage, low power auxiliary supplies, and two parallel 7.5 kilowatt (kW) discharge power supplies that are capable of providing up to 15 kilowatts of total power at 300 to 500 Volts (V) to the thruster. Additionally, the unit contains a housekeeping supply, high voltage input filter, low voltage input filter, and master control board, such that the complete brassboard unit is capable of operating a 12.5 kilowatt Hall effect thruster. The performance of the unit was characterized under both ambient and thermal vacuum test conditions, and the results demonstrate exceptional performance with full power efficiencies exceeding 97%. The unit was also tested with a 12.5 kW Hall effect thruster to verify compatibility and output filter specifications. With space-qualified silicon carbide or similar high voltage, high efficiency power devices, this would provide a design solution to address the need for high power electric propulsion systems.
Audits of oncology units - an effective and pragmatic approach.
Abratt, Raymond Pierre; Eedes, David; Bailey, Belinda; Salmon, Chris; Govender, Yogi; Oelofse, Ivan; Burger, Henriette
2017-05-24
Audits of oncology units are part of all quality-assurance programmes. However, they do not always come across as pragmatic and helpful to staff. To report on the results of an online survey on the usefulness and impact of an audit process for oncology units. Staff in oncology units who were part of the audit process completed the audit self-assessment form for the unit. This was followed by a visit to each unit by an assessor, and then subsequent personal contact, usually via telephone. The audit self-assessment document listed quality-assurance measures or items in the physical and functional areas of the oncology unit. There were a total of 153 items included in the audit. The online survey took place in October 2016. The invitation to participate was sent to 59 oncology units at which staff members had completed the audit process. The online survey was completed by 54 (41%) of the 132 potential respondents. The online survey found that the audit was very or extremely useful in maintaining personal professional standards in 89% of responses. The audit process and feedback was rated as very or extremely satisfactory in 80% and 81%, respectively. The self-assessment audit document was scored by survey respondents as very or extremely practical in 63% of responses. The feedback on the audit was that it was very or extremely helpful in formulating improvement plans in oncology units in 82% of responses. Major and minor changes that occurred as a result of the audit process were reported as 8% and 88%, respectively. The survey findings show that the audit process and its self- assessment document meet the aims of being helpful and pragmatic.
Near-memory data reorganization engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gokhale, Maya; Lloyd, G. Scott
A memory subsystem package is provided that has processing logic for data reorganization within the memory subsystem package. The processing logic is adapted to reorganize data stored within the memory subsystem package. In some embodiments, the memory subsystem package includes memory units, a memory interconnect, and a data reorganization engine ("DRE"). The data reorganization engine includes a stream interconnect and DRE units including a control processor and a load-store unit. The control processor is adapted to execute instructions to control a data reorganization. The load-store unit is adapted to process data move commands received from the control processor via the stream interconnect for loading data from a load memory address of a memory unit and storing data to a store memory address of a memory unit.
ON DEVELOPING CLEANER ORGANIC UNIT PROCESSES
Organic waste products, potentially harmful to the human health and the environment, are primarily produced in the synthesis stage of manufacturing processes. Many such synthetic unit processes, such as halogenation, oxidation, alkylation, nitration, and sulfonation are common to...
A Science and Risk-Based Pragmatic Methodology for Blend and Content Uniformity Assessment.
Sayeed-Desta, Naheed; Pazhayattil, Ajay Babu; Collins, Jordan; Doshi, Chetan
2018-04-01
This paper describes a pragmatic approach that can be applied in assessing powder blend and unit dosage uniformity of solid dose products at Process Design, Process Performance Qualification, and Continued/Ongoing Process Verification stages of the Process Validation lifecycle. The statistically based sampling, testing, and assessment plan was developed due to the withdrawal of the FDA draft guidance for industry "Powder Blends and Finished Dosage Units-Stratified In-Process Dosage Unit Sampling and Assessment." This paper compares the proposed Grouped Area Variance Estimate (GAVE) method with an alternate approach outlining the practicality and statistical rationalization using traditional sampling and analytical methods. The approach is designed to fit solid dose processes assuring high statistical confidence in both powder blend uniformity and dosage unit uniformity during all three stages of the lifecycle complying with ASTM standards as recommended by the US FDA.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Wastewater Provisions for Process Units at New Sources 8 Table 8 to Subpart G of Part 63 Protection of... Vessels, Transfer Operations, and Wastewater Pt. 63, Subpt. G, Table 8 Table 8 to Subpart G of Part 63—Organic HAP's Subject to the Wastewater Provisions for Process Units at New Sources Chemical name CAS No...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Wastewater Provisions for Process Units at New Sources 8 Table 8 to Subpart G of Part 63 Protection of... Vessels, Transfer Operations, and Wastewater Pt. 63, Subpt. G, Table 8 Table 8 to Subpart G of Part 63—Organic HAP's Subject to the Wastewater Provisions for Process Units at New Sources Chemical name CAS No...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Wastewater Provisions for Process Units at New Sources 8 Table 8 to Subpart G of Part 63 Protection of... Vessels, Transfer Operations, and Wastewater Pt. 63, Subpt. G, Table 8 Table 8 to Subpart G of Part 63—Organic HAP's Subject to the Wastewater Provisions for Process Units at New Sources Chemical name CAS No...
ERIC Educational Resources Information Center
Coryell, Joellen Elizabeth; Durodoye, Beth A.; Wright, Robin Redmon; Pate, P. Elizabeth; Nguyen, Shelbee
2012-01-01
This report outlines a method for learning about the internationalization processes at institutions of adult and higher education and then provides the analysis of data gathered from the researchers' own institution and from site visits to three additional universities in the United States and the United Kingdom. It was found that campus…
ERIC Educational Resources Information Center
McGill, Monica M.
2010-01-01
Digital games are marketed, mass-produced, and consumed by an increasing number of people and the game industry is only expected to grow. In response, post-secondary institutions in the United Kingdom (UK) and the United States (US) have started to create game degree programs. Though curriculum theorists provide insight into the process of…
Code of Federal Regulations, 2013 CFR
2013-07-01
... section. (1) All emission units within a group must be of the same process type (e.g., primary crushers... emission units from different process types together for the purposes of this section. (2) All emission units within a group must also have the same type of air pollution control device (e.g., wet scrubbers...
Code of Federal Regulations, 2014 CFR
2014-07-01
... section. (1) All emission units within a group must be of the same process type (e.g., primary crushers... emission units from different process types together for the purposes of this section. (2) All emission units within a group must also have the same type of air pollution control device (e.g., wet scrubbers...
Code of Federal Regulations, 2012 CFR
2012-07-01
... section. (1) All emission units within a group must be of the same process type (e.g., primary crushers... emission units from different process types together for the purposes of this section. (2) All emission units within a group must also have the same type of air pollution control device (e.g., wet scrubbers...
Code of Federal Regulations, 2010 CFR
2010-07-01
... section. (1) All emission units within a group must be of the same process type (e.g., primary crushers... emission units from different process types together for the purposes of this section. (2) All emission units within a group must also have the same type of air pollution control device (e.g., wet scrubbers...
Code of Federal Regulations, 2011 CFR
2011-07-01
... section. (1) All emission units within a group must be of the same process type (e.g., primary crushers... emission units from different process types together for the purposes of this section. (2) All emission units within a group must also have the same type of air pollution control device (e.g., wet scrubbers...
15 CFR 971.427 - Processing outside the United States.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false Processing outside the United States... THE ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR COMMERCIAL RECOVERY PERMITS Issuance... outside the United States. If appropriate TCRs will incorporate provisions to implement the decision of...
The Units Ontology: a tool for integrating units of measurement in science
Gkoutos, Georgios V.; Schofield, Paul N.; Hoehndorf, Robert
2012-01-01
Units are basic scientific tools that render meaning to numerical data. Their standardization and formalization cater for the reporting, exchange, processing, reproducibility and integration of quantitative measurements. Ontologies facilitate the integration of data and knowledge, allowing interoperability and semantic information processing between diverse biomedical resources and domains. Here, we present the Units Ontology (UO), an ontology currently used in many scientific resources for the standardized description of units of measurement. PMID:23060432
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Prior to 1978, the Wilsonville Advanced Coal Liquefaction facility material balance surrounded only the thermal liquefaction unit and involved analyses of only the slurry stream and individual gas streams. The distillate solvent yield was determined by difference. Subsequently, several modifications and additional process units were introduced to this single unit system. With the inclusion of the deashing unit in 1978 and the catalytic hydrogenation unit in 1981, the process has evolved into a sophisticated two-stage coal liquefaction process and has the potential for various modes of integration. This report presents an elemental balancing procedure and a simplified presentation format that is sufficiently flexible to meet current and future needs. The development of the elemental balancing technique and the relevant computer programs to handle the calculations have been addressed. This will be useful in modelling individual unit performance as well as determining the impact of each unit on the overall liquefaction system, provided the units are on a steady-state basis. Five different material balance envelopes are defined. Three of these envelopes pertain to the individual units (the thermal liquefaction or TL unit, the Critical Solvent Deashing or CSD unit and the H-Oil Ebullated Bed Hydrotreating or HTR unit). The fourth or single stage material balance envelope combines the TL and CSD units. The fifth envelope is the two-stage configuration combining all three units. 3 references.
Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung
2012-10-08
Speed enhancement of integral imaging based incoherent Fourier hologram capture using a graphic processing unit is reported. Integral imaging based method enables exact hologram capture of real-existing three-dimensional objects under regular incoherent illumination. In our implementation, we apply parallel computation scheme using the graphic processing unit, accelerating the processing speed. Using enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.
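The computational core that makes this capture method amenable to GPU acceleration is a 2-D Fourier transform of the captured elemental-image data. The sketch below is illustrative only: NumPy's CPU FFT stands in for the authors' GPU implementation, and the function and variable names are assumptions.

```python
# Illustrative sketch: a Fourier hologram of an intensity distribution is
# obtained here as a centered 2-D FFT. On a GPU, the same transform is the
# data-parallel kernel that yields the reported speed-up; NumPy stands in
# for that GPU FFT in this CPU sketch.
import numpy as np

def fourier_hologram(elemental_images):
    """Compute a complex Fourier hologram from a 2-D intensity array."""
    # ifftshift/fftshift keep the zero-frequency term centered.
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(elemental_images)))

img = np.random.rand(256, 256)   # stand-in for captured elemental-image data
holo = fourier_hologram(img)
```

Because the FFT is applied independently across rows and columns, the same computation maps directly onto GPU FFT libraries, which is the essence of the reported acceleration.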
Developing and Implementing a Process for the Review of Nonacademic Units.
ERIC Educational Resources Information Center
Brown, Marilyn K.
1989-01-01
A major research university's recently developed process for systematic evaluation of nonacademic units is described, and the steps in its development and implementation are outlined: review of literature on organizational effectiveness; survey of peer institutions; development of guidelines for review; and implementation in several campus units.…
Transforming Care at the Bedside (TCAB): enhancing direct care and value-added care.
Dearmon, Valorie; Roussel, Linda; Buckner, Ellen B; Mulekar, Madhuri; Pomrenke, Becky; Salas, Sheri; Mosley, Aimee; Brown, Stephanie; Brown, Ann
2013-05-01
The purpose of this study was to examine the effectiveness of a Transforming Care at the Bedside initiative from a unit perspective. Improving patient outcomes and nurses' work environments are the goals of Transforming Care at the Bedside. Transforming Care at the Bedside creates programs of change originating at the point of care and directly promoting engagement of nurses to transform work processes and quality of care on medical-surgical units. This descriptive comparative study draws on multiple data sources from two nursing units: a Transforming Care at the Bedside unit where staff tested, adopted and implemented improvement ideas, and a control unit where staff continued traditional practices. Change theory provided the framework for the study. Direct care and value-added care increased on Transforming Care at the Bedside unit compared with the control unit. Transforming Care at the Bedside unit decreased in incidental overtime. Nurses reported that the process challenged old ways of thinking and increased nursing innovations. Hourly rounding, bedside reporting and the use of pain boards were seen as positive innovations. Evidence supported the value-added dimension of the Transforming Care at the Bedside process at the unit level. Nurses recognized the significance of their input into processes of change. Transformational leadership and frontline projects provide a vehicle for innovation through application of human capital. © 2012 Blackwell Publishing Ltd.
2012-12-04
CAPE CANAVERAL, Fla. -- Workers inside the Space Station Processing Facility at NASA's Kennedy Space Center in Florida position the orbital replacement unit for the space station's main bus switching unit as they prepare to pack the unit in a shipping container. The unit, which was processed at Kennedy, will be shipped to Japan at the beginning of the year for the HTV-4 launch, which is currently scheduled for 2013. Photo credit: NASA/Charisse Nahser
2012-12-04
CAPE CANAVERAL, Fla. -- Workers inside the Space Station Processing Facility at NASA's Kennedy Space Center in Florida lift the orbital replacement unit for the space station's main bus switching unit as they prepare to pack the unit in a shipping container. The unit, which was processed at Kennedy, will be shipped to Japan at the beginning of the year for the HTV-4 launch, which is currently scheduled for 2013. Photo credit: NASA/Charisse Nahser
Combination of an electrolytic pretreatment unit with secondary water reclamation processes
NASA Technical Reports Server (NTRS)
Wells, G. W.; Bonura, M. S.
1973-01-01
The design and fabrication of a flight concept prototype electrolytic pretreatment unit (EPU) and of a contractor-furnished air evaporation unit (AEU) are described. The integrated EPU and AEU potable water recovery system is referred to as the Electrovap and is capable of processing the urine and flush water of a six-man crew. Results of a five-day performance verification test of the Electrovap system are presented and plans are included for the extended testing of the Electrovap to produce data applicable to the combination of electrolytic pretreatment with most final potable water recovery systems. Plans are also presented for a program to define the design requirements for combining the electrolytic pretreatment unit with a reverse osmosis final processing unit.
Laser peening with fiber optic delivery
Friedman, Herbert W.; Ault, Earl R.; Scheibner, Karl F.
2004-11-16
A system for processing a workpiece using a laser. The laser produces at least one laser pulse. A laser processing unit is used to process the workpiece using the at least one laser pulse. A fiber optic cable is used for transmitting the at least one laser pulse from the laser to the laser processing unit.
High Input Voltage, Silicon Carbide Power Processing Unit Performance Demonstration
NASA Technical Reports Server (NTRS)
Bozak, Karin E.; Pinero, Luis R.; Scheidegger, Robert J.; Aulisio, Michael V.; Gonzalez, Marcelo C.; Birchenough, Arthur G.
2015-01-01
A silicon carbide brassboard power processing unit has been developed by the NASA Glenn Research Center in Cleveland, Ohio. The power processing unit operates from two sources - a nominal 300-Volt high voltage input bus and a nominal 28-Volt low voltage input bus. The design of the power processing unit includes four low voltage, low power supplies that provide power to the thruster auxiliary supplies, and two parallel 7.5 kilowatt power supplies that are capable of providing up to 15 kilowatts of total power at 300 to 500 Volts to the thruster discharge supply. Additionally, the unit contains a housekeeping supply, high voltage input filter, low voltage input filter, and master control board, such that the complete brassboard unit is capable of operating a 12.5 kilowatt Hall Effect Thruster. The performance of the unit was characterized under both ambient and thermal vacuum test conditions, and the results demonstrate exceptional performance with full power efficiencies exceeding 97%. With a space-qualified silicon carbide or similar high voltage, high efficiency power device, this design could evolve into a flight design for future missions that require high power electric propulsion systems.
NASA Astrophysics Data System (ADS)
Kunstadt, Peter; Eng, P.; Steeves, Colyn; Beaulieu, Daniel; Eng, P.
1993-07-01
The number of products being radiation processed worldwide is constantly increasing and today includes such diverse items as medical disposables, fruits and vegetables, spices, meats, seafoods and waste products. This range of products to be processed has resulted in a wide range of irradiator designs and capital and operating cost requirements. This paper discusses the economics of low dose food irradiation applications and the effects of various parameters on unit processing costs. It provides a model for calculating specific unit processing costs by correlating known capital costs with annual operating costs and annual throughputs. It is intended to provide the reader with a general knowledge of how unit processing costs are derived.
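A model of the kind this abstract describes, correlating capital cost with annual operating cost and annual throughput, can be sketched as follows. This is a minimal illustration, not the paper's model: the capital-recovery formulation, interest rate, lifetime, and the example figures are all assumptions.

```python
# Minimal sketch of a unit-processing-cost model: annualize the capital
# cost with a capital recovery factor, add annual operating costs, and
# divide by annual throughput. All parameter values are illustrative.

def capital_recovery_factor(rate, years):
    """Fraction of the capital cost charged per year over the plant life."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def unit_processing_cost(capital, operating_per_year, throughput_per_year,
                         rate=0.08, years=15):
    """Cost per unit of product processed (e.g. $/tonne)."""
    annualized_capital = capital * capital_recovery_factor(rate, years)
    return (annualized_capital + operating_per_year) / throughput_per_year

# Example: a $5M irradiator, $600k/yr operating cost, 40,000 tonnes/yr.
cost = unit_processing_cost(5_000_000, 600_000, 40_000)
```

The structure makes the trade-off in the abstract explicit: at fixed capital and operating cost, the unit processing cost falls directly with annual throughput.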
Cost unit accounting based on a clinical pathway: a practical tool for DRG implementation.
Feyrer, R; Rösch, J; Weyand, M; Kunzmann, U
2005-10-01
Setting up a reliable cost unit accounting system in a hospital is a fundamental necessity for economic survival, given the current general conditions in the healthcare system. Definition of a suitable cost unit is a crucial factor for success. We present here the development and use of a clinical pathway as a cost unit as an alternative to the DRG. Elective coronary artery bypass grafting was selected as an example. Development of the clinical pathway was conducted according to a modular concept that mirrored all the treatment processes across various levels and modules. Using service records and analyses the process algorithms of the clinical pathway were developed and visualized with Corel™ iGrafx Process 2003. A detailed process cost record constituted the basis of the pathway costing, in which financial evaluation of the treatment processes was performed. The result of this study was a structured clinical pathway for coronary artery bypass grafting together with a cost calculation in the form of cost unit accounting. The use of a clinical pathway as a cost unit offers considerable advantages compared to the DRG or clinical case. The variance in the diagnoses and procedures within a pathway is minimal, so the consumption of resources is homogeneous. This leads to a considerable improvement in the value of cost unit accounting as a strategic control instrument in hospitals.
Hydrologic landscape units and adaptive management of intermountain wetlands
Custer, Stephen G.; Sojda, R.S.
2006-01-01
Adaptive management is often proposed to assist in the management of national wildlife refuges and allows the exploration of alternatives as well as the addition of new knowledge as it becomes available. The hydrological landscape unit can be a good foundation for such efforts. Red Rock Lakes National Wildlife Refuge (NWR) is in an intermountain basin dominated by vertical tectonics in the Northern Rocky Mountains. A geographic information system was used to define the boundaries for the hydrologic landscape units there. Units identified include alluvial fan, interfan, stream alluvium, and basin flat. Management alternatives can be informed by examination of processes that occur on the units. For example, an ancient alluvial fan unit related to Red Rock Creek appears to be isolated from stream flow today, with recharge dominated by precipitation and bedrock springs, while other alluvial fan units in the area have shallow ground water recharged from mountain streams and precipitation. The scale of hydrologic processes in interfan units differs from that in alluvial fan hydrologic landscape units. These differences are important when the refuge is evaluating habitat management activities. Hydrologic landscape units provide scientific underpinnings for the refuge's comprehensive planning process. New geologic, hydrologic, and biologic knowledge can be integrated into the hydrologic landscape unit definition and improve adaptive management.
CFD Extraction Tool for TecPlot From DPLR Solutions
NASA Technical Reports Server (NTRS)
Norman, David
2013-01-01
This invention is a TecPlot macro, written in the TecPlot programming language, that processes data from DPLR solutions in TecPlot format. DPLR (Data-Parallel Line Relaxation) is a NASA computational fluid dynamics (CFD) code, and TecPlot is a commercial CFD post-processing tool. The TecPlot data is in SI units (the same as DPLR output); the invention converts the SI units into British units. The macro modifies the TecPlot data with unit conversions and adds some extra calculations. After unit conversions, the macro cuts a slice and adds vectors to the current plot for the output format. The macro can also process surface solutions. Existing solutions use manual conversion and superposition. The conversion is complicated because it must be applied to a range of inter-related scalars and vectors to describe a 2D or 3D flow field. The macro processes the CFD solution to create superposition/comparison of scalars and vectors. The existing manual solution is cumbersome, open to errors, slow, and cannot be inserted into an automated process. This invention is quick and easy to use, and can be inserted into an automated data-processing algorithm.
Ballen, Karen K.; Logan, Brent R.; Laughlin, Mary J.; He, Wensheng; Ambruso, Daniel R.; Armitage, Susan E.; Beddard, Rachel L.; Bhatla, Deepika; Hwang, William Y.K.; Kiss, Joseph E.; Koegler, Gesine; Kurtzberg, Joanne; Nagler, Arnon; Oh, David; Petz, Lawrence D.; Price, Thomas H.; Quinones, Ralph R.; Ratanatharathorn, Voravit; Rizzo, J. Douglas; Sazama, Kathleen; Scaradavou, Andromachi; Schuster, Michael W.; Sender, Leonard S.; Shpall, Elizabeth J.; Spellman, Stephen R.; Sutton, Millicent; Weitekamp, Lee Ann; Wingard, John R.; Eapen, Mary
2015-01-01
Variations in cord blood manufacturing and administration are common, and the optimal practice is not known. We compared processing and banking practices at 16 public cord blood banks (CBB) in the United States, and assessed transplant outcomes for 530 single umbilical cord blood (UCB) myeloablative transplantations for hematologic malignancies facilitated by these banks. UCB banking practices were separated into three mutually exclusive groups based on whether processing was automated or manual and whether units were plasma- and red-blood-cell-reduced, prepared by the buffy coat production method, or plasma-reduced. Compared to the automated processing system for units, the day-28 neutrophil recovery was significantly lower after transplantation of units that were manually processed and plasma reduced (red cell replete) (odds ratio [OR] 0.19, p=0.001) or plasma and red cell reduced (OR 0.54, p=0.05). Day-100 survival did not differ by CBB. However, day-100 survival was better with units that were thawed with the dextran-albumin wash method compared to the "no wash" or "dilution only" techniques (OR 1.82, p=0.04). In conclusion, CBB processing has no significant effect on early (day 100) survival despite differences in kinetics of neutrophil recovery. PMID:25543094
78 FR 57033 - United States Standards for Condition of Food Containers
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-17
... containers during production. Stationary lot sampling is the process of randomly selecting sample units from.... * * * * * Stationary lot sampling. The process of randomly selecting sample units from a lot whose production has been... less than 1/16-inch Stringy seal (excessive plastic threads showing at edge of seal 222 area...
40 CFR 98.162 - GHGs to report.
Code of Federal Regulations, 2012 CFR
2012-07-01
... GREENHOUSE GAS REPORTING Hydrogen Production § 98.162 GHGs to report. You must report: (a) CO2 emissions from each hydrogen production process unit. (b) [Reserved] (c) CO2, CH4, and N2O emissions from each stationary combustion unit other than hydrogen production process units. You must calculate and report these...
40 CFR 98.162 - GHGs to report.
Code of Federal Regulations, 2011 CFR
2011-07-01
... GREENHOUSE GAS REPORTING Hydrogen Production § 98.162 GHGs to report. You must report: (a) CO2 emissions from each hydrogen production process unit. (b) [Reserved] (c) CO2, CH4, and N2O emissions from each stationary combustion unit other than hydrogen production process units. You must calculate and report these...
40 CFR 98.162 - GHGs to report.
Code of Federal Regulations, 2013 CFR
2013-07-01
... GREENHOUSE GAS REPORTING Hydrogen Production § 98.162 GHGs to report. You must report: (a) CO2 emissions from each hydrogen production process unit. (b) [Reserved] (c) CO2, CH4, and N2O emissions from each stationary combustion unit other than hydrogen production process units. You must calculate and report these...
40 CFR 98.162 - GHGs to report.
Code of Federal Regulations, 2014 CFR
2014-07-01
... GREENHOUSE GAS REPORTING Hydrogen Production § 98.162 GHGs to report. You must report: (a) CO2 emissions from each hydrogen production process unit. (b) [Reserved] (c) CO2, CH4, and N2O emissions from each stationary combustion unit other than hydrogen production process units. You must calculate and report these...
76 FR 34031 - United States Standards for Grades of Processed Raisins
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-10
...The Agricultural Marketing Service (AMS), of the United States Department of Agriculture (USDA) is withdrawing a notice soliciting comments on its proposed revision to the United States Standards for Grades of Processed Raisins. Based on the petitioner's request to withdraw their petition, the agency has decided not to proceed with this action.
Information processing in dendrites I. Input pattern generalisation.
Gurney, K N
2001-10-01
In this paper and its companion, we address the question as to whether there are any general principles underlying information processing in the dendritic trees of biological neurons. In order to address this question, we make two assumptions. First, the key architectural feature of dendrites responsible for many of their information processing abilities is the existence of independent sub-units performing local non-linear processing. Second, any general functional principles operate at a level of abstraction in which neurons are modelled by Boolean functions. To accommodate these assumptions, we therefore define a Boolean model neuron, the multi-cube unit (MCU), which instantiates the notion of the discrete functional sub-unit. We then use this model unit to explore two aspects of neural functionality: generalisation (in this paper) and processing complexity (in its companion). Generalisation is dealt with from a geometric viewpoint and is quantified using a new metric, the set of order parameters. These parameters are computed for threshold logic units (TLUs), a class of random Boolean functions, and MCUs. Our interpretation of the order parameters is consistent with our knowledge of generalisation in TLUs and with the lack of generalisation in randomly chosen functions. Crucially, the order parameters for MCUs imply that these functions possess a range of generalisation behaviour. We argue that this supports the general thesis that dendrites facilitate input pattern generalisation despite any local non-linear processing within functionally isolated sub-units.
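The discrete functional sub-unit idea behind the MCU can be illustrated with a toy model. This is a hypothetical sketch, not Gurney's actual formulation: each sub-unit applies a random Boolean lookup table to its own slice of the input (the "local non-linear processing"), and the combination rule used here (a majority vote over sub-unit outputs) is an assumption for illustration only:

```python
import random

def make_mcu(num_subunits, inputs_per_subunit, seed=0):
    """Toy multi-cube unit: independent sub-units, each a random Boolean
    lookup table over its own input slice, combined by majority vote."""
    rng = random.Random(seed)
    # Each sub-unit is a random truth table over its 2^k input patterns.
    tables = [[rng.randint(0, 1) for _ in range(2 ** inputs_per_subunit)]
              for _ in range(num_subunits)]

    def mcu(bits):
        # bits: flat 0/1 list, partitioned into one slice per sub-unit.
        active = 0
        for i, table in enumerate(tables):
            chunk = bits[i * inputs_per_subunit:(i + 1) * inputs_per_subunit]
            index = int("".join(map(str, chunk)), 2)
            active += table[index]
        # Majority vote over sub-unit outputs (illustrative assumption).
        return 1 if active * 2 > num_subunits else 0

    return mcu
```

Because the sub-units are functionally isolated, two inputs that differ only within one slice can change at most one sub-unit's vote, which is one intuition for why such units can still generalise.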
Laminated microchannel devices, mixing units and method of making same
Bennett, Wendy D [Kennewick, WA; Hammerstrom, Donald J [West Richland, WA; Martin, Peter M [Kennewick, WA; Matson, Dean W [Kennewick, WA
2002-10-17
A laminated microchannel device is described in which there is a unit operation process layer that has longitudinal channel. The longitudinal channel is cut completely through the layer in which the unit process operation resides. Both the device structure and method of making the device provide significant advantages in terms of simplicity and efficiency. A static mixing unit that can be incorporated in the laminated microchannel device is also described.
A Shipping Container-Based Sterile Processing Unit for Low Resources Settings
2016-01-01
Deficiencies in the sterile processing of medical instruments contribute to poor outcomes for patients, such as surgical site infections, longer hospital stays, and deaths. In low resources settings, such as some rural and semi-rural areas and secondary and tertiary cities of developing countries, deficiencies in sterile processing are accentuated due to the lack of access to sterilization equipment, improperly maintained and malfunctioning equipment, lack of power to operate equipment, poor protocols, and inadequate quality control over inventory. Inspired by our sterile processing fieldwork at a district hospital in Sierra Leone in 2013, we built an autonomous, shipping-container-based sterile processing unit to address these deficiencies. The sterile processing unit, dubbed “the sterile box,” is a full suite capable of handling instruments from the moment they leave the operating room to the point they are sterile and ready to be reused for the next surgery. The sterile processing unit is self-sufficient in power and water and features an intake for contaminated instruments, decontamination, sterilization via non-electric steam sterilizers, and secure inventory storage. To validate efficacy, we ran tests of decontamination and sterilization performance. Results of 61 trials validate convincingly that our sterile processing unit achieves satisfactory outcomes for decontamination and sterilization and as such holds promise to support healthcare facilities in low resources settings. PMID:27007568
Target Information Processing: A Joint Decision and Estimation Approach
2012-03-29
ground targets (track-before-detect) using computer cluster and graphics processing unit. Estimation and filtering theory is one of the most important...
Sandberg, D A; Lynn, S J; Matorin, A I
2001-07-01
To assess the impact of dissociation on information processing, 66 college women with high and low levels of trait dissociation were studied with regard to how they unitized videotape segments of an acquaintance rape scenario (actual assault not shown) and a nonthreatening control scenario. Unitization is a paradigm that measures how actively people process stimuli by recording how many times they press a button to indicate that they have seen a significant or meaningful event. Trait dissociation was negatively correlated with participants' unitization of the acquaintance rape videotape, unitization was positively correlated with danger cue identification, and state dissociation was negatively correlated with dangerousness ratings.
Medical Device Regulation: A Comparison of the United States and the European Union.
Maak, Travis G; Wylie, James D
2016-08-01
Medical device regulation is a controversial topic in both the United States and the European Union. Many physicians and innovators in the United States cite a restrictive US FDA regulatory process as the reason for earlier and more rapid clinical advances in Europe. The FDA approval process mandates that a device be proved efficacious compared with a control or be substantially equivalent to a predicate device, whereas the European Union approval process mandates that the device perform its intended function. Stringent, peer-reviewed safety data have not been reported. However, after recent high-profile device failures, political pressure in both the United States and the European Union has favored more restrictive approval processes. Substantial reforms of the European Union process within the next 5 to 10 years will result in a more stringent approach to device regulation, similar to that of the FDA. Changes in the FDA regulatory process have been suggested but are not imminent.
32 CFR 516.10 - Service of civil process within the United States.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 3 2010-07-01 2010-07-01 true Service of civil process within the United States. 516.10 Section 516.10 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY AID OF CIVIL AUTHORITIES AND PUBLIC RELATIONS LITIGATION Service of Process § 516.10 Service of civil process...
32 CFR 516.10 - Service of civil process within the United States.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 3 2011-07-01 2009-07-01 true Service of civil process within the United States. 516.10 Section 516.10 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY AID OF CIVIL AUTHORITIES AND PUBLIC RELATIONS LITIGATION Service of Process § 516.10 Service of civil process...
43 CFR 429.37 - Does interest accrue on monies owed to the United States during my appeal process?
Code of Federal Regulations, 2010 CFR
2010-10-01
... United States during my appeal process? 429.37 Section 429.37 Public Lands: Interior Regulations Relating... States during my appeal process? Except for any period in the appeal process during which a stay is then... decision to OHA, or during judicial review of final agency action. ...
40 CFR 98.73 - Calculating GHG emissions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Ammonia Manufacturing § 98.73 Calculating GHG emissions. You must calculate and report the annual process CO2 emissions from each ammonia manufacturing process unit... ammonia manufacturing unit, the CO2 process emissions from gaseous feedstock according to Equation G-1 of...
40 CFR 98.73 - Calculating GHG emissions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Ammonia Manufacturing § 98.73 Calculating GHG emissions. You must calculate and report the annual process CO2 emissions from each ammonia manufacturing process unit... ammonia manufacturing unit, the CO2 process emissions from gaseous feedstock according to Equation G-1 of...
40 CFR 98.73 - Calculating GHG emissions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Ammonia Manufacturing § 98.73 Calculating GHG emissions. You must calculate and report the annual process CO2 emissions from each ammonia manufacturing process unit... ammonia manufacturing unit, the CO2 process emissions from gaseous feedstock according to Equation G-1 of...
40 CFR 98.73 - Calculating GHG emissions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Ammonia Manufacturing § 98.73 Calculating GHG emissions. You must calculate and report the annual process CO2 emissions from each ammonia manufacturing process unit... ammonia manufacturing unit, the CO2 process emissions from gaseous feedstock according to Equation G-1 of...
40 CFR 98.73 - Calculating GHG emissions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Ammonia Manufacturing § 98.73 Calculating GHG emissions. You must calculate and report the annual process CO2 emissions from each ammonia manufacturing process unit... ammonia manufacturing unit, the CO2 process emissions from gaseous feedstock according to Equation G-1 of...
20 CFR 655.50 - Enforcement process.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Enforcement process. 655.50 Section 655.50... FOREIGN WORKERS IN THE UNITED STATES Labor Certification Process and Enforcement of Attestations for Temporary Employment in Occupations Other Than Agriculture or Registered Nursing in the United States (H-2B...
40 CFR 63.1082 - What definitions do I need to know?
Code of Federal Regulations, 2010 CFR
2010-07-01
...) National Emission Standards for Ethylene Manufacturing Process Units: Heat Exchange Systems and Waste... resulting from the quench and compression of cracked gas (the cracking furnace effluent) at an ethylene... within an ethylene production unit. Process wastewater is not organic wastes, process fluids, product...
2012-09-01
the preservation of evidence or due process in mind. In contrast, police operations accept significant tactical restraints military units...generally do not in order to assure due process and preserve evidence. A police action, such as an arrest, that involves maneuver and/or weapons is only...and the unique role they can play in the peace process. Resolution 1325, adopted in 2000, holds a promise to women across the globe that their
Wellbore manufacturing processes for in situ heat treatment processes
Davidson, Ian Alexander; Geddes, Cameron James; Rudolf, Randall Lynn; Selby, Bruce Allen; MacDonald, Duncan Charles
2012-12-11
A method includes making coiled tubing at a coiled tubing manufacturing unit coupled to a coiled tubing transportation system. One or more coiled tubing reels are transported from the coiled tubing manufacturing unit to one or more moveable well drilling systems using the coiled tubing transportation system. The coiled tubing transportation system runs from the tubing manufacturing unit to one or more movable well drilling systems, and then back to the coiled tubing manufacturing unit.
Associative list processing unit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemmert, Karl Scott; Underwood, Keith D
2014-04-01
An associative list processing unit and method comprising employing a plurality of prioritized cell blocks and permitting inserts to occur in a single clock cycle if all of the cell blocks are not full.
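The single-cycle insert behavior described in the patent abstract can be modeled in software. This is a hypothetical sketch of the idea, not the patented hardware design; the class and method names are invented:

```python
class AssociativeListUnit:
    """Software model of prioritized cell blocks: an insert succeeds
    (modeling a single clock cycle) as long as not every block is full."""

    def __init__(self, num_blocks, block_size):
        # Lower index = higher-priority cell block.
        self.blocks = [[] for _ in range(num_blocks)]
        self.block_size = block_size

    def insert(self, key, value):
        # Place the entry in the highest-priority block with free space.
        for block in self.blocks:
            if len(block) < self.block_size:
                block.append((key, value))
                return True   # insert completed "in one cycle"
        return False          # all cell blocks full: insert cannot proceed

    def lookup(self, key):
        # Associative match: search cell blocks in priority order.
        for block in self.blocks:
            for k, v in block:
                if k == key:
                    return v
        return None
```

In hardware all blocks would be probed in parallel; the sequential loops here only model the priority ordering, not the timing.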
Code of Federal Regulations, 2010 CFR
2010-04-01
... WORKERS IN THE UNITED STATES Labor Certification Process and Enforcement of Attestations for Temporary Employment in Occupations Other Than Agriculture or Registered Nursing in the United States (H-2B Workers... application process. ...
A containerless levitation setup for liquid processing in a superconducting magnet.
Lu, Hui-Meng; Yin, Da-Chuan; Li, Hai-Sheng; Geng, Li-Qiang; Zhang, Chen-Yan; Lu, Qin-Qin; Guo, Yun-Zhu; Guo, Wei-Hong; Shang, Peng; Wakayama, Nobuko I
2008-09-01
Containerless processing of materials is considered beneficial for obtaining high quality products due to the elimination of the detrimental effects coming from contact with container walls. Many containerless processing methods are realized by levitation techniques. This paper describes a containerless levitation setup that utilizes the magnetization force generated in a gradient magnetic field. It comprises a levitation unit, a temperature control unit, and a real-time observation unit. A known volume of a liquid diamagnetic sample can be levitated in the levitation chamber, the temperature of which is controlled using the temperature control unit. The evolution of the levitated sample is observed in real time using the observation unit. With this setup, containerless processing of liquids, such as crystal growth from solution, can be realized in a well-controlled manner. Since the levitation is achieved using a superconducting magnet, experiments requiring long durations, such as protein crystallization and simulation of the space environment for living systems, can readily be carried out.
40 CFR 98.72 - GHGs to report.
Code of Federal Regulations, 2011 CFR
2011-07-01
... GREENHOUSE GAS REPORTING Ammonia Manufacturing § 98.72 GHGs to report. You must report: (a) CO2 process..., reported for each ammonia manufacturing process unit following the requirements of this subpart (CO2... production, and therefore is not released to the ambient air from the ammonia manufacturing process unit). (b...
40 CFR 98.72 - GHGs to report.
Code of Federal Regulations, 2014 CFR
2014-07-01
... GREENHOUSE GAS REPORTING Ammonia Manufacturing § 98.72 GHGs to report. You must report: (a) CO2 process..., reported for each ammonia manufacturing process unit following the requirements of this subpart (CO2... production, and therefore is not released to the ambient air from the ammonia manufacturing process unit). (b...
40 CFR 98.72 - GHGs to report.
Code of Federal Regulations, 2013 CFR
2013-07-01
... GREENHOUSE GAS REPORTING Ammonia Manufacturing § 98.72 GHGs to report. You must report: (a) CO2 process..., reported for each ammonia manufacturing process unit following the requirements of this subpart (CO2... production, and therefore is not released to the ambient air from the ammonia manufacturing process unit). (b...
40 CFR 98.72 - GHGs to report.
Code of Federal Regulations, 2012 CFR
2012-07-01
... GREENHOUSE GAS REPORTING Ammonia Manufacturing § 98.72 GHGs to report. You must report: (a) CO2 process..., reported for each ammonia manufacturing process unit following the requirements of this subpart (CO2... production, and therefore is not released to the ambient air from the ammonia manufacturing process unit). (b...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-28
... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: Prevailing Wage Rates for Certain Occupations Processed Under H-2A Special Procedures; Correction and Rescission AGENCY: Employment and Training...
40 CFR 63.107 - Identification of process vents subject to this subpart.
Code of Federal Regulations, 2011 CFR
2011-07-01
... CATEGORIES National Emission Standards for Organic Hazardous Air Pollutants From the Synthetic Organic Chemical Manufacturing Industry § 63.107 Identification of process vents subject to this subpart. (a) The..., distillation unit, or reactor during operation of the chemical manufacturing process unit. (c) The discharge to...
40 CFR 63.107 - Identification of process vents subject to this subpart.
Code of Federal Regulations, 2013 CFR
2013-07-01
... CATEGORIES National Emission Standards for Organic Hazardous Air Pollutants From the Synthetic Organic Chemical Manufacturing Industry § 63.107 Identification of process vents subject to this subpart. (a) The..., distillation unit, or reactor during operation of the chemical manufacturing process unit. (c) The discharge to...
40 CFR 63.107 - Identification of process vents subject to this subpart.
Code of Federal Regulations, 2014 CFR
2014-07-01
... CATEGORIES National Emission Standards for Organic Hazardous Air Pollutants From the Synthetic Organic Chemical Manufacturing Industry § 63.107 Identification of process vents subject to this subpart. (a) The..., distillation unit, or reactor during operation of the chemical manufacturing process unit. (c) The discharge to...
A Foreign Correspondent's View of the Electoral Process.
ERIC Educational Resources Information Center
Gardner, Mary A. Ed.
According to their personal points of view regarding United States politics, a panel of foreign correspondents from other nations evaluated the United States electoral process and discussed the difficulties involved in conveying the complexities of this process to an audience. This document contains an edited transcript of the panel's comments.…
Atmospheric Processing Platform | Photovoltaic Research | NREL
...printing units to the left and sample preparation and rapid thermal processing units to the right. Samples can be deposited on a variety of substrates and then further processed into optoelectronic materials using rapid thermal processing; some processes, however, occur within a vacuum (i.e., thermal evaporation, sputtering). Samples can remain in ambient...
Wang, Feng; Li, Weiying; Zhang, Junpeng; Qi, Wanqi; Zhou, Yanyan; Xiang, Yuan; Shi, Nuo
2017-05-01
For a drinking water treatment plant (DWTP), organic pollutant removal is the primary focus, while suspended bacteria are often neglected. In this study, the suspended bacteria from each processing unit in a DWTP employing an ozone-biological activated carbon process were characterized using heterotrophic plate counts (HPCs), a flow cytometer, and 454-pyrosequencing methods. The results showed an adverse changing tendency of HPC and total cell counts in the sand filtration tank (SFT), where the cultivability of suspended bacteria increased to 34%. However, the cultivability level of the other units stayed below 3%, except for the ozone contact tank (OCT, 13.5%) and the activated carbon filtration tank (ACFT, 34.39%). This means that filtration processes markedly promoted the cultivability of suspended bacteria, which indicated biodegrading capability. In the OCT unit, microbial diversity indexes declined drastically, and the dominant bacteria were affiliated with the Proteobacteria phylum (99.9%) and the Betaproteobacteria class (86.3%), which were also dominant in the effluent of the other units. Besides, the primary genus was Limnohabitans in the effluents of the SFT (17.4%) as well as the ACFT (25.6%), which was inferred to be a crucial contributor to the biodegrading function in the filtration units. Overall, this paper provides an overview of the community composition of each processing unit in a DWTP, as well as a reference for better developing microbial function for drinking water treatment in the future.
State and Local Publications | State, Local, and Tribal Governments | NREL
residential and small commercial photovoltaic interconnection process time frames in the United States. Understanding Processes and Timelines for Distributed Photovoltaic Interconnection in the United States analyzes
Jasper, Justin T.; Nguyen, Mi T.; Jones, Zackary L.; Ismail, Niveen S.; Sedlak, David L.; Sharp, Jonathan O.; Luthy, Richard G.; Horne, Alex J.; Nelson, Kara L.
2013-01-01
Treatment wetlands have become an attractive option for the removal of nutrients from municipal wastewater effluents due to their low energy requirements and operational costs, as well as the ancillary benefits they provide, including creating aesthetically appealing spaces and wildlife habitats. Treatment wetlands also hold promise as a means of removing other wastewater-derived contaminants, such as trace organic contaminants and pathogens. However, concerns about variations in treatment efficacy of these pollutants, coupled with an incomplete mechanistic understanding of their removal in wetlands, hinder the widespread adoption of constructed wetlands for these two classes of contaminants. A better understanding is needed so that wetlands as a unit process can be designed for their removal, with individual wetland cells optimized for the removal of specific contaminants, and connected in series or integrated with other engineered or natural treatment processes. In this article, removal mechanisms of trace organic contaminants and pathogens are reviewed, including sorption and sedimentation, biotransformation and predation, photolysis and photoinactivation, and remaining knowledge gaps are identified. In addition, suggestions are provided for how these treatment mechanisms can be enhanced in commonly employed unit process wetland cells or how they might be harnessed in novel unit process cells. It is hoped that application of the unit process concept to a wider range of contaminants will lead to more widespread application of wetland treatment trains as components of urban water infrastructure in the United States and around the globe. PMID:23983451
Jasper, Justin T; Nguyen, Mi T; Jones, Zackary L; Ismail, Niveen S; Sedlak, David L; Sharp, Jonathan O; Luthy, Richard G; Horne, Alex J; Nelson, Kara L
2013-08-01
Treatment wetlands have become an attractive option for the removal of nutrients from municipal wastewater effluents due to their low energy requirements and operational costs, as well as the ancillary benefits they provide, including creating aesthetically appealing spaces and wildlife habitats. Treatment wetlands also hold promise as a means of removing other wastewater-derived contaminants, such as trace organic contaminants and pathogens. However, concerns about variations in treatment efficacy of these pollutants, coupled with an incomplete mechanistic understanding of their removal in wetlands, hinder the widespread adoption of constructed wetlands for these two classes of contaminants. A better understanding is needed so that wetlands as a unit process can be designed for their removal, with individual wetland cells optimized for the removal of specific contaminants, and connected in series or integrated with other engineered or natural treatment processes. In this article, removal mechanisms of trace organic contaminants and pathogens are reviewed, including sorption and sedimentation, biotransformation and predation, photolysis and photoinactivation, and remaining knowledge gaps are identified. In addition, suggestions are provided for how these treatment mechanisms can be enhanced in commonly employed unit process wetland cells or how they might be harnessed in novel unit process cells. It is hoped that application of the unit process concept to a wider range of contaminants will lead to more widespread application of wetland treatment trains as components of urban water infrastructure in the United States and around the globe.
Distributed trace using central performance counter memory
Satterfield, David L; Sexton, James C
2013-10-22
A plurality of processing cores and a central storage unit having at least one memory are connected in a daisy chain manner, forming a daisy chain ring layout on an integrated chip. At least one of the plurality of processing cores places trace data on the daisy chain connection for transmitting the trace data to the central storage unit, and the central storage unit detects the trace data and stores the trace data in the memory co-located with the central storage unit.
Distributed trace using central performance counter memory
Satterfield, David L.; Sexton, James C.
2013-01-22
A plurality of processing cores and a central storage unit having at least one memory are connected in a daisy chain manner, forming a daisy chain ring layout on an integrated chip. At least one of the plurality of processing cores places trace data on the daisy chain connection for transmitting the trace data to the central storage unit, and the central storage unit detects the trace data and stores the trace data in the memory co-located with the central storage unit.
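The trace-collection scheme in these two patent records can be sketched abstractly. This is a hypothetical software model, not the patented circuit; the hop count merely illustrates trace data being forwarded around the ring toward the central storage unit:

```python
class DaisyChainTraceRing:
    """Toy model of cores and a central storage unit on a daisy chain
    ring: trace data placed by a core travels hop by hop until the
    central storage unit detects it and stores it in its memory."""

    def __init__(self, num_cores):
        self.num_cores = num_cores
        self.memory = []  # memory co-located with the central storage unit

    def emit_trace(self, core_id, data):
        # The storage unit is modeled as the ring stop after the last
        # core, so trace data from core_id travels this many hops.
        hops = self.num_cores - core_id
        self.memory.append((core_id, data))
        return hops
```

Centralizing the trace memory this way trades per-core buffers for ring latency: every core shares one storage unit, at the cost of data traversing intermediate cores.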
Code of Federal Regulations, 2014 CFR
2014-07-01
... recordation; requests for copies of trademark documents; and certain documents filed under the Madrid Protocol... to review an action of the Office's Madrid Processing Unit, when filed by mail, must be mailed to: Madrid Processing Unit, 600 Dulany Street, MDE-7B87, Alexandria, VA 22314-5793. [68 FR 48289, Aug. 13...
Code of Federal Regulations, 2013 CFR
2013-07-01
... recordation; requests for copies of trademark documents; and certain documents filed under the Madrid Protocol... to review an action of the Office's Madrid Processing Unit, when filed by mail, must be mailed to: Madrid Processing Unit, 600 Dulany Street, MDE-7B87, Alexandria, VA 22314-5793. [68 FR 48289, Aug. 13...
Code of Federal Regulations, 2011 CFR
2011-07-01
... recordation; requests for copies of trademark documents; and certain documents filed under the Madrid Protocol... to review an action of the Office's Madrid Processing Unit, when filed by mail, must be mailed to: Madrid Processing Unit, 600 Dulany Street, MDE-7B87, Alexandria, VA 22314-5793. [68 FR 48289, Aug. 13...
Code of Federal Regulations, 2012 CFR
2012-07-01
... recordation; requests for copies of trademark documents; and certain documents filed under the Madrid Protocol... to review an action of the Office's Madrid Processing Unit, when filed by mail, must be mailed to: Madrid Processing Unit, 600 Dulany Street, MDE-7B87, Alexandria, VA 22314-5793. [68 FR 48289, Aug. 13...
Susan J. Alexander; Sonja N. Oswalt; Marla R. Emery
2011-01-01
The United States, in partnership with 11 other countries, participates in the Montreal Process. Each country assesses national progress toward the sustainable management of forest resources by using a set of criteria and indicators agreed on by all member countries. Several indicators focus on nontimber forest products (NTFPs). In the United States, permit and...
Innovation and Technology: Electronic Intensive Care Unit Diaries.
Scruth, Elizabeth A; Oveisi, Nazanin; Liu, Vincent
2017-01-01
Hospitalization in the intensive care unit can be a stressful time for patients and their family members. Patients' family members often have difficulty processing all of the information that is given to them. Therefore, an intensive care unit diary can serve as a conduit for synthesizing information, maintaining connection with patients, and maintaining a connection with family members outside the intensive care unit. Paper intensive care unit diaries have been used outside the United States for many years. This article explores the development of an electronic intensive care unit diary using a rapid prototyping model to accelerate the process. Initial results of design testing demonstrate that it is feasible, useful, and desirable to consider the implementation of electronic intensive care unit diaries for patients at risk for post-intensive care syndrome. ©2017 American Association of Critical-Care Nurses.
1993-12-01
Generally Accepted Process While neither DoD Directives nor USAF Regulations specify exact mandatory TDY order processing methods, most USAF units...functional input. Finally, TDY order processing functional experts at Hanscom, Los Angeles and McClellan AFBs provided inputs based on their experiences...current electronic auditing capabilities. 81 DTPS Initiative. This DFAS-initiated action to standardize TDY order processing throughout DoD is currently
NASA Astrophysics Data System (ADS)
Smorodin, A. I.; Red'kin, V. V.; Frolov, Y. D.; Korobkov, A. A.; Kemaev, O. V.; Kulik, M. V.; Shabalin, O. V.
2015-07-01
A set of technologies and prototype systems for eco-friendly shutdown of power-generating, process, capacitive, and transport equipment is offered. The following are regarded as core technologies for the complex: nitrogen cryogenic technology for displacement of hydrogen from the cooling circuit of turbine generators, cryo blasting of the power units with carbon dioxide granules, preservation of the shutdown power units with dehydrated air, and dismantling and severing of equipment and structural materials of power units. Four prototype systems for eco-friendly shutdown of the power units may be built on the basis of the selected technologies: a multimode nitrogen cryogenic system with four subsystems; a cryo blasting system with CO2 granules for thermal-mechanical and electrical equipment of power units; a compressionless air-drainage system for drying and storage of the shutdown power units; and a cryo-gas system for general severing of the steam-turbine power units. Results of the research and of pilot and demonstration tests of the operational units of the considered technological systems support applying the proposed technologies and systems to the shutdown of power-generating, process, capacitive, and transport equipment.
A centerless grinding unit used for precisely processing ferrules of optical fiber connector
NASA Astrophysics Data System (ADS)
Wu, Yongbo; Kondo, Takahiro; Kato, Masana
2005-02-01
This paper describes the development of a centerless grinding unit used for precisely processing ferrules, a key component of optical fiber connectors. In the conventional processing procedure, the outer diameter of a ferrule is ground on a special machine tool, i.e., a centerless grinder. However, when only a small quantity of ferrules is to be processed, introducing a centerless grinder leads to high processing cost. To address this problem, the present authors propose a new centerless grinding technique in which a compact centerless grinding unit, composed of an ultrasonic elliptic-vibration shoe, a workrest blade, and their respective holders, is installed on a popular surface grinder to perform centerless grinding operations for outer-diameter machining of ferrules. In this work, a unit is designed and constructed, and is installed on a surface grinder equipped with a diamond grinding wheel. The performance of the unit is then examined experimentally, followed by grinding tests of the ferrule's outer diameter. As a result, the roundness of the ferrule's outer diameter improved from an original value of around 3 μm to a final value of around 0.5 μm, confirming the validity of the new technique.
40 CFR 63.773 - Inspection and monitoring requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... devices: (i) Except for control devices for small glycol dehydration units, a boiler or process heater in...) Except for control devices for small glycol dehydration units, a boiler or process heater with a design...
40 CFR 63.773 - Inspection and monitoring requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... devices: (i) Except for control devices for small glycol dehydration units, a boiler or process heater in...) Except for control devices for small glycol dehydration units, a boiler or process heater with a design...
32 CFR 516.9 - Service of criminal process within the United States.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 3 2010-07-01 2010-07-01 true Service of criminal process within the United States. 516.9 Section 516.9 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY AID OF CIVIL AUTHORITIES AND PUBLIC RELATIONS LITIGATION Service of Process § 516.9 Service of criminal...
32 CFR 516.12 - Service of civil process outside the United States.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 3 2010-07-01 2010-07-01 true Service of civil process outside the United States. 516.12 Section 516.12 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY AID OF CIVIL AUTHORITIES AND PUBLIC RELATIONS LITIGATION Service of Process § 516.12 Service of civil...
32 CFR 516.11 - Service of criminal process outside the United States.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 3 2010-07-01 2010-07-01 true Service of criminal process outside the United States. 516.11 Section 516.11 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY AID OF CIVIL AUTHORITIES AND PUBLIC RELATIONS LITIGATION Service of Process § 516.11 Service of...
Graphic Arts: Book Two. Process Camera, Stripping, and Platemaking.
ERIC Educational Resources Information Center
Farajollahi, Karim; And Others
The second of a three-volume set of instructional materials for a course in graphic arts, this manual consists of 10 instructional units dealing with the process camera, stripping, and platemaking. Covered in the individual units are the process camera and darkroom photography, line photography, half-tone photography, other darkroom techniques,…
Code of Federal Regulations, 2010 CFR
2010-07-01
... streams in open systems within a chemical manufacturing process unit. 63.149 Section 63.149 Protection of... open systems within a chemical manufacturing process unit. (a) The owner or operator shall comply with... Air Pollutants From the Synthetic Organic Chemical Manufacturing Industry for Process Vents, Storage...
Milk Processing Plant Employee. Agricultural Cooperative Training. Vocational Agriculture.
ERIC Educational Resources Information Center
Blaschke, Nolan; Page, Foy
This course of study is designed for the vocational agricultural student enrolled in an agricultural cooperative part-time training program in the area of milk processing occupations. The course consists of 11 units, each with 4 to 13 individual topics that milk processing plant employees should know. Subjects covered by the units are the…
Suggested Guidelines for the Administrative Review Offices of Medical Education.
ERIC Educational Resources Information Center
Meleca, C. Benjamin
1987-01-01
A general description and guidelines are presented for a program review process for departments of medical education of the administrative units within colleges of medicine. After a discussion of the purposes of reviews, a suggested review process is described. The process to be utilized should be negotiated by the principal units, the…
Process Design Manual for Land Treatment of Municipal Wastewater.
ERIC Educational Resources Information Center
Crites, R.; And Others
This manual presents a procedure for the design of land treatment systems. Slow rate, rapid infiltration, and overland flow processes for the treatment of municipal wastewaters are given emphasis. The basic unit operations and unit processes are discussed in detail, and the design concepts and criteria are presented. The manual includes design…
40 CFR 98.162 - GHGs to report.
Code of Federal Regulations, 2010 CFR
2010-07-01
... GREENHOUSE GAS REPORTING Hydrogen Production § 98.162 GHGs to report. You must report: (a) CO2 process emissions from each hydrogen production process unit. (b) CO2, CH4 and N2O combustion emissions from each hydrogen production process unit. You must calculate and report these combustion emissions under subpart C...
Aerobic Digestion. Biological Treatment Process Control. Instructor's Guide.
ERIC Educational Resources Information Center
Klopping, Paul H.
This unit on aerobic sludge digestion covers the theory of the process, system components, factors that affect the process performance, standard operational concerns, indicators of steady-state operations, and operational problems. The instructor's guide includes: (1) an overview of the unit; (2) lesson plan; (3) lecture outline (keyed to a set of…
Using Mathematica to Teach Process Units: A Distillation Case Study
ERIC Educational Resources Information Center
Rasteiro, Maria G.; Bernardo, Fernando P.; Saraiva, Pedro M.
2005-01-01
The question addressed here is how to integrate computational tools, namely interactive general-purpose platforms, in the teaching of process units. Mathematica has been selected as a complementary tool to teach distillation processes, with the main objective of leading students to achieve a better understanding of the physical phenomena involved…
32 CFR 516.12 - Service of civil process outside the United States.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 3 2011-07-01 2009-07-01 true Service of civil process outside the United States. 516.12 Section 516.12 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY AID OF CIVIL AUTHORITIES AND PUBLIC RELATIONS LITIGATION Service of Process § 516.12 Service of civil...
32 CFR 516.9 - Service of criminal process within the United States.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 3 2011-07-01 2009-07-01 true Service of criminal process within the United States. 516.9 Section 516.9 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY AID OF CIVIL AUTHORITIES AND PUBLIC RELATIONS LITIGATION Service of Process § 516.9 Service of criminal...
32 CFR 516.11 - Service of criminal process outside the United States.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 3 2011-07-01 2009-07-01 true Service of criminal process outside the United States. 516.11 Section 516.11 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY AID OF CIVIL AUTHORITIES AND PUBLIC RELATIONS LITIGATION Service of Process § 516.11 Service of...
Process for Making Carbon-Carbon Turbocharger Housing Unit for Intermittent Combustion Engines
NASA Technical Reports Server (NTRS)
Northam, G. Burton (Inventor); Ransone, Philip O. (Inventor); Rivers, H. Kevin (Inventor)
1999-01-01
An improved, lightweight turbine housing unit for an intermittent combustion reciprocating internal combustion engine turbocharger is prepared from a lay-up or molding of carbon-carbon composite materials in a single-piece or two-piece process. When compared to conventional steel or cast iron, the use of carbon-carbon composite materials in a turbine housing unit reduces the overall weight of the engine and reduces the heat energy loss in the turbocharging process. This reduction in heat energy loss and weight provides for more efficient engine operation.
Vatsavai, Ranga Raju; Graesser, Jordan B.; Bhaduri, Budhendra L.
2016-07-05
A programmable media includes a graphical processing unit in communication with a memory element. The graphical processing unit is configured to detect one or more settlement regions from a high resolution remote sensed image based on the execution of programming code. The graphical processing unit identifies one or more settlements through the execution of the programming code that executes a multi-instance learning algorithm that models portions of the high resolution remote sensed image. The identification is based on spectral bands transmitted by a satellite and on selected designations of the image patches.
Modular Toolkit for Data Processing (MDP): A Python Data Processing Framework.
Zito, Tiziano; Wilbert, Niko; Wiskott, Laurenz; Berkes, Pietro
2008-01-01
Modular toolkit for Data Processing (MDP) is a data processing framework written in Python. From the user's perspective, MDP is a collection of supervised and unsupervised learning algorithms and other data processing units that can be combined into data processing sequences and more complex feed-forward network architectures. Computations are performed efficiently in terms of speed and memory requirements. From the scientific developer's perspective, MDP is a modular framework that can easily be expanded. The implementation of new algorithms is easy and intuitive. Newly implemented units are then automatically integrated with the rest of the library. MDP has been written in the context of theoretical research in neuroscience, but it has been designed to be helpful in any context where trainable data processing algorithms are used. Its simplicity on the user's side, the variety of readily available algorithms, and the reusability of the implemented units also make it a useful educational tool.
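The unit-chaining idea described above can be sketched in plain Python. This is a hypothetical minimal pipeline for illustration, not MDP's actual API (MDP itself provides richer node and flow classes); all class names below are our own:

```python
import numpy as np

class ProcessingUnit:
    """A trainable data processing unit (hypothetical minimal interface)."""
    def train(self, x):
        pass  # default: nothing to learn
    def execute(self, x):
        raise NotImplementedError

class Centering(ProcessingUnit):
    """Learns the column means, then subtracts them."""
    def train(self, x):
        self.mean = x.mean(axis=0)
    def execute(self, x):
        return x - self.mean

class Scaling(ProcessingUnit):
    """Learns the column standard deviations, then divides by them."""
    def train(self, x):
        self.std = x.std(axis=0)
    def execute(self, x):
        return x / self.std

class Sequence(ProcessingUnit):
    """Chains units: each unit is trained on the output of its predecessors."""
    def __init__(self, units):
        self.units = units
    def train(self, x):
        for u in self.units:
            u.train(x)
            x = u.execute(x)
    def execute(self, x):
        for u in self.units:
            x = u.execute(x)
        return x

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=(1000, 3))
seq = Sequence([Centering(), Scaling()])
seq.train(data)
out = seq.execute(data)
print(out.mean(), out.std())  # approximately 0 and 1
```

The key design point, shared with MDP, is that a composite sequence presents the same train/execute interface as a single unit, so pipelines nest and reuse cleanly.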
Noise, chaos, and (ε, τ)-entropy per unit time
NASA Astrophysics Data System (ADS)
Gaspard, Pierre; Wang, Xiao-Jing
1993-12-01
The degree of dynamical randomness of different time processes is characterized in terms of the (ε, τ)-entropy per unit time. The (ε, τ)-entropy is the amount of information generated per unit time, at different scales τ of time and ε of the observables. This quantity generalizes the Kolmogorov-Sinai entropy per unit time from deterministic chaotic processes, to stochastic processes such as fluctuations in mesoscopic physico-chemical phenomena or strong turbulence in macroscopic spacetime dynamics. The random processes that are characterized include chaotic systems, Bernoulli and Markov chains, Poisson and birth-and-death processes, Ornstein-Uhlenbeck and Yaglom noises, fractional Brownian motions, different regimes of hydrodynamical turbulence, and the Lorentz-Boltzmann process of nonequilibrium statistical mechanics. We also extend the (ε, τ)-entropy to spacetime processes like cellular automata, Conway's game of life, lattice gas automata, coupled maps, spacetime chaos in partial differential equations, as well as the ideal, the Lorentz, and the hard sphere gases. Through these examples it is demonstrated that the (ε, τ)-entropy provides a unified quantitative measure of dynamical randomness to both chaos and noises, and a method to detect transitions between dynamical states of different degrees of randomness as a parameter of the system is varied.
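In outline, the quantity can be written as follows. This is a reconstruction following the standard Gaspard-Wang construction; the precise partitioning details are assumptions here, not taken from the abstract:

```latex
% Sample the process X(t) at times t = 0, \tau, \dots, (n-1)\tau,
% coarse-grained to observable resolution \varepsilon, and let
% H_n(\varepsilon, \tau) denote the entropy of the resulting
% distribution over length-n coarse-grained trajectories. Then
h(\varepsilon, \tau) \;=\; \lim_{n \to \infty} \frac{1}{n\tau}\, H_n(\varepsilon, \tau),
% and the Kolmogorov--Sinai entropy per unit time is recovered
% in the fine-grained limit:
h_{\mathrm{KS}} \;=\; \lim_{\varepsilon \to 0}\, h(\varepsilon, \tau).
% Deterministic chaos has finite h_{KS}, whereas noise processes
% have h(\varepsilon, \tau) diverging as \varepsilon \to 0, which is
% how the quantity separates degrees of dynamical randomness.
```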
40 CFR 65.118 - Alternative means of emission limitation: Enclosed-vented process units.
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONSOLIDATED FEDERAL AIR RULE Equipment Leaks § 65.118... control device. Process units that are enclosed in such a manner that all emissions from equipment leaks...
Method and apparatus for fault tolerance
NASA Technical Reports Server (NTRS)
Masson, Gerald M. (Inventor); Sullivan, Gregory F. (Inventor)
1993-01-01
A method and apparatus for achieving fault tolerance in a computer system having at least a first central processing unit and a second central processing unit. The method comprises the steps of first executing a first algorithm in the first central processing unit on input which produces a first output as well as a certification trail. Next, executing a second algorithm in the second central processing unit on the input and on at least a portion of the certification trail which produces a second output. The second algorithm has a faster execution time than the first algorithm for a given input. Then, comparing the first and second outputs such that an error result is produced if the first and second outputs are not the same. The step of executing a first algorithm and the step of executing a second algorithm preferably takes place over essentially the same time period.
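The certification-trail scheme can be illustrated with sorting, a hedged sketch only: the choice of sorting as the task and all function names are ours, not the patent's. The first algorithm does the full work and emits a trail; the second uses the trail to reproduce the output faster, and a mismatch signals a fault:

```python
def primary_sort(data):
    """First algorithm: full sort, emitting a certification trail
    (here, the permutation of original indices)."""
    trail = sorted(range(len(data)), key=lambda i: data[i])
    output = [data[i] for i in trail]
    return output, trail

def secondary_sort(data, trail):
    """Second algorithm: rebuilds the output in O(n) using the trail,
    checking that the trail really is a sorting permutation."""
    assert sorted(trail) == list(range(len(data))), "trail is not a permutation"
    output = [data[i] for i in trail]
    assert all(output[i] <= output[i + 1] for i in range(len(output) - 1)), \
        "trail does not sort the input"
    return output

data = [5, 3, 8, 1]
out1, trail = primary_sort(data)      # would run on the first CPU
out2 = secondary_sort(data, trail)    # would run on the second CPU
assert out1 == out2 == [1, 3, 5, 8]   # outputs agree: no fault detected
```

Because verifying a sorted order via the trail is cheaper than sorting, the second run finishes faster, matching the patent's requirement that the second algorithm have a shorter execution time for a given input.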
High Power Silicon Carbide (SiC) Power Processing Unit Development
NASA Technical Reports Server (NTRS)
Scheidegger, Robert J.; Santiago, Walter; Bozak, Karin E.; Pinero, Luis R.; Birchenough, Arthur G.
2015-01-01
NASA GRC successfully designed, built and tested a technology-push power processing unit for electric propulsion applications that utilizes high voltage silicon carbide (SiC) technology. The development specifically addresses the need for high power electronics to enable electric propulsion systems in the 100s of kilowatts. This unit demonstrated how high voltage combined with superior semiconductor components resulted in exceptional converter performance.
2012-12-04
CAPE CANAVERAL, Fla. -- Inside the Space Station Processing Facility at NASA's Kennedy Space Center in Florida, workers have prepared the orbital replacement unit for the space station's main bus switching unit to be placed in a shipping container. The unit, which was processed at Kennedy, will be shipped to Japan at the beginning of the year for the HTV-4 launch, which is currently scheduled for 2013. Photo credit: NASA/Charisse Nahser
2012-12-04
CAPE CANAVERAL, Fla. -- Workers inside the Space Station Processing Facility at NASA's Kennedy Space Center in Florida prepare to pack the orbital replacement unit for the space station's main bus switching unit in a shipping container. The unit, which was processed at Kennedy, will be shipped to Japan at the beginning of the year for the HTV-4 launch, which is currently scheduled for 2013. Photo credit: NASA/Charisse Nahser
ERIC Educational Resources Information Center
Kruse, Rebecca; Howes, Elaine V.; Carlson, Janet; Roth, Kathleen; Bourdelat-Parks, Brooke
2013-01-01
AAAS and BSCS are collaborating to develop and study a curriculum unit that supports students' ability to explain a variety of biological processes such as growth in chemical terms. The unit provides conceptual coherence between chemical processes in nonliving and living systems through the core idea of atom rearrangement and conservation during…
Ito, Toshihiro; Kato, Tsuyoshi; Hasegawa, Makoto; Katayama, Hiroyuki; Ishii, Satoshi; Okabe, Satoshi; Sano, Daisuke
2016-12-01
The virus reduction efficiency of each unit process is commonly determined from the ratio of the virus concentration in the influent of a unit to that in its effluent, but the virus concentration in wastewater often falls below the analytical quantification limit, which prevents calculating the concentration ratio at each sampling event. In this study, left-censored datasets of norovirus (genogroups I and II) and adenovirus were used to calculate the virus reduction efficiency in unit processes of secondary biological treatment and chlorine disinfection. Virus concentrations in the influent, the effluent from secondary treatment, and the chlorine-disinfected effluent of four municipal wastewater treatment plants were analyzed by a quantitative polymerase chain reaction (PCR) approach, and the probabilistic distributions of log reduction (LR) were estimated by a Bayesian estimation algorithm. The mean values of LR in the secondary treatment units ranged from 0.9 to 2.2, whereas those in the free chlorine disinfection units ranged from -0.1 to 0.5. The LR value in secondary treatment was virus-type and unit-process dependent, which raises the importance of accumulating data on virus LR values applicable to the multiple-barrier system, a global concept of microbial risk management in wastewater reclamation and reuse.
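The log-reduction calculation underlying the study is simply the base-10 logarithm of the influent-to-effluent concentration ratio. A minimal sketch follows; the concentrations below are hypothetical, not the study's data:

```python
import math

def log_reduction(c_in, c_out):
    """Log10 reduction of virus concentration across a unit process
    (concentrations in, e.g., gene copies per litre)."""
    return math.log10(c_in / c_out)

# Hypothetical concentrations for one sampling event (illustrative only):
influent, secondary_effluent, chlorinated = 1e6, 1e4, 8e3
lr_secondary = log_reduction(influent, secondary_effluent)       # 2.0
lr_chlorine = log_reduction(secondary_effluent, chlorinated)     # ~0.1
print(round(lr_secondary, 2), round(lr_chlorine, 2))
```

When either concentration is left-censored (below the quantification limit), this ratio cannot be evaluated directly, which is why the study resorts to a Bayesian treatment of the censored data rather than per-event arithmetic.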
A portable device for rapid nondestructive detection of fresh meat quality
NASA Astrophysics Data System (ADS)
Lin, Wan; Peng, Yankun
2014-05-01
Quality attributes of fresh meat influence nutritional value and consumers' purchasing decisions. To meet the inspection departments' demand for a portable device, a rapid and nondestructive detection device for fresh meat quality based on an ARM (Advanced RISC Machines) processor and VIS/NIR technology was designed. The working principle, hardware composition, software system, and functional tests are introduced. The hardware system consisted of an ARM processing unit, a light source unit, a detection probe unit, a spectral data acquisition unit, an LCD (Liquid Crystal Display) touch screen display unit, a power unit, and a cooling unit. A Linux operating system and a quality-parameter acquisition and processing application were developed. The system integrates spectral signal collection, storage, display, and processing, with a total weight of 3.5 kg. Forty pieces of beef were used in experiments to validate its stability and reliability. The results indicated that prediction models developed by the PLSR method with SNV pre-processing performed well, with correlation coefficients and root mean square errors for the validation set of 0.90 and 1.56 for L*, 0.95 and 1.74 for a*, 0.94 and 0.59 for b*, 0.88 and 0.13 for pH, 0.79 and 12.46 for tenderness, and 0.89 and 0.91 for water content, respectively. The experimental results show that this device can be a useful tool for detecting meat quality.
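The SNV (standard normal variate) pre-processing step used before the PLSR modeling can be sketched with NumPy. This is a minimal illustration of the standard SNV transform; the spectra shown are hypothetical, not the device's measurements:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum (row)
    by its own mean and standard deviation, suppressing baseline and
    multiplicative scatter effects."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Two hypothetical reflectance spectra with different baselines:
raw = np.array([[0.20, 0.35, 0.50, 0.40],
                [0.60, 0.75, 0.90, 0.80]])
corrected = snv(raw)
# After SNV, each row has mean 0 and standard deviation 1, so the
# two spectra become directly comparable despite the baseline offset.
```

The corrected spectra would then be fed to a PLSR model (e.g., scikit-learn's `PLSRegression`, as one possible implementation) to predict quality attributes such as L*, a*, b*, and pH.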
The neural circuits of innate fear: detection, integration, action, and memorization
Silva, Bianca A.; Gross, Cornelius T.
2016-01-01
How fear is represented in the brain has generated a lot of research attention, not only because fear increases the chances for survival when appropriately expressed but also because it can lead to anxiety and stress-related disorders when inadequately processed. In this review, we summarize recent progress in the understanding of the neural circuits processing innate fear in rodents. We propose that these circuits are contained within three main functional units in the brain: a detection unit, responsible for gathering sensory information signaling the presence of a threat; an integration unit, responsible for incorporating the various sensory information and recruiting downstream effectors; and an output unit, in charge of initiating appropriate bodily and behavioral responses to the threatful stimulus. In parallel, the experience of innate fear also instructs a learning process leading to the memorization of the fearful event. Interestingly, while the detection, integration, and output units processing acute fear responses to different threats tend to be harbored in distinct brain circuits, memory encoding of these threats seems to rely on a shared learning system. PMID:27634145
NASA Technical Reports Server (NTRS)
1981-01-01
Technical readiness for the production of photovoltaic modules using single-crystal silicon dendritic web sheet material is demonstrated by: (1) selection, design, and implementation of a solar cell and photovoltaic module process sequence in a Module Experimental Process System Development Unit; (2) demonstration runs; (3) passing of acceptance and qualification tests; and (4) achievement of a cost-effective module.
Potable water recovery for spacecraft application by electrolytic pretreatment/air evaporation
NASA Technical Reports Server (NTRS)
Wells, G. W.
1975-01-01
A process for the recovery of potable water from urine using electrolytic pretreatment followed by distillation in a closed-cycle air evaporator has been developed and tested. Both the electrolytic pretreatment unit and the air evaporation unit are six-person, flight-concept prototype, automated units. Significantly extended wick lifetimes have been achieved in the air evaporation unit using electrolytically pretreated, as opposed to chemically pretreated, urine feed. Parametric test data are presented on product water quality, wick life, process power, maintenance requirements, and expendable requirements.
A GPU-Based Wide-Band Radio Spectrometer
NASA Astrophysics Data System (ADS)
Chennamangalam, Jayanth; Scott, Simon; Jones, Glenn; Chen, Hong; Ford, John; Kepley, Amanda; Lorimer, D. R.; Nie, Jun; Prestage, Richard; Roshi, D. Anish; Wagner, Mark; Werthimer, Dan
2014-12-01
The graphics processing unit has become an integral part of astronomical instrumentation, enabling high-performance online data reduction and accelerated online signal processing. In this paper, we describe a wide-band reconfigurable spectrometer built using an off-the-shelf graphics processing unit card. This spectrometer, when configured as a polyphase filter bank, supports a dual-polarisation bandwidth of up to 1.1 GHz (or a single-polarisation bandwidth of up to 2.2 GHz) on the latest generation of graphics processing units. On the other hand, when configured as a direct fast Fourier transform, the spectrometer supports a dual-polarisation bandwidth of up to 1.4 GHz (or a single-polarisation bandwidth of up to 2.8 GHz).
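The direct fast Fourier transform mode can be sketched on the CPU with NumPy. This is a minimal illustration of the algorithm only; the real spectrometer executes it on the GPU, and all parameters and names below are hypothetical:

```python
import numpy as np

def fft_spectrometer(samples, nchan, nspec):
    """Direct-FFT spectrometer: split the time series into nspec blocks
    of nchan samples, FFT each block, and accumulate the power."""
    blocks = samples[:nchan * nspec].reshape(nspec, nchan)
    spectra = np.fft.fft(blocks, axis=1)
    return (np.abs(spectra) ** 2).mean(axis=0)  # integrated power spectrum

# Hypothetical test signal: a tone centred on FFT bin 32, plus noise.
rng = np.random.default_rng(1)
nchan, nspec = 256, 64
t = np.arange(nchan * nspec)
signal = np.cos(2 * np.pi * 32 * t / nchan) + 0.1 * rng.normal(size=t.size)
power = fft_spectrometer(signal, nchan, nspec)
peak = int(np.argmax(power[:nchan // 2]))
print(peak)  # the tone appears in channel 32
```

A polyphase filter bank mode differs only in applying a windowed pre-filter across several blocks before the FFT, trading some throughput for much sharper channel isolation, which is consistent with the lower bandwidth quoted for that configuration.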
Modelling and simulation of a robotic work cell
NASA Astrophysics Data System (ADS)
Sękala, A.; Gwiazda, A.; Kost, G.; Banaś, W.
2017-08-01
The considerations presented in this work concern the design and simulation of a robotic work cell. The design of robotic cells is the process of synergistically combining components into groups, combining these groups into specific, larger work units, or dividing large work units into smaller ones. The combinations or divisions are carried out according to the needs of realizing the objectives assumed for these units. The design process is based on an integrated approach, which allows all needed elements of the process to be taken into consideration. Each element of the design process can be an independent design agent tending to attain its own objectives.
Quality Improvement Process in a Large Intensive Care Unit: Structure and Outcomes.
Reddy, Anita J; Guzman, Jorge A
2016-11-01
Quality improvement in the health care setting is a complex process, and even more so in the critical care environment. The development of intensive care unit process measures and quality improvement strategies are associated with improved outcomes, but should be individualized to each medical center as structure and culture can differ from institution to institution. The purpose of this report is to describe the structure of quality improvement processes within a large medical intensive care unit while using examples of the study institution's successes and challenges in the areas of stat antibiotic administration, reduction in blood product waste, central line-associated bloodstream infections, and medication errors. © The Author(s) 2015.
2013-08-22
4 cores, where the code may simultaneously run on the multiple cores or the graphics processing unit (or GPU – to be more specific on an NVIDIA ...allowed to get accurate crack shapes. DISCLAIMER Reference herein to any specific commercial company , product, process, or service by trade name
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-04
... DEPARTMENT OF COMMERCE United States Patent and Trademark Office Legal Processes ACTION: Proposed... former employees of the United States Patent and Trademark Office (USPTO). The rules for these legal... employee testimony and production of documents in legal proceedings, reports of unauthorized testimony...
The Use of the Nursing Process in Spain as Compared to the United States and Canada.
Huitzi-Egilegor, Joseba Xabier; Elorza-Puyadena, Maria Isabel; Asurabarrena-Iraola, Carmen
2017-05-18
To analyze the development of the nursing process in Spain and compare it with its development in the United States and Canada. This is a narrative review. The teaching of the nursing process in Spanish nursing schools began in 1977, and the process started being used in professional practice in the 1990s. The development, the difficulties, the nursing models used, and the forms of application are discussed. The developments that took place in the United States and Canada occurred in Spain about 15-20 years later and, today, the nursing process is a reality there. Cross-sectional studies are needed to determine the changes in the development of the nursing process in Spain. © 2017 NANDA International, Inc.
7 CFR 1208.29 - United States.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 10 2014-01-01 2014-01-01 false United States. 1208.29 Section 1208.29 Agriculture... § 1208.29 United States. United States means collectively the 50 states, the District of Columbia, the Commonwealth of Puerto Rico, and the territories and possessions of the United States. National Processed...
7 CFR 1208.29 - United States.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 10 2013-01-01 2013-01-01 false United States. 1208.29 Section 1208.29 Agriculture... § 1208.29 United States. United States means collectively the 50 states, the District of Columbia, the Commonwealth of Puerto Rico, and the territories and possessions of the United States. National Processed...
Fundamentals of Refrigeration.
ERIC Educational Resources Information Center
Sutliff, Ronald D.; And Others
This self-study course is designed to familiarize Marine enlisted personnel with the principles of the refrigeration process. The course contains five study units. Each study unit begins with a general objective, which is a statement of what the student should learn from the unit. The study units are divided into numbered work units, each…
Intermediate SCDC Spanish Curricula Units. Science/Health, Unit 1, Kits 1-4, Teacher's Guide.
ERIC Educational Resources Information Center
Spanish Curricula Development Center, Miami Beach, FL.
Unified by the theme "our community", this unit, part of nine basic instructional units for intermediate level, reflects the observations of Mexican Americans, Puerto Ricans, and Cubans in various regions of the United States. Comprised of Kits 1-4, the unit extends the following basic and interpreted science processes: observing, communicating,…
NASA Technical Reports Server (NTRS)
1983-01-01
The process technology for the manufacture of semiconductor-grade silicon in a large commercial plant by 1986, at a price of less than $14 per kilogram of silicon in 1975 dollars, is discussed. The engineering design, installation, checkout, and operation of an Experimental Process System Development unit are also discussed, along with quality control in scaling up the process and an economic analysis of product and production costs.
Process for removing an organic compound from water
Baker, Richard W.; Kaschemekat, Jurgen; Wijmans, Johannes G.; Kamaruddin, Henky D.
1993-12-28
A process for removing organic compounds from water is disclosed. The process involves gas stripping followed by membrane separation treatment of the stripping gas. The stripping step can be carried out using one or multiple gas strippers and using air or any other gas as stripping gas. The membrane separation step can be carried out using a single-stage membrane unit or a multistage unit. Apparatus for carrying out the process is also disclosed. The process is particularly suited for treatment of contaminated groundwater or industrial wastewater.
Daniel Reed; Richard Bergman; Jae-Woo Kim; Adam Tayler; David Harper; David Jones; Chris Knowles; Maureen E. Puettmann
2012-01-01
In this article, we present cradle-to-gate life-cycle inventory (LCI) data for wood fuel pellets manufactured in the Southeast United States. We surveyed commercial pellet manufacturers in 2010, collecting annual production data for 2009. Weighted-average inputs to, and emissions from, the pelletization process were determined. The pellet making unit process was...
Sara A. Goeking; Greg C. Liknes
2009-01-01
The Forest Inventory and Analysis (FIA) program attempts to inventory all forested lands throughout the United States. Each of the four FIA units has developed a process to minimize inventory costs by refraining from visiting those plots in the national inventory grid that are undoubtedly nonforest. We refer to this process as pre-field operations. Until recently, the...
Assessment of mammographic film processor performance in a hospital and mobile screening unit.
Murray, J G; Dowsett, D J; Laird, O; Ennis, J T
1992-12-01
In contrast to the majority of mammographic breast screening programmes, film processing at this centre occurs on site in both hospital and mobile trailer units. Initial (1989) quality control (QC) sensitometric tests revealed a large variation in film processor performance in the mobile unit. The clinical significance of these variations was assessed and acceptance limits for processor performance were determined. Abnormal mammograms were used as reference material and copied using high definition 35 mm film over a range of exposure settings. The copies were then matched with QC film density variation from the mobile unit. All films were subsequently ranked for spatial and contrast resolution. Optimal values for a processing time of 2 min (equivalent to a film transit time of 3 min and a developer time of 46 s) and a temperature of 36 degrees C were obtained. The widespread anomaly of reporting film transit time as processing time is highlighted. Use of mammogram copies as a means of measuring the influence of film processor variation is advocated. Careful monitoring of the mobile unit film processor performance has produced stable quality comparable with the hospital-based unit. The advantages of on-site film processing are outlined. The addition of a sensitometric step wedge to all mammography film stock as a means of assessing image quality is recommended.
Holmes, Lisa; Landsverk, John; Ward, Harriet; Rolls-Reutz, Jennifer; Saldana, Lisa; Wulczyn, Fred; Chamberlain, Patricia
2014-04-01
Estimating costs in child welfare services is critical as new service models are incorporated into routine practice. This paper describes a unit costing estimation system developed in England (cost calculator) together with a pilot test of its utility in the United States where unit costs are routinely available for health services but not for child welfare services. The cost calculator approach uses a unified conceptual model that focuses on eight core child welfare processes. Comparison of these core processes in England and in four counties in the United States suggests that the underlying child welfare processes generated from England were perceived as very similar by child welfare staff in California county systems with some exceptions in the review and legal processes. Overall, the adaptation of the cost calculator for use in the United States child welfare systems appears promising. The paper also compares the cost calculator approach to the workload approach widely used in the United States and concludes that there are distinct differences between the two approaches with some possible advantages to the use of the cost calculator approach, especially in the use of this method for estimating child welfare costs in relation to the incorporation of evidence-based interventions into routine practice.
NASA Astrophysics Data System (ADS)
Li-bo, Dang; Jia-chun, Wu; Yue-xing, Liu; Yuan, Chang; Bin, Peng
2017-04-01
Underground coal fires (UCFs) are a serious problem in the Xinjiang region of China. To deal with this problem efficiently, a UCF monitoring system based on wireless communication technology and remote sensing images was designed and implemented by the Xinjiang Coal Fire Fighting Bureau. The system consists of three parts: the data collecting unit, the data processing unit, and the data output unit. For the data collecting unit, temperature sensors and gas sensors were placed together at sites 1.5 meters below the surface of the coal fire zone. Temperature and gas readings from these sites were transferred immediately to the data processing unit. The processing unit was developed on top of GIS software. The processed data are saved on the computer in table format and can be displayed on screen as curves. A remote sensing image of each coal fire is saved in the system as the background for each monitoring site. From the monitoring data, changes in the coal fires are displayed directly, providing a solid basis for analyzing the combustion status of each coal fire, the gas emission, and the likely dominant direction of fire propagation, which supports decision-making for coal fire extinction.
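The three-part pipeline described in this record (collect, process, output) can be sketched minimally; all names and field choices below are hypothetical illustrations, not details of the Xinjiang system:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reading:
    """One measurement collected at a monitoring site (hypothetical schema)."""
    site_id: str
    timestamp: datetime
    temperature_c: float
    gas_ppm: float

def to_table_row(r: Reading) -> dict:
    """Processing-unit step: convert a collected reading into the
    table format the system stores for later display as curves."""
    return {
        "site": r.site_id,
        "time": r.timestamp.isoformat(),
        "temp_c": r.temperature_c,
        "gas_ppm": r.gas_ppm,
    }

row = to_table_row(Reading("S1", datetime(2017, 4, 1), 35.0, 12.0))
```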
Code of Federal Regulations, 2010 CFR
2010-07-01
... chemical that is produced coincidentally during the production of another chemical. Chemical manufacturing... manufacture an intended product. A chemical manufacturing process unit consists of more than one unit... ethylene process does not include the manufacture of SOCMI chemicals such as the production of butadiene...
40 CFR 424.35 - Standards of performance for new sources.
Code of Federal Regulations, 2012 CFR
2012-07-01
...— Metric units (kg/kkg processed) TSS 0.271 0.136 Chromium total .0054 .0027 Manganese total 0.054 .027 pH (1) (1) English units (lb/ton processed) TSS .542 .271 Chromium total .011 .0054 Manganese total .108...
40 CFR 424.35 - Standards of performance for new sources.
Code of Federal Regulations, 2013 CFR
2013-07-01
...— Metric units (kg/kkg processed) TSS 0.271 0.136 Chromium total .0054 .0027 Manganese total 0.054 .027 pH (1) (1) English units (lb/ton processed) TSS .542 .271 Chromium total .011 .0054 Manganese total .108...
40 CFR 424.35 - Standards of performance for new sources.
Code of Federal Regulations, 2014 CFR
2014-07-01
...— Metric units (kg/kkg processed) TSS 0.271 0.136 Chromium total .0054 .0027 Manganese total 0.054 .027 pH (1) (1) English units (lb/ton processed) TSS .542 .271 Chromium total .011 .0054 Manganese total .108...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Computer Reader finalization costs, cost per image, and Remote Bar Code Sorter leakage; (8) Percentage of... processing units costs for Carrier Route, High Density, and Saturation mail; (j) Mail processing unit costs...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Computer Reader finalization costs, cost per image, and Remote Bar Code Sorter leakage; (8) Percentage of... processing units costs for Carrier Route, High Density, and Saturation mail; (j) Mail processing unit costs...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Computer Reader finalization costs, cost per image, and Remote Bar Code Sorter leakage; (8) Percentage of... processing units costs for Carrier Route, High Density, and Saturation mail; (j) Mail processing unit costs...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Computer Reader finalization costs, cost per image, and Remote Bar Code Sorter leakage; (8) Percentage of... processing units costs for Carrier Route, High Density, and Saturation mail; (j) Mail processing unit costs...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Computer Reader finalization costs, cost per image, and Remote Bar Code Sorter leakage; (8) Percentage of... processing units costs for Carrier Route, High Density, and Saturation mail; (j) Mail processing unit costs...
40 CFR 424.35 - Standards of performance for new sources.
Code of Federal Regulations, 2010 CFR
2010-07-01
...— Metric units (kg/kkg processed) TSS 0.271 0.136 Chromium total .0054 .0027 Manganese total 0.054 .027 pH (1) (1) English units (lb/ton processed) TSS .542 .271 Chromium total .011 .0054 Manganese total .108...
40 CFR 424.35 - Standards of performance for new sources.
Code of Federal Regulations, 2011 CFR
2011-07-01
...— Metric units (kg/kkg processed) TSS 0.271 0.136 Chromium total .0054 .0027 Manganese total 0.054 .027 pH (1) (1) English units (lb/ton processed) TSS .542 .271 Chromium total .011 .0054 Manganese total .108...
Kritchevsky, S. B.; Braun, B. I.; Wong, E. S.; Solomon, S. L.; Steele, L.; Richards, C.; Simmons, B. P.
2001-01-01
The Evaluation of Processes and Indicators in Infection Control (EPIC) study assesses the relationship between hospital care and rates of central venous catheter-associated primary bacteremia in 54 intensive-care units (ICUs) in the United States and 14 other countries. Using ICU rather than the patient as the primary unit of statistical analysis permits evaluation of factors that vary at the ICU level. The design of EPIC can serve as a template for studies investigating the relationship between process and event rates across health-care institutions. PMID:11294704
An Examination of the USAF (Q,R) Policies for Managing Depot-Base Inventories.
1976-10-15
[Garbled OCR of equations (A3) and (C2); only the parameter definitions are recoverable.] Equation (A3) gives base j's economic order quantity (EOQ), where OC_j = base j's order processing cost = $5 and UC = unit acquisition cost of the given item. Equation (C2) defines a holding-cost term, where OC_D = depot order processing cost = $270.16; OC_j = base order processing cost; UC = unit acquisition cost of the item; and HC_D = cost to hold each unit of the given item per year at the depot, expressed as a fraction of its unit cost.
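The excerpt lists the parameters of an order-quantity model; a minimal sketch of the classic Wilson EOQ formula follows, on the assumption that equation (A3) takes this standard form (every number other than the $5 order-processing cost from the excerpt is hypothetical):

```python
import math

def eoq(annual_demand, order_cost, unit_cost, holding_rate):
    """Classic Wilson economic order quantity.

    holding_rate is the annual holding cost expressed as a fraction
    of unit cost, matching the excerpt's definition of HC_D.
    """
    return math.sqrt(2.0 * annual_demand * order_cost
                     / (unit_cost * holding_rate))

# Illustrative call: $5 base order-processing cost from the excerpt;
# demand, unit cost, and holding rate are made up.
q = eoq(annual_demand=120, order_cost=5.0, unit_cost=25.0, holding_rate=0.2)
```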
Recycle Requirements for NASA's 30 cm Xenon Ion Thruster
NASA Technical Reports Server (NTRS)
Pinero, Luis R.; Rawlin, Vincent K.
1994-01-01
Electrical breakdowns have been observed during ion thruster operation. These breakdowns, or arcs, can be caused by several conditions. In flight systems, the power processing unit must be designed to handle these faults autonomously. This has a strong impact on power processor requirements and must be understood fully for the power processing unit being designed for the NASA Solar Electric Propulsion Technology Application Readiness program. In this study, fault conditions were investigated using a NASA 30 cm ion thruster and a power console. Power processing unit output specifications were defined based on the breakdown phenomena identified and characterized.
Evaluation of Selected Chemical Processes for Production of Low-cost Silicon, Phase 3
NASA Technical Reports Server (NTRS)
Blocher, J. M., Jr.; Browning, M. F.
1979-01-01
The construction of the 50 MT Si/year experimental process system development unit was deferred until FY 1980, and the fluidized bed, zinc vaporizer, by-product condenser, and electrolytic cell were combined with auxiliary units, capable of supporting 8-hour batchwise operation, to form the process development unit (PDU), which is scheduled to be in operation by October 1, 1979. The design of the PDU and the objectives of its operation are discussed. Experimental program support activities described relate to: (1) a wetted-wall condenser; (2) fluidized-bed modeling; (3) zinc chloride electrolysis; and (4) the zinc vaporizer.
Microcomponent chemical process sheet architecture
Wegeng, Robert S.; Drost, M. Kevin; Call, Charles J.; Birmingham, Joseph G.; McDonald, Carolyn Evans; Kurath, Dean E.; Friedrich, Michele
1998-01-01
The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections, or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents performs at least one chemical process unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents, thereby combining at least two unit operations to achieve a system operation.
Microcomponent chemical process sheet architecture
Wegeng, R.S.; Drost, M.K.; Call, C.J.; Birmingham, J.G.; McDonald, C.E.; Kurath, D.E.; Friedrich, M.
1998-09-22
The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections, or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents performs at least one chemical process unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents, thereby combining at least two unit operations to achieve a system operation. 26 figs.
NASA Astrophysics Data System (ADS)
Chung, Shin Kee; Wen, Linqing; Blair, David; Cannon, Kipp; Datta, Amitava
2010-07-01
We report a novel application of a graphics processing unit (GPU) for the purpose of accelerating the search pipelines for gravitational waves from coalescing binaries of compact objects. A speed-up of 16-fold in total has been achieved with an NVIDIA GeForce 8800 Ultra GPU card compared with one core of a 2.5 GHz Intel Q9300 central processing unit (CPU). We show that substantial improvements are possible and discuss the reduction in CPU count required for the detection of inspiral sources afforded by the use of GPUs.
Temporal Processing in the Visual Cortex of the Awake and Anesthetized Rat.
Aasebø, Ida E J; Lepperød, Mikkel E; Stavrinou, Maria; Nøkkevangen, Sandra; Einevoll, Gaute; Hafting, Torkel; Fyhn, Marianne
2017-01-01
The activity pattern and temporal dynamics within and between neuron ensembles are essential features of information processing and believed to be profoundly affected by anesthesia. Much of our general understanding of sensory information processing, including computational models aimed at mathematically simulating sensory information processing, relies on parameters derived from recordings conducted on animals under anesthesia. Due to the high variety of neuronal subtypes in the brain, population-based estimates of the impact of anesthesia may conceal unit- or ensemble-specific effects of the transition between states. Using tetrodes chronically implanted into primary visual cortex (V1) of rats, we conducted extracellular recordings of single units and followed the same cell ensembles in the awake and anesthetized states. We found that the transition from wakefulness to anesthesia involves unpredictable changes in temporal response characteristics. The latency of single-unit responses to visual stimulation was delayed in anesthesia, with large individual variations between units. Pair-wise correlations between units increased under anesthesia, indicating more synchronized activity. Further, the units within an ensemble show reproducible temporal activity patterns in response to visual stimuli that change between states, suggesting state-dependent sequences of activity. The current dataset, with recordings from the same neural ensembles across states, is well suited for validating and testing computational network models. This can lead to testable predictions, bring about a deeper understanding of the experimental findings, and improve models of neural information processing. Here, we exemplify such a workflow using a Brunel network model.
Temporal Processing in the Visual Cortex of the Awake and Anesthetized Rat
Aasebø, Ida E. J.; Stavrinou, Maria; Nøkkevangen, Sandra; Einevoll, Gaute
2017-01-01
Abstract The activity pattern and temporal dynamics within and between neuron ensembles are essential features of information processing and believed to be profoundly affected by anesthesia. Much of our general understanding of sensory information processing, including computational models aimed at mathematically simulating sensory information processing, relies on parameters derived from recordings conducted on animals under anesthesia. Due to the high variety of neuronal subtypes in the brain, population-based estimates of the impact of anesthesia may conceal unit- or ensemble-specific effects of the transition between states. Using tetrodes chronically implanted into primary visual cortex (V1) of rats, we conducted extracellular recordings of single units and followed the same cell ensembles in the awake and anesthetized states. We found that the transition from wakefulness to anesthesia involves unpredictable changes in temporal response characteristics. The latency of single-unit responses to visual stimulation was delayed in anesthesia, with large individual variations between units. Pair-wise correlations between units increased under anesthesia, indicating more synchronized activity. Further, the units within an ensemble show reproducible temporal activity patterns in response to visual stimuli that change between states, suggesting state-dependent sequences of activity. The current dataset, with recordings from the same neural ensembles across states, is well suited for validating and testing computational network models. This can lead to testable predictions, bring about a deeper understanding of the experimental findings, and improve models of neural information processing. Here, we exemplify such a workflow using a Brunel network model. PMID:28791331
NASA Technical Reports Server (NTRS)
Batcher, K. E.; Eddey, E. E.; Faiss, R. O.; Gilmore, P. A.
1981-01-01
The processing of synthetic aperture radar (SAR) signals using the massively parallel processor (MPP) is discussed. The fast Fourier transform convolution procedures employed in the algorithms are described. The MPP architecture comprises an array unit (ARU) which processes arrays of data; an array control unit which controls the operation of the ARU and performs scalar arithmetic; a program and data management unit which controls the flow of data; and a unique staging memory (SM) which buffers and permutes data. The ARU contains a 128 by 128 array of bit-serial processing elements (PEs). Two-by-four subarrays of PEs are packaged in a custom VLSI HCMOS chip. The staging memory is a large multidimensional-access memory which buffers and permutes data flowing through the system. Efficient SAR processing is achieved via ARU communication paths and SM data manipulation. Real time processing capability can be realized via a multiple ARU, multiple SM configuration.
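The FFT convolution procedure this record names can be sketched in a few lines; this is a generic zero-padded FFT convolution for real signals, not the MPP implementation itself:

```python
import numpy as np

def fft_convolve(signal, kernel):
    """Linear convolution via the FFT, zero-padded to the full output
    length so the circular convolution of the FFT does not wrap around.
    This is the core operation in FFT-based pulse compression."""
    n = len(signal) + len(kernel) - 1
    return np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(kernel, n), n)

x = np.array([1.0, 2.0, 3.0])
h = np.array([0.0, 1.0])
y = fft_convolve(x, h)  # matches direct convolution np.convolve(x, h)
```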
ISO 9001 in a neonatal intensive care unit (NICU).
Vitner, Gad; Nadir, Erez; Feldman, Michael; Yurman, Shmuel
2011-01-01
The aim of this paper is to present the process for approving and certifying a neonatal intensive care unit to ISO 9001 standards. The process started with the department head's decision to improve service quality before deciding to achieve ISO 9001 certification. Department processes were mapped and quality management mechanisms were developed. Process control and performance measurements were defined and implemented to monitor the daily work. A service satisfaction review was conducted to get feedback from families. In total, 28 processes and related work instructions were defined. Process yields showed service improvements. Family satisfaction improved. The paper is based on preparing only one neonatal intensive care unit for the ISO 9001 standard. The case study should act as an incentive for hospital managers aiming to improve service quality based on the ISO 9001 standard. ISO 9001 is becoming a recommended tool to improve clinical service quality.
ERIC Educational Resources Information Center
Yan, Kun; Berliner, David C.
2011-01-01
No empirical research has focused solely upon understanding the stress and coping processes of Chinese international students in the United States. This qualitative inquiry examines the individual-level variables that affect the stress-coping process of Chinese international students and how they conceptualize and adapt to their stress at an…
40 CFR 63.7491 - Are any boilers or process heaters not subject to this subpart?
Code of Federal Regulations, 2014 CFR
2014-07-01
... generating unit (EGU) covered by subpart UUUUU of this part. (b) A recovery boiler or furnace covered by... vessels. This does not include units that provide heat or steam to a process at a research and development... the average annual heat input during any 3 consecutive calendar years to the boiler or process heater...
40 CFR 63.7491 - Are any boilers or process heaters not subject to this subpart?
Code of Federal Regulations, 2013 CFR
2013-07-01
... generating unit (EGU) covered by subpart UUUUU of this part. (b) A recovery boiler or furnace covered by... vessels. This does not include units that provide heat or steam to a process at a research and development... the average annual heat input during any 3 consecutive calendar years to the boiler or process heater...
NASA Astrophysics Data System (ADS)
Guo, Qing-chun; Zhou, Hong; Wang, Cheng-tao; Zhang, Wei; Lin, Peng-yu; Sun, Na; Ren, Luquan
2009-04-01
Stimulated by the cuticles of soil animals, an attempt was made to improve the wear resistance of compact graphite cast iron (CGI) with biomimetic units on the surface, using a biomimetic coupled laser remelting process in air and under water films of various thicknesses. The microstructures of the biomimetic units were examined by scanning electron microscopy, and X-ray diffraction was used to describe the microstructure and identify the phases in the melted zone. Microhardness was measured, and the wear behavior of the biomimetic specimens as a function of the processing medium and of water film thickness was investigated under dry sliding conditions. The results indicated that the microstructure zones in the biomimetic specimens processed with a water film were refined compared with those processed in air and showed wear resistance improved by 60%; the microhardness of the biomimetic units was also improved significantly. The water film produced finer microstructures and a much more regular grain shape in the biomimetic units, which played a key role in improving the friction properties and wear resistance of the CGI.
Real, Kevin; Fay, Lindsey; Isaacs, Kathy; Carll-White, Allison; Schadler, Aric
2018-01-01
This study utilizes systems theory to understand how changes to physical design structures impact communication processes and patient and staff design-related outcomes. Many scholars and researchers have noted the importance of communication and teamwork for patient care quality. Few studies have examined changes to nursing station design within a systems theory framework. This study employed a multimethod, before-and-after, quasi-experimental research design. Nurses completed surveys in centralized units and later in decentralized units (N = 26 pre, N = 51 post). Patients completed surveys in centralized units (N = 62 pre) and later in decentralized units (N = 49 post). Surveys included quantitative measures and qualitative open-ended responses. Patients preferred the decentralized units because of larger single-occupancy rooms, greater privacy/confidentiality, and overall satisfaction with design. Nurses had a more complex response. Nurses approved of the patient rooms, unit environment, and noise levels in decentralized units. However, they reported reduced access to support spaces, lower levels of team/mentoring communication, and less satisfaction with design than in centralized units. Qualitative findings supported these results. Nurses were more positive about centralized units and patients were more positive toward decentralized units. The results of this study suggest a need to understand how system components operate in concert. A major contribution of this study is the inclusion of patient satisfaction with design, an important yet overlooked factor in patient satisfaction. Healthcare design researchers and practitioners may consider how changing system interdependencies can lead to unexpected changes to communication processes and system outcomes in complex systems.
ARS labs update to California Cotton Ginners and Growers
USDA-ARS?s Scientific Manuscript database
There are four USDA-ARS labs involved in cotton harvesting, processing & fiber quality research; The Southwestern Cotton Ginning Research Laboratory (Mesilla Park, NM); The Cotton Production and Processing Unit (Lubbock, TX); The Cotton Ginning Research Unit (Stoneville, MS); and The Cotton Structur...
Code of Federal Regulations, 2010 CFR
2010-07-01
... functional group. Fluorotelomers means the products of telomerization, which is the reaction of a telogen... relevant polymer-forming reaction used for the particular process. Monomer Unit means the reacted form of... monomer units but which, under the relevant reaction conditions used for the particular process, cannot...
7 CFR 60.128 - United States country of origin.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false United States country of origin. 60.128 Section 60.128... FOR FISH AND SHELLFISH General Provisions Definitions § 60.128 United States country of origin. United...: From fish or shellfish hatched, raised, harvested, and processed in the United States, and that has not...
7 CFR 60.128 - United States country of origin.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false United States country of origin. 60.128 Section 60.128... FOR FISH AND SHELLFISH General Provisions Definitions § 60.128 United States country of origin. United...: From fish or shellfish hatched, raised, harvested, and processed in the United States, and that has not...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-23
... wastes ERUs were designed to burn. Energy Recovery Units (i.e., units that would be boilers and process... and 241 Commercial and Industrial Solid Waste Incineration Units: Reconsideration and Proposed... 2060-AR15 and 2050-AG44 Commercial and Industrial Solid Waste Incineration Units: Reconsideration and...
40 CFR 63.1281 - Control equipment requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... dehydration unit baseline operations (as defined in § 63.1271). Records of glycol dehydration unit baseline... the Administrator's satisfaction, the conditions for which glycol dehydration unit baseline operations... emission reduction of 95.0 percent for the glycol dehydration unit process vent. Only modifications in...
40 CFR 63.1281 - Control equipment requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... dehydration unit baseline operations (as defined in § 63.1271). Records of glycol dehydration unit baseline... the Administrator's satisfaction, the conditions for which glycol dehydration unit baseline operations... emission reduction of 95.0 percent for the glycol dehydration unit process vent. Only modifications in...
Cisneros, Carolina; Díaz-Campos, Rocío Magdalena; Marina, Núria; Melero, Carlos; Padilla, Alicia; Pascual, Silvia; Pinedo, Celia; Trisán, Andrea
2017-01-01
This paper, developed by consensus of staff physicians of accredited asthma units for the management of severe asthma, presents information on the process and requirements for already-existing asthma units to achieve official accreditation by the Spanish Society of Pneumology and Thoracic Surgery (SEPAR). Three levels of specialized asthma care have been established based on available resources, which include specialized units for highly complex asthma, specialized asthma units, and basic asthma units. Regardless of the level of accreditation obtained, the distinction of “excellence” could be granted when more requirements in the areas of provision of care, technical and human resources, training in asthma, and teaching and research activities were met at each level. The Spanish experience in the process of accreditation of specialized asthma units, particularly for the care of patients with difficult-to-control asthma, may be applicable to other health care settings. PMID:28533690
Jeffery, Alvin D; Mosier, Sammie; Baker, Allison; Korwek, Kimberly; Borum, Cindy; Englebright, Jane
2018-02-01
Hospital medical-surgical (M/S) nursing units are responsible for up to 28 million encounters annually, yet receive little attention from professional organizations and national initiatives targeted to improve quality and performance. We sought to develop a framework recognizing high-performing units within our large hospital system. This was a retrospective data analysis of M/S units throughout a 168-hospital system. Measures represented patient experience, employee engagement, staff scheduling, nursing-sensitive patient outcomes, professional practices, and clinical process measures. Four hundred ninety units from 129 hospitals contributed information to test the framework. A manual scoring system identified the top 5% and recognized them as a "Unit of Distinction." Secondary analyses with machine learning provided validation of the proposed framework. Similar to external recognition programs, this framework and process provide a holistic evaluation useful for meaningful recognition and lay the groundwork for benchmarking in improvement efforts.
A data distributed parallel algorithm for ray-traced volume rendering
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Painter, James S.; Hansen, Charles D.; Krogh, Michael F.
1993-01-01
This paper presents a divide-and-conquer ray-traced volume rendering algorithm and a parallel image compositing method, along with their implementation and performance on the Connection Machine CM-5 and on networked workstations. The algorithm distributes both the data and the computations to individual processing units to achieve fast, high-quality rendering of high-resolution data. The volume data, once distributed, is left intact. The processing nodes perform local ray tracing of their subvolumes concurrently. No communication between processing units is needed during this local ray-tracing process. A subimage is generated by each processing unit, and the final image is obtained by compositing subimages in the proper order, which can be determined a priori. Test results on both the CM-5 and a group of networked workstations demonstrate the practicality of our rendering algorithm and compositing method.
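The compositing step this record describes (subimages combined in an order known a priori) can be sketched as a back-to-front "over" operation; the sketch assumes premultiplied RGBA subimages sorted farthest-first, a representation detail not specified in the abstract:

```python
import numpy as np

def composite(subimages):
    """Back-to-front 'over' compositing of premultiplied RGBA subimages.

    subimages must already be sorted in the a priori visibility order,
    farthest subvolume first; each array has shape (H, W, 4).
    """
    out = np.zeros_like(subimages[0])
    for img in subimages:
        alpha = img[..., 3:4]          # coverage of the nearer layer
        out = img + (1.0 - alpha) * out  # nearer layer goes 'over' result
    return out

# One pixel: opaque red behind a half-transparent green layer.
back = np.array([[[1.0, 0.0, 0.0, 1.0]]])
front = np.array([[[0.0, 0.5, 0.0, 0.5]]])  # premultiplied by alpha = 0.5
final = composite([back, front])
```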
NASA Technical Reports Server (NTRS)
Bryant, W. H.; Morrell, F. R.
1981-01-01
Attention is given to a redundant strapdown inertial measurement unit for integrated avionics. The system consists of four two-degree-of-freedom tuned-rotor gyros and four two-degree-of-freedom accelerometers in a skewed and separable semi-octahedral array. The unit is coupled through instrument electronics to two flight computers which compensate sensor errors. The flight computers are interfaced to the microprocessors and execute the failure detection, isolation, and redundancy management algorithms as well as the flight control/navigation algorithms. The unit provides dual fail-operational performance and has data processing frequencies consistent with integrated avionics concepts presently planned.
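Failure detection with redundant inertial sensors is commonly built on parity-space checks; the sketch below uses a simplified single-axis sensor arrangement (the measurement matrix, rates, and fault size are all hypothetical, not the paper's semi-octahedral two-degree-of-freedom configuration). A residual near zero means the redundant sensors agree; a large residual flags a failed instrument:

```python
import numpy as np

def parity_residual(H, z):
    """Project measurements z onto the left null space (parity space) of the
    measurement matrix H; a nonzero residual indicates sensor disagreement."""
    U, _, _ = np.linalg.svd(H)
    V = U[:, H.shape[1]:].T   # rows span the parity space (V @ H == 0)
    return V @ z

# Hypothetical skewed arrangement of four single-axis rate sensors.
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.577, 0.577, 0.577]])
omega = np.array([0.1, -0.2, 0.05])   # true body rate (hypothetical)
z = H @ omega                          # fault-free measurements
r_ok = parity_residual(H, z)

z_faulty = z.copy()
z_faulty[3] += 0.5                     # bias failure on the skewed sensor
r_fault = parity_residual(H, z_faulty)
```

With four measurements of a three-axis rate, the parity space is one-dimensional, so a single residual suffices to detect (though not isolate) a fault; more sensors enlarge the parity space and enable isolation.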
The process of implementation of emergency care units in Brazil.
O'Dwyer, Gisele; Konder, Mariana Teixeira; Reciputti, Luciano Pereira; Lopes, Mônica Guimarães Macau; Agostinho, Danielle Fernandes; Alves, Gabriel Farias
2017-12-11
To analyze the process of implementation of emergency care units in Brazil. We have carried out a documentary analysis, with interviews with twenty-four state urgency coordinators and a panel of experts. We have analyzed issues related to policy background and trajectory, players involved in the implementation, expansion process, advances, limits, and implementation difficulties, and state coordination capacity. We have used the theoretical framework of the analysis of strategic conduct from the Giddens theory of structuration. Emergency care units have been implemented since 2007, initially in the Southeast region, and 446 emergency care units were present in all Brazilian regions in 2016. Currently, 620 emergency care units are under construction, which indicates an expectation of expansion. Federal funding was a strong driver for the implementation. The states have planned their emergency care units, but the existence of direct negotiation between municipalities and the Union has contributed to the significant number of emergency care units that have been built but do not operate. In relation to the urgency network, there is tension with hospitals because of the lack of beds in the country, which generates hospitalizations in the emergency care units. The management of emergency care units is predominantly municipal, and most of the emergency care units are located outside the capitals and classified as Size III. The main challenges identified were under-funding and difficulty in recruiting physicians. The emergency care unit has the merit of having technological resources and being architecturally differentiated, but it will only succeed within an urgency network. Federal induction has generated contradictory responses, since not all states consider the emergency care unit a priority. The strengthening of state management has been identified as a challenge for the implementation of the urgency network.
Westerhof, Gerben J; Whitbourne, Susan Krauss; Freeman, Gillian P
2012-01-01
To study the aging self, that is, conceptions of one's own aging process, in relation to identity processes and self-esteem in the United States and the Netherlands. As the liberal American system has a stronger emphasis on individual responsibility and youthfulness than the social-democratic Dutch system, we expect that youthful and positive perceptions of one's own aging process are more important in the United States than in the Netherlands. Three hundred and nineteen American and 235 Dutch persons between 40 and 85 years participated in the study. A single question on age identity and the Personal Experience of Aging Scale measured aspects of the aging self. The Identity and Experiences Scale measured identity processes and Rosenberg's scale measured self-esteem. A youthful age identity and more positive personal experiences of aging were related to identity processes and self-esteem. These conceptions of one's own aging process also mediate the relation between identity processes and self-esteem. This mediating effect is stronger in the United States than in the Netherlands. As expected, the self-enhancing function of youthful and positive aging perceptions is stronger in the liberal American system than in the social-democratic Dutch welfare system. The aging self should therefore be studied in its cultural context.
NASA Astrophysics Data System (ADS)
Sanna, N.; Baccarelli, I.; Morelli, G.
2009-12-01
SCELib is a computer program which implements the Single Center Expansion (SCE) method to describe molecular electronic densities and the interaction potentials between a charged projectile (electron or positron) and a target molecular system. The first version (CPC Catalog identifier ADMG_v1_0) was submitted to the CPC Program Library in 2000, and version 2.0 (ADMG_v2_0) was submitted in 2004. We here announce the new release 3.0, which presents additional features with respect to the previous versions, aimed at significantly enhancing its capability to deal with larger molecular systems. SCELib 3.0 allows for ab initio effective core potential (ECP) calculations of the molecular wavefunctions to be used in the SCE method in addition to the standard all-electron description of the molecule. The list of supported architectures has been updated and the code has been ported to platforms based on accelerating coprocessors, such as the NVIDIA GPGPU, and the new parallel model adopted is able to efficiently run on a mixed many-core computing system. Program summary: Program title: SCELib3.0 Catalogue identifier: ADMG_v3_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADMG_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2 018 862 No. of bytes in distributed program, including test data, etc.: 4 955 014 Distribution format: tar.gz Programming language: C Compilers used: xlc V8.x, Intel C V10.x, Portland Group V7.x, nvcc V2.x Computer: All SMP platforms based on AIX, Linux and SUNOS operating systems over SPARC, POWER, Intel Itanium2, X86, em64t and Opteron processors Operating system: SUNOS, IBM AIX, Linux RedHat (Enterprise), Linux SuSE (SLES) Has the code been vectorized or parallelized?: Yes.
Number of processors used: 1 to 32 (CPU or GPU). RAM used: up to 32 GB depending on the molecular system and runtime parameters. Classification: 16.5 Catalogue identifier of previous version: ADMG_v2_0 Journal reference of previous version: Comput. Phys. Comm. 162 (2004) 51 External routines: CUDA libraries (SDK V2.x). Does the new version supersede the previous version?: Yes. Nature of problem: In this set of codes an efficient procedure is implemented to describe the wavefunction and related molecular properties of a polyatomic molecular system within the Single Center Expansion (SCE) approximation. The resulting SCE wavefunction, electron density, electrostatic and correlation/polarization potentials can then be used in a wide variety of applications, such as electron-molecule scattering calculations, quantum chemistry studies, biomodelling and drug design. Solution method: The polycentre Hartree-Fock solution for a molecule of arbitrary geometry, based on a linear combination of Gaussian-Type Orbitals (GTO), is expanded over a single center, typically the Center Of Mass (C.O.M.), by means of a Gauss-Legendre/Chebyshev quadrature over the θ,φ angular coordinates. The resulting SCE numerical wavefunction is then used to calculate the one-particle electron density, the electrostatic potential and two different models for the correlation/polarization potentials induced by the impinging electron, which have the correct asymptotic behavior for the leading dipole molecular polarizabilities. Reasons for new version: The present release of SCELib allows the study of larger molecular systems with respect to the previous versions by means of theoretical and technological advances, with the first implementation of the code over a many-core computing system.
Summary of revisions: The major features added with respect to SCELib Version 2.0 are: molecular wavefunctions obtained via the Los Alamos (Hay and Wadt) LAN ECP plus DZ description of the inner-shell electrons (for Na-La and Hf-Bi elements) [1] can now be single-center-expanded; this addition required modifications of (i) the filtering code readgau, (ii) the main reading function setinp, (iii) the sphint code (including changes to the CalcMO code), (iv) the densty code, and (v) the vst code. The classes of platforms supported now include two more architectures based on accelerated coprocessors (NVIDIA G-Series GPGPU and ClearSpeed e720; the ClearSpeed version is experimental, an initial preliminary porting of the sphint() function not intended for production runs - see the code documentation for additional detail). A single-precision representation for real numbers in the SCE mapping of the GTOs (sphint code) has been implemented in the new code; the Ih symmetry point group for the molecular systems has been added to those already allowed in the SCE procedure; the orientation of the molecular axis system for the Cs (planar) symmetry has been changed in accord with the standard orientation adopted by the latest version of the quantum chemistry code (Gaussian 03 [2]) used to generate the input multi-centre molecular wavefunctions (z-axis perpendicular to the symmetry plane); the abelian subgroup for the Cs point group has been changed from C1 to Cs; and atomic basis functions including g-type GTOs can now be single-center-expanded. Restrictions: Depending on the molecular system under study and on the operating conditions, the program may or may not fit into available RAM memory. In this case a feature of the program is to memory-map a disk file in order to efficiently access the memory data through a disk device. The parallel GP-GPU implementation limits the number of CPU threads to the number of GPU cores present.
Running time: The execution time strongly depends on the molecular target description and on the hardware/OS chosen; it is directly proportional to the (r,θ,φ) grid size and to the number of angular basis functions used. Thus, from the program printout of the main arrays' memory occupancy, the user can approximately derive the expected computer time needed for a given calculation executed in serial mode. For parallel executions the overall efficiency must further be taken into account, and this depends on the number of processors used as well as on the parallel architecture chosen, so a simple general law is at present not determinable. References: [1] P.J. Hay, W.R. Wadt, J. Chem. Phys. 82 (1985) 270; W.R. Wadt, P.J. Hay, J. Chem. Phys. 82 (1985) 284; P.J. Hay, W.R. Wadt, J. Chem. Phys. 82 (1985) 299. [2] M.J. Frisch et al., Gaussian 03, revision C.02, Gaussian, Inc., Wallingford, CT, 2004.
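As a minimal 1-D analogue of the Gauss-Legendre angular quadrature used in the SCE procedure, the sketch below projects a function of cos θ onto Legendre polynomials; it illustrates the quadrature idea only and is not code from SCELib:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def legendre_coefficients(f, lmax):
    """Project f(x), x = cos(theta), onto Legendre polynomials P_l using
    Gauss-Legendre quadrature (exact for the polynomial integrands here)."""
    x, w = leggauss(lmax + 1)   # quadrature nodes and weights on [-1, 1]
    return np.array([
        (2 * l + 1) / 2.0 * np.sum(w * f(x) * Legendre.basis(l)(x))
        for l in range(lmax + 1)
    ])

# f(theta) = cos(theta) is a pure l = 1 term, so only c_1 should survive.
c = legendre_coefficients(lambda x: x, lmax=3)
```

The full SCE method does the analogous projection in two angles (θ,φ) onto spherical harmonics, which is why the quadrature grid size drives the running time noted above.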
Sivakumar, Venkatasubramanian; Swaminathan, Gopalaraman; Rao, Paruchuri Gangadhar; Ramasami, Thirumalachari
2009-01-01
Ultrasound is a sound wave with a frequency above the upper limit of the human audible range of 16 Hz to 16 kHz. In recent years, numerous unit operations involving physical as well as chemical processes have been reported to be enhanced by ultrasonic irradiation. Benefits include improvement in process efficiency, reduction in process time, operation under milder conditions, and avoidance of some toxic chemicals, leading to cleaner processing. Ultrasound could thus be an advanced means of process augmentation. The important point is that ultrasonic irradiation is a physical method of activation rather than one relying on chemical entities. Detailed studies have been made of the unit operations related to leather, such as diffusion rate enhancement through the porous leather matrix, cleaning, degreasing, tanning, dyeing, fatliquoring, the oil-water emulsification process, and solid-liquid tannin extraction from vegetable tanning materials, as well as the precipitation reaction in wastewater treatment. The fundamental mechanism involved in these processes is ultrasonic cavitation in liquid media. In addition, some process-specific mechanisms of enhancement also exist. For instance, possible real-time reversible pore-size changes during ultrasound propagation through the skin/leather matrix could be a reason for the diffusion rate enhancement in leather processing, as reported for the first time. Exhaustive scientific research has been carried out in this area by our group in the Chemical Engineering Division of CLRI, and most of these benefits have been proven in publications in valued peer-reviewed international journals. The overall results indicate about a 2- to 5-fold increase in process efficiency due to ultrasound under the given process conditions for the various unit operations, with additional benefits. Scale-up studies are underway for converting these concepts into real, viable larger-scale operations.
In the present paper, a summary of our research findings from employing this technique in various unit operations such as cleaning, diffusion, emulsification, particle-size reduction, solid-liquid leaching (tannin and natural dye extraction), and precipitation is presented.
ERIC Educational Resources Information Center
Wiley, Catherina L.
2003-01-01
Describes a unit to study the cycling of matter and energy through speleology using cooperative learning groups. Integrates the topic with zoology, biogeochemistry, paleontology, and meteorology. Includes a sample rubric for a salt block cave presentation, unit outline, and processes for studying matter and energy processes in caves. (Author/KHR)
20 CFR 655.81 - Application filing transition.
Code of Federal Regulations, 2010 CFR
2010-04-01
... EMPLOYMENT OF FOREIGN WORKERS IN THE UNITED STATES Labor Certification Process and Enforcement of Attestations for Temporary Employment in Occupations Other Than Agriculture or Registered Nursing in the United... intended employment prior to the effective date of these regulations, the SWAs shall continue to process...
20 CFR 655.1 - Purpose and scope of subpart A.
Code of Federal Regulations, 2010 CFR
2010-04-01
... EMPLOYMENT OF FOREIGN WORKERS IN THE UNITED STATES Labor Certification Process and Enforcement of Attestations for Temporary Employment in Occupations Other Than Agriculture or Registered Nursing in the United... governing the labor certification process for the temporary employment of nonimmigrant foreign workers in...
20 CFR 655.3 - Special procedures.
Code of Federal Regulations, 2010 CFR
2010-04-01
... FOREIGN WORKERS IN THE UNITED STATES Labor Certification Process and Enforcement of Attestations for Temporary Employment in Occupations Other Than Agriculture or Registered Nursing in the United States (H-2B Workers) § 655.3 Special procedures. (a) Systematic process. This subpart provides procedures for the...
COST ESTIMATION MODELS FOR DRINKING WATER TREATMENT UNIT PROCESSES
Cost models for unit processes typically utilized in a conventional water treatment plant and in package treatment plant technology are compiled in this paper. The cost curves are represented as a function of specified design parameters and are categorized into four major catego...
40 CFR 63.2250 - What are the general requirements?
Code of Federal Regulations, 2012 CFR
2012-07-01
..., except during periods of process unit or control device startup, shutdown, and malfunction; prior to process unit initial startup; and during the routine control device maintenance exemption specified in... practice requirements are not operating, or during periods of startup, shutdown, and malfunction. Startup...
40 CFR 63.2250 - What are the general requirements?
Code of Federal Regulations, 2013 CFR
2013-07-01
..., except during periods of process unit or control device startup, shutdown, and malfunction; prior to process unit initial startup; and during the routine control device maintenance exemption specified in... practice requirements are not operating, or during periods of startup, shutdown, and malfunction. Startup...
40 CFR 63.2250 - What are the general requirements?
Code of Federal Regulations, 2014 CFR
2014-07-01
..., except during periods of process unit or control device startup, shutdown, and malfunction; prior to process unit initial startup; and during the routine control device maintenance exemption specified in... practice requirements are not operating, or during periods of startup, shutdown, and malfunction. Startup...
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Novo, E. M. L. M.
1983-01-01
The effects of the seasonal variation of illumination on digital processing of LANDSAT images are evaluated. Two sets of LANDSAT data referring to orbit 150 and row 28 were selected, with illumination parameters varying from 43 deg to 64 deg in azimuth and from 30 deg to 36 deg in solar elevation, respectively. The IMAGE-100 system permitted the digital processing of LANDSAT data. Original images were transformed by means of digital filtering so as to enhance their spatial features. The resulting images were used to obtain an unsupervised classification of relief units. Topographic variables (declivity, altitude, relief range, and slope length) were used to identify the true relief units existing on the ground. The LANDSAT overpass data show that digital processing is highly affected by illumination geometry, and there is no correspondence between relief units as defined by spectral features and those resulting from topographic features.
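The spatial-feature enhancement step can be illustrated with a generic Laplacian high-boost filter; this is an assumption for illustration, as the abstract does not specify which IMAGE-100 filter was applied:

```python
import numpy as np

def high_boost(band):
    """Add a 4-neighbor Laplacian edge image to the band to sharpen
    spatial features (borders handled by edge replication)."""
    b = band.astype(float)
    p = np.pad(b, 1, mode="edge")
    lap = (4.0 * p[1:-1, 1:-1]
           - p[:-2, 1:-1] - p[2:, 1:-1]
           - p[1:-1, :-2] - p[1:-1, 2:])
    return b + lap

# A step edge, e.g. the boundary between two relief units.
band = np.zeros((5, 5))
band[:, 2:] = 10.0
enhanced = high_boost(band)   # the edge is overshot on both sides
```

The overshoot on each side of the edge is what makes linear boundaries such as relief-unit limits stand out in the filtered image.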
NASA Astrophysics Data System (ADS)
Meng, Chao; Zhou, Hong; Cong, Dalong; Wang, Chuanwei; Zhang, Peng; Zhang, Zhihui; Ren, Luquan
2012-06-01
The thermal fatigue behavior of hot-work tool steel processed by a biomimetic coupled laser remelting process shows a remarkable improvement compared to that of untreated samples. The 'dowel pin effect', the 'dam effect' and the 'fence effect' of the non-smooth units are the main reasons for this conspicuous improvement in thermal fatigue behavior. In order to further enhance the 'dowel pin effect', the 'dam effect' and the 'fence effect', this study investigated the influence of different unit morphologies (including 'prolate', 'U' and 'V' morphologies) and of the same unit morphology in different sizes on the thermal fatigue behavior of H13 hot-work tool steel. The results showed that the 'U' morphology unit had the optimum thermal fatigue behavior, followed by the 'V' morphology, which was better than the 'prolate' morphology; when the unit morphology was identical, the thermal fatigue behavior of samples with large unit sizes was better than that with small sizes.
Laser Processed Heat Exchangers
NASA Technical Reports Server (NTRS)
Hansen, Scott
2017-01-01
The Laser Processed Heat Exchanger project will investigate the use of laser processed surfaces to reduce mass and volume in liquid/liquid heat exchangers as well as the replacement of the harmful and problematic coatings of the Condensing Heat Exchangers (CHX). For this project, two scale unit test articles will be designed, manufactured, and tested. These two units are a high efficiency liquid/liquid HX and a high reliability CHX.
ERIC Educational Resources Information Center
Balci, Ceyda; Yenice, Nilgun
2016-01-01
The aim of this study is to analyse the effects of the scientific argumentation-based learning process on eighth grade students' achievement in the unit of "cell division and inheritance". It also deals with the effects of this process on their comprehension of the nature of scientific knowledge, their willingness to take part in…
ERIC Educational Resources Information Center
Nebraska Univ., Lincoln. Dept. of Agricultural Education.
Designed for use with high school juniors, this agribusiness curriculum for city schools contains twenty-four units of instruction in the areas of agricultural processing and companion animals. Among the units included in the curriculum are (1) The Meat Processing Industry, (2) Retail Cuts of Meat, (3) Buying Meat and Portion Control, (4) Dairy…
NASA Technical Reports Server (NTRS)
1981-01-01
The engineering design, fabrication, assembly, operation, economic analysis, and process support research and development for an Experimental Process System Development Unit for producing semiconductor-grade silicon using the silane-to-silicon process are reported. The design activity was completed. About 95% of purchased equipment was received. The draft of the operations manual was about 50% complete and the design of the free-space system continued. The system using silicon powder transfer, melting, and shotting on a pseudocontinuous basis was demonstrated.
Computer Vision for Artificially Intelligent Robotic Systems
NASA Astrophysics Data System (ADS)
Ma, Chialo; Ma, Yung-Lung
1987-04-01
In this paper, an Acoustic Imaging Recognition System (AIRS) is introduced; it is installed on an intelligent robotic system and can recognize different types of hand tools by dynamic pattern recognition. Dynamic pattern recognition is approached here by a look-up-table method, which saves considerable calculation time and is practicable. The Acoustic Imaging Recognition System (AIRS) consists of four parts -- a position control unit, a pulse-echo signal processing unit, a pattern recognition unit and a main control unit. The position control of AIRS can rotate through an angle of ±5 degrees horizontally and vertically; the purpose of the rotation is to find the area of maximum reflection intensity. From the distance, angles and intensity of the target, the characteristics of the target can be decided; all such decisions are processed by the main control unit. In the pulse-echo signal processing unit, we utilize the correlation method to overcome the limitation of short ultrasonic bursts, because a correlation system can transmit large time-bandwidth signals and obtain their resolution and increased intensity through pulse compression in the correlation receiver. The output of the correlator is sampled and transformed into digital data by the μ-law coding method, and these data, together with the delay time T and the angle information θH, θV, are sent to the main control unit for further analysis. For the recognition process we use a dynamic look-up-table method: first, several recognition pattern tables are set up; then a new pattern scanned by the transducer array is divided into several stages and compared with the sampled tables. The comparison is implemented by dynamic programming and a Markovian process.
All hardware control signals, such as the optimum delay time for the correlation receiver and the horizontal and vertical rotation angles of the transducer plate, are controlled by the main control unit, which also handles the pattern recognition process. The distance from the target to the transducer plate is limited by the power and beam angle of the transducer elements; in this AIRS model, a narrow-beam transducer with an input voltage of 50 V peak-to-peak is used. A robot equipped with AIRS can not only measure the distance to the target but also recognize a three-dimensional image of the target from the image library in the robot's memory. Index terms: acoustic system, ultrasonic transducer, dynamic programming, look-up table, image processing, pattern recognition, quadtree approach.
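The pulse-compression idea behind the correlation receiver can be sketched as a matched filter: cross-correlating the received signal with the transmitted large time-bandwidth burst concentrates the echo energy at its delay. The chirp parameters, echo amplitude, and noise level below are hypothetical:

```python
import numpy as np

def correlation_receiver(received, template):
    """Pulse compression by matched filtering: the cross-correlation peak
    marks the echo delay in samples."""
    corr = np.correlate(received, template, mode="valid")
    return int(np.argmax(corr))

# Hypothetical linear-chirp burst (large time-bandwidth product).
t = np.linspace(0.0, 1.0, 200)
template = np.sin(2 * np.pi * (5.0 + 20.0 * t) * t)

# Weak, noisy echo delayed by 150 samples.
delay = 150
received = np.zeros(600)
received[delay:delay + template.size] += 0.3 * template
received += 0.02 * np.random.default_rng(0).standard_normal(received.size)

estimated_delay = correlation_receiver(received, template)
```

Correlating against a long chirp rather than a short pulse is what lets the receiver recover range resolution and effective intensity despite the transducer's limited peak power.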
Khanali, Majid; Mobli, Hossein; Hosseinzadeh-Bandbafha, Homa
2017-12-01
In this study, an artificial neural network (ANN) model was developed for predicting the yield and life cycle environmental impacts based on energy inputs required in processing of black tea, green tea, and oolong tea in Guilan province of Iran. A life cycle assessment (LCA) approach was used to investigate the environmental impact categories of processed tea based on the cradle-to-gate approach, i.e., from production of input materials using raw materials to the gate of tea processing units, i.e., packaged tea. Thus, all the tea processing operations such as withering, rolling, fermentation, drying, and packaging were considered in the analysis. The initial data were obtained from tea processing units while the required data about the background system was extracted from the EcoInvent 2.2 database. LCA results indicated that diesel fuel and corrugated paper box used in drying and packaging operations, respectively, were the main hotspots. The black tea processing unit caused the highest pollution among the three processing units. Three feed-forward back-propagation ANN models based on the Levenberg-Marquardt training algorithm, with two hidden layers accompanied by sigmoid activation functions and a linear transfer function in the output layer, were applied for the three types of processed tea. The neural networks were developed based on energy equivalents of eight different input parameters (energy equivalents of fresh tea leaves, human labor, diesel fuel, electricity, adhesive, carton, corrugated paper box, and transportation) and 11 output parameters (yield, global warming, abiotic depletion, acidification, eutrophication, ozone layer depletion, human toxicity, freshwater aquatic ecotoxicity, marine aquatic ecotoxicity, terrestrial ecotoxicity, and photochemical oxidation). The results showed that the developed ANN models, with R² values in the range of 0.878 to 0.990, had excellent performance in predicting all the output variables based on inputs.
Energy consumption for processing of green tea, oolong tea, and black tea was calculated as 58,182, 60,947, and 66,301 MJ per ton of dry tea, respectively.
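The network topology described (eight energy-equivalent inputs, two sigmoid hidden layers, a linear output layer of eleven variables) can be sketched as a forward pass. The hidden-layer sizes and weights below are hypothetical, and the Levenberg-Marquardt training step is omitted:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    """Forward pass: two sigmoid hidden layers, linear output layer."""
    (W1, b1), (W2, b2), (W3, b3) = layers
    h1 = sigmoid(x @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    return h2 @ W3 + b3   # linear transfer function on the output

rng = np.random.default_rng(42)
n_in, n_h1, n_h2, n_out = 8, 12, 12, 11   # hidden sizes are assumptions
layers = [
    (0.1 * rng.standard_normal((n_in, n_h1)), np.zeros(n_h1)),
    (0.1 * rng.standard_normal((n_h1, n_h2)), np.zeros(n_h2)),
    (0.1 * rng.standard_normal((n_h2, n_out)), np.zeros(n_out)),
]
x = rng.standard_normal((1, n_in))   # one record of energy equivalents
y = forward(x, layers)               # yield plus ten impact categories
```

A linear output layer is the usual choice for regression targets such as yield and impact scores, since it leaves the outputs unbounded.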
Grace: A cross-platform micromagnetic simulator on graphics processing units
NASA Astrophysics Data System (ADS)
Zhu, Ru
2015-12-01
A micromagnetic simulator running on graphics processing units (GPUs) is presented. Different from GPU implementations of other research groups, which are predominantly running on NVidia's CUDA platform, this simulator is developed with C++ Accelerated Massive Parallelism (C++ AMP) and is hardware platform independent. It runs on GPUs from vendors including NVidia, AMD and Intel, and achieves significant performance boost as compared to previous central processing unit (CPU) simulators, up to two orders of magnitude. The simulator paved the way for running large size micromagnetic simulations on both high-end workstations with dedicated graphics cards and low-end personal computers with integrated graphics cards, and is freely available to download.
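At the core of a micromagnetic simulator like the one described is time integration of the Landau-Lifshitz-Gilbert (LLG) equation at every cell. The single-spin sketch below (dimensionless units with γ = 1, an assumed damping constant, and a renormalized explicit Euler step) illustrates the dynamics only; it is not code from the Grace simulator:

```python
import numpy as np

def llg_rhs(m, h_eff, gamma=1.0, alpha=0.02):
    """Landau-Lifshitz-Gilbert right-hand side for a unit magnetization m:
    precession about the effective field plus Gilbert damping toward it."""
    precession = -gamma * np.cross(m, h_eff)
    damping = -gamma * alpha * np.cross(m, np.cross(m, h_eff))
    return precession + damping

def euler_step(m, h_eff, dt):
    """One explicit Euler step, renormalized to keep |m| = 1."""
    m_new = m + dt * llg_rhs(m, h_eff)
    return m_new / np.linalg.norm(m_new)

m = np.array([1.0, 0.0, 0.0])        # start perpendicular to the field
h_eff = np.array([0.0, 0.0, 1.0])    # effective field along z
for _ in range(1000):
    m = euler_step(m, h_eff, dt=0.01)
# m precesses about z while damping slowly pulls it toward the field.
```

In a full simulator this update runs for millions of cells per time step, with the effective field dominated by an FFT-based demagnetization convolution, which is why GPU parallelism pays off so strongly.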
Microchannel Distillation of JP-8 Jet Fuel for Sulfur Content Reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Feng; Stenkamp, Victoria S.; TeGrotenhuis, Ward E.
2006-09-16
In microchannel based distillation processes, thin vapor and liquid films are contacted in small channels where mass transfer is diffusion-limited. The microchannel architecture enables improvements in distillation processes. A shorter height equivalent to a theoretical plate (HETP), and therefore a more compact distillation unit, can be achieved. A microchannel distillation unit was used to produce a light fraction of JP-8 fuel with reduced sulfur content for use as feed to produce fuel-cell grade hydrogen. The HETP of the microchannel unit is discussed, as well as the effects of process conditions such as feed temperature, flow rate, and reflux ratio.
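HETP relates column height to separation performance: HETP = L / N, where N is the number of theoretical stages achieved over length L. One standard way to estimate N for a binary separation is the Fenske equation at total reflux; the compositions, relative volatility, and column length below are hypothetical, not the paper's JP-8 data:

```python
import math

def fenske_stages(x_dist, x_bot, alpha):
    """Minimum number of theoretical stages at total reflux (Fenske equation)
    for a binary separation with constant relative volatility alpha."""
    return math.log((x_dist / (1.0 - x_dist)) * ((1.0 - x_bot) / x_bot)) / math.log(alpha)

def hetp(column_length, n_stages):
    """Height equivalent to a theoretical plate: column length per stage."""
    return column_length / n_stages

# Hypothetical separation: light key enriched from 30% in the bottoms to
# 95% in the distillate, alpha = 2.5, over a 0.3 m microchannel section.
n = fenske_stages(x_dist=0.95, x_bot=0.30, alpha=2.5)
h = hetp(0.3, n)   # a small HETP means a compact unit
```

A microchannel unit's advantage shows up here directly: achieving the same N over a shorter L drives the HETP down.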
NASA Technical Reports Server (NTRS)
Blocher, J. M., Jr.; Browning, M. F.
1979-01-01
The construction and operation of an experimental process system development unit (EPSDU) for the production of granular semiconductor grade silicon by the zinc vapor reduction of silicon tetrachloride in a fluidized bed of seed particles is presented. The construction of the process development unit (PDU) is reported. The PDU consists of four critical units of the EPSDU: the fluidized bed reactor, the reactor by-product condenser, the zinc vaporizer, and the electrolytic cell. An experimental wetted wall condenser and its operation are described. Procedures are established for safe handling of SiCl4 leaks and spills from the EPSDU and PDU.
Strategic planning in healthcare organizations.
Rodríguez Perera, Francisco de Paula; Peiró, Manel
2012-08-01
Strategic planning is a completely valid and useful tool for guiding all types of organizations, including healthcare organizations. The organizational level at which the strategic planning process is relevant depends on the unit's size, its complexity, and the differentiation of the service provided. A cardiology department, a hemodynamic unit, or an electrophysiology unit can be an appropriate level, as long as their plans align with other plans at higher levels. The leader of each unit is the person responsible for promoting the planning process, a core and essential part of his or her role. The process of strategic planning is programmable, systematic, rational, and holistic and integrates the short, medium, and long term, allowing the healthcare organization to focus on relevant and lasting transformations for the future. Copyright © 2012 Sociedad Española de Cardiología. Published by Elsevier Espana. All rights reserved.
20 CFR 655.5 - Purpose and scope of subpart A.
Code of Federal Regulations, 2010 CFR
2010-04-01
... EMPLOYMENT OF FOREIGN WORKERS IN THE UNITED STATES Labor Certification Process and Enforcement of Attestations for Temporary Employment in Occupations Other Than Agriculture or Registered Nursing in the United... certification process for the temporary employment of nonimmigrant foreign workers in the U.S. in occupations...
21 CFR 211.100 - Written procedures; deviations.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Process Controls § 211.100 Written procedures; deviations. (a) There shall be written procedures for production and process control designed to assure that the drug products have the identity, strength, quality... approved by the appropriate organizational units and reviewed and approved by the quality control unit. (b...
40 CFR 63.2250 - What are the general requirements?
Code of Federal Regulations, 2011 CFR
2011-07-01
... periods of process unit or control device startup, shutdown, and malfunction; prior to process unit initial startup; and during the routine control device maintenance exemption specified in § 63.2251. The... are not operating, or during periods of startup, shutdown, and malfunction. Startup and shutdown...
40 CFR 63.1100 - Applicability.
Code of Federal Regulations, 2013 CFR
2013-07-01
... suspension containing acrylonitrile. c Heat exchange systems as defined in § 63.1103(e)(2). d Fiber spinning... those process units that meet the criteria of paragraph (e)(8)(i) of this section. (f) Recovery operation equipment ownership determination. To determine the process unit to which recovery equipment shall...
Eutrophication, A Natural Process.
ERIC Educational Resources Information Center
Monsour, William
This environmental education learning unit deals with the topic of eutrophication. The unit is designed to allow secondary teachers of science, language arts, and social studies to use it as supplementary material in their classroom. Teacher information, unit objectives, the unit text, and appendices are included. The teacher information section…
Quality Assurance in American and British Higher Education: A Comparison.
ERIC Educational Resources Information Center
Stanley, Elizabeth C.; Patrick, William J.
1998-01-01
Compares quality improvement and accountability processes in the United States and United Kingdom. For the United Kingdom, looks at quality audits, institutional assessment, standards-based quality assurance, and research assessment; in the United States, looks at regional and specialized accreditation, performance indicator systems, academic…
Geomorphic Processes and Remote Sensing Signatures of Alluvial Fans in the Kun Lun Mountains, China
NASA Technical Reports Server (NTRS)
Farr, Tom G.; Chadwick, Oliver A.
1996-01-01
The timing of alluvial deposition in arid and semiarid areas is tied to land-surface instability caused by regional climate changes. The distribution pattern of dated deposits provides maps of regional land-surface response to past climate change. Sensitivity to differences in surface roughness and composition makes remote sensing techniques useful for regional mapping of alluvial deposits. Radar images from the Spaceborne Radar Laboratory and visible wavelength images from the French SPOT satellite were used to determine remote sensing signatures of alluvial fan units for an area in the Kun Lun Mountains of northwestern China. These data were combined with field observations to compare surface processes and their effects on remote sensing signatures in northwestern China and the southwestern United States. Geomorphic processes affecting alluvial fans in the two areas include aeolian deposition, desert varnish, and fluvial dissection. However, salt weathering is a much more important process in the Kun Lun than in the southwestern United States. This slows the formation of desert varnish and prevents desert pavement from forming. Thus the Kun Lun signatures are characteristic of the dominance of salt weathering, while signatures from the southwestern United States are characteristic of the dominance of desert varnish and pavement processes. Remote sensing signatures are consistent enough in these two regions to be used for mapping fan units over large areas.
NASA Technical Reports Server (NTRS)
1980-01-01
The design, fabrication, and installation of an experimental process system development unit (EPSDU) were analyzed. Supporting research and development were performed to provide an information data base usable for the EPSDU and for technological design and economical analysis for potential scale-up of the process. Iterative economic analyses were conducted for the estimated product cost for the production of semiconductor grade silicon in a facility capable of producing 1000-MT/Yr.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rizzo, Jeffrey J.
2010-04-30
The Wabash gasification facility, owned and operated by sgSolutions LLC, is one of the largest single train solid fuel gasification facilities in the world capable of transforming 2,000 tons per day of petroleum coke or 2,600 tons per day of bituminous coal into synthetic gas for electrical power generation. The Wabash plant utilizes Phillips66 proprietary E-Gas (TM) Gasification Process to convert solid fuels such as petroleum coke or coal into synthetic gas that is fed to a combined cycle combustion turbine power generation facility. During plant startup in 1995, reliability issues were realized in the gas filtration portion of the gasification process. To address these issues, a slipstream test unit was constructed at the Wabash facility to test various filter designs, materials and process conditions for potential reliability improvement. The char filtration slipstream unit provided a way of testing new materials, maintenance procedures, and process changes without the risk of stopping commercial production in the facility. It also greatly reduced maintenance expenditures associated with full scale testing in the commercial plant. This char filtration slipstream unit was installed with assistance from the United States Department of Energy (built under DOE Contract No. DE-FC26-97FT34158) and began initial testing in November of 1997. It has proven to be extremely beneficial in the advancement of the E-Gas (TM) char removal technology by accurately predicting filter behavior and potential failure mechanisms that would occur in the commercial process. After completing four (4) years of testing various filter types and configurations on numerous gasification feed stocks, a decision was made to investigate the economic and reliability effects of using a particulate removal gas cyclone upstream of the current gas filtration unit.
A paper study had indicated that there was a real potential to lower both installed capital and operating costs by implementing a char cyclone-filtration hybrid unit in the E-Gas (TM) gasification process. These reductions would help to keep the E-Gas (TM) technology competitive among other coal-fired power generation technologies. The Wabash combined cyclone and gas filtration slipstream test program was developed to provide design information, equipment specification and process control parameters of a hybrid cyclone and candle filter particulate removal system in the E-Gas (TM) gasification process that would provide the optimum performance and reliability for future commercial use. The test program objectives were as follows: 1. Evaluate the use of various cyclone materials of construction; 2. Establish the optimal cyclone efficiency that provides stable long term gas filter operation; 3. Determine the particle size distribution of the char separated by both the cyclone and candle filters, providing insight into cyclone efficiency and potential future plant design; 4. Determine the optimum filter media size requirements for the cyclone-filtration hybrid unit; 5. Determine the appropriate char transfer rates for both the cyclone and filtration portions of the hybrid unit; 6. Develop operating procedures for the cyclone-filtration hybrid unit; and, 7. Compare the installed capital cost of a scaled-up commercial cyclone-filtration hybrid unit to the current gas filtration design without a cyclone unit, such as currently exists at the Wabash facility.
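As background for the cyclone-efficiency objectives listed above, a common first-pass model (Lapple's correlation, standard handbook material rather than anything taken from this report) gives the fractional collection efficiency for particles of diameter $d_{pj}$ in terms of the cut diameter $d_{pc}$, the size collected with 50% efficiency:

```latex
\eta_j = \frac{1}{1 + \left(d_{pc}/d_{pj}\right)^{2}}
```

Particles much larger than the cut diameter are collected almost completely, which is why a cyclone upstream of the candle filters can shift most of the char load away from the filter elements.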
Needleman, Jack; Pearson, Marjorie L; Upenieks, Valda V; Yee, Tracy; Wolstein, Joelle; Parkerton, Melissa
2016-02-01
Process improvement stresses the importance of engaging frontline staff in implementing new processes and methods. Yet questions remain on how to incorporate these activities into the workday of hospital staff and how to create and maintain staff commitment. In a 15-month American Organization of Nurse Executives collaborative involving frontline medical/surgical staff from 67 hospitals, Transforming Care at the Bedside (TCAB) was evaluated to assess whether participating units successfully implemented recommended change processes, engaged staff, implemented innovations, and generated support from hospital leadership and staff. In a mixed-methods analysis, multiple data sources were used, including leader surveys, unit staff surveys, administrative data, time study data, and collaborative documents. All units reported establishing unit-based teams, of which >90% succeeded in conducting tests of change, with unit staff selecting topics and making decisions on adoption. Fifty-five percent of unit staff reported participating in unit meetings, and 64% in tests of change. Unit managers reported a substantial increase in staff support for the initiative. An average of 36 tests of change were conducted per unit, with 46% of tested innovations sustained and 20% spread to other units. Some 95% of managers and 97% of chief nursing officers believed that the program had made unit staff more likely to initiate change. Among staff, 83% would encourage adoption of the initiative. Given the strong positive assessment of TCAB, evidence of substantial engagement of staff in the work, and the high volume of innovations tested, implemented, and sustained, TCAB appears to be a productive model for organizing and implementing a program of frontline-led improvement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
From November 1991 to April 1996, Kerr McGee Coal Corporation (K-M Coal) led a project to develop the Institute of Gas Technology (IGT) Mild Gasification (MILDGAS) process for near-term commercialization. The specific objectives of the program were to: design, construct, and operate a 24-tons/day adiabatic process development unit (PDU) to obtain process performance data suitable for further design scale-up; obtain large batches of coal-derived co-products for industrial evaluation; prepare a detailed design of a demonstration unit; and develop technical and economic plans for commercialization of the MILDGAS process. The project team for the PDU development program consisted of: K-M Coal, IGT, Bechtel Corporation, Southern Illinois University at Carbondale (SIUC), General Motors (GM), Pellet Technology Corporation (PTC), LTV Steel, Armco Steel, Reilly Industries, and Auto Research.
APC implementation in Chandra Asri - ethylene plant
NASA Astrophysics Data System (ADS)
Sidiq, Mochamad; Mustofa, Ali
2017-05-01
Modern process plants are continuously improved to maximize production, optimize energy and raw material use, and reduce risk. Because disturbances propagate between process units, the failure of one unit can degrade overall productivity. Ethylene plants have significant opportunities to use Advanced Process Control (APC) technologies to improve operating stability, push closer to quality or equipment limits, and improve the capability of process units to handle disturbances. APC implementation is widely considered the best answer to multivariable control problems. PT. Chandra Asri Petrochemical, Tbk (CAP) operates a large naphtha cracker complex at Cilegon, Indonesia. To optimize plant operation and enhance profitability, Chandra Asri decided to implement APC for the ethylene plant. The scope of the APC implementation at CAP is as follows: 1. Hot Section: Furnaces, Quench Tower; 2. Cold Section: Demethanizer, Deethanizer, Acetylene Converter, Ethylene Fractionator, Depropanizer, Propylene Fractionator, Debutanizer.
Low Quality Natural Gas Sulfur Removal and Recovery CNG Claus Sulfur Recovery Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klint, V.W.; Dale, P.R.; Stephenson, C.
1997-10-01
Increased use of natural gas (methane) in the domestic energy market will force the development of large non-producing gas reserves now considered to be low quality. Large reserves of low quality natural gas (LQNG) contaminated with hydrogen sulfide (H{sub 2}S), carbon dioxide (CO{sub 2}) and nitrogen (N{sub 2}) are available but not suitable for treatment using current conventional gas treating methods due to economic and environmental constraints. A group of three technologies has been integrated to allow for processing of these LQNG reserves: the Controlled Freeze Zone (CFZ) process for hydrocarbon/acid gas separation; the Triple Point Crystallizer (TPC) process for H{sub 2}S/CO{sub 2} separation; and the CNG Claus process for recovery of elemental sulfur from H{sub 2}S. The combined CFZ/TPC/CNG Claus group of processes is one program aimed at developing an alternative gas treating technology that is both economically and environmentally suitable for developing these low quality natural gas reserves. The CFZ/TPC/CNG Claus process is capable of treating low quality natural gas containing >10% CO{sub 2} and measurable levels of H{sub 2}S and N{sub 2} to pipeline specifications. The integrated CFZ/CNG Claus process or the stand-alone CNG Claus process has a number of attractive features for treating LQNG. The processes are capable of treating raw gas with a variety of trace contaminant components. The processes can also accommodate large changes in raw gas composition and flow rates. The combined processes are capable of achieving virtually undetectable levels of H{sub 2}S and significantly less than 2% CO in the product methane. The separation processes operate at pressure and deliver a high pressure (ca. 100 psia) acid gas (H{sub 2}S) stream for processing in the CNG Claus unit. This allows for substantial reductions in plant vessel size as compared to conventional Claus/tail gas treating technologies.
A close integration of the components of the CNG Claus process also allows for use of the methane/H{sub 2}S separation unit as a Claus tail gas treating unit by recycling the CNG Claus tail gas stream. This allows for virtually 100 percent sulfur recovery efficiency (virtually zero SO{sub 2} emissions) by recycling the sulfur laden tail gas to extinction. The use of the tail gas recycle scheme also deemphasizes the conventional requirement in Claus units for high unit conversion efficiency, thereby making the operation much less affected by process upsets and feed gas composition changes. The development of these technologies has been ongoing for many years, and both the CFZ and the TPC processes have been demonstrated at large pilot plant scales. On the other hand, prior to this project, the CNG Claus process had not been proven at any scale. Therefore, the primary objective of this portion of the program was to design, build and operate a pilot scale CNG Claus unit to demonstrate the required fundamental reaction chemistry and the viability of a reasonably sized working unit.
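For context, the underlying Claus chemistry (standard stoichiometry from the sulfur-recovery literature, not quoted from this report) converts H2S to elemental sulfur in two stages: a thermal stage oxidizes part of the H2S to SO2, and catalytic stages react the remainder with that SO2:

```latex
\mathrm{H_2S} + \tfrac{3}{2}\,\mathrm{O_2} \;\rightarrow\; \mathrm{SO_2} + \mathrm{H_2O},
\qquad
2\,\mathrm{H_2S} + \mathrm{SO_2} \;\rightarrow\; \tfrac{3}{x}\,\mathrm{S}_x + 2\,\mathrm{H_2O}.
```

Because each catalytic pass converts only part of the feed, recycling the sulfur-laden tail gas to extinction is what pushes overall recovery toward 100 percent.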
Food Processing Curriculum Material and Resource Guide.
ERIC Educational Resources Information Center
Louisiana State Dept. of Education, Baton Rouge.
Intended for secondary vocational agriculture teachers, this curriculum guide contains a course outline and a resource manual for a seven-unit food processing course on meats. Within the course outline, units are divided into separate lessons. Materials provided for each lesson include preparation for instruction (student objectives, review of…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-02
... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: 2012 Allowable Charges for Agricultural Workers' Meals and Travel Subsistence Reimbursement, Including Lodging AGENCY: Employment and Training...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-12
... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: 2013 Allowable Charges for Agricultural Workers' Meals and Travel Subsistence Reimbursement, Including Lodging AGENCY: Employment and Training...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-01
... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: 2011 Adverse Effect Wage Rates, Allowable Charges for Agricultural Workers' Meals, and Maximum Travel Subsistence Reimbursement AGENCY...
Code of Federal Regulations, 2010 CFR
2010-04-01
... WORKERS IN THE UNITED STATES Labor Certification Process and Enforcement of Attestations for Temporary Employment in Occupations Other Than Agriculture or Registered Nursing in the United States (H-2B Workers... a significant failure to comply with the RFI or audit process pursuant to §§ 655.23 or 655.24; (v...
20 CFR 655.75 - Decision and order of administrative law judge.
Code of Federal Regulations, 2010 CFR
2010-04-01
... LABOR TEMPORARY EMPLOYMENT OF FOREIGN WORKERS IN THE UNITED STATES Labor Certification Process and... Nursing in the United States (H-2B Workers) § 655.75 Decision and order of administrative law judge. (a... determination resulting from that process. Under no circumstances shall the administrative law judge determine...
37 CFR 7.4 - Receipt of correspondence.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., DEPARTMENT OF COMMERCE RULES OF PRACTICE IN FILINGS PURSUANT TO THE PROTOCOL RELATING TO THE MADRID AGREEMENT... to review an action of the Office's Madrid Processing Unit, when filed by mail, must be addressed to: Madrid Processing Unit, 600 Dulany Street, MDE-7B87, Alexandria, VA 22314-5793. (1) International...
ELI ECO Logic International, Inc.'s Thermal Desorption Unit (TDU) is specifically designed for use with Eco Logic's Gas Phase Chemical Reduction Process. The technology uses an externally heated bath of molten tin in a hydrogen atmosphere to desorb hazardous organic compounds fro...
USDA-ARS?s Scientific Manuscript database
The Cotton Chemistry and Utilization Research Unit is part of the Agricultural Research Service, the U.S. Department of Agriculture's chief scientific in-house research agency. The Research Unit develops new processes, applications and product enabling technologies which facilitate the expanded use ...
ERIC Educational Resources Information Center
Henn, Cynthia
2004-01-01
In this article, the author describes a unit she implemented on Batik designs. This unit helped second-graders gain an understanding of the batik process while learning about mask designs and the Senegalese culture. Batik has origins in many areas around the world, including Indonesia and West Africa. This fabric-resist process involves the…
US Federal LCA Commons Life Cycle Inventory Unit Process Template
The US Federal LCA Commons Life Cycle Inventory Unit Process Template is a multi-sheet Excel template for life cycle inventory data, metadata and other documentation. The template comes as a package that consists of three parts: (1) the main template itself for life cycle inven...
Teaching Children Science. Second Edition.
ERIC Educational Resources Information Center
Abruscato, Joseph
This book focuses on science teaching at the elementary school level. It includes chapters dealing with various science content areas and teaching processes including: (1) what is science; (2) why teach science; (3) process skills as a foundation for unit and lesson planning; (4) how to plan learning units, daily lessons, and assessment…
Parallelized CCHE2D flow model with CUDA Fortran on Graphics Processing Units
USDA-ARS?s Scientific Manuscript database
This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...
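The Parallel Cyclic Reduction step named above can be sketched as follows. Each reduction pass updates every row of the tridiagonal system independently, which is what maps naturally onto GPU threads; the version below is a serial, generic textbook formulation in Python rather than the CCHE2D CUDA Fortran code, and all names are illustrative.

```python
import numpy as np

def pcr_solve(a, b, c, d):
    """Solve a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i] by Parallel Cyclic
    Reduction. Each pass of the inner loop is independent across rows, so on
    a GPU every row becomes one thread; here it runs serially."""
    a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
    n = len(b)
    stride = 1
    while stride < n:
        na, nb, nc, nd = a.copy(), b.copy(), c.copy(), d.copy()
        for i in range(n):
            # Eliminate neighbors at distance `stride` using their equations.
            alpha = -a[i] / b[i - stride] if i - stride >= 0 else 0.0
            gamma = -c[i] / b[i + stride] if i + stride < n else 0.0
            na[i] = alpha * a[i - stride] if i - stride >= 0 else 0.0
            nc[i] = gamma * c[i + stride] if i + stride < n else 0.0
            nb[i] = b[i] \
                + (alpha * c[i - stride] if i - stride >= 0 else 0.0) \
                + (gamma * a[i + stride] if i + stride < n else 0.0)
            nd[i] = d[i] \
                + (alpha * d[i - stride] if i - stride >= 0 else 0.0) \
                + (gamma * d[i + stride] if i + stride < n else 0.0)
        a, b, c, d = na, nb, nc, nd
        stride *= 2
    return d / b  # system is fully decoupled: b[i]*x[i] = d[i]

# Demo: a small diffusion-like, diagonally dominant system.
n = 8
a = np.r_[0.0, -np.ones(n - 1)]    # sub-diagonal (a[0] unused)
b = np.full(n, 2.0)                # main diagonal
c = np.r_[-np.ones(n - 1), 0.0]    # super-diagonal (c[-1] unused)
d = np.arange(1.0, n + 1.0)
x = pcr_solve(a, b, c, d)
```

After ceil(log2(n)) passes each equation involves only its own unknown, so the solve finishes in O(log n) parallel steps instead of the O(n) sequential sweep of the Thomas algorithm.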
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berlin, V. V., E-mail: vberlin@rinet.ru; Murav’ev, O. A., E-mail: muraviov1954@mail.ru; Golubev, A. V., E-mail: electronik@inbox.ru
Aspects of the startup of pumping units in the cooling and process water supply systems for thermal and nuclear power plants with cooling towers, the startup stages, and the limits imposed on the extreme parameters during transients are discussed.
40 CFR 63.1579 - What definitions apply to this subpart?
Code of Federal Regulations, 2011 CFR
2011-07-01
... process characterized by continual batch regeneration of catalyst in situ in any one of several reactors... device that treats (in-situ) the catalytic reforming unit recirculating coke burn exhaust gases for acid... or operator's convenience for in situ catalyst regeneration. Sulfur recovery unit means a process...
Graphic Arts: Book Three. The Press and Related Processes.
ERIC Educational Resources Information Center
Farajollahi, Karim; And Others
The third of a three-volume set of instructional materials for a graphic arts course, this manual consists of nine instructional units dealing with presses and related processes. Covered in the units are basic press fundamentals, offset press systems, offset press operating procedures, offset inks and dampening chemistry, preventive maintenance…
Plant Puzzles, An Environmental Investigation.
ERIC Educational Resources Information Center
National Wildlife Federation, Washington, DC.
This environmental unit is one of a series designed for integration within an existing curriculum. The unit is self-contained and requires minimal teacher preparation. The philosophy of the units is based on an experience-oriented process that encourages self-paced independent student work. The purpose of this unit is to familiarize students with…
Shadows, An Environmental Investigation.
ERIC Educational Resources Information Center
National Wildlife Federation, Washington, DC.
This environmental unit is one of a series designed for integration within an existing curriculum. The units are self-contained and require minimal teacher preparation. The philosophy behind the units is based on an experience-oriented process that encourages self-paced independent work. This unit on shadows is designed for all elementary levels,…
Reasons for exclusion of 6820 umbilical cord blood donations in a public cord blood bank.
Wang, Tso-Fu; Wen, Shu-Hui; Yang, Kuo-Liang; Yang, Shang-Hsien; Yang, Yun-Fan; Chang, Chu-Yu; Wu, Yi-Feng; Chen, Shu-Huey
2014-01-01
To provide information for umbilical cord blood (UCB) banks to adopt optimal collection strategies and to make UCB banks operate efficiently, we investigated the reasons for exclusion of UCB units over a 3-year recruitment period. We analyzed records of the reasons for exclusion of potential UCB donations from 2004 to 2006 in the Tzu-Chi Cord Blood Bank and compared the results over the 3 years. We grouped the reasons for exclusion into five phases according to the time sequence, before collection, during delivery, before processing, during processing, and after freezing, and analyzed the reasons at each phase. Between 2004 and 2006, there were 10,685 deliveries with the intention of UCB donation. In total, 41.2% of the UCB units were considered eligible for transplantation. The exclusion rates were 93.1, 48.4, and 54.1% in 2004, 2005, and 2006, respectively. We excluded 612 donations from women before childbirth, 133 UCB units during delivery, 80 units before processing, 5010 units during processing, and 421 units after freezing. There were 24 UCB units excluded for unknown reasons. Low UCB weight and low cell count were the two leading causes of exclusion (48.6 and 30.9%). The prevalence of artificial errors, holiday or transportation problems, low weight, and infant problems decreased year after year. The exclusion rate was high at the beginning of our study, as in previous studies. Understanding the reasons for UCB exclusion may help to improve the efficiency of UCB banking programs in the future. © 2013 American Association of Blood Banks.
NASA Astrophysics Data System (ADS)
Ruf, B.; Erdnuess, B.; Weinmann, M.
2017-08-01
With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance of and interest in image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric community. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. We therefore use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect of reaching good performance is the way the scene space is sampled to create plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera, may lead to ambiguities in distant regions due to the perspective projection of the camera. Furthermore, equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective-invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving a higher sampling density in areas close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that inverse sampling achieves equal or better results than linear sampling, with fewer sampling points and thus less runtime.
Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that the relative poses between all frames are given.
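The inverse (image-space) sampling described in the abstract above can be illustrated with a minimal sketch: sampling depth planes uniformly in inverse depth (1/z) makes consecutive planes dense near the camera and progressively sparser with distance, which mirrors roughly uniform steps in image space under perspective projection. The function name and parameters below are illustrative, not taken from the paper, and this sketch omits the cross-ratio derivation the authors actually use.

```python
def inverse_depth_planes(z_near, z_far, n):
    """Sample n depth planes uniformly in inverse depth (1/z).

    Uniform steps in 1/z correspond to roughly uniform steps in image
    space under perspective projection, so plane hypotheses are dense
    near the camera and sparse in distant regions.
    """
    inv_near, inv_far = 1.0 / z_near, 1.0 / z_far
    step = (inv_near - inv_far) / (n - 1)
    return [1.0 / (inv_near - i * step) for i in range(n)]

# Ten planes between 1 m and 100 m: spacing grows with distance.
planes = inverse_depth_planes(1.0, 100.0, 10)
```

With a fixed plane budget, this concentrates depth resolution where reconstruction detail matters most, which is the motivation the abstract gives for preferring inverse over linear sampling.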
BrainFrame: a node-level heterogeneous accelerator platform for neuron simulations
NASA Astrophysics Data System (ADS)
Smaragdos, Georgios; Chatzikonstantis, Georgios; Kukreja, Rahul; Sidiropoulos, Harry; Rodopoulos, Dimitrios; Sourdis, Ioannis; Al-Ars, Zaid; Kachris, Christoforos; Soudris, Dimitrios; De Zeeuw, Chris I.; Strydis, Christos
2017-12-01
Objective. The advent of high-performance computing (HPC) in recent years has led to its increasing use in brain studies through computational models. The scale and complexity of such models are constantly increasing, leading to challenging computational requirements. Even though modern HPC platforms can often deal with such challenges, the vast diversity of the modeling field does not permit a homogeneous acceleration platform to effectively address the complete array of modeling requirements. Approach. In this paper we propose and build BrainFrame, a heterogeneous acceleration platform that incorporates three distinct acceleration technologies: an Intel Xeon-Phi CPU, an NVidia GP-GPU and a Maxeler Dataflow Engine. The PyNN software framework is also integrated into the platform. As a challenging proof of concept, we analyze the performance of BrainFrame on different experiment instances of a state-of-the-art neuron model, representing the inferior-olivary nucleus using a biophysically-meaningful, extended Hodgkin-Huxley representation. The model instances take into account not only the neuronal-network dimensions but also different network-connectivity densities, which can drastically affect the workload’s performance characteristics. Main results. The combined use of different HPC technologies demonstrates that BrainFrame is better able to cope with the modeling diversity encountered in realistic experiments while at the same time running on significantly lower energy budgets. Our performance analysis clearly shows that the model directly affects performance and all three technologies are required to cope with all the model use cases. Significance. The BrainFrame framework is designed to transparently configure and select the appropriate back-end accelerator technology for use per simulation run. The PyNN integration provides a familiar bridge to the vast number of models already available.
Additionally, it gives a clear roadmap for extending the platform support beyond the proof of concept, with improved usability and directly useful features to the computational-neuroscience community, paving the way for wider adoption.
1998-01-14
The Photovoltaic Module 1 Integrated Equipment Assembly (IEA) is moved through Kennedy Space Center’s Space Station Processing Facility (SSPF) toward the workstand where it will be processed for flight on STS-97, scheduled for launch in April 1999. The IEA is one of four integral units designed to generate, distribute, and store power for the International Space Station. It will carry solar arrays, power storage batteries, power control units, and a thermal control system. The 16-foot-long, 16,850-pound unit is now undergoing preflight preparations in the SSPF
1998-01-14
The Photovoltaic Module 1 Integrated Equipment Assembly (IEA) is lowered into its workstand at Kennedy Space Center’s Space Station Processing Facility (SSPF), where it will be processed for flight on STS-97, scheduled for launch in April 1999. The IEA is one of four integral units designed to generate, distribute, and store power for the International Space Station. It will carry solar arrays, power storage batteries, power control units, and a thermal control system. The 16-foot-long, 16,850-pound unit is now undergoing preflight preparations in the SSPF
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-17
... Number of DMM Units an Issuer Must Interview From the Pool of DMM Units Eligible To Participate in the... units an issuer must interview from the pool of DMM units eligible to participate in the allocation. The... issuer must interview from the pool of DMM units eligible to participate in the allocation process. Rule...
ERIC Educational Resources Information Center
Galligani, Dennis J.
This second volume of the University of California, Irvine (UCI), Student Affirmative Action (SAA) Five-Year Plan contains the complete student affirmative action plans as submitted by 33 academic and administrative units at UCI. The volume is organized by type of unit: academic units, academic retention units, outreach units, and student life…
Spahr, N.E.; Boulger, R.W.
1997-01-01
Quality-control samples provide part of the information needed to estimate the bias and variability that result from sample collection, processing, and analysis. Quality-control samples of surface water collected for the Upper Colorado River National Water-Quality Assessment study unit for water years 1995-96 are presented and analyzed in this report. The types of quality-control samples collected include pre-processing split replicates, concurrent replicates, sequential replicates, post-processing split replicates, and field blanks. Analysis of the pre-processing split replicates, concurrent replicates, sequential replicates, and post-processing split replicates is based on differences between analytical results of the environmental samples and analytical results of the quality-control samples. Results of these comparisons indicate that variability introduced by sample collection, processing, and handling is low and will not affect interpretation of the environmental data. The differences for most water-quality constituents are on the order of plus or minus 1 or 2 lowest rounding units. A lowest rounding unit is equivalent to the magnitude of the least significant figure reported for analytical results. The use of lowest rounding units avoids some of the difficulty in comparing differences between pairs of samples when concentrations span orders of magnitude and provides a measure of the practical significance of the effect of variability. Analysis of field-blank quality-control samples indicates that, with the exception of chloride and silica, no systematic contamination of samples is apparent. Chloride contamination probably was the result of incomplete rinsing of the dilute cleaning solution from the outlet ports of the decaport sample splitter. Silica contamination seems to have been introduced by the blank water. Sampling and processing procedures for water year 1997 have been modified as a result of these analyses.
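The "lowest rounding unit" (LRU) comparison described above is simple to express in code: divide the difference between paired samples by the magnitude of the least significant reported figure. The function name and example concentrations below are hypothetical, chosen only to illustrate the unit.

```python
def diff_in_lrus(env_value, qc_value, lru):
    """Difference between an environmental sample and its replicate,
    expressed in lowest rounding units.

    `lru` is the magnitude of the least significant figure of the
    reported result, e.g. 0.01 mg/L for a value reported as 0.12 mg/L.
    """
    return round((env_value - qc_value) / lru)

# Hypothetical pair: 0.12 vs 0.14 mg/L reported to 0.01 mg/L.
d = diff_in_lrus(0.12, 0.14, 0.01)
```

A result within plus or minus 1 or 2 LRUs, as reported for most constituents in the study, indicates that collection and processing variability is at the level of reporting precision, regardless of the absolute concentration.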
Cultural traits as units of analysis.
O'Brien, Michael J; Lyman, R Lee; Mesoudi, Alex; VanPool, Todd L
2010-12-12
Cultural traits have long been used in anthropology as units of transmission that ostensibly reflect behavioural characteristics of the individuals or groups exhibiting the traits. After they are transmitted, cultural traits serve as units of replication in that they can be modified as part of an individual's cultural repertoire through processes such as recombination, loss or partial alteration within an individual's mind. Cultural traits are analogous to genes in that organisms replicate them, but they are also replicators in their own right. No one has ever seen a unit of transmission, either behavioural or genetic, although we can observe the effects of transmission. Fortunately, such units are manifest in artefacts, features and other components of the archaeological record, and they serve as proxies for studying the transmission (and modification) of cultural traits, provided there is analytical clarity over how to define and measure the units that underlie this inheritance process.
Cultural traits as units of analysis
O'Brien, Michael J.; Lyman, R. Lee; Mesoudi, Alex; VanPool, Todd L.
2010-01-01
Cultural traits have long been used in anthropology as units of transmission that ostensibly reflect behavioural characteristics of the individuals or groups exhibiting the traits. After they are transmitted, cultural traits serve as units of replication in that they can be modified as part of an individual's cultural repertoire through processes such as recombination, loss or partial alteration within an individual's mind. Cultural traits are analogous to genes in that organisms replicate them, but they are also replicators in their own right. No one has ever seen a unit of transmission, either behavioural or genetic, although we can observe the effects of transmission. Fortunately, such units are manifest in artefacts, features and other components of the archaeological record, and they serve as proxies for studying the transmission (and modification) of cultural traits, provided there is analytical clarity over how to define and measure the units that underlie this inheritance process. PMID:21041205
Systems and methods for interactive virtual reality process control and simulation
Daniel, Jr., William E.; Whitney, Michael A.
2001-01-01
A system for visualizing, controlling and managing information includes a data analysis unit for interpreting and classifying raw data using analytical techniques. A data flow coordination unit routes data from its source to other components within the system. A data preparation unit handles the graphical preparation of the data and a data rendering unit presents the data in a three-dimensional interactive environment where the user can observe, interact with, and interpret the data. A user can view the information on various levels, from a high overall process level view, to a view illustrating linkage between variables, to view the hard data itself, or to view results of an analysis of the data. The system allows a user to monitor a physical process in real-time and further allows the user to manage and control the information in a manner not previously possible.
Multiple use of water in industry--the textile industry case.
Rott, Ulrich
2003-08-01
The main aim of this article is to review the state of the art of available processes for the advanced treatment of wastewater from the Textile Processing Industry (TPI). After an introduction to the specific wastewater situation of the TPI, the article reviews the options for process- and production-integrated measures. The available unit processes and examples of applied combinations of unit processes are described. Special attention is given to in-plant treatment, the reuse of the treated split-flow or mixed wastewater, and the recovery of textile auxiliaries and dyes.
High Temperature Boost (HTB) Power Processing Unit (PPU) Formulation Study
NASA Technical Reports Server (NTRS)
Chen, Yuan; Bradley, Arthur T.; Iannello, Christopher J.; Carr, Gregory A.; Mojarradi, Mohammad M.; Hunter, Don J.; DelCastillo, Linda; Stell, Christopher B.
2013-01-01
This technical memorandum summarizes the Formulation Study conducted during fiscal year 2012 on the High Temperature Boost (HTB) Power Processing Unit (PPU). The effort is authorized and supported by the Game Changing Technology Division, NASA Office of the Chief Technologist. NASA center participation during the formulation includes LaRC, KSC, and JPL. The Formulation Study continues into fiscal year 2013 and has focused on the power processing unit. The team has proposed a modular, power-scalable, and new-technology-enabled High Temperature Boost (HTB) PPU, which offers a 5-10X improvement in PPU specific power/mass and over 30% mass savings for the in-space solar electric system.
Ising Processing Units: Potential and Challenges for Discrete Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coffrin, Carleton James; Nagarajan, Harsha; Bent, Russell Whitford
The recent emergence of novel computational devices, such as adiabatic quantum computers, CMOS annealers, and optical parametric oscillators, presents new opportunities for hybrid-optimization algorithms that leverage these kinds of specialized hardware. In this work, we propose the idea of an Ising processing unit as a computational abstraction for these emerging tools. Challenges involved in using and benchmarking these devices are presented, and open-source software tools are proposed to address some of these challenges. The proposed benchmarking tools and methodology are demonstrated by conducting a baseline study of established solution methods on a D-Wave 2X adiabatic quantum computer, one example of a commercially available Ising processing unit.
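The abstraction described above is an optimization target, not a device: an Ising processing unit takes couplings J and fields h and returns low-energy spin configurations. A minimal sketch, using the common energy convention E = -Σ J_ij s_i s_j - Σ h_i s_i and a brute-force solver in place of real hardware (all names and the tiny instance are illustrative, not from the paper's benchmark suite):

```python
from itertools import product

def ising_energy(spins, J, h):
    """Energy of a spin configuration: E = -sum J_ij s_i s_j - sum h_i s_i."""
    e = -sum(h[i] * s for i, s in enumerate(spins))
    e -= sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e

def brute_force_ground_state(n, J, h):
    """Enumerate all 2^n configurations of spins in {-1, +1}.

    A real Ising processing unit (quantum annealer, CMOS annealer, ...)
    approximately minimizes this same energy in hardware.
    """
    return min(product((-1, 1), repeat=n), key=lambda s: ising_energy(s, J, h))

# Tiny ferromagnetic chain; the small field on spin 0 breaks the up/down tie.
J = {(0, 1): 1.0, (1, 2): 1.0}
h = [0.1, 0.0, 0.0]
ground = brute_force_ground_state(3, J, h)
```

Benchmarking, in this framing, amounts to comparing how close a device's returned configurations come to the true minimum energy on instances small enough to solve exactly, or to the best-known energy otherwise.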
Rath, N; Kato, S; Levesque, J P; Mauel, M E; Navratil, G A; Peng, Q
2014-04-01
Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.
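The timing figures in the abstract above imply a simple budget: at the stated maximum sampling rate, a new sample arrives every few microseconds, so a latency larger than that period means processing must be pipelined rather than strictly sample-by-sample. A small arithmetic check (the function name is ours; the pipelining inference is our reading of the reported numbers, not a claim from the paper):

```python
def control_cycle_budget_us(sample_rate_hz):
    """Time available per control cycle, in microseconds."""
    return 1e6 / sample_rate_hz

# At the 250 kHz maximum rate a sample arrives every 4 us, so the
# reported <8 us GPU latency spans about two sample periods, i.e.
# roughly two samples are in flight through the processing chain.
budget_us = control_cycle_budget_us(250e3)
```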
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This is the RCRA required permit application for Radioactive and Hazardous Waste Management at the Oak Ridge Y-12 Plant for the following units: Building 9206 Container Storage Unit; Building 9212 Container Storage Unit; Building 9720-12 Container Storage Unit; Cyanide Treatment Unit. All four of these units are associated with the recovery of enriched uranium and other metals from wastes generated during the processing of nuclear materials.
Nolte, Kurt B; Stewart, Douglas M; O'Hair, Kevin C; Gannon, William L; Briggs, Michael S; Barron, A Marie; Pointer, Judy; Larson, Richard S
2008-10-01
The authors developed a novel continuous quality improvement (CQI) process for academic biomedical research compliance administration. A challenge in developing a quality improvement program in a nonbusiness environment is that the terminology and processes are often foreign. Rather than training staff in an existing quality improvement process, the authors opted to develop a novel process based on the scientific method--a paradigm familiar to all team members. The CQI process included our research compliance units. Unit leaders identified problems in compliance administration where a resolution would have a positive impact and which could be resolved or improved with current resources. They then generated testable hypotheses about a change to standard practice expected to improve the problem, and they developed methods and metrics to assess the impact of the change. The CQI process was managed in a "peer review" environment. The program included processes to reduce the incidence of infections in animal colonies, decrease research protocol-approval times, improve compliance and protection of animal and human research subjects, and improve research protocol quality. This novel CQI approach is well suited to the needs and the unique processes of research compliance administration. Using the scientific method as the improvement paradigm fostered acceptance of the project by unit leaders and facilitated the development of specific improvement projects. These quality initiatives will allow us to improve support for investigators while ensuring that compliance standards continue to be met. We believe that our CQI process can readily be used in other academically based offices of research.
Relvas, Gláubia Rocha Barbosa; Buccini, Gabriela Dos Santos; Venancio, Sonia Isoyama
2018-06-08
To analyze the prevalence of ultra-processed food intake among children under one year of age and to identify associated factors. A cross-sectional design was employed. We interviewed 198 mothers of children aged between 6 and 12 months in primary healthcare units located in a city of the metropolitan region of São Paulo, Brazil. Specific foods consumed in the 24 hours preceding the interview were considered to evaluate the consumption of ultra-processed foods. Variables related to mothers' and children's characteristics as well as primary healthcare units were grouped into three blocks of increasingly proximal influence on the outcome. A Poisson regression analysis was performed following a statistical hierarchical modeling to determine factors associated with ultra-processed food intake. The prevalence of ultra-processed food intake was 43.1%. Infants that were not being breastfed had a higher prevalence of ultra-processed food intake, but the difference was not statistically significant. Lower maternal education (prevalence ratio 1.55 [1.08-2.24]) and the child's first appointment at the primary healthcare unit having happened after the first week of life (prevalence ratio 1.51 [1.01-2.27]) were factors associated with the consumption of ultra-processed foods. High consumption of ultra-processed foods among children under 1 year of age was found. Both maternal socioeconomic status and time until the child's first appointment at the primary healthcare unit were associated with the prevalence of ultra-processed food intake. Copyright © 2018 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
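The prevalence ratios quoted in the abstract above come from a hierarchical Poisson regression, but the underlying quantity is easy to state: the prevalence of the outcome in the exposed group divided by its prevalence in the unexposed group. A crude (unadjusted) version, with entirely hypothetical counts chosen only to show the arithmetic:

```python
def prevalence_ratio(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Crude prevalence ratio: prevalence among the exposed divided by
    prevalence among the unexposed. The adjusted ratios reported in the
    study come from Poisson regression, not from this crude estimate.
    """
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# Hypothetical counts: 30 of 60 exposed vs 24 of 75 unexposed children.
pr = prevalence_ratio(30, 60, 24, 75)
```

A ratio above 1 indicates the outcome is more prevalent in the exposed group; the regression-based estimates additionally adjust for the other covariate blocks described in the abstract.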
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-27
... investigation named as respondents New United Co. Group Ltd.; Jiangsu New United Office Equipments Co. Ltd.; Shenzhen Elite Business Office Equipment Co. Ltd.; Elite Business Machines Ltd.; New United Office Equipment USA, Inc.; Jiangsu Shinri Machinery Co. Ltd. (collectively, the ``New United'' respondents); and...
Soil Formation and Distribution in Missouri. Instructional Unit. Conservation Education Series.
ERIC Educational Resources Information Center
Castillon, David A.
This unit is designed to help vocational agriculture teachers incorporate information on soil formation and the soils geography of Missouri into their curriculum. The unit consists of: (1) a topic outline; (2) general unit objectives; (3) discussions of processes and factors of soil formation, the soils geography of Missouri, and some soil…
Combustion Power Unit--400: CPU-400.
ERIC Educational Resources Information Center
Combustion Power Co., Palo Alto, CA.
Aerospace technology may have led to a unique basic unit for processing solid wastes and controlling pollution. The Combustion Power Unit--400 (CPU-400) is designed as a turboelectric generator plant that will use municipal solid wastes as fuel. The baseline configuration is a modular unit that is designed to utilize 400 tons of refuse per day…
Oaks, Acorns, Climate and Squirrels, An Environmental Investigation.
ERIC Educational Resources Information Center
National Wildlife Federation, Washington, DC.
This environmental unit is one of a series designed for integration within an existing curriculum. The unit is self-contained and requires minimal teacher preparation. The philosophy of the units is based on an experience-oriented process that encourages self-paced independent student work. In this particular unit, oaks and acorns are the vehicle…
40 CFR 63.1360 - Applicability.
Code of Federal Regulations, 2013 CFR
2013-07-01
... vessel and does not have an intervening storage vessel. If two or more PAI process units have the same.... If two or more PAI process units have the same input to or output from the storage vessel in the tank... hours during the calendar year. (e) Applicability of this subpart except during periods of startup...
40 CFR 63.480 - Applicability and designation of affected sources.
Code of Federal Regulations, 2011 CFR
2011-07-01
... same maximum annual design capacity on a mass basis for two or more products, and if one of those... period, applicability shall be determined in accordance with paragraph (f)(2) of this section. (A) If the... five year period for existing process units, or the specified one year period for new process units...
40 CFR 63.480 - Applicability and designation of affected sources.
Code of Federal Regulations, 2012 CFR
2012-07-01
... same maximum annual design capacity on a mass basis for two or more products, and if one of those... period, applicability shall be determined in accordance with paragraph (f)(2) of this section. (A) If the... five year period for existing process units, or the specified one year period for new process units...
40 CFR 63.1360 - Applicability.
Code of Federal Regulations, 2012 CFR
2012-07-01
... vessel and does not have an intervening storage vessel. If two or more PAI process units have the same.... If two or more PAI process units have the same input to or output from the storage vessel in the tank... hours during the calendar year. (e) Applicability of this subpart except during periods of startup...
40 CFR 63.480 - Applicability and designation of affected sources.
Code of Federal Regulations, 2013 CFR
2013-07-01
... same maximum annual design capacity on a mass basis for two or more products, and if one of those... period, applicability shall be determined in accordance with paragraph (f)(2) of this section. (A) If the... five year period for existing process units, or the specified one year period for new process units...
40 CFR 63.1360 - Applicability.
Code of Federal Regulations, 2011 CFR
2011-07-01
... vessel and does not have an intervening storage vessel. If two or more PAI process units have the same.... If two or more PAI process units have the same input to or output from the storage vessel in the tank... hours during the calendar year. (e) Applicability of this subpart except during periods of startup...
40 CFR 63.1310 - Applicability and designation of affected sources.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., the storage vessel shall be assigned to that process unit. (iv) If there are two or more process units... same maximum annual design capacity on a mass basis for two or more products, and if one of those... for the specified period, applicability shall be determined (in accordance with paragraph (f)(2) of...
40 CFR 63.480 - Applicability and designation of affected sources.
Code of Federal Regulations, 2014 CFR
2014-07-01
... same maximum annual design capacity on a mass basis for two or more products, and if one of those... period, applicability shall be determined in accordance with paragraph (f)(2) of this section. (A) If the... five year period for existing process units, or the specified one year period for new process units...
40 CFR 63.1360 - Applicability.
Code of Federal Regulations, 2010 CFR
2010-07-01
... vessel and does not have an intervening storage vessel. If two or more PAI process units have the same.... If two or more PAI process units have the same input to or output from the storage vessel in the tank... hours during the calendar year. (e) Applicability of this subpart except during periods of startup...
40 CFR 63.480 - Applicability and designation of affected sources.
Code of Federal Regulations, 2010 CFR
2010-07-01
... same maximum annual design capacity on a mass basis for two or more products, and if one of those... period, applicability shall be determined in accordance with paragraph (f)(2) of this section. (A) If the... five year period for existing process units, or the specified one year period for new process units...
7 CFR 3560.73 - Subsequent loans.
Code of Federal Regulations, 2011 CFR
2011-01-01
.... Loan requests to add units to comply with accessibility requirements may be processed as a subsequent loan; however, loan requests to add units to meet market demand will be processed as an initial loan... over a period not to exceed the lesser of the economic life of the housing being financed or 50 years...
7 CFR 3560.73 - Subsequent loans.
Code of Federal Regulations, 2012 CFR
2012-01-01
.... Loan requests to add units to comply with accessibility requirements may be processed as a subsequent loan; however, loan requests to add units to meet market demand will be processed as an initial loan... over a period not to exceed the lesser of the economic life of the housing being financed or 50 years...
7 CFR 3560.73 - Subsequent loans.
Code of Federal Regulations, 2013 CFR
2013-01-01
.... Loan requests to add units to comply with accessibility requirements may be processed as a subsequent loan; however, loan requests to add units to meet market demand will be processed as an initial loan... over a period not to exceed the lesser of the economic life of the housing being financed or 50 years...
7 CFR 3560.73 - Subsequent loans.
Code of Federal Regulations, 2014 CFR
2014-01-01
.... Loan requests to add units to comply with accessibility requirements may be processed as a subsequent loan; however, loan requests to add units to meet market demand will be processed as an initial loan... over a period not to exceed the lesser of the economic life of the housing being financed or 50 years...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-08
... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: Prevailing Wage Rates for Certain... Agriculture (USDA) farm production region that includes another State either with its own wage rate finding or...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-08
... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: 2013 Adverse Effect Wage Rates AGENCY... Department of Agriculture (USDA). 20 CFR 655.120(c) requires that the Administrator of the Office of Foreign...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-22
... DEPARTMENT OF LABOR Employment and Training Administration Labor Certification Process for the Temporary Employment of Aliens in Agriculture in the United States: 2012 Adverse Effect Wage Rates AGENCY... Department of Agriculture (USDA). 20 CFR 655.120(c) requires the Administrator of the Office of Foreign Labor...
40 CFR 63.640 - Applicability and designation of affected source.
Code of Federal Regulations, 2014 CFR
2014-07-01
... reformer catalyst regeneration vents, and sulfur plant vents; and (5) Emission points routed to a fuel gas... required for refinery fuel gas systems or emission points routed to refinery fuel gas systems. (e) The... petroleum refining process unit that is subject to this subpart; (3) Units processing natural gas liquids...
40 CFR 63.640 - Applicability and designation of affected source.
Code of Federal Regulations, 2013 CFR
2013-07-01
... reformer catalyst regeneration vents, and sulfur plant vents; and (5) Emission points routed to a fuel gas... required for refinery fuel gas systems or emission points routed to refinery fuel gas systems. (e) The... petroleum refining process unit that is subject to this subpart; (3) Units processing natural gas liquids...
40 CFR 63.7943 - How do I determine the average VOHAP concentration of my remediation material?
Code of Federal Regulations, 2012 CFR
2012-07-01
... knowledge as specified in paragraph (c) of this section. These methods may be used to determine the average... within, a remediation material management unit or treatment process; or (3) Remediation material that is... management unit or treatment process. (b) Direct measurement. To determine the average total VOHAP...
40 CFR 63.7943 - How do I determine the average VOHAP concentration of my remediation material?
Code of Federal Regulations, 2013 CFR
2013-07-01
... knowledge as specified in paragraph (c) of this section. These methods may be used to determine the average... within, a remediation material management unit or treatment process; or (3) Remediation material that is... management unit or treatment process. (b) Direct measurement. To determine the average total VOHAP...
Sanitary Engineering Unit Operations and Unit Processes Laboratory Manual.
ERIC Educational Resources Information Center
American Association of Professors in Sanitary Engineering.
This manual contains a compilation of experiments in Physical Operations, Biological and Chemical Processes for various education and equipment levels. The experiments are designed to be flexible so that they can be adapted to fit the needs of a particular program. The main emphasis is on hands-on student experiences to promote understanding.…
A Guide to the Selection of Cost-Effective Wastewater Treatment Systems. Technical Report.
ERIC Educational Resources Information Center
Van Note, Robert H.; And Others
The data within this publication provide guidelines for planners, engineers and decision-makers at all governmental levels to evaluate cost-effectiveness of alternative wastewater treatment proposals. The processes described include conventional and advanced treatment units as well as most sludge handling and processing units. Flow sheets, cost…
New and Revised Emission Factors for Flares and New Emission Factors for Certain Refinery Process Units and Determination for No Changes to VOC Emission Factors for Tanks and Wastewater Treatment Systems
Sodium content of popular commercially processed and restaurant foods in the United States
USDA-ARS?s Scientific Manuscript database
Nutrient Data Laboratory (NDL) of the U.S. Department of Agriculture (USDA), in close collaboration with the U.S. Centers for Disease Control and Prevention, is monitoring the sodium content of commercially processed and restaurant foods in the United States. The main purpose of this manuscript is to prov...
40 CFR 61.305 - Reporting and recordkeeping.
Code of Federal Regulations, 2012 CFR
2012-07-01
... unit or process heater with a design heat input capacity of 44 MW (150 × 10⁶ BTU/hr) or greater is used... or other flare design (i.e., steam-assisted, air-assisted or nonassisted), all visible emission... temperature of the steam generating unit or process heater with a design heat input capacity of less than 44...
40 CFR 61.305 - Reporting and recordkeeping.
Code of Federal Regulations, 2014 CFR
2014-07-01
... unit or process heater with a design heat input capacity of 44 MW (150 × 10⁶ BTU/hr) or greater is used... or other flare design (i.e., steam-assisted, air-assisted or nonassisted), all visible emission... temperature of the steam generating unit or process heater with a design heat input capacity of less than 44...
40 CFR 61.305 - Reporting and recordkeeping.
Code of Federal Regulations, 2010 CFR
2010-07-01
... unit or process heater with a design heat input capacity of 44 MW (150 × 10⁶ BTU/hr) or greater is used... or other flare design (i.e., steam-assisted, air-assisted or nonassisted), all visible emission... temperature of the steam generating unit or process heater with a design heat input capacity of less than 44...
40 CFR 61.305 - Reporting and recordkeeping.
Code of Federal Regulations, 2013 CFR
2013-07-01
... unit or process heater with a design heat input capacity of 44 MW (150 × 10⁶ BTU/hr) or greater is used... or other flare design (i.e., steam-assisted, air-assisted or nonassisted), all visible emission... temperature of the steam generating unit or process heater with a design heat input capacity of less than 44...
40 CFR 61.305 - Reporting and recordkeeping.
Code of Federal Regulations, 2011 CFR
2011-07-01
... unit or process heater with a design heat input capacity of 44 MW (150 × 10⁶ BTU/hr) or greater is used... or other flare design (i.e., steam-assisted, air-assisted or nonassisted), all visible emission... temperature of the steam generating unit or process heater with a design heat input capacity of less than 44...
Correct county areas with sidebars for Virginia
Joseph M. McCollum; Dale Gormanson; John Coulston
2009-01-01
Historically, Forest Inventory and Analysis (FIA) has processed field inventory data at the county level and county estimates of land area were constrained to equal those reported by the Census Bureau. Currently, the Southern Research Station FIA unit processes field inventory data at the survey unit level (groups of counties with similar ecological characteristics)....
Restructuring a Large IT Organization: Theory, Model, Process, and Initial Results.
ERIC Educational Resources Information Center
Luker, Mark; And Others
1995-01-01
Recently the University of Wisconsin-Madison merged three existing but disparate technology-related units into a single division reporting to a chief information officer. The new division faced many challenges, beginning with the need to restructure the old units into a cohesive new organization. The restructuring process, based on structural…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-26
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration 21 CFR Part 1005 [Docket No. FDA-2007-N-0091; (formerly 2007N-0104)] Service of Process on Manufacturers; Manufacturers Importing Electronic Products Into the United States; Agent Designation; Change of Address AGENCY: Food and Drug...
The Years Alone: A Reading Comprehension Unit (7-9).
ERIC Educational Resources Information Center
Blair-Broeker, Lynn
Based on Bloom's Taxonomy of thought, this thematic reading comprehension unit on "loneliness" is intended for teachers of grades 7-9. The thinking process is broken into six categories: (1) recall; (2) inference; (3) application; (4) analysis; (5) synthesis; and (6) evaluation. A short description is given for each of these processes.…
Modeling and Analysis of Information Product Maps
ERIC Educational Resources Information Center
Heien, Christopher Harris
2012-01-01
Information Product Maps are visual diagrams used to represent the inputs, processing, and outputs of data within an Information Manufacturing System. A data unit, drawn as an edge, symbolizes a grouping of raw data as it travels through this system. Processes, drawn as vertices, transform each data unit input into various forms prior to delivery…
Sequential microfluidic droplet processing for rapid DNA extraction.
Pan, Xiaoyan; Zeng, Shaojiang; Zhang, Qingquan; Lin, Bingcheng; Qin, Jianhua
2011-11-01
This work describes a novel droplet-based microfluidic device, which enables sequential droplet processing for rapid DNA extraction. The microdevice consists of a droplet generation unit, two reagent addition units and three droplet splitting units. The loading/washing/elution steps required for DNA extraction were carried out by sequential microfluidic droplet processing. The movement of the superparamagnetic beads, which were used as extraction supports, was controlled with a magnetic field. The microdevice could generate about 100 droplets per min, and it took about 1 min for each droplet to complete the whole extraction process. The extraction efficiency was measured to be 46% for λ-DNA, and the extracted DNA could be used in subsequent genetic analysis such as PCR, demonstrating the potential of the device for fast DNA extraction.
On the technological development of cotton primary processing, using a new drying-purifying unit
NASA Astrophysics Data System (ADS)
Agzamov, M. M.; Yunusov, S. Z.; Gafurov, J. K.
2017-10-01
The article presents a feasibility study of the technological development of cotton primary processing with modified drying and cleaning parameters for small litter. As a result of theoretical and experimental research, a drying-purifying unit was designed that eliminates from the existing process a separate heat source, exhaust fans, a dryer drum, a peg-drum cotton cleaner, and the conveyor transferring raw cotton from the dryer to the purifier. Experiments showed that the installed drying-purifying unit (with eight wheels) achieves a 34% cleaning effect on small litter, i.e., a cleaning effect higher than that of the 1XK drum cleaner currently in operation. Patent RU UZ FAP 00674, "Apparatus for drying and cleaning fibrous material," was received for this research.
Plemmons, Christina; Clark, Michele; Feng, Du
2018-03-01
Clinical education is vital to both the development of clinical self-efficacy and the integration of future nurses into health care teams. The dedicated education unit clinical teaching model is an innovative clinical partnership, which promotes skill development, professional growth, clinical self-efficacy, and integration as a team member. Blended clinical teaching models combine features of the dedicated education unit and traditional clinical models. The aims of this study are to explore how each of three clinical teaching models (dedicated education unit, blended, traditional) affects clinical self-efficacy and attitude toward team process, and to compare the dedicated education unit model and blended model to the traditional clinical model. A nonequivalent control-group quasi-experimental design was utilized. The convenience sample of 272 entry-level baccalaureate nursing students included 84 students participating in a dedicated education unit model treatment group, 66 students participating in a blended model treatment group, and 122 students participating in a traditional model control group. Perceived clinical self-efficacy was evaluated by the pretest/posttest scores obtained on the General Self-Efficacy scale. Attitude toward team process was evaluated by the pretest/posttest scores obtained on the TeamSTEPPS® Teamwork Attitude Questionnaire. All three clinical teaching models resulted in significant increases in both clinical self-efficacy (p=0.04) and attitude toward team process (p=0.003). Students participating in the dedicated education unit model (p=0.016) and students participating in the blended model (p<0.001) had significantly larger increases in clinical self-efficacy compared to students participating in the traditional model.
These findings support the use of dedicated education unit and blended clinical partnerships as effective alternatives to the traditional model to promote both clinical self-efficacy and team process among entry-level baccalaureate nursing students.
Psychiatry training in the United Kingdom--part 2: the training process.
Christodoulou, N; Kasiakogia, K
2015-01-01
In the second part of this diptych, we deal with psychiatric training in the United Kingdom in detail, and we compare it, wherever meaningful, with the equivalent system in Greece. As explained in the first part of the paper, due to the recently increased emigration of Greek psychiatrists and psychiatric trainees, and the fact that the United Kingdom is a popular destination, it has become necessary to inform those aspiring to train in the United Kingdom of the system and the circumstances they should expect to encounter. This paper principally describes the structure of the United Kingdom's psychiatric training system, including the different stages trainees progress through and their respective requirements and processes. Specifically, specialty and subspecialty options are described and explained, special paths in training are analysed, and the notions of "special interest day" and the optional "Out of programme experience" schemes are explained. Furthermore, detailed information is offered on the pivotal points of each of the stages of the training process, with special care to explain the important differences and similarities between the systems in Greece and the United Kingdom. Special attention is given to The Royal College of Psychiatrists' Membership Exams (MRCPsych) because they are the only exams towards completing specialisation in Psychiatry in the United Kingdom. Also stressed is the educational culture of progressing according to a set curriculum, of utilising diverse means of professional development, of empowering the trainees' autonomy by allowing initiative-based development, and of applying peer supervision as a tool for professional development. We conclude that psychiatric training in the United Kingdom differs substantially from that of Greece in both structure and process.
There are various differences, such as pure psychiatric training in the United Kingdom versus neurological and medical modules in Greece, in-training exams in the United Kingdom versus an exit exam in Greece, and of course the three years of higher training, which prepare trainees to function as consultants. However, perhaps the most important difference is one of mentality; namely, a culture of competency-based training progression in the United Kingdom, which further extends beyond training into professional revalidation. We believe that, with careful cultural adaptation, the systems of psychiatric training in the United Kingdom and Greece may benefit from sharing some of their features. Lastly, as previously clarified, this diptych paper is meant to be informative, not advisory.
Increasing relational memory in childhood with unitization strategies.
Robey, Alison; Riggins, Tracy
2018-01-01
Young children often experience relational memory failures, which are thought to result from immaturity of the recollection processes presumed to be required for these tasks. However, research in adults has suggested that relational memory tasks can be accomplished using familiarity, a process thought to be mature by the end of early childhood. The goal of the present study was to determine whether relational memory performance could be improved in childhood by teaching young children memory strategies that have been shown to increase the contribution of familiarity in adults (i.e., unitization). Groups of 6- and 8-year-old children were taught to use visualization strategies that either unitized or did not unitize pictures and colored borders. Estimates of familiarity and recollection were extracted by fitting receiver operating characteristic curves (Yonelinas, Journal of Experimental Psychology: Learning, Memory, and Cognition 20, 1341-1354, 1994; Yonelinas, Memory & Cognition 25, 747-763, 1997) based on dual-process models of recognition. Bayesian analysis revealed that strategies involving unitization improved memory performance and increased the contribution of familiarity in both age groups.
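The recollection/familiarity decomposition used in this abstract has a simple computable form. As an illustrative sketch only (not the authors' analysis code; the function name, grid ranges, and least-squares fit are assumptions), the Yonelinas dual-process model predicts each ROC point from a recollection probability R and a familiarity strength d', and both parameters can be recovered from observed hit and false-alarm rates:

```python
from statistics import NormalDist

_N = NormalDist()  # standard normal: cdf() and inv_cdf()

def fit_dual_process(hits, fas):
    """Grid-search least-squares fit of the Yonelinas dual-process model.

    Model for ROC point i:
        hit_i = R + (1 - R) * CDF(d - c_i)    (recollection OR familiarity)
        fa_i  = CDF(-c_i)                     (familiarity-only false alarms)
    R is the recollection probability, d the familiarity strength (d'),
    and c_i the response criterion at confidence level i.
    """
    # Recover each criterion directly from its false-alarm rate.
    criteria = [-_N.inv_cdf(f) for f in fas]
    best_err, best_R, best_d = float("inf"), 0.0, 0.0
    for R in (r / 100 for r in range(100)):        # R on [0, 1)
        for d in (k / 50 for k in range(200)):     # d' on [0, 4)
            err = sum((h - (R + (1 - R) * _N.cdf(d - c))) ** 2
                      for h, c in zip(hits, criteria))
            if err < best_err:
                best_err, best_R, best_d = err, R, d
    return best_R, best_d
```

With confidence-rating data, each confidence level supplies one (false alarm, hit) pair, and the fitted R and d' summarize the recollection and familiarity contributions, respectively.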
Bosslet, Gabriel T; Baker, Mary; Pope, Thaddeus M
2016-09-01
Disputes regarding life-prolonging treatments are stressful for all parties involved. These disagreements are almost always appropriately resolved through intensive communication and negotiation. The rare cases that are not require a resolution process that ensures fairness and due process. We describe three recent cases from different jurisdictions (the United States, the United Kingdom, and Ontario, Canada) to qualitatively contrast the legal responses to intractable, policy-level disputes regarding end-of-life care in each. In so doing, we define the continuum of clinical and social utility among different types of dispute resolution processes and emphasize the importance of public reason-giving in the societal discussion regarding policy-level solutions to end-of-life treatment disputes. We argue that precedential, publicly available, written rulings for these decisions most effectively help to move the social debate forward in a way that is beneficial to clinicians, patients, and citizens. This analysis highlights the lack of such rulings within the United States.
Evaluation of Argonne 9-cm and 10-cm Annular Centrifugal Contactors for SHINE Solution Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wardle, Kent E.; Pereira, Candido; Vandegrift, George
2015-02-01
Work is in progress to evaluate the SHINE Medical Technologies process for producing Mo-99 for medical use from the fission of dissolved low-enriched uranium (LEU). This report addresses the use of Argonne annular centrifugal contactors for periodic treatment of the process solution. In a letter report from FY 2013, Pereira and Vandegrift compared the throughput and physical footprint for the two contactor options available from CINC Industries: the V-02 and V-05, which have rotor diameters of 5 cm and 12.7 cm, respectively. They suggested that an intermediately sized "Goldilocks" contactor might provide a better balance between throughput and footprint to meet the processing needs for the uranium extraction (UREX) processing of the SHINE solution to remove undesired fission products. Included with the submission of this letter report are the assembly drawings for two Argonne-design contactors in this intermediate range, with 9-cm and 10-cm rotors, respectively. The 9-cm contactor (drawing number CE-D6973A, stamped February 15, 1978) was designed as a single-stage unit and built and tested in the late 1970s along with other size units, both smaller and larger. In subsequent years, a significant effort to develop annular centrifugal contactors was undertaken to support work at Hanford implementing the transuranic extraction (TRUEX) process. These contactors had a 10-cm rotor diameter and were fully designed as multistage units with four stages per assembly (drawing number CMT-E1104, stamped March 14, 1990). From a technology readiness perspective, these 10-cm units are much farther ahead in the design progression and, therefore, would require significantly less re-working to make them ready for UREX deployment. Additionally, the overall maximum throughput of ~12 L/min is similar to that of the 9-cm unit (10 L/min), and the former could be efficiently operated over much of the same range of throughput.
As a result, only the 10-cm units are considered here, though drawings are provided for the 9-cm unit for reference.
NASA Astrophysics Data System (ADS)
Memeti, V.; Paterson, S. R.
2006-12-01
Data gained using various geologic tools from large, composite batholiths, such as the 95-85 Ma old Tuolumne Batholith (TB), Sierra Nevada, CA, indicate complex batholithic processes at the chamber construction site, in part since they record different increments of batholith construction through time. Large structural and compositional complexity generally occurs throughout the main batholith, such as in (1) geochemistry, (2) internal contacts between different units (Bateman, 1992; Zak & Paterson, 2005), (3) batholith/host rock contacts, (4) geochronology (Coleman et al., 2004; Matzel et al., 2005, 2006), and (5) internal structures such as schlieren layering and fabrics (Bateman, 1992; Zak et al., 2006), leading to controversies regarding batholith construction models. By using magmatic lobes, tongues of individual batholithic units that extend into the host rock away from the main batholith, we avoid some of the complexity that evolved over longer times within the main batholith. Magmatic lobes are "simpler" systems, because they are spatially separated from other units of the batholith and thus ideally represent processes in just one unit at the time of emplacement. Furthermore, they are shorter lived than the main batholith since they are surrounded by relatively cold host rock and "freeze in" (1) "snapshots" of batholith construction, and (2) relatively short-lived internal processes and resulting structures and composition in each individual unit. Thus, data from lobes of all batholith units, representing different stages of a batholith's lifetime, help us to understand internal magmatic and external host rock processes during batholith construction.
Based on field and analytic data from magmatic lobes of the Kuna Crest, Half Dome, and the Cathedral Peak granodiorites, we conclude that (1) the significance of internal processes in the lobes (fractionation versus mixing versus source heterogeneity) is unique for each individual TB unit; (2) emplacement mechanisms such as stoping, downward flow, or ductile deformation of host rock act in a very short period of time (only a few 100,000 yrs); and (3) a variety of different magmatic fabrics, formed by strain caused by magma flow, marginal effects, or regional stress, can be found in each lobe. These data lead to the conclusion that the size of the studied lobes indicates the minimum pulse size for TB construction and that fractional crystallization, even though slightly varying in its magnitude, is an important internal process in each individual TB unit.
NASA Astrophysics Data System (ADS)
Cerchiari, G.; Croccolo, F.; Cardinaux, F.; Scheffold, F.
2012-10-01
We present an implementation of the analysis of dynamic near field scattering (NFS) data using a graphics processing unit. We introduce an optimized data management scheme thereby limiting the number of operations required. Overall, we reduce the processing time from hours to minutes, for typical experimental conditions. Previously the limiting step in such experiments, the processing time is now comparable to the data acquisition time. Our approach is applicable to various dynamic NFS methods, including shadowgraph, Schlieren and differential dynamic microscopy.
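The central quantity in such dynamic near-field scattering analyses (shadowgraph, Schlieren, differential dynamic microscopy) is the image structure function: the azimuthally averaged power spectrum of frame differences at each time lag. A minimal CPU sketch of that computation, not the authors' GPU implementation (the function name and binning scheme are illustrative assumptions):

```python
import numpy as np

def ddm_structure_function(frames, lags):
    """Image structure function D(q, dt) = <|FFT(I(t+dt) - I(t))|^2>,
    azimuthally averaged over wavevector magnitude |q|.

    frames: 3-D array (time, ny, nx); lags: iterable of frame lags.
    Returns {lag: 1-D array of D over |q| bins}.
    """
    nt, ny, nx = frames.shape
    # Radial wavevector magnitude on the FFT grid, bucketed into |q| bins.
    qy = np.fft.fftfreq(ny)[:, None]
    qx = np.fft.fftfreq(nx)[None, :]
    qmag = np.sqrt(qx ** 2 + qy ** 2)
    nbins = min(ny, nx) // 2
    qbin = np.minimum((qmag / qmag.max() * nbins).astype(int), nbins - 1)
    counts = np.bincount(qbin.ravel(), minlength=nbins)
    out = {}
    for lag in lags:
        diffs = frames[lag:] - frames[:-lag]                 # all frame pairs at this lag
        power = np.abs(np.fft.fft2(diffs, axes=(1, 2))) ** 2
        mean_power = power.mean(axis=0)                      # average over time pairs
        sums = np.bincount(qbin.ravel(), weights=mean_power.ravel(),
                           minlength=nbins)
        out[lag] = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    return out
```

Because every lag reuses the same FFT of each frame difference, the GPU version described in the abstract gains its speedup largely by keeping the frames and their transforms resident in device memory, which is the kind of data-management optimization the authors emphasize.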
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schulte, H.F.; Stoker, A.K.; Campbell, E.E.
1976-06-01
Oil shale technology has been divided into two sub-technologies: surface processing and in-situ processing. Definition of the research programs is essentially an amplification of the five King-Muir categories: (A) pollutants: characterization, measurement, and monitoring; (B) physical and chemical processes and effects; (C) health effects; (D) ecological processes and effects; and (E) integrated assessment. Twenty-three biomedical and environmental research projects are described as to program title, scope, milestones, technology time frame, program unit priority, and estimated program unit cost.