Improvement for enhancing effectiveness of universal power system (UPS) continuous testing process
NASA Astrophysics Data System (ADS)
Sriratana, Lerdlekha
2018-01-01
This experiment aims to enhance the effectiveness of the Universal Power System (UPS) continuous testing process of the Electrical and Electronic Institute by applying work scheduling and time study methods. Initially, the standard time of the testing process had not been established, which resulted in inaccurate testing targets; considerable wasted time was also observed. To monitor and reduce wasted time and so improve the efficiency of the testing process, a Yamazumi chart and job scheduling theory (the North West Corner Rule) were applied to develop a new work process. After the improvements, the overall efficiency of the process could increase from 52.8% to 65.6%, or 12.7%. Moreover, wasted time could be reduced from 828.3 minutes to 653.6 minutes, or 21%, while the number of units tested per batch could increase from 3 to 4. The number of units tested would therefore increase from 12 to 20 per month, which also contributes to a 72% increase in the net income of the UPS testing process.
Noise, chaos, and (ɛ, τ)-entropy per unit time
NASA Astrophysics Data System (ADS)
Gaspard, Pierre; Wang, Xiao-Jing
1993-12-01
The degree of dynamical randomness of different time processes is characterized in terms of the (ε, τ)-entropy per unit time. The (ε, τ)-entropy is the amount of information generated per unit time at scale τ in time and ε in the observables. This quantity generalizes the Kolmogorov-Sinai entropy per unit time from deterministic chaotic processes to stochastic processes such as fluctuations in mesoscopic physico-chemical phenomena or strong turbulence in macroscopic spacetime dynamics. The random processes characterized include chaotic systems, Bernoulli and Markov chains, Poisson and birth-and-death processes, Ornstein-Uhlenbeck and Yaglom noises, fractional Brownian motions, different regimes of hydrodynamical turbulence, and the Lorentz-Boltzmann process of nonequilibrium statistical mechanics. We also extend the (ε, τ)-entropy to spacetime processes like cellular automata, Conway's game of life, lattice gas automata, coupled maps, spacetime chaos in partial differential equations, as well as the ideal, the Lorentz, and the hard-sphere gases. Through these examples it is demonstrated that the (ε, τ)-entropy provides a unified quantitative measure of dynamical randomness for both chaos and noise, and a method to detect transitions between dynamical states of different degrees of randomness as a parameter of the system is varied.
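The definition above translates into a crude numerical estimator: partition the signal amplitude into cells of size ε, subsample at interval τ, and measure the growth rate of the Shannon block entropy. A minimal Python sketch, with illustrative names and a plain plug-in entropy estimate (the paper's own estimators are more refined):

from collections import Counter
import numpy as np

def block_entropy(symbols, n):
    """Shannon entropy (nats) of length-n blocks of a symbol sequence."""
    counts = Counter(tuple(symbols[i:i + n]) for i in range(len(symbols) - n + 1))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum())

def eps_tau_entropy(x, eps, tau, dt, n=4):
    """Estimate h(eps, tau) ~ [H(n+1) - H(n)] / tau for a series x sampled
    at interval dt: coarse-grain the amplitude into cells of size eps,
    subsample every tau, and take the block-entropy growth rate."""
    step = max(1, int(round(tau / dt)))
    symbols = np.floor(np.asarray(x)[::step] / eps).astype(int)
    return (block_entropy(symbols, n + 1) - block_entropy(symbols, n)) / tau

# For white noise the estimate grows without bound as eps -> 0; for a chaotic
# map it saturates at the Kolmogorov-Sinai entropy -- the signature discussed above.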
NASA Astrophysics Data System (ADS)
Chi, Xiao-Chun; Wang, Ying-Hui; Gao, Yu; Sui, Ning; Zhang, Li-Quan; Wang, Wen-Yan; Lu, Ran; Ji, Wen-Yu; Yang, Yan-Qiang; Zhang, Han-Zhuang
2018-04-01
Three push-pull chromophores comprising a triphenylamine (TPA) electron-donating moiety and functionalized β-diketone electron-acceptor units are studied by various spectroscopic techniques. Time-correlated single-photon counting data show that increasing the number of electron-acceptor units accelerates the photoluminescence relaxation rate of the compounds. Transient spectral data show that intramolecular charge transfer (ICT) takes place from the TPA unit to the β-diketone units after photo-excitation. Increasing the number of electron-acceptor units prolongs the formation of the ICT state and accelerates both the reorganization of the excited molecule and the relaxation of the ICT state.
Assessment of mammographic film processor performance in a hospital and mobile screening unit.
Murray, J G; Dowsett, D J; Laird, O; Ennis, J T
1992-12-01
In contrast to the majority of mammographic breast screening programmes, film processing at this centre occurs on site in both hospital and mobile trailer units. Initial (1989) quality control (QC) sensitometric tests revealed a large variation in film processor performance in the mobile unit. The clinical significance of these variations was assessed and acceptance limits for processor performance were determined. Abnormal mammograms were used as reference material and copied using high-definition 35 mm film over a range of exposure settings. The copies were then matched with the QC film density variation from the mobile unit. All films were subsequently ranked for spatial and contrast resolution. Optimal values of 2 min for processing time (equivalent to a film transit time of 3 min and a developer time of 46 s) and 36 °C for temperature were obtained. The widespread anomaly of reporting film transit time as processing time is highlighted. Use of mammogram copies as a means of measuring the influence of film processor variation is advocated. Careful monitoring of the mobile unit's film processor performance has produced stable quality comparable with that of the hospital-based unit. The advantages of on-site film processing are outlined. The addition of a sensitometric step wedge to all mammography film stock as a means of assessing image quality is recommended.
NASA Astrophysics Data System (ADS)
Cerchiari, G.; Croccolo, F.; Cardinaux, F.; Scheffold, F.
2012-10-01
We present an implementation of the analysis of dynamic near-field scattering (NFS) data using a graphics processing unit. We introduce an optimized data management scheme, thereby limiting the number of operations required. Overall, we reduce the processing time from hours to minutes for typical experimental conditions. Previously the limiting step in such experiments, the processing time is now comparable to the data acquisition time. Our approach is applicable to various dynamic NFS methods, including shadowgraph, Schlieren and differential dynamic microscopy.
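For concreteness, the central computation in differential dynamic microscopy and related dynamic NFS analyses is the image structure function, an average of squared Fourier-space frame differences. The sketch below is a minimal CPU version, assuming a (T, H, W) image stack; computing each frame's FFT once and reusing it across all lags is one data-management economy in the spirit of the paper, though the authors' exact scheme is not reproduced here:

import numpy as np

def ddm_structure_function(frames, lags):
    """Image structure function D(q, dt) = < |FFT[I(t+dt)] - FFT[I(t)]|^2 >_t
    for a (T, H, W) stack of frames. Each frame's FFT is computed once and
    reused for every lag."""
    T = frames.shape[0]
    ffts = np.fft.fft2(frames)              # one 2-D FFT per frame
    out = {}
    for dt in lags:
        diff = ffts[dt:] - ffts[:T - dt]    # Fourier-space frame differences
        out[dt] = np.mean(np.abs(diff) ** 2, axis=0)
    return out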
Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)
NASA Astrophysics Data System (ADS)
Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.
2016-05-01
This study introduces a practical approach to developing a real-time signal processing chain for general phased-array radar on NVIDIA GPUs (Graphical Processing Units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open-source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from CPUs. Performance, benchmarked as computation time for various input data cube sizes, is compared across GPUs and CPUs. Through this analysis, it is demonstrated that GPGPU (general-purpose GPU) real-time processing of array radar data is possible with relatively low-cost commercial GPUs.
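As an illustration of the kind of stage such a chain contains, the following sketch performs FFT-based pulse compression (matched filtering) on a pulses-by-range-samples slab. These numpy FFT calls map directly onto cuFFT, and importing cupy as np runs the identical code on an NVIDIA GPU; the function and array names are illustrative, not from the paper:

import numpy as np  # swap in `import cupy as np` to run on an NVIDIA GPU

def pulse_compress(echoes, waveform):
    """FFT-based matched filtering of one radar data-cube slab.
    `echoes` is (n_pulses, n_range_samples); `waveform` is the transmitted
    chirp. Returns the pulse-compressed slab, trimmed to the input length."""
    n = echoes.shape[1] + len(waveform) - 1
    H = np.conj(np.fft.fft(waveform, n))    # matched-filter frequency response
    E = np.fft.fft(echoes, n, axis=1)
    return np.fft.ifft(E * H, axis=1)[:, :echoes.shape[1]]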
A containerless levitation setup for liquid processing in a superconducting magnet.
Lu, Hui-Meng; Yin, Da-Chuan; Li, Hai-Sheng; Geng, Li-Qiang; Zhang, Chen-Yan; Lu, Qin-Qin; Guo, Yun-Zhu; Guo, Wei-Hong; Shang, Peng; Wakayama, Nobuko I
2008-09-01
Containerless processing of materials is considered beneficial for obtaining high-quality products because it eliminates the detrimental effects of contact with container walls. Many containerless processing methods are realized by levitation techniques. This paper describes a containerless levitation setup that utilizes the magnetization force generated in a gradient magnetic field. It comprises a levitation unit, a temperature control unit, and a real-time observation unit. A known volume of a liquid diamagnetic sample can be levitated in the levitation chamber, whose temperature is controlled using the temperature control unit. The evolution of the levitated sample is observed in real time using the observation unit. With this setup, containerless processing of liquids, such as crystal growth from solution, can be realized in a well-controlled manner. Since the levitation is achieved using a superconducting magnet, experiments requiring long durations, such as protein crystallization and simulation of the space environment for living systems, can easily be carried out.
Adaptive-optics optical coherence tomography processing using a graphics processing unit.
Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T
2014-01-01
Graphics processing units are increasingly being used for scientific computing for their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper we have used a general-purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving this super-high-resolution technology closer to clinical viability.
Gjolaj, Lauren N; Gari, Gloria A; Olier-Pino, Angela I; Garcia, Juan D; Fernandez, Gustavo L
2014-11-01
Prolonged patient wait times in the outpatient oncology infusion unit indicated a need to streamline phlebotomy processes by using existing resources to decrease laboratory turnaround time and improve patient wait times. Using the DMAIC (define, measure, analyze, improve, control) method, a project to streamline phlebotomy processes within the outpatient oncology infusion unit of an academic Comprehensive Cancer Center, known as the Comprehensive Treatment Unit (CTU), was completed. Laboratory turnaround time for patients who needed same-day laboratory and CTU services, and wait time for all CTU patients, were tracked for 9 weeks. During the pilot, the wait time from arrival at the CTU to sitting in the treatment area decreased by 17% for all patients treated in the CTU. A total of 528 patients were seen at the CTU phlebotomy location, representing 16% of the total patients who received treatment in the CTU, with a mean turnaround time of 24 minutes compared with a baseline turnaround time of 51 minutes. Streamlining workflows and placing a phlebotomy station inside the CTU decreased laboratory turnaround times by 53% for patients requiring same-day laboratory and CTU services. The success of the pilot project prompted the team to make the station a permanent fixture. Copyright © 2014 by American Society of Clinical Oncology.
Distributed and recoverable digital control system
NASA Technical Reports Server (NTRS)
Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)
2010-01-01
A real-time multi-tasking digital control system with rapid recovery capability is disclosed. The control system includes a plurality of computing units comprising a plurality of redundant processing units, with each of the processing units configured to generate one or more redundant control commands. One or more internal monitors are employed for detecting data errors in the control commands. One or more recovery triggers are provided for initiating rapid recovery of a processing unit if data errors are detected. The control system also includes a plurality of actuator control units each in operative communication with the computing units. The actuator control units are configured to initiate a rapid recovery if data errors are detected in one or more of the processing units. A plurality of smart actuators communicates with the actuator control units, and a plurality of redundant sensors communicates with the computing units.
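A toy software rendering of the monitor-and-trigger idea: each redundant processing unit produces a command, a comparator votes, and any dissenting unit is flagged for rapid recovery. This illustrates the concept only, not the patented mechanization; all names are illustrative:

from collections import Counter

def vote(commands):
    """Majority-vote the redundant control commands; return the agreed
    command and the indices of processing units whose output miscompared."""
    winner, _ = Counter(commands).most_common(1)[0]
    dissenters = [i for i, c in enumerate(commands) if c != winner]
    return winner, dissenters

# Example: unit 2 produces a corrupted command, so the actuator control unit
# forwards 12.5 to the smart actuators and raises a recovery trigger for unit 2.
cmd, bad_units = vote([12.5, 12.5, 12.7])
assert cmd == 12.5 and bad_units == [2]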
Damle, Aneel; Andrew, Nathan; Kaur, Shubjeet; Orquiola, Alan; Alavi, Karim; Steele, Scott R; Maykel, Justin
2016-07-01
Lean processes involve streamlining methods and maximizing efficiency. Well established in the manufacturing industry, they are increasingly being applied to health care. The objective of this study was to determine the feasibility and effectiveness of applying Lean principles to an academic medical center colonoscopy unit. Lean process improvement involved training endoscopy personnel, observing patients, mapping the value stream, analyzing patient flow, designing and implementing new processes, and finally re-observing the process. Our primary endpoint was total colonoscopy time (minutes from check-in to discharge), with secondary endpoints of individual segment times and unit colonoscopy capacity. A total of 217 patients were included (November 2013-May 2014), with 107 pre-Lean and 110 post-Lean intervention. Pre-Lean total colonoscopy time was 134 min. After implementation of the Lean process, mean colonoscopy time decreased by 10% to 121 min (p = 0.01). The three steps of the process affected by the Lean intervention (time to achieve adequate sedation, time to recovery, and time to discharge) decreased from 3.7 to 2.4 min (p < 0.01), 4.0 to 3.4 min (p = 0.09), and 41.2 to 35.4 min (p = 0.05), respectively. Overall, unit colonoscopy capacity increased from 39.6 to 43.6 per day. Post-Lean patient satisfaction surveys demonstrated an average score of 4.5/5.0 (n = 73) regarding waiting time, 4.9/5.0 (n = 60) regarding how favorably this experience compared with prior colonoscopy experiences, and 4.9/5.0 (n = 74) regarding professionalism of staff. One hundred percent of respondents (n = 69) stated they would recommend our institution to a friend for colonoscopy. With no additional utilization of resources, a single Lean process improvement cycle increased the productivity and capacity of our colonoscopy unit. We expect this to result in increased patient access and revenue while maintaining patient satisfaction. We believe these results are widely generalizable to other colonoscopy units as well as to other process-based interventions in health care.
State and Local Publications | State, Local, and Tribal Governments | NREL
Understanding Processes and Timelines for Distributed Photovoltaic Interconnection in the United States analyzes residential and small commercial photovoltaic interconnection process time frames in the United States.
Bibok, Maximilian B; Votova, Kristine; Balshaw, Robert F; Lesperance, Mary L; Croteau, Nicole S; Trivedi, Anurag; Morrison, Jaclyn; Sedgwick, Colin; Penn, Andrew M
2018-02-27
To evaluate the performance of a novel triage system for Transient Ischemic Attack (TIA) units, built upon an existing clinical prediction rule (CPR), in reducing time to unit arrival, relative to the time of symptom onset, for true TIA and minor stroke patients. Differentiating between true and false TIA/minor stroke cases (mimics) is necessary for effective triage, as medical intervention for true TIA/minor stroke is time-sensitive and TIA unit spots are a finite resource. Prospective cohort study design utilizing patient referral data and TIA unit arrival times from a regional fast-track TIA unit on Vancouver Island, Canada, accepting referrals from emergency departments (ED) and general practice (GP). A historical referral cohort (N = 2942) from May 2013-Oct 2014 was triaged using the ABCD2 score; a prospective referral cohort (N = 2929) from Nov 2014-Apr 2016 was triaged using the novel system. A retrospective survival curve analysis, censored at 28 days to unit arrival, was used to compare days from event date to unit arrival between cohort patients matched by low (0-3), moderate (4-5) and high (6-7) ABCD2 scores. The survival curve analysis indicated that, under the novel triage system, prospectively referred TIA/minor stroke patients with low and moderate ABCD2 scores arrived at the unit 2 days and 1 day earlier, respectively, than matched historical patients. The novel triage process is associated with a reduction in time from symptom onset to unit arrival for referred true TIA/minor stroke patients with low and moderate ABCD2 scores.
Jurado, Marisa; Algora, Manuel; Garcia-Sanchez, Félix; Vico, Santiago; Rodriguez, Eva; Perez, Sonia; Barbolla, Luz
2012-01-01
The Community Transfusion Centre in Madrid currently processes whole blood using a conventional procedure (Compomat, Fresenius) followed by automated processing of buffy coats with the OrbiSac system (CaridianBCT). The Atreus 3C system (CaridianBCT) automates the production of red blood cells, plasma and an interim platelet unit from a whole blood unit. Interim platelet units are pooled to produce a transfusable platelet unit. In this study the Atreus 3C system was evaluated and compared to the routine method with regard to product quality and operational value. Over a 5-week period 810 whole blood units were processed using the Atreus 3C system. The attributes of the automated process were compared to those of the routine method by assessing productivity, space, equipment and staffing requirements. The data obtained were evaluated in order to estimate the impact of implementing the Atreus 3C system in the routine setting of the blood centre. The yield and in vitro quality of the final blood components processed with the two systems were evaluated and compared. The Atreus 3C system enabled higher throughput while requiring less space and employee time, by decreasing the amount of equipment and the processing time per unit of whole blood processed. Whole blood units processed on the Atreus 3C system gave a higher platelet yield, a similar amount of red blood cells and a smaller volume of plasma. These results support the conclusion that the Atreus 3C system produces blood components meeting quality requirements while providing high operational efficiency. Implementation of the Atreus 3C system could result in a large organisational improvement.
Method and apparatus for fault tolerance
NASA Technical Reports Server (NTRS)
Masson, Gerald M. (Inventor); Sullivan, Gregory F. (Inventor)
1993-01-01
A method and apparatus for achieving fault tolerance in a computer system having at least a first central processing unit and a second central processing unit. The method comprises the steps of first executing a first algorithm in the first central processing unit on an input, which produces a first output as well as a certification trail. Next, a second algorithm is executed in the second central processing unit on the input and on at least a portion of the certification trail, producing a second output. The second algorithm has a faster execution time than the first algorithm for a given input. Then, the first and second outputs are compared, such that an error result is produced if they are not the same. The step of executing the first algorithm and the step of executing the second algorithm preferably take place over essentially the same time period.
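Sorting gives a natural instance of the certification-trail scheme: the first algorithm emits the sorting permutation as its trail, letting a second, faster pass reproduce and check the output in linear time. A Python sketch under that choice of example (the patent itself is more general):

def sort_with_trail(xs):
    """First algorithm: an O(n log n) sort that also emits a certification
    trail -- here, the sorting permutation."""
    perm = sorted(range(len(xs)), key=lambda i: xs[i])
    return [xs[i] for i in perm], perm

def sort_from_trail(xs, perm):
    """Second algorithm: a linear-time pass that rebuilds the output from
    the trail and certifies it, faster than re-sorting."""
    seen = [False] * len(xs)
    for i in perm:
        if not (0 <= i < len(xs)) or seen[i]:
            raise ValueError("certification trail is not a permutation")
        seen[i] = True
    out = [xs[i] for i in perm]
    if any(a > b for a, b in zip(out, out[1:])):
        raise ValueError("certification trail does not yield sorted order")
    return out

xs = [3, 1, 2]
first_output, trail = sort_with_trail(xs)    # runs on the first CPU
second_output = sort_from_trail(xs, trail)   # runs concurrently on the second CPU
error = first_output != second_output        # comparator flags any mismatch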
Vigmond, Edward J.; Boyle, Patrick M.; Leon, L. Joshua; Plank, Gernot
2014-01-01
Simulations of cardiac bioelectric phenomena remain a significant challenge despite continual advancements in computational machinery. Spanning large temporal and spatial ranges demands millions of nodes to accurately depict geometry, and a comparable number of timesteps to capture dynamics. This study explores a new hardware computing paradigm, the graphics processing unit (GPU), to accelerate cardiac models, and analyzes the results in the context of simulating a small mammalian heart in real time. The ODEs associated with membrane ionic flow were computed on a traditional CPU and compared to GPU performance, for one to four parallel processing units. The scalability of solving the PDE responsible for tissue coupling was examined on a cluster using up to 128 cores. Results indicate that the GPU implementation was between 9 and 17 times faster than the CPU implementation and scaled similarly. Solving the PDE was still 160 times slower than real time. PMID:19964295
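The split that makes the GPU attractive can be shown in a few lines: the membrane (ODE) update is independent at every node, while the tissue-coupling (PDE) step applies a sparse operator. A toy monodomain step with FitzHugh-Nagumo kinetics, purely illustrative of the structure (the authors' ionic models are far more detailed):

import numpy as np
import scipy.sparse as sp

def step(v, w, lap, dt=0.05, d=1.0):
    """One operator-split time step of a toy monodomain model."""
    dv = v - v ** 3 / 3 - w                  # membrane ODEs: independent per
    dw = 0.08 * (v + 0.7 - 0.8 * w)          # node, hence GPU-friendly
    v, w = v + dt * dv, w + dt * dw
    v = v + dt * d * (lap @ v)               # PDE step: sparse tissue coupling
    return v, w

n, dx = 200, 0.5
lap = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / dx ** 2  # 1-D Laplacian
v, w = -1.2 * np.ones(n), np.zeros(n)
v[:10] = 1.0                                 # stimulate one end of the fiber
for _ in range(1000):
    v, w = step(v, w, lap)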
Real-time digital holographic microscopy using the graphic processing unit.
Shimobaba, Tomoyoshi; Sato, Yoshikuni; Miura, Junya; Takenouchi, Mai; Ito, Tomoyoshi
2008-08-04
Digital holographic microscopy (DHM) is a well-known, powerful method allowing both the amplitude and phase of a specimen to be observed simultaneously. In order to obtain a reconstructed image from a hologram, numerous calculations of the Fresnel diffraction are required. The Fresnel diffraction can be accelerated by the FFT (Fast Fourier Transform) algorithm. However, real-time reconstruction from a hologram is difficult even if a recent central processing unit (CPU) is used to calculate the Fresnel diffraction by the FFT algorithm. In this paper, we describe a real-time DHM system using a graphic processing unit (GPU) with many stream processors, which can be used as a highly parallel processor. The computational speed of the Fresnel diffraction on the GPU is faster than that of recent CPUs. The real-time DHM system can obtain reconstructed images from 512 × 512-pixel holograms at 24 frames per second.
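The reconstruction step being accelerated is, at its core, one chirp multiplication and one FFT. A minimal numpy sketch of single-FFT Fresnel reconstruction, assuming a square hologram and illustrative parameter names:

import numpy as np

def fresnel_reconstruct(hologram, wavelength, z, dx):
    """Single-FFT Fresnel reconstruction of a square hologram sampled at
    pitch dx, propagated a distance z. The quadratic phase factor outside
    the FFT is dropped because only the intensity is displayed."""
    n = hologram.shape[0]
    k = 2 * np.pi / wavelength
    x = (np.arange(n) - n / 2) * dx
    X, Y = np.meshgrid(x, x)
    chirp = np.exp(1j * k / (2 * z) * (X ** 2 + Y ** 2))
    field = np.fft.fftshift(np.fft.fft2(hologram * chirp))
    return np.abs(field) ** 2                # reconstructed intensity image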
A 16-year time series of 1 km AVHRR satellite data of the conterminous United States and Alaska
Eidenshink, Jeff
2006-01-01
The U.S. Geological Survey (USGS) has developed a 16-year time series of vegetation condition information for the conterminous United States and Alaska using 1 km Advanced Very High Resolution Radiometer (AVHRR) data. The AVHRR data have been processed using consistent methods that account for radiometric variability due to calibration uncertainty, the effects of the atmosphere on surface radiometric measurements obtained from wide field-of-view observations, and the geometric registration accuracy. The conterminous United States and Alaska data sets have an atmospheric correction for water vapor, ozone, and Rayleigh scattering and include a cloud mask derived using the Clouds from AVHRR (CLAVR) algorithm. In comparison with other AVHRR time series data sets, the conterminous United States and Alaska data are processed using similar techniques. The primary difference is that the conterminous United States and Alaska data are at 1 km resolution, while others are at 8 km resolution. The time series consists of weekly and biweekly maximum normalized difference vegetation index (NDVI) composites.
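The compositing rule itself is simple: per pixel, keep the observation from the day with the highest NDVI, which favors cloud-free, near-nadir views. A sketch over a (days, rows, cols) stack of red and near-infrared reflectances, assuming calibration, atmospheric correction, and cloud masking have already been applied:

import numpy as np

def max_ndvi_composite(red, nir):
    """Maximum-value NDVI compositing: NDVI = (NIR - red) / (NIR + red) per
    scene, then the per-pixel maximum over the compositing period (week or
    fortnight). Also returns the source day of each composited pixel."""
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    return ndvi.max(axis=0), ndvi.argmax(axis=0)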
Optimized Laplacian image sharpening algorithm based on graphic processing unit
NASA Astrophysics Data System (ADS)
Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah
2014-12-01
In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening performed on a CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphic Processing Units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the features of the different memory types, an improved scheme of our method is developed, which exploits shared memory in the GPU instead of global memory and further increases the efficiency. Experimental results prove that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
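The underlying per-pixel operation, shown below in vectorized CPU form, is what makes the problem so parallelizable: every output pixel depends only on its four neighbours, so a CUDA version can assign one thread per pixel. This is a generic sketch, not the paper's kernel:

import numpy as np

def laplacian_sharpen(img):
    """Classical Laplacian sharpening: subtract the 4-neighbour Laplacian
    from the image, i.e. convolve with [[0,-1,0],[-1,5,-1],[0,-1,0]]."""
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4 * p[1:-1, 1:-1])              # discrete Laplacian at each pixel
    return np.clip(img - lap, 0, 255)        # sharpened 8-bit image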
NASA Astrophysics Data System (ADS)
Memeti, V.; Paterson, S. R.
2006-12-01
Data gained using various geologic tools from large, composite batholiths, such as the 95-85 Ma old Tuolumne Batholith (TB), Sierra Nevada, CA, indicate complex batholithic processes at the chamber construction site, in part because they record different increments of batholith construction through time. Large structural and compositional complexity generally occurs throughout the main batholith, such as in (1) geochemistry, (2) internal contacts between different units (Bateman, 1992; Zak & Paterson, 2005), (3) batholith/host rock contacts, (4) geochronology (Coleman et al., 2004; Matzel et al., 2005, 2006), and (5) internal structures such as schlieren layering and fabrics (Bateman, 1992; Zak et al., 2006), leading to controversies regarding batholith construction models. By using magmatic lobes, tongues of individual batholithic units that extend into the host rock away from the main batholith, we avoid some of the complexity that evolved over longer times within the main batholith. Magmatic lobes are "simpler" systems, because they are spatially separated from other units of the batholith and thus ideally represent processes in just one unit at the time of emplacement. Furthermore, they are shorter lived than the main batholith, since they are surrounded by relatively cold host rock and "freeze in" (1) "snapshots" of batholith construction, and (2) relatively short-lived internal processes and the resulting structures and composition in each individual unit. Thus, data from lobes of all batholith units, representing different stages of a batholith's lifetime, help us to understand internal magmatic and external host rock processes during batholith construction. Based on field and analytic data from magmatic lobes of the Kuna Crest, Half Dome, and Cathedral Peak granodiorites, we conclude that (1) the significance of internal processes in the lobes (fractionation versus mixing versus source heterogeneity) is unique to each individual TB unit; (2) emplacement mechanisms such as stoping, downward flow or ductile deformation of host rock act over a very short period of time (only a few 100,000 yrs); and (3) a variety of different magmatic fabrics, formed by strain caused by magma flow, marginal effects, or regional stress, can be found in each lobe. These data lead to the conclusion that the size of the studied lobes indicates the minimum pulse size for TB construction and that fractional crystallization, even though slightly varying in magnitude, is an important internal process in each individual TB unit.
NASA Astrophysics Data System (ADS)
Jonny; Nasution, Januar
2013-06-01
Value stream mapping is a tool that let the business leaders of XYZ Hospital see what was actually happening in the business process that had caused longer lead times for self-produced medicines in its pharmacy unit, a problem that had triggered many complaints filed by patients. After deploying this tool, the team found that the pharmacy unit had no storage and capsule-packing tools for processing the medicine, a condition that caused much wasted time in the process. The team therefore proposed that the business leaders procure the required tools in order to shorten the process. This research reduced the lead time from 45 minutes to 30 minutes, as required by the government through the Indonesian health ministry, and increased the %VA (value-added activity), or Process Cycle Efficiency (PCE), from 66% to 68% (considered lean because it is above the required 30%). This result shows that the effectiveness of the process was increased by the improvement.
Milk Processing Plant Employee. Agricultural Cooperative Training. Vocational Agriculture.
ERIC Educational Resources Information Center
Blaschke, Nolan; Page, Foy
This course of study is designed for the vocational agricultural student enrolled in an agricultural cooperative part-time training program in the area of milk processing occupations. The course consists of 11 units, each with 4 to 13 individual topics that milk processing plant employees should know. Subjects covered by the units are the…
Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung
2012-10-08
Speed enhancement of integral-imaging-based incoherent Fourier hologram capture using a graphic processing unit is reported. The integral-imaging-based method enables exact hologram capture of real three-dimensional objects under regular incoherent illumination. In our implementation, we apply a parallel computation scheme using the graphic processing unit, accelerating the processing speed. Using the enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.
Fang, Wai-Chi; Huang, Kuan-Ju; Chou, Chia-Ching; Chang, Jui-Chung; Cauwenberghs, Gert; Jung, Tzyy-Ping
2014-01-01
This paper proposes an efficient very-large-scale integration (VLSI) design: a 16-channel on-line recursive independent component analysis (ORICA) processor ASIC for real-time EEG systems, implemented in TSMC 40 nm CMOS technology. ORICA is well suited to real-time EEG systems for separating artifacts because of its highly efficient, real-time processing. The proposed ORICA processor is composed of an ORICA processing unit and a singular value decomposition (SVD) processing unit. Compared with previous work [1], the proposed ORICA processor achieves greater effectiveness and reduced hardware complexity by utilizing a deeper pipeline architecture, a shared arithmetic processing unit, and shared registers. Sixteen channels of random signals containing 8 super-Gaussian and 8 sub-Gaussian components were used to analyze the dependence of the source components; the average correlation coefficient between the original source signals and the extracted ORICA signals is 0.95452. Finally, the proposed ORICA processor ASIC is implemented in TSMC 40 nm CMOS technology and consumes 15.72 mW at a 100 MHz operating frequency.
Moody, John A.
2017-01-01
A superslug was deposited in a basin in the Colorado Front Range Mountains as a consequence of an extreme flood following a wildfire disturbance in 1996. The subsequent evolution of this superslug was measured by repeat topographic surveys (31 surveys from 1996 through 2014) of 18 cross sections approximately uniformly spaced over 1500 m immediately above the basin outlet. These surveys allowed the identification within the superslug of chronostratigraphic units deposited and eroded by different geomorphic processes in response to different flow regimes. Over the time period of the study, the superslug went through aggradation, incision, and stabilization phases that were controlled by a shift in geomorphic processes from generally short-duration, episodic, large-magnitude floods that deposited new chronostratigraphic units to long-duration processes that eroded units. These phases were not contemporaneous at each channel cross section, which resulted in a complex response that preserved different chronostratigraphic units at each channel cross section having, in general, two dominant types of alluvial architecture: laminar and fragmented. Age and transit-time distributions for these two alluvial architectures evolved with time since the extreme flood. Because of the complex shape of the distributions they were best modeled by two-parameter Weibull functions. The Weibull scale parameter approximated the median age of the distributions, and the Weibull shape parameter generally had a linear relation that increased with time since the extreme flood. Additional results indicated that deposition of new chronostratigraphic units can be represented by a power-law frequency distribution, and that the erosion of units decreases with depth of burial to a limiting depth. These relations can be used to model other situations with different flow regimes where vertical aggradation and incision are dominant processes, to predict the residence time of possible contaminated sediment stored in channels or on floodplains, and to provide insight into the interpretation of recent or ancient fluvial deposits.
Internode data communications in a parallel computer
Archer, Charles J.; Blocksome, Michael A.; Miller, Douglas R.; Parker, Jeffrey J.; Ratterman, Joseph D.; Smith, Brian E.
2013-09-03
Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.
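A toy model of the claimed behavior, with illustrative names: the messaging unit pre-allocates buffers at boot, parks early messages, and hands them over once the target process registers its own buffer in main memory:

class MessagingUnit:
    """Sketch of the patent's messaging unit. At compute-node boot time it
    allocates one message buffer per expected process; messages that arrive
    before their target process initializes are parked, then copied into the
    process's main-memory buffer on initialization."""
    def __init__(self, expected_processes):
        self.buffers = {pid: [] for pid in expected_processes}  # boot-time allocation

    def receive(self, pid, message):
        self.buffers[pid].append(message)    # park for a not-yet-started process

    def drain_to(self, pid, main_memory_buffer):
        # called once the process establishes its own buffer in main memory
        main_memory_buffer.extend(self.buffers.pop(pid))

unit = MessagingUnit(expected_processes=[0, 1])
unit.receive(1, b"halo exchange")            # arrives before process 1 starts
proc1_buffer: list = []
unit.drain_to(1, proc1_buffer)               # process 1 copies its pending messages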
Process Improvement to Enhance Quality in a Large Volume Labor and Birth Unit.
Bell, Ashley M; Bohannon, Jessica; Porthouse, Lisa; Thompson, Heather; Vago, Tony
The goal of the perinatal team at Mercy Hospital St. Louis is to provide a quality patient experience during labor and birth. After the move to a new labor and birth unit in 2013, the team recognized that many routines and practices needed to be modified to meet different demands. The Lean process was used to plan and implement the required changes. This technique was chosen because it is based on feedback from clinicians, teamwork, strategizing, and immediate evaluation and implementation of common-sense solutions. Through rapid improvement events, the presence of leaders in the work environment, and daily huddles, team member engagement and communication were enhanced. The process allowed team members to offer ideas, test them, and evaluate results, all within a rapid time frame. For 9 months, frontline clinicians met monthly for a weeklong rapid improvement event to create better experiences for childbearing women and those who provide their care, using Lean concepts. At the end of each week, an implementation plan and metrics were developed to help ensure sustainment. The process improvements focused on on-time initiation of scheduled cases such as induction of labor and cesarean birth; timely and efficient assessment and triage disposition; completion of postanesthesia care and immediate newborn care within approximately 2 hours; transfer from the labor unit to the mother-baby unit; and emergency transfers to the main operating room and intensive care unit. On-time case initiation for labor induction and cesarean birth improved, length of stay in obstetric triage decreased, postanesthesia recovery care was reorganized to be completed within the expected 2-hour standard time frame, and emergency transfers to the main hospital operating room and intensive care units were standardized and made more efficient and safe. Participants were pleased with the process improvements and quality outcomes. Working together as a team using the Lean process, frontline clinicians identified areas that needed improvement, developed and implemented successful strategies that addressed each gap, and enhanced the quality and safety of care for a large-volume perinatal service.
NASA Astrophysics Data System (ADS)
Li, Jiqing; Huang, Jing; Li, Jianchang
2018-06-01
A time-varying design flood can make full use of the measured data and provide the reservoir with a basis for both flood control and operation scheduling. This paper adopts the peaks-over-threshold method for flood sampling in unit periods, and a Poisson process model with time-dependent parameters for simulating a reservoir's time-varying design flood. Considering the relationship between the model parameters and the underlying hypotheses, the over-threshold intensity, the goodness of fit of the Poisson distribution, and the design flood parameters are presented as the criteria for choosing the unit period and threshold of the time-varying design flood, and the time-varying design flood process of the Longyangxia reservoir is derived at 9 design frequencies. The time-varying design flood of the inflow is closer to the reservoir's actual inflow conditions, and can be used to adjust the operating water level in the flood season and to make plans for resource utilization of floods in the basin.
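The sampling step can be sketched as follows: scan the flow series for exceedances of the threshold and keep one peak per independent cluster. The declustering rule below (a minimum gap between events) is a common choice and an assumption here, since the paper's exact criterion is not given:

import numpy as np

def peaks_over_threshold(flow, threshold, min_gap):
    """Peaks-over-threshold sampling of a daily flow series: keep the local
    maximum of each exceedance cluster, treating exceedances separated by
    fewer than `min_gap` days as one flood event. Returns peak indices."""
    peaks, i = [], 0
    above = np.where(flow > threshold)[0]    # days exceeding the threshold
    while i < len(above):
        j = i
        while j + 1 < len(above) and above[j + 1] - above[j] < min_gap:
            j += 1                           # extend the current cluster
        cluster = slice(above[i], above[j] + 1)
        peaks.append(above[i] + int(np.argmax(flow[cluster])))
        i = j + 1
    return np.array(peaks)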
Towards marine seismological Network: real time small aperture seismic array
NASA Astrophysics Data System (ADS)
Ilinskiy, Dmitry
2017-04-01
The most powerful and dangerous seismic events are generated in underwater subduction zones. Existing seismological networks are based on land seismological stations. Increased demands on the accuracy of the location, magnitude, and rupture process of incoming earthquakes, together with the need to reduce data processing time, require information from seabed seismic stations located near the earthquake generation area. Marine stations provide an important contribution to clarifying the tectonic settings of the most active subduction zones of the world. An early warning system for a subduction zone area is based on a marine seabed array located near the most hazardous seismic zone in the region. Fast-track processing to locate the earthquake hypocenter and estimate its energy takes place in the buoy surface unit. Information about a detected and located earthquake reaches the onshore seismological center earlier than the first-break waves from the same earthquake reach the nearest onshore seismological station. Implementation of the small-aperture array builds on existing, proven, cost-effective solutions such as moored weather buoys and self-pop-up autonomous seabed seismic nodes. A permanent seabed system for real-time operation has to be installed in deep waters far from the coast. The seabed array consists of several self-pop-up seismological stations which continuously acquire data, detect events of a certain energy class, and send the detected event parameters to the surface buoy via an acoustic link. The surface buoy unit determines the earthquake location from the event parameters received from the seabed units and sends this information in semi-real time to the onshore seismological center via a narrow-band satellite link. Upon request from the coast, the system can send waveforms of events of a certain energy class, the battery status of the bottom seismic stations, and other environmental parameters. When the battery life of a particular seabed unit is nearly exhausted, the unit switches into sleep mode and sends that information to the surface buoy and onward to the onshore data center. The seabed unit can then wait for a vessel of opportunity to recover it to the sea surface and replace it with another unit with fresh batteries. All seismic data collected by the seabed unit can then be downloaded for further processing and analysis. In our presentation we will demonstrate several working prototypes of the proposed system, such as a real-time cabled broadband seismological station and a real-time buoy seabed seismological station.
12 CFR 404.5 - Time for processing.
Code of Federal Regulations, 2010 CFR
2010-01-01
Title 12 (Banks and Banking), EXPORT-IMPORT BANK OF THE UNITED STATES, INFORMATION DISCLOSURE, Procedures for Disclosure of Records Under the Freedom of Information Act, § 404.5 Time for processing. (a) General. Ex-Im Bank...
Watanabe, Yuuki; Maeno, Seiya; Aoshima, Kenji; Hasegawa, Haruyuki; Koseki, Hitoshi
2010-09-01
The real-time display of full-range, 2048 axial pixel × 1024 lateral pixel, Fourier-domain optical-coherence tomography (FD-OCT) images is demonstrated. The required speed was achieved by using dual graphics processing units (GPUs) with many stream processors to realize highly parallel processing. We used a zero-filling technique, including a forward Fourier transform, zero padding to increase the axial data-array size to 8192, an inverse Fourier transform back to the spectral domain, a linear interpolation from wavelength to wavenumber, a lateral Hilbert transform to obtain the complex spectrum, a Fourier transform to obtain the axial profiles, and log scaling. The data-transfer time of the frame grabber was 15.73 ms, and the processing time, which includes the data transfer between the GPU memory and the host computer, was 14.75 ms, for a total time shorter than the 36.70 ms frame-interval time using a line-scan CCD camera operated at 27.9 kHz. That is, our OCT system achieved a processed-image display rate of 27.23 frames/s.
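For reference, the per-A-line arithmetic of this chain (zero-filling interpolation, wavelength-to-wavenumber resampling, depth transform, log scaling) can be written compactly on the CPU; the lateral Hilbert transform across A-lines is omitted, and the names and sizes are illustrative:

import numpy as np

def fdoct_line(spectrum, lam, pad_to=8192):
    """One FD-OCT A-line: zero-filling interpolation (FFT, zero-pad, inverse
    FFT), resampling from wavelength to evenly spaced wavenumber (assuming
    the spectrometer is approximately linear in wavelength), Fourier
    transform to depth, and log scaling for display."""
    n = len(spectrum)
    f = np.fft.fft(spectrum)
    fz = np.zeros(pad_to, dtype=complex)
    fz[:n // 2], fz[-n // 2:] = f[:n // 2], f[-n // 2:]   # zero-fill the spectrum
    dense = np.real(np.fft.ifft(fz)) * (pad_to / n)
    lam_dense = np.linspace(lam[0], lam[-1], pad_to)
    k = 2 * np.pi / lam_dense                             # wavenumber axis (decreasing)
    k_even = np.linspace(k[-1], k[0], pad_to)
    resampled = np.interp(k_even, k[::-1], dense[::-1])   # linear resampling in k
    return 20 * np.log10(np.abs(np.fft.fft(resampled))[:pad_to // 2] + 1e-12)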
Computer Vision for Artificially Intelligent Robotic Systems
NASA Astrophysics Data System (ADS)
Ma, Chialo; Ma, Yung-Lung
1987-04-01
In this paper an Acoustic Imaging Recognition System (AIRS) is introduced, which is installed on an intelligent robotic system and can recognize different types of hand tools by dynamic pattern recognition. The dynamic pattern recognition is approached by a look-up-table method, which saves a great deal of calculation time and is practicable. The AIRS consists of four parts: a position control unit, a pulse-echo signal processing unit, a pattern recognition unit and a main control unit. The position control of AIRS can rotate through an angle of ±5 degrees horizontally and vertically; the purpose of the rotation is to find the area of maximum reflection intensity. From the distance, angles and intensity of the target we can decide the characteristics of the target; all of these decisions are processed by the main control unit. In the pulse-echo signal processing unit, we utilize the correlation method to overcome the limitation of short ultrasonic bursts, because a correlation system can transmit large time-bandwidth signals and obtain improved resolution and increased intensity through pulse compression in the correlation receiver. The output of the correlator is sampled and converted into digital data by μ-law coding, and these data, together with the delay time T and the angle information θH, θV, are sent to the main control unit for further analysis. For the recognition process we use a dynamic look-up-table method: first, several recognition pattern tables are set up, and then the new pattern scanned by the transducer array is divided into several stages and compared with the sampled tables. The comparison is implemented by dynamic programming and a Markovian process. All the hardware control signals, such as the optimum delay time for the correlator receiver and the horizontal and vertical rotation angles for the transducer plate, are controlled by the main control unit, which also handles the pattern recognition process. The distance from the target to the transducer plate is limited by the power and beam angle of the transducer elements; in this AIRS model, we use a narrow-beam transducer with an input voltage of 50 V peak-to-peak. A robot equipped with AIRS can not only measure the distance to the target but also recognize a three-dimensional image of the target from the image lab of the robot's memory. Index terms: acoustic system, ultrasonic transducer, dynamic programming, look-up table, image processing, pattern recognition, quad tree.
NASA Astrophysics Data System (ADS)
Yi, Gong; Jilin, Cheng; Lihua, Zhang; Rentian, Zhang
2010-06-01
According to the different processes of tides and of peak-valley electricity prices, this paper determines the optimal start-up time for a pumping station's 24-hour operation in the rated state and in the blade-angle-adjusting state, based on the optimization objective function and optimization model for a single pump unit's 24-hour operation, taking JiangDu No. 4 Pumping Station as an example. The paper also identifies the following regularities between the optimal start-up time of the pumping station and the daily processes of tides and peak-valley electricity prices within a month: (1) In both the rated and blade-angle-adjusting states, the optimal start-up time for 24-hour operation depends on the day's tide generation and varies with the process of tides. There are mainly two kinds of optimal start-up time: the time of tide generation and 12 hours after it. (2) In the rated state, the optimal start-up time on each day of a month exhibits a rule of symmetry from the 29th of one month to the 28th of the next month in the lunar calendar. The time of tide generation usually falls in the period of peak or valley electricity price. A higher electricity price corresponds to a higher minimum cost of water pumping per unit, which means that the minimum cost of water pumping per unit depends on the peak-valley electricity price at the time of tide generation on the same day. (3) In the blade-angle-adjusting state, the minimum cost of water pumping per unit in 24-hour operation depends on the process of peak-valley electricity prices, and 4.85%-5.37% of the minimum cost of water pumping per unit is saved relative to the rated state.
Wilson, J Adam; Williams, Justin C
2009-01-01
The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
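The two GPU-offloaded stages have simple reference forms, sketched below: the spatial filter is a single matrix-matrix product, and the per-channel power spectrum follows from fitted autoregressive coefficients. The least-squares AR fit here is a stand-in for the paper's estimator, and all names are illustrative:

import numpy as np   # with cupy, the matrix product runs on the GPU unchanged

def spatial_filter(data, weights):
    """Spatial filtering as a matrix-matrix product:
    (channels_out x channels_in) @ (channels_in x samples)."""
    return weights @ data

def ar_psd(x, order=16, nfft=256):
    """Autoregressive power spectral density of one channel: fit AR
    coefficients by least squares, then evaluate
    PSD(w) = noise_var / |1 - sum_k a_k e^{-i w k}|^2."""
    x = np.asarray(x, dtype=float)
    X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    noise = np.var(x[order:] - X @ a)
    w = np.exp(-2j * np.pi * np.arange(nfft // 2) / nfft)
    denom = 1 - sum(a[k] * w ** (k + 1) for k in range(order))
    return noise / np.abs(denom) ** 2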
Pipeline active filter utilizing a booth type multiplier
NASA Technical Reports Server (NTRS)
Nathan, Robert (Inventor)
1987-01-01
Multiplier units consisting of a modified Booth decoder and a carry-save adder/full adder combination are used to implement a pipeline active filter wherein pixel data are processed sequentially, and each pixel need only be accessed once and multiplied by a predetermined number of weights simultaneously, one multiplier unit for each weight. Each multiplier unit uses only one row of carry-save adders; the results are shifted to less significant multiplier positions, and one row of full adders adds the carry to the sum in order to provide the correct binary number for the product Wp. The full adder is also used to add this product Wp to the sum of products ΣWp from the preceding multiply units. If m × m multiplier units are pipelined, the system is capable of processing a kernel array of m × m weighting factors.
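The recoding the decoder performs can be sketched in software: radix-4 modified Booth recoding turns the multiplier into signed digits in {-2, -1, 0, 1, 2}, halving the number of partial products to accumulate. An illustrative Python model, not the hardware design:

def booth_multiply(w, p, bits=16):
    """Radix-4 modified-Booth multiplication: recode w (a two's-complement
    value fitting in `bits` bits) via overlapping 3-bit windows, so only
    bits/2 shifted partial products are accumulated."""
    table = {0b000: 0, 0b001: 1, 0b010: 1, 0b011: 2,
             0b100: -2, 0b101: -1, 0b110: -1, 0b111: 0}
    extended = (w & ((1 << bits) - 1)) << 1   # append the implicit 0 bit
    acc = 0
    for i in range(0, bits, 2):
        window = (extended >> i) & 0b111      # bits i+1, i, i-1 of w
        acc += (table[window] * p) << i       # one partial product per digit
    return acc

assert booth_multiply(-7, 13) == -91
assert booth_multiply(25, 31) == 775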
Sandberg, D A; Lynn, S J; Matorin, A I
2001-07-01
To assess the impact of dissociation on information processing, 66 college women with high and low levels of trait dissociation were studied with regard to how they unitized videotape segments of an acquaintance rape scenario (actual assault not shown) and a nonthreatening control scenario. Unitization is a paradigm that measures how actively people process stimuli by recording how many times they press a button to indicate that they have seen a significant or meaningful event. Trait dissociation was negatively correlated with participants' unitization of the acquaintance rape videotape, unitization was positively correlated with danger cue identification, and state dissociation was negatively correlated with dangerousness ratings.
Evaluating MC&A effectiveness to verify the presence of nuclear materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, P. G.; Morzinski, J. A.; Ostenak, Carl A.
Traditional materials accounting is focused exclusively on the material balance area (MBA), and involves periodically closing a material balance based on accountability measurements conducted during a physical inventory. In contrast, the physical inventory for Los Alamos National Laboratory's near-real-time accounting system is established around processes and looks more like an item inventory. That is, the intent is not to measure material for accounting purposes, since materials have already been measured in the normal course of daily operations. A given unit process operates many times over the course of a material balance period. The product of a given unit process may move for processing within another unit process in the same MBA or may be transferred out of the MBA. Since few materials are unmeasured, the physical inventory for a near-real-time process area looks more like an item inventory. Thus, the intent of the physical inventory is to locate the materials on the books and verify information about the materials contained in the books. Closing a materials balance for such an area is a matter of summing all the individual mass balances for the batches processed by all unit processes in the MBA. Additionally, performance parameters are established to measure the program's effectiveness. Program effectiveness for verifying the presence of nuclear material is required to be equal to or greater than a prescribed performance level, process measurements must be within established precision and accuracy values, physical inventory results must meet or exceed performance requirements, and inventory differences must be less than a target/goal quantity. This approach exceeds DOE-established accounting and physical inventory program requirements. Hence, LANL is committed to this approach and to seeking opportunities for further improvement through integrated technologies. This paper will provide a detailed description of this evaluation process.
Real-time liquid-crystal atmosphere turbulence simulator with graphic processing unit.
Hu, Lifa; Xuan, Li; Li, Dayu; Cao, Zhaoliang; Mu, Quanquan; Liu, Yonggang; Peng, Zenghui; Lu, Xinghai
2009-04-27
To generate time-evolving atmosphere turbulence in real time, a phase-generating method for our liquid-crystal (LC) atmosphere turbulence simulator (ATS) is derived based on the Fourier series (FS) method. A real matrix expression for generating turbulence phases is given and calculated with a graphic processing unit (GPU), the GeForce 8800 Ultra. A liquid crystal on silicon (LCOS) with 256x256 pixels is used as the turbulence simulator. The total time to generate a turbulence phase is about 7.8 ms for calculation and readout with the GPU. A parallel processing method of calculating and sending a picture to the LCOS is used to improve the simulating speed of our LC ATS. Therefore, the real-time turbulence phase-generation frequency of our LC ATS is up to 128 Hz. To our knowledge, it is the highest speed used to generate a turbulence phase in real time.
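The abstract does not reproduce the authors' Fourier-series phase formula; the sketch below shows the generic FFT route to a Kolmogorov-like phase screen (all parameter values and the overall normalization are illustrative assumptions), the kind of embarrassingly parallel computation that maps well to a GPU.

    import numpy as np

    def kolmogorov_screen(n=256, r0=0.1, dx=0.01, seed=0):
        """One random phase screen with a Kolmogorov-like spectrum.
        n: grid size; r0: Fried parameter [m]; dx: pixel pitch [m].
        Standard FFT method, shown only as an illustration; the absolute
        normalization is omitted."""
        rng = np.random.default_rng(seed)
        fx = np.fft.fftfreq(n, d=dx)               # spatial frequencies [1/m]
        f = np.hypot(*np.meshgrid(fx, fx))
        f[0, 0] = 1.0 / (n * dx)                   # avoid the f = 0 singularity
        psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)   # Kolmogorov PSD
        cn = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        return np.fft.ifft2(cn * np.sqrt(psd)).real

    phase = kolmogorov_screen()
    print(phase.shape, float(phase.std()))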
Method of up-front load balancing for local memory parallel processors
NASA Technical Reports Server (NTRS)
Baffes, Paul Thomas (Inventor)
1990-01-01
In a parallel processing computer system with multiple processing units and shared memory, a method is disclosed for uniformly balancing the aggregate computational load in, and utilizing minimal memory by, a network having identical computations to be executed at each connection therein. Read-only and read-write memory are subdivided into a plurality of process sets, which function like artificial processing units. Said plurality of process sets is iteratively merged and reduced to the number of processing units without exceeding the balance load. Said merger is based upon the value of a partition threshold, which is a measure of the memory utilization. The turnaround time and memory savings of the instant method are functions of the number of processing units available and the number of partitions into which the memory is subdivided. Typical results of the preferred embodiment yielded memory savings of from sixty to seventy five percent.
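The patent's threshold-driven merging is not spelled out in the abstract; the following greedy longest-processing-time sketch (an assumption, not the claimed method) conveys how artificial process sets can be iteratively merged down to the number of physical processing units while keeping the load balanced.

    import heapq

    def merge_process_sets(loads, num_units):
        """Greedily merge 'loads' (per-process-set costs) onto num_units
        processors, always placing the largest remaining set on the
        currently least-loaded unit (LPT heuristic)."""
        units = [(0.0, i, []) for i in range(num_units)]
        heapq.heapify(units)
        for load in sorted(loads, reverse=True):
            total, i, members = heapq.heappop(units)
            members.append(load)
            heapq.heappush(units, (total + load, i, members))
        return units

    for total, i, members in sorted(merge_process_sets([7, 3, 3, 2, 2, 2, 1], 3)):
        print(f"unit {i}: load={total} sets={members}")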
Sivakumar, Venkatasubramanian; Swaminathan, Gopalaraman; Rao, Paruchuri Gangadhar; Ramasami, Thirumalachari
2009-01-01
Ultrasound is sound at frequencies above the human audible range of 16 Hz to 16 kHz. In recent years, numerous unit operations involving physical as well as chemical processes have been reported to be enhanced by ultrasonic irradiation, with benefits such as improved process efficiency, reduced process time, operation under milder conditions, and avoidance of some toxic chemicals for cleaner processing. Ultrasonic irradiation could thus serve as an advanced means of process intensification; the important point is that it is a physical method of activation rather than one relying on chemical entities. Detailed studies have been made of leather-related unit operations such as diffusion-rate enhancement through the porous leather matrix, cleaning, degreasing, tanning, dyeing, fatliquoring, oil-water emulsification, and solid-liquid tannin extraction from vegetable tanning materials, as well as of precipitation reactions in wastewater treatment. The fundamental mechanism involved in these processes is ultrasonic cavitation in liquid media, supplemented by process-specific mechanisms. For instance, possible real-time reversible pore-size changes during ultrasound propagation through the skin/leather matrix could be a reason for the diffusion-rate enhancement in leather processing, as reported for the first time. Extensive scientific work has been carried out in this area by our group in the Chemical Engineering Division of CLRI, and most of these benefits have been demonstrated in publications in peer-reviewed international journals. The overall results indicate about a 2-5-fold increase in process efficiency due to ultrasound under the given process conditions for various unit operations, with additional benefits. Scale-up studies are underway to convert these concepts into viable larger-scale operation. In the present paper, a summary of our research findings from employing this technique in various unit operations such as cleaning, diffusion, emulsification, particle-size reduction, solid-liquid leaching (tannin and natural dye extraction), and precipitation is presented.
37 CFR 102.6 - Time limits and expedited processing.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 37 Patents, Trademarks, and Copyrights 1 2014-07-01 2014-07-01 false Time limits and expedited processing. 102.6 Section 102.6 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE... properly process the particular request: (i) The need to search for and collect the requested records from...
37 CFR 102.6 - Time limits and expedited processing.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 37 Patents, Trademarks, and Copyrights 1 2011-07-01 2011-07-01 false Time limits and expedited processing. 102.6 Section 102.6 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE... properly process the particular request: (i) The need to search for and collect the requested records from...
37 CFR 102.6 - Time limits and expedited processing.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 37 Patents, Trademarks, and Copyrights 1 2012-07-01 2012-07-01 false Time limits and expedited processing. 102.6 Section 102.6 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE... properly process the particular request: (i) The need to search for and collect the requested records from...
37 CFR 102.6 - Time limits and expedited processing.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 37 Patents, Trademarks, and Copyrights 1 2013-07-01 2013-07-01 false Time limits and expedited processing. 102.6 Section 102.6 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE... properly process the particular request: (i) The need to search for and collect the requested records from...
Rath, N; Kato, S; Levesque, J P; Mauel, M E; Navratil, G A; Peng, Q
2014-04-01
Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.
Pelletier, Mathew G
2008-02-08
One of the main hurdles standing in the way of optimal cleaning of cotton lint is the lack of sensing systems that can react fast enough to provide the control system with real-time information as to the level of trash contamination of the cotton lint. This research examines the use of programmable graphics processing units (GPUs) as an alternative to the PC's traditional use of the central processing unit (CPU). The use of the GPU as an alternative computation platform allowed the machine vision system to gain a significant improvement in processing time. By improving the processing time, this research seeks to address the lack of availability of rapid trash sensing systems and thus alleviate a situation in which the current systems view the cotton lint either well before, or after, the cotton is cleaned. This extended lag/lead time that is currently imposed on the cotton trash cleaning control systems is what is responsible for system operators utilizing a very large dead-band safety buffer in order to ensure that the cotton lint is not under-cleaned. Unfortunately, the utilization of a large dead-band buffer results in the majority of the cotton lint being over-cleaned, which in turn causes lint fiber damage as well as significant losses of the valuable lint due to the excessive use of cleaning machinery. This research estimates that upwards of a 30% reduction in lint loss could be gained through the use of a trash sensor tightly coupled to the cleaning machinery control systems. This research seeks to improve processing times through the development of a new algorithm for cotton trash sensing that allows for implementation on a highly parallel architecture. Additionally, by moving the new parallel algorithm onto an alternative computing platform, the graphics processing unit (GPU), for processing of the cotton trash images, a speedup of over 6.5 times over optimized code running on the PC's central processing unit (CPU) was gained. The new parallel algorithm operating on the GPU was able to process a 1024x1024 image in less than 17 ms. At this improved speed, the image processing system's performance should now be sufficient to provide a system capable of real-time feedback control in tight cooperation with the cleaning equipment.
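As a hypothetical illustration of the kind of element-wise image operation that gains from GPU parallelism in such a trash-sensing system (the paper's actual algorithm is not given in the abstract), a NumPy reference might look like:

    import numpy as np

    def trash_fraction(image, threshold=0.35):
        """Estimate trash contamination as the fraction of pixels darker
        than a threshold. image: 2D float array in [0, 1], where lint is
        bright and trash particles are dark. The work is purely
        element-wise, which is why it parallelizes trivially on a GPU."""
        mask = image < threshold
        return mask.mean(), mask

    rng = np.random.default_rng(1)
    lint = rng.uniform(0.7, 1.0, (1024, 1024))   # bright lint background
    lint[100:140, 200:260] = 0.1                 # a dark trash particle
    fraction, mask = trash_fraction(lint)
    print(f"trash fraction: {fraction:.4%}")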
Bar-Kochva, Irit; Hasselhorn, Marcus
2015-12-01
The attainment of fluency in reading is a major difficulty for reading-disabled people. Manipulations applied on the presentation of texts, leading to "on-line" effects on reading (i.e., while texts are manipulated), are one direction of examinations in search of methods affecting reading. The imposing of time constraints, by deleting one letter after the other from texts presented on a computer screen, has been established as such a method. In an attempt to further understand its nature, we tested the relations between time constraints and processes of reading: phonological decoding of small orthographic units and the addressing of orthographic representations from the mental lexicon. We also examined whether the type of orthographic unit deleted (lexical, sublexical, or nonlexical unit) has any additional effect. Participants were German fifth graders with (n = 29) or without (n = 34) reading disability. Time constraints enhanced fluency in reading in both groups, and to a similar extent, across conditions. Comprehension was unimpaired. These results place the very principle of time constraints, regardless of the orthographic unit manipulated, as a critical factor affecting fluency in reading. However, phonological decoding explained a significant amount of variance in fluency in reading across all conditions in reading-disabled children, whereas the addressing of orthographic representations was the consistent predictor of fluency in reading in regular readers. These results indicate a qualitative difference in the processes explaining the variance in fluency in reading in regular and reading-disabled readers and suggest that time constraints might not have an effect on the relations between these processes and reading performance. Copyright © 2015 Elsevier Inc. All rights reserved.
Estimating Missing Unit Process Data in Life Cycle Assessment Using a Similarity-Based Approach.
Hou, Ping; Cai, Jiarui; Qu, Shen; Xu, Ming
2018-05-01
In life cycle assessment (LCA), collecting unit process data from the empirical sources (i.e., meter readings, operation logs/journals) is often costly and time-consuming. We propose a new computational approach to estimate missing unit process data solely relying on limited known data based on a similarity-based link prediction method. The intuition is that similar processes in a unit process network tend to have similar material/energy inputs and waste/emission outputs. We use the ecoinvent 3.1 unit process data sets to test our method in four steps: (1) dividing the data sets into a training set and a test set; (2) randomly removing certain numbers of data in the test set indicated as missing; (3) using similarity-weighted means of various numbers of most similar processes in the training set to estimate the missing data in the test set; and (4) comparing estimated data with the original values to determine the performance of the estimation. The results show that missing data can be accurately estimated when less than 5% data are missing in one process. The estimation performance decreases as the percentage of missing data increases. This study provides a new approach to compile unit process data and demonstrates a promising potential of using computational approaches for LCA data compilation.
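A minimal sketch of the similarity-weighted estimation described above, assuming cosine similarity over the known exchanges and NaN markers for missing data (the paper's exact similarity index is not specified in the abstract):

    import numpy as np

    def impute(target, training, k=5):
        """Estimate missing entries (NaNs) in 'target' (1D vector of a unit
        process's exchanges) as the similarity-weighted mean of the k most
        similar training processes. training: 2D array, rows = complete
        unit processes with the same exchange columns."""
        known = ~np.isnan(target)
        t = target[known]
        tr = training[:, known]
        # cosine similarity between the target's known part and each row
        sims = tr @ t / (np.linalg.norm(tr, axis=1) * np.linalg.norm(t) + 1e-12)
        top = np.argsort(sims)[-k:]
        w = np.clip(sims[top], 0, None)
        est = target.copy()
        est[~known] = (w @ training[top][:, ~known]) / (w.sum() + 1e-12)
        return est

    train = np.abs(np.random.default_rng(0).normal(size=(50, 8)))
    probe = train[0].copy()
    probe[3] = np.nan                  # pretend one flow is missing
    print(impute(probe, train[1:])[3], "vs true", train[0, 3])

The sketch matches the intuition stated above: the fewer the missing entries, the more of the vector is available for the similarity computation, and the better the weighted-mean estimate.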
Scalable software architecture for on-line multi-camera video processing
NASA Astrophysics Data System (ADS)
Camplani, Massimo; Salgado, Luis
2011-03-01
In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability, and flexibility. The software system is modular and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. As a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as the number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and works with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
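A toy sketch of the Processing Unit / Central Unit split (hypothetical class names, not the authors' code), with each PU running its acquisition and processing phases on its own thread and the Central Unit supervising:

    import queue
    import threading
    import time

    class ProcessingUnit(threading.Thread):
        """One PU: acquires frames from its camera and processes them.
        Acquisition and processing are stubbed with sleeps; the point is
        the thread-per-PU structure, not the vision code."""
        def __init__(self, cam_id, results):
            super().__init__(daemon=True)
            self.cam_id = cam_id
            self.results = results
            self.stopped = threading.Event()

        def run(self):
            while not self.stopped.is_set():
                t0 = time.time()          # acquisition phase (stubbed)
                time.sleep(0.01)          # processing phase (stubbed)
                self.results.put((self.cam_id, time.time() - t0))

    class CentralUnit:
        """Supervisor: starts the PUs, then collects their results."""
        def __init__(self, num_cams):
            self.results = queue.Queue()
            self.pus = [ProcessingUnit(i, self.results) for i in range(num_cams)]

        def run_for(self, seconds):
            for pu in self.pus:
                pu.start()
            time.sleep(seconds)
            for pu in self.pus:
                pu.stopped.set()
            for pu in self.pus:
                pu.join()
            while not self.results.empty():
                cam, latency = self.results.get()
                print(f"camera {cam}: {latency * 1e3:.1f} ms")

    CentralUnit(num_cams=3).run_for(0.1)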
The Research and Test of Fast Radio Burst Real-time Search Algorithm Based on GPU Acceleration
NASA Astrophysics Data System (ADS)
Wang, J.; Chen, M. Z.; Pei, X.; Wang, Z. Q.
2017-03-01
In order to satisfy the research needs of the Nanshan 25 m radio telescope of Xinjiang Astronomical Observatory (XAO) and to study key technology for the planned QiTai radio Telescope (QTT), the receiver group of XAO developed a GPU (Graphics Processing Unit) based real-time FRB search algorithm from the original CPU (Central Processing Unit) based FRB search algorithm, and built an FRB real-time search system. Comparison of the GPU and CPU systems shows that, while maintaining the accuracy of the search, the GPU-accelerated algorithm is 35-45 times faster than the CPU algorithm.
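The heart of such a search is incoherent dedispersion, a per-channel shift-and-sum that parallelizes naturally on a GPU. A CPU reference sketch with illustrative parameters (not XAO's pipeline):

    import numpy as np

    def dedisperse(data, freqs_mhz, dt_s, dm):
        """Shift each frequency channel by the cold-plasma dispersion delay
        and sum. data: (n_chan, n_time) filterbank block; freqs_mhz: channel
        centre frequencies; dt_s: sample time; dm: trial dispersion measure
        in pc cm^-3."""
        f_ref = freqs_mhz.max()
        # delay [s] = 4.149e3 * DM * (f^-2 - f_ref^-2), with f in MHz
        delays = 4.149e3 * dm * (freqs_mhz ** -2.0 - f_ref ** -2.0)
        shifts = np.round(delays / dt_s).astype(int)
        out = np.zeros(data.shape[1])
        for chan, s in enumerate(shifts):
            out += np.roll(data[chan], -s)   # undo the delay, then sum
        return out

    n_chan, n_time = 64, 4096
    freqs = np.linspace(1400.0, 1100.0, n_chan)      # MHz, descending
    series = dedisperse(np.random.rand(n_chan, n_time), freqs, 1e-4, dm=300.0)
    print(series.argmax())

On a GPU, each (trial DM, output sample) pair becomes an independent thread, which is where the reported 35-45x speedup comes from.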
Liu, Jiming; Tao, Li; Xiao, Bo
2011-01-01
Prior research shows that clinical demand and supplier capacity significantly affect the throughput and the wait time within an isolated unit. However, it remains unclear whether the characteristics (i.e., demand, capacity, throughput, and wait time) of one unit affect the wait time of subsequent units along the patient-flow process. Focusing on cardiac care, this paper aims to examine the impact of characteristics of the catheterization unit (CU) on the wait time of the cardiac surgery unit (SU). This study integrates published data from several sources on characteristics of the CU and SU units in 11 hospitals in Ontario, Canada between 2005 and 2008. It proposes a two-layer wait time model (with each layer representing one unit) to examine the impact of the CU's characteristics on the wait time of the SU and tests the hypotheses using the Partial Least Squares-based Structural Equation Modeling analysis tool. Results show that: (i) wait time of CU has a direct positive impact on wait time of SU (β = 0.330, p < 0.01); (ii) capacity of CU has a direct positive impact on demand of SU (β = 0.644, p < 0.01); (iii) within each unit, there exist significant relationships among different characteristics (except for the effect of throughput on wait time in SU). Characteristics of the CU have direct and indirect impacts on the wait time of the SU. Specifically, demand and wait time of a preceding unit are good predictors of the wait time of subsequent units, which suggests that considering such cross-unit effects is necessary when alleviating wait time in a health care system. Further, different patient risk profiles may affect wait time in different ways (e.g., positive or negative effects) within the SU. This implies that wait time management should carefully consider the relationship between priority triage and risk stratification, especially for cardiac surgery.
Sibia, Udai S; Grover, Jennifer; Turcotte, Justin J; Seanger, Michelle L; England, Kimberly A; King, Jennifer L; King, Paul J
2018-04-01
We describe a process for studying and improving baseline postanesthesia care unit (PACU)-to-floor transfer times after total joint replacements. Quality improvement project using lean methodology. Phase I of the investigational process involved collection of baseline data. Phase II involved developing targeted solutions to improve throughput. Phase III involved measured project sustainability. Phase I investigations revealed that patients spent an additional 62 minutes waiting in the PACU after being designated ready for transfer. Five to 16 telephone calls were needed between the PACU and the unit to facilitate each patient transfer. The most common reason for delay was unavailability of the unit nurse who was attending to another patient (58%). Phase II interventions resulted in transfer times decreasing to 13 minutes (79% reduction, P < .001). Phase III recorded sustained transfer times at 30 minutes, a net 52% reduction (P < .001) from baseline. Lean methodology resulted in the immediate decrease of PACU-to-floor transfer times by 79%, with a 52% sustained improvement. Our methods can also be used to improve efficiencies of care at other institutions. Copyright © 2016 American Society of PeriAnesthesia Nurses. Published by Elsevier Inc. All rights reserved.
Improve the Efficiency of the Service Process as a Result of the Muda Ideology
NASA Astrophysics Data System (ADS)
Lorenc, Augustyn; Przyłuski, Krzysztof
2018-06-01
The aim of the paper was to improve service processes carried out by Knorr-Bremse Systemy Kolejowe Polska sp. z o.o., with particular emphasis on unnecessary movements and physical effort of employees. The indirect goal was to find a solution in the simplest possible way using the Muda ideology. To improve the service process, process mapping was first carried out for the devices to be repaired, i.e., brake callipers, electro-hydraulic units, and auxiliary release units. The processes were assessed and presented in a Pareto-Lorenz analysis in order to determine the most time-consuming process. Based on the obtained results, the use of a column crane with an articulated arm was proposed to facilitate the transfer of heavy components between areas. The final step was to assess the effectiveness of the proposed solution in terms of time saving. From the company's perspective, the results of the analysis are important: the proposed solution not only reduces total service time but also contributes to the crew's work comfort.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loughry, Thomas A.
As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must likewise scale. This competency development effort explored using the General Purpose Graphics Processing Unit (GPGPU) to accomplish high-rate Rice decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU. Distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication data signal processing, with three times or better performance improvements and up to ten times reduction in cost over custom hardware, at least in the case of Rice decompression and Reed-Solomon decoding.
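CCSDS Rice compression adds adaptive per-block option selection; as a much-simplified illustration of the Golomb-Rice codeword decoding at its core (not the full flight-standard algorithm), consider:

    def rice_decode(bits, k, count):
        """Decode 'count' values from an iterable of bits with a fixed
        Golomb-Rice parameter k: a unary quotient terminated by a 1 bit,
        followed by a k-bit binary remainder."""
        it = iter(bits)
        out = []
        for _ in range(count):
            q = 0
            while next(it) == 0:       # unary part: count zeros before the 1
                q += 1
            r = 0
            for _ in range(k):         # k-bit binary remainder, MSB first
                r = (r << 1) | next(it)
            out.append((q << k) | r)
        return out

    def rice_encode(values, k):
        bits = []
        for v in values:
            bits += [0] * (v >> k) + [1]
            bits += [(v >> i) & 1 for i in reversed(range(k))]
        return bits

    vals = [0, 3, 7, 12, 2]
    assert rice_decode(rice_encode(vals, k=2), k=2, count=len(vals)) == vals
    print("round-trip OK")

The unary scan is sequential per codeword, which is why GPU implementations decode many blocks in parallel rather than parallelizing within a codeword.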
COLA: Optimizing Stream Processing Applications via Graph Partitioning
NASA Astrophysics Data System (ADS)
Khandekar, Rohit; Hildrum, Kirsten; Parekh, Sujay; Rajan, Deepak; Wolf, Joel; Wu, Kun-Lung; Andrade, Henrique; Gedik, Buğra
In this paper, we describe an optimization scheme for fusing compile-time operators into reasonably-sized run-time software units called processing elements (PEs). Such PEs are the basic deployable units in System S, a highly scalable distributed stream processing middleware system. Finding a high quality fusion significantly benefits the performance of streaming jobs. In order to maximize throughput, our solution approach attempts to minimize the processing cost associated with inter-PE stream traffic while simultaneously balancing load across the processing hosts. Our algorithm computes a hierarchical partitioning of the operator graph based on a minimum-ratio cut subroutine. We also incorporate several fusion constraints in order to support real-world System S jobs. We experimentally compare our algorithm with several other reasonable alternative schemes, highlighting the effectiveness of our approach.
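COLA's hierarchical minimum-ratio-cut partitioning is beyond a few lines; the toy greedy alternative below (labeled as such, not the paper's algorithm) fuses operators joined by the heaviest streams while respecting a per-PE load cap, which captures the throughput-versus-balance trade-off being optimized:

    def fuse(ops, edges, max_pe_load):
        """ops: {op: cpu_cost}; edges: {(a, b): stream_rate}. Greedily fuse
        operators joined by the heaviest streams into the same PE as long
        as the fused PE stays under max_pe_load. Returns op -> PE id."""
        pe = {op: i for i, op in enumerate(ops)}           # one PE per op
        load = {i: ops[op] for op, i in pe.items()}
        for (a, b), rate in sorted(edges.items(), key=lambda e: -e[1]):
            pa, pb = pe[a], pe[b]
            if pa != pb and load[pa] + load[pb] <= max_pe_load:
                for op, p in pe.items():                   # merge pb into pa
                    if p == pb:
                        pe[op] = pa
                load[pa] += load.pop(pb)
        return pe

    ops = {"src": 1.0, "parse": 2.0, "join": 3.0, "sink": 0.5}
    edges = {("src", "parse"): 10.0, ("parse", "join"): 8.0, ("join", "sink"): 1.0}
    print(fuse(ops, edges, max_pe_load=4.0))

Fusing the src-parse edge removes the costliest inter-PE stream; the load cap is what keeps the greedy merge from collapsing everything into one overloaded PE.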
Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi
2014-02-01
We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. This motion correction was performed by minimization of the sum of the pixel values using axial- and lateral-pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than that of a line scan camera (46.9 kHz). Our OCT system provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
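A minimal NumPy sketch of this squared-difference contrast, using a brute-force search over integer axial/lateral shifts; the sum-of-squared-differences alignment criterion here is an assumption for illustration, standing in for the authors' sum-of-pixel-values minimization:

    import numpy as np

    def angio_frame(f1, f2, max_shift=3):
        """Intensity-based angiography from two sequential B-scans. Find
        the (axial, lateral) integer shift of f2 that best aligns it with
        f1 (bulk-tissue-motion correction), then return the squared
        difference of the aligned frames."""
        best, best_err = (0, 0), np.inf
        for dz in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                err = np.sum((f1 - np.roll(f2, (dz, dx), axis=(0, 1))) ** 2)
                if err < best_err:
                    best, best_err = (dz, dx), err
        aligned = np.roll(f2, best, axis=(0, 1))
        return (f1 - aligned) ** 2, best

    rng = np.random.default_rng(0)
    frame1 = rng.random((128, 128))
    frame2 = np.roll(frame1, (2, -1), axis=(0, 1))   # pure bulk motion
    contrast, shift = angio_frame(frame1, frame2)
    print("estimated bulk shift:", shift, "residual:", contrast.sum())

The 25 candidate angiographic images mentioned above correspond to exactly this kind of shift grid; each candidate is independent, which is what makes the search GPU-friendly.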
Adaptive real-time methodology for optimizing energy-efficient computing
Hsu, Chung-Hsing [Los Alamos, NM; Feng, Wu-Chun [Blacksburg, VA
2011-06-28
Dynamic voltage and frequency scaling (DVFS) is an effective way to reduce energy and power consumption in microprocessor units. Current implementations of DVFS suffer from inaccurate modeling of power requirements and usage, and from inaccurate characterization of the relationships between the applicable variables. A system and method is proposed that adjusts CPU frequency and voltage based on run-time calculations of the workload processing time, as well as a calculation of performance sensitivity with respect to CPU frequency. The system and method are processor independent, and can be applied to either an entire system as a unit, or individually to each process running on a system.
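A schematic version of such a controller (hypothetical numbers; production DVFS acts through platform drivers such as cpufreq) that scales its adjustment by the measured frequency sensitivity:

    def next_frequency(f_now, work_time, deadline, sensitivity,
                       f_min=0.8e9, f_max=3.0e9):
        """Pick the next CPU frequency from the measured workload processing
        time. 'sensitivity' in [0, 1] says how strongly run time scales with
        frequency (1 = CPU-bound; 0 = memory/IO-bound, insensitive)."""
        slack = deadline / work_time        # > 1 means we finished early
        # Only the frequency-sensitive fraction of the run time responds to
        # scaling, so damp the adjustment by the sensitivity estimate.
        f_new = f_now * (1.0 + sensitivity * (1.0 / slack - 1.0))
        return min(max(f_new, f_min), f_max)

    f = 2.0e9
    for work, deadline in [(8e-3, 10e-3), (12e-3, 10e-3), (9e-3, 10e-3)]:
        f = next_frequency(f, work, deadline, sensitivity=0.7)
        print(f"next frequency: {f / 1e9:.2f} GHz")

A memory-bound workload (low sensitivity) barely responds to frequency, so the controller lowers it and saves power at almost no performance cost, which is the core insight of the claim.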
Dynamic Quantum Allocation and Swap-Time Variability in Time-Sharing Operating Systems.
ERIC Educational Resources Information Center
Bhat, U. Narayan; Nance, Richard E.
The effects of dynamic quantum allocation and swap-time variability on central processing unit (CPU) behavior are investigated using a model that allows both quantum length and swap-time to be state-dependent random variables. Effective CPU utilization is defined to be the proportion of a CPU busy period that is devoted to program processing, i.e.…
NASA Astrophysics Data System (ADS)
Santi, S. S.; Renanto; Altway, A.
2018-01-01
The energy use system in a production process, in this case heat exchanger networks (HENs), is one element that plays a role in the smoothness and sustainability of the industry itself. Optimizing heat exchanger networks built from process streams can have a major effect on the economic value of an industry as a whole, so solving design problems with heat integration becomes an important requirement. In a plant, heat integration can be carried out internally or in combination between process units; however, determining a suitable heat integration technique requires long calculations and considerable time. In this paper, we propose an alternative procedure for determining the heat integration technique by investigating six hypothetical units using a Pinch Analysis approach with the energy target and total annual cost target as objective functions. The six hypothetical units consist of units A, B, C, D, E, and F, where each unit has a different location of its process streams relative to the pinch temperature. The result is a potential heat integration (ΔH') formula that can trim the conventional procedure from seven steps to just three. The preferred heat integration technique is then determined by calculating the heat integration potential (ΔH') between the hypothetical process units. The calculations are implemented in the MATLAB programming language.
Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R
2012-02-23
We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient's skin in real-time by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures.
Innovation and Technology: Electronic Intensive Care Unit Diaries.
Scruth, Elizabeth A; Oveisi, Nazanin; Liu, Vincent
2017-01-01
Hospitalization in the intensive care unit can be a stressful time for patients and their family members. Patients' family members often have difficulty processing all of the information that is given to them. Therefore, an intensive care unit diary can serve as a conduit for synthesizing information, maintaining connection with patients, and maintaining a connection with family members outside the intensive care unit. Paper intensive care unit diaries have been used outside the United States for many years. This article explores the development of an electronic intensive care unit diary using a rapid prototyping model to accelerate the process. Initial results of design testing demonstrate that it is feasible, useful, and desirable to consider the implementation of electronic intensive care unit diaries for patients at risk for post-intensive care syndrome. ©2017 American Association of Critical-Care Nurses.
NASA Technical Reports Server (NTRS)
Hsia, T. C.; Lu, G. Z.; Han, W. H.
1987-01-01
In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required, and parallel processing architectures are an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units, significantly reducing computation time.
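Outside dedicated systolic hardware, one inverse differential kinematics step is a small linear solve per control tick; a generic NumPy sketch (not the PUMA-specific pipeline/parallel algorithm):

    import numpy as np

    def joint_rates(jacobian, ee_velocity):
        """Solve J(q) * qdot = xdot for the joint rates qdot. For a 6-DOF
        arm like the PUMA, J is 6x6 away from singularities; the LU solve
        here is what the systolic array pipelines in hardware."""
        return np.linalg.solve(jacobian, ee_velocity)

    rng = np.random.default_rng(2)
    J = rng.normal(size=(6, 6))            # stand-in for the PUMA Jacobian
    xdot = np.array([0.1, 0.0, -0.05, 0.0, 0.02, 0.0])   # desired twist
    print(joint_rates(J, xdot))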
Rau, Anne K; Moll, Kristina; Snowling, Margaret J; Landerl, Karin
2015-02-01
The current study investigated the time course of cross-linguistic differences in word recognition. We recorded eye movements of German and English children and adults while reading closely matched sentences, each including a target word manipulated for length and frequency. Results showed differential word recognition processes for both developing and skilled readers. Children of the two orthographies did not differ in terms of total word processing time, but this equal outcome was achieved quite differently. Whereas German children relied on small-unit processing early in word recognition, English children applied small-unit decoding only upon rereading-possibly when experiencing difficulties in integrating an unfamiliar word into the sentence context. Rather unexpectedly, cross-linguistic differences were also found in adults in that English adults showed longer processing times than German adults for nonwords. Thus, although orthographic consistency does play a major role in reading development, cross-linguistic differences are detectable even in skilled adult readers. Copyright © 2014 Elsevier Inc. All rights reserved.
40 CFR Table 1 to Subpart Ffff of... - Model Rule-Compliance Schedule
Code of Federal Regulations, 2010 CFR
2010-07-01
... Compliance Times for Other Solid Waste Incineration Units That Commenced Construction On or Before December 9... devices so that, when the incineration unit is brought on line, all process changes and air pollution...
40 CFR Table 1 to Subpart Ffff of... - Model Rule-Compliance Schedule
Code of Federal Regulations, 2011 CFR
2011-07-01
... Compliance Times for Other Solid Waste Incineration Units That Commenced Construction On or Before December 9... devices so that, when the incineration unit is brought on line, all process changes and air pollution...
An efficient start-up circuitry for de-energized ultra-low power energy harvesting systems
NASA Astrophysics Data System (ADS)
Hörmann, Leander B.; Berger, Achim; Salzburger, Lukas; Priller, Peter; Springer, Andreas
2015-05-01
Cyber-physical systems often include small wireless devices to measure physical quantities or control a technical process. These devices need a self-sufficient power supply because no wired infrastructure is available. Their operational time can be extended by energy harvesting systems. However, the convertible power is often limited and discontinuous, which necessitates an energy storage unit. If this unit (and thus the whole system) is de-energized, the start-up process may take a significant amount of time because the energy harvesting process is inefficient in that state. Therefore, this paper presents a system which enables a safe and fast start-up from the de-energized state.
Evaluating Mobile Graphics Processing Units (GPUs) for Real-Time Resource Constrained Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meredith, J; Conger, J; Liu, Y
2005-11-11
Modern graphics processing units (GPUs) can provide tremendous performance boosts for some applications beyond what a single CPU can accomplish, and their performance is growing at a rate faster than CPUs as well. Mobile GPUs available for laptops have the small form factor and low power requirements suitable for use in embedded processing. We evaluated several desktop and mobile GPUs and CPUs on traditional and non-traditional graphics tasks, as well as on the most time consuming pieces of a full hyperspectral imaging application. Accuracy remained high despite small differences in arithmetic operations like rounding. Performance improvements are summarized here relative to a desktop Pentium 4 CPU.
Real time 3D structural and Doppler OCT imaging on graphics processing units
NASA Astrophysics Data System (ADS)
Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Gorczyńska, Iwona; Bukowska, Danuta; Wojtkowski, Maciej; Targowski, Piotr
2013-03-01
In this report the application of graphics processing unit (GPU) programming for real-time 3D Fourier domain Optical Coherence Tomography (FdOCT) imaging, with implementation of Doppler algorithms for visualization of flows in capillary vessels, is presented. In general, the time needed to process FdOCT data on the computer's main processor (CPU) constitutes the main limitation for real-time imaging, and employing additional algorithms, such as Doppler OCT analysis, makes this processing even more time-consuming. Recently developed GPUs, which offer very high computational power, provide a solution to this problem: taking advantage of them for massively parallel data processing allows for real-time imaging in FdOCT. The presented software for structural and Doppler OCT allows for the whole processing and visualization of 2D data consisting of 2000 A-scans generated from 2048-pixel spectra at a frame rate of about 120 fps. The 3D imaging in the same mode of volume data built of 220 × 100 A-scans is performed at a rate of about 8 frames per second. In this paper the software architecture, the organization of the threads, and the optimizations applied are described. For illustration, screen shots recorded during real-time imaging of a phantom (homogeneous water solution of Intralipid in a glass capillary) and the human eye in vivo are presented.
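The Doppler part reduces to the phase difference between complex A-scans acquired at the same position; a reference sketch with a synthetic flow profile (not the authors' GPU kernels):

    import numpy as np

    def doppler_phase(a1, a2):
        """Phase-resolved Doppler OCT: for consecutive complex A-scans a1
        and a2, the flow-induced phase shift is angle(a2 * conj(a1)),
        proportional to the axial velocity component."""
        return np.angle(a2 * np.conj(a1))

    depth = np.arange(256)
    a1 = np.exp(1j * 0.1 * depth)                        # static tissue
    flow = 0.5 * np.exp(-((depth - 128) ** 2) / 200.0)   # synthetic vessel
    a2 = a1 * np.exp(1j * flow)
    print(doppler_phase(a1, a2).max())                   # ~0.5 rad at the vessel

Because every depth pixel is independent, this step, like the FFT that precedes it, maps one-to-one onto GPU threads.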
Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik
2017-02-12
This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions.
Scalable and responsive event processing in the cloud
Suresh, Visalakshmi; Ezhilchelvan, Paul; Watson, Paul
2013-01-01
Event processing involves continuous evaluation of queries over streams of events. Response-time optimization is traditionally done over a fixed set of nodes and/or by using metrics measured at query-operator levels. Cloud computing makes it easy to acquire and release computing nodes as required. Leveraging this flexibility, we propose a novel, queueing-theory-based approach for meeting specified response-time targets against fluctuating event arrival rates by drawing only the necessary amount of computing resources from a cloud platform. In the proposed approach, the entire processing engine of a distinct query is modelled as an atomic unit for predicting response times. Several such units hosted on a single node are modelled as a multiple class M/G/1 system. These aspects eliminate intrusive, low-level performance measurements at run-time, and also offer portability and scalability. Using model-based predictions, cloud resources are efficiently used to meet response-time targets. The efficacy of the approach is demonstrated through cloud-based experiments.
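A sketch of the queueing arithmetic implied: the Pollaczek-Khinchine mean response time of an M/G/1 queue, then the smallest node count meeting a response-time target when the event stream is split evenly across nodes (all numbers illustrative; the paper's multiple-class model is richer):

    def mg1_response_time(lam, es, es2):
        """Mean response time of an M/G/1 queue (Pollaczek-Khinchine).
        lam: arrival rate; es: mean service time; es2: second moment of
        service time. Requires utilization lam * es < 1."""
        rho = lam * es
        assert rho < 1, "queue is unstable"
        return es + lam * es2 / (2.0 * (1.0 - rho))

    def nodes_needed(total_rate, es, es2, target):
        """Smallest number of nodes such that splitting the event stream
        evenly gives a per-node M/G/1 response time within target."""
        n = 1
        while True:
            lam = total_rate / n
            if lam * es < 1 and mg1_response_time(lam, es, es2) <= target:
                return n
            n += 1

    # Exponential service with mean 5 ms (so es2 = 2*es^2), 500 events/s.
    es = 0.005
    print(nodes_needed(500.0, es, 2 * es ** 2, target=0.010))

This is exactly the kind of model-based prediction that lets the system acquire cloud nodes ahead of an arrival-rate surge instead of reacting to low-level runtime measurements.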
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schulte, H.F.; Stoker, A.K.; Campbell, E.E.
1976-06-01
Oil shale technology has been divided into two sub-technologies: surface processing and in-situ processing. Definition of the research programs is essentially an amplification of the five King-Muir categories: (A) pollutants: characterization, measurement, and monitoring; (B) physical and chemical processes and effects; (C) health effects; (D) ecological processes and effects; and (E) integrated assessment. Twenty-three biomedical and environmental research projects are described as to program title, scope, milestones, technology time frame, program unit priority, and estimated program unit cost.
Testing a model of componential processing of multi-symbol numbers-evidence from measurement units.
Huber, Stefan; Bahnmueller, Julia; Klein, Elise; Moeller, Korbinian
2015-10-01
Research on numerical cognition has addressed the processing of nonsymbolic quantities and symbolic digits extensively. However, magnitude processing of measurement units is still a neglected topic in numerical cognition research. Hence, we investigated the processing of measurement units to evaluate whether typical effects of multi-digit number processing such as the compatibility effect, the string length congruity effect, and the distance effect are also present for measurement units. In three experiments, participants had to single out the larger one of two physical quantities (e.g., lengths). In Experiment 1, the compatibility of number and measurement unit (compatible: 3 mm_6 cm with 3 < 6 and mm < cm; incompatible: 3 cm_6 mm with 3 < 6 but cm > mm) as well as string length congruity (congruent: 1 m_2 km with m < km and 2 < 3 characters; incongruent: 2 mm_1 m with mm < m, but 3 > 2 characters) were manipulated. We observed reliable compatibility effects with prolonged reaction times (RT) for incompatible trials. Moreover, a string length congruity effect was present in RT with longer RT for incongruent trials. Experiments 2 and 3 served as control experiments showing that compatibility effects persist when controlling for holistic distance and that a distance effect for measurement units exists. Our findings indicate that numbers and measurement units are processed in a componential manner and thus highlight that processing characteristics of multi-digit numbers generalize to measurement units. Thereby, our data lend further support to the recently proposed generalized model of componential multi-symbol number processing.
NASA Astrophysics Data System (ADS)
Shankar Kumar, Ravi; Goswami, A.
2015-06-01
The article scrutinises the learning effect of the unit production time on the optimal lot size for an uncertain and imprecise imperfect production process, wherein shortages are permissible and partially backlogged. Contextually, we contemplate the fuzzy chance of the production process shifting from an 'in-control' state to an 'out-of-control' state, with a re-work facility for items of imperfect quality. The elapsed time until the process shifts is considered as a fuzzy random variable, and consequently, the fuzzy random total cost per unit time is derived. Fuzzy expectation and the signed distance method are used to transform the fuzzy random cost function into an equivalent crisp function. The results are illustrated with the help of a numerical example. Finally, a sensitivity analysis of the optimal solution with respect to the major parameters is carried out.
Accelerating Malware Detection via a Graphics Processing Unit
2010-09-01
Acronym list: GPU, Graphics Processing Unit; PE, Portable Executable; COFF, Common Object File Format. ... operating systems for the future [Szo05]. The PE format is an updated version of the common object file format (COFF) [Mic06]. Microsoft released a new ... [NAs02]. These alerts can be costly in terms of time and resources for individuals and organizations to investigate each misidentified file [YWL07] [Vak10].
ERIC Educational Resources Information Center
Qin, Dongxiao; Lykes, M. Brinton
2006-01-01
A grounded theory was developed to describe the processes of self-understanding of a group of Chinese women graduate students who were studying in the United States at the time of the research. A basic psychological process, reweaving a fragmented self, was identified from interviews with 20 Chinese women graduate students. Reweaving a fragmented…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1976-06-01
Oil shale technology has been divided into two sub-technologies: surfaceprocessing and in-situ processing. Definition of the research programs is essentially an amplification of the five King-Muir categories: (A) pollutants: characterization, measurement, and monitoring; (B) physical and chemical processes and effects; (C) health effects; (D) ecological processes and effects; and (E) integrated assessment. Twenty-three biomedical and environmental research projects are described as to program title, scope, milestones, technology time frame, program unit priority, and estimated program unit cost.
The impact of a lean rounding process in a pediatric intensive care unit.
Vats, Atul; Goin, Kristin H; Villarreal, Monica C; Yilmaz, Tuba; Fortenberry, James D; Keskinocak, Pinar
2012-02-01
Poor workflow associated with physician rounding can produce inefficiencies that decrease time for essential activities, delay clinical decisions, and reduce staff and patient satisfaction. Workflow and provider resources were not optimized when a pediatric intensive care unit increased by 22,000 square feet (to 33,000) and by nine beds (to 30). Lean methods (focusing on essential processes) and scenario analysis were used to develop and implement a patient-centric standardized rounding process, which we hypothesized would improve rounding efficiency, decrease required physician resources, improve satisfaction, and enhance throughput. Human factors techniques and statistical tools were used to collect and analyze observational data for 11 rounding events before and 12 rounding events after process redesign. Actions included: 1) recording rounding events, times, and patient interactions and classifying them as essential, nonessential, or nonvalue added; 2) comparing rounding duration and time per patient to determine the impact on efficiency; 3) analyzing discharge orders for timeliness; 4) conducting staff surveys to assess improvements in communication and care coordination; and 5) analyzing customer satisfaction data to evaluate impact on patient experience. Thirty-bed pediatric intensive care unit in a children's hospital with academic affiliation. Eight attending pediatric intensivists and their physician rounding teams. Eight attending physician-led teams were observed for 11 rounding events before and 12 rounding events after implementation of a standardized lean rounding process focusing on essential processes. Total rounding time decreased significantly (157 ± 35 mins before vs. 121 ± 20 mins after), through a reduction in time spent on nonessential (53 ± 30 vs. 9 ± 6 mins) activities. The previous process required three attending physicians for an average of 157 mins (7.55 attending physician man-hours), while the new process required two attending physicians for an average of 121 mins (4.03 attending physician man-hours). Cumulative distribution of completed patient rounds by hour of day showed an improvement from 40% to 80% of patients rounded by 9:30 AM. Discharge data showed pediatric intensive care unit patients were discharged an average of 58.05 mins sooner (p < .05). Staff surveys showed a significant increase in satisfaction with the new process (including increased efficiency, improved physician identification, and clearer understanding of process). Customer satisfaction scores showed improvement after implementing the new process. Implementation of a lean-focused, patient-centric rounding structure stressing essential processes was associated with increased timeliness and efficiency of rounds, improved staff and customer satisfaction, improved throughput, and reduced attending physician man-hours.
2015-03-26
Real-Time RF-DNA Fingerprinting of ZigBee Devices Using a Software-Defined Radio with FPGA Processing. AFIT-ENG-MS-15-M-054. Lowder, William M.
Airport Facility Queuing Model Validation
DOT National Transportation Integrated Search
1977-05-01
Criteria are presented for selection of analytic models to represent waiting times due to queuing processes. An existing computer model by M.F. Neuts which assumes general nonparametric distributions of arrivals per unit time and service times for a ...
Technological Enhancements for Personal Computers
1992-03-01
... quicker order processing, shortening the time required to obtain critical spare parts. Customer service and spare parts tracking are facilitated by ... cards speed up order processing and filing. Bar code readers speed inventory control processing. D. DEPLOYMENT PLANNING. Many units with high mobility ...
High Performance Computing Assets for Ocean Acoustics Research
2016-11-18
... independently on processing units with access to a typically available amount of memory, say 16 or 32 gigabytes. Our models require each processor to ... allow results to be obtained with limited amounts of memory available to individual processing units (with no time frame for successful completion ... put into use. One file server computer to store simulation output has also been purchased. The first workstation has 28 CPU cores, dual-thread (56 ...
Yang, Muer; Fry, Michael J; Raikhelkar, Jayashree; Chin, Cynthia; Anyanwu, Anelechi; Brand, Jordan; Scurlock, Corey
2013-02-01
To develop queuing and simulation-based models to understand the relationship between ICU bed availability and the operating room schedule, in order to maximize the use of critical care resources and minimize case cancellation while providing equity to patients and surgeons. Retrospective analysis of 6-month unit admission data from a cohort of cardiothoracic surgical patients, to create queuing and simulation-based models of ICU bed flow. Three different admission policies (current admission policy, shortest-processing-time policy, and a dynamic policy) were then analyzed using simulation models representing 10 yr worth of potential admissions. Important output data consisted of the "average waiting time," a proxy for unit efficiency, and the "maximum waiting time," a surrogate for patient equity. A cardiothoracic surgical ICU in a tertiary center in New York, NY. Six hundred thirty consecutive cardiothoracic surgical patients admitted to the cardiothoracic surgical ICU. None. Although the shortest-processing-time admission policy performs best in terms of unit efficiency (0.4612 days), it does so at the expense of patient equity, prolonging surgical waiting time by as much as 21 days. The current policy gives the greatest equity but causes inefficiency in unit bed flow (0.5033 days). The dynamic policy performs within 8.3% of the shortest-processing-time policy in average waiting time (0.4997 days); however, it balances this with greater patient equity (maximum waiting time could be shortened by 4 days compared with the current policy). Queuing theory and computer simulation can be used to model case flow through a cardiothoracic operating room and ICU. A dynamic admission policy that looks at current waiting time and expected ICU length of stay allows for increased equity between patients with only minimal losses of efficiency. This dynamic admission policy would seem to be superior in maximizing case flow. These results may be generalized to other surgical ICUs.
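A stripped-down discrete-event sketch contrasting first-come-first-served with shortest-processing-time admission to a fixed pool of beds (toy parameters; the paper's dynamic policy, which also weighs accumulated waiting time, is not reproduced) makes the efficiency-versus-equity tension visible:

    import heapq
    import random

    def simulate(stays, n_beds, policy):
        """All patients are ready at time 0; 'stays' are ICU lengths of
        stay in days. Beds free up as stays complete; 'policy' orders the
        waiting list. Returns (mean_wait, max_wait)."""
        if policy == "SPT":
            order = sorted(range(len(stays)), key=lambda i: stays[i])
        else:                              # FCFS: original arrival order
            order = list(range(len(stays)))
        beds = [0.0] * n_beds              # time at which each bed frees up
        heapq.heapify(beds)
        waits = [0.0] * len(stays)
        for i in order:
            free_at = heapq.heappop(beds)  # earliest available bed
            waits[i] = free_at             # admission time = waiting time
            heapq.heappush(beds, free_at + stays[i])
        return sum(waits) / len(waits), max(waits)

    random.seed(3)
    stays = [random.expovariate(1 / 2.5) for _ in range(60)]   # ~2.5-day stays
    for policy in ("FCFS", "SPT"):
        mean_w, max_w = simulate(stays, n_beds=8, policy=policy)
        print(f"{policy}: mean wait {mean_w:.2f} d, max wait {max_w:.2f} d")

SPT trims the mean wait but lets the longest cases wait far longer, which is exactly the equity cost a dynamic policy is designed to cap.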
Li, Jian; Bloch, Pavel; Xu, Jing; Sarunic, Marinko V; Shannon, Lesley
2011-05-01
Fourier domain optical coherence tomography (FD-OCT) provides faster line rates, better resolution, and higher sensitivity for noninvasive, in vivo biomedical imaging compared to traditional time domain OCT (TD-OCT). However, because the signal processing for FD-OCT is computationally intensive, real-time FD-OCT applications demand powerful computing platforms to deliver acceptable performance. Graphics processing units (GPUs) have been used as coprocessors to accelerate FD-OCT by leveraging their relatively simple programming model to exploit thread-level parallelism. Unfortunately, GPUs do not "share" memory with their host processors, requiring additional data transfers between the GPU and CPU. In this paper, we implement a complete FD-OCT accelerator on a consumer grade GPU/CPU platform. Our data acquisition system uses spectrometer-based detection and a dual-arm interferometer topology with numerical dispersion compensation for retinal imaging. We demonstrate that the maximum line rate is dictated by the memory transfer time and not the processing time due to the GPU platform's memory model. Finally, we discuss how the performance trends of GPU-based accelerators compare to the expected future requirements of FD-OCT data rates.
NASA Technical Reports Server (NTRS)
Batcher, K. E.; Eddey, E. E.; Faiss, R. O.; Gilmore, P. A.
1981-01-01
The processing of synthetic aperture radar (SAR) signals using the massively parallel processor (MPP) is discussed, and the fast Fourier transform convolution procedures employed in the algorithms are described. The MPP architecture comprises an array unit (ARU) which processes arrays of data; an array control unit which controls the operation of the ARU and performs scalar arithmetic; a program and data management unit which controls the flow of data; and a unique staging memory (SM) which buffers and permutes data. The ARU contains a 128 by 128 array of bit-serial processing elements (PEs). Two-by-four subarrays of PEs are packaged in a custom VLSI HCMOS chip. The staging memory is a large multidimensional-access memory which buffers and permutes data flowing within the system. Efficient SAR processing is achieved via the ARU communication paths and SM data manipulation. Real-time processing capability can be realized via a multiple-ARU, multiple-SM configuration.
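The FFT-convolution core is compact to state; the scalar NumPy reference below shows what the 128 by 128 PE array performs in bit-serial parallel (for SAR compression the kernel would be the reference chirp; here it is a stand-in):

    import numpy as np

    def fft_convolve2d(image, kernel):
        """Circular 2D convolution via the convolution theorem:
        conv(a, b) = IFFT(FFT(a) * FFT(b)). The kernel is zero-padded to
        the image size before transforming."""
        kh, kw = kernel.shape
        padded = np.zeros_like(image, dtype=complex)
        padded[:kh, :kw] = kernel
        return np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(padded))

    img = np.random.rand(128, 128)
    k = np.ones((4, 4)) / 16.0
    out = fft_convolve2d(img, k)
    print(out.shape, abs(out).mean())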
32 CFR 701.53 - FOIA fee schedule.
Code of Federal Regulations, 2014 CFR
2014-07-01
... human time) and machine time. (1) Human time. Human time is all the time spent by humans performing the...) Machine time. Machine time involves only direct costs of the central processing unit (CPU), input/output... exist to calculate CPU time, no machine costs can be passed on to the requester. When CPU calculations...
32 CFR 701.53 - FOIA fee schedule.
Code of Federal Regulations, 2012 CFR
2012-07-01
... human time) and machine time. (1) Human time. Human time is all the time spent by humans performing the...) Machine time. Machine time involves only direct costs of the central processing unit (CPU), input/output... exist to calculate CPU time, no machine costs can be passed on to the requester. When CPU calculations...
32 CFR 701.53 - FOIA fee schedule.
Code of Federal Regulations, 2013 CFR
2013-07-01
... human time) and machine time. (1) Human time. Human time is all the time spent by humans performing the...) Machine time. Machine time involves only direct costs of the central processing unit (CPU), input/output... exist to calculate CPU time, no machine costs can be passed on to the requester. When CPU calculations...
Degradation data analysis based on a generalized Wiener process subject to measurement error
NASA Astrophysics Data System (ADS)
Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar
2017-09-01
Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure, and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated by maximum likelihood estimation (MLE). The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is carried out to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach yields reasonable results and enhanced inference precision.
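A model of this class is easy to simulate. The sketch below (all parameter values invented, and not the authors' estimator) draws paths of the form Y(t) = mu*Lambda(t) + sigma_B*B(Lambda(t)) + eps with Lambda(t) = t^q, using a random drift mu for unit-to-unit variation, Brownian motion B for the time-correlated structure, and additive Gaussian measurement error, and then estimates the first-hitting-time distribution empirically.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 51)                # inspection times
lam = t ** 0.8                                 # transformed time scale Lambda(t)=t^q

def degradation_path(mu_mean=0.05, mu_sd=0.01, sigma_b=0.08, sigma_eps=0.05):
    mu = rng.normal(mu_mean, mu_sd)            # unit-specific drift (unit-to-unit)
    dB = rng.normal(0.0, np.sqrt(np.diff(lam)))  # Brownian increments on Lambda
    x = np.concatenate(([0.0], np.cumsum(mu * np.diff(lam) + sigma_b * dB)))
    return x + rng.normal(0.0, sigma_eps, t.size)  # add measurement error

paths = np.array([degradation_path() for _ in range(1000)])
D = 1.5                                        # failure threshold -- invented
fht = np.array([t[np.argmax(p >= D)] if (p >= D).any() else np.inf for p in paths])
print("median first-hitting time:", np.median(fht[np.isfinite(fht)]))
```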
Enhancing diabetes management while teaching quality improvement methods.
Sievers, Beth A; Negley, Kristin D F; Carlson, Marny L; Nelson, Joyce L; Pearson, Kristina K
2014-01-01
Six medical units realized that they were having issues with the accurate timing of bedtime blood glucose measurement for their patients with diabetes. They decided to investigate the issues using their current staff nurse committee structure. The clinical nurse specialists and nurse education specialists decided to address the issue by educating and engaging the staff in the define, measure, analyze, improve, control (DMAIC) framework process. They found that two processes needed improvement: the timing of bedtime blood glucose measurement, and snack administration and documentation. Several educational interventions were completed and resulted in improved timing of bedtime glucose measurement and bedtime snack documentation. The nurses understood the DMAIC process, and collaboration and cohesion among the medical units were enhanced. Copyright 2014, SLACK Incorporated.
Xu, Jing; Wong, Kevin; Jian, Yifan; Sarunic, Marinko V
2014-02-01
In this report, we describe a graphics processing unit (GPU)-accelerated processing platform for real-time acquisition and display of flow contrast images with Fourier domain optical coherence tomography (FDOCT) in mouse and human eyes in vivo. Motion contrast from blood flow is processed using the speckle variance OCT (svOCT) technique, which relies on the acquisition of multiple B-scan frames at the same location and tracking the change of the speckle pattern. Real-time mouse and human retinal imaging using two different custom-built OCT systems with processing and display performed on GPU are presented with an in-depth analysis of performance metrics. The display output included structural OCT data, en face projections of the intensity data, and the svOCT en face projections of retinal microvasculature; these results compare projections with and without speckle variance in the different retinal layers to reveal significant contrast improvements. As a demonstration, videos of real-time svOCT for in vivo human and mouse retinal imaging are included in our results. The capability of performing real-time svOCT imaging of the retinal vasculature may be a useful tool in a clinical environment for monitoring disease-related pathological changes in the microcirculation such as diabetic retinopathy.
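The core svOCT computation is compact: the interframe variance of N B-scans acquired at one position highlights decorrelating (flowing) pixels against static tissue. A toy NumPy illustration with invented frame counts and sizes:

```python
import numpy as np

def speckle_variance(frames):
    """frames: (N, depth, width) structural B-scans from one location."""
    return frames.var(axis=0)                  # interframe variance per pixel

rng = np.random.default_rng(1)
static = rng.rayleigh(1.0, (300, 400))         # frozen speckle = static tissue
frames = np.stack([static] * 8) + 0.01 * rng.standard_normal((8, 300, 400))
frames[:, 120:140, :] = rng.rayleigh(1.0, (8, 20, 400))  # decorrelating "vessel"
sv = speckle_variance(frames)
print(sv[:120].mean(), sv[120:140].mean())     # vessel band: far higher variance
```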
Real-time blood flow visualization using the graphics processing unit
NASA Astrophysics Data System (ADS)
Yang, Owen; Cuccia, David; Choi, Bernard
2011-01-01
Laser speckle imaging (LSI) is a technique in which coherent light incident on a surface produces a reflected speckle pattern that is related to the underlying movement of optical scatterers, such as red blood cells, indicating blood flow. Image-processing algorithms can be applied to produce speckle flow index (SFI) maps of relative blood flow. We present a novel algorithm that employs the NVIDIA Compute Unified Device Architecture (CUDA) platform to perform laser speckle image processing on the graphics processing unit. Software written in C was integrated with CUDA and incorporated into a LabVIEW Virtual Instrument (VI) that is interfaced with a monochrome CCD camera able to acquire high-resolution raw speckle images at nearly 10 fps. With the CUDA code integrated into the LabVIEW VI, the processing and display of SFI images were also performed at ~10 fps. We present three video examples depicting real-time flow imaging during a reactive hyperemia maneuver, fluid flow through an in vitro phantom, and real-time LSI during laser surgery of a port wine stain birthmark.
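The underlying processing can be sketched without CUDA. Below, local speckle contrast K = sigma/mean is computed over a sliding window and converted to a simplified speckle flow index, taken here as SFI ~ 1/(2*T*K^2), one common approximation; the window size and exposure time T are assumptions, and this is not the paper's exact kernel.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sfi_map(raw, window=7, T=5e-3):
    raw = raw.astype(np.float64)
    mean = uniform_filter(raw, window)                 # local mean
    var = np.maximum(uniform_filter(raw * raw, window) - mean**2, 0.0)
    K2 = var / np.maximum(mean**2, 1e-12)              # squared speckle contrast
    return 1.0 / (2.0 * T * np.maximum(K2, 1e-6))      # simplified SFI

frame = np.random.rand(480, 640)                       # stand-in raw speckle image
print(sfi_map(frame).shape)
```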
Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D
2012-07-01
Advances in swept-source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real time prior to SV calculations in order to reduce decorrelation from stationary structures induced by bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging, where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P
2014-07-01
Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.
Bio-conversion of apple pomace into ethanol and acetic acid: Enzymatic hydrolysis and fermentation.
Parmar, Indu; Rupasinghe, H P Vasantha
2013-02-01
Enzymatic hydrolysis of cellulose present in apple pomace was investigated using process variables such as enzyme activity of commercial cellulase, pectinase and β-glucosidase, temperature, pH, time, pre-treatments and end product separation. The interaction of enzyme activity, temperature, pH and time had a significant effect (P<0.05) on the release of glucose. Optimal conditions of enzymatic saccharification were: enzyme activity of cellulase, 43 units; pectinase, 183 units; β-glucosidase, 41 units/g dry matter (DM); temperature, 40°C; pH 4.0; and time, 24 h. The sugars were fermented using Saccharomyces cerevisiae, yielding 19.0 g ethanol/100 g DM. Further bio-conversion using Acetobacter aceti resulted in the production of acetic acid at a concentration of 61.4 g/100 g DM. The present study demonstrates an improved process of enzymatic hydrolysis of apple pomace to yield sugars and concomitant bioconversion to produce ethanol and acetic acid. Copyright © 2012 Elsevier Ltd. All rights reserved.
Influence of coatings on the thermal and mechanical processes at insulating glass units
NASA Astrophysics Data System (ADS)
Penkova, Nina; Krumov, Kalin; Surleva, Andriana; Geshkova, Zlatka
2017-09-01
Different coatings on structural glass are used in advanced transparent facades and window systems in order to increase the thermal performance of the glass units and to regulate their optical properties. Coated glass has a higher absorptance in the solar spectrum, which leads to correspondingly higher temperatures under solar load compared to uncoated glass. That process results in higher climatic loads on the insulating glass units (IGU) and in thermal stresses in the coated glass elements. Temperature fields and gradients in glass panes and climatic loads on IGU in window systems are estimated for different coatings of the glazed system. The study is implemented by numerical simulation of conjugate heat transfer in the window systems for summer daytime conditions with solar irradiation, as well as for winter nighttime conditions.
Optical apparatus for forming correlation spectrometers and optical processors
Butler, Michael A.; Ricco, Antonio J.; Sinclair, Michael B.; Senturia, Stephen D.
1999-01-01
Optical apparatus for forming correlation spectrometers and optical processors. The optical apparatus comprises one or more diffractive optical elements formed on a substrate for receiving light from a source and processing the incident light. The optical apparatus includes an addressing element for alternately addressing each diffractive optical element thereof to produce for one unit of time a first correlation with the incident light, and to produce for a different unit of time a second correlation with the incident light that is different from the first correlation. In preferred embodiments of the invention, the optical apparatus is in the form of a correlation spectrometer; and in other embodiments, the apparatus is in the form of an optical processor. In some embodiments, the optical apparatus comprises a plurality of diffractive optical elements on a common substrate for forming first and second gratings that alternately intercept the incident light for different units of time. In other embodiments, the optical apparatus includes an electrically-programmable diffraction grating that may be alternately switched between a plurality of grating states thereof for processing the incident light. The optical apparatus may be formed, at least in part, by a micromachining process.
Accelerating sino-atrium computer simulations with graphic processing units.
Zhang, Hong; Xiao, Zheng; Lin, Shien-fong
2015-01-01
Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach with Graphic Processing Units (GPUs) is proposed in a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel. Three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer and partition. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time taken by the non-optimized program decreased 62% with respect to a serial program running on CPU. The execution time decreased by 80% after the program was optimized. The larger the tissue was, the more significant the acceleration became. The results demonstrated the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations.
Accelerating Molecular Dynamic Simulation on Graphics Processing Units
Friedrichs, Mark S.; Eastman, Peter; Vaidyanathan, Vishal; Houston, Mike; Legrand, Scott; Beberg, Adam L.; Ensign, Daniel L.; Bruns, Christopher M.; Pande, Vijay S.
2009-01-01
We describe a complete implementation of all-atom protein molecular dynamics running entirely on a graphics processing unit (GPU), including all standard force field terms, integration, constraints, and implicit solvent. We discuss the design of our algorithms and important optimizations needed to fully take advantage of a GPU. We evaluate its performance, and show that it can be more than 700 times faster than a conventional implementation running on a single CPU core. PMID:19191337
Model-free adaptive control of supercritical circulating fluidized-bed boilers
Cheng, George Shu-Xing; Mulkey, Steven L
2014-12-16
A novel 3-Input-3-Output (3×3) Fuel-Air Ratio Model-Free Adaptive (MFA) controller is introduced, which can effectively control key process variables including Bed Temperature, Excess O2, and Furnace Negative Pressure of combustion processes of advanced boilers. A novel 7-Input-7-Output (7×7) MFA control system is also described for controlling a combined 3-Input-3-Output (3×3) process of Boiler-Turbine-Generator (BTG) units and a 5×5 CFB combustion process of advanced boilers. Those boilers include Circulating Fluidized-Bed (CFB) Boilers and Once-Through Supercritical Circulating Fluidized-Bed (OTSC CFB) Boilers.
Rock-weathering rates as functions of time
Colman, Steven M.
1981-01-01
The scarcity of documented numerical relations between rock weathering and time has led to a common assumption that rates of weathering are linear. This assumption has been strengthened by studies that have calculated long-term average rates. However, little theoretical or empirical evidence exists to support linear rates for most chemical-weathering processes, with the exception of congruent dissolution processes. The few previous studies of rock-weathering rates that contain quantitative documentation of the relation between chemical weathering and time suggest that the rates of most weathering processes decrease with time. Recent studies of weathering rinds on basaltic and andesitic stones in glacial deposits in the western United States also clearly demonstrate that rock-weathering processes slow with time. Some weathering processes appear to conform to exponential functions of time, such as the square-root time function for hydration of volcanic glass, which conforms to the theoretical predictions of diffusion kinetics. However, weathering of mineralogically heterogeneous rocks involves complex physical and chemical processes that generally can be expressed only empirically, commonly by way of logarithmic time functions. Incongruent dissolution and other weathering processes produce residues, which are commonly used as measures of weathering. These residues appear to slow movement of water to unaltered material and impede chemical transport away from it. If weathering residues impede weathering processes then rates of weathering and rates of residue production are inversely proportional to some function of the residue thickness. This results in simple mathematical analogs for weathering that imply nonlinear time functions. The rate of weathering becomes constant only when an equilibrium thickness of the residue is reached. Because weathering residues are relatively stable chemically, and because physical removal of residues below the ground surface is slight, many weathering features require considerable time to reach constant rates of change. For weathering rinds on volcanic stones in the western United States, this time is at least 0.5 my. © 1981.
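The residue-limited argument above has a simple quantitative form: if the rate of rind growth is inversely proportional to rind thickness, dR/dt = k/R, then R = sqrt(2kt) and weathering slows with time. A numerical check with an arbitrary rate constant:

```python
import math

k, dt = 1.0e-6, 100.0          # rate constant (mm^2/yr), step (yr) -- arbitrary
R, t = 1.0e-3, 0.0             # initial rind thickness (mm), time (yr)
for _ in range(10_000):        # integrate to 1 Myr
    R += dt * k / R            # residue-limited growth dR/dt = k/R
    t += dt
print(f"numeric  R(1 Myr)  = {R:.2f} mm")
print(f"analytic sqrt(2kt) = {math.sqrt(2 * k * t):.2f} mm")
```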
Complex Processes from Dynamical Architectures with Time-Scale Hierarchy
Perdikis, Dionysios; Huys, Raoul; Jirsa, Viktor
2011-01-01
The idea that complex motor, perceptual, and cognitive behaviors are composed of smaller units, which are somehow brought into a meaningful relation, permeates the biological and life sciences. However, no principled framework defining the constituent elementary processes has been developed to this date. Consequently, functional configurations (or architectures) relating elementary processes and external influences are mostly piecemeal formulations suitable to particular instances only. Here, we develop a general dynamical framework for distinct functional architectures characterized by the time-scale separation of their constituents and evaluate their efficiency. Thereto, we build on the (phase) flow of a system, which prescribes the temporal evolution of its state variables. The phase flow topology allows for the unambiguous classification of qualitatively distinct processes, which we consider to represent the functional units or modes within the dynamical architecture. Using the example of a composite movement we illustrate how different architectures can be characterized by their degree of time scale separation between the internal elements of the architecture (i.e. the functional modes) and external interventions. We reveal a tradeoff of the interactions between internal and external influences, which offers a theoretical justification for the efficient composition of complex processes out of non-trivial elementary processes or functional modes. PMID:21347363
Position sensitive solid-state photomultipliers, systems and methods
Shah, Kanai S; Christian, James; Stapels, Christopher; Dokhale, Purushottam; McClish, Mickel
2014-11-11
An integrated silicon solid state photomultiplier (SSPM) device includes a pixel unit including an array of more than 2×2 p-n photodiodes on a common substrate, a signal division network electrically connected to each photodiode, where the signal division network includes four output connections, a signal output measurement unit, a processing unit configured to identify the photodiode generating a signal or a center of mass of photodiodes generating a signal, and a global receiving unit.
Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi
2011-11-01
Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.
Army Science & Technology: Problems and Challenges
2012-03-01
Boundary Conditions: Who: Small Units in COIN/Stability Operations. What: Provide affordable real-time translations and … Training Soldiers, Leaders and Units in complex tactical operations exceeds the Army's current capability for home-station … Challenge: Formulate a S&T program to capture, process and electronically disseminate near-real-time medical information on Soldiers (advance trauma management).
Tactical Operations Analysis Support Facility.
1981-05-01
Punch/Reader; 2 DMC-11AR DDCMP Micro Processor; 2 DMC-11DA Network Link Line Unit; 2 DL-11E Async Serial Line Interface; 4 Intel IN-1670 448K Words MOS Memory … 5.3 Virtual Processors - VAX-11/750 … 5.4 A Relational Data Management System - ORACLE … The Central Processing Unit (CPU) is a 16-bit processor for high-speed, real-time applications, and for large multi-user, multi-task, time-shared …
Systems and methods for interactive virtual reality process control and simulation
Daniel, Jr., William E.; Whitney, Michael A.
2001-01-01
A system for visualizing, controlling and managing information includes a data analysis unit for interpreting and classifying raw data using analytical techniques. A data flow coordination unit routes data from its source to other components within the system. A data preparation unit handles the graphical preparation of the data and a data rendering unit presents the data in a three-dimensional interactive environment where the user can observe, interact with, and interpret the data. A user can view the information on various levels, from a high overall process level view, to a view illustrating linkage between variables, to view the hard data itself, or to view results of an analysis of the data. The system allows a user to monitor a physical process in real-time and further allows the user to manage and control the information in a manner not previously possible.
Ooi, Shing Ming; Sarkar, Srimanta; van Varenbergh, Griet; Schoeters, Kris; Heng, Paul Wan Sia
2013-04-01
Continuous processing and production in pharmaceutical manufacturing has received increased attention in recent years, mainly due to the industry's pressing need for more efficient, cost-effective processes and production, as well as regulatory facilitation. To achieve optimum product quality, the traditional trial-and-error method for the optimization of different process and formulation parameters is expensive and time consuming. Real-time evaluation and control of product quality using an online process analyzer in continuous processing can provide high-quality production with very high throughput at low unit cost. This review focuses on continuous processing and the application of different real-time monitoring tools used in the pharmaceutical industry for continuous processing from powder to tablets.
Computer program documentation: Raw-to-processed SINDA program (RTOPHS) user's guide
NASA Technical Reports Server (NTRS)
Damico, S. J.
1980-01-01
Use of the Raw-to-Processed SINDA (System Improved Numerical Differencing Analyzer) program, RTOPHS, which provides a means of making the temperature prediction data on binary HSTFLO and HISTRY units generated by SINDA available to engineers in an easy-to-use format, is discussed. The program accomplishes this by reading the HISTRY unit; according to user input instructions, the desired times and temperature prediction data are extracted and written to a word-addressable drum file.
Teaching About the Constitution.
ERIC Educational Resources Information Center
White, Charles S.
1988-01-01
Reviews "The U.S. Constitution Then and Now," a two-unit program using the integrated database and word processing capabilities of AppleWorks. For grades 7-12, the units simulate the constitutional convention and the principles of free speech and privacy. Concludes that with adequate time, the program can provide a potentially powerful…
Konstantinidis, Evdokimos I; Frantzidis, Christos A; Pappas, Costas; Bamidis, Panagiotis D
2012-07-01
In this paper, the feasibility of adopting graphics processing units towards real-time emotion-aware computing is investigated, for boosting the time-consuming computations employed in such applications. The proposed methodology was employed in the analysis of encephalographic and electrodermal data gathered while participants passively viewed emotionally evocative stimuli. The GPU's effectiveness when processing electroencephalographic and electrodermal recordings is demonstrated by comparing the execution time of chaos/complexity analysis through nonlinear dynamics (multi-channel correlation dimension/D2) and signal processing algorithms (computation of skin conductance level/SCL) in various popular programming environments. Apart from the beneficial role of parallel programming, the adoption of special design techniques regarding memory management may further enhance the time reduction, which approaches a factor of 30 in comparison with ANSI C (single-core sequential execution). Therefore, the use of GPU parallel capabilities offers a reliable and robust solution for real-time sensing of the user's affective state. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hur, Min Young; Verboncoeur, John; Lee, Hae June
2014-10-01
Particle-in-cell (PIC) simulations offer higher fidelity than fluid simulations for plasma devices that require transient kinetic modeling. They make fewer approximations to the plasma kinetics but need many particles and grid cells to obtain meaningful results, so the simulation time grows in proportion to the number of particles. Therefore, PIC simulation needs high performance computing. In this research, a graphic processing unit (GPU) is adopted for high performance computing of PIC simulation for low temperature discharge plasmas. GPUs have many-core processors and high memory bandwidth compared with a central processing unit (CPU). NVIDIA GeForce GPUs with hundreds of cores were used for the test, offering cost-effective performance. The PIC code algorithm is divided into two modules: a field solver and a particle mover. The particle mover module is divided into four routines, named move, boundary, Monte Carlo collision (MCC), and deposit. Overall, the GPU code solves particle motions as well as the electrostatic potential in two-dimensional geometry almost 30 times faster than a single CPU code. This work was supported by the Korea Institute of Science Technology Information.
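The four particle-mover routines named above are all per-particle independent loops, which is precisely the data parallelism a GPU exploits. A 1-D electrostatic skeleton with invented parameters (the field solver is omitted and the collision step is deliberately crude):

```python
import numpy as np

rng = np.random.default_rng(0)
L, ng, dt, qm = 0.1, 128, 1e-10, -1.76e11      # domain (m), grid pts, step (s), e/m
dx = L / (ng - 1)
x = rng.uniform(0.0, L, 100_000)               # particle positions
v = rng.normal(0.0, 4e5, x.size)               # particle velocities (m/s)
E = np.zeros(ng)                               # field from the (omitted) solver

def push(x, v, E):
    idx = np.clip((x / dx).astype(int), 0, ng - 2)
    w = x / dx - idx
    Ep = (1 - w) * E[idx] + w * E[idx + 1]     # gather E to particle positions
    v = v + qm * Ep * dt                       # MOVE: accelerate and stream
    x = np.mod(x + v * dt, L)                  # BOUNDARY: periodic wrap
    hit = rng.random(x.size) < 1e-3            # MCC: crude collision selection
    v[hit] = rng.normal(0.0, 4e5, hit.sum())   #   re-draw collided velocities
    idx = np.clip((x / dx).astype(int), 0, ng - 2)  # recompute after the move
    w = x / dx - idx
    rho = np.zeros(ng)                         # DEPOSIT: scatter charge to grid
    np.add.at(rho, idx, 1 - w)
    np.add.at(rho, idx + 1, w)
    return x, v, rho

x, v, rho = push(x, v, E)
print(np.isclose(rho.sum(), x.size))           # each particle deposits weight 1
```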
Common and uncommon sense about erosional processes in mountain lands
R. M. Rice
1981-01-01
Current knowledge of erosional processes in mountainous watersheds is reviewed, with emphasis on the west coast of the United States. Appreciation of the relative magnitude of erosional processes may be distorted by the tendency for researchers to study "problems" and by the relatively short time span of their records.
Experience of Data Handling with IPPM Payload
NASA Astrophysics Data System (ADS)
Errico, Walter; Tosi, Pietro; Ilstad, Jorgen; Jameux, David; Viviani, Riccardo; Collantoni, Daniele
2010-08-01
A simplified On-Board Data Handling system has been developed by CAEN AURELIA SPACE and ABSTRAQT as a PUS-over-SpaceWire demonstration platform for the Onboard Payload Data Processing laboratory at ESTEC. The system is composed of three Leon2-based IPPM (Integrated Payload Processing Module) computers that play the roles of Instrument, Payload Data Handling Unit and Satellite Management Unit. Two PCs complete the test set-up, simulating an external Memory Management Unit and the Ground Control Unit. Communication among units takes place primarily through SpaceWire links; the RMAP [2] protocol is used for configuration and housekeeping. A limited implementation of the ECSS-E-70-41B Packet Utilisation Standard (PUS) [1] over CAN bus and MIL-STD-1553B has also been realized. The open-source RTEMS runs on the IPPM AT697E CPU as the real-time operating system.
Bunch, Richard H.
1986-01-01
A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
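The double-ended timing principle can be illustrated with one formula: with synchronized clocks recording the traveling-wave arrival at each end, the fault lies at x = (L + v·Δt)/2 from the master unit. The line length, wave speed, and arrival times below are invented:

```python
LINE_LEN = 120e3        # line length (m) -- invented
V = 2.9e8               # traveling-wave speed (m/s), ~0.97c

def fault_distance(t_master, t_remote):
    # The wave reaches the master at x/V and the remote at (LINE_LEN - x)/V,
    # so x = (LINE_LEN + V * (t_master - t_remote)) / 2, measured from the master.
    return 0.5 * (LINE_LEN + V * (t_master - t_remote))

x = fault_distance(t_master=150.0e-6, t_remote=263.8e-6)
print(f"fault at {x / 1e3:.1f} km from the master unit")   # ~43.5 km
```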
Self-Organizing OFDMA System for Broadband Communication
NASA Technical Reports Server (NTRS)
Roy, Aloke (Inventor); Anandappan, Thanga (Inventor); Malve, Sharath Babu (Inventor)
2016-01-01
Systems and methods for a self-organizing OFDMA system for broadband communication are provided. In certain embodiments, a communication node for a self-organizing network comprises a communication interface configured to transmit data to and receive data from a plurality of nodes; and a processing unit configured to execute computer readable instructions. Further, the computer readable instructions direct the processing unit to identify a sub-region within a cell, wherein the communication node is located in the sub-region; and transmit at least one data frame, wherein the data from the communication node is transmitted at a particular time and frequency as defined within the at least one data frame, where the time and frequency are associated with the sub-region.
A Study on Signal Group Processing of AUTOSAR COM Module
NASA Astrophysics Data System (ADS)
Lee, Jeong-Hwan; Hwang, Hyun Yong; Han, Tae Man; Ahn, Yong Hak
2013-06-01
In a vehicle there are many ECUs (Electronic Control Units), and the ECUs are connected to networks such as CAN, LIN, FlexRay, and so on. AUTOSAR COM (Communication), a software module of AUTOSAR (AUTomotive Open System ARchitecture), the international industry standard for automotive electronic software, processes signals and signal groups for data communication between ECUs. Real-time behavior and reliability are very important for data communication in the vehicle. Therefore, in this paper, we analyze the signal and signal group functions used in COM, and show that signal group functions are more efficient than individual signals in terms of real-time data synchronization and network resource usage between the sender and receiver.
NASA Technical Reports Server (NTRS)
Gorospe, George E., Jr.; Daigle, Matthew J.; Sankararaman, Shankar; Kulkarni, Chetan S.; Ng, Eley
2017-01-01
Prognostic methods enable operators and maintainers to predict the future performance for critical systems. However, these methods can be computationally expensive and may need to be performed each time new information about the system becomes available. In light of these computational requirements, we have investigated the application of graphics processing units (GPUs) as a computational platform for real-time prognostics. Recent advances in GPU technology have reduced cost and increased the computational capability of these highly parallel processing units, making them more attractive for the deployment of prognostic software. We present a survey of model-based prognostic algorithms with considerations for leveraging the parallel architecture of the GPU and a case study of GPU-accelerated battery prognostics with computational performance results.
Lien, Mathilde; Saksvik, Per Øystein
2016-10-01
This paper explores a change process in the Central Norway Regional Health Authority that was brought about by the implementation of a new economics and logistics system. The purpose of this paper is to contribute to understanding of how employees' attitudes towards change develop over time and how attitudes differ between the five health trusts under this authority. In this paper, we argue that a process-oriented focus through a longitudinal diary method, in addition to action research and feedback loops, will provide greater understanding of the evaluation of organizational change and interventions. This is explored through the assumption that different units will have different perspectives and attitudes towards the same intervention over time because of different contextual and time-related factors. The diary method aims to capture the context, events, reflections and interactions when they occur and allows for a nuanced frame of reference for the different phases of the implementation process and how these phases are perceived by employees. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Settle, Margaret Doyle; Coakley, Amanda Bulette; Annese, Christine Donahue
2017-02-01
Human milk provides superior nutritional value for infants in the neonatal intensive care unit and is the enteral feeding of choice. Our hospital used the system engineering initiative for patient safety model to evaluate the human milk management system in our neonatal intensive care unit. Nurses described the previous process in a negative way, fraught with opportunities for error, increased stress for nurses, and the need to be away from the bedside and their patients. The redesigned process improved the quality and safety of human milk management and created time for the nurses to spend with their patients.
United States Marine Corps Motor Transport Mechanic-to-Equipment Ratio
time motor transport equipment remains in maintenance at the organizational command level. This thesis uses a discrete event simulation model of the...applied to a single experiment that allows for assessment of risk of not achieving the objective. Inter-arrival time, processing time, work schedule
Gao, Wu; Xu, Wenjie; Bian, Xuecheng; Chen, Yunmin
2017-11-01
The settlement of any position of the municipal solid waste (MSW) body during the landfilling process and after closure affects the integrity of the internal structure and the storage capacity of the landfill. This paper proposes a practical approach for calculating the settlement and storage capacity of landfills based on the space and time discretization of the landfilling process. The MSW body in the landfill was divided into independent column units, and the filling process of each column unit was determined by a simplified complete landfilling process. The settlement of a position in the landfill was calculated from the compression of each MSW layer in every column unit. The simultaneous settlement of all the column units was then integrated to obtain the settlement of the landfill and the storage capacity of all the column units, which made it possible to obtain the storage capacity of the landfill by the layer-wise summation method. When the compression of each MSW layer was calculated, the effects of the fluctuation of the main leachate level and the variation in the unit weight of the MSW on the overburden effective stress were taken into consideration by introducing the main leachate level's proportion and the unit weight versus buried depth curve. This approach is especially significant for MSW with a high kitchen waste content and for landfills in developing countries. The stress-biodegradation compression model was used to calculate the compression of each MSW layer. A software program, Settlement and Storage Capacity Calculation System for Landfills, was developed by integrating the space and time discretization of the landfilling process with the settlement and storage capacity algorithms. The landfilling process of phase IV of the Shanghai Laogang Landfill was simulated using this software. The error in the maximum geometric volume of the landfill between the calculated and measured values is only 2.02%, and the accumulated filling weight error between the calculated and measured values is less than 5%. These results show that this approach is practical for calculating the settlement and storage capacity satisfactorily and reliably. In addition, the development of the elevation lines in the landfill sections created with the software demonstrates that the optimization of the design of the structures should be based on the settlement of the landfill. Since this practical approach can reasonably calculate the storage capacity of landfills and efficiently provide the development of the settlement of each landfilling stage, it can be used for the optimization of landfilling schemes and structural designs. Copyright © 2017 Elsevier Ltd. All rights reserved.
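In highly simplified form, the column-unit bookkeeping resembles the sketch below: each layer, once placed, compresses under the overburden of the layers above it plus a time-dependent biodegradation strain. All parameters and the compression law here are illustrative stand-ins, not the authors' stress-biodegradation model or leachate-level correction.

```python
import math

Cc, e0 = 0.25, 2.0           # modified compression index, initial void ratio
GAMMA = 8.0                  # unit weight of MSW, kN/m^3 (constant here)
EDG, C_DG = 0.15, 0.3        # ultimate biodegradation strain, rate (1/yr)
SIGMA_REF = 10.0             # reference stress, kPa

def column_settlement(layer_h, fill_times, t_now):
    s = 0.0
    for i, (h, t_fill) in enumerate(zip(layer_h, fill_times)):
        if t_fill > t_now:
            continue                                # layer not placed yet
        sigma = max(GAMMA * sum(layer_h[i + 1:]), SIGMA_REF)  # overburden stress
        primary = h * Cc / (1 + e0) * math.log10(sigma / SIGMA_REF)
        biodeg = h * EDG * (1 - math.exp(-C_DG * (t_now - t_fill)))
        s += primary + biodeg
    return s

h = [4.0] * 5                # five 4 m layers, index 0 = bottom
t_fill = [0, 1, 2, 3, 4]     # filled one layer per year
print(f"settlement after 10 yr: {column_settlement(h, t_fill, 10.0):.2f} m")
```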
A GPS-based Real-time Road Traffic Monitoring System
NASA Astrophysics Data System (ADS)
Tanti, Kamal Kumar
In recent years, monitoring systems have tended towards ever more automatic, reliably interconnected, distributed and autonomous operation. Specifically, the measurement, logging, data processing and interpretation activities may be carried out by separate units at different locations in near real-time. The recent evolution of mobile communication devices and communication technologies has fostered a growing interest in GIS- and GPS-based location-aware systems and services. This paper describes a real-time road traffic monitoring system based on integrated mobile field devices (GPS/GSM/IOs) working in tandem with advanced GIS-based application software providing on-the-fly authentication for real-time monitoring and security enhancement. The described system is a fully automated, continuous, real-time monitoring system that employs GPS sensors; Ethernet and/or serial port communication is used to transfer data between GPS receivers at target points and a central processing computer. The data can be processed locally or remotely, based on client requirements. Due to the modular architecture of the system, other sensor types may be supported with minimal effort. Data on the distributed network and measurements are transmitted via cellular SIM cards to a Control Unit, which provides for post-processing and network management. The Control Unit may be remotely accessed via an Internet connection. The new system will not only provide more consistent data about road traffic conditions but will also provide methods for integrating with other Intelligent Transportation Systems (ITS). GSM technology is used for communication between the mobile devices and the central monitoring service. The resulting system is characterized by autonomy, reliability and a high degree of automation.
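The central processing step, deriving speed (and hence a congestion state) from consecutive GPS fixes, reduces to a haversine distance over elapsed time. A sketch with invented fixes and an invented speed threshold:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    R = 6371000.0                          # mean Earth radius (m)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

fixes = [(0.0, 13.7360, 100.5230),         # (time s, lat, lon) -- invented
         (10.0, 13.7365, 100.5235),
         (20.0, 13.7366, 100.5236)]
for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
    v = haversine_m(la0, lo0, la1, lo1) / (t1 - t0) * 3.6     # km/h
    print(f"{t0:4.0f}-{t1:4.0f}s: {v:5.1f} km/h", "CONGESTED" if v < 15 else "flowing")
```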
NASA Astrophysics Data System (ADS)
Leung, Nelson; Abdelhafez, Mohamed; Koch, Jens; Schuster, David
2017-04-01
We implement a quantum optimal control algorithm based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them in the optimization process with ease. We show that the use of GPUs can speedup calculations by more than an order of magnitude. Our strategy facilitates efficient numerical simulations on affordable desktop computers and exploration of a host of optimization constraints and system parameters relevant to real-life experiments. We demonstrate optimization of quantum evolution based on fine-grained evaluation of performance at each intermediate time step, thus enabling more intricate control on the evolution path, suppression of departures from the truncated model subspace, as well as minimization of the physical time needed to perform high-fidelity state preparation and unitary gates.
Holmes, Lisa; Landsverk, John; Ward, Harriet; Rolls-Reutz, Jennifer; Saldana, Lisa; Wulczyn, Fred; Chamberlain, Patricia
2014-04-01
Estimating costs in child welfare services is critical as new service models are incorporated into routine practice. This paper describes a unit costing estimation system developed in England (cost calculator) together with a pilot test of its utility in the United States where unit costs are routinely available for health services but not for child welfare services. The cost calculator approach uses a unified conceptual model that focuses on eight core child welfare processes. Comparison of these core processes in England and in four counties in the United States suggests that the underlying child welfare processes generated from England were perceived as very similar by child welfare staff in California county systems with some exceptions in the review and legal processes. Overall, the adaptation of the cost calculator for use in the United States child welfare systems appears promising. The paper also compares the cost calculator approach to the workload approach widely used in the United States and concludes that there are distinct differences between the two approaches with some possible advantages to the use of the cost calculator approach, especially in the use of this method for estimating child welfare costs in relation to the incorporation of evidence-based interventions into routine practice.
Chu, Kuan-Yu; Huang, Chunmin
2013-06-13
A smartcard is an integrated circuit card that provides identification, authentication, data storage, and application processing. Among other functions, smartcards can serve as credit and ATM cards and can be used to pay various invoices using a 'reader'. This study looks at the unit cost and activity time of both a traditional cash billing service and a newly introduced smartcard billing service in an outpatient department in a hospital in Taipei, Taiwan. The activity time required in using the cash billing service was determined via a time and motion study. A cost analysis was used to compare the unit costs of the two services. A sensitivity analysis was also performed to determine the effect of smartcard use and number of cashier windows on incremental cost and waiting time. Overall, the smartcard system had a higher unit cost because of the additional service fees and business tax, but it reduced patient waiting time by at least 8 minutes. Thus, it is a convenient service for patients. In addition, if half of all outpatients used smartcards to pay their invoices, along with four cashier windows for cash payments, then the waiting time of cash service users could be reduced by approximately 3 minutes and the incremental cost would be close to breaking even (even though it has a higher overall unit cost that the traditional service). Traditional cash billing services are time consuming and require patients to carry large sums of money. Smartcard services enable patients to pay their bill immediately in the outpatient clinic and offer greater security and convenience. The idle time of nurses could also be reduced as they help to process smartcard payments. A reduction in idle time reduces hospital costs. However, the cost of the smartcard service is higher than the cash service and, as such, hospital administrators must weigh the costs and benefits of introducing a smartcard service. In addition to the obvious benefits of the smartcard service, there is also scope to extend its use in a hospital setting to include the notification of patient arrival and use in other departments.
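The study's sensitivity logic can be mimicked with toy arithmetic: blend card and cash waiting times by the card share while adding the card's per-transaction fee to the unit cost. Every number below is invented except the rough 8-minute and 3-minute waiting-time effects echoed from the abstract:

```python
def scenario(card_share, cash_windows=4, n_patients=1000):
    # Toy queue model: ~10 min cash wait at baseline; moving half the
    # patients to cards shortens the cash wait by ~3 min (as in the study).
    cash_wait = 10.0 - 6.0 * card_share * (4.0 / cash_windows)
    card_wait = 2.0                              # card payment ~8 min faster
    avg_wait = card_share * card_wait + (1 - card_share) * cash_wait
    cost = n_patients * (1.0 + 0.6 * card_share) # base unit cost + card fee/tax
    return avg_wait, cost

for share in (0.0, 0.5, 1.0):
    w, c = scenario(share)
    print(f"card share {share:.0%}: avg wait {w:4.1f} min, billing cost {c:6.0f}")
```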
Prototype design of singles processing unit for the small animal PET
NASA Astrophysics Data System (ADS)
Deng, P.; Zhao, L.; Lu, J.; Li, B.; Dong, R.; Liu, S.; An, Q.
2018-05-01
Positron Emission Tomography (PET) is an advanced clinical diagnostic imaging technique for nuclear medicine. Small animal PET is increasingly used for studying animal models of disease, new drugs and new therapies. A prototype Singles Processing Unit (SPU) for a small animal PET system was designed to obtain the time, energy, and position information. The energy and position are actually derived from high-precision charge measurement, which is based on amplification, shaping, A/D conversion and area calculation in the digital signal processing domain. Analysis and simulations were also conducted to optimize the key parameters in the system design. Initial tests indicate that the charge and time precision are better than 3‰ FWHM and 350 ps FWHM respectively, while the position resolution is better than 3.5‰ FWHM. Combined tests of the SPU prototype with the PET detector indicate that the system time precision is better than 2.5 ns, while the flood map and energy spectra agreed well with expectations.
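The area-based charge measurement referred to above amounts to baseline subtraction followed by summing the digitized pulse. A sketch with an assumed shaped-pulse model, sampling rate, and noise level (none taken from the paper):

```python
import numpy as np

FS = 250e6                                     # ADC sampling rate (Hz) -- assumed
t = np.arange(256) / FS
TAU = 40e-9                                    # shaping time constant -- assumed
td = np.clip(t - 100e-9, 0.0, None)            # pulse arrives 100 ns in
pulse = 120.0 * (td / TAU) * np.exp(1 - td / TAU)   # shaped pulse (ADC counts)
noisy = pulse + 30.0 + np.random.default_rng(2).normal(0, 0.5, t.size)

baseline = noisy[:16].mean()                   # pre-trigger samples -> baseline
charge = (noisy - baseline).sum() / FS         # pulse area ~ charge ~ energy
print(f"pulse area = {charge:.3e} count*s")
```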
General purpose molecular dynamics simulations fully implemented on graphics processing units
NASA Astrophysics Data System (ADS)
Anderson, Joshua A.; Lorenz, Chris D.; Travesset, A.
2008-05-01
Graphics processing units (GPUs), originally developed for rendering real-time effects in computer games, now provide unprecedented computational power for scientific applications. In this paper, we develop a general purpose molecular dynamics code that runs entirely on a single GPU. It is shown that our GPU implementation provides performance equivalent to that of a fast 30-processor-core distributed memory cluster. Our results show that GPUs already provide an inexpensive alternative to such clusters, and we discuss implications for the future.
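The per-particle force and integration arithmetic that maps onto GPU threads can be shown on a CPU in miniature. Below is a velocity-Verlet Lennard-Jones toy system in reduced units, with O(N²) forces for brevity; it illustrates the kind of kernel such codes parallelize, not the paper's implementation.

```python
import numpy as np

def lj_forces(r, box):
    d = r[:, None, :] - r[None, :, :]
    d -= box * np.round(d / box)               # minimum-image convention
    r2 = (d * d).sum(-1)
    np.fill_diagonal(r2, np.inf)               # no self-interaction
    inv6 = 1.0 / r2**3
    f = (48 * inv6 * inv6 - 24 * inv6) / r2    # |F|/r for the LJ potential
    return (f[:, :, None] * d).sum(axis=1)

box, dt = 8.0, 0.005
g = np.arange(4) * 2.0 + 1.0
r = np.array([[a, b, c] for a in g for b in g for c in g])  # 4x4x4 lattice
v = np.random.default_rng(3).normal(0.0, 0.5, r.shape)
f = lj_forces(r, box)
for _ in range(100):                           # velocity-Verlet time steps
    v += 0.5 * dt * f
    r = (r + dt * v) % box
    f = lj_forces(r, box)
    v += 0.5 * dt * f
print("KE per particle:", 0.5 * (v * v).sum() / len(r))
```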
U. S. Naval Forces, Vietnam Monthly Historical Summary for May 1966
1966-07-06
… and Unit 27 … They will be under … for administrative purposes. Harbor defense units in the II, III, and IV Corps … beach to investigate. In the process an explosion, possibly a small mine, occurred thirty yards astern of PCF 36. At the same time the Viet Cong … The general climate throughout the month remained one of uncertainty, with most officers adopting a "wait and see" attitude. As a result the process of …
The Algorithm Theoretical Basis Document for Level 1A Processing
NASA Technical Reports Server (NTRS)
Jester, Peggy L.; Hancock, David W., III
2012-01-01
The first process of the Geoscience Laser Altimeter System (GLAS) Science Algorithm Software converts the Level 0 data into the Level 1A Data Products. The Level 1A Data Products are the time ordered instrument data converted from counts to engineering units. This document defines the equations that convert the raw instrument data into engineering units. Required scale factors, bias values, and coefficients are defined in this document. Additionally, required quality assurance and browse products are defined in this document.
Crott, Ralph; Lawson, Georges; Nollevaux, Marie-Cécile; Castiaux, Annick; Krug, Bruno
2016-09-01
Head and neck cancer (HNC) is predominantly a locoregional disease. Sentinel lymph node (SLN) biopsy offers a minimally invasive means of accurately staging the neck. Value in healthcare is determined by both outcomes and the costs associated with achieving them. Time-driven activity-based costing (TDABC) may offer more precise estimates of the true cost. Process maps were developed for the nuclear medicine, operating room and pathology care phases. TDABC estimates costs by combining information about the process with the unit cost of each resource used. Resource utilization is based on observation of care and staff interviews. Unit costs are calculated as a capacity cost rate, measured in euros per minute (2014), for each resource consumed. Multiplying the unit costs by the resource quantities and summing across all resources used produces the average cost for each phase of care. Three time equations with six different scenarios were modeled based on the type of camera, the number of SLNs and the type of staining used. Total times for the different SLN scenarios vary between 284 and 307 min, with a total cost between 2794 and 3541€. The unit costs vary between 788€/h for the intraoperative evaluation with a gamma probe and 889€/h for preoperative imaging with SPECT/CT. The unit costs for the lymphadenectomy and the pathological examination are, respectively, 560 and 713€/h. A 10% increase of time per individual activity generates only a 1% change in the total cost. TDABC evaluates the cost of SLN biopsy in HNC. The total costs across all phases varied between 2761 and 3744€ per standard case.
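TDABC's arithmetic is then a capacity cost rate multiplied by the minutes consumed, summed over phases. In the sketch below the hourly rates are those quoted above, but the split of minutes among phases is an invented example, not the study's time equations:

```python
PHASES = {  # (capacity cost rate EUR/h from the abstract, minutes -- invented)
    "preoperative SPECT/CT imaging":  (889.0, 45),
    "intraoperative gamma-probe SLN": (788.0, 150),
    "lymphadenectomy":                (560.0, 40),
    "pathology (staining + reading)": (713.0, 60),
}
total_min = sum(m for _, m in PHASES.values())
total_eur = sum(rate / 60.0 * m for rate, m in PHASES.values())
print(f"total time {total_min} min, total cost {total_eur:.0f} EUR")
```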
Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units
NASA Astrophysics Data System (ADS)
Kemal, Jonathan Yashar
For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.
Daugherty, Elizabeth L; Rubinson, Lewis
2011-11-01
In recent years, healthcare disaster planning has grown from its early place as an occasional consideration within the manuals of emergency medical services and emergency department managers to a rapidly growing field, which considers continuity of function, surge capability, and process changes across the spectrum of healthcare delivery. A detailed examination of critical care disaster planning was undertaken in 2007 by the Task Force for Mass Critical Care of the American College of Chest Physicians Critical Care Collaborative Initiative. We summarize the Task Force recommendations and available updated information to answer a fundamental question for critical care disaster planners: What is a prepared intensive care unit and how do I ensure my unit's readiness? Database searches and review of relevant published literature. Preparedness is essential for successful response, but because intensive care units face many competing priorities, without defining "preparedness for what," the task can seem overwhelming. Intensive care unit disaster planners should, therefore, along with the entire hospital, participate in a hospital or regionwide planning process to 1) identify critical care response vulnerabilities; and 2) clarify the hazards for which their community is most at risk. The process should inform a comprehensive written preparedness plan targeting the most worrisome scenarios and including specific guidance on 1) optimal use of space, equipment, and staffing for delivery of critical care to significantly increased patient volumes; 2) allocation of resources for provision of essential critical care services under conditions of absolute scarcity; 3) intensive care unit evacuation; and 4) redundant internal communication systems and means for timely data collection. Critical care disaster planners have a complex, challenging task. Experienced planners will agree that no disaster response is perfect, but careful planning will enable the prepared intensive care unit to respond effectively in times of crisis.
NASA Astrophysics Data System (ADS)
Min, Jae-Hong; Gelo, Nikolas J.; Jo, Hongki
2016-04-01
The smartphone application newly developed in this study, named RINO, allows measuring absolute dynamic displacements and processing them in real time using state-of-the-art smartphone technologies, such as a high-performance graphics processing unit (GPU), in addition to an already powerful CPU and memory, an embedded high-speed/high-resolution camera, and open-source computer vision libraries. A carefully designed color-patterned target and a user-adjustable crop filter enable accurate and fast image processing, allowing up to 240 fps for complete displacement calculation and real-time display. The performance of the developed smartphone application is experimentally validated, showing accuracy comparable to that of a conventional laser displacement sensor.
Threshold units: A correct metric for reaction time?
Zele, Andrew J.; Cao, Dingcai; Pokorny, Joel
2007-01-01
Purpose To compare reaction time (RT) to rod incremental and decremental stimuli expressed in physical contrast units or psychophysical threshold units. Methods Rod contrast detection thresholds and suprathreshold RTs were measured for Rapid-On and Rapid-Off ramp stimuli. Results Threshold sensitivity to Rapid-Off stimuli was higher than to Rapid-On stimuli. Suprathreshold RTs specified in Weber contrast for Rapid-Off stimuli were shorter than for Rapid-On stimuli. Reaction time data expressed in multiples of threshold reversed the outcomes: Reaction times for Rapid-On stimuli were shorter than those for Rapid-Off stimuli. The use of alternative contrast metrics also failed to equate RTs. Conclusions A case is made that the interpretation of RT data may be confounded when expressed in threshold units. Stimulus energy or contrast is the only metric common to the response characteristics of the cells underlying speeded responses. The use of threshold metrics for RT can confuse the interpretation of an underlying physiological process. PMID:17240416
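A tiny numerical illustration of the two metrics contrasted above: a stimulus of fixed Weber contrast sits at different multiples of threshold depending on the stimulus class, which is how the rank-ordering of RTs can reverse between metrics. All values are illustrative.

```python
# Same physical contrast, different threshold-unit values. Numbers invented.
weber_contrast = 0.40          # physical stimulus contrast
threshold_on   = 0.10          # detection threshold, Rapid-On stimulus
threshold_off  = 0.05          # detection threshold, Rapid-Off (more sensitive)

# In physical units both stimuli have the same contrast...
print("contrast (Weber units):", weber_contrast)

# ...but in threshold units the Rapid-Off stimulus is farther above its own
# threshold, which is why RT orderings can flip between the two metrics.
print("Rapid-On,  threshold units:", weber_contrast / threshold_on)   # 4x
print("Rapid-Off, threshold units:", weber_contrast / threshold_off)  # 8x
```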
Using Lean principles to manage throughput on an inpatient rehabilitation unit.
Chiodo, Anthony; Wilke, Ruste; Bakshi, Rishi; Craig, Anita; Duwe, Doug; Hurvitz, Edward
2012-11-01
Performance improvement is a mainstay of operations management and maintenance of certification. In this study at a University Hospital inpatient rehabilitation unit, Lean management techniques were used to manage throughput of patients into and out of the inpatient rehabilitation unit. At the start of this process, the average admission time to the rehabilitation unit was 5:00 p.m., with a median time of 3:30 p.m., and no patients received therapy on the day of admission. Within 8 months, the mean admission time was 1:22 p.m., 50% of the patients were on the rehabilitation unit by 1:00 p.m., and more than 70% of all patients received therapy on the day of admission. Negative variance from this performance was evaluated; inefficient discharges holding up admissions were identified as a problem, and a Lean workshop was initiated. Once this problem was tackled, the prime objective of 70% of patients receiving therapy on the date of admission was consistently met. Lean management tools are effective in improving throughput on an inpatient rehabilitation unit.
9 CFR 381.305 - Equipment and procedures for heat processing systems.
Code of Federal Regulations, 2010 CFR
2010-01-01
... control unit. A nonreturn valve shall be provided in the air supply line to prevent water from entering... control unit. A nonreturn valve shall be provided in the air supply line to prevent water from entering... supply of clean, dry air. The recorder timing mechanism shall be accurate. (i) Chart-type devices...
Man and His Physical Environment: Teacher's Manual.
ERIC Educational Resources Information Center
Mank, Evans R.
Building upon Course I, this teaching guide for the first of four units of Course II introduces the secondary student to geographic concepts and generalizations of the physical world to which man has related over time. All units of the second course emphasize the process of development whereby man, coping with given conditions in his physical…
Mundt, Diane J; Adams, Robert C; Marano, Kristin M
2009-11-01
The U.S. asphalt paving industry has evolved over time to meet various performance specifications for liquid petroleum asphalt binder (known as bitumen outside the United States). Additives to liquid petroleum asphalt produced in the refinery may affect exposures to workers in the hot mix paving industry. This investigation documented the changes in the composition and distribution of the liquid petroleum asphalt products produced from petroleum refining in the United States since World War II. This assessment was accomplished by reviewing documents and interviewing individual experts in the industry to identify current and historical practices. Individuals from 18 facilities were surveyed; the number of facilities reporting use of any material within a particular class ranged from none to more than half the respondents. Materials such as products of the process stream, polymers, elastomers, and anti-strip compounds have been added to liquid petroleum asphalt in the United States over the past 50 years, but modification has not been generally consistent by geography or time. Modifications made to liquid petroleum asphalt were made generally to improve performance and were dictated by state specifications.
Stover, Pamela R; Harpin, Scott
2015-12-01
Limited capacity in a psychiatric unit contributes to long emergency department (ED) admission wait times. Regulatory and accrediting agencies urge hospitals nationally to improve patient flow for better access to care for all types of patients. The purpose of the current study was to decrease psychiatric admission wait time from 10.5 to 8 hours and increase the proportion of patients discharged by 11 a.m. from 20% to 50%. The current study compared pre- and post-intervention data. Plan-Do-Study-Act cycles aimed to improve discharge processes and timeliness through initiation of new practices. Admission wait time improved to an average of 5.1 hours (t = 3.87, p = 0.006). The proportion of discharges occurring by 11 a.m. increased to 46% (odds ratio = 3.42, p < 0.0001). Improving discharge planning processes and timeliness in a psychiatric unit significantly decreased admission wait time from the ED, improving access to psychiatric care. Copyright 2015, SLACK Incorporated.
NASA Technical Reports Server (NTRS)
Cooper, D. B.; Yalabik, N.
1975-01-01
Approximation of noisy data in the plane by straight lines or elliptic or single-branch hyperbolic curve segments arises in pattern recognition, data compaction, and other problems. The efficient search for and approximation of data by such curves were examined. Recursive least-squares linear curve-fitting was used, and ellipses and hyperbolas were parameterized as quadratic functions in x and y. The error minimized by the algorithm is interpreted, and central processing unit (CPU) times for estimating parameters for fitting straight lines and quadratic curves were determined and compared. CPU time for data search was also determined for the case of straight-line fitting. Quadratic curve fitting is shown to require about six times as much CPU time as straight-line fitting, and curves relating CPU time and fitting error were determined for straight-line fitting. Results are derived on early sequential determination of whether or not the underlying curve is a straight line.
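A minimal modern analogue of the quadratic-curve fitting described above, assuming the conic is parameterized as a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 and solved by batch (not recursive) linear least squares:

```python
# Least-squares conic fit: a sketch in the spirit of the abstract, not the
# paper's exact recursive formulation.
import numpy as np

def fit_conic(x, y):
    A = np.column_stack([x**2, x*y, y**2, x, y])
    coeffs, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    return coeffs  # (a, b, c, d, e)

# Noisy points on the ellipse x^2/4 + y^2 = 1
t = np.linspace(0, 2 * np.pi, 200)
rng = np.random.default_rng(0)
x = 2 * np.cos(t) + rng.normal(0, 0.02, t.size)
y = np.sin(t) + rng.normal(0, 0.02, t.size)
print(fit_conic(x, y))  # approximately (0.25, 0, 1, 0, 0)
```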
Zhao, Zi-Fang; Li, Xue-Zhu; Wan, You
2017-12-01
The local field potential (LFP) is a signal reflecting the electrical activity of neurons surrounding the electrode tip. Synchronization between LFP signals provides important details about how neural networks are organized. Synchronization between two distant brain regions is hard to detect using linear synchronization algorithms like correlation and coherence. Synchronization likelihood (SL) is a non-linear synchronization-detecting algorithm widely used in studies of neural signals from two distant brain areas. One drawback of non-linear algorithms is the heavy computational burden. In the present study, we proposed a graphics processing unit (GPU)-accelerated implementation of an SL algorithm with optional 2-dimensional time-shifting. We tested the algorithm with both artificial data and raw LFP data. The results showed that this method revealed detailed information from the original data through the synchronization values along two temporal axes, delay time and onset time, and thus can be used to reconstruct the temporal structure of a neural network. Our results suggest that this GPU-accelerated method can be extended to other algorithms for processing time-series signals (like EEG and fMRI) recorded with similar techniques.
NASA Astrophysics Data System (ADS)
Maciel, Thiago O.; Vianna, Reinaldo O.; Sarthour, Roberto S.; Oliveira, Ivan S.
2015-11-01
We reconstruct the time-dependent quantum map corresponding to the relaxation process of a two-spin system in liquid-state NMR at room temperature. By means of quantum tomography techniques that handle informationally incomplete data, we show how to properly post-process and normalize the measurement data for the simulation of quantum information processing, overcoming the unknown number of molecules prepared in a non-equilibrium magnetization state (Nj) by an initial sequence of radiofrequency pulses. From the reconstructed quantum map, we infer both the longitudinal (T1) and transversal (T2) relaxation times, and introduce the J-coupling relaxation times (T1J, T2J), which are relevant for quantum information processing simulations. We show that the map associated with the relaxation process cannot be assumed to be approximately unital and trace-preserving for times greater than T2J.
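The relaxation times quoted above can be illustrated with a generic exponential-fitting sketch; the models and numbers below are textbook forms for T1 recovery and T2 decay, not the paper's reconstructed quantum map.

```python
# Illustrative recovery of T1 and T2 by exponential fitting on synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def t1_model(t, M0, T1):          # longitudinal recovery
    return M0 * (1 - np.exp(-t / T1))

def t2_model(t, M0, T2):          # transverse decay
    return M0 * np.exp(-t / T2)

t = np.linspace(0, 5.0, 40)                 # seconds
rng = np.random.default_rng(1)
mz = t1_model(t, 1.0, 1.2) + rng.normal(0, 0.01, t.size)
mxy = t2_model(t, 1.0, 0.4) + rng.normal(0, 0.01, t.size)

(_, T1), _ = curve_fit(t1_model, t, mz, p0=(1, 1))
(_, T2), _ = curve_fit(t2_model, t, mxy, p0=(1, 1))
print(f"T1 ~ {T1:.2f} s, T2 ~ {T2:.2f} s")
```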
GPU applications for data processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vladymyrov, Mykhailo, E-mail: mykhailo.vladymyrov@cern.ch; Aleksandrov, Andrey; INFN sezione di Napoli, I-80125 Napoli
2015-12-31
Modern experiments that use nuclear photoemulsion require fast and efficient data acquisition from the emulsion. The new approaches in developing scanning systems require real-time processing of large amounts of data. Methods that use Graphical Processing Unit (GPU) computing power for emulsion data processing are presented here. It is shown how GPU-accelerated emulsion processing helped us to raise the scanning speed by a factor of nine.
Evolution of the Power Processing Units Architecture for Electric Propulsion at CRISA
NASA Astrophysics Data System (ADS)
Palencia, J.; de la Cruz, F.; Wallace, N.
2008-09-01
Since 2002, the team formed by EADS Astrium CRISA, Astrium GmbH Friedrichshafen, and QinetiQ has participated in several flight programs where Electric Propulsion based on Kaufman-type Ion Thrusters is the baseline concept. In 2002, CRISA won the contract for the development of the Ion Propulsion Control Unit (IPCU) for GOCE. This unit, together with the T5 thruster by QinetiQ, provides near-perfect atmospheric drag compensation, offering thrust levels in the range of 1 to 20 mN. By the end of 2003, CRISA started the adaptation of the IPCU concept to the QinetiQ T6 Ion Thruster for the Alphabus program. This paper shows how the Power Processing Unit design evolved over time, including the current developments.
Bio-inspired multi-mode optic flow sensors for micro air vehicles
NASA Astrophysics Data System (ADS)
Park, Seokjun; Choi, Jaehyuk; Cho, Jihyun; Yoon, Euisik
2013-06-01
Monitoring wide-field surrounding information is essential for vision-based autonomous navigation in micro air vehicles (MAV). Our image-cube (iCube) module, which consists of multiple sensors facing different angles in 3-D space, can be applied to wide-field-of-view optic flow estimation (μ-Compound eyes) and to attitude control (μ-Ocelli) in the Micro Autonomous Systems and Technology (MAST) platforms. In this paper, we report an analog/digital (A/D) mixed-mode optic-flow sensor, which generates both optic flows and normal images in different modes for μ-Compound eyes and μ-Ocelli applications. The sensor employs a time-stamp-based optic flow algorithm which is modified from the conventional EMD (Elementary Motion Detector) algorithm to give an optimum partitioning of hardware blocks in the analog and digital domains as well as an adequate allocation of pixel-level, column-parallel, and chip-level signal processing. Temporal filtering, which may require huge hardware resources if implemented in the digital domain, is retained in a pixel-level analog processing unit. The rest of the blocks, including feature detection and time-stamp latching, are implemented using digital circuits in a column-parallel processing unit. Finally, time-stamp information is decoded into velocity using look-up tables, multiplications, and simple subtraction circuits in a chip-level processing unit, thus significantly reducing core digital processing power consumption. In the normal image mode, the sensor generates 8-b digital images using single-slope ADCs in the column unit. In the optic flow mode, the sensor estimates 8-b 1-D optic flows from the integrated mixed-mode algorithm core and 2-D optic flows with external time-stamp processing.
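The time-stamp decoding idea can be sketched in a few lines: each pixel latches the arrival time of a feature, and velocity follows from pixel pitch divided by the time-stamp difference between neighbors. The pitch and time stamps below are illustrative, not the sensor's actual values.

```python
# Sketch of time-stamp-based 1-D optic flow decoding. Values are invented.
import numpy as np

pixel_pitch_um = 20.0                                  # hypothetical pixel spacing
timestamps_ms = np.array([0.0, 2.5, 5.0, 7.4, 10.1])   # feature arrival per pixel

dt = np.diff(timestamps_ms)                 # time for the feature to cross one pixel
velocity_um_per_ms = pixel_pitch_um / dt    # 1-D optic flow per pixel pair
print(velocity_um_per_ms)                   # roughly 8 um/ms in this example
```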
O'Rourke, Kathleen; Teel, Joseph; Nicholls, Erika; Lee, Daniel D; Colwill, Alyssa Covelli; Srinivas, Sindhu K
2018-03-01
To improve staff perception of the quality of the patient admission process from obstetric triage to the labor and delivery unit through standardization. Preassessment and postassessment online surveys. A 13-bed labor and delivery unit in a quaternary care, Magnet Recognition Program, academic medical center in Pennsylvania. Preintervention (n = 100), postintervention (n = 52), and 6-month follow-up survey respondents (n = 75) represented secretaries, registered nurses, surgical technicians, certified nurse-midwives, nurse practitioners, maternal-fetal medicine fellows, anesthesiologists, and obstetric and family medicine attending and resident physicians from the triage and labor and delivery units. We educated staff and implemented interventions, an admission huddle and a safety time-out whiteboard, to standardize the admission process. Participants were evaluated with the use of preintervention, postintervention, and 6-month follow-up surveys about their perceptions regarding the admission process. Data tracked through the electronic medical record were used to determine compliance with the admission huddle and whiteboards. Perceptions of an incomplete patient admission process fell by 77% in relative terms (a decrease of 49 percentage points) from baseline to the 6-month follow-up after the intervention. Postintervention and 6-month follow-up survey results indicated that 100% of respondents responded strongly agree, agree, or neutral that the new admission process improved communication surrounding care for patients. Data in the electronic medical record indicated that compliance with use of admission huddles and whiteboards increased from 50% to 80% by 6 months. The new patient admission process, including a huddle and safety time-out board, improved staff perception of the quality of admission from obstetric triage to the labor and delivery unit. Copyright © 2018 AWHONN, the Association of Women's Health, Obstetric and Neonatal Nurses. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Parsons, Vickie S.
2009-01-01
A request to conduct a peer review of the International Space Station (ISS) proposal to use Bayesian methodology for updating Mean Time Between Failure (MTBF) for ISS Orbital Replaceable Units (ORU) was submitted to the NASA Engineering and Safety Center (NESC) on September 20, 2005. The results were requested by October 20, 2005 in order to be available during the process of reworking the current ISS flight manifest. The results are included in this report.
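For readers unfamiliar with the methodology under review, a minimal sketch of a Bayesian MTBF update follows, assuming an exponential time-to-failure model with a conjugate Gamma prior on the failure rate; the prior parameters and the observed failure data are invented, not ISS ORU values.

```python
# Conjugate Bayesian update for a failure rate: with exponential failures and
# a Gamma(alpha, beta) prior, T hours of operation with n failures gives the
# posterior Gamma(alpha + n, beta + T). All numbers are illustrative.
alpha0, beta0 = 2.0, 20000.0      # prior: roughly 2 failures per 20,000 hours
n_failures, hours = 1, 8000.0     # new on-orbit evidence

alpha1, beta1 = alpha0 + n_failures, beta0 + hours
posterior_mean_rate = alpha1 / beta1
print(f"updated MTBF estimate: {1 / posterior_mean_rate:.0f} hours")
```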
Engle, Martha; Ferguson, Allison; Fields, Willa
2016-01-01
The purpose of this quality improvement project was to redesign a hospital meal delivery process in order to shorten the time between blood glucose monitoring and corresponding insulin administration and improve glycemic control. This process change redesigned the workflow of the dietary and nursing departments. Modifications included nursing, rather than dietary, delivering meal trays to patients receiving insulin. Dietary staff marked the appropriate meal trays and phoned each unit prior to arrival on the unit. The process change was trialed on 2 acute care units prior to implementation hospital-wide. Elapsed time between blood glucose monitoring and insulin administration was analyzed before and after the process change, along with glucometrics: the percentage of patients with blood glucose between 70 and 180 mg/dL (percent perfect), blood glucose greater than 300 mg/dL (extreme hyperglycemia), and blood glucose less than 70 mg/dL (hypoglycemia). Percent perfect glucose results improved from 45% to 53%, and extreme hyperglycemia (blood glucose >300 mg/dL) fell from 11.7% to 5%. Hypoglycemia showed a downward trend, demonstrating that hypoglycemia rates did not increase as glycemic control improved. The percentage of patients receiving meal insulin within 30 minutes of the blood glucose check increased from 35% to 73%. In the hospital, numerous obstacles were present that interfered with on-time meal insulin delivery. Establishing a meal delivery process in which the nurse performs the premeal blood glucose check, delivers the meal, and administers the insulin improves overall blood glucose control. Nurse-led process improvement of blood glucose monitoring, meal tray delivery, and insulin administration does lead to improved glycemic control for the inpatient population.
24 CFR 15.110 - What fees will HUD charge?
Code of Federal Regulations, 2013 CFR
2013-04-01
... duplicating machinery. The computer run time includes the cost of operating a central processing unit for that... Applies. (6) Computer run time (includes only mainframe search time not printing) The direct cost of... estimated fee is more than $250.00 or you have a history of failing to pay FOIA fees to HUD in a timely...
Matrix decomposition graphics processing unit solver for Poisson image editing
NASA Astrophysics Data System (ADS)
Lei, Zhao; Wei, Li
2012-10-01
In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computationally and memory intensive task which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to settle the problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) and has a two-level architecture. These features enable MDGS to generate solutions identical to those of the common Poisson methods and to achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability, real-time image processing, low memory consumption and broad applicability.
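To make the underlying linear system concrete, here is a plain Jacobi sweep for the discrete Poisson equation on a toy grid; this is a simple iterative baseline, not the paper's MDGS solver, though a GPU implementation parallelizes exactly this kind of stencil update.

```python
# Jacobi iteration for the 5-point Poisson stencil:
# 4*u[i,j] - u[i-1,j] - u[i+1,j] - u[i,j-1] - u[i,j+1] = div[i,j]
import numpy as np

def jacobi_poisson(div, boundary, iters=500):
    u = boundary.copy()
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] +
                                div[1:-1, 1:-1])
    return u

div = np.zeros((64, 64))            # guidance-field divergence (toy example)
boundary = np.zeros((64, 64))
boundary[0, :] = 1.0                # fixed boundary values from the target image
print(jacobi_poisson(div, boundary)[32, 32])
```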
Wang, Jie; Wong, Andus Wing-Kuen; Chen, Hsuan-Chih
2017-06-05
The time course of phonological encoding in Mandarin monosyllabic word production was investigated by using the picture-word interference paradigm. Participants were asked to name pictures in Mandarin while visual distractor words were presented before, at, or after picture onset (i.e., stimulus-onset asynchrony/SOA = -100, 0, or +100 ms, respectively). Compared with the unrelated control, the distractors sharing atonal syllables with the picture names significantly facilitated the naming responses at -100- and 0-ms SOAs. In addition, the facilitation effect of sharing word-initial segments only appeared at 0-ms SOA, and null effects were found for sharing word-final segments. These results indicate that both syllables and subsyllabic units play important roles in Mandarin spoken word production and more critically that syllabic processing precedes subsyllabic processing. The current results lend strong support to the proximate units principle (O'Seaghdha, Chen, & Chen, 2010), which holds that the phonological structure of spoken word production is language-specific and that atonal syllables are the proximate phonological units in Mandarin Chinese. On the other hand, the significance of word-initial segments over word-final segments suggests that serial processing of segmental information seems to be universal across Germanic languages and Chinese, which remains to be verified in future studies.
NASA Astrophysics Data System (ADS)
Wang, Tai-Han; Huang, Da-Nian; Ma, Guo-Qing; Meng, Zhao-Hai; Li, Ye
2017-06-01
With the continuous development of full tensor gradiometer (FTG) measurement techniques, three-dimensional (3D) inversion of FTG data is becoming increasingly used in oil and gas exploration. For fast processing and interpretation of large-scale high-precision data, the use of the graphics processing unit (GPU) and preconditioning methods are very important in the data inversion. In this paper, an improved preconditioned conjugate gradient algorithm is proposed by combining the symmetric successive over-relaxation (SSOR) technique and the incomplete Cholesky decomposition conjugate gradient algorithm (ICCG). Since preparing the preconditioner requires extra time, a parallel implementation based on the GPU is proposed. The improved method is then applied to the inversion of noise-contaminated synthetic data to prove its adaptability in the inversion of 3D FTG data. Results show that the parallel SSOR-ICCG algorithm based on an NVIDIA Tesla C2050 GPU achieves a speedup of approximately 25 times that of a serial program using a 2.0 GHz central processing unit (CPU). Real airborne gravity-gradiometry data from the Vinton salt dome (southwest Louisiana, USA) are also considered. Good results are obtained, which verifies the efficiency and feasibility of the proposed parallel method for fast inversion of 3D FTG data.
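A serial, CPU-only sketch of the SSOR-preconditioned conjugate gradient idea is given below, using SciPy's triangular solves inside a LinearOperator; the test matrix and relaxation parameter are illustrative, and the paper's actual contribution (GPU parallelization and the ICCG combination) is not reproduced here.

```python
# SSOR preconditioner M = omega/(2-omega) * (D/w + L) D^{-1} (D/w + U),
# applied as M^{-1} r = (2-w) (D/w+U)^{-1} (D/w) (D/w+L)^{-1} r.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg, spsolve_triangular

n, omega = 200, 1.2
A = sp.diags([-1, 2.1, -1], [-1, 0, 1], shape=(n, n), format="csr")  # SPD test matrix
b = np.ones(n)

D = sp.diags(A.diagonal())
L = sp.tril(A, k=-1, format="csr")
lower = (D / omega + L).tocsr()              # (D/omega + L), lower triangular
upper = lower.T.tocsr()                      # equals (D/omega + U) for symmetric A

def ssor_solve(r):
    y = spsolve_triangular(lower, r, lower=True)
    y = (A.diagonal() / omega) * y
    z = spsolve_triangular(upper, y, lower=False)
    return (2 - omega) * z

M = LinearOperator((n, n), matvec=ssor_solve)
x, info = cg(A, b, M=M, atol=1e-10)
print(info, np.linalg.norm(A @ x - b))       # info == 0 means converged
```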
Failure Analysis of Nonvolatile Residue (NVR) Analyzer Model SP-1000
NASA Technical Reports Server (NTRS)
Potter, Joseph C.
2011-01-01
National Aeronautics and Space Administration (NASA) subcontractor Wiltech contacted the NASA Electrical Lab (NE-L) and requested a failure analysis of a Solvent Purity Meter, model SP-1000, produced by the VerTis Instrument Company. The meter, used to measure the contaminant in a solvent to determine the relative contamination on spacecraft flight hardware and ground servicing equipment, had been inoperable and in storage for an unknown amount of time. NE-L was asked to troubleshoot the unit and make a determination on what would be required to make the unit operational. Through the use of general troubleshooting processes and the review of a unit in service at the time of analysis, the unit was found to be repairable but would need the replacement of multiple components.
9 CFR 318.305 - Equipment and procedures for heat processing systems.
Code of Federal Regulations, 2010 CFR
2010-01-01
... control unit. A nonreturn valve shall be provided in the air supply line to prevent water from entering... control unit. A nonreturn valve shall be provided in the air supply line to prevent water from entering... ensure a supply of clean, dry air. The recorder timing mechanism shall be accurate. (i) Chart-type...
Refugee Resettlement in the U. S.: Time For A New Focus.
ERIC Educational Resources Information Center
Taft, Julia Vadala; And Others
This is a comprehensive report on refugee resettlement in the United States in the past twenty-five years. Part one discusses general concerns of the refugee resettlement process, including: (1) the admission of refugees to the United States; (2) demographic profiles of refugee populations; (3) the needs of individual refugees during resettlement;…
Understanding Processes and Timelines for Distributed Photovoltaic
data from more than 30,000 PV systems across 87 utilities in 16 states to better understand solar photovoltaic (PV) interconnection process time frames in the United States. This study includes an analysis of "Analysis Metrics" that shows the four steps involved in the utility interconnection process for solar
Interactive brain shift compensation using GPU based programming
NASA Astrophysics Data System (ADS)
van der Steen, Sander; Noordmans, Herke Jan; Verdaasdonk, Rudolf
2009-02-01
Processing large image files or real-time video streams requires intense computational power. Driven by the gaming industry, the processing power of graphics processing units (GPUs) has increased significantly. With pixel shader model 4.0, the GPU can be used for image processing 10x faster than the CPU. Dedicated software was developed to deform 3D MR and CT image sets for real-time brain shift correction during navigated neurosurgery, using landmarks or cortical surface traces defined by the navigation pointer. Feedback was given using orthogonal slices and an interactively raytraced 3D brain image. GPU-based programming enables real-time processing of high-definition image datasets, and various applications can be developed in medicine, optics and image sciences.
NASA Astrophysics Data System (ADS)
Borowska-Stefańska, Marta; Wiśniewski, Szymon
2017-12-01
The cognitive aim of this study is to point to the optimum number of local government units and the optimum boundaries of spatial units in Poland under the assumption of minimizing the cumulated theoretical travel time to all settlement units in the country. The methodological aim, in turn, is to present the use of the ArcGIS location-allocation tool for the purposes of delimitation processes, as exemplified by administrative boundaries in Poland. The rationale for this study is that the number and boundaries of units at all levels of Poland's current territorial division are far from optimum in the light of minimizing accumulated theoretical travel time to all settlement units in the country. It may be concluded that it would be justifiable to increase the number of voivodships from the current 16 to 18. Besides, it would be necessary to introduce modifications in relation to units with regional functions. In contrast, the number of districts and communes should be reduced. A continuation of this research may go in the direction of including an analysis of the public transport network, creating in this way a multimodal set of network data. This would illustrate, apart from the potential resulting from the infrastructure itself, also the actually existing connections.
Huber, Stefan; Nuerk, Hans-Christoph; Reips, Ulf-Dietrich; Soltanlou, Mojtaba
2017-12-23
Symbolic magnitude comparison is one of the most well-studied cognitive processes in research on numerical cognition. However, while the cognitive mechanisms of symbolic magnitude processing have been intensively studied, previous studies have paid less attention to individual differences influencing symbolic magnitude comparison. Employing a two-digit number comparison task in an online setting, we replicated previous effects, including the distance effect, the unit-decade compatibility effect, and the effect of cognitive control on the adaptation to filler items, in a large-scale study of 452 adults. Additionally, we observed that the most influential individual differences were participants' first language, time spent playing computer games and gender, followed by reported alcohol consumption, age and mathematical ability. Participants who used a first language with a left-to-right reading/writing direction were faster than those who read and wrote in the right-to-left direction. Reported playing time for computer games was correlated with faster reaction times. Female participants showed slower reaction times and a larger unit-decade compatibility effect than male participants. Participants who reported never consuming alcohol showed overall slower response times than others. Older participants were slower, but more accurate. Finally, higher grades in mathematics were associated with faster reaction times. We conclude that typical experiments on numerical cognition that employ a keyboard as an input device can also be run in an online setting. Moreover, while individual differences have no influence on domain-specific magnitude processing (apart from age, which increases the decade distance effect), they generally influence performance on a two-digit number comparison task.
Computer simulations and real-time control of ELT AO systems using graphical processing units
NASA Astrophysics Data System (ADS)
Wang, Lianqi; Ellerbroek, Brent
2012-07-01
The adaptive optics (AO) simulations at the Thirty Meter Telescope (TMT) have been carried out using the efficient, C-based multi-threaded adaptive optics simulator (MAOS, http://github.com/lianqiw/maos). By porting time-critical parts of MAOS to graphics processing units (GPUs) using NVIDIA CUDA technology, we achieved a 10-fold speedup for each GTX 580 GPU used compared to a modern quad-core CPU. Each time step of a full-scale end-to-end simulation for the TMT narrow-field infrared AO system (NFIRAOS) takes only 0.11 seconds on a desktop with two GTX 580s. We also demonstrate that the TMT minimum variance reconstructor can be assembled in matrix-vector multiply (MVM) format in 8 seconds with 8 GTX 580 GPUs, meeting the TMT requirement for updating the reconstructor. Analysis shows that it is also possible to apply the MVM using 8 GTX 580s within the required latency.
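The MVM reconstructor step can be pictured as a row-partitioned matrix-vector product, with each partition owned by one GPU. The toy sketch below uses NumPy in place of the devices and much smaller dimensions than NFIRAOS.

```python
# Row-block matrix-vector multiply, a stand-in for splitting an MVM
# reconstructor across several GPUs. Sizes are scaled down for illustration.
import numpy as np

n_act, n_slopes, n_gpu = 1024, 4096, 8
rng = np.random.default_rng(0)
R = rng.standard_normal((n_act, n_slopes))   # reconstructor matrix (toy)
s = rng.standard_normal(n_slopes)            # wavefront-sensor slope vector

blocks = np.array_split(R, n_gpu, axis=0)    # each "GPU" owns a row block
partial = [B @ s for B in blocks]            # computed concurrently in practice
a = np.concatenate(partial)                  # DM actuator commands

assert np.allclose(a, R @ s)
```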
NASA Astrophysics Data System (ADS)
Holmdahl, P. E.; Ellis, A. B. E.; Moeller-Olsen, P.; Ringgaard, J. P.
1981-12-01
The basic requirements of the SAR ground segment of ERS-1 are discussed. A system configuration for the real time data acquisition station and the processing and archive facility is depicted. The functions of a typical SAR processing unit (SPU) are specified, and inputs required for near real time and full precision, deferred time processing are described. Inputs and the processing required for provision of these inputs to the SPU are dealt with. Data flow through the systems, and normal and nonnormal operational sequence, are outlined. Prerequisites for maintaining overall performance are identified, emphasizing quality control. The most demanding tasks to be performed by the front end are defined in order to determine types of processors and peripherals which comply with throughput requirements.
NASA Astrophysics Data System (ADS)
Limmer, Steffen; Fey, Dietmar
2013-07-01
Thin-film computations are often a time-consuming task during optical design. An efficient way to accelerate these computations with the help of graphics processing units (GPUs) is described. It turned out that significant speed-ups can be achieved. We investigate the circumstances under which the best speed-up values can be expected. Therefore we compare different GPUs among themselves and with a modern CPU. Furthermore, the effect of thickness modulation on the speed-up and the runtime behavior depending on the input data is examined.
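The per-wavelength thin-film computation being accelerated is typically the characteristic-matrix method; a small CPU reference version for normal incidence and non-absorbing layers is sketched below, the kind of per-wavelength loop the paper batches onto the GPU. The coating is a hypothetical quarter-wave MgF2 layer on glass.

```python
# Thin-film reflectance via the standard characteristic-matrix method
# (normal incidence, non-absorbing layers). Indices/thicknesses are toy values.
import numpy as np

def reflectance(wavelengths, layers, n_inc=1.0, n_sub=1.52):
    R = np.empty_like(wavelengths, dtype=float)
    for i, lam in enumerate(wavelengths):
        M = np.eye(2, dtype=complex)
        for n, d in layers:                       # (refractive index, thickness)
            delta = 2 * np.pi * n * d / lam
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_sub])
        r = (n_inc * B - C) / (n_inc * B + C)
        R[i] = abs(r) ** 2
    return R

lam = np.linspace(400e-9, 700e-9, 7)
layers = [(1.38, 550e-9 / (4 * 1.38))]            # quarter-wave MgF2 at 550 nm
print(reflectance(lam, layers).round(4))          # reflectance dip near 550 nm
```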
Preventing intensive care unit delirium: a patient-centered approach to reducing sleep disruption.
Stuck, Amy; Clark, Mary Jo; Connelly, Cynthia D
2011-01-01
Delirium in the intensive care unit is a disorder with multifactorial causes and is associated with poor outcomes. Sleep-wake disturbance is a common experience for patients with delirium. Care processes that disrupt sleep can lead to sleep deprivation, contributing to delirium. Patient-centered care is a concept that considers what is best for each individual. How can clinicians use a patient-centered approach to alter processes to decrease patient disruptions and improve sleep and rest? Could timing of blood draws and soothing music work to promote sleep?
Geomorphic Processes and Remote Sensing Signatures of Alluvial Fans in the Kun Lun Mountains, China
NASA Technical Reports Server (NTRS)
Farr, Tom G.; Chadwick, Oliver A.
1996-01-01
The timing of alluvial deposition in arid and semiarid areas is tied to land-surface instability caused by regional climate changes. The distribution pattern of dated deposits provides maps of regional land-surface response to past climate change. Sensitivity to differences in surface roughness and composition makes remote sensing techniques useful for regional mapping of alluvial deposits. Radar images from the Spaceborne Radar Laboratory and visible wavelength images from the French SPOT satellite were used to determine remote sensing signatures of alluvial fan units for an area in the Kun Lun Mountains of northwestern China. These data were combined with field observations to compare surface processes and their effects on remote sensing signatures in northwestern China and the southwestern United States. Geomorphic processes affecting alluvial fans in the two areas include aeolian deposition, desert varnish, and fluvial dissection. However, salt weathering is a much more important process in the Kun Lun than in the southwestern United States. This slows the formation of desert varnish and prevents desert pavement from forming. Thus the Kun Lun signatures are characteristic of the dominance of salt weathering, while signatures from the southwestern United States are characteristic of the dominance of desert varnish and pavement processes. Remote sensing signatures are consistent enough in these two regions to be used for mapping fan units over large areas.
First results from the PROTEIN experiment on board the International Space Station
NASA Astrophysics Data System (ADS)
Decanniere, Klaas; Potthast, Lothar; Pletser, Vladimir; Maes, Dominique; Otalora, Fermin; Gavira, Jose A.; Pati, Luis David; Lautenschlager, Peter; Bosch, Robert
On March 15, 2009, Space Shuttle Discovery was launched, carrying the Process Unit of the Protein Crystallization Diagnostics Facility (PCDF) to the International Space Station. It contained the PROTEIN experiment, aiming at the in-situ observation of the nucleation and crystal growth behaviour of proteins. After installation in the European Drawer Rack (EDR) and connection to the PCDF Electronics Unit, experiment runs were performed continuously for 4 months. It was the first time that protein crystallization experiments could be modified on-orbit in near real time, based on data received on the ground. The data included pseudo-dark-field microscope images, interferograms, and Dynamic Light Scattering data. The Process Unit with space-grown crystals was returned to the ground on July 31, 2009. Results for the model protein glucose isomerase (Glucy) from Streptomyces rubiginosus crystallized with ammonium sulfate will be reported concerning nucleation and growth from Protein and Impurities Depletion Zones (PDZs). In addition, results of X-ray analyses of space-grown crystals will be given.
Low-SWaP coincidence processing for Geiger-mode LIDAR video
NASA Astrophysics Data System (ADS)
Schultz, Steven E.; Cervino, Noel P.; Kurtz, Zachary D.; Brown, Myron Z.
2015-05-01
Photon-counting Geiger-mode lidar detector arrays provide a promising approach for producing three-dimensional (3D) video at full motion video (FMV) data rates, resolution, and image size from long ranges. However, coincidence processing required to filter raw photon counts is computationally expensive, generally requiring significant size, weight, and power (SWaP) and also time. In this paper, we describe a laboratory test-bed developed to assess the feasibility of low-SWaP, real-time processing for 3D FMV based on Geiger-mode lidar. First, we examine a design based on field programmable gate arrays (FPGA) and demonstrate proof-of-concept results. Then we examine a design based on a first-of-its-kind embedded graphical processing unit (GPU) and compare performance with the FPGA. Results indicate feasibility of real-time Geiger-mode lidar processing for 3D FMV and also suggest utility for real-time onboard processing for mapping lidar systems.
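The essence of coincidence processing can be shown with a one-dimensional toy: raw Geiger-mode detections are binned in range, and only bins where several photons coincide are kept as surface returns. The noise rates, bin width, and threshold below are illustrative, not the test-bed's parameters.

```python
# Toy coincidence filter: histogram photon ranges, keep bins above threshold.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.uniform(0, 1000, size=300)            # dark counts spread in range (m)
signal = rng.normal(412.0, 0.5, size=40)          # true return clustered near 412 m
ranges = np.concatenate([noise, signal])

bins = np.arange(0, 1001, 1.0)                    # 1 m range bins
counts, edges = np.histogram(ranges, bins=bins)

k = 5                                             # coincidence threshold
hits = edges[:-1][counts >= k]
print("detected surface bins:", hits)             # bins near 411-412 m survive
```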
A portable device for detecting fruit quality by diffuse reflectance Vis/NIR spectroscopy
NASA Astrophysics Data System (ADS)
Sun, Hongwei; Peng, Yankun; Li, Peng; Wang, Wenxiu
2017-05-01
Soluble solid content (SSC) is a major quality parameter of fruit, which influences its flavor and texture. Some research on on-line non-invasive detection of fruit quality has been published; however, consumers currently desire portable devices. This study aimed to develop a portable device for accurate, real-time and nondestructive determination of quality factors of fruit based on diffuse reflectance Vis/NIR spectroscopy (520-950 nm). The hardware of the device consists of four units: a light source unit, a spectral acquisition unit, a central processing unit, and a display unit. A halogen lamp was chosen as the light source. When working, the hand-held probe is in contact with the surface of the fruit sample, forming a dark environment that shields interfering light from outside. Diffuse reflectance light was collected and measured by a spectrometer (USB4000). An ARM (Advanced RISC Machines) processor, as the central processing unit, controls all parts of the device and analyzes the spectral data. A liquid crystal display (LCD) touch screen is used to interface with users. To validate its reliability and stability, 63 apples were tested in the experiment, 47 of which were chosen as the calibration set and the rest as the prediction set. Their SSC reference values were measured by refractometer. At the same time, the samples' spectral data acquired by the portable device were processed by standard normal variate (SNV) transformation and a Savitzky-Golay (S-G) filter to eliminate spectral noise. Then partial least squares regression (PLSR) was applied to build prediction models, and the best prediction results were achieved with a correlation coefficient (r) of 0.855 and a standard error of 0.6033 °Brix. The results demonstrated that this device is feasible for quantitative analysis of the soluble solid content of apples.
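A sketch of the described processing chain (SNV, Savitzky-Golay filtering, then PLSR with a 47/16 calibration/prediction split) is shown below on synthetic stand-in spectra; it uses SciPy and scikit-learn rather than the device's embedded implementation, and the data are random placeholders.

```python
# SNV + Savitzky-Golay + PLS regression on synthetic "spectra".
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((63, 215)) + np.linspace(0, 2, 215)  # 63 fake spectra
y = X[:, 50] * 0.8 + rng.normal(0, 0.1, 63)                  # fake SSC (Brix)

X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)  # SNV
X = savgol_filter(X, window_length=11, polyorder=2, axis=1)             # S-G

pls = PLSRegression(n_components=5).fit(X[:47], y[:47])     # calibration set
r = np.corrcoef(pls.predict(X[47:]).ravel(), y[47:])[0, 1]  # prediction set
print(f"correlation coefficient r = {r:.3f}")
```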
Automated red blood cell depletion in ABO incompatible grafts in the pediatric setting.
Del Fante, Claudia; Scudeller, Luigia; Recupero, Santina; Viarengo, Gianluca; Boghen, Stella; Gurrado, Antonella; Zecca, Marco; Seghatchian, Jerard; Perotti, Cesare
2017-12-01
Bone marrow ABO-incompatible transplantations require graft manipulation prior to infusion to avoid potentially lethal side effects. We analyzed the influence of pre-manipulation factors (temperature at arrival, transit time, time of storage at 4°C until processing and total time from collection to red blood cell depletion) on the graft quality of 21 red blood cell depletion procedures in ABO-incompatible pediatric transplants. Bone marrow collections were processed using the Spectra Optia® (Terumo BCT) automated device. Temperature at arrival ranged between 4°C and 6°C, median transit time was 9.75h (range 0.33-28), median time of storage at 4°-6°C until processing was 1.8h (range 0.41-18.41) and median time from collection to RBC depletion was 21h (range 1-39.4). Median percentage of red blood cell depletion was 97.7 (range 95.4-98.5), median mononuclear cell recovery was 92.2% (range 40-121.2), median CD34+ cell recovery was 93% (range 69.9-161.2), median cell viability was 97.7% (range 94-99.3) and median volume reduction was 83.9% (range 82-92). Graft quality was not significantly different between BM units younger and older than the median age. Our preliminary data show that when all good manufacturing practices are respected, the post-manipulation graft quality is excellent also for those units processed after 24h. Copyright © 2017 Elsevier Ltd. All rights reserved.
Stevens, Bonnie J; Yamada, Janet; Estabrooks, Carole A; Stinson, Jennifer; Campbell, Fiona; Scott, Shannon D; Cummings, Greta
2014-01-01
Hospitalized children frequently receive inadequate pain assessment and management despite substantial evidence to support effective pediatric pain practices. The objective of this study was to determine the effect of a multidimensional knowledge translation intervention, Evidence-based Practice for Improving Quality (EPIQ), on procedural pain practices and clinical outcomes for children hospitalized in medical, surgical and critical care units. A prospective cohort study compared 16 intervention units using EPIQ and 16 standard care (SC) units in 8 Canadian pediatric hospitals. Chart reviews at baseline (time 1) and intervention completion (time 2) determined the nature and frequency of painful procedures and of pain assessment and pain management practices. Trained pain experts evaluated pain intensity 6 months post-intervention (time 3) during routine, scheduled painful procedures. Generalized estimating equation models compared changes in outcomes between EPIQ and SC units over time. EPIQ units used significantly more validated pain assessment tools (P<0.001) and had a greater proportion of patients who received analgesics (P=0.03) and physical pain management strategies (P=0.02). Mean pain intensity scores were significantly lower in the EPIQ group (P=0.03). Comparisons of moderate (4-6/10) and severe (7-10/10) pain, controlling for child- and unit-level factors, indicated that the odds of having severe pain were 51% less for children in the EPIQ group (adjusted OR: 0.49, 95% CI: 0.26-0.83; P=0.009). EPIQ was effective in improving practice and clinical outcomes for hospitalized children. Additional exploration of the influence of contextual factors on research use in hospital settings is required to explain the variability in pain processes and clinical outcomes. Copyright © 2013 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
Hyperspectral processing in graphical processing units
NASA Astrophysics Data System (ADS)
Winter, Michael E.; Winter, Edwin M.
2011-06-01
With the advent of the commercial 3D video card in the mid-1990s, we have seen an order-of-magnitude performance increase with each generation of new video cards. While these cards were designed primarily for visualization and video games, it became apparent after a short while that they could be used for scientific purposes. These Graphical Processing Units (GPUs) are rapidly being incorporated into data processing tasks usually reserved for general-purpose computers. It has been found that many image processing problems scale well to modern GPU systems. We have implemented four popular hyperspectral processing algorithms (N-FINDR, linear unmixing, Principal Components, and the RX anomaly detection algorithm). These algorithms show an across-the-board speedup of at least a factor of 10, with some special cases showing extreme speedups of a hundred times or more.
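Of the four algorithms, linear unmixing is the simplest to sketch: each pixel spectrum is modeled as a nonnegative combination of endmember spectra, and the independent per-pixel solves are what a GPU runs in parallel. The endmembers and abundances below are synthetic stand-ins.

```python
# Pixel-wise linear unmixing by nonnegative least squares (one pixel shown;
# a full image applies this independently per pixel, which parallelizes well).
import numpy as np
from scipy.optimize import nnls

bands, n_end = 100, 3
rng = np.random.default_rng(0)
E = np.abs(rng.standard_normal((bands, n_end)))      # endmember matrix (columns)
true_ab = np.array([0.6, 0.3, 0.1])
pixel = E @ true_ab + rng.normal(0, 0.01, bands)     # observed mixed spectrum

abundances, resid = nnls(E, pixel)                   # nonnegative least squares
print(abundances.round(2))                           # roughly [0.6, 0.3, 0.1]
```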
Real-time face and gesture analysis for human-robot interaction
NASA Astrophysics Data System (ADS)
Wallhoff, Frank; Rehrl, Tobias; Mayer, Christoph; Radig, Bernd
2010-05-01
Human communication relies on a large number of different communication mechanisms like spoken language, facial expressions, or gestures. Facial expressions and gestures are among the main nonverbal communication mechanisms and pass large amounts of information between human dialog partners. Therefore, to allow for intuitive human-machine interaction, real-time capable processing and recognition of facial expressions, hand and head gestures are of great importance. We present a system that tackles these challenges. The input features for the dynamic head gestures and facial expressions are obtained from a sophisticated three-dimensional model, which is fitted to the user in a real-time capable manner. Applying this model, different kinds of information are extracted from the image data and afterwards handed over to a real-time capable data-transferring framework, the so-called Real-Time DataBase (RTDB). In addition to the head and facial-related features, low-level image features regarding the human hand (optical flow, Hu moments) are also stored in the RTDB for the evaluation of hand gestures. In general, the input of a single camera is sufficient for the parallel evaluation of the different gestures and facial expressions. The real-time capable recognition of the dynamic hand and head gestures is performed via different Hidden Markov Models, which have proven to be a quick and real-time capable classification method. For the facial expressions, classical decision trees or more sophisticated support vector machines are used for the classification process. The results of the classification processes are again handed over to the RTDB, where other processes (like a Dialog Management Unit) can easily access them without any blocking effects. In addition, an adjustable amount of history can be stored by the RTDB buffer unit.
Pinelli, Vincent; Stuckey, Heather L; Gonzalo, Jed D
2017-09-01
In hospital-based medicine units, patients have a wide range of complex medical conditions, requiring timely and accurate communication between multiple interprofessional providers at the time of discharge. Limited work has investigated the challenges in interprofessional collaboration and communication during the patient discharge process. In this study, authors qualitatively assessed the experiences of internal medicine providers and patients about roles, challenges, and potential solutions in the discharge process, with a phenomenological focus on the process of collaboration. Authors conducted interviews with 87 providers and patients-41 providers in eight focus-groups, 39 providers in individual interviews, and seven individual patient interviews. Provider roles included physicians, nurses, therapists, pharmacists, care coordinators, and social workers. Interviews were audio-recorded and transcribed verbatim, followed by iterative review of transcripts using qualitative coding and content analysis. Participants identified several barriers related to interprofessional collaboration during the discharge process, including systems insufficiencies (e.g., medication reconciliation process, staffing challenges); lack of understanding others' roles (e.g., unclear which provider should be completing the discharge summary); information-communication breakdowns (e.g., inaccurate information communicated to the primary medical team); patient issues (e.g., patient preferences misaligned with recommendations); and poor collaboration processes (e.g., lack of structured interprofessional rounds). These results provide context for targeting improvement in interprofessional collaboration in medicine units during patient discharges. Implementing changes in care delivery processes may increase potential for accurate and timely coordination, thereby improving the quality of care transitions.
Hybrid parallel computing architecture for multiview phase shifting
NASA Astrophysics Data System (ADS)
Zhong, Kai; Li, Zhongwei; Zhou, Xiaohui; Shi, Yusheng; Wang, Congjun
2014-11-01
The multiview phase-shifting method shows its powerful capability in achieving high-resolution three-dimensional (3-D) shape measurement. Unfortunately, this ability results in very high computation costs, and 3-D computations have to be processed offline. To realize real-time 3-D shape measurement, a hybrid parallel computing architecture is proposed for multiview phase shifting. In this architecture, the central processing unit can co-operate with the graphics processing unit (GPU) to achieve hybrid parallel computing. The high-computation-cost procedures, including lens distortion rectification, phase computation, correspondence, and 3-D reconstruction, are implemented in the GPU, and a three-layer kernel function model is designed to simultaneously realize coarse-grained and fine-grained parallel computing. Experimental results verify that the developed system can perform 50 fps (frames per second) real-time 3-D measurement with 260 K 3-D points per frame. A speedup of up to 180 times is obtained for the performance of the proposed technique using an NVIDIA GT560Ti graphics card rather than a sequential C implementation on a 3.4 GHz Intel Core i7 3770.
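The phase-computation step that dominates such pipelines can be written compactly for standard N-step phase shifting; the NumPy sketch below uses synthetic fringe images and is a CPU stand-in for the GPU kernel described above.

```python
# N-step phase shifting: with fringe images I_n = A + B*cos(phi + 2*pi*n/N),
# the wrapped phase is phi = -atan2(sum_n I_n*sin(d_n), sum_n I_n*cos(d_n)).
import numpy as np

N, h, w = 4, 480, 640
phi_true = np.fromfunction(lambda y, x: 0.01 * x + 0.005 * y, (h, w))
shifts = 2 * np.pi * np.arange(N) / N
I = 0.5 + 0.4 * np.cos(phi_true[None] + shifts[:, None, None])  # N fringe images

num = np.tensordot(np.sin(shifts), I, axes=1)   # sum_n I_n * sin(d_n)
den = np.tensordot(np.cos(shifts), I, axes=1)   # sum_n I_n * cos(d_n)
phi = -np.arctan2(num, den)                     # wrapped phase in (-pi, pi]

err = np.angle(np.exp(1j * (phi - phi_true)))   # compare modulo 2*pi
print(np.max(np.abs(err)))                      # near zero, up to rounding
```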
NASA Astrophysics Data System (ADS)
Liu, Guofeng; Li, Chun
2016-08-01
In this study, we present a practical implementation of prestack Kirchhoff time migration (PSTM) on a general-purpose graphics processing unit. First, we consider the three main optimizations of the PSTM GPU code, i.e., designing a configuration based on a reasonable execution, using the texture memory for velocity interpolation, and applying an intrinsic function in device code. This approach can achieve a speedup of nearly 45 times on an NVIDIA GTX 680 GPU compared with CPU code when a larger imaging space is used, where the PSTM output is a common reflection point gather stored in matrix format as I[nx][ny][nh][nt]. However, this method requires more memory, so the limited imaging space cannot fully exploit the GPU resources. To overcome this problem, we designed a PSTM scheme with multiple GPUs that images different seismic data on different GPUs according to an offset value. This achieves the peak speedup of the GPU PSTM code and greatly increases the efficiency of the calculations without changing the imaging result.
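At the core of PSTM is the double-square-root travel-time kernel evaluated for every image point; the sketch below computes that operator surface for a single source-receiver pair under a constant-velocity assumption, with all geometry values invented for illustration.

```python
# Double-square-root travel time for Kirchhoff time migration:
# t(x) = sqrt((t0/2)^2 + ((xs-x)/v)^2) + sqrt((t0/2)^2 + ((xr-x)/v)^2)
import numpy as np

v = 2000.0                         # m/s, constant migration velocity (toy)
xs, xr = 400.0, 600.0              # source and receiver positions (m)
x_img = np.linspace(0, 1000, 101)  # image-point lateral positions
t0 = np.linspace(0.1, 2.0, 96)     # zero-offset two-way times (s)

X, T0 = np.meshgrid(x_img, t0)
t_travel = (np.sqrt((T0 / 2)**2 + ((xs - X) / v)**2) +
            np.sqrt((T0 / 2)**2 + ((xr - X) / v)**2))
print(t_travel.shape, round(t_travel.min(), 3))   # migration operator surface
```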
40 CFR 60.482-1a - Standards: General.
Code of Federal Regulations, 2014 CFR
2014-07-01
... time during the specified monitoring period (e.g., month, quarter, year), provided the monitoring is... Monthly Quarterly Semiannually. (2) Pumps and valves that are shared among two or more batch process units... be separated by at least 120 calendar days. (g) If the storage vessel is shared with multiple process...
40 CFR 60.482-1a - Standards: General.
Code of Federal Regulations, 2012 CFR
2012-07-01
... time during the specified monitoring period (e.g., month, quarter, year), provided the monitoring is... Monthly Quarterly Semiannually. (2) Pumps and valves that are shared among two or more batch process units... be separated by at least 120 calendar days. (g) If the storage vessel is shared with multiple process...
40 CFR 60.482-1a - Standards: General.
Code of Federal Regulations, 2010 CFR
2010-07-01
... time during the specified monitoring period (e.g., month, quarter, year), provided the monitoring is... Monthly Quarterly Semiannually. (2) Pumps and valves that are shared among two or more batch process units... be separated by at least 120 calendar days. (g) If the storage vessel is shared with multiple process...
40 CFR 60.482-1a - Standards: General.
Code of Federal Regulations, 2013 CFR
2013-07-01
... time during the specified monitoring period (e.g., month, quarter, year), provided the monitoring is... Monthly Quarterly Semiannually. (2) Pumps and valves that are shared among two or more batch process units... be separated by at least 120 calendar days. (g) If the storage vessel is shared with multiple process...
40 CFR 60.482-1a - Standards: General.
Code of Federal Regulations, 2011 CFR
2011-07-01
... time during the specified monitoring period (e.g., month, quarter, year), provided the monitoring is... Monthly Quarterly Semiannually. (2) Pumps and valves that are shared among two or more batch process units... be separated by at least 120 calendar days. (g) If the storage vessel is shared with multiple process...
Introducing Farouk's Process Consultation Group Approach in Irish Primary Schools
ERIC Educational Resources Information Center
Hayes, Marie; Stringer, Phil
2016-01-01
Research has shown that teacher consultation groups increase teachers' behaviour management skills through discussion and collaborative problem-solving. Unlike the United Kingdom, at the time of this research consultation groups were not widely used in Irish schools. This research introduced Farouk's process consultation approach in three Irish…
A Theory of Reading: From Eye Fixations to Comprehension.
ERIC Educational Resources Information Center
Just, Marcel Adam; Carpenter, Patricia A.
1980-01-01
A model of reading comprehension focuses on eye fixations, which are related to the level of reading processes--words, clauses, and text units. Longer pauses are associated with greater processing difficulty. This model is illustrated for a group of undergraduate students reading scientific articles from "Newsweek" and "Time" magazines. (GDC)
Messier, Erik
2016-08-01
A Multichannel Systems (MCS) microelectrode array data acquisition (DAQ) unit is used to collect multichannel electrograms (EGM) from a Langendorff-perfused rabbit heart system to study sudden cardiac death (SCD). MCS provides software through which data being processed by the DAQ unit can be displayed and saved, but this software's combined utility with MATLAB is not very effective. MCS's software stores recorded EGM data in a MathCad (MCD) format, which is then converted to a text file format. These text files are very large, and it is therefore very time consuming to import the EGM data into MATLAB for real-time analysis. Therefore, customized MATLAB software was developed to control the acquisition of data from the MCS DAQ unit and provide specific laboratory accommodations for this study of SCD. The developed DAQ unit control software is able to accurately provide real-time display of EGM signals, record and save EGM signals in MATLAB in a desired format, and produce real-time analysis of the EGM signals, all through an intuitive GUI.
Auxiliary engine digital interface unit (DIU)
NASA Technical Reports Server (NTRS)
1972-01-01
This auxiliary propulsion engine digital unit controls both the valving of the fuel and oxidizer to the engine combustion chamber and the ignition spark required for timely and efficient engine burns. In addition to this basic function, the unit is designed to manage its own redundancy such that it is still operational after two hard circuit failures. It communicates to the data bus system several selected information points relating to the operational status of the electronics as well as the engine fuel and burning processes.
Abraham, Sushil; Bain, David; Bowers, John; Larivee, Victor; Leira, Francisco; Xie, Jasmina
2015-01-01
The technology transfer of biological products is a complex process requiring control of multiple unit operations and parameters to ensure product quality and process performance. To achieve product commercialization, the technology transfer sending unit must successfully transfer knowledge about both the product and the process to the receiving unit. A key strategy for maximizing successful scale-up and transfer efforts is the effective use of engineering and shake-down runs to confirm operational performance and product quality prior to embarking on good manufacturing practice runs such as process performance qualification runs. We consider key factors to consider in making the decision to perform shake-down or engineering runs. We also present industry benchmarking results of how engineering runs are used in drug substance technology transfers alongside the main themes and best practices that have emerged. Our goal is to provide companies with a framework for ensuring the "right first time" technology transfers with effective deployment of resources within increasingly aggressive timeline constraints. © PDA, Inc. 2015.
14 CFR 1300.16 - Application process.
Code of Federal Regulations, 2012 CFR
2012-01-01
... aviation system in the United States and that credit is not reasonably available at the time of the ... 1300.16 Aeronautics and Space; AIR TRANSPORTATION SYSTEM STABILIZATION; OFFICE OF MANAGEMENT AND BUDGET ... applications to the Board any time after October 12, 2001 through June 28, 2002. All applications must be ...
14 CFR 1300.16 - Application process.
Code of Federal Regulations, 2013 CFR
2013-01-01
... aviation system in the United States and that credit is not reasonably available at the time of the ... 1300.16 Aeronautics and Space; AIR TRANSPORTATION SYSTEM STABILIZATION; OFFICE OF MANAGEMENT AND BUDGET ... applications to the Board any time after October 12, 2001 through June 28, 2002. All applications must be ...
14 CFR 1300.16 - Application process.
Code of Federal Regulations, 2011 CFR
2011-01-01
... aviation system in the United States and that credit is not reasonably available at the time of the ... Aeronautics and Space; AIR TRANSPORTATION SYSTEM STABILIZATION; OFFICE OF MANAGEMENT AND BUDGET; AVIATION ... applications to the Board any time after October 12, 2001 through June 28, 2002. All applications must be ...
ERIC Educational Resources Information Center
Dunbar, Robert L.; Dingel, Molly J.; Prat-Resina, Xavier
2014-01-01
The disconnect between data collection and analysis across academic and administrative units within institutions of higher education makes it challenging to incorporate diverse data into curricular design. Understanding the factors related to student retention and success is unlikely to occur by focusing on only one unit at a time. By promoting…
Operation Ivy. Report to the Scientific Director. Documentary photography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaylord, J.L.
1985-09-01
The objective of Task Unit 9 was to record on film, both still and motion picture, the activities connected with certain events and programs of Operation Ivy. Task Unit 9 accomplished all the necessary field photography and was still in the process of editing this footage to form a completed motion-picture record at the time this report was written.
Gestalt Principles in the Control of Motor Action
ERIC Educational Resources Information Center
Klapp, Stuart T.; Jagacinski, Richard J.
2011-01-01
We argue that 4 fundamental gestalt phenomena in perception apply to the control of motor action. First, a motor gestalt, like a perceptual gestalt, is holistic in the sense that it is processed as a single unit. This notion is consistent with reaction time results indicating that all gestures for a brief unit of action must be programmed prior to…
Mythology Across Time and Borders: Online Workshop. ArtsEdge Curricula, Lessons and Activities.
ERIC Educational Resources Information Center
Clement, Lynne Boone
This curriculum unit can be adapted for students as young as grade 6 or 7 and as old as grade 12. The unit integrates writing process instruction, storytelling lore, mythology, and arts instruction and is in support of standards as defined by the Consortium of National Arts Education Associations and the National Council of Teachers of English.…
Method of synchronizing independent functional unit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Changhoan
2017-05-16
A system for synchronizing parallel processing of a plurality of functional processing units (FPUs) comprises a first FPU and a first program counter to control timing of a first stream of program instructions issued to the first FPU by advancement of the first program counter, and a second FPU and a second program counter to control timing of a second stream of program instructions issued to the second FPU by advancement of the second program counter. The first FPU is in communication with the second FPU to synchronize the issuance of the first stream of program instructions to the second stream of program instructions, and the second FPU is in communication with the first FPU to synchronize the issuance of the second stream of program instructions to the first stream of program instructions.
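As a loose illustration of the program-counter idea in the patent abstract above, the Python sketch below lets two workers each advance their own counter, and blocks instruction k of one stream until the peer's counter has reached a required value. The sync points and instruction contents are invented, and the real invention operates in hardware, not threads.

```python
import threading

class CounterSync:
    def __init__(self):
        self.counters = [0, 0]            # one program counter per "FPU"
        self.cond = threading.Condition()

    def wait_for_peer(self, me, peer_min):
        # Block until the peer's counter has advanced far enough.
        with self.cond:
            self.cond.wait_for(lambda: self.counters[1 - me] >= peer_min)

    def advance(self, me):
        with self.cond:
            self.counters[me] += 1
            self.cond.notify_all()

def worker(sync, me, stream):
    for peer_min, instr in stream:
        sync.wait_for_peer(me, peer_min)  # synchronize with the other stream
        print(f"FPU{me}: {instr}")        # "execute" the instruction
        sync.advance(me)

sync = CounterSync()
s0 = [(0, "load A"), (1, "mul A,B"), (2, "store C")]
s1 = [(0, "load B"), (1, "add B,1"), (2, "store D")]
t0 = threading.Thread(target=worker, args=(sync, 0, s0))
t1 = threading.Thread(target=worker, args=(sync, 1, s1))
t0.start(); t1.start(); t0.join(); t1.join()
```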
Infantry Small-Unit Mountain Operations
2011-02-01
... expended to traverse it. Unique sustainment solutions: sustainment in a mountain environment is a challenging and time-consuming process. Terrain ... a particular environment during the intelligence preparation of the battlefield (IPB) process and provide the analysis to the company. The IPB ... consists of a four-step process that includes defining the operational environment, describing environmental effects on operations, evaluating the ...
"Take Back Your Time": Facilitating a Student Led Teach-In
ERIC Educational Resources Information Center
Heyne, Linda A.
2008-01-01
"Take Back Your Time" (TBYT) is a movement founded by John De Graaf (2003) that exposes the issues of time poverty and overwork in the United States and Canada. This article features the process whereby undergraduate students study De Graaf's TBYT handbook, discuss its concepts, and organize a student-led TBYT "teach-in" for…
A cost analysis comparing xeroradiography to film technics for intraoral radiography.
Gratt, B M; Sickles, E A
1986-01-01
In the United States during 1978 $730 million was spent on dental radiographic services. Currently there are three alternatives for the processing of intraoral radiographs: manual wet-tanks, automatic film units, or xeroradiography. It was the intent of this study to determine which processing system is the most economical. Cost estimates were based on a usage rate of 750 patient images per month and included a calculation of the average cost per radiograph over a five-year period. Capital costs included initial processing equipment and site preparation. Operational costs included labor, supplies, utilities, darkroom rental, and breakdown costs. Clinical time trials were employed to measure examination times. Maintenance logs were employed to assess labor costs. Indirect costs of training were estimated. Results indicated that xeroradiography was the most cost effective ($0.81 per image) compared to either automatic film processing ($1.14 per image) or manual processing ($1.35 per image). Variations in projected costs indicated that if a dental practice performs primarily complete-mouth surveys, exposes less than 120 radiographs per month, and pays less than $6.50 per hour in wages, then manual (wet-tank) processing is the most economical method for producing intraoral radiographs.
NursesforTomorrow: a proactive approach to nursing resource analysis.
Bournes, Debra A; Plummer, Carolyn; Miller, Robert; Ferguson-Paré, Mary
2010-03-01
This paper describes the background, development, implementation and utilization of NursesforTomorrow (N4T), a practical and comprehensive nursing human resources analysis method to capture regional, institutional and patient care unit-specific actual and predicted nurse vacancies, nurse staff characteristics and nurse staffing changes. Reports generated from the process include forecasted shortfalls or surpluses of nurses, percentage of novice nurses, occupancy, sick time, overtime, agency use and other metrics. Readers will benefit from a description of the ways in which the data generated from the nursing resource analysis process are utilized at senior leadership, program and unit levels to support proactive hiring and resource allocation decisions and to predict unit-specific recruitment and retention patterns across multiple healthcare organizations and regions.
Silva, A F; Sarraguça, M C; Fonteyne, M; Vercruysse, J; De Leersnyder, F; Vanhoorne, V; Bostijn, N; Verstraeten, M; Vervaet, C; Remon, J P; De Beer, T; Lopes, J A
2017-08-07
A multivariate statistical process control (MSPC) strategy was developed for the monitoring of the ConsiGma™-25 continuous tablet manufacturing line. Thirty-five logged variables encompassing three major units (a twin-screw high-shear granulator, a fluid-bed dryer, and a product control unit) were used to monitor the process. The MSPC strategy was based on principal component analysis of data acquired under normal operating conditions using a series of four process runs. Runs with imposed disturbances in the dryer air flow and temperature, in the granulator barrel temperature, speed and liquid mass flow, and in the powder dosing unit mass flow were utilized to evaluate the model's monitoring performance. The impact of the imposed deviations on process continuity was also evaluated using Hotelling's T² and Q residuals control charts. The influence of the individual process variables was assessed by analyzing contribution plots at specific time points. Results show that the imposed disturbances were all detected in both control charts. Overall, the MSPC strategy was successfully developed and applied. Additionally, deviations not associated with the imposed changes were detected, mainly in the granulator barrel temperature control. Copyright © 2017 Elsevier B.V. All rights reserved.
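A minimal sketch of the PCA-based monitoring statistics named above, assuming the standard definitions of Hotelling's T² (on the score space) and Q (squared prediction error); the data, scaling, and three-component choice are placeholders, not the ConsiGma™-25 setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
noc = rng.normal(size=(500, 35))     # stand-in for 35 logged variables under NOC

scaler = StandardScaler().fit(noc)
pca = PCA(n_components=3).fit(scaler.transform(noc))

def t2_and_q(x):
    """Hotelling's T^2 and Q (squared prediction error) for one observation."""
    xs = scaler.transform(x.reshape(1, -1))
    scores = pca.transform(xs)
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1).item()
    q = np.sum((xs - pca.inverse_transform(scores))**2).item()
    return t2, q

t2, q = t2_and_q(rng.normal(size=35) + 3.0)   # shifted sample, i.e. a disturbance
print(f"T^2 = {t2:.1f}, Q = {q:.1f}")
```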
Use of general purpose graphics processing units with MODFLOW
Hughes, Joseph D.; White, Jeremy T.
2013-01-01
To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized.
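For orientation, here is a small CPU-side sketch of the kind of Jacobi-preconditioned conjugate gradient solve on a compressed-sparse-row matrix that the UPCG solver performs; the stand-in matrix is a generic 2-D Laplacian, not a MODFLOW groundwater system, and no GPGPU offloading is shown.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

n = 200                                    # n x n grid of cells
lap1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = sp.kronsum(lap1d, lap1d).tocsr()       # SPD 5-point Laplacian in CSR form
b = np.ones(A.shape[0])

d_inv = 1.0 / A.diagonal()                 # Jacobi (diagonal) preconditioner
M = LinearOperator(A.shape, matvec=lambda x: d_inv * x)

x, info = cg(A, b, M=M, maxiter=1000)      # preconditioned CG solve
print("converged" if info == 0 else f"info={info}",
      "| residual norm:", np.linalg.norm(b - A @ x))
```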
2007-12-01
3. Poka-yoke ... 4. Systems for ... Standard operating procedures • Visual displays for workflow and communication • Total productive maintenance • Poka-yoke techniques to prevent ... process step or eliminating non-value-added steps, and reducing the seven common wastes, will decrease the total time of a process. 3. Poka-yoke
Korasa, Klemen; Vrečer, Franc
2018-01-01
Over the last two decades, regulatory agencies have demanded better understanding of pharmaceutical products and processes by implementing new technological approaches, such as process analytical technology (PAT). Process analysers present a key PAT tool, which enables effective process monitoring and thus improved process control of medicinal product manufacturing. Process analysers applicable in pharmaceutical coating unit operations are comprehensively described in the present article. The review is focused on the monitoring of solid oral dosage forms during film coating in the two most commonly used coating systems, i.e. pan and fluid-bed coaters. A brief theoretical background and a critical overview of process analysers used for real-time or near real-time (in-, on-, at-line) monitoring of critical quality attributes of film-coated dosage forms are presented. Besides well-recognized spectroscopic methods (NIR and Raman spectroscopy), other techniques which have made a significant breakthrough in recent years are discussed (terahertz pulsed imaging (TPI), chord length distribution (CLD) analysis, and image analysis). The last part of the review is dedicated to novel techniques with high potential to become valuable PAT tools in the future (optical coherence tomography (OCT), acoustic emission (AE), microwave resonance (MR), and laser-induced breakdown spectroscopy (LIBS)). Copyright © 2017 Elsevier B.V. All rights reserved.
Surgical scheduling: a lean approach to process improvement.
Simon, Ross William; Canacari, Elena G
2014-01-01
A large teaching hospital in the northeast United States had an inefficient, paper-based process for scheduling orthopedic surgery that caused delays and contributed to site/side discrepancies. The hospital's leaders formed a team with the goals of developing a safe, effective, patient-centered, timely, efficient, and accurate orthopedic scheduling process; smoothing the schedule so that block time was allocated more evenly; and ensuring correct site/side. Under the resulting process, real-time patient information is entered into a database during the patient's preoperative visit in the surgeon's office. The team found the new process reduced the occurrence of site/side discrepancies to zero, reduced instances of changing the sequence of orthopedic procedures by 70%, and increased patient satisfaction. Copyright © 2014 AORN, Inc. Published by Elsevier Inc. All rights reserved.
The ‘hit’ phenomenon: a mathematical model of human dynamics interactions as a stochastic process
NASA Astrophysics Data System (ADS)
Ishii, Akira; Arakaki, Hisashi; Matsuda, Naoya; Umemura, Sanae; Urushidani, Tamiko; Yamagata, Naoya; Yoshida, Narihiko
2012-06-01
A mathematical model for the ‘hit’ phenomenon in entertainment within a society is presented as a stochastic process of human dynamics interactions. The model uses only the advertisement budget time distribution as an input, and word-of-mouth (WOM), represented by posts on social network systems, is used as data to make a comparison with the calculated results. The unit of time is days. The WOM distribution in time is found to be very close to the revenue distribution in time. Calculations for the Japanese motion picture market based on the mathematical model agree well with the actual revenue distribution in time.
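As a heavily simplified illustration (not the authors' exact equations), one can integrate a mean-field "purchase intention" variable driven by an advertisement schedule plus direct and indirect word-of-mouth terms; all coefficients and the two-week ad schedule below are invented.

```python
import numpy as np

days = 120
A = np.zeros(days)
A[:14] = 1.0                 # assumed two-week advertising campaign

c_adv, d_wom, p_wom, decay = 0.05, 0.08, 0.02, 0.12
I = np.zeros(days)           # mean-field intention; unit of time is one day
for t in range(1, days):
    dI = (c_adv * A[t-1]            # advertisement input
          + d_wom * I[t-1]          # direct word-of-mouth (~I)
          + p_wom * I[t-1]**2       # indirect word-of-mouth (~I^2)
          - decay * I[t-1])         # forgetting
    I[t] = max(I[t-1] + dI, 0.0)

print("peak intention on day", int(np.argmax(I)))
```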
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com; Suprijadi; Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jalan Ganesha 10, Bandung 40132
Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study investigated the computation time of x-ray imaging simulation on a GPU (graphics processing unit) compared to a standard CPU (central processing unit). Furthermore, the effect of physical parameters on the quality of radiographic images, and a comparison of the image quality resulting from simulation on the GPU and CPU, are evaluated in this paper. The simulations were run on a CPU in serial, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single core of the CPU. Another result shows that optimum image quality was obtained with histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, S. L.
1998-08-25
Fluid Catalytic Cracking (FCC) technology is the most important process used by the refinery industry to convert crude oil to valuable lighter products such as gasoline. Process development is generally very time consuming, especially when a small pilot unit is being scaled up to a large commercial unit, because of the lack of information to aid in the design of scaled-up units. Such information can now be obtained by analysis based on pilot-scale measurements and computer simulation that includes the controlling physics of the FCC system. A computational fluid dynamics (CFD) code, ICRKFLO, has been developed at Argonne National Laboratory (ANL) and has been successfully applied to the simulation of catalytic petroleum cracking risers. It employs hybrid hydrodynamic-chemical kinetic coupling techniques, enabling the analysis of an FCC unit with complex chemical reaction sets containing tens or hundreds of subspecies. The code has been continuously validated against pilot-scale experimental data. It is now being used to investigate the effects of scaling up FCC units. Among FCC operating conditions, the feed injection conditions are found to have a strong impact on the product yields of scaled-up FCC units. The feed injection conditions appear to affect flow and heat transfer patterns, and the interaction of hydrodynamics and cracking kinetics causes the product yields to change accordingly.
Seeing the forest for the trees: Networked workstations as a parallel processing computer
NASA Technical Reports Server (NTRS)
Breen, J. O.; Meleedy, D. M.
1992-01-01
Unlike traditional 'serial' processing computers, in which one central processing unit performs one instruction at a time, parallel processing computers contain several processing units, thereby performing several instructions at once. Many of today's fastest supercomputers achieve their speed by employing thousands of processing elements working in parallel. Few institutions can afford these state-of-the-art parallel processors, but many already have the makings of a modest parallel processing system. Workstations on existing high-speed networks can be harnessed as nodes in a parallel processing environment, bringing the benefits of parallel processing to many. While such a system cannot rival the industry's latest machines, many common tasks can be accelerated greatly by spreading the processing burden and exploiting idle network resources. We study several aspects of this approach, from algorithms to select nodes to speed gains in specific tasks. With ever-increasing volumes of astronomical data, it becomes all the more necessary to utilize our computing resources fully.
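The scatter/gather pattern the abstract describes can be sketched as follows, using local worker processes as a stand-in for workstations on a network; the per-node task (summing squares of a chunk) is a toy example.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def process_chunk(chunk):
    return float(np.sum(chunk * chunk))   # placeholder per-node workload

if __name__ == "__main__":
    data = np.arange(10_000_000, dtype=np.float64)
    chunks = np.array_split(data, 8)      # one chunk per "node"
    with ProcessPoolExecutor(max_workers=8) as pool:
        partials = list(pool.map(process_chunk, chunks))  # scatter, then gather
    print("total:", sum(partials))
```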
Image processing applications: From particle physics to society
NASA Astrophysics Data System (ADS)
Sotiropoulou, C.-L.; Luciano, P.; Gkaitatzis, S.; Citraro, S.; Giannetti, P.; Dell'Orso, M.
2017-01-01
We present an embedded system for extremely efficient real-time pattern recognition execution, enabling technological advancements with both scientific and social impact. It is a compact, fast, low consumption processing unit (PU) based on a combination of Field Programmable Gate Arrays (FPGAs) and the full custom associative memory chip. The PU has been developed for real time tracking in particle physics experiments, but delivers flexible features for potential application in a wide range of fields. It has been proposed to be used in accelerated pattern matching execution for Magnetic Resonance Fingerprinting (biomedical applications), in real time detection of space debris trails in astronomical images (space applications) and in brain emulation for image processing (cognitive image processing). We illustrate the potentiality of the PU for the new applications.
PID Controller Settings Based on a Transient Response Experiment
ERIC Educational Resources Information Center
Silva, Carlos M.; Lito, Patricia F.; Neves, Patricia S.; Da Silva, Francisco A.
2008-01-01
An experimental work on controller tuning for chemical engineering undergraduate students is proposed using a small heat exchange unit. Based upon process reaction curves in open-loop configuration, system gain and time constant are determined for first order model with time delay with excellent accuracy. Afterwards students calculate PID…
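A small sketch of the tuning step the students perform, assuming the classic Ziegler-Nichols open-loop (reaction-curve) rules for a first-order-plus-dead-time model; the numeric parameters are invented, not the heat exchanger's.

```python
def zn_pid_from_fopdt(K, tau, theta):
    """Ziegler-Nichols open-loop (reaction-curve) PID settings for a
    first-order-plus-dead-time model with gain K, time constant tau,
    and dead time theta."""
    Kc = 1.2 * tau / (K * theta)   # controller gain
    Ti = 2.0 * theta               # integral time
    Td = 0.5 * theta               # derivative time
    return Kc, Ti, Td

# Illustrative numbers (seconds), not the actual heat exchange unit:
Kc, Ti, Td = zn_pid_from_fopdt(K=2.0, tau=120.0, theta=15.0)
print(f"Kc={Kc:.2f}, Ti={Ti:.0f} s, Td={Td:.1f} s")
```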
14 CFR § 1300.16 - Application process.
Code of Federal Regulations, 2014 CFR
2014-01-01
... aviation system in the United States and that credit is not reasonably available at the time of the ... § 1300.16 Aeronautics and Space; AIR TRANSPORTATION SYSTEM STABILIZATION; OFFICE OF MANAGEMENT AND BUDGET ... applications to the Board any time after October 12, 2001 through June 28, 2002. All applications must be ...
Intensity/time profiles of solar particle events at one astronomical unit
NASA Technical Reports Server (NTRS)
Shea, M. A.
1988-01-01
A description of the intensity-time profiles of solar proton events observed at the orbit of the earth is presented. The discussion, which includes descriptive figures, presents a general overview of the subject without the detailed mathematical description of the physical processes which usually accompany most reviews.
NASA Astrophysics Data System (ADS)
Galves, A.; Löcherbach, E.
2013-06-01
We consider a new class of non-Markovian processes with a countable number of interacting components. At each time unit, each component can take two values, indicating whether or not it has a spike at that precise moment. The system evolves as follows. For each component, the probability of having a spike at the next time unit depends on the entire time evolution of the system after the last spike time of the component. This class of systems extends in a non-trivial way both the interacting particle systems, which are Markovian (Spitzer in Adv. Math. 5:246-290, 1970), and the stochastic chains with memory of variable length, which have finite state space (Rissanen in IEEE Trans. Inf. Theory 29(5):656-664, 1983). These features make it suitable for describing the time evolution of biological neural systems. We construct a stationary version of the process by using a probabilistic tool which is a Kalikow-type decomposition either in random environment or in space-time. This construction implies uniqueness of the stationary process. Finally we consider the case where the interactions between components are given by a critical directed Erdös-Rényi-type random graph with a large but finite number of components. In this framework we obtain an explicit upper bound for the correlation between successive inter-spike intervals, which is compatible with previous empirical findings.
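A rough simulation in the spirit of this model (not the paper's construction): each binary component spikes with a probability driven by the input accumulated since its own last spike, so the memory has variable length; the graph, weights, and logistic link are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 50, 200
W = (rng.random((N, N)) < 0.1) * rng.normal(0.8, 0.2, size=(N, N))
np.fill_diagonal(W, 0.0)                  # no self-coupling

acc = np.zeros(N)                         # input accumulated since each unit's last spike
spikes = np.zeros((T, N), dtype=int)
for t in range(T):
    p = 1.0 / (1.0 + np.exp(-(acc - 2.0)))       # spike probability from accumulated input
    s = (rng.random(N) < p).astype(int)
    spikes[t] = s
    acc = np.where(s == 1, 0.0, acc + W @ s)     # spiking resets a unit's memory

print("mean firing rate:", spikes.mean())
```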
Input-output characterization of an ultrasonic testing system by digital signal analysis
NASA Technical Reports Server (NTRS)
Williams, J. H., Jr.; Lee, S. S.; Karagulle, H.
1986-01-01
Ultrasonic test system input-output characteristics were investigated by directly coupling the transmitting and receiving transducers face to face without a test specimen. Some of the fundamentals of digital signal processing were summarized. Input and output signals were digitized by using a digital oscilloscope, and the digitized data were processed in a microcomputer by using digital signal-processing techniques. The continuous-time test system was modeled as a discrete-time, linear, shift-invariant system. In estimating the unit-sample response and frequency response of the discrete-time system, it was necessary to use digital filtering to remove low-amplitude noise, which interfered with deconvolution calculations. A digital bandpass filter constructed with the assistance of a Blackman window and a rectangular time window were used. Approximations of the impulse response and the frequency response of the continuous-time test system were obtained by linearly interpolating the defining points of the unit-sample response and the frequency response of the discrete-time system. The test system behaved as a linear-phase bandpass filter in the frequency range 0.6 to 2.3 MHz. These frequencies were selected in accordance with the criterion that they were 6 dB below the maximum peak of the amplitude of the frequency response. The output of the system to various inputs was predicted and the results were compared with the corresponding measurements on the system.
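The filtering step can be sketched with standard tools, assuming a Blackman-windowed FIR bandpass over the 0.6 to 2.3 MHz band reported above; the sampling rate and test signal below are invented, not the original oscilloscope data.

```python
import numpy as np
from scipy.signal import firwin, lfilter, freqz

fs = 25e6                                      # assumed sampling rate (Hz)
taps = firwin(201, [0.6e6, 2.3e6], pass_zero=False,
              window="blackman", fs=fs)        # Blackman-windowed bandpass FIR

rng = np.random.default_rng(0)
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 1.5e6 * t) + 0.3 * rng.normal(size=t.size)
y = lfilter(taps, 1.0, x)                      # suppress out-of-band noise

w, h = freqz(taps, fs=fs)                      # filter's frequency response
print("gain at 1.5 MHz ~", round(abs(h[np.argmin(abs(w - 1.5e6))]), 3))
```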
Rapid Parallel Calculation of shell Element Based On GPU
NASA Astrophysics Data System (ADS)
Wang, Jian Hua; Li, Guang Yao; Li, Sheng; Li, Guang Yao
2010-06-01
Long computing times have bottlenecked the application of the finite element method. In this paper, an effective method to speed up FEM calculations using modern graphics processing units and programmable color rendering tools is put forward: it devises a representation of element information in accordance with the features of the GPU, converts all element calculations into a rendering process, carries out the simulated calculation of internal forces for all elements, and overcomes the low degree of parallelism that previously limited such calculations on a single computer. Studies show that this method can improve efficiency and greatly shorten calculation time. The results of simulation calculations for the elasticity problem of a large number of cells in sheet metal prove that GPU-based parallel simulation is faster than the CPU-based equivalent. This is a useful and efficient way to solve engineering problems.
Van Bogaert, Peter; Peremans, Lieve; Diltour, Nadine; Van heusden, Danny; Dilles, Tinne; Van Rompaey, Bart; Havens, Donna Sullivan
2016-01-01
The aim of the study reported in this article was to investigate staff nurses' perceptions and experiences of structural empowerment and the extent to which structural empowerment supports safe, quality patient care. To address the complex needs of patients, staff nurse involvement in clinical and organizational decision-making processes within interdisciplinary care settings is crucial. A qualitative study was conducted using individual semi-structured interviews of 11 staff nurses assigned to medical or surgical units in a 600-bed university hospital in Belgium. During the study period, the hospital was going through an organizational transformation process to move from a classic hierarchical and departmental organizational structure to one that was flat and interdisciplinary. Staff nurses reported experiencing structural empowerment, and they were willing to be involved in decision-making processes, primarily about patient care within the context of their practice unit. However, participants were not always fully aware of the challenges and the effect of empowerment on their daily practice, the quality of care and patient safety. Ongoing hospital change initiatives supported staff nurses' involvement in decision-making processes for certain matters, but for some decisions a classic hierarchical and departmental process still remained. Nurses perceived relatively high work demands and at times viewed empowerment as presenting additional demands. Staff nurses recognized the opportunities structural empowerment provided within their daily practice. Nurse managers and unit climate were seen as crucial for success, while lack of time and perceived work demands were viewed as barriers to empowerment. PMID:27035457
A New Parallel Approach for Accelerating the GPU-Based Execution of Edge Detection Algorithms
Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein
2017-01-01
Real-time image processing is used in a wide variety of applications, such as those in medical care and industrial processes. In medical care, this technique makes it possible to display important patient information graphically, which can supplement and help the treatment process. Medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphics processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one of the ways of achieving real-time image processing. Edge detection is an early stage in most image processing methods for the extraction of features and object segments from a raw image. The Canny method, Sobel and Prewitt filters, and the Roberts' Cross technique are some examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified further to run in a fully parallel manner, achieved by replacing the breadth-first search procedure with a parallel method. These algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on GPU using the CUDA platform improves the speed of execution by 2–100× compared to the central processing unit-based implementation using the OpenCV and MATLAB platforms. PMID:28487831
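A sketch of the CPU/GPU comparison using OpenCV: the cv2.cuda branch requires an OpenCV build with CUDA support (often absent from standard pip wheels), so treat it as illustrative rather than a drop-in benchmark; the input image is synthetic.

```python
import time
import numpy as np
import cv2

img = np.random.default_rng(0).integers(0, 256, (2048, 2048), np.uint8)

t0 = time.perf_counter()
edges_cpu = cv2.Canny(img, 50, 150)            # CPU baseline
cpu_t = time.perf_counter() - t0

if cv2.cuda.getCudaEnabledDeviceCount() > 0:   # GPU path, CUDA build only
    gpu_img = cv2.cuda_GpuMat()
    gpu_img.upload(img)
    canny = cv2.cuda.createCannyEdgeDetector(50, 150)
    t0 = time.perf_counter()
    edges_gpu = canny.detect(gpu_img).download()
    print(f"CPU {cpu_t*1e3:.1f} ms, GPU {(time.perf_counter()-t0)*1e3:.1f} ms")
else:
    print(f"CPU {cpu_t*1e3:.1f} ms (no CUDA device/build available)")
```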
Reasons for exclusion of 6820 umbilical cord blood donations in a public cord blood bank.
Wang, Tso-Fu; Wen, Shu-Hui; Yang, Kuo-Liang; Yang, Shang-Hsien; Yang, Yun-Fan; Chang, Chu-Yu; Wu, Yi-Feng; Chen, Shu-Huey
2014-01-01
To provide information for umbilical cord blood (UCB) banks to adopt optimal collection strategies and to make UCB banks operate efficiently, we investigated the reasons for exclusion of UCB units over a 3-year recruitment period. We analyzed records of the reasons for exclusion of potential UCB donations from 2004 to 2006 in the Tzu-Chi Cord Blood Bank and compared the results over the 3 years. We grouped these reasons for exclusion into five phases (before collection, during delivery, before processing, during processing, and after freezing) according to the time sequence and analyzed the reasons at each phase. Between 2004 and 2006, there were 10,685 deliveries with the intention of UCB donation. In total, 41.2% of the UCB units were considered eligible for transplantation. The exclusion rates were 93.1, 48.4, and 54.1% in 2004, 2005, and 2006, respectively. We excluded 612 donations from women before childbirth, 133 UCB units during delivery, 80 units before processing, 5010 units during processing, and 421 units after freezing. There were 24 UCB units with unknown reasons for ineligibility. Low UCB weight and low cell count were the two leading causes of exclusion (48.6 and 30.9%). The prevalence of artificial errors, holiday or transportation problems, low weight, and infant problems decreased year after year. The exclusion rate was high at the beginning of our study, as in previous studies. Understanding the reasons for UCB exclusion may help to improve the efficiency of UCB banking programs in the future. © 2013 American Association of Blood Banks.
NASA Technical Reports Server (NTRS)
Chawner, David M.; Gomez, Ray J.
2010-01-01
In the Applied Aerosciences and CFD branch at Johnson Space Center, computational simulations are run that face many challenges, two of which are the ability to customize software for specialized needs and the need to run simulations as fast as possible. Many different tools are used for running these simulations, and each one has its own pros and cons. Once these simulations are run, software is needed that is capable of visualizing the results in an appealing manner. Some of this software is open source, meaning that anyone can edit the source code and distribute the modifications to all other users in a future release. This is very useful, especially in this branch where many different tools are being used. File readers can be written to load any file format into a program, to ease the bridging from one tool to another. Programming such a reader requires knowledge of the file format being read as well as the equations necessary to obtain the derived values after loading. When running these CFD simulations, extremely large files are loaded and values are calculated. These simulations usually take a few hours to complete, even on the fastest machines. Graphics processing units (GPUs) are traditionally used to render a computer's graphics; however, in recent years, GPUs have been used for more general applications because of the speed of these processors. Applications run on GPUs have been known to run up to forty times faster than they would on normal central processing units (CPUs). If these CFD programs were extended to run on GPUs, they would require much less time to complete. This would allow more simulations to be run in the same amount of time and possibly allow more complex computations to be performed.
NASA Astrophysics Data System (ADS)
Mallakpour, Iman; Villarini, Gabriele; Jones, Michael P.; Smith, James A.
2017-08-01
The central United States is plagued by frequent catastrophic flooding, such as the flood events of 1993, 2008, 2011, 2013, 2014 and 2016. The goal of this study is to examine whether it is possible to describe the occurrence of flood and heavy precipitation events at the sub-seasonal scale in terms of variations in the climate system. Daily streamflow and precipitation time series over the central United States (defined here to include North Dakota, South Dakota, Nebraska, Kansas, Missouri, Iowa, Minnesota, Wisconsin, Illinois, West Virginia, Kentucky, Ohio, Indiana, and Michigan) are used in this study. We model the occurrence/non-occurrence of a flood and heavy precipitation event over time using regression models based on Cox processes, which can be viewed as a generalization of Poisson processes. Rather than assuming that an event (i.e., flooding or precipitation) occurs independently of the occurrence of the previous one (as in Poisson processes), Cox processes allow us to account for the potential presence of temporal clustering, which manifests itself in an alternation of quiet and active periods. Here we model the occurrence/non-occurrence of flood and heavy precipitation events using two climate indices as time-varying covariates: the Arctic Oscillation (AO) and the Pacific-North American pattern (PNA). We find that AO and/or PNA are important predictors in explaining the temporal clustering in flood occurrences in over 78% of the stream gages we considered. Similar results are obtained when working with heavy precipitation events. Analyses of the sensitivity of the results to different thresholds used to identify events lead to the same conclusions. The findings of this work highlight that variations in the climate system play a critical role in explaining the occurrence of flood and heavy precipitation events at the sub-seasonal scale over the central United States.
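As a simplified stand-in for the Cox-process regression (not the authors' model), the daily occurrence/non-occurrence of an event can be modeled as a Bernoulli GLM whose rate depends on time-varying climate covariates; the AO/PNA series and coefficients below are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_days = 3000
ao, pna = rng.normal(size=n_days), rng.normal(size=n_days)
logit = -3.0 + 0.6 * ao - 0.4 * pna                 # assumed "true" occurrence rates
y = (rng.random(n_days) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([ao, pna]))     # [intercept, AO, PNA]
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(fit.params)                                   # should be near (-3.0, 0.6, -0.4)
```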
Zhu, Xiang; Zhang, Dianwen
2013-01-01
We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, which is implemented on a graphics processing unit for high-performance, scalable, parallel model-fitting. GPU-LMFit can provide a dramatic speed-up in massive model-fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit for applications in super-resolution localization microscopy and fluorescence lifetime imaging microscopy. PMID:24130785
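For context, here is a single CPU-side Levenberg-Marquardt fit of the kind GPU-LMFit runs per pixel, fitting a one-exponential lifetime decay with SciPy; the data and model are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 64)                 # assumed time axis (e.g. ns)
y = 100.0 * np.exp(-t / 2.5) + rng.normal(0.0, 2.0, t.size)   # synthetic decay

def residuals(p):
    amp, tau = p
    return amp * np.exp(-t / tau) - y

fit = least_squares(residuals, x0=[50.0, 1.0], method="lm")   # Levenberg-Marquardt
print("amp, tau =", fit.x)                     # should recover ~(100, 2.5)
```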
NASA Astrophysics Data System (ADS)
O'Connor, A. S.; Justice, B.; Harris, A. T.
2013-12-01
Graphics Processing Units (GPUs) are high-performance multiple-core processors capable of very high computational speeds and large data throughput. Modern GPUs are inexpensive and widely available commercially. These are general-purpose parallel processors with support for a variety of programming interfaces, including industry-standard languages such as C. GPU implementations of algorithms that are well suited for parallel processing can often achieve speedups of several orders of magnitude over optimized CPU codes. Significant improvements in speed for imagery orthorectification, atmospheric correction, target detection and image transformations like Independent Components Analysis (ICA) have been achieved using GPU-based implementations. Additional optimizations, when factored in with GPU processing capabilities, can provide a 50x-100x reduction in the time required to process large imagery. Exelis Visual Information Solutions (VIS) has implemented a CUDA-based GPU processing framework for accelerating ENVI and IDL processes that can best take advantage of parallelization. Testing Exelis VIS has performed shows that orthorectification can take as long as two hours for a WorldView-1 35,000 x 35,000 pixel image. With GPU orthorectification, the same process takes three minutes. By speeding up image processing, imagery can be used successfully by first responders and by scientists making rapid discoveries with near-real-time data, and it provides an operational component to data centers needing to quickly process and disseminate data.
Retrofit concept for small safety related stationary machines
NASA Astrophysics Data System (ADS)
Epple, S.; Jalba, C. K.; Muminovic, A.; Jung, R.
2017-05-01
More and more old machines face the problem that the life cycle of their control electronics comes to its intended end, while the mechanics and process capability are still in very good condition. This article shows the example of a reactive ion etcher originally built in 1988, which was refitted with a new control concept. The original control unit was repaired several times based on the manufacturer's obsolescence management. At the start of the retrofit project, the integrated circuits were no longer available for further repair of the original control unit. The safety, repeatability and stability of the process were greatly improved.
Needleman, Jack; Pearson, Marjorie L; Upenieks, Valda V; Yee, Tracy; Wolstein, Joelle; Parkerton, Melissa
2016-02-01
Process improvement stresses the importance of engaging frontline staff in implementing new processes and methods. Yet questions remain on how to incorporate these activities into the workday of hospital staff and how to create and maintain their commitment. In a 15-month American Organization of Nurse Executives collaborative involving frontline medical/surgical staff from 67 hospitals, Transforming Care at the Bedside (TCAB) was evaluated to assess whether participating units successfully implemented recommended change processes, engaged staff, implemented innovations, and generated support from hospital leadership and staff. In a mixed-methods analysis, multiple data sources were used, including leader surveys, unit staff surveys, administrative data, time study data, and collaborative documents. All units reported establishing unit-based teams, of which >90% succeeded in conducting tests of change, with unit staff selecting topics and making decisions on adoption. Fifty-five percent of unit staff reported participating in unit meetings, and 64% in tests of change. Unit managers reported a substantial increase in staff support for the initiative. An average of 36 tests of change were conducted per unit, with 46% of tested innovations sustained and 20% spread to other units. Some 95% of managers and 97% of chief nursing officers believed that the program had made unit staff more likely to initiate change. Among staff, 83% would encourage adoption of the initiative. Given the strong positive assessment of TCAB, evidence of substantial engagement of staff in the work, and the high volume of innovations tested, implemented, and sustained, TCAB appears to be a productive model for organizing and implementing a program of frontline-led improvement.
Economic and workflow analysis of a blood bank automated system.
Shin, Kyung-Hwa; Kim, Hyung Hoi; Chang, Chulhun L; Lee, Eun Yup
2013-07-01
This study compared the estimated costs and times required for ABO/Rh(D) typing and unexpected antibody screening using an automated system and manual methods. The total cost included direct and labor costs. Labor costs were calculated on the basis of average operator salaries and unit values (in minutes), defined as the hands-on time required to test one sample. To estimate unit values, workflows were recorded on video, and the time required for each process was analyzed separately. The unit values of ABO/Rh(D) typing using the manual method were 5.65 and 8.1 min during regular and unsocial working hours, respectively. The unit value was less than 3.5 min when several samples were tested simultaneously. The unit value for unexpected antibody screening was 2.6 min. The unit values using the automated method for ABO/Rh(D) typing, unexpected antibody screening, and both simultaneously were all 1.5 min. The total cost of ABO/Rh(D) typing of a single sample using the automated analyzer was lower than that of testing a single sample using the manual technique, but higher than that of testing several samples simultaneously. The total cost of unexpected antibody screening using the automated analyzer was less than that using the manual method. ABO/Rh(D) typing using an automated analyzer incurs a lower unit value and cost than the manual technique when only one sample is tested at a time. Unexpected antibody screening using an automated analyzer always incurs a lower unit value and cost than the manual technique.
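A worked toy version of the costing method: total cost per test = direct cost + (salary per minute) x (unit value in minutes). The unit values follow the abstract; the salary and direct-cost figures are invented.

```python
def cost_per_test(direct_cost, salary_per_min, unit_value_min):
    """Total cost of one test: direct cost plus hands-on labor cost."""
    return direct_cost + salary_per_min * unit_value_min

salary = 0.5                        # currency units per minute (assumed)
manual_single  = cost_per_test(1.0, salary, 5.65)  # manual, one sample at a time
manual_batched = cost_per_test(1.0, salary, 3.5)   # manual, several samples at once
automated      = cost_per_test(1.8, salary, 1.5)   # automated analyzer

for name, c in [("manual, single", manual_single),
                ("manual, batched", manual_batched),
                ("automated", automated)]:
    print(f"{name}: {c:.2f} per ABO/Rh(D) typing")
```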
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-06
... proposes to establish new minimum performance standards for specialist units.\12\ Specifically, new Rule ... Streamline the Process for Specialist Evaluations and Clarify the Time Within Which SQTs and RSQTs Must Begin ... -4 thereunder,\2\ a proposed rule change to update and streamline the process for specialist ...
ERIC Educational Resources Information Center
Schmidtke, Daniel; Matsuki, Kazunaga; Kuperman, Victor
2017-01-01
The current study addresses a discrepancy in the psycholinguistic literature about the chronology of information processing during the visual recognition of morphologically complex words. "Form-then-meaning" accounts of complex word recognition claim that morphemes are processed as units of form prior to any influence of their meanings,…
A dual-channel fusion system of visual and infrared images based on color transfer
NASA Astrophysics Data System (ADS)
Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong
2013-09-01
The increasing availability and deployment of imaging sensors operating in multiple spectra has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not well adapted to human vision. Transferring color from a daytime reference image to obtain a natural-color fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet the requirements of real-time image processing. We developed a dual-channel infrared and visual image fusion system based on a TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit and an image fusion output unit. The registration of the dual-channel images is realized by combining hardware and software methods. A false-color image fusion algorithm in RGB color space is used to obtain an R-G fused image, and the system then chooses a reference image to transfer color to the fusion result. A color lookup table based on the statistical properties of images is proposed to reduce the computational complexity of color transfer. The mapping calculation between the standard lookup table and the improved color lookup table is simple and is performed only once for a fixed scene. Real-time fusion and natural colorization of infrared and visual images are realized by this system. The experimental results show that the color-transferred images appear natural to human eyes and can highlight targets effectively with clear background details. Human observers using this system will be able to interpret the image better and faster, thereby improving situational awareness and reducing target detection time.
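The color-transfer step can be sketched as channel-wise mean/standard-deviation matching in Lab space (Reinhard-style statistics transfer); the original system bakes this into a lookup table for speed, and the file names below are placeholders.

```python
import cv2
import numpy as np

def transfer_color(src_bgr, ref_bgr):
    """Match each Lab channel's mean/std of src to those of ref."""
    src = cv2.cvtColor(src_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    for c in range(3):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-6
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        src[..., c] = (src[..., c] - s_mu) * (r_sd / s_sd) + r_mu
    return cv2.cvtColor(np.clip(src, 0, 255).astype(np.uint8),
                        cv2.COLOR_LAB2BGR)

# Hypothetical usage:
# fused = transfer_color(cv2.imread("fused_rg.png"), cv2.imread("day_ref.png"))
```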
G.H. Reeves; L.E. Benda; K.M. Burnett; P.A. Bisson; J.R. Sedell
1995-01-01
To preserve and recover evolutionarily significant units (ESUs) of anadromous salmonids Oncorhynchus spp. in the Pacific Northwest, long-term and short-term ecological processes that create and maintain freshwater habitats must be restored and protected. Aquatic ecosystems throughout the region are dynamic in space and time, and lack of...
Toward inventory-based estimates of soil organic carbon in forests of the United States
G.M. Domke; C.H. Perry; B.F. Walters; L.E. Nave; C.W. Woodall; C.W. Swanston
2017-01-01
Soil organic carbon (SOC) is the largest terrestrial carbon (C) sink on Earth; this pool plays a critical role in ecosystem processes and climate change. Given the cost and time required to measure SOC, and particularly changes in SOC, many signatory nations to the United Nations Framework Convention on Climate Change report estimates of SOC stocks and stock changes...
A real-time GNSS-R system based on software-defined radio and graphics processing units
NASA Astrophysics Data System (ADS)
Hobiger, Thomas; Amagai, Jun; Aida, Masanori; Narita, Hideki
2012-04-01
Reflected signals of the Global Navigation Satellite System (GNSS) from the sea or land surface can be utilized to deduce and monitor physical and geophysical parameters of the reflecting area. Unlike most other remote sensing techniques, GNSS-Reflectometry (GNSS-R) operates as a passive radar that takes advantage of the increasing number of navigation satellites that broadcast their L-band signals. To date, most GNSS-R receiver architectures have been based on dedicated hardware solutions. Software-defined radio (SDR) technology has advanced in recent years and enabled signal processing in real time, which makes it an ideal candidate for the realization of a flexible GNSS-R system. Additionally, modern commodity graphics cards, which offer massively parallel computing performance, allow the whole signal processing chain to be handled without interfering with the PC's CPU. Thus, this paper describes a GNSS-R system that has been developed on the principles of software-defined radio supported by General Purpose Graphics Processing Units (GPGPUs), and presents results from initial field tests which confirm the anticipated capability of the system.
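The core correlation that such a system accelerates can be sketched with FFTs: a circular cross-correlation of the received signal against a replica code evaluates all delays at once; the code and signal below are synthetic stand-ins, not real GNSS data.

```python
import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=1023)        # stand-in PRN chip sequence
true_delay = 137
rx = np.roll(code, true_delay) + 0.5 * rng.normal(size=code.size)

# Circular cross-correlation via FFT: corr = IFFT(FFT(rx) * conj(FFT(code)))
corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(code)))
print("estimated delay:", int(np.argmax(np.abs(corr))))   # expect 137
```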
Recycling agroindustrial waste by lactic fermentations: coffee pulp silage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrizales, V.; Ferrer, J.
1985-04-03
This UNIDO publication on lactic acid fermentation of coffee pulp for feed production covers (1) a process which can be adapted to existing coffee processing plants for drying the product once harvesting time has finished; (2) the unit operations involved: pressing (optional), silaging, liming and drying; and (3) experiments, results and discussion, bibliography, process statistics, and diagrams. Additional references: storage, biotechnology, lime, agricultural wastes, recycling, waste utilization.
Kang, Junsu; Lee, Donghyeon; Heo, Young Jin; Chung, Wan Kyun
2017-11-07
For highly-integrated microfluidic systems, an actuation system is necessary to control the flow; however, the bulk of actuation devices including pumps or valves has impeded the broad application of integrated microfluidic systems. Here, we suggest a microfluidic process control method based on built-in microfluidic circuits. The circuit is composed of a fluidic timer circuit and a pneumatic logic circuit. The fluidic timer circuit is a serial connection of modularized timer units, which sequentially pass high pressure to the pneumatic logic circuit. The pneumatic logic circuit is a NOR gate array designed to control the liquid-controlling process. By using the timer circuit as a built-in signal generator, multi-step processes could be done totally inside the microchip without any external controller. The timer circuit uses only two valves per unit, and the number of process steps can be extended without limitation by adding timer units. As a demonstration, an automation chip has been designed for a six-step droplet treatment, which entails 1) loading, 2) separation, 3) reagent injection, 4) incubation, 5) clearing and 6) unloading. Each process was successfully performed for a pre-defined step-time without any external control device.
Fodi, Tamas; Didaskalou, Christos; Kupai, Jozsef; Balogh, Gyorgy T; Huszthy, Peter; Szekely, Gyorgy
2017-09-11
Solvent usage in the pharmaceutical sector accounts for as much as 90 % of the overall mass during manufacturing processes. Consequently, solvent consumption poses significant costs and environmental burdens. Continuous processing, in particular continuous-flow reactors, has great potential for the sustainable production of pharmaceuticals, but subsequent downstream processing remains challenging. Separation processes for concentrating and purifying chemicals can account for as much as 80 % of the total manufacturing costs. In this work, a nanofiltration unit was coupled to a continuous-flow reactor for in situ solvent and reagent recycling. The nanofiltration unit is straightforward to implement and simple to control during continuous operation. The hybrid process operated continuously over six weeks, recycling about 90 % of the solvent and reagent. Consequently, the E-factor and the carbon footprint were reduced by 91 % and 19 %, respectively. Moreover, the nanofiltration unit led to a solution of the product eleven times more concentrated than the reaction mixture and increased the purity from 52.4 % to 91.5 %. The boundaries for process conditions were investigated to facilitate implementation of the methodology by the pharmaceutical sector. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
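As a toy version of the cited metric, the E-factor is total waste mass per unit mass of product, so recycling roughly 90% of the solvent and reagent shrinks it sharply; all masses below are invented for illustration.

```python
def e_factor(total_input_kg, product_kg):
    """E-factor: kg of waste generated per kg of product."""
    return (total_input_kg - product_kg) / product_kg

product = 1.0                                   # kg of product (assumed)
solvent, reagent, other = 50.0, 3.0, 1.0        # kg of inputs (assumed)
before = e_factor(product + solvent + reagent + other, product)
after = e_factor(product + 0.1 * solvent + 0.1 * reagent + other, product)
print(f"E-factor: {before:.0f} -> {after:.1f} "
      f"({(1 - after / before) * 100:.0f}% reduction)")
```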
Nolte, Kurt B; Stewart, Douglas M; O'Hair, Kevin C; Gannon, William L; Briggs, Michael S; Barron, A Marie; Pointer, Judy; Larson, Richard S
2008-10-01
The authors developed a novel continuous quality improvement (CQI) process for academic biomedical research compliance administration. A challenge in developing a quality improvement program in a nonbusiness environment is that the terminology and processes are often foreign. Rather than training staff in an existing quality improvement process, the authors opted to develop a novel process based on the scientific method--a paradigm familiar to all team members. The CQI process included our research compliance units. Unit leaders identified problems in compliance administration where a resolution would have a positive impact and which could be resolved or improved with current resources. They then generated testable hypotheses about a change to standard practice expected to improve the problem, and they developed methods and metrics to assess the impact of the change. The CQI process was managed in a "peer review" environment. The program included processes to reduce the incidence of infections in animal colonies, decrease research protocol-approval times, improve compliance and protection of animal and human research subjects, and improve research protocol quality. This novel CQI approach is well suited to the needs and the unique processes of research compliance administration. Using the scientific method as the improvement paradigm fostered acceptance of the project by unit leaders and facilitated the development of specific improvement projects. These quality initiatives will allow us to improve support for investigators while ensuring that compliance standards continue to be met. We believe that our CQI process can readily be used in other academically based offices of research.
Relvas, Gláubia Rocha Barbosa; Buccini, Gabriela Dos Santos; Venancio, Sonia Isoyama
2018-06-08
To analyze the prevalence of ultra-processed food intake among children under one year of age and to identify associated factors. A cross-sectional design was employed. We interviewed 198 mothers of children aged between 6 and 12 months in primary healthcare units located in a city of the metropolitan region of São Paulo, Brazil. Specific foods consumed in the 24 h preceding the interview were considered to evaluate the consumption of ultra-processed foods. Variables related to mothers' and children's characteristics as well as primary healthcare units were grouped into three blocks of increasingly proximal influence on the outcome. A Poisson regression analysis was performed following a statistical hierarchical modeling to determine factors associated with ultra-processed food intake. The prevalence of ultra-processed food intake was 43.1%. Infants that were not being breastfed had a higher prevalence of ultra-processed food intake, but the difference was not statistically significant. Lower maternal education (prevalence ratio 1.55 [1.08-2.24]) and the child's first appointment at the primary healthcare unit having happened after the first week of life (prevalence ratio 1.51 [1.01-2.27]) were factors associated with the consumption of ultra-processed foods. A high consumption of ultra-processed foods among children under one year of age was found. Both maternal socioeconomic status and the time until the child's first appointment at the primary healthcare unit were associated with the prevalence of ultra-processed food intake. Copyright © 2018 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
Reducing intraoperative red blood cell unit wastage in a large academic medical center.
Whitney, Gina M; Woods, Marcella C; France, Daniel J; Austin, Thomas M; Deegan, Robert J; Paroskie, Allison; Booth, Garrett S; Young, Pampee P; Dmochowski, Roger R; Sandberg, Warren S; Pilla, Michael A
2015-11-01
The wastage of red blood cell (RBC) units within the operative setting results in significant direct costs to health care organizations. Previous education-based efforts to reduce wastage were unsuccessful at our institution. We hypothesized that a quality and process improvement approach would result in sustained reductions in intraoperative RBC wastage in a large academic medical center. Utilizing a failure mode and effects analysis supplemented with time and temperature data, key drivers of perioperative RBC wastage were identified and targeted for process improvement. Multiple contributing factors, including improper storage and transport and lack of accurate, locally relevant RBC wastage event data were identified as significant contributors to ongoing intraoperative RBC unit wastage. Testing and implementation of improvements to the process of transport and storage of RBC units occurred in liver transplant and adult cardiac surgical areas due to their history of disproportionately high RBC wastage rates. Process interventions targeting local drivers of RBC wastage resulted in a significant reduction in RBC wastage (p < 0.0001; adjusted odds ratio, 0.24; 95% confidence interval, 0.15-0.39), despite an increase in operative case volume over the period of the study. Studied process interventions were then introduced incrementally in the remainder of the perioperative areas. These results show that a multidisciplinary team focused on the process of blood product ordering, transport, and storage was able to significantly reduce operative RBC wastage and its associated costs using quality and process improvement methods. © 2015 AABB.
NASA Astrophysics Data System (ADS)
Mallakpour, Iman; Villarini, Gabriele; Jones, Michael; Smith, James
2016-04-01
The central United States is a region of the country that has been plagued by frequent catastrophic flooding (e.g., the flood events of 1993, 2008, 2013, and 2014), with large economic and social repercussions (e.g., fatalities, agricultural losses, flood losses, water quality issues). The goal of this study is to examine whether it is possible to describe the occurrence of flood events at the sub-seasonal scale in terms of variations in the climate system. Daily streamflow time series from 774 USGS stream gage stations over the central United States (defined here to include North Dakota, South Dakota, Nebraska, Kansas, Missouri, Iowa, Minnesota, Wisconsin, Illinois, West Virginia, Kentucky, Ohio, Indiana, and Michigan) with a record of at least 50 years and ending no earlier than 2011 are used for this study. We use a peaks-over-threshold (POT) approach to identify flood peaks so that we have, on average, two events per year. We model the occurrence/non-occurrence of a flood event over time using regression models based on Cox processes. Cox processes are widely used in biostatistics and can be viewed as a generalization of Poisson processes. Rather than assuming that flood events occur independently of the occurrence of previous events (as in Poisson processes), Cox processes allow us to account for the potential presence of temporal clustering, which manifests itself in an alternation of quiet and active periods. Here we model the occurrence/non-occurrence of flood events using two climate indices as time-varying climate covariates: the North Atlantic Oscillation (NAO) and the Pacific-North American pattern (PNA). The results of this study show that NAO and/or PNA can explain the temporal clustering in flood occurrences at over 90% of the stream gage stations we considered. Analyses of the sensitivity of the results to different average numbers of flood events per year (from one to five) were also performed and lead to the same conclusions. The findings of this work highlight that variations in the climate system play a critical role in explaining the occurrence of flood events at the sub-seasonal scale over the central United States.
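A minimal sketch of the peaks-over-threshold step described above, assuming synthetic daily streamflow and an illustrative 7-day declustering window: the threshold is lowered until roughly the target average number of events per year survives declustering. The study's actual POT criteria and the Cox-regression step are not reproduced here.

```python
import numpy as np

def declustered_peaks(flow, threshold, separation):
    """Group exceedances separated by < `separation` days into one cluster
    and keep only the largest value in each cluster."""
    idx = np.where(flow > threshold)[0]
    peaks, prev = [], None
    for i in idx:
        if prev is None or i - prev >= separation:
            peaks.append(i)                 # start a new cluster
        elif flow[i] > flow[peaks[-1]]:
            peaks[-1] = i                   # larger peak within the cluster
        prev = i
    return peaks

def pot_events(flow, years, events_per_year=2.0, separation=7):
    """Lower the threshold until, after declustering, roughly the target
    average number of flood events per year is retained."""
    target = int(round(events_per_year * years))
    for q in np.linspace(0.999, 0.90, 200):   # sweep quantiles downward
        threshold = np.quantile(flow, q)
        peaks = declustered_peaks(flow, threshold, separation)
        if len(peaks) >= target:
            break
    return threshold, peaks

rng = np.random.default_rng(0)
daily_flow = rng.gamma(2.0, 50.0, size=50 * 365)    # 50 years, synthetic
thr, events = pot_events(daily_flow, years=50)
print(f"threshold={thr:.1f}, events kept={len(events)}")
```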
Improving Initiation and Tracking of Research Projects at an Academic Health Center: A Case Study.
Schmidt, Susanne; Goros, Martin; Parsons, Helen M; Saygin, Can; Wan, Hung-Da; Shireman, Paula K; Gelfond, Jonathan A L
2017-09-01
Research service cores at academic health centers are important in driving translational advancements. Specifically, biostatistics and research design units provide services and training in data analytics, biostatistics, and study design. However, the increasing demand for and complexity of assigning appropriate personnel to time-sensitive projects strain existing resources, potentially decreasing productivity and increasing costs. Improving processes for project initiation, assigning appropriate personnel, and tracking time-sensitive projects can eliminate bottlenecks and utilize resources more efficiently. In this case study, we describe our application of lean six sigma principles to our biostatistics unit to establish a systematic continual process improvement cycle for the intake, allocation, and tracking of research design and data analysis projects. The define, measure, analyze, improve, and control (DMAIC) methodology was used to guide the process improvement. Our goal was to assess and improve the efficiency and effectiveness of operations by objectively measuring outcomes, automating processes, and reducing bottlenecks. As a result, we developed a web-based dashboard application to capture, track, categorize, streamline, and automate project flow. Our workflow system resulted in improved transparency, efficiency, and workload allocation. Using the dashboard application, we reduced the average study intake time from 18 to 6 days, a 66.7% reduction, over 12 months (January to December 2015).
Wittmann, Marc
2011-01-01
It has been suggested that perception and action can be understood as evolving in temporal epochs or sequential processing units. Successive events are fused into units forming a unitary experience or “psychological present.” Studies have identified several temporal integration levels on different time scales which are fundamental for our understanding of behavior and subjective experience. In recent literature concerning the philosophy and neuroscience of consciousness these separate temporal processing levels are not always precisely distinguished. Therefore, empirical evidence from psychophysics and neuropsychology on these distinct temporal processing levels is presented and discussed within philosophical conceptualizations of time experience. On an elementary level, one can identify a functional moment, a basic temporal building block of perception in the range of milliseconds that defines simultaneity and succession. Below a certain threshold temporal order is not perceived, individual events are processed as co-temporal. On a second level, an experienced moment, which is based on temporal integration of up to a few seconds, has been reported in many qualitatively different experiments in perception and action. It has been suggested that this segmental processing mechanism creates temporal windows that provide a logistical basis for conscious representation and the experience of nowness. On a third level of integration, continuity of experience is enabled by working memory in the range of multiple seconds allowing the maintenance of cognitive operations and emotional feelings, leading to mental presence, a temporal window of an individual’s experienced presence. PMID:22022310
Parallel computing method for simulating hydrological processes of large rivers under climate change
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, Y.
2016-12-01
Climate change is one of the most widely recognized global environmental problems. It has altered the temporal and spatial distribution of watershed hydrological processes, especially in the world's large rivers. Watershed hydrological simulation based on physically based distributed hydrological models can give better results than lumped models. However, such simulation involves a large amount of computation, especially for large rivers, and thus requires huge computing resources that may not be steadily available to researchers or may come at high expense; this has seriously restricted research and application. Current parallel methods mostly parallelize the computation in the space and time dimensions, calculating the natural features of the distributed hydrological model unit by unit (grid or sub-basin) from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and high parallel efficiency. It combines the temporal and spatial runoff characteristics of the distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method has strong adaptability and extensibility: it makes full use of the available computing and storage resources when resources are limited, and its computing efficiency improves linearly as computing resources increase. The method can satisfy the parallel computing requirements of hydrological process simulation in small, medium, and large rivers.
Hirabayashi, Satoshi; Nowak, David J
2016-08-01
Trees remove air pollutants through dry deposition processes that depend upon forest structure, meteorology, and air quality, all of which vary across space and time. Employing nationally available forest, weather, air pollution and human population data for 2010, computer simulations were performed for deciduous and evergreen trees with varying leaf area index for rural and urban areas in every county in the conterminous United States. The results populated a national database of annual air pollutant removal, concentration changes, and reductions in adverse health incidences and costs for NO2, O3, PM2.5 and SO2. The developed database enables a first-order approximation of the air quality and associated human health benefits provided by trees with any forest configuration anywhere in the conterminous United States over time. A comprehensive national database of tree effects on air quality and human health in the United States was developed. Copyright © 2016 Elsevier Ltd. All rights reserved.
21 CFR 1401.10 - Fees to be charged-general.
Code of Federal Regulations, 2011 CFR
2011-04-01
... search. (b) Computerized search for records. ONDCP will charge 116% of the salary of the programmer/operator and the apportionable time of the central processing unit directly attributed to the search. (c...
21 CFR 1401.10 - Fees to be charged-general.
Code of Federal Regulations, 2010 CFR
2010-04-01
... search. (b) Computerized search for records. ONDCP will charge 116% of the salary of the programmer/operator and the apportionable time of the central processing unit directly attributed to the search. (c...
20 CFR 655.1030 - Service and computation of time.
Code of Federal Regulations, 2010 CFR
2010-04-01
... EMPLOYMENT OF FOREIGN WORKERS IN THE UNITED STATES Enforcement of the Attestation Process for Attestations... authorized where service is by mail. In the interest of expeditious proceedings, the administrative law judge...
NASA Astrophysics Data System (ADS)
Hsieh, Tsu-Pang; Cheng, Mei-Chuan; Dye, Chung-Yuan; Ouyang, Liang-Yuh
2011-01-01
In this article, we extend the classical economic production quantity (EPQ) model by proposing imperfect production processes and quality-dependent unit production cost. The demand rate is described by any convex decreasing function of the selling price. In addition, we allow for shortages and a time-proportional backlogging rate. For any given selling price, we first prove that the optimal production schedule not only exists but also is unique. Next, we show that the total profit per unit time is a concave function of price when the production schedule is given. We then provide a simple algorithm to find the optimal selling price and production schedule for the proposed model. Finally, we use a couple of numerical examples to illustrate the algorithm and conclude this article with suggestions for possible future research.
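Because the paper shows that the total profit per unit time is concave in the selling price once the production schedule is fixed, the outer price optimization reduces to a one-dimensional unimodal search. A sketch of that idea using golden-section search follows; the profit function here is an illustrative stand-in (a margin times a convex decreasing exponential demand), not the model's actual expression, which also embeds the inner production-schedule optimum.

```python
import math

def golden_max(f, lo, hi, tol=1e-6):
    """Golden-section search for the maximum of a unimodal (e.g., concave)
    function on [lo, hi]."""
    invphi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) >= f(d):
            b, d = d, c              # maximum lies in [a, old d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d              # maximum lies in [old c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2

# Illustrative stand-in: margin (p - cost) times demand D(p) = 100 e^(-0.2 p).
profit = lambda p: (p - 3.0) * 100.0 * math.exp(-0.2 * p)
p_star = golden_max(profit, 3.0, 30.0)
print(f"optimal selling price ~ {p_star:.3f}")   # analytic optimum is 8.0
```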
The effects of the framing of time on delay discounting.
DeHart, William Brady; Odum, Amy L
2015-01-01
We examined the effects of the framing of time on delay discounting. Delay discounting is the process by which delayed outcomes are devalued as a function of time. Time in a titrating delay discounting task is often framed in calendar units (e.g., as 1 week, 1 month, etc.). When time is framed as a specific date, delayed outcomes are discounted less compared to the calendar format. Other forms of framing time, however, have not been explored. All participants completed a titrating calendar-unit delay-discounting task for money. Participants were also assigned to one of two delay discounting tasks: time as dates (e.g., June 1st, 2015) or time in units of days (e.g., 5000 days), using the same delay distribution as the calendar delay-discounting task. Time framed as dates resulted in less discounting compared to the calendar method, whereas time framed as days resulted in greater discounting compared to the calendar method. The hyperboloid model fit best compared to the hyperbola and exponential models. How time is framed may alter how participants attend to the delays as well as how the delayed outcome is valued. Altering how time is framed may serve to improve adherence to goals with delayed outcomes. © Society for the Experimental Analysis of Behavior.
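A sketch of how the three discounting models named above are typically fit to indifference-point data, using their standard forms from the discounting literature; the data points below are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Standard model forms: V = subjective value (as a proportion of the
# delayed amount), D = delay, k = discount rate, s = scaling exponent.
hyperbola   = lambda D, k:    1.0 / (1.0 + k * D)
hyperboloid = lambda D, k, s: 1.0 / (1.0 + k * D) ** s
exponential = lambda D, k:    np.exp(-k * D)

# Illustrative indifference points at delays in days (not the study's data):
delays = np.array([7, 30, 90, 180, 365, 1825, 5000], dtype=float)
values = np.array([0.95, 0.83, 0.66, 0.54, 0.40, 0.17, 0.09])

for name, f, p0 in [("hyperbola", hyperbola, [0.01]),
                    ("hyperboloid", hyperboloid, [0.01, 1.0]),
                    ("exponential", exponential, [0.01])]:
    p, _ = curve_fit(f, delays, values, p0=p0, maxfev=10000)
    rss = np.sum((values - f(delays, *p)) ** 2)   # residual sum of squares
    print(f"{name:11s} params={np.round(p, 4)} RSS={rss:.4f}")
```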
Quantum Computation Using Optically Coupled Quantum Dot Arrays
NASA Technical Reports Server (NTRS)
Pradhan, Prabhakar; Anantram, M. P.; Wang, K. L.; Roychowhury, V. P.; Saini, Subhash (Technical Monitor)
1998-01-01
A solid state model for quantum computation has potential advantages in terms of ease of fabrication, characterization, and integration. The fundamental requirements for a quantum computer involve the realization of basic processing units (qubits) and a scheme for controlled switching and coupling among the qubits, which enables one to perform controlled operations on qubits. We propose a model for quantum computation based on optically coupled quantum dot arrays, which is computationally similar to the atomic model proposed by Cirac and Zoller. In this model, individual qubits are composed of two coupled quantum dots, and an array of these basic units is placed in an optical cavity. Switching among the states of the individual units is done by controlled laser pulses via near-field interaction using NSOM technology. Controlled rotations involving two or more qubits are performed via a common cavity-mode photon. We have calculated critical times, including the spontaneous emission and switching times, and show that they are comparable to the best times projected for other proposed models of quantum computation. We have also shown the feasibility of accessing individual quantum dots using NSOM technology by calculating the photon density at the tip and estimating the power necessary to perform the basic controlled operations. We are currently in the process of estimating the decoherence times for this system; however, we have formulated initial arguments which seem to indicate that the decoherence times will be comparable to, if not longer than, those of many other proposed models.
Analysis of urban regions using AVHRR thermal infrared data
Wright, Bruce
1993-01-01
Using 1-km AVHRR satellite data, relative temperature differences caused by conductivity and thermal inertia were used to distinguish urban and nonurban land covers. AVHRR data composited on a biweekly basis and distributed by the EROS Data Center in Sioux Falls, South Dakota, were used for the classification process. These composited images are based on the maximum normalized difference vegetation index (NDVI) of each pixel during the 2-week period using channels 1 and 2. The resultant images are nearly cloud-free and reduce the need for extensive reclassification processing. Because of the physiographic differences between the Eastern and Western United States, the initial study was limited to the eastern half of the United States. In the East, the time of maximum difference between urban surfaces and vegetated nonurban areas is the peak greenness period in late summer. A composite image of the Eastern United States for the 2-week period from August 30 to September 16, 1991, was used for the extraction of the urban areas. Two channels of thermal data (channels 3 and 4), normalized for regional temperature differences, and a composited NDVI image were classified using conventional image processing techniques. The results compare favorably with other large-scale urban area delineations.
NASA Astrophysics Data System (ADS)
Gholibeigian, Hassan
The dimension of information, as the fifth dimension of the universe, including packages of new information, is nested with space-time. The distributed density of information is matched to its corresponding distributed matter in space-time. A fundamental particle (string), like a photon or graviton, needs a package of information, including its exact quantum state and law, to process and travel a Planck length in a Planck time. This process is done via sub-particles (substrings). The processed information is carried by the particle as the universe's history. My proposed formula for the Planck unit of information (I_P), also a fundamental physical (universal) constant, is I_P = l_P/(c·t_P) = 1, where l_P is the Planck length, t_P the Planck time, and c the speed of light. My proposed formula for calculating the number of packages is I = t_P⁻¹·τ, in which I is the number of packages and τ is the lifetime of the particle. "Communication of information" as a "fundamental symmetry" leads to phenomena. Packages should always be up to date, including the new information needed for the evolution of the universe. But where does the new information come from, or how is it created, this information which Hawking and his colleagues forgot to bring inside the black hole and left behind the horizon in the form of soft hair?
Real-time blind image deconvolution based on coordinated framework of FPGA and DSP
NASA Astrophysics Data System (ADS)
Wang, Ze; Li, Hang; Zhou, Hua; Liu, Hongjun
2015-10-01
Image restoration plays a crucial role in several important application domains. With the increase in computation requirements as the algorithms become much more complex, there has been a significant rise in the need for accelerated implementations. In this paper, we focus on an efficient real-time image processing system for blind iterative deconvolution by means of the Richardson-Lucy (R-L) algorithm. We study the characteristics of the algorithm, and an image restoration processing system based on the coordinated framework of FPGA and DSP (CoFD) is presented. Single-precision floating-point processing units with small-scale cascading and special FFT/IFFT processing modules are adopted to guarantee the accuracy of the processing. Finally, comparative experiments are performed. The system can process a blurred image of 128×128 pixels within 32 milliseconds, and is up to three or four times faster than traditional multi-DSP systems.
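For reference, a compact NumPy sketch of the (non-blind) Richardson-Lucy update using FFT-based convolution, the same FFT/IFFT structure the CoFD hardware accelerates; blind deconvolution alternates this multiplicative update between the image and the PSF estimate. The PSF and test image below are synthetic.

```python
import numpy as np

def _fft_kernel(psf, shape):
    """Zero-pad the PSF to the image shape and center it at the origin so
    that FFT multiplication implements (circular) convolution."""
    pad = np.zeros(shape)
    h, w = psf.shape
    pad[:h, :w] = psf
    return np.fft.fft2(np.roll(pad, (-(h // 2), -(w // 2)), axis=(0, 1)))

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    """FFT-based Richardson-Lucy iteration: estimate <- estimate *
    correlate(psf, observed / convolve(psf, estimate))."""
    H = _fft_kernel(psf, observed.shape)
    estimate = np.full(observed.shape, observed.mean())
    for _ in range(iterations):
        blurred = np.real(np.fft.ifft2(np.fft.fft2(estimate) * H))
        ratio = observed / np.maximum(blurred, eps)
        # conj(H) implements correlation with the (real) PSF:
        estimate *= np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(H)))
    return estimate

# 128x128 toy example, matching the image size quoted in the abstract:
truth = np.zeros((128, 128))
truth[40:60, 50:90] = 1.0
psf = np.outer(np.hanning(9), np.hanning(9))
psf /= psf.sum()
observed = np.real(np.fft.ifft2(np.fft.fft2(truth) * _fft_kernel(psf, truth.shape)))
restored = richardson_lucy(observed, psf)
print(f"restoration error: {np.abs(restored - truth).mean():.4f}")
```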
Instrumentation for optimizing an underground coal-gasification process
NASA Astrophysics Data System (ADS)
Seabaugh, W.; Zielinski, R. E.
1982-06-01
While the United States has a coal resource base of 6.4 trillion tons, only seven percent is presently recoverable by mining. The process of in-situ gasification can recover another twenty-eight percent of this vast resource; however, viable technology must be developed for effective in-situ recovery. The key to this technology is a system that can optimize and control the process in real time. An instrumentation system is described that optimizes the composition of the injection gas, controls the in-situ process, and conditions the product gas for maximum utilization. The key elements of this system are Monsanto PRISM systems, a real-time analytical system, and a real-time data acquisition and control system. This system provides for complete automation of the process but can easily be overridden by manual control. The use of this cost-effective system can provide process optimization and is an effective element in developing a viable in-situ technology.
Sharing the skies: the Gemini Observatory international time allocation process
NASA Astrophysics Data System (ADS)
Margheim, Steven J.
2016-07-01
Gemini Observatory serves a diverse community of four partner countries (United States, Canada, Brazil, and Argentina), two hosts (Chile and University of Hawaii), and limited-term partnerships (currently Australia and the Republic of Korea). Observing time is available via multiple opportunities including Large and Long Programs, Fast-turnaround programs, and regular semester queue programs. The slate of programs for observation each semester must be created by merging programs from these multiple, conflicting sources. This paper describes the time allocation process used to schedule the overall science program for the semester, with emphasis on the International Time Allocation Committee and the software applications used.
Wee, Elijah X M; Taylor, M Susan
2018-01-01
Increasingly, continuous organizational change is viewed as the new reality for organizations and their members. However, this model of organizational change, which is usually characterized by ongoing, cumulative, and substantive change from the bottom up, remains underexplored in the literature. Taking a multilevel approach, the authors develop a theoretical model to explain the mechanisms behind the amplification and accumulation of valuable, ongoing work-unit level changes over time, which then become substantial changes at the organizational level. Drawing on the concept of emergence, they first focus on the cognitive search mechanisms of work-unit members and managers to illustrate how work-unit level routine changes may be amplified to the organization through 2 unique processes: composition and compilation emergence. The authors then discuss the managers' role in creating a sense of coherence and meaning for the accumulation of these emergent changes over time. They conclude this research by discussing the theoretical implications of their model for the existing literature of organizational change. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Making Progress: Education and Culture in New Times.
ERIC Educational Resources Information Center
Carlson, Dennis
The essays in this collection, although written at different times, are all part of a process of forming a democratic progressive educational policy and practice for the United States in the new historical era. Each chapter groups essays that critique some aspect of existing public school practice, explores the limitations of current reform…
MOVANAID: An Interactive Aid for Analysis of Movement Capabilities.
ERIC Educational Resources Information Center
Cooper, George E.; And Others
A computer-driven interactive aid for movement analysis, called MOVANAID, has been developed to assist in the performance of certain Army intelligence processing tasks in a tactical environment. It can compute the fastest travel times and paths through road networks for military units of various types, as well as the fastest times in which…
24 CFR 582.105 - Rental assistance amounts and payments.
Code of Federal Regulations, 2010 CFR
2010-04-01
... times the applicable Fair Market Rent (FMR) of each unit times the term of the grant. (c) Payment of... demonstration of need, up to 25 percent of the total rental assistance awarded may be spent in any one of the... processing rental payments to landlords, examining participant income and family composition, providing...
THOR Field and Wave Processor - FWP
NASA Astrophysics Data System (ADS)
Soucek, Jan; Rothkaehl, Hanna; Balikhin, Michael; Zaslavsky, Arnaud; Nakamura, Rumi; Khotyaintsev, Yuri; Uhlir, Ludek; Lan, Radek; Yearby, Keith; Morawski, Marek; Winkler, Marek
2016-04-01
If selected, the Turbulence Heating ObserveR (THOR) will become the first mission ever flown in space dedicated to plasma turbulence. The Fields and Waves Processor (FWP) is an integrated electronics unit for all electromagnetic field measurements performed by THOR. FWP will interface with all fields sensors: the electric field antennas of the EFI instrument, the MAG fluxgate magnetometer, and the search-coil magnetometer (SCM), and will perform data digitization and on-board processing. The FWP box will house multiple data acquisition sub-units and signal analyzers, all sharing a common power supply and data processing unit and thus a single data and power interface to the spacecraft. Integrating all the electromagnetic field measurements in a single unit will improve the consistency of the field measurements and the accuracy of time synchronization. The feasibility of making highly sensitive electric and magnetic field measurements in space has been demonstrated by Cluster (among other spacecraft), and THOR instrumentation, complemented by a thorough electromagnetic cleanliness program, will further improve on this heritage. Taking advantage of the capabilities of modern electronics and of THOR's large telemetry bandwidth, FWP will provide simultaneous, synchronized waveform and spectral data products at high time resolution from the numerous THOR sensors. FWP will also implement a plasma resonance sounder and a digital plasma quasi-thermal noise analyzer designed to provide high-cadence measurements of plasma density and temperature, complementary to data from the particle instruments. FWP will be interfaced with the particle instrument data processing unit (PPU) via a dedicated digital link, enabling on-board correlation between waves and particles and quantification of the transfer of energy between them. The FWP instrument shall be designed and built by an international consortium of scientific institutes from the Czech Republic, Poland, France, UK, Sweden and Austria.
A novel process control method for a TT-300 E-Beam/X-Ray system
NASA Astrophysics Data System (ADS)
Mittendorfer, Josef; Gallnböck-Wagner, Bernhard
2018-02-01
This paper presents some aspects of the process control method for a TT-300 E-Beam/X-Ray system at Mediscan, Austria. The novelty of the approach is the seamless integration of routine monitoring dosimetry with process data. This makes it possible to calculate a parametric dose for each production unit and consequently enables fine-grained and holistic process performance monitoring. Process performance is documented in process control charts for the analysis of individual runs as well as for historic trending of runs of specific process categories over a specified time range.
Design of high-performance parallelized gene predictors in MATLAB.
Rivard, Sylvain Robert; Mailloux, Jean-Gabriel; Beguenane, Rachid; Bui, Hung Tien
2012-04-10
This paper proposes a method of implementing parallel gene prediction algorithms in MATLAB. The proposed designs are based on either Goertzel's algorithm or on FFTs and have been implemented using varying amounts of parallelism on a central processing unit (CPU) and on a graphics processing unit (GPU). Results show that an implementation using a straightforward approach can require over 4.5 h to process 15 million base pairs (bps) whereas a properly designed one could perform the same task in less than five minutes. In the best case, a GPU implementation can yield these results in 57 s. The present work shows how parallelism can be used in MATLAB for gene prediction in very large DNA sequences to produce results that are over 270 times faster than a conventional approach. This is significant as MATLAB is typically overlooked due to its apparent slow processing time even though it offers a convenient environment for bioinformatics. From a practical standpoint, this work proposes two strategies for accelerating genome data processing which rely on different parallelization mechanisms. Using a CPU, the work shows that direct access to the MEX function increases execution speed and that the PARFOR construct should be used in order to take full advantage of the parallelizable Goertzel implementation. When the target is a GPU, the work shows that data needs to be segmented into manageable sizes within the GFOR construct before processing in order to minimize execution time.
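As an illustration of the Goertzel-based approach, the sketch below evaluates the single N/3 DFT bin whose power carries the period-3 signature commonly used in DFT-based gene prediction. The window length and test sequence are invented, and this is plain Python rather than the paper's MATLAB/MEX/GPU implementations.

```python
import math

def goertzel_power(x, k):
    """Power of the k-th DFT bin of sequence x via Goertzel's algorithm;
    cheaper than a full FFT when only one frequency (here N/3) is needed."""
    n = len(x)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # |X[k]|^2 from the final two filter states:
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

def period3_score(dna, window=351):
    """Slide a window (length a multiple of 3) over the sequence; in each
    window, sum the N/3-bin power of the four binary indicator sequences,
    a standard coding-region feature."""
    scores = []
    for i in range(0, len(dna) - window + 1, 3):
        w = dna[i:i + window]
        score = sum(goertzel_power([1.0 if c == b else 0.0 for c in w],
                                   window // 3) for b in "ACGT")
        scores.append(score)
    return scores

# Strong period-3 region followed by a non-coding-like region:
print(period3_score("ATG" * 200 + "ACGTACGGTTCA" * 50)[:3])
```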
Control system for several rotating mirror camera synchronization operation
NASA Astrophysics Data System (ADS)
Liu, Ningwen; Wu, Yunfeng; Tan, Xianxiang; Lai, Guoji
1997-05-01
This paper introduces a single-chip microcomputer control system for the synchronized operation of several rotating-mirror high-speed cameras. The system consists of four parts: the microcomputer control unit (including the synchronization part, the precise measurement part, and the time delay part), the shutter control unit, the motor driving unit, and the high-voltage pulse generator unit. The control system has been used to control the synchronized operation of GSI cameras (driven by a motor) and FJZ-250 rotating-mirror cameras (driven by a gas-driven turbine). We have obtained films of the same objective from different directions at the same or different speeds.
Storage peak gas-turbine power unit
NASA Technical Reports Server (NTRS)
Tsinkotski, B.
1980-01-01
A storage gas-turbine power plant using a two-cylinder compressor with intermediate cooling is studied. On the basis of measured characteristics of a 0.25 MW compressor, computer calculations of the parameters of the loading process of a constant-capacity storage unit (05.3 million cu m) were carried out. The required compressor power as a function of time, with and without final cooling, was computed. Parameters of maximum loading and discharging of the storage unit were calculated, and it was found that for the complete loading of a fully unloaded storage unit, a capacity of 1 to 1.5 million cubic meters is required, depending on the final cooling.
Paoletti, M; Litnhouvongs, M-N; Tandonnet, J
2015-05-01
In France, a legal framework and guidelines state that decisions to limit treatments (DLT) require a collaborative decision meeting and a transcription of decisions in the patient's file. The do-not-attempt-resuscitation order involves the same decision-making process for children in palliative care. To fulfill the law's requirements and encourage communication within the teams, the Resource Team in Pediatric Palliative Care in Aquitaine created a document shared by all children's hospital units, tracing the decision-making process. This study analyzed the decision-making process, the quality of information transmission, and most particularly the relevance of this new "collaborative decision-making for reasonable care" card. This was a retrospective study evaluating the implementation of a traceable document recording the DLT process. All the data sheets collected between January and December 2013 were analyzed: a total of 58 data sheets were completed during that period. We chose to collect the most relevant data in order to evaluate the relevance of the items to be completed and the transmission of the document, to draw up the patients' profiles, and to describe the content of discussions with families. Of the 58 children for whom DLT was discussed, 41 data sheets were drawn up in the pediatric intensive care unit, seven in the oncology and hematology unit, five in the neonatology unit, four in the neurology unit, and one in the pneumology unit. For 30 children one sheet was created, for 11 children two sheets, and for two children three sheets were filled out. Thirty-nine decisions were made to withhold lifesaving treatment and 11 to withdraw treatment, and for five children no limitation was set. Nine children survived after DLT. Of the 58 data sheets, only 31 recorded discussions with families related to the content of the data sheet. Of the 14 children transferred out of the unit with a completed data sheet, the sheet was transmitted to the new unit for 11 children (79%). The number of data sheets collected in one year shows the value of this document. The participation of referents from several pediatric specialties in its creation, followed by its progressive presentation in the children's hospital units, was an essential step in introducing and establishing its use. Items describing the situation, management proposals, and adaptation of the children's supportive care were completed in the majority of cases; they correspond, respectively, to a clinical description, the object of the discussion, and the daily caregivers' practices. On the other hand, discussions with families were related to the card's contents in only 53% of the cases. This can be explained by the time required to complete the DLT process: it is difficult for referring doctors to transcribe discussions with parents systematically, faithfully, and objectively. Although this process has been used for a long time in intensive care units, this document made possible an indispensable formalization of the decision-making process. In the other pediatric specialties, the sheet allowed the palliative approach to be introduced and served as a starting point and a tool for reflection on the do-not-attempt-resuscitation order, thus suggesting the need for anticipation in these situations. With the implementation of this new document, the conditions for DLT, data transmission, and continuity of care were improved in the children's hospital units.
Sharing this sheet with all professionals in charge of these children would support homogeneity and quality of management and care for children and their parents. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Airborne Camera System for Real-Time Applications - Support of a National Civil Protection Exercise
NASA Astrophysics Data System (ADS)
Gstaiger, V.; Romer, H.; Rosenbaum, D.; Henkel, F.
2015-04-01
In the VABENE++ project of the German Aerospace Center (DLR), powerful tools are being developed to aid public authorities and organizations with security responsibilities, as well as traffic authorities, when dealing with disasters and large public events. One focus lies on the acquisition of high-resolution aerial imagery, its fully automatic processing and analysis, and its near real-time provision to decision makers in emergency situations. For this purpose, a camera system was developed to be operated from a helicopter, with light-weight processing units and a microwave link for fast data transfer. In order to meet end-users' requirements, DLR works closely with the German Federal Office of Civil Protection and Disaster Assistance (BBK) within this project. One task of BBK is to establish, maintain, and train the German Medical Task Force (MTF), which is deployed nationwide in case of large-scale disasters. In October 2014, several units of the MTF were deployed for the first time in the framework of a national civil protection exercise in Brandenburg. The VABENE++ team joined the exercise and provided near real-time aerial imagery, videos, and derived traffic information to support the direction of the MTF and to identify needs for further improvements and developments. In this contribution, the authors introduce the new airborne camera system together with its near real-time processing components and share experiences gained during the national civil protection exercise.
More reliable protein NMR peak assignment via improved 2-interval scheduling.
Chen, Zhi-Zhong; Lin, Guohui; Rizzi, Romeo; Wen, Jianjun; Xu, Dong; Xu, Ying; Jiang, Tao
2005-03-01
Protein NMR peak assignment refers to the process of assigning a group of "spin systems" obtained experimentally to a protein sequence of amino acids. The automation of this process is still an unsolved and challenging problem in NMR protein structure determination. Recently, protein NMR peak assignment has been formulated as an interval scheduling problem (ISP), where a protein sequence P of amino acids is viewed as a discrete time interval I (the amino acids on P correspond one-to-one to the time units of I), each subset S of spin systems that are known to originate from consecutive amino acids of P is viewed as a "job" j(S), the preference of assigning S to a subsequence P′ of consecutive amino acids on P is viewed as the profit of executing job j(S) in the subinterval of I corresponding to P′, and the goal is to maximize the total profit of executing the jobs (on a single machine) during I. The interval scheduling problem is MAX SNP-hard in general; but in the real practice of protein NMR peak assignment, each job j(S) usually requires at most 10 consecutive time units, and typically the jobs that require one or two consecutive time units are the most difficult to assign/schedule. In order to solve these most difficult assignments, we present an efficient 13/7-approximation algorithm for the special case of the interval scheduling problem where each job takes one or two consecutive time units. Combining this algorithm with a greedy filtering strategy for handling long jobs (i.e., jobs that need more than two consecutive time units), we obtain a new efficient heuristic for protein NMR peak assignment. Our experimental study shows that the new heuristic produces the best peak assignment in most of the cases, compared with the NMR peak assignment algorithms in the recent literature. The above algorithm is also the first approximation algorithm for a nontrivial case of the well-known interval scheduling problem that breaks the ratio-2 barrier.
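For orientation, the textbook weighted interval scheduling problem, in which every job has one fixed position, is solvable exactly by dynamic programming; the ISP above is harder precisely because each spin-system job may be placed at many alternative positions with different profits, one of which must be chosen. A minimal sketch of the easy fixed-position case:

```python
from bisect import bisect_right

def max_profit(jobs):
    """Classic weighted interval scheduling DP. jobs: (start, end, profit)
    with `end` exclusive; returns the maximum total profit of a mutually
    compatible subset."""
    jobs = sorted(jobs, key=lambda j: j[1])        # sort by finishing time
    ends = [j[1] for j in jobs]
    best = [0] * (len(jobs) + 1)                   # best[i]: first i jobs
    for i, (s, e, p) in enumerate(jobs, 1):
        j = bisect_right(ends, s, 0, i - 1)        # last job ending by s
        best[i] = max(best[i - 1], best[j] + p)    # skip job i, or take it
    return best[-1]

# Unit- and two-unit jobs on a short discrete interval:
print(max_profit([(0, 1, 5), (0, 2, 8), (1, 3, 7), (3, 4, 4), (3, 5, 9)]))
# -> 21, from (0,1,5) + (1,3,7) + (3,5,9)
```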
General purpose graphic processing unit implementation of adaptive pulse compression algorithms
NASA Astrophysics Data System (ADS)
Cai, Jingxiao; Zhang, Yan
2017-07-01
This study introduces a practical approach to implement real-time signal processing algorithms for general surveillance radar based on NVIDIA graphical processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as CUDA basic linear algebra subroutines and CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for the NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and investigated. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with the CPU performance in terms of processing accelerations. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense and avoid radar, and aerospace surveillance radar.
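A minimal NumPy sketch of (non-adaptive) pulse compression as frequency-domain matched filtering, the operation that the CUDA FFT-based implementation accelerates; the chirp parameters, delay, and noise level below are illustrative.

```python
import numpy as np

def pulse_compress(rx, tx):
    """Pulse compression as frequency-domain matched filtering: correlate
    the received signal with the transmitted waveform using FFTs."""
    n = len(rx) + len(tx) - 1
    nfft = 1 << (n - 1).bit_length()          # next power of two >= n
    RX = np.fft.fft(rx, nfft)
    TX = np.fft.fft(tx, nfft)
    return np.fft.ifft(RX * np.conj(TX))[:n]  # conj(TX) -> correlation

# Linear FM (chirp) pulse buried in noise at a known delay of 300 samples:
fs, T, B = 1e6, 100e-6, 200e3                 # sample rate, pulse, bandwidth
t = np.arange(int(fs * T)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t**2)
rng = np.random.default_rng(2)
rx = np.concatenate([np.zeros(300), chirp, np.zeros(200)])
rx = rx + 0.5 * (rng.standard_normal(rx.size) + 1j * rng.standard_normal(rx.size))
peak = np.argmax(np.abs(pulse_compress(rx, chirp)))
print(f"detected delay: {peak} samples (expected 300)")
```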
Newell, Terry L; Steinmetz-Malato, Laura L; Van Dyke, Deborah L
2011-01-01
The inpatient medication delivery system used at a large regional acute care hospital in the Midwest had become antiquated and inefficient. The existing 24-hr medication cart-fill exchange process with delivery to the patients' bedside did not always provide ordered medications to the nursing units when they were needed. In 2007 the principles of the Toyota Production System (TPS) were applied to the system. Project objectives were to improve medication safety and reduce the time needed for nurses to retrieve patient medications. A multidisciplinary team was formed that included representatives from nursing, pharmacy, informatics, quality, and various operational support departments. Team members were educated and trained in the tools and techniques of TPS, and then designed and implemented a new pull system benchmarking the TPS Ideal State model. The newly installed process, providing just-in-time medication availability, has measurably improved delivery processes as well as patient safety and satisfaction. Other positive outcomes have included improved nursing satisfaction, reduced nursing wait time for delivered medications, and improved efficiency in the pharmacy. After a successful pilot on two nursing units, the system is being extended to the rest of the hospital. © 2010 National Association for Healthcare Quality.
NASA Astrophysics Data System (ADS)
Suarez, Hernan; Zhang, Yan R.
2015-05-01
New radar applications need to perform complex algorithms and process large quantities of data to generate useful information for the users. This situation has motivated the search for better processing solutions that include low-power high-performance processors, efficient algorithms, and high-speed interfaces. In this work, a hardware implementation of adaptive pulse compression for real-time transceiver optimization is presented, based on a System-on-Chip architecture for Xilinx devices. This study also evaluates the performance of dedicated coprocessors as hardware accelerator units to speed up and improve the computation of computing-intensive tasks such as matrix multiplication and matrix inversion, which are essential for solving the covariance matrix. The tradeoffs between latency and hardware utilization are also presented. Moreover, the system architecture takes advantage of the embedded processor, which is interconnected with the logic resources through high-performance AXI buses, to perform floating-point operations, control the processing blocks, and communicate with an external PC through a customized software interface. The overall system functionality is demonstrated and tested for real-time operations using a Ku-band testbed together with a low-cost channel emulator for different types of waveforms.
Towards the understanding of network information processing in biology
NASA Astrophysics Data System (ADS)
Singh, Vijay
Living organisms perform incredibly well in detecting signals present in the environment. This information processing is achieved near-optimally and quite reliably, even though the sources of signals are highly variable and complex. Work over the last few decades has given us a fair understanding of how individual signal-processing units like neurons and cell receptors process signals, but the principles of collective information processing on biological networks are far from clear. Information processing in biological networks, like the brain, metabolic circuits, and cellular-signaling circuits, involves complex interactions among a large number of units (neurons, receptors). The combinatorially large number of states such a system can exist in makes it impossible to study these systems from first principles, starting from the interactions between the basic units. The principles of collective information processing on such complex networks can instead be identified using coarse-graining approaches, which could provide insights into the organization and function of complex biological networks. Here I study models of biological networks using continuum dynamics, renormalization, maximum likelihood estimation, and information theory. Such coarse-graining approaches identify features that are essential for certain processes performed by the underlying biological networks. We find that long-range connections in the brain allow for global-scale feature detection in a signal; they also suppress noise and remove gaps present in the signal. Hierarchical organization with long-range connections leads to large-scale connectivity at low synapse numbers. Time delays can be utilized to separate a mixture of signals with different temporal scales. Our observations indicate that the rules of multivariate signal processing are quite different from those of traditional single-unit signal processing.
Development and Evaluation of Stereographic Display for Lung Cancer Screening
2008-12-01
burden. Application of GPUs – With the evolution of commodity graphics processing units (GPUs) for accelerating games on personal computers, over the… units, which are designed for rendering computer games, are readily available and can be programmed to perform the kinds of real-time calculations…
Capture of Fluorescence Decay Times by Flow Cytometry
Naivar, Mark A.; Jenkins, Patrick; Freyer, James P.
2012-01-01
In flow cytometry, the fluorescence decay time of an excitable species has been largely underutilized and is unlikely to be found as a standard parameter on any imaging, sorting, or analyzing cytometer system. Most cytometers lack fluorescence lifetime hardware mainly owing to two central issues. Foremost, research and development with lifetime techniques has not properly exploited modern laser systems, data acquisition boards, and signal processing techniques. Second, a lack of enthusiasm for fluorescence lifetime applications in cells and with bead-based assays has persisted among the greater cytometry community. In this unit, we describe new approaches that address these issues and demonstrate the simplicity of digitally acquiring fluorescence relaxation rates in flow. The unit is divided into protocol and commentary sections in order to provide a comprehensive discourse on acquiring the fluorescence lifetime with frequency-domain methods. The unit covers (i) standard fluorescence lifetime acquisition (protocol-based) with frequency-modulated laser excitation, (ii) digital frequency-domain cytometry analyses, and (iii) interfacing fluorescence lifetime measurements onto sorting systems. The unit also includes a discussion of how digital methods are used for aliasing in order to harness higher frequency ranges, and a final discussion of heterodyning and the processing of waveforms for multi-exponential decay extraction. PMID:25419263
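A sketch of the core frequency-domain computation, assuming a single-exponential decay and idealized digitized waveforms: the phase shift and demodulation of the emission at the modulation frequency yield the standard phase and modulation lifetimes, τ_φ = tan(Δφ)/ω and τ_m = sqrt(1/m² − 1)/ω. Variable names and test values are illustrative, not from the protocols in the unit.

```python
import numpy as np

def lifetime_from_waveforms(excitation, emission, f_mod, fs):
    """Estimate single-exponential lifetimes from digitized waveforms by
    extracting phase and modulation at f_mod with a single DFT bin."""
    n = len(excitation)
    k = int(round(f_mod * n / fs))            # DFT bin of the modulation
    probe = np.exp(-2j * np.pi * k * np.arange(n) / n)
    ex, em = np.dot(probe, excitation), np.dot(probe, emission)
    dphi = np.angle(ex) - np.angle(em)        # emission lags excitation
    m = (np.abs(em) / np.mean(emission)) / (np.abs(ex) / np.mean(excitation))
    w = 2 * np.pi * f_mod
    tau_phase = np.tan(dphi) / w              # phase lifetime
    tau_mod = np.sqrt(max(1.0 / m**2 - 1.0, 0.0)) / w  # modulation lifetime
    return tau_phase, tau_mod

# Synthetic test: 10 MHz modulation, 4 ns lifetime, ideal detector.
fs, f, tau = 1e9, 10e6, 4e-9
t = np.arange(2000) / fs
ex = 1.0 + 0.9 * np.cos(2 * np.pi * f * t)
phi = np.arctan(2 * np.pi * f * tau)          # expected phase shift
m = 1.0 / np.sqrt(1.0 + (2 * np.pi * f * tau) ** 2)  # expected demodulation
em = 1.0 + 0.9 * m * np.cos(2 * np.pi * f * t - phi)
print(lifetime_from_waveforms(ex, em, f, fs))  # both ~4e-9 s
```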
Current research issues related to post-wildfire runoff and erosion processes
John A. Moody; Richard A. Shakesby; Peter R. Robichaud; Susan H. Cannon; Deborah A. Martin
2013-01-01
Research into post-wildfire effects began in the United States more than 70 years ago and only later extended to other parts of the world. Post-wildfire responses are typically transient, episodic, variable in space and time, dependent on thresholds, and involve multiple processes measured by different methods. These characteristics tend to hinder research progress, but...
Koike, Kazuhide; Grills, David C.; Tamaki, Yusuke; ...
2018-02-14
Supramolecular photocatalysts in which Ru(II) photosensitizer and Re(I) catalyst units are connected to each other by an ethylene linker are among the best known, most effective and durable photocatalytic systems for CO₂ reduction. In this paper we report, for the first time, time-resolved infrared (TRIR) spectra of three of these binuclear complexes to uncover why the catalysts function so efficiently. Selective excitation of the Ru unit with a 532 nm laser pulse induces slow intramolecular electron transfer from the ³MLCT excited state of the Ru unit to the Re unit, with rate constants of (1.0–1.1) × 10⁴ s⁻¹ as a major component and (3.5–4.3) × 10⁶ s⁻¹ as a minor component, in acetonitrile. The produced charge-separated state has a long lifetime, with charge recombination rate constants of only (6.5–8.4) × 10⁴ s⁻¹. Thus, although it has a large driving force (-ΔG⁰_CR ≈ 2.6 eV), this process is in the Marcus inverted region. On the other hand, in the presence of 1-benzyl-1,4-dihydronicotinamide (BNAH), reductive quenching of the excited Ru unit proceeds much faster (k_q[BNAH (0.2 M)] = (3.5–3.8) × 10⁶ s⁻¹) than the abovementioned intramolecular oxidative quenching, producing the one-electron-reduced species (OERS) of the Ru unit. Nanosecond TRIR data clearly show that intramolecular electron transfer from the OERS of the Ru unit to the Re unit (k_ET > 2 × 10⁷ s⁻¹) is much faster than from the excited state of the Ru unit, and that it is also faster than the reductive quenching process of the excited Ru unit by BNAH. To measure the exact value of k_ET, picosecond TRIR spectroscopy and a stronger reductant were used. Thus, in the case of the binuclear complex with tri(p-fluorophenyl)phosphine ligands (RuRe(FPh)), for which intramolecular electron transfer is expected to be the fastest among the three binuclear complexes, in the presence of 1,3-dimethyl-2-phenyl-2,3-dihydro-1H-benzo[d]imidazole (BIH), k_ET was measured as k_ET = (1.4 ± 0.1) × 10⁹ s⁻¹. This clearly shows that intramolecular electron transfer in these RuRe binuclear supramolecular photocatalysts is not the rate-determining process in the photocatalytic reduction of CO₂, which is one of the main reasons why they work so efficiently.
An investigation of collisions between fiber positioning units in LAMOST
NASA Astrophysics Data System (ADS)
Liu, Xiao-Jie; Wang, Gang
2016-04-01
The arrangement of fiber positioning units in the LAMOST focal plane may lead to collisions during the fiber allocation process. To avoid these collisions, a software-based protection system has to abandon some targets located in the overlapping fields of adjacent fiber units. In this paper, we first analyze the probability of collisions between fibers and identify their possible causes. Solving the problem of collisions among fiber positioning units is useful for improving the observing efficiency of LAMOST. Based on this analysis, a collision handling system is designed using a master-slave control structure between the micro control unit and a microcomputer. Simulated experiments validate that the system can provide real-time inspection and exchange information between the fiber unit controllers and the main controller.
NASA Astrophysics Data System (ADS)
Han, Zhenyu; Sun, Shouzheng; Fu, Yunzhong; Fu, Hongya
2017-10-01
Viscidity is an important physical indicator for assessing the fluidity of resin, which helps the resin contact the fibers effectively and reduces manufacturing defects during the automated fiber placement (AFP) process. However, the effect of processing parameters on viscidity evolution has rarely been studied for the AFP process. In this paper, viscidities at different scales are analyzed based on a multi-scale analysis method. Firstly, the viscous dissipation energy (VDE) within the meso-unit under different processing parameters is assessed using the finite element method (FEM). According to a multi-scale energy transfer model, the meso-unit energy is used as the boundary condition for the microscopic analysis. Furthermore, the molecular structure of the micro-system is built by the molecular dynamics (MD) method, and viscosity curves are then obtained by integrating the stress autocorrelation function (SACF) over time. Finally, the correlation characteristics of the processing parameters with respect to viscosity are revealed using the gray relational analysis method (GRAM). A group of processing parameters is found that achieves stable viscosity and better fluidity of the resin.
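The SACF-to-viscosity step mentioned above is the standard Green-Kubo relation; the sketch below uses a synthetic exponentially decaying SACF, whereas in the paper the SACF comes from the MD trajectory of the resin micro-system. All numerical values here are illustrative.

```python
import numpy as np

# Green-Kubo relation: shear viscosity is the time integral of the
# shear-stress autocorrelation function (SACF),
#   eta = (V / kB T) * integral_0^inf <P_xy(0) P_xy(t)> dt.

kB = 1.380649e-23           # Boltzmann constant, J/K
V, T = 8.0e-27, 300.0       # illustrative box volume (m^3) and temperature (K)

dt = 1.0e-15                                # 1 fs sampling interval
t = np.arange(20000) * dt
sacf = 2.0e14 * np.exp(-t / 2.0e-12)        # Pa^2, illustrative decay
eta = (V / (kB * T)) * np.trapz(sacf, t)    # Pa*s
print(f"viscosity ~ {eta:.3e} Pa*s")
```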
A System for Distributing Real-Time Customized (NEXRAD-Radar) Geosciences Data
NASA Astrophysics Data System (ADS)
Singh, Satpreet; McWhirter, Jeff; Krajewski, Witold; Kruger, Anton; Goska, Radoslaw; Seo, Bongchul; Domaszczynski, Piotr; Weber, Jeff
2010-05-01
Hydrometeorologists and hydrologists can benefit from (weather) radar derived rain products, including rain rates and accumulations. The Hydro-NEXRAD system (HNX1) has been in operation since 2006 at IIHR-Hydroscience and Engineering at The University of Iowa. It provides rapid and user-friendly access to such user-customized products, generated using archived Weather Surveillance Doppler Radar (WSR-88D) data from the NEXRAD weather radar network in the United States. HNX1 allows researchers to deal directly with radar-derived rain products, without the burden of the details of radar data collection, quality control, processing, and format conversion. A number of hydrologic applications can benefit from a continuous real-time feed of customized radar-derived rain products. We are currently developing such a system, Hydro-NEXRAD 2 (HNX2). HNX2 collects real-time, unprocessed data from multiple NEXRAD radars as they become available, processes them through a user-configurable pipeline of data-processing modules, and then publishes processed products at regular intervals. Modules in the data-processing pipeline encapsulate algorithms such as non-meteorological echo detection, range correction, radar-reflectivity-rain-rate (Z-R) conversion, advection correction, merging of products from multiple radars, and grid transformations. HNX2's implementation presents significant challenges, including quality control, error handling, time synchronization of data from multiple asynchronous sources, generation of multiple-radar metadata products, distribution of products to a user base with diverse needs and constraints, and scalability. For content management and distribution, HNX2 uses RAMADDA (Repository for Archiving, Managing and Accessing Diverse Data), developed by the UCAR/Unidata Program Center in the United States. RAMADDA allows HNX2 to publish products through automation and gives users multiple access methods to the published products, including simple web-browser-based access and OPeNDAP access. The latter allows users to set up automation at their end and fetch new data from HNX2 at regular intervals. HNX2 uses a two-dimensional metadata structure called a mosaic for managing metadata of the rain products. Currently, HNX2 is in a pre-production state and is serving near-real-time rain-rate map data products for individual radars and merged data products from seven radars covering the state of Iowa in the United States. These products then drive a rainfall-runoff model called CUENCAS, which is used as part of the Iowa Flood Center (housed at The University of Iowa) real-time flood forecasting system. We are currently developing a generalized scalable framework that will run on inexpensive hardware and will provide products for basins anywhere in the continental United States.
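As one concrete step in such a pipeline, radar reflectivity is typically converted to rain rate with a power-law Z-R relationship. The coefficients below are the common Marshall-Palmer values, shown for illustration only; the abstract does not state which coefficients HNX2 uses.

```latex
% Power-law Z-R conversion: Z in mm^6 m^-3, R in mm h^-1.
Z = a\,R^{b} \quad\Rightarrow\quad R = \left(\frac{Z}{a}\right)^{1/b},
\qquad \text{e.g. } a = 200,\; b = 1.6
% Radar data are usually logged in dBZ: \mathrm{dBZ} = 10\log_{10} Z.
```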
Jeskey, Mary; Card, Elizabeth; Nelson, Donna; Mercaldo, Nathaniel D; Sanders, Neal; Higgins, Michael S; Shi, Yaping; Michaels, Damon; Miller, Anne
2011-10-01
To report an exploratory action-research process used during the implementation of continuous patient monitoring in acute post-surgical nursing units. Substantial US Federal funding has been committed to implementing new health care technology, but failure to manage implementation processes may limit successful adoption and the realisation of proposed benefits. Effective approaches for managing barriers to new technology implementation are needed. Continuous patient monitoring was implemented in three of 13 medical/surgical units. An exploratory action-feedback approach, using time-series nurse surveys, was used to identify barriers and to develop and evaluate responses. Post-hoc interviews and document analysis were used to describe the change implementation process. Significant differences were identified in night- and day-shift nurses' perceptions of technology benefits. Research nurses facilitated the change process by developing 'clinical nurse implementation specialist' expertise. Health information technology (HIT)-related patient outcomes are mediated through nurses acting on new information, but HIT designed for critical care may not transfer to acute care settings. Exploratory action-feedback approaches can assist nurse managers in assessing and mitigating the real-world effects of HIT implementations. It is strongly recommended that nurse managers identify stakeholders and develop comprehensive plans for monitoring the effects of HIT in their units. © 2011 Blackwell Publishing Ltd.
Accelerated numerical processing of electronically recorded holograms with reduced speckle noise.
Trujillo, Carlos; Garcia-Sucerquia, Jorge
2013-09-01
The numerical reconstruction of digitally recorded holograms suffers from speckle noise. An accelerated method that uses general-purpose computing in graphics processing units to reduce that noise is shown. The proposed methodology utilizes parallelized algorithms to record, reconstruct, and superimpose multiple uncorrelated holograms of a static scene. For the best tradeoff between reduction of the speckle noise and processing time, the method records, reconstructs, and superimposes six holograms of 1024 × 1024 pixels in 68 ms; for this case, the methodology reduces the speckle noise by 58% compared with that exhibited by a single hologram. The fully parallelized method running on a commodity graphics processing unit is one order of magnitude faster than the same technique implemented on a regular CPU using its multithreading capabilities. Experimental results are shown to validate the proposal.
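The reported 58% noise reduction for six holograms is consistent with the standard statistics of averaging uncorrelated speckle patterns; a short check, using the usual definition of speckle contrast:

```latex
% Speckle contrast C = \sigma_I / \langle I \rangle. Averaging N
% uncorrelated speckle patterns reduces the contrast by 1/\sqrt{N}:
C_N = \frac{C_1}{\sqrt{N}}
\quad\Rightarrow\quad
1 - \frac{1}{\sqrt{6}} \approx 0.59
% i.e. ~59% expected reduction for N = 6, close to the measured 58%.
```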
Real-time colour hologram generation based on ray-sampling plane with multi-GPU acceleration.
Sato, Hirochika; Kakue, Takashi; Ichihashi, Yasuyuki; Endo, Yutaka; Wakunami, Koki; Oi, Ryutaro; Yamamoto, Kenji; Nakayama, Hirotaka; Shimobaba, Tomoyoshi; Ito, Tomoyoshi
2018-01-24
Although electro-holography can reconstruct three-dimensional (3D) motion pictures, its computational cost is too heavy to allow for real-time reconstruction of 3D motion pictures. This study explores accelerating colour hologram generation using light-ray information on a ray-sampling (RS) plane with a graphics processing unit (GPU) to realise a real-time holographic display system. We refer to an image corresponding to light-ray information as an RS image. Colour holograms were generated from three RS images with resolutions of 2,048 × 2,048; 3,072 × 3,072 and 4,096 × 4,096 pixels. The computational results indicate that the generation of the colour holograms using multiple GPUs (NVIDIA Geforce GTX 1080) was approximately 300-500 times faster than those generated using a central processing unit. In addition, the results demonstrate that 3D motion pictures were successfully reconstructed from RS images of 3,072 × 3,072 pixels at approximately 15 frames per second using an electro-holographic reconstruction system in which colour holograms were generated from RS images in real time.
Manufacturing Enhancement through Reduction of Cycle Time using Different Lean Techniques
NASA Astrophysics Data System (ADS)
Suganthini Rekha, R.; Periyasamy, P.; Nallusamy, S.
2017-08-01
In modern manufacturing systems the most important parameters in a production line are work in process, takt time and line balancing. In this article lean tools and techniques were implemented to reduce the cycle time. The aim is to enhance the productivity of the water pump pipe by identifying the bottleneck stations and non-value-added activities. From the initial time study the bottleneck processes were identified, and the necessary improvement measures for the bottleneck processes were also identified. Subsequently the improvement actions were established and implemented using different lean tools like value stream mapping, 5S and line balancing. The current-state value stream map was developed to describe the existing status and to identify various problem areas. 5S was used to implement the steps to reduce the process cycle time and unnecessary movements of man and material. The improvement activities were implemented with the required suggestions and the future-state value stream map was developed. From the results it was concluded that the total cycle time was reduced by about 290.41 seconds and production increased by about 760 units to meet customer demand.
Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU
NASA Astrophysics Data System (ADS)
Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee
2013-02-01
3D microscopy images contain an astronomical amount of data, rendering 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). To mitigate this, many users crop a region of interest (ROI) of the input image to a small size. Although this reduces cost and time, there are drawbacks at the image-processing level, e.g., the selected ROI strongly depends on the user and original image information is lost. To mitigate these problems, we developed a 3D microscopy image processing tool on a graphics processing unit (GPU). Our tool provides various efficient automatic thresholding methods to achieve intensity-based segmentation of 3D microscopy images; users can select the algorithm to be applied. Further, the tool provides visualization of the segmented volume data and supports scaling, translation, etc. using a keyboard and mouse. However, the rapidly visualized 3D objects still need to be analyzed to provide information for biologists, which requires quantitative data from the images. Therefore, we label the segmented 3D objects within all 3D microscopic images and obtain quantitative information on each labeled object, which can be used as classification features. A user can select an object to be analyzed; our tool displays the selected object in a new window so that more details of the object can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times on matched specifications and configurations.
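A minimal CPU sketch of the pipeline the abstract describes (automatic threshold, connected-component labeling, per-object quantification) is given below. The actual tool runs these steps on the GPU; function and variable names here are illustrative assumptions, not taken from the paper.

```python
# Intensity-based segmentation of a 3D volume with per-object statistics.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def segment_and_quantify(volume: np.ndarray):
    """Threshold a 3D microscopy volume and report per-object statistics."""
    t = threshold_otsu(volume)            # automatic intensity threshold
    mask = volume > t                     # binary segmentation
    labels = label(mask)                  # connected-component labeling in 3D
    return [(r.label, r.area, r.centroid) for r in regionprops(labels)]

# Example: a random volume stands in for a real 3D microscopy stack.
vol = np.random.rand(64, 64, 64)
objects = segment_and_quantify(vol)
print(f"{len(objects)} labeled objects")
```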
Spreading a medication administration intervention organizationwide in six hospitals.
Kliger, Julie; Singer, Sara; Hoffman, Frank; O'Neil, Edward
2012-02-01
Six hospitals from the San Francisco Bay Area participated in a 12-month quality improvement project conducted by the Integrated Nurse Leadership Program (INLP). A quality improvement intervention that focused on improving medication administration accuracy was spread from two pilot units to all inpatient units in the hospitals. INLP developed a 12-month curriculum, presented in a combination of off-site training sessions and hospital-based training and consultant-led meetings, to teach clinicians the key skills needed to drive organizationwide change. Each hospital established a nurse-led project team, as well as unit teams to address six safety processes designed to improve medication administration accuracy: compare medication to the medication administration record; keep medication labeled throughout; check two patient identifications; explain drug to patient (if applicable); chart immediately after administration; and protect process from distractions and interruptions. From baseline until one year after project completion, the six hospitals improved their medication accuracy rates, on average, from 83.4% to 98.0% in the spread units. The spread units also improved safety processes overall from 83.1% to 97.2%. During the same time, the initial pilot units also continued to improve accuracy from 94.0% to 96.8% and safety processes overall from 95.3% to 97.2%. With thoughtful planning, engaging those doing the work early and focusing on the "human side of change" along with technical knowledge of improvement methodologies, organizations can spread initiatives enterprisewide. This program required significant training of frontline workers in problem-solving skills, leading change, team management, data tracking, and communication.
Changes in Efficiency and Safety Culture After Integration of an I-PASS-Supported Handoff Process.
Sheth, Shreya; McCarthy, Elisa; Kipps, Alaina K; Wood, Matthew; Roth, Stephen J; Sharek, Paul J; Shin, Andrew Y
2016-02-01
Recent publications have shown improved outcomes associated with resident-to-resident handoff processes. However, the implementation of similar handoff processes for patients moving between units and teams with expansive responsibilities presents unique challenges. We sought to determine the impact of a multidisciplinary standardized handoff process on efficiency, safety culture, and satisfaction. A prospective improvement initiative to standardize handoffs during patient transitions from the cardiovascular ICU to the acute care unit was implemented in a university-affiliated children's hospital. Time between verbal handoff and patient transfer decreased from baseline (397 ± 167 minutes) to the postintervention period (24 ± 21 minutes) (P < .01). Percentage positive scores for the handoff/transitions domain of a national culture of safety survey improved (39.8% vs 15.2% and 38.8% vs 19.6%; P = .005 and P = .03, respectively). Provider satisfaction improved related to the information conveyed (34% to 41%; P = .03), time to transfer (5% to 34%; P < .01), and overall experience (3% to 24%; P < .01). Family satisfaction improved for several questions, including: "satisfaction with the information conveyed" (42% to 70%; P = .02), "opportunities to ask questions" (46% to 74%; P < .01), and "Acute Care team's knowledge about my child's issues" (50% to 73%; P = .04). No differences in rates of readmission, rapid response team calls, or mortality were observed. Implementation of a multidisciplinary I-PASS-supported handoff process for patients transferring from the cardiovascular ICU to the acute care unit resulted in improved transfer efficiency, safety culture scores, and satisfaction of providers and families. Copyright © 2016 by the American Academy of Pediatrics.
Venture Evaluation and Review Technique (VERT). Users’/Analysts’ Manual
1979-10-01
real world. Additionally, activity processing times could be entered as a normal, uniform or triangular distribution. Activity times can also be...work or tasks, or if the unit activities are such abstractions of the real world that the estimation of the time, cost and performance parameters for...utilized in that constraining capacity. 7444 The network being processed has passed all the previous error checks. It currently has a real time
The AMchip04 and the processing unit prototype for the FastTracker
NASA Astrophysics Data System (ADS)
Andreani, A.; Annovi, A.; Beretta, M.; Bogdan, M.; Citterio, M.; Alberti, F.; Giannetti, P.; Lanza, A.; Magalotti, D.; Piendibene, M.; Shochet, M.; Stabile, A.; Tang, J.; Tompkins, L.; Volpi, G.
2012-08-01
Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment's complexity, accelerator backgrounds and luminosity increase, we need increasingly complex and exclusive event selection. We present the first prototype of a new Processing Unit (PU), the core of the FastTracker processor (FTK). FTK is a real-time tracking device for the ATLAS experiment's trigger upgrade. The computing power of the PU is such that a few hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV/c in ATLAS events up to Phase II instantaneous luminosities (3 × 10³⁴ cm⁻² s⁻¹) with an event input rate of 100 kHz and a latency below a hundred microseconds. The PU provides massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generally referred to as the "combinatorial challenge", is solved by the Associative Memory (AM) technology, which exploits parallelism to the maximum extent; it compares the event to all pre-calculated "expectations" or "patterns" (pattern matching) simultaneously, looking for candidate tracks called "roads". This approach reduces the typical exponential complexity of CPU-based algorithms to a linear behavior. Pattern recognition is completed by the time the data are loaded into the AM devices. We report on the design of the first Processing Unit prototypes. The design had to address the most challenging aspects of this technology: a huge number of detector clusters ("hits") must be distributed at high rate with very large fan-out to all patterns (10 million patterns will be located on 128 chips placed on a single board), and a huge number of roads must be collected and sent back to the FTK post-pattern-recognition functions. A network of high-speed serial links is used to solve the data distribution problem.
The implementation of a postoperative care process on a neurosurgical unit.
Douglas, Mary; Rowed, Sheila
2005-12-01
The postoperative phase is a critical time for any neurosurgical patient. Historically, certain patients having neurosurgical procedures, such as craniotomies and other more complex surgeries, have been nursed postoperatively in the intensive care unit (ICU) for an overnight stay prior to transfer to a neurosurgical floor. At the Hospital for Sick Children in Toronto, because of challenges with access to ICU beds and the cancellation of surgeries due to a lack of available nurses in the ICU setting, this practice was reexamined. A set of criteria was developed to identify which postoperative patients could come directly to the neurosurgical unit immediately following their anesthetic recovery. The criteria were based on patient diagnosis, preoperative condition, comorbidities, the surgical procedure, intraoperative complications, and postoperative status. A detailed process was then outlined to select the most suitable patients and ensure patient safety. Included in this process was a postoperative protocol addressing details such as standard physician orders and the levels of monitoring required. Outcomes of this new process include fewer surgical cancellations for patients and families, equally safe or better patient care, and the conservation of limited ICU resources. The program has since been expanded to include patients who have undergone endovascular therapies.
Standardized severe maternal morbidity review: rationale and process.
Kilpatrick, Sarah J; Berg, Cynthia; Bernstein, Peter; Bingham, Debra; Delgado, Ana; Callaghan, William M; Harris, Karen; Lanni, Susan; Mahoney, Jeanne; Main, Elliot; Nacht, Amy; Schellpfeffer, Michael; Westover, Thomas; Harper, Margaret
2014-08-01
Severe maternal morbidity and mortality have been rising in the United States. To begin a national effort to reduce morbidity, a specific call to identify all pregnant and postpartum women experiencing admission to an intensive care unit or receipt of 4 or more units of blood for routine review has been made. While advocating for review of these cases, no specific guidance for the review process was provided. Therefore, the aim of this expert opinion is to present guidelines for a standardized severe maternal morbidity interdisciplinary review process to identify systems, professional, and facility factors that can be ameliorated, with the overall goal of improving institutional obstetric safety and reducing severe morbidity and mortality among pregnant and recently pregnant women. This opinion was developed by a multidisciplinary working group that included general obstetrician-gynecologists, maternal-fetal medicine subspecialists, certified nurse-midwives, and registered nurses all with experience in maternal mortality reviews. A process for standardized review of severe maternal morbidity addressing committee organization, review process, medical record abstraction and assessment, review culture, data management, review timing, and review confidentiality is presented. Reference is made to a sample severe maternal morbidity abstraction and assessment form.
Compact hybrid optoelectrical unit for image processing and recognition
NASA Astrophysics Data System (ADS)
Cheng, Gang; Jin, Guofan; Wu, Minxian; Liu, Haisong; He, Qingsheng; Yuan, ShiFu
1998-07-01
In this paper a compact hybrid opto-electrical unit (CHOEU) for digital image processing and recognition is proposed. The central part of CHOEU is an incoherent optical correlator, realized with a SHARP QA-1200 8.4-inch active-matrix TFT liquid crystal display panel that serves as two real-time spatial light modulators, one for the input image and one for the reference template. CHOEU performs two main processing tasks: digital filtering and object matching. Using CHOEU, an edge-detection operator is realized to extract the edges from the input images. The preprocessed images are then sent to the object recognition unit for identifying important targets. A novel template-matching method is proposed for gray-tone image recognition. A positive- and negative-cycle encoding method is introduced to realize absolute-difference pixel matching simply on a correlator structure. The system has good fault tolerance against rotation distortion, Gaussian noise, and information loss. Experiments are given at the end of this paper.
[Training and experience in stroke units].
Arenillas, J F
2008-01-01
The social and sanitary benefits provided by stroke units cannot be achieved without an adequate training and learning process. This dynamic process consists of the progressive acquisition of: a) a greater degree of expertise in stroke management by the stroke team; b) better coordination between the stroke team, extrahospital emergency medical systems, and other in-hospital professionals involved in stroke care, and c) more human and technological resources dedicated to improving the attention given to stroke patients. A higher degree of experience in a stroke unit will have an effect by: a) improving the diagnostic process in acute stroke patients in both time and quality; b) increasing the proportion of patients treated with thrombolysis; c) reducing extra- and intrahospital latencies to stroke treatment, and d) improving stroke outcome in terms of reduced mortality and increased functional independence. Finally, comprehensive stroke centers will achieve a higher degree of organizational complexity that permits a global assessment of the most advanced aspects of stroke management, including education and research.
Theory of nonstationary Hawkes processes
NASA Astrophysics Data System (ADS)
Tannenbaum, Neta Ravid; Burak, Yoram
2017-12-01
We expand the theory of Hawkes processes to the nonstationary case, in which the mutually exciting point processes receive time-dependent inputs. We derive an analytical expression for the time-dependent correlations, which can be applied to networks with arbitrary connectivity, and inputs with arbitrary statistics. The expression shows how the network correlations are determined by the interplay between the network topology, the transfer functions relating units within the network, and the pattern and statistics of the external inputs. We illustrate the correlation structure using several examples in which neural network dynamics are modeled as a Hawkes process. In particular, we focus on the interplay between internally and externally generated oscillations and their signatures in the spike and rate correlation functions.
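For reference, a minimal statement of the nonstationary setting described here (notation assumed, not quoted from the paper): each unit i fires with a conditional intensity driven by a time-dependent external input and by past events of the network through transfer kernels.

```latex
% Conditional intensity of a multivariate nonstationary Hawkes process:
% \mu_i(t) is the time-dependent external input to unit i, and
% h_{ij}(\tau) is the transfer kernel from unit j to unit i.
\lambda_i(t) \;=\; \mu_i(t) \;+\; \sum_{j}\int_{-\infty}^{t}
h_{ij}(t-s)\,\mathrm{d}N_j(s)
```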
Graphics Processing Units for HEP trigger systems
NASA Astrophysics Data System (ADS)
Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.; Neri, I.; Paolucci, P. S.; Piandani, R.; Pontisso, L.; Rescigno, M.; Simula, F.; Sozzi, M.; Vicini, P.
2016-07-01
General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We discuss the use of online parallel computing on GPUs for a synchronous low-level trigger, focusing on the CERN NA62 experiment trigger system. The use of GPUs in higher-level trigger systems is also briefly considered.
Employing OpenCL to Accelerate Ab Initio Calculations on Graphics Processing Units.
Kussmann, Jörg; Ochsenfeld, Christian
2017-06-13
We present an extension of our graphics processing units (GPU)-accelerated quantum chemistry package to employ OpenCL compute kernels, which can be executed on a wide range of computing devices like CPUs, Intel Xeon Phi, and AMD GPUs. Here, we focus on the use of AMD GPUs and discuss differences as compared to CUDA-based calculations on NVIDIA GPUs. First illustrative timings are presented for hybrid density functional theory calculations using serial as well as parallel compute environments. The results show that AMD GPUs are as fast or faster than comparable NVIDIA GPUs and provide a viable alternative for quantum chemical applications.
Allen, Peg; Jacob, Rebekah R; Lakshman, Meenakshi; Best, Leslie A; Bass, Kathryn; Brownson, Ross C
2018-03-02
Evidence-based public health (EBPH) practice, also called evidence-informed public health, can improve population health and reduce disease burden in populations. Organizational structures and processes can facilitate capacity-building for EBPH in public health agencies. This study involved 51 structured interviews with leaders and program managers in 12 state health department chronic disease prevention units to identify factors that facilitate the implementation of EBPH. Verbatim transcripts of the de-identified interviews were consensus coded in NVivo qualitative software. Content analyses of coded texts were used to identify themes and illustrative quotes. Facilitator themes included leadership support within the chronic disease prevention unit and division, unit processes to enhance information sharing across program areas and recruitment and retention of qualified personnel, training and technical assistance to build skills, and the ability to provide support to external partners. Chronic disease prevention leaders' role modeling of EBPH processes and expectations for staff to justify proposed plans and approaches were key aspects of leadership support. Leaders protected staff time in order to identify and digest evidence to address the common barrier of lack of time for EBPH. Funding uncertainties or budget cuts, lack of political will for EBPH, and staff turnover remained challenges. In conclusion, leadership support is a key facilitator of EBPH capacity building and practice. Section and division leaders in public health agencies with authority and skills can institute management practices to help staff learn and apply EBPH processes and spread EBPH with partners.
19 CFR 163.2 - Persons required to maintain records.
Code of Federal Regulations, 2014 CFR
2014-04-01
... rough diamonds must retain a copy of the Kimberley Process Certificate accompanying each shipment for a... exports from the United States any rough diamonds and does not keep records in this time frame may be...
Algauer, Andrea; Rivera, Stephanie; Faurote, Robert
2015-01-01
With increasing wait times in emergency departments (ED) across America, there is a need to streamline the inpatient admission process in order to decrease wait times and, more importantly, to increase patient and employee satisfaction. One inpatient unit at New York-Presbyterian Weill Cornell Medical Center initiated a program to help expedite the inpatient admission process from the ED. The goal of the ED Bridge program is to ease the patient's transition from the ED to an inpatient unit by visiting the patient in the ED and introducing and setting expectations for the inpatient environment (i.e., telemetry alarms, roommates, hourly comfort rounds). Along with improving the patient experience, this program intends to improve collaboration between ED nurses and inpatient nurses. With the continued support of our nurse management, hospital administrators and, most importantly, our staff, this concept aims to increase patient satisfaction scores and subsequently employee satisfaction. PMID:28725813
Particle-In-Cell simulations of high pressure plasmas using graphics processing units
NASA Astrophysics Data System (ADS)
Gebhardt, Markus; Atteln, Frank; Brinkmann, Ralf Peter; Mussenbrock, Thomas; Mertmann, Philipp; Awakowicz, Peter
2009-10-01
Particle-In-Cell (PIC) simulations are widely used to understand the fundamental phenomena in low-temperature plasmas. Particularly plasmas at very low gas pressures are studied using PIC methods. The inherent drawback of these methods is that they are very time-consuming: certain stability conditions have to be satisfied. This holds even more for the PIC simulation of high-pressure plasmas due to the very high collision rates. The simulations take a very long time to run on standard computers and require the help of computer clusters or supercomputers. Recent advances in the field of graphics processing units (GPUs) provide every personal computer with a highly parallel multiprocessor architecture for very little money. This architecture is freely programmable and can be used to implement a wide class of problems. In this paper we present the concepts of a fully parallel PIC simulation of high-pressure plasmas using the benefits of GPU programming.
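The stability conditions alluded to above are standard explicit-PIC constraints; stated roughly below with the usual rules of thumb (values are textbook guidance, not taken from this paper).

```latex
% Typical explicit-PIC constraints: resolve the plasma frequency,
% resolve the Debye length, and (with Monte Carlo collisions) keep
% the collision probability per step small. At high pressure the
% collision frequency \nu_c is large, which forces a small \Delta t
% and makes the simulations expensive.
\omega_{pe}\,\Delta t \lesssim 0.2, \qquad
\Delta x \lesssim \lambda_D, \qquad
\nu_c\,\Delta t \ll 1
```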
NASA Astrophysics Data System (ADS)
Ramirez, Andres; Rahnemoonfar, Maryam
2017-04-01
A hyperspectral image is a multidimensional dataset rich in information, consisting of hundreds of spectral bands. Analyzing the spectral and spatial information of such an image with linear and non-linear algorithms results in high computational time. In order to overcome this problem, this research presents a system using a MapReduce-Graphics Processing Unit (GPU) model that can help analyze a hyperspectral image through the usage of parallel hardware and a parallel programming model, which is simpler to handle than other low-level parallel programming models. Additionally, Hadoop was used as an open-source implementation of the MapReduce parallel programming model. This research compared classification accuracy and timing results between the Hadoop and GPU system and tested it against the following test cases: a CPU-and-GPU case, a CPU-only case, and a case where no dimensionality reduction was applied.
NASA Astrophysics Data System (ADS)
Zhao, Shuangle; Zhang, Xueyi; Sun, Shengli; Wang, Xudong
2017-08-01
TI C2000 series digital signal processor (DSP) chips have been widely used in electrical engineering, measurement and control, communications and other professional fields; the TMS320F28035 is one of the most representative of this family. A DSP program needs both data acquisition and data processing, but with ordinary C or assembly language programming the program executes sequentially, so the analogue-to-digital (AD) converter cannot acquire data in real time and a lot of data is often missed. The control law accelerator (CLA) coprocessor can run in parallel with the main central processing unit (CPU), operates at the same frequency as the main CPU, and supports floating-point operations. Therefore, the CLA coprocessor is used in the program: the CLA core is responsible for data processing while the main CPU is responsible for AD conversion. The advantage of this method is that it reduces data processing time and achieves real-time data acquisition.
Wearable Environmental and Physiological Sensing Unit
NASA Technical Reports Server (NTRS)
Spremo, Stevan; Ahlman, Jim; Stricker, Ed; Santos, Elmer
2007-01-01
The wearable environmental and physiological sensing unit (WEPS) is a prototype of systems to be worn by emergency workers (e.g., firefighters and members of hazardous-material response teams) to increase their level of safety. The WEPS includes sensors that measure a few key physiological and environmental parameters, a microcontroller unit that processes the digitized outputs of the sensors, and a radio transmitter that sends the processed sensor signals to a computer in a mobile command center for monitoring by a supervisor. The monitored parameters serve as real-time indications of the wearer's physical condition and level of activity, and of the degree and type of danger posed by the wearer's environment. The supervisor could use these indications to determine, for example, whether the wearer should withdraw in the face of an increasing hazard or whether the wearer should be rescued.
NASA Astrophysics Data System (ADS)
Jung, E.
1984-05-01
A color recording unit was designed for the output and control of digitized picture data within computer-controlled reproduction and picture processing systems. In order to obtain a high-quality color proof picture similar to a color print, with reduced time and material consumption, a photographic color film was exposed pixelwise by modulated laser beams at three wavelengths for red, green and blue light. Components from different manufacturers for lasers, acousto-optic modulators and polygon mirrors were tested, as were different recording methods (continuous-tone or screened mode, with a drum or flatbed recording principle). Besides its application in the graphic arts - the proof recorder CPR 403 with continuous-tone color recording on a drum scanner - such a color hardcopy peripheral unit with large picture formats and high resolution can be used in medicine, communication, and satellite picture processing.
[IMPLEMENTATION OF A QUALITY MANAGEMENT SYSTEM IN A NUTRITION UNIT ACCORDING TO ISO 9001:2008].
Velasco Gimeno, Cristina; Cuerda Compés, Cristina; Alonso Puerta, Alba; Frías Soriano, Laura; Camblor Álvarez, Miguel; Bretón Lesmes, Irene; Plá Mestre, Rosa; Izquierdo Membrilla, Isabel; García-Peris, Pilar
2015-09-01
The implementation of quality management systems (QMS) in the health sector has made great progress in recent years and remains a key tool for the management and improvement of the services provided to patients. To describe the process of implementing a quality management system (QMS) according to the ISO 9001:2008 standard in a Nutrition Unit. The implementation began in October 2012. The Nutrition Unit was supported by the Hospital Preventive Medicine and Quality Management Service (PMQM). Initially, training sessions on QMS and ISO standards were held for staff. A Quality Committee (QC) was established with representation of the medical and nursing staff. Every week, meetings took place among members of the QC and PMQM to define processes, procedures and quality indicators, and a 2-month follow-up of these documents was carried out after their validation. A total of 4 processes were identified and documented (nutritional status assessment, nutritional treatment, monitoring of nutritional treatment, and planning and control of oral feeding), along with 13 operating procedures describing all the activity of the Unit. The interactions among them were defined in the process map. Each process has associated specific quality indicators for measuring the state of the QMS and identifying opportunities for improvement. All the documents required by ISO 9001:2008 were developed: quality policy, quality objectives, quality manual, document and record control, internal audit, nonconformities, and corrective and preventive actions. The Unit was certified by AENOR in April 2013. The implementation of a QMS leads to a reorganization of the activities of the Unit in order to meet customers' expectations. Documenting these activities ensures a better understanding of the organization, defines the responsibilities of all staff, and brings better management of time and resources. A QMS also improves internal communication and is a motivational element. Exploring the satisfaction and expectations of patients makes it possible to include their views in the design of care processes. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
Valentijn, Pim P; Ruwaard, Dirk; Vrijhoef, Hubertus J M; de Bont, Antoinette; Arends, Rosa Y; Bruijnzeels, Marc A
2015-10-09
Collaborative partnerships are considered an essential strategy for integrating local disjointed health and social services. Currently, little evidence is available on how integrated care arrangements between professionals and organisations are achieved through the evolution of collaboration processes over time. The first aim was to develop a typology of integrated care projects (ICPs) based on the final degree of integration as perceived by multiple stakeholders. The second aim was to study how types of integration differ in changes of collaboration processes over time and final perceived effectiveness. A longitudinal mixed-methods study design based on two data sources (surveys and interviews) was used to identify the perceived degree of integration and patterns in collaboration among 42 ICPs in primary care in The Netherlands. We used cluster analysis to identify distinct subgroups of ICPs based on the final perceived degree of integration from a professional, organisational and system perspective. With the use of ANOVAs, the subgroups were contrasted based on: 1) changes in collaboration processes over time (shared ambition, interests and mutual gains, relationship dynamics, organisational dynamics and process management) and 2) final perceived effectiveness (i.e. rated success) at the professional, organisational and system levels. The ICPs were classified into three subgroups: 'United Integration Perspectives (UIP)', 'Disunited Integration Perspectives (DIP)' and 'Professional-oriented Integration Perspectives (PIP)'. ICPs within the UIP subgroup made the strongest increase in trust-based (mutual gains and relationship dynamics) as well as control-based (organisational dynamics and process management) collaboration processes and had the highest overall effectiveness rates. On the other hand, ICPs within the DIP subgroup decreased on collaboration processes and had the lowest overall effectiveness rates. ICPs within the PIP subgroup increased in control-based collaboration processes (organisational dynamics and process management) and had the highest effectiveness rates at the professional level. The differences across the three subgroups in terms of the development of collaboration processes and the final perceived effectiveness provide evidence that united stakeholders' perspectives are achieved through a constructive collaboration process over time. Disunited perspectives at the professional, organisation and system levels can be aligned by both trust-based and control-based collaboration processes.
Transformational leadership training programme for charge nurses.
Duygulu, Sergul; Kublay, Gulumser
2011-03-01
This paper is a report of an evaluation of the effects of a transformational leadership training programme on Unit Charge Nurses' leadership practices. Current healthcare regulations in the European Union and the accreditation efforts of hospitals for their services mandate transformation in healthcare services in Turkey. Therefore, the transformational leadership role of nurse managers is vital in determining and achieving long-term goals in this process. The sample consisted of 30 Unit Charge Nurses with a baccalaureate degree and 151 observers at two university hospitals in Turkey. Data were collected using the Leadership Practices Inventory-Self and Observer (applied four times during a 14-month study process from December 2005 to January 2007). The transformational leadership training programme comprised theoretical instruction (14 hours) and individual study (14 hours) in five sections. Means, standard deviations and percentages, repeated-measures tests and two-way factor analysis were used for analysis. According to the Leadership Practices Inventory-Self and Observer ratings, leadership practices increased statistically significantly with the implementation of the programme. There were no significant differences between groups in age, length of time in current job and current position. The Unit Charge Nurses' Leadership Practices Inventory self-ratings were significantly higher than those of the observers. There is a need to develop similar programmes to improve the leadership skills of Unit Charge Nurses, and to make it mandatory for nurses assigned to positions of Unit Charge Nurse to attend this kind of leadership programme. © 2010 Blackwell Publishing Ltd.
Park, Sohyun; Blanck, Heidi M.; Dooyema, Carrie A.; Ayala, Guadalupe X.
2015-01-01
Purpose: This study examined associations between sugar-sweetened beverage (SSB) intake and acculturation among a sample representing civilian noninstitutionalized U.S. adults. Design: Quantitative, cross-sectional study. Setting: National. Subjects: The 2010 National Health Interview Survey data for 17,142 Hispanics and U.S.-born non-Hispanic whites (≥18 years). Measures: The outcome variable was daily SSB intake (nondiet soda, fruit drinks, sports drinks, energy drinks, and sweetened coffee/tea drinks). Exposure variables were Hispanic ethnicity and proxies of acculturation (language of interview, birthplace, and years living in the United States). Analysis: We used multivariate logistic regression to estimate adjusted odds ratios (ORs) for the exposure variables associated with drinking SSB ≥ 1 time/d after controlling for covariates. Results: The adjusted odds of drinking SSB ≥ 1 time/d were significantly higher among Hispanics who completed the interview in Spanish (OR = 1.65) than among U.S.-born non-Hispanic whites. Compared with those who had lived in the United States for <5 years, the adjusted odds of drinking SSB ≥ 1 time/d were higher among adults who had lived in the United States for 5 to <10 years (OR = 2.72), 10 to <15 years (OR = 2.90), and ≥15 years (OR = 2.41). However, birthplace was not associated with daily SSB intake. Conclusion: The acculturation process is complex, and these findings contribute to identifying important subpopulations that may benefit from targeted intervention to reduce SSB intake. PMID:27404644
Zero Nuclear Weapons and Nuclear Security Enterprise Modernization
2011-01-01
national security strategy. For the first time since the Manhattan Project, the United States was no longer building nuclear weapons and was in fact...50 to 60 years to the Manhattan Project and are on the verge of catastrophic failure. Caustic chemicals and processes have sped up the corrosion and...day, the United States must fund the long-term modernization effort of the entire enterprise. Notes 1. Nuclear Weapon Archive, "The Manhattan
Academic Outcome Measures of a Dedicated Education Unit Over Time: Help or Hinder?
Smyer, Tish; Gatlin, Tricia; Tan, Rhigel; Tejada, Marianne; Feng, Du
2015-01-01
Critical thinking, nursing process, quality and safety measures, and standardized RN exit examination scores were compared between students (n = 144) placed in a dedicated education unit (DEU) and those in a traditional clinical model. Standardized test scores showed that differences between the clinical groups were not statistically significant. This study shows that the DEU model is 1 approach to clinical education that can enhance students' academic outcomes.
NASA Astrophysics Data System (ADS)
Hakim Halim, Abdul; Ernawati; Hidayat, Nita P. A.
2018-03-01
This paper deals with a model of batch scheduling for a single batch processor on which a number of parts of a single item are to be processed. The process needs two kinds of setups, i.e., main setups required before processing any batches, and additional setups required repeatedly after the batch processor completes a certain number of batches. The parts to be processed arrive at the shop floor at times coinciding with their respective starting times of processing, and the completed parts are to be delivered at multiple due dates. The objective adopted for the model is minimizing the total inventory holding cost, consisting of the holding cost per unit time for a part in completed batches and that in in-process batches. The formulation of the total inventory holding cost is derived from the so-called actual flow time, defined as the interval between the arrival times of parts at the production line and the delivery times of the completed parts. Minimizing the actual flow time ensures not only minimum inventory but also just-in-time arrival and delivery. An algorithm to solve the model is proposed and a numerical example is shown.
A General Accelerated Degradation Model Based on the Wiener Process.
Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning
2016-12-06
Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.
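One common form consistent with this description (notation assumed here, not quoted from the paper) models degradation as a Wiener process with a nonlinear time-scale function and a random, stress-dependent drift:

```latex
% General Wiener degradation path: \Lambda(t;\theta) is a (possibly
% nonlinear) time-scale function and B(\cdot) is standard Brownian motion.
X(t) = x_0 + \mu\,\Lambda(t;\theta) + \sigma_B\,B\big(\Lambda(t;\theta)\big)
% Unit-to-unit variation: \mu \sim \mathcal{N}(\mu_0, \sigma_\mu^2);
% acceleration enters through \mu as a function of stress, e.g. an
% Arrhenius-type law \mu(s) = \exp(a + b/s) for temperature stress s.
```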
Skinner, James A.; Tanaka, Kenneth L.; Platz, Thomas
2014-01-01
Consistently mappable units critical to distinguishing the style and interplay of geologic processes through time are sparse in the Martian lowlands. This study identifies a previously unmapped Middle Amazonian (ca. 1 Ga) unit (Middle Amazonian lowland unit, mAl) that postdates the Late Hesperian and Early Amazonian lowland plains by >2 b.y. The unit is regionally defined by subtle marginal scarps and slopes, has a mean thickness of 32 m, and extends >3.1 × 10⁶ km² between lat 35°N and 80°N. Pedestal-type craterforms and nested, arcuate ridges (thumbprint terrain) tend to occur adjacent to unit mAl outcrops, suggesting that current outcrops are vestiges of a more extensive deposit that previously covered ∼16 × 10⁶ km². Exposed layers, surface pits, and the draping of subjacent landforms allude to a sedimentary origin, perhaps as a loess-like deposit emplaced rhythmically through atmospheric fallout. We propose that unit mAl accumulated coevally with, and at the expense of, the erosion of the north polar basal units, identifying a major episode of Middle Amazonian climate-driven sedimentation in the lowlands. This work links ancient sedimentary processes to climate change that occurred well before those implied by current orbital and spin axis models.
Using Hazard Functions to Assess Changes in Processing Capacity in an Attentional Cuing Paradigm
ERIC Educational Resources Information Center
Wenger, Michael J.; Gibson, Bradley S.
2004-01-01
Processing capacity-defined as the relative ability to perform mental work in a unit of time-is a critical construct in cognitive psychology and is central to theories of visual attention. The unambiguous use of the construct, experimentally and theoretically, has been hindered by both conceptual confusions and the use of measures that are at best…
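The key definitions behind this approach (standard in the response-time literature; notation assumed here) relate the hazard function to the survivor function of the response-time distribution, so that a higher hazard at time t indicates more work completed per unit time:

```latex
% Hazard and integrated hazard of a response-time distribution with
% density f(t) and survivor function S(t) = 1 - F(t):
h(t) = \frac{f(t)}{S(t)}, \qquad
H(t) = \int_0^t h(s)\,\mathrm{d}s = -\ln S(t)
% Capacity comparisons across conditions are made by comparing h(t)
% or ratios of integrated hazards H(t).
```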
A Binaural Grouping Model for Predicting Speech Intelligibility in Multitalker Environments.
Mi, Jing; Colburn, H Steven
2016-10-03
Spatially separating speech maskers from target speech often leads to a large intelligibility improvement. Modeling this phenomenon has long been of interest to binaural-hearing researchers for uncovering brain mechanisms and for improving signal-processing algorithms in hearing-assistive devices. Much of the previous binaural modeling work focused on the unmasking enabled by binaural cues at the periphery, and little quantitative modeling has been directed toward the grouping or source-separation benefits of binaural processing. In this article, we propose a binaural model that focuses on grouping, specifically on the selection of time-frequency units that are dominated by signals from the direction of the target. The proposed model uses Equalization-Cancellation (EC) processing with a binary decision rule to estimate a time-frequency binary mask. EC processing is carried out to cancel the target signal and the energy change between the EC input and output is used as a feature that reflects target dominance in each time-frequency unit. The processing in the proposed model requires little computational resources and is straightforward to implement. In combination with the Coherence-based Speech Intelligibility Index, the model is applied to predict the speech intelligibility data measured by Marrone et al. The predicted speech reception threshold matches the pattern of the measured data well, even though the predicted intelligibility improvements relative to the colocated condition are larger than some of the measured data, which may reflect the lack of internal noise in this initial version of the model. © The Author(s) 2016.
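A minimal single-frequency-band sketch of the EC-based binary-mask idea described above: equalize the two ear signals toward the target direction, cancel, and mark time-frequency units where cancellation removes most of the energy (i.e., units dominated by the target). Parameter names and the threshold value are illustrative assumptions, not the paper's.

```python
import numpy as np

def ec_binary_mask(left, right, itd_samples, frame=256, thresh_db=6.0):
    """Return a per-frame binary mask for one frequency band."""
    right_eq = np.roll(right, itd_samples)      # equalization: align target
    n = min(len(left), len(right_eq)) // frame
    mask = np.zeros(n, dtype=bool)
    for k in range(n):
        seg = slice(k * frame, (k + 1) * frame)
        e_in = np.sum(left[seg] ** 2 + right_eq[seg] ** 2)
        e_out = np.sum((left[seg] - right_eq[seg]) ** 2)  # cancellation
        # Large input-to-output energy drop => target-dominated unit.
        mask[k] = 10 * np.log10(e_in / (e_out + 1e-12)) > thresh_db
    return mask
```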
Implementation of GPU accelerated SPECT reconstruction with Monte Carlo-based scatter correction.
Bexelius, Tobias; Sohlberg, Antti
2018-06-01
Statistical SPECT reconstruction can be very time-consuming, especially when compensations for collimator and detector response, attenuation, and scatter are included in the reconstruction. This work proposes an accelerated SPECT reconstruction algorithm based on graphics processing unit (GPU) processing. An ordered-subset expectation maximization (OSEM) algorithm with CT-based attenuation modelling, depth-dependent Gaussian convolution-based collimator-detector response modelling, and Monte Carlo-based scatter compensation was implemented using OpenCL. The OpenCL implementation was compared against the existing multi-threaded OSEM implementation running on a central processing unit (CPU) in terms of scatter-to-primary ratios, standardized uptake values (SUVs), and processing speed using mathematical phantoms and clinical multi-bed bone SPECT/CT studies. The difference in scatter-to-primary ratios, visual appearance, and SUVs between the GPU and CPU implementations was minor. On the other hand, at its best, the GPU implementation was 24 times faster than the multi-threaded CPU version on a normal 128 × 128 matrix size, 3-bed bone SPECT/CT data set when compensations for collimator and detector response, attenuation, and scatter were included. GPU SPECT reconstruction shows great promise as an everyday clinical reconstruction tool.
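For reference, a standard form of the OSEM update with an additive scatter term in the forward model is sketched below (notation assumed; the paper's exact implementation details are not given in the abstract).

```latex
% OSEM update for voxel j using subset S_m: y_i are measured counts,
% a_{ij} is the system matrix (encoding collimator-detector response
% and attenuation), and s_i is the Monte Carlo-estimated scatter.
x_j^{(n,m+1)} = \frac{x_j^{(n,m)}}{\sum_{i\in S_m} a_{ij}}
\sum_{i\in S_m} a_{ij}\,
\frac{y_i}{\sum_{k} a_{ik}\, x_k^{(n,m)} + s_i}
```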
Techniques for blade tip clearance measurements with capacitive probes
NASA Astrophysics Data System (ADS)
Steiner, Alexander
2000-07-01
This article presents a proven but advantageous concept for blade tip clearance evaluation in turbomachinery. The system is based on heavy duty probes and a high frequency (HF) and amplifying electronic unit followed by a signal processing unit. Measurements are taken under high temperature and other severe conditions such as ionization. Every single blade can be observed. The signals are digitally filtered and linearized in real time. The electronic set-up is highly integrated. Miniaturized versions of the electronic units exist. The small and robust units can be used in turbo engines in flight. With several probes at different angles in one radial plane further information is available. Shaft eccentricity or blade oscillations can be calculated.
Backflushing system rapidly cleans fluid filters
NASA Technical Reports Server (NTRS)
Descamp, V. A.; Boex, M. W.; Hussey, M. W.; Larson, T. P.
1973-01-01
Self contained unit can backflush filter elements in fraction of the time expended by presently used equipment. This innovation may be of interest to manufacturers of hydraulic and pneumatic systems as well as to chemical, food, processing, and filter manufacturing industries.
Real-time fMRI processing with physiological noise correction - Comparison with off-line analysis.
Misaki, Masaya; Barzigar, Nafise; Zotev, Vadim; Phillips, Raquel; Cheng, Samuel; Bodurka, Jerzy
2015-12-30
While applications of real-time functional magnetic resonance imaging (rtfMRI) are growing rapidly, there are still limitations in real-time data processing compared to off-line analysis. We developed a proof-of-concept real-time fMRI processing (rtfMRIp) system utilizing a personal computer (PC) with a dedicated graphics processing unit (GPU) to demonstrate that it is now possible to perform intensive whole-brain fMRI data processing in real time. The rtfMRIp performs slice-timing correction, motion correction, spatial smoothing, signal scaling, and general linear model (GLM) analysis with multiple noise regressors, including physiological noise modeled with cardiac (RETROICOR) and respiration volume per time (RVT) regressors. The whole-brain data analysis with more than 100,000 voxels and more than 250 volumes is completed in less than 300 ms, much faster than the time required to acquire an fMRI volume. Real-time processing cannot be identical to off-line analysis when time-course information is used, such as in slice-timing correction, signal scaling, and GLM analysis. We verified that the reduced slice-timing correction for real-time analysis had output comparable to off-line analysis. The real-time GLM analysis, however, showed over-fitting when the number of sampled volumes was small. Our system implemented real-time RETROICOR and RVT physiological noise corrections for the first time and is capable of processing these steps on all available data at a given time, without the need for recursive algorithms. Comprehensive data processing in rtfMRI is possible with a PC, although the number of samples should be considered in real-time GLM analysis. Copyright © 2015 Elsevier B.V. All rights reserved.
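A toy sketch of the volume-by-volume GLM estimation described above: at each new volume, voxel time courses are refit against task and nuisance regressors (motion, RETROICOR, RVT). Shapes and names are illustrative assumptions; the actual system runs this step on a GPU in under 300 ms.

```python
import numpy as np

def incremental_glm(Y, X):
    """Y: (t, voxels) data so far; X: (t, regressors) design so far."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)   # (regressors, voxels)
    resid = Y - X @ beta
    dof = max(Y.shape[0] - X.shape[1], 1)
    # t-statistics for the first (task) regressor in each voxel.
    se = np.sqrt(np.sum(resid ** 2, 0) / dof * np.linalg.inv(X.T @ X)[0, 0])
    return beta, beta[0] / (se + 1e-12)

# The over-fitting noted in the abstract appears when the number of
# acquired volumes t is close to the number of regressors (small dof).
```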
Shope, William G.
1987-01-01
The U.S. Geological Survey maintains the basic hydrologic data collection system for the United States. The Survey is upgrading the collection system with electronic communications technologies that acquire, telemeter, process, and disseminate hydrologic data in near real time. These technologies include satellite communications via the Geostationary Operational Environmental Satellite, Data Collection Platforms in operation at over 1400 Survey gaging stations, Direct-Readout Ground Stations at nine Survey District Offices, and a network of powerful minicomputers that allows data to be processed and disseminated quickly.
GPU real-time processing in NA62 trigger system
NASA Astrophysics Data System (ADS)
Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cretaro, P.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Piccini, M.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.
2017-01-01
A commercial Graphics Processing Unit (GPU) is used to build a fast Level 0 (L0) trigger system tested parasitically with the TDAQ (Trigger and Data Acquisition) system of the NA62 experiment at CERN. In particular, the parallel computing power of the GPU is exploited to perform real-time fitting in the Ring Imaging CHerenkov (RICH) detector. Direct GPU communication using an FPGA-based board has been used to reduce the data transmission latency. The performance of the system for multi-ring reconstruction obtained during the NA62 physics run is presented.
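The ring fitting that the GPU parallelizes can be illustrated with a standard algebraic circle fit. The following NumPy sketch uses the Kasa least-squares method on hypothetical Cherenkov hits; it shows the per-ring computation only, not the NA62 multi-ring GPU kernels.

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Algebraic least-squares circle fit (Kasa method).
    Solves x^2 + y^2 = 2*a*x + 2*b*y + c for centre (a, b) and radius r."""
    M = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(M, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return a, b, r

# Hypothetical hits on one Cherenkov ring with a little noise
rng = np.random.default_rng(2)
phi = rng.uniform(0, 2 * np.pi, 20)
x = 3.0 + 1.5 * np.cos(phi) + 0.01 * rng.standard_normal(20)
y = -1.0 + 1.5 * np.sin(phi) + 0.01 * rng.standard_normal(20)
print(kasa_circle_fit(x, y))   # approximately (3.0, -1.0, 1.5)
```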
Solutions for acceleration measurement in vehicle crash tests
NASA Astrophysics Data System (ADS)
Dima, D. S.; Covaciu, D.
2017-10-01
Crash tests are useful for validating computer simulations of road traffic accidents. One of the most important parameters measured is the acceleration. The evolution of acceleration versus time during a crash test forms a crash pulse. The correctness of the crash pulse determination depends on the data acquisition system used. Recommendations regarding the instrumentation for impact tests are given in standards, which focus on the use of accelerometers as impact sensors. The goal of this paper is to present the device and software developed by the authors for data acquisition and processing. The system includes two accelerometers with different input ranges, a processing unit based on a 32-bit microcontroller, and a data logging unit with an SD card. Data collected on the card, as text files, are processed with dedicated software running on personal computers. The processing is based on diagrams and includes the digital filters recommended in the standards.
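Standards such as SAE J211 prescribe channel frequency class (CFC) low-pass filtering of crash acceleration channels. The following sketch, a rough approximation rather than a compliant CFC implementation, applies a zero-phase Butterworth low-pass with SciPy; the sampling rate, cutoff, and synthetic pulse are hypothetical.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_crash_pulse(accel, fs, cutoff_hz=300.0, order=2):
    """Zero-phase Butterworth low-pass in the spirit of a CFC filter.
    accel: raw acceleration samples; fs: sampling rate in Hz."""
    b, a = butter(order, cutoff_hz / (fs / 2))   # normalized cutoff
    return filtfilt(b, a, accel)                 # forward-backward pass, no phase lag

fs = 10_000.0                                    # hypothetical 10 kHz acquisition
t = np.arange(0, 0.2, 1 / fs)
pulse = 30 * np.exp(-((t - 0.05) / 0.01) ** 2)   # idealized crash pulse (g)
noisy = pulse + np.random.default_rng(3).normal(0, 2, t.size)
filtered = lowpass_crash_pulse(noisy, fs)
```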
NASA Technical Reports Server (NTRS)
2002-01-01
The Automatic Particle Fallout Monitor (APFM) is an automated instrument that assesses real-time particle contamination levels in a facility by directly imaging, sizing, and counting contamination particles. It allows personnel to respond to particle contamination before it becomes a major problem. For NASA, the APFM improves the ability to mitigate, avoid, and explain mission-compromising incidents of contamination occurring during payload processing, launch vehicle ground processing, and potentially, during flight operations. Commercial applications are in semiconductor processing and electronics fabrication, as well as aerospace, aeronautical, and medical industries. The product could also be used to measure the air quality of hotels, apartment complexes, and corporate buildings. IDEA sold and delivered its first four units to the United Space Alliance for the Space Shuttle Program at Kennedy. NASA used the APFM in the Kennedy Space Station Processing Facility to monitor contamination levels during the assembly of International Space Station components.
[Continuity and non-continuity from child- to adulthood in psychiatric clinical studies].
Kuwabara, Hitoshi; Kawakubo, Yuki; Kano, Yukiko
2014-01-01
It is difficult to conceive of the development of the brain as a single process, especially when we think about continuity and non-continuity from child- to adulthood. Non-continuity may be present when the brain is developing normally or consistently, or during aging, and development may vary across behavioral, structural, functional, and regional units. Clinical studies that consider the developmental process of change as natural and expected may better incorporate the potential variety and non-continuity than clinical studies that do not consider the process of change. It is likely that these complications are exacerbated because the timing of changes appears to vary across units. If we can identify the critical points of plasticity, temporally appropriate interventions can be developed. A focus on the developmental process of changes in the brain may lead to more rational and effective intervention strategies.
Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).
Yang, Owen; Choi, Bernard
2013-01-01
To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches.
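The single-run rescaling idea can be illustrated for the absorption coefficient: store the path lengths of detected photons from one absorption-free run and reweight each photon with Beer-Lambert for any new absorption value. This NumPy sketch shows only that absorption reweighting under hypothetical path-length data; it is not the authors' MATLAB/GPU package, and rescaling for scattering changes requires additional geometric scaling not shown here.

```python
import numpy as np

def rescale_reflectance(path_lengths, mu_a_new):
    """Rescale one absorption-free Monte Carlo run to a new absorption
    coefficient: each detected photon is reweighted by exp(-mu_a * L)."""
    return np.mean(np.exp(-mu_a_new * path_lengths))

# Hypothetical total path lengths (cm) of detected photons from a single run
rng = np.random.default_rng(4)
L = rng.exponential(scale=2.0, size=100_000)
for mu_a in (0.01, 0.1, 1.0):                    # absorption coefficients, 1/cm
    print(mu_a, rescale_reflectance(L, mu_a))    # reflectance falls as mu_a grows
```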
Quality Improvement in Critical Care: Selection and Development of Quality Indicators
Martin, Claudio M.; Project, The Quality Improvement in Critical Care
2016-01-01
Background. Caring for critically ill patients is complex and resource intensive. An approach to monitor and compare the function of different intensive care units (ICUs) is needed to optimize outcomes for patients and the health system as a whole. Objective. To develop and implement quality indicators for comparing ICU characteristics and performance within and between ICUs and regions over time. Methods. Canadian jurisdictions with established ICU clinical databases were invited to participate in an iterative series of face-to-face meetings, teleconferences, and web conferences. Eighteen adult intensive care units across 14 hospitals and 5 provinces participated in the process. Results. Six domains of ICU function were identified: safe, timely, efficient, effective, patient/family satisfaction, and staff work life. Detailed operational definitions were developed for 22 quality indicators. The feasibility was demonstrated with the collection of 3.5 years of data. Statistical process control charts and graphs of composite measures were used for data display and comparisons. Medical and nursing leaders as well as administrators found the system to be an improvement over prior methods. Conclusions. Our process resulted in the selection and development of 22 indicators representing 6 domains of ICU function. We have demonstrated the feasibility of such a reporting system. This type of reporting system will demonstrate variation between units and jurisdictions to help identify and prioritize improvement efforts. PMID:27493476
A Hybrid CPU/GPU Pattern-Matching Algorithm for Deep Packet Inspection.
Lee, Chun-Liang; Lin, Yi-Shan; Chen, Yaw-Chung
2015-01-01
The large quantities of data now being transferred via high-speed networks have made deep packet inspection indispensable for security purposes. Scalable and low-cost signature-based network intrusion detection systems have been developed for deep packet inspection for various software platforms. Traditional approaches that only involve central processing units (CPUs) are now considered inadequate in terms of inspection speed. Graphic processing units (GPUs) have superior parallel processing power, but transmission bottlenecks can reduce optimal GPU efficiency. In this paper we describe our proposal for a hybrid CPU/GPU pattern-matching algorithm (HPMA) that divides and distributes the packet-inspecting workload between a CPU and GPU. All packets are initially inspected by the CPU and filtered using a simple pre-filtering algorithm, and packets that might contain malicious content are sent to the GPU for further inspection. Test results indicate that in terms of random payload traffic, the matching speed of our proposed algorithm was 3.4 times and 2.7 times faster than those of the AC-CPU and AC-GPU algorithms, respectively. Further, HPMA achieved higher energy efficiency than the other tested algorithms.
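The divide-and-distribute idea can be sketched in a few lines. The following Python sketch is a simplified stand-in, not the HPMA implementation: a cheap prefix-set check plays the role of the CPU pre-filter, a plain substring search stands in for the exact GPU-side matcher, and the signatures and packets are hypothetical.

```python
def build_prefix_set(patterns, k=2):
    """Cheap pre-filter table: the first k bytes of every signature."""
    return {p[:k] for p in patterns}

def prefilter(payload, prefixes, k=2):
    """CPU-side pass: flag the payload if any k-byte window matches a prefix."""
    return any(payload[i:i + k] in prefixes for i in range(len(payload) - k + 1))

def full_match(payload, patterns):
    """Stand-in for the exact (GPU-side) pattern matcher."""
    return [p for p in patterns if p in payload]

patterns = [b"attack", b"exploit", b"shell"]     # hypothetical signatures
prefixes = build_prefix_set(patterns)
for pkt in (b"normal traffic", b"drop the shellcode here"):
    if prefilter(pkt, prefixes):                 # only suspicious packets go on
        print(pkt, "->", full_match(pkt, patterns))
```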
Staib, Andrew; Sullivan, Clair; Jones, Matt; Griffin, Bronwyn; Bell, Anthony; Scott, Ian
2017-06-01
Patients requiring emergency admission to hospital need complex care that can be fragmented, occurring in the ED, across the ED-inpatient interface (EDii) and subsequently in their destination inpatient ward. Our hospital had poor process efficiency with slow transit times for patients requiring emergency care. ED clinicians alone were able to improve the processes and length of stay for the patients discharged directly from the ED. However, improving the efficiency of care for patients requiring emergency admission to true inpatient wards required collaboration with reluctant inpatient clinicians. The inpatient teams were uninterested in improving time-based measures of care in isolation, but they were motivated by improving patient outcomes. We developed a dashboard showing process measures such as the 4 h rule compliance rate coupled with clinically important outcome measures such as inpatient mortality. The EDii dashboard helped unite both ED and inpatient teams in clinical redesign to improve both efficiency of care and patient outcomes. © 2016 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
WINCOF-I code for prediction of fan compressor unit with water ingestion
NASA Technical Reports Server (NTRS)
Murthy, S. N. B.; Mullican, A.
1990-01-01
The PURDUE-WINCOF code, which provides a numerical method of obtaining the performance of a fan-compressor unit of a jet engine with water ingestion into the inlet, was modified to take into account: (1) the scoop factor, (2) the time required for the setting-in of a quasi-steady distribution of water, and (3) the heat and mass transfer processes over the time calculated under (2). The modified code, named WINCOF-I, was utilized to obtain the performance of a fan-compressor unit of a generic jet engine. The results illustrate the manner in which quasi-equilibrium conditions become established in the machine and the redistribution of ingested water in various stages in the form of a film on the casing wall, droplets across the span, and vapor due to mass transfer.
Assessment of Process Capability: the case of Soft Drinks Processing Unit
NASA Astrophysics Data System (ADS)
Sri Yogi, Kottala
2018-03-01
Process capability studies have a significant impact in investigating process variation, which is important in achieving product quality characteristics. Capability indices measure the inherent variability of a process and can thus be used to improve process performance radically. The main objective of this paper is to understand whether the process output stays within specification in a soft drinks processing unit whose premier brands are marketed in India. A few selected critical parameters in soft drinks processing were considered for this study: gas volume concentration, Brix concentration, and crock torque. Relevant statistical parameters were assessed from a process capability indices perspective: short-term capability and long-term capability. For the assessment we used real-time data from a soft drinks bottling company located in the state of Chhattisgarh, India. The study identified reasons for variation in the process, which were validated using ANOVA; the Taguchi loss function was also applied to estimate the waste in monetary terms, which the organization can use to improve its process parameters. This research work has substantially benefitted the organization in understanding the variation of the selected critical parameters for achieving zero rejection.
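The capability indices referred to here follow the standard definitions Cp = (USL - LSL) / 6σ and Cpk = min(USL - μ, μ - LSL) / 3σ. The following minimal sketch computes both from sample data; the Brix readings and specification limits are hypothetical, not the plant's actual values.

```python
import numpy as np

def process_capability(samples, lsl, usl):
    """Short-term capability indices from a sample of measurements."""
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6 * sigma)                  # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)     # actual, centring-aware
    return cp, cpk

# Hypothetical Brix readings with specification limits 10.2-10.8
rng = np.random.default_rng(5)
brix = rng.normal(10.45, 0.08, 200)
print(process_capability(brix, 10.2, 10.8))
```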
National perspective on in-hospital emergency units in Iraq
Lafta, Riyadh K.; Al-Nuaimi, Maha A.
2013-01-01
Background: Hospitals play a crucial role in providing communities with essential medical care during times of disaster. The emergency department is the most vital component of a hospital's inpatient business. In Iraq, at present, there are many casualties, which create a heavy workload and the need for structural assessment, equipment updating and evaluation of processes. Objective: To examine the current pragmatic functioning of the existing set-up of in-hospital emergency departments within some general hospitals in Baghdad and Mosul in order to establish a mechanism for future evaluation of the health services in our community. Methods: A cross-sectional study was employed to evaluate the structure, process and function of six major hospitals with emergency units: four major hospitals in Baghdad and two in Mosul. Results: The six surveyed emergency units are distinct units within general hospitals that collectively serve one quarter of the total population. More than one third of these units feature observation unit beds, laboratory services, imaging facilities, pharmacies with safe storage, and an ambulatory entrance. An operating room was found in only one hospital's reception and waiting area. Consultation/track areas, cubicles for infection control, and discrete tutorial rooms were not available. Patient assessment was performed, although without adequate privacy. An emergency specialist, family medicine specialist or interested general practitioner is present in one-third of the surveyed units. Psychiatrists, physiotherapists, occupational therapists, and social work links are not available. The shortage of medication, urgent vaccines and vital facilities is an obvious problem. Conclusions: The level and standards of care of our emergency units are underdeveloped. The inconsistent processes and inappropriate environments need to be reconstructed. The lack of drugs, commodities, communication infrastructure, audit and training all require an effective build-up. PMID:25003053
The air transportation industry birthplace of reliability-centered maintenance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matteson, T.D.
1996-08-01
The 1960s and 1970s provided a timely opportunity for examining and radically changing the process called "preventive maintenance" as it is applied to the aircraft used for scheduled air transportation. The Federal Aviation Administration and four major airlines, United, American, Pan American and Trans World, were the "principals" in that process. While United's work with the FAA on the Boeing 737 had opened the door a crack, the Boeing 747 presented a major opportunity to radically improve the process for maintenance program design. That program was guided by the results of United's analyses of failure data from operations of several fleets, each larger than 100 aircraft, and the concurrent experience of American, Pan American and Trans World. That knowledge provided the insights necessary to support an entirely different approach to maintenance program design. As a result, while United's existing maintenance program required scheduled overhaul of 339 items on each DC-8, it required overhaul of only 8 items on the B-747. Although the initial thrust of that work focused on components of active systems, there was concurrent work focused on items whose principal function was to carry the loads associated with operations. That program focused on the identification of structurally significant items and their classification as "safe life" or "damage tolerant" to determine what periodic replacements or repeated inspections were required. That work came to the attention of the Department of Defense, which supported preparation of the book-length report by F. Stanley Nowlan and Howard F. Heap at United Airlines entitled "Reliability-Centered Maintenance".
Modelling conflicts with cluster dynamics in networks
NASA Astrophysics Data System (ADS)
Tadić, Bosiljka; Rodgers, G. J.
2010-12-01
We introduce cluster dynamical models of conflicts in which only the largest cluster can be involved in an action. This mimics situations in which an attack is planned by a central body and the largest attack force is used. We study the model in its annealed random graph version, on a fixed network, and on a network evolving through the actions. The sizes of actions are distributed with a power-law tail; however, the exponent is non-universal and depends on the frequency of actions and the sparseness of the available connections between units. Allowing the network to reconstruct over time in a self-organized manner, e.g., by adding links based on previous liaisons between units, we find that the power-law exponent depends on the evolution time of the network. Its lower limit is given by the universal value 5/2, derived analytically for the case of random fragmentation processes. In the temporal patterns behind the size of actions we find long-range correlations in the time series of the number of clusters and a non-trivial distribution of the time that a unit waits between two actions. In the case of an evolving network the distribution develops a power-law tail, indicating that through repeated actions the system develops an internal structure with a hierarchy of units.
Automated collection and processing of environmental samples
Troyer, Gary L.; McNeece, Susan G.; Brayton, Darryl D.; Panesar, Amardip K.
1997-01-01
For monitoring an environmental parameter such as the level of nuclear radiation, at distributed sites, bar coded sample collectors are deployed and their codes are read using a portable data entry unit that also records the time of deployment. The time and collector identity are cross referenced in memory in the portable unit. Similarly, when later recovering the collector for testing, the code is again read and the time of collection is stored as indexed to the sample collector, or to a further bar code, for example as provided on a container for the sample. The identity of the operator can also be encoded and stored. After deploying and/or recovering the sample collectors, the data is transmitted to a base processor. The samples are tested, preferably using a test unit coupled to the base processor, and again the time is recorded. The base processor computes the level of radiation at the site during exposure of the sample collector, using the detected radiation level of the sample, the delay between recovery and testing, the duration of exposure and the half life of the isotopes collected. In one embodiment, an identity code and a site code are optically read by an image grabber coupled to the portable data entry unit.
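The computation attributed to the base processor combines decay correction with a build-up model: a collector accumulating activity at constant rate R while decaying satisfies N(T) = (R/λ)(1 - e^(-λT)), and the measurement must first be corrected back to the recovery time. The following sketch shows that arithmetic under hypothetical numbers; it is an illustration of the physics, not the patented system's code.

```python
import numpy as np

def site_deposition_rate(measured_counts, half_life_h, delay_h, exposure_h):
    """Infer the mean deposition rate during exposure from a delayed measurement.

    Step 1: decay-correct the measurement back to the recovery time.
    Step 2: invert the build-up equation N(T) = R/lam * (1 - exp(-lam*T)) for R."""
    lam = np.log(2) / half_life_h
    n_at_recovery = measured_counts * np.exp(lam * delay_h)
    return n_at_recovery * lam / (1 - np.exp(-lam * exposure_h))

# Hypothetical: 500 counts measured 12 h after recovering a 72 h deployment
print(site_deposition_rate(500.0, half_life_h=8.0, delay_h=12.0, exposure_h=72.0))
```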
Hayes, Margaret M; Chatterjee, Souvik; Schwartzstein, Richard M
2017-04-01
Critical thinking, the capacity to be deliberate about thinking, is increasingly the focus of undergraduate medical education, but is not commonly addressed in graduate medical education. Without critical thinking, physicians, and particularly residents, are prone to cognitive errors, which can lead to diagnostic errors, especially in a high-stakes environment such as the intensive care unit. Although challenging, critical thinking skills can be taught. At this time, there is a paucity of data to support an educational gold standard for teaching critical thinking, but we believe that five strategies, rooted in cognitive theory and our personal teaching experiences, provide an effective framework to teach critical thinking in the intensive care unit. The five strategies are: make the thinking process explicit by helping learners understand that the brain uses two cognitive processes: type 1, an intuitive pattern-recognizing process, and type 2, an analytic process; discuss cognitive biases, such as premature closure, and teach residents to minimize biases by expressing uncertainty and keeping differentials broad; model and teach inductive reasoning by utilizing concept and mechanism maps and explicitly teach how this reasoning differs from the more commonly used hypothetico-deductive reasoning; use questions to stimulate critical thinking: "how" or "why" questions can be used to coach trainees and to uncover their thought processes; and assess and provide feedback on learners' critical thinking. We believe these five strategies provide practical approaches for teaching critical thinking in the intensive care unit.
Acceleration of GPU-based Krylov solvers via data transfer reduction
Anzt, Hartwig; Tomov, Stanimire; Luszczek, Piotr; ...
2015-04-08
Krylov subspace iterative solvers are often the method of choice when solving large sparse linear systems. At the same time, hardware accelerators such as graphics processing units continue to offer significant floating point performance gains for matrix and vector computations through easy-to-use libraries of computational kernels. However, as these libraries are usually composed of a well optimized but limited set of linear algebra operations, applications that use them often fail to reduce certain data communications, and hence fail to leverage the full potential of the accelerator. In this study, we target the acceleration of Krylov subspace iterative methods for graphics processing units, and in particular the Biconjugate Gradient Stabilized (BiCGStab) solver. We show that significant improvement can be achieved by reformulating the method to reduce data communications through application-specific kernels, instead of using the generic BLAS kernels, e.g. as provided by NVIDIA's cuBLAS library, and by designing a GPU-specific sparse matrix-vector product kernel that is able to use the GPU's computing power more efficiently. Furthermore, we derive a model estimating the performance improvement, and use experimental data to validate the expected runtime savings. Finally, considering that the derived implementation achieves significantly higher performance, we assert that similar optimizations addressing algorithm structure, as well as the sparse matrix-vector product, are crucial for the subsequent development of high-performance GPU-accelerated Krylov subspace iterative methods.
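For reference, the solver being accelerated is standard BiCGStab. The NumPy sketch below shows the unpreconditioned algorithm on a hypothetical diagonally dominant test matrix; it illustrates the sequence of matrix-vector products and dot products whose fusion the paper targets, not the custom GPU kernels themselves.

```python
import numpy as np

def bicgstab(A, b, tol=1e-10, max_iter=1000):
    """Unpreconditioned BiCGStab; A may be a matrix or a matvec callable."""
    matvec = A if callable(A) else (lambda v: A @ v)
    x = np.zeros_like(b)
    r = b - matvec(x)
    r0 = r.copy()                      # fixed shadow residual
    rho = alpha = omega = 1.0
    v = p = np.zeros_like(b)
    for _ in range(max_iter):
        rho_new = r0 @ r
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
        v = matvec(p)
        alpha = rho / (r0 @ v)
        s = r - alpha * v
        if np.linalg.norm(s) < tol:
            return x + alpha * p
        t = matvec(s)
        omega = (t @ s) / (t @ t)
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            return x
    return x

rng = np.random.default_rng(6)
A = np.eye(100) * 10 + rng.random((100, 100)) * 0.1   # diagonally dominant
b = rng.random(100)
x = bicgstab(A, b)
print(np.linalg.norm(A @ x - b))       # residual should be near zero
```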
Children's Rights and Youth Justice: 20 Years of No Progress
ERIC Educational Resources Information Center
Smith, Roger
2010-01-01
The adoption of the United Nations Convention on the Rights of the Child (UNCRC) in 1989 and its ratification by the UK government two years later came at a time of considerable progress in youth justice. The Convention itself set clear standards of treatment, in terms of both processes and disposals, which appeared at the time to provide positive…
Design of an MR image processing module on an FPGA chip
NASA Astrophysics Data System (ADS)
Li, Limin; Wyrwicz, Alice M.
2015-06-01
We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases portability of the core. Direct matrix transposition usually required for execution of a 2D FFT is completely avoided using our newly designed address generation unit, which saves considerable on-chip block RAMs and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128 × 128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments.
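The row-column decomposition underlying such a 2D FFT core can be demonstrated in a few lines: a 2D FFT is two passes of 1D FFTs, one along each axis, and addressing columns directly avoids an explicit transpose, which is the role the address generation unit plays in hardware. This NumPy sketch verifies the equivalence; it is an illustration of the decomposition, not the FPGA design.

```python
import numpy as np

def fft2_row_column(img):
    """2D FFT via separable 1D FFTs, without an explicit transpose.

    Addressing the column axis directly (axis=0) stands in for the
    address generation unit described in the abstract."""
    tmp = np.fft.fft(img, axis=1)     # 1D FFT of every row
    return np.fft.fft(tmp, axis=0)    # 1D FFT of every column

img = np.random.default_rng(7).standard_normal((128, 128))
assert np.allclose(fft2_row_column(img), np.fft.fft2(img))
```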
Kristensen, Pia Kjær; Thillemann, Theis Muncholm; Søballe, Kjeld; Johnsen, Søren Paaske
2016-01-01
Background: Admission to orthogeriatric units improves clinical outcomes for patients with hip fracture; however, little is known about the underlying mechanisms. Objective: To compare quality of in-hospital care, 30-day mortality, time to surgery (TTS) and length of hospital stay (LOS) among patients with hip fracture admitted to orthogeriatric and ordinary orthopaedic units, respectively. Design: Population-based cohort study. Methods: Using prospectively collected data from the Danish Multidisciplinary Hip Fracture Registry, we identified 11,461 patients aged ≥65 years admitted with a hip fracture between 1 March 2010 and 30 November 2011. The patients were divided into two groups: (i) those treated at an orthogeriatric unit, where the geriatrician is an integrated part of the multidisciplinary team, and (ii) those treated at an ordinary orthopaedic unit, where geriatric or medical consultant services are available on request. Outcome measures were the quality of care as reflected by six process performance measures, 30-day mortality, TTS and LOS. Data were analysed using log-binomial, linear and logistic regression controlling for potential confounders. Results: Admittance to orthogeriatric units was associated with a higher chance of fulfilling five out of six process performance measures. Patients who were admitted to an orthogeriatric unit experienced a lower 30-day mortality (adjusted odds ratio (aOR) 0.69; 95% CI 0.54-0.88), whereas the LOS (adjusted relative time (aRT) 1.18; 95% CI 0.92-1.52) and the TTS (aRT 1.06; 95% CI 0.89-1.26) were similar. Conclusions: Admittance to an orthogeriatric unit was associated with improved quality of care and lower 30-day mortality among patients with hip fracture. © The Author 2015. Published by Oxford University Press on behalf of the British Geriatrics Society. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
GPU computing in medical physics: a review.
Pratx, Guillem; Xing, Lei
2011-05-01
The graphics processing unit (GPU) has emerged as a competitive platform for computing massively parallel problems. Many computing applications in medical physics can be formulated as data-parallel tasks that exploit the capabilities of the GPU for reducing processing times. The authors review the basic principles of GPU computing as well as the main performance optimization techniques, and survey existing applications in three areas of medical physics, namely image reconstruction, dose calculation and treatment plan optimization, and image processing.
Process improvement of knives production in a small scale industry
NASA Astrophysics Data System (ADS)
Ananto, Gamawan; Muktasim, Irfan
2017-06-01
A small-scale industry that produces several kinds of knives needs to increase its capacity due to market demand. Qualitatively, this case study consisted of formulating the problems, collecting and analyzing the necessary data, and determining possible recommendations for improvement. While the current capacity is only 9 units, it is expected that 20 units of knives will be produced per month. The process sequence is: profiling (a), truing (b), beveling (c), heat treatment (d), polishing (e), assembly (f), sharpening (g) and finishing (h). The first process (a) is held by an out-house vendor company, while the other steps, from (b) to (g), are executed by an in-house vendor. However, there is a high dependency upon the highly skilled operator who executes the in-house processes, which are mostly performed manually with several unbalanced successive tasks, where the processing time of one or two tasks requires a longer duration than the others since the operation relies merely on the operator's skill. The idea is the improvement or change of the profiling and beveling processes. Due to the poor surface quality and suboptimal hardness resulting from the laser cut machine used for profiling, it was considered to substitute this process with wire cutting, which is capable of obtaining good surface quality within a certain range of roughness levels. Through simple cutting experiments on the samples, it is expected that the generated surface quality is adequate to omit the truing process (b). In addition, the cutting experiments on one, two, and four test samples showed that the shortest time was obtained with four pieces in one cut. The technical parameters were set according to the recommendations of the machine standard, with reference to sample conditions such as thickness and path length that affected the rate of wear. Meanwhile, in order to guarantee the uniformity of the knife angles formed through the beveling process (c), a grinding fixture was created. This kind of tool diminishes the dependency upon the operator's skill as well. The main conclusions are: the substitution of the laser cut machine with a wire cut machine for the first task (a) could reduce the operation time from 60 to 39.26 minutes with good surface quality, and the truing process (b) could be omitted; an additional grinding fixture in the beveling process (c) is required, and two workstations have to be assigned instead of one as in the previous condition. These lead to improvements including the guaranteed uniformity of the knives' angles, reduced dependency on the operators' skills, a shortening of the cycle time from 855 to 420 minutes, and an increase in productivity from 9 units/month to 20 units/month.
Dawson, Gaynor W.; Mercer, Basil W.
1979-01-01
A process for removing pollutants or minerals from lake, river or ocean sediments or from mine tailings is disclosed. Magnetically attractable collection units containing an ion exchange or sorbent media with an affinity for a chosen target substance are distributed in the sediments or tailings. After a period of time has passed sufficient for the particles to bind up the target substances, a magnet drawn through the sediments or across the tailings retrieves the units along with the target substance.
Kannampallil, Thomas G; Franklin, Amy; Mishra, Rashmi; Almoosa, Khalid F; Cohen, Trevor; Patel, Vimla L
2013-01-01
Information in critical care environments is distributed across multiple sources, such as paper charts, electronic records, and support personnel. For decision-making tasks, physicians have to seek, gather, filter and organize information from various sources in a timely manner. The objective of this research is to characterize the nature of physicians' information seeking process, and the content and structure of clinical information retrieved during this process. Eight medical intensive care unit physicians provided a verbal think-aloud as they performed a clinical diagnosis task. Verbal descriptions of physicians' activities, the sources of information they used, the time spent on each information source, and interactions with other clinicians were captured for analysis. The data were analyzed using qualitative and quantitative approaches. We found that the information seeking process was exploratory and iterative and driven by the contextual organization of information. While there was no significant difference in the overall time spent on paper and electronic records, there was marginally greater relative information gain (i.e., more unique information retrieved per unit time) from electronic records (t(6)=1.89, p=0.1). Additionally, information retrieved from electronic records was at a higher level (i.e., observations and findings) in the knowledge structure than that from paper records, reflecting differences in the nature of knowledge utilization across resources. A process of local optimization drove the information seeking process: physicians utilized information that maximized their information gain even though it required significantly more cognitive effort. Implications for the design of health information technology solutions that seamlessly integrate information seeking activities within the workflow, such as enriching the clinical information space and supporting efficient clinical reasoning and decision-making, are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.
DDGIPS: a general image processing system in robot vision
NASA Astrophysics Data System (ADS)
Tian, Yuan; Ying, Jun; Ye, Xiuqing; Gu, Weikang
2000-10-01
Real-time image processing is the key task in robot vision. Owing to hardware limitations, many algorithm-oriented firmware systems were designed in the past, but their architectures were not flexible enough to serve as multi-algorithm development systems. The rapid development of microelectronics has produced many high-performance DSP chips and high-density FPGA chips, making it possible to construct a more flexible architecture for real-time image processing systems. In this paper, a Double DSP General Image Processing System (DDGIPS) is described. We construct a two-DSP-based computational system built around an FPGA, using two TMS320C6201s. The TMS320C6x devices are fixed-point processors based on an advanced VLIW CPU, which has eight functional units, including two multipliers and six arithmetic logic units. These features make the C6x a good candidate for a general purpose system. In our system, each of the two TMS320C6201s has a local memory space, and they also share a system memory space that enables them to intercommunicate and exchange data efficiently. At the same time, they can be directly interconnected in a star-shaped architecture. All of this is under the control of an FPGA group. As the core of the system, the FPGA plays a very important role: it takes charge of DSP control, DSP communication, memory space access arbitration, and the communication between the system and the host machine. By reconfiguring the FPGA, all of the interconnections between the two DSPs or between DSP and FPGA can be changed. In this way, users can easily rebuild the real-time image processing system according to the data stream and the task of the application and gain great flexibility.
Selecting automation for the clinical chemistry laboratory.
Melanson, Stacy E F; Lindeman, Neal I; Jarolim, Petr
2007-07-01
Laboratory automation proposes to improve the quality and efficiency of laboratory operations, and may provide a solution to the quality demands and staff shortages faced by today's clinical laboratories. Several vendors offer automation systems in the United States, with both subtle and obvious differences. Arriving at a decision to automate, and the ensuing evaluation of available products, can be time-consuming and challenging. Although considerable discussion concerning the decision to automate has been published, relatively little attention has been paid to the process of evaluating and selecting automation systems. To outline a process for evaluating and selecting automation systems as a reference for laboratories contemplating laboratory automation. Our Clinical Chemistry Laboratory staff recently evaluated all major laboratory automation systems in the United States, with their respective chemistry and immunochemistry analyzers. Our experience is described and organized according to the selection process, the important considerations in clinical chemistry automation, decisions and implementation, and we give conclusions pertaining to this experience. Including the formation of a committee, workflow analysis, submitting a request for proposal, site visits, and making a final decision, the process of selecting chemistry automation took approximately 14 months. We outline important considerations in automation design, preanalytical processing, analyzer selection, postanalytical storage, and data management. Selecting clinical chemistry laboratory automation is a complex, time-consuming process. Laboratories considering laboratory automation may benefit from the concise overview and narrative and tabular suggestions provided.
A Novel Process Audit for Standardized Perioperative Handoff Protocols.
Pallekonda, Vinay; Scholl, Adam T; McKelvey, George M; Amhaz, Hassan; Essa, Deanna; Narreddy, Spurthy; Tan, Jens; Templonuevo, Mark; Ramirez, Sasha; Petrovic, Michelle A
2017-11-01
A perioperative handoff protocol provides a standardized delivery of communication during a handoff that occurs from the operating room to the postanesthesia care unit or ICU. The protocol's success is dependent, in part, on its continued proper use over time. A novel process audit was developed to help ensure that a perioperative handoff protocol is used accurately and appropriately over time. The Audit Observation Form is used for the Audit Phase of the process audit, while the Audit Averages Form is used for the Data Analysis Phase. Employing minimal resources and using quantitative methods, the process audit provides the necessary means to evaluate the proper execution of any perioperative handoff protocol. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Boyarnikov, A. V.; Boyarnikova, L. V.; Kozhushko, A. A.; Sekachev, A. F.
2017-08-01
In this article the process of verification (calibration) of the secondary equipment of oil metering units is considered. The purpose of the work is to increase the reliability and reduce the complexity of this process by developing a software and hardware system that provides automated verification and calibration. The hardware part of this complex performs the commutation of the measuring channels of the verified controller and the reference channels of the calibrator in accordance with the programmed algorithm. The developed software controls the commutation of channels, sets values on the calibrator, reads the measured data from the controller, calculates errors and compiles protocols. This system can be used for checking the controllers of the secondary equipment of oil metering units in the automatic verification mode (with an open communication protocol) or in the semi-automatic verification mode (without it). A distinctive feature of the approach is the development of a universal signal switch operating under software control, which can be configured for various verification (calibration) methods, allowing the entire range of controllers of metering unit secondary equipment to be covered. The use of automatic verification with the help of the hardware and software system shortens the verification time by a factor of 5-10 and increases the reliability of measurements, excluding the influence of the human factor.
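The error calculation step can be illustrated with a short sketch comparing controller readings against calibrator setpoints. The channel range, verification points, and acceptance limit below are hypothetical, and percent-of-span error is one common convention rather than the specific method used by the authors.

```python
def verification_errors(setpoints, readings, span):
    """Relative (percent-of-span) error at each verification point."""
    return [100.0 * (r - s) / span for s, r in zip(setpoints, readings)]

# Hypothetical 4-20 mA channel verified at five points
setpoints = [4.0, 8.0, 12.0, 16.0, 20.0]
readings = [4.01, 8.02, 11.98, 16.03, 19.99]
errors = verification_errors(setpoints, readings, span=16.0)
ok = all(abs(e) <= 0.1 for e in errors)   # assumed 0.1 % acceptance limit
print(errors, "PASS" if ok else "FAIL")
```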
NASA Astrophysics Data System (ADS)
Niwase, Hiroaki; Takada, Naoki; Araki, Hiromitsu; Maeda, Yuki; Fujiwara, Masato; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi
2016-09-01
Parallel calculations of large-pixel-count computer-generated holograms (CGHs) are suitable for multiple-graphics processing unit (multi-GPU) cluster systems. However, it is not easy for a multi-GPU cluster system to accomplish fast CGH calculations when CGH transfers between PCs are required. In these cases, the CGH transfer between the PCs becomes a bottleneck. Usually, this problem occurs only in multi-GPU cluster systems with a single spatial light modulator. To overcome this problem, we propose a simple method using the InfiniBand network. The computational speed of the proposed method using 13 GPUs (NVIDIA GeForce GTX TITAN X) was more than 3000 times faster than that of a CPU (Intel Core i7 4770) when the number of three-dimensional (3-D) object points exceeded 20,480. In practice, we achieved ˜40 tera floating point operations per second (TFLOPS) when the number of 3-D object points exceeded 40,960. Our proposed method was able to reconstruct a real-time movie of a 3-D object comprising 95,949 points.
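For context, a point-source CGH of the kind parallelized here superposes one Fresnel zone pattern per object point, which is why the cost grows with both pixel count and point count. The following NumPy sketch shows that summation serially; the pixel pitch, wavelength, and object points are hypothetical, and the multi-GPU decomposition and InfiniBand transfer are not shown.

```python
import numpy as np

def cgh_fresnel(points, nx=256, ny=256, pitch=8e-6, wavelength=532e-9):
    """Superpose Fresnel zone patterns from 3-D object points.

    points: iterable of (x, y, z, amplitude); returns a real-valued hologram."""
    ix = (np.arange(nx) - nx / 2) * pitch
    iy = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(ix, iy)
    H = np.zeros((ny, nx))
    for x0, y0, z0, a in points:       # the per-point work GPUs run in parallel
        phase = np.pi / (wavelength * z0) * ((X - x0) ** 2 + (Y - y0) ** 2)
        H += a * np.cos(phase)
    return H

pts = [(0.0, 0.0, 0.1, 1.0), (1e-4, -2e-4, 0.12, 0.8)]   # hypothetical points
H = cgh_fresnel(np.array(pts))
```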
GPU acceleration of Runge Kutta-Fehlberg and its comparison with Dormand-Prince method
NASA Astrophysics Data System (ADS)
Seen, Wo Mei; Gobithaasan, R. U.; Miura, Kenjiro T.
2014-07-01
There is a significant reduction of processing time and a speedup of performance in computer graphics with the emergence of Graphics Processing Units (GPUs). GPUs have been developed to surpass the Central Processing Unit (CPU) in terms of performance and processing speed. This evolution has opened up a new area in computing and research where the highly parallel GPU is used for non-graphical algorithms. Physical or phenomenal simulations and modelling can be accelerated through General Purpose Graphics Processing Unit (GPGPU) and Compute Unified Device Architecture (CUDA) implementations. These phenomena can be represented with mathematical models in the form of Ordinary Differential Equations (ODEs), which capture the rate of change between independent and dependent variables. ODEs are numerically integrated over time in order to simulate these behaviours. The classical Runge-Kutta (RK) scheme is the common method used to numerically solve ODEs. The Runge-Kutta-Fehlberg (RKF) scheme has been specially developed to provide an estimate of the principal local truncation error at each step, known as the embedded estimate technique. This paper delves into the implementation of the RKF scheme on GPU devices and compares its results with the Dormand-Prince method. Pseudo code is developed to show the implementation in detail. Hence, practitioners will be able to understand the data allocation in the GPU, the formation of RKF kernels and the flow of data to/from GPU-CPU upon RKF kernel evaluation. The pseudo code is then written in the C language and two ODE models are executed to show the achievable speedup as compared to a CPU implementation. The accuracy and efficiency of the proposed implementation method are discussed in the final section of this paper.
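The embedded estimate technique mentioned above works by evaluating a 4th- and a 5th-order solution from the same six stages and taking their difference as the local error. The following Python sketch shows one RKF 4(5) step with the standard Fehlberg coefficients; it runs on the CPU and does not show the GPU kernel mapping discussed in the paper, and the test problem is hypothetical.

```python
import numpy as np

def rkf45_step(f, t, y, h):
    """One Runge-Kutta-Fehlberg 4(5) step with an embedded error estimate."""
    k1 = f(t, y)
    k2 = f(t + h/4, y + h*(k1/4))
    k3 = f(t + 3*h/8, y + h*(3*k1/32 + 9*k2/32))
    k4 = f(t + 12*h/13, y + h*(1932*k1 - 7200*k2 + 7296*k3)/2197)
    k5 = f(t + h, y + h*(439*k1/216 - 8*k2 + 3680*k3/513 - 845*k4/4104))
    k6 = f(t + h/2, y + h*(-8*k1/27 + 2*k2 - 3544*k3/2565
                           + 1859*k4/4104 - 11*k5/40))
    y4 = y + h*(25*k1/216 + 1408*k3/2565 + 2197*k4/4104 - k5/5)
    y5 = y + h*(16*k1/135 + 6656*k3/12825 + 28561*k4/56430
                - 9*k5/50 + 2*k6/55)
    return y5, np.abs(y5 - y4)         # higher-order solution, local error estimate

# Hypothetical test problem: y' = -2ty, y(0) = 1 (exact solution exp(-t^2))
f = lambda t, y: -2 * t * y
y, err = rkf45_step(f, 0.0, np.array([1.0]), 0.1)
print(y, err)
```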
Driscoll, Jessica; Hay, Lauren E.; Bock, Andrew R.
2017-01-01
Assessment of water resources at a national scale is critical for understanding their vulnerability to future changes in policy and climate. Representation of the spatiotemporal variability in snowmelt processes in continental-scale hydrologic models is critical for assessment of water resource response to continued climate change. Continental-extent hydrologic models such as the U.S. Geological Survey National Hydrologic Model (NHM) represent snowmelt processes through the application of snow depletion curves (SDCs). SDCs relate normalized snow water equivalent (SWE) to normalized snow covered area (SCA) over a snowmelt season for a given modeling unit. SDCs were derived using output from the operational Snow Data Assimilation System (SNODAS) snow model as daily 1-km gridded SWE over the conterminous United States. Daily SNODAS output were aggregated to a predefined watershed-scale geospatial fabric and also used to calculate SCA from October 1, 2004 to September 30, 2013. The spatiotemporal variability in SNODAS output at the watershed scale was evaluated through the spatial distribution of the median and standard deviation for the time period. Representative SDCs for each watershed-scale modeling unit over the conterminous United States (n = 54,104) were selected using a consistent methodology and used to create categories of snowmelt based on SDC shape. The relation of the SDC categories to topographic and climatic variables allows for national-scale categorization of snowmelt processes.
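The SDC construction described here reduces to normalizing paired SWE and SCA series by their seasonal maxima over the melt season. The following NumPy sketch shows that normalization on a synthetic daily series for one modeling unit; the data and the melt-season definition (from peak SWE to season end) are hypothetical simplifications, not the SNODAS-based workflow.

```python
import numpy as np

def snow_depletion_curve(swe, sca):
    """Return (normalized SWE, normalized SCA) pairs over the melt season,
    taken here as the period from peak SWE to the end of the series."""
    swe, sca = np.asarray(swe, float), np.asarray(sca, float)
    melt = slice(int(np.argmax(swe)), len(swe))
    return swe[melt] / swe.max(), sca[melt] / sca.max()

# Hypothetical daily series for one modeling unit
days = np.arange(120)
swe = np.maximum(0, 300 * np.sin(np.pi * days / 120))   # mm, peaks mid-season
sca = np.clip(swe / 150, 0, 1)                          # fractional snow cover
norm_swe, norm_sca = snow_depletion_curve(swe, sca)
```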
Execution of a parallel edge-based Navier-Stokes solver on commodity graphics processor units
NASA Astrophysics Data System (ADS)
Corral, Roque; Gisbert, Fernando; Pueblas, Jesus
2017-02-01
The implementation of an edge-based three-dimensional Reynolds Averaged Navier-Stokes solver for unstructured grids able to run on multiple graphics processing units (GPUs) is presented. Loops over edges, which are the most time-consuming part of the solver, have been written to exploit the massively parallel capabilities of GPUs. Non-blocking communications between parallel processes and between the GPU and the central processing unit (CPU) have been used to enhance code scalability. The code is written using a mixture of C++ and OpenCL, to allow the execution of the source code on GPUs. The Message Passing Interface (MPI) library is used to allow the parallel execution of the solver on multiple GPUs. A comparative study of the solver parallel performance is carried out using a cluster of CPUs and another of GPUs. It is shown that a single GPU is up to 64 times faster than a single CPU core. The parallel scalability of the solver is mainly degraded due to the loss of computing efficiency of the GPU when the size of the case decreases. However, for large enough grid sizes, the scalability is strongly improved. A cluster featuring commodity GPUs and a high bandwidth network is ten times less costly and consumes 33% less energy than a CPU-based cluster with an equivalent computational power.
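The edge loop pattern referred to here computes one flux per edge and scatters it with opposite signs to the edge's two end nodes, which is exactly the gather/scatter a GPU parallelizes (with atomic adds). The NumPy sketch below illustrates the pattern on a toy 1-D mesh with a placeholder flux; it is a structural illustration, not the solver's actual flux scheme.

```python
import numpy as np

def edge_based_residual(u, edges, weights):
    """Accumulate a nodal residual by looping over edges instead of cells.

    For each edge (i, j) a flux is computed once and scattered with
    opposite signs to the two end nodes."""
    i, j = edges[:, 0], edges[:, 1]
    flux = weights * 0.5 * (u[i] + u[j])   # toy central flux, a placeholder
    res = np.zeros_like(u)
    np.add.at(res, i, flux)                # scatter-add (atomic add on a GPU)
    np.add.at(res, j, -flux)
    return res

# Hypothetical 1-D chain mesh of 6 nodes
edges = np.array([(k, k + 1) for k in range(5)])
u = np.linspace(0.0, 1.0, 6)
print(edge_based_residual(u, edges, weights=np.ones(5)))
```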
Evaluation of the ADAPTIR System for Work Zone Traffic Control
DOT National Transportation Integrated Search
1999-11-01
The ADAPTIR system (Automated Data Acquisition and Processing of Traffic Information in Real Time) uses variable message signs (VMS) equipped with radar units, along with a software program to interpret the data, to display appropriate warning and ad...
2001-05-01
United States General Accounting Office report to congressional requesters, May 2001: Licensing Hydropower Projects - Better Time and Cost Data Needed to Reach Informed Decisions. General Accounting Office, PO Box 37050, Washington, DC 20013. Report GAO-01-499.
ERIC Educational Resources Information Center
Drury, Debra
2006-01-01
Kids today are growing up with televisions, movies, videos and DVDs, so it's logical to assume that this type of media could be motivating and used to great effect in the classroom. But at what point should film and other visual media be used? Are there times in the inquiry process when showing a film or incorporating other visual media is more…
A noninvasive technique for real-time detection of bruises in apple surface based on machine vision
NASA Astrophysics Data System (ADS)
Zhao, Juan; Peng, Yankun; Dhakal, Sagar; Zhang, Leilei; Sasao, Akira
2013-05-01
The apple is one of the most highly consumed fruits in daily life. However, due to its high damage potential and the large influence of damage on taste and export value, apple quality has to be assessed before the fruit reaches the consumer's hand. This study aimed to develop a hardware and software unit for real-time detection of apple bruises based on machine vision technology. The hardware unit consisted of a light shield with two monochrome cameras installed at different angles, an LED light source to illuminate the sample, and sensors at the entrance of the box to signal the positioning of the sample. A graphical user interface (GUI) was developed on the VS2010 platform to control the overall hardware and display the image processing results. The hardware-software system acquires the images of three samples from each camera and displays the image processing results in real time. An image processing algorithm was developed using OpenCV and C++. The software is able to control the hardware system to classify apples into two grades based on the presence or absence of surface bruises with a size of 5 mm. The experimental results are promising, and the system, with further modification, can be applicable to industrial production in the near future.
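A common baseline for this kind of bruise detection is thresholding followed by a connected-component size filter. The following Python sketch shows that idea with scipy.ndimage on a synthetic image; the intensity threshold and the pixel-area stand-in for the 5 mm criterion are illustrative assumptions, not the authors' OpenCV/C++ algorithm.

```python
import numpy as np
from scipy import ndimage

def detect_bruises(gray, dark_thresh=90, min_area_px=50):
    """Flag dark regions larger than a size threshold.

    gray: 8-bit grayscale image; min_area_px stands in for the 5 mm
    criterion after pixel-size calibration."""
    mask = gray < dark_thresh                    # bruises image darker
    labels, n = ndimage.label(mask)              # connected components
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return [k + 1 for k, a in enumerate(areas) if a >= min_area_px]

# Hypothetical image: bright apple surface with one dark blob
img = np.full((200, 200), 180, dtype=np.uint8)
img[60:80, 100:125] = 60
print("bruised regions:", detect_bruises(img))
```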
Oberoi, Harinder Singh; Vadlani, Praveen V; Saida, Lavudi; Bansal, Sunil; Hughes, Joshua D
2011-07-01
Dried and ground banana peel biomass (BP) after hydrothermal sterilization pretreatment was used for ethanol production using simultaneous saccharification and fermentation (SSF). Central composite design (CCD) was used to optimize concentrations of cellulase and pectinase, temperature and time for ethanol production from BP using SSF. Analysis of variance showed a high coefficient of determination (R(2)) value of 0.92 for ethanol production. On the basis of model graphs and numerical optimization, the validation was done in a laboratory batch fermenter with cellulase, pectinase, temperature and time of nine cellulase filter paper unit/gram cellulose (FPU/g-cellulose), 72 international units/gram pectin (IU/g-pectin), 37 °C and 15 h, respectively. The experiment using optimized parameters in batch fermenter not only resulted in higher ethanol concentration than the one predicted by the model equation, but also saved fermentation time. This study demonstrated that both hydrothermal pretreatment and SSF could be successfully carried out in a single vessel, and use of optimized process parameters helped achieve significant ethanol productivity, indicating commercial potential for the process. To the best of our knowledge, ethanol concentration and ethanol productivity of 28.2 g/l and 2.3 g/l/h, respectively from banana peels have not been reported to date. Copyright © 2011 Elsevier Ltd. All rights reserved.
Noncontact Infrared-Mediated Heat Transfer During Continuous Freeze-Drying of Unit Doses.
Van Bockstal, Pieter-Jan; De Meyer, Laurens; Corver, Jos; Vervaet, Chris; De Beer, Thomas
2017-01-01
Recently, an innovative continuous freeze-drying concept for unit doses was proposed, based on spinning the vials during freezing. An efficient heat transfer during drying is essential to continuously process these spin frozen vials. Therefore, the applicability of noncontact infrared (IR) radiation was examined. The impact of several process and formulation variables on the mass of sublimed ice after 15 min of primary drying (i.e., sublimation rate) and the total drying time was examined. Two experimental designs were performed in which electrical power to the IR heaters, distance between the IR heaters and the spin frozen vial, chamber pressure, product layer thickness, and 5 model formulations were included as factors. A near-infrared spectroscopy method was developed to determine the end point of primary and secondary drying. The sublimation rate was mainly influenced by the electrical power to the IR heaters and the distance between the IR heaters and the vial. The layer thickness had the largest effect on total drying time. The chamber pressure and the 5 model formulations had no significant impact on sublimation rate and total drying time, respectively. This study shows that IR radiation is suitable to provide the energy during the continuous processing of spin frozen vials. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Programmable partitioning for high-performance coherence domains in a multiprocessor system
Blumrich, Matthias A [Ridgefield, CT]; Salapura, Valentina [Chappaqua, NY]
2011-01-25
A multiprocessor computing system and a method of logically partitioning a multiprocessor computing system are disclosed. The multiprocessor computing system comprises a multitude of processing units, and a multitude of snoop units. Each of the processing units includes a local cache, and the snoop units are provided for supporting cache coherency in the multiprocessor system. Each of the snoop units is connected to a respective one of the processing units and to all of the other snoop units. The multiprocessor computing system further includes a partitioning system for using the snoop units to partition the multitude of processing units into a plurality of independent, memory-consistent, adjustable-size processing groups. Preferably, when the processor units are partitioned into these processing groups, the partitioning system also configures the snoop units to maintain cache coherency within each of said groups.
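The following C++ toy models the partitioning idea only: each snoop unit holds a bitmask naming the processors in its coherence group and forwards coherence traffic only within that group. The bitmask scheme and all names are assumptions for illustration, not the patented hardware design.

#include <cstdint>
#include <iostream>
#include <vector>

// Toy model: a snoop unit forwards invalidations between two processors
// only if both belong to its configured coherence group.
struct SnoopUnit {
    uint64_t groupMask = ~0ull; // default: one global coherence domain
    bool forwards(int fromCpu, int toCpu) const {
        return ((groupMask >> fromCpu) & 1) && ((groupMask >> toCpu) & 1);
    }
};

int main() {
    const int nCpu = 8;
    std::vector<SnoopUnit> snoop(nCpu);
    // Partition into two independent, memory-consistent groups: {0..3} and {4..7}.
    for (int i = 0; i < nCpu; ++i)
        snoop[i].groupMask = (i < 4) ? 0x0Full : 0xF0ull;
    std::cout << snoop[0].forwards(0, 3) << " "   // 1: same group
              << snoop[0].forwards(0, 5) << "\n"; // 0: different groups
    return 0;
}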
Lossless data compression for improving the performance of a GPU-based beamformer.
Lok, U-Wai; Fan, Gang-Wei; Li, Pai-Chi
2015-04-01
The powerful parallel computation ability of a graphics processing unit (GPU) makes it feasible to perform dynamic receive beamforming. However, a real-time GPU-based beamformer requires a high data rate to transfer radio-frequency (RF) data from hardware to software memory, as well as from central processing unit (CPU) to GPU memory. Data compression methods (e.g., Joint Photographic Experts Group (JPEG)) are available for the hardware front end to reduce data size, alleviating the data transfer requirement of the hardware interface. Nevertheless, the required decoding time may even be larger than the transmission time of the original data, in turn degrading the overall performance of the GPU-based beamformer. This article proposes and implements a lossless compression-decompression algorithm, which enables compression and decompression of data in parallel. By this means, the data transfer requirement of the hardware interface and the transmission time of CPU-to-GPU data transfers are reduced, without sacrificing image quality. In simulation results, the compression ratio reached around 1.7. The encoder design of our lossless compression approach requires low hardware resources and reasonable latency in a field programmable gate array. In addition, the transmission time of transferring data from CPU to GPU with the parallel decoding process improved threefold, compared with transferring the original uncompressed data. These results show that our proposed lossless compression plus parallel decoder approach not only mitigates the transmission bandwidth requirement to transfer data from the hardware front end to the software system but also reduces the transmission time for CPU-to-GPU data transfer. © The Author(s) 2014.
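The abstract does not name the codec, so the following C++ sketch shows a generic block-independent delta scheme of the kind that parallelizes well (each block can be handled by one GPU thread block); it is an assumption for illustration, not the authors' algorithm.

#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

// Block-wise delta encoding of RF samples: each block stores its first
// sample verbatim and then sample-to-sample differences, so blocks can
// be encoded and decoded independently, i.e., in parallel.
std::vector<int16_t> deltaEncode(const std::vector<int16_t>& x, size_t blk) {
    std::vector<int16_t> out(x.size());
    for (size_t b = 0; b < x.size(); b += blk) {
        out[b] = x[b]; // block anchor
        for (size_t i = b + 1; i < std::min(b + blk, x.size()); ++i)
            out[i] = static_cast<int16_t>(x[i] - x[i - 1]); // small residuals pack well
    }
    return out;
}

std::vector<int16_t> deltaDecode(const std::vector<int16_t>& d, size_t blk) {
    std::vector<int16_t> out(d.size());
    for (size_t b = 0; b < d.size(); b += blk) {
        out[b] = d[b];
        for (size_t i = b + 1; i < std::min(b + blk, d.size()); ++i)
            out[i] = static_cast<int16_t>(out[i - 1] + d[i]);
    }
    return out;
}

int main() {
    std::vector<int16_t> rf(4096);
    for (size_t i = 0; i < rf.size(); ++i) rf[i] = static_cast<int16_t>((100 * i) % 331);
    std::cout << (deltaDecode(deltaEncode(rf, 256), 256) == rf ? "lossless" : "bug") << "\n";
    return 0;
}

An entropy-coding stage (e.g., bit packing of the residuals) would follow in practice; the delta stage alone already shows why per-block independence matters for parallel decoding.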
Mitchell, Sarah; Dale, Jeremy
2015-04-01
The majority of children and young people who die in the United Kingdom have pre-existing life-limiting illness. Currently, most such deaths occur in hospital, most frequently within the intensive care environment. The study aimed to explore the experiences of senior medical and nursing staff regarding the challenges associated with Advance Care Planning in relation to children and young people with life-limiting illnesses in the Paediatric Intensive Care Unit environment, and opportunities for improvement. Qualitative one-to-one, semi-structured interviews were conducted with Paediatric Intensive Care Unit consultants and senior nurses to gain rich, contextual data. Thematic content analysis was carried out. The setting was a UK tertiary referral centre Paediatric Intensive Care Unit. Eight Paediatric Intensive Care Unit consultants and six senior nurses participated. Four main themes emerged: recognition of an illness as 'life-limiting'; Advance Care Planning as a multi-disciplinary, structured process; the value of Advance Care Planning; and adverse consequences of inadequate Advance Care Planning. Potential benefits of Advance Care Planning include providing the opportunity to make decisions regarding end-of-life care in a timely fashion and in partnership with patients, where possible, and their families. Barriers to the process include the recognition of the life-limiting nature of an illness and gaining consensus of medical opinion. Organisational improvements towards earlier recognition of life-limiting illness and subsequent Advance Care Planning were recommended, including education and training, as well as the need for wider societal debate. Advance Care Planning for children and young people with life-limiting conditions has the potential to improve care for patients and their families, providing the opportunity to make decisions based on clear information at an appropriate time, and to avoid potentially harmful intensive clinical interventions at the end of life. © The Author(s) 2015.
Upgrading the fuel-handling machine of the Novovoronezh nuclear power plant unit no. 5
NASA Astrophysics Data System (ADS)
Terekhov, D. V.; Dunaev, V. I.
2014-02-01
The calculation of safety parameters was carried out in the process of upgrading the fuel-handling machine (FHM) of the Novovoronezh nuclear power plant (NPP) unit no. 5 based on the results of quantitative safety analysis of nuclear fuel transfer operations using a dynamic logical-and-probabilistic model of the processing procedure. Specific engineering and design concepts that made it possible to reduce the probability of damaging the fuel assemblies (FAs) when performing various technological operations by an order of magnitude and introduce more flexible algorithms into the modernized FHM control system were developed. The results of pilot operation during two refueling campaigns prove that the total reactor shutdown time is lowered.
Correia, J R C C C; Martins, C J A P
2017-10-01
Topological defects unavoidably form at symmetry breaking phase transitions in the early universe. To probe the parameter space of theoretical models and set tighter experimental constraints (exploiting the recent advances in astrophysical observations), one requires more and more demanding simulations, and therefore more hardware resources and computation time. Improving the speed and efficiency of existing codes is essential. Here we present a general purpose graphics-processing-unit implementation of the canonical Press-Ryden-Spergel algorithm for the evolution of cosmological domain wall networks. This is ported to the Open Computing Language standard, and as a consequence significant speedups are achieved both in two-dimensional (2D) and 3D simulations.
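For orientation, here is a serial C++ sketch of the kind of damped scalar-field stencil update that the Press-Ryden-Spergel scheme evolves; the GPU version parallelizes this per-grid-point loop. The 2D grid, time step, damping coefficient, and double-well potential are illustrative simplifications of the cosmological setup, not the authors' OpenCL code.

#include <cstdlib>
#include <iostream>
#include <vector>

// One damped leapfrog-style sweep over a periodic 2D field with a
// double-well potential, the stencil a domain-wall code applies at
// every grid point and time step.
int main() {
    const int N = 128;
    const double dt = 0.2, dx = 1.0, damp = 0.5, lambda = 1.0;
    std::vector<double> phi(N * N), vel(N * N, 0.0);
    for (int i = 0; i < N * N; ++i) phi[i] = (std::rand() % 2) ? 1.0 : -1.0; // random walls
    auto idx = [N](int x, int y) { return ((x + N) % N) * N + ((y + N) % N); };
    for (int step = 0; step < 100; ++step) {
        std::vector<double> next(phi);
        for (int x = 0; x < N; ++x)
            for (int y = 0; y < N; ++y) {
                const double p = phi[idx(x, y)];
                const double lap = (phi[idx(x + 1, y)] + phi[idx(x - 1, y)]
                                  + phi[idx(x, y + 1)] + phi[idx(x, y - 1)] - 4.0 * p)
                                  / (dx * dx);
                const double force = lap - lambda * p * (p * p - 1.0); // -dV/dphi
                vel[idx(x, y)] += dt * (force - damp * vel[idx(x, y)]); // Hubble-like drag
                next[idx(x, y)] = p + dt * vel[idx(x, y)];
            }
        phi.swap(next);
    }
    std::cout << "phi(0,0) = " << phi[0] << "\n";
    return 0;
}

Every grid point is updated independently from the previous time level, which is why the algorithm maps naturally onto one work-item per point in OpenCL.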
Philips, Patrick J.; Stinson, Beverley; Zaugg, Steven D.; Furlong, Edward T.; Kolpin, Dana W.; Esposito, Kathleen; Bodniewicz, B.; Pape, R.; Anderson, J.
2005-01-01
The second phase of the study focused on one of the most common wastewater treatment processes operated in the United States, the Activated Sludge process. Using four controlled parallel activated sludge pilots, a more detailed assessment of the impact of Sludge Retention Time (SRT) on the reduction or removal of ECs was performed.
NASA Technical Reports Server (NTRS)
1998-01-01
A Space Act Agreement between Kennedy Space Center and Surtreat Southeast, Inc., resulted in a new treatment that keeps buildings from corroding away over time. Structural corrosion is a multi-billion dollar problem in the United States. The agreement merged Kennedy Space Center's research into electrical treatments of structural corrosion with chemical processes developed by Surtreat. Combining NASA and Surtreat technologies has resulted in a unique process with broad corrosion-control applications.
Impact of nowcasting on the production and processing of agricultural crops. [in the US
NASA Technical Reports Server (NTRS)
Dancer, W. S.; Tibbitts, T. W.
1973-01-01
The value was studied of improved weather information and weather forecasting to farmers, growers, and agricultural processing industries in the United States. The study was undertaken to identify the production and processing operations that could be improved with accurate and timely information on changing weather patterns. Estimates were then made of the potential savings that could be realized with accurate information about the prevailing weather and short term forecasts for up to 12 hours. This weather information has been termed nowcasting. The growing, marketing, and processing operations of the twenty most valuable crops in the United States were studied to determine those operations that are sensitive to short-term weather forecasting. Agricultural extension specialists, research scientists, growers, and representatives of processing industries were consulted and interviewed. The value of the crops included in this survey and their production levels are given. The total value for crops surveyed exceeds 24 billion dollars and represents more than 92 percent of total U.S. crop value.
Scheinost, Dustin; Hampson, Michelle; Qiu, Maolin; Bhawnani, Jitendra; Constable, R. Todd; Papademetris, Xenophon
2013-01-01
Real-time functional magnetic resonance imaging (rt-fMRI) has recently gained interest as a possible means to facilitate the learning of certain behaviors. However, rt-fMRI is limited by processing speed and available software, and continued development is needed for rt-fMRI to progress further and become feasible for clinical use. In this work, we present an open-source rt-fMRI system for biofeedback powered by a novel Graphics Processing Unit (GPU) accelerated motion correction strategy as part of the BioImage Suite project (www.bioimagesuite.org). Our system contributes to the development of rt-fMRI by presenting a motion correction algorithm that provides an estimate of motion with essentially no processing delay as well as a modular rt-fMRI system design. Using empirical data from rt-fMRI scans, we assessed the quality of motion correction in this new system. The present algorithm performed comparably to standard (non real-time) offline methods and outperformed other real-time methods based on zero order interpolation of motion parameters. The modular approach to the rt-fMRI system allows the system to be flexible to the experiment and feedback design, a valuable feature for many applications. We illustrate the flexibility of the system by describing several of our ongoing studies. Our hope is that continuing development of open-source rt-fMRI algorithms and software will make this new technology more accessible and adaptable, and will thereby accelerate its application in the clinical and cognitive neurosciences. PMID:23319241
Chemical interaction matrix between reagents in a Purex based process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brahman, R.K.; Hennessy, W.P.; Paviet-Hartmann, P.
2008-07-01
The United States Department of Energy (DOE) is the responsible entity for the disposal of the United States excess weapons-grade plutonium. DOE selected a PUREX-based process to convert plutonium to low-enriched mixed oxide fuel for use in commercial nuclear power plants. To initiate this process in the United States, a Mixed Oxide (MOX) Fuel Fabrication Facility (MFFF) is under construction and will be operated by Shaw AREVA MOX Services at the Savannah River Site. This facility will be licensed and regulated by the U.S. Nuclear Regulatory Commission (NRC). A PUREX process, similar to the one used at La Hague, France, will purify plutonium feedstock through solvent extraction. MFFF employs two major process operations to manufacture MOX fuel assemblies: (1) the Aqueous Polishing (AP) process to remove gallium and other impurities from plutonium feedstock and (2) the MOX fuel fabrication process (MP), which processes the oxides into pellets and manufactures the MOX fuel assemblies. The AP process consists of three major steps, dissolution, purification, and conversion, and is the center of the primary chemical processing. A study of process hazards controls has been initiated that will provide knowledge of and protection against the chemical risks associated with mixing of reagents over the lifetime of the process. This paper presents a comprehensive chemical interaction matrix evaluation for the reagents used in the PUREX-based process. The chemical interaction matrix supplements the process conditions by providing a checklist of any potential inadvertent chemical reactions that may take place. It also identifies the chemical compatibility/incompatibility of the reagents if mixed by failure of operations or equipment within the process itself or mixed inadvertently by a technician in the laboratories.
NASA Astrophysics Data System (ADS)
Sewell, Stephen
This thesis introduces a software framework that effectively utilizes low-cost commercially available Graphics Processing Units (GPUs) to simulate complex scientific plasma phenomena that are modeled using the Particle-In-Cell (PIC) paradigm. The software framework that was developed conforms to the Compute Unified Device Architecture (CUDA), a standard for general purpose graphics processing that was introduced by NVIDIA Corporation. This framework has been verified for correctness and applied to advance the state of understanding of the electromagnetic aspects of the development of the Aurora Borealis and Aurora Australis. For each phase of the PIC methodology, this research has identified one or more methods to exploit the problem's natural parallelism and effectively map it for execution on the graphics processing unit and its host processor. The sources of overhead that can reduce the effectiveness of parallelization for each of these methods have also been identified. One of the novel aspects of this research was the utilization of particle sorting during the grid interpolation phase. The final representation resulted in simulations that executed about 38 times faster than simulations that were run on a single-core general-purpose processing system. The scalability of this framework to larger problem sizes and future generation systems has also been investigated.
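The following minimal C++ example illustrates the particle-sorting idea: ordering particles by cell index makes the deposition/interpolation phase touch each grid cell contiguously, which is what lets it map cleanly onto GPU thread blocks. The 1D grid and nearest-grid-point weighting are simplifying assumptions.

#include <algorithm>
#include <iostream>
#include <vector>

// Sort particles by cell, then accumulate charge cell by cell; after the
// sort, all contributions to a given cell are adjacent in memory.
struct Particle { double x; double q; };

int main() {
    const int cells = 16;
    const double dx = 1.0;
    std::vector<Particle> p = {{3.7, 1}, {0.2, 1}, {3.1, -1}, {8.9, 1}};
    auto cellOf = [dx](const Particle& a) { return static_cast<int>(a.x / dx); };
    std::sort(p.begin(), p.end(),
              [&](const Particle& a, const Particle& b) { return cellOf(a) < cellOf(b); });
    std::vector<double> rho(cells, 0.0);
    for (const auto& a : p) rho[cellOf(a)] += a.q; // nearest-grid-point deposition
    for (int c = 0; c < cells; ++c)
        if (rho[c] != 0.0) std::cout << "cell " << c << ": " << rho[c] << "\n";
    return 0;
}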
Global geological map of Venus
NASA Astrophysics Data System (ADS)
Ivanov, Mikhail A.; Head, James W.
2011-10-01
The surface area of Venus (∼460×10⁶ km²) is ∼90% of that of the Earth. Using Magellan radar image and altimetry data, supplemented by Venera-15/16 radar images, we compiled a global geologic map of Venus at a scale of 1:10 M. We outline the history of geological mapping of the Earth and planets to illustrate the importance of utilizing the dual stratigraphic classification approach to geological mapping. Using this established approach, we identify 13 distinctive units on the surface of Venus and a series of structures and related features. We present the history and evolution of the definition and characterization of these units, explore and assess alternate methods and approaches that have been suggested, and trace the sequence of mapping from small areas to regional and global scales. We outline the specific defining nature and characteristics of these units, map their distribution, and assess their stratigraphic relationships. On the basis of these data, we then compare local and regional stratigraphic columns and compile a global stratigraphic column, defining rock-stratigraphic units, time-stratigraphic units, and geological time units. We use superposed craters, stratigraphic relationships and impact crater parabola degradation to assess the geologic time represented by the global stratigraphic column. Using the characteristics of these units, we interpret the geological processes that were responsible for their formation. On the basis of unit superposition and stratigraphic relationships, we interpret the sequence of events and processes recorded in the global stratigraphic column. The earliest part of the history of Venus (Pre-Fortunian) predates the observed surface geological features and units, although remnants may exist in the form of deformed rocks and minerals. We find that the observable geological history of Venus can be subdivided into three distinctive phases. The earlier phase (Fortunian Period, its lower stratigraphic boundary cannot be determined with the available data sets) involved intense deformation and building of regions of thicker crust (tessera). This was followed by the Guineverian Period. Distributed deformed plains, mountain belts, and regional interconnected groove belts characterize the first part, and the vast majority of coronae began to form during this time. The second part of the Guineverian Period involved global emplacement of vast and mildly deformed plains of volcanic origin. A period of global wrinkle ridge formation largely followed the emplacement of these plains. The third phase (Atlian Period) involved the formation of prominent rift zones and fields of lava flows unmodified by wrinkle ridges that are often associated with large shield volcanoes and, in places, with earlier-formed coronae. Atlian volcanism may continue to the present. About 70% of the exposed surface of Venus was resurfaced during the Guineverian Period and only about 16% during the Atlian Period. Estimates of model absolute ages suggest that the Atlian Period was about twice as long as the Guineverian and, thus, characterized by significantly reduced rates of volcanism and tectonism. The three major phases of activity documented in the global stratigraphy and geological map, and their interpreted temporal relations, provide a basis for assessing the geodynamical processes operating earlier in Venus history that led to the preserved record.
NASA Astrophysics Data System (ADS)
Szurgacz, Dawid
2018-01-01
The article discusses the basic functions of a powered roof support in a longwall unit. The support's function is to provide safety by protecting mine workings against uncontrolled falling of rocks. The subject of the research includes measures to shorten the time of roof support shifting. The roof support is adapted to transfer, under the hazard conditions of rock mass tremors, dynamic loads caused by mining exploitation. The article presents preliminary research results on reducing the time of the unit advance to increase the extraction process and thus reduce operating costs. Stand tests showed the ability to increase the flow through 3/2-way valve cartridges. The level of fluid flowing through the cartridges is adequate to control individual actuators.
Patterns in Nature Forming Patterns in Minds: An Evaluation of an Introductory Physics Unit
NASA Astrophysics Data System (ADS)
Sheaffer, Christopher Ryan
Educators are increasingly focused on the process over the content. In science especially, teachers want students to understand the nature of science and investigation. The emergence of scientific inquiry and engineering design teaching methods has led to the development of new teaching and evaluation methods that concentrate on steps in a process rather than facts in a topic. Research supports the notion that an explicit focus on the scientific process can lead to student science knowledge gains. In response to new research and standards, many teachers have developed teaching methods that seem to work well in their classrooms, but lack the time and resources to test them in other classroom environments. A high school Physics teacher (Bradford Hill) has developed a unit called Patterns in Nature (PIN) with objectives relating mathematical modeling to the scientific process. Designed for use in his large public school classroom, the unit was taken and used in a charter school with small classes. This study looks specifically at whether or not the PIN unit effectively teaches students how to graph the data they gather and fit an appropriate mathematical pattern, using that model to predict future measurements. Additionally, the study looks at the students' knowledge and views about the nature of science and the process of scientific investigation as they are affected by the PIN unit. Findings show that students are able to identify and apply patterns to data, but have difficulties explaining the meaning of the math. Students show increases in their knowledge of the process of science, and the majority develop positive views about science in general. A major goal of this study is to place this unit in the cyclical process of Design-Based Research and allow for Patterns in Nature's continuous improvement, development and evaluation. Design-Based Research (DBR) is an approach that can be applied to the implementation and evaluation of classroom materials. This method incorporates the complexities of different contexts and changing treatments into the research methods and analysis. From the use of DBR teachers can understand more about how the designed materials affect the students. Others may be able to use the development and analysis of the PIN study as a guide to look at similar aspects of science units developed elsewhere.
Computer-aided boundary delineation of agricultural lands
NASA Technical Reports Server (NTRS)
Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt
1989-01-01
The National Agricultural Statistics Service of the United States Department of Agriculture (USDA) presently uses labor-intensive aerial photographic interpretation techniques to divide large geographical areas into manageable-sized units for estimating domestic crop and livestock production. Prototype software, the computer-aided stratification (CAS) system, was developed to automate the procedure, and currently runs on a Sun-based image processing system. With a background display of LANDSAT Thematic Mapper and United States Geological Survey Digital Line Graph data, the operator uses a cursor to delineate agricultural areas, called sampling units, which are assigned to strata of land-use and land-cover types. The resultant stratified sampling units are used as input into subsequent USDA sampling procedures. As a test, three counties in Missouri were chosen for application of the CAS procedures. Subsequent analysis indicates that CAS was five times faster in creating sampling units than the manual techniques were.
Ferkany, John W; Williams, Michael
2008-09-01
Translational biomedical research is often directed to the introduction of a new drug or biologic intended to treat unmet medical need in humans. This unit describes the timing and content of the investigational new drug (IND) application, the primary document required by the U.S. FDA for the initiation of clinical trials in humans with any new chemical entity (NCE) or biologic. The IND application contains all the information necessary for the FDA to make an assessment of the risks and benefits of the proposed clinical trials for the NCE/biologic, containing a detailed but succinct description of the biology, safety, toxicology, chemistry and manufacturing process, and the proposed clinical plan. This unit is geared for those with little or no experience with the IND process and is intended as a global introduction to this, the initial stage of the drug development process for drugs used in humans.
Real-time two-dimensional temperature imaging using ultrasound.
Liu, Dalong; Ebbini, Emad S
2009-01-01
We present a system for real-time 2D imaging of temperature change in tissue media using pulse-echo ultrasound. The frontend of the system is a SonixRP ultrasound scanner with a research interface giving us the capability of controlling the beam sequence and accessing radio frequency (RF) data in real time. The beamformed RF data is streamed to the backend of the system, where it is processed using a two-dimensional temperature estimation algorithm running on the graphics processing unit (GPU). The estimated temperature is displayed in real time, providing feedback that can be used for real-time control of the heating source. We have verified our system with an elastography tissue-mimicking phantom and in vitro porcine heart tissue; excellent repeatability and sensitivity were demonstrated.
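The abstract does not spell out the estimator; a standard pulse-echo approach tracks local echo time-shifts between successive RF frames and maps their axial gradient to temperature change. The C++ sketch below illustrates that idea on synthetic A-lines; the correlation search range and the temperature scale factor are assumed, and a GPU version would process one window per thread.

#include <cmath>
#include <iostream>
#include <vector>

// Find the lag (in samples) that maximizes the cross-correlation between
// a reference window and the same window in a later, heated frame.
int lagOfPeakCorrelation(const std::vector<double>& a, const std::vector<double>& b,
                         int maxLag) {
    double best = -1e300;
    int bestLag = 0;
    for (int lag = -maxLag; lag <= maxLag; ++lag) {
        double s = 0.0;
        for (int i = 0; i < static_cast<int>(a.size()); ++i) {
            const int j = i + lag;
            if (j >= 0 && j < static_cast<int>(b.size())) s += a[i] * b[j];
        }
        if (s > best) { best = s; bestLag = lag; }
    }
    return bestLag;
}

int main() {
    // Synthetic frames: frame2 is frame1 delayed by 2 samples (apparent expansion).
    std::vector<double> f1(256), f2(256, 0.0);
    for (int i = 0; i < 256; ++i) f1[i] = std::sin(0.3 * i) * std::exp(-0.01 * i);
    for (int i = 2; i < 256; ++i) f2[i] = f1[i - 2];
    const double kDegPerSample = 0.5; // assumed tissue-dependent calibration
    const int shift = lagOfPeakCorrelation(f1, f2, 8);
    std::cout << "echo shift " << shift << " samples, dT ~ "
              << kDegPerSample * shift << " C (toy scale)\n";
    return 0;
}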
Cher, Chen-Yong; Coteus, Paul W; Gara, Alan; Kursun, Eren; Paulsen, David P; Schuelke, Brian A; Sheets, II, John E; Tian, Shurong
2013-10-01
A processor-implemented method for determining aging of a processing unit in a processor, the method comprising: calculating an effective aging profile for the processing unit, wherein the effective aging profile quantifies the effects of aging on the processing unit; combining the effective aging profile with process variation data, actual workload data, and operating conditions data for the processing unit; and determining aging through an aging sensor of the processing unit using the effective aging profile, the process variation data, the actual workload data, architectural characteristics and redundancy data, and the operating conditions data for the processing unit.
He, Feng Jie; Liu, Hui Long; Chen, Long Cong; Xiong, Xing Liang
2018-03-01
Liquid crystal (LC)-based sensors have the advantageous properties of being fast, sensitive, and label-free, with results that can be read directly with the naked eye. However, the inherent disadvantages of LC sensors, such as relying heavily on polarizing microscopes and the difficulty of quantification, have limited the possibility of field applications. Herein, we have addressed these issues by constructing a portable polarized detection system with constant temperature control. This system is mainly composed of four parts: the LC cell, the optics unit, the automatic temperature control unit, and the image processing unit. The LC cell was based on the ordering transitions of LCs in the presence of analytes. The optics unit, based on the imaging principle of LCs, was designed to substitute for the polarizing microscope for real-time observation. The image processing unit is expected to quantify the concentration of analytes. The results have shown that the presented system can detect dimethyl methyl phosphonate (a simulant for organophosphorus nerve gas) within 25 s, and the limit of detection is about 10 ppb. In all, our portable system has potential in field applications.
[Reorganization of the interdisciplinary emergency unit at the university clinic of Göttingen].
Blaschke, Sabine; Müller, Gerhard A; Bergmann, Günther
2008-04-01
The interdisciplinary emergency unit of the university clinic of Göttingen was successfully reorganized during the past two years. All emergencies except traumatologic, gynecologic, and pediatric emergencies are treated within this functional unit, which is guided by the center of internal medicine. It is organized in a three-shift operation over a period of 24 hours. Due to close interdisciplinary collaboration between different departments, patients receive optimal diagnostic and therapeutic treatment within a short period of time. To improve processes within the emergency department, a series of measures was taken, including the establishment of an intermediate care unit for unstable patients, the setting up of special diagnostic and therapeutic units for the acute coronary syndrome as well as stroke, the implementation of standardized clinical pathways, the establishment of an electronic data processing network in close communication with all diagnostic entities, the introduction of a quality assurance system, and the reduction of medical costs. The reorganization measures led to a substantial optimization and acceleration of emergency proceedings and thus provide optimal patient care around the clock. In addition, medical costs could clearly be reduced at the interface between preclinical and clinical emergency medicine.
Cortical Specializations Underlying Fast Computations
Volgushev, Maxim
2016-01-01
The time course of behaviorally relevant environmental events sets temporal constraints on neuronal processing. How does the mammalian brain make use of the increasingly complex networks of the neocortex, while making decisions and executing behavioral reactions within a reasonable time? The key parameter determining the speed of computations in neuronal networks is the time interval that neuronal ensembles need to process changes at their input and communicate results of this processing to downstream neurons. Theoretical analysis identified basic requirements for fast processing: use of neuronal populations for encoding, background activity, and fast onset dynamics of action potentials in neurons. Experimental evidence shows that populations of neocortical neurons fulfil these requirements. Indeed, they can change firing rate in response to input perturbations very quickly, within 1 to 3 ms, and encode high-frequency components of the input by phase-locking their spiking to frequencies up to 300 to 1000 Hz. This implies that the time unit of computations by cortical ensembles is only a few milliseconds, 1 to 3 ms, which is considerably faster than the membrane time constant of individual neurons. The ability of cortical neuronal ensembles to communicate on a millisecond time scale allows for complex, multiple-step processing and precise coordination of neuronal activity in parallel processing streams, while keeping the speed of behavioral reactions within environmentally set temporal constraints. PMID:25689988
NASA Technical Reports Server (NTRS)
Senske, D. A.
2008-01-01
To understand the spatial and temporal relations between tectonic and volcanic processes on Venus, the Juno Chasma region is mapped. Geologic units are used to establish regional stratigraphic relations and the timing between rifting and volcanism.
Technology and the Online Catalog.
ERIC Educational Resources Information Center
Graham, Peter S.
1983-01-01
Discusses trends in computer technology and their use for library catalogs, noting the concept of bandwidth (which describes the quantity of information transmitted per unit of time); computer hardware differences (micros, minis, maxis); distributed processing systems and databases; optical disk storage; networks; transmission media; and terminals.…
Data processing for water monitoring system
NASA Technical Reports Server (NTRS)
Monford, L.; Linton, A. T.
1978-01-01
Water monitoring data acquisition system is structured about central computer that controls sampling and sensor operation, and analyzes and displays data in real time. Unit is essentially separated into two systems: computer system, and hard wire backup system which may function separately or with computer.
NASA Technical Reports Server (NTRS)
Bogomolov, E. A.; Yevstafev, Y. Y.; Karakadko, V. K.; Lubyanaya, N. D.; Romanov, V. A.; Totubalina, M. G.; Yamshchikov, M. A.
1975-01-01
A system for the recording and processing of telescope data is considered for measurements of EW asymmetry. The information is recorded by 45 channels on a continuously moving 35-mm film. The dead time of the recorder is about 0.1 sec. A sorting electronic circuit is used to reduce the errors when the statistical time distribution of the pulses is recorded. The recorded information is read out by means of photoresistors. The phototransmitter signals are fed either to the mechanical recorder unit for preliminary processing, or to a logical circuit which controls the operation of the punching device. The punched tape is processed by an electronic computer.
Use telecommunications for real-time process control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zilberman, I.; Bigman, J.; Sela, I.
1996-05-01
Process operators desire real-time, accurate information to monitor and control product streams and to optimize unit operations. The challenge is how to cost-effectively install sophisticated analytical equipment in harsh environments such as process areas and maintain system reliability. Incorporating telecommunications technology with near infrared (NIR) spectroscopy may be the bridge to help operations achieve their online control goals. Coupling communications fiber optics with NIR analyzers enables the probe and sampling system to remain in the field and crucial analytical equipment to be remotely located in a general purpose area without specialized protection provisions. The case histories show how two refineries used NIR spectroscopy online to track octane levels for reformate streams.
Badal, Andreu; Badano, Aldo
2009-11-01
It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA™ programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up factor was obtained using a GPU compared to a single-core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
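As a structural illustration only, with none of PENELOPE's physics, this C++ toy shows the independent-history photon loop that makes Monte Carlo transport map naturally onto one GPU thread per photon; the attenuation coefficient, absorption fraction, and 1D slab geometry are invented for the example.

#include <cmath>
#include <iostream>
#include <random>

// Each photon history is independent: sample a free path, then absorb,
// scatter, or escape. Independence across histories is what a GPU exploits.
int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    const double mu = 0.2;          // total attenuation per cm (assumed)
    const double absorbFrac = 0.3;  // fraction of interactions that absorb (assumed)
    const double slab = 10.0;       // slab thickness in cm (assumed)
    const int histories = 100000;
    int transmitted = 0;
    for (int h = 0; h < histories; ++h) {
        double x = 0.0, dir = 1.0;
        while (true) {
            x += dir * (-std::log(u(rng)) / mu);       // sampled free path
            if (x > slab) { ++transmitted; break; }    // escaped forward
            if (x < 0.0 || u(rng) < absorbFrac) break; // escaped backward or absorbed
            dir = (u(rng) < 0.5) ? 1.0 : -1.0;         // isotropic "scatter" in 1D
        }
    }
    std::cout << "transmission = " << static_cast<double>(transmitted) / histories << "\n";
    return 0;
}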
Graphics processing units in bioinformatics, computational biology and systems biology.
Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela
2017-09-01
Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining increasing attention from the scientific community, as they can considerably reduce the running time required by standard CPU-based software, and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and the drawbacks in the use of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools. © The Author 2016. Published by Oxford University Press.
Hoffmann, Loren C.; Cicchese, Joseph J.; Berry, Stephen D.
2015-01-01
Neurobiological oscillations are regarded as essential to normal information processing, including coordination and timing of cells and assemblies within structures as well as in long feedback loops of distributed neural systems. The hippocampal theta rhythm is a 3–12 Hz oscillatory potential observed during cognitive processes ranging from spatial navigation to associative learning. The lower range, 3–7 Hz, can occur during immobility and depends upon the integrity of cholinergic forebrain systems. Several studies have shown that the amount of pre-training theta in the rabbit strongly predicts the acquisition rate of classical eyeblink conditioning and that impairment of this system substantially slows the rate of learning. Our lab has used a brain-computer interface (BCI) that delivers eyeblink conditioning trials contingent upon the explicit presence or absence of hippocampal theta. A behavioral benefit of theta-contingent training has been demonstrated in both delay and trace forms of the paradigm with a two- to four-fold increase in learning speed. This behavioral effect is accompanied by enhanced amplitude and synchrony of hippocampal local field potentials (LFPs), multi-unit excitation, and single-unit response patterns that depend on theta state. Additionally, training in the presence of hippocampal theta has led to increases in the salience of tone-induced unit firing patterns in the medial prefrontal cortex, followed by persistent multi-unit activity during the trace interval. In cerebellum, rhythmicity and precise synchrony of stimulus time-locked LFPs with those of hippocampus occur preferentially under the theta condition. Here we review these findings, integrate them into current models of hippocampal-dependent learning and suggest how improvement in our understanding of neurobiological oscillations is critical for theories of medial temporal lobe processes underlying intact and pathological learning. PMID:25918501
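A minimal sketch of what a theta-contingent trigger can look like in code: estimate 3-7 Hz band power in a short LFP window and deliver a conditioning trial when the theta fraction of broadband power crosses a threshold. The window length, band edges, and threshold below are illustrative assumptions, not the published BCI's parameters.

#include <cmath>
#include <iostream>
#include <vector>

// Crude band power via single-bin DFT projections swept across the band.
double bandPower(const std::vector<double>& x, double fs, double f0, double f1) {
    const double PI = 3.14159265358979;
    double p = 0.0;
    for (double f = f0; f <= f1; f += 0.5) {
        double re = 0.0, im = 0.0;
        for (size_t n = 0; n < x.size(); ++n) {
            re += x[n] * std::cos(2.0 * PI * f * n / fs);
            im -= x[n] * std::sin(2.0 * PI * f * n / fs);
        }
        p += re * re + im * im;
    }
    return p;
}

int main() {
    const double fs = 250.0;      // sampling rate, Hz (assumed)
    std::vector<double> lfp(500); // 2 s window of LFP (here: pure 5 Hz theta)
    for (size_t n = 0; n < lfp.size(); ++n)
        lfp[n] = std::sin(2.0 * 3.14159265358979 * 5.0 * n / fs);
    const double theta = bandPower(lfp, fs, 3.0, 7.0);
    const double broad = bandPower(lfp, fs, 1.0, 30.0);
    const bool trigger = theta / broad > 0.5; // assumed criterion
    std::cout << (trigger ? "deliver conditioning trial\n" : "wait\n");
    return 0;
}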
Tracing the decision-making process of physicians with a Decision Process Matrix.
Hausmann, Daniel; Zulian, Cristina; Battegay, Edouard; Zimmerli, Lukas
2016-10-18
Decision-making processes in a medical setting are complex, dynamic, and under time pressure, often with serious consequences for a patient's condition. The principal aim of the present study was to trace and map the individual diagnostic process of real medical cases using a Decision Process Matrix (DPM). The naturalistic decision-making process of 11 residents was recorded for a total of 55 medical cases in an emergency department, and a DPM was drawn up according to a semi-structured technique following four steps: 1) observing and recording relevant information throughout the entire diagnostic process, 2) assessing options in terms of suspected diagnoses, 3) drawing up an initial version of the DPM, and 4) verifying the DPM, while adding the confidence ratings. The DPM comprised an average of 3.2 suspected diagnoses and 7.9 information units (cues). The following three-phase pattern could be observed: option generation, option verification, and final diagnosis determination. Residents strove for the highest possible level of confidence before making the final diagnoses (in two-thirds of the medical cases with a rating of practically certain) or excluding suspected diagnoses (with practically impossible in half of the cases). The following challenges have to be addressed in the future: real-time capturing of emerging suspected diagnoses in the memory of the physician, definition of meaningful information units, and a more contemporary measurement of confidence. DPM is a useful tool for tracing real and individual diagnostic processes. The methodological approach with DPM allows further investigations into the underlying cognitive diagnostic processes on a theoretical level and improvement of individual clinical reasoning skills in practice.
Device and method to enhance availability of cluster-based processing systems
NASA Technical Reports Server (NTRS)
Lupia, David J. (Inventor); Ramos, Jeremy (Inventor); Samson, Jr., John R. (Inventor)
2010-01-01
An electronic computing device including at least one processing unit that issues a specific fault signal upon experiencing an associated fault, a control unit that generates a specific recovery signal upon receiving the fault signal from the at least one processing unit, and at least one input memory unit. The recovery signal initiates specific recovery processes in the at least one processing unit. During the recovery period, the input memory unit buffers the data signals destined for the processing unit that experienced the fault.
2015-09-15
of manpower to the focal company's development and product commercialization process, availability of products in times of shortage, and/or ... their employees to help improvement teams, by notifying them about any shortage or problems in the process, rather than blaming employees for ... cargo services are used in air-shipping those parts to the United States. Then, parts are consolidated by the FF in the Wilmington warehouse, and
[Redesigning the hospital discharge process].
Martínez-Ramos, M; Flores-Pardo, E; Uris-Sellés, J
2016-01-01
The aim of this article is to show that the redesign and planning of the hospital discharge process advances the departure time of the patient from the hospital environment. Quasi-experimental study conducted from January 2011 to April 2013 in a local hospital. The cases analysed were from medical and surgical nursing units. The process was redesigned to coordinate all the professionals involved. The hospital discharge process improvement was carried out by forming a working group, analysing retrospective data, identifying areas for improvement, and redesigning the process. The dependent variable was the time of the patient's administrative discharge. The sample was classified as pre-intervention, inter-intervention, and post-intervention, depending on the time point of the study. The final sample included 14,788 patients after applying the inclusion and exclusion criteria. The mean discharge release time decreased significantly by 50 min between the pre-intervention and post-intervention periods. The release time in patients with planned discharge was one hour and 25 min less than in patients with unplanned discharge. Process redesign is a useful strategy to improve the hospital discharge process. Besides planning the discharge, it is shown that the patient leaving the hospital before 12 midday is a key factor. Copyright © 2015 SECA. Published by Elsevier Espana. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baucom, R.M.; Marchello, J.M.
Thermoplastic prepregs of LARC-TPI have been produced in a fluidized bed unit on spread continuous fiber tows. The powders are melted on the fibers by radiant heating to adhere the polymer to the fiber. This process produces tow prepreg uniformly without imposing severe stress on the fibers or requiring long high temperature residence times for the polymer. Unit design theory and operating correlations have been developed to provide the basis for scale up to commercial operation. Special features of the operation are the pneumatic tow spreader, fluidized bed and resin feed systems.
2016-11-17
A test unit, or prototype, of NASA's Advanced Plant Habitat (APH) was delivered to the Space Station Processing Facility at the agency's Kennedy Space Center in Florida. The APH is the largest plant chamber built for the agency. The unit is being prepared for engineering development tests to see how the science will integrate with the various systems of the plant habitat. It will have 180 sensors and four times the light output of Veggie. The APH will be delivered to the International Space Station in March 2017.
Gregg, H.R.; Meltzer, M.P.
1996-05-28
The portable Contamination Analysis Unit (CAU) measures trace quantities of surface contamination in real time. The detector head of the portable contamination analysis unit has an opening with an O-ring seal, one or more vacuum valves and a small mass spectrometer. With the valve closed, the mass spectrometer is evacuated with one or more pumps. The O-ring seal is placed against a surface to be tested and the vacuum valve is opened. Data is collected from the mass spectrometer and a portable computer provides contamination analysis. The CAU can be used to decontaminate and decommission hazardous and radioactive surfaces by measuring residual hazardous surface contamination, such as tritium and trace organics. It provides surface contamination data for research and development applications as well as real-time process control feedback for industrial cleaning operations and can be used to determine the readiness of a surface to accept bonding or coatings. 1 fig.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Gorman, T.; Gibson, K. J.; Snape, J. A.
2012-10-15
A real-time system has been developed to trigger both the MAST Thomson scattering (TS) system and the plasma control system on the phase and amplitude of neoclassical tearing modes (NTMs), extending the capabilities of the original system. This triggering system determines the phase and amplitude of a given NTM using magnetic coils at different toroidal locations. Real-time processing of the raw magnetic data occurs on a low-cost field programmable gate array (FPGA) based unit, which permits triggering of the TS lasers on specific amplitudes and phases of NTM evolution. The MAST plasma control system can receive a separate trigger from the FPGA unit that initiates a vertical shift of the MAST magnetic axis. Such shifts have fully removed m/n = 2/1 NTM instabilities in a number of MAST discharges.
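For illustration, the phase and amplitude of a single toroidal mode can be recovered from coil signals at known toroidal angles by projecting onto cos(n·phi) and sin(n·phi); the C++ sketch below does this for a synthetic n = 1 mode. The coil placement and mode number are assumptions, not MAST's actual configuration.

#include <cmath>
#include <iostream>
#include <vector>

// Fit s_k = A cos(n*phi_k) + B sin(n*phi_k) by direct projection over
// equally spaced coils; amplitude = hypot(A, B), phase = atan2(B, A).
int main() {
    const double PI = 3.14159265358979;
    const int n = 1; // toroidal mode number (assumed)
    const std::vector<double> phi = {0.0, PI / 2, PI, 3 * PI / 2}; // coil angles
    std::vector<double> s(phi.size());
    for (size_t k = 0; k < phi.size(); ++k)      // synthetic mode: amplitude 2,
        s[k] = 2.0 * std::cos(n * phi[k] - 0.7); // phase 0.7 rad
    double A = 0.0, B = 0.0;
    for (size_t k = 0; k < phi.size(); ++k) {
        A += s[k] * std::cos(n * phi[k]);
        B += s[k] * std::sin(n * phi[k]);
    }
    A *= 2.0 / phi.size();
    B *= 2.0 / phi.size();
    std::cout << "amplitude " << std::hypot(A, B)
              << ", phase " << std::atan2(B, A) << " rad\n";
    return 0;
}

On an FPGA the same projections reduce to a handful of multiply-accumulate operations per sample, which is what makes triggering at this rate feasible.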
So, Noel F; Rubin, Devon I; Jones, Lyell K; Litchy, William J; Sorenson, Eric J
2013-12-01
Repetitive discharges may be recorded during nerve conduction studies (NCS) or during needle electromyography in a muscle at rest. Repetitive discharges that occur during voluntary activation and are time-locked to voluntary motor unit potentials (MUPs) have not been described. A retrospective review was performed of motor unit potential-induced repetitive discharges (MIRDs) identified in the EMG laboratory. The characteristics of each MIRD, patient demographics, other EMG findings in the same muscle, and the electrophysiological diagnosis were analyzed. MIRDs were observed in 15 patients. The morphology, number of spikes, and duration of MIRDs varied. The discharges fired at rates of 50-200 Hz. All but 2 patients had EMG findings of a chronic neurogenic disorder. MIRDs are rare iterative discharges time-locked to a voluntary MUP. The pathophysiology of MIRDs is unclear, but their presence may indicate a chronic neurogenic process. Copyright © 2013 Wiley Periodicals, Inc.
Test results of the STI GPS time transfer receiver
NASA Technical Reports Server (NTRS)
Hall, D. L.; Handlan, J.; Wheeler, P.
1983-01-01
Global time transfer, or synchronization, between a user clock and USNO UTC time can be performed using the Global Positioning System (GPS) and commercially available time transfer receivers. This paper presents the test results of time transfer using the GPS system and a Stanford Telecommunications, Inc. (STI) Time Transfer System (TTS) Model 502. Tests at the GPS Master Control Site (MCS) in Vandenberg, California, and at the United States Naval Observatory (USNO) in Washington, D.C. are described. An overview of GPS and the STI TTS 502 is presented. A discussion of the time transfer process and test concepts is included.
ERIC Educational Resources Information Center
Aber, J. Lawrence; Morris, Pamela; Wolf, Sharon; Berg, Juliette
2016-01-01
This article examines the impacts of Opportunity New York City-Family Rewards, the first holistic conditional cash transfer (CCT) program evaluated in the United States, on parental financial investments in children, and high school students' academic time use, motivations and self-beliefs, and achievement outcomes. Family Rewards, launched by the…
A proactive transfer policy for critical patient flow management.
González, Jaime; Ferrer, Juan-Carlos; Cataldo, Alejandro; Rojas, Luis
2018-02-17
Hospital emergency departments are often overcrowded, resulting in long wait times and a public perception of poor attention. Delays in transferring patients needing further treatment increases emergency department congestion, has negative impacts on their health and may increase their mortality rates. A model built around a Markov decision process is proposed to improve the efficiency of patient flows between the emergency department and other hospital units. With each day divided into time periods, the formulation estimates bed demand for the next period as the basis for determining a proactive rather than reactive transfer decision policy. Due to the high dimensionality of the optimization problem involved, an approximate dynamic programming approach is used to derive an approximation of the optimal decision policy, which indicates that a certain number of beds should be kept free in the different units as a function of the next period demand estimate. Testing the model on two instances of different sizes demonstrates that the optimal number of patient transfers between units changes when the emergency patient arrival rate for transfer to other units changes at a single unit, but remains stable if the change is proportionally the same for all units. In a simulation using real data for a hospital in Chile, significant improvements are achieved by the model in key emergency department performance indicators such as patient wait times (reduction higher than 50%), patient capacity (21% increase) and queue abandonment (from 7% down to less than 1%).
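A toy C++ rendering of the kind of protection-level rule such a model can yield: each inpatient unit keeps a forecast-dependent number of beds free, and the number of emergency department transfers allowed now is capped by the remainder. Capacities, occupancies, and forecasts are invented; the paper's actual policy comes from approximate dynamic programming, not this heuristic.

#include <algorithm>
#include <iostream>
#include <vector>

// Proactive cap on transfers: free beds minus beds protected for the
// next period's forecast demand.
int transfersAllowed(int capacity, int occupied, int forecastArrivals) {
    const int protectedBeds = std::min(capacity, forecastArrivals);
    return std::max(0, capacity - occupied - protectedBeds);
}

int main() {
    struct Unit { const char* name; int cap, occ, forecast; };
    const std::vector<Unit> units = {{"medicine", 30, 26, 3}, {"surgery", 20, 12, 5}};
    for (const auto& u : units)
        std::cout << u.name << ": transfer up to "
                  << transfersAllowed(u.cap, u.occ, u.forecast)
                  << " ED patients this period\n";
    return 0;
}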
Al-Krenawi, Alean; Graham, John R; Dean, Yasmin Z; Eltaiba, Nada
2004-06-01
Help-seeking processes provide critical links between the onset of mental health problems and the provision of professional care, but little is known about these processes in the Arab world, and still less in transnational, comparative terms. This is the first study to compare attitudes towards seeking mental health treatment among Muslim Arab female students in Jordan, the United Arab Emirates (UAE) and Israel. A convenience sample of 262 female Muslim-Arab undergraduate university students from Jordan, the UAE and Arab students in Israel completed a modified Orientation for Seeking Professional Help (OSPH) Questionnaire. The data revealed that nationality was not statistically significant as a variable in a positive attitude towards seeking professional help; year of study, marital status and age were found to be significant predictors of a positive attitude towards seeking help. High proportions of respondents among the nationalities referred to God through prayer during times of psychological distress. The discussion considers implications for professional service delivery and programme development. Future research could extrapolate findings to other Arab countries and to Arab peoples living in the non-Arab world.
A wearable biofeedback control system based body area network for freestyle swimming.
Rui Li; Zibo Cai; WeeSit Lee; Lai, Daniel T H
2016-08-01
Wearable posture measurement units are capable of enabling real-time performance evaluation and providing feedback to end users. This paper presents a wearable feedback prototype designed for freestyle swimming with a focus on trunk rotation measurement. The system consists of a nine-degree-of-freedom inertial sensor, which is built into a central data collection and processing unit, and two vibration motors for delivering real-time feedback. These devices form a fundamental body area network (BAN). In the experimental setup, four recreational swimmers were asked to do two sets of 4 × 25 m freestyle swimming, without and with feedback provided, respectively. Results showed that the real-time biofeedback mechanism improved the swimmers' kinematic performance, with an average 4.5% reduction in session time. Swimmers can gradually adapt to feedback signals, and the biofeedback control system can be employed in swimmers' daily training for fitness maintenance.
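A compact sketch of the feedback loop described above: fuse gyroscope and accelerometer samples into a trunk-roll estimate with a complementary filter, and pulse a vibration motor when the roll leaves a target band. The filter gain, band limits, sample values, and the motor call are all illustrative placeholders rather than the prototype's firmware.

#include <iostream>

// One IMU sample reduced to the two quantities a roll filter needs.
struct ImuSample { double gyroRollRateDegPerSec; double accRollDeg; };

void pulseMotor(const char* side) { std::cout << "vibrate " << side << "\n"; } // stub

int main() {
    const double dt = 0.02;      // 50 Hz sampling (assumed)
    const double alpha = 0.98;   // complementary filter gain (assumed)
    const double bandDeg = 45.0; // target roll band (assumed)
    double roll = 0.0;
    const ImuSample samples[] = {{120, 40}, {130, 55}, {10, 58}, {-200, 30}};
    for (const ImuSample& s : samples) {
        // Integrate the gyro, correct drift with the accelerometer estimate.
        roll = alpha * (roll + s.gyroRollRateDegPerSec * dt) + (1.0 - alpha) * s.accRollDeg;
        if (roll > bandDeg) pulseMotor("left");        // over-rotation cue
        else if (roll < -bandDeg) pulseMotor("right");
    }
    std::cout << "final roll estimate: " << roll << " deg\n";
    return 0;
}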
Tokuda, Junichi; Plishker, William; Torabi, Meysam; Olubiyi, Olutayo I; Zaki, George; Tatli, Servet; Silverman, Stuart G; Shekher, Raj; Hata, Nobuhiko
2015-06-01
Accuracy and speed are essential for the intraprocedural nonrigid magnetic resonance (MR) to computed tomography (CT) image registration in the assessment of tumor margins during CT-guided liver tumor ablations. Although both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique based on volume subdivision with hardware acceleration using a graphics processing unit (GPU). We compared the registration accuracy and processing time of the GPU-accelerated volume subdivision-based nonrigid registration technique with those of the conventional nonrigid B-spline registration technique. Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of the ROI was used only for the B-spline technique. Registration accuracies (Dice similarity coefficient [DSC] and 95% Hausdorff distance [HD]) and total processing time including contouring of ROIs and computation were compared using a paired Student t test. Accuracies of the GPU-accelerated registrations and B-spline registrations, respectively, were 88.3 ± 3.7% versus 89.3 ± 4.9% (P = .41) for DSC and 13.1 ± 5.2 versus 11.4 ± 6.3 mm (P = .15) for HD. Total processing time of the GPU-accelerated registration and B-spline registration techniques was 88 ± 14 versus 557 ± 116 seconds (P < .000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (P = .71). The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time. The GPU-accelerated volume subdivision technique may enable the implementation of nonrigid registration into routine clinical practice. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Lazcano, R.; Madroñal, D.; Fabelo, H.; Ortega, S.; Salvador, R.; Callicó, G. M.; Juárez, E.; Sanz, C.
2017-10-01
Hyperspectral Imaging (HI) assembles high resolution spectral information from hundreds of narrow bands across the electromagnetic spectrum, generating 3D data cubes in which each pixel gathers the spectral reflectance information of its spatial location. As a result, each image comprises a large volume of data, which makes its processing a challenge as performance requirements are continuously tightened. For instance, new HI applications demand real-time responses, so parallel processing becomes a necessity and the intrinsic parallelism of the algorithms must be exploited. In this paper, a spatial-spectral classification approach has been implemented using a dataflow language known as RVC-CAL. This language represents a system as a set of functional units, and its main advantage is that it simplifies the parallelization process by mapping the different blocks over different processing units. The spatial-spectral classification approach refines the classification results previously obtained by a pixel-wise classifier, using a K-Nearest Neighbors (KNN) filtering process in which both the pixel spectral values and the spatial coordinates are considered. To do so, KNN needs two inputs: a one-band representation of the hyperspectral image and the classification results provided by the pixel-wise classifier. Thus, the spatial-spectral classification algorithm is divided into three stages: a Principal Component Analysis (PCA) algorithm for computing the one-band representation of the image, a Support Vector Machine (SVM) classifier, and the KNN-based filtering algorithm. The parallelization of these algorithms shows promising results in terms of computational time, as mapping them over different cores yields a speedup of 2.69x when using 3 cores. Consequently, experimental results demonstrate that real-time processing of hyperspectral images is achievable.
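A compact sketch of the three-stage pipeline named in the abstract (PCA one-band representation, pixel-wise SVM, KNN spatial-spectral filtering), written in plain NumPy/scikit-learn rather than RVC-CAL; the cube size, stand-in labels and the spectral-versus-spatial weighting are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

H, W, B = 32, 32, 64                               # toy hyperspectral cube
cube = np.random.rand(H, W, B)
labels = np.random.randint(0, 4, size=H * W)       # stand-in training labels

X = cube.reshape(-1, B)
one_band = PCA(n_components=1).fit_transform(X)    # stage 1: one-band image
svm_map = SVC().fit(X, labels).predict(X)          # stage 2: pixel-wise SVM

# Stage 3: KNN filtering over (row, col, weighted spectral value) features;
# each pixel takes the majority label among its K nearest neighbours.
rr, cc = np.mgrid[0:H, 0:W]
feats = np.column_stack([rr.ravel(), cc.ravel(), 10.0 * one_band.ravel()])
_, idx = NearestNeighbors(n_neighbors=9).fit(feats).kneighbors(feats)
votes = svm_map[idx]
filtered = np.array([np.bincount(v).argmax() for v in votes]).reshape(H, W)
print(filtered.shape)   # (32, 32) refined classification map
```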
Capturing ultrafast photoinduced local structural distortions of BiFeO3
Wen, Haidan; Sassi, Michel; Luo, Zhenlin; Adamo, Carolina; Schlom, Darrell G.; Rosso, Kevin M.; Zhang, Xiaoyi
2015-10-14
The interaction of light with materials is an intensively studied research forefront, in which the coupling of radiation energy to selective degrees of freedom offers contact-free tuning of functionalities on ultrafast time scales. Capturing the fundamental processes and understanding the mechanism of photoinduced structural rearrangement are essential to applications such as photo-active actuators and efficient photovoltaic devices. Using ultrafast x-ray absorption spectroscopy aided by density functional theory calculations, we reveal the local structural arrangement around the transition metal atom in a unit cell of the photoferroelectric archetype BiFeO3 film. The out-of-plane elongation of the unit cell is accompanied by the in-plane shrinkage with minimal change of interaxial lattice angles upon photoexcitation. This anisotropic elastic deformation of the unit cell is driven by a localized electric field as a result of photoinduced charge separation, in contrast to a global lattice constant increase and lattice angle variations as a result of heating. The finding of a photoinduced elastic unit cell deformation elucidates a microscopic picture of photocarrier-mediated non-equilibrium processes in polar materials.
Maritime Domain Awareness: C4I for the 1000 Ship Navy
2009-12-04
[List-of-figures residue; recoverable fragments name C4I functions such as: take unit action, provide unit sensed contacts, coordinate unit operations, process unit information, release image, and release contact report (Figure 33).]
Morrison, Cecily; Jones, Matthew; Jones, Rachel; Vuylsteke, Alain
2013-04-10
Current policies encourage healthcare institutions to acquire clinical information systems (CIS) so that captured data can be used for secondary purposes, including clinical process improvement. Such policies do not account for the extra work required to repurpose data for uses other than direct clinical care, making their implementation problematic. This paper aims to analyze the strategies employed by clinical units to use data effectively for both direct clinical care and clinical process improvement. Ethnographic methods were employed. A total of 54 contextual interviews with health professionals spanning various disciplines and 18 hours of observation were carried out in 5 intensive care units in England using an advanced CIS. Case studies of how the extra work was achieved in each unit were derived from the data and then compared. We found that extra work is required to repurpose CIS data for clinical process improvement. Health professionals must enter data not required for clinical care and manipulation of this data into a machine-readable form is often necessary. Ambiguity over who should be responsible for this extra work hindered CIS data usage for clinical process improvement. We describe 11 strategies employed by units to accommodate this extra work, distributing it across roles. Seven of these motivated data entry by health professionals and four addressed the machine readability of data. Many of the strategies relied heavily on the skill and leadership of local clinical customizers. To realize the expected clinical process improvements by the use of CIS data, clinical leaders and policy makers need to recognize and support the redistribution of the extra work that is involved in data repurposing. Adequate time, funding, and appropriate motivation are needed to enable units to acquire and deliver the necessary skills in CIS customization.
James A. Turner; Joseph Buongiorno; Shushuai Zhu; Frances Maplesden
2008-01-01
Secondary processed wood products - builder's carpentry and joinery, moldings and millwork, wooden furniture, and prefabricated buildings - have grown significantly in importance in the global trade of wood products. At the same time there has been increased use of non-tariff barriers to restrict their trade. These barriers could have an important impact on the...
A web-based tree crown condition training and evaluation tool for urban and community forestry
Matthew F. Winn; Neil A. Clark; Philip A. Araman; Sang-Mook Lee
2007-01-01
Training personnel for natural resource related field work can be a costly and time-consuming process. For that reason, web-based training is considered by many to be a more attractive alternative to on-site training. The U.S. Forest Service Southern Research Station unit, with Virginia Tech cooperators in Blacksburg, Va., is in the process of constructing a web site...
1979-05-01
...the five major unit operations for multi-base cannon propellant: nitrocellulose dehydration, premixing, mixing, extruding and cutting. ...during facility design, a general process description is presented as follows: Thermal Dehydration: Nitrocellulose (NC) slurry is fed to a continuous
Live imaging of developmental processes in a living meristem of Davidia involucrata (Nyssaceae)
Jerominek, Markus; Bull-Hereñu, Kester; Arndt, Melanie; Claßen-Bockhoff, Regine
2014-01-01
Morphogenesis in plants is usually reconstructed by scanning electron microscopy and histology of meristematic structures. These techniques are destructive and require many samples to obtain a consecutive series of states. Unfortunately, using this methodology the absolute timing of growth and complete relative initiation of organs remain obscure. To overcome this limitation, an in vivo observational method based on Epi-Illumination Light Microscopy (ELM) was developed and tested with a male inflorescence meristem (floral unit) of the handkerchief tree Davidia involucrata Baill. (Nyssaceae). We asked whether the most basal flowers of this floral unit arise in a basipetal sequence or, alternatively, are delayed in their development. The growing meristem was observed for 30 days, the longest live observation of a meristem achieved to date. The sequence of primordium initiation indicates a later initiation of the most basal flowers and not earlier or simultaneously as SEM images could suggest. D. involucrata exemplarily shows that live-ELM gives new insights into developmental processes of plants. In addition to morphogenetic questions such as the transition from vegetative to reproductive meristems or the absolute timing of ontogenetic processes, this method may also help to quantify cellular growth processes in the context of molecular physiology and developmental genetics studies.
The research of laser marking control technology
NASA Astrophysics Data System (ADS)
Zhang, Qiue; Zhang, Rong
2009-08-01
In laser marking, the usual control method is to insert a control card into the computer's motherboard; this does not support hot swapping and makes the card difficult to assemble and maintain. Moreover, each marking system requires a dedicated computer, and during marking that computer can do nothing except transmit the marking data, as other tasks would degrade marking precision. To address these problems of traditional control methods, the marking graphics are edited and digitally processed on the computer, while a high-speed digital signal processor (DSP) controls the whole marking process. The laser marking controller mainly comprises a DSP2812, digital memory, a DAC (digital-to-analog converter) unit circuit, a USB interface control circuit, a man-machine interface circuit, and other logic control circuitry. The marking information processed by the computer is downloaded to a USB flash drive; the DSP reads the information through the USB interface when needed, processes it, uses its internal timer to control the marking time sequence, and outputs the scanner control signals through the D/A parts. This technology enables offline marking, thereby reducing product cost and increasing production efficiency. The system performed well in actual unit marking, with a marking speed about 20 percent faster than that of a PCI control card. It has practical application value.
Results of the first detection units of KM3NeT
NASA Astrophysics Data System (ADS)
Biagi, Simone; KM3NeT Collaboration
2017-12-01
The KM3NeT collaboration is building a km3-scale neutrino telescope in the Mediterranean Sea. The current phase of construction comprises the deep-sea and onshore infrastructures at two installation sites and the installation of the first detection units of the "ARCA" (Astroparticle Research with Cosmics in the Abyss) and "ORCA" (Oscillation Research with Cosmics in the Abyss) detectors. At the KM3NeT-It site, 80 km offshore Capo Passero, Italy, the first 32 detection units for the ARCA detector are being installed, and at the KM3NeT-Fr site, 40 km offshore Toulon, France, 7 detection units for the ORCA detector will be deployed. The second phase of KM3NeT foresees the completion of ARCA for neutrino astronomy at energies above a TeV and of ORCA for neutrino mass hierarchy studies at energies in the GeV range. The basic element of the KM3NeT detector is the detection unit. In the ARCA geometry, the detection unit is a 700 m long vertical structure hosting 18 optical modules. Each optical module comprises 31 3-inch photomultiplier tubes, instruments to monitor environmental parameters, and the electronic boards for the digitisation of the PMT signals and the management of data acquisition. In their final configuration, both ARCA and ORCA will be composed of about 200 detection units. The first detection unit was installed at the KM3NeT-It site in December 2015 and has been active and taking data since its connection to the subsea network. The time of arrival and the duration of photon hits on each of the photomultipliers are measured with a time resolution of 1 ns and transferred onshore, where the measurements are processed, triggered and stored on disk. A time calibration procedure, based on data recorded with flashing LED beacons during dedicated periods, allows for time synchronisation of the signals from the optical modules at the nanosecond level. In May 2016, an additional detection unit was installed at the KM3NeT-It site. The first results with two active detection units are presented, and an update of the detector status and construction is given.
2005-04-01
process to promptly move supplies from the United States to a customer. GAO found that conflicting doctrinal responsibilities for distribution ... management, improperly packed shipments, insufficient transportation personnel and equipment, and inadequate information systems prevented the timely
37 CFR 1.138 - Express abandonment.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 37 Patents, Trademarks, and Copyrights 1 2014-07-01 2014-07-01 false Express abandonment. 1.138 Section 1.138 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES National Processing Provisions Time for Reply by...
37 CFR 1.138 - Express abandonment.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 37 Patents, Trademarks, and Copyrights 1 2013-07-01 2013-07-01 false Express abandonment. 1.138 Section 1.138 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES National Processing Provisions Time for Reply by...
37 CFR 1.138 - Express abandonment.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 37 Patents, Trademarks, and Copyrights 1 2011-07-01 2011-07-01 false Express abandonment. 1.138 Section 1.138 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES National Processing Provisions Time for Reply by...
37 CFR 1.138 - Express abandonment.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 37 Patents, Trademarks, and Copyrights 1 2012-07-01 2012-07-01 false Express abandonment. 1.138 Section 1.138 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES National Processing Provisions Time for Reply by...
Real-time track-less Cherenkov ring fitting trigger system based on Graphics Processing Units
NASA Astrophysics Data System (ADS)
Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cretaro, P.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Gianoli, A.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Piccini, M.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.
2017-12-01
The parallel computing power of commercial Graphics Processing Units (GPUs) is exploited to perform real-time ring fitting at the lowest trigger level using information coming from the Ring Imaging Cherenkov (RICH) detector of the NA62 experiment at CERN. To this purpose, direct GPU communication with a custom FPGA-based board has been used to reduce the data transmission latency. The GPU-based trigger system is currently integrated in the experimental setup of the RICH detector of the NA62 experiment, in order to reconstruct ring-shaped hit patterns. The ring-fitting algorithm running on GPU is fed with raw RICH data only, with no information coming from other detectors, and is able to provide more complex trigger primitives with respect to the simple photodetector hit multiplicity, resulting in a higher selection efficiency. The performance of the system for multi-ring Cherenkov online reconstruction obtained during the NA62 physics run is presented.
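For illustration, the kind of track-less ring fit that maps well onto GPUs because it reduces to one small linear solve per candidate: the Kasa algebraic least-squares circle fit. This is a generic sketch of the technique class, not the NA62 trigger's actual algorithm.

```python
import numpy as np

def fit_circle(x, y):
    """Kasa algebraic fit: x^2 + y^2 = a*x + b*y + c solved by least squares."""
    A = np.column_stack([x, y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    cx, cy = a / 2, b / 2
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 20, endpoint=False)   # 20 "PMT hits"
x = 3.0 + 5.0 * np.cos(theta) + 0.05 * rng.standard_normal(20)
y = -1.0 + 5.0 * np.sin(theta) + 0.05 * rng.standard_normal(20)
print(fit_circle(x, y))   # close to centre (3.0, -1.0), radius 5.0
```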
Symplectic multi-particle tracking on GPUs
NASA Astrophysics Data System (ADS)
Liu, Zhicong; Qiang, Ji
2018-05-01
A symplectic multi-particle tracking model is implemented on Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) language. The symplectic tracking model can preserve phase space structure and reduce non-physical effects in long term simulation, which is important for beam property evaluation in particle accelerators. Though this model is computationally expensive, it is very suitable for parallelization and can be accelerated significantly by using GPUs. In this paper, we optimized the implementation of the symplectic tracking model on both single GPU and multiple GPUs. Using a single GPU processor, the code achieves a factor of 2-10 speedup for a range of problem sizes compared with the time on a single state-of-the-art Central Processing Unit (CPU) node with similar power consumption and semiconductor technology. It also shows good scalability on a multi-GPU cluster at Oak Ridge Leadership Computing Facility. In an application to beam dynamics simulation, the GPU implementation helps save more than a factor of two total computing time in comparison to the CPU implementation.
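As an illustration of what symplectic tracking means in practice, a second-order drift-kick-drift step for a single particle in a linear focusing channel; the GPU code parallelizes updates of this kind over many particles. The focusing strength and step size are illustrative assumptions.

```python
def drift_kick_drift(x, px, ds, k):
    """Second-order symplectic step for x'' = -k*x (linear focusing)."""
    x = x + 0.5 * ds * px      # half drift
    px = px - ds * k * x       # kick from the focusing force
    x = x + 0.5 * ds * px      # half drift
    return x, px

x, px, k, ds = 1e-3, 0.0, 2.0, 0.01
for _ in range(100_000):
    x, px = drift_kick_drift(x, px, ds, k)
print(k * x**2 + px**2)        # stays near the initial 2e-6: no secular drift
```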
NASA Astrophysics Data System (ADS)
Nikolskiy, V. P.; Stegailov, V. V.
2018-01-01
Metal nanoparticles (NPs) serve as important tools for many modern technologies. However, proper microscopic models of the interaction between ultrashort laser pulses and metal NPs are currently not well developed in many cases. One part of the problem is the description of the warm dense matter that is formed in NPs after intense irradiation. Another part is the description of the electromagnetic waves around NPs. Describing wave propagation requires the solution of Maxwell's equations, and the finite-difference time-domain (FDTD) method is the classic approach for solving them. There are many commercial and free implementations of FDTD, including open source software that supports graphics processing unit (GPU) acceleration. In this report we present results of FDTD calculations for different cases of the interaction between ultrashort laser pulses and metal nanoparticles. Following our previous results, we analyze the efficiency of the GPU acceleration of the FDTD algorithm.
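A minimal 1D FDTD (Yee) update loop of the kind such reports accelerate on GPUs: staggered E and H fields advanced in alternating half-steps, in normalized units with an assumed Courant number of 0.5. Grid size, source shape and step count are illustrative.

```python
import numpy as np

n, steps, courant = 200, 400, 0.5
ez = np.zeros(n)          # electric field on integer grid points
hy = np.zeros(n - 1)      # magnetic field on the staggered half-grid

for t in range(steps):
    hy += courant * np.diff(ez)                  # update H from curl of E
    ez[1:-1] += courant * np.diff(hy)            # update E from curl of H
    ez[n // 2] += np.exp(-((t - 30) / 10) ** 2)  # soft Gaussian source
print(round(float(np.abs(ez).max()), 3))          # pulse propagating outward
```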
Sharp, G C; Kandasamy, N; Singh, H; Folkert, M
2007-10-07
This paper shows how to significantly accelerate cone-beam CT reconstruction and 3D deformable image registration using the stream-processing model. We describe data-parallel designs for the Feldkamp, Davis and Kress (FDK) reconstruction algorithm, and the demons deformable registration algorithm, suitable for use on a commodity graphics processing unit. The streaming versions of these algorithms are implemented using the Brook programming environment and executed on an NVidia 8800 GPU. Performance results using CT data of a preserved swine lung indicate that the GPU-based implementations of the FDK and demons algorithms achieve a substantial speedup--up to 80 times for FDK and 70 times for demons when compared to an optimized reference implementation on a 2.8 GHz Intel processor. In addition, the accuracy of the GPU-based implementations was found to be excellent. Compared with CPU-based implementations, the RMS differences were less than 0.1 Hounsfield unit for reconstruction and less than 0.1 mm for deformable registration.
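As a sketch of the demons algorithm the paper streams onto the GPU, one Thirion-style iteration in NumPy/SciPy: a normalized force along the fixed-image gradient followed by Gaussian smoothing as regularization. The test image, shift, iteration count and smoothing width are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_step(fixed, warped, ux, uy, sigma=2.0):
    """One demons iteration: normalized force along the fixed-image
    gradient, then Gaussian smoothing of the displacement field."""
    gy, gx = np.gradient(fixed)
    diff = warped - fixed
    denom = gx**2 + gy**2 + diff**2 + 1e-9
    ux = gaussian_filter(ux - diff * gx / denom, sigma)
    uy = gaussian_filter(uy - diff * gy / denom, sigma)
    return ux, uy

rng = np.random.default_rng(0)
fixed = gaussian_filter(rng.random((64, 64)), 3)   # smooth synthetic image
moving = np.roll(fixed, 2, axis=0)                 # known shift of +2 rows
yy, xx = np.mgrid[0:64, 0:64].astype(float)
ux, uy = np.zeros((64, 64)), np.zeros((64, 64))
for _ in range(100):
    warped = map_coordinates(moving, [yy + uy, xx + ux], order=1)
    ux, uy = demons_step(fixed, warped, ux, uy)
print(round(float(uy[16:48, 16:48].mean()), 1))    # approaches the true +2
```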
NASA Astrophysics Data System (ADS)
Sentis, M. L.; Delaporte, Ph; Marine, W.; Uteza, O.
2000-06-01
Laser ablation with an automated excimer XeCl laser unit is used for large-surface cleaning. The study focuses on oxidised metal surfaces representative of surfaces contaminated with radionuclides, in the context of nuclear power plant maintenance. The unit contains an XeCl laser, the beam delivery system, the particle collection cell, and the system for real-time control of the cleaning process. The interaction of laser radiation with a surface is considered, in particular the surface damage caused by the cleaning radiation. The beam delivery system consists of an optical fibre bundle 5 m long and allows delivery of 150 W at 308 nm for laser surface cleaning. The cleaning process is controlled by analysing the evolution of the plasma electric field in real time. The system permits the cleaning of 2 to 6 m2 h-1 of oxides with only slight substrate modification.
High performance hybrid functional Petri net simulations of biological pathway models on CUDA.
Chalkidis, Georgios; Nagasaki, Masao; Miyano, Satoru
2011-01-01
Hybrid functional Petri nets are a wide-spread tool for representing and simulating biological models. Due to their potential of providing virtual drug testing environments, biological simulations have a growing impact on pharmaceutical research. Continuous research advancements in biology and medicine lead to exponentially increasing simulation times, thus raising the demand for performance accelerations by efficient and inexpensive parallel computation solutions. Recent developments in the field of general-purpose computation on graphics processing units (GPGPU) enabled the scientific community to port a variety of compute intensive algorithms onto the graphics processing unit (GPU). This work presents the first scheme for mapping biological hybrid functional Petri net models, which can handle both discrete and continuous entities, onto compute unified device architecture (CUDA) enabled GPUs. GPU accelerated simulations are observed to run up to 18 times faster than sequential implementations. Simulating the cell boundary formation by Delta-Notch signaling on a CUDA enabled GPU results in a speedup of approximately 7x for a model containing 1,600 cells.
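A toy illustration of the hybrid model class (not of the paper's CUDA kernels): a continuous transition whose rate is a function of the marking, integrated by Euler steps, plus a discrete transition that fires on a threshold guard. All rates and thresholds are made up.

```python
a, b = 10.0, 0.0          # continuous place markings
gene_on = 0               # discrete place (0/1 tokens)
dt = 0.01

for step in range(1000):
    rate = 0.5 * a                        # functional rate depends on marking
    a, b = a - rate * dt, b + rate * dt   # continuous firing (Euler step)
    if b > 5.0 and not gene_on:           # discrete transition with guard
        gene_on = 1

print(round(a, 3), round(b, 3), gene_on)  # a nearly drained, token placed
```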
Ren, Shanshan; Bertels, Koen; Al-Ars, Zaid
2018-01-01
GATK HaplotypeCaller (HC) is a popular variant caller, which is widely used to identify variants in complex genomes. However, due to its high variants detection accuracy, it suffers from long execution time. In GATK HC, the pair-HMMs forward algorithm accounts for a large percentage of the total execution time. This article proposes to accelerate the pair-HMMs forward algorithm on graphics processing units (GPUs) to improve the performance of GATK HC. This article presents several GPU-based implementations of the pair-HMMs forward algorithm. It also analyzes the performance bottlenecks of the implementations on an NVIDIA Tesla K40 card with various data sets. Based on these results and the characteristics of GATK HC, we are able to identify the GPU-based implementations with the highest performance for the various analyzed data sets. Experimental results show that the GPU-based implementations of the pair-HMMs forward algorithm achieve a speedup of up to 5.47× over existing GPU-based implementations.
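A minimal pair-HMM forward recursion over match/insert/delete states, the dynamic program that dominates GATK HC's run time; the transition and emission probabilities here are illustrative constants, not GATK's calibrated base qualities.

```python
import numpy as np

def pair_hmm_forward(read, hap, p_match=0.9, gap_open=0.05, gap_ext=0.3):
    """Forward likelihood of a read given a haplotype (toy parameters)."""
    R, H = len(read), len(hap)
    M = np.zeros((R + 1, H + 1)); I = np.zeros_like(M); D = np.zeros_like(M)
    D[0, 1:] = 1.0 / H   # free alignment start anywhere along the haplotype
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            emit = 0.99 if read[i - 1] == hap[j - 1] else 0.01
            M[i, j] = emit * (p_match * M[i-1, j-1]
                              + (1 - gap_ext) * (I[i-1, j-1] + D[i-1, j-1]))
            I[i, j] = gap_open * M[i-1, j] + gap_ext * I[i-1, j]   # insertion
            D[i, j] = gap_open * M[i, j-1] + gap_ext * D[i, j-1]   # deletion
    return (M[R, :] + I[R, :]).sum()   # free alignment end

print(pair_hmm_forward("ACGT", "AACGTT"))
```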
A Real-Time Capable Software-Defined Receiver Using GPU for Adaptive Anti-Jam GPS Sensors
Seo, Jiwon; Chen, Yu-Hsuan; De Lorenzo, David S.; Lo, Sherman; Enge, Per; Akos, Dennis; Lee, Jiyun
2011-01-01
Due to their weak received signal power, Global Positioning System (GPS) signals are vulnerable to radio frequency interference. Adaptive beam and null steering of the gain pattern of a GPS antenna array can significantly increase the resistance of GPS sensors to signal interference and jamming. Since adaptive array processing requires intensive computational power, beamsteering GPS receivers were usually implemented using hardware such as field-programmable gate arrays (FPGAs). However, a software implementation using general-purpose processors is much more desirable because of its flexibility and cost effectiveness. This paper presents a GPS software-defined radio (SDR) with adaptive beamsteering capability for anti-jam applications. The GPS SDR design is based on an optimized desktop parallel processing architecture using a quad-core Central Processing Unit (CPU) coupled with a new generation Graphics Processing Unit (GPU) having massively parallel processors. This GPS SDR demonstrates sufficient computational capability to support a four-element antenna array and future GPS L5 signal processing in real time. After providing the details of our design and optimization schemes for future GPU-based GPS SDR developments, the jamming resistance of our GPS SDR under synthetic wideband jamming is presented. Since the GPS SDR uses commercial-off-the-shelf hardware and processors, it can be easily adopted in civil GPS applications requiring anti-jam capabilities.
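For illustration, the numerical core of adaptive null steering on a small array: MVDR weights w = R⁻¹s / (sᴴR⁻¹s) computed from the sample covariance, which suppress a jammer while keeping unit gain toward the look direction. The array geometry, jammer direction and power levels are assumptions for this sketch, not the paper's receiver design.

```python
import numpy as np

rng = np.random.default_rng(1)
n_elem, n_snap, d = 4, 2000, 0.5        # 4 elements, half-wavelength spacing

def steering(theta_deg):
    k = 2 * np.pi * d * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(n_elem))

jam = steering(40)[:, None] * (10 * rng.standard_normal(n_snap))   # jammer at 40 deg
noise = rng.standard_normal((n_elem, n_snap)) + 1j * rng.standard_normal((n_elem, n_snap))
x = jam + noise
R = x @ x.conj().T / n_snap             # sample covariance

s = steering(0)                         # look direction: satellite at 0 deg
w = np.linalg.solve(R, s)
w /= s.conj() @ w                       # distortionless (unit-gain) constraint
gain = lambda th: abs(w.conj() @ steering(th))
print(gain(0), gain(40))                # ~1 toward the satellite, deep null at 40 deg
```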
Heterogeneous real-time computing in radio astronomy
NASA Astrophysics Data System (ADS)
Ford, John M.; Demorest, Paul; Ransom, Scott
2010-07-01
Modern computer architectures suited for general purpose computing are often not the best choice for either I/O-bound or compute-bound problems. Sometimes the best choice is not to choose a single architecture, but to take advantage of the best characteristics of different computer architectures to solve your problems. This paper examines the tradeoffs between using computer systems based on the ubiquitous X86 Central Processing Units (CPUs), Field Programmable Gate Array (FPGA) based signal processors, and Graphics Processing Units (GPUs). We will show how a heterogeneous system can be produced that blends the best of each of these technologies into a real-time signal processing system. FPGAs tightly coupled to analog-to-digital converters connect the instrument to the telescope and supply the first level of computing to the system. These FPGAs are coupled to other FPGAs to continue to provide highly efficient processing power. Data is then packaged up and shipped over fast networks to a cluster of general purpose computers equipped with GPUs, which are used for floating-point intensive computation. Finally, the data is handled by the CPU and written to disk, or further processed. Each of the elements in the system has been chosen for its specific characteristics and the role it can play in creating a system that does the most for the least, in terms of power, space, and money.
Processing device with self-scrubbing logic
Wojahn, Christopher K.
2016-03-01
An apparatus includes a processing unit including a configuration memory and self-scrubber logic coupled to read the configuration memory to detect compromised data stored in the configuration memory. The apparatus also includes a watchdog unit external to the processing unit and coupled to the self-scrubber logic to detect a failure in the self-scrubber logic. The watchdog unit is coupled to the processing unit to selectively reset the processing unit in response to detecting the failure in the self-scrubber logic. The apparatus also includes an external memory external to the processing unit and coupled to send configuration data to the configuration memory in response to a data feed signal outputted by the self-scrubber logic.
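A behavioral sketch (in Python, for illustration only) of the scheme the abstract describes: a scrubber pass that checks configuration memory against a golden reference, and an external watchdog that calls for a reset when scrub reports stop arriving. The CRC check and timings are assumptions; the patent targets hardware, not software.

```python
import time
import zlib

GOLDEN_CRC = zlib.crc32(b"known-good configuration bitstream")

def self_scrub(config_mem: bytes) -> bool:
    """Self-scrubber pass: compare configuration memory to the golden CRC."""
    return zlib.crc32(config_mem) == GOLDEN_CRC

class Watchdog:
    """External watchdog: flags a reset if scrub reports stop arriving."""
    def __init__(self, timeout_s=1.0):
        self.timeout_s, self.last_kick = timeout_s, time.monotonic()
    def kick(self):
        self.last_kick = time.monotonic()
    def expired(self):
        return time.monotonic() - self.last_kick > self.timeout_s

wd = Watchdog()
config = b"known-good configuration bitstream"
if not self_scrub(config):
    print("compromised data: reload configuration from external memory")
wd.kick()                                  # heartbeat after a successful pass
print("reset needed:", wd.expired())       # -> False while scrubbing is alive
```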
Liu, Weihua; Yang, Yi; Wang, Shuqing; Liu, Yang
2014-01-01
Order insertion often occurs in the scheduling process of a logistics service supply chain (LSSC), disturbing normal time scheduling, especially in the environment of mass customization logistics service. This study analyses the order similarity coefficient and the order insertion operation process, and then establishes an order insertion scheduling model of LSSC that considers service capacity and time factors. The model aims to minimize the average unit volume operation cost of the logistics service integrator and maximize the average satisfaction degree of the functional logistics service providers. To verify the viability and effectiveness of the model, a specific example is numerically analyzed. Some interesting conclusions are obtained. First, as the completion time delay coefficient permitted by customers increases, the volume of orders that can be inserted first increases and then stabilizes. Second, supply chain performance is best when the volume of inserted orders equals the surplus volume of the normal operation capacity of the mass service process. Third, the larger the normal operation capacity of the mass service process, the larger the volume of orders that can be inserted. Moreover, compared to increasing the completion time delay coefficient, improving the normal operation capacity of the mass service process is more useful.
Identification of effective exciton-exciton annihilation in squaraine-squaraine copolymers.
Hader, Kilian; May, Volkhard; Lambert, Christoph; Engel, Volker
2016-05-11
Ultrafast time-resolved transient absorption spectroscopy is able to monitor the fate of the excited state population in molecular aggregates or polymers. Due to many competing decay processes, the identification of exciton-exciton annihilation (EEA) is difficult. Here, we use a microscopic model to describe exciton annihilation processes in squaraine-squaraine copolymers. Transient absorption time traces measured at different laser powers exhibit an unusual time-dependence. The analysis points towards dynamics taking place on three time-scales. Immediately after laser excitation, a localization of excitons takes place within the femtosecond time regime. This is followed by exciton-exciton annihilation, which is responsible for a fast decay of the exciton population. At later times, only excitations localized on units that are not directly connected remain, so diffusion dominates the dynamics and leads to a slower decay. We thus provide evidence for EEA tracked by time-resolved spectroscopy, which has not been reported this clearly before.
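The usual rate-equation signature used to identify EEA: a density decay dn/dt = -k1*n - (gamma/2)*n^2, whose quadratic term makes high-excitation traces decay faster at early times. The constants below are illustrative, not fitted to the squaraine data.

```python
import numpy as np

def exciton_decay(n0, k1=0.02, gamma=0.01, dt=0.1, steps=500):
    """Euler integration of dn/dt = -k1*n - (gamma/2)*n**2."""
    n, out = n0, []
    for _ in range(steps):
        n += dt * (-k1 * n - 0.5 * gamma * n * n)
        out.append(n)
    return np.array(out)

low, high = exciton_decay(1.0), exciton_decay(20.0)
# Normalized survival after the same delay: the high-density trace shows
# the extra fast annihilation component.
print(low[49] / 1.0, high[49] / 20.0)
```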
Hidalgo, H.G.; Das, T.; Dettinger, M.D.; Cayan, D.R.; Pierce, D.W.; Barnett, T.P.; Bala, G.; Mirin, A.; Wood, A.W.; Bonfils, Celine; Santer, B.D.; Nozawa, T.
2009-01-01
This article applies formal detection and attribution techniques to investigate the nature of observed shifts in the timing of streamflow in the western United States. Previous studies have shown that the snow hydrology of the western United States has changed in the second half of the twentieth century. Such changes manifest themselves in the form of more rain and less snow, in reductions in the snow water contents, and in earlier snowmelt and associated advances in streamflow "center" timing (the day in the "water-year" on average when half the water-year flow at a point has passed). However, with one exception over a more limited domain, no other study has attempted to formally attribute these changes to anthropogenic increases of greenhouse gases in the atmosphere. Using the observations together with a set of global climate model simulations and a hydrologic model (applied to three major hydrological regions of the western United States_the California region, the upper Colorado River basin, and the Columbia River basin), it is found that the observed trends toward earlier "center" timing of snowmelt-driven streamflows in the western United States since 1950 are detectably different from natural variability (significant at the p < 0.05 level). Furthermore, the nonnatural parts of these changes can be attributed confidently to climate changes induced by anthropogenic greenhouse gases, aerosols, ozone, and land use. The signal from the Columbia dominates the analysis, and it is the only basin that showed a detectable signal when the analysis was performed on individual basins. It should be noted that although climate change is an important signal, other climatic processes have also contributed to the hydrologic variability of large basins in the western United States. ?? 2009 American Meteorological Society.
Cescutti, Paola; Foschiatti, Michela; Furlanis, Linda; Lagatolla, Cristina; Rizzo, Roberto
2010-07-02
The repeating unit of cepacian, the exopolysaccharide produced by the majority of the microorganisms belonging to the Burkholderia cepacia complex, was isolated from inner bacterial membranes and investigated by mass spectrometry, with and without prior derivatisation. Interpretation of the mass spectra led to the determination of the biological repeating unit primary structure, thus disclosing the nature of the oligosaccharide produced in vivo. Moreover, mass spectra recorded on the native sample revealed that acetyl substitution was very variable, producing a mixture of repeating units containing zero to four acyl groups. At the same time, finding acetylated oligosaccharides showed that binding of these substituents occurred in the cellular periplasmic space, before the polymerisation process took place. In the chromatographic peak containing the repeating unit, oligosaccharides shorter than the repeating unit co-eluted. Mass spectrometric analysis showed that they were biosynthetic intermediates of the repeating unit and further investigation revealed the biosynthetic sequence of cepacian building block. Copyright 2010 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Feng; Ren, Yinghui; Bian, Wensheng, E-mail: bian@iccas.ac.cn
The accurate time-independent quantum dynamics calculations on the ground-state tunneling splitting of malonaldehyde in full dimensionality are reported for the first time. This is achieved with an efficient method developed by us. In our method, the basis functions are customized for the hydrogen transfer process, which has the effect of greatly reducing the size of the final Hamiltonian matrix, and the Lanczos method and a parallel strategy are used to further overcome the memory and central processing unit time bottlenecks. The obtained ground-state tunneling splitting of 24.5 cm⁻¹ is in excellent agreement with the benchmark value of 23.8 cm⁻¹ computed with the full-dimensional, multi-configurational time-dependent Hartree approach on the same potential energy surface, and we estimate that our reported value has an uncertainty of less than 0.5 cm⁻¹. Moreover, the role of various vibrational modes strongly coupled to the hydrogen transfer process is revealed.
How to implement information technology in the operating room and the intensive care unit.
Meyfroidt, Geert
2009-03-01
The number of operating rooms and intensive care units looking for a data management system to perform their increasingly complex tasks is rising. Although at this time only a minority is computerized, within the next few years many centres will start implementing information technology. The transition towards a computerized system is a major venture, which will have a major impact on workflow. This chapter reviews the present literature. Published papers on this subject are predominantly single- or multi-centre implementation reports. The general principles that should guide such a process are described. For healthcare institutions or individual practitioners that plan to undertake this venture, the implementation process is described in a practical, nine-step overview.
THOR Ion Mass Spectrometer (IMS)
NASA Astrophysics Data System (ADS)
Retinò, Alessandro
2017-04-01
Turbulence Heating ObserveR (THOR) is the first mission ever flown in space dedicated to plasma turbulence. The Ion Mass Spectrometer (IMS) onboard THOR will provide the first high-time-resolution measurements of mass-resolved ions in near-Earth space, focusing on hot ions in the foreshock, shock and magnetosheath turbulent regions. These measurements are required to study how kinetic-scale turbulent fluctuations heat and accelerate different ion species. IMS will measure the full three-dimensional distribution functions of the main ion species (H+, He++, O+) in the energy range 10 eV/q to 30 keV/q, with energy resolution ΔE/E down to 10% and angular resolution down to 11.25°. The time resolution will be 150 ms for H+, 300 ms for He++ and ~1 s for O+, which corresponds to ion scales in the foreshock, shock and magnetosheath regions. Such high time resolution is achieved by mounting four identical IMS units phased by 90° in the spacecraft spin plane. Each IMS unit combines a top-hat electrostatic analyzer with deflectors at the entrance together with a time-of-flight section to perform mass selection. Adequate mass-per-charge resolution (M/q)/(ΔM/q) (≥ 8 for He++ and ≥ 3 for O+) is obtained through a 6 cm long Time-of-Flight (TOF) section. The IMS electronics includes a fast sweeping high-voltage board that is required to make measurements at high cadence. Ion detection includes Micro Channel Plates (MCPs) combined with Application-Specific Integrated Circuits (ASICs) for charge amplification and discrimination, and a discrete Time-to-Amplitude Converter (TAC) to determine the ion time of flight. A processor board will be used for ion event formatting and will interface with the Particle Processing Unit (PPU), which performs data processing for the THOR particle detectors. The IMS instrument is being designed and will be built and calibrated by an international consortium of scientific institutes from France, the USA, Germany, Japan and Switzerland.
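As a worked illustration of why a TOF section separates these species: for ions accelerated through an assumed potential V, the flight time over the path d scales as the square root of m/q, so H+, He++ and O+ arrive at well-separated times. The 15 kV value is an assumption for this sketch, not the instrument's actual post-acceleration.

```python
import math

V = 15e3                  # assumed acceleration potential (volts)
e, amu = 1.602e-19, 1.661e-27
d = 0.06                  # 6 cm time-of-flight section (from the abstract)

for name, m_over_q in [("H+", 1.0), ("He++", 2.0), ("O+", 16.0)]:
    # t = d * sqrt((m/q) / (2*V*e_per_charge)); only m/q matters
    t = d * math.sqrt(m_over_q * amu / (2 * V * e))
    print(name, round(t * 1e9, 1), "ns")   # ~35, ~50, ~141 ns: separable peaks
```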
SU-E-J-91: FFT Based Medical Image Registration Using a Graphics Processing Unit (GPU).
Luce, J; Hoggarth, M; Lin, J; Block, A; Roeske, J
2012-06-01
To evaluate the efficiency gains obtained from using a Graphics Processing Unit (GPU) to perform Fourier Transform (FT) based image registration. Fourier-based image registration involves obtaining the FT of the component images and analyzing them in Fourier space to determine the translations and rotations of one image set relative to another. An important property of FT registration is that by enlarging the images (adding additional pixels), one can obtain translations and rotations with sub-pixel resolution; the expense, however, is increased computational time. GPUs may decrease the computational time associated with FT image registration by taking advantage of their parallel architecture to perform matrix computations much more efficiently than a Central Processing Unit (CPU). In order to evaluate the computational gains produced by a GPU, images with known translational shifts were utilized. A program was written in the Interactive Data Language (IDL; Exelis, Boulder, CO) to perform CPU-based calculations. Subsequently, the program was modified using GPU bindings (Tech-X, Boulder, CO) to perform GPU-based computation on the same system. Multiple image sizes were used, ranging from 256×256 to 2304×2304. The time required to complete the full algorithm on the CPU and GPU was benchmarked, and the speed increase was defined as the ratio of CPU to GPU computational time. This ratio was greater than 1.0 for all images, indicating that the GPU performed the algorithm faster than the CPU. The smallest improvement, a ratio of 1.21, was found with the smallest image size of 256×256, and the largest speedup, a ratio of 4.25, was observed with the largest image size of 2304×2304. GPU programming resulted in a significant decrease in the computational time associated with an FT image registration algorithm. The inclusion of the GPU may provide near real-time, sub-pixel registration capability. © 2012 American Association of Physicists in Medicine.
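A minimal NumPy version of FT-based translation recovery (phase correlation), the class of algorithm benchmarked here; enlarging the arrays before the inverse FFT is what buys sub-pixel resolution at the cost of the larger FFTs the GPU absorbs. The integer-shift demo below omits that enlargement step.

```python
import numpy as np

def phase_correlate(a, b):
    """Recover the integer translation of image a relative to image b."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real   # normalized cross-power
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # unwrap peaks past the midpoint into negative shifts
    return [p if p < s // 2 else p - s for p, s in zip(peak, corr.shape)]

img = np.random.rand(256, 256)
shifted = np.roll(img, (7, -3), axis=(0, 1))            # known shift
print(phase_correlate(shifted, img))                    # -> [7, -3]
```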
Multi-time scale energy management of wind farms based on comprehensive evaluation technology
NASA Astrophysics Data System (ADS)
Xu, Y. P.; Huang, Y. H.; Liu, Z. J.; Wang, Y. F.; Li, Z. Y.; Guo, L.
2017-11-01
A novel energy management scheme for wind farms is proposed in this paper. First, a novel comprehensive evaluation system is proposed to quantify the economic properties of each wind farm, making the energy management more economical and reasonable. Then, a multi-time-scale scheduling method is developed. The day-ahead schedule optimizes the unit commitment of thermal power generators. The intraday schedule optimizes the power generation plan for all thermal power generating units, hydroelectric generating sets and wind power plants. Finally, the power generation plan can be revised in a timely manner during on-line scheduling. The paper concludes with simulations conducted on a real provincial integrated energy system in northeast China. Simulation results validate the proposed model and the corresponding solving algorithms.
Development of position measurement unit for flying inertial fusion energy target
NASA Astrophysics Data System (ADS)
Tsuji, R.; Endo, T.; Yoshida, H.; Norimatsu, T.
2016-03-01
We report the present status of the development of a position measurement unit (PMU) for flying inertial fusion energy (IFE) targets. The PMU, which uses the Arago spot phenomenon, is designed to have a measurement accuracy better than 1 μm. By employing divergent, pulsed orthogonal laser beam illumination, we can measure the time and the target position at each pulsed illumination. The two-dimensional Arago spot image is compressed into a one-dimensional image by a cylindrical lens for real-time processing. PMUs are set along the injection path of the flying target. The local positions of the target in each PMU are transferred to the controller and analysed to calculate the target trajectory. Two methods are presented to calculate the arrival time and the arrival position of the target at the reactor centre.
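For illustration, locating the spot centre on the one-dimensional line image the cylindrical lens produces; a simple intensity centroid is shown here, whereas the real unit reaches sub-micrometre accuracy with a calibrated fit. Pixel count, spot width and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
pix = np.arange(1024, dtype=float)                 # line-scan detector pixels
profile = np.exp(-((pix - 517.3) / 8.0) ** 2)      # bright-spot line profile
profile += 0.001 * rng.random(1024)                # small background noise
centroid = (pix * profile).sum() / profile.sum()   # intensity-weighted centre
print(round(centroid, 1))                          # close to the true 517.3
```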
NASA Astrophysics Data System (ADS)
Rushton, K. R.; Zaman, M. Asaduz
2017-01-01
Identifying flow processes in multi-aquifer flow systems is a considerable challenge, especially if substantial abstraction occurs. The Rajshahi Barind groundwater flow system in Bangladesh provides an example of the manner in which flow processes can change with time. At some locations there has been a decrease with time in groundwater heads and also in the magnitude of the seasonal fluctuations. This report describes the important stages in a detailed field and modelling study at a specific location in this groundwater flow system. To understand more about the changing conditions, piezometers were constructed in 2015 at different depths but the same location; water levels in these piezometers indicate the formation of an additional water table. Conceptual models are described which show how conditions have changed between the years 2000 and 2015. Following the formation of the additional water table, the aquifer system is conceptualised as two units. A pumping test is described with data collected during both the pumping and recovery phases. Pumping test data for the Lower Unit are analysed using a computational model with estimates of the aquifer parameters; the model also provided estimates of the quantity of water moving from the ground surface, through the Upper Unit, to provide an input to the Lower Unit. The reasons for the substantial changes in the groundwater heads are identified; monitoring of the recently formed additional water table provides a means of testing whether over-abstraction is occurring.
Evaluation of components of residential treatment by Medicaid ICF-MR surveys: a validity assessment.
Reid, D H; Parsons, M B; Green, C W; Schepis, M M
1991-01-01
We evaluated the proficiency of the federal Medicaid program's survey process for evaluating intermediate care facilities for the mentally retarded. In Study 1, an observational analysis of active treatment during leisure times in living units suggested that these surveys did not discriminate between certified and noncertified units. In Study 2, a reactivity analysis of a survey indicated that direct-care staff performed differently during the survey by increasing interactions with clients and decreasing nonwork behavior. Similarly, results of Study 3 showed increases in client access to leisure materials during a survey. In Study 4, questionnaire results indicated considerable variability among service providers' opinions on the consistency, accuracy, and objectivity with which survey teams determine agency standard compliance. Results are discussed regarding effects of the questionable proficiency of survey processes and the potential utility of behavioral assessment methodologies to improve such processes.
Gulati, Tanuj; Ramanathan, Dhakshin; Wong, Chelsea; Ganguly, Karunesh
2017-01-01
Brain-Machine Interfaces can allow neural control over assistive devices. They also provide an important platform to study neural plasticity. Recent studies indicate that optimal engagement of learning is essential for robust neuroprosthetic control. However, little is known about the neural processes that may consolidate a neuroprosthetic skill. Based on the growing body of evidence linking slow-wave activity (SWA) during sleep to consolidation, we examined if there is ‘offline’ processing after neuroprosthetic learning. Using a rodent model, here we show that after successful learning, task-related units specifically experienced increased locking and coherency to SWA during sleep. Moreover, spike-spike coherence among these units was significantly enhanced. These changes were not present with poor skill acquisition or after control awake periods, demonstrating specificity of our observations to learning. Interestingly, time spent in SWA predicted performance gains. Thus, SWA appears to play a role in offline processing after neuroprosthetic learning. PMID:24997761
Wang, Xiansheng; Ni, Jiaheng; Pang, Shuo; Li, Ying
2017-04-01
An electrocoagulation (EC)/peanut shell (PS) adsorption coupling technique was studied for the removal of malachite green (MG) in the present work. The addition of an appropriate PS dosage (5 g/L) resulted in a remarkable increase in the removal efficiency of MG at lower current density and shorter operating time compared with the conventional EC process. The effects of current density, pH of the MG solution, PS dosage and initial MG concentration were also investigated. The maximum removal efficiency of MG was 98% under optimum conditions in 5 min, which was 23% higher than that of the EC process alone. Furthermore, the unit energy demand (UED) and the unit electrode material demand (UEMD) were calculated and discussed. The results demonstrated that the EC/PS adsorption coupling method achieved a reduction of 94% in UED and UEMD compared with the EC process.
NASA Astrophysics Data System (ADS)
Cheek, Kim A.
2017-08-01
Ideas about temporal (and spatial) scale impact students' understanding across science disciplines. Learners have difficulty comprehending the long time periods associated with natural processes because they have no referent for the magnitudes involved. When people have a good "feel" for quantity, they estimate cardinal number magnitude linearly. Magnitude estimation errors can be explained by confusion about the structure of the decimal number system, particularly in terms of how powers of ten are related to one another. Indonesian children regularly use large currency units. This study investigated if they estimate long time periods accurately and if they estimate those time periods the same way they estimate analogous currency units. Thirty-nine children from a private International Baccalaureate school estimated temporal magnitudes up to 10,000,000,000 years in a two-part study. Artifacts children created were compared to theoretical model predictions previously used in number magnitude estimation studies as reported by Landy et al. (Cognitive Science 37:775-799, 2013). Over one third estimated the magnitude of time periods up to 10,000,000,000 years linearly, exceeding what would be expected based upon prior research with children this age who lack daily experience with large quantities. About half treated successive powers of ten as a count sequence instead of multiplicatively related when estimating magnitudes of time periods. Children generally estimated the magnitudes of long time periods and familiar, analogous currency units the same way. Implications for ways to improve the teaching and learning of this crosscutting concept/overarching idea are discussed.
The WINCOF-I code: Detailed description
NASA Technical Reports Server (NTRS)
Murthy, S. N. B.; Mullican, A.
1993-01-01
The performance of an axial-flow fan-compressor unit is basically unsteady when there is ingestion of water along with the gas phase. The gas phase is a mixture of air and water vapor in the case of a bypass fan engine that provides thrust power to an aircraft. The liquid water may be in the form of droplets and film at entry to the fan. The unsteadiness is then associated with the relative motion between the gas phase and water, at entry and within the machine, while the water undergoes impact on material surfaces, centrifuging, heat and mass transfer processes, and reingestion in blade wakes, following peel-off from blade surfaces. The unsteadiness may be caused by changes in atmospheric conditions and at entry into and exit from rain storms while the aircraft is in flight. In a multi-stage machine, with an uneven distribution of blade tip clearance, the combined effect of various processes in the presence of steady or time-dependent ingestion is such as to make the performance of a fan and a compressor unit time-dependent from the start of ingestion up to a short time following termination of ingestion. The original WINCOF code was developed without accounting for the relative motion between gas and liquid phases in the ingested fluid. A modification of the WINCOF code was developed and named WINCOF-I. The WINCOF-I code can provide the transient performance of a fan-compressor unit under a variety of input conditions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... can be one release or a series of releases over a short time period due to a malfunction in the... or a series of devices. Examples include incinerators, carbon adsorption units, condensers, flares... do not occur simultaneously in a batch operation. A batch process consists of a series of batch...
2004-02-22
order to minimize Emergency Center overcrowding and ambulance diversions. The purpose of this study was to identify impeding systematic delays in the...turnaround times. A pilot study was conducted on two medicine and two surgery inpatient nursing units to analyze bed turnaround times and discharge times of...to the BTGH executive leadership who identified a need for this study: Mr. George Masi, Associate Administrator; Ms. Beth De Guzman, Chief Nursing
Extending the Query Language of a Data Warehouse for Patient Recruitment.
Dietrich, Georg; Ertl, Maximilian; Fette, Georg; Kaspar, Mathias; Krebs, Jonathan; Mackenrodt, Daniel; Störk, Stefan; Puppe, Frank
2017-01-01
Patient recruitment for clinical trials is a laborious task, as many texts have to be screened. Usually, this work is done manually and takes a lot of time. We have developed a system that automates the screening process. Besides standard keyword queries, the query language supports extraction of numbers, time-spans and negations. In a feasibility study for patient recruitment from a stroke unit with 40 patients, we achieved encouraging extraction rates above 95% for numbers and negations and ca. 86% for time spans.
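The abstract does not publish the query grammar, but the three extensions it names (numbers, time spans, negations) can be illustrated with a small regular-expression sketch; the patterns below are illustrative assumptions, not the system's actual rules.

    import re

    note = "Blood pressure 145/90 mmHg for 2 weeks; no signs of atrial fibrillation."

    # Numbers, optionally followed by a unit
    numbers = re.findall(r"\d+(?:\.\d+)?\s*(?:mmHg|mg|kg|%)?", note)

    # Time spans such as "2 weeks" or "14 days"
    timespans = re.findall(r"\d+\s*(?:day|week|month|year)s?", note)

    # Negations: a trigger word within a few tokens of a concept
    negated = re.findall(r"\b(?:no|without|denies)\b(?:\s+\w+){0,4}", note)

    print(numbers)    # ['145', '90 mmHg', '2 ']  (crude, but shows the idea)
    print(timespans)  # ['2 weeks']
    print(negated)    # ['no signs of atrial fibrillation']

A production system would layer context rules on top of such patterns, which is presumably where the reported gap between ~95% (numbers, negations) and ~86% (time spans) arises.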
Face-to-face handoff: improving transfer to the pediatric intensive care unit after cardiac surgery.
Vergales, Jeffrey; Addison, Nancy; Vendittelli, Analise; Nicholson, Evelyn; Carver, D Jeannean; Stemland, Christopher; Hoke, Tracey; Gangemi, James
2015-01-01
The goal was to develop and implement a comprehensive, primarily face-to-face handoff process that begins in the operating room and concludes at the bedside in the intensive care unit (ICU) for pediatric patients undergoing congenital heart surgery. Involving all stakeholders in the planning phase, the framework of the handoff system encompassed a combination of a formalized handoff tool, focused process steps that occurred prior to patient arrival in the ICU, and an emphasis on face-to-face communication at the conclusion of the handoff. The final process was evaluated by the use of observer checklists to examine quality metrics and timing for all patients admitted to the ICU following cardiac surgery. The process was found to improve how various providers view the efficiency of handoff, the ease of asking questions at each step, and the overall capability to improve patient care regardless of overall surgical complexity.
Progressive freezing and sweating in a test unit
NASA Astrophysics Data System (ADS)
Ulrich, J.; Özoğuz, Y.
1990-01-01
Crystallization from melts is applied in several fields such as waste water treatment, fruit juice or liquid food concentration, and the purification of organic chemicals. Investigations to improve the understanding, the performance and the control of the process have been carried out. The experimental unit used a vertical tube with a falling film on the outside. Process-controlling parameters were studied with a specially designed measuring technique. The results demonstrate the dependency of those parameters upon each other and indicate how to control the process by controlling the dominant parameter: the growth rate of the crystal coat. A further purification of the crystal layer can be achieved by introducing the procedure of sweating, which is a controlled partial melting of the crystal coat. Here again process parameters have been varied and results are presented. An efficiently executed sweating step, tuned to the crystallization procedure, has a strong effect upon the final purity of the product and should save crystallization steps, energy and time.
Corton, John; Toop, Trisha; Walker, Jonathan; Donnison, Iain S; Fraser, Mariecia D
2014-10-01
The integrated generation of solid fuel and biogas from biomass (IFBB) system is an innovative approach to maximising energy conversion from low-input high-diversity (LIHD) biomass. In this system, water-pre-treated and ensiled LIHD biomass is pressed. The press fluid is anaerobically digested to produce methane that is used to power the process. The fibrous fraction is densified and then sold as a combustion fuel. Two process options designed to concentrate the press fluid were assessed to ascertain their influence on productivity in an IFBB-like system: sedimentation and the omission of pre-treatment water. By concentrating press fluid and not adding water during processing, energy production from methane was increased by 75% per unit time and solid fuel productivity increased by 80% per unit of fluid produced. The additional energy requirements for pressing more biomass in order to generate equal volumes of feedstock were accounted for in these calculations.
The Impact of Competing Time Delays in Stochastic Coordination Problems
NASA Astrophysics Data System (ADS)
Korniss, G.; Hunt, D.; Szymanski, B. K.
2011-03-01
Coordinating, distributing, and balancing resources in coupled systems is a complex task, as these operations are very sensitive to time delays. Delays are present in most real communication and information systems, including info-social and neuro-biological networks, and can be attributed both to non-zero transmission times between different units of the system and to the non-zero times it takes to process the information and execute the desired action at the individual units. Here, we investigate the importance and impact of these two types of delays in a simple coordination (synchronization) problem in a noisy environment. We establish the scaling theory for the phase boundary of synchronization and for the steady-state fluctuations in the synchronizable regime. Further, we provide the asymptotic behavior near the boundary of the synchronizable regime. Our results also imply the potential for optimization and trade-offs in stochastic synchronization and coordination problems with time delays. Supported in part by DTRA, ARL, and ONR.
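For concreteness, the class of models described can be written (in our notation, as a sketch consistent with the abstract rather than a quotation of the authors' equations) as a noisy linear coordination dynamics with a local processing delay tau_o and a transmission delay tau_tr:

    \frac{dh_i(t)}{dt} = -\sum_{j} C_{ij}\,\bigl[ h_i(t-\tau_o) - h_j(t-\tau_{tr}) \bigr] + \xi_i(t)

where h_i is the local state (e.g., task backlog or clock offset), C_ij the network coupling, and xi_i Gaussian noise; the synchronizable regime and the steady-state width of the fluctuations then depend on the interplay of the two delays with the spectrum of C.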
Implementation Plan for Flexible Automation in U.S. Shipyards
1985-01-01
process steps, cramped work sites, interrupted geometries, irregular or novel shapes, and other factors that affect automatability. We also try to ... held by 2 hands in awkward places. Interrupted geometry of plates and beams. Cannot predict outcome. Creates need to measure and recut. Automation, if ... Rearrange work. Redefine work units. Too many interruptions of time, space, and geometry; only a little work gets
Century Scale Evaporation Trend: An Observational Study
NASA Technical Reports Server (NTRS)
Bounoui, Lahouari
2012-01-01
Several climate models with different complexity indicate that under increased CO2 forcing, runoff would increase faster than precipitation over land. However, observations over large U.S. watersheds indicate otherwise. This inconsistency between models and observations suggests that there may be important feedbacks between climate and land surface unaccounted for in the present generation of models. We have analyzed century-scale observed annual runoff and precipitation time-series over several United States Geological Survey hydrological units covering large forested regions of the Eastern United States not affected by irrigation. Both time-series exhibit a positive long-term trend; however, in contrast to model results, these historic data records show that precipitation has increased at roughly double the rate of runoff. We considered several hydrological processes to close the water budget and found that none of these processes acting alone could account for the total water excess generated by the observed difference between precipitation and runoff. We conclude that evaporation has increased over the period of observations and show that the increasing trend in precipitation minus runoff is correlated with the observed increase in vegetation density based on the longest available global satellite record. The increase in vegetation density has important implications for climate; it slows but does not alleviate the projected warming associated with greenhouse gas emissions.
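The water-budget reasoning can be stated in one line: over a century-scale averaging window the storage change is small, so (standard hydrological notation, added here for clarity)

    P = R + E + \Delta S, \qquad \Delta S \approx 0 \;\Rightarrow\; E \approx P - R

and a faster upward trend in precipitation P than in runoff R therefore implies an upward trend in evapotranspiration E.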
Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L; Wang, Xueding; Liu, Xiaojun
2013-08-01
Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction, and display for both photoacoustic (PA) and US imaging, all in a truly real-time manner. The back-projection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state-of-the-art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo was achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.
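The abstract identifies delay-and-sum back-projection as the reconstruction kernel. A minimal NumPy sketch of that kernel for a 2D geometry is shown below; the array geometry, sound speed, and sampling rate are illustrative assumptions, and the published system implements this (with weighting and interpolation refinements) on a GPU.

    import numpy as np

    def backproject(rf, elem_xy, fs, c, grid_x, grid_z):
        """Delay-and-sum PA back-projection onto a 2D pixel grid.
        rf: (n_elem, n_samp) photoacoustic data with t = 0 at the laser pulse."""
        X, Z = np.meshgrid(grid_x, grid_z)
        img = np.zeros_like(X)
        for e, (ex, ez) in enumerate(elem_xy):
            dist = np.hypot(X - ex, Z - ez)              # pixel-to-element distance
            idx = np.clip(np.round(dist / c * fs).astype(int), 0, rf.shape[1] - 1)
            img += rf[e, idx]                            # sum one-way-delayed samples
        return img

    # toy usage: 64-element linear array, 40 MHz sampling, 1540 m/s tissue speed
    fs, c = 40e6, 1540.0
    elem_xy = [(x, 0.0) for x in np.linspace(-0.02, 0.02, 64)]
    rf = np.random.randn(64, 2048)
    img = backproject(rf, elem_xy, fs, c,
                      np.linspace(-0.02, 0.02, 128), np.linspace(0.005, 0.045, 128))

Because every pixel-element pair is independent, the double loop parallelizes trivially, which is what makes the GPU implementation effective.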
NASA Astrophysics Data System (ADS)
Cai, Xiaohui; Liu, Yang; Ren, Zhiming
2018-06-01
Reverse-time migration (RTM) is a powerful tool for imaging geologically complex structures such as steep-dip and subsalt features. However, its implementation is quite computationally expensive. Recently, as a low-cost solution, the graphics processing unit (GPU) was introduced to improve the efficiency of RTM. In this paper, we develop three ameliorative strategies to implement RTM on a GPU card. First, given the high accuracy and efficiency of the adaptive optimal finite-difference (FD) method based on least squares (LS) on the central processing unit (CPU), we study the optimal LS-based FD method on the GPU. Second, we extend the CPU-based hybrid absorbing boundary condition (ABC) to a GPU-based one by addressing two issues of the former when introduced to a GPU card: its high time cost and chaotic threads. Third, for large-scale data, a combinatorial strategy of optimal checkpointing and efficient boundary storage is introduced to trade off memory against recomputation. To save the time of communication between host and disk, a portable operating system interface (POSIX) thread is utilized to run on another CPU core at the checkpoints. Applications of the three strategies on the GPU with the compute unified device architecture (CUDA) programming language in RTM demonstrate their efficiency and validity.
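The computational core that all three strategies serve is the finite-difference propagation kernel, applied once forward in time for the source wavefield and once backward for the receiver wavefield. Below is a minimal NumPy sketch of that kernel with standard Taylor-series coefficients; the paper's version uses LS-optimized coefficients, a hybrid ABC, and CUDA, none of which are reproduced here.

    import numpy as np

    C = [-1/12, 4/3, -5/2, 4/3, -1/12]   # 4th-order second-derivative stencil

    def step(p_prev, p_cur, v, dt, dx):
        """One time step of the 2D acoustic wave equation (interior points only)."""
        nz, nx = p_cur.shape
        lap = np.zeros_like(p_cur)
        for k, c in zip(range(-2, 3), C):
            lap[2:nz-2, 2:nx-2] += c * (p_cur[2+k:nz-2+k, 2:nx-2] +
                                        p_cur[2:nz-2, 2+k:nx-2+k])
        return 2*p_cur - p_prev + (v*dt/dx)**2 * lap

    # toy usage: a point disturbance spreading through a constant-velocity model
    nz = nx = 200
    p0, p1 = np.zeros((nz, nx)), np.zeros((nz, nx))
    p1[nz//2, nx//2] = 1.0
    for _ in range(100):
        p0, p1 = p1, step(p0, p1, v=2000.0, dt=1e-3, dx=10.0)

Because the backward pass needs the forward wavefield at every step, storing either full checkpoints or just boundary slices of this array is exactly the memory/recomputation trade-off the third strategy addresses.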
GPU-based prompt gamma ray imaging from boron neutron capture therapy.
Yoon, Do-Kun; Jung, Joo-Young; Jo Hong, Key; Sil Lee, Keum; Suk Suh, Tae
2015-01-01
The purpose of this research is to perform fast reconstruction of a prompt gamma ray image using graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, a modified ordered-subset expectation maximization reconstruction algorithm using GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under the curve values from the ROC analysis were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). The tomographic image using the prompt gamma ray events from the BNCT simulation was acquired using GPU computation in order to permit fast reconstruction during treatment. The authors verified the feasibility of prompt gamma ray image reconstruction using GPU computation for BNCT simulations.
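The abstract does not spell out its modified OSEM variant, so as a reference point the standard ordered-subset EM update for Poisson data y ~ Poisson(Ax) is sketched below in NumPy; the system matrix and subset scheme are illustrative.

    import numpy as np

    def osem(A, y, n_iter=10, n_subsets=4):
        """Ordered-subset EM for y ~ Poisson(A @ x); rows of A are projection bins."""
        x = np.ones(A.shape[1])
        subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
        for _ in range(n_iter):
            for s in subsets:
                As = A[s]
                ratio = y[s] / np.maximum(As @ x, 1e-12)   # measured / predicted
                x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(s.size), 1e-12)
        return x

    # toy usage: random system with a known non-negative image
    A = np.random.rand(200, 64)
    x_true = np.random.rand(64)
    y = np.random.poisson(A @ x_true)
    x_hat = osem(A, y)

Each subset update touches only a slice of the projection data, and the matrix-vector products within a subset are what the GPU parallelizes.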
NASA Technical Reports Server (NTRS)
Murray, James; Kirillov, Alexander
2008-01-01
The crew activity analyzer (CAA) is a system of electronic hardware and software for automatically identifying patterns of group activity among crew members working together in an office, cockpit, workshop, laboratory, or other enclosed space. The CAA synchronously records multiple streams of data from digital video cameras, wireless microphones, and position sensors, then plays back and processes the data to identify activity patterns specified by human analysts. The processing greatly reduces the amount of time that the analysts must spend in examining large amounts of data, enabling the analysts to concentrate on subsets of data that represent activities of interest. The CAA has potential for use in a variety of governmental and commercial applications, including planning for crews for future long space flights, designing facilities wherein humans must work in proximity for long times, improving crew training and measuring crew performance in military settings, human-factors and safety assessment, development of team procedures, and behavioral and ethnographic research. The data-acquisition hardware of the CAA (see figure) includes two video cameras: an overhead one aimed upward at a paraboloidal mirror on the ceiling and one mounted on a wall aimed in a downward slant toward the crew area. As many as four wireless microphones can be worn by crew members. The audio signals received from the microphones are digitized, then compressed in preparation for storage. Approximate locations of as many as four crew members are measured by use of a Cricket indoor location system. [The Cricket indoor location system includes ultrasonic/radio beacon and listener units. A Cricket beacon (in this case, worn by a crew member) simultaneously transmits a pulse of ultrasound and a radio signal that contains identifying information. Each Cricket listener unit measures the difference between the times of reception of the ultrasound and radio signals from an identified beacon. Assuming essentially instantaneous propagation of the radio signal, the distance between that beacon and the listener unit is estimated from this time difference and the speed of sound in air.] In this system, six Cricket listener units are mounted in various positions on the ceiling, and as many as four Cricket beacons are attached to crew members. The three-dimensional position of each Cricket beacon can be estimated from the time-difference readings of that beacon from at least three Cricket listener units
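The bracketed description of the Cricket ranging reduces to distance = speed of sound x (ultrasound arrival - radio arrival), and three or more listener distances then fix the beacon position. A toy sketch follows; the coordinates, timings, and least-squares linearization are our illustrative choices, not the CAA's implementation.

    import numpy as np

    C_SOUND = 343.0  # m/s; radio propagation treated as instantaneous

    def beacon_distance(t_radio, t_ultra):
        return C_SOUND * (t_ultra - t_radio)

    def locate(listeners, dists):
        """Least-squares beacon position from >= 3 listener distances
        (linearized by subtracting the first sphere equation from the rest)."""
        L, d = np.asarray(listeners, float), np.asarray(dists, float)
        A = 2 * (L[1:] - L[0])
        b = (d[0]**2 - d[1:]**2) + np.sum(L[1:]**2 - L[0]**2, axis=1)
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    listeners = [(0, 0, 2.5), (3, 0, 2.5), (0, 4, 2.5), (3, 4, 2.5)]  # on the ceiling
    dists = [beacon_distance(0.0, t) for t in (0.0080, 0.0071, 0.0090, 0.0083)]
    print(locate(listeners, dists))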
Counting-loss correction for X-ray spectroscopy using unit impulse pulse shaping.
Hong, Xu; Zhou, Jianbin; Ni, Shijun; Ma, Yingjie; Yao, Jianfeng; Zhou, Wei; Liu, Yi; Wang, Min
2018-03-01
High-precision measurement of X-ray spectra is affected by the statistical fluctuation of the X-ray beam under low-counting-rate conditions. It is also limited by counting loss resulting from the dead-time of the system and pulse pile-up effects, especially in a high-counting-rate environment. In this paper a detection system based on a FAST-SDD detector and a new kind of unit impulse pulse-shaping method is presented for counting-loss correction in X-ray spectroscopy. The unit impulse pulse shape is derived by inverse deviation of the pulse from a reset-type preamplifier and a C-R shaper. It is applied to obtain the true incoming rate of the system based on a general fast-slow channel processing model. The pulses in the fast channel are shaped to a unit impulse pulse shape, which has a small width and no undershoot. The counting rate in the fast channel is corrected by evaluating the dead-time of the fast channel before it is used to correct the counting loss in the slow channel.
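The dead-time algebra underlying any such correction is standard and worth stating: for a non-paralyzable system with dead time tau, a measured rate m corresponds to a true rate n = m/(1 - m*tau). The snippet below sketches both classic models; it is background arithmetic, not the paper's fast-channel method.

    import math

    def true_rate_nonparalyzable(m, tau):
        """Invert m = n / (1 + n*tau): true incoming rate from measured rate."""
        return m / (1.0 - m * tau)

    def measured_rate_paralyzable(n, tau):
        """Paralyzable model m = n * exp(-n*tau); no closed-form inverse."""
        return n * math.exp(-n * tau)

    # e.g. 5e5 counts/s measured with tau = 1 microsecond -> ~1e6 true counts/s
    print(true_rate_nonparalyzable(5e5, 1e-6))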
VerifEYE: a real-time meat inspection system for the beef processing industry
NASA Astrophysics Data System (ADS)
Kocak, Donna M.; Caimi, Frank M.; Flick, Rick L.; Elharti, Abdelmoula
2003-02-01
Described is a real-time meat inspection system developed for the beef processing industry by eMerge Interactive. Designed to detect and localize trace amounts of contamination on cattle carcasses in the packing process, the system affords the beef industry an accurate, high-speed, passive optical method of inspection. Using a method patented by the United States Department of Agriculture and Iowa State University, the system takes advantage of fluorescing chlorophyll found in the animal's diet and therefore the digestive tract to allow detection and imaging of contaminated areas that may harbor potentially dangerous microbial pathogens. Featuring real-time image processing and documentation of performance, the system can be easily integrated into a processing facility's Hazard Analysis and Critical Control Point quality assurance program. This paper describes the VerifEYE carcass inspection and removal verification system. Results indicating the feasibility of the method, as well as field data collected using a prototype system during four university trials conducted in 2001, are presented. Two successful demonstrations using the prototype system were held at a major U.S. meat processing facility in early 2002.
Design of an MR image processing module on an FPGA chip.
Li, Limin; Wyrwicz, Alice M
2015-06-01
We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases portability of the core. Direct matrix transposition usually required for execution of a 2D FFT is completely avoided using our newly designed address generation unit, which saves considerable on-chip block RAM and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128×128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments.
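The transpose-free 2D FFT deserves a concrete illustration: a row-column FFT normally writes row results, transposes the matrix, and runs rows again, but if the address generation unit feeds the second pass column-strided addresses, the transposition never materializes. A NumPy sketch of the index arithmetic follows; on the FPGA this is done with counters and block RAM, not arrays.

    import numpy as np

    def fft2_no_transpose(img):
        """Row-column 2D FFT in which the second pass reads column-strided
        addresses instead of physically transposing the intermediate matrix."""
        n_rows, n_cols = img.shape
        flat = img.astype(complex).reshape(-1)
        buf = np.empty_like(flat)
        # Pass 1: row FFTs; row r lives at addresses r*n_cols .. r*n_cols+n_cols-1
        for r in range(n_rows):
            buf[r*n_cols:(r+1)*n_cols] = np.fft.fft(flat[r*n_cols:(r+1)*n_cols])
        # Pass 2: column FFTs via strided addresses c, c+n_cols, c+2*n_cols, ...
        out = np.empty_like(buf)
        for c in range(n_cols):
            addr = np.arange(c, n_rows*n_cols, n_cols)  # the address generator's job
            out[addr] = np.fft.fft(buf[addr])
        return out.reshape(n_rows, n_cols)

    a = np.random.rand(8, 8)
    print(np.allclose(fft2_no_transpose(a), np.fft.fft2(a)))  # True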
Serrano, Katherine; Levin, Elena; Culibrk, Brankica; Weiss, Sandra; Scammell, Ken; Boecker, Wolfgang F; Devine, Dana V
2010-01-01
BACKGROUND: In high-volume processing environments, manual breakage of in-line closures can result in repetitive strain injury (RSI). Furthermore, these closures may be incorrectly opened, causing shear-induced hemolysis. To overcome the variability of in-line closure use and minimize RSI, Fresenius Kabi developed a new in-line closure, the CompoFlow, with mechanical openers. STUDY DESIGN AND METHODS: The consistency of the performance of the CompoFlow closure device was assessed, as was its effect on component quality. A total of 188 RBC units using CompoFlow blood bag systems and 43 using the standard bag systems were produced using the buffy coat manufacturing method. Twenty-six CompoFlow platelet (PLT) concentrates and 10 control concentrates were prepared from pools of four buffy coats. RBCs were assessed on Days 1, 21, and 42 for cellular variables and hemolysis. PLTs were assessed on Days 1, 3, and 7 for morphology, CD62P expression, glucose, lactate, and pH. A total of 308 closures were excised after processing and the apertures were measured using digital image analysis. RESULTS: The use of the CompoFlow device significantly improved the mean extraction time: 0.46 ± 0.11 sec/mL for the CompoFlow units versus 0.52 ± 0.13 sec/mL for the control units. The CompoFlow closures showed a highly reproducible aperture after opening (coefficient of variation, 15%) and the device always remained open. PLT and RBC products showed acceptable storage variables with no differences between CompoFlow and control. CONCLUSIONS: The CompoFlow closure devices improved the level of process control and the processing time of blood component production with no negative effects on product quality. PMID:20529007
Safety organizing, emotional exhaustion, and turnover in hospital nursing units.
Vogus, Timothy J; Cooil, Bruce; Sitterding, Mary; Everett, Linda Q
2014-10-01
Prior research has found that safety organizing behaviors of registered nurses (RNs) positively impact patient safety. However, little research exists on how engaging in safety organizing affects caregivers themselves. While we know that organizational processes can have divergent effects on organizational and employee outcomes, little is known about how pursuing highly reliable performance through safety organizing affects caregivers. Specifically, we examined whether, and under which conditions, safety organizing affects RN emotional exhaustion and nursing unit turnover rates. Subjects included 1352 RNs in 50 intensive care, internal medicine, labor, and surgery nursing units in 3 Midwestern acute-care hospitals, who completed questionnaires between August and December 2011, and 50 nurse managers from those units, who completed questionnaires in December 2012. RN emotional exhaustion was analyzed cross-sectionally against survey data on safety organizing and hospital incident-reporting-system data on adverse event rates for the year before survey administration; unit-level RN turnover rates for the year following survey administration were analyzed against survey data on safety organizing. Multilevel regression analysis indicated that safety organizing was negatively associated with RN emotional exhaustion on units with higher rates of adverse events and positively associated with RN emotional exhaustion on units with lower rates of adverse events. Tobit regression analyses indicated that safety organizing was associated with lower unit-level turnover rates over time. Safety organizing is beneficial to caregivers in multiple ways, especially on nursing units with high levels of adverse events and over time.
THOR Fields and Wave Processor - FWP
NASA Astrophysics Data System (ADS)
Soucek, Jan; Rothkaehl, Hanna; Ahlen, Lennart; Balikhin, Michael; Carr, Christopher; Dekkali, Moustapha; Khotyaintsev, Yuri; Lan, Radek; Magnes, Werner; Morawski, Marek; Nakamura, Rumi; Uhlir, Ludek; Yearby, Keith; Winkler, Marek; Zaslavsky, Arnaud
2017-04-01
If selected, Turbulence Heating ObserveR (THOR) will become the first spacecraft mission dedicated to the study of plasma turbulence. The Fields and Waves Processor (FWP) is an integrated electronics unit for all electromagnetic field measurements performed by THOR. FWP will interface with all THOR fields sensors: the electric field antennas of the EFI instrument, the MAG fluxgate magnetometer, and the search-coil magnetometer (SCM), and will perform signal digitization and on-board data processing. The FWP box will house multiple data acquisition sub-units and signal analyzers, all sharing a common power supply and data processing unit and thus a single data and power interface to the spacecraft. Integrating all the electromagnetic field measurements in a single unit will improve the consistency of field measurements and the accuracy of time synchronization. The scientific value of highly sensitive electric and magnetic field measurements in space has been demonstrated by Cluster (among other spacecraft), and THOR instrumentation will further improve on this heritage. The large dynamic range of the instruments will be complemented by a thorough electromagnetic cleanliness program, which will prevent perturbation of field measurements by interference from payload and platform subsystems. Taking advantage of the capabilities of modern electronics and the large telemetry bandwidth of THOR, FWP will provide multi-component electromagnetic field waveforms and spectral data products at a high time resolution. Fully synchronized sampling of many signals will make it possible to resolve wave phase information and estimate wavelengths via interferometric correlations between EFI probes. FWP will also implement a plasma resonance sounder and a digital plasma quasi-thermal noise analyzer designed to provide high-cadence measurements of plasma density and temperature complementary to data from the particle instruments. FWP will rapidly transmit information about the magnetic field vector and spacecraft potential to the particle instrument data processing unit (PPU) via a dedicated digital link. This information will help the particle instruments to optimize energy and angular sweeps and to calculate on-board moments. FWP will also coordinate the acquisition of high-resolution waveform snapshots with very high time resolution electron data from the TEA instrument. This combined wave/particle measurement will provide the ultimate dataset for investigation of wave-particle interactions on electron scales. The FWP instrument shall be designed and built by an international consortium of scientific institutes from the Czech Republic, Poland, France, the UK, Sweden and Austria.
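To make the interferometric idea concrete: for a single plane wave seen by two probes a known baseline d apart, the cross-spectral phase at the wave frequency equals k*d, so the wavelength follows directly. A toy sketch with synthetic signals follows; the numbers are invented, and real data additionally need windowing, averaging, and attention to the 2*pi phase ambiguity.

    import numpy as np

    fs, d = 8192.0, 50.0           # sample rate [Hz], probe separation [m] (made up)
    t = np.arange(4096) / fs
    f0, k0 = 300.0, 2*np.pi/400    # wave at 300 Hz with a 400 m wavelength
    s1 = np.sin(2*np.pi*f0*t)
    s2 = np.sin(2*np.pi*f0*t - k0*d)           # same wave seen d metres downstream

    S1, S2 = np.fft.rfft(s1), np.fft.rfft(s2)
    bin0 = int(round(f0 * len(t) / fs))
    dphi = np.angle(S1[bin0] * np.conj(S2[bin0]))  # cross-spectral phase at f0
    print(2*np.pi * d / dphi)                      # ~400 m recovered wavelength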
Miura, Y; Perkel, V S; Magner, J A
1988-09-01
We have determined the structures of high-mannose (Man) oligosaccharide units at individual glycosylation sites of mouse TSH. Mouse thyrotropic tumor tissue was incubated with D-[2-3H]Man with or without [14C]tyrosine ([14C]Tyr) for 2, 3, or 6 h, and for a 3-h pulse followed by a 2-h chase. TSH heterodimers or free alpha-subunits were obtained from homogenates using specific antisera. After reduction and alkylation, subunits were treated with trypsin. The tryptic fragments were then loaded on a reverse-phase HPLC column to separate tryptic fragments bearing labeled oligosaccharides. The N-linked oligosaccharides were released with endoglycosidase-H and analyzed by paper chromatography. Man9GlcNAc2 and Man8GlcNAc2 units predominated at each time point and at each specific glycosylation site, but the processing of high-Man oligosaccharides differed at each glycosylation site. The processing at Asn23 of TSH beta-subunits was slower than that at Asn56 or Asn82 of alpha-subunits. The processing at Asn82 was slightly faster than that at Asn56 for both alpha-subunits of TSH heterodimers and free alpha-subunits. The present study demonstrates that the early processing of oligosaccharides differs at the individual glycosylation sites of TSH and free alpha-subunits, perhaps because of local conformational differences.
NASA Astrophysics Data System (ADS)
Lee, K. David; Colony, Mike
2011-06-01
Modeling and simulation has been established as a cost-effective means of supporting the development of requirements, exploring doctrinal alternatives, assessing system performance, and performing design trade-off analysis. The Army's constructive simulation for the evaluation of equipment effectiveness in small combat unit operations is currently limited to a representation of situation awareness that excludes the many uncertainties associated with real-world combat environments. The goal of this research is to provide an ability to model situation awareness and decision process uncertainties in order to improve evaluation of the impact of battlefield equipment on ground soldier and small combat unit decision processes. Our Army Probabilistic Inference and Decision Engine (Army-PRIDE) system provides this required uncertainty modeling through two critical techniques that allow Bayesian network technology to be applied to real-time applications: an object-oriented Bayesian network methodology and an object-oriented inference technique. In this research, we implement decision process and situation awareness models for a reference scenario using Army-PRIDE and demonstrate its ability to model a variety of uncertainty elements, including confidence of source, information completeness, and information loss. We also demonstrate that Army-PRIDE improves the realism of the current constructive simulation's decision processes through Monte Carlo simulation.
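Army-PRIDE's models are not published in the abstract, so as a minimal, library-free illustration of the inference step such a system performs, here is a three-node Bayesian network evaluated by enumeration; the variables and probabilities are invented for the example.

    from itertools import product

    # P(source_good), P(complete), and P(report_correct | source_good, complete)
    p_source = {True: 0.8, False: 0.2}
    p_complete = {True: 0.6, False: 0.4}
    p_correct = {(True, True): 0.95, (True, False): 0.7,
                 (False, True): 0.5,  (False, False): 0.1}

    def posterior_source_good(report_correct: bool) -> float:
        """P(source_good | report_correct) by brute-force enumeration."""
        num = den = 0.0
        for s, c in product([True, False], repeat=2):
            pr = p_correct[(s, c)] if report_correct else 1 - p_correct[(s, c)]
            joint = p_source[s] * p_complete[c] * pr
            den += joint
            if s:
                num += joint
        return num / den

    print(posterior_source_good(True))  # belief in the source after a correct report

Object-oriented Bayesian networks address the scaling problem this brute-force version would hit: repeated network fragments are instantiated as reusable objects so that inference remains tractable in real time.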
A unified method for evaluating real-time computer controllers: A case study. [aircraft control
NASA Technical Reports Server (NTRS)
Shin, K. G.; Krishna, C. M.; Lee, Y. H.
1982-01-01
A real-time control system consists of a synergistic pair: a controlled process and a controller computer. Performance measures for real-time controller computers are defined on the basis of the nature of this synergistic pair. A case study of a typical critical controlled process is presented in the context of new performance measures that express the performance of both controlled processes and real-time controllers (taken as a unit) on the basis of a single variable: controller response time. Controller response time is a function of current system state, system failure rate, electrical and/or magnetic interference, etc., and is therefore a random variable. Control overhead is expressed as a monotonically nondecreasing function of the response time, and the system suffers catastrophic failure, or dynamic failure, if the response time for a control task exceeds the corresponding system hard deadline, if any. A rigorous probabilistic approach is used to estimate the performance measures. The controlled process chosen for study is an aircraft in the final stages of descent, just prior to landing. First, the performance measures for the controller are presented. Second, control algorithms for solving the landing problem are discussed, and finally the impact of the performance measures on the problem is analyzed.
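In symbols (our notation, added as a sketch of the measures the abstract describes): with response time R a random variable, overhead function g monotonically nondecreasing, and hard deadline t_d,

    C = \mathbb{E}\!\left[\, g(R)\,\mathbf{1}\{R \le t_d\} \,\right], \qquad p_{\mathrm{dyn}} = \Pr\left(R > t_d\right)

so the expected control cost C and the probability of dynamic failure p_dyn are both functionals of the distribution of R, which is what the rigorous probabilistic estimation targets.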
Use of high performance networks and supercomputers for real-time flight simulation
NASA Technical Reports Server (NTRS)
Cleveland, Jeff I., II
1993-01-01
In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.
Nonlinear Real-Time Optical Signal Processing.
1988-07-01
Principal Investigator: B. K. Jenkins, Signal and Image Processing Institute, University of Southern California, Los Angeles, California. Summary: During the period 1 July 1987 - 30 June 1988, the
Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit
NASA Astrophysics Data System (ADS)
Vittaldev, Vivek; Russell, Ryan P.
2017-09-01
Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but most computationally intensive method. A graphics processing unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in the literature as the benchmark; an optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used, and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge-Kutta step ensures that no close approaches are missed. Speedups of two orders of magnitude over a serial CPU implementation are shown, and speedups improve moderately with higher-fidelity dynamics. The tool makes the MC approach tractable on a single workstation and can be used as a final product or for verifying surrogate and analytical collision probability methods.
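The MC estimator the abstract describes is easy to state: draw relative states from the two objects' uncertainty distributions, propagate over the window, and count trials whose closest approach falls below the combined radius. A toy, CPU-only version with straight-line relative motion stands in here for the paper's full dynamics and GPU parallelism; all numbers are invented.

    import numpy as np

    rng = np.random.default_rng(0)

    def collision_probability(mu_r, cov_r, mu_v, cov_v, radius, t_max, n=200_000):
        """P(min separation < radius) for linear relative motion r(t) = r0 + v0*t."""
        r0 = rng.multivariate_normal(mu_r, cov_r, n)
        v0 = rng.multivariate_normal(mu_v, cov_v, n)
        # closest approach of each sampled line to the origin, clipped to the window
        t_star = np.clip(-np.sum(r0*v0, axis=1) / np.sum(v0*v0, axis=1), 0.0, t_max)
        d_min = np.linalg.norm(r0 + v0 * t_star[:, None], axis=1)
        return np.mean(d_min < radius)

    p = collision_probability(mu_r=[200.0, 0.0, 0.0], cov_r=np.eye(3) * 100.0,
                              mu_v=[-1.0, 0.0, 0.0], cov_v=np.eye(3) * 1e-6,
                              radius=20.0, t_max=400.0)
    print(p)   # fraction of samples whose closest approach falls inside the radius

Because every sample is independent, the estimator maps one thread per trial onto a GPU with essentially no synchronization, which is the source of the reported speedups.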
Investigation of multidimensional control systems in the state space and wavelet medium
NASA Astrophysics Data System (ADS)
Fedosenkov, D. B.; Simikova, A. A.; Fedosenkov, B. A.
2018-05-01
The notions of “one-dimensional-point” and “multidimensional-point” automatic control systems are introduced. To demonstrate the joint use of approaches based on the concepts of state space and wavelet transforms, a method for optimal control in a state-space medium represented in the form of time-frequency representations (maps) is considered. The computer-aided control system is formed on the basis of the similarity transformation method, which makes it possible to exclude the use of reduced state variable observers. 1D material-flow signals formed by primary transducers are converted by means of wavelet transformations into multidimensional concentrated-at-a-point variables in the form of time-frequency distributions of Cohen's class. An algorithm for synthesizing a stationary controller for feeding processes is given. It is concluded that forming an optimal control law with time-frequency distributions available improves the quality of transient processes in the feeding subsystems and the mixing unit. The efficiency of the presented method is illustrated by an example of the on-line registration of material flows in the multi-feeding unit.
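As a sketch of the representation step only (not the controller synthesis): a 1D material-flow signal becomes a time-frequency map. The paper uses Cohen's-class distributions; the short-time Fourier spectrogram below is the simplest member of that class and keeps the illustration compact, with all signal parameters invented.

    import numpy as np

    def spectrogram(x, win=256, hop=64):
        """Magnitude-squared STFT: rows are time frames, columns frequency bins."""
        w = np.hanning(win)
        frames = np.array([x[i:i+win] * w for i in range(0, len(x) - win, hop)])
        return np.abs(np.fft.rfft(frames, axis=1))**2

    fs = 1000.0
    t = np.arange(0, 4, 1/fs)
    flow = np.sin(2*np.pi*3*t) + 0.5*np.sin(2*np.pi*(10 + 2*t)*t)  # drifting component
    tf_map = spectrogram(flow)   # a "multidimensional-point" view of the 1D signal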
Application of Magnetic Nanoparticles in Pretreatment Device for POPs Analysis in Water
NASA Astrophysics Data System (ADS)
Chu, Dongzhi; Kong, Xiangfeng; Wu, Bingwei; Fan, Pingping; Cao, Xuan; Zhang, Ting
2018-01-01
In order to reduce process time and labour force of POPs pretreatment, and solve the problem that extraction column was easily clogged, the paper proposed a new technology of extraction and enrichment which used magnetic nanoparticles. Automatic pretreatment system had automatic sampling unit, extraction enrichment unit and elution enrichment unit. The paper briefly introduced the preparation technology of magnetic nanoparticles, and detailly introduced the structure and control system of automatic pretreatment system. The result of magnetic nanoparticles mass recovery experiments showed that the system had POPs analysis preprocessing capability, and the recovery rate of magnetic nanoparticles were over 70%. In conclusion, the author proposed three points optimization recommendation.
Phipps, Lorri M; Bartke, Cheryl N; Spear, Debra A; Jones, Linda F; Foerster, Carolyn P; Killian, Marie E; Hughes, Jennifer R; Hess, Joseph C; Johnson, David R; Thomas, Neal J
2007-05-01
There is a paucity of literature evaluating the effects of family member presence during bedside medical rounds in the pediatric intensive care unit. We hypothesized that, when compared with rounds without family members, parental presence during morning medical rounds would increase time spent on rounds, decrease medical team teaching/education, increase staff dissatisfaction, create more stress in family members, and violate patient privacy in our open unit. Prospective, blinded, observational study. Academic pediatric intensive care unit with 12 beds. A total of 105 admissions were studied, 81 family members completed a survey, and 187 medical team staff surveys were completed. Investigators documented parental presence and time allocated for presentation, teaching, and answering questions. Surveys related to perception of goals, teaching, and privacy of rounds were distributed to participants. Time spent on rounds, time spent teaching on rounds, and medical staff and family perception of the effects of parental presence on rounds. There was no significant difference between time spent on rounds in the presence or absence of family members (p = NS). There is no significant difference between the time spent teaching by the attending physician in the presence or absence of family members (p = NS). Overall, parents reported that the medical team spent an appropriate amount of time discussing their child and were not upset by this discussion. Parents did not perceive that their own or their child's privacy was violated during rounds. The majority of medical team members reported that the presence of family on rounds was beneficial. Parental presence on rounds does not seem to interfere with the educational and communication process. Parents report satisfaction with participation in rounds, and privacy violations do not seem to be a concern from their perspective.