Sample records for fast computation speed

  1. Computer ray tracing speeds.

    PubMed

    Robb, P; Pawlowski, B

    1990-05-01

    The results of measuring the ray trace speed and compilation speed of thirty-nine computers in fifty-seven configurations, ranging from personal computers to super computers, are described. A correlation of ray trace speed has been made with the LINPACK benchmark which allows the ray trace speed to be estimated using LINPACK performance data. The results indicate that the latest generation of workstations, using CPUs based on RISC (Reduced Instruction Set Computer) technology, are as fast or faster than mainframe computers in compute-bound situations.
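The correlation described above amounts to a simple linear fit of ray-trace speed against LINPACK score. A minimal sketch, with entirely illustrative benchmark numbers (the paper's actual data and fit coefficients are not reproduced here):

```python
import numpy as np

# Hypothetical benchmark table: LINPACK score (Mflops) vs. measured
# ray-trace speed (rays/s) for a handful of machines. Values are
# illustrative only, not the paper's data.
linpack = np.array([0.5, 2.0, 5.0, 12.0, 25.0])
raytrace = np.array([40.0, 155.0, 410.0, 950.0, 2000.0])

# Least-squares line raytrace ~ a*linpack + b
a, b = np.polyfit(linpack, raytrace, 1)

def estimate_raytrace(linpack_mflops):
    """Estimate a machine's ray-trace speed from its LINPACK score."""
    return a * linpack_mflops + b
```

Once fitted, the line lets ray-trace speed be predicted for any machine with a published LINPACK figure, which is the use the abstract describes.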

  2. High-Performance Computing for the Electromagnetic Modeling and Simulation of Interconnects

    NASA Technical Reports Server (NTRS)

    Schutt-Aine, Jose E.

    1996-01-01

    The electromagnetic modeling of packages and interconnects plays a very important role in the design of high-speed digital circuits, and is most efficiently performed by using computer-aided design algorithms. In recent years, packaging has become a critical area in the design of high-speed communication systems and fast computers, and the importance of the software support for their development has increased accordingly. Throughout this project, our efforts have focused on the development of modeling and simulation techniques and algorithms that permit the fast computation of the electrical parameters of interconnects and the efficient simulation of their electrical performance.

  3. Combining Acceleration Techniques for Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction.

    PubMed

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2017-01-01

    Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. First, total difference minimization (TDM) was implemented using the soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm for accelerating the convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results obtained from simulation and phantom studies showed that many speed-up techniques could be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increased computation time (≤10%) was minor as compared to the acceleration provided by the proposed method. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
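The building blocks named in the abstract, soft-threshold (shrinkage) filtering and the fast iterative shrinkage thresholding algorithm, can be sketched generically. This is a toy sparse least-squares example, not the paper's TDM-STF/OSTR reconstruction; all names and values are illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-threshold (shrinkage) operator: shrink each component of x
    toward zero by t, zeroing anything with magnitude below t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=300):
    """Minimal FISTA sketch for min 0.5*||Ax - b||^2 + lam*||x||_1,
    showing how the momentum step accelerates plain shrinkage."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)           # gradient of the data term at y
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```

In the paper these pieces are combined with ordered subsets and a power factor for CT reconstruction; the sketch only shows why shrinkage plus momentum converges faster than shrinkage alone.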

  4. A methodology for achieving high-speed rates for artificial conductance injection in electrically excitable biological cells.

    PubMed

    Butera, R J; Wilson, C G; Delnegro, C A; Smith, J C

    2001-12-01

We present a novel approach to implementing the dynamic-clamp protocol (Sharp et al., 1993), commonly used in neurophysiology and cardiac electrophysiology experiments. Our approach is based on real-time extensions to the Linux operating system. Conventional PC-based approaches have typically utilized single-cycle computational rates of 10 kHz or slower. In this paper, we demonstrate reliable cycle-to-cycle rates as fast as 50 kHz. Our system, which we call model reference current injection (MRCI, pronounced "merci"), is also capable of episodic logging of internal state variables and interactive manipulation of model parameters. The limiting factor in achieving high speeds was not processor speed or model complexity, but cycle jitter inherent in CPU/motherboard performance. We demonstrate these high speeds and flexibility with two examples: 1) adding action-potential ionic currents to a mammalian neuron under whole-cell patch clamp and 2) altering a cell's intrinsic dynamics via MRCI while simultaneously coupling it via artificial synapses to an internal computational model cell. These higher rates greatly extend the applicability of this technique to the study of fast electrophysiological currents such as fast A-currents and fast excitatory/inhibitory synapses.
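The per-cycle arithmetic of conductance injection is simple; the engineering challenge the abstract describes is running it at a fixed 50 kHz rate with low jitter. A minimal sketch of the computation done each cycle (names and units are illustrative, not the MRCI API):

```python
# One dynamic-clamp cycle: read the membrane voltage, compute the current
# to inject for an artificial conductance g with reversal potential e_rev.
# A real system (such as MRCI) runs this under a real-time OS at a fixed
# rate, e.g. 50 kHz = one 20-microsecond cycle per sample.

def dynamic_clamp_step(v_measured_mV, g_nS, e_rev_mV):
    """Ohmic artificial conductance: I = g * (V - E_rev).
    With g in nS and V in mV, the result is in pA."""
    return g_nS * (v_measured_mV - e_rev_mV)
```

A model cell or artificial synapse adds state-variable updates (gating kinetics) before this line, but the hard real-time loop structure is the same.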

  5. Networking via wireless bridge produces greater speed and flexibility, lowers cost.

    PubMed

    1998-10-01

    Wireless computer networking. Computer connectivity is essential in today's high-tech health care industry. But telephone lines aren't fast enough, and high-speed connections like T-1 lines are costly. Read about an Ohio community hospital that installed a wireless network "bridge" to connect buildings that are miles apart, creating a reliable high-speed link that costs one-tenth of a T-1 line.

  6. Effects of different computer typing speeds on acceleration and peak contact pressure of the fingertips during computer typing.

    PubMed

    Yoo, Won-Gyu

    2015-01-01

[Purpose] This study examined the effects of different computer typing speeds on acceleration and peak contact pressure of the fingertips during computer typing. [Subjects] Twenty-one male computer workers voluntarily consented to participate in this study: 7 workers who could type 200-300 characters/minute, 7 who could type 300-400 characters/minute, and 7 who could type 400-500 characters/minute. [Methods] The acceleration and peak contact pressure of the fingertips were measured for the different typing speed groups using an accelerometer and the CONFORMat system. [Results] Fingertip contact pressure was higher in the high typing speed group than in the low and medium typing speed groups. Fingertip acceleration was also higher in the high typing speed group than in the low and medium typing speed groups. [Conclusion] The results of the present study indicate that a fast typing speed causes continuous pressure stress to be applied to the fingers, thereby creating pain in the fingers.

  7. Demonstration of optical computing logics based on binary decision diagram.

    PubMed

    Lin, Shiyun; Ishikawa, Yasuhiko; Wada, Kazumi

    2012-01-16

Optical circuits are low-power, high-speed alternatives to current transistor-based information processing. However, because optics offers no direct analog of the transistor, an architecture suited to optics must be chosen. One such architecture is the binary decision diagram (BDD), in which a signal is processed by sending light from the root through a series of switching nodes to a leaf (terminal). The speed of optical computing is limited by either the transmission time of optical signals from root to leaf or the switching time of a node. We have designed and experimentally demonstrated 1-bit and 2-bit adders based on the BDD architecture. The switching nodes are silicon ring resonators with a modulation depth of 10 dB whose states are changed by the plasma dispersion effect. The quality factor Q of the rings is 1500, which allows fast transmission of signals, e.g., 1.3 ps as calculated from the photon escape time. The total processing time is thus analyzed to be ~9 ps for a 2-bit adder and scales linearly with the number of bits. This is two orders of magnitude faster than conventional CMOS circuitry, with its ~ns-scale delays. The presented results show the potential of high-speed optical computing circuits.
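The BDD evaluation model the abstract describes, routing a signal from the root through switching nodes to a 0/1 terminal, can be sketched in software. This is an illustration of the architecture only, not the optical device; the node encoding (variable, low-child, high-child) is my own:

```python
# A BDD node is a tuple (variable_name, low_child, high_child); a terminal
# leaf is the integer 0 or 1. Evaluation follows one root-to-leaf path,
# which is why processing time scales with diagram depth, not gate count.

def make_xor_bdd():
    """BDD for the sum bit of a half adder: sum = a XOR b."""
    return ('a',
            ('b', 0, 1),   # a = 0: output equals b
            ('b', 1, 0))   # a = 1: output equals NOT b

def bdd_eval(node, assignment):
    """Route the 'signal' from the root to a terminal leaf."""
    while not isinstance(node, int):
        var, low, high = node
        node = high if assignment[var] else low
    return node
```

In the optical implementation each node of such a diagram is a silicon ring resonator that steers the light toward one of its two children.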

  8. A fast two-plus-one phase-shifting algorithm for high-speed three-dimensional shape measurement system

    NASA Astrophysics Data System (ADS)

    Wang, Wenyun; Guo, Yingfu

    2008-12-01

    Phase-shifting methods for 3-D shape measurement have long been employed in optical metrology for their speed and accuracy. For real-time, accurate, 3-D shape measurement, a four-step phase-shifting algorithm which has the advantage of its symmetry is a good choice; however, its measurement error is sensitive to any fringe image errors caused by various sources such as motion blur. To alleviate this problem, a fast two-plus-one phase-shifting algorithm is proposed in this paper. This kind of technology will benefit many applications such as medical imaging, gaming, animation, computer vision, computer graphics, etc.
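The phase-recovery step of a two-plus-one scheme can be sketched under an assumed intensity model, two phase-shifted fringe images plus one flat image. The exact formulation in the paper may differ; this illustrates only the idea that three captures suffice to solve for the wrapped phase:

```python
import numpy as np

# Assumed 2+1 intensity model (illustrative, not necessarily the paper's):
#   I1 = I' + I''*sin(phi),  I2 = I' + I''*cos(phi),  I3 = I'  (flat image)
# Subtracting the flat image cancels the background I', leaving a ratio
# whose arctangent is the wrapped phase.

def two_plus_one_phase(i1, i2, i3):
    """Recover the wrapped phase in (-pi, pi] per pixel."""
    return np.arctan2(i1 - i3, i2 - i3)
```

Because only two images carry fringes, fewer captures are affected by motion between frames, which is the motivation the abstract gives for this family of algorithms.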

  9. Fast Data Acquisition For Mass Spectrometer

    NASA Technical Reports Server (NTRS)

    Lincoln, K. A.; Bechtel, R. D.

    1988-01-01

    New equipment has speed and capacity to process time-of-flight data. System relies on fast, compact waveform digitizer with 32-k memory coupled to personal computer. With digitizer, system captures all mass peaks on each 25- to 35-microseconds cycle of spectrometer.

  10. Fast and Epsilon-Optimal Discretized Pursuit Learning Automata.

    PubMed

    Zhang, JunQi; Wang, Cheng; Zhou, MengChu

    2015-10-01

Learning automata (LA) are powerful tools for reinforcement learning, and the discretized pursuit LA is the most popular among them. During an iteration, its operation consists of three basic phases: 1) selecting the next action; 2) finding the optimal estimated action; and 3) updating the state probability. However, when the number of actions is large, learning becomes extremely slow because too many updates must be made at each iteration; the increased updates come mostly from phases 1 and 3. A new fast discretized pursuit LA with assured ε-optimality is proposed that performs both phases 1 and 3 with computational complexity independent of the number of actions. Apart from its low computational complexity, it achieves faster convergence than the classical algorithm when operating in stationary environments. This work can promote the application of LA to large-scale-action domains that require efficient reinforcement learning tools with assured ε-optimality, fast convergence, and low computational complexity per iteration.
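The three phases of the classical (baseline) discretized pursuit iteration can be sketched as follows. Note the phase-3 loop below touches every action, the O(n) cost per iteration that the proposed method removes; the update rule details and variable names here are a simplified illustration, not the paper's algorithm:

```python
import random

def dpla_step(probs, estimates, delta, env_reward):
    """One iteration of a classical discretized pursuit LA (sketch).
    probs: action probability vector (mutated in place);
    estimates: running reward estimate per action (mutated in place);
    delta: discretization step; env_reward(a) -> True if action a rewarded."""
    # Phase 1: sample the next action from the probability vector
    a = random.choices(range(len(probs)), weights=probs)[0]
    rewarded = env_reward(a)
    estimates[a] = 0.9 * estimates[a] + 0.1 * (1.0 if rewarded else 0.0)
    # Phase 2: find the current best estimated action
    best = max(range(len(probs)), key=lambda i: estimates[i])
    # Phase 3: pursue it -- shift probability mass toward the best action.
    # This loop is the O(n) per-iteration cost in the classical scheme.
    for i in range(len(probs)):
        if i != best:
            probs[i] = max(probs[i] - delta, 0.0)
    probs[best] = 1.0 - sum(p for i, p in enumerate(probs) if i != best)
    return a, best
```

In a stationary environment with one clearly best action, repeated calls drive that action's probability toward 1.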

  11. Distinct mechanisms explain the control of reach speed planning: evidence from a race model framework.

    PubMed

    Venkataratamani, Prasanna Venkhatesh; Murthy, Aditya

    2018-05-16

Previous studies have investigated the computational architecture underlying the voluntary control of reach movements that demand a change in the position or direction of movement planning. Here we used a novel task in which subjects had to either increase or decrease movement speed in response to a change in target color that occurred randomly during a trial. The applicability of different race models to this speed redirect task was assessed. We found that the predictions of an independent race model instantiating an abort-and-re-plan strategy were consistent with all aspects of performance in the fast-to-slow speed condition. The modeling results revealed a peculiar asymmetry: while the fast-to-slow speed change required inhibition, none of the standard race models could explain how movements changed from slow to fast speeds. Interestingly, a weighted averaging model that simulated the gradual merging of two kinematic plans explained behavior in the slow-to-fast speed task. In summary, our work shows how a race model framework can provide an understanding of how the brain controls different aspects of reach movement planning and can help distinguish an abort-and-re-plan strategy from a merging of plans.

  12. Fast neural net simulation with a DSP processor array.

    PubMed

    Muller, U A; Gunzinger, A; Guggenbuhl, W

    1995-01-01

This paper describes the implementation of a fast neural net simulator on a novel parallel distributed-memory computer. A 60-processor system, named MUSIC (multiprocessor system with intelligent communication), is operational and runs the backpropagation algorithm at a speed of 330 million connection updates per second (continuous weight update) using 32-b floating-point precision. This is equal to 1.4 Gflops sustained performance. The complete system, with 3.8 Gflops peak performance, consumes less than 800 W of electrical power and fits into a 19-in rack. While reaching the speed of modern supercomputers, MUSIC can still be used as a personal desktop computer at a researcher's own disposal. In neural net simulation, this gives a single user a level of computing performance that was previously unthinkable. The system's real-time interfaces make it especially useful for embedded applications.

  13. The Need for Optical Means as an Alternative for Electronic Computing

    NASA Technical Reports Server (NTRS)

    Adbeldayem, Hossin; Frazier, Donald; Witherow, William; Paley, Steve; Penn, Benjamin; Bank, Curtis; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

Demand for faster computers is growing rapidly to keep pace with the rapid growth of the Internet, space communication, and the robotics industry. Unfortunately, Very Large Scale Integration technology is approaching fundamental limits beyond which devices will be unreliable. Optical interconnections and optical integrated circuits are strongly believed to provide a way out of the extreme limitations that conventional electronics imposes on the growth in speed and complexity of present-day computation. This paper demonstrates two ultra-fast, all-optical logic gates and a high-density storage medium, which are essential components in building a future optical computer.

  14. Integrated Life-Cycle Framework for Maintenance, Monitoring and Reliability of Naval Ship Structures

    DTIC Science & Technology

    2012-08-15

…number of times, a fast and accurate method for analyzing the ship hull is required. In order to obtain this required computational speed and accuracy… Naval Engineers Fleet Maintenance & Modernization Symposium (FMMS 2011) [8] and the Eleventh International Conference on Fast Sea Transportation (FAST)… probabilistic strength of the ship hull. First, a novel deterministic method for the fast and accurate calculation of the strength of the ship hull is…

  15. Hybrid dual-Fourier tomographic algorithm for fast three-dimensional optical image reconstruction in turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor)

    2007-01-01

    A reconstruction technique for reducing computation burden in the 3D image processes, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to thereby provide high-speed inverse computations. The inverse algorithm uses a hybrid transfer to provide fast Fourier inversion for data of multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.

  16. Fast and accurate computation of system matrix for area integral model-based algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua

    2014-11-01

    Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate for better reconstruction quality than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection area into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computation of the system matrix. The reconstruction speed of our AIM-based ART is also faster than the LIM-based ART that uses the Siddon algorithm and DDM-based ART, for one iteration. The fast reconstruction speed of our method was accomplished without compromising the image quality.
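The ART iteration whose per-step cost is dominated by the system matrix can be sketched as a Kaczmarz-style sweep. This generic sketch is independent of how the matrix is built (line vs. area integral model); names and the relaxation scheme are illustrative:

```python
import numpy as np

def art(A, b, n_sweeps=200, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz sweeps), sketched.
    A: system matrix (rays x pixels); b: measured projections.
    Each inner step projects the estimate onto one ray's hyperplane."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)          # precomputed ||a_i||^2
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                r = b[i] - A[i] @ x          # residual for ray i
                x += relax * (r / row_norms[i]) * A[i]
    return x
```

Every inner step reads one row of the system matrix, so faster and more accurate computation of those rows (the paper's contribution) directly speeds up each iteration without changing this loop.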

  17. Breaking the Supermassive Black Hole Speed Limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smidt, Joseph

A new computer simulation helps explain the existence of puzzling supermassive black holes observed in the early universe. The simulation is based on a computer code used to understand the coupling of radiation and certain materials. "Supermassive black holes have a speed limit that governs how fast and how large they can grow," said Joseph Smidt of the Theoretical Design Division at Los Alamos National Laboratory. "The relatively recent discovery of supermassive black holes in the early development of the universe raised a fundamental question: how did they get so big so fast?" Using computer codes developed at Los Alamos for modeling the interaction of matter and radiation related to the Lab's stockpile stewardship mission, Smidt and colleagues created a simulation of collapsing stars that resulted in supermassive black holes forming in less time than expected, cosmologically speaking, in the first billion years of the universe.

  18. Fast distributed large-pixel-count hologram computation using a GPU cluster.

    PubMed

    Pan, Yuechao; Xu, Xuewu; Liang, Xinan

    2013-09-10

    Large-pixel-count holograms are one essential part for big size holographic three-dimensional (3D) display, but the generation of such holograms is computationally demanding. In order to address this issue, we have built a graphics processing unit (GPU) cluster with 32.5 Tflop/s computing power and implemented distributed hologram computation on it with speed improvement techniques, such as shared memory on GPU, GPU level adaptive load balancing, and node level load distribution. Using these speed improvement techniques on the GPU cluster, we have achieved 71.4 times computation speed increase for 186M-pixel holograms. Furthermore, we have used the approaches of diffraction limits and subdivision of holograms to overcome the GPU memory limit in computing large-pixel-count holograms. 745M-pixel and 1.80G-pixel holograms were computed in 343 and 3326 s, respectively, for more than 2 million object points with RGB colors. Color 3D objects with 1.02M points were successfully reconstructed from 186M-pixel hologram computed in 8.82 s with all the above three speed improvement techniques. It is shown that distributed hologram computation using a GPU cluster is a promising approach to increase the computation speed of large-pixel-count holograms for large size holographic display.
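The core workload being distributed, accumulating a complex field on the hologram plane from every object point, can be sketched on the CPU. This is an O(pixels × points) point-source superposition with illustrative geometry and wavelength, not the paper's GPU implementation:

```python
import numpy as np

def hologram(points, amps, xs, ys, wavelength=633e-9):
    """Accumulate the complex field from 3D object points on a 2D plane.
    points: list of (x, y, z) object points (meters); amps: amplitudes;
    xs, ys: hologram-plane sample coordinates (meters)."""
    k = 2 * np.pi / wavelength
    gx, gy = np.meshgrid(xs, ys)
    field = np.zeros(gx.shape, dtype=complex)
    for (px, py, pz), a in zip(points, amps):
        r = np.sqrt((gx - px) ** 2 + (gy - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r   # spherical wave from the point
    return field
```

With hundreds of millions of pixels and millions of points, this sum is exactly the kind of embarrassingly parallel workload that benefits from GPU-cluster distribution and the load-balancing techniques the abstract lists.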

  19. FastMag: Fast micromagnetic simulator for complex magnetic structures (invited)

    NASA Astrophysics Data System (ADS)

    Chang, R.; Li, S.; Lubarda, M. V.; Livshitz, B.; Lomakin, V.

    2011-04-01

A fast micromagnetic simulator (FastMag) for general problems is presented. FastMag solves the Landau-Lifshitz-Gilbert equation and can handle multiscale problems with a high computational efficiency. The simulator derives its high performance from efficient methods for evaluating the effective field and from implementations on massively parallel graphics processing unit (GPU) architectures. FastMag discretizes the computational domain into tetrahedral elements and therefore is highly flexible for general problems. The magnetostatic field is computed via the superposition principle for both volume and surface parts of the computational domain. This is accomplished by implementing efficient quadrature rules and analytical integration for overlapping elements in which the integral kernel is singular. Thus, discretized superposition integrals are computed using a nonuniform grid interpolation method, which evaluates the field from N sources at N collocated observers in O(N) operations. This approach allows handling objects of arbitrary shape, allows easy calculation of the field outside the magnetized domains, does not require solving a linear system of equations, and requires little memory. FastMag is implemented on GPUs, with speed-ups of two orders of magnitude over central processing units. Simulations are shown of a large array of magnetic dots and a recording head fully discretized down to the exchange length, with over a hundred million tetrahedral elements on an inexpensive desktop computer.

  20. Improved FastICA algorithm in fMRI data analysis using the sparsity property of the sources.

    PubMed

    Ge, Ruiyang; Wang, Yubao; Zhang, Jipeng; Yao, Li; Zhang, Hang; Long, Zhiying

    2016-04-01

As a blind source separation technique, independent component analysis (ICA) has many applications in functional magnetic resonance imaging (fMRI). Although either temporal or spatial prior information has been introduced into constrained ICA and semi-blind ICA methods to improve the performance of ICA in fMRI data analysis, certain types of additional prior information, such as sparsity, have seldom been added to ICA algorithms as constraints. In this study, we proposed a SparseFastICA method that adds source sparsity as a constraint to the FastICA algorithm to improve the performance of the widely used FastICA. The source sparsity is estimated through a smoothed ℓ0 norm method. We performed experimental tests on both simulated data and real fMRI data to investigate the feasibility and robustness of SparseFastICA and made a performance comparison between SparseFastICA, FastICA, and Infomax ICA. Results from the simulated and real fMRI data demonstrated the feasibility and robustness of SparseFastICA for source separation in fMRI data. Both the simulated and real fMRI experiments showed that SparseFastICA has better robustness to noise and better spatial detection power than FastICA. Although the spatial detection power of SparseFastICA and Infomax did not differ significantly, SparseFastICA had a faster computation speed than Infomax. More importantly, SparseFastICA outperformed FastICA in robustness and spatial detection power and can be used to identify more accurate brain networks than the FastICA algorithm.
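A common smoothed ℓ0 construction, which may be what the sparsity constraint here builds on, replaces the non-differentiable count of nonzeros with a sum of Gaussians. A minimal sketch (the σ schedule and exact functional in the paper are not reproduced):

```python
import numpy as np

# Smoothed l0 measure: ||s||_0 is approximated by
#   n - sum_i exp(-s_i^2 / (2*sigma^2)),
# which tends to the exact count of nonzeros as sigma -> 0 while
# remaining differentiable, so it can serve as an ICA constraint term.

def smoothed_l0(s, sigma):
    s = np.asarray(s, dtype=float)
    return len(s) - np.exp(-s ** 2 / (2 * sigma ** 2)).sum()
```

For small σ the value approaches the true number of nonzero entries; larger σ gives a smoother (easier to optimize) but looser approximation.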

  21. A Low-Power High-Speed Smart Sensor Design for Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi

    1997-01-01

A low-power high-speed smart sensor system based on a large-format active pixel sensor (APS) integrated with a programmable neural processor for space exploration missions is presented. The concept of building an advanced smart sensing system is demonstrated by a system-level microchip design composed of an APS sensor, a programmable neural processor, and an embedded microprocessor in a SOI CMOS technology. This ultra-fast smart-sensor system-on-a-chip design mimics what is inherent in biological vision systems. Moreover, it is programmable and capable of performing ultra-fast machine vision processing at all levels, including image acquisition, image fusion, image analysis, scene interpretation, and control functions. The system provides about one tera-operation per second of computing power, a two-order-of-magnitude increase over state-of-the-art microcomputers. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and advanced VLSI system-on-a-chip implementation.

  22. Integrated Multiscale Modeling of Molecular Computing Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gregory Beylkin

    2012-03-23

Significant advances were made on all objectives of the research program. We have developed fast multiresolution methods for performing electronic structure calculations with emphasis on constructing efficient representations of functions and operators. We extended our approach to problems of scattering in solids, i.e. constructing fast algorithms for computing above the Fermi energy level. Part of the work was done in collaboration with Robert Harrison and George Fann at ORNL. Specific results (in part supported by this grant) are listed here and are described in greater detail. (1) We have implemented a fast algorithm to apply the Green's function for the free-space (oscillatory) Helmholtz kernel. The algorithm maintains its speed and accuracy when the kernel is applied to functions with singularities. (2) We have developed a fast algorithm for applying periodic and quasi-periodic, oscillatory Green's functions and those with boundary conditions on simple domains. Importantly, the algorithm maintains its speed and accuracy when applied to functions with singularities. (3) We have developed a fast algorithm for obtaining and applying multiresolution representations of periodic and quasi-periodic Green's functions and Green's functions with boundary conditions on simple domains. (4) We have implemented modifications to improve the speed of adaptive multiresolution algorithms for applying operators which are represented via a Gaussian expansion. (5) We have constructed new nearly optimal quadratures for the sphere that are invariant under the icosahedral rotation group. (6) We obtained new results on approximation of functions by exponential sums and/or rational functions, one of the key methods that allows us to construct separated representations for Green's functions. (7) We developed a new fast and accurate reduction algorithm for obtaining optimal approximation of functions by exponential sums and/or their rational representations.

  23. GWAS with longitudinal phenotypes: performance of approximate procedures

    PubMed Central

    Sikorska, Karolina; Montazeri, Nahid Mostafavi; Uitterlinden, André; Rivadeneira, Fernando; Eilers, Paul HC; Lesaffre, Emmanuel

    2015-01-01

    Analysis of genome-wide association studies with longitudinal data using standard procedures, such as linear mixed model (LMM) fitting, leads to discouragingly long computation times. There is a need to speed up the computations significantly. In our previous work (Sikorska et al: Fast linear mixed model computations for genome-wide association studies with longitudinal data. Stat Med 2012; 32.1: 165–180), we proposed the conditional two-step (CTS) approach as a fast method providing an approximation to the P-value for the longitudinal single-nucleotide polymorphism (SNP) effect. In the first step a reduced conditional LMM is fit, omitting all the SNP terms. In the second step, the estimated random slopes are regressed on SNPs. The CTS has been applied to the bone mineral density data from the Rotterdam Study and proved to work very well even in unbalanced situations. In another article (Sikorska et al: GWAS on your notebook: fast semi-parallel linear and logistic regression for genome-wide association studies. BMC Bioinformatics 2013; 14: 166), we suggested semi-parallel computations, greatly speeding up fitting many linear regressions. Combining CTS with fast linear regression reduces the computation time from several weeks to a few minutes on a single computer. Here, we explore further the properties of the CTS both analytically and by simulations. We investigate the performance of our proposal in comparison with a related but different approach, the two-step procedure. It is analytically shown that for the balanced case, under mild assumptions, the P-value provided by the CTS is the same as from the LMM. For unbalanced data and in realistic situations, simulations show that the CTS method does not inflate the type I error rate and implies only a minimal loss of power. PMID:25712081
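The semi-parallel idea credited above (Sikorska et al., BMC Bioinformatics 2013) can be illustrated in miniature: instead of looping over SNPs and calling a regression routine per SNP, all per-SNP simple-regression slopes are computed with a few matrix operations. This sketch covers only the simple linear-regression case with illustrative variable names, not the full CTS pipeline:

```python
import numpy as np

def semi_parallel_slopes(y, G):
    """Slopes of the simple regressions y ~ g for every SNP at once.
    y: phenotype vector (n,); G: genotype matrix (n, n_snps).
    Equivalent to n_snps separate least-squares fits, but vectorized:
    slope_j = cov(y, g_j) / var(g_j)."""
    yc = y - y.mean()
    Gc = G - G.mean(axis=0)
    return (Gc * yc[:, None]).sum(axis=0) / (Gc ** 2).sum(axis=0)
```

In the CTS approach, y would be the subject-level random slopes estimated once from the reduced linear mixed model; regressing them on all SNPs this way is what collapses weeks of LMM fitting into minutes.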

  24. Multiple feature fusion via covariance matrix for visual tracking

    NASA Astrophysics Data System (ADS)

    Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Wang, Xin; Sun, Hui

    2018-04-01

Aiming at the problem of complicated dynamic scenes in visual target tracking, a multi-feature fusion tracking algorithm based on the covariance matrix is proposed to improve the robustness of tracking. Within the framework of a quantum genetic algorithm, the region covariance descriptor is used to fuse color, edge, and texture features, and a fast covariance intersection algorithm is used to update the model. The low dimensionality of the region covariance descriptor, the fast convergence and strong global optimization ability of the quantum genetic algorithm, and the speed of the fast covariance intersection algorithm together improve the computational efficiency of the fusion, matching, and updating processes, so that the algorithm achieves fast and effective multi-feature fusion tracking. Experiments show that the proposed algorithm not only achieves fast and robust tracking but also effectively handles interference from occlusion, rotation, deformation, motion blur, and so on.
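The region covariance descriptor at the heart of this fusion can be sketched directly: per-pixel feature vectors (e.g. color, edge, and texture responses) are stacked and the region is summarized by their d × d covariance matrix. The feature choice below is illustrative, not the paper's exact feature set:

```python
import numpy as np

def region_covariance(features):
    """Region covariance descriptor (sketch).
    features: (n_pixels, d) matrix, one per-pixel feature vector per row
    (e.g. [R, G, B, |Ix|, |Iy|, texture response, ...]).
    Returns the d x d sample covariance summarizing the region."""
    f = features - features.mean(axis=0)
    return f.T @ f / (features.shape[0] - 1)
```

Because the descriptor is only d × d regardless of region size, matching candidate regions against the model stays cheap, which is the "low dimension" advantage the abstract cites.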

  25. L2 Learners' Engagement with High Stakes Listening Tests: Does Technology Have a Beneficial Role to Play?

    ERIC Educational Resources Information Center

    East, Martin; King, Chris

    2012-01-01

    In the listening component of the IELTS examination candidates hear the input once, delivered at "normal" speed. This format for listening can be problematic for test takers who often perceive normal speed input to be too fast for effective comprehension. The study reported here investigated whether using computer software to slow down…

  26. Comparative analysis of feature extraction methods in satellite imagery

    NASA Astrophysics Data System (ADS)

    Karim, Shahid; Zhang, Ye; Asif, Muhammad Rizwan; Ali, Saad

    2017-10-01

Feature extraction techniques are used extensively in satellite imagery and are receiving considerable attention in remote sensing applications. State-of-the-art feature extraction methods are suited to different categories and structures of the objects to be detected. Based on the distinctive computations of each method, different types of images were selected to evaluate the performance of several methods: binary robust invariant scalable keypoints (BRISK), scale-invariant feature transform, speeded-up robust features (SURF), features from accelerated segment test (FAST), histogram of oriented gradients, and local binary patterns. Total computational time is calculated to evaluate the speed of each feature extraction method. The extracted features are counted in shadow regions and in preprocessed shadow regions to compare the behavior of each method. We studied the combination of SURF with FAST and with BRISK individually and found very promising results, with an increased number of features and less computational time. Finally, feature matching is compared across all methods.

  27. Massively Parallel Processing for Fast and Accurate Stamping Simulations

    NASA Astrophysics Data System (ADS)

    Gress, Jeffrey J.; Xu, Siguang; Joshi, Ramesh; Wang, Chuan-tao; Paul, Sabu

    2005-08-01

The competitive automotive market drives manufacturers to speed up vehicle development cycles and reduce lead time. Fast tooling development is one of the key areas supporting fast and short vehicle development programs (VDP). In the past ten years, stamping simulation has become the most effective validation tool for predicting and resolving potential formability and quality problems before the dies are physically made. Stamping simulation and formability analysis have become a critical business segment in the GM math-based die engineering process. As simulation has become one of the major production tools in the engineering factory, simulation speed and accuracy are two of the most important measures of stamping simulation technology. The speed and time-in-system of forming analysis become even more critical in supporting fast VDPs and tooling readiness. Since 1997, the General Motors Die Center has been working jointly with our software vendor to develop and implement a parallel version of the simulation software for mass-production analysis applications. By 2001, this technology had matured in the form of distributed memory processing (DMP) of draw die simulations in a networked distributed-memory computing environment. In 2004, it was refined to massively parallel processing (MPP) and extended to line die forming analysis (draw, trim, flange, and associated spring-back) running on a dedicated computing environment. The evolution of this technology, the insight gained through the implementation of DMP/MPP technology, and performance benchmarks are discussed in this publication.

  8. Fast computation of the multivariable stability margin for real interrelated uncertain parameters

    NASA Technical Reports Server (NTRS)

    Sideris, Athanasios; Sanchez Pena, Ricardo S.

    1988-01-01

A novel algorithm is proposed for computing the multivariable stability margin used to check the robust stability of feedback systems with real parametric uncertainty. The method eliminates the frequency search required by a previously proposed algorithm by reducing the problem to checking a finite number of conditions. These conditions have a special structure, which allows a significant improvement in the speed of computations.

  9. Method of developing all-optical trinary JK, D-type, and T-type flip-flops using semiconductor optical amplifiers.

    PubMed

    Garai, Sisir Kumar

    2012-04-10

To meet the demands of very fast and agile optical networks, the optical processors in a network system should have a very fast execution rate, large information handling, and large information storage capacities. Multivalued logic operations and multistate optical flip-flops are the basic building blocks for such fast-running optical computing and data processing systems. In the past two decades, many methods of implementing all-optical flip-flops have been proposed. Most of these suffer from speed limitations because of the slow switching response of active devices. The frequency encoding technique has been used because of its many advantages: it preserves its identity throughout data communication irrespective of loss of light energy due to reflection, refraction, attenuation, etc. The action of polarization-rotation-based very fast switching of semiconductor optical amplifiers increases processing speed. At the same time, tristate optical flip-flops increase information handling capacity.

  10. A rapid parallelization of cone-beam projection and back-projection operator based on texture fetching interpolation

    NASA Astrophysics Data System (ADS)

    Xie, Lizhe; Hu, Yining; Chen, Yang; Shi, Luyao

    2015-03-01

Projection and back-projection are the most computationally expensive parts of Computed Tomography (CT) reconstruction. Parallelization strategies using GPU computing techniques have been introduced. In this paper we present a new parallelization scheme for both projection and back-projection, based on NVIDIA's CUDA technology. Instead of building a complex model, we optimize the existing algorithm and adapt it for CUDA implementation to achieve fast computation. Besides using the texture fetching operation, which yields faster interpolation, we fix the number of samples in the projection computation to keep blocks and threads synchronized, preventing the latency caused by uneven computational load. Experimental results demonstrate the computational efficiency and imaging quality of the proposed method.
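The fixed-sample-count idea can be illustrated outside CUDA: if every ray takes the same number of samples regardless of its length through the volume, parallel threads do identical amounts of work and stay synchronized. A pure-Python 2-D sketch (nearest-neighbour lookup stands in for the GPU's interpolating texture fetch; the function and its parameters are illustrative, not the paper's code):

```python
import math

def forward_project(img, x0, y0, x1, y1, n_samples=256):
    """Line integral of `img` along the ray (x0,y0)->(x1,y1) using a
    fixed number of samples, so every ray costs the same amount of work."""
    h, w = len(img), len(img[0])
    length = math.hypot(x1 - x0, y1 - y0)
    step = length / n_samples
    total = 0.0
    for s in range(n_samples):
        t = (s + 0.5) / n_samples        # midpoint of each sub-interval
        ix = int(x0 + (x1 - x0) * t)     # nearest-neighbour lookup; GPU
        iy = int(y0 + (y1 - y0) * t)     # texture hardware would interpolate
        if 0 <= ix < w and 0 <= iy < h:
            total += img[iy][ix]
    return total * step
```

For a uniform unit image, a horizontal ray of length 8 integrates to 8, as expected.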

  11. Cortical Specializations Underlying Fast Computations

    PubMed Central

    Volgushev, Maxim

    2016-01-01

The time course of behaviorally relevant environmental events sets temporal constraints on neuronal processing. How does the mammalian brain make use of the increasingly complex networks of the neocortex, while making decisions and executing behavioral reactions within a reasonable time? The key parameter determining the speed of computations in neuronal networks is the time interval that neuronal ensembles need to process changes at their input and communicate the results of this processing to downstream neurons. Theoretical analysis identified basic requirements for fast processing: use of neuronal populations for encoding, background activity, and fast onset dynamics of action potentials in neurons. Experimental evidence shows that populations of neocortical neurons fulfil these requirements. Indeed, they can change firing rate in response to input perturbations very quickly, within 1 to 3 ms, and encode high-frequency components of the input by phase-locking their spiking to frequencies up to 300 to 1000 Hz. This implies that the time unit of computation by cortical ensembles is only a few milliseconds (1 to 3 ms), which is considerably faster than the membrane time constant of individual neurons. The ability of cortical neuronal ensembles to communicate on a millisecond time scale allows for complex, multiple-step processing and precise coordination of neuronal activity in parallel processing streams, while keeping the speed of behavioral reactions within environmentally set temporal constraints. PMID:25689988

  12. Using the Fast Fourier Transform to Accelerate the Computational Search for RNA Conformational Switches

    PubMed Central

    Senter, Evan; Sheikh, Saad; Dotu, Ivan; Ponty, Yann; Clote, Peter

    2012-01-01

Using complex roots of unity and the Fast Fourier Transform, we design a new thermodynamics-based algorithm, FFTbor, that computes the Boltzmann probability that secondary structures differ by base pairs from an arbitrary initial structure of a given RNA sequence. The algorithm, which runs in quartic time and quadratic space, is used to determine the correlation between kinetic folding speed and the ruggedness of the energy landscape, and to predict the location of riboswitch expression platform candidates. A web server is available at http://bioinformatics.bc.edu/clotelab/FFTbor/. PMID:23284639
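The algebraic idea behind FFTbor, evaluating a polynomial at the complex roots of unity and inverting the DFT to read off its coefficients, can be sketched directly. The polynomial below is arbitrary; in FFTbor the coefficients are Boltzmann-weighted and indexed by base-pair distance, and a true FFT replaces the quadratic DFT used here for clarity:

```python
import cmath

def eval_at_roots_of_unity(coeffs, n):
    """Evaluate p(x) = sum_k coeffs[k] x^k at x = exp(2*pi*i*j/n), j = 0..n-1."""
    vals = []
    for j in range(n):
        x = cmath.exp(2j * cmath.pi * j / n)
        vals.append(sum(c * x ** k for k, c in enumerate(coeffs)))
    return vals

def coefficients_from_values(vals):
    """Inverse DFT: recover the polynomial coefficients from the n evaluations."""
    n = len(vals)
    return [sum(v * cmath.exp(-2j * cmath.pi * j * k / n)
                for j, v in enumerate(vals)) / n
            for k in range(n)]
```

Evaluating a degree-(n-1) polynomial at n roots of unity and applying the inverse DFT returns the original coefficients.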

  13. A Comparative Study of Sound Speed in Air at Room Temperature between a Pressure Sensor and a Sound Sensor

    ERIC Educational Resources Information Center

    Amrani, D.

    2013-01-01

    This paper deals with the comparison of sound speed measurements in air using two types of sensor that are widely employed in physics and engineering education, namely a pressure sensor and a sound sensor. A computer-based laboratory with pressure and sound sensors was used to carry out measurements of air through a 60 ml syringe. The fast Fourier…

  14. The difference engine: a model of diversity in speeded cognition.

    PubMed

    Myerson, Joel; Hale, Sandra; Zheng, Yingye; Jenkins, Lisa; Widaman, Keith F

    2003-06-01

    A theory of diversity in speeded cognition, the difference engine, is proposed, in which information processing is represented as a series of generic computational steps. Some individuals tend to perform all of these computations relatively quickly and other individuals tend to perform them all relatively slowly, reflecting the existence of a general cognitive speed factor, but the time required for response selection and execution is assumed to be independent of cognitive speed. The difference engine correctly predicts the positively accelerated form of the relation between diversity of performance, as measured by the standard deviation for the group, and task difficulty, as indexed by the mean response time (RT) for the group. In addition, the difference engine correctly predicts approximately linear relations between the RTs of any individual and average performance for the group, with the regression lines for fast individuals having slopes less than 1.0 (and positive intercepts) and the regression lines for slow individuals having slopes greater than 1.0 (and negative intercepts). Similar predictions are made for comparisons of slow, average, and fast subgroups, regardless of whether those subgroups are formed on the basis of differences in ability, age, or health status. These predictions are consistent with evidence from studies of healthy young and older adults as well as from studies of depressed and age-matched control groups.

  15. WE-DE-BRA-09: Fast Megavoltage CT Imaging with Rapid Scan Time and Low Imaging Dose in Helical Tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magome, T; University of Tokyo Hospital, Tokyo; University of Minnesota, Minneapolis, MN

Purpose: Megavoltage computed tomography (MVCT) imaging has been widely used for daily patient setup with helical tomotherapy (HT). One drawback of MVCT is its very long imaging time, owing to slow couch speed. The purpose of this study was to develop an MVCT imaging method allowing faster couch speeds, and to assess its accuracy for image guidance for HT. Methods: Three cadavers (mimicking the closest physiological and physical system to patients) were scanned four times with couch speeds of 1, 2, 3, and 4 mm/s. The resulting MVCT images were reconstructed using an iterative reconstruction (IR) algorithm. The MVCT images were registered with kilovoltage CT images, and the registration errors were compared with the errors of the conventional filtered back projection (FBP) algorithm. Moreover, the fast MVCT imaging was tested in three cases of total marrow irradiation as a clinical trial. Results: Three-dimensional registration errors of the MVCT images reconstructed with the IR algorithm were significantly smaller (p < 0.05) than the errors of images reconstructed with the FBP algorithm at fast couch speeds (3, 4 mm/s). The scan time and imaging dose at a speed of 4 mm/s were reduced to 30% of those from a conventional coarse mode scan. For the patient imaging, a limited number of conventional MVCT (1.2 mm/s) and fast MVCT (3 mm/s) scans showed acceptably reduced imaging time and dose while remaining usable for anatomical registration. Conclusion: Fast MVCT with the IR algorithm may be a clinically feasible alternative for rapid 3D patient localization. This technique may also be useful for calculating daily dose distributions or analyzing organ motion in HT treatment over a wide area.

  16. Fast Computation of the Two-Point Correlation Function in the Age of Big Data

    NASA Astrophysics Data System (ADS)

    Pellegrino, Andrew; Timlin, John

    2018-01-01

    We present a new code which quickly computes the two-point correlation function for large sets of astronomical data. This code combines the ease of use of Python with the speed of parallel shared libraries written in C. We include the capability to compute the auto- and cross-correlation statistics, and allow the user to calculate the three-dimensional and angular correlation functions. Additionally, the code automatically divides the user-provided sky masks into contiguous subsamples of similar size, using the HEALPix pixelization scheme, for the purpose of resampling. Errors are computed using jackknife and bootstrap resampling in a way that adds negligible extra runtime, even with many subsamples. We demonstrate comparable speed with other clustering codes, and code accuracy compared to known and analytic results.
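Jackknife errors of the kind the code reports can be computed with leave-one-out resampling. A minimal sketch for an arbitrary statistic (the correlation-function machinery itself is omitted; the function name and interface are illustrative):

```python
import math

def jackknife_error(samples, stat):
    """Leave-one-out jackknife error estimate for `stat` on `samples`."""
    n = len(samples)
    loo = [stat(samples[:i] + samples[i + 1:]) for i in range(n)]
    mean_loo = sum(loo) / n
    # Jackknife variance: (n-1)/n times the spread of the leave-one-out values.
    var = (n - 1) / n * sum((v - mean_loo) ** 2 for v in loo)
    return math.sqrt(var)
```

For the sample mean, this reproduces the usual standard error s/sqrt(n).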

  17. A Fourier transform with speed improvements for microprocessor applications

    NASA Technical Reports Server (NTRS)

    Lokerson, D. C.; Rochelle, R.

    1980-01-01

A fast Fourier transform algorithm for the RCA 1802 microprocessor was developed for spacecraft instrument applications. The computations were tailored to the restrictions an eight-bit machine imposes. The algorithm incorporates some aspects of Walsh function sequency to improve operational speed. This method uses a register to which a value proportional to the period of the band being processed is added before each computation is considered. If the result overflows into the DF register, the data sample is used in the computation; otherwise the computation is skipped. This operation is repeated for each of the 64 data samples, for both the sine and cosine portions of the computation. The processing uses eight-bit data, but because the many computations can increase the size of the coefficients, floating-point form is used. A method to reduce the alias problem in the lower bands is also described.
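The sample-selection trick described above can be emulated in a few lines: an accumulator of limited width is incremented once per sample, and a sample enters the computation only when the addition overflows (which on the 1802 sets the DF carry flag). The increments and frame length below are illustrative, not taken from the paper:

```python
def select_samples(n_samples, increment, modulus=256):
    """Emulate the 8-bit accumulator: a sample is processed only when
    the addition overflows (the 1802's DF carry flag would be set)."""
    acc = 0
    picked = []
    for i in range(n_samples):
        acc += increment
        if acc >= modulus:       # overflow into DF
            acc -= modulus
            picked.append(i)
    return picked
```

With a 64-sample frame, an increment of 128 selects every second sample, 64 selects every fourth, and 256 selects every sample, so the increment directly sets the fraction of samples processed for each band.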

  18. Ultrahigh-speed X-ray imaging of hypervelocity projectiles

    NASA Astrophysics Data System (ADS)

    Miller, Stuart; Singh, Bipin; Cool, Steven; Entine, Gerald; Campbell, Larry; Bishel, Ron; Rushing, Rick; Nagarkar, Vivek V.

    2011-08-01

    High-speed X-ray imaging is an extremely important modality for healthcare, industrial, military and research applications such as medical computed tomography, non-destructive testing, imaging in-flight projectiles, characterizing exploding ordnance, and analyzing ballistic impacts. We report on the development of a modular, ultrahigh-speed, high-resolution digital X-ray imaging system with large active imaging area and microsecond time resolution, capable of acquiring at a rate of up to 150,000 frames per second. The system is based on a high-resolution, high-efficiency, and fast-decay scintillator screen optically coupled to an ultra-fast image-intensified CCD camera designed for ballistic impact studies and hypervelocity projectile imaging. A specially designed multi-anode, high-fluence X-ray source with 50 ns pulse duration provides a sequence of blur-free images of hypervelocity projectiles traveling at speeds exceeding 8 km/s (18,000 miles/h). This paper will discuss the design, performance, and high frame rate imaging capability of the system.

  19. Coincidence ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen

    2014-12-01

A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.
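The real-time centroiding step reduces each ion spot on a camera frame to a sub-pixel position; its essential computation is an intensity-weighted mean. A minimal sketch (a real pipeline would first threshold the frame and segment individual spots; this function is illustrative):

```python
def centroid(frame):
    """Intensity-weighted centroid (x, y) of a spot in a small frame."""
    total = sx = sy = 0.0
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            total += v
            sx += v * x
            sy += v * y
    return sx / total, sy / total
```

A spot that is symmetric about a pixel centroids exactly onto that pixel.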

  20. Efficient Phase Unwrapping Architecture for Digital Holographic Microscopy

    PubMed Central

    Hwang, Wen-Jyi; Cheng, Shih-Chang; Cheng, Chau-Jern

    2011-01-01

This paper presents a novel phase unwrapping architecture for accelerating the computational speed of digital holographic microscopy (DHM). A fast Fourier transform (FFT) based phase unwrapping algorithm providing a minimum squared error solution is adopted for hardware implementation because of its simplicity and robustness to noise. The proposed architecture is realized in a pipeline fashion to maximize the throughput of the computation. Moreover, the number of hardware multipliers and dividers is minimized to reduce the hardware costs. The proposed architecture is used as custom user logic in a system on programmable chip (SOPC) for physical performance measurement. Experimental results reveal that the proposed architecture is effective for expediting the computational speed while consuming few hardware resources, making it suitable for an embedded DHM system. PMID:22163688
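The paper adopts an FFT-based least-squares unwrapper for 2-D phase maps; the underlying problem can be illustrated with the elementary 1-D path-following unwrapper, which removes the 2π jumps that wrapping introduces. This is a sketch of the unwrapping problem itself, not of the FFT architecture:

```python
import math

def unwrap_1d(phases):
    """Remove 2*pi discontinuities from a wrapped 1-D phase sequence."""
    out = [phases[0]]
    two_pi = 2 * math.pi
    for p in phases[1:]:
        d = p - out[-1]
        d -= two_pi * round(d / two_pi)   # map each jump back into (-pi, pi]
        out.append(out[-1] + d)
    return out
```

Wrapping a smoothly increasing phase ramp and unwrapping it recovers the original values, as long as neighbouring samples differ by less than π.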

  1. Fast autonomous holographic adaptive optics

    NASA Astrophysics Data System (ADS)

    Andersen, G.

    2010-07-01

We have created a new adaptive optics system using a holographic modal wavefront sensing method capable of autonomous (computer-free) closed-loop control of a MEMS deformable mirror. A multiplexed hologram is recorded using the maximum and minimum actuator positions on the deformable mirror as the "modes". On reconstruction, an input beam is diffracted into pairs of focal spots; the ratio of particular pairs determines the absolute wavefront phase at a particular actuator location. The wavefront measurement is made using a fast, sensitive photo-detector array such as multi-pixel photon counters. This information is then used to directly control each actuator in the MEMS DM without the need for any computer in the loop. We present initial results of a 32-actuator prototype device. We further demonstrate that, being an all-optical, parallel processing scheme, its speed is independent of the number of actuators. In fact, the limitations on speed are ultimately determined by the maximum driving speed of the DM actuators themselves. Finally, being modal in nature, the system is largely insensitive to both obscuration and scintillation. This should make it ideal for laser beam transmission or imaging under highly turbulent conditions.

  2. The Effects of Font Type and Spacing of Text for Online Readability and Performance

    ERIC Educational Resources Information Center

    Hojjati, Nafiseh; Muniandy, Balakrishnan

    2014-01-01

    Texts are a group of letters which are printed or displayed in a particular style and size. In the course of the fast speed of technological development everywhere and expanding use of computer based instruction such as online courses, students spend more time on a computer screen than printed media. Texts have been the main element to convey…

  3. An Evolutionary Method for Financial Forecasting in Microscopic High-Speed Trading Environment.

    PubMed

    Huang, Chien-Feng; Li, Hsu-Chih

    2017-01-01

The advancement of information technology in financial applications has led to fast market-driven events that prompt flash decision-making and actions issued by computer algorithms. As a result, today's markets experience intense activity in a highly dynamic environment where trading systems respond to one another at a much faster pace than before. This new breed of technology involves the implementation of high-speed trading strategies which generate a significant portion of activity in the financial markets and present researchers with a wealth of information not available in traditional low-speed trading environments. In this study, we aim at developing feasible computational intelligence methodologies, particularly genetic algorithms (GA), to shed light on high-speed trading research using price data of stocks on the microscopic level. Our empirical results show that the proposed GA-based system is able to improve the accuracy of the prediction significantly for price movement, and we expect this GA-based methodology to advance the current state of research for high-speed trading and other relevant financial applications.
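A genetic algorithm of the general kind used in the study can be sketched in pure Python. The fitness function below (the number of 1-bits, the classic OneMax toy problem) is only a stand-in for the paper's prediction-accuracy fitness; the population size, mutation rate, and all other parameters are illustrative, not taken from the paper:

```python
import random

def run_ga(fitness, n_bits=20, pop_size=30, generations=60,
           mutation_rate=0.02, seed=0):
    """Elitist GA: tournament selection, one-point crossover, bit-flip mutation.
    Returns the best fitness observed in each generation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        history.append(fitness(scored[0]))
        next_pop = [scored[0][:]]               # elitism: keep the best as-is
        while len(next_pop) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)   # tournament selection
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_bits)              # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if rng.random() < mutation_rate else b
                     for b in child]                    # bit-flip mutation
            next_pop.append(child)
        pop = next_pop
    return history

history = run_ga(sum)   # fitness = number of 1-bits (OneMax)
```

Because the elite individual is copied unmutated into each new population, the best fitness per generation is non-decreasing by construction.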

  4. Implementation and characterization of active feed-forward for deterministic linear optics quantum computing

    NASA Astrophysics Data System (ADS)

    Böhi, P.; Prevedel, R.; Jennewein, T.; Stefanov, A.; Tiefenbacher, F.; Zeilinger, A.

    2007-12-01

In general, quantum computer architectures based on the dynamical evolution of quantum states also require the processing of classical information, obtained by measurements of the actual qubits that make up the computer. This classical processing involves fast, active adaptation of subsequent measurements and real-time error correction (feed-forward), so that quantum gates and algorithms can be executed in a deterministic and hence error-free fashion. This is also true in the linear optical regime, where the quantum information is stored in the polarization state of photons. The adaptation of the photon's polarization can be achieved very quickly by employing electro-optical modulators (EOMs), which change the polarization of a passing photon upon application of a high voltage. In this paper we discuss techniques for implementing fast, active feed-forward at the single-photon level and present their application in the context of photonic quantum computing. This includes the working principles and characterization of the EOMs as well as a description of the switching logic, both of which allow quantum computation at an unprecedented speed.

  5. Three-dimensional simulation for fast forward flight of a calliope hummingbird

    PubMed Central

    Song, Jialei; Powers, Donald R.; Hedrick, Tyson L.; Luo, Haoxiang

    2016-01-01

We present a computational study of flapping-wing aerodynamics of a calliope hummingbird (Selasphorus calliope) during fast forward flight. Three-dimensional wing kinematics were incorporated into the model by extracting time-dependent wing position from high-speed videos of the bird flying in a wind tunnel at 8.3 m s−1. The advance ratio, i.e. the ratio between flight speed and average wingtip speed, is around one. An immersed-boundary method was used to simulate flow around the wings and bird body. The results show that both the downstroke and the upstroke in a wingbeat cycle produce significant thrust for the bird to overcome drag on the body, and that such thrust production comes at the price of negative lift induced during the upstroke. This feature might be shared with bats, while being distinct from insects and other birds, including closely related swifts. PMID:27429779

  6. A revision of the subtract-with-borrow random number generators

    NASA Astrophysics Data System (ADS)

    Sibidanov, Alexei

    2017-12-01

The most popular and widely used subtract-with-borrow generator, also known as RANLUX, is reimplemented as a linear congruential generator using large-integer arithmetic with a modulus size of 576 bits. Modern computers, as well as the specific structure of the modulus inferred from RANLUX, allow for the development of fast modular multiplication, the core of the procedure, which was previously believed to be slow and too costly in terms of computing resources. Our tests show a significant gain in generation speed, comparable with other fast, high-quality random number generators. An additional feature is the fast skipping of generator states, leading to a seeding scheme which guarantees the uniqueness of random number sequences. Licensing provisions: GPLv3. Programming language: C++, C, Assembler.
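Jump-ahead for a linear congruential generator, the basis of the "fast skipping" feature, composes k steps of x → (a·x + c) mod m in O(log k) multiplications by exponentiation-by-squaring on the affine map. A pure-Python sketch with small glibc-style constants rather than RANLUX's 576-bit modulus:

```python
def lcg_step(a, c, m, x):
    """One step of the LCG x -> (a*x + c) mod m."""
    return (a * x + c) % m

def lcg_skip(a, c, m, x, k):
    """Advance the LCG by k steps in O(log k) multiplications."""
    A, C = 1, 0                    # accumulated map: x -> (A*x + C) mod m
    ba, bc = a % m, c % m          # the map f^(2^i) at each doubling step
    while k:
        if k & 1:
            A, C = (A * ba) % m, (A * bc + C) % m
        # Square the base map: f^(2i)(x) = ba*(ba*x + bc) + bc.
        ba, bc = (ba * ba) % m, (ba * bc + bc) % m
        k >>= 1
    return (A * x + C) % m
```

Since all the composed maps are powers of the same affine map, they commute, so the order of composition does not matter; skipping 1000 states agrees with stepping 1000 times.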

  7. Fast multidimensional ensemble empirical mode decomposition for the analysis of big spatio-temporal datasets.

    PubMed

    Wu, Zhaohua; Feng, Jiaxin; Qiao, Fangli; Tan, Zhe-Min

    2016-04-13

In this big data era, it is more urgent than ever to solve two major issues: (i) fast data transmission methods that can facilitate access to data from non-local sources and (ii) fast and efficient data analysis methods that can reveal the key information from the available data for particular purposes. Although approaches in different fields to address these two questions may differ significantly, the common part must involve data compression techniques and a fast algorithm. This paper introduces the recently developed adaptive and spatio-temporally local analysis method, namely the fast multidimensional ensemble empirical mode decomposition (MEEMD), for the analysis of a large spatio-temporal dataset. The original MEEMD uses ensemble empirical mode decomposition to decompose time series at each spatial grid point and then pieces together the temporal-spatial evolution of climate variability and change on naturally separated timescales, which is computationally expensive. By taking advantage of the high efficiency of the expression using principal component analysis/empirical orthogonal function analysis for spatio-temporally coherent data, we design a lossy compression method for climate data to facilitate its non-local transmission. We also explain the basic principles behind the fast MEEMD through decomposing principal components instead of original grid-wise time series to speed up the computation of MEEMD. Using a typical climate dataset as an example, we demonstrate that our newly designed methods can (i) compress data with a compression rate of one to two orders of magnitude; and (ii) speed up the MEEMD algorithm by one to two orders of magnitude. © 2016 The Authors.

  8. Fast and unbiased estimator of the time-dependent Hurst exponent.

    PubMed

    Pianese, Augusto; Bianchi, Sergio; Palazzo, Anna Maria

    2018-03-01

    We combine two existing estimators of the local Hurst exponent to improve both the goodness of fit and the computational speed of the algorithm. An application with simulated time series is implemented, and a Monte Carlo simulation is performed to provide evidence of the improvement.

  9. Fast and unbiased estimator of the time-dependent Hurst exponent

    NASA Astrophysics Data System (ADS)

    Pianese, Augusto; Bianchi, Sergio; Palazzo, Anna Maria

    2018-03-01

    We combine two existing estimators of the local Hurst exponent to improve both the goodness of fit and the computational speed of the algorithm. An application with simulated time series is implemented, and a Monte Carlo simulation is performed to provide evidence of the improvement.

  10. H-BLAST: a fast protein sequence alignment toolkit on heterogeneous computers with GPUs.

    PubMed

    Ye, Weicai; Chen, Ying; Zhang, Yongdong; Xu, Yuesheng

    2017-04-15

The sequence alignment is a fundamental problem in bioinformatics. BLAST is a routinely used tool for this purpose, with over 118 000 citations in the past two decades. As the size of bio-sequence databases grows exponentially, the computational speed of alignment software must be improved. We develop the heterogeneous BLAST (H-BLAST), a fast parallel search tool for a heterogeneous computer that couples CPUs and GPUs, to accelerate BLASTX and BLASTP, basic tools of NCBI-BLAST. H-BLAST employs a locally decoupled seed-extension algorithm for better performance on GPUs, and offers a performance tuning mechanism for better efficiency among various CPU and GPU combinations. H-BLAST produces identical alignment results to NCBI-BLAST, and its computational speed is much faster. Speedups achieved by H-BLAST over sequential NCBI-BLASTP (resp. NCBI-BLASTX) range mostly from 4 to 10 (resp. 5 to 7.2). With 2 CPU threads and 2 GPUs, H-BLAST can be faster than 16-threaded NCBI-BLASTX. Furthermore, H-BLAST is 1.5-4 times faster than GPU-BLAST. https://github.com/Yeyke/H-BLAST.git. yux06@syr.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  11. High speed civil transport: Sonic boom softening and aerodynamic optimization

    NASA Technical Reports Server (NTRS)

    Cheung, Samson

    1994-01-01

    An improvement in sonic boom extrapolation techniques has been the desire of aerospace designers for years. This is because the linear acoustic theory developed in the 60's is incapable of predicting the nonlinear phenomenon of shock wave propagation. On the other hand, CFD techniques are too computationally expensive to employ on sonic boom problems. Therefore, this research focused on the development of a fast and accurate sonic boom extrapolation method that solves the Euler equations for axisymmetric flow. This new technique has brought the sonic boom extrapolation techniques up to the standards of the 90's. Parallel computing is a fast growing subject in the field of computer science because of its promising speed. A new optimizer (IIOWA) for the parallel computing environment has been developed and tested for aerodynamic drag minimization. This is a promising method for CFD optimization making use of the computational resources of workstations, which unlike supercomputers can spend most of their time idle. Finally, the OAW concept is attractive because of its overall theoretical performance. In order to fully understand the concept, a wind-tunnel model was built and is currently being tested at NASA Ames Research Center. The CFD calculations performed under this cooperative agreement helped to identify the problem of the flow separation, and also aided the design by optimizing the wing deflection for roll trim.

  12. Reservoir computing with a slowly modulated mask signal for preprocessing using a mutually coupled optoelectronic system

    NASA Astrophysics Data System (ADS)

    Tezuka, Miwa; Kanno, Kazutaka; Bunsen, Masatoshi

    2016-08-01

    Reservoir computing is a machine-learning paradigm based on information processing in the human brain. We numerically demonstrate reservoir computing with a slowly modulated mask signal for preprocessing by using a mutually coupled optoelectronic system. The performance of our system is quantitatively evaluated by a chaotic time series prediction task. Our system can produce comparable performance with reservoir computing with a single feedback system and a fast modulated mask signal. We showed that it is possible to slow down the modulation speed of the mask signal by using the mutually coupled system in reservoir computing.

  13. Non-homogeneous updates for the iterative coordinate descent algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Zhou; Thibault, Jean-Baptiste; Bouman, Charles A.; Sauer, Ken D.; Hsieh, Jiang

    2007-02-01

Statistical reconstruction methods show great promise for improving resolution and reducing noise and artifacts in helical X-ray CT. In fact, statistical reconstruction seems to be particularly valuable in maintaining reconstructed image quality when the dosage is low and the noise is therefore high. However, high computational cost and long reconstruction times remain a barrier to the use of statistical reconstruction in practical applications. Among the various iterative methods that have been studied for statistical reconstruction, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a novel method for further speeding the convergence of the ICD algorithm, and therefore reducing the overall reconstruction time for statistical reconstruction. The method, which we call non-homogeneous iterative coordinate descent (NH-ICD), uses spatially non-homogeneous updates to speed convergence by focusing computation where it is most needed. Experimental results with real data indicate that the method speeds reconstruction by roughly a factor of two for typical 3D multi-slice geometries.
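The coordinate-descent update that NH-ICD accelerates can be sketched for the generic least-squares case: each coordinate is minimized exactly in turn while a running residual is maintained. This is a homogeneous (cyclic) pure-Python version, without the statistical weighting, regularization, or non-homogeneous update ordering of the paper:

```python
def icd_solve(A, b, sweeps=100):
    """Cyclic coordinate descent for min_x ||A x - b||^2 (A as list of rows)."""
    n_rows, n_cols = len(A), len(A[0])
    x = [0.0] * n_cols
    r = [-bi for bi in b]                     # residual r = A x - b, with x = 0
    cols = [[A[i][j] for i in range(n_rows)] for j in range(n_cols)]
    col_sq = [sum(c * c for c in col) for col in cols]
    for _ in range(sweeps):
        for j in range(n_cols):
            g = sum(cols[j][i] * r[i] for i in range(n_rows))
            delta = -g / col_sq[j]            # exact 1-D minimizer along e_j
            x[j] += delta
            for i in range(n_rows):           # keep the residual up to date
                r[i] += cols[j][i] * delta
    return x
```

On a small well-conditioned system the iterates converge to the exact solution; NH-ICD's contribution is to visit the coordinates non-uniformly rather than cyclically.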

  14. Vibration extraction based on fast NCC algorithm and high-speed camera.

    PubMed

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

    In this study, a high-speed camera system is developed to perform vibration measurement in real time and to avoid the mass loading introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera which can capture images at up to 1000 frames per second. In order to process the captured images on the computer, the normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm performs each displacement extraction about 10 times faster than traditional template matching, without installing any target panel on the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals.
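The core of such a tracker, zero-mean NCC scoring plus a search restricted to a small window around the previous position, can be sketched as follows (an illustrative reconstruction, not the authors' code; function names and window details are assumptions, and subpixel refinement is omitted):

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation of two equally sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    return float((p * t).sum() / np.sqrt((p * p).sum() * (t * t).sum()))

def track(frame, template, center, radius):
    """Local search: scan only a small window around the previous position,
    which is the key to the speed-up over full-frame matching."""
    th, tw = template.shape
    cy, cx = center
    best, best_pos = -2.0, center
    for y in range(max(0, cy - radius), min(frame.shape[0] - th, cy + radius) + 1):
        for x in range(max(0, cx - radius), min(frame.shape[1] - tw, cx + radius) + 1):
            s = ncc(frame[y:y + th, x:x + tw], template)
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos, best
```

Restricting the search to a (2r+1)^2 window is what makes per-frame cost independent of the image size.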

  15. Using NetMeeting for remote configuration of the Otto Bock C-Leg: technical considerations.

    PubMed

    Lemaire, E D; Fawcett, J A

    2002-08-01

    Telehealth has the potential to be a valuable tool for technical and clinical support of computer-controlled prosthetic devices. This pilot study examined the use of Internet-based, desktop video conferencing for remote configuration of the Otto Bock C-Leg. Laboratory tests involved connecting two computers running Microsoft NetMeeting over a local area network (IP protocol). Over 56 kb/s, DSL/cable, and 10 Mb/s LAN connections, a prosthetist remotely configured a user's C-Leg by using Application Sharing, Live Video, and Live Audio. A similar test between sites in Ottawa and Toronto, Canada was limited by the notebook computer's 28 kb/s modem. At the 28 kb/s Internet-connection speed, NetMeeting's application sharing feature was not able to update the remote Sliders window fast enough to display peak toe loads and peak knee angles. These results support the use of NetMeeting as an accessible and cost-effective tool for remote C-Leg configuration, provided that sufficient Internet data transfer speed is available.

  16. Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.

    PubMed

    Tao, Liang; Kwan, Hon Keung

    2012-07-01

    Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the fastest existing algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, making them attractive for real-time image processing.
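The 1-D building block behind fast DHT algorithms is the identity H[k] = Re(X[k]) - Im(X[k]) relating the Hartley transform to the DFT; a minimal sketch (the paper's 2-D convolver banks and multirate structure are not reproduced here):

```python
import numpy as np

def dht(x):
    """1-D discrete Hartley transform via a single FFT:
    H[k] = sum_n x[n] * cas(2*pi*k*n/N), with cas(t) = cos(t) + sin(t),
    which equals Re(X[k]) - Im(X[k]) for the DFT X = FFT(x)."""
    X = np.fft.fft(x)
    return X.real - X.imag

def idht(h):
    # up to the 1/N factor, the DHT is its own inverse
    return dht(h) / len(h)
```

Because the DHT maps real inputs to real outputs, it avoids the complex arithmetic and conjugate-symmetric redundancy of the FFT for real-valued Gabor analysis.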

  17. Hardware implementation of CMAC neural network with reduced storage requirement.

    PubMed

    Ker, J S; Kuo, Y H; Wen, R C; Liu, B D

    1997-01-01

    The cerebellar model articulation controller (CMAC) neural network has the advantages of fast convergence and low computational complexity. However, it suffers from a low storage-space utilization rate on weight memory. In this paper, we propose a direct weight address mapping approach, which can reduce the required weight memory size with a utilization rate near 100%. Based on this address mapping approach, we developed a pipeline architecture to perform the addressing operations efficiently. The proposed direct weight address mapping approach also speeds up the computation for the generation of weight addresses. In addition, a CMAC hardware prototype for color calibration has been implemented to validate the proposed approach and architecture.

  18. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
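The key reorganization, solving all observation vectors that share the same passive (unconstrained) variable set with a single factorization instead of one solve per vector, can be illustrated as follows. This is a sketch that assumes the passive sets are already known; the active-set iteration that discovers them, which the full combinatorial algorithm also performs, is omitted:

```python
import numpy as np

def grouped_solve(A, B, passive_sets):
    """Solve min ||A x_j - b_j|| for every column b_j of B, with x_j
    restricted to its known passive set. Columns sharing a passive set
    are solved together in one least-squares call -- the reorganization
    at the heart of the fast combinatorial algorithm."""
    n, kcols = A.shape[1], B.shape[1]
    X = np.zeros((n, kcols))
    for pset in set(passive_sets):
        cols = [j for j, p in enumerate(passive_sets) if p == pset]
        idx = list(pset)
        if idx:
            # one factorization serves every column in this group
            sol = np.linalg.lstsq(A[:, idx], B[:, cols], rcond=None)[0]
            X[np.ix_(idx, cols)] = sol
    return X
```

With thousands of observation vectors but few distinct passive sets, the number of factorizations collapses from one per vector to one per set.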

  19. Feature-fused SSD: fast detection for small objects

    NASA Astrophysics Data System (ADS)

    Cao, Guimei; Xie, Xuemei; Yang, Wenzhe; Liao, Quan; Shi, Guangming; Wu, Jinjian

    2018-04-01

    Detecting small objects is a challenging task in computer vision because of their limited resolution and information. To address this problem, the majority of existing methods sacrifice speed for improvements in accuracy. In this paper, we aim to detect small objects at a fast speed, using the Single Shot MultiBox Detector (SSD), the leading object detector with respect to the accuracy-vs-speed trade-off, as the base architecture. We propose a multi-level feature fusion method for introducing contextual information in SSD, in order to improve the accuracy for small objects. For the fusion operation, we design two feature fusion modules, a concatenation module and an element-sum module, which differ in how contextual information is added. Experimental results show that these two fusion modules obtain higher mAP on PASCAL VOC2007 than the baseline SSD by 1.6 and 1.7 points respectively, with 2-3 points of improvement on some small-object categories in particular. Their testing speeds are 43 and 40 FPS respectively, exceeding the state-of-the-art Deconvolutional Single Shot Detector (DSSD) by 29.4 and 26.4 FPS.

  20. Coincidence ion imaging with a fast frame camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei

    2014-12-15

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide-semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.

  1. High speed Infrared imaging method for observation of the fast varying temperature phenomena

    NASA Astrophysics Data System (ADS)

    Moghadam, Reza; Alavi, Kambiz; Yuan, Baohong

    With new improvements in high-end commercial R&D camera technologies, many challenges in high-speed IR imaging have been overcome. The core benefits of this technology are the ability to capture fast-varying phenomena without image blur, to acquire enough data to properly characterize dynamic energy, and to increase the dynamic range without compromising the number of frames per second. This study presents a noninvasive method for determining the intensity field of a high-intensity focused ultrasound (HIFU) beam using infrared imaging. A high-speed infrared camera was placed above the tissue-mimicking material heated by HIFU, with no other sensors present in the HIFU axial beam. A MATLAB simulation code was used to perform a finite-element solution of the pressure-wave propagation and heat equations within the phantom, and the temperature rise in the phantom was computed. Three different power levels of the HIFU transducer were tested, and the predicted temperature increases were within about 25% of the IR measurements. The fundamental theory and methods developed in this research can be used to detect fast-varying temperature phenomena in combination with infrared filters.

  2. An Evolutionary Method for Financial Forecasting in Microscopic High-Speed Trading Environment

    PubMed Central

    Li, Hsu-Chih

    2017-01-01

    The advancement of information technology in financial applications nowadays has led to fast market-driven events that prompt flash decision-making and actions issued by computer algorithms. As a result, today's markets experience intense activity in the highly dynamic environment where trading systems respond to others at a much faster pace than before. This new breed of technology involves the implementation of high-speed trading strategies which generate a significant portion of activity in the financial markets and present researchers with a wealth of information not available in traditional low-speed trading environments. In this study, we aim at developing feasible computational intelligence methodologies, particularly genetic algorithms (GA), to shed light on high-speed trading research using price data of stocks on the microscopic level. Our empirical results show that the proposed GA-based system is able to improve the accuracy of price-movement prediction significantly, and we expect this GA-based methodology to advance the current state of research for high-speed trading and other relevant financial applications. PMID:28316618
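As a rough illustration of the GA machinery only (not the authors' trading system), here is a minimal real-coded genetic algorithm skeleton with tournament selection, blend crossover, and Gaussian mutation. In the paper's setting the fitness would be prediction accuracy on microscopic price data; here it is an arbitrary function supplied by the caller:

```python
import numpy as np

def ga_optimize(fitness, dim, pop=40, gens=60, rng=None):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation. Returns the best individual of the final population."""
    if rng is None:
        rng = np.random.default_rng(0)
    P = rng.normal(size=(pop, dim))          # initial random population
    for _ in range(gens):
        f = np.array([fitness(ind) for ind in P])
        new = []
        for _ in range(pop):
            # two binary tournaments pick the parents
            i, j = rng.integers(pop, size=2)
            a = P[i] if f[i] >= f[j] else P[j]
            i, j = rng.integers(pop, size=2)
            b = P[i] if f[i] >= f[j] else P[j]
            # blend crossover plus Gaussian mutation
            w = rng.uniform(size=dim)
            new.append(w * a + (1 - w) * b + rng.normal(scale=0.1, size=dim))
        P = np.array(new)
    f = np.array([fitness(ind) for ind in P])
    return P[int(np.argmax(f))]
```

Any trading-specific content would live entirely in the fitness function, which is the usual appeal of GA-based systems.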

  3. Coronal Magnetic Field Topology and Source of Fast Solar Wind

    NASA Technical Reports Server (NTRS)

    Guhathakurta, M.; Sittler, E.; Fisher, R.; McComas, D.; Thompson, B.

    1999-01-01

    We have developed a steady-state, 2D semi-empirical MHD model of the solar corona and the solar wind with many surprising results. This model shows for the first time that the boundary between the fast and the slow solar wind, as observed by Ulysses beyond 1 AU, is established in the low corona. The fastest wind observed by Ulysses (680-780 km/s) originates from the polar coronal holes at 70-90 deg. latitude at the Sun. Rapidly diverging magnetic field geometry accounts for the fast wind reaching down to a latitude of +/- 30 deg. at the orbit of Earth. The gradual increase in the fast wind speed with latitude observed by Ulysses can be explained by an increasing field strength towards the poles, which causes the Alfven wave energy flux to increase towards the poles. Empirically, there is a direct relationship between this gradual increase in wind speed and the expansion factor, f, computed at r greater than 20%. This relationship is inverse if f is computed very close to the Sun.

  4. A Novel Speed Compensation Method for ISAR Imaging with Low SNR

    PubMed Central

    Liu, Yongxiang; Zhang, Shuanghui; Zhu, Dekang; Li, Xiang

    2015-01-01

    In this paper, two novel speed compensation algorithms for ISAR imaging under a low signal-to-noise ratio (SNR) condition are proposed, based on the cubic phase function (CPF) and the integrated cubic phase function (ICPF), respectively. These two algorithms can estimate the speed of the target from the wideband radar echo directly, which removes the need for separate speed measurement in the radar system. With the utilization of non-coherent accumulation, the ICPF-based speed compensation algorithm is robust to noise and can meet the requirement of speed compensation for ISAR imaging under a low SNR condition. Moreover, a fast searching strategy, consisting of a coarse search followed by a precise search, is introduced to decrease the computational burden of speed compensation based on CPF and ICPF. Experimental results based on radar data validate the effectiveness of the proposed algorithms. PMID:26225980
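The coarse-then-precise search strategy can be sketched generically (an illustrative stand-in: in the paper the cost function would be the CPF/ICPF criterion evaluated at a candidate target speed, whereas here it is an arbitrary smooth function):

```python
import numpy as np

def coarse_fine_search(cost, lo, hi, coarse_n=50, fine_n=50):
    """Two-stage 1-D maximization: a coarse grid locates the neighborhood
    of the optimum, then a dense grid refines it -- far fewer cost
    evaluations than one dense grid over the whole interval."""
    grid = np.linspace(lo, hi, coarse_n)
    i = int(np.argmax([cost(v) for v in grid]))
    step = grid[1] - grid[0]
    # precise search: dense grid spanning one coarse step on either side
    fine = np.linspace(grid[i] - step, grid[i] + step, fine_n)
    return float(fine[int(np.argmax([cost(v) for v in fine]))])
```

With N coarse and M fine points the resolution is roughly (hi-lo)·2/(N·M) at the cost of only N+M evaluations, versus N·M/2 for a single grid of the same resolution.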

  5. An ultrafast programmable electrical tester for enabling time-resolved, sub-nanosecond switching dynamics and programming of nanoscale memory devices.

    PubMed

    Shukla, Krishna Dayal; Saxena, Nishant; Manivannan, Anbarasu

    2017-12-01

    Recent advancements in the commercialization of high-speed non-volatile electronic memories, including phase change memory (PCM), have shown potential not only for advanced data storage but also for novel computing concepts. However, an in-depth understanding of ultrafast electrical switching dynamics is a key challenge for defining the ultimate speed of nanoscale memory devices, and it demands an unconventional electrical setup specifically capable of handling extremely fast electrical pulses. In the present work, an ultrafast programmable electrical tester (PET) setup has been developed specifically for unravelling the time-resolved electrical switching dynamics and programming characteristics of nanoscale memory devices at the picosecond (ps) time scale. This setup consists of novel high-frequency contact boards carefully designed to capture extremely fast switching transients within 200 ± 25 ps using time-resolved current-voltage measurements. All the instruments in the system are synchronized using LabVIEW, which helps to achieve various programming characteristics such as voltage-dependent transient parameters, read/write operations, and endurance tests of memory devices systematically, using short voltage pulses with rise/fall times down to 1 ns and pulse widths down to 1.5 ns (full width at half maximum). Furthermore, the setup has demonstrated strikingly faster switching, by one order of magnitude, of Ag5In5Sb60Te30 (AIST) PCM devices within 250 ps. Hence, this novel electrical setup would be immensely helpful for realizing the ultimate speed limits of various high-speed memory technologies for future computing.

  6. An ultrafast programmable electrical tester for enabling time-resolved, sub-nanosecond switching dynamics and programming of nanoscale memory devices

    NASA Astrophysics Data System (ADS)

    Shukla, Krishna Dayal; Saxena, Nishant; Manivannan, Anbarasu

    2017-12-01

    Recent advancements in the commercialization of high-speed non-volatile electronic memories, including phase change memory (PCM), have shown potential not only for advanced data storage but also for novel computing concepts. However, an in-depth understanding of ultrafast electrical switching dynamics is a key challenge for defining the ultimate speed of nanoscale memory devices, and it demands an unconventional electrical setup specifically capable of handling extremely fast electrical pulses. In the present work, an ultrafast programmable electrical tester (PET) setup has been developed specifically for unravelling the time-resolved electrical switching dynamics and programming characteristics of nanoscale memory devices at the picosecond (ps) time scale. This setup consists of novel high-frequency contact boards carefully designed to capture extremely fast switching transients within 200 ± 25 ps using time-resolved current-voltage measurements. All the instruments in the system are synchronized using LabVIEW, which helps to achieve various programming characteristics such as voltage-dependent transient parameters, read/write operations, and endurance tests of memory devices systematically, using short voltage pulses with rise/fall times down to 1 ns and pulse widths down to 1.5 ns (full width at half maximum). Furthermore, the setup has demonstrated strikingly faster switching, by one order of magnitude, of Ag5In5Sb60Te30 (AIST) PCM devices within 250 ps. Hence, this novel electrical setup would be immensely helpful for realizing the ultimate speed limits of various high-speed memory technologies for future computing.

  7. Fast restoration approach for motion blurred image based on deconvolution under the blurring paths

    NASA Astrophysics Data System (ADS)

    Shi, Yu; Song, Jie; Hua, Xia

    2015-12-01

    For real-time motion deblurring, it is of utmost importance to achieve a higher processing speed at about the same image quality. This paper presents a fast Richardson-Lucy motion deblurring approach that rotates the blurred image so that the blur path lies along image rows. The computational time is thereby reduced sharply, because the one-dimensional Richardson-Lucy method can use the one-dimensional fast Fourier transform. To obtain accurate transformation results, an interpolation method is incorporated to fetch the gray values. Experimental results demonstrate that the proposed approach is efficient and effective in reducing motion blur along the blur paths.
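A minimal 1-D Richardson-Lucy iteration with FFT-based circular convolution, in the spirit of the row-wise deconvolution described above (an illustrative sketch under a circular blur model, not the paper's implementation):

```python
import numpy as np

def richardson_lucy_1d(blurred, psf, iters=300):
    """1-D Richardson-Lucy deconvolution. All convolutions are circular
    and done with 1-D FFTs -- the source of the speed-up once the blur
    path has been rotated to lie along rows."""
    n = len(blurred)
    k = np.zeros(n)
    k[:len(psf)] = psf / np.sum(psf)
    k = np.roll(k, -(len(psf) // 2))        # center the kernel at index 0
    K = np.fft.fft(k)
    est = np.full(n, float(np.mean(blurred)))  # flat positive initial estimate
    for _ in range(iters):
        conv = np.real(np.fft.ifft(np.fft.fft(est) * K))
        ratio = blurred / np.maximum(conv, 1e-12)
        # multiplicative update: correlate the ratio with the PSF
        est = est * np.real(np.fft.ifft(np.fft.fft(ratio) * np.conj(K)))
    return est
```

Each iteration costs O(n log n) per row instead of the O(n^2) of direct convolution.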

  8. High Speed Computational Ghost Imaging via Spatial Sweeping

    NASA Astrophysics Data System (ADS)

    Wang, Yuwang; Liu, Yang; Suo, Jinli; Situ, Guohai; Qiao, Chang; Dai, Qionghai

    2017-03-01

    Computational ghost imaging (CGI) achieves single-pixel imaging by using a spatial light modulator (SLM) to generate structured illuminations for spatially resolved information encoding. The imaging speed of CGI is limited by the modulation frequency of available SLMs, which holds back its practical application. This paper proposes to bypass this limitation by trading off the SLM's redundant spatial resolution for multiplication of the modulation frequency. Specifically, a pair of galvanic mirrors sweeping across the high-resolution SLM multiplies the modulation frequency within the spatial-resolution gap between the SLM and the final reconstruction. A proof-of-principle setup with two mid-range galvanic mirrors achieves ghost imaging as fast as 42 Hz at 80 × 80-pixel resolution, 5 times faster than the state of the art, and holds potential for a further order-of-magnitude multiplication through hardware upgrades. Our approach brings a significant improvement in the imaging speed of ghost imaging and pushes ghost imaging towards practical applications.

  9. Real-time image reconstruction and display system for MRI using a high-speed personal computer.

    PubMed

    Haishi, T; Kose, K

    1998-09-01

    A real-time NMR image reconstruction and display system was developed using a high-speed personal computer and optimized for the 32-bit multitasking Microsoft Windows 95 operating system. The system was operated at various CPU clock frequencies by changing the motherboard clock frequency and the processor/bus frequency ratio. When the Pentium CPU was used at a 200 MHz clock frequency, the reconstruction time for one 128 x 128 pixel image was 48 ms and that for the image display on the enlarged 256 x 256 pixel window was about 8 ms. NMR imaging experiments were performed with three fast imaging sequences (FLASH, multishot EPI, and one-shot EPI) to demonstrate the ability of the real-time system. It was concluded that in most cases a high-speed PC would be the best choice for an image reconstruction and display system for real-time MRI. Copyright 1998 Academic Press.

  10. Hi-Corrector: a fast, scalable and memory-efficient package for normalizing large-scale Hi-C data.

    PubMed

    Li, Wenyuan; Gong, Ke; Li, Qingjiao; Alber, Frank; Zhou, Xianghong Jasmine

    2015-03-15

    Genome-wide proximity ligation assays, e.g. Hi-C and its variant TCC, have recently become important tools to study spatial genome organization. Removing biases from chromatin contact matrices generated by such techniques is a critical preprocessing step of subsequent analyses. The continuing decline of sequencing costs has led to an ever-improving resolution of Hi-C data, resulting in very large matrices of chromatin contacts. Such large matrices, however, pose a great challenge to the memory usage and speed of normalization. Therefore, there is an urgent need for fast and memory-efficient methods for normalization of Hi-C data. We developed Hi-Corrector, an easy-to-use, open-source implementation of the Hi-C data normalization algorithm. Its salient features are (i) scalability-the software is capable of normalizing Hi-C data of any size in reasonable times; (ii) memory efficiency-the sequential version can run on any single computer with very limited memory, no matter how little; (iii) speed-the parallel version can run very fast on multiple computing nodes with limited local memory. The sequential version is implemented in ANSI C and can be easily compiled on any system; the parallel version is implemented in ANSI C with the MPI library (a standardized and portable parallel environment designed for solving large-scale scientific problems). The package is freely available at http://zhoulab.usc.edu/Hi-Corrector/. © The Author 2014. Published by Oxford University Press.
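The normalization itself is an iterative matrix-balancing scheme; a single-machine sketch of the idea follows (Hi-Corrector's actual contribution is making this memory-efficient and parallel across nodes, which is not shown here):

```python
import numpy as np

def iterative_correction(M, iters=50):
    """Iterative row/column rebalancing of a symmetric contact matrix:
    repeatedly divide out the (normalized) row sums until every row has
    equal total coverage. Returns the balanced matrix and the per-row
    bias vector, with W * outer(bias, bias) == M."""
    W = np.array(M, dtype=float)
    bias = np.ones(len(W))
    for _ in range(iters):
        s = W.sum(axis=1)
        s = s / s.mean()
        s[s == 0] = 1.0          # leave empty rows untouched
        bias *= s
        W = W / np.outer(s, s)
    return W, bias
```

Only one row-sum vector and one bias vector need to live in memory per pass, which is why the scheme scales to matrices far larger than RAM when rows are streamed from disk.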

  11. Microgravity

    NASA Image and Video Library

    1999-05-26

    Looking for a faster computer? How about an optical computer that processes data streams simultaneously and works with the speed of light? In space, NASA researchers have formed optical thin-films. By turning these thin-films into very fast optical computer components, scientists could improve computer tasks, such as pattern recognition. Dr. Hossin Abdeldayem, physicist at NASA/Marshall Space Flight Center (MSFC) in Huntsville, AL, is working with lasers as part of an optical system for pattern recognition. These systems can be used for automated fingerprinting, photographic scanning, and the development of sophisticated artificial intelligence systems that can learn and evolve. Photo credit: NASA/Marshall Space Flight Center (MSFC)

  12. Fast normal mode computations of capsid dynamics inspired by resonance

    NASA Astrophysics Data System (ADS)

    Na, Hyuntae; Song, Guang

    2018-07-01

    Increasingly more and larger structural complexes are being determined experimentally. The sizes of these systems pose a formidable computational challenge to the study of their vibrational dynamics by normal mode analysis. To overcome this challenge, this work presents a novel resonance-inspired approach. Tests on large shell structures of protein capsids demonstrate that there is a strong resonance between the vibrations of a whole capsid and those of individual capsomeres. We then show how this resonance can be taken advantage of to significantly speed up normal mode computations.

  13. Integrated semiconductor-magnetic random access memory system

    NASA Technical Reports Server (NTRS)

    Katti, Romney R. (Inventor); Blaes, Brent R. (Inventor)

    2001-01-01

    The present disclosure describes a non-volatile magnetic random access memory (RAM) system having a semiconductor control circuit and a magnetic array element. The integrated magnetic RAM system uses a CMOS control circuit to read and write data magnetoresistively. The system provides a fast-access, non-volatile, radiation-hard, high-density RAM for high-speed computing.

  14. Patch forest: a hybrid framework of random forest and patch-based segmentation

    NASA Astrophysics Data System (ADS)

    Xie, Zhongliu; Gillies, Duncan

    2016-03-01

    The development of an accurate, robust and fast segmentation algorithm has long been a research focus in medical computer vision. State-of-the-art practices often involve non-rigidly registering a target image with a set of training atlases for label propagation over the target space to perform segmentation, a.k.a. multi-atlas label propagation (MALP). In recent years, the patch-based segmentation (PBS) framework has gained wide attention due to its advantage of relaxing the strict voxel-to-voxel correspondence to a series of pair-wise patch comparisons for contextual pattern matching. Despite a high accuracy reported in many scenarios, computational efficiency has consistently been a major obstacle for both approaches. Inspired by recent work on random forests, in this paper we propose a patch forest approach, which, by equipping the conventional PBS with a fast patch search engine, is able to boost segmentation speed significantly while retaining an equal level of accuracy. In addition, a fast forest training mechanism is also proposed, with the use of a dynamic grid framework to efficiently approximate data compactness computation and a 3D integral image technique for fast box-feature retrieval.

  15. The relationship between gamma frequency and running speed differs for slow and fast gamma rhythms in freely behaving rats

    PubMed Central

    Zheng, Chenguang; Bieri, Kevin Wood; Trettel, Sean Gregory; Colgin, Laura Lee

    2015-01-01

    In hippocampal area CA1 of rats, the frequency of gamma activity has been shown to increase with running speed (Ahmed and Mehta, 2012). This finding suggests that different gamma frequencies simply allow for different timings of transitions across cell assemblies at varying running speeds, rather than serving unique functions. However, accumulating evidence supports the conclusion that slow (~25–55 Hz) and fast (~60–100 Hz) gamma are distinct network states with different functions. If slow and fast gamma constitute distinct network states, then it is possible that slow and fast gamma frequencies are differentially affected by running speed. In this study, we tested this hypothesis and found that slow and fast gamma frequencies change differently as a function of running speed in hippocampal areas CA1 and CA3, and in the superficial layers of the medial entorhinal cortex (MEC). Fast gamma frequencies increased with increasing running speed in all three areas. Slow gamma frequencies changed significantly less across different speeds. Furthermore, at high running speeds, CA3 firing rates were low, and MEC firing rates were high, suggesting that CA1 transitions from CA3 inputs to MEC inputs as running speed increases. These results support the hypothesis that slow and fast gamma reflect functionally distinct states in the hippocampal network, with fast gamma driven by MEC at high running speeds and slow gamma driven by CA3 at low running speeds. PMID:25601003

  16. A distributed system for fast alignment of next-generation sequencing data.

    PubMed

    Srimani, Jaydeep K; Wu, Po-Yen; Phan, John H; Wang, May D

    2010-12-01

    We developed a scalable distributed computing system using the Berkeley Open Interface for Network Computing (BOINC) to align next-generation sequencing (NGS) data quickly and accurately. NGS technology is emerging as a promising platform for gene expression analysis due to its high sensitivity compared to traditional genomic microarray technology. However, despite the benefits, NGS datasets can be prohibitively large, requiring significant computing resources to obtain sequence alignment results. Moreover, as the data and alignment algorithms become more prevalent, it will become necessary to examine the effect of the multitude of alignment parameters on various NGS systems. We validate the distributed software system by (1) computing simple timing results to show the speed-up gained by using multiple computers, (2) optimizing alignment parameters using simulated NGS data, and (3) computing NGS expression levels for a single biological sample using optimal parameters and comparing these expression levels to that of a microarray sample. Results indicate that the distributed alignment system achieves approximately a linear speed-up and correctly distributes sequence data to and gathers alignment results from multiple compute clients.

  17. Analysis of the pen pressure and grip force signal during basic drawing tasks: The timing and speed changes impact drawing characteristics.

    PubMed

    Gatouillat, Arthur; Dumortier, Antoine; Perera, Subashan; Badr, Youakim; Gehin, Claudine; Sejdić, Ervin

    2017-08-01

    Writing is a complex, highly trained fine motor skill, involving complex biomechanical and cognitive processes. In this paper, we propose to study writing kinetics from three angles: the pen-tip normal force, the total grip force signal, and writing-quality assessment. In order to collect writing kinetics data, we designed a sensor collecting these characteristics simultaneously. Ten healthy right-handed adults were recruited and asked to perform four tasks: first, they were instructed to draw circles at a speed they considered comfortable; they were then instructed to draw circles at a speed they regarded as fast; afterwards, they repeated the comfortable task while compelled to follow the rhythm of a metronome; and finally they performed the fast task under the same timing constraints. Statistical differences between the tasks were computed, and while the pen-tip normal force and total grip force signal were not impacted by the changes introduced in each task, writing-quality features were affected by both the speed changes and the timing-constraint changes. This verifies the already-studied speed-accuracy trade-off and suggests the existence of a timing constraints-accuracy trade-off. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Nonlinear dynamics support a linear population code in a retinal target-tracking circuit.

    PubMed

    Leonardo, Anthony; Meister, Markus

    2013-10-23

    A basic task faced by the visual system of many organisms is to accurately track the position of moving prey. The retina is the first stage in the processing of such stimuli; the nature of the transformation here, from photons to spike trains, constrains not only the ultimate fidelity of the tracking signal but also the ease with which it can be extracted by other brain regions. Here we demonstrate that a population of fast-OFF ganglion cells in the salamander retina, whose dynamics are governed by a nonlinear circuit, serve to compute the future position of the target over hundreds of milliseconds. The extrapolated position of the target is not found by stimulus reconstruction but is instead computed by a weighted sum of ganglion cell outputs, the population vector average (PVA). The magnitude of PVA extrapolation varies systematically with target size, speed, and acceleration, such that large targets are tracked most accurately at high speeds, and small targets at low speeds, just as is seen in the motion of real prey. Tracking precision reaches the resolution of single photoreceptors, and the PVA algorithm performs more robustly than several alternative algorithms. If the salamander brain uses the fast-OFF cell circuit for target extrapolation as we suggest, the circuit dynamics should leave a microstructure on the behavior that may be measured in future experiments. Our analysis highlights the utility of simple computations that, while not globally optimal, are efficiently implemented and have close to optimal performance over a limited but ethologically relevant range of stimuli.
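The population vector average itself is a simple firing-rate-weighted mean of the cells' preferred positions; a sketch with hypothetical one-dimensional Gaussian tuning (illustrative only, not the authors' fitted model):

```python
import numpy as np

def population_vector_average(preferred, rates):
    """Decode position as the firing-rate-weighted mean of each cell's
    preferred position -- the PVA readout."""
    preferred = np.asarray(preferred, dtype=float)
    rates = np.asarray(rates, dtype=float)
    return float(np.sum(preferred * rates) / np.sum(rates))
```

Because the readout is a fixed linear combination of ganglion-cell outputs, downstream circuits can extract it without reconstructing the stimulus.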

  19. Timescale Halo: Average-Speed Targets Elicit More Positive and Less Negative Attributions than Slow or Fast Targets

    PubMed Central

    Hernandez, Ivan; Preston, Jesse Lee; Hepler, Justin

    2014-01-01

    Research on the timescale bias has found that observers perceive more capacity for mind in targets moving at an average speed, relative to slow or fast moving targets. The present research revisited the timescale bias as a type of halo effect, where normal-speed people elicit positive evaluations and abnormal-speed (slow and fast) people elicit negative evaluations. In two studies, participants viewed videos of people walking at a slow, average, or fast speed. We find evidence for a timescale halo effect: people walking at an average speed were attributed more positive mental traits, but fewer negative mental traits, relative to slow or fast moving people. These effects held across both cognitive and emotional dimensions of mind and were mediated by overall positive/negative ratings of the person. These results suggest that, rather than eliciting greater perceptions of general mind, the timescale bias may reflect a generalized positivity toward average speed people relative to slow or fast moving people. PMID:24421882

  20. Decomposition method for fast computation of gigapixel-sized Fresnel holograms on a graphics processing unit cluster.

    PubMed

    Jackin, Boaz Jessie; Watanabe, Shinpei; Ootsu, Kanemitsu; Ohkawa, Takeshi; Yokota, Takashi; Hayasaki, Yoshio; Yatagai, Toyohiko; Baba, Takanobu

    2018-04-20

    A parallel computation method for large-size Fresnel computer-generated holograms (CGHs) is reported. The method was introduced by us in an earlier report as a technique for calculating Fourier CGHs from 2D object data. In this paper we extend the method to compute Fresnel CGHs from 3D object data. The scale of the computation problem is also expanded to 2 gigapixels, making it closer to real application requirements. The significant feature of the reported method is its ability to avoid communication overhead and thereby fully utilize the computing power of parallel devices. The method exhibits three layers of parallelism that favor small to large scale parallel computing machines. Simulation and optical experiments were conducted to demonstrate the workability and to evaluate the efficiency of the proposed technique. A twofold improvement in computation speed has been achieved compared to the conventional method, on a 16-node cluster (one GPU per node) utilizing only one layer of parallelism. A 20-fold improvement in computation speed has been estimated for two layers of parallelism on a very large-scale parallel machine with 16 nodes, where each node has 16 GPUs.

  1. Treecode with a Special-Purpose Processor

    NASA Astrophysics Data System (ADS)

    Makino, Junichiro

    1991-08-01

    We describe an implementation of the modified Barnes-Hut tree algorithm for a gravitational N-body calculation on a GRAPE (GRAvity PipE) backend processor. GRAPE is a special-purpose computer for N-body calculations. It receives the positions and masses of particles from a host computer and then calculates the gravitational force at each coordinate specified by the host. To use this GRAPE processor with the hierarchical tree algorithm, the host computer must maintain a list of all nodes that exert force on a particle. If we create this list for each particle of the system at each timestep, the number of floating-point operations on the host and that on GRAPE would become comparable, and the increased speed obtained by using GRAPE would be small. In our modified algorithm, we create a list of nodes for many particles. Thus, the amount of work required of the host is significantly reduced. This algorithm was originally developed by Barnes in order to vectorize the force calculation on a Cyber 205. With this algorithm, the computing time of the force calculation becomes comparable to that of the tree construction, if the GRAPE backend processor is sufficiently fast. The obtained speed-up factor is 30 to 50 for a RISC-based host computer and GRAPE-1A with a peak speed of 240 Mflops.
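
    The payoff of sharing one interaction list among many particles can be demonstrated with a deliberately simplified stand-in for the tree: a single level of grid cells, with adjacent cells kept particle-by-particle and all other cells collapsed to monopoles, so a single list serves every particle in a cell. This is a sketch of the grouping idea only; the actual algorithm walks a Barnes-Hut tree, and every constant below is invented.

```python
import math
import random

random.seed(1)
CELL, EPS2 = 1.0, 1e-4                 # cell size and softening (made up)
pts = [(random.random() * 4, random.random() * 4, 1.0) for _ in range(200)]

# bin particles into grid cells
cells = {}
for x, y, m in pts:
    cells.setdefault((int(x / CELL), int(y / CELL)), []).append((x, y, m))

def monopole(ps):
    """Collapse a cell to a single pseudo-particle at its centre of mass."""
    M = sum(m for _, _, m in ps)
    return (sum(x * m for x, _, m in ps) / M,
            sum(y * m for _, y, m in ps) / M, M)

def accel(x, y, srcs):
    """Softened 2-D gravitational acceleration at (x, y) from a source list."""
    ax = ay = 0.0
    for sx, sy, sm in srcs:
        dx, dy = sx - x, sy - y
        r3 = (dx * dx + dy * dy + EPS2) ** 1.5
        ax += sm * dx / r3
        ay += sm * dy / r3
    return ax, ay

# ONE list shared by every particle in the target cell: adjacent cells enter
# particle-by-particle, all remaining cells as monopoles.
target = (1, 1)
shared = []
for key, ps in cells.items():
    if max(abs(key[0] - target[0]), abs(key[1] - target[1])) <= 1:
        shared.extend(ps)
    else:
        shared.append(monopole(ps))

# compare against the direct O(N^2) sum for every particle in the cell
diffs, mags = [], []
for x, y, m in cells[target]:
    ax, ay = accel(x, y, [p for p in shared if p != (x, y, m)])
    bx, by = accel(x, y, [p for p in pts if p != (x, y, m)])
    diffs.append(math.hypot(ax - bx, ay - by))
    mags.append(math.hypot(bx, by))
rel = max(diffs) / (sum(mags) / len(mags))
print(len(shared), "shared entries instead of", len(pts) - 1, "per particle")
```

    The shared list is shorter than the particle count, and the error relative to the direct sum stays small, which is the trade the tree method makes at scale.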

  2. Fast Geostatistical Inversion using Randomized Matrix Decompositions and Sketchings for Heterogeneous Aquifer Characterization

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Le, E. B.; Vesselinov, V. V.

    2015-12-01

    We present a fast, scalable, and highly-implementable stochastic inverse method for characterization of aquifer heterogeneity. The method utilizes recent advances in randomized matrix algebra and exploits the structure of the Quasi-Linear Geostatistical Approach (QLGA), without requiring a structured grid like Fast-Fourier Transform (FFT) methods. The QLGA framework is a more stable version of Gauss-Newton iterates for a large number of unknown model parameters, while still providing unbiased estimates. The method is matrix-free, requires neither derivatives nor adjoints, and is thus ideal for complex models and black-box implementation. We also incorporate randomized least-square solvers and data-reduction methods, which speed up computation and simulate missing data points. The new inverse methodology is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Inversion results based on a series of synthetic problems with steady-state and transient calibration data are presented.

  3. Fast non-Abelian geometric gates via transitionless quantum driving.

    PubMed

    Zhang, J; Kyaw, Thi Ha; Tong, D M; Sjöqvist, Erik; Kwek, Leong-Chuan

    2015-12-21

    A practical quantum computer must be capable of performing high fidelity quantum gates on a set of quantum bits (qubits). In the presence of noise, the realization of such gates poses daunting challenges. Geometric phases, which possess intrinsic noise-tolerant features, hold the promise for performing robust quantum computation. In particular, quantum holonomies, i.e., non-Abelian geometric phases, naturally lead to universal quantum computation due to their non-commutativity. Although quantum gates based on adiabatic holonomies have already been proposed, the slow evolution eventually compromises qubit coherence and computational power. Here, we propose a general approach to speed up an implementation of adiabatic holonomic gates by using transitionless driving techniques and show how such a universal set of fast geometric quantum gates in a superconducting circuit architecture can be obtained in an all-geometric approach. Compared with standard non-adiabatic holonomic quantum computation, the holonomies obtained in our approach tend asymptotically to those of the adiabatic approach in the long run-time limit and thus might open up a new horizon for realizing a practical quantum computer.
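
    The transitionless-driving construction the abstract builds on is a standard result (Demirplak-Rice, Berry) rather than something defined in the text above: the adiabatic Hamiltonian H_0(t) is supplemented with a counterdiabatic term so that its instantaneous eigenstates |n(t)⟩ are followed exactly at any speed. Schematically, in the textbook form (not taken from this paper):

```latex
% Counterdiabatic (transitionless) driving, textbook form:
H(t) = H_0(t) + i\hbar \sum_n \Big( |\partial_t n\rangle\langle n|
       - \langle n|\partial_t n\rangle\, |n\rangle\langle n| \Big)
```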

  4. Fast non-Abelian geometric gates via transitionless quantum driving

    PubMed Central

    Zhang, J.; Kyaw, Thi Ha; Tong, D. M.; Sjöqvist, Erik; Kwek, Leong-Chuan

    2015-01-01

    A practical quantum computer must be capable of performing high fidelity quantum gates on a set of quantum bits (qubits). In the presence of noise, the realization of such gates poses daunting challenges. Geometric phases, which possess intrinsic noise-tolerant features, hold the promise for performing robust quantum computation. In particular, quantum holonomies, i.e., non-Abelian geometric phases, naturally lead to universal quantum computation due to their non-commutativity. Although quantum gates based on adiabatic holonomies have already been proposed, the slow evolution eventually compromises qubit coherence and computational power. Here, we propose a general approach to speed up an implementation of adiabatic holonomic gates by using transitionless driving techniques and show how such a universal set of fast geometric quantum gates in a superconducting circuit architecture can be obtained in an all-geometric approach. Compared with standard non-adiabatic holonomic quantum computation, the holonomies obtained in our approach tend asymptotically to those of the adiabatic approach in the long run-time limit and thus might open up a new horizon for realizing a practical quantum computer. PMID:26687580

  5. The basic mechanics of bipedal walking lead to asymmetric behavior.

    PubMed

    Gregg, Robert D; Degani, Amir; Dhaher, Yasin; Lynch, Kevin M

    2011-01-01

    This paper computationally investigates whether gait asymmetries can be attributed in part to basic bipedal mechanics independent of motor control. Using a symmetrical rigid-body model known as the compass-gait biped, we show that changes in environmental or physiological parameters can facilitate asymmetry in gait kinetics at fast walking speeds. In the environmental case, the asymmetric family of high-speed gaits is in fact more stable than the symmetric family of low-speed gaits. These simulations suggest that lower extremity mechanics might play a direct role in functional and pathological asymmetries reported in human walking, where velocity may be a common variable in the emergence and growth of asymmetry. © 2011 IEEE

  6. Coincidence electron/ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Li, Wen; Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander; Fan, Lin

    2015-05-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS camera is developed for coincidence electron/ion imaging. The system is composed of three major components: a conventional microchannel plate (MCP)/phosphor screen electron/ion imager, a fast frame CMOS camera and a high-speed digitizer. The system collects the positional information of ions/electrons from a fast frame camera through real-time centroiding while the arrival times are obtained from the timing signal of MCPs processed by a high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of electron/ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum. Efficient computer algorithms are developed to process camera frames and digitizer traces in real-time at 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched pair of co-fragments (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide. We further show that a time resolution of 30 ps can be achieved when measuring the electron TOF spectrum, which enables the new system to achieve good energy resolution along the TOF axis.
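
    The real-time centroiding step can be sketched as a threshold-and-flood-fill pass over a frame, reducing each connected bright region to an intensity-weighted sub-pixel position. The threshold and frame contents below are invented, not the authors' camera parameters.

```python
# Reduce a camera frame to sub-pixel spot centroids: threshold, flood-fill
# each connected bright region, then take its intensity-weighted mean.

def centroids(frame, threshold=10):
    """Return [(x, y, total_intensity), ...] for each connected bright spot."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    spots = []
    for y0 in range(h):
        for x0 in range(w):
            if frame[y0][x0] > threshold and not seen[y0][x0]:
                stack, pix = [(x0, y0)], []
                seen[y0][x0] = True
                while stack:                      # flood-fill one spot
                    x, y = stack.pop()
                    pix.append((x, y, frame[y][x]))
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nx, ny = x + dx, y + dy
                        if (0 <= nx < w and 0 <= ny < h and not seen[ny][nx]
                                and frame[ny][nx] > threshold):
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                s = sum(i for _, _, i in pix)
                spots.append((sum(x * i for x, _, i in pix) / s,
                              sum(y * i for _, y, i in pix) / s, s))
    return spots

frame = [[0] * 8 for _ in range(8)]
frame[3][4] = frame[3][5] = frame[4][4] = frame[4][5] = 50   # one 2x2 spot
print(centroids(frame))  # → [(4.5, 3.5, 200)]
```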

  7. Holographic memory for high-density data storage and high-speed pattern recognition

    NASA Astrophysics Data System (ADS)

    Gu, Claire

    2002-09-01

    As computers and the internet become faster and faster, more and more information is transmitted, received, and stored every day. The demand for high-density, fast-access data storage is pushing scientists and engineers to explore all possible approaches including magnetic, mechanical, optical, etc. Optical data storage has already demonstrated its potential in the competition against other storage technologies. CD and DVD are showing their advantages in the computer and entertainment market. What motivated the use of optical waves to store and access information is the same as the motivation for optical communication. Light or an optical wave has an enormous capacity (or bandwidth) to carry information because of its short wavelength and parallel nature. In optical storage, there are two types of mechanism, namely localized and holographic memories. What gives holographic data storage an advantage over localized bit storage is the natural ability to read the stored information in parallel, therefore meeting the demand for fast access. Another unique feature that makes holographic data storage attractive is that it is capable of performing associative recall at an incomparable speed. Therefore, volume holographic memory is particularly suitable for high-density data storage and high-speed pattern recognition. In this paper, we review previous works on volume holographic memories and discuss the challenges for this technology to become a reality.

  8. Predictive Simulations of Neuromuscular Coordination and Joint-Contact Loading in Human Gait.

    PubMed

    Lin, Yi-Chung; Walter, Jonathan P; Pandy, Marcus G

    2018-04-18

    We implemented direct collocation on a full-body neuromusculoskeletal model to calculate muscle forces, ground reaction forces and knee contact loading simultaneously for one cycle of human gait. A data-tracking collocation problem was solved for walking at the normal speed to establish the practicality of incorporating a 3D model of articular contact and a model of foot-ground interaction explicitly in a dynamic optimization simulation. The data-tracking solution then was used as an initial guess to solve predictive collocation problems, where novel patterns of movement were generated for walking at slow and fast speeds, independent of experimental data. The data-tracking solutions accurately reproduced joint motion, ground forces and knee contact loads measured for two total knee arthroplasty patients walking at their preferred speeds. RMS errors in joint kinematics were < 2.0° for rotations and < 0.3 cm for translations while errors in the model-computed ground-reaction and knee-contact forces were < 0.07 BW and < 0.4 BW, respectively. The predictive solutions were also consistent with joint kinematics, ground forces, knee contact loads and muscle activation patterns measured for slow and fast walking. The results demonstrate the feasibility of performing computationally-efficient, predictive, dynamic optimization simulations of movement using full-body, muscle-actuated models with realistic representations of joint function.

  9. Exhaustive Versus Randomized Searchers for Nonlinear Optimization in 21st Century Computing: Solar Application

    NASA Technical Reports Server (NTRS)

    Sen, Syamal K.; AliShaykhian, Gholam

    2010-01-01

    We present a simple multi-dimensional exhaustive search method to obtain, in a reasonable time, the optimal solution of a nonlinear programming problem. It is especially relevant in the present-day non-mainframe computing scenario, where an estimated 95% of computing resources remain unutilized and computing speed touches petaflops. Processor speed is doubling every 18 months, bandwidth every 12 months, and hard disk space every 9 months. A randomized search algorithm or, equivalently, an evolutionary search method is often used instead of an exhaustive search algorithm, because a randomized approach is usually polynomial-time, i.e., fast, while an exhaustive search method is exponential-time, i.e., slow. We discuss the increasing importance of exhaustive search in optimization with the steady increase of computing power for solving many real-world problems of reasonable size. We also discuss the computational error and complexity of the search algorithm, focusing on the fact that no measuring device can usually measure a quantity with an accuracy greater than 0.005%. We stress that the quality of solution of the exhaustive search - a deterministic method - is better than that of randomized search. In the 21st-century computing environment, exhaustive search cannot be set aside as untouchable, and it is not always exponential. We also describe a possible application of these algorithms in improving the efficiency of solar cells - a real hot topic - in the current energy crisis. These algorithms could be excellent tools in the hands of experimentalists: they could save not only a large amount of experimental time but also quickly validate the theory against experimental results.
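
    A minimal multi-dimensional exhaustive search of the kind discussed is simply an evaluation of the objective over a full Cartesian grid. The objective function and grid below are illustrative only.

```python
import itertools

# Exhaustive (grid) search: the deterministic alternative to randomized
# search discussed above. Exponential in dimension, but trivially exact
# to the grid resolution.

def exhaustive_min(f, axes):
    """Evaluate f at every point of the Cartesian grid; return (argmin, min)."""
    best_x, best_v = None, float("inf")
    for x in itertools.product(*axes):
        v = f(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x, best_v

objective = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
axis = [i * 0.5 - 4.0 for i in range(17)]          # -4.0, -3.5, ..., 4.0
print(exhaustive_min(objective, [axis, axis]))     # → ((1.0, -2.0), 0.0)
```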

  10. Deflagration Wave Profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    2012-04-03

    Shock initiation in a plastic-bonded explosive (PBX) is due to hot spots. Current reactive burn models are based, at least heuristically, on the ignition and growth concept. The ignition phase occurs when a small localized region of high temperature (or hot spot) burns on a fast time scale. This is followed by a growth phase in which a reactive front spreads out from the hot spot. Propagating reactive fronts are deflagration waves. A key question is the deflagration speed in a PBX compressed and heated by a shock wave that generated the hot spot. Here, the ODEs for a steady deflagration wave profile in a compressible fluid are derived, along with the needed thermodynamic quantities of realistic equations of state corresponding to the reactants and products of a PBX. The properties of the wave profile equations are analyzed and an algorithm is derived for computing the deflagration speed. As an illustrative example, the algorithm is applied to compute the deflagration speed in shock-compressed PBX 9501 as a function of shock pressure. The calculated deflagration speed, even at the CJ pressure, is low compared to the detonation speed. The implications of this are briefly discussed.

  11. Design of a Fatigue Detection System for High-Speed Trains Based on Driver Vigilance Using a Wireless Wearable EEG.

    PubMed

    Zhang, Xiaoliang; Li, Jiali; Liu, Yugang; Zhang, Zutao; Wang, Zhuojun; Luo, Dianyuan; Zhou, Xiang; Zhu, Miankuan; Salman, Waleed; Hu, Guangdi; Wang, Chunbai

    2017-03-01

    The vigilance of the driver is important for railway safety, despite not being included in the safety management system (SMS) for high-speed train safety. In this paper, a novel fatigue detection system for high-speed train safety based on monitoring train driver vigilance using a wireless wearable electroencephalograph (EEG) is presented. This system is designed to detect whether the driver is drowsy. The proposed system consists of three main parts: (1) wireless wearable EEG collection; (2) train driver vigilance detection; and (3) an early warning device for the train driver. In the first part, an 8-channel wireless wearable brain-computer interface (BCI) device acquires the locomotive driver's brain EEG signal comfortably under high-speed train-driving conditions. The recorded data are transmitted to a personal computer (PC) via Bluetooth. In the second part, a support vector machine (SVM) classification algorithm is implemented to determine the vigilance level, using the Fast Fourier transform (FFT) to extract the EEG power spectral density (PSD). In the third part, an early warning device begins to work if fatigue is detected. The simulation and test results demonstrate the feasibility of the proposed fatigue detection system for high-speed train safety.
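
    The feature-extraction step (FFT-based PSD band powers of the sort fed to an SVM) can be sketched as follows. The sampling rate, band edges, and the synthetic single-tone "EEG" are invented, and a naive O(N^2) DFT stands in for the FFT for clarity.

```python
import math

# Band-power features from a power spectrum: the kind of input an SVM
# vigilance classifier would consume. All parameters here are illustrative.

FS, N = 128, 256                    # sample rate (Hz), window length
sig = [math.sin(2 * math.pi * 10 * n / FS) for n in range(N)]   # 10 Hz "alpha" tone

def band_power(x, lo, hi):
    """Total DFT power in [lo, hi) Hz, via a naive DFT for clarity."""
    n = len(x)
    p = 0.0
    for k in range(n // 2):
        f = k * FS / n
        if lo <= f < hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(-x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            p += (re * re + im * im) / n
    return p

alpha = band_power(sig, 8, 13)      # drowsiness-related band
beta = band_power(sig, 13, 30)
print(alpha > 10 * beta)            # → True: the 10 Hz tone dominates
```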

  12. Speed and accuracy improvements in FLAASH atmospheric correction of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael W.; Berk, Alexander; Bernstein, Lawrence S.; Lee, Jamine; Fox, Marsha

    2012-11-01

    Remotely sensed spectral imagery of the earth's surface can be used to fullest advantage when the influence of the atmosphere has been removed and the measurements are reduced to units of reflectance. Here, we provide a comprehensive summary of the latest version of the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes atmospheric correction algorithm. We also report some new code improvements for speed and accuracy. These include the re-working of the original algorithm in C-language code parallelized with the Message Passing Interface (MPI) and containing a new radiative transfer look-up table option, which replaces executions of the MODTRAN model. With computation times now as low as ~10 s per image per computer processor, automated, real-time, on-board atmospheric correction of hyper- and multi-spectral imagery is within reach.

  13. A Fast and Efficient Version of the TwO-Moment Aerosol Sectional (TOMAS) Global Aerosol Microphysics Model

    NASA Technical Reports Server (NTRS)

    Lee, Yunha; Adams, P. J.

    2012-01-01

    This study develops more computationally efficient versions of the TwO-Moment Aerosol Sectional (TOMAS) microphysics algorithms, collectively called Fast TOMAS. Several methods for speeding up the algorithm were attempted, but only reducing the number of size sections was adopted. Fast TOMAS models, coupled to the GISS GCM II-prime, require a new coagulation algorithm with less restrictive size resolution assumptions but only minor changes in other processes. Fast TOMAS models have been evaluated in a box model against analytical solutions of coagulation and condensation and in a 3-D model against the original TOMAS (TOMAS-30) model. Condensation and coagulation in the Fast TOMAS models agree well with the analytical solution but show slightly more bias than the TOMAS-30 box model. In the 3-D model, errors resulting from decreased size resolution in each process (i.e., emissions, cloud processing/wet deposition, microphysics) are quantified in a series of model sensitivity simulations. Errors resulting from lower size resolution in condensation and coagulation, defined as the microphysics error, affect number and mass concentrations by only a few percent. The microphysics error in CN70/CN100 (number concentrations of particles larger than 70/100 nm in diameter), proxies for cloud condensation nuclei, ranges from -5 to 5% in most regions. The largest errors are associated with decreasing the size resolution in the cloud processing/wet deposition calculations, defined as the cloud-processing error, and range from -20 to 15% in most regions for CN70/CN100 concentrations. Overall, the Fast TOMAS models increase the computational speed by a factor of 2 to 3 with only small numerical errors stemming from condensation and coagulation calculations when compared to TOMAS-30. The faster versions of the TOMAS model allow for the longer, multi-year simulations required to assess aerosol effects on cloud lifetime and precipitation.

  14. High speed imaging of dynamic processes with a switched source x-ray CT system

    NASA Astrophysics Data System (ADS)

    Thompson, William M.; Lionheart, William R. B.; Morton, Edward J.; Cunningham, Mike; Luggar, Russell D.

    2015-05-01

    Conventional x-ray computed tomography (CT) scanners are limited in their scanning speed by the mechanical constraints of their rotating gantries and as such do not provide the necessary temporal resolution for imaging of fast-moving dynamic processes, such as fluid flows. The Real Time Tomography (RTT) system is a family of fast cone beam CT scanners which instead use multiple fixed discrete sources and complete rings of detectors in an offset geometry. We demonstrate the potential of this system for use in the imaging of such high speed dynamic processes and give results using simulated and real experimental data. The unusual scanning geometry results in some challenges in image reconstruction, which are overcome using algebraic iterative reconstruction techniques and explicit regularisation. Through the use of a simple temporal regularisation term and by optimising the source firing pattern, we show that temporal resolution of the system may be increased at the expense of spatial resolution, which may be advantageous in some situations. Results are given showing temporal resolution of approximately 500 µs with simulated data and 3 ms with real experimental data.
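
    The algebraic iterative reconstruction family the authors draw on can be illustrated with the classic Kaczmarz/ART update, which projects the current estimate onto one ray equation at a time. The 2x2 "image" and ray matrix below are a toy, and the temporal regularisation term is not modelled.

```python
# Kaczmarz/ART sketch: cyclically project the estimate x onto the hyperplane
# of each ray-sum equation a_i . x = b_i. Toy 2x2 image, three rays.

def kaczmarz(A, b, sweeps=500):
    x = [0.0] * len(A[0])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            dot = sum(a * xi for a, xi in zip(a_i, x))
            norm2 = sum(a * a for a in a_i)
            lam = (b_i - dot) / norm2
            x = [xi + lam * a for xi, a in zip(x, a_i)]
    return x

# pixels [p00, p01, p10, p11]; rays sum row 0, row 1, and column 0
A = [[1, 1, 0, 0],
     [0, 0, 1, 1],
     [1, 0, 1, 0]]
true_img = [1.0, 2.0, 3.0, 4.0]
b = [sum(a * t for a, t in zip(row, true_img)) for row in A]
recon = kaczmarz(A, b)
print([round(v, 3) for v in recon])  # → [1.0, 2.0, 3.0, 4.0]
```

    Starting from zero, Kaczmarz converges to the minimum-norm consistent solution, which for this toy system coincides with the true image.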

  15. An implementation of a tree code on a SIMD, parallel computer

    NASA Technical Reports Server (NTRS)

    Olson, Kevin M.; Dorband, John E.

    1994-01-01

    We describe a fast tree algorithm for gravitational N-body simulation on SIMD parallel computers. The tree construction uses fast, parallel sorts. The sorted lists are recursively divided along their x, y and z coordinates. This data structure is a completely balanced tree (i.e., each particle is paired with exactly one other particle) and maintains good spatial locality. An implementation of this tree-building algorithm on a 16k processor Maspar MP-1 performs well and constitutes only a small fraction (approximately 15%) of the entire cycle of finding the accelerations. Each node in the tree is treated as a monopole. The tree search and the summation of accelerations also perform well. During the tree search, node data that is needed from another processor is simply fetched. Roughly 55% of the tree search time is spent in communications between processors. We apply the code to two problems of astrophysical interest. The first is a simulation of the close passage of two gravitationally interacting disk galaxies using 65,536 particles. We also simulate the formation of structure in an expanding model universe using 1,048,576 particles. Our code attains speeds comparable to one head of a Cray Y-MP, so single instruction, multiple data (SIMD) type computers can be used for these simulations. The cost/performance ratio for SIMD machines like the Maspar MP-1 makes them an extremely attractive alternative to either vector processors or large multiple instruction, multiple data (MIMD) type parallel computers. With further optimizations (e.g., more careful load balancing), speeds in excess of today's vector processing computers should be possible.

  16. Transient Vibration Prediction for Rotors on Ball Bearings Using Load-dependent Non-linear Bearing Stiffness

    NASA Technical Reports Server (NTRS)

    Fleming, David P.; Poplawski, J. V.

    2002-01-01

    Rolling-element bearing forces vary nonlinearly with bearing deflection. Thus an accurate rotordynamic transient analysis requires bearing forces to be determined at each step of the transient solution. Analyses have been carried out to show the effect of accurate bearing transient forces (accounting for non-linear speed and load dependent bearing stiffness) as compared to conventional use of average rolling-element bearing stiffness. Bearing forces were calculated by COBRA-AHS (Computer Optimized Ball and Roller Bearing Analysis - Advanced High Speed) and supplied to the rotordynamics code ARDS (Analysis of Rotor Dynamic Systems) for accurate simulation of rotor transient behavior. COBRA-AHS is a fast-running 5 degree-of-freedom computer code able to calculate high speed rolling-element bearing load-displacement data for radial and angular contact ball bearings and also for cylindrical and tapered roller bearings. Results show that use of nonlinear bearing characteristics is essential for accurate prediction of rotordynamic behavior.
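
    Why load-dependent stiffness matters can be seen from the Hertzian point-contact law F = k·δ^1.5, whose tangent stiffness dF/dδ = 1.5·k·δ^0.5 grows with deflection, so a single averaged stiffness misrepresents both light and heavy loads. The contact constant below is invented, not COBRA-AHS output.

```python
# Hertzian contact: force grows as delta**1.5, so the tangent stiffness
# depends on the operating deflection. Constant K is made up for illustration.

K = 1.0e9                           # contact constant, N/m^1.5 (illustrative)

def force(delta):
    """Contact force; a rolling contact cannot pull, so zero in tension."""
    return K * delta ** 1.5 if delta > 0 else 0.0

def tangent_stiffness(delta, h=1e-9):
    """Numerical dF/d(delta) by central difference."""
    return (force(delta + h) - force(delta - h)) / (2 * h)

for d in (1e-5, 4e-5):
    print(d, tangent_stiffness(d))
# the stiffness doubles when the deflection quadruples (sqrt dependence),
# which is why an average stiffness misses transient behavior
```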

  17. Real-Time linux dynamic clamp: a fast and flexible way to construct virtual ion channels in living cells.

    PubMed

    Dorval, A D; Christini, D J; White, J A

    2001-10-01

    We describe a system for real-time control of biological and other experiments. This device, based around the Real-Time Linux operating system, was tested specifically in the context of dynamic clamping, a demanding real-time task in which a computational system mimics the effects of nonlinear membrane conductances in living cells. The system is fast enough to represent dozens of nonlinear conductances in real time at clock rates well above 10 kHz. Conductances can be represented in deterministic form, or more accurately as discrete collections of stochastically gating ion channels. Tests were performed using a variety of complex models of nonlinear membrane mechanisms in excitable cells, including simulations of spatially extended excitable structures, and multiple interacting cells. Only in extreme cases does the computational load interfere with high-speed "hard" real-time processing (i.e., real-time processing that never falters). Freely available on the worldwide web, this experimental control system combines good performance, immense flexibility, low cost, and reasonable ease of use. It is easily adapted to any task involving real-time control, and excels in particular for applications requiring complex control algorithms that must operate at speeds over 1 kHz.
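
    The dynamic-clamp inner loop itself is compact: read the membrane potential, evaluate the current a virtual conductance would pass, and command it back into the cell at a fixed rate. In the sketch below the "cell" is a leaky RC membrane simulated in the same loop, and all constants are illustrative, not the authors' settings.

```python
# Dynamic-clamp loop sketch: inject I = g_virtual * (E_virtual - V) each
# cycle. Here the cell is replaced by a simulated leaky membrane so the
# closed loop can run standalone. All constants are made up.

DT = 1e-4            # s, 10 kHz update rate
C = 100e-12          # F, membrane capacitance
G_L = 5e-9           # S, leak conductance
E_L = -70e-3         # V, leak reversal potential
G_V = 20e-9          # S, virtual conductance added by the clamp
E_V = -40e-3         # V, its reversal potential

v = E_L
for _ in range(20000):                   # 2 s of simulated time
    i_clamp = G_V * (E_V - v)            # current the clamp injects
    dv = (G_L * (E_L - v) + i_clamp) / C
    v += DT * dv                         # forward-Euler membrane update
print(round(v * 1e3, 1), "mV")           # prints -46.0 mV

# steady state: v* = (G_L*E_L + G_V*E_V) / (G_L + G_V) = -46 mV, i.e. the
# virtual conductance pulls the membrane toward its reversal potential
```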

  18. Ultrafast electron diffraction pattern simulations using GPU technology. Applications to lattice vibrations.

    PubMed

    Eggeman, A S; London, A; Midgley, P A

    2013-11-01

    Graphical processing units (GPUs) offer a cost-effective and powerful means to enhance the processing power of computers. Here we show how GPUs can greatly increase the speed of electron diffraction pattern simulations by the implementation of a novel method to generate the phase grating used in multislice calculations. The increase in speed is especially apparent when using large supercell arrays and we illustrate the benefits of fast encoding the transmission function representing the atomic potentials through the simulation of thermal diffuse scattering in silicon brought about by specific vibrational modes. © 2013 Elsevier B.V. All rights reserved.

  19. Fast and Scalable Computation of the Forward and Inverse Discrete Periodic Radon Transform.

    PubMed

    Carranza, Cesar; Llamocca, Daniel; Pattichis, Marios

    2016-01-01

    The discrete periodic radon transform (DPRT) has extensively been used in applications that involve image reconstructions from projections. Beyond classic applications, the DPRT can also be used to compute fast convolutions that avoid the use of floating-point arithmetic associated with the use of the fast Fourier transform. Unfortunately, the use of the DPRT has been limited by the need to compute a large number of additions and the need for a large number of memory accesses. This paper introduces a fast and scalable approach for computing the forward and inverse DPRT that is based on the use of: a parallel array of fixed-point adder trees; circular shift registers to remove the need for accessing external memory components when selecting the input data for the adder trees; an image block-based approach to DPRT computation that can fit the proposed architecture to available resources; and fast transpositions that are computed in one or a few clock cycles that do not depend on the size of the input image. As a result, for an N × N image (N prime), the proposed approach can compute up to N^2 additions per clock cycle. Compared with previous approaches, the scalable approach provides the fastest known implementations for different amounts of computational resources. For example, for a 251×251 image, with approximately 25% fewer flip-flops than required for a systolic implementation, the scalable DPRT is computed 36 times faster. For the fastest case, we introduce optimized architectures that can compute the DPRT and its inverse in just 2N + ⌈log2 N⌉ + 1 and 2N + 3⌈log2 N⌉ + B + 2 clock cycles, respectively, where B is the number of bits used to represent each input pixel. On the other hand, the scalable DPRT approach requires more 1-b additions than the systolic implementation, providing a tradeoff between speed and additional 1-b additions.
All of the proposed DPRT architectures were implemented in VHSIC Hardware Description Language (VHDL) and validated using a Field-Programmable Gate Array (FPGA) implementation.
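
    The forward DPRT itself is addition-only: for an N × N image with N prime, there are N "angled" projections plus one row projection. A small N = 5 sketch follows; the (d + m·x) mod N indexing used here is the common convention and may differ in detail from the paper's.

```python
# Forward discrete periodic Radon transform for an N x N image, N prime.
# Every projection is a pure sum of pixels, which is why hardware adder
# trees map onto it so well.

def dprt(f):
    n = len(f)
    proj = []
    for m in range(n):        # N projections along "slope" m
        proj.append([sum(f[x][(d + m * x) % n] for x in range(n))
                     for d in range(n)])
    # plus one projection over rows
    proj.append([sum(f[d][y] for y in range(n)) for d in range(n)])
    return proj

img = [[(3 * r + c) % 7 for c in range(5)] for r in range(5)]
p = dprt(img)
total = sum(sum(row) for row in img)
# every one of the N + 1 projections redistributes, but conserves, the
# total image mass
print(all(sum(row) == total for row in p))  # → True
```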

  20. PGAS in-memory data processing for the Processing Unit of the Upgraded Electronics of the Tile Calorimeter of the ATLAS Detector

    NASA Astrophysics Data System (ADS)

    Ohene-Kwofie, Daniel; Otoo, Ekow

    2015-10-01

    The ATLAS detector, operated at the Large Hadron Collider (LHC), records proton-proton collisions at CERN every 50 ns, resulting in a sustained data flow up to PB/s. The upgraded Tile Calorimeter of the ATLAS experiment will sustain about 5 PB/s of digital throughput. These massive data rates require extremely fast data capture and processing. Although there has been a steady increase in the processing speed of CPUs/GPGPUs assembled for high-performance computing, the rate of data input and output, even under parallel I/O, has not kept up with the general increase in computing speeds. The problem then is whether one can implement an I/O subsystem infrastructure capable of meeting the computational speeds of the advanced computing systems at the petascale and exascale level. We propose a system architecture that leverages the Partitioned Global Address Space (PGAS) model of computing to maintain an in-memory data-store for the Processing Unit (PU) of the upgraded electronics of the Tile Calorimeter, which is proposed to be used as a high-throughput general-purpose co-processor to the sROD of the upgraded Tile Calorimeter. The physical memory of the PUs is aggregated into a large global logical address space using RDMA-capable interconnects such as PCI Express to enhance data processing throughput.

  1. Trinary arithmetic and logic unit (TALU) using savart plate and spatial light modulator (SLM) suitable for optical computation in multivalued logic

    NASA Astrophysics Data System (ADS)

    Ghosh, Amal K.; Bhattacharya, Animesh; Raul, Moumita; Basuray, Amitabha

    2012-07-01

    The arithmetic logic unit (ALU) is the most important unit in any computing system. Optical computing is becoming popular day by day because of its ultrahigh processing speed and huge data-handling capability. For fast processing we therefore need an optical TALU compatible with multivalued logic. In this regard we present a trinary arithmetic and logic unit (TALU) in the modified trinary number (MTN) system, which is suitable for optical computation and other applications in multivalued logic systems. Here, Savart-plate and spatial light modulator (SLM) based optoelectronic circuits have been used to exploit the optical tree architecture (OTA) in an optical interconnection network.
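    For intuition on multivalued arithmetic of this kind, the usual software model is a balanced-ternary encoding with digits {−1, 0, 1}; the sketch below is a generic illustration only and does not follow the paper's MTN encoding or its optical implementation:

    ```python
    def to_bt(n):
        """Balanced-ternary digits of an integer, least-significant first."""
        if n == 0:
            return [0]
        digits = []
        while n != 0:
            r = n % 3
            if r == 2:          # digit 2 becomes -1 with a carry into the next place
                r = -1
                n += 1
            digits.append(r)
            n //= 3
        return digits

    def from_bt(digits):
        """Integer value of a balanced-ternary digit list."""
        return sum(d * 3 ** i for i, d in enumerate(digits))
    ```

    Note that negative numbers need no separate sign digit: negation just flips every digit, one reason ternary ALUs are attractive.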

  2. Computer-animated stimuli to measure motion sensitivity: constraints on signal design in the Jacky dragon.

    PubMed

    Woo, Kevin L; Rieucau, Guillaume; Burke, Darren

    2017-02-01

    Identifying perceptual thresholds is critical for understanding the mechanisms that underlie signal evolution. Using computer-animated stimuli, we examined visual speed sensitivity in the Jacky dragon Amphibolurus muricatus, a species that makes extensive use of rapid motor patterns in social communication. First, focal lizards were tested in discrimination trials using random-dot kinematograms displaying combinations of speed, coherence, and direction. Second, we measured subject lizards' ability to predict the appearance of a secondary reinforcer (1 of 3 different computer-generated animations of invertebrates: cricket, spider, and mite) based on the direction of movement of a field of drifting dots by following a set of behavioural responses (e.g., orienting response, latency to respond) to our virtual stimuli. We found an effect of both speed and coherence, as well as an interaction between these 2 factors on the perception of moving stimuli. Overall, our results showed that Jacky dragons have acute sensitivity to high speeds. We then employed an optic flow analysis to match the performance to ecologically relevant motion. Our results suggest that the Jacky dragon visual system may have been shaped to detect fast motion. This pre-existing sensitivity may have constrained the evolution of conspecific displays. In contrast, Jacky dragons may have difficulty in detecting the movement of ambush predators, such as snakes, and of some invertebrate prey. Our study also demonstrates the potential of the computer-animated stimuli technique for conducting nonintrusive tests to explore motion range and sensitivity in a visually mediated species.
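    The discrimination stimuli described above are simple to reproduce in code; here is a hedged sketch of one frame update of a random-dot kinematogram (parameter names and the wrap-around border are illustrative choices, not the study's exact stimulus code):

    ```python
    import numpy as np

    def step_rdk(pos, speed, coherence, direction, rng, size=100.0):
        """Advance a random-dot kinematogram by one frame.

        A `coherence` fraction of the dots moves along `direction` (radians)
        at `speed` px/frame; the remainder move in random directions.
        Dots wrap around at the display border.
        """
        n = len(pos)
        n_signal = int(round(coherence * n))
        theta = np.full(n, float(direction))
        theta[n_signal:] = rng.uniform(0.0, 2.0 * np.pi, n - n_signal)
        step = speed * np.column_stack([np.cos(theta), np.sin(theta)])
        return (pos + step) % size
    ```

    Varying `speed` and `coherence` per trial reproduces the two factors whose interaction the study measured.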

  3. Parallel peak pruning for scalable SMP contour tree computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carr, Hamish A.; Weber, Gunther H.; Sewell, Christopher M.

    As data sets grow to exascale, automated data analysis and visualisation are increasingly important, to intermediate human understanding and to reduce demands on disk storage via in situ analysis. Trends in architecture of high performance computing systems necessitate analysis algorithms to make effective use of combinations of massively multicore and distributed systems. One of the principal analytic tools is the contour tree, which analyses relationships between contours to identify features of more than local importance. Unfortunately, the predominant algorithms for computing the contour tree are explicitly serial, and founded on serial metaphors, which has limited the scalability of this form of analysis. While there is some work on distributed contour tree computation, and separately on hybrid GPU-CPU computation, there is no efficient algorithm with strong formal guarantees on performance allied with fast practical performance. In this paper, we report the first shared-memory SMP algorithm for fully parallel contour tree computation, with formal guarantees of O(lg n lg t) parallel steps and O(n lg n) work, and implementations with up to 10x parallel speed up in OpenMP and up to 50x speed up in NVIDIA Thrust.
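    As background for readers unfamiliar with the data structure, the join half of a contour tree can be built serially with a union-find sweep over vertices in decreasing value order; the toy sketch below shows that serial baseline (the paper's contribution is parallelising this computation, which the sketch does not attempt):

    ```python
    def join_tree_leaves(values, edges):
        """Count leaves of the join tree (superlevel-set component births).

        Sweep vertices from highest to lowest value; a vertex with no
        higher-valued neighbour seen so far starts a new component (a leaf
        of the join tree), otherwise it merges the neighbouring components.
        """
        n = len(values)
        adj = {v: [] for v in range(n)}
        for a, b in edges:
            adj[a].append(b)
            adj[b].append(a)
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        seen = set()
        leaves = 0
        for v in sorted(range(n), key=lambda u: -values[u]):
            roots = {find(u) for u in adj[v] if u in seen}
            if not roots:
                leaves += 1          # component born at a local maximum
            for r in roots:
                parent[r] = v        # components merge at v
            seen.add(v)
        return leaves
    ```

    Recording the merge events (rather than just counting births) yields the join-tree arcs; combining join and split trees gives the full contour tree.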

  4. Superadiabatic Controlled Evolutions and Universal Quantum Computation.

    PubMed

    Santos, Alan C; Sarandy, Marcelo S

    2015-10-29

    Adiabatic state engineering is a powerful technique in quantum information and quantum control. However, its performance is limited by the adiabatic theorem of quantum mechanics. In this scenario, shortcuts to adiabaticity, such as provided by the superadiabatic theory, constitute a valuable tool to speed up the adiabatic quantum behavior. Here, we propose a superadiabatic route to implement universal quantum computation. Our method is based on the realization of piecewise controlled superadiabatic evolutions. Remarkably, they can be obtained by simple time-independent counter-diabatic Hamiltonians. In particular, we discuss the implementation of fast rotation gates and arbitrary n-qubit controlled gates, which can be used to design different sets of universal quantum gates. Concerning the energy cost of the superadiabatic implementation, we show that it is dictated by the quantum speed limit, providing an upper bound for the corresponding adiabatic counterparts.

  5. Superadiabatic Controlled Evolutions and Universal Quantum Computation

    PubMed Central

    Santos, Alan C.; Sarandy, Marcelo S.

    2015-01-01

    Adiabatic state engineering is a powerful technique in quantum information and quantum control. However, its performance is limited by the adiabatic theorem of quantum mechanics. In this scenario, shortcuts to adiabaticity, such as provided by the superadiabatic theory, constitute a valuable tool to speed up the adiabatic quantum behavior. Here, we propose a superadiabatic route to implement universal quantum computation. Our method is based on the realization of piecewise controlled superadiabatic evolutions. Remarkably, they can be obtained by simple time-independent counter-diabatic Hamiltonians. In particular, we discuss the implementation of fast rotation gates and arbitrary n-qubit controlled gates, which can be used to design different sets of universal quantum gates. Concerning the energy cost of the superadiabatic implementation, we show that it is dictated by the quantum speed limit, providing an upper bound for the corresponding adiabatic counterparts. PMID:26511064

  6. Aerodynamics and vortical structures in hovering fruitflies

    NASA Astrophysics Data System (ADS)

    Meng, Xue Guang; Sun, Mao

    2015-03-01

    We measure the wing kinematics and morphological parameters of seven freely hovering fruitflies and numerically compute the flows of the flapping wings. The computed mean lift approximately equals the measured weight and the mean horizontal force is approximately zero, validating the computational model. Because of the very small relative velocity of the wing, the mean lift coefficient required to support the weight is rather large, around 1.8, and the Reynolds number of the wing is low, around 100. How such a large lift is produced at such a low Reynolds number is explained by combining the wing motion data, the computed vortical structures, and the theory of vorticity dynamics. It has been shown that two unsteady mechanisms are responsible for the high lift. One is referred to as "fast pitching-up rotation": at the start of an up- or downstroke when the wing has very small speed, it fast pitches down to a small angle of attack, and then, when its speed is higher, it fast pitches up to the angle it normally uses. When the wing pitches up while moving forward, large vorticity is produced and shed at the trailing edge, and vorticity of opposite sign is produced near the leading edge and on the upper surface, resulting in a large time rate of change of the first moment of vorticity (or fluid impulse), hence a large aerodynamic force. The other is the well known "delayed stall" mechanism: in the mid-portion of the up- or downstroke the wing moves at large angle of attack (about 45 deg) and the leading-edge-vortex (LEV) moves with the wing; thus, the vortex ring, formed by the LEV, the tip vortices, and the starting vortex, expands in size continuously, producing a large time rate of change of fluid impulse or a large aerodynamic force.

  7. GPU applications for data processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vladymyrov, Mykhailo, E-mail: mykhailo.vladymyrov@cern.ch; Aleksandrov, Andrey; INFN sezione di Napoli, I-80125 Napoli

    2015-12-31

    Modern experiments that use nuclear photoemulsion require fast and efficient data acquisition from the emulsion. The new approaches to developing scanning systems require real-time processing of large amounts of data. Methods that use the computing power of the Graphics Processing Unit (GPU) for emulsion data processing are presented here. It is shown how GPU-accelerated emulsion processing helped us to raise the scanning speed by a factor of nine.

  8. Reconfigurable Integrated Optoelectronics

    DTIC Science & Technology

    2011-01-01

    State-changing could also be done using thermo-optical, mechano-optical, magneto-optical or opto-optical inputs. The speed of reconfiguration can be fast... quantum computers, is a futuristic activity; however, Jeremy O'Brien believes that the time horizon for OQC success can be brought closer by using ...

  9. Design of a Fatigue Detection System for High-Speed Trains Based on Driver Vigilance Using a Wireless Wearable EEG

    PubMed Central

    Zhang, Xiaoliang; Li, Jiali; Liu, Yugang; Zhang, Zutao; Wang, Zhuojun; Luo, Dianyuan; Zhou, Xiang; Zhu, Miankuan; Salman, Waleed; Hu, Guangdi; Wang, Chunbai

    2017-01-01

    The vigilance of the driver is important for railway safety, despite not being included in the safety management system (SMS) for high-speed train safety. In this paper, a novel fatigue detection system for high-speed train safety based on monitoring train driver vigilance using a wireless wearable electroencephalograph (EEG) is presented. This system is designed to detect whether the driver is drowsy. The proposed system consists of three main parts: (1) wireless wearable EEG collection; (2) train driver vigilance detection; and (3) an early warning device for the train driver. In the first part, an 8-channel wireless wearable brain-computer interface (BCI) device acquires the locomotive driver's brain EEG signal comfortably under high-speed train-driving conditions. The recorded data are transmitted to a personal computer (PC) via Bluetooth. In the second part, a support vector machine (SVM) classification algorithm is implemented to determine the vigilance level, using the fast Fourier transform (FFT) to extract the EEG power spectral density (PSD). In addition, an early warning device begins to work if fatigue is detected. The simulation and test results demonstrate the feasibility of the proposed fatigue detection system for high-speed train safety. PMID:28257073
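    The PSD feature-extraction step is standard signal processing; below is a minimal one-shot periodogram sketch (window averaging as in Welch's method is omitted, and the function name is illustrative):

    ```python
    import numpy as np

    def band_power(signal, fs, band):
        """Power of `signal` (sampled at `fs` Hz) inside the (lo, hi) Hz band,
        from a one-shot FFT periodogram. Band powers (e.g. alpha vs. beta)
        are the kind of feature fed to a classifier such as an SVM.
        """
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
        lo, hi = band
        mask = (freqs >= lo) & (freqs < hi)
        return psd[mask].sum() * (freqs[1] - freqs[0])   # rectangle-rule integral
    ```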

  10. Differential effects of absent visual feedback control on gait variability during different locomotion speeds.

    PubMed

    Wuehr, M; Schniepp, R; Pradhan, C; Ilmberger, J; Strupp, M; Brandt, T; Jahn, K

    2013-01-01

    Healthy persons exhibit relatively small temporal and spatial gait variability when walking unimpeded. In contrast, patients with a sensory deficit (e.g., polyneuropathy) show an increased gait variability that depends on speed and is associated with an increased fall risk. The purpose of this study was to investigate the role of vision in gait stabilization by determining the effects of withdrawing visual information (eyes closed) on gait variability at different locomotion speeds. Ten healthy subjects (32.2 ± 7.9 years, 5 women) walked on a treadmill for 5-min periods at their preferred walking speed and at 20, 40, 70, and 80 % of maximal walking speed during the conditions of walking with eyes open (EO) and with eyes closed (EC). The coefficient of variation (CV) and fractal dimension (α) of the fluctuations in stride time, stride length, and base width were computed and analyzed. Withdrawing visual information increased the base width CV for all walking velocities (p < 0.001). The effects of absent visual information on CV and α of stride time and stride length were most pronounced during slow locomotion (p < 0.001) and declined during fast walking speeds. The results indicate that visual feedback control is used to stabilize the medio-lateral (i.e., base width) gait parameters at all speeds. In contrast, sensory feedback control in the fore-aft direction (i.e., stride time and stride length) depends on speed. Sensory feedback contributes most to fore-aft gait stabilization during slow locomotion, whereas passive biomechanical mechanisms and an automated central pattern generation appear to control fast locomotion.
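    The coefficient of variation used above is simply the sample standard deviation normalised by the mean; a one-function sketch (the fractal-dimension measure α requires detrended fluctuation analysis and is not shown):

    ```python
    import numpy as np

    def cv_percent(series):
        """Coefficient of variation (%) of a stride-parameter series
        (e.g. stride times in seconds): 100 * sd / mean."""
        x = np.asarray(series, dtype=float)
        return 100.0 * x.std(ddof=1) / x.mean()
    ```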

  11. Impaired Comprehension of Speed Verbs in Parkinson's Disease.

    PubMed

    Speed, Laura J; van Dam, Wessel O; Hirath, Priyantha; Vigliocco, Gabriella; Desai, Rutvik H

    2017-05-01

    A wealth of studies provide evidence for action simulation during language comprehension. Recent research suggests such action simulations might be sensitive to fine-grained information, such as speed. Here, we present a crucial test for action simulation of speed in language by assessing speed comprehension in patients with Parkinson's disease (PD). Based on the patients' motor deficits, we hypothesized that the speed of motion described in language would modulate their performance in semantic tasks. Specifically, they would have more difficulty processing language about relatively fast speed than language about slow speed. We conducted a semantic similarity judgment task on fast and slow action verbs in patients with PD and age-matched healthy controls. Participants had to decide which of two verbs most closely matched a target word. Compared to controls, PD patients were slower making judgments about fast action verbs, but not for judgments about slow action verbs, suggesting impairment in processing language about fast action. Moreover, this impairment was specific to verbs describing fast action performed with the hand. Problems moving quickly lead to difficulties comprehending language about moving quickly. This study provides evidence that speed is an important part of action representations. (JINS, 2017, 23, 412-420).

  12. Artificial Intelligence in Medical Practice: The Question to the Answer?

    PubMed

    Miller, D Douglas; Brown, Eric W

    2018-02-01

    Computer science advances and ultra-fast computing speeds find artificial intelligence (AI) broadly benefitting modern society-forecasting weather, recognizing faces, detecting fraud, and deciphering genomics. AI's future role in medical practice remains an unanswered question. Machines (computers) learn to detect patterns not decipherable using biostatistics by processing massive datasets (big data) through layered mathematical models (algorithms). Correcting algorithm mistakes (training) adds to AI predictive model confidence. AI is being successfully applied for image analysis in radiology, pathology, and dermatology, with diagnostic speed exceeding, and accuracy paralleling, medical experts. While diagnostic confidence never reaches 100%, combining machines plus physicians reliably enhances system performance. Cognitive programs are impacting medical practice by applying natural language processing to read the rapidly expanding scientific literature and collate years of diverse electronic medical records. In this and other ways, AI may optimize the care trajectory of chronic disease patients, suggest precision therapies for complex illnesses, reduce medical errors, and improve subject enrollment into clinical trials. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Fast Physically Correct Refocusing for Sparse Light Fields Using Block-Based Multi-Rate View Interpolation.

    PubMed

    Huang, Chao-Tsung; Wang, Yu-Wen; Huang, Li-Ren; Chin, Jui; Chen, Liang-Gee

    2017-02-01

    Digital refocusing has a tradeoff between complexity and quality when using sparsely sampled light fields for low-storage applications. In this paper, we propose a fast physically correct refocusing algorithm to address this issue in a twofold way. First, view interpolation is adopted to provide photorealistic quality at infocus-defocus hybrid boundaries. Regarding its conventional high complexity, we devised a fast line-scan method specifically for refocusing, and its 1D kernel can be 30× faster than the benchmark View Synthesis Reference Software (VSRS)-1D-Fast. Second, we propose a block-based multi-rate processing flow for accelerating purely infocused or defocused regions, and a further 3-34× speedup can be achieved for high-resolution images. All candidate blocks of variable sizes can interpolate different numbers of rendered views and perform refocusing in different subsampled layers. To avoid visible aliasing and block artifacts, we determine these parameters and the simulated aperture filter through a localized filter response analysis using defocus blur statistics. The final quadtree block partitions are then optimized in terms of computation time. Extensive experimental results are provided to show superior refocusing quality and fast computation speed. In particular, the run time is comparable with conventional single-image blurring, which causes serious boundary artifacts.

  14. SENSITIVITY OF HELIOSEISMIC TRAVEL TIMES TO THE IMPOSITION OF A LORENTZ FORCE LIMITER IN COMPUTATIONAL HELIOSEISMOLOGY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moradi, Hamed; Cally, Paul S., E-mail: hamed.moradi@monash.edu

    The rapid exponential increase in the Alfvén wave speed with height above the solar surface presents a serious challenge to physical modeling of the effects of magnetic fields on solar oscillations, as it introduces a significant Courant-Friedrichs-Lewy time-step constraint for explicit numerical codes. A common approach adopted in computational helioseismology, where long simulations in excess of 10 hr (hundreds of wave periods) are often required, is to cap the Alfvén wave speed by artificially modifying the momentum equation when the ratio between the Lorentz and hydrodynamic forces becomes too large. However, recent studies have demonstrated that the Alfvén wave speed plays a critical role in the MHD mode conversion process, particularly in determining the reflection height of the upwardly propagating helioseismic fast wave. Using numerical simulations of helioseismic wave propagation in constant inclined (relative to the vertical) magnetic fields we demonstrate that the imposition of such artificial limiters significantly affects time-distance travel times unless the Alfvén wave-speed cap is chosen comfortably in excess of the horizontal phase speeds under investigation.

  15. Multifidelity-CMA: a multifidelity approach for efficient personalisation of 3D cardiac electromechanical models.

    PubMed

    Molléro, Roch; Pennec, Xavier; Delingette, Hervé; Garny, Alan; Ayache, Nicholas; Sermesant, Maxime

    2018-02-01

    Personalised computational models of the heart are of increasing interest for clinical applications due to their discriminative and predictive abilities. However, the simulation of a single heartbeat with a 3D cardiac electromechanical model can be long and computationally expensive, which makes some practical applications, such as the estimation of model parameters from clinical data (the personalisation), very slow. Here we introduce an original multifidelity approach between a 3D cardiac model and a simplified "0D" version of this model, which enables reliable (and extremely fast) approximations of the global behaviour of the 3D model using 0D simulations. We then use this multifidelity approximation to speed up an efficient parameter estimation algorithm, leading to a fast and computationally efficient personalisation method for the 3D model. In particular, we show results on a cohort of 121 different heart geometries and measurements. Finally, an exploitable code of the 0D model with scripts to perform parameter estimation will be released to the community.

  16. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures

    PubMed Central

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R.

    2012-01-01

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient’s skin in real-time by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures. PMID:24027616

  17. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures.

    PubMed

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R

    2012-02-23

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient's skin in real-time by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures.

  18. Reducing The Cost of Transport and Increasing Walking Distance After Stroke: A Randomized Controlled Trial on Fast Locomotor Training Combined With Functional Electrical Stimulation

    PubMed Central

    Awad, Louis N.; Reisman, Darcy S.; Pohlig, Ryan T.; Binder-Macleod, Stuart A.

    2015-01-01

    Background Neurorehabilitation efforts have been limited in their ability to restore walking function after stroke. Recent work has demonstrated proof-of-concept for a Functional Electrical Stimulation (FES)-based combination therapy designed to improve poststroke walking by targeting deficits in paretic propulsion. Objectives To determine the effects on the energy cost of walking (EC) and long-distance walking ability of locomotor training that combines fast walking with FES to the paretic ankle musculature (FastFES). Methods Fifty participants >6 months poststroke were randomized to 12 weeks of gait training at self-selected speeds (SS), fast speeds (Fast), or FastFES. Participants’ 6-Minute Walk Test (6MWT) distance and EC at comfortable (EC-CWS) and fast (EC-Fast) walking speeds were measured pretraining, posttraining, and at a 3-month follow-up. A reduction in EC-CWS, independent of changes in speed, was the primary outcome. Also evaluated were group differences in the number of 6MWT responders and moderation by baseline speed. Results When compared with SS and Fast, FastFES produced larger reductions in EC (p’s ≤0.03). FastFES produced reductions of 24% and 19% in EC-CWS and EC-Fast (p’s <0.001), whereas neither Fast nor SS influenced EC. Between-group 6MWT differences were not observed; however, 73% of FastFES and 68% of Fast participants were responders, in contrast to 35% of SS participants. Conclusions Combining fast locomotor training with FES is an effective approach to reducing the high EC of persons poststroke. Surprisingly, differences in 6MWT gains were not observed between groups. Closer inspection of the 6MWT and EC relationship and elucidation of how reduced EC may influence walking-related disability is warranted. PMID:26621366

  19. Fast precalculated triangular mesh algorithm for 3D binary computer-generated holograms.

    PubMed

    Yang, Fan; Kaczorowski, Andrzej; Wilkinson, Tim D

    2014-12-10

    A new method for constructing computer-generated holograms using a precalculated triangular mesh is presented. The speed of calculation can be increased dramatically by exploiting both the precalculated base triangle and GPU parallel computing. Unlike algorithms using point-based sources, this method can reconstruct a more vivid 3D object instead of a "hollow image." In addition, there is no need to do a fast Fourier transform for each 3D element every time. A ferroelectric liquid crystal spatial light modulator is used to display the binary hologram within our experiment and the hologram of a base right triangle is produced by utilizing just a one-step Fourier transform in the 2D case, which can be expanded to the 3D case by multiplying by a suitable Fresnel phase plane. All 3D holograms generated in this paper are based on Fresnel propagation; thus, the Fresnel plane is treated as a vital element in producing the hologram. A GeForce GTX 770 graphics card with 2 GB memory is used to achieve parallel computing.
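    The Fresnel phase plane mentioned above is a quadratic phase factor; here is a hedged sketch of how one is generated under the paraxial approximation (parameter names are illustrative, not the paper's):

    ```python
    import numpy as np

    def fresnel_phase_plane(n, pitch, z, wavelength):
        """Quadratic phase factor exp(i*pi*(x^2 + y^2) / (lambda * z)).

        Multiplying a 2D hologram by this plane shifts its reconstruction
        to depth z, which is how a 2D base-triangle hologram is extended
        to the 3D case. `pitch` is the SLM pixel pitch in metres.
        """
        coords = (np.arange(n) - n // 2) * pitch
        x, y = np.meshgrid(coords, coords)
        return np.exp(1j * np.pi * (x ** 2 + y ** 2) / (wavelength * z))
    ```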

  20. Fast software-based volume rendering using multimedia instructions on PC platforms and its application to virtual endoscopy

    NASA Astrophysics Data System (ADS)

    Mori, Kensaku; Suenaga, Yasuhito; Toriwaki, Jun-ichiro

    2003-05-01

    This paper describes a software-based fast volume rendering (VolR) method on a PC platform by using multimedia instructions, such as SIMD instructions, which are currently available in PCs' CPUs. This method achieves fast rendering speed through highly optimizing software rather than an improved rendering algorithm. In volume rendering using a ray casting method, the system requires fast execution of the following processes: (a) interpolation of voxel or color values at sample points, (b) computation of normal vectors (gray-level gradient vectors), (c) calculation of shaded values obtained by dot-products of normal vectors and light source direction vectors, (d) memory access to a huge area, and (e) efficient ray skipping at translucent regions. The proposed software implements these fundamental processes in volume rendering by using special instruction sets for multimedia processing. The proposed software can generate virtual endoscopic images of a 3-D volume of 512x512x489 voxel size by volume rendering with perspective projection, specular reflection, and on-the-fly normal vector computation on a conventional PC without any special hardware at thirteen frames per second.
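    The per-sample kernel being optimised in process (a) is standard trilinear interpolation; a scalar Python sketch of what the SIMD code vectorises:

    ```python
    import numpy as np

    def trilinear(vol, p):
        """Trilinearly interpolate the volume `vol` at continuous point p.

        Blends the 8 voxels surrounding p with weights given by the
        fractional offsets; this runs once per ray sample in ray casting.
        """
        x, y, z = p
        x0, y0, z0 = int(x), int(y), int(z)
        dx, dy, dz = x - x0, y - y0, z - z0
        value = 0.0
        for i in (0, 1):
            for j in (0, 1):
                for k in (0, 1):
                    w = ((dx if i else 1 - dx) *
                         (dy if j else 1 - dy) *
                         (dz if k else 1 - dz))
                    value += w * vol[x0 + i, y0 + j, z0 + k]
        return value
    ```

    Eight dependent loads and seven multiply-adds per sample is exactly the pattern that benefits from SIMD lanes and careful memory layout.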

  1. Correlation-coefficient-based fast template matching through partial elimination.

    PubMed

    Mahmood, Arif; Khan, Sohaib

    2012-04-01

    Partial computation elimination techniques are often used for fast template matching. At a particular search location, computations are prematurely terminated as soon as it is found that this location cannot compete with an already known best match location. Due to the nonmonotonic growth pattern of the correlation-based similarity measures, partial computation elimination techniques have been traditionally considered inapplicable to speed up these measures. In this paper, we show that partial elimination techniques may be applied to a correlation coefficient by using a monotonic formulation, and we propose basic-mode and extended-mode partial correlation elimination algorithms for fast template matching. The basic-mode algorithm is more efficient on small template sizes, whereas the extended mode is faster on medium and larger templates. We also propose a strategy to decide which algorithm to use for a given data set. To achieve a high speedup, elimination algorithms require an initial guess of the peak correlation value. We propose two initialization schemes including a coarse-to-fine scheme for larger templates and a two-stage technique for small- and medium-sized templates. Our proposed algorithms are exact, i.e., having exhaustive equivalent accuracy, and are compared with the existing fast techniques using real image data sets on a wide variety of template sizes. While the actual speedups are data dependent, in most cases, our proposed algorithms have been found to be significantly faster than the other algorithms.
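    The elimination idea is easiest to see on a measure that is already monotonic; the sketch below applies row-by-row early termination to a sum-of-absolute-differences search (the paper's actual contribution, making the correlation coefficient admit such bounds, is not reproduced here):

    ```python
    import numpy as np

    def best_match_sad(image, tmpl):
        """Exhaustive-equivalent template matching with partial elimination.

        The running SAD grows monotonically as template rows are added, so a
        candidate location is abandoned as soon as its partial sum exceeds
        the best complete score found so far.
        """
        H, W = image.shape
        h, w = tmpl.shape
        best, best_pos = np.inf, None
        for y in range(H - h + 1):
            for x in range(W - w + 1):
                s = 0.0
                for r in range(h):                       # row-by-row partial sums
                    s += np.abs(image[y + r, x:x + w] - tmpl[r]).sum()
                    if s >= best:                        # early termination
                        break
                if s < best:
                    best, best_pos = s, (y, x)
            # result is identical to the exhaustive search, only cheaper
        return best_pos, best
    ```

    As in the paper, a good initial guess of the best score makes elimination kick in earlier, which is why initialization schemes matter for the achievable speedup.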

  2. Genomic prediction using an iterative conditional expectation algorithm for a fast BayesC-like model.

    PubMed

    Dong, Linsong; Wang, Zhiyong

    2018-06-11

    Genomic prediction is feasible for estimating genomic breeding values because of dense genome-wide markers and credible statistical methods, such as Genomic Best Linear Unbiased Prediction (GBLUP) and various Bayesian methods. Compared with GBLUP, Bayesian methods propose more flexible assumptions for the distributions of SNP effects. However, most Bayesian methods are performed based on Markov chain Monte Carlo (MCMC) algorithms, leading to computational efficiency challenges. Hence, some fast Bayesian approaches, such as fast BayesB (fBayesB), were proposed to speed up the calculation. This study proposed another fast Bayesian method termed fast BayesC (fBayesC). The prior distribution of fBayesC assumes that a SNP with probability γ has a non-zero effect which comes from a normal density with a common variance. The simulated data from QTLMAS XII workshop and actual data on large yellow croaker were used to compare the predictive results of fBayesB, fBayesC and (MCMC-based) BayesC. The results showed that when γ was set as a small value, such as 0.01 in the simulated data or 0.001 in the actual data, fBayesB and fBayesC yielded lower prediction accuracies (abilities) than BayesC. In the actual data, fBayesC could yield very similar predictive abilities as BayesC when γ ≥ 0.01. When γ = 0.01, fBayesB could also yield similar results as fBayesC and BayesC. However, fBayesB could not yield an explicit result when γ ≥ 0.1, but a similar situation was not observed for fBayesC. Moreover, the computational speed of fBayesC was significantly faster than that of BayesC, making fBayesC a promising method for genomic prediction.

  3. A Gradient Taguchi Method for Engineering Optimization

    NASA Astrophysics Data System (ADS)

    Hwang, Shun-Fa; Wu, Jen-Chih; He, Rong-Song

    2017-10-01

    To balance the robustness and the convergence speed of optimization, a novel hybrid algorithm combining the Taguchi method and the steepest descent method is proposed in this work. The Taguchi method, using orthogonal arrays, can quickly find the optimum combination of the levels of various factors, even when the number of levels and/or factors is quite large. This algorithm is applied to the inverse determination of elastic constants of three composite plates by combining numerical methods and vibration testing. For these problems, the proposed algorithm finds better elastic constants at lower computational cost. Therefore, the proposed algorithm has good robustness and fast convergence speed compared to some hybrid genetic algorithms.
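
    As a rough picture of the hybrid, a coarse screen over level combinations chooses a starting point, and steepest descent then refines it. The sketch below substitutes a full factorial grid for a true orthogonal array and minimizes a toy quadratic; it is illustrative, not the paper's method:

```python
import itertools
import numpy as np

def hybrid_minimize(f, grad, levels, lr=0.1, iters=200):
    """Screen all level combinations (a full factorial grid standing in
    for a Taguchi orthogonal array), start from the best one, then
    refine by steepest descent."""
    x = min((np.array(c, dtype=float) for c in itertools.product(*levels)),
            key=f)                       # coarse screen picks the start
    for _ in range(iters):
        x = x - lr * grad(x)             # steepest descent refinement
    return x
```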

  4. A fast 3-D object recognition algorithm for the vision system of a special-purpose dexterous manipulator

    NASA Technical Reports Server (NTRS)

    Hung, Stephen H. Y.

    1989-01-01

    A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. This algorithm speeds up processing by eliminating low-level processing whenever possible. It may identify the object, reject a set of bad data at an early stage, or create a better environment for a more powerful algorithm to carry the work further.

  5. Fast-responding liquid crystal light-valve technology for color-sequential display applications

    NASA Astrophysics Data System (ADS)

    Janssen, Peter J.; Konovalov, Victor A.; Muravski, Anatoli A.; Yakovenko, Sergei Y.

    1996-04-01

    A color-sequential projection system has some distinct advantages over conventional systems which make it uniquely suitable for consumer TV as well as high-performance professional applications such as computer monitors and electronic cinema. A fast-responding light valve is clearly essential for a well-performing system. The response speed of transmissive LC light valves has been marginal thus far for good color rendition. Recently, the Sevchenko Institute has made some very fast reflective LC cells, which were evaluated at Philips Labs. These devices showed sub-millisecond large-signal response times, even at room temperature, and produced good color in a projector-emulation testbed. In our presentation we describe our highly efficient color-sequential projector and demonstrate its operation on video tape. Next we discuss light-valve requirements and reflective light-valve test results.

  6. Fast computation of close-coupling exchange integrals using polynomials in a tree representation

    NASA Astrophysics Data System (ADS)

    Wallerberger, Markus; Igenbergs, Katharina; Schweinzer, Josef; Aumayr, Friedrich

    2011-03-01

    The semi-classical atomic-orbital close-coupling method is a well-known approach for the calculation of cross sections in ion-atom collisions. It strongly relies on the fast and stable computation of exchange integrals. We present an upgrade to earlier implementations of the Fourier-transform method. For this purpose, we implement an extensive library for symbolic storage of polynomials, relying on sophisticated tree structures to allow fast manipulation and numerically stable evaluation. Using this library, we considerably speed up creation and computation of exchange integrals. This enables us to compute cross sections for more complex collision systems.

    Program summary:
    Program title: TXINT
    Catalogue identifier: AEHS_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHS_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 12 332
    No. of bytes in distributed program, including test data, etc.: 157 086
    Distribution format: tar.gz
    Programming language: Fortran 95
    Computer: All with a Fortran 95 compiler
    Operating system: All with a Fortran 95 compiler
    RAM: Depends heavily on input, usually less than 100 MiB
    Classification: 16.10
    Nature of problem: Analytical calculation of one- and two-center exchange matrix elements for the close-coupling method in the impact parameter model.
    Solution method: Similar to the code of Hansen and Dubois [1], we use the Fourier-transform method suggested by Shakeshaft [2] to compute the integrals. However, we heavily speed up the calculation using a library for symbolic manipulation of polynomials.
    Restrictions: We restrict ourselves to a defined collision system in the impact parameter model.
    Unusual features: A library for symbolic manipulation of polynomials, where polynomials are stored in a space-saving left-child right-sibling binary tree. This provides stable numerical evaluation and fast mutation while maintaining full compatibility with the original code.
    Additional comments: This program makes heavy use of the new features provided by the Fortran 90 standard, most prominently pointers, derived types and allocatable structures, and a small portion of Fortran 95. Only newer compilers support these features. The following compilers support all features needed by the program: GNU Fortran Compiler "gfortran" from version 4.3.0, GNU Fortran 95 Compiler "g95" from version 4.2.0, Intel Fortran Compiler "ifort" from version 11.0.

  7. Symplectic molecular dynamics simulations on specially designed parallel computers.

    PubMed

    Borstnik, Urban; Janezic, Dusanka

    2005-01-01

    We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which treats high-frequency vibrational motion analytically and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches, fewer time steps and less time per step, enables fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.
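
    For context, the defining property of a symplectic integrator can be shown with the textbook member of the family, velocity Verlet (the SISM goes further by treating the high-frequency part analytically); a minimal sketch:

```python
def velocity_verlet(force, x, v, dt, steps, m=1.0):
    """Velocity Verlet, a basic symplectic integrator: it tracks a shadow
    energy over long runs instead of drifting, which is why symplectic
    schemes are favored for MD. Scalars here; the same update works
    componentwise for vectors."""
    a = force(x) / m
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x) / m
        v = v + 0.5 * (a + a_new) * dt       # velocity update with averaged force
        a = a_new
    return x, v
```

    On a harmonic oscillator the total energy stays near its initial value for thousands of steps, in contrast to explicit Euler, which spirals outward.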

  8. Fast and Accurate Hybrid Stream PCRTMSOLAR Radiative Transfer Model for Reflected Solar Spectrum Simulation in the Cloudy Atmosphere

    NASA Technical Reports Server (NTRS)

    Yang, Qiguang; Liu, Xu; Wu, Wan; Kizer, Susan; Baize, Rosemary R.

    2016-01-01

    A hybrid stream PCRTM-SOLAR model has been proposed for fast and accurate radiative transfer simulation. It calculates the reflected solar (RS) radiances in a fast, coarse way and then, with the help of a pre-saved matrix, transforms the results to obtain the desired highly accurate RS spectrum. The methodology has been demonstrated with the hybrid stream discrete ordinate (HSDO) radiative transfer (RT) model. The HSDO method calculates the monochromatic radiances using a 4-stream discrete ordinate method; only a small number of monochromatic radiances are simulated with both the 4-stream and a larger N-stream (N = 16) discrete ordinate RT algorithm. The accuracy of the obtained channel radiance is comparable to the result from the N-stream moderate resolution atmospheric transmission version 5 (MODTRAN5). The root-mean-square errors are usually less than 5x10(exp -4) mW/sq cm/sr/cm. The computational speed is three to four orders of magnitude faster than the medium-speed correlated-k option of MODTRAN5. This method is very efficient for simulating thousands of RS spectra under multi-layer cloud/aerosol and solar radiation conditions for climate change studies and numerical weather prediction applications.
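
    The "pre-saved matrix" step can be pictured as an offline least-squares fit from cheap coarse-stream outputs to accurate many-stream outputs, applied online as a single matrix multiply. A toy linear stand-in (not the actual spectral transform):

```python
import numpy as np

rng = np.random.default_rng(1)

# Offline phase: pair cheap coarse-model outputs with accurate fine-model
# outputs (here a hidden linear relation plays the role of the fine model).
A_true = rng.normal(size=(4, 4))
coarse_train = rng.normal(size=(50, 4))
fine_train = coarse_train @ A_true

# Fit the correction matrix once and save it.
M, *_ = np.linalg.lstsq(coarse_train, fine_train, rcond=None)

# Online phase: a new coarse run is upgraded by one matrix multiply.
coarse_new = rng.normal(size=(1, 4))
fine_pred = coarse_new @ M
```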

  9. Computational algorithms for simulations in atmospheric optics.

    PubMed

    Konyaev, P A; Lukin, V P

    2016-04-20

    A computer simulation technique for atmospheric and adaptive optics based on parallel programming is discussed. A parallel propagation algorithm is designed and a modified spectral-phase method for computer generation of 2D time-variant random fields is developed. Temporal power spectra of Laguerre-Gaussian beam fluctuations are considered as an example to illustrate the applications discussed. Implementation of the proposed algorithms using Intel MKL and IPP libraries and NVIDIA CUDA technology is shown to be very fast and accurate. The hardware system for the computer simulation is an off-the-shelf desktop with an Intel Core i7-4790K CPU operating at a turbo-speed frequency up to 5 GHz and an NVIDIA GeForce GTX-960 graphics accelerator with 1024 1.5-GHz processors.
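
    A spectral-phase style generator fits in a few lines: filter complex white noise with a power-law spectrum in Fourier space and inverse-transform. The sketch below uses an illustrative, loosely Kolmogorov-like exponent, not the paper's modified method:

```python
import numpy as np

def random_field_2d(n, power=-11.0 / 6.0, seed=0):
    """Spectral method for a 2D random field: shape complex white noise
    with a power-law spectrum |k|**power in the Fourier domain, then
    inverse FFT back to real space."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    kx = np.fft.fftfreq(n)
    k = np.hypot(*np.meshgrid(kx, kx))
    k[0, 0] = np.inf                     # suppress the zero-frequency mode
    return np.fft.ifft2(noise * k ** power).real
```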

  10. Architectural Aspects of Grid Computing and its Global Prospects for E-Science Community

    NASA Astrophysics Data System (ADS)

    Ahmad, Mushtaq

    2008-05-01

    The paper reviews the emerging architectural aspects of Grid Computing for the e-Science community, for scientific research and business/commercial collaboration beyond physical boundaries. Grid Computing provides all the needed facilities: hardware, software, communication interfaces, high-speed internet, safe authentication, and a secure environment for collaboration on research projects around the globe. It provides a very fast compute engine for those scientific and engineering research projects and business/commercial applications which are heavily compute-intensive and/or require huge amounts of data. It also makes possible the use of very advanced methodologies, simulation models, expert systems, and the treasure of knowledge available around the globe under the umbrella of knowledge sharing. Thus it helps realize the dream of a global village for the benefit of the e-Science community across the globe.

  11. High-speed, random-access fluorescence microscopy: I. High-resolution optical recording with voltage-sensitive dyes and ion indicators.

    PubMed

    Bullen, A; Patel, S S; Saggau, P

    1997-07-01

    The design and implementation of a high-speed, random-access, laser-scanning fluorescence microscope configured to record fast physiological signals from small neuronal structures with high spatiotemporal resolution is presented. The laser-scanning capability of this nonimaging microscope is provided by two orthogonal acousto-optic deflectors under computer control. Each scanning point can be randomly accessed and has a positioning time of 3-5 microseconds. Sampling time is also computer-controlled and can be varied to maximize the signal-to-noise ratio. Acquisition rates up to 200k samples/s at 16-bit digitizing resolution are possible. The spatial resolution of this instrument is determined by the minimal spot size at the level of the preparation (i.e., 2-7 microns). Scanning points are selected interactively from a reference image collected with differential interference contrast optics and a video camera. Frame rates up to 5 kHz are easily attainable. Intrinsic variations in laser light intensity and scanning spot brightness are overcome by an on-line signal-processing scheme. Representative records obtained with this instrument by using voltage-sensitive dyes and calcium indicators demonstrate the ability to make fast, high-fidelity measurements of membrane potential and intracellular calcium at high spatial resolution (2 microns) without any temporal averaging.

  12. Fast vessel segmentation in retinal images using multi-scale enhancement and second-order local entropy

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Zamora, G.; Bauman, W.; Soliz, P.

    2012-03-01

    Retinal vasculature is one of the most important anatomical structures in digital retinal photographs. Accurate segmentation of retinal blood vessels is an essential task in automated analysis of retinopathy. This paper presents a new and effective vessel segmentation algorithm that features computational simplicity and fast implementation. The method uses morphological pre-processing to decrease the disturbance of bright structures and lesions before vessel extraction. Next, a vessel probability map is generated by computing the eigenvalues of the second derivatives of the Gaussian-filtered image at multiple scales. Then, second-order local entropy thresholding is applied to segment the vessel map. Lastly, a rule-based decision step, which measures the geometric shape difference between vessels and lesions, is applied to reduce false positives. The algorithm is evaluated on the low-resolution DRIVE and STARE databases and on the publicly available high-resolution image database from Friedrich-Alexander University Erlangen-Nuremberg, Germany. The proposed method achieved performance comparable to state-of-the-art unsupervised vessel segmentation methods at a competitive, faster speed on the DRIVE and STARE databases. For the high-resolution fundus image database, the proposed algorithm outperforms an existing approach in both performance and speed. Its efficiency and robustness make the blood vessel segmentation method described here suitable for broad application in automated analysis of retinal images.
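
    The Hessian-eigenvalue step can be sketched with finite differences: a dark vessel on a bright background yields a large positive second derivative across the vessel, so the larger Hessian eigenvalue acts as a crude vessel probability. This single-scale sketch omits the multi-scale Gaussian filtering and the entropy thresholding used above:

```python
import numpy as np

def vessel_probability(img):
    """Single-scale vesselness sketch: larger eigenvalue of the 2D Hessian,
    clipped at zero. Dark tubular structures (like retinal vessels) give a
    strong positive curvature across the vessel axis."""
    gy, gx = np.gradient(img)
    Ixx = np.gradient(gx, axis=1)
    Iyy = np.gradient(gy, axis=0)
    Ixy = np.gradient(gx, axis=0)
    root = np.sqrt((Ixx - Iyy) ** 2 + 4.0 * Ixy ** 2)
    lam1 = 0.5 * (Ixx + Iyy + root)      # larger Hessian eigenvalue
    return np.maximum(lam1, 0.0)
```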

  13. Holographic Adaptive Laser Optics System (HALOS): Fast, Autonomous Aberration Correction

    NASA Astrophysics Data System (ADS)

    Andersen, G.; MacDonald, K.; Gelsinger-Austin, P.

    2013-09-01

    We present an adaptive optics system which uses a multiplexed hologram to deconvolve the phase aberrations in an input beam. This wavefront characterization is extremely fast as it is based on simple measurements of the intensity of focal spots and does not require any computations. Furthermore, the system does not require a computer in the loop and is thus much cheaper, less complex, and more robust as well. A fully functional, closed-loop prototype incorporating a 32-element MEMS mirror has been constructed. The unit has a footprint no larger than a laptop but runs at a bandwidth of 100 kHz, over an order of magnitude faster than comparable conventional systems occupying a significantly larger volume. Additionally, since the sensing is based on parallel, all-optical processing, the speed is independent of actuator number, running at the same bandwidth for one actuator as for a million. We are developing the HALOS technology with a view towards next-generation surveillance systems for extreme adaptive optics applications. These include imaging, lidar, and free-space optical communications for unmanned aerial vehicles and SSA. The small volume is ideal for UAVs, while the high speed and high resolution will be of great benefit to the ground-based observation of space-based objects.

  14. High-speed, random-access fluorescence microscopy: I. High-resolution optical recording with voltage-sensitive dyes and ion indicators.

    PubMed Central

    Bullen, A; Patel, S S; Saggau, P

    1997-01-01

    The design and implementation of a high-speed, random-access, laser-scanning fluorescence microscope configured to record fast physiological signals from small neuronal structures with high spatiotemporal resolution is presented. The laser-scanning capability of this nonimaging microscope is provided by two orthogonal acousto-optic deflectors under computer control. Each scanning point can be randomly accessed and has a positioning time of 3-5 microseconds. Sampling time is also computer-controlled and can be varied to maximize the signal-to-noise ratio. Acquisition rates up to 200k samples/s at 16-bit digitizing resolution are possible. The spatial resolution of this instrument is determined by the minimal spot size at the level of the preparation (i.e., 2-7 microns). Scanning points are selected interactively from a reference image collected with differential interference contrast optics and a video camera. Frame rates up to 5 kHz are easily attainable. Intrinsic variations in laser light intensity and scanning spot brightness are overcome by an on-line signal-processing scheme. Representative records obtained with this instrument by using voltage-sensitive dyes and calcium indicators demonstrate the ability to make fast, high-fidelity measurements of membrane potential and intracellular calcium at high spatial resolution (2 microns) without any temporal averaging. PMID:9199810

  15. Speed of CMEs and the Magnetic Non-Potentiality of Their Source ARs

    NASA Technical Reports Server (NTRS)

    Tiwari, Sanjiv K.; Falconer, David A.; Moore, Ronald L.; Venkatakrishnan, P.

    2014-01-01

    Most fast coronal mass ejections (CMEs) originate from solar active regions (ARs). The non-potentiality of ARs is expected to determine the speed and size of CMEs in the outer corona; several other unexplored parameters might be important as well. To find the correlation between the initial speed of CMEs and the non-potentiality of source ARs, we associated over a hundred CMEs with their source ARs via their co-produced flares. The speeds of the CMEs were collected from the SOHO LASCO CME catalog. We used vector magnetograms obtained mainly with HMI/SDO, and with Hinode (SOT/SP) when available within an hour of a CME occurrence, to evaluate various magnetic non-potentiality parameters, e.g., magnetic free-energy proxies, computed magnetic free energy, twist, shear angle, and signed shear angle. We also included several other parameters, e.g., total unsigned flux, net current, magnetic area of ARs, and area of sunspots, to investigate their correlation, if any, with the initial speeds of CMEs. Our preliminary results show that ARs with larger non-potentiality and area mostly produce fast CMEs, but they can also produce slower ones. ARs with lesser non-potentiality and area generally produce only slower CMEs, with a few exceptions. The total unsigned flux correlates with the non-potentiality parameters and area of ARs, but some ARs with large unsigned flux are also found to be the least non-potential. A more detailed analysis is underway.

  16. Online plasma calculator

    NASA Astrophysics Data System (ADS)

    Wisniewski, H.; Gourdain, P.-A.

    2017-10-01

    APOLLO is an online, Linux-based plasma calculator. Users can input variables that correspond to their specific plasma, such as ion and electron densities, temperatures, and external magnetic fields. The system is based on a webserver where a FastCGI protocol computes key plasma parameters including frequencies, lengths, velocities, and dimensionless numbers. FastCGI was chosen to overcome security problems caused by Java-based plugins; it also speeds up calculations over PHP-based systems. APOLLO is built upon the WT library, which turns any web browser into a versatile, fast graphical user interface. All values with units are expressed in SI units except temperature, which is in electron-volts. SI units were chosen over cgs units because of the gradual shift toward SI units within the plasma community. APOLLO is intended to be a fast calculator that also provides the user with the proper equations used to calculate the plasma parameters. The system is intended to be used by undergraduates taking plasma courses as well as graduate students and researchers who need a quick reference calculation.
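
    Two of the quantities such a calculator reports follow directly from textbook formulas; a sketch with SI inputs (density in m^-3, temperature in eV), independent of the APOLLO implementation:

```python
import math

E = 1.602176634e-19      # elementary charge [C]
ME = 9.1093837015e-31    # electron mass [kg]
EPS0 = 8.8541878128e-12  # vacuum permittivity [F/m]

def electron_plasma_frequency(n_e):
    """Angular electron plasma frequency [rad/s]:
    omega_pe = sqrt(n_e e^2 / (m_e eps0)), n_e in m^-3."""
    return math.sqrt(n_e * E ** 2 / (ME * EPS0))

def debye_length(n_e, T_e_eV):
    """Debye length [m]: lambda_D = sqrt(eps0 k T_e / (n_e e^2));
    with T_e in eV the Boltzmann factor cancels one power of e."""
    return math.sqrt(EPS0 * T_e_eV / (n_e * E))
```

    For n_e = 10^18 m^-3 and T_e = 1 eV these give roughly 5.6x10^10 rad/s and 7.4 micrometers, matching the usual formulary values.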

  17. Tomo3D 2.0--exploitation of advanced vector extensions (AVX) for 3D reconstruction.

    PubMed

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-02-01

    Tomo3D is a program for fast tomographic reconstruction on multicore computers. Its high speed stems from code optimization, vectorization with Streaming SIMD Extensions (SSE), multithreading, and optimization of disk access. Recently, Advanced Vector eXtensions (AVX) have been introduced in the x86 processor architecture. Compared to SSE, AVX doubles the number of simultaneous operations, thus pointing to a potential twofold gain in speed. However, in practice, achieving this potential is extremely difficult. Here, we provide a technical description and an assessment of the optimizations included in Tomo3D to take advantage of AVX instructions. Tomo3D 2.0 allows huge reconstructions to be calculated on standard computers in a matter of minutes. Thus, it will be a valuable tool for electron tomography studies with increasing resolution needs. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Computer-animated stimuli to measure motion sensitivity: constraints on signal design in the Jacky dragon

    PubMed Central

    Rieucau, Guillaume; Burke, Darren

    2017-01-01

    Abstract Identifying perceptual thresholds is critical for understanding the mechanisms that underlie signal evolution. Using computer-animated stimuli, we examined visual speed sensitivity in the Jacky dragon Amphibolurus muricatus, a species that makes extensive use of rapid motor patterns in social communication. First, focal lizards were tested in discrimination trials using random-dot kinematograms displaying combinations of speed, coherence, and direction. Second, we measured subject lizards’ ability to predict the appearance of a secondary reinforcer (1 of 3 different computer-generated animations of invertebrates: cricket, spider, and mite) based on the direction of movement of a field of drifting dots by following a set of behavioural responses (e.g., orienting response, latency to respond) to our virtual stimuli. We found an effect of both speed and coherence, as well as an interaction between these 2 factors on the perception of moving stimuli. Overall, our results showed that Jacky dragons have acute sensitivity to high speeds. We then employed an optic flow analysis to match the performance to ecologically relevant motion. Our results suggest that the Jacky dragon visual system may have been shaped to detect fast motion. This pre-existing sensitivity may have constrained the evolution of conspecific displays. In contrast, Jacky dragons may have difficulty in detecting the movement of ambush predators, such as snakes and of some invertebrate prey. Our study also demonstrates the potential of the computer-animated stimuli technique for conducting nonintrusive tests to explore motion range and sensitivity in a visually mediated species. PMID:29491965

  19. A Parallel Multigrid Solver for Viscous Flows on Anisotropic Structured Grids

    NASA Technical Reports Server (NTRS)

    Prieto, Manuel; Montero, Ruben S.; Llorente, Ignacio M.; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    This paper presents an efficient parallel multigrid solver for speeding up the computation of a 3-D model that treats the flow of a viscous fluid over a flat plate. The main interest of this simulation lies in exhibiting some basic difficulties that prevent optimal multigrid efficiencies from being achieved. As the computing platform, we have used Coral, a Beowulf-class system based on Intel Pentium processors and equipped with GigaNet cLAN and switched Fast Ethernet networks. Our study not only examines the scalability of the solver but also includes a performance evaluation of Coral where the investigated solver has been used to compare several of its design choices, namely, the interconnection network (GigaNet versus switched Fast-Ethernet) and the node configuration (dual nodes versus single nodes). As a reference, the performance results have been compared with those obtained with the NAS-MG benchmark.

  20. Photon-trapping micro/nanostructures for high linearity in ultra-fast photodiodes

    NASA Astrophysics Data System (ADS)

    Cansizoglu, Hilal; Gao, Yang; Perez, Cesar Bartolo; Ghandiparsi, Soroush; Ponizovskaya Devine, Ekaterina; Cansizoglu, Mehmet F.; Yamada, Toshishige; Elrefaie, Aly F.; Wang, Shih-Yuan; Islam, M. Saif

    2017-08-01

    Photodetectors (PDs) in datacom and computer networks, where the link length is up to 300 m, need to handle higher input power than is typical of other communication links. Also, to reduce power consumption due to equalization at high speed (>25 Gb/s), datacom links will use PAM-4 signaling instead of NRZ, with stringent receiver linearity requirements. Si PDs with photon-trapping micro/nanostructures are shown to have high linearity in output current versus input optical power. Though there is less silicon material due to the holes, the micro-/nanostructured holes collectively reradiate the light into an in-plane direction of the PD surface and can avoid current crowding in the PD. Consequently, the photocurrent per unit volume remains at a low level, contributing to high linearity in the photocurrent. We present the effect of design and lattice patterns of micro/nanostructures on the linearity of ultra-fast silicon PDs designed for high-speed multi-gigabit data networks.

  1. Leveraging transcript quantification for fast computation of alternative splicing profiles.

    PubMed

    Alamancos, Gael P; Pagès, Amadís; Trincado, Juan L; Bellora, Nicolás; Eyras, Eduardo

    2015-09-01

    Alternative splicing plays an essential role in many cellular processes and bears major relevance in the understanding of multiple diseases, including cancer. High-throughput RNA sequencing allows genome-wide analyses of splicing across multiple conditions. However, the increasing number of available data sets represents a major challenge in terms of computation time and storage requirements. We describe SUPPA, a computational tool to calculate relative inclusion values of alternative splicing events, exploiting fast transcript quantification. SUPPA's accuracy is comparable, and sometimes superior, to that of standard methods on simulated as well as real RNA-sequencing data, judged against experimentally validated events. We assess the variability in terms of the choice of annotation and provide evidence that using complete transcripts rather than more transcripts per gene provides better estimates. Moreover, SUPPA coupled with de novo transcript reconstruction methods does not achieve accuracies as high as using quantification of known transcripts, but remains comparable to existing methods. Finally, we show that SUPPA is more than 1000 times faster than standard methods. Coupled with fast transcript quantification, SUPPA provides inclusion values at a much higher speed than existing methods without compromising accuracy, thereby facilitating the systematic splicing analysis of large data sets with limited computational resources. The software is implemented in Python 2.7 and is available under the MIT license at https://bitbucket.org/regulatorygenomicsupf/suppa. © 2015 Alamancos et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
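
    The core quantity SUPPA derives, the inclusion level (PSI) of an event, reduces to a ratio of transcript abundances; a minimal sketch with hypothetical transcript IDs:

```python
def event_psi(tpm, inclusion_ids, all_ids):
    """Percent-spliced-in from transcript abundances: the TPM share of the
    transcripts that include the event among all transcripts defining it,
    the quantification-based quantity SUPPA exploits. IDs are illustrative."""
    inc = sum(tpm[t] for t in inclusion_ids)
    tot = sum(tpm[t] for t in all_ids)
    return inc / tot if tot > 0 else float("nan")
```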

  2. Mars rover local navigation and hazard avoidance

    NASA Technical Reports Server (NTRS)

    Wilcox, B. H.; Gennery, D. B.; Mishkin, A. H.

    1989-01-01

    A Mars rover sample return mission has been proposed for the late 1990's. Due to the long speed-of-light delays between Earth and Mars, some autonomy on the rover is highly desirable. JPL has been conducting research in two possible modes of rover operation, Computer-Aided Remote Driving and Semiautonomous Navigation. A recently-completed research program used a half-scale testbed vehicle to explore several of the concepts in semiautonomous navigation. A new, full-scale vehicle with all computational and power resources on-board will be used in the coming year to demonstrate relatively fast semiautonomous navigation. The computational and power requirements for Mars rover local navigation and hazard avoidance are discussed.

  3. Mars Rover Local Navigation And Hazard Avoidance

    NASA Astrophysics Data System (ADS)

    Wilcox, B. H.; Gennery, D. B.; Mishkin, A. H.

    1989-03-01

    A Mars rover sample return mission has been proposed for the late 1990's. Due to the long speed-of-light delays between Earth and Mars, some autonomy on the rover is highly desirable. JPL has been conducting research in two possible modes of rover operation, Computer-Aided Remote Driving and Semiautonomous Navigation. A recently-completed research program used a half-scale testbed vehicle to explore several of the concepts in semiautonomous navigation. A new, full-scale vehicle with all computational and power resources on-board will be used in the coming year to demonstrate relatively fast semiautonomous navigation. The computational and power requirements for Mars rover local navigation and hazard avoidance are discussed.

  4. Performance evaluation of throughput computing workloads using multi-core processors and graphics processors

    NASA Astrophysics Data System (ADS)

    Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.

    2017-11-01

    The current trend in processor manufacturing focuses on multi-core architectures rather than increasing the clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social-networking web applications, and big data have created huge demand for data-processing activities, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-based GPU architectures. This paper reviews the architectural aspects of multi-/many-core processors and graphics processors. Different case studies are taken to compare the performance of throughput computing applications using shared-memory programming in OpenMP and CUDA API-based programming.
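
    The data-parallel pattern that both OpenMP and CUDA exploit is a map over independent chunks followed by a reduction. A Python thread-pool stand-in for the pattern (illustrative; not the paper's OpenMP/CUDA case studies):

```python
from concurrent.futures import ThreadPoolExecutor

def chunked_sum_squares(data, workers=4):
    """Data-parallel reduction sketch: split the input into chunks, reduce
    each chunk concurrently, then combine the partial results (the same
    SPMD pattern an OpenMP parallel-for or a CUDA kernel grid uses)."""
    n = len(data)
    step = (n + workers - 1) // workers
    chunks = [data[i:i + step] for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return sum(ex.map(lambda c: sum(x * x for x in c), chunks))
```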

  5. A fast image registration approach of neural activities in light-sheet fluorescence microscopy images

    NASA Astrophysics Data System (ADS)

    Meng, Hui; Hui, Hui; Hu, Chaoen; Yang, Xin; Tian, Jie

    2017-03-01

    The ability to image neural activities fast and at single-neuron resolution makes light-sheet fluorescence microscopy (LSFM) a powerful imaging technique for functional neural connection applications. State-of-the-art LSFM imaging systems can record the neuronal activities of the entire brain of a small animal, such as zebrafish or C. elegans, at single-neuron resolution. However, stimulated and spontaneous movements of the animal brain result in inconsistent neuron positions during the recording process, and registering the acquired large-scale images with conventional methods is time consuming. In this work, we address the problem of fast registration of neural positions in stacks of LSFM images, which is necessary to register brain structures and activities. To achieve fast registration of neural activities, we present a rigid registration architecture implemented on a graphics processing unit (GPU). In this approach, the image stacks were preprocessed on the GPU by mean stretching to reduce the computational effort. Each image was registered to the previous image stack, which was taken as the reference. A fast Fourier transform (FFT) algorithm was used for calculating the shift of the image stack. The calculations for image registration were performed in different threads, while the preparation functionality was refactored and called only once by the master thread. We implemented our registration algorithm on an NVIDIA Quadro K4200 GPU under the Compute Unified Device Architecture (CUDA) programming environment. The experimental results showed that the registration computation takes as little as 550 ms for a full high-resolution brain image. Our approach also has potential for other dynamic image registrations in biomedical applications.
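
    The FFT-based shift estimation at the heart of the registration step can be sketched on the CPU with NumPy: the peak of the circular cross-correlation, computed in the Fourier domain, gives the translation (the GPU kernels in the work above replace these transforms):

```python
import numpy as np

def fft_shift_estimate(ref, img):
    """Estimate the integer translation between two frames via FFT-based
    cross-correlation. Returns the (dy, dx) shift to apply to img (e.g.
    with np.roll) to align it with ref."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    n, m = corr.shape
    dy, dx = int(dy), int(dx)
    # Unwrap circular shifts into the signed range (-n/2, n/2]
    return (dy - n if dy > n // 2 else dy, dx - m if dx > m // 2 else dx)
```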

  6. Anthropometric, biomechanical, and isokinetic strength predictors of ball release speed in high-performance cricket fast bowlers.

    PubMed

    Wormgoor, Shohn; Harden, Lois; Mckinon, Warrick

    2010-07-01

    Fast bowling is fundamental to all forms of cricket. The purpose of this study was to identify parameters that contribute to high ball release speeds in cricket fast bowlers. We assessed anthropometric dimensions, concentric and eccentric isokinetic strength of selected knee and shoulder muscle groups, and specific aspects of technique from a single delivery in 28 high-performance fast bowlers (age 22.0 +/- 3.0 years, ball release speed 34.0 +/- 1.3 m/s). Six 50-Hz cameras and the Ariel Performance Analysis System software were used to analyse the fast and accurate deliveries. Using Pearson's correlation, parameters that showed significant associations with ball release speed were identified. The findings suggest that greater front-leg knee extension at ball release (r=0.52), shoulder alignment in the transverse plane rotated further away from the batsman at front foot strike (r=0.47), greater ankle height during the delivery stride (r=0.44), and greater shoulder extension strength (r=0.39) contribute significantly to higher ball release speeds. The predictor variables could not be combined into a multivariate model of ball release speed, such as is known to exist in less accomplished bowlers, suggesting that the factors that determine ball release speed in other groups may not apply to high-performance fast bowlers.

  7. High-speed extended-term time-domain simulation for online cascading analysis of power system

    NASA Astrophysics Data System (ADS)

    Fu, Chuan

    A high-speed extended-term (HSET) time-domain simulator (TDS), intended to become part of an energy management system (EMS), has been newly developed for use in online extended-term dynamic cascading analysis of power systems. HSET-TDS includes the following attributes for providing situational awareness of high-consequence events: (i) online analysis, including n-1 and n-k events, (ii) the ability to simulate both fast and slow dynamics 1-3 hours in advance, (iii) rigorous protection-system modeling, (iv) intelligence for corrective-action identification, storage, and fast retrieval, and (v) high-speed execution. Very fast online computational capability is the most desired attribute of this simulator. Based on the process of solving the differential-algebraic equations describing power system dynamics, HSET-TDS seeks computational efficiency at each of the following hierarchical levels: (i) hardware, (ii) strategies, (iii) integration methods, (iv) nonlinear solvers, and (v) linear solver libraries. This thesis first describes the Hammer-Hollingsworth 4 (HH4) implicit integration method. Like the trapezoidal rule, HH4 is symmetrically A-stable, but it possesses higher-order precision (h⁴) than the trapezoidal rule. Such precision enables larger integration steps and therefore improves simulation efficiency for variable-step-size implementations. This thesis provides the underlying theory on which we advocate the use of HH4 over other numerical integration methods for power system time-domain simulation. Second, motivated by the need to perform high-speed extended-term time-domain simulation for online purposes, this thesis presents principles for designing numerical solvers of the differential-algebraic systems associated with power system time-domain simulation, including DAE construction strategies (Direct Solution Method), integration methods (HH4), nonlinear solvers (Very Dishonest Newton), and linear solvers (SuperLU).
We have implemented a design appropriate for HSET-TDS and compare it to various solvers, including the commercial-grade PSSE program, with respect to computational efficiency and accuracy, using as examples the New England 39-bus system, the expanded 8775-bus system, and the 13029-bus PJM system. Third, we have explored a stiffness-decoupling method, intended to be part of a parallel design of time-domain simulation software for supercomputers. The stiffness-decoupling method combines the advantages of implicit methods (A-stability) and explicit methods (less computation), and with the new stiffness detection method proposed herein, the stiffness can be captured. The expanded 975-bus system is used to test simulation efficiency. Finally, several parallel strategies for supercomputer deployment to simulate power system dynamics are proposed and compared. Design A partitions the task by scale using the stiffness-decoupling method, waveform relaxation, and a parallel linear solver. Design B partitions the task along the time axis using a highly precise integration method, the Kuntzmann-Butcher method of order 8 (KB8). The strategy of partitioning events divides the whole simulation along the time axis through a simulated sequence of cascading events. Of all the strategies proposed, the strategy of partitioning cascading events is recommended, since the sub-tasks for each processor are totally independent and therefore minimal communication time is needed.
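
The "Very Dishonest Newton" nonlinear solver named above refers to the general technique of reusing a stale Jacobian for several Newton iterations to save linear-algebra cost, refreshing it only occasionally. A minimal sketch, assuming a small dense NumPy system rather than the sparse SuperLU-backed solver of the thesis (a production solver would reuse the LU factorization itself):

```python
import numpy as np

def very_dishonest_newton(f, jac, x0, tol=1e-10, max_iter=50, refresh=5):
    """Newton iteration that reuses a stale Jacobian for several steps,
    re-evaluating it only every `refresh` iterations -- trading accuracy
    per iteration for much cheaper linear algebra."""
    x = np.asarray(x0, dtype=float)
    J = jac(x)                      # evaluated (and factorized) rarely
    for k in range(max_iter):
        if k > 0 and k % refresh == 0:
            J = jac(x)              # occasional refresh keeps convergence
        dx = np.linalg.solve(J, -f(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```

With a frozen Jacobian this reduces to the chord method, whose linear convergence is usually fast enough near the solution that the saved factorizations dominate the cost.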

  8. Characterization and mechanical automation of the CASLEO Cherenkov telescopes

    NASA Astrophysics Data System (ADS)

    Leal, N.; Yelós, L. D.; Mancilla, A.; Maya, J.; Feres, L.; Lazarte, F.; García, B.

    2017-10-01

    A new automation system for the Cherenkov telescopes at CASLEO has been designed. Two rotation speeds are proposed: a fast speed for positioning and parking, and a slow speed for tracking. The wind speed at the El Leoncito site is used as a design parameter. In this work we present the first tests with the new setup, which show correct performance at the fast speed.

  9. Application of Fast Dynamic Allan Variance for the Characterization of FOGs-Based Measurement While Drilling.

    PubMed

    Wang, Lu; Zhang, Chunxi; Gao, Shuang; Wang, Tao; Lin, Tie; Li, Xianmu

    2016-12-07

    The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) could vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR) is a sliding version of the Allan variance. It is a practical tool that can represent the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long to process long time series, a fast DAVAR algorithm has been developed to accelerate the computation. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. What is worse, the FOG-based MWD often keeps working underground for several days; processing the gyro data collected aboveground is not only very time-consuming, but the data are also sometimes discontinuous in the timeline. In this article, building on the fast DAVAR algorithm, we advance it further (the improved fast DAVAR) to extend the fast DAVAR to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used to respectively characterize two sets of simulation data. The simulation results show that when the time series is short, the improved fast DAVAR saves 78.93% of the calculation time. When the time series is long (6 × 10⁵ samples), the improved fast DAVAR reduces the calculation time by 97.09%. Another set of simulation data with missing samples is characterized by the improved fast DAVAR, and the results prove that it successfully handles discontinuous data. Finally, a vibration experiment with the FOG-based MWD was carried out to validate the good performance of the improved fast DAVAR. The experimental results confirm that the improved fast DAVAR not only shortens the computation time but can also analyze discontinuous time series.
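
The static Allan variance underlying DAVAR can be sketched in a few lines; DAVAR evaluates this quantity inside a window that slides along the signal. This is the generic non-overlapping formulation for illustration, not the paper's improved fast algorithm:

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of rate samples `y` at cluster
    size `m` (averaging time m * tau0): half the mean squared
    difference of successive cluster averages."""
    n = len(y) // m
    means = y[:n * m].reshape(n, m).mean(axis=1)   # cluster averages
    return 0.5 * np.mean(np.diff(means) ** 2)
```

For white noise of unit variance the Allan variance falls off as 1/m, which is the classic signature used to separate gyro noise terms.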

  10. Application of Fast Dynamic Allan Variance for the Characterization of FOGs-Based Measurement While Drilling

    PubMed Central

    Wang, Lu; Zhang, Chunxi; Gao, Shuang; Wang, Tao; Lin, Tie; Li, Xianmu

    2016-01-01

    The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) could vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR) is a sliding version of the Allan variance. It is a practical tool that can represent the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long to process long time series, a fast DAVAR algorithm has been developed to accelerate the computation. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. What is worse, the FOG-based MWD often keeps working underground for several days; processing the gyro data collected aboveground is not only very time-consuming, but the data are also sometimes discontinuous in the timeline. In this article, building on the fast DAVAR algorithm, we advance it further (the improved fast DAVAR) to extend the fast DAVAR to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used to respectively characterize two sets of simulation data. The simulation results show that when the time series is short, the improved fast DAVAR saves 78.93% of the calculation time. When the time series is long (6 × 10⁵ samples), the improved fast DAVAR reduces the calculation time by 97.09%. Another set of simulation data with missing samples is characterized by the improved fast DAVAR, and the results prove that it successfully handles discontinuous data. Finally, a vibration experiment with the FOG-based MWD was carried out to validate the good performance of the improved fast DAVAR. The experimental results confirm that the improved fast DAVAR not only shortens the computation time but can also analyze discontinuous time series. PMID:27941600

  11. HIGH-SPEED GC/MS FOR AIR ANALYSIS

    EPA Science Inventory

    High speed or fast gas chromatography (FGC) consists of narrow bandwidth injection into a high-speed carrier gas stream passing through a short column leading to a fast detector. Many attempts have been made to demonstrate FGC, but until recently no practical method for routin...

  12. An Energy Integrated Dispatching Strategy of Multi- energy Based on Energy Internet

    NASA Astrophysics Data System (ADS)

    Jin, Weixia; Han, Jun

    2018-01-01

    The energy internet is a new way of using energy; it achieves energy efficiency and low cost by scheduling a variety of different forms of energy. Particle Swarm Optimization (PSO) is an advanced algorithm with few parameters, high computational precision, and fast convergence speed. By tuning the parameters ω, c1, and c2, PSO can improve its convergence speed and calculation accuracy. The objective of the optimization model is the lowest fuel cost while meeting the electricity, heating, and cooling loads after all renewable energy has been absorbed. Because the energy structure and prices differ between regions, the optimization strategy needs to be determined according to the algorithm and the model.
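
The PSO update the abstract refers to, with inertia weight ω and acceleration coefficients c1 and c2, can be sketched as follows. This is a generic textbook PSO minimizing a toy cost function, not the paper's dispatching model; all names are illustrative:

```python
import numpy as np

def pso(cost, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer.  w is the inertia weight;
    c1 and c2 are the cognitive and social acceleration coefficients."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()                                   # personal bests
    pbest_val = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()             # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2,) + x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([cost(p) for p in x])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()
```

Larger w favors exploration while larger c1/c2 favor exploitation, which is why the abstract treats these three parameters as the tuning knobs for convergence speed and accuracy.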

  13. VizieR Online Data Catalog: ynogkm: code for calculating time-like geodesics (Yang+, 2014)

    NASA Astrophysics Data System (ADS)

    Yang, X.-L.; Wang, J.-C.

    2013-11-01

    Here we present the source file for a new public code named ynogkm, aimed at fast calculation of time-like geodesics in a Kerr-Newman spacetime. In the code, the four Boyer-Lindquist coordinates and the proper time are expressed semi-analytically as functions of a parameter p, i.e., r(p), μ(p), φ(p), t(p), and σ(p), using Weierstrass' and Jacobi's elliptic functions and integrals. All of the elliptic integrals are computed by Carlson's elliptic integral method, which guarantees the fast speed of the code. The source Fortran file ynogkm.f90 contains four modules: constants, rootfind, ellfunction, and blcoordinates. (3 data files).
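
Carlson's symmetric forms, credited above for the code's speed, are also exposed by SciPy (as `scipy.special.elliprf`, available in SciPy >= 1.8). A small sketch of the standard reduction of Legendre's incomplete integral F(φ | m) to Carlson's R_F, independent of the ynogkm Fortran source:

```python
import numpy as np
from scipy.special import elliprf, ellipkinc

def incomplete_F(phi, m):
    """Legendre incomplete elliptic integral F(phi | m) via Carlson's
    symmetric form: F = sin(phi) * R_F(cos^2 phi, 1 - m sin^2 phi, 1)."""
    s = np.sin(phi)
    return s * elliprf(np.cos(phi) ** 2, 1.0 - m * s ** 2, 1.0)
```

Carlson's algorithm evaluates R_F by a fast duplication iteration, which is the property the abstract's speed claim rests on.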

  14. Fast and accurate image recognition algorithms for fresh produce food safety sensing

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin; Kang, Sukwon; Lefcourt, Alan M.

    2011-06-01

    This research developed and evaluated multispectral algorithms derived from hyperspectral line-scan fluorescence imaging under violet LED excitation for detection of fecal contamination on Golden Delicious apples. The algorithms utilized the fluorescence intensities at four wavebands, 680 nm, 684 nm, 720 nm, and 780 nm, to compute simple functions for effective detection of contamination spots created on the apple surfaces using four concentrations of aqueous fecal dilutions. The algorithms detected more than 99% of the fecal spots. The effective detection of feces shows that a simple multispectral fluorescence imaging algorithm based on violet LED excitation may be appropriate for detecting fecal contamination on high-speed apple processing lines.

  15. Development of a GPU Compatible Version of the Fast Radiation Code RRTMG

    NASA Astrophysics Data System (ADS)

    Iacono, M. J.; Mlawer, E. J.; Berthiaume, D.; Cady-Pereira, K. E.; Suarez, M.; Oreopoulos, L.; Lee, D.

    2012-12-01

    The absorption of solar radiation and emission/absorption of thermal radiation are crucial components of the physics that drive Earth's climate and weather. Therefore, accurate radiative transfer calculations are necessary for realistic climate and weather simulations. Efficient radiation codes have been developed for this purpose, but their accuracy requirements still necessitate that as much as 30% of the computational time of a GCM is spent computing radiative fluxes and heating rates. The overall computational expense constitutes a limitation on a GCM's predictive ability if it becomes an impediment to adding new physics to or increasing the spatial and/or vertical resolution of the model. The emergence of Graphics Processing Unit (GPU) technology, which will allow the parallel computation of multiple independent radiative calculations in a GCM, will lead to a fundamental change in the competition between accuracy and speed. Processing time previously consumed by radiative transfer will now be available for the modeling of other processes, such as physics parameterizations, without any sacrifice in the accuracy of the radiative transfer. Furthermore, fast radiation calculations can be performed much more frequently and will allow the modeling of radiative effects of rapid changes in the atmosphere. The fast radiation code RRTMG, developed at Atmospheric and Environmental Research (AER), is utilized operationally in many dynamical models throughout the world. We will present the results from the first stage of an effort to create a version of the RRTMG radiation code designed to run efficiently in a GPU environment. This effort will focus on the RRTMG implementation in GEOS-5. 
RRTMG has an internal pseudo-spectral vector of length of order 100 that, when combined with the much greater length of the global horizontal grid vector from which the radiation code is called in GEOS-5, makes RRTMG/GEOS-5 particularly suited to achieving a significant speed improvement through GPU technology. This large number of independent cases will allow us to take full advantage of the computational power of the latest GPUs, ensuring that all thread cores in the GPU remain active, a key criterion for obtaining significant speedup. The CUDA (Compute Unified Device Architecture) Fortran compiler developed by PGI and Nvidia will allow us to construct this parallel implementation on the GPU while remaining in the Fortran language. This implementation will scale very well across various CUDA-supported GPUs such as the recently released Fermi Nvidia cards. We will present the computational speed improvements of the GPU-compatible code relative to the standard CPU-based RRTMG with respect to a very large and diverse suite of atmospheric profiles. This suite will also be utilized to demonstrate the minimal impact of the code restructuring on the accuracy of radiation calculations. The GPU-compatible version of RRTMG will be directly applicable to future versions of GEOS-5, but it is also likely to provide significant associated benefits for other GCMs that employ RRTMG.

  16. Control of Walking Speed in Children With Cerebral Palsy.

    PubMed

    Davids, Jon R; Cung, Nina Q; Chen, Suzy; Sison-Williamson, Mitell; Bagley, Anita M

    2017-03-21

    Children's ability to control the speed of gait is important for a wide range of activities. It is thought that the ability to increase gait speed is common among children with cerebral palsy (CP). This study considered 3 hypotheses: (1) most ambulatory children with CP can increase gait speed, (2) the characteristics of free (self-selected) and fast walking are related to motor impairment level, and (3) the strategies used to increase gait speed are distinct among these levels. A retrospective review of time-distance parameters (TDPs) for 212 subjects with CP and 34 typically developing subjects walking at free and fast speeds was performed. Only children who could increase their gait speed above the minimal clinically important difference were defined as having a fast walk. Analysis of variance was used to compare TDPs of children with CP among Gross Motor Function Classification System (GMFCS) levels and with the typically developing group. Eighty-five percent of the CP group (GMFCS I, II, and III: 96%, 99%, and 34%, respectively) could increase gait speed on demand. At free speed, children at GMFCS levels I and II were significantly faster and had significantly greater stride length than children at GMFCS level III, and children at GMFCS level III had significantly lower cadence than those at levels I and II. There were no significant differences in cadence among GMFCS levels at fast speeds, and no significant differences among GMFCS levels in the percent change of any TDP between free and fast walking. Almost all children with CP at GMFCS levels I and II can control the speed of gait; however, only one-third at GMFCS level III have this ability. This study suggests that children at GMFCS level III can be divided into 2 groups based on their ability to control gait speed; however, the prognostic significance of such categorization remains to be determined. Diagnostic level II.

  17. Algorithms for the optimization of RBE-weighted dose in particle therapy.

    PubMed

    Horcicka, M; Meyer, C; Buschbacher, A; Durante, M; Krämer, M

    2013-01-21

    We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. For the dose calculation, carbon ions are considered and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms in GSI's treatment planning system TRiP98, such as the BFGS algorithm and the method of conjugate gradients, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented in terms of convergence over iterations and computation time. We found that the Fletcher-Reeves variant of the method of conjugate gradients has the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes, leading to good dose distributions. Finally, we discuss future goals concerning dose optimization in particle therapy which might benefit from fast optimization solvers.

  18. Algorithms for the optimization of RBE-weighted dose in particle therapy

    NASA Astrophysics Data System (ADS)

    Horcicka, M.; Meyer, C.; Buschbacher, A.; Durante, M.; Krämer, M.

    2013-01-01

    We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. For the dose calculation, carbon ions are considered and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms in GSI's treatment planning system TRiP98, such as the BFGS algorithm and the method of conjugate gradients, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented in terms of convergence over iterations and computation time. We found that the Fletcher-Reeves variant of the method of conjugate gradients has the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes, leading to good dose distributions. Finally, we discuss future goals concerning dose optimization in particle therapy which might benefit from fast optimization solvers.
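
The Fletcher-Reeves variant singled out in the two records above uses the conjugate-direction update β = ||g_new||² / ||g_old||². A minimal sketch with a simple Armijo backtracking line search, unrelated to the TRiP98 implementation; all names are illustrative:

```python
import numpy as np

def fr_cg(f, grad, x0, n_iter=100, tol=1e-8):
    """Nonlinear conjugate gradient with the Fletcher-Reeves update,
    using Armijo backtracking for the step length."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(n_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx + 1e-4 * alpha * g.dot(d) and alpha > 1e-12:
            alpha *= 0.5                      # backtrack until Armijo holds
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = g_new.dot(g_new) / g.dot(g)    # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        if g_new.dot(d) >= 0.0:               # restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x
```

Compared with steepest descent (β = 0), the conjugate direction reuses curvature information from previous steps, which is the source of the factor-of-4 speedup reported above.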

  19. Structural integrity of power generating speed bumps made of concrete foam composite

    NASA Astrophysics Data System (ADS)

    Syam, B.; Muttaqin, M.; Hastrino, D.; Sebayang, A.; Basuki, W. S.; Sabri, M.; Abda, S.

    2018-02-01

    In this paper, concrete foam composite speed bumps were designed to generate electrical power by harvesting the movements of vehicles on highways, streets, parking gates, and drive-thru lanes of fast food restaurants. The speed bumps are subjected to the loads generated by vehicles passing over the power-generating mechanical system. Here we focus on the structural integrity of the speed bumps; the electrical power generation is discussed in another paper. Structural integrity concerns a structure's ability to support its design loads without breaking, and includes the study of past structural failures in order to prevent failures in future designs. The speed bumps were made of concrete foam composite, with reinforcement material from empty fruit bunches of oil palm. The speed-bump materials and structure were subjected to various tests to obtain their physical and mechanical properties. To analyze the structural stability of the speed bumps, models were produced and tested in our speed-bump test station. We also conducted a FEM-based computer simulation to analyze the stress response of the speed-bump structures. It was found that speed bump type 1 significantly reduced the radial voltage. In addition, the speed bump equipped with a steel casing is also suitable for use as a component in generating electrical energy.

  20. Closha: bioinformatics workflow system for the analysis of massive sequencing data.

    PubMed

    Ko, GunHwan; Kim, Pan-Gyu; Yoon, Jongcheol; Han, Gukhee; Park, Seong-Jin; Song, Wangho; Lee, Byungwook

    2018-02-19

    While next-generation sequencing (NGS) costs have fallen in recent years, the cost and complexity of computation remain substantial obstacles to the use of NGS in biomedical care and genomic research. The rapidly increasing amounts of data available from new high-throughput methods have made data processing infeasible without automated pipelines. Integrating data and analytic resources into workflow systems addresses this problem by simplifying the task of data analysis. To this end, we developed a cloud-based workflow management system, Closha, to provide fast and cost-effective analysis of massive genomic data. We implemented complex workflows making optimal use of high-performance computing clusters. Closha allows users to create multi-step analyses using drag-and-drop functionality and to modify the parameters of pipeline tools. Users can also import Galaxy pipelines into Closha. Closha is a hybrid system that enables users to run both traditional analysis programs and MapReduce-based big-data analysis programs simultaneously in a single pipeline, so the execution of analytics algorithms can be parallelized, speeding up the whole process. We also developed a high-speed data transmission solution, KoDS, to transmit large amounts of data at a fast rate; its file transfer speed is up to 10 times that of normal FTP and HTTP. The computer hardware for Closha comprises 660 CPU cores and 800 TB of disk storage, enabling 500 jobs to run at the same time. Closha is a scalable, cost-effective, and publicly available web service for large-scale genomic data analysis. It supports the reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner and provides a user-friendly interface for genomic scientists to derive accurate results from NGS platform data. The Closha cloud server is freely available at http://closha.kobic.re.kr/.

  1. Differences of muscle co-contraction of the ankle joint between young and elderly adults during dynamic postural control at different speeds.

    PubMed

    Iwamoto, Yoshitaka; Takahashi, Makoto; Shinkoda, Koichi

    2017-08-02

    Agonist and antagonist muscle co-contractions during motor tasks are greater in the elderly than in young adults. During normal walking, muscle co-contraction increases with gait speed in young adults, but not in elderly adults. However, no study has compared the effects of speed on muscle co-contraction of the ankle joint during dynamic postural control in young and elderly adults. We compared muscle co-contractions of the ankle joint between young and elderly subjects during a functional stability boundary test at different speeds. Fifteen young adults and 16 community-dwelling elderly adults participated in this study. The task was functional stability boundary tests at different speeds (preferred and fast). Electromyographic evaluations of the tibialis anterior and soleus were recorded. The muscle co-contraction was evaluated using the co-contraction index (CI). There were no statistically significant differences in the postural sway parameters between the two age groups. Elderly subjects showed larger CI in both speed conditions than did the young subjects. CI was higher in the fast speed condition than in the preferred speed condition in the young subjects, but there was no difference in the elderly subjects. Moreover, after dividing the analytical range into phases (acceleration and deceleration phases), the CI was larger in the deceleration phase than in the acceleration phase in both groups, except for the young subjects in the fast speed conditions. Our results showed a greater muscle co-contraction of the ankle joint during dynamic postural control in elderly subjects than in young subjects not only in the preferred speed condition but also in the fast speed condition. In addition, the young subjects showed increased muscle co-contraction in the fast speed condition compared with that in the preferred speed condition; however, the elderly subjects showed no significant difference in muscle co-contraction between the two speed conditions. 
This indicates that fast movements cause different influences on dynamic postural control in elderly people, particularly from the point of view of muscle activation. These findings highlight the differences in the speed effects on muscle co-contraction of the ankle joint during dynamic postural control between the two age groups.

  2. Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods.

    PubMed

    Kim, Seung-Cheol; Kim, Eun-Soo

    2009-02-20

    In this paper we propose a new approach for fast generation of computer-generated holograms (CGHs) of a 3D object using run-length encoding (RLE) and novel look-up table (N-LUT) methods. With the RLE method, spatially redundant data of a 3D object are extracted and regrouped into an N-point redundancy map according to the number of adjacent object points having the same 3D value. Based on this redundancy map, N-point principal fringe patterns (PFPs) are calculated from the 1-point PFP of the N-LUT, and the CGH pattern for the 3D object is generated with these N-point PFPs. In this approach, the number of object points involved in calculating the CGH pattern can be dramatically reduced and, as a result, the computational speed increases. Experiments with a test 3D object are carried out and the results are compared with those of conventional methods.
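
The run-length grouping of adjacent identical values that drives the N-point redundancy map can be illustrated in one dimension; this toy sketch shows only the generic RLE idea, not the authors' CGH pipeline:

```python
from itertools import groupby

def run_length_encode(values):
    """Group identical adjacent values into (value, run_length) pairs --
    the generic idea behind regrouping adjacent object points that
    share the same value into an N-point redundancy map."""
    return [(v, len(list(run))) for v, run in groupby(values)]

def run_length_decode(pairs):
    """Inverse of run_length_encode: expand each pair back into a run."""
    return [v for v, n in pairs for _ in range(n)]
```

Once a run of N identical points is identified, a single precomputed N-point fringe pattern can stand in for N separate 1-point calculations, which is where the reported speedup comes from.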

  3. Fast PSP measurements of wall-pressure fluctuation in low-speed flows: improvements using proper orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Peng, Di; Wang, Shaofei; Liu, Yingzheng

    2016-04-01

    Fast pressure-sensitive paint (PSP) is very useful in flow diagnostics due to its fast response and high spatial resolution, but its application in low-speed flows is usually challenging due to the limits of the paint's pressure sensitivity and of high-speed imagers. The poor signal-to-noise ratio in low-speed cases makes it very difficult to extract useful information from the PSP data. In this study, unsteady PSP measurements were made on a flat plate behind a cylinder in a low-speed wind tunnel (flow speed from 10 to 17 m/s). Pressure fluctuations (ΔP) on the plate caused by vortex-plate interaction were recorded continuously by fast PSP (using a high-speed camera) and a microphone array. The power spectra of the pressure fluctuations and the phase-averaged ΔP obtained from PSP and the microphones were compared, showing good agreement in general. Proper orthogonal decomposition (POD) was used to reduce noise in the PSP data and extract the dominant pressure features. The PSP results reconstructed from selected POD modes were then compared to the pressure data obtained simultaneously with the microphone sensors. Based on the comparison of both the instantaneous ΔP and the root-mean-square of ΔP, it was confirmed that POD analysis could effectively remove noise while preserving the instantaneous pressure information with good fidelity, especially for flows with strong periodicity. This technique extends the application range of fast PSP and can be a powerful tool for fundamental fluid mechanics research at low speed.
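
The POD denoising step can be sketched via the SVD of the mean-subtracted snapshot matrix: keeping only the leading modes retains the dominant coherent pressure features while discarding most of the uncorrelated camera noise. A minimal NumPy sketch (function name and mode count are illustrative, not the paper's code):

```python
import numpy as np

def pod_filter(snapshots, n_modes):
    """Denoise a snapshot matrix (n_points x n_times) by projecting its
    fluctuating part onto the leading POD modes, computed via the SVD."""
    mean = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean
    U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
    recon = U[:, :n_modes] * s[:n_modes] @ Vt[:n_modes]  # rank-r reconstruction
    return mean + recon
```

For strongly periodic flows, such as the vortex shedding studied here, most of the signal energy concentrates in a handful of mode pairs, which is why a low-rank reconstruction preserves the instantaneous field so well.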

  4. Real-time computational photon-counting LiDAR

    NASA Astrophysics Data System (ADS)

    Edgar, Matthew; Johnson, Steven; Phillips, David; Padgett, Miles

    2018-03-01

    The availability of compact, low-cost, high-speed MEMS-based spatial light modulators has generated widespread interest in alternative sampling strategies for imaging systems utilizing single-pixel detectors. The development of compressed sensing schemes for real-time computational imaging may have promising commercial applications for high-performance detectors, where focal plane arrays are expensive or otherwise limited. We discuss the research and development of a prototype light detection and ranging (LiDAR) system via direct time of flight, which utilizes a single high-sensitivity photon-counting detector and fast-timing electronics to recover millimeter-accuracy three-dimensional images in real time. The development of low-cost, real-time computational LiDAR systems could be important for applications in security, defense, and autonomous vehicles.

  5. Coupled RANS/LES for SOFIA Cavity Acoustic Prediction

    NASA Technical Reports Server (NTRS)

    Woodruff, Stephen L.

    2010-01-01

    A fast but accurate approach is described for the determination of the aero-acoustic properties of a large cavity at subsonic flight speeds. This approach employs a detached-eddy simulation model in the free-shear layer at the cavity opening and the surrounding boundary layer, but assumes inviscid flow in the cavity and in the far field. The reduced gridding requirements in the cavity, in particular, lead to dramatic improvements in the time required for the computation. Results of these computations are validated against wind-tunnel data. This approach will permit significantly more flight test points to be evaluated computationally in support of the Stratospheric Observatory For Infrared Astronomy flight-test program being carried out at NASA's Dryden Flight Research Center.

  6. Approximate methods for the fast computation of EPR and ST-EPR spectra. V. Application of the perturbation approach to the problem of anisotropic motion

    NASA Astrophysics Data System (ADS)

    Robinson, B. H.; Dalton, L. R.

    1981-01-01

    The modulation perturbation treatment of Galloway and Dalton is applied to the solution of the stochastic Liouville equation for the spin density matrix, which incorporates an anisotropic rotational diffusion operator. Pseudosecular and saturation terms of the spin Hamiltonian are explicitly considered, as is the interaction of the electron spins with the applied Zeeman modulation field. The modulation perturbation treatment results in a factor of four improvement in computational speed relative to inversion of the full supermatrix, with little or no loss of computational accuracy. The theoretical simulations of EPR and ST-EPR spectra are in nearly quantitative agreement with experimental spectra taken under high resolution conditions.

  7. Breaking cover: neural responses to slow and fast camouflage-breaking motion.

    PubMed

    Yin, Jiapeng; Gong, Hongliang; An, Xu; Chen, Zheyuan; Lu, Yiliang; Andolina, Ian M; McLoughlin, Niall; Wang, Wei

    2015-08-22

    Primates need to detect and recognize camouflaged animals in natural environments. Camouflage-breaking movements are often the only visual cue available to accomplish this. Specifically, sudden movements are often detected before full recognition of the camouflaged animal is made, suggesting that initial processing of motion precedes the recognition of motion-defined contours or shapes. What are the neuronal mechanisms underlying this initial processing of camouflaged motion in the primate visual brain? We investigated this question using intrinsic-signal optical imaging of macaque V1, V2 and V4, along with computer simulations of the neural population responses. We found that camouflaged motion at low speed was processed as a direction signal by both direction- and orientation-selective neurons, whereas at high-speed camouflaged motion was encoded as a motion-streak signal primarily by orientation-selective neurons. No population responses were found to be invariant to the camouflage contours. These results suggest that the initial processing of camouflaged motion at low and high speeds is encoded as direction and motion-streak signals in primate early visual cortices. These processes are consistent with a spatio-temporal filter mechanism that provides for fast processing of motion signals, prior to full recognition of camouflage-breaking animals. © 2015 The Authors.

  8. Breaking cover: neural responses to slow and fast camouflage-breaking motion

    PubMed Central

    Yin, Jiapeng; Gong, Hongliang; An, Xu; Chen, Zheyuan; Lu, Yiliang; Andolina, Ian M.; McLoughlin, Niall; Wang, Wei

    2015-01-01

    Primates need to detect and recognize camouflaged animals in natural environments. Camouflage-breaking movements are often the only visual cue available to accomplish this. Specifically, sudden movements are often detected before full recognition of the camouflaged animal is made, suggesting that initial processing of motion precedes the recognition of motion-defined contours or shapes. What are the neuronal mechanisms underlying this initial processing of camouflaged motion in the primate visual brain? We investigated this question using intrinsic-signal optical imaging of macaque V1, V2 and V4, along with computer simulations of the neural population responses. We found that camouflaged motion at low speed was processed as a direction signal by both direction- and orientation-selective neurons, whereas at high-speed camouflaged motion was encoded as a motion-streak signal primarily by orientation-selective neurons. No population responses were found to be invariant to the camouflage contours. These results suggest that the initial processing of camouflaged motion at low and high speeds is encoded as direction and motion-streak signals in primate early visual cortices. These processes are consistent with a spatio-temporal filter mechanism that provides for fast processing of motion signals, prior to full recognition of camouflage-breaking animals. PMID:26269500

  9. PCI bus content-addressable-memory (CAM) implementation on FPGA for pattern recognition/image retrieval in a distributed environment

    NASA Astrophysics Data System (ADS)

    Megherbi, Dalila B.; Yan, Yin; Tanmay, Parikh; Khoury, Jed; Woods, C. L.

    2004-11-01

    Recently, surveillance and Automatic Target Recognition (ATR) applications have been increasing as the cost of the computing power needed to process the massive amount of information continues to fall. This computing power has been made possible partly by the latest advances in FPGAs and SOPCs. In particular, to design and implement state-of-the-art electro-optical imaging systems that provide advanced surveillance capabilities, there is a need to integrate several technologies (e.g., telescopes, precise optics, cameras, and image/computer vision algorithms, which can be geographically distributed or share distributed resources) into programmable systems and DSP systems. Additionally, pattern recognition techniques and fast information retrieval are often important components of intelligent systems. The aim of this work is to use an embedded FPGA as a fast, configurable, and synthesizable search engine for fast image pattern recognition/retrieval in a distributed hardware/software co-design environment. In particular, we propose and demonstrate a low-cost Content Addressable Memory (CAM)-based distributed embedded FPGA hardware architecture with real-time recognition and computing capabilities for pattern look-up, pattern recognition, and image retrieval. We show how the distributed CAM-based architecture offers an order-of-magnitude performance advantage over RAM (Random Access Memory)-based search for implementing high-speed pattern recognition for image retrieval. The methods of designing, implementing, and analyzing the proposed CAM-based embedded architecture are described here. Other SOPC solutions/design issues are covered. Finally, experimental results, hardware verification, and performance evaluations using both the Xilinx Virtex-II and the Altera Apex20k are provided to show the potential and power of the proposed method for low-cost reconfigurable fast image pattern recognition/retrieval at the hardware/software co-design level.
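    As a software analogy for the CAM-versus-RAM comparison above (not the paper's FPGA design), a hash table answers "which address holds this pattern?" in a single associative lookup, while a RAM-style search must scan addresses one by one:

    ```python
    # Illustrative sketch only: in software, a dict plays the role of the CAM.
    patterns = [f"pattern-{i:05d}" for i in range(50_000)]

    # RAM-style: linear scan over memory contents to find a matching pattern.
    def ram_search(memory, key):
        for addr, value in enumerate(memory):
            if value == key:
                return addr
        return -1

    # CAM-style: content -> address resolved in one associative lookup.
    cam = {value: addr for addr, value in enumerate(patterns)}

    key = "pattern-49999"
    assert ram_search(patterns, key) == cam[key] == 49999
    ```

    On hardware the CAM compares the key against all stored words in parallel in one cycle, which is the source of the order-of-magnitude advantage reported above.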

  10. Broadband Terahertz Computed Tomography Using a 5k-pixel Real-time THz Camera

    NASA Astrophysics Data System (ADS)

    Trichopoulos, Georgios C.; Sertel, Kubilay

    2015-07-01

    We present a novel THz computed tomography system that enables fast 3-dimensional imaging and spectroscopy in the 0.6-1.2 THz band. The system is based on a new real-time broadband THz camera that enables rapid acquisition of multiple cross-sectional images required in computed tomography. Tomographic reconstruction is achieved using digital images from the densely-packed large-format (80×64) focal plane array sensor located behind a hyper-hemispherical silicon lens. Each pixel of the sensor array consists of an 85 μm × 92 μm lithographically fabricated wideband dual-slot antenna, monolithically integrated with an ultra-fast diode tuned to operate in the 0.6-1.2 THz regime. Concurrently, optimum impedance matching was implemented for maximum pixel sensitivity, enabling 5 frames-per-second image acquisition speed. As such, the THz computed tomography system generates diffraction-limited resolution cross-section images as well as the three-dimensional models of various opaque and partially transparent objects. As an example, an over-the-counter vitamin supplement pill is imaged and its material composition is reconstructed. The new THz camera enables, for the first time, a practical application of THz computed tomography for non-destructive evaluation and biomedical imaging.

  11. An efficient 3-dim FFT for plane wave electronic structure calculations on massively parallel machines composed of multiprocessor nodes

    NASA Astrophysics Data System (ADS)

    Goedecker, Stefan; Boulet, Mireille; Deutsch, Thierry

    2003-08-01

    Three-dimensional Fast Fourier Transforms (FFTs) are the main computational task in plane wave electronic structure calculations. Obtaining high performance on a large number of processors is non-trivial on the latest generation of parallel computers, which consist of nodes made up of shared-memory multiprocessors. A non-dogmatic method for obtaining high performance for such 3-dim FFTs in a combined MPI/OpenMP programming paradigm is presented. Exploiting the peculiarities of plane wave electronic structure calculations, speedups of up to 160 and speeds of up to 130 Gflops were obtained on 256 processors.
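    The slab-decomposed 3-dim FFT underlying such distributed implementations can be sketched on a single process (a conceptual sketch, not the authors' code): 2D FFTs are applied within each slab, then the remaining axis is transformed; on a parallel machine the step in between becomes the MPI all-to-all exchange.

    ```python
    import numpy as np

    def fft3d_slab(a):
        # Stage 1: 2D FFT over the last two axes (local to each z-slab).
        b = np.fft.fftn(a, axes=(1, 2))
        # Stage 2: transform the remaining axis. On a distributed machine the
        # data would first be transposed across nodes (all-to-all communication)
        # so that this axis becomes local to each node.
        return np.fft.fft(b, axis=0)

    rng = np.random.default_rng(1)
    x = rng.standard_normal((8, 8, 8))
    assert np.allclose(fft3d_slab(x), np.fft.fftn(x))
    ```

    Because the FFT is separable, the staged result matches the direct 3D transform exactly; the parallel-programming work lies entirely in organizing the transpose.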

  12. Memory efficient solution of the primitive equations for numerical weather prediction on the CYBER 205

    NASA Technical Reports Server (NTRS)

    Tuccillo, J. J.

    1984-01-01

    Numerical Weather Prediction (NWP), for both operational and research purposes, requires not only fast computational speed but also large memory. A technique for solving the primitive equations for atmospheric motion on the CYBER 205, as implemented in the Mesoscale Atmospheric Simulation System, is discussed; it is fully vectorized and requires substantially less memory than other techniques such as the leapfrog or Adams-Bashforth schemes. The technique presented uses the Euler-backward time-marching scheme. Also discussed are several techniques for reducing the computational time of the model by replacing slow intrinsic routines with faster algorithms that use only hardware vector instructions.
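    The memory advantage of the Euler-backward (Matsuno) scheme is that it needs no history of previous time levels, unlike leapfrog or Adams-Bashforth. A minimal sketch on the scalar oscillation equation du/dt = iωu (an illustrative toy problem, not the primitive equations):

    ```python
    import numpy as np

    def matsuno_step(u, f, dt):
        u_star = u + dt * f(u)          # predictor: forward Euler
        return u + dt * f(u_star)       # corrector: backward-Euler-like step

    omega = 1.0
    f = lambda u: 1j * omega * u        # oscillation equation du/dt = i*omega*u

    dt, n = 0.001, 1000
    u = 1.0 + 0.0j
    for _ in range(n):
        u = matsuno_step(u, f, dt)      # only the current state is stored

    exact = np.exp(1j * omega * dt * n)
    assert abs(u - exact) < 0.01        # first-order accurate, slightly damped
    ```

    Only one state vector is carried between steps, whereas leapfrog keeps two time levels and Adams-Bashforth keeps previous tendencies as well.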

  13. Conic state extrapolation. [computer program for space shuttle navigation and guidance requirements

    NASA Technical Reports Server (NTRS)

    Shepperd, S. W.; Robertson, W. M.

    1973-01-01

    The Conic State Extrapolation Routine provides the capability to conically extrapolate any spacecraft inertial state vector either backwards or forwards as a function of time or as a function of transfer angle. It is merely the coded form of two versions of the solution of the two-body differential equations of motion of the spacecraft center of mass. Because of its relatively fast computation speed and moderate accuracy, it serves as a preliminary navigation tool and as a method of obtaining quick solutions for targeting and guidance functions. More accurate (but slower) results are provided by the Precision State Extrapolation Routine.
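    The two-body extrapolation idea can be sketched for the elliptic case by solving Kepler's equation with Newton iteration; the routine itself uses a more general formulation valid for any conic, and the orbit parameters below are illustrative assumptions:

    ```python
    import math

    def solve_kepler(M, e, tol=1e-12):
        """Solve M = E - e*sin(E) for the eccentric anomaly E."""
        E = M if e < 0.8 else math.pi       # starting guess
        for _ in range(50):
            dE = (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
            E -= dE
            if abs(dE) < tol:
                break
        return E

    mu = 398600.4418          # km^3/s^2, Earth's gravitational parameter
    a, e = 8000.0, 0.1        # semi-major axis (km), eccentricity
    n = math.sqrt(mu / a**3)  # mean motion (rad/s)

    t = 600.0                 # extrapolate 10 minutes past perigee
    M = n * t
    E = solve_kepler(M, e)
    r = a * (1 - e * math.cos(E))            # orbital radius at time t

    assert a * (1 - e) <= r <= a * (1 + e)   # radius stays on the ellipse
    ```

    Speed comes from the fact that only a scalar transcendental equation is iterated, rather than numerically integrating the equations of motion.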

  14. Membrane-based actuation for high-speed single molecule force spectroscopy studies using AFM.

    PubMed

    Sarangapani, Krishna; Torun, Hamdi; Finkler, Ofer; Zhu, Cheng; Degertekin, Levent

    2010-07-01

    Atomic force microscopy (AFM)-based dynamic force spectroscopy of single molecular interactions involves characterizing unbinding/unfolding force distributions over a range of pulling speeds. Owing to their size and stiffness, AFM cantilevers are adversely affected by hydrodynamic forces, especially at pulling speeds >10 μm/s, when the viscous drag becomes comparable to the unbinding/unfolding forces. To circumvent these adverse effects, we have fabricated polymer-based membranes capable of actuating commercial AFM cantilevers at speeds ≥100 μm/s with minimal viscous drag effects. We have used FLUENT, a computational fluid dynamics (CFD) software package, to simulate high-speed pulling and fast actuation of AFM cantilevers and membranes in different experimental configurations. The simulation results support the experimental findings on a variety of commercial AFM cantilevers and predict a significant reduction in drag forces when membrane actuators are used. Unbinding force experiments involving human antibodies using these membranes demonstrate that it is possible to achieve bond loading rates ≥10⁶ pN/s, an order of magnitude greater than that reported with commercial AFM cantilevers and systems.

  15. A multi-emitter fitting algorithm for potential live cell super-resolution imaging over a wide range of molecular densities.

    PubMed

    Takeshima, T; Takahashi, T; Yamashita, J; Okada, Y; Watanabe, S

    2018-05-25

    Multi-emitter fitting algorithms have been developed to improve the temporal resolution of single-molecule switching nanoscopy, but the molecular density range they can analyse is narrow and the computation required is intensive, significantly limiting their practical application. Here, we propose a computationally fast method, wedged template matching (WTM), an algorithm that uses a template matching technique to localise molecules at any overlapping molecular density, from sparse to ultrahigh density, with subdiffraction resolution. WTM achieves the localization of overlapping molecules at densities up to 600 molecules μm⁻² with high detection sensitivity and fast computational speed. WTM also shows localization precision comparable with that of DAOSTORM (an algorithm for high-density super-resolution microscopy) at densities up to 20 molecules μm⁻², and better than DAOSTORM at higher molecular densities. The application of WTM to a high-density biological sample image demonstrated that it resolved protein dynamics from live cell images with subdiffraction resolution and a temporal resolution of several hundred milliseconds or less, through a significant reduction in the number of camera images required for a high-density reconstruction. The WTM algorithm is a computationally fast, multi-emitter fitting algorithm that can analyse data over a wide range of molecular densities. The algorithm is available at https://doi.org/10.17632/bf3z6xpn5j.1. © 2018 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of the Royal Microscopical Society.
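    The template-matching core of such an approach (a generic sketch, not the published WTM algorithm) correlates the image with a point-spread-function template and reports the best-scoring position:

    ```python
    import numpy as np

    def gaussian_psf(size, sigma):
        """A square Gaussian point-spread-function template."""
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        return np.exp(-(xx**2 + yy**2) / (2 * sigma**2))

    def match_emitter(image, template):
        """Brute-force correlation: return the best-matching center pixel."""
        th, tw = template.shape
        best, best_pos = -np.inf, None
        for i in range(image.shape[0] - th + 1):
            for j in range(image.shape[1] - tw + 1):
                score = np.sum(image[i:i + th, j:j + tw] * template)
                if score > best:
                    best, best_pos = score, (i + th // 2, j + tw // 2)
        return best_pos

    rng = np.random.default_rng(3)
    img = 0.1 * rng.random((32, 32))       # background noise
    psf = gaussian_psf(7, sigma=1.5)
    img[10:17, 20:27] += psf               # plant an emitter centered at (13, 23)

    assert match_emitter(img, psf) == (13, 23)
    ```

    Real implementations replace the brute-force loop with FFT-based correlation and refine overlapping matches, which is where the speed and high-density capability come from.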

  16. Development and Validation of a Fast, Accurate and Cost-Effective Aeroservoelastic Method on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Goodwin, Sabine A.; Raj, P.

    1999-01-01

    Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.

  17. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions

    NASA Astrophysics Data System (ADS)

    Donahue, William; Newhauser, Wayne D.; Ziegler, James F.

    2016-09-01

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u⁻¹ to 450 MeV u⁻¹ or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.
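    The paper's 6-parameter model is not reproduced here, but the analytic-model idea can be illustrated with the classic Bragg-Kleeman power law R = αEᵖ; the coefficients below are commonly quoted approximate values for protons in water (E in MeV, R in cm), used purely for illustration:

    ```python
    # Bragg-Kleeman power-law range-energy relation for protons in water.
    ALPHA, P = 0.0022, 1.77     # approximate literature values (assumptions)

    def proton_range_cm(E_MeV):
        """Range R = alpha * E^p, in cm of water."""
        return ALPHA * E_MeV ** P

    def stopping_power_MeV_per_cm(E_MeV):
        # dE/dx follows by differentiating the range-energy relation:
        # dR/dE = alpha * p * E^(p-1)  =>  dE/dx = 1 / (alpha * p * E^(p-1))
        return 1.0 / (ALPHA * P * E_MeV ** (P - 1))

    r = proton_range_cm(150.0)
    s = stopping_power_MeV_per_cm(150.0)
    assert 14.0 < r < 17.0      # ~15-16 cm, close to tabulated CSDA ranges
    assert 4.0 < s < 7.0        # ~5 MeV/cm at 150 MeV
    ```

    This captures why analytic models are fast: range and stopping power reduce to a single power evaluation, with no numerical integration.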

  18. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions.

    PubMed

    Donahue, William; Newhauser, Wayne D; Ziegler, James F

    2016-09-07

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u⁻¹ to 450 MeV u⁻¹ or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.

  19. Fast thought speed induces risk taking.

    PubMed

    Chandler, Jesse J; Pronin, Emily

    2012-04-01

    In two experiments, we tested for a causal link between thought speed and risk taking. In Experiment 1, we manipulated thought speed by presenting neutral-content text at either a fast or a slow pace and having participants read the text aloud. In Experiment 2, we manipulated thought speed by presenting fast-, medium-, or slow-paced movie clips that contained similar content. Participants who were induced to think more quickly took more risks with actual money in Experiment 1 and reported greater intentions to engage in real-world risky behaviors, such as unprotected sex and illegal drug use, in Experiment 2. These experiments provide evidence that faster thinking induces greater risk taking.

  20. Projected role of advanced computational aerodynamic methods at the Lockheed-Georgia company

    NASA Technical Reports Server (NTRS)

    Lores, M. E.

    1978-01-01

    Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large and specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should be used to provide a tool for designing better aerospace vehicles while at the same time reducing development costs, by performing computations using Navier-Stokes solution algorithms and by permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry; therefore, the computer should handle many remote terminals efficiently. The capability of transferring data to and from other computers needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations, and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.

  1. Accelerating non-contrast-enhanced MR angiography with inflow inversion recovery imaging by skipped phase encoding and edge deghosting (SPEED).

    PubMed

    Chang, Zheng; Xiang, Qing-San; Shen, Hao; Yin, Fang-Fang

    2010-03-01

    The purpose of this work was to accelerate non-contrast-enhanced MR angiography (MRA) with inflow inversion recovery (IFIR) using a fast imaging method, Skipped Phase Encoding and Edge Deghosting (SPEED). IFIR imaging uses a preparatory inversion pulse to reduce signals from static tissue while leaving inflow arterial blood unaffected, resulting in sparse arterial vasculature on a modest tissue background. By taking advantage of vascular sparsity, SPEED can be simplified with a single-layer model to achieve higher efficiency in both scan time reduction and image reconstruction. SPEED can also make use of information available in multiple coils for further acceleration. The techniques are demonstrated with a three-dimensional renal non-contrast-enhanced IFIR MRA study. Images are reconstructed by SPEED based on a single-layer model to achieve an undersampling factor of up to 2.5 using one skipped phase encoding direction. By making use of information available in multiple coils, SPEED can achieve an undersampling factor of up to 8.3 with four receiver coils. The reconstructed images generally have quality comparable to that of the reference images reconstructed from full k-space data. As demonstrated with a three-dimensional renal IFIR scan, SPEED based on a single-layer model is able to reduce scan time further and achieve higher computational efficiency than the original SPEED.

  2. Do Athletes Excel at Everyday Tasks?

    PubMed Central

    CHADDOCK, LAURA; NEIDER, MARK B.; VOSS, MICHELLE W.; GASPAR, JOHN G.; KRAMER, ARTHUR F.

    2014-01-01

    Purpose: Cognitive enhancements are associated with sport training. We extended the sport-cognition literature by using a realistic street crossing task to examine the multitasking and processing speed abilities of collegiate athletes and nonathletes. Methods: Pedestrians navigated trafficked roads by walking on a treadmill in a virtual world, a challenge that requires the quick and simultaneous processing of multiple streams of information. Results: Athletes had higher street crossing success rates than nonathletes, as reflected by fewer collisions with moving vehicles. Athletes also showed faster processing speed on a computer-based test of simple reaction time, and shorter reaction times were associated with higher street crossing success rates. Conclusions: The results suggest that participation in athletics relates to superior street crossing multitasking abilities and that athlete and nonathlete differences in processing speed may underlie this difference. We suggest that cognitive skills trained in sport may transfer to performance on everyday fast-paced multitasking abilities. PMID:21407125

  3. SKL algorithm based fabric image matching and retrieval

    NASA Astrophysics Data System (ADS)

    Cao, Yichen; Zhang, Xueqin; Ma, Guojian; Sun, Rongqing; Dong, Deping

    2017-07-01

    Intelligent computer image processing technology provides convenience and possibilities for designers to carry out designs. Shape analysis can be achieved by extracting SURF features. However, the high dimensionality of SURF features lowers matching speed. To solve this problem, this paper proposes a fast fabric image matching algorithm based on SURF, the K-means algorithm, and the LSH algorithm. By constructing a bag of visual words with the K-means algorithm and forming a feature histogram for each image, the dimensionality of the SURF features is reduced in a first step. Then, with the help of the LSH algorithm, the features are encoded and the dimensionality is further reduced. In addition, indexes for each image and each class of images are created, and the number of candidate matches is reduced by LSH hash buckets. Experiments on a fabric image database show that this algorithm speeds up the matching and retrieval process, and that the results satisfy the requirements of dress designers in both accuracy and speed.
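    The LSH encoding step can be sketched with random-projection signatures: each feature histogram is reduced to a short binary code, so near-duplicate images tend to share hash buckets. The dimensions, bit counts, and vectors below are illustrative assumptions, not the paper's settings:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    DIM, BITS = 128, 16                       # histogram size, signature length
    planes = rng.standard_normal((BITS, DIM)) # random hyperplanes, fixed once

    def lsh_signature(hist):
        """Sign of each random projection gives one bit of the signature."""
        return tuple((planes @ hist > 0).astype(int))

    base = rng.random(DIM)                            # a feature histogram
    similar = base + 0.001 * rng.standard_normal(DIM) # near-duplicate image
    different = rng.standard_normal(DIM)              # unrelated image

    sig_a, sig_b, sig_c = map(lsh_signature, (base, similar, different))
    hamming = lambda s, t: sum(x != y for x, y in zip(s, t))
    assert hamming(sig_a, sig_b) <= hamming(sig_a, sig_c)
    ```

    Retrieval then compares 16-bit codes (or just bucket membership) instead of 128-dimensional histograms, which is where the speedup comes from.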

  4. Further optimization of SeDDaRA blind image deconvolution algorithm and its DSP implementation

    NASA Astrophysics Data System (ADS)

    Wen, Bo; Zhang, Qiheng; Zhang, Jianlin

    2011-11-01

    An efficient algorithm for blind image deconvolution and its high-speed implementation are of great practical value. A further optimization of SeDDaRA is developed, covering both the algorithm structure and the numerical calculation methods. The main optimizations are: modularizing the structure for good implementation feasibility, reducing the data computation and the dependency on 2D FFT/IFFT, and accelerating the power operation with a segmented look-up table. The resulting Fast SeDDaRA is proposed and specialized for low complexity. As the final implementation, a hardware image restoration system is built using multi-DSP parallel processing. Experimental results show that the processing time and memory demand of Fast SeDDaRA decrease by at least 50%, and that the data throughput of the image restoration system is over 7.8 Msps. The optimization is proved efficient and feasible, and Fast SeDDaRA is able to support real-time applications.
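    The look-up-table acceleration of the power operation can be sketched in plain NumPy: x**γ over a known input range is precomputed at N sample points and evaluated by table indexing, trading a little accuracy for speed. (The original targets fixed-point DSP hardware; the table size and exponent below are illustrative.)

    ```python
    import numpy as np

    GAMMA, N = 0.5, 4096
    grid = np.linspace(0.0, 1.0, N)
    table = grid ** GAMMA                      # precomputed once, offline

    def pow_lut(x):
        """Approximate x**GAMMA for x in [0, 1] by table indexing."""
        idx = np.clip((x * (N - 1)).astype(int), 0, N - 1)
        return table[idx]

    x = np.linspace(0.0, 1.0, 10_000)
    err = np.max(np.abs(pow_lut(x) - x ** GAMMA))
    assert err < 0.02    # coarsest near 0 where x**0.5 is steep, still small
    ```

    A "segmented" table refines this by using denser sampling where the function is steep and coarser sampling elsewhere, keeping the table small at the same accuracy.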

  5. Fast algorithms for Quadrature by Expansion I: Globally valid expansions

    NASA Astrophysics Data System (ADS)

    Rachh, Manas; Klöckner, Andreas; O'Neil, Michael

    2017-09-01

    The use of integral equation methods for the efficient numerical solution of PDE boundary value problems requires two main tools: quadrature rules for the evaluation of layer potential integral operators with singular kernels, and fast algorithms for solving the resulting dense linear systems. Classically, these tools were developed separately. In this work, we present a unified numerical scheme based on coupling Quadrature by Expansion, a recent quadrature method, to a customized Fast Multipole Method (FMM) for the Helmholtz equation in two dimensions. The method allows the evaluation of layer potentials in linear-time complexity, anywhere in space, with a uniform, user-chosen level of accuracy as a black-box computational method. Providing this capability requires geometric and algorithmic considerations beyond the needs of standard FMMs as well as careful consideration of the accuracy of multipole translations. We illustrate the speed and accuracy of our method with various numerical examples.

  6. Thought Speed, Mood, and the Experience of Mental Motion.

    PubMed

    Pronin, Emily; Jacobs, Elana

    2008-11-01

    This article presents a theoretical account relating thought speed to mood and psychological experience. Thought sequences that occur at a fast speed generally induce more positive affect than do those that occur slowly. Thought speed constitutes one aspect of mental motion. Another aspect involves thought variability, or the degree to which thoughts in a sequence either vary widely from or revolve closely around a theme. Thought sequences possessing more motion (occurring fast and varying widely) generally produce more positive affect than do sequences possessing little motion (occurring slowly and repetitively). When speed and variability oppose each other, such that one is low and the other is high, predictable psychological states also emerge. For example, whereas slow, repetitive thinking can prompt dejection, fast, repetitive thinking can prompt anxiety. This distinction is related to the fact that fast thinking involves greater actual and felt energy than slow thinking does. Effects of mental motion occur independent of the specific content of thought. Their consequences for mood and energy hold psychotherapeutic relevance. © 2008 Association for Psychological Science.

  7. Towards a general neural controller for quadrupedal locomotion.

    PubMed

    Maufroy, Christophe; Kimura, Hiroshi; Takase, Kunikatsu

    2008-05-01

    Our study aims at the design and implementation of a general controller for quadruped locomotion, allowing the robot to use the whole range of quadrupedal gaits (i.e. from low speed walking to fast running). A general legged locomotion controller must integrate both posture control and rhythmic motion control and have the ability to shift continuously from one control method to the other according to locomotion speed. We are developing such a general quadrupedal locomotion controller by using a neural model involving a CPG (Central Pattern Generator) utilizing ground reaction force sensory feedback. We used a biologically faithful musculoskeletal model with a spine and hind legs, and computationally simulated stable stepping motion at various speeds using the neuro-mechanical system combining the neural controller and the musculoskeletal model. We compared the changes of the most important locomotion characteristics (stepping period, duty ratio and support length) according to speed in our simulations with the data on real cat walking. We found similar tendencies for all of them. In particular, the swing period was approximately constant while the stance period decreased with speed, resulting in a decreasing stepping period and duty ratio. Moreover, the support length increased with speed due to the posterior extreme position that shifted progressively caudally, while the anterior extreme position was approximately constant. This indicates that we succeeded in reproducing to some extent the motion of a cat from the kinematical point of view, even though we used a 2D bipedal model. We expect that such computational models will become essential tools for legged locomotion neuroscience in the future.

  8. Fast hierarchical knowledge-based approach for human face detection in color images

    NASA Astrophysics Data System (ADS)

    Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan

    2001-09-01

    This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the color attributes hue and saturation in HSV color space, as well as the attributes red and green in normalized color space. In level 2, a new eye model is devised to select human face candidates in the segmented skin-like regions. An important feature of the eye model is that it is independent of the scale of the face, so faces at different scales can be found by scanning the image only once, which greatly reduces the computation time of face detection. In level 3, a human face mosaic image model, consistent with the physical structure of the human face, is applied to judge whether faces are present in the candidate regions. This model includes edge and gray rules. Experimental results show that the approach is highly robust and fast. It has wide application prospects in human-computer interaction, video telephony, etc.
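    Level-1 skin segmentation by hue/saturation thresholding can be sketched as follows; the threshold values are illustrative placeholders, not those of the paper:

    ```python
    import numpy as np

    def skin_mask(hsv):
        """hsv: (H, W, 3) float array, hue in [0, 360), saturation in [0, 1]."""
        h, s = hsv[..., 0], hsv[..., 1]
        # Skin tones cluster at low (reddish) hues with moderate saturation;
        # these bounds are illustrative assumptions only.
        return ((h < 50) | (h > 340)) & (s > 0.1) & (s < 0.7)

    img = np.zeros((2, 2, 3))
    img[0, 0] = (20, 0.4, 0.9)    # skin-like pixel
    img[1, 1] = (120, 0.8, 0.5)   # saturated green, not skin
    mask = skin_mask(img)
    assert mask[0, 0] and not mask[1, 1]
    ```

    The connected skin-like regions this produces are what the level-2 eye model then scans for face candidates.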

  9. Gait characteristics, balance performance and falls in ambulant adults with cerebral palsy: An observational study.

    PubMed

    Morgan, P; Murphy, A; Opheim, A; McGinley, J

    2016-07-01

    The relationship between spatiotemporal gait parameters, balance performance and falls history was investigated in ambulant adults with cerebral palsy (CP). Participants completed a single assessment of gait using an instrumented walkway at preferred and fast speeds, balance testing (Balance Evaluation Systems Test; BESTest), and reported falls history. Seventeen ambulatory adults with CP, mean age 37 years, participated. Gait speed was typically slow at both preferred and fast speeds (mean 0.97 and 1.21m/s, respectively), with short stride length and high cadence relative to speed. There was a significant, large positive relationship between preferred gait speed and BESTest total score (ρ=0.573; p<0.05) and fast gait speed and BESTest total score (ρ=0.647, p<0.01). The stride lengths of fallers at both preferred and fast speeds differed significantly from non-fallers (p=0.032 and p=0.025, respectively), with those with a prior history of falls taking shorter strides. Faster gait speed was associated with better performance on tests of anticipatory and postural response components of the BESTest, suggesting potential therapeutic training targets to address either gait speed or balance performance. Future exploration of the implications of slow walking speed and reduced stride length on falls and community engagement, and the potential prognostic value of stride length on identifying falls risk is recommended. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
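    The reported associations are Spearman rank correlations (ρ) between gait speed and BESTest total score; a minimal sketch of how such a coefficient is computed, on made-up data (tie handling omitted):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    def rank(a):
        order = np.argsort(a)
        r = np.empty(len(a))
        r[order] = np.arange(1, len(a) + 1)
        return r  # no tie handling in this sketch
    rx, ry = rank(np.asarray(x, float)), rank(np.asarray(y, float))
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Illustrative (not the study's) data: gait speed in m/s vs balance score
speed = [0.7, 0.9, 1.0, 1.2, 1.3]
score = [60, 72, 70, 88, 95]
rho = spearman_rho(speed, score)   # 0.9 for this toy data
```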

  10. Vitamin D and walking speed in older adults: Systematic review and meta-analysis.

    PubMed

    Annweiler, Cedric; Henni, Samir; Walrand, Stéphane; Montero-Odasso, Manuel; Duque, Gustavo; Duval, Guillaume T

    2017-12-01

    Vitamin D is involved in musculoskeletal health. There is no consensus on a possible association between circulating 25-hydroxyvitamin D (25OHD) concentrations and walking speed, a 'vital sign' in older adults. Our objective was to systematically review and quantitatively assess the association of 25OHD concentration with walking speed. A Medline search was conducted in June 2017, with no date limit, using the MeSH terms "Vitamin D" OR "Vitamin D Deficiency" combined with "Gait" OR "Gait disorders, Neurologic" OR "Walking speed" OR "Gait velocity". Fixed-effect meta-analyses were performed to compute: i) mean differences in usual and fast walking speeds and Timed Up and Go test (TUG) between participants with severe vitamin D deficiency (≤25nmol/L) (SVDD), vitamin D deficiency (≤50nmol/L) (VDD), vitamin D insufficiency (≤75nmol/L) (VDI) and normal vitamin D (>75nmol/L) (NVD); ii) risk of slow walking speed according to vitamin D status. Of the 243 retrieved studies, 22 observational studies (17 cross-sectional, 5 longitudinal) met the selection criteria. The number of participants ranged between 54 and 4100 (0-100% female). Usual walking speed was slower among participants with hypovitaminosis D, with a clinically relevant difference compared with NVD of -0.18m/s for SVDD, -0.08m/s for VDD and -0.12m/s for VDI. We found similar results regarding the fast walking speed (mean differences -0.04m/s for VDD and VDI compared with NVD) and TUG (mean difference 0.48s for SVDD compared with NVD). A slow usual walking speed was positively associated with SVDD (summary OR=2.17[95%CI:1.52-3.10]), VDD (OR=1.38[95%CI:1.01-1.89]) and VDI (OR=1.38[95%CI:1.04-1.83]), using NVD as the reference. In conclusion, this meta-analysis provides robust evidence that 25OHD concentrations are positively associated with walking speed among adults. Copyright © 2017. Published by Elsevier B.V.
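    The fixed-effect meta-analyses pool study-level mean differences by inverse-variance weighting; a minimal sketch with made-up study values (not the review's data):

```python
import math

def fixed_effect_pool(estimates, std_errors):
    """Inverse-variance fixed-effect pooled estimate and its standard error."""
    w = [1.0 / se ** 2 for se in std_errors]          # inverse-variance weights
    pooled = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return pooled, se

# Illustrative study-level mean differences in walking speed (m/s)
md = [-0.20, -0.15, -0.10]
se_md = [0.05, 0.04, 0.08]
pooled, se = fixed_effect_pool(md, se_md)
ci_low, ci_high = pooled - 1.96 * se, pooled + 1.96 * se
```

    Precise studies (small standard errors) dominate the pooled estimate, which is why the summary difference need not be a simple average of the study differences.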

  11. Passive scalar entrainment and mixing in a forced, spatially-developing mixing layer

    NASA Technical Reports Server (NTRS)

    Lowery, P. S.; Reynolds, W. C.; Mansour, N. N.

    1987-01-01

    Numerical simulations are performed for the forced, spatially-developing plane mixing layer in two and three dimensions. Transport of a passive scalar field is included in the computation. This, together with the allowance for spatial development in the simulations, affords the opportunity for study of the asymmetric entrainment of irrotational fluid into the layer. The inclusion of a passive scalar field provides a means for simulating the effect of this entrainment asymmetry on the generation of 'products' from a 'fast' chemical reaction. Further, the three-dimensional simulations provide useful insight into the effect of streamwise structures on these entrainment and 'fast' reaction processes. Results from a two-dimensional simulation indicate that 1.22 parts of high-speed fluid are entrained for every one part of low-speed fluid. Inclusion of streamwise vortices at the inlet plane of a three-dimensional simulation indicates a further increase in asymmetric entrainment, to 1.44:1. Results from a final three-dimensional simulation are presented. In this case, a random velocity perturbation is imposed at the inlet plane. The results indicate the 'natural' development of the large spanwise structures characteristic of the mixing layer.

  12. New automatic mode of visualizing the colon via Cine CT

    NASA Astrophysics Data System (ADS)

    Udupa, Jayaram K.; Odhner, Dewey; Eisenberg, Harvey C.

    2001-05-01

    Methods of visualizing the inner colonic wall using CT images have been actively pursued in recent years in an attempt to eventually replace conventional colonoscopic examination. In spite of impressive progress in this direction, there are still several problems that need satisfactory solutions. Among these, we address three problems in this paper: segmentation, coverage, and speed of rendering. Instead of thresholding, we utilize the fuzzy connectedness framework to segment the colonic wall. Instead of the endoscopic viewing mode and various mapping techniques, we utilize the central line through the colon to automatically generate viewing directions that are en face with respect to the colon wall, thereby avoiding blind spots in viewing. We utilize some modifications of the ultra-fast shell rendering framework to ensure fast rendering speed. The combined effect of these developments is that a colon study requires an initial 5 minutes of operator time plus an additional 5 minutes of computational time, and subsequently en face renditions are created in real time (15 frames/sec) on a 1 GHz Pentium PC under the Linux operating system.

  13. Virtual trajectories of single-joint movements performed under two basic strategies.

    PubMed

    Latash, M L; Gottlieb, G L

    1992-01-01

    The framework of the equilibrium point hypothesis has been used to analyse motor control processes for single-joint movements. Virtual trajectories and joint stiffness were reconstructed for different movement speeds and distances when subjects were instructed either to move "as fast as possible" or to intentionally vary movement speed. These instructions are assumed to be associated with similar or different rates of change of hypothetical central control variables (corresponding to the speed-sensitive and speed-insensitive strategies). The subjects were trained to perform relatively slow, moderately fast and very fast (nominal movement times 800, 400 and 250 ms) single-joint elbow flexion movements against a constant extending torque bias. They were instructed to reproduce the motor command for a series of movements while ignoring possible changes in the external torque which could slowly and unpredictably increase, decrease, or remain constant. The total muscle torque was calculated as a sum of external and inertial components. Fast movements over different distances were made with the speed-insensitive strategy. They were characterized by an increase in joint stiffness near the midpoint of the movements which was relatively independent of movement amplitude. Their virtual trajectories had a non-monotonic N-shape. All three arms of the N-shape scaled with movement amplitude. Movements over one distance at different speeds were made with a speed-sensitive strategy. They demonstrated different patterns of virtual trajectories and joint stiffness that depended on movement speed. The N-shape became less apparent for moderately fast movements and virtually disappeared for the slow movements. Slow movements showed no visible increase in joint stiffness.(ABSTRACT TRUNCATED AT 250 WORDS)

  14. In-silico experiments of zebrafish behaviour: modeling swimming in three dimensions

    NASA Astrophysics Data System (ADS)

    Mwaffo, Violet; Butail, Sachit; Porfiri, Maurizio

    2017-01-01

    Zebrafish is fast becoming a species of choice in biomedical research for the investigation of functional and dysfunctional processes coupled with their genetic and pharmacological modulation. As with mammals, experimentation with zebrafish constitutes a complicated ethical issue that calls for the exploration of alternative testing methods to reduce the number of subjects, refine experimental designs, and replace live animals. Inspired by the demonstrated advantages of computational studies in other life science domains, we establish an authentic data-driven modelling framework to simulate zebrafish swimming in three dimensions. The model encapsulates burst-and-coast swimming style, speed modulation, and wall interaction, laying the foundations for in-silico experiments of zebrafish behaviour. Through computational studies, we demonstrate the ability of the model to replicate common ethological observables such as speed and spatial preference, and anticipate experimental observations on the influence of tank dimensions on zebrafish behaviour. Extending to other experimental paradigms, our framework is expected to contribute to a reduction in animal use and suffering.
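    The burst-and-coast style can be caricatured as random speed kicks followed by exponential decay while coasting; a toy one-dimensional sketch with illustrative parameters (not the paper's calibrated data-driven model):

```python
import math
import random

def burst_and_coast(steps, dt=0.1, decay=1.5, burst_rate=0.5,
                    burst_gain=2.0, seed=0):
    """Simulate a toy burst-and-coast speed trace.

    Speed decays exponentially (coasting) and receives random positive
    kicks (bursts) arriving as a Poisson process of rate burst_rate.
    All parameters are illustrative assumptions.
    """
    rng = random.Random(seed)
    v, trace = 0.0, []
    for _ in range(steps):
        if rng.random() < burst_rate * dt:    # burst event this step
            v += burst_gain * rng.random()
        v *= math.exp(-decay * dt)            # coasting decay
        trace.append(v)
    return trace
```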

  15. Hardware design and implementation of fast DOA estimation method based on multicore DSP

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Zhao, Yingxiao; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-10-01

    In this paper, we present a high-speed real-time signal processing hardware platform based on a multicore digital signal processor (DSP). The real-time signal processing platform shows several excellent characteristics, including high-performance computing, low power consumption, large-capacity data storage and high-speed data transmission, which enable it to meet the constraints of real-time direction of arrival (DOA) estimation. To reduce the high computational complexity of the DOA estimation algorithm, a novel real-valued MUSIC estimator is used. The algorithm is decomposed into several independent steps and the time consumption of each step is counted. Based on these timing statistics, we present a new parallel processing strategy to distribute the task of DOA estimation across the cores of the real-time signal processing hardware platform. Experimental results demonstrate that the processing capability of the signal processing platform meets the constraint of real-time DOA estimation.
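    The estimator is a real-valued variant of MUSIC; the standard complex-valued MUSIC pseudospectrum it accelerates can be sketched as follows, for a uniform linear array with half-wavelength spacing (all parameters illustrative, not the paper's hardware pipeline):

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg):
    """Standard MUSIC pseudospectrum for a uniform linear array.

    X: (n_antennas, n_snapshots) complex snapshot matrix.
    """
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    eigval, eigvec = np.linalg.eigh(R)         # eigenvalues ascending
    En = eigvec[:, : n - n_sources]            # noise subspace
    p = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(1j * np.pi * np.arange(n) * np.sin(th))  # steering vector
        p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(p)

# One source at 20 degrees, 8 antennas, 200 noisy snapshots
rng = np.random.default_rng(0)
n, snaps, theta = 8, 200, np.deg2rad(20.0)
a = np.exp(1j * np.pi * np.arange(n) * np.sin(theta))
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
X = np.outer(a, s) + 0.1 * (rng.standard_normal((n, snaps))
                            + 1j * rng.standard_normal((n, snaps)))
grid = np.arange(-90, 91)
est = grid[int(np.argmax(music_spectrum(X, 1, grid)))]   # peak near 20
```

    A real-valued variant like the paper's reduces cost by working with real covariance data, but the subspace-search structure above is the part being parallelized across DSP cores.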

  16. In-silico experiments of zebrafish behaviour: modeling swimming in three dimensions

    PubMed Central

    Mwaffo, Violet; Butail, Sachit; Porfiri, Maurizio

    2017-01-01

    Zebrafish is fast becoming a species of choice in biomedical research for the investigation of functional and dysfunctional processes coupled with their genetic and pharmacological modulation. As with mammals, experimentation with zebrafish constitutes a complicated ethical issue that calls for the exploration of alternative testing methods to reduce the number of subjects, refine experimental designs, and replace live animals. Inspired by the demonstrated advantages of computational studies in other life science domains, we establish an authentic data-driven modelling framework to simulate zebrafish swimming in three dimensions. The model encapsulates burst-and-coast swimming style, speed modulation, and wall interaction, laying the foundations for in-silico experiments of zebrafish behaviour. Through computational studies, we demonstrate the ability of the model to replicate common ethological observables such as speed and spatial preference, and anticipate experimental observations on the influence of tank dimensions on zebrafish behaviour. Extending to other experimental paradigms, our framework is expected to contribute to a reduction in animal use and suffering. PMID:28071731

  17. Development of High-Speed Fluorescent X-Ray Micro-Computed Tomography

    NASA Astrophysics Data System (ADS)

    Takeda, T.; Tsuchiya, Y.; Kuroe, T.; Zeniya, T.; Wu, J.; Lwin, Thet-Thet; Yashiro, T.; Yuasa, T.; Hyodo, K.; Matsumura, K.; Dilmanian, F. A.; Itai, Y.; Akatsuka, T.

    2004-05-01

    A high-speed fluorescent x-ray CT (FXCT) system using monochromatic synchrotron x rays was developed to detect very low concentrations of medium-Z elements for biomedical use. The system is equipped with two types of high-purity germanium detectors, together with fast electronics and software. Preliminary images of a 10mm diameter plastic phantom containing channels filled with iodine solutions of different concentrations showed a minimum detection level of 0.002 mg I/ml at an in-plane spatial resolution of 100μm. Furthermore, the acquisition time was reduced to about half that of the previous system. The results indicate that FXCT is a highly sensitive imaging modality capable of detecting very low concentrations of iodine, and that the method has potential in biomedical applications.

  18. A parallel-vector algorithm for rapid structural analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.

    1990-01-01

    A fast, accurate Choleski method for the solution of symmetric systems of linear equations is presented. This direct method is based on a variable-band storage scheme and takes advantage of column heights to reduce the number of operations in the Choleski factorization. The method employs parallel computation in the outermost DO-loop and vector computation via the 'loop unrolling' technique in the innermost DO-loop. The method avoids computations with zeros outside the column heights, and as an option, zeros inside the band. The close relationship between Choleski and Gauss elimination methods is examined. The minor changes required to convert the Choleski code to a Gauss code to solve non-positive-definite symmetric systems of equations are identified. The results for two large-scale structural analyses performed on supercomputers demonstrate the accuracy and speed of the method.
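    The variable-band (skyline) idea can be sketched in a few lines: record each row's profile (column of its first nonzero) and restrict every inner product to the region inside the skyline, since Choleski factorization creates no fill-in to the left of the profile. A minimal dense-storage sketch, not the paper's parallel-vector implementation:

```python
import math

def skyline_cholesky(A):
    """Cholesky factor L (lower triangular, A = L L^T) of an SPD matrix,
    skipping operations outside each row's profile (skyline)."""
    n = len(A)
    # prof[i]: column of the first nonzero in row i of the lower triangle
    prof = [next(j for j in range(i + 1) if A[i][j] != 0.0 or j == i)
            for i in range(n)]
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(j, n):
            if j < prof[i]:
                continue                    # stays zero below the skyline
            lo = max(prof[i], prof[j])      # both factors vanish before lo
            s = sum(L[i][k] * L[j][k] for k in range(lo, j))
            if i == j:
                L[j][j] = math.sqrt(A[j][j] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L
```

    In a production skyline code each column is stored packed from its first nonzero, and the inner products are the loops that the paper vectorizes by unrolling.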

  19. A parallel-vector algorithm for rapid structural analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.

    1990-01-01

    A fast, accurate Choleski method for the solution of symmetric systems of linear equations is presented. This direct method is based on a variable-band storage scheme and takes advantage of column heights to reduce the number of operations in the Choleski factorization. The method employs parallel computation in the outermost DO-loop and vector computation via the loop unrolling technique in the innermost DO-loop. The method avoids computations with zeros outside the column heights, and as an option, zeros inside the band. The close relationship between Choleski and Gauss elimination methods is examined. The minor changes required to convert the Choleski code to a Gauss code to solve non-positive-definite symmetric systems of equations are identified. The results for two large-scale structural analyses performed on supercomputers demonstrate the accuracy and speed of the method.

  20. Radio-frequency measurement in semiconductor quantum computation

    NASA Astrophysics Data System (ADS)

    Han, TianYi; Chen, MingBo; Cao, Gang; Li, HaiOu; Xiao, Ming; Guo, GuoPing

    2017-05-01

    Semiconductor quantum dots have attracted wide interest for the potential realization of quantum computation. To realize efficient quantum computation, fast manipulation and correspondingly fast readout are necessary. In the past few decades, considerable progress in quantum manipulation has been achieved experimentally. To meet the requirements of high-speed readout, radio-frequency (RF) measurement has been developed in recent years, such as RF-QPC (radio-frequency quantum point contact) and RF-DGS (radio-frequency dispersive gate sensor). Here we demonstrate the principle of radio-frequency reflectometry, then review the development and applications of RF measurement, which provides a feasible way to achieve high-bandwidth readout in quantum coherent control and also enriches the methods available to study these artificial mesoscopic quantum systems. Finally, we discuss the prospective use of radio-frequency reflectometry in scaling up quantum computing models.

  1. How fast is a fast train? comparing attitudes and preferences for improved passenger rail service among urban areas in the south central high-speed rail corridor.

    DOT National Transportation Integrated Search

    2011-12-01

    "High-speed passenger rail is seen by many in the U.S. transportation policy and planning communities as : an ideal solution for fast, safe, and resource-efficient mobility in high-demand intercity corridors between : 100 and 500 miles in total endpo...

  2. Space-time VMS computation of wind-turbine rotor and tower aerodynamics

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; McIntyre, Spenser; Kostov, Nikolay; Kolesar, Ryan; Habluetzel, Casey

    2014-01-01

    We present the space-time variational multiscale (ST-VMS) computation of wind-turbine rotor and tower aerodynamics. The rotor geometry is that of the NREL 5MW offshore baseline wind turbine. We compute with a given wind speed and a specified rotor speed. The computation is challenging because of the large Reynolds numbers and rotating turbulent flows, and computing the correct torque requires an accurate and meticulous numerical approach. The presence of the tower increases the computational challenge because of the fast, rotational relative motion between the rotor and tower. The ST-VMS method is the residual-based VMS version of the Deforming-Spatial-Domain/Stabilized ST (DSD/SST) method, and is also called "DSD/SST-VMST" method (i.e., the version with the VMS turbulence model). In calculating the stabilization parameters embedded in the method, we are using a new element length definition for the diffusion-dominated limit. The DSD/SST method, which was introduced as a general-purpose moving-mesh method for computation of flows with moving interfaces, requires a mesh update method. Mesh update typically consists of moving the mesh for as long as possible and remeshing as needed. In the computations reported here, NURBS basis functions are used for the temporal representation of the rotor motion, enabling us to represent the circular paths associated with that motion exactly and specify a constant angular velocity corresponding to the invariant speeds along those paths. In addition, temporal NURBS basis functions are used in representation of the motion and deformation of the volume meshes computed and also in remeshing. We name this "ST/NURBS Mesh Update Method (STNMUM)." The STNMUM increases computational efficiency in terms of computer time and storage, and computational flexibility in terms of being able to change the time-step size of the computation. We use layers of thin elements near the blade surfaces, which undergo rigid-body motion with the rotor. 
We compare the results from computations with and without tower, and we also compare using NURBS and linear finite element basis functions in temporal representation of the mesh motion.

  3. Space-Time VMS Computation of Wind-Turbine Rotor and Tower Aerodynamics

    NASA Astrophysics Data System (ADS)

    McIntyre, Spenser W.

    This thesis is on the space-time variational multiscale (ST-VMS) computation of wind-turbine rotor and tower aerodynamics. The rotor geometry is that of the NREL 5MW offshore baseline wind turbine. We compute with a given wind speed and a specified rotor speed. The computation is challenging because of the large Reynolds numbers and rotating turbulent flows, and computing the correct torque requires an accurate and meticulous numerical approach. The presence of the tower increases the computational challenge because of the fast, rotational relative motion between the rotor and tower. The ST-VMS method is the residual-based VMS version of the Deforming-Spatial-Domain/Stabilized ST (DSD/SST) method, and is also called "DSD/SST-VMST" method (i.e., the version with the VMS turbulence model). In calculating the stabilization parameters embedded in the method, we are using a new element length definition for the diffusion-dominated limit. The DSD/SST method, which was introduced as a general-purpose moving-mesh method for computation of flows with moving interfaces, requires a mesh update method. Mesh update typically consists of moving the mesh for as long as possible and remeshing as needed. In the computations reported here, NURBS basis functions are used for the temporal representation of the rotor motion, enabling us to represent the circular paths associated with that motion exactly and specify a constant angular velocity corresponding to the invariant speeds along those paths. In addition, temporal NURBS basis functions are used in representation of the motion and deformation of the volume meshes computed and also in remeshing. We name this "ST/NURBS Mesh Update Method (STNMUM)." The STNMUM increases computational efficiency in terms of computer time and storage, and computational flexibility in terms of being able to change the time-step size of the computation. We use layers of thin elements near the blade surfaces, which undergo rigid-body motion with the rotor.
We compare the results from computations with and without tower, and we also compare using NURBS and linear finite element basis functions in temporal representation of the mesh motion.

  4. Stimulated Raman adiabatic control of a nuclear spin in diamond

    NASA Astrophysics Data System (ADS)

    Coto, Raul; Jacques, Vincent; Hétet, Gabriel; Maze, Jerónimo R.

    2017-08-01

    Coherent manipulation of nuclear spins is a highly desirable tool for both quantum metrology and quantum computation. However, most current techniques for controlling nuclear spins are slow, impairing their robustness against decoherence. Here, based on stimulated Raman adiabatic passage, and its modification including shortcuts to adiabaticity, we present a fast protocol for the coherent manipulation of nuclear spins. Our proposed Λ scheme is implemented in the microwave domain and its excited-state relaxation can be optically controlled through an external laser excitation. These features allow for the initialization of a nuclear spin starting from a thermal state. Moreover, we show how to implement Raman control for performing Ramsey spectroscopy to measure the dynamical and geometric phases acquired by nuclear spins.

  5. A fast invariant imbedding method for multiple scattering calculations and an application to equivalent widths of CO2 lines on Venus

    NASA Technical Reports Server (NTRS)

    Sato, M.; Kawabata, K.; Hansen, J. E.

    1977-01-01

    The invariant imbedding method considered is based on an equation which describes the change in the reflected radiation when an optically thin layer is added to the top of the atmosphere. The equation is used to treat the problem of reflection from a planetary atmosphere as an initial value problem. A fast method is discussed for the solution of the invariant imbedding equation. The speed and accuracy of the new method are illustrated by comparing it with the doubling program published by Hansen and Travis (1974). Computations are performed of the equivalent widths of carbon dioxide absorption lines in solar radiation reflected by Venus for several models of the planetary atmosphere.

  6. Wavelet transform fast inverse light scattering analysis for size determination of spherical scatterers

    PubMed Central

    Ho, Derek; Kim, Sanghoon; Drake, Tyler K.; Eldridge, Will J.; Wax, Adam

    2014-01-01

    We present a fast approach for size determination of spherical scatterers using the continuous wavelet transform of the angular light scattering profile to address the computational limitations of previously developed sizing techniques. The potential accuracy, speed, and robustness of the algorithm were determined in simulated models of scattering by polystyrene beads and cells. The algorithm was tested experimentally on angular light scattering data from polystyrene bead phantoms and MCF-7 breast cancer cells using a 2D a/LCI system. Theoretical sizing of simulated profiles of beads and cells produced strong fits between calculated and actual size (r² = 0.9969 and r² = 0.9979, respectively), and experimental size determinations were accurate to within one micron. PMID:25360350
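    The sizing idea rests on the fact that the angular scattering profile oscillates with a frequency tied to scatterer size, and a continuous wavelet transform can localize that dominant oscillation; a minimal sketch with a hand-built wavelet (illustrative, not the authors' a/LCI pipeline):

```python
import numpy as np

def dominant_scale(signal, scales):
    """Scale with maximum continuous-wavelet energy, using a hand-built
    real Morlet-like wavelet (cosine under a Gaussian envelope)."""
    best_scale, best_energy = None, -1.0
    for s in scales:
        t = np.arange(-4 * s, 4 * s + 1, dtype=float)
        wavelet = np.cos(5.0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        wavelet /= np.sqrt(s)                  # roughly constant L2 norm
        coeffs = np.convolve(signal, wavelet, mode="same")
        energy = float(np.sum(coeffs ** 2))
        if energy > best_energy:
            best_scale, best_energy = s, energy
    return best_scale

# Synthetic oscillatory "scattering profile" with a period of 32 samples;
# the matching wavelet scale is near 5 * 32 / (2 * pi), i.e. about 25
x = np.arange(512)
profile = np.cos(2 * np.pi * x / 32.0)
scale = dominant_scale(profile, range(4, 64))
```

    In the actual method the recovered oscillation frequency is mapped to a sphere diameter through the Mie-scattering relationship.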

  7. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization.

    PubMed

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate.
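    The TV-minimization ingredient can be illustrated in 1-D: soft-threshold the finite differences of the signal, which suppresses small (noise-like) variations while preserving large jumps. A crude one-sweep sketch of that idea, not the paper's CBCT implementation:

```python
import numpy as np

def soft_threshold(d, t):
    """Proximal map of the L1 norm: shrink each value toward zero by t."""
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

def tv_smooth_step(x, weight=0.1):
    """One sweep of a crude 1-D TV-style smoother: soft-threshold the
    finite differences of x, then re-integrate from x[0]."""
    d = soft_threshold(np.diff(x), weight)
    return np.concatenate(([x[0]], x[0] + np.cumsum(d)))

# Small noise on two plateaus is flattened; the big edge survives
noisy = np.array([0.0, 0.02, -0.01, 1.0, 1.03, 0.98])
smooth = tv_smooth_step(noisy, weight=0.1)
```

    In the full algorithm such a TV step alternates with the (power-factor-accelerated) OS data-fidelity updates.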

  8. A novel surface registration algorithm with biomedical modeling applications.

    PubMed

    Huang, Heng; Shen, Li; Zhang, Rong; Makedon, Fillia; Saykin, Andrew; Pearlman, Justin

    2007-07-01

    In this paper, we propose a novel surface matching algorithm for arbitrarily shaped but simply connected 3-D objects. The spherical harmonic (SPHARM) method is used to describe these 3-D objects, and a novel surface registration approach is presented. The proposed technique is applied to various applications of medical image analysis. The results are compared with those using the traditional method, in which the first-order ellipsoid is used for establishing surface correspondence and aligning objects. In these applications, our surface alignment method is demonstrated to be more accurate and flexible than the traditional approach. This is due in large part to the fact that a new surface parameterization is generated by a shortcut that employs a useful rotational property of spherical harmonic basis functions for a fast implementation. In order to achieve a suitable computational speed for practical applications, we propose a fast alignment algorithm that improves the computational complexity of the new surface registration method from O(n³) to O(n²).
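    One rotational property of spherical harmonics that enables such shortcuts is that rotating a function about the z-axis by angle α multiplies each (l, m) coefficient by exp(-i m α), so re-parameterization costs only a phase per coefficient. A sketch of that phase rule on a coefficient table (coefficient values are made up):

```python
import cmath

def rotate_z(coeffs, alpha):
    """Rotate a spherical-harmonic expansion about the z-axis by alpha:
    the (l, m) coefficient just acquires the phase exp(-i*m*alpha)."""
    return {(l, m): c * cmath.exp(-1j * m * alpha)
            for (l, m), c in coeffs.items()}

# Rotations compose additively and preserve coefficient magnitudes
c0 = {(0, 0): 1.0 + 0j, (1, -1): 0.5j, (1, 0): 2.0 + 0j, (1, 1): 0.3 + 0j}
r_twice = rotate_z(rotate_z(c0, 0.4), 0.3)
r_once = rotate_z(c0, 0.7)
```

    General rotations additionally mix coefficients within each degree l (via Wigner D-matrices), which is where the bulk of the cost savings in a fast alignment scheme comes from.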

  9. Fast numerics for the spin orbit equation with realistic tidal dissipation and constant eccentricity

    NASA Astrophysics Data System (ADS)

    Bartuccelli, Michele; Deane, Jonathan; Gentile, Guido

    2017-08-01

    We present an algorithm for the rapid numerical integration of a time-periodic ODE with a small dissipation term that is C^1 in the velocity. Such an ODE arises as a model of spin-orbit coupling in a star/planet system, and the motivation for devising a fast algorithm for its solution comes from the desire to estimate probability of capture in various solutions, via Monte Carlo simulation: the integration times are very long, since we are interested in phenomena occurring on timescales of the order of 10^6-10^7 years. The proposed algorithm is based on the high-order Euler method which was described in Bartuccelli et al. (Celest Mech Dyn Astron 121(3):233-260, 2015), and it requires computer algebra to set up the code for its implementation. The payoff is an overall increase in speed by a factor of about 7.5 compared to standard numerical methods. Means for accelerating the purely numerical computation are also discussed.

  10. Protecting solid-state spins from a strongly coupled environment

    NASA Astrophysics Data System (ADS)

    Chen, Mo; Calvin Sun, Won Kyu; Saha, Kasturi; Jaskula, Jean-Christophe; Cappellaro, Paola

    2018-06-01

    Quantum memories are critical for solid-state quantum computing devices and a good quantum memory requires both long storage time and fast read/write operations. A promising system is the nitrogen-vacancy (NV) center in diamond, where the NV electronic spin serves as the computing qubit and a nearby nuclear spin as the memory qubit. Previous works used remote, weakly coupled 13C nuclear spins, trading read/write speed for long storage time. Here we focus instead on the intrinsic strongly coupled 14N nuclear spin. We first quantitatively understand its decoherence mechanism, identifying as its source the electronic spin that acts as a quantum fluctuator. We then propose a scheme to protect the quantum memory from the fluctuating noise by applying dynamical decoupling on the environment itself. We demonstrate a factor of 3 enhancement of the storage time in a proof-of-principle experiment, showing the potential for a quantum memory that combines fast operation with long coherence time.

  11. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization

    PubMed Central

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate. PMID:27073853

  12. Fast Human Detection for Intelligent Monitoring Using Surveillance Visible Sensors

    PubMed Central

    Ko, Byoung Chul; Jeong, Mira; Nam, JaeYeal

    2014-01-01

    Human detection using visible surveillance sensors is an important and challenging task for intruder detection and safety management. The biggest barrier to real-time human detection is the computational time required for dense image scaling and scanning windows extracted from an entire image. This paper proposes fast human detection by selecting optimal levels of image scale using each level's adaptive region-of-interest (ROI). To estimate the image-scaling level, we generate a Hough windows map (HWM) and select a few optimal image scales based on the strength of the HWM and the divide-and-conquer algorithm. Furthermore, adaptive ROIs are arranged per image scale to provide a different search area. We employ a cascade random forests classifier to separate candidate windows into human and nonhuman classes. The proposed algorithm has been successfully applied to real-world surveillance video sequences, and its detection accuracy and computational speed show a better performance than those of other related methods. PMID:25393782

  13. Fast-Response Fiber-Optic Anemometer with Temperature Self-Compensation

    DTIC Science & Technology

    2015-05-18

    be considered to be a function of time only. With a heating source within the sensor, the model for the LSA is expressed as in [13]; the factor hA_s/(ρ_sC_sV_s) in the exponent of the transient term on the RHS of Eq. (7) characterizes the response time of the anemometer. Conversion of the temperature...circulator, and the reflected signal light was acquired by a high-speed spectrometer (Ibsen Photonics, I-MON 256 USB) which was connected to a computer

  14. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 2, Issue 2, 2011

    DTIC Science & Technology

    2011-01-01

    fixed (i.e., no flapping). The simulation was performed at sea level conditions with a pressure of 101 kPa and a density of 1.23 kg/m³. The air speed...Hardening Behavior in Au Nanopillar Microplasticity. IJMCE 5 (3&4) 287–294. (2007) 5. S. J. Plimpton. Fast Parallel Algorithms for Short-Range Molecular...such as crude oil underwater. Scattering is also used for sea floor mapping. For example, communications companies laying underwater fiber optic

  15. Fourier analysis and signal processing by use of the Moebius inversion formula

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Yu, Xiaoli; Shih, Ming-Tang; Tufts, Donald W.; Truong, T. K.

    1990-01-01

    A novel Fourier technique for digital signal processing is developed. This approach to Fourier analysis is based on the number-theoretic method of the Moebius inversion of series. The Fourier transform method developed is shown also to yield the convolution of two signals. A computer simulation shows that this method for finding Fourier coefficients is quite suitable for digital signal processing. It competes with the classical FFT (fast Fourier transform) approach in terms of accuracy, complexity, and speed.
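
    The number-theoretic identity underlying the method is Möbius inversion: if g(n) = Σ_{d|n} f(d), then f(n) = Σ_{d|n} μ(n/d) g(d). A generic sketch of that identity (not the authors' full Fourier-coefficient algorithm; function names are ours):

```python
def mobius(n):
    """Möbius function mu(n) via trial factorization."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:   # squared prime factor => mu = 0
                return 0
            result = -result
        p += 1
    if n > 1:                # one remaining prime factor
        result = -result
    return result

def divisor_sum(f, n):
    """g(n) = sum of f(d) over the divisors d of n."""
    return sum(f(d) for d in range(1, n + 1) if n % d == 0)

def mobius_invert(g, n):
    """Recover f(n) from g via f(n) = sum_{d|n} mu(n/d) * g(d)."""
    return sum(mobius(n // d) * g(d) for d in range(1, n + 1) if n % d == 0)
```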

  16. Integration of progressive hedging and dual decomposition in stochastic integer programs

    DOE PAGES

    Watson, Jean -Paul; Guo, Ge; Hackebeil, Gabriel; ...

    2015-04-07

    We present a method for integrating the Progressive Hedging (PH) algorithm and the Dual Decomposition (DD) algorithm of Carøe and Schultz for stochastic mixed-integer programs. Based on the correspondence between lower bounds obtained with PH and DD, a method to transform weights from PH into Lagrange multipliers in DD is derived. Fast progress in early iterations of PH speeds up convergence of DD to an exact solution. Finally, we report computational results on server location and unit commitment instances.
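
    The PH-to-DD correspondence builds on PH's multiplier update, w_s ← w_s + ρ(x_s − x̄), whose probability-weighted sum stays zero, as Lagrange multipliers in DD must. A hedged scalar sketch of one such update (variable names are illustrative, not from the paper):

```python
def ph_weight_update(xs, probs, ws, rho):
    """One Progressive Hedging weight update for a scalar first-stage variable.
    xs: per-scenario solutions; probs: scenario probabilities; ws: weights."""
    xbar = sum(p * x for p, x in zip(probs, xs))        # implementable average
    new_ws = [w + rho * (x - xbar) for w, x in zip(ws, xs)]
    return new_ws, xbar
```

    If the weights start at zero, the probability-weighted sum of the updated weights remains zero, which is the property that lets them act as dual multipliers.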

  17. Pseudo-Random Number Generator Based on Coupled Map Lattices

    NASA Astrophysics Data System (ADS)

    Lü, Huaping; Wang, Shihong; Hu, Gang

    A one-way coupled chaotic map lattice is used for generating pseudo-random numbers. It is shown that with suitable cooperative applications of both chaotic and conventional approaches, the output of the spatiotemporally chaotic system can easily meet the practical requirements of random numbers, i.e., excellent random statistical properties, long periods in finite-precision computer realizations, and fast random number generation. This pseudo-random number generator can serve as an ideal synchronous and self-synchronizing stream cipher for secure communications.
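
    A minimal sketch of the idea, assuming a ring of logistic maps f(x) = 4x(1−x) with one-way coupling strength eps and threshold bit extraction; the parameter choices and extraction scheme here are illustrative, not the paper's:

```python
def cml_bits(n_bits, size=8, eps=0.9, seed=0.123456789):
    """Pseudo-random bits from a one-way coupled logistic-map lattice."""
    f = lambda x: 4.0 * x * (1.0 - x)
    # each site is driven by its own state and its left neighbour (ring)
    step = lambda latt: [(1.0 - eps) * f(x) + eps * f(latt[i - 1])
                         for i, x in enumerate(latt)]
    lattice = [(seed + 0.618 * i) % 1.0 for i in range(size)]
    for _ in range(100):            # discard the transient
        lattice = step(lattice)
    bits = []
    while len(bits) < n_bits:
        lattice = step(lattice)
        bits.extend(1 if x > 0.5 else 0 for x in lattice)
    return bits[:n_bits]
```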

  18. Required coefficient of friction during turning at self-selected slow, normal, and fast walking speeds.

    PubMed

    Fino, Peter; Lockhart, Thurmon E

    2014-04-11

    This study investigated the relationship of required coefficient of friction to gait speed, obstacle height, and turning strategy as participants walked around obstacles of various heights. Ten healthy, young adults performed 90° turns around corner pylons of four different heights at their self-selected normal, slow, and fast walking speeds using both step and spin turning strategies. Kinetic data were captured using force plates. Results showed that the peak required coefficient of friction (RCOF) at push off increased with speed (slow μ=0.38, normal μ=0.45, and fast μ=0.54). Obstacle height had no effect on RCOF values. The average peak RCOF for fast turning exceeded the OSHA safety guideline for static COF of μ>0.50, suggesting further research is needed into the minimum static COF required to prevent slips and falls, especially around corners. Copyright © 2014 Elsevier Ltd. All rights reserved.
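
    RCOF is the peak ratio of resultant shear to vertical ground-reaction force during stance. A sketch of the computation from force-plate samples; the low-load cutoff value is an assumption of ours, not a parameter reported in the abstract:

```python
import math

def peak_rcof(fx, fy, fz, fz_min=50.0):
    """Peak required coefficient of friction from force-plate samples.
    fx, fy: shear forces (N); fz: vertical force (N). Samples with fz
    at or below fz_min N (partial contact) are excluded."""
    return max(math.hypot(x, y) / z
               for x, y, z in zip(fx, fy, fz) if z > fz_min)
```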

  19. Overview of fast algorithm in 3D dynamic holographic display

    NASA Astrophysics Data System (ADS)

    Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian

    2013-08-01

    3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, a huge amount of 3D information must be processed and computed in real time to generate the hologram in 3D dynamic holographic display, which is a challenge even for the most advanced computers. Many fast algorithms have been proposed to speed up the calculation and reduce memory usage, such as the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT), and novel look-up table (N-LUT) among point-based methods, and the full analytical and one-step methods among polygon-based methods. In this presentation, we review various fast algorithms of both families, focusing on those with low memory usage: the C-LUT, and the one-step polygon-based method derived from a 2D Fourier analysis of the 3D affine transformation. Numerical simulations and optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient at saving calculation time. It is believed that these methods could be used in real-time 3D holographic display in the future.

  20. Kinematics of fast cervical rotations in persons with chronic neck pain: a cross-sectional and reliability study.

    PubMed

    Röijezon, Ulrik; Djupsjöbacka, Mats; Björklund, Martin; Häger-Ross, Charlotte; Grip, Helena; Liebermann, Dario G

    2010-09-27

    Assessment of sensorimotor function is useful for classification and treatment evaluation of neck pain disorders. Several studies have investigated various aspects of cervical motor functions. Most of these have involved slow or self-paced movements, while few have investigated fast cervical movements. Moreover, the reliability of assessment of fast cervical axial rotation has, to our knowledge, not been evaluated before. Cervical kinematics was assessed during fast axial head rotations in 118 women with chronic nonspecific neck pain (NS) and compared to 49 healthy controls (CON). The relationship between cervical kinematics and symptoms, self-rated functioning and fear of movement was evaluated in the NS group. A sub-sample of 16 NS and 16 CON was re-tested after one week to assess the reliability of kinematic variables. Six cervical kinematic variables were calculated: peak speed, range of movement, conjunct movements and three variables related to the shape of the speed profile. Together, peak speed and conjunct movements had a sensitivity of 76% and a specificity of 78% in discriminating between NS and CON, of which the major part could be attributed to peak speed (NS: 226 ± 88°/s and CON: 348 ± 92°/s, p < 0.01). Peak speed was slower in NS compared to healthy controls and even slower in NS with comorbidity of low-back pain. Associations were found between reduced peak speed and self-rated difficulties with running, performing head movements, car driving, sleeping and pain. Peak speed showed reasonably high reliability, while the reliability for conjunct movements was poor. Peak speed of fast cervical axial rotations is reduced in people with chronic neck pain, and even further reduced in subjects with concomitant low back pain. 
Fast cervical rotation testing seems to be a reliable and valid tool for assessment of neck pain disorders at the group level, while rather large between-subject variation and overlap between groups call for caution in the interpretation of individual assessments.

  1. Development of hardware accelerator for molecular dynamics simulations: a computation board that calculates nonbonded interactions in cooperation with fast multipole method.

    PubMed

    Amisaki, Takashi; Toyoda, Shinjiro; Miyagawa, Hiroh; Kitamura, Kunihiro

    2003-04-15

    Evaluation of long-range Coulombic interactions still represents a bottleneck in the molecular dynamics (MD) simulations of biological macromolecules. Despite the advent of sophisticated fast algorithms, such as the fast multipole method (FMM), accurate simulations still demand a great amount of computation time due to the accuracy/speed trade-off inherently involved in these algorithms. Unless higher order multipole expansions, which are extremely expensive to evaluate, are employed, a large amount of the execution time is still spent in directly calculating particle-particle interactions within the nearby region of each particle. To reduce this execution time for pair interactions, we developed a computation unit (board), called MD-Engine II, that calculates nonbonded pairwise interactions using a specially designed hardware. Four custom arithmetic-processors and a processor for memory manipulation ("particle processor") are mounted on the computation board. The arithmetic processors are responsible for calculation of the pair interactions. The particle processor plays a central role in realizing efficient cooperation with the FMM. The results of a series of 50-ps MD simulations of a protein-water system (50,764 atoms) indicated that a more stringent setting of accuracy in FMM computation, compared with those previously reported, was required for accurate simulations over long time periods. Such a level of accuracy was efficiently achieved using the cooperative calculations of the FMM and MD-Engine II. On an Alpha 21264 PC, the FMM computation at a moderate but tolerable level of accuracy was accelerated by a factor of 16.0 using three boards. At a high level of accuracy, the cooperative calculation achieved a 22.7-fold acceleration over the corresponding conventional FMM calculation. 
In the cooperative calculations of the FMM and MD-Engine II, it was possible to achieve more accurate computation at a comparable execution time by incorporating larger nearby regions. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 582-592, 2003

  2. Support Vector Machine-Based Endmember Extraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filippi, Anthony M; Archibald, Richard K

    Introduced in this paper is the utilization of Support Vector Machines (SVMs) to automatically perform endmember extraction from hyperspectral data. The strengths of SVM are exploited to provide a fast and accurate calculated representation of high-dimensional data sets that may consist of multiple distributions. Once this representation is computed, the number of distributions can be determined without prior knowledge. For each distribution, an optimal transform can be determined that preserves informational content while reducing the data dimensionality, and hence, the computational cost. Finally, endmember extraction for the whole data set is accomplished. Results indicate that this Support Vector Machine-Based Endmember Extraction (SVM-BEE) algorithm has the capability of autonomously determining endmembers from multiple clusters with computational speed and accuracy, while maintaining a robust tolerance to noise.

  3. Conjugate gradient based projection - A new explicit methodology for frictional contact

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Li, Maocheng; Sha, Desong

    1993-01-01

    With special attention to applicability to parallel computation and vectorization, a new and effective explicit approach to linear complementary formulations, involving a conjugate gradient based projection methodology, is proposed in this study for contact problems with Coulomb friction. The overall objective is to provide an explicit methodology of computation for the complete contact problem with friction. The primary idea for solving the linear complementary formulations stems from an established search direction that is projected onto a feasible region determined by the non-negative constraint condition; this direction is then applied within the Fletcher-Reeves conjugate gradient method, resulting in a powerful explicit methodology that possesses high accuracy, excellent convergence characteristics and fast computational speed, and is relatively simple to implement for contact problems involving Coulomb friction.
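
    The projection idea can be illustrated on a small quadratic program, min ½xᵀAx − bᵀx subject to x ≥ 0. This sketch uses plain projected gradient descent rather than the paper's Fletcher-Reeves conjugate-gradient direction, to keep it short; all names are ours:

```python
def solve_nonneg(A, b, steps=500, lr=0.1):
    """Minimize 0.5*x'Ax - b'x subject to x >= 0 by projecting each
    gradient step onto the nonnegative feasible region."""
    n = len(b)
    x = [0.0] * n
    for _ in range(steps):
        grad = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]
        x = [max(0.0, xi - lr * g) for xi, g in zip(x, grad)]  # projection
    return x
```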

  4. Fast fringe pattern phase demodulation using FIR Hilbert transformers

    NASA Astrophysics Data System (ADS)

    Gdeisat, Munther; Burton, David; Lilley, Francis; Arevalillo-Herráez, Miguel

    2016-01-01

    This paper suggests the use of FIR Hilbert transformers to extract the phase of fringe patterns. This method is computationally faster than any known spatial method that produces wrapped phase maps. Also, the algorithm does not require any parameters to be adjusted which are dependent upon the specific fringe pattern that is being processed, or upon the particular setup of the optical fringe projection system that is being used. It is therefore particularly suitable for full algorithmic automation. The accuracy and validity of the suggested method has been tested using both computer-generated and real fringe patterns. This novel algorithm has been proposed for its advantages in terms of computational processing speed as it is the fastest available method to extract the wrapped phase information from a fringe pattern.
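
    The underlying scheme: an FIR Hilbert transformer produces the quadrature component of the fringe, and the wrapped phase follows from atan2. A minimal 1-D sketch with a truncated ideal transformer; the filter length and the absence of a window are simplifications of ours, not the paper's design:

```python
import math

def wrapped_phase(fringe, half_len=30):
    """Wrapped phase of a 1-D fringe pattern: convolve with a truncated
    ideal FIR Hilbert transformer h[m] = 2/(pi*m) (odd m), then atan2."""
    n = len(fringe)
    phase = []
    for i in range(n):
        quad = 0.0
        for m in range(-half_len, half_len + 1):
            if m % 2 and 0 <= i - m < n:   # odd taps only
                quad += (2.0 / (math.pi * m)) * fringe[i - m]
        phase.append(math.atan2(quad, fringe[i]))
    return phase
```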

  5. The influence of gait speed on the stability of walking among the elderly.

    PubMed

    Fan, Yifang; Li, Zhiyu; Han, Shuyan; Lv, Changsheng; Zhang, Bo

    2016-06-01

    Walking speed is a basic factor to consider when walking exercises are prescribed as part of a training programme. Although associations between walking speed, step length and falling risk have been identified, the relationship between spontaneous walking pattern and falling risk remains unclear. The present study, therefore, examined the stability of spontaneous walking at normal, fast and slow speed among elderly (mean age 67.5±3.23 years) and young (21.4±1.31 years) individuals. In all, 55 participants undertook a test that involved walking on a plantar pressure platform. Foot-ground contact data were used to calculate walking speed, step length, pressure impulse along the plantar-impulse principal axis and the pressure record of the time series along the plantar-impulse principal axis. A forward dynamics method was used to calculate acceleration, velocity and displacement of the centre of mass in the vertical direction. The results showed that when the elderly walked at different speeds, their average step length was smaller than that observed among the young (p<0.001), whereas their anterior/posterior variability and lateral variability showed no significant difference. When walking was performed at normal or slow speed, no significant between-group difference in cadence was found. When walking at a fast speed, the elderly increased their stride length moderately and their cadence greatly (p=0.012). In summary, the present study found no correlation between fast walking speed and instability among the elderly, which indicates that healthy elderly individuals might safely perform fast-speed walking exercises. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Self-selected speeds and metabolic cost of longboard skateboarding.

    PubMed

    Board, Wayne J; Browning, Raymond C

    2014-11-01

    The purpose of this study was to determine self-selected speeds, metabolic rate, and gross metabolic cost during longboard skateboarding. We measured overground speed and metabolic rate while 15 experienced longboarders traveled at their self-selected slow, typical and fast speeds. Mean longboarding speeds were 3.7, 4.5 and 5.1 m·s⁻¹ during the slow, typical and fast trials, respectively. Mean rates of oxygen consumption were 24.1, 29.1 and 37.2 ml·kg⁻¹·min⁻¹ and mean rates of energy expenditure were 33.5, 41.8 and 52.7 kJ·min⁻¹ at the slow, typical and fast speeds, respectively. At typical speeds, average intensity was ~8.5 METs. There was a significant positive relationship between oxygen consumption and energy expenditure versus speed (R² = 0.69, P < 0.001, and R² = 0.78, P < 0.001, respectively). The gross metabolic cost was ~2.2 J·kg⁻¹·m⁻¹ at the typical speed, greater than that reported for cycling and ~50% smaller than that of walking. These results suggest that longboarding is a novel form of physical activity that elicits vigorous intensity, yet is economical compared to walking.
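
    Gross cost of transport is metabolic power divided by body mass times speed. A worked check of the ~2.2 J/(kg·m) figure at the typical speed, assuming a mean participant mass of about 70 kg (an assumption of ours; the abstract does not report mass):

```python
def cost_of_transport(energy_rate_kj_per_min, mass_kg, speed_m_per_s):
    """Gross metabolic cost of transport in J/(kg*m)."""
    power_w = energy_rate_kj_per_min * 1000.0 / 60.0   # kJ/min -> W
    return power_w / (mass_kg * speed_m_per_s)
```

    With 41.8 kJ/min at 4.5 m/s and 70 kg this gives about 2.2 J/(kg·m), consistent with the abstract.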

  7. TRIIG - Time-lapse reproduction of images through interactive graphics. [digital processing of quality hard copy

    NASA Technical Reports Server (NTRS)

    Buckner, J. D.; Council, H. W.; Edwards, T. R.

    1974-01-01

    Description of the hardware and software implementing the system of time-lapse reproduction of images through interactive graphics (TRIIG). The system produces a quality hard copy of processed images in a fast and inexpensive manner. This capability allows for optimal development of processing software through the rapid viewing of many image frames in an interactive mode. Three critical optical devices are used to reproduce an image: an Optronics photo reader/writer, the Adage Graphics Terminal, and Polaroid Type 57 high speed film. Typical sources of digitized images are observation satellites, such as ERTS or Mariner, computer coupled electron microscopes for high-magnification studies, or computer coupled X-ray devices for medical research.

  8. Extended Lagrangian Density Functional Tight-Binding Molecular Dynamics for Molecules and Solids.

    PubMed

    Aradi, Bálint; Niklasson, Anders M N; Frauenheim, Thomas

    2015-07-14

    A computationally fast quantum mechanical molecular dynamics scheme using an extended Lagrangian density functional tight-binding formulation has been developed and implemented in the DFTB+ electronic structure program package for simulations of solids and molecular systems. The scheme combines the computational speed of self-consistent density functional tight-binding theory with the efficiency and long-term accuracy of extended Lagrangian Born-Oppenheimer molecular dynamics. For systems without self-consistent charge instabilities, only a single diagonalization or construction of the single-particle density matrix is required in each time step. The molecular dynamics simulation scheme can be applied to a broad range of problems in materials science, chemistry, and biology.

  9. Fast data preprocessing with Graphics Processing Units for inverse problem solving in light-scattering measurements

    NASA Astrophysics Data System (ADS)

    Derkachov, G.; Jakubczyk, T.; Jakubczyk, D.; Archer, J.; Woźniak, M.

    2017-07-01

    Utilising Compute Unified Device Architecture (CUDA) platform for Graphics Processing Units (GPUs) enables significant reduction of computation time at a moderate cost, by means of parallel computing. In the paper [Jakubczyk et al., Opto-Electron. Rev., 2016] we reported using GPU for Mie scattering inverse problem solving (up to 800-fold speed-up). Here we report the development of two subroutines utilising GPU at data preprocessing stages for the inversion procedure: (i) A subroutine, based on ray tracing, for finding spherical aberration correction function. (ii) A subroutine performing the conversion of an image to a 1D distribution of light intensity versus azimuth angle (i.e. scattering diagram), fed from a movie-reading CPU subroutine running in parallel. All subroutines are incorporated in PikeReader application, which we make available on GitHub repository. PikeReader returns a sequence of intensity distributions versus a common azimuth angle vector, corresponding to the recorded movie. We obtained an overall ∼400-fold speed-up of calculations at data preprocessing stages using CUDA codes running on GPU in comparison to single-threaded MATLAB-only code running on CPU.
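
    Subroutine (ii) amounts to binning pixel intensities by azimuth angle about the pattern centre. A single-threaded CPU sketch of that conversion (the GPU version parallelizes the pixel loop; names and the bin count are illustrative):

```python
import math

def azimuthal_profile(image, cx, cy, n_bins=360):
    """Mean intensity versus azimuth angle about (cx, cy): a CPU analogue
    of the image-to-scattering-diagram conversion."""
    two_pi = 2.0 * math.pi
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for y, row in enumerate(image):
        for x, val in enumerate(row):
            ang = math.atan2(y - cy, x - cx) % two_pi      # angle in [0, 2*pi)
            b = min(int(ang / two_pi * n_bins), n_bins - 1)
            sums[b] += val
            counts[b] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```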

  10. Exercise Performance and Corticospinal Excitability during Action Observation

    PubMed Central

    Wrightson, James G.; Twomey, Rosie; Smeeton, Nicholas J.

    2016-01-01

    Purpose: Observation of a model performing fast exercise improves simultaneous exercise performance; however, the precise mechanism underpinning this effect is unknown. The aim of the present study was to investigate whether the speed of the observed exercise influenced both upper body exercise performance and the activation of a cortical action observation network (AON). Method: In Experiment 1, 10 participants completed a 5 km time trial on an arm-crank ergometer whilst observing a blank screen (no-video) and a model performing exercise at both a typical (i.e., individual mean cadence during baseline time trial) and 15% faster than typical speed. In Experiment 2, 11 participants performed arm crank exercise whilst observing exercise at typical speed, 15% slower and 15% faster than typical speed. In Experiment 3, 11 participants observed the typical, slow and fast exercise, and a no-video, whilst corticospinal excitability was assessed using transcranial magnetic stimulation. Results: In Experiment 1, performance time decreased and mean power increased, during observation of the fast exercise compared to the no-video condition. In Experiment 2, cadence and power increased during observation of the fast exercise compared to the typical speed exercise but there was no effect of observation of slow exercise on exercise behavior. In Experiment 3, observation of exercise increased corticospinal excitability; however, there was no difference between the exercise speeds. Conclusion: Observation of fast exercise improves simultaneous upper-body exercise performance. However, because there was no effect of exercise speed on corticospinal excitability, these results suggest that these improvements are not solely due to changes in the activity of the AON. PMID:27014037

  11. Basis for a neuronal version of Grover's quantum algorithm

    PubMed Central

    Clark, Kevin B.

    2014-01-01

    Grover's quantum (search) algorithm exploits principles of quantum information theory and computation to surpass the strong Church–Turing limit governing classical computers. The algorithm initializes a search field into superposed N (eigen)states to later execute nonclassical “subroutines” involving unitary phase shifts of measured states and to produce root-rate or quadratic gain in the algorithmic time (O(N^(1/2))) needed to find some “target” solution m. Akin to this fast technological search algorithm, single eukaryotic cells, such as differentiated neurons, perform natural quadratic speed-up in the search for appropriate store-operated Ca2+ response regulation of, among other processes, protein and lipid biosynthesis, cell energetics, stress responses, cell fate and death, synaptic plasticity, and immunoprotection. Such speed-up in cellular decision making results from spatiotemporal dynamics of networked intracellular Ca2+-induced Ca2+ release and the search (or signaling) velocity of Ca2+ wave propagation. As chemical processes, such as the duration of Ca2+ mobilization, become rate-limiting over interstore distances, Ca2+ waves quadratically decrease interstore-travel time from slow saltatory to fast continuous gradients proportional to the square-root of the classical Ca2+ diffusion coefficient, D^(1/2), matching the computing efficiency of Grover's quantum algorithm. In this Hypothesis and Theory article, I elaborate on these traits using a fire-diffuse-fire model of store-operated cytosolic Ca2+ signaling valid for glutamatergic neurons. Salient model features corresponding to Grover's quantum algorithm are parameterized to meet requirements for the Oracle Hadamard transform and Grover's iteration. 
A neuronal version of Grover's quantum algorithm stands to benefit signal coincidence detection and integration, bidirectional synaptic plasticity, and other vital cell functions by rapidly selecting, ordering, and/or counting optional response regulation choices. PMID:24860419
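
    The quadratic speed-up quoted above can be checked directly with a tiny statevector simulation of Grover's algorithm (oracle phase flip plus inversion about the mean), run for the optimal ~(π/4)·√N iterations:

```python
import math

def grover_success_prob(n_states, target):
    """Statevector simulation of Grover's algorithm; returns the probability
    of measuring the target after ~ (pi/4)*sqrt(N) iterations."""
    amps = [1.0 / math.sqrt(n_states)] * n_states   # uniform superposition
    iterations = int(round(math.pi / 4 * math.sqrt(n_states)))
    for _ in range(iterations):
        amps[target] = -amps[target]                # oracle phase flip
        mean = sum(amps) / n_states                 # diffusion operator:
        amps = [2 * mean - a for a in amps]         # inversion about the mean
    return amps[target] ** 2
```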

  12. A pipeline VLSI design of fast singular value decomposition processor for real-time EEG system based on on-line recursive independent component analysis.

    PubMed

    Huang, Kuan-Ju; Shih, Wei-Yeh; Chang, Jui Chung; Feng, Chih Wei; Fang, Wai-Chi

    2013-01-01

    This paper presents a pipeline VLSI design of a fast singular value decomposition (SVD) processor for a real-time electroencephalography (EEG) system based on on-line recursive independent component analysis (ORICA). Since SVD is used frequently in the computations of the real-time EEG system, a low-latency, high-accuracy SVD processor is essential. During the EEG system process, the proposed SVD processor aims to solve the diagonal, inverse and inverse square root matrices of the target matrices in real time. Generally, SVD requires a huge amount of computation in hardware implementation. Therefore, this work proposes a novel design concept for data-flow updating to assist the pipeline VLSI implementation. The SVD processor can greatly improve the feasibility of real-time EEG system applications such as brain-computer interfaces (BCIs). The proposed architecture is implemented using TSMC 90 nm CMOS technology. The sample rate of the raw EEG data is 128 Hz. The core size of the SVD processor is 580×580 µm², and the operating frequency is 20 MHz. It consumes 0.774 mW of power per execution of the 8-channel EEG system.
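
    For fixed-size symmetric matrices, the matrix functions such a processor targets (inverse, inverse square root) have closed forms via eigendecomposition and Cayley-Hamilton. A hedged software sketch for the 2×2 symmetric positive-definite case, which is not the paper's architecture, only an illustration of the computation:

```python
import math

def inv_sqrt_sym2(a, b, d):
    """Inverse square root of the symmetric positive-definite 2x2 matrix
    [[a, b], [b, d]] via closed-form eigenvalues; by Cayley-Hamilton any
    analytic matrix function equals c0*I + c1*A."""
    tr, det = a + d, a * d - b * b
    gap = math.sqrt(tr * tr - 4 * det)
    l1, l2 = (tr + gap) / 2, (tr - gap) / 2        # eigenvalues
    s1, s2 = 1 / math.sqrt(l1), 1 / math.sqrt(l2)  # f(lambda) = lambda^(-1/2)
    if gap < 1e-15:
        c0, c1 = s1, 0.0
    else:
        c1 = (s1 - s2) / (l1 - l2)
        c0 = s1 - c1 * l1
    return [[c0 + c1 * a, c1 * b], [c1 * b, c0 + c1 * d]]
```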

  13. RenderMan design principles

    NASA Technical Reports Server (NTRS)

    Apodaca, Tony; Porter, Tom

    1989-01-01

    The two worlds of interactive graphics and realistic graphics have remained separate. Fast graphics hardware runs simple algorithms and generates simple looking images. Photorealistic image synthesis software runs slowly on large expensive computers. The time has come for these two branches of computer graphics to merge. The speed and expense of graphics hardware is no longer the barrier to the wide acceptance of photorealism. There is every reason to believe that high quality image synthesis will become a standard capability of every graphics machine, from superworkstation to personal computer. The significant barrier has been the lack of a common language, an agreed-upon set of terms and conditions, for 3-D modeling systems to talk to 3-D rendering systems for computing an accurate rendition of that scene. Pixar has introduced RenderMan to serve as that common language. RenderMan, specifically the extensibility it offers in shading calculations, is discussed.

  14. Acceleration of the matrix multiplication of Radiance three phase daylighting simulations with parallel computing on heterogeneous hardware of personal computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuo, Wangda; McNeil, Andrew; Wetter, Michael

    2013-05-23

    Building designers are increasingly relying on complex fenestration systems to reduce energy consumed for lighting and HVAC in low energy buildings. Radiance, a lighting simulation program, has been used to conduct daylighting simulations for complex fenestration systems. Depending on the configuration, the simulation can take hours or even days using a personal computer. This paper describes how to accelerate the matrix multiplication portion of a Radiance three-phase daylight simulation by conducting parallel computing on the heterogeneous hardware of a personal computer. The algorithm was optimized and the computational part was implemented in parallel using OpenCL. The speed of the new approach was evaluated using various daylighting simulation cases on a multicore central processing unit and a graphics processing unit. Based on the measurements and analysis of the time usage for the Radiance daylighting simulation, further speedups can be achieved by using fast I/O devices and storing the data in a binary format.
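
    The three-phase method evaluates illuminance as the matrix chain i = V·T·D·s; evaluating right-to-left keeps every intermediate result a vector, and the dense products are what the paper offloads to OpenCL. A plain-Python sketch with illustrative matrix sizes (not the Radiance data layout):

```python
def matmul(A, B):
    """Dense matrix product; B may be a matrix or an n x 1 column vector."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def three_phase(V, T, D, s):
    """Illuminance i = V*T*D*s, evaluated right-to-left so each
    intermediate is a vector rather than a full matrix."""
    return matmul(V, matmul(T, matmul(D, s)))
```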

  15. Recent advances in Optical Computed Tomography (OCT) imaging system for three dimensional (3D) radiotherapy dosimetry

    NASA Astrophysics Data System (ADS)

    Rahman, Ahmad Taufek Abdul; Farah Rosli, Nurul; Zain, Shafirah Mohd; Zin, Hafiz M.

    2018-01-01

    Radiotherapy delivery techniques for cancer treatment are becoming more complex and highly focused, to enable accurate radiation dose delivery to the cancerous tissue and minimum dose to the healthy tissue adjacent to the tumour. Instruments that verify complex dose delivery in radiotherapy, such as optical computed tomography (OCT), measure the dose in a three-dimensional (3D) radiochromic dosimeter to ensure the accuracy of the radiotherapy beam delivered to the patient. OCT measures the optical density in a radiochromic material that changes predictably upon exposure to radiotherapy beams. OCT systems have been developed using a photodiode and a charge-coupled device (CCD) as the detector. Existing OCT imaging systems are limited in the accuracy and the speed of the measurement. Advances in on-pixel intelligence CMOS image sensors (CIS) are exploited in this work to replace the current detectors in OCT imaging systems. CIS is capable of on-pixel signal processing at a very fast imaging speed (over several hundred images per second), which will allow improvement in the 3D measurement of the optical density. The paper reviews 3D radiochromic dosimeters and the OCT systems developed, and discusses how CMOS-based OCT imaging can provide accurate and fast optical density measurements in 3D. The paper also discusses the configuration of the CMOS-based OCT developed in this work and how it may improve on existing OCT systems.

  16. Real-time speckle variance swept-source optical coherence tomography using a graphics processing unit.

    PubMed

    Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D

    2012-07-01

    Advances in swept-source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real time prior to SV calculations in order to reduce decorrelation from stationary structures induced by bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk-tissue-motion scenario of human fingernail root imaging, where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.

  17. A video-based speed estimation technique for localizing the wireless capsule endoscope inside gastrointestinal tract.

    PubMed

    Bao, Guanqun; Mi, Liang; Geng, Yishuang; Zhou, Mingda; Pahlavan, Kaveh

    2014-01-01

    Wireless Capsule Endoscopy (WCE) is progressively emerging as one of the most popular non-invasive imaging tools for gastrointestinal (GI) tract inspection. During capsule endoscopic examination, physicians need to know the precise position of the endoscopic capsule in order to localize intestinal disease. For the WCE, the position of the capsule is defined as its linear distance from certain fixed anatomical landmarks. To measure the distance the capsule has traveled, precise knowledge of how fast the capsule moves is needed. In this paper, we present a novel computer vision based speed estimation technique that is able to extract the speed of the endoscopic capsule by analyzing the displacements between consecutive frames. The proposed approach is validated using a virtual testbed as well as real endoscopic images. Results show that the proposed method is able to precisely estimate the speed of the endoscopic capsule with 93% accuracy on average, which enhances the localization accuracy of the WCE to less than 2.49 cm.
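
    The core computation pairs an interframe displacement estimate with the frame rate and a pixel scale. A toy 1-D sketch of both pieces; the paper registers full 2-D frames, so the correlation search and constants here are illustrative only:

```python
def best_shift(ref, cur, max_shift=5):
    """Integer shift of cur (in samples) maximizing overlap correlation
    with ref: a toy stand-in for interframe image registration."""
    def score(s):
        return sum(ref[i] * cur[i + s] for i in range(len(ref))
                   if 0 <= i + s < len(cur))
    return max(range(-max_shift, max_shift + 1), key=score)

def capsule_speed(disp_pixels, mm_per_pixel, frame_rate):
    """Mean capsule speed (mm/s) from per-frame displacements (pixels)."""
    return sum(disp_pixels) / len(disp_pixels) * mm_per_pixel * frame_rate
```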

  18. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.

    PubMed

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high-performance computing and is implemented here to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results are aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of the cloud-based MC simulation is identical to that produced by a single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes, and the parallelization overhead is negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed-up, cloud computing builds a layer of abstraction for high-performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
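
    Taking the abstract's figures at face value, the gap between the 47× observed speed-up and the ideal 100× can be attributed to a fixed per-run overhead (allocation, staging, aggregation). The fixed-overhead model below is my assumption for a back-of-envelope check, not the paper's analysis:

```python
def cloud_speedup(t_serial_s, n_nodes, overhead_s=0.0):
    """Run time and speed-up for an embarrassingly parallel MC job under
    a simple model: work divides evenly across nodes, plus a fixed
    overhead for cluster allocation, data staging and aggregation."""
    t_parallel = t_serial_s / n_nodes + overhead_s
    return t_parallel, t_serial_s / t_parallel

# Figures quoted in the abstract: 2.58 h serial, 3.3 min on 100 nodes.
t_serial = 2.58 * 3600                     # 9288 s
t_cloud = 3.3 * 60                         # 198 s
overhead = t_cloud - t_serial / 100        # time not explained by 1/n scaling
t_model, speedup = cloud_speedup(t_serial, 100, overhead)
```

    The implied overhead is roughly 105 s, which is consistent with the abstract's statement that the overhead becomes negligible for large simulations.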

  19. Ground reaction forces in shallow water running are affected by immersion level, running speed and gender.

    PubMed

    Haupenthal, Alessandro; Fontana, Heiliane de Brito; Ruschel, Caroline; dos Santos, Daniela Pacheco; Roesler, Helio

    2013-07-01

    To analyze the effect of depth of immersion, running speed and gender on ground reaction forces during water running. Controlled laboratory study. Twenty adults (ten male and ten female) participated by running at two levels of immersion (hip and chest) and two speed conditions (slow and fast). Data were collected using an underwater force platform. The following variables were analyzed: vertical force peak (Fy), loading rate (LR) and anterior force peak (Fx anterior). Three-factor mixed ANOVA was used to analyze data. Significant effects of immersion level, speed and gender on Fy were observed, without interaction between factors. Fy was greater when females ran fast at the hip level. There was a significant increase in LR with a reduction in the level of immersion regardless of the speed and gender. No effect of speed or gender on LR was observed. Regarding Fx anterior, significant interaction between speed and immersion level was found: in the slow condition, participants presented greater values at chest immersion, whereas, during the fast running condition, greater values were observed at hip level. The effect of gender was only significant during fast water running, with Fx anterior being greater among men. Increasing speed raised Fx anterior significantly irrespective of the level of immersion and gender. The magnitude of ground reaction forces during shallow water running is affected by immersion level, running speed and gender; for this reason, these factors should be taken into account during exercise prescription. Copyright © 2012 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  20. Automated Heat-Flux-Calibration Facility

    NASA Technical Reports Server (NTRS)

    Liebert, Curt H.; Weikle, Donald H.

    1989-01-01

    Computer control speeds operation of equipment and processing of measurements. New heat-flux-calibration facility developed at Lewis Research Center. Used for fast-transient heat-transfer testing, durability testing, and calibration of heat-flux gauges. Calibrations performed at constant or transient heat fluxes ranging from 1 to 6 MW/m2 and at temperatures ranging from 80 K to melting temperatures of most materials. Facility developed because of need to build and calibrate very small heat-flux gauges for Space Shuttle Main Engine (SSME). Includes lamp head attached to side of service module, an argon-gas-recirculation module, reflector, heat exchanger, and high-speed positioning system. This type of automated heat-flux-calibration facility installed in industrial plants for onsite calibration of heat-flux gauges measuring fluxes of heat in advanced gas-turbine and rocket engines.

  1. A phase-based stereo vision system-on-a-chip.

    PubMed

    Díaz, Javier; Ros, Eduardo; Sabatini, Silvio P; Solari, Fabio; Mota, Sonia

    2007-02-01

    A simple and fast technique for depth estimation based on phase measurement has been adopted for the implementation of a real-time stereo system with sub-pixel resolution on an FPGA device. The technique avoids the attendant problem of phase wrapping. The designed system takes full advantage of the inherent processing parallelism and segmentation capabilities of FPGA devices to achieve a computation speed of 65 megapixels/s, which can be arranged with a customized frame-grabber module to process 211 frames/s at a size of 640 × 480 pixels. The processing speed achieved is higher than conventional camera frame rates, thus allowing the system to extract multiple estimations and be used as a platform to evaluate integration schemes of a population of neurons without increasing hardware resource demands.
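
    The phase-based principle can be sketched in a few lines: band-pass each scanline with a complex (Gabor) filter at tuning frequency f0 and read disparity off the phase difference, d = Δφ/(2πf0). The filter parameters and 1-D scanline model below are illustrative choices, not the paper's FPGA design:

```python
import numpy as np

def gabor_phase(signal, f0, sigma=4.0):
    """Complex Gabor response of a 1-D scanline; its angle is the local
    phase at tuning frequency f0 (cycles/pixel)."""
    x = np.arange(-4 * sigma, 4 * sigma + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2)) * np.exp(2j * np.pi * f0 * x)
    return np.convolve(signal, kernel, mode="same")

def phase_disparity(left, right, f0=0.1):
    """Disparity d = wrap(phi_L - phi_R) / (2*pi*f0). Multiplying by the
    conjugate wraps the phase difference into (-pi, pi], which is what
    sidesteps explicit phase unwrapping."""
    dphi = np.angle(gabor_phase(left, f0) * np.conj(gabor_phase(right, f0)))
    return dphi / (2 * np.pi * f0)

x = np.arange(256)
left = np.sin(2 * np.pi * 0.1 * x)        # scanline dominated by f0
right = np.roll(left, 2)                  # right view shifted by 2 pixels
d = phase_disparity(left, right)[64:192].mean()   # interior, away from edges
```

    Because phase varies continuously, the estimate is sub-pixel by construction, which is why the FPGA system can report sub-pixel depth without interpolation.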

  2. Do we need 3D tube current modulation information for accurate organ dosimetry in chest CT? Protocols dose comparisons.

    PubMed

    Lopez-Rendon, Xochitl; Zhang, Guozhi; Coudyzer, Walter; Develter, Wim; Bosmans, Hilde; Zanca, Federica

    2017-11-01

    To compare the lung and breast dose associated with three chest protocols: standard, organ-based tube current modulation (OBTCM) and fast-speed scanning; and to estimate the error associated with organ dose when modelling the longitudinal (z-) TCM versus the 3D-TCM in Monte Carlo (MC) simulations for these three protocols. Five adult and three paediatric cadavers with different BMI were scanned. The CTDIvol of the OBTCM and the fast-speed protocols were matched to the patient-specific CTDIvol of the standard protocol. Lung and breast doses were estimated using MC simulations with both the z- and 3D-TCM modelled, and compared between protocols. The fast-speed scanning protocol delivered the highest doses. A slight reduction in breast dose (up to 5.1%) was observed for two of the three female cadavers with the OBTCM in comparison to the standard protocol. For both adult and paediatric cadavers, using the z-TCM data alone for organ dose estimation resulted in accuracy within 10.0% for the standard and fast-speed protocols, while relative dose differences were up to 15.3% for the OBTCM protocol. At identical CTDIvol values, the standard protocol delivered the lowest overall doses. Only for the OBTCM protocol is the 3D-TCM needed if accurate (<10.0%) organ dosimetry is desired. • The z-TCM information is sufficient for accurate dosimetry for standard protocols. • The z-TCM information is sufficient for accurate dosimetry for fast-speed scanning protocols. • For organ-based TCM schemes, the 3D-TCM information is necessary for accurate dosimetry. • At identical CTDIvol, the fast-speed scanning protocol delivered the highest doses. • Lung dose was higher in XCare than in the standard protocol at identical CTDIvol.

  3. Optimal control of fast and high-fidelity quantum state transfer in spin-1/2 chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiong-Peng; Shao, Bin, E-mail: sbin610@bit.edu.cn; Hu, Shuai

    Spin chains are promising candidates for quantum communication and computation. Using quantum optimal control (OC) theory based on the Krotov method, we present a protocol to perform quantum state transfer with fast and high fidelity by manipulating only the boundary spins in a quantum spin-1/2 chain. The achieved speed is about one order of magnitude faster than is possible with Lyapunov control at comparable fidelities. Additionally, there is a fundamental limit for OC beyond which further optimization is not possible. The controls are exerted only on the couplings between the boundary spins and their neighbors, so that the scheme has good scalability. We also demonstrate that the resulting OC scheme is robust against disorder in the chain.

  4. Inertial Range Turbulence of Fast and Slow Solar Wind at 0.72 AU and Solar Minimum

    NASA Astrophysics Data System (ADS)

    Teodorescu, Eliza; Echim, Marius; Munteanu, Costel; Zhang, Tielong; Bruno, Roberto; Kovacs, Peter

    2015-05-01

    We investigate Venus Express observations of magnetic field fluctuations performed systematically in the solar wind at 0.72 Astronomical Units (AU), between 2007 and 2009, during the deep minimum of solar cycle 24. The power spectral densities (PSDs) of the magnetic field components have been computed for time intervals that satisfy the data integrity criteria and have been grouped according to the type of wind, fast and slow, defined for speeds larger and smaller than 450 km/s, respectively. The PSDs show higher levels of power for the fast wind than for the slow. The spectral slopes estimated for all PSDs in the frequency range 0.005-0.1 Hz exhibit a normal distribution. The average slope of the trace of the spectral matrix is -1.60 for the fast solar wind and -1.65 for the slow wind. Compared to the corresponding average slopes at 1 AU, the PSDs are shallower at 0.72 AU for slow wind conditions, suggesting a steepening of the solar wind spectra between Venus and Earth. No significant time variation trend is observed in the spectral behavior of either the slow or the fast wind.
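
    A spectral slope of this kind is typically obtained by fitting a line to the PSD in log-log coordinates over the chosen band. The sketch below uses a plain periodogram and a synthetic series with a known f^(-5/3) spectrum; the estimator details are my assumptions, not the paper's exact pipeline:

```python
import numpy as np

def spectral_slope(x, dt, f_lo, f_hi):
    """Least-squares slope of log10(PSD) versus log10(f) over
    [f_lo, f_hi], from a plain periodogram estimate of the PSD."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, dt)
    psd = np.abs(np.fft.rfft(x - x.mean()))**2 * (2.0 * dt / n)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)[0]

# Synthetic series with a known f^(-5/3) spectrum (random phases):
# amplitude ~ f^(-5/6) so that power ~ f^(-5/3).
rng = np.random.default_rng(1)
n, dt = 2**14, 1.0
f = np.fft.rfftfreq(n, dt)
spec = np.zeros(len(f), dtype=complex)
spec[1:] = f[1:]**(-5.0 / 6.0) * np.exp(2j * np.pi * rng.random(len(f) - 1))
x = np.fft.irfft(spec, n)
slope = spectral_slope(x, dt, 0.005, 0.1)   # same band as the abstract
```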

  5. Fast Occlusion and Shadow Detection for High Resolution Remote Sensing Image Combined with LIDAR Point Cloud

    NASA Astrophysics Data System (ADS)

    Hu, X.; Li, X.

    2012-08-01

    The orthophoto is an important component of a GIS database and has been applied in many fields. However, occlusion and shadow cause the loss of feature information, which has a great effect on the quality of images. One of the critical steps in true orthophoto generation is the detection of occlusion and shadow. LiDAR can now obtain the digital surface model (DSM) directly. Combined with this technology, image occlusion and shadow can be detected automatically. In this paper, the Z-Buffer is applied for occlusion detection. Shadow detection can be treated as the same problem as occlusion detection by considering the angle of the sun in place of that of the camera. However, the Z-Buffer algorithm is computationally expensive, and the volume of scanned data and remote sensing images is very large, so an efficient algorithm is a further challenge. A modern graphics processing unit (GPU) is much more powerful than a central processing unit (CPU). We use this technology to speed up the Z-Buffer algorithm and obtain a 7-fold increase in speed compared with the CPU. The results of experiments demonstrate that the Z-Buffer algorithm performs well in occlusion and shadow detection when combined with a high-density point cloud, and that the GPU can speed up the computation significantly.
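
    The Z-Buffer idea is two passes over the projected points: record the nearest depth per raster cell, then flag every point lying behind that depth. A minimal sketch (the nadir projection and toy DSM points are illustrative, not the paper's data):

```python
import numpy as np

def zbuffer_occlusion(points, project, grid_shape):
    """Two-pass Z-buffer: project every 3-D point to a raster cell and
    keep the nearest depth per cell, then flag any point that lies
    behind its cell's nearest surface as occluded."""
    nearest = np.full(grid_shape, np.inf)
    proj = [project(p) for p in points]
    for (r, c), d in proj:                 # pass 1: nearest depth per cell
        nearest[r, c] = min(nearest[r, c], d)
    eps = 1e-9                             # tolerance for depth ties
    return [d > nearest[r, c] + eps for (r, c), d in proj]

# Nadir view of a DSM: cell = (x, y), depth = -z (higher ground is nearer).
project = lambda p: ((int(p[0]), int(p[1])), -p[2])
pts = [(1, 1, 10.0),   # roof point
       (1, 1, 2.0),    # ground directly beneath the roof -> occluded
       (3, 2, 2.0)]    # open ground -> visible
occluded = zbuffer_occlusion(pts, project, (5, 5))
```

    Shadow detection is the same loop with the projection taken along the sun direction instead of the camera ray, which is exactly the substitution the abstract describes.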

  6. Coupling fast fluid dynamics and multizone airflow models in Modelica Buildings library to simulate the dynamics of HVAC systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Wei; Sevilla, Thomas Alonso; Zuo, Wangda

    Historically, multizone models are widely used in building airflow and energy performance simulations due to their fast computing speed. However, multizone models assume that the air in a room is well mixed, consequently limiting their application. In specific rooms where this assumption fails, the use of computational fluid dynamics (CFD) models may be an alternative option. Previous research has mainly focused on coupling CFD models and multizone models to study airflow in large spaces. While significant, most of these analyses did not consider the coupled simulation of the building airflow with the building's Heating, Ventilation, and Air-Conditioning (HVAC) systems. This paper tries to fill the gap by integrating the models for HVAC systems with coupled multizone and CFD simulations for airflows, using the Modelica simulation platform. To improve the computational efficiency, we incorporated a simplified CFD model named fast fluid dynamics (FFD). We first introduce the data synchronization strategy and implementation in Modelica. Then, we verify the implementation using two case studies involving an isothermal and a non-isothermal flow by comparing model simulations to experiment data. Afterward, we study another three cases that are deemed more realistic. This is done by attaching a variable air volume (VAV) terminal box and a VAV system to previous flows to assess the capability of the models in studying the dynamic control of HVAC systems. Finally, we discuss further research needs on the coupled simulation using the models.

  7. Detonation models of fast combustion waves in nanoscale Al-MoO3 bulk powder media

    NASA Astrophysics Data System (ADS)

    Shaw, Benjamin D.; Pantoya, Michelle L.; Dikici, Birce

    2013-02-01

    The combustion of nanometric aluminum (Al) powder with an oxidiser such as molybdenum trioxide (MoO3) is studied analytically. This study focuses on detonation wave models; a Chapman-Jouguet detonation model provides reasonable agreement with experimentally observed wave speeds, provided that multiphase equilibrium sound speeds are applied at the downstream edge of the detonation wave. The results indicate that equilibrium sound speeds of multiphase mixtures can play a critical role in determining the speeds of fast combustion waves in nanoscale Al-MoO3 powder mixtures.

  8. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter

    PubMed Central

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-01-01

    To improve the accuracy of ultrasonic phased array focusing time delay, we analyzed the original interpolation Cascaded Integrator-Comb (CIC) filter and proposed an 8× interpolation CIC filter parallel algorithm, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derived the general formula of the arbitrary-multiple interpolation CIC filter parallel algorithm and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while the speed of computation remains very fast. Considering the existing problems of the CIC filter, we compensated it; the compensated CIC filter’s pass band is flatter, its transition band steeper, and its stop band attenuation greater. Finally, we verified the feasibility of this algorithm on a Field-Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection. PMID:29023385
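
    The CIC structure behind this is standard: N comb sections at the input rate, zero stuffing by R, then N integrator sections at the output rate, and the quoted 1 ns resolution is simply 1/(8 × 125 MHz). A minimal NumPy sketch of the arithmetic only (the actual design is fixed-point, parallel hardware, not this floating reference model):

```python
import numpy as np

def cic_interpolate(x, R=8, N=3, M=1):
    """N-stage CIC interpolator: N comb sections at the input rate,
    zero stuffing by R, then N integrator sections at the output rate.
    Multiplier-free, hence cheap to parallelize on an FPGA."""
    y = np.asarray(x, dtype=np.int64)
    for _ in range(N):                      # comb: y[n] = y[n] - y[n-M]
        y = y - np.concatenate([np.zeros(M, dtype=np.int64), y[:-M]])
    up = np.zeros(len(y) * R, dtype=np.int64)
    up[::R] = y                             # insert R-1 zeros per sample
    for _ in range(N):                      # integrator: running sum
        up = np.cumsum(up)
    return up

out = cic_interpolate(np.ones(8))           # step input
# After the transient (N*(R*M - 1) = 21 samples) the output settles at
# the interpolator's DC gain (R*M)**N / R = 64.
```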

  9. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter.

    PubMed

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-10-12

    To improve the accuracy of ultrasonic phased array focusing time delay, we analyzed the original interpolation Cascaded Integrator-Comb (CIC) filter and proposed an 8× interpolation CIC filter parallel algorithm, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derived the general formula of the arbitrary-multiple interpolation CIC filter parallel algorithm and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while the speed of computation remains very fast. Considering the existing problems of the CIC filter, we compensated it; the compensated CIC filter's pass band is flatter, its transition band steeper, and its stop band attenuation greater. Finally, we verified the feasibility of this algorithm on a Field-Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.

  10. A simplified implementation of edge detection in MATLAB is faster and more sensitive than fast fourier transform for actin fiber alignment quantification.

    PubMed

    Kemeny, Steven Frank; Clyne, Alisa Morss

    2011-04-01

    Fiber alignment plays a critical role in the structure and function of cells and tissues. While fiber alignment quantification is important to experimental analysis and several different methods for quantifying fiber alignment exist, many studies focus on qualitative rather than quantitative analysis, perhaps due to the complexity of current fiber alignment methods. Speed and sensitivity were compared between edge detection and the fast Fourier transform (FFT) for measuring actin fiber alignment in cells exposed to shear stress. While edge detection using matrix multiplication was consistently more sensitive than FFT, its image processing time was significantly longer. However, when MATLAB functions were used to implement edge detection, MATLAB's efficient element-by-element calculations and fast filtering techniques reduced computation cost 100-fold compared to the matrix multiplication edge detection method. The new computation time was comparable to the FFT method, and MATLAB edge detection produced well-distributed fiber angle distributions that statistically distinguished aligned from unaligned fibers in half as many sample images. When the FFT sensitivity was improved by dividing images into smaller subsections, its processing time grew larger than that required for MATLAB edge detection. Implementation of edge detection in MATLAB is simpler, faster, and more sensitive than FFT for fiber alignment quantification.
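
    The gradient-based idea generalizes beyond MATLAB: take image derivatives, keep strong-gradient pixels, and histogram the angles perpendicular to the gradient (i.e., along the fiber). A NumPy sketch under those assumptions, with a synthetic striped "fiber" image (not the paper's actin data):

```python
import numpy as np

def fiber_angles(img, frac=0.25):
    """Orientation samples from intensity gradients: angles of the
    strongest-gradient pixels, measured perpendicular to the gradient
    (i.e., along the edge), in degrees in [0, 180)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    mask = mag > frac * mag.max()            # keep only clear edges
    return (np.degrees(np.arctan2(gy[mask], gx[mask])) + 90.0) % 180.0

# Vertical stripes -> edges run vertically -> all orientations at 90 deg.
img = np.tile([0.0, 0.0, 1.0, 1.0], (32, 8))
angles = fiber_angles(img)
```

    A narrow angle histogram then indicates alignment; a flat one indicates random fibers, which is the statistic the abstract's comparison is about.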

  11. Rapid Transient Pressure Field Computations in the Nearfield of Circular Transducers using Frequency Domain Time-Space Decomposition

    PubMed Central

    Alles, E. J.; Zhu, Y.; van Dongen, K. W. A.; McGough, R. J.

    2013-01-01

    The fast nearfield method, when combined with time-space decomposition, is a rapid and accurate approach for calculating transient nearfield pressures generated by ultrasound transducers. However, the standard time-space decomposition approach is only applicable to certain analytical representations of the temporal transducer surface velocity that, when applied to the fast nearfield method, are expressed as a finite sum of products of separate temporal and spatial terms. To extend time-space decomposition such that accelerated transient field simulations are enabled in the nearfield for an arbitrary transducer surface velocity, a new transient simulation method, frequency domain time-space decomposition (FDTSD), is derived. With this method, the temporal transducer surface velocity is transformed into the frequency domain, and then each complex-valued term is processed separately. Further improvements are achieved by spectral clipping, which reduces the number of terms and the computation time. Trade-offs between speed and accuracy are established for FDTSD calculations, and pressure fields obtained with the FDTSD method for a circular transducer are compared to those obtained with Field II and the impulse response method. The FDTSD approach, when combined with the fast nearfield method and spectral clipping, consistently achieves smaller errors in less time and requires less memory than Field II or the impulse response method. PMID:23160476

  12. The direct, not V1-mediated, functional influence between the thalamus and middle temporal complex in the human brain is modulated by the speed of visual motion.

    PubMed

    Gaglianese, A; Costagli, M; Ueno, K; Ricciardi, E; Bernardi, G; Pietrini, P; Cheng, K

    2015-01-22

    The main visual pathway that conveys motion information to the middle temporal complex (hMT+) originates from the primary visual cortex (V1), which, in turn, receives spatial and temporal features of the perceived stimuli from the lateral geniculate nucleus (LGN). In addition, visual motion information reaches hMT+ directly from the thalamus, bypassing V1, through a direct pathway. We aimed to elucidate whether this direct route between LGN and hMT+ represents a 'fast lane' reserved for high-speed motion, as proposed previously, or is merely involved in processing motion information irrespective of speed. We evaluated functional magnetic resonance imaging (fMRI) responses elicited by moving visual stimuli and applied connectivity analyses to investigate the effect of motion speed on the causal influence between LGN and hMT+, independent of V1, using Conditional Granger Causality (CGC) in the presence of slow and fast visual stimuli. Our results showed that at least part of the visual motion information from LGN reaches hMT+, bypassing V1, in response to both slow and fast motion speeds of the perceived stimuli. We also investigated whether motion speeds have different effects on the connections between LGN and functional subdivisions within hMT+: direct connections between LGN and MT-proper carry mainly slow motion information, while connections between LGN and MST carry mainly fast motion information. The existence of a parallel pathway that connects the LGN directly to hMT+ in response to both slow and fast speeds may explain why MT and MST can still respond in the presence of V1 lesions. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  13. MIDAS, prototype Multivariate Interactive Digital Analysis System, phase 1. Volume 1: System description

    NASA Technical Reports Server (NTRS)

    Kriegler, F. J.

    1974-01-01

    The MIDAS System is described as a third-generation fast multispectral recognition system able to keep pace with the large quantity and high rates of data acquisition from present and projected sensors. A principal objective of the MIDAS program is to provide a system well interfaced with the human operator and thus to obtain large overall reductions in turnaround time and significant gains in throughput. The hardware and software are described. The system contains a mini-computer to control the various high-speed processing elements in the data path, and a classifier which implements an all-digital prototype multivariate-Gaussian maximum likelihood decision algorithm operating at 200,000 pixels/sec. Sufficient hardware was developed to perform signature extraction from computer-compatible tapes, compute classifier coefficients, control the classifier operation, and diagnose operation.
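
    The decision rule MIDAS implemented in hardware is the classical multivariate-Gaussian maximum-likelihood classifier: assign each pixel vector to the class with the highest Gaussian log-likelihood. A NumPy sketch with hypothetical two-band signatures (equal priors assumed; the hardware version precomputes the per-class coefficients):

```python
import numpy as np

def gaussian_ml_classify(pixels, means, covs):
    """Multivariate-Gaussian maximum-likelihood decision rule: pick the
    class minimizing Mahalanobis distance plus log-determinant penalty
    (equivalently, maximizing the Gaussian log-likelihood)."""
    pixels = np.atleast_2d(np.asarray(pixels, dtype=float))
    scores = []
    for mu, cov in zip(means, covs):
        diff = pixels - mu
        inv = np.linalg.inv(cov)
        maha = np.einsum("ij,jk,ik->i", diff, inv, diff)   # Mahalanobis^2
        scores.append(-0.5 * (maha + np.log(np.linalg.det(cov))))
    return np.argmax(np.array(scores), axis=0)

# Two hypothetical 2-band training signatures (mean vector + covariance).
means = [np.array([10.0, 20.0]), np.array([30.0, 5.0])]
covs = [4.0 * np.eye(2), 4.0 * np.eye(2)]
labels = gaussian_ml_classify([[11.0, 19.0], [29.0, 6.0]], means, covs)
```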

  14. Single-trial detection of visual evoked potentials by common spatial patterns and wavelet filtering for brain-computer interface.

    PubMed

    Tu, Yiheng; Huang, Gan; Hung, Yeung Sam; Hu, Li; Hu, Yong; Zhang, Zhiguo

    2013-01-01

    Event-related potentials (ERPs) are widely used in brain-computer interface (BCI) systems as input signals conveying a subject's intention. A fast and reliable single-trial ERP detection method can be used to develop a BCI system with both high speed and high accuracy. However, most single-trial ERP detection methods were developed for offline EEG analysis and thus have a high computational complexity and require manual operation. Therefore, they are not applicable to practical BCI systems, which require a low-complexity and automatic ERP detection method. This work presents a joint spatial-time-frequency filter that combines common spatial patterns (CSP) and wavelet filtering (WF) to improve the signal-to-noise ratio (SNR) of visual evoked potentials (VEPs), which can lead to a single-trial ERP-based BCI.
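
    The CSP half of such a filter is a standard construction: whiten the composite covariance of the two conditions, then diagonalize one condition's whitened covariance so the resulting spatial filters maximize the variance ratio between conditions. A NumPy sketch on synthetic two-channel data (illustrative, not the paper's pipeline):

```python
import numpy as np

def csp_filters(X1, X2):
    """Common spatial patterns via whitening: whiten C1 + C2, then
    diagonalize class 1's whitened covariance. Rows of the returned W
    are spatial filters, ordered so the first maximizes
    var(class 1) / var(class 2)."""
    C1, C2 = np.cov(X1), np.cov(X2)
    lam, U = np.linalg.eigh(C1 + C2)
    P = U @ np.diag(lam ** -0.5) @ U.T        # symmetric whitening matrix
    d, V = np.linalg.eigh(P @ C1 @ P.T)       # eigenvalues ascending
    return (V.T @ P)[::-1]                    # descending: class-1-first

rng = np.random.default_rng(2)
# Two channels: class 1 strong on channel 0, class 2 strong on channel 1.
X1 = np.vstack([3.0 * rng.standard_normal(2000), rng.standard_normal(2000)])
X2 = np.vstack([rng.standard_normal(2000), 3.0 * rng.standard_normal(2000)])
W = csp_filters(X1, X2)
v1 = (W[0] @ X1).var() / (W[0] @ X2).var()    # large: favors class 1
v2 = (W[-1] @ X1).var() / (W[-1] @ X2).var()  # small: favors class 2
```

    Because the filters reduce to a few fixed matrix-vector products at run time, the approach fits the low-complexity, automatic requirement the abstract emphasizes.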

  15. A Computing Method for Sound Propagation Through a Nonuniform Jet Stream

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Liu, C. H.

    1974-01-01

    Understanding the principles of jet noise propagation is an essential ingredient of systematic noise reduction research. High-speed computer methods offer a unique potential for dealing with complex real-life physical systems, whereas analytical solutions are restricted to sophisticated idealized models. The classical formulation of sound propagation through a jet flow was found to be inadequate for computer solutions, and a more suitable approach was needed. Previous investigations selected the phase and amplitude of the acoustic pressure as dependent variables, requiring the solution of a system of nonlinear algebraic equations; the nonlinearities complicated both the analysis and the computation. A reformulation of the convective wave equation in terms of a new set of dependent variables is developed with special emphasis on its suitability for numerical solution on fast computers. The technique is very attractive because the resulting equations are linear in the new variables. The computer solution to such a linear system of algebraic equations may be obtained by well-defined and direct means which are conservative of computer time and storage space. Typical examples are illustrated and computational results are compared with available numerical and experimental data.

  16. Coarse cluster enhancing collaborative recommendation for social network systems

    NASA Astrophysics Data System (ADS)

    Zhao, Yao-Dong; Cai, Shi-Min; Tang, Ming; Shang, Min-Sheng

    2017-10-01

    Traditional collaborative-filtering-based recommender systems for social network systems place very high demands on computation time because they compute similarities over all pairs of users via resource usages and annotation actions, which strongly limits recommendation speed. In this paper, to overcome this drawback, we propose a novel approach, namely the coarse cluster, which partitions similar users and associated items at high speed to enhance user-based collaborative filtering, and we then develop a fast collaborative user model for social tagging systems. Experimental results based on the Delicious dataset show that the proposed model dramatically reduces the processing time cost, by more than 90%, and relatively improves the accuracy in comparison with ordinary user-based collaborative filtering, and that it is robust to the initial parameter. Most importantly, the proposed model can be conveniently extended by introducing more user information (e.g., profiles) and can practically be applied to large-scale social network systems to enhance recommendation speed without accuracy loss.
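
    The complexity win comes from comparing a user only with members of their own coarse cluster instead of with all users. The sketch below illustrates that structure; the partitioning rule (lexicographically smallest item) is only a toy stand-in for the paper's clustering step:

```python
def jaccard(a, b):
    """Set similarity used for the user-user comparison."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend(user, user_items, clusters, k=2):
    """User-based CF restricted to the user's coarse cluster: score items
    adopted by the top-k most similar in-cluster neighbors, weighted by
    similarity. Comparing only within clusters is what cuts the
    all-pairs cost."""
    seen = user_items[user]
    neighbors = sorted((u for u in clusters[user] if u != user),
                       key=lambda u: jaccard(seen, user_items[u]),
                       reverse=True)[:k]
    scores = {}
    for u in neighbors:
        w = jaccard(seen, user_items[u])
        for item in user_items[u] - seen:
            scores[item] = scores.get(item, 0.0) + w
    return sorted(scores, key=scores.get, reverse=True)

# Toy tagging data; coarse partition by smallest item (stand-in only).
user_items = {"u1": {"a", "b", "c"}, "u2": {"a", "b", "d"},
              "u3": {"x", "y"}, "u4": {"x", "z"}}
groups = {}
for u, items in user_items.items():
    groups.setdefault(min(items), set()).add(u)
clusters = {u: groups[min(items)] for u, items in user_items.items()}
recs = recommend("u1", user_items, clusters)
```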

  17. Kinematics of fast cervical rotations in persons with chronic neck pain: a cross-sectional and reliability study

    PubMed Central

    2010-01-01

    Background Assessment of sensorimotor function is useful for classification and treatment evaluation of neck pain disorders. Several studies have investigated various aspects of cervical motor function. Most of these have involved slow or self-paced movements, while few have investigated fast cervical movements. Moreover, the reliability of assessing fast cervical axial rotation has, to our knowledge, not been evaluated before. Methods Cervical kinematics was assessed during fast axial head rotations in 118 women with chronic nonspecific neck pain (NS) and compared to 49 healthy controls (CON). The relationship between cervical kinematics and symptoms, self-rated functioning and fear of movement was evaluated in the NS group. A sub-sample of 16 NS and 16 CON was re-tested after one week to assess the reliability of the kinematic variables. Six cervical kinematic variables were calculated: peak speed, range of movement, conjunct movements and three variables related to the shape of the speed profile. Results Together, peak speed and conjunct movements had a sensitivity of 76% and a specificity of 78% in discriminating between NS and CON, of which the major part could be attributed to peak speed (NS: 226 ± 88 °/s and CON: 348 ± 92 °/s, p < 0.01). Peak speed was slower in NS compared to healthy controls, and even slower in NS with comorbid low-back pain. Associations were found between reduced peak speed and self-rated difficulties with running, performing head movements, car driving, sleeping and pain. Peak speed showed reasonably high reliability, while the reliability of conjunct movements was poor. Conclusions Peak speed of fast cervical axial rotations is reduced in people with chronic neck pain, and further reduced in subjects with concomitant low back pain. The fast cervical rotation test appears to be a reliable and valid tool for assessing neck pain disorders at the group level, while a rather large between-subject variation and overlap between groups call for caution in the interpretation of individual assessments. PMID:20875135

  18. Fast Sealift and Maritime Prepositioning Options for Improving Sealift Capabilities

    DTIC Science & Technology

    1991-01-01

    Compares candidate cargo ships with speeds of 22 to 25 knots against the eight existing T-AKRs, which have a speed of 30 knots. [Garbled chart: cumulative cargo delivered with and without ship attrition for high-capacity, low-cost, improved MPS(A), SES and T-AKR options.] The report examines fast sealift and maritime prepositioning ships and evaluates their performance in terms of speed and the ability to make cumulative cargo deliveries.

  19. Monte Carlo-based fluorescence molecular tomography reconstruction method accelerated by a cluster of graphic processing units.

    PubMed

    Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming

    2011-02-01

    High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to obtain reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the MC simulation's advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-mismatched boundaries, the GPU-cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.

  20. Flexible, fast and accurate sequence alignment profiling on GPGPU with PaSWAS.

    PubMed

    Warris, Sven; Yalcin, Feyruz; Jackson, Katherine J L; Nap, Jan Peter

    2015-01-01

Obtaining large-scale sequence alignments in a fast and flexible way is an important step in the analysis of next-generation sequencing data. Applications based on the Smith-Waterman (SW) algorithm are often either not fast enough, limited to dedicated tasks or not sufficiently accurate due to statistical issues. Current SW implementations that run on graphics hardware do not report the alignment details necessary for further analysis. With the Parallel SW Alignment Software (PaSWAS) it is possible (a) to have easy access to the computational power of NVIDIA-based general purpose graphics processing units (GPGPUs) to perform high-speed sequence alignments, and (b) to retrieve relevant information such as score, number of gaps and mismatches. The software reports multiple hits per alignment. The added value of the new SW implementation is demonstrated with two test cases: (1) tag recovery in next generation sequence data and (2) isotype assignment within an immunoglobulin 454 sequence data set. Both cases show the usability and versatility of the new parallel Smith-Waterman implementation.
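For orientation, the classic recurrence that SW-based tools such as PaSWAS parallelize on GPGPUs can be sketched serially as follows; the scoring parameters are illustrative assumptions, not values taken from the paper.

```python
# Minimal serial sketch of the Smith-Waterman local-alignment recurrence.
# Scoring values (match/mismatch/gap) below are illustrative assumptions.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Return the best local alignment score of strings a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]   # scoring matrix, zero border
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # local alignment: scores are clamped at zero
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best
```

GPU implementations typically exploit the fact that all cells on the same anti-diagonal of H are independent and can be computed in parallel.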

  1. Fast Laser Holographic Interferometry For Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Lee, George

    1989-01-01

Proposed system makes holographic interferograms quickly in wind tunnels. Holograms reveal two-dimensional flows around airfoils and provide information on distributions of pressure, structures of wake and boundary layers, and density contours of flow fields. Holograms form quickly in thermoplastic plates in wind tunnel. Plates rigid and left in place so neither vibrations nor photographic-development process degrades accuracy of holograms. System processes and analyzes images quickly. Semiautomatic micro-computer-based desktop image-processing unit now undergoing development moves easily to wind tunnel, and its speed and memory adequate for flows about airfoils.

  2. Optimal Diabatic Dynamics of Majorana-based Topological Qubits

    NASA Astrophysics Data System (ADS)

    Seradjeh, Babak; Rahmani, Armin; Franz, Marcel

    In topological quantum computing, unitary operations on qubits are performed by adiabatic braiding of non-Abelian quasiparticles such as Majorana zero modes and are protected from local environmental perturbations. This scheme requires slow operations. By using Pontryagin's maximum principle, here we show that the same quantum gates can be implemented in much shorter times through optimal diabatic pulses. While our fast diabatic gates do not enjoy topological protection, they provide significant practical advantages due to their optimal speed and remarkable robustness to calibration errors and noise. NSERC, CIfAR, NSF DMR-1350663, BSF 2014345.

  3. Energy efficient quantum machines

    NASA Astrophysics Data System (ADS)

    Abah, Obinna; Lutz, Eric

    2017-05-01

    We investigate the performance of a quantum thermal machine operating in finite time based on shortcut-to-adiabaticity techniques. We compute efficiency and power for a paradigmatic harmonic quantum Otto engine by taking the energetic cost of the shortcut driving explicitly into account. We demonstrate that shortcut-to-adiabaticity machines outperform conventional ones for fast cycles. We further derive generic upper bounds on both quantities, valid for any heat engine cycle, using the notion of quantum speed limit for driven systems. We establish that these quantum bounds are tighter than those stemming from the second law of thermodynamics.

  4. Using the automata processor for fast pattern recognition in high energy physics experiments. A proof of concept

    DOE PAGES

    Michael H. L. S. Wang; Cancelo, Gustavo; Green, Christopher; ...

    2016-06-25

    Here, we explore the Micron Automata Processor (AP) as a suitable commodity technology that can address the growing computational needs of pattern recognition in High Energy Physics (HEP) experiments. A toy detector model is developed for which an electron track confirmation trigger based on the Micron AP serves as a test case. Although primarily meant for high speed text-based searches, we demonstrate a proof of concept for the use of the Micron AP in a HEP trigger application.

  5. Using the automata processor for fast pattern recognition in high energy physics experiments. A proof of concept

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael H. L. S. Wang; Cancelo, Gustavo; Green, Christopher

    Here, we explore the Micron Automata Processor (AP) as a suitable commodity technology that can address the growing computational needs of pattern recognition in High Energy Physics (HEP) experiments. A toy detector model is developed for which an electron track confirmation trigger based on the Micron AP serves as a test case. Although primarily meant for high speed text-based searches, we demonstrate a proof of concept for the use of the Micron AP in a HEP trigger application.

  6. Traffic safety facts 1994 : speed

    DOT National Transportation Integrated Search

    1995-01-01

    Speeding - exceeding the posted speed limit or driving too fast for conditions - is one of the most prevalent factors contributing to traffic crashes. In 1994, speed was a factor in 30 percent of all fatal crashes, and 12,480 lives were lost in speed...

  7. Sensory and Cognitive Determinants of Reading Speed

    ERIC Educational Resources Information Center

    Jackson, Mark D.; McClelland, James L.

    1975-01-01

    Fast and average readers were tested on four tasks. Fast readers appear to pick up more information per fixation on structured textual material, and had a greater span of apprehension for unrelated elements. Results disagree with the view that reading speed depends solely on ability to infer missing information. (CHK)

  8. Receiver-Assisted Congestion Control to Achieve High Throughput in Lossy Wireless Networks

    NASA Astrophysics Data System (ADS)

    Shi, Kai; Shu, Yantai; Yang, Oliver; Luo, Jiarong

    2010-04-01

    Many applications nowadays require fast data transfer in high-speed wireless networks. However, due to its conservative congestion control algorithm, the Transmission Control Protocol (TCP) cannot effectively utilize the network capacity in lossy wireless networks. In this paper, we propose a receiver-assisted congestion control mechanism (RACC) in which the sender performs loss-based control while the receiver performs delay-based control. The receiver measures the network bandwidth based on the packet interarrival interval and uses it to compute a congestion window size deemed appropriate for the sender. After receiving this advertised value as feedback from the receiver, the sender then uses the additive increase and multiplicative decrease (AIMD) mechanism to compute the congestion window size to be used. By integrating loss-based and delay-based congestion control, our mechanism can mitigate the effect of wireless losses, alleviate the timeout effect, and therefore make better use of network bandwidth. Simulation and experiment results in various scenarios show that our mechanism can outperform conventional TCP in high-speed and lossy wireless environments.
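The sender-side logic described above can be illustrated with a minimal sketch: AIMD growth and halving, capped by the window the receiver advertises. Names and parameters here are hypothetical; this is not the authors' RACC implementation.

```python
# Sketch of AIMD window update bounded by a receiver-advertised window.
# All names are hypothetical illustrations of the mechanism described above.

def next_cwnd(cwnd, advertised_cwnd, loss_detected, mss=1):
    """One congestion-window update step (units of MSS)."""
    if loss_detected:
        cwnd = max(mss, cwnd / 2.0)    # multiplicative decrease on loss
    else:
        cwnd += mss                    # additive increase per RTT
    return min(cwnd, advertised_cwnd)  # never exceed receiver's estimate
```

The receiver-side cap is what distinguishes this from plain AIMD: the receiver's delay-based bandwidth estimate prevents the sender from overshooting on a lossy wireless path.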

  9. Parallel programming of gradient-based iterative image reconstruction schemes for optical tomography.

    PubMed

    Hielscher, Andreas H; Bartel, Sebastian

    2004-02-01

    Optical tomography (OT) is a fast-developing novel imaging modality that uses near-infrared (NIR) light to obtain cross-sectional views of optical properties inside the human body. A major challenge remains the time-consuming, computationally intensive image reconstruction problem that converts NIR transmission measurements into cross-sectional images. To increase the speed of the iterative image reconstruction schemes that are commonly applied for OT, we have developed and implemented several parallel algorithms on a cluster of workstations. Static process distribution as well as dynamic load balancing schemes suitable for heterogeneous clusters and varying machine performances are introduced and tested. The resulting algorithms are shown to accelerate the reconstruction process to various degrees, substantially reducing the computation times for clinically relevant problems.
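The idea behind dynamic load balancing on a heterogeneous cluster can be sketched generically with a shared work queue, with threads standing in for workstations: faster workers simply pull more work units. This is an illustration of the concept, not the authors' implementation.

```python
# Generic dynamic load balancing via a shared work queue: each worker pulls
# the next task when free, so faster workers automatically do more work.

import queue
import threading

def run_dynamic(tasks, workers, process):
    """Process all tasks with `workers` threads pulling from one queue."""
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                t = q.get_nowait()     # pull next work unit, if any
            except queue.Empty:
                return                 # queue drained: worker exits
            r = process(t)
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results
```

By contrast, static distribution assigns each worker a fixed share up front, which leaves fast machines idle while slow ones finish.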

  10. An examination of techniques for reformatting digital cartographic data/part 1: the raster-to- vector process.

    USGS Publications Warehouse

    Peuquet, D.J.

    1981-01-01

    Current graphic devices suitable for high-speed computer input and output of cartographic data are tending more and more to be raster-oriented, such as the rotating drum scanner and the color raster display. However, the majority of commonly used manipulative techniques in computer-assisted cartography and automated spatial data handling continue to require that the data be in vector format. This situation has recently precipitated the requirement for very fast techniques for converting digital cartographic data from raster to vector format for processing, and then back into raster format for plotting. This article is part 1 of a two-part paper examining the state of the art in these conversion techniques. -from Author

  11. Fast, adaptive summation of point forces in the two-dimensional Poisson equation

    NASA Technical Reports Server (NTRS)

    Van Dommelen, Leon; Rundensteiner, Elke A.

    1989-01-01

    A comparatively simple procedure is presented for the direct summation of the velocity field induced by point vortices which significantly reduces the required number of operations by replacing selected partial sums with asymptotic series. Tables are presented which demonstrate the speed of this algorithm: computational time merely doubles when the number of vortices doubles, whereas current methods extend the computational time by a factor of 4. The procedure need not be restricted to the solution of the Poisson equation, and may be applied to other problems involving groups of points in which the interaction between elements of different groups can be simplified when the distance between the groups is sufficiently great.
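For contrast, the baseline such a method accelerates is the O(N^2) direct summation of the standard two-dimensional point-vortex velocity, u - iv = (1/(2*pi*i)) * sum_k Gamma_k / (z - z_k); a minimal sketch of that baseline, not of the paper's asymptotic-series grouping:

```python
# O(N^2) direct summation of 2-D point-vortex induced velocities.
# Positions are complex numbers z; the result w[j] equals u - i*v at z[j].

import math

def direct_velocities(z, gamma):
    """Velocity induced at each vortex by all the others."""
    n = len(z)
    w = [0j] * n
    for j in range(n):
        for k in range(n):
            if j != k:
                w[j] += gamma[k] / (z[j] - z[k])
        w[j] /= 2j * math.pi   # w[j] = u - i*v at vortex j
    return w
```

Doubling N quadruples the work here, which is exactly the factor-of-4 scaling the asymptotic-series grouping avoids.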

  12. Extended Lagrangian Density Functional Tight-Binding Molecular Dynamics for Molecules and Solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aradi, Bálint; Niklasson, Anders M. N.; Frauenheim, Thomas

    A computationally fast quantum mechanical molecular dynamics scheme using an extended Lagrangian density functional tight-binding formulation has been developed and implemented in the DFTB+ electronic structure program package for simulations of solids and molecular systems. The scheme combines the computational speed of self-consistent density functional tight-binding theory with the efficiency and long-term accuracy of extended Lagrangian Born–Oppenheimer molecular dynamics. Furthermore, for systems without self-consistent charge instabilities, only a single diagonalization or construction of the single-particle density matrix is required in each time step. The molecular dynamics simulation scheme can also be applied to a broad range of problems in materials science, chemistry, and biology.

  13. Extended Lagrangian Density Functional Tight-Binding Molecular Dynamics for Molecules and Solids

    DOE PAGES

    Aradi, Bálint; Niklasson, Anders M. N.; Frauenheim, Thomas

    2015-06-26

    A computationally fast quantum mechanical molecular dynamics scheme using an extended Lagrangian density functional tight-binding formulation has been developed and implemented in the DFTB+ electronic structure program package for simulations of solids and molecular systems. The scheme combines the computational speed of self-consistent density functional tight-binding theory with the efficiency and long-term accuracy of extended Lagrangian Born–Oppenheimer molecular dynamics. Furthermore, for systems without self-consistent charge instabilities, only a single diagonalization or construction of the single-particle density matrix is required in each time step. The molecular dynamics simulation scheme can also be applied to a broad range of problems in materials science, chemistry, and biology.

  14. Traffic safety facts 1998 : speeding

    DOT National Transportation Integrated Search

    1998-01-01

    Speeding - exceeding the posted speed limit or driving too fast for conditions - is one of the most prevalent factors contributing to traf... The economic cost of speeding-related crashes is estimated to be $27.7 billion each year.

  15. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    PubMed

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inferences about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves substantial improvement in computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it still retains good accuracy on point estimation. Published by Elsevier Ltd.
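For context, the slow recursive likelihood that motivates MLE-BD can be sketched with the classical Ma-Sandri-Sarkar recursion for the Luria-Delbrück mutant-count distribution, followed by a naive grid-search MLE. This illustrates the conventional estimator, not the MLE-BD algorithm itself.

```python
# Ma-Sandri-Sarkar recursion for the Luria-Delbruck mutant-count pmf,
# and a naive grid-search maximum likelihood estimate of m (expected
# number of mutations). The O(n^2) recursion is the cost MLE-BD avoids.

import math

def ld_pmf(m, n_max):
    """P(0..n_max mutants) for expected mutation number m."""
    p = [math.exp(-m)]                      # p_0 = e^{-m}
    for n in range(1, n_max + 1):
        # p_n = (m/n) * sum_{i=0}^{n-1} p_i / (n - i + 1)
        p.append((m / n) * sum(p[i] / (n - i + 1) for i in range(n)))
    return p

def mle_m(counts, grid):
    """Grid-search MLE of m over candidate values in `grid`."""
    n_max = max(counts)
    best, best_ll = None, -math.inf
    for m in grid:
        p = ld_pmf(m, n_max)
        ll = sum(math.log(p[c]) for c in counts)
        if ll > best_ll:
            best, best_ll = m, ll
    return best
```

Each likelihood evaluation requires the full quadratic recursion up to the largest observed mutant count, which is why large counts make the conventional estimator slow.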

  16. Distinct timing mechanisms produce discrete and continuous movements.

    PubMed

    Huys, Raoul; Studenka, Breanna E; Rheaume, Nicole L; Zelaznik, Howard N; Jirsa, Viktor K

    2008-04-25

    The differentiation of discrete and continuous movement is one of the pillars of motor behavior classification. Discrete movements have a definite beginning and end, whereas continuous movements do not have such discriminable end points. In the past decade there has been vigorous debate whether this classification implies different control processes. This debate up until the present has been empirically based. Here, we present an unambiguous non-empirical classification based on theorems in dynamical system theory that sets discrete and continuous movements apart. Through computational simulations of representative modes of each class and topological analysis of the flow in state space, we show that distinct control mechanisms underwrite discrete and fast rhythmic movements. In particular, we demonstrate that discrete movements require a time keeper while fast rhythmic movements do not. We validate our computational findings experimentally using a behavioral paradigm in which human participants performed finger flexion-extension movements at various movement paces and under different instructions. Our results demonstrate that the human motor system employs different timing control mechanisms (presumably via differential recruitment of neural subsystems) to accomplish varying behavioral functions such as speed constraints.

  17. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    PubMed Central

    Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.

    2015-01-01

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
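A plain serial sketch of the standard recurrence, ii(x, y) = i(x, y) + ii(x-1, y) + ii(x, y-1) - ii(x-1, y-1), shows why any rectangular sum then costs only four lookups; the paper's hardware algorithms decompose this recurrence for row-parallel evaluation.

```python
# Serial integral-image construction and O(1) box sums.
# A zero-padded border row/column avoids boundary special cases.

def integral_image(img):
    """Integral image of a 2-D list; ii has one extra zero row/column."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(1, h + 1):
        for x in range(1, w + 1):
            ii[y][x] = (img[y-1][x-1] + ii[y-1][x]
                        + ii[y][x-1] - ii[y-1][x-1])
    return ii

def box_sum(ii, x0, y0, x1, y1):
    """Sum of pixels in the inclusive box (x0,y0)..(x1,y1): four lookups."""
    return (ii[y1+1][x1+1] - ii[y0][x1+1]
            - ii[y1+1][x0] + ii[y0][x0])
```

The constant-time box sum is what lets SURF-style detectors evaluate rectangular filters at any scale for the same cost.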

  18. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    PubMed

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  19. High-Speed and Scalable Whole-Brain Imaging in Rodents and Primates.

    PubMed

    Seiriki, Kaoru; Kasai, Atsushi; Hashimoto, Takeshi; Schulze, Wiebke; Niu, Misaki; Yamaguchi, Shun; Nakazawa, Takanobu; Inoue, Ken-Ichi; Uezono, Shiori; Takada, Masahiko; Naka, Yuichiro; Igarashi, Hisato; Tanuma, Masato; Waschek, James A; Ago, Yukio; Tanaka, Kenji F; Hayata-Takano, Atsuko; Nagayasu, Kazuki; Shintani, Norihito; Hashimoto, Ryota; Kunii, Yasuto; Hino, Mizuki; Matsumoto, Junya; Yabe, Hirooki; Nagai, Takeharu; Fujita, Katsumasa; Matsuda, Toshio; Takuma, Kazuhiro; Baba, Akemichi; Hashimoto, Hitoshi

    2017-06-21

    Subcellular resolution imaging of the whole brain and subsequent image analysis are prerequisites for understanding anatomical and functional brain networks. Here, we have developed a very high-speed serial-sectioning imaging system named FAST (block-face serial microscopy tomography), which acquires high-resolution images of a whole mouse brain in a speed range comparable to that of light-sheet fluorescence microscopy. FAST enables complete visualization of the brain at a resolution sufficient to resolve all cells and their subcellular structures. FAST renders unbiased quantitative group comparisons of normal and disease model brain cells for the whole brain at a high spatial resolution. Furthermore, FAST is highly scalable to non-human primate brains and human postmortem brain tissues, and can visualize neuronal projections in a whole adult marmoset brain. Thus, FAST provides new opportunities for global approaches that will allow for a better understanding of brain systems in multiple animal models and in human diseases. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Towards Scalable Graph Computation on Mobile Devices.

    PubMed

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2014-10-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach.
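The memory-mapping idea can be sketched in a few lines: an on-device binary edge list is mmap-ed and streamed through, letting the OS page data in and out instead of loading the whole graph into RAM. The file layout here (consecutive pairs of little-endian 4-byte ints) is an assumption for illustration, not the authors' format.

```python
# Stream an mmap-ed binary edge list (src, dst pairs of 4-byte ints)
# and count out-degrees without loading the file into memory at once.

import mmap
import struct

def degree_count(path, num_nodes):
    """Out-degree of each node from an edge-list file of (src, dst) pairs."""
    deg = [0] * num_nodes
    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        for off in range(0, len(mm), 8):        # 8 bytes per edge record
            src, _dst = struct.unpack_from("<ii", mm, off)
            deg[src] += 1
        mm.close()
    return deg
```

Because the mapping is read-only and sequential, the OS can evict pages freely, which is what allows a graph far larger than main memory to be processed on a single device.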

  1. Towards Scalable Graph Computation on Mobile Devices

    PubMed Central

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2015-01-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach. PMID:25859564

  2. Frozen Gaussian approximation for 3D seismic tomography

    NASA Astrophysics Data System (ADS)

    Chai, Lihui; Tong, Ping; Yang, Xu

    2018-05-01

    Three-dimensional (3D) wave-equation-based seismic tomography is computationally challenging at large scales and in the high-frequency regime. In this paper, we apply the frozen Gaussian approximation (FGA) method to compute high-frequency 3D sensitivity kernels and seismic tomography. Rather than the standard ray theory used in seismic inversion (e.g. Kirchhoff migration and Gaussian beam migration), FGA is used to compute the 3D high-frequency sensitivity kernels for travel-time or full waveform inversions. Specifically, we reformulate the equations of the forward and adjoint wavefields for convenience in applying FGA, and with this reformulation one can efficiently compute the Green's functions whose convolutions with the source time function produce the wavefields needed for the construction of 3D kernels. Moreover, a fast summation method is proposed based on local fast Fourier transforms, which greatly improves the speed of reconstruction in the last step of the FGA algorithm. We apply FGA to both travel-time adjoint tomography and full waveform inversion (FWI) on synthetic crosswell seismic data with dominant frequencies as high as those of real crosswell data, and confirm again that FWI requires a more sophisticated initial velocity model for convergence than travel-time adjoint tomography. We also numerically test the accuracy of applying FGA to local earthquake tomography. This study paves the way to directly applying wave-equation-based seismic tomography methods to real data around their dominant frequencies.

  3. Measuring Sound Speed in Gas Mixtures Using a Photoacoustic Generator

    NASA Astrophysics Data System (ADS)

    Suchenek, Mariusz; Borowski, Tomasz

    2018-01-01

    We present a new method that allows percentage-level determination of gas composition with a fast response time. The system uses the speed of sound in a resonant cell, together with temperature, to determine the composition of a binary gas mixture in unknown proportions. In our experiment, acoustic waves were excited inside a longitudinal acoustic resonator using positive feedback. This feedback provides fast tracking of the resonance frequency of the cell and therefore of changes in the speed of sound. The measurements agree with the theoretical description. Two gas mixtures, carbon dioxide and argon each mixed with nitrogen, were tested.
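A hedged sketch of the inversion such a method implies: for an ideal binary mixture, c = sqrt(gamma_mix * R * T / M_mix), so a measured sound speed and temperature determine the mole fraction by a one-dimensional root find. The CO2 and N2 constants below are standard textbook values, not data from the paper.

```python
# Ideal-gas sound speed of a binary mixture and bisection inversion for
# the mole fraction. Each gas is (molar mass kg/mol, molar Cp J/(mol K)).

import math

R = 8.314  # universal gas constant, J/(mol K)

def mixture_speed(x, T, gas1, gas2):
    """Sound speed with mole fraction x of gas1 and (1 - x) of gas2."""
    M = x * gas1[0] + (1 - x) * gas2[0]       # mixture molar mass
    cp = x * gas1[1] + (1 - x) * gas2[1]      # mole-fraction-weighted Cp
    gamma = cp / (cp - R)                     # ideal gas: Cv = Cp - R
    return math.sqrt(gamma * R * T / M)

def mole_fraction(c_meas, T, gas1, gas2, tol=1e-9):
    """Bisect for x in [0, 1] such that mixture_speed(x) == c_meas."""
    lo, hi = 0.0, 1.0
    f = lambda x: mixture_speed(x, T, gas1, gas2) - c_meas
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Since the sound speed varies monotonically with composition for these gas pairs, the bisection always converges to a unique mole fraction.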

  4. A Novel Approach to Realize of All Optical Frequency Encoded Dibit Based XOR and XNOR Logic Gates Using Optical Switches with Simulated Verification

    NASA Astrophysics Data System (ADS)

    Ghosh, B.; Hazra, S.; Haldar, N.; Roy, D.; Patra, S. N.; Swarnakar, J.; Sarkar, P. P.; Mukhopadhyay, S.

    2018-03-01

    Over the last few decades, optics has proved its strong potential for parallel logic, arithmetic and algebraic operations owing to its very high speed in communication and computation. Many different logical and sequential operations using the all-optical frequency-encoding technique have been proposed by several authors. Here, we adopt the all-optical dibit representation technique, which offers high-speed operation and reduces the bit-error problem. Exploiting this representation, we propose all-optical frequency-encoded dibit-based XOR and XNOR logic gates using optical switches, namely the add/drop multiplexer (ADM) and the reflective semiconductor optical amplifier (RSOA). The operation of these gates has been verified through simulation in MATLAB (R2008a).

  5. Simulation evaluation of TIMER, a time-based, terminal air traffic, flow-management concept

    NASA Technical Reports Server (NTRS)

    Credeur, Leonard; Capron, William R.

    1989-01-01

    A description of a time-based, extended terminal area ATC concept called Traffic Intelligence for the Management of Efficient Runway scheduling (TIMER) and the results of a fast-time evaluation are presented. The TIMER concept is intended to bridge the gap between today's ATC system and a future automated time-based ATC system. The TIMER concept integrates en route metering, fuel-efficient cruise and profile descents, and terminal time-based sequencing and spacing, together with computer-generated controller aids, to improve delivery precision for fuller use of runway capacity. Simulation results identify and show the effects and interactions of such key variables as horizon-of-control location, delivery time error at both the metering fix and the runway threshold, aircraft separation requirements, delay discounting, wind, aircraft heading and speed errors, and knowledge of final approach speed.

  6. Transfer of piano practice in fast performance of skilled finger movements.

    PubMed

    Furuya, Shinichi; Nakamura, Ayumi; Nagata, Noriko

    2013-11-01

    Transfer of learning facilitates the efficient mastery of various skills without practicing all possible sensory-motor repertoires. The present study assessed whether motor practice at a submaximal speed, which is typical in sports and music performance, results in an increase in a maximum speed of finger movements of trained and untrained skills. Piano practice of sequential finger movements at a submaximal speed over days progressively increased the maximum speed of trained movements. This increased maximum speed of finger movements was maintained two months after the practice. The learning transferred within the hand to some extent, but not across the hands. The present study confirmed facilitation of fast finger movements following a piano practice at a submaximal speed. In addition, the findings indicated the intra-manual transfer effects of piano practice on the maximum speed of skilled finger movements.

  7. A fast numerical method for ideal fluid flow in domains with multiple stirrers

    NASA Astrophysics Data System (ADS)

    Nasser, Mohamed M. S.; Green, Christopher C.

    2018-03-01

    A collection of arbitrarily-shaped solid objects, each moving at a constant speed, can be used to mix or stir ideal fluid, and can give rise to interesting flow patterns. Assuming these systems of fluid stirrers are two-dimensional, the mathematical problem of resolving the flow field—given a particular distribution of any finite number of stirrers of specified shape and speed—can be formulated as a Riemann-Hilbert (R-H) problem. We show that this R-H problem can be solved numerically using a fast and accurate algorithm for any finite number of stirrers based around a boundary integral equation with the generalized Neumann kernel. Various systems of fluid stirrers are considered, and our numerical scheme is shown to handle highly multiply connected domains (i.e. systems of many fluid stirrers) with minimal computational expense.

  8. Studying Solar Wind Properties Around CIRs and Their Effects on GCR Modulation

    NASA Astrophysics Data System (ADS)

    Ghanbari, K.; Florinski, V. A.

    2017-12-01

    Corotating interaction region (CIR) events occur when a fast solar wind stream overtakes slow solar wind, forming a compression region ahead and a rarefaction region behind in the fast solar wind. Usually this phenomenon occurs along with a crossing of the heliospheric current sheet, the surface separating solar magnetic fields of opposing polarities. In this work, the solar plasma data provided by the ACE science center are used to perform a superposed epoch analysis of solar wind parameters, including proton density, proton temperature, solar wind speed and solar magnetic field, in order to study how the variations of these parameters affect the modulation of galactic cosmic rays. Magnetic fluctuation variances in different parts of a CIR are computed and analyzed using similar techniques in order to understand cosmic-ray diffusive transport in these regions.

  9. Speed but not amplitude of visual feedback exacerbates force variability in older adults.

    PubMed

    Kim, Changki; Yacoubi, Basma; Christou, Evangelos A

    2018-06-23

    Magnification of visual feedback (VF) impairs force control in older adults. In this study, we aimed to determine whether the age-associated increase in force variability with magnification of visual feedback is a consequence of increased amplitude or increased speed of visual feedback. Seventeen young and 18 older adults performed a constant isometric force task with the index finger at 5% of MVC. We manipulated the vertical (force gain) and horizontal (time gain) aspects of the visual feedback so that participants performed the task with the following VF conditions: (1) high amplitude-fast speed; (2) low amplitude-slow speed; (3) high amplitude-slow speed. Changing the visual feedback from low amplitude-slow speed to high amplitude-fast speed increased force variability in older adults but decreased it in young adults (P < 0.01). Changing the visual feedback from low amplitude-slow speed to high amplitude-slow speed did not alter force variability in older adults (P > 0.2), but decreased it in young adults (P < 0.01). Changing the visual feedback from high amplitude-slow speed to high amplitude-fast speed increased force variability in older adults (P < 0.01) but did not alter force variability in young adults (P > 0.2). In summary, increased force variability in older adults with magnification of visual feedback was evident only when the speed of visual feedback increased. Thus, we conclude that in older adults deficits in the rate of processing visual information, and not deficits in the processing of more visual information, impair force control.

  10. Older adults must hurry at pedestrian lights! A cross-sectional analysis of preferred and fast walking speed under single- and dual-task conditions

    PubMed Central

    Tomovic, Sara; Münzer, Thomas; de Bruin, Eling D.

    2017-01-01

    Slow walking speed is strongly associated with adverse health outcomes, including cognitive impairment, in the older population. Moreover, adequate walking speed is crucial to maintain older pedestrians’ mobility and safety in urban areas. This study aimed to identify the proportion of Swiss older adults that did not reach 1.2 m/s, which reflects the requirements to cross streets within the green–yellow phase of pedestrian lights, when walking fast under cognitive challenge. A convenience sample, including 120 older women (65%) and men, was recruited from the community (88%) and from senior residences and divided into groups of 70–79 years (n = 59, 74.8 ± 0.4 y; mean ± SD) and ≥80 years (n = 61, 85.5 ± 0.5 y). Steady state walking speed was assessed under single- and dual-task conditions at preferred and fast walking speed. Additionally, functional lower extremity strength (5-chair-rises test), subjective health rating, and retrospective estimates of fall frequency were recorded. Results showed that 35.6% of the younger and 73.8% of the older participants were not able to walk faster than 1.2 m/s under the fast dual-task walking condition. Fast dual-task walking speed was higher compared to the preferred speed single- and dual-task conditions (all p < .05, r = .31 to .48). Average preferred single-task walking speed was 1.19 ± 0.24 m/s (70–79 y) and 0.94 ± 0.27 m/s (≥80 y), respectively, and correlated with performance in the 5-chair-rises test (rs = −.49, p < .001), subjective health (τ = .27, p < .001), and fall frequency (τ = −.23, p = .002). We conclude that the fitness status of many older people is inadequate to safely cross streets at pedestrian lights and maintain mobility in the community’s daily life in urban areas. Consequently, training measures to improve the older population’s cognitive and physical fitness should be promoted to enhance walking speed and safety of older pedestrians. PMID:28759587

  11. A Fourier-based compressed sensing technique for accelerated CT image reconstruction using first-order methods.

    PubMed

    Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei

    2014-06-21

    As a solution to iterative CT image reconstruction, first-order methods are prominent for their large-scale capability and fast convergence rate [Formula: see text]. In practice, a CT system matrix with a large condition number may lead to slow convergence despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometries. To achieve the maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and a GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with existing TV-regularized techniques based on statistics-based weighted least-squares as well as the basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques.
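
    The shrinkage-thresholding update at the heart of such a solver can be sketched on a generic L1-regularized least-squares problem. This is an illustrative stand-in, not the paper's Fourier-weighted TV model: the soft-threshold replaces the TV proximal step, and the function names are hypothetical.

```python
import numpy as np

def soft_threshold(x, tau):
    # Elementwise shrinkage: the proximal operator of the L1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista(A, b, lam, n_iter=200):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with Nesterov acceleration
    # (constant step 1/L instead of the paper's backtracking line search).
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

    The accelerated momentum term is what yields the O(1/k^2) convergence rate the abstract alludes to, versus O(1/k) for plain gradient descent with shrinkage.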

  12. Reliability and validity of a smartphone-based assessment of gait parameters across walking speed and smartphone locations: Body, bag, belt, hand, and pocket.

    PubMed

    Silsupadol, Patima; Teja, Kunlanan; Lugade, Vipul

    2017-10-01

    The assessment of spatiotemporal gait parameters is a useful clinical indicator of health status. Unfortunately, most assessment tools require controlled laboratory environments which can be expensive and time consuming. As smartphones with embedded sensors are becoming ubiquitous, this technology can provide a cost-effective, easily deployable method for assessing gait. Therefore, the purpose of this study was to assess the reliability and validity of a smartphone-based accelerometer in quantifying spatiotemporal gait parameters when attached to the body or in a bag, belt, hand, and pocket. Thirty-four healthy adults were asked to walk at self-selected comfortable, slow, and fast speeds over a 10-m walkway while carrying a smartphone. Step length, step time, gait velocity, and cadence were computed from smartphone-based accelerometers and validated with GAITRite. Across all walking speeds, smartphone data had excellent reliability (ICC 2,1 ≥0.90) for the body and belt locations, with bag, hand, and pocket locations having good to excellent reliability (ICC 2,1 ≥0.69). Correlations between the smartphone-based and GAITRite-based systems were very high for the body (r=0.89, 0.98, 0.96, and 0.87 for step length, step time, gait velocity, and cadence, respectively). Similarly, Bland-Altman analysis demonstrated that the bias approached zero, particularly in the body, bag, and belt conditions under comfortable and fast speeds. Thus, smartphone-based assessments of gait are most valid when placed on the body, in a bag, or on a belt. The use of a smartphone to assess gait can provide relevant data to clinicians without encumbering the user and allow for data collection in the free-living environment. Copyright © 2017 Elsevier B.V. All rights reserved.
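
    The temporal parameters in studies like this are derived from step events detected in the acceleration trace. A toy peak-picking sketch, not the validated pipeline of the study (the threshold rule and function name are assumptions):

```python
import numpy as np

def gait_parameters(acc_v, fs):
    # acc_v: vertical acceleration samples (gravity removed), fs: Hz.
    # A sample is a step candidate if it is a local maximum above threshold.
    thr = acc_v.mean() + acc_v.std()
    peaks = [i for i in range(1, len(acc_v) - 1)
             if acc_v[i] > thr and acc_v[i] >= acc_v[i - 1] and acc_v[i] > acc_v[i + 1]]
    step_times = np.diff(peaks) / fs        # seconds between consecutive steps
    cadence = 60.0 / step_times.mean()      # steps per minute
    return step_times.mean(), cadence
```

    Gait velocity would additionally require a step-length model, which is where smartphone placement (body, bag, belt, hand, pocket) matters most.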

  13. Smartphones as image processing systems for prosthetic vision.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

    The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information as well as extracting useful features from the scene surrounding the patient. The capability and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware yet avoid bulky, straining solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered as a valid external electronics platform for visual prosthetic research.

  14. Compute as Fast as the Engineers Can Think! ULTRAFAST COMPUTING TEAM FINAL REPORT

    NASA Technical Reports Server (NTRS)

    Biedron, R. T.; Mehrotra, P.; Nelson, M. L.; Preston, M. L.; Rehder, J. J.; Rogers, J. L.; Rudy, D. H.; Sobieski, J.; Storaasli, O. O.

    1999-01-01

    This report documents findings and recommendations by the Ultrafast Computing Team (UCT). In the period 10-12/98, UCT reviewed design case scenarios for a supersonic transport and a reusable launch vehicle to derive computing requirements necessary for support of a design process with efficiency so radically improved that human thought rather than the computer paces the process. Assessment of the present computing capability against the above requirements indicated a need for further improvement in computing speed by several orders of magnitude to reduce time to solution from tens of hours to seconds in major applications. Evaluation of the trends in computer technology revealed a potential to attain the postulated improvement by further increases of single processor performance combined with massively parallel processing in a heterogeneous environment. However, utilization of massively parallel processing to its full capability will require redevelopment of the engineering analysis and optimization methods, including invention of new paradigms. To that end UCT recommends initiation of a new activity at LaRC called Computational Engineering for development of new methods and tools geared to the new computer architectures in disciplines, their coordination, and validation and benefit demonstration through applications.

  15. Segmentation of hand radiographs using fast marching methods

    NASA Astrophysics Data System (ADS)

    Chen, Hong; Novak, Carol L.

    2006-03-01

    Rheumatoid Arthritis is one of the most common chronic diseases. Joint space width in hand radiographs is evaluated to assess joint damage in order to monitor progression of disease and response to treatment. Manual measurement of joint space width is time-consuming and highly prone to inter- and intra-observer variation. We propose a method for automatic extraction of finger bone boundaries using fast marching methods for quantitative evaluation of joint space width. The proposed algorithm includes two stages: location of hand joints followed by extraction of bone boundaries. By setting the propagation speed of the wave front as a function of image intensity values, the fast marching algorithm extracts the skeleton of the hands, in which each branch corresponds to a finger. The finger joint locations are then determined by using the image gradients along the skeletal branches. In order to extract bone boundaries at joints, the gradient magnitudes are utilized for setting the propagation speed, and the gradient phases are used for discriminating the boundaries of adjacent bones. The bone boundaries are detected by searching for the fastest paths from one side of each joint to the other side. Finally, joint space width is computed based on the extracted upper and lower bone boundaries. The algorithm was evaluated on a test set of 8 two-hand radiographs, including images from healthy patients and from patients suffering from arthritis, gout and psoriasis. Using our method, 97% of 208 joints were accurately located and 89% of 416 bone boundaries were correctly extracted.
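
    Fast marching propagates a front through the image at an intensity-dependent speed and reads off first-arrival times. The continuous method solves the eikonal equation with upwind finite differences; the discrete analogue below, a Dijkstra search on the pixel grid, is a simpler sketch of the same idea and is not the authors' implementation (the edge-cost convention is an assumption):

```python
import heapq
import numpy as np

def travel_time(speed, source):
    # First-arrival time of a front starting at `source` (row, col),
    # moving over a 2-D array of positive speeds; 4-connected moves,
    # each edge costing 1/speed of the pixel being entered.
    h, w = speed.shape
    time = np.full((h, w), np.inf)
    time[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > time[r, c]:
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nt = t + 1.0 / speed[nr, nc]
                if nt < time[nr, nc]:
                    time[nr, nc] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return time
```

    Backtracking the arrival-time field from one side of a joint to the other along steepest descent yields the "fastest path" used to trace bone boundaries.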

  16. Machine Vision For Industrial Control:The Unsung Opportunity

    NASA Astrophysics Data System (ADS)

    Falkman, Gerald A.; Murray, Lawrence A.; Cooper, James E.

    1984-05-01

    Vision modules have primarily been developed to relieve those pressures newly brought into existence by Inspection (QUALITY) and Robotic (PRODUCTIVITY) mandates. Industrial Control pressure stems, on the other hand, from the older first-industrial-revolution mandate of throughput. Satisfying such pressure calls for speed in both imaging and decision making. Vision companies have, however, put speed on a back burner or ignored it entirely because most modules are computer/software based, which limits their speed potential. Increasingly, the keynote being struck at machine vision seminars is that "Visual and Computational Speed Must Be Increased and Dramatically!" There are modular hardwired-logic systems that are fast but, all too often, they are not very bright. Such units: Measure the fill factor of bottles as they spin by, Read labels on cans, Count stacked plastic cups or Monitor the width of parts streaming past the camera. Many are only a bit more complex than a photodetector. Once in place, most of these units are incapable of simple upgrading to a new task and are Vision's analog to the robot industry's pick-and-place (RIA TYPE E) robot. Vision thus finds itself amidst the same quandaries that once beset the Robot Industry of America when it tried to define a robot, excluded dumb ones, and was left with only slow machines whose unit volume potential is shatteringly low. This paper develops an approach to meeting the need for a vision system that cuts a swath into the terra incognita of intelligent, high-speed vision processing. Main attention is directed to vision for industrial control. Some presently untapped vision application areas that will be serviced include: Electronics, Food, Sports, Pharmaceuticals, Machine Tools and Arc Welding.

  17. Limits to high-speed simulations of spiking neural networks using general-purpose computers.

    PubMed

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
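
    The millisecond-resolution integration loop that dominates such run times can be illustrated with a single leaky integrate-and-fire neuron under forward Euler. This toy is far simpler than the plastic network models benchmarked in the study (Brian, NEST, Neuron, Auryn); all parameter values below are illustrative.

```python
def simulate_lif(t_total=1000.0, dt=0.1, tau=10.0, v_rest=-70.0,
                 v_thresh=-50.0, v_reset=-70.0, drive=30.0):
    # Voltages in mV, times in ms; `drive` is the R*I input in mV.
    # Returns the number of spikes fired in t_total ms.
    v = v_rest
    spikes = 0
    for _ in range(int(t_total / dt)):
        v += dt / tau * (-(v - v_rest) + drive)   # leaky integration step
        if v >= v_thresh:
            spikes += 1
            v = v_reset                           # fire and reset
    return spikes
```

    With dt = 0.1 ms, one simulated second of one neuron already costs 10,000 updates; scaling to thousands of neurons with plastic synapses is what pushes simulations against the real-time boundary described above.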

  18. Traffic safety facts 1995 : speeding

    DOT National Transportation Integrated Search

    1996-01-01

    Speeding - exceeding the posted speed limit or driving too fast for conditions - is one of the most prevalent factors contributing to traffic crashes. In 1995, speeding was a contributing factor in 31 percent of all fatal crashes, and 13,256 lives we...

  19. Traffic safety facts 1996 : speeding

    DOT National Transportation Integrated Search

    1997-01-01

    Speeding - exceeding the posted speed limit or driving too fast for conditions - is one of the most prevalent factors contributing to traffic crashes. In 1996, speeding was a contributing factor in 30 percent of all fatal crashes, and 12,998 lives we...

  20. Traffic safety facts 1999 : speeding

    DOT National Transportation Integrated Search

    2000-01-01

    Speeding -- exceeding the posted speed limit or driving too fast for conditions -- is one of the most prevalent factors contributing to traffic crashes. In 1999, speeding was a contributing factor in 30% of all fatal crashes, and 12,628 lives were lo...

  1. Automatic aneurysm neck detection using surface Voronoi diagrams.

    PubMed

    Cárdenes, Rubén; Pozo, José María; Bogunovic, Hrvoje; Larrabide, Ignacio; Frangi, Alejandro F

    2011-10-01

    A new automatic approach for saccular intracranial aneurysm isolation is proposed in this work. Due to the inter- and intra-observer variability in manual delineation of the aneurysm neck, a definition based on a minimum cost path around the aneurysm sac is proposed that copes with this variability and is able to make consistent measurements along different data sets, as well as to automate and speed up the analysis of cerebral aneurysms. The method is based on the computation of a minimal path along a scalar field obtained on the vessel surface, to find the aneurysm neck in a robust and fast manner. The computation of the scalar field on the surface is obtained using a fast marching approach with a speed function based on the exponential of the distance from the centerline bifurcation between the aneurysm dome and the parent vessels. In order to assure a correct topology of the aneurysm sac, the neck computation is constrained to a region defined by a surface Voronoi diagram obtained from the branches of the vessel centerline. We validate this method comparing our results in 26 real cases with manual aneurysm isolation obtained using a cut-plane, and also with results obtained using manual delineations from three different observers by comparing typical morphological measures. © 2011 IEEE

  2. YNOGK: A New Public Code for Calculating Null Geodesics in the Kerr Spacetime

    NASA Astrophysics Data System (ADS)

    Yang, Xiaolin; Wang, Jiancheng

    2013-07-01

    Following the work of Dexter & Agol, we present a new public code for the fast calculation of null geodesics in the Kerr spacetime. Using Weierstrass's and Jacobi's elliptic functions, we express all coordinates and affine parameters as analytical and numerical functions of a parameter p, which is an integral value along the geodesic. This is the main difference between our code and previous similar ones. The advantage of this treatment is that the information about the turning points does not need to be specified in advance by the user, and many applications such as imaging, the calculation of line profiles, and the observer-emitter problem, become root-finding problems. All elliptic integrations are computed by Carlson's elliptic integral method as in Dexter & Agol, which guarantees the fast computational speed of our code. The formulae to compute the constants of motion given by Cunningham & Bardeen have been extended, which allow one to readily handle situations in which the emitter or the observer has an arbitrary distance from, and motion state with respect to, the central compact object. The validation of the code has been extensively tested through applications to toy problems from the literature. The source FORTRAN code is freely available for download on our Web site http://www1.ynao.ac.cn/~yangxl/yxl.html.
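
    Carlson's method evaluates the symmetric elliptic integral R_F by the duplication theorem: each step shrinks the spread of the arguments by a factor of four, so only a handful of iterations are needed. A minimal sketch (production implementations such as Numerical Recipes' rf add a fifth-order series tail rather than iterating to the limit):

```python
import math

def carlson_rf(x, y, z, rtol=1e-8):
    # Carlson's R_F(x, y, z) by repeated duplication; when the three
    # arguments have converged to a common value mu, R_F = 1/sqrt(mu).
    while True:
        sx, sy, sz = math.sqrt(x), math.sqrt(y), math.sqrt(z)
        lam = sx * sy + sy * sz + sz * sx
        x, y, z = (x + lam) / 4.0, (y + lam) / 4.0, (z + lam) / 4.0
        mu = (x + y + z) / 3.0
        if max(abs(x - mu), abs(y - mu), abs(z - mu)) < rtol * mu:
            return 1.0 / math.sqrt(mu)
```

    All Legendre-form elliptic integrals reduce to R_F and its relatives, which is why a fast R_F translates directly into fast geodesic computation.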

  3. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, wemore » introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10\\% of accuracy. Emulation is only from 25 to 200 times slower than real time.« less

  4. A Fast and Flexible Method for Meta-Map Building for ICP-Based SLAM

    NASA Astrophysics Data System (ADS)

    Kurian, A.; Morin, K. W.

    2016-06-01

    Recent developments in LiDAR sensors make mobile mapping fast and cost effective. These sensors generate a large amount of data, which in turn improves the coverage and details of the map. Due to the limited range of the sensor, one has to collect a series of scans to build the entire map of the environment. If we have good GNSS coverage, building a map is a well addressed problem. But in an indoor environment, we have limited GNSS reception, and an inertial solution, if available, can quickly diverge. In such situations, simultaneous localization and mapping (SLAM) is used to generate a navigation solution and map concurrently. SLAM using point clouds poses a number of computational challenges even with modern hardware due to the sheer amount of data. In this paper, we propose two strategies for minimizing the cost of computation and storage when a 3D point cloud is used for navigation and real-time map building. We have used the 3D point cloud generated by Leica Geosystems' Pegasus Backpack, which is equipped with Velodyne VLP-16 LiDAR scanners. To improve the speed of the conventional iterative closest point (ICP) algorithm, we propose a point cloud sub-sampling strategy which does not throw away any key features and yet significantly reduces the number of points that need to be processed and stored. In order to speed up the correspondence finding step, a dual kd-tree and circular buffer architecture is proposed. We have shown that the proposed method can run in real time and has excellent navigation accuracy characteristics.
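
    The generic form of point cloud sub-sampling can be sketched as a voxel-grid reduction that keeps one centroid per occupied cell. Note this plain reduction is not feature-preserving, unlike the strategy the paper proposes; it only illustrates why the point count entering ICP drops so sharply.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    # Keep one centroid per occupied voxel of an N x 3 point cloud.
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel index and average each group.
    _, inverse, counts = np.unique(keys, axis=0, return_inverse=True,
                                   return_counts=True)
    inverse = np.asarray(inverse).ravel()
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)
    return centroids / counts[:, None]
```

    Because ICP's correspondence search (here, the dual kd-tree) is the bottleneck, reducing N points to M occupied voxels cuts the per-iteration cost roughly from O(N log N) to O(M log M).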

  5. Software Engineering Principles 3-14 August 1981,

    DTIC Science & Technology

    1981-08-01

    small disk used (but not that of the extended mass storage or large disk option); it is very fast (about 1/5 the speed of the primary memory, where the...extended mass storage or large disk option); it is very fast (about 1/5 the speed of the primary memory, where the disk was 1/10000 for access); and...programmed and tested - must be correct and fast D. Choice of right synchronization operations: Design problem 1. Several mentioned in literature 9-22

  6. Lower Extremity Muscle Activity During a Women’s Overhand Lacrosse Shot

    PubMed Central

    Millard, Brianna M.; Mercer, John A.

    2014-01-01

    The purpose of this study was to describe lower extremity muscle activity during the lacrosse shot. Participants (n=5 females, age 22±2 years, body height 162.6±15.2 cm, body mass 63.7±23.6 kg) were free from injury and had at least one year of lacrosse experience. The lead leg was instrumented with electromyography (EMG) leads to measure muscle activity of the rectus femoris (RF), biceps femoris (BF), tibialis anterior (TA), and medial gastrocnemius (GA). Participants completed five trials of a warm-up speed shot (Slow) and a game speed shot (Fast). Video analysis was used to identify the discrete events defining specific movement phases. Full-wave rectified data were averaged per muscle per phase (Crank Back Minor, Crank Back Major, Stick Acceleration, Stick Deceleration). Average EMG per muscle was analyzed using a 4 (Phase) × 2 (Speed) ANOVA. BF was greater during Fast vs. Slow for all phases (p<0.05), while TA was not influenced by either Phase or Speed (p>0.05). RF and GA were each influenced by the interaction of Phase and Speed (p<0.05) with GA being greater during Fast vs. Slow shots during all phases and RF greater during Crank Back Minor and Major as well as Stick Deceleration (p<0.05) but only tended to be greater during Stick Acceleration (p=0.076) for Fast vs. Slow. The greater muscle activity (BF, RF, GA) during Fast vs. Slow shots may have been related to a faster approach speed and/or need to create a stiff lower extremity to allow for faster upper extremity movements. PMID:25114727

  7. MIDAS, prototype Multivariate Interactive Digital Analysis System, phase 1. Volume 3: Wiring diagrams

    NASA Technical Reports Server (NTRS)

    Kriegler, F. J.; Christenson, D.; Gordon, M.; Kistler, R.; Lampert, S.; Marshall, R.; Mclaughlin, R.

    1974-01-01

    The Midas System is a third-generation, fast, multispectral recognition system able to keep pace with the large quantity and high rates of data acquisition from present and projected sensors. A principal objective of the MIDAS Program is to provide a system well interfaced with the human operator and thus to obtain large overall reductions in turn-around time and significant gains in throughput. The hardware and software generated in Phase I of the overall program are described. The system contains a mini-computer to control the various high-speed processing elements in the data path and a classifier which implements an all-digital prototype multivariate-Gaussian maximum likelihood decision algorithm operating at 2 x 100,000 pixels/sec. Sufficient hardware was developed to perform signature extraction from computer-compatible tapes, compute classifier coefficients, control the classifier operation, and diagnose operation. The MIDAS construction and wiring diagrams are given.

  8. MIDAS, prototype Multivariate Interactive Digital Analysis System, Phase 1. Volume 2: Diagnostic system

    NASA Technical Reports Server (NTRS)

    Kriegler, F. J.; Christenson, D.; Gordon, M.; Kistler, R.; Lampert, S.; Marshall, R.; Mclaughlin, R.

    1974-01-01

    The MIDAS System is a third-generation, fast, multispectral recognition system able to keep pace with the large quantity and high rates of data acquisition from present and projected sensors. A principal objective of the MIDAS Program is to provide a system well interfaced with the human operator and thus to obtain large overall reductions in turn-around time and significant gains in throughput. The hardware and software generated in Phase I of the overall program are described. The system contains a mini-computer to control the various high-speed processing elements in the data path and a classifier which implements an all-digital prototype multivariate-Gaussian maximum likelihood decision algorithm operating at 2 x 10^5 pixels/sec. Sufficient hardware was developed to perform signature extraction from computer-compatible tapes, compute classifier coefficients, control the classifier operation, and diagnose operation. Diagnostic programs used to test MIDAS' operations are presented.

  9. Seismic data restoration with a fast L1 norm trust region method

    NASA Astrophysics Data System (ADS)

    Cao, Jingjie; Wang, Yanfei

    2014-08-01

    Seismic data restoration is a major strategy for providing a reliable wavefield when field data do not satisfy the Shannon sampling theorem. Recovery by sparsity-promoting inversion often yields sparse solutions of seismic data in a transformed domain; however, most methods for sparsity-promoting inversion are line-search methods, which are efficient but inclined to converge to local solutions. Using a trust region method, which can provide globally convergent solutions, is a good way to overcome this shortcoming. A trust region method for sparse inversion has been proposed previously, but its efficiency must be improved to make it suitable for large-scale computation. In this paper, a new L1 norm trust region model is proposed for seismic data restoration, and a robust gradient projection method is used to solve the sub-problem. Numerical results on synthetic and field data demonstrate that the proposed trust region method achieves excellent computational speed and is a viable alternative for large-scale computation.
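
    The workhorse step inside any gradient projection solver for an L1-constrained model is the projection itself. The paper's exact sub-problem is not reproduced here; the sketch below shows the standard Euclidean projection onto an L1 ball (the sort-and-threshold scheme of Duchi et al.), which such methods call once per iteration.

```python
import numpy as np

def project_l1_ball(v, radius):
    # Euclidean projection of v onto {x : ||x||_1 <= radius}.
    if np.abs(v).sum() <= radius:
        return v.copy()                      # already feasible
    u = np.sort(np.abs(v))[::-1]             # magnitudes, descending
    css = np.cumsum(u)
    # Largest k with u[k]*(k+1) > css[k] - radius determines the threshold.
    k = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    theta = (css[k] - radius) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

    The projection is a soft-threshold with a data-dependent level theta, which is exactly what makes the projected iterates sparse.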

  10. A spatially localized architecture for fast and modular DNA computing

    NASA Astrophysics Data System (ADS)

    Chatterjee, Gourab; Dalchau, Neil; Muscat, Richard A.; Phillips, Andrew; Seelig, Georg

    2017-09-01

    Cells use spatial constraints to control and accelerate the flow of information in enzyme cascades and signalling networks. Synthetic silicon-based circuitry similarly relies on spatial constraints to process information. Here, we show that spatial organization can be a similarly powerful design principle for overcoming limitations of speed and modularity in engineered molecular circuits. We create logic gates and signal transmission lines by spatially arranging reactive DNA hairpins on a DNA origami. Signal propagation is demonstrated across transmission lines of different lengths and orientations and logic gates are modularly combined into circuits that establish the universality of our approach. Because reactions preferentially occur between neighbours, identical DNA hairpins can be reused across circuits. Co-localization of circuit elements decreases computation time from hours to minutes compared to circuits with diffusible components. Detailed computational models enable predictive circuit design. We anticipate our approach will motivate using spatial constraints for future molecular control circuit designs.

  11. A novel graphical user interface for ultrasound-guided shoulder arthroscopic surgery

    NASA Astrophysics Data System (ADS)

    Tyryshkin, K.; Mousavi, P.; Beek, M.; Pichora, D.; Abolmaesumi, P.

    2007-03-01

    This paper presents a novel graphical user interface developed for a navigation system for ultrasound-guided computer-assisted shoulder arthroscopic surgery. The envisioned purpose of the interface is to assist the surgeon in determining the position and orientation of the arthroscopic camera and other surgical tools within the anatomy of the patient. The user interface features real time position tracking of the arthroscopic instruments with an optical tracking system, and visualization of their graphical representations relative to a three-dimensional shoulder surface model of the patient, created from computed tomography images. In addition, the developed graphical interface facilitates fast and user-friendly intra-operative calibration of the arthroscope and the arthroscopic burr, capture and segmentation of ultrasound images, and intra-operative registration. A pilot study simulating the computer-aided shoulder arthroscopic procedure on a shoulder phantom demonstrated the speed, efficiency and ease-of-use of the system.

  12. Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.

    PubMed

    Junker, André; Brenner, Karl-Heinz

    2018-03-01

    The application of rigorous optical simulation algorithms, both in the modal and in the time domain, is known to be limited to the nano-optical scale due to severe computing time and memory constraints, even on today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach that, under certain conditions, allows even large-size problems to be solved approximation-free. In the case of a modal representation, we achieve this by avoiding the computationally complex eigenmode decomposition. The numerical cost is thereby reduced from O(N³) to O(N log N), enabling the simulation of structures such as certain diffractive optical elements with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility of trading runtime against accuracy.
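    The cost reduction rests on a general principle: apply a structured operator matrix-free in O(N log N) via the FFT instead of forming and diagonalizing the dense N×N matrix in O(N³). As an illustration of that principle (not the FRIM algorithm itself), with an assumed circulant convolution operator:

    ```python
    import numpy as np

    N = 1024
    # Exponentially decaying convolution kernel, an illustrative stand-in
    # for a large structured operator.
    kernel = np.exp(-np.arange(N) / 50.0)

    def apply_operator(x):
        # Circular convolution via FFT: O(N log N) per application, with no
        # dense matrix and no eigendecomposition required.
        return np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(x)).real

    x = np.random.default_rng(1).standard_normal(N)
    y_fast = apply_operator(x)

    # Dense reference: the same operator as an explicit N x N circulant matrix.
    C = np.stack([np.roll(kernel, i) for i in range(N)], axis=1)
    y_dense = C @ x
    ```

    An iterative solver built on `apply_operator` never needs the dense matrix `C`, which is what makes large mode counts tractable.
    
    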

  13. A space-efficient quantum computer simulator suitable for high-speed FPGA implementation

    NASA Astrophysics Data System (ADS)

    Frank, Michael P.; Oniciuc, Liviu; Meyer-Baese, Uwe H.; Chiorescu, Irinel

    2009-05-01

    Conventional vector-based simulators for quantum computers are quite limited in the size of the quantum circuits they can handle, due to the worst-case exponential growth of even sparse representations of the full quantum state vector as a function of the number of quantum operations applied. However, this exponential-space requirement can be avoided by using general space-time tradeoffs long known to complexity theorists, which can be appropriately optimized for this particular problem in a way that also illustrates some interesting reformulations of quantum mechanics. In this paper, we describe the design and empirical space/time complexity measurements of a working software prototype of a quantum computer simulator that avoids excessive space requirements. Due to its space-efficiency, this design is well-suited to embedding in single-chip environments, permitting especially fast execution that avoids access latencies to main memory. We plan to prototype our design on a standard FPGA development board.
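    The space-time tradeoff can be illustrated with a Feynman path-sum evaluation of a single amplitude, which needs memory only linear in circuit depth instead of an exponentially large state vector, at the price of exponential time. This is a generic sketch of the idea, not the authors' simulator design:

    ```python
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

    def amplitude(circuit, x, g=None):
        # <x| U_g ... U_1 |00...0> evaluated as a Feynman path sum.
        # Memory is O(circuit depth); time is exponential in depth:
        # the classic space-time tradeoff.
        if g is None:
            g = len(circuit) - 1
        if g < 0:
            return 1.0 if x == 0 else 0.0   # initial state |00...0>
        gate, q = circuit[g]                # single-qubit gate on qubit q
        xq = (x >> q) & 1
        total = 0.0
        for yq in (0, 1):                   # sum over intermediate basis states
            y = (x & ~(1 << q)) | (yq << q)
            total += gate[xq, yq] * amplitude(circuit, y, g - 1)
        return total

    circ = [(H, 0), (H, 0)]  # H followed by H is the identity
    ```

    For `circ`, the amplitude of returning to |0> is 1, recovered without ever storing a state vector.
    
    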

  14. Numerical Modelling of Foundation Slabs with use of Schur Complement Method

    NASA Astrophysics Data System (ADS)

    Koktan, Jiří; Brožovský, Jiří

    2017-10-01

    The paper discusses numerical modelling of foundation slabs using advanced numerical approaches suitable for parallel processing. The solution is based on the finite element method with slab-type elements. The subsoil is modelled with a Winkler-type contact model (alternatively, a multi-parameter model can be used). The proposed approach uses the Schur complement method to speed up the computations. The method is based on a special division of the analysed model into several substructures. It adds some complexity to the numerical procedures, especially when subsoil models are used inside the finite element solution, but it makes a fast solution of large models possible. The main aim of this paper is therefore to verify that the method can be successfully used for this type of problem. The most suitable finite elements are discussed, along with the finite element mesh and the limitations on its construction for such problems. The core aspects of the implementation of the Schur complement method for this class of problem are also presented. The proposed approach was implemented in the form of a computer program, which is briefly introduced. Results of example computations are presented that prove the speed-up of the solution: a substantial speed-up is shown even in the case of non-parallel processing, together with the ability to bypass size limitations of numerical models using the discussed approach.
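    The linear-algebra core of the substructuring idea can be sketched as follows: eliminating one block of unknowns reduces the system to a smaller interface problem via the Schur complement, after which the eliminated unknowns are recovered by back-substitution (the matrix sizes here are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_i, n_b = 6, 4  # interior and interface (boundary) unknowns, illustrative
    n = n_i + n_b
    M = rng.standard_normal((n, n))
    K = M @ M.T + n * np.eye(n)      # SPD stiffness-like matrix
    f = rng.standard_normal(n)

    A = K[:n_i, :n_i]; B = K[:n_i, n_i:]
    C = K[n_i:, :n_i]; D = K[n_i:, n_i:]
    f_i, f_b = f[:n_i], f[n_i:]

    # Eliminate the interior unknowns: solve the (smaller) Schur complement
    # system on the interface, then back-substitute.
    S = D - C @ np.linalg.solve(A, B)
    x_b = np.linalg.solve(S, f_b - C @ np.linalg.solve(A, f_i))
    x_i = np.linalg.solve(A, f_i - B @ x_b)

    x_direct = np.linalg.solve(K, f)  # monolithic solve for comparison
    ```

    In a parallel setting each substructure forms its local contribution to `S` independently, which is the source of the reported speed-up.
    
    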

  15. Compact high-speed scanning lidar system

    NASA Astrophysics Data System (ADS)

    Dickinson, Cameron; Hussein, Marwan; Tripp, Jeff; Nimelman, Manny; Koujelev, Alexander

    2012-06-01

    The compact High Speed Scanning Lidar (HSSL) was designed to meet the requirements for a rover GN&C sensor. The eye-safe HSSL's fast scanning speed, low volume and low power, make it the ideal choice for a variety of real-time and non-real-time applications including: 3D Mapping; Vehicle guidance and Navigation; Obstacle Detection; Orbiter Rendezvous; Spacecraft Landing / Hazard Avoidance. The HSSL comprises two main hardware units: Sensor Head and Control Unit. In a rover application, the Sensor Head mounts on the top of the rover while the Control Unit can be mounted on the rover deck or within its avionics bay. An Operator Computer is used to command the lidar and immediately display the acquired scan data. The innovative lidar design concept was a result of an extensive trade study conducted during the initial phase of an exploration rover program. The lidar utilizes an innovative scanner coupled with a compact fiber laser and high-speed timing electronics. Compared to existing compact lidar systems, distinguishing features of the HSSL include its high accuracy, high resolution, high refresh rate and large field of view. Other benefits of this design include the capability to quickly configure scan settings to fit various operational modes.

  16. A Vehicle Active Safety Model: Vehicle Speed Control Based on Driver Vigilance Detection Using Wearable EEG and Sparse Representation.

    PubMed

    Zhang, Zutao; Luo, Dianyuan; Rasim, Yagubov; Li, Yanjun; Meng, Guanjun; Xu, Jian; Wang, Chunbai

    2016-02-19

    In this paper, we present a vehicle active safety model for vehicle speed control based on driver vigilance detection using low-cost, comfortable, wearable electroencephalographic (EEG) sensors and sparse representation. The proposed system consists of three main steps, namely wireless wearable EEG collection, driver vigilance detection, and a vehicle speed control strategy. First, a homemade low-cost, comfortable, wearable brain-computer interface (BCI) system with eight channels is designed for collecting the driver's EEG signal. Second, wavelet de-noising and down-sampling algorithms are utilized to enhance the quality of the EEG data, and the Fast Fourier Transform (FFT) is adopted to extract the EEG power spectral density (PSD). In this step, sparse representation classification combined with k-singular value decomposition (KSVD) is introduced for the first time to estimate the driver's vigilance level from the PSD. Finally, a novel vehicle speed control safety strategy, which controls the electronic throttle opening and automatic braking after driver fatigue is detected using the above method, is presented to avoid serious collisions and traffic accidents. The simulation and practical testing results demonstrate the feasibility of the vehicle active safety model.
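    The FFT-based PSD extraction step can be sketched as follows; the sampling rate, band limits, and synthetic signal are illustrative assumptions, not the authors' data or pipeline:

    ```python
    import numpy as np

    fs = 256                      # sampling rate in Hz, an illustrative choice
    t = np.arange(0, 4, 1 / fs)   # 4-second analysis window
    # Synthetic "EEG": a 10 Hz alpha rhythm plus noise (not real data).
    eeg = (np.sin(2 * np.pi * 10 * t)
           + 0.3 * np.random.default_rng(3).standard_normal(t.size))

    # Periodogram PSD estimate via the FFT.
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / (fs * t.size)

    # Band power in the alpha band (8-13 Hz), a common vigilance-related feature.
    alpha_power = psd[(freqs >= 8) & (freqs <= 13)].sum()
    ```

    Band powers of this kind would then form the feature vector fed to the sparse representation classifier.
    
    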

  17. A Vehicle Active Safety Model: Vehicle Speed Control Based on Driver Vigilance Detection Using Wearable EEG and Sparse Representation

    PubMed Central

    Zhang, Zutao; Luo, Dianyuan; Rasim, Yagubov; Li, Yanjun; Meng, Guanjun; Xu, Jian; Wang, Chunbai

    2016-01-01

    In this paper, we present a vehicle active safety model for vehicle speed control based on driver vigilance detection using low-cost, comfortable, wearable electroencephalographic (EEG) sensors and sparse representation. The proposed system consists of three main steps, namely wireless wearable EEG collection, driver vigilance detection, and a vehicle speed control strategy. First, a homemade low-cost, comfortable, wearable brain-computer interface (BCI) system with eight channels is designed for collecting the driver's EEG signal. Second, wavelet de-noising and down-sampling algorithms are utilized to enhance the quality of the EEG data, and the Fast Fourier Transform (FFT) is adopted to extract the EEG power spectral density (PSD). In this step, sparse representation classification combined with k-singular value decomposition (KSVD) is introduced for the first time to estimate the driver's vigilance level from the PSD. Finally, a novel vehicle speed control safety strategy, which controls the electronic throttle opening and automatic braking after driver fatigue is detected using the above method, is presented to avoid serious collisions and traffic accidents. The simulation and practical testing results demonstrate the feasibility of the vehicle active safety model. PMID:26907278

  18. Low-cost, high-speed back-end processing system for high-frequency ultrasound B-mode imaging.

    PubMed

    Chang, Jin Ho; Sun, Lei; Yen, Jesse T; Shung, K Kirk

    2009-07-01

    For real-time visualization of the mouse heart (6 to 13 beats per second), a back-end processing system involving high-speed signal processing functions to form and display images has been developed. This back-end system was designed with new signal processing algorithms to achieve a frame rate of more than 400 images per second. These algorithms were implemented in a simple and cost-effective manner with a single field-programmable gate array (FPGA) and software programs written in C++. The operating speed of the back-end system was investigated by recording the time required for transferring an image to a personal computer. Experimental results showed that the back-end system is capable of producing 433 images per second. To evaluate the imaging performance of the back-end system, a complete imaging system was built. This imaging system, which consisted of a recently reported high-speed mechanical sector scanner assembled with the back-end system, was tested by imaging a wire phantom, a pig eye (in vitro), and a mouse heart (in vivo). It was shown that this system is capable of providing high spatial resolution images with fast temporal resolution.

  19. Low-Cost, High-Speed Back-End Processing System for High-Frequency Ultrasound B-Mode Imaging

    PubMed Central

    Chang, Jin Ho; Sun, Lei; Yen, Jesse T.; Shung, K. Kirk

    2009-01-01

    For real-time visualization of the mouse heart (6 to 13 beats per second), a back-end processing system involving high-speed signal processing functions to form and display images has been developed. This back-end system was designed with new signal processing algorithms to achieve a frame rate of more than 400 images per second. These algorithms were implemented in a simple and cost-effective manner with a single field-programmable gate array (FPGA) and software programs written in C++. The operating speed of the back-end system was investigated by recording the time required for transferring an image to a personal computer. Experimental results showed that the back-end system is capable of producing 433 images per second. To evaluate the imaging performance of the back-end system, a complete imaging system was built. This imaging system, which consisted of a recently reported high-speed mechanical sector scanner assembled with the back-end system, was tested by imaging a wire phantom, a pig eye (in vitro), and a mouse heart (in vivo). It was shown that this system is capable of providing high spatial resolution images with fast temporal resolution. PMID:19574160

  20. Parallel Computer System for 3D Visualization Stereo on GPU

    NASA Astrophysics Data System (ADS)

    Al-Oraiqat, Anas M.; Zori, Sergii A.

    2018-03-01

    This paper proposes the organization of a parallel computer system based on Graphics Processing Units (GPUs) for 3D stereo image synthesis. The development is based on the modified ray tracing method developed by the authors for fast search of tracing-ray intersections with scene objects. The system allows a significant increase in productivity for 3D stereo synthesis of photorealistic quality. The generalized procedure of 3D stereo image synthesis on the Graphics Processing Unit/Graphics Processing Clusters (GPU/GPC) is proposed. The efficiency of the proposed GPU implementation is compared with single-threaded and multi-threaded implementations on the CPU. The achieved average acceleration of the multi-thread implementation on the test GPU and CPU is about 7.5 and 1.6 times, respectively. Studying the influence of the size and configuration of the computational Compute Unified Device Architecture (CUDA) network on computational speed shows the importance of their correct selection. The obtained experimental estimates can be significantly improved by new GPUs with a larger number of processing cores and multiprocessors, as well as an optimized configuration of the computing CUDA network.

  1. A method of non-contact reading code based on computer vision

    NASA Astrophysics Data System (ADS)

    Zhang, Chunsen; Zong, Xiaoyu; Guo, Bingxuan

    2018-03-01

    To guarantee the security of computer information exchange between internal and external networks (trusted and untrusted networks), a non-contact code-reading method based on machine vision is proposed, which differs from existing physical network-isolation methods. Using a computer monitor, a camera and other equipment, the information to be exchanged is processed through the following steps: image coding, generation of a standard image, display and capture of the actual image, computation of the homography matrix, image distortion correction, and decoding with calibration. This achieves secure, non-contact, one-way transmission of computer information between the internal and external networks. The effectiveness of the proposed method is verified by experiments on real computer text data; a data transfer speed of 24 kb/s was achieved. The experiments show that the algorithm offers high security, fast transfer and low information loss, can meet the daily needs of confidentiality departments to update data effectively and reliably, and solves the difficulty of exchanging computer information between secret and non-secret networks, with distinct originality, practicability, and practical research value.
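    The homography step that relates the displayed image to the camera capture can be sketched with the standard direct linear transform (DLT); the point correspondences below are synthetic, not from the paper:

    ```python
    import numpy as np

    def homography_dlt(src, dst):
        # Direct linear transform: stack two equations per correspondence
        # and take the SVD null vector as the 3x3 homography (>= 4 points).
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
        Hm = Vt[-1].reshape(3, 3)
        return Hm / Hm[2, 2]

    # Synthetic correspondences: unit square on the monitor mapped to a
    # skewed quadrilateral as seen by the camera.
    src = [(0, 0), (1, 0), (1, 1), (0, 1)]
    dst = [(10, 12), (52, 10), (55, 50), (8, 48)]
    Hm = homography_dlt(src, dst)
    ```

    Warping the captured frame by the inverse of `Hm` undoes the perspective distortion before decoding.
    
    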

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Bingbin; Karr, Dale G.; Song, Huimin

    Offshore wind energy development has grown rapidly worldwide in recent years. Many of the promising offshore wind farm locations are in cold regions that may have ice cover during wintertime. The challenge of possible ice loads on offshore wind turbines raises the demand for modeling the dynamic response of wind turbines under the joint action of ice, wind, wave, and current. The simulation software FAST is an open-source computer-aided engineering (CAE) package maintained by the National Renewable Energy Laboratory. In this paper, a new module of FAST for assessing the dynamic response of offshore wind turbines subjected to ice forcing is presented. In the ice module, several models are presented which involve both prescribed forcing and coupled response. For conditions in which the ice forcing is essentially decoupled from the structural response, ice forces are established from existing models for brittle and ductile ice failure. For conditions in which the ice failure and the structural response are coupled, such as lock-in conditions, a rate-dependent ice model is described, which is developed in conjunction with a new modularization framework for FAST. Analytical ice mechanics models are presented that incorporate ice floe forcing, deformation, and failure. For lower speeds, forces slowly build until the ice strength is reached and the ice fails, resulting in a quasi-static condition. For intermediate speeds, the ice failure can couple with the structural response, resulting in coinciding periods of ice failure and structural response. A third regime occurs at high speeds of encounter, in which brittle fracturing of the ice feature occurs in a random pattern, resulting in random vibration excitation of the structure. An example wind turbine response is simulated under ice loading for each of the presented models.
This module adds to FAST the capability to analyze the response of wind turbines subjected to forces resulting from ice impact on the turbine support structure. The conditions considered in this module are specifically addressed in the International Organization for Standardization (ISO) standard 19906:2010 for arctic offshore structure design. Special consideration of lock-in vibrations is required due to the detrimental effects of such response with regard to fatigue and foundation/soil response. Finally, the use of FAST for transient, time-domain simulation with the new ice module is well suited for such analyses.

  3. Short, large amplitude speed enhancements in the near-Sun fast solar wind

    NASA Astrophysics Data System (ADS)

    Horbury, T. S.; Matteini, L.; Stansby, D.

    2018-04-01

    We report the presence of intermittent, short discrete enhancements in plasma speed in the near-Sun high speed solar wind. Lasting tens of seconds to minutes in spacecraft measurements at 0.3 AU, speeds inside these enhancements can reach 1000 km/s, corresponding to a kinetic energy up to twice that of the bulk high speed solar wind. These events, which occur around 5% of the time, are Alfvénic in nature with large magnetic field deflections and are the same temperature as the surrounding plasma, in contrast to the bulk fast wind which has a well-established positive speed-temperature correlation. The origin of these speed enhancements is unclear but they may be signatures of discrete jets associated with transient events in the chromosphere or corona. Such large short velocity changes represent a measurement and analysis challenge for the upcoming Parker Solar Probe and Solar Orbiter missions.

  4. Transfer of piano practice in fast performance of skilled finger movements

    PubMed Central

    2013-01-01

    Background Transfer of learning facilitates the efficient mastery of various skills without practicing all possible sensory-motor repertoires. The present study assessed whether motor practice at a submaximal speed, which is typical in sports and music performance, results in an increase in a maximum speed of finger movements of trained and untrained skills. Results Piano practice of sequential finger movements at a submaximal speed over days progressively increased the maximum speed of trained movements. This increased maximum speed of finger movements was maintained two months after the practice. The learning transferred within the hand to some extent, but not across the hands. Conclusions The present study confirmed facilitation of fast finger movements following a piano practice at a submaximal speed. In addition, the findings indicated the intra-manual transfer effects of piano practice on the maximum speed of skilled finger movements. PMID:24175946

  5. Balance and gait in children with dyslexia.

    PubMed

    Moe-Nilssen, Rolf; Helbostad, Jorunn L; Talcott, Joel B; Toennessen, Finn Egil

    2003-05-01

    Tests of postural stability have provided some evidence of a link between deficits in gross motor skills and developmental dyslexia. The ordinal-level scales used previously, however, have limited measurement sensitivity, and no studies have investigated motor performance during walking in participants with dyslexia. The purpose of this study was to investigate whether continuous-scaled measures of standing balance and gait could discriminate between groups of impaired and normal readers when investigators were blind to group membership during testing. Children with dyslexia (n=22) and controls (n=18), aged 10-12 years, performed walking tests at four different speeds (slow, preferred, fast, very fast) on an even and an uneven surface, and tests of unperturbed and perturbed body sway during standing. Body movements were registered by a triaxial accelerometer over the lower trunk, and measures of reaction time, body sway, walking speed, step length and cadence were calculated. Results were controlled for gender differences. Tests of standing balance with eyes closed did not discriminate between groups. All unperturbed standing tests with eyes open showed significant group differences (P<0.05) and correctly classified 70-77.5% of the subjects into their respective groups. Mean walking speed during very fast walking on both flat and uneven surfaces was ≥0.2 m/s (P≤0.01) faster for controls than for the group with dyslexia. This test classified 77.5% and 85% of the subjects correctly on the flat and uneven surface, respectively. Cadence at preferred or very fast speed did not differ statistically between groups, but revealed significant group differences when all subjects were compared at a normalised walking speed (P≤0.04). Very fast walking speed as well as cadence at a normalised speed discriminated better between groups when subjects were walking on an uneven surface compared to a flat floor.
Continuous-scaled walking tests performed in field settings may be suitable for motor skill assessment as a component of a screening tool for developmental dyslexia.

  6. Using Computational Fluid Dynamics to Compare Shear Rate and Turbulence in the TIM-Automated Gastric Compartment With USP Apparatus II.

    PubMed

    Hopgood, Matthew; Reynolds, Gavin; Barker, Richard

    2018-03-30

    We use computational fluid dynamics to compare the shear rate and turbulence in an advanced in vitro gastric model (TIMagc), during its simulation of fasted-state Migrating Motor Complex phases I and II, with the United States Pharmacopeia paddle dissolution apparatus II (USPII). A specific focus is placed on how shear rate in these apparatus affects erosion-based solid oral dosage forms. The study finds that tablet surface shear rates in TIMagc are strongly time dependent and fluctuate between 0.001 and 360 s⁻¹. In USPII, tablet surface shear rates are approximately constant for a given paddle speed and increase linearly from 9 s⁻¹ to 36 s⁻¹ as the paddle speed is increased from 25 to 100 rpm. A strong linear relationship is observed between tablet surface shear rate and tablet erosion rate in USPII, whereas TIMagc shows highly variable behavior. The flow regimes present in each apparatus are compared to in vivo predictions using Reynolds number analysis. Reynolds numbers for flow in TIMagc lie predominantly within the predicted in vivo bounds (0.01-30), whereas Reynolds numbers for flow in USPII lie above the predicted upper bound even at paddle speeds as low as 25 rpm (Re = 33).
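    The Reynolds number comparison above is a one-line calculation, Re = ρvL/μ. The property values below are illustrative water-like assumptions, not the study's parameters:

    ```python
    # Reynolds number Re = rho * v * L / mu for a gastric-flow-scale estimate.
    # All values below are illustrative assumptions, not from the study.
    rho = 1000.0   # fluid density, kg/m^3 (water-like)
    mu = 0.001     # dynamic viscosity, Pa*s
    L = 0.01       # characteristic length, m
    v = 0.003      # characteristic flow speed, m/s
    Re = rho * v * L / mu  # dimensionless
    ```

    With these values Re evaluates to 30, i.e. at the upper end of the in vivo range quoted in the abstract.
    
    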

  7. CAMAC throughput of a new RISC-based data acquisition computer at the DIII-D tokamak

    NASA Astrophysics Data System (ADS)

    Vanderlaan, J. F.; Cummings, J. W.

    1993-10-01

    The amount of experimental data acquired per plasma discharge at DIII-D has continued to grow. The largest shot size in May 1991 was 49 Mbyte; in May 1992, 66 Mbyte; and in April 1993, 80 Mbyte. The increasing load has prompted the installation of a new Motorola 88100-based MODCOMP computer to supplement the existing core of three older MODCOMP data acquisition CPUs. New Kinetic Systems CAMAC serial highway driver hardware runs on the 88100 VME bus. The new operating system is the MODCOMP REAL/IX version of AT&T System V UNIX with real-time extensions and networking capabilities; future plans call for installation of additional computers of this type for tokamak and neutral beam control functions. Experiences with the CAMAC hardware and software will be chronicled, including observation of data throughput. The Enhanced Serial Highway crate controller is advertised as twice as fast as the previous crate controller, and faster computer I/O is also expected to increase data rates.

  8. A new graphical user interface for fast construction of computation phantoms and MCNP calculations: application to calibration of in vivo measurement systems.

    PubMed

    Borisov, N; Franck, D; de Carlan, L; Laval, L

    2002-08-01

    The paper reports on a new utility for the development of computational phantoms for Monte Carlo calculations and data analysis for in vivo measurements of radionuclides deposited in tissues. The individual properties of each worker can be acquired for a rather precise geometric representation of his or her anatomy, which is particularly important for low-energy gamma-ray-emitting sources such as thorium, uranium, plutonium and other actinides. The software discussed here enables automatic creation of an MCNP input data file based on scanning data. The utility includes segmentation of images obtained with either computed tomography or magnetic resonance imaging by distinguishing tissues according to their signal (brightness), and specification of the source and detector. In addition, a coupling of individual voxels within the tissue is used to reduce the memory demand and to increase the calculation speed. The utility was tested for low-energy emitters in plastic and biological tissues as well as for computed tomography and magnetic resonance imaging scanning information.

  9. A Brain-Computer Interface (BCI) system to use arbitrary Windows applications by directly controlling mouse and keyboard.

    PubMed

    Spuler, Martin

    2015-08-01

    A Brain-Computer Interface (BCI) allows a computer to be controlled by brain activity alone, without the need for muscle control. In this paper, we present an EEG-based BCI system based on code-modulated visual evoked potentials (c-VEPs) that enables the user to work with arbitrary Windows applications. Other BCI systems, like the P300 speller or BCI-based browsers, allow control of one dedicated application designed for use with a BCI. In contrast, the system presented in this paper does not consist of one dedicated application, but enables the user to control mouse cursor and keyboard input at the level of the operating system, thereby making it possible to use arbitrary applications. As the c-VEP BCI method has been shown to enable very fast communication speeds (writing more than 20 error-free characters per minute), the presented system is the next step in replacing the traditional mouse and keyboard and enabling complete brain-based control of a computer.

  10. Integrating Reconfigurable Hardware-Based Grid for High Performance Computing

    PubMed Central

    Dondo Gazzano, Julio; Sanchez Molina, Francisco; Rincon, Fernando; López, Juan Carlos

    2015-01-01

    FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors that they are able to achieve, the reduced power consumption, and the ease and flexibility of the design process, with fast iterations between consecutive versions, are examples of the benefits obtained with their use. However, there are still some difficulties that need to be addressed when using reconfigurable platforms as accelerators: the need for an in-depth application study to identify potential acceleration, the lack of tools for the deployment of computational problems on distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. Besides, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications, simplifying the development process. PMID:25874241

  11. A surface ice module for wind turbine dynamic response simulation using FAST

    DOE PAGES

    Yu, Bingbin; Karr, Dale G.; Song, Huimin; ...

    2016-06-03

    Offshore wind energy development has grown rapidly worldwide in recent years. Many of the promising offshore wind farm locations are in cold regions that may have ice cover during wintertime. The challenge of possible ice loads on offshore wind turbines raises the demand for modeling the dynamic response of wind turbines under the joint action of ice, wind, wave, and current. The simulation software FAST is an open-source computer-aided engineering (CAE) package maintained by the National Renewable Energy Laboratory. In this paper, a new module of FAST for assessing the dynamic response of offshore wind turbines subjected to ice forcing is presented. In the ice module, several models are presented which involve both prescribed forcing and coupled response. For conditions in which the ice forcing is essentially decoupled from the structural response, ice forces are established from existing models for brittle and ductile ice failure. For conditions in which the ice failure and the structural response are coupled, such as lock-in conditions, a rate-dependent ice model is described, which is developed in conjunction with a new modularization framework for FAST. Analytical ice mechanics models are presented that incorporate ice floe forcing, deformation, and failure. For lower speeds, forces slowly build until the ice strength is reached and the ice fails, resulting in a quasi-static condition. For intermediate speeds, the ice failure can couple with the structural response, resulting in coinciding periods of ice failure and structural response. A third regime occurs at high speeds of encounter, in which brittle fracturing of the ice feature occurs in a random pattern, resulting in random vibration excitation of the structure. An example wind turbine response is simulated under ice loading for each of the presented models.
This module adds to FAST the capability to analyze the response of wind turbines subjected to forces resulting from ice impact on the turbine support structure. The conditions considered in this module are specifically addressed in the International Organization for Standardization (ISO) standard 19906:2010 for arctic offshore structure design. Special consideration of lock-in vibrations is required due to the detrimental effects of such response with regard to fatigue and foundation/soil response. Finally, the use of FAST for transient, time-domain simulation with the new ice module is well suited for such analyses.

  12. FSH: fast spaced seed hashing exploiting adjacent hashes.

    PubMed

    Girotto, Samuele; Comin, Matteo; Pizzi, Cinzia

    2018-01-01

    Patterns with wildcards in specified positions, namely spaced seeds, are increasingly used instead of k-mers in many bioinformatics applications that require indexing, querying, and rapid similarity search, as they can provide better sensitivity. Many of these applications require computing the hash of each position in the input sequence with respect to a given spaced seed, or to multiple spaced seeds. While the hashes of k-mers can be computed rapidly by exploiting the large overlap between consecutive k-mers, spaced seed hashes are usually computed from scratch at each position in the input sequence, resulting in slower processing. The method proposed in this paper, fast spaced-seed hashing (FSH), exploits the similarity of the hash values of spaced seeds computed at adjacent positions in the input sequence. In our experiments we compute the hash for each position of metagenomic reads from several datasets, with respect to different spaced seeds. We also propose a generalized version of the algorithm for the simultaneous computation of multiple spaced seed hashes. In the experiments, our algorithm computes the hash values of spaced seeds with a speedup over the traditional approach of between 1.6× and 5.3×, depending on the structure of the spaced seed. Spaced seed hashing is a routine task in several bioinformatics applications. FSH performs this task efficiently and raises the question of whether other hashing schemes can be exploited to further improve the speedup. This has the potential of major impact in the field, making spaced seed applications not only accurate, but also faster and more efficient. The software FSH is freely available for academic use at: https://bitbucket.org/samu661/fsh/overview.
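    The core contrast in the abstract, rolling hashes for contiguous k-mers versus from-scratch hashing for spaced seeds, can be sketched in a few lines of Python (an illustrative toy with my own names and a 2-bit encoding, not the authors' FSH implementation):

    ```python
    # Toy illustration: naive spaced-seed hashing versus rolling k-mer hashing.

    ENC = {"A": 0, "C": 1, "G": 2, "T": 3}  # 2-bit nucleotide encoding

    def spaced_seed_hash(seq, seed, pos):
        """Hash the window of `seq` at `pos` under `seed` ('1' = care position)."""
        h = 0
        for offset, care in enumerate(seed):
            if care == "1":
                h = (h << 2) | ENC[seq[pos + offset]]
        return h

    def all_spaced_hashes(seq, seed):
        """Naive approach: recompute every window from scratch."""
        return [spaced_seed_hash(seq, seed, i)
                for i in range(len(seq) - len(seed) + 1)]

    def all_kmer_hashes(seq, k):
        """Rolling approach for contiguous k-mers: each hash reuses the previous one."""
        mask = (1 << (2 * k)) - 1
        hashes, h = [], 0
        for i, base in enumerate(seq):
            h = ((h << 2) | ENC[base]) & mask
            if i >= k - 1:
                hashes.append(h)
        return hashes
    ```

    With a seed consisting entirely of care positions the two routines agree; FSH's contribution is recovering most of the rolling-hash savings for seeds that contain wildcards, by reusing the bits shared between adjacent windows.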

  13. [Computer aided design and manufacture of the porcelain fused to metal crown].

    PubMed

    Nie, Xin; Cheng, Xiaosheng; Dai, Ning; Yu, Qing; Hao, Guodong; Sun, Quanping

    2009-04-01

    To satisfy the current demand for fast, high-quality prosthodontics, we have investigated the fabrication process of the porcelain-fused-to-metal crown on a molar with CAD/CAM technology. First, we acquire surface mesh data of the prepared tooth with a 3D optical grating measuring system. Then, we reconstruct the 3D crown model with computer-aided design software developed in-house. Finally, from the 3D model data, we produce a metallic crown on a high-speed CNC carving machine. The results show that the metallic crown matches the prepared tooth well. The fabrication process is reliable and efficient, and the restoration is precise and stable in quality.

  14. Zero-fringe demodulation method based on location-dependent birefringence dispersion in polarized low-coherence interferometry.

    PubMed

    Wang, Shuang; Liu, Tiegen; Jiang, Junfeng; Liu, Kun; Yin, Jinde; Qin, Zunqi; Zou, Shengliang

    2014-04-01

    We present a high-precision, high-speed demodulation method for a polarized low-coherence interferometer with location-dependent birefringence dispersion. Based on the characteristics of location-dependent birefringence dispersion and a five-step phase-shifting technique, the method accurately retrieves the peak position of the zero fringe at the central wavelength, which avoids fringe-order ambiguity. The method processes data only in the spatial domain and greatly reduces the computational load. We demonstrated the effectiveness of the proposed method in an optical fiber Fabry-Perot barometric pressure sensing experiment. A measurement precision of 0.091 kPa was achieved over a pressure range of 160 kPa, and computation time was reduced by a factor of 10 compared with the traditional phase-based method, which requires a Fourier transform.

  15. Fast and predictable video compression in software design and implementation of an H.261 codec

    NASA Astrophysics Data System (ADS)

    Geske, Dagmar; Hess, Robert

    1998-09-01

    The use of software codecs for video compression has become commonplace in several videoconferencing applications. To reduce conflicts with other applications running at the same time, mechanisms for resource reservation on end systems need to determine an upper bound on the computing time used by the codec. This leads to the demand for predictable execution times for compression/decompression. Since compression schemes such as H.261 inherently depend on the motion contained in the video, adaptive admission control is required. This paper presents a data-driven approach based on dynamically reducing the number of processed macroblocks in peak situations. Absolute speed is also of interest: the question of whether, and how, software compression of high-quality video is feasible on today's desktop computers is examined.

  16. Encoder fault analysis system based on Moire fringe error signal

    NASA Astrophysics Data System (ADS)

    Gao, Xu; Chen, Wei; Wan, Qiu-hua; Lu, Xin-ran; Xie, Chun-yu

    2018-02-01

    To address faults and code errors arising in practical applications of photoelectric shaft encoders, a fast and accurate encoder fault analysis system was developed based on the processing of the Moire fringe photoelectric signal. A DSP28335 was selected as the core processor, and a high-speed serial A/D converter acquisition card is used, together with a temperature-measuring circuit designed around the AD7420. Discrete data of the Moire fringe error signal are collected at different temperatures and sent to the host computer through wireless transmission. The error signal quality index and the fault type are displayed on the host computer based on the error signal identification method. The error signal quality can be used to diagnose the state of the error code through the human-machine interface.

  17. Development of a small-scale computer cluster

    NASA Astrophysics Data System (ADS)

    Wilhelm, Jay; Smith, Justin T.; Smith, James E.

    2008-04-01

    An increase in demand for computing power in academia has created the need for high-performance machines. The computing power of a single processor has been steadily increasing, but lags behind the demand for fast simulations. Since a single processor has hard limits to its performance, a cluster of computers, with the proper software, can multiply the performance of a single computer. Cluster computing has therefore become a much sought-after technology. Typical desktop computers could be used for cluster computing, but they are not intended for constant full-speed operation and take up more space than rack-mount servers. Specialty computers designed for clusters meet high-availability and space requirements, but can be costly. A market segment exists where custom-built desktop computers can be arranged in a rack mount, gaining the space savings of traditional rack-mount computers while remaining cost-effective. To explore these possibilities, an experiment was performed to develop a computing cluster using desktop components for the purpose of decreasing the computation time of advanced simulations. This study indicates that a small-scale cluster can be built from off-the-shelf components that multiplies the performance of a single desktop machine, while minimizing occupied space and remaining cost-effective.

  18. Combining Gait Speed and Recall Memory to Predict Survival in Late Life: Population-Based Study

    PubMed Central

    Marengoni, Alessandra; Bandinelli, Stefania; Maietti, Elisa; Guralnik, Jack; Zuliani, Giovanni; Ferrucci, Luigi; Volpato, Stefano

    2017-01-01

    OBJECTIVES To evaluate the relationship between gait speed, recall memory, and mortality. DESIGN A cohort study (last follow-up December 2009). SETTING Tuscany, Italy. PARTICIPANTS Individual data from 1,014 community-dwelling older adults aged 60 years or older with baseline gait speed and recall memory measurements and follow-up for a median of 9.10 (IQR 7.1–9.3) years. Participants had a mean (SD) age of 73.9 (7.3) years, and 55.8% were women. Participants walking faster than 0.8 m/s were defined as fast walkers; good recall memory was defined as a score of 2 or 3 in the 3-word delayed recall section of the Mini-Mental State Examination. MEASUREMENTS All-cause mortality. RESULTS There were 302 deaths, and the overall death rate was 3.77 per 100 person-years (95% CI: 3.37–4.22). Both low gait speed and poor recall memory were associated with mortality when analysed separately (HR = 2.47; 95% CI: 1.87–3.27 and HR = 1.47; 95% CI: 1.16–1.87, respectively). When we grouped participants according to both recall and gait speed, death rates (per 100 person-years) progressively increased from those with both good gait speed and memory (2.0; 95% CI: 1.6–2.5), to those with fast walk but poor memory (3.4; 95% CI: 2.8–4.2), to those with slow walk and good memory (8.8; 95% CI: 6.4–12.1), to those with both slow walk and poor memory (13.0; 95% CI: 10.6–16.1). In multivariate analysis, poor memory significantly increased mortality risk among persons with fast gait speed (HR = 1.40; 95% CI: 1.04–1.89). CONCLUSION In older persons, gait speed and recall memory are independent predictors of expected survival. Information on memory function might better stratify mortality risk among persons with fast gait speed. PMID:28029688

  19. Combining Gait Speed and Recall Memory to Predict Survival in Late Life: Population-Based Study.

    PubMed

    Marengoni, Alessandra; Bandinelli, Stefania; Maietti, Elisa; Guralnik, Jack; Zuliani, Giovanni; Ferrucci, Luigi; Volpato, Stefano

    2017-03-01

    To evaluate the relationship between gait speed, recall memory, and mortality. A cohort study (last follow-up December 2009). Tuscany, Italy. Individual data from 1,014 community-dwelling older adults aged 60 years or older with baseline gait speed and recall memory measurements and follow-up for a median of 9.10 (IQR 7.1-9.3) years. Participants had a mean (SD) age of 73.9 (7.3) years, and 55.8% were women. Participants walking faster than 0.8 m/s were defined as fast walkers; good recall memory was defined as a score of 2 or 3 in the 3-word delayed recall section of the Mini-Mental State Examination. All-cause mortality. There were 302 deaths, and the overall death rate was 3.77 per 100 person-years (95% CI: 3.37-4.22). Both low gait speed and poor recall memory were associated with mortality when analysed separately (HR = 2.47; 95% CI: 1.87-3.27 and HR = 1.47; 95% CI: 1.16-1.87, respectively). When we grouped participants according to both recall and gait speed, death rates (per 100 person-years) progressively increased from those with both good gait speed and memory (2.0; 95% CI: 1.6-2.5), to those with fast walk but poor memory (3.4; 95% CI: 2.8-4.2), to those with slow walk and good memory (8.8; 95% CI: 6.4-12.1), to those with both slow walk and poor memory (13.0; 95% CI: 10.6-16.1). In multivariate analysis, poor memory significantly increased mortality risk among persons with fast gait speed (HR = 1.40; 95% CI: 1.04-1.89). In older persons, gait speed and recall memory are independent predictors of expected survival. Information on memory function might better stratify mortality risk among persons with fast gait speed. © 2016, Copyright the Authors Journal compilation © 2016, The American Geriatrics Society.

  20. Fast reversible wavelet image compressor

    NASA Astrophysics Data System (ADS)

    Kim, HyungJun; Li, Ching-Chung

    1996-10-01

    We present a unified image compressor with spline biorthogonal wavelets and dyadic rational filter coefficients which gives high computational speed and excellent compression performance. Convolutions with these filters can be performed using only arithmetic shift and addition operations. Wavelet coefficients can be encoded with an arithmetic coder which also uses arithmetic shift and addition operations. Therefore, from beginning to end, the whole encoding/decoding process can be done within a short period of time. The proposed method extends naturally from lossless compression to the lossy, high-compression range and can be easily adapted to progressive reconstruction.
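    The shift-and-add property of dyadic rational coefficients can be illustrated with the reversible integer lifting steps of the LeGall 5/3 wavelet, whose coefficients 1/2 and 1/4 become right shifts (a generic sketch under that assumption, not the authors' spline biorthogonal filter bank):

    ```python
    # Reversible 5/3 lifting: dyadic coefficients 1/2 and 1/4 become shifts.

    def lift_53_forward(x):
        """One level of the reversible 5/3 transform on an even-length list."""
        even, odd = x[0::2], x[1::2]
        # Predict step: coefficient 1/2 becomes a right shift by 1.
        d = [odd[i] - ((even[i] + even[min(i + 1, len(even) - 1)]) >> 1)
             for i in range(len(odd))]
        # Update step: coefficient 1/4 becomes a right shift by 2 (with rounding).
        s = [even[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2)
             for i in range(len(even))]
        return s, d

    def lift_53_inverse(s, d):
        """Exactly undo the update, then the predict step."""
        even = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2)
                for i in range(len(s))]
        odd = [d[i] + ((even[i] + even[min(i + 1, len(even) - 1)]) >> 1)
               for i in range(len(d))]
        out = []
        for e, o in zip(even, odd):
            out.extend([e, o])
        return out
    ```

    Because the inverse applies the identical shift expressions in reverse order, the integer round-trip is exact, which is what makes the lossless mode possible.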

  1. An electronic system for measuring thermophysical properties of wind tunnel models

    NASA Technical Reports Server (NTRS)

    Corwin, R. R.; Kramer, J. S.

    1975-01-01

    An electronic system is described which measures the surface temperature of a small portion of the surface of a model or sample at high speed using an infrared radiometer. These data are processed, along with heating-rate data from the reference heat gauge, in a small computer, which prints out the desired thermophysical properties, time, surface temperature, and reference heat rate. The system allows fast and accurate property measurements over thirty temperature increments. The technique, the details of the apparatus, the procedure for making these measurements, and the results of some preliminary tests are presented.

  2. Evaluation of the low dose cardiac CT imaging using ASIR technique

    NASA Astrophysics Data System (ADS)

    Fan, Jiahua; Hsieh, Jiang; Deubig, Amy; Sainath, Paavana; Crandall, Peter

    2010-04-01

    Cardiac imaging is today one of the key driving forces for research and development in computed tomography (CT) imaging. It requires high spatial and temporal resolution and is often associated with high radiation dose. The newly introduced ASIR technique presents an efficient method that offers dose-reduction benefits while maintaining image quality and providing fast reconstruction speed. This paper discusses a study of the image quality of the ASIR technique for cardiac CT imaging. Phantom as well as clinical data have been evaluated to demonstrate the effectiveness of the ASIR technique for cardiac CT applications.

  3. The interrelationship between disease severity, dynamic stability, and falls in cerebellar ataxia.

    PubMed

    Schniepp, Roman; Schlick, Cornelia; Pradhan, Cauchy; Dieterich, Marianne; Brandt, Thomas; Jahn, Klaus; Wuehr, Max

    2016-07-01

    Cerebellar ataxia (CA) results in discoordination of body movements (ataxia), a gait disorder, and falls. All three aspects appear to be obviously interrelated; however, experimental evidence is sparse. This study systematically correlated the clinical rating of the severity of ataxia with dynamic stability measures and fall frequency in patients with CA. Clinical severity of CA in patients with sporadic (n = 34) and hereditary (n = 24) forms was assessed with the Scale for the Assessment and Rating of Ataxia (SARA). Gait performance was examined during slow, preferred, and maximally fast walking speeds. Spatiotemporal variability parameters in the fore-aft and medio-lateral directions were analyzed. Fall frequency was assessed using a standardized interview about fall events within the last 6 months. Fore-aft gait variability showed significant speed-dependent characteristics, with the highest magnitudes during slow and fast walking. The SARA score correlated positively with fore-aft gait variability, most prominently during fast walking. Fall frequency was significantly associated with fore-aft gait variability during slow walking. Severity of ataxia, dynamic stability, and the occurrence of falls were interrelated in a speed-dependent manner: (a) severity of ataxia symptoms was closely related to instability during fast walking; (b) fall frequency was associated with instability during slow walking. These findings suggest the presence of a speed-dependent, twofold cerebellar locomotor control. Assessment of gait performance at non-preferred, slow and fast walking speeds provides novel insights into the pathophysiology of cerebellar locomotor control and may become a useful approach in the clinical evaluation of patients with CA.

  4. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. For many practical problems, however, because the observed data sets are often large and the model parameters numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, so that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system anew for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed for the first damping parameter and recycle it for all subsequent damping parameters. These computational techniques significantly improve the efficiency of our new inverse modeling algorithm. We apply the method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. 
Our new inverse modeling method is therefore a powerful tool for large-scale applications.
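    The basis-recycling idea, project the normal equations onto one Krylov subspace and reuse it for every damping parameter, can be sketched as follows (a hedged NumPy toy with my own function names; the actual implementation is in Julia within MADS and far more elaborate):

    ```python
    # Sketch: one Krylov basis, recycled across all Levenberg-Marquardt dampings.
    import numpy as np

    def krylov_basis(A, b, m):
        """Arnoldi: orthonormal basis Q of the Krylov space K_m(A, b)."""
        n = len(b)
        Q = np.zeros((n, m))
        Q[:, 0] = b / np.linalg.norm(b)
        for j in range(m - 1):
            v = A @ Q[:, j]
            for i in range(j + 1):              # Gram-Schmidt orthogonalization
                v -= (Q[:, i] @ v) * Q[:, i]
            Q[:, j + 1] = v / np.linalg.norm(v)
        return Q

    def lm_steps(J, r, lambdas, m=20):
        """Solve (J^T J + lam*I) x = J^T r for each lam, recycling one basis."""
        A, b = J.T @ J, J.T @ r
        Q = krylov_basis(A, b, min(m, len(b)))
        H = Q.T @ A @ Q                         # small projected matrix, built once
        g = Q.T @ b
        I = np.eye(H.shape[0])
        # Note K(A + lam*I, b) = K(A, b): the same basis serves every damping.
        return [Q @ np.linalg.solve(H + lam * I, g) for lam in lambdas]
    ```

    Each additional damping parameter then costs only a small dense solve in the subspace instead of a full linear solve against the original system.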

  5. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    NASA Astrophysics Data System (ADS)

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-01

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. This work was presented in part at the 2010 Annual Meeting of the American Association of Physicists in Medicine (AAPM), Philadelphia, PA.

  6. Variations of Strahl Properties with Fast and Slow Solar Wind

    NASA Technical Reports Server (NTRS)

    Figueroa-Vinas, Adolfo; Goldstein, Melvyn L.; Gurgiolo, Chris

    2008-01-01

    The interplanetary solar wind electron velocity distribution function generally shows three different populations. Two of the components, the core and halo, have been the most intensively analyzed and modeled using different theoretical models. The third component, the strahl, is usually seen at higher energies; it is confined in pitch angle, highly field-aligned, and skewed. This population has been more difficult to identify and to model in the solar wind. In this work we make use of the high angular, energy, and time resolution of the three-dimensional data from the Cluster/PEACE electron spectrometer to identify and analyze this component in the ambient solar wind during fast and slow solar wind. The moment density and fluid velocity have been computed by a semi-numerical integration method. The variations of strahl density and drift velocity with the bulk solar wind speed could provide some insight into the source, origin, and evolution of the strahl.

  7. High-speed true random number generation based on paired memristors for security electronics

    NASA Astrophysics Data System (ADS)

    Zhang, Teng; Yin, Minghui; Xu, Changmin; Lu, Xiayan; Sun, Xinhao; Yang, Yuchao; Huang, Ru

    2017-11-01

    A true random number generator (TRNG) is a critical component in hardware security, which is increasingly important in the era of mobile computing and the Internet of Things. Here we demonstrate a TRNG that uses the intrinsic variation of memristors as a natural source of entropy, a property otherwise undesirable in most applications. The random bits were produced by cyclically switching a pair of tantalum-oxide-based memristors and comparing their resistance values in the off state, taking advantage of the more pronounced resistance variation there compared with the on state. Using an alternating read scheme in the designed TRNG circuit, the unbiasedness of the random numbers was significantly improved, and the bitstream passed standard randomness tests. The Pt/TaOx/Ta memristors fabricated in this work have fast programming/erasing speeds of ~30 ns, suggesting a high random-number throughput. The approach proposed here thus holds great promise for physically implemented random number generation.

  8. High-speed true random number generation based on paired memristors for security electronics.

    PubMed

    Zhang, Teng; Yin, Minghui; Xu, Changmin; Lu, Xiayan; Sun, Xinhao; Yang, Yuchao; Huang, Ru

    2017-11-10

    A true random number generator (TRNG) is a critical component in hardware security, which is increasingly important in the era of mobile computing and the Internet of Things. Here we demonstrate a TRNG that uses the intrinsic variation of memristors as a natural source of entropy, a property otherwise undesirable in most applications. The random bits were produced by cyclically switching a pair of tantalum-oxide-based memristors and comparing their resistance values in the off state, taking advantage of the more pronounced resistance variation there compared with the on state. Using an alternating read scheme in the designed TRNG circuit, the unbiasedness of the random numbers was significantly improved, and the bitstream passed standard randomness tests. The Pt/TaOx/Ta memristors fabricated in this work have fast programming/erasing speeds of ~30 ns, suggesting a high random-number throughput. The approach proposed here thus holds great promise for physically implemented random number generation.
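    The pairing-and-comparison scheme can be mimicked in a toy simulation (my own simplified noise model, not the authors' circuit), including the alternating read that cancels a systematic mismatch between the two devices:

    ```python
    # Toy model: random bits from comparing two noisy off-state resistances,
    # with an alternating read order to cancel a fixed device mismatch.
    import random

    def off_resistance(mean=1e6, sigma=2e5):
        """Toy model of cycle-to-cycle off-state resistance variation (ohms)."""
        return random.gauss(mean, sigma)

    def trng_bits(n, mismatch=5e4):
        """Each cycle: re-switch both devices, then compare resistances.
        Swapping the comparison order every other cycle cancels the bias that a
        fixed mismatch between the two devices would otherwise introduce."""
        bits = []
        for i in range(n):
            ra = off_resistance() + mismatch   # device A has a systematic offset
            rb = off_resistance()
            if i % 2 == 0:
                bits.append(1 if ra > rb else 0)
            else:
                bits.append(1 if rb > ra else 0)  # swapped read order
        return bits
    ```

    Without the swap, the fixed offset would bias the stream toward 1; alternating the read direction averages the bias out, which is the role the alternating read scheme plays in the reported circuit.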

  9. Detecting metrologically useful asymmetry and entanglement by a few local measurements

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Yadin, Benjamin; Hou, Zhi-Bo; Cao, Huan; Liu, Bi-Heng; Huang, Yun-Feng; Maity, Reevu; Vedral, Vlatko; Li, Chuan-Feng; Guo, Guang-Can; Girolami, Davide

    2017-10-01

    Important properties of a quantum system are not directly measurable, but they can be disclosed by how fast the system changes under controlled perturbations. In particular, asymmetry and entanglement can be verified by reconstructing the state of a quantum system. Yet, this usually requires experimental and computational resources which increase exponentially with the system size. Here we show how to detect metrologically useful asymmetry and entanglement by a limited number of measurements. This is achieved by studying how they affect the speed of evolution of a system under a unitary transformation. We show that the speed of multiqubit systems can be evaluated by measuring a set of local observables, providing exponential advantage with respect to state tomography. Indeed, the presented method requires neither the knowledge of the state and the parameter-encoding Hamiltonian nor global measurements performed on all the constituent subsystems. We implement the detection scheme in an all-optical experiment.

  10. Fast underdetermined BSS architecture design methodology for real time applications.

    PubMed

    Mopuri, Suresh; Reddy, P Sreenivasa; Acharyya, Amit; Naik, Ganesh R

    2015-01-01

    In this paper, we propose a high-speed architecture design methodology for the underdetermined blind source separation (UBSS) algorithm using our recently proposed high-speed discrete Hilbert transform (DHT), targeting real-time applications. In the UBSS algorithm, unlike typical BSS, the number of sensors is less than the number of sources, which is of more interest in real-time applications. The DHT architecture has been implemented based on a sub-matrix multiplication method to compute an M-point DHT, which uses the N-point architecture recursively, where M is an integer multiple of N. The DHT architecture and the state-of-the-art architecture were coded in VHDL for a 16-bit word length, and ASIC implementation was carried out using UMC 90-nm technology at VDD = 1 V and a 1 MHz clock frequency. The implementation and experimental comparison results show that the DHT design is two times faster than the state-of-the-art architecture.
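    For reference, the discrete Hilbert transform itself is commonly computed in the frequency domain; the following NumPy sketch shows that textbook formulation (not the sub-matrix multiplication hardware architecture of the paper):

    ```python
    # Textbook DHT: zero out negative frequencies, double positive ones.
    import numpy as np

    def discrete_hilbert(x):
        """Return the Hilbert transform of a real sequence via the FFT."""
        n = len(x)
        X = np.fft.fft(x)
        h = np.zeros(n)
        h[0] = 1.0                      # DC unchanged
        if n % 2 == 0:
            h[n // 2] = 1.0             # Nyquist unchanged
            h[1:n // 2] = 2.0
        else:
            h[1:(n + 1) // 2] = 2.0
        analytic = np.fft.ifft(X * h)   # analytic signal: x + j*H{x}
        return analytic.imag            # imaginary part is the Hilbert transform
    ```

    A pure cosine maps to a sine under this transform, which is the property BSS methods exploit to form the analytic signal and its instantaneous phase.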

  11. Co-existence and switching between fast and Ω-slow wind solutions in rapidly rotating massive stars

    NASA Astrophysics Data System (ADS)

    Araya, I.; Curé, M.; ud-Doula, A.; Santillán, A.; Cidale, L.

    2018-06-01

    Most radiation-driven winds of massive stars can be modelled with m-CAK theory, resulting in the so-called fast solution. However, the most rapidly rotating stars among them, especially when the rotational speed is higher than ~75 per cent of the critical rotational speed, can adopt a different solution, the so-called Ω-slow solution, characterized by a dense and slow wind. Here, we study the transition region where the fast solution changes to the Ω-slow solution. Using both time-steady and time-dependent numerical codes, we study this transition region for various equatorial models of B-type stars. In all cases, over a certain range of rotational speeds, we find a region where the fast and the Ω-slow solutions can co-exist. We find that the type of solution obtained in this co-existence region depends strongly on the initial conditions of our models. We also test the stability of the solutions within the co-existence region by applying base-density perturbations to the wind. We find that under certain conditions the fast solution can switch to the Ω-slow solution, or vice versa. Such solution-switching may contribute to the material injected into the circumstellar environments of Be stars, without requiring rotational speeds near critical values.

  12. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning.

    PubMed

    Chen, Wei; Craft, David; Madden, Thomas M; Zhang, Kewu; Kooy, Hanne M; Herman, Gabor T

    2010-09-01

    To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed that span the space of possible treatment plans. The authors describe a plan database generation procedure customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. The authors apply the algorithm to three clinical cases: a pancreas case, an esophagus case, and a case of a tumor along the rib cage. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general-purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Given the low memory overhead of the algorithm, the method can also be extended to include multiple geometric instances and proton range possibilities for robust optimization.

  13. A multi-frequency impedance analysing instrument for eddy current testing

    NASA Astrophysics Data System (ADS)

    Yin, W.; Dickinson, S. J.; Peyton, A. J.

    2006-02-01

    This paper presents the design of a high-performance multi-frequency impedance analysing instrument (MFIA) for eddy current testing, developed primarily for monitoring a steel production process using an inductive sensor. The system consists of a flexible multi-frequency waveform generator and a voltage/current measurement unit. The impedance of the sensor is obtained by cross-spectral analysis of the current and voltage signals. The system contains high-speed digital-to-analogue and analogue-to-digital converters and dual DSPs, one for control and interface and one dedicated to frequency-spectrum analysis using the fast Fourier transform (FFT). The frequency span of the signal that can be analysed ranges from 1 kHz to 8 MHz. The system also employs a high-speed serial interface (USB) to communicate with a personal computer (PC), allowing fast transmission of data and control commands. Overall, the system is capable of delivering over 250 impedance spectra per second. Although the instrument has been developed mainly for use with an inductive sensor, it is not restricted to inductive measurement: the flexibility of the design architecture is demonstrated with capacitive and resistive measurements using appropriate input circuitry. Issues relating to optimizing the phase of the spectral components in the excitation waveform are also discussed.
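    The cross-spectral impedance estimate the instrument performs can be sketched offline in a few lines (a hedged NumPy illustration with my own names, assuming excitation frequencies that fall on exact FFT bins):

    ```python
    # Sketch: impedance from sampled v(t), i(t) via Z(f) = S_vi(f) / S_ii(f).
    import numpy as np

    def impedance_spectrum(v, i, fs, freqs):
        """Estimate complex impedance at the requested excitation frequencies."""
        V = np.fft.rfft(v)
        I = np.fft.rfft(i)
        bins = [int(round(f * len(v) / fs)) for f in freqs]
        s_vi = V[bins] * np.conj(I[bins])   # cross-spectrum of voltage and current
        s_ii = I[bins] * np.conj(I[bins])   # current auto-spectrum
        return s_vi / s_ii                  # magnitude |Z| and phase arg(Z)
    ```

    Dividing the cross-spectrum by the current auto-spectrum yields both magnitude and phase of Z at each excitation tone, which is why a single multi-frequency waveform can produce a full impedance spectrum per acquisition.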

  14. A CAMAC display module for fast bit-mapped graphics

    NASA Astrophysics Data System (ADS)

    Abdel-Aal, R. E.

    1992-10-01

    In many data acquisition and analysis facilities for nuclear physics research, utilities for the display of two-dimensional (2D) images and spectra on graphics terminals suffer from low speed, poor resolution, and limited accuracy. Development of CAMAC bit-mapped graphics modules for this purpose has been discouraged in the past by the large device count needed and the long times required to load the image data from the host computer into the CAMAC hardware; particularly since many such facilities have been designed to support fast DMA block transfers only for data acquisition into the host. This paper describes the design and implementation of a prototype CAMAC graphics display module with a resolution of 256×256 pixels at eight colours for which all components can be easily accommodated in a single-width package. Employed is a hardware technique which reduces the number of programmed CAMAC data transfer operations needed for writing 2D images into the display memory by approximately an order of magnitude, with attendant improvements in the display speed and CPU time consumption. Hardware and software details are given together with sample results. Information on the performance of the module in a typical VAX/MBD data acquisition environment is presented, including data on the mutual effects of simultaneous data acquisition traffic. Suggestions are made for further improvements in performance.

  15. Fast noninvasive eye-tracking and eye-gaze determination for biomedical and remote monitoring applications

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Morookian, John M.; Monacos, Steve P.; Lam, Raymond K.; Lebaw, C.; Bond, A.

    2004-04-01

    Eyetracking is one of the latest technologies that has shown potential in several areas, including human-computer interaction for people with and without disabilities, and noninvasive monitoring, detection, and even diagnosis of physiological and neurological problems in individuals. Current non-invasive eyetracking methods achieve a 30 Hz rate with possibly low accuracy in gaze estimation, which is insufficient for many applications. We propose a new non-invasive visual eyetracking system that is capable of operating at speeds as high as 6-12 kHz. A new CCD video camera and hardware architecture are used, and a novel fast image processing algorithm leverages specific features of the input CCD camera to yield a real-time eyetracking system. A field programmable gate array (FPGA) is used to control the CCD camera and execute the image processing operations. Initial results show the excellent performance of our system under severe head motion and low contrast conditions.

  16. Fast viscosity solutions for shape from shading under a more realistic imaging model

    NASA Astrophysics Data System (ADS)

    Wang, Guohui; Han, Jiuqiang; Jia, Honghai; Zhang, Xinman

    2009-11-01

    Shape from shading (SFS) is a classical and important problem in computer vision. The goal of SFS is to reconstruct the 3-D shape of an object from its 2-D intensity image. To this end, an image irradiance equation describing the relation between the shape of a surface and its corresponding brightness variations is used; this equation is then recast as an explicit partial differential equation (PDE). Using the nonlinear programming principle, we propose a detailed solution to Prados and Faugeras's implicit scheme for approximating the viscosity solution of the resulting PDE. Furthermore, by combining implicit and semi-implicit schemes, a new approximation scheme is presented. To accelerate convergence, we apply the Gauss-Seidel idea and an alternating sweeping strategy to the approximation schemes. Experiments on both synthetic and real images demonstrate that the proposed methods are fast and accurate.
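    The Gauss-Seidel alternating-sweeping idea is easiest to see on the simpler eikonal equation |∇u| = 1 rather than the SFS Hamiltonian the authors actually solve; the sketch below (function name, grid size and update formula are the standard fast-sweeping ingredients, not the paper's scheme) performs Gauss-Seidel updates in four alternating sweep orderings so that information propagates across the grid in a handful of passes:

```python
import numpy as np

def sweep_eikonal(source_mask, h=1.0, n_passes=4):
    """Solve |grad u| = 1 with u = 0 on source points, by Gauss-Seidel
    updates performed in four alternating sweep orderings (fast sweeping)."""
    n, m = source_mask.shape
    BIG = 1e10
    u = np.where(source_mask, 0.0, BIG)
    for _ in range(n_passes):
        for rev_i, rev_j in [(False, False), (False, True),
                             (True, False), (True, True)]:
            ii = range(n - 1, -1, -1) if rev_i else range(n)
            jj = range(m - 1, -1, -1) if rev_j else range(m)
            for i in ii:
                for j in jj:
                    if source_mask[i, j]:
                        continue
                    a = min(u[i - 1, j] if i > 0 else BIG,
                            u[i + 1, j] if i < n - 1 else BIG)
                    b = min(u[i, j - 1] if j > 0 else BIG,
                            u[i, j + 1] if j < m - 1 else BIG)
                    if abs(a - b) >= h:        # one-sided (upwind) update
                        cand = min(a, b) + h
                    else:                      # two-sided (quadratic) update
                        cand = 0.5 * (a + b + np.sqrt(2 * h * h - (a - b) ** 2))
                    u[i, j] = min(u[i, j], cand)
    return u
```

    Because each sweep reuses values updated earlier in the same pass (the Gauss-Seidel property), characteristics aligned with the sweep direction converge in one pass; alternating the four orderings covers all directions.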

  17. Toward high-speed 3D nonlinear soft tissue deformation simulations using Abaqus software.

    PubMed

    Idkaidek, Ashraf; Jasiuk, Iwona

    2015-12-01

    We aim to achieve a fast and accurate three-dimensional (3D) simulation of a porcine liver deformation under a surgical tool pressure using the commercial finite element software Abaqus. The liver geometry is obtained using magnetic resonance imaging, and a nonlinear constitutive law is employed to capture large deformations of the tissue. Effects of implicit versus explicit analysis schemes, element type, and mesh density on computation time are studied. We find that Abaqus explicit and implicit solvers are capable of simulating nonlinear soft tissue deformations accurately using first-order tetrahedral elements in a relatively short time by optimizing the element size. This study provides new insights and guidance on accurate and relatively fast nonlinear soft tissue simulations. Such simulations can provide force feedback during robotic surgery and allow visualization of tissue deformations for surgery planning and training of surgical residents.

  18. Fast algorithms of constrained Delaunay triangulation and skeletonization for band images

    NASA Astrophysics Data System (ADS)

    Zeng, Wei; Yang, ChengLei; Meng, XiangXu; Yang, YiJun; Yang, XiuKun

    2004-09-01

    For the boundary polygons of band images, a fast constrained Delaunay triangulation algorithm is presented, and based on it an efficient skeletonization algorithm is designed. In the triangulation process, the characteristics of the uniform grid structure and of the band-polygons are exploited to speed up the computation of the third vertex of an edge within its local range when forming a Delaunay triangle. The final skeleton of the band image is derived by reducing each triangle to local skeleton lines according to its topology. The algorithm has a simple data structure and is easy to understand and implement. Moreover, it can handle multiply connected polygons directly. Experiments show a nearly linear dependence between triangulation time and the size of randomly generated band-polygons. Correspondingly, the skeletonization algorithm also improves on previously known results in terms of running time. Some practical examples are given in the paper.

  19. Ultrafast Method for the Analysis of Fluorescence Lifetime Imaging Microscopy Data Based on the Laguerre Expansion Technique

    PubMed Central

    Jo, Javier A.; Fang, Qiyin; Marcu, Laura

    2007-01-01

    We report a new deconvolution method for fluorescence lifetime imaging microscopy (FLIM) based on the Laguerre expansion technique. The performance of this method was tested on synthetic and real FLIM images. The following interesting properties of this technique were demonstrated. 1) The fluorescence intensity decay can be estimated simultaneously for all pixels, without a priori assumption of the decay functional form. 2) The computation speed is extremely fast, performing at least two orders of magnitude faster than current algorithms. 3) The estimated maps of Laguerre expansion coefficients provide a new domain for representing FLIM information. 4) The number of images required for the analysis is relatively small, allowing reduction of the acquisition time. These findings indicate that the developed Laguerre expansion technique for FLIM analysis represents a robust and extremely fast deconvolution method that enables practical applications of FLIM in medicine, biology, biochemistry, and chemistry. PMID:19444338
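    The core idea, expanding every pixel's decay on a Laguerre-type basis and fitting all pixels in a single linear least-squares solve, can be sketched as follows. This is a minimal illustration under our own assumptions: the basis here is the continuous Laguerre polynomials damped by an exponential with a free time-scale `tau`, not necessarily the authors' exact discrete Laguerre functions, and all names are ours:

```python
import numpy as np

def laguerre_basis(n_samples, n_funcs, tau):
    """Laguerre-type basis phi_j(t) = exp(-t/(2*tau)) * L_j(t/tau),
    built with the three-term recurrence for Laguerre polynomials."""
    x = np.arange(n_samples) / tau
    L = np.zeros((n_funcs, n_samples))
    L[0] = 1.0
    if n_funcs > 1:
        L[1] = 1.0 - x
    for k in range(1, n_funcs - 1):
        L[k + 1] = ((2 * k + 1 - x) * L[k] - k * L[k - 1]) / (k + 1)
    return np.exp(-x / 2.0) * L

def flim_deconvolve(Y, irf, basis):
    """Fit every pixel's decay simultaneously.
    Y: (n_pixels, n_samples) measured decays; irf: (n_samples,) instrument
    response. Returns Laguerre expansion coefficients (n_pixels, n_funcs)."""
    n = basis.shape[1]
    # model: y = irf (*) sum_j c_j phi_j  ->  linear in the coefficients c
    V = np.array([np.convolve(irf, phi)[:n] for phi in basis])
    C, *_ = np.linalg.lstsq(V.T, Y.T, rcond=None)
    return C.T
```

    Because the model is linear in the coefficients, no per-pixel iterative fit is needed; this is what makes the approach orders of magnitude faster than nonlinear decay fitting.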

  20. The return of the bow shock

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scherer, K.; Fichtner, H., E-mail: kls@tp4.rub.de, E-mail: hf@tp4.rub.de

    2014-02-10

    Recently, whether a bow shock ahead of the heliospheric stagnation region exists or not has been a topic of discussion. This was triggered by measurements indicating that the Alfvén speed and the speed of fast magnetosonic waves are higher than the flow speed of the local interstellar medium (LISM) relative to the heliosphere and resulted in the conclusion that either a bow wave or a slow magnetosonic shock might exist. We demonstrate here that including the He+ component of the LISM yields both an Alfvén and fast magnetosonic wave speed lower than the LISM flow speed. Consequently, the scenario of a bow shock in front of the heliosphere, as modeled in numerous simulations of the interaction of the solar wind with the LISM, remains valid.

  1. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    NASA Astrophysics Data System (ADS)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced.
This code was developed to address the needs of the changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.

  2. A fast object-oriented Matlab implementation of the Reproducing Kernel Particle Method

    NASA Astrophysics Data System (ADS)

    Barbieri, Ettore; Meo, Michele

    2012-05-01

    Novel numerical methods, known as Meshless Methods or Meshfree Methods and, in a wider perspective, Partition of Unity Methods, promise to overcome most of the disadvantages of traditional finite element techniques. The absence of a mesh makes meshfree methods very attractive for problems involving large deformations, moving boundaries and crack propagation. However, meshfree methods still have significant limitations that prevent their acceptance among researchers and engineers, namely their computational cost. This paper presents an in-depth analysis of computational techniques to speed up the computation of the shape functions in the Reproducing Kernel Particle Method and Moving Least Squares, with particular focus on their bottlenecks: the neighbour search, the inversion of the moment matrix and the assembly of the stiffness matrix. The paper presents numerous computational solutions aimed at a considerable reduction of the computational times: the use of kd-trees for the neighbour search, sparse indexing of the nodes-points connectivity and, most importantly, the explicit and vectorized inversion of the moment matrix without using loops and numerical routines.
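    The kd-tree neighbour search recommended above can be sketched with SciPy; this is our own minimal illustration (the paper's implementation is in MATLAB, and the function name is an assumption):

```python
import numpy as np
from scipy.spatial import cKDTree

def support_neighbours(nodes, radius):
    """For every node, indices of all nodes inside the kernel support radius.
    A kd-tree makes this roughly O(N log N) overall, versus O(N^2) for a
    brute-force distance check against every node."""
    tree = cKDTree(nodes)
    return tree.query_ball_point(nodes, r=radius)
```

    The returned index lists feed directly into the moment-matrix assembly; in the paper, the same idea is paired with sparse connectivity indexing and a vectorized inversion of the moment matrices.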

  3. Early MIMD experience on the CRAY X-MP

    NASA Astrophysics Data System (ADS)

    Rhoades, Clifford E.; Stevens, K. G.

    1985-07-01

    This paper describes some early experience with converting four physics simulation programs to the CRAY X-MP, a current Multiple Instruction, Multiple Data (MIMD) computer consisting of two processors, each with an architecture similar to that of the CRAY-1. As a multi-processor, the CRAY X-MP together with the high speed Solid-state Storage Device (SSD) is an ideal machine upon which to study MIMD algorithms for solving the equations of mathematical physics, because it is fast enough to run real problems. The computer programs used in this study are all FORTRAN versions of original production codes. They range in sophistication from a one-dimensional numerical simulation of collisionless plasma, to a two-dimensional hydrodynamics code with heat flow, to a couple of three-dimensional fluid dynamics codes with varying degrees of viscous modeling. Early research with a dual processor configuration has shown speed-ups ranging from 1.55 to 1.98. It has been observed that a few simple extensions to FORTRAN allow a typical programmer to achieve a remarkable level of efficiency. These extensions involve the concept of memory local to a concurrent subprogram and memory common to all concurrent subprograms.

  4. Quickly updatable hologram images with high performance photorefractive polymer composites

    NASA Astrophysics Data System (ADS)

    Tsutsumi, Naoto; Kinashi, Kenji; Nonomura, Asato; Sakai, Wataru

    2012-02-01

    We present here quickly updatable hologram images using a high performance photorefractive (PR) polymer composite based on poly(N-vinyl carbazole) (PVCz). PVCz is one of the pioneering photoconductive polymers. PVCz/7-DCST/CzEPA/TNF (44/35/20/1 by wt) gives a high diffraction efficiency of 68% at E = 45 V/μm with fast response speed. Response speed of optical diffraction is the key parameter for a real-time 3D holographic display. The key to obtaining quickly updatable hologram images is to keep the glass transition temperature low enough to enhance chromophore orientation. An object image of a reflected coin surface, recorded with a reference beam at 532 nm (green) in the PR polymer composite, is simultaneously reconstructed using a red probe beam at 642 nm. Instead of a coin, an image produced by a computer and displayed on a spatial light modulator (SLM) can also serve as the object: the object beam reflected from the SLM interferes with the reference beam on the PR polymer composite to record a hologram, which is simultaneously reconstructed by the red probe beam. A movie produced on a computer was recorded as a real-time hologram in the PR polymer composite and simultaneously, clearly reconstructed at video rate.

  5. Efficient Radiative Transfer for Dynamically Evolving Stratified Atmospheres

    NASA Astrophysics Data System (ADS)

    Judge, Philip G.

    2017-12-01

    We present a fast multi-level and multi-atom non-local thermodynamic equilibrium radiative transfer method for dynamically evolving stratified atmospheres, such as the solar atmosphere. The preconditioning method of Rybicki & Hummer (RH92) is adopted, but, to meet the need for speed and stability, a “second-order escape probability” scheme is implemented within the framework of the RH92 method, in which frequency and angle integrals are carried out analytically. This minimizes the computational work needed, at some expense of numerical accuracy. The iteration scheme is local; the formal solutions for the intensities are the only non-local component. At present the methods have been coded for vertical transport, applicable to atmospheres that are highly stratified. The probabilistic method appears adequately fast, stable, and sufficiently accurate for exploring dynamical interactions between the evolving MHD atmosphere and radiation using current computer hardware. Current 2D and 3D dynamics codes do not include this interaction as consistently as the present method does. The solutions generated may ultimately serve as initial conditions for dynamical calculations including full 3D radiative transfer. The National Center for Atmospheric Research is sponsored by the National Science Foundation.

  6. The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kargaran, Hamed, E-mail: h-kargaran@sbu.ac.ir; Minuchehr, Abdolhamid; Zolfaghari, Ahmad

    The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo random number generator (GPPRNG) has been proposed for use in high performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes, GLOBAL-MODE and SHARED-MODE. To generate parallel random numbers based on the independent sequence method, a combination of the middle-square method and a chaotic map, along with the Xorshift PRNG, has been employed. Implementation of the developed GPPRNG on a single GPU showed a speedup of 150x and 470x (with respect to the speed of the PRNG on a single CPU core) for GLOBAL-MODE and SHARED-MODE, respectively. To evaluate the accuracy of the developed GPPRNG, its performance was compared to that of some other available PRNGs, such as those of MATLAB, FORTRAN and the Miller-Park algorithm, using standard statistical tests. The results of this comparison showed that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.
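    A CPU-side Python sketch of the two main ingredients, Marsaglia's 32-bit xorshift step and a middle-square-style seed scrambler, is shown below for orientation. It does not reproduce the paper's CUDA Fortran kernels or its exact middle-square/chaotic-map combination, and all names and constants are our own:

```python
MASK32 = 0xFFFFFFFF

def xorshift32(state):
    """One step of Marsaglia's 32-bit xorshift; state must be nonzero."""
    state ^= (state << 13) & MASK32
    state ^= state >> 17
    state ^= (state << 5) & MASK32
    return state & MASK32

def middle_square(seed, digits=8):
    """Middle-square scrambling, used here only to spread per-thread seeds."""
    s = str(seed * seed).zfill(2 * digits)
    mid = digits // 2
    return int(s[mid:mid + digits]) or 1  # never return the absorbing 0 state

def uniforms(seed, n):
    """n floats in [0, 1) from repeated xorshift steps."""
    x = middle_square(seed)
    out = []
    for _ in range(n):
        x = xorshift32(x)
        out.append(x / 2.0 ** 32)
    return out
```

    On a GPU, each thread would carry its own scrambled state so the per-thread streams are decorrelated, which is the point of the independent sequence method.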

  7. Compressed sensing with gradient total variation for low-dose CBCT reconstruction

    NASA Astrophysics Data System (ADS)

    Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Seongchae; Huh, Young; Park, Justin C.; Lee, Byeonghun; Baek, Junghee; Kim, Eunyoung

    2015-06-01

    This paper describes the improvement of convergence speed with a gradient total variation (GTV) approach in compressed sensing (CS) for low-dose cone-beam computed tomography (CBCT) reconstruction. We derive a fast algorithm for constrained total variation (TV)-based reconstruction from a minimum number of noisy projections. To achieve this we combine the GTV with a TV-norm regularization term to promote sparsity in the X-ray attenuation characteristics of the human body and accelerate the reconstruction. The GTV is derived from the TV, is more computationally efficient, and converges faster to the desired solution. The numerical algorithm is simple and converges relatively quickly. We apply a gradient projection algorithm that iteratively seeks a solution in the direction of the projected gradient while enforcing non-negativity of the found solution. In comparison with the Feldkamp, Davis, and Kress (FDK) and conventional TV algorithms, the proposed GTV algorithm converged in ≤18 iterations, whereas the original TV algorithm needed at least 34 iterations, when reconstructing the chest phantom images with the number of projections reduced by 50% relative to the FDK reference. Future investigation includes improving imaging quality, particularly regarding X-ray cone-beam scatter, and motion artifacts of CBCT reconstruction.
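    The gradient-projection idea, descend on a data-fit-plus-TV objective and then project onto the non-negative orthant, can be sketched in 1-D. This is a generic illustration under our own assumptions (identity forward operator, smoothed TV, hand-picked step size), not the authors' GTV operator or their CBCT system matrix:

```python
import numpy as np

def tv_grad(x, eps=1e-4):
    """Gradient of the smoothed total variation sum_i sqrt((x[i+1]-x[i])^2 + eps)."""
    d = np.diff(x)
    s = d / np.sqrt(d * d + eps)
    g = np.zeros_like(x)
    g[:-1] -= s
    g[1:] += s
    return g

def gradient_projection_tv(b, lam=0.05, step=0.05, n_iter=1000):
    """Minimize 0.5*||x - b||^2 + lam*TV(x) subject to x >= 0 by taking a
    gradient step and projecting onto the non-negative orthant each iteration."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        grad = (x - b) + lam * tv_grad(x)
        x = np.maximum(x - step * grad, 0.0)  # projection enforces x >= 0
    return x
```

    In the CT setting, `(x - b)` is replaced by the gradient of the projection-data misfit, and the TV penalty acts on the 2-D/3-D image gradient; the projection step is what keeps attenuation coefficients physical.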

  8. Topography of Slow Sigma Power during Sleep is Associated with Processing Speed in Preschool Children.

    PubMed

    Doucette, Margaret R; Kurth, Salome; Chevalier, Nicolas; Munakata, Yuko; LeBourgeois, Monique K

    2015-11-04

    Cognitive development is influenced by maturational changes in processing speed, a construct reflecting the rapidity of executing cognitive operations. Although cognitive ability and processing speed are linked to spindles and sigma power in the sleep electroencephalogram (EEG), little is known about such associations in early childhood, a time of major neuronal refinement. We calculated EEG power in the slow (10-13 Hz) and fast (13.25-17 Hz) sigma bands from all-night high-density EEG in a cross-sectional sample of healthy preschool children (n = 10, 4.3 ± 1.0 years). Processing speed was assessed as simple reaction time. On average, reaction time was 1409 ± 251 ms; slow sigma power was 4.0 ± 1.5 μV²; and fast sigma power was 0.9 ± 0.2 μV². Both slow and fast sigma power predominated over central areas. Only slow sigma power was correlated with processing speed in a large parietal electrode cluster (p < 0.05, r ranging from -0.6 to -0.8), such that greater power predicted faster reaction time. Our findings indicate regional correlates between sigma power and processing speed that are specific to early childhood and provide novel insights into the neurobiological features of the EEG that may underlie developing cognitive abilities.
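    Band power of the kind computed here can be illustrated with a simple periodogram; this is a toy sketch, not the authors' all-night high-density EEG pipeline (which involves sleep staging and artifact handling), and the function name is ours:

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Integrated power of x in the band [f_lo, f_hi] Hz from a periodogram."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))  # simple periodogram
    band = (freqs >= f_lo) & (freqs <= f_hi)
    df = freqs[1] - freqs[0]
    return psd[band].sum() * df
```

    In practice the power is computed per electrode and per sleep epoch, then averaged, which is what yields the topographic maps the study correlates with reaction time.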

  9. Speed of fast and slow rupture fronts along frictional interfaces

    NASA Astrophysics Data System (ADS)

    Trømborg, Jørgen Kjoshagen; Sveinsson, Henrik Andersen; Thøgersen, Kjetil; Scheibert, Julien; Malthe-Sørenssen, Anders

    2015-07-01

    The transition from stick to slip at a dry frictional interface occurs through the breaking of microjunctions between the two contacting surfaces. Typically, interactions between junctions through the bulk lead to rupture fronts propagating from weak and/or highly stressed regions, whose junctions break first. Experiments find rupture fronts ranging from quasistatic fronts, via fronts much slower than elastic wave speeds, to fronts faster than the shear wave speed. The mechanisms behind and selection between these fronts are still imperfectly understood. Here we perform simulations in an elastic two-dimensional spring-block model where the frictional interaction between each interfacial block and the substrate arises from a set of junctions modeled explicitly. We find that material slip speed and rupture front speed are proportional across the full range of front speeds we observe. We revisit a mechanism for slow slip in the model and demonstrate that fast slip and fast fronts have a different, inertial origin. We highlight the long transients in front speed even along homogeneous interfaces, and we study how both the local shear to normal stress ratio and the local strength are involved in the selection of front type and front speed. Last, we introduce an experimentally accessible integrated measure of block slip history, the Gini coefficient, and demonstrate that in the model it is a good predictor of the history-dependent local static friction coefficient of the interface. These results will contribute both to building a physically based classification of the various types of fronts and to identifying the important mechanisms involved in the selection of their propagation speed.

  10. Fast Flood damage estimation coupling hydraulic modeling and Multisensor Satellite data

    NASA Astrophysics Data System (ADS)

    Fiorini, M.; Rudari, R.; Delogu, F.; Candela, L.; Corina, A.; Boni, G.

    2011-12-01

    Damage estimation requires a good representation of the elements at risk and their vulnerability, knowledge of the flooded-area extent and a description of the hydraulic forcing. In this work the real-time use of a simplified two-dimensional hydraulic model constrained by satellite-retrieved flooded areas is analyzed. The main features of such a model are computational speed and simple start-up, with no need for complex input beyond simplified boundary and initial conditions. These characteristics allow the model to be fast enough for real-time simulation of flooding events. The model fills the information gap left by single satellite scenes of the flooded area, allowing estimation of the maximum flood extension and magnitude. The static information provided by Earth observation (such as the SAR-derived extent of flooded areas at a certain time) is interpreted in a dynamically consistent way, and very useful hydraulic information (e.g., water depth, water speed and the evolution of flooded areas) is provided. This information is merged with satellite identification of the exposed elements, characterized in terms of their vulnerability to floods, in order to obtain fast estimates of flood damages. The model has been applied to several flooding events worldwide. Among others, activations in Mediterranean areas such as Veneto (IT) (October 2010), Basilicata (IT) (March 2011) and Shkoder (January 2010 and December 2010) are considered and compared with larger floods such as the Queensland event of December 2010.

  11. Development of a speeding-related crash typology

    DOT National Transportation Integrated Search

    2010-04-01

    Speeding, the driver behavior of exceeding the posted speed limit or driving too fast for conditions, has consistently been estimated to be a contributing factor to a significant percentage of fatal and nonfatal crashes. The U.S. Department of Transp...

  12. Risk of falls in older people during fast-walking--the TASCOG study.

    PubMed

    Callisaya, M L; Blizzard, L; McGinley, J L; Srikanth, V K

    2012-07-01

    To investigate the relationship between fast-walking and falls in older people. Individuals aged 60-86 years were randomly selected from the electoral roll (n=176). Gait speed, step length, cadence and a walk ratio were recorded during preferred- and fast-walking using an instrumented walkway. Falls were recorded prospectively over 12 months. Log multinomial regression was used to estimate the relative risk of single and multiple falls associated with gait variables during fast-walking and change between preferred- and fast-walking. Covariates included age, sex, mood, physical activity, sensorimotor and cognitive measures. The risk of multiple falls was increased for those with a smaller walk ratio (shorter steps, faster cadence) during fast-walking (RR 0.92, CI 0.87, 0.97) and greater reduction in the walk ratio (smaller increase in step length, larger increase in cadence) when changing to fast-walking (RR 0.73, CI 0.63, 0.85). These gait patterns were associated with poorer physiological and cognitive function (p<0.05). A higher risk of multiple falls was also seen for those in the fastest quarter of gait speed (p=0.01) at fast-walking. A trend for better reaction time, balance, memory and physical activity for higher categories of gait speed was stronger for fallers than non-fallers (p<0.05). Tests of fast-walking may be useful in identifying older individuals at risk of multiple falls. There may be two distinct groups at risk--the frail person with short shuffling steps, and the healthy person exposed to greater risk. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Fast Holographic Wavefront Sensor

    NASA Astrophysics Data System (ADS)

    Andersen, G.; Ghebremichael, F.; Gurley, K.

    There are several different types of wavefront sensors that can be used to measure the phase of an input beam. While they have widely varying modes of operation, they all require some computational overhead in order to deconstruct the phase from an optical measurement which greatly reduces the sensing speed. Furthermore, zonal detection methods, such as the Shack-Hartmann wavefront sensor (SHWFS) are not well suited to temporal changes in pupil obscuration such as can occur with scintillation. Here we present a modal detector that incorporates a multiplexed hologram to give a full description of wavefront error without the need for any calculations. The holographic wavefront sensor (HWFS) uses a hologram that is "pre-programmed" with all desired Zernike aberration components. An input beam of arbitrary phase will diffract into pairs of focused beams. Each pair represents a different aberration, and the amplitude is obtained by measuring the relative brightness of the pair of foci. This can be easily achieved by using conventional position sensing devices. In this manner, the amplitudes of each aberration components are directly sensed without the need for any calculations. As such, a complete characterization of the wavefront can be made at speeds of up to 100 kHz in a compact device and without the need for a computer or sophisticated electronics. In this talk we will detail the operation of the holographic wavefront sensor and present results of a prototype sensor as well as a modified design suitable for a closed-loop adaptive optics system. This new wavefront sensor will not only permit faster correction, but permit adaptive optics systems to work in extremely turbulent environments such as those encountered in fast-tracking systems and the Airborne Laser project.
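    The "no computation" claim reduces to reading out each focus pair; a hypothetical normalized-difference readout (our illustration of the position-sensing step, not the sensor's actual calibration curve) looks like:

```python
def modal_amplitudes(pairs):
    """Zernike-mode amplitudes from the relative brightness of each hologram
    focus pair. pairs: iterable of (I_plus, I_minus) intensities, one pair per
    pre-programmed aberration mode. The normalized difference is a common
    position-sensing metric; the real sensor maps it through a calibration."""
    return [(ip - im) / (ip + im) for ip, im in pairs]
```

    Because each pair is read by an analog position-sensing device, the per-mode amplitude is available essentially at the detector rate, which is what permits the quoted 100 kHz operation.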

  14. Left–right coordination from simple to extreme conditions during split‐belt locomotion in the chronic spinal adult cat

    PubMed Central

    Desrochers, Étienne; Thibaudier, Yann; Hurteau, Marie‐France; Dambreville, Charline

    2016-01-01

    Key points: Coordination between the left and right sides is essential for dynamic stability during locomotion. The immature or neonatal mammalian spinal cord can adjust to differences in speed between the left and right sides during split‐belt locomotion by taking more steps on the fast side. We show that the adult mammalian spinal cord can also adjust its output so that the fast side can take more steps. During split‐belt locomotion, only certain parts of the cycle are modified to adjust left–right coordination, primarily those associated with swing onset. When the fast limb takes more steps than the slow limb, strong left–right interactions persist. Therefore, the adult mammalian spinal cord has a remarkable adaptive capacity for left–right coordination, from simple to extreme conditions. Abstract: Although left–right coordination is essential for locomotion, its control is poorly understood, particularly in adult mammals. To investigate the spinal control of left–right coordination, a spinal transection was performed in six adult cats that were then trained to recover hindlimb locomotion. Spinal cats performed tied‐belt locomotion from 0.1 to 1.0 m s−1 and split‐belt locomotion with low to high (1:1.25–10) slow/fast speed ratios. With the left hindlimb stepping at 0.1 m s−1 and the right hindlimb stepping from 0.2 to 1.0 m s−1, 1:1, 1:2, 1:3, 1:4 and 1:5 left–right step relationships could appear. The appearance of 1:2+ relationships was not linearly dependent on the difference in speed between the slow and fast belts. The last step taken by the fast hindlimb displayed longer cycle, stance and swing durations and increased extensor activity, as the slow limb transitioned to swing. During split‐belt locomotion with 1:1, 1:2 and 1:3 relationships, the timing of stance onset of the fast limb relative to the slow limb and placement of both limbs at contact were invariant with increasing slow/fast speed ratios. 
In contrast, the timing of stance onset of the slow limb relative to the fast limb and the placement of both limbs at swing onset were modulated with slow/fast speed ratios. Thus, left–right coordination is adjusted by modifying specific parts of the cycle. Results highlight the remarkable adaptive capacity of the adult mammalian spinal cord, providing insight into spinal mechanisms and sensory signals regulating left–right coordination. PMID:27426732

  15. Problem-solving and learning in Carib grackles: individuals show a consistent speed-accuracy trade-off.

    PubMed

    Ducatez, S; Audet, J N; Lefebvre, L

    2015-03-01

    The generation and maintenance of within-population variation in cognitive abilities remain poorly understood. Recent theories propose that this variation might reflect the existence of consistent cognitive strategies distributed along a slow-fast continuum influenced by shyness. The slow-fast continuum might be reflected in the well-known speed-accuracy trade-off, where animals cannot simultaneously maximise the speed and the accuracy with which they perform a task. We test this idea on 49 wild-caught Carib grackles (Quiscalus lugubris), a tame opportunistic generalist Icterid bird in Barbados. Grackles that are fast at solving novel problems involving obstacle removal to reach visible food perform consistently over two different tasks, spend more time per trial attending to both tasks, and are those that show more shyness in a pretest. However, they are also the individuals that make more errors in a colour discrimination task requiring no new motor act. Our data reconcile some of the mixed positive and negative correlations reported in the comparative literature on cognitive tasks, suggesting that a speed-accuracy trade-off could lead to negative correlations between tasks favouring speed and tasks favouring accuracy, but still reveal consistent strategies based on stable individual differences.

  16. Development of a speeding-related crash typology: [summary report].

    DOT National Transportation Integrated Search

    2010-01-01

    Speeding, the driver behavior of exceeding the posted speed limit or driving too fast for conditions, has consistently been shown to be a contributing factor to a significant percentage of fatal and nonfatal crashes. Between 1990 and 2006, the frequ...

  17. Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.

    PubMed

    Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann

    2015-01-01

    Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable to dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to effectively simulate intensive and finely detailed fluids such as smoke with rapidly increasing numbers of vortex filaments and smoke particles. The authors propose a novel vortex-filaments-in-grids scheme in which uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports a trade-off between simulation speed and scale of details. After computing the whole velocity field, external control can easily be exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of the proposed scheme for a visually plausible smoke simulation with macroscopic vortex structures.

  18. Fast and robust standard-deviation-based method for bulk motion compensation in phase-based functional OCT.

    PubMed

    Wei, Xiang; Camino, Acner; Pi, Shaohua; Cepurna, William; Huang, David; Morrison, John C; Jia, Yali

    2018-05-01

    Phase-based optical coherence tomography (OCT), such as OCT angiography (OCTA) and Doppler OCT, is sensitive to the confounding phase shift introduced by subject bulk motion. Traditional bulk motion compensation methods are limited in accuracy and computational efficiency. In this Letter, we present a bulk motion compensation method for phase-based functional OCT that is, to the best of our knowledge, novel. The bulk-motion-associated phase shift can be derived directly by solving its equation using the standard deviation of phase-based OCTA and Doppler OCT flow signals. This method was evaluated on rodent retinal images acquired by a prototype visible-light OCT and human retinal images acquired by a commercial system. The image quality and computational speed were significantly improved compared with two conventional phase compensation methods.
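
    A common conventional baseline that methods like this are compared against estimates the bulk phase shift as the angle of the amplitude-weighted mean phase difference between repeated acquisitions. A minimal numpy sketch of that conventional estimator (the synthetic single-A-line setup and variable names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic complex OCT A-scan; the repeat carries a bulk-motion phase shift
a = rng.normal(size=256) + 1j * rng.normal(size=256)
true_shift = 0.7                      # radians, hypothetical bulk motion
b = a * np.exp(1j * true_shift)

# conventional estimator: angle of the amplitude-weighted mean phase difference
est = np.angle(np.sum(b * np.conj(a)))
print(round(est, 3))  # recovers 0.7
```

    With noiseless data the estimator is exact; on real scans the amplitude weighting suppresses the contribution of low-SNR pixels.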

  19. Variability of gait, bilateral coordination, and asymmetry in women with fibromyalgia.

    PubMed

    Heredia-Jimenez, J; Orantes-Gonzalez, E; Soto-Hermoso, V M

    2016-03-01

    To analyze how fibromyalgia affects the variability, asymmetry, and bilateral coordination of gait when walking at comfortable and fast speeds, 65 fibromyalgia (FM) patients and 50 healthy women were analyzed. Gait analysis was performed using an instrumented walkway (GAITRite system). Average walking speed, coefficient of variation (CV) of stride length, swing time, and step width data were obtained, and bilateral coordination and gait asymmetry were analyzed. FM patients presented significantly lower speeds than the healthy group. FM patients obtained significantly higher values of CV_StrideLength (p=0.04; p<0.001), CV_SwingTime (p<0.001; p<0.001), CV_StepWidth (p=0.004; p<0.001), phase coordination index (p=0.01; p=0.03), and p_CV (p<0.001; p=0.001) than the control group, walking at comfortable or fast speeds. Gait asymmetry only showed significant differences in the fast condition. FM patients walked more slowly and presented greater gait variability and worse bilateral coordination than healthy subjects. The variability and the bilateral coordination were particularly affected by FM in women. Therefore, variability and bilateral coordination of gait could be analyzed to complement the gait evaluation of FM patients. Copyright © 2016 Elsevier B.V. All rights reserved.
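
    Variability measures such as CV_StrideLength are coefficients of variation: the sample standard deviation expressed as a percentage of the mean. A small illustrative computation with made-up stride lengths (not data from this study):

```python
import numpy as np

# hypothetical stride lengths for one subject, in metres
stride_lengths = np.array([1.21, 1.18, 1.25, 1.17, 1.22])

# coefficient of variation: sample SD (ddof=1) as a percentage of the mean
cv = stride_lengths.std(ddof=1) / stride_lengths.mean() * 100.0
print(round(cv, 2))  # 2.66 (percent)
```

    A higher CV indicates less consistent stride-to-stride behaviour, which is the sense in which the FM group showed "greater variability" above.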

  20. Discovery of Ubiquitous Fast-Propagating Intensity Disturbances by the Chromospheric Lyman Alpha Spectropolarimeter (CLASP)

    NASA Astrophysics Data System (ADS)

    Kubo, M.; Katsukawa, Y.; Suematsu, Y.; Kano, R.; Bando, T.; Narukage, N.; Ishikawa, R.; Hara, H.; Giono, G.; Tsuneta, S.; Ishikawa, S.; Shimizu, T.; Sakao, T.; Winebarger, A.; Kobayashi, K.; Cirtain, J.; Champey, P.; Auchère, F.; Trujillo Bueno, J.; Asensio Ramos, A.; Štěpán, J.; Belluzzi, L.; Manso Sainz, R.; De Pontieu, B.; Ichimoto, K.; Carlsson, M.; Casini, R.; Goto, M.

    2016-12-01

    High-cadence observations by the slit-jaw (SJ) optics system of the sounding rocket experiment known as the Chromospheric Lyman Alpha Spectropolarimeter (CLASP) reveal ubiquitous intensity disturbances that recurrently propagate in either the chromosphere or the transition region or both at a speed much higher than the speed of sound. The CLASP/SJ instrument provides a time series of two-dimensional images taken with broadband filters centered on the Lyα line at a 0.6 s cadence. The multiple fast-propagating intensity disturbances appear in the quiet Sun and in an active region, and they are clearly detected in at least 20 areas in a field of view of 527″ × 527″ during the 5 minute observing time. The apparent speeds of the intensity disturbances range from 150 to 350 km s-1, and they are comparable to the local Alfvén speed in the transition region. The intensity disturbances tend to propagate along bright elongated structures away from areas with strong photospheric magnetic fields. This suggests that the observed fast-propagating intensity disturbances are related to the magnetic canopy structures. The maximum distance traveled by the intensity disturbances is about 10″, and the widths are a few arcseconds, which are almost determined by a pixel size of 1.″03. The timescale of each intensity pulse is shorter than 30 s. One possible explanation for the fast-propagating intensity disturbances observed by CLASP is magnetohydrodynamic fast-mode waves.

  1. Double-HE-Layer Detonation-Confinement Sandwich Tests: The Effect of Slow-Layer Density

    NASA Astrophysics Data System (ADS)

    Hill, Larry

    2013-06-01

    Over a period of several years, we have explored the phenomenon in which slabs of high explosives (HEs) with differing detonation speeds are joined along one of their faces. Both are initiated (usually by a line-wave generator) at one edge. If there were no coupling between the layers, the detonation in the fast HE would outrun that in the slow HE. In reality, the detonation in the fast HE transmits an oblique shock into the slow HE, the phase speed of which is equal to the speed of the fast HE. This has one of two effects depending on the particulars. First, the oblique shock transmitted to the slow HE can pre-shock and deaden it, extinguishing the detonation in the slow HE. Second, the oblique shock can transversely initiate the slow layer, pulling its detonation along at the fast HE speed. When the second occurs, it does so at the "penalty" of a nominally dead layer, which forms in the slow HE adjacent to the material interface. We present the results of tests in which the fast layer was 3-mm-thick PBX 9501 (95 wt% HMX), and the slow layer was 8-mm-thick PBX 9502 (95 wt% TATB). The purpose was to observe the effect of slow layer density on the "dead" layer thickness. Very little effect was observed across the nominal PBX 9502 density range, 1.885-1.895 g/cc.

  2. Tempo in electronic gaming machines affects behavior among at-risk gamblers.

    PubMed

    Mentzoni, Rune A; Laberg, Jon Christian; Brunborg, Geir Scott; Molde, Helge; Pallesen, Ståle

    2012-09-01

    Background and aims: Electronic gaming machines (EGM) may be a particularly addictive form of gambling, and gambling speed is believed to contribute to the addictive potential of such machines. The aim of the current study was to generate more knowledge concerning speed as a structural characteristic in gambling, by comparing the effects of three different bet-to-outcome intervals (BOI) on gamblers' bet sizes, game evaluations and illusion of control during gambling on a computer-simulated slot machine. Furthermore, we investigated whether problem gambling moderates the effects of BOI on gambling behavior and cognitions. Methods: 62 participants played a computerized slot machine with either fast (400 ms), medium (1700 ms) or slow (3000 ms) BOI. SOGS-R was used to measure pre-existing gambling problems. Mean bet size, game evaluations and illusion of control comprised the dependent variables. Results: Gambling speed had no overall effect on mean bet size, game evaluations or illusion of control, but in the 400 ms condition, at-risk gamblers (SOGS-R score > 0) employed higher bet sizes compared to no-risk (SOGS-R score = 0) gamblers. Conclusions: The findings corroborate and elaborate on previous studies and indicate that restrictions on gambling speed may serve as a harm-reducing effort for at-risk gamblers.

  3. Image matrix processor for fast multi-dimensional computations

    DOEpatents

    Roberson, George P.; Skeate, Michael F.

    1996-01-01

    An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.

  4. Extratropical response to Fast and Slow episodes of Madden-Julian Oscillation in observation and using intervention experiments with CFSv2

    NASA Astrophysics Data System (ADS)

    Yadav, P.; Straus, D. M.

    2017-12-01

    The Madden-Julian Oscillation (MJO) is a potential source of predictability in the extratropics in extended range weather forecasting. The nature of the MJO is sporadic and, therefore, the mid-latitude response may depend on the nature of the MJO event, in particular the phase speed. We discuss the results of our observational and modeling study of the mid-latitude circulation response to Fast and Slow MJO episodes using wintertime ERA-Interim reanalysis data and the CFSv2 coupled model of NOAA. The observational study shows that the mid-latitude response to different propagating speeds is not the same. The propagation speed is defined by the time the OLR takes to propagate from phase 3 to phase 6. The mid-latitude response is assessed in terms of composite maps and frequency of occurrence of robust circulation regimes. Fast episode composite anomalies of 500 hPa height show a developing Rossby wave in the mid-Pacific with downstream propagation through MJO phases 2-4. Development of the NAO+ teleconnection pattern is stronger in Slow than in Fast MJO episodes, and occurs with a greater time lag after MJO heating is in the Indian Ocean (phase 3). Previous results find an increase in occurrence of the NAO- regime following phase 6. We have found that much of this behavior is due to the slow episodes. Based on these observational results, intervention experiments using CFSv2 are designed to better understand the impact of heating/cooling and to estimate the mid-latitude response to Fast and Slow MJO episodes. The added heating experiments consist of 31-year reforecasts for December 1 initial conditions from CFS reanalysis (1980-2011) in which the identical MJO evolution of three-dimensional diabatic heating has been added, thus producing fast and slow MJO episodes with well-defined phase speeds.
    We will discuss the results of these experiments with a focus on understanding the role of phase speed and interference in setting up the response, and on understanding the mechanisms that distinguish fast and slow types of response. We will also discuss diagnostics using Predictable Component Analysis to distinguish the signal forced by the common diabatic heating from noise, and the weather regime response to fast and slow MJO episodes using cluster analysis.

  5. Cognitive Processing Speed, Working Memory, and the Intelligibility of Hearing Aid-Processed Speech in Persons with Hearing Impairment

    PubMed Central

    Yumba, Wycliffe Kabaywe

    2017-01-01

    Previous studies have demonstrated that successful listening with advanced signal processing in digital hearing aids is associated with individual cognitive capacity, particularly working memory capacity (WMC). This study aimed to examine the relationship between cognitive abilities (cognitive processing speed and WMC) and individual listeners’ responses to digital signal processing settings in adverse listening conditions. A total of 194 native Swedish speakers (83 women and 111 men), aged 33–80 years (mean = 60.75 years, SD = 8.89), with bilateral, symmetrical mild to moderate sensorineural hearing loss, who had completed a lexical decision speed test (measuring cognitive processing speed) and a semantic word-pair span test (SWPST, capturing WMC), participated in this study. The Hagerman test (capturing speech recognition in noise) was conducted using an experimental hearing aid with three digital signal processing settings: (1) linear amplification without noise reduction (NoP), (2) linear amplification with noise reduction (NR), and (3) non-linear amplification without NR (“fast-acting compression”). The results showed that cognitive processing speed was a better predictor of speech intelligibility in noise, regardless of the type of signal processing algorithm used. That is, there was a stronger association between cognitive processing speed and NR outcomes and fast-acting compression outcomes (in steady-state noise). We observed a weaker relationship between working memory and NR, but WMC did not relate to fast-acting compression. WMC was a relatively weaker predictor of speech intelligibility in noise. These findings might have been different if the participants had been provided with training and/or allowed to acclimatize to binary masking noise reduction or fast-acting compression. PMID:28861009

  6. The Oscillations of Coronal Loops Including the Shell

    NASA Astrophysics Data System (ADS)

    Mikhalyaev, B. B.; Solov'ev, A. A.

    2005-04-01

    We investigate the MHD waves in a double magnetic flux tube embedded in a uniform external magnetic field. The tube consists of a dense hot cylindrical cord surrounded by a co-axial shell. The plasma and the magnetic field are taken to be uniform inside the cord and also inside the shell. Two slow and two fast magnetosonic modes can exist in the thin double tube. The first slow mode is trapped by the cord, the other is trapped by the shell. The oscillations of the second mode have opposite phases inside the cord and shell. The speeds of the slow modes propagating along the tube are close to the tube speeds inside the cord and the shell. The behavior of the fast modes depends on the magnitude of the Alfvén speed inside the shell. If it is less than the Alfvén speed inside the cord and in the environment, then one fast mode is trapped by the shell and the other may be trapped under certain conditions. In the opposite case, when the Alfvén speed in the shell is greater than those inside the cord and in the environment, one fast mode is radiated by the tube and the other may also be radiated under certain conditions. The oscillation of the cord and the shell with opposite phases is the distinctive feature of the process. The proposed model explains the basic phenomena connected with coronal oscillations: i) the damping of oscillations, attributed in the double-tube model to radiative loss; ii) the presence of two different modes of perturbations propagating along the loop with close speeds; iii) the opposite phases of oscillations of modulated radio emission coming from nearby coronal sources having sharply different densities.

  7. A Fast Superpixel Segmentation Algorithm for PolSAR Images Based on Edge Refinement and Revised Wishart Distance

    PubMed Central

    Zhang, Yue; Zou, Huanxin; Luo, Tiancheng; Qin, Xianxiang; Zhou, Shilin; Ji, Kefeng

    2016-01-01

    The superpixel segmentation algorithm, as a preprocessing technique, should show good performance in fast segmentation speed, accurate boundary adherence and homogeneous regularity. A fast superpixel segmentation algorithm by iterative edge refinement (IER) works well on optical images. However, it may generate poor superpixels for Polarimetric synthetic aperture radar (PolSAR) images due to the influence of strong speckle noise and many small-sized or slim regions. To solve these problems, we utilized a fast revised Wishart distance instead of Euclidean distance in the local relabeling of unstable pixels, and initialized unstable pixels as all the pixels substituted for the initial grid edge pixels in the initialization step. Then, postprocessing with the dissimilarity measure is employed to remove the generated small isolated regions as well as to preserve strong point targets. Finally, the superiority of the proposed algorithm is validated with extensive experiments on four simulated and two real-world PolSAR images from Experimental Synthetic Aperture Radar (ESAR) and Airborne Synthetic Aperture Radar (AirSAR) data sets, which demonstrate that the proposed method shows better performance with respect to several commonly used evaluation measures, even with about nine times higher computational efficiency, as well as fine boundary adherence and strong point targets preservation, compared with three state-of-the-art methods. PMID:27754385
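
    The family of Wishart distances used for PolSAR relabeling compares a pixel's sample covariance C with a cluster covariance Σ. A hedged numpy sketch of the classical Wishart dissimilarity, d = ln|Σ| + tr(Σ⁻¹C) (the paper's "revised" variant modifies this further; the toy matrices below are hypothetical values, not PolSAR data):

```python
import numpy as np

def wishart_dissimilarity(C, sigma):
    # classical Wishart dissimilarity: ln|Sigma| + tr(Sigma^-1 C)
    sinv = np.linalg.inv(sigma)
    return float(np.log(np.linalg.det(sigma)).real + np.trace(sinv @ C).real)

# toy 3x3 Hermitian polarimetric covariance matrices (hypothetical values)
C = np.diag([2.0, 1.0, 0.5]).astype(complex)
S = np.diag([2.0, 1.0, 0.5]).astype(complex)
print(round(wishart_dissimilarity(C, S), 3))  # 3.0 here: det(S)=1 and C == S
```

    Unlike a Euclidean distance on pixel intensities, this statistical distance respects the Wishart distribution of multi-look PolSAR covariance data, which is why it handles speckle better in the relabeling step.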

  8. Measurement of instantaneous rotational speed using double-sine-varying-density fringe pattern

    NASA Astrophysics Data System (ADS)

    Zhong, Jianfeng; Zhong, Shuncong; Zhang, Qiukun; Peng, Zhike

    2018-03-01

    Fast and accurate rotational speed measurement is required both for condition monitoring and fault diagnosis of rotating machinery. A vision- and fringe-pattern-based rotational speed measurement system was proposed to measure the instantaneous rotational speed (IRS) with high accuracy and reliability. A special double-sine-varying-density fringe pattern (DSVD-FP) was designed, pasted completely around the shaft surface, and used as the primary angular sensor. The rotational angle could be correctly obtained from the left and right fringe period densities (FPDs) of the DSVD-FP image sequence recorded by a high-speed camera. The instantaneous angular speed (IAS) between two adjacent frames could be calculated from the real-time rotational angle curves; thus, the IRS could also be obtained accurately and efficiently. Both the measurement principle and the system design of the novel method are presented. The factors influencing the sensing characteristics and measurement accuracy of the system, including the spectral centrobaric correction method (SCCM) for the FPD calculation, the noise sources introduced by the image sensor, the exposure time, and the vibration of the shaft, were investigated through simulations and experiments. The sampling rate of the high-speed camera could be up to 5000 Hz; thus, the measurement becomes very fast and a change in rotational speed is sensed within 0.2 ms. The experimental results for different IRS measurements and characterization of the response property of a servo motor demonstrated the high accuracy and fast measurement of the proposed technique, making it attractive for condition monitoring and fault diagnosis of rotating machinery.
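
    The core computation, the angle increment between adjacent frames divided by the frame interval, can be sketched as follows (constant-speed synthetic angles; the 5000 Hz rate is taken from the abstract, while the 20 rev/s shaft speed is an arbitrary example, not a reported value):

```python
import numpy as np

fps = 5000.0                         # high-speed camera rate from the abstract (Hz)
rev_per_s = 20.0                     # hypothetical constant shaft speed
t = np.arange(0, 0.01, 1.0 / fps)    # 50 frame timestamps
theta = 2.0 * np.pi * rev_per_s * t  # decoded rotational angle per frame (rad)

ias = np.diff(theta) * fps           # instantaneous angular speed (rad/s)
rpm = ias / (2.0 * np.pi) * 60.0     # instantaneous rotational speed (rev/min)
print(round(float(rpm[0]), 1))       # 1200.0
```

    With a varying speed, np.diff(theta) tracks the change frame by frame, which is what gives the 0.2 ms response quoted above at 5000 Hz.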

  9. Interaction of obstructive sleep apnoea and cognitive impairment with slow gait speed in middle-aged and older adults.

    PubMed

    Lee, Sunghee; Shin, Chol

    2017-07-01

    To investigate whether slow gait speed is associated with cognitive impairment and, further, whether the association is modified by obstructive sleep apnoea (OSA). In total, 2,222 adults aged 49-80 years, free from dementia, stroke and head injury, were asked to walk a 4-m course at fast and usual gait speeds. The time taken to walk was measured. All participants completed the Korean Mini-Mental State Examination, which was validated in the Korean language, to assess cognitive function. Additionally, the participants completed a polysomnography test to ascertain OSA (defined as an apnoea-hypopnoea index ≥15). Multivariable linear regression models were utilised to test the associations. The time taken to walk 4 m showed significant inverse associations with cognitive scores (P = 0.001 at fast gait speed and P = 0.002 at usual gait speed). Furthermore, a significant interaction according to OSA on the association between time to walk and cognitive impairment was found (P for interaction = 0.003 at fast gait speed and P for interaction = 0.007 at usual gait speed). We found that the inverse association between the time taken to walk 4 m and the cognitive score became significantly stronger if an individual had OSA. © The Author 2017. Published by Oxford University Press on behalf of the British Geriatrics Society. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  10. Are the average gait speeds during the 10 Meter and 6 Minute Walk Tests redundant in Parkinson disease?

    PubMed

    Duncan, Ryan P; Combs-Miller, Stephanie A; McNeely, Marie E; Leddy, Abigail L; Cavanaugh, James T; Dibble, Leland E; Ellis, Terry D; Ford, Matthew P; Foreman, K Bo; Earhart, Gammon M

    2017-02-01

    We investigated the relationships between average gait speed collected with the 10 Meter Walk Test (Comfortable and Fast) and 6 Minute Walk Test (6MWT) in 346 people with Parkinson disease (PD) and how the relationships change with increasing disease severity. Pearson correlation and linear regression analyses determined relationships between 10 Meter Walk Test and 6MWT gait speed values for the entire sample and for sub-samples stratified by Hoehn & Yahr (H&Y) stage I (n=53), II (n=141), III (n=135) and IV (n=17). We hypothesized that redundant tests would be highly and significantly correlated (i.e. r>0.70, p<0.05) and would have a linear regression model slope of 1 and intercept of 0. For the entire sample, 6MWT gait speed was significantly (p<0.001) related to the Comfortable 10 Meter Walk Test (r=0.75) and Fast 10 Meter Walk Test (r=0.79) gait speed, with 56% and 62% of the variance in 6MWT gait speed explained, respectively. The regression model of 6MWT gait speed predicted by Comfortable 10 Meter Walk Test gait speed produced slope and intercept values near 1 and 0, respectively, especially for participants in H&Y stages II-IV. In contrast, slope and intercept values were further from 1 and 0, respectively, for the Fast 10 Meter Walk Test. Comfortable 10 Meter Walk Test and 6MWT gait speeds appeared to be redundant in people with moderate to severe PD, suggesting the Comfortable 10 Meter Walk Test can be used to estimate 6MWT distance in this population. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Are the Average Gait Speeds During the 10 Meter and 6 Minute Walk Tests Redundant in Parkinson Disease?

    PubMed Central

    Duncan, Ryan P.; Combs-Miller, Stephanie A.; McNeely, Marie E.; Leddy, Abigail L.; Cavanaugh, James T.; Dibble, Leland E.; Ellis, Terry D.; Ford, Matthew P.; Foreman, K. Bo; Earhart, Gammon M.

    2016-01-01

    We investigated the relationships between average gait speed collected with the 10 Meter Walk Test (Comfortable and Fast) and 6 Minute Walk Test (6MWT) in 346 people with Parkinson disease (PD) and how the relationships change with increasing disease severity. Pearson correlation and linear regression analyses determined relationships between 10 Meter Walk Test and 6MWT gait speed values for the entire sample and for sub-samples stratified by Hoehn & Yahr (H&Y) stage I (n=53), II (n=141), III (n=135) and IV (n=17). We hypothesized that redundant tests would be highly and significantly correlated (i.e. r > 0.70, p < 0.05) and would have a linear regression model slope of 1 and intercept of 0. For the entire sample, 6MWT gait speed was significantly (p<0.001) related to the Comfortable 10 Meter Walk Test (r=0.75) and Fast 10 Meter Walk Test (r=0.79) gait speed, with 56% and 62% of the variance in 6MWT gait speed explained, respectively. The regression model of 6MWT gait speed predicted by Comfortable 10 Meter Walk gait speed produced slope and intercept values near 1 and 0, respectively, especially for participants in H&Y stages II–IV. In contrast, slope and intercept values were further from 1 and 0, respectively, for the Fast 10 Meter Walk Test. Comfortable 10 Meter Walk Test and 6MWT gait speeds appeared to be redundant in people with moderate to severe PD, suggesting the Comfortable 10 Meter Walk Test can be used to estimate 6MWT distance in this population. PMID:27915221

  12. Real-time image processing for non-contact monitoring of dynamic displacements using smartphone technologies

    NASA Astrophysics Data System (ADS)

    Min, Jae-Hong; Gelo, Nikolas J.; Jo, Hongki

    2016-04-01

    The smartphone application newly developed in this study, named RINO, allows measuring absolute dynamic displacements and processing them in real time using state-of-the-art smartphone technologies, such as a high-performance graphics processing unit (GPU), in addition to the already powerful CPU and memory, an embedded high-speed/high-resolution camera, and open-source computer vision libraries. A carefully designed color-patterned target and a user-adjustable crop filter enable accurate and fast image processing, allowing up to 240 fps for complete displacement calculation and real-time display. The performance of the developed smartphone application is experimentally validated, showing accuracy comparable with that of a conventional laser displacement sensor.

  13. Shock-free configurations in two-and three-dimensional transonic flow

    NASA Technical Reports Server (NTRS)

    Seebass, A. R.

    1981-01-01

    Efforts to replace Sobieczky's complicated analog computations of solutions to the hodograph equations with a fast elliptic solver, in order to generate shock-free airfoil designs more effectively, are described. The indirect design of airfoil and wing shapes that are free from shock waves even though the local flow velocity exceeds the speed of sound is described. The problem of finding an airfoil in two-dimensional, irrotational flow that has a prescribed pressure distribution is addressed. Sobieczky's suggestion to use a fictitious gas for finding shock-free airfoils directly in the physical plane was the basis for a more efficient procedure for achieving the same end.

  14. FliMax, a novel stimulus device for panoramic and highspeed presentation of behaviourally generated optic flow.

    PubMed

    Lindemann, J P; Kern, R; Michaelis, C; Meyer, P; van Hateren, J H; Egelhaaf, M

    2003-03-01

    A high-speed panoramic visual stimulation device is introduced which is suitable to analyse visual interneurons during stimulation with rapid image displacements as experienced by fast moving animals. The responses of an identified motion sensitive neuron in the visual system of the blowfly to behaviourally generated image sequences are very complex and hard to predict from the established input circuitry of the neuron. This finding suggests that the computational significance of visual interneurons can only be assessed if they are characterised not only by conventional stimuli as are often used for systems analysis, but also by behaviourally relevant input.

  15. Superfast 3D shape measurement of a flapping flight process with motion based segmentation

    NASA Astrophysics Data System (ADS)

    Li, Beiwen

    2018-02-01

    Flapping flight has drawn interest from different fields including biology, aerodynamics and robotics. For such research, digital fringe projection using defocused binary image projection has superfast (e.g. several kHz) measurement capabilities with a digital micromirror device, yet its measurement quality is still subject to the motion of flapping flight. This research proposes a novel computational framework for dynamic 3D shape measurement of a flapping flight process. The fast and slow motion parts are separately reconstructed with Fourier transform and phase shifting. Experiments demonstrate its success by measuring a flapping wing robot (image acquisition rate: 5000 Hz; flapping speed: 25 cycles/second).

  16. Multiprocessor and memory architecture of the neurocomputer SYNAPSE-1.

    PubMed

    Ramacher, U; Raab, W; Anlauf, J; Hachmann, U; Beichter, J; Brüls, N; Wesseling, M; Sicheneder, E; Männer, R; Glass, J

    1993-12-01

    A general purpose neurocomputer, SYNAPSE-1, which exhibits a multiprocessor and memory architecture, is presented. It offers wide flexibility with respect to neural algorithms and a speed-up factor of several orders of magnitude, including learning. The computational power is provided by a 2-dimensional systolic array of neural signal processors (NSPs). Since the weights are stored outside these NSPs, memory size and processing power can be adapted individually to the application needs. A neural algorithms programming language, embedded in C++, has been defined for users to program the neurocomputer. In a benchmark test, the prototype of SYNAPSE-1 was 8000 times as fast as a standard workstation.

  17. The MIDAS processor. [Multivariate Interactive Digital Analysis System for multispectral scanner data

    NASA Technical Reports Server (NTRS)

    Kriegler, F. J.; Gordon, M. F.; Mclaughlin, R. H.; Marshall, R. E.

    1975-01-01

    The MIDAS (Multivariate Interactive Digital Analysis System) processor is a high-speed processor designed to process multispectral scanner data (from Landsat, EOS, aircraft, etc.) quickly and cost-effectively to meet the requirements of users of remote sensor data, especially from very large areas. MIDAS consists of a fast multipipeline preprocessor and classifier, an interactive color display and color printer, and a medium scale computer system for analysis and control. The system is designed to process data having as many as 16 spectral bands per picture element at rates of 200,000 picture elements per second into as many as 17 classes using a maximum likelihood decision rule.

  18. Spectral analysis of GEOS-3 altimeter data and frequency domain collocation. [to estimate gravity anomalies

    NASA Technical Reports Server (NTRS)

    Eren, K.

    1980-01-01

    The mathematical background in spectral analysis as applied to geodetic applications is summarized. The resolution (cut-off frequency) of the GEOS 3 altimeter data is examined by determining the shortest wavelength (corresponding to the cut-off frequency) recoverable. The data from some 18 profiles are used. The total power (variance) in the sea surface topography with respect to the reference ellipsoid as well as with respect to the GEM-9 surface is computed. A fast inversion algorithm for simple and block Toeplitz matrices and its application to least squares collocation is explained. This algorithm yields a considerable gain in computer time and storage in comparison with conventional least squares collocation. Frequency domain least squares collocation techniques are also introduced and applied to estimating gravity anomalies from GEOS 3 altimeter data. These techniques substantially reduce the computer time and storage requirements associated with conventional least squares collocation. Numerical examples demonstrate the efficiency and speed of these techniques.
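
    The speed gain of such fast inversion algorithms comes from exploiting Toeplitz structure: a Levinson-type recursion solves the normal equations in O(n²) operations and O(n) storage instead of the O(n³)/O(n²) of a dense solve. A small sketch using SciPy's built-in Toeplitz solver (the matrix values are arbitrary examples, not from the GEOS 3 data):

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# symmetric, diagonally dominant Toeplitz system, as arises in collocation
c = np.array([4.0, 1.0, 0.5, 0.25])      # first column (= first row here)
b = np.array([1.0, 2.0, 3.0, 4.0])

x_fast = solve_toeplitz(c, b)            # Levinson recursion, O(n^2)
x_ref = np.linalg.solve(toeplitz(c), b)  # dense solve, O(n^3)
print(np.allclose(x_fast, x_ref))        # True
```

    Only the first column (and row) of the matrix is ever stored, which is the storage saving the abstract refers to.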

  19. Comparison of Computational Results with a Low-g, Nitrogen Slosh and Boiling Experiment

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E.; Moder, Jeffrey P.

    2015-01-01

    This paper compares a fluid/thermal simulation, in Fluent, with a low-g, nitrogen slosh and boiling experiment. In 2010, the French Space Agency, CNES, performed cryogenic nitrogen experiments in a low-g aircraft campaign. From one parabolic flight, a low-g interval was simulated that focuses on low-g motion of nitrogen liquid and vapor with significant condensation, evaporation, and boiling. The computational results are compared with high-speed video, pressure data, heat transfer, and temperature data from sensors on the axis of the cylindrically shaped tank. These experimental and computational results compare favorably. The initial temperature stratification is in good agreement, and the two-phase fluid motion is qualitatively captured. Temperature data is matched except that the temperature sensors are unable to capture fast temperature transients when the sensors move from wet to dry (liquid to vapor) operation. Pressure evolution is approximately captured, but condensation and evaporation rate modeling and prediction need further theoretical analysis.

  20. Multi-detector row computed tomography angiography of peripheral arterial disease

    PubMed Central

    Dijkshoorn, Marcel L.; Pattynama, Peter M. T.; Myriam Hunink, M. G.

    2007-01-01

    With the introduction of multi-detector row computed tomography (MDCT), scan speed and image quality have improved considerably. Since longitudinal coverage is no longer a limitation, multi-detector row computed tomography angiography (MDCTA) is increasingly used to depict the peripheral arterial runoff. Hence, it is important to know the advantages and limitations of this new non-invasive alternative to the reference test, digital subtraction angiography. Optimization of the acquisition parameters and the contrast delivery is important to achieve reliable enhancement of the entire arterial runoff in patients with peripheral arterial disease (PAD) using fast CT scanners. The purpose of this review is to discuss the different scanning and injection protocols using 4-, 16-, and 64-detector row CT scanners, to propose effective methods for evaluating and presenting large data sets, to discuss its clinical value and major limitations, and to review the literature on the validity, reliability, and cost-effectiveness of multi-detector row CT in the evaluation of PAD. PMID:17882427

  1. The Julia programming language: the future of scientific computing

    NASA Astrophysics Data System (ADS)

    Gibson, John

    2017-11-01

    Julia is an innovative new open-source programming language for high-level, high-performance numerical computing. Julia combines the general-purpose breadth and extensibility of Python, the ease-of-use and numeric focus of Matlab, the speed of C and Fortran, and the metaprogramming power of Lisp. Julia uses type inference and just-in-time compilation to compile high-level user code to machine code on the fly. A rich set of numeric types and extensive numerical libraries are built-in. As a result, Julia is competitive with Matlab for interactive graphical exploration and with C and Fortran for high-performance computing. This talk interactively demonstrates Julia's numerical features and benchmarks Julia against C, C++, Fortran, Matlab, and Python on a spectral time-stepping algorithm for a 1d nonlinear partial differential equation. The Julia code is nearly as compact as Matlab and nearly as fast as Fortran. This material is based upon work supported by the National Science Foundation under Grant No. 1554149.

  2. Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.

    PubMed

    Huynh, Linh; Tagkopoulos, Ilias

    2015-08-21

    In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
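
    A minimal sketch of the two-step idea, with toy objective functions standing in for the paper's circuit models: a cheap surrogate scores all candidates to prune the space, and the expensive model re-scores only the surviving shortlist.

```python
def two_step_search(candidates, cheap_score, costly_score, keep=0.1):
    """Rank all candidates with a cheap model, then evaluate only the top
    fraction with the costly model. Illustrative, not the paper's code."""
    ranked = sorted(candidates, key=cheap_score)
    shortlist = ranked[: max(1, int(len(ranked) * keep))]
    return min(shortlist, key=costly_score)

# Toy usage: minimize a quartic; a cheap quadratic surrogate prunes well.
cands = [x / 10 for x in range(-50, 51)]
best = two_step_search(cands, lambda x: (x - 2) ** 2, lambda x: (x - 2) ** 4)
```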

  3. Simplifying silicon burning: Application of quasi-equilibrium to (alpha) network nucleosynthesis

    NASA Technical Reports Server (NTRS)

    Hix, W. R.; Thielemann, F.-K.; Khokhlov, A. M.; Wheeler, J. C.

    1997-01-01

    While the need for accurate calculation of nucleosynthesis and the resulting rate of thermonuclear energy release within hydrodynamic models of stars and supernovae is clear, the computational expense of these nucleosynthesis calculations often forces a compromise in accuracy to reduce the computational cost. To redress this trade-off of accuracy for speed, the authors present an improved nuclear network which takes advantage of quasi-equilibrium in order to reduce the number of independent nuclei, and hence the computational cost of nucleosynthesis, without significant reduction in accuracy. In this paper they discuss the first application of this method, the further reduction in size of the minimal alpha network. The resultant QSE-reduced alpha network is twice as fast as the conventional alpha network it replaces and requires the tracking of half as many abundance variables, while accurately estimating the rate of energy generation. Such a reduction in cost is particularly necessary for future generations of multi-dimensional supernova models.

  4. Performance Evaluation of Communication Software Systems for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod

    1996-01-01

    In recent years there has been an increasing interest in object-oriented distributed computing since it is better equipped to deal with complex systems while providing extensibility, maintainability, and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI, and ATM. The performance results for three communication software systems are presented, analyzed and compared. These systems are: BSD socket programming interface, IONA's Orbix, an implementation of the CORBA specification, and the PVM message passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.

  5. Real-Time Compressive Sensing MRI Reconstruction Using GPU Computing and Split Bregman Methods

    PubMed Central

    Smith, David S.; Gore, John C.; Yankeelov, Thomas E.; Welch, E. Brian

    2012-01-01

    Compressive sensing (CS) has been shown to enable dramatic acceleration of MRI acquisition in some applications. Being an iterative reconstruction technique, CS MRI reconstructions can be more time-consuming than traditional inverse Fourier reconstruction. We have accelerated our CS MRI reconstruction by factors of up to 27 by using a split Bregman solver combined with a graphics processing unit (GPU) computing platform. The increases in speed we find are similar to those we measure for matrix multiplication on this platform, suggesting that the split Bregman methods parallelize efficiently. We demonstrate that the combination of the rapid convergence of the split Bregman algorithm and the massively parallel strategy of GPU computing can enable real-time CS reconstruction of even acquisition data matrices of dimension 4096² or more, depending on available GPU VRAM. Reconstruction of two-dimensional data matrices of dimension 1024² and smaller took ~0.3 s or less, showing that this platform also provides very fast iterative reconstruction for small-to-moderate size images. PMID:22481908

  6. Real-Time Compressive Sensing MRI Reconstruction Using GPU Computing and Split Bregman Methods.

    PubMed

    Smith, David S; Gore, John C; Yankeelov, Thomas E; Welch, E Brian

    2012-01-01

    Compressive sensing (CS) has been shown to enable dramatic acceleration of MRI acquisition in some applications. Being an iterative reconstruction technique, CS MRI reconstructions can be more time-consuming than traditional inverse Fourier reconstruction. We have accelerated our CS MRI reconstruction by factors of up to 27 by using a split Bregman solver combined with a graphics processing unit (GPU) computing platform. The increases in speed we find are similar to those we measure for matrix multiplication on this platform, suggesting that the split Bregman methods parallelize efficiently. We demonstrate that the combination of the rapid convergence of the split Bregman algorithm and the massively parallel strategy of GPU computing can enable real-time CS reconstruction of even acquisition data matrices of dimension 4096² or more, depending on available GPU VRAM. Reconstruction of two-dimensional data matrices of dimension 1024² and smaller took ~0.3 s or less, showing that this platform also provides very fast iterative reconstruction for small-to-moderate size images.
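
    A minimal sketch of the soft-threshold ("shrinkage") step at the heart of split Bregman solvers: because it is elementwise, it parallelizes trivially, which is one reason the method maps so well onto a GPU, as the abstract reports.

```python
import numpy as np

def shrink(x, lam):
    """Soft-thresholding (shrinkage) operator used in each split Bregman
    iteration: pulls every element toward zero by lam, elementwise."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```

    On a GPU, the same elementwise kernel is applied to every voxel independently, so the step scales with the number of parallel cores.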

  7. Topography of Slow Sigma Power during Sleep is Associated with Processing Speed in Preschool Children

    PubMed Central

    Doucette, Margaret R.; Kurth, Salome; Chevalier, Nicolas; Munakata, Yuko; LeBourgeois, Monique K.

    2015-01-01

    Cognitive development is influenced by maturational changes in processing speed, a construct reflecting the rapidity of executing cognitive operations. Although cognitive ability and processing speed are linked to spindles and sigma power in the sleep electroencephalogram (EEG), little is known about such associations in early childhood, a time of major neuronal refinement. We calculated EEG power for slow (10–13 Hz) and fast (13.25–17 Hz) sigma power from all-night high-density electroencephalography (EEG) in a cross-sectional sample of healthy preschool children (n = 10, 4.3 ± 1.0 years). Processing speed was assessed as simple reaction time. On average, reaction time was 1409 ± 251 ms; slow sigma power was 4.0 ± 1.5 μV²; and fast sigma power was 0.9 ± 0.2 μV². Both slow and fast sigma power predominated over central areas. Only slow sigma power was correlated with processing speed in a large parietal electrode cluster (p < 0.05, r ranging from −0.6 to −0.8), such that greater power predicted faster reaction time. Our findings indicate regional correlates between sigma power and processing speed that are specific to early childhood and provide novel insights into the neurobiological features of the EEG that may underlie developing cognitive abilities. PMID:26556377
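
    As a hedged sketch of the band-power measure (synthetic signal and illustrative parameters, not the study's high-density EEG pipeline), slow-sigma power can be computed from a Welch periodogram:

```python
import numpy as np
from scipy.signal import welch

# Synthetic "EEG": a 12 Hz oscillation (inside the 10-13 Hz slow-sigma band)
# plus white noise. Sampling rate and durations are illustrative.
fs = 250.0
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)

# Welch power spectral density, then integrate over the slow-sigma band.
f, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
band = (f >= 10) & (f <= 13)
slow_sigma_power = np.trapz(psd[band], f[band])   # in µV² if the input is in µV
```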

  8. Ultra High-Speed Radio Frequency Switch Based on Photonics.

    PubMed

    Ge, Jia; Fok, Mable P

    2015-11-26

    Microwave switches, or Radio Frequency (RF) switches, have been intensively used in microwave systems for signal routing. Compared with the fast development of microwave and wireless systems, RF switches have been underdeveloped, particularly in terms of switching speed and operating bandwidth. In this paper, we propose a photonics-based RF switch that is capable of switching within tens of picoseconds, which is hundreds of times faster than any existing RF switch technology. The high-speed switching property is achieved with the use of a rapidly tunable microwave photonic filter with tens of gigahertz frequency tuning speed, where the tuning mechanism is based on the ultra-fast electro-optic Pockels effect. The RF switch has a wide operation bandwidth of 12 GHz and can go up to 40 GHz, depending on the bandwidth of the modulator used in the scheme. The proposed RF switch can work either as an ON/OFF switch or as a two-channel switch; switching speeds of tens of picoseconds are experimentally observed for both types of switches.

  9. Fast traffic sign recognition with a rotation invariant binary pattern based feature.

    PubMed

    Yin, Shouyi; Ouyang, Peng; Liu, Leibo; Guo, Yike; Wei, Shaojun

    2015-01-19

    Robust and fast traffic sign recognition is very important but difficult for safe driving assistance systems. This study addresses fast and robust traffic sign recognition to enhance driving safety. The proposed method includes three stages. First, a typical Hough transformation is adopted to implement coarse-grained location of the candidate regions of traffic signs. Second, a RIBP (Rotation Invariant Binary Pattern) based feature in the affine and Gaussian space is proposed to reduce the time of traffic sign detection and achieve robust traffic sign detection in terms of scale, rotation, and illumination. Third, the techniques of ANN (Artificial Neural Network) based feature dimension reduction and classification are designed to reduce the traffic sign recognition time. Compared with the current work, the experimental results in the public datasets show that this work achieves robustness in traffic sign recognition with comparable recognition accuracy and faster processing speed, including training speed and recognition speed.

  10. Fast Traffic Sign Recognition with a Rotation Invariant Binary Pattern Based Feature

    PubMed Central

    Yin, Shouyi; Ouyang, Peng; Liu, Leibo; Guo, Yike; Wei, Shaojun

    2015-01-01

    Robust and fast traffic sign recognition is very important but difficult for safe driving assistance systems. This study addresses fast and robust traffic sign recognition to enhance driving safety. The proposed method includes three stages. First, a typical Hough transformation is adopted to implement coarse-grained location of the candidate regions of traffic signs. Second, a RIBP (Rotation Invariant Binary Pattern) based feature in the affine and Gaussian space is proposed to reduce the time of traffic sign detection and achieve robust traffic sign detection in terms of scale, rotation, and illumination. Third, the techniques of ANN (Artificial Neural Network) based feature dimension reduction and classification are designed to reduce the traffic sign recognition time. Compared with the current work, the experimental results in the public datasets show that this work achieves robustness in traffic sign recognition with comparable recognition accuracy and faster processing speed, including training speed and recognition speed. PMID:25608217
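
    The rotation-invariance idea behind LBP-style binary patterns (on which RIBP builds) can be sketched as follows: an 8-bit neighborhood code is rotated to its minimal value, so all rotations of a pattern map to one canonical code. This is a generic illustration, not the paper's exact feature.

```python
def ri_code(code, bits=8):
    """Map an n-bit binary pattern to its rotation-invariant canonical form:
    the minimum value over all circular bit rotations."""
    mask = (1 << bits) - 1
    return min(((code >> i) | (code << (bits - i))) & mask for i in range(bits))
```

    For example, the patterns 00000011 and 11000000 are rotations of each other and share the canonical code 3.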

  11. Competitive code-based fast palmprint identification using a set of cover trees

    NASA Astrophysics Data System (ADS)

    Yue, Feng; Zuo, Wangmeng; Zhang, David; Wang, Kuanquan

    2009-06-01

    A palmprint identification system recognizes a query palmprint image by searching for its nearest neighbor among all the templates in a database. In a large-scale identification system, it is often necessary to speed up the nearest-neighbor search. We use competitive code, which has very fast feature extraction and matching speed, for palmprint identification. To speed up the identification process, we extend the cover tree method and propose using a set of cover trees to facilitate fast and accurate nearest-neighbor search. We can use the cover tree method because, as we show, the angular distance used in competitive code can be decomposed into a set of metrics. Using the Hong Kong PolyU palmprint database (version 2) and a large-scale palmprint database, our experimental results show that the proposed method finds nearest neighbors faster than brute-force search.
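
    A hedged sketch of the kind of decomposition the abstract describes: if an angular distance is expressed as a sum of bitwise Hamming distances over a code's bit planes, each term is a metric, so the total satisfies the triangle inequality and metric indexes such as cover trees apply. The encoding below is illustrative, not the paper's competitive code.

```python
def angular_distance(a_planes, b_planes):
    """Sum of bitwise Hamming distances over corresponding bit planes
    (each plane an integer bitmask). Each term is a metric, so the sum is too."""
    return sum(bin(a ^ b).count('1') for a, b in zip(a_planes, b_planes))
```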

  12. Fast incorporation of optical flow into active polygons.

    PubMed

    Unal, Gozde; Krim, Hamid; Yezzi, Anthony

    2005-06-01

    In this paper, we first reconsider, in a different light, the addition of a prediction step to active contour-based visual tracking using optical flow, and clarify the local computation of the latter along the boundaries of continuous active contours with appropriate regularizers. We subsequently detail our contribution of computing an optical flow-based prediction step directly from the parameters of an active polygon, and of exploiting it in object tracking. This is in contrast to an explicitly separate computation of the optical flow and its ad hoc application. It also provides an inherent regularization effect resulting from integrating measurements along polygon edges. As a result, we completely avoid the need to add ad hoc regularizing terms to the optical flow computations, and the inevitably arbitrary associated weighting parameters. This direct integration of optical flow into the active polygon framework distinguishes this technique from most previous contour-based approaches, where regularization terms are theoretically, as well as practically, essential. The greater robustness and speed due to the reduced number of parameters of this technique are additional appealing features.

  13. Flow visualization of CFD using graphics workstations

    NASA Technical Reports Server (NTRS)

    Lasinski, Thomas; Buning, Pieter; Choi, Diana; Rogers, Stuart; Bancroft, Gordon

    1987-01-01

    High performance graphics workstations are used to visualize the fluid flow dynamics obtained from supercomputer solutions of computational fluid dynamic programs. The visualizations can be done independently on the workstation or while the workstation is connected to the supercomputer in a distributed computing mode. In the distributed mode, the supercomputer interactively performs the computationally intensive graphics rendering tasks while the workstation performs the viewing tasks. A major advantage of the workstations is that the viewers can interactively change their viewing position while watching the dynamics of the flow fields. An overview of the computer hardware and software required to create these displays is presented. For complex scenes the workstation cannot create the displays fast enough for good motion analysis. For these cases, the animation sequences are recorded on video tape or 16 mm film a frame at a time and played back at the desired speed. The additional software and hardware required to create these video tapes or 16 mm movies are also described. Photographs illustrating current visualization techniques are discussed. Examples of the use of the workstations for flow visualization through animation are available on video tape.

  14. Applications of Computer Vision for Assessing Quality of Agri-food Products: A Review of Recent Research Advances.

    PubMed

    Ma, Ji; Sun, Da-Wen; Qu, Jia-Huan; Liu, Dan; Pu, Hongbin; Gao, Wen-Hong; Zeng, Xin-An

    2016-01-01

    With consumer concerns over food quality and safety increasing, the food industry has begun to pay much more attention to the development of rapid and reliable food-evaluation systems over the years. As a result, there is a great need for manufacturers and retailers to operate effective real-time assessments of food quality and safety during food production and processing. Computer vision, a nondestructive assessment approach, can estimate the characteristics of food products with the advantages of speed, ease of use, and minimal sample preparation. Specifically, computer vision systems are feasible for classifying food products into specific grades, detecting defects, and estimating properties such as color, shape, size, surface defects, and contamination. Therefore, in order to track the latest research developments of this technology in the agri-food industry, this review aims to present the fundamentals and instrumentation of computer vision systems with details of applications in quality assessment of agri-food products from 2007 to 2013 and also discuss its future trends in combination with spectroscopy.

  15. Interplanetary Propagation Behavior of the Fast Coronal Mass Ejection on 23 July 2012

    NASA Astrophysics Data System (ADS)

    Temmer, M.; Nitta, N. V.

    2015-03-01

    The fast coronal mass ejection (CME) on 23 July 2012 attracted attention because of its extremely short transit time from the Sun to 1 AU of less than 21 h. In situ data from STEREO-A revealed the arrival of a fast forward shock with a speed of more than 2200 km s-1 followed by a magnetic structure moving at almost 1900 km s-1. We investigate the propagation behavior of the CME shock and magnetic structure with the aim of reproducing the short transit time and high impact speed derived from in situ data. We carefully measured the 3D kinematics of the CME using the graduated cylindrical shell model and obtained a maximum speed of 2580±280 km s-1 for the CME shock and 2270±420 km s-1 for its magnetic structure. Based on the 3D kinematics, the drag-based model (DBM) reproduces the observational data reasonably well. To successfully simulate the CME shock, the ambient flow speed needs to have an average value close to the slow solar wind speed (450 km s-1), and the initial shock speed at a distance of 30 R⊙ should not exceed ≈2300 km s-1, otherwise it would arrive much too early at STEREO-A. The model results indicate that an extremely small aerodynamic drag force is exerted on the shock, smaller by one order of magnitude than average. As a consequence, the CME hardly decelerates in interplanetary space and maintains its high initial speed. The low aerodynamic drag can only be reproduced when the density of the ambient solar wind flow, in which the fast CME propagates, is decreased to ρsw = 1-2 cm-3 at a distance of 1 AU. This result is consistent with the preconditioning of interplanetary space by a previous CME.
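
    A hedged numerical sketch of the drag-based model (DBM) referenced above: the CME speed v relaxes toward the ambient wind speed w under the drag law dv/dt = -γ|v - w|(v - w). The parameter values below are illustrative, not the paper's fitted values.

```python
def dbm_transit(v0_kms, w_kms=450.0, gamma_per_km=0.2e-7,
                r0_km=30 * 6.96e5, r1_km=1.496e8, dt_s=60.0):
    """Euler-integrate the drag-based model from 30 solar radii to 1 AU.
    Returns (transit time in hours, arrival speed in km/s). Illustrative."""
    r, v, t = r0_km, v0_kms, 0.0
    while r < r1_km:
        v += -gamma_per_km * abs(v - w_kms) * (v - w_kms) * dt_s  # drag decel.
        r += v * dt_s
        t += dt_s
    return t / 3600.0, v
```

    With a larger γ (stronger drag) the arrival speed falls closer to the ambient wind speed and the transit time lengthens, which is why the observed fast arrival requires an unusually small drag parameter.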

  16. Increasing use of high-speed digital imagery as a measurement tool on test and evaluation ranges

    NASA Astrophysics Data System (ADS)

    Haddleton, Graham P.

    2001-04-01

    In military research and development or testing there are various fast and dangerous events that need to be recorded and analysed. High-speed cameras allow the capture of movement too fast to be recognised by the human eye, and provide data that is essential for the analysis and evaluation of such events. High-speed photography is often the only type of instrumentation that can be used to record the parameters demanded by our customers. I will show examples where this applied cinematography is used not only to provide a visual record of events, but also as an essential measurement tool.

  17. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    DOE PAGES

    Yim, Won Cheol; Cushman, John C.

    2017-07-22

    Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs perform searches very rapidly, they have the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.
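
    The query-distribution idea can be sketched as follows (illustrative code, not DCBLAST's actual implementation): split a multi-FASTA query file into per-node chunks, each of which would then run its BLAST search independently.

```python
def split_fasta(text, n_chunks):
    """Split multi-FASTA text into n_chunks roughly equal groups of records,
    distributed round-robin for simple load balancing. Illustrative sketch."""
    records = ['>' + r for r in text.split('>') if r.strip()]
    chunks = [[] for _ in range(n_chunks)]
    for i, rec in enumerate(records):
        chunks[i % n_chunks].append(rec)
    return [''.join(c) for c in chunks]
```

    Each chunk would then be submitted as a separate cluster job (e.g. one `blastn` invocation per node) and the per-chunk outputs concatenated afterward.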

  18. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, Won Cheol; Cushman, John C.

    Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs perform searches very rapidly, they have the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.

  19. Blast2GO goes grid: developing a grid-enabled prototype for functional genomics analysis.

    PubMed

    Aparicio, G; Götz, S; Conesa, A; Segrelles, D; Blanquer, I; García, J M; Hernandez, V; Robles, M; Talon, M

    2006-01-01

    The vast amount and complexity of data generated in genomic research imply that new dedicated and powerful computational tools need to be developed to meet their analysis requirements. Blast2GO (B2G) is a bioinformatics tool for Gene Ontology-based DNA or protein sequence annotation and function-based data mining. The application has been developed with the aim of offering an easy-to-use tool for functional genomics research. Typical B2G users are middle-size genomics labs carrying out sequencing, EST and microarray projects, handling datasets of up to several thousand sequences. In the current version of B2G, the power and analytical potential of both annotation and function data-mining are restricted by the computational power behind each particular installation. In order to offer enhanced computational capacity within this bioinformatics application, a Grid component is being developed. A prototype has been conceived for the particular problem of speeding up Blast searches to obtain fast results for large datasets. Many efforts in the literature concern the speeding up of Blast searches, but few of them deal with the use of large heterogeneous production Grid infrastructures. These are the infrastructures that could reach the largest number of resources and the best load balancing for data access. The Grid service under development will analyse requests based on the number of sequences, splitting them according to the available resources. Lower-level computation will be performed through MPIBLAST. The software architecture is based on the WSRF standard.

  20. Solubility prediction, solvate and cocrystal screening as tools for rational crystal engineering.

    PubMed

    Loschen, Christoph; Klamt, Andreas

    2015-06-01

    The fact that novel drug candidates are becoming increasingly insoluble is a major problem of current drug development. Computational tools may address this issue by screening for suitable solvents or by identifying potential novel cocrystal formers that increase bioavailability. In contrast to other more specialized methods, the fluid phase thermodynamics approach COSMO-RS (conductor-like screening model for real solvents) allows for a comprehensive treatment of drug solubility, solvate and cocrystal formation and many other thermodynamics properties in liquids. This article gives an overview of recent COSMO-RS developments that are of interest for drug development and contains several new application examples for solubility prediction and solvate/cocrystal screening. For all property predictions COSMO-RS has been used. The basic concept of COSMO-RS consists of using the screening charge density as computed from first principles calculations in combination with fast statistical thermodynamics to compute the chemical potential of a compound in solution. The fast and accurate assessment of drug solubility and the identification of suitable solvents, solvate or cocrystal formers is nowadays possible and may be used to complement modern drug development. Efficiency is increased by avoiding costly quantum-chemical computations using a database of previously computed molecular fragments. COSMO-RS theory can be applied to a range of physico-chemical properties, which are of interest in rational crystal engineering. Most notably, in combination with experimental reference data, accurate quantitative solubility predictions in any solvent or solvent mixture are possible. Additionally, COSMO-RS can be extended to the prediction of cocrystal formation, which results in considerable predictive accuracy concerning coformer screening. In a recent variant costly quantum chemical calculations are avoided resulting in a significant speed-up and ease-of-use. 
© 2015 Royal Pharmaceutical Society.

  1. Fast polyenergetic forward projection for image formation using OpenCL on a heterogeneous parallel computing platform.

    PubMed

    Zhou, Lili; Clifford Chao, K S; Chang, Jenghwa

    2012-11-01

    Simulated projection images of digital phantoms constructed from CT scans have been widely used for clinical and research applications but their quality and computation speed are not optimal for real-time comparison with the radiography acquired with an x-ray source of different energies. In this paper, the authors performed polyenergetic forward projections using open computing language (OpenCL) in a parallel computing ecosystem consisting of CPU and general purpose graphics processing unit (GPGPU) for fast and realistic image formation. The proposed polyenergetic forward projection uses a lookup table containing the NIST published mass attenuation coefficients (μ∕ρ) for different tissue types and photon energies ranging from 1 keV to 20 MeV. The CT images of interested sites are first segmented into different tissue types based on the CT numbers and converted to a three-dimensional attenuation phantom by linking each voxel to the corresponding tissue type in the lookup table. The x-ray source can be a radioisotope or an x-ray generator with a known spectrum described as weight w(n) for energy bin E(n). The Siddon method is used to compute the x-ray transmission line integral for E(n) and the x-ray fluence is the weighted sum of the exponential of line integral for all energy bins with added Poisson noise. To validate this method, a digital head and neck phantom constructed from the CT scan of a Rando head phantom was segmented into three (air, gray∕white matter, and bone) regions for calculating the polyenergetic projection images for the Mohan 4 MV energy spectrum. To accelerate the calculation, the authors partitioned the workloads using the task parallelism and data parallelism and scheduled them in a parallel computing ecosystem consisting of CPU and GPGPU (NVIDIA Tesla C2050) using OpenCL only. The authors explored the task overlapping strategy and the sequential method for generating the first and subsequent DRRs. 
A dispatcher was designed to drive the high-degree parallelism of the task overlapping strategy. Numerical experiments were conducted to compare the performance of the OpenCL∕GPGPU-based implementation with the CPU-based implementation. The projection images were similar to typical portal images obtained with a 4 or 6 MV x-ray source. For a phantom size of 512 × 512 × 223, the time for calculating the line integrals for a 512 × 512 image panel was 16.2 ms on the GPGPU for one energy bin, compared with 8.83 s on the CPU. The total computation time for generating one 512 × 512 polyenergetic projection image was 0.3 s (141 s on the CPU). The relative difference between the projection images obtained with the CPU-based and OpenCL∕GPGPU-based implementations was on the order of 10^-6, and the images were virtually indistinguishable. The task overlapping strategy was 5.84 and 1.16 times faster than the sequential method for the first and subsequent DRRs, respectively. The authors have successfully built digital phantoms using anatomic CT images and the NIST μ∕ρ tables for simulating realistic polyenergetic projection images, and optimized the processing speed with parallel computing using a GPGPU∕OpenCL-based implementation. The computation time (0.3 s per projection image) is fast enough for real-time IGRT (image-guided radiotherapy) applications.
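    The weighted-sum fluence model described in this record can be sketched as follows. This is a simplified illustration only, not the authors' OpenCL implementation: it assumes the per-bin line integrals have already been computed by a ray tracer (such as Siddon's, not implemented here), and the function name and uniform source fluence are our own assumptions.

```python
import numpy as np

def polyenergetic_projection(line_integrals, weights, photons_per_pixel=1e5, rng=None):
    """Combine per-energy-bin line integrals into one noisy projection image.

    line_integrals : array of shape (n_bins, H, W), ray integrals of the
        linear attenuation coefficient for each energy bin E(n).
    weights : array of shape (n_bins,), spectrum weights w(n).
    """
    rng = np.random.default_rng() if rng is None else rng
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalise the spectrum weights
    # Beer-Lambert law per bin, then weighted sum over energy bins
    transmission = np.tensordot(weights, np.exp(-np.asarray(line_integrals)), axes=1)
    expected = photons_per_pixel * transmission
    return rng.poisson(expected)       # added Poisson noise

# Toy example: two energy bins on a 2 x 2 detector
L = np.array([[[0.0, 1.0], [2.0, 3.0]],
              [[0.1, 1.1], [2.1, 3.1]]])
img = polyenergetic_projection(L, weights=[0.7, 0.3])
```

In a full implementation each of the n_bins line-integral maps would come from one Siddon ray trace through the segmented attenuation phantom, which is exactly the per-bin workload the paper offloads to the GPGPU.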

  2. Visual adaptation alters the apparent speed of real-world actions.

    PubMed

    Mather, George; Sharman, Rebecca J; Parsons, Todd

    2017-07-27

    The apparent physical speed of an object in the field of view remains constant despite variations in retinal velocity due to viewing conditions (velocity constancy). For example, people and cars appear to move across the field of view at the same objective speed regardless of distance. In this study a series of experiments investigated the visual processes underpinning judgements of objective speed using an adaptation paradigm and video recordings of natural human locomotion. Viewing a video played in slow motion for 30 seconds caused participants to perceive subsequently viewed clips played at standard speed as too fast, so playback had to be slowed down in order for it to appear natural; conversely, after viewing fast-forward videos for 30 seconds, playback had to be sped up in order to appear natural. The perceived speed of locomotion shifted towards the speed depicted in the adapting video ('re-normalisation'). Results were qualitatively different from those obtained in previously reported studies of retinal velocity adaptation. Adapting videos that were scrambled to remove recognizable human figures or coherent motion caused significant, though smaller, shifts in apparent locomotion speed, indicating that both low-level and high-level visual properties of the adapting stimulus contributed to the changes in apparent speed.

  3. FANSe2: a robust and cost-efficient alignment tool for quantitative next-generation sequencing applications.

    PubMed

    Xiao, Chuan-Le; Mai, Zhi-Biao; Lian, Xin-Lei; Zhong, Jia-Yong; Jin, Jing-Jie; He, Qing-Yu; Zhang, Gong

    2014-01-01

    Correct and bias-free interpretation of deep sequencing data depends on the complete mapping of all mappable reads to the reference sequence, especially for quantitative RNA-seq applications. Seed-based algorithms are generally slow but robust, while Burrows-Wheeler Transform (BWT) based algorithms are fast but less robust. To combine both advantages, we developed FANSe2, an algorithm with an iterative mapping strategy based on the statistics of the real-world sequencing error distribution, which substantially accelerates mapping without compromising accuracy. Its sensitivity and accuracy are higher than those of BWT-based algorithms in tests using both prokaryotic and eukaryotic sequencing datasets. The gene identification results of FANSe2 were experimentally validated, whereas the previous algorithms produced false positives and false negatives. FANSe2 showed remarkably better consistency with the microarray than most other algorithms in terms of gene expression quantification. We implemented a scalable and almost maintenance-free parallelization method that can utilize the computational power of multiple office computers, a novel feature not present in any other mainstream algorithm. With three normal office computers, we demonstrated that FANSe2 mapped an RNA-seq dataset generated from an entire Illumina HiSeq 2000 flowcell (8 lanes, 608 M reads) to the masked human genome within 4.1 hours, with higher sensitivity than Bowtie/Bowtie2. FANSe2 thus provides robust accuracy, full indel sensitivity, fast speed, versatile compatibility and economical computational utilization, making it a useful and practical tool for deep sequencing applications. FANSe2 is freely available at http://bioinformatics.jnu.edu.cn/software/fanse2/.

  4. Dwell-time algorithm for polishing large optics.

    PubMed

    Wang, Chunjin; Yang, Wei; Wang, Zhenzhong; Yang, Xu; Hu, Chenlin; Zhong, Bo; Guo, Yinbiao; Xu, Qiao

    2014-07-20

    The calculation of the dwell time plays a crucial role in polishing large precision optics. Although several methods have been studied, it remains a challenge to develop a calculation algorithm that is absolutely stable, together with a high convergence ratio and fast solution speed, even for extremely large mirrors. To this end, we introduce a self-adaptive iterative algorithm to calculate the dwell time in this paper. Simulations were conducted in bonnet polishing (BP) to test the performance of this method on a real 430  mm × 430  mm fused silica part with an initial surface error of PV=1741.29  nm, RMS=433.204  nm. The final surface residual error in the clear aperture after two simulation steps turned out to be PV=11.7  nm, RMS=0.5  nm. The results confirm that this method is stable and has a high convergence ratio and fast solution speed even on an ordinary computer. Notably, the solution time is usually just a few seconds even for a 1000  mm × 1000  mm part. Hence, we believe that this method is well suited for polishing large optics. Not only can it be applied to BP; it can also be applied to other subaperture deterministic polishing processes.
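    The dwell-time problem matches a target removal map against the convolution of the tool influence function (TIF) with the dwell-time map. The 1-D sketch below uses a generic multiplicative iteration to illustrate the idea; the paper's self-adaptive algorithm is not given in this abstract, so the update rule, kernel, and all parameters here are our own illustrative assumptions.

```python
import numpy as np

def dwell_time_iterative(target_removal, tif, n_iter=500):
    """Generic multiplicative iterative dwell-time solver (1-D sketch).

    Models removal as convolve(dwell, tif) and scales the dwell time up
    where removal is too low and down where it is too high. Units are
    arbitrary; this is a textbook-style scheme, not the paper's method.
    """
    eps = 1e-12
    # Initial guess: uniform dwell time scaled to the mean target removal
    dwell = np.full_like(target_removal, target_removal.mean() / (tif.sum() + eps),
                         dtype=float)
    for _ in range(n_iter):
        achieved = np.convolve(dwell, tif, mode="same")
        dwell = dwell * (target_removal / (achieved + eps))  # multiplicative update
        dwell = np.clip(dwell, 0.0, None)                    # dwell time must be >= 0
    return dwell
```

The multiplicative form keeps the dwell time non-negative by construction, which is one reason schemes of this family are popular for deterministic polishing.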

  5. Sensitivity Analysis of Wing Aeroelastic Responses

    NASA Technical Reports Server (NTRS)

    Issac, Jason Cherian

    1995-01-01

    Design for prevention of aeroelastic instability (that is, ensuring that the critical speeds leading to aeroelastic instability lie outside the operating range) is an integral part of the wing design process. Availability of the sensitivity derivatives of the various critical speeds with respect to shape parameters of the wing could be very useful to a designer in the initial design phase, when several design changes are made and the shape of the final configuration is not yet frozen. These derivatives are also indispensable for gradient-based optimization with aeroelastic constraints. In this study, the flutter characteristics of a typical section in subsonic compressible flow are examined using a state-space unsteady aerodynamic representation. The sensitivity of the flutter speed of the typical section with respect to its mass and stiffness parameters, namely, mass ratio, static unbalance, radius of gyration, bending frequency, and torsional frequency, is calculated analytically. A new strip-theory formulation is developed to represent the unsteady aerodynamic forces on a wing. This is coupled with an equivalent plate structural model and solved as an eigenvalue problem to determine the critical speed of the wing. Flutter analysis of the wing is also carried out using a lifting-surface subsonic kernel function aerodynamic theory (FAST) and an equivalent plate structural model. Finite element modeling of the wing is done using NASTRAN so that wing structures made of spars and ribs and top and bottom wing skins can be analyzed. The free vibration modes of the wing obtained from NASTRAN are input into FAST to compute the flutter speed. An equivalent plate model which incorporates first-order shear deformation theory is then examined so that it can be used to model thick wings, where shear deformations are important. The sensitivity of natural frequencies to changes in shape parameters is obtained using ADIFOR. 
A simple optimization effort is made towards obtaining a minimum-weight design of the wing, subject to flutter constraints, lift requirement constraints for level flight, and side constraints on the planform parameters of the wing, using the IMSL subroutine NCONG, which employs successive quadratic programming.

  6. Mechanisms used to increase peak propulsive force following 12 weeks of gait training in individuals poststroke.

    PubMed

    Hsiao, HaoYuan; Knarr, Brian A; Pohlig, Ryan T; Higginson, Jill S; Binder-Macleod, Stuart A

    2016-02-08

    Current rehabilitation efforts for individuals poststroke focus on increasing walking speed because it is a predictor of community ambulation and participation. Greater propulsive force is required to increase walking speed. Previous studies have identified that trailing limb angle (TLA) and ankle moment are key factors in increases in propulsive force during gait. However, no studies have determined the relative contribution of these two factors to increased propulsive force following intervention. The purpose of this study was to quantify the relative contribution of ankle moment and TLA to increases in propulsive force following 12 weeks of gait training for individuals poststroke. Forty-five participants were assigned to 1 of 3 training groups: training at self-selected speeds (SS), at fastest comfortable speeds (Fast), and Fast with functional electrical stimulation (FastFES). For participants who gained paretic propulsive force following training, a biomechanical model previously developed for individuals poststroke was used to calculate the relative contributions of ankle moment and TLA. A two-way mixed-model analysis of covariance adjusted for baseline walking speed was performed to analyze changes in TLA and ankle moment across groups. The model showed that TLA was the major contributor to increases in propulsive force following training. Although the paretic TLA increased from pre-training to post-training, no differences were observed between groups. In contrast, increases in paretic ankle moment were observed only in the FastFES group. Our findings suggested that specific targeting may be needed to increase ankle moment. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Mechanisms used to increase peak propulsive force following 12 weeks of gait training in individuals poststroke

    PubMed Central

    Hsiao, HaoYuan; Knarr, Brian A.; Pohlig, Ryan T.; Higginson, Jill S.; Binder-Macleod, Stuart A.

    2016-01-01

    Current rehabilitation efforts for individuals poststroke focus on increasing walking speed because it is a predictor of community ambulation and participation. Greater propulsive force is required to increase walking speed. Previous studies have identified that trailing limb angle (TLA) and ankle moment are key factors in increases in propulsive force during gait. However, no studies have determined the relative contribution of these two factors to increased propulsive force following intervention. The purpose of this study was to quantify the relative contribution of ankle moment and TLA to increases in propulsive force following 12 weeks of gait training for individuals poststroke. Forty-five participants were assigned to 1 of 3 training groups: training at self-selected speeds (SS), at fastest comfortable speeds (Fast), and Fast with functional electrical stimulation (FastFES). For participants who gained paretic propulsive force following training, a biomechanical model previously developed for individuals poststroke was used to calculate the relative contributions of ankle moment and TLA. A two-way mixed-model analysis of covariance adjusted for baseline walking speed was performed to analyze changes in TLA and ankle moment across groups. The model showed that TLA was the major contributor to increases in propulsive force following training. Although the paretic TLA increased from pre-training to post-training, no differences were observed between groups. In contrast, increases in paretic ankle moment were observed only in the FastFES group. Our findings suggested that specific targeting may be needed to increase ankle moment. PMID:26776931

  8. Oxygen consumption, oxygen cost, heart rate, and perceived effort during split-belt treadmill walking in young healthy adults.

    PubMed

    Roper, Jaimie A; Stegemöller, Elizabeth L; Tillman, Mark D; Hass, Chris J

    2013-03-01

    During split-belt treadmill walking the belt under one limb moves faster than the belt under the contralateral limb. This unique intervention has shown evidence of acutely improving gait impairments in individuals with neurologic conditions such as stroke and Parkinson's disease. However, the oxygen use, heart rate, and perceived effort associated with split-belt treadmill walking are unknown and may limit the utility of this locomotor intervention. To better understand the intensity of this new intervention, this study examined the oxygen consumption, oxygen cost, heart rate, and rating of perceived exertion associated with split-belt treadmill walking in young healthy adults. Fifteen participants completed three sessions of treadmill walking: slow speed with belts tied, fast speed with belts tied, and split-belt walking with one leg walking at the fast speed and one leg walking at the slow speed. Oxygen consumption, heart rate, and rating of perceived exertion were collected during each walking condition, and oxygen cost was calculated. Results revealed that oxygen consumption, heart rate, and perceived effort during split-belt walking were higher than during slow treadmill walking, and only oxygen consumption was significantly lower during split-belt walking than during fast treadmill walking. The oxygen cost of slow treadmill walking was significantly higher than that of fast treadmill walking. These findings have implications for using split-belt treadmill walking as a rehabilitation tool, as the cost associated with split-belt treadmill walking may not be higher, or potentially more detrimental, than that associated with previously used treadmill training rehabilitation strategies.

  9. Cyberdyn supercomputer - a tool for imaging geodynamic processes

    NASA Astrophysics Data System (ADS)

    Pomeran, Mihai; Manea, Vlad; Besutiu, Lucian; Zlagnean, Luminita

    2014-05-01

    More and more physical processes that develop within the deep interior of our planet, but have a significant impact on the Earth's shape and structure, are becoming subject to numerical modelling using high performance computing facilities. Nowadays, an increasing number of research centers worldwide decide to make use of such powerful and fast computers for simulating complex phenomena involving fluid dynamics and to gain deeper insight into intricate problems of Earth's evolution. With the CYBERDYN cybernetic infrastructure (CCI), the Solid Earth Dynamics Department in the Institute of Geodynamics of the Romanian Academy boldly steps into the 21st century by entering the research area of computational geodynamics. The project that made this advancement possible has been jointly supported by the EU and the Romanian Government through the Structural and Cohesion Funds. It lasted about three years, ending in October 2013. CCI is basically a modern high performance Beowulf-type supercomputer (HPCC), combined with a high performance visualization cluster (HPVC) and a GeoWall. The infrastructure is mainly structured around 1344 cores and 3 TB of RAM. The high speed interconnect is provided by a QLogic InfiniBand switch, able to transfer up to 40 Gbps. The CCI storage component is a 40 TB Panasas NAS. The operating system is Linux (CentOS). For control and maintenance, the Bright Cluster Manager package is used. The SGE job scheduler manages the job queues. CCI has been designed for a theoretical peak performance of up to 11.2 TFlops. Speed tests showed that a high-resolution numerical model (256 × 256 × 128 FEM elements) could be resolved at a mean computational speed of one time step per 30 seconds, employing only a fraction (20%) of the computing power. 
After passing the mandatory tests, the CCI has been involved in numerical modelling of various scenarios related to the East Carpathians tectonic and geodynamic evolution, including the Neogene magmatic activity, and the intriguing intermediate-depth seismicity within the so-called Vrancea zone. The CFD code for numerical modelling is CitcomS, a widely employed open source package specifically developed for earth sciences. Several preliminary 3D geodynamic models for simulating an assumed subduction or the effect of a mantle plume will be presented and discussed.

  10. Tsunami-Generated Atmospheric Gravity Waves and Their Atmospheric and Ionospheric Effects: a Review and Some Recent Modeling Results

    NASA Astrophysics Data System (ADS)

    Hickey, M. P.

    2017-12-01

    Tsunamis propagate on the ocean surface at the shallow-water phase speed, which coincides with the phase speed of fast atmospheric gravity waves. The forcing frequency also corresponds with those of internal atmospheric gravity waves. Hence, the coupling and effective forcing of gravity waves by tsunamis is particularly effective. The fast horizontal phase speeds of the resulting gravity waves allow them to propagate well into the thermosphere before viscous dissipation becomes strong, and the waves can achieve nonlinear amplitudes at these heights, resulting in large-amplitude traveling ionospheric disturbances (TIDs). Additionally, because the tsunami represents a moving source able to traverse large distances across the globe, the gravity waves and associated TIDs can be detected at large distances from the original tsunami (earthquake) source. Although the tsunami source of gravity waves was first postulated in the mid-1970s, only relatively recently (over the last ten to fifteen years) has there been a surge of interest in this research arena, driven largely by significant improvements in measurement technologies and computational capabilities. For example, the use of GPS measurements to derive total electron content has been a particularly powerful technique used to monitor the propagation and evolution of TIDs. Monitoring airglow variations driven by atmospheric gravity waves has also been a useful technique. The modeling of specific events and comparison with the observed gravity waves and/or TIDs has been quite revealing. In this talk I will review some of the most interesting aspects of this research and also discuss some interesting and outstanding issues that need to be addressed. New modeling results relevant to the Tohoku tsunami event will also be presented.

  11. The Relationship Between the Expansion Speed and Radial Speed of CMEs Confirmed Using Quadrature Observations of the 2011 February 15 CME

    NASA Astrophysics Data System (ADS)

    Gopalswamy, N.; Makela, P.; Yashiro, S.; Davila, J. M.

    2012-08-01

    It is difficult to measure the true speed of Earth-directed CMEs from a coronagraph along the Sun-Earth line because of the occulting disk. However, the expansion speed (the speed with which the CME appears to spread in the sky plane) can be measured by such a coronagraph. In order to convert the expansion speed to radial speed (which is important for space weather applications) one can use an empirical relationship between the two that assumes an average width for all CMEs. If we have the width information from quadrature observations, we can confirm the relationship between expansion and radial speeds derived by Gopalswamy et al. (2009a). The STEREO spacecraft were in quadrature with SOHO (STEREO-A ahead of Earth by 87° and STEREO-B 94° behind Earth) on 2011 February 15, when a fast Earth-directed CME occurred. The CME was observed as a halo by the Large-Angle and Spectrometric Coronagraph (LASCO) on board SOHO. The sky-plane speed was measured by SOHO/LASCO as the expansion speed, while the radial speed was measured by STEREO-A and STEREO-B. In addition, STEREO-A and STEREO-B images measured the width of the CME, which is unknown from the Earth view. From the SOHO and STEREO measurements, we confirm the relationship between the expansion speed (Vexp) and radial speed (Vrad) derived previously from geometrical considerations (Gopalswamy et al. 2009a): Vrad = 1/2 (1 + cot w) Vexp, where w is the half width of the CME. From STEREO-B images of the CME, we found that the CME had a full width of 76°, so w = 38°. This gives the relation as Vrad = 1.14 Vexp. From LASCO observations, we measured Vexp = 897 km/s, so we get the radial speed as 1023 km/s. Direct measurement of radial speed yields 945 km/s (STEREO-A) and 1058 km/s (STEREO-B). These numbers differ by only 7.6% and 3.4% (for STEREO-A and STEREO-B, respectively) from the computed value.

  12. Relationship Between the Expansion Speed and Radial Speed of CMEs Confirmed Using Quadrature Observations from SOHO and STEREO

    NASA Technical Reports Server (NTRS)

    Gopalswamy, Nat; Makela, Pertti; Yashiro, Seiji

    2011-01-01

    It is difficult to measure the true speed of Earth-directed CMEs from a coronagraph along the Sun-Earth line because of the occulting disk. However, the expansion speed (the speed with which the CME appears to spread in the sky plane) can be measured by such a coronagraph. In order to convert the expansion speed to radial speed (which is important for space weather applications) one can use an empirical relationship between the two that assumes an average width for all CMEs. If we have the width information from quadrature observations, we can confirm the relationship between expansion and radial speeds derived by Gopalswamy et al. (2009, CEAB, 33, 115). The STEREO spacecraft were in quadrature with SOHO (STEREO-A ahead of Earth by 87 degrees and STEREO-B 94 degrees behind Earth) on 2011 February 15, when a fast Earth-directed CME occurred. The CME was observed as a halo by the Large-Angle and Spectrometric Coronagraph (LASCO) on board SOHO. The sky-plane speed was measured by SOHO/LASCO as the expansion speed, while the radial speed was measured by STEREO-A and STEREO-B. In addition, STEREO-A and STEREO-B images measured the width of the CME, which is unknown from the Earth view. From the SOHO and STEREO measurements, we confirm the relationship between the expansion speed (Vexp) and radial speed (Vrad) derived previously from geometrical considerations (Gopalswamy et al. 2009): Vrad = 1/2 (1 + cot w) Vexp, where w is the half width of the CME. From STEREO-B images of the CME, we found that the CME had a full width of 75 degrees, so w = 37.5 degrees. This gives the relation as Vrad = 1.15 Vexp. From LASCO observations, we measured Vexp = 897 km/s, so we get the radial speed as 1033 km/s. Direct measurement of radial speed from STEREO gives 945 km/s (STEREO-A) and 1057 km/s (STEREO-B). These numbers differ by only 8.5% and 2.3% (for STEREO-A and STEREO-B, respectively) from the computed value.
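    The quoted conversion is easy to verify numerically. The small helper below (function name is ours) reproduces the abstract's arithmetic:

```python
import math

def radial_speed(v_exp, half_width_deg):
    """Vrad = (1/2) * (1 + cot w) * Vexp, with w the CME half width."""
    w = math.radians(half_width_deg)
    return 0.5 * (1.0 + 1.0 / math.tan(w)) * v_exp

# Abstract's numbers: w = 37.5 deg gives Vrad ~ 1.15 Vexp,
# so Vexp = 897 km/s converts to ~1033 km/s.
v_rad = radial_speed(897.0, 37.5)
```

Comparing this computed ~1033 km/s with the direct STEREO measurements of 945 and 1057 km/s gives the percent differences quoted in the abstract.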

  13. Ultrasound sounding in air by fast-moving receiver

    NASA Astrophysics Data System (ADS)

    Sukhanov, D.; Erzakova, N.

    2018-05-01

    A method of ultrasound imaging in air with a fast-moving receiver is proposed, for the case when the speed of movement of the receiver cannot be neglected with respect to the speed of sound. In this case the Doppler effect is significant, making matched filtering of the backscattered signal difficult. The proposed method does not use a continuous repetitive noise sounding signal; a generalized approach applies spatially matched filtering in the time domain to recover the ultrasonic tomographic images.

  14. A Highly Accurate Face Recognition System Using Filtering Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko

    2007-09-01

    The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate even with low-resolution facial images (64 × 64 pixels). An operation speed of less than 10 ms was achieved using a personal computer with a 3 GHz central processing unit (CPU) and 2 GB of memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: a 0% false acceptance rate and a 2% false rejection rate. Therefore, the filtering correlation works effectively when applied to low-resolution images such as web-based images or faces captured by a monitoring camera.
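    As a generic illustration of software correlation filtering (not the actual FARCO filter design, which is not given in this record), a normalized FFT-based cross-correlation peak can serve as a minimal matching score for 64 × 64 images:

```python
import numpy as np

def correlation_peak(probe, reference):
    """Normalized cross-correlation via FFT; returns the peak value.

    A high peak (near 1.0) indicates the probe matches the stored
    reference; unrelated images give values near 0.
    """
    p = (probe - probe.mean()) / (probe.std() + 1e-12)
    r = (reference - reference.mean()) / (reference.std() + 1e-12)
    # Circular cross-correlation over all shifts via the FFT
    corr = np.fft.ifft2(np.fft.fft2(p) * np.conj(np.fft.fft2(r))).real
    return corr.max() / p.size

rng = np.random.default_rng(1)
face = rng.random((64, 64))          # stand-in for a 64 x 64 facial image
score_same = correlation_peak(face, face)
score_other = correlation_peak(face, rng.random((64, 64)))
```

Thresholding such a score is what trades off the false acceptance and false rejection rates reported in the abstract.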

  15. Redefining the Speed Limit of Phase Change Memory Revealed by Time-resolved Steep Threshold-Switching Dynamics of AgInSbTe Devices

    NASA Astrophysics Data System (ADS)

    Shukla, Krishna Dayal; Saxena, Nishant; Durai, Suresh; Manivannan, Anbarasu

    2016-11-01

    Although phase-change memory (PCM) offers promising features for a ‘universal memory’ owing to its high speed and non-volatility, achieving fast electrical switching remains a key challenge. In this work, the correlation between the rate of applied voltage and the dynamics of threshold switching is investigated at picosecond timescale. A distinct characteristic of rapid threshold switching at a critical voltage known as the threshold voltage, validated by an instantaneous steep current rise from the amorphous off state to the on state, is achieved within 250 picoseconds, and this is followed by a slower current rise leading to crystallization. Also, we demonstrate that the threshold-switching dynamics in AgInSbTe cells is independent of the rate of applied voltage, unlike other chalcogenide-based phase-change materials, which exhibit voltage-dependent transient switching characteristics. Furthermore, numerical solutions of the time-dependent conduction process validate the experimental results, which reveal the electronic nature of threshold switching. These findings of steep threshold switching with a sub-50 ps delay time open up a new way to achieve high-speed non-volatile memory for mainstream computing.

  16. Ferroelectric domain switching dynamics and memristive behaviors in BiFeO3-based magnetoelectric heterojunctions

    NASA Astrophysics Data System (ADS)

    Huang, Weichuan; Liu, Yukuai; Luo, Zhen; Hou, Chuangming; Zhao, Wenbo; Yin, Yuewei; Li, Xiaoguang

    2018-06-01

    The ferroelectric domain reversal dynamics and the corresponding resistance switching as well as the memristive behaviors in epitaxial BiFeO3 (BFO, ~150 nm) based multiferroic heterojunctions were systematically investigated. The ferroelectric domain reversal dynamics could be described by the nucleation-limited-switching model with a Lorentzian distribution of logarithmic domain-switching times. By engineering the domain states, multiple and even continuously tunable resistance states, i.e. memristive states, could be achieved in a non-volatile manner. The resistance switching speed can be as fast as 30 ns in the BFO-based multiferroic heterojunctions with a write voltage of ~20 V. By reducing the thickness of BFO, the La0.6Sr0.4MnO3/BFO (~5 nm)/La0.6Sr0.4MnO3 multiferroic tunnel junction (MFTJ) shows an even quicker switching speed (20 ns) with a much lower operation voltage (~4 V). Importantly, the MFTJ exhibits a tunable interfacial magnetoelectric coupling related to the ferroelectric domain switching dynamics. These findings enrich the potential applications of multiferroic BFO-based devices in high-speed, low-power, and high-density memories as well as future neuromorphic computational architectures.

  17. 31P Magnetic Resonance Spectroscopy Assessment of Muscle Bioenergetics as a Predictor of Gait Speed in the Baltimore Longitudinal Study of Aging.

    PubMed

    Choi, Seongjin; Reiter, David A; Shardell, Michelle; Simonsick, Eleanor M; Studenski, Stephanie; Spencer, Richard G; Fishbein, Kenneth W; Ferrucci, Luigi

    2016-12-01

    Aerobic fitness and muscle bioenergetic capacity decline with age; whether such declines explain age-related slowing of walking speed is unclear. We hypothesized that muscle energetics and aerobic capacity are independent correlates of walking speed in simple and challenging performance tests and that they account for the observed age-related decline in walking speed in these same tests. Muscle bioenergetics was assessed as the postexercise recovery rate of phosphocreatine (PCr), kPCr, using phosphorus magnetic resonance spectroscopy (31P-MRS) in 126 participants (53 men) of the Baltimore Longitudinal Study of Aging aged 26-91 years (mean = 72 years). Four walking tasks were administered: usual pace over 6 m and 150 seconds, and fast pace over 6 m and 400 m. Separately, aerobic fitness was assessed as peak oxygen consumption (peak VO2) using a graded treadmill test. All gait speeds, kPCr, and peak VO2 were lower with older age. Independent of age, sex, height, and weight, both kPCr and peak VO2 were positively and significantly associated with fast-pace and long-distance walking, but only peak VO2, and not kPCr, was significantly associated with usual gait speed over 6 m. Both kPCr and peak VO2 substantially attenuated the association between age and gait speed for all but the least stressful walking task of 6 m at usual pace. Muscle bioenergetics assessed using 31P-MRS is highly correlated with walking speed and partially explains age-related poorer performance in fast and long walking tasks. Published by Oxford University Press on behalf of The Gerontological Society of America 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.
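    The recovery rate kPCr is typically obtained by fitting a mono-exponential recovery curve to the post-exercise PCr signal. The sketch below recovers k from synthetic, noise-free data with a log-linear fit; the model form is standard, but all numbers here are illustrative and not from the study.

```python
import numpy as np

# Mono-exponential PCr recovery model: PCr(t) = PCr_end + dPCr * (1 - exp(-k*t))
# Simulate a noise-free recovery with k = 0.03 s^-1 (illustrative value)
t = np.arange(0.0, 300.0, 10.0)
pcr_end, d_pcr, k_true = 20.0, 15.0, 0.03
pcr = pcr_end + d_pcr * (1.0 - np.exp(-k_true * t))

# With the asymptote known (pcr_end + d_pcr), the model is log-linear:
# ln(asymptote - PCr(t)) = ln(d_pcr) - k*t, so k is minus the slope.
asymptote = pcr_end + d_pcr
mask = (asymptote - pcr) > 1e-9          # avoid log(0) at late times
slope, intercept = np.polyfit(t[mask], np.log(asymptote - pcr[mask]), 1)
k_pcr = -slope                            # recovered rate constant
```

With real, noisy 31P-MRS data the asymptote is not known in advance, so a nonlinear least-squares fit of all three parameters is the usual approach; the log-linear form above only illustrates the meaning of kPCr.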

  18. Human Kinematics of Cochlear Implant Surgery: An Investigation of Insertion Micro-Motions and Speed Limitations.

    PubMed

    Kesler, Kyle; Dillon, Neal P; Fichera, Loris; Labadie, Robert F

    2017-09-01

    Objectives Document human motions associated with cochlear implant electrode insertion at different speeds and determine the lower limit of continuous insertion speed by a human. Study Design Observational. Setting Academic medical center. Subjects and Methods Cochlear implant forceps were coupled to a frame containing reflective fiducials, which enabled optical tracking of the forceps' tip position in real time. Otolaryngologists (n = 14) performed mock electrode insertions at different speeds based on recommendations from the literature: "fast" (96 mm/min), "stable" (as slow as possible without stopping), and "slow" (15 mm/min). For each insertion, the following metrics were calculated from the tracked position data: percentage of time at prescribed speed, percentage of time the surgeon stopped moving forward, and number of direction reversals (ie, going from forward to backward motion). Results Fast insertion trials resulted in better adherence to the prescribed speed (45.4% of the overall time), no motion interruptions, and no reversals, as compared with slow insertions (18.6% of time at prescribed speed, 15.7% stopped time, and an average of 18.6 reversals per trial). These differences were statistically significant for all metrics (P < .01). The metrics for the fast and stable insertions were comparable; however, stable insertions were performed 44% slower on average. The mean stable insertion speed was 52 ± 19.3 mm/min. Conclusion Results indicate that continuous insertion of a cochlear implant electrode at 15 mm/min is not feasible for human operators. The lower limit of continuous forward insertion is 52 mm/min on average. Guidelines on manual insertion kinematics should consider this practical limit of human motion.

  19. Cloud computing for comparative genomics

    PubMed Central

    2010-01-01

    Background Large comparative genomics studies and tools are becoming increasingly compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to grow. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. Results We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high-capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time was just under 70 hours and cost a total of $6,302 USD. Conclusions The effort to transform existing comparative genomics algorithms from local compute infrastructures is not trivial. However, the speed and flexibility of cloud computing environments provide a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems. PMID:20482786

  20. Cloud computing for comparative genomics.

    PubMed

    Wall, Dennis P; Kudtarkar, Parul; Fusaro, Vincent A; Pivovarov, Rimma; Patil, Prasad; Tonellato, Peter J

    2010-05-18

    Large comparative genomics studies and tools are becoming increasingly compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to grow. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high-capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time was just under 70 hours and cost a total of $6,302 USD. The effort to transform existing comparative genomics algorithms from local compute infrastructures is not trivial. However, the speed and flexibility of cloud computing environments provide a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems.

  1. Long-range interactions and parallel scalability in molecular simulations

    NASA Astrophysics Data System (ADS)

    Patra, Michael; Hyvönen, Marja T.; Falck, Emma; Sabouri-Ghomi, Mohsen; Vattulainen, Ilpo; Karttunen, Mikko

    2007-01-01

    Typical biomolecular systems such as cellular membranes, DNA, and protein complexes are highly charged. Thus, efficient and accurate treatment of electrostatic interactions is of great importance in computational modeling of such systems. We have employed the GROMACS simulation package to perform extensive benchmarking of different commonly used electrostatic schemes on a range of computer architectures (Pentium 4, IBM Power4, and Apple/IBM G5) for single-processor and parallel performance up to 8 nodes. We have also tested scalability on four different networks: InfiniBand, Gigabit Ethernet, Fast Ethernet, and a nearly uniform memory architecture, i.e., one in which communication between CPUs is possible by directly reading from or writing to other CPUs' local memory. It turns out that the particle-mesh Ewald method (PME) performs surprisingly well and offers competitive performance unless parallel runs on PC hardware with older network infrastructure are needed. Lipid bilayers of 128, 512, and 2048 lipid molecules were used as the test systems representing typical cases encountered in biomolecular simulations. Our results enable an accurate prediction of computational speed on most current computing systems, both for serial and parallel runs. These results should be helpful in, for example, choosing the most suitable configuration for a small departmental computer cluster.
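
    For reference, the PME scheme benchmarked above is selected in GROMACS through a handful of .mdp run parameters. The values below are a plausible illustration only, not the settings used in the study:

```
; Hypothetical .mdp fragment enabling particle-mesh Ewald electrostatics
coulombtype     = PME      ; particle-mesh Ewald for long-range electrostatics
rcoulomb        = 1.0      ; real-space cutoff (nm)
fourierspacing  = 0.12     ; target grid spacing for the PME mesh (nm)
pme-order       = 4        ; cubic B-spline charge interpolation
ewald-rtol      = 1e-5     ; relative strength of the direct-space potential at the cutoff
```

    Tighter fourierspacing and higher pme-order trade speed for accuracy, which is exactly the axis such benchmarks explore.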

  2. Step training with body weight support: effect of treadmill speed and practice paradigms on poststroke locomotor recovery.

    PubMed

    Sullivan, Katherine J; Knowlton, Barbara J; Dobkin, Bruce H

    2002-05-01

    To investigate the effect of practice paradigms that varied treadmill speed during step training with body weight support in subjects with chronic hemiparesis after stroke. Randomized, repeated-measures pilot study with 1- and 3-month follow-ups. Outpatient locomotor laboratory. Twenty-four individuals with hemiparetic gait deficits whose walking speeds were at least 50% below normal. Participants were stratified by locomotor severity based on initial walking velocity and randomly assigned to treadmill training at slow (0.5mph), fast (2.0mph), or variable (0.5, 1.0, 1.5, 2.0mph) speeds. Participants received 20 minutes of training per session for 12 sessions over 4 weeks. Self-selected overground walking velocity (SSV) was assessed at the onset, middle, and end of training, and 1 and 3 months later. SSV improved in all groups compared with baseline (P<.001). All groups increased SSV in the 1-month follow-up (P<.01) and maintained these gains at the 3-month follow-up (P=.77). The greatest improvement in SSV across training occurred with fast training speeds compared with the slow and variable groups combined (P=.04). Effect size (ES) was large between fast compared with slow (ES=.75) and variable groups (ES=.73). Training at speeds comparable with normal walking velocity was more effective in improving SSV than training at speeds at or below the patient's typical overground walking velocity. Copyright 2002 by the American Congress of Rehabilitation Medicine and the American Academy of Physical Medicine and Rehabilitation

  3. Validity of the iPhone M7 motion co-processor as a pedometer for able-bodied ambulation.

    PubMed

    Major, Matthew J; Alford, Micah

    2016-12-01

    Physical activity benefits for disease prevention are well-established. Smartphones offer a convenient platform for community-based step count estimation to monitor and encourage physical activity. Accuracy is dependent on hardware-software platforms, creating a recurring challenge for validation, but the Apple iPhone® M7 motion co-processor provides a standardised method that helps address this issue. Validity of the M7 to record step count for level-ground, able-bodied walking at three self-selected speeds, and agreement with the StepWatch™, was assessed. Steps were measured concurrently with the iPhone® (custom application to extract step count), StepWatch™ and manual count. Agreement between iPhone® and manual/StepWatch™ count was estimated through Pearson correlation and Bland-Altman analyses. Data from 20 participants suggested that iPhone® step count correlations with manual and StepWatch™ were strong for customary (1.3 ± 0.1 m/s) and fast (1.8 ± 0.2 m/s) speeds, but weak for the slow (1.0 ± 0.1 m/s) speed. Mean absolute error (manual-iPhone®) was 21%, 8% and 4% for the slow, customary and fast speeds, respectively. The M7 accurately records step count during customary and fast walking speeds, but is prone to considerable inaccuracies at slow speeds, which has important implications for certain patient groups. The iPhone® may be a suitable alternative to the StepWatch™ for only faster walking speeds.
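
    The Bland-Altman analysis used above reduces to the mean paired difference (bias) and its 95% limits of agreement. A minimal sketch, with hypothetical step-count pairs:

```python
# Sketch: Bland-Altman bias and 95% limits of agreement between two
# step-count methods (e.g., manual count vs. phone estimate).
import math

def bland_altman(a, b):
    """Return (bias, lower_loa, upper_loa) for paired measurements a, b."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n                                   # mean difference
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd         # 95% limits
```

    Good agreement means a bias near zero and limits of agreement narrow enough to be clinically acceptable.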

  4. GPU-accelerated non-uniform fast Fourier transform-based compressive sensing spectral domain optical coherence tomography.

    PubMed

    Xu, Daguang; Huang, Yong; Kang, Jin U

    2014-06-16

    We implemented the graphics processing unit (GPU) accelerated compressive sensing (CS) non-uniform in k-space spectral domain optical coherence tomography (SD OCT). Kaiser-Bessel (KB) function and Gaussian function are used independently as the convolution kernel in the gridding-based non-uniform fast Fourier transform (NUFFT) algorithm with different oversampling ratios and kernel widths. Our implementation is compared with the GPU-accelerated modified non-uniform discrete Fourier transform (MNUDFT) matrix-based CS SD OCT and the GPU-accelerated fast Fourier transform (FFT)-based CS SD OCT. It was found that our implementation has comparable performance to the GPU-accelerated MNUDFT-based CS SD OCT in terms of image quality while providing more than 5 times speed enhancement. When compared to the GPU-accelerated FFT-based CS SD OCT, it shows smaller background noise and fewer side lobes while eliminating the need for the cumbersome k-space grid filling and the k-linear calibration procedure. Finally, we demonstrated that by using a conventional desktop computer architecture having three GPUs, real-time B-mode imaging can be obtained in excess of 30 fps for the GPU-accelerated NUFFT-based CS SD OCT with frame size 2048 (axial) × 1000 (lateral).
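
    The gridding step at the heart of such a NUFFT spreads each non-uniform k-space sample onto nearby points of a regular grid through a finite-width kernel before an ordinary FFT. A toy 1-D sketch with a Gaussian kernel, per-sample normalised here purely for simplicity (a real implementation instead applies density compensation and divides out the kernel's Fourier transform after the FFT):

```python
# Toy sketch: gridding (convolutional spreading) of non-uniform samples
# onto a regular 1-D grid, as in gridding-based NUFFT.
import math

def grid_spread(coords, values, n_grid, half_width=3, sigma=0.8):
    """Spread non-uniform samples onto a periodic regular grid.

    coords: sample positions in grid units, 0 <= c < n_grid.
    Per-sample kernel normalisation keeps the total signal mass here;
    production gridding uses density compensation + deapodisation instead.
    """
    grid = [0.0] * n_grid
    for c, v in zip(coords, values):
        centre = int(round(c))
        idx = range(centre - half_width, centre + half_width + 1)
        w = [math.exp(-((c - j) ** 2) / (2 * sigma ** 2)) for j in idx]
        s = sum(w)
        for j, wj in zip(idx, w):
            grid[j % n_grid] += v * wj / s   # wrap-around (periodic grid)
    return grid
```

    Swapping the Gaussian for a Kaiser-Bessel window changes only the line computing `w`; the spreading loop is identical.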

  5. An embedded multi-core parallel model for real-time stereo imaging

    NASA Astrophysics Data System (ADS)

    He, Wenjing; Hu, Jian; Niu, Jingyu; Li, Chuanrong; Liu, Guangyu

    2018-04-01

    Real-time processing based on embedded systems will enhance the application capability of stereo imaging for LiDAR and hyperspectral sensors. Research on task partitioning and scheduling strategies for embedded multiprocessor systems started relatively late compared with that for PC platforms. In this paper, aimed at an embedded multi-core processing platform, a parallel model for stereo imaging is studied and verified. After analyzing the computing load, throughput capacity and buffering requirements, a two-stage pipeline parallel model based on message transmission is established. This model can be applied to fast stereo imaging for airborne sensors with various characteristics. To demonstrate the feasibility and effectiveness of the parallel model, parallel software was designed using test flight data, based on the 8-core DSP processor TMS320C6678. The results indicate that the design performed well in workload distribution and achieved a speed-up ratio of up to 6.4.
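
    A two-stage pipeline based on message transmission can be sketched as two workers connected by a queue, so the stages overlap in time. The stage functions below are placeholders, not the paper's actual processing steps:

```python
# Sketch: two-stage pipeline connected by a message queue, as in the
# parallel model described above. The per-stage "work" is a placeholder.
import queue
import threading

SENTINEL = None  # end-of-stream marker passed between stages

def pipeline(items, stage1, stage2):
    q12, out = queue.Queue(maxsize=8), []

    def worker1():
        for item in items:
            q12.put(stage1(item))     # stage 1: e.g., rectify a scan line
        q12.put(SENTINEL)

    def worker2():
        while True:
            msg = q12.get()
            if msg is SENTINEL:
                break
            out.append(stage2(msg))   # stage 2: e.g., stereo matching

    t1 = threading.Thread(target=worker1)
    t2 = threading.Thread(target=worker2)
    t1.start(); t2.start(); t1.join(); t2.join()
    return out

results = pipeline(range(5), lambda x: x + 1, lambda x: x * 10)
```

    On a multi-core DSP the queue would be replaced by inter-core message passing, but the producer/consumer structure is the same.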

  6. An annular superposition integral for axisymmetric radiators.

    PubMed

    Kelly, James F; McGough, Robert J

    2007-02-01

    A fast integral expression for computing the nearfield pressure is derived for axisymmetric radiators. This method replaces the sum of contributions from concentric annuli with an exact double integral that converges much faster than methods that evaluate the Rayleigh-Sommerfeld integral or the generalized King integral. Expressions are derived for plane circular pistons using both continuous wave and pulsed excitations. Several commonly used apodization schemes for the surface velocity distribution are considered, including polynomial functions and a "smooth piston" function. The effect of different apodization functions on the spectral content of the wave field is explored. Quantitative error and time comparisons between the new method, the Rayleigh-Sommerfeld integral, and the generalized King integral are discussed. At all error levels considered, the annular superposition method achieves a speed-up of at least a factor of 4 relative to the point-source method and a factor of 3 relative to the generalized King integral without increasing the computational complexity.

  7. A fast, open source implementation of adaptive biasing potentials uncovers a ligand design strategy for the chromatin regulator BRD4

    NASA Astrophysics Data System (ADS)

    Dickson, Bradley M.; de Waal, Parker W.; Ramjan, Zachary H.; Xu, H. Eric; Rothbart, Scott B.

    2016-10-01

    In this communication we introduce an efficient implementation of adaptive biasing that greatly improves the speed of free energy computation in molecular dynamics simulations. We investigated the use of accelerated simulations to inform on compound design using a recently reported and clinically relevant inhibitor of the chromatin regulator BRD4 (bromodomain-containing protein 4). Benchmarking on our local compute cluster, our implementation achieves up to 2.5 times more force calls per day than plumed2. Results of five 1 μs-long simulations are presented, which reveal a conformational switch in the BRD4 inhibitor between a binding competent and incompetent state. Stabilization of the switch led to a -3 kcal/mol improvement of absolute binding free energy. These studies suggest an unexplored ligand design principle and offer new actionable hypotheses for medicinal chemistry efforts against this druggable epigenetic target class.

  8. Assessing performance of flaw characterization methods through uncertainty propagation

    NASA Astrophysics Data System (ADS)

    Miorelli, R.; Le Bourdais, F.; Artusi, X.

    2018-04-01

    In this work, we assess the inversion performance in terms of crack characterization and localization based on synthetic signals associated with ultrasonic and eddy current physics. More precisely, two different standard iterative inversion algorithms are used to minimize the discrepancy between measurements (i.e., the tested data) and simulations. Furthermore, in order to speed up the computation and avoid the computational burden often associated with iterative inversion algorithms, we replace the standard forward solver by a suitable metamodel fitted on a database built offline. In a second step, we assess the inversion performance by adding uncertainties on a subset of the database parameters and then, through the metamodel, propagating these uncertainties within the inversion procedure. The fast propagation of uncertainties enables efficient evaluation of the impact of the lack of knowledge of some parameters employed to describe the inspection scenarios, a situation commonly encountered in the industrial NDE context.
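
    The metamodel idea, replacing the expensive forward solver with a cheap interpolant fitted on an offline database and inverting against the interpolant, can be sketched in a few lines. The forward model, grid, and inversion-by-search below are illustrative stand-ins, not the paper's NDE solver or algorithms:

```python
# Sketch: replace an "expensive" forward model by a lookup-table metamodel
# (piecewise-linear interpolation), then invert a measurement by searching
# over the cheap surrogate only.

def forward(x):                 # stand-in for the costly simulation
    return x ** 3 + x

def build_metamodel(xs):        # offline database: (x, forward(x)) pairs
    ys = [forward(x) for x in xs]
    def surrogate(x):           # linear interpolation between grid points
        for i in range(len(xs) - 1):
            if xs[i] <= x <= xs[i + 1]:
                t = (x - xs[i]) / (xs[i + 1] - xs[i])
                return ys[i] + t * (ys[i + 1] - ys[i])
        raise ValueError("x outside database range")
    return surrogate

def invert(surrogate, measurement, candidates):
    # pick the candidate whose surrogate response best matches the data;
    # only cheap surrogate calls are made during inversion
    return min(candidates, key=lambda x: abs(surrogate(x) - measurement))

xs = [i * 0.1 for i in range(21)]                    # offline grid on [0, 2]
meta = build_metamodel(xs)
x_hat = invert(meta, forward(1.23), [i * 0.01 for i in range(201)])
```

    Because each surrogate call is nearly free, propagating many perturbed inputs through the inversion (for uncertainty quantification) becomes tractable.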

  9. MUSCLE: multiple sequence alignment with high accuracy and high throughput.

    PubMed

    Edgar, Robert C

    2004-01-01

    We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.
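
    The fast distance-estimation stage can be illustrated with a toy k-mer similarity: the fraction of k-mers two sequences share approximates their relatedness without computing any alignment. This sketch is a simplified stand-in, not MUSCLE's exact kmer-distance formula (which also uses compressed alphabets):

```python
# Toy sketch: alignment-free distance from shared k-mer counts.

def kmer_counts(seq, k=3):
    counts = {}
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        counts[kmer] = counts.get(kmer, 0) + 1
    return counts

def kmer_distance(a, b, k=3):
    ca, cb = kmer_counts(a, k), kmer_counts(b, k)
    shared = sum(min(ca[m], cb.get(m, 0)) for m in ca)  # common k-mers
    denom = min(len(a), len(b)) - k + 1      # max possible shared k-mers
    return 1.0 - shared / denom              # 0 = identical, 1 = disjoint
```

    Counting k-mers is linear in sequence length, which is why such distances are so much cheaper than pairwise alignment when building an initial guide tree.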

  10. Speed-up of the volumetric method of moments for the approximate RCS of large arbitrary-shaped dielectric targets

    NASA Astrophysics Data System (ADS)

    Moreno, Javier; Somolinos, Álvaro; Romero, Gustavo; González, Iván; Cátedra, Felipe

    2017-08-01

    A method for the rigorous computation of the electromagnetic scattering of large dielectric volumes is presented. One goal is to simplify the analysis of large dielectric targets with translational symmetries by taking advantage of their Toeplitz symmetry: the matrix-fill stage of the Method of Moments then becomes efficient because the number of coupling terms to compute is reduced. The Multilevel Fast Multipole Method is applied to solve the problem. Structured meshes are obtained efficiently to approximate the dielectric volumes. The regular mesh grid is achieved by using parallelepipeds whose centres have been identified as internal to the target. The ray casting algorithm is used to classify the parallelepiped centres. It may become a bottleneck when too many points are evaluated in volumes defined by parametric surfaces, so a hierarchical algorithm is proposed to minimize the number of evaluations. Measurements and analytical results are included for validation purposes.
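
    The ray casting classification can be sketched in 2-D: a point is inside a closed boundary if a ray from it crosses the boundary an odd number of times. The 3-D version used for parallelepiped centres intersects rays with the parametric surfaces instead of polygon edges, but the parity test is the same:

```python
# Sketch: even-odd ray casting to classify points (e.g., mesh-cell centres)
# as inside or outside a closed polygon. A 2-D stand-in for the 3-D test.

def point_in_polygon(x, y, poly):
    """poly: list of (x, y) vertices of a closed polygon, in order."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # does this edge straddle the horizontal ray through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:          # crossing lies to the right of the point
                inside = not inside  # each crossing flips the parity
    return inside

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
```

    The hierarchical speed-up mentioned above amounts to classifying whole blocks of centres at once when a block is provably entirely inside or outside, so the per-point test runs only near the surface.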

  11. Real-time Accurate Surface Reconstruction Pipeline for Vision Guided Planetary Exploration Using Unmanned Ground and Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Almeida, Eduardo DeBrito

    2012-01-01

    This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.

  12. A fast, open source implementation of adaptive biasing potentials uncovers a ligand design strategy for the chromatin regulator BRD4

    PubMed Central

    Dickson, Bradley M.; Ramjan, Zachary H.; Xu, H. Eric

    2016-01-01

    In this communication we introduce an efficient implementation of adaptive biasing that greatly improves the speed of free energy computation in molecular dynamics simulations. We investigated the use of accelerated simulations to inform on compound design using a recently reported and clinically relevant inhibitor of the chromatin regulator BRD4 (bromodomain-containing protein 4). Benchmarking on our local compute cluster, our implementation achieves up to 2.5 times more force calls per day than plumed2. Results of five 1 μs-long simulations are presented, which reveal a conformational switch in the BRD4 inhibitor between a binding competent and incompetent state. Stabilization of the switch led to a −3 kcal/mol improvement of absolute binding free energy. These studies suggest an unexplored ligand design principle and offer new actionable hypotheses for medicinal chemistry efforts against this druggable epigenetic target class. PMID:27782467

  13. Geometric saliency to characterize radar exploitation performance

    NASA Astrophysics Data System (ADS)

    Nolan, Adam; Keserich, Brad; Lingg, Andrew; Goley, Steve

    2014-06-01

    Based on the fundamental scattering mechanisms of facetized computer-aided design (CAD) models, we are able to define expected contributions (EC) to the radar signature. The net result of this analysis is the prediction of the salient aspects and contributing vehicle morphology based on the aspect. Although this approach does not provide the fidelity of an asymptotic electromagnetic (EM) simulation, it does provide very fast estimates of the unique scattering that can be consumed by a signature exploitation algorithm. The speed of this approach is particularly relevant when considering the high dimensionality of target configuration variability due to articulating parts which are computationally burdensome to predict. The key scattering phenomena considered in this work are the specular response from a single bounce interaction with surfaces and dihedral response formed between the ground plane and vehicle. Results of this analysis are demonstrated for a set of civilian target models.

  14. Grid-Based Surface Generalized Born Model for Calculation of Electrostatic Binding Free Energies.

    PubMed

    Forouzesh, Negin; Izadi, Saeed; Onufriev, Alexey V

    2017-10-23

    Fast and accurate calculation of solvation free energies is central to many applications, such as rational drug design. In this study, we present a grid-based molecular surface implementation of the "R6" flavor of the generalized Born (GB) implicit solvent model, named GBNSR6. The speed, accuracy relative to numerical Poisson-Boltzmann treatment, and sensitivity to grid surface parameters are tested on a set of 15 small protein-ligand complexes and a set of biomolecules in the range of 268 to 25099 atoms. Our results demonstrate that the proposed model provides a relatively successful compromise between the speed and accuracy of computing polar components of the solvation free energies (ΔGpol) and binding free energies (ΔΔGpol). The model tolerates a relatively coarse grid size h = 0.5 Å, where the grid artifact error in computing ΔΔGpol remains in the range of kBT ≈ 0.6 kcal/mol. The estimated ΔΔGpol values are well correlated (r² = 0.97) with the numerical Poisson-Boltzmann reference, while showing virtually no systematic bias and RMSE = 1.43 kcal/mol. The grid-based GBNSR6 model is available in the Amber (AmberTools) package of molecular simulation programs.
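
    For context, GB models of this family estimate the polar solvation energy through the canonical Still-type pairwise form below; the "R6" flavor differs mainly in how the effective Born radii R_i are computed from the molecular surface, not in this outer formula:

```latex
\Delta G_{\mathrm{pol}} \approx
-\frac{1}{2}\left(\frac{1}{\epsilon_{\mathrm{in}}}-\frac{1}{\epsilon_{\mathrm{out}}}\right)
\sum_{i,j}\frac{q_i q_j}{f_{ij}^{\mathrm{GB}}},
\qquad
f_{ij}^{\mathrm{GB}} = \sqrt{\,r_{ij}^2 + R_i R_j
\exp\!\left(-\frac{r_{ij}^2}{4 R_i R_j}\right)}
```

    Here q_i are atomic charges, r_ij interatomic distances, R_i the effective Born radii, and ε_in/ε_out the solute and solvent dielectric constants; the pairwise sum is why GB is so much cheaper than solving the Poisson-Boltzmann equation numerically.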

  15. QuickProbs—A Fast Multiple Sequence Alignment Algorithm Designed for Graphics Processors

    PubMed Central

    Gudyś, Adam; Deorowicz, Sebastian

    2014-01-01

    Multiple sequence alignment is a crucial task in a number of biological analyses like secondary structure prediction, domain searching, phylogeny, etc. MSAProbs is currently the most accurate alignment algorithm, but its effectiveness is obtained at the expense of computational time. In the paper we present QuickProbs, a variant of MSAProbs customised for graphics processors. We selected the two most time-consuming stages of MSAProbs to be redesigned for GPU execution: the posterior matrices calculation and the consistency transformation. Experiments on three popular benchmarks (BAliBASE, PREFAB, OXBench-X) on a quad-core PC equipped with a high-end graphics card show QuickProbs to be 5.7 to 9.7 times faster than the original CPU-parallel MSAProbs. Additional tests performed on several protein families from the Pfam database give an overall speed-up of 6.7. Compared to other algorithms like MAFFT, MUSCLE, or ClustalW, QuickProbs proved to be much more accurate at similar speed. Additionally, we introduce a tuned variant of QuickProbs which is significantly more accurate on sets of distantly related sequences than MSAProbs without exceeding its computation time. The GPU part of QuickProbs was implemented in OpenCL, thus the package is suitable for graphics processors produced by all major vendors. PMID:24586435

  16. BALSA: integrated secondary analysis for whole-genome and whole-exome sequencing, accelerated by GPU.

    PubMed

    Luo, Ruibang; Wong, Yiu-Lun; Law, Wai-Chun; Lee, Lap-Kei; Cheung, Jeanno; Liu, Chi-Man; Lam, Tak-Wah

    2014-01-01

    This paper reports an integrated solution, called BALSA, for the secondary analysis of next generation sequencing data; it exploits the computational power of GPUs and intricate memory management to give fast and accurate analysis. From raw reads to variants (including SNPs and Indels), BALSA, using just a single computing node with a commodity GPU board, takes 5.5 h to process 50-fold whole genome sequencing (∼750 million 100 bp paired-end reads), or just 25 min for 210-fold whole exome sequencing. BALSA's speed is rooted in its parallel algorithms, which effectively exploit a GPU to speed up processes like alignment, realignment and statistical testing. BALSA incorporates a 16-genotype model to support the calling of SNPs and Indels and achieves competitive variant calling accuracy and sensitivity when compared to the ensemble of six popular variant callers. BALSA also supports efficient identification of somatic SNVs and CNVs; experiments showed that BALSA recovers all the previously validated somatic SNVs and CNVs, and it is more sensitive for somatic Indel detection. BALSA outputs variants in VCF format. A pileup-like SNAPSHOT format, while maintaining the same fidelity as BAM in variant calling, enables efficient storage and indexing, and facilitates app development for downstream analyses. BALSA is available at: http://sourceforge.net/p/balsa.

  17. Event Reconstruction for Many-core Architectures using Java

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graf, Norman A.; /SLAC

    Although Moore's Law remains technically valid, the performance enhancements in computing which traditionally resulted from increased CPU speeds ended years ago. Chip manufacturers have chosen to increase the number of CPU cores per chip instead of increasing clock speed. Unfortunately, these extra CPUs do not automatically result in improvements in simulation or reconstruction times. Taking advantage of this extra computing power requires changing how software is written. Event reconstruction is globally serial, in the sense that raw data has to be unpacked first, channels have to be clustered to produce hits before those hits are identified as belonging to a track or shower, tracks have to be found and fit before they are vertexed, etc. However, many of the individual procedures along the reconstruction chain are intrinsically independent and are perfect candidates for optimization using multi-core architectures. Threading is perhaps the simplest approach to parallelizing a program, and Java includes a powerful threading facility built into the language. We have developed a fast and flexible reconstruction package (org.lcsim) written in Java that has been used for numerous physics and detector optimization studies. In this paper we present the results of our studies on optimizing the performance of this toolkit using multiple threads on many-core architectures.
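
    The pattern described, farming intrinsically independent steps of a serial chain out to a pool of threads, looks like this in outline (sketched in Python for brevity; the org.lcsim toolkit itself is Java, and the per-region "work" below is a placeholder):

```python
# Sketch: parallelize intrinsically independent reconstruction steps
# (e.g., clustering each detector region separately) with a thread pool.
from concurrent.futures import ThreadPoolExecutor

def cluster_region(raw_channels):          # stand-in for one independent step
    return sorted(set(raw_channels))       # pretend: deduplicate + order hits

def reconstruct(regions, n_threads=4):
    # Each region is processed independently, so completion order does not
    # matter; map() still returns results in submission order.
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(cluster_region, regions))

hits = reconstruct([[3, 1, 3], [2, 2], [5, 4]])
```

    The globally serial dependencies (unpack before cluster, cluster before track-finding) remain between such parallel phases; only the work inside each phase is fanned out.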

  18. GTZ: a fast compression and cloud transmission tool optimized for FASTQ files.

    PubMed

    Xing, Yuting; Li, Gen; Wang, Zhenguo; Feng, Bolun; Song, Zhuo; Wu, Chengkun

    2017-12-28

    The dramatic development of DNA sequencing technology is generating real big data, demanding ever more storage and bandwidth. To speed up data sharing and bring data to computing resources faster and cheaper, it is necessary to develop a compression tool that can support efficient compression and transmission of sequencing data onto cloud storage. This paper presents GTZ, a compression and transmission tool optimized for FASTQ files. As a reference-free lossless FASTQ compressor, GTZ treats different lines of FASTQ separately, utilizes adaptive context modelling to estimate their characteristic probabilities, and compresses data blocks with arithmetic coding. GTZ can also be used to compress multiple files or directories at once. Furthermore, as a tool to be used in the cloud computing era, it is capable of saving compressed data locally or transmitting data directly into the cloud by choice. We evaluated the performance of GTZ on several diverse FASTQ benchmarks. Results show that in most cases it outperforms many other tools in terms of compression ratio, speed and stability. GTZ is a tool that enables efficient lossless FASTQ data compression and simultaneous data transmission onto the cloud. It emerges as a useful tool for NGS data storage and transmission in the cloud environment. GTZ is freely available online at https://github.com/Genetalks/gtz.
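
    Adaptive context modelling of this kind feeds an arithmetic coder with probabilities learned on the fly; the ideal cost of a symbol is -log2 of its modelled probability. A minimal order-0 adaptive model (no actual arithmetic coder, just the ideal code length one driven by this model would approach; real FASTQ compressors condition the counts on per-stream contexts):

```python
# Sketch: order-0 adaptive frequency model and the ideal code length (bits)
# an arithmetic coder driven by it would approach.
import math

def adaptive_code_length(data, alphabet):
    counts = {s: 1 for s in alphabet}      # Laplace-smoothed initial model
    total, bits = len(alphabet), 0.0
    for s in data:
        bits += -math.log2(counts[s] / total)   # ideal cost of this symbol
        counts[s] += 1                          # then update the model
        total += 1
    return bits

cost = adaptive_code_length("AAAAAAAACG", "ACGT")
```

    Skewed data costs well under the 2 bits/base of a uniform model, which is the whole point of adapting the statistics as the stream is read.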

  19. A fast, programmable hardware architecture for the processing of spaceborne SAR data

    NASA Technical Reports Server (NTRS)

    Bennett, J. R.; Cumming, I. G.; Lim, J.; Wedding, R. M.

    1984-01-01

    The development of high-throughput SAR processors (HTSPs) for the spaceborne SARs being planned by NASA, ESA, DFVLR, NASDA, and the Canadian Radarsat Project is discussed. The basic parameters and data-processing requirements of the SARs are listed in tables, and the principal problems are identified as real operation rates in excess of 2 × 10⁹ per second, I/O rates in excess of 8 × 10⁶ samples per second, and control computation loads (as for range cell migration correction) as high as 1.4 × 10⁶ instructions per second. A number of possible HTSP architectures are reviewed; host/array-processor (H/AP) and distributed-control/data-path (DCDP) architectures are examined in detail and illustrated with block diagrams; and a cost/speed comparison of these two architectures is presented. The H/AP approach is found to be adequate and economical for speeds below 1/200 of real time, while DCDP is more cost-effective above 1/50 of real time.

  20. A solution to neural field equations by a recurrent neural network method

    NASA Astrophysics Data System (ADS)

    Alharbi, Abir

    2012-09-01

    Neural field equations (NFE) are used to model the activity of neurons in the brain; they are derived starting from the single-neuron 'integrate-and-fire' model. The neural continuum is spatially discretized for numerical studies, and the governing equations are modeled as a system of ordinary differential equations. In this article the recurrent neural network approach is used to solve this system of ODEs. The technique combines the standard numerical method of finite differences with the Hopfield neural network. The architecture of the net, the energy function, the updating equations, and the algorithms are developed for the NFE model. A Hopfield neural network is then designed to minimize the energy function modeling the NFE. Results obtained from the Hopfield-finite-differences net show excellent performance in terms of accuracy and speed. The parallel nature of the Hopfield approach may make it easier to implement on fast parallel computers, giving it a speed advantage over traditional methods.
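
    The discretisation step described above (neural continuum → system of ODEs) can be sketched with a generic rate equation du_i/dt = -u_i + Σ_j w_ij f(u_j), integrated here by explicit Euler. The Gaussian connectivity and sigmoid firing function are illustrative choices, not the paper's model or its Hopfield solver:

```python
# Sketch: spatially discretised neural field as a system of ODEs,
# integrated with explicit Euler time stepping.
import math

def simulate(n=20, steps=200, dt=0.05):
    # Gaussian lateral connectivity between grid nodes i and j
    w = [[math.exp(-((i - j) ** 2) / 8.0) for j in range(n)] for i in range(n)]
    f = lambda u: 1.0 / (1.0 + math.exp(-u))          # firing-rate function
    u = [0.1 * i for i in range(n)]                   # arbitrary initial state
    for _ in range(steps):
        du = [-u[i] + sum(w[i][j] * f(u[j]) for j in range(n))
              for i in range(n)]
        u = [u[i] + dt * du[i] for i in range(n)]     # Euler step
    return u

u_final = simulate()
```

    The Hopfield approach of the paper replaces this explicit time stepping with an energy function whose minimisation by the network yields the finite-difference solution.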

  1. Contingency Analysis Post-Processing With Advanced Computing and Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Glaesemann, Kurt; Fitzhenry, Erin

    Contingency analysis is a critical function widely used in energy management systems to assess the impact of power system component failures. Its outputs are important for power system operation for improved situational awareness, power system planning studies, and power market operations. With the increased complexity of power system modeling and simulation caused by increased energy production and demand, the penetration of renewable energy and fast deployment of smart grid devices, and the trend of operating grids closer to their capacity for better efficiency, more and more contingencies must be executed and analyzed quickly in order to ensure grid reliability and accuracy for the power market. Currently, many researchers have proposed different techniques to accelerate the computational speed of contingency analysis, but not much work has been published on how to post-process the large amount of contingency outputs quickly. This paper proposes a parallel post-processing function that can analyze contingency analysis outputs faster and display them in a web-based visualization tool to help power engineers improve their work efficiency through fast information digestion. Case studies using an ESCA-60 bus system and a WECC planning system are presented to demonstrate the functionality of the parallel post-processing technique and the web-based visualization tool.

  2. Using an Improved SIFT Algorithm and Fuzzy Closed-Loop Control Strategy for Object Recognition in Cluttered Scenes

    PubMed Central

    Nie, Haitao; Long, Kehui; Ma, Jun; Yue, Dan; Liu, Jinguo

    2015-01-01

    Partial occlusions, large pose variations, and extreme ambient illumination conditions generally degrade the performance of object recognition systems. This paper therefore presents a novel approach for fast and robust object recognition in cluttered scenes based on an improved scale invariant feature transform (SIFT) algorithm and a fuzzy closed-loop control method. First, a fast SIFT algorithm is proposed that classifies SIFT features into several clusters based on attributes computed from the sub-orientation histogram (SOH); in the feature matching phase, only features that share nearly the same corresponding attributes are compared. Second, feature matching is performed in a prioritized order based on the scale factor calculated between the object image and the target object image, guaranteeing robust feature matching. Finally, a fuzzy closed-loop control strategy is applied to increase the accuracy of the object recognition, which is essential for autonomous object manipulation. Compared to the original SIFT algorithm, the proposed method extracts significantly more SIFT features from an object, and the computing speed of the object recognition process increases by more than 40%. The experimental results confirm that the proposed method performs effectively and accurately in cluttered scenes. PMID:25714094
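
    The clustering idea can be illustrated with a minimal sketch: features are bucketed by a coarse attribute (here a single hypothetical attribute key standing in for the SOH-derived attributes), so matching only compares features whose keys agree.

```python
# Minimal sketch of attribute-bucketed feature matching: only features
# with the same coarse attribute key are ever compared.

from collections import defaultdict

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def bucket_features(features):
    buckets = defaultdict(list)
    for f in features:
        buckets[f["attr"]].append(f)
    return buckets

def match(query_feats, db_feats, max_dist=0.5):
    db = bucket_features(db_feats)
    matches = []
    for q in query_feats:
        candidates = db.get(q["attr"], [])   # same-attribute features only
        best = min(candidates, key=lambda c: dist(q["vec"], c["vec"]),
                   default=None)
        if best and dist(q["vec"], best["vec"]) < max_dist:
            matches.append((q["name"], best["name"]))
    return matches

db = [{"name": "d1", "attr": 0, "vec": [0.1, 0.2]},
      {"name": "d2", "attr": 1, "vec": [0.9, 0.8]}]
query = [{"name": "q1", "attr": 1, "vec": [0.85, 0.8]}]
print(match(query, db))  # → [('q1', 'd2')]
```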

  3. Optimal and fast rotational alignment of volumes with missing data in Fourier space.

    PubMed

    Shatsky, Maxim; Arbelaez, Pablo; Glaeser, Robert M; Brenner, Steven E

    2013-11-01

    Electron tomography of intact cells has the potential to reveal the entire cellular content at a resolution corresponding to individual macromolecular complexes. Characterization of macromolecular complexes in tomograms is nevertheless an extremely challenging task due to the high level of noise, and due to the limited tilt angle that results in missing data in Fourier space. By identifying particles of the same type and averaging their 3D volumes, it is possible to obtain a structure at a more useful resolution for biological interpretation. Currently, classification and averaging of sub-tomograms is limited by the speed of computational methods that optimize alignment between two sub-tomographic volumes. The alignment optimization is hampered by the fact that the missing data in Fourier space has to be taken into account during the rotational search. A similar problem appears in single particle electron microscopy where the random conical tilt procedure may require averaging of volumes with a missing cone in Fourier space. We present a fast implementation of a method guaranteed to find an optimal rotational alignment that maximizes the constrained cross-correlation function (cCCF) computed over the actual overlap of data in Fourier space. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
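
    In very reduced form (1-D signals instead of 3-D volumes, and translation instead of rotation), the constrained cross-correlation idea looks like this: each dataset carries a binary mask of valid Fourier coefficients, and the correlation is computed only over the overlap of the two masks. The sizes, shift, and mask layout are illustrative.

```python
# 1-D toy of a constrained cross-correlation: correlate two signals using
# only the Fourier coefficients present in both masks (the "overlap").

import numpy as np

def constrained_ccf(a, b, mask_a, mask_b):
    A, B = np.fft.fft(a), np.fft.fft(b)
    overlap = mask_a & mask_b              # shared Fourier support
    A, B = A * overlap, B * overlap
    corr = np.real(np.fft.ifft(A * np.conj(B)))
    norm = np.sqrt(np.sum(np.abs(A) ** 2) * np.sum(np.abs(B) ** 2)) / len(a)
    return corr / norm

rng = np.random.default_rng(0)
sig = rng.standard_normal(64)
shifted = np.roll(sig, 5)                  # ground-truth shift of 5 samples
mask = np.ones(64, dtype=bool)
mask[20:30] = False                        # simulate a missing-data region
ccf = constrained_ccf(shifted, sig, mask, mask)
print(int(np.argmax(ccf)))                 # recovered shift despite missing data
```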

  4. Learning Optimized Local Difference Binaries for Scalable Augmented Reality on Mobile Devices.

    PubMed

    Xin Yang; Kwang-Ting Cheng

    2014-06-01

    The efficiency, robustness and distinctiveness of a feature descriptor are critical to the user experience and scalability of a mobile augmented reality (AR) system. However, existing descriptors are either too computationally expensive to achieve real-time performance on a mobile device such as a smartphone or tablet, or not sufficiently robust and distinctive to identify correct matches from a large database. As a result, current mobile AR systems still only have limited capabilities, which greatly restrict their deployment in practice. In this paper, we propose a highly efficient, robust and distinctive binary descriptor, called Learning-based Local Difference Binary (LLDB). LLDB directly computes a binary string for an image patch using simple intensity and gradient difference tests on pairwise grid cells within the patch. To select an optimized set of grid cell pairs, we densely sample grid cells from an image patch and then leverage a modified AdaBoost algorithm to automatically extract a small set of critical ones with the goal of maximizing the Hamming distance between mismatches while minimizing it between matches. Experimental results demonstrate that LLDB is extremely fast to compute and to match against a large database due to its high robustness and distinctiveness. Compared to the state-of-the-art binary descriptors, primarily designed for speed, LLDB has similar efficiency for descriptor construction, while achieving a greater accuracy and faster matching speed when matching over a large database with 2.3M descriptors on mobile devices.
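
    A toy version of the intensity part of such a descriptor (the 2x2 grid, patch size, and test patterns are invented for illustration): binarize pairwise mean-intensity comparisons between grid cells, then compare descriptors by Hamming distance.

```python
# Miniature LDB-style binary descriptor: one bit per ordered cell pair,
# set when cell i is brighter on average than cell j.

def cell_means(patch, grid=2):
    n = len(patch) // grid
    means = []
    for gi in range(grid):
        for gj in range(grid):
            cells = [patch[i][j] for i in range(gi * n, (gi + 1) * n)
                                 for j in range(gj * n, (gj + 1) * n)]
            means.append(sum(cells) / len(cells))
    return means

def describe(patch):
    m = cell_means(patch)
    return [1 if m[i] > m[j] else 0
            for i in range(len(m)) for j in range(i + 1, len(m))]

def hamming(d1, d2):
    return sum(b1 != b2 for b1, b2 in zip(d1, d2))

bright_top = [[9, 9, 9, 9], [9, 9, 9, 9], [1, 1, 1, 1], [1, 1, 1, 1]]
bright_left = [[9, 9, 1, 1], [9, 9, 1, 1], [9, 9, 1, 1], [9, 9, 1, 1]]
print(hamming(describe(bright_top), describe(bright_left)))  # → 5
```

    The real descriptor also uses gradient-difference tests and learns which cell pairs to keep; the bit-string-plus-Hamming-distance structure is what makes matching over millions of descriptors cheap.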

  5. Multiple Frequency Contrast Source Inversion Method for Vertical Electromagnetic Profiling: 2D Simulation Results and Analyses

    NASA Astrophysics Data System (ADS)

    Li, Jinghe; Song, Linping; Liu, Qing Huo

    2016-02-01

    A simultaneous multiple frequency contrast source inversion (CSI) method is applied to reconstructing hydrocarbon reservoir targets in a complex multilayered medium in two dimensions. It simulates the effects of a salt dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver of the 2D volume integral equation for the forward computation. The inversion combines the efficient FFT algorithm, to speed up the matrix-vector multiplication, with the stable convergence of the simultaneous multiple frequency CSI in the iteration process. As a result, the method is capable of effective quantitative conductivity image reconstruction for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples validate the effectiveness and capacity of the simultaneous multiple frequency CSI method for a limited array view in VEP.

  6. On-line surface inspection using cylindrical lens-based spectral domain low-coherence interferometry.

    PubMed

    Tang, Dawei; Gao, Feng; Jiang, X

    2014-08-20

    We present a spectral domain low-coherence interferometry (SD-LCI) method that is effective for applications in on-line surface inspection because it can obtain a surface profile in a single shot. It has an advantage over existing spectral interferometry techniques in that it uses cylindrical lenses as the objective lenses in a Michelson interferometric configuration, enabling the measurement of long profiles. Combined with a modern high-speed CCD camera, a general-purpose graphics processing unit, and multicore processor computing technology, fast measurement can be achieved. By translating the tested sample during the measurement procedure, real-time surface inspection was implemented, as demonstrated by the large-scale 3D surface measurement in this paper. ZEMAX software is used to simulate the SD-LCI system and analyze the alignment errors. Two step-height surfaces were measured, and the captured interferograms were analyzed using a fast Fourier transform algorithm. Both the 2D profile results and the 3D surface maps closely align with the calibrated specifications given by the manufacturer.
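
    The Fourier-transform fringe analysis step can be illustrated in 1-D: the phase of the carrier peak in the interferogram spectrum encodes the quantity of interest (here, a stand-in for the optical path difference). The carrier frequency and phase below are arbitrary test values.

```python
# Toy Fourier-transform fringe analysis: recover the phase of a cosine
# fringe pattern from the FFT bin at the carrier frequency.

import numpy as np

n, carrier, phase_in = 256, 16, 0.7
x = np.arange(n)
fringes = 1 + np.cos(2 * np.pi * carrier * x / n + phase_in)

spectrum = np.fft.fft(fringes)
phase_out = np.angle(spectrum[carrier])   # phase at the carrier bin
print(round(phase_out, 3))                # → 0.7
```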

  7. A combined finite element-boundary integral formulation for solution of two-dimensional scattering problems via CGFFT. [Conjugate Gradient Fast Fourier Transformation

    NASA Technical Reports Server (NTRS)

    Collins, Jeffery D.; Volakis, John L.; Jin, Jian-Ming

    1990-01-01

    A new technique is presented for computing the scattering by 2-D structures of arbitrary composition. The proposed solution approach combines the usual finite element method with the boundary-integral equation to formulate a discrete system. This is subsequently solved via the conjugate gradient (CG) algorithm. A particular characteristic of the method is the use of rectangular boundaries to enclose the scatterer. Several of the resulting boundary integrals are therefore convolutions and may be evaluated via the fast Fourier transform (FFT) in the implementation of the CG algorithm. The solution approach offers the principal advantage of having O(N) memory demand and employs a 1-D FFT versus a 2-D FFT as required with a traditional implementation of the CGFFT algorithm. The speed of the proposed solution method is compared with that of the traditional CGFFT algorithm, and results for rectangular bodies are given and shown to be in excellent agreement with the moment method.
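
    The computational core of the CGFFT idea can be sketched as follows: when the system matrix acts as a (circulant) convolution, the conjugate-gradient matrix-vector product reduces to FFTs, costing O(N log N) rather than O(N^2). The 1-D circulant operator below is an invented stand-in for the boundary-integral convolutions.

```python
# Sketch of CGFFT: conjugate gradient where each matvec is a circular
# convolution evaluated with FFTs (toy symmetric positive-definite case).

import numpy as np

def cg_fft(kernel_fft, b, iters=50, tol=1e-12):
    matvec = lambda v: np.real(np.fft.ifft(kernel_fft * np.fft.fft(v)))
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new < tol ** 2:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 64
kernel = np.zeros(n)
kernel[0], kernel[1], kernel[-1] = 2.0, -0.5, -0.5   # SPD circulant stencil
kf = np.fft.fft(kernel)
b = np.random.default_rng(1).standard_normal(n)
x = cg_fft(kf, b)
residual = np.max(np.abs(np.real(np.fft.ifft(kf * np.fft.fft(x))) - b))
print(residual < 1e-8)  # → True
```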

  8. Fast interactive real-time volume rendering of real-time three-dimensional echocardiography: an implementation for low-end computers

    NASA Technical Reports Server (NTRS)

    Saracino, G.; Greenberg, N. L.; Shiota, T.; Corsi, C.; Lamberti, C.; Thomas, J. D.

    2002-01-01

    Real-time three-dimensional echocardiography (RT3DE) is an innovative cardiac imaging modality. However, partly due to lack of user-friendly software, RT3DE has not been widely accepted as a clinical tool. The object of this study was to develop and implement a fast and interactive volume renderer of RT3DE datasets designed for a clinical environment where speed and simplicity are not secondary to accuracy. Thirty-six patients (20 regurgitation, 8 normal, 8 cardiomyopathy) were imaged using RT3DE. Using our newly developed software, all 3D data sets were rendered in real-time throughout the cardiac cycle and assessment of cardiac function and pathology was performed for each case. The real-time interactive volume visualization system is user friendly and instantly provides consistent and reliable 3D images without expensive workstations or dedicated hardware. We believe that this novel tool can be used clinically for dynamic visualization of cardiac anatomy.
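
    One reason such renderers run interactively on low-end machines is that compositing along a ray is a simple accumulation that can stop early once the ray is opaque. A minimal front-to-back sketch (invented sample data; not the study's software):

```python
# Front-to-back alpha compositing of one ray with early termination,
# the basic operation of an interactive volume renderer.

def composite_ray(samples, opacity=0.3, cutoff=0.99):
    color, alpha = 0.0, 0.0
    for s in samples:                     # samples along the ray, near to far
        color += (1 - alpha) * opacity * s
        alpha += (1 - alpha) * opacity
        if alpha > cutoff:                # ray nearly opaque: stop early
            break
    return color, alpha

c, a = composite_ray([0.9, 0.8, 0.1, 0.0, 0.5] * 10)
print(round(a, 3))                        # saturates close to 1.0
```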

  9. Application of numerical grid generation for improved CFD analysis of multiphase screw machines

    NASA Astrophysics Data System (ADS)

    Rane, S.; Kovačević, A.

    2017-08-01

    Algebraic grid generation is widely used for discretization of the working domain of twin screw machines. It is fast and offers good control over the placement of grid nodes. However, the grid qualities desired for handling multiphase flows, such as oil injection, may at times be difficult to achieve. To obtain fast solutions of multiphase screw machines, it is important to further improve the quality and robustness of the computational grid. In this paper, a deforming grid of a twin screw machine is generated using algebraic transfinite interpolation to produce an initial mesh, upon which an elliptic partial differential equation (PDE) of Poisson's form is solved numerically to produce a smooth final computational mesh. The quality of the numerical cells and their distribution obtained by the differential method is greatly improved. In addition, a similar procedure is introduced to fully smooth the transition of the partitioning rack curve between the rotors, improving the continuous movement of grid nodes and in turn the robustness and speed of the Computational Fluid Dynamics (CFD) solver. An analysis of an oil-injected twin screw compressor is presented to compare the improvements in grid quality factors in the regions of importance, such as the interlobe space, the radial tip, and the core of the rotor. The proposed method, which combines algebraic and differential grid generation, offers a significant improvement in grid quality and in the robustness of the numerical solution.
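
    A toy version of the two-stage procedure (a 2-D quad instead of a screw-machine domain): build an algebraic transfinite-interpolation mesh with deliberately clustered spacing, then smooth the interior by iterating the Laplace equation, the simplest member of the Poisson family. All geometry below is illustrative.

```python
# Stage 1: algebraic (transfinite interpolation) grid on a 4-corner quad.
# Stage 2: elliptic smoothing by Gauss-Seidel iteration of the Laplace
# equation, with boundary nodes held fixed.

def tfi_grid(corners, n):
    (x00, y00), (x10, y10), (x01, y01), (x11, y11) = corners
    grid = []
    for i in range(n):
        u = (i / (n - 1)) ** 2            # clustered spacing (illustrative)
        row = []
        for j in range(n):
            v = (j / (n - 1)) ** 2
            x = (1-u)*(1-v)*x00 + u*(1-v)*x10 + (1-u)*v*x01 + u*v*x11
            y = (1-u)*(1-v)*y00 + u*(1-v)*y10 + (1-u)*v*y01 + u*v*y11
            row.append((x, y))
        grid.append(row)
    return grid

def laplace_smooth(grid, iters=200):
    n = len(grid)
    for _ in range(iters):
        for i in range(1, n - 1):
            for j in range(1, n - 1):     # interior nodes only
                nb = [grid[i-1][j], grid[i+1][j], grid[i][j-1], grid[i][j+1]]
                grid[i][j] = (sum(p[0] for p in nb) / 4,
                              sum(p[1] for p in nb) / 4)
    return grid

initial = tfi_grid([(0, 0), (1, 0), (0, 1), (1.5, 1.5)], 9)
smoothed = laplace_smooth([row[:] for row in initial])
print(initial[4][4], smoothed[4][4])      # interior node is redistributed
```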

  10. Fast by Nature - How Stress Patterns Define Human Experience and Performance in Dexterous Tasks

    PubMed Central

    Pavlidis, I.; Tsiamyrtzis, P.; Shastri, D.; Wesley, A.; Zhou, Y.; Lindner, P.; Buddharaju, P.; Joseph, R.; Mandapati, A.; Dunkin, B.; Bass, B.

    2012-01-01

    In the present study we quantify stress by measuring transient perspiratory responses on the perinasal area through thermal imaging. These responses prove to be sympathetically driven and hence a likely indicator of stress processes in the brain. Armed with the unobtrusive measurement methodology we developed, we were able to monitor stress responses in the context of surgical training, the quintessence of human dexterity. We show that in dexterous tasking under critical conditions, novices attempt to perform a task's steps as fast as experienced individuals. We further show that while fast behavior in experienced individuals is afforded by skill, fast behavior in novices is likely instigated by high stress levels, at the expense of accuracy. Humans avoid adjusting speed to skill and rather grow their skill to a predetermined speed level, likely defined by neurophysiological latency. PMID:22396852

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao, Shoudi; He, Jiansen; Yang, Liping

    The impact of an overtaking fast shock on a magnetic cloud (MC) is a pivotal process in CME–CME (CME: coronal mass ejection) and CME–SIR (SIR: stream interaction region) interactions. An MC, with its strong and rotating magnetic field, is usually deemed a crucial part of a CME. To study the impact of a fast shock on an MC, we perform a 2.5-dimensional numerical magnetohydrodynamic simulation. Two cases are run: without and with the impact of the fast shock. In the former case, the MC expands gradually from its initial state and drives a relatively slow magnetic reconnection with the ambient magnetic field. Analysis of the forces near the core of the MC as a whole body indicates that solar gravity is quite small compared to the Lorentz force and the pressure gradient force. In the second run, a fast shock propagates, relative to the background plasma, at twice the perpendicular fast magnetosonic speed, catches up with, and overtakes the MC. Due to the penetration of the fast shock, the MC is highly compressed and heated, with the temperature growth rate enhanced by a factor of about 10 and the velocity increased to about half of the shock speed. The magnetic reconnection with the ambient magnetic field is also sped up, with the reconnection rate increasing by a factor of two to four as a result of the enhanced density of the current sheet, which is squeezed by the forward motion of the shocked MC.

  12. ZnO nanowire Schottky barrier ultraviolet photodetector with high sensitivity and fast recovery speed

    NASA Astrophysics Data System (ADS)

    Cheng, Gang; Wu, Xinghui; Liu, Bing; Li, Bing; Zhang, Xingtang; Du, Zuliang

    2011-11-01

    ZnO nanowire (NW) ultraviolet (UV) photodetectors have high sensitivity, but the long recovery time is an important limitation for their applications. In this paper, we demonstrate the promising application of a ZnO NW Schottky barrier as a high-performance UV photodetector with high sensitivity and fast recovery speed. The on/off ratio, sensitivity, and photocurrent gain are 4 × 10⁵, 2.6 × 10³ A/W, and 8.5 × 10³, respectively. The recovery time is 0.28 s when the photocurrent decreases by 3 orders of magnitude, and the corresponding time constant is as short as 46 ms. The physical mechanisms of the fast recovery properties are also discussed.

  13. Investigations of Pressure Distribution on Fast Flying Bodies

    NASA Technical Reports Server (NTRS)

    Stamm, G.

    1946-01-01

    The question to be treated is: how high is the pressure in the bow wave caused by a body flying at supersonic speed, and how far reaching are the destructive effects of that wave? The pressure distribution on an s.S. and an S. projectile of normal speed has been ascertained already by the methods of measurement used at the Ballistic Institute of the Technical Academy of the German Air Forces. Now similar investigations of the conditions on especially fast-flying bodies were carried out.

  14. Gas Gun Model and Comparison to Experimental Performance of Pipe Guns Operating with Light Propellant Gases and Large Cryogenic Pellets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, J. R.; Carmichael, J. R.; Gebhart, T. E.

    Injection of multiple large (~10 to 30 mm diameter) shattered pellets into ITER plasmas is presently part of the scheme planned to mitigate the deleterious effects of disruptions on the vessel components. To help in the design and optimize performance of the pellet injectors for this application, a model referred to as “the gas gun simulator” has been developed and benchmarked against experimental data. The computer code simulator is a Java program that models the gas-dynamics characteristics of a single-stage gas gun. Following a stepwise approach, the code utilizes a variety of input parameters to incrementally simulate and analyze the dynamics of the gun as the projectile is launched down the barrel. Using input data, the model can calculate gun performance based on physical characteristics, such as propellant-gas and fast-valve properties, barrel geometry, and pellet mass. Although the model is fundamentally generic, the present version is configured to accommodate cryogenic pellets composed of H2, D2, Ne, Ar, and mixtures of them and light propellant gases (H2, D2, and He). The pellets are solidified in situ in pipe guns that consist of stainless steel tubes and fast-acting valves that provide the propellant gas for pellet acceleration (to speeds ~200 to 700 m/s). The pellet speed is the key parameter in determining the response time of a shattered pellet system to a plasma disruption event. The calculated speeds from the code simulations of experiments were typically in excellent agreement with the measured values. With the gas gun simulator validated for many test shots and over a wide range of physical and operating parameters, it is a valuable tool for optimization of the injector design, including the fast valve design (orifice size and volume) for any operating pressure (~40 bar expected for the ITER application) and barrel length for any pellet size (mass, diameter, and length). Key design parameters and proposed values for the pellet injectors for the ITER disruption mitigation systems are discussed.
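
    For a flavor of what such a stepwise gas-gun model computes (this is not the Java simulator; it is a much-simplified ideal-gas sketch that neglects valve dynamics, friction, back pressure, and heat transfer, with illustrative dimensions):

```python
# Toy single-stage gas gun: step the projectile down the barrel while the
# propellant reservoir expands isentropically behind it.

import math

def gun_speed(p0=40e5, v0=1e-4, bore_d=0.012, barrel_len=1.0,
              m_pellet=1e-3, gamma=1.4, dt=1e-6):
    area = math.pi * (bore_d / 2) ** 2
    x, v = 0.0, 0.0
    while x < barrel_len:
        vol = v0 + area * x                 # gas volume behind the pellet
        p = p0 * (v0 / vol) ** gamma        # isentropic expansion
        a = p * area / m_pellet             # friction and back pressure ignored
        v += a * dt
        x += v * dt
    return v

speed = gun_speed()
print(f"muzzle speed ~ {speed:.0f} m/s")    # same order as the ~200-700 m/s above
```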

  15. Gas Gun Model and Comparison to Experimental Performance of Pipe Guns Operating with Light Propellant Gases and Large Cryogenic Pellets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Combs, S. K.; Reed, J. R.; Lyttle, M. S.

    2016-01-01

    Injection of multiple large (~10 to 30 mm diameter) shattered pellets into ITER plasmas is presently part of the scheme planned to mitigate the deleterious effects of disruptions on the vessel components. To help in the design and optimize performance of the pellet injectors for this application, a model referred to as “the gas gun simulator” has been developed and benchmarked against experimental data. The computer code simulator is a Java program that models the gas-dynamics characteristics of a single-stage gas gun. Following a stepwise approach, the code utilizes a variety of input parameters to incrementally simulate and analyze the dynamics of the gun as the projectile is launched down the barrel. Using input data, the model can calculate gun performance based on physical characteristics, such as propellant-gas and fast-valve properties, barrel geometry, and pellet mass. Although the model is fundamentally generic, the present version is configured to accommodate cryogenic pellets composed of H2, D2, Ne, Ar, and mixtures of them and light propellant gases (H2, D2, and He). The pellets are solidified in situ in pipe guns that consist of stainless steel tubes and fast-acting valves that provide the propellant gas for pellet acceleration (to speeds ~200 to 700 m/s). The pellet speed is the key parameter in determining the response time of a shattered pellet system to a plasma disruption event. The calculated speeds from the code simulations of experiments were typically in excellent agreement with the measured values. With the gas gun simulator validated for many test shots and over a wide range of physical and operating parameters, it is a valuable tool for optimization of the injector design, including the fast valve design (orifice size and volume) for any operating pressure (~40 bar expected for the ITER application) and barrel length for any pellet size (mass, diameter, and length). Key design parameters and proposed values for the pellet injectors for the ITER disruption mitigation systems are discussed.

  16. A methodology for analysing lateral coupled behavior of high speed railway vehicles and structures

    NASA Astrophysics Data System (ADS)

    Antolín, P.; Goicolea, J. M.; Astiz, M. A.; Alonso, A.

    2010-06-01

    The continuous increase in the speed of high-speed trains entails an increase in their kinetic energy. The main goal of this article is to study the coupled lateral behavior of vehicle-structure systems for high-speed trains. Nonlinear finite element methods are used for the structures, whereas multibody dynamics methods are employed for the vehicles. Special attention must be paid to the rolling contact constraints that couple bridge decks and train wheels; the dynamic models must include mixed variables (displacements and creepages) and contact algorithms adequate for wheel-rail contact. The coupled vehicle-structure system is studied in an implicit dynamic framework. Because very different subsystems (trains and bridges) are present, different frequencies are involved in the problem, leading to stiff systems. For normal contact between train wheels and bridge decks, the penalty method is studied. For tangential contact, the FastSim algorithm solves the contact at each time step by solving a differential equation involving relative displacements and creepage variables. Integration of the total forces over the contact ellipse domain is performed for each train wheel at each solver iteration. Coupling between trains and bridges requires special treatment according to the kinematic constraints imposed in the wheel-rail pair and the load transmission. A numerical example is presented.

  17. Electron hole tracking PIC simulation

    NASA Astrophysics Data System (ADS)

    Zhou, Chuteng; Hutchinson, Ian

    2016-10-01

    An electron hole is a coherent BGK mode solitary wave. Electron holes are observed to travel at high velocities relative to bulk plasmas. The kinematics of a 1-D electron hole is studied using a novel Particle-In-Cell simulation code with fully kinetic ions. A hole tracking technique enables us to follow the trajectory of a fast-moving solitary hole and study quantitatively hole acceleration and coupling to ions. The electron hole signal is detected and the simulation domain moves by a carefully designed feedback control law to follow its propagation. This approach has the advantage that the length of the simulation domain can be significantly reduced to several times the hole width, which makes high resolution simulations tractable. We observe a transient at the initial stage of hole formation when the hole accelerates to several times the cold-ion sound speed. Artificially imposing slow ion speed changes on a fully formed hole causes its velocity to change even when the ion stream speed in the hole frame greatly exceeds the ion thermal speed, so there are no reflected ions. The behavior that we observe in numerical simulations agrees very well with our analytic theory of hole momentum conservation and energization effects we call ``jetting''. The work was partially supported by the NSF/DOE Basic Plasma Science Partnership under Grant DE-SC0010491. Computer simulations were carried out on the MIT PSFC parallel AMD Opteron/Infiniband cluster Loki.

  18. Shocks inside CMEs: A survey of properties from 1997 to 2006

    NASA Astrophysics Data System (ADS)

    Lugaz, N.; Farrugia, C. J.; Smith, C. W.; Paulson, K.

    2015-04-01

    We report on 49 fast-mode forward shocks propagating inside coronal mass ejections (CMEs) as measured by Wind and ACE at 1 AU from 1997 to 2006. Compared to typical CME-driven shocks, these shocks propagate in different upstream conditions, where the median upstream Alfvén speed is 85 km s⁻¹, the proton β = 0.08, and the magnetic field strength is 8 nT. These shocks are fast, with a median speed of 590 km s⁻¹, but weak, with a median Alfvénic Mach number of 1.9. They typically compress the magnetic field and density by a factor of 2-3. The most extreme upstream conditions found were a fast magnetosonic speed of 230 km s⁻¹, a plasma β of 0.02, an upstream solar wind speed of 740 km s⁻¹, and a density of 0.5 cm⁻³. Nineteen of these complex events were associated with an intense geomagnetic storm (peak Dst under -100 nT) within 12 h of the shock detection at Wind, and 15 were associated with a drop of the storm-time Dst index of more than 50 nT between 3 and 9 h after shock detection. We also compare them to a sample of 45 shocks propagating in more typical upstream conditions. We show the average properties of these shocks through a superposed epoch analysis, and we present some analytical considerations regarding the compression ratios of shocks in low-β regimes. As most of these shocks are measured in the back half of a CME, we conclude that about half of the shocks may not remain fast-mode shocks as they propagate through an entire CME, due to the large upstream Alfvén and magnetosonic speeds.
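
    The quoted upstream numbers are easy to sanity-check. With B = 8 nT and an assumed proton density of 5 cm⁻³ (not stated above), the Alfvén speed comes out near the 85 km/s median, and a 590 km/s shock running into an assumed 440 km/s upstream flow gives an Alfvénic Mach number near the 1.9 median; both assumed values are illustrative.

```python
# Alfvén speed v_A = B / sqrt(mu0 * rho) and Alfvénic Mach number of a
# shock moving relative to the upstream flow (assumed inputs flagged above).

import math

def alfven_speed(b_tesla, n_per_cc, m_p=1.673e-27, mu0=4e-7 * math.pi):
    rho = n_per_cc * 1e6 * m_p             # protons/cm^3 -> kg/m^3
    return b_tesla / math.sqrt(mu0 * rho)

va = alfven_speed(8e-9, 5.0) / 1000.0      # km/s; density is an assumption
mach = (590 - 440) / va                    # shock speed minus assumed flow speed
print(round(va), round(mach, 1))           # → 78 1.9
```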

  19. An investigation of the speeding-related crash designation through crash narrative reviews sampled via logistic regression.

    PubMed

    Fitzpatrick, Cole D; Rakasi, Saritha; Knodler, Michael A

    2017-01-01

    Speed is one of the most important factors in traffic safety as higher speeds are linked to increased crash risk and higher injury severities. Nearly a third of fatal crashes in the United States are designated as "speeding-related", which is defined as either "the driver behavior of exceeding the posted speed limit or driving too fast for conditions." While many studies have utilized the speeding-related designation in safety analyses, no studies have examined the underlying accuracy of this designation. Herein, we investigate the speeding-related crash designation through the development of a series of logistic regression models that were derived from the established speeding-related crash typologies and validated using a blind review, by multiple researchers, of 604 crash narratives. The developed logistic regression model accurately identified crashes which were not originally designated as speeding-related but had crash narratives that suggested speeding as a causative factor. Only 53.4% of crashes designated as speeding-related contained narratives which described speeding as a causative factor. Further investigation of these crashes revealed that the driver contributing code (DCC) of "driving too fast for conditions" was being used in three separate situations. Additionally, this DCC was also incorrectly used when "exceeding the posted speed limit" would likely have been a more appropriate designation. Finally, it was determined that the responding officer only utilized one DCC in 82% of crashes not designated as speeding-related but contained a narrative indicating speed as a contributing causal factor. The use of logistic regression models based upon speeding-related crash typologies offers a promising method by which all possible speeding-related crashes could be identified. Published by Elsevier Ltd.
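
    A miniature of the modeling step (illustrative only: the features, data, and fit are invented, and the paper's models were built from established crash typologies): a logistic regression fit by stochastic gradient ascent scores crashes so that likely speeding-related ones can be sampled for narrative review.

```python
# Tiny logistic regression fit by stochastic gradient ascent on made-up
# indicator features (single-vehicle, on-curve, wet-road).

import math

def fit_logistic(X, y, lr=0.5, epochs=2000):
    w = [0.0] * (len(X[0]) + 1)                  # intercept + weights
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1 / (1 + math.exp(-z))
            w[0] += lr * (yi - p)                # gradient of the log-likelihood
            for j, xj in enumerate(xi):
                w[j + 1] += lr * (yi - p) * xj
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 / (1 + math.exp(-z))

# columns: single-vehicle, on-curve, wet-road; label: speeding-related
X = [[1, 1, 0], [1, 1, 1], [0, 0, 0], [0, 1, 0], [1, 0, 1], [0, 0, 1]]
y = [1, 1, 0, 0, 1, 0]
w = fit_logistic(X, y)
print(round(predict(w, [1, 1, 1]), 2))   # high score -> sample for review
```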

  20. Smart light random memory sprays Retinex: a fast Retinex implementation for high-quality brightness adjustment and color correction.

    PubMed

    Banić, Nikola; Lončarić, Sven

    2015-11-01

    Removing the influence of illumination on image colors and adjusting the brightness across the scene are important image enhancement problems. This is achieved by applying adequate color constancy and brightness adjustment methods. One of the earliest models to deal with both of these problems was the Retinex theory. Some of the Retinex implementations tend to give high-quality results by performing local operations, but they are computationally relatively slow. One of the recent Retinex implementations is light random sprays Retinex (LRSR). In this paper, a new method is proposed for brightness adjustment and color correction that overcomes the main disadvantages of LRSR. There are three main contributions of this paper. First, a concept of memory sprays is proposed to reduce the number of LRSR's per-pixel operations to a constant regardless of the parameter values, thereby enabling a fast Retinex-based local image enhancement. Second, an effective remapping of image intensities is proposed that results in significantly higher quality. Third, the problem of LRSR's halo effect is significantly reduced by using an alternative illumination processing method. The proposed method enables a fast Retinex-based image enhancement by processing Retinex paths in a constant number of steps regardless of the path size. Due to the halo effect removal and remapping of the resulting intensities, the method outperforms many of the well-known image enhancement methods in terms of resulting image quality. The results are presented and discussed. It is shown that the proposed method outperforms most of the tested methods in terms of image brightness adjustment, color correction, and computational speed.
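
    A stripped-down single-pixel step of the plain random-sprays idea (not the memory-spray, intensity-remapping, or halo-removal machinery of the proposed method): each pixel is normalized by the maximum intensity found in a random spray of neighboring samples. Spray size and radius are illustrative parameters.

```python
# One random-sprays Retinex step for a single pixel: relative lightness
# with respect to the local maximum sampled by a random spray.

import random

def rsr_pixel(img, x, y, spray_size=16, radius=8, rng=random.Random(42)):
    h, w = len(img), len(img[0])
    m = img[y][x]
    for _ in range(spray_size):
        sx = min(max(x + rng.randint(-radius, radius), 0), w - 1)
        sy = min(max(y + rng.randint(-radius, radius), 0), h - 1)
        m = max(m, img[sy][sx])
    return img[y][x] / m            # relative lightness in (0, 1]

# bright left half, dark right half: the dark side is rescaled upward
img = [[200] * 8 + [50] * 8 for _ in range(16)]
print(rsr_pixel(img, 12, 8))        # dark pixel normalized by its local max
```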

  1. Track reconstruction in the emulsion-lead target of the OPERA experiment using the ESS microscope

    NASA Astrophysics Data System (ADS)

    Arrabito, L.; Bozza, C.; Buontempo, S.; Consiglio, L.; Cozzi, M.; D'Ambrosio, N.; DeLellis, G.; DeSerio, M.; Di Capua, F.; Di Ferdinando, D.; Di Marco, N.; Ereditato, A.; Esposito, L. S.; Fini, R. A.; Giacomelli, G.; Giorgini, M.; Grella, G.; Ieva, M.; Janicsko Csathy, J.; Juget, F.; Kreslo, I.; Laktineh, I.; Manai, K.; Mandrioli, G.; Marotta, A.; Migliozzi, P.; Monacelli, P.; Moser, U.; Muciaccia, M. T.; Pastore, A.; Patrizii, L.; Petukhov, Y.; Pistillo, C.; Pozzato, M.; Romano, G.; Rosa, G.; Russo, A.; Savvinov, N.; Schembri, A.; Scotto Lavina, L.; Simone, S.; Sioli, M.; Sirignano, C.; Sirri, G.; Strolin, P.; Tioukov, V.; Waelchli, T.

    2007-05-01

    The OPERA experiment, designed to conclusively prove the existence of νμ→ντ oscillations in the atmospheric sector, makes use of a massive lead-nuclear emulsion target to observe the appearance of ντ's in the CNGS νμ beam. The location and analysis of the neutrino interactions in quasi real-time required the development of fast computer-controlled microscopes able to reconstruct particle tracks with sub-micron precision and high efficiency at a speed of ~20 cm²/h. This paper describes the performance in particle track reconstruction of the European Scanning System, a novel automatic microscope for the measurement of emulsion films developed for OPERA.

  2. Qubit Architecture with High Coherence and Fast Tunable Coupling.

    PubMed

    Chen, Yu; Neill, C; Roushan, P; Leung, N; Fang, M; Barends, R; Kelly, J; Campbell, B; Chen, Z; Chiaro, B; Dunsworth, A; Jeffrey, E; Megrant, A; Mutus, J Y; O'Malley, P J J; Quintana, C M; Sank, D; Vainsencher, A; Wenner, J; White, T C; Geller, Michael R; Cleland, A N; Martinis, John M

    2014-11-28

    We introduce a superconducting qubit architecture that combines high-coherence qubits and tunable qubit-qubit coupling. With the ability to set the coupling to zero, we demonstrate that this architecture is protected from the frequency crowding problems that arise from fixed coupling. More importantly, the coupling can be tuned dynamically with nanosecond resolution, making this architecture a versatile platform with applications ranging from quantum logic gates to quantum simulation. We illustrate the advantages of dynamical coupling by implementing a novel adiabatic controlled-z gate, with a speed approaching that of single-qubit gates. Integrating coherence and scalable control, the introduced qubit architecture provides a promising path towards large-scale quantum computation and simulation.

  3. An accurate, compact and computationally efficient representation of orbitals for quantum Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Luo, Ye; Esler, Kenneth; Kent, Paul; Shulenburger, Luke

    Quantum Monte Carlo (QMC) calculations of giant molecules and of the surface and defect properties of solids have recently become feasible due to drastically expanding computational resources. However, with the most computationally efficient basis set, B-splines, these calculations are severely restricted by the memory capacity of compute nodes. The B-spline coefficients are shared on a node but not distributed among nodes, to ensure fast evaluation. A hybrid representation, which incorporates atomic orbitals near the ions and B-splines in the interstitial regions, offers a more accurate and less memory-demanding description of the orbitals, because they are naturally more atomic-like near ions and much smoother in between, thus allowing coarser B-spline grids. We demonstrate the advantage of the hybrid representation over pure B-spline and Gaussian basis sets, and also show significant speed-ups, for example in computing the non-local pseudopotentials with our new scheme. Moreover, we discuss a new algorithm for atomic-orbital initialization, which previously required an extra workflow step taking a few days. With this work, the highly efficient hybrid representation paves the way to simulating large, even inhomogeneous, systems using QMC. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Computational Materials Sciences Program.

  4. Image matrix processor for fast multi-dimensional computations

    DOEpatents

    Roberson, G.P.; Skeate, M.F.

    1996-10-15

    An apparatus for multi-dimensional computation is disclosed which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination. 10 figs.
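
    The source-cache / coefficient-table / target-cache dataflow described above amounts to forming each target element as a weighted combination of source elements. The following minimal Python sketch is hypothetical; the function name, the dense-matrix form, and the example kernel are illustrative assumptions, not taken from the patent:

```python
def transform(source, coeff_table):
    """One image-transformation step: each target element is a weighted
    sum of source-cache elements, with weights from the coefficient table."""
    target = []
    for row in coeff_table:  # one row of weights per target element
        target.append(sum(w * s for w, s in zip(row, source)))
    return target

source = [1.0, 2.0, 3.0]       # contents of the source cache
coeff = [[0.5, 0.5, 0.0],      # coefficient table (e.g. a smoothing
         [0.0, 0.5, 0.5]]      # or interpolation kernel)
print(transform(source, coeff))  # [1.5, 2.5]
```

    In the patented apparatus this loop would run inside each parallel processing module, with the result loaded into the target cache; the sketch shows only the arithmetic.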

  5. The relative temporal sequence of decline in mobility and cognition among initially unimpaired older adults: Results from the Baltimore Longitudinal Study of Aging.

    PubMed

    Tian, Qu; An, Yang; Resnick, Susan M; Studenski, Stephanie

    2017-05-01

    Most older individuals who experience mobility decline also show cognitive decline, but whether cognitive decline precedes or follows mobility limitation is not well understood. We examined the temporal sequence of mobility and cognition among initially unimpaired older adults. Mobility and cognition were assessed every 2 years for 6 years in 412 participants aged ≥60 with initially unimpaired cognition and gait speed. Using autoregressive models, accounting for the dependent variable from the prior assessment, baseline age, sex, body mass index and education, we examined the temporal sequence of change in mobility (6 m usual gait speed, 400 m fast walk time) and executive function (visuoperceptual speed: Digit Symbol Substitution Test (DSST); cognitive flexibility: Trail Making Test part B (TMT-B)) or memory (California Verbal Learning Test (CVLT) immediate, short-delay, long-delay). There was a bidirectional relationship over time between slower usual gait speed and both poorer DSST and TMT-B scores (Bonferroni-corrected P < 0.005). In contrast, slower 400 m fast walk time predicted subsequent poorer DSST, TMT-B, CVLT immediate recall and CVLT short-delay scores (P < 0.005), while these measures did not predict subsequent 400 m fast walk time (P > 0.005). Among initially unimpaired older adults, the temporal relationship between usual gait speed and executive function is bidirectional, with each predicting change in the other, while poor fast walking performance predicts future executive function and memory changes but not vice versa. Challenging tasks like the 400 m walk appear superior to usual gait speed for predicting executive function and memory change in unimpaired older adults. Published by Oxford University Press on behalf of the British Geriatrics Society 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  6. Mixed-method pre-cooling reduces physiological demand without improving performance of medium-fast bowling in the heat.

    PubMed

    Minett, Geoffrey M; Duffield, Rob; Kellett, Aaron; Portus, Marc

    2012-05-01

    This study examined physiological and performance effects of pre-cooling on medium-fast bowling in the heat. Ten, medium-fast bowlers completed two randomised trials involving either cooling (mixed-methods) or control (no cooling) interventions before a 6-over bowling spell in 31.9±2.1°C and 63.5±9.3% relative humidity. Measures included bowling performance (ball speed, accuracy and run-up speeds), physical characteristics (global positioning system monitoring and counter-movement jump height), physiological (heart rate, core temperature, skin temperature and sweat loss), biochemical (serum concentrations of damage, stress and inflammation) and perceptual variables (perceived exertion and thermal sensation). Mean ball speed (114.5±7.1 vs. 114.1±7.2 km · h(-1); P = 0.63; d = 0.09), accuracy (43.1±10.6 vs. 44.2±12.5 AU; P = 0.76; d = 0.14) and total run-up speed (19.1±4.1 vs. 19.3±3.8 km · h(-1); P = 0.66; d = 0.06) did not differ between pre-cooling and control respectively; however 20-m sprint speed between overs was 5.9±7.3% greater at Over 4 after pre-cooling (P = 0.03; d = 0.75). Pre-cooling reduced skin temperature after the intervention period (P = 0.006; d = 2.28), core temperature and pre-over heart rates throughout (P = 0.01-0.04; d = 0.96-1.74) and sweat loss by 0.4±0.3 kg (P = 0.01; d = 0.34). Mean rating of perceived exertion and thermal sensation were lower during pre-cooling trials (P = 0.004-0.03; d = 0.77-3.13). Despite no observed improvement in bowling performance, pre-cooling maintained between-over sprint speeds and blunted physiological and perceptual demands to ease the thermoregulatory demands of medium-fast bowling in hot conditions.

  7. Mechanical work and efficiency of 5 + 5 m shuttle running.

    PubMed

    Zamparo, Paola; Pavei, Gaspare; Nardello, Francesca; Bartolini, Davide; Monte, Andrea; Minetti, Alberto E

    2016-10-01

    Acceleration and deceleration phases characterise shuttle running (SR) compared to constant speed running (CR); mechanical work is thus expected to be larger in the former compared to the latter, at the same average speed (v mean). The aim of this study was to measure total mechanical work (W tot (+) , J kg(-1) m(-1)) during SR as the sum of internal (W int (+) ) and external (W ext (+) ) work and to calculate the efficiency of SR. Twenty males were requested to perform shuttle runs over a distance of 5 + 5 m at different speeds (slow, moderate and fast) to record kinematic data. Metabolic data were also recorded (at fast speed only) to calculate energy cost (C, J kg(-1) m(-1)) and mechanical efficiency (eff(+) = W tot (+) C (-1)) of SR. Work parameters significantly increased with speed (P < 0.001): W ext (+)  = 1.388 + 0.337 v mean; W int (+)  = -1.002 + 0.853 v mean; W tot (+)  = 1.329 v mean. At the fastest speed C was 27.4 ± 2.6 J kg(-1) m(-1) (i.e. about 7 times larger than in CR) and eff(+) was 16.2 ± 2.0 %. W ext (+) is larger in SR than in CR (2.5 vs. 1.4 J kg(-1) m(-1) in the range of investigated speeds: 2-3.5 m s(-1)) and W int (+) , at fast speed, is about half of W tot (+) . eff(+) is lower in SR (16 %) than in CR (50-60 % at comparable speeds) and this can be attributed to a lower elastic energy reutilization due to the acceleration/deceleration phases over this short shuttle distance.
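
    The regression equations reported in this abstract can be checked numerically. A small sketch, with an illustrative mean speed of 3.0 m/s (work in J kg(-1) m(-1), speeds in m/s); the chosen speed is an assumption within the investigated 2-3.5 m/s range:

```python
def w_ext(v):
    """External work per unit distance, from the reported regression."""
    return 1.388 + 0.337 * v

def w_int(v):
    """Internal work per unit distance, from the reported regression."""
    return -1.002 + 0.853 * v

def w_tot(v):
    """Total work per unit distance, from the reported regression."""
    return 1.329 * v

v = 3.0  # illustrative mean speed
print(round(w_ext(v), 3))  # 2.399
print(round(w_int(v), 3))  # 1.557
print(round(w_tot(v), 3))  # 3.987 (close to w_ext + w_int = 3.956, since
                           # the three regressions were fit independently)

# Efficiency = W_tot / C; with the reported C = 27.4 J kg^-1 m^-1, the
# reported 16.2% efficiency corresponds to a speed near 3.34 m/s:
print(round(w_tot(3.34) / 27.4, 3))  # 0.162
```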

  8. Skills Associated with Line Breaks in Elite Rugby Union

    PubMed Central

    den Hollander, Steve; Brown, James; Lambert, Michael; Treu, Paul; Hendricks, Sharief

    2016-01-01

    The ability of the attacking team to break through the defensive line is a key indicator of success as it creates opportunities to score tries. The aim of this study was to analyse line breaks and identify the associated skills and playing characteristics. The 2013 Super Rugby season (125 games) was analysed, in which 362 line breaks were identified and coded using variables that assessed team patterns and non-contact attacking skills in the phases preceding the line break. There was an average of 3 line breaks per game, with 39% of line breaks resulting in a try. Line breaks occurred when the ball-carrier was running fast [61%, x2(4) = 25.784, p < 0.001, Cramer’s v = 0.1922, weak]. At a moderate distance, short lateral passes (19%) and skip passes (15%) accounted for the highest percentage of line breaks [x2(26) = 50.899, p = 0.036, Cramer’s v = 0.2484, moderate]. Faster defensive line speeds resulted in more line breaks [x2(12) = 61.703, p < 0.001, Cramer’s v = 0.3026, moderate]. Line breaks are associated with overall team success and try scoring opportunities. Awareness of the defenders’ line speed and depth, fast running speed when receiving the ball, and quick passing between attackers to the outside backs create line break opportunities. During training, coaches should emphasise the movement speed of the ball between attackers and manipulate the speed and distance of the defenders. Key points: Line breaks are associated with overall team success and try scoring opportunities. Awareness of the defenders’ line speed and depth, fast running speed when receiving the ball, and quick passing between attackers to the outside backs create line break opportunities. During training, coaches should emphasise the movement speed of the ball between attackers and manipulate the speed and distance of the defenders. PMID:27803629

  9. Feasibility of shutter-speed DCE-MRI for improved prostate cancer detection.

    PubMed

    Li, Xin; Priest, Ryan A; Woodward, William J; Tagge, Ian J; Siddiqui, Faisal; Huang, Wei; Rooney, William D; Beer, Tomasz M; Garzotto, Mark G; Springer, Charles S

    2013-01-01

    The feasibility of shutter-speed model dynamic-contrast-enhanced MRI pharmacokinetic analyses for prostate cancer detection was investigated in a prebiopsy patient cohort. Differences of results from the fast-exchange-regime-allowed (FXR-a) shutter-speed model version and the fast-exchange-limit-constrained (FXL-c) standard model are demonstrated. Although the spatial information is more limited, postdynamic-contrast-enhanced MRI biopsy specimens were also examined. The MRI results were correlated with the biopsy pathology findings. Of all the model parameters, region-of-interest-averaged K(trans) difference [ΔK(trans) ≡ K(trans)(FXR-a) - K(trans)(FXL-c)] or two-dimensional K(trans)(FXR-a) vs. k(ep)(FXR-a) values were found to provide the most useful biomarkers for malignant/benign prostate tissue discrimination (at 100% sensitivity for a population of 13, the specificity is 88%) and disease burden determination. (The best specificity for the fast-exchange-limit-constrained analysis is 63%, with the two-dimensional plot.) K(trans) and k(ep) are each measures of passive transcapillary contrast reagent transfer rate constants. Parameter value increases with shutter-speed model (relative to standard model) analysis are larger in malignant foci than in normal-appearing glandular tissue. Pathology analyses verify the shutter-speed model (FXR-a) promise for prostate cancer detection. Parametric mapping may further improve pharmacokinetic biomarker performance. Copyright © 2012 Wiley Periodicals, Inc.

  10. Next Generation CTAS Tools

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz

    2000-01-01

    The FAA's Free Flight Phase 1 Office is in the process of deploying the current generation of CTAS tools, which are the Traffic Management Advisor (TMA) and the passive Final Approach Spacing Tool (pFAST), at selected centers and airports. Research at NASA is now focused on extending the CTAS software and computer-human interfaces to provide more advanced capabilities. The Multi-center TMA (McTMA) is designed to operate at airports where arrival flows originate from two or more centers whose boundaries are in close proximity to the TRACON boundary. McTMA will also include techniques for routing arrival flows away from congested airspace and around airspace reserved for arrivals into other hub airports. NASA is working with FAA and MITRE to build a prototype McTMA for the Philadelphia airport. The active Final Approach Spacing Tool (aFAST) provides speed and heading advisories to help controllers achieve accurate spacing between aircraft on final approach. These advisories will be integrated with those in the existing pFAST to provide a set of comprehensive advisories for controlling arrival traffic from the TRACON boundary to touchdown at complex, high-capacity airports. A research prototype of aFAST, designed for Dallas-Fort Worth, is in an advanced stage of development. The Expedite Departure Path (EDP) and Direct-To tools are designed to help controllers guide departing aircraft out of the TRACON airspace and to climb to cruise altitude along the most efficient routes.

  11. Modification of wave propagation and wave travel-time by the presence of magnetic fields in the solar network atmosphere

    NASA Astrophysics Data System (ADS)

    Nutto, C.; Steiner, O.; Schaffenberger, W.; Roth, M.

    2012-02-01

    Context. Observations of waves at frequencies above the acoustic cut-off frequency have revealed vanishing wave travel-times in the vicinity of strong magnetic fields. This detection of apparently evanescent waves, instead of the expected propagating waves, has remained a riddle. Aims: We investigate the influence of a strong magnetic field on the propagation of magneto-acoustic waves in the atmosphere of the solar network. We test whether mode conversion effects can account for the shortening in wave travel-times between different heights in the solar atmosphere. Methods: We carry out numerical simulations of the complex magneto-atmosphere representing the solar magnetic network. In the simulation domain, we artificially excite high frequency waves whose wave travel-times between different height levels we then analyze. Results: The simulations demonstrate that the wave travel-time in the solar magneto-atmosphere is strongly influenced by mode conversion. In a layer enclosing the surface sheet defined by the set of points where the Alfvén speed and the sound speed are equal, called the equipartition level, energy is partially transferred from the fast acoustic mode to the fast magnetic mode. Above the equipartition level, the fast magnetic mode is refracted due to the large gradient of the Alfvén speed. The refractive wave path and the increasing phase speed of the fast mode inside the magnetic canopy significantly reduce the wave travel-time, provided that both observing levels are above the equipartition level. Conclusions: Mode conversion and the resulting excitation and propagation of fast magneto-acoustic waves is responsible for the observation of vanishing wave travel-times in the vicinity of strong magnetic fields. In particular, the wave propagation behavior of the fast mode above the equipartition level may mimic evanescent behavior. 
The present wave propagation experiments provide an explanation of vanishing wave travel-times as observed with multi-line high-cadence instruments. Movies are available in electronic form at http://www.aanda.org

  12. TH-A-19A-08: Intel Xeon Phi Implementation of a Fast Multi-Purpose Monte Carlo Simulation for Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souris, K; Lee, J; Sterpin, E

    2014-06-15

    Purpose: Recent studies have demonstrated the capability of graphics processing units (GPUs) to compute dose distributions using Monte Carlo (MC) methods within clinical time constraints. However, GPUs have a rigid vectorial architecture that favors the implementation of simplified particle transport algorithms, adapted to specific tasks. Our new, fast, and multipurpose MC code, named MCsquare, runs on Intel Xeon Phi coprocessors. This technology offers 60 independent cores, and therefore more flexibility to implement fast and yet generic MC functionalities, such as prompt gamma simulations. Methods: MCsquare implements several models and hence allows users to make their own tradeoff between speed and accuracy. A 200 MeV proton beam is simulated in a heterogeneous phantom using Geant4 and two configurations of MCsquare. The first one is the most conservative and accurate. The method of fictitious interactions handles the interfaces, and secondary charged particles emitted in nuclear interactions are fully simulated. The second, faster configuration simplifies interface crossings and simulates only secondary protons after nuclear interaction events. Integral depth-dose and transversal profiles are compared to those of Geant4. Moreover, the production profile of prompt gammas is compared to PENH results. Results: Integral depth-dose and transversal profiles computed by MCsquare and Geant4 agree within 3%. The production of secondaries from nuclear interactions is slightly inaccurate at interfaces for the fastest configuration of MCsquare, but this is unlikely to have any clinical impact. The computation time ranges from 90 seconds with the most conservative settings to merely 59 seconds in the fastest configuration. Finally, prompt gamma profiles are also in very good agreement with PENH results. 
Conclusion: Our new, fast, and multi-purpose Monte Carlo code simulates prompt gammas and calculates dose distributions in less than a minute, which complies with clinical time constraints. It has been successfully validated against Geant4. This work has been financially supported by InVivoIGT, a public/private partnership between UCL and IBA.

  13. Propeller speed and phase sensor

    NASA Technical Reports Server (NTRS)

    Collopy, Paul D. (Inventor); Bennett, George W. (Inventor)

    1992-01-01

    A speed and phase sensor for counterrotating aircraft propellers is described. A toothed wheel is attached to each propeller, and the teeth trigger a sensor as they pass, producing a sequence of signals. From the sequence of signals, the rotational speed of each propeller is computed based on the time intervals between successive signals. The speed can be computed several times during one revolution, thus giving speed information which is highly up-to-date. Because the spacing between teeth may not be uniform, the signals produced may be nonuniform in time. Error coefficients are derived to correct for nonuniformities in the resulting signals, thus allowing accurate speed to be computed despite the spacing nonuniformities. Phase can be viewed as the relative rotational position of one propeller with respect to the other, measured at a fixed time. Phase is computed from the signals.
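
    A minimal sketch of the speed computation this record describes: rotational speed is estimated from the interval between successive tooth signals, with per-tooth error coefficients correcting for non-uniform tooth spacing. The calibration approach and all names are illustrative assumptions, not taken from the patent:

```python
def calibration_coefficients(intervals_one_rev):
    """Per-tooth correction factors, derived from one revolution at constant
    speed: each coefficient is the fraction of a revolution spanned by that
    tooth gap, so spacing nonuniformities cancel in later estimates."""
    total = sum(intervals_one_rev)
    return [dt / total for dt in intervals_one_rev]

def speed_rpm(interval_s, coeff):
    """Speed from a single tooth interval and its correction coefficient."""
    return 60.0 * coeff / interval_s  # yields several estimates per revolution

# Example: a 4-tooth wheel with uneven spacing, spinning at 1500 rpm.
true_rpm = 1500.0
gap_fractions = [0.27, 0.24, 0.25, 0.24]     # uneven angular gaps
rev_time = 60.0 / true_rpm                   # 0.04 s per revolution
intervals = [f * rev_time for f in gap_fractions]

coeffs = calibration_coefficients(intervals)
estimates = [speed_rpm(dt, c) for dt, c in zip(intervals, coeffs)]
print(all(abs(e - true_rpm) < 1e-9 for e in estimates))  # True
```

    Without the coefficients, the 0.27 gap alone would imply a speed of 60 x 0.25 / 0.0108 ≈ 1389 rpm, illustrating why the correction is needed.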

  14. Discovery of Ubiquitous Fast Propagating Intensity Disturbances by the Chromospheric Lyman Alpha Spectropolarimeter (CLASP)

    NASA Technical Reports Server (NTRS)

    Kubo, M.; Katsukawa, Y.; Suematsu, Y.; Kano, R.; Bando, T.; Narukage, N.; Ishikawa, R.; Hara, H.; Giono, G.; Tsuneta, S.; hide

    2016-01-01

    High cadence observations by the slit-jaw (SJ) optics system of the sounding rocket experiment known as the Chromospheric Lyman Alpha SpectroPolarimeter (CLASP) reveal ubiquitous intensity disturbances that recurrently propagate in the chromosphere, the transition region, or both, at a speed much higher than the sound speed. The CLASP/SJ instrument provides a time series of 2D images taken with broadband filters centered on the Ly(alpha) line at a 0.6 s cadence. The fast propagating intensity disturbances are detected in the quiet Sun and in an active region, and at least 20 events are clearly detected in the field of view of 527'' x 527'' during the 5-minute observing time. The apparent speeds of the intensity disturbances range from 150 to 350 km/s, and they are comparable to the local Alfven speed in the transition region. The intensity disturbances tend to propagate along bright elongated structures away from areas with strong photospheric magnetic fields. This suggests that the observed propagating intensity disturbances are related to the magnetic canopy structures. The maximum distance traveled by the intensity disturbances is about 10'', and the widths are a few arcseconds, which are almost determined by the pixel size of 1.''03. The timescale of each intensity pulse is shorter than 30 s. One possible explanation of the fast propagating intensity disturbances observed by CLASP is magneto-hydrodynamic fast mode waves.
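
    As a back-of-the-envelope consistency check of the numbers quoted above, the crossing time over the maximum observed distance can be estimated. The conversion 1'' ≈ 725 km at the mean Sun-Earth distance is an assumption of this sketch, not a value given in the record:

```python
ARCSEC_KM = 725.0  # assumed solar surface scale (km per arcsecond)

def travel_time_s(distance_arcsec, speed_km_s):
    """Time to cover an angular distance at a given plane-of-sky speed."""
    return distance_arcsec * ARCSEC_KM / speed_km_s

d = 10.0  # maximum observed travel distance (arcsec)
print(round(travel_time_s(d, 350.0)))  # 21 s at the fastest quoted speed
print(round(travel_time_s(d, 150.0)))  # 48 s at the slowest quoted speed
```

    At the upper end of the quoted speed range the crossing time is comparable to the reported sub-30 s pulse timescale.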

  15. Discovery of Ubiquitous Fast Propagating Intensity Disturbances by the Chromospheric Lyman Alpha Spectropolarimeter (CLASP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kubo, M.; Katsukawa, Y.; Suematsu, Y.

    High-cadence observations by the slit-jaw (SJ) optics system of the sounding rocket experiment known as the Chromospheric Lyman Alpha Spectropolarimeter (CLASP) reveal ubiquitous intensity disturbances that recurrently propagate in either the chromosphere or the transition region or both at a speed much higher than the speed of sound. The CLASP/SJ instrument provides a time series of two-dimensional images taken with broadband filters centered on the Ly α line at a 0.6 s cadence. The multiple fast-propagating intensity disturbances appear in the quiet Sun and in an active region, and they are clearly detected in at least 20 areas in a field of view of 527″ × 527″ during the 5 minute observing time. The apparent speeds of the intensity disturbances range from 150 to 350 km s(-1), and they are comparable to the local Alfvén speed in the transition region. The intensity disturbances tend to propagate along bright elongated structures away from areas with strong photospheric magnetic fields. This suggests that the observed fast-propagating intensity disturbances are related to the magnetic canopy structures. The maximum distance traveled by the intensity disturbances is about 10″, and the widths are a few arcseconds, which are almost determined by a pixel size of 1.″03. The timescale of each intensity pulse is shorter than 30 s. One possible explanation for the fast-propagating intensity disturbances observed by CLASP is magnetohydrodynamic fast-mode waves.

  16. Pacing Strategy, Muscle Fatigue, and Technique in 1500-m Speed-Skating and Cycling Time Trials.

    PubMed

    Stoter, Inge K; MacIntosh, Brian R; Fletcher, Jared R; Pootz, Spencer; Zijdewind, Inge; Hettinga, Florentina J

    2016-04-01

    To evaluate pacing behavior and peripheral and central contributions to muscle fatigue in 1500-m speed-skating and cycling time trials when a faster or slower start is instructed. Nine speed skaters and nine cyclists, all competing at regional or national level, performed two 1500-m time trials in their sport. Athletes were instructed to start faster than usual in 1 trial and slower in the other. Mean velocity was measured per 100 m. Blood lactate concentrations were measured. Maximal voluntary contraction (MVC), voluntary activation (VA), and potentiated twitch (PT) of the quadriceps muscles were measured to estimate central and peripheral contributions to muscle fatigue. In speed skating, knee, hip, and trunk angles were measured to evaluate technique. Cyclists showed a more explosive start than speed skaters in the fast-start time trial (cyclists performed first 300 m in 24.70 ± 1.73 s, speed skaters in 26.18 ± 0.79 s). Both trials resulted in reduced MVC (12.0% ± 14.5%), VA (2.4% ± 5.0%), and PT (25.4% ± 15.2%). Blood lactate concentrations after the time trial and the decrease in PT were greater in the fast-start than in the slow-start trial. Speed skaters showed higher trunk angles in the fast-start than in the slow-start trial, while knee angles remained similar. Despite similar instructions, behavioral adaptations in pacing differed between the 2 sports, resulting in equal central and peripheral contributions to muscle fatigue in both sports. This provides evidence for the importance of neurophysiological aspects in the regulation of pacing. It also stresses the notion that optimal pacing needs to be studied sport specifically, and coaches should be aware of this.

  17. Walking energetics, fatigability, and fatigue in older adults: the study of energy and aging pilot.

    PubMed

    Richardson, Catherine A; Glynn, Nancy W; Ferrucci, Luigi G; Mackey, Dawn C

    2015-04-01

    Slow gait speed increases morbidity and mortality in older adults. We examined how preferred gait speed is associated with energetic requirements of walking, fatigability, and fatigue. Older adults (n = 36, 70-89 years) were categorized as slow or fast walkers based on median 400-m gait speed. We measured VO2peak by graded treadmill exercise test and VO2 during 5-minute treadmill walking tests at standard (0.72 m/s) and preferred gait speeds. Fatigability was assessed with the Situational Fatigue Scale and the Borg rating of perceived exertion at the end of walking tests. Fatigue was assessed by questionnaire. Preferred gait speed over 400 m (range: 0.75-1.58 m/s) averaged 1.34 m/s for fast walkers versus 1.05 m/s for slow walkers (p < .001). VO2peak was 26% lower (18.5 vs 25.1ml/kg/min, p = .001) in slow walkers than fast walkers. To walk at 0.72 m/s, slow walkers used a larger percentage of VO2peak (59% vs 42%, p < .001). To walk at preferred gait speed, slow walkers used more energy per unit distance (0.211 vs 0.186ml/kg/m, p = .047). Slow walkers reported higher rating of perceived exertion during walking and greater overall fatigability on the Situational Fatigue Scale, but no differences in fatigue. Slow walking was associated with reduced aerobic capacity, greater energetic cost of walking, and greater fatigability. Interventions to improve aerobic capacity or decrease energetic cost of walking may prevent slowing of gait speed and promote mobility in older adults. © The Author 2014. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
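
    The "energy per unit distance" figures above follow from dividing the walking VO2 rate by speed. A small sketch of that arithmetic; the implied walking VO2 and its percentage of VO2peak are derived here for illustration, not quoted in the record:

```python
def cost_per_metre(vo2_ml_kg_min, speed_m_s):
    """Energy cost of walking per unit distance (ml O2 per kg per m)."""
    return vo2_ml_kg_min / 60.0 / speed_m_s

def implied_vo2(cost_ml_kg_m, speed_m_s):
    """Walking VO2 rate implied by a cost-per-distance figure."""
    return cost_ml_kg_m * speed_m_s * 60.0

# Slow walkers: reported cost 0.211 ml/kg/m at a preferred speed of 1.05 m/s.
vo2 = implied_vo2(0.211, 1.05)
print(round(vo2, 1))              # 13.3 ml/kg/min while walking
print(round(100.0 * vo2 / 18.5))  # ~72% of their reported VO2peak
```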

  18. Modulation of phase durations, phase variations, and temporal coordination of the four limbs during quadrupedal split-belt locomotion in intact adult cats

    PubMed Central

    D'Angelo, Giuseppe; Thibaudier, Yann; Telonio, Alessandro; Hurteau, Marie-France; Kuczynski, Victoria; Dambreville, Charline

    2014-01-01

    Stepping along curvilinear paths produces speed differences between the inner and outer limb(s). This can be reproduced experimentally by independently controlling left and right speeds with split-belt locomotion. Here we provide additional details on the pattern of the four limbs during quadrupedal split-belt locomotion in intact cats. Six cats performed tied-belt locomotion (same speed bilaterally) and split-belt locomotion where one side (constant side) stepped at constant treadmill speed while the other side (varying side) stepped at several speeds. Cycle, stance, and swing durations changed in parallel in homolateral limbs with shorter and longer stance and swing durations on the fast side, respectively, compared with the slow side. Phase variations were quantified in all four limbs by measuring the slopes of the regressions between stance and cycle durations (rSTA) and between swing and cycle durations (rSW). For a given limb, rSTA and rSW were not significantly different from one another on the constant side whereas on the varying side rSTA increased relative to tied-belt locomotion while rSW became more negative. Phase variations were similar for homolateral limbs. Increasing left-right speed differences produced a large increase in homolateral double support on the slow side, while triple-support periods decreased. Increasing left-right speed differences altered homologous coupling, homolateral coupling on the fast side, and coupling between the fast hindlimb and slow forelimb. Results indicate that homolateral limbs share similar control strategies, only certain features of the interlimb pattern adjust, and spinal locomotor networks of the left and right sides are organized symmetrically. PMID:25031257

  19. Walking Energetics, Fatigability, and Fatigue in Older Adults: The Study of Energy and Aging Pilot

    PubMed Central

    Richardson, Catherine A.; Glynn, Nancy W.; Ferrucci, Luigi G.

    2015-01-01

    Background. Slow gait speed increases morbidity and mortality in older adults. We examined how preferred gait speed is associated with energetic requirements of walking, fatigability, and fatigue. Methods. Older adults (n = 36, 70–89 years) were categorized as slow or fast walkers based on median 400-m gait speed. We measured VO2peak by graded treadmill exercise test and VO2 during 5-minute treadmill walking tests at standard (0.72 m/s) and preferred gait speeds. Fatigability was assessed with the Situational Fatigue Scale and the Borg rating of perceived exertion at the end of walking tests. Fatigue was assessed by questionnaire. Results. Preferred gait speed over 400 m (range: 0.75–1.58 m/s) averaged 1.34 m/s for fast walkers versus 1.05 m/s for slow walkers (p < .001). VO2peak was 26% lower (18.5 vs 25.1ml/kg/min, p = .001) in slow walkers than fast walkers. To walk at 0.72 m/s, slow walkers used a larger percentage of VO2peak (59% vs 42%, p < .001). To walk at preferred gait speed, slow walkers used more energy per unit distance (0.211 vs 0.186ml/kg/m, p = .047). Slow walkers reported higher rating of perceived exertion during walking and greater overall fatigability on the Situational Fatigue Scale, but no differences in fatigue. Conclusions. Slow walking was associated with reduced aerobic capacity, greater energetic cost of walking, and greater fatigability. Interventions to improve aerobic capacity or decrease energetic cost of walking may prevent slowing of gait speed and promote mobility in older adults. PMID:25190069

  20. The properties of fast and slow oblique solitons in a magnetized plasma

    NASA Astrophysics Data System (ADS)

    McKenzie, J. F.; Doyle, T. B.

    2002-01-01

    This work builds on a recent treatment by McKenzie and Doyle [Phys. Plasmas 8, 4367 (2001)], on oblique solitons in a cold magnetized plasma, to include the effects of plasma thermal pressure. Conservation of total momentum in the direction of wave propagation immediately shows that if the flow is supersonic, compressive (rarefactive) changes in the magnetic pressure induce decelerations (accelerations) in the flow speed, whereas if the flow is subsonic, compressive (rarefactive) changes in the magnetic pressure induce accelerations (decelerations) in the flow speed. Such behavior is characteristic of a Bernoulli-type plasma momentum flux which exhibits a minimum at the plasma sonic point. The plasma energy flux (kinetic plus enthalpy) also shows similar Bernoulli-type behavior. This transonic effect is manifest in the spatial structure equation for the flow speed (in the direction of propagation) which shows that soliton structures may exist if the wave speed lies either (i) in the range between the fast and Alfven speeds or (ii) between the sound and slow mode speed. These conditions follow from the requirement that a defined, characteristic "soliton parameter" m exceeds unity. It is in this latter slow soliton regime that the effects of plasma pressure are most keenly felt. The equilibrium points of the structure equation define the center of the wave. The structure of both fast and slow solitons is elucidated through the properties of the energy integral function of the structure equation. In particular, the slow soliton, which owes its existence to plasma pressure, may have either a compressive or rarefactive nature, and exhibits a rich structure, which is revealed through the spatial structure of the longitudinal speed and its corresponding transverse velocity hodograph.
