An efficient parallel algorithm for the solution of a tridiagonal linear system of equations
NASA Technical Reports Server (NTRS)
Stone, H. S.
1971-01-01
Tridiagonal linear systems of equations are solved on conventional serial machines in a time proportional to N, where N is the number of equations. The conventional algorithms do not lend themselves directly to parallel computations on computers of the ILLIAC IV class, in the sense that they appear to be inherently serial. An efficient parallel algorithm is presented in which computation time grows as log₂ N. The algorithm is based on recursive doubling solutions of linear recurrence relations, and can be used to solve recurrence relations of all orders.
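The recursive doubling at the heart of this algorithm can be sketched for a first-order recurrence x(i) = a(i)x(i-1) + b(i): each term is an affine map, and all prefix compositions of those maps can be formed in ceil(log₂ N) sweeps, each fully parallelizable. A minimal NumPy sketch of the idea (names and structure are illustrative, not Stone's code):

```python
import numpy as np

def recursive_doubling(a, b, x0):
    """Evaluate x[i] = a[i]*x[i-1] + b[i] by recursive doubling.

    Each term is the affine map x -> a*x + b; composing map i with the map
    `step` positions earlier, and doubling `step` every sweep, yields all
    prefix compositions in ceil(log2 N) sweeps. Each sweep is vectorized,
    so with N processors the time grows as log2 N.
    """
    A = np.asarray(a, dtype=float).copy()
    B = np.asarray(b, dtype=float).copy()
    n, step = len(A), 1
    while step < n:
        # the identity map (1, 0) pads the first `step` positions
        A_prev = np.concatenate([np.ones(step), A[:-step]])
        B_prev = np.concatenate([np.zeros(step), B[:-step]])
        A, B = A * A_prev, A * B_prev + B  # old A is used on both right sides
        step *= 2
    return A * x0 + B                      # x[i] expressed in terms of x0
```

Run serially this does O(N log N) work, but every sweep is a data-parallel operation, which is exactly the trade recursive doubling makes.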
A parallel Jacobson-Oksman optimization algorithm [parallel processing (computers)]
NASA Technical Reports Server (NTRS)
Straeter, T. A.; Markos, A. T.
1975-01-01
A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.
NASA Technical Reports Server (NTRS)
Logan, Terry G.
1994-01-01
The purpose of this study is to investigate the performance of integral equation computations using a numerical source field-panel method in a massively parallel processing (MPP) environment. A comparative study of the computational performance of the MPP CM-5 computer and the conventional Cray-YMP supercomputer for a three-dimensional flow problem is made. A serial FORTRAN code is converted into a parallel CM-FORTRAN code. Performance results are obtained on the CM-5 with 32, 64, and 128 nodes along with those on the Cray-YMP with a single processor. The comparison indicates that the parallel CM-FORTRAN code matches or outperforms the equivalent serial FORTRAN code in some cases.
MPI_XSTAR: MPI-based Parallelization of the XSTAR Photoionization Program
NASA Astrophysics Data System (ADS)
Danehkar, Ashkbiz; Nowak, Michael A.; Lee, Julia C.; Smith, Randall K.
2018-02-01
We describe a program for the parallel implementation of multiple runs of XSTAR, a photoionization code that is used to predict the physical properties of an ionized gas from its emission and/or absorption lines. The parallelization program, called MPI_XSTAR, has been developed and implemented in the C++ language using the Message Passing Interface (MPI) protocol, a conventional standard of parallel computing. We have benchmarked parallel multiprocessing executions of XSTAR, using MPI_XSTAR, against a serial execution of XSTAR, in terms of the parallelization speedup and the computing resource efficiency. Our experience indicates that the parallel execution runs significantly faster than the serial execution; however, the computing resource efficiency decreases as the number of processors used in the parallel computation increases.
The use of a computerized algorithm to determine single cardiac cell volumes.
Marino, T A; Cook, L; Cook, P N; Dwyer, S J
1981-04-01
Single cardiac muscle cell volume data have been difficult to obtain, especially because the shape of a cell is quite complex. With the aid of a surface reconstruction method, a cell volume estimation algorithm has been developed that can be used on serial sections of cells. The cell surface is reconstructed by means of triangular tiles so that the cell is represented as a polyhedron. When this algorithm was tested on computer-generated surfaces of known volume, the difference was less than 1.6%. Serial sections of two phantoms of known volume were also reconstructed, and a comparison of the mathematically derived volumes and the computed volume estimations gave a percent difference of between 2.8% and 4.1%. Finally, cell volumes derived using conventional methods and volumes calculated using the algorithm were compared. The mean atrial muscle cell volume derived using conventional methods was 7752.7 +/- 644.7 μm³, while the mean computerized-algorithm estimate of atrial muscle cell volume was 7110.6 +/- 625.5 μm³. For AV bundle cells, the mean cell volume obtained by conventional methods was 484.4 +/- 88.8 μm³ and the volume derived from the computer algorithm was 506.0 +/- 78.5 μm³. The volumes calculated using conventional methods and those calculated using the algorithm were not significantly different.
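Representing the reconstructed cell as a triangle-tiled polyhedron pairs naturally with the divergence-theorem volume formula: each triangle contributes the signed volume of the tetrahedron it forms with the origin. A minimal NumPy sketch of that formula (not the authors' code; a closed, consistently oriented mesh is assumed):

```python
import numpy as np

def mesh_volume(vertices, triangles):
    """Volume enclosed by a closed, consistently oriented triangular mesh.

    Each triangle (v0, v1, v2) contributes v0 . (v1 x v2) / 6, the signed
    volume of its tetrahedron with the origin; by the divergence theorem
    the contributions sum to the enclosed volume.
    """
    v = np.asarray(vertices, dtype=float)
    t = np.asarray(triangles, dtype=int)
    v0, v1, v2 = v[t[:, 0]], v[t[:, 1]], v[t[:, 2]]
    signed = np.einsum('ij,ij->i', v0, np.cross(v1, v2)) / 6.0
    return abs(signed.sum())

# Example: unit right tetrahedron, volume 1/6
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]  # outward-facing winding
print(mesh_volume(verts, tris))  # ~0.1667
```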
Serial multiplier arrays for parallel computation
NASA Technical Reports Server (NTRS)
Winters, Kel
1990-01-01
Arrays of systolic serial-parallel multiplier elements are proposed as an alternative to conventional SIMD mesh serial adder arrays for applications that are multiplication intensive and require few stored operands. The design and operation of a number of multiplier and array configurations featuring locality of connection, modularity, and regularity of structure are discussed. A design methodology combining top-down and bottom-up techniques is described to facilitate development of custom high-performance CMOS multiplier element arrays as well as rapid synthesis of simulation models and semicustom prototype CMOS components. Finally, a differential version of NORA dynamic circuits requiring a single-phase uncomplemented clock signal is introduced for this application.
NASA Technical Reports Server (NTRS)
Chau, Savio; Vatan, Farrokh; Randolph, Vincent; Baroth, Edmund C.
2006-01-01
Future in-space propulsion systems for exploration programs will invariably require data collection from a large number of sensors. Consider the sensors needed for monitoring the states of health of several vehicle systems, including the collection of structural health data, over a large area. This would include the fuel tanks, habitat structure, and science containment of systems required for Lunar, Mars, or deep space exploration. Such a system would consist of several hundred or even thousands of sensors. Conventional avionics system design requires these sensors to be connected to a few Remote Health Units (RHU), which are connected to robust micro flight computers through a serial bus. This results in a large mass of cabling and unacceptable weight. This paper first gives a survey of several techniques that may reduce the cabling mass for sensors. These techniques can be categorized into four classes: power line communication, serial sensor buses, compound serial buses, and wireless networks. The power line communication approach uses the power line to carry both power and data, so that the conventional data lines can be eliminated. The serial sensor bus approach eliminates most of the cabling by connecting all the sensors with a single (or redundant) serial bus. Many standard industrial control and sensor buses can support several hundred nodes but have not been space qualified. Conventional avionics serial buses such as the MIL-STD-1553B bus and IEEE 1394a are space qualified but can support only a limited number of nodes. The third approach is to combine avionics buses to increase their addressability. The fourth approach, wireless networks, must address reliability, EMI/EMC, and flight qualification issues; several wireless networks such as IEEE 802.11 and Ultra Wide Band are surveyed in this paper. The placement of sensors can also affect cable mass. Excessive sensors increase the number of cables unnecessarily, while an insufficient number of sensors may not provide adequate coverage of the system. This paper also discusses an optimal technique to place and validate sensors.
1990-09-01
military pilot acceptance of a safety network system would be based, as always, on the following: a. Do I really need such a system and will it be a...inferring pilot state based on computer analysis of pilot control inputs (or lack of). Having decided that the pilot is incapacitated, PMAS would alert...the advances being made in neural network computing machinery have necessitated a complete re-thinking of the conventional serial von Neumann machine
Cong, Hailin; Xu, Xiaodan; Yu, Bing; Liu, Huwei
2016-01-01
A simple and effective universal serial bus (USB) flash disk type microfluidic chip electrophoresis (MCE) device was developed in this paper by using poly(dimethylsiloxane)-based soft lithography and dry-film-based printed circuit board etching techniques. The MCE had a microchannel diameter of 375 μm and an effective length of 25 mm. Equipped with a conventional online electrochemical detector, the device enabled effective separation of bovine serum albumin, lysozyme, and cytochrome c in 80 s under the ultra-low voltage from a computer USB interface. Compared with traditional capillary electrophoresis, the USB flash disk type MCE is not only portable and inexpensive but also fast, with high separation efficiency. PMID:27042249
Kinematic Determination of an Unmodeled Serial Manipulator by Means of an IMU
NASA Astrophysics Data System (ADS)
Ciarleglio, Constance A.
Kinematic determination for an unmodeled manipulator is usually done through a priori knowledge of the manipulator's physical characteristics or external sensor information. The mathematics of the kinematic estimation, often based on the Denavit-Hartenberg convention, is complex and has high computation requirements, in addition to being unique to the manipulator for which the method is developed. Analytical methods that can compute kinematics on the fly have the potential to be highly beneficial in dynamic environments where different configurations and variable manipulator types are often required. This thesis derives a new screw-theory-based method of kinematic determination, using a single inertial measurement unit (IMU), for use with any serial, revolute manipulator. The method allows the expansion of reconfigurable manipulator design and simplifies the kinematic process for existing manipulators. A simulation is presented in which the theory of the method is verified and its error characterized. The method is then implemented on an existing manipulator as a verification of functionality.
Efficient Application of Continuous Fractional Component Monte Carlo in the Reaction Ensemble
2017-01-01
A new formulation of the Reaction Ensemble Monte Carlo technique (RxMC) combined with the Continuous Fractional Component Monte Carlo method is presented. This method is denoted by serial Rx/CFC. The key ingredient is that fractional molecules of either reactants or reaction products are present and that chemical reactions always involve fractional molecules. Serial Rx/CFC has the following advantages compared to other approaches: (1) One directly obtains chemical potentials of all reactants and reaction products. Obtained chemical potentials can be used directly as an independent check to ensure that chemical equilibrium is achieved. (2) Independent biasing is applied to the fractional molecules of reactants and reaction products. Therefore, the efficiency of the algorithm is significantly increased, compared to the other approaches. (3) Changes in the maximum scaling parameter of intermolecular interactions can be chosen differently for reactants and reaction products. (4) The number of fractional molecules is reduced. As a proof of principle, our method is tested for Lennard-Jones systems at various pressures and for various chemical reactions. Excellent agreement was found both for average densities and equilibrium mixture compositions computed using serial Rx/CFC, RxMC/CFCMC previously introduced by Rosch and Maginn (Journal of Chemical Theory and Computation, 2011, 7, 269–279), and the conventional RxMC approach. The serial Rx/CFC approach is also tested for the reaction of ammonia synthesis at various temperatures and pressures. Excellent agreement was found between results obtained from serial Rx/CFC, experimental results from literature, and thermodynamic modeling using the Peng–Robinson equation of state. The efficiency of reaction trial moves is improved by a factor of 2 to 3 (depending on the system) compared to the RxMC/CFCMC formulation by Rosch and Maginn. PMID:28737933
Chen, Weiliang; De Schutter, Erik
2017-01-01
Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern supercomputers. In this paper, we describe an MPI-based, parallel operator-splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its application to real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2,000 processes runs more than 3,600 times faster than its serial SSA counterpart, and achieves more than 20-fold speedup relative to parallel simulation with 100 processes. In a more realistic scenario with dynamic calcium influx and data recording, the parallel simulation with 1,000 processes and no load balancing is still 500 times faster than the conventional serial SSA simulation. PMID:28239346
Modeling Criminal Activity in Urban Landscapes
NASA Astrophysics Data System (ADS)
Brantingham, Patricia; Glässer, Uwe; Jackson, Piper; Vajihollahi, Mona
Computational and mathematical methods arguably have an enormous potential for serving practical needs in crime analysis and prevention by offering novel tools for crime investigations and experimental platforms for evidence-based policy making. We present a comprehensive formal framework and tool support for mathematical and computational modeling of criminal behavior to facilitate systematic experimental studies of a wide range of criminal activities in urban environments. The focus is on spatial and temporal aspects of different forms of crime, including opportunistic and serial violent crimes. However, the proposed framework provides a basis to push beyond conventional empirical research and engage the use of computational thinking and social simulations in the analysis of terrorism and counter-terrorism.
2010-04-01
The second is a 'mechanical' part that is controlled by circuit boards and is accessible by the technician via the serial console and running...was the use of a conventional remote access solution designed for telecommuters or teleworkers in the Information Technology (IT) world, such as a
Mishchenko, Yuriy
2009-01-30
We describe an approach for automation of the process of reconstruction of neural tissue from serial-section transmission electron micrographs. Such reconstructions require 3D segmentation of individual neuronal processes (axons and dendrites) performed in densely packed neuropil. We first detect neuronal cell profiles in each image in a stack of serial micrographs with a multi-scale ridge detector. Short breaks in detected boundaries are interpolated using anisotropic contour completion formulated in a fuzzy-logic framework. Detected profiles from adjacent sections are linked together based on cues such as shape similarity and image texture. The 3D segmentation thus obtained is validated by human operators in a computer-guided proofreading process. Our approach makes possible reconstructions of neural tissue at a final rate of about 5 μm³ per man-hour, as determined primarily by the speed of proofreading. To date we have applied this approach to reconstruct a few blocks of neural tissue from different regions of rat brain totaling over 1000 μm³, and used these to evaluate reconstruction speed, quality, error rates, and the presence of ambiguous locations in neuropil ssTEM imaging data.
Programed asynchronous serial data interrogation in a two-computer system
NASA Technical Reports Server (NTRS)
Schneberger, N. A.
1975-01-01
Technique permits redundant computers, with one unit in control mode and one in MONITOR mode, to interrogate the same serial data source. Its use for program-controlled serial data transfer results in extremely simple hardware and software mechanization.
Gee, Carole T
2013-11-01
As an alternative to conventional thin-sectioning, which destroys fossil material, high-resolution X-ray computed tomography (also called microtomography or microCT) integrated with scientific visualization, three-dimensional (3D) image segmentation, size analysis, and computer animation is explored as a nondestructive method of imaging the internal anatomy of 150-million-year-old conifer seed cones from the Late Jurassic Morrison Formation, USA, and of recent and other fossil cones. • MicroCT was carried out on cones using a General Electric phoenix v|tome|x s 240D, and resulting projections were processed with visualization software to produce image stacks of serial single sections for two-dimensional (2D) visualization, 3D segmented reconstructions with targeted structures in color, and computer animations. • If preserved in differing densities, microCT produced images of internal fossil tissues that showed important characters such as seed phyllotaxy or number of seeds per cone scale. Color segmentation of deeply embedded seeds highlighted the arrangement of seeds in spirals. MicroCT of recent cones was even more effective. • This is the first paper on microCT integrated with 3D segmentation and computer animation applied to silicified seed cones, which resulted in excellent 2D serial sections and segmented 3D reconstructions, revealing features requisite to cone identification and understanding of strobilus construction.
pcircle - A Suite of Scalable Parallel File System Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
WANG, FEIYI
2015-10-01
Most software related to file systems is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on top of ubiquitous MPI in the cluster computing environment and the "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, as well as integrity checking.
The Reed-Solomon encoders: Conventional versus Berlekamp's architecture
NASA Technical Reports Server (NTRS)
Perlman, M.; Lee, J. J.
1982-01-01
Concatenated coding was adopted for interplanetary space missions, employing a convolutional inner code and a Reed-Solomon (RS) outer code for spacecraft telemetry. Conventional RS encoders are compared with those that incorporate two architectural features which approximately halve the number of multiplications of a set of fixed arguments by any RS codeword symbol. The fixed arguments and the RS symbols are taken from a nonbinary finite field. Each set of multiplications is performed bit-serially and completed during one (bit-serial) symbol shift. All firmware employed by conventional RS encoders is eliminated.
The Circulation Analysis of Serial Use: Numbers Game or Key to Service?
Raisig, L. Miles
1967-01-01
The conventionally erected and reported circulation analysis of serial use in the individual and the feeder library is found to be statistically invalid and misleading, since it measures neither the intellectual use of the serial's contents nor the physical handlings of serial units, and is nonrepresentative of the in-depth library use of serials. It fails utterly to report or even to suggest the relation of intralibrary and interlibrary serial resources. The actual mechanics of the serial use analysis, and the active variables in the library situation which affect serial use, are demonstrated in a simulated analysis and are explained at length. A positive design is offered for the objective gathering and reporting of data on the local intellectual use and physical handling of serials and the relating of resources. Data gathering in the feeder library, and implications for the extension of the feeder library's resources, are discussed. PMID:6055863
Computational methods for the identification of spatially varying stiffness and damping in beams
NASA Technical Reports Server (NTRS)
Banks, H. T.; Rosen, I. G.
1986-01-01
A numerical approximation scheme for the estimation of functional parameters in Euler-Bernoulli models for the transverse vibration of flexible beams with tip bodies is developed. The method permits the identification of spatially varying flexural stiffness and Voigt-Kelvin viscoelastic damping coefficients which appear in the hybrid system of ordinary and partial differential equations and boundary conditions describing the dynamics of such structures. An inverse problem is formulated as a least squares fit to data subject to constraints in the form of a vector system of abstract first order evolution equations. Spline-based finite element approximations are used to finite dimensionalize the problem. Theoretical convergence results are given and numerical studies carried out on both conventional (serial) and vector computers are discussed.
Design of a massively parallel computer using bit serial processing elements
NASA Technical Reports Server (NTRS)
Aburdene, Maurice F.; Khouri, Kamal S.; Piatt, Jason E.; Zheng, Jianqing
1995-01-01
A 1-bit serial processor designed for a parallel computer architecture is described. This processor is used to develop a massively parallel computational engine, with a single instruction-multiple data (SIMD) architecture. The computer is simulated and tested to verify its operation and to measure its performance for further development.
A serial digital data communications device. [for real time flight simulation
NASA Technical Reports Server (NTRS)
Fetter, J. L.
1977-01-01
A general-purpose computer peripheral device is reported that provides a full-duplex, serial, digital data transmission link between a Xerox Sigma computer and a wide variety of external equipment, including computers, terminals, and special-purpose devices. The interface has an extensive set of user-defined options to assist the user in establishing the necessary data links. This report describes those options and other features of the serial communications interface, and discusses its performance through its application to a particular problem.
ERIC Educational Resources Information Center
Mousikou, Petroula; Rastle, Kathleen; Besner, Derek; Coltheart, Max
2015-01-01
Dual-route theories of reading posit that a sublexical reading mechanism that operates serially and from left to right is involved in the orthography-to-phonology computation. These theories attribute the masked onset priming effect (MOPE) and the phonological Stroop effect (PSE) to the serial left-to-right operation of this mechanism. However,…
Clarke, G. M.; Murray, M.; Holloway, C. M. B.; Liu, K.; Zubovits, J. T.; Yaffe, M. J.
2012-01-01
Tumour size, most commonly measured by maximum linear extent, remains a strong predictor of survival in breast cancer. Tumour volume, proportional to the number of tumour cells, may be a more accurate surrogate for size. We describe a novel “3D pathology volumetric technique” for lumpectomies and compare it with 2D measurements. Volume renderings and total tumour volume are computed from digitized whole-mount serial sections using custom software tools. Results are presented for two lumpectomy specimens selected for tumour features which may challenge accurate measurement of tumour burden with conventional, sampling-based pathology: (1) an infiltrative pattern admixed with normal breast elements; (2) a localized invasive mass separated from the in situ component by benign tissue. Spatial relationships between key features (tumour foci, close or involved margins) are clearly visualized in volume renderings. Invasive tumour burden can be underestimated using conventional pathology, compared to the volumetric technique (infiltrative pattern: 30% underestimation; localized mass: 3% underestimation for invasive tumour, 44% for in situ component). Tumour volume approximated from 2D measurements (i.e., maximum linear extent), assuming elliptical geometry, was seen to overestimate volume compared to the 3D volumetric calculation (by a factor of 7x for the infiltrative pattern; 1.5x for the localized invasive mass). PMID:23320179
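The contrast the authors draw between 2D and volumetric size estimates can be made concrete with two back-of-envelope estimators; the function names and the ellipsoid assumption are illustrative, not the paper's method:

```python
import math

def ellipsoid_volume_mm3(d1_mm, d2_mm, d3_mm):
    """Volume inferred from linear extents, assuming ellipsoidal geometry;
    for an infiltrative tumour this can badly overestimate tumour burden."""
    return (math.pi / 6.0) * d1_mm * d2_mm * d3_mm

def serial_section_volume_mm3(section_areas_mm2, section_spacing_mm):
    """Volumetric estimate: integrate the tumour areas measured on digitized
    whole-mount serial sections over the section spacing."""
    return sum(section_areas_mm2) * section_spacing_mm
```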
Gee, Carole T.
2013-01-01
• Premise of the study: As an alternative to conventional thin-sectioning, which destroys fossil material, high-resolution X-ray computed tomography (also called microtomography or microCT) integrated with scientific visualization, three-dimensional (3D) image segmentation, size analysis, and computer animation is explored as a nondestructive method of imaging the internal anatomy of 150-million-year-old conifer seed cones from the Late Jurassic Morrison Formation, USA, and of recent and other fossil cones. • Methods: MicroCT was carried out on cones using a General Electric phoenix v|tome|x s 240D, and resulting projections were processed with visualization software to produce image stacks of serial single sections for two-dimensional (2D) visualization, 3D segmented reconstructions with targeted structures in color, and computer animations. • Results: If preserved in differing densities, microCT produced images of internal fossil tissues that showed important characters such as seed phyllotaxy or number of seeds per cone scale. Color segmentation of deeply embedded seeds highlighted the arrangement of seeds in spirals. MicroCT of recent cones was even more effective. • Conclusions: This is the first paper on microCT integrated with 3D segmentation and computer animation applied to silicified seed cones, which resulted in excellent 2D serial sections and segmented 3D reconstructions, revealing features requisite to cone identification and understanding of strobilus construction. PMID:25202495
ERIC Educational Resources Information Center
Bensman, Stephen J.; Wilder, Stanley J.
1998-01-01
Analyzes the structure of the library market for scientific and technical (ST) serials. Describes an exercise aimed at a theoretical reconstruction of the ST-serials holdings of Louisiana State University (LSU) Libraries. Discusses the set definitions, measures, and algorithms necessary in the design of a computer program to appraise ST serials.…
Ice-sheet modelling accelerated by graphics cards
NASA Astrophysics Data System (ADS)
Brædstrup, Christian Fredborg; Damsgaard, Anders; Egholm, David Lundbek
2014-11-01
Studies of glaciers and ice sheets have increased the demand for high performance numerical ice flow models over the past decades. When exploring the highly non-linear dynamics of fast flowing glaciers and ice streams, or when coupling multiple flow processes for ice, water, and sediment, researchers are often forced to use super-computing clusters. As an alternative to conventional high-performance computing hardware, the Graphical Processing Unit (GPU) is capable of massively parallel computing while retaining a compact design and low cost. In this study, we present a strategy for accelerating a higher-order ice flow model using a GPU. By applying the newest GPU hardware, we achieve up to 180× speedup compared to a similar but serial CPU implementation. Our results suggest that GPU acceleration is a competitive option for ice-flow modelling when compared to CPU-optimised algorithms parallelised by the OpenMP or Message Passing Interface (MPI) protocols.
Santos, Jonathan; Chaudhari, Abhijit J; Joshi, Anand A; Ferrero, Andrea; Yang, Kai; Boone, John M; Badawi, Ramsey D
2014-09-01
Dedicated breast CT and PET/CT scanners provide detailed 3D anatomical and functional imaging data sets and are currently being investigated for applications in breast cancer management such as diagnosis, monitoring response to therapy, and radiation therapy planning. Our objective was to evaluate the performance of the diffeomorphic demons (DD) non-rigid image registration method to spatially align 3D serial (pre- and post-contrast) dedicated breast computed tomography (CT) and longitudinally acquired dedicated 3D breast CT and positron emission tomography (PET)/CT images. The algorithmic parameters of the DD method were optimized for the alignment of dedicated breast CT images using training data and then fixed. The performance of the method for image alignment was quantitatively evaluated using three separate data sets: (1) serial breast CT pre- and post-contrast images of 20 women, (2) breast CT images of 20 women acquired before and after repositioning the subject on the scanner, and (3) dedicated breast PET/CT images of 7 women undergoing neo-adjuvant chemotherapy acquired pre-treatment and after 1 cycle of therapy. The DD registration method outperformed no registration (p < 0.001) and conventional affine registration (p ≤ 0.002) for serial and longitudinal breast CT and PET/CT image alignment. In spite of the large size of the imaging data, the computational cost of the DD method was found to be reasonable (3-5 min). Co-registration of dedicated breast CT and PET/CT images can be performed rapidly and reliably using the DD method. This is the first study evaluating the DD registration method for the alignment of dedicated breast CT and PET/CT images.
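As a usage illustration only: assuming SimpleITK's DiffeomorphicDemonsRegistrationFilter interface, a registration of this kind might be driven as below; the iteration count and smoothing value are placeholders, not the parameters the authors optimized.

```python
import SimpleITK as sitk

def register_demons(fixed, moving, iterations=100, smoothing_sigma=1.0):
    """Deformably align `moving` to `fixed` with diffeomorphic demons
    (illustrative settings, not the paper's optimized parameters)."""
    demons = sitk.DiffeomorphicDemonsRegistrationFilter()
    demons.SetNumberOfIterations(iterations)
    demons.SetSmoothDisplacementField(True)
    demons.SetStandardDeviations(smoothing_sigma)
    displacement = demons.Execute(fixed, moving)
    # Warp the moving image into the fixed image's space
    transform = sitk.DisplacementFieldTransform(displacement)
    return sitk.Resample(moving, fixed, transform)
```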
The serial message-passing schedule for LDPC decoding algorithms
NASA Astrophysics Data System (ADS)
Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue
2015-12-01
The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. It has the disadvantage that updated messages cannot be used until the next iteration, which slows convergence. To address this, the Layered Decoding algorithm (LBP), based on a serial message-passing schedule, has been proposed. In this paper the decoding principle of the LBP algorithm is briefly introduced, and two improved algorithms are then proposed: the grouped serial decoding algorithm (Grouped LBP) and the semi-serial decoding algorithm. They improve the LBP algorithm's decoding speed while maintaining good decoding performance.
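A compact sketch of the serial (layered) schedule with attenuated min-sum check updates; the 0.75 attenuation factor and the dense-matrix representation are illustrative choices, not the paper's:

```python
import numpy as np

def layered_min_sum(H, llr_in, max_iter=20, attenuation=0.75):
    """Layered (serial-schedule) min-sum LDPC decoding.

    Unlike the flooding schedule, check rows are processed one at a time and
    the posterior LLRs are updated immediately, so later rows in the same
    iteration already use the new messages, which speeds up convergence.
    """
    m, n = H.shape
    rows = [np.flatnonzero(H[i]) for i in range(m)]  # variable nodes per check
    R = np.zeros((m, n))                             # check-to-variable messages
    L = np.asarray(llr_in, dtype=float).copy()       # posterior LLRs
    for _ in range(max_iter):
        for i in range(m):                           # one "layer" per check row
            idx = rows[i]
            Q = L[idx] - R[i, idx]                   # extrinsic variable-to-check
            s = np.where(Q >= 0, 1.0, -1.0)
            ext_sign = np.prod(s) * s                # sign product excluding self
            mags = np.abs(Q)
            order = np.argsort(mags)                 # min1 at order[0], min2 next
            ext_min = np.full(len(idx), mags[order[0]])
            ext_min[order[0]] = mags[order[1]]       # leave-one-out minimum
            R[i, idx] = attenuation * ext_sign * ext_min
            L[idx] = Q + R[i, idx]                   # immediate posterior update
        hard = (L < 0).astype(int)
        if not np.any((H @ hard) % 2):               # all parity checks satisfied
            break
    return hard
```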
Hashimoto, Teruo; Thompson, George E; Zhou, Xiaorong; Withers, Philip J
2016-04-01
Mechanical serial block face scanning electron microscopy (SBFSEM) has emerged as a means of obtaining three dimensional (3D) electron images over volumes much larger than possible by focused ion beam (FIB) serial sectioning and at higher spatial resolution than achievable with conventional X-ray computed tomography (CT). Such high resolution 3D electron images can be employed for precisely determining the shape, volume fraction, distribution and connectivity of important microstructural features. While soft (fixed or frozen) biological samples are particularly well suited for nanoscale sectioning using an ultramicrotome, the technique can also produce excellent 3D images at electron microscope resolution in a time and resource-efficient manner for engineering materials. Currently, a lack of appreciation of the capabilities of ultramicrotomy and the operational challenges associated with minimising artefacts for different materials is limiting its wider application to engineering materials. Consequently, this paper outlines the current state of the art for SBFSEM examining in detail how damage is introduced during slicing and highlighting strategies for minimising such damage. A particular focus of the study is the acquisition of 3D images for a variety of metallic and coated systems. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
A novel method to acquire 3D data from serial 2D images of a dental cast
NASA Astrophysics Data System (ADS)
Yi, Yaxing; Li, Zhongke; Chen, Qi; Shao, Jun; Li, Xinshe; Liu, Zhiqin
2007-05-01
This paper introduces a newly developed method to acquire three-dimensional data from serial two-dimensional images of a dental cast. The system consists of a computer and a set of data-acquiring devices. The data-acquiring device is used to take serial pictures of a dental cast; an artificial neural network translates the two-dimensional pictures into three-dimensional data; the three-dimensional image can then be reconstructed by the computer. Three-dimensional data acquisition of dental casts is the foundation of computer-aided diagnosis and treatment planning in orthodontics.
Lee, Kiju; Wang, Yunfeng; Chirikjian, Gregory S
2007-11-01
Over the past several decades a number of O(n) methods for forward and inverse dynamics computations have been developed in the multi-body dynamics and robotics literature. A method was developed in 1974 by Fixman for O(n) computation of the mass-matrix determinant for a serial polymer chain consisting of point masses. In other recent papers, we extended this method in order to compute the inverse of the mass matrix for serial chains consisting of point masses. In the present paper, we extend these ideas further and address the case of serial chains composed of rigid-bodies. This requires the use of relatively deep mathematics associated with the rotation group, SO(3), and the special Euclidean group, SE(3), and specifically, it requires that one differentiates functions of Lie-group-valued argument.
Ma, Li; Runesha, H Birali; Dvorkin, Daniel; Garbe, John R; Da, Yang
2008-01-01
Background Genome-wide association studies (GWAS) using single nucleotide polymorphism (SNP) markers provide opportunities to detect epistatic SNPs associated with quantitative traits and to detect the exact mode of an epistasis effect. Computational difficulty is the main bottleneck for epistasis testing in large scale GWAS. Results The EPISNPmpi and EPISNP computer programs were developed for testing single-locus and epistatic SNP effects on quantitative traits in GWAS, including tests of three single-locus effects for each SNP (SNP genotypic effect, additive and dominance effects) and five epistasis effects for each pair of SNPs (two-locus interaction, additive × additive, additive × dominance, dominance × additive, and dominance × dominance) based on the extended Kempthorne model. EPISNPmpi is the parallel computing program for epistasis testing in large scale GWAS and achieved excellent scalability for large scale analysis and portability for various parallel computing platforms. EPISNP is the serial computing program based on the EPISNPmpi code for epistasis testing in small scale GWAS using commonly available operating systems and computer hardware. Three serial computing utility programs were developed for graphical viewing of test results and epistasis networks, and for estimating CPU time and disk space requirements. Conclusion The EPISNPmpi parallel computing program provides an effective computing tool for epistasis testing in large scale GWAS, and the epiSNP serial computing programs are convenient tools for epistasis analysis in small scale GWAS using commonly available computer hardware. PMID:18644146
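As a simplified illustration of a two-locus epistasis test (a plain regression F-test, not the extended Kempthorne model the EPISNP programs implement; genotype coding 0/1/2 is assumed):

```python
import numpy as np
from scipy import stats

def interaction_ftest(g1, g2, y):
    """F-test for an additive x additive interaction between two SNPs.

    Compares a linear model with additive effects of both SNPs against one
    that adds their product term; a small p-value suggests epistasis.
    """
    y = np.asarray(y, dtype=float)
    X0 = np.column_stack([np.ones_like(y), g1, g2])              # additive only
    X1 = np.column_stack([X0, np.asarray(g1) * np.asarray(g2)])  # + interaction
    rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    df = len(y) - X1.shape[1]
    f = (rss(X0) - rss(X1)) / (rss(X1) / df)
    return f, stats.f.sf(f, 1, df)
```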
Lee, Kiju; Wang, Yunfeng; Chirikjian, Gregory S.
2010-01-01
Over the past several decades a number of O(n) methods for forward and inverse dynamics computations have been developed in the multi-body dynamics and robotics literature. A method was developed in 1974 by Fixman for O(n) computation of the mass-matrix determinant for a serial polymer chain consisting of point masses. In other recent papers, we extended this method in order to compute the inverse of the mass matrix for serial chains consisting of point masses. In the present paper, we extend these ideas further and address the case of serial chains composed of rigid-bodies. This requires the use of relatively deep mathematics associated with the rotation group, SO(3), and the special Euclidean group, SE(3), and specifically, it requires that one differentiates functions of Lie-group-valued argument. PMID:20165563
Mousikou, Petroula; Rastle, Kathleen; Besner, Derek; Coltheart, Max
2015-07-01
Dual-route theories of reading posit that a sublexical reading mechanism that operates serially and from left to right is involved in the orthography-to-phonology computation. These theories attribute the masked onset priming effect (MOPE) and the phonological Stroop effect (PSE) to the serial left-to-right operation of this mechanism. However, both effects may arise during speech planning, in the phonological encoding process, which also occurs serially and from left to right. In the present paper, we sought to determine the locus of serial processing in reading aloud by testing the contrasting predictions that the dual-route and speech planning accounts make in relation to the MOPE and the PSE. The results from three experiments that used the MOPE and the PSE paradigms in English are inconsistent with the idea that these effects arise during speech planning, and consistent with the claim that a sublexical serially operating reading mechanism is involved in the print-to-sound translation. Simulations of the empirical data on the MOPE with the dual route cascaded (DRC) and connectionist dual process (CDP++) models, which are computational implementations of the dual-route theory of reading, provide further support for the dual-route account.
Dharmaraj, Christopher D; Thadikonda, Kishan; Fletcher, Anthony R; Doan, Phuc N; Devasahayam, Nallathamby; Matsumoto, Shingo; Johnson, Calvin A; Cook, John A; Mitchell, James B; Subramanian, Sankaran; Krishna, Murali C
2009-01-01
Three-dimensional Oximetric Electron Paramagnetic Resonance Imaging using the Single Point Imaging modality generates unpaired spin density and oxygen images that can readily distinguish between normal and tumor tissues in small animals. It is also possible with fast imaging to track the changes in tissue oxygenation in response to the oxygen content in the breathing air. However, this involves dealing with gigabytes of data for each 3D oximetric imaging experiment, involving digital band-pass filtering and background noise subtraction, followed by 3D Fourier reconstruction. This process is rather slow on a conventional uniprocessor system. This paper presents a parallelization framework using OpenMP runtime support and parallel MATLAB to execute such computationally intensive programs. The Intel compiler is used to develop a parallel C++ code based on OpenMP. The code is executed on four dual-core AMD Opteron shared-memory processors to reduce the computational burden of the filtration task significantly. The results show that the parallel code for filtration has achieved a speedup factor of 46.66 relative to the equivalent serial MATLAB code. In addition, a parallel MATLAB code has been developed to perform 3D Fourier reconstruction. Speedup factors of 4.57 and 4.25 have been achieved during the reconstruction process and oximetry computation, for a data set with 23 x 23 x 23 gradient steps. The execution time has been computed for both the serial and parallel implementations using different dimensions of the data and presented for comparison. The reported system has been designed to be easily accessible even from low-cost personal computers through the local NIH network (NIHnet). The experimental results demonstrate that parallel computing provides a source of high computational power to obtain biophysical parameters from 3D EPR oximetric imaging, almost in real time.
Note: optical receiver system for 152-channel magnetoencephalography.
Kim, Jin-Mok; Kwon, Hyukchan; Yu, Kwon-kyu; Lee, Yong-Ho; Kim, Kiwoong
2014-11-01
An optical receiver system comprising 13 serial-data restore/synchronizer modules and a single module combiner converted optical 32-bit serial data into 32-bit synchronous parallel data for a computer to acquire 152-channel magnetoencephalography (MEG) signals. Each serial-data restore/synchronizer module identified the 32 channel-voltage bits within the 48-bit streaming serial data and then consecutively reproduced the 32-bit serial data 13 times, acting on a synchronous clock. After selecting one of the 13 reproduced data streams from each module, the module combiner converted it into 32-bit parallel data, which were carried to a 32-port digital input board in a computer. When the receiver system, together with optical transmitters, was applied to 152-channel superconducting quantum interference device sensors, the MEG system maintained a field noise level of 3 fT/√Hz @ 100 Hz at a sample rate of 1 kSample/s per channel.
Algorithm-Based Fault Tolerance for Numerical Subroutines
NASA Technical Reports Server (NTRS)
Tumon, Michael; Granat, Robert; Lou, John
2007-01-01
A software library implements a new methodology of detecting faults in numerical subroutines, thus enabling application programs that contain the subroutines to recover transparently from single-event upsets. The software library in question is fault-detecting middleware that is wrapped around the numerical subroutines. Conventional serial versions (based on LAPACK and FFTW) and a parallel version (based on ScaLAPACK) exist. The source code of the application program that contains the numerical subroutines is not modified, and the middleware is transparent to the user. The methodology used is a type of algorithm-based fault tolerance (ABFT). In ABFT, a checksum is computed before a computation and compared with the checksum of the computational result; an error is declared if the difference between the checksums exceeds some threshold. Novel normalization methods are used in the checksum comparison to ensure correct fault detections independent of algorithm inputs. In tests of this software reported in the peer-reviewed literature, this library was shown to enable detection of 99.9 percent of significant faults while generating no false alarms.
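The checksum idea is easy to see for matrix multiplication: carry input checksums through the computation and compare them with the output's own sums. A minimal sketch (the normalization below is a simple stand-in, not the library's methods):

```python
import numpy as np

def abft_matmul(A, B, tol=1e-8):
    """Matrix multiply with algorithm-based fault tolerance (ABFT).

    A checksum row (column sums of A) and a checksum column (row sums of B)
    are carried through the multiplication; if C's own sums disagree with the
    carried checksums beyond a normalized threshold, a fault occurred.
    """
    C = A @ B
    row_check = A.sum(axis=0) @ B        # equals C.sum(axis=0) if fault-free
    col_check = A @ B.sum(axis=1)        # equals C.sum(axis=1) if fault-free
    # normalize so the test is independent of the inputs' magnitude
    scale = max(1.0, np.abs(C).max())
    ok = (np.allclose(C.sum(axis=0), row_check, atol=tol * scale) and
          np.allclose(C.sum(axis=1), col_check, atol=tol * scale))
    if not ok:
        raise RuntimeError("ABFT checksum mismatch: possible single-event upset")
    return C
```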
Gasoline-powered serial hybrid cars cause lower life cycle carbon emissions than battery cars
NASA Astrophysics Data System (ADS)
Meinrenken, Christoph J.; Lackner, Klaus S.
2011-04-01
Battery cars powered by grid electricity promise reduced life cycle greenhouse gas (GHG) emissions from the automotive sector. Such scenarios usually point to the much higher emissions from conventional, internal combustion engine cars. However, today's commercially available serial hybrid technology achieves the well-known efficiency gains from regenerative braking, lack of gearbox, and lightweighting, even if the electricity is generated onboard from conventional fuels. Here, we analyze emissions for commercially available, state-of-the-art battery cars (e.g., Nissan Leaf) and commercially available serial hybrid cars (e.g., GM Volt, of the same size and performance). Crucially, we find that serial hybrid cars driven on (fossil) gasoline cause fewer life cycle GHG emissions (126 g CO2e per km) than battery cars driven on current US grid electricity (142 g CO2e per km). We attribute this novel finding to the significant incremental life cycle emissions of battery cars from losses during grid transmission, battery dis-/charging, and larger batteries. We discuss crucial implications for strategic policy decisions towards a low-carbon automotive sector as well as relative land intensity when powering cars by biofuel vs. bioelectricity.
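To make the loss argument concrete, here is a back-of-envelope use-phase computation; every number below is an illustrative assumption, not a figure from the paper:

```python
def battery_car_gco2e_per_km(grid_gco2e_per_kwh=600.0, kwh_per_km=0.15,
                             transmission_eff=0.94, charging_eff=0.88):
    """Use-phase emissions for a battery car: losses in grid transmission and
    battery dis-/charging inflate the electricity generated per km driven.
    All parameter defaults are assumptions for illustration only."""
    return grid_gco2e_per_kwh * kwh_per_km / (transmission_eff * charging_eff)

# With these assumed numbers: ~109 g CO2e/km for a wheel-side demand of
# 0.15 kWh/km, before adding vehicle- and battery-production emissions.
print(round(battery_car_gco2e_per_km(), 1))
```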
The Integrated Library System Design Concepts for a Complete Serials Control Subsystem.
1984-08-20
THE INTEGRATED LIBRARY SYSTEM: DESIGN CONCEPTS FOR A COMPLETE SERIALS CONTROL SUBSYSTEM. Presented to: The Pentagon Library, The Pentagon, Washington, DC 20310. Prepared by: Online Computer Systems, Inc., 20251 Century Blvd., Germantown, MD. Contract MDA903-82-C-0535.
Serial Back-Plane Technologies in Advanced Avionics Architectures
NASA Technical Reports Server (NTRS)
Varnavas, Kosta
2005-01-01
Current backplane technologies such as VME, and current personal computer backplanes such as PCI, are shared-bus systems that can exhibit nondeterministic latencies. This means a card can take control of the bus and use resources indefinitely, affecting the ability of other cards in the backplane to acquire the bus. This is a real hit on the reliability of the system. Additionally, these parallel buses only have bandwidths in the 100s of megahertz range, and EMI and noise effects get worse the higher the bandwidth goes. To provide scalable, fault-tolerant, advanced computing systems, more applicable to today's connected computing environment and better suited to future requirements for advanced space instruments and vehicles, serial backplane technologies should be implemented in advanced avionics architectures. Serial backplane technologies eliminate the problem of one card getting the bus and never relinquishing it, or one minor problem on the backplane bringing the whole system down. Being serial instead of parallel reduces many of the signal integrity issues associated with parallel backplanes and thus significantly improves reliability. The increased speeds associated with a serial backplane are an added bonus.
Information transfer rate with serial and simultaneous visual display formats
NASA Astrophysics Data System (ADS)
Matin, Ethel; Boff, Kenneth R.
1988-04-01
Information communication rate for a conventional display with three spatially separated windows was compared with rate for a serial display in which data frames were presented sequentially in one window. For both methods, each frame contained a randomly selected digit with various amounts of additional display 'clutter.' Subjects recalled the digits in a prescribed order. Large rate differences were found, with faster serial communication for all levels of the clutter factors. However, the rate difference was most pronounced for highly cluttered displays. An explanation for the latter effect in terms of visual masking in the retinal periphery was supported by the results of a second experiment. The working hypothesis that serial displays can speed information transfer for automatic but not for controlled processing is discussed.
Verifying speculative multithreading in an application
Felton, Mitchell D
2014-12-09
Verifying speculative multithreading in an application executing in a computing system, including: executing one or more test instructions serially, thereby producing a serial result, including ensuring that all data dependencies among the test instructions are satisfied; executing the test instructions speculatively in a plurality of threads, thereby producing a speculative result; and determining whether a speculative multithreading error exists, including: comparing the serial result to the speculative result and, if the serial result does not match the speculative result, determining that a speculative multithreading error exists.
Verifying speculative multithreading in an application
Felton, Mitchell D
2014-11-18
Verifying speculative multithreading in an application executing in a computing system, including: executing one or more test instructions serially, thereby producing a serial result, including ensuring that all data dependencies among the test instructions are satisfied; executing the test instructions speculatively in a plurality of threads, thereby producing a speculative result; and determining whether a speculative multithreading error exists, including: comparing the serial result to the speculative result and, if the serial result does not match the speculative result, determining that a speculative multithreading error exists.
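A toy analogue of the check these patents describe (illustrative only, not the claimed mechanism): run the same work serially and in a thread pool, then flag any divergence.

```python
import concurrent.futures

def verify_speculative(work_items, process, combine):
    """Compare a serial reference execution against a threaded one.

    The serial pass trivially satisfies all data dependencies; if the
    combined threaded result differs, the parallel execution has a
    multithreading error (e.g., a missed order dependence).
    """
    serial_result = combine([process(x) for x in work_items])
    with concurrent.futures.ThreadPoolExecutor() as pool:
        speculative_result = combine(list(pool.map(process, work_items)))
    if serial_result != speculative_result:
        raise AssertionError("speculative multithreading error: results diverge")
    return serial_result

# Example: squaring then summing is order-independent, so this passes.
assert verify_speculative(range(100), lambda x: x * x, sum) == 328350
```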
2014-01-31
Raspberry Pi single-board computer (SBC) (see section 3.3.1.2). These snoopers can intercept the serial data, decode the information, and retransmit the...data. The Raspberry Pi contains two serial ports that allow receiving, altering, and retransmitting serial data. These monitor points will provide
On-line Management System for the Periodicals in JAERI
NASA Astrophysics Data System (ADS)
Itabashi, Keizo; Mineo, Yukinobu
The article outlines an on-line serials control system utilizing a minicomputer. The system deals with subscription, check-in, claiming, inquiry of serials information, and binding of journals. In this system, journal acquisition with serial-arrival prediction in an on-line mode is carried out on a priority principle to record the actual receipt of incoming issues.
CAVIAR: a tool to improve serial analysis of the 12-lead electrocardiogram.
Berg, J; Fayn, J; Edenbrandt, L; Lundh, B; Malmström, P; Rubel, P
1995-09-01
An important part of an electrocardiogram (ECG) interpretation is the comparison between the present ECG and earlier recordings. The purpose of the present study was to evaluate a combination of two computer-based methods, synthesized vectorcardiogram (VCG) and CAVIAR, in this comparison. The methods were applied to a group of 38 normal subjects and to a group of 36 patients treated with anthracyclines. A fraction of these patients are likely to develop cardiac injury during or after the treatment, since anthracyclines are known to cause heart failure and cardiomyopathy. Two ECGs were recorded on each patient, one before and one after the treatment. On each normal subject, two ECGs were recorded with an interval of 8-9 years. A synthesized VCG was calculated from each ECG, and the two synthesized VCGs from each subject were analysed with the CAVIAR method. The CAVIAR analysis is a quantitative method, and normal limits for four measurements were established using the normal group. Values above these limits were more frequent in the patient group than in the normal group. The conventional ECGs were also analysed visually by an experienced ECG interpreter without knowledge of the results of the CAVIAR analysis. No significant serial changes were found in 10 of the patients with high CAVIAR values. Changes in the ECGs were found in two patients with normal CAVIAR values. In summary, synthesized VCG and CAVIAR could be used to highlight small serial changes that are difficult to find in a visual analysis of ECGs.
A massively parallel computational approach to coupled thermoelastic/porous gas flow problems
NASA Technical Reports Server (NTRS)
Shia, David; Mcmanus, Hugh L.
1995-01-01
A new computational scheme for coupled thermoelastic/porous gas flow problems is presented. Heat transfer, gas flow, and dynamic thermoelastic governing equations are expressed in fully explicit form, and solved on a massively parallel computer. The transpiration cooling problem is used as an example problem. The numerical solutions have been verified by comparison to available analytical solutions. Transient temperature, pressure, and stress distributions have been obtained. Small spatial oscillations in pressure and stress have been observed, which would be impractical to predict with previously available schemes. Comparisons between serial and massively parallel versions of the scheme have also been made. The results indicate that for small scale problems the serial and parallel versions use practically the same amount of CPU time. However, as the problem size increases the parallel version becomes more efficient than the serial version.
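The "fully explicit" formulation is what makes such a scheme parallel-friendly: every grid point updates from the previous time step only, so all points can be computed independently. A one-dimensional heat-equation step shows the pattern (a generic sketch, not the paper's coupled thermoelastic/gas-flow equations):

```python
import numpy as np

def explicit_heat_step(T, alpha, dx, dt):
    """One fully explicit finite-difference step of the 1D heat equation.

    The update at each point needs only neighbor values from the previous
    time step, so every point can be computed independently -- the property
    that makes fully explicit schemes suit massively parallel machines.
    """
    lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
    T_new = T + alpha * dt * lap
    T_new[0], T_new[-1] = T[0], T[-1]   # hold fixed (Dirichlet) boundaries
    return T_new
```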
Pediatric digital chest imaging.
Tarver, R D; Cohen, M; Broderick, N J; Conces, D J
1990-01-01
The Philips Computed Radiography system performs well with pediatric portable chest radiographs, handling the throughput of a busy intensive care service 24 hours a day. Images are excellent and routinely provide a conventional (unenhanced) image and an edge-enhanced image. Radiation dose is decreased by the lowered frequency of repeat examinations and the ability of the plates to respond to a much lower dose and still provide an adequate image. The high quality and uniform density of serial PCR portable radiographs greatly enhance the diagnostic content of the films. Decreased resolution has not been a problem clinically. Image manipulation and electronic transfer to remote viewing stations appear to be helpful and are currently being evaluated further. The PCR system provides a marked improvement in pediatric portable chest radiology.
NASA Technical Reports Server (NTRS)
Whalen, Robert T.; Napel, Sandy; Yan, Chye H.
1996-01-01
Progress in development of the methods required to study bone remodeling as a function of time is reported. The following topics are presented: 'A New Methodology for Registration Accuracy Evaluation', 'Registration of Serial Skeletal Images for Accurately Measuring Changes in Bone Density', and 'Precise and Accurate Gold Standard for Multimodality and Serial Registration Method Evaluations.'
The VLSI design of a Reed-Solomon encoder using Berlekamp's bit-serial multiplier algorithm
NASA Technical Reports Server (NTRS)
Truong, T. K.; Deutsch, L. J.; Reed, I. S.; Hsu, I. S.; Wang, K.; Yeh, C. S.
1982-01-01
Realization of a bit-serial multiplication algorithm for the encoding of Reed-Solomon (RS) codes on a single VLSI chip using NMOS technology is demonstrated to be feasible. A dual-basis (255, 223) code over a Galois field is used. Conventional RS encoders for long codes often require look-up tables to perform the multiplication of two field elements. Berlekamp's algorithm requires only shifting and exclusive-OR operations.
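The flavor of table-free Galois-field arithmetic is easy to show: multiplication in GF(2^8) by shifts and XORs only. This is the generic shift-and-XOR multiply, not Berlekamp's dual-basis bit-serial circuit; the polynomial 0x11D is a common RS choice and an assumption here, as the paper's field may differ.

```python
def gf256_mul(a, b, poly=0x11D):
    """Multiply two GF(2^8) elements using only shifts and XORs.

    XOR is addition in GF(2); each set bit of b conditionally accumulates a
    shifted copy of a, reduced modulo the field polynomial (0x11D encodes
    x^8 + x^4 + x^3 + x^2 + 1).
    """
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a          # conditionally accumulate (GF addition)
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= poly            # reduce modulo the field polynomial
    return result

assert gf256_mul(3, 7) == 9      # (x+1)(x^2+x+1) = x^3+1
```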
Backtracking and Re-execution in the Automatic Debugging of Parallelized Programs
NASA Technical Reports Server (NTRS)
Matthews, Gregory; Hood, Robert; Johnson, Stephen; Leggett, Peter; Biegel, Bryan (Technical Monitor)
2002-01-01
In this work we describe a new approach using relative debugging to find differences in computation between a serial program and a parallel version of that program. We use a combination of re-execution and backtracking in order to find the first difference in computation that may ultimately lead to an incorrect value that the user has indicated. In our prototype implementation we use static analysis information from a parallelization tool in order to perform the backtracking as well as the mapping required between serial and parallel computations.
Accelerating functional verification of an integrated circuit
Deindl, Michael; Ruedinger, Jeffrey Joseph; Zoellin, Christian G.
2015-10-27
Illustrative embodiments include a method, system, and computer program product for accelerating functional verification in simulation testing of an integrated circuit (IC). Using a processor and a memory, a serial operation is replaced with a direct register access operation, wherein the serial operation is configured to perform bit shifting operation using a register in a simulation of the IC. The serial operation is blocked from manipulating the register in the simulation of the IC. Using the register in the simulation of the IC, the direct register access operation is performed in place of the serial operation.
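The following sketch illustrates the substitution in spirit only, with names not drawn from the patent: a bit-serial shift path that costs one simulated cycle per bit versus a direct register write that deposits the whole value in a single operation.

```python
# A simulated register with two write paths: a bit-serial shift-in
# (one bit per simulated cycle) and a direct register access that
# replaces the whole value at once, as in the accelerated flow above.
class SimRegister:
    def __init__(self, width):
        self.width = width
        self.value = 0

    def shift_in(self, bit):              # serial path: one bit per cycle
        mask = (1 << self.width) - 1
        self.value = ((self.value << 1) | (bit & 1)) & mask

    def direct_write(self, value):        # accelerated path: one operation
        self.value = value & ((1 << self.width) - 1)

reg = SimRegister(8)
for bit in [1, 0, 1, 1, 0, 0, 1, 0]:      # 8 simulated cycles
    reg.shift_in(bit)
serial_result = reg.value

reg.direct_write(0b10110010)               # 1 simulated operation
assert reg.value == serial_result
```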
Artificial neural networks using complex numbers and phase encoded weights.
Michel, Howard E; Awwal, Abdul Ahad S
2010-04-01
The model of a simple perceptron using phase-encoded inputs and complex-valued weights is proposed. The aggregation function, activation function, and learning rule for the proposed neuron are derived and applied to Boolean logic functions and simple computer vision tasks. The complex-valued neuron (CVN) is shown to be superior to traditional perceptrons. An improvement of 135% over the theoretical maximum of 104 linearly separable problems (of three variables) solvable by conventional perceptrons is achieved without additional logic, neuron stages, or higher order terms such as those required in polynomial logic gates. The application of CVN in distortion invariant character recognition and image segmentation is demonstrated. Implementation details are discussed, and the CVN is shown to be very attractive for optical implementation since optical computations are naturally complex. The cost of the CVN is less in all cases than the traditional neuron when implemented optically. Therefore, all the benefits of the CVN can be obtained without additional cost. However, on those implementations dependent on standard serial computers, CVN will be more cost effective only in those applications where its increased power can offset the requirement for additional neurons.
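A minimal sketch of a phase-encoded complex-valued neuron, assuming one plausible encoding (Boolean inputs mapped to unit phasors a quarter-turn apart, magnitude-thresholded activation); the paper's exact aggregation, activation, and learning rules are not reproduced here, and the weights are illustrative.

```python
import numpy as np

# Encode Boolean inputs as unit phasors: 0 -> phase 0, 1 -> phase pi/2.
def encode(bits, phase_step=np.pi / 2):
    return np.exp(1j * phase_step * np.asarray(bits, dtype=float))

# Aggregate with complex weights, then threshold on the magnitude of
# the complex sum; the phase information is what lets a single unit
# separate patterns a real-valued perceptron cannot.
def cvn_output(bits, weights, threshold=0.5):
    z = np.sum(weights * encode(bits))
    return int(np.abs(z) > threshold)

weights = np.array([0.5 + 0.5j, 0.5 - 0.5j, 0.1 + 0.0j])  # hypothetical
print(cvn_output([1, 0, 1], weights))
```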
ERIC Educational Resources Information Center
Asher, Andrew D.; Duke, Lynda M.; Wilson, Suzanne
2013-01-01
In 2011, researchers at Bucknell University and Illinois Wesleyan University compared the search efficacy of Serials Solutions Summon, EBSCO Discovery Service, Google Scholar, and conventional library databases. Using a mixed-methods approach, qualitative and quantitative data were gathered on students' usage of these tools. Regardless of the…
On the reduced-complexity of LDPC decoders for beyond 400 Gb/s serial optical transmission
NASA Astrophysics Data System (ADS)
Djordjevic, Ivan B.; Xu, Lei; Wang, Ting
2010-12-01
Two reduced-complexity (RC) LDPC decoders are proposed, which can be used in combination with large-girth LDPC codes to enable beyond-400 Gb/s serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.45 dB worse than the conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further evaluate the proposed algorithms for use in beyond-400 Gb/s serial optical transmission in combination with a PolMUX 32-IPQ-based signal constellation and show that low BERs can be achieved for medium optical SNRs, while achieving a net coding gain above 11.4 dB.
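For illustration, a sketch of the attenuated (normalized) min-sum check-node update that such reduced-complexity decoders build on: each outgoing message takes the product of the signs and the attenuated minimum magnitude of the other incoming messages. The attenuation factor here is illustrative, not the optimal value reported above.

```python
import numpy as np

# Attenuated min-sum check-node update: for each edge, combine the
# signs of all *other* incoming LLRs and scale the minimum of their
# magnitudes by an attenuation factor alpha.
def check_node_update(llrs_in, alpha=0.8):
    llrs_in = np.asarray(llrs_in, dtype=float)
    out = np.empty_like(llrs_in)
    for i in range(len(llrs_in)):
        others = np.delete(llrs_in, i)
        sign = np.prod(np.sign(others))
        out[i] = alpha * sign * np.min(np.abs(others))
    return out

print(check_node_update([1.5, -0.7, 2.2, -3.0]))
```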
Three more semantic serial position functions and a SIMPLE explanation.
Kelley, Matthew R; Neath, Ian; Surprenant, Aimée M
2013-05-01
There are innumerable demonstrations of serial position functions-with characteristic primacy and recency effects-in episodic tasks, but there are only a handful of such demonstrations in semantic memory tasks, and those demonstrations have used only two types of stimuli. Here, we provide three more examples of serial position functions when recalling from semantic memory. Participants were asked to reconstruct the order of (1) two cartoon theme song lyrics, (2) the seven Harry Potter books, and (3) two sets of movies, and all three demonstrations yielded conventional-looking serial position functions with primacy and recency effects. The data were well-fit by SIMPLE, a local distinctiveness model of memory that was originally designed to account for serial position effects in short- and long-term episodic memory. According to SIMPLE, serial position functions in both episodic and semantic memory tasks arise from the same type of processing: Items that are more separated from their close neighbors in psychological space at the time of recall will be better remembered. We argue that currently available evidence suggests that serial position functions observed when recalling items that are presumably in semantic memory arise because of the same processes as those observed when recalling items that are presumably in episodic memory.
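A minimal sketch of SIMPLE's core computation, assuming its usual published formulation (logarithmic temporal compression, an exponential similarity-distance function, and retrieval by relative distinctiveness); the parameter value and item timings are illustrative, not fits to the data above.

```python
import numpy as np

# Items live on a logarithmically compressed temporal dimension; an
# item's retrievability falls as close neighbors crowd it in that
# psychological space, yielding primacy and recency at the ends.
def simple_discriminability(positions, c=10.0):
    x = np.log(np.asarray(positions, dtype=float))     # log compression
    n = len(x)
    p = np.empty(n)
    for i in range(n):
        sim = np.exp(-c * np.abs(x[i] - x))            # similarity to all items
        p[i] = sim[i] / sim.sum()                      # relative distinctiveness
    return p

# Seven items (e.g., an ordered book series) recalled after a common
# retention interval: end items have fewer close neighbors.
retention = 10.0
positions = retention + np.arange(7, 0, -1)  # temporal distances at recall
print(simple_discriminability(positions))    # U-shaped serial position curve
```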
Use of CYBER 203 and CYBER 205 computers for three-dimensional transonic flow calculations
NASA Technical Reports Server (NTRS)
Melson, N. D.; Keller, J. D.
1983-01-01
Experiences in modifying two three-dimensional transonic flow computer programs (FLO 22 and FLO 27) for use on the CDC CYBER 203 computer system are discussed. Both programs were originally written for serial machines. Several methods were attempted to optimize the execution of the two programs on the vector machine: leaving the program in scalar form (i.e., serial computation) and relying on compiler software to optimize and vectorize it, vectorizing parts of the existing algorithm in the program, and incorporating a vectorizable algorithm (ZEBRA I or ZEBRA II) in the program. Comparison runs of the programs were made on CDC CYBER 175, CYBER 203, and two-pipe CDC CYBER 205 computer systems.
Integration of communications and tracking data processing simulation for space station
NASA Technical Reports Server (NTRS)
Lacovara, Robert C.
1987-01-01
A simplified model of the communications network for the Communications and Tracking Data Processing System (CTDP) was developed. It was simulated by use of programs running on several on-site computers. These programs communicate with one another by means of both local area networks and direct serial connections. The domain of the model and its simulation is from Orbital Replaceable Unit (ORU) interface to Data Management Systems (DMS). The simulation was designed to allow status queries from remote entities across the DMS networks to be propagated through the model to several simulated ORU's. The ORU response is then propagated back to the remote entity which originated the request. Response times at the various levels were investigated in a multi-tasking, multi-user operating system environment. Results indicate that the effective bandwidth of the system may be too low to support expected data volume requirements under conventional operating systems. Instead, some form of embedded process control program may be required on the node computers.
Serials Evaluation: An Innovative Approach.
ERIC Educational Resources Information Center
Berger, Marilyn; Devine, Jane
1990-01-01
Describes a method of analyzing serials collections in special libraries that combines evaluative criteria with database management technology. Choice of computer software is discussed, qualitative information used to evaluate subject coverage is examined, and quantitative and descriptive data that can be used for collection management are…
FireWire: Hot New Multimedia Interface or Flash in the Pan?
ERIC Educational Resources Information Center
Learn, Larry L., Ed.
1995-01-01
Examines potential solutions to the problem of personal computer cabling and configuration and serial port performance, namely "FireWire" (P1394) and "Universal Serial Bus" (USB). Discusses interface design, technical capabilities, user friendliness, compatibility, costs, and future perspectives. (AEF)
Serial network simplifies the design of multiple microcomputer systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Folkes, D.
1981-01-01
Recently there has been a lot of interest in developing network communication schemes for carrying digital data between locally distributed computing stations. Many of these schemes have focused on distributed networking techniques for data processing applications. These applications suggest the use of a serial, multipoint bus, where a number of remote intelligent units act as slaves to a central or host computer. Each slave would be serially addressable from the host and would perform required operations upon being addressed by the host. Based on an MK3873 single-chip microcomputer, the SCU 20 is designed to be such a remote slave device. The capabilities of the SCU 20 and its use in systems applications are examined.
Li, YuQian; Liu, ChunMei; Wachemo, Akiber Chufo; Yuan, HaiRong; Zou, DeXun; Liu, YanPing; Li, XiuJin
2017-07-01
Several completely stirred tank reactors (CSTR) connected in series for anaerobic digestion of corn stover were investigated at laboratory scale. Serial anaerobic digestion systems operated at a total HRT of 40 days, with HRT distributions of 10+30 days (HRT10+30d), 20+20 days (HRT20+20d), and 30+10 days (HRT30+10d), were compared to a conventional one-step CSTR at the same HRT of 40 days. The results showed that in the HRT10+30d serial system the process became very unstable at an organic load of 50 g TS/L. The HRT20+20d and HRT30+10d serial systems improved methane production by 8.3-14.6% compared to the one-step system at all loads of 50, 70, and 90 g TS/L. The conversion rates of total solids, cellulose, and hemicellulose were increased in the serial anaerobic digestion systems compared to the single system. The serial systems showed more stable process performance at high organic loads. The HRT30+10d system showed the best biogas production and conversions among all systems.
Freeform Optics: current challenges for future serial production
NASA Astrophysics Data System (ADS)
Schindler, C.; Köhler, T.; Roth, E.
2017-10-01
One of the major recent developments in the optics industry is the commercial manufacturing of freeform surfaces for mid- and high-performance optical systems. Removing the constraint of rotational symmetry enables completely new optical design solutions - but it also creates completely new challenges for the manufacturer. Adapting serial production from rotationally symmetric to freeform optics cannot be done merely by extending machine capabilities and software for every process step. New solutions for conventional optics production, or completely new process chains, are necessary.
The (temporary?) queering of Japanese TV.
Miller, S D
2000-01-01
One of the primary texts of the "out" queer cinema of Japan is the television serial Dōsōkai, first aired in 1993. Unlike Western television shows positing queer characters, Dōsōkai presents its gay characters without apology or excuses, and as leads rather than as colorful appendages. At the same time, however, the show filters gay eroticism through the (hetero)normative mode of serial melodrama, at once pushing the boundaries of national permissiveness while normalizing and homogenizing homosexuality by rendering it within a conventional form.
Fourier domain preconditioned conjugate gradient algorithm for atmospheric tomography.
Yang, Qiang; Vogel, Curtis R; Ellerbroek, Brent L
2006-07-20
By 'atmospheric tomography' we mean the estimation of a layered atmospheric turbulence profile from measurements of the pupil-plane phase (or phase gradients) corresponding to several different guide star directions. We introduce what we believe to be a new Fourier domain preconditioned conjugate gradient (FD-PCG) algorithm for atmospheric tomography, and we compare its performance against an existing multigrid preconditioned conjugate gradient (MG-PCG) approach. Numerical results indicate that on conventional serial computers, FD-PCG is as accurate and robust as MG-PCG, but it is from one to two orders of magnitude faster for atmospheric tomography on 30 m class telescopes. Simulations are carried out for both natural guide stars and for a combination of finite-altitude laser guide stars and natural guide stars to resolve tip-tilt uncertainty.
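A minimal sketch of the FD-PCG idea under a strong simplifying assumption: if the system operator were circulant, the Fourier-domain preconditioner solve would reduce to a pointwise division between FFTs. The tomography operator itself is not modeled here, and all names are illustrative.

```python
import numpy as np

# Preconditioned conjugate gradient where the preconditioner solve
# M^{-1} r is applied in the Fourier domain: divide the FFT of the
# residual by the operator's assumed Fourier symbol.
def fd_pcg(apply_A, b, fourier_symbol, tol=1e-8, max_iter=200):
    precond = lambda r: np.real(np.fft.ifft(np.fft.fft(r) / fourier_symbol))
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = precond(r)
    p = z.copy()
    rz = np.dot(r, z)
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / np.dot(p, Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = np.dot(r, z)
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example: A is itself a circular convolution, so the preconditioner is
# exact and PCG converges essentially in one step.
n = 64
kernel = np.zeros(n); kernel[0] = 4.0; kernel[1] = kernel[-1] = -1.0
symbol = np.fft.fft(kernel)
A = lambda v: np.real(np.fft.ifft(np.fft.fft(v) * symbol))
b = np.random.rand(n)
x = fd_pcg(A, b, symbol)
print(np.linalg.norm(A(x) - b))  # ~0
```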
NASA Technical Reports Server (NTRS)
Lee, J.; Kim, K.
1991-01-01
A Very Large Scale Integration (VLSI) architecture for robot direct kinematic computation suitable for industrial robot manipulators was investigated. The Denavit-Hartenberg transformations are reviewed to identify a suitable processing element, namely an augmented CORDIC. Two distinct implementations, bit-serial and parallel, are elaborated. The performance of each scheme is analyzed with respect to the time to compute one location of the end-effector of a 6-link manipulator and the number of transistors required.
Star adaptation for two algorithms used on serial computers
NASA Technical Reports Server (NTRS)
Howser, L. M.; Lambiotte, J. J., Jr.
1974-01-01
Two representative algorithms used on a serial computer and presently executed on the Control Data Corporation 6000 computer were adapted to execute efficiently on the Control Data STAR-100 computer. Gaussian elimination for the solution of simultaneous linear equations and the Gauss-Legendre quadrature formula for the approximation of an integral are the two algorithms discussed. A description is given of how the programs were adapted for STAR and why these adaptations were necessary to obtain an efficient STAR program. Some points to consider when adapting an algorithm for STAR are discussed. Program listings of the 6000 version coded in 6000 FORTRAN, the adapted STAR version coded in 6000 FORTRAN, and the STAR version coded in STAR FORTRAN are presented in the appendices.
SpaceWire Driver Software for Special DSPs
NASA Technical Reports Server (NTRS)
Clark, Douglas; Lux, James; Nishimoto, Kouji; Lang, Minh
2003-01-01
A computer program provides a high-level C-language interface to electronics circuitry that controls a SpaceWire interface in a system based on a space qualified version of the ADSP-21020 digital signal processor (DSP). SpaceWire is a spacecraft-oriented standard for packet-switching data-communication networks that comprise nodes connected through bidirectional digital serial links that utilize low-voltage differential signaling (LVDS). The software is tailored to the SMCS-332 application-specific integrated circuit (ASIC) (also available as the TSS901E), which provides three high-speed (150 Mbps) serial point-to-point links compliant with the proposed Institute of Electrical and Electronics Engineers (IEEE) Standard 1355.2 and equivalent European Space Agency (ESA) Standard ECSS-E-50-12. In the specific application of this software, the SpaceWire ASIC was combined with the DSP processor, memory, and control logic in a Multi-Chip Module DSP (MCM-DSP). The software is a collection of low-level driver routines that provide a simple message-passing application programming interface (API) for software running on the DSP. Routines are provided for interrupt-driven access to the two styles of interface provided by the SMCS: (1) the "word at a time" conventional host interface (HOCI); and (2) a higher performance "dual port memory" style interface (COMI).
A unifying framework for rigid multibody dynamics and serial and parallel computational issues
NASA Technical Reports Server (NTRS)
Fijany, Amir; Jain, Abhinandan
1989-01-01
A unifying framework for various formulations of the dynamics of open-chain rigid multibody systems is discussed, and their suitability for serial and parallel processing is assessed. The framework is based on the derivation of intrinsic, i.e., coordinate-free, equations of the algorithms, which provides a suitable abstraction and permits a distinction to be made between the computational redundancy in the intrinsic and extrinsic equations. A set of spatial notation is used which allows the derivation of the various algorithms in a common setting and thus clarifies the relationships among them. The three classes of algorithms, viz., O(n), O(n exp 2), and O(n exp 3), for the solution of the dynamics problem are investigated. Researchers begin with the derivation of the O(n exp 3) algorithms based on the explicit computation of the mass matrix, which provides insight into the underlying basis of the O(n) algorithms. From a computational perspective, the optimal choice of a coordinate frame for the projection of the intrinsic equations is discussed, and the serial computational complexity of the different algorithms is evaluated. The three classes of algorithms are also analyzed for suitability for parallel processing. It is shown that the problem belongs to the class NC, with time and processor bounds of O(log exp 2 (n)) and O(n exp 4), respectively. However, the algorithm that achieves these bounds is not stable. Researchers show that the fastest stable parallel algorithm achieves a computational complexity of O(n) with O(n exp 2) processors, and results from the parallelization of the O(n exp 3) serial algorithm.
NASA Astrophysics Data System (ADS)
Work, Paul R.
1991-12-01
This thesis investigates the parallelization of existing serial programs in computational electromagnetics for use in a parallel environment. Existing algorithms for calculating the radar cross section of an object are covered, and a ray-tracing code is chosen for implementation on a parallel machine. Current parallel architectures are introduced and a suitable parallel machine is selected for the implementation of the chosen ray-tracing algorithm. The standard techniques for the parallelization of serial codes are discussed, including load balancing and decomposition considerations, and appropriate methods for the parallelization effort are selected. A load balancing algorithm is modified to increase the efficiency of the application, and a high level design of the structure of the serial program is presented. A detailed design of the modifications for the parallel implementation is also included, with both the high level and the detailed design specified in a high level design language called UNITY. The correctness of the design is proven using UNITY and standard logic operations. The theoretical and empirical results show that it is possible to achieve an efficient parallel application for a serial computational electromagnetic program where the characteristics of the algorithm and the target architecture critically influence the development of such an implementation.
Demonstration of optical computing logics based on binary decision diagram.
Lin, Shiyun; Ishikawa, Yasuhiko; Wada, Kazumi
2012-01-16
Optical circuits are low-power, high-speed alternatives to current information processing based on transistor circuits. However, because no transistor function is available in optics, an architecture that suits optics must be chosen. One such architecture is the binary decision diagram (BDD), in which a signal is processed by sending light from the root through a series of switching nodes to a leaf (terminal). The speed of optical computing is limited by either the transmission time of optical signals from the root to the leaf or the switching time of a node. We have designed and experimentally demonstrated 1-bit and 2-bit adders based on the BDD architecture. The switching nodes are silicon ring resonators with a modulation depth of 10 dB, and their states are changed by the plasma dispersion effect. The quality factor Q of the rings is 1500, which allows fast signal transmission, e.g., 1.3 ps as calculated from the photon escape time. The total processing time is thus analyzed to be ~9 ps for a 2-bit adder and would scale linearly with the number of bits. This is two orders of magnitude faster than conventional CMOS circuitry, with its ~ns-scale delay. The presented results show the potential of fast optical computing circuits.
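A minimal sketch of BDD-style signal routing, with an illustrative node table (not the paper's device layout) for the sum bit of a 1-bit full adder: a token enters at the root and each node switches it toward one of two children according to an input bit, until it reaches a terminal holding the output.

```python
# Illustrative BDD for the sum bit of a 1-bit full adder (a XOR b XOR c).
# Each node maps to (child if variable == 0, child if variable == 1);
# integer entries are terminals carrying the output value.
SUM_BDD = {
    "a":  ("b0", "b1"),
    "b0": ("c0", "c1"),
    "b1": ("c1", "c0"),
    "c0": (0, 1),
    "c1": (1, 0),
}

def traverse(bdd, root, inputs):
    node = root
    while node in bdd:
        low, high = bdd[node]
        var = node[0]                  # variable tested at this node
        node = high if inputs[var] else low
    return node                        # terminal value reached

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert traverse(SUM_BDD, "a", {"a": a, "b": b, "c": c}) == a ^ b ^ c
```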
NASA Technical Reports Server (NTRS)
Fijany, Amir; Toomarian, Benny N.
2000-01-01
There has been significant improvement in the performance of VLSI devices, in terms of size, power consumption, and speed, in recent years, and this trend may continue for the near future. However, it is a well-known fact that there are major obstacles, i.e., the physical limitation of feature size reduction and the ever increasing cost of foundries, that would prevent the long-term continuation of this trend. This has motivated the exploration of fundamentally new technologies that are not dependent on the conventional feature-size approach. Such technologies are expected to enable scaling to continue to the ultimate level, i.e., molecular and atomistic size. Quantum computing, quantum dot-based computing, DNA-based computing, and biologically inspired computing are examples of such new technologies. In particular, quantum dot-based computing using Quantum-dot Cellular Automata (QCA) has recently been intensely investigated as a promising new technology capable of offering significant improvement over conventional VLSI in terms of reduction of feature size (and hence increase in integration level), reduction of power consumption, and increase of switching speed. Quantum dot-based computing and memory in general, and QCA specifically, are intriguing to NASA due to their high packing density (10(exp 11) - 10(exp 12) per square cm), low power consumption (no transfer of current), and potentially higher radiation tolerance. Under the Revolutionary Computing Technology (RCT) Program at the NASA/JPL Center for Integrated Space Microelectronics (CISM), we have been investigating the potential applications of QCA for the space program. To this end, exploiting the intrinsic features of QCA, we have designed novel QCA-based circuits for co-planar (i.e., single-layer) and compact implementation of a class of data permutation matrices, a class of interconnection networks, and a bit-serial processor. Building upon these circuits, we have developed novel algorithms and QCA-based architectures for highly parallel and systolic computation of signal/image processing applications, such as the FFT and the Wavelet and Walsh-Hadamard Transforms.
Three-dimensional surface reconstruction for industrial computed tomography
NASA Technical Reports Server (NTRS)
Vannier, M. W.; Knapp, R. H.; Gayou, D. E.; Sammon, N. P.; Butterfield, R. L.; Larson, J. W.
1985-01-01
Modern high resolution medical computed tomography (CT) scanners can produce geometrically accurate sectional images of many types of industrial objects. Computer software has been developed to convert serial CT scans into a three-dimensional surface form, suitable for display on the scanner itself. This software, originally developed for imaging the skull, has been adapted for application to industrial CT scanning, where serial CT scans through an object of interest may be reconstructed to demonstrate spatial relationships in three dimensions that cannot be easily understood using the original slices. The methods of three-dimensional reconstruction and solid modeling are reviewed, and reconstruction in three dimensions from CT scans through familiar objects is demonstrated.
NASA Astrophysics Data System (ADS)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.
2017-01-01
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
Charles Darwin and the evolution of human grammatical systems.
Buckingham, Hugh W; Christman, Sarah S
2010-04-08
Charles Darwin's evolutionary theories of animal communication were deeply embedded in a centuries-old model of association psychology, whose prodromes have most often been traced to the writings of Aristotle. His notions of frequency of occurrence of pairings have been passed down through the centuries and were a major ontological feature in the formation of associative connectivity. He focused on the associations of cause and effect, contiguity of sequential occurrence, and similarity among items. Cause and effect were often reduced to another type of contiguity relation, so that Aristotle is most often evoked as the originator of the associative bondings through similarity and contiguity, contiguity being the most powerful and frequent means of association. Contiguity eventually became the overriding mechanism for serial ordering of mental events in both perception and action. The notions of concatenation throughout the association psychology took the form of "trains" of events, both sensory and motor, in such a way that serial ordering came to be viewed as an item-by-item string of locally contiguous events. Modern developments in the mathematics of serial ordering have advanced in sophistication since the early and middle twentieth century, and new computational methods have allowed us to reevaluate the serial concatenative theories of Darwin and the associationists. These new models of serial order permit a closer comparative scrutiny between human and nonhuman. The present study considers Darwin's insistence on a "degree" continuity between human and nonhuman animal serial ordering. We will consider a study of starling birdsongs and whether the serial ordering of those songs provides evidence that they have a syntax that at best differs only in degree and not in kind with the computations of human grammatical structures. We will argue that they, in fact, show no such thing.
Yousefzadeh, Amirreza; Jablonski, Miroslaw; Iakymchuk, Taras; Linares-Barranco, Alejandro; Rosado, Alfredo; Plana, Luis A; Temple, Steve; Serrano-Gotarredona, Teresa; Furber, Steve B; Linares-Barranco, Bernabe
2017-10-01
Address event representation (AER) is a widely employed asynchronous technique for interchanging "neural spikes" between different hardware elements in neuromorphic systems. Each neuron or cell in a chip or a system is assigned an address (or ID), which is typically communicated through a high-speed digital bus, thus time-multiplexing a high number of neural connections. Conventional AER links use parallel physical wires together with a pair of handshaking signals (request and acknowledge). In this paper, we present a fully serial implementation using bidirectional SATA connectors with a pair of low-voltage differential signaling (LVDS) wires for each direction. The proposed implementation can multiplex a number of conventional parallel AER links for each physical LVDS connection. It uses flow control, clock correction, and byte alignment techniques to transmit 32-bit address events reliably over multiplexed serial connections. The setup has been tested using commercial Spartan6 FPGAs attaining a maximum event transmission speed of 75 Meps (Mega events per second) for 32-bit events at a line rate of 3.0 Gbps. Full HDL codes (vhdl/verilog) and example demonstration codes for the SpiNNaker platform will be made available.
[Computer Assisted Instruction].
ERIC Educational Resources Information Center
Broderick, Bill; And Others
1987-01-01
These two serial issues are devoted to the impact of computers on education, and specifically their effects on developmental education programs. First "The Effects of Computer-Based Instruction" summarizes the literature on the impact of computer-based instruction, including a study by James and Chen-Lin Kulik and Peter Cohen, which found that:…
Baba, Akira; Yamauchi, Hideomi; Ogino, Nobuhiro; Okuyama, Yumi; Yamazoe, Shinji; Munetomo, Yohei; Kobashi, Yuko; Mogami, Takuji; Ojiri, Hiroya
2017-12-01
Positional change of the retropharyngeal carotid artery over time is a rare phenomenon, even rarer in previous reports, and it is important to be aware of it before any neck surgical procedure. A woman in her 50s underwent an anterior maxillectomy for upper gingival cancer, without neck dissection. The patient had medical histories of diabetes mellitus and liver dysfunction, with unremarkable family history. Serial neck contrast-enhanced computed tomography for detecting locoregional recurrence had been performed as follow-up over 4 years. The radiological course in the serial computed tomography studies showed reciprocating positional changes (wandering) of the carotid arteries between normal and retropharyngeal positions. There was no locoregional recurrence of the gingival cancer. This is the first case report to describe such a rare presentation of wandering carotid arteries. It is important for clinicians to be aware of a wandering carotid artery to avoid potentially fatal complications.
Alvelo, Jessica L.; Papademetris, Xenophon; Mena-Hurtado, Carlos; Jeon, Sangchoon; Sumpio, Bauer E.; Sinusas, Albert J.
2018-01-01
Background: Single photon emission computed tomography (SPECT)/computed tomography (CT) imaging allows for assessment of skeletal muscle microvascular perfusion but has not been quantitatively assessed in angiosomes, or 3-dimensional vascular territories, of the foot. This study assessed and compared resting angiosome foot perfusion between healthy subjects and diabetic patients with critical limb ischemia (CLI). Additionally, the relationship between SPECT/CT imaging and the ankle–brachial index—a standard tool for evaluating peripheral artery disease—was assessed. Methods and Results: Healthy subjects (n=9) and diabetic patients with CLI and nonhealing ulcers (n=42) underwent SPECT/CT perfusion imaging of the feet. CT images were segmented into angiosomes for quantification of relative radiotracer uptake, expressed as standardized uptake values. Standardized uptake values were assessed in ulcerated angiosomes of patients with CLI and compared with whole-foot standardized uptake values in healthy subjects. Serial SPECT/CT imaging was performed to assess uptake kinetics of technetium-99m-tetrofosmin. The relationship between angiosome perfusion and ankle–brachial index was assessed via correlational analysis. Resting perfusion was significantly lower in CLI versus healthy subjects (P=0.0007). Intraclass correlation coefficients of 0.95 (healthy) and 0.93 (CLI) demonstrated excellent agreement between serial perfusion measurements. Correlational analysis, including healthy and CLI subjects, demonstrated a significant relationship between ankle–brachial index and SPECT/CT (P=0.01); however, this relationship was not significant for diabetic CLI patients only (P=0.2). Conclusions: SPECT/CT imaging assesses regional foot perfusion and detects abnormalities in microvascular perfusion that may be undetectable by conventional ankle–brachial index in patients with diabetes mellitus. SPECT/CT may provide a novel approach for evaluating responses to targeted therapies. PMID:29748311
REMOTE: Modem Communicator Program for the IBM personal computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGirt, F.
1984-06-01
REMOTE, a Modem Communicator Program, was developed to provide full duplex serial communication with arbitrary remote computers via either dial-up telephone modems or direct lines. The latest version of REMOTE (documented in this report) was developed for the IBM Personal Computer.
Fast Grasp Contact Computation for a Serial Robot
NASA Technical Reports Server (NTRS)
Hargrave, Brian (Inventor); Shi, Jianying (Inventor); Diftler, Myron A. (Inventor)
2015-01-01
A system includes a controller and a serial robot having links that are interconnected by a joint, wherein the robot can grasp a three-dimensional (3D) object in response to a commanded grasp pose. The controller receives input information, including the commanded grasp pose, a first set of information describing the kinematics of the robot, and a second set of information describing the position of the object to be grasped. The controller also calculates, in a two-dimensional (2D) plane, a set of contact points between the serial robot and a surface of the 3D object needed for the serial robot to achieve the commanded grasp pose. A required joint angle is then calculated in the 2D plane between the pair of links using the set of contact points. A control action is then executed with respect to the motion of the serial robot using the required joint angle.
Mondragón, Esther; Gray, Jonathan; Alonso, Eduardo; Bonardi, Charlotte; Jennings, Dómhnall J.
2014-01-01
This paper presents a novel representational framework for the Temporal Difference (TD) model of learning, which allows the computation of configural stimuli – cumulative compounds of stimuli that generate perceptual emergents known as configural cues. This Simultaneous and Serial Configural-cue Compound Stimuli Temporal Difference model (SSCC TD) can model both simultaneous and serial stimulus compounds, as well as compounds including the experimental context. This modification significantly broadens the range of phenomena which the TD paradigm can explain, and allows it to predict phenomena which traditional TD solutions cannot, particularly effects that depend on compound stimuli functioning as a whole, such as pattern learning and serial structural discriminations, and context-related effects. PMID:25054799
Lipidic cubic phase serial millisecond crystallography using synchrotron radiation
Nogly, Przemyslaw; James, Daniel; Wang, Dingjie; White, Thomas A.; Zatsepin, Nadia; Shilova, Anastasya; Nelson, Garrett; Liu, Haiguang; Johansson, Linda; Heymann, Michael; Jaeger, Kathrin; Metz, Markus; Wickstrand, Cecilia; Wu, Wenting; Båth, Petra; Berntsen, Peter; Oberthuer, Dominik; Panneels, Valerie; Cherezov, Vadim; Chapman, Henry; Schertler, Gebhard; Neutze, Richard; Spence, John; Moraes, Isabel; Burghammer, Manfred; Standfuss, Joerg; Weierstall, Uwe
2015-01-01
Lipidic cubic phases (LCPs) have emerged as successful matrices for the crystallization of membrane proteins. Moreover, the viscous LCP also provides a highly effective delivery medium for serial femtosecond crystallography (SFX) at X-ray free-electron lasers (XFELs). Here, the adaptation of this technology to perform serial millisecond crystallography (SMX) at more widely available synchrotron microfocus beamlines is described. Compared with conventional microcrystallography, LCP-SMX eliminates the need for difficult handling of individual crystals and allows for data collection at room temperature. The technology is demonstrated by solving a structure of the light-driven proton-pump bacteriorhodopsin (bR) at a resolution of 2.4 Å. The room-temperature structure of bR is very similar to previous cryogenic structures but shows small yet distinct differences in the retinal ligand and proton-transfer pathway. PMID:25866654
Using Histories to Implement Atomic Objects
NASA Technical Reports Server (NTRS)
Ng, Pui
1987-01-01
In this paper we describe an approach of implementing atomicity. Atomicity requires that computations appear to be all-or-nothing and executed in a serialization order. The approach we describe has three characteristics. First, it utilizes the semantics of an application to improve concurrency. Second, it reduces the complexity of application-dependent synchronization code by analyzing the process of writing it. In fact, the process can be automated with logic programming. Third, our approach hides the protocol used to arrive at a serialization order from the applications. As a result, different protocols can be used without affecting the applications. Our approach uses a history tree abstraction. The history tree captures the ordering relationship among concurrent computations. By determining what types of computations exist in the history tree and their parameters, a computation can determine whether it can proceed.
Application of high-performance computing to numerical simulation of human movement
NASA Technical Reports Server (NTRS)
Anderson, F. C.; Ziegler, J. M.; Pandy, M. G.; Whalen, R. T.
1995-01-01
We have examined the feasibility of using massively-parallel and vector-processing supercomputers to solve large-scale optimization problems for human movement. Specifically, we compared the computational expense of determining the optimal controls for the single support phase of gait using a conventional serial machine (SGI Iris 4D25), a MIMD parallel machine (Intel iPSC/860), and a parallel-vector-processing machine (Cray Y-MP 8/864). With the human body modeled as a 14 degree-of-freedom linkage actuated by 46 musculotendinous units, computation of the optimal controls for gait could take up to 3 months of CPU time on the Iris. Both the Cray and the Intel are able to reduce this time to practical levels. The optimal solution for gait can be found with about 77 hours of CPU on the Cray and with about 88 hours of CPU on the Intel. Although the overall speeds of the Cray and the Intel were found to be similar, the unique capabilities of each machine are better suited to different portions of the computational algorithm used. The Intel was best suited to computing the derivatives of the performance criterion and the constraints whereas the Cray was best suited to parameter optimization of the controls. These results suggest that the ideal computer architecture for solving very large-scale optimal control problems is a hybrid system in which a vector-processing machine is integrated into the communication network of a MIMD parallel machine.
Universal Linear Motor Driven Leg Press Dynamometer and Concept of Serial Stretch Loading.
Hamar, Dušan
2015-08-24
This paper deals with the background and principles of a universal linear motor driven leg press dynamometer and the concept of serial stretch loading. The device is based on two computer-controlled linear motors mounted on horizontal rails. As the motors can maintain either a constant resistance force at a selected position or a constant velocity in both directions, the system allows simulation of any mode of muscle contraction. In addition, it can also generate defined serial stretch stimuli in the form of repeated force peaks. This is achieved by short segments of reversed velocity (in the concentric phase) or acceleration (in the eccentric phase). Such stimuli, generated at a rate of 10 Hz, have proven to be a more efficient means for improving the rate of force development. This capability not only affects performance in many sports, but also plays a substantial role in the prevention of falls and their consequences. A universal linear motor driven and computer-controlled dynamometer, with its unique ability to generate serial stretch stimuli, seems to be an efficient and useful tool for enhancing strength training effects on neuromuscular function not only in athletes, but also in the senior population and in rehabilitation patients.
Formalization, equivalence and generalization of basic resonance electrical circuits
NASA Astrophysics Data System (ADS)
Penev, Dimitar; Arnaudov, Dimitar; Hinov, Nikolay
2017-12-01
This work presents the basic resonant circuits used in resonant energy converters. The following resonant circuits are considered: series; series with a parallel-loaded capacitor; parallel; and parallel with a series-loaded inductance. For the circuits under consideration, expressions are derived for the natural oscillation frequencies and for the equivalence of the active power delivered to the load. The mathematical expressions are plotted graphically and verified using computer simulations. The results obtained are used in the model-based design of resonant energy converters with DC or AC output, which guarantees the output indicators of the power electronic devices.
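As a hedged sketch of the kind of expressions being formalized, the snippet below computes the textbook undamped natural frequency shared by series and parallel LC circuits, and the shifted resonant frequency of a parallel circuit with resistance in series with the inductor; component values are illustrative and this is not the paper's derivation.

```python
import math

# Undamped natural frequency f0 = 1 / (2*pi*sqrt(L*C)).
def natural_frequency(L, C):
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Parallel LC with resistance R in series with the inductive branch:
# unity-power-factor resonance at f = f0 * sqrt(1 - R^2 * C / L).
def parallel_loaded_frequency(L, C, R):
    return natural_frequency(L, C) * math.sqrt(1.0 - R * R * C / L)

L, C, R = 100e-6, 1e-6, 2.0    # 100 uH, 1 uF, 2 ohm (illustrative)
print(natural_frequency(L, C))           # ~15.9 kHz
print(parallel_loaded_frequency(L, C, R))  # slightly below f0
```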
Zywietz, Christoph
2004-01-01
The evolution of information technology and telematics, and increasing efforts to establish an electronic health record, stimulate the development and introduction of new concepts in health care. However, compared to other application areas, e.g., tourism, banking, and commerce, the use of information technology in health care has had only limited success. In hospitals as well as in ambulatory medicine (General Practitioner systems), computers are often used only for administrative purposes. Fully operational Hospital Information Systems (HIS) are rare and often island solutions. The situation is somewhat better for department information systems (DIS), e.g., where image analysis, processing of biochemical data, or processing of biosignals is in the clinical focus. Even before we have solved the various problems of health care data processing and management within the "conventional" care institutions, new challenges are coming up with concepts of telemedicine for assisted and non-assisted home care for patients with chronic diseases or people at high risk. The major challenges for the provision of tele-monitoring and alarming services are improvement of communication and interoperability of devices and care providers. A major obstacle to achieving such goals is the lack of standards for devices as well as for procedures, and a lack of databases with information on the "normal" variability of many medical parameters to be monitored by serial comparison in continuous medical care. Some of these aspects will be discussed in more detail.
Real-time dosimeter employed to evaluate the half-value layer in CT
NASA Astrophysics Data System (ADS)
McKenney, Sarah E.; Seibert, J. Anthony; Burkett, George W.; Gelskey, Dale; Sunde, Paul B.; Newman, James D.; Boone, John M.
2014-01-01
Half-value layer (HVL) measurements on commercial whole-body computed tomography (CT) scanners require serial measurements and, in many institutions, the presence of a service engineer. An assembly of aluminum filters (AAF), designed to be used in conjunction with a real-time dosimeter, was developed to provide estimates of the HVL using clinical protocols. Two real-time dose probes, a solid-state and an air ionization chamber, were examined. The AAF consisted of eight rectangular filters of high-purity aluminum (Type 1100), symmetrically positioned to form a cylindrical 'cage' around the probe's detective volume. The incident x-ray beam was attenuated by varying thicknesses of aluminum filters as the gantry completed a minimum of one rotation. Measurements employing real-time chambers were conducted both in service mode and with a routine abdomen/pelvis protocol for several combinations of x-ray tube potentials and bow tie filters. These measurements were validated against conventional serial HVL measurements. The average relative difference between the HVL measurements using the two methods was less than 5% when using a 122 mm diameter AAF; relative differences were reduced to 1.1% when the diameter was increased to 505 mm, possibly due to reduced scatter contamination. Use of a real-time dose probe and the AAF allowed for time-efficient measurements of beam quality on a clinical CT scanner using clinical protocols.
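A minimal sketch of the underlying HVL computation, with illustrative dose readings rather than measured data: interpolate log-dose against aluminum thickness to find the thickness that halves the unattenuated signal.

```python
import numpy as np

# Given dose readings behind increasing aluminum thicknesses, find the
# thickness at which the dose falls to half the open-beam value by
# linear interpolation in log-dose.
def half_value_layer(thicknesses_mm, doses):
    t = np.asarray(thicknesses_mm, dtype=float)
    d = np.asarray(doses, dtype=float)
    target = np.log(d[0] / 2.0)                       # half the open-beam dose
    # np.interp needs increasing x, and -log(dose) increases with thickness.
    return float(np.interp(-target, -np.log(d), t))

t_mm = [0.0, 2.0, 4.0, 6.0, 8.0]
dose = [10.0, 7.2, 5.3, 3.9, 2.9]     # hypothetical readings
print(half_value_layer(t_mm, dose))   # HVL in mm Al, ~4.4 here
```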
Asymmetry in serial femtosecond crystallography data.
Sharma, Amit; Johansson, Linda; Dunevall, Elin; Wahlgren, Weixiao Y; Neutze, Richard; Katona, Gergely
2017-03-01
Serial crystallography is an increasingly important approach to protein crystallography that exploits both X-ray free-electron laser (XFEL) and synchrotron radiation. Serial crystallography recovers complete X-ray diffraction data by processing and merging diffraction images from thousands of randomly oriented non-uniform microcrystals, of which all observations are partial Bragg reflections. Random fluctuations in the XFEL pulse energy spectrum, variations in the size and shape of microcrystals, integrating over millions of weak partial observations and instabilities in the XFEL beam position lead to new types of experimental errors. The quality of Bragg intensity estimates deriving from serial crystallography is therefore contingent upon assumptions made while modeling these data. Here it is observed that serial femtosecond crystallography (SFX) Bragg reflections do not follow a unimodal Gaussian distribution and it is recommended that an idealized assumption of single Gaussian peak profiles be relaxed to incorporate apparent asymmetries when processing SFX data. The phenomenon is illustrated by re-analyzing data collected from microcrystals of the Blastochloris viridis photosynthetic reaction center and comparing these intensity observations with conventional synchrotron data. The results show that skewness in the SFX observations captures the essence of the Wilson plot and an empirical treatment is suggested that can help to separate the diffraction Bragg intensity from the background.
Report on the Total System Computer Program for Medical Libraries.
ERIC Educational Resources Information Center
Divett, Robert T.; Jones, W. Wayne
The objective of this project was to develop an integrated computer program for the total operations of a medical library including acquisitions, cataloging, circulation, reference, a computer catalog, serials controls, and current awareness services. The report describes two systems approaches: the batch system and the terminal system. The batch…
FFT Computation with Systolic Arrays, A New Architecture
NASA Technical Reports Server (NTRS)
Boriakoff, Valentin
1994-01-01
The use of the Cooley-Tukey algorithm for computing the 1-D FFT lends itself to a particular matrix factorization which suggests direct implementation by linearly connected systolic arrays. Here we present a new systolic architecture that embodies this algorithm. This implementation requires a smaller number of processors and a smaller number of memory cells than other recent implementations, as well as having all the advantages of systolic arrays. For the implementation of the decimation-in-frequency case, word-serial data input allows continuous real-time operation without the need of a serial-to-parallel conversion device. No control or data stream switching is necessary. Computer simulation of this architecture was done in the context of a 1024-point DFT with a fixed-point processor, and CMOS processor implementation has started.
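A minimal sketch of the radix-2 decimation-in-frequency recursion behind the factorization above; the systolic array maps each stage to a processor in a linear chain through which word-serial data flow, which this serial code only emulates.

```python
import cmath

# Radix-2 decimation-in-frequency FFT: each stage combines pairs of
# points (butterflies), applies twiddle factors on the lower half, and
# recurses; even- and odd-indexed outputs come from the two halves.
def dif_fft(x):
    n = len(x)                      # n must be a power of two
    if n == 1:
        return list(x)
    half = n // 2
    top = [x[i] + x[i + half] for i in range(half)]
    bottom = [(x[i] - x[i + half]) * cmath.exp(-2j * cmath.pi * i / n)
              for i in range(half)]
    out = [0j] * n
    out[0::2] = dif_fft(top)        # even-indexed outputs
    out[1::2] = dif_fft(bottom)     # odd-indexed outputs
    return out

print(dif_fft([1, 1, 1, 1, 0, 0, 0, 0]))
```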
Variability of serial same-day left ventricular ejection fraction using quantitative gated SPECT.
Vallejo, Enrique; Chaya, Hugo; Plancarte, Gerardo; Victoria, Diana; Bialostozky, David
2002-01-01
The accuracy of quantitative gated single photon emission computed tomography (SPECT) (QGS) and the potential limitations for estimation of left ventricular ejection fraction (LVEF) have been extensively evaluated. However, few studies have focused on the serial variability of QGS. This study was conducted to assess the serial variability of QGS for determination of LVEF between 2 sequential technetium 99m sestamibi-gated SPECT acquisitions at rest in both healthy and unhealthy subjects. The study population consisted of 2 groups: group I included 21 volunteers with a low likelihood of coronary artery disease (CAD), and group II included 22 consecutive patients with documented CAD. Both groups underwent serial SPECT imaging. The overall correlation between sequential images was high (r = 0.94, SEE = 5.3%), and the mean serial variability of LVEF was 5.15% +/- 3.51%. Serial variability was lower for images with high counts (3.45% +/- 3.23%) than for images with low counts (6.85% +/- 3.77%). The mean serial variability was not different between normal and abnormal high-dose images (3.0% +/- 1.56% vs 3.9% +/- 2.77%). However, mean serial variability for images derived from abnormal low-dose images was significantly greater than that derived from normal low-dose images (9.6% +/- 2.22% vs 3.1% +/- 2.12%, P <.05). Although QGS is an efficacious method to approximate LVEF values and is extremely valuable for incremental risk stratification of patients with CAD, it has significant variability in the estimation of LVEF on serial images. This should be taken into account when used for serial evaluation of LVEF.
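A minimal sketch of the variability statistics reported above, computed from hypothetical paired LVEF estimates rather than the study data.

```python
import numpy as np

# Given paired LVEF estimates from two same-day acquisitions, compute
# the correlation between acquisitions and the mean and spread of the
# absolute serial differences.
def serial_variability(lvef_first, lvef_second):
    a = np.asarray(lvef_first, dtype=float)
    b = np.asarray(lvef_second, dtype=float)
    r = np.corrcoef(a, b)[0, 1]
    diffs = np.abs(a - b)
    return r, diffs.mean(), diffs.std()

first  = [55, 62, 48, 70, 35, 58]     # illustrative values, in %
second = [57, 60, 51, 68, 39, 55]
r, mean_d, sd_d = serial_variability(first, second)
print(f"r = {r:.2f}, serial variability = {mean_d:.1f}% +/- {sd_d:.1f}%")
```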
Visualization of Pulsar Search Data
NASA Astrophysics Data System (ADS)
Foster, R. S.; Wolszczan, A.
1993-05-01
The search for periodic signals from rotating neutron stars, or pulsars, has been a computationally taxing problem for astronomers for more than twenty-five years. Over this time interval, increases in computational capability have allowed ever more sensitive searches covering a larger parameter space. The volume of input data and the general presence of radio-frequency interference typically produce numerous spurious signals. Visualization of the search output and enhanced real-time processing of significant candidate events allow the pulsar searcher to optimally process and search for new radio pulsars. The pulsar search algorithm and visualization system presented in this paper currently run on serial RISC-based workstations, a traditional vector-based supercomputer, and a massively parallel computer. The serial software algorithm and its modifications for massively parallel computing are described. Four successive searches for millisecond-period radio pulsars using the Arecibo telescope at 430 MHz have resulted in the successful detection of new long-period and millisecond-period radio pulsars.
High-Throughput Bit-Serial LDPC Decoder LSI Based on Multiple-Valued Asynchronous Interleaving
NASA Astrophysics Data System (ADS)
Onizawa, Naoya; Hanyu, Takahiro; Gaudet, Vincent C.
This paper presents a high-throughput bit-serial low-density parity-check (LDPC) decoder that uses an asynchronous interleaver. Since consecutive log-likelihood message values on the interleaver are similar, node computations are continuously performed by using the most recently arrived messages without significantly affecting bit-error rate (BER) performance. In the asynchronous interleaver, each message's arrival rate is based on the delay due to the wire length, so that the decoding throughput is not restricted by the worst-case latency, which results in a higher average rate of computation. Moreover, the use of a multiple-valued data representation makes it possible to multiplex control signals and data from mutual nodes, thus minimizing the number of handshaking steps in the asynchronous interleaver and eliminating the clock signal entirely. As a result, the decoding throughput becomes 1.3 times faster than that of a bit-serial synchronous decoder under a 90nm CMOS technology, at a comparable BER.
Stepping Stones to Literacy. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2007
2007-01-01
Stepping Stones to Literacy (SSL) is a supplemental curriculum designed to promote listening, print conventions, phonological awareness, phonemic awareness, and serial processing/rapid naming (quickly naming familiar visual symbols and stimuli such as letters or colors). The program targets kindergarten and older preschool students considered to…
1993-12-31
the effect of Ritalin on attention in traumatically brain-injured adults and the issues concerning repeated measures using computer-based testing with...heat, cold and fatigue on neurological functions, as well as the interactive and independent effects of chemical agents and pharmaceuticals. 5) A...serial manner was becoming an increasingly important task in neuropsychology. Serial assessment was important for monitoring medication effects
Describing, using 'recognition cones'. [parallel-series model with English-like computer program
NASA Technical Reports Server (NTRS)
Uhr, L.
1973-01-01
A parallel-serial 'recognition cone' model is examined, taking into account the model's ability to describe scenes of objects. An actual program is presented in an English-like language. The concept of a 'description' is discussed together with possible types of descriptive information. Questions regarding the level and the variety of detail are considered along with approaches for improving the serial representations of parallel systems.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-14
..., or Partially-Exclusive Licensing of an Invention Concerning a Computer Controlled System for Laser... provides a computer controlled system for laser energy delivery to the retina. Information is received from... Application Serial No. 13/130,380, entitled "Computer Controlled System for Laser Energy Delivery to the...
Jargon that Computes: Today's PC Terminology.
ERIC Educational Resources Information Center
Crawford, Walt
1997-01-01
Discusses PC (personal computer) and telecommunications terminology in context: Integrated Services Digital Network (ISDN); Asymmetric Digital Subscriber Line (ADSL); cable modems; satellite downloads; T1 and T3 lines; magnitudes ("giga-,""nano-"); Central Processing Unit (CPU); Random Access Memory (RAM); Universal Serial Bus…
A Computer Engineering Curriculum for the Air Force Academy: An Implementation Plan
1985-04-01
engineering is needed as a result of the findings? 5. What is the impact of this study's recommendation to pursue the Electrical Engineering degree with... [Course-schedule residue: lab sessions on a stepper motor and on serial I/O with the 8251 chip, graded review #3, course review, and final exam.]
Optical transmission modules for multi-channel superconducting quantum interference device readouts.
Kim, Jin-Mok; Kwon, Hyukchan; Yu, Kwon-kyu; Lee, Yong-Ho; Kim, Kiwoong
2013-12-01
We developed an optical transmission module consisting of a 16-channel analog-to-digital converter (ADC), a digital-noise filter, and a one-line serial transmitter, which transferred Superconducting Quantum Interference Device (SQUID) readout data to a computer over a single optical cable. The 16-channel ADC sent out SQUID readout data as 32-bit serial words comprising an 8-bit channel index and 24-bit voltage data, at a sample rate of 1.5 kSample/s. The digital-noise filter suppressed the digital noise generated by digital clocks so as to preserve as much of the SQUID modulation as possible. The one-line serial transmitter re-encoded the 32-bit serial data into a modulated stream containing both data and clock, and sent it through a single optical cable. When the optical transmission modules were applied to a 152-channel SQUID magnetoencephalography system, the system maintained a field noise level of 3 fT/√Hz @ 100 Hz.
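The 32-bit word layout lends itself to a short illustration. The sketch below uses our assumed layout (channel index in the top byte, two's-complement voltage code in the low 24 bits); the authors' exact bit ordering is not specified here.

```python
# Pack one SQUID sample as a 32-bit word (8-bit channel + 24-bit code),
# then unpack it; the layout is an assumption for illustration.
def pack_sample(channel: int, code: int) -> int:
    assert 0 <= channel < 256 and -(1 << 23) <= code < (1 << 23)
    return (channel << 24) | (code & 0xFFFFFF)

def unpack_sample(word: int):
    channel = (word >> 24) & 0xFF
    code = word & 0xFFFFFF
    if code & 0x800000:          # sign-extend the 24-bit voltage code
        code -= 1 << 24
    return channel, code

w = pack_sample(7, -12345)
print(hex(w), unpack_sample(w))  # -> channel 7, code -12345
```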
Investigation, Modeling and Validation of Digital Bridge for a New Generation Hot-Wire Anemometer
NASA Astrophysics Data System (ADS)
Joshi, Karthik Kamalakar
The Digital Bridge Thermal Anemometer (DBTA) is a new-generation anemometer that uses advanced electronics and a modified half-Wheatstone bridge configuration, specifically a sensor and a shunt resistor in series. This allows miniaturization of the anemometer; communication between the host computer and the anemometer is carried out over a serial or Ethernet link, which eliminates the noise caused by the long cables of conventional anemometers, and the digital data sent to the host computer are immune to electrical noise. In the new configuration, the potential drop across the shunt resistor is used to control the bridge. This thesis is confined to the anemometer operated in constant-temperature (CT) mode. The heat transfer relations are studied, and new expressions are developed for the new bridge configuration using perturbation analysis. The theoretical plant models of a commercially available sensor and a custom-built sensor are derived and quantified. The plant model is used to design a controller that regulates the plant in closed loop using feedback. To test the performance of the modified sensor used with a "generation-I" bridge and DAQ, an experiment was conducted. The controller was implemented in a LabVIEW user interface. The test compares a conventional TSI sensor driven by an IFA 300 anemometer against the setup described above, in the wake behind a circular cylinder. Performance of the DBTA is satisfactory at low frequencies. A user interface capable of communicating with the anemometer to control its operation and collect the data it generates was developed in LabVIEW.
CAMAC throughput of a new RISC-based data acquisition computer at the DIII-D tokamak
NASA Astrophysics Data System (ADS)
Vanderlaan, J. F.; Cummings, J. W.
1993-10-01
The amount of experimental data acquired per plasma discharge at DIII-D has continued to grow. The largest shot size in May 1991 was 49 Mbyte; in May 1992, 66 Mbyte; and in April 1993, 80 Mbyte. The increasing load has prompted the installation of a new Motorola 88100-based MODCOMP computer to supplement the existing core of three older MODCOMP data acquisition CPUs. New Kinetic Systems CAMAC serial highway driver hardware runs on the 88100 VME bus. The new operating system is the MODCOMP REAL/IX version of AT&T System V UNIX with real-time extensions and networking capabilities; future plans call for installation of additional computers of this type for tokamak and neutral beam control functions. Experiences with the CAMAC hardware and software are chronicled, including observations of data throughput. The Enhanced Serial Highway crate controller is advertised as twice as fast as the previous crate controller, and faster computer I/O is expected to raise data rates further.
2013-02-01
[UML diagram residue: classes for sonar, environmental-sampling, and environmental AUVs (OEX Ocean Explorer, Hammerhead, Iver2, Unicorn), and deployment nodes in which Bluefin 21 AUVs (Unicorn, Macrura), the NURC AUV (OEX), and a topside MOOS computer are linked by serial, wired, acoustic (Micro-Modem, Edgetech), and 5.0 GHz WiLan wifi connections, with GPS and a Google Earth executable.]
Analysis of XFEL serial diffraction data from individual crystalline fibrils
Wojtas, David H.; Ayyer, Kartik; Liang, Mengning; Mossou, Estelle; Romoli, Filippo; Seuring, Carolin; Beyerlein, Kenneth R.; Bean, Richard J.; Morgan, Andrew J.; Oberthuer, Dominik; Fleckenstein, Holger; Heymann, Michael; Gati, Cornelius; Yefanov, Oleksandr; Barthelmess, Miriam; Ornithopoulou, Eirini; Galli, Lorenzo; Xavier, P. Lourdu; Ling, Wai Li; Frank, Matthias; Yoon, Chun Hong; White, Thomas A.; Bajt, Saša; Mitraki, Anna; Boutet, Sebastien; Aquila, Andrew; Barty, Anton; Forsyth, V. Trevor; Chapman, Henry N.; Millane, Rick P.
2017-01-01
Serial diffraction data collected at the Linac Coherent Light Source from crystalline amyloid fibrils delivered in a liquid jet show that the fibrils are well oriented in the jet. At low fibril concentrations, diffraction patterns are recorded from single fibrils; these patterns are weak and contain only a few reflections. Methods are developed for determining the orientation of patterns in reciprocal space and merging them in three dimensions. This allows the individual structure amplitudes to be calculated, thus overcoming the limitations of orientation and cylindrical averaging in conventional fibre diffraction analysis. The advantages of this technique should allow structural studies of fibrous systems in biology that are inaccessible using existing techniques. PMID:29123682
Ahamed, Nizam U; Sundaraj, Kenneth; Poo, Tarn S
2013-03-01
This article describes the design of a robust, inexpensive, easy-to-use, small, and portable online electromyography acquisition system for monitoring electromyography signals during rehabilitation. This single-channel (one-muscle) system was connected via the universal serial bus (USB) port to a programmable Windows handheld tablet personal computer for storage and analysis of the data by the end user. The raw electromyography signals were amplified to bring them to an observable scale. The inherent 50 Hz noise (the mains frequency in Malaysia) from power-line electromagnetic interference was then eliminated using a single hybrid-IC notch filter. The signals were sampled by a signal processing module and converted into 24-bit digital data. An algorithm was developed and programmed to transmit the digital data to the computer, where it was reassembled and displayed by the software. Finally, the device was furnished with a graphical user interface to display the streaming muscle signal online on the handheld tablet personal computer. This battery-operated system was tested on the biceps brachii muscles of 20 healthy subjects, and the results were compared to those obtained with a commercial single-channel (one-muscle) electromyography acquisition system. For activities involving muscle contractions, the results obtained with the developed device were comparable (across various statistical parameters) to those from a commercially available physiological signal monitoring system, for both male and female subjects. In addition, the key advantage of this system over conventional desktop personal computer-based acquisition systems is its portability, owing to the use of a tablet personal computer, on which the results are accessible graphically as well as stored in text (comma-separated value) form.
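As a software analog of the hardware notch stage, the sketch below removes 50 Hz interference from a surrogate EMG trace with an IIR notch filter; the sampling rate, Q factor, and signal model are our assumptions, not the article's specifications.

```python
# Remove 50 Hz power-line hum from a sampled signal with an IIR notch.
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 1000.0                                  # assumed sampling rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
emg = np.random.default_rng(0).standard_normal(t.size)  # surrogate EMG
mains = 0.5 * np.sin(2.0 * np.pi * 50.0 * t)            # power-line hum

b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)      # narrow notch at 50 Hz
clean = filtfilt(b, a, emg + mains)          # zero-phase filtering

print("std before:", np.std(emg + mains), "after:", np.std(clean))
```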
Managing search complexity in linguistic geometry.
Stilman, B
1997-01-01
This paper is a new step in the development of linguistic geometry. This formal theory is intended to discover and generalize the inner properties of human expert heuristics that have been successful in a certain class of complex control systems, and to apply them to different systems. In this paper, we investigate heuristics extracted in the form of hierarchical networks of planning paths of autonomous agents. Employing linguistic geometry tools, the dynamic hierarchy of networks is represented as a hierarchy of formal attribute languages. The main ideas of this methodology are shown on two pilot examples of the solution of complex optimization problems. The first example is a problem of strategic planning for air combat, in which concurrent actions of four vehicles are simulated as serial interleaving moves. The second example is a problem of strategic planning for space combat among eight autonomous vehicles (with interleaving moves) that requires generation of a search tree of depth 25 with a branching factor of 30. This is beyond the capabilities of modern and conceivable future computers employing conventional approaches. In both examples the linguistic geometry tools produced deep and highly selective searches in comparison with conventional search algorithms. For the first example, a sketch of the proof of optimality of the solution is given.
High-Performance Computation of Distributed-Memory Parallel 3D Voronoi and Delaunay Tessellation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterka, Tom; Morozov, Dmitriy; Phillips, Carolyn
2014-11-14
Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets: N-body simulations, molecular dynamics codes, and LIDAR point clouds are just a few examples. Such computational geometry methods are common in data analysis and visualization; but as the scale of simulations and observations surpasses billions of particles, the existing serial and shared-memory algorithms no longer suffice. A distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this paper is a new parallel Delaunay and Voronoi tessellation algorithm that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include periodic and wall boundary conditions, comparison of our method against two popular serial libraries, and application to numerous science datasets.
BAC Modification through Serial or Simultaneous Use of CRE/Lox Technology
Parrish, Mark; Unruh, Jay; Krumlauf, Robb
2011-01-01
Bacterial Artificial Chromosomes (BACs) are vital tools in mouse genomic analyses because of their ability to propagate large inserts. The size of these constructs, however, prevents the use of conventional molecular biology techniques for modification and manipulation. Techniques such as recombineering and Cre/Lox methodologies have thus become heavily relied upon for such purposes. In this work, we investigate the applicability of Lox variant sites for serial and/or simultaneous manipulations of BACs. We show that Lox spacer mutants are very specific, and inverted repeat variants reduce Lox reaction rates through reducing the affinity of Cre for the site, while retaining some functionality. Employing these methods, we produced serial modifications encompassing four independent changes which generated a mouse HoxB BAC with fluorescent reporter proteins inserted into four adjacent Hox genes. We also generated specific, simultaneous deletions using combinations of spacer variants and inverted repeat variants. These techniques will facilitate BAC manipulations and open a new repertoire of methods for BAC and genome manipulation. PMID:21197414
Edlund, Petra; Takala, Heikki; Claesson, Elin; ...
2016-10-19
Phytochromes are a family of photoreceptors that control light responses of plants, fungi and bacteria. A sequence of structural changes, which is not yet fully understood, leads to activation of an output domain. Time-resolved serial femtosecond crystallography (SFX) can potentially shine light on these conformational changes. Here we report the room temperature crystal structure of the chromophore-binding domains of the Deinococcus radiodurans phytochrome at 2.1 Å resolution. The structure was obtained by serial femtosecond X-ray crystallography from microcrystals at an X-ray free electron laser. We find overall good agreement compared to a crystal structure at 1.35 Å resolution derived from conventional crystallography at cryogenic temperatures, which we also report here. The thioether linkage between chromophore and protein is subject to positional ambiguity at the synchrotron, but is fully resolved with SFX. As a result, the study paves the way for time-resolved structural investigations of the phytochrome photocycle with time-resolved SFX.
JF-104 ground testing reaction control system (RCS) jets
NASA Technical Reports Server (NTRS)
1961-01-01
JF-104A (formerly YF-104A, serial # 55-2961) was modified with a hydrogen peroxide reaction control system (RCS). Following a zoom climb to altitudes in the vicinity of 80,000 feet, the RCS gave the aircraft controllability in the thin upper atmosphere where conventional control surfaces are ineffective.
Design Principles for a Comprehensive Library System.
ERIC Educational Resources Information Center
Uluakar, Tamer; And Others
1981-01-01
Describes an online design featuring circulation control, catalog access, and serial holdings that uses an incremental approach to system development. Utilizing a dedicated computer, this second of three releases pays particular attention to present and predicted computing capabilities as well as trends in library automation. (Author/RAA)
Medical serials control systems by computer--a state of the art review.
Brodman, E; Johnson, M F
1976-01-01
A review of the problems encountered in serials control systems is followed by a description of some of the present-day attempts to solve these problems. Specific networks are described, notably PHILSOM (developed at Washington University School of Medicine Library), the UCLA Biomedical Library's system, and OCLC in Columbus, Ohio. Finally, the role of minicomputers in present and future developments is discussed, and some cautious guesses are made on future directions in the field.
Climate change and the detection of trends in annual runoff
McCabe, G.J.; Wolock, D.M.
1997-01-01
This study examines the statistical likelihood of detecting a trend in annual runoff given an assumed change in mean annual runoff, the underlying year-to-year variability in runoff, and serial correlation of annual runoff. Means, standard deviations, and lag-1 serial correlations of annual runoff were computed for 585 stream gages in the conterminous United States, and these statistics were used to compute the probability of detecting a prescribed trend in annual runoff. Assuming a linear 20% change in mean annual runoff over a 100 yr period and a significance level of 95%, the average probability of detecting a significant trend was 28% among the 585 stream gages. The largest probability of detecting a trend was in the northwestern U.S., the Great Lakes region, the northeastern U.S., the Appalachian Mountains, and parts of the northern Rocky Mountains. The smallest probability of trend detection was in the central and southwestern U.S., and in Florida. Low probabilities of trend detection were associated with low ratios of mean annual runoff to the standard deviation of annual runoff and with high lag-1 serial correlation in the data.
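The detection-probability calculation can be approximated by simulation. The sketch below (our construction, with illustrative mean, standard deviation, and lag-1 correlation values) imposes a linear 20% change over 100 years on AR(1) noise and counts how often a 95%-significance slope test flags the trend.

```python
# Monte Carlo estimate of trend-detection probability for annual runoff
# with lag-1 serial correlation; parameter values are illustrative.
import numpy as np
from scipy import stats

def detection_probability(mean, sd, rho, n_years=100, change=0.20,
                          alpha=0.05, n_trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    years = np.arange(n_years)
    trend = mean * change * years / (n_years - 1)   # linear 20% change
    detected = 0
    for _ in range(n_trials):
        # AR(1) noise with lag-1 correlation rho and marginal std sd
        e = rng.normal(0.0, sd * np.sqrt(1 - rho**2), n_years)
        noise = np.empty(n_years)
        noise[0] = rng.normal(0.0, sd)
        for i in range(1, n_years):
            noise[i] = rho * noise[i - 1] + e[i]
        runoff = mean + trend + noise
        detected += stats.linregress(years, runoff).pvalue < alpha
    return detected / n_trials

print(detection_probability(mean=300.0, sd=90.0, rho=0.2))
```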
The universal serial bus endoscope: design and initial clinical experience.
Hernandez-Zendejas, Gregorio; Dobke, Marek K; Guerrerosantos, Jose
2004-01-01
Endoscopic forehead lift is a well-established procedure in aesthetic plastic surgery. Many agree that currently available video-endoscopic equipment is bulky, multipieced and sometimes cumbersome in the operating theater. A novel system, the Universal Serial Bus Endoscope (USBE) was designed to simplify and reduce the number of necessary equipment pieces in the endoscopic setup. The USBE is attached by a single cable to a Universal Serial Bus (USB) port of a laptop computer. A built-in miniaturized cold light source provides illumination. A built-in digital camera chip enables procedure recording. The real-time images and movies obtained with USBE are displayed on the computer's screen and recorded on the laptop's hard disk drive. In this study, 25 patients underwent endoscopic browlift using the USBE system to test its clinical usefulness, all with good results and without complications or need for revision. The USBE was found to be reliable and easier to use than current video-endoscope equipment. The operative time needed to complete the procedure by the authors was reduced approximately 50%. The design and main technical characteristics of the USBE are presented.
A USB 2.0 computer interface for the UCO/Lick CCD cameras
NASA Astrophysics Data System (ADS)
Wei, Mingzhi; Stover, Richard J.
2004-09-01
The new UCO/Lick Observatory CCD camera uses a 200 MHz fiber optic cable to transmit image data and an RS232 serial line for low speed bidirectional command and control. Increasingly RS232 is a legacy interface supported on fewer computers. The fiber optic cable requires either a custom interface board that is plugged into the mainboard of the image acquisition computer to accept the fiber directly or an interface converter that translates the fiber data onto a widely used standard interface. We present here a simple USB 2.0 interface for the UCO/Lick camera. A single USB cable connects to the image acquisition computer and the camera's RS232 serial and fiber optic cables plug into the USB interface. Since most computers now support USB 2.0 the Lick interface makes it possible to use the camera on essentially any modern computer that has the supporting software. No hardware modifications or additions to the computer are needed. The necessary device driver software has been written for the Linux operating system which is now widely used at Lick Observatory. The complete data acquisition software for the Lick CCD camera is running on a variety of PC style computers as well as an HP laptop.
Parallel peak pruning for scalable SMP contour tree computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carr, Hamish A.; Weber, Gunther H.; Sewell, Christopher M.
As data sets grow to exascale, automated data analysis and visualisation are increasingly important, to intermediate human understanding and to reduce demands on disk storage via in situ analysis. Trends in the architecture of high performance computing systems necessitate analysis algorithms that make effective use of combinations of massively multicore and distributed systems. One of the principal analytic tools is the contour tree, which analyses relationships between contours to identify features of more than local importance. Unfortunately, the predominant algorithms for computing the contour tree are explicitly serial, and founded on serial metaphors, which has limited the scalability of this form of analysis. While there is some work on distributed contour tree computation, and separately on hybrid GPU-CPU computation, there is no efficient algorithm with strong formal guarantees on performance allied with fast practical performance. In this paper, we report the first shared-memory SMP algorithm for fully parallel contour tree computation, with formal guarantees of O(lg n lg t) parallel steps and O(n lg n) work, and implementations with up to 10x parallel speedup in OpenMP and up to 50x speedup in NVIDIA Thrust.
1975-01-01
...in the computer in 16-bit parallel computer DIO transfers at the maximum computer I/O speed. It then transmits this data in a bit-serial echo... maximum DIO rate under computer interrupt control. The LCI also provides station interrupt information for transfer to the computer under computer... been in daily operation since 1973. The SAM-D Missile system is currently in the Engineering Development phase, which precedes the Production and
Accounting for partiality in serial crystallography using ray-tracing principles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroon-Batenburg, Loes M. J., E-mail: l.m.j.kroon-batenburg@uu.nl; Schreurs, Antoine M. M.; Ravelli, Raimond B. G.
Serial crystallography generates partial reflections from still diffraction images. Partialities are estimated with EVAL ray-tracing simulations, thereby improving merged reflection data to a quality similar to that of conventional rotation data. Serial crystallography generates 'still' diffraction data sets that are composed of single diffraction images obtained from a large number of crystals arbitrarily oriented in the X-ray beam. Estimation of the reflection partialities, which accounts for the expected observed fractions of diffraction intensities, has so far been problematic. In this paper, a method is derived for modelling the partialities by making use of the ray-tracing diffraction-integration method EVAL. The method estimates partialities based on crystal mosaicity, beam divergence, wavelength dispersion, crystal size and the interference function, accounting for crystallite size. It is shown that modelling each reflection by a distribution of interference-function-weighted rays yields a 'still' Lorentz factor. Still data are compared with a conventional rotation data set collected from a single lysozyme crystal. Overall, the presented still integration method improves the data quality markedly. The R factor of the still data compared with the rotation data decreases from 26% using a Monte Carlo approach to 12% after applying the Lorentz correction, to 5.3% when estimating partialities by EVAL and finally to 4.7% after post-refinement. The merging R_int factor of the still data improves from 105% to 56% but remains high. This suggests that the accuracy of the model parameters could be further improved. However, with a multiplicity of around 40 and an R_int of ~50%, the merged still data approximate the quality of the rotation data. The presented integration method suitably accounts for the partiality of the observed intensities in still diffraction data, which is a critical step towards improving data quality in serial crystallography.
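For readers unfamiliar with the R factor quoted above, the sketch below computes one common form, R = Σ|I1 − k·I2| / ΣI1 with a least-squares scale k, on surrogate intensity data; R-factor conventions vary, so treat this as illustrative rather than the paper's exact definition.

```python
# R factor between two merged intensity sets with a least-squares scale.
import numpy as np

def r_factor(i1, i2):
    """R = sum(|I1 - k*I2|) / sum(I1), with least-squares scale k."""
    k = np.sum(i1 * i2) / np.sum(i2**2)
    return np.sum(np.abs(i1 - k * i2)) / np.sum(i1)

rng = np.random.default_rng(1)
i_rot = rng.gamma(2.0, 100.0, 5000)                  # surrogate rotation data
i_still = 1.3 * i_rot * rng.normal(1.0, 0.05, 5000)  # scaled, noisy stills
print(f"R = {100.0 * r_factor(i_still, i_rot):.1f}%")
```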
Case Studies in Library Computer Systems.
ERIC Educational Resources Information Center
Palmer, Richard Phillips
Twenty descriptive case studies of computer applications in a variety of libraries are presented in this book. Computerized circulation, serial and acquisition systems in public, high school, college, university and business libraries are included. Each of the studies discusses: 1) the environment in which the system operates, 2) the objectives of…
Chen, Yasheng; Dhar, Rajat; Heitsch, Laura; Ford, Andria; Fernandez-Cadenas, Israel; Carrera, Caty; Montaner, Joan; Lin, Weili; Shen, Dinggang; An, Hongyu; Lee, Jin-Moo
2016-01-01
Although cerebral edema is a major cause of death and deterioration following hemispheric stroke, there remains no validated biomarker that captures the full spectrum of this critical complication. We recently demonstrated that reduction in intracranial cerebrospinal fluid (CSF) volume (ΔCSF) on serial computed tomography (CT) scans provides an accurate measure of cerebral edema severity, which may aid in early triaging of stroke patients for craniectomy. However, such a volumetric approach would be too cumbersome to perform manually on serial scans in a real-world setting. We developed and validated an automated technique for CSF segmentation via integration of random forest (RF) based machine learning with geodesic active contour (GAC) segmentation. The proposed RF+GAC approach was compared to conventional Hounsfield unit (HU) thresholding and RF segmentation methods using the Dice similarity coefficient (DSC) and the correlation of volumetric measurements, with manual delineation serving as the ground truth. CSF spaces were outlined on scans performed at baseline (< 6 h after stroke onset) and early follow-up (FU) (closest to 24 h) in 38 acute ischemic stroke patients. RF performed significantly better than optimized HU thresholding (p < 10^-4 at baseline and p < 10^-5 at FU), and RF+GAC performed significantly better than RF (p < 10^-3 at baseline and p < 10^-5 at FU). Pearson correlation coefficients between the automatically detected ΔCSF and the ground truth were r = 0.178 (p = 0.285), r = 0.876 (p < 10^-6) and r = 0.879 (p < 10^-6) for thresholding, RF and RF+GAC, respectively, with the RF+GAC slope closer to the line of identity. Similar findings held when the algorithm trained on images from one stroke center was applied to segment CTs from another center. In conclusion, we have developed and validated an accurate automated approach to segment CSF and calculate its shifts on serial CT scans. This algorithm will allow us to efficiently and accurately measure the evolution of cerebral edema in future studies, including large multi-site patient populations.
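The two headline quantities, the Dice similarity coefficient and ΔCSF, are standard and easy to state in code. The sketch below (toy masks and an assumed voxel volume, not the study's data) computes both from binary CSF segmentations.

```python
# Dice similarity coefficient and CSF volume change from binary masks.
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

voxel_ml = 0.5 * 0.5 * 5.0 / 1000.0           # assumed voxel volume in mL
baseline = np.zeros((64, 64, 24), dtype=bool)
baseline[20:40, 20:40, 8:16] = True           # toy baseline CSF mask
followup = np.zeros_like(baseline)
followup[22:40, 20:38, 8:15] = True           # toy follow-up CSF mask

print("DSC =", round(dice(baseline, followup), 3))
print("delta-CSF (mL) =", (int(baseline.sum()) - int(followup.sum())) * voxel_ml)
```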
A transient FETI methodology for large-scale parallel implicit computations in structural mechanics
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Crivelli, Luis; Roux, Francois-Xavier
1992-01-01
Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because explicit schemes are also easier to parallelize than implicit ones. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet -- and perhaps will never -- be offset by the speed of parallel hardware. Therefore, it is essential to develop efficient and robust alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating low-frequency dynamics. Here we present a domain decomposition method for implicit schemes that requires significantly less storage than factorization algorithms, that is several times faster than other popular direct and iterative methods, that can be easily implemented on both shared and local memory parallel processors, and that is both computationally and communication-wise efficient. The proposed transient domain decomposition method is an extension of the method of Finite Element Tearing and Interconnecting (FETI) developed by Farhat and Roux for the solution of static problems. Serial and parallel performance results on the CRAY Y-MP/8 and the iPSC-860/128 systems are reported and analyzed for realistic structural dynamics problems. These results establish the superiority of the FETI method over both the serial/parallel conjugate gradient algorithm with diagonal scaling and the serial/parallel direct method, and contrast the computational power of the iPSC-860/128 parallel processor with that of the CRAY Y-MP/8 system.
Brewster, Aaron S.; Sawaya, Michael R.; Rodriguez, Jose; ...
2015-01-23
Still diffraction patterns from peptide nanocrystals with small unit cells are challenging to index using conventional methods owing to the limited number of spots and the lack of crystal orientation information for individual images. New indexing algorithms have been developed as part of the Computational Crystallography Toolbox (cctbx) to overcome these challenges. Accurate unit-cell information derived from an aggregate data set of thousands of diffraction patterns can be used to determine a crystal orientation matrix for individual images with as few as five reflections. These algorithms are potentially applicable not only to amyloid peptides but also to any set of diffraction patterns with sparse properties, such as low-resolution virus structures or high-throughput screening of still images captured by raster-scanning at synchrotron sources. As a proof of concept for this technique, successful integration of X-ray free-electron laser (XFEL) data to 2.5 Å resolution for the amyloid segment GNNQQNY from the Sup35 yeast prion is presented.
Wu, Yicong; Chandris, Panagiotis; Winter, Peter W.; Kim, Edward Y.; Jaumouillé, Valentin; Kumar, Abhishek; Guo, Min; Leung, Jacqueline M.; Smith, Corey; Rey-Suarez, Ivan; Liu, Huafeng; Waterman, Clare M.; Ramamurthi, Kumaran S.; La Riviere, Patrick J.; Shroff, Hari
2016-01-01
Most fluorescence microscopes are inefficient, collecting only a small fraction of the emitted light at any instant. Besides wasting valuable signal, this inefficiency also reduces spatial resolution and causes imaging volumes to exhibit significant resolution anisotropy. We describe microscopic and computational techniques that address these problems by simultaneously capturing and subsequently fusing and deconvolving multiple specimen views. Unlike previous methods that serially capture multiple views, our approach improves spatial resolution without introducing any additional illumination dose or compromising temporal resolution relative to conventional imaging. When applying our methods to single-view wide-field or dual-view light-sheet microscopy, we achieve a twofold improvement in volumetric resolution (~235 nm × 235 nm × 340 nm) as demonstrated on a variety of samples including microtubules in Toxoplasma gondii, SpoVM in sporulating Bacillus subtilis, and multiple protein distributions and organelles in eukaryotic cells. In every case, spatial resolution is improved with no drawback by harnessing previously unused fluorescence. PMID:27761486
PETSc Users Manual Revision 3.7
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, Satish; Abhyankar, S.; Adams, M.
This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication.
PETSc Users Manual Revision 3.8
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, S.; Abhyankar, S.; Adams, M.
This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication.
Yeh, Paul; Hunter, Tane; Sinha, Devbarna; Ftouni, Sarah; Wallach, Elise; Jiang, Damian; Chan, Yih-Chih; Wong, Stephen Q; Silva, Maria Joao; Vedururu, Ravikiran; Doig, Kenneth; Lam, Enid; Arnau, Gisela Mir; Semple, Timothy; Wall, Meaghan; Zivanovic, Andjelija; Agarwal, Rishu; Petrone, Pasquale; Jones, Kate; Westerman, David; Blombery, Piers; Seymour, John F; Papenfuss, Anthony T; Dawson, Mark A; Tam, Constantine S; Dawson, Sarah-Jane
2017-03-17
Several novel therapeutics are poised to change the natural history of chronic lymphocytic leukaemia (CLL) and the increasing use of these therapies has highlighted limitations of traditional disease monitoring methods. Here we demonstrate that circulating tumour DNA (ctDNA) is readily detectable in patients with CLL. Importantly, ctDNA does not simply mirror the genomic information contained within circulating malignant lymphocytes but instead parallels changes across different disease compartments following treatment with novel therapies. Serial ctDNA analysis allows clonal dynamics to be monitored over time and identifies the emergence of genomic changes associated with Richter's syndrome (RS). In addition to conventional disease monitoring, ctDNA provides a unique opportunity for non-invasive serial analysis of CLL for molecular disease monitoring.
Kinematic synthesis of adjustable robotic mechanisms
NASA Astrophysics Data System (ADS)
Chuenchom, Thatchai
1993-01-01
Conventional hard automation, such as a linkage-based or a cam-driven system, provides high-speed capability and repeatability but not the flexibility required in many industrial applications. The conventional mechanisms, which are typically single-degree-of-freedom systems, are increasingly being replaced by multi-degree-of-freedom multi-actuators driven by logic controllers. Although this new trend in sophistication provides greatly enhanced flexibility, there are many instances where the flexibility needs are exaggerated and the associated complexity is unnecessary. Traditional mechanism-based hard automation, on the other hand, can neither fulfill multi-task requirements nor be cost-effective, mainly for lack of methods and tools to design in flexibility. This dissertation attempts to bridge this technological gap by developing Adjustable Robotic Mechanisms (ARMs), or 'programmable mechanisms', as a middle ground between high-speed hard automation and expensive serial jointed-arm robots. This research introduces the concept of adjustable robotic mechanisms for cost-effective manufacturing automation. A generalized analytical synthesis technique has been developed to support the computational design of ARMs, laying the theoretical foundation for the synthesis of adjustable mechanisms. The synthesis method developed in this dissertation, called generalized adjustable dyad and triad synthesis, advances the well-known Burmester theory in kinematics to a new level. While this method provides planar solutions, a novel patented scheme is utilized for converting prescribed three-dimensional motion specifications into sets of planar projections. This provides an analytical and computational tool for designing adjustable mechanisms that satisfy multiple sets of three-dimensional motion specifications. Several design issues are addressed, including adjustable parameter identification, branching defects, and mechanical errors. An efficient mathematical scheme for identifying the adjustable member was also developed. The analytical synthesis techniques developed in this dissertation were successfully implemented in a graphics-intensive, user-friendly computer program. A physical prototype of a general-purpose adjustable robotic mechanism has been constructed to serve as a proof-of-concept model.
Fayn, J; Rubel, P
1988-01-01
The authors present a new computer program for serial ECG analysis that allows a direct comparison of any pair of three-dimensional ECGs and quantitatively assesses the degree of evolution of the spatial loops as well as of their initial, central, or terminal sectors. Loops and sectors are superposed as closely as possible, with the aim of overcoming tracing variability of nonpathological origin. As a result, optimal measures of evolution are computed, and a tabular summary of measurements is dynamically configured with respect to the patient's history and then printed. A multivariate classifier assigns each pair of tracings to one of four classes of evolution. Color graphic displays corresponding to several modes of representation may also be plotted.
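The abstract does not spell out the superposition procedure; as one standard possibility, the sketch below aligns two 3-D loops by the optimal Kabsch (orthogonal Procrustes) rotation before measuring their residual difference. The loops and the residual metric are illustrative assumptions, not the paper's method.

```python
# Superpose two 3-D loops by the optimal rotation (Kabsch algorithm).
import numpy as np

def kabsch_superpose(p, q):
    """Rotate loop q onto loop p; both are (n, 3) arrays of samples."""
    pc, qc = p - p.mean(0), q - q.mean(0)
    u, _, vt = np.linalg.svd(qc.T @ pc)
    d = np.sign(np.linalg.det(u @ vt))      # guard against reflections
    r = u @ np.diag([1.0, 1.0, d]) @ vt     # optimal rotation matrix
    return pc, qc @ r

t = np.linspace(0.0, 2.0 * np.pi, 200)
loop1 = np.c_[np.cos(t), np.sin(t), 0.1 * t]                  # reference
rot_z = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
loop2 = loop1 @ rot_z                                         # rotated copy
a, b = kabsch_superpose(loop1, loop2)
print("RMS after superposition:", float(np.sqrt(((a - b) ** 2).mean())))
```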
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomquist, Heidi K.; Fixel, Deborah A.; Fett, David Brian
The Xyce Parallel Electronic Simulator simulates electronic circuit behavior in DC, AC, HB, MPDE and transient modes using standard analog (DAE) and/or device (PDE) models, including several age- and radiation-aware devices. It supports a variety of computing platforms, both serial and parallel. Lastly, it uses a variety of modern solution algorithms, dynamic parallel load-balancing, and iterative solvers.
Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Liang, Ke; Hong, Yang
2017-10-01
The shuffled complex evolution optimization developed at the University of Arizona (SCE-UA) has been successfully applied for many years in various kinds of scientific and engineering optimization applications, such as hydrological model parameter calibration. The algorithm possesses good global optimality, convergence stability and robustness. However, benchmark and real-world applications reveal the poor computational efficiency of the SCE-UA. This research aims at parallelizing and accelerating the SCE-UA method based on powerful heterogeneous computing technology. The parallel SCE-UA was implemented on an Intel Xeon multi-core CPU (using OpenMP and OpenCL) and an NVIDIA Tesla many-core GPU (using OpenCL, CUDA, and OpenACC). The serial and parallel SCE-UA were tested on the Griewank benchmark function. The comparison results indicate that the parallel SCE-UA significantly improves computational efficiency over the original serial version. The OpenCL implementation obtains the best overall acceleration, though with the most complex source code. The parallel SCE-UA has bright prospects for real-world applications.
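The Griewank benchmark used for the tests is a standard multimodal function with global minimum f(0) = 0. A minimal definition, with an assumed dimensionality of 10, is sketched below.

```python
# The Griewank benchmark function; its global minimum is f(0) = 0.
import numpy as np

def griewank(x):
    x = np.asarray(x, float)
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

print(griewank(np.zeros(10)))          # 0.0 at the global optimum
print(griewank(np.full(10, 50.0)))     # a distant, poor point
```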
Optics Program Modified for Multithreaded Parallel Computing
NASA Technical Reports Server (NTRS)
Lou, John; Bedding, Dave; Basinger, Scott
2006-01-01
A powerful high-performance computer program for simulating and analyzing adaptive and controlled optical systems has been developed by modifying the serial version of the Modeling and Analysis for Controlled Optical Systems (MACOS) program to impart capabilities for multithreaded parallel processing on computing systems ranging from supercomputers down to Symmetric Multiprocessing (SMP) personal computers. The modifications included the incorporation of OpenMP, a portable and widely supported application interface software, that can be used to explicitly add multithreaded parallelism to an application program under a shared-memory programming model. OpenMP was applied to parallelize ray-tracing calculations, one of the major computing components in MACOS. Multithreading is also used in the diffraction propagation of light in MACOS based on pthreads [POSIX Thread, (where "POSIX" signifies a portable operating system for UNIX)]. In tests of the parallelized version of MACOS, the speedup in ray-tracing calculations was found to be linear, or proportional to the number of processors, while the speedup in diffraction calculations ranged from 50 to 60 percent, depending on the type and number of processors. The parallelized version of MACOS is portable, and, to the user, its interface is basically the same as that of the original serial version of MACOS.
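The parallelization pattern, independent ray-trace evaluations distributed across workers, can be illustrated outside of MACOS. The sketch below is a Python process-pool analog of the OpenMP parallel loop; trace_ray is a hypothetical stand-in kernel, not a MACOS routine.

```python
# A process-pool analog of an OpenMP parallel-for over independent rays.
import numpy as np
from multiprocessing import Pool

def trace_ray(ray_id: int) -> float:
    # Stand-in for a real ray-trace: deterministic per-ray path length.
    rng = np.random.default_rng(ray_id)
    return float(np.sum(rng.random(1000)))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # Each ray is an independent task, as in the parallelized loop.
        lengths = pool.map(trace_ray, range(10_000), chunksize=256)
    print("mean path length:", sum(lengths) / len(lengths))
```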
A review of the promises and challenges of micro-concentrator photovoltaics
NASA Astrophysics Data System (ADS)
Domínguez, César; Jost, Norman; Askins, Steve; Victoria, Marta; Antón, Ignacio
2017-09-01
Micro-concentrator photovoltaics (micro-CPV) is an unconventional approach to developing high-efficiency, low-cost PV systems. Miniaturizing the cells and optics brings an increase in efficiency with respect to classical CPV, at the expense of some fundamental challenges in mass production. The large costs linked to miniaturization under conventional serial-assembly processes raise the need for the development of parallel manufacturing technologies. In return, the tiny sizes involved allow exploring unconventional optical architectures, or revisiting conventional concepts that were typically discarded at classical CPV sizes because of large material consumption or high bulk absorption.
Quantitative analysis of tympanic membrane perforation: a simple and reliable method.
Ibekwe, T S; Adeosun, A A; Nwaorgu, O G
2009-01-01
Accurate assessment of the features of tympanic membrane perforation, especially size, site, duration and aetiology, is important, as it enables optimum management. To describe a simple, cheap and effective method of quantitatively analysing tympanic membrane perforations. The system described comprises a video-otoscope (capable of generating still and video images of the tympanic membrane), adapted via a universal serial bus box to a computer screen, with images analysed using the Image J geometrical analysis software package. The reproducibility of results and their correlation with conventional otoscopic methods of estimation were tested statistically with the paired t-test and correlational tests, using the Statistical Package for the Social Sciences version 11 software. The following equation was generated: percentage perforation = (P / T) × 100%, where P is the area (in pixels²) of the tympanic membrane perforation and T is the total area (in pixels²) of the entire tympanic membrane (including the perforation). Illustrations are shown. Comparison of blinded data on tympanic membrane perforation area, obtained independently by two trained otologists of comparable experience using the video-otoscopy system described, showed similar findings, with strong correlations devoid of inter-observer error (p = 0.000, r = 1). Comparison with conventional otoscopic assessment also indicated significant correlation between the results of the two trained otologists, but some inter-observer variation was present (p = 0.000, r = 0.896). The correlation between the two methods was also highly significant for each otologist (p = 0.000). A computer-adapted video-otoscope, with images analysed by Image J software, represents a cheap, reliable, technology-driven, clinical method of quantitative analysis of tympanic membrane perforations and injuries.
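The quoted formula reduces to simple mask arithmetic. The sketch below (synthetic masks, not clinical images) computes the percentage perforation from pixel counts exactly as the equation prescribes.

```python
# Percentage perforation from binary masks, mirroring the formula above.
import numpy as np

membrane = np.zeros((480, 640), dtype=bool)
membrane[100:380, 150:500] = True           # whole drum, incl. perforation
perforation = np.zeros_like(membrane)
perforation[200:260, 300:370] = True        # perforated region

P = perforation.sum()                        # perforation area in pixels^2
T = membrane.sum()                           # total membrane area in pixels^2
print(f"perforation = {100.0 * P / T:.1f}%")
```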
Surveillance of industrial processes with correlated parameters
White, Andrew M.; Gross, Kenny C.; Kubic, William L.; Wigeland, Roald A.
1996-01-01
A system and method for surveillance of an industrial process. The system and method includes a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions.
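The pipeline named in the abstract, serial-correlation removal followed by Mahalanobis statistics and a probability ratio test, can be sketched generically. The code below is our schematic stand-in, not the patented implementation: AR(1) prewhitening, Mahalanobis scoring against a training covariance, and a simplified cumulative-evidence alarm with an arbitrary threshold.

```python
# Generic surveillance sketch: prewhiten, score, and accumulate evidence.
import numpy as np

def make_ar1(n, rho, cov, rng):
    """Two-channel series whose samples carry lag-1 serial correlation rho."""
    e = rng.multivariate_normal([0.0, 0.0], cov, n) * np.sqrt(1.0 - rho**2)
    x = np.empty_like(e)
    x[0] = e[0] / np.sqrt(1.0 - rho**2)
    for i in range(1, n):
        x[i] = rho * x[i - 1] + e[i]
    return x

rng = np.random.default_rng(2)
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
train = make_ar1(500, 0.7, cov, rng)          # in-control training data
test = make_ar1(500, 0.7, cov, rng)
test[250:] += [1.5, -1.0]                     # simulated sensor fault

# Estimate and remove the lag-1 serial correlation (AR(1) prewhitening).
rho_hat = np.mean([np.corrcoef(train[:-1, j], train[1:, j])[0, 1]
                   for j in range(train.shape[1])])
whiten = lambda x: x[1:] - rho_hat * x[:-1]
train_w, test_w = whiten(train), whiten(test)

mu = train_w.mean(0)
inv = np.linalg.inv(np.cov(train_w.T))
d2 = np.einsum('ij,jk,ik->i', test_w - mu, inv, test_w - mu)  # Mahalanobis^2

# Simplified cumulative-evidence alarm (stand-in for the probability
# ratio test): accumulate how far d2 sits above its in-control mean.
score = np.cumsum(0.5 * (d2 - test_w.shape[1]))
hits = np.nonzero(score > 25.0)[0]
print("alarm at test sample:", int(hits[0]) if hits.size else None)
```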
Parallel Monotonic Basin Hopping for Low Thrust Trajectory Optimization
NASA Technical Reports Server (NTRS)
McCarty, Steven L.; McGuire, Melissa L.
2018-01-01
Monotonic Basin Hopping has been shown to be an effective method for solving low-thrust trajectory optimization problems. This paper outlines an extension of the common serial implementation that parallelizes it over any number of available compute cores. The Parallel Monotonic Basin Hopping algorithm described herein is shown to be an effective way to locate feasible solutions more quickly and to improve locally optimal solutions in an automated way, without requiring a feasible initial guess. The increased speed achieved through parallelization enables the algorithm to be applied to more complex problems that would otherwise be impractical for a serial implementation. Low-thrust cislunar transfers and a hybrid Mars example case demonstrate the effectiveness of the algorithm. Finally, a preliminary scaling study quantifies the expected decrease in solve time compared to a serial implementation.
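A generic (non-NASA) rendering of the scheme helps fix ideas: worker processes each perturb the incumbent solution and run a local optimization, and the incumbent is replaced only by improvements (the "monotonic" acceptance rule). The objective, perturbation size, and batch structure below are illustrative assumptions.

```python
# Parallel monotonic basin hopping over a multimodal test objective.
import numpy as np
from multiprocessing import Pool
from scipy.optimize import minimize

def rastrigin(x):
    # Classic multimodal test objective standing in for a trajectory cost.
    return 10.0 * x.size + float(np.sum(x**2 - 10.0 * np.cos(2 * np.pi * x)))

def hop(args):
    x0, seed = args
    rng = np.random.default_rng(seed)
    guess = x0 + rng.normal(0.0, 1.0, x0.size)     # random "hop"
    res = minimize(rastrigin, guess, method="L-BFGS-B")
    return res.fun, res.x

if __name__ == "__main__":
    best_x = np.full(6, 3.7)                       # arbitrary poor start
    best_f = rastrigin(best_x)
    with Pool(processes=4) as pool:
        for it in range(20):                       # batches of parallel hops
            batch = [(best_x, 100 * it + k) for k in range(4)]
            f, x = min(pool.map(hop, batch), key=lambda r: r[0])
            if f < best_f:                         # monotonic acceptance
                best_f, best_x = f, x
    print("best objective found:", best_f)
```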
[Eye movement study in multiple object search process].
Xu, Zhaofang; Liu, Zhongqi; Wang, Xingwei; Zhang, Xin
2017-04-01
The aim of this study is to investigate the search-time regularities and eye movement characteristics of multi-objective visual search. The experimental task was implemented in software and presented characters on a 24-inch computer display. The subjects were asked to search for three targets among the characters. The three target characters within a group were highly similar to one another, while the similarity between target characters and distraction characters differed across groups. We recorded the search time and eye movement data throughout the experiment. The eye movement data showed a large number of fixation points when the target and distraction characters were similar. Subjects exhibited three visual search patterns: parallel search, serial search, and parallel-serial search. The last pattern gave the best search performance of the three; that is, subjects using the parallel-serial pattern found the targets in the shortest time. The order in which targets were presented significantly affected search performance, as did the degree of similarity between target and distraction characters.
NASA Astrophysics Data System (ADS)
Yussup, N.; Ibrahim, M. M.; Lombigit, L.; Rahman, N. A. A.; Zin, M. R. M.
2014-02-01
Typically a system consists of controller hardware and software installed on a personal computer (PC). In nuclear detection, the hardware comprises the detection setup and its electronics, while the software provides analysis tools and a graphical display on the PC. A data acquisition interface is necessary to enable communication between the controller hardware and the PC. Nowadays, the Universal Serial Bus (USB) has become a standard connection method for computer peripherals and has replaced many varieties of serial and parallel ports. However, implementing USB is complex. This paper describes the implementation of a data acquisition interface between a field-programmable gate array (FPGA) board and a PC by exploiting the USB link of the FPGA board. The USB link is based on an FTDI chip, which allows direct input and output access to the Joint Test Action Group (JTAG) signals from a USB host, and a complex programmable logic device (CPLD), with a 24 MHz clock input to the USB link. The implementation and results of using the USB link of the FPGA board for data interfacing are discussed.
A computational approach to real-time image processing for serial time-encoded amplified microscopy
NASA Astrophysics Data System (ADS)
Oikawa, Minoru; Hiyama, Daisuke; Hirayama, Ryuji; Hasegawa, Satoki; Endo, Yutaka; Sugie, Takahisa; Tsumura, Norimichi; Kuroshima, Mai; Maki, Masanori; Okada, Genki; Lei, Cheng; Ozeki, Yasuyuki; Goda, Keisuke; Shimobaba, Tomoyoshi
2016-03-01
High-speed imaging is an indispensable technique, particularly for identifying or analyzing fast-moving objects. The serial time-encoded amplified microscopy (STEAM) technique was proposed to enable capturing images at a frame rate 1,000 times faster than conventional devices such as CCD (charge-coupled device) cameras. Applying this high-speed STEAM imaging technique to a real-time system, such as flow cytometry for cell sorting, requires successively processing a large number of captured images with high throughput in real time. We are now developing a high-speed flow cytometer system that includes a STEAM camera. In this paper, we describe our approach to processing these large amounts of image data in real time. We use an analog-to-digital converter with up to 7.0 Gsamples/s and 8-bit resolution to capture the output voltage signal, which carries grayscale images, from the STEAM camera. The direct data output from the STEAM camera therefore generates 7.0 Gbyte/s continuously. We provided a field-programmable gate array (FPGA) device as a digital signal pre-processor for image reconstruction and for finding objects in a microfluidic channel at high data rates in real time. We also utilized graphics processing unit (GPU) devices to accelerate the identification of the reconstructed images. We built our prototype system, including a STEAM camera, an FPGA device and a GPU device, and evaluated its performance in the real-time identification of small particles (beads), as virtual biological cells, flowing through a microfluidic channel.
Architecture independent environment for developing engineering software on MIMD computers
NASA Technical Reports Server (NTRS)
Valimohamed, Karim A.; Lopez, L. A.
1990-01-01
Engineers are constantly faced with solving problems of increasing complexity and detail. Multiple Instruction stream Multiple Data stream (MIMD) computers have been developed to overcome the performance limitations of serial computers. The hardware architectures of MIMD computers vary considerably and are much more sophisticated than serial computers. Developing large scale software for a variety of MIMD computers is difficult and expensive. There is a need to provide tools that facilitate programming these machines. First, the issues that must be considered to develop those tools are examined. The two main areas of concern were architecture independence and data management. Architecture independent software facilitates software portability and improves the longevity and utility of the software product. It provides some form of insurance for the investment of time and effort that goes into developing the software. The management of data is a crucial aspect of solving large engineering problems. It must be considered in light of the new hardware organizations that are available. Second, the functional design and implementation of a software environment that facilitates developing architecture independent software for large engineering applications are described. The topics of discussion include: a description of the model that supports the development of architecture independent software; identifying and exploiting concurrency within the application program; data coherence; engineering data base and memory management.
Self-Scheduling Parallel Methods for Multiple Serial Codes with Application to WOPWOP
NASA Technical Reports Server (NTRS)
Long, Lyle N.; Brentner, Kenneth S.
2000-01-01
This paper presents a scheme for efficiently running a large number of serial jobs on parallel computers. Two examples are given of computer programs that run relatively quickly, but often they must be run numerous times to obtain all the results needed. It is very common in science and engineering to have codes that are not massive computing challenges in themselves, but due to the number of instances that must be run, they do become large-scale computing problems. The two examples given here represent common problems in aerospace engineering: aerodynamic panel methods and aeroacoustic integral methods. The first example simply solves many systems of linear equations. This is representative of an aerodynamic panel code where someone would like to solve for numerous angles of attack. The complete code for this first example is included in the appendix so that it can be readily used by others as a template. The second example is an aeroacoustics code (WOPWOP) that solves the Ffowcs Williams Hawkings equation to predict the far-field sound due to rotating blades. In this example, one quite often needs to compute the sound at numerous observer locations, hence parallelization is utilized to automate the noise computation for a large number of observers.
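The self-scheduling idea is easy to sketch: workers pull serial jobs from a shared queue, so faster workers automatically take on more jobs and no static assignment is needed. The code below is a generic Python rendering, not the WOPWOP driver; serial_job is a hypothetical stand-in for one full serial run (e.g., the noise at one observer location).

```python
# Self-scheduling pool of workers consuming independent serial jobs.
import numpy as np
from multiprocessing import Pool

def serial_job(observer_id: int) -> float:
    # Stand-in for one complete serial run (e.g., noise at one observer).
    rng = np.random.default_rng(observer_id)
    return float(np.sum(np.sin(rng.random(200_000))))

if __name__ == "__main__":
    with Pool(processes=8) as pool:
        # chunksize=1 yields pure self-scheduling: each worker grabs the
        # next pending job the moment it finishes its current one.
        results = list(pool.imap_unordered(serial_job, range(64), chunksize=1))
    print(len(results), "independent serial jobs completed")
```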
Proactive action preparation: seeing action preparation as a continuous and proactive process.
Pezzulo, Giovanni; Ognibene, Dimitri
2012-07-01
In this paper, we aim to elucidate the processes that occur during action preparation from both a conceptual and a computational point of view. We first introduce the traditional, serial model of goal-directed action and discuss from a computational viewpoint its subprocesses occurring during the two phases of covert action preparation and overt motor control. Then, we discuss recent evidence indicating that these subprocesses are highly intertwined at representational and neural levels, which undermines the validity of the serial model and points instead to a parallel model of action specification and selection. Within the parallel view, we analyze the case of delayed choice, arguing that action preparation can be proactive, and preparatory processes can take place even before decisions are made. Specifically, we discuss how prior knowledge and prospective abilities can be used to maximize utility even before deciding what to do. To support our view, we present a computational implementation of (an approximated version of) proactive action preparation, showing its advantages in a simulated tennis-like scenario.
Transient CDK4/6 inhibition protects hematopoietic stem cells from chemotherapy-induced exhaustion.
He, Shenghui; Roberts, Patrick J; Sorrentino, Jessica A; Bisi, John E; Storrie-White, Hannah; Tiessen, Renger G; Makhuli, Karenann M; Wargin, William A; Tadema, Henko; van Hoogdalem, Ewoud-Jan; Strum, Jay C; Malik, Rajesh; Sharpless, Norman E
2017-04-26
Conventional cytotoxic chemotherapy is highly effective in certain cancers but causes dose-limiting damage to normal proliferating cells, especially hematopoietic stem and progenitor cells (HSPCs). Serial exposure to cytotoxics causes a long-term hematopoietic compromise ("exhaustion"), which limits the use of chemotherapy and success of cancer therapy. We show that the coadministration of G1T28 (trilaciclib), which is a small-molecule inhibitor of cyclin-dependent kinases 4 and 6 (CDK4/6), contemporaneously with cytotoxic chemotherapy protects murine hematopoietic stem cells (HSCs) from chemotherapy-induced exhaustion in a serial 5-fluorouracil treatment model. Consistent with a cell-intrinsic effect, we show directly preserved HSC function resulting in a more rapid recovery of peripheral blood counts, enhanced serial transplantation capacity, and reduced myeloid skewing. When administered to healthy human volunteers, G1T28 demonstrated excellent in vivo pharmacology and transiently inhibited bone marrow (BM) HSPC proliferation. These findings suggest that the combination of CDK4/6 inhibitors with cytotoxic chemotherapy should provide a means to attenuate therapy-induced BM exhaustion in patients with cancer. Copyright © 2017, American Association for the Advancement of Science.
The application of coded excitation technology in medical ultrasonic Doppler imaging
NASA Astrophysics Data System (ADS)
Li, Weifeng; Chen, Xiaodong; Bao, Jing; Yu, Daoyin
2008-03-01
Medical ultrasonic Doppler imaging is one of the most important domains of modern medical imaging technology. Applying coded excitation in a medical ultrasonic Doppler imaging system offers higher SNR and deeper penetration than a conventional pulse-echo system; it also improves image quality, enhances sensitivity to weak signals, and, with a properly chosen code, benefits the received spectrum of the Doppler signal. This paper first surveys the application of coded excitation to medical ultrasonic Doppler imaging, showing the advantages and promise of the technique, and then introduces the principle and theory of coded excitation. Next, we compare several coded sequences (chirp and pseudo-chirp signals, Barker codes, Golay complementary sequences, M-sequences, etc.). Considering mainlobe width, range sidelobe level, signal-to-noise ratio, and Doppler-signal sensitivity, we choose Barker codes as the excitation sequence. Finally, we design the coded excitation circuit. The results in B-mode imaging and Doppler flow measurement matched our expectations, demonstrating the advantage of coded excitation in a digital medical ultrasonic Doppler endoscope imaging system.
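The range-sidelobe property that favors Barker codes is easy to verify numerically: the autocorrelation of the length-13 Barker code has a compressed peak of 13 while every sidelobe has magnitude at most 1. A small numpy check (illustrative only, not from the paper):

    import numpy as np

    barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
    acf = np.correlate(barker13, barker13, mode="full")
    print(acf[12])                  # peak of 13 at zero lag
    print(np.abs(acf[:12]).max())   # every sidelobe magnitude <= 1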
ERIC Educational Resources Information Center
Wagenmakers, Eric-Jan; Farrell, Simon; Ratcliff, Roger
2005-01-01
Recently, G. C. Van Orden, J. G. Holden, and M. T. Turvey (2003) proposed to abandon the conventional framework of cognitive psychology in favor of the framework of nonlinear dynamical systems theory. Van Orden et al. presented evidence that "purposive behavior originates in self-organized criticality" (p. 333). Here, the authors show that Van…
Analog design of wireless control for home equipment
NASA Astrophysics Data System (ADS)
Zheng, Shiyong; Li, Zhao; Li, Biqing; Jiang, Suping
2018-04-01
This design consists of an STC89C52 microcontroller, a serial Bluetooth module, and the Android system. An Android mobile phone acts as the master in the home control centre: it passes instructions and information through the serial Bluetooth module to the STC89C52 MCU, which uses the wireless Bluetooth link to control home devices. The system offers high reliability, low cost, ease of use, strong applicability, and other desirable characteristics; it can be used in a single-user household and has great practical significance.
1990-07-01
sleep to favor one set of material in preference to others. This could apply to skill learning as well as declarative memory with considerable potential... not be advantageous for an organism to store a large number of specific memories, specific records of the many experiences of each day of its lifetime... be stored in real time in a sequential representation, as on a serial computer tape. Access to this "episodic" memory would be by serial order, by time
Conjugate-Gradient Algorithms For Dynamics Of Manipulators
NASA Technical Reports Server (NTRS)
Fijany, Amir; Scheid, Robert E.
1993-01-01
Algorithms for serial and parallel computation of forward dynamics of multiple-link robotic manipulators by conjugate-gradient method developed. Parallel algorithms have potential for speedup of computations on multiple linked, specialized processors implemented in very-large-scale integrated circuits. Such processors used to simulate dynamics, possibly faster than in real time, for purposes of planning and control.
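The linear-algebra step at the core of such algorithms is a conjugate-gradient solve of the joint-space inertia system M a = b, where M is symmetric positive definite. A generic numpy sketch of plain CG follows (an illustration of the method, not the paper's serial or parallel formulation):

    import numpy as np

    def conjugate_gradient(M, b, tol=1e-10):
        x = np.zeros_like(b)
        r = b - M @ x              # residual
        p = r.copy()               # search direction
        rs = r @ r
        for _ in range(len(b)):
            Mp = M @ p
            alpha = rs / (p @ Mp)
            x += alpha * p
            r -= alpha * Mp
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    # Example with a random symmetric positive-definite "inertia matrix"
    A = np.random.rand(6, 6)
    M = A @ A.T + 6 * np.eye(6)
    b = np.random.rand(6)
    print(np.allclose(conjugate_gradient(M, b), np.linalg.solve(M, b)))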
O'Donnell, Michael
2015-01-01
State-and-transition simulation modeling relies on knowledge of vegetation composition and structure (states) that describe community conditions, mechanistic feedbacks such as fire that can affect vegetation establishment, and ecological processes that drive community conditions as well as the transitions between these states. However, as the need for modeling larger and more complex landscapes increases, a more advanced awareness of computing resources becomes essential. The objectives of this study include identifying challenges of executing state-and-transition simulation models, identifying common bottlenecks of computing resources, developing a workflow and software that enable parallel processing of Monte Carlo simulations, and identifying the advantages and disadvantages of different computing resources. To address these objectives, this study used the ApexRMS® SyncroSim software and embarrassingly parallel tasks of Monte Carlo simulations on a single multicore computer and on distributed computing systems. The results demonstrated that state-and-transition simulation models scale best in distributed computing environments, such as high-throughput and high-performance computing, because these environments disseminate the workloads across many compute nodes, thereby supporting analysis of larger landscapes, higher spatial resolution vegetation products, and more complex models. Using a case study and five different computing environments, the top result (high-throughput computing versus serial computations) indicated a decrease in computing time of approximately 96.6%. With a single, multicore compute node (bottom result), the computing time indicated an 81.8% decrease relative to using serial computations. These results provide insight into the tradeoffs of using different computing resources when research necessitates advanced integration of ecoinformatics incorporating large and complicated data inputs and models.
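The "embarrassingly parallel" structure of the Monte Carlo replicates is what makes the workload disseminate so well: each replicate is independent, so the runs can simply be farmed out. A generic Python sketch of that pattern on a single multicore node (an illustration of the workload shape, not the SyncroSim API; run_replicate is a hypothetical stand-in for one simulation):

    import random
    from multiprocessing import Pool

    def run_replicate(seed):
        rng = random.Random(seed)
        # stand-in for one state-and-transition simulation run
        return sum(rng.random() for _ in range(1_000_000))

    if __name__ == "__main__":
        seeds = range(64)          # 64 independent Monte Carlo replicates
        with Pool() as pool:       # one worker per core by default
            results = pool.map(run_replicate, seeds)
        print(len(results), "replicates done")

High-throughput computing generalizes the same pattern across many nodes, which is why it gave the largest reduction in computing time.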
Surveillance of industrial processes with correlated parameters
White, A.M.; Gross, K.C.; Kubic, W.L.; Wigeland, R.A.
1996-12-17
A system and method for surveillance of an industrial process are disclosed. The system and method include a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information, and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions. 10 figs.
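A rough sketch of the two analysis stages the abstract names, under illustrative assumptions (an AR(1) model for the serial correlation and Wald's sequential probability ratio test with made-up parameters; this is not the patented method itself):

    import numpy as np

    def whiten_ar1(x):
        phi = np.corrcoef(x[:-1], x[1:])[0, 1]   # lag-1 autocorrelation
        return x[1:] - phi * x[:-1]              # serially uncorrelated residuals

    def sprt(residuals, mu1, sigma, alpha=0.01, beta=0.01):
        a = np.log(beta / (1 - alpha))           # lower (accept-normal) threshold
        b = np.log((1 - beta) / alpha)           # upper (alarm) threshold
        llr = 0.0
        for i, r in enumerate(residuals):
            # log-likelihood ratio: mean mu1 (degraded) vs mean 0 (normal)
            llr += (mu1 / sigma**2) * (r - mu1 / 2)
            if llr >= b:
                return "alarm", i
            if llr <= a:
                llr = 0.0                        # accept normal, restart test
        return "no alarm", len(residuals)

    x = np.random.randn(5000)                    # stand-in sensor signal
    print(sprt(whiten_ar1(x), mu1=1.0, sigma=1.0))

Whitening first matters because serial correlation inflates the false-alarm rate of any test that assumes independent samples.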
Sasamori, Hitomi; Ohmura, Yu; Kubo, Takuya; Yoshida, Takayuki; Yoshioka, Mitsuhiro
2018-05-02
Immaturity in impulse control among adolescents could result in substance abuse, criminal involvement, and suicide. The brains of adolescents and adults are anatomically, neurophysiologically, and pharmacologically different. Therefore, preclinical models of adolescent impulsivity are required to screen drugs for adolescents and elucidate the neural mechanisms underlying age-related differences in impulsivity. The conventional 3- or 5-choice serial reaction time task, which is a widely used task to assess impulsivity in adult rodents, cannot be used for young mice because of two technical problems: impaired growth caused by food restriction and the very long training duration. To overcome these problems, we altered the conventional training process, optimizing the degree of food restriction for young animals and shortening the training duration. We found that almost all basal performance levels were similar between the novel and conventional procedures. We also confirmed the pharmacological validity of our results: the 5-hydroxytryptamine 2C (5-HT2C) receptor agonist Ro60-0175 (0.6 mg/kg, subcutaneous) reduced the occurrence of premature responses, whereas the 5-HT2C receptor antagonist SB242084 (0.5 mg/kg, intraperitoneal) increased their occurrence, consistent with results of previous studies using conventional procedures. Furthermore, we detected age-related differences in impulsivity using the novel procedure: adolescent mice were found to be more impulsive than adult mice, congruent with the results of human studies. Thus, the new procedure enables the assessment of impulsivity in adolescent mice and facilitates a better understanding of the neurophysiological/pharmacological properties of adolescents. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Estimating costs and performance of systems for machine processing of remotely sensed data
NASA Technical Reports Server (NTRS)
Ballard, R. J.; Eastwood, L. F., Jr.
1977-01-01
This paper outlines a method for estimating computer processing times and costs incurred in producing information products from digital remotely sensed data. The method accounts for both computation and overhead, and may be applied to any serial computer. The method is applied to estimate the cost and computer time involved in producing Level II Land Use and Vegetative Cover Maps for a five-state midwestern region. The results show that the amount of data to be processed overloads some example computer systems, but that the processing is feasible on others.
TOUGH3: A new efficient version of the TOUGH suite of multiphase flow and transport simulators
NASA Astrophysics Data System (ADS)
Jung, Yoojin; Pau, George Shu Heng; Finsterle, Stefan; Pollyea, Ryan M.
2017-11-01
The TOUGH suite of nonisothermal multiphase flow and transport simulators has been updated by various developers over many years to address a vast range of challenging subsurface problems. The increasing complexity of the simulated processes as well as the growing size of model domains that need to be handled call for an improvement in the simulator's computational robustness and efficiency. Moreover, modifications have been frequently introduced independently, resulting in multiple versions of TOUGH that (1) led to inconsistencies in feature implementation and usage, (2) made code maintenance and development inefficient, and (3) caused confusion to users and developers. TOUGH3, a new base version of TOUGH, addresses these issues. It consolidates both the serial (TOUGH2 V2.1) and parallel (TOUGH2-MP V2.0) implementations, enabling simulations to be performed on desktop computers and supercomputers using a single code. New PETSc parallel linear solvers are added to the existing serial solvers of TOUGH2 and the Aztec solver used in TOUGH2-MP. The PETSc solvers generally perform better than the Aztec solvers in parallel and the internal TOUGH3 linear solver in serial. TOUGH3 also incorporates many new features, addresses bugs, and improves the flexibility of data handling. Due to the improved capabilities and usability, TOUGH3 is more robust and efficient for solving tough and computationally demanding problems in diverse scientific and practical applications related to subsurface flow modeling.
Cognitive, emotional and social markers of serial murdering.
Angrilli, Alessandro; Sartori, Giuseppe; Donzella, Giovanna
2013-01-01
Although criminal psychopathy is starting to be relatively well described, our knowledge of the characteristics and scientific markers of serial murdering is still very poor. A serial killer who murdered more than five people, KT, was administered a battery of standardized tests aimed at measuring neuropsychological impairment and social/emotional cognition deficits. KT exhibited a striking dissociation between a high level of emotional detachment and a low score on the antisocial behavior scale on the Psychopathy Checklist-Revised (PCL-R). The Minnesota Multiphasic Personality Inventory-2 showed a normal pattern with the psychotic triad at borderline level. KT had a high intelligence score and showed almost no impairment in cognitive tests sensitive to frontal lobe dysfunction (Wisconsin Card Sorting Test, Theory of Mind, Tower of London; the latter evidenced a mild impairment in planning performance). In the tests on moral, emotional and social cognition, his patterns of response differed from matched controls and from past reports on criminal psychopaths: unlike these individuals, KT exhibited normal recognition of fear and a relatively intact knowledge of moral rules, but he was impaired in the recognition of anger, embarrassment and conventional social rules. The overall picture of KT suggests that serial killing may be closer to normality than psychopathy defined according to either the DSM-IV or the PCL-R, and would be characterized by relatively spared moral cognition and selective deficits in the social and emotional cognition domains.
New Computer Simulations of Macular Neural Functioning
NASA Technical Reports Server (NTRS)
Ross, Muriel D.; Doshay, D.; Linton, S.; Parnas, B.; Montgomery, K.; Chimento, T.
1994-01-01
We use high performance graphics workstations and supercomputers to study the functional significance of the three-dimensional (3-D) organization of gravity sensors. These sensors have a prototypic architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, 3-D versions run on a Cray Y-MP supercomputer. A semi-automated method of reconstruction of neural tissue from serial sections studied in a transmission electron microscope has been developed to eliminate tedious conventional photography. The reconstructions use a mesh as a step in generating a neural surface for visualization. Two meshes are required to model calyx surfaces. The meshes are connected and the resulting prisms represent the cytoplasm and the bounding membranes. A finite volume analysis method is employed to simulate voltage changes along the calyx in response to synapse activation on the calyx or on calyceal processes. The finite volume method insures that charge is conserved at the calyx-process junction. These and other models indicate that efferent processes act as voltage followers, and that the morphology of some afferent processes affects their functioning. In a final application, morphological information is symbolically represented in three dimensions in a computer. The possible functioning of the connectivities is tested using mathematical interpretations of physiological parameters taken from the literature. Symbolic, 3-D simulations are in progress to probe the functional significance of the connectivities. This research is expected to advance computer-based studies of macular functioning and of synaptic plasticity.
Gray, Charles M; Goodell, Baldwin; Lear, Alex
2007-07-01
We describe the design and performance of an electromechanical system for conducting multineuron recording experiments in alert non-human primates. The system is based on a simple design, consisting of a microdrive, control electronics, software, and a unique type of recording chamber. The microdrive consists of an aluminum frame, a set of eight linear actuators driven by computer-controlled miniature stepping motors, and two printed circuit boards (PCBs) that provide connectivity to the electrodes and the control electronics. The control circuitry is structured around an Atmel RISC-based microcontroller, which sends commands to as many as eight motor control cards, each capable of controlling eight motors. The microcontroller is programmed in C and uses serial communication to interface with a host computer. The graphical user interface for sending commands is written in C and runs on a conventional personal computer. The recording chamber is low in profile, mounts within a circular craniotomy, and incorporates a removable internal sleeve. A replaceable Silastic membrane can be stretched across the bottom opening of the sleeve to provide a watertight seal between the cranial cavity and the external environment. This greatly reduces the susceptibility to infection, nearly eliminates the need for routine cleaning, and permits repeated introduction of electrodes into the brain at the same sites while maintaining the watertight seal. The system is reliable, easy to use, and has several advantages over other commercially available systems with similar capabilities.
A minimal SATA III Host Controller based on FPGA
NASA Astrophysics Data System (ADS)
Liu, Hailiang
2018-03-01
SATA (Serial Advanced Technology Attachment) is an advanced serial bus with outstanding performance in transmitting high-speed real-time data, applied in personal computers, the financial industry, astronautics and aeronautics, etc. In this paper, a minimal SATA III host controller based on a Xilinx Kintex-7 series FPGA is designed and implemented. Compared to the state of the art, register utilization is reduced by 25.3% and LUT utilization by 65.9%. According to the experimental results, the controller works precisely and steadily, with a reading bandwidth of up to 536 MB per second and a writing bandwidth of up to 512 MB per second, both of which are close to the maximum bandwidth of the SSD (Solid State Disk) device. The host controller is very suitable for high-speed data transmission and mass data storage.
Is human sentence parsing serial or parallel? Evidence from event-related brain potentials.
Hopf, Jens-Max; Bader, Markus; Meng, Michael; Bayer, Josef
2003-01-01
In this ERP study we investigate the processes that occur in syntactically ambiguous German sentences at the point of disambiguation. Whereas most psycholinguistic theories agree on the view that processing difficulties arise when parsing preferences are disconfirmed (so-called garden-path effects), important differences exist with respect to theoretical assumptions about the parser's recovery from a misparse. A key distinction can be made between parsers that compute all alternative syntactic structures in parallel (parallel parsers) and parsers that compute only a single preferred analysis (serial parsers). To distinguish empirically between parallel and serial parsing models, we compare ERP responses to garden-path sentences with ERP responses to truly ungrammatical sentences. Garden-path sentences contain a temporary and ultimately curable ungrammaticality, whereas truly ungrammatical sentences remain so permanently--a difference which gives rise to different predictions in the two classes of parsing architectures. At the disambiguating word, ERPs in both sentence types show negative shifts of similar onset latency, amplitude, and scalp distribution in an initial time window between 300 and 500 ms. In a following time window (500-700 ms), the negative shift to garden-path sentences disappears at right central parietal sites, while it continues in permanently ungrammatical sentences. These data are taken as evidence for a strictly serial parser. The absence of a difference in the early time window indicates that temporary and permanent ungrammaticalities trigger the same kind of parsing responses. Later differences can be related to successful reanalysis in garden-path but not in ungrammatical sentences. Copyright 2003 Elsevier Science B.V.
Multi-stained whole slide image alignment in digital pathology
NASA Astrophysics Data System (ADS)
Déniz, Oscar; Toomey, David; Conway, Catherine; Bueno, Gloria
2015-03-01
In Digital Pathology, one of the simplest and yet most useful features is the ability to view serial sections of tissue simultaneously on a computer monitor. This enables the pathologist to evaluate the histology and expression of multiple markers for a patient in a single review. However, the rate-limiting step in this process is the time taken for the pathologist to open each individual image, align the sections within the viewer (with a maximum of four slides at a time), and then manually move around the section. In addition, due to tissue processing and pre-analytical steps, sections with different stains show non-linear variations between acquisitions; that is, they stretch and change shape from section to section. To date, no approach has come close to a workable solution for automatically aligning the serial sections into one composite image. This research work addresses the problem by developing an automated serial-section alignment tool that enables pathologists to simply scroll through the various sections in a single viewer. To this aim, a multi-resolution intensity-based registration method using mutual information as a similarity metric, an optimizer based on an evolutionary process, and a bilinear transformation has been used. To characterize the performance of the algorithm, 40 cases × 5 different serial sections stained with hematoxylin-eosin (HE), estrogen receptor (ER), progesterone receptor (PR), Ki67, and human epidermal growth factor receptor 2 (Her2) have been considered. The qualitative results obtained are promising, with an average computation time of 26.4 s for images of up to 14660×5799 pixels running interpreted code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busbey, A.B.
A number of methods and products, both hardware and software, allow data exchange between Apple Macintosh computers and MS-DOS based systems. These include serial null-modem connections, MS-DOS hardware and/or software emulation, MS-DOS disk-reading hardware, and networking.
The Role of Microcomputers in Libraries.
ERIC Educational Resources Information Center
Lundeen, Gerald
1980-01-01
Describes the functions and characteristics of the microcomputer and discusses library applications including cataloging, circulation, acquisitions, serials control, reference and database systems, administration, current and future trends, and computers as media. Twenty references are listed. (CHC)
Achieving production-level use of HEP software at the Argonne Leadership Computing Facility
NASA Astrophysics Data System (ADS)
Uram, T. D.; Childers, J. T.; LeCompte, T. J.; Papka, M. E.; Benjamin, D.
2015-12-01
HEP's demand for computing resources has grown beyond the capacity of the Grid, and these demands will accelerate with the higher energy and luminosity planned for Run II. Mira, the ten petaFLOPs supercomputer at the Argonne Leadership Computing Facility, is a potentially significant compute resource for HEP research. Through an award of fifty million hours on Mira, we have delivered millions of events to LHC experiments by establishing the means of marshaling jobs through serial stages on local clusters, and parallel stages on Mira. We are running several HEP applications, including Alpgen, Pythia, Sherpa, and Geant4. Event generators, such as Sherpa, typically have a split workload: a small scale integration phase, and a second, more scalable, event-generation phase. To accommodate this workload on Mira we have developed two Python-based Django applications, Balsam and ARGO. Balsam is a generalized scheduler interface which uses a plugin system for interacting with scheduler software such as HTCondor, Cobalt, and TORQUE. ARGO is a workflow manager that submits jobs to instances of Balsam. Through these mechanisms, the serial and parallel tasks within jobs are executed on the appropriate resources. This approach and its integration with the PanDA production system will be discussed.
LabVIEW Serial Driver Software for an Electronic Load
NASA Technical Reports Server (NTRS)
Scullin, Vincent; Garcia, Christopher
2003-01-01
A LabVIEW-language computer program enables monitoring and control of a Transistor Devices, Inc., Dynaload WCL232 (or equivalent) electronic load via an RS-232 serial communication link between the electronic load and a remote personal computer. (The electronic load can operate at constant voltage, current, power consumption, or resistance.) The program generates a graphical user interface (GUI) at the computer that looks and acts like the front panel of the electronic load. Once the electronic load has been placed in remote-control mode, this program first queries the electronic load for the present values of all its operational and limit settings, and then drops into a cycle in which it reports the instantaneous voltage, current, and power values in displays that resemble those on the electronic load while monitoring the GUI images of pushbuttons for control actions by the user. By means of the pushbutton images and associated prompts, the user can perform such operations as changing limit values, the operating mode, or the set point. The benefit of this software is that it relieves the user of the need to learn one method for operating the electronic load locally and another method for operating it remotely via a personal computer.
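The query-then-poll structure of such a driver can be sketched in a few lines (Python with pyserial for illustration; the command strings below are hypothetical placeholders, not the Dynaload WCL232 protocol, which is documented in the instrument manual):

    import serial

    with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
        port.write(b"QUERY_SETTINGS\r\n")        # hypothetical settings query
        settings = port.readline()               # operational and limit settings
        while True:
            port.write(b"READ_VIP\r\n")          # hypothetical V/I/P readback
            reading = port.readline().decode(errors="replace").strip()
            if not reading:
                break                            # timeout: link lost
            print(reading)                       # update the GUI displays here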
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langer, S; Rotman, D; Schwegler, E
The Institutional Computing Executive Group (ICEG) review of FY05-06 Multiprogrammatic and Institutional Computing (M and IC) activities is presented in the attached report. In summary, we find that the M and IC staff does an outstanding job of acquiring and supporting a wide range of institutional computing resources to meet the programmatic and scientific goals of LLNL. The responsiveness and high quality of support given to users and the programs investing in M and IC reflects the dedication and skill of the M and IC staff. M and IC has successfully managed serial capacity, parallel capacity, and capability computing resources. Serial capacity computing supports a wide range of scientific projects which require access to a few high performance processors within a shared memory computer. Parallel capacity computing supports scientific projects that require a moderate number of processors (up to roughly 1000) on a parallel computer. Capability computing supports parallel jobs that push the limits of simulation science. M and IC has worked closely with Stockpile Stewardship, and together they have made LLNL a premier institution for computational and simulation science. Such a standing is vital to the continued success of laboratory science programs and to the recruitment and retention of top scientists. This report provides recommendations to build on M and IC's accomplishments and improve simulation capabilities at LLNL. We recommend that the institution fully fund (1) operation of the atlas cluster purchased in FY06 to support a few large projects; (2) operation of the thunder and zeus clusters to enable 'mid-range' parallel capacity simulations during normal operation and a limited number of large simulations during dedicated application time; (3) operation of the new yana cluster to support a wide range of serial capacity simulations; (4) improvements to the reliability and performance of the Lustre parallel file system; (5) support for the new GDO petabyte-class storage facility on the green network for use in data-intensive external collaborations; and (6) continued support for visualization and other methods for analyzing large simulations. We also recommend that M and IC begin planning in FY07 for the next upgrade of its parallel clusters. LLNL investments in M and IC have resulted in a world-class simulation capability leading to innovative science. We thank the LLNL management for its continued support and thank the M and IC staff for its vision and dedicated efforts to make it all happen.
NASA Astrophysics Data System (ADS)
Croitoru, Bogdan; Tulbure, Adrian; Abrudean, Mihail; Secara, Mihai
2015-02-01
The present paper describes a software method for creating and managing one type of Transducer Electronic Datasheet (TEDS) according to the IEEE 1451.4 standard, in order to develop a prototype smart multi-sensor platform (with up to ten different analog sensors simultaneously connected) with plug-and-play capabilities over Ethernet and Wi-Fi. The experiments used one analog temperature sensor, one analog light sensor, one PIC32-based microcontroller development board with analog and digital I/O ports and other computing resources, and one 24LC256 I2C (Inter-Integrated Circuit standard) serial Electrically Erasable Programmable Read-Only Memory (EEPROM) with 32 KB of available space and a 3-byte internal buffer for page writes (1 byte for data and 2 bytes for address). A prototype algorithm was developed for writing and reading TEDS information to and from I2C EEPROM memories using the standard C language (up to ten different TEDS blocks coexisting in the same EEPROM device at once). The algorithm is able to write and read one type of TEDS: transducer information with standard TEDS content. A second software application, written on the VB.NET platform, was developed in order to access the EEPROM sensor information from a computer through a serial interface (USB).
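The write framing described above, two address bytes followed by data, can be sketched as follows (illustrative Python; the function names and TEDS payload are invented for the example, and the actual implementation was in C):

    def eeprom_write_frame(mem_addr: int, data: bytes) -> bytes:
        # Byte frame sent after the I2C device address: addr-high, addr-low, data.
        return bytes([(mem_addr >> 8) & 0xFF, mem_addr & 0xFF]) + data

    def eeprom_read_setup(mem_addr: int) -> bytes:
        # Address-only write that positions the pointer before a sequential read.
        return bytes([(mem_addr >> 8) & 0xFF, mem_addr & 0xFF])

    teds_block = b"\x01TEMP-SENSOR-TEDS"         # stand-in TEDS record
    print(eeprom_write_frame(0x0040, teds_block).hex())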
The serial nature of the masked onset priming effect revisited.
Mousikou, Petroula; Coltheart, Max
2014-01-01
Reading aloud is faster when target words/nonwords are preceded by masked prime words/nonwords that share their first sound with the target (e.g., save-SINK) compared to when primes and targets are unrelated to each other (e.g., farm-SINK). This empirical phenomenon is the masked onset priming effect (MOPE) and is known to be due to serial left-to-right processing of the prime by a sublexical reading mechanism. However, the literature in this domain lacks a critical experiment. It is possible that when primes are real words their orthographic/phonological representations are activated in parallel and holistically during prime presentation, so any phoneme overlap between primes and targets (and not just initial-phoneme overlap) could facilitate target reading aloud. This is the prediction made by the only computational models of reading aloud that are able to simulate the MOPE, namely the DRC1.2.1, CDP+, and CDP++ models. We tested this prediction in the present study and found that initial-phoneme overlap (blip-BEST), but not end-phoneme overlap (flat-BEST), facilitated target reading aloud compared to no phoneme overlap (junk-BEST). These results provide support for a reading mechanism that operates serially and from left to right, yet are inconsistent with all existing computational models of single-word reading aloud.
28-Bit serial word simulator/monitor
NASA Technical Reports Server (NTRS)
Durbin, J. W.
1979-01-01
Modular interface unit transfers data at high speeds along four channels. Device expedites variable-word-length communication between computers. Operation eases exchange of bit information by automatically reformatting coded input data and status information to match requirements of output.
Infrared-Proximity-Sensor Modules For Robot
NASA Technical Reports Server (NTRS)
Parton, William; Wegerif, Daniel; Rosinski, Douglas
1995-01-01
Collision-avoidance system for articulated robot manipulators uses infrared proximity sensors grouped together in array of sensor modules. Sensor modules, called "sensorCells," are distributed-processing board-level products for acquiring data from proximity sensors strategically mounted on robot manipulators. Each sensorCell self-contained and consists of multiple sensing elements, discrete electronics, microcontroller and communications components. Modules connected to central control computer by redundant serial digital communication subsystem including both a serial bus and a multi-drop bus. Detects objects made of various materials at distance of up to 50 cm. For some materials, such as thermal protection system tiles, detection range reduced to approximately 20 cm.
Programmable Pulse-Position-Modulation Encoder
NASA Technical Reports Server (NTRS)
Zhu, David; Farr, William
2006-01-01
A programmable pulse-position-modulation (PPM) encoder has been designed for use in testing an optical communication link. The encoder includes a programmable state machine and an electronic code book that can be updated to accommodate different PPM coding schemes. The encoder includes a field-programmable gate array (FPGA) that is programmed to step through the stored state machine and code book and that drives a custom high-speed serializer circuit board that is capable of generating subnanosecond pulses. The stored state machine and code book can be updated by means of a simple text interface through the serial port of a personal computer.
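The essence of PPM is that each symbol of an M-ary alphabet selects one pulse slot within a frame of M slots, so each frame carries log2(M) bits. A toy sketch of the encoding (illustrative only; the encoder's actual code book and FPGA state machine are not reproduced here):

    def ppm_encode(symbols, M=16):
        # Yield M-slot frames containing a single pulse each.
        for s in symbols:
            frame = [0] * M
            frame[s] = 1            # pulse position carries log2(M) bits
            yield frame

    for frame in ppm_encode([3, 0, 15], M=16):
        print(frame)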
Expanded serial communication capability for the transport systems research vehicle laptop computers
NASA Technical Reports Server (NTRS)
Easley, Wesley C.
1991-01-01
A recent upgrade of the Transport Systems Research Vehicle (TSRV) operated by the Advanced Transport Operating Systems Program Office at the NASA Langley Research Center included installation of a number of Grid 1500 series laptop computers. Each unit is an 80386-based IBM PC clone. RS-232 data busses are needed for TSRV flight research programs, and it has been advantageous to extend the application of the Grids in this area. Use was made of the expansion features of the Grid internal bus to add a user-programmable serial communication channel. Software to allow use of the Grid bus expansion has been written and placed in a Turbo C library for incorporation into applications programs in a transparent manner via function calls. Port setup; interrupt-driven, two-way data transfer; and software flow control are built into the library functions.
Parallel Adaptive Mesh Refinement Library
NASA Technical Reports Server (NTRS)
Mac-Neice, Peter; Olson, Kevin
2005-01-01
Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
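The tree-of-blocks idea can be sketched compactly (illustrative Python; PARAMESH itself is Fortran 90 and its actual data layout differs):

    from dataclasses import dataclass, field

    @dataclass
    class Block:
        level: int                   # refinement level
        origin: tuple                # lower corner in physical space
        size: tuple                  # physical extent of the block
        nxb: int = 8                 # cells per side (logically Cartesian mesh)
        children: list = field(default_factory=list)

        def refine(self, dim=2):
            # Split into 2**dim children: a quad-tree in 2-D, an oct-tree in 3-D.
            half = tuple(s / 2 for s in self.size)
            for k in range(2 ** dim):
                corner = tuple(
                    self.origin[d] + ((k >> d) & 1) * half[d]
                    for d in range(dim)
                )
                self.children.append(Block(self.level + 1, corner, half, self.nxb))

    root = Block(level=0, origin=(0.0, 0.0), size=(1.0, 1.0))
    root.refine(dim=2)               # quad-tree split
    print(len(root.children))        # -> 4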
A novel brain-computer interface based on the rapid serial visual presentation paradigm.
Acqualagna, Laura; Treder, Matthias Sebastian; Schreuder, Martijn; Blankertz, Benjamin
2010-01-01
Most present-day visual brain computer interfaces (BCIs) suffer from the fact that they rely on eye movements, are slow-paced, or feature a small vocabulary. As a potential remedy, we explored a novel BCI paradigm consisting of a central rapid serial visual presentation (RSVP) of the stimuli. It has a large vocabulary and realizes a BCI system based on covert non-spatial selective visual attention. In an offline study, eight participants were presented sequences of rapid bursts of symbols. Two different speeds and two different color conditions were investigated. Robust early visual and P300 components were elicited time-locked to the presentation of the target. Offline classification revealed a mean accuracy of up to 90% for selecting the correct symbol out of 30 possibilities. The results suggest that RSVP-BCI is a promising new paradigm, also for patients with oculomotor impairments.
Parallel Implicit Runge-Kutta Methods Applied to Coupled Orbit/Attitude Propagation
NASA Astrophysics Data System (ADS)
Hatten, Noble; Russell, Ryan P.
2017-12-01
A variable-step Gauss-Legendre implicit Runge-Kutta (GLIRK) propagator is applied to coupled orbit/attitude propagation. Concepts previously shown to improve efficiency in 3DOF propagation are modified and extended to the 6DOF problem, including the use of variable-fidelity dynamics models. The impact of computing the stage dynamics of a single step in parallel is examined using up to 23 threads and 22 associated GLIRK stages; one thread is reserved for an extra dynamics function evaluation used in the estimation of the local truncation error. Efficiency is found to peak for typical examples when using approximately 8 to 12 stages for both serial and parallel implementations. Accuracy and efficiency compare favorably to explicit Runge-Kutta and linear-multistep solvers for representative scenarios. However, linear-multistep methods are found to be more efficient for some applications, particularly in a serial computing environment, or when parallelism can be applied across multiple trajectories.
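For reference, the s-stage implicit Runge-Kutta stage equations whose right-hand-side evaluations can be distributed across threads are, in standard form (not specific to this paper's implementation):

    Y_i = y_n + h \sum_{j=1}^{s} a_{ij} f(t_n + c_j h, Y_j),   i = 1, ..., s
    y_{n+1} = y_n + h \sum_{i=1}^{s} b_i f(t_n + c_i h, Y_i)

Within each sweep of the iterative solve for the stage values Y_i, the s evaluations of f are mutually independent, which is what allows one thread per stage (plus, here, one extra thread for the error-estimation evaluation).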
Parallelization of ARC3D with Computer-Aided Tools
NASA Technical Reports Server (NTRS)
Jin, Haoqiang; Hribar, Michelle; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
A series of efforts have been devoted to investigating methods of porting and parallelizing applications quickly and efficiently for new architectures, such as the SGI Origin 2000 and Cray T3E. This report presents the parallelization of a CFD application, ARC3D, using the computer-aided tools CAPTools. Steps in parallelizing this code and requirements for achieving better performance are discussed. The generated parallel version has achieved reasonably good performance, for example, a speedup of 30 on 36 Cray T3E processors. However, this performance could not be obtained without modification of the original serial code. It is suggested that in many cases improving the serial code and performing necessary code transformations are important parts of the automated parallelization process, although user intervention in many of these parts is still necessary. Nevertheless, development and improvement of useful software tools, such as CAPTools, can help trim down many tedious parallelization details and improve processing efficiency.
Matsumoto, Celso Soiti; Shinoda, Kei; Matsumoto, Harue; Seki, Keisuke; Nagasaka, Eiichiro; Iwata, Takeshi; Mizota, Atsushi
2014-08-05
To compare a conventional cathode-ray tube (CRT) screen to organic light-emitting diode (OLED) and liquid crystal display (LCD) screens as visual stimulators for eliciting multifocal electroretinograms (mfERGs), mfERGs were recorded from seven eyes of seven healthy volunteers (21 ± 2 years). The mfERGs elicited by a conventional CRT screen (S710, Compaq Computer Co.) were compared to those elicited by a studio-grade master OLED monitor (PVM-1741, Sony, Japan) and a conventional LCD (S1721, Flexscan, Eizo Nanao Corp., Japan). The luminance changes of each monitor were measured with a photodiode. CRT, OLED, and LCD screens with a frame frequency of 60 Hz were studied. A hexagonal stimulus array with 61 stimulus elements was created on each monitor. The serial white stimuli of the OLED screen at 60 Hz did not fuse, while those of the LCD screen fused. The amplitudes of P1 and P2 of the first-order kernels did not differ significantly between the CRT and OLED screens, and the P1 amplitude of the first-order kernel elicited by the LCD stimuli was significantly smaller than that elicited by the CRT in all groups of the averaged hexagonal elements. The implicit times were approximately 10 ms longer for almost all components elicited by the LCD screen compared to those elicited by the CRT screen. mfERGs elicited by monitors other than the CRT should be carefully interpreted, especially those elicited by LCD screens. The OLED had good performance, and we conclude that it can replace the CRT as a stimulator for mfERGs; however, collection of normative data is recommended. © 2014 ARVO.
Ultra-compact coherent receiver with serial interface for pluggable transceiver.
Itoh, Toshihiro; Nakajima, Fumito; Ohno, Tetsuichiro; Yamanaka, Shogo; Soma, Shunichi; Saida, Takashi; Nosaka, Hideyuki; Murata, Koichi
2014-09-22
An ultra-compact integrated coherent receiver with a volume of 1.3 cc using a quad-channel transimpedance amplifier (TIA)-IC chip with a serial peripheral interface (SPI) is demonstrated for the first time. The TIA with the SPI and photodiode (PD) bias circuits, a miniature dual polarization optical hybrid, an octal-PD and small optical coupling system enabled the realization of the compact receiver. Measured transmission performance with 32 Gbaud dual-polarization quadrature phase shift keying signal is equivalent to that of the conventional multi-source agreement-based integrated coherent receiver with dual channel TIA-ICs. By comparing the bit-error rate (BER) performance with that under continuous SPI access, we also confirmed that there is no BER degradation caused by SPI interface access. Such an ultra-compact receiver is promising for realizing a new generation of pluggable transceivers.
Probabilistic motor sequence learning in a virtual reality serial reaction time task.
Sense, Florian; van Rijn, Hedderik
2018-01-01
The serial reaction time task is widely used to study learning and memory. The task is traditionally administered by showing target positions on a computer screen and collecting responses using a button box or keyboard. By comparing response times to random or sequenced items or by using different transition probabilities, various forms of learning can be studied. However, this traditional laboratory setting limits the number of possible experimental manipulations. Here, we present a virtual reality version of the serial reaction time task and show that learning effects emerge as expected despite the novel way in which responses are collected. We also show that response times are distributed as expected. The current experiment was conducted in a blank virtual reality room to verify these basic principles. For future applications, the technology can be used to modify the virtual reality environment in any conceivable way, permitting a wide range of previously impossible experimental manipulations.
NASA Astrophysics Data System (ADS)
Pérez, Israel; Ángel Hernández Cuevas, José; Trinidad Elizalde Galindo, José
2018-05-01
We designed and developed a desktop AC susceptometer for the characterization of materials. The system consists of a lock-in amplifier, an AC function generator, a couple of coils, a sample holder, a computer system with custom software written in C++, and an Arduino card coupled to a Bluetooth module. The Arduino/Bluetooth serial interface allows the user to connect to almost any computer and thus avoids the problem of connectivity between the computer and the peripherals, such as the lock-in amplifier and the function generator. The Bluetooth transmitter/receiver used is a commercial device which is robust and fast. These new features reduce the size and increase the versatility of the susceptometer, for it can be used with a simple laptop. To test our instrument, we performed measurements on magnetic materials and show that the system is reliable at both room temperature and cryogenic temperatures (77 K). The instrument is suitable for any physics or engineering laboratory, either for research or academic purposes.
Simulation of LHC events on a million threads
NASA Astrophysics Data System (ADS)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.
2015-12-01
Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the Grid, however, will not double. The HEP community must consider how to bridge this computing gap by targeting larger compute resources and using the available compute resources as efficiently as possible. Argonne's Mira, the fifth fastest supercomputer in the world, can run roughly five times the number of parallel processes that the ATLAS experiment typically uses on the Grid. We ported Alpgen, a serial x86 code, to run as a parallel application under MPI on the Blue Gene/Q architecture. By analysis of the Alpgen code, we reduced the memory footprint to allow running 64 threads per node, utilizing the four hardware threads available per core on the PowerPC A2 processor. Event generation and unweighting, typically run as independent serial phases, are coupled together in a single job in this scenario, reducing intermediate writes to the filesystem. By these optimizations, we have successfully run LHC proton-proton physics event generation at the scale of a million threads, filling two-thirds of Mira.
Implementation and analysis of a Navier-Stokes algorithm on parallel computers
NASA Technical Reports Server (NTRS)
Fatoohi, Raad A.; Grosch, Chester E.
1988-01-01
The results of the implementation of a Navier-Stokes algorithm on three parallel/vector computers are presented. The object of this research is to determine how well, or poorly, a single numerical algorithm would map onto three different architectures. The algorithm is a compact difference scheme for the solution of the incompressible, two-dimensional, time-dependent Navier-Stokes equations. The computers were chosen so as to encompass a variety of architectures. They are the following: the MPP, an SIMD machine with 16K bit serial processors; Flex/32, an MIMD machine with 20 processors; and Cray/2. The implementation of the algorithm is discussed in relation to these architectures and measures of the performance on each machine are given. The basic comparison is among SIMD instruction parallelism on the MPP, MIMD process parallelism on the Flex/32, and vectorization of a serial code on the Cray/2. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.
A hybrid analog-digital phase-locked loop for frequency mode non-contact scanning probe microscopy.
Mehta, M M; Chandrasekhar, V
2014-01-01
Non-contact scanning probe microscopy (SPM) has developed into a powerful technique to image many different properties of samples. The conventional method involves monitoring the amplitude, phase, or frequency of a cantilever oscillating at or near its resonant frequency as it is scanned across the surface of a sample. For high Q factor cantilevers, monitoring the resonant frequency is the preferred method in order to obtain reasonable scan times. This can be done by using a phase-locked-loop (PLL). PLLs can be obtained as commercial integrated circuits, but these do not have the frequency resolution required for SPM. To increase the resolution, all-digital PLLs requiring sophisticated digital signal processors or field programmable gate arrays have also been implemented. We describe here a hybrid analog/digital PLL where most of the components are implemented using discrete analog integrated circuits, but the frequency resolution is provided by a direct digital synthesis chip controlled by a simple peripheral interface controller (PIC) microcontroller. The PLL has excellent frequency resolution and noise, and can be controlled and read by a computer via a universal serial bus connection.
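The role of the direct digital synthesis chip is easiest to see from the standard DDS tuning relation (generic to DDS, not specific to the chip used here): with an N-bit phase accumulator and frequency tuning word FTW,

    f_out = (FTW / 2^N) f_clk,    so the frequency resolution is  \Delta f = f_clk / 2^N.

For example, a 32-bit accumulator clocked at 100 MHz can step its output in increments of about 0.023 Hz, far finer than general-purpose PLL ICs provide.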
A Novel Design of 4-Class BCI Using Two Binary Classifiers and Parallel Mental Tasks
Geng, Tao; Gan, John Q.; Dyson, Matthew; Tsui, Chun SL; Sepulveda, Francisco
2008-01-01
A novel 4-class single-trial brain computer interface (BCI) based on two (rather than four or more) binary linear discriminant analysis (LDA) classifiers is proposed, which is called a “parallel BCI.” Unlike other BCIs where mental tasks are executed and classified in a serial way one after another, the parallel BCI uses properly designed parallel mental tasks that are executed on both sides of the subject body simultaneously, which is the main novelty of the BCI paradigm used in our experiments. Each of the two binary classifiers only classifies the mental tasks executed on one side of the subject body, and the results of the two binary classifiers are combined to give the result of the 4-class BCI. Data was recorded in experiments with both real movement and motor imagery in 3 able-bodied subjects. Artifacts were not detected or removed. Offline analysis has shown that, in some subjects, the parallel BCI can generate a higher accuracy than a conventional 4-class BCI, although both of them have used the same feature selection and classification algorithms. PMID:18584040
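The combination rule is simple: the two binary decisions jointly index one of 2 × 2 = 4 classes. An illustrative sketch with scikit-learn LDA on synthetic stand-in features (not the study's EEG pipeline):

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))           # stand-in feature vectors
    y_left = (X[:, 0] > 0).astype(int)      # label of the left-side task
    y_right = (X[:, 1] > 0).astype(int)     # label of the right-side task

    lda_left = LinearDiscriminantAnalysis().fit(X, y_left)
    lda_right = LinearDiscriminantAnalysis().fit(X, y_right)

    # Two binary outputs -> one of four classes.
    four_class = 2 * lda_left.predict(X) + lda_right.predict(X)
    print(np.bincount(four_class))          # counts per combined class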
CWDM for very-short-reach and optical-backplane interconnections
NASA Astrophysics Data System (ADS)
Laha, Michael J.
2002-06-01
Coarse Wavelength Division Multiplexing (CWDM) provides access to next-generation optical interconnect data rates by utilizing conventional electro-optical components that are widely available in the market today. This is achieved through the use of CWDM multiplexers and demultiplexers that integrate commodity-type active components, lasers and photodiodes, into small optical subassemblies. In contrast to dense wavelength division multiplexing (DWDM), in which multiple serial data streams are combined to create aggregate data pipes perhaps 100s of gigabits wide, CWDM uses multiple laser sources contained in one module to create a serial-equivalent data stream. For example, four 2.5 Gb/s lasers are multiplexed to create a 10 Gb/s data pipe. The advantages of CWDM over traditional serial optical interconnects include lower module power consumption, smaller packaging, and a superior electrical interface. This discussion will detail the concept of CWDM and design parameters that are considered when productizing a CWDM module into an industry-standard optical interconnect. Additionally, a scalable parallel CWDM hybrid architecture will be described that allows the transport of large amounts of data from rack to rack in an economical fashion. This particular solution is targeted at solving optical backplane bottleneck problems predicted for the next generation of terabit and petabit routers.
Boxerman, Jerrold L; Ellingson, Benjamin M; Jeyapalan, Suriya; Elinzano, Heinrich; Harris, Robert J; Rogg, Jeffrey M; Pope, Whitney B; Safran, Howard
2017-06-01
For patients with high-grade glioma on clinical trials it is important to accurately assess time of disease progression. However, differentiation between pseudoprogression (PsP) and progressive disease (PD) is unreliable with standard magnetic resonance imaging (MRI) techniques. Dynamic susceptibility contrast perfusion MRI (DSC-MRI) can measure relative cerebral blood volume (rCBV) and may help distinguish PsP from PD. A subset of patients with high-grade glioma on a phase II clinical trial with temozolomide, paclitaxel poliglumex, and concurrent radiation were assessed. Nine patients (3 grade III, 6 grade IV), with a total of 19 enhancing lesions demonstrating progressive enhancement (≥25% increase from nadir) on postchemoradiation conventional contrast-enhanced MRI, had serial DSC-MRI. Mean leakage-corrected rCBV within enhancing lesions was computed for all postchemoradiation time points. Of the 19 progressively enhancing lesions, 10 were classified as PsP and 9 as PD by biopsy/surgery or serial enhancement patterns during interval follow-up MRI. Mean rCBV at initial progressive enhancement did not differ significantly between PsP and PD (2.35 vs. 2.17; P=0.67). However, change in rCBV at first subsequent follow-up (-0.84 vs. 0.84; P=0.001) and the overall linear trend in rCBV after initial progressive enhancement (negative vs. positive slope; P=0.04) differed significantly between PsP and PD. Longitudinal trends in rCBV may be more useful than absolute rCBV in distinguishing PsP from PD in chemoradiation-treated high-grade gliomas with DSC-MRI. Further studies of DSC-MRI in high-grade glioma as a potential technique for distinguishing PsP from PD are indicated.
ERIC Educational Resources Information Center
Fezzani, K.; Albinet, C.; Thon, B.; Marquie, J. -C.
2010-01-01
The present study investigated the extent to which the impact of motor difficulty on the acquisition of a computer task varies as a function of age. Fourteen young and 14 older participants performed 352 sequences of 10 serial pointing movements with a wireless pen on a digitiser tablet. A conditional probabilistic structure governed the…
A Serial Bus Architecture for Parallel Processing Systems
1986-09-01
The wider the communication path, the more pins are needed to effect the data transfer. As Integrated Circuits grow in computational power, more communication capacity is needed, pushing... chip.
High-Speed Systolic Array Testbed.
1987-10-01
applications since the concept was introduced by H.T. Kung in 1978. This highly parallel architecture of nearest-neighbor data communication and... must be addressed. For instance, should bit-serial or bit-parallel computation be utilized? Does the dynamic range of the candidate applications or... numerical stability of the algorithms used require computations in fixed-point and integer format or the architecturally more complex and slower floating
Serial recall of colors: Two models of memory for serial order applied to continuous visual stimuli.
Peteranderl, Sonja; Oberauer, Klaus
2018-01-01
This study investigated the effects of serial position and temporal distinctiveness on serial recall of simple visual stimuli. Participants observed lists of five colors presented at varying, unpredictably ordered interitem intervals, and their task was to reproduce the colors in their order of presentation by selecting colors on a continuous-response scale. To control for the possibility of verbal labeling, articulatory suppression was required in one of two experimental sessions. The predictions were derived through simulation from two computational models of serial recall: SIMPLE represents the class of temporal-distinctiveness models, whereas SOB-CS represents event-based models. According to temporal-distinctiveness models, items that are temporally isolated within a list are recalled more accurately than items that are temporally crowded. In contrast, event-based models assume that the time intervals between items do not affect recall performance per se, although free time following an item can improve memory for that item because of extended time for the encoding. The experimental and the simulated data were fit to an interference measurement model to measure the tendency to confuse items with other items nearby on the list (the locality constraint) in people as well as in the models. The continuous-reproduction performance showed a pronounced primacy effect with no recency, as well as some evidence for transpositions obeying the locality constraint. Though not entirely conclusive, this evidence favors event-based models over a role for temporal distinctiveness. There was also a strong detrimental effect of articulatory suppression, suggesting that verbal codes can be used to support serial-order memory of simple visual stimuli.
ERIC Educational Resources Information Center
Elsweiler, John A., Jr.; And Others
1990-01-01
Presents summaries of 12 papers presented at the 1990 Computers in Libraries Conference. Topics discussed include online searching; microcomputer-based serials management; microcomputer-based workstations; online public access catalogs (OPACs); multitype library networking; CD-ROM searches; locally mounted online databases; collection evaluation;…
Supersystems: OCLC Continues to Innovate.
ERIC Educational Resources Information Center
Jenkins, Judith
1983-01-01
Activities of Online Computer Library Center, a nonprofit corporation developed in 1967 that provides a cooperative, computerized network, are discussed. Member, staff, and financial growth; unique subsystems (cataloging, acquisitions, serials control, interlibrary loan, retrospective conversion); problems with terminals, taxes, and competitive…
The study of early human embryos using interactive 3-dimensional computer reconstructions.
Scarborough, J; Aiton, J F; McLachlan, J C; Smart, S D; Whiten, S C
1997-07-01
Tracings of serial histological sections from 4 human embryos at different Carnegie stages were used to create 3-dimensional (3D) computer models of the developing heart. The models were constructed using commercially available software developed for graphic design and the production of computer generated virtual reality environments. They are available as interactive objects which can be downloaded via the World Wide Web. This simple method of 3D reconstruction offers significant advantages for understanding important events in morphological sciences.
NASA Astrophysics Data System (ADS)
Cauquil, Jean-Marc; Martin, Jean-Yves; Bruins, Peter; Benschop, A. A. J.
2003-01-01
The lifetime tests realised on the serial production of Rotary Monoblock RM2 coolers show a measured MTTF of 4900 hours. The conventional test profile applied to these coolers is representative of operation in a typical application. The duration of such lifetime tests is very long: the results of a design change and its impact on MTTF are available only several months after the assembly of the prototypes. We decided to develop a test method to reduce the duration of these lifetime tests. The principle is to define a test protocol that is easy to implement and more severe than the typical application profile, in order to accelerate the lifetime tests. The accelerated test profile was defined and tested successfully. This new technique allows us to reduce the duration of lifetime tests and thus the costs involved. As a consequence, we decided to screen our production with this accelerated test, which allows us to continuously monitor the quality of our serial products and to collect additional data. This paper presents the results of lifetime tests performed on RM2 coolers according to the conventional and accelerated test profiles, as well as the first results on the new RM2 design, which show a calculated MTTF of 10000 hours.
Algebraic multigrid domain and range decomposition (AMG-DD / AMG-RD)*
Bank, R.; Falgout, R. D.; Jones, T.; ...
2015-10-29
In modern large-scale supercomputing applications, algebraic multigrid (AMG) is a leading choice for solving matrix equations. However, the high cost of communication relative to that of computation is a concern for the scalability of traditional implementations of AMG on emerging architectures. This paper introduces two new algebraic multilevel algorithms, algebraic multigrid domain decomposition (AMG-DD) and algebraic multigrid range decomposition (AMG-RD), that replace traditional AMG V-cycles with a fully overlapping domain decomposition approach. While the methods introduced here are similar in spirit to the geometric methods developed by Brandt and Diskin [Multigrid solvers on decomposed domains, in Domain Decomposition Methods in Science and Engineering, Contemp. Math. 157, AMS, Providence, RI, 1994, pp. 135-155], Mitchell [Electron. Trans. Numer. Anal., 6 (1997), pp. 224-233], and Bank and Holst [SIAM J. Sci. Comput., 22 (2000), pp. 1411-1443], they differ primarily in that they are purely algebraic: AMG-RD and AMG-DD trade communication for computation by forming global composite “grids” based only on the matrix, not the geometry. (As is the usual AMG convention, “grids” here should be taken only in the algebraic sense, regardless of whether or not they correspond to any geometry.) Another important distinguishing feature of AMG-RD and AMG-DD is their novel residual communication process that enables effective parallel computation on composite grids, avoiding the all-to-all communication costs of the geometric methods. The main purpose of this paper is to study the potential of these two algebraic methods as possible alternatives to existing AMG approaches for future parallel machines. As a result, this paper develops some theoretical properties of these methods and reports on serial numerical tests of their convergence properties over a spectrum of problem parameters.
Caron, Alexis; Lelong, Christine; Bartels, T; Dorchies, O; Gury, T; Chalier, Catherine; Benning, Véronique
2015-08-01
As a general practice in rodent toxicology studies, satellite animals are used for toxicokinetic determinations because of the potential impact of serial blood sampling on toxicological endpoints. Besides toxicological and toxicokinetic determinations, blood samples obtained longitudinally from the same animal may be used for the assessment of additional parameters (e.g., metabolism, pharmacodynamics, safety biomarkers) to maximize the information that can be deduced from rodents. We investigated whether removal of up to 6 × 200 μL of blood over 24 h can be applied in GLP rat toxicology studies without affecting the scientific outcome. Eight-week-old female rats (200-300 g) were dosed for up to 1 month with a standard vehicle and subjected or not (controls) to serial blood sampling for sham toxicokinetic/ancillary determinations, using miniaturized methods allowing collection of 6 × 50, 100, or 200 μL over 24 h. In-life endpoints, clinical pathology parameters, and histopathology of organs sensitive to blood volume reduction were evaluated at several time points after completion of sampling. In sampled rats, minimal and reversible changes in red blood cell mass (maximally 15%) and subtle variations in liver enzymes, fibrinogen, and neutrophils were not associated with any organ/tissue macroscopic or microscopic correlate. Serial blood sampling (up to 6 × 200 μL over 24 h) is compatible with the assessment of standard toxicity endpoints in adult rats. Copyright © 2015 Elsevier Inc. All rights reserved.
Horstmann, Heinz; Körber, Christoph; Sätzler, Kurt; Aydin, Daniel; Kuner, Thomas
2012-01-01
High resolution, three-dimensional (3D) representations of cellular ultrastructure are essential for structure function studies in all areas of cell biology. While limited subcellular volumes have been routinely examined using serial section transmission electron microscopy (ssTEM), complete ultrastructural reconstructions of large volumes, entire cells or even tissue are difficult to achieve using ssTEM. Here, we introduce a novel approach combining serial sectioning of tissue with scanning electron microscopy (SEM) using a conductive silicon wafer as a support. Ribbons containing hundreds of 35 nm thick sections can be generated and imaged on the wafer at a lateral pixel resolution of 3.7 nm by recording the backscattered electrons with the in-lens detector of the SEM. The resulting electron micrographs are qualitatively comparable to those obtained by conventional TEM. S3EM images of the same region of interest in consecutive sections can be used for 3D reconstructions of large structures. We demonstrate the potential of this approach by reconstructing a 31.7 µm³ volume of a calyx of Held presynaptic terminal. The approach introduced here, Serial Section SEM (S3EM), for the first time provides the possibility to obtain 3D ultrastructure of large volumes with high resolution and to selectively and repetitively home in on structures of interest. S3EM accelerates process duration, is amenable to full automation and can be implemented with standard instrumentation.
Bae, Dae Kyung; Song, Sang Jun; Kim, Kang Il; Hur, Dong; Jeong, Ho Yeon
2016-03-01
The purpose of the present study was to compare the clinical and radiographic results and survival rates between computer-assisted and conventional closing wedge high tibial osteotomies (HTOs). Data from a consecutive cohort comprised of 75 computer-assisted HTOs and 75 conventional HTOs were retrospectively reviewed. The Knee Society knee and function scores, Hospital for Special Surgery (HSS) score, and femorotibial angle (FTA) were compared between the two groups. Survival rates were also compared, with procedure failure as the endpoint. The knee and function scores at one year postoperatively were slightly better in the computer-assisted group than in the conventional group (90.1 vs. 86.1 and 82.0 vs. 76.0, respectively). The HSS scores at one year postoperatively were slightly better for the computer-assisted HTOs than for the conventional HTOs (89.5 vs. 81.8). The inlier rate of the postoperative FTA was higher in the computer-assisted group than in the conventional HTO group (88.0% vs. 58.7%), and the mean postoperative FTA was greater in the computer-assisted group than in the conventional HTO group (valgus 9.0° vs. valgus 7.6°, p<0.001). The five- and 10-year survival rates were 97.1% and 89.6%, respectively. No difference was detected in nine-year survival rates (p=0.369) between the two groups, although the clinical and radiographic results were better in the computer-assisted group than in the conventional HTO group. Mid-term survival rates did not differ between computer-assisted and conventional HTOs. A comparative analysis of longer-term survival rates is required to demonstrate the long-term benefit of computer-assisted HTO. III. Copyright © 2015 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeVolpi, A.; Palm, R.
CFE poses a number of verification challenges that could be met in part by an accurate and low-cost means of aiding in accountability of treaty-limited equipment. Although the treaty as signed does not explicitly call for the use of tags, there is a provision for recording "serial numbers" and placing "special marks" on equipment subject to reduction. There are approximately 150,000 residual items to be tracked for CFE-I, about half for each alliance of state parties. These highly mobile items are subject to complex treaty limitations: deployment limits and zones, ceilings and subceilings, holdings and allowances. There are controls and requirements for storage, conversion, and reduction. In addition, there are national security concerns regarding modernization and mobilization capability. As written into the treaty, a heavy reliance has been placed on human inspectors for CFE verification. Inspectors will mostly make visual observations and photographs as the means of monitoring compliance; these observations can be recorded by handwriting or keyed into a laptop computer. CFE is now less a treaty between two alliances than a treaty among 22 state parties, with inspection data and reports to be shared with each party in the official languages designated by CSCE. One of the potential roles for bar-coded tags would be to provide a universal, exchangeable, computer-compatible language for tracking TLE. 10 figs.
Hybrid VLSI/QCA Architecture for Computing FFTs
NASA Technical Reports Server (NTRS)
Fijany, Amir; Toomarian, Nikzad; Modarres, Katayoon; Spotnitz, Matthew
2003-01-01
A data-processor architecture that would incorporate elements of both conventional very-large-scale integrated (VLSI) circuitry and quantum-dot cellular automata (QCA) has been proposed to enable the highly parallel and systolic computation of fast Fourier transforms (FFTs). The proposed circuit would complement the QCA-based circuits described in several prior NASA Tech Briefs articles, namely Implementing Permutation Matrices by Use of Quantum Dots (NPO-20801), Vol. 25, No. 10 (October 2001), page 42; Compact Interconnection Networks Based on Quantum Dots (NPO-20855), Vol. 27, No. 1 (January 2003), page 32; and Bit-Serial Adder Based on Quantum Dots (NPO-20869), Vol. 27, No. 1 (January 2003), page 35. The cited prior articles described the limitations of very-large-scale integrated (VLSI) circuitry and the major potential advantage afforded by QCA. To recapitulate: In a VLSI circuit, signal paths that are required not to interact with each other must not cross in the same plane. In contrast, for reasons too complex to describe in the limited space available for this article, suitably designed and operated QCA-based signal paths that are required not to interact with each other can nevertheless be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes.
Marolf, Angela; Blaik, Margaret; Ackerman, Norman; Watson, Elizabeth; Gibson, Nicole; Thompson, Margret
2008-01-01
The role of digital imaging is increasing as these systems become more affordable and accessible. Advantages of computed radiography compared with conventional film/screen combinations include improved contrast resolution and postprocessing capabilities. Computed radiography's spatial resolution is inferior to conventional radiography; however, this limitation is considered clinically insignificant. This study prospectively compared digital imaging and conventional radiography in detecting small-volume pneumoperitoneum. Twenty cadaver dogs (15-30 kg) were injected intra-abdominally with sequential volumes of 0.25, 0.25, and 0.5 ml of air (1 ml total) and radiographed sequentially using computed and conventional radiographic technologies. Three radiologists independently evaluated the images, and receiver operating characteristic (ROC) analysis compared the two imaging modalities. There was no statistical difference between computed and conventional radiography in detecting free abdominal air, but overall computed radiography was relatively more sensitive based on ROC analysis. Computed radiographic images consistently and significantly demonstrated a minimal amount of 0.5 ml of free air based on ROC analysis. However, no minimal air amount was consistently or significantly detected with conventional film. Readers were more likely to detect free air on lateral computed images than on the other projections, with no significant increase in sensitivity between film/screen projections. Further studies are indicated to determine the differences, or lack thereof, between various digital imaging systems and conventional film/screen systems.
Acquisition of Real-Time Operation Analytics for an Automated Serial Sectioning System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madison, Jonathan D.; Underwood, O. D.; Poulter, Gregory A.; ...
2017-03-22
Mechanical serial sectioning is a highly repetitive technique employed in metallography for the rendering of 3D reconstructions of microstructure. While alternate techniques such as ultrasonic detection, micro-computed tomography, and focused ion beam milling have progressed much in recent years, few alternatives provide equivalent opportunities for comparatively high resolutions over significantly sized cross-sectional areas and volumes. To that end, the introduction of automated serial sectioning systems has greatly heightened repeatability and increased data collection rates while diminishing opportunity for mishandling and other user-introduced errors. Unfortunately, even among current, state-of-the-art automated serial sectioning systems, challenges in data collection have not been fully eradicated. Therefore, this paper highlights two specific advances to assist in this area: a non-contact laser triangulation method for assessment of material removal rates and a newly developed graphical user interface providing real-time monitoring of experimental progress. Furthermore, both are shown to be helpful in the rapid identification of anomalies and interruptions, while also providing comparable and less error-prone measures of removal rate over the course of these long-term, challenging, and innately destructive characterization experiments.
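A minimal sketch of the removal-rate bookkeeping that such non-contact laser triangulation enables, assuming the system logs one block-face height reading per section; the function, target rate, and tolerance are hypothetical illustrations, not the paper's implementation:

    import numpy as np

    def removal_rates(section_ids, heights_um, target_um=3.0, tol_um=0.5):
        """Per-section material removal estimated from successive laser
        triangulation height readings; flag sections whose removal deviates
        from the target, e.g., after a handling anomaly or interruption."""
        removed = -np.diff(np.asarray(heights_um, dtype=float))
        flagged = [sid for sid, r in zip(section_ids[1:], removed)
                   if abs(r - target_um) > tol_um]
        return removed, flagged

    ids = [100, 101, 102, 103, 104]
    heights = [500.0, 497.1, 494.0, 489.0, 486.1]  # block-face height (um)
    rates, anomalies = removal_rates(ids, heights)
    print(rates)       # ~3 um per section, with one outlier
    print(anomalies)   # section(s) to inspect in the real-time monitor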
Biological and Computational Modeling of Mammographic Density and Stromal Patterning
2010-07-01
[Flattened table residue: a cytology scoring rubric grading cell arrangement, nuclear overlap, clustering, cohesion, and nucleoli on a scale of 1-4; the original layout was lost in extraction.] We performed serial RPFNA…
Confocal microscopy imaging of solid tissue
Confocal laser scanning microscopy (CLSM) is a technique that is capable of generating serial sections of whole-mount tissue and then reassembling the computer acquired images as a virtual 3-dimensional structure. In many ways CLSM offers an alternative to traditional sectioning ...
NASA Technical Reports Server (NTRS)
Fijany, Amir; Djouani, Karim; Fried, George; Pontnau, Jean
1997-01-01
In this paper, a new factorization technique is presented for computing the inverse of the mass matrix and the operational space mass matrix, as they arise in the implementation of the operational space control scheme.
Seruya, Mitchel; Fisher, Mark; Rodriguez, Eduardo D
2013-11-01
There has been rising interest in computer-aided design/computer-aided manufacturing for preoperative planning and execution of osseous free flap reconstruction. The purpose of this study was to compare outcomes between computer-assisted and conventional fibula free flap techniques for craniofacial reconstruction. A two-center, retrospective review was carried out on patients who underwent fibula free flap surgery for craniofacial reconstruction from 2003 to 2012. Patients were categorized by the type of reconstructive technique: conventional (between 2003 and 2009) or computer-aided design/computer-aided manufacturing (from 2010 to 2012). Demographics, surgical factors, and perioperative and long-term outcomes were compared. A total of 68 patients underwent microsurgical craniofacial reconstruction: 58 conventional and 10 computer-aided design and manufacturing fibula free flaps. By demographics, patients undergoing the computer-aided design/computer-aided manufacturing method were significantly older and had a higher rate of radiotherapy exposure compared with conventional patients. Intraoperatively, the median number of osteotomies was significantly higher (2.0 versus 1.0, p=0.002) and the median ischemia time was significantly shorter (120 minutes versus 170 minutes, p=0.004) for the computer-aided design/computer-aided manufacturing technique compared with conventional techniques; operative times were shorter for patients undergoing the computer-aided design/computer-aided manufacturing technique, although this did not reach statistical significance. Perioperative and long-term outcomes were equivalent for the two groups, notably, hospital length of stay, recipient-site infection, partial and total flap loss, and rate of soft-tissue and bony tissue revisions. Microsurgical craniofacial reconstruction using a computer-assisted fibula flap technique yielded significantly shorter ischemia times amidst a higher number of osteotomies compared with conventional techniques. Therapeutic, III.
Parallelisation study of a three-dimensional environmental flow model
NASA Astrophysics Data System (ADS)
O'Donncha, Fearghal; Ragnoli, Emanuele; Suits, Frank
2014-03-01
There are many simulation codes in the geosciences that are serial and cannot take advantage of the parallel computational resources commonly available today. One model important for our work in coastal ocean current modelling is EFDC, a Fortran 77 code configured for optimal deployment on vector computers. In order to take advantage of our cache-based, blade computing system we restructured EFDC from serial to parallel, thereby allowing us to run existing models more quickly, and to simulate larger and more detailed models that were previously impractical. Since the source code for EFDC is extensive and involves detailed computation, it is important to do such a port in a manner that limits changes to the files, while achieving the desired speedup. We describe a parallelisation strategy involving surgical changes to the source files to minimise error-prone alteration of the underlying computations, while allowing load-balanced domain decomposition for efficient execution on a commodity cluster. The use of conjugate gradient posed particular challenges, because its implicit non-local communication hinders standard domain partitioning schemes; a number of techniques are discussed to address this in a feasible, computationally efficient manner. The parallel implementation demonstrates good scalability in combination with a novel domain partitioning scheme that specifically handles the mixed water/land regions commonly found in coastal simulations. The approach presented here represents a practical methodology to rejuvenate legacy code on a commodity blade cluster with reasonable effort; our solution has direct application to other similar codes in the geosciences.
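A minimal sketch of the load-balancing idea for mixed water/land domains, assuming a one-dimensional decomposition along columns of a 2D land mask: partition boundaries are placed so that each rank owns roughly the same number of wet cells rather than the same number of columns. This illustrates the general technique, not the EFDC code itself:

    import numpy as np

    def balanced_columns(water_mask, n_ranks):
        """Split the domain into column strips with ~equal water-cell counts.
        water_mask: 2D boolean array, True where the cell is wet."""
        per_col = water_mask.sum(axis=0)          # water cells per column
        cum = np.cumsum(per_col)
        targets = cum[-1] * (np.arange(1, n_ranks) / n_ranks)
        cuts = np.searchsorted(cum, targets) + 1  # column indices of cuts
        return np.split(np.arange(water_mask.shape[1]), cuts)

    # Toy coastal domain: the left third is land, so a naive equal-width
    # split would leave rank 0 nearly idle.
    mask = np.ones((50, 90), dtype=bool)
    mask[:, :30] = False
    for r, cols in enumerate(balanced_columns(mask, 3)):
        print(f"rank {r}: columns {cols[0]}..{cols[-1]}")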
NASA Astrophysics Data System (ADS)
Delafontaine-Martel, P.; Lefebvre, J.; Damseh, R.; Castonguay, A.; Tardif, P.; Lesage, F.
2018-02-01
In this study, an automated serial two-photon microscope was used to image fluorescent-gelatin-filled rodent brains in 3D. A method to compute vascular density using automatic segmentation was combined with coregistration techniques to build group-level vasculature metrics. By studying the medial prefrontal cortex and the hippocampal formation of three age groups (2, 4.5, and 8 months old), we compared vascular density between wild-type (WT) and Alzheimer's disease model transgenic (APP/PS1) brains. We observe a loss of vascular density with ageing and propose further analysis to confirm our results.
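A minimal sketch of a group-level vascular density measure of the kind described, assuming the vasculature has already been segmented into a binary volume and a coregistered region of interest is given as a mask; data and names are illustrative:

    import numpy as np

    def vascular_density(vessel_seg, region_mask):
        """Fraction of voxels inside a region that are vessel, from a binary
        segmentation of the fluorescent-gel-filled vasculature."""
        region = region_mask.astype(bool)
        return vessel_seg[region].mean()

    # Compare a region across age groups / genotypes after coregistration
    rng = np.random.default_rng(0)
    seg = rng.random((64, 64, 64)) < 0.04         # toy 4% vessel volume
    roi = np.zeros((64, 64, 64), dtype=bool)
    roi[16:48, 16:48, 16:48] = True               # e.g., hippocampal ROI
    print(f"vascular density: {vascular_density(seg, roi):.3f}")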
Data compression using Chebyshev transform
NASA Technical Reports Server (NTRS)
Cheng, Andrew F. (Inventor); Hawkins, III, S. Edward (Inventor); Nguyen, Lillian (Inventor); Monaco, Christopher A. (Inventor); Seagrave, Gordon G. (Inventor)
2007-01-01
The present invention is a method, system, and computer program product for implementation of a capable, general purpose compression algorithm that can be engaged on the fly. This invention has particular practical application with time-series data, and more particularly, time-series data obtained from a spacecraft, or similar situations where cost, size, and/or power limitations are prevalent, although it is not limited to such applications. It is also particularly applicable to the compression of serial data streams and works in one, two, or three dimensions. The original input data is approximated by Chebyshev polynomials, achieving very high compression ratios on serial data streams with minimal loss of scientific information.
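A minimal sketch of the general idea of Chebyshev-based compression of a serial data stream, assuming each fixed-length block is approximated by a truncated Chebyshev fit and only the leading coefficients are stored; the block length and order are illustrative and unrelated to the patented implementation:

    import numpy as np
    from numpy.polynomial import chebyshev as C

    def compress_block(samples, order=15):
        """Fit a Chebyshev series to one block of time-series samples and
        keep order+1 coefficients (lossy; high ratios for smooth data)."""
        x = np.linspace(-1.0, 1.0, len(samples))
        return C.chebfit(x, samples, deg=order)

    def decompress_block(coeffs, n):
        return C.chebval(np.linspace(-1.0, 1.0, n), coeffs)

    t = np.linspace(0.0, 1.0, 256)
    block = np.sin(2 * np.pi * 3 * t) + 0.1 * t   # smooth telemetry-like block
    coeffs = compress_block(block)                # 256 samples -> 16 numbers
    err = np.max(np.abs(decompress_block(coeffs, len(block)) - block))
    print(f"compression 256 -> {coeffs.size}, max abs error {err:.3e}")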
Bit-Serial Adder Based on Quantum Dots
NASA Technical Reports Server (NTRS)
Fijany, Amir; Toomarian, Nikzad; Modarress, Katayoon; Spotnitz, Mathew
2003-01-01
A proposed integrated circuit based on quantum-dot cellular automata (QCA) would function as a bit-serial adder. This circuit would serve as a prototype building block for demonstrating the feasibility of quantum-dot computing and for the further development of increasingly complex and increasingly capable quantum-dot computing circuits. QCA-based bit-serial adders would be especially useful in that they would enable the development of highly parallel and systolic processors for implementing fast Fourier, cosine, Hartley, and wavelet transforms. The proposed circuit would complement the QCA-based circuits described in "Implementing Permutation Matrices by Use of Quantum Dots" (NPO-20801), NASA Tech Briefs, Vol. 25, No. 10 (October 2001), page 42 and "Compact Interconnection Networks Based on Quantum Dots" (NPO-20855), which appears elsewhere in this issue. Those articles described the limitations of very-large-scale-integrated (VLSI) circuitry and the major potential advantage afforded by QCA. To recapitulate: In a VLSI circuit, signal paths that are required not to interact with each other must not cross in the same plane. In contrast, for reasons too complex to describe in the limited space available for this article, suitably designed and operated QCA-based signal paths that are required not to interact with each other can nevertheless be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes. To enable a meaningful description of the proposed bit-serial adder, it is necessary to further recapitulate the description of a quantum-dot cellular automaton from the first-mentioned prior article: A quantum-dot cellular automaton contains four quantum dots positioned at the corners of a square cell. The cell contains two extra mobile electrons that can tunnel (in the quantum-mechanical sense) between neighboring dots within the cell. The Coulomb repulsion between the two electrons tends to make them occupy antipodal dots in the cell. For an isolated cell, there are two energetically equivalent arrangements (denoted polarization states) of the extra electrons. The cell polarization is used to encode binary information. Because the polarization of a nonisolated cell depends on Coulomb-repulsion interactions with neighboring cells, universal logic gates and binary wires could be constructed, in principle, by arraying QCA of suitable design in suitable patterns. Again, for reasons too complex to describe here, in order to ensure accuracy and timeliness of the output of a QCA array, it is necessary to resort to an adiabatic switching scheme in which the QCA array is divided into subarrays, each controlled by a different phase of a multiphase clock signal. In this scheme, each subarray is given time to perform its computation, then its state is frozen by raising its inter-dot potential barriers and its output is fed as the input to the successor subarray. The successor subarray is kept in an unpolarized state so it does not influence the calculation of the preceding subarray. Such a clocking scheme is consistent with pipeline computation in the sense that each different subarray can perform a different part of an overall computation. In other words, QCA arrays are inherently suitable for pipeline and, moreover, systolic computations.
This sequential or pipeline aspect of QCA would be utilized in the proposed bit-serial adders.
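A minimal behavioral sketch of what a bit-serial adder computes, independent of the QCA realization: bits arrive least-significant first, one per clock period, and a single full adder plus a carry register produce the sum stream:

    def bit_serial_add(a_bits, b_bits):
        """Add two equal-length bit streams, LSB first, one bit per 'clock'.
        Only one full adder and one carry flip-flop are needed, which is why
        the structure suits systolic, pipelined QCA arrays."""
        carry = 0
        out = []
        for a, b in zip(a_bits, b_bits):
            s = a ^ b ^ carry                     # full-adder sum bit
            carry = (a & b) | (carry & (a ^ b))   # full-adder carry-out
            out.append(s)
        return out + [carry]                      # final carry as extra MSB

    # 11 + 6 = 17, streams given LSB first (11 = 1011b, 6 = 0110b)
    print(bit_serial_add([1, 1, 0, 1], [0, 1, 1, 0]))  # [1, 0, 0, 0, 1] = 10001b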
Naitow, Hisashi; Matsuura, Yoshinori; Tono, Kensuke; Joti, Yasumasa; Kameshima, Takashi; Hatsui, Takaki; Yabashi, Makina; Tanaka, Rie; Tanaka, Tomoyuki; Sugahara, Michihiro; Kobayashi, Jun; Nango, Eriko; Iwata, So; Kunishima, Naoki
2017-08-01
Serial femtosecond crystallography (SFX) with an X-ray free-electron laser is used for the structural determination of proteins from a large number of microcrystals at room temperature. To examine the feasibility of pharmaceutical applications of SFX, a ligand-soaking experiment using thermolysin microcrystals has been performed using SFX. The results were compared with those from a conventional experiment with synchrotron radiation (SR) at 100 K. A protein-ligand complex structure was successfully obtained from an SFX experiment using microcrystals soaked with a small-molecule ligand; both oil-based and water-based crystal carriers gave essentially the same results. In a comparison of the SFX and SR structures, clear differences were observed in the unit-cell parameters, in the alternate conformation of side chains, in the degree of water coordination and in the ligand-binding mode.
Broadcasting collective operation contributions throughout a parallel computer
Faraj, Ahmad [Rochester, MN]
2012-02-21
Methods, systems, and products are disclosed for broadcasting collective operation contributions throughout a parallel computer. The parallel computer includes a plurality of compute nodes connected together through a data communications network. Each compute node has a plurality of processors for use in collective parallel operations on the parallel computer. Broadcasting collective operation contributions throughout a parallel computer according to embodiments of the present invention includes: transmitting, by each processor on each compute node, that processor's collective operation contribution to the other processors on that compute node using intra-node communications; and transmitting on a designated network link, by each processor on each compute node according to a serial processor transmission sequence, that processor's collective operation contribution to the other processors on the other compute nodes using inter-node communications.
Wu, Zheyang; Yang, Chun; Tang, Dalin
2011-06-01
It has been hypothesized that mechanical risk factors may be used to predict future atherosclerotic plaque rupture. Truly predictive methods for plaque rupture and methods to identify the best predictor(s) from all the candidates are lacking in the literature. A novel combination of computational and statistical models based on serial magnetic resonance imaging (MRI) was introduced to quantify sensitivity and specificity of mechanical predictors to identify the best candidate for plaque rupture site prediction. Serial in vivo MRI data of carotid plaque from one patient was acquired with follow-up scan showing ulceration. 3D computational fluid-structure interaction (FSI) models using both baseline and follow-up data were constructed and plaque wall stress (PWS) and strain (PWSn) and flow maximum shear stress (FSS) were extracted from all 600 matched nodal points (100 points per matched slice, baseline matching follow-up) on the lumen surface for analysis. Each of the 600 points was marked "ulcer" or "nonulcer" using follow-up scan. Predictive statistical models for each of the seven combinations of PWS, PWSn, and FSS were trained using the follow-up data and applied to the baseline data to assess their sensitivity and specificity using the 600 data points for ulcer predictions. Sensitivity of prediction is defined as the proportion of the true positive outcomes that are predicted to be positive. Specificity of prediction is defined as the proportion of the true negative outcomes that are correctly predicted to be negative. Using probability 0.3 as a threshold to infer ulcer occurrence at the prediction stage, the combination of PWS and PWSn provided the best predictive accuracy with (sensitivity, specificity) = (0.97, 0.958). Sensitivity and specificity given by PWS, PWSn, and FSS individually were (0.788, 0.968), (0.515, 0.968), and (0.758, 0.928), respectively. The proposed computational-statistical process provides a novel method and a framework to assess the sensitivity and specificity of various risk indicators and offers the potential to identify the optimized predictor for plaque rupture using serial MRI with follow-up scan showing ulceration as the gold standard for method validation. While serial MRI data with actual rupture are hard to acquire, this single-case study suggests that combination of multiple predictors may provide potential improvement to existing plaque assessment schemes. With large-scale patient studies, this predictive modeling process may provide more solid ground for rupture predictor selection strategies and methods for image-based plaque vulnerability assessment.
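A minimal sketch of the sensitivity/specificity computation used to compare predictors, assuming per-point rupture probabilities from a fitted model and the 0.3 threshold mentioned above; the toy data are illustrative:

    import numpy as np

    def sens_spec(p_pred, is_ulcer, threshold=0.3):
        """Sensitivity = predicted-positive fraction of true ulcer points;
        specificity = predicted-negative fraction of true non-ulcer points."""
        pred = np.asarray(p_pred) >= threshold
        truth = np.asarray(is_ulcer, dtype=bool)
        sens = pred[truth].mean()
        spec = (~pred[~truth]).mean()
        return sens, spec

    # Toy example: 6 lumen-surface points with model probabilities and labels
    p = [0.9, 0.45, 0.2, 0.1, 0.35, 0.05]
    y = [1, 1, 0, 0, 1, 0]
    print(sens_spec(p, y))   # (1.0, 1.0) on this toy data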
A 3-Dimensional Atlas of Human Tongue Muscles
SANDERS, IRA; MU, LIANCAI
2013-01-01
The human tongue is one of the most important yet least understood structures of the body. One reason for the relative lack of research on the human tongue is its complex anatomy. This is a real barrier to investigators as there are few anatomical resources in the literature that show this complex anatomy clearly. As a result, the diagnosis and treatment of tongue disorders lags behind that for other structures of the head and neck. This report is intended to fill this gap by displaying the tongue’s anatomy in multiple ways. The primary material used in this study was serial axial images of the male and female human tongue from the Visible Human (VH) Project of the National Library of Medicine. In addition, thick serial coronal sections of three human tongues were rendered translucent. The VH axial images were computer reconstructed into serial coronal sections and each tongue muscle was outlined. These outlines were used to construct a 3-dimensional computer model of the tongue that allows each muscle to be seen in its in vivo anatomical position. The thick coronal sections supplement the 3-D model by showing details of the complex interweaving of tongue muscles throughout the tongue. The graphics are perhaps the clearest guide to date to aid clinical or basic science investigators in identifying each tongue muscle in any part of the human tongue.
Jang, Ji-Yong; Kim, Jung-Sun; Shin, Dong-Ho; Kim, Byeong-Keuk; Ko, Young-Guk; Choi, Donghoon; Jang, Yangsoo; Hong, Myeong-Ki
2015-10-01
Serial follow-up optical coherence tomography (OCT) was used to evaluate the effect of optimal lipid-lowering therapy on qualitative changes in neointimal tissue characteristics after drug-eluting stent (DES) implantation. DES-treated patients (n = 218) who received statin therapy were examined with serial follow-up OCT. First and second follow-up OCT evaluations were performed approximately 6 and 18 months after the index procedure, respectively. Patients were divided into two groups based on the level of low-density lipoprotein cholesterol (LDL-C) measured at the second follow-up: an optimal lipid-lowering group (n = 121), with an LDL-C reduction of ≥50% or an LDL-C level ≤70 mg/dL, and a conventional group (n = 97). Neointimal characteristics were qualitatively categorized as homogeneous or non-homogeneous patterns using OCT. The non-homogeneous group included heterogeneous, layered, or neoatherosclerosis patterns. Qualitative changes in neointimal tissue characteristics between the first and second follow-up OCT examinations were assessed. Between the first and second follow-up OCT procedures, the neointimal cross-sectional area increased more substantially in the conventional group (0.4 mm(2) vs. 0.2 mm(2) in the optimal lipid-lowering group, p = 0.01). The neointimal pattern changed from homogeneous to non-homogeneous less often in the optimal lipid-lowering group (1.3%, 1/77, p < 0.001) than in the conventional group (15.3%, 11/72, p = 0.44). Optimal LDL-C reduction was an independent predictor for the prevention of neointimal pattern change from homogeneous to non-homogeneous (odds ratio: 0.05, 95% confidence interval: 0.01∼0.46, p = 0.008). Our findings suggest that an intensive reduction in LDL-C levels can prevent non-homogeneous changes in the neointima and increases in neointimal cross-sectional area compared with conventional LDL-C control. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chvetsov, A
Purpose: To develop a tumor response model that could be used to compute the tumor hypoxic fraction from serial volumetric tumor imaging. This algorithm may be used for treatment response assessment and also for guidance of more expensive PET imaging of hypoxia. Methods: A previously developed two-level cell population tumor response model was modified to include a third cell level describing hypoxic and necrotic cells. This third level was considered a constant value during the radiotherapy treatment; therefore, the inclusion of an additional parameter did not compromise the stability of model fitting to imaging data. Fitting the model to serial volumetric imaging data was performed using a least squares objective function and a simulated annealing algorithm. The problem of reconstructing radiobiological parameters from serial imaging data was considered as an inverse ill-posed problem described by the Fredholm integral equation of the first kind. Variational regularization was used to stabilize solutions. Results: To evaluate performance of the algorithm, we used a set of serial CT imaging data on tumor volume for 14 head and neck cancer patients. The hypoxic fractions were reconstructed for each patient and the distribution of hypoxic fractions was compared to the distribution of initial hypoxic fractions previously measured using a histograph. The measured distribution and the distribution of hypoxic fractions reconstructed from imaging data are in good agreement. The reconstructed distribution of cell surviving fraction was also in better agreement with in vitro data than that previously obtained using the two-level cell population model. Conclusion: Our results indicate that it is possible to evaluate the initial hypoxic tumor fraction using serial volumetric imaging and a tumor response model. This algorithm can be used for treatment response assessment and guidance of more expensive PET imaging.
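A minimal sketch of fitting a cell-population model to serial tumor-volume imaging by least squares, assuming a simplified form in which the volume is a radiosensitive compartment decaying during treatment plus a constant hypoxic/necrotic fraction; the model form, numbers, and plain least-squares fit are stand-ins for the regularized three-level model described above:

    import numpy as np
    from scipy.optimize import least_squares

    def model(params, t_days):
        v0, f_hyp, k = params   # initial volume, hypoxic fraction, decay rate
        return v0 * (f_hyp + (1.0 - f_hyp) * np.exp(-k * t_days))

    t = np.array([0.0, 7.0, 14.0, 21.0, 28.0, 35.0])    # serial CT time points
    v = np.array([30.0, 24.0, 19.5, 16.8, 15.2, 14.5])  # tumor volume (cm^3)

    fit = least_squares(lambda p: model(p, t) - v,
                        x0=[30.0, 0.3, 0.1],
                        bounds=([0.0, 0.0, 0.0], [np.inf, 1.0, np.inf]))
    v0, f_hyp, k = fit.x
    print(f"estimated hypoxic fraction: {f_hyp:.2f}")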
Chan, B H; Leung, Y Y
2018-04-01
The comparison of serial radiographs and clinical photographs is considered the current accepted standard for the diagnosis of active condylar hyperplasia in patients with facial asymmetry. Single photon emission computed tomography (SPECT) has recently been proposed as an alternative method. SPECT can be interpreted using three reported methods: absolute difference in uptake, uptake ratio, and relative uptake. SPECT findings were compared to those from serial comparisons of radiographs and clinical photographs taken at the time of SPECT and a year later; the sensitivities and specificities were determined. Two hundred patient scans were evaluated. Thirty-four patients showed active growth on serial growth assessment. In comparison with serial growth assessment, the sensitivity and specificity of the three methods ranged between 32.4% and 67.6%, and between 36.1% and 78.3%, respectively. Analysis using receiver operating characteristic (ROC) curves revealed area under the curve (AUC) values of <0.58. The average age (mean±standard deviation) of patients with active growth was 18.6±2.8 years, and average growth in the anteroposterior, vertical, and transverse directions was 0.94±0.91 mm, 0.88±0.86 mm, and 1.40±0.66 mm, respectively. With such low sensitivity and specificity values, it is not justifiable to use SPECT in place of serial growth assessment for the determination of condylar growth status. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
STAR adaptation of QR algorithm. [program for solving over-determined systems of linear equations
NASA Technical Reports Server (NTRS)
Shah, S. N.
1981-01-01
The QR algorithm used on a serial computer and executed on the Control Data Corporation 6000 Computer was adapted to execute efficiently on the Control Data STAR-100 computer. How the scalar program was adapted for the STAR-100 and why these adaptations yielded an efficient STAR program is described. Program listings of the old scalar version and the vectorized SL/1 version are presented in the appendices. Execution times for the two versions applied to the same system of linear equations, are compared.
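A minimal sketch of the computation the QR algorithm performs here: solving an overdetermined linear system in the least-squares sense by factoring A = QR and back-substituting; library routines stand in for the hand-vectorized STAR-100 code:

    import numpy as np

    # Overdetermined system: 6 equations, 3 unknowns
    A = np.array([[1., 0., 1.], [1., 1., 0.], [0., 1., 1.],
                  [1., 1., 1.], [2., 0., 1.], [0., 2., 1.]])
    b = np.array([2., 1.5, 2.5, 3., 3.5, 4.])

    Q, R = np.linalg.qr(A)              # A = QR, R upper triangular
    x = np.linalg.solve(R, Q.T @ b)     # minimizes ||Ax - b||_2
    print(x)
    print(np.linalg.norm(A @ x - b))    # residual of the LS solution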
Portable Airborne Laser System Measures Forest-Canopy Height
NASA Technical Reports Server (NTRS)
Nelson, Ross
2005-01-01
The Portable Airborne Laser System (PALS) is a combination of laser ranging, video imaging, positioning, and data-processing subsystems designed for measuring the heights of forest canopies along linear transects from tens to thousands of kilometers long. Unlike prior laser ranging systems designed to serve the same purpose, the PALS is not restricted to use aboard a single aircraft of a specific type: the PALS fits into two large suitcases that can be carried to any convenient location, and the PALS can be installed in almost any local aircraft for hire, thereby making it possible to sample remote forests at relatively low cost. The initial cost and the cost of repairing the PALS are also lower because the PALS hardware consists mostly of commercial off-the-shelf (COTS) units that can easily be replaced in the field. The COTS units include a laser ranging transceiver, a charge-coupled-device (CCD) camera that images the laser-illuminated targets, a differential Global Positioning System (dGPS) receiver capable of operation within the Wide Area Augmentation System, a video titler, a video cassette recorder (VCR), and a laptop computer equipped with two serial ports. The VCR and computer are powered by batteries; the other units are powered at 12 VDC from the 28-VDC aircraft power system via a low-pass filter and a voltage converter. The dGPS receiver feeds location and time data, at an update rate of 0.5 Hz, to the video titler and the computer. The laser ranging transceiver, operating at a sampling rate of 2 kHz, feeds its serial range and amplitude data stream to the computer. The analog video signal from the CCD camera is fed into the video titler, wherein the signal is annotated with position and time information. The titler then forwards the annotated signal to the VCR for recording on 8-mm tapes. The dGPS and laser range and amplitude serial data streams are processed by software that displays the laser trace and the dGPS information as they are fed into the computer, subsamples the laser range and amplitude data, interleaves the subsampled data with the dGPS information, and records the resulting interleaved data stream.
NASA Technical Reports Server (NTRS)
Hanebutte, Ulf R.; Joslin, Ronald D.; Zubair, Mohammad
1994-01-01
The implementation and the performance of a parallel spatial direct numerical simulation (PSDNS) code are reported for the IBM SP1 supercomputer. The spatially evolving disturbances that are associated with laminar-to-turbulent transition in three-dimensional boundary-layer flows are computed with the PSDNS code. By remapping the distributed data structure during the course of the calculation, optimized serial library routines can be utilized that substantially increase the computational performance. Although the remapping incurs a high communication penalty, the parallel efficiency of the code remains above 40% for all performed calculations. By using appropriate compile options and optimized library routines, the serial code achieves 52-56 Mflops on a single node of the SP1 (45% of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a 'real world' simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP for the same simulation. The scalability information provides estimated computational costs that match the actual costs relative to changes in the number of grid points.
Parallel Implementation of the Wideband DOA Algorithm on the IBM Cell BE Processor
2010-05-01
The Multiple Signal Classification (MUSIC) algorithm is a powerful technique for determining the Direction of Arrival (DOA) of signals… the Cell Broadband Engine Processor (Cell BE). The process of adapting the serial MUSIC algorithm to the Cell BE is analyzed in terms of parallelism and… The wideband processing steps include DOA estimation using the MUSIC algorithm [4], computation of the focus matrix, computation of the number of sources, and separation of the signal…
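A minimal sketch of the narrowband MUSIC pseudospectrum computation that such a port parallelizes, assuming a uniform linear array with half-wavelength spacing and a known source count; the wideband focusing step is omitted:

    import numpy as np

    def music_spectrum(X, n_sources, angles_deg):
        """X: (n_sensors, n_snapshots) array data. Project steering vectors
        onto the noise subspace of the sample covariance; peaks of the
        pseudospectrum indicate directions of arrival."""
        R = X @ X.conj().T / X.shape[1]
        _, vecs = np.linalg.eigh(R)                # ascending eigenvalues
        En = vecs[:, :-n_sources]                  # noise subspace
        n = np.arange(X.shape[0])
        p = []
        for th in np.deg2rad(angles_deg):
            a = np.exp(1j * np.pi * n * np.sin(th))  # steering, d = lambda/2
            p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
        return np.array(p)

    # Toy scene: one source at +20 degrees on an 8-element array
    rng = np.random.default_rng(1)
    sensor = np.arange(8)
    a20 = np.exp(1j * np.pi * sensor * np.sin(np.deg2rad(20.0)))
    X = np.outer(a20, rng.standard_normal(200)) + 0.1 * (
        rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200)))
    grid = np.arange(-90, 91)
    print(grid[np.argmax(music_spectrum(X, 1, grid))])   # ~20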
Attentional Episodes in Visual Perception
ERIC Educational Resources Information Center
Wyble, Brad; Potter, Mary C.; Bowman, Howard; Nieuwenstein, Mark
2011-01-01
Is one's temporal perception of the world truly as seamless as it appears? This article presents a computationally motivated theory suggesting that visual attention samples information from temporal episodes (episodic simultaneous type/serial token model; Wyble, Bowman, & Nieuwenstein, 2009). Breaks between these episodes are punctuated by periods…
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Huber, David J.; Bhattacharyya, Rajan
2017-05-01
In this paper, we describe an algorithm and system for optimizing search and detection performance for "items of interest" (IOI) in large-sized images and videos that employs the Rapid Serial Visual Presentation (RSVP) based EEG paradigm and surprise algorithms that incorporate motion processing to determine whether static or video RSVP is used. The system works by first computing a motion surprise map on image sub-regions (chips) of incoming sensor video data and then uses those surprise maps to label the chips as either "static" or "moving". This information tells the system whether to use a static or video RSVP presentation and decoding algorithm in order to optimize EEG-based detection of IOI in each chip. Using this method, we are able to demonstrate classification of a series of image regions from video with an Az value (area under the ROC curve) of 1, indicating perfect classification, over a range of display frequencies and video speeds.
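A minimal sketch of the chip-labeling step described above, with a simple mean frame-difference statistic standing in for the motion surprise algorithm; the chip size and threshold are illustrative:

    import numpy as np

    def label_chips(prev_frame, frame, chip=32, threshold=4.0):
        """Split the frame into chips and mark each 'moving' or 'static' by
        mean absolute temporal difference, to choose video vs. static RSVP."""
        labels = {}
        h, w = frame.shape
        for y in range(0, h - chip + 1, chip):
            for x in range(0, w - chip + 1, chip):
                d = np.abs(frame[y:y+chip, x:x+chip].astype(float)
                           - prev_frame[y:y+chip, x:x+chip].astype(float))
                labels[(y, x)] = "moving" if d.mean() > threshold else "static"
        return labels

    rng = np.random.default_rng(0)
    f0 = rng.integers(0, 255, (64, 64), dtype=np.uint8)
    f1 = f0.copy()
    f1[:32, 32:] = rng.integers(0, 255, (32, 32), dtype=np.uint8)  # one moving chip
    print(label_chips(f0, f1))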
NASA Astrophysics Data System (ADS)
Bobkov, S. G.; Serdin, O. V.; Arkhangelskiy, A. I.; Arkhangelskaja, I. V.; Suchkov, S. I.; Topchiev, N. P.
The problem of unifying electronic components at different levels (circuits, interfaces, hardware, and software) used in the space industry is considered. The task of developing computer systems for space applications is discussed using the example of the scientific data acquisition system for the space project GAMMA-400. The basic characteristics of highly reliable, fault-tolerant chips developed by SRISA RAS for space-qualified computational systems are given. To reduce power consumption and enhance data reliability, the embedded system interconnect is made hierarchical: the upper level is Serial RapidIO 1x or 4x with a transfer rate of 1.25 Gbaud; the next level is SpaceWire with transfer rates up to 400 Mbaud; and the lower level is MIL-STD-1553B and RS232/RS485. Ethernet 10/100 is the technology interface and provides connection with previously released modules as well. This system interconnection allows the creation of different redundancy schemes. Designers can develop heterogeneous systems that employ the peer-to-peer networking performance of Serial RapidIO using multiprocessor clusters interconnected by SpaceWire.
Automatic Parallelization of Numerical Python Applications using the Global Arrays Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daily, Jeffrey A.; Lewis, Robert R.
2011-11-30
Global Arrays is a software system from Pacific Northwest National Laboratory that enables an efficient, portable, and parallel shared-memory programming interface to manipulate distributed dense arrays. The NumPy module is the de facto standard for numerical calculation in the Python programming language, a language whose use is growing rapidly in the scientific and engineering communities. NumPy provides a powerful N-dimensional array class as well as other scientific computing capabilities. However, like the majority of the core Python modules, NumPy is inherently serial. Using a combination of Global Arrays and NumPy, we have reimplemented NumPy as a distributed drop-in replacement called Global Arrays in NumPy (GAiN). Serial NumPy applications can become parallel, scalable GAiN applications with only minor source code changes. Scalability studies of several different GAiN applications will be presented showing the utility of developing serial NumPy codes which can later run on more capable clusters or supercomputers.
Experiences with serial and parallel algorithms for channel routing using simulated annealing
NASA Technical Reports Server (NTRS)
Brouwer, Randall Jay
1988-01-01
Two algorithms for channel routing using simulated annealing are presented. Simulated annealing is an optimization methodology which allows the solution process to back out of local minima that may be entered by inappropriate selections. By properly controlling the annealing process, it is very likely that the optimal solution to an NP-complete problem such as channel routing may be found. The algorithm presented imposes very relaxed restrictions on the types of allowable transformations, including overlapping nets. By relaxing that restriction and controlling overlap situations with an appropriate cost function, the algorithm becomes very flexible and can be applied to many extensions of channel routing. The selection of the transformation utilizes a number of heuristics, while retaining the pseudorandom nature of simulated annealing. The algorithm was implemented as a serial program for a workstation, and as a parallel program designed for a hypercube computer. The details of the serial implementation are presented, including many of the heuristics used and some of the resulting solutions.
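A minimal generic sketch of the simulated-annealing acceptance loop both algorithms rely on, with a toy one-dimensional cost in place of a channel-routing cost function (which would penalize track count and net overlap):

    import math
    import random

    def anneal(state, cost, neighbor, t0=10.0, alpha=0.95, steps=2000):
        """Accept uphill moves with probability exp(-delta/T) so the search
        can back out of local minima; T follows a geometric cooling schedule."""
        t = t0
        cur, cur_c = state, cost(state)
        best, best_c = cur, cur_c
        for _ in range(steps):
            cand = neighbor(cur)
            delta = cost(cand) - cur_c
            if delta <= 0 or random.random() < math.exp(-delta / t):
                cur, cur_c = cand, cur_c + delta
                if cur_c < best_c:
                    best, best_c = cur, cur_c
            t *= alpha
        return best, best_c

    # Toy: minimize a 1-D cost with many local minima
    cost = lambda x: math.sin(5 * x) + 0.1 * (x - 2.0) ** 2
    neighbor = lambda x: x + random.uniform(-0.5, 0.5)
    print(anneal(0.0, cost, neighbor))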
Saitoh, Sei; Ohno, Nobuhiko; Saitoh, Yurika; Terada, Nobuo; Shimo, Satoshi; Aida, Kaoru; Fujii, Hideki; Kobayashi, Tetsuro; Ohno, Shinichi
2018-01-01
Combined analysis of immunostaining for various biological molecules coupled with investigations of ultrastructural features of individual cells is a powerful approach for studies of cellular functions in normal and pathological conditions. However, weak antigenicity of tissues fixed by conventional methods poses a problem for immunoassays. This study introduces a method of correlative light and electron microscopy imaging of the same endocrine cells of compact and diffuse islets from human pancreatic tissue specimens. The method utilizes serial sections obtained from Epon-embedded specimens fixed with glutaraldehyde and osmium tetroxide. Double-immunofluorescence staining of thick Epon sections for endocrine hormones (insulin and glucagon) and regenerating islet-derived gene 1 α (REG1α) was performed following the removal of epoxy resin with sodium ethoxide, antigen retrieval by autoclaving, and de-osmification treatment with hydrogen peroxide. The immunofluorescence images of endocrine cells were superimposed with the electron microscopy images of the same cells obtained from serial ultrathin sections. Immunofluorescence images showed well-preserved secretory granules in endocrine cells, whereas electron microscopy observations demonstrated corresponding secretory granules and intracellular organelles in the same cells. In conclusion, the correlative imaging approach developed by us may be useful for examining ultrastructural features in combination with immunolocalisation of endocrine hormones in the same human pancreatic islets.
A home-built digital optical MRI console using high-speed serial links.
Tang, Weinan; Wang, Weimin; Liu, Wentao; Ma, Yajun; Tang, Xin; Xiao, Liang; Gao, Jia-Hong
2015-08-01
To develop a high performance, cost-effective digital optical console for scalable multichannel MRI. The console system was implemented with flexibility and efficiency based on a modular architecture with distributed pulse sequencers. High-speed serial links were optimally utilized to interconnect the system, providing fast digital communication with a multi-gigabit data rate. The conventional analog radio frequency (RF) chain was replaced with a digital RF manipulation. The acquisition electronics were designed in close proximity to RF coils and preamplifiers, using a digital optical link to transmit the MR signal. A prototype of the console was constructed with a broad frequency range from direct current to 100 MHz. A temporal resolution of 1 μs was achieved for both the RF and gradient operations. The MR signal was digitized in the scanner room with an overall dynamic range between 16 and 24 bits and was transmitted to a master controller over a duplex optic fiber with a high data rate of 3.125 gigabits per second. High-quality phantom and human images were obtained using the prototype on both 0.36T and 1.5T clinical MRI scanners. A homemade digital optical MRI console with high-speed serial interconnection has been developed to better serve imaging research and clinical applications. © 2014 Wiley Periodicals, Inc.
Embracing the quantum limit in silicon computing.
Morton, John J L; McCamey, Dane R; Eriksson, Mark A; Lyon, Stephen A
2011-11-16
Quantum computers hold the promise of massive performance enhancements across a range of applications, from cryptography and databases to revolutionary scientific simulation tools. Such computers would make use of the same quantum mechanical phenomena that pose limitations on the continued shrinking of conventional information processing devices. Many of the key requirements for quantum computing differ markedly from those of conventional computers. However, silicon, which plays a central part in conventional information processing, has many properties that make it a superb platform around which to build a quantum computer. © 2011 Macmillan Publishers Limited. All rights reserved
Comparative Analysis Between Computed and Conventional Inferior Alveolar Nerve Block Techniques.
Araújo, Gabriela Madeira; Barbalho, Jimmy Charles Melo; Dias, Tasiana Guedes de Souza; Santos, Thiago de Santana; Vasconcellos, Ricardo José de Holanda; de Morais, Hécio Henrique Araújo
2015-11-01
The aim of this randomized, double-blind, controlled trial was to compare the computed and conventional inferior alveolar nerve block techniques in symmetrically positioned inferior third molars. Both computed and conventional anesthetic techniques were performed in 29 healthy patients (58 surgeries) aged between 18 and 40 years. The anesthetic of choice was 2% lidocaine with 1:200,000 epinephrine. The Visual Analogue Scale assessed the pain variable after anesthetic infiltration. Patient satisfaction was evaluated using the Likert Scale. Heart and respiratory rates, mean time to perform the technique, and the need for additional anesthesia were also evaluated. Pain variable means were higher for the conventional technique as compared with the computed technique, 3.45 ± 2.73 and 2.86 ± 1.96, respectively, but no statistically significant differences were found (P > 0.05). Patient satisfaction showed no statistically significant differences. The mean times to perform the computed and conventional techniques were 3.85 and 1.61 minutes, respectively, showing statistically significant differences (P < 0.001). The computed anesthetic technique showed lower mean pain perception, but did not show statistically significant differences when contrasted with the conventional technique.
New dynamic FET logic and serial memory circuits for VLSI GaAs technology
NASA Technical Reports Server (NTRS)
Eldin, A. G.
1991-01-01
The complexity of GaAs field effect transistor (FET) very large scale integration (VLSI) circuits is limited by the maximum power dissipation, while the uniformity of the device parameters determines the functional yield. In this work, digital GaAs FET circuits are presented that eliminate DC power dissipation and reduce the area to 50% of that of conventional static circuits. Their larger tolerance to device parameter variations results in higher functional yield.
Paper-based device for separation and cultivation of single microalga.
Chen, Chih-Chung; Liu, Yi-Ju; Yao, Da-Jeng
2015-12-01
Single-cell separation is among the most useful techniques in biochemical research, diagnosis and various industrial applications. Microalgae species have great economic importance as industrial raw materials. Microalgae collected from the environment are typically a mixed, heterogeneous population of species that must be isolated and purified for examination and further application. Conventional methods, such as serial dilution and the streaking-plate method, are labor-intensive and inefficient. We developed a paper-based device for separation and cultivation of single microalgae. The fabrication was conducted simply with a common laser printer and required only a few minutes, without lithographic instruments or a clean room. The driving force of the paper device was simple capillarity, without the complicated pump connection that is part of most devices for microfluidics. The open-structure design of the paper device makes it operable with a common laboratory micropipette for sample transfer and manipulation with the naked eye, or adaptable to a robotic system with high-throughput retrieval and analysis functionality. The efficiency of isolating a single cell from mixed microalgae species is seven times that of a conventional method involving serial dilution. The paper device can also serve as an incubator for microalgae growth on simply rinsing the paper with a growth medium. Many applications, such as selection of highly expressing cells and various single-cell analyses, would be possible. Copyright © 2015 Elsevier B.V. All rights reserved.
Improved scaling of temperature-accelerated dynamics using localization
NASA Astrophysics Data System (ADS)
Shim, Yunsic; Amar, Jacques G.
2016-07-01
While temperature-accelerated dynamics (TAD) is a powerful method for carrying out non-equilibrium simulations of systems over extended time scales, the computational cost of serial TAD increases approximately as N^3, where N is the number of atoms. In addition, although a parallel TAD method based on domain decomposition [Y. Shim et al., Phys. Rev. B 76, 205439 (2007)] has been shown to provide significantly improved scaling, the dynamics in such an approach is only approximate while the size of activated events is limited by the spatial decomposition size. Accordingly, it is of interest to develop methods to improve the scaling of serial TAD. As a first step in understanding the factors which determine the scaling behavior, we first present results for the overall scaling of serial TAD and its components, which were obtained from simulations of Ag/Ag(100) growth and Ag/Ag(100) annealing, and compare with theoretical predictions. We then discuss two methods based on localization which may be used to address two of the primary "bottlenecks" to the scaling of serial TAD with system size. By implementing both of these methods, we find that for intermediate system sizes, the scaling is improved by almost a factor of N^(1/2). Some additional possible methods to improve the scaling of TAD are also discussed.
MAMMALIAN APOPTOSIS IN WHOLE NEONATAL OVARIES, EMBRYOS AND FETAL LIMBS USING CONFOCAL MICROSCOPY
The emergence of confocal laser scanning microscopy (CLSM) as a technique capable of optically generating serial sections of whole-mount tissue and then reassembling the computer-stored images as a virtual 3-dimensional structure offers a viable alternative to traditional section...
Electronic Data Interchange (EDI) for Libraries and Publishers.
ERIC Educational Resources Information Center
Santosuosso, Joe
1992-01-01
Defines electronic data interchange (EDI) as the exchange of data between computer systems without human intervention or interpretation. Standards are discussed; and the implementation of EDI in libraries and the serials publishing community in the areas of orders and acquisitions, claims, and invoice processing is described. (LRW)
Have You Got What It Takes...And Are You Using All You Could?
ERIC Educational Resources Information Center
Dyer, Hilary
1994-01-01
Suggests ways of using personal computers in special libraries, including online searching; CD-ROM networks; reference work; current awareness services; press cuttings services; selective dissemination of information; local databases; object linking and embedding; cataloging; acquisitions; circulation; serials control; interlibrary loan; space…
Confocal microscopy studies of morphology and apoptosis: ovaries, limbs, embryos and insects
Confocal laser scanning microscopy (CLSM) is a technique that is capable of generating serial sections of whole-mount tissue and then reassembling the computer-stored images as a virtual 3-dimensional structure. In many ways CLSM offers an alternative to traditional sectioning ap...
Confocal microscopy of thick tissue sections: 3D visualization of rat kidney glomeruli
Confocal laser scanning microscopy (CLSM) is a technique capable of generating serial sections of whole-mount tissue and then reassembling the computer-acquired images as a virtual 3-dimensional structure. In many ways CLSM offers an alternative to traditional sectioning approac...
Parallel algorithms for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Amin-Javaheri, Masoud; Orin, David E.
1989-01-01
The development of an O(log2 N) parallel algorithm for the manipulator inertia matrix is presented. It is based on the most efficient serial algorithm, which uses the composite rigid body method. Recursive doubling is used to reformulate the linear recurrence equations which are required to compute the diagonal elements of the matrix. It results in O(log2 N) levels of computation. Computation of the off-diagonal elements involves N linear recurrences of varying size, and a new method, which avoids redundant computation of position and orientation transforms for the manipulator, is developed. The O(log2 N) algorithm is presented in both equation and graphic forms which clearly show the parallelism inherent in the algorithm.
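To make the recursive-doubling step concrete, here is a minimal NumPy sketch (our own illustration, not code from the paper) for a generic first-order linear recurrence x[i] = a[i]*x[i-1] + b[i], the kind of recurrence the diagonal-element computation above is reformulated into. Each sweep composes affine maps over a doubling stride, so only log2(N) sweeps are needed, and every sweep is data-parallel.

```python
import numpy as np

def recurrence_recursive_doubling(a, b, x0=0.0):
    """Solve x[i] = a[i] * x[i-1] + b[i] via recursive doubling.

    Each term is an affine map x -> a*x + b; composing maps over
    doubling strides collapses the recurrence in log2(N) sweeps,
    each of which is elementwise (i.e., data-parallel).
    """
    A = np.asarray(a, dtype=float).copy()
    B = np.asarray(b, dtype=float).copy()
    n, shift = len(A), 1
    while shift < n:
        # Compose map i with the accumulated map ending at i - shift:
        # (a_i, b_i) o (a_j, b_j) = (a_i * a_j, a_i * b_j + b_i)
        A_prev = np.concatenate([np.ones(shift), A[:-shift]])
        B_prev = np.concatenate([np.zeros(shift), B[:-shift]])
        A, B = A * A_prev, A * B_prev + B
        shift *= 2
    return A * x0 + B  # x[i] = A[i] * x0 + B[i]
```

A plain serial loop over the same a, b arrays reproduces the result, which is a handy correctness check.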
NASA Astrophysics Data System (ADS)
Li, Peng; Wu, Di
2018-01-01
Two competing approaches have been developed over the years for multi-echelon inventory system optimization: the stochastic-service approach (SSA) and the guaranteed-service approach (GSA). Although they solve the same inventory policy optimization problem at their core, they make different assumptions with regard to the role of safety stock. This paper provides a detailed comparison of the two approaches by considering operating flexibility costs in the optimization of (R, Q) policies for a continuous-review serial inventory system. The results indicate that the GSA model is more efficient at solving this complicated inventory problem in terms of computation time, and that the cost difference between the two approaches is quite small.
Experiences in using the CYBER 203 for three-dimensional transonic flow calculations
NASA Technical Reports Server (NTRS)
Melson, N. D.; Keller, J. D.
1982-01-01
In this paper, the authors report on some of their experiences modifying two three-dimensional transonic flow programs (FLO22 and FLO27) for use on the NASA Langley Research Center CYBER 203. Both of the programs discussed were originally written for use on serial machines. Several methods were attempted to optimize the execution of the two programs on the vector machine, including: (1) leaving the program in a scalar form (i.e., serial computation) with compiler software used to optimize and vectorize the program, (2) vectorizing parts of the existing algorithm in the program, and (3) incorporating a new vectorizable algorithm (ZEBRA I or ZEBRA II) in the program.
Graphics processing unit based computation for NDE applications
NASA Astrophysics Data System (ADS)
Nahas, C. A.; Rajagopal, Prabhu; Balasubramaniam, Krishnan; Krishnamurthy, C. V.
2012-05-01
Advances in parallel processing in recent years are helping to improve the cost of numerical simulation. Breakthroughs in Graphical Processing Unit (GPU) based computation now offer the prospect of further drastic improvements. The introduction of 'compute unified device architecture' (CUDA) by NVIDIA (the global technology company based in Santa Clara, California, USA) has made programming GPUs for general purpose computing accessible to the average programmer. Here we use CUDA to develop parallel finite difference schemes as applicable to two problems of interest to the NDE community, namely heat diffusion and elastic wave propagation. The implementations are two-dimensional. Performance improvement of the GPU implementation against the serial CPU implementation is then discussed.
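As a sketch of the kind of finite-difference kernel being ported, the following NumPy snippet (a serial CPU baseline under our own assumptions, not the authors' CUDA code) advances 2-D heat diffusion by one explicit step; on a GPU each interior grid point would map naturally to one thread.

```python
import numpy as np

def heat_step(u, alpha=0.1):
    """One explicit finite-difference step of 2-D heat diffusion.

    Five-point Laplacian on the interior points; alpha is the
    dimensionless diffusion number (stable for alpha <= 0.25).
    """
    un = u.copy()
    un[1:-1, 1:-1] = u[1:-1, 1:-1] + alpha * (
        u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
        - 4.0 * u[1:-1, 1:-1]
    )
    return un

# usage sketch: diffuse a hot spot for 100 steps
grid = np.zeros((64, 64))
grid[32, 32] = 1.0
for _ in range(100):
    grid = heat_step(grid)
```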
2013-01-01
Background In biomedical research, a huge variety of different techniques is currently available for the structural examination of small specimens, including conventional light microscopy (LM), transmission electron microscopy (TEM), confocal laser scanning microscopy (CLSM), microscopic X-ray computed tomography (microCT), and many others. Since every imaging method is physically limited by certain parameters, a correlative use of complementary methods often yields a significantly broader range of information. Here we demonstrate the advantages of the correlative use of microCT, light microscopy, and transmission electron microscopy for the analysis of small biological samples. Results We used a small juvenile bivalve mollusc (Mytilus galloprovincialis, approximately 0.8 mm length) to demonstrate the workflow of a correlative examination by microCT, LM serial section analysis, and TEM re-sectioning. Initially these three datasets were analyzed separately, and subsequently they were fused in one 3D scene. This workflow is very straightforward. The specimen was processed as usual for transmission electron microscopy, including post-fixation in osmium tetroxide and embedding in epoxy resin. Subsequently it was imaged with microCT. Post-fixation in osmium tetroxide yielded sufficient X-ray contrast for microCT imaging, since the X-ray absorption of epoxy resin is low. Thereafter, the same specimen was serially sectioned for LM investigation. The serial section images were aligned and specific organ systems were reconstructed based on manual segmentation and surface rendering. According to the region of interest (ROI), specific LM sections were detached from the slides, re-mounted on resin blocks and re-sectioned (ultrathin) for TEM. For analysis, image data from the three different modalities were co-registered into a single 3D scene using the software AMIRA®. We were able to register both the LM section series volume and the TEM slices neatly to the microCT dataset, with small geometric deviations occurring only in the peripheral areas of the specimen. Based on the co-registered datasets, the excretory organs, which were chosen as the ROI for this study, could be investigated regarding both their ultrastructure and their position in the organism and spatial relationship to adjacent tissues. We found structures typical for mollusc excretory systems, including ultrafiltration sites at the pericardial wall, and ducts leading from the pericardium towards the kidneys, which exhibit a typical basal infolding system. Conclusions The presented approach allows a comprehensive analysis and presentation of small objects regarding both the overall organization and cellular and subcellular details. Although our protocol involves a variety of different equipment and procedures, we maintain that it offers savings in both effort and cost. Co-registration of datasets from different imaging modalities can be accomplished with high-end desktop computers and offers new opportunities for understanding and communicating structural relationships within organisms and tissues. In general, the correlative use of different microscopic imaging techniques will continue to become more widespread in morphological and structural research in zoology. Classical TEM serial section investigations are extremely time-consuming, and modern methods for 3D analysis of ultrastructure such as SBF-SEM and FIB-SEM are limited to very small volumes for examination.
Thus the re-sectioning of LM sections is suitable for speeding up TEM examination substantially, while microCT could become a key method for complementing ultrastructural examinations. PMID:23915384
Algorithms and software for solving finite element equations on serial and parallel architectures
NASA Technical Reports Server (NTRS)
George, Alan
1989-01-01
Over the past 15 years numerous new techniques have been developed for solving systems of equations and eigenvalue problems arising in finite element computations. A package called SPARSPAK has been developed by the author and his co-workers which exploits these new methods. The broad objective of this research project is to incorporate some of this software in the Computational Structural Mechanics (CSM) testbed, and to extend the techniques for use on multiprocessor architectures.
Podoleanu, Adrian Gh; Bradu, Adrian
2013-08-12
Conventional spectral domain interferometry (SDI) methods suffer from the need of data linearization. When applied to optical coherence tomography (OCT), conventional SDI methods are limited in their 3D capability, as they cannot deliver direct en-face cuts. Here we introduce a novel SDI method, which eliminates these disadvantages. We denote this method as Master-Slave Interferometry (MSI), because a signal is acquired by a slave interferometer for an optical path difference (OPD) value determined by a master interferometer. The MSI method radically changes the main building block of an SDI sensor and of a spectral domain OCT set-up. The serially provided signal in conventional technology is replaced by multiple signals, a signal for each OPD point in the object investigated. This opens novel avenues in parallel sensing and in parallelization of signal processing in 3D-OCT, with applications in high-resolution medical imaging and microscopy investigation of biosamples. Eliminating the need of linearization leads to lower cost OCT systems and opens potential avenues in increasing the speed of production of en-face OCT images in comparison with conventional SDI.
Simple Logic for Big Problems: An Inside Look at Relational Databases.
ERIC Educational Resources Information Center
Seba, Douglas B.; Smith, Pat
1982-01-01
Discusses database design concept termed "normalization" (process replacing associations between data with associations in two-dimensional tabular form) which results in formation of relational databases (they are to computers what dictionaries are to spoken languages). Applications of the database in serials control and complex systems…
AIAA spacecraft GN&C interface standards initiative: Overview
NASA Technical Reports Server (NTRS)
Challoner, A. Dorian
1995-01-01
The American Institute of Aeronautics and Astronautics (AIAA) has undertaken an important standards initiative in the area of spacecraft guidance, navigation, and control (GN&C) subsystem interfaces. The objective of this effort is to establish standards that will promote interchangeability of major GN&C components, thus enabling substantially lower spacecraft development costs. Although initiated by developers of conventional spacecraft GN&C, it is anticipated that interface standards will also be of value in reducing the development costs of micro-engineered spacecraft. The standardization targets are specifically limited to interfaces only, including information (i.e. data and signal), power, mechanical, thermal, and environmental interfaces between various GN&C components and between GN&C subsystems and other subsystems. The current emphasis is on information interfaces between various hardware elements (e.g., between star trackers and flight computers). The poster presentation will briefly describe the program, including the mechanics and schedule, and will publicize the technical products as they exist at the time of the conference. In particular, the rationale for the adoption of the AS1773 fiber-optic serial data bus and the status of data interface standards at the application layer will be presented.
Optimizing transformations of stencil operations for parallel cache-based architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bassetti, F.; Davis, K.
This paper describes a new technique for optimizing serial and parallel stencil and stencil-like operations for cache-based architectures. This technique takes advantage of the semantic knowledge implicit in stencil-like computations. The technique is implemented as a source-to-source program transformation; because of its specificity it could not be expected of a conventional compiler. Empirical results demonstrate a uniform factor-of-two speedup. The experiments clearly show the benefits of this technique to be a consequence, as intended, of the reduction in cache misses. The test codes are based on a 5-point stencil obtained by the discretization of the Poisson equation and applied to a two-dimensional uniform grid using the Jacobi method as an iterative solver. Results are presented for a 1-D tiling for a single processor, and in parallel using a 1-D data partition. For the parallel case both blocking and non-blocking communication are tested. The same scheme of experiments has been performed for the 2-D tiling case. However, for the parallel case the 2-D partitioning is not discussed here, so the parallel case handled for 2-D is 2-D tiling with 1-D data partitioning.
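The following NumPy sketch (illustrative only; the paper's transformation is a source-to-source compiler pass, not hand-written Python) shows the 1-D tiling idea on the same 5-point Jacobi/Poisson stencil: the sweep is arithmetically identical to the untiled version, but columns are visited in strips sized to stay cache-resident.

```python
import numpy as np

def jacobi_tiled(u, f, h=1.0, tile=256):
    """One Jacobi sweep for the 2-D Poisson problem, 1-D tiled.

    Columns are processed in strips of `tile` so the working set of
    the five-point stencil stays in cache; the arithmetic matches an
    untiled sweep exactly.
    """
    new = u.copy()
    ncols = u.shape[1]
    for j0 in range(1, ncols - 1, tile):
        j1 = min(j0 + tile, ncols - 1)
        new[1:-1, j0:j1] = 0.25 * (
            u[:-2, j0:j1] + u[2:, j0:j1]              # up / down
            + u[1:-1, j0 - 1:j1 - 1] + u[1:-1, j0 + 1:j1 + 1]  # left / right
            - h * h * f[1:-1, j0:j1]
        )
    return new
```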
State-of-the-art: Radiological investigation of pleural disease.
Hallifax, R J; Talwar, A; Wrightson, J M; Edey, A; Gleeson, F V
2017-03-01
Pleural disease is common. Radiological investigation of pleural effusion, thickening, masses, and pneumothorax is key in diagnosing and determining management. The conventional chest radiograph (CXR) remains the initial investigation of choice for patients with suspected pleural disease. When abnormalities are detected, thoracic ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) can each play important roles in further investigation, but appropriate modality selection is critical. US adds significant value in the identification of pleural fluid and pleural nodularity, guiding pleural procedures and, increasingly, as "point of care" assessment for pneumothorax, but is highly operator dependent. CT is the modality of choice for further assessment of pleural disease: characterising pleural thickening, some pleural effusions, and demonstrating the homogeneity of pleural masses and areas of fatty attenuation or calcification. MRI has specific utility for soft tissue abnormalities and may have a role for younger patients requiring follow-up serial imaging. MRI and PET/CT may provide additional information in malignant pleural disease regarding prognosis and response to therapy. This article summarises existing techniques, highlighting the benefits and applications of the different imaging modalities, and provides an up-to-date review of the evidence. Copyright © 2017 Elsevier Ltd. All rights reserved.
Numerical Analysis of Ginzburg-Landau Models for Superconductivity.
NASA Astrophysics Data System (ADS)
Coskun, Erhan
Thin-film conventional as well as high-T_c superconductors of various geometric shapes placed under both uniform and variable strength magnetic fields are studied using the universally accepted macroscopic Ginzburg-Landau model. A series of new theoretical results concerning the properties of the solution is presented using the semi-discrete time-dependent Ginzburg-Landau equations, a staggered grid setup, and natural boundary conditions. Efficient serial algorithms, including a novel adaptive algorithm, are developed and successfully implemented for solving the governing highly nonlinear parabolic system of equations. The refinement technique used in the adaptive algorithm is based on a modified forward Euler method, which was also developed by us to ease the restriction on time step size for stability considerations. Stability and convergence properties of the forward and modified forward Euler schemes are studied. Numerical simulations of various recent physical experiments of technological importance, such as vortex motion and pinning, are performed. The numerical code for solving the time-dependent Ginzburg-Landau equations is parallelized using BlockComm-Chameleon and PCN. The parallel code was run on the distributed memory multiprocessors Intel iPSC/860, IBM SP1 and a cluster of Sun Sparc workstations, all located at the Mathematics and Computer Science Division, Argonne National Laboratory.
Molecular Monte Carlo Simulations Using Graphics Processing Units: To Waste Recycle or Not?
Kim, Jihan; Rodgers, Jocelyn M; Athènes, Manuel; Smit, Berend
2011-10-11
In the waste recycling Monte Carlo (WRMC) algorithm, (1) multiple trial states may be simultaneously generated and utilized during Monte Carlo moves to improve the statistical accuracy of the simulations, suggesting that such an algorithm may be well posed for implementation in parallel on graphics processing units (GPUs). In this paper, we implement two waste recycling Monte Carlo algorithms in CUDA (Compute Unified Device Architecture) using uniformly distributed random trial states and trial states based on displacement random-walk steps, and we test the methods on a methane-zeolite MFI framework system to evaluate their utility. We discuss the specific implementation details of the waste recycling GPU algorithm and compare the methods to other parallel algorithms optimized for the framework system. We analyze the relationship between the statistical accuracy of our simulations and the CUDA block size to determine the efficient allocation of the GPU hardware resources. We make comparisons between the GPU and the serial CPU Monte Carlo implementations to assess speedup over conventional microprocessors. Finally, we apply our optimized GPU algorithms to the important problem of determining free energy landscapes, in this case for molecular motion through the zeolite LTA.
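As a toy illustration of the waste-recycling idea (ours, in Python rather than the paper's CUDA, and for a 1-D harmonic potential rather than methane in zeolite MFI): the estimator averages over both outcomes of each Metropolis move, weighted by the acceptance probability, so rejected trials still contribute statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0
energy = lambda x: 0.5 * x * x        # toy 1-D harmonic potential

def wr_metropolis_step(x, step=1.5):
    """Metropolis move with a waste-recycled estimate of <x^2>:
    the estimate blends trial and current states by the acceptance
    probability, whatever the accept/reject outcome."""
    y = x + step * rng.uniform(-1.0, 1.0)
    p = min(1.0, np.exp(-beta * (energy(y) - energy(x))))
    estimate = p * y * y + (1.0 - p) * x * x
    x_next = y if rng.random() < p else x
    return x_next, estimate

x, samples = 0.0, []
for _ in range(20000):
    x, e = wr_metropolis_step(x)
    samples.append(e)
print(np.mean(samples))   # ~1.0 = <x^2> for this potential at beta = 1
```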
Digital tomosynthesis mammography using a parallel maximum-likelihood reconstruction method
NASA Astrophysics Data System (ADS)
Wu, Tao; Zhang, Juemin; Moore, Richard; Rafferty, Elizabeth; Kopans, Daniel; Meleis, Waleed; Kaeli, David
2004-05-01
A parallel reconstruction method, based on an iterative maximum likelihood (ML) algorithm, is developed to provide fast reconstruction for digital tomosynthesis mammography. Tomosynthesis mammography acquires 11 low-dose projections of a breast by moving an x-ray tube over a 50° angular range. In parallel reconstruction, each projection is divided into multiple segments along the chest-to-nipple direction. Using the 11 projections, segments located at the same distance from the chest wall are combined to compute a partial reconstruction of the total breast volume. The shape of the partial reconstruction forms a thin slab, angled toward the x-ray source at a projection angle of 0°. The reconstruction of the total breast volume is obtained by merging the partial reconstructions. The overlap region between neighboring partial reconstructions and neighboring projection segments is utilized to compensate for the incomplete data at the boundary locations present in the partial reconstructions. A serial execution of the reconstruction is compared to a parallel implementation, using clinical data. The serial code was run on a PC with a single Pentium IV 2.2 GHz CPU. The parallel implementation was developed using MPI and run on a 64-node Linux cluster using 800 MHz Itanium CPUs. The serial reconstruction for a medium-sized breast (5 cm thickness, 11 cm chest-to-nipple distance) takes 115 minutes, while the parallel implementation takes only 3.5 minutes. The reconstruction time for a larger breast using a serial implementation takes 187 minutes, while a parallel implementation takes 6.5 minutes. No significant differences were observed between the reconstructions produced by the serial and parallel implementations.
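For readers unfamiliar with the underlying update, this is a minimal serial ML-EM iteration in NumPy (a generic sketch with a toy system matrix, not the authors' tomosynthesis code); in the parallel scheme above, the projection data would be split into chest-to-nipple segments, each reconstructing a slab that is later merged.

```python
import numpy as np

def mlem(A, proj, n_iter=20):
    """Serial ML-EM update: x <- x * (A^T (proj / (A x))) / (A^T 1)."""
    x = np.ones(A.shape[1])
    norm = A.T @ np.ones(A.shape[0])          # sensitivity image
    for _ in range(n_iter):
        ratio = proj / np.maximum(A @ x, 1e-12)
        x *= (A.T @ ratio) / np.maximum(norm, 1e-12)
    return x

# toy check: recover a 3-voxel "object" from a 4-ray system matrix
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
print(mlem(A, A @ x_true, n_iter=200))        # approaches x_true
```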
Zhou, Yanli; Faber, Tracy L.; Patel, Zenic; Folks, Russell D.; Cheung, Alice A.; Garcia, Ernest V.; Soman, Prem; Li, Dianfu; Cao, Kejiang; Chen, Ji
2013-01-01
Objective Left ventricular (LV) function and dyssynchrony parameters measured from serial gated single-photon emission computed tomography (SPECT) myocardial perfusion imaging (MPI) using blinded processing had a poorer repeatability than when manual side-by-side processing was used. The objective of this study was to validate whether an automatic alignment tool can reduce the variability of LV function and dyssynchrony parameters in serial gated SPECT MPI. Methods Thirty patients who had undergone serial gated SPECT MPI were prospectively enrolled in this study. Thirty minutes after the first acquisition, each patient was repositioned and a gated SPECT MPI image was reacquired. The two data sets were first processed blinded from each other by the same technologist in different weeks. These processed data were then realigned by the automatic tool, and manual side-by-side processing was carried out. All processing methods used standard iterative reconstruction and Butterworth filtering. The Emory Cardiac Toolbox was used to measure the LV function and dyssynchrony parameters. Results The automatic tool failed in one patient, who had a large, severe scar in the inferobasal wall. In the remaining 29 patients, the repeatability of the LV function and dyssynchrony parameters after automatic alignment was significantly improved from blinded processing and was comparable to manual side-by-side processing. Conclusion The automatic alignment tool can be an alternative method to manual side-by-side processing to improve the repeatability of LV function and dyssynchrony measurements by serial gated SPECT MPI. PMID:23211996
Choi, Jiwoong; Hoffman, Eric A; Lin, Ching-Long; Milhem, Mohammed M; Tessier, Jean; Newell, John D
2017-01-01
Extra-thoracic tumors send out pilot cells that attach to the pulmonary endothelium. We hypothesized that this could alter regional lung mechanics (tissue stiffening or accumulation of fluid and inflammatory cells) through interactions with host cells. We explored this with serial inspiratory computed tomography (CT) and image matching to assess regional changes in lung expansion. We retrospectively assessed 44 pairs of two serial CT scans on 21 sarcoma patients: 12 without lung metastases and 9 with lung metastases. For each subject, two or more serial inspiratory clinically-derived CT scans were retrospectively collected. Two research-derived control groups were included: 7 normal nonsmokers and 12 asymptomatic smokers, with two inspiratory scans taken the same day or one year apart, respectively. We performed image registration for local-to-local matching of scans to baseline, and derived local expansion and density changes at an acinar scale. Welch's two-sample t test was used for comparison between groups. Statistical significance was determined with a p value < 0.05. Lung regions of metastatic sarcoma patients (but not the normal control group) demonstrated an increased proportion of normalized lung expansion between the first and second CT. These hyper-expanded regions were associated with, but not limited to, visible metastatic lung lesions. Compared with the normal control group, the percent of increased normalized hyper-expanded lung in sarcoma subjects was significantly increased (p < 0.05). There was also evidence of increased lung "tissue" volume (non-air components) in the hyper-expanded regions of the cancer subjects relative to non-hyper-expanded regions. "Tissue" volume increase was present in the hyper-expanded regions of both metastatic and non-metastatic sarcoma subjects. This could putatively represent regional inflammation related to tumor pilot cell-host interactions. This new quantitative CT (QCT) method for linking serially acquired inspiratory CT images may provide a diagnostic and prognostic means to objectively characterize regional responses in the lung following oncological treatment and monitoring for lung metastases.
Application of a Scalable, Parallel, Unstructured-Grid-Based Navier-Stokes Solver
NASA Technical Reports Server (NTRS)
Parikh, Paresh
2001-01-01
A parallel version of an unstructured-grid based Navier-Stokes solver, USM3Dns, previously developed for efficient operation on a variety of parallel computers, has been enhanced to incorporate upgrades made to the serial version. The resultant parallel code has been extensively tested on a variety of problems of aerospace interest and on two sets of parallel computers to understand and document its characteristics. An innovative grid renumbering construct and use of non-blocking communication are shown to produce superlinear computing performance. Preliminary results from parallelization of a recently introduced "porous surface" boundary condition are also presented.
2011-09-01
Hardware: 15 V power supply for the IMU; switching 5/12 V ATX power supply for the computer and hard drive; L1/L2 active antenna on a small backplane; USB-to-serial interface. [Figure 4: UAS Target Location Technology for Ground Based Observers (TLGBO).]
Parallel solution of sparse one-dimensional dynamic programming problems
NASA Technical Reports Server (NTRS)
Nicol, David M.
1989-01-01
Parallel computation offers the potential for quickly solving large computational problems. However, it is often a non-trivial task to effectively use parallel computers. Solution methods must sometimes be reformulated to exploit parallelism; the reformulations are often more complex than their slower serial counterparts. We illustrate these points by studying the parallelization of sparse one-dimensional dynamic programming problems, those which do not obviously admit substantial parallelization. We propose a new method for parallelizing such problems, develop analytic models which help us to identify problems which parallelize well, and compare the performance of our algorithm with existing algorithms on a multiprocessor.
Method for implementation of recursive hierarchical segmentation on parallel computers
NASA Technical Reports Server (NTRS)
Tilton, James C. (Inventor)
2005-01-01
A method, computer readable storage, and apparatus for implementing a recursive hierarchical segmentation algorithm on a parallel computing platform. The method includes setting a bottom level of recursion that defines where a recursive division of an image into sections stops dividing, and setting an intermediate level of recursion where the recursive division changes from a parallel implementation into a serial implementation. The segmentation algorithm is implemented according to the set levels. The method can also include setting a convergence check level of recursion with which the first level of recursion communicates with when performing a convergence check.
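A structural sketch of the two thresholds described above (our own illustration in Python with threads; names like PARALLEL_LEVEL are hypothetical, and a production implementation would use processes or MPI): recursion forks workers only above the intermediate level and stops dividing at the bottom level.

```python
from concurrent.futures import ThreadPoolExecutor

PARALLEL_LEVEL = 2   # below this depth, recursion proceeds serially
BOTTOM_LEVEL = 5     # depth at which the recursive division stops

def segment(region, depth=0):
    """Recursively divide `region`, forking only at shallow depths."""
    if depth >= BOTTOM_LEVEL or len(region) <= 1:
        return [region]                       # leaf: stop dividing
    mid = len(region) // 2
    halves = (region[:mid], region[mid:])
    if depth < PARALLEL_LEVEL:                # parallel part of the tree
        with ThreadPoolExecutor(max_workers=2) as pool:
            results = list(pool.map(segment, halves, (depth + 1,) * 2))
    else:                                     # serial part of the tree
        results = [segment(h, depth + 1) for h in halves]
    return [leaf for part in results for leaf in part]

print(segment(list(range(64))))               # 32 leaves of 2 elements each
```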
drPACS: A Simple UNIX Execution Pipeline
NASA Astrophysics Data System (ADS)
Teuben, P.
2011-07-01
We describe a very simple yet flexible and effective pipeliner for UNIX commands. It creates a Makefile to define a set of serially dependent commands. The commands in the pipeline share a common set of parameters by which they can communicate. Commands must follow a simple convention to retrieve and store parameters. Pipeline parameters can optionally be made persistent across multiple runs of the pipeline. Tools were added to simplify running a large series of pipelines, which can then also be run in parallel.
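The Makefile-of-serial-steps idea can be sketched in a few lines of Python (a hypothetical generator in the spirit of the tool, not drPACS itself; the file names and `stepN.done` stamp convention are our own): each stage depends on the previous stage's stamp file, so `make` re-runs only what changed.

```python
def write_pipeline(makefile, steps):
    """Emit a Makefile in which step N depends on step N-1."""
    with open(makefile, "w") as f:
        f.write(f"all: step{len(steps)}.done\n\n")   # default target
        prev = ""
        for i, cmd in enumerate(steps, 1):
            stamp = f"step{i}.done"
            f.write(f"{stamp}: {prev}\n\t{cmd} && touch {stamp}\n\n")
            prev = stamp

# usage: write the file, then run `make -f pipeline.mk`
write_pipeline("pipeline.mk", ["./calibrate", "./reduce", "./stack"])
```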
High Rate Digital Demodulator ASIC
NASA Technical Reports Server (NTRS)
Ghuman, Parminder; Sheikh, Salman; Koubek, Steve; Hoy, Scott; Gray, Andrew
1998-01-01
The architecture of a High Rate (600 Megabits per second) Digital Demodulator (HRDD) ASIC capable of demodulating BPSK and QPSK modulated data is presented in this paper. The advantages of all-digital processing include increased flexibility and reliability with reduced reproduction costs. Conventional serial digital processing would require high processing rates, necessitating a hardware implementation in a technology other than CMOS, such as Gallium Arsenide (GaAs), which has high cost and power requirements. It is more desirable to use CMOS technology with its lower power requirements and higher gate density. However, digital demodulation of high data rates in CMOS requires parallel algorithms to process the sampled data at a rate lower than the data rate. The parallel processing algorithms described here were developed jointly by NASA's Goddard Space Flight Center (GSFC) and the Jet Propulsion Laboratory (JPL). The resulting all-digital receiver has the capability to demodulate BPSK, QPSK, OQPSK, and DQPSK at data rates in excess of 300 Megabits per second (Mbps) per channel. This paper will provide an overview of the parallel architecture and features of the HRDD ASIC. In addition, this paper will provide an overview of the implementation of the hardware architectures used to create flexibility over conventional high rate analog or hybrid receivers. This flexibility includes a wide range of data rates, modulation schemes, and operating environments. In conclusion, it will be shown how this high rate digital demodulator can be used with an off-the-shelf A/D and a flexible analog front end, both of which are numerically computer controlled, to produce a very flexible, low cost, high rate digital receiver.
Note: computer controlled rotation mount for large diameter optics.
Rakonjac, Ana; Roberts, Kris O; Deb, Amita B; Kjærgaard, Niels
2013-02-01
We describe the construction of a motorized optical rotation mount with a 40 mm clear aperture. The device is used to remotely control the power of large diameter laser beams for a magneto-optical trap. A piezo-electric ultrasonic motor on a printed circuit board provides rotation with a precision better than 0.03° and allows for a very compact design. The rotation unit is controlled from a computer via serial communication, making integration into most software control platforms straightforward.
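Driving such a mount from Python might look like the sketch below (using the widely available pyserial package; the `MOVE` command and its reply format are hypothetical, since the abstract does not publish the device's actual firmware protocol):

```python
import serial  # pyserial

def set_angle(port, angle_deg):
    """Send an absolute rotation command and return the reply line.

    The "MOVE <angle>" command is a placeholder protocol; substitute
    the commands of the actual rotation-mount firmware.
    """
    with serial.Serial(port, baudrate=9600, timeout=1.0) as conn:
        conn.write(f"MOVE {angle_deg:.2f}\r\n".encode("ascii"))
        return conn.readline().decode("ascii").strip()

# usage: set_angle("/dev/ttyUSB0", 45.0)
```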
2006-09-01
...required directional control for each thruster due to their high precision and equivalent power and computer-interface requirements to those for the... USB (Universal Serial Bus) ports, LPT (Line Printing Terminal) and KVM (Keyboard-Video-Mouse) interfaces. Additionally, power is supplied to the computer through... of the IDE cable to the Prometheus Development Kit ACC-IDEEXT. Connect a small drive power connector from the desktop ATX power supply to the ACC
2014-07-08
A brain-computer interaction (BCI) system allows human subjects to communicate with or control an external device with their brain signals [1], or to use those brain signals to interact with computers, environments, or even other humans [2]. One application of BCI is to use brain signals to distinguish target images within a large collection of non-target images [2]. Such BCI-based systems can drastically increase the speed of target identification in
Lee, Hyunyoung; Cheon, Byungsik; Hwang, Minho; Kang, Donghoon; Kwon, Dong-Soo
2018-02-01
In robotic surgical systems, commercial master devices have limitations owing to insufficient workspace and lack of intuitiveness. To overcome these limitations, a remote-center-of-motion (RCM) master manipulator was proposed. The feasibility of the proposed RCM structure was evaluated through kinematic analysis using a conventional serial structure. Two performance comparison experiments (peg transfer task and objective transfer task) were conducted for the developed master and Phantom Omni. The kinematic analysis results showed that compared with the serial structure, the proposed RCM structure has better performance in terms of design efficiency (19%) and workspace quality (59.08%). Further, in comparison with Phantom Omni, the developed master significantly increased task efficiency and significantly decreased workload in both experiments. The comparatively better performance in terms of intuitiveness, design efficiency, and operability of the proposed master for a robotic system for minimally invasive surgery was confirmed through kinematic and experimental analysis. Copyright © 2017 John Wiley & Sons, Ltd.
Tardif, Pier-Luc; Bertrand, Marie-Jeanne; Abran, Maxime; Castonguay, Alexandre; Lefebvre, Joël; Stähli, Barbara E; Merlet, Nolwenn; Mihalache-Avram, Teodora; Geoffroy, Pascale; Mecteau, Mélanie; Busseuil, David; Ni, Feng; Abulrob, Abedelnasser; Rhéaume, Éric; L'Allier, Philippe; Tardif, Jean-Claude; Lesage, Frédéric
2016-12-15
Atherosclerotic cardiovascular diseases are characterized by the formation of a plaque in the arterial wall. Intravascular ultrasound (IVUS) provides high-resolution images allowing delineation of atherosclerotic plaques. When combined with near infrared fluorescence (NIRF), the plaque can also be studied at a molecular level with a large variety of biomarkers. In this work, we present a system enabling automated volumetric histology imaging of excised aortas that can spatially correlate results with combined IVUS/NIRF imaging of lipid-rich atheroma in cholesterol-fed rabbits. Pullbacks in the rabbit aortas were performed with a dual modality IVUS/NIRF catheter developed by our group. Ex vivo three-dimensional (3D) histology was performed combining optical coherence tomography (OCT) and confocal fluorescence microscopy, providing high-resolution anatomical and molecular information, respectively, to validate in vivo findings. The microscope was combined with a serial slicer allowing for the imaging of the whole vessel automatically. Colocalization of in vivo and ex vivo results is demonstrated. Slices can then be recovered to be tested in conventional histology.
Ceramic micro-injection molded nozzles for serial femtosecond crystallography sample delivery
NASA Astrophysics Data System (ADS)
Beyerlein, K. R.; Adriano, L.; Heymann, M.; Kirian, R.; Knoška, J.; Wilde, F.; Chapman, H. N.; Bajt, S.
2015-12-01
Serial femtosecond crystallography (SFX) using X-ray Free-Electron Lasers (XFELs) allows for room temperature protein structure determination without evidence of conventional radiation damage. In this method, a liquid suspension of protein microcrystals can be delivered to the X-ray beam in vacuum as a micro-jet, which replenishes the crystals at a rate that exceeds the current XFEL pulse repetition rate. Gas dynamic virtual nozzles produce the required micrometer-sized streams by the focusing action of a coaxial sheath gas and have been shown to be effective for SFX experiments. Here, we describe the design and characterization of such nozzles assembled from ceramic micro-injection molded outer gas-focusing capillaries. Trends of the emitted jet diameter and jet length as a function of supplied liquid and gas flow rates are measured by a fast imaging system. The observed trends are explained by derived relationships considering choked gas flow and liquid flow conservation. Finally, the performance of these nozzles in a SFX experiment is presented, including an analysis of the observed background.
All-optical gain-clamped wideband serial EDFA with ring-shaped laser
NASA Astrophysics Data System (ADS)
Lu, Yung-Hsin; Chi, Sien
2004-01-01
We experimentally investigate the static and dynamic properties of an all-optical gain-clamped wideband (1530-1600 nm) serial erbium-doped fiber amplifier with a single ring-shaped laser, which consists of a circulator and a fiber Bragg grating at the output end. The lasing light passing through the second stage is intentionally blocked at the output end by a C/L-band wavelength division multiplexer with a huge insertion loss, and thus the copropagating ring-laser light is formed by the first stage. This design can simultaneously clamp the gains of 1547 and 1584 nm probes near 14 dB and shows the same dynamic range of input power up to -4 dBm for the conventional band and the long-wavelength band. Furthermore, the transient responses of 1551 and 1596 nm surviving channels exhibit small power excursions (<0.54 dB) as the total saturating tone with -2 dBm is modulated on and off at 270 Hz.
NASA Technical Reports Server (NTRS)
Robinson, Julie A.; Tate-Brown, Judy M.
2009-01-01
Using a commercial software CD and minimal up-mass, SNFM monitors the Payload local area network (LAN) to analyze and troubleshoot LAN data traffic. Validating LAN traffic models may allow for faster and more reliable computer networks to sustain systems and science on future space missions. Research Summary: This experiment studies the function of the computer network onboard the ISS. On-orbit packet statistics are captured and used to validate ground based medium rate data link models and enhance the way that the local area network (LAN) is monitored. This information will allow monitoring and improvement in the data transfer capabilities of on-orbit computer networks. The Serial Network Flow Monitor (SNFM) experiment attempts to characterize the network equivalent of traffic jams on board ISS. The SNFM team is able to specifically target historical problem areas including the SAMS (Space Acceleration Measurement System) communication issues, data transmissions from the ISS to the ground teams, and multiple users on the network at the same time. By looking at how various users interact with each other on the network, conflicts can be identified and work can begin on solutions. SNFM is comprised of a commercial off the shelf software package that monitors packet traffic through the payload Ethernet LANs (local area networks) on board ISS.
Xyce parallel electronic simulator users guide, version 6.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase (a message-passing parallel implementation), which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.
Xyce parallel electronic simulator users' guide, Version 6.0.1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase (a message-passing parallel implementation), which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.
Efficient Parallel Video Processing Techniques on GPU: From Framework to Implementation
Su, Huayou; Wen, Mei; Wu, Nan; Ren, Ju; Zhang, Chunyuan
2014-01-01
Through reorganizing the execution order and optimizing the data structure, we proposed an efficient parallel framework for the H.264/AVC encoder based on a massively parallel architecture. We implemented the proposed framework in CUDA on NVIDIA's GPU. Not only are the compute-intensive components of the H.264 encoder parallelized, but the control-intensive components, such as CAVLC and the deblocking filter, are also realized effectively. In addition, we proposed serial optimization methods, including multiresolution multiwindow motion estimation, a multilevel parallel strategy to enhance the parallelism of intracoding as much as possible, component-based parallel CAVLC, and a direction-priority deblocking filter. More than 96% of the workload of the H.264 encoder is offloaded to the GPU. Experimental results show that the parallel implementation outperforms the serial program by a speedup ratio of 20 and satisfies the requirement of real-time HD encoding at 30 fps. The loss of PSNR ranges from 0.14 dB to 0.77 dB at the same bitrate. Through analysis of the kernels, we found that the speedup ratios of the compute-intensive algorithms are proportional to the computational power of the GPU. However, the performance of the control-intensive parts (CAVLC) is strongly related to the memory bandwidth, which gives an insight for new architecture design. PMID:24757432
Xyce parallel electronic simulator users guide, version 6.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase (a message-passing parallel implementation), which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.
Methods of parallel computation applied on granular simulations
NASA Astrophysics Data System (ADS)
Martins, Gustavo H. B.; Atman, Allbens P. F.
2017-06-01
Every year, parallel computing becomes cheaper and more accessible. As a consequence, applications have spread over all research areas. Granular materials are a promising area for parallel computing. To prove this statement we study the impact of parallel computing on simulations of the BNE (Brazil Nut Effect). This effect is the remarkable rising of an intruder confined in a granular medium when vertically shaken against gravity. By means of DEM (Discrete Element Method) simulations, we study the code performance, testing different methods to improve clock time. A comparison between serial and parallel algorithms, using OpenMP®, is also shown. The best improvement was obtained by optimizing the function that finds contacts using Verlet's cells.
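The contact-finding optimization mentioned last is the classic cell-list idea; a minimal 2-D Python sketch (ours, not the authors' DEM code) bins particles into Verlet cells so candidate contacts are limited to neighbouring cells, and in an OpenMP version the outer loop over cells is what gets parallelized.

```python
import numpy as np

def build_cells(pos, box, cell_size):
    """Bin 2-D particle positions (shape (N, 2)) into square cells."""
    ncell = max(1, int(box // cell_size))
    idx = np.clip((pos / box * ncell).astype(int), 0, ncell - 1)
    cells = {}
    for i, key in enumerate(map(tuple, idx)):
        cells.setdefault(key, []).append(i)
    return cells, ncell

def candidate_pairs(cells, ncell):
    """Yield pairs from a cell and its 8 periodic neighbours only,
    cutting the O(N^2) all-pairs search to roughly O(N)."""
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                other = cells.get(((cx + dx) % ncell, (cy + dy) % ncell), [])
                for i in members:
                    for j in other:
                        if i < j:
                            yield i, j

pos = np.random.default_rng(1).uniform(0.0, 10.0, size=(100, 2))
cells, ncell = build_cells(pos, box=10.0, cell_size=1.0)
print(sum(1 for _ in candidate_pairs(cells, ncell)))  # candidate contacts
```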
Intraoperative 3-Dimensional Computed Tomography and Navigation in Foot and Ankle Surgery.
Chowdhary, Ashwin; Drittenbass, Lisca; Dubois-Ferrière, Victor; Stern, Richard; Assal, Mathieu
2016-09-01
Computer-assisted orthopedic surgery has developed dramatically during the past 2 decades. This article describes the use of intraoperative 3-dimensional computed tomography and navigation in foot and ankle surgery. Traditional imaging based on serial radiography or C-arm-based fluoroscopy does not provide simultaneous real-time 3-dimensional imaging, and thus leads to suboptimal visualization and guidance. Three-dimensional computed tomography allows for accurate intraoperative visualization of the position of bones and/or navigation implants. Such imaging and navigation helps to further reduce intraoperative complications, leads to improved surgical outcomes, and may become the gold standard in foot and ankle surgery. [Orthopedics.2016; 39(5):e1005-e1010.]. Copyright 2016, SLACK Incorporated.
Shao, Chenzhong; Tanaka, Shuji; Nakayama, Takahiro; Hata, Yoshiyuki; Muroyama, Masanori
2018-01-15
For installing many sensors in a limited space with limited computing resources, digitization of the sensor output at the site of sensation has advantages such as a small amount of wiring, low signal interference and high scalability. For this purpose, we have developed a dedicated Complementary Metal-Oxide-Semiconductor (CMOS) Large-Scale Integration (LSI) chip (referred to as a "sensor platform LSI") for bus-networked Micro-Electro-Mechanical-Systems (MEMS)-LSI integrated sensors. In this LSI, collision avoidance, adaptation and event-driven functions are simply implemented to relieve data collision and congestion in asynchronous serial bus communication. In this study, we developed a Printed Circuit Board (PCB)-based network system with 48 sensor platform LSIs in a backbone bus topology with a bus length of 2.4 m. We evaluated the serial communication performance when all 48 LSIs operated simultaneously with the adaptation function. The number of data packets received from each LSI was almost identical, and the average sampling frequency of the 384 capacitance channels (eight per LSI) was 73.66 Hz.
DFT algorithms for bit-serial GaAs array processor architectures
NASA Technical Reports Server (NTRS)
Mcmillan, Gary B.
1988-01-01
Systems and Processes Engineering Corporation (SPEC) has developed an innovative array processor architecture for computing Fourier transforms and other commonly used signal processing algorithms. This architecture is designed to extract the highest possible array performance from state-of-the-art GaAs technology. SPEC's architectural design includes a high performance RISC processor implemented in GaAs, along with a Floating Point Coprocessor and a unique Array Communications Coprocessor, also implemented in GaAs technology. Together, these data processors represent the latest in technology, both from an architectural and implementation viewpoint. SPEC has examined numerous algorithms and parallel processing architectures to determine the optimum array processor architecture. SPEC has developed an array processor architecture with integral communications ability to provide maximum node connectivity. The Array Communications Coprocessor embeds communications operations directly in the core of the processor architecture. A Floating Point Coprocessor architecture has been defined that utilizes Bit-Serial arithmetic units, operating at very high frequency, to perform floating point operations. These Bit-Serial devices reduce the device integration level and complexity to a level compatible with state-of-the-art GaAs device technology.
Development of CMOS Imager Block for Capsule Endoscope
NASA Astrophysics Data System (ADS)
Shafie, S.; Fodzi, F. A. M.; Tung, L. Q.; Lioe, D. X.; Halin, I. A.; Hasan, W. Z. W.; Jaafar, H.
2014-04-01
This paper presents the development of an imager block to be incorporated in a capsule endoscopy system. Since the capsule endoscope is used to diagnose gastrointestinal diseases, the imager block must be small enough for patients to swallow comfortably. In this project, a small 1.5 V button battery is used as the power supply, while the voltage supply requirements for other components such as the microcontroller and the CMOS image sensor are higher. Therefore, a voltage booster circuit is proposed to boost the supply voltage from 1.5 V to 3.3 V. A low-power microcontroller is used to generate control pulses for the CMOS image sensor and to convert the 8-bit parallel data output to serial data to be transmitted to the display panel. The results show that the voltage booster circuit was able to boost the supply voltage from 1.5 V to 3.3 V. The microcontroller precisely controls the CMOS image sensor to produce parallel data, which is then serialized by the microcontroller. The serial data is then successfully translated into 2 fps images and displayed on a computer.
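The parallel-to-serial conversion performed by the microcontroller is simple enough to sketch in a few lines (an illustrative Python model of the bit framing, not firmware):

```python
def parallel_to_serial(samples, msb_first=True):
    """Flatten 8-bit parallel samples into a serial bit stream."""
    order = range(7, -1, -1) if msb_first else range(8)
    return [(byte >> k) & 1 for byte in samples for k in order]

# 0xA5 = 0b10100101 -> [1, 0, 1, 0, 0, 1, 0, 1]
print(parallel_to_serial([0xA5]))
```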
Low-Z polymer sample supports for fixed-target serial femtosecond X-ray crystallography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feld, Geoffrey K.; Heymann, Michael; Benner, W. Henry
X-ray free-electron lasers (XFELs) offer a new avenue to the structural probing of complex materials, including biomolecules. Delivery of precious sample to the XFEL beam is a key consideration, as the sample of interest must be serially replaced after each destructive pulse. The fixed-target approach to sample delivery involves depositing samples on a thin-film support and subsequent serial introduction via a translating stage. Some classes of biological materials, including two-dimensional protein crystals, must be introduced on fixed-target supports, as they require a flat surface to prevent sample wrinkling. A series of wafer and transmission electron microscopy (TEM)-style grid supports constructed of low-Z plastic have been custom-designed and produced. Aluminium TEM grid holders were engineered, capable of delivering up to 20 different conventional or plastic TEM grids using fixed-target stages available at the Linac Coherent Light Source (LCLS). As proof-of-principle, X-ray diffraction has been demonstrated from two-dimensional crystals of bacteriorhodopsin and three-dimensional crystals of anthrax toxin protective antigen mounted on these supports at the LCLS. In conclusion, the benefits and limitations of these low-Z fixed-target supports are discussed; it is the authors' belief that they represent a viable and efficient alternative to previously reported fixed-target supports for conducting diffraction studies with XFELs.
Decoupling Identification for Serial Two-Link Two-Inertia System
NASA Astrophysics Data System (ADS)
Oaki, Junji; Adachi, Shuichi
The purpose of our study is to develop a precise model by applying the technique of system identification for the model-based control of a nonlinear robot arm, under taking joint-elasticity into consideration. We previously proposed a systematic identification method, called “decoupling identification,” for a “SCARA-type” planar two-link robot arm with elastic joints caused by the Harmonic-drive® reduction gears. The proposed method serves as an extension of the conventional rigid-joint-model-based identification. The robot arm is treated as a serial two-link two-inertia system with nonlinearity. The decoupling identification method using link-accelerometer signals enables the serial two-link two-inertia system to be divided into two linear one-link two-inertia systems. The MATLAB®'s commands for state-space model estimation are utilized in the proposed method. Physical parameters such as motor inertias, link inertias, joint-friction coefficients, and joint-spring coefficients are estimated through the identified one-link two-inertia systems using a gray-box approach. This paper describes accuracy evaluations using the two-link arm for the decoupling identification method under introducing closed-loop-controlled elements and varying amplitude-setup of identification-input. Experimental results show that the identification method also works with closed-loop-controlled elements. Therefore, the identification method is applicable to a “PUMA-type” vertical robot arm under gravity.
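The gray-box idea, fitting physical parameters to an identified model, can be illustrated with a deliberately simplified example: a rigid one-link approximation tau = J*domega + b*omega fitted by linear least squares. This is a sketch of the parameter-estimation step only, not the paper's two-inertia decoupling method, and all numbers are hypothetical:

```python
import numpy as np

# Simulated measurement of a rigid one-link joint (hypothetical values).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 500)
omega = np.sin(2 * np.pi * t)                 # measured motor velocity
domega = np.gradient(omega, t)                # differentiated acceleration
J_true, b_true = 0.05, 0.8                    # inertia and viscous friction
tau = J_true * domega + b_true * omega + 0.01 * rng.standard_normal(t.size)

# Gray-box fit: tau = J*domega + b*omega is linear in the parameters.
A = np.column_stack([domega, omega])
(J_hat, b_hat), *_ = np.linalg.lstsq(A, tau, rcond=None)
print(f"J = {J_hat:.4f}, b = {b_hat:.4f}")    # close to 0.05 and 0.8
```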
Xue, Zhong; Shen, Dinggang; Li, Hai; Wong, Stephen
2010-01-01
The traditional fuzzy clustering algorithm and its extensions have been successfully applied in medical image segmentation. However, because of the variability of tissues and anatomical structures, the clustering results might be biased by the tissue population and intensity differences. For example, clustering-based algorithms tend to over-segment the white matter tissues of MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation, i.e., a series of 3-D MR brain images of the same subject at different time points. Using the new serial image segmentation algorithm within the CLASSIC framework, which iteratively segments the images and estimates the longitudinal deformations, we improved both accuracy and robustness for serial image computing, and at the same time produced longitudinally consistent segmentations and stable measures. In the algorithm, the tissue probability maps consist of both population-based and subject-specific segmentation priors. Experimental studies using both simulated longitudinal MR brain data and the Alzheimer's Disease Neuroimaging Initiative (ADNI) data confirmed that more accurate and robust segmentation results can be obtained by using both priors. The proposed algorithm can be applied in longitudinal follow-up studies of MR brain imaging with subtle morphological changes for neurological disorders. PMID:26566399
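One plausible form of the tissue-probability-map constraint is to reweight standard fuzzy c-means memberships by the prior and renormalize; the sketch below follows that formulation and is not the authors' exact algorithm:

```python
import numpy as np

def prior_constrained_fcm(x, centers, prior, m=2.0, n_iter=20, eps=1e-9):
    """Fuzzy c-means on intensities x (N,) with K classes, where the usual
    memberships are multiplied by a tissue probability map prior (N, K)
    and renormalized each iteration. A plausible sketch only."""
    c = np.asarray(centers, dtype=float)
    for _ in range(n_iter):
        d2 = (x[:, None] - c[None, :]) ** 2 + eps        # distances (N, K)
        u = 1.0 / d2 ** (1.0 / (m - 1.0))                # FCM memberships
        u /= u.sum(axis=1, keepdims=True)
        u *= prior                                       # apply the prior
        u /= u.sum(axis=1, keepdims=True) + eps
        um = u ** m
        c = (um * x[:, None]).sum(axis=0) / (um.sum(axis=0) + eps)
    return u, c

x = np.concatenate([np.random.default_rng(0).normal(0.3, 0.05, 100),
                    np.random.default_rng(1).normal(0.7, 0.05, 100)])
prior = np.full((200, 2), 0.5)                           # uninformative map
u, c = prior_constrained_fcm(x, [0.2, 0.8], prior)
```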
Physics of Information Assurance
The Free Space Optical Data Transmission for Secure Computing patent application, provisional application serial number 62/322,391, was filed in the United States Patent and Trademark Office on April 14, 2016. HfO memristor devices were measured over a range of temperatures up to 250°C and showed stable performance at these elevated temperatures.
2008-04-01
The system hardware comprises an EMI sensor, a Cs vapor magnetometer, a fluxgate magnetometer, a hand-held data acquisition computer, and integrated supporting electronics. The magnetometer outputs are converted into digital format and transmitted as a single serial data string to log the Cs and fluxgate magnetometer data.
Interfacing Optical Document Scanners: Principles and Practical Considerations.
ERIC Educational Resources Information Center
Krus, David J.; Kodimer, Dennis
1987-01-01
Handlers for interfacing the ScanTron and 2700 Optical Mark Readers with the IBM AT/XT/PC and Tandy 2000/1000/3000 iAPX 88/186/286 based computers were described. Differences between programming an RS232C serial port using BIOS interrupts and directly addressing the Motorola 8550 ART microprocessor were discussed. (Author/LMO)
NASA Technical Reports Server (NTRS)
Rodriguez, R. M.
1975-01-01
The Balloon-Borne Ultraviolet Stellar Spectrometer (BUSS) Science Data Decommutation Program (BAPS48) is a pulse code modulation decommutation program that formats the BUSS science data contained on a one-inch PCM tracking tape into a seven-track, serial-bit-stream formatted digital tape.
CT appearance of mesenteric saponification.
Paris, A; Willing, S J
1991-01-01
Although saponification of the pancreas is a frequent finding on computed tomography, saponification of extrapancreatic mesenteric sites has not been previously recognized. A case is presented of acute pancreatitis in which serial scans over a four-year period documented calcifications in old extrapancreatic phlegmons. Saponification from pancreatitis should be considered in the differential diagnosis of mesenteric calcifications.
Magnet measurement interfacing to the G-64 Euro standard bus and testing G-64 modules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogrefe, R.L.
1995-07-01
The Magnet Measurement system utilizes various modules with a G-64 Euro (Gespac) Standard Interface. All modules are designed to be software controlled, normally under the constraints of the OS-9 operating system with all data transfers to a host computer accomplished by a serial link.
Advances in Parallelization for Large Scale Oct-Tree Mesh Generation
NASA Technical Reports Server (NTRS)
O'Connell, Matthew; Karman, Steve L.
2015-01-01
Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.
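The "top down" phase of such a hierarchical method can be pictured as recursive subdivision of flagged cells; a minimal serial sketch (the paper's contribution is doing this in parallel, which is not shown here):

```python
from dataclasses import dataclass, field

@dataclass
class OctNode:
    center: tuple                       # (x, y, z) cell center
    size: float                         # cell edge length
    children: list = field(default_factory=list)

def refine(node, needs_refinement, max_depth, depth=0):
    """Split every cell flagged by the predicate into 8 children."""
    if depth >= max_depth or not needs_refinement(node):
        return
    h = node.size / 4.0
    cx, cy, cz = node.center
    for dx in (-h, h):
        for dy in (-h, h):
            for dz in (-h, h):
                child = OctNode((cx + dx, cy + dy, cz + dz), node.size / 2.0)
                node.children.append(child)
                refine(child, needs_refinement, max_depth, depth + 1)

# Refine cells whose centers lie within distance 1 of the origin.
root = OctNode((0.0, 0.0, 0.0), 8.0)
refine(root, lambda n: sum(c * c for c in n.center) ** 0.5 < 1.0, max_depth=3)
```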
Support for Debugging Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)
2001-01-01
We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.
Proteus: a reconfigurable computational network for computer vision
NASA Astrophysics Data System (ADS)
Haralick, Robert M.; Somani, Arun K.; Wittenbrink, Craig M.; Johnson, Robert; Cooper, Kenneth; Shapiro, Linda G.; Phillips, Ihsin T.; Hwang, Jenq N.; Cheung, William; Yao, Yung H.; Chen, Chung-Ho; Yang, Larry; Daugherty, Brian; Lorbeski, Bob; Loving, Kent; Miller, Tom; Parkins, Larye; Soos, Steven L.
1992-04-01
The Proteus architecture is a highly parallel MIMD (multiple-instruction, multiple-data) machine, optimized for large-granularity tasks such as machine vision and image processing. The system can achieve 20 Giga-flops (80 Giga-flops peak). It accepts data via multiple serial links at a rate of up to 640 megabytes/second. The system employs a hierarchical reconfigurable interconnection network, with the highest level being a circuit-switched Enhanced Hypercube serial interconnection network for internal data transfers. The system is designed to use 256 to 1,024 RISC processors. The processors use one-megabyte external Read/Write Allocating Caches for reduced multiprocessor contention. The system detects, locates, and replaces faulty subsystems using redundant hardware to facilitate fault tolerance. The parallelism is directly controllable through an advanced software system for partitioning, scheduling, and development. System software includes a translator for the INSIGHT language, a parallel debugger, low- and high-level simulators, and a message passing system for all control needs. Image processing application software includes a variety of point operators, neighborhood operators, convolution, and the mathematical morphology operations of binary and gray-scale dilation, erosion, opening, and closing.
Relative Debugging of Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)
2002-01-01
We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular, the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.
Hierarchical Processing of Auditory Objects in Humans
Kumar, Sukhbinder; Stephan, Klaas E; Warren, Jason D; Friston, Karl J; Griffiths, Timothy D
2007-01-01
This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal “templates” in the PT before further analysis of the abstracted form in anterior temporal lobe areas. PMID:17542641
A comparative study of serial and parallel aeroelastic computations of wings
NASA Technical Reports Server (NTRS)
Byun, Chansup; Guruswamy, Guru P.
1994-01-01
A procedure for computing the aeroelasticity of wings on parallel multiple-instruction, multiple-data (MIMD) computers is presented. In this procedure, fluids are modeled using Euler equations, and structures are modeled using modal or finite element equations. The procedure is designed in such a way that each discipline can be developed and maintained independently by using a domain decomposition approach. In the present parallel procedure, each computational domain is scalable. A parallel integration scheme is used to compute aeroelastic responses by solving fluid and structural equations concurrently. The computational efficiency issues of parallel integration of both fluid and structural equations are investigated in detail. This approach, which reduces the total computational time by a factor of almost 2, is demonstrated for a typical aeroelastic wing by using various numbers of processors on the Intel iPSC/860.
Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications
NASA Technical Reports Server (NTRS)
Sun, Xian-He
1997-01-01
Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan was to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate the newly developed algorithms into actual simulation packages. This plan has been achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry that has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm and Reduced Parallel Diagonal Dominant (RPDD) algorithm have been carefully studied on different parallel platforms for different applications, and a NASA simulation code developed by Man M. Rai and his colleagues has been parallelized and implemented based on data dependency analysis. These achievements are addressed in detail in the paper.
Address-event-based platform for bioinspired spiking systems
NASA Astrophysics Data System (ADS)
Jiménez-Fernández, A.; Luján, C. D.; Linares-Barranco, A.; Gómez-Rodríguez, F.; Rivas, M.; Jiménez, G.; Civit, A.
2007-05-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between a huge number of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time-multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Neurons generate "events" according to their activity levels: more active neurons generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. When building multi-chip, multi-layered AER systems, it is absolutely necessary to have a computer interface that allows (a) reading AER interchip traffic into the computer and visualizing it on the screen, and (b) converting a conventional frame-based video stream in the computer into AER and injecting it at some point of the AER structure. This is necessary for testing and debugging complex AER systems. On the other hand, the use of a commercial personal computer implies depending on software tools and operating systems that can make the system slower and less robust. This paper addresses the problem of communicating among several AER-based chips to compose a powerful processing system; the problem was discussed in the Neuromorphic Engineering Workshop of 2006. The platform is based on an embedded computer, a powerful FPGA and serial links, making the system faster and stand-alone (independent of a PC). A new platform is presented that allows connecting up to eight AER-based chips to a Spartan 3 4000 FPGA. The FPGA is responsible for the Address-Event network communication and, at the same time, maps and transforms the address space of the traffic to implement a pre-processing stage. An MMU microprocessor (Intel XScale 400 MHz Gumstix Connex computer) is also connected to the FPGA to allow the platform to implement event-based algorithms that interact with the AER system, such as control algorithms, network connectivity, USB support, etc. The LVDS transceiver allows a bandwidth of up to 1.32 Gbps, around ~66 Mega events per second (Mevps).
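The essence of AER is that many neurons share one bus by emitting time-stamped address events; a software caricature follows (real AER arbitration happens in asynchronous hardware, so this merge-and-sort is an idealization):

```python
def aer_multiplex(spike_trains):
    """Merge per-neuron spike times into one time-ordered stream of
    (timestamp, address) events. More active neurons naturally occupy
    more of the shared channel."""
    events = [(t, addr) for addr, times in spike_trains.items() for t in times]
    return sorted(events)

def aer_demultiplex(events):
    """Recover per-address spike trains from the multiplexed stream."""
    trains = {}
    for t, addr in events:
        trains.setdefault(addr, []).append(t)
    return trains

stream = aer_multiplex({0: [0.001, 0.004], 7: [0.002], 3: [0.003]})
print(stream)  # [(0.001, 0), (0.002, 7), (0.003, 3), (0.004, 0)]
```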
Split-mouth comparison of the accuracy of computer-generated and conventional surgical guides.
Farley, Nathaniel E; Kennedy, Kelly; McGlumphy, Edwin A; Clelland, Nancy L
2013-01-01
Recent clinical studies have shown that implant placement is highly predictable with computer-generated surgical guides; however, the reliability of these guides has not been compared to that of conventional guides clinically. This study aimed to compare the accuracy of reproducing planned implant positions with computer-generated and conventional surgical guides using a split-mouth design. Ten patients received two implants each in symmetric locations. All implants were planned virtually using a software program and information from cone beam computed tomographic scans taken with scan appliances in place. Patients were randomly selected for computer-aided design/computer-assisted manufacture (CAD/CAM)-guided implant placement on their right or left side. Conventional guides were used on the contralateral side. Patients underwent cone beam computed tomography postoperatively. Planned and actual implant positions were compared using three-dimensional analyses capable of measuring volume overlap as well as differences in angles and coronal and apical positions. Results were compared using a mixed-model repeated-measures analysis of variance and were further analyzed using a Bartlett test for unequal variance (α = .05). Implants placed with CAD/CAM guides were closer to the planned positions in all eight categories examined. However, statistically significant differences were shown only for coronal horizontal distances. It was also shown that CAD/CAM guides had less variability than conventional guides, which was statistically significant for apical distance. Implants placed using CAD/CAM surgical guides provided greater accuracy in a lateral direction than conventional guides. In addition, CAD/CAM guides were more consistent in their deviation from the planned locations than conventional guides.
Nichols, C R; Breeden, E S; Loehrer, P J; Williams, S D; Einhorn, L H
1993-01-06
Case reports have suggested that treatment with high-dose etoposide can result in development of a unique secondary leukemia. This study was designed to estimate the risk of developing leukemia for patients receiving conventional doses of etoposide along with cisplatin and bleomycin. We reviewed the records at Indiana University of all untreated patients entering clinical trials using etoposide at conventional doses (cumulative dose, 2000 mg/m2 or less) for germ cell cancer between 1982 and 1991. The records of all patients who received a chemotherapy regimen containing etoposide, ifosfamide, or cisplatin after failing to respond to primary chemotherapy were also reviewed. Between 1982 and 1991, 538 patients entered serial clinical trials with planned cumulative etoposide doses of 1500-2000 mg/m2 in combination with cisplatin plus either ifosfamide or bleomycin. Of these 538 patients, 348 received an etoposide combination as initial chemotherapy and 190 received etoposide as part of salvage treatment. To date, 315 patients are alive, with median follow-up of 4.9 years, and 337 patients have had follow-up beyond 2 years. Two patients (0.37%) developed leukemia. One developed acute undifferentiated leukemia with a t(4;11) (q21;q23) cytogenetic abnormality 2.0 years after starting etoposide-based therapy, and one developed acute myelomonoblastic leukemia with no chromosome abnormalities 2.3 years after beginning chemotherapy. During this period, several hundred patients were treated with etoposide-based chemotherapy and did not enter clinical trials. Three of these patients are known to have developed hematologic abnormalities, including one patient with acute monoblastic leukemia with a t(11;19)(q13;p13) abnormality. Secondary leukemia after treatment with a conventional dose of etoposide does occur, but the low incidence does not alter the risk-to-benefit ratio of etoposide-based chemotherapy in germ cell cancer. The reports of leukemia associated with high doses of etoposide emphasize the need for diligent follow-up of patients and make careful risk-to-benefit analysis imperative.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajani, Abdallah A.; Qureshi, Muhammad M.; Kovalchuk, Nataliya
To evaluate the change in volume and movement of the parotid gland measured by serial contrast-enhanced computed tomography scans in patients with head and neck cancer treated with parotid-sparing intensity-modulated radiotherapy (IMRT). A prospective study was performed on 13 patients with head and neck cancer undergoing dose-painted IMRT to 69.96 Gy in 33 fractions. Serial computed tomography scans were performed at baseline, weeks 2, 4, and 6 of radiotherapy (RT), and at 6 weeks post-RT. The parotid volume was contoured at each scan, and the movement of the medial and lateral borders was measured. The patient's body weight was recorded at each corresponding week during RT. Regression analyses were performed to ascertain the rate of change during treatment as a percent change per fraction in parotid volume and distance relative to baseline. The mean parotid volume decreased by 37.3% from baseline to week 6 of RT. The overall rate of change in parotid volume during RT was −1.30% per fraction (−1.67% and −0.91% per fraction in the ≥31 Gy and <31 Gy mean planned parotid dose groups, respectively, p = 0.0004). The movement of parotid borders was greater in the ≥31 Gy mean parotid dose group compared with the <31 Gy group (0.22% per fraction and 0.14% per fraction for the lateral border and 0.19% per fraction and 0.06% per fraction for the medial border, respectively). The median change in body weight was −7.4% (range, 0.75% to −17.5%) during RT. A positive correlation was noted between change in body weight and parotid volume during the course of RT (Spearman correlation coefficient, r = 0.66, p < 0.01). Head and neck IMRT results in a volume loss of the parotid gland, which is related to the planned parotid dose and the patient's weight loss during RT.
Computer-aided tracking and characterization of homicides and sexual assaults (CATCH)
NASA Astrophysics Data System (ADS)
Kangas, Lars J.; Terrones, Kristine M.; Keppel, Robert D.; La Moria, Robert D.
1999-03-01
When a serial offender strikes, it usually means that the investigation is unprecedented for that police agency. The volume of incoming leads and pieces of information in the case(s) can be overwhelming, as evidenced by the thousands of leads gathered in the Ted Bundy Murders, Atlanta Child Murders, and the Green River Murders. Serial cases can be long-term investigations in which the suspect remains unknown and continues to perpetrate crimes. With state and local murder investigative systems beginning to crop up, it will become important to manage that information in a timely and efficient way by developing computer programs to assist in that task. One vital function will be to compare violent crime cases from different jurisdictions so investigators can approach the investigation knowing that similar cases exist. CATCH (Computer Aided Tracking and Characterization of Homicides) is being developed to assist crime investigations by assessing likely characteristics of unknown offenders, by relating a specific crime case to other cases, and by providing a tool for clustering similar cases that may be attributed to the same offenders. CATCH is a collection of tools that assist the crime analyst in the investigation process by providing advanced data mining and visualization capabilities. These tools include clustering maps, query tools, geographic maps, timelines, etc. Each tool is designed to give the crime analyst a different view of the case data. The clustering tools in CATCH are based on artificial neural networks (ANNs). The ANNs learn to cluster similar cases from approximately 5000 murders and 3000 sexual assaults residing in a database. The clustering algorithm is applied to parameters describing modus operandi (MO), signature characteristics of the offenders, and other parameters describing the victim and offender. The proximity of cases within a two-dimensional representation of the clusters allows the analyst to identify similar or serial murders and sexual assaults.
Yu, Dongjun; Wu, Xiaowei; Shen, Hongbin; Yang, Jian; Tang, Zhenmin; Qi, Yong; Yang, Jingyu
2012-12-01
Membrane proteins are encoded by ~30% of the genome and play important roles in living organisms. Previous studies have revealed that membrane proteins' structures and functions show obvious cell organelle-specific properties. Hence, it is highly desirable to predict a membrane protein's subcellular location from its primary sequence, considering the extreme difficulties of membrane protein wet-lab studies. Although many models have been developed for predicting protein subcellular locations, only a few are specific to membrane proteins. Existing prediction approaches were constructed with statistical machine learning algorithms on serial combinations of multi-view features, i.e., different feature vectors are simply concatenated to form a super feature vector. However, such simple combination of features simultaneously increases the information redundancy, which can in turn deteriorate the final prediction accuracy; indeed, prediction success rates in the serial super space were often found to be even lower than those in a single-view space. The purpose of this paper is to investigate a proper method for fusing multiple multi-view protein sequential features for subcellular location prediction. Instead of the serial strategy, we propose a novel parallel framework for fusing multiple membrane protein multi-view attributes that represents protein samples in complex spaces. We also propose generalized principal component analysis (GPCA) for feature reduction in the complex geometry. All the experimental results, obtained with different machine learning algorithms on benchmark membrane protein subcellular localization datasets, demonstrate that the newly proposed parallel strategy outperforms the traditional serial approach. We also demonstrate the efficacy of the parallel strategy on a soluble protein subcellular localization dataset, indicating that the parallel technique is flexible enough to suit other computational biology problems. The software and datasets are available at: http://www.csbio.sjtu.edu.cn/bioinf/mpsp.
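The contrast between the serial and parallel fusion strategies can be made concrete: serial fusion concatenates the feature views, while a common formulation of parallel fusion pairs two views as the real and imaginary parts of a complex vector (on which the paper then applies GPCA). A sketch under that formulation:

```python
import numpy as np

def serial_fusion(f1, f2):
    """Serial strategy: concatenate the views (dimension d1 + d2)."""
    return np.concatenate([f1, f2])

def parallel_fusion(f1, f2):
    """Parallel strategy: combine the views as a complex vector of
    dimension max(d1, d2), zero-padding the shorter view."""
    d = max(f1.size, f2.size)
    a = np.pad(f1.astype(float), (0, d - f1.size))
    b = np.pad(f2.astype(float), (0, d - f2.size))
    return a + 1j * b

f1, f2 = np.array([0.2, 0.7, 0.1]), np.array([1.5, 0.3])
print(serial_fusion(f1, f2).shape, parallel_fusion(f1, f2).shape)  # (5,) (3,)
```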
Kliner, Dustin; Wang, Li; Winger, Daniel; Follansbee, William P; Soman, Prem
2015-12-01
Gated single-photon emission computed tomography (SPECT) is widely used for myocardial perfusion imaging and provides an automated assessment of left ventricular ejection fraction (LVEF). We prospectively tested the repeatability of serial SPECT-derived LVEF. This information is essential in order to inform the interpretation of a change in LV function on serial testing. Consenting patients (n = 50) from among those referred for clinically indicated gated myocardial perfusion SPECT (MPs) were recruited. Following the clinical rest-stress study, patients were repositioned on the camera table for a second acquisition using identical parameters. Patient positioning, image acquisition and processing for the second scan were independently performed by a technologist blinded to the clinical scan. Quantitative LVEF was generated by Quantitative Gated SPECT and recorded as EF1 and EF2, respectively. Repeatability of serial results was assessed using the Bland-Altman method. The limits of repeatability and repeatability coefficients were generated to determine the maximum variation in LVEF that can be expected to result from test variability. Repeatability was tested across a broad range of LV systolic function and myocardial perfusion. The mean difference between EF1 and EF2 was 1.6% (EF units), with 95% limits of repeatability of +9.1% to -6.0% (repeatability coefficient 7.5%). Correlation between serial EF measurements was excellent (r = 0.9809). Similar results were obtained in subgroups based on normal or abnormal EF and myocardial perfusion. The largest repeatability coefficient of 8.1% was seen in patients with abnormal LV systolic function. When test protocol and acquisition parameters are kept constant, a difference of >8% EF units on serial MPs is indicative of a true change 95% of the time.
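The Bland-Altman quantities reported above follow directly from the paired differences; a generic sketch of the method with hypothetical LVEF values:

```python
import numpy as np

def repeatability(ef1, ef2):
    """Mean difference, 95% limits of repeatability (mean +/- 1.96 SD of
    the differences) and repeatability coefficient (1.96 SD)."""
    d = np.asarray(ef1, float) - np.asarray(ef2, float)
    mean_d, sd_d = d.mean(), d.std(ddof=1)
    return mean_d, (mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d), 1.96 * sd_d

ef1 = [55, 62, 48, 70, 35]   # hypothetical first-acquisition LVEF (%)
ef2 = [53, 60, 50, 68, 36]   # hypothetical repeat-acquisition LVEF (%)
print(repeatability(ef1, ef2))
```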
van de Kamp, Cornelis; Gawthrop, Peter J.; Gollee, Henrik; Lakie, Martin; Loram, Ian D.
2013-01-01
Modular organization in control architecture may underlie the versatility of human motor control; but the nature of the interface relating sensory input through task-selection in the space of performance variables to control actions in the space of the elemental variables is currently unknown. Our central question is whether the control architecture converges to a serial process along a single channel? In discrete reaction time experiments, psychologists have firmly associated a serial single channel hypothesis with refractoriness and response selection [psychological refractory period (PRP)]. Recently, we developed a methodology and evidence identifying refractoriness in sustained control of an external single degree-of-freedom system. We hypothesize that multi-segmental whole-body control also shows refractoriness. Eight participants controlled their whole body to ensure a head marker tracked a target as fast and accurately as possible. Analysis showed enhanced delays in response to stimuli with close temporal proximity to the preceding stimulus. Consistent with our preceding work, this evidence is incompatible with control as a linear time invariant process. This evidence is consistent with a single-channel serial ballistic process within the intermittent control paradigm with an intermittent interval of around 0.5 s. A control architecture reproducing intentional human movement control must reproduce refractoriness. Intermittent control is designed to provide computational time for an online optimization process and is appropriate for flexible adaptive control. For human motor control we suggest that parallel sensory input converges to a serial, single channel process involving planning, selection, and temporal inhibition of alternative responses prior to low dimensional motor output. Such design could aid robots to reproduce the flexibility of human control. PMID:23675342
Aono, Masashi; Gunji, Yukio-Pegio
2003-10-01
Emergence derived from errors is of key importance both for novel computing and for novel usage of the computer. In this paper, we propose an implementable experimental plan for biological computing so as to elicit the emergent property of complex systems. An individual plasmodium of the true slime mold Physarum polycephalum acts in the slime mold computer. Modifying the elementary cellular automaton so that it entails the global synchronization problem of parallel computing provides the NP-complete problem to be solved by the slime mold computer. The possibility of solving the problem by giving neither all possible results nor an explicit prescription of solution-seeking is discussed. In slime mold computing, the distributivity of the local computing logic can change dynamically, and its parallel non-distributed computing cannot be reduced to the spatial addition of multiple serial computings. The computing system, based on the exhaustive absence of a super-system, may produce something more than filling the vacancy.
Determinant Computation on the GPU using the Condensation Method
NASA Astrophysics Data System (ADS)
Anisul Haque, Sardar; Moreno Maza, Marc
2012-02-01
We report on a GPU implementation of the condensation method designed by Abdelmalek Salem and Kouachi Said for computing the determinant of a matrix. We consider two types of coefficients: modular integers and floating point numbers. We evaluate the performance of our code by measuring its effective bandwidth and argue that it is numerically stable in the floating point number case. In addition, we compare our code with serial implementations of determinant computation from well-known mathematical packages. Our results suggest that a GPU implementation of the condensation method has a large potential for improving those packages in terms of running time and numerical stability.
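For concreteness, the sketch below implements Chio condensation, a classic scheme closely related to (but not identical with) the Salem-Said method the paper ports to the GPU: each step condenses an n x n matrix to (n-1) x (n-1) using 2 x 2 minors against the pivot and divides by pivot^(n-2):

```python
import numpy as np

def det_condensation(a):
    """Determinant by repeated Chio condensation (serial reference code)."""
    a = np.array(a, dtype=float)
    det = 1.0
    while a.shape[0] > 1:
        n = a.shape[0]
        if a[0, 0] == 0.0:                     # pivoting: swap in a usable row
            rows = np.nonzero(a[:, 0])[0]
            if rows.size == 0:
                return 0.0                     # first column is all zero
            a[[0, rows[0]]] = a[[rows[0], 0]]
            det = -det                         # a row swap flips the sign
        b = a[0, 0] * a[1:, 1:] - np.outer(a[1:, 0], a[0, 1:])
        det /= a[0, 0] ** (n - 2)
        a = b
    return det * a[0, 0]

m = [[2.0, 1.0, 3.0], [0.0, 4.0, 1.0], [5.0, 2.0, 0.0]]
print(det_condensation(m), np.linalg.det(m))   # both -59.0
```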
General-Purpose Serial Interface For Remote Control
NASA Technical Reports Server (NTRS)
Busquets, Anthony M.; Gupton, Lawrence E.
1990-01-01
Computer controls remote television camera. General-purpose controller developed to serve as interface between host computer and pan/tilt/zoom/focus functions on series of automated video cameras. Interface port based on 8251 programmable communications-interface circuit configured for tristated outputs, and connects controller system to any host computer with RS-232 input/output (I/O) port. Accepts byte-coded data from host, compares them with prestored codes in read-only memory (ROM), and closes or opens appropriate switches. Six output ports control opening and closing of as many as 48 switches. Operator controls remote television camera by speaking commands, in system including general-purpose controller.
Employing OpenCL to Accelerate Ab Initio Calculations on Graphics Processing Units.
Kussmann, Jörg; Ochsenfeld, Christian
2017-06-13
We present an extension of our graphics processing units (GPU)-accelerated quantum chemistry package to employ OpenCL compute kernels, which can be executed on a wide range of computing devices like CPUs, Intel Xeon Phi, and AMD GPUs. Here, we focus on the use of AMD GPUs and discuss differences as compared to CUDA-based calculations on NVIDIA GPUs. First illustrative timings are presented for hybrid density functional theory calculations using serial as well as parallel compute environments. The results show that AMD GPUs are as fast or faster than comparable NVIDIA GPUs and provide a viable alternative for quantum chemical applications.
An efficient method for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1989-01-01
An efficient method of computation of the manipulator inertia matrix is presented. Using spatial notations, the method leads to the definition of the composite rigid-body spatial inertia, which is a spatial representation of the notion of augmented body. The previously proposed methods, the physical interpretations leading to their derivation, and their redundancies are analyzed. The proposed method achieves a greater efficiency by eliminating the redundancy in the intrinsic equations as well as by a better choice of coordinate frame for their projection. In this case, removing the redundancy leads to greater efficiency of the computation in both serial and parallel senses.
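To make the object of the computation concrete, the joint-space inertia matrix of a planar two-link arm has a well-known closed form (a standard textbook result, shown only to illustrate what the composite-rigid-body approach computes for general serial manipulators):

```python
import numpy as np

def inertia_matrix_2link(q2, m1, m2, l1, lc1, lc2, I1, I2):
    """Closed-form M(q) for a planar two-link arm; symmetric and
    positive definite for physical parameter values."""
    c2 = np.cos(q2)
    m11 = I1 + I2 + m1 * lc1**2 + m2 * (l1**2 + lc2**2 + 2 * l1 * lc2 * c2)
    m12 = I2 + m2 * (lc2**2 + l1 * lc2 * c2)
    m22 = I2 + m2 * lc2**2
    return np.array([[m11, m12], [m12, m22]])

M = inertia_matrix_2link(q2=0.5, m1=1.0, m2=0.8, l1=0.4,
                         lc1=0.2, lc2=0.15, I1=0.02, I2=0.01)
print(M)
```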
Vectorization on the star computer of several numerical methods for a fluid flow problem
NASA Technical Reports Server (NTRS)
Lambiotte, J. J., Jr.; Howser, L. M.
1974-01-01
A reexamination of some numerical methods is considered in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes and a comparison is made of the methods for serial computation.
The Mariner Venus Mercury flight data subsystem.
NASA Technical Reports Server (NTRS)
Whitehead, P. B.
1972-01-01
The flight data subsystem (FDS) discussed handles both the engineering and scientific measurements performed on the MVM'73. It formats the data into serial data streams and sends it to the modulation/demodulation subsystem for transmission to earth, or to the data storage subsystem for storage on a digital tape recorder. The FDS is controlled by serial digital words, called coded commands, received from the central computer and sequencer or from the ground via the modulation/demodulation subsystem. The eight major blocks of the FDS are: power converter, timing and control, engineering data, memory, memory input/output and control, nonimaging data, imaging data, and data output. The FDS incorporates some 4000 components, weighs 17 kg, and uses 35 W of power. General data on the mission and spacecraft are given.
Target recognition of ladar range images using even-order Zernike moments.
Liu, Zheng-Jun; Li, Qi; Xia, Zhi-Wei; Wang, Qi
2012-11-01
Ladar range images have attracted considerable attention in automatic target recognition fields. In this paper, Zernike moments (ZMs) are applied to classify the target in a range image from an arbitrary azimuth angle. However, ZMs suffer from high computational costs. To improve the performance of target recognition based on small samples, even-order ZMs with serial-parallel backpropagation neural networks (BPNNs) are applied to recognize the target in the range image. It is found that both the rotation invariance and the classification performance of the even-order ZMs are better than those of odd-order moments and of moments compressed by principal component analysis. The experimental results demonstrate that combining the even-order ZMs with serial-parallel BPNNs can significantly improve the recognition rate for small samples.
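Zernike moments are defined over the unit disk, and the magnitude |Z_nm| is rotation invariant, which is what makes them suitable for arbitrary-azimuth recognition. A direct (unoptimized) sketch of the standard definition, not the authors' implementation:

```python
import math
import numpy as np

def zernike_radial(n, m, rho):
    """Radial polynomial R_nm(rho); requires n - |m| even and >= 0."""
    m = abs(m)
    r = np.zeros_like(rho)
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * math.factorial(n - k)
             / (math.factorial(k)
                * math.factorial((n + m) // 2 - k)
                * math.factorial((n - m) // 2 - k)))
        r += c * rho ** (n - 2 * k)
    return r

def zernike_moment(img, n, m):
    """Z_nm of a square image mapped onto the unit disk; |Z_nm| is the
    rotation-invariant feature."""
    N = img.shape[0]
    y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    basis = zernike_radial(n, m, rho) * np.exp(-1j * m * theta)
    return (n + 1) / np.pi * np.sum(img * basis * (rho <= 1.0)) * (2.0 / N) ** 2

img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0   # toy binary silhouette
print(abs(zernike_moment(img, 4, 2)))               # even-order feature
```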
Wang, Xiaoyu; Seo, Dong Joo; Lee, Min Hwa
2014-01-01
This study aimed to develop a loop-mediated isothermal amplification (LAMP) method for the rapid detection of Arcobacter species. Specific primers targeting the 23S ribosomal RNA gene were used to detect Arcobacter butzleri, Arcobacter cryaerophilus, and Arcobacter skirrowii. The specificity of the LAMP primer set was assessed using DNA samples from a panel of Arcobacter and Campylobacter species, and the sensitivity was determined using serial dilutions of Arcobacter species cultures. LAMP showed a 10- to 1,000-fold-higher sensitivity than multiplex PCR, with a detection limit of 2 to 20 CFU per reaction in vitro. Whereas multiplex PCR showed cross-reactivity with Campylobacter species, the LAMP method developed in this study was more sensitive and reliable than conventional PCR or multiplex PCR for the detection of Arcobacter species. PMID:24478488
2006-09-01
September 2005. Abstract: The grain size of as-cast Ti-6Al-4V is reduced by about an order of magnitude, from 1700 to 200 μm, with an addition of 0.1 wt% boron, which enhances the subsequent mechanical working response [1]. The grain sizes of conventional cast titanium alloys (e.g., Ti-6Al-4V) are rather coarse. The addition of boron to titanium alloys such as Ti-6Al-4V can significantly enhance their strength. Keywords: microstructure; serial sectioning.
Supercomputer algorithms for efficient linear octree encoding of three-dimensional brain images.
Berger, S B; Reis, D J
1995-02-01
We designed and implemented algorithms for three-dimensional (3-D) reconstruction of brain images from serial sections using two important supercomputer architectures, vector and parallel. These architectures were represented by the Cray YMP and Connection Machine CM-2, respectively. The programs operated on linear octree representations of the brain data sets, and achieved 500-800 times acceleration when compared with a conventional laboratory workstation. As the need for higher resolution data sets increases, supercomputer algorithms may offer a means of performing 3-D reconstruction well above current experimental limits.
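A linear octree stores only the locational codes of occupied leaves, typically produced by interleaving the bits of the voxel coordinates (Morton order); the encoding itself is a few lines, shown here as background for the representations the algorithms above operate on:

```python
def morton_encode(x, y, z, bits=10):
    """Interleave the bits of integer voxel coordinates into a single
    Morton locational code, the usual key of a linear octree."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

# A linear octree of an object is then just the sorted codes of its voxels.
voxels = [(1, 2, 3), (1, 2, 4), (7, 0, 0)]
print(sorted(morton_encode(*v) for v in voxels))
```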
Solution of a large hydrodynamic problem using the STAR-100 computer
NASA Technical Reports Server (NTRS)
Weilmuenster, K. J.; Howser, L. M.
1976-01-01
A representative hydrodynamics problem, the shock-initiated flow over a flat plate, was used for exploring data organizations and program structures needed to exploit the STAR-100 vector processing computer. A brief description of the problem is followed by a discussion of how each portion of the computational process was vectorized. Finally, timings of different portions of the program are compared with equivalent operations on serial machines. The speedup of the STAR-100 program over the CDC 6600 program is shown to increase as the problem size increases. All computations were carried out on a CDC 6600 and a CDC STAR-100, with code written in FORTRAN for the 6600 and in STAR FORTRAN for the STAR-100.
TRADOC Union List of Periodicals.
1988-08-01
A personal computer-based, multitasking data acquisition system
NASA Technical Reports Server (NTRS)
Bailey, Steven A.
1990-01-01
A multitasking data acquisition system was written to simultaneously collect meteorological radar and telemetry data from two sources. The system is based on the personal computer architecture. Data are collected via two asynchronous serial ports and deposited to disk. The system is written in both the C programming language and assembler. It consists of three parts: a multitasking kernel for data collection, a shell with pull-down windows as the user interface, and a graphics processor for editing data and creating coded messages. An explanation of both system principles and program structure is presented.
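In modern terms the same design is a pair of reader tasks feeding a single writer; the following sketch assumes the pyserial package and hypothetical port names, and stands in for the original C/assembler kernel:

```python
import queue
import threading

import serial  # pyserial, assumed available

def reader(port_name, tag, q):
    """One task per asynchronous serial port, as in the original kernel."""
    port = serial.Serial(port_name, baudrate=9600, timeout=1)
    while True:
        line = port.readline()
        if line:
            q.put((tag, line))

q = queue.Queue()
for name, tag in (("/dev/ttyS0", "radar"), ("/dev/ttyS1", "telemetry")):
    threading.Thread(target=reader, args=(name, tag, q), daemon=True).start()

with open("acquisition.log", "ab") as log:   # single writer deposits to disk
    while True:
        tag, line = q.get()
        log.write(tag.encode() + b": " + line)
```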
Computed tomography of infantile hepatic hemangioendothelioma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucaya, J.; Enriquez, G.; Amat, L.
1985-04-01
Computed tomography (CT) was performed on five infants with hepatic hemangioendothelioma. Precontrast scans showed solitary or multiple, homogeneous, circumscribed areas with reduced attenuation values. Tiny tumoral calcifications were identified in two patients. Serial scans, after injection of a bolus of contrast material, showed early massive enhancement, which was either diffuse or peripheral. On delayed scans, multinodular tumors became isodense with surrounding liver, while all solitary ones showed varied degrees of centripetal enhancement and persistent central cleftlike unenhanced areas. The authors believe that these CT features are characteristic and obviate arteriographic confirmation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dmitriy Morozov, Tom Peterka
2014-07-29
Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets. As the scale of simulations and observations surpasses billions of particles, a distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this software is a distributed-memory parallel Delaunay and Voronoi tessellation algorithm based on existing serial computational geometry libraries that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include the addition of periodic and wall boundary conditions.
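The key step named above, deciding which local points must be exchanged with neighboring subdomains, can be caricatured with a fixed halo width (the actual algorithm determines the exchange set adaptively; scipy's Qhull-based Delaunay stands in for the serial geometry kernel):

```python
import numpy as np
from scipy.spatial import Delaunay   # serial computational-geometry kernel

def boundary_candidates(points, lo, hi, halo):
    """Points within a halo distance of any face of the box [lo, hi]^3;
    these are the candidates a rank would send to spatial neighbors."""
    near_lo = (points - lo < halo).any(axis=1)
    near_hi = (hi - points < halo).any(axis=1)
    return points[near_lo | near_hi]

rng = np.random.default_rng(1)
local = rng.random((1000, 3)) * 0.5            # this rank owns [0, 0.5)^3
ghost = boundary_candidates(local, 0.0, 0.5, halo=0.05)
tess = Delaunay(local)                         # local tessellation
print(len(ghost), tess.simplices.shape)
```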
Hardware Implementation of Serially Concatenated PPM Decoder
NASA Technical Reports Server (NTRS)
Moision, Bruce; Hamkins, Jon; Barsoum, Maged; Cheng, Michael; Nakashima, Michael
2009-01-01
A prototype decoder for a serially concatenated pulse position modulation (SCPPM) code has been implemented in a field-programmable gate array (FPGA). At the time of this reporting, this is the first known hardware SCPPM decoder. The SCPPM coding scheme, conceived for free-space optical communications with both deep-space and terrestrial applications in mind, is an improvement of several dB over the conventional Reed-Solomon PPM scheme. The design of the FPGA SCPPM decoder is based on a turbo decoding algorithm that requires relatively low computational complexity while delivering error-rate performance within approximately 1 dB of channel capacity. The SCPPM encoder consists of an outer convolutional encoder, an interleaver, an accumulator, and an inner modulation encoder (more precisely, a mapping of bits to PPM symbols). Each code is describable by a trellis (a finite directed graph). The SCPPM decoder consists of an inner soft-in-soft-out (SISO) module, a de-interleaver, an outer SISO module, and an interleaver connected in a loop. Each SISO module applies the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm to compute a-posteriori bit log-likelihood ratios (LLRs) from a-priori LLRs by traversing the code trellis in forward and backward directions. The SISO modules iteratively refine the LLRs by passing the estimates between one another much like the working of a turbine engine. Extrinsic information (the difference between the a-posteriori and a-priori LLRs) is exchanged rather than the a-posteriori LLRs to minimize undesired feedback. All computations are performed in the logarithmic domain, wherein multiplications are translated into additions, thereby reducing complexity and sensitivity to fixed-point implementation roundoff errors. To lower the required memory for storing channel likelihood data and the amount of data transfer between the decoder and the receiver, one can discard the majority of channel likelihoods, using only the remainder in operation of the decoder. This is accomplished in the receiver by transmitting only a subset consisting of the likelihoods that correspond to time slots containing the largest numbers of observed photons during each PPM symbol period. The assumed number of observed photons in the remaining time slots is set to the mean of a noise slot. In low background noise, the selection of a small subset in this manner results in only negligible loss. Other features of the decoder design to reduce complexity and increase speed include (1) quantization of metrics in an efficient procedure chosen to incur no more than a small performance loss and (2) the use of the max-star function that allows sums of exponentials to be computed by simple operations that involve only an addition, a subtraction, and a table lookup. Another prominent feature of the design is a provision for access to interleaver and de-interleaver memory in a single clock cycle, eliminating the multiple clock-cycle latency characteristic of prior interleaver and de-interleaver designs.
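The max-star function mentioned above has a simple closed form, max*(a, b) = max(a, b) + log(1 + e^-|a-b|), where the correction term is what hardware implementations take from a lookup table; a sketch:

```python
import math

def max_star(a, b):
    """Jacobian logarithm: exactly log(exp(a) + exp(b))."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# Sums of exponentials, as needed in the BCJR forward/backward
# recursions, reduce to a fold of max_star over the branch metrics.
metrics = [-1.2, 0.3, -0.7]
acc = metrics[0]
for m in metrics[1:]:
    acc = max_star(acc, m)
print(acc, math.log(sum(math.exp(m) for m in metrics)))  # same value
```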
Performance Models for Split-execution Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; McCaskey, Alex; Schrock, Jonathan
Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.
Davidson, R W
1985-01-01
The increasing need to communicate and exchange data can be handled by personal microcomputers. The need to transfer information stored in one type of personal computer to another, incompatible type is often encountered when integrating multiple sources of information in medical research and practice. A practical example is demonstrated with two relatively inexpensive, commonly used computers, the IBM PC jr. and the Apple IIe. The basic input/output (I/O) interface chips for serial communication in the two computers are joined together using a null-modem connector and cable to form a communications link. Using the BASIC (Beginner's All-purpose Symbolic Instruction Code) language and the Disk Operating System (DOS), the communications handshaking protocol and file transfer are established between the two computers. The BASIC dialects used are Applesoft (Apple personal computer) and PC BASIC (IBM personal computer).
SiSeRHMap v1.0: a simulator for mapped seismic response using a hybrid model
NASA Astrophysics Data System (ADS)
Grelle, G.; Bonito, L.; Lampasi, A.; Revellino, P.; Guerriero, L.; Sappa, G.; Guadagno, F. M.
2015-06-01
SiSeRHMap is a computerized methodology capable of drawing up prediction maps of seismic response. It was realized on the basis of a hybrid model which combines different approaches and models in a new and non-conventional way. These approaches and models are organized in a code architecture composed of five interdependent modules. A GIS (Geographic Information System) Cubic Model (GCM), which is a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A metamodeling process confers a hybrid nature on the methodology. In this process, one-dimensional linear equivalent analysis produces acceleration response spectra for shear wave velocity-thickness profiles, defined as trainers, which are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Spectra) is optimized on the above trainer acceleration response spectra by means of a dedicated Evolutionary Algorithm (EA) and the Levenberg-Marquardt Algorithm (LMA) as the final optimizer. In the final step, the GCM Maps Executor module produces a serial map-set of the stratigraphic seismic response at different periods, grid-solving the calibrated Spectra model. In addition, the spectral topographic amplification is computed by means of a numerical prediction model, built to match the results of numerical simulations of isolated reliefs using GIS topographic attributes. In this way, different sets of seismic response maps are developed, from which maps of seismic design response spectra are also derived by means of an enveloping technique.
Márquez Neila, Pablo; Baumela, Luis; González-Soriano, Juncal; Rodríguez, Jose-Rodrigo; DeFelipe, Javier; Merchán-Pérez, Ángel
2016-04-01
Recent electron microscopy (EM) imaging techniques permit the automatic acquisition of a large number of serial sections from brain samples. Manual segmentation of these images is tedious, time-consuming and requires a high degree of user expertise. Therefore, there is considerable interest in developing automatic segmentation methods. However, currently available methods are computationally demanding in terms of computer time and memory usage, and to work properly many of them require image stacks to be isotropic, that is, voxels must have the same size in the X, Y and Z axes. We present a method that works with anisotropic voxels and that is computationally efficient allowing the segmentation of large image stacks. Our approach involves anisotropy-aware regularization via conditional random field inference and surface smoothing techniques to improve the segmentation and visualization. We have focused on the segmentation of mitochondria and synaptic junctions in EM stacks from the cerebral cortex, and have compared the results to those obtained by other methods. Our method is faster than other methods with similar segmentation results. Our image regularization procedure introduces high-level knowledge about the structure of labels. We have also reduced memory requirements with the introduction of energy optimization in overlapping partitions, which permits the regularization of very large image stacks. Finally, the surface smoothing step improves the appearance of three-dimensional renderings of the segmented volumes.
Modeling carbon dioxide, pH, and un-ionized ammonia relationships in serial reuse systems
Colt, J.; Watten, B.; Rust, M.
2009-01-01
In serial reuse systems, excretion of metabolic carbon dioxide has a significant impact on ambient pH, carbon dioxide, and un-ionized ammonia concentrations. This impact depends strongly on alkalinity, water flow rate, feeding rate, and loss of carbon dioxide to the atmosphere. A reduction in pH from metabolic carbon dioxide can significantly reduce the un-ionized ammonia concentration and increase the carbon dioxide concentrations compared to those parameters computed from influent pH. The ability to accurately predict pH in serial reuse systems is critical to their design and effective operation. A trial-and-error solution to the alkalinity-pH system was used to estimate important water quality parameters in serial reuse systems. Transfer of oxygen and carbon dioxide across the air-water interface, at overflow weirs, and impacts of substrate-attached algae and suspended bacteria were modeled. Gas transfer at the weirs was much greater than transfer across the air-water boundary. This simulation model can rapidly estimate influent and effluent concentrations of dissolved oxygen, carbon dioxide, and un-ionized ammonia as a function of water temperature, elevation, water flow, and weir type. The accuracy of the estimates strongly depends on assumed pollutional loading rates and gas transfer at the weirs. The current simulation model is based on mean daily loading rates; the impacts of daily variations in loading rates are discussed. Copies of the source code and executable program are available free of charge.
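A minimal sketch of such a trial-and-error alkalinity-pH solution, using standard freshwater carbonate-equilibrium relations; the equilibrium constants are rough 25 °C values and the inputs are illustrative, not the report's coefficients:

```python
# Approximate 25 degC equilibrium constants (illustrative values).
K1 = 10 ** -6.35   # CO2* + H2O <=> H+ + HCO3-
K2 = 10 ** -10.33  # HCO3-     <=> H+ + CO3--
KW = 1e-14

def alkalinity(pH, CT):
    """Carbonate alkalinity (eq/L) for total inorganic carbon CT (mol/L)."""
    H = 10 ** -pH
    denom = H * H + K1 * H + K1 * K2
    hco3 = CT * K1 * H / denom
    co3 = CT * K1 * K2 / denom
    return hco3 + 2 * co3 + KW / H - H

def solve_pH(alk, CT, lo=4.0, hi=11.0):
    """Bisection ('trial and error') on pH until alkalinity matches."""
    while hi - lo > 1e-7:
        mid = 0.5 * (lo + hi)
        if alkalinity(mid, CT) > alk:   # alkalinity rises with pH
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Metabolic CO2 raises CT at fixed alkalinity, depressing pH:
alk = 2.0e-3                       # eq/L (100 mg/L as CaCO3)
for CT in (1.9e-3, 2.2e-3, 2.6e-3):
    print(f"CT={CT:.1e} mol/L -> pH={solve_pH(alk, CT):.2f}")
```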
Pion radiation for high grade astrocytoma: results of a randomized study.
Pickles, T; Goodman, G B; Rheaume, D E; Duncan, G G; Fryer, C J; Bhimji, S; Ludgate, C; Syndikus, I; Graham, P; Dimitrov, M; Bowen, J
1997-02-01
This study compared, within a randomized design, the outcome of pion radiation therapy vs. conventional photon irradiation for the treatment of high-grade astrocytomas. Eighty-four patients were randomized to pion therapy (33-34.5 Gy pi) or conventional photon irradiation (60 Gy). Entry criteria included astrocytoma (modified Kernohan high Grade 3 or Grade 4), age 18-70, Karnofsky performance status (KPS) ≥ 50, ability to start irradiation within 30 days of surgery, unifocal tumor, and treatment volume < 850 cc. The high-dose volume in both arms was the computed tomography enhancement plus a 2-cm margin. The study was designed with the power to detect a twofold difference between arms. Eighty-one eligible patients were equally balanced for all known prognostic variables. Pion patients started radiation 7 days earlier on average than photon patients, but other treatment-related variables did not differ. There were no significant differences in either early or late radiation toxicity between treatment arms. Actuarial survival analysis shows no differences in time to local recurrence or overall survival; median survival was 10 months in both arms (p = 0.22). The physician-assessed KPS and patient-assessed quality of life (QOL) measurements were generally maintained within 10 percentage points until shortly before tumor recurrence, with no apparent difference in serial KPS or QOL scores between treatment arms. In contrast to high linear energy transfer (LET) therapy for central nervous system tumors, such as neutron or neon therapy, the safety of pion therapy, which is of intermediate LET, has been reaffirmed. However, this study demonstrated no therapeutic gain for pion therapy of glioblastoma.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, Price A.; Kron, Tomas; Beauregard, Jean-Mathieu
2013-11-15
Purpose: To create an accurate map of the distribution of radiation dose deposition in healthy and target tissues during radionuclide therapy. Methods: Serial quantitative SPECT/CT images were acquired at 4, 24, and 72 h for 28 ¹⁷⁷Lu-octreotate peptide receptor radionuclide therapy (PRRT) administrations in 17 patients with advanced neuroendocrine tumors. Deformable image registration was combined with an in-house programming algorithm to interpolate pharmacokinetic uptake and clearance at a voxel level. The resultant cumulated activity image series are comprised of values representing the total number of decays within each voxel's volume. For PRRT, cumulated activity was translated to absorbed dose based on Monte Carlo-determined voxel S-values at a combination of long and short ranges. These dosimetric image sets were compared for mean radiation absorbed dose to at-risk organs using a conventional MIRD protocol (OLINDA 1.1). Results: Absorbed dose values to solid organs (liver, kidneys, and spleen) were within 10% using both techniques. Dose estimates to marrow were greater using the voxelized protocol, attributed to the software incorporating crossfire effect from nearby tumor volumes. Conclusions: The technique presented offers an efficient, automated tool for PRRT dosimetry based on serial post-therapy imaging. Following retrospective analysis, this method of high-resolution dosimetry may allow physicians to prescribe activity based on required dose to tumor volume or radiation limits to healthy tissue in individual patients.
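A hedged sketch of the voxel-level dose step: cumulated activity convolved with a voxel S-value kernel. The kernel values and array sizes below are hypothetical placeholders, not the study's Monte Carlo S-values:

```python
import numpy as np
from scipy.ndimage import convolve

# Cumulated activity per voxel (decays), e.g. from the interpolated
# SPECT/CT time series; random placeholder data here.
rng = np.random.default_rng(1)
cumulated = rng.random((40, 40, 40)) * 1e9          # decays per voxel

# Hypothetical voxel S-value kernel (Gy per decay) for a short-range
# emitter: energy deposited mostly in the source voxel, some crossfire.
kernel = np.zeros((5, 5, 5))
kernel[2, 2, 2] = 1.0e-11
kernel[1:4, 1:4, 1:4] += 2.0e-13                    # near-neighbour crossfire

# Absorbed dose = cumulated activity convolved with the S-value kernel.
dose = convolve(cumulated, kernel, mode="constant")
print("mean voxel dose [Gy]:", dose.mean())
```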
ERIC Educational Resources Information Center
Khoshsima, Hooshang; Hosseini, Monirosadat; Toroujeni, Seyyed Morteza Hashemi
2017-01-01
The advent of technology has caused growing interest in using computers to convert conventional paper-and-pencil-based testing (henceforth PPT) into computer-based testing (henceforth CBT) in the field of education during recent decades. This constant promulgation of computers to reshape the conventional tests into computerized format permeated the…
Shinohara, Gen; Morita, Kiyozo; Hoshino, Masato; Ko, Yoshihiro; Tsukube, Takuro; Kaneko, Yukihiro; Morishita, Hiroyuki; Oshima, Yoshihiro; Matsuhisa, Hironori; Iwaki, Ryuma; Takahashi, Masashi; Matsuyama, Takaaki; Hashimoto, Kazuhiro; Yagi, Naoto
2016-11-01
The feasibility of synchrotron radiation-based phase-contrast computed tomography (PCCT) for visualization of the atrioventricular (AV) conduction axis in human whole heart specimens was tested using four postmortem structurally normal newborn hearts obtained at autopsy. A PCCT imaging system at the beamline BL20B2 in a SPring-8 synchrotron radiation facility was used. The PCCT imaging of the conduction system was performed with "virtual" slicing of the three-dimensional reconstructed images. For histological verification, specimens were cut into planes similar to the PCCT images, then cut into 5-μm serial sections and stained with Masson's trichrome. In PCCT images of all four of the whole hearts of newborns, the AV conduction axis was distinguished as a low-density structure, which was serially traceable from the compact node to the penetrating bundle within the central fibrous body, and to the branching bundle into the left and right bundle branches. This was verified by histological serial sectioning. This is the first demonstration that visualization of the AV conduction axis within human whole heart specimens is feasible with PCCT. © The Author(s) 2016.
Shao, Chenzhong; Tanaka, Shuji; Nakayama, Takahiro; Hata, Yoshiyuki
2018-01-01
For installing many sensors in a limited space with a limited computing resource, the digitization of the sensor output at the site of sensation has advantages such as a small amount of wiring, low signal interference and high scalability. For this purpose, we have developed a dedicated Complementary Metal-Oxide-Semiconductor (CMOS) Large-Scale Integration (LSI) (referred to as “sensor platform LSI”) for bus-networked Micro-Electro-Mechanical-Systems (MEMS)-LSI integrated sensors. In this LSI, collision avoidance, adaptation and event-driven functions are simply implemented to relieve data collision and congestion in asynchronous serial bus communication. In this study, we developed a network system with 48 sensor platform LSIs based on Printed Circuit Board (PCB) in a backbone bus topology with the bus length being 2.4 m. We evaluated the serial communication performance when 48 LSIs operated simultaneously with the adaptation function. The number of data packets received from each LSI was almost identical, and the average sampling frequency of 384 capacitance channels (eight for each LSI) was 73.66 Hz. PMID:29342923
Boucher, B J; Claff, H R; Edmonson, M; Evans, S; Harris, B T; Hull, S A; Jones, E J; Mellins, D H; Safir, J G; Taylor, B
1987-01-01
A pilot Diabetic Support Service (DSS) based on a computer register was devised for diabetic patients identified within three group practices in an inner city district of London. Of 159 eligible diabetics, 142 were followed over 2 years. Glycosylated haemoglobin (GHb) monitoring and adequacy of clinic reviews were audited. Care achieved by the DSS was compared with conventional Diabetic Clinic (DC) management of a sample of 200 diabetics from the same district. Serial GHb measurements were made on 66.2% of DSS and 44.5% of DC patients: GHb fell significantly only in DSS patients (13.1% to 11.4%). Proportional falls in GHb were comparable in each DSS treatment group (diet alone, oral hypoglycaemic agents, and insulin) and for hospital attenders and non-attenders equally. The planned clinical reviews were achieved in 40.1% of DSS patients entered (29% GP only, 54% of clinic attenders) and in 15% of DC patients (plus 75% fundal and blood pressure examination). The study led to provision of a formal diabetic clinic annual review system, diabetic mini-clinics in two of the three group practices, and the appointment of two Diabetic Liaison Sisters. With administrative simplification the system is to be made available to all diabetics in the District through their GPs during 1986-8.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brewster, Aaron S.; Sawaya, Michael R.; University of California, Los Angeles, CA 90095-1570
2015-02-01
Special methods are required to interpret sparse diffraction patterns collected from peptide crystals at X-ray free-electron lasers. Bragg spots can be indexed from composite-image powder rings, with crystal orientations then deduced from a very limited number of spot positions. Still diffraction patterns from peptide nanocrystals with small unit cells are challenging to index using conventional methods owing to the limited number of spots and the lack of crystal orientation information for individual images. New indexing algorithms have been developed as part of the Computational Crystallography Toolbox (cctbx) to overcome these challenges. Accurate unit-cell information derived from an aggregate data set from thousands of diffraction patterns can be used to determine a crystal orientation matrix for individual images with as few as five reflections. These algorithms are potentially applicable not only to amyloid peptides but also to any set of diffraction patterns with sparse properties, such as low-resolution virus structures or high-throughput screening of still images captured by raster-scanning at synchrotron sources. As a proof of concept for this technique, successful integration of X-ray free-electron laser (XFEL) data to 2.5 Å resolution for the amyloid segment GNNQQNY from the Sup35 yeast prion is presented.
Automated Stitching of Microtubule Centerlines across Serial Electron Tomograms
Weber, Britta; Tranfield, Erin M.; Höög, Johanna L.; Baum, Daniel; Antony, Claude; Hyman, Tony; Verbavatz, Jean-Marc; Prohaska, Steffen
2014-01-01
Tracing microtubule centerlines in serial section electron tomography requires microtubules to be stitched across sections; that is, lines from different sections need to be aligned, endpoints need to be matched at section boundaries to establish a correspondence between neighboring sections, and corresponding lines need to be connected across multiple sections. We present computational methods for these tasks: 1) An initial alignment is computed using a distance compatibility graph. 2) A fine alignment is then computed with a probabilistic variant of the iterative closest points algorithm, which we extended to handle the orientation of lines by introducing a periodic random variable to the probabilistic formulation. 3) Endpoint correspondence is established by formulating a matching problem in terms of a Markov random field and computing the best matching with belief propagation. Belief propagation is not generally guaranteed to converge to a minimum. We show how convergence can be achieved, nonetheless, with minimal manual input. In addition to stitching microtubule centerlines, the correspondence is also applied to transform and merge the electron tomograms. We applied the proposed methods to samples from the mitotic spindle in C. elegans, the meiotic spindle in X. laevis, and sub-pellicular microtubule arrays in T. brucei. The methods were able to stitch microtubules across section boundaries in good agreement with experts' opinions for the spindle samples. Results, however, were not satisfactory for the microtubule arrays. For certain experiments, such as an analysis of the spindle, the proposed methods can replace manual expert tracing and thus enable the analysis of microtubules over long distances with reasonable manual effort. PMID:25438148
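As a simplified stand-in for the matching in step 3, endpoint correspondence across a section boundary can be posed as a linear assignment problem; the cost model (Euclidean gap distance with a cutoff) is an illustration, not the paper's Markov-random-field formulation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_endpoints(ends_a, ends_b, max_gap=50.0):
    """Match microtubule endpoints at a section boundary by distance.
    Pairs farther apart than max_gap are left unmatched."""
    cost = np.linalg.norm(ends_a[:, None, :] - ends_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_gap]

# Endpoints (x, y in nm) near the boundary of two adjacent tomograms.
ends_a = np.array([[100.0, 200.0], [450.0, 90.0], [800.0, 640.0]])
ends_b = np.array([[110.0, 195.0], [790.0, 655.0], [60.0, 900.0]])
print(match_endpoints(ends_a, ends_b))   # -> [(0, 0), (2, 1)]
```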
Parallel Computation of the Jacobian Matrix for Nonlinear Equation Solvers Using MATLAB
NASA Technical Reports Server (NTRS)
Rose, Geoffrey K.; Nguyen, Duc T.; Newman, Brett A.
2017-01-01
Demonstrating speedup for parallel code on a multicore shared memory PC can be challenging in MATLAB due to underlying parallel operations that are often opaque to the user. This can limit the potential for improvement of serial code even for so-called embarrassingly parallel applications. One such application is the computation of the Jacobian matrix inherent to most nonlinear equation solvers. Computation of this matrix represents the primary bottleneck in nonlinear solver speed, such that commercial finite element (FE) and multi-body-dynamic (MBD) codes attempt to minimize these computations. A timing study using MATLAB's Parallel Computing Toolbox was performed for numerical computation of the Jacobian. Several approaches for implementing parallel code were investigated, but only the single program multiple data (spmd) method using composite objects provided positive results. Parallel code speedup is demonstrated, but the goal of linear speedup through the addition of processors was not achieved due to PC architecture.
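The embarrassingly parallel structure is easy to sketch outside MATLAB; here is a rough Python analogue of the one-column-per-worker idea, with an arbitrary test function (not the paper's spmd implementation):

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def F(x):
    """Example nonlinear system; any R^n -> R^n residual works."""
    return np.array([x[0] ** 2 + x[1] - 3.0,
                     x[0] + np.sin(x[1])])

def _column(args):
    x, f0, j, h = args
    xp = x.copy()
    xp[j] += h
    return (F(xp) - f0) / h          # forward-difference column j

def jacobian(x, h=1e-7):
    f0 = F(x)
    tasks = [(x, f0, j, h) for j in range(x.size)]
    with ProcessPoolExecutor() as pool:          # one column per task
        cols = list(pool.map(_column, tasks))
    return np.column_stack(cols)

if __name__ == "__main__":
    print(jacobian(np.array([1.0, 2.0])))
```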
ERIC Educational Resources Information Center
Glasser, L.
1987-01-01
This paper explores how Fourier Transform (FT) mimics spectral transformation, how this property can be exploited to advantage in spectroscopy, and how the FT can be used in data treatment. A table displays a number of important FT serial/spectral pairs related by Fourier Transformations. A bibliography and listing of computer software related to…
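A minimal numerical illustration of one serial/spectral pair, a sampled cosine and its single spectral line, using the discrete FFT as a stand-in for the analytic transform pairs tabulated in the paper:

```python
import numpy as np

fs = 1000.0                            # sampling rate [Hz]
t = np.arange(0, 1.0, 1.0 / fs)        # 1 s of "serial" (time-domain) data
signal = np.cos(2 * np.pi * 50 * t)    # 50 Hz cosine

spectrum = np.fft.rfft(signal) / t.size
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

peak = freqs[np.argmax(np.abs(spectrum))]
print(f"dominant spectral line: {peak:.1f} Hz")   # -> 50.0 Hz
```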
ERIC Educational Resources Information Center
Dougherty, Richard M.; Stephens, James G.
The objectives of the study were to record: (1) the problems encountered in interpreting and using the Illinois program documentation; (2) the modifications required to reconcile system incompatibilities and inefficiencies due to different computer configurations; (3) the input instruction modifications made to accommodate local library processing…
NASA Astrophysics Data System (ADS)
Wu, Li; Zhang, Bin; Wu, Ping; Liu, Qian; Gong, Hui
2007-05-01
A high-resolution optical imaging system was designed and developed to obtain serial transverse section images of biologic tissue, such as the mouse brain, in which a new knife-edge imaging technology, a high-speed, high-sensitivity line-scan CCD, and linear air-bearing stages were adopted and incorporated with an OLYMPUS microscope. The section images at the tip of the knife edge are captured synchronously by reflection imaging in the microscope while the tissue is cut. The tissue can be sectioned at intervals of 250 nm, matching the resolution of the transverse section images obtained in the x-y plane. Cutting proceeds automatically under a purpose-written control program, which eliminates the substantial labor of registering the vast image data. In addition, this system can cut a larger sample than a conventional ultramicrotome, avoiding the loss of tissue structure information caused by splitting a sample to meet the ultramicrotome's size limits.
Wilke, Scott A.; Antonios, Joseph K.; Bushong, Eric A.; Badkoobehi, Ali; Malek, Elmar; Hwang, Minju; Terada, Masako; Ellisman, Mark H.
2013-01-01
The hippocampal mossy fiber (MF) terminal is among the largest and most complex synaptic structures in the brain. Our understanding of the development of this morphologically elaborate structure has been limited because of the inability of standard electron microscopy techniques to quickly and accurately reconstruct large volumes of neuropil. Here we use serial block-face electron microscopy (SBEM) to surmount these limitations and investigate the establishment of MF connectivity during mouse postnatal development. Based on volume reconstructions, we find that MF axons initially form bouton-like specializations directly onto dendritic shafts, that dendritic protrusions primarily arise independently of bouton contact sites, and that a dramatic increase in presynaptic and postsynaptic complexity follows the association of MF boutons with CA3 dendritic protrusions. We also identify a transient period of MF bouton filopodial exploration, followed by refinement of sites of synaptic connectivity. These observations enhance our understanding of the development of this highly specialized synapse and illustrate the power of SBEM to resolve details of developing microcircuits at a level not easily attainable with conventional approaches. PMID:23303931
Ceramic micro-injection molded nozzles for serial femtosecond crystallography sample delivery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beyerlein, K. R.; Adriano, L.; Heymann, M.
Serial femtosecond crystallography (SFX) using X-ray Free-Electron Lasers (XFELs) allows for room temperature protein structure determination without evidence of conventional radiation damage. In this method, a liquid suspension of protein microcrystals can be delivered to the X-ray beam in vacuum as a micro-jet, which replenishes the crystals at a rate that exceeds the current XFEL pulse repetition rate. Gas dynamic virtual nozzles produce the required micrometer-sized streams by the focusing action of a coaxial sheath gas and have been shown to be effective for SFX experiments. Here, we describe the design and characterization of such nozzles assembled from ceramic micro-injection molded outer gas-focusing capillaries. Trends of the emitted jet diameter and jet length as a function of supplied liquid and gas flow rates are measured by a fast imaging system. The observed trends are explained by derived relationships considering choked gas flow and liquid flow conservation. In conclusion, the performance of these nozzles in a SFX experiment is presented, including an analysis of the observed background.
Ceramic micro-injection molded nozzles for serial femtosecond crystallography sample delivery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beyerlein, K. R.; Heymann, M.; Kirian, R.
Serial femtosecond crystallography (SFX) using X-ray Free-Electron Lasers (XFELs) allows for room temperature protein structure determination without evidence of conventional radiation damage. In this method, a liquid suspension of protein microcrystals can be delivered to the X-ray beam in vacuum as a micro-jet, which replenishes the crystals at a rate that exceeds the current XFEL pulse repetition rate. Gas dynamic virtual nozzles produce the required micrometer-sized streams by the focusing action of a coaxial sheath gas and have been shown to be effective for SFX experiments. Here, we describe the design and characterization of such nozzles assembled from ceramic micro-injection molded outer gas-focusing capillaries. Trends of the emitted jet diameter and jet length as a function of supplied liquid and gas flow rates are measured by a fast imaging system. The observed trends are explained by derived relationships considering choked gas flow and liquid flow conservation. Finally, the performance of these nozzles in a SFX experiment is presented, including an analysis of the observed background.
McClelland, Jodie A; Webster, Kate E; Ramteke, Alankar A; Feller, Julian A
2017-06-01
Computer-assisted navigation in total knee arthroplasty (TKA) reduces variability and may improve accuracy in the postoperative static alignment. The effect of navigation on alignment and biomechanics during more dynamic movements has not been investigated. This study compared knee biomechanics during level walking of 121 participants: 39 with conventional TKA, 42 with computer-assisted navigation TKA and 40 unimpaired control participants. Standing lower-limb alignment was significantly closer to ideal in participants with navigation TKA. During gait, when differences in walking speed were accounted for, participants with conventional TKA had less knee flexion during stance and swing than controls (P<0.01), but there were no differences between participants with navigation TKA and controls for the same variables. Both groups of participants with TKA had lower knee adduction moments than controls (P<0.01). In summary, there were fewer differences in the biomechanics of computer-assisted navigation TKA patients compared to controls than for patients with conventional TKA. Computer-assisted navigation TKA may restore biomechanics during walking that are closer to normal than conventional TKA. Copyright © 2017 Elsevier B.V. All rights reserved.
Si, Jiahe; Colgate, Stirling A; Li, Hui; Martinic, Joe; Westpfahl, David
2013-10-01
The New Mexico Institute of Mining and Technology liquid sodium αω-dynamo experiment models the magnetic field generation in the universe as discussed in detail by Colgate, Li, and Pariev [Phys. Plasmas 8, 2425 (2001)]. To obtain a quasi-laminar flow with magnetic Reynolds number Rm ~ 120, the dynamo experiment consists of two co-axial cylinders of 30.5 cm and 61 cm in diameter spinning up to 70 Hz and 17.5 Hz, respectively. During the experiment, the temperature of the cylinders must be maintained at 110 °C to ensure that the sodium remains fluid. This makes it challenging to implement a data acquisition (DAQ) system in such a high-temperature, high-speed rotating frame, in which the sensors (including 18 Hall sensors, 5 pressure sensors, and 5 temperature sensors) are under centrifugal acceleration of up to 376g. In addition, the data must be transmitted to and stored in a computer 100 ft away for safety. The analog signals are digitized and converted to serial signals by an analog-to-digital converter and a field-programmable gate array. Power is provided through brush/ring sets. The serial signals are sent capacitively through ring/shoe sets, then reshaped with cross-talk noise removed. A microcontroller-based interface circuit decodes the serial signals and communicates with the data acquisition computer. The DAQ accommodates pressures up to 1000 psi, temperatures above 130 °C, and magnetic fields up to 1000 G. First physics results have been analyzed and published. The next stage of the αω-dynamo experiment includes an upgrade of the DAQ system.
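A toy sketch of the kind of framed serial decoding such an interface circuit performs; the frame layout (sync byte, channel ID, 16-bit sample, checksum) is entirely hypothetical, not the experiment's actual protocol:

```python
SYNC = 0xAA

def decode_frames(stream: bytes):
    """Scan a raw byte stream for 5-byte frames:
    [SYNC][channel][sample hi][sample lo][checksum]."""
    frames, i = [], 0
    while i + 5 <= len(stream):
        if stream[i] != SYNC:
            i += 1                      # resync after noise/cross-talk
            continue
        chan, hi, lo, chk = stream[i + 1:i + 5]
        if (chan + hi + lo) & 0xFF == chk:
            frames.append((chan, (hi << 8) | lo))
            i += 5
        else:
            i += 1                      # corrupted frame, slide forward
    return frames

raw = bytes([0xAA, 3, 0x01, 0xF4, (3 + 0x01 + 0xF4) & 0xFF, 0x00])
print(decode_frames(raw))              # -> [(3, 500)]
```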
Evaluation of Esophageal Anastomotic Integrity With Serial Pleural Amylase Levels.
Miller, Daniel L; Helms, Gerald A; Mayfield, William R
2018-01-01
An anastomotic leak is the most devastating and potentially fatal complication after esophagectomy. Current detection methods can be inaccurate and place patients at risk of other complications. Analysis of pleural fluid for amylase may be a more accurate and lower-risk means of evaluating the integrity of an esophageal anastomosis. We retrospectively reviewed prospective data of 45 consecutive patients who underwent an Ivor Lewis esophagectomy over an 18-month period and evaluated their anastomotic integrity with serial pleural amylase levels (PAL). There were 40 men (89%), and median age was 63 years (range, 35 to 79). Indication for esophagectomy was cancer in 38 patients (84%); 27 (71%) underwent neoadjuvant chemoradiation. A barium swallow was performed in the first 25 patients at median postoperative day (POD) 5 (range, 5 to 10); the swallow was negative in 23 patients (93%). Serial PALs were obtained starting on POD 3 and stopped 1 day after toleration of clear liquids. The PALs in the no-leak patients were highest on POD 3 (median 42 IU/L; range, 20 to 102 IU/L) and decreased (median 15 IU/L; range, 8 to 34 IU/L) to the lowest levels 1 day after clear liquid toleration (p = 0.04). Two patients had a leak and had peak PALs of 227 IU/L and 630 IU/L, respectively; both leaks occurred on POD 4, 1 day before their scheduled swallow test. The last 20 patients underwent serial PALs only, without a planned swallow test or computed tomography scan for anastomotic integrity evaluation. One of these patients had a leak on POD 5 with a low PAL of 55 IU/L the day before a spike to more than 4,000 IU/L. Two of the leaks were treated with esophageal stent placement and intravenous antibiotics, and the remaining patient's leak resolved with intravenous antibiotics, no oral intake, and observation only. None of the leak patients required transthoracic esophageal repair or drainage of an empyema. There was 1 postoperative death (2%) secondary to aspiration pneumonia on POD 10; no leak was ever identified, and the patient had been eating for 3 days before death. Complications occurred in 15 patients (33%), most commonly respiratory; no respiratory issues occurred in PAL-only evaluated patients. No late anastomotic leaks occurred in any patient while in the hospital or after discharge. Serial PALs for the detection of esophageal anastomotic leaks proved to be accurate, safe, and inexpensive. Elimination of barium swallows and computed tomography scans for evaluation of anastomotic integrity may decrease aspiration risks as well as associated pulmonary failure during the postoperative period. Serial PALs may be the preferred method of detecting an anastomotic leak after esophagectomy. A prospective randomized study is warranted. Copyright © 2018 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nishimura, Goro, E-mail: gnishi@imd.es.hokudai.ac.jp
2015-10-15
A photon timing recorder was realized in a field-programmable gate array to capture all timing data of photons on multiple channels with down to a 1-ns resolution and to transfer all data to a host computer in real time through universal serial bus at more than 10 M events/s. The main concept is that a photon time series can be regarded as a serial communication data stream. This recorder was successfully applied to simultaneous measurements of fluorescence fluctuation and lifetime of near-infrared dyes in solution. The design is not limited to fluorescence fluctuation measurements but is applicable to any kind of photon counting experiment in the nanosecond time range because of its simple and easily modifiable design.
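A small sketch of how such photon timing records might be used downstream: binning timestamps into an intensity trace and computing a normalized autocorrelation. The timestamps are simulated; this is not the paper's analysis code:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated photon arrival times [ns], ~1 MHz Poisson stream.
t = np.cumsum(rng.exponential(1000.0, size=10000))

# Bin into 1-us intervals to get an intensity trace I(t).
bins = np.arange(0, t[-1] + 1000, 1000.0)
counts = np.histogram(t, bins)[0].astype(float)

def g2(counts, max_lag):
    """Normalized autocorrelation g(tau) = <I(t)I(t+tau)> / <I>^2."""
    mean = counts.mean()
    return [np.mean(counts[:-lag] * counts[lag:]) / mean ** 2
            for lag in range(1, max_lag)]

print(np.round(g2(counts, 6), 3))     # ~1.0 at all lags for Poisson light
```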
Lange, Nicholas D.; Thomas, Rick P.; Davelaar, Eddy J.
2012-01-01
The pre-decisional process of hypothesis generation is a ubiquitous cognitive faculty that we continually employ in an effort to understand our environment and thereby support appropriate judgments and decisions. Although we are beginning to understand the fundamental processes underlying hypothesis generation, little is known about how various temporal dynamics, inherent in real world generation tasks, influence the retrieval of hypotheses from long-term memory. This paper presents two experiments investigating three data acquisition dynamics in a simulated medical diagnosis task. The results indicate that the mere serial order of data, data consistency (with previously generated hypotheses), and mode of responding influence the hypothesis generation process. An extension of the HyGene computational model endowed with dynamic data acquisition processes is forwarded and explored to provide an account of the present data. PMID:22754547
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naumann, Axel; /CERN; Canal, Philippe
2008-01-01
High performance computing with a large code base and C++ has proved to be a good combination. But when it comes to storing data, C++ is a problematic choice: it offers no support for serialization, type definitions are amazingly complex to parse, and the dependency analysis (what does object A need to be stored?) is incredibly difficult. Nevertheless, the LHC data consists of C++ objects that are serialized with help from ROOT's reflection database and interpreter CINT. The fact that we can do it on that scale, and the performance with which we do it, makes this approach unique and stirs interest even outside HEP. I will show how CINT collects and stores information about C++ types, what the current major challenges are (dictionary size), and what CINT and ROOT have done and plan to do about it.
Research and design of a photovoltaic power monitoring system based on ZigBee
NASA Astrophysics Data System (ADS)
Zhu, Lijuan; Yun, Zhonghua; Bianbawangdui; Bianbaciren
2018-01-01
In order to monitor and study the impact of environmental parameters on photovoltaic cells, a photovoltaic cell monitoring system based on ZigBee is designed. The system uses ZigBee wireless communication to achieve real-time acquisition of the P-I-V curves and environmental parameters of terminal nodes and transfers the data to the coordinator, which communicates with an STM32 through a serial port. The STM32 in turn uses a serial port to transfer the data to a host-computer application written in LabVIEW, where the collected data are displayed in real time and stored in a background database. The experimental results show that the system is stable, accurate, sensitive, and reliable, and achieves real-time collection of photovoltaic cell characteristics and environmental parameters.
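A hedged sketch of the host-computer side (in Python rather than LabVIEW): reading coordinator lines from a serial port and logging them to a local database. The port name, baud rate, and line format are assumptions for illustration:

```python
import sqlite3
import serial  # pyserial

db = sqlite3.connect("pv_monitor.db")
db.execute("""CREATE TABLE IF NOT EXISTS samples
              (node INTEGER, volts REAL, amps REAL, temp_c REAL)""")

# Assumed: coordinator sends CSV lines "node,volts,amps,temp" at 9600 baud.
with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as port:
    for _ in range(100):                       # log 100 samples, then stop
        line = port.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        node, volts, amps, temp = line.split(",")
        db.execute("INSERT INTO samples VALUES (?,?,?,?)",
                   (int(node), float(volts), float(amps), float(temp)))
        db.commit()
db.close()
```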
Pupil-linked arousal is driven by decision uncertainty and alters serial choice bias
NASA Astrophysics Data System (ADS)
Urai, Anne E.; Braun, Anke; Donner, Tobias H.
2017-03-01
While judging their sensory environments, decision-makers seem to use the uncertainty about their choices to guide adjustments of their subsequent behaviour. One possible source of these behavioural adjustments is arousal: decision uncertainty might drive the brain's arousal systems, which control global brain state and might thereby shape subsequent decision-making. Here, we measure pupil diameter, a proxy for central arousal state, in human observers performing a perceptual choice task of varying difficulty. Pupil dilation, after choice but before external feedback, reflects three hallmark signatures of decision uncertainty derived from a computational model. This increase in pupil-linked arousal boosts observers' tendency to alternate their choice on the subsequent trial. We conclude that decision uncertainty drives rapid changes in pupil-linked arousal state, which shape the serial correlation structure of ongoing choice behaviour.
PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit
NASA Technical Reports Server (NTRS)
MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles
1999-01-01
In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.
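In its simplest domain-decomposition role, the idea reduces to assigning contiguous blocks of a logically cartesian mesh to processors; a toy sketch in plain Python (not the PARAMESH Fortran 90 API):

```python
def decompose_1d(n_cells, n_procs):
    """Split n_cells as evenly as possible into n_procs contiguous blocks,
    returning (start, stop) index ranges per processor."""
    base, extra = divmod(n_cells, n_procs)
    blocks, start = [], 0
    for p in range(n_procs):
        stop = start + base + (1 if p < extra else 0)
        blocks.append((start, stop))
        start = stop
    return blocks

# A 1000-cell mesh on 7 processors: first 6 blocks get 143 cells, last gets 142.
print(decompose_1d(1000, 7))
```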
The identification of the variation of atherosclerosis plaques by invasive and non-invasive methods
NASA Technical Reports Server (NTRS)
Selzer, R. H.; Blankenhorn, D. H.
1982-01-01
Computer-enhanced visualization of coronary arteries and lesions within them is discussed, comparing invasive and noninvasive methods. Trial design factors in computer lesions assessment are briefly discussed, and the use of the computer edge-tracking technique in that assessment is described. The results of a small pilot study conducted on serial cineangiograms of men with premature atherosclerosis are presented. A canine study to determine the feasibility of quantifying atherosclerosis from intravenous carotid angiograms is discussed. Comparative error for arterial and venous injection in the canines is determined, and the mode of processing the films to achieve better visualization is described. The application of the computer edge-tracking technique to an ultrasound image of the human carotid artery is also shown and briefly discussed.
Analysis OpenMP performance of AMD and Intel architecture for breaking waves simulation using MPS
NASA Astrophysics Data System (ADS)
Alamsyah, M. N. A.; Utomo, A.; Gunawan, P. H.
2018-03-01
Simulation of breaking waves using the Navier-Stokes equations via the moving particle semi-implicit (MPS) method over a closed domain is presented. The results show that parallel computing on a multicore architecture using OpenMP can reduce the computational time to almost half of the serial time. A comparison of two computer architectures (AMD and Intel) is performed. The Intel architecture performs better than AMD in CPU time; however, the AMD machine achieves slightly higher parallel efficiency than the Intel machine. For a simulation with 1512 particles, the CPU times on Intel and AMD are 12662.47 and 28282.30, respectively. At a similar number of particles, AMD attains 50.09% efficiency and Intel up to 49.42%.
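The quantities compared above follow the usual definitions of speedup and parallel efficiency; a short sketch with hypothetical timings, since the abstract does not state its thread count or time units:

```python
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_threads):
    return speedup(t_serial, t_parallel) / n_threads

# Hypothetical illustration: a 4-thread run cutting time roughly in half
# yields ~50% efficiency, matching the ballpark reported above.
t_s, t_p, n = 100.0, 50.0, 4
print(f"speedup = {speedup(t_s, t_p):.2f}, "
      f"efficiency = {efficiency(t_s, t_p, n):.1%}")
```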
Gicquel, Yannig; Schubert, Robin; Kapis, Svetlana; Bourenkov, Gleb; Schneider, Thomas; Perbandt, Markus; Betzel, Christian; Chapman, Henry N; Heymann, Michael
2018-04-24
This protocol describes fabricating microfluidic devices with low X-ray background optimized for goniometer based fixed target serial crystallography. The devices are patterned from epoxy glue using soft lithography and are suitable for in situ X-ray diffraction experiments at room temperature. The sample wells are lidded on both sides with polymeric polyimide foil windows that allow diffraction data collection with low X-ray background. This fabrication method is undemanding and inexpensive. After the sourcing of a SU-8 master wafer, all fabrication can be completed outside of a cleanroom in a typical research lab environment. The chip design and fabrication protocol utilize capillary valving to microfluidically split an aqueous reaction into defined nanoliter sized droplets. This loading mechanism avoids the sample loss from channel dead-volume and can easily be performed manually without using pumps or other equipment for fluid actuation. We describe how isolated nanoliter sized drops of protein solution can be monitored in situ by dynamic light scattering to control protein crystal nucleation and growth. After suitable crystals are grown, complete X-ray diffraction datasets can be collected using goniometer based in situ fixed target serial X-ray crystallography at room temperature. The protocol provides custom scripts to process diffraction datasets using a suite of software tools to solve and refine the protein crystal structure. This approach avoids the artefacts possibly induced during cryo-preservation or manual crystal handling in conventional crystallography experiments. We present and compare three protein structures that were solved using small crystals with dimensions of approximately 10-20 µm grown in chip. By crystallizing and diffracting in situ, handling and hence mechanical disturbances of fragile crystals is minimized. The protocol details how to fabricate a custom X-ray transparent microfluidic chip suitable for in situ serial crystallography. As almost every crystal can be used for diffraction data collection, these microfluidic chips are a very efficient crystal delivery method.
Srivastava, Rajeshwar N; Dwivedi, Mukesh K; Bhagat, Amit K; Raj, Saloni; Agarwal, Rajiv; Chandra, Abhijit
2016-06-01
The conventional methods of treatment of pressure ulcers (PUs) by serial debridement and daily dressings require prolonged hospitalisation, associated with considerable morbidity. There is, however, recent evidence to suggest that negative pressure wound therapy (NPWT) accelerates healing. The commercial devices for NPWT are costly, cumbersome, and electricity dependent. We compared PU wound healing in traumatic paraplegia patients by conventional dressing and by an innovative negative pressure device (NPD). In this prospective, non-randomised trial, 48 traumatic paraplegia patients with PUs of stages 3 and 4 were recruited. Patients were divided into two groups: group A (n = 24) received NPWT with our NPD, and group B (n = 24) received conventional methods of dressing. All patients were followed up for 9 weeks. At week 9, all patients on NPD showed a statistically significant improvement in PU healing in terms of slough clearance, granulation tissue formation, wound discharge and culture. A significant reduction in wound size and ulcer depth was observed in NPD as compared with conventional methods at all follow-up time points (P = 0.0001). NPWT by the innovative device heals PUs at a significantly higher rate than conventional treatment. The device is safe, easy to apply and cost-effective. © 2014 The Authors. International Wound Journal © 2014 Medicalhelplines.com Inc and John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
Association for Educational Data Systems, Washington, DC.
Two abstracts and seventeen articles on computer assisted instruction (CAI) presented at the 1976 Association for Educational Data Systems (AEDS) convention are included here. Four new computer programs are described: Author System for Education and Training (ASET); GNOSIS, a Swedish/English CAI package; Statistical Interactive Programming System…
ERIC Educational Resources Information Center
Association for Educational Data Systems, Washington, DC.
The theme of the 1976 convention of the Association for Educational Data Systems (AEDS) was educational data processing and information systems. Special attention was focused on educational management information systems, computer centers and networks, computer assisted instruction, computerized testing, guidance, and higher education. This…
Computed tomography in the evaluation of Crohn disease
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, H.I.; Gore, R.M.; Margulis, A.R.
1983-02-01
The abdominal and pelvic computed tomographic examinations in 28 patients with Crohn disease were analyzed and correlated with conventional barium studies, sinograms, and surgical findings. Mucosal abnormalities such as aphthous lesions, pseudopolyps, and ulcerations were only imaged by conventional techniques. Computed tomography proved superior in demonstrating the mural, serosal, and mesenteric abnormalities such as bowel wall thickening (82%), fibrofatty proliferation of mesenteric fat (39%), mesenteric abscess (25%), inflammatory reaction of the mesentery (14%), and mesenteric lymphadenopathy (18%). Computed tomography was most useful clinically in defining the nature of mass effects, separation, or displacement of small bowel segments seen on small bowel series. Although conventional barium studies remain the initial diagnostic procedure in evaluating Crohn disease, computed tomography can be a useful adjunct in resolving difficult clinical and radiologic diagnostic problems.
Interactive collision detection for deformable models using streaming AABBs.
Zhang, Xinyu; Kim, Young J
2007-01-01
We present an interactive and accurate collision detection algorithm for deformable, polygonal objects based on the streaming computational model. Our algorithm can detect all possible pairwise primitive-level intersections between two severely deforming models at highly interactive rates. In our streaming computational model, we consider a set of axis-aligned bounding boxes (AABBs) that bound each of the given deformable objects as an input stream and perform massively parallel pairwise overlap tests on the incoming streams. As a result, we are able to prevent performance stalls in the streaming pipeline that can be caused by the expensive indexing mechanisms required by bounding volume hierarchy-based streaming algorithms. At runtime, as the underlying models deform over time, we employ a novel streaming algorithm to update the geometric changes in the AABB streams. Moreover, in order to obtain only the computed results (i.e., collision results between AABBs) without reading back the entire output streams, we propose a streaming en/decoding strategy that can be performed in a hierarchical fashion. After determining overlapped AABBs, we perform a primitive-level (e.g., triangle) intersection check on a serial computational model such as CPUs. We implemented the entire pipeline of our algorithm using off-the-shelf graphics processors (GPUs), such as the nVIDIA GeForce 7800 GTX, for streaming computations, and Intel dual-core 3.4 GHz processors for serial computations. We benchmarked our algorithm with different models of varying complexities, ranging from 15K up to 50K triangles, under various deformation motions, and obtained timings of 30 to 100 FPS depending on the complexity of the models and their relative configurations. Finally, we compared against a well-known GPU-based collision detection algorithm, CULLIDE [4], and observed about a three-fold performance improvement. We also compared against a software-based AABB culling algorithm [2] and observed about a two-fold improvement.
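The core primitive being streamed is the pairwise AABB overlap test; a vectorized sketch of that test, with NumPy standing in for the GPU streams:

```python
import numpy as np

def aabb_overlaps(mins_a, maxs_a, mins_b, maxs_b):
    """Boolean (len_a, len_b) matrix of pairwise AABB overlaps:
    two boxes overlap iff their intervals overlap on every axis."""
    no_overlap = ((maxs_a[:, None, :] < mins_b[None, :, :]) |
                  (mins_a[:, None, :] > maxs_b[None, :, :]))
    return ~no_overlap.any(axis=2)

# Two small sets of boxes (min corner, max corner) in 3-D.
mins_a = np.array([[0., 0., 0.], [5., 5., 5.]])
maxs_a = mins_a + 2.0
mins_b = np.array([[1., 1., 1.], [9., 9., 9.]])
maxs_b = mins_b + 2.0
print(aabb_overlaps(mins_a, maxs_a, mins_b, maxs_b))
# [[ True False]
#  [False False]]
```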
Microcontroller interface for diode array spectrometry
NASA Astrophysics Data System (ADS)
Aguo, L.; Williams, R. R.
An alternative to bus-based computer interfacing is presented, using diode array spectrometry as a typical application. The new interface consists of an embedded single-chip microcomputer, known as a microcontroller, which provides all necessary digital I/O and analog-to-digital conversion (ADC) along with an unprecedented amount of intelligence. Communication with a host computer system is accomplished by a standard serial interface, so this type of interfacing is applicable to a wide range of personal and minicomputers and can be easily networked. Data are acquired asynchronously and sent to the host on command. New operating modes which have no traditional counterparts are presented.
A PDP-15 to industrial-14 interface at the Lewis Research Center's cyclotron
NASA Technical Reports Server (NTRS)
Kebberly, F. R.; Leonard, R. F.
1977-01-01
An interface (hardware and software) was built which permits the loading, monitoring, and control of a Digital Equipment Industrial-14/30 programmable controller by a PDP-15 computer. The interface utilizes the serial mode for data transfer to and from the controller, so that the required hardware is essentially that of a teletype unit except for the speed of transmission. Software described here permits the user to load binary paper tape, read or load individual controller memory locations, and, if desired, turn controller outputs on and off directly from the computer.
Ceftriaxone-associated pancreatitis captured on serial computed tomography scans.
Nakagawa, Nozomu; Ochi, Nobuaki; Yamane, Hiromichi; Honda, Yoshihiro; Nagasaki, Yasunari; Urata, Noriyo; Nakanishi, Hidekazu; Kawamoto, Hirofumi; Takigawa, Nagio
2018-02-01
A 74-year-old man was treated with ceftriaxone for 5 days and subsequently experienced epigastric pain. Computed tomography (CT) was performed 7 and 3 days before epigastralgia. Although the first CT image revealed no radiographic signs in his biliary system, the second CT image revealed dense radiopaque material in the gallbladder lumen. The third CT image, taken at symptom onset, showed high density in the common bile duct and enlargement of the pancreatic head. This is a very rare case of pseudolithiasis involving the common bile duct, as captured on a series of CT images.
ERIC Educational Resources Information Center
Association for the Development of Computer-based Instructional Systems.
The second of three volumes of papers presented at the 1979 ADCIS convention, this collection includes 37 papers presented to four special interest groups--computer based training, deaf education, elementary/secondary education/junior colleges, and health education. The eight papers on computer based training describe computer graphics, computer…
Wong, Danny Ka-Ho; Tsoi, Ottilia; Huang, Fung-Yu; Seto, Wai-Kay; Fung, James; Lai, Ching-Lung
2014-01-01
Nucleoside/nucleotide analogue therapy for chronic hepatitis B virus (HBV) infection is hampered by the emergence of drug resistance mutations. Conventional PCR sequencing cannot detect minor variants of <20%. We developed a modified co-amplification at lower denaturation temperature-PCR (COLD-PCR) method for the detection of HBV minority drug resistance mutations. The critical denaturation temperature for COLD-PCR was determined to be 78°C. Sensitivity of COLD-PCR sequencing was determined using serially diluted plasmids containing mixed proportions of HBV reverse transcriptase (rt) wild-type and mutant sequences. Conventional PCR sequencing detected mutations only if they existed in ≥25%, whereas COLD-PCR sequencing detected mutations when they existed in 5 to 10% of the viral population. The performance of COLD-PCR was compared to conventional PCR sequencing and a line probe assay (LiPA) using 215 samples obtained from 136 lamivudine- or telbivudine-treated patients with virological breakthrough. Among these 215 samples, drug resistance mutations were detected in 155 (72%), 148 (69%), and 113 samples (53%) by LiPA, COLD-PCR, and conventional PCR sequencing, respectively. Nineteen (9%) samples had mutations detectable by COLD-PCR but not LiPA, while 26 (12%) samples had mutations detectable by LiPA but not COLD-PCR, indicating both methods were comparable (P = 0.371). COLD-PCR was more sensitive than conventional PCR sequencing. Thirty-five (16%) samples had mutations detectable by COLD-PCR but not conventional PCR sequencing, while none had mutations detected by conventional PCR sequencing but not COLD-PCR (P < 0.0001). COLD-PCR sequencing is a simple method which is comparable to LiPA and superior to conventional PCR sequencing in detecting minor lamivudine/telbivudine resistance mutations. PMID:24951803
Gilkey, Jeffrey C [Albuquerque, NM; Duesterhaus, Michelle A [Albuquerque, NM; Peter, Frank J [Albuquerque, NM; Renn, Rosemarie A [Albuquerque, NM; Baker, Michael S [Albuquerque, NM
2006-08-15
A first-in-first-out (FIFO) microelectromechanical memory apparatus (also termed a mechanical memory) is disclosed. The mechanical memory utilizes a plurality of memory cells, with each memory cell having a beam which can be bowed in either of two directions of curvature to indicate two different logic states for that memory cell. The memory cells can be arranged around a wheel which operates as a clocking actuator to serially shift data from one memory cell to the next. The mechanical memory can be formed using conventional surface micromachining, and can be formed as either a nonvolatile memory or as a volatile memory.
Gilkey, Jeffrey C [Albuquerque, NM; Duesterhaus, Michelle A [Albuquerque, NM; Peter, Frank J [Albuquerque, NM; Renn, Rosemarie A [Albuquerque, NM; Baker, Michael S [Albuquerque, NM
2006-05-16
A first-in-first-out (FIFO) microelectromechanical memory apparatus (also termed a mechanical memory) is disclosed. The mechanical memory utilizes a plurality of memory cells, with each memory cell having a beam which can be bowed in either of two directions of curvature to indicate two different logic states for that memory cell. The memory cells can be arranged around a wheel which operates as a clocking actuator to serially shift data from one memory cell to the next. The mechanical memory can be formed using conventional surface micromachining, and can be formed as either a nonvolatile memory or as a volatile memory.
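Functionally, the device behaves like a clocked shift register: each actuation moves every cell's bit one position onward and emits the oldest. A toy behavioral model (software only, not the micromachined implementation):

```python
from collections import deque

class MechanicalFIFO:
    """Behavioral model: n cells, each holding one bit; every clock
    shifts all bits one cell forward, emitting the oldest."""
    def __init__(self, n_cells):
        self.cells = deque([0] * n_cells, maxlen=n_cells)

    def clock(self, bit_in):
        bit_out = self.cells.popleft()
        self.cells.append(bit_in)
        return bit_out

fifo = MechanicalFIFO(4)
data_in = [1, 0, 1, 1, 0, 0, 0, 0]
print([fifo.clock(b) for b in data_in])   # -> [0, 0, 0, 0, 1, 0, 1, 1]
```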
Laboratory Assessment of Commercially Available Ultrasonic Rangefinders
2015-11-01
measurements. The Arduino board and ultrasonic rangefinder were connected to the computer via universal serial bus (USB) cable, which acted as both ... the MB1023 sensor placed at 0.5 meters from an office space wall. Based on these p-values, measurements from three different angles were not ... taking acoustic measurements in a particular environment, transducers and noise sources must first be spatially located. The United States Army
Gritzo, R.E.
1985-09-12
A remote reset circuit acts as a stand-alone monitor and controller by clocking in each character sent by a terminal to a computer and comparing it to a given reference character. When a match occurs, the remote reset circuit activates the system's hardware reset line. The remote reset circuit is hardware based, centered around monostable multivibrators, and is unaffected by system crashes, partial serial transmissions, or power supply transients. 4 figs.
2005 22nd International Symposium on Ballistics. Volume 3 Thursday - Friday
2005-11-18
QinetiQ; Vladimir Titarev, Eleuterio Toro, Numeritek Limited. The Mechanism Analysis of Interior Ballistics of Serial Chamber Gun, Dr. Sanjiu Ying, Charge... Elements and Meshless Particles, Gordon R. Johnson and Robert A. Stryk, Network Computing Services, Inc. Experimental and Numerical Study of the... Internal Ballistics, Clive R. Woodley, David Finbow, QinetiQ; Vladimir Titarev, Eleuterio Toro, Numeritek Limited. 22nd International Symposium on
Northeast Parallel Architectures Center (NPAC)
1992-07-01
Computational Techniques: Mapping receptor units to processors, using NEWS communication to model interaction in the inhibitory field. Goal of the Research... algorithms for classical problems to take advantage of multiple processors. Experiments in probability that have been too time consuming on serial... machine and achieved speedups of 4 to 5 times with 11 processors. It is believed that a slightly better speedup is achievable. In the case of stuck
A divide and conquer approach to the nonsymmetric eigenvalue problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jessup, E.R.
1991-01-01
Serial computation, combined with high communication costs on distributed-memory multiprocessors, makes parallel implementations of the QR method for the nonsymmetric eigenvalue problem inefficient. This paper introduces an alternative algorithm for the nonsymmetric tridiagonal eigenvalue problem based on rank-two tearing and updating of the matrix. The parallelism of this divide and conquer approach stems from independent solution of the updating problems. 11 refs.
Beacon data acquisition and display system
Skogmo, D.G.; Black, B.D.
1991-12-17
A system for transmitting aircraft beacon information received by a secondary surveillance radar through telephone lines to a remote display includes a digitizer connected to the radar for preparing a serial file of data records containing position and identification information of the beacons detected by each sweep of the radar. This information is transmitted through the telephone lines to a remote computer where it is displayed. 6 figures.
Beacon data acquisition and display system
Skogmo, David G.; Black, Billy D.
1991-01-01
A system for transmitting aircraft beacon information received by a secondary surveillance radar through telephone lines to a remote display includes a digitizer connected to the radar for preparing a serial file of data records containing position and identification information of the beacons detected by each sweep of the radar. This information is transmitted through the telephone lines to a remote computer where it is displayed.
Cloud Computing Solutions for the Marine Corps: An Architecture to Support Expeditionary Logistics
2013-09-01
reform IT financial, acquisition, and contracting practices (Takai, 2012). The second step is to optimize data center consolidation. Kundra (2010...the U.S. Government. IRB Protocol number ____N/A____. 12a. DISTRIBUTION / AVAILABILITY STATEMENT: Approved for public release; distribution is...USB universal serial bus; USMCELC United States Marine Corps Expeditionary Logistics Cloud; UUNS urgent universal needs statement; VA volt
Gritzo, Russell E.
1987-01-01
A remote reset circuit acts as a stand-alone monitor and controller by clocking in each character sent by a terminal to a computer and comparing it to a given reference character. When a match occurs, the remote reset circuit activates the system's hardware reset line. The remote reset circuit is hardware-based, centered around monostable multivibrators, and is unaffected by system crashes, partial serial transmissions, or power supply transients.
A Digital Motion Control System for Large Telescopes
NASA Astrophysics Data System (ADS)
Hunter, T. R.; Wilson, R. W.; Kimberk, R.; Leiker, P. S.
2001-05-01
We have designed and programmed a digital motion control system for large telescopes, in particular, the 6-meter antennas of the Submillimeter Array on Mauna Kea. The system consists of a single robust, high-reliability microcontroller board which implements a two-axis velocity servo while monitoring and responding to critical safety parameters. Excellent tracking performance has been achieved with this system (0.3 arcsecond RMS at sidereal rate). The 24x24 centimeter four-layer printed circuit board contains a multitude of hardware devices: 40 digital inputs (for limit switches and fault indicators), 32 digital outputs (to enable/disable motor amplifiers and brakes), a quad 22-bit ADC (to read the motor tachometers), four 16-bit DACs (that provide torque signals to the motor amplifiers), a 32-LED status panel, a serial port to the LynxOS PowerPC antenna computer (RS422/460kbps), a serial port to the Palm Vx handpaddle (RS232/115kbps), and serial links to the low-resolution absolute encoders on the azimuth and elevation axes. Each section of the board employs independent ground planes and power supplies, with optical isolation on all I/O channels. The processor is an Intel 80C196KC 16-bit microcontroller running at 20MHz on an 8-bit bus. This processor executes an interrupt-driven, scheduler-based software system written in C and assembled into an EPROM with user-accessible variables stored in NVSRAM. Under normal operation, velocity update requests arrive at 100Hz from the position-loop servo process running independently on the antenna computer. A variety of telescope safety checks are performed at 279Hz including routine servicing of a 6 millisecond watchdog timer. Additional ADCs onboard the microcontroller monitor the winding temperature and current in the brushless three-phase drive motors. The PID servo gains can be dynamically changed in software. Calibration factors and software filters can be applied to the tachometer readings prior to the application of the servo gains in the torque computations. The Palm pilot handpaddle displays the complete status of the telescope and allows full local control of the drives in an intuitive, touchscreen user interface which is especially useful during reconfigurations of the antenna array.
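For readers unfamiliar with velocity servos, the sketch below illustrates the kind of update loop the abstract describes: a commanded velocity arrives at 100 Hz, is compared against the tachometer reading, and PID gains turn the error into a clamped torque command. The gains, limits, and names are illustrative assumptions, not the Submillimeter Array firmware.

```python
# Minimal sketch of one axis of a velocity servo of the kind described above;
# all gains, scale factors, and limits are hypothetical placeholder values.

KP, KI, KD = 0.8, 0.2, 0.05   # hypothetical PID gains (dynamically changeable)
TORQUE_LIMIT = 1.0            # normalized DAC full scale

class AxisServo:
    def __init__(self):
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, commanded_velocity, tachometer_velocity, dt):
        """One servo cycle: compare commanded and measured velocity,
        return a torque command clamped to the DAC range."""
        error = commanded_velocity - tachometer_velocity
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        torque = KP * error + KI * self.integral + KD * derivative
        return max(-TORQUE_LIMIT, min(TORQUE_LIMIT, torque))

# 100 Hz loop: a new velocity request arrives each cycle; safety checks
# (limit switches, watchdog) would run at a higher rate in the firmware.
servo = AxisServo()
torque = servo.update(commanded_velocity=0.004, tachometer_velocity=0.0035, dt=0.01)
```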
ERIC Educational Resources Information Center
Whiting, Hal; Kline, Theresa J. B.
2006-01-01
This study examined the equivalency of computer and conventional versions of the Test of Workplace Essential Skills (TOWES), a test of adult literacy skills in Reading Text, Document Use and Numeracy. Seventy-three college students completed the computer version, and their scores were compared with those of students who had taken the test in the conventional…
Computational Labs Using VPython Complement Conventional Labs in Online and Regular Physics Classes
NASA Astrophysics Data System (ADS)
Bachlechner, Martina E.
2009-03-01
Fairmont State University has developed online physics classes for the high-school teaching certificate based on the textbook Matter and Interactions by Chabay and Sherwood. This led to using computational VPython labs in the traditional classroom setting as well, to complement conventional labs. The computational modeling process has proven to provide an excellent basis for the subsequent conventional lab and allows for a concrete experience of the difference between behavior according to a model and realistic behavior. Observations in the regular classroom setting feed back into the development of the online classes.
NASA Astrophysics Data System (ADS)
Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Hong, Yang; Zuo, Depeng; Ren, Minglei; Lei, Tianjie; Liang, Ke
2018-01-01
Hydrological model calibration has been a hot issue for decades. The shuffled complex evolution method developed at the University of Arizona (SCE-UA) has been proved to be an effective and robust optimization approach. However, its computational efficiency deteriorates significantly when the amount of hydrometeorological data increases. In recent years, the rise of heterogeneous parallel computing has brought hope for the acceleration of hydrological model calibration. This study proposed a parallel SCE-UA method and applied it to the calibration of a watershed rainfall-runoff model, the Xinanjiang model. The parallel method was implemented on heterogeneous computing systems using OpenMP and CUDA. Performance testing and sensitivity analysis were carried out to verify its correctness and efficiency. Comparison results indicated that heterogeneous parallel computing-accelerated SCE-UA converged much more quickly than the original serial version and possessed satisfactory accuracy and stability for the task of fast hydrological model calibration.
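As a rough illustration of the parallelism described (the authors used OpenMP and CUDA rather than Python), the sketch below evolves SCE-UA-style complexes concurrently with a process pool. The objective function, the much-simplified simplex step, and all parameter values are placeholders, not the Xinanjiang model or the authors' code.

```python
# Toy parallel SCE-UA loop: each complex evolves independently, so the
# evolution step can be farmed out to worker processes, then members are
# shuffled among complexes. The "model" here is a stand-in objective.
from multiprocessing import Pool
import random

def objective(params):
    # placeholder for a full rainfall-runoff simulation over the record
    return sum((p - 0.5) ** 2 for p in params)

def evolve_complex(members):
    # simplex-style reflection of the worst point (greatly simplified)
    members.sort(key=objective)
    centroid = [sum(c) / len(c) for c in zip(*members[:-1])]
    reflected = [2 * g - w for g, w in zip(centroid, members[-1])]
    if objective(reflected) < objective(members[-1]):
        members[-1] = reflected
    return members

if __name__ == "__main__":
    dim, n_complexes, size = 4, 8, 9
    population = [[[random.random() for _ in range(dim)] for _ in range(size)]
                  for _ in range(n_complexes)]
    with Pool() as pool:
        for _ in range(50):                      # shuffling loop
            population = pool.map(evolve_complex, population)
            flat = [m for c in population for m in c]
            random.shuffle(flat)                 # shuffle members among complexes
            population = [flat[i::n_complexes] for i in range(n_complexes)]
    best = min((m for c in population for m in c), key=objective)
```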
OpenACC performance for simulating 2D radial dambreak using FVM HLLE flux
NASA Astrophysics Data System (ADS)
Gunawan, P. H.; Pahlevi, M. R.
2018-03-01
The aim of this paper is to investigate the performance of the OpenACC platform for computing a 2D radial dambreak. Here, the shallow water equations are used to describe and simulate the 2D radial dambreak with a finite volume method (FVM) using the HLLE flux. OpenACC is a parallel computing platform based on GPU cores; in this research it is used to reduce the computational time of the numerical scheme. The results show that, using OpenACC, the computational time is reduced. For the dry and wet radial dambreak simulations using 2048 grids, the parallel computational times are 575.984 s and 584.830 s, respectively. These results demonstrate the effectiveness of OpenACC when compared with the serial times of the dry and wet radial dambreak simulations, which are 28047.500 s and 29269.40 s, respectively.
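For concreteness, a minimal HLLE interface flux for the 1D shallow water equations is sketched below; the paper's solver is 2D and OpenACC-accelerated, but the per-interface flux logic is the same. Function and variable names are ours.

```python
# HLLE flux for the 1D shallow water equations: state is (h, hu), and the
# interface flux blends the left/right fluxes using wave-speed estimates.
import math
G = 9.81  # gravitational acceleration

def flux(h, hu):
    u = hu / h
    return (hu, hu * u + 0.5 * G * h * h)

def hlle_flux(hL, huL, hR, huR):
    uL, uR = huL / hL, huR / hR
    cL, cR = math.sqrt(G * hL), math.sqrt(G * hR)
    sL = min(uL - cL, uR - cR)      # leftmost wave speed estimate
    sR = max(uL + cL, uR + cR)      # rightmost wave speed estimate
    fL, fR = flux(hL, huL), flux(hR, huR)
    if sL >= 0.0:
        return fL                   # all waves move right: upwind on the left
    if sR <= 0.0:
        return fR                   # all waves move left: upwind on the right
    return tuple((sR * fl - sL * fr + sL * sR * (qr - ql)) / (sR - sL)
                 for fl, fr, ql, qr in zip(fL, fR, (hL, huL), (hR, huR)))

# Dambreak-like interface: deep water on the left, shallow on the right.
print(hlle_flux(2.0, 0.0, 0.5, 0.0))
```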
Development of a remote control console for the HHIRF 25-MV tandem accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasanul Basher, A.M.
1991-09-01
The CAMAC-based control system for the 25-MV Tandem Accelerator at HHIRF uses two Perkin-Elmer, 32-bit minicomputers: a message-switching computer and a supervisory computer. Two operator consoles are located on one of the six serial highways. Operator control is provided by means of a console CRT, trackball, assignable shaft encoders and meters. The message-switching computer transmits and receives control information on the serial highways. At present, the CRT pages with updated parameters can be displayed and parameters can be controlled only from the two existing consoles, one in the Tandem control room and the other in the ORIC control room. It has become necessary to expand the control capability to several other locations in the building. With the expansion of control and monitoring capability of accelerator parameters to other locations, the operators will be able to control and observe the result of the control action at the same time. Since the new control console will be PC-based, the existing page format will be changed. The PC will be communicating with the Perkin-Elmer through RS-232 and a communication software package. The hardware configuration has been established, and a communication software program that reads the pages from the shared memory has been developed. In this paper, we present the implementation strategy, work completed, existing and new page formats, future action plans, an explanation of pages and the use of related global variables, a sample session, and flowcharts.
A new method of three-dimensional computer assisted reconstruction of the developing biliary tract.
Prudhomme, M; Gaubert-Cristol, R; Jaeger, M; De Reffye, P; Godlewski, G
1999-01-01
A three-dimensional (3-D) computer assisted reconstruction of the biliary tract was performed in human and rat embryos at Carnegie stage 23 to describe and compare the biliary structures and to point out the anatomic relations between the structures of the hepatic pedicle. Light micrograph images from consecutive serial sagittal sections (diameter 7 mm) of one human and 16 rat embryos were directly digitized with a CCD camera. The serial views were aligned automatically by software. The data were analysed following segmentation and thresholding, allowing automatic reconstruction. The main bile ducts ascended in the mesoderm of the hepatoduodenal ligament. The extrahepatic bile ducts (common bile duct (CD), cystic duct and gallbladder in the human) formed a compound system which could not be shown so clearly in histologic sections. The hepato-pancreatic ampulla was studied as visualised through the duodenum. The course of the CD was like a chicane. The gallbladder diameter and length were similar to those of the CD. Computer-assisted reconstruction permitted easy acquisition of the data by direct examination of the sections through the microscope. This method showed the relationships between the different structures of the hepatic pedicle and allowed estimation of the volume of the bile duct. These findings were not obvious in two-dimensional (2-D) views from histologic sections. Each embryonic stage could be rebuilt in 3-D, which could introduce time as a fourth dimension, fundamental for the study of organogenesis.
A remote control console for the HHIRF 25-MV Tandem Accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasanul Basher, A.M.
The CAMAC-based control system for the 25-MV Tandem Accelerator at HHIRF uses two Perkin-Elmer, 32-bit minicomputers: a message-switching computer and a supervisory computer. Two operator consoles are located on one of the six serial highways. Operator control is provided by means of a console CRT, trackball, assignable shaft encoders, and meters. The message-switching computer transmits and receives control information on the serial highways. At present, the CRT pages with updated parameters can be displayed and parameters can be controlled only from the two existing consoles, one in the Tandem control room and the other in the ORIC control room. It has become necessary to expand the control capability to several other locations in the building. With the expansion of control and monitoring capability of accelerator parameters to other locations, the operators will be able to control and observe the result of the control action at the same time. This capability will be useful in the new Radioactive Ion Beam project of the division. Since the new control console will be PC-based, the existing page format will be changed. The PC will be communicating with the Perkin-Elmer through RS-232 with the aid of a communication protocol. The hardware configuration has been established, and a software program that reads the pages from the shared memory and a communication protocol have been developed. The following sections present the implementation strategy, work completed, future action plans, and the functional details of the communication protocol.
Xyce™ Parallel Electronic Simulator Users' Guide, Version 6.5.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R.; Aadithya, Karthik V.; Mei, Ting
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). This includes support for most popular parallel and serial computers. A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows one to develop new types of analysis without requiring the implementation of analysis-specific device models. Device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only). Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase -- a message passing parallel implementation -- which allows it to run efficiently on a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The information herein is subject to change without notice. Copyright © 2002-2016 Sandia Corporation. All rights reserved.
NASA Astrophysics Data System (ADS)
Vivoni, Enrique R.; Mascaro, Giuseppe; Mniszewski, Susan; Fasel, Patricia; Springer, Everett P.; Ivanov, Valeriy Y.; Bras, Rafael L.
2011-10-01
A major challenge in the use of fully-distributed hydrologic models has been the lack of computational capabilities for high-resolution, long-term simulations in large river basins. In this study, we present the parallel model implementation and real-world hydrologic assessment of the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). Our parallelization approach is based on the decomposition of a complex watershed using the channel network as a directed graph. The resulting sub-basin partitioning divides effort among processors and handles hydrologic exchanges across boundaries. Through numerical experiments in a set of nested basins, we quantify parallel performance relative to serial runs for a range of processors, simulation complexities and lengths, and sub-basin partitioning methods, while accounting for inter-run variability on a parallel computing system. In contrast to serial simulations, the parallel model speed-up depends on the variability of hydrologic processes. Load balancing significantly improves parallel speed-up with proportionally faster runs as simulation complexity (domain resolution and channel network extent) increases. The best strategy for large river basins is to combine a balanced partitioning with an extended channel network, with potential savings through a lower TIN resolution. Based on these advances, a wider range of applications for fully-distributed hydrologic models is now possible. This is illustrated through a set of ensemble forecasts that account for precipitation uncertainty derived from a statistical downscaling model.
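A toy rendering of the partitioning idea follows: treat sub-basins of the channel network as nodes of a directed graph and assign them to processors so the estimated work is balanced. The greedy heuristic and all identifiers are illustrative assumptions, not the tRIBS implementation.

```python
# Greedy load-balanced assignment of sub-basins to processors. Work estimates
# (e.g., TIN node counts) and basin names are hypothetical.
import heapq

def balance_subbasins(work, n_procs):
    """work: {subbasin_id: estimated_cost}. Longest-processing-time-first
    greedy assignment; returns {subbasin_id: processor_rank}."""
    heap = [(0.0, rank) for rank in range(n_procs)]   # (current load, processor)
    heapq.heapify(heap)
    assignment = {}
    for basin in sorted(work, key=work.get, reverse=True):
        load, rank = heapq.heappop(heap)              # least-loaded processor
        assignment[basin] = rank
        heapq.heappush(heap, (load + work[basin], rank))
    return assignment

# Sub-basins drain into each other along the channel network; hydrologic
# exchanges across cut edges are what the parallel code must communicate.
downstream = {"B1": "B3", "B2": "B3", "B3": "outlet"}
parts = balance_subbasins({"B1": 120.0, "B2": 80.0, "B3": 200.0}, n_procs=2)
print(parts)
```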
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
1998-01-01
Flow and turbulence models applied to the problem of shock buffet onset are studied. The accuracy of the interactive boundary layer and the thin-layer Navier-Stokes equations solved with recent upwind techniques using similar transport field equation turbulence models is assessed for standard steady test cases, including conditions having significant shock separation. The two methods are found to compare well in the shock buffet onset region of a supercritical airfoil that involves strong trailing-edge separation. A computational analysis using the interactive-boundary layer has revealed a Reynolds scaling effect in the shock buffet onset of the supercritical airfoil, which compares well with experiment. The methods are next applied to a conventional airfoil. Steady shock-separated computations of the conventional airfoil with the two methods compare well with experiment. Although the interactive boundary layer computations in the shock buffet region compare well with experiment for the conventional airfoil, the thin-layer Navier-Stokes computations do not. These findings are discussed in connection with possible mechanisms important in the onset of shock buffet and the constraints imposed by current numerical modeling techniques.
Zerbini, Talita; da Silva, Luiz Fernando Ferraz; Ferro, Antonio Carlos Gonçalves; Kay, Fernando Uliana; Junior, Edson Amaro; Pasqualucci, Carlos Augusto Gonçalves; do Nascimento Saldiva, Paulo Hilario
2014-01-01
OBJECTIVE: The aim of the present work is to analyze the differences and similarities between the elements of a conventional autopsy and images obtained from postmortem computed tomography in a case of a homicide stab wound. METHOD: Comparison between the findings of different methods: autopsy and postmortem computed tomography. RESULTS: In some aspects, autopsy is still superior to imaging, especially in relation to external examination and the description of lesion vitality. However, the findings of gas embolism, pneumothorax and pulmonary emphysema and the relationship between the internal path of the instrument of aggression and the entry wound are better demonstrated by postmortem computed tomography. CONCLUSIONS: Although multislice computed tomography has greater accuracy than autopsy, we believe that the conventional autopsy method is fundamental for providing evidence in criminal investigations. PMID:25518020
Default Mode and Executive Networks Areas: Association with the Serial Order in Divergent Thinking
Heinonen, Jarmo; Numminen, Jussi; Hlushchuk, Yevhen; Antell, Henrik; Taatila, Vesa; Suomala, Jyrki
2016-01-01
Scientific findings have suggested a two-fold structure of the cognitive process. By using the heuristic thinking mode, people automatically process information that tends to be invariant across days, whereas by using the explicit thinking mode people explicitly process information that tends to be variant compared to typical previously learned information patterns. Previous studies on creativity found an association between creativity and the brain regions in the prefrontal cortex, the anterior cingulate cortex, the default mode network and the executive network. However, which neural networks contribute to the explicit mode of thinking during idea generation remains an open question. We employed an fMRI paradigm to examine which brain regions were activated when participants (n = 16) mentally generated alternative uses for everyday objects. Most previous creativity studies required participants to verbalize responses during idea generation, whereas in this study participants produced mental alternatives without verbalizing. This study found activation in the left anterior insula when contrasting idea generation and object identification. This finding suggests that the insula (part of the brain’s salience network) plays a role in facilitating both the central executive and default mode networks to activate idea generation. We also investigated closely the effect of the serial order of the idea being generated on brain responses: The amplitude of fMRI responses correlated positively with the serial order of the idea being generated in the anterior cingulate cortex, which is part of the central executive network. Positive correlation with the serial order was also observed in the regions typically assigned to the default mode network: the precuneus/cuneus, inferior parietal lobule and posterior cingulate cortex. These networks support the explicit mode of thinking and help the individual to convert conventional mental models to new ones. The serial order correlated negatively with the BOLD responses in the posterior presupplementary motor area, left premotor cortex, right cerebellum and left inferior frontal gyrus. This finding might imply that idea generation proceeded without a verbal processing demand, reflecting a lack of need for new object identification during idea generation events. The results of the study are consistent with recent creativity studies, which emphasize that the creativity process involves working memory capacity to spontaneously shift between different kinds of thinking modes according to the context. PMID:27627760
Evaluation of the transport matrix method for simulation of ocean biogeochemical tracers
NASA Astrophysics Data System (ADS)
Kvale, Karin F.; Khatiwala, Samar; Dietze, Heiner; Kriest, Iris; Oschlies, Andreas
2017-06-01
Conventional integration of Earth system and ocean models can accrue considerable computational expenses, particularly for marine biogeochemical applications. Offline numerical schemes in which only the biogeochemical tracers are time stepped and transported using a pre-computed circulation field can substantially reduce the burden and are thus an attractive alternative. One such scheme is the transport matrix method (TMM), which represents tracer transport as a sequence of sparse matrix-vector products that can be performed efficiently on distributed-memory computers. While the TMM has been used for a variety of geochemical and biogeochemical studies, to date the resulting solutions have not been comprehensively assessed against their online counterparts. Here, we present a detailed comparison of the two. It is based on simulations of the state-of-the-art biogeochemical sub-model embedded within the widely used coarse-resolution University of Victoria Earth System Climate Model (UVic ESCM). The default, non-linear advection scheme was first replaced with a linear, third-order upwind-biased advection scheme to satisfy the linearity requirement of the TMM. Transport matrices were extracted from an equilibrium run of the physical model and subsequently used to integrate the biogeochemical model offline to equilibrium. The identical biogeochemical model was also run online. Our simulations show that offline integration introduces some bias to biogeochemical quantities through the omission of the polar filtering used in UVic ESCM and in the offline application of time-dependent forcing fields, with high latitudes showing the largest differences with respect to the online model. Differences in other regions and in the seasonality of nutrients and phytoplankton distributions are found to be relatively minor, giving confidence that the TMM is a reliable tool for offline integration of complex biogeochemical models. Moreover, while UVic ESCM is a serial code, the TMM can be run on a parallel machine with no change to the underlying biogeochemical code, thus providing orders of magnitude speed-up over the online model.
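The core of the TMM is easy to sketch: transport becomes a pre-computed sparse matrix applied once per time step, and only the biogeochemical source/sink term is evaluated online. The matrix and the source term below are random stand-ins, not UVic ESCM output.

```python
# Minimal sketch of offline tracer stepping with a transport matrix:
# each step is a sparse matrix-vector product plus a local biogeochemical
# source/sink evaluation. All data here are synthetic placeholders.
import numpy as np
import scipy.sparse as sp

n = 1000                                   # number of ocean grid boxes
dt = 0.1
# stand-in transport matrix: sparse, row-normalized, diffusion-like
T = sp.random(n, n, density=0.005, format="csr") + sp.identity(n)
T = sp.diags(1.0 / np.asarray(T.sum(axis=1)).ravel()) @ T

def bio_source(c):
    # placeholder biogeochemistry: linear uptake plus remineralization
    return -0.01 * c + 0.01 * c.mean()

c = np.ones(n)                             # tracer concentration
for step in range(500):                    # time stepping toward equilibrium
    c = T @ c + dt * bio_source(c)         # sparse matrix-vector product
```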
Computational Nuclear Physics and Post Hartree-Fock Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lietz, Justin; Sam, Novario; Hjorth-Jensen, M.
We present a computational approach to infinite nuclear matter employing Hartree-Fock theory, many-body perturbation theory and coupled cluster theory. These lectures are closely linked with those of chapters 9, 10 and 11 and serve as input for the correlation functions employed in Monte Carlo calculations in chapter 9, the in-medium similarity renormalization group theory of dense fermionic systems of chapter 10 and the Green's function approach in chapter 11. We provide extensive code examples and benchmark calculations, thereby allowing the reader to start writing his or her own codes. We start with an object-oriented serial code and end with discussions on strategies for porting the code to present and planned high-performance computing facilities.
Automated noninvasive classification of renal cancer on multiphase CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linguraru, Marius George; Wang, Shijun; Shah, Furhawn
2011-10-15
Purpose: To explore the added value of the shape of renal lesions for classifying renal neoplasms. To investigate the potential of computer-aided analysis of contrast-enhanced computed-tomography (CT) to quantify and classify renal lesions. Methods: A computer-aided clinical tool based on adaptive level sets was employed to analyze 125 renal lesions from contrast-enhanced abdominal CT studies of 43 patients. There were 47 cysts and 78 neoplasms: 22 Von Hippel-Lindau (VHL), 16 Birt-Hogg-Dube (BHD), 19 hereditary papillary renal carcinomas (HPRC), and 21 hereditary leiomyomatosis and renal cell cancers (HLRCC). The technique quantified the three-dimensional size and enhancement of lesions. Intrapatient and interphase registration facilitated the study of lesion serial enhancement. The histograms of curvature-related features were used to classify the lesion types. The areas under the curve (AUC) were calculated for receiver operating characteristic curves. Results: Tumors were robustly segmented with 0.80 overlap (0.98 correlation) between manual and semi-automated quantifications. The method further identified morphological discrepancies between the types of lesions. The classification based on lesion appearance, enhancement and morphology between cysts and cancers showed AUC = 0.98; for BHD + VHL (solid cancers) vs. HPRC + HLRCC AUC = 0.99; for VHL vs. BHD AUC = 0.82; and for HPRC vs. HLRCC AUC = 0.84. All semi-automated classifications were statistically significant (p < 0.05) and superior to the analyses based solely on serial enhancement. Conclusions: The computer-aided clinical tool allowed the accurate quantification of cystic, solid, and mixed renal tumors. Cancer types were classified into four categories using their shape and enhancement. Comprehensive imaging biomarkers of renal neoplasms on abdominal CT may facilitate their noninvasive classification, guide clinical management, and monitor responses to drugs or interventions.
Page, M. P. A.; Norris, D.
2009-01-01
We briefly review the considerable evidence for a common ordering mechanism underlying both immediate serial recall (ISR) tasks (e.g. digit span, non-word repetition) and the learning of phonological word forms. In addition, we discuss how recent work on the Hebb repetition effect is consistent with the idea that learning in this task is itself a laboratory analogue of the sequence-learning component of phonological word-form learning. In this light, we present a unifying modelling framework that seeks to account for ISR and Hebb repetition effects, while being extensible to word-form learning. Because word-form learning is performed in the service of later word recognition, our modelling framework also subsumes a mechanism for word recognition from continuous speech. Simulations of a computational implementation of the modelling framework are presented and are shown to be in accordance with data from the Hebb repetition paradigm. PMID:19933143
New method for designing serial resonant power converters
NASA Astrophysics Data System (ADS)
Hinov, Nikolay
2017-12-01
In the current work, a comprehensive method for the design of serial resonant power converters is presented. The method is based on a new, simplified approach to the analysis of this kind of power electronic device. It is grounded on assuming a resonant mode of operation when finding the relation between input and output voltage, regardless of the actual operating mode (when the controlling frequency is below or above the resonant frequency). This approach is named the 'quasiresonant method of analysis', because it is based on assuming that all operational modes are 'sort of' resonant modes. An estimate of the error introduced by this hypothesis was made and compared to the classic analysis. The quasiresonant method of analysis offers two main advantages: speed and ease in designing the presented power circuits. Hence it is very useful in practice and in teaching power electronics. Its applicability is proven with mathematical modelling and computer simulation.
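The abstract does not reproduce the derivation, so for orientation the snippet below evaluates the textbook fundamental-harmonic-approximation gain of a series resonant converter, the kind of input-output relation such a design method must supply. The formula and component values are standard-literature assumptions, not the paper's quasiresonant result.

```python
# Classic fundamental-harmonic approximation (FHA) voltage gain of a series
# resonant converter; NOT the paper's quasiresonant analysis, included only
# to show the kind of relation being designed around. Values are arbitrary.
import math

def src_gain(fs, L, C, Q):
    """Normalized output/input voltage: fs switching frequency [Hz],
    L, C resonant tank components, Q loaded quality factor."""
    f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))   # resonant frequency
    fn = fs / f0                                    # normalized frequency
    return 1.0 / math.sqrt(1.0 + Q**2 * (fn - 1.0 / fn) ** 2)

# At resonance (fn = 1) the gain is 1 regardless of load, which is exactly
# the simplification a 'resonant mode' assumption exploits.
print(src_gain(fs=105e3, L=100e-6, C=22e-9, Q=2.5))
```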
Se-SAD serial femtosecond crystallography datasets from selenobiotinyl-streptavidin
Yoon, Chun Hong; DeMirci, Hasan; Sierra, Raymond G.; Dao, E. Han; Ahmadi, Radman; Aksit, Fulya; Aquila, Andrew L.; Batyuk, Alexander; Ciftci, Halilibrahim; Guillet, Serge; Hayes, Matt J.; Hayes, Brandon; Lane, Thomas J.; Liang, Meng; Lundström, Ulf; Koglin, Jason E.; Mgbam, Paul; Rao, Yashas; Rendahl, Theodore; Rodriguez, Evan; Zhang, Lindsey; Wakatsuki, Soichi; Boutet, Sébastien; Holton, James M.; Hunter, Mark S.
2017-01-01
We provide a detailed description of selenobiotinyl-streptavidin (Se-B SA) co-crystal datasets recorded using the Coherent X-ray Imaging (CXI) instrument at the Linac Coherent Light Source (LCLS) for selenium single-wavelength anomalous diffraction (Se-SAD) structure determination. Se-B SA was chosen as the model system for its high affinity between biotin and streptavidin where the sulfur atom in the biotin molecule (C10H16N2O3S) is substituted with selenium. The dataset was collected at three different transmissions (100, 50, and 10%) using a serial sample chamber setup which allows for two sample chambers, a front chamber and a back chamber, to operate simultaneously. Diffraction patterns from Se-B SA were recorded to a resolution of 1.9 Å. The dataset is publicly available through the Coherent X-ray Imaging Data Bank (CXIDB) and also on LCLS compute nodes as a resource for research and algorithm development. PMID:28440794
NASA Astrophysics Data System (ADS)
Chan, Kenneth H.; Tom, Henry; Darling, Cynthia L.; Fried, Daniel
2015-02-01
Previous studies have established that caries lesions can be imaged with high contrast without the interference of stains at near-IR wavelengths greater than 1300-nm. It has been demonstrated that computer-controlled laser scanning systems utilizing IR lasers operating at high pulse repetition rates can be used for serial imaging and selective removal of caries lesions. In this study, we report our progress towards the development of algorithms for generating rasterized ablation maps from near-IR reflectance images for the removal of natural lesions from tooth occlusal surfaces. An InGaAs camera and a filtered tungsten-halogen lamp producing near-IR light in the range of 1500-1700-nm were used to collect cross-polarization reflectance images of tooth occlusal surfaces. A CO2 laser operating at a wavelength of 9.3-μm with a pulse duration of 10-15-μs was used for image-guided ablation.
Tomographic techniques for the study of exceptionally preserved fossils
Sutton, Mark D
2008-01-01
Three-dimensional fossils, especially those preserving soft-part anatomy, are a rich source of palaeontological information; they can, however, be difficult to work with. Imaging of serial planes through an object (tomography) allows study of both the inside and outside of three-dimensional fossils. Tomography may be performed using physical grinding or sawing coupled with photography, through optical techniques of serial focusing, or using a variety of scanning technologies such as neutron tomography, magnetic resonance imaging and most usefully X-ray computed tomography. This latter technique is applicable at a variety of scales, and when combined with a synchrotron X-ray source can produce very high-quality data that may be augmented by phase-contrast information to enhance contrast. Tomographic data can be visualized in several ways, the most effective of which is the production of isosurface-based ‘virtual fossils’ that can be manipulated and dissected interactively. PMID:18426749
Se-SAD serial femtosecond crystallography datasets from selenobiotinyl-streptavidin
NASA Astrophysics Data System (ADS)
Yoon, Chun Hong; Demirci, Hasan; Sierra, Raymond G.; Dao, E. Han; Ahmadi, Radman; Aksit, Fulya; Aquila, Andrew L.; Batyuk, Alexander; Ciftci, Halilibrahim; Guillet, Serge; Hayes, Matt J.; Hayes, Brandon; Lane, Thomas J.; Liang, Meng; Lundström, Ulf; Koglin, Jason E.; Mgbam, Paul; Rao, Yashas; Rendahl, Theodore; Rodriguez, Evan; Zhang, Lindsey; Wakatsuki, Soichi; Boutet, Sébastien; Holton, James M.; Hunter, Mark S.
2017-04-01
We provide a detailed description of selenobiotinyl-streptavidin (Se-B SA) co-crystal datasets recorded using the Coherent X-ray Imaging (CXI) instrument at the Linac Coherent Light Source (LCLS) for selenium single-wavelength anomalous diffraction (Se-SAD) structure determination. Se-B SA was chosen as the model system for its high affinity between biotin and streptavidin where the sulfur atom in the biotin molecule (C10H16N2O3S) is substituted with selenium. The dataset was collected at three different transmissions (100, 50, and 10%) using a serial sample chamber setup which allows for two sample chambers, a front chamber and a back chamber, to operate simultaneously. Diffraction patterns from Se-B SA were recorded to a resolution of 1.9 Å. The dataset is publicly available through the Coherent X-ray Imaging Data Bank (CXIDB) and also on LCLS compute nodes as a resource for research and algorithm development.
Serial interactome capture of the human cell nucleus.
Conrad, Thomas; Albrecht, Anne-Susann; de Melo Costa, Veronica Rodrigues; Sauer, Sascha; Meierhofer, David; Ørom, Ulf Andersson
2016-04-04
Novel RNA-guided cellular functions are paralleled by an increasing number of RNA-binding proteins (RBPs). Here we present 'serial RNA interactome capture' (serIC), a multiple purification procedure of ultraviolet-crosslinked poly(A)-RNA-protein complexes that enables global RBP detection with high specificity. We apply serIC to the nuclei of proliferating K562 cells to obtain the first human nuclear RNA interactome. The domain composition of the 382 identified nuclear RBPs markedly differs from previous IC experiments, including few factors without known RNA-binding domains that are in good agreement with computationally predicted RNA binding. serIC extends the number of DNA-RNA-binding proteins (DRBPs), and reveals a network of RBPs involved in p53 signalling and double-strand break repair. serIC is an effective tool to couple global RBP capture with additional selection or labelling steps for specific detection of highly purified RBPs.
Memory-based frame synchronizer. [for digital communication systems
NASA Technical Reports Server (NTRS)
Stattel, R. J.; Niswander, J. K. (Inventor)
1981-01-01
A frame synchronizer for use in digital communications systems wherein data formats can be easily and dynamically changed is described. The use of memory array elements provides increased flexibility in format selection and sync word selection, in addition to real-time reconfiguration ability. The frame synchronizer comprises a serial-to-parallel converter which converts a serial input data stream to a constantly changing parallel data output. This parallel data output is supplied to programmable sync word recognizers, each consisting of a multiplexer and a random access memory (RAM). The multiplexer is connected to both the parallel data output and an address bus which may be connected to a microprocessor or computer for purposes of programming the sync word recognizer. The RAM is used as an associative memory or decoder and is programmed to identify a specific sync word. Additional programmable RAMs are used as counter decoders to define word bit length, frame word length, and paragraph frame length.
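A software analogue of the recognizer may help: the serial-to-parallel converter is a shift register, and at each bit time its parallel contents are checked against a programmable sync pattern (the patent does this with a RAM lookup; a direct compare is used here for brevity). The pattern and widths are illustrative.

```python
# Shift-register sync word recognizer: a serial bitstream is clocked into a
# 16-bit window, and frame starts are reported when the window matches.
SYNC_WORD = 0b1110110111100010        # example 16-bit sync pattern
SYNC_MASK = (1 << 16) - 1

def find_frames(bitstream):
    """Yield bit offsets at which the sync word is recognized."""
    window = 0
    for i, bit in enumerate(bitstream):
        window = ((window << 1) | bit) & SYNC_MASK   # serial-to-parallel shift
        if i >= 15 and window == SYNC_WORD:
            yield i - 15                             # start of the sync word

bits = [0] * 5 + [int(b) for b in f"{SYNC_WORD:016b}"] + [1, 0, 1]
print(list(find_frames(bits)))   # -> [5]
```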
3-D Imaging In Virtual Environment: A Scientific Clinical and Teaching Tool
NASA Technical Reports Server (NTRS)
Ross, Muriel D.; DeVincenzi, Donald L. (Technical Monitor)
1996-01-01
The advent of powerful graphics workstations and computers has led to the advancement of scientific knowledge through three-dimensional (3-D) reconstruction and imaging of biological cells and tissues. The Biocomputation Center at NASA Ames Research Center pioneered the effort to produce an entirely computerized method for reconstruction of objects from serial sections studied in a transmission electron microscope (TEM). The software developed, ROSS (Reconstruction of Serial Sections), is now being distributed to users across the United States through Space Act Agreements. The software is used in widely disparate fields such as geology, botany, biology and medicine. In the Biocomputation Center, ROSS serves as the basis for development of virtual environment technologies for scientific and medical use. This report will describe the Virtual Surgery Workstation Project that is ongoing with clinicians at Stanford University Medical Center, and the role of the Visible Human data in the project.
On-the-fly transition search and applications to temperature-accelerated dynamics
NASA Astrophysics Data System (ADS)
Shim, Yunsic; Amar, Jacques
2015-03-01
Temperature-accelerated dynamics (TAD) is a powerful method to study non-equilibrium processes and has been providing surprising insights for a variety of systems. While serial TAD simulations have been limited by the roughly N^3 increase in the computational cost as a function of the number of atoms N in the system, recently we have shown that by carrying out parallel TAD simulations which combine spatial decomposition with our semi-rigorous synchronous sublattice algorithm, significantly improved scaling is possible. However, in this approach the size of activated events is limited by the processor size while the dynamics is not exact. Here we discuss progress in improving the scaling of serial TAD by combining the use of on-the-fly transition searching with our previously developed localized saddle-point method. We demonstrate improved performance for the cases of Ag/Ag(100) annealing and Cu/Cu(100) growth. Supported by NSF DMR-1410840.
Kallmünzer, Bernd; Breuer, Lorenz; Hering, Christiane; Raaz-Schrauder, Dorette; Kollmar, Rainer; Huttner, Hagen B; Schwab, Stefan; Köhrmann, Martin
2012-04-01
Anticoagulation is a highly effective secondary prevention in patients with cardioembolic stroke and atrial fibrillation/flutter (AF). However, the condition remains underdiagnosed, because paroxysmal AF may be missed by diagnostic tests in the acute phase. In this study, the sensitivity of AF detection was assessed for serial electrocardiographic recordings and continuous stroke unit telemetric monitoring with or without a structured algorithm to analyze telemetric data (SEA-AF). Three hundred forty-six consecutive patients with acute ischemic stroke were prospectively included and subjected to standard telemetric monitoring. In addition, telemetric data were separately analyzed following SEA-AF, consisting of a structured evaluation of episodes with high risk for AF and a chronological beat-to-beat screening of the full registration. Serial electrocardiograms were conducted in 24-hour intervals. Median effective telemetry monitoring time was 75.5 hours (interquartile range 64-86 hours). Overall, AF was diagnosed in 119 of 346 patients (34.4%). The structured reading algorithm was the most sensitive method to detected AF. Conventional telemetry and serial electrocardiographic assessments were less effective. However, only 35% of patients with previously documented paroxysmal AF and negative baseline electrocardiogram demonstrated AF episodes during monitoring. Continuous stroke unit telemetry using SEA-AF shows a significantly higher detection rate for AF compared with daily electrocardiographic assessments and standard telemetry without structured reading. The low overall probability to detect paroxysmal AF with either method during the first days after stroke demonstrates the urgent need for complementary diagnostic strategies such as long-term monitoring and frequent follow-up assessments. Clinical Trial Registration- URL: www.clinicaltrials.gov. Unique identifier: NCT01177748.
Accounting for partiality in serial crystallography using ray-tracing principles.
Kroon-Batenburg, Loes M J; Schreurs, Antoine M M; Ravelli, Raimond B G; Gros, Piet
2015-09-01
Serial crystallography generates `still' diffraction data sets that are composed of single diffraction images obtained from a large number of crystals arbitrarily oriented in the X-ray beam. Estimation of the reflection partialities, which accounts for the expected observed fractions of diffraction intensities, has so far been problematic. In this paper, a method is derived for modelling the partialities by making use of the ray-tracing diffraction-integration method EVAL. The method estimates partialities based on crystal mosaicity, beam divergence, wavelength dispersion, crystal size and the interference function, accounting for crystallite size. It is shown that modelling of each reflection by a distribution of interference-function weighted rays yields a `still' Lorentz factor. Still data are compared with a conventional rotation data set collected from a single lysozyme crystal. Overall, the presented still integration method improves the data quality markedly. The R factor of the still data compared with the rotation data decreases from 26% using a Monte Carlo approach to 12% after applying the Lorentz correction, to 5.3% when estimating partialities by EVAL and finally to 4.7% after post-refinement. The merging R(int) factor of the still data improves from 105 to 56% but remains high. This suggests that the accuracy of the model parameters could be further improved. However, with a multiplicity of around 40 and an R(int) of ∼50% the merged still data approximate the quality of the rotation data. The presented integration method suitably accounts for the partiality of the observed intensities in still diffraction data, which is a critical step to improve data quality in serial crystallography.
Broering, N C
1983-01-01
Georgetown University's Library Information System (LIS), an integrated library system designed and implemented at the Dahlgren Memorial Library, is broadly described from an administrative point of view. LIS' functional components consist of eight "user-friendly" modules: catalog, circulation, serials, bibliographic management (including Mini-MEDLINE), acquisitions, accounting, networking, and computer-assisted instruction. This article touches on emerging library services, user education, and computer information services, which are also changing the role of staff librarians. The computer's networking capability brings the library directly to users through personal or institutional computers at remote sites. The proposed Integrated Medical Center Information System at Georgetown University will include interface with LIS through a network mechanism. LIS is being replicated at other libraries, and a microcomputer version is being tested for use in a hospital setting. PMID:6688749
Li, Xiaohui; Yu, Jianhua; Gong, Yuekun; Ren, Kaijing; Liu, Jun
2015-04-21
To assess the early postoperative clinical and radiographic outcomes after navigation-assisted or standard instrumentation total knee arthroplasty (TKA). From August 2007 to May 2008, 60 KSS-A type patients underwent 67 primary TKA operations by the same surgical team. Twenty-two operations were performed with the image-free navigation system (average patient age 64.5 years), while the remaining 45 were conventional manual procedures (average age 66 years). Their preoperative demographic and functional data had no statistical differences (P>0.05). The operative duration, blood loss volume and hospitalization days were compared for the two groups. Radiographic data included the coronal femoral component angle, coronal tibial component angle, sagittal femoral component angle, sagittal tibial component angle and coronal tibiofemoral angle after one month, and functional assessment scores were evaluated at 1, 3 and 6 months postoperatively. Operative duration was significantly longer for computer navigation (P<0.05). The average blood loss volume was 555.26 ml in the computer navigation group and 647.56 ml in the conventional manual method group (P<0.05), and the hospitalization stay was shorter in the computer navigation group than in the conventional method group (7.74 vs 8.68 days) (P=0.04). The alignment deviation was smaller in the computer-assisted group than in the conventional manual method group (P<0.05). The percentage of patients with a coronal tibiofemoral angle within ±3° of the ideal value was 95.45% for the computer-assisted mini-invasive TKA group and 80% for the conventional TKA group (P=0.003). The Knee Society Clinical Rating Score was higher in the computer-assisted group than in the conventional manual method group at 1 and 3 months post-operation. However, no statistical inter-group difference existed at 6 months post-operation. Navigation allows a surgeon to precisely implant the components for TKA, and it offers faster functional recovery and a shorter hospitalization stay. At 6 months post-operation, there is no statistical inter-group difference in KSS scores.
Estimation method for serial dilution experiments.
Ben-David, Avishai; Davidson, Charles E
2014-12-01
Titration of microorganisms in infectious or environmental samples is a cornerstone of quantitative microbiology. A simple method is presented to estimate the microbial counts obtained with the serial dilution technique for microorganisms that can grow on bacteriological media and develop into a colony. The number (concentration) of viable microbial organisms is estimated from a single dilution plate (assay) without a need for replicate plates. Our method selects the best agar plate with which to estimate the microbial counts, and takes into account the colony size and plate area that both contribute to the likelihood of miscounting the number of colonies on a plate. The estimate of the optimal count given by our method can be used to narrow the search for the best (optimal) dilution plate and saves time. The required inputs are the plate size, the microbial colony size, and the serial dilution factors. The proposed approach shows relative accuracy well within ±0.1 log10 from data produced by computer simulations. The method maintains this accuracy even in the presence of dilution errors of up to 10% (for both the aliquot and diluent volumes), microbial counts between 10^4 and 10^12 colony-forming units, dilution ratios from 2 to 100, and plate size to colony size ratios between 6.25 and 200. Published by Elsevier B.V.
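The underlying counting arithmetic is simple and worth making explicit; the sketch below recovers a stock concentration from a single plate count. It is a back-of-the-envelope illustration, not the authors' optimal-plate estimator.

```python
# Basic serial-dilution arithmetic: the stock concentration is the plate
# count scaled back up by the cumulative dilution and plated volume.
def estimate_cfu_per_ml(colonies, dilution_factor, step, plated_volume_ml):
    """colonies: count on the chosen plate; dilution_factor: ratio per step
    (e.g., 10); step: number of serial dilutions that preceded plating."""
    return colonies * dilution_factor**step / plated_volume_ml

# 10-fold series, 5th tube, 0.1 ml plated, 42 colonies counted:
print(estimate_cfu_per_ml(42, 10, 5, 0.1))   # -> 4.2e7 CFU/ml
```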
Technical report on the surface reconstruction of stacked contours by using the commercial software
NASA Astrophysics Data System (ADS)
Shin, Dong Sun; Chung, Min Suk; Hwang, Sung Bae; Park, Jin Seo
2007-03-01
After drawing and stacking contours of a structure identified in serially sectioned images, a three-dimensional (3D) image can be made by surface reconstruction. Usually, custom software is written for the surface reconstruction, and to produce it, medical doctors have to enlist the help of computer engineers. In this research, therefore, surface reconstruction of stacked contours was attempted using commercial software. The purpose of this research is to enable medical doctors to perform surface reconstruction and make 3D images by themselves. The materials of this research were 996 anatomic images (1 mm intervals) of the left lower limb, made by serial sectioning of a cadaver. In Adobe Photoshop, contours of 114 anatomic structures were drawn and exported to Adobe Illustrator files. In Maya, the contours of each anatomic structure were stacked. In Rhino, superoinferior lines were drawn along all stacked contours to fill quadrangular surfaces between contours. In Maya, the contours were then deleted, and 3D images of the 114 anatomic structures were assembled with their original locations preserved. With the surface reconstruction technique developed in this research, medical doctors themselves can make 3D images from serially sectioned images such as CTs and MRIs.
Cone beam computed tomography in the diagnosis of dental disease.
Tetradis, Sotirios; Anstey, Paul; Graff-Radford, Steven
2011-07-01
Conventional radiographs provide important information for dental disease diagnosis. However, they represent 2-D images of 3-D objects with significant structure superimposition and unpredictable magnification. Cone beam computed tomography, however, allows true 3-D visualization of the dentoalveolar structures, avoiding major limitations of conventional radiographs. Cone beam computed tomography images offer great advantages in disease detection for selected patients. The authors discuss cone beam computed tomography applications in dental disease diagnosis, reviewing the pertinent literature when available.
SiSeRHMap v1.0: a simulator for mapped seismic response using a hybrid model
NASA Astrophysics Data System (ADS)
Grelle, Gerardo; Bonito, Laura; Lampasi, Alessandro; Revellino, Paola; Guerriero, Luigi; Sappa, Giuseppe; Guadagno, Francesco Maria
2016-04-01
The SiSeRHMap (simulator for mapped seismic response using a hybrid model) is a computerized methodology capable of elaborating prediction maps of seismic response in terms of acceleration spectra. It was realized on the basis of a hybrid model which combines different approaches and models in a new and non-conventional way. These approaches and models are organized in a code architecture composed of five interdependent modules. A GIS (geographic information system) cubic model (GCM), which is a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A meta-modelling process confers a hybrid nature to the methodology. In this process, the one-dimensional (1-D) linear equivalent analysis produces acceleration response spectra for a specified number of site profiles using one or more input motions. The shear wave velocity-thickness profiles, defined as trainers, are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Emul-spectra) is optimized on the above trainer acceleration response spectra by means of a dedicated evolutionary algorithm (EA) and the Levenberg-Marquardt algorithm (LMA) as the final optimizer. In the final step, the GCM maps executor module produces a serial map set of a stratigraphic seismic response at different periods, grid solving the calibrated Emul-spectra model. In addition, the spectra topographic amplification is also computed by means of a 3-D validated numerical prediction model. This model is built to match the results of the numerical simulations related to isolate reliefs using GIS morphometric data. In this way, different sets of seismic response maps are developed on which maps of design acceleration response spectra are also defined by means of an enveloping technique.
Simulation and management games for training command and control in emergencies.
Levi, Leon; Bregman, David
2003-01-01
The aim of our project was to introduce and implement simulation techniques in a problematic field: increasing health care system preparedness for disasters. This field was chosen because knowledge is gained by the few experienced staff members, who need to disperse it to others during the busy routine work of the system personnel. Knowledge management techniques, ranging from classifying the current data and centralized organizational knowledge storage to using it for decision making and dispersing it through the organization, were used in this project. In the first stage we analyzed the current system of building a preparedness protocol (set of orders). We identified the pitfalls of changing personnel and losing knowledge gained through lessons from local and national experience. For this stage we developed a database of resources and objects (casualties) to be used in the simulation under different scenarios. One of these was the differentiation between drills with a trainer and drills in front of computers able to present the needed solution. The model rules for different scenarios of multi-casualty incidents, from conventional warfare trauma to combined chemical/toxicological, as well as levels of care before and inside hospitals, were incorporated into the database management system (we used Microsoft Access's DBMS). The hardware for the management game comprised networked serial computers with the possibility of projecting scenes. For the prehospital phase, portable PCs connected to a central server were used to assess the bidirectional flow of information. Simulation software (ARENA) and a graphical interface (Visual Basic GUI) were used, as shown in the attached figure. We conclude that our system provides solutions which are in use at different levels of the healthcare system to assess and improve management command and control for different scenarios of multi-casualty incidents.
Dingari, Narahara Chari; Barman, Ishan; Kang, Jeon Woong; Kong, Chae-Ryon; Dasari, Ramachandra R.; Feld, Michael S.
2011-01-01
While Raman spectroscopy provides a powerful tool for noninvasive and real time diagnostics of biological samples, its translation to the clinical setting has been impeded by the lack of robustness of spectroscopic calibration models and the size and cumbersome nature of conventional laboratory Raman systems. Linear multivariate calibration models employing full spectrum analysis are often misled by spurious correlations, such as system drift and covariations among constituents. In addition, such calibration schemes are prone to overfitting, especially in the presence of external interferences that may create nonlinearities in the spectra-concentration relationship. To address both of these issues we incorporate residue error plot-based wavelength selection and nonlinear support vector regression (SVR). Wavelength selection is used to eliminate uninformative regions of the spectrum, while SVR is used to model the curved effects such as those created by tissue turbidity and temperature fluctuations. Using glucose detection in tissue phantoms as a representative example, we show that even a substantial reduction in the number of wavelengths analyzed using SVR leads to calibration models of prediction accuracy equivalent to linear full spectrum analysis. Further, with clinical datasets obtained from human subject studies, we also demonstrate the prospective applicability of the selected wavelength subsets without sacrificing prediction accuracy, which has extensive implications for calibration maintenance and transfer. Additionally, such wavelength selection could substantially reduce the collection time of serial Raman acquisition systems. Given the reduced footprint of serial Raman systems in relation to conventional dispersive Raman spectrometers, we anticipate that the incorporation of wavelength selection in such hardware designs will enhance the possibility of miniaturized clinical systems for disease diagnosis in the near future. PMID:21895336
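A minimal sketch of the two ingredients, wavelength selection followed by nonlinear SVR, is given below using scikit-learn on synthetic data. Univariate selection stands in for the authors' residue-error-plot method, and all sizes and hyperparameters are assumptions.

```python
# Wavelength selection + nonlinear SVR calibration on synthetic "spectra".
# SelectKBest is a crude stand-in for residue-error-plot-based selection.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 400))              # 120 spectra, 400 wavelength channels
y = X[:, 50] + 0.5 * X[:, 200] ** 2 + 0.1 * rng.normal(size=120)

model = make_pipeline(
    SelectKBest(f_regression, k=40),         # keep 40 informative channels
    SVR(kernel="rbf", C=10.0, epsilon=0.01), # nonlinear calibration model
)
model.fit(X[:80], y[:80])
print(model.score(X[80:], y[80:]))           # prediction R^2 on held-out spectra
```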
Computers: Yesterday, Today & Tomorrow.
1986-04-07
these repetitive calculations, he progressed through several scientific stages. THE ABACUS: Invented more than 4,000 years ago, the abacus is considered...by many to have been the world’s first digital calculator. It uses beads and positional values to represent quantities. The abacus served as man’s...Pascal’s mathematical digital calculator, designed around the concept of serially connected decimal counting gears. These gears were interconnected in a 10
2005 22nd International Symposium on Ballistics Volume 2 Wednesday
2005-11-18
Information 1 Experimental and Numerical Study of the Penetration of Tungsten Carbide Into Steel Targets During High Rates of Strain John F. Moxnes...QinetiQ; Vladimir Titarev, Eleuterio Toro, Umeritek Limited The Mechanism Analysis of Interior Ballistics of Serial Chamber Gun, Dr. Sanjiu Ying, Charge...Elements and Meshless Particles, Gordon R. Johnson and Robert A. Stryk, Network Computing Services, Inc. Experimental and Numerical Study of the
NASA Technical Reports Server (NTRS)
Macneice, Peter
1995-01-01
This is an introduction to numerical Particle-Mesh techniques, which are commonly used to model plasmas, gravitational N-body systems, and both compressible and incompressible fluids. The theory behind this approach is presented, and its practical implementation, both for serial and parallel machines, is discussed. This document is based on a four-hour lecture course presented by the author at the NASA Summer School for High Performance Computational Physics, held at Goddard Space Flight Center.
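As a flavour of what such a code does, here is a minimal 1-D cloud-in-cell deposition step, the particle-to-mesh half of a Particle-Mesh scheme (the grid size and particle data are illustrative):

```python
# Minimal 1-D cloud-in-cell (CIC) deposition, the core "particle -> mesh"
# step of a Particle-Mesh code, on a periodic grid.
import numpy as np

def cic_deposit(positions, weights, n_cells, box_length):
    """Assign particle weights to a periodic 1-D mesh with CIC interpolation."""
    density = np.zeros(n_cells)
    dx = box_length / n_cells
    x = positions / dx                      # position in cell units
    i_left = np.floor(x - 0.5).astype(int)  # left neighbouring cell centre
    frac = (x - 0.5) - i_left               # fraction assigned to the right cell
    for i, f, w in zip(i_left, frac, weights):
        density[i % n_cells] += w * (1.0 - f) / dx
        density[(i + 1) % n_cells] += w * f / dx
    return density

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.0, 1000)
rho = cic_deposit(pos, np.full(1000, 1.0 / 1000), n_cells=64, box_length=1.0)
print(rho.sum() * (1.0 / 64))  # total mass is conserved (~1.0)
```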
CMOS Camera Array With Onboard Memory
NASA Technical Reports Server (NTRS)
Gat, Nahum
2009-01-01
A compact CMOS (complementary metal oxide semiconductor) camera system has been developed with high resolution (1.3 Megapixels), a USB (universal serial bus) 2.0 interface, and an onboard memory. Exposure times, and other operating parameters, are sent from a control PC via the USB port. Data from the camera can be received via the USB port and the interface allows for simple control and data capture through a laptop computer.
Viggiano, A; Coppola, G
2014-01-01
A simple circuit is described that combines an AC amplifier and an analog-to-digital converter in a single, compact solution, for use in basic research, but not on humans. The circuit sends data to, and is powered from, a common USB port of modern computers; with the proper firmware and driver, the device communicates as an emulated RS232 serial port. PMID:24809030
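A hedged sketch of what host-side readout over such an emulated serial port might look like, using the pyserial package (the port name, baud rate, and ASCII line format are assumptions for illustration, not taken from the paper):

```python
# Host-side readout sketch for a device that enumerates as an emulated
# RS232 serial port, as described above. Port name, baud rate and line
# format are assumed; pyserial must be installed (pip install pyserial).
import serial

with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1.0) as port:
    for _ in range(100):                       # read 100 samples
        line = port.readline().decode("ascii", errors="replace").strip()
        if line:
            print("ADC sample:", line)
```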
NASA Technical Reports Server (NTRS)
1995-01-01
Software Bisque's TheSky, SkyPro and Remote Astronomy Software incorporate technology developed for the Hubble Space Telescope. TheSky and SkyPro work together to orchestrate locating, identifying and acquiring images of deep sky objects. With all three systems, the user can directly control computer-driven telescopes and charge coupled device (CCD) cameras through serial ports. Through the systems, astronomers and students can remotely operate a telescope at the Mount Wilson Observatory Institute.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-01-17
This library is an implementation of the Sparse Approximate Matrix Multiplication (SpAMM) algorithm. It provides a matrix data type and an approximate matrix product, which exhibits linear-scaling computational complexity for matrices with decay. The product error and the performance of the multiply can be tuned by choosing an appropriate tolerance. The library can be compiled for serial execution or for parallel execution on shared-memory systems with an OpenMP-capable compiler.
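The core idea can be sketched in a few lines: multiply matrix quadrants recursively and skip any block whose norm bound falls below the tolerance. The numpy illustration below assumes power-of-two matrix sizes and is not the library's actual API:

```python
# Sketch of the SpAMM idea: recursive block multiply with norm-based
# truncation. Skipping a block is safe because ||A_b B_b|| <= ||A_b|| ||B_b||.
import numpy as np

def spamm(A, B, tol, leaf=16):
    if np.linalg.norm(A) * np.linalg.norm(B) <= tol:
        return np.zeros((A.shape[0], B.shape[1]))   # block is negligible
    n = A.shape[0]
    if n <= leaf:
        return A @ B                                # dense base case
    h = n // 2
    C = np.empty((n, n))
    for i in (0, 1):
        for j in (0, 1):
            C[i*h:(i+1)*h, j*h:(j+1)*h] = (
                spamm(A[i*h:(i+1)*h, 0:h], B[0:h, j*h:(j+1)*h], tol, leaf)
              + spamm(A[i*h:(i+1)*h, h:n], B[h:n, j*h:(j+1)*h], tol, leaf)
            )
    return C

A = np.diag(np.exp(-0.3 * np.arange(64)))           # matrix with decay
A += 1e-6 * np.random.default_rng(2).standard_normal((64, 64))
C = spamm(A, A, tol=1e-8)
print(np.linalg.norm(C - A @ A))                    # small product error
```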
Lithium Niobate Arithmetic Logic Unit
1991-03-01
[Boot51] A.D. Booth, "A Signed Binary Multiplication Technique," Quarterly Journal of Mechanics and Applied Mathematics, Vol. IV Part 2, 1951. [ChWi79...Trans. Computers, Vol. C-26, No. 7, July 1977, pp. 681-687. [Wake81] John F. Wakerly, "Microcomputer Architecture and Programming," John Wiley and...different division methods and discusses their applicability to simple bit serial implementation. Several different designs are then presented and
Efficient Approximation Algorithms for Weighted $b$-Matching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, Arif; Pothen, Alex; Mostofa Ali Patwary, Md.
2016-01-01
We describe a half-approximation algorithm, b-Suitor, for computing a b-Matching of maximum weight in a graph with weights on the edges. b-Matching is a generalization of the well-known Matching problem in graphs, where the objective is to choose a subset M of edges in the graph such that at most a specified number b(v) of edges in M are incident on each vertex v. Subject to this restriction we maximize the sum of the weights of the edges in M. We prove that the b-Suitor algorithm computes the same b-Matching as the one obtained by the greedy algorithm for the problem. We implement the algorithm on serial and shared-memory parallel processors, and compare its performance against a collection of approximation algorithms that have been proposed for the Matching problem. Our results show that the b-Suitor algorithm outperforms the Greedy and Locally Dominant edge algorithms by one to two orders of magnitude on a serial processor. The b-Suitor algorithm has a high degree of concurrency, and it scales well up to 240 threads on a shared memory multiprocessor. The b-Suitor algorithm outperforms the Locally Dominant edge algorithm by a factor of fourteen on 16 cores of an Intel Xeon multiprocessor.
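For orientation, a sketch of the greedy half-approximation that the paper uses as a baseline (b-Suitor provably computes the same matching, but via a proposal mechanism that exposes far more concurrency):

```python
# Greedy half-approximation for weighted b-Matching: scan edges in order of
# decreasing weight and keep an edge if both endpoints still have residual
# capacity b(v). This is the baseline algorithm, not b-Suitor itself.
def greedy_b_matching(edges, b):
    """edges: list of (weight, u, v); b: dict vertex -> capacity b(v)."""
    remaining = dict(b)
    matching, total = [], 0.0
    for w, u, v in sorted(edges, reverse=True):   # heaviest edges first
        if u != v and remaining[u] > 0 and remaining[v] > 0:
            matching.append((u, v))
            total += w
            remaining[u] -= 1
            remaining[v] -= 1
    return matching, total

edges = [(5.0, "a", "b"), (4.0, "b", "c"), (3.0, "a", "c"), (1.0, "c", "d")]
print(greedy_b_matching(edges, {"a": 1, "b": 2, "c": 2, "d": 1}))
# -> ([('a', 'b'), ('b', 'c'), ('c', 'd')], 10.0)
```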
Upgrade to the Cryogenic Hydrogen Gas Target Monitoring System
NASA Astrophysics Data System (ADS)
Slater, Michael; Tribble, Robert
2013-10-01
The cryogenic hydrogen gas target at Texas A&M is a vital component for creating a secondary radioactive beam that is then used in experiments in the Momentum Achromat Recoil Spectrometer (MARS). A stable beam from the K500 superconducting cyclotron enters the gas cell and some incident particles are transmuted by a nuclear reaction into a radioactive beam, which is separated from the primary beam and used in MARS experiments. The pressure in the target chamber is monitored so that a predictable isotope production rate can be assured. A "black box" received the analog pressure data and sent RS232 serial data through an outdated serial connection to an outdated Visual Basic 6 (VB6) program, which plotted the chamber pressure continuously. The black box has been upgraded to an Arduino UNO microcontroller [Atmel Inc.], which can receive the pressure data and output via USB to a computer. It has also been programmed to accept temperature data for a future upgrade. A new computer program, with updated capabilities, has been written in Python. The software can send email alerts, create audible alarms through the Arduino, and plot pressure and temperature. The program has been designed to better fit the needs of the users. Funded by DOE and NSF-REU Program.
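A hedged sketch of the kind of host-side monitor described (the serial port name, the "P:... T:..." line format, and the alarm threshold are invented for illustration; the actual program also sends e-mail alerts and plots):

```python
# Host-side monitoring sketch in the spirit of the Python program described
# above, not the original code. Assumes the Arduino prints lines such as
# "P:752.1 T:21.3" over USB; port name and threshold are illustrative.
import serial  # pip install pyserial

PRESSURE_ALARM = 800.0  # assumed alarm threshold, arbitrary units

with serial.Serial("/dev/ttyACM0", baudrate=9600, timeout=2.0) as arduino:
    while True:
        line = arduino.readline().decode("ascii", errors="replace").strip()
        if not line.startswith("P:"):
            continue
        fields = dict(item.split(":") for item in line.split())
        pressure = float(fields["P"])
        temperature = float(fields.get("T", "nan"))
        print(f"pressure={pressure} temperature={temperature}")
        if pressure > PRESSURE_ALARM:
            print("ALARM: chamber pressure out of range")  # e-mail hook here
```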
Quantum simulations with noisy quantum computers
NASA Astrophysics Data System (ADS)
Gambetta, Jay
Quantum computing is a new computational paradigm that is expected to lie beyond the standard model of computation. This implies a quantum computer can solve problems that can't be solved by a conventional computer with tractable overhead. To fully harness this power we need a universal fault-tolerant quantum computer. However the overhead in building such a machine is high and a full solution appears to be many years away. Nevertheless, we believe that we can build machines in the near term that cannot be emulated by a conventional computer. It is then interesting to ask what these can be used for. In this talk we will present our advances in simulating complex quantum systems with noisy quantum computers. We will show experimental implementations of this on some small quantum computers.
Evaluation of a patient specific femoral alignment guide for hip resurfacing.
Olsen, Michael; Naudie, Douglas D; Edwards, Max R; Sellan, Michael E; McCalden, Richard W; Schemitsch, Emil H
2014-03-01
A novel alternative to conventional instrumentation for femoral component insertion in hip resurfacing is a patient-specific, computed tomography-based femoral alignment guide. A benchside study using cadaveric femora was performed comparing a custom alignment guide to conventional instrumentation and computer navigation. A clinical series of twenty-five hip resurfacings utilizing a custom alignment guide was conducted by three surgeons experienced in hip resurfacing. Using cadaveric femora, the custom guide was comparable to conventional instrumentation, with computer navigation proving superior to both. Clinical femoral component alignment accuracy was 3.7° and measured within ± 5° of plan in 20 of 24 cases. Patient-specific femoral alignment guides provide a satisfactory level of accuracy and may be a better alternative to conventional instrumentation for initial femoral guidewire placement in hip resurfacing. Crown Copyright © 2014. All rights reserved.
NASA Technical Reports Server (NTRS)
Lichtenstein, J. H.
1975-01-01
Power-spectral-density calculations were made of the lateral responses to atmospheric turbulence for several conventional and short take-off and landing (STOL) airplanes. The turbulence was modeled as three orthogonal velocity components, which were uncorrelated, and each was represented with a one-dimensional power spectrum. Power spectral densities were computed for displacements, rates, and accelerations in roll, yaw, and sideslip. In addition, the power spectral density of the transverse acceleration was computed. Evaluation of ride quality based on a specific ride quality criterion was also made. The results show that the STOL airplanes generally had larger values for the rate and acceleration power spectra (and, consequently, larger corresponding root-mean-square values) than the conventional airplanes. The ride quality criterion gave poorer ratings to the STOL airplanes than to the conventional airplanes.
Biolik, A; Heide, S; Lessig, R; Hachmann, V; Stoevesandt, D; Kellner, J; Jäschke, C; Watzke, S
2018-04-01
One option for improving the quality of medical post mortem examinations is through intensified training of medical students, especially in countries where such a requirement exists regardless of the area of specialisation. For this reason, new teaching and learning methods on this topic have recently been introduced. These new approaches include e-learning modules and SkillsLab stations; one way to objectify the resulting learning outcomes is by means of the OSCE process. However, despite offering several advantages, this examination format also requires considerable resources, in particular with regard to medical examiners. For this reason, many clinical disciplines have already implemented computer-based OSCE examination formats. This study investigates whether the conventional exam format for the OSCE forensic "Death Certificate" station could be replaced with a computer-based approach in future. For this study, 123 students completed the OSCE "Death Certificate" station in both a computer-based and a conventional format, half starting with the computer-based and the other half with the conventional approach in their OSCE rotation. Assignment of examination cases was random. The examination results for the two stations were compared, and both overall results and the individual items of the exam checklist were analysed by means of inferential statistics. Following statistical analysis of examination cases of varying difficulty levels and correction of the repeated-measures effect, the results of both examination formats appear to be comparable. Thus, in the descriptive item analysis, while there were some significant differences between the computer-based and conventional OSCE stations, these differences were not reflected in the overall results after a correction factor was applied (e.g. point deductions for assistance from the medical examiner were possible only at the conventional station). Thus, we demonstrate that the computer-based OSCE "Death Certificate" station is a cost-efficient and standardised examination format that yields results comparable to those from a conventional-format exam. Moreover, the examination results also indicate the need to optimise both the test itself (adjusting the degree of difficulty of the case vignettes) and the corresponding instructional and learning methods (including, for example, the use of computer programmes to complete the death certificate in small-group formats in the SkillsLab). Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Ji, Yong Bae; Song, Chang Myeon; Bang, Hyang Sook; Park, Hae Jin; Lee, Ji Young; Tae, Kyung
2017-07-01
The purpose of this study was to compare the functional and cosmetic outcomes of robot-assisted neck dissection with those of conventional neck dissection. We prospectively analyzed 113 patients with head and neck cancer who underwent unilateral neck dissection by a robot-assisted postauricular facelift approach (38 patients) or a conventional trans-cervical approach (75 patients). Postoperative functional outcomes such as edema, sensory loss, pain, and fibrosis in the neck, and limitations of neck and shoulder motion, together with cosmetic satisfaction scored by questionnaire, were evaluated serially up to 1 year postoperatively and compared between the two groups. There were differences in baseline clinicopathologic characteristics, including age, T classification, and stage, between the two groups. The mean score for neck edema was lower in the robotic group than in the conventional group at 1 day and 3 days postoperatively, and sensory loss was also lower in the robotic group at 1 day, 3 days, and 1 week postoperatively (P<0.05). Postoperative cosmetic satisfaction was significantly higher in the robotic group than in the conventional group at 1, 3, 6, and 12 months postoperatively. Transient marginal nerve palsy was more frequent in the robotic group than in the conventional group (P=0.043). Postoperative neck edema and sensory loss were lower in the robotic group in the early postoperative period, although the clinical significance of this is not clear. Cosmetic satisfaction was superior in the robotic group. Copyright © 2017 Elsevier Ltd. All rights reserved.
Lee, Nyoung Keun; Lee, Byung Hoon; Hwang, Yoon Joon; Kim, Su Young; Lee, Ji Young; Joo, Mee
2011-04-01
Acute hemorrhagic leukoencephalitis (AHL) is a rare and usually fatal disease characterized by an acute onset of neurological abnormalities. We describe the case of a 37-year-old man with biphasic AHL with a focus on the rare involvement of the brain stem and cerebellum. Initial computed tomography (CT) and magnetic resonance imaging revealed two hemorrhagic foci in the left middle cerebellar peduncle. After 15 days multifocal hematomas in the contralateral cerebellar hemisphere were imaged using CT. The pathological diagnosis was AHL. Following high-dose steroid treatment, the patient recovered with minor neurological sequelae.
[A skin cell segregating control system based on PC].
Liu, Wen-zhong; Zhou, Ming; Zhang, Hong-bing
2005-11-01
A skin cell segregating control system based on a PC (personal computer) is presented in this paper. Its front controller is a single-chip microcomputer that enables operation on 6 patients simultaneously, and thus provides great convenience for the clinical treatment of vitiligo. With the use of serial-port communication technology, it is possible to monitor and control the front controller from a PC terminal. The application of computer image acquisition technology realizes synchronous acquisition of pathologic skin cell images before and after the operation, together with the case history. Clinical tests prove its conformity with national standards and the pre-set technological requirements.
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
NASA Technical Reports Server (NTRS)
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.
Complex Instruction Set Quantum Computing
NASA Astrophysics Data System (ADS)
Sanders, G. D.; Kim, K. W.; Holton, W. C.
1998-03-01
In proposed quantum computers, electromagnetic pulses are used to implement logic gates on quantum bits (qubits). Gates are unitary transformations applied to coherent qubit wavefunctions, and a universal computer can be created using a minimal set of gates. By applying many elementary gates in sequence, desired quantum computations can be performed. This reduced instruction set approach to quantum computing (RISC QC) is characterized by serial application of a few basic pulse shapes and a long coherence time. However, the unitary matrix of the overall computation is ultimately a unitary matrix of the same size as any of the elementary matrices. This suggests that we might replace a sequence of reduced instructions with a single complex instruction using an optimally tailored pulse. We refer to this approach as complex instruction set quantum computing (CISC QC). One trades the requirement for long coherence times for the ability to design and generate potentially more complex pulses. We consider a model system of coupled qubits interacting through nearest neighbor coupling and show that CISC QC can reduce the time required to perform quantum computations.
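The key observation, that a sequence of elementary gates collapses into a single unitary of the same dimension, is easy to verify numerically; a small two-qubit example (illustrative, not tied to the authors' coupled-qubit model):

```python
# A sequence of elementary gates is itself one unitary of the same
# dimension -- checked numerically for a two-qubit Bell-state circuit.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])         # control = first qubit

# RISC-style sequence: H on qubit 1, then CNOT (makes a Bell state from |00>).
U_sequence = CNOT @ np.kron(H, I)

# A "CISC" pulse would implement U_sequence directly; it is still unitary:
print(np.allclose(U_sequence.conj().T @ U_sequence, np.eye(4)))  # True
print(U_sequence @ np.array([1, 0, 0, 0]))  # Bell state (1, 0, 0, 1)/sqrt(2)
```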
Serial killers with military experience: applying learning theory to serial murder.
Castle, Tammy; Hensley, Christopher
2002-08-01
Scholars have endeavored to study the motivation and causality behind serial murder by researching biological, psychological, and sociological variables. Some of these studies have provided support for the relationship between these variables and serial murder. However, the study of serial murder continues to be an exploratory rather than explanatory research topic. This article examines the possible link between serial killers and military service. Citing previous research using social learning theory for the study of murder, this article explores how potential serial killers learn to reinforce violence, aggression, and murder in military boot camps. As with other variables considered in serial killer research, military experience alone cannot account for all cases of serial murder. Future research should continue to examine this possible link.
The National Shipbuilding Research Program, Computer Aided Process Planning for Shipyards
1986-08-01
Factory Simulation with Conventional Factory Planning Techniques Financial Justification of State-of-the-Art Investment: A Study Using CAPP...and engineer to order.” “Factory Simulation: Approach to Integration of Computer-Based Factory Simulation with Conventional Factory Planning Techniques
Starborg, Tobias; Kadler, Karl E
2015-03-01
Studies of gene regulation, signaling pathways, and stem cell biology are contributing greatly to our understanding of early embryonic vertebrate development. However, much less is known about the events during the latter half of embryonic development, when tissues comprising mostly extracellular matrix (ECM) are formed. The matrix extends far beyond the boundaries of individual cells and is refractory to study by conventional biochemical and molecular techniques; thus major gaps exist in our knowledge of the formation and three-dimensional (3D) organization of the dense tissues that form the bulk of adult vertebrates. Serial block face-scanning electron microscopy (SBF-SEM) has the ability to image volumes of tissue containing numerous cells at a resolution sufficient to study the organization of the ECM. Furthermore, whereas light microscopy was once relatively straightforward and electron microscopy was performed in specialist laboratories, the tables are turned; SBF-SEM is relatively straightforward and is becoming routine in high-end resolution studies of embryonic structures in vivo. In this review, we discuss the emergence of SBF-SEM as a tool for studying embryonic vertebrate development. © 2015 Wiley Periodicals, Inc.
Normal tissue complication probability modelling of tissue fibrosis following breast radiotherapy
NASA Astrophysics Data System (ADS)
Alexander, M. A. R.; Brooks, W. A.; Blake, S. W.
2007-04-01
Cosmetic late effects of radiotherapy such as tissue fibrosis are increasingly regarded as being of importance. It is generally considered that the complication probability of a radiotherapy plan is dependent on the dose uniformity, and can be reduced by using better compensation to remove dose hotspots. This work aimed to model the effects of improved dose homogeneity on complication probability. The Lyman and relative seriality NTCP models were fitted to clinical fibrosis data for the breast collated from the literature. Breast outlines were obtained from a commercially available Rando phantom using the Osiris system. Multislice breast treatment plans were produced using a variety of compensation methods. Dose-volume histograms (DVHs) obtained for each treatment plan were reduced to simple numerical parameters using the equivalent uniform dose and effective volume DVH reduction methods. These parameters were input into the models to obtain complication probability predictions. The fitted model parameters were consistent with a parallel tissue architecture. Conventional clinical plans generally showed reducing complication probabilities with increasing compensation sophistication. Extremely homogeneous plans representing idealized IMRT treatments showed increased complication probabilities compared to conventional planning methods, as a result of increased dose to areas receiving sub-prescription doses using conventional techniques.
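For concreteness, the Lyman (Lyman-Kutcher-Burman) model referred to above can be written in a few lines; the parameter values below are illustrative placeholders, not the fitted values from the study:

```python
# The standard Lyman-Kutcher-Burman NTCP form: reduce the DVH to a
# generalized EUD, then map it through a normal CDF.
import numpy as np
from scipy.stats import norm

def lkb_ntcp(doses, volumes, TD50, m, n):
    """doses/volumes: DVH bins (Gy, fractional volume); TD50, m, n: LKB params."""
    geud = np.sum(volumes * doses ** (1.0 / n)) ** n   # generalized EUD
    t = (geud - TD50) / (m * TD50)
    return norm.cdf(t)

# Toy two-bin DVH: 60% of the organ at 50 Gy, 40% at 10 Gy.
print(lkb_ntcp(np.array([50.0, 10.0]), np.array([0.6, 0.4]),
               TD50=55.0, m=0.20, n=0.9))  # n near 1 ~ parallel architecture
```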
Detection of Gastrointestinal Pathogens from Stool Samples on Hemoccult Cards by Multiplex PCR.
Alberer, Martin; Schlenker, Nicklas; Bauer, Malkin; Helfrich, Kerstin; Mengele, Carolin; Löscher, Thomas; Nothdurft, Hans Dieter; Bretzel, Gisela; Beissner, Marcus
2017-01-01
Purpose. Up to 30% of international travelers are affected by travelers' diarrhea (TD). Reliable data on the etiology of TD is lacking. Sufficient laboratory capacity at travel destinations is often unavailable and transporting conventional stool samples to the home country is inconvenient. We evaluated the use of Hemoccult cards for stool sampling combined with a multiplex PCR for the detection of model viral, bacterial, and protozoal TD pathogens. Methods. Following the creation of serial dilutions for each model pathogen, last positive dilution steps (LPDs) and the last positive sample concentrations (LPCs) calculated from them were compared between conventional stool samples and card samples. Furthermore, card samples were tested after a prolonged time interval simulating storage during a travel duration of up to 6 weeks. Results. The LPDs/LPCs were comparable to testing of conventional stool samples. After storage on Hemoccult cards, the recovery rate was 97.6% for C. jejuni, 100% for E. histolytica, 97.6% for norovirus GI, and 100% for GII. Detection of expected pathogens was possible at weekly intervals up to 42 days. Conclusion. Stool samples on Hemoccult cards stored at room temperature can be used in combination with a multiplex PCR as a reliable tool for testing of TD pathogens.
Cardona, Albert; Saalfeld, Stephan; Preibisch, Stephan; Schmid, Benjamin; Cheng, Anchi; Pulokas, Jim; Tomancak, Pavel; Hartenstein, Volker
2010-01-01
The analysis of microcircuitry (the connectivity at the level of individual neuronal processes and synapses), which is indispensable for our understanding of brain function, is based on serial transmission electron microscopy (TEM) or one of its modern variants. Due to technical limitations, most previous studies that used serial TEM recorded relatively small stacks of individual neurons. As a result, our knowledge of microcircuitry in any nervous system is very limited. We applied the software package TrakEM2 to reconstruct neuronal microcircuitry from TEM sections of a small brain, the early larval brain of Drosophila melanogaster. TrakEM2 enables us to embed the analysis of the TEM image volumes at the microcircuit level into a light microscopically derived neuro-anatomical framework, by registering confocal stacks containing sparsely labeled neural structures with the TEM image volume. We imaged two sets of serial TEM sections of the Drosophila first instar larval brain neuropile and one ventral nerve cord segment, and here report our first results pertaining to Drosophila brain microcircuitry. Terminal neurites fall into a small number of generic classes termed globular, varicose, axiform, and dendritiform. Globular and varicose neurites have large diameter segments that carry almost exclusively presynaptic sites. Dendritiform neurites are thin, highly branched processes that are almost exclusively postsynaptic. Due to the high branching density of dendritiform fibers and the fact that synapses are polyadic, neurites are highly interconnected even within small neuropile volumes. We describe the network motifs most frequently encountered in the Drosophila neuropile. Our study introduces an approach towards a comprehensive anatomical reconstruction of neuronal microcircuitry and delivers microcircuitry comparisons between vertebrate and insect neuropile. PMID:20957184
The eyeball killer: serial killings with postmortem globe enucleation.
Coyle, Julie; Ross, Karen F; Barnard, Jeffrey J; Peacock, Elizabeth; Linch, Charles A; Prahlow, Joseph A
2015-05-01
Although serial killings are relatively rare, they can be the cause of a great deal of anxiety while the killer remains at-large. Despite the fact that the motivations for serial killings are typically quite complex, the psychological analysis of a serial killer can provide valuable insight into how and why certain individuals become serial killers. Such knowledge may be instrumental in preventing future serial killings or in solving ongoing cases. In certain serial killings, the various incidents have a variety of similar features. Identification of similarities between separate homicidal incidents is necessary to recognize that a serial killer may be actively killing. In this report, the authors present a group of serial killings involving three prostitutes who were shot to death over a 3-month period. Scene and autopsy findings, including the unusual finding of postmortem enucleation of the eyes, led investigators to recognize the serial nature of the homicides. © 2015 American Academy of Forensic Sciences.
Visual acuity and quality of life in dry eye disease: Proceedings of the OCEAN group meeting.
Benítez-Del-Castillo, José; Labetoulle, Marc; Baudouin, Christophe; Rolando, Maurizio; Akova, Yonca A; Aragona, Pasquale; Geerling, Gerd; Merayo-Lloves, Jesús; Messmer, Elisabeth M; Boboridis, Kostas
2017-04-01
Dry eye disease (DED) results in tear film instability and hyperosmolarity, inflammation of the ocular surface and, ultimately, visual disturbance that can significantly impact a patient's quality of life. The effects on visual acuity result in difficulties with driving, reading and computer use and negatively impact psychological health. These effects also extend to the workplace, with a loss of productivity and quality of work causing substantial economic losses. The effects of DED and the impact on vision experienced by patients may not be given sufficient importance by ophthalmologists. Functional visual acuity (FVA) is a measure of visual acuity after sustained eye opening without blinking for at least 10 s and mimics the sustained visual acuity of daily life. Measuring dynamic FVA allows the detection of impaired visual function in patients with DED who may display normal conventional visual acuity. There are currently several tests and methods that can be used to measure dynamic visual function: the SSC-350 FVA measurement system, assessment of best-corrected visual acuity decay using the interblink visual acuity decay test, serial measurements of ocular and corneal higher order aberrations, and measurement of dynamic vision quality using the Optical Quality Analysis System. Although the equipment for these methods may be too large or unaffordable for use in clinical practice, FVA testing is an important assessment for DED. Copyright © 2016 Elsevier Inc. All rights reserved.
Skin conditions: benign nodular skin lesions.
Nguyen, Tam; Zuniga, Ramiro
2013-04-01
Benign subcutaneous lesions are a common reason that patients visit family physicians. Lipomas are the most common of these lesions; they most often occur on the trunk and proximal extremities. Recent data show that as many as half of the fat cells in lipomas are atypical. Ultrasound is used increasingly to confirm lipoma diagnosis, but deep lesions should be evaluated with magnetic resonance imaging study or computed tomography scan to exclude involvement of underlying structures and/or liposarcoma. Small lesions can sometimes be managed with serial injections of midpotency steroids. Larger lesions (larger than 5 cm), those compressing other structures, or those suspicious for malignancy should be excised using standard surgical excision or, when possible, the newer minimal-scar segmental extraction technique. Ganglion cysts are another common lesion, the presence of which often is confirmed with ultrasound if the diagnosis is not clinically apparent. Management includes splinting, aspiration, and/or injection of steroids, with or without hyaluronidase. Epidermal inclusion cysts, also called sebaceous cysts, typically are asymptomatic unless they become infected. Ultrasound can aid in diagnosis. The only definitive management is surgical excision with complete removal of the cyst wall or capsule, using minimal-scar segmental extraction or conventional surgical removal. Written permission from the American Academy of Family Physicians is required for reproduction of this material in whole or in part in any form or medium.
Simulated parallel annealing within a neighborhood for optimization of biomechanical systems.
Higginson, J S; Neptune, R R; Anderson, F C
2005-09-01
Optimization problems for biomechanical systems have become extremely complex. Simulated annealing (SA) algorithms have performed well in a variety of test problems and biomechanical applications; however, despite advances in computer speed, convergence to optimal solutions for systems of even moderate complexity has remained prohibitive. The objective of this study was to develop a portable parallel version of a SA algorithm for solving optimization problems in biomechanics. The algorithm for simulated parallel annealing within a neighborhood (SPAN) was designed to minimize interprocessor communication time and closely retain the heuristics of the serial SA algorithm. The computational speed of the SPAN algorithm scaled linearly with the number of processors on different computer platforms for a simple quadratic test problem and for a more complex forward dynamic simulation of human pedaling.
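A hedged sketch of the general idea, evaluating p perturbed candidates concurrently in each cycle, is shown below; it is a toy illustration of parallel neighborhood evaluation, not the authors' SPAN algorithm, whose neighborhood logic and convergence heuristics are more involved:

```python
# Toy parallel-annealing sketch: each cycle, p perturbed candidates are
# evaluated concurrently and the Metropolis rule is applied to the best.
import math
import random
from multiprocessing import Pool

def objective(x):                       # toy quadratic test problem
    return sum(xi ** 2 for xi in x)

def anneal(x0, p=4, cycles=200, T0=1.0):
    x, fx, T = list(x0), objective(x0), T0
    with Pool(p) as pool:
        for _ in range(cycles):
            candidates = [[xi + random.gauss(0, 0.5) for xi in x]
                          for _ in range(p)]
            costs = pool.map(objective, candidates)        # parallel step
            best = min(range(p), key=lambda i: costs[i])
            if costs[best] < fx or random.random() < math.exp((fx - costs[best]) / T):
                x, fx = candidates[best], costs[best]
            T *= 0.98                                      # cooling schedule
    return x, fx

if __name__ == "__main__":
    print(anneal([3.0, -2.0, 1.5]))     # converges toward the origin
```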
Extreme hydronephrosis due to uretropelvic junction obstruction in infant (case report).
Krzemień, Grażyna; Szmigielska, Agnieszka; Bombiński, Przemysław; Barczuk, Marzena; Biejat, Agnieszka; Warchoł, Stanisław; Dudek-Warchoł, Teresa
2016-01-01
Hydronephrosis is one of the most common congenital abnormalities of the urinary tract. The left kidney is more commonly affected than the right, and the condition is more common in males. We aimed to determine the role of ultrasonography, renal dynamic scintigraphy and lower-dose computed tomography urography in the preoperative diagnostic workup of an infant with extreme hydronephrosis. We present a boy with antenatally diagnosed hydronephrosis. On serial postnatal ultrasonography, renal scintigraphy and computed tomography urography we observed slightly declining function in the dilated kidney and increasing pelvic dilatation. Pyeloplasty was performed at the age of four months with a good result. Results of ultrasonography and renal dynamic scintigraphy in a child with extreme hydronephrosis can be difficult to assess; therefore, before the surgical procedure a lower-dose computed tomography urography should be performed.
PARAVT: Parallel Voronoi tessellation code
NASA Astrophysics Data System (ADS)
González, R. E.
2016-10-01
In this study, we present a new open source code for massive parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is focused on astrophysical applications, where VT densities and neighbour lists are widely used. There are several serial Voronoi tessellation codes; however, no open-source parallel implementation is available to handle the large numbers of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT is computed using the Qhull library. Domain decomposition takes into account consistent boundary computation between tasks and includes periodic conditions. In addition, the code computes the neighbour list, Voronoi density, Voronoi cell volume and density gradient for each particle, as well as densities on a regular grid. The code implementation and user guide are publicly available at https://github.com/regonzar/paravt.
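A serial sketch of the per-particle quantities the code computes (neighbour lists, cell volumes, densities), using scipy's wrapper around the same Qhull library; unbounded boundary cells are simply skipped here, whereas PARAVT treats domain boundaries consistently:

```python
# Serial illustration of Voronoi neighbours and densities with scipy/Qhull.
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(3)
points = rng.uniform(0, 1, size=(500, 3))
vor = Voronoi(points)

# Neighbour list from the ridge (shared-face) pairs.
neighbours = {i: set() for i in range(len(points))}
for p, q in vor.ridge_points:
    neighbours[p].add(q)
    neighbours[q].add(p)

# Voronoi density = 1 / cell volume, for bounded cells only.
for i in range(5):
    region = vor.regions[vor.point_region[i]]
    if -1 in region or len(region) == 0:
        continue                                  # unbounded cell, skip
    volume = ConvexHull(vor.vertices[region]).volume
    print(f"particle {i}: {len(neighbours[i])} neighbours, "
          f"density {1.0 / volume:.1f}")
```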
Solving the Cauchy-Riemann equations on parallel computers
NASA Technical Reports Server (NTRS)
Fatoohi, Raad A.; Grosch, Chester E.
1987-01-01
Discussed is the implementation of a single algorithm on three parallel-vector computers. The algorithm is a relaxation scheme for the solution of the Cauchy-Riemann equations, a set of coupled first-order partial differential equations. The computers were chosen so as to encompass a variety of architectures. They are: the MPP, an SIMD machine with 16K bit-serial processors; the FLEX/32, an MIMD machine with 20 processors; and the CRAY/2, an MIMD machine with four vector processors. The machine architectures are briefly described. The implementation of the algorithm is discussed in relation to these architectures and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Conclusions are presented.
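For reference, the coupled first-order system being relaxed is the Cauchy-Riemann pair; a standard statement in LaTeX:

```latex
% The Cauchy-Riemann equations for u(x,y) and v(x,y):
\[
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y},
\qquad
\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}.
\]
```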
NASA Astrophysics Data System (ADS)
Amelia, Afritha; Julham; Viyata Sundawa, Bakti; Pardede, Morlan; Sutrisno, Wiwinta; Rusdi, Muhammad
2017-09-01
RS232 serial communication is a communication system used between computers and microcontrollers. This communication is taught in the Department of Electrical Engineering and the Department of Computer Engineering and Informatics at Politeknik Negeri Medan. Recently, a simulation application installed on computers was used for the teaching and learning process. The drawback of this setup is that it is not useful as a communication method between learner and trainer. Therefore, this study created a 10-stage method, seven stages of which are grouped into three major phases: the analysis of potential problems and data collection, trainer design, and empirical testing and revision. After that, the trainer and module were tested in order to obtain feedback from the learners. The questionnaire results showed a favourable response of 70.10% from the learners.
A parallel variable metric optimization algorithm
NASA Technical Reports Server (NTRS)
Straeter, T. A.
1973-01-01
An algorithm, designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, then one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariate minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.
Neuromorphic Kalman filter implementation in IBM’s TrueNorth
NASA Astrophysics Data System (ADS)
Carney, R.; Bouchard, K.; Calafiura, P.; Clark, D.; Donofrio, D.; Garcia-Sciveres, M.; Livezey, J.
2017-10-01
Following the advent of a post-Moore’s law field of computation, novel architectures continue to emerge. With composite, multi-million connection neuromorphic chips like IBM’s TrueNorth, neural engineering has now become a feasible technology in this novel computing paradigm. High Energy Physics experiments are continuously exploring new methods of computation and data handling, including neuromorphic, to support the growing challenges of the field and be prepared for future commodity computing trends. This work details the first instance of a Kalman filter implementation in IBM’s neuromorphic architecture, TrueNorth, for both parallel and serial spike trains. The implementation is tested on multiple simulated systems and its performance is evaluated with respect to an equivalent non-spiking Kalman filter. The limits of the implementation are explored whilst varying the size of weight and threshold registers, the number of spikes used to encode a state, size of neuron block for spatial encoding, and neuron potential reset schemes.
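For reference, the conventional (non-spiking) Kalman filter that such an implementation is evaluated against; a minimal numpy sketch for an assumed 1-D constant-velocity tracking model (the model matrices are illustrative):

```python
# Baseline non-spiking Kalman filter: predict/update for a (position,
# velocity) state observed through noisy position measurements.
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (pos, vel)
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 1e-4 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance

x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(4)
truth = 0.05 * np.arange(100)            # target moving at constant velocity
for z in truth + rng.normal(0, 0.5, 100):
    x, P = F @ x, F @ P @ F.T + Q                      # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P                        # update
print("estimated velocity:", x[1])                     # ~0.05
```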
On the reduced-complexity of LDPC decoders for ultra-high-speed optical transmission.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2010-10-25
We propose two reduced-complexity (RC) LDPC decoders, which can be used in combination with large-girth LDPC codes to enable ultra-high-speed serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.46 dB (at a BER of 10^(-9)) worse than the conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further study the use of RC LDPC decoding algorithms in multilevel coded modulation with coherent detection and show that with RC decoding algorithms we can achieve a net coding gain larger than 11 dB at BERs below 10^(-9).
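The check-node update that distinguishes the attenuated min-sum decoder from the sum-product rule can be sketched directly (the attenuation factor 0.8 below is illustrative; "optimally attenuated" means this factor is tuned):

```python
# Attenuated min-sum check-node update: replace the sum-product tanh rule
# with the minimum of the extrinsic LLR magnitudes, scaled by alpha.
import numpy as np

def check_node_update(llrs, alpha=0.8):
    """llrs: incoming LLRs on the edges of one check node."""
    llrs = np.asarray(llrs, dtype=float)
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        others = np.delete(llrs, i)                 # extrinsic: exclude edge i
        sign = np.prod(np.sign(others))
        out[i] = alpha * sign * np.min(np.abs(others))
    return out

print(check_node_update([1.2, -0.4, 2.0, -3.1]))
# e.g. edge 0 gets alpha * sign(-0.4 * 2.0 * -3.1) * min(0.4, 2.0, 3.1) = +0.32
```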
Stimulus and Response-Locked P3 Activity in a Dynamic Rapid Serial Visual Presentation (RSVP) Task
2013-01-01
Perception and Psychophysics 1973, 14, 265–272. Touryan, J.; Gibson, L.; Horne, J. H.; Weber, P. Real-Time Classification of Neural Signals... execution. Subject terms: P300, RSVP, EEG, target recognition, reaction time, ERP. ...applications and as an input signal in many brain computer interactive technologies (BCITs) for both patients and healthy individuals. ERPs are extracted
Hwang, Jeong-Hwa; Misumi, Shigeki; Curran-Everett, Douglas; Brown, Kevin K; Sahin, Hakan; Lynch, David A
2011-08-01
The aim of this study was to evaluate the prognostic implications of computed tomography (CT) and physiologic variables at baseline and on sequential evaluation in patients with fibrosing interstitial pneumonia. We identified 72 patients with fibrosing interstitial pneumonia (42 with idiopathic disease, 30 with collagen vascular disease). Pulmonary function tests and CT were performed at the time of diagnosis and at a median follow-up of 12 months. Two chest radiologists scored the extent of specific abnormalities and overall disease on baseline and follow-up CT. Rate of survival was estimated using the Kaplan-Meier method. Three Cox proportional hazards models were constructed to evaluate the relationship between CT and physiologic variables and rate of survival: model 1 included only baseline variables, model 2 included only serial change variables, and model 3 included both baseline and serial change variables. On follow-up CT, the extent of mixed ground-glass and reticular opacities (P<0.001), pure reticular opacity (P=0.04), honeycombing (P=0.02), and overall extent of disease (P<0.001) was increased in the idiopathic group, whereas these variables remained unchanged in the collagen vascular disease group. Patients with idiopathic disease had shorter survival than those with collagen vascular disease (P=0.03). In model 1, the extent of honeycombing on baseline CT was the only independent predictor of mortality (P=0.02). In model 2, progression in honeycombing was the only predictor of mortality (P=0.005). In model 3, baseline extent of honeycombing and progression of honeycombing were the only independent predictors of mortality (P=0.001 and 0.002, respectively). Neither baseline nor serial change physiologic variables, nor the presence of collagen vascular disease, was predictive of rate of survival. The extent of honeycombing at baseline and its progression on follow-up CT are important determinants of rate of survival in patients with fibrosing interstitial pneumonia.
Computational Approaches to Vestibular Research
NASA Technical Reports Server (NTRS)
Ross, Muriel D.; Wade, Charles E. (Technical Monitor)
1994-01-01
The Biocomputation Center at NASA Ames Research Center is dedicated to a union between computational, experimental and theoretical approaches to the study of neuroscience and of life sciences in general. The current emphasis is on computer reconstruction and visualization of vestibular macular architecture in three dimensions (3-D), and on mathematical modeling and computer simulation of neural activity in the functioning system. Our methods are being used to interpret the influence of spaceflight on mammalian vestibular maculas in a model system, that of the adult Sprague-Dawley rat. More than twenty 3-D reconstructions of type I and type II hair cells and their afferents have been completed by digitization of contours traced from serial sections photographed in a transmission electron microscope. This labor-intensive method has now been replaced by a semiautomated method developed in the Biocomputation Center in which conventional photography is eliminated. All viewing, storage and manipulation of original data is done using Silicon Graphics workstations. Recent improvements to the software include a new mesh generation method for connecting contours. This method will permit the investigator to describe any surface, regardless of complexity, including highly branched structures such as are routinely found in neurons. This same mesh can be used for 3-D, finite volume simulation of synapse activation and voltage spread on neuronal surfaces visualized via the reconstruction process. These simulations help the investigator interpret the relationship between neuroarchitecture and physiology, and are of assistance in determining which experiments will best test theoretical interpretations. Data are also used to develop abstract, 3-D models that dynamically display neuronal activity ongoing in the system. Finally, the same data can be used to visualize the neural tissue in a virtual environment. Our exhibit will depict capabilities of our computational approaches and some of our findings from their application. For example, our research has demonstrated that maculas of adult mammals retain the property of synaptic plasticity. Ribbon synapses increase numerically and undergo changes in type and distribution (p<0.0001) in type II hair cells after exposure to microgravity for as few as nine days. The finding of macular synaptic plasticity is pertinent to the clinic, and may help explain some balance disorders in humans. The software used in our investigations will be demonstrated for those interested in applying it in their own research.
Influence of electrical and hybrid heating on bread quality during baking.
Chhanwal, N; Ezhilarasi, P N; Indrani, D; Anandharamakrishnan, C
2015-07-01
Energy efficiency and product quality are the key factors for any food processing industry. The aim of the study was to develop an energy- and time-efficient baking process. A hybrid heating (infrared + electrical) oven was designed and fabricated using two infrared lamps and electric heating coils. The developed oven can be operated in serial or combined heating modes. The standardized baking conditions were 18 min at 220°C to produce bread from the hybrid heating oven. The effect of baking with hybrid heating mode (H-1 and H-2, hybrid oven) on the quality characteristics of bread as against conventional heating mode (C-1, pilot-scale oven; C-2, hybrid oven) was studied. The results showed that breads baked in hybrid heating mode (H-2) had higher moisture content (28.87%), higher volume (670 cm(3)), a lower crumb firmness value (374.6 g), and an overall quality score (67.0) comparable to the conventional baking process (68.5). Moreover, bread baked in hybrid heating mode showed a 28% reduction in baking time.
Rizoiu, I M; Eversole, L R; Kimmel, A I
1996-10-01
Lasers are effective tools for soft tissue surgery. The erbium, chromium: yttrium, scandium, gallium, garnet laser is a new system that incorporates an air-water spray. This study evaluates the cutting margins of this laser and compares healing of laser-induced wounds with that of conventional scalpel and punch biopsy-induced wounds. New Zealand white rabbits were divided into serial sacrifice groups; the tissues were grossly and microscopically analyzed after laser and conventional steel surgical wounding. Wound margins were found to show minimal edge coagulation artifact and were 20 to 40 mm in width. Laser wounds showed minimal to no hemorrhage, and re-epithelialization and collagenization were found to occur by day 7 in both laser and conventional groups. The new laser system is an effective soft tissue surgical device; wound healing is comparable to that associated with surgical steel wounds. The minimal edge artifact observed with this laser system should allow for the procurement of diagnostic biopsy specimens.
Adebahr, Sonja; Schimek-Jasch, Tanja; Nestle, Ursula; Brunner, Thomas B
2016-08-01
The oesophagus, as a serial organ located in the central chest, is frequently subject to "incidental" dose application in radiotherapy for several thoracic malignancies, including oesophageal cancer itself. Especially because of the radiosensitive mucosa, severe radiotherapy-induced sequelae can occur, with acute oesophagitis and strictures as late toxicity being the most frequent side effects. In this review we focus on oesophageal side effects derived from treatment of gastrointestinal cancer and secondly provide an overview of oesophageal toxicity from conventional and stereotactic fractionated radiotherapy to the thoracic area in general. Available data on the pathogenesis, frequency, onset, and severity of oesophageal side effects are summarized. Whereas for conventional radiotherapy the associations of applied doses to certain volumes of the oesophagus are well described, the tolerance dose to the mediastinal structures for hypofractionated therapy is unknown. The review presents available attempts to predict the risk of oesophageal side effects from dosimetric parameters of SBRT. Copyright © 2016 Elsevier Ltd. All rights reserved.
The Education of Serials Catalogers.
ERIC Educational Resources Information Center
Soper, Mary Ellen
1987-01-01
Reviews surveys of accredited library schools' efforts to prepare students to work with serials and practitioners' attitudes toward their formal serials education, and presents results of a 1986 survey of serials cataloging courses offered by library schools. Continuing education and the importance of special instruction for serials work are…
Rojo, Gemma; Sandoval-Rodríguez, Alejandra; López, Angélica; Ortiz, Sylvia; Correa, Juana P; Saavedra, Miguel; Botto-Mahan, Carezza; Cattan, Pedro E; Solari, Aldo
2017-08-07
Chagas disease, caused by Trypanosoma cruzi, is considered a major public health problem in America. After an acute phase the disease changes to a chronic phase with very low parasitemia. The parasite presents high genetic variability, with seven discrete typing units (DTUs): TcI-TcVI and TcBat. The aim of this work is to evaluate the fluctuation of parasitemia and T. cruzi DTUs in naturally infected Octodon degus. After animal capture, parasitemia was determined by qPCR, and later the animals were evaluated by three serial xenodiagnoses using two insect vector species, Mepraia spinolai and Triatoma infestans. The parasites amplified over time by insect xenodiagnosis were analyzed by conventional PCR, and the infective T. cruzi were then characterized by means of hybridization tests. The determination of O. degus parasitemia before serial xenodiagnosis by qPCR revealed great heterogeneity, from 1 to 812 parasite equivalents/ml in the bloodstream. The T. cruzi DTU composition in the 23 animals analyzed by xenodiagnosis oscillated from mixed infections with different DTUs to infections without DTU identification, or vice versa; this is equivalent to 50% of the studied animals. Detection of triatomine infection and of the composition of T. cruzi DTUs was achieved more efficiently 40 days post-infection than after 80 or 120 days. Trypanosoma cruzi DTU composition fluctuates over time in naturally infected O. degus. Three replicates of serial xenodiagnosis confirmed that living parasites were studied. Our results allow us to confirm that M. spinolai and T. infestans are equally competent to maintain T. cruzi DTUs, since similar results of infection were obtained after the xenodiagnosis procedure.
Accounting for partiality in serial crystallography using ray-tracing principles
Kroon-Batenburg, Loes M. J.; Schreurs, Antoine M. M.; Ravelli, Raimond B. G.; Gros, Piet
2015-01-01
Serial crystallography generates ‘still’ diffraction data sets that are composed of single diffraction images obtained from a large number of crystals arbitrarily oriented in the X-ray beam. Estimation of the reflection partialities, which accounts for the expected observed fractions of diffraction intensities, has so far been problematic. In this paper, a method is derived for modelling the partialities by making use of the ray-tracing diffraction-integration method EVAL. The method estimates partialities based on crystal mosaicity, beam divergence, wavelength dispersion, crystal size and the interference function, accounting for crystallite size. It is shown that modelling of each reflection by a distribution of interference-function weighted rays yields a ‘still’ Lorentz factor. Still data are compared with a conventional rotation data set collected from a single lysozyme crystal. Overall, the presented still integration method improves the data quality markedly. The R factor of the still data compared with the rotation data decreases from 26% using a Monte Carlo approach to 12% after applying the Lorentz correction, to 5.3% when estimating partialities by EVAL and finally to 4.7% after post-refinement. The merging R_int factor of the still data improves from 105 to 56% but remains high. This suggests that the accuracy of the model parameters could be further improved. However, with a multiplicity of around 40 and an R_int of ∼50% the merged still data approximate the quality of the rotation data. The presented integration method suitably accounts for the partiality of the observed intensities in still diffraction data, which is a critical step to improve data quality in serial crystallography. PMID:26327370
Huang, Chia-Ying; Olieric, Vincent; Ma, Pikyee; Howe, Nicole; Vogeley, Lutz; Liu, Xiangyu; Warshamanage, Rangana; Weinert, Tobias; Panepucci, Ezequiel; Kobilka, Brian; Diederichs, Kay; Wang, Meitian; Caffrey, Martin
2016-01-01
Here, a method for presenting crystals of soluble and membrane proteins growing in the lipid cubic or sponge phase for in situ diffraction data collection at cryogenic temperatures is introduced. The method dispenses with the need for the technically demanding and inefficient crystal-harvesting step that is an integral part of the lipid cubic phase or in meso method of growing crystals. Crystals are dispersed in a bolus of mesophase sandwiched between thin plastic windows. The bolus contains tens to hundreds of crystals, visible with an in-line microscope at macromolecular crystallography synchrotron beamlines and suitably disposed for conventional or serial crystallographic data collection. Wells containing the crystal-laden boluses are removed individually from hermetically sealed glass plates in which crystallization occurs, affixed to pins on goniometer bases and excess precipitant is removed from around the mesophase. The wells are snap-cooled in liquid nitrogen, stored and shipped in Dewars, and manually or robotically mounted on a goniometer in a cryostream for diffraction data collection at 100 K, as is performed routinely with standard, loop-harvested crystals. The method is a variant on the recently introduced in meso in situ serial crystallography (IMISX) method that enables crystallographic measurements at cryogenic temperatures where crystal lifetimes are enormously enhanced whilst reducing protein consumption dramatically. The new approach has been used to generate high-resolution crystal structures of a G-protein-coupled receptor, α-helical and β-barrel transporters and an enzyme as model integral membrane proteins. Insulin and lysozyme were used as test soluble proteins. The quality of the data that can be generated by this method was attested to by performing sulfur and bromine SAD phasing with two of the test proteins. PMID:26894538
Perry, Cameron N; Cartamil, Daniel P; Bernal, Diego; Sepulveda, Chugey A; Theilmann, Rebecca J; Graham, Jeffrey B; Frank, Lawrence R
2007-04-01
T1-weighted magnetic resonance imaging (MRI) in conjunction with image and segmentation analysis (i.e., the process of digitally partitioning tissues based on specified MR image characteristics) was evaluated as a noninvasive alternative for differentiating muscle fiber types and quantifying the amounts of slow, red aerobic muscle in the shortfin mako shark (Isurus oxyrinchus) and the salmon shark (Lamna ditropis). MRI determinations of red muscle quantity and position made for the mid-body sections of three mako sharks (73.5-110 cm fork length, FL) are in close agreement (within the 95% confidence intervals) with data obtained for the same sections by the conventional dissection method involving serial cross-sectioning and volumetric analyses, and with previously reported findings for this species. The overall distribution of salmon shark red muscle as a function of body fork length was also found to be consistent with previously acquired serial dissection data for this species; however, MR imaging revealed an anterior shift in peak red muscle cross-sectional area corresponding to an increase in body mass. Moreover, MRI facilitated visualization of the intact and anatomically correct relationship of tendon linking the red muscle and the caudal peduncle. This study thus demonstrates that MRI is effective in acquiring high-resolution three-dimensional digital data with high contrast between different fish tissue types. Relative to serial dissection, MRI allows more precise quantification of the position, volume, and other details about the types of muscle within the fish myotome, while conserving specimen structural integrity. Copyright (c) 2007 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Barberis, Stefano; Carminati, Leonardo; Leveraro, Franco; Mazza, Simone Michele; Perini, Laura; Perlz, Francesco; Rebatto, David; Tura, Ruggero; Vaccarossa, Luca; Villaplana, Miguel
2015-12-01
We present the approach of the University of Milan Physics Department and the local unit of INFN to allow and encourage the sharing among different research areas of computing, storage and networking resources (the largest ones being those composing the Milan WLCG Tier-2 centre and tailored to the needs of the ATLAS experiment). Computing resources are organised as independent HTCondor pools, with a global master in charge of monitoring them and optimising their usage. The configuration has to provide satisfactory throughput for both serial and parallel (multicore, MPI) jobs. A combination of local, remote and cloud storage options are available. The experience of users from different research areas operating on this shared infrastructure is discussed. The promising direction of improving scientific computing throughput by federating access to distributed computing and storage also seems to fit very well with the objectives listed in the European Horizon 2020 framework for research and development.
Pyramidal neurovision architecture for vision machines
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1993-08-01
The vision system employed by an intelligent robot must be active; active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than the architecture of the biological visual pathway, it does retain some essential features such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, where each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both the parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.
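The converging multilayered idea can be sketched as an image pyramid in which each level applies a nonlinear parallel operator before decimating; a toy Python illustration (the operator, smoothing, and level count are our assumptions, not the paper's architecture):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def pyramid(image, levels=4):
        # Each level: pointwise nonlinear processing (a stand-in for one
        # layer of parallel processors), smoothing, then 2x decimation,
        # so the representation converges level by level.
        out = [image.astype(float)]
        for _ in range(levels - 1):
            x = np.tanh(out[-1])                   # nonlinear stage
            x = gaussian_filter(x, 1.0)[::2, ::2]  # converge spatially
            out.append(x)
        return out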
Distributed parallel computing in stochastic modeling of groundwater systems.
Dong, Yanhui; Li, Guomin; Xu, Haizhen
2013-03-01
Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution time of 500 realizations is reduced to 3% of that of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
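Because the realizations are independent, the batch processing described above is embarrassingly parallel; a minimal sketch of the same idea using Python's standard library (the stub stands in for one field-generation-plus-MODFLOW run, which is not reproduced here):

    from multiprocessing import Pool
    import random

    def run_realization(seed):
        # Stub for one stochastic model run; in the real system this
        # would generate a random conductivity field and call MODFLOW.
        rng = random.Random(seed)
        return sum(rng.random() for _ in range(10_000))

    if __name__ == "__main__":
        with Pool(processes=50) as pool:                     # 50 workers, as in the paper
            results = pool.map(run_realization, range(500))  # 500 realizations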
NASA Technical Reports Server (NTRS)
Gill, Doron; Tadmor, Eitan
1988-01-01
An efficient method is proposed to solve the eigenproblem of N by N Symmetric Tridiagonal (ST) matrices. Unlike the standard eigensolvers which necessitate O(N cubed) operations to compute the eigenvectors of such ST matrices, the proposed method computes both the eigenvalues and eigenvectors with only O(N squared) operations. The method is based on a serial implementation of the recently introduced Divide and Conquer (DC) algorithm. It exploits the fact that with O(N squared) DC operations, one can compute the eigenvalues of an N by N ST matrix and a finite number of pairs of successive rows of its eigenvector matrix. The rest of the eigenvectors--all of them or one at a time--are computed by linear three-term recurrence relations. Numerical examples are presented which demonstrate the superiority of the proposed method, saving an order of magnitude in execution time at the expense of a few orders of magnitude in accuracy.
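The three-term recurrence used to recover the remaining eigenvectors follows directly from the tridiagonal structure; a minimal Python sketch for one known eigenvalue (notation ours; the plain recurrence can be unstable for clustered eigenvalues, which is part of the accuracy trade-off noted above):

    import numpy as np

    def st_eigvec(a, b, lam):
        # Row i of (T - lam*I) v = 0 for a symmetric tridiagonal T gives
        #   b[i-1]*v[i-1] + (a[i] - lam)*v[i] + b[i]*v[i+1] = 0,
        # so each component follows from the previous two in O(N) work.
        # a: diagonal (length N), b: off-diagonal (length N-1).
        n = len(a)
        v = np.zeros(n)
        v[0] = 1.0
        if n > 1:
            v[1] = (lam - a[0]) / b[0]
        for i in range(1, n - 1):
            v[i + 1] = ((lam - a[i]) * v[i] - b[i - 1] * v[i - 1]) / b[i]
        return v / np.linalg.norm(v)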
Parallelization of Nullspace Algorithm for the computation of metabolic pathways
Jevremović, Dimitrije; Trinh, Cong T.; Srienc, Friedrich; Sosa, Carlos P.; Boley, Daniel
2011-01-01
Elementary mode analysis is a useful metabolic pathway analysis tool in understanding and analyzing cellular metabolism, since elementary modes can represent metabolic pathways with unique and minimal sets of enzyme-catalyzed reactions of a metabolic network under steady state conditions. However, computation of the elementary modes of a genome-scale metabolic network with 100–1000 reactions is very expensive and sometimes not feasible with the commonly used serial Nullspace Algorithm. In this work, we develop a distributed memory parallelization of the Nullspace Algorithm to handle efficiently the computation of the elementary modes of a large metabolic network. We give an implementation in the C++ language with the support of MPI library functions for the parallel communication. Our proposed algorithm is accompanied by an analysis of the complexity and identification of major bottlenecks during computation of all possible pathways of a large metabolic network. The algorithm includes methods to achieve load balancing among the compute nodes and specific communication patterns to reduce the communication overhead and improve efficiency. PMID:22058581
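The serial starting point of the Nullspace Algorithm is a basis of the right nullspace of the stoichiometric matrix; a minimal numpy sketch of that first step only (the authors' pairwise candidate generation and MPI distribution are not reproduced here):

    import numpy as np

    def nullspace_basis(S, tol=1e-10):
        # Columns of the returned matrix span {v : S v = 0}, i.e. the
        # steady-state flux space; elementary modes are then assembled
        # by pairwise combination of such rays (not shown).
        _, s, Vt = np.linalg.svd(S)
        rank = int(np.sum(s > tol))
        return Vt[rank:].T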
ERIC Educational Resources Information Center
International Federation of Library Associations and Institutions, The Hague (Netherlands).
Papers on serial publications presented at the 1986 International Federation of Library Associations (IFLA) conference include: (1) "Scenario for Microcomputer-Based Serials Cataloging from ISDS (International Serials Data System) Records--New Horizons for Serial Librarianship in the Developing Countries by the Availability of Adequate…
Union Listing via OCLC's Serials Control Subsystem.
ERIC Educational Resources Information Center
O'Malley, Terrence J.
1984-01-01
Describes library use of Conversion of Serials Project's (CONSER) online national machine-readable database for serials to create online union lists of serials via OCLC's Serial Control Subsystem. Problems in selection of appropriate, accurate, and authenticated records and prospects for the future are discussed. Twenty sources and sample records…
Fixman compensating potential for general branched molecules
NASA Astrophysics Data System (ADS)
Jain, Abhinandan; Kandel, Saugat; Wagner, Jeffrey; Larsen, Adrien; Vaidehi, Nagarajan
2013-12-01
The technique of constraining high frequency modes of molecular motion is an effective way to increase simulation time scale and improve conformational sampling in molecular dynamics simulations. However, it has been shown that constraints on higher frequency modes such as bond lengths and bond angles stiffen the molecular model, thereby introducing systematic biases in the statistical behavior of the simulations. Fixman proposed a compensating potential to remove such biases in the thermodynamic and kinetic properties calculated from dynamics simulations. Previous implementations of the Fixman potential have been limited to only short serial chain systems. In this paper, we present a spatial operator algebra based algorithm to calculate the Fixman potential and its gradient within constrained dynamics simulations for branched topology molecules of any size. Our numerical studies on molecules of increasing complexity validate our algorithm by demonstrating recovery of the dihedral angle probability distribution function for systems that range in complexity from serial chains to protein molecules. We observe that the Fixman compensating potential recovers the free energy surface of a serial chain polymer, thus annulling the biases caused by constraining the bond lengths and bond angles. The inclusion of Fixman potential entails only a modest increase in the computational cost in these simulations. We believe that this work represents the first instance where the Fixman potential has been used for general branched systems, and establishes the viability for its use in constrained dynamics simulations of proteins and other macromolecules.
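For reference, one standard form of the compensating potential for holonomic constraints \(\sigma_i(\mathbf{x}) = 0\) (our notation; the paper derives it within a spatial operator algebra formulation) is

    U_{\mathrm{Fix}}(\mathbf{q}) = \frac{k_B T}{2}\,\ln\det\mathbf{H}(\mathbf{q}),
    \qquad
    H_{ij} = \sum_{k} \frac{1}{m_k}\,\nabla_k \sigma_i \cdot \nabla_k \sigma_j,

so that adding \(U_{\mathrm{Fix}}\) to the potential energy during constrained dynamics removes the statistical bias introduced by the constraints.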
Sakakura, Kenichi; Ladich, Elena; Fuimaono, Kristine; Grunewald, Debby; O'Fallon, Patrick; Spognardi, Anna-Maria; Markham, Peter; Otsuka, Fumiyuki; Yahagi, Kazuyuki; Shen, Kai; Kolodgie, Frank D; Joner, Michael; Virmani, Renu
2015-01-01
The long-term efficacy of radiofrequency ablation of renal autonomic nerves has been proven in nonrandomized studies. However, the long-term safety of the renal artery (RA) is of concern. The aim of our study was to determine if cooling during radiofrequency ablation preserved the RA while allowing equivalent nerve damage. A total of 9 swine (18 RAs) were included and allocated to irrigated radiofrequency (n=6 RAs, temperature setting: 50°C), conventional radiofrequency (n=6 RAs, nonirrigated, temperature setting: 65°C), and high-temperature radiofrequency (n=6 RAs, nonirrigated, temperature setting: 90°C) groups. RAs were harvested at 10 days, serially sectioned from proximal to distal including perirenal tissues, and examined after paraffin embedding and staining with hematoxylin-eosin and Movat pentachrome. RAs and periarterial tissue including nerves were semiquantitatively assessed and scored. A total of 660 histological sections from 18 RAs were examined by light microscopy. Arterial medial injury (depth of medial injury, circumferential involvement, and thinning) was significantly less in the irrigated radiofrequency group than in the conventional radiofrequency group (P<0.001 for circumference; P=0.003 for thinning). Severe collagen damage such as denatured collagen was also significantly less in the irrigated than in the conventional radiofrequency group (P<0.001). Although nerve damage was not statistically different between the irrigated and conventional radiofrequency groups (P=0.36), there was a trend toward less nerve damage with irrigation. Compared with conventional radiofrequency, circumferential medial damage in the highest-temperature nonirrigated radiofrequency group was significantly greater (P<0.001). Saline irrigation significantly reduces arterial and periarterial tissue damage during radiofrequency ablation, and there is a trend toward less nerve damage. © 2014 American Heart Association, Inc.
Serials Automation for San Jose State University Library.
ERIC Educational Resources Information Center
Liu, Susana J.
This study (1) examines the university's serials system and identifies its problems; (2) analyzes the current manual operations in the serials department, with emphasis on the serials check-in system; and (3) determines whether or not computerization of some or all of the serials subsystems would improve the department's internal effectiveness and…
ERIC Educational Resources Information Center
Association for Educational Data Systems, Washington, DC.
This publication presents a summary of and index to the presentations given at the Association for Educational Data Systems (AEDS) Convention held in Minneapolis, Minnesota, during May 5-8, 1981. Summarized are 66 short papers that cover a variety of educational computing activities and projects completed by educational institutions,…
Fluid/Structure Interaction Studies of Aircraft Using High Fidelity Equations on Parallel Computers
NASA Technical Reports Server (NTRS)
Guruswamy, Guru; VanDalsem, William (Technical Monitor)
1994-01-01
Aeroelasticity, which involves strong coupling of fluids, structures and controls, is an important element in designing an aircraft. Computational aeroelasticity using low fidelity methods, such as the linear aerodynamic flow equations coupled with the modal structural equations, is well advanced. Though these low fidelity approaches are computationally less intensive, they are not adequate for the analysis of modern aircraft such as the High Speed Civil Transport (HSCT) and Advanced Subsonic Transport (AST), which can experience complex flow/structure interactions. HSCT can experience vortex-induced aeroelastic oscillations, whereas AST can experience structural oscillations associated with transonic buffet. Both aircraft may experience a dip in flutter speed in the transonic regime. For accurate aeroelastic computations in these complex fluid/structure interaction situations, high fidelity equations such as the Navier-Stokes equations for fluids and finite elements for structures are needed. Computations using these high fidelity equations require large computational resources in both memory and speed. Conventional supercomputers have reached their limitations in both memory and speed. As a result, parallel computers have evolved to overcome the limitations of conventional computers. This paper will address the transition that is taking place in computational aeroelasticity from conventional computers to parallel computers, and the special techniques needed to take advantage of the architecture of new parallel computers. Results will be illustrated from computations made on the iPSC/860 and IBM SP2 computers using the ENSAERO code, which directly couples the Euler/Navier-Stokes flow equations with high resolution finite-element structural equations.
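The staggered exchange at the heart of such coupled computations can be caricatured with a one-degree-of-freedom model: a "structure" (mass-spring-damper) advanced against a quasi-steady "fluid" load that depends on the current deflection. The numbers below are invented purely for illustration; the real code advances Navier-Stokes and finite-element solvers in place of these two lines.

    # Toy aeroelastic coupling loop (all parameter values are made up).
    m, c, k = 1.0, 0.05, 4.0        # structural mass, damping, stiffness
    q_dyn, dL_dx = 1.0, 3.9         # dynamic pressure, load slope
    dt, x, v = 0.01, 0.01, 0.0      # time step, initial deflection, velocity

    for _ in range(2000):
        load = q_dyn * dL_dx * x            # "fluid" step: load from current shape
        a = (load - c * v - k * x) / m      # "structure" step: advance dynamics
        v += a * dt
        x += v * dt

    # If q_dyn*dL_dx exceeds k, the effective stiffness turns negative and
    # the deflection diverges -- a caricature of static aeroelastic divergence.
    print(f"final deflection: {x:.4f}")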
Discrete square root filtering - A survey of current techniques.
NASA Technical Reports Server (NTRS)
Kaminski, P. G.; Bryson, A. E., Jr.; Schmidt, S. F.
1971-01-01
Current techniques in square root filtering are surveyed and related by applying a duality association. Four efficient square root implementations are suggested, and compared with three common conventional implementations in terms of computational complexity and precision. It is shown that the square root computational burden should not exceed the conventional by more than 50% in most practical problems. An examination of numerical conditioning predicts that the square root approach can yield twice the effective precision of the conventional filter in ill-conditioned problems. This prediction is verified in two examples.
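The numerical advantage comes from propagating a Cholesky factor S of the covariance (P = S Sᵀ) instead of P itself, which effectively doubles the working precision; a minimal numpy sketch of the time update in square-root (array) form (function and variable names are ours, not the survey's):

    import numpy as np

    def sqrt_time_update(S, F, sqrtQ):
        # P_new = F P F^T + Q = [F S, Q^(1/2)] [F S, Q^(1/2)]^T, so a QR
        # decomposition of the transposed pre-array yields a triangular
        # square root of P_new without ever forming P.
        pre = np.hstack([F @ S, sqrtQ])   # n x 2n pre-array
        _, R = np.linalg.qr(pre.T)        # R is n x n upper triangular
        return R.T                        # lower-triangular factor of P_new

    # sqrtQ is any matrix square root of the process noise, e.g.
    # np.linalg.cholesky(Q).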
Partitioning and packing mathematical simulation models for calculation on parallel computers
NASA Technical Reports Server (NTRS)
Arpasi, D. J.; Milner, E. J.
1986-01-01
The development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system is described. Degrees of parallelism (i.e., coupling between the equations) and their impact on parallel processing are discussed. The problem of identifying computational parallelism within sets of closely coupled equations that require the exchange of current values of variables is described. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. An algorithm which packs the equations into a minimum number of processors is also described. The results of the packing algorithm when applied to a turbojet engine model are presented in terms of processor utilization.
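The packing step is a bin-packing-style problem; the paper's own algorithm is not reproduced here, but a first-fit-decreasing heuristic in Python conveys the flavor (the names and cost model are our assumptions):

    def pack(equations, capacity):
        # equations: dict mapping equation name -> estimated per-step cost;
        # capacity: per-processor compute budget per update frame.
        processors, loads = [], []
        for name, cost in sorted(equations.items(), key=lambda kv: -kv[1]):
            for i, load in enumerate(loads):
                if load + cost <= capacity:    # fits in an open processor
                    processors[i].append(name)
                    loads[i] += cost
                    break
            else:                              # open a new processor
                processors.append([name])
                loads.append(cost)
        return processors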
A Demons algorithm for image registration with locally adaptive regularization.
Cahill, Nathan D; Noble, J Alison; Hawkes, David J
2009-01-01
Thirion's Demons is a popular algorithm for nonrigid image registration because of its linear computational complexity and ease of implementation. It approximately solves the diffusion registration problem by successively estimating force vectors that drive the deformation toward alignment and smoothing the force vectors by Gaussian convolution. In this article, we show how the Demons algorithm can be generalized to allow image-driven locally adaptive regularization in a manner that preserves both the linear complexity and ease of implementation of the original Demons algorithm. We show that the proposed algorithm exhibits lower target registration error and requires less computational effort than the original Demons algorithm on the registration of serial chest CT scans of patients with lung nodules.
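For orientation, one iteration of a classic, uniformly regularized Demons update looks as follows; this is a generic Thirion-style sketch in Python, not the authors' locally adaptive variant, whose point is precisely to replace the fixed Gaussian smoothing below:

    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def demons_step(fixed, moving, u, v, sigma=1.5):
        # Warp the moving image by the current displacement field (u, v).
        ys, xs = np.indices(fixed.shape, dtype=float)
        warped = map_coordinates(moving, [ys + v, xs + u], order=1, mode='nearest')
        # Demons force: intensity difference times fixed-image gradient.
        gy, gx = np.gradient(fixed)
        diff = warped - fixed
        denom = gx**2 + gy**2 + diff**2
        denom[denom == 0] = 1.0
        # Smooth the force field by Gaussian convolution (the uniform
        # regularizer) and accumulate it into the displacement field.
        u += gaussian_filter(-diff * gx / denom, sigma)
        v += gaussian_filter(-diff * gy / denom, sigma)
        return u, v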
Programmable data communications controller requirements
NASA Technical Reports Server (NTRS)
1977-01-01
The design requirements for a Programmable Data Communications Controller (PDCC) that reduces the difficulties in attaching data terminal equipment to a computer are presented. The PDCC is an interface between the computer I/O channel and the bit serial communication lines. Each communication line is supported by a communication port that handles all line control functions and performs most terminal control functions. The port is fabricated on a printed circuit board that plugs into a card chassis, mating with a connector that is joined to all other card stations by a data bus. Ports are individually programmable; each includes a microprocessor, a programmable read-only memory for instruction storage, and a random access memory for data storage.
Serials Management: A Practical Guide. Frontiers of Access to Library Materials No. 3.
ERIC Educational Resources Information Center
Chen, Chiou-Sen Dora
This book advises librarians, paraprofessional library supervisors, and library school students on problems unique to the management of serials. Chapter 1 explains the character and publication patterns of serials. Chapter 2 discusses the scope and the organizational structure of serials management, and the role of the serials manager. Chapter 3…
Ultrastructure and growth factor content of equine platelet-rich fibrin gels.
Textor, Jamie A; Murphy, Kaitlin C; Leach, J Kent; Tablin, Fern
2014-04-01
The objective was to compare fiber diameter, pore area, compressive stiffness, gelation properties, and selected growth factor content of platelet-rich fibrin gels (PRFGs) and conventional fibrin gels (FGs). PRFGs and conventional FGs were prepared from the blood of 10 healthy horses. Autologous fibrinogen was used to form conventional FGs. The PRFGs were formed from autologous platelet-rich plasma of various platelet concentrations (100 × 10³ platelets/μL, 250 × 10³ platelets/μL, 500 × 10³ platelets/μL, and 1,000 × 10³ platelets/μL). All gels contained an identical fibrinogen concentration (20 mg/mL). Fiber diameter and pore area were evaluated with scanning electron microscopy. Maximum gelation rate was assessed with spectrophotometry, and gel stiffness was determined by measuring the compressive modulus. Gel weights were measured serially over 14 days as an index of contraction (volume loss). Platelet-derived growth factor-BB and transforming growth factor-β1 concentrations were quantified with ELISAs. Fiber diameters were significantly larger and mean pore areas were significantly smaller in PRFGs than in conventional FGs. Gel weight decreased significantly over time, differed significantly between PRFGs and conventional FGs, and was significantly correlated with platelet concentration. Platelet-derived growth factor-BB and transforming growth factor-β1 concentrations were highest in gels and releasates derived from 1,000 × 10³ platelets/μL. The inclusion of platelets in FGs altered the architecture and increased the growth factor content of the resulting scaffold. Platelets may represent a useful means of modifying these gels for applications in veterinary and human regenerative medicine.
Low cost open data acquisition system for biomedical applications
NASA Astrophysics Data System (ADS)
Zabolotny, Wojciech M.; Laniewski-Wollk, Przemyslaw; Zaworski, Wojciech
2005-09-01
In biomedical applications it is often necessary to collect measurement data from different devices. This is relatively easy if the devices are equipped with a MIB or Ethernet interface; often, however, they feature only an asynchronous serial link, and sometimes the measured values are available only as analog signals. The system presented in the paper is a low cost alternative to commercially available data acquisition systems. The hardware and software architecture of the system is fully open, so it is possible to customize it for particular needs. The presented system offers various ways to connect it to a computer-based data processing unit, e.g. via USB or Ethernet ports. Both interfaces also allow many such systems to be used in parallel to increase the number of serial and analog inputs. The open source software used in the system makes it possible to process the acquired data with standard tools like MATLAB, Scilab or Octave, or with a dedicated, user-supplied application.
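Host-side acquisition over such an asynchronous serial link takes only a few lines; a sketch using the pyserial package (the port name, baud rate, and one-ASCII-sample-per-line format are assumptions about a hypothetical device, not this system's protocol):

    import serial  # pyserial

    port = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1.0)
    samples = []
    for _ in range(100):
        line = port.readline().decode("ascii", errors="replace").strip()
        if line:
            samples.append(float(line))   # assumes numeric text samples
    port.close()
    # `samples` can then be handed to MATLAB/Octave-style tooling or numpy.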
Multicenter AIDS Cohort Study Quantitative Coronary Plaque Progression Study: rationale and design.
Nakanishi, Rine; Post, Wendy S; Osawa, Kazuhiro; Jayawardena, Eranthi; Kim, Michael; Sheidaee, Nasim; Nezarat, Negin; Rahmani, Sina; Kim, Nicholas; Hathiramani, Nicolai; Susarla, Shriraj; Palella, Frank; Witt, Mallory; Blaha, Michael J; Brown, Todd T; Kingsley, Lawrence; Haberlen, Sabina A; Dailing, Christopher; Budoff, Matthew J
2018-01-01
The association of HIV with coronary atherosclerosis has been established; however, the progression of coronary atherosclerosis over time among participants with HIV is not well known. The Multicenter AIDS Cohort Study Quantitative Coronary Plaque Progression Study is a large prospective multicenter study quantifying progression of coronary plaque assessed by serial coronary computed tomography angiography (CTA). HIV-infected and uninfected men who were enrolled in the Multicenter AIDS Cohort Study Cardiovascular Substudy were eligible to complete a follow-up contrast coronary CTA 3-6 years after baseline. We measured coronary plaque volume and characteristics (calcified and noncalcified plaque including fibrous, fibrous-fatty, and low attenuation) and vulnerable plaque among HIV-infected and uninfected men using semiautomated plaque software to investigate the progression of coronary atherosclerosis over time. We describe a novel, large prospective multicenter study investigating incidence, transition of characteristics, and progression in coronary atherosclerosis quantitatively assessed by serial coronary CTAs among HIV-infected and uninfected men.
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Huber, David J.; Martin, Kevin
2017-05-01
This paper describes a technique in which we improve upon the prior performance of the Rapid Serial Visual Presentation (RSVP) EEG paradigm for image classification through the insertion of visual attention distracters and overall sequence reordering, based upon the expected ratio of rare to common "events" in the environment and operational context. Inserting distracter images maintains the ratio of common events to rare events at an ideal level, maximizing rare event detection via the P300 EEG response to the RSVP stimuli. The method has two steps: first, we compute the optimal number of distracters needed for an RSVP sequence based on the desired sequence length and expected number of targets, and insert the distracters into the RSVP sequence; then we reorder the RSVP sequence to maximize P300 detection. We show that by reducing the ratio of target events to nontarget events using this method, we can allow RSVP sequences with more targets without sacrificing area under the ROC curve (Az).
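The first step reduces to arithmetic on expected event counts; a sketch of one way to compute it (the target ratio, rounding policy, and names are our assumptions, not the paper's optimum):

    import math

    def n_distracters(n_targets, n_images, target_ratio=0.1):
        # Distracters to insert so that targets make up at most
        # `target_ratio` of the final RSVP sequence; n_images is the
        # original sequence length including the targets.
        needed = math.ceil(n_targets / target_ratio)
        return max(0, needed - n_images)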
Se-SAD serial femtosecond crystallography datasets from selenobiotinyl-streptavidin
Yoon, Chun Hong; DeMirci, Hasan; Sierra, Raymond G.; ...
2017-04-25
We provide a detailed description of selenobiotinyl-streptavidin (Se-B SA) co-crystal datasets recorded using the Coherent X-ray Imaging (CXI) instrument at the Linac Coherent Light Source (LCLS) for selenium single-wavelength anomalous diffraction (Se-SAD) structure determination. Se-B SA was chosen as the model system for its high affinity between biotin and streptavidin, where the sulfur atom in the biotin molecule (C10H16N2O3S) is substituted with selenium. The dataset was collected at three different transmissions (100, 50, and 10%) using a serial sample chamber setup which allows two sample chambers, a front chamber and a back chamber, to operate simultaneously. Diffraction patterns from Se-B SA were recorded to a resolution of 1.9 Å. The dataset is publicly available through the Coherent X-ray Imaging Data Bank (CXIDB) and also on LCLS compute nodes as a resource for research and algorithm development.
Process yield improvements with process control terminal for Varian serial ion implanters
NASA Astrophysics Data System (ADS)
Higashi, Harry; Soni, Ameeta; Martinez, Larry; Week, Ken
Implant processes in a modern wafer production fab are extremely complex. There can be several types of misprocessing, i.e. wrong dose or species, double implants and missed implants. Process Control Terminals (PCT) for Varian 350Ds installed at Intel fabs were found to substantially reduce the number of misprocessing steps. This paper describes those misprocessing steps and their subsequent reduction with the use of PCTs. Reliable and simple process control with serial process ion implanters has been in increasing demand. A well-designed process control terminal greatly increases device yield by monitoring all pertinent implanter functions and enabling process engineering personnel to set up process recipes for simple and accurate system operation. By programming user-selectable interlocks, implant errors are reduced and those that occur are logged for further analysis and prevention. A process control terminal should also be compatible with office personal computers for greater flexibility in system use and data analysis. The impact of a process control terminal's capability is increased productivity, and hence higher device yield.
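At its core, a recipe interlock is a comparison of the stored recipe against the implanter's current setup before the run is enabled; a toy Python sketch (field names are invented for illustration, not Varian's):

    def check_interlocks(recipe, setup):
        # Return a list of mismatches; an empty list enables the implant.
        errors = []
        for key in ("species", "energy_keV", "dose_per_cm2"):
            if recipe[key] != setup.get(key):
                errors.append(f"{key}: recipe={recipe[key]} setup={setup.get(key)}")
        return errors

    # e.g. check_interlocks({"species": "B+", "energy_keV": 80,
    #                        "dose_per_cm2": 1e15}, current_setup)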
NASA Technical Reports Server (NTRS)
Easley, W. C.; Tanguy, J. S.
1986-01-01
An upgrade of the transport systems research vehicle (TSRV) experimental flight system retained the original monochrome display system. The original host computer was replaced with a Norden 11/70, and a new digital autonomous terminal access communication (DATAC) data bus was installed for data transfer between the display system and host, requiring a new data interface method. The new display data interface uses four split phase bipolar (SPBP) serial busses. The DATAC bus uses a shared interface RAM (SIR) for intermediate storage of its data transfer. A display interface unit (DIU) was designed and configured to read from and write to the SIR, converting the data from parallel to SPBP serial and vice versa. Separating the data for use by each SPBP bus and synchronizing data transfer throughout the entire experimental flight system were found to be the major problems that the DIU design had to solve. The techniques used to meet these new data interface requirements are described.
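The DIU's central task, converting between parallel words and a serial bit stream, can be illustrated in a few lines; a toy Python sketch of MSB-first (de)serialization (the actual SPBP split-phase bipolar line coding is not modeled):

    def to_serial_bits(word, width=16):
        # Shift the word out MSB-first, as a shift register would.
        return [(word >> (width - 1 - i)) & 1 for i in range(width)]

    def from_serial_bits(bits):
        # Reassemble the parallel word from the received bit stream.
        word = 0
        for b in bits:
            word = (word << 1) | (b & 1)
        return word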