NASA Astrophysics Data System (ADS)
Loring, B.; Karimabadi, H.; Roytershteyn, V.
2015-10-01
The surface line integral convolution (LIC) visualization technique produces dense visualizations of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL, we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.
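To make the screen-space idea concrete, below is a minimal, purely illustrative LIC sketch in Python: it convolves a white-noise texture along streamlines of a vector field, which is the core operation the abstract builds on. This is not the authors' distributed OpenGL implementation; the circular field, kernel length, and grid size are invented for the example.

```python
import numpy as np

def lic(vx, vy, noise, L=15):
    """Naive LIC: for each pixel, average the noise texture along a short
    streamline traced forward and backward through the vector field."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):              # both streamline directions
                px, py = float(x), float(y)
                for _ in range(L):
                    i, j = int(py), int(px)
                    if not (0 <= i < h and 0 <= j < w):
                        break
                    total += noise[i, j]
                    count += 1
                    mag = np.hypot(vx[i, j], vy[i, j]) or 1.0
                    px += sign * vx[i, j] / mag   # unit step along the field
                    py += sign * vy[i, j] / mag
            out[y, x] = total / max(count, 1)
    return out

# A circular vector field over white noise.
h = w = 64
ys, xs = np.mgrid[0:h, 0:w]
img = lic(-(ys - h / 2.0), (xs - w / 2.0), np.random.rand(h, w))
```

A screen-space implementation performs the same per-pixel convolution, but on surface vectors that have been projected into the image plane.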
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loring, Burlen; Karimabadi, Homa; Roytershteyn, Vadim
2014-07-01
The surface line integral convolution (LIC) visualization technique produces dense visualizations of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL, we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.
IceT users' guide and reference.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.
2011-01-01
The Image Composition Engine for Tiles (IceT) is a high-performance sort-last parallel rendering library. In addition to providing accelerated rendering for a standard display, IceT provides the unique ability to generate images for tiled displays. The overall resolution of the display may be several times larger than any viewport that may be rendered by a single machine. This document is an overview of the user interface to IceT.
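For intuition, the elementary step in sort-last compositing of opaque geometry is a per-pixel depth test between two partial images. The sketch below shows that single combine step with NumPy; the array layout and function name are ours for illustration, not IceT's API.

```python
import numpy as np

def depth_composite(color_a, depth_a, color_b, depth_b):
    """Combine two partial renderings of the same viewport: at every
    pixel, keep the fragment closer to the viewer (smaller depth)."""
    nearer_b = depth_b < depth_a                      # where image B wins
    color = np.where(nearer_b[..., None], color_b, color_a)
    depth = np.minimum(depth_a, depth_b)
    return color, depth

rng = np.random.default_rng(0)
ca, cb = rng.random((4, 4, 3)), rng.random((4, 4, 3))  # RGB images
da, db = rng.random((4, 4)), rng.random((4, 4))        # Z buffers
color, depth = depth_composite(ca, da, cb, db)
```

Compositors such as IceT apply this pairwise combine in tree or binary-swap schedules so that many nodes can reduce their partial images in parallel.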
Chromium: A Stress-Processing Framework for Interactive Rendering on Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphreys, G.; Houston, M.; Ng, Y.-R.
2002-01-11
We describe Chromium, a system for manipulating streams of graphics API commands on clusters of workstations. Chromium's stream filters can be arranged to create sort-first and sort-last parallel graphics architectures that, in many cases, support the same applications while using only commodity graphics accelerators. In addition, these stream filters can be extended programmatically, allowing the user to customize the stream transformations performed by nodes in a cluster. Because our stream processing mechanism is completely general, any cluster-parallel rendering algorithm can be either implemented on top of or embedded in Chromium. In this paper, we give examples of real-world applications that use Chromium to achieve good scalability on clusters of workstations, and describe other potential uses of this stream processing technology. By completely abstracting the underlying graphics architecture, network topology, and API command processing semantics, we allow a variety of applications to run in different environments.
Distributed shared memory for roaming large volumes.
Castanié, Laurent; Mion, Christophe; Cavin, Xavier; Lévy, Bruno
2006-01-01
We present a cluster-based volume rendering system for roaming very large volumes. This system makes it possible to move a gigabyte-sized probe inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that aggregates both graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing and rendering for optimal volume roaming.
Parallel Rendering of Large Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Garbutt, Alexander E.
2005-01-01
Interactive visualization of large time-varying 3D volume datasets has been and still is a great challenge to the modern computational world. It stretches the limits of the memory capacity, the disk space, the network bandwidth and the CPU speed of a conventional computer. In this SURF project, we propose to develop a parallel volume rendering program on SGI's Prism, a cluster computer equipped with state-of-the-art graphics hardware. The proposed program combines both parallel computing and hardware rendering in order to achieve an interactive rendering rate. We use 3D texture mapping and a hardware shader to implement 3D volume rendering on each workstation. We use SGI's VisServer to enable remote rendering using Prism's graphics hardware. Last, we will integrate this new program with ParVox, a parallel distributed visualization system developed at JPL. At the end of the project, we will demonstrate remote interactive visualization using this new hardware volume renderer on JPL's Prism system using a time-varying dataset from selected JPL applications.
Efficient Scalable Median Filtering Using Histogram-Based Operations.
Green, Oded
2018-05-01
Median filtering is a smoothing technique for noise removal in images. While there are various implementations of median filtering for a single-core CPU, there are few implementations for accelerators and multi-core systems. Many parallel implementations of median filtering use a sorting algorithm for rearranging the values within a filtering window and taking the median of the sorted values. While using sorting algorithms allows for simple parallel implementations, the cost of the sorting becomes prohibitive as the filtering windows grow. This makes such algorithms, sequential and parallel alike, inefficient. In this work, we introduce the first software parallel median filter that is not based on sorting. The new algorithm uses efficient histogram-based operations. These reduce the computational requirements of the new algorithm while also accessing the image fewer times. We show an implementation of our algorithm for both the CPU and NVIDIA's CUDA-supported graphics processing unit (GPU). The new algorithm is compared with several other leading CPU and GPU implementations. The CPU implementation has near-perfect linear scaling on a quad-core system. The GPU implementation is several orders of magnitude faster than the other GPU implementations for mid-size median filters. For small kernels, comparison-based approaches are preferable as fewer operations are required. Lastly, the new algorithm is open-source and can be found in the OpenCV library.
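The efficiency argument is easiest to see in one dimension. For 8-bit data, a running median can maintain a 256-bin histogram and update it with one insertion and one deletion per step (Huang's classic scheme), so no per-window sort is ever done. The sketch below shows only this underlying principle, not the paper's full 2-D parallel algorithm.

```python
import numpy as np

def running_median_row(row, k=9):
    """Histogram-based running median over a 1-D window of odd size k:
    slide the window one sample at a time, adjust two histogram bins,
    and read the median rank from cumulative bin counts."""
    n = len(row)
    hist = np.zeros(256, dtype=int)
    for x in row[:k]:
        hist[x] += 1
    rank = k // 2 + 1                     # 1-based rank of the median
    out = np.empty(n - k + 1, dtype=row.dtype)
    for i in range(n - k + 1):
        csum, m = 0, 0
        while csum < rank:                # first bin reaching the rank
            csum += hist[m]
            m += 1
        out[i] = m - 1
        if i + k < n:                     # slide: drop left, add right
            hist[row[i]] -= 1
            hist[row[i + k]] += 1
    return out

row = np.random.randint(0, 256, 1000, dtype=np.uint8)
filtered = running_median_row(row, k=9)
```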
Parallel integer sorting with medium and fine-scale parallelism
NASA Technical Reports Server (NTRS)
Dagum, Leonardo
1993-01-01
Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. Performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128-processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
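As a rough illustration of the barrel-sort idea, the sketch below bins integer keys by range so that each (simulated) processor owns one contiguous "barrel", then sorts locally; concatenating the barrels gives a globally sorted array. The processor count and key range are arbitrary, and the real algorithm's message-passing behavior is not modeled.

```python
import numpy as np

def barrel_sort(keys, nprocs=4, key_max=1 << 16):
    """Bin keys into per-processor ranges ('barrels'), then sort each
    barrel independently; the concatenation is globally sorted."""
    width = key_max // nprocs
    barrels = [[] for _ in range(nprocs)]
    for k in keys:                                   # distribution phase
        barrels[min(k // width, nprocs - 1)].append(k)
    return np.concatenate([np.sort(b) for b in barrels if b])

keys = np.random.randint(0, 1 << 16, 10_000)
assert np.array_equal(barrel_sort(keys), np.sort(keys))
```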
Sun, Xiaobo; Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng; Qin, Zhaohui S
2018-06-01
Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrating examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools. Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems.
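The traditional single-machine baseline mentioned above, a multiway merge, takes only a few lines: given k inputs already sorted by genomic location, a heap-based merge emits records in global order in O(N log k) time without re-sorting anything. The record layout here is a made-up stand-in for parsed VCF rows.

```python
import heapq

# Each input mimics a sorted VCF: (chromosome, position, payload) records.
file_a = [(1, 101, "a1"), (1, 250, "a2"), (2, 17, "a3")]
file_b = [(1, 99, "b1"), (2, 17, "b2"), (2, 400, "b3")]
file_c = [(1, 101, "c1"), (3, 5, "c2")]

# heapq.merge keeps one record per input on a heap and always emits the
# smallest (chrom, pos) next -- an O(N log k) streaming multiway merge.
for chrom, pos, payload in heapq.merge(file_a, file_b, file_c,
                                       key=lambda r: r[:2]):
    print(chrom, pos, payload)
```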
Effects of Presence at Delivery upon Paternal-Infant Bonding
1988-05-01
to the newborn resulting in a focusing of his attention on the infant, extreme elation or a "high," and an increased sense of self-esteem...of transition checklist, parents' perceptions of role competence, postnatal self-esteem scale, obligatory infant behavior checklist, normative change...The last decade has seen a revolution of sorts in the manner health care is rendered in the realm of parent-child nursing. Prior to the 1970s, having a
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1995-01-01
This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
Data parallel sorting for particle simulation
NASA Technical Reports Server (NTRS)
Dagum, Leonardo
1992-01-01
Sorting on a parallel architecture is a communications intensive event which can incur a high penalty in applications where it is required. In the case of particle simulation, only integer sorting is necessary, and sequential implementations easily attain the minimum performance bound of O (N) for N particles. Parallel implementations, however, have to cope with the parallel sorting problem which, in addition to incurring a heavy communications cost, can make the minimun performance bound difficult to attain. This paper demonstrates how the sorting problem in a particle simulation can be reduced to a merging problem, and describes an efficient data parallel algorithm to solve this merging problem in a particle simulation. The new algorithm is shown to be optimal under conditions usual for particle simulation, and its fieldwise implementation on the Connection Machine is analyzed in detail. The new algorithm is about four times faster than a fieldwise implementation of radix sort on the Connection Machine.
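Because the keys are just cell indices, the sort reduces to a rank computation: a histogram of particles per cell plus an exclusive prefix sum yields every particle's destination slot. The sketch below performs that ranking sequentially in NumPy; it mirrors the idea only, not the fieldwise Connection Machine implementation.

```python
import numpy as np

def rank_particles(cell_ids, n_cells):
    """Counting-sort style ranking: each particle's final position in
    cell-sorted order = start offset of its cell + its index within it."""
    counts = np.bincount(cell_ids, minlength=n_cells)        # per-cell tally
    starts = np.concatenate(([0], np.cumsum(counts)[:-1]))   # exclusive scan
    rank = np.empty_like(cell_ids)
    next_slot = starts.copy()
    for i, c in enumerate(cell_ids):     # stable within each cell
        rank[i] = next_slot[c]
        next_slot[c] += 1
    return rank

cells = np.array([2, 0, 2, 1, 0, 2])
r = rank_particles(cells, 3)
in_cell_order = np.empty_like(cells)
in_cell_order[r] = cells                 # gathers to [0 0 1 2 2 2]
```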
A Parallel Pipelined Renderer for the Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Chiueh, Tzi-Cker; Ma, Kwan-Liu
1997-01-01
This paper presents a strategy for efficiently rendering time-varying volume data sets on a distributed-memory parallel computer. Time-varying volume data take large storage space and visualizing them requires reading large files continuously or periodically throughout the course of the visualization process. Instead of using all the processors to collectively render one volume at a time, a pipelined rendering process is formed by partitioning processors into groups to render multiple volumes concurrently. In this way, the overall rendering time may be greatly reduced because the pipelined rendering tasks are overlapped with the I/O required to load each volume into a group of processors; moreover, parallelization overhead may be reduced as a result of partitioning the processors. We modify an existing parallel volume renderer to exploit various levels of rendering parallelism and to study how the partitioning of processors may lead to optimal rendering performance. Two factors which are important to the overall execution time are re-source utilization efficiency and pipeline startup latency. The optimal partitioning configuration is the one that balances these two factors. Tests on Intel Paragon computers show that in general optimal partitionings do exist for a given rendering task and result in 40-50% saving in overall rendering time.
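The balance between startup latency and utilization can be captured with a toy cost model: more groups hide more I/O behind rendering but serialize more initial loads and render each volume with fewer processors. All timings and the model itself below are invented purely to illustrate that a sweet spot exists.

```python
def pipeline_time(n_volumes, n_procs, n_groups, t_io, t_render):
    """Toy model of the grouped pipeline: groups render every n_groups-th
    volume; loading the next volume overlaps rendering the current one,
    so the steady-state cost per volume is max(I/O, render)."""
    group_size = n_procs // n_groups
    t_r = t_render / group_size            # idealized render speedup
    startup = t_io * n_groups              # initial loads fill the pipeline
    return startup + max(t_io, t_r) * (n_volumes / n_groups)

# Sweep group counts for 64 processors and 100 time steps (made-up costs).
for g in (1, 2, 4, 8, 16):
    print(g, round(pipeline_time(100, 64, g, t_io=4.0, t_render=60.0), 1))
```

In this crude model the total time falls until the I/O is fully hidden and rises again as startup dominates, echoing the paper's conclusion that an optimal partitioning balances the two factors.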
Beyond the Renderer: Software Architecture for Parallel Graphics and Visualization
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1996-01-01
As numerous implementations have demonstrated, software-based parallel rendering is an effective way to obtain the needed computational power for a variety of challenging applications in computer graphics and scientific visualization. To fully realize their potential, however, parallel renderers need to be integrated into a complete environment for generating, manipulating, and delivering visual data. We examine the structure and components of such an environment, including the programming and user interfaces, rendering engines, and image delivery systems. We consider some of the constraints imposed by real-world applications and discuss the problems and issues involved in bringing parallel rendering out of the lab and into production.
Görlach, E; Richmond, R; Lewis, I
1998-08-01
For the last two years, the mass spectroscopy section of the Novartis Pharma Research Core Technology group has analyzed tens of thousands of multiple parallel synthesis samples from the Novartis Pharma Combinatorial Chemistry program, using an in-house developed automated high-throughput flow injection analysis electrospray ionization mass spectroscopy system. The electrospray spectra of these samples reflect the many structures present after the cleavage step from the solid support. The overall success of the sequential synthesis is mirrored in the purity of the expected end product, but the partial success of individual synthesis steps is evident in the impurities in the mass spectrum. However this latter reaction information, which is of considerable utility to the combinatorial chemist, is effectively hidden from view by the very large number of analyzed samples. This information is now revealed at the workbench of the combinatorial chemist by a novel three-dimensional display of each rack's complete mass spectral ion current using the in-house RackViewer Visual Basic application. Colorization of "forbidden loss" and "forbidden gas-adduct" zones, normalization to expected monoisotopic molecular weight, colorization of ionization intensity, and sorting by row or column were used in combination to highlight systematic patterns in the mass spectroscopy data.
Hierarchical and Parallelizable Direct Volume Rendering for Irregular and Multiple Grids
NASA Technical Reports Server (NTRS)
Wilhelms, Jane; VanGelder, Allen; Tarantino, Paul; Gibbs, Jonathan
1996-01-01
A general volume rendering technique is described that efficiently produces images of excellent quality from data defined over irregular grids having a wide variety of formats. Rendering is done in software, eliminating the need for special graphics hardware, as well as any artifacts associated with graphics hardware. Images of volumes with about one million cells can be produced in one to several minutes on a workstation with a 150 MHz processor. A significant advantage of this method for applications such as computational fluid dynamics is that it can process multiple intersecting grids. Such grids present problems for most current volume rendering techniques. Also, the wide range of cell sizes (by a factor of 10,000 or more), which is typical of such applications, does not present difficulties, as it does for many techniques. A spatial hierarchical organization makes it possible to access data from a restricted region efficiently. The tree has greater depth in regions of greater detail, determined by the number of cells in the region. It also makes it possible to render useful 'preview' images very quickly (about one second for one-million-cell grids) by displaying each region associated with a tree node as one cell. Previews show enough detail to navigate effectively in very large data sets. The algorithmic techniques include use of a k-d tree, with prefix-order partitioning of triangles, to reduce the number of primitives that must be processed for one rendering, coarse-grain parallelism for a shared-memory MIMD architecture, a new perspective transformation that achieves greater numerical accuracy, and a scanline algorithm with depth sorting and a new clipping technique.
A unified framework for building high performance DVEs
NASA Astrophysics Data System (ADS)
Lei, Kaibin; Ma, Zhixia; Xiong, Hua
2011-10-01
A unified framework for integrating PC cluster based parallel rendering with distributed virtual environments (DVEs) is presented in this paper. While various scene graphs have been proposed in DVEs, it is difficult to enable collaboration of different scene graphs. This paper proposes a technique for non-distributed scene graphs with the capability of object and event distribution. With the increase of graphics data, DVEs require more powerful rendering ability. But general scene graphs are inefficient in parallel rendering. The paper also proposes a technique to connect a DVE and a PC cluster based parallel rendering environment. A distributed multi-player video game is developed to show the interaction of different scene graphs and the parallel rendering performance on a large tiled display wall.
Equalizer: a scalable parallel rendering framework.
Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato
2009-01-01
Continuing improvements in CPU and GPU performance as well as increasing multi-core processor and cluster-based parallelism demand flexible and scalable parallel rendering solutions that can exploit multipipe hardware accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop and often only application specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic to support various types of data and visualization applications, and at the same time work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems ranging from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture and the basic API, discuss its advantages over previous approaches, and present example configurations and usage scenarios as well as scalability results.
A Binary Array Asynchronous Sorting Algorithm with Using Petri Nets
NASA Astrophysics Data System (ADS)
Voevoda, A. A.; Romannikov, D. O.
2017-01-01
The tasks of speeding up and optimizing computations remain topical. Among the approaches to these tasks, this paper considers a method that applies parallelization and asynchronization to a sorting algorithm. Sorting methods are elementary and are used in a huge number of applications. In this paper, we offer a method of array sorting based on dividing the array into a set of independent adjacent pairs of numbers and comparing the pairs in parallel and asynchronously; this distinguishes the offered method from traditional sorting algorithms (such as quicksort, merge sort, insertion sort and others). The algorithm is implemented with the use of Petri nets, as the most suitable tool for describing asynchronous systems.
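The decomposition into independent adjacent pairs is the observation that also underlies odd-even transposition sort: within one phase, every compare-exchange touches disjoint elements, so all of them may run in parallel or asynchronously. The sketch below shows those pairwise phases executed sequentially, without the paper's Petri-net machinery.

```python
def odd_even_sort(a):
    """Sort by alternating phases of disjoint adjacent-pair compare-swaps;
    within a phase no two comparisons share an element, so a phase could
    execute fully in parallel."""
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = phase % 2                 # even phase: (0,1),(2,3),...
        for i in range(start, n - 1, 2):  # odd phase: (1,2),(3,4),...
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]
```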
Parallel sort with a ranged, partitioned key-value store in a high performance computing environment
Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron; Poole, Stephen W.
2016-01-26
Improved sorting techniques are provided that perform a parallel sort using a ranged, partitioned key-value store in a high performance computing (HPC) environment. A plurality of input data files comprising unsorted key-value data in a partitioned key-value store are sorted. The partitioned key-value store comprises a range server for each of a plurality of ranges. Each input data file has an associated reader thread. Each reader thread reads the unsorted key-value data in the corresponding input data file and performs a local sort of the unsorted key-value data to generate sorted key-value data. A plurality of sorted, ranged subsets of each of the sorted key-value data are generated based on the plurality of ranges. Each sorted, ranged subset corresponds to a given one of the ranges and is provided to one of the range servers corresponding to the range of the sorted, ranged subset. Each range server sorts the received sorted, ranged subsets and provides a sorted range. A plurality of the sorted ranges are concatenated to obtain a globally sorted result.
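A compact way to see the claimed data flow is to simulate it: readers sort their files locally and split the results by key range, range servers merge what they receive, and the sorted ranges concatenate into a global order. The function and range scheme below are an illustration of that flow, not the patented implementation.

```python
import numpy as np

def ranged_parallel_sort(input_files, n_ranges, key_max):
    """Readers sort locally and ship per-range subsets to range servers;
    each server merges its subsets; concatenated ranges are globally sorted."""
    width = key_max // n_ranges
    inbox = [[] for _ in range(n_ranges)]            # one per range server
    for data in input_files:                         # 'reader threads'
        local = np.sort(data)                        # local sort
        cuts = np.searchsorted(local, [width * r for r in range(1, n_ranges)])
        for r, subset in enumerate(np.split(local, cuts)):
            if subset.size:
                inbox[r].append(subset)
    ranges = [np.sort(np.concatenate(s)) for s in inbox if s]  # server merge
    return np.concatenate(ranges)

files = [np.random.randint(0, 1000, 50) for _ in range(4)]
out = ranged_parallel_sort(files, n_ranges=4, key_max=1000)
assert np.array_equal(out, np.sort(np.concatenate(files)))
```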
The Border Star 85 Survey: Toward an Archeology of Landscapes
1988-12-12
historic properties on that highly active military installation...the TRU method as implemented) were inadequate for rendering determinations of National...Doña Ana phase settlement, such that one could speculate as to how and why variation among...required only minimal reporting sufficient to render National...this sort are complicated, however, by factors that render them dependent upon precipitation. In normal or high rainfall years there would be many
Design considerations for parallel graphics libraries
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1994-01-01
Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.
Portability and Cross-Platform Performance of an MPI-Based Parallel Polygon Renderer
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1999-01-01
Visualizing the results of computations performed on large-scale parallel computers is a challenging problem, due to the size of the datasets involved. One approach is to perform the visualization and graphics operations in place, exploiting the available parallelism to obtain the necessary rendering performance. Over the past several years, we have been developing algorithms and software to support visualization applications on NASA's parallel supercomputers. Our results have been incorporated into a parallel polygon rendering system called PGL. PGL was initially developed on tightly-coupled distributed-memory message-passing systems, including Intel's iPSC/860 and Paragon, and IBM's SP2. Over the past year, we have ported it to a variety of additional platforms, including the HP Exemplar, SGI Origin2000, Cray T3E, and clusters of Sun workstations. In implementing PGL, we have had two primary goals: cross-platform portability and high performance. Portability is important because (1) our manpower resources are limited, making it difficult to develop and maintain multiple versions of the code, and (2) NASA's complement of parallel computing platforms is diverse and subject to frequent change. Performance is important in delivering adequate rendering rates for complex scenes and ensuring that parallel computing resources are used effectively. Unfortunately, these two goals are often at odds. In this paper we report on our experiences with portability and performance of the PGL polygon renderer across a range of parallel computing platforms.
A Parallel Rendering Algorithm for MIMD Architectures
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.; Orloff, Tobias
1991-01-01
Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.
Automated Handling of Garments for Pressing
1991-09-30
Parallel Algorithms for 2D Kalman Filtering, 47, D.J. Potter and M.P. Cline; Hash Table and Sorted Array: A Case Study of...Kalman Filtering on the Connection Machine, 55, M.A. Palis and D.K. Krecker; Parallel Sorting of Large Arrays on the MasPar...6 ALGORITHMS FOR SEAM SENSING, 24; 6.1 Karel Algorithms, 24; 6.1.1 Image Filtering
Tile-based Level of Detail for the Parallel Age
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niski, K; Cohen, J D
Today's PCs incorporate multiple CPUs and GPUs and are easily arranged in clusters for high-performance, interactive graphics. We present an approach based on hierarchical, screen-space tiles to parallelizing rendering with level of detail. Adapt tiles, render tiles, and machine tiles are associated with CPUs, GPUs, and PCs, respectively, to efficiently parallelize the workload with good resource utilization. Adaptive tile sizes provide load balancing while our level of detail system allows total and independent management of the load on CPUs and GPUs. We demonstrate our approach on parallel configurations consisting of both single PCs and a cluster of PCs.
Approaching the exa-scale: a real-world evaluation of rendering extremely large data sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patchett, John M; Ahrens, James P; Lo, Li - Ta
2010-10-15
Extremely large scale analysis is becoming increasingly important as supercomputers and their simulations move from petascale to exascale. The lack of dedicated hardware acceleration for rendering on today's supercomputing platforms motivates our detailed evaluation of the possibility of interactive rendering on the supercomputer. In order to facilitate our understanding of rendering on the supercomputing platform, we focus on scalability of rendering algorithms and architecture envisioned for exascale datasets. To understand tradeoffs for dealing with extremely large datasets, we compare three different rendering algorithms for large polygonal data: software based ray tracing, software based rasterization and hardware accelerated rasterization. We present a case study of strong and weak scaling of rendering extremely large data on both GPU and CPU based parallel supercomputers using ParaView, a parallel visualization tool. We use three different data sets: two synthetic and one from a scientific application. At an extreme scale, algorithmic rendering choices make a difference and should be considered while approaching exascale computing, visualization, and analysis. We find software based ray-tracing offers a viable approach for scalable rendering of the projected future massive data sizes.
NASA Technical Reports Server (NTRS)
Dagum, Leonardo
1989-01-01
The data parallel implementation of a particle simulation for hypersonic rarefied flow described by Dagum associates a single parallel data element with each particle in the simulation. The simulated space is divided into discrete regions called cells containing a variable and constantly changing number of particles. The implementation requires a global sort of the parallel data elements so as to arrange them in an order that allows immediate access to the information associated with cells in the simulation. Described here is a very fast algorithm for performing the necessary ranking of the parallel data elements. The performance of the new algorithm is compared with that of the microcoded instruction for ranking on the Connection Machine.
The 2nd Symposium on the Frontiers of Massively Parallel Computations
NASA Technical Reports Server (NTRS)
Mills, Ronnie (Editor)
1988-01-01
Programming languages, computer graphics, neural networks, massively parallel computers, SIMD architecture, algorithms, digital terrain models, sort computation, simulation of charged particle transport on the massively parallel processor and image processing are among the topics discussed.
Real-time volume rendering of digital medical images on an iOS device
NASA Astrophysics Data System (ADS)
Noon, Christian; Holub, Joseph; Winer, Eliot
2013-03-01
Performing high quality 3D visualizations on mobile devices, while tantalizingly close in many areas, is still a quite difficult task. This is especially true for 3D volume rendering of digital medical images. Achieving it would give medical personnel a powerful tool to diagnose and treat patients and to train the next generation of physicians. This research focuses on performing real time volume rendering of digital medical images on iOS devices using custom developed GPU shaders for orthogonal texture slicing. An interactive volume renderer was designed and developed with several new features including dynamic modification of render resolutions, an incremental render loop, a shader-based clipping algorithm to support OpenGL ES 2.0, and an internal backface culling algorithm for properly sorting rendered geometry with alpha blending. The application was developed using several application programming interfaces (APIs) such as OpenSceneGraph (OSG) as the primary graphics renderer coupled with iOS Cocoa Touch for user interaction, and DCMTK for DICOM I/O. The developed application rendered volume datasets of over 450 slices at up to 50-60 frames per second, depending on the specific model of the iOS device. All rendering is done locally on the device so no Internet connection is required.
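The need to depth-sort geometry comes from alpha blending: the "over" operator is order-dependent, so slices must be composited back to front. A minimal NumPy sketch of that compositing loop, with invented slice data, follows.

```python
import numpy as np

def composite_back_to_front(slices):
    """Blend RGBA slices with the 'over' operator, farthest slice first;
    reordering the slices changes the image, which is why the renderer
    must sort its slice geometry before blending."""
    h, w, _ = slices[0].shape
    out = np.zeros((h, w, 3))
    for rgba in slices:                  # assumed sorted far -> near
        rgb, a = rgba[..., :3], rgba[..., 3:4]
        out = rgb * a + out * (1.0 - a)  # 'over'
    return out

rng = np.random.default_rng(1)
stack = [rng.random((8, 8, 4)) * [1, 1, 1, 0.3] for _ in range(16)]
image = composite_back_to_front(stack)
```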
The Area-Time Complexity of Sorting.
1984-12-01
suggests a classification of keys into short (k < log n), long (k > 2 log n), and of medium length. Optimal or near-optimal designs of VLSI sorters are...ARCHITECTURES 79; 5.1 Introduction 79; 5.2 Parallel Algorithms for Sorting 80; 5.3 Parallel Architectures 88; 6 OPTIMAL VLSI SORTERS FOR KEYS OF LENGTH k = log n
A real-time spike sorting method based on the embedded GPU.
Zelan Yang; Kedi Xu; Xiang Tian; Shaomin Zhang; Xiaoxiang Zheng
2017-07-01
Microelectrode arrays with hundreds of channels have been widely used to acquire neuron population signals in neuroscience studies. Online spike sorting is becoming one of the most important challenges for high-throughput neural signal acquisition systems. Graphics processing units (GPUs) with high parallel computing capability might provide an alternative solution for increasing real-time computational demands on spike sorting. This study reported a method of real-time spike sorting through compute unified device architecture (CUDA) which was implemented on an embedded GPU (NVIDIA JETSON Tegra K1, TK1). The sorting approach is based on principal component analysis (PCA) and K-means. By analyzing the parallelism of each process, the method was further optimized in the thread memory model of the GPU. Our results showed that the GPU-based classifier on the TK1 is 37.92 times faster than the MATLAB-based classifier on a PC, while their accuracies were identical. The high-performance computing features of the embedded GPU demonstrated in our studies suggest that the embedded GPU provides a promising platform for real-time neural signal processing.
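A CPU reference for the same two-stage pipeline takes a few lines with scikit-learn: project each waveform onto a few principal components, then cluster the projections with K-means. The synthetic waveforms and library calls below are our illustration, not the paper's CUDA kernels.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Synthetic 'spike waveforms': three templates plus noise, 48 samples each.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 48)
templates = [np.sin(2 * np.pi * f * t) * np.exp(-4 * t) for f in (2, 5, 9)]
waveforms = np.vstack([tmpl + 0.1 * rng.standard_normal((200, 48))
                       for tmpl in templates])

features = PCA(n_components=3).fit_transform(waveforms)   # reduce dimension
labels = KMeans(n_clusters=3, n_init=10).fit_predict(features)
print(np.bincount(labels))   # roughly 200 spikes assigned per putative unit
```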
Hybrid Parallelism for Volume Rendering on Large-, Multi-, and Many-Core Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howison, Mark; Bethel, E. Wes; Childs, Hank
2012-01-01
With the computing industry trending towards multi- and many-core processors, we study how a standard visualization algorithm, ray-casting volume rendering, can benefit from a hybrid parallelism approach. Hybrid parallelism provides the best of both worlds: using distributed-memory parallelism across a large number of nodes increases available FLOPs and memory, while exploiting shared-memory parallelism among the cores within each node ensures that each node performs its portion of the larger calculation as efficiently as possible. We demonstrate results from weak and strong scaling studies, at levels of concurrency ranging up to 216,000, and with datasets as large as 12.2 trillion cells. The greatest benefit from hybrid parallelism lies in the communication portion of the algorithm, the dominant cost at higher levels of concurrency. We show that reducing the number of participants with a hybrid approach significantly improves performance.
A data distributed parallel algorithm for ray-traced volume rendering
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Painter, James S.; Hansen, Charles D.; Krogh, Michael F.
1993-01-01
This paper presents a divide-and-conquer ray-traced volume rendering algorithm and a parallel image compositing method, along with their implementation and performance on the Connection Machine CM-5, and networked workstations. This algorithm distributes both the data and the computations to individual processing units to achieve fast, high-quality rendering of high-resolution data. The volume data, once distributed, is left intact. The processing nodes perform local ray tracing of their subvolume concurrently. No communication between processing units is needed during this locally ray-tracing process. A subimage is generated by each processing unit and the final image is obtained by compositing subimages in the proper order, which can be determined a priori. Test results on both the CM-5 and a group of networked workstations demonstrate the practicality of our rendering algorithm and compositing method.
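Because the compositing order is known a priori, each node renders its subvolume independently and the subimages are blended along the view direction afterwards. A sketch of that ordered compositing step, with invented data and front-to-back "under" blending, is shown below.

```python
import numpy as np

def composite_subimages(subimages, centers, view_dir):
    """Composite per-node RGBA subimages front to back, in the order given
    by projecting each subvolume's center onto the view direction -- the
    a priori ordering the abstract refers to."""
    order = np.argsort([np.dot(c, view_dir) for c in centers])
    h, w, _ = subimages[0].shape
    rgb, alpha = np.zeros((h, w, 3)), np.zeros((h, w, 1))
    for idx in order:                          # front-to-back 'under'
        s_rgb, s_a = subimages[idx][..., :3], subimages[idx][..., 3:4]
        rgb += (1.0 - alpha) * s_rgb * s_a
        alpha += (1.0 - alpha) * s_a
    return rgb

rng = np.random.default_rng(2)
subs = [rng.random((8, 8, 4)) for _ in range(4)]
centers = [np.array([0.0, 0.0, z]) for z in range(4)]
img = composite_subimages(subs, centers, np.array([0.0, 0.0, 1.0]))
```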
Marcos Zapata's "Last Supper": a feast of European religion and Andean culture.
Zendt, Christina
2010-01-01
In Marcos Zapata's 1753 painting of the Last Supper in Cuzco, Peru, Christian symbolism is filtered through Andean cultural tradition. Zapata was a late member of the Cuzco School of Painting, a group composed of a few European immigrants and handfuls of mestizo and Indian artists. The painters in Cuzco learned mostly from prints of European paintings, and their style tends to blend local culture into the traditional painting of their conquistadors. Imagery was the most successful tool used by the Spaniards in their quest to Christianize the Andean population. By teaching locals to paint Christian subjects, they were able to infuse Christianity into Andean traditions. Zapata's rendering of the Last Supper utilizes this cultural blending while staying true to the Christian symbolism within the subject. Instead of the traditional lamb, Zapata's Last Supper features a platter of cuy, or guinea pig, an Andean delicacy stocked with protein as well as cultural significance. Cuy was traditionally a sacrificial animal at Inca agricultural festivals and in this way it offers a poignant parallel to the lamb, the traditional Christian sacrificial animal.
A concept of volume rendering guided search process to analyze medical data set.
Zhou, Jianlong; Xiao, Chun; Wang, Zhiyan; Takatsuka, Masahiro
2008-03-01
This paper first presents a parallel-coordinates-based parameter control panel (PCP). The PCP is used to control parameters of focal region-based volume rendering (FRVR) during data analysis. It uses a parallel coordinates style interface: different rendering parameters are represented as nodes on each axis, and renditions based on related parameters are connected using polylines to show dependencies between renditions and parameters. Based on the PCP, a concept of a volume rendering guided search process is proposed. The search pipeline is divided into four phases. Different parameters of FRVR are recorded and modulated in the PCP during the search phases. The concept shows that volume visualization can play the role of guiding a search process in the rendition space to help users efficiently find local structures of interest. The usability of the proposed approach is evaluated to show its effectiveness.
A parallel coordinates style interface for exploratory volume visualization.
Tory, Melanie; Potts, Simeon; Möller, Torsten
2005-01-01
We present a user interface, based on parallel coordinates, that facilitates exploration of volume data. By explicitly representing the visualization parameter space, the interface provides an overview of rendering options and enables users to easily explore different parameters. Rendered images are stored in an integrated history bar that facilitates backtracking to previous visualization options. Initial usability testing showed clear agreement between users and experts of various backgrounds (usability, graphic design, volume visualization, and medical physics) that the proposed user interface is a valuable data exploration tool.
2002-01-01
wrappers to other widely used languages, namely TCL/TK, Java, and Python. VTK is very powerful and covers polygonal models and image processing classes and...follows: Large Data Visualization and Rendering; Information Visualization for Beginners; Rendering and Visualization in Parallel Environments
Parallel Visualization of Large-Scale Aerodynamics Calculations: A Case Study on the Cray T3E
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Crockett, Thomas W.
1999-01-01
This paper reports the performance of a parallel volume rendering algorithm for visualizing a large-scale, unstructured-grid dataset produced by a three-dimensional aerodynamics simulation. This dataset, containing over 18 million tetrahedra, allows us to extend our performance results to a problem which is more than 30 times larger than the one we examined previously. This high resolution dataset also allows us to see fine, three-dimensional features in the flow field. All our tests were performed on the Silicon Graphics Inc. (SGI)/Cray T3E operated by NASA's Goddard Space Flight Center. Using 511 processors, a rendering rate of almost 9 million tetrahedra/second was achieved with a parallel overhead of 26%.
ALG-2 activates the MVB sorting function of ALIX through relieving its intramolecular interaction
Sun, Sheng; Zhou, Xi; Corvera, Joe; Gallick, Gary E; Lin, Sue-Hwa; Kuang, Jian
2015-01-01
The modular adaptor protein ALIX is critically involved in endosomal sorting complexes required for transport (ESCRT)-mediated multivesicular body (MVB) sorting of activated epidermal growth factor receptor (EGFR); however, ALIX contains a default intramolecular interaction that renders ALIX unable to perform this ESCRT function. The ALIX partner protein ALG-2 is a calcium-binding protein that belongs to the calmodulin superfamily. Prompted by a defined biological function of calmodulin, we determined the role of ALG-2 in regulating ALIX involvement in MVB sorting of activated EGFR. Our results show that calcium-dependent ALG-2 interaction with ALIX completely relieves the intramolecular interaction of ALIX and promotes CHMP4-dependent ALIX association with the membrane. EGFR activation induces increased ALG-2 interaction with ALIX, and this increased interaction is responsible for increased ALIX association with the membrane. Functionally, inhibition of ALIX activation by ALG-2 inhibits MVB sorting of activated EGFR as effectively as inhibition of ALIX interaction with CHMP4 does; however, inhibition of ALIX activation by ALG-2 does not affect cytokinetic abscission or equine infectious anemia virus (EIAV) budding. These findings indicate that calcium-dependent ALG-2 interaction with ALIX is specifically responsible for generating functional ALIX that supports MVB sorting of ubiquitinated membrane receptors. PMID:27462417
The Container Problem in Bubble-Sort Graphs
NASA Astrophysics Data System (ADS)
Suzuki, Yasuto; Kaneko, Keiichi
Bubble-sort graphs are variants of Cayley graphs. A bubble-sort graph is suitable as a topology for massively parallel systems because of its simple and regular structure. Therefore, in this study, we focus on n-bubble-sort graphs and propose an algorithm to obtain n-1 disjoint paths between two arbitrary nodes in time bounded by a polynomial in n, the degree of the graph plus one. We estimate the time complexity of the algorithm and the sum of the path lengths after proving the correctness of the algorithm. In addition, we report the results of computer experiments evaluating the average performance of the algorithm.
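A small sketch makes the structure explicit: the vertices of the n-bubble-sort graph are the permutations of n symbols, and each vertex has exactly n-1 neighbors, one per adjacent transposition, which is why n-1 disjoint paths are the most one can hope for between two nodes.

```python
from itertools import permutations

def bubble_sort_graph(n):
    """Adjacency list of the n-bubble-sort graph: vertices are permutations
    of 1..n; edges join permutations differing by one adjacent swap."""
    verts = list(permutations(range(1, n + 1)))
    adj = {v: [] for v in verts}
    for v in verts:
        for i in range(n - 1):                # swap positions i and i+1
            u = list(v)
            u[i], u[i + 1] = u[i + 1], u[i]
            adj[v].append(tuple(u))
    return adj

g = bubble_sort_graph(4)
print(len(g), len(g[(1, 2, 3, 4)]))   # 24 vertices, each of degree n-1 = 3
```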
Parallel volume ray-casting for unstructured-grid data on distributed-memory architectures
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu
1995-01-01
As computing technology continues to advance, computational modeling of scientific and engineering problems produces data of increasing complexity: large in size and unstructured in shape. Volume visualization of such data is a challenging problem. This paper proposes a distributed parallel solution that makes ray-casting volume rendering of unstructured-grid data practical. Both the data and the rendering process are distributed among processors. At each processor, ray-casting of local data is performed independent of the other processors. The global image composing processes, which require inter-processor communication, are overlapped with the local ray-casting processes to achieve maximum parallel efficiency. This algorithm differs from previous ones in four ways: it is completely distributed, less view-dependent, reasonably scalable, and flexible. Without using dynamic load balancing, test results on the Intel Paragon using from two to 128 processors show, on average, about 60% parallel efficiency.
Run-time parallelization and scheduling of loops
NASA Technical Reports Server (NTRS)
Saltz, Joel H.; Mirchandaney, Ravi; Baxter, Doug
1988-01-01
The class of problems that can be effectively compiled by parallelizing compilers is discussed. This is accomplished with the doconsider construct which would allow these compilers to parallelize many problems in which substantial loop-level parallelism is available but cannot be detected by standard compile-time analysis. We describe and experimentally analyze mechanisms used to parallelize the work required for these types of loops. In each of these methods, a new loop structure is produced by modifying the loop to be parallelized. We also present the rules by which these loop transformations may be automated in order that they be included in language compilers. The main application area of the research involves problems in scientific computations and engineering. The workload used in our experiment includes a mixture of real problems as well as synthetically generated inputs. From our extensive tests on the Encore Multimax/320, we have reached the conclusion that for the types of workloads we have investigated, self-execution almost always performs better than pre-scheduling. Further, the improvement in performance that accrues as a result of global topological sorting of indices as opposed to the less expensive local sorting, is not very significant in the case of self-execution.
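The inspector/executor idea behind such run-time parallelization can be sketched directly: scan the iterations once, record which memory locations each touches, and place every iteration in the earliest wavefront after all of its conflicts; iterations in the same wavefront may run concurrently. The conservative conflict test and the access pattern below are our simplification, not the doconsider mechanism itself.

```python
def schedule_wavefronts(accesses):
    """Inspector phase: assign iteration i to the earliest wavefront after
    every earlier iteration touching any of the same locations (treating
    all touches as conflicts -- conservative but safe)."""
    last_level = {}                          # location -> deepest wavefront
    fronts = []
    for locs in accesses:
        level = max((last_level.get(x, 0) for x in locs), default=0) + 1
        for x in locs:
            last_level[x] = level
        fronts.append(level)
    return fronts

# Locations touched by each iteration of a loop with indirect indexing.
accesses = [{3}, {7}, {3, 5}, {1}, {5}, {7, 1}]
print(schedule_wavefronts(accesses))         # [1, 1, 2, 1, 3, 2]
```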
Scan line graphics generation on the massively parallel processor
NASA Technical Reports Server (NTRS)
Dorband, John E.
1988-01-01
Described here is how researchers implemented a scan line graphics generation algorithm on the Massively Parallel Processor (MPP). Pixels are computed in parallel and their results are applied to the Z buffer in large groups. Performing the pixel value calculations, facilitating load balancing across the processors, and applying the results to the Z buffer efficiently in parallel requires special virtual routing (sort computation) techniques developed by the author especially for use on single-instruction multiple-data (SIMD) architectures.
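Applying pixel results to the Z buffer in large groups is essentially a scatter combined with a minimum-depth reduction. NumPy's unbuffered ufunc scatter expresses the same depth-test semantics in a single call; this is a depth-only sketch that does not model the MPP's routing machinery.

```python
import numpy as np

def zbuffer_update(zbuf, xs, ys, zs):
    """Apply a batch of pixel samples to the Z buffer at once: for each
    target pixel keep the minimum depth, even when several samples in the
    batch land on the same pixel."""
    np.minimum.at(zbuf, (ys, xs), zs)        # unbuffered scatter-min
    return zbuf

zbuf = np.full((4, 4), np.inf)
xs = np.array([0, 1, 1, 3])
ys = np.array([2, 0, 0, 3])
zs = np.array([0.5, 0.9, 0.2, 0.7])
zbuffer_update(zbuf, xs, ys, zs)             # pixel (1, 0) keeps depth 0.2
```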
Physiology of spermatozoa at high dilution rates: the influence of seminal plasma.
Maxwell, W M; Johnson, L A
1999-12-01
Extensive dilution of spermatozoa, as occurs during flow-cytometric sperm sorting, can reduce their motility and viability. These effects may be minimized by the use of appropriate dilution and collection media, containing balanced salts, energy sources, egg yolk and some protein. Dilution and flow-cytometric sorting of spermatozoa, which involves the removal of seminal plasma, also destabilizes sperm membranes leading to functional capacitation. This membrane destabilization renders the spermatozoa immediately capable of fertilization in vitro, or in vivo after deposition close to the site of fertilization, but shortens their lifespan, resulting in premature death if the cells are deposited in the female tract distant from the site of fertilization or are held in vitro at standard storage temperatures. This functional capacitation can be reversed in boar spermatozoa by inclusion of seminal plasma in the medium used to collect the cells from the cell sorter and, consequently, reduces their in vitro fertility. It has yet to be determined whether seminal plasma would have similar effects on flow cytometrically sorted spermatozoa of other species, and what its effects might be on the in vivo fertility of flow sorted boar.
Realistic Real-Time Outdoor Rendering in Augmented Reality
Kolivand, Hoshang; Sunar, Mohd Shahrizal
2014-01-01
Realistic rendering techniques for outdoor Augmented Reality (AR) have been an attractive topic for the last two decades, considering the sizeable amount of publications in computer graphics. Realistic virtual objects in outdoor rendering AR systems require sophisticated effects such as shadows, daylight and interactions between sky colours and virtual as well as real objects. A few realistic rendering techniques have been designed to overcome this obstacle, most of which are related to non real-time rendering. However, the problem still remains, especially in outdoor rendering. This paper proposes a new, unique technique to achieve realistic real-time outdoor rendering, taking into account the interaction between sky colours and objects in AR systems with respect to shadows in any specific location, date and time. This approach involves three main phases, which cover different outdoor AR rendering requirements. Firstly, sky colour is generated with respect to the position of the sun. The second step involves the shadow generation algorithm, Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps). Lastly, a technique to integrate sky colours and shadows, through their effects on virtual objects in the AR system, is introduced. The experimental results reveal that the proposed technique has significantly improved the realism of real-time outdoor AR rendering, thus solving the problem of realistic AR systems. PMID:25268480
A Survey of Parallel Sorting Algorithms.
1981-12-01
see that, in this algorithm, each Processor i, for 1 ≤ i ≤ p−2, interacts directly only with Processors i+1 and i−1. Processor 0 only interacts with...[Chan76] Chandra, A.K., "Maximal Parallelism in Matrix Multiplication," IBM Report RC 6193, Watson Research Center, Yorktown Heights, N.Y., October 1976
Parallel Algorithms and Patterns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robey, Robert W.
2016-06-16
This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of such problems include sorting, searching, optimization, and matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are reductions, prefix scans, and ghost cell updates. We only touch on parallel patterns in this presentation; the topic really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
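Of the patterns named, the prefix scan is the least obvious, so here is a work-efficient (Blelloch) exclusive scan written to expose its two phases; every inner loop body is independent across i and could run in parallel, even though this sketch executes them sequentially. A power-of-two length is assumed for brevity.

```python
def exclusive_scan(a, op=lambda x, y: x + y, identity=0):
    """Blelloch exclusive prefix scan: an up-sweep builds a reduction tree
    in place, then a down-sweep distributes partial results back down."""
    n = len(a)                       # assumed a power of two
    t = list(a)
    d = 1
    while d < n:                     # up-sweep
        for i in range(0, n, 2 * d):
            t[i + 2 * d - 1] = op(t[i + d - 1], t[i + 2 * d - 1])
        d *= 2
    t[n - 1] = identity
    d = n // 2
    while d >= 1:                    # down-sweep
        for i in range(0, n, 2 * d):
            left = t[i + d - 1]
            t[i + d - 1] = t[i + 2 * d - 1]
            t[i + 2 * d - 1] = op(left, t[i + 2 * d - 1])
        d //= 2
    return t

print(exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))
# -> [0, 3, 4, 11, 11, 15, 16, 22]
```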
Supplement to the December 1974 Space Investigation Documentation System (SIDS) report
NASA Technical Reports Server (NTRS)
1975-01-01
A listing and brief description of spacecraft and experiments designed to update the December 1974 Space Investigations Documentation System (SIDS) report to March 31, 1975 is presented. The information is given in two sections. In the first, spacecraft and experiment descriptions are sorted by spacecraft common name. Within each spacecraft listing, experiments are sorted by the principal investigator's or team leader's last name. Each spacecraft entry heading contains the spacecraft common name, alternate names, NSSDC ID code, last reported state of the spacecraft, actual or planned launch date, weight, launch site and vehicle, sponsor, orbit parameters, and personnel. Each experiment entry heading contains the experiment name, NSSDC ID code, last reported status, the Office of Space Science (OSS) division, the relevant SIDS disciplines, and personnel. In the second, all spacecraft and experiment names described in the previous section and in the December 1974 report are sorted.
Shields, C Wyatt; Reyes, Catherine D; López, Gabriel P
2015-03-07
Accurate and high throughput cell sorting is a critical enabling technology in molecular and cellular biology, biotechnology, and medicine. While conventional methods can provide high efficiency sorting in short timescales, advances in microfluidics have enabled the realization of miniaturized devices offering similar capabilities that exploit a variety of physical principles. We classify these technologies as either active or passive. Active systems generally use external fields (e.g., acoustic, electric, magnetic, and optical) to impose forces to displace cells for sorting, whereas passive systems use inertial forces, filters, and adhesion mechanisms to purify cell populations. Cell sorting on microchips provides numerous advantages over conventional methods by reducing the size of necessary equipment, eliminating potentially biohazardous aerosols, and simplifying the complex protocols commonly associated with cell sorting. Additionally, microchip devices are well suited for parallelization, enabling complete lab-on-a-chip devices for cellular isolation, analysis, and experimental processing. In this review, we examine the breadth of microfluidic cell sorting technologies, while focusing on those that offer the greatest potential for translation into clinical and industrial practice and that offer multiple, useful functions. We organize these sorting technologies by the type of cell preparation required (i.e., fluorescent label-based sorting, bead-based sorting, and label-free sorting) as well as by the physical principles underlying each sorting mechanism.
Shields, C. Wyatt; Reyes, Catherine D.; López, Gabriel P.
2015-01-01
Accurate and high throughput cell sorting is a critical enabling technology in molecular and cellular biology, biotechnology, and medicine. While conventional methods can provide high efficiency sorting in short timescales, advances in microfluidics have enabled the realization of miniaturized devices offering similar capabilities that exploit a variety of physical principles. We classify these technologies as either active or passive. Active systems generally use external fields (e.g., acoustic, electric, magnetic, and optical) to impose forces to displace cells for sorting, whereas passive systems use inertial forces, filters, and adhesion mechanisms to purify cell populations. Cell sorting on microchips provides numerous advantages over conventional methods by reducing the size of necessary equipment, eliminating potentially biohazardous aerosols, and simplifying the complex protocols commonly associated with cell sorting. Additionally, microchip devices are well suited for parallelization, enabling complete lab-on-a-chip devices for cellular isolation, analysis, and experimental processing. In this review, we examine the breadth of microfluidic cell sorting technologies, while focusing on those that offer the greatest potential for translation into clinical and industrial practice and that offer multiple, useful functions. We organize these sorting technologies by the type of cell preparation required (i.e., fluorescent label-based sorting, bead-based sorting, and label-free sorting) as well as by the physical principles underlying each sorting mechanism. PMID:25598308
Neural Parallel Engine: A toolbox for massively parallel neural signal processing.
Tam, Wing-Kin; Yang, Zhi
2018-05-01
Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts, depending on the algorithm. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing focused only on a few rudimentary algorithms, were not well optimized, and often did not provide a user-friendly programming interface to fit into existing workflows; there is a strong need for a comprehensive toolbox for massively parallel neural signal processing. The new toolbox created here can offer significant speedup in processing signals from large-scale recordings of up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
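The abstract describes the parallel peak detection only at a high level; a plausible reading is an elementwise predicate followed by a stream-compaction (compact) step. The NumPy sketch below illustrates that pattern under those assumptions and is not the toolbox's actual implementation:

```python
import numpy as np

def find_peaks_parallel_style(x, threshold):
    """Data-parallel peak detection sketch: an elementwise predicate
    (local maximum above threshold) followed by stream compaction.
    Both steps map naturally onto GPU kernels."""
    x = np.asarray(x, dtype=float)
    mid = x[1:-1]
    is_peak = (mid > x[:-2]) & (mid >= x[2:]) & (mid > threshold)
    # Compaction: keep only the indices where the predicate holds.
    return np.flatnonzero(is_peak) + 1

sig = np.array([0.0, 1.0, 0.2, 3.0, 2.9, 0.1, 2.0, 0.0])
print(find_peaks_parallel_style(sig, threshold=0.5))  # [1 3 6]
```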
Rapid Parallel Calculation of Shell Element Based on GPU
NASA Astrophysics Data System (ADS)
Wang, Jian Hua; Li, Guang Yao; Li, Sheng
2010-06-01
Long computing times have bottlenecked the application of the finite element method (FEM). This paper puts forward an effective method to speed up FEM calculations using modern graphics processing units and programmable rendering tools: element information is represented in accordance with the features of the GPU, all element calculations are converted into rendering passes, the internal-force calculation of every element is carried out in this form, and the low degree of parallelism previously seen when running on a single computer is overcome. Studies show that this method improves efficiency and shortens calculation time greatly. Emulation results for an elasticity problem with a large number of cells in sheet metal demonstrate that the GPU-parallel simulation is faster than the CPU version. This approach is useful and efficient for solving practical engineering problems.
Binary-space-partitioned images for resolving image-based visibility.
Fu, Chi-Wing; Wong, Tien-Tsin; Tong, Wai-Shun; Tang, Chi-Keung; Hanson, Andrew J
2004-01-01
We propose a novel 2D representation for 3D visibility sorting, the Binary-Space-Partitioned Image (BSPI), to accelerate real-time image-based rendering. BSPI is an efficient 2D realization of a 3D BSP tree, which is commonly used in computer graphics for time-critical visibility sorting. Since the overall structure of a BSP tree is encoded in a BSPI, traversing a BSPI is comparable to traversing the corresponding BSP tree. BSPI performs visibility sorting efficiently and accurately in the 2D image space by warping the reference image triangle-by-triangle instead of pixel-by-pixel. Multiple BSPIs can be combined to solve "disocclusion," when an occluded portion of the scene becomes visible at a novel viewpoint. Our method is highly automatic, including a tensor voting preprocessing step that generates candidate image partition lines for BSPIs, filters the noisy input data by rejecting outliers, and interpolates missing information. Our system has been applied to a variety of real data, including stereo, motion, and range images.
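As a hedged sketch of the machinery behind this record, the following Python is not the paper's BSPI encoding itself, but the back-to-front BSP-tree traversal whose visibility order a BSPI captures; all names and the 2D partition planes are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BSPNode:
    # Partition line in 2D: a*x + b*y + c = 0
    a: float; b: float; c: float
    front: Optional["BSPNode"] = None
    back: Optional["BSPNode"] = None
    polygons: tuple = ()  # geometry lying on this partition

def side(node, point):
    return node.a * point[0] + node.b * point[1] + node.c

def back_to_front(node, eye, out):
    """Visit geometry far-to-near relative to 'eye' -- the visibility
    sorting a BSP tree (and hence a BSPI) encodes."""
    if node is None:
        return
    if side(node, eye) >= 0:   # eye in front: render the back side first
        back_to_front(node.back, eye, out)
        out.extend(node.polygons)
        back_to_front(node.front, eye, out)
    else:                      # eye behind: render the front side first
        back_to_front(node.front, eye, out)
        out.extend(node.polygons)
        back_to_front(node.back, eye, out)

root = BSPNode(1, 0, 0, polygons=("wall",),
               front=BSPNode(0, 1, -1, polygons=("table",)),
               back=BSPNode(0, 1, 1, polygons=("door",)))
order = []
back_to_front(root, eye=(2.0, 0.0), out=order)
print(order)  # ['door', 'wall', 'table']
```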
Radar Design to Protect Against Surprise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin W.
Technological and doctrinal surprise is about rendering preparations for conflict irrelevant or ineffective. For a sensor, this means essentially rendering the sensor irrelevant or ineffective in its ability to help determine truth. Recovery from this sort of surprise is facilitated by flexibility in our own technology and doctrine. For a sensor, this means flexibility in its architecture, design, tactics, and the designing organizations' processes. This report is the result of an unfunded research and development activity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
NASA Astrophysics Data System (ADS)
Tomori, Zoltan; Keša, Peter; Nikorovič, Matej; Kaňka, Jan; Zemánek, Pavel
2016-12-01
We propose improved control software for holographic optical tweezers (HOT) suitable for simple semi-automated sorting. The controller receives data from both the human-interface sensors and the HOT microscope camera and processes them. As a result, the new positions of the active laser traps are calculated, packed into the network format and sent to the remote HOT. Using the photo-polymerization technique, we created a sorting container consisting of two parallel horizontal walls, where one wall contains "gates" marking the places where a trapped particle can enter the container. The positions of particles and gates are obtained by an image analysis technique, which can be exploited to achieve a higher level of automation. Sorting is demonstrated on a computer-game simulation and in a real experiment.
ERIC Educational Resources Information Center
Saglam, Mehmet; Sungu, Hilmi
2015-01-01
This study focuses on how teachers discriminated among their students in various respects, as rendered in the narratives of primary school students in 1950s, 1970s and 1980s Turkey. The construction and reconstruction of the personal and social stories of teachers and students is also a sort of education and educational research. The method of the…
The Visualization Toolkit (VTK): Rewriting the rendering code for modern graphics cards
NASA Astrophysics Data System (ADS)
Hanwell, Marcus D.; Martin, Kenneth M.; Chaudhary, Aashish; Avila, Lisa S.
2015-09-01
The Visualization Toolkit (VTK) is an open source, permissively licensed, cross-platform toolkit for scientific data processing, visualization, and data analysis. It is over two decades old, originally developed for a very different graphics card architecture. Modern graphics cards feature fully programmable, highly parallelized architectures with large core counts. VTK's rendering code was rewritten to take advantage of modern graphics cards, maintaining most of the toolkit's programming interfaces. This offers the opportunity to compare the performance of old and new rendering code on the same systems/cards. Significant improvements in rendering speeds and memory footprints mean that scientific data can be visualized in greater detail than ever before. The widespread use of VTK means that these improvements will reap significant benefits.
n-body simulations using message passing parallel computers.
NASA Astrophysics Data System (ADS)
Grama, A. Y.; Kumar, V.; Sameh, A.
The authors present new parallel formulations of the Barnes-Hut method for n-body simulations on message passing computers. These parallel formulations partition the domain efficiently incurring minimal communication overhead. This is in contrast to existing schemes that are based on sorting a large number of keys or on the use of global data structures. The new formulations are augmented by alternate communication strategies which serve to minimize communication overhead. The impact of these communication strategies is experimentally studied. The authors report on experimental results obtained from an astrophysical simulation on an nCUBE2 parallel computer.
Pashkova, Natasha; Gakhar, Lokesh; Winistorfer, Stanley; Sunshine, Anna B.; Rich, Matthew; Dunham, Maitreya J.; Yu, Liping; Piper, Robert
2013-01-01
Sorting of ubiquitinated membrane proteins into lumenal vesicles of multivesicular bodies is mediated by the ESCRT apparatus and accessory proteins such as Bro1, which recruits the deubiquitinating enzyme Doa4 to remove ubiquitin from cargo. Here we propose that Bro1 works as a receptor for the selective sorting of ubiquitinated cargos. We found synthetic genetic interactions between BRO1 and ESCRT-0, suggesting Bro1 functions similarly to ESCRT-0. Multiple structural approaches demonstrated that Bro1 binds ubiquitin via the N-terminal trihelical arm of its middle V domain. Mutants of Bro1 that lack the ability to bind Ub were dramatically impaired in their ability to sort Ub-cargo membrane proteins, but only when combined with hypomorphic alleles of ESCRT-0. These data suggest that Bro1 and other Bro1 family members function in parallel with ESCRT-0 to recognize and sort Ub-cargos. PMID:23726974
Syntactic Change in the Parallel Architecture: The Case of Parasitic Gaps
ERIC Educational Resources Information Center
Culicover, Peter W.
2017-01-01
In Jackendoff's Parallel Architecture, the well-formed expressions of a language are licensed by correspondences between phonology, syntax, and conceptual structure. I show how this architecture can be used to make sense of the existence of parasitic gap constructions. A parasitic gap is one that is rendered acceptable because of the presence of…
Three-dimensional rendering in medicine: some common misconceptions
NASA Astrophysics Data System (ADS)
Udupa, Jayaram K.
2001-05-01
As seen in the medical imaging literature and in the poster presentations at the annual conference of the Radiological Society of North America during the past 10 years, several misconceptions are held relating to 3D rendering of medical images. The purpose of this presentation is to illustrate and clarify these with medical examples. Most of the misconceptions have to do with a mix-up of the issues related to the common visualization techniques, viz., surface rendering (SR) and volume rendering (VR), and methods of image segmentation. In our survey, we came across the following most commonly held conceptions, which we believe (and shall demonstrate) are not correct: (1) SR equated to thresholding. (2) VR considered not to require segmentation. (3) VR considered to achieve higher resolution than SR. (4) SR/VR considered to require specialized hardware to achieve adequate speed. We shall briefly define and establish some fundamental terms to obviate any potential for terminology-related misconceptions. Subsequently, we shall sort out these issues and illustrate with examples why the above conceptions are incorrect. There are many SR methods that use segmentation methods far superior to thresholding. All VR techniques (except the straightforward MIP) require some form of fuzzy object specification, that is, fuzzy segmentation. The details seen in renditions depend fundamentally on the segmentation technique in addition to the rendering method. There are fast software-based rendering methods that give a performance on PCs similar to or exceeding that of expensive hardware systems. Most of the difficulties encountered in visualization (and also in image processing and analysis) stem from the difficulties in segmentation. It is important to identify these and separate them from the issues related purely to 3D rendering.
Parallel text rendering by a PostScript interpreter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kritskii, S.P.; Zastavnoi, B.A.
1994-11-01
The most radical method of increasing the performance of devices controlled by PostScript interpreters may be the use of multiprocessor controllers. This paper presents a method for parallelizing the operation of a PostScript interpreter for rendering text. The proposed method is based on decomposition of the outlines of letters into horizontal strips covering equal areas. The subregions thus obtained are distributed to the processors in a network and then filled in by conventional sequential algorithms. A special algorithm has been developed for dividing the outlines of characters into subregions so that each may be colored independently of the others. The algorithm uses special estimates of the correct partition by which the corresponding outlines are divided into horizontal strips, and a method is presented for finding such estimates. Two different processing approaches are presented. In the first, one of the processors performs the decomposition of the outlines and distributes the strips to the remaining processors, which are responsible for the rendering. In the second approach, the decomposition process is itself distributed among the processors in the network.
Chrono: A Parallel Physics Library for Rigid-Body, Flexible-Body, and Fluid Dynamics
2013-08-01
big data. Chrono::Render is capable of using 320 cores and is built around Pixar's RenderMan. All these components combine to produce Chrono, a multi... rather small collection of rigid and/or deformable bodies of complex geometry (hourglass wall, wheel, track shoe, excavator blade, dipper), and a... motivated by the scope of arbitrary data sets and the potentially immense scene complexity that results from big data; REYES, the underlying architecture
Exploiting Data Similarity to Reduce Memory Footprints
2011-01-01
leslie3d: Fortran Computational Fluid Dynamics (CFD) application. 122.tachyon: C parallel ray tracing application. 128.GAPgeofem: C and Fortran, simulates... benefits most from SBLLmalloc; LAMMPS, which shows moderate similarity from primarily zero pages; and 122.tachyon, a parallel ray-tracing application... similarity across MPI tasks. They are primarily zero pages, although a small fraction (≈10%) are non-zero pages. 122.tachyon is an image rendering
David W. Green; Thomas M. Gorman; Joseph F. Murphy; Matthew B. Wheeler
2007-01-01
This study evaluates the effect of moisture content on the properties of 127- to 152.4-mm (5- to 6-in.-) diameter lodgepole pine (Pinus contorta Engelm.) logs that were tested either in bending or in compression parallel to the grain. Lodgepole pine logs were obtained from a dense stand near Seeley Lake, Montana, and sorted into four piles of 30 logs each. Two groups...
NASA Astrophysics Data System (ADS)
Rao, Lang; Cai, Bo; Yu, Xiao-Lei; Guo, Shi-Shang; Liu, Wei; Zhao, Xing-Zhong
2015-05-01
3D microelectrodes are fabricated in one step into a microfluidic droplet separator by filling conductive silver paste into PDMS microchambers. The advantages of 3D silver paste electrodes in promoting droplet sorting accuracy are systematically demonstrated by theoretical calculation, numerical simulation and experimental validation. The employment of 3D electrodes also helps to decrease the droplet sorting voltage, guaranteeing that cells encapsulated in droplets undergoing chip-based sorting are at a better metabolic status for further potential cellular assays. Finally, target droplets containing single cells are selectively sorted out from the others by an appropriate electric pulse. This method provides a simple and inexpensive alternative for fabricating 3D electrodes, and it is expected that our 3D electrode-integrated microfluidic droplet separator platform can be widely used in single cell manipulation and analysis.
Abdellah, Marwan; Eldeib, Ayman; Owis, Mohamed I
2015-01-01
This paper features an advanced implementation of the X-ray rendering algorithm that harnesses the giant computing power of current commodity graphics processors to accelerate the generation of high resolution digitally reconstructed radiographs (DRRs). The presented pipeline exploits the latest features of NVIDIA Graphics Processing Unit (GPU) architectures, mainly bindless texture objects and dynamic parallelism. The rendering throughput is substantially improved by exploiting the interoperability mechanisms between CUDA and OpenGL. The benchmarks of our optimized rendering pipeline reflect its capability of generating DRRs with resolutions of 2048² and 4096² at interactive and semi-interactive frame rates using an NVIDIA GeForce GTX 970 device.
USING LINKED MICROMAP PLOTS TO CHARACTERIZE OMERNIK ECOREGIONS
The paper introduces linked micromap (LM) plots for presenting environmental summaries. The LM template includes parallel sequences of micromap, table, and statistical summary graphics panels with attention paid to perceptual grouping, sorting and linking of the summary components...
Miklós, István; Darling, Aaron E
2009-06-22
Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique.
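The sampler itself is involved; as a minimal grounding sketch (not the MC4Inversion code), here is the elementary move the method samples over, a signed inversion that reverses both the order and the orientation of a segment of genes:

```python
def apply_inversion(perm, i, j):
    """Reverse the segment perm[i..j] and flip the signs (orientations)
    of the genes inside it -- the basic 'inversion' mutation."""
    seg = [-g for g in reversed(perm[i:j + 1])]
    return perm[:i] + seg + perm[j + 1:]

genome = [1, -4, 3, 2, 5]
print(apply_inversion(genome, 1, 3))  # [1, -2, -3, 4, 5]
```

An optimal sorting path is a shortest sequence of such moves transforming one signed permutation into another; the paper's contribution is sampling uniformly from the set of all such shortest sequences.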
Kennedy, Deirdre; Cronin, Ultan P.; Wilkinson, Martin G.
2011-01-01
Three common food pathogenic microorganisms were exposed to treatments simulating those used in food processing. Treated cell suspensions were then analyzed for reduction in growth by plate counting. Flow cytometry (FCM) and fluorescence-activated cell sorting (FACS) were carried out on treated cells stained for membrane integrity (Syto 9/propidium iodide) or the presence of membrane potential [DiOC2(3)]. For each microbial species, representative cells from various subpopulations detected by FCM were sorted onto selective and nonselective agar and evaluated for growth and recovery rates. In general, treatments giving rise to the highest reductions in counts also had the greatest effects on cell membrane integrity and membrane potential. Overall, treatments that impacted cell membrane permeability did not necessarily have a comparable effect on membrane potential. In addition, some bacterial species with extensively damaged membranes, as detected by FCM, appeared to be able to replicate and grow after sorting. Growth of sorted cells from various subpopulations was not always reflected in plate counts, and in some cases the staining protocol may have rendered cells unculturable. Optimized FCM protocols generated a greater insight into the extent of the heterogeneous bacterial population responses to food control measures than did plate counts. This study underlined the requirement to use FACS to relate various cytometric profiles generated by various staining protocols with the ability of cells to grow on microbial agar plates. Such information is a prerequisite for more-widespread adoption of FCM as a routine microbiological analytical technique. PMID:21602370
Hybrid Parallel Contour Trees, Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sewell, Christopher; Fasel, Patricia; Carr, Hamish
A common operation in scientific visualization is to compute and render a contour of a data set. Given a function of the form f : R^d -> R, a level set is defined as an inverse image f^-1(h) for an isovalue h, and a contour is a single connected component of a level set. The Reeb graph can then be defined to be the result of contracting each contour to a single point, and is well defined for Euclidean spaces or for general manifolds. For simple domains, the graph is guaranteed to be a tree, and is called the contour tree. Analysis can then be performed on the contour tree in order to identify isovalues of particular interest, based on various metrics, and render the corresponding contours, without having to know such isovalues a priori. This code is intended to be the first data-parallel algorithm for computing contour trees. Our implementation will use the portable data-parallel primitives provided by Nvidia's Thrust library, allowing us to compile our same code for both GPUs and multi-core CPUs. Native OpenMP and purely serial versions of the code will likely also be included. It will also be extended to provide a hybrid data-parallel / distributed algorithm, allowing scaling beyond a single GPU or CPU.
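A toy illustration of the definitions above (not the data-parallel contour tree algorithm): sample f on a grid, take a superlevel set for an isovalue h, and label its connected components; each component's boundary is one contour of that isovalue. This sketch assumes SciPy is available:

```python
import numpy as np
from scipy import ndimage

# Two Gaussian bumps: at h = 0.8 the superlevel set f >= h splits into
# two components, one per bump, i.e. two distinct contours.
x, y = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
f = np.exp(-((x - 1) ** 2 + y ** 2)) + np.exp(-((x + 1) ** 2 + y ** 2))

labels, n_components = ndimage.label(f >= 0.8)
print(n_components)  # 2
```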
Optimizing agent-based transmission models for infectious diseases.
Willem, Lander; Stijven, Sean; Tijskens, Engelbert; Beutels, Philippe; Hens, Niel; Broeckhove, Jan
2015-06-02
Infectious disease modeling and computational power have evolved such that large-scale agent-based models (ABMs) have become feasible. However, the increasing hardware complexity requires adapted software designs to achieve the full potential of current high-performance workstations. We have found large performance differences with a discrete-time ABM for close-contact disease transmission due to data locality. Sorting the population according to the social contact clusters reduced simulation time by a factor of two. Data locality and model performance can also be improved by storing person attributes separately instead of using person objects. Next, decreasing the number of operations by sorting people by health status before processing disease transmission also has a large impact on model performance. Depending on the clinical attack rate, target population and computer hardware, the introduction of the sort phase decreased the run time by 26% up to more than 70%. We have investigated the application of parallel programming techniques and found that the speedup is significant but drops quickly with the number of cores. We observed that the effect of scheduling and workload chunk size is model specific and can make a large difference. Investment in performance optimization of ABM simulator code can lead to significant run time reductions. The key steps are straightforward: the data structure for the population, and sorting people on health status before computing disease propagation. We believe these conclusions to be valid for a wide range of infectious disease ABMs. We recommend that future studies evaluate the impact of data management, algorithmic procedures and parallelization on model performance.
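A minimal sketch of the sort-by-health-status step described above, with hypothetical field names; the authors' simulator is far richer, but the locality idea looks like this:

```python
import random

# Sorting the population by health status makes the susceptibles a
# contiguous block, so the transmission loop touches adjacent memory
# and skips everyone who cannot be infected.
population = [{"id": i, "infected": random.random() < 0.1}
              for i in range(100_000)]

population.sort(key=lambda p: p["infected"])  # susceptibles first
n_susceptible = sum(not p["infected"] for p in population)

for person in population[:n_susceptible]:  # only candidates for infection
    pass  # evaluate exposure from this person's contacts here
```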
Lefebvre, Baptiste; Deny, Stéphane; Gardella, Christophe; Stimberg, Marcel; Jetter, Florian; Zeck, Guenther; Picaud, Serge; Duebel, Jens
2018-01-01
In recent years, multielectrode arrays and large silicon probes have been developed to record simultaneously from hundreds to thousands of densely packed electrodes. However, they require novel methods to extract the spiking activity of large ensembles of neurons. Here, we developed a new toolbox to sort spikes from these large-scale extracellular data. To validate our method, we performed simultaneous extracellular and loose patch recordings in rodents to obtain ‘ground truth’ data, where the solution to this sorting problem is known for one cell. The performance of our algorithm was always close to the best expected performance, over a broad range of signal-to-noise ratios, in vitro and in vivo. The algorithm is entirely parallelized and has been successfully tested on recordings with up to 4225 electrodes. Our toolbox thus offers a generic solution to accurately sort spikes from up to thousands of electrodes. PMID:29557782
Design and application of the falling vertical sorting machine
NASA Astrophysics Data System (ADS)
Zuo, Ping; Peng, Tao; Yang, Hai
2018-04-01
In the process of tobacco production, it is necessary to pack cigarettes according to the needs of different customers. Sorting machines are used to pick the cigarettes; at present there are launch-channel machines and percussive vertical machines. In the sorting process, however, the rolling-channel machine behaves differently depending on the mass of the pack and the friction it experiences, so it is difficult to guarantee the position and posture of packs on the belt sorting line, which causes the manipulator to fail to grasp them; the percussive vertical machine has difficulty keeping the packs parallel. This team has now developed a falling-type sorting machine, which solves the problem of dropping a cigarette pack onto the transmission belt without misplacement; it can handle most types of cigarette sorting without damaging the packs. The dynamic characteristics, such as the angular error of the opening and closing mechanism, were analyzed with ADAMS software. The simulation results show that the maximum angular error is 0.016 rad. Tests of the device show a throughput of 7031 items per hour and a falling-position error within 2 mm, meeting the grasping accuracy requirements of the palletizing robot.
High-fidelity real-time maritime scene rendering
NASA Astrophysics Data System (ADS)
Shyu, Hawjye; Taczak, Thomas M.; Cox, Kevin; Gover, Robert; Maraviglia, Carlos; Cahill, Colin
2011-06-01
The ability to simulate authentic engagements using real-world hardware is an increasingly important tool. For rendering maritime environments, scene generators must be capable of rendering radiometrically accurate scenes with correct temporal and spatial characteristics. When the simulation is used as input to real-world hardware or human observers, the scene generator must operate in real-time. This paper introduces a novel, real-time scene generation capability for rendering radiometrically accurate scenes of backgrounds and targets in maritime environments. The new model is an optimized and parallelized version of the US Navy CRUISE_Missiles rendering engine. It was designed to accept environmental descriptions and engagement geometry data from external sources, render a scene, transform the radiometric scene using the electro-optical response functions of a sensor under test, and output the resulting signal to real-world hardware. This paper reviews components of the scene rendering algorithm, and details the modifications required to run this code in real-time. A description of the simulation architecture and interfaces to external hardware and models is presented. Performance assessments of the frame rate and radiometric accuracy of the new code are summarized. This work was completed in FY10 under Office of Secretary of Defense (OSD) Central Test and Evaluation Investment Program (CTEIP) funding and will undergo a validation process in FY11.
Design and Implementation of High-Performance GIS Dynamic Objects Rendering Engine
NASA Astrophysics Data System (ADS)
Zhong, Y.; Wang, S.; Li, R.; Yun, W.; Song, G.
2017-12-01
Spatio-temporal dynamic visualization is more vivid than static visualization. It is important to use dynamic visualization techniques to reveal variation processes and trends vividly and comprehensively for geographical phenomena. To deal with the challenges posed by dynamic visualization of both 2D and 3D spatial dynamic targets, especially across different spatial data types, a high-performance GIS dynamic objects rendering engine is required. The main approach for improving a rendering engine handling vast numbers of dynamic targets relies on key technologies of high-performance GIS, including in-memory computing, parallel computing, GPU computing and high-performance algorithms. In this study, a high-performance GIS dynamic objects rendering engine is designed and implemented to solve this problem based on hybrid acceleration techniques. The engine combines GPU computing, OpenGL technology and high-performance algorithms with the advantage of 64-bit in-memory computing. It processes 2D and 3D dynamic target data efficiently and runs smoothly with vast numbers of dynamic targets. A prototype system of the high-performance GIS dynamic objects rendering engine was developed based on SuperMap GIS iObjects. Experiments designed for large-scale spatial data visualization show that the engine achieves high performance: rendering two-dimensional and three-dimensional dynamic objects is 20 times faster on the GPU than on the CPU.
Adaptive proxy map server for efficient vector spatial data rendering
NASA Astrophysics Data System (ADS)
Sayar, Ahmet
2013-01-01
The rapid transmission of vector map data over the Internet is becoming a bottleneck of spatial data delivery and visualization in web-based environments because of increasing data volumes and limited network bandwidth. In order to improve both the transmission and rendering performance of vector spatial data over the Internet, we propose a proxy map server enabling parallel vector data fetching as well as caching to improve the performance of web-based map servers in a dynamic environment. The proxy map server is placed seamlessly anywhere between the client and the final services, intercepting users' requests. It employs an efficient parallelization technique based on spatial proximity and data density when distributed replicas exist for the same spatial data. The effectiveness of the proposed technique is demonstrated at the end of the article by creating map images enriched with earthquake seismic data records.
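A hedged sketch of the parallel-fetching idea using Python's concurrent.futures; the replica URLs, request pattern and function names here are hypothetical, not the article's actual protocol:

```python
from concurrent.futures import ThreadPoolExecutor
import urllib.request

# Hypothetical replica endpoints serving the same spatial layer.
REPLICAS = ["http://replica-a.example/wms", "http://replica-b.example/wms"]

def fetch_chunk(args):
    """Fetch one bounding-box chunk from one replica (illustration only;
    the call would fail offline)."""
    replica, bbox = args
    url = f"{replica}?request=GetFeature&bbox={','.join(map(str, bbox))}"
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def parallel_fetch(bboxes):
    """Split a request into spatial chunks and fetch them concurrently,
    round-robining over replicas, then hand the parts to the renderer."""
    jobs = [(REPLICAS[i % len(REPLICAS)], b) for i, b in enumerate(bboxes)]
    with ThreadPoolExecutor(max_workers=len(jobs)) as pool:
        return list(pool.map(fetch_chunk, jobs))
```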
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sewell, Christopher Meyer
This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, Neil Reginald; Colston, Jr, Billy W.
An apparatus for chip-based sorting, amplification, detection, and identification of a sample having a planar substrate. The planar substrate is divided into cells. The cells are arranged on the planar substrate in rows and columns. Electrodes are located in the cells. A micro-reactor maker produces micro-reactors containing the sample. The micro-reactor maker is positioned to deliver the micro-reactors to the planar substrate. A microprocessor is connected to the electrodes for manipulating the micro-reactors on the planar substrate. A detector is positioned to interrogate the sample contained in the micro-reactors.
Functional analysis of tight junction organization.
DiBona, D R
1985-01-01
The functional basis of tight junction design has been examined from the point of view that this rate-limiting barrier to paracellular transport is a multicompartment system. Review of the osmotic sensitivity of these structures points to the need for this sort of analysis for meaningful correlation of structure and function under a range of conditions. A similar conclusion is drawn with respect to results from voltage-clamping protocols where reversal of spontaneous transmural potential difference elicits parallel changes in both structure and function in much the same way as does reversal of naturally occurring osmotic gradients. In each case, it becomes necessary to regard the junction as a functionally polarized structure to account for observations of its rectifying properties. Lastly, the details of experimentally-induced junction deformation are examined in light of current theories of its organization; arguments are presented in favor of the view that the primary components of intramembranous organization (as viewed with freeze-fracture techniques) are lipidic rather than proteinaceous.
Darling, Aaron E.
2009-01-01
Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called “MC4Inversion.” We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique. PMID:20333186
Pattern recognition with parallel associative memory
NASA Technical Reports Server (NTRS)
Toth, Charles K.; Schenk, Toni
1990-01-01
An examination is conducted of the feasibility of searching for targets in aerial photographs by means of a parallel associative memory (PAM) that is based on the nearest-neighbor algorithm; the Hamming distance is used as a measure of closeness in order to discriminate patterns. Attention has been given to targets typically used for ground-control points. The method developed sorts out approximate target positions for which precise localization is needed in the course of the data-acquisition process. The majority of control points in different images were correctly identified.
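A minimal sketch of nearest-neighbor recall with Hamming distance, the matching rule named above; this is an illustration of the principle, not the PAM hardware's algorithm:

```python
def hamming(a, b):
    """Number of positions at which two equal-length patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def nearest_pattern(query, memory):
    """Nearest-neighbor recall as in an associative memory: return the
    stored pattern with the smallest Hamming distance to the query."""
    return min(memory, key=lambda m: hamming(query, m))

stored = ["101100", "010011", "111000"]
print(nearest_pattern("101101", stored))  # '101100' (distance 1)
```

In the PAM, the distances to all stored patterns are evaluated in parallel rather than in a Python loop, which is what makes the recall fast.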
Flow cytometry for enrichment and titration in massively parallel DNA sequencing
Sandberg, Julia; Ståhl, Patrik L.; Ahmadian, Afshin; Bjursell, Magnus K.; Lundeberg, Joakim
2009-01-01
Massively parallel DNA sequencing is revolutionizing genomics research throughout the life sciences. However, the reagent costs and labor requirements in current sequencing protocols are still substantial, although improvements are continuously being made. Here, we demonstrate an effective alternative to existing sample titration protocols for the Roche/454 system using Fluorescence Activated Cell Sorting (FACS) technology to determine the optimal DNA-to-bead ratio prior to large-scale sequencing. Our method, which eliminates the need for the costly pilot sequencing of samples during titration is capable of rapidly providing accurate DNA-to-bead ratios that are not biased by the quantification and sedimentation steps included in current protocols. Moreover, we demonstrate that FACS sorting can be readily used to highly enrich fractions of beads carrying template DNA, with near total elimination of empty beads and no downstream sacrifice of DNA sequencing quality. Automated enrichment by FACS is a simple approach to obtain pure samples for bead-based sequencing systems, and offers an efficient, low-cost alternative to current enrichment protocols. PMID:19304748
NASA Technical Reports Server (NTRS)
Dorband, John E.
1987-01-01
Generating graphics to faithfully represent information can be a computationally intensive task. A way of using the Massively Parallel Processor to generate images by ray tracing is presented. This technique uses sort computation, a method of performing generalized routing interspersed with computation on a single-instruction-multiple-data (SIMD) computer.
NASA Astrophysics Data System (ADS)
Olson, Richard F.
2013-05-01
Rendering of point scatterer based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g. OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.
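The inner kernel such a renderer must vectorize is, at heart, a sum of complex returns over scatterers. As a hedged analog (NumPy standing in for the AVX SIMD registers; the wavelength and scatterer values are illustrative, not AMRDEC's):

```python
import numpy as np

def scene_return(ranges_m, amplitudes, wavelength_m):
    """Coherent sum of complex returns over point scatterers.
    The elementwise exp/multiply/sum is exactly the kind of
    register-to-register arithmetic an AVX build keeps in SIMD lanes."""
    k = 2 * np.pi / wavelength_m
    phases = 2.0 * k * ranges_m  # two-way propagation phase
    return np.sum(amplitudes * np.exp(1j * phases))

rng = np.random.default_rng(0)
ranges = 1000.0 + rng.uniform(0, 5, size=100_000)  # scatterer ranges (m)
amps = rng.uniform(0.1, 1.0, size=100_000)
print(scene_return(ranges, amps, wavelength_m=0.0039))  # ~77 GHz mmW
```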
NASA Astrophysics Data System (ADS)
Destefano, Anthony; Heerikhuisen, Jacob
2015-04-01
Fully 3D particle simulations can be a computationally and memory expensive task, especially when high resolution grid cells are required. The problem becomes further complicated when parallelization is needed. In this work we focus on computational methods to solve these difficulties. Hilbert curves are used to map the 3D particle space to the 1D contiguous memory space. This method of organization allows for minimized cache misses on the GPU as well as a sorted structure that is equivalent to an octal tree data structure. This type of sorted structure is attractive for use in adaptive mesh implementations due to the logarithmic search time. Implementations using the Message Passing Interface (MPI) library and NVIDIA's parallel computing platform CUDA will be compared, as MPI is commonly used on server nodes with many CPUs. We will also compare static grid structures with those of adaptive mesh structures. The physical test bed will simulate heavy interstellar atoms interacting with a background plasma, the heliosphere, using a fully consistent coupled MHD/kinetic particle code. It is known that charge exchange is an important factor in space plasmas; specifically, it modifies the structure of the heliosphere itself. We would like to thank the Alabama Supercomputer Authority for the use of their computational resources.
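Hilbert keys are fiddly to compute; the simpler Morton (Z-order) key below illustrates the same organize-by-space-filling-curve idea, mapping each 3D cell index to a 1D sortable key so that sorting places spatial neighbors close together in memory. A sketch of the technique, not the authors' code:

```python
def part1by2(n):
    """Spread the bits of a 10-bit integer so they occupy every 3rd slot."""
    n &= 0x3FF
    n = (n | (n << 16)) & 0xFF0000FF
    n = (n | (n << 8)) & 0x0300F00F
    n = (n | (n << 4)) & 0x030C30C3
    n = (n | (n << 2)) & 0x09249249
    return n

def morton_key(ix, iy, iz):
    """Interleave three 10-bit cell indices into one 30-bit Z-order key."""
    return part1by2(ix) | (part1by2(iy) << 1) | (part1by2(iz) << 2)

# Sorting particles by the key is the cache/coalescing win described
# above: nearby cells end up adjacent in the 1D array.
particles = [(5, 3, 9), (900, 12, 4), (5, 3, 8)]
particles.sort(key=lambda p: morton_key(*p))
print(particles)  # the two (5, 3, *) neighbors are now adjacent
```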
Fruit Sorting Using Fuzzy Logic Techniques
NASA Astrophysics Data System (ADS)
Elamvazuthi, Irraivan; Sinnadurai, Rajendran; Aftab Ahmed Khan, Mohamed Khan; Vasant, Pandian
2009-08-01
The fruit and vegetable market is getting highly selective, requiring suppliers to distribute their goods according to very strict standards of quality and presentation. In recent years, a number of fruit sorting and grading systems have appeared to fulfill the needs of the fruit processing industry. However, most of them are overly complex and too costly for the small and medium scale industries (SMIs) in Malaysia. In order to address these shortcomings, a prototype machine was developed by integrating the fruit sorting, labeling and packing processes. To realise the prototype, many design issues were dealt with. Special attention is paid to the electronic weighing sub-system for measuring weight, and the opto-electronic sub-system for determining the height and width of the fruits. Specifically, this paper discusses the application of fuzzy logic techniques in the sorting process.
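As an illustrative sketch of the fuzzy-logic grading idea (the membership functions, thresholds and grade name here are invented, not the paper's calibration):

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function rising from a, peaking at b,
    falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def grade_fruit(weight_g, width_mm):
    """Toy fuzzy rule: a fruit is 'premium' to the degree it is both
    heavy and large; min acts as the fuzzy AND."""
    heavy = tri(weight_g, 120, 180, 240)  # hypothetical calibration
    large = tri(width_mm, 60, 80, 100)
    return min(heavy, large)

print(grade_fruit(170, 75))  # membership in 'premium', between 0 and 1
```

Fuzzy membership degrees like these let the sorter handle borderline fruit gracefully instead of forcing hard weight/size cutoffs.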
High-Throughput, Motility-Based Sorter for Microswimmers such as C. elegans
Yuan, Jinzhou; Zhou, Jessie; Raizen, David M.; Bau, Haim H.
2015-01-01
Animal motility varies with genotype, disease, aging, and environmental conditions. In many studies, it is desirable to carry out high throughput motility-based sorting to isolate rare animals for, among other things, forward genetic screens to identify genetic pathways that regulate phenotypes of interest. Many commonly used screening processes are labor-intensive, lack sensitivity, and require extensive investigator training. Here, we describe a sensitive, high throughput, automated, motility-based method for sorting nematodes. Our method is implemented in a simple microfluidic device capable of sorting thousands of animals per hour per module, and is amenable to parallelism. The device successfully enriches for known C. elegans motility mutants. Furthermore, using this device, we isolate low-abundance mutants capable of suppressing the somnogenic effects of the flp-13 gene, which regulates C. elegans sleep. By performing genetic complementation tests, we demonstrate that our motility-based sorting device efficiently isolates mutants for the same gene identified by tedious visual inspection of behavior on an agar surface. Therefore, our motility-based sorter is capable of performing high throughput gene discovery approaches to investigate fundamental biological processes. PMID:26008643
NASA Astrophysics Data System (ADS)
Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.
2016-06-01
We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10⁷ or 10⁸ 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
Efficient sequential and parallel algorithms for finding edit distance based motifs.
Pal, Soumitra; Xiao, Peng; Rajasekaran, Sanguthevar
2016-08-18
Motif search is an important step in extracting meaningful patterns from biological data. The general problem of motif search is intractable, and there is a pressing need to develop efficient, exact and approximation algorithms to solve this problem. In this paper, we present several novel, exact, sequential and parallel algorithms for solving the (l,d) Edit-distance-based Motif Search (EMS) problem: given two integers l, d and n biological strings, find all strings of length l that appear in each input string with at most d errors of types substitution, insertion and deletion. One popular technique to solve the problem is to explore, for each input string, the set of all possible l-mers that belong to the d-neighborhood of any substring of the input string and output those which are common to all input strings. We introduce a novel and provably efficient neighborhood exploration technique. We show that it is enough to consider the candidates in the neighborhood which are at a distance exactly d. We compactly represent these candidate motifs using wildcard characters and efficiently explore them with very few repetitions. Our sequential algorithm uses a trie-based data structure to efficiently store and sort the candidate motifs. Our parallel algorithm, in a multi-core shared memory setting, uses arrays for storing and a novel modification of radix sort for sorting the candidate motifs. Algorithms for EMS are customarily evaluated on several challenging instances such as (8,1), (12,2), (16,3), (20,4), and so on. The best previously known algorithm, EMS1, is sequential and solves instances up to (16,3) in an estimated 3 days. Our sequential algorithms are more than 20 times faster on (16,3), and much faster on other hard instances such as (9,2), (11,3) and (13,4). Our parallel algorithm achieves more than 600% scaling when using 16 threads. Our algorithms have pushed up the state of the art of EMS solvers, and we believe that the techniques introduced in this paper are also applicable to other motif search problems such as Planted Motif Search (PMS) and Simple Motif Search (SMS).
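A minimal sketch of the neighborhood-exploration step for d = 1, without the paper's wildcard compaction or trie/radix-sort machinery:

```python
ALPHABET = "ACGT"

def edit_neighborhood_1(s):
    """All strings at edit distance exactly 1 from s: substitutions,
    deletions and insertions. EMS-style algorithms enumerate such
    candidates for substrings of every input string, then intersect."""
    out = set()
    for i in range(len(s)):
        out.add(s[:i] + s[i + 1:])                 # deletion
        for c in ALPHABET:
            if c != s[i]:
                out.add(s[:i] + c + s[i + 1:])     # substitution
    for i in range(len(s) + 1):
        for c in ALPHABET:
            out.add(s[:i] + c + s[i:])             # insertion
    return out

print(len(edit_neighborhood_1("ACGT")))  # size of the d=1 candidate set
```

The paper's key observation is that only candidates at distance exactly d need to be generated, which keeps sets like this one from exploding as d grows.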
IB-LBM simulation on blood cell sorting with a micro-fence structure.
Wei, Qiang; Xu, Yuan-Qing; Tian, Fang-bao; Gao, Tian-xin; Tang, Xiao-ying; Zu, Wen-Hong
2014-01-01
A size-based blood cell sorting model with a micro-fence structure is proposed in the framework of the immersed boundary and lattice Boltzmann method (IB-LBM). The fluid dynamics is obtained by solving the discrete lattice Boltzmann equation, and cell motion and deformation are handled by the immersed boundary method. A micro-fence consists of two parallel sloped post rows adopted to separate red blood cells (RBCs) from white blood cells (WBCs), in which the cells to be separated are transported one after another by the flow into the passageway between the two post rows. Driven by the cross flow, RBCs are intended to pass through the pores of the nether post row, since they are smaller and more deformable than WBCs, while WBCs must move along the nether post row until they exit the micro-fence. Simulation results indicate that for a fixed pore width, the slope angle of the post row plays an important role in cell sorting. The cell mixture cannot be separated properly at a small slope angle, while at a large slope angle obvious blockages by WBCs disturb continuous cell sorting. As an optimal result, an adaptive slope angle is found that sorts RBCs from WBCs correctly and continuously.
k⁺-buffer: An Efficient, Memory-Friendly and Dynamic k-buffer Framework.
Vasilakis, Andreas-Alexandros; Papaioannou, Georgios; Fudos, Ioannis
2015-06-01
Depth-sorted fragment determination is fundamental for a host of image-based techniques which simulate complex rendering effects. It is also a challenging task in terms of the time and space required when rasterizing scenes with high depth complexity. When low graphics memory requirements are of utmost importance, the k-buffer can objectively be considered the preferred framework, since it ensures the correct depth order on a subset of all generated fragments. Although various alternatives have been introduced to partially or completely alleviate the noticeable quality artifacts produced by the initial k-buffer algorithm at the expense of increased memory or degraded performance, appropriate tools to automatically and dynamically compute the most suitable value of k are still missing. To this end, we introduce k⁺-buffer, a fast framework that accurately simulates the behavior of the k-buffer in a single rendering pass. Two memory-bounded data structures, (i) the max-array and (ii) the max-heap, are developed on the GPU to concurrently maintain the k-foremost fragments per pixel by exploiting pixel synchronization and fragment culling. Memory-friendly strategies are further introduced to dynamically (a) lessen the wasteful memory allocation of individual pixels with low depth complexity frequencies, (b) minimize the allocated size of the k-buffer according to different application goals and hardware limitations via a straightforward depth histogram analysis, and (c) manage the local GPU cache with a fixed-memory depth-sorting mechanism. Finally, an extensive experimental evaluation is provided demonstrating the advantages of our work over all prior k-buffer variants in terms of memory usage, performance cost and image quality.
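A software sketch of the max-heap variant described above, keeping the k nearest fragments per pixel; Python's heapq (a min-heap, so depths are negated) stands in for the GPU-side structure:

```python
import heapq

def insert_fragment(pixel_heap, depth, color, k):
    """Maintain the k foremost (smallest-depth) fragments of one pixel.
    The heap root is the farthest kept fragment, so a new fragment
    replaces it only if it is closer."""
    if len(pixel_heap) < k:
        heapq.heappush(pixel_heap, (-depth, color))
    elif depth < -pixel_heap[0][0]:  # closer than the current farthest
        heapq.heapreplace(pixel_heap, (-depth, color))

heap, k = [], 3
for depth, color in [(0.9, "red"), (0.2, "green"), (0.5, "blue"), (0.4, "grey")]:
    insert_fragment(heap, depth, color, k)
print(sorted((-d, c) for d, c in heap))  # the 3 nearest, near-to-far
```

Each insertion costs O(log k), so the per-pixel work stays bounded even in scenes with very high depth complexity.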
Roy, Swapnoneel; Thakur, Ashok Kumar
2008-01-01
Genome rearrangements have been modelled by a variety of primitives such as reversals, transpositions, block moves and block interchanges. We consider one such genome rearrangement primitive: strip exchanges. Given a permutation, the challenge is to sort it using the minimum number of strip exchanges. A strip-exchanging move interchanges the positions of two chosen strips so that they merge with other strips. The strip exchange problem is to sort a permutation using the minimum number of strip exchanges. We present the first non-trivial 2-approximation algorithm for this problem. We also observe that sorting by strip exchanges is fixed-parameter tractable. Lastly, we discuss the application of strip exchanges in a different area, Optical Character Recognition (OCR), with an example.
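A minimal sketch of the primitive itself: split a permutation into strips (maximal ascending runs of consecutive integers) and exchange two of them. The approximation algorithm is not reproduced here:

```python
def strips(perm):
    """Split a permutation into maximal ascending runs of consecutive
    integers ('strips')."""
    out, cur = [], [perm[0]]
    for x in perm[1:]:
        if x == cur[-1] + 1:
            cur.append(x)
        else:
            out.append(cur)
            cur = [x]
    out.append(cur)
    return out

def strip_exchange(perm, i, j):
    """Swap the i-th and j-th strips -- the move whose minimum count
    the sorting problem above asks for."""
    s = strips(perm)
    s[i], s[j] = s[j], s[i]
    return [x for strip in s for x in strip]

p = [4, 5, 1, 2, 3]
print(strips(p))                # [[4, 5], [1, 2, 3]]
print(strip_exchange(p, 0, 1))  # [1, 2, 3, 4, 5] -- sorted in one move
```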
On Dark Times, Parallel Universes, and Deja Vu.
ERIC Educational Resources Information Center
Starnes, Bobby Ann
2000-01-01
Effectiveness cannot be found in the mediocrity arising from programs that require lessons, teaching strategies, and precisely executed materials to ensure integrity. Expensive, scripted programs like Success for All are designed not to improve teaching, but to render the art of teaching unnecessary. (MLH)
Parallel-SymD: A Parallel Approach to Detect Internal Symmetry in Protein Domains.
Jha, Ashwani; Flurchick, K M; Bikdash, Marwan; Kc, Dukka B
2016-01-01
Internally symmetric proteins are proteins that have a symmetrical structure in their monomeric single-chain form. Around 10-15% of the protein domains can be regarded as having some sort of internal symmetry. In this regard, we previously published SymD (symmetry detection), an algorithm that determines whether a given protein structure has internal symmetry by attempting to align the protein to its own copy after the copy is circularly permuted by all possible numbers of residues. SymD has proven to be a useful algorithm to detect symmetry. In this paper, we present a new parallelized algorithm called Parallel-SymD for detecting symmetry of proteins on clusters of computers. The achieved speedup of the new Parallel-SymD algorithm scales well with the number of computing processors. Scaling is better for proteins with a larger number of residues. For a protein of 509 residues, a speedup of 63 was achieved on a parallel system with 100 processors.
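A toy, sequence-level analog of the idea above (SymD itself aligns 3D structures, not strings): compare a sequence against every circular permutation of itself and report the best non-trivial rotation. Each rotation is scored independently, which is exactly what makes the per-shift work easy to distribute across processors:

```python
def best_circular_self_match(seq):
    """Score seq against each of its circular permutations (shift k > 0)
    by counting matching positions; return the best shift. A high score
    at shift k suggests internal repetition with period k."""
    n = len(seq)
    def score(k):
        rotated = seq[k:] + seq[:k]
        return sum(a == b for a, b in zip(seq, rotated))
    return max(range(1, n), key=score)

# A 2-fold internally 'symmetric' toy sequence: best shift is half its length.
print(best_circular_self_match("ABCDEABCDE"))  # 5
```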
Parallel-SymD: A Parallel Approach to Detect Internal Symmetry in Protein Domains
Jha, Ashwani; Flurchick, K. M.; Bikdash, Marwan
2016-01-01
Internally symmetric proteins are proteins that have a symmetrical structure in their monomeric single-chain form. Around 10–15% of the protein domains can be regarded as having some sort of internal symmetry. In this regard, we previously published SymD (symmetry detection), an algorithm that determines whether a given protein structure has internal symmetry by attempting to align the protein to its own copy after the copy is circularly permuted by all possible numbers of residues. SymD has proven to be a useful algorithm to detect symmetry. In this paper, we present a new parallelized algorithm called Parallel-SymD for detecting symmetry of proteins on clusters of computers. The achieved speedup of the new Parallel-SymD algorithm scales well with the number of computing processors. Scaling is better for proteins with a larger number of residues. For a protein of 509 residues, a speedup of 63 was achieved on a parallel system with 100 processors. PMID:27747230
Mechanically robust microfluidics and bulk wave acoustics to sort microparticles
NASA Astrophysics Data System (ADS)
Dauson, Erin R.; Gregory, Kelvin B.; Greve, David W.; Healy, Gregory P.; Oppenheim, Irving J.
2016-04-01
Sorting microparticles (or cells, or bacteria) is significant for scientific, medical and industrial purposes. Research groups have used lithium niobate SAW devices to produce standing waves and then align microparticles at the node lines in polydimethylsiloxane (PDMS, silicone) microfluidic channels. The "tilted angle" (skewed) configuration is a recent breakthrough that produces particle trajectories crossing multiple node lines, making it practical to sort particles. However, lithium niobate wafers and PDMS microfluidic channels are not mechanically robust. We demonstrate "tilted angle" microparticle sorting in novel devices that are robust, rapidly prototyped, and manufacturable. We form our microfluidic system in a rigid polymethyl methacrylate (PMMA, acrylic) prism, sandwiched by lead zirconate titanate (PZT) wafers operating in through-thickness mode with inertial backing, which produce standing bulk waves. The overall configuration is compact and mechanically robust, and actuating PZT wafers in through-thickness mode is highly efficient. Moving to this novel configuration introduced new acoustics questions involving internal reflections, but we show experimental images confirming the intended nodal geometry. Microparticles in "tilted angle" devices display undulating trajectories, where deviation from the straight path increases with particle diameter and with excitation voltage, creating the mechanism by which particles are sorted. We show a simplified analytical model by which a "phase space" is constructed to characterize effective particle sorting, and we compare our experimental data to the predictions of that simplified model; precise correlation is not expected and is not observed, but the important physical trends from the model are paralleled in the measured particle trajectories.
An efficient parallel algorithm for the calculation of canonical MP2 energies.
Baker, Jon; Pulay, Peter
2002-09-01
We present the parallel version of a previous serial algorithm for the efficient calculation of canonical MP2 energies (Pulay, P.; Saebo, S.; Wolinski, K. Chem Phys Lett 2001, 344, 543). It is based on the Saebo-Almlöf direct-integral transformation, coupled with an efficient prescreening of the AO integrals. The parallel algorithm avoids synchronization delays by spawning a second set of slaves during the bin-sort prior to the second half-transformation. Results are presented for systems with up to 2000 basis functions. MP2 energies for molecules with 400-500 basis functions can be routinely calculated to microhartree accuracy on a small number of processors (6-8) in a matter of minutes with modern PC-based parallel computers. Copyright 2002 Wiley Periodicals, Inc. J Comput Chem 23: 1150-1156, 2002
An Exploration of Distributed Parallel Sorting in GSS
ERIC Educational Resources Information Center
Diller, Christopher B. R.
2013-01-01
When the members of a group work collaboratively using a group support system (GSS), they often "brainstorm" a list of ideas in response to a question or challenge that faces the group. The satisfaction levels of group members are usually high following this activity. However, satisfaction levels with the process almost always drop…
Communication Studies of DMP and SMP Machines
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Biswas, Rupak; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
Understanding the interplay between machines and problems is key to obtaining high performance on parallel machines. This paper investigates the interplay between programming paradigms and the communication capabilities of parallel machines. In particular, we explicate the communication capabilities of the IBM SP-2 distributed-memory multiprocessor and the SGI PowerCHALLENGEarray symmetric multiprocessor. Two benchmark problems, bitonic sorting and the fast Fourier transform (FFT), are selected for the experiments. Communication-efficient algorithms are developed to exploit the overlapping capabilities of the machines. Programs are written with the Message-Passing Interface for portability, and identical codes are used on both machines. Various data sizes and message sizes are used to test the machines' communication capabilities. Experimental results indicate that the communication performance of the multiprocessors is consistent with the size of messages. The SP-2 is sensitive to message size but yields much higher communication overlap because of its communication co-processor. The PowerCHALLENGEarray is not highly sensitive to message size and yields low communication overlap. Bitonic sorting yields lower performance than FFT due to its smaller computation-to-communication ratio.
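For reference, a compact sequential Python rendering of the bitonic-sort benchmark kernel (input length assumed to be a power of two); in the parallel MPI setting studied above, each compare-exchange stage of the merge corresponds to a round of message exchanges between processors.

```python
def bitonic_sort(a, ascending=True):
    """Sequential bitonic sort; len(a) must be a power of two.
    In a parallel implementation, each compare-exchange stage becomes
    a round of message exchanges between processors."""
    n = len(a)
    if n <= 1:
        return a
    half = n // 2
    first = bitonic_sort(a[:half], True)     # build a bitonic sequence:
    second = bitonic_sort(a[half:], False)   # ascending then descending half
    return bitonic_merge(first + second, ascending)

def bitonic_merge(a, ascending):
    n = len(a)
    if n <= 1:
        return a
    half = n // 2
    for i in range(half):                    # compare-exchange across halves
        if (a[i] > a[i + half]) == ascending:
            a[i], a[i + half] = a[i + half], a[i]
    return bitonic_merge(a[:half], ascending) + bitonic_merge(a[half:], ascending)

print(bitonic_sort([7, 3, 6, 2, 8, 5, 1, 4]))  # [1, 2, 3, 4, 5, 6, 7, 8]
```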
An implementation of a tree code on a SIMD, parallel computer
NASA Technical Reports Server (NTRS)
Olson, Kevin M.; Dorband, John E.
1994-01-01
We describe a fast tree algorithm for gravitational N-body simulation on SIMD parallel computers. The tree construction uses fast, parallel sorts. The sorted lists are recursively divided along their x, y and z coordinates. This data structure is a completely balanced tree (i.e., each particle is paired with exactly one other particle) and maintains good spatial locality. An implementation of this tree-building algorithm on a 16k-processor Maspar MP-1 performs well and constitutes only a small fraction (approximately 15%) of the entire cycle of finding the accelerations. Each node in the tree is treated as a monopole. The tree search and the summation of accelerations also perform well. During the tree search, node data that is needed from another processor is simply fetched. Roughly 55% of the tree search time is spent in communications between processors. We apply the code to two problems of astrophysical interest. The first is a simulation of the close passage of two gravitationally interacting disk galaxies using 65,536 particles. We also simulate the formation of structure in an expanding model universe using 1,048,576 particles. Our code attains speeds comparable to one head of a Cray Y-MP, so single instruction, multiple data (SIMD) type computers can be used for these simulations. The cost/performance ratio of SIMD machines like the Maspar MP-1 makes them an extremely attractive alternative to either vector processors or large multiple instruction, multiple data (MIMD) type parallel computers. With further optimizations (e.g., more careful load balancing), speeds in excess of today's vector processing computers should be possible.
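The tree construction can be sketched as a sequential Python analogue (illustrative only; on the Maspar the sorts run in parallel, and a full tree code would also store monopole moments, i.e. total mass and center of mass, at internal nodes).

```python
def build_tree(particles, axis=0):
    """Recursively build a balanced tree: sort particles along one
    coordinate, split the list in half, and recurse, cycling through
    x, y, z. Leaves hold single particles; a real tree code would also
    attach monopole moments to internal nodes."""
    if len(particles) == 1:
        return {'leaf': particles[0]}
    particles = sorted(particles, key=lambda p: p[axis])  # a parallel sort on SIMD hardware
    mid = len(particles) // 2
    nxt = (axis + 1) % 3
    return {'left': build_tree(particles[:mid], nxt),
            'right': build_tree(particles[mid:], nxt)}

pts = [(0.1, 0.9, 0.3), (0.8, 0.2, 0.7), (0.4, 0.5, 0.6), (0.9, 0.1, 0.2)]
tree = build_tree(pts)  # completely balanced for power-of-two particle counts
```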
Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.
2016-02-02
This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
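The non-dominated sorting step at the heart of the center selection can be illustrated with a generic two-objective Pareto filter; this sketch is not the authors' code, and all names are illustrative.

```python
def pareto_front(points):
    """Return indices of non-dominated points for two objectives:
    minimize f (expensive function value) and maximize d (minimum
    distance to previously evaluated points). Point i is dominated if
    another point is no worse in both objectives and better in one."""
    front = []
    for i, (fi, di) in enumerate(points):
        dominated = any(
            (fj <= fi and dj >= di) and (fj < fi or dj > di)
            for j, (fj, dj) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# (f, d) pairs: low f is good, high d is good
pts = [(1.0, 0.1), (0.5, 0.05), (0.8, 0.3), (0.9, 0.2)]
print(pareto_front(pts))  # [1, 2] -- candidates for the P centers
```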
Mobile collaborative medical display system.
Park, Sanghun; Kim, Wontae; Ihm, Insung
2008-03-01
Because of recent advances in wireless communication technologies, the world of mobile computing is flourishing with a variety of applications. In this study, we present an integrated architecture for a personal digital assistant (PDA)-based mobile medical display system that supports collaborative work between remote users. We aim to develop a system that enables users in different regions to share a working environment for collaborative visualization with the potential for exploring huge medical datasets. Our system consists of three major components: mobile client, gateway, and parallel rendering server. The mobile client serves as a front end and enables users to choose the visualization and control parameters interactively and cooperatively. The gateway handles requests and responses between mobile clients and the rendering server for efficient communication. Through the gateway, it is possible to share working environments between users, allowing them to work together in computer supported cooperative work (CSCW) mode. Finally, the parallel rendering server is responsible for performing the heavy visualization tasks. Our experience indicates that some features currently available to our mobile clients for collaborative scientific visualization are limited due to the poor performance of mobile devices and the low bandwidth of wireless connections. However, as the capabilities of mobile devices and wireless networks continue to grow, we believe our methodology will be effective in building responsive and useful mobile collaborative medical systems in the near future.
pWeb: A High-Performance, Parallel-Computing Framework for Web-Browser-Based Medical Simulation.
Halic, Tansel; Ahn, Woojin; De, Suvranu
2014-01-01
This work presents pWeb, a new language and compiler for parallelization of client-side compute-intensive web applications such as surgical simulations. The recently introduced HTML5 standard has enabled unprecedented applications on the web. The low performance of the web browser relative to native applications, however, remains the bottleneck for computationally intensive tasks, including visualization of complex scenes, real-time physical simulation and image processing. The proposed language is built upon web workers for multithreaded programming in HTML5. It provides the fundamental functionality of parallel programming languages, as well as the fork/join parallel model, which web workers do not support. The compiler automatically generates an equivalent parallel script that complies with the HTML5 standard. A case study on realistic rendering for surgical simulations demonstrates enhanced performance with a compact set of instructions.
Generation of multiple Bessel beams for a biophotonics workstation.
Cizmár, T; Kollárová, V; Tsampoula, X; Gunn-Moore, F; Sibbett, W; Bouchal, Z; Dholakia, K
2008-09-01
We present a simple method using an axicon and spatial light modulator to create multiple parallel Bessel beams and precisely control their individual positions in three dimensions. This technique is tested as an alternative to classical holographic beam shaping commonly used now in optical tweezers. Various applications of precise control of multiple Bessel beams are demonstrated within a single microscope giving rise to new methods for three-dimensional positional control of trapped particles or active sorting of micro-objects as well as "focus-free" photoporation of living cells. Overall this concept is termed a 'biophotonics workstation' where users may readily trap, sort and porate material using Bessel light modes in a microscope.
Pinched flow fractionation of microbubbles for ultrasound contrast agent enrichment
NASA Astrophysics Data System (ADS)
Versluis, Michel; Kok, Maarten; Segers, Tim
2014-11-01
An ultrasound contrast agent (UCA) suspension contains a wide size distribution of encapsulated microbubbles (typically 1-10 μm in diameter) that resonate to the driving ultrasound field by the intrinsic relationship between bubble size and ultrasound frequency. Medical transducers, however, operate in a narrow frequency range, which severely limits the number of bubbles that contribute to the echo signal. Thus, the sensitivity can be improved by narrowing down the size distribution of the bubble suspension. Here, we present a novel, low-cost, lab-on-a-chip method for the sorting of contrast microbubbles by size, based on a microfluidic separation technique known as pinched flow fractionation (PFF). We show by experimental and numerical investigation that the inclusion of particle rotation is essential for an accurate physical description of the sorting behavior of the larger bubbles. Successful sorting of a bubble suspension with a narrow size distribution (3.0 +/- 0.6 μm) has been achieved with a PFF microdevice. This sorting technique can be easily parallelized, and may lead to a significant improvement in the sensitivity of contrast-enhanced medical ultrasound. This work is supported by NanoNextNL, a micro and nanotechnology consortium of the Government of the Netherlands and 130 partners.
On the suitability of the connection machine for direct particle simulation
NASA Technical Reports Server (NTRS)
Dagum, Leonard
1990-01-01
The algorithmic structure of the vectorizable Stanford particle simulation (SPS) method was examined and reformulated in data parallel form. Some of the SPS algorithms translate directly to data parallel form, but several of the vectorizable algorithms have no direct data parallel equivalent, which required the development of new, strictly data parallel algorithms. In particular, a new sorting algorithm was developed to identify collision candidates in the simulation, and a master/slave algorithm was developed to minimize communication cost in large table lookups. Validation of the method was undertaken through test calculations of thermal relaxation of a gas, shock wave profiles, and shock reflection from a stationary wall. A qualitative measure is provided of the performance of the Connection Machine for direct particle simulation. The massively parallel architecture of the Connection Machine is found quite suitable for this type of calculation. However, there are difficulties in taking full advantage of this architecture because of the lack of a broad-based tradition of data parallel programming. An important outcome of this work has been new data parallel algorithms specifically of use for direct particle simulation, but which also expand the data parallel repertoire.
High-Throughput, Motility-Based Sorter for Microswimmers and Gene Discovery Platform
NASA Astrophysics Data System (ADS)
Yuan, Jinzhou; Raizen, David; Bau, Haim
2015-11-01
Animal motility varies with genotype, disease progression, aging, and environmental conditions. In many studies, it is desirable to carry out high throughput motility-based sorting to isolate rare animals for, among other things, forward genetic screens to identify genetic pathways that regulate phenotypes of interest. Many commonly used screening processes are labor-intensive, lack sensitivity, and require extensive investigator training. Here, we describe a sensitive, high throughput, automated, motility-based method for sorting nematodes. Our method was implemented in a simple microfluidic device capable of sorting many thousands of animals per hour per module, and is amenable to parallelism. The device successfully enriched for known C. elegans motility mutants. Furthermore, using this device, we isolated low-abundance mutants capable of suppressing the somnogenic effects of the flp-13 gene, which regulates sleep-like quiescence in C. elegans. Subsequent genomic sequencing led to the identification of a flp-13-suppressor gene. This research was supported, in part, by NIH NIA Grant 5R03AG042690-02.
Communication library for run-time visualization of distributed, asynchronous data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rowlan, J.; Wightman, B.T.
1994-04-01
In this paper we present a method for collecting and visualizing data generated by a parallel computational simulation during run time. Data distributed across multiple processes is sent across parallel communication lines to a remote workstation, which sorts and queues the data for visualization. We have implemented our method in a set of tools called PORTAL (for Parallel aRchitecture data-TrAnsfer Library). The tools comprise generic routines for sending data from a parallel program (callable from either C or FORTRAN), a semi-parallel communication scheme currently built upon Unix Sockets, and a real-time connection to the scientific visualization program AVS. Our method is most valuable when used to examine large datasets that can be efficiently generated and do not need to be stored on disk. The PORTAL source libraries, detailed documentation, and a working example can be obtained by anonymous ftp from info.mcs.anl.gov (file portal.tar.Z in the directory pub/portal).
Data Storage & Management: Backing up to the Future
ERIC Educational Resources Information Center
Briggs, Linda L.
2006-01-01
"I saved my presentation in my personal drive on the server last night, but now I can't find it. It just seems to be gone. Can you get it back?" "It looks like the mail server is corrupted. When was the last backup?" These sorts of questions, whether from faculty, students, or IT staff, can be an IT nightmare, or they can set…
Spiral Transformation for High-Resolution and Efficient Sorting of Optical Vortex Modes.
Wen, Yuanhui; Chremmos, Ioannis; Chen, Yujie; Zhu, Jiangbo; Zhang, Yanfeng; Yu, Siyuan
2018-05-11
Mode sorting is an essential function for optical multiplexing systems that exploit the orthogonality of the orbital angular momentum mode space. The familiar log-polar optical transformation provides a simple yet efficient approach whose resolution is, however, restricted by a considerable overlap between adjacent modes resulting from the limited excursion of the phase along a complete circle around the optical vortex axis. We propose and experimentally verify a new optical transformation that maps spirals (instead of concentric circles) to parallel lines. As the phase excursion along a spiral in the wave front of an optical vortex is theoretically unlimited, this new optical transformation can separate orbital angular momentum modes with superior resolution while maintaining unity efficiency.
Spiral Transformation for High-Resolution and Efficient Sorting of Optical Vortex Modes
NASA Astrophysics Data System (ADS)
Wen, Yuanhui; Chremmos, Ioannis; Chen, Yujie; Zhu, Jiangbo; Zhang, Yanfeng; Yu, Siyuan
2018-05-01
Mode sorting is an essential function for optical multiplexing systems that exploit the orthogonality of the orbital angular momentum mode space. The familiar log-polar optical transformation provides a simple yet efficient approach whose resolution is, however, restricted by a considerable overlap between adjacent modes resulting from the limited excursion of the phase along a complete circle around the optical vortex axis. We propose and experimentally verify a new optical transformation that maps spirals (instead of concentric circles) to parallel lines. As the phase excursion along a spiral in the wave front of an optical vortex is theoretically unlimited, this new optical transformation can separate orbital angular momentum modes with superior resolution while maintaining unity efficiency.
Performance Evaluation in Network-Based Parallel Computing
NASA Technical Reports Server (NTRS)
Dezhgosha, Kamyar
1996-01-01
Network-based parallel computing is emerging as a cost-effective alternative for solving many problems that require the use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing network of Sun SPARC workstations with PVM (Parallel Virtual Machine), a software system for linking clusters of machines. Second, a set of three basic applications was selected: a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed (response) time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes is in many cases the factor restricting performance. That is, coarse-grain parallelism, which requires less frequent communication between processes, results in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps), which will allow us to extend our study to newer applications, performance metrics, and configurations.
DeVries, T J; Gill, R M
2012-05-01
This study was designed to determine the effect of adding a molasses-based liquid feed (LF) supplement to a total mixed ration (TMR) on the feed sorting behavior and production of dairy cows. Twelve lactating Holstein cows (88.2±19.5 DIM) were exposed, in a crossover design with 21-d periods, to each of 2 treatment diets: 1) control TMR and 2) control TMR with 4.1% dietary dry matter LF added. Dry matter intake (DMI), sorting, and milk yield were recorded for the last 7 d of each treatment period. Milk samples were collected for composition analysis for the last 3 d of each treatment period; these data were used to calculate 4% fat-corrected milk and energy-corrected milk yield. Sorting was determined by subjecting fresh feed and orts samples to particle separation and expressing the actual intake of each particle fraction as a percentage of the predicted intake of that fraction. Addition of LF did not noticeably change the nutrient composition of the ration, with the exception of an expected increase in dietary sugar concentration (from 4.0 to 5.4%). Liquid feed supplementation affected the particle size distribution of the ration, resulting in a lesser amount of short and a greater amount of fine particles. Cows sorted against the longest ration particles on both treatment diets; the extent of this sorting was greater on the control diet (55.0 vs. 68.8%). Dry matter intake was 1.4 kg/d higher when cows were fed the LF diet as compared with the control diet, resulting in higher acid-detergent fiber, neutral-detergent fiber, and sugar intakes. As a result of the increased DMI, cows tended to produce 1.9 kg/d more milk and produced 3.1 and 3.2 kg/d more 4% fat-corrected milk and energy-corrected milk, respectively, on the LF diet. As a result, cows tended to produce more milk fat (0.13 kg/d) and produced more milk protein (0.09 kg/d) on the LF diet. No difference between treatments was observed in the efficiency of milk production. Overall, adding a molasses-based LF to TMR can be used to decrease feed sorting, enhance DMI, and improve milk yield. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Teachers' Cultural Maps: Asia as a "Tricky Sort of Subject Matter" in Curriculum Inquiry
ERIC Educational Resources Information Center
Salter, Peta
2014-01-01
The refocussing of Australia-Asia relations is manifest in a combination of national policy moves in Australia. Parallel shifts have been made in Europe, the United States, Canada and New Zealand. In Australia, the curricular response to this shift has become known as "Asia literacy." This study is drawn from a wider project that…
Particle-in-cell simulations on graphic processing units
NASA Astrophysics Data System (ADS)
Ren, C.; Zhou, X.; Li, J.; Huang, M. C.; Zhao, Y.
2014-10-01
We will show our recent progress in using GPUs to accelerate the PIC code OSIRIS [Fonseca et al., LNCS 2331, 342 (2002)]. The OSIRIS parallel structure is retained and the computation-intensive kernels are shipped to GPUs. Algorithms for the kernels are adapted for the GPU, including high-order charge-conserving current deposition schemes with little branching, and parallel particle sorting [Kong et al., JCP 230, 1676 (2011)]. These algorithms make efficient use of the GPU shared memory. This work was supported by U.S. Department of Energy under Grant No. DE-FC02-04ER54789 and by NSF under Grant No. PHY-1314734.
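Parallel particle sorting in a PIC code amounts to binning particles by cell index so that particles in the same cell become contiguous in memory. A sequential numpy analogue of the idea (names are illustrative; the GPU versions cited above use parallel counting or radix sorts):

```python
import numpy as np

def sort_particles_by_cell(x, y, nx, ny, dx, dy):
    """Counting-sort analogue of GPU particle sorting in a PIC code:
    bin particles by flattened cell index so that particles sharing a
    cell are contiguous, improving locality during current deposition."""
    ix = np.clip((x / dx).astype(int), 0, nx - 1)
    iy = np.clip((y / dy).astype(int), 0, ny - 1)
    cell = iy * nx + ix                       # flattened cell index
    order = np.argsort(cell, kind='stable')   # GPU codes use parallel counting/radix sorts
    return x[order], y[order], cell[order]

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1.0, 1000), rng.uniform(0, 1.0, 1000)
xs, ys, cells = sort_particles_by_cell(x, y, nx=16, ny=16, dx=1/16, dy=1/16)
```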
Parallel alignment of bacteria using near-field optical force array for cell sorting
NASA Astrophysics Data System (ADS)
Zhao, H. T.; Zhang, Y.; Chin, L. K.; Yap, P. H.; Wang, K.; Ser, W.; Liu, A. Q.
2017-08-01
This paper presents a near-field approach to aligning multiple rod-shaped bacteria based on the interference pattern in silicon nano-waveguide arrays. Bacteria in the optical field are first trapped by the gradient force and then rotated by the scattering force into the equilibrium position. In the experiment, a Shigella bacterium is rotated by 90 deg and aligned to the horizontal direction in 9.4 s. Meanwhile, 150 Shigella are trapped on the surface in 5 min and 86% are aligned with angle < 5 deg. This method is a promising toolbox for research on parallel single-cell biophysical characterization, cell-cell interaction, etc.
High Performance GPU-Based Fourier Volume Rendering.
Abdellah, Marwan; Eldeib, Ayman; Sharawi, Amr
2015-01-01
Fourier volume rendering (FVR) is a significant visualization technique that has been used widely in digital radiography. As a result of its O(N^2 log N) time complexity, it provides a faster alternative to spatial-domain volume rendering algorithms, which are O(N^3) computationally complex. Relying on the Fourier projection-slice theorem, this technique operates on the spectral representation of a 3D volume instead of processing its spatial representation to generate attenuation-only projections that look like X-ray radiographs. Due to the rapid evolution of its underlying architecture, the graphics processing unit (GPU) has become an attractive platform that delivers enormous raw computational power compared to the central processing unit (CPU) on a per-dollar basis. The introduction of the compute unified device architecture (CUDA) technology enables embarrassingly parallel algorithms to run efficiently on CUDA-capable GPU architectures. In this work, a high-performance GPU-accelerated implementation of the FVR pipeline on CUDA-enabled GPUs is presented. The proposed implementation achieves a speed-up of 117x over a single-threaded hybrid CPU-GPU implementation by executing the rendering pipeline entirely on recent GPU architectures.
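The Fourier projection-slice theorem that underpins FVR is easy to verify numerically. A small numpy demonstration (not the paper's CUDA implementation): the central slice of a volume's 3D spectrum, inverse-transformed in 2D, equals the line-integral (X-ray-like) projection along the remaining axis.

```python
import numpy as np

vol = np.zeros((32, 32, 32))
vol[8:24, 10:20, 12:18] = 1.0            # a simple box phantom

# Direct spatial-domain projection along z: O(N^3) per projection.
proj_spatial = vol.sum(axis=2)

# Fourier route: the central kz = 0 slice of the 3D spectrum, inverted
# in 2D -- O(N^2 log N) per projection once the 3D FFT is precomputed.
spectrum = np.fft.fftn(vol)
central_slice = spectrum[:, :, 0]        # projection-slice theorem
proj_fourier = np.fft.ifft2(central_slice).real

print(np.allclose(proj_spatial, proj_fourier))  # True
```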
Scalable isosurface visualization of massive datasets on commodity off-the-shelf clusters
Bajaj, Chandrajit
2009-01-01
Tomographic imaging and computer simulations are increasingly yielding massive datasets. Interactive and exploratory visualization has rapidly become an indispensable tool for studying large volumetric imaging and simulation data. Our scalable isosurface visualization framework on commodity off-the-shelf clusters is an end-to-end parallel and progressive platform, from initial data access to final display. Interactive browsing of extracted isosurfaces is made possible by parallel isosurface extraction and rendering in conjunction with a new specialized piece of image-compositing hardware called the Metabuffer. In this paper, we focus on back-end scalability by introducing a fully parallel and out-of-core isosurface extraction algorithm. It achieves scalability by using both parallel and out-of-core processing and parallel disks. It statically partitions the volume data across parallel disks with a balanced workload spectrum, and builds I/O-optimal external interval trees to minimize the number of I/O operations needed to load large data from disk. We also describe an isosurface compression scheme that is efficient for progressive extraction, transmission and storage of isosurfaces. PMID:19756231
NASA Astrophysics Data System (ADS)
Flemming, Burghard W.
2017-08-01
This study investigates the effect of particle shape on the transport and deposition of mixed siliciclastic-bioclastic sediments in the lower mesotidal Langebaan Lagoon along the South Atlantic coast of South Africa. As the two sediment components have undergone mutual sorting for the last 7 ka, they can be expected to have reached a highest possible degree of hydraulic equivalence. A comparison of sieve and settling tube data shows that, with progressive coarsening of the size fractions, the mean diameters of individual sediment components increasingly depart from the spherical quartz standard, the experimental data demonstrating the hydraulic incompatibility of the sieve data. Overall, the spatial distribution patterns of textural parameters (mean settling diameter, sorting and skewness) of the siliciclastic and bioclastic sediment components are very similar. Bivariate plots between them reveal linear trends when averaged over small intervals. A systematic deviation is observed in sorting, the trend ranging from uniformity at poorer sorting levels to a progressively increasing lag of the bioclastic component relative to the siliciclastic one as overall sorting improves. The deviation amounts to 0.8 relative sorting units at the optimal sorting level. The small textural differences between the two components are considered to reflect the influence of particle shape, which prevents the bioclastic fraction from achieving complete textural equivalence with the siliciclastic one. This is also reflected in the inferred transport behaviour of the two shape components, the bioclastic fraction moving closer to the bed than the siliciclastic one because of the higher drag experienced by low shape factor particles. As a consequence, the bed-phase development of bioclastic sediments departs significantly from that of siliciclastic sediments. Systematic flume experiments, however, are currently still lacking.
Query-Driven Visualization and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruebel, Oliver; Bethel, E. Wes; Prabhat, Mr.
2012-11-01
This report focuses on an approach to high performance visualization and analysis, termed query-driven visualization and analysis (QDV). QDV aims to reduce the amount of data that needs to be processed by the visualization, analysis, and rendering pipelines. The goal of the data reduction process is to separate out data that is "scientifically interesting" and to focus visualization, analysis, and rendering on that interesting subset. The premise is that for any given visualization or analysis task, the data subset of interest is much smaller than the larger, complete data set. This strategy, extracting smaller data subsets of interest and focusing the visualization processing on these subsets, is complementary to the approach of increasing the capacity of the visualization, analysis, and rendering pipelines through parallelism. This report discusses the fundamental concepts in QDV, their relationship to different stages in the visualization and analysis pipelines, and presents QDV's application to problems in diverse areas, ranging from forensic cybersecurity to high energy physics.
NASA Technical Reports Server (NTRS)
Sanz, J.; Pischel, K.; Hubler, D.
1992-01-01
An application for parallel computation on a combined cluster of powerful workstations and supercomputers was developed. Parallel Virtual Machine (PVM) is used as the message-passing layer in a macro-tasking parallelization of the Aerodynamic Inverse Design and Analysis for a Full Engine computer code. The heterogeneous nature of the cluster is handled transparently by the controlling host machine. Communication is established via Ethernet with the TCP/IP protocol over an open network. Internode communication imposes only a reasonable overhead, yielding efficient utilization of the engaged processors. Perhaps the most interesting feature of the system is its versatility, which permits use of whichever available computational resources are least loaded at a given time.
DataForge: Modular platform for data storage and analysis
NASA Astrophysics Data System (ADS)
Nozik, Alexander
2018-04-01
DataForge is a framework for automated data acquisition, storage and analysis that builds on modern practice in applied programming. The aim of DataForge is to automate standard tasks such as parallel data processing, logging, output sorting and distributed computing. The framework also makes extensive use of declarative programming principles via a metadata concept, which allows a certain degree of metaprogramming and improves the reproducibility of results.
Villagómez-Ornelas, Paloma; Hernández-López, Pedro; Carrasco-Enríquez, Brenda; Barrios-Sánchez, Karina; Pérez-Escamilla, Rafael; Melgar-Quiñónez, Hugo
2014-01-01
This article validates the statistical consistency of two food security scales: the Mexican Food Security Scale (EMSA) and the Latin American and Caribbean Food Security Scale (ELCSA). Validity tests were conducted in order to verify that both scales are consistent instruments composed of independent, properly calibrated and adequately sorted items arranged along a continuum of severity. The following tests were performed: sorting of items; Cronbach's alpha analysis; parallelism of prevalence curves; Rasch models; and sensitivity analysis through hypothesis tests on mean differences. The tests showed that both scales meet the required attributes and are robust statistical instruments for measuring food security. This is relevant given that the lack-of-access-to-food indicator included in multidimensional poverty measurement in Mexico is calculated with the EMSA.
Matching nuts and bolts in O(n log n) time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Komlos, J.; Ma, Yuan; Szemeredi, E.
Given a set of n nuts of distinct widths and a set of n bolts such that each nut corresponds to a unique bolt of the same width, how should we match every nut with its corresponding bolt by comparing nuts with bolts (no comparison is allowed between two nuts or between two bolts)? The problem can be naturally viewed as a variant of the classic sorting problem as follows. Given two lists of n numbers each, such that one list is a permutation of the other, how should we sort the lists by comparisons only between numbers in different lists? We give an O(n log n)-time deterministic algorithm for the problem. This is optimal up to a constant factor and answers an open question posed by Alon, Blum, Fiat, Kannan, Naor, and Ostrovsky. Moreover, when copies of nuts and bolts are allowed, our algorithm runs in optimal O(log n) time on n processors in Valiant's parallel comparison tree model. Our algorithm is based on the AKS sorting algorithm with substantial modifications.
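The deterministic O(n log n) algorithm of the abstract is intricate, but the comparison model is conveyed well by the classic randomized quicksort-style variant, which matches in expected O(n log n) time. A sketch of that simpler variant (not the authors' algorithm):

```python
import random

def match(nuts, bolts):
    """Classic randomized quicksort-style matching, expected O(n log n).
    Only nut-vs-bolt comparisons are used: partition bolts by a random
    nut's width, find that nut's bolt, then partition nuts by that bolt."""
    if not nuts:
        return []
    pivot_nut = random.choice(nuts)
    smaller_b = [b for b in bolts if b < pivot_nut]
    larger_b = [b for b in bolts if b > pivot_nut]
    pivot_bolt = next(b for b in bolts if b == pivot_nut)  # the matching bolt
    smaller_n = [n for n in nuts if n < pivot_bolt]
    larger_n = [n for n in nuts if n > pivot_bolt]
    return (match(smaller_n, smaller_b)
            + [(pivot_nut, pivot_bolt)]
            + match(larger_n, larger_b))

nuts = [5, 1, 4, 2, 3]
bolts = [3, 5, 2, 4, 1]
print(match(nuts, bolts))  # [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)]
```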
Prüss, Harald; Grosse, Gisela; Brunk, Irene; Veh, Rüdiger W; Ahnert-Hilger, Gudrun
2010-03-01
The development of the hippocampal network requires neuronal activity, which is shaped by the differential expression and sorting of a variety of potassium channels. Parallel to their maturation, hippocampal neurons undergo a distinct development of their ion channel profile. The age-dependent dimension of ion channel occurrence is of utmost importance as it is interdependently linked to network formation. However, data regarding the exact temporal expression of potassium channels during postnatal hippocampal development are scarce. We therefore studied the expression of several voltage-gated potassium channel proteins during hippocampal development in vivo and in primary cultures, focusing on channels that were sorted to the axonal compartment. The Kv1.1, Kv1.2, Kv1.4, and Kv3.4 proteins showed a considerable temporal variation of axonal localization among neuronal subpopulations. It is possible, therefore, that hippocampal neurons possess cell type-specific mechanisms for channel compartmentalization. Thus, age-dependent axonal sorting of the potassium channel proteins offers a new approach to functionally distinguish classes of hippocampal neurons and may extend our understanding of hippocampal circuitry and memory processing.
Birchler, Axel; Berger, Mischa; Jäggin, Verena; Lopes, Telma; Etzrodt, Martin; Misun, Patrick Mark; Pena-Francesch, Maria; Schroeder, Timm; Hierlemann, Andreas; Frey, Olivier
2016-01-19
Open microfluidic cell culturing devices offer new possibilities to simplify loading, culturing, and harvesting of individual cells or microtissues due to the fact that liquids and cells/microtissues are directly accessible. We present a complete workflow for microfluidic handling and culturing of individual cells and microtissue spheroids, which is based on the hanging-drop network concept: The open microfluidic devices are seamlessly combined with fluorescence-activated cell sorting (FACS), so that individual cells, including stem cells, can be directly sorted into specified culturing compartments in a fully automated way and at high accuracy. Moreover, already assembled microtissue spheroids can be loaded into the microfluidic structures by using a conventional pipet. Cell and microtissue culturing is then performed in hanging drops under controlled perfusion. On-chip drop size control measures were applied to stabilize the system. Cells and microtissue spheroids can be retrieved from the chip by using a parallelized transfer method. The presented methodology holds great promise for combinatorial screening of stem-cell and multicellular-spheroid cultures.
NASA Astrophysics Data System (ADS)
Bose, S.; Singh, R.; Hollatz, M. H.; Lee, C.-H.; Karp, J.; Karnik, R.
2012-02-01
Cell sorting serves an important role in clinical diagnosis and biological research. Most of the existing microscale sorting techniques are either non-specific to antigen type or rely on capturing cells, making sample recovery difficult. We demonstrate a simple yet effective technique for isolating cells in an antigen-specific manner by using transient interactions of cell-surface antigens with an asymmetric receptor-patterned surface. Using microfluidic devices incorporating P-selectin patterns, we demonstrate separation of HL60 cells from K562 cells. We achieved a sorting purity above 90% and efficiency greater than 85% with this system. We also present a mathematical model incorporating flow-mediated and adhesion-mediated transport of cells in the microchannel that can be used to predict the performance of these devices. Lastly, we demonstrate the clinical significance of the method through single-step separation of neutrophils from whole blood. When whole blood is introduced into the device, the granulocyte population is separated exclusively, yielding neutrophils of high purity (<10% RBC contamination). To our knowledge, this is the first demonstration of continuous, label-free sorting of neutrophils from whole blood. We believe this technology will be useful in developing point-of-care diagnostic devices and also for a host of cell sorting applications.
Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.
Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe
2017-09-01
Visual neuroprostheses are still limited, and simulated prosthetic vision (SPV) is used to evaluate the potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirements on visual neuroprosthesis characteristics to restore various functions such as reading, object and face recognition, object grasping, etc. Some of these studies focused on obstacle avoidance, but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current electrode arrays is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low-resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when the visual input was processed with various computer vision algorithms to enhance the rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array size, according to the average brightness of the pixels. In the first rendering strategy, the viewing distance was limited to 3, 6, or 9 m. In the second strategy, the rendering was based not on the brightness of the image pixels, but on the distance between the user and the elements in the field of view. In the last rendering strategy, only the edges of the environment were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved the cognitive mapping of the unknown environment. These results show that low-resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate information about the environment. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.
2017-07-01
The FY17Q3 milestone of the ECP/VTK-m project includes the completion of a VTK-m filter that computes normal vectors for surfaces. Normal vectors point perpendicular to the surface and are an important input when rendering the surface. The implementation includes the parallel algorithm itself, a filter module to simplify integrating it into other software, and documentation in the VTK-m Users' Guide. With the completion of this milestone, we are able to provide rendering systems with the information necessary for appropriate shading of surfaces. This milestone also feeds into subsequent milestones that progressively improve the approximation of surface direction.
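The normal computation itself is conceptually simple. A numpy sketch of per-face normals for a triangle mesh (illustrative only, not the VTK-m implementation, which also averages face normals per point):

```python
import numpy as np

def surface_normals(vertices, triangles):
    """Per-face normals for a triangle mesh: the normalized cross product
    of two edge vectors points perpendicular to each face."""
    v = np.asarray(vertices, dtype=float)
    t = np.asarray(triangles)
    e1 = v[t[:, 1]] - v[t[:, 0]]   # first edge of each triangle
    e2 = v[t[:, 2]] - v[t[:, 0]]   # second edge of each triangle
    n = np.cross(e1, e2)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 1, 2), (0, 1, 3)]
print(surface_normals(verts, tris))  # e.g. (0, 0, 1) for the xy-plane triangle
```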
Alphabetical Order Effects in School Admissions
ERIC Educational Resources Information Center
Jurajda, Štepán; Münich, Daniel
2016-01-01
If school admission committees use alphabetically sorted lists of applicants in their evaluations, one's position in the alphabet according to last name initial may be important in determining access to selective schools. Jurajda and Münich (2010) "Admission to Selective Schools, Alphabetically". "Economics of Education…
Boström, Jan; Elger, Christian E.; Mormann, Florian
2016-01-01
Recording extracellularly from neurons in the brains of animals in vivo is among the most established experimental techniques in neuroscience, and has recently become feasible in humans. Many interesting scientific questions can be addressed only when extracellular recordings last several hours, and when individual neurons are tracked throughout the entire recording. Such questions regard, for example, neuronal mechanisms of learning and memory consolidation, and the generation of epileptic seizures. Several difficulties have so far limited the use of extracellular multi-hour recordings in neuroscience: datasets become huge, and data are necessarily noisy in clinical recording environments. No methods for spike sorting of such recordings have been available. Spike sorting refers to the process of identifying the contributions of several neurons to the signal recorded on one electrode. To overcome these difficulties, we developed Combinato: a complete data-analysis framework for spike sorting in noisy recordings lasting twelve hours or more. Our framework includes software for artifact rejection, automatic spike sorting, manual optimization, and efficient visualization of results. Our completely automatic framework excels at two tasks: it outperforms existing methods when tested on simulated and real data, and it enables researchers to analyze multi-hour recordings. We evaluated our methods on both short and multi-hour simulated datasets. To evaluate the performance of our methods in an actual neuroscientific experiment, we used data from neurosurgical patients, recorded in order to identify visually responsive neurons in the medial temporal lobe. These neurons responded to the semantic content, rather than to the visual features, of a given stimulus. To test our methods with multi-hour recordings, we made use of neurons in the human medial temporal lobe that respond selectively to the same stimulus in the evening and the next morning. PMID:27930664
Following Up: Huygens Data, Questions Will Guide Cassini on Its Future Titan Passes
NASA Technical Reports Server (NTRS)
Morring, Frank, Jr.; Taverna, Michael A.
2005-01-01
Planetary scientists plan to use the instruments on NASA's Cassini Saturn orbiter in future flybys of Titan to answer questions raised by Europe's Huygens probe. In particular, they hope to discover whether the dark, flat areas on the moon's surface are liquid or quasi-liquid methane seas. As the scientists plotted their next moves, European Space Agency (ESA) officials continued their investigation into how a critical command in the Huygens descent sequence was omitted costing the imaging team half of its data and rendering another Huygens instrument useless. There were suggestions that U.S. export-control rules may have hampered the sort of close international cooperation that might have caught the error.
Low-Level Graphics Cues to Solicit Image Interpretation
NASA Astrophysics Data System (ADS)
McAnulty, Michael A.; Gemmill, Jill P.; Kegley, Kathleen A.; Chiu, Haw-Tsang
1984-08-01
Several straightforward techniques for displaying arbitrary solids of the sort encountered in the life sciences are presented, all variations of simple three-dimensional scatter plots. They are all targeted at a medium-cost raster display (an AED-512 was used here). Practically any host computer may be used to implement them. All techniques are broadly applicable and were implemented as Master's degree projects. The major hardware constraint is data transmission speed, and this is met by minimizing the amount of graphical data, ignoring enhancement of the data, and using terminal scan-conversion and aspect firmware wherever possible. Three simple rendering techniques and the use of several graphics cues are described.
Gooding, Thomas Michael [Rochester, MN]
2011-04-19
An analytical mechanism for a massively parallel computer system automatically analyzes data retrieved from the system, and identifies nodes which exhibit anomalous behavior in comparison to their immediate neighbors. Preferably, anomalous behavior is determined by comparing call-return stack tracebacks for each node, grouping like nodes together, and identifying neighboring nodes which do not themselves belong to the group. A node, not itself in the group, having a large number of neighbors in the group, is a likely locality of error. The analyzer preferably presents this information to the user by sorting the neighbors according to number of adjoining members of the group.
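A toy sequential version of the grouping logic, with an invented data layout (a traceback string per node and a neighbor list); the actual system operates on call-return stacks harvested from many thousands of nodes.

```python
from collections import defaultdict

def suspect_nodes(tracebacks, neighbors):
    """Group nodes with identical call-return stack tracebacks, then, for
    each node *not* in a group, count its neighbors that are in the group.
    Nodes with many in-group neighbors are likely localities of error."""
    groups = defaultdict(set)
    for node, tb in tracebacks.items():
        groups[tb].add(node)
    suspects = []
    for group in groups.values():
        for node in tracebacks:
            if node not in group:
                in_group = sum(1 for nb in neighbors[node] if nb in group)
                if in_group:
                    suspects.append((in_group, node))
    return sorted(suspects, reverse=True)  # most in-group neighbors first

tb = {0: 'A', 1: 'A', 2: 'A', 3: 'B'}           # node 3 hangs elsewhere
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(suspect_nodes(tb, nbrs)[0])               # (2, 3): node 3 adjoins group 'A' twice
```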
A GaAs vector processor based on parallel RISC microprocessors
NASA Astrophysics Data System (ADS)
Misko, Tim A.; Rasset, Terry L.
A vector processor architecture based on the development of a 32-bit microprocessor using gallium arsenide (GaAs) technology has been developed. The McDonnell Douglas vector processor (MVP) will be fabricated completely from GaAs digital integrated circuits. The MVP architecture includes a vector memory of 1 megabyte, a parallel bus architecture with eight processing elements connected in parallel, and a control processor. The processing elements consist of a reduced instruction set CPU (RISC) with four floating-point coprocessor units and necessary memory interface functions. This architecture has been simulated for several benchmark programs including complex fast Fourier transform (FFT), complex inner product, trigonometric functions, and sort-merge routine. The results of this study indicate that the MVP can process a 1024-point complex FFT at a speed of 112 microsec (389 megaflops) while consuming approximately 618 W of power in a volume of approximately 0.1 ft-cubed.
Systems of power, axes of inequity: parallels, intersections, braiding the strands.
Jones, Camara P
2014-10-01
This commentary builds on work examining the impacts of racism on health to identify parallels and intersections with regard to able-ism and health. The "Cliff Analogy" framework for distinguishing between five levels of health intervention is used to sort the Healthy People 2020 goals on Disability and Health along an array from medical care to addressing the social determinants of equity. Parallels between racism and able-ism as systems of power, similarities and differences between "race" and disability status as axes of inequity, intersections of "race" and disability status in individuals and in communities, and the promise of convergent strength between the anti-racism community and the disability rights community are highlighted. With health equity defined as assurance of the conditions for optimal health for all people, it is noted that achieving health equity requires valuing all individuals and populations equally, recognizing and rectifying historical injustices, and providing resources according to need.
CudaChain: an alternative algorithm for finding 2D convex hulls on the GPU.
Mei, Gang
2016-01-01
This paper presents an alternative GPU-accelerated convex hull algorithm and a novel Sorting-based Preprocessing Approach (SPA) for planar point sets. The proposed convex hull algorithm, termed CudaChain, consists of two stages: (1) two rounds of preprocessing performed on the GPU and (2) finalization of the expected convex hull on the CPU. Interior points lying inside a quadrilateral formed by four extreme points are first discarded, and the remaining points are then distributed into several (typically four) subregions. Each subset of points is first sorted in parallel; the second round of discarding is then performed using SPA; and finally a simple chain is formed from the remaining points. A simple polygon can be generated by directly connecting all the chains of the subregions. The expected convex hull of the input points is finally obtained by computing the convex hull of this simple polygon. The library Thrust is used for the parallel sorting, reduction, and partitioning, for efficiency and simplicity. Experimental results show that (1) SPA very effectively detects and discards interior points, and (2) CudaChain achieves 5×-6× speedups over the well-known Qhull implementation for 20M points.
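The first discarding round can be sketched in plain sequential Python (illustrative only; the GPU version performs the equivalent tests with Thrust primitives, and the four extreme points are assumed distinct here).

```python
def cross(o, a, b):
    """Positive if o -> a -> b turns counter-clockwise."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def discard_interior(points):
    """First preprocessing round of a CudaChain-style pipeline: drop all
    points strictly inside the quadrilateral spanned by the four extreme
    points (min/max x, min/max y); such points cannot lie on the hull."""
    quad = [min(points),                        # leftmost
            min(points, key=lambda p: p[1]),    # bottommost
            max(points),                        # rightmost
            max(points, key=lambda p: p[1])]    # topmost (ccw order)
    def inside(p):
        return all(cross(quad[i], quad[(i+1) % 4], p) > 0 for i in range(4))
    return [p for p in points if not inside(p)]

pts = [(0, 0), (5, -1), (6, 4), (1, 5), (2, 2), (3, 1)]
print(discard_interior(pts))  # (2, 2) and (3, 1) are culled
```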
Parallel, stochastic measurement of molecular surface area.
Juba, Derek; Varshney, Amitabh
2008-08-01
Biochemists often wish to compute surface areas of proteins. A variety of algorithms have been developed for this task, but they are designed for traditional single-processor architectures. The current trend in computer hardware is towards increasingly parallel architectures for which these algorithms are not well suited. We describe a parallel, stochastic algorithm for molecular surface area computation that maps well to the emerging multi-core architectures. Our algorithm is also progressive, providing a rough estimate of surface area immediately and refining this estimate as time goes on. Furthermore, the algorithm generates points on the molecular surface which can be used for point-based rendering. We demonstrate a GPU implementation of our algorithm and show that it compares favorably with several existing molecular surface computation programs, giving fast estimates of the molecular surface area with good accuracy.
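The flavor of the approach can be conveyed by a sequential numpy sketch for a union of spheres (illustrative only; the paper's GPU implementation parallelizes the sampling and also retains the surface points for point-based rendering).

```python
import numpy as np

def surface_area(centers, radii, samples_per_atom=2000, seed=0):
    """Progressive stochastic estimate of a union-of-spheres surface area:
    sample directions on each sphere and keep the fraction of surface
    points not buried inside any other sphere. Accuracy grows with the
    sample count, so a rough estimate is available almost immediately."""
    rng = np.random.default_rng(seed)
    centers = np.asarray(centers, dtype=float)
    radii = np.asarray(radii, dtype=float)
    total = 0.0
    for i, (c, r) in enumerate(zip(centers, radii)):
        d = rng.normal(size=(samples_per_atom, 3))
        d /= np.linalg.norm(d, axis=1, keepdims=True)    # uniform directions
        pts = c + r * d                                  # points on sphere i
        others = np.delete(np.arange(len(radii)), i)
        dist = np.linalg.norm(pts[:, None, :] - centers[others], axis=2)
        exposed = np.all(dist >= radii[others], axis=1)  # not buried anywhere
        total += 4.0 * np.pi * r**2 * exposed.mean()
    return total

# Two overlapping unit spheres at distance 1; exact union area is 6*pi ~ 18.85.
print(surface_area([(0, 0, 0), (1.0, 0, 0)], [1.0, 1.0]))
```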
Parallel approach on sorting of genes in search of optimal solution.
Kumar, Pranav; Sahoo, G
2018-05-01
An important tool in comparative genome analysis is the rearrangement event that can transform one given genome into another. For finding a minimum sequence of fissions and fusions, we propose an algorithm and show a worked example transforming a source genome into a target genome. The proposed algorithm operates on a circular sequence, i.e., a "cycle graph", in place of a mapping, and its main idea rests on optimal properties of the permutation. The sorting steps are performed in constant running time by representing the permutation as cycles. In biological instances it has been observed that transpositions occur at about half the frequency of reversals. In this paper we do not deal with reversals, but instead address rearrangement by fission and fusion as well as transposition. Copyright © 2017 Elsevier Inc. All rights reserved.
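The cycle-graph machinery of the paper is involved, but plain permutation cycle decomposition, which underlies such representations, is short; a sketch (0-indexed, illustrative only):

```python
def cycles(perm):
    """Decompose a permutation of 0..n-1 into disjoint cycles; rearrangement
    bounds are typically expressed via the number and sizes of such cycles."""
    seen, out = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        cyc, x = [], start
        while x not in seen:
            seen.add(x)
            cyc.append(x)
            x = perm[x]
        out.append(cyc)
    return out

print(cycles([2, 0, 1, 4, 3]))  # [[0, 2, 1], [3, 4]]
```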
NASA Astrophysics Data System (ADS)
Hayano, Akira; Ishii, Eiichi
2016-10-01
This study investigates the mechanical relationship between bedding-parallel and bedding-oblique faults in a Neogene massive siliceous mudstone at the site of the Horonobe Underground Research Laboratory (URL) in Hokkaido, Japan, on the basis of observations of drill-core recovered from pilot boreholes and fracture mapping on shaft and gallery walls. Four bedding-parallel faults with visible fault gouge, named respectively the MM Fault, the Last MM Fault, the S1 Fault, and the S2 Fault (stratigraphically, from the highest to the lowest), were observed in two pilot boreholes (PB-V01 and SAB-1). The distribution of the bedding-parallel faults at 350 m depth in the Horonobe URL indicates that these faults are spread over at least several tens of meters in parallel along a bedding plane. The observation that the bedding-oblique fault displaces the Last MM fault is consistent with the previous interpretation that the bedding- oblique faults formed after the bedding-parallel faults. In addition, the bedding-parallel faults terminate near the MM and S1 faults, indicating that the bedding-parallel faults with visible fault gouge act to terminate the propagation of younger bedding-oblique faults. In particular, the MM and S1 faults, which have a relatively thick fault gouge, appear to have had a stronger control on the propagation of bedding-oblique faults than did the Last MM fault, which has a relatively thin fault gouge.
NASA Astrophysics Data System (ADS)
Carter, Rachel; Huhman, Brett; Love, Corey T.; Zenyuk, Iryna V.
2018-03-01
X-ray computed tomography (X-ray CT) across multiple length scales is utilized for the first time to investigate the physical abuse of high C-rate pulsed discharge on cells wired individually and in parallel. Manufactured lithium iron phosphate cells boasting high rate capability were pulse-power tested in both wiring configurations with high discharge currents of 10C for a large number of cycles (up to 1200) until end of life (<80% of initial discharge capacity retained). The parallel assembly reached end of life more rapidly, for reasons unknown prior to the CT investigations. The investigation revealed evidence of overdischarge in the most degraded cell from the parallel assembly, compared to more traditional failure in the individually wired cell. The parallel-wired cell exhibited dissolution of copper from the anode current collector and subsequent deposition throughout the separator near the cathode of the cell. This overdischarge-induced copper deposition, notably impossible to confirm with other state-of-health (SOH) monitoring methods, is diagnosed using CT by rendering the interior current collector without harm or alteration to the active materials. Correlation of the CT observations with the electrochemical pulse data from the parallel-wired cells reveals the risk of parallel wiring during high C-rate pulse discharge.
NASA Astrophysics Data System (ADS)
Loveley, M. R.; Marcantonio, F.; Lyle, M. W.; Wang, J. K.
2013-12-01
In this study, we attempt to understand how preferential sorting of fine particles during redistribution processes in the Panama Basin affects the 230Th constant-flux proxy. Fine particles likely contain greater amounts of 230Th, so that preferential sorting of fine particles may bias sediment mass accumulation rates (MARs). We examined sediments that span the past 25 kyr from two new sediment cores retrieved within about 56 km of each other in the northern part of the basin (MV1013-01-'4JC', 5° 44.699'N 85° 45.498' W, 1730 m depth; MV1014-01-'8JC', 6° 14.038'N 86° 2.613' W, 1993 m depth). Core 4JC, closer to the ridge top that bounds the basin (Cocos Ridge), has a thin sediment drape, while the deeper core 8JC, has a thicker sediment drape and lies further from the ridge top. 230Th-derived focusing factors from 4JC are similar and suggest winnowing with average values of about 0.5 and 0.6 during the Holocene and the last glacial, respectively. For 8JC, calculated average focusing factors are significantly different and suggest focusing with values of about 2 during the Holocene and 4 during the last glacial. Since the two sites are close to each other, one would expect similar rain rates and, therefore, similar 230Th-derived MARs within similar windows of time, i.e., the rain rate should not vary significantly at each site temporally. In addition, the radiocarbon-derived sand (>63μm) MARs should behave similarly since coarser particles are likely not transported by bottom currents. Sand MARs are, indeed, similar during the Holocene and the last glacial at each site. During the last glacial, however, sand MARs are about a factor of 3 higher than those during the Holocene. On the other hand, there is little variability in the 230Th-derived MARs both spatially and temporally. We interpret the discrepancies between the radiocarbon-derived sand and 230Th-derived MARs as being due to preferential sorting of fine particles during the redistribution of sediments by deep-sea currents. The 230Th-derived focusing factors are being overestimated at the deeper site and vice versa at the shallower site, and the degree of inaccuracy varies temporally. We discuss this temporal variability and its relationship to deep-sea current velocities.
An application of the MPP to the interactive manipulation of stereo images of digital terrain models
NASA Technical Reports Server (NTRS)
Pol, Sanjay; Mcallister, David; Davis, Edward
1987-01-01
Massively Parallel Processor (MPP) algorithms were developed for the interactive manipulation of flat-shaded digital terrain models defined over grids. The emphasis is on real-time manipulation of stereo images. Standard graphics transformations are applied to a 128 x 128 grid of elevations, followed by shading and a perspective projection, to produce the right-eye image. The surface is then rendered using a simple painter's algorithm for hidden-surface removal. The left-eye image is produced by rotating the surface 6 degrees about the viewer's y axis, followed by a perspective projection and rendering of the image as described above. The left- and right-eye images are then presented on a graphics device using standard stereo technology. Performance evaluations and comparisons are presented.
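The stereo-pair construction described here (a right-eye view, then a 6-degree rotation about the viewer's y axis for the left eye, each followed by a perspective projection) can be sketched in a few lines; this is a minimal serial sketch, not the MPP implementation, and the grid placement and focal length are invented:

    import numpy as np

    def perspective_project(pts, focal=2.0):
        """Project 3D points (N,3) onto the image plane; viewer at the origin looking down +z."""
        return np.stack([focal * pts[:, 0] / pts[:, 2],
                         focal * pts[:, 1] / pts[:, 2]], axis=1)

    def stereo_pair(heights, eye_rot_deg=6.0):
        """heights: square elevation grid -> (left, right) projected point sets."""
        n = heights.shape[0]
        xs, zs = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
        pts = np.stack([xs.ravel(), heights.ravel(), zs.ravel() + 3.0], axis=1)  # push grid in front of the eye
        right = perspective_project(pts)
        a = np.radians(eye_rot_deg)            # rotate the surface about the viewer's y axis
        rot = np.array([[np.cos(a), 0.0, np.sin(a)],
                        [0.0,       1.0, 0.0],
                        [-np.sin(a), 0.0, np.cos(a)]])
        left = perspective_project(pts @ rot.T)
        # a full renderer would draw surface patches back-to-front in z (painter's algorithm)
        return left, right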
Volumetric visualization algorithm development for an FPGA-based custom computing machine
NASA Astrophysics Data System (ADS)
Sallinen, Sami J.; Alakuijala, Jyrki; Helminen, Hannu; Laitinen, Joakim
1998-05-01
Rendering volumetric medical images is a burdensome computational task for contemporary computers due to the large size of the data sets. Custom designed reconfigurable hardware could considerably speed up volume visualization if an algorithm suitable for the platform is used. We present an algorithm and speedup techniques for visualizing volumetric medical CT and MR images with a custom-computing machine based on a Field Programmable Gate Array (FPGA). We also present simulated performance results of the proposed algorithm calculated with a software implementation running on a desktop PC. Our algorithm is capable of generating perspective projection renderings of single and multiple isosurfaces with transparency, simulated X-ray images, and Maximum Intensity Projections (MIP). Although more speedup techniques exist for parallel projection than for perspective projection, we have constrained ourselves to perspective viewing, because of its importance in the field of radiotherapy. The algorithm we have developed is based on ray casting, and the rendering is sped up by three different methods: shading speedup by gradient precalculation, a new generalized version of Ray-Acceleration by Distance Coding (RADC), and background ray elimination by speculative ray selection.
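A rough serial sketch of the ray-casting core follows, showing perspective rays and the gradient precalculation the speedup list mentions (here producing a Maximum Intensity Projection, one of the algorithm's output modes); RADC and speculative ray selection are omitted, and all sizes are illustrative:

    import numpy as np

    def precompute_gradients(vol):
        """Central-difference gradients, computed once so per-ray shading is a table lookup."""
        return np.stack(np.gradient(vol.astype(np.float32)), axis=-1)

    def mip_render(vol, width=64, height=64, steps=128, focal=1.5):
        """Perspective Maximum Intensity Projection of a volume occupying [0,1]^3."""
        img = np.zeros((height, width), dtype=np.float32)
        eye = np.array([0.5, 0.5, -1.0])
        n = np.array(vol.shape)
        for j in range(height):
            for i in range(width):
                d = np.array([(i + 0.5) / width - 0.5,
                              (j + 0.5) / height - 0.5, focal])
                d /= np.linalg.norm(d)                       # perspective ray through pixel (i, j)
                t = np.linspace(0.8, 2.8, steps)
                p = eye + t[:, None] * d                     # sample points along the ray
                idx = np.floor(p * (n - 1)).astype(int)
                ok = np.all((idx >= 0) & (idx < n), axis=1)  # keep samples inside the volume
                if ok.any():
                    s = idx[ok]
                    img[j, i] = vol[s[:, 0], s[:, 1], s[:, 2]].max()
        return img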
Parallel object-oriented decision tree system
Kamath, Chandrika (Dublin, CA); Cantu-Paz, Erick (Oakland, CA)
2006-02-28
A data mining decision tree system that uncovers patterns, associations, anomalies, and other statistically significant structures in data by reading and displaying data files, extracting relevant features for each object, and recognizing patterns among the objects based upon object features. The decision tree reads the data, sorts the data if necessary, determines the best manner to split the data into subsets according to some criterion, and splits the data.
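The split search that the abstract compresses into one clause (sort a feature, then scan candidate thresholds for the best split under some criterion) might look like the following sketch, using Gini impurity as a stand-in for the unspecified criterion:

    import numpy as np

    def gini(labels):
        """Gini impurity of a label array."""
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    def best_split(feature, labels):
        """Best binary split of one feature: returns (threshold, weighted impurity)."""
        order = np.argsort(feature)          # 'sorts the data if necessary'
        f, y = feature[order], labels[order]
        best_thr, best_score = None, np.inf
        for i in range(1, len(f)):
            if f[i] == f[i - 1]:
                continue                     # no threshold lies between equal values
            score = (i * gini(y[:i]) + (len(y) - i) * gini(y[i:])) / len(y)
            if score < best_score:
                best_thr, best_score = (f[i] + f[i - 1]) / 2.0, score
        return best_thr, best_score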
Forces on particles in microstreaming flows
NASA Astrophysics Data System (ADS)
Hilgenfeldt, Sascha; Rallabandi, Bhargav; Thameem, Raqeeb
2015-11-01
In various microfluidic applications, vortical steady streaming from ultrasonically driven microbubbles is used in concert with a pressure-driven channel flow to manipulate objects. While a quantitative theory of this boundary-induced streaming is available, little work has been devoted to a fundamental understanding of the forces exerted on microparticles in boundary streaming flows, even though the differential action of such forces is central to applications like size-sensitive sorting. Contrary to other microfluidic sorting devices, the forces in bubble microstreaming act over millisecond times and micron length scales, without the need for accumulated deflections over long distances. Accordingly, we develop a theory of hydrodynamic forces on the fast time scale of bubble oscillation using the lubrication approximation, showing for the first time how particle displacements are rectified near moving boundaries over multiple oscillations in parallel with the generation of the steady streaming flow. The dependence of particle migration on particle size and the flow parameters is compared with experimental data. The theory is applicable to boundary streaming phenomena in general and demonstrates how particles can be sorted very quickly and without compromising device throughput. We acknowledge support by the National Science Foundation under grant number CBET-1236141.
Algorithmic commonalities in the parallel environment
NASA Technical Reports Server (NTRS)
Mcanulty, Michael A.; Wainer, Michael S.
1987-01-01
The ultimate aim of this project was to analyze procedures from substantially different application areas to discover what is either common or peculiar in the process of conversion to the Massively Parallel Processor (MPP). Three areas were identified: molecular dynamic simulation, production systems (rule systems), and various graphics and vision algorithms. To date, only selected graphics procedures have been investigated. They are the most readily available, and produce the most visible results. These include simple polygon patch rendering, raycasting against a constructive solid geometric model, and stochastic or fractal based textured surface algorithms. Only the simplest of conversion strategies, mapping a major loop to the array, has been investigated so far. It is not entirely satisfactory.
Algorithms and programming tools for image processing on the MPP, part 2
NASA Technical Reports Server (NTRS)
Reeves, Anthony P.
1986-01-01
A number of algorithms were developed for image warping and pyramid image filtering. Techniques were investigated for the parallel processing of a large number of independent irregular shaped regions on the MPP. In addition some utilities for dealing with very long vectors and for sorting were developed. Documentation pages for the algorithms which are available for distribution are given. The performance of the MPP for a number of basic data manipulations was determined. From these results it is possible to predict the efficiency of the MPP for a number of algorithms and applications. The Parallel Pascal development system, which is a portable programming environment for the MPP, was improved and better documentation including a tutorial was written. This environment allows programs for the MPP to be developed on any conventional computer system; it consists of a set of system programs and a library of general purpose Parallel Pascal functions. The algorithms were tested on the MPP and a presentation on the development system was made to the MPP users group. The UNIX version of the Parallel Pascal System was distributed to a number of new sites.
Parallel, distributed and GPU computing technologies in single-particle electron microscopy
Schmeisser, Martin; Heisen, Burkhard C.; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger
2009-01-01
Most known methods for the determination of the structure of macromolecular complexes are limited or at least restricted at some point by their computational demands. Recent developments in information technology such as multicore, parallel and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today’s technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined. PMID:19564686
Implementation of a high-speed face recognition system that uses an optical parallel correlator.
Watanabe, Eriko; Kodate, Kashiko
2005-02-10
We implement a fully automatic, fast face recognition system using a 1000 frame/s optical parallel correlator designed and assembled by us. The operational speed for the 1:N (i.e., matching one image against N, where N refers to the number of images in the database) identification experiment (4000 face images) amounts to less than 1.5 s, including the preprocessing and postprocessing times. A binary real-only matched filter is devised for face recognition, and the system is optimized with respect to the false-rejection rate (FRR) and the false-acceptance rate (FAR), according to 300 samples selected per the biometrics guideline. From trial 1:N identification experiments with the optical parallel correlator, we obtained low error rates of 2.6% FRR and 1.3% FAR. Facial images of people wearing thin glasses or heavy makeup that rendered identification difficult were identified with this system.
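The two error rates used for that optimization are simple functions of genuine and impostor match scores at a decision threshold; a minimal sketch (score arrays and threshold are illustrative, not from the paper):

    import numpy as np

    def frr_far(genuine_scores, impostor_scores, threshold):
        """FRR: fraction of genuine pairs rejected; FAR: fraction of impostor
        pairs accepted. Higher correlation score means a better match."""
        frr = float(np.mean(np.asarray(genuine_scores) < threshold))
        far = float(np.mean(np.asarray(impostor_scores) >= threshold))
        return frr, far

    # sweeping `threshold` and plotting the two rates locates the operating point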
Parallel, distributed and GPU computing technologies in single-particle electron microscopy.
Schmeisser, Martin; Heisen, Burkhard C; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger
2009-07-01
Most known methods for the determination of the structure of macromolecular complexes are limited or at least restricted at some point by their computational demands. Recent developments in information technology such as multicore, parallel and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today's technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined.
Selectively transporting small chiral particles with circularly polarized Airy beams.
Lu, Wanli; Chen, Huajin; Guo, Sandong; Liu, Shiyang; Lin, Zhifang
2018-05-01
Based on the full wave simulation, we demonstrate that a circularly polarized vector Airy beam can selectively transport small chiral particles along a curved trajectory via the chirality-tailored optical forces. The transverse optical forces can draw the chiral particles with different particle chirality towards or away from the intensity maxima of the beam, leading to the selective trapping in the transverse plane. The transversely trapped chiral particles are then accelerated along a curved trajectory of the Airy beam by the chirality-tailored longitudinal scattering force, rendering an alternative way to sort and/or transport chiral particles with specified helicity. Finally, the underlying physics of the chirality induced transverse trap and de-trap phenomena are examined by the analytical theory within the dipole approximation.
Cyclic deformation of bidisperse two-dimensional foams
NASA Astrophysics Data System (ADS)
Fátima Vaz, M.; Cox, S. J.; Teixeira, P. I. C.
2011-12-01
In-plane deformation of foams was studied experimentally by subjecting bidisperse foams to cycles of traction and compression at a prescribed rate. Each foam contained bubbles of two sizes with a given area ratio and one of three initial arrangements: sorted perpendicular to the axis of deformation (iso-strain), sorted parallel to the axis of deformation (iso-stress), or randomly mixed. Image analysis was used to measure the characteristics of the foams, including the number of edges separating small from large bubbles N_sl, the perimeter (surface energy), the distribution of the number of sides of the bubbles, and the topological disorder μ2(N). Foams that were initially mixed were found to remain mixed after the deformation. The response of sorted foams, however, depended on the initial geometry, including the area fraction of small bubbles and the total number of bubbles. For a given experiment we found that (i) the perimeter of a sorted foam varied little; (ii) each foam tended towards a mixed state, measured through the saturation of N_sl; and (iii) the topological disorder μ2(N) increased up to an "equilibrium" value. The results of different experiments showed that (i) the change in disorder, Δμ2(N), decreased with the area fraction of small bubbles under iso-strain, but was independent of it under iso-stress; and (ii) Δμ2(N) increased with the area ratio under iso-strain, but was again independent of it under iso-stress. We offer explanations for these effects in terms of elementary topological processes induced by the deformations that occur at the bubble scale.
A real time sorting algorithm to time sort any deterministic time disordered data stream
NASA Astrophysics Data System (ADS)
Saini, J.; Mandal, S.; Chakrabarti, A.; Chattopadhyay, S.
2017-12-01
In new-generation, high-intensity, high-energy physics experiments, millions of free-streaming, high-rate data sources must be read out. Free-streaming data with associated time-stamps can only be controlled by thresholds, as no trigger information is available for the readout. These readouts are therefore prone to collecting large amounts of noise and unwanted data, and such experiments can have an output data rate several orders of magnitude higher than the useful signal data rate. It is therefore necessary to process the data online to extract the useful information from the full data set. Without trigger information, pre-processing of the free-streaming data can only be done through time-based correlation among the data set. Multiple data sources have different path delays and bandwidth utilizations, so the unsorted merged data require significant computational effort for real-time sorting before analysis. The present work reports a new high-speed, scalable data-stream sorting algorithm with its architectural design, verified through Field Programmable Gate Array (FPGA)-based hardware simulation. Realistic time-based simulated data, of the kind likely to be collected in a high-energy physics experiment, have been used to study the performance of the algorithm. The proposed algorithm uses parallel read-write blocks with added memory management and zero-suppression features to make it efficient for high-rate data streams. This algorithm is best suited for online data streams with deterministic time disorder/unsorting on FPGA-like hardware.
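When the maximum time disorder of a stream is deterministic and bounded, a buffer of that size restores order on the fly. The paper's design uses parallel read-write blocks on an FPGA, but the underlying logic can be sketched with a min-heap (the bound and the stream format here are ours):

    import heapq

    def time_sort(stream, max_disorder):
        """Yield (timestamp, payload) items in time order, assuming every item
        arrives within `max_disorder` positions of its time-ordered position."""
        heap = []
        for item in stream:
            heapq.heappush(heap, item)
            if len(heap) > max_disorder:   # the heap minimum can no longer be preceded
                yield heapq.heappop(heap)
        while heap:                        # flush the remaining buffered items
            yield heapq.heappop(heap)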
NASA Astrophysics Data System (ADS)
Cho, Wan-Ho; Ih, Jeong-Guon; Toi, Takeshi
2015-12-01
To render desired characteristics of a sound field, the acoustic actuators in an array must be properly conditioned, but the source condition depends strongly on position. Actuators located at positions inefficient for control would consume too much input power or become too sensitive to disturbing noise. Such actuators can be considered redundant and should be sorted out, as long as their elimination does not significantly damage the overall control performance. It is known that the inverse approach based on the acoustical holography concept, employing the transfer matrix between sources and field points as its core element, is useful for rendering a desired sound field. By investigating the information contained in the transfer matrix between actuators and field points, the linear independence of an actuator from the others in the array can be evaluated. To this end, the square of the right singular vector, which represents the radiation contribution from the source, can be used as an indicator. The position least efficient for fulfilling the desired sound field can be identified as the one having the smallest indicator value among all possible actuator positions. The elimination proceeds one by one, or group by group, until the number of remaining actuators meets the preset number. Control examples of exterior and interior spaces are taken for validation. The results reveal that the present method for choosing the least dependent actuators, for a given number of actuators and field condition, is quite effective in realizing the desired sound field with a noisy input condition and in minimizing the required input power.
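One plausible reading of this elimination loop, assuming a transfer matrix G of shape (field points x actuators) and using squared, singular-value-weighted right singular vectors as the contribution indicator; the function and the exact weighting are our sketch, not the authors' formulation:

    import numpy as np

    def prune_actuators(G, keep):
        """Iteratively drop the least contributing actuator (column of G) until
        `keep` actuator positions remain; returns the surviving column indices."""
        active = list(range(G.shape[1]))
        while len(active) > keep:
            _, s, Vt = np.linalg.svd(G[:, active], full_matrices=False)
            contrib = ((s[:, None] * Vt) ** 2).sum(axis=0)  # squared, weighted right singular vectors
            active.pop(int(np.argmin(contrib)))             # least independent position goes first
        return active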
Modeling Nanocomposites for Molecular Dynamics (MD) Simulations
2015-01-01
The Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is used as the MD simulator [9], so the coordinates must be formatted for use in LAMMPS. VMD has a set of tools (TopoTools...) that can be used to generate a LAMMPS-readable format [6]. Figure 4: Ethylene monomer produced from coordinates in PDB and rendered using... where i and j are the atom subscripts. Simulations are performed using LAMMPS simulation software. Periodic boundary conditions are
UWGSP4: an imaging and graphics superworkstation and its medical applications
NASA Astrophysics Data System (ADS)
Jong, Jing-Ming; Park, Hyun Wook; Eo, Kilsu; Kim, Min-Hwan; Zhang, Peng; Kim, Yongmin
1992-05-01
UWGSP4 is configured with a parallel architecture for image processing and a pipelined architecture for computer graphics. The system's peak performance is 1,280 MFLOPS for image processing and over 200,000 Gouraud-shaded 3-D polygons per second for graphics. The simulated sustained performance is about 50% of the peak performance in general image processing. Most of the 2-D image processing functions are efficiently vectorized and parallelized in UWGSP4. A performance of 770 MFLOPS in convolution and 440 MFLOPS in FFT is achieved. Real-time cine display, up to 32 frames of 1280 x 1024 pixels per second, is supported. In 3-D imaging, the update rate for surface rendering is 10 frames of 20,000 polygons per second; the update rate for volume rendering is 6 frames of 128 x 128 x 128 voxels per second. The system provides 1280 x 1024 x 32-bit double frame buffers and one 1280 x 1024 x 8-bit overlay buffer to support realistic animation, 24-bit true color, and text annotation. A 1280 x 1024-pixel, 66-Hz noninterlaced display screen with 1:1 aspect ratio can be windowed into the frame buffer for the display of any portion of the processed image or graphics.
Stochastic Model of Clogging in a Microfluidic Cell Sorter
NASA Astrophysics Data System (ADS)
Fai, Thomas; Rycroft, Chris
2016-11-01
Microfluidic devices for sorting cells by deformability show promise for various medical purposes, e.g., detecting sickle cell anemia and circulating tumor cells. One class of such devices consists of a two-dimensional array of narrow channels, each column containing several identical channels in parallel. Cells are driven through the device by an applied pressure or flow rate. Such devices allow many cells to be sorted simultaneously, but cells eventually clog individual channels and change the device properties in an unpredictable manner. In this talk, we propose a stochastic model for the failure of such microfluidic devices by clogging and present preliminary theoretical and computational results. The model can be recast as an ODE that exhibits finite-time blow-up under certain conditions. The failure time distribution is investigated analytically in certain limiting cases, and more realistic versions of the model are solved by computer simulation.
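The clogging model itself is not given in the abstract, but finite-time blow-up of an ODE is easy to illustrate on a generic example: du/dt = u² blows up at t = 1/u(0), and a naive integrator sees the solution diverge as that time is approached. This is purely illustrative, not the authors' equations:

    def integrate_blowup(u0=1.0, dt=1e-4, t_max=1.2):
        """Forward-Euler integration of du/dt = u^2; the exact solution blows up
        at t = 1/u0, and the numerical solution diverges near that time."""
        u, t = u0, 0.0
        while t < t_max and u < 1e12:
            u += dt * u * u
            t += dt
        return t

    print(integrate_blowup())  # ~1.0 for u0 = 1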
Real-Time Model and Simulation Architecture for Half- and Full-Bridge Modular Multilevel Converters
NASA Astrophysics Data System (ADS)
Ashourloo, Mojtaba
This work presents an equivalent model and simulation architecture for real-time electromagnetic transient analysis of either half-bridge or full-bridge modular multilevel converters (MMCs) with 400 sub-modules (SMs) per arm. The proposed CPU/FPGA-based architecture is optimized for parallel implementation of the presented MMC model on the FPGA and benefits from a high-throughput floating-point computational engine. The developed real-time simulation architecture is capable of simulating MMCs with 400 SMs per arm at a time step of 825 nanoseconds. To address the difficulties of implementing the sorting process, a modified odd-even bubble sort is presented in this work. Comparison of the results under various test scenarios reveals that the proposed real-time simulator reproduces the system responses of its corresponding off-line counterpart obtained from the PSCAD/EMTDC program.
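Odd-even (bubble) transposition sorting alternates compare-exchange passes over odd and even neighbor pairs; because the pairs within a pass are disjoint, each pass maps onto one fully parallel hardware stage, which is what makes it attractive on an FPGA. The authors' modification is not specified, so this is the textbook version:

    def odd_even_sort(a):
        """Textbook odd-even transposition sort: each pass compares disjoint
        neighbor pairs, so one pass maps onto one parallel hardware stage."""
        a = list(a)
        n = len(a)
        for phase in range(n):
            for i in range(phase % 2, n - 1, 2):   # disjoint pairs: parallelizable
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
        return a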
Label-free cell separation and sorting in microfluidic systems
Gossett, Daniel R.; Weaver, Westbrook M.; Mach, Albert J.; Hur, Soojung Claire; Tse, Henry Tat Kwong; Lee, Wonhee; Amini, Hamed
2010-01-01
Cell separation and sorting are essential steps in cell biology research and in many diagnostic and therapeutic methods. Recently, there has been interest in methods which avoid the use of biochemical labels; numerous intrinsic biomarkers have been explored to identify cells, including size, electrical polarizability, and hydrodynamic properties. This review highlights microfluidic techniques used for label-free discrimination and fractionation of cell populations. Microfluidic systems have been adopted to precisely handle single cells and interface with other tools for biochemical analysis. We analyzed many of these techniques, detailing their mode of separation, while concentrating on recent developments and evaluating their prospects for application. Furthermore, this was done from a perspective where inertial effects are considered important, and general performance metrics were proposed which would ease comparison of reported technologies. Lastly, we assess the current state of these technologies and suggest directions which may make them more accessible. Figure caption: A wide range of microfluidic technologies have been developed to separate and sort cells by taking advantage of differences in their intrinsic biophysical properties. PMID:20419490
Periyakoil, Vyjeyanthi S; Noda, Arthur M; Kraemer, Helena Chmura
2010-05-01
Preserving patient dignity is a sentinel premise of palliative care. This study was conducted to gain a better understanding of factors influencing preservation of dignity in the last chapter of life. We conducted an open-ended written survey of 100 multidisciplinary providers (69% response rate) and responses were categorized to identify 2 main themes, 5 subthemes, and 10 individual factors that were used to create the preservation of dignity card-sort tool (p-DCT). The 10-item rank order tool was administered to a cohort of community dwelling Filipino Americans (n = 140, age mean = 61.3, 45% male and 55% female). A Spearman correlation matrix was constructed for all the 10 individual factors as well as the themes and subthemes based on the data generated by the subjects. The individual factors were minimally correlated with each other indicating that each factor was an independent stand-alone factor. The median, 25th and 75th percentile ranks were calculated and "s/he has self-respect" (intrinsic theme, self-esteem subtheme) emerged as the most important factor (mean rank 3.0 and median rank 2.0) followed by "others treat her/him with respect" (extrinsic theme, respect subtheme) with a mean rank = 3.6 and median = 3.0. The p-DCT is a simple, rank order card-sort tool that may help clinicians identify patients' perceptions of key factors influencing the preservation of their dignity in the last chapter of life.
NASA Astrophysics Data System (ADS)
Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin
2018-03-01
Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including depth perception. However, producing traditional computer-generated holograms (CGHs) often takes a long computation time, without offering complex, photorealistic rendering. The backward ray-tracing technique is able to render photorealistic, high-quality images and noticeably reduces the computation time thanks to its high degree of parallelism. Here, a high-efficiency photorealistic computer-generated hologram method based on the ray-tracing technique is presented. Rays are launched and traced in parallel under different illuminations and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with a traditional point-cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100 × 100 rays with continuous depth change.
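For contrast, the traditional point-cloud CGH used as the baseline accumulates, at every hologram pixel, the spherical wave from each object point, which is why its cost grows as pixels x points; a minimal sketch with invented wavelength, pixel pitch, and phase-only encoding:

    import numpy as np

    def point_cloud_hologram(points, amps, res=512, pitch=8e-6, wavelength=532e-9):
        """Accumulate spherical waves from 3D object points on a res x res
        hologram plane at z = 0 and return a phase-only pattern."""
        k = 2.0 * np.pi / wavelength
        c = (np.arange(res) - res / 2) * pitch
        X, Y = np.meshgrid(c, c)
        H = np.zeros((res, res), dtype=np.complex128)
        for (x, y, z), a in zip(points, amps):      # O(pixels x points): the bottleneck
            r = np.sqrt((X - x) ** 2 + (Y - y) ** 2 + z ** 2)
            H += a * np.exp(1j * k * r) / r
        return np.angle(H)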
A heterogeneous computing environment for simulating astrophysical fluid flows
NASA Technical Reports Server (NTRS)
Cazes, J.
1994-01-01
In the Concurrent Computing Laboratory in the Department of Physics and Astronomy at Louisiana State University we have constructed a heterogeneous computing environment that permits us to routinely simulate complicated three-dimensional fluid flows and to readily visualize the results of each simulation via three-dimensional animation sequences. An 8192-node MasPar MP-1 computer with 0.5 GBytes of RAM provides 250 MFlops of execution speed for our fluid flow simulations. Utilizing the parallel virtual machine (PVM) language, at periodic intervals data is automatically transferred from the MP-1 to a cluster of workstations where individual three-dimensional images are rendered for inclusion in a single animation sequence. Work is underway to replace executions on the MP-1 with simulations performed on the 512-node CM-5 at NCSA and to simultaneously gain access to more potent volume rendering workstations.
Experimental investigation of gravity effects on sediment sorting on Mars
NASA Astrophysics Data System (ADS)
Kuhn, Nikolaus J.; Kuhn, Brigitte; Gartmann, Andres
2014-05-01
Sorting of sedimentary rocks is a proxy for the environmental conditions at the time of deposition, in particular the runoff that moved and deposited the material forming the rocks. Settling of sediment is strongly influenced by the gravity of a planetary body. As a consequence, the sorting of a sedimentary rock varies with gravity for a given depth and velocity of surface runoff. Theoretical considerations for spheres indicate that sorting is more uniform on Mars than on Earth for runoff of identical depth. In reality, such considerations have to be applied with great caution because the shape of a particle strongly influences drag. Drag itself can only be calculated directly for an irregularly shaped particle with great computational effort, if at all. Therefore, even for terrestrial applications, sediment settling velocities are often determined directly, e.g., by measurements using settling tubes. In this study, the results of settling tube tests conducted under reduced gravity during three experimental flights in November 2012 and 2013 are presented. Nine types of sediment, ranging in size, shape, and density, were tested in custom-designed settling tubes during parabolas of Martian gravity lasting 20 to 25 seconds. Based on the observed settling velocities, the applicability of empirical relationships developed on Earth to assess particle settling on Mars is discussed. In addition, the potential effects of reduced gravity on the sorting of sedimentary rocks and their use as a proxy for runoff, and thus environmental conditions, on Mars are examined.
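For reference, the standard settling relations make the gravity dependence explicit: in the viscous (Stokes) regime the settling velocity of fine grains scales linearly with g, so Martian gravity (g ≈ 3.71 m s⁻², about 0.38 of Earth's) reduces it proportionally, while for coarse grains the turbulent drag balance gives only a √g scaling. Here ρ_p and ρ_f are the particle and fluid densities, d the grain diameter, μ the dynamic viscosity, and C_D the shape-dependent drag coefficient, which is exactly the quantity the settling-tube measurements constrain:

    \[
    w_{\mathrm{Stokes}} = \frac{(\rho_p - \rho_f)\,g\,d^{2}}{18\,\mu},
    \qquad
    w_{\mathrm{turb}} = \sqrt{\frac{4\,(\rho_p - \rho_f)\,g\,d}{3\,\rho_f\,C_D}}.
    \]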
Sorting cells of the microalga Chlorococcum littorale with increased triacylglycerol productivity.
Cabanelas, Iago Teles Dominguez; van der Zwart, Mathijs; Kleinegris, Dorinde M M; Wijffels, René H; Barbosa, Maria J
2016-01-01
Despite extensive research in the last decades, microalgae are still only economically feasible for high-value markets. Strain improvement is a strategy to increase productivities, hence reducing costs. In this work, we focus on microalgae selection: taking advantage of the natural biological variability of species to select variants based on desired characteristics. We focused on triacylglycerols (TAGs), which have applications ranging from biodiesel to high-value omega-3 fatty acids. Hence, we demonstrated a strategy to sort microalgae cells with increased TAG productivity. 1. We successfully identified sub-populations of cells with increased TAG productivity using fluorescence-assisted cell sorting (FACS). 2. We sequentially sorted cells after repeated cycles of N-starvation, resulting in five sorted populations (S1-S5). 3. The comparison between sorted and original populations showed that S5 had the highest TAG productivity [0.34 against 0.18 g l(-1) day(-1) (original), continuous light]. 4. Original and S5 were compared in lab-scale reactors under simulated summer conditions, confirming the increased TAG productivity of S5 (0.4 against 0.2 g l(-1) day(-1)). Biomass composition analyses showed that S5 produced more biomass under N-starvation because of an increase only in TAG content, and flow cytometry showed that our selection removed cells with lower efficiency in producing TAGs. All combined, our results present a successful strategy to improve the TAG productivity of Chlorococcum littorale without resorting to genetic manipulation or random mutagenesis. Additionally, the improved TAG productivity of S5 was confirmed under simulated summer conditions, highlighting the industrial potential of S5 for microalgal TAG production.
ERIC Educational Resources Information Center
O'Neill, Arthur; Speechley, Bob
2011-01-01
The authors want to figure out what happened in Australian post-secondary education over the last 50 or so years and to predict what sort of arrangement their great-grandchildren, and great-great-grandchildren, will encounter 50 years hence. To put this modest project another way: what in 2060 might a historian (assuming there are, then,…
NASA Astrophysics Data System (ADS)
Mergeay, J.; De Meester, L.; Verschuren, D.
2009-04-01
To assess the influence of long-term temporal processes on community assembly, we reconstructed the community changes of two dominant components of freshwater food webs, planktonic Daphnia water fleas and benthic chironomid midge larvae, in a fluctuating tropical lake through eight cycles of major lake-level fluctuation spanning 1800 years. Our results show a highly unpredictable pattern of community assembly in Daphnia, akin to neutrality but largely dictated by long-lasting priority effects. These priority effects were likely caused by rapid population growth of resident species during lake refilling from a standing stock in a deep crater refuge, thereby pre-empting niche space for new immigrants. In contrast, chironomid larvae showed a more classical species-sorting response to long-term environmental change, with a more limited contribution of stochastic temporal processes. Overall, our study emphasizes the importance of temporal processes and niche pre-emption in metacommunity ecology, and suggests an important role for mass effects in time. It also emphasizes the value of paleoecological research in improving our understanding of ecological processes in natural ecosystems.
Particle-in-cell simulations with charge-conserving current deposition on graphic processing units
NASA Astrophysics Data System (ADS)
Ren, Chuang; Kong, Xianglong; Huang, Michael; Decyk, Viktor; Mori, Warren
2011-10-01
Recently, using CUDA, we have developed an electromagnetic particle-in-cell (PIC) code with charge-conserving current deposition for Nvidia graphics processing units (GPUs) (Kong et al., Journal of Computational Physics 230, 1676 (2011)). On a Tesla M2050 (Fermi) card, the GPU PIC code can achieve a one-particle-step process time of 1.2 - 3.2 ns in 2D and 2.3 - 7.2 ns in 3D, depending on plasma temperatures. In this talk we will discuss novel algorithms for GPU PIC, including a charge-conserving current deposition scheme with little branching and parallel particle sorting. These algorithms make efficient use of the GPU shared memory. We will also discuss how to replace the computation kernels of existing parallel CPU codes while keeping their parallel structures. This work was supported by U.S. Department of Energy under Grant Nos. DE-FG02-06ER54879 and DE-FC02-04ER54789 and by NSF under Grant Nos. PHY-0903797 and CCF-0747324.
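Parallel particle sorting in PIC reorders particles by grid cell so that current deposition touches memory coherently; on a GPU this is typically a counting sort (histogram, prefix sum, scatter), which in a serial sketch with an invented array layout looks like:

    import numpy as np

    def sort_particles_by_cell(pos, nx, ny, dx, dy):
        """Counting sort of particle indices by 2D cell id: histogram, prefix sum, scatter."""
        cid = (pos[:, 1] // dy).astype(int) * nx + (pos[:, 0] // dx).astype(int)
        counts = np.bincount(cid, minlength=nx * ny)
        offsets = np.concatenate(([0], np.cumsum(counts)))  # start index of each cell's block
        perm = np.empty(len(cid), dtype=int)
        cursor = offsets[:-1].copy()
        for i, c in enumerate(cid):        # a GPU version does this scatter with per-cell atomics
            perm[cursor[c]] = i
            cursor[c] += 1
        return perm, offsets               # pos[perm] is cell-ordered; offsets index each cell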
Selection Determinants in College Students' Financial Tools
ERIC Educational Resources Information Center
Huang, Wei-Ting
2016-01-01
Recently, considerable concern has arisen over the complex financial markets, which are inclined to require more individual responsibility. Accordingly, students have to bear more responsibility for their financial management. Nevertheless, in a sluggish economy with high unemployment, the commercial events during the last decade have rendered the…
Long-lasting desynchronization in rat hippocampal slice induced by coordinated reset stimulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tass, P. A.; Barnikol, U. B.; Department of Stereotaxic and Functional Neurosurgery, University of Cologne, D-50931 Cologne
2009-07-15
In computational models it has been shown that appropriate stimulation protocols may reshape the connectivity pattern of neural or oscillator networks with synaptic plasticity in a way that the network learns or unlearns strong synchronization. The underlying mechanism is that a network is shifted from one attractor to another, so that long-lasting stimulation effects are caused which persist after the cessation of stimulation. Here we study long-lasting effects of multisite electrical stimulation in a rat hippocampal slice rendered epileptic by magnesium withdrawal. We show that desynchronizing coordinated reset stimulation causes a long-lasting desynchronization between hippocampal neuronal populations together with a widespread decrease in the amplitude of the epileptiform activity. In contrast, periodic stimulation induces a long-lasting increase in both synchronization and amplitude.
An image-space parallel convolution filtering algorithm based on shadow map
NASA Astrophysics Data System (ADS)
Li, Hua; Yang, Huamin; Zhao, Jianping
2017-07-01
Shadow mapping is commonly used in real-time rendering. In this paper, we present an accurate and efficient method for generating soft shadows from planar area lights. The method first generates a depth map from the light's view and analyzes the depth-discontinuity areas as well as the shadow boundaries. These areas are then described as binary values in a texture map called the binary light-visibility map, and a GPU-based parallel convolution filtering algorithm is applied to smooth out the boundaries with a box filter. Experiments show that our algorithm is an effective shadow-map-based method that produces perceptually accurate soft shadows in real time, with more detail at shadow boundaries than previous work.
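The smoothing step, a box filter run over the binary light-visibility map, is separable into a horizontal and a vertical pass, which is what keeps the parallel GPU convolution cheap; a small numpy sketch with an assumed kernel radius:

    import numpy as np

    def box_filter(img, radius=3):
        """Separable box blur: a horizontal then a vertical pass of width 2*radius+1."""
        kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
        blur_rows = np.apply_along_axis(np.convolve, 1, img, kernel, mode='same')
        return np.apply_along_axis(np.convolve, 0, blur_rows, kernel, mode='same')

    # run over a 0/1 light-visibility map, hard shadow edges become smooth penumbra weights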
Genetic heterogeneity of RPMI-8402, a T-acute lymphoblastic leukemia cell line
STOCZYNSKA-FIDELUS, EWELINA; PIASKOWSKI, SYLWESTER; PAWLOWSKA, ROZA; SZYBKA, MALGORZATA; PECIAK, JOANNA; HULAS-BIGOSZEWSKA, KRYSTYNA; WINIECKA-KLIMEK, MARTA; RIESKE, PIOTR
2016-01-01
Thorough examination of the genetic heterogeneity of cell lines is uncommon. In order to address this issue, the present study analyzed the genetic heterogeneity of RPMI-8402, a T-acute lymphoblastic leukemia (T-ALL) cell line. For this purpose, traditional techniques such as fluorescence in situ hybridization and immunocytochemistry were used, in addition to more advanced techniques, including cell sorting, Sanger sequencing and massively parallel sequencing. The results indicated that the RPMI-8402 cell line consists of several genetically different cell subpopulations. Furthermore, massively parallel sequencing of RPMI-8402 provided insight into the evolution of T-ALL carcinogenesis, since this cell line exhibited the genetic heterogeneity typical of T-ALL. Therefore, the use of cell lines for drug testing in future studies may aid the progress of anticancer drug research. PMID:26870252
Jang, Eun-Pyo; Yang, Heesun
2013-09-01
This work reports on a simple solvothermal synthesis of InP/ZnS core/shell quantum dots (QDs) using a much safer and cheaper phosphorus precursor, tris(dimethylamino)phosphine, than the most popularly chosen tris(trimethylsilyl)phosphine. The band gap of the InP QDs is facilely controlled by varying the solvothermal core growth time (4 vs. 6 h) at a fixed temperature of 150 degrees C, and the subsequent solvothermal ZnS shelling at 220 degrees C for 6 h results in green- and yellow-emitting InP/ZnS QDs with emission quantum yields of 41-42%. The broad size distribution of the as-synthesized InP/ZnS QDs, which appears to be inherent in the current solvothermal approach, is improved by a size-selective sorting procedure, and the emission properties of the resulting size-sorted QD fractions are investigated. To produce white emission for a general lighting source, a blue light-emitting diode (LED) is combined with non-size-sorted green or yellow QDs as wavelength converters. Furthermore, a QD-LED that includes a blend of green and yellow QDs is fabricated to generate a white lighting source with enhanced color-rendering performance, and its electroluminescent properties are characterized in detail.
Introduction to Numerical Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schoonover, Joseph A.
2016-06-14
These are slides for a lecture for the Parallel Computing Summer Research Internship at the National Security Education Center. They give an introduction to numerical methods, in which repetitive algorithms are used to obtain approximate solutions to mathematical problems: sorting, searching, root finding, optimization, interpolation, extrapolation, least-squares regression, eigenvalue problems, ordinary differential equations, and partial differential equations. Many equations are shown. Discretizations allow us to approximate solutions to mathematical models of physical systems using a repetitive algorithm, and they introduce errors that can lead to numerical instabilities if we are not careful.
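As one concrete instance of the slides' theme, a repetitive algorithm trading iterations for accuracy, root finding by bisection halves a bracketing interval until the error bound is below tolerance (the example function is ours):

    def bisect(f, a, b, tol=1e-10):
        """Halve the bracketing interval [a,b] (with f(a) and f(b) of opposite
        sign) until it is shorter than tol; the error is bounded by the width."""
        fa = f(a)
        while b - a > tol:
            m = (a + b) / 2.0
            if fa * f(m) <= 0:
                b = m              # root lies in the left half
            else:
                a, fa = m, f(m)    # root lies in the right half
        return (a + b) / 2.0

    print(bisect(lambda x: x * x - 2.0, 0.0, 2.0))  # ~1.41421356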
VLSI Design, Parallel Computation and Distributed Computing
1991-09-30
Investigators include Daniel Kleitman, Tom Leighton, David Shmoys, Michael Sipser, and Eva Tardos. Reported results include work by Leighton and Plaxton on the construction of a simple c log n-depth circuit (where c < 7.5) that sorts a random permutation with very high probability.
"Tools For Analysis and Visualization of Large Time- Varying CFD Data Sets"
NASA Technical Reports Server (NTRS)
Wilhelms, Jane; vanGelder, Allen
1999-01-01
During the four years of this grant (including the one-year extension), we have explored many aspects of the visualization of large CFD (Computational Fluid Dynamics) datasets. These have included new direct volume rendering approaches, hierarchical methods, volume decimation, error metrics, parallelization, hardware texture mapping, and methods for analyzing and comparing images. First, we implemented an extremely general direct volume rendering approach that can be used to render rectilinear, curvilinear, or tetrahedral grids, including overlapping multiple-zone grids and time-varying grids. Next, we developed techniques for associating the sample data with a k-d tree, a simple hierarchical data model that approximates the samples in the region covered by each node of the tree, together with an error metric for the accuracy of the model. We also explored a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH (Association for Computing Machinery Special Interest Group on Computer Graphics) '96. In our initial implementation, we automatically image the volume from 32 approximately evenly distributed positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation.
Galectin-3 modulates the polarized surface delivery of β1-integrin in epithelial cells.
Hönig, Ellena; Ringer, Karina; Dewes, Jenny; von Mach, Tobias; Kamm, Natalia; Kreitzer, Geri; Jacob, Ralf
2018-05-10
Epithelial cells require a precise intracellular transport and sorting machinery in order to establish and maintain their polarized architecture. This machinery includes beta-galactoside binding galectins for glycoprotein targeting to the apical membrane. Galectin-3 sorts cargo destined for the apical plasma membrane into vesicular carriers. After delivery of cargo to the apical milieu, galectin-3 recycles back into sorting organelles. We analyzed the role of galectin-3 in the polarized distribution of β1-integrin in MDCK cells. Integrins are located primarily at the basolateral domain of epithelial cells. We demonstrate that a minor pool of β1-integrin interacts with galectin-3 at the apical plasma membrane. Knockdown of galectin-3 decreases apical delivery of β1-integrin. This loss is restored by supplementation with recombinant galectin-3 and galectin-3 overexpression. Our data suggest that galectin-3 targets newly synthesized β1-integrin to the apical membrane and promotes apical delivery of β1-integrin internalized from the basolateral membrane. In parallel, galectin-3 knockout results in a reduction in cell proliferation and an impairment in proper cyst development. Our results suggest that galectin-3 modulates the surface distribution of β1-integrin and affects the morphogenesis of polarized cells. © 2018. Published by The Company of Biologists Ltd.
Efficient Helicopter Aerodynamic and Aeroacoustic Predictions on Parallel Computers
NASA Technical Reports Server (NTRS)
Wissink, Andrew M.; Lyrintzis, Anastasios S.; Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak
1996-01-01
This paper presents parallel implementations of two codes used in a combined CFD/Kirchhoff methodology to predict the aerodynamic and aeroacoustic properties of helicopters. The rotorcraft Navier-Stokes code, TURNS, computes the aerodynamic flowfield near the helicopter blades, and the Kirchhoff acoustics code computes the noise in the far field, using the TURNS solution as input. The overall parallel strategy adds MPI message-passing calls to the existing serial codes to allow for communication between processors. As a result, the total code modifications required for parallel execution are relatively small. The biggest bottleneck in running the TURNS code in parallel comes from the LU-SGS algorithm that solves the implicit system of equations. We use a new hybrid domain decomposition implementation of LU-SGS to obtain good parallel performance on the SP-2. TURNS demonstrates excellent parallel speedups for quasi-steady and unsteady three-dimensional calculations of a helicopter blade in forward flight. The execution rate attained by the code on 114 processors is six times faster than the same cases run on one processor of the Cray C-90. The parallel Kirchhoff code also shows excellent parallel speedups and fast execution rates. As a performance demonstration, unsteady acoustic pressures are computed at 1886 far-field observer locations for a sample acoustics problem. The calculation requires over two hundred hours of CPU time on one C-90 processor but takes only a few hours on 80 processors of the SP-2. The resultant far-field acoustic field is analyzed with state-of-the-art audio and video rendering of the propagating acoustic signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thornquist, Heidi K.; Fixel, Deborah A.; Fett, David Brian
The Xyce Parallel Electronic Simulator simulates electronic circuit behavior in DC, AC, HB, MPDE, and transient modes using standard analog (DAE) and/or device (PDE) device models, including several age- and radiation-aware devices. It supports a variety of computing platforms, both serial and parallel. Lastly, it uses a variety of modern solution algorithms, including dynamic parallel load-balancing and iterative solvers.
NASA Astrophysics Data System (ADS)
Le Goff, Alain; Cathala, Thierry; Latger, Jean
2015-10-01
To provide technical assessments of EO/IR flares and self-protection systems for aircraft, DGA Information Superiority resorts to synthetic image generation to model the operational battlefield of an aircraft as viewed by EO/IR threats. For this purpose, it completed the SE-Workbench suite from OKTAL-SE with functionalities to predict a realistic aircraft IR signature and is now integrating the real-time EO/IR rendering engine of SE-Workbench, called SE-FAST-IR. This engine is a set of physics-based software and libraries for preparing and visualizing a 3D scene in the EO/IR domain. It takes advantage of recent advances in GPU computing techniques. Recent evolutions concern mainly the realistic and physical rendering of reflections, the rendering of both radiative and thermal shadows, the use of procedural techniques for managing and rendering very large terrains, the implementation of image-based rendering for dynamic interpolation of static plume signatures, and, for aircraft, the dynamic interpolation of thermal states. The next step is the representation of the spectral, directional, spatial, and temporal signature of flares by Lacroix Defense using OKTAL-SE technology. This representation is prepared from experimental data acquired during windblast tests and high-speed track tests. It is based on particle-system mechanisms to model the different components of a flare. The validation of a flare model will comprise a simulation of real trials and a comparison of simulation outputs to experimental results, concerning the flare signature and, above all, the behavior of the stimulated threat.
Winkelman, Jonathan D; Suarez, Cristian; Hocky, Glen M; Harker, Alyssa J; Morganthaler, Alisha N; Christensen, Jenna R; Voth, Gregory A; Bartles, James R; Kovar, David R
2016-10-24
Cells assemble and maintain functionally distinct actin cytoskeleton networks with various actin filament organizations and dynamics through the coordinated action of different sets of actin-binding proteins. The biochemical and functional properties of diverse actin-binding proteins, both alone and in combination, have been increasingly well studied. Conversely, how different sets of actin-binding proteins properly sort to distinct actin filament networks in the first place is not nearly as well understood. Actin-binding protein sorting is critical for the self-organization of diverse dynamic actin cytoskeleton networks within a common cytoplasm. Using in vitro reconstitution techniques including biomimetic assays and single-molecule multi-color total internal reflection fluorescence microscopy, we discovered that sorting of the prominent actin-bundling proteins fascin and α-actinin to distinct networks is an intrinsic behavior, free of complicated cellular signaling cascades. When mixed, fascin and α-actinin mutually exclude each other by promoting their own recruitment and inhibiting recruitment of the other, resulting in the formation of distinct fascin- or α-actinin-bundled domains. Subdiffraction-resolution light microscopy and negative-staining electron microscopy revealed that fascin domains are densely packed, whereas α-actinin domains consist of widely spaced parallel actin filaments. Importantly, other actin-binding proteins such as fimbrin and espin show high specificity between these two bundle types within the same reaction. Here we directly observe that fascin and α-actinin intrinsically segregate to discrete bundled domains that are specifically recognized by other actin-binding proteins. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lazareva, Maria V.
2010-05-01
In this paper we show that the biologically motivated concept of time-pulse encoding offers a set of advantages (a single methodological basis, universality, simplicity of tuning, learning, and programming, among others) for the creation and design of sensor systems with parallel input-output and processing for 2D-structure hybrid and next-generation neuro-fuzzy neurocomputers. We describe design principles of programmable relational optoelectronic time-pulse-encoded processors based on continuous logic, order logic, and temporal wave processes. We consider a structure that performs analog signal extraction and the sorting of analog and time-pulse-coded variables. We propose an optoelectronic realization of the basic relational order-logic element, which consists of time-pulse-coded photoconverters (pulse-width and pulse-phase modulators) with direct and complementary outputs, a sorting network of logic elements, and programmable commutation blocks. We estimate the technical parameters of devices and processors built on such base elements through simulation and experimental research: optical input signal power 0.2 - 20 uW, processing time 1 - 10 us, supply voltage 1 - 3 V, power consumption 10 - 100 uW, extended functional possibilities, and learning possibilities. We discuss possible rules and principles of learning and programmable tuning to a required function or relational operation, and the realization of hardware blocks for modifications of such processors. We show that sorting machines, neural networks, and hybrid data-processing systems with untraditional numeral systems and picture operands can be created on the basis of such quasi-universal, simple hardware blocks with flexible programmable tuning.
Early Enrollees and Peer Age Effect: First Evidence from INVALSI Data
ERIC Educational Resources Information Center
Ordine, Patrizia; Rose, Giuseppe; Sposato, Daniela
2015-01-01
This paper estimates peer age effect on educational outcomes of Italian pupils attending primary school by exploiting changes in enrollment rules over the last few years. The empirical procedure allows to understand if there is selection in classroom formation, arguing that in the absence of pupils sorting by early age at school entry, it is…
A Crack in the Sorting Machine: What We Should Not Learn from China
ERIC Educational Resources Information Center
Hammond, Bruce G.
2010-01-01
Most nations now administer standardized tests--for adult job seekers and young students alike--but the Chinese remain the world's preeminent practitioners. The nation's national college entrance exam, known as the "Gaokao", lasts for nine hours across two days. The author has seen the intensity of China's work ethic firsthand as…
ERIC Educational Resources Information Center
Schneider, Silke L.; Tieben, Nicole
2011-01-01
The German secondary education system is highly stratified. However, the higher tracks have expanded vastly over the last decades, leading to substantial changes in the distribution of students across the different tracks. Following the German re-unification, the school structure itself has also changed to some degree. Furthermore, several smaller…
Noda, Arthur M.; Chmura Kraemer, Helena
2010-01-01
Abstract Background Preserving patient dignity is a sentinel premise of palliative care. This study was conducted to gain a better understanding of factors influencing preservation of dignity in the last chapter of life. Methods We conducted an open-ended written survey of 100 multidisciplinary providers (69% response rate) and responses were categorized to identify 2 main themes, 5 subthemes, and 10 individual factors that were used to create the preservation of dignity card-sort tool (p-DCT). The 10-item rank order tool was administered to a cohort of community dwelling Filipino Americans (n = 140, age mean = 61.3, 45% male and 55% female). A Spearman correlation matrix was constructed for all the 10 individual factors as well as the themes and subthemes based on the data generated by the subjects. Results The individual factors were minimally correlated with each other indicating that each factor was an independent stand-alone factor. The median, 25th and 75th percentile ranks were calculated and “s/he has self-respect” (intrinsic theme, self-esteem subtheme) emerged as the most important factor (mean rank 3.0 and median rank 2.0) followed by “others treat her/him with respect” (extrinsic theme, respect subtheme) with a mean rank = 3.6 and median = 3.0. Conclusion The p-DCT is a simple, rank order card-sort tool that may help clinicians identify patients' perceptions of key factors influencing the preservation of their dignity in the last chapter of life. PMID:20420549
Declarative language design for interactive visualization.
Heer, Jeffrey; Bostock, Michael
2010-01-01
We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude.
NASA Astrophysics Data System (ADS)
Otake, Y.; Leonard, S.; Reiter, A.; Rajan, P.; Siewerdsen, J. H.; Ishii, M.; Taylor, R. H.; Hager, G. D.
2015-03-01
We present a system for registering the coordinate frame of an endoscope to pre- or intra-operatively acquired CT data, based on optimizing the similarity metric between an endoscopic image and an image predicted via rendering of the CT. Our method is robust and semi-automatic because it takes into account physical constraints, specifically collisions between the endoscope and the anatomy, to initialize and constrain the search. The proposed optimization method is based on a stochastic optimization algorithm that evaluates a large number of similarity metric functions in parallel on a graphics processing unit. Images from a cadaver and a patient were used for evaluation. The registration error was 0.83 mm and 1.97 mm for the cadaver and patient images, respectively. The average registration time for 60 trials was 4.4 seconds. The patient study demonstrated robustness of the proposed algorithm against moderate anatomical deformation.
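The overall loop, render the CT from a candidate endoscope pose, score the rendering against the real endoscopic image, and let a stochastic optimizer evaluate many candidate poses in parallel while a collision test enforces the physical constraint, can be sketched as below; render_ct and collides are hypothetical stand-ins for the paper's components, and a plain Gaussian random search stands in for the unspecified stochastic optimizer:

    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation between two images (a similarity metric)."""
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def register(endo_img, render_ct, collides, pose0, sigma, iters=200, pop=64):
        """Gaussian random search over 6-DoF poses; the collision test prunes
        physically infeasible candidates before any rendering is scored."""
        best_pose, best_score = pose0, -np.inf
        for _ in range(iters):
            cand = best_pose + sigma * np.random.randn(pop, 6)  # a population evaluated per iteration
            for p in cand:
                if collides(p):                                 # physical constraint: skip
                    continue
                s = ncc(endo_img, render_ct(p))
                if s > best_score:
                    best_pose, best_score = p, s
        return best_pose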
Miller-Cushon, E K; DeVries, T J
2017-03-01
Feeding management factors have great potential to influence activity patterns and feeding behavior of dairy cows, which may have implications for performance. The objectives of this study were to assess the effects of feed push-up frequency on the behavioral patterns of dairy cows, and to determine associations between behavior and milk yield and composition. Lactating Holstein dairy cows (n = 28, parity = 1.9 ± 1.1; mean ± SD) were housed in tiestalls, milked twice per day, and offered ad libitum access to water and a total mixed ration (containing, on a dry matter basis: 25% corn silage, 25% grass/alfalfa haylage, 30% high-moisture corn, and 20% protein/mineral supplement), provided twice per day. Cows were divided into 2 groups of 14 (balanced by days in milk, milk production, and parity) and individually exposed to each of 2 treatments in a crossover design with 21-d periods; treatment 1 had infrequent feed push-up (3×/d), whereas treatment 2 had frequent feed push-up (5×/d). During the last 7 d of each period, dry matter intake and milk production were recorded and lying behavior was monitored using electronic data loggers. During the last 2 d of each period, milk samples were collected for analysis of protein and fat content and feed samples of fresh feed and orts were collected for particle size analysis. The particle size separator had 3 screens (19, 8, and 1.18 mm) and a bottom pan, resulting in 4 fractions (long, medium, short, fine). Sorting was calculated as the actual intake of each particle size fraction expressed as a percentage of the predicted intake of that fraction. Feed push-up frequency had no effect on lying time [11.4 ± 0.37 h/d; mean ± standard error (SE)], milk production (40.2 ± 1.28 kg/d) and composition (milk protein: 3.30 ± 0.048%; milk fat: 3.81 ± 0.077%), or feed sorting. Cows sorted against long particles (78.0 ± 2.2%) and for short (102.6 ± 0.6%) and fine (108.4 ± 0.9%) particles. Milk fat content decreased by 0.1 percentage points for every 10% increase in sorting against long particles and was not associated with lying behavior or other cow-level factors. Milk protein content decreased by 0.03 percentage points for every hour decrease in lying time and by 0.04 percentage points for every 10% increase in sorting against long particles. These results suggest that sorting against long ration particles may negatively affect milk composition. Additionally, we did not find that altering feed push-up frequency affected feed sorting or cow standing and lying patterns. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
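The sorting measure defined here is plain arithmetic: actual intake of a particle-size fraction expressed as a percentage of its predicted (pro rata) intake, with values below 100% indicating sorting against the fraction and values above 100% preferential intake; a quick worked check with invented numbers:

    def sorting_index(actual_intake, predicted_intake):
        """Actual intake of a particle-size fraction as % of its predicted intake."""
        return 100.0 * actual_intake / predicted_intake

    # e.g. a cow eats 1.56 kg of long particles when 2.0 kg was on offer pro rata:
    print(sorting_index(1.56, 2.0))  # 78.0 -> sorting against long particles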
Wheat-based foods and non-celiac gluten/wheat sensitivity: Is drastic processing the main key issue?
Fardet, Anthony
2015-12-01
While gluten and wheat must be strictly avoided in coeliac disease and wheat allergy, respectively, nutritional recommendations are far more confused about non-coeliac wheat/gluten sensitivity (NCWGS). Today, some even recommend avoiding all cereal-based foods. In this paper, the increased NCWGS prevalence is hypothesized to parallel the application of increasingly drastic processes to the original wheat grain. First, a parallel between gluten-related disorders and the evolution of wheat processing and consumption is briefly drawn; notably, the increased use of exogenous vital gluten is considered. The drastic processes in wheat technology are mainly grain fractionation and refining, followed by recombination and the addition of salt, sugars and fats, which can render ultra-processed cereal-based foods more prone to trigger chronic low-grade inflammation. Concerning bread, intensive kneading and the choice of wheat varieties with high baking quality may have rendered gluten less digestible, shifting digestion from pancreatic to intestinal proteases. The hypothesis of a digestion-resistant gluten fraction reaching the colon and interacting with the microflora is also considered in relation to increased inflammation. Besides, wheat flour refining removes fiber co-passengers, which have potential anti-inflammatory properties able to protect the digestive epithelium. Finally, some research tracks are proposed, notably a comparison of NCWGS prevalence in populations consuming ultra- versus minimally-processed cereal-based foods. Copyright © 2015 Elsevier Ltd. All rights reserved.
Corridor One: An Integrated Distance Visualization Environment for SSI+ASCI Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christopher R. Johnson, Charles D. Hansen
2001-10-29
The goal of Corridor One: An Integrated Distance Visualization Environment for ASCI and SSI Applications was to combine the forces of six leading-edge laboratories working in the areas of visualization, distributed computing and high performance networking (Argonne National Laboratory, Lawrence Berkeley National Laboratory, Los Alamos National Laboratory, University of Illinois, University of Utah and Princeton University) to develop and deploy the most advanced integrated distance visualization environment for large-scale scientific visualization and demonstrate it on applications relevant to the DOE SSI and ASCI programs. The Corridor One team brought world-class expertise in parallel rendering, deep image-based rendering, immersive environment technology, large-format multi-projector wall-based displays, volume and surface visualization algorithms, collaboration tools and streaming media technology, network protocols for image transmission, high-performance networking, quality-of-service technology and distributed computing middleware. Our strategy was to build on the very successful teams that produced the I-WAY, "Computational Grids" and CAVE technology and to add these to the teams that had developed the fastest parallel visualization systems and the most widely used networking infrastructure for multicast and distributed media. Unfortunately, just as we were getting going on the Corridor One project, DOE cut the program after the first year. As such, our final report consists of our progress during year one of the grant.
Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki
2014-12-01
As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.
Camel milk: a possible boon for type 1 diabetic patients.
Agrawal, R P; Tantia, P; Jain, S; Agrawal, R; Agrawal, V
2013-11-03
Poor nutrition in utero and in early life, combined with overnutrition in later life, may also play a role in the diabetes epidemic. The efficacy of camel milk consumption as an adjunct to routine diabetic management in type 1 diabetes is an approach offering new hope of coping with this disorder by adding a food supplement with medicinal value. Research on the beneficial aspects of camel milk has been taking place in different corners of the globe for the last three decades. Continuous efforts to elucidate the role of camel milk in diabetes have earned it the title of 'white gold'. Biochemical studies have revealed the components, e.g. insulin-like protein, lactoferrin and immunoglobulins, that give camel milk its scientific weight. In parallel, epidemiological surveys reporting a low prevalence of diabetes in communities consuming camel milk clearly point towards its promising role in controlling hyperglycemia. This article sheds light on camel milk production, composition and characteristics, and describes the positive effects of camel milk on blood glucose level, insulin dose and beta cell function. This review also compiles various epidemiological studies carried out to bring forth the utility of camel milk, suggesting it as a useful food supplement or alternative therapy for type 1 diabetic patients.
Kalyvianaki, Konstantina; Gebhart, Veronika; Peroulis, Nikolaos; Panagiotopoulou, Christina; Kiagiadaki, Fotini; Pediaditakis, Iosif; Aivaliotis, Michalis; Moustou, Eleni; Tzardi, Maria; Notas, George; Castanas, Elias; Kampa, Marilena
2017-01-01
Accumulating evidence during the last decades has revealed that androgens can exert membrane-initiated actions that involve signaling via specific kinases and the modulation of significant cellular processes, important for prostate cancer cell growth and metastasis. Results of the present work clearly show that androgens can specifically act at the membrane level via the GPCR oxoeicosanoid receptor 1 (OXER1) in prostate cancer cells. In fact, OXER1 expression parallels that of membrane androgen binding in prostate cancer cell lines and tumor specimens, while in silico docking simulation of OXER1 showed that testosterone could bind to OXER1 within the same groove as 5-OxoETE, the natural ligand of OXER1. Interestingly, testosterone antagonizes the effects of 5-oxoETE on specific signaling pathways and rapid effects such as actin cytoskeleton reorganization that ultimately can modulate cell migration and metastasis. These findings verify that membrane-acting androgens exert specific effects through an antagonistic interaction with OXER1. Additionally, this interaction between androgen and OXER1, which is an arachidonic acid metabolite receptor expressed in prostate cancer, provides a novel link between steroid and lipid actions and renders OXER1 a new player in the disease. These findings should be taken into account in the design of novel therapeutic approaches in prostate cancer. PMID:28290516
GPU accelerated generation of digitally reconstructed radiographs for 2-D/3-D image registration.
Dorgham, Osama M; Laycock, Stephen D; Fisher, Mark H
2012-09-01
Recent advances in programming languages for graphics processing units (GPUs) provide developers with a convenient way of implementing applications which can be executed on the CPU and GPU interchangeably. GPUs are becoming relatively cheap, powerful, and widely available hardware components, which can be used to perform intensive calculations. The last decade of hardware performance developments shows that GPU-based computation is progressing significantly faster than CPU-based computation, particularly if one considers the execution of highly parallelisable algorithms. Future predictions illustrate that this trend is likely to continue. In this paper, we introduce a way of accelerating 2-D/3-D image registration by developing a hybrid system which executes on the CPU and utilizes the GPU for parallelizing the generation of digitally reconstructed radiographs (DRRs). Based on the advancements of the GPU over the CPU, it is timely to exploit the benefits of many-core GPU technology by developing algorithms for DRR generation. Although some previous work has investigated the rendering of DRRs using the GPU, this paper investigates approximations which reduce the computational overhead while still maintaining a quality consistent with that needed for 2-D/3-D registration with sufficient accuracy to be clinically acceptable in certain applications of radiation oncology. Furthermore, by comparing implementations of 2-D/3-D registration on the CPU and GPU, we investigate current performance and propose an optimal framework for PC implementations addressing the rigid registration problem. Using this framework, we are able to render DRR images from a 256×256×133 CT volume in ~24 ms using an NVidia GeForce 8800 GTX and in ~2 ms using NVidia GeForce GTX 580. In addition to applications requiring fast automatic patient setup, these levels of performance suggest image-guided radiation therapy at video frame rates is technically feasible using relatively low cost PC architecture.
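At its simplest, a DRR is a line integral of attenuation through the CT volume along each ray, and the per-pixel independence of those integrals is what makes the computation GPU-friendly. Below is a toy sketch with orthographic, axis-aligned rays; this is a deliberate simplification for illustration, as the paper's renderer uses perspective geometry and further approximations.

```python
import numpy as np

def drr_orthographic(volume, spacing_mm, axis=0):
    """Toy DRR: integrate attenuation along axis-aligned parallel rays.
    Real 2-D/3-D registration uses perspective rays and trilinear
    sampling, but the per-pixel independence that makes the problem
    GPU-friendly is the same."""
    line_integral = volume.sum(axis=axis) * spacing_mm
    # map to a displayable 0..1 range
    img = line_integral - line_integral.min()
    return img / (img.max() + 1e-12)

ct = np.random.rand(133, 256, 256).astype(np.float32)  # z, y, x stand-in
drr = drr_orthographic(ct, spacing_mm=1.0, axis=0)      # 256x256 image
```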
Microstructure characterisation of Ti-6Al-4V from different additive manufacturing processes
NASA Astrophysics Data System (ADS)
Neikter, M.; Åkerfeldt, P.; Pederson, R.; Antti, M.-L.
2017-10-01
The focus of this work has been the microstructure characterisation of Ti-6Al-4V manufactured by five different additive manufacturing (AM) processes. The microstructure features characterised are the prior β grain size, grain boundary α and α lath thickness. It was found that material manufactured with powder bed fusion processes has smaller prior β grains than material from directed energy deposition processes. The AM processes with faster cooling rates result in thinner α laths and also thinner, and in some cases discontinuous, grain boundary α. Furthermore, it was observed that material manufactured with the directed energy deposition processes has parallel bands, except for one condition in which the parameters were changed, whereas material from the powder bed fusion processes does not have any parallel bands.
Richard Wright and the Agony over Integration
ERIC Educational Resources Information Center
Cassuto, Leonard
2008-01-01
Richard Wright's literary career begins with a lynching and ends with a serial murderer. "Big Boy Leaves Home," the 1936 story that leads off Wright's first book, "Uncle Tom's Children" (1938), renders the vicious mob-execution of a young black man falsely accused of rape. "A Father's Law," Wright's last novel, left unfinished at his unexpected…
ERIC Educational Resources Information Center
Rutherford, Alexandra; Vaughn-Blount, Kelli; Ball, Laura C.
2010-01-01
Feminist psychology began as an avowedly political project with an explicit social change agenda. However, over the last two decades, a number of critics have argued that feminist psychology has become mired in an epistemological impasse where positivist commitments effectively mute its political project, rendering the field acceptable to…
Making Sense of Eating Disorders in Schools
ERIC Educational Resources Information Center
Rich, Emma; Evans, John
2005-01-01
Over the last two decades we have witnessed an emerging set of conditions in schools which render them contexts replete with social messages about the body, health, and self. Research has suggested that both the formal and informal contexts of education are heavily imbued with a "culture of healthism" which places moral obligation and…
GPU-based multi-volume ray casting within VTK for medical applications.
Bozorgi, Mohammadmehdi; Lindseth, Frank
2015-03-01
Multi-volume visualization is important for displaying relevant information in multimodal or multitemporal medical imaging studies. The main objective of the current study was to develop an efficient GPU-based multi-volume ray caster (MVRC) and validate the proposed visualization system in the context of image-guided surgical navigation. Ray casting can produce high-quality 2D images from 3D volume data, but the method is computationally demanding, especially when multiple volumes are involved, so a parallel GPU version has been implemented. In the proposed MVRC, imaginary rays are sent through the volumes (one ray for each pixel in the view), and at equal and short intervals along the rays, samples are collected from each volume. Samples from all the volumes are composited using front-to-back α-blending. Since all the rays can be processed simultaneously, the MVRC was implemented in parallel on the GPU to achieve acceptable interactive frame rates. The method is fully integrated within the visualization toolkit (VTK) pipeline with the ability to apply different operations (e.g., transformations, clipping, and cropping) on each volume separately. The implemented method is cross-platform (Windows, Linux and Mac OSX) and runs on different graphics cards (NVidia and AMD). The speed of the MVRC was tested with one to five volumes of varying sizes: 128³, 256³, and 512³ voxels. A Tesla C2070 GPU was used, and the output image size was 600 × 600 pixels. The original VTK single-volume ray caster and the MVRC were compared when rendering only one volume. The multi-volume rendering system achieved an interactive frame rate (>15 fps) when rendering five small volumes (128³ voxels), four medium-sized volumes (256³ voxels), and two large volumes (512³ voxels). When rendering single volumes, the frame rate of the MVRC was comparable to the original VTK ray caster for small and medium-sized datasets but was approximately 3 frames per second slower for large datasets. The MVRC was successfully integrated in an existing surgical navigation system and was shown to be clinically useful during an ultrasound-guided neurosurgical tumor resection. A GPU-based MVRC for VTK is a useful tool in medical visualization. The proposed multi-volume GPU-based ray caster for VTK provided high-quality images at reasonable frame rates. The MVRC was effective when used in a neurosurgical navigation application.
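The compositing rule at the heart of such a ray caster is standard front-to-back "over" blending of the samples gathered along each ray, with early termination once the ray is effectively opaque. A per-ray sketch follows; sample generation, transfer functions, and the merging of per-volume samples at each step are elided.

```python
def composite_front_to_back(samples):
    """samples: iterable of ((r, g, b), alpha) tuples, nearest first,
    already merged across volumes at each step along the ray. Standard
    front-to-back 'over' blending with early ray termination."""
    out_rgb = [0.0, 0.0, 0.0]
    out_a = 0.0
    for (r, g, b), a in samples:
        w = (1.0 - out_a) * a       # remaining transparency times opacity
        out_rgb[0] += w * r
        out_rgb[1] += w * g
        out_rgb[2] += w * b
        out_a += w
        if out_a > 0.99:            # ray is effectively opaque; stop early
            break
    return tuple(out_rgb), out_a

# illustrative: a translucent red sample in front of a green one
color, alpha = composite_front_to_back(
    [((1.0, 0.0, 0.0), 0.3), ((0.0, 1.0, 0.0), 0.5)])
```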
Colleges Should Reimagine Themselves in an Oil-Scarce World
ERIC Educational Resources Information Center
Carlson, Scott
2008-01-01
Some years ago, bringing up peak oil--the concept that oil production will crest and then decline, leading to all sorts of trouble in society--might have made one seem like the kind of person who frequents Web sites that sell survival books and freeze-dried food. Today such discussion has pretty much hit the mainstream. Last month The Wall Street…
An Update: Changes Abound in Forestry Cost-Share Assistance Programs
Robert J. Moulton
1999-01-01
There have been some major changes in the line-up and funding for federal incentive programs that provide technical and financial assistance to non-industrial private forest (NIPF) landowners since I last reported on this subject ("Sorting Through Cost-Share Assistance Programs," Nov./Dec. 1994 Tree Farmer). The purpose of this article is to bring you up to...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witek, Barbara
Research has shown that biofeedback is a viable alternative treatment especially for disorders like headache and hypertension. The aim of this review paper is to illustrate ideas of a promising application of electroencephalograph (EEG) in biofeedback. This sort of biofeedback is called neurofeedback and its efficacy in treating epilepsy and Attention Deficit Hyperactivity Disorder (ADHD) is discussed. Lastly, a brief history of the study of neurofeedback is presented.
Identifying Characteristics of High School Dropouts: Data Mining with A Decision Tree Model
ERIC Educational Resources Information Center
Veitch, William Robert.
2004-01-01
The notion that all students should finish high school has grown throughout the last century and continues to be an important goal for all educational levels in this new century. Non-completion has been related to all sorts of social, financial, and psychological issues. Many studies have attempted to put together a process that will identify…
The Application of LOGO! in Control System of a Transmission and Sorting Mechanism
NASA Astrophysics Data System (ADS)
Liu, Jian; Lv, Yuan-Jun
The application of the general logic control module LOGO! to the control system of a transmission and sorting mechanism is presented. First, the structure and operating principle of the mechanism are introduced. Then the pneumatic loop of the mechanism is plotted in the FluidSIM-P software. Finally, the pneumatic loop and motors are controlled by LOGO!, which makes the control process simple and clear, in place of the complicated control of ordinary relays. LOGO! can achieve the complicated interlock control otherwise composed of intermediate relays and time relays. In the control process, the logic control function of LOGO! is fully used in the logic programming so that the system realizes the control of the air cylinder and motor. The result is a reliable and adjustable mechanism.
Wasserman, Edward A.; Brooks, Daniel I.; McMurray, Bob
2014-01-01
Might there be parallels between category learning in animals and word learning in children? To examine this possibility, we devised a new associative learning technique for teaching pigeons to sort 128 photographs of objects into 16 human language categories. We found that pigeons learned all 16 categories in parallel, they perceived the perceptual coherence of the different object categories, and they generalized their categorization behavior to novel photographs from the training categories. More detailed analyses of the factors that predict trial-by-trial learning implicated a number of factors that may shape learning. First, we found considerable trial-by-trial dependency of pigeons’ categorization responses, consistent with several recent studies that invoke this dependency to claim that humans acquire words via symbolic or inferential mechanisms; this finding suggests that such dependencies may also arise in associative systems. Second, our trial-by-trial analyses divulged seemingly irrelevant aspects of the categorization task, like the spatial location of the report responses, which influenced learning. Third, those trial-by-trial analyses also supported the possibility that learning may be determined both by strengthening correct stimulus-response associations and by weakening incorrect stimulus-response associations. The parallel between all these findings and important aspects of human word learning suggests that associative learning mechanisms may play a much stronger part in complex human behavior than is commonly believed. PMID:25497520
Biocellion: accelerating computer simulation of multicellular biological system models
Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya
2014-01-01
Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572
Son of IXION: A Steady State Centrifugally Confined Plasma for Fusion*
NASA Astrophysics Data System (ADS)
Hassam, Adil
1996-11-01
A magnetic confinement scheme in which the inertial, u.grad(u), forces effect parallel confinement is proposed. The basic geometry is mirror-like as far as the poloidal field goes or, more simply, of multipole (FM-1) type. The rotation is toroidal in this geometry. A supersonic rotation can effect complete parallel confinement, with the usual magnetic mirror force rendered irrelevant. The rotation shear, in addition, aids in the suppression of the flute mode. This suppression is not complete, which indicates the addition of a toroidal field, at maximum of the order of the poloidal field. We show that at rotation in excess of Mach 3, the parallel particle and heat losses can be minimized to below the Lawson breakeven point. The cross-field transport can be expected to be better than in tokamaks on account of the large velocity shear. Other advantages of the scheme are that it is steady state and disruption free. An exploratory experiment that tests equilibrium, parallel detachment, and MHD stability is proposed. The concept resembles earlier (Geneva, 1958) experiments on "homopolar generators" and a mirror configuration called IXION. Ixion, a king in Greek mythology, was forever strapped to a rotating, flaming wheel. *Work supported by DOE
Dynamical diffraction imaging (topography) with X-ray synchrotron radiation
NASA Technical Reports Server (NTRS)
Kuriyama, M.; Steiner, B. W.; Dobbyn, R. C.
1989-01-01
By contrast to electron microscopy, which yields information on the location of features in small regions of materials, X-ray diffraction imaging can portray minute deviations from perfect crystalline order over larger areas. Synchrotron radiation-based X-ray optics technology uses a highly parallel incident beam to eliminate ambiguities in the interpretation of image details; scattering phenomena previously unobserved are now readily detected. Synchrotron diffraction imaging renders high-resolution, real-time, in situ observations of materials under pertinent environmental conditions possible.
Distributed volume rendering and stereoscopic display for radiotherapy treatment planning
NASA Astrophysics Data System (ADS)
Hancock, David J.
The thesis describes attempts to use direct volume rendering techniques to produce visualisations useful in the preparation of radiotherapy treatment plans. The selected algorithms allow the generation of data-rich images which can be used to assist the radiologist in comprehending complicated three-dimensional phenomena. The treatment plans are formulated using a three dimensional model which combines patient data acquired from CT scanning and the results of a simulation of the radiation delivery. Multiple intersecting beams with shaped profiles are used and the region of intersection is designed to closely match the position and shape of the targeted tumour region. The proposed treatment must be evaluated as to how well the target region is enveloped by the high dose occurring where the beams intersect, and also as to whether the treatment is likely to expose non-tumour regions to unacceptably high levels of radiation. Conventionally the plans are reviewed by examining CT images overlaid with contours indicating dose levels. Volume visualisation offers a possible saving in time by presenting the data in three dimensional form thereby removing the need to examine a set of slices. The most difficult aspect is to depict unambiguously the relationships between the different data. For example, if a particular beam configuration results in unintended irradiation of a sensitive organ, then it is essential to ensure that this is clearly displayed, and that the 3D relationships between the beams and other data can be readily perceived in order to decide how to correct the problem. The user interface has been designed to present a unified view of the different techniques available for identifying features of interest within the data. The system differs from those previously reported in that complex visualisations can be constructed incrementally, and several different combinations of features can be viewed simultaneously. To maximise the quantity of relevant data presented in a single view, large regions of the data are rendered very transparently. This is done to ensure that interesting features buried deep within the data are visible from any viewpoint. Rendering images with high degrees of transparency raises a number of problems, primarily the drop in quality of depth cues in the image, but also the increase in computational requirements over surface-based visualisations. One solution to the increase in image generation times is the use of parallel architectures, which are an attractive platform for large visualisation tasks such as this. A parallel implementation of the direct volume rendering algorithm is described and its performance is evaluated. Several issues must be addressed in implementing an interactive rendering system in a distributed computing environment: principally overcoming the latency and limited bandwidth of the typical network connection. This thesis reports a pipelining strategy developed to improve the level of interactivity in such situations. Stereoscopic image presentation offers a method to offset the reduction in clarity of the depth information in the transparent images. The results of an investigation into the effectiveness of stereoscopic display as an aid to perception in highly transparent images are presented. Subjects were shown scenes of a synthetic test data set in which conventional depth cues were very limited. 
The experiments were designed to discover what effect stereoscopic viewing of the transparent, volume-rendered images had on users' depth perception.
Evaluation of Dowfrost(TM) HD as a Thermal Control Fluid for Constellation Vehicles
NASA Technical Reports Server (NTRS)
Lee, Steve
2009-01-01
A test was conducted from November 2008 to January 2009 to help determine the compatibility of an inhibited propylene glycol/water solution with planned Constellation vehicles. Dowfrost(TradeMark) HD was selected as the baseline for Orion, as well as other Constellation systems. Therefore, the same Dowfrost(TradeMark) HD/water solution planned for Orion was chosen for this test. The fluid was subjected to a thermal fluid loop that had flight-like properties, as compared to Orion. The fluid loop had similar wetted materials, temperatures, flow rates, and aluminum wetted surface area to fluid volume ratio. The test was designed to last for 10 years, the life expectancy of the lunar habitat. However, the test lasted less than two months. System filters became clogged with precipitate, rendering the fluid system inoperable. Upon examination of the precipitate, it was determined that the precipitate contained aluminum, which could only have come from materials in the test stand, as aluminum is not part of the Dowfrost(TradeMark) HD composition. Also, the fluid pH was determined to have increased from 10.1, at the first test sample, to 12.2, at the completion of the test. Such a high pH is corrosive to aluminum and was certainly a contributing factor in the development of precipitate. Chemical analyses and bench-top tests are currently ongoing to determine the underlying cause of this rapid degradation of the fluid. Hamilton Sundstrand, the contractor developing the Orion thermal fluid loop, is performing a parallel effort not only to understand the cause of fluid degradation in the test, but also to investigate solutions to avoid this problem in Orion's thermal control system. JSC also consulted with the Hamilton Sundstrand team in the development of this test and the subsequent analysis.
Realtime Compositing of Procedural Facade Textures on the GPU
NASA Astrophysics Data System (ADS)
Krecklau, L.; Kobbelt, L.
2011-09-01
The real time rendering of complex virtual city models has become more important in the last few years for many practical applications like realistic navigation or urban planning. For maximum rendering performance, the complexity of the geometry or textures can be reduced by decreasing the resolution until the data set can fully reside in the memory of the graphics card. This typically results in a low quality of the virtual city model. Alternatively, a streaming algorithm can load the high quality data set from the hard drive. However, this approach requires a large amount of persistent storage providing several gigabytes of static data. We present a system that uses a texture atlas containing atomic tiles like windows, doors or wall patterns, and that combines those elements on-the-fly directly on the graphics card. The presented approach benefits from a sophisticated randomization approach that produces lots of different facades while the grammar description itself remains small. By using a ray casting approach, we are able to trace through transparent windows, revealing procedurally generated rooms, which further contributes to the realism of the rendering. The presented method enables real time rendering of city models with a high level of detail for facades while still relying on a small memory footprint.
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Bardachenko, Vitaliy F.; Nikolsky, Alexander I.; Lazarev, Alexander A.
2007-04-01
In this paper we show that the biologically motivated concept of time-pulse encoding offers a number of advantages (a single methodological basis, universality, simplicity of tuning, training and programming, among others) for the creation and design of sensor systems with parallel input-output and processing, and of 2D structures for hybrid and neuro-fuzzy neurocomputers of the next generations. We show the principles of construction of programmable relational optoelectronic time-pulse coded processors for continuous logic, order logic and temporal wave processes that underlie their creation. We consider a structure that performs extraction of the analog signal of a given grade (order) and sorting of analog and time-pulse coded variables. We offer an optoelectronic realization of such basic relational elements of order logic, consisting of time-pulse coded phototransformers (pulse-width and pulse-phase modulators) with direct and complementary outputs, a sorting network of logical elements, and programmable commutation blocks. We estimate the basic technical parameters of such base devices and of processors built on them by simulation and experimental research: power of optical input signals of 0.200-20 μW, processing time of microseconds, supply voltage of 1.5-10 V, consumption power of hundreds of microwatts per element, extended functional possibilities, and training possibilities. We discuss some aspects of possible rules and principles of training and of programmable tuning to the required function or relational operation, and the realization of hardware blocks for modifications of such processors. We show how, on the basis of such quasi-universal hardware and flexible programmable tuning, it is possible to create sorting machines, neural networks and hybrid data-processing systems with untraditional numerical systems and picture operands.
2017-01-01
Tight and tunable control of gene expression is a highly desirable goal in synthetic biology for constructing predictable gene circuits and achieving preferred phenotypes. Elucidating the sequence–function relationship of promoters is crucial for manipulating gene expression at the transcriptional level, particularly for inducible systems dependent on transcriptional regulators. Sort-seq methods employing fluorescence-activated cell sorting (FACS) and high-throughput sequencing allow for the quantitative analysis of sequence–function relationships in a robust and rapid way. Here we utilized a massively parallel sort-seq approach to analyze the formaldehyde-inducible Escherichia coli promoter (Pfrm) with single-nucleotide resolution. A library of mutated formaldehyde-inducible promoters was cloned upstream of gfp on a plasmid. The library was partitioned into bins via FACS on the basis of green fluorescent protein (GFP) expression level, and mutated promoters falling into each expression bin were identified with high-throughput sequencing. The resulting analysis identified two 19 base pair repressor binding sites, one upstream of the −35 RNA polymerase (RNAP) binding site and one overlapping with the −10 site, and assessed the relative importance of each position and base therein. Key mutations were identified for tuning expression levels and were used to engineer formaldehyde-inducible promoters with predictable activities. Engineered variants demonstrated up to 14-fold lower basal expression, 13-fold higher induced expression, and a 3.6-fold stronger response as indicated by relative dynamic range. Finally, an engineered formaldehyde-inducible promoter was employed to drive the expression of heterologous methanol assimilation genes and achieved increased biomass levels on methanol, a non-native substrate of E. coli. PMID:28463494
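A common way to turn sort-seq bin counts into a quantitative expression estimate for each promoter variant is a read-weighted mean over the bins' fluorescence levels. A minimal sketch of that estimate follows; the bin levels and counts are illustrative, and the published analysis may weight or normalize differently.

```python
import numpy as np

def mean_bin_expression(read_counts, bin_levels):
    """read_counts: reads per FACS bin for one promoter variant.
    bin_levels: representative fluorescence of each bin (e.g., bin
    medians). Returns the read-weighted mean expression level."""
    read_counts = np.asarray(read_counts, dtype=float)
    bin_levels = np.asarray(bin_levels, dtype=float)
    return float((read_counts * bin_levels).sum() / read_counts.sum())

# a variant seen mostly in the two brightest of four bins
print(mean_bin_expression([5, 10, 120, 200], [1.0, 10.0, 100.0, 1000.0]))
```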
Creating ensembles of oblique decision trees with evolutionary algorithms and sampling
Cantu-Paz, Erick [Oakland, CA; Kamath, Chandrika [Tracy, CA
2006-06-13
A decision tree system that is part of a parallel object-oriented pattern recognition system, which in turn is part of an object oriented data mining system. A decision tree process includes the step of reading the data. If necessary, the data is sorted. A potential split of the data is evaluated according to some criterion. An initial split of the data is determined. The final split of the data is determined using evolutionary algorithms and statistical sampling techniques. The data is split. Multiple decision trees are combined in ensembles.
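Evaluating "a potential split of the data according to some criterion" typically means minimizing an impurity measure such as the Gini index; the evolutionary search described above then looks for oblique (linear-combination) splits under the same criterion. A sketch of the axis-aligned building block, with the Gini criterion as an assumed example:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a multiset of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_score(xs, labels, threshold):
    """Weighted Gini impurity after splitting on xs <= threshold.
    Lower is better. An oblique tree would replace xs with a linear
    combination of several features, searched by an evolutionary
    algorithm."""
    left = [l for x, l in zip(xs, labels) if x <= threshold]
    right = [l for x, l in zip(xs, labels) if x > threshold]
    n = len(labels)
    return (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

print(split_score([1, 2, 3, 4], ["a", "a", "b", "b"], 2.5))  # 0.0: pure split
```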
Heterogeneous Multi-Robot Multi-Sensor Platform for Intruder Detection
2009-09-15
propagation model, with variance τ_i: s_i ~ N(b_{0i} + b_{1i}·log D_i, τ_i). The initial parameters (b_{0i}, b_{1i}, τ_i) of the model are unknown, and the training...that the advantage of the MOO-learned mode would become more significant over time compared with the other mode. ..."nondominated sorting genetic algorithm for multi-objective optimization: NSGA-II," in Parallel Problem Solving from Nature (PPSN VI), M. Schoenauer
Card sorting to evaluate the robustness of the information architecture of a protocol website.
Wentzel, J; Müller, F; Beerlage-de Jong, N; van Gemert-Pijnen, J
2016-02-01
A website on Methicillin-Resistant Staphylococcus Aureus, MRSA-net, was developed for Health Care Workers (HCWs) and the general public, in German and in Dutch. The website's content was based on existing protocols and its structure was based on a card sort study. A Human Centered Design approach was applied to ensure a match between user and technology. In the current study we assess whether the website's structure still matches user needs, again via a card sort study. An open card sort study was conducted. Randomly drawn samples of 100 on-site search queries as they were entered on the MRSA-net website (during one year of use) were used as card input. In individual sessions, the cards were sorted by each participant (18 German and 10 Dutch HCWs, and 10 German and 10 Dutch members of the general public) into piles that were meaningful to them. Each participant provided a label for every pile of cards they created. Cluster analysis was performed on the resulting sorts, creating an overview of clusters of items placed together in one pile most frequently. In addition, pile labels were qualitatively analyzed to identify the participants' mental models. Cluster analysis confirmed existing categories and revealed new themes emerging from the search query samples, such as financial issues and consequences for the patient. Even though MRSA-net addresses these topics, they are not prominently covered in the menu structure. The label analysis shows that 7 of a total of 44 MRSA-net categories were not reproduced by the participants. Additional themes such as information on other pathogens and categories such as legal issues emerged. This study shows that the card sort performed to create MRSA-net resulted in overall long-lasting structure and categories. New categories were identified, indicating that additional information needs emerged. Therefore, evaluating website structure should be a recurrent activity. Card sorting with ecological data as input for the cards is useful to identify changes in needs and mental models. By combining qualitative and quantitative analysis we gained insight into additional information needed by the target group, including their view on the domain and related themes. The results show differences between the four user groups in their sorts, which can mostly be explained by the groups' background. These findings confirm that HCD is a valuable approach to tailor information to the target group. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
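The cluster analysis step of a card sort typically works from a card-by-card co-occurrence matrix (how often two cards land in the same pile), converts it to distances, and applies hierarchical clustering. A minimal sketch with SciPy; the toy sorts below are illustrative, not data from the study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# each participant's sort: card index -> pile id
sorts = [
    {0: 0, 1: 0, 2: 1, 3: 1},
    {0: 0, 1: 0, 2: 0, 3: 1},
]
n_cards = 4
co = np.zeros((n_cards, n_cards))
for s in sorts:
    for i in range(n_cards):
        for j in range(n_cards):
            if s[i] == s[j]:
                co[i, j] += 1

# distance = proportion of participants who did NOT pair the two cards
dist = 1.0 - co / len(sorts)
# condensed upper-triangle distances, as scipy's linkage expects
condensed = dist[np.triu_indices(n_cards, k=1)]
tree = linkage(condensed, method="average")
print(fcluster(tree, t=2, criterion="maxclust"))  # cluster label per card
```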
Gaĭdar, B V; Ivantsov, V A; Sidel'nikov, V O; Rusev, I T; Madaĭ, D Iu; Kokoev, V G; Zinov'ev, E V; Mutalibov, M M
2004-06-01
The article reviews modern opinions concerning the experience of medical support of military operations in local wars and military conflicts. On the basis of an analysis of the medical assistance rendered to the wounded and casualties in the Republic of Chechnya, the advantages and defects of different approaches are discussed, as is the experience of the armed forces of the NATO countries in rendering assistance to casualties during the local wars of the last decades. It is shown that the optimal organization of treatment-and-evacuation measures during local armed conflicts and wars is a two-stage scheme of evacuation: first medical aid, followed by qualified (specialized) medical aid.
Direct Volume Rendering with Shading via Three-Dimensional Textures
NASA Technical Reports Server (NTRS)
VanGelder, Allen; Kim, Kwansik
1996-01-01
A new and easy-to-implement method for direct volume rendering that uses 3D texture maps for acceleration, and incorporates directional lighting, is described. The implementation, called Voltx, produces high-quality images at nearly interactive speeds on workstations with hardware support for three-dimensional texture maps. Previously reported methods did not incorporate a light model, and did not address issues of multiple texture maps for large volumes. Our research shows that these extensions impact performance by about a factor of ten. Voltx supports orthographic, perspective, and stereo views. This paper describes the theory and implementation of this technique, and compares it to the shear-warp factorization approach. A rectilinear data set is converted into a three-dimensional texture map containing color and opacity information. Quantized normal vectors and a lookup table provide efficiency. A new tessellation of the sphere is described, which serves as the basis for normal-vector quantization. A new gradient-based shading criterion is described, in which the gradient magnitude is interpreted in the context of the field-data value and the material classification parameters, and not in isolation. In the rendering phase, the texture map is applied to a stack of parallel planes, which effectively cut the texture into many slabs. The slabs are composited to form an image.
A 3-RSR Haptic Wearable Device for Rendering Fingertip Contact Forces.
Leonardis, Daniele; Solazzi, Massimiliano; Bortone, Ilaria; Frisoli, Antonio
2017-01-01
A novel wearable haptic device for modulating contact forces at the fingertip is presented. Rendering of forces by skin deformation in three degrees of freedom (DoF), with contact-no contact capabilities, was implemented through rigid parallel kinematics. The novel asymmetrical three revolute-spherical-revolute (3-RSR) configuration allowed compact dimensions with minimum encumbrance of the hand workspace. The device was designed to render constant to low frequency deformation of the fingerpad in three DoF, combining light weight with relatively high output forces. A differential method for solving the non-trivial inverse kinematics is proposed and implemented in real time for controlling the device. The first experimental activity evaluated discrimination of different fingerpad stretch directions in a group of five subjects. The second experiment, enrolling 19 subjects, evaluated cutaneous feedback provided in a virtual pick-and-place manipulation task. Stiffness of the fingerpad plus device was measured and used to calibrate the physics of the virtual environment. The third experiment with 10 subjects evaluated interaction forces in a virtual lift-and-hold task. Although with different performance in the two manipulation experiments, overall results show that participants better controlled interaction forces when the cutaneous feedback was active, with significant differences between the visual and visuo-haptic experimental conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.
This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
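The non-dominated sorting at the center of SOP ranks candidates on two objectives: minimize the expensive function value and maximize the distance to previously evaluated points (encoded below by negating the distance so that both coordinates are minimized). A sketch of extracting the first Pareto front; the quadratic-time scan is fine at these problem sizes.

```python
def dominates(p, q):
    """p, q: (f_value, -min_distance); both coordinates are minimized."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def first_front(points):
    """Indices of non-dominated points (the first front). Repeating on
    the remainder yields the successive fronts from which P centers are
    drawn."""
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

# (function value, negated distance to evaluated set) per candidate
pts = [(1.0, -3.0), (2.0, -5.0), (3.0, -1.0), (2.5, -0.5)]
print(first_front(pts))  # [0, 1]: these dominate the rest on this toy set
```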
Born Free but in Chains: Academic Freedom and Rights of Governance
ERIC Educational Resources Information Center
Academe, 2005
2005-01-01
This article presents the address delivered by Roger Bowen, American Association of University Professors' (AAUP) general secretary, last fall to the Coalition of Faculty Associations of Western New York. The AAUP's history could be rendered in a series of biographies about academic dissenters who dared to speak truth to power. His address centers…
ERIC Educational Resources Information Center
Hadjichambis, Andreas Ch.; Paraskeva-Hadjichambi, Demetra; Ioannou, Hara; Georgiou, Yiannis; Manoli, Constantinos C.
2015-01-01
During the last decades, current consumption patterns have been recurrently blamed for rendering both the environment and our lifestyles unsustainable. Young children are considered a critical group in the effort to make a shift towards sustainable consumption (environmentally friendly consumption). However, young people should be able to consider…
Sukop, Michael; Cunningham, Kevin J.
2014-01-01
Digital optical borehole images at approximately 2 mm vertical resolution and borehole caliper data were used to create three-dimensional renderings of the distribution of (1) matrix porosity and (2) vuggy megaporosity for the karst carbonate Biscayne aquifer in southeastern Florida. The renderings based on the borehole data were used as input into Lattice Boltzmann methods to obtain intrinsic permeability estimates for this extremely transmissive aquifer, where traditional aquifer test methods may fail due to very small drawdowns and non-Darcian flow that can reduce apparent hydraulic conductivity. Variogram analysis of the borehole data suggests a nearly isotropic rock structure at lag lengths up to the nominal borehole diameter. A strong correlation between the diameter of the borehole and the presence of vuggy megaporosity in the data set led to a bias in the variogram where the computed horizontal spatial autocorrelation is strong at lag distances greater than the nominal borehole size. Lattice Boltzmann simulation of flow across a 0.4 × 0.4 × 17 m (2.72 m3 volume) parallel-walled column of rendered matrix and vuggy megaporosity indicates a high hydraulic conductivity of 53 m s−1. This value is similar to previous Lattice Boltzmann calculations of hydraulic conductivity in smaller limestone samples of the Biscayne aquifer. The development of simulation methods that reproduce dual-porosity systems with higher resolution and fidelity and that consider flow through horizontally longer renderings could provide improved estimates of the hydraulic conductivity and help to address questions about the importance of scale.
NASA Astrophysics Data System (ADS)
Sukop, Michael C.; Cunningham, Kevin J.
2014-11-01
Digital optical borehole images at approximately 2 mm vertical resolution and borehole caliper data were used to create three-dimensional renderings of the distribution of (1) matrix porosity and (2) vuggy megaporosity for the karst carbonate Biscayne aquifer in southeastern Florida. The renderings based on the borehole data were used as input into Lattice Boltzmann methods to obtain intrinsic permeability estimates for this extremely transmissive aquifer, where traditional aquifer test methods may fail due to very small drawdowns and non-Darcian flow that can reduce apparent hydraulic conductivity. Variogram analysis of the borehole data suggests a nearly isotropic rock structure at lag lengths up to the nominal borehole diameter. A strong correlation between the diameter of the borehole and the presence of vuggy megaporosity in the data set led to a bias in the variogram where the computed horizontal spatial autocorrelation is strong at lag distances greater than the nominal borehole size. Lattice Boltzmann simulation of flow across a 0.4 × 0.4 × 17 m (2.72 m3 volume) parallel-walled column of rendered matrix and vuggy megaporosity indicates a high hydraulic conductivity of 53 m s-1. This value is similar to previous Lattice Boltzmann calculations of hydraulic conductivity in smaller limestone samples of the Biscayne aquifer. The development of simulation methods that reproduce dual-porosity systems with higher resolution and fidelity and that consider flow through horizontally longer renderings could provide improved estimates of the hydraulic conductivity and help to address questions about the importance of scale.
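The post-processing step that turns a steady-state lattice Boltzmann flow field into a hydraulic conductivity is an application of Darcy's law. A sketch with illustrative numbers: the pressure gradient below was chosen only so that the toy result lands near the 53 m/s reported above; it is not a value from the paper.

```python
def hydraulic_conductivity(q_darcy, dp_dx, rho=1000.0, g=9.81, mu=1.0e-3):
    """Darcy's law: q = -(k/mu) * dp/dx  ->  k = -q * mu / (dp/dx).
    Hydraulic conductivity is then K = k * rho * g / mu (m/s), using
    water density rho, gravity g, and dynamic viscosity mu."""
    k = -q_darcy * mu / dp_dx          # intrinsic permeability, m^2
    return k * rho * g / mu            # hydraulic conductivity, m/s

# illustrative: flux of 0.5 m/s under a pressure gradient of -92 Pa/m
print(hydraulic_conductivity(0.5, -92.0))  # ~53 m/s
```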
NASA Astrophysics Data System (ADS)
Witek, Barbara
2007-11-01
Research has shown that biofeedback is a viable alternative treatment especially for disorders like headache and hypertension. The aim of this review paper is to illustrate ideas of a promising application of electroencephalograph (EEG) in biofeedback. This sort of biofeedback is called neurofeedback and its efficacy in treating epilepsy and Attention Deficit Hyperactivity Disorder (ADHD) is discussed. Lastly, a brief history of the study of neurofeedback is presented.
The Last Battle: With "Mockingjay" on Its Way, Suzanne Collins Weighs in on Katniss and the Capitol
ERIC Educational Resources Information Center
Margolis, Rick
2010-01-01
Ever since Katniss Everdeen, the arrow-slinging heroine of Suzanne Collins's "Hunger Games" trilogy, was snatched from the cruel clutches of a ruthless government, one can't stop thinking about the feisty 16-year-old from District 12. What sort of flesh-devouring, mutant killing machine awaits her next? How can she possibly lead a successful…
Computations on the massively parallel processor at the Goddard Space Flight Center
NASA Technical Reports Server (NTRS)
Strong, James P.
1991-01-01
Described are four significant algorithms implemented on the massively parallel processor (MPP) at the Goddard Space Flight Center. Two are in the area of image analysis. Of the other two, one is a mathematical simulation experiment and the other deals with the efficient transfer of data between distantly separated processors in the MPP array. The first algorithm presented is the automatic determination of elevations from stereo pairs. The second algorithm solves mathematical logistic equations capable of producing both ordered and chaotic (or random) solutions. This work can potentially lead to the simulation of artificial life processes. The third algorithm is the automatic segmentation of images into reasonable regions based on some similarity criterion, while the fourth is an implementation of a bitonic sort of data which significantly overcomes the nearest neighbor interconnection constraints on the MPP for transferring data between distant processors.
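The bitonic sort mentioned last is a natural fit for SIMD arrays like the MPP because every stage performs the same compare-exchange at a fixed stride across all processors, so the data movement can be scheduled uniformly over the interconnect. A serial sketch of the network (input length must be a power of two):

```python
def bitonic_sort(a):
    """In-place bitonic sort; len(a) must be a power of two. Each pass of
    the inner loop is one compare-exchange stage that a SIMD machine
    executes across all elements simultaneously."""
    n = len(a)
    k = 2
    while k <= n:
        j = k // 2
        while j > 0:
            for i in range(n):
                l = i ^ j                      # partner at stride j
                if l > i:
                    ascending = (i & k) == 0   # direction of this block
                    if (a[i] > a[l]) == ascending:
                        a[i], a[l] = a[l], a[i]
            j //= 2
        k *= 2
    return a

print(bitonic_sort([7, 3, 6, 2, 8, 1, 5, 4]))  # [1, 2, 3, 4, 5, 6, 7, 8]
```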
Motions of the hand expose the partial and parallel activation of stereotypes.
Freeman, Jonathan B; Ambady, Nalini
2009-10-01
Perceivers spontaneously sort other people's faces into social categories and activate the stereotype knowledge associated with those categories. In the work described here, participants, presented with sex-typical and sex-atypical faces (i.e., faces containing a mixture of male and female features), identified which of two gender stereotypes (one masculine and one feminine) was appropriate for the face. Meanwhile, their hand movements were measured by recording the streaming x, y coordinates of the computer mouse. As participants stereotyped sex-atypical faces, real-time motor responses exhibited a continuous spatial attraction toward the opposite-gender stereotype. These data provide evidence for the partial and parallel activation of stereotypes belonging to alternate social categories. Thus, perceptual cues of the face can trigger a graded mixture of simultaneously active stereotype knowledge tied to alternate social categories, and this mixture settles over time onto ultimate judgments.
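A standard summary statistic for this kind of mouse-tracking data is the maximum deviation of the cursor path from the straight line joining its start and end points; larger values indicate a stronger attraction toward the unchosen category. A sketch, assuming the trajectory is given as x-y samples:

```python
import numpy as np

def max_deviation(xs, ys):
    """Maximum perpendicular distance of a trajectory from the straight
    line joining its first and last points."""
    p0 = np.array([xs[0], ys[0]], dtype=float)
    p1 = np.array([xs[-1], ys[-1]], dtype=float)
    d = p1 - p0
    d /= np.linalg.norm(d)
    pts = np.stack([np.asarray(xs, float), np.asarray(ys, float)], axis=1) - p0
    # perpendicular component via the 2-D cross product with the unit chord
    return float(np.abs(pts[:, 0] * d[1] - pts[:, 1] * d[0]).max())

# toy trajectory bowing away from the direct start-to-target line
print(max_deviation([0, 0.2, 0.6, 1.0], [0, 0.5, 0.9, 1.0]))  # ~0.21
```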
Hu, Peng; Fabyanic, Emily; Kwon, Deborah Y; Tang, Sheng; Zhou, Zhaolan; Wu, Hao
2017-12-07
Massively parallel single-cell RNA sequencing can precisely resolve cellular diversity in a high-throughput manner at low cost, but unbiased isolation of intact single cells from complex tissues such as adult mammalian brains is challenging. Here, we integrate sucrose-gradient-assisted purification of nuclei with droplet microfluidics to develop a highly scalable single-nucleus RNA-seq approach (sNucDrop-seq), which is free of enzymatic dissociation and nucleus sorting. By profiling ∼18,000 nuclei isolated from cortical tissues of adult mice, we demonstrate that sNucDrop-seq not only accurately reveals neuronal and non-neuronal subtype composition with high sensitivity but also enables in-depth analysis of transient transcriptional states driven by neuronal activity, at single-cell resolution, in vivo. Copyright © 2017 Elsevier Inc. All rights reserved.
Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers
NASA Technical Reports Server (NTRS)
Patera, Anthony T.
1993-01-01
Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is invoked only to construct and validate a simplified, input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers is considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.
Efficient visibility-driven medical image visualisation via adaptive binned visibility histogram.
Jung, Younhyun; Kim, Jinman; Kumar, Ashnil; Feng, David Dagan; Fulham, Michael
2016-07-01
'Visibility' is a fundamental optical property that represents the proportion of the voxels in a volume that is observable by users during interactive volume rendering. The manipulation of this 'visibility' improves the volume rendering process, for instance by ensuring the visibility of regions of interest (ROIs) or by guiding the identification of an optimal rendering view-point. The construction of visibility histograms (VHs), which represent the distribution of the visibility of all voxels in the rendered volume, enables users to explore the volume with real-time feedback about occlusion patterns among spatially related structures during volume rendering manipulations. Volume rendered medical images have been a primary beneficiary of VHs given the need to ensure that specific ROIs are visible relative to the surrounding structures, e.g. the visualisation of tumours that may otherwise be occluded by neighbouring structures. VH construction and its subsequent manipulations, however, are computationally expensive due to the histogram binning of the visibilities. This limits the real-time application of VHs to medical images that have large intensity ranges and volume dimensions and require a large number of histogram bins. In this study, we introduce an efficient adaptive binned visibility histogram (AB-VH) in which a smaller number of histogram bins are used to represent the visibility distribution of the full VH. We adaptively bin medical images by using a cluster analysis algorithm that groups the voxels according to their intensity similarities into a smaller subset of bins while preserving the distribution of the intensity range of the original images. We increase efficiency by exploiting the parallel computation and multiple render targets (MRT) extension of modern graphical processing units (GPUs), and this enables efficient computation of the histogram. We show the application of our method to single-modality computed tomography (CT), magnetic resonance (MR) imaging and multi-modality positron emission tomography-CT (PET-CT). In our experiments, the AB-VH markedly improved the computational efficiency of VH construction and thus improved the subsequent VH-driven volume manipulations. This efficiency was achieved without major visual or numerical degradation of the VH relative to its full-bin counterpart. We applied several variants of the K-means clustering algorithm with varying K (the number of clusters) and found that higher values of K resulted in better performance at a lower computational gain. The AB-VH also performed better than the conventional method of down-sampling the histogram bins (equal binning) for volume rendering visualisation. Copyright © 2016 Elsevier Ltd. All rights reserved.
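The adaptive binning step can be illustrated with a plain 1-D K-means (Lloyd's algorithm) over voxel intensities, where each cluster becomes one histogram bin. This is a serial sketch of the idea only; the paper's GPU implementation and choice of clustering variant differ.

```python
import numpy as np

def adaptive_bins(intensities, k, iters=20, seed=0):
    """1-D K-means over voxel intensities. Returns the k bin centers and
    a bin index per voxel; the visibility histogram then accumulates
    visibility into these k bins instead of one bin per intensity."""
    intensities = np.asarray(intensities, dtype=float)
    rng = np.random.default_rng(seed)
    centers = rng.choice(intensities, size=k, replace=False)
    for _ in range(iters):
        labels = np.abs(intensities[:, None] - centers[None, :]).argmin(axis=1)
        for c in range(k):
            members = intensities[labels == c]
            if members.size:
                centers[c] = members.mean()
    labels = np.abs(intensities[:, None] - centers[None, :]).argmin(axis=1)
    return centers, labels

# three synthetic intensity populations collapse to three bins
voxels = np.concatenate([np.random.normal(m, 5, 1000) for m in (0, 80, 300)])
centers, labels = adaptive_bins(voxels, k=3)
```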
A general heuristic for genome rearrangement problems.
Dias, Ulisses; Galvão, Gustavo Rodrigues; Lintzmayer, Carla Négri; Dias, Zanoni
2014-06-01
In this paper, we present a general heuristic for several problems in the genome rearrangement field. Our heuristic does not solve any problem directly; rather, it improves the solutions provided by any non-optimal algorithm that solves them. To evaluate it, we implemented several algorithms described in the literature and several algorithms developed by ourselves. In total, we implemented 23 algorithms for 9 well-known problems in the genome rearrangement field. Thirteen of these algorithms address problems that use the notions of prefix and suffix operations. In addition, we worked on 5 algorithms for the classic problem of sorting by transpositions, and we conclude the experiments by presenting results for 3 approximation algorithms for the sorting by reversals and transpositions problem and 2 approximation algorithms for the sorting by reversals problem. An algorithm with a better approximation ratio exists for this last problem, but it is purely theoretical, with no practical implementation. The algorithms we implemented, combined with our heuristic, lead to the best practical results in each case. In particular, we were able to improve results on the sorting by transpositions problem, which is a special case because many efforts have been made to generate algorithms with good results in practice, and some of these algorithms produce results that equal the optimal solutions in many cases. Our source codes and benchmarks are freely available upon request from the authors so that it will be easier to compare new approaches against our results.
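To make the setting concrete, here is a minimal Python sketch of the kind of non-optimal solver the heuristic is meant to refine: a greedy breakpoint-reduction algorithm for sorting an unsigned permutation by reversals. It is purely illustrative and is not the authors' code.

def breakpoints(p):
    # Count adjacencies that are not consecutive values, with the
    # permutation extended by 0 and n+1; 0 breakpoints means sorted.
    ext = [0] + list(p) + [len(p) + 1]
    return sum(1 for a, b in zip(ext, ext[1:]) if abs(a - b) != 1)

def greedy_reversal_sort(p):
    # Repeatedly apply the reversal that removes the most breakpoints.
    p, ops = list(p), []
    while breakpoints(p) > 0:
        best = min(
            (breakpoints(p[:i] + p[i:j][::-1] + p[j:]), i, j)
            for i in range(len(p)) for j in range(i + 1, len(p) + 1))
        if best[0] >= breakpoints(p):
            break  # greedy stalled; real solvers allow neutral reversals
        _, i, j = best
        p[i:j] = p[i:j][::-1]
        ops.append((i, j))
    return p, ops

# Usage: p, ops = greedy_reversal_sort([3, 1, 2, 5, 4])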
Transputer parallel processing at NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Ellis, Graham K.
1989-01-01
The transputer parallel processing lab at NASA Lewis Research Center (LeRC) consists of 69 processors (transputers) that can be connected into various networks for use in general purpose concurrent processing applications. The main goal of the lab is to develop concurrent scientific and engineering application programs that will take advantage of the computational speed increases available on a parallel processor over the traditional sequential processor. Current research involves the development of basic programming tools. These tools will help standardize program interfaces to specific hardware by providing a set of common libraries for applications programmers. The thrust of the current effort is in developing a set of tools for graphics rendering/animation. The applications programmer currently has two options for on-screen plotting. One option can be used for static graphics displays and the other can be used for animated motion. The option for static display involves the use of 2-D graphics primitives that can be called from within an application program. These routines perform the standard 2-D geometric graphics operations in real-coordinate space as well as allowing multiple windows on a single screen.
MacDougall, Preston J; Henze, Christopher E; Volkov, Anatoliy
2016-11-01
We present a unique platform for molecular visualization and design that uses novel subatomic feature detection software in tandem with 3D hyperwall visualization technology. We demonstrate the fleshing-out of pharmacophores in drug molecules, as well as reactive sites in catalysts, focusing on subatomic features. Topological analysis with picometer resolution, in conjunction with interactive volume-rendering of the Laplacian of the electronic charge density, leads to new insight into docking and catalysis. Visual data-mining is done efficiently and in parallel using a 4×4 3D hyperwall (a tiled array of 3D monitors driven independently by slave GPUs but displaying high-resolution, synchronized and functionally-related images). The visual texture of the images for a wide variety of molecular systems is intuitive to experienced chemists but also appealing to neophytes, making the platform simultaneously useful as a tool for advanced research as well as for pedagogical and STEM education outreach purposes.
Worldwide Report, Nuclear Development and Proliferation
1984-03-05
transmissions and broadcasts. Materials from foreign-language sources are translated; those from English-language sources are transcribed or reprinted, with... Processing indicators such as [Text] or [Excerpt] in the first line of each item, or following the last line of a brief, indicate how the original... information was processed. Where no processing indicator is given, the information was summarized or extracted. Unfamiliar names rendered
Latin America Report, No. 2712
1983-07-26
other characteristics retained. Headlines, editorial reports, and material enclosed in brackets [] are supplied by JPRS. Processing indicators such... as [Text] or [Excerpt] in the first line of each item, or following the last line of a brief, indicate how the original information was processed... Where no processing indicator is given, the information was summarized or extracted. Unfamiliar names rendered phonetically or transliterated are
Biocellion: accelerating computer simulation of multicellular biological system models.
Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya
2014-11-01
Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information.
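The framework pattern described here, users filling in pre-defined model routines that the engine calls during the simulation loop, can be illustrated abstractly. The Python sketch below is entirely hypothetical and does not reproduce Biocellion's actual C++ interface; it only shows the inversion of control involved.

class ModelRoutines:
    # Hypothetical interface: users subclass and fill in the bodies;
    # the framework owns the simulation loop.
    def update_agent(self, agent, neighbors, dt):
        raise NotImplementedError

    def update_grid(self, cell, dt):
        raise NotImplementedError

def run_simulation(routines, agents, grid, dt, n_steps):
    # The framework iterates (and, in the real system, parallelizes)
    # the sweeps; user code supplies only the per-agent/per-voxel model.
    for _ in range(n_steps):
        for a in agents:
            routines.update_agent(a, agents, dt)
        for c in grid:
            routines.update_grid(c, dt)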
A Novel Approach to Visualizing Dark Matter Simulations.
Kaehler, R; Hahn, O; Abel, T
2012-12-01
In recent decades, cosmological N-body dark matter simulations have enabled ab initio studies of the formation of structure in the Universe. Gravity amplified small density fluctuations generated shortly after the Big Bang, leading to the formation of galaxies in the cosmic web. These calculations have led to a growing demand for methods to analyze time-dependent, particle-based simulations. Rendering methods for such N-body simulation data usually employ some kind of splatting approach via point-based rendering primitives and approximate the spatial distributions of physical quantities using kernel interpolation techniques common in SPH (Smoothed Particle Hydrodynamics) codes. This paper proposes three GPU-assisted rendering approaches based on a new, more accurate method to compute the physical densities of dark matter simulation data. It uses full phase-space information to generate a tetrahedral tessellation of the computational domain, with mesh vertices defined by the simulation's dark matter particle positions. Over time the mesh is deformed by gravitational forces, causing the tetrahedral cells to warp and overlap. The new methods are well suited to visualize the cosmic web. In particular, they preserve caustics: regions of high density that emerge when several streams of dark matter particles share the same location in space, indicating the formation of structures like sheets, filaments and halos. We demonstrate the superior image quality of the new approaches in a comparison with three standard rendering techniques for N-body simulation data.
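The density estimate underlying the tessellation approach can be sketched compactly. Below is a minimal Python/NumPy illustration under the assumption that each tetrahedron of tracer particles carries a fixed mass, so its instantaneous density is that mass divided by the (possibly strongly deformed) cell volume; the vertex indexing is hypothetical, not the paper's data layout.

import numpy as np

def tet_volume(a, b, c, d):
    # Unsigned volume of the tetrahedron spanned by points a, b, c, d.
    return abs(np.dot(b - a, np.cross(c - a, d - a))) / 6.0

def tet_densities(positions, tets, mass_per_tet):
    # positions: (N, 3) particle coordinates; tets: (M, 4) vertex indices.
    dens = np.empty(len(tets))
    for k, (i, j, l, m) in enumerate(tets):
        v = tet_volume(positions[i], positions[j],
                       positions[l], positions[m])
        dens[k] = mass_per_tet / max(v, 1e-30)  # guard: v -> 0 at caustics
    return dens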
Hinchingbrooke staff deserve an apology.
Scott, Graham
2015-01-20
Imagine being a nurse at Hinchingbrooke Hospital in Cambridgeshire. Those who have been there for a while will have endured year after year of mismanagement as one regime after another failed to run the trust effectively. First the finances were allowed to get into an unholy mess, so the organisation was handed to private firm Circle to sort out. Last week, Circle decided to walk away, leaving the NHS to start again.
2015-06-01
localized or generalized), duration (chronic or aggressive) and severity (mild, moderate or severe) of periodontal disease which will assist in rendering... intrabony defects can be seen in chronic forms of periodontal disease. Figure 4a shows a normal bony pattern in which the bone level follows the... PERIODONTAL REGENERATION OF 1-, 2-, AND 3-WALLED INTRABONY DEFECTS USING ACCELL CONNEXUS® VERSUS DEMINERALIZED FREEZE-DRIED BONE ALLOGRAFT: A
Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images
Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun
2013-01-01
This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for the fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points when a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves the rendering speed by over three times compared with the conventional algorithm while image quality is well preserved. PMID:23424608
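The core idea, sampling a ray exactly where it crosses an equidistant cluster of parallel planes instead of marching at a fixed step, fits in a few lines. This is a minimal Python/NumPy sketch with illustrative names (planes z = k*dz), not the paper's implementation.

import numpy as np

def plane_intersections(origin, direction, dz, z_min, z_max):
    # Return ray parameters t at which the ray crosses planes z = k*dz.
    if abs(direction[2]) < 1e-12:
        return np.array([])  # ray parallel to the plane cluster
    ks = np.arange(np.ceil(z_min / dz), np.floor(z_max / dz) + 1)
    t = (ks * dz - origin[2]) / direction[2]
    return np.sort(t[t >= 0.0])

# Sample points are origin + t[:, None] * direction; colour and opacity
# are then composited front to back as in standard ray casting.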
High-dimensional cluster analysis with the Masked EM Algorithm
Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.
2014-01-01
Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694
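The masking idea can be illustrated with the per-point likelihood computation. The sketch below, in Python/NumPy, imputes masked-out features from a global noise model so that each point contributes only its informative features to the cluster fit; it is a conceptual illustration under simplifying (diagonal-Gaussian) assumptions, not the authors' implementation.

import numpy as np

def masked_loglik(x, mask, mu, var, noise_mu, noise_var):
    # Diagonal-Gaussian log-likelihood with a per-point feature mask:
    # mask == 1 marks informative features; masked-out features are
    # replaced by the global noise mean/variance.
    m = mask.astype(float)
    xe = m * x + (1 - m) * noise_mu
    ve = m * var + (1 - m) * noise_var
    return -0.5 * np.sum(np.log(2 * np.pi * ve) + (xe - mu) ** 2 / ve)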
Reporter-Based Isolation of Developmental Myogenic Progenitors
Kheir, Eyemen; Cusella, Gabriella; Messina, Graziella; Cossu, Giulio; Biressi, Stefano
2018-01-01
The formation and activity of mammalian tissues entail finely regulated processes, involving the concerted organization and interaction of multiple cell types. In recent years the prospective isolation of distinct progenitor and stem cell populations has become a powerful tool in the hands of developmental biologists and has rendered the investigation of their intrinsic properties possible. In this protocol, we describe how to purify progenitors with different lineage history and degree of differentiation from embryonic and fetal skeletal muscle by fluorescence-activated cell sorting (FACS). The approach takes advantage of a panel of murine strains expressing fluorescent reporter genes specifically in the myogenic progenitors. We provide a detailed description of the dissection procedures and of the enzymatic dissociation required to maximize the yield of mononucleated cells for subsequent FACS-based purification. The procedure takes ~6–7 h to complete and allows for the isolation and the subsequent molecular and phenotypic characterization of developmental myogenic progenitors. PMID:29674978
NASA Astrophysics Data System (ADS)
Smith, A. M.
1989-08-01
As a result of railway excavations, the Pietermaritzburg Shale-Vryheid Formation transition is spectacularly exposed on the southern slope of Zungwini Mountain. Nine facies and three facies associations are recognised. Deposition occurred in a palaeoshelf and offshore setting. The reconstructed coastline ran SW-NE with land to the northwest. The inner shelf was tide-influenced and the outer shelf storm-influenced. Fluvial input supplied sediment which was reworked into flood-tidal sandwaves, probably within the confines of an estuary. A rising sea level brought the sandwaves into the realm of a more distal, coast-parallel, storm-tidal current regime where reworking of the sediment occurred. Intense storm-augmented tidal currents swept some of the better-sorted material seaward to be deposited as storm layers on the inner and outer shelf. These same currents formed the low-density turbidites and sediment plumes from which the offshore argillaceous deposits were formed. The poorly sorted rhythmite facies at the shelf edge may have developed from sediment flushed out of the rivers during flood or from the flood-tidal sandwave system as a result of exceptional coastal storms.
Koblmüller, Stephan; Egger, Bernd; Sturmbauer, Christian; Sefc, Kristina M
2010-04-01
The evolutionary history of the endemic Lake Tanganyika cichlid tribe Tropheini, the sister group of the species flocks of Lake Malawi and the Lake Victoria region, was reconstructed from 2009 bp DNA sequence of two mitochondrial genes (ND2 and control region) and from 1293 AFLP markers. A period of rapid cladogenesis at the onset of the diversification of the Tropheini produced a multitude of specialized, predominantly rock-dwelling aufwuchs-feeders that now dominate in Lake Tanganyika's shallow habitat. Nested within the stenotopic rock-dwellers is a monophyletic group of species, which also utilize more sediment-rich habitat. Most of the extant species date back to at least 0.7 million years ago. Several instances of disagreement between AFLP and mtDNA tree topology are attributed to ancient incomplete lineage sorting, introgression and hybridization. A large degree of correspondence between AFLP clustering and trophic types indicated fewer cases of parallel evolution of trophic ecomorphology than previously inferred from mitochondrial data.
Segregation physics of a macroscale granular ratchet
NASA Astrophysics Data System (ADS)
Bhateja, Ashish; Sharma, Ishan; Singh, Jayant K.
2017-05-01
New experiments with multigrain mixtures in a laterally shaken, horizontal channel show complete axial segregation of species. The channel consists of multiple concatenated trapeziums, and superficially resembles microratchets wherein asymmetric geometries and potentials transport, and sort, randomly agitated microscopic particles. However, the physics of our macroscale granular ratchet is fundamentally different, as macroscopic segregation is gravity driven. Our observations are not explained by classical granular segregation theories either. Motivated by the experiments, extensive parallelized discrete element simulations reveal that the macroratchet differentiates grains through hierarchical bidirectional segregation over two different time scales: Grains rapidly sort vertically into horizontal bands spanning the channel's length that, subsequently, slowly separate axially, driven by strikingly gentle, average interfacial pressure gradients acting over long distances. At its maximum, the pressure gradient responsible for axial separation was due to a change in height of about two big grain diameters (d =7 mm) over a meter-long channel. The strong directional segregation achieved by the granular macroratchet has practical importance, while identifying the underlying new physics will further our understanding of granular segregation in industrial and geophysical processes.
New Computational Methods for the Prediction and Analysis of Helicopter Noise
NASA Technical Reports Server (NTRS)
Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak
1996-01-01
This paper describes several new methods to predict and analyze rotorcraft noise. These methods are: 1) a combined computational fluid dynamics and Kirchhoff scheme for far-field noise predictions, 2) parallel computer implementation of the Kirchhoff integrations, 3) audio and visual rendering of the computed acoustic predictions over large far-field regions, and 4) acoustic tracebacks to the Kirchhoff surface to pinpoint the sources of the rotor noise. The paper describes each method and presents sample results for three test cases. The first case consists of in-plane high-speed impulsive noise and the other two cases show idealized parallel and oblique blade-vortex interactions. The computed results show good agreement with available experimental data but convey much more information about the far-field noise propagation. When taken together, these new analysis methods exploit the power of new computer technologies and offer the potential to significantly improve our prediction and understanding of rotorcraft noise.
Scientific Visualization and Simulation for Multi-dimensional Marine Environment Data
NASA Astrophysics Data System (ADS)
Su, T.; Liu, H.; Wang, W.; Song, Z.; Jia, Z.
2017-12-01
With growing attention on the ocean and the rapid development of marine sensing, there is increasing demand for realistic simulation and interactive visualization of the marine environment in real time. Based on technologies such as GPU rendering, CUDA parallel computing and rapid grid-oriented strategies, this paper proposes a series of efficient, high-quality visualization methods that can handle large-scale, multi-dimensional marine data under different environmental circumstances. Firstly, a high-quality seawater simulation is realized with an FFT algorithm, bump mapping and texture animation. Secondly, large-scale multi-dimensional marine hydrological data is visualized with 3D interactive technologies and volume rendering techniques. Thirdly, seabed terrain data is simulated with an improved Delaunay algorithm, surface reconstruction, a dynamic LOD algorithm and GPU programming techniques. Fourthly, seamless real-time modelling of both ocean and land on a digital globe is achieved with WebGL to meet the requirements of web-based applications. The experiments suggest that these methods not only produce a convincing simulation of the marine environment but also meet the rendering requirements of global multi-dimensional marine data. Additionally, a simulation system for underwater oil spills has been built on the OSG 3D rendering engine and integrated with the marine visualization methods above; it dynamically and simultaneously shows movement processes, physical parameters, and current velocity and direction for different types of deep-water oil-spill particles (oil-spill particles, hydrate particles, gas particles, etc.) in multiple dimensions. Such an application provides valuable reference and decision-making information for understanding the progression of a deep-water oil spill, which is helpful for ocean disaster forecasting, warning and emergency response.
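The FFT-based seawater step can be sketched briefly. The following Python/NumPy fragment synthesizes a tileable height field by shaping random spectral amplitudes with a simple wavenumber falloff and inverse-transforming; the falloff is a hypothetical stand-in for the oceanographic spectra (e.g. Phillips) used in production renderers.

import numpy as np

def wave_height_field(n=256, scale=1.0, seed=0):
    # Random-phase spectrum with a crude 1/k^2 falloff, then inverse FFT.
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = np.inf                      # suppress the DC component
    amp = scale / k2
    phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))
    return np.real(np.fft.ifft2(amp * np.exp(1j * phase)))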
Rendering potential wearable robot designs with the LOPES gait trainer.
Koopman, B; van Asseldonk, E H F; van der Kooij, H; van Dijk, W; Ronsse, R
2011-01-01
In recent years, wearable robots (WRs) for rehabilitation, personal assistance, or human augmentation have been gaining increasing interest. To make these devices more energy efficient, radical changes to the mechanical structure of the device are being considered. However, it remains very difficult to predict how people will respond to, and interact with, WRs that differ in terms of mechanical design. Users may adjust their gait pattern in response to the mechanical restrictions or properties of the device. The goal of this pilot study is to show the feasibility of rendering the mechanical properties of different potential WR designs using the robotic gait training device LOPES. This paper describes a new method that selectively cancels the dynamics of LOPES itself and adds the dynamics of the rendered WR using two parallel inverse models. Adaptive frequency oscillators were used to obtain estimates of the joint position, velocity, and acceleration. Using the inverse models, different WR designs can be evaluated, eliminating the need to build several prototypes. As a proof of principle, we simulated the effect of a very simple WR that consisted of a mass attached to the ankles. Preliminary results show that we are partially able to cancel the dynamics of LOPES. Additionally, the simulation of the mass showed an increase in muscle activity, but not at the same level as in the control condition, where subjects actually carried the mass. In conclusion, the results of this paper suggest that LOPES can be used to render different WRs. In addition, it is very likely that the results can be further optimized when more effort is put into obtaining proper estimates of the velocity and acceleration, which are required for the inverse models.
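An adaptive frequency oscillator of the kind used for these estimates can be sketched in a few lines. This Python fragment follows the generic phase-oscillator form (in the style of Righetti and Ijspeert); the gains and the error coupling are illustrative, not the LOPES implementation.

import numpy as np

def afo_track(signal, dt, omega0=2.0 * np.pi, k=10.0):
    # Lock a phase oscillator onto a roughly periodic 1D signal and
    # return phase and frequency estimates at each sample.
    phi, omega = 0.0, omega0
    phase, freq = [], []
    for s in signal:
        e = s - np.cos(phi)            # error between signal and output
        phi += dt * (omega - k * e * np.sin(phi))
        omega += dt * (-k * e * np.sin(phi))
        phase.append(phi % (2.0 * np.pi))
        freq.append(omega)
    return np.array(phase), np.array(freq)

# Differentiating the locked phase gives smooth, phase-consistent
# velocity estimates even from noisy joint-angle measurements.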
NASA Astrophysics Data System (ADS)
Ali-Bey, Mohamed; Moughamir, Saïd; Manamanni, Noureddine
2011-12-01
In this paper, a simulator of a multi-view shooting system with parallel optical axes and a structurally variable configuration is proposed. The considered system is dedicated to the production of 3D content for auto-stereoscopic visualization. The global shooting/viewing geometrical process, which is the kernel of this shooting system, is detailed, and the different viewing, transformation and capture parameters are defined. An appropriate perspective projection model is then derived to build a simulator. The simulator is first used to validate the global geometrical process in the case of a static configuration. Next, it is used to show the limitations of a static configuration of this type of shooting system for dynamic scenes, and a dynamic scheme is then developed to allow correct capture of such scenes. After that, the effect of the different geometrical capture parameters on the 3D rendering quality, and whether or not they need to be adapted, is studied. Finally, some dynamic effects and their repercussions on the 3D rendering quality of dynamic scenes are analyzed using error images and image quantization tools. Simulation and experimental results are presented throughout the paper to illustrate the different points studied, and conclusions and perspectives end the paper.
Gundupalli, Sathish Paulraj; Hait, Subrata; Thakur, Atul
2017-12-01
There has been a significant rise in municipal solid waste (MSW) generation in the last few decades due to rapid urbanization and industrialization. Because source segregation is not commonly practised, a need for automated segregation of recyclables from MSW exists in developing countries. This paper reports a thermal-imaging-based system for classifying useful recyclables from a simulated MSW sample. Experimental results have demonstrated the possibility of using thermal imaging for classification, together with a robotic system for sorting of recyclables, in a single process step. The reported classification system yields an accuracy in the range of 85-96% and is comparable with existing single-material recyclable classification techniques. We believe that the reported thermal-imaging-based system can emerge as a viable and inexpensive large-scale classification-cum-sorting technology in recycling plants for processing MSW in developing countries.
Experimental investigation of gravity effects on sediment sorting on Mars
NASA Astrophysics Data System (ADS)
Kuhn, Nikolaus J.; Kuhn, Brigitte; Gartmann, Andres
2016-04-01
Introduction: Sorting of sedimentary rocks is a proxy for the environmental conditions at the time of deposition, in particular the runoff that moved and deposited the material forming the rocks. Settling of sediment in water is strongly influenced by the gravity of a planetary body. As a consequence, the sorting of a sedimentary rock varies with gravity for a given depth and velocity of surface runoff. Theoretical considerations for spheres indicate that sorting is more uniform on Mars than on Earth for runoff of identical depth. In reality, such considerations have to be applied with great caution because the shape of a particle strongly influences drag. Drag itself can only be calculated directly for an irregularly shaped particle with great computational effort, if at all. Therefore, even for terrestrial applications, sediment settling velocities are often determined directly, e.g. by measurements using settling tubes. Experiments: In this study, the results of settling tube tests conducted under reduced gravity during three Mars Sedimentation Experiment (MarsSedEx I, II and III) flights, conducted between 2012 and 2015, are presented. Ten types of sediment, ranging in size, shape and density, were tested in custom-designed settling tubes during parabolas of Martian gravity lasting 20 to 25 seconds. Results: The MarsSedEx reduced-gravity experiments showed that empirical models and parameter values developed for sediment transport on Earth violate the underlying fluid dynamics when applied to Mars, leading to significant miscalculations, specifically an underestimation of settling velocity because of an overestimation of turbulent drag. The error is caused by the flawed representation of particle drag on Mars. Drag coefficients are not a property of a sediment particle, but a property of the flow around the particle, and are thus strongly affected by gravity. Conclusions: The errors in settling velocity observed when using terrestrial models and parameter values on Mars have implications for sediment movement and sorting, in particular for sandstones and conglomerates, and thus for analogies drawn between Earth and Mars. Most significantly, sorting on Mars is less pronounced for given flow conditions than on Earth. References: [1] Kuhn N. J. (2014) Experiments in Reduced Gravity - Sediment Settling on Mars, Elsevier.
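The gravity dependence of settling velocity can be made concrete with a standard terminal-velocity iteration. The Python sketch below balances submerged weight against drag using the Schiller-Naumann correlation for spheres; the constants are illustrative, and the paper's point is precisely that such Earth-calibrated correlations misrepresent drag on Mars.

def settling_velocity(d, rho_s, g, rho_f=1000.0, mu=1e-3, tol=1e-9):
    # Terminal velocity (m/s) of a sphere of diameter d (m) and density
    # rho_s (kg/m^3) in a fluid (rho_f, mu) under gravity g (m/s^2).
    w = 1e-4                                         # initial guess
    for _ in range(200):
        re = max(rho_f * w * d / mu, 1e-12)          # Reynolds number
        cd = 24.0 / re * (1.0 + 0.15 * re ** 0.687)  # Schiller-Naumann
        w_new = ((4.0 * (rho_s - rho_f) * g * d) / (3.0 * rho_f * cd)) ** 0.5
        if abs(w_new - w) < tol:
            break
        w = w_new
    return w

# Same quartz grain, different planets:
# settling_velocity(2e-4, 2650.0, 9.81)   # Earth
# settling_velocity(2e-4, 2650.0, 3.71)   # Mars: slower settling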
Tutankhamun and his brothers. Familial gynecomastia in the Eighteenth Dynasty.
Paulshock, B Z
1980-07-11
Many images of the last four hereditary pharaohs of the Eighteenth Egyptian Dynasty (1559 BC to 1319 BC), Amenophis III, Amenophis IV (also known as Akhenaten), Smenkhkare, and Tutankhamun, show them with gynecomastia. Amenophis III was most probably the sire of the last three. The feminine physique and other abnormalities of Amenophis IV have been extensively commented on as indicative of some sort of pathological condition, but the gynecomastia of the others, including Tutankhamun, has been glossed over or considered an artistic mannerism of the El Amarna period. An alternative theory, that the gynecomastia was actually representational and indicative of a familial abnormality in two or three generations, is suggested.
2013-01-01
Background: Many proteins and peptides have been used in therapeutic or industrial applications. They are often produced in microbial production hosts by fermentation. Robust protein production in the hosts and efficient downstream purification are two critical factors that could significantly reduce cost for microbial protein production by fermentation. Producing proteins/peptides as inclusion bodies in the hosts has the potential to achieve both high titers in fermentation and cost-effective downstream purification. Manipulation of the host cells such as overexpression/deletion of certain genes could lead to producing more and/or denser inclusion bodies. However, there are limited screening methods to help to identify beneficial genetic changes rendering more protein production and/or denser inclusion bodies. Results: We report development and optimization of a simple density gradient method that can be used for distinguishing and sorting E. coli cells with different buoyant densities. We demonstrate utilization of the method to screen genetic libraries to identify a) expression of glyQS loci on plasmid that increased expression of a peptide of interest as well as the buoyant density of inclusion body producing E. coli cells; and b) deletion of a host gltA gene that increased the buoyant density of the inclusion body produced in the E. coli cells. Conclusion: A novel density gradient sorting method was developed to screen genetic libraries. Beneficial host genetic changes could be exploited to improve recombinant protein expression as well as downstream protein purification. PMID:23638724
Pandey, Neeraj; Sachan, Annapurna; Chen, Qi; Ruebling-Jass, Kristin; Bhalla, Ritu; Panguluri, Kiran Kumar; Rouviere, Pierre E; Cheng, Qiong
2013-05-02
Many proteins and peptides have been used in therapeutic or industrial applications. They are often produced in microbial production hosts by fermentation. Robust protein production in the hosts and efficient downstream purification are two critical factors that could significantly reduce cost for microbial protein production by fermentation. Producing proteins/peptides as inclusion bodies in the hosts has the potential to achieve both high titers in fermentation and cost-effective downstream purification. Manipulation of the host cells such as overexpression/deletion of certain genes could lead to producing more and/or denser inclusion bodies. However, there are limited screening methods to help to identify beneficial genetic changes rendering more protein production and/or denser inclusion bodies. We report development and optimization of a simple density gradient method that can be used for distinguishing and sorting E. coli cells with different buoyant densities. We demonstrate utilization of the method to screen genetic libraries to identify a) expression of glyQS loci on plasmid that increased expression of a peptide of interest as well as the buoyant density of inclusion body producing E. coli cells; and b) deletion of a host gltA gene that increased the buoyant density of the inclusion body produced in the E. coli cells. A novel density gradient sorting method was developed to screen genetic libraries. Beneficial host genetic changes could be exploited to improve recombinant protein expression as well as downstream protein purification.
Thermographic inspection of external thermal insulation systems with mechanical fixing
NASA Astrophysics Data System (ADS)
Simões, Nuno; Simões, Inês; Serra, Catarina; Tadeu, António
2015-05-01
An External Thermal Insulation Composite System (ETICS) kit may include anchors to mechanically fix the insulation product onto the wall. Using this option increases safety compared to a simply bonded solution; however, it is more expensive and requires more labor. The insulation product is then coated with rendering, which is applied to the insulation material without any air gap. The rendering comprises one or more coats with an embedded reinforcement. The most common multi-coat rendering system consists of a base coat applied directly to the insulation product with a glass fiber mesh as reinforcement, followed by a second base coat and then a very thin coat (key coat) that prepares the surface to receive the finishing and decorative coat. The thickness of the rendering system may vary from around 5 to 10 mm; the greater thicknesses may be associated with reinforcement composed of two layers of glass fiber mesh. The main purpose of this work is to apply infrared thermography (IRT) techniques to two ETICS solutions (single or double layer of glass fiber mesh) and evaluate their capability for detecting the anchors. The reliability of IRT was tested using an ETICS configuration of expanded cork boards and a rendering system with one or two layers of glass fiber mesh. An active thermography approach was performed under laboratory conditions, in transmission and reflection modes. In reflection mode, halogen lamps and an air heater were employed as the thermal stimulus; the air heater was also the source used in the transmission-mode tests. The resulting data were processed in both the time and frequency domains; in the latter approach, phase contrast images were generated and studied.
Parallel stitching of 2D materials
Ling, Xi; Wu, Lijun; Lin, Yuxuan; ...
2016-01-27
Diverse parallel stitched 2D heterostructures, including metal–semiconductor, semiconductor–semiconductor, and insulator–semiconductor, are synthesized directly through selective “sowing” of aromatic molecules as the seeds in the chemical vapor deposition (CVD) method. Moreover, the methodology enables the large-scale fabrication of lateral heterostructures, which offers tremendous potential for application in integrated circuits.
Performance evaluation of canny edge detection on a tiled multicore architecture
NASA Astrophysics Data System (ADS)
Brethorst, Andrew Z.; Desai, Nehal; Enright, Douglas P.; Scrofano, Ronald
2011-01-01
In the last few years, a variety of multicore architectures have been used to parallelize image processing applications. In this paper, we focus on assessing the parallel speed-ups of different Canny edge detection parallelization strategies on the Tile64, a tiled multicore architecture developed by the Tilera Corporation. These strategies include different ways Canny edge detection can be parallelized, as well as differences in data management. The two parallelization strategies examined were loop-level parallelism and domain decomposition. Loop-level parallelism is achieved through the use of OpenMP, and it is capable of parallelizing across the range of values over which a loop iterates. Domain decomposition is the process of breaking down an image into subimages, where each subimage is processed independently, in parallel. The results of the two strategies show that, for the same number of threads, programmer-implemented domain decomposition exhibits higher speed-ups than the compiler-managed loop-level parallelism implemented with OpenMP.
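The domain-decomposition strategy is easy to sketch outside the Tile64 setting. The Python fragment below, assuming NumPy and scikit-image, splits an image into horizontal strips with a halo of overlap rows (so the Gaussian and gradient stencils see their neighbours), runs Canny on each strip in a process pool, and stitches the results; it illustrates the pattern, not the paper's implementation.

import numpy as np
from multiprocessing import Pool
from skimage import feature

HALO = 8  # rows of overlap; must exceed the filter radius

def _strip_edges(args):
    strip, top_pad, bot_pad = args
    e = feature.canny(strip, sigma=2.0)
    return e[top_pad:e.shape[0] - bot_pad]   # drop the halo rows

def canny_decomposed(img, n_workers=4):
    bounds = np.linspace(0, img.shape[0], n_workers + 1).astype(int)
    jobs = []
    for lo, hi in zip(bounds, bounds[1:]):
        a, b = max(lo - HALO, 0), min(hi + HALO, img.shape[0])
        jobs.append((img[a:b], lo - a, b - hi))
    with Pool(n_workers) as pool:
        return np.vstack(pool.map(_strip_edges, jobs))

# Caveat: Canny's hysteresis step is non-local, so rows near the seams
# can differ slightly from a whole-image run.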
Nelson, Nadine; Szekeres, Karoly; Cooper, Denise; Ghansah, Tomar
2012-06-18
MDSC are a heterogeneous population of immature macrophages, dendritic cells and granulocytes that accumulate in lymphoid organs in pathological conditions including parasitic infection, inflammation, traumatic stress, graft-versus-host disease, diabetes and cancer. In mice, MDSC express Mac-1 (CD11b) and Gr-1 (Ly6G and Ly6C) surface antigens. It is important to note that MDSC are well studied in various tumor-bearing hosts where they are significantly expanded and suppress anti-tumor immune responses compared to naïve counterparts. However, depending on the pathological condition, there are different subpopulations of MDSC with distinct mechanisms and targets of suppression. Therefore, effective methods to isolate viable MDSC populations are important in elucidating their different molecular mechanisms of suppression in vitro and in vivo. Recently, the Ghansah group has reported the expansion of MDSC in a murine pancreatic cancer model. Our tumor-bearing MDSC display a loss of homeostasis and increased suppressive function compared to naïve MDSC. MDSC percentages are significantly lower in lymphoid compartments of naïve vs. tumor-bearing mice. This is a major caveat, which often hinders accurate comparative analyses of these MDSC. Therefore, enriching Gr-1(+) leukocytes from naïve mice prior to Fluorescence Activated Cell Sorting (FACS) enhances purity, viability and significantly reduces sort time. However, enrichment of Gr-1(+) leukocytes from tumor-bearing mice is optional as these are in abundance for quick FACS sorting. Therefore, in this protocol, we describe a highly efficient method of immunophenotyping MDSC and enriching Gr-1(+) leukocytes from spleens of naïve mice for sorting MDSC in a timely manner. Immunocompetent C57BL/6 mice are inoculated with murine Panc02 cells subcutaneously whereas naïve mice receive 1XPBS. Approximately 30 days post inoculation, spleens are harvested and processed into single-cell suspensions using a cell dissociation sieve. Splenocytes are then Red Blood Cell (RBC) lysed and an aliquot of these leukocytes are stained using fluorochrome-conjugated antibodies against Mac-1 and Gr-1 to immunophenotype MDSC percentages using Flow Cytometry. In a parallel experiment, whole leukocytes from naïve mice are stained with fluorescent-conjugated Gr-1 antibodies, incubated with PE-MicroBeads and positively selected using an automated Magnetic Activated Cell Sorting (autoMACS) Pro Separator. Next, an aliquot of Gr-1(+) leukocytes are stained with Mac-1 antibodies to identify the increase in MDSC percentages using Flow Cytometry. Now, these Gr-1(+) enriched leukocytes are ready for FACS sorting of MDSC to be used in comparative analyses (naïve vs. tumor-bearing) in in vivo and in vitro assays.
A Parallel Processing Algorithm for Remote Sensing Classification
NASA Technical Reports Server (NTRS)
Gualtieri, J. Anthony
2005-01-01
A current thread in parallel computation is the use of cluster computers created by networking a few to thousands of commodity general-purpose workstation-level computers using the Linux operating system. For example, on the Medusa cluster at NASA/GSFC, this provides supercomputing performance, 130 Gflops (Linpack Benchmark), at moderate cost, $370K. However, to be useful for scientific computing in the area of Earth science, issues of ease of programming, access to existing scientific libraries, and portability of existing code need to be considered. In this paper, I address these issues in the context of tools for rendering earth science remote sensing data into useful products. In particular, I focus on a problem that can be decomposed into a set of independent tasks, which on a serial computer would be performed sequentially, but with a cluster computer can be performed in parallel, giving an obvious speedup. To make the ideas concrete, I consider the problem of classifying hyperspectral imagery where some ground truth is available to train the classifier. In particular I will use the Support Vector Machine (SVM) approach as applied to hyperspectral imagery. The approach will be to introduce notions about parallel computation and then to restrict the development to the SVM problem. Pseudocode (an outline of the computation) will be described and then details specific to the implementation will be given. Then timing results will be reported to show what speedups are possible using parallel computation. The paper will close with a discussion of the results.
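The task-parallel pattern described here, independent train/evaluate jobs fanned out across workers, can be sketched with a local process pool standing in for the cluster. This Python/scikit-learn fragment parallelizes an SVM hyperparameter grid search; the grid values and helper names are illustrative, not the Medusa implementation.

from functools import partial
from multiprocessing import Pool
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def score_one(cg, X, y):
    # One independent task: train/evaluate an RBF SVM at (C, gamma).
    C, gamma = cg
    clf = SVC(C=C, gamma=gamma, kernel="rbf")
    return C, gamma, cross_val_score(clf, X, y, cv=3).mean()

def grid_search_parallel(X, y, Cs, gammas, n_workers=8):
    grid = [(C, g) for C in Cs for g in gammas]
    with Pool(n_workers) as pool:
        results = pool.map(partial(score_one, X=X, y=y), grid)
    return max(results, key=lambda r: r[2])   # best (C, gamma, score)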
Return on Investment: Ensuring Special Forces Can Fight Another Day
2011-12-01
maximum recommended protein intake, even for weight training and bodybuilding. Most non-vegetarian athletes take in more than this in their normal... diet. 5. Limit fat intake to less than 30% of total calories (1 gram of fat = 9 calories). Items to watch are red meat, peanuts, solid dairy... cereal products every day. 9. Short-term weight reduction diets are generally useless and occasionally dangerous. Lasting weight modification is
Project MANTIS: A MANTle Induction Simulator for coupling geodynamic and electromagnetic modeling
NASA Astrophysics Data System (ADS)
Weiss, C. J.
2009-12-01
A key component of testing geodynamic hypotheses resulting from 3D mantle convection simulations is the ability to easily translate the predicted physicochemical state to the model space relevant for an independent geophysical observation, such as Earth's seismic, geodetic or electromagnetic response. In this contribution, a new parallel code for simulating low-frequency, global-scale electromagnetic induction phenomena is introduced that has the same Earth discretization as the popular CitcomS mantle convection code. Hence, projection of the CitcomS model into the model space of electrical conductivity is greatly simplified, and focuses solely on the node-to-node, physics-based relationship between these Earth parameters without the need for "upscaling", "downscaling", averaging or harmonizing with some other model basis such as spherical harmonics. Preliminary performance tests of the MANTIS code on shared and distributed memory parallel compute platforms show favorable scaling (>70% efficiency) for up to 500 processors. As with CitcomS, an OpenDX visualization widget (VISMAN) is also provided for 3D rendering and interactive interrogation of model results. Details of the MANTIS code will be briefly discussed here, focusing on compatibility with CitcomS modeling, as will be preliminary results in which the electromagnetic response of a CitcomS model is evaluated. Figure caption: VISMAN rendering of an electrical-tomography-derived electrical conductivity model overlain by a 1x1 deg crustal conductivity map. Grey scale represents the log_10 magnitude of conductivity [S/m]. Arrows are horizontal components of a hypothetical magnetospheric source field used to electromagnetically excite the conductivity model.
Yue, Chao; Li, Wen; Reeves, Geoffrey D.; ...
2016-07-01
Interactions between interplanetary (IP) shocks and the Earth's magnetosphere manifest many important space physics phenomena including low-energy ion flux enhancements and particle acceleration. In order to investigate the mechanisms driving shock-induced enhancement of low-energy ion flux, we have examined two IP shock events that occurred when the Van Allen Probes were located near the equator while ionospheric and ground observations were available around the spacecraft footprints. We have found that, associated with the shock arrival, electromagnetic fields intensified, and low-energy ion fluxes, including H+, He+, and O+, were enhanced dramatically in both the parallel and perpendicular directions. During the 2 October 2013 shock event, both parallel and perpendicular flux enhancements lasted more than 20 min with larger fluxes observed in the perpendicular direction. In contrast, for the 15 March 2013 shock event, the low-energy perpendicular ion fluxes increased only in the first 5 min during an impulse of electric field, while the parallel flux enhancement lasted more than 30 min. In addition, ionospheric outflows were observed after shock arrivals. From a simple particle motion calculation, we found that the rapid response of low-energy ions is due to drifts of the plasmaspheric population by the enhanced electric field. Furthermore, the fast acceleration in the perpendicular direction cannot solely be explained by E × B drift but betatron acceleration also plays a role. Adiabatic acceleration may also explain the fast response of the enhanced parallel ion fluxes, while ion outflows may contribute to the enhanced parallel fluxes that last longer than the perpendicular fluxes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yue, Chao; Li, Wen; Reeves, Geoffrey D.
Interactions between interplanetary (IP) shocks and the Earth's magnetosphere manifest many important space physics phenomena including low-energy ion flux enhancements and particle acceleration. In order to investigate the mechanisms driving shock-induced enhancement of low-energy ion flux, we have examined two IP shock events that occurred when the Van Allen Probes were located near the equator while ionospheric and ground observations were available around the spacecraft footprints. We have found that, associated with the shock arrival, electromagnetic fields intensified, and low-energy ion fluxes, including H+, He+, and O+, were enhanced dramatically in both the parallel and perpendicular directions. During the 2 October 2013 shock event, both parallel and perpendicular flux enhancements lasted more than 20 min with larger fluxes observed in the perpendicular direction. In contrast, for the 15 March 2013 shock event, the low-energy perpendicular ion fluxes increased only in the first 5 min during an impulse of electric field, while the parallel flux enhancement lasted more than 30 min. In addition, ionospheric outflows were observed after shock arrivals. From a simple particle motion calculation, we found that the rapid response of low-energy ions is due to drifts of the plasmaspheric population by the enhanced electric field. Furthermore, the fast acceleration in the perpendicular direction cannot solely be explained by E × B drift but betatron acceleration also plays a role. Adiabatic acceleration may also explain the fast response of the enhanced parallel ion fluxes, while ion outflows may contribute to the enhanced parallel fluxes that last longer than the perpendicular fluxes.
Pacanowski, Romain; Salazar Celis, Oliver; Schlick, Christophe; Granier, Xavier; Poulin, Pierre; Cuyt, Annie
2012-11-01
Over the last two decades, much effort has been devoted to accurately measuring Bidirectional Reflectance Distribution Functions (BRDFs) of real-world materials and to using the resulting data efficiently for rendering. Because of their large size, it is difficult to use measured BRDFs directly in real-time applications, and fitting the most sophisticated analytical BRDF models is still a complex task. In this paper, we introduce Rational BRDF, a general-purpose and efficient representation for arbitrary BRDFs, based on Rational Functions (RFs). Using an adapted parametrization, we demonstrate how Rational BRDFs offer 1) a more compact and efficient representation using low-degree RFs, 2) an accurate fitting of measured materials with guaranteed control of the residual error, and 3) efficient importance sampling by applying the same fitting process to determine the inverse of the Cumulative Distribution Function (CDF) generated from the BRDF for use in Monte-Carlo rendering.
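The essence of a rational fit can be shown in one dimension. The Python/NumPy sketch below solves the standard linearized problem p(x_i) - f_i q(x_i) = 0 with q's leading coefficient pinned to 1; the paper fits multivariate rational functions with certified control of the residual, so this is only a conceptual illustration.

import numpy as np

def fit_rational(x, f, deg_p=3, deg_q=3):
    # Fit p(x)/q(x) ~ f(x) by linear least squares. Columns of the
    # Vandermonde matrices run from the highest power down to x^0.
    P = np.vander(x, deg_p + 1)
    Q = np.vander(x, deg_q + 1)[:, 1:]   # x^deg_q coefficient pinned to 1
    A = np.hstack([P, -f[:, None] * Q])
    b = f * x ** deg_q                   # pinned term moved to the RHS
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    p = coef[:deg_p + 1]
    q = np.concatenate([[1.0], coef[deg_p + 1:]])
    return p, q

# Evaluate the fit with np.polyval(p, x) / np.polyval(q, x).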
SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws
NASA Technical Reports Server (NTRS)
Cooke, Daniel; Rushton, Nelson
2013-01-01
With the introduction of new parallel architectures like the cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever. In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single-process) C or C++, and an order of magnitude less costly than development of comparable parallel code. Moreover, SequenceL not only automatically parallelizes the code, but since it is based on CSP-NT, it is provably race free, thus eliminating the largest quality challenge the parallelized software developer faces.
High-Performance 3D Articulated Robot Display
NASA Technical Reports Server (NTRS)
Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy
2011-01-01
In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary widely across different platforms. There can also be diversity in the mechanical mobility, such as wheeled, tracked, or legged mobility over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other sorts of telemetered states visually, such as stress or strains that are measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends its location information (GPS, odometry, or star tracking), and locate the vehicle over or on the terrain correctly. For long traverses over terrain, the visualization can stream in terrain piecewise in order to maintain the current area of interest for the operator without incurring unreasonable resource constraints on the computing platform. The visualization software is designed to run on laptops that can operate in field-testing environments without Internet access, which is a frequently encountered situation when testing in remote locations that simulate planetary environments such as Mars and other planetary bodies.
NASA Astrophysics Data System (ADS)
Arnold, Michael
Calculations have indicated that aligned arrays of semiconducting carbon nanotubes (CNTs) promise to outperform conventional semiconducting materials in short-channel, aggressively scaled field effect transistors (FETs) like those used in semiconductor logic and high frequency amplifier technologies. These calculations have been based on extrapolation of measurements of FETs based on one CNT, in which ballistic transport approaching the quantum conductance limit of 2Go = 4e^2/h has been achieved. However, constraints in CNT sorting, processing, alignment, and contacts give rise to non-idealities when CNTs are implemented in densely-packed parallel arrays, which has resulted in a conductance per CNT far from 2Go. The consequence has been that it has been very difficult to create high performance CNT array FETs, and CNT array FETs have not outperformed but rather underperformed channel materials such as Si by 6x or more. Here, we report nearly ballistic CNT array FETs at a density of 50 CNTs um-1, created via CNT sorting, wafer-scale alignment and assembly, and treatment. The on-state conductance in the arrays is as high as 0.46 Go per CNT, and the conductance of the arrays reaches 1.7 mS um-1, which is 7x higher than previous state-of-the-art CNT array FETs made by other methods. The saturated on-state current density reaches 900 uA um-1 and is similar to or exceeds that of Si FETs when compared at equivalent gate oxide thickness, off-state current density, and channel length. The on-state current density exceeds that of GaAs FETs, as well. This leap in CNT FET array performance is a significant advance towards the exploitation of CNTs in high-performance semiconductor electronics technologies.
Performance comparison analysis library communication cluster system using merge sort
NASA Astrophysics Data System (ADS)
Wulandari, D. A. R.; Ramadhan, M. E.
2018-04-01
Computing began with single processors; to increase computational speed, multi-processor systems were introduced. This second paradigm is known as parallel computing, for example on a cluster. A cluster must have a communication protocol for processing; one of them is the Message Passing Interface (MPI). MPI has several implementations, among them Open MPI and MPICH2. The performance of a cluster machine depends on how well the performance characteristics of the communication library suit the characteristics of the problem, so this study aims to analyze the comparative performance of these libraries in handling a parallel computing process. The case studies in this research are MPICH2 and Open MPI, executing a sorting problem to assess the performance of the cluster system. The sorting problem uses the merge sort method. The research method is to implement Open MPI and MPICH2 on a Linux-based cluster of five virtual machines and then analyze the performance of the system under different test scenarios using three parameters: execution time, speedup and efficiency. The results of this study showed that with each increase in data size, the average speedup and efficiency of both Open MPI and MPICH2 tend to increase, but decrease at large data sizes; an increased data size does not necessarily increase speedup and efficiency, only execution time, as seen for example at a data size of 100,000. At a data size of 1,000, the average execution time was 0.009721 with MPICH2 and 0.003895 with Open MPI; Open MPI can customize its communication to the needs of the workload.
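A cluster merge sort of the kind benchmarked here can be written against mpi4py, which runs over either MPICH2 or Open MPI. This is a minimal sketch with illustrative sizes, not the study's benchmark code; launch it with something like mpiexec -n 4 python merge_sort_mpi.py.

from heapq import merge
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Root generates the data and scatters one chunk per rank.
data = np.random.default_rng(0).integers(0, 10**6, 10**5) if rank == 0 else None
chunk = comm.scatter(np.array_split(data, size) if rank == 0 else None, root=0)

local = sorted(chunk)                  # local sort phase
gathered = comm.gather(local, root=0)  # communication phase
if rank == 0:
    result = list(merge(*gathered))    # k-way merge of the sorted runs
    assert result == sorted(data.tolist())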
Ko, Jina; Yelleswarapu, Venkata; Singh, Anup; Shah, Nishal
2016-01-01
Microfluidic devices can sort immunomagnetically labeled cells with sensitivity and specificity much greater than that of conventional methods, primarily because the size of microfluidic channels and micro-scale magnets can be matched to that of individual cells. However, these small feature sizes come at the expense of limited throughput (ϕ < 5 mL h−1) and susceptibility to clogging, which have hindered current microfluidic technology from processing relevant volumes of clinical samples, e.g. V > 10 mL whole blood. Here, we report a new approach to micromagnetic sorting that can achieve highly specific cell separation in unprocessed complex samples at a throughput (ϕ > 100 mL h−1) 100× greater than that of conventional microfluidics. To achieve this goal, we have devised a new approach to micromagnetic sorting, the magnetic nickel iron electroformed trap (MagNET), which enables high flow rates by having millions of micromagnetic traps operate in parallel. Our design rotates the conventional microfluidic approach by 90° to form magnetic traps at the edges of pores instead of in channels, enabling millions of the magnetic traps to be incorporated into a centimeter sized device. Unlike previous work, where magnetic structures were defined using conventional microfabrication, we take inspiration from soft lithography and create a master from which many replica electroformed magnetic micropore devices can be economically manufactured. These free-standing 12 µm thick permalloy (Ni80Fe20) films contain micropores of arbitrary shape and position, allowing the device to be tailored for maximal capture efficiency and throughput. We demonstrate MagNET's capabilities by fabricating devices with both circular and rectangular pores and use these devices to rapidly (ϕ = 180 mL h−1) and specifically sort rare tumor cells from white blood cells. PMID:27170379
Ontogeny of surface markers on functionally distinct T cell subsets in the chicken.
Traill, K N; Böck, G; Boyd, R L; Ratheiser, K; Wick, G
1984-01-01
Three subsets of chicken peripheral T cells (T1, T2 and T3) have been identified in peripheral blood of adult chickens on the basis of fluorescence intensity after staining with certain xenogeneic anti-thymus cell sera (from turkeys and rabbits). They differentiate between 3-10 weeks of age in parallel with development of responsiveness to the mitogens concanavalin A (Con A), phytohemagglutinin (PHA) and pokeweed mitogen (PWM). Functional tests on the T subsets, sorted with a fluorescence-activated cell sorter, have shown that T2, 3 cells respond to Con A, PHA and PWM and are capable of eliciting a graft-vs.-host reaction (GvHR). In contrast, although T1 cells respond to Con A, they respond poorly to PHA and not at all to PWM or in GvHR. There was some indication of cooperation between T1 and T2,3 cells for the PHA response. Parallels between these chicken subsets and helper and suppressor/cytotoxic subsets in mammalian systems are discussed.
The use of drug-coated balloons in the treatment of femoropopliteal and infrapopliteal disease.
Li, Jun; Karim, Adham; Shishehbor, Mehdi
2018-05-25
While the field of endovascular interventions has evolved in the last decade, technological advancements have rendered drug-coated balloons (DCBs) to be the first line therapy for femoropopliteal artery disease. As the knowledge continues to advance, the application of DCB to the infrapopliteal segments and its role in the setting of plaque modification atherectomy to minimize stent utilization will be further elucidated.
Mahmud, Mufti; Pulizzi, Rocco; Vasilaki, Eleni; Giugliano, Michele
2014-01-01
Micro-Electrode Arrays (MEAs) have emerged as a mature technique to investigate brain (dys)functions in vivo and in in vitro animal models. Often referred to as "smart" Petri dishes, MEAs have demonstrated great potential, particularly for medium-throughput studies in vitro, both in academic and pharmaceutical industrial contexts. Enabling rapid comparison of ionic/pharmacological/genetic manipulations with control conditions, MEAs are employed to screen compounds by monitoring non-invasively the spontaneous and evoked neuronal electrical activity in longitudinal studies, with relatively inexpensive equipment. However, in order to acquire sufficient statistical significance, recordings last up to tens of minutes and generate large amounts of raw data (e.g., 60 channels/MEA, 16-bit A/D conversion, 20 kHz sampling rate: approximately 8 GB per MEA per hour, uncompressed). Thus, when the experimental conditions to be tested are numerous, the availability of fast, standardized, and automated signal preprocessing becomes pivotal for any subsequent analysis and data archiving. To this aim, we developed an in-house cloud-computing system, named QSpike Tools, where CPU-intensive operations required for preprocessing of each recorded channel (e.g., filtering, multi-unit activity detection, spike-sorting, etc.) are decomposed and batch-queued to a multi-core architecture or to a computer cluster. With the commercial availability of new and inexpensive high-density MEAs, we believe that disseminating QSpike Tools might facilitate its wide adoption and customization, and inspire the creation of community-supported cloud-computing facilities for MEA users.
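The per-channel decomposition lends itself to an embarrassingly parallel sketch. The following is a minimal illustration, assuming SciPy and a shared-memory pool rather than QSpike Tools' actual queueing system; the filter band, threshold rule, and function names here are assumptions, not the published pipeline.

```python
# Hypothetical per-channel preprocessing, batch-dispatched across cores.
import numpy as np
from multiprocessing import Pool
from scipy.signal import butter, sosfiltfilt

FS = 20000  # sampling rate, Hz (as in the abstract)

def preprocess_channel(args):
    """Band-pass filter one channel and detect threshold crossings."""
    ch_index, raw = args
    sos = butter(4, [300, 3000], btype="bandpass", fs=FS, output="sos")
    filtered = sosfiltfilt(sos, raw)
    thr = -4.5 * np.median(np.abs(filtered)) / 0.6745  # robust noise estimate
    spike_samples = np.flatnonzero(filtered < thr)     # multi-unit activity
    return ch_index, spike_samples

if __name__ == "__main__":
    # 60 channels of simulated raw data, one minute each
    recording = [(ch, np.random.randn(FS * 60)) for ch in range(60)]
    with Pool() as pool:  # decompose the work across available cores
        results = pool.map(preprocess_channel, recording)
```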
FUEL ELEMENT INTERLOCKING ARRANGEMENT
Fortescue, P.; Nicoll, D.
1963-01-01
This patent relates to a system for mutually interlocking a multiplicity of elongated, parallel, coextensive, upright reactor fuel elements so as to render a laterally self-supporting bundle, while admitting of concurrent, selective, vertical withdrawal of a sizeable number of elements without any of the remaining elements toppling. Each element is provided with a generally rectangular end cap. When a rank of caps is aligned in square contact, each free edge centrally defines an outwardly projecting dovetail, and extremitally cooperates with its adjacent cap by defining a juxtaposed half of a dovetail-receptive mortise. Successive ranks are staggered to afford mating of their dovetails and mortises. (AEC)
High sensitive vectorial B-probe for low frequency plasma waves.
Ullrich, Stefan; Grulke, Olaf; Klinger, Thomas; Rahbarnia, Kian
2013-11-01
A miniaturized multidimensional magnetic probe is developed for application in a low-temperature plasma environment. Its very high sensitivity to low-frequency magnetic field fluctuations with a constant phase run and its very good signal-to-noise ratio, combined with efficient electrostatic pickup rejection, render the probe superior to any commercial solution. A two-step calibration allows for absolute measurement of the amplitude and direction of magnetic field fluctuations. The excellent probe performance is demonstrated by measurements of the parallel current pattern of coherent electrostatic drift wave modes in the VINETA (versatile instrument for studies on nonlinearity, electromagnetism, turbulence, and applications) experiment.
NASA Astrophysics Data System (ADS)
Raphael, David T.; McIntee, Diane; Tsuruda, Jay S.; Colletti, Patrick; Tatevossian, Raymond; Frazier, James
2006-03-01
We explored multiple image processing approaches by which to display the segmented adult brachial plexus in a three-dimensional manner. Magnetic resonance neurography (MRN) 1.5-Tesla scans with STIR sequences, which preferentially highlight nerves, were performed in adult volunteers to generate high-resolution raw images. Using multiple software programs, the raw MRN images were then manipulated so as to achieve segmentation of plexus neurovascular structures, which were incorporated into three different visualization schemes: rotating upper thoracic girdle skeletal frames, dynamic fly-throughs parallel to the clavicle, and thin slab volume-rendered composite projections.
NASA Astrophysics Data System (ADS)
Mizumoto, Ikuro; Tsunematsu, Junpei; Fujii, Seiya
2016-09-01
In this paper, a design method for an output feedback control system with a simple feedforward input is proposed for a combustion model of a diesel engine, based on the almost strict positive realness (ASPR-ness) of the controlled system. A parallel feedforward compensator (PFC) design scheme which renders the resulting augmented controlled system ASPR is also proposed in order to design a stable output feedback control system for the considered combustion model. The effectiveness of the proposed method is confirmed through numerical simulations.
Sound source tracking device for telematic spatial sound field reproduction
NASA Astrophysics Data System (ADS)
Cardenas, Bruno
This research describes an algorithm that localizes sound sources for use in telematic applications. The localization algorithm is based on amplitude differences between various channels of a microphone array of directional shotgun microphones. The amplitude differences will be used to locate multiple performers and reproduce their voices, which were recorded at close distance with lavalier microphones, spatially corrected using a loudspeaker rendering system. In order to track multiple sound sources in parallel the information gained from the lavalier microphones will be utilized to estimate the signal-to-noise ratio between each performer and the concurrent performers.
Tile-based parallel coordinates and its application in financial visualization
NASA Astrophysics Data System (ADS)
Alsakran, Jamal; Zhao, Ye; Zhao, Xinlei
2010-01-01
Parallel coordinates technique has been widely used in information visualization applications, and it has achieved great success in visualizing multivariate data and perceiving their trends. Nevertheless, visual clutter usually weakens or even diminishes its ability when the data size increases. In this paper, we first propose tile-based parallel coordinates, where the plotting area is divided into rectangular tiles. Each tile stores an intersection density that counts the total number of polylines intersecting with that tile. Consequently, the intersection density is mapped to optical attributes, such as color and opacity, by interactive transfer functions. The method visualizes the polylines efficiently and informatively in accordance with the density distribution and thus reduces visual clutter and promotes knowledge discovery. The interactivity of our method allows the user to instantaneously manipulate the tile distribution and the transfer functions. Notably, classic parallel coordinates rendering is a special case of our method in which each tile represents only one pixel. A case study on a real-world data set, U.S. stock mutual fund data from 2006, is presented to show the capability of our method in visually analyzing financial data. The presented visual analysis is conducted by an expert in the domain of finance. Our method has gained support from professionals in the finance field, who embrace it as a potential investment analysis tool for mutual fund managers, financial planners, and investors.
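A minimal sketch of the density accumulation step, assuming records normalized to [0, 1] and straight polyline segments between adjacent axes; the tile resolution, sampling scheme, and gamma-style transfer function are illustrative choices, not the paper's implementation.

```python
import numpy as np

def tile_density(data, nx=64, ny=64):
    """Count polyline crossings per tile.

    data: (n_records, n_axes) array with each variable scaled to [0, 1].
    Returns an (nx, ny) density grid covering the plotting area.
    """
    n_records, n_axes = data.shape
    density = np.zeros((nx, ny))
    axis_x = np.linspace(0.0, 1.0, n_axes)               # x position of each axis
    t = np.linspace(0.0, 1.0, 4 * nx // (n_axes - 1))    # samples per segment
    for a in range(n_axes - 1):
        xs = axis_x[a] + t * (axis_x[a + 1] - axis_x[a])
        ix = np.minimum((xs * nx).astype(int), nx - 1)
        for rec in data:
            ys = rec[a] + t * (rec[a + 1] - rec[a])
            iy = np.minimum((ys * ny).astype(int), ny - 1)
            # count each tile the segment crosses exactly once
            tiles = np.unique(np.stack([ix, iy]), axis=1)
            density[tiles[0], tiles[1]] += 1
    return density

def opacity(density, gamma=0.5):
    """Simple stand-in for an interactive transfer function: density -> [0, 1]."""
    return (density / density.max()) ** gamma
```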
In-Situ Three-Dimensional Shape Rendering from Strain Values Obtained Through Optical Fiber Sensors
NASA Technical Reports Server (NTRS)
Chan, Hon Man (Inventor); Parker, Jr., Allen R. (Inventor)
2015-01-01
A method and system for rendering the shape of a multi-core optical fiber or multi-fiber bundle in three-dimensional space in real time based on measured fiber strain data. Three optical fiber cores are arranged in parallel at 120° intervals about a central axis. A series of longitudinally co-located strain sensor triplets, typically fiber Bragg gratings, are positioned along the length of each fiber at known intervals. A tunable laser interrogates the sensors to detect strain on the fiber cores. Software determines the strain magnitude (ΔL/L) for each fiber at a given triplet, then applies beam theory to calculate curvature, bending angle and torsion of the fiber bundle, and from there determines the shape of the fiber in a Cartesian coordinate system by solving a series of ordinary differential equations expanded from the Frenet-Serret equations. This approach eliminates the need for computationally time-intensive curve-fitting and allows the three-dimensional shape of the optical fiber assembly to be displayed in real time.
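A minimal sketch of the beam-theory step for one sensor triplet, assuming the standard model eps_i = eps_axial + kappa * r * cos(theta_i - theta_bend) for cores at 0°, 120° and 240°; the core offset r and the linear solve are illustrative assumptions, and the subsequent Frenet-Serret integration along the fiber is omitted.

```python
import numpy as np

def curvature_from_triplet(strains, r=70e-6):
    """strains: three delta-L/L values; r: core distance from center (m)."""
    theta = np.radians([0.0, 120.0, 240.0])
    # linear system: eps_i = c0 + c1*cos(theta_i) + c2*sin(theta_i)
    A = np.column_stack([np.ones(3), np.cos(theta), np.sin(theta)])
    c0, c1, c2 = np.linalg.solve(A, np.asarray(strains, dtype=float))
    kappa = np.hypot(c1, c2) / r          # curvature (1/m)
    theta_bend = np.arctan2(c2, c1)       # bending direction (rad)
    return kappa, theta_bend, c0          # c0 is the common axial strain

# Integrating kappa(s) and theta_bend(s) along the fiber through the
# Frenet-Serret ODEs then yields the 3-D shape in Cartesian coordinates.
```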
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, Benjamin S.
The Futility package contains the following: 1) definition of the size of integers and real numbers; 2) a generic unit test harness; 3) definitions for some basic extensions to the Fortran language: arbitrary length strings, a parameter list construct, exception handlers, command line processor, timers; 4) geometry definitions: point, line, plane, box, cylinder, polyhedron; 5) file wrapper functions: standard Fortran input/output files, Fortran binary files, HDF5 files; 6) parallel wrapper functions: MPI and OpenMP abstraction layers, partitioning algorithms; 7) math utilities: BLAS, matrix and vector definitions, linear solver methods and wrappers for other TPLs (PETSc, MKL, etc.), preconditioner classes; 8) misc: random number generator, water saturation properties, sorting algorithms.
Lu, Emily; Elizondo-Riojas, Miguel-Angel; Chang, Jeffrey T; Volk, David E
2014-06-10
Next-generation sequencing results from bead-based aptamer libraries have demonstrated that traditional DNA/RNA alignment software is insufficient. This is particularly true for X-aptamers containing specialty bases (W, X, Y, Z, ...) that are identified by special encoding. Thus, we sought an automated program that uses the inherent design scheme of bead-based X-aptamers to create a hypothetical reference library and Markov modeling techniques to provide improved alignments. Aptaligner provides this feature as well as length error and noise level cutoff features, is parallelized to run on multiple central processing units (cores), and sorts sequences from a single chip into projects and subprojects.
NASA Astrophysics Data System (ADS)
Kwon, Chang Woo; Gihm, Yong Sik
2017-07-01
In the Cretaceous Buan Volcanics (SW Korea), blocky and fluidal peperites developed in a bed of poorly sorted, massive pumiceous lapilli tuff (host sediments) as a result of the vertical to subvertical intrusion of trachyandesitic dikes into the bed. Blocky peperites are composed of polyhedral or platy juvenile clasts with a jigsaw-crack texture. Fluidal peperites are characterized by fluidal or globular juvenile clasts with irregular or ragged margins. The blocky peperites are ubiquitous in the host sediments, whereas the fluidal peperites occur only in fine-grained zones (well-sorted fine to very fine ash) that are aligned parallel to the dike margin. The development of the fine-grained zones within the poorly sorted host sediments is interpreted to result from grain-size segregation caused by upward-moving pore water (fluidization) arising from heat transfer from the intruding magma to the waterlogged host sediments during intrusion. With the release of pore water and the selective entrainment of fine-grained ash, the fine-grained zones formed within the host sediments. Subsequent interactions between the fine-grained zones and the intruding magma resulted in ductile deformation of the magma, which generated fluidal peperites. Outside the fine-grained zones, because of the relative deficiency of both pore water and fine-grained ash, the intruding magma fragmented in a brittle manner, resulting in the formation of blocky peperites. The results of this study suggest that redistribution of constituent particles (ash) and interstitial fluids during fluidization resulted in heterogeneous physical conditions in the host sediments, which influenced peperite-forming processes.
Multi-locus phylogenetics, lineage sorting, and reticulation in Pinus subsection Australes.
Gernandt, David S; Aguirre Dugua, Xitlali; Vázquez-Lobo, Alejandra; Willyard, Ann; Moreno Letelier, Alejandra; Pérez de la Rosa, Jorge A; Piñero, Daniel; Liston, Aaron
2018-04-23
Both incomplete lineage sorting and reticulation have been proposed as causes of phylogenetic incongruence. Disentangling these factors may be most difficult in long-lived, wind-pollinated plants with large population sizes and weak reproductive barriers. We used solution hybridization for targeted enrichment and massive parallel sequencing to characterize low-copy-number nuclear genes and high-copy-number plastomes (Hyb-Seq) in 74 individuals of Pinus subsection Australes, a group of ~30 New World pine species of exceptional ecological and economic importance. We inferred relationships using methods that account for both incomplete lineage sorting and reticulation. Concatenation- and coalescent-based trees inferred from nuclear genes mainly agreed with one another, but they contradicted the plastid DNA tree in recovering the Attenuatae (the California closed-cone pines) and Oocarpae (the egg-cone pines of Mexico and Central America) as monophyletic and the Australes sensu stricto (the southern yellow pines) as paraphyletic to the Oocarpae. The plastid tree featured some relationships that were discordant with morphological and geographic evidence and species limits. Incorporating gene flow into the coalescent analyses better fit the data, but evidence supporting the hypothesis that hybridization explains the non-monophyly of the Attenuatae in the plastid tree was equivocal. Our analyses document cytonuclear discordance in Pinus subsection Australes. We attribute this discordance to ancient and recent introgression and present a phylogenetic hypothesis in which mostly hierarchical relationships are overlain by gene flow. © 2018 The Authors. American Journal of Botany is published by Wiley Periodicals, Inc. on behalf of the Botanical Society of America.
MarsSedEx I and II: Experimental investigation of gravity effects on sedimentation on Mars
NASA Astrophysics Data System (ADS)
Kuhn, N. J.; Kuhn, B.; Gartmann, A.
2014-12-01
Sorting of sedimentary rocks is a proxy for the environmental conditions at the time of deposition, in particular the runoff that moved and deposited the material forming the rocks. Settling of sediment is strongly influenced by the gravity of a planetary body. As a consequence, sorting of a sedimentary rock varies with gravity for a given depth and velocity of surface runoff. Theoretical considerations for spheres indicate that sorting is less uniform on Mars than on Earth for runoff of identical depth. The effects of gravity on flow hydraulics limit the use, on Mars, of common semi-empirical models developed to simulate particle settling in terrestrial environments. Assessing sedimentation patterns on Mars, aimed at identifying strata potentially hosting traces of life, is potentially affected by such uncertainties. Using first-principle approaches, e.g. through Computational Fluid Dynamics, to calculate settling velocities on other planetary bodies requires a large effort and is limited by the values of boundary conditions, e.g. the shape of the particle. The degree of uncertainty resulting from the differences in gravity between Earth and Mars was therefore tested during three reduced-gravity flights, the MarsSedEx I and II missions, conducted in November 2012 and 2013. Nine types of sediment, ranging in size, shape and density, were tested in custom-designed settling tubes during parabolas of Martian gravity lasting 20 to 25 seconds. Based on the observed settling velocities, the uncertainties of empirical relationships developed on Earth for assessing particle settling on Mars are discussed. In addition, the potential effects of reduced gravity on patterns of erosion, transport and sorting of sediment, including the implications for identifying strata bearing traces of past life on Mars, are examined.
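To make the gravity dependence concrete, here is a minimal sketch using the Stokes settling law v = (rho_p - rho_f) * g * d^2 / (18 * mu), which holds only at low Reynolds number; in that regime the Earth/Mars velocity ratio is simply g_Earth/g_Mars (about 2.6), and it is precisely the departure from this regime for larger grains that the terrestrial semi-empirical laws capture poorly. The densities and viscosity below are illustrative values, not mission parameters.

```python
def stokes_settling_velocity(d, g, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3):
    """Terminal velocity (m/s) of a sphere of diameter d (m) under gravity g."""
    return (rho_p - rho_f) * g * d**2 / (18.0 * mu)

G_EARTH, G_MARS = 9.81, 3.71
for d_um in (50, 100, 200):
    d = d_um * 1e-6
    v_e = stokes_settling_velocity(d, G_EARTH)
    v_m = stokes_settling_velocity(d, G_MARS)
    # ratio is constant only while Stokes drag applies
    print(f"{d_um:4d} um: Earth {v_e:.2e} m/s, Mars {v_m:.2e} m/s "
          f"(ratio {v_e / v_m:.2f})")
```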
Fox, Glen; Manley, Marena
2014-01-30
Single kernel (SK) near infrared (NIR) reflectance and transmittance technologies have been developed during the last two decades for a range of cereal grain physical quality and chemical traits, as well as for detecting and predicting levels of toxins produced by fungi. Challenges during the development of single kernel near infrared (SK-NIR) spectroscopy applications include modifying existing NIR technology to present single kernels for scanning, as well as modifying reference methods for the trait of interest. Numerous applications have been developed and cover almost all cereals, although most have been for key traits including moisture, protein, starch and oil in the globally important food grains, i.e. maize, wheat, rice and barley. An additional benefit in developing SK-NIR applications has been to demonstrate the value in sorting grain infected with a fungus or mycotoxins such as deoxynivalenol, fumonisins and aflatoxins. However, there is still a need to develop cost-effective technologies for high-speed sorting that can be used both for small grain samples, such as those from breeding programmes, and for commercial sorting capable of handling tonnes per hour. Development of SK-NIR technologies also includes standardisation of SK reference methods to analyse single kernels. For protein content, the use of the Dumas method would require minimal standardisation; for starch or oil content, considerable development would be required. SK-NIR, including the use of hyperspectral imaging, will improve our understanding of grain quality and the inherent variation in the range of a trait. In the area of food safety, this technology will benefit farmers, industry and consumers if it enables contaminated grain to be removed from the human food chain. © 2013 Society of Chemical Industry.
Critical role of the sorting polymer in carbon nanotube-based minority carrier devices
Mallajosyula, Arun T.; Nie, Wanyi; Gupta, Gautam; ...
2016-11-27
A prerequisite for carbon nanotube-based optoelectronic devices is the ability to sort them into a pure semiconductor phase. One of the most common sorting routes is enabled through using specific wrapping polymers. Here we show that subtle changes in the polymer structure can have a dramatic influence on the figures of merit of a carbon nanotube-based photovoltaic device. By comparing two commonly used polyfluorenes (PFO and PFO-BPy) for wrapping (7,5) and (6,5) chirality SWCNTs, we demonstrate that they have contrasting effects on the device efficiency. We attribute this to the differences in their ability to efficiently transfer charge. Although PFO may act as an efficient interfacial layer at the anode, PFO-BPy, having the additional pyridine side groups, forms a high-resistance layer degrading the device efficiency. By comparing PFO|C60 and C60-only devices, we found that the presence of a PFO layer at low optical densities resulted in the increase of all three solar cell parameters, giving nearly an order of magnitude higher efficiency over that of C60-only devices. In addition, with a relatively higher contribution to photocurrent from the PFO-C60 interface, an open circuit voltage of 0.55 V was obtained for PFO-(7,5)-C60 devices. On the other hand, PFO-BPy does not affect the open circuit voltage but drastically reduces the short circuit current density. Lastly, these results indicate that the charge transport properties and energy levels of the sorting polymers have to be taken into account to fully understand their effect on carbon nanotube-based solar cells.
Efficient sequential and parallel algorithms for record linkage.
Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar
2014-01-01
Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Our sequential and parallel algorithms have been tested on a real dataset of 1,083,878 records and synthetic datasets ranging in size from 50,000 to 9,000,000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm.
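A minimal sketch of two ingredients the paper names: sorting to make exact duplicates adjacent (standing in for radix sorting on selected attributes) and union-find to extract connected components of linked records. The similarity measure and threshold below are assumptions, not the paper's blocking and edit-distance machinery.

```python
from difflib import SequenceMatcher

def dedup(records):
    """Sort records so identical ones become adjacent, then keep one of each."""
    records = sorted(records)               # stand-in for radix sorting
    return [r for i, r in enumerate(records) if i == 0 or r != records[i - 1]]

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]       # path compression
        x = parent[x]
    return x

def link_records(records, threshold=0.85):
    """Union similar record pairs, then group indices by component root."""
    parent = list(range(len(records)))
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if SequenceMatcher(None, records[i], records[j]).ratio() >= threshold:
                parent[find(parent, i)] = find(parent, j)
    groups = {}
    for i in range(len(records)):
        groups.setdefault(find(parent, i), []).append(i)
    return list(groups.values())

records = dedup(["jon smith 1970", "jon smith 1970", "john smith 1970"])
print(link_records(records))                # -> [[0, 1]], one linked cluster
```

The quadratic pair loop is where the real algorithms differ: hierarchical clustering with blocking keeps the comparisons tractable at millions of records.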
Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.
Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias
2011-01-01
The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
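A minimal sketch of the sorted k-mer list data structure and a merge-join over two such lists to find anchor candidates, assuming plain Python strings; the BG/P implementation uses compact encodings and distributed memory, which are not reproduced here, and repeated k-mers are handled only approximately.

```python
def sorted_kmer_list(sequence, k=15):
    """Return (kmer, position) pairs sorted lexicographically by k-mer."""
    kmers = [(sequence[i:i + k], i) for i in range(len(sequence) - k + 1)]
    return sorted(kmers)

def shared_kmers(seq_a, seq_b, k=15):
    """Merge-join two sorted k-mer lists to find alignment anchor candidates."""
    a, b = sorted_kmer_list(seq_a, k), sorted_kmer_list(seq_b, k)
    i = j = 0
    anchors = []
    while i < len(a) and j < len(b):
        if a[i][0] == b[j][0]:
            anchors.append((a[i][1], b[j][1]))   # matching positions
            i += 1
            j += 1
        elif a[i][0] < b[j][0]:
            i += 1
        else:
            j += 1
    return anchors
```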
2017-01-01
Background Palliative care planning for nursing home residents with advanced dementia is often suboptimal. This study compared effects of facilitated case conferencing (FCC) with usual care (UC) on end-of-life care. Methods A two-arm parallel cluster randomised controlled trial was conducted. The sample included people with advanced dementia from 20 Australian nursing homes and their families and professional caregivers. In each intervention nursing home (n = 10), Palliative Care Planning Coordinators (PCPCs) facilitated family case conferences and trained staff in person-centred palliative care for 16 hours per week over 18 months. The primary outcome was family-rated quality of end-of-life care (End-of-Life Dementia [EOLD] Scales). Secondary outcomes included nurse-rated EOLD scales, resident quality of life (Quality of Life in Late-stage Dementia [QUALID]) and quality of care over the last month of life (pharmacological/non-pharmacological palliative strategies, hospitalization or inappropriate interventions). Results Two hundred eighty-six people with advanced dementia took part, but only 131 died (64 in UC and 67 in FCC), which was fewer than anticipated, rendering the primary analysis under-powered, with no group effect seen in EOLD scales. Significant differences in pharmacological (P < 0.01) and non-pharmacological (P < 0.05) palliative management in the last month of life were seen. Intercurrent illness was associated with lower family-rated EOLD Satisfaction with Care (coefficient 2.97, P < 0.05) and lower staff-rated EOLD Comfort Assessment with Dying (coefficient 4.37, P < 0.01). Per-protocol analyses showed positive relationships between EOLD and staff-hours-to-bed ratios, the proportion of residents with dementia, and staff attitudes. Conclusion FCC facilitates a palliative approach to care. Future trials of case conferencing should consider outcomes and processes regarding decision making and planning for anticipated events and acute illness. Trial registration Australian New Zealand Clinical Trial Registry ACTRN12612001164886 PMID:28786995
The Airborne Optical Systems Testbed (AOSTB)
2017-05-31
...assigns an appropriate color to each pixel, displayed in a two-dimensional array. Another method is to render a 3D model from the data and display the model as if... Over the last two decades MIT Lincoln Laboratory (MITLL) has pioneered the development... two-dimensional (2D) grid of detectors. Rather than measuring intensity, as in a conventional camera, these detectors measure the photon time-of...
NASA Astrophysics Data System (ADS)
Wu, J.; Yang, Y.; Luo, Q.; Wu, J.
2012-12-01
This study presents a new hybrid multi-objective evolutionary algorithm, the niched Pareto tabu search combined with a genetic algorithm (NPTSGA), whereby the global search ability of the niched Pareto tabu search (NPTS) is improved by the diversification of candidate solutions arising from the evolving nondominated sorting genetic algorithm II (NSGA-II) population. The NPTSGA, coupled with the commonly used groundwater flow and transport codes MODFLOW and MT3DMS, is developed for multi-objective optimal design of groundwater remediation systems. The proposed methodology is then applied to a large-scale field groundwater remediation system for cleanup of a large trichloroethylene (TCE) plume at the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. Furthermore, a master-slave (MS) parallelization scheme based on the Message Passing Interface (MPI) is incorporated into the NPTSGA to carry out objective function evaluations in a distributed processor environment, which can greatly improve the efficiency of the NPTSGA in finding Pareto-optimal solutions for the real-world application. This study shows that the MS parallel NPTSGA, in comparison with the original NPTS and NSGA-II, can balance the tradeoff between diversity and optimality of solutions during the search process and is an efficient and effective tool for optimizing the multi-objective design of groundwater remediation systems under complicated hydrogeologic conditions.
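A minimal sketch of the master-slave dispatch pattern, assuming mpi4py and a toy two-objective function in place of the MODFLOW/MT3DMS forward simulations; the message tags and population slicing are illustrative. Run with at least two MPI processes.

```python
from mpi4py import MPI

def objectives(x):
    # toy two-objective stand-in for one remediation-design evaluation
    return (x**2, (x - 2.0)**2)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    population = [0.25 * i for i in range(32)]   # candidate designs
    # static dispatch: slice the population across the slave ranks
    for dest in range(1, size):
        comm.send(population[dest - 1::size - 1], dest=dest, tag=1)
    evaluated = []
    for src in range(1, size):
        evaluated.extend(comm.recv(source=src, tag=2))
    print(sorted(evaluated))
else:
    work = comm.recv(source=0, tag=1)
    comm.send([(x, objectives(x)) for x in work], dest=0, tag=2)
```

When each evaluation is an expensive simulation, a dynamic variant (the master handing out one candidate at a time as workers finish) balances load better; the static slicing above keeps the sketch short.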
Spine centerline extraction and efficient spine reading of MRI and CT data
NASA Astrophysics Data System (ADS)
Lorenz, C.; Vogt, N.; Börnert, P.; Brosch, T.
2018-03-01
Radiological assessment of the spine is performed regularly in the context of orthopedics, neurology, oncology, and trauma management. Due to the extension and curved geometry of the spinal column, reading is time-consuming and requires substantial user interaction to navigate through the data during inspection. In this paper a spine-geometry-guided viewing approach is proposed, facilitating reading by reducing the degrees of freedom to be manipulated during inspection of the data. The method uses the spine centerline as a representation of the spine geometry. We assume that the renderings most useful for reading are those that can be locally defined by a rotation and translation relative to the spine centerline. The resulting renderings locally preserve the relation to the spine and lead to curved planar reformats that can be adjusted using a small set of parameters to minimize user interaction. The spine centerline is extracted by an automated image-to-image foveal fully convolutional neural network (FFCN) based approach. The network consists of three parallel convolutional pathways working on different levels of resolution and processed fields of view. The outputs of the parallel pathways are combined by a subsequent feature integration pathway to yield the final centerline probability map, which is converted into a set of spine centerline points. The network has been trained separately on two data set types, one comprising a mixture of T1- and T2-weighted spine MR images and one using CT image data. We achieve an average centerline position error of 1.7 mm for MR and 0.9 mm for CT, and a DICE coefficient of 0.84 for MR and 0.95 for CT. Based on the centerline thus obtained, viewing and multi-planar reformatting can be easily facilitated.
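As a sketch of the post-processing step only: converting the network's probability map into centerline points can be as simple as a per-slice arg-max with a confidence cut-off. This per-slice rule is an assumed simplification; the paper's actual conversion procedure is not specified in the abstract.

```python
import numpy as np

def centerline_points(prob, threshold=0.5):
    """prob: (n_slices, H, W) centerline probability volume from the FFCN.

    Returns ordered (z, y, x) points, one per axial slice that contains a
    sufficiently confident centerline response.
    """
    points = []
    for z in range(prob.shape[0]):
        y, x = np.unravel_index(prob[z].argmax(), prob[z].shape)
        if prob[z, y, x] >= threshold:
            points.append((z, y, x))
    return np.array(points)
```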
Center for the Evaluation of Biomarkers for the Early Detection of Breast Cancer
2009-10-01
Table S1, including mammography (BI-RADS) score and breast density at the last mammogram before diagnosis, lymph node positivity, tumor size, number of... Bibliography: 1. Fletcher RH. Should all people over the age of 50 have regular fecal occult-blood tests? If it works, why not do it? New England Journal... blocked for 15 minutes at room temperature in fluorescence-activated cell sorting (FACS) buffer (PBS, 0.1% bovine serum albumin (BSA), 0.02% sodium...
A User’s/Programmer’s Manual for TWAKE.
1988-05-06
subroutines sorted according to primary function: Input, Output, Utility, Eqn. Solve: LDDOEL CALORD GETBAT ASSMAT EDATA COMOC LINK1 ASMSQ BDINPT DRVBUG LINK2 BANCHO... beginning at the leftmost node (no. 1) and continuing to the last node in that row (no. 19). IBORD LEFT 2 BOTTOM 2 RIGHT 2 TOP 2 DONE LINK1 2 T call... LINK1 3 T GEOMFL: call SUBROUTINE NODELM again to compute element thickness and area from data calculated in GEOMFL. LINK1 2 T NODELM: call SUBROUTINE...
Study on Impact Acoustic—Visual Sensor-Based Sorting of ELV Plastic Materials
Huang, Jiu; Tian, Chuyuan; Ren, Jingwei; Bian, Zhengfu
2017-01-01
This paper concentrates on a study of a novel multi-sensor aided method using acoustic and visual sensors for detection, recognition and separation of End-of-Life vehicles' (ELVs) plastic materials, in order to optimize the recycling rate of automotive shredder residues (ASRs). Sensor-based sorting technologies have been utilized for material recycling for the last two decades. One of the problems still remaining results from black and dark dyed plastics, which are very difficult to recognize using visual sensors. In this paper a new multi-sensor technology for black plastic recognition and sorting using impact resonant acoustic emissions (AEs) and laser triangulation scanning was introduced. A pilot sorting system consisting of a 3-dimensional visual sensor and an acoustic sensor was also established. Two kinds of commonly used vehicle plastics, polypropylene (PP) and acrylonitrile-butadiene-styrene (ABS), and two kinds of modified vehicle plastics, polypropylene/ethylene-propylene-diene-monomer (PP-EPDM) and acrylonitrile-butadiene-styrene/polycarbonate (ABS-PC), were tested. In this study the geometrical features of the tested plastic scraps were measured by the visual sensor, and their corresponding impact acoustic emission (AE) signals were acquired by the acoustic sensor. The signal processing and feature extraction of the visual data as well as the acoustic signals were realized with virtual instruments. Impact acoustic features were recognized using FFT-based power spectral density analysis. The results show that the characteristics of the tested PP and ABS plastics were totally different, but similar to those of their respective modified materials. The probability of scrap material recognition, i.e., the theoretical sorting efficiency, could reach about 50% between PP and PP-EPDM, and about 75% between ABS and ABS-PC, for diameters ranging from 14 mm to 23 mm; with the exclusion of abnormal impacts, the actual separation rates were 39.2% for PP and 41.4% for PP/EPDM scraps, as well as 62.4% for ABS and 70.8% for ABS/PC scraps. Within the diameter range of 8-13 mm, only 25% of PP and 27% of PP/EPDM scraps, as well as 43% of ABS and 47% of ABS/PC scraps, were finally separated. This research proposes a new approach for sensor-aided automatic recognition and sorting of black plastic materials; it is an effective method for ASR reduction and recycling. PMID:28594341
Study on Impact Acoustic-Visual Sensor-Based Sorting of ELV Plastic Materials.
Huang, Jiu; Tian, Chuyuan; Ren, Jingwei; Bian, Zhengfu
2017-06-08
This paper concentrates on a study of a novel multi-sensor aided method using acoustic and visual sensors for detection, recognition and separation of End-of-Life vehicles' (ELVs) plastic materials, in order to optimize the recycling rate of automotive shredder residues (ASRs). Sensor-based sorting technologies have been utilized for material recycling for the last two decades. One of the problems still remaining results from black and dark dyed plastics, which are very difficult to recognize using visual sensors. In this paper a new multi-sensor technology for black plastic recognition and sorting using impact resonant acoustic emissions (AEs) and laser triangulation scanning was introduced. A pilot sorting system consisting of a 3-dimensional visual sensor and an acoustic sensor was also established. Two kinds of commonly used vehicle plastics, polypropylene (PP) and acrylonitrile-butadiene-styrene (ABS), and two kinds of modified vehicle plastics, polypropylene/ethylene-propylene-diene-monomer (PP-EPDM) and acrylonitrile-butadiene-styrene/polycarbonate (ABS-PC), were tested. In this study the geometrical features of the tested plastic scraps were measured by the visual sensor, and their corresponding impact acoustic emission (AE) signals were acquired by the acoustic sensor. The signal processing and feature extraction of the visual data as well as the acoustic signals were realized with virtual instruments. Impact acoustic features were recognized using FFT-based power spectral density analysis. The results show that the characteristics of the tested PP and ABS plastics were totally different, but similar to those of their respective modified materials. The probability of scrap material recognition, i.e., the theoretical sorting efficiency, could reach about 50% between PP and PP-EPDM, and about 75% between ABS and ABS-PC, for diameters ranging from 14 mm to 23 mm; with the exclusion of abnormal impacts, the actual separation rates were 39.2% for PP and 41.4% for PP/EPDM scraps, as well as 62.4% for ABS and 70.8% for ABS/PC scraps. Within the diameter range of 8-13 mm, only 25% of PP and 27% of PP/EPDM scraps, as well as 43% of ABS and 47% of ABS/PC scraps, were finally separated. This research proposes a new approach for sensor-aided automatic recognition and sorting of black plastic materials; it is an effective method for ASR reduction and recycling.
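A minimal sketch of the acoustic branch, assuming SciPy's Welch estimator for the FFT-based power spectral density and a nearest-profile rule over band energies; the band edges, sampling rate, and classifier are illustrative, not the paper's parameters.

```python
import numpy as np
from scipy.signal import welch

FS = 96000  # assumed sampling rate of the acoustic sensor, Hz

def band_features(signal, bands=((1e3, 4e3), (4e3, 8e3), (8e3, 16e3))):
    """Summed PSD energy per frequency band, normalized to unit total."""
    f, psd = welch(signal, fs=FS, nperseg=2048)
    feats = np.array([psd[(f >= lo) & (f < hi)].sum() for lo, hi in bands])
    return feats / feats.sum()   # shape, not loudness, drives the decision

def classify(signal, profiles):
    """profiles: dict mapping material name -> reference feature vector."""
    feats = band_features(signal)
    return min(profiles, key=lambda m: np.linalg.norm(feats - profiles[m]))
```

In practice the reference profiles would be learned from labeled impact recordings of PP, PP-EPDM, ABS and ABS-PC scraps, and the visual sensor's size estimate would gate which profiles apply.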
GPU-completeness: theory and implications
NASA Astrophysics Data System (ADS)
Lin, I.-Jong
2011-01-01
This paper formalizes a major insight into a class of algorithms that relate parallelism and performance. The purpose of this paper is to define a class of algorithms that trades off parallelism for quality of result (e.g. visual quality, compression rate), and we propose a similar method for algorithmic classification based on NP-Completeness techniques, applied toward parallel acceleration. We will define this class of algorithms as "GPU-Complete" and will postulate the necessary properties of an algorithm for admission into this class. We will also formally relate this algorithmic space to the space of imaging algorithms. This concept is based upon our experience in the print production area, where GPUs (Graphic Processing Units) have shown a substantial cost/performance advantage within the context of HP-delivered enterprise services and commercial printing infrastructure. While CPUs and GPUs are converging in their underlying hardware and functional blocks, their system behaviors are clearly distinct in many ways: memory system design, programming paradigms, and massively parallel SIMD architecture. There are applications that are clearly suited to each architecture: for CPU: language compilation, word processing, operating systems, and other applications that are highly sequential in nature; for GPU: video rendering, particle simulation, pixel color conversion, and other problems clearly amenable to massive parallelization. While GPUs are establishing themselves as a second, distinct computing architecture from CPUs, their end-to-end system cost/performance advantage in certain parts of computation informs the structure of algorithms and their efficient parallel implementations. While GPUs are merely one type of architecture for parallelization, we show that their introduction into the design space of printing systems demonstrates the trade-offs against competing multi-core, FPGA, and ASIC architectures. While each architecture has its own optimal application, we believe that the selection of architecture can be defined in terms of properties of GPU-Completeness. For a well-defined subset of algorithms, GPU-Completeness is intended to connect the parallelism, algorithms and efficient architectures into a unified framework to show that multiple layers of parallel implementation are guided by the same underlying trade-off.
Paraskevopoulou, Sivylla E; Wu, Di; Eftekhar, Amir; Constandinou, Timothy G
2014-09-30
This work presents a novel unsupervised algorithm for real-time adaptive clustering of neural spike data (spike sorting). The proposed Hierarchical Adaptive Means (HAM) clustering method combines centroid-based clustering with hierarchical cluster connectivity to classify incoming spikes using groups of clusters. It is described how the proposed method can adaptively track the incoming spike data without requiring any past history, iteration or training, and autonomously determines the number of spike classes. Its performance (classification accuracy) has been tested using multiple datasets (both simulated and recorded), achieving near-identical accuracy compared to k-means (using 10 iterations and provided with the number of spike classes). Its robustness across different feature extraction methods has also been demonstrated by achieving classification accuracies above 80% across multiple datasets. Last but crucially, its low complexity, which has been quantified through both memory and computation requirements, makes this method highly attractive for future hardware implementation. Copyright © 2014 Elsevier B.V. All rights reserved.
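A minimal sketch of the centroid-based half of such an approach, assuming a fixed joining radius: each incoming spike feature vector either joins (and updates) the nearest centroid or seeds a new cluster. HAM's hierarchical cluster-connectivity step and its autonomous choice of class count are omitted, so this illustrates the online-update idea rather than the published algorithm.

```python
import numpy as np

class AdaptiveClusters:
    def __init__(self, radius, lr=0.05):
        self.radius = radius    # max distance to join an existing cluster
        self.lr = lr            # centroid update rate
        self.centroids = []

    def classify(self, spike):
        """Assign one spike feature vector online; returns a cluster id."""
        spike = np.asarray(spike, dtype=float)
        if self.centroids:
            d = [np.linalg.norm(spike - c) for c in self.centroids]
            k = int(np.argmin(d))
            if d[k] <= self.radius:
                # move the winning centroid toward the new spike
                self.centroids[k] += self.lr * (spike - self.centroids[k])
                return k
        self.centroids.append(spike.copy())
        return len(self.centroids) - 1
```

Because no past spikes are stored and each assignment costs one pass over the current centroids, memory and computation stay bounded, which is the property that makes such schemes attractive for hardware implementation.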
Fagoonee, Sharmila; Famulari, Elvira Smeralda; Silengo, Lorenzo; Tolosano, Emanuela; Altruda, Fiorella
2015-01-01
One of the major hurdles in liver gene and cell therapy is availability of ex vivo-expanded hepatocytes. Pluripotent stem cells are an attractive alternative. Here, we show that hepatocyte precursors can be isolated from male germline cell-derived pluripotent stem cells (GPSCs) using the hepatoblast marker, Liv2, and induced to differentiate into hepatocytes in vitro. These cells expressed hepatic-specific genes and were functional as demonstrated by their ability to secrete albumin and produce urea. When transplanted in the liver parenchyma of partially hepatectomised mice, Liv2-sorted cells showed regional and heterogeneous engraftment in the injected lobe. Moreover, approximately 50% of Y chromosome-positive, GPSC-derived cells were found in the female livers, in the region of engraftment, even one month after cell injection. This is the first study showing that Liv2-sorted GPSCs-derived hepatocytes can undergo long lasting engraftment in the mouse liver. Thus, GPSCs might offer promise for regenerative medicine. PMID:26323094
A novel analytical technique suitable for the identification of plastics.
Nečemer, Marijan; Kump, Peter; Sket, Primož; Plavec, Janez; Grdadolnik, Jože; Zvanut, Maja
2013-01-01
The enormous development and production of plastic materials in the last century has resulted in increasing numbers of such objects. A simple and fast technique to classify different types of plastics could be used in many activities dealing with plastic materials, such as packaging of food and sorting of used plastic materials, and also, if the technique is non-destructive, for conservation of plastic artifacts in museum collections, a relatively new field of interest since 1990. In our previous paper we introduced a non-destructive technique for fast identification of unknown plastics based on EDXRF spectrometry [1], using as a case study some plastic artifacts archived in the Museum, in order to show the advantages of the non-destructive identification of plastic material. In order to validate our technique it was necessary to compare its analyses with those of analytical techniques that are more suitable and so far rather widely applied in identifying the most common sorts of plastic materials.
Reconfigurable microfluidic hanging drop network for multi-tissue interaction and analysis.
Frey, Olivier; Misun, Patrick M; Fluri, David A; Hengstler, Jan G; Hierlemann, Andreas
2014-06-30
Integration of multiple three-dimensional microtissues into microfluidic networks enables new insights in how different organs or tissues of an organism interact. Here, we present a platform that extends the hanging-drop technology, used for multi-cellular spheroid formation, to multifunctional complex microfluidic networks. Engineered as completely open, 'hanging' microfluidic system at the bottom of a substrate, the platform features high flexibility in microtissue arrangements and interconnections, while fabrication is simple and operation robust. Multiple spheroids of different cell types are formed in parallel on the same platform; the different tissues are then connected in physiological order for multi-tissue experiments through reconfiguration of the fluidic network. Liquid flow is precisely controlled through the hanging drops, which enable nutrient supply, substance dosage and inter-organ metabolic communication. The possibility to perform parallelized microtissue formation on the same chip that is subsequently used for complex multi-tissue experiments renders the developed platform a promising technology for 'body-on-a-chip'-related research.
Analysis of multiple internal reflections in a parallel aligned liquid crystal on silicon SLM.
Martínez, José Luis; Moreno, Ignacio; del Mar Sánchez-López, María; Vargas, Asticio; García-Martínez, Pascuala
2014-10-20
Multiple internal reflection effects on the optical modulation of a commercial reflective parallel-aligned liquid-crystal on silicon (PAL-LCoS) spatial light modulator (SLM) are analyzed. The display is illuminated with different wavelengths and different angles of incidence. Non-negligible Fabry-Perot (FP) effect is observed due to the sandwiched LC layer structure. A simplified physical model that quantitatively accounts for the observed phenomena is proposed. It is shown how the expected pure phase modulation response is substantially modified in the following aspects: 1) a coupled amplitude modulation, 2) a non-linear behavior of the phase modulation, 3) some amount of unmodulated light, and 4) a reduction of the effective phase modulation as the angle of incidence increases. Finally, it is shown that multiple reflections can be useful since the effect of a displayed diffraction grating is doubled on a beam that is reflected twice through the LC layer, thus rendering gratings with doubled phase modulation depth.
GPU accelerated particle visualization with Splotch
NASA Astrophysics Data System (ADS)
Rivi, M.; Gheller, C.; Dykes, T.; Krokos, M.; Dolag, K.
2014-07-01
Splotch is a rendering algorithm for exploration and visual discovery in particle-based datasets coming from astronomical observations or numerical simulations. The strengths of the approach are production of high quality imagery and support for very large-scale datasets through an effective mix of the OpenMP and MPI parallel programming paradigms. This article reports our experiences in re-designing Splotch for exploiting emerging HPC architectures nowadays increasingly populated with GPUs. A performance model is introduced to guide our re-factoring of Splotch. A number of parallelization issues are discussed, in particular relating to race conditions and workload balancing, towards achieving optimal performances. Our implementation was accomplished by using the CUDA programming paradigm. Our strategy is founded on novel schemes achieving optimized data organization and classification of particles. We deploy a reference cosmological simulation to present performance results on acceleration gains and scalability. We finally outline our vision for future work developments including possibilities for further optimizations and exploitation of hybrid systems and emerging accelerators.
Hanson, Marta; Pomata, Gianna
2017-03-01
This essay deals with the medical recipe as an epistemic genre that played an important role in the cross-cultural transmission of knowledge. The article first compares the development of the recipe as a textual form in Chinese and European premodern medical cultures. It then focuses on the use of recipes in the transmission of Chinese pharmacology to Europe in the second half of the seventeenth century. The main sources examined are the Chinese medicinal formulas translated—presumably—by the Jesuit Michael Boym and published in Specimen Medicinae Sinicae (1682), a text that introduced Chinese pulse medicine to Europe. The article examines how the translator rendered the Chinese formulas into Latin for a European audience. Arguably, the translation was facilitated by the fact that the recipe as a distinct epistemic genre had developed, with strong parallels, in both Europe and China. Building on these parallels, the translator used the recipe as a shared textual format that would allow the transfer of knowledge between the two medical cultures.
Tools for Analysis and Visualization of Large Time-Varying CFD Data Sets
NASA Technical Reports Server (NTRS)
Wilhelms, Jane; VanGelder, Allen
1997-01-01
In the second year, we continued to build upon and improve our scanline-based direct volume renderer that we developed in the first year of this grant. This extremely general rendering approach can handle regular or irregular grids, including overlapping multiple grids, and polygon mesh surfaces. It runs in parallel on multi-processors. It can also be used in conjunction with a k-d tree hierarchy, where approximate models and error terms are stored in the nodes of the tree, and approximate fast renderings can be created. We have extended our software to handle time-varying data where the data change but the grid does not, and we are now working on extending it to handle more general time-varying data. We have also developed a new extension of our direct volume renderer that uses automatic decimation of the 3D grid, as opposed to an explicit hierarchy. We explored this alternative approach as being more appropriate for very large data sets, where the extra expense of a tree may be unacceptable. We also describe a new approach to direct volume rendering that uses hardware 3D textures and incorporates lighting effects. Volume rendering using hardware 3D textures is extremely fast, and machines capable of using this technique are becoming more moderately priced. While this technique, at present, is limited to use with regular grids, we are pursuing possible algorithms extending the approach to more general grid types. We have also begun to explore a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH '96. In our initial implementation, we automatically image the volume from 32 equidistant positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation. We are studying whether this will give a quantitative measure of the effects of approximation. We have created new tools for exploring the differences between images produced by various rendering methods. Images created by our software can be stored in the SGI RGB format. Our idtools software reads in a pair of images and compares them using various metrics. The differences of the images using the RGB, HSV, and HSL color models can be calculated and shown. We can also calculate the autocorrelation function and the Fourier transform of the image and image differences. We will explore how these image differences compare in order to find useful metrics for quantifying the success of various visualization approaches. In general, progress was consistent with our research plan for the second year of the grant.
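A minimal sketch of image-difference metrics of the kind described, assuming simple RGB and Fourier-magnitude comparisons; the actual idtools metrics (HSV/HSL differences, autocorrelation) follow the same pattern and are not reproduced here.

```python
import numpy as np

def rgb_difference(img_a, img_b):
    """Mean absolute per-channel difference of two (H, W, 3) float images."""
    return float(np.abs(img_a - img_b).mean())

def spectral_difference(img_a, img_b):
    """Compare 2-D FFT magnitude spectra of the luminance channels."""
    def lum(img):
        return img @ np.array([0.299, 0.587, 0.114])
    fa = np.abs(np.fft.fft2(lum(img_a)))
    fb = np.abs(np.fft.fft2(lum(img_b)))
    return float(np.abs(fa - fb).mean())

# compare a reference rendering against a perturbed stand-in for a
# decimated-grid approximation of it
a = np.random.rand(64, 64, 3)
b = np.clip(a + 0.02 * np.random.randn(64, 64, 3), 0.0, 1.0)
print(rgb_difference(a, b), spectral_difference(a, b))
```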
The Louisiana State University waste-to-energy incinerator
NASA Astrophysics Data System (ADS)
1994-10-01
This proposed action is for cost-shared construction of an incinerator/steam-generation facility at Louisiana State University under the State Energy Conservation Program (SECP). The SECP, created by the Energy Policy and Conservation Act, calls upon DOE to encourage energy conservation, renewable energy, and energy efficiency by providing Federal technical and financial assistance in developing and implementing comprehensive state energy conservation plans and projects. Currently, LSU runs a campus-wide recycling program in order to reduce the quantity of solid waste requiring disposal. This program has removed recyclable paper from the waste stream; however, a considerable quantity of other non-recyclable combustible wastes are produced on campus. Until recently, these wastes were disposed of in the Devil's Swamp landfill (also known as the East Baton Rouge Parish landfill). When this facility reached its capacity, a new landfill was opened a short distance away, and this new site is now used for disposal of the University's non-recyclable wastes. While this new landfill has enough capacity to last for at least 20 years (from 1994), the University has identified the need for a more efficient and effective manner of waste disposal than landfilling. The University also has non-renderable biological and potentially infectious waste materials from the School of Veterinary Medicine and the Student Health Center, primarily the former, whose wastes include animal carcasses and bedding materials. Renderable animal wastes from the School of Veterinary Medicine are sent to a rendering plant. Non-renderable, non-infectious animal wastes currently are disposed of in an existing on-campus incinerator near the School of Veterinary Medicine building.
Predicting story goodness performance from cognitive measures following traumatic brain injury.
Lê, Karen; Coelho, Carl; Mozeiko, Jennifer; Krueger, Frank; Grafman, Jordan
2012-05-01
This study examined the prediction of performance on measures of the Story Goodness Index (SGI; Lê, Coelho, Mozeiko, & Grafman, 2011) from executive function (EF) and memory measures following traumatic brain injury (TBI). It was hypothesized that EF and memory measures would significantly predict SGI outcomes. One hundred sixty-seven individuals with TBI participated in the study. Story retellings were analyzed using the SGI protocol. Three cognitive measures--Delis-Kaplan Executive Function System (D-KEFS; Delis, Kaplan, & Kramer, 2001) Sorting Test, Wechsler Memory Scale--Third Edition (WMS-III; Wechsler, 1997) Working Memory Primary Index (WMI), and WMS-III Immediate Memory Primary Index (IMI)--were entered into a multiple linear regression model for each discourse measure. Two sets of regression analyses were performed, the first with the Sorting Test as the first predictor and the second with it as the last. The first set of regression analyses identified the Sorting Test and IMI as the only significant predictors of performance on measures of the SGI. The second set identified all measures as significant predictors when evaluating each step of the regression function. The cognitive variables predicted performance on the SGI measures, although there were differences in the amount of explained variance. The results (a) suggest that storytelling ability draws on a number of underlying skills and (b) underscore the importance of using discrete cognitive tasks rather than broad cognitive indices to investigate the cognitive substrates of discourse.
Response of bed surface patchiness to reductions in sediment supply
NASA Astrophysics Data System (ADS)
Nelson, Peter A.; Venditti, Jeremy G.; Dietrich, William E.; Kirchner, James W.; Ikeda, Hiroshi; Iseya, Fujiko; Sklar, Leonard S.
2009-06-01
River beds are often arranged into patches of similar grain size and sorting. Patches can be distinguished into "free patches," which are zones of sorted material that move freely, such as bed load sheets; "forced patches," which are areas of sorting forced by topographic controls; and "fixed patches" of bed material rendered immobile through localized coarsening that remain fairly persistent through time. Two sets of flume experiments (one using bimodal, sand-rich sediment and the other using unimodal, sand-free sediment) are used to explore how fixed and free patches respond to stepwise reductions in sediment supply. At high sediment supply, migrating bed load sheets formed even in unimodal, sand-free sediment, yet grain interactions visibly played a central role in their formation. In both sets of experiments, reductions in supply led to the development of fixed coarse patches, which expanded at the expense of finer, more mobile patches, narrowing the zone of active bed load transport and leading to the eventual disappearance of migrating bed load sheets. Reductions in sediment supply decreased the migration rate of bed load sheets and increased the spacing between successive sheets. One-dimensional morphodynamic models of river channel beds generally are not designed to capture the observed variability, but should be capable of capturing the time-averaged character of the channel. When applied to our experiments, a 1-D morphodynamic model (RTe-bookAgDegNormGravMixPW.xls) predicted the bed load flux well, but overpredicted slope changes and was unable to predict the substantial variability in bed load flux (and load grain size) because of the migration of mobile patches. Our results suggest that (1) the distribution of free and fixed patches is primarily a function of sediment supply, (2) the dynamics of bed load sheets are primarily scaled by sediment supply, (3) channels with reduced sediment supply may inherently be unable to transport sediment uniformly across their width, and (4) cross-stream variability in shear stress and grain size can produce potentially large errors in width-averaged sediment flux calculations.
Evidence of bad recycling practices: BFRs in children's toys and food-contact articles.
Guzzonato, A; Puype, F; Harrad, S J
2017-07-19
Brominated flame retardants (BFRs) have been used intentionally in a wide range of plastics, but are now found in an even wider range of such materials (including children's toys and food contact articles) as a result of recycling practices that mix BFR-containing waste plastics with "virgin" materials. In this study Br was quantified in toy and food contact samples on the assumption that its concentration can be used as a metric for BFR contamination. Subsequently, compound specific determination of BFRs was performed to evaluate the validity of the aforementioned assumption, crucial to render rapid, inexpensive, in situ Br determination in non-laboratory environments (such as waste handling facilities) a viable option for sorting wastes according to their BFR content. We report semi-quantitative compound specific BFR concentrations to give an overview of the distribution of individual BFRs in the analyzed samples. Finally, we evaluated the correlations between waste electrical and electronic equipment (WEEE) related substances (Ca, Sb and rare earth elements (REEs)) and Br as a proxy for identifying poor sorting practices in different waste streams. Twenty-six samples of toys, food-contact articles and WEEE were analyzed with a suite of different techniques in order to obtain comprehensive information about their elemental and molecular composition. The information obtained from principal component analysis about WEEE-related compounds provides new insights into the influence of sorting practices on the extent of products' contamination and brings out polymer-related trends in the pollutants' signature. 61% of all samples were Br positive: of these samples, 45% had decaBDE concentrations exceeding the concentration limits for PBDEs and their main constituent polymer was - according to the REE signature of such samples - Acrylonitrile Butadiene Styrene (ABS), uses of which include copying equipment, laptops and computers. The ability to better track chemicals of concern and their trends in products is the main requirement for high-level management and control of material cycles to become non-toxic in the future as proposed in the EU's 7th Environmental Action Plan.
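A hedged sketch of the principal component analysis step described above; the element list and values are illustrative placeholders, not the study's measurements:

```python
# Project samples described by elemental concentrations (Br plus WEEE-related
# markers) onto principal components to look for shared contamination
# signatures, e.g., Br co-varying with Sb (flame-retardant synergist) or REEs.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

elements = ["Br", "Sb", "Ca", "Ce", "La"]     # markers discussed above
X = np.abs(np.random.default_rng(1).normal(size=(26, len(elements))))

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
# Samples close together in `scores` share a similar elemental signature.
print(scores[:3])
```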
MaCaulay, S Lance; Stoichevska, Violet; Grusovin, Julian; Gough, Keith H; Castelli, Laura A; Ward, Colin W
2003-01-01
SNX9 (sorting nexin 9) is one member of a family of proteins implicated in protein trafficking. This family is characterized by a unique PX (Phox homology) domain that includes a proline-rich sequence and an upstream phospholipid binding domain. Many sorting nexins, including SNX9, also have a C-terminal coiled region. SNX9 additionally has an N-terminal SH3 (Src homology 3) domain. Here we have investigated the cellular localization of SNX9 and the potential role it plays in insulin action. SNX9 had a cytosolic and punctate distribution, consistent with endosomal and cytosolic localization, in 3T3L1 adipocytes. It was excluded from the nucleus. The SH3 domain was responsible, at least in part, for the membrane localization of SNX9, since expression of an SH3-domain-deleted GFP (green fluorescent protein)-SNX9 fusion protein in HEK293T cells rendered the protein cytosolic. Membrane localization may also be attributed in part to the PX domain, since in vitro phospholipid binding studies demonstrated SNX9 binding to polyphosphoinositides. Insulin induced movement of SNX9 to membrane fractions from the cytosol. A GST (glutathione S-transferase)-SNX9 fusion protein was associated with IGF1 (insulin-like growth factor 1) and insulin receptors in vitro. A GFP-SNX9 fusion protein, overexpressed in 3T3L1 adipocytes, co-immunoprecipitated with insulin receptors. Furthermore, overexpression of this GFP-SNX9 fusion protein in CHOT cells decreased insulin binding, consistent with a role for SNX9 in the trafficking of insulin receptors. Microinjection of 3T3L1 cells with an antibody against SNX9 inhibited stimulation by insulin of GLUT4 translocation. These results support the involvement of SNX9 in insulin action, via an influence on the processing/trafficking of insulin receptors. A secondary role in regulation of the cellular processing, transport and/or subcellular localization of GLUT4 is also suggested. PMID:12917015
LOLAweb: a containerized web server for interactive genomic locus overlap enrichment analysis.
Nagraj, V P; Magee, Neal E; Sheffield, Nathan C
2018-06-06
The past few years have seen an explosion of interest in understanding the role of regulatory DNA. This interest has driven large-scale production of functional genomics data and analytical methods. One popular analysis is to test for enrichment of overlaps between a query set of genomic regions and a database of region sets. In this way, new genomic data can be easily connected to annotations from external data sources. Here, we present an interactive interface for enrichment analysis of genomic locus overlaps using a web server called LOLAweb. LOLAweb accepts a set of genomic ranges from the user and tests it for enrichment against a database of region sets. LOLAweb renders results in an R Shiny application to provide interactive visualization features, enabling users to filter, sort, and explore enrichment results dynamically. LOLAweb is built and deployed in a Linux container, making it scalable to many concurrent users on our servers and also enabling users to download and run LOLAweb locally.
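A minimal sketch of the statistic that underlies this sort of overlap enrichment (a one-sided Fisher's exact test on a 2x2 overlap table); the counts are placeholders, and LOLA's actual implementation may differ in detail:

```python
# Is the query region set enriched for overlaps with one database region set?
from scipy.stats import fisher_exact

query_hits, query_misses = 120, 380          # query regions overlapping / not
universe_hits, universe_misses = 900, 9100   # remaining universe regions

odds_ratio, p_value = fisher_exact(
    [[query_hits, query_misses], [universe_hits, universe_misses]],
    alternative="greater",  # test for enrichment, not depletion
)
print(odds_ratio, p_value)
```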
In Vitro Assays for Mouse Müller Cell Phenotyping Through microRNA Profiling in the Damaged Retina.
Reyes-Aguirre, Luis I; Quintero, Heberto; Estrada-Leyva, Brenda; Lamas, Mónica
2018-01-01
microRNA profiling has identified cell-specific expression patterns that could represent molecular signatures triggering the acquisition of a specific phenotype; in other words, of cellular identity and its associated function. Several groups have hypothesized that retinal cell phenotyping could be achieved through the determination of the global pattern of miRNA expression across specific cell types in the adult retina. This is especially relevant for Müller glia in the context of retinal damage, as these cells undergo dramatic changes of gene expression in response to injury that render them susceptible to acquiring a progenitor-like phenotype and becoming a source of new neurons. We describe a method that combines an experimental protocol for excitotoxic-induced retinal damage through N-methyl-D-aspartate subretinal injection with magnetic-activated cell sorting (MACS) of Müller cells and RNA isolation for microRNA profiling. Comparison of microRNA patterns of expression should allow Müller cell phenotyping under different experimental conditions.
Disruption of lysosome function promotes tumor growth and metastasis in Drosophila.
Chi, Congwu; Zhu, Huanhu; Han, Min; Zhuang, Yuan; Wu, Xiaohui; Xu, Tian
2010-07-09
Lysosome function is essential to many physiological processes. It has been suggested that deregulation of lysosome function could contribute to cancer. Through a genetic screen in Drosophila, we have discovered that mutations disrupting lysosomal degradation pathway components contribute to tumor development and progression. Loss-of-function mutations in the Class C vacuolar protein sorting (VPS) gene, deep orange (dor), dramatically promote tumor overgrowth and invasion of the Ras(V12) cells. Knocking down either of the two other components of the Class C VPS complex, carnation (car) and vps16A, also renders Ras(V12) cells capable of uncontrolled growth and metastatic behavior. Finally, chemical disruption of the lysosomal function by feeding animals with antimalarial drugs, chloroquine or monensin, leads to malignant tumor growth of the Ras(V12) cells. Taken together, our data provide evidence for a causative role of lysosome dysfunction in tumor growth and invasion and indicate that members of the Class C VPS complex behave as tumor suppressors.
Visualization assisted by parallel processing
NASA Astrophysics Data System (ADS)
Lange, B.; Rey, H.; Vasques, X.; Puech, W.; Rodriguez, N.
2011-01-01
This paper discusses experimental results for our visualization model for data extracted from sensors. The objective is to find a computationally efficient method to produce a real-time rendering of a large amount of data. We develop a visualization method to monitor the temperature variance of a data center. Sensors are placed on three layers and do not cover the whole room. We use a particle paradigm to interpolate the sensor data: particles model the "space" of the room. In this work we partition the particle set using two mathematical methods, Delaunay triangulation and Voronoï cells, both presented by Avis and Bhattacharya. Particles provide information on the room temperature at different coordinates over time. To locate and update particle data we define a computational cost function. To evaluate this function efficiently, we use a client-server paradigm: the server computes the data and clients display it on different kinds of hardware. This paper is organized as follows. The first part presents related algorithms used to visualize large flows of data. The second part presents the different platforms and methods that were evaluated in order to determine the best solution for the proposed task. The benchmark measures the computational cost of our algorithm, based on locating particles relative to sensors and on updating particle values. The benchmark was run on a personal computer using single-core CPU, multi-core, GPU, and hybrid GPU/CPU implementations. GPU programming is a growing method in this research field; it allows real-time rendering instead of precomputed rendering. To improve our results, we also ran our algorithm on a High Performance Computing (HPC) cluster; this benchmark was used to improve the multi-core method. HPC is commonly used in data visualization (astronomy, physics, etc.) to improve rendering and achieve real-time performance.
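A minimal sketch of the nearest-sensor (Voronoi) assignment step described above; the geometry and readings are invented, and SciPy's KD-tree stands in for the paper's own data structures:

```python
# Each particle filling the room takes its temperature from the nearest
# sensor, i.e., membership in that sensor's Voronoi cell.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
sensors = rng.uniform(0, 10, size=(48, 3))       # three sparse sensor layers
temperatures = rng.uniform(18, 35, size=48)      # readings at one time step
particles = rng.uniform(0, 10, size=(100_000, 3))

_, nearest = cKDTree(sensors).query(particles)   # Voronoi cell membership
particle_temp = temperatures[nearest]            # value to render per particle
```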
A 2D MTF approach to evaluate and guide dynamic imaging developments.
Chao, Tzu-Cheng; Chung, Hsiao-Wen; Hoge, W Scott; Madore, Bruno
2010-02-01
As the number and complexity of partially sampled dynamic imaging methods continue to increase, reliable strategies to evaluate performance may prove most useful. In the present work, an analytical framework to evaluate given reconstruction methods is presented. A perturbation algorithm allows the proposed evaluation scheme to perform robustly without requiring knowledge about the inner workings of the method being evaluated. A main output of the evaluation process consists of a two-dimensional modulation transfer function, an easy-to-interpret visual rendering of a method's ability to capture all combinations of spatial and temporal frequencies. Approaches to evaluate noise properties and artifact content at all spatial and temporal frequencies are also proposed. One fully sampled phantom and three fully sampled cardiac cine datasets were subsampled (R = 4 and 8) and reconstructed with the different methods tested here. A hybrid method, which combines the main advantageous features observed in our assessments, was proposed and tested in a cardiac cine application, with acceleration factors of 3.5 and 6.3 (skip factors of 4 and 8, respectively). This approach combines features from methods such as k-t sensitivity encoding, unaliasing by Fourier encoding the overlaps in the temporal dimension-sensitivity encoding, generalized autocalibrating partially parallel acquisition, sensitivity profiles from an array of coils for encoding and reconstruction in parallel, self, hybrid referencing with unaliasing by Fourier encoding the overlaps in the temporal dimension and generalized autocalibrating partially parallel acquisition, and generalized autocalibrating partially parallel acquisition-enhanced sensitivity maps for sensitivity encoding reconstructions.
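A black-box sketch of the perturbation idea described above: probe the reconstruction with one spatiotemporal frequency at a time and record the transfer at that frequency. The identity "reconstruction" is a placeholder for whatever method is under test:

```python
# Build a 2-D MTF over spatial frequency k and temporal frequency f by probing
# a black-box reconstruction with complex exponentials in x-t space.
import numpy as np

nx, nt = 64, 32

def reconstruct(xt):        # stand-in for the method being evaluated
    return xt               # an ideal method passes everything (MTF = 1)

x, t = np.meshgrid(np.arange(nx), np.arange(nt), indexing="ij")
mtf = np.zeros((nx, nt))
for k in range(nx):
    for f in range(nt):
        probe = np.exp(2j * np.pi * (k * x / nx + f * t / nt))
        out = reconstruct(probe)
        # projection of the output onto the probe = transfer at (k, f)
        mtf[k, f] = np.abs(np.vdot(probe, out)) / np.vdot(probe, probe).real
```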
New views of granular mass flows
Iverson, R.M.; Vallance, J.W.
2001-01-01
Concentrated grain-fluid mixtures in rock avalanches, debris flows, and pyroclastic flows do not behave as simple materials with fixed rheologies. Instead, rheology evolves as mixture agitation, grain concentration, and fluid-pressure change during flow initiation, transit, and deposition. Throughout a flow, however, normal forces on planes parallel to the free upper surface approximately balance the weight of the superincumbent mixture, and the Coulomb friction rule describes bulk intergranular shear stresses on such planes. Pore-fluid pressure can temporarily or locally enhance mixture mobility by reducing Coulomb friction and transferring shear stress to the fluid phase. Initial conditions, boundary conditions, and grain comminution and sorting can influence pore-fluid pressures and cause variations in flow dynamics and deposits.
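In compact form, the stress relations described above read as follows (a sketch in standard notation; the symbols are mine, not the authors'):

```latex
% Depth-averaged stresses on a plane parallel to the free surface, at depth h
% in a flow inclined at angle \theta, with mixture density \rho:
\[
  \sigma \approx \rho\, g\, h \cos\theta, \qquad
  \tau = (\sigma - p)\tan\varphi,
\]
% i.e., normal stress balances the overburden weight, and Coulomb shear
% strength falls as pore-fluid pressure p rises; as p approaches \sigma the
% mixture liquefies and shear stress transfers to the fluid phase.
```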
Plana-Ruiz, S; Portillo, J; Estradé, S; Peiró, F; Kolb, Ute; Nicolopoulos, S
2018-06-06
A general method to set illuminating conditions for selectable beam convergence and probe size is presented in this work for Transmission Electron Microscopes (TEM) fitted with µs/pixel fast beam scanning control, (S)TEM, and an annular dark field detector. The case of interest of beam convergence and probe size, which enables diffraction pattern indexation, is then used as a starting point in this work to add 100 Hz precession to the beam while imaging the specimen at a fast rate and keeping the projector system in diffraction mode. The described systematic alignment method for the adjustment of beam precession on the specimen plane while scanning at fast rates is mainly based on the sharpness of the precessed STEM image. The complete alignment method for parallel condition and precession, Quasi-Parallel PED-STEM, is presented in block diagram scheme, as it has been tested on a variety of instruments. The immediate application of this methodology is that it renders the TEM column ready for the acquisition of Precessed Electron Diffraction Tomographies (EDT) as well as for the acquisition of slow Precessed Scanning Nanometer Electron Diffraction (SNED). Examples of the quality of the Precessed Electron Diffraction (PED) patterns and PED-STEM alignment images are presented with corresponding probe sizes and convergence angles. Copyright © 2018. Published by Elsevier B.V.
Homemade Buckeye-Pi: A Learning Many-Node Platform for High-Performance Parallel Computing
NASA Astrophysics Data System (ADS)
Amooie, M. A.; Moortgat, J.
2017-12-01
We report on the "Buckeye-Pi" cluster, the supercomputer developed in The Ohio State University School of Earth Sciences from 128 inexpensive Raspberry Pi (RPi) 3 Model B single-board computers. Each RPi is equipped with a fast quad-core 1.2 GHz ARMv8 64-bit processor, 1GB of RAM, and a 32GB microSD card for local storage. Therefore, the cluster has a total RAM of 128GB distributed across the individual nodes and a flash capacity of 4TB with 512 processors, while it benefits from low power consumption, easy portability, and low total cost. The cluster uses the Message Passing Interface protocol to manage the communications between each node. These features render our platform the most powerful RPi supercomputer to date and suitable for educational applications in high-performance-computing (HPC) and handling of large datasets. In particular, we use the Buckeye-Pi to implement optimized parallel codes in our in-house simulator for subsurface media flows with the goal of achieving a massively-parallelized scalable code. We present benchmarking results for the computational performance across various numbers of RPi nodes. We believe our project could inspire scientists and students to consider the proposed unconventional cluster architecture as a mainstream and a feasible learning platform for challenging engineering and scientific problems.
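A minimal sketch of the MPI programming model such a cluster exercises, written with mpi4py; the pi-integration workload is illustrative, not the group's simulator:

```python
# Run with e.g. `mpiexec -n 512 python sum_pi.py`: each rank integrates its
# slice of 4/(1+x^2) on [0, 1] and the partial sums reduce to pi on rank 0.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 10_000_000
x = (np.arange(rank, n, size) + 0.5) / n        # this rank's sample points
local = np.sum(4.0 / (1.0 + x * x)) / n

total = comm.reduce(local, op=MPI.SUM, root=0)  # message-passing reduction
if rank == 0:
    print(total)                                # ~3.141592...
```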
Anderson, H.L.; Kinnison, W.W.; Lillberg, J.W.
1985-04-30
An apparatus and method for electronically reading planar two-dimensional β-ray emitter-labeled gel electrophoretograms. A single, flat rectangular multiwire proportional chamber is placed in close proximity to the gel and the assembly placed in an intense uniform magnetic field disposed in a perpendicular manner to the rectangular face of the proportional chamber. Beta rays emitted in the direction of the proportional chamber are caused to execute helical motions which substantially preserve knowledge of the coordinates of their origin in the gel. Perpendicularly oriented, parallel wire, parallel plane cathodes electronically sense the location of the β-rays from ionization generated thereby in a detection gas coupled with an electron avalanche effect resulting from the action of a parallel wire anode located therebetween. A scintillator permits the present apparatus to be rendered insensitive when signals are generated from cosmic rays incident on the proportional chamber. Resolution for concentrations of radioactive compounds in the gel exceeds 700 µm. The apparatus and method of the present invention represent a significant improvement over conventional autoradiographic techniques in dynamic range, linearity and sensitivity of data collection. A concentration and position map for gel electrophoretograms having significant concentrations of labeled compounds and/or highly radioactive labeling nuclides can generally be obtained in less than one hour.
Anderson, Herbert L.; Kinnison, W. Wayne; Lillberg, John W.
1987-01-01
Apparatus and method for electronically reading planar two-dimensional β-ray emitter-labeled gel electrophoretograms. A single, flat rectangular multiwire proportional chamber is placed in close proximity to the gel and the assembly placed in an intense uniform magnetic field disposed in a perpendicular manner to the rectangular face of the proportional chamber. Beta rays emitted in the direction of the proportional chamber are caused to execute helical motions which substantially preserve knowledge of the coordinates of their origin in the gel. Perpendicularly oriented, parallel wire, parallel plane cathodes electronically sense the location of the β-rays from ionization generated thereby in a detection gas coupled with an electron avalanche effect resulting from the action of a parallel wire anode located therebetween. A scintillator permits the present apparatus to be rendered insensitive when signals are generated from cosmic rays incident on the proportional chamber. Resolution for concentrations of radioactive compounds in the gel exceeds 700 µm. The apparatus and method of the present invention represent a significant improvement over conventional autoradiographic techniques in dynamic range, linearity and sensitivity of data collection. A concentration and position map for gel electrophoretograms having significant concentrations of labeled compounds and/or highly radioactive labeling nuclides can generally be obtained in less than one hour.
Hagen, Espen; Ness, Torbjørn V; Khosrowshahi, Amir; Sørensen, Christina; Fyhn, Marianne; Hafting, Torkel; Franke, Felix; Einevoll, Gaute T
2015-04-30
New, silicon-based multielectrodes comprising hundreds or more electrode contacts offer the possibility to record spike trains from thousands of neurons simultaneously. This potential cannot be realized unless accurate, reliable automated methods for spike sorting are developed, in turn requiring benchmarking data sets with known ground-truth spike times. We here present a general simulation tool for computing benchmarking data for evaluation of spike-sorting algorithms entitled ViSAPy (Virtual Spiking Activity in Python). The tool is based on a well-established biophysical forward-modeling scheme and is implemented as a Python package built on top of the neuronal simulator NEURON and the Python tool LFPy. ViSAPy allows for arbitrary combinations of multicompartmental neuron models and geometries of recording multielectrodes. Three example benchmarking data sets are generated, i.e., tetrode and polytrode data mimicking in vivo cortical recordings and microelectrode array (MEA) recordings of in vitro activity in salamander retinas. The synthesized example benchmarking data mimics salient features of typical experimental recordings, for example, spike waveforms depending on interspike interval. ViSAPy goes beyond existing methods as it includes biologically realistic model noise, synaptic activation by recurrent spiking networks, finite-sized electrode contacts, and allows for inhomogeneous electrical conductivities. ViSAPy is optimized to allow for generation of long time series of benchmarking data, spanning minutes of biological time, by parallel execution on multi-core computers. ViSAPy is an open-ended tool as it can be generalized to produce benchmarking data for arbitrary recording-electrode geometries and with various levels of complexity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
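In spirit, such benchmarking data amounts to placing known waveforms at known times in modeled noise; a toy version in plain NumPy (not ViSAPy's NEURON/LFPy pipeline, and with an invented spike shape):

```python
# Synthesize an extracellular trace with ground-truth spike times, so a
# sorter's detections can be scored against the truth.
import numpy as np

fs, duration = 32_000, 10.0                     # Hz, seconds
n = int(fs * duration)
rng = np.random.default_rng(3)

t = np.arange(64)
template = -np.exp(-((t - 16) ** 2) / 20.0)     # toy negative spike waveform
truth = np.sort(rng.choice(n - 64, size=80, replace=False))

trace = rng.normal(scale=0.1, size=n)           # model noise (white, here)
for s in truth:
    trace[s:s + 64] += template                 # insert ground-truth spikes
```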
Vuorenpää, Anne; Jørgensen, Trine N.; Newman, Amy H.; Madsen, Kenneth L.; Scheinin, Mika
2016-01-01
The norepinephrine transporter (NET) mediates reuptake of synaptically released norepinephrine in central and peripheral noradrenergic neurons. The molecular processes governing availability of NET in the plasma membrane are poorly understood. Here we use the fluorescent cocaine analogue JHC 1-64, as well as several other approaches, to investigate the trafficking itinerary of NET in live noradrenergic neurons. Confocal imaging revealed extensive constitutive internalization of JHC 1-64-labeled NET in the neuronal somata, proximal extensions and presynaptic boutons. Phorbol 12-myristate 13-acetate increased intracellular accumulation of JHC 1-64-labeled NET and caused a parallel reduction in uptake capacity. Internalized NET strongly colocalized with the “long loop” recycling marker Rab11, whereas less overlap was seen with the “short loop” recycling marker Rab4 and the late endosomal marker Rab7. Moreover, mitigating Rab11 function by overexpression of dominant negative Rab11 impaired NET function. Sorting of NET to the Rab11 recycling compartment was further supported by confocal imaging and reversible biotinylation experiments in transfected differentiated CATH.a cells. In contrast to NET, the dopamine transporter displayed markedly less constitutive internalization and limited sorting to the Rab11 recycling compartment in the differentiated CATH.a cells. Exchange of domains between the two homologous transporters revealed that this difference was determined by non-conserved structural elements in the intracellular N terminus. We conclude that NET displays a distinct trafficking itinerary characterized by continuous shuffling between the plasma membrane and the Rab11 recycling compartment and that the functional integrity of the Rab11 compartment is critical for maintaining proper presynaptic NET function. PMID:26786096
Clinical image processing engine
NASA Astrophysics Data System (ADS)
Han, Wei; Yao, Jianhua; Chen, Jeremy; Summers, Ronald
2009-02-01
Our group provides clinical image processing services to various institutes at NIH. We develop or adapt image processing programs for a variety of applications. However, each program requires a human operator to select a specific set of images and execute the program, as well as store the results appropriately for later use. To improve efficiency, we design a parallelized clinical image processing engine (CIPE) to streamline and parallelize our service. The engine takes DICOM images from a PACS server, sorts and distributes the images to different applications, multithreads the execution of applications, and collects results from the applications. The engine consists of four modules: a listener, a router, a job manager and a data manager. A template filter in XML format is defined to specify the image specification for each application. A MySQL database is created to store and manage the incoming DICOM images and application results. The engine achieves two important goals: reduce the amount of time and manpower required to process medical images, and reduce the turnaround time for responding. We tested our engine on three different applications with 12 datasets and demonstrated that the engine improved the efficiency dramatically.
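A hedged sketch of the router/job-manager pattern described above; the filter fields, application names, and metadata are invented placeholders, not CIPE's actual XML templates:

```python
# Match incoming DICOM series against per-application filters, then dispatch
# matching jobs to a thread pool and collect the results.
from concurrent.futures import ThreadPoolExecutor

FILTERS = {  # stand-ins for the XML template filters
    "colon_cad": {"Modality": "CT", "BodyPartExamined": "COLON"},
    "spine_seg": {"Modality": "MR", "BodyPartExamined": "SPINE"},
}

def route(series_meta):
    """Return the applications whose filter matches this DICOM series."""
    return [app for app, f in FILTERS.items()
            if all(series_meta.get(k) == v for k, v in f.items())]

def run_app(app, series_meta):   # placeholder for launching a processing app
    return app, series_meta["SeriesInstanceUID"]

pool = ThreadPoolExecutor(max_workers=4)        # the multithreaded job manager
incoming = [{"Modality": "CT", "BodyPartExamined": "COLON",
             "SeriesInstanceUID": "1.2.3.4"}]
jobs = [pool.submit(run_app, app, s) for s in incoming for app in route(s)]
results = [j.result() for j in jobs]            # stored by the data manager
```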
Efficient Record Linkage Algorithms Using Complete Linkage Clustering.
Mamun, Abdullah-Al; Aseltine, Robert; Rajasekaran, Sanguthevar
2016-01-01
Datasets from different agencies often contain records for the same individuals. Linking these datasets to identify all the records belonging to the same individuals is a crucial and challenging problem, especially given the large volumes of data. A large number of available algorithms for record linkage are prone to either time inefficiency or low accuracy in finding matches and non-matches among the records. In this paper we propose efficient as well as reliable sequential and parallel algorithms for the record linkage problem employing hierarchical clustering methods. We employ complete linkage hierarchical clustering algorithms to address this problem. In addition to hierarchical clustering, we also use two other techniques: elimination of duplicate records and blocking. Our algorithms use sorting as a sub-routine to identify identical copies of records. We have tested our algorithms on datasets with millions of synthetic records. Experimental results show that our algorithms achieve nearly 100% accuracy. Parallel implementations achieve almost linear speedups. Time complexities of these algorithms do not exceed those of previous best-known algorithms. Our proposed algorithms outperform previous best-known algorithms in terms of accuracy while consuming reasonable run times.
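A minimal sketch of the complete-linkage step described above, on toy records with a simple normalized edit distance; the blocking and duplicate-elimination machinery is omitted:

```python
# Pairwise record distances fed to complete-linkage clustering, cut so that
# every pair inside a cluster is within the match threshold.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

records = ["jon smith 1970", "john smith 1970", "jane doe 1985", "jane d 1985"]

def dist(a, b):
    """Levenshtein distance, normalized by the longer length."""
    m, n = len(a), len(b)
    d = np.arange(n + 1, dtype=float)
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                   prev + (a[i - 1] != b[j - 1]))
    return d[n] / max(m, n)

condensed = [dist(a, b) for i, a in enumerate(records)
             for b in records[i + 1:]]
labels = fcluster(linkage(condensed, method="complete"),
                  t=0.25, criterion="distance")
print(labels)   # e.g. [1 1 2 2]: records 0-1 and 2-3 link to two individuals
```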
Efficient Record Linkage Algorithms Using Complete Linkage Clustering
Mamun, Abdullah-Al; Aseltine, Robert; Rajasekaran, Sanguthevar
2016-01-01
Datasets from different agencies often contain records for the same individuals. Linking these datasets to identify all the records belonging to the same individuals is a crucial and challenging problem, especially given the large volumes of data. A large number of available algorithms for record linkage are prone to either time inefficiency or low accuracy in finding matches and non-matches among the records. In this paper we propose efficient as well as reliable sequential and parallel algorithms for the record linkage problem employing hierarchical clustering methods. We employ complete linkage hierarchical clustering algorithms to address this problem. In addition to hierarchical clustering, we also use two other techniques: elimination of duplicate records and blocking. Our algorithms use sorting as a sub-routine to identify identical copies of records. We have tested our algorithms on datasets with millions of synthetic records. Experimental results show that our algorithms achieve nearly 100% accuracy. Parallel implementations achieve almost linear speedups. Time complexities of these algorithms do not exceed those of previous best-known algorithms. Our proposed algorithms outperform previous best-known algorithms in terms of accuracy while consuming reasonable run times. PMID:27124604
Felton, C A; DeVries, T J
2010-06-01
The objective of this study was to determine the effects of water addition to a high-moisture total mixed ration (TMR) on feed temperature, feed intake, feed sorting behavior, and milk production of dairy cows. Twelve lactating Holstein cows (155.8+/-60.1 DIM), individually fed once daily at 1000 h, were exposed to 3 diets in a Latin square design with 28-d treatment periods. Diets had the same ingredient composition [30.9% corn silage, 30.3% alfalfa haylage, 21.2% high-moisture corn, and 17.6% protein supplement; dry matter (DM) basis] and differed only in DM concentration, which was reduced by the addition of water. Treatment diets averaged 56.3, 50.8, and 44.1% DM. The study was conducted between May and August when environmental temperature was 18.2+/-3.6 degrees C and ambient temperature in the barn was 24.4+/-3.3 degrees C. Dry matter intake (DMI) was monitored for each animal for the last 14 d of each treatment period. For the final 7 d of each period, milk production was monitored, feed temperature and ambient temperature and humidity were recorded (daily at 1000, 1300, and 1600 h), and fresh feed and orts were sampled for determination of sorting. For the final 4 d of each period, milk samples were taken for composition analysis. Samples taken for determining sorting were separated using a Penn State Particle Separator that had 3 screens (19, 8, and 1.18 mm) and a bottom pan, resulting in 4 fractions (long, medium, short, and fine). Sorting was calculated as the actual intake of each particle size fraction expressed as a percentage of the predicted intake of that fraction. Greater amounts of water added to the TMR resulted in greater increases in feed temperature in the hours after feed delivery, greater sorting against long particles, and decreased DMI, reducing the overall intake of starch and neutral detergent fiber. Milk production and composition were not affected by the addition of water to the TMR. Efficiency of production of milk was, however, increased with greater amounts of water added to the TMR. The increases in feed temperature in the hours after feed delivery were enhanced by higher ambient temperatures; this may be indicative of feed spoilage and thus may have contributed to the reduced DMI observed. Overall, these results suggest that the addition of water to high-moisture TMR (less than 60% DM) containing primarily haylage and silage forage sources will not always discourage cows from sorting, but rather may increase this behavior and limit the nutrient consumption of cows, particularly when ambient temperature is high. 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
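A small worked example of the sorting calculation as defined above (invented numbers; values over 100% indicate preferential consumption, under 100% indicate sorting against a fraction):

```python
# Actual intake of each Penn State Particle Separator fraction, expressed as
# a percentage of its predicted intake under no sorting.
fractions = ["long", "medium", "short", "fine"]
offered = {"long": 4.0, "medium": 10.0, "short": 8.0, "fine": 3.0}   # kg DM
refused = {"long": 1.2, "medium": 0.8, "short": 0.5, "fine": 0.1}    # orts

total_intake = sum(offered.values()) - sum(refused.values())
for f in fractions:
    actual = offered[f] - refused[f]
    predicted = offered[f] / sum(offered.values()) * total_intake
    print(f, round(100 * actual / predicted, 1))
# long ~78%: sorted against long particles, as observed in the study.
```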
Wilber 3: A Python-Django Web Application For Acquiring Large-scale Event-oriented Seismic Data
NASA Astrophysics Data System (ADS)
Newman, R. L.; Clark, A.; Trabant, C. M.; Karstens, R.; Hutko, A. R.; Casey, R. E.; Ahern, T. K.
2013-12-01
Since 2001, the IRIS Data Management Center (DMC) WILBER II system has provided a convenient web-based interface for locating seismic data related to a particular event, and requesting a subset of that data for download. Since its launch, both the scale of available data and the technology of web-based applications have developed significantly. Wilber 3 is a ground-up redesign that leverages a number of public and open-source projects to provide an event-oriented data request interface with a high level of interactivity and scalability for multiple data types. Wilber 3 uses the IRIS/Federation of Digital Seismic Networks (FDSN) web services for event data, metadata, and time-series data. Combining a carefully optimized Google Map with the highly scalable SlickGrid data API, the Wilber 3 client-side interface can load tens of thousands of events or networks/stations in a single request, and provide instantly responsive browsing, sorting, and filtering of event and meta data in the web browser, without further reliance on the data service. The server-side of Wilber 3 is a Python-Django application, one of over a dozen developed in the last year at IRIS, whose common framework, components, and administrative overhead represent a massive savings in developer resources. Requests for assembled datasets, which may include thousands of data channels and gigabytes of data, are queued and executed using the Celery distributed Python task scheduler, giving Wilber 3 the ability to operate in parallel across a large number of nodes.
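A hedged sketch of the queue-and-worker pattern described above; the task name, arguments, and broker URL are assumptions for illustration, not Wilber 3's actual code:

```python
# A Celery task that a Django view enqueues; a worker pool executes it.
# Start a worker with: celery -A wilber_sketch worker
from celery import Celery

app = Celery("wilber_sketch", broker="redis://localhost:6379/0")

@app.task
def assemble_dataset(event_id, channels, window_s):
    """Fetch `window_s` seconds of waveform data around `event_id` for each
    requested channel and package the result for download."""
    # ... call FDSN dataselect web services per channel, write an archive ...
    return {"event": event_id, "n_channels": len(channels)}

# A view returns immediately after enqueuing the (possibly gigabyte) request:
# assemble_dataset.delay("usp000jhh2", ["IU.ANMO.00.BHZ"], 3600)
```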
Mahmud, Mufti; Pulizzi, Rocco; Vasilaki, Eleni; Giugliano, Michele
2014-01-01
Micro-Electrode Arrays (MEAs) have emerged as a mature technique to investigate brain (dys)functions in vivo and in in vitro animal models. Often referred to as “smart” Petri dishes, MEAs have demonstrated a great potential particularly for medium-throughput studies in vitro, both in academic and pharmaceutical industrial contexts. Enabling rapid comparison of ionic/pharmacological/genetic manipulations with control conditions, MEAs are employed to screen compounds by monitoring non-invasively the spontaneous and evoked neuronal electrical activity in longitudinal studies, with relatively inexpensive equipment. However, in order to acquire sufficient statistical significance, recordings last up to tens of minutes and generate large amounts of raw data (e.g., 60 channels/MEA, 16-bit A/D conversion, 20 kHz sampling rate: approximately 8 GB/MEA/h uncompressed). Thus, when the experimental conditions to be tested are numerous, the availability of fast, standardized, and automated signal preprocessing becomes pivotal for any subsequent analysis and data archiving. To this aim, we developed an in-house cloud-computing system, named QSpike Tools, where CPU-intensive operations, required for preprocessing of each recorded channel (e.g., filtering, multi-unit activity detection, spike-sorting, etc.), are decomposed and batch-queued to a multi-core architecture or to a computer cluster. With the commercial availability of new and inexpensive high-density MEAs, we believe that disseminating QSpike Tools might facilitate its wide adoption and customization, and inspire the creation of community-supported cloud-computing facilities for MEA users. PMID:24678297
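A toy version of the per-channel batch preprocessing idea: each channel is band-pass filtered and thresholded in parallel on a multi-core machine. The filter band and threshold rule are common choices in the literature, not necessarily QSpike Tools' settings:

```python
import numpy as np
from multiprocessing import Pool
from scipy.signal import butter, sosfiltfilt

FS = 20_000  # Hz sampling rate
SOS = butter(4, [300, 3000], btype="bandpass", fs=FS, output="sos")

def preprocess(channel):
    """Filter one channel and detect threshold crossings (spike candidates)."""
    filtered = sosfiltfilt(SOS, channel)
    thr = -4.5 * np.median(np.abs(filtered)) / 0.6745   # robust noise estimate
    return np.flatnonzero((filtered[1:] < thr) & (filtered[:-1] >= thr))

if __name__ == "__main__":
    data = np.random.default_rng(4).normal(size=(60, FS * 5))  # 60 ch, 5 s
    with Pool() as pool:
        spike_indices = pool.map(preprocess, data)   # one job per channel
```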
Efficient sequential and parallel algorithms for record linkage
Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar
2014-01-01
Background and objective: Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms.
Methods: Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components.
Results: Our sequential and parallel algorithms have been tested on a real dataset of 1 083 878 records and synthetic datasets ranging in size from 50 000 to 9 000 000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes).
Conclusions: We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm. PMID:24154837
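A minimal sketch of the graph idea in the Methods: link records whose distance falls below a match threshold, then take connected components with union-find (the blocking and edit-distance machinery of the paper is omitted):

```python
def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path halving
        i = parent[i]
    return i

def components(n, edges):
    parent = list(range(n))
    for a, b in edges:                  # an edge = "these records match"
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[ra] = rb             # union the two groups
    return [find(parent, i) for i in range(n)]

# Records 0-1 and 1-2 match, so 0, 1, 2 form one linked individual; 3 is alone.
print(components(4, [(0, 1), (1, 2)]))  # e.g. [2, 2, 2, 3]
```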
Software for Acoustic Rendering
NASA Technical Reports Server (NTRS)
Miller, Joel D.
2003-01-01
SLAB is a software system that can be run on a personal computer to simulate an acoustic environment in real time. SLAB was developed to enable computational experimentation in which one can exert low-level control over a variety of signal-processing parameters, related to spatialization, for conducting psychoacoustic studies. Among the parameters that can be manipulated are the number and position of reflections, the fidelity (that is, the number of taps in finite-impulse-response filters), the system latency, and the update rate of the filters. Another goal in the development of SLAB was to provide an inexpensive means of dynamic synthesis of virtual audio over headphones, without need for special-purpose signal-processing hardware. SLAB has a modular, object-oriented design that affords the flexibility and extensibility needed to accommodate a variety of computational experiments and signal-flow structures. SLAB's spatial renderer has a fixed signal-flow architecture corresponding to a set of parallel signal paths from each source to a listener. This fixed architecture can be regarded as a compromise that optimizes efficiency at the expense of complete flexibility. Such a compromise is necessary, given the design goal of enabling computational psychoacoustic experimentation on inexpensive personal computers.
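A toy rendering path in the spirit of the fixed architecture described: each source-to-listener path is a propagation delay plus a short FIR filter, and paths sum at the listener. Delays, tap values, and path count are placeholders, not SLAB's parameters:

```python
import numpy as np

fs = 44_100
def render_path(x, delay_samples, fir_taps, n_out):
    """One source->listener path: propagation delay, then FIR filtering."""
    delayed = np.concatenate([np.zeros(delay_samples), x])
    return np.convolve(delayed, fir_taps)[:n_out]

rng = np.random.default_rng(5)
source = rng.normal(size=fs)                    # 1 s of source signal
n = len(source)
direct = render_path(source, 44, rng.normal(size=32) * 0.10, n)
reflection = render_path(source, 130, rng.normal(size=32) * 0.03, n)
left_ear = direct + reflection                  # parallel paths sum at the ear
```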
TransCut: interactive rendering of translucent cutouts.
Li, Dongping; Sun, Xin; Ren, Zhong; Lin, Stephen; Tong, Yiying; Guo, Baining; Zhou, Kun
2013-03-01
We present TransCut, a technique for interactive rendering of translucent objects undergoing fracturing and cutting operations. As the object is fractured or cut open, the user can directly examine and intuitively understand the complex translucent interior, as well as edit material properties through painting on cross sections and recombining the broken pieces—all with immediate and realistic visual feedback. This new mode of interaction with translucent volumes is made possible with two technical contributions. The first is a novel solver for the diffusion equation (DE) over a tetrahedral mesh that produces high-quality results comparable to the state-of-the-art finite element method (FEM) of Arbree et al. but at substantially higher speeds. This accuracy and efficiency is obtained by computing the discrete divergences of the diffusion equation and constructing the DE matrix using analytic formulas derived for linear finite elements. The second contribution is a multiresolution algorithm to significantly accelerate our DE solver while adapting to the frequent changes in topological structure of dynamic objects. The entire multiresolution DE solver is highly parallel and easily implemented on the GPU. We believe TransCut provides a novel visual effect for heterogeneous translucent objects undergoing fracturing and cutting operations.
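Such a solver ultimately assembles a large sparse diffusion matrix over the mesh and solves a linear system. As a loose, generic stand-in (a graph-Laplacian toy, not the paper's analytic FEM assembly):

```python
# Steady diffusion with absorption on a toy mesh graph: (L + sigma_a I) u = q.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 5                                     # vertices of a toy mesh
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
w = np.ones(len(edges))                   # edge weights (from element geometry)

rows = [a for a, _ in edges] + [b for _, b in edges]
cols = [b for _, b in edges] + [a for a, _ in edges]
W = sp.coo_matrix((np.tile(w, 2), (rows, cols)), shape=(n, n)).tocsr()
L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W   # graph Laplacian

sigma_a = 0.1                             # absorption coefficient
source = np.zeros(n); source[0] = 1.0     # light entering at vertex 0
fluence = spsolve((L + sigma_a * sp.eye(n)).tocsc(), source)
print(fluence)
```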
Accelerating the Original Profile Kernel.
Hamp, Tobias; Goldberg, Tatyana; Rost, Burkhard
2013-01-01
One of the most accurate multi-class protein classification systems continues to be the profile-based SVM kernel introduced by the Leslie group. Unfortunately, its CPU requirements render it too slow for practical applications of large-scale classification tasks. Here, we introduce several software improvements that enable significant acceleration. Using various non-redundant data sets, we demonstrate that our new implementation reaches a maximal speed-up as high as 14-fold for calculating the same kernel matrix. Some predictions are over 200 times faster, rendering the kernel possibly the top contender in terms of its speed-to-performance ratio. Additionally, we explain how to parallelize various computations and provide an integrative program that reduces creating a production-quality classifier to a single program call. The new implementation is available as a Debian package under a free academic license and does not depend on commercial software. For non-Debian based distributions, the source package ships with a traditional Makefile-based installer. Download and installation instructions can be found at https://rostlab.org/owiki/index.php/Fast_Profile_Kernel. Bugs and other issues may be reported at https://rostlab.org/bugzilla3/enter_bug.cgi?product=fastprofkernel.
A case study in evolutionary contingency.
Blount, Zachary D
2016-08-01
Biological evolution is a fundamentally historical phenomenon in which intertwined stochastic and deterministic processes shape lineages with long, continuous histories that exist in a changing world that has a history of its own. The degree to which these characteristics render evolution historically contingent, and evolutionary outcomes thereby unpredictably sensitive to history, has been the subject of considerable debate in recent decades. Microbial evolution experiments have proven among the most fruitful means of empirically investigating the issue of historical contingency in evolution. One such experiment is the Escherichia coli Long-Term Evolution Experiment (LTEE), in which twelve populations founded from the same clone of E. coli have evolved in parallel under identical conditions. Aerobic growth on citrate (Cit(+)), a novel trait for E. coli, evolved in one of these populations after more than 30,000 generations. Experimental replays of this population's evolution from various points in its history showed that the Cit(+) trait was historically contingent upon earlier mutations that potentiated the trait by rendering it mutationally accessible. Here I review this case of evolutionary contingency and discuss what it implies about the importance of historical contingency arising from the core processes of evolution. Copyright © 2015 Elsevier Ltd. All rights reserved.
Development of a novel cell sorting method that samples population diversity in flow cytometry.
Osborne, Geoffrey W; Andersen, Stacey B; Battye, Francis L
2015-11-01
Flow cytometry based electrostatic cell sorting is an important tool in the separation of cell populations. Existing instruments can sort single cells into multi-well collection plates, and keep track of the cell of origin and sorted well location. However, currently single sorted-cell results reflect the population distribution and fail to capture the population diversity. Software was designed that implements a novel sorting approach, "Slice and Dice Sorting," that links a graphical representation of a multi-well plate to logic that ensures that single cells are sampled and sorted from all areas defined by the sort region/s. Therefore the diversity of the total population is captured, and the more frequently occurring or rarer cell types are all sampled. The sorting approach was tested computationally and using functional cell-based assays. Computationally we demonstrate that conventional single cell sorting can sample as little as 50% of the population diversity depending on the population distribution, and that Slice and Dice sorting samples much more of the variety present within a cell population. We then show, by sorting single cells into wells using the Slice and Dice sorting method, that there are cells sorted using this method that would be either rarely sorted or not sorted at all using conventional single cell sorting approaches. The present study demonstrates a novel single cell sorting method that samples much more of the population diversity than current methods. It has implications in clonal selection, stem cell sorting, single cell sequencing and any areas where population heterogeneity is of importance. © 2015 International Society for Advancement of Cytometry.
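A rough sketch of the sampling logic as described: divide the gated 2-D region into a grid of slices and draw one candidate per occupied slice, so rare corners of the gate are sampled rather than only the dense mode. A simple equal-width grid stands in for the instrument's actual region logic:

```python
import numpy as np

rng = np.random.default_rng(6)
events = rng.lognormal(sigma=0.6, size=(50_000, 2))   # two fluorescence params

def grid_index(v, bins):
    """Assign each value to one of `bins` equal-width slices over its range."""
    edges = np.linspace(v.min(), v.max(), bins + 1)[1:-1]
    return np.digitize(v, edges)

bins = 8
cell = grid_index(events[:, 0], bins) * bins + grid_index(events[:, 1], bins)

# One randomly chosen event per occupied slice, to be deposited in its own
# well; conventional sorting would instead sample in proportion to frequency.
picked = [rng.choice(np.flatnonzero(cell == c)) for c in np.unique(cell)]
```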
The last days of Sala al-Din (Saladin) "noble enemy" of the third Crusade.
Mackowiak, Philip A
2010-10-01
Saladin, "noble enemy" of Richard the Lionheart and victor at the battle of Hattin, died suddenly in 1193 A.D. at the age of 56. The clinical information preserved in the historical record is insufficient to render a definitive diagnosis for Saladin's final illness, and yet, it contains enough details to narrow the list of possibilities to just a few and also to critique his treatment in light of the medical concepts of his day.
Understanding Enterprise Systems' Impact(s) on Business Relationships
NASA Astrophysics Data System (ADS)
Ekman, Peter; Thilenius, Peter
Enterprise systems (ESs), i.e. standardized applications supplied by software vendors such as SAP or Oracle, have been extensively employed by companies during the last decade. Today all Fortune 500 companies have, or are in the process of installing, this kind of information system (Seddon et al. 2003). A widespread denotation for these applications is enterprise resource planning (ERP) systems. But the broad use of these software packages in business is rendering this labelling too narrow (Davenport 2000).
[Prevention of hepatitis in dialysis centers. A catalog of recommendations and suggestions. 3].
Thieler, H; Schmidt, U
1979-07-01
This last of three reports on the prevention of hepatitis in dialysis centres deals with the kind and frequency of disinfection measures in the dialysis area, gives advice on the transfer of patients between dialysis centres, and sets out requirements for hepatitis B antigen and antibody testing. Finally, proposals are given concerning the frequency of controls for HBs antigen and anti-HBs and for passive immunisation with anti-HBs-enriched immunoglobulin.
The Maltese-Libyan Entente in the Mediterranean Basin
1977-04-01
last century and most of the present one she has lived on the taxes, duties, and fees exacted for the use of her magnificent harbor at Valletta ... Valletta. With the waning of British imperial power in the aftermath of the Second World War all the accumulated problems of colonialism seemed to emerge ... gainfully employed worked in the Admiralty dockyards in Valletta, 40% of Malta's income from employment came from services rendered
Grote, I; Rosales, J; Baer, D M
1996-11-01
Three preschool children repeatedly did four kinds of sorts with a deck of stimulus cards: a difficult, untaught target sort and three other sorts considered analytic of self-instructing the target performance. The untaught target sort was to find in a deck of cards those matching what two sample cards had in common. Most preschool children must be taught to mediate this problem. The three other kinds of sorts taught skills involved in the target performance or its mediation. As correct self-instructive talk emerged in the target sorts, it was confirmed. The untaught target sorts were interspersed infrequently among the three alternating directly taught skill sorts, to see if accurate target sorts, and accurate self-instructive talk about the target sorts, would emerge as the three skill sorts were mastered. As all the sorts progressed, increasing accuracy was seen first in the skill sorts and then in the untaught target sorts. All three subjects showed subsequent generalization to new target sorts involving other stimulus sets. Correct spontaneous self-instructions about the target sorts increased from near zero at the beginning of the experiment to consistency at its end. Thus the three skill sorts appeared sufficient for the emergence of a self-instructed solution to the previously insoluble target performance.
The Louisiana State University waste-to-energy incinerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-10-26
This proposed action is for cost-shared construction of an incinerator/steam-generation facility at Louisiana State University under the State Energy Conservation Program (SECP). The SECP, created by the Energy Policy and Conservation Act, calls upon DOE to encourage energy conservation, renewable energy, and energy efficiency by providing Federal technical and financial assistance in developing and implementing comprehensive state energy conservation plans and projects. Currently, LSU runs a campus-wide recycling program in order to reduce the quantity of solid waste requiring disposal. This program has removed recyclable paper from the waste stream; however, a considerable quantity of other non-recyclable combustible wastes are produced on campus. Until recently, these wastes were disposed of in the Devil's Swamp landfill (also known as the East Baton Rouge Parish landfill). When this facility reached its capacity, a new landfill was opened a short distance away, and this new site is now used for disposal of the University's non-recyclable wastes. While this new landfill has enough capacity to last for at least 20 years (from 1994), the University has identified the need for a more efficient and effective manner of waste disposal than landfilling. The University also has non-renderable biological and potentially infectious waste materials from the School of Veterinary Medicine and the Student Health Center, primarily the former, whose wastes include animal carcasses and bedding materials. Renderable animal wastes from the School of Veterinary Medicine are sent to a rendering plant. Non-renderable, non-infectious animal wastes currently are disposed of in an existing on-campus incinerator near the School of Veterinary Medicine building.
DoBias, Matthew; Galloro, Vince
2008-12-15
The president-elect sent strong signals that he's serious about reform, with some key appointments last week. The move drew applause from industry executives, including one who said the system is now set on a crash course. "There's no guarantee that someone is going to be able to put Humpty Dumpty back together, but we're never going to know until someone starts sorting through the pieces," says William Atkinson, left, of WakeMed.
Hydrodynamic lift for single cell manipulation in a femtosecond laser fabricated optofluidic chip
NASA Astrophysics Data System (ADS)
Bragheri, Francesca; Osellame, Roberto
2017-08-01
Single cell sorting based either on fluorescence or on mechanical properties has been exploited in recent years in microfluidic devices. Hydrodynamic focusing increases the efficiency of these devices by improving the match between the region of optical analysis and that of cell flow. Here we present a very simple solution, fabricated by femtosecond laser micromachining, that exploits flow laminarity in microfluidic channels to lift the flowing sample to the portion of the channel illuminated by the optical waveguides used for single cell trapping and analysis.
Advances in Parallelization for Large Scale Oct-Tree Mesh Generation
NASA Technical Reports Server (NTRS)
O'Connell, Matthew; Karman, Steve L.
2015-01-01
Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.
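A serial toy of the "top down" refinement step: cells intersecting the body are recursively split until a size target is met, and leaves become the off-body mesh. The parallelism, the "bottom up" pass, and real geometry queries are omitted; the sphere test is a placeholder:

```python
def intersects_body(center, half):
    """Placeholder geometry query: does this cube touch a unit sphere at 0?"""
    d2 = sum(max(abs(c) - half, 0.0) ** 2 for c in center)
    return d2 <= 1.0

def refine(center, half, min_half, leaves):
    if half <= min_half or not intersects_body(center, half):
        leaves.append((center, half))          # keep as a leaf cell
        return
    q = half / 2                               # split into 8 octants
    for dx in (-q, q):
        for dy in (-q, q):
            for dz in (-q, q):
                refine((center[0] + dx, center[1] + dy, center[2] + dz),
                       q, min_half, leaves)

leaves = []
refine((0.0, 0.0, 0.0), 8.0, 0.25, leaves)     # root cell spans [-8, 8]^3
print(len(leaves))                             # fine near the body, coarse far
```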
INVITED TOPICAL REVIEW: Parallel magnetic resonance imaging
NASA Astrophysics Data System (ADS)
Larkman, David J.; Nunes, Rita G.
2007-04-01
Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time-consuming Fourier encoding has reduced acquisition times significantly. This increase in speed came at a time when other approaches to acquisition-time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. We show how to recognize potential failure modes and their associated artefacts. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed, and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed.
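For reference, the SENSE reconstruction named above can be stated compactly (textbook form, not necessarily the review's notation):

```latex
% With reduction factor R, an aliased pixel value a_j measured by coil j
% superposes R true pixel values p_r weighted by coil sensitivities S_{jr}:
\[
  a_j = \sum_{r=1}^{R} S_{jr}\, p_r,
  \qquad
  \hat{\mathbf{p}} = \left(\mathbf{S}^{H}\boldsymbol{\Psi}^{-1}\mathbf{S}\right)^{-1}
                     \mathbf{S}^{H}\boldsymbol{\Psi}^{-1}\mathbf{a},
\]
% where \Psi is the receiver noise covariance. The g-factor,
\[
  g_r = \sqrt{\left[(\mathbf{S}^{H}\boldsymbol{\Psi}^{-1}\mathbf{S})^{-1}\right]_{rr}
              \left[\mathbf{S}^{H}\boldsymbol{\Psi}^{-1}\mathbf{S}\right]_{rr}},
\]
% quantifies the spatially varying noise amplification that sets the
% theoretical limit on acquisition speed discussed in the review.
```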
PACMan to Help Sort Hubble Proposals
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2017-04-01
Every year, astronomers submit over a thousand proposals requesting time on the Hubble Space Telescope (HST). Currently, humans must sort through each of these proposals by hand before sending them off for review. Could this burden be shifted to computers?

A Problem of Volume

Astronomer Molly Peeples gathered stats on the HST submissions sent in last week for the upcoming HST Cycle 25 (the deadline was Friday night), relative to previous years. This year's proposal round broke the record, with over 1200 proposals submitted in total for Cycle 25. [Figure: submission statistics. Credit: Molly Peeples]

Each proposal cycle for HST time attracts on the order of 1100 proposals, accounting for far more HST time than is available. The proposals are therefore carefully reviewed by around 150 international members of the astronomy community during a six-month process to select those with the highest scientific merit.

Ideally, each proposal will be read by reviewers that have scientific expertise relevant to the proposal topic: if a proposal requests HST time to study star formation, for instance, then the reviewers assigned to it should have research expertise in star formation.

How does this matching of proposals to reviewers occur? The current method relies on self-reported categorization of the submitted proposals. This is unreliable, however; proposals are often mis-categorized by submitters due to misunderstanding or ambiguous cases.

As a result, the Science Policies Group at the Space Telescope Science Institute (STScI), which oversees the review of HST proposals, must go through each of the proposals by hand and re-categorize them. The proposals are then matched to reviewers with self-declared expertise in the same category.

With the number of HST proposals on the rise, and the expectation that the upcoming James Webb Space Telescope (JWST) will elicit even more proposals for time than Hubble, scientists at STScI and NASA are now asking: could the human hours necessary for this task be spared? Could a computer program conceivably do this matching instead?

[Figure: Comparison of PACMan's categorization to the manual sorting for HST Cycle 24 proposals. Green: proposals similarly categorized by both. Yellow: proposals whose manual classifications are within the top 60% of sorted PACMan classifications. Red: proposals categorized differently by each. (Strolger et al. 2017)]

Introducing PACMan

Led by Louis-Gregory Strolger (STScI), a team of scientists has developed PACMan: the Proposal Auto-Categorizer and Manager. PACMan is what's known as a naive Bayesian classifier; it's essentially a spam-filtering routine that uses word probabilities to sort proposals into multiple scientific categories and identify people to serve on review panels in those same scientific areas.

PACMan works by looking at the words in a proposal and comparing them to those in a training set of proposals, in this case the previous year's HST proposals, sorted by humans. By using the training set, PACMan learns how to accurately classify proposals.

PACMan then looks up each reviewer on the Astrophysical Data System (ADS) and compiles the abstracts from the reviewer's past 10 years' worth of scientific publications. This text is then evaluated in a similar way to the text of the proposals, determining each reviewer's suitability to evaluate a proposal.

How Did It Do?

[Figure: Comparison of PACMan sorting to manual sorting, specifically for the HST Cycle 24 proposals that were recategorized by the Science Policies Group (SPG) from what the submitter (PI) selected. Of these swaps, 48% would have been predicted by PACMan. (Strolger et al. 2017)]

Strolger and collaborators show that with a training set of one previous cycle's proposals, PACMan correctly categorizes the next cycle's proposals roughly 87% of the time. By increasing the size of the training set to include more past cycles, PACMan's accuracy can be improved up to 95% (though the algorithm will have to be retrained any time the proposal categories change).

PACMan's results were also consistent for reviewers: it found that nearly all of the reviewers (92%) asked to serve in the last cycle were appropriate reviewers for the subject area, based on their ADS publication record.

There are still some hiccups in automating this process; for instance, finding the reviewers on ADS can require human intervention because names are not unique. As the scientific community moves toward persistent and distinguishable identifiers (like ORCIDs), however, this problem will be mitigated.

Strolger and collaborators believe that PACMan demonstrates a promising means of increasing the efficiency and impartiality of the HST proposal sorting process. This tool will likely be used to assist or replace humans in this process in future HST and JWST cycles.

Citation: Louis-Gregory Strolger et al 2017 AJ 153 181. doi:10.3847/1538-3881/aa6112
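As a rough illustration of the classifier type described above (a multinomial naive Bayes "spam filter" over proposal text), here is a minimal scikit-learn sketch; the texts and categories are invented placeholders, not PACMan's code or data:

```python
# Train on last cycle's hand-sorted proposals, then classify new ones by
# per-category word probabilities.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = ["star formation rates in nearby galaxies",
               "exoplanet transit spectroscopy of hot jupiters",
               "stellar populations and star formation histories"]
train_labels = ["star formation", "exoplanets", "star formation"]

model = make_pipeline(CountVectorizer(stop_words="english"), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["transit timing of a jupiter-mass exoplanet"]))
# -> ['exoplanets']; predict_proba exposes the per-category probabilities
```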
Stone, Graham N; Lohse, Konrad; Nicholls, James A; Fuentes-Utrilla, Pablo; Sinclair, Frazer; Schönrogge, Karsten; Csóka, György; Melika, George; Nieves-Aldrey, Jose-Luis; Pujade-Villar, Juli; Tavakoli, Majide; Askew, Richard R; Hickerson, Michael J
2012-03-20
How geographically widespread biological communities assemble remains a major question in ecology. Do parallel population histories allow sustained interactions (such as host-parasite or plant-pollinator) among species, or do discordant histories necessarily interrupt them? Though few empirical data exist, these issues are central to our understanding of multispecies evolutionary dynamics. Here we use hierarchical approximate Bayesian analysis of DNA sequence data for 12 herbivores and 19 parasitoids to reconstruct the assembly of an insect community spanning the Western Palearctic and assess the support for alternative host tracking and ecological sorting hypotheses. We show that assembly occurred primarily by delayed host tracking from a shared eastern origin. Herbivores escaped their enemies for millennia before parasitoid pursuit restored initial associations, with generalist parasitoids no better able to track their hosts than specialists. In contrast, ecological sorting played only a minor role. Substantial turnover in host-parasitoid associations means that coevolution must have been diffuse, probably contributing to the parasitoid generalism seen in this and similar systems. Reintegration of parasitoids after host escape shows these communities to have been unsaturated throughout their history, arguing against major roles for parasitoid niche evolution or competition during community assembly. Copyright © 2012 Elsevier Ltd. All rights reserved.
Lee, Patrick K H; Men, Yujie; Wang, Shanquan; He, Jianzhong; Alvarez-Cohen, Lisa
2015-02-03
Dehalococcoides mccartyi are functionally important bacteria that catalyze the reductive dechlorination of chlorinated ethenes. However, these anaerobic bacteria are fastidious to isolate, making downstream genomic characterization challenging. In order to facilitate genomic analysis, a fluorescence-activated cell sorting (FACS) method was developed in this study to separate D. mccartyi cells from a microbial community, and the DNA of the isolated cells was processed by whole genome amplification (WGA) and hybridized onto a D. mccartyi microarray for comparative genomics against four sequenced strains. First, FACS was successfully applied to a D. mccartyi isolate as a positive control, and then microarray results verified that WGA from 10^6 cells or ~1 ng of genomic DNA yielded high-quality coverage detecting nearly all genes across the genome. As expected, some inter- and intrasample variability in WGA was observed, but these biases were minimized by performing multiple parallel amplifications. Subsequent application of the FACS and WGA protocols to two enrichment cultures containing ~10% and ~1% D. mccartyi cells successfully enabled genomic analysis. As proof of concept, this study demonstrates that coupling FACS with WGA and microarrays is a promising tool to expedite genomic characterization of target strains in environmental communities where the relative concentrations are low.
NASA Astrophysics Data System (ADS)
Han, Qiguo; Zhu, Kai; Shi, Wenming; Wu, Kuayu; Chen, Kai
2018-02-01
To address low-voltage ride-through (LVRT) for the variable-frequency drives (VFDs) of major auxiliary equipment in thermal power plants, a scheme is put forward in which a supercapacitor is connected in parallel with the DC link of the VFD; two variants are proposed, direct parallel support and voltage-boost parallel support. The capacitor values for the relevant motor loads are calculated according to the law of energy conservation and verified by Matlab simulation. Finally, a test prototype was built, and the test results prove the feasibility of the proposed schemes.
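The abstract sizes the supercapacitor from energy conservation but quotes no numbers; the arithmetic for the direct-parallel case can be sketched as below. All parameter values are invented for illustration: the capacitor must cover the ride-through energy while discharging from the nominal DC-link voltage V1 to the minimum voltage V2 the drive tolerates, giving C = 2E / (V1^2 - V2^2).

```python
# Hedged sizing sketch for DC-link supercapacitor LVRT support.
# All numbers are hypothetical, not values from the paper.
P_load = 55e3   # motor power carried through the sag, W (assumed)
t_ride = 0.5    # required ride-through time, s (assumed)
V1 = 540.0      # nominal DC-link voltage, V (assumed)
V2 = 400.0      # minimum DC-link voltage the VFD tolerates, V (assumed)

E = P_load * t_ride            # energy the capacitor must supply, J
C = 2.0 * E / (V1**2 - V2**2)  # from E = C * (V1^2 - V2^2) / 2

print(f"required capacitance: {C:.2f} F")  # about 0.42 F for these numbers
```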
LSIViewer 2.0 - A Client-Oriented Online Visualization Tool for Geospatial Vector Data
NASA Astrophysics Data System (ADS)
Manikanta, K.; Rajan, K. S.
2017-09-01
Geospatial data visualization systems have predominantly been applications that are installed and run in a desktop environment. Over the last decade, with the advent of web technologies and their adoption by the geospatial community, the server-client model for data handling, data rendering and visualization has been the most prevalent approach in Web-GIS. While client devices have become functionally more powerful over recent years, the above model has largely ignored this and remains a server-dominant computing paradigm. In this paper, an attempt has been made to develop and demonstrate LSIViewer, a simple, easy-to-use and robust online geospatial data visualization system for the user's own data that harnesses the client's capabilities for data rendering and user-interactive styling, with a reduced load on the server. The developed system can support multiple geospatial vector formats and can be integrated with other web-based systems like WMS, WFS, etc. The technology stack used to build this system is Node.js on the server side and HTML5 Canvas and JavaScript on the client side. Various tests run on a range of vector datasets, up to 35 MB, showed that the time taken to render the vector data using LSIViewer is comparable to a desktop GIS application, QGIS, on an identical system.
Separation and sorting of cells in microsystems using physical principles
NASA Astrophysics Data System (ADS)
Lee, Gi-Hun; Kim, Sung-Hwan; Ahn, Kihoon; Lee, Sang-Hoon; Park, Joong Yull
2016-01-01
In the last decade, microfabrication techniques have been combined with microfluidics and applied to cell biology. Utilizing such new techniques, various cell studies have been performed for the research of stem cells, immune cells, cancer, neurons, etc. Among the various biological applications of microtechnology-based platforms, cell separation technology has been highly regarded in biological and clinical fields for sorting different types of cells, finding circulating tumor cells (CTCs), and blood cell separation, amongst other things. Many cell separation methods have been created using various physical principles. Representatively, these include hydrodynamic, acoustic, dielectrophoretic, magnetic, optical, and filtering methods. In this review, each of these methods will be introduced, and their physical principles and sample applications described. Each physical principle has its own advantages and disadvantages. The engineers who design the systems and the biologists who use them should understand the pros and cons of each method or principle, to broaden the use of microsystems for cell separation. Continuous development of microsystems for cell separation will lead to new opportunities for diagnosing CTCs and cancer metastasis, as well as other elements in the bloodstream.
Denni Algorithm: An Enhancement of the SMS (Scan, Move and Sort) Algorithm
NASA Astrophysics Data System (ADS)
Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.
2017-12-01
Sorting has been a profound area for algorithmic researchers, and many resources are invested in devising better sorting algorithms. For this purpose, many existing sorting algorithms have been examined in terms of their algorithmic complexity. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting has been considered a fundamental problem in the study of algorithms for many reasons: the need to sort information is inherent in many applications; algorithms often use sorting as a key subroutine; many essential algorithm-design techniques are represented in the body of sorting algorithms; and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one of the algorithms that makes the sorting process more economical and efficient is the SMS (Scan, Move and Sort) algorithm, an enhancement of Quicksort invented by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm. The Denni algorithm is an enhancement of the SMS algorithm in the average and worst cases. The Denni algorithm was compared with the SMS algorithm and the results were promising.
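The abstract reports an empirical comparison but reproduces no code; a minimal harness for the kind of average-case/worst-case timing it describes is sketched below. The quicksort here is a generic stand-in, since the abstract gives pseudocode for neither SMS nor Denni.

```python
# Hedged sketch of an average/worst-case sorting benchmark.
# quicksort is a generic stand-in; SMS and Denni are not reproduced here.
import random
import time

def quicksort(a):
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    return (quicksort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quicksort([x for x in a if x > pivot]))

def bench(sort_fn, data, repeats=5):
    best = float("inf")
    for _ in range(repeats):
        copy = list(data)                 # fresh unsorted input each run
        t0 = time.perf_counter()
        sort_fn(copy)
        best = min(best, time.perf_counter() - t0)
    return best

n = 10_000
cases = {
    "average (random)": [random.random() for _ in range(n)],
    "adversarial (reversed)": list(range(n, 0, -1)),
}
for name, data in cases.items():
    print(f"{name}: builtin={bench(sorted, data):.4f}s "
          f"quicksort={bench(quicksort, data):.4f}s")
```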
Imaging Local Ca2+ Signals in Cultured Mammalian Cells
Lock, Jeffrey T.; Ellefsen, Kyle L.; Settle, Bret; Parker, Ian; Smith, Ian F.
2015-01-01
Cytosolic Ca2+ ions regulate numerous aspects of cellular activity in almost all cell types, controlling processes as wide-ranging as gene transcription, electrical excitability and cell proliferation. The diversity and specificity of Ca2+ signaling derive from mechanisms by which Ca2+ signals are generated to act over different time and spatial scales, ranging from cell-wide oscillations and waves occurring over periods of minutes to local transient Ca2+ microdomains (Ca2+ puffs) lasting milliseconds. Recent advances in electron-multiplying CCD (EMCCD) cameras now allow for imaging of local Ca2+ signals with a 128 x 128 pixel spatial resolution at rates of >500 frames s^-1 (fps). This approach is highly parallel and enables the simultaneous monitoring of hundreds of channels or puff sites in a single experiment. However, the vast amounts of data generated (ca. 1 Gb per min) render visual identification and analysis of local Ca2+ events impracticable. Here we describe and demonstrate the procedures for the acquisition, detection, and analysis of local IP3-mediated Ca2+ signals in intact mammalian cells loaded with Ca2+ indicators using both wide-field epi-fluorescence (WF) and total internal reflection fluorescence (TIRF) microscopy. Furthermore, we describe an algorithm developed within the open-source software environment Python that automates the identification and analysis of these local Ca2+ signals. The algorithm localizes sites of Ca2+ release with sub-pixel resolution; allows user review of data; and outputs time sequences of fluorescence ratio signals together with amplitude and kinetic data in an Excel-compatible table. PMID:25867132
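The authors' detection software is a full package; as a toy flavor of the automated step, thresholding a dF/F0 trace at a candidate puff site might look like the following. The baseline window and threshold multiplier are arbitrary assumptions, not the paper's parameters.

```python
# Toy local-Ca2+ event detector: flag excursions of dF/F0 above a
# noise-scaled threshold. Parameters are illustrative assumptions only.
import numpy as np

def detect_events(trace, baseline_frames=100, k_sigma=4.0):
    """Return (start, peak) frame indices of threshold-crossing events."""
    f0 = np.median(trace[:baseline_frames])   # resting fluorescence
    dff = (trace - f0) / f0                   # dF/F0 ratio signal
    sigma = np.std(dff[:baseline_frames])     # baseline noise level
    above = dff > k_sigma * sigma
    starts = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising edges
    events = []
    for s in starts:
        e = s
        while e < len(above) and above[e]:
            e += 1                            # walk to the end of the event
        events.append((s, s + int(np.argmax(dff[s:e]))))
    return events

trace = 100.0 + 5.0 * np.random.randn(5000)
trace[2000:2040] += 80.0                      # one synthetic puff
print(detect_events(trace))                   # roughly [(2000, peak_frame)]
```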
Rmax: A systematic approach to evaluate instrument sort performance using center stream catch
Riddell, Andrew; Gardner, Rui; Perez-Gonzalez, Alexis; Lopes, Telma; Martinez, Lola
2015-01-01
Sorting performance can be evaluated with regard to Purity, Yield and/or Recovery of the sorted fraction. Purity is a check on the quality of the sample and the sort decisions made by the instrument. Recovery and Yield definitions vary with some authors regarding both as how efficient the instrument is at sorting the target particles from the original sample, others distinguishing Recovery from Yield, where the former is used to describe the accuracy of the instrument’s sort count. Yield and Recovery are often neglected, mostly due to difficulties in their measurement. Purity of the sort product is often cited alone but is not sufficient to evaluate sorting performance. All of these three performance metrics require re-sampling of the sorted fraction. But, unlike Purity, calculating Yield and/or Recovery calls for the absolute counting of particles in the sorted fraction, which may not be feasible, particularly when dealing with rare populations and precious samples. In addition, the counting process itself involves large errors. Here we describe a new metric for evaluating instrument sort Recovery, defined as the number of particles sorted relative to the number of original particles to be sorted. This calculation requires only measuring the ratios of target and non-target populations in the original pre-sort sample and in the waste stream or center stream catch (CSC), avoiding re-sampling the sorted fraction and absolute counting. We called this new metric Rmax, since it corresponds to the maximum expected Recovery for a particular set of instrument parameters. Rmax is ideal to evaluate and troubleshoot the optimum drop-charge delay of the sorter, or any instrument related failures that will affect sort performance. It can be used as a daily quality control check but can be particularly useful to assess instrument performance before single-cell sorting experiments. Because we do not perturb the sort fraction we can calculate Rmax during the sort process, being especially valuable to check instrument performance during rare population sorts. PMID:25747337
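The abstract defines Rmax verbally. Under one simple reading, inferred here and not quoted from the paper (non-target particles reach the CSC unperturbed, so any depletion of the target:non-target ratio in the CSC reflects targets successfully sorted out), the computation reduces to a ratio of ratios:

```python
# Hedged sketch of an Rmax-style recovery estimate from population ratios.
# The formula is an inference from the abstract's verbal definition,
# not a quotation of the paper's equation.
def rmax(target_pre, nontarget_pre, target_csc, nontarget_csc):
    r_pre = target_pre / nontarget_pre   # target ratio before sorting
    r_csc = target_csc / nontarget_csc   # target ratio left in the CSC
    return 1.0 - r_csc / r_pre           # fraction of targets sorted out

# Made-up event counts: 20% targets pre-sort, ~1.9% left among CSC events.
print(rmax(2000, 8000, 150, 7850))       # ~0.92
```

Because both inputs are ratios measured on the pre-sort sample and the waste stream, the sorted fraction itself never has to be re-sampled or counted, which is the point of the metric.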
NASA Astrophysics Data System (ADS)
Rowe, M. P.; Pugh, E. N., Jr.; Tyo, J. S.; Engheta, N.
1995-03-01
Many animals have visual systems that exploit the polarization of light, and some of these systems are thought to compute difference signals in parallel from arrays of photoreceptors optimally tuned to orthogonal polarizations. We hypothesize that such polarization-difference systems can improve the visibility of objects in scattering media by serving as common-mode rejection amplifiers that reduce the effects of background scattering and amplify the signal from targets whose polarization-difference magnitude is distinct from the background. We present experimental results obtained with a target in a highly scattering medium, demonstrating that a manmade polarization-difference system can render readily visible surface features invisible to conventional imaging.
Modulation-frequency encoded multi-color fluorescent DNA analysis in an optofluidic chip.
Dongre, Chaitanya; van Weerd, Jasper; Besselink, Geert A J; Vazquez, Rebeca Martinez; Osellame, Roberto; Cerullo, Giulio; van Weeghel, Rob; van den Vlekkert, Hans H; Hoekstra, Hugo J W M; Pollnau, Markus
2011-02-21
We introduce a principle of parallel optical processing to an optofluidic lab-on-a-chip. During electrophoretic separation, the ultra-low limit of detection achieved with our set-up allows us to record fluorescence from covalently end-labeled DNA molecules. Different sets of exclusively color-labeled DNA fragments-otherwise rendered indistinguishable by spatio-temporal coincidence-are traced back to their origin by modulation-frequency-encoded multi-wavelength laser excitation, fluorescence detection with a single ultrasensitive, albeit color-blind photomultiplier, and Fourier analysis decoding. As a proof of principle, fragments obtained by multiplex ligation-dependent probe amplification from independent human genomic segments, associated with genetic predispositions to breast cancer and anemia, are simultaneously analyzed.
The fabrication of integrated carbon pipes with sub-micron diameters
NASA Astrophysics Data System (ADS)
Kim, B. M.; Murray, T.; Bau, H. H.
2005-08-01
A method for fabricating integrated carbon pipes (nanopipettes) of sub-micron diameters and tens of microns in length is demonstrated. The carbon pipes are formed from a template consisting of the tip of a pulled alumino-silicate glass capillary coated with carbon deposited from a vapour phase. This method renders carbon nanopipettes without the need for ex situ assembly and facilitates parallel production of multiple carbon-pipe devices. An electric-field-driven transfer of ions in a KCl solution through the integrated carbon pipes exhibits nonlinear current-voltage (I-V) curves, markedly different from the Ohmic I-V curves observed in glass pipettes under similar conditions. The filling of the nanopipette with fluorescent suspension is also demonstrated.
Raman, R; Das, P
1991-09-01
Parallel to the inactivation of the X chromosome in somatic cells of the female, the male X in mammals is rendered inactive during spermatogenesis. Pseudoautosomal genes, those present on the X-Y meiotically pairable region of the male, escape inactivation in the female soma. It has been suggested, but not demonstrated, that they may also be refractory to the inactivation signal in male germ cells. We have assayed the activity of the enzyme steroid sulfatase, the product of a pseudoautosomal gene, in testicular cells of the mouse and shown its presence in premeiotic, meiotic (pachytene), and postmeiotic (spermatid) cell types. It appears that, as in females, pseudoautosomal genes may escape inactivation in male germ cells also.
Rose, Hannah; Hoar, Bryanne; Kutz, Susan J.; Morgan, Eric R.
2014-01-01
Global change, including climate, policy, land use and other associated environmental changes, is likely to have a major impact on parasitic disease in wildlife, altering the spatio-temporal patterns of transmission, with wide-ranging implications for wildlife, domestic animals, humans and ecosystem health. Predicting the potential impact of climate change on parasites infecting wildlife will become increasingly important in the management of species of conservation concern and control of disease at the wildlife–livestock and wildlife–human interface, but is confounded by incomplete knowledge of host–parasite interactions, logistical difficulties, small sample sizes and limited opportunities to manipulate the system. By exploiting parallels between livestock and wildlife, existing theoretical frameworks and research on livestock and their gastrointestinal nematodes can be adapted to wildlife systems. Similarities in the gastrointestinal nematodes and the life-histories of wild and domestic ruminants, coupled with a detailed knowledge of the ecology and life-cycle of the parasites, render the ruminant-GIN host–parasite system particularly amenable to a cross-disciplinary approach. PMID:25197625
Log-less metadata management on metadata server for parallel file systems.
Liao, Jianwei; Xiao, Guoqiang; Peng, Xiaoning
2014-01-01
This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the sent metadata requests, which have been handled by the metadata server, so that the MDS does not need to log metadata changes to nonvolatile storage for achieving highly available metadata service, as well as better performance improvement in metadata processing. As the client file system backs up certain sent metadata requests in its memory, the overhead for handling these backup requests is much smaller than that brought by the metadata server, while it adopts logging or journaling to yield highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and render a better I/O data throughput, in contrast to conventional metadata management schemes, that is, logging or journaling on MDS. Besides, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients, when the metadata server has crashed or gone into nonoperational state exceptionally.
ParaView visualization of Abaqus output on the mechanical deformation of complex microstructures
NASA Astrophysics Data System (ADS)
Liu, Qingbin; Li, Jiang; Liu, Jie
2017-02-01
Abaqus® is a popular software suite for finite element analysis. It delivers linear and nonlinear analyses of mechanical and fluid dynamics problems, including multi-body systems and multi-physics coupling. However, the visualization capability of Abaqus through its CAE module is limited. Models from microtomography have extremely complicated structures, and the datasets of Abaqus output are huge, requiring a visualization tool more powerful than Abaqus/CAE. We convert Abaqus output into the XML-based VTK format with a Python script and then use ParaView to visualize the results. Capabilities such as volume rendering, tensor glyphs, superior animation and other filters allow ParaView to offer excellent visual presentations. ParaView's parallel visualization makes it possible to visualize very big data. To support fully parallel visualization, the Python script achieves data partitioning by reorganizing all nodes, elements and the corresponding results on those nodes and elements. The data partition scheme minimizes data redundancy and works efficiently. Given its good readability and extendibility, the script can be extended to the processing of other problems in Abaqus. We share the script with Abaqus users on GitHub.
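The authors' converter is shared on GitHub; as a schematic of the conversion idea only, the sketch below writes a small mesh plus one nodal result to a VTK file that ParaView loads. The paper targets the XML-based VTK format, but for brevity this writes the simpler legacy ASCII flavor, which ParaView also reads; the mesh and result values are made up.

```python
# Hedged sketch: dump nodes, elements and a nodal result to a VTK file
# readable by ParaView. Legacy ASCII flavor for brevity; data are made up.
import numpy as np

points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0], [0.5, 0.5, 1.0]])
tets = np.array([[0, 1, 2, 4], [0, 2, 3, 4]])    # two tetrahedral elements
u = np.linalg.norm(points, axis=1)               # fake nodal displacement

with open("mesh.vtk", "w") as f:
    f.write("# vtk DataFile Version 3.0\nabaqus-export sketch\nASCII\n")
    f.write("DATASET UNSTRUCTURED_GRID\n")
    f.write(f"POINTS {len(points)} float\n")
    for p in points:
        f.write("{} {} {}\n".format(*p))
    f.write(f"CELLS {len(tets)} {len(tets) * 5}\n")  # 5 ints per tet row
    for c in tets:
        f.write("4 {} {} {} {}\n".format(*c))
    f.write(f"CELL_TYPES {len(tets)}\n")
    f.write("10\n" * len(tets))                      # 10 = VTK_TETRA
    f.write(f"POINT_DATA {len(points)}\n")
    f.write("SCALARS displacement float 1\nLOOKUP_TABLE default\n")
    for v in u:
        f.write(f"{v}\n")
```

Partitioning for parallel ParaView then amounts to writing one such piece per rank, each with its own subset of nodes and elements, as the paper describes.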
Model based rib-cage unfolding for trauma CT
NASA Astrophysics Data System (ADS)
von Berg, Jens; Klinder, Tobias; Lorenz, Cristian
2018-03-01
A CT rib-cage unfolding method is proposed that does not require determining rib centerlines; instead it determines the visceral cavity surface by model-based segmentation. Image intensities are sampled across this surface, which is flattened using a model-based 3D thin-plate-spline registration. An average rib centerline model projected onto this surface serves as a reference system for the registration. The flattening registration is designed so that ribs similar to the centerline model are mapped onto parallel lines, preserving their relative length. Ribs deviating from this model accordingly appear as deviations from straight parallel ribs in the unfolded view. As the mapping is continuous, the details in the intercostal space and those adjacent to the ribs are also rendered well. The most beneficial application area is trauma CT, where fast detection of rib fractures is a crucial task. Specifically in trauma, automatic rib centerline detection may not be guaranteed due to fractures and dislocations. Visual assessment of the application on the large public LIDC database of lung CT proved the general feasibility of this early work.
Emotional stimuli exert parallel effects on attention and memory.
Talmi, Deborah; Ziegler, Marilyne; Hawksworth, Jade; Lalani, Safina; Herman, C Peter; Moscovitch, Morris
2013-01-01
Because emotional and neutral stimuli typically differ on non-emotional dimensions, it has been difficult to determine conclusively which factors underlie the ability of emotional stimuli to enhance immediate long-term memory. Here we induced arousal by varying participants' goals, a method that removes many potential confounds between emotional and non-emotional items. Hungry and sated participants encoded food and clothing images under divided attention conditions. Sated participants attended to and recalled food and clothing images equivalently. Hungry participants performed worse on the concurrent tone-discrimination task when they viewed food relative to clothing images, suggesting enhanced attention to food images, and they recalled more food than clothing images. A follow-up regression analysis of the factors predicting memory for individual pictures revealed that food images had parallel effects on attention and memory in hungry participants, so that enhanced attention to food images did not predict their enhanced memory. We suggest that immediate long-term memory for food is enhanced in the hungry state because hunger leads to more distinctive processing of food images rendering them more accessible during retrieval.
NASA Astrophysics Data System (ADS)
Núñez, M.; Robie, T.; Vlachos, D. G.
2017-10-01
Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
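As generic background for the acceleration discussed here, the core KMC loop and the naive form of rate-constant rescaling can be compressed into a toy single-site model. The reaction network, rate constants, and the fixed rescaling factor below are all hypothetical; the paper's contribution is precisely the statistical criteria that replace such a blind choice.

```python
# Toy KMC (Gillespie-style) loop with naive rate-constant rescaling.
# Reactions and all constants are hypothetical illustrations.
import math
import random

k = [1e6, 1e6, 1.0]        # fast adsorb/desorb pair + slow surface reaction
SCALE_FAST = 1e-4          # blind rescale of the quasi-equilibrated pair
k_eff = [k[0] * SCALE_FAST, k[1] * SCALE_FAST, k[2]]

occupied, products, t = 0, 0, 0.0
for _ in range(100_000):
    # propensities: adsorb onto empty site, desorb, react to product
    a = [k_eff[0] * (1 - occupied), k_eff[1] * occupied, k_eff[2] * occupied]
    a_tot = sum(a)
    t += -math.log(1.0 - random.random()) / a_tot   # exponential wait time
    r, acc, i = random.random() * a_tot, 0.0, 0
    for i, ai in enumerate(a):                      # pick the next reaction
        acc += ai
        if r < acc:
            break
    if i == 0:
        occupied = 1
    elif i == 1:
        occupied = 0
    else:
        occupied, products = 0, products + 1

print(f"t = {t:.3g} s, products = {products}")
```

Without the rescaling line, nearly every KMC step would be spent toggling the fast adsorb/desorb pair, which is the stiffness problem the abstract refers to.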
NASA Astrophysics Data System (ADS)
Chacon, Luis; Del-Castillo-Negrete, Diego; Hauck, Cory
2012-10-01
Modeling electron transport in magnetized plasmas is extremely challenging due to the extreme anisotropy between parallel (to the magnetic field) and perpendicular directions (χ∥/χ⊥ ~ 10^10 in fusion plasmas). Recently, a Lagrangian Green's function approach, developed for the purely parallel transport case [D. del-Castillo-Negrete, L. Chacón, PRL 106, 195004 (2011); D. del-Castillo-Negrete, L. Chacón, Phys. Plasmas 19, 056112 (2012)], has been extended to the anisotropic transport case in the tokamak-ordering limit with constant density [L. Chacón, D. del-Castillo-Negrete, C. Hauck, JCP, submitted (2012)]. An operator-split algorithm is proposed that allows one to treat Eulerian and Lagrangian components separately. The approach is shown to feature bounded numerical errors for arbitrary χ∥/χ⊥ ratios, which renders it asymptotic-preserving. In this poster, we will present the generalization of the Lagrangian approach to arbitrary magnetic fields. We will demonstrate the potential of the approach with various challenging configurations, including the case of transport across a magnetic island in cylindrical geometry.
A Comparison of Approaches for Solving Hard Graph-Theoretic Problems
2015-05-01
Part of the collaborative effort "Adiabatic Quantum Computing Applications Research" (14-RI-CRADA-02) between the Information Directorate and Lock-. Three methods for solving hard graph-theoretic problems are explored: a parallel computing approach using Matlab, a quantum annealing approach using the D-Wave computer, and satisfiability modulo theory (SMT) with corresponding SMT solvers.
NASA Astrophysics Data System (ADS)
Pal, Chiranjit; Chaudhuri, Tandrima; Chattopdhyay, Subrata; Banerjee, Manas
2017-04-01
This study sorts out the chemical physics of the non-covalent interaction of copper phthalocyanine (CuPC) with methanato borondifluoride derivatives (MBDF) in chloroform and ethanol. The formation of isosbestic points indicated a stable ground-state equilibrium between CuPC and MBDF; the association ability was more pronounced in the less polar chloroform. Interestingly, the overall parallel orientation of MBDF over CuPC in the gas-phase geometries indicated that the fluorine centre of MBDF lies just above the Cu centre of CuPC. Thus a strong interaction between the Cu(II) and F centres could not be overruled, and it was also established by NBO calculation. TDDFT along with FMO features and heat-of-reaction values clearly indicated the existence of π-π interaction and the effect of solvent polarity on that interaction.
Lineation-parallel c-axis Fabric of Quartz Formed Under Water-rich Conditions
NASA Astrophysics Data System (ADS)
Wang, Y.; Zhang, J.; Li, P.
2014-12-01
The crystallographic preferred orientation (CPO) of quartz is of great significance because it records much valuable information pertinent to the deformation of quartz-rich rocks in the continental crust. The lineation-parallel c-axis CPO (i.e., c-axes forming a maximum parallel to the lineation) in naturally deformed quartz is generally considered to form under high-temperature (> ~550 °C) conditions. However, most laboratory deformation experiments on quartzite have failed to produce such a CPO at high temperatures up to 1200 °C. Here we report a new occurrence of the lineation-parallel c-axis CPO of quartz, from kyanite-quartz veins in eclogite. Optical microstructural observations, Fourier transform infrared (FTIR) spectroscopy and electron backscatter diffraction (EBSD) techniques were integrated to illuminate the nature of the quartz CPOs. Quartz exhibits mostly straight to slightly curved grain boundaries, modest intracrystalline plasticity, and significant shape preferred orientation (SPO) and CPOs, indicating that dislocation creep dominated the deformation of quartz. Kyanite grains in the veins are mostly strain-free, suggestive of their higher strength relative to quartz. The pronounced SPO and CPOs in kyanite are interpreted to originate from anisotropic crystal growth and/or mechanical rotation during vein-parallel shearing. FTIR results show that quartz contains a trivial amount of structurally bound water (several tens of H/10^6 Si), while kyanite has a water content of 384-729 H/10^6 Si; however, petrographic observations suggest that quartz from the veins was deformed under water-rich conditions. We argue that the observed lineation-parallel c-axis fabric in quartz was inherited from preexisting CPOs as a result of anisotropic grain growth under stress facilitated by water, rather than being due to dominant c-slip. The preservation of the quartz CPOs probably benefited from the preexisting CPOs, which render most quartz grains unsuitably oriented for easy a-slip at lower temperatures, and from the weak deformation during subsequent exhumation. This hypothesis provides a reasonable explanation for the observations that most lineation-parallel c-axis fabrics of quartz are found in veins and that deformation experiments on quartz-rich rocks at high temperature have failed to produce such CPOs.
Stimuli Responsive Systems Constructed Using Cucurbit[n]uril-Type Molecular Containers
2015-01-01
Conspectus: This Account focuses on stimuli responsive systems that function in aqueous solution, using examples drawn from the work of the Isaacs group using cucurbit[n]uril (CB[n]) molecular containers as key recognition elements. Our entry into the area of stimuli responsive systems began with the preparation of glycoluril-derived molecular clips that efficiently distinguish between self and nonself by H-bonds and π–π interactions even within complex mixtures and therefore undergo self-sorting. We concluded that the selectivity of a wide variety of H-bonded supramolecular assemblies was higher than previously appreciated and that self-sorting is not exceptional behavior. This led us to examine self-sorting within the context of CB[n] host–guest chemistry in water. We discovered that CB[n] homologues (CB[7] and CB[8]) display remarkably high binding affinity (Ka up to 10^17 M^-1) and selectivity (ΔΔG) toward their guests, which renders CB[n]s prime components for the construction of stimuli responsive host–guest systems. The CB[7]·adamantaneammonium ion complex, which is particularly privileged (Ka = 4.2 × 10^12 M^-1), was introduced by us as a stimulus to trigger constitutional changes in multicomponent self-sorting systems. For example, we describe how the free energy associated with the formation of host–guest complexes of CB[n]-type receptors can drive conformational changes of included guests like triazene–arylene foldamers and cationic calix[4]arenes, as well as induced conformational changes (e.g., ammonium guest size dependent homotropic allostery, metal ion triggered folding, and heterochiral dimerization) of the hosts themselves. Many guests display large pKa shifts within their CB[n]–guest complexes, which we used to promote pH controlled guest swapping and thermal trans-to-cis isomerization of azobenzene derivatives. We also used the high affinity and selectivity of CB[7] toward its guests to outcompete an enzyme (bovine carbonic anhydrase) for a two-faced inhibitor, which allowed stimuli responsive regulation of enzymatic activity. These results prompted us to examine the use of CB[n]-type receptors in both in vitro and in vivo biological systems. We demonstrated that adamantaneammonium ion can be used to intracellularly sequester CB[7] from gold nanoparticles passivated with hexanediammonium ion·CB[7] complexes and thereby trigger cytotoxicity. CB[7] derivatives bearing a biotin targeting group enhance the cytotoxicity of encapsulated oxaliplatin toward L1210FR cells. Finally, acyclic CB[n]-type receptors function as solubilizing excipients for insoluble drugs for drug delivery purposes and as a broad spectrum reversal agent for the neuromuscular blocking agents rocuronium, vecuronium, and cis-atracurium in rats. The work highlights the great potential for integration of CB[n]-type receptors with biological systems. PMID:24785941
Effects of chromatic image statistics on illumination induced color differences.
Lucassen, Marcel P; Gevers, Theo; Gijsenij, Arjan; Dekker, Niels
2013-09-01
We measure the color fidelity of visual scenes that are rendered under different (simulated) illuminants and shown on a calibrated LCD display. Observers make triad illuminant comparisons involving the renderings from two chromatic test illuminants and one achromatic reference illuminant shown simultaneously. Four chromatic test illuminants are used: two along the daylight locus (yellow and blue), and two perpendicular to it (red and green). The observers select the rendering having the best color fidelity, thereby indirectly judging which of the two test illuminants induces the smallest color differences compared to the reference. Both multicolor test scenes and natural scenes are studied. The multicolor scenes are synthesized and represent ellipsoidal distributions in CIELAB chromaticity space having the same mean chromaticity but different chromatic orientations. We show that, for those distributions, color fidelity is best when the vector of the illuminant change (pointing from neutral to chromatic) is parallel to the major axis of the scene's chromatic distribution. For our selection of natural scenes, which generally have much broader chromatic distributions, we measure a higher color fidelity for the yellow and blue illuminants than for red and green. Scrambled versions of the natural images are also studied to exclude possible semantic effects. We quantitatively predict the average observer response (i.e., the illuminant probability) with four types of models, differing in the extent to which they incorporate information processing by the visual system. Results show different levels of performance for the models, and different levels for the multicolor scenes and the natural scenes. Overall, models based on the scene averaged color difference have the best performance. We discuss how color constancy algorithms may be improved by exploiting knowledge of the chromatic distribution of the visual scene.
NASA Technical Reports Server (NTRS)
Dorband, John E.
1988-01-01
Sorting has long been used to organize data in preparation for further computation, but sort computation allows some types of computation to be performed during the sort. Sort aggregation and sort distribution are the two basic forms of sort computation. Sort aggregation generates an accumulative or aggregate result for each group of records and places this result in one of the records. An aggregate operation can be any operation that is both associative and commutative, i.e., any operation whose result does not depend on the order of the operands or the order in which the operations are performed. Sort distribution copies the value from a field of a specific record in a group into that field in every record of that group.
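Both operations are easy to picture with a small record set; the sketch below (made-up records and field names) sorts by key, writes a per-group sum into the group (aggregation), then copies one record's field across its group (distribution).

```python
# Toy illustration of sort aggregation and sort distribution.
# Records and field names are invented for the example.
from itertools import groupby
from operator import itemgetter

records = [
    {"key": "b", "value": 2, "flag": 0},
    {"key": "a", "value": 1, "flag": 7},
    {"key": "a", "value": 3, "flag": 0},
    {"key": "b", "value": 5, "flag": 9},
]

records.sort(key=itemgetter("key"))               # the sort pass
for _, group in groupby(records, key=itemgetter("key")):
    group = list(group)
    # aggregation: sum is associative and commutative, so the order of
    # operands within the group does not matter
    group[0]["group_sum"] = sum(r["value"] for r in group)
    # distribution: copy a field from the group's first record to the rest
    for r in group:
        r["flag"] = group[0]["flag"]

print(records)
```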
Derivation of sorting programs
NASA Technical Reports Server (NTRS)
Varghese, Joseph; Loganantharaj, Rasiah
1990-01-01
Program synthesis for critical applications has become a viable alternative to program verification. Nested resolution and its extension are used to synthesize a set of sorting programs from their first order logic specifications. A set of sorting programs, such as, naive sort, merge sort, and insertion sort, were successfully synthesized starting from the same set of specifications.
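The synthesized programs themselves are derived from first-order specifications in the paper; for orientation, a conventional hand-written rendering of the insertion sort that such a derivation targets is:

```python
# Conventional insertion sort, the kind of target program synthesized
# from first-order specifications in the paper (hand-written here).
def insertion_sort(xs):
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:   # find the insertion point
            i -= 1
        out.insert(i, x)                  # keep the output prefix sorted
    return out

assert insertion_sort([3, 1, 2]) == [1, 2, 3]
```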
2014-09-01
simulation time frame from 30 days to one year. This was enabled by porting the simulation to the Pleiades supercomputer at NASA Ames Research Center, a...including the motivation for changes to our past approach. We then present the software implementation (3) on the NASA Ames Pleiades supercomputer...significantly updated since last year’s paper [25]. The main incentive for that was the shift to a highly parallel approach in order to utilize the Pleiades
Parallel-plate transmission line type of EMP simulators: Systematic review and recommendations
NASA Astrophysics Data System (ADS)
Giri, D. V.; Liu, T. K.; Tesche, F. M.; King, R. W. P.
1980-05-01
This report presents various aspects of the two-parallel-plate transmission line type of EMP simulator. Much of the work is the result of research efforts conducted during the last two decades at the Air Force Weapons Laboratory, and in industries/universities as well. The principal features of individual simulator components are discussed. The report also emphasizes that it is imperative to hybridize our understanding of individual components so that we can draw meaningful conclusions of simulator performance as a whole.
Spin-the-bottle Sort and Annealing Sort: Oblivious Sorting via Round-robin Random Comparisons
Goodrich, Michael T.
2013-01-01
We study sorting algorithms based on randomized round-robin comparisons. Specifically, we study Spin-the-bottle sort, where comparisons are unrestricted, and Annealing sort, where comparisons are restricted to a distance bounded by a temperature parameter. Both algorithms are simple, randomized, data-oblivious sorting algorithms, which are useful in privacy-preserving computations, but, as we show, Annealing sort is much more efficient. We show that there is an input permutation that causes Spin-the-bottle sort to require Ω(n2 log n) expected time in order to succeed, and that in O(n2 log n) time this algorithm succeeds with high probability for any input. We also show there is a specification of Annealing sort that runs in O(n log n) time and succeeds with very high probability. PMID:24550575
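Neither algorithm appears in pseudocode in this abstract; the sketch below follows the plain reading of the definitions (Spin-the-bottle sort: rounds of compare-exchanges against uniformly random partners; Annealing sort: partners restricted to a temperature-bounded distance that cools over time). The cooling schedule is a simplification, not the tuned sequence from the paper.

```python
# Sketches of the two randomized, data-oblivious sorts described above.
# The annealing schedule is a simplification of the paper's.
import random

def spin_the_bottle_round(a):
    """One round: each position compare-exchanges with a random partner."""
    n = len(a)
    for i in range(n):
        j = random.randrange(n)
        lo, hi = (i, j) if i < j else (j, i)
        if a[lo] > a[hi]:
            a[lo], a[hi] = a[hi], a[lo]

def annealing_sort(a, passes_per_temp=4):
    """Compare-exchange at distance <= temp, halving temp down to 1."""
    n = len(a)
    temp = n // 2
    while temp >= 1:
        for _ in range(passes_per_temp):
            for i in range(n - 1):
                j = min(n - 1, i + random.randint(1, temp))
                if a[i] > a[j]:
                    a[i], a[j] = a[j], a[i]
        temp //= 2

a = list(range(200))
random.shuffle(a)
annealing_sort(a)
print(a == sorted(a))   # True with high probability, not with certainty
```

Data-obliviousness holds because the sequence of compared positions depends only on the random choices and the temperature schedule, never on the data values, which is what makes such sorts useful in privacy-preserving computation.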
Evidence of photospheric vortex flows at supergranular junctions observed by FG/SOT (Hinode)
NASA Astrophysics Data System (ADS)
Attie, R.; Innes, D. E.; Potts, H. E.
2009-01-01
Context: Twisting motions of different sorts are observed in several layers of the solar atmosphere. Chromospheric sunspot whorls, the rotation of sunspots and, higher up in the lower corona, sigmoids are examples of the large-scale twisted topology of many solar features. Nevertheless, their occurrence on a large scale in the quiet photosphere has not been investigated yet. Aims: The present study reveals the existence of vortex flows located at the supergranular junctions of the quiet Sun. Methods: We used a 1-h and a 5-h time series of the granulation in blue continuum and G-band images from FG/SOT to derive the photospheric flows. A feature-tracking technique called balltracking was used to track the granules and reveal the underlying flow fields. Results: In both time series, we identify long-lasting vortex flows located at supergranular junctions. The first vortex flow lasts at least 1 h and is ~20″ wide (~15.5 Mm). The second vortex flow lasts more than 2 h and is ~27″ wide (~21 Mm).
Scalable and portable visualization of large atomistic datasets
NASA Astrophysics Data System (ADS)
Sharma, Ashish; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya
2004-10-01
A scalable and portable code named Atomsviewer has been developed to interactively visualize a large atomistic dataset consisting of up to a billion atoms. The code uses a hierarchical view frustum-culling algorithm based on the octree data structure to efficiently remove atoms outside of the user's field-of-view. Probabilistic and depth-based occlusion-culling algorithms then select atoms, which have a high probability of being visible. Finally a multiresolution algorithm is used to render the selected subset of visible atoms at varying levels of detail. Atomsviewer is written in C++ and OpenGL, and it has been tested on a number of architectures including Windows, Macintosh, and SGI. Atomsviewer has been used to visualize tens of millions of atoms on a standard desktop computer and, in its parallel version, up to a billion atoms. Program summaryTitle of program: Atomsviewer Catalogue identifier: ADUM Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADUM Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer for which the program is designed and others on which it has been tested: 2.4 GHz Pentium 4/Xeon processor, professional graphics card; Apple G4 (867 MHz)/G5, professional graphics card Operating systems under which the program has been tested: Windows 2000/XP, Mac OS 10.2/10.3, SGI IRIX 6.5 Programming languages used: C++, C and OpenGL Memory required to execute with typical data: 1 gigabyte of RAM High speed storage required: 60 gigabytes No. of lines in the distributed program including test data, etc.: 550 241 No. of bytes in the distributed program including test data, etc.: 6 258 245 Number of bits in a word: Arbitrary Number of processors used: 1 Has the code been vectorized or parallelized: No Distribution format: tar gzip file Nature of physical problem: Scientific visualization of atomic systems Method of solution: Rendering of atoms using computer graphic techniques, culling algorithms for data minimization, and levels-of-detail for minimal rendering Restrictions on the complexity of the problem: None Typical running time: The program is interactive in its execution Unusual features of the program: None References: The conceptual foundation and subsequent implementation of the algorithms are found in [A. Sharma, A. Nakano, R.K. Kalia, P. Vashishta, S. Kodiyalam, P. Miller, W. Zhao, X.L. Liu, T.J. Campbell, A. Haas, Presence—Teleoperators and Virtual Environments 12 (1) (2003)].
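As an illustration of the hierarchical view-frustum culling idea only (not the Atomsviewer C++ implementation), an octree cull reduces to a recursive sphere-against-plane test:

```python
# Hedged sketch of view-frustum culling over an octree: a node whose
# bounding sphere lies fully outside any frustum plane is culled with
# all its descendants. Not the Atomsviewer C++ code.
import numpy as np

class OctreeNode:
    def __init__(self, center, half_edge, atoms=(), children=()):
        self.center = np.asarray(center, dtype=float)
        self.radius = float(half_edge) * np.sqrt(3.0)  # bounding sphere
        self.atoms = list(atoms)        # populated at the leaves
        self.children = list(children)

def visible_atoms(node, planes):
    """planes: (normal, d) pairs, with the inside satisfying n.x + d >= 0."""
    for n, d in planes:
        if np.dot(n, node.center) + d < -node.radius:
            return []                   # fully outside one plane: cull subtree
    if not node.children:
        return node.atoms
    out = []
    for child in node.children:
        out.extend(visible_atoms(child, planes))
    return out

leaf = OctreeNode([1.0, 0.0, 0.0], 0.5, atoms=["atom0", "atom1"])
root = OctreeNode([0.0, 0.0, 0.0], 2.0, children=[leaf])
halfspace = [(np.array([1.0, 0.0, 0.0]), 0.0)]   # keep the region x >= 0
print(visible_atoms(root, halfspace))            # ['atom0', 'atom1']
```

The probabilistic occlusion-culling and multiresolution stages described above would then operate only on the atoms that survive this test.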
Safe sorting of GFP-transduced live cells for subsequent culture using a modified FACS vantage.
Sørensen, T U; Gram, G J; Nielsen, S D; Hansen, J E
1999-12-01
A stream-in-air cell sorter enables rapid sorting to a high purity, but it is not well suited for sorting of infectious material due to the risk of airborne spread to the surroundings. A FACS Vantage cell sorter was modified for safe use with potentially HIV infected cells. Safety tests with bacteriophages were performed to evaluate the potential spread of biologically active material during cell sorting. Cells transduced with a retroviral vector carrying the gene for GFP were sorted on the basis of their GFP fluorescence, and GFP expression was followed during subsequent culture. The bacteriophage sorting showed that the biologically active material was confined to the sorting chamber. A failure mode simulating a nozzle blockage resulted in detectable droplets inside the sorting chamber, but no droplets could be detected when an additional air suction from the sorting chamber had been put on. The GFP transduced cells were sorted to 99% purity. Cells not expressing GFP at the time of sorting did not turn on the gene during subsequent culture. Un-sorted cells and cells sorted to be positive for GFP showed a decrease in the fraction of GFP positive cells during culture. Sorting of live infected cells can be performed safely and with no deleterious effects on vector expression using the modified FACS Vantage instrument. Copyright 1999 Wiley-Liss, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meade, Roger Allen
In the summer of 1944, J. Robert Oppenheimer and Los Alamos faced a crisis. An isotopic impurity in Plutonium rendered the metal unusable in a gun-assembled atomic bomb (i.e., Little Boy). Making this situation worse was a shortage of Uranium. The combination of these two problems threatened the entire wartime project. The answer to this dilemma, in part, was to develop a novel assembly method for Plutonium using the supersonic shock waves created by several tons of high explosives to compress a ball of Plutonium into a supercritical state. Since this method, implosion, was not much more than a theoretical construct, the Trinity test was devised to proof test the process. Given the speculative nature of implosion, Trinity was a gamble of sorts. If the test failed (i.e., little or no nuclear yield), the blast of the high explosives would scatter the scarce and expensive Plutonium over the surrounding desert. Since the probability of failure remained high into the early summer of 1945, some method of containing a failed nuclear explosion was needed. Jumbo was the answer.
Economic rationality and health and lifestyle choices for people with diabetes.
Baker, Rachel Mairi
2006-11-01
Economic rationality is traditionally represented by goal-oriented, maximising behaviour, or 'instrumental rationality'. Such a consequentialist, instrumental model of choice is often implicit in a biomedical approach to health promotion and education. The research reported here assesses the relevance of a broader conceptual framework of rationality, which includes 'procedural' and 'expressive' rationality as complements to an instrumental model of rationality, in a health context. Q methodology was used to derive 'factors' underlying health and lifestyle choices, based on a factor analysis of the results of a card sorting procedure undertaken by 27 adult respondents with type 2 diabetes in Newcastle upon Tyne, UK. These factors were then compared with the rationality framework and the appropriateness of an extended model of economic rationality as a means of better understanding health and lifestyle choices was assessed. Taking a wider rational choice perspective, choices which are rendered irrational within a narrow-biomedical or strictly instrumental model, can be understood in terms of a coherent rationale, grounded in the accounts of respondents. The implications of these findings are discussed in terms of rational choice theory and diabetes management and research.
Moral responsibility for (un)healthy behaviour.
Brown, Rebecca C H
2013-11-01
Combatting chronic, lifestyle-related disease has become a healthcare priority in the developed world. The role personal responsibility should play in healthcare provision has growing pertinence given the growing significance of individual lifestyle choices for health. Media reporting focussing on the 'bad behaviour' of individuals suffering lifestyle-related disease, and policies aimed at encouraging 'responsibilisation' in healthcare highlight the importance of understanding the scope of responsibility ascriptions in this context. Research into the social determinants of health and psychological mechanisms of health behaviour could undermine some commonly held and tacit assumptions about the moral responsibility of agents for the sorts of lifestyles they adopt. I use Philip Petit's conception of freedom as 'fitness to be held responsible' to consider the significance of some of this evidence for assessing the moral responsibility of agents. I propose that, in some cases, factors outside the agent's control may influence behaviour in such a way as to undermine her freedom along the three dimensions described by Pettit: freedom of action; a sense of identification with one's actions; and whether one's social position renders one vulnerable to pressure from more powerful others.
Slowing down bubbles with sound
NASA Astrophysics Data System (ADS)
Poulain, Cedric; Dangla, Remie; Guinard, Marion
2009-11-01
We present experimental evidence that a bubble moving in a fluid in which a well-chosen acoustic noise is superimposed can be significantly slowed down even for moderate acoustic pressure. Through mean velocity measurements, we show that a condition for this effect to occur is for the acoustic noise spectrum to match or overlap the bubble's fundamental resonant mode. We render the bubble's oscillations and translational movements using high speed video. We show that radial oscillations (Rayleigh-Plesset type) have no effect on the mean velocity, while above a critical pressure, a parametric type instability (Faraday waves) is triggered and gives rise to nonlinear surface oscillations. We evidence that these surface waves are subharmonic and responsible for the bubble's drag increase. When the acoustic intensity is increased, Faraday modes interact and the strongly nonlinear oscillations behave randomly, leading to a random behavior of the bubble's trajectory and consequently to a higher slow down. Our observations may suggest new strategies for bubbly flow control, or two-phase microfluidic devices. It might also be applicable to other elastic objects, such as globules, cells or vesicles, for medical applications such as elasticity-based sorting.
Magnetic water-in-water droplet microfluidics
NASA Astrophysics Data System (ADS)
Navi, Maryam; Abbasi, Niki; Tsai, Scott
2017-11-01
Aqueous two-phase systems (ATPS) have shown to be ideal candidates for replacing the conventional water-oil systems used in droplet microfluidics. We use an ATPS of Polyethylene Glycol (PEG) and Dextran (DEX) for microfluidic generation of magnetic water-in-water droplets. As ferrofluid partitions to DEX phase, there is no significant diffusion of ferrofluid at the interface of the droplets, rendering generation of magnetic DEX droplets in a non-magnetic continuous phase of PEG possible. In this system, both phases are water-based and highly biocompatible. We microfluidically generate magnetic DEX droplets at a flow-focusing junction in a jetting regime. We sort the droplets based on their size by placing a permanent magnet downstream of the droplet generation region, and show that the deflection of droplets is in good agreement with a mathematical model. We also show that the magnetic DEX droplets can be stabilized by lysozyme and be used for separation of single cell containing water-in-water droplets. This system of magnetic water-in-water droplet manipulation may find biomedical applications such as single-cell studies and drug delivery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brun, B.
1997-07-01
Computer technology has improved tremendously in recent years, with larger media capacity, more memory and more computational power. Visual computing, with high-performance graphical interfaces and desktop computational power, has changed the way engineers accomplish everyday tasks, development work and safety-study analyses. The emergence of parallel computing will permit simulation over larger domains. In addition, new development methods, languages and tools have appeared in the last several years.
Design and realization of sort manipulator of crystal-angle sort machine
NASA Astrophysics Data System (ADS)
Wang, Ming-shun; Chen, Shu-ping; Guan, Shou-ping; Zhang, Yao-wei
2005-12-01
It is a current trend in automation technology to replace manpower with manipulators in workplaces where dangerous, harmful, heavy or repetitive work is involved. The sort manipulator is installed in a crystal-angle sort machine to take the place of manpower in unloading and sorting work. It is the outcome of combining mechanism design, electric transmission, pneumatic elements and micro-controller control. The step motor makes the sort manipulator operate precisely, and the pneumatic elements make it more agile. The micro-controller's software bestows some simple artificial intelligence on the sort manipulator, so that it can precisely repeat its unloading and sorting work. Combining the manipulator's zero position with step-motor step counting puts an end to accumulating error in long-term operation. The sort manipulator's design has been proved correct and reliable in engineering practice.
The long view: how the financial downturn will change health care.
Moore, Keith; Coddington, Dean; Byrne, Deirdre
2009-01-01
There are five reasons that today's economic downturn will have a much broader impact on U.S. health care than did past recessions: This downturn is likely to be more severe and last longer. Healthcare organizations are experiencing problems from several directions simultaneously. Healthcare organizations entered this downturn more heavily leveraged and more vulnerable. This downturn is not just a recession, but a major realignment of financing practices. As the realignment occurs and the new financing order sorts itself out, healthcare organizations are not likely to receive the favorable treatment they had in the past.
Simulations of Tidally Driven Formation of Binary Planet Systems
NASA Astrophysics Data System (ADS)
Murray, R. Zachary P.; Guillochon, James
2018-01-01
In the last decade hundreds of exoplanets have been discovered by Kepler, CoRoT and many other initiatives. This wealth of data suggests the possibility of detecting exoplanets with large satellites. This project seeks to model the interactions between orbiting planets using the FLASH hydrodynamics code developed by the Flash Center for Computational Science at the University of Chicago. We model the encounters in a wide variety of scenarios and initial conditions, including variations in encounter depth, mass ratio, and encounter velocity, and attempt to constrain what sorts of binary planet configurations are possible and stable.
Parallelization Issues and Particle-In-Cell Codes.
NASA Astrophysics Data System (ADS)
Elster, Anne Cathrine
1994-01-01
"Everything should be made as simple as possible, but not simpler." Albert Einstein. The field of parallel scientific computing has concentrated on parallelization of individual modules such as matrix solvers and factorizers. However, many applications involve several interacting modules. Our analyses of a particle-in-cell code modeling charged particles in an electric field, show that these accompanying dependencies affect data partitioning and lead to new parallelization strategies concerning processor, memory and cache utilization. Our test-bed, a KSR1, is a distributed memory machine with a globally shared addressing space. However, most of the new methods presented hold generally for hierarchical and/or distributed memory systems. We introduce a novel approach that uses dual pointers on the local particle arrays to keep the particle locations automatically partially sorted. Complexity and performance analyses with accompanying KSR benchmarks, have been included for both this scheme and for the traditional replicated grids approach. The latter approach maintains load-balance with respect to particles. However, our results demonstrate it fails to scale properly for problems with large grids (say, greater than 128-by-128) running on as few as 15 KSR nodes, since the extra storage and computation time associated with adding the grid copies, becomes significant. Our grid partitioning scheme, although harder to implement, does not need to replicate the whole grid. Consequently, it scales well for large problems on highly parallel systems. It may, however, require load balancing schemes for non-uniform particle distributions. Our dual pointer approach may facilitate this through dynamically partitioned grids. We also introduce hierarchical data structures that store neighboring grid-points within the same cache -line by reordering the grid indexing. This alignment produces a 25% savings in cache-hits for a 4-by-4 cache. A consideration of the input data's effect on the simulation may lead to further improvements. For example, in the case of mean particle drift, it is often advantageous to partition the grid primarily along the direction of the drift. The particle-in-cell codes for this study were tested using physical parameters, which lead to predictable phenomena including plasma oscillations and two-stream instabilities. An overview of the most central references related to parallel particle codes is also given.
ERIC Educational Resources Information Center
Grote, Irene; And Others
1996-01-01
Three preschoolers performed four sorts with stimulus cards--an untaught target sort and three directly taught alternating sorts considered to self-instruct the target performance. Accuracy increased first in the skill sorts and then in the untaught target sorts. All subjects generalized to new target sorts. Correct spontaneous self-instructions…
To sort or not to sort: the impact of spike-sorting on neural decoding performance.
Todorova, Sonia; Sadtler, Patrick; Batista, Aaron; Chase, Steven; Ventura, Valérie
2014-10-01
Brain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks, when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting discarding the noise; expert-sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrode voltage crossings: spike-sorting is useful. The four waveform-based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here. Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.
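For readers unfamiliar with the decoders being compared, the Kalman filter side of the comparison reduces to a standard linear-Gaussian recursion from binned unit activity to arm kinematics. Below is a generic sketch with made-up tuning and noise matrices; it illustrates the decoder class, not the paper's fitted models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian decoder: kinematic state x_t = A x_{t-1} + w,
# observed spike counts y_t = H x_t + q. All matrices are invented.
A = np.array([[1.0, 0.1], [0.0, 0.9]])      # position/velocity dynamics
W = 0.01 * np.eye(2)                        # process noise covariance
H = rng.normal(size=(20, 2))                # 20 "units" tuned to the state
Q = 0.5 * np.eye(20)                        # observation noise covariance

def kalman_decode(Y):
    x, P = np.zeros(2), np.eye(2)
    trajectory = []
    for y in Y:
        x, P = A @ x, A @ P @ A.T + W                       # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Q)        # Kalman gain
        x = x + K @ (y - H @ x)                             # update
        P = (np.eye(2) - K @ H) @ P
        trajectory.append(x.copy())
    return np.array(trajectory)

# Simulate spike counts from a known trajectory, then decode them back.
true_x = np.cumsum(rng.normal(scale=0.1, size=(100, 2)), axis=0)
Y = true_x @ H.T + rng.normal(scale=0.7, size=(100, 20))
decoded = kalman_decode(Y)
print("mean decode error:", np.mean(np.linalg.norm(decoded - true_x, axis=1)))
```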
Carlton, Jez G.; Bujny, Miriam V.; Peter, Brian J.; Oorschot, Viola M. J.; Rutherford, Anna; Arkell, Rebecca S.; Klumperman, Judith; McMahon, Harvey T.; Cullen, Peter J.
2006-01-01
Sorting nexins are a large family of phox-homology-domain-containing proteins that have been implicated in the control of endosomal sorting. Sorting nexin-1 is a component of the mammalian retromer complex that regulates retrieval of the cation-independent mannose 6-phosphate receptor from endosomes to the trans-Golgi network. In yeast, retromer is composed of Vps5p (the orthologue of sorting nexin-1), Vps17p (a related sorting nexin) and a cargo selective subcomplex composed of Vps26p, Vps29p and Vps35p. With the exception of Vps17p, mammalian orthologues of all yeast retromer components have been identified. For Vps17p, one potential mammalian orthologue is sorting nexin-2. Here we show that, like sorting nexin-1, sorting nexin-2 binds phosphatidylinositol 3-monophosphate and phosphatidylinositol 3,5-bisphosphate, and possesses a Bin/Amphiphysin/Rvs domain that can sense membrane curvature. However, in contrast to sorting nexin-1, sorting nexin-2 could not induce membrane tubulation in vitro or in vivo. Functionally, we show that endogenous sorting nexin-1 and sorting nexin-2 co-localise on high curvature tubular elements of the 3-phosphoinositide-enriched early endosome, and that suppression of sorting nexin-2 does not perturb the degradative sorting of receptors for epidermal growth factor or transferrin, nor the steady-state distribution of the cation-independent mannose 6-phosphate receptor. However, suppression of sorting nexin-2 results in a subtle alteration in the kinetics of cation-independent mannose 6-phosphate receptor retrieval. These data suggest that although sorting nexin-2 may be a component of the retromer complex, its presence is not essential for the regulation of endosome-to-trans Golgi network retrieval of the cation-independent mannose 6-phosphate receptor. PMID:16179610
An efficient dynamic load balancing algorithm
NASA Astrophysics Data System (ADS)
Lagaros, Nikos D.
2014-01-01
In engineering problems, randomness and uncertainties are inherent. Robust design procedures, formulated in the framework of multi-objective optimization, have been proposed in order to take these sources of randomness and uncertainty into account. These design procedures require orders of magnitude more computational effort than conventional analysis or optimum design processes, since a very large number of finite element analyses must be performed. There is therefore an imperative need to exploit the capabilities of computing resources in order to deal with this kind of problem. In particular, parallel computing can be implemented at the level of metaheuristic optimization, by exploiting the physical parallelization feature of the nondominated sorting evolution strategies method, as well as at the level of the repeated structural analyses required for assessing the behavioural constraints and for calculating the objective functions. In this study an efficient dynamic load balancing algorithm for optimum exploitation of available computing resources is proposed and, without loss of generality, is applied to computing the desired Pareto front. In such problems the computation of the complete Pareto front with feasible designs only constitutes a very challenging task. The proposed algorithm achieves linear speedup, with efficiencies approaching 100% relative to the sequential procedure.
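Dynamic load balancing here means workers pull the next structural analysis as soon as they finish the previous one, rather than receiving a fixed block up front; with analyses of unequal cost this keeps all processors busy. A generic pull-based sketch follows (not the paper's algorithm); the analysis stub and its cost model are invented.

```python
import math
from concurrent.futures import ProcessPoolExecutor, as_completed

def structural_analysis(design_id):
    """Stand-in for one finite element analysis; per-design cost varies,
    which is exactly what defeats static (block) scheduling."""
    n = 20_000 * (1 + design_id % 7)
    return design_id, sum(math.sin(i) for i in range(n))

if __name__ == "__main__":
    designs = range(64)
    results = {}
    # Each worker pulls a new design the moment it goes idle.
    with ProcessPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(structural_analysis, d) for d in designs]
        for fut in as_completed(futures):
            design_id, objective = fut.result()
            results[design_id] = objective
    print(f"completed {len(results)} analyses")
```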
Zhang, Xuejun; Lei, Jiaxing
2015-01-01
To reduce airspace congestion and flight delays simultaneously, this paper formulates the airway network flow assignment (ANFA) problem as a multiobjective optimization model and presents a new multiobjective optimization framework to solve it. Firstly, an effective multi-island parallel evolution algorithm with multiple evolution populations is employed to improve the optimization capability. Secondly, the nondominated sorting genetic algorithm II is applied to each population. In addition, a cooperative coevolution algorithm is adapted to divide the ANFA problem into several low-dimensional biobjective optimization problems that are easier to deal with. Finally, in order to maintain the diversity of solutions and to avoid prematurity, a dynamic adjustment operator based on solution congestion degree is specifically designed for the ANFA problem. Simulation results using real traffic data from the China air route network and daily flight plans demonstrate that the proposed approach can improve solution quality effectively, showing superiority to existing approaches such as the multiobjective genetic algorithm, the well-known multiobjective evolutionary algorithm based on decomposition, and a cooperative coevolution multiobjective algorithm, as well as other parallel evolution algorithms with different migration topologies. PMID:26180840
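The nondominated sorting at the heart of NSGA-II partitions a population into Pareto fronts by domination counts. A compact version of Deb's fast nondominated sort (for minimization) is sketched below with toy congestion/delay pairs; it shows the ranking step only, not the full genetic algorithm.

```python
def nondominated_sort(points):
    """Deb's fast nondominated sort (minimization): returns a list of
    fronts, each a list of indices. O(M N^2) for N points, M objectives."""
    n = len(points)
    S = [[] for _ in range(n)]          # solutions dominated by i
    counts = [0] * n                    # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            le = all(a <= b for a, b in zip(points[i], points[j]))
            lt = any(a < b for a, b in zip(points[i], points[j]))
            if le and lt:
                S[i].append(j)          # i dominates j
            elif all(b <= a for a, b in zip(points[i], points[j])) and \
                 any(b < a for a, b in zip(points[i], points[j])):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in S[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

# Two objectives per solution: (congestion, delay), toy values.
print(nondominated_sort([(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]))
# -> [[0, 1, 3], [2], [4]]: the first front is the Pareto set
```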
Integrated design and management of complex and fast track projects
NASA Astrophysics Data System (ADS)
Mancini, Dario
2003-02-01
Modern scientific and technological projects are increasingly in competition over scientific aims, technological innovation, performance, time and cost. They require a dedicated and innovative organization able to simultaneously satisfy the various technical and logistic constraints imposed by the final user, and to guarantee the satisfaction of technical specifications identified on the basis of scientific aims. In order to satisfy all the above, the management has to be strategically innovative and intuitive, removing, first of all, the bottlenecks that are usually pointed out only at the end of a project as the causes of general dissatisfaction. More than 30 years spent working on complex multidisciplinary systems and 20 years of formative experience in managing scientific, technological and industrial projects in parallel have given the author the possibility to study, test and validate strategies for parallel project management and integrated design, merged into a sort of unique optimized task described by the newly-coined word "Technomethodology". The paper highlights useful information to be taken into consideration during project organization to minimize deviations from the expected goals, and describes some of the basic elements of this new method, which is the key to the parallel, successful management of multiple interdisciplinary activities.
Alexander, David
2002-03-01
This paper compares the terrorist outrages of 11 September 2001 in New York City and Washington to the Lisbon earthquake of 1 November 1755. Both events occurred, literally out of the blue, at critical junctures in history and both struck at the heart of large trading networks. Both affected public attitudes towards disaster as, not only did they cause unparalleled destruction, but they also represented symbolic victories of chaos over order, and of moral catastrophism over a benign view of human endeavour. The Lisbon earthquake led to a protracted debate on teleology, which has some parallels in the debate on technological values in modern society. It remains to be seen whether there will be parallels in the reconstruction and the ways in which major disasters are rationalised in the long term. But despite the differences between these two events--which are obviously very large as nearly 250 years of history separate them and they were the work of different sorts of forces--there are lessons to be learned from the comparison. One of these is that disaster can contribute to a perilous form of self-absorption and cultural isolation.
Increasing morphological complexity in multiple parallel lineages of the Crustacea
Adamowicz, Sarah J.; Purvis, Andy; Wills, Matthew A.
2008-01-01
The prospect of finding macroevolutionary trends and rules in the history of life is tremendously appealing, but very few pervasive trends have been found. Here, we demonstrate a parallel increase in the morphological complexity of most of the deep lineages within a major clade. We focus on the Crustacea, measuring the morphological differentiation of limbs. First, we show a clear trend of increasing complexity among 66 free-living, ordinal-level taxa from the Phanerozoic fossil record. We next demonstrate that this trend is pervasive, occurring in 10 or 11 of 12 matched-pair comparisons (across five morphological diversity indices) between extinct Paleozoic and related Recent taxa. This clearly differentiates the pattern from the effects of lineage sorting. Furthermore, newly appearing taxa tend to have had more types of limbs and a higher degree of limb differentiation than the contemporaneous average, whereas those going extinct showed higher-than-average limb redundancy. Patterns of contemporary species diversity partially reflect the paleontological trend. These results provide a rare demonstration of a large-scale and probably driven trend occurring across multiple independent lineages and influencing both the form and number of species through deep time and in the present day. PMID:18347335
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mallajosyula, Arun T.; Nie, Wanyi; Gupta, Gautam
A prerequisite for carbon nanotube-based optoelectronic devices is the ability to sort them into a pure semiconductor phase. One of the most common sorting routes is enabled through using specific wrapping polymers. Here we show that subtle changes in the polymer structure can have a dramatic influence on the figures of merit of a carbon nanotube-based photovoltaic device. By comparing two commonly used polyfluorenes (PFO and PFO-BPy) for wrapping (7,5) and (6,5) chirality SWCNTs, we demonstrate that they have contrasting effects on the device efficiency. We attribute this to the differences in their ability to efficiently transfer charge. Although PFO may act as an efficient interfacial layer at the anode, PFO-BPy, having the additional pyridine side groups, forms a high-resistance layer degrading the device efficiency. By comparing PFO|C60 and C60-only devices, we found that the presence of a PFO layer at low optical densities resulted in the increase of all three solar cell parameters, giving nearly an order of magnitude higher efficiency over that of C60-only devices. In addition, with a relatively higher contribution to photocurrent from the PFO-C60 interface, an open circuit voltage of 0.55 V was obtained for PFO-(7,5)-C60 devices. On the other hand, PFO-BPy does not affect the open circuit voltage but drastically reduces the short circuit current density. Lastly, these results indicate that the charge transport properties and energy levels of the sorting polymers have to be taken into account to fully understand their effect on carbon nanotube-based solar cells.
A Parallel Fast Sweeping Method for the Eikonal Equation
NASA Astrophysics Data System (ADS)
Baker, B.
2017-12-01
Recently, there has been an exciting emergence of probabilistic methods for travel time tomography. Unlike gradient-based optimization strategies, probabilistic tomographic methods are resistant to becoming trapped in a local minimum and provide a much better quantification of parameter resolution than, say, appealing to ray density or performing checkerboard reconstruction tests. The benefits associated with random sampling methods, however, are only realized by successive computation of predicted travel times in potentially strongly heterogeneous media. To this end, this abstract is concerned with expediting the solution of the Eikonal equation. While many Eikonal solvers use a fast marching method, the proposed solver uses the iterative fast sweeping method because the eight fixed sweep orderings in each iteration are natural targets for parallelization. To reduce the number of iterations and grid points required, the high-accuracy finite difference stencil of Nobel et al. (2014) is implemented. A directed acyclic graph (DAG) is created with a priori knowledge of the sweep ordering and finite difference stencil. By performing a topological sort of the DAG, sets of independent nodes are identified as candidates for concurrent updating. Additionally, the proposed solver addresses scalability during earthquake relocation, a necessary step in local and regional earthquake tomography and a barrier to extending probabilistic methods from active source to passive source applications, by introducing an asynchronous parallel forward solve phase for all receivers in the network. Synthetic examples using the SEG over-thrust model will be presented.
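The parallel structure described here is easy to see in 2D: for each sweep ordering, a node depends only on its two upwind neighbors, so the topological levels of the sweep DAG are the oriented anti-diagonals, and every node on a level can be updated concurrently. The sketch below uses the basic first-order Godunov update with the four 2D orderings (not the high-accuracy stencil or the asynchronous multi-receiver phase).

```python
import numpy as np

ORDERINGS = [(1, 1), (-1, 1), (1, -1), (-1, -1)]   # 4 sweeps in 2D (8 in 3D)

def levels_for(nx, ny, si, sj):
    """Topological levels of the sweep DAG for one ordering: under
    direction (si, sj) a node depends on its upwind neighbors only, so
    nodes on the same oriented anti-diagonal are mutually independent."""
    levels = [[] for _ in range(nx + ny - 1)]
    ii = range(nx) if si > 0 else range(nx - 1, -1, -1)
    jj = range(ny) if sj > 0 else range(ny - 1, -1, -1)
    for a, i in enumerate(ii):
        for b, j in enumerate(jj):
            levels[a + b].append((i, j))
    return levels

def sweep(T, f, h, si, sj):
    """One Godunov sweep of |grad T| = f; every node in a level could be
    updated concurrently by a thread or GPU block."""
    nx, ny = T.shape
    for level in levels_for(nx, ny, si, sj):
        for i, j in level:                     # independent within a level
            a = min(T[max(i - 1, 0), j], T[min(i + 1, nx - 1), j])
            b = min(T[i, max(j - 1, 0)], T[i, min(j + 1, ny - 1)])
            if min(a, b) == np.inf:
                continue
            if abs(a - b) >= f[i, j] * h:
                t = min(a, b) + f[i, j] * h
            else:
                t = 0.5 * (a + b + np.sqrt(2 * (f[i, j] * h) ** 2 - (a - b) ** 2))
            T[i, j] = min(T[i, j], t)

T = np.full((50, 50), np.inf)
T[25, 25] = 0.0                                # point source
f = np.ones_like(T)                            # unit slowness
for si, sj in ORDERINGS:
    sweep(T, f, h=1.0, si=si, sj=sj)
print("corner travel time:", round(float(T[0, 0]), 2))   # near 25*sqrt(2)
```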
Extensive gene tree discordance and hemiplasy shaped the genomes of North American columnar cacti.
Copetti, Dario; Búrquez, Alberto; Bustamante, Enriquena; Charboneau, Joseph L M; Childs, Kevin L; Eguiarte, Luis E; Lee, Seunghee; Liu, Tiffany L; McMahon, Michelle M; Whiteman, Noah K; Wing, Rod A; Wojciechowski, Martin F; Sanderson, Michael J
2017-11-07
Few clades of plants have proven as difficult to classify as cacti. One explanation may be an unusually high level of convergent and parallel evolution (homoplasy). To evaluate support for this phylogenetic hypothesis at the molecular level, we sequenced the genomes of four cacti in the especially problematic tribe Pachycereeae, which contains most of the large columnar cacti of Mexico and adjacent areas, including the iconic saguaro cactus (Carnegiea gigantea) of the Sonoran Desert. We assembled a high-coverage draft genome for saguaro and lower coverage genomes for three other genera of tribe Pachycereeae (Pachycereus, Lophocereus, and Stenocereus) and a more distant outgroup cactus, Pereskia. We used these to construct 4,436 orthologous gene alignments. Species tree inference consistently returned the same phylogeny, but gene tree discordance was high: 37% of gene trees with at least 90% bootstrap support conflicted with the species tree. Evidently, discordance is a product of long generation times and moderately large effective population sizes, leading to extensive incomplete lineage sorting (ILS). In the best supported gene trees, 58% of apparent homoplasy at amino acid sites in the species tree is due to gene tree-species tree discordance rather than parallel substitutions in the gene trees themselves, a phenomenon termed "hemiplasy." The high rate of genomic hemiplasy may contribute to apparent parallelisms in phenotypic traits, which could confound understanding of species relationships and character evolution in cacti. Published under the PNAS license.
Madej, Mary Ann; Sutherland, D.G.; Lisle, T.E.; Pryor, B.
2009-01-01
At the reach scale, a channel adjusts to sediment supply and flow through mutual interactions among channel form, bed particle size, and flow dynamics that govern river bed mobility. Sediment can impair the beneficial uses of a river, but the timescales for studying recovery following high sediment loading in the field setting make flume experiments appealing. We use a flume experiment, coupled with field measurements in a gravel-bed river, to explore sediment transport, storage, and mobility relations under various sediment supply conditions. Our flume experiment modeled adjustments of channel morphology, slope, and armoring in a gravel-bed channel. Under moderate sediment increases, channel bed elevation increased and sediment output increased, but channel planform remained similar to pre-feed conditions. During the following degradational cycle, most of the excess sediment was evacuated from the flume and the bed became armored. Under high sediment feed, channel bed elevation increased, the bed became smoother, mid-channel bars and bedload sheets formed, and water surface slope increased. Concurrently, output increased and became more poorly sorted. During the last degradational cycle, the channel became armored and channel incision ceased before all excess sediment was removed. Selective transport of finer material was evident throughout the aggradational cycles and became more pronounced during degradational cycles as the bed became armored. Our flume results of changes in bed elevation, sediment storage, channel morphology, and bed texture parallel those from field surveys of Redwood Creek, northern California, which has exhibited channel bed degradation for 30 years following a large aggradation event in the 1970s. The flume experiment suggested that channel recovery in terms of reestablishing a specific morphology may not occur, but the channel may return to a state of balancing sediment supply and transport capacity.
Bayer image parallel decoding based on GPU
NASA Astrophysics Data System (ADS)
Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua
2012-11-01
In the photoelectrical tracking system, the Bayer image is traditionally decompressed with a CPU-based method. However, this is too slow when the images become large, for example 2K×2K×16bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA's Graphics Processing Unit (GPU), which supports the CUDA architecture. The decoding procedure can be divided into three parts: the first is a serial part, the second a task-parallelism part, and the last a data-parallelism part comprising inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce the execution time, the task-parallelism part is optimized with OpenMP techniques. The data-parallelism part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization and texture memory optimization. In particular, the IDWT can be significantly sped up by rewriting the two-dimensional serial IDWT as a one-dimensional parallel IDWT. In experiments with a 1K×1K×16bit Bayer image, the data-parallelism part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speed increase compared to the serial CPU method.
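The 2D-to-1D restructuring works because the separable IDWT is just two batches of independent 1D transforms: every column can be reconstructed in parallel, then every row. The sketch below shows the idea with an orthonormal Haar synthesis step vectorized over rows (numpy broadcasting standing in for one GPU thread per line); the subband pairing convention is assumed, and the paper's actual wavelet is not specified here.

```python
import numpy as np

def inverse_haar_1d(approx, detail):
    """Inverse 1D Haar step, vectorized over the leading axes so every
    row is reconstructed in parallel (on a GPU: one thread per row)."""
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    out = np.empty(approx.shape[:-1] + (2 * approx.shape[-1],))
    out[..., 0::2], out[..., 1::2] = even, odd
    return out

def inverse_haar_2d(LL, LH, HL, HH):
    """Separable 2D IDWT as two batches of independent 1D transforms:
    first along columns, then along rows. Subband pairing is assumed."""
    lo = inverse_haar_1d(LL.T, LH.T).T     # columns of the low band
    hi = inverse_haar_1d(HL.T, HH.T).T     # columns of the high band
    return inverse_haar_1d(lo, hi)         # then all rows at once

rng = np.random.default_rng(1)
LL, LH, HL, HH = (rng.normal(size=(4, 4)) for _ in range(4))
print(inverse_haar_2d(LL, LH, HL, HH).shape)  # (8, 8)
```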
Fast parallel algorithm for slicing STL based on pipeline
NASA Astrophysics Data System (ADS)
Ma, Xulong; Lin, Feng; Yao, Bo
2016-05-01
In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread count and layer count are investigated in a series of experiments. The experimental results show that thread count and layer count are two significant factors in the speedup ratio. The tendency of speedup versus thread count reveals a positive relationship that agrees well with Amdahl's law, and the tendency of speedup versus layer count also keeps a positive relationship, agreeing with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. Another parallel algorithm, based on data parallelism, is used in the experiments to show that the pipeline parallel mode is more efficient. A final case study demonstrates the performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm makes full use of multi-core CPU hardware and accelerates the slicing process; compared with the data-parallel slicing algorithm, it achieves a much higher speedup ratio and efficiency.
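Pipeline parallelism here means the slicing stages run concurrently on different layers, like an assembly line. A minimal sketch of that scheduling pattern follows; the three stage names and their stub bodies are invented, and a real slicer would do geometry in place of the tuples.

```python
import queue
import threading

# Three pipeline stages (read facets, compute intersections, link contours)
# run concurrently on different slicing planes. Stage internals are stubs;
# only the scheduling pattern is the point.
SENTINEL = None

def stage(inbox, outbox, work):
    while True:
        item = inbox.get()
        if item is SENTINEL:
            outbox.put(SENTINEL)      # pass shutdown downstream
            return
        outbox.put(work(item))

layers_in, q1, q2, done = (queue.Queue() for _ in range(4))
threads = [
    threading.Thread(target=stage, args=(layers_in, q1, lambda z: ("facets", z))),
    threading.Thread(target=stage, args=(q1, q2, lambda f: ("segments", f[1]))),
    threading.Thread(target=stage, args=(q2, done, lambda s: ("contour", s[1]))),
]
for t in threads:
    t.start()
for z in range(10):                   # ten slicing planes
    layers_in.put(z * 0.2)
layers_in.put(SENTINEL)
for t in threads:
    t.join()

results = []
while not done.empty():
    item = done.get()
    if item is not SENTINEL:
        results.append(item)
print(len(results), "layers sliced")
```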
Wireless live streaming video of laparoscopic surgery: a bandwidth analysis for handheld computers.
Gandsas, Alex; McIntire, Katherine; George, Ivan M; Witzke, Wayne; Hoskins, James D; Park, Adrian
2002-01-01
Over the last six years, streaming media has emerged as a powerful tool for delivering multimedia content over networks. Concurrently, wireless technology has evolved, freeing users from desktop boundaries and wired infrastructures. At the University of Kentucky Medical Center, we have integrated these technologies to develop a system that can wirelessly transmit live surgery from the operating room to a handheld computer. This study establishes the feasibility of using our system to view surgeries and describes the effect of bandwidth on image quality. A live laparoscopic ventral hernia repair was transmitted to a single handheld computer using five encoding speeds at a constant frame rate, and the quality of the resulting streaming images was evaluated. No video images were rendered when video data were encoded at 28.8 kilobits per second (Kbps), the slowest encoding bitrate studied. The highest quality images were rendered at encoding speeds greater than or equal to 150 Kbps. Of note, a 15-second transmission delay was experienced with all four encoding schemes that rendered video images. We believe that the wireless transmission of streaming video to handheld computers has tremendous potential to enhance surgical education. For medical students and residents, the ability to view live surgeries, lectures, courses and seminars on handheld computers means a larger number of learning opportunities. In addition, we envision that wireless-enabled devices may be used to telemonitor surgical procedures. However, bandwidth availability and streaming delay are major issues that must be addressed before wireless telementoring becomes a reality.
Parallelization of the preconditioned IDR solver for modern multicore computer systems
NASA Astrophysics Data System (ADS)
Bessonov, O. A.; Fedoseyev, A. I.
2012-10-01
This paper presents the analysis, parallelization and optimization approach for the large sparse matrix solver CNSPACK on modern multicore microprocessors. CNSPACK is an advanced solver successfully used for the coupled solution of stiff problems arising in multiphysics applications such as CFD, semiconductor transport, and kinetic and quantum problems. It employs an iterative IDR algorithm with ILU preconditioning of user-chosen order. CNSPACK has been used successfully during the last decade for solving problems in several application areas, including fluid dynamics and semiconductor device simulation. However, recent years have brought dramatic changes in processor architectures and computer system organization. Because of this, performance criteria and methods have been revisited, and the solver and preconditioner have been parallelized using the OpenMP environment. Results of the implementation, showing efficient parallelization, are presented for modern computer systems (Intel Core i7-9xx and two-processor Xeon 55xx/56xx).
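The solver's structure, a Krylov iteration wrapped around an incomplete-LU preconditioner, can be reproduced with stock SciPy. SciPy ships no IDR implementation, so the sketch below substitutes BiCGSTAB as the Krylov method; the matrix is a toy tridiagonal operator, not a CNSPACK test case.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy sparse system standing in for a coupled multiphysics matrix.
n = 1000
A = sp.diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# ILU preconditioner of user-chosen fill, applied via a LinearOperator.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator((n, n), ilu.solve)

# BiCGSTAB stands in for the IDR iteration SciPy lacks.
x, info = spla.bicgstab(A, b, M=M)
print("converged" if info == 0 else f"info={info}",
      "| residual:", np.linalg.norm(b - A @ x))
```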
Design of monitoring system for mail-sorting based on the Profibus S7 series PLC
NASA Astrophysics Data System (ADS)
Zhang, W.; Jia, S. H.; Wang, Y. H.; Liu, H.; Tang, G. C.
2017-01-01
With the rapid development of postal express services, the workload of mail sorting is increasing, but automatic mail-sorting technology is not yet mature. In view of this, the system uses a Siemens S7-300 PLC as the master station controller and Siemens S7-200/400 PLCs as slave station controllers; through the man-machine interface configuration software MCGS, PROFIBUS-DP communication, RFID technology and a mechanical sorting manipulator, it monitors mail classification and sorting. Mail is identified for sorting by scanning the RFID electronic bar code (fixed code) attached to each piece; the corresponding controller processes the acquired information and transmits it to the sorting manipulator over PROFIBUS-DP. The system can realize accurate and efficient mail sorting, which will promote the development of mail-sorting technology.
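At the control level the sorting decision is a lookup from the scanned fixed code to a destination actuated by the manipulator. The toy dispatch below illustrates that mapping only; the code format and bin names are invented, and a real implementation lives in PLC ladder logic or structured text rather than Python.

```python
# Hypothetical route table: leading digits of the fixed code pick a bin.
ROUTE_TABLE = {
    "01": "bin_local",
    "02": "bin_regional",
    "03": "bin_international",
}

def dispatch(rfid_code: str) -> str:
    """Map a scanned fixed code to a sorting-manipulator destination."""
    region = rfid_code[:2]
    return ROUTE_TABLE.get(region, "bin_manual_review")

for tag in ["0193842", "0377120", "9900001"]:
    print(tag, "->", dispatch(tag))
```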
Tephra Blanket Record of a Violent Strombolian Eruption, Sunset Crater, Arizona
NASA Astrophysics Data System (ADS)
Wagner, K. D.; Ort, M. H.
2015-12-01
New fieldwork provides a detailed description of the widespread tephra of the ~1085 CE Sunset Crater eruption in the San Francisco Volcanic Field, Arizona, and refines interpretation of the eruptive sequence. The basal fine-lapilli tephra-fall units I-IV are considered in detail. Units I and II are massive, with Unit I composed of angular to spiny clasts and II composed of more equant, oxidized clasts. Units III and IV have inversely graded bases and massive tops and are composed of angular to spiny iridescent and mixed iridescent and oxidized angular clasts, respectively. Xenoliths are rare in all units (<0.1%): sedimentary xenoliths are consistent with the known shallow country rock (Moenkopi and Kaibab Fms); magmatic xenoliths are pumiceous rhyolite mingled with basalt. Unit II is less sideromelane rich (20%) than Units I, III, and IV (60-80%). Above these units are at least two more coarse tephra-fall units. Variably preserved ash and fine-lapilli laminae cap the tephra blanket. This deposit is highly susceptible to reworking, and likely experienced both syn- and post-eruptive aeolian redistribution. It appears as either well sorted, alternating planar-parallel beds of ash and fine lapilli with rare wavy beds, or as cross- or planar-bedded ash. The tephra blanket as a whole is stratigraphically underlain by a fissure-fed lava flow and lapilli-fall units are intercalated with two larger flows. Mean grain size is coarsest in Unit I but coarsens in Units II-IV. Units I, III, and IV are moderately to poorly sorted with no skew. Unit II is better sorted and more coarse-skewed. Units I and III are slightly more platykurtic than II and IV. Without considering possible spatial effects introduced by dispersion patterns, bootstrap ANOVA confidence intervals suggest at least Unit II sorting and skewness are from distinct populations. Isopachs indicate Units I and II were associated with a 10-km-long fissure source. After or during Unit II's deposition, activity localized to Sunset Crater. Units III and IV were emplaced with waxing to sustained activity, and were followed by at least two more sustained episodes. Two lava flows began effusing from the cone during this period and remained active after explosive activity ceased. Primary tephra deposition ended with a period of small discrete explosions.
Unusual folding and rolling of Glacio-Lacustrine sediments, Upper Fraser Canyon, British Columbia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baxter, S.
1987-05-01
Folding and rolling of graded but unconsolidated sediments by at least 720° produced a structure resembling a large Swiss roll about 6 ft wide and 4 ft high. The sediments were initially horizontal and well sorted, grading from coarse sands to fine silts. About 50 ft away, at the same level, the sediments include irregular layers of poorly sorted, ice-rafted pebbles and boulders. The sequence is unconformably overlain by till. The axis of folding appears to be parallel to the eastern wall of the Fraser Canyon. The outcrop is in the Stevens Pit (sand and gravel) immediately east of the Trans-Canada Highway, 2 mi south of Lytton, B.C., at an elevation of 1000 ft, approximately 600 ft above the present level of the Fraser River. The sands and silts accumulated in a lake adjacent to the east margin of a stagnant and relatively small glacier occupying the upper part of the Fraser Canyon. Partial or complete melting of small icebergs caused deposition of coarser material. A subsequent cooling trend led to an advance of the glacier, an advance which at this location caused some of the adjacent and by now frozen sediments to be rolled up like an old carpet. Further advance of the glacier caused it to override and thus preserve the deformed sequence.
Park, Kwangjin; Botelho, Salomé Calado; Hong, Joonki; Österberg, Marie; Kim, Hyun
2013-01-01
Mitochondrial inner membrane proteins that carry an N-terminal presequence are sorted by one of two pathways: stop transfer or conservative sorting. However, the sorting pathway is known for only a small number of proteins, in part due to the lack of robust experimental tools with which to study them. Here we present an approach that facilitates determination of inner membrane protein sorting pathways in vivo by fusing a mitochondrial inner membrane protein to the C-terminal part of Mgm1p containing the rhomboid cleavage region. We validated the Mgm1 fusion approach using a set of proteins for which the sorting pathway is known, and determined sorting pathways of inner membrane proteins for which the sorting mode was previously uncharacterized. For Sdh4p, a multispanning membrane protein, our results suggest that both conservative sorting and stop transfer mechanisms are required for insertion. Furthermore, the sorting process of Mgm1 fusion proteins was analyzed under different growth conditions and in yeast mutant strains that were defective in the import motor or in m-AAA protease function. Our results show that the sorting of mitochondrial proteins carrying moderately hydrophobic transmembrane segments is sensitive to cellular conditions, implying that mitochondrial import and membrane sorting in the physiological environment may be dynamically tuned. PMID:23184936
Progress and Challenges in Coupled Hydrodynamic-Ecological Estuarine Modeling
Numerical modeling has emerged over the last several decades as a widely accepted tool for investigations in environmental sciences. In estuarine research, hydrodynamic and ecological models have moved along parallel tracks with regard to complexity, refinement, computational po...
Towards Photo Watercolorization with Artistic Verisimilitude.
Wang, Miaoyi; Wang, Bin; Fei, Yun; Qian, Kanglai; Wang, Wenping; Chen, Jiating; Yong, Jun-Hai
2014-10-01
We present a novel artistic-verisimilitude driven system for watercolor rendering of images and photos. Our system achieves realistic simulation of a set of important characteristics of watercolor paintings that have not been well implemented before. Specifically, we designed several image filters to achieve: 1) watercolor-specified color transfer; 2) saliency-based level-of-detail drawing; 3) a hand tremor effect due to human neural noise; and 4) an artistically controlled wet-in-wet effect in the border regions of different wet pigments. A user study indicates that our method can produce watercolor results with better artistic verisimilitude than previous filter-based or physics-based methods. Furthermore, our algorithm is efficient and can easily be parallelized, making it suitable for interactive image watercolorization.
An Exact Efficiency Formula for Holographic Heat Engines
Johnson, Clifford
2016-03-31
Further consideration is given to the efficiency of a class of black hole heat engines that perform mechanical work via the p dV terms present in the First Law of extended gravitational thermodynamics. It is noted that, when the engine cycle is a rectangle with sides parallel to the (p,V) axes, the efficiency can be written simply in terms of the mass of the black hole evaluated at the corners. Since an arbitrary cycle can be approximated to any desired accuracy by a tiling of rectangles, a general geometrical algorithm for computing the efficiency of such a cycle follows. Finally, a simple generalization of the algorithm renders it applicable to broader classes of heat engine, even beyond the black hole context.
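To make the corner statement concrete, here is the bookkeeping for a rectangular cycle, with corners labelled 1, 2 along the top (high-pressure) isobar from left to right and 4, 3 along the bottom; this labelling convention is an assumption made here. Along an isobar the heat flow equals the enthalpy change, and in extended thermodynamics the enthalpy is the black hole mass M, so:

```latex
\begin{align}
  Q_H &= M_2 - M_1, \qquad Q_C = M_3 - M_4, \\
  \eta &= \frac{W}{Q_H} \;=\; 1 - \frac{Q_C}{Q_H}
        \;=\; 1 - \frac{M_3 - M_4}{M_2 - M_1}.
\end{align}
```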
Seminal plasma affects sperm sex sorting in boars.
Alkmin, Diego V; Parrilla, Inmaculada; Tarantini, Tatiana; Del Olmo, David; Vazquez, Juan M; Martinez, Emilio A; Roca, Jordi
2016-04-01
Two experiments were conducted on boar semen samples to evaluate how both holding time (24 h) and the presence of seminal plasma (SP) before sorting affect sperm sortability and the ability of sex-sorted spermatozoa to tolerate liquid storage. Whole ejaculate samples were divided into three aliquots immediately after collection: one was diluted (1:1, v/v) in Beltsville thawing solution (BTS; 50% SP); the SP of the other two aliquots was removed and the sperm pellets were diluted with BTS + 10% of their own SP (10% SP) or BTS alone (0% SP). The three aliquots of each ejaculate were divided into two portions, one that was processed immediately for sorting and a second that was sorted after 24 h storage at 15-17°C. In the first experiment, the ability to exhibit well-defined X- and Y-chromosome-bearing sperm peaks (split) in the cytometry histogram and the subsequent sorting efficiency were assessed (20 ejaculates). In contrast with holding time, the SP proportion influenced the parameters examined, as evidenced by the higher number of ejaculates exhibiting split and better sorting efficiency (P<0.05) in semen samples with 0-10% SP compared with those with 50% SP. In a second experiment, the quality (viability, total and progressive motility) and functionality (plasma membrane fluidity and intracellular generation of reactive oxygen species) of sex-sorted spermatozoa were evaluated after 0, 72 and 120 h storage at 15-17°C (10 ejaculates). Holding time and SP proportion did not influence the quality or functionality of stored sex-sorted spermatozoa. In conclusion, a holding time as long as 24 h before sorting did not negatively affect sex sorting efficiency or the ability of sorted boar spermatozoa to tolerate long-term liquid storage. A high proportion of SP (50%) in the semen samples before sorting reduced the number of ejaculates to be sorted and negatively influenced the sorting efficiency, but did not affect the ability of sex-sorted spermatozoa to tolerate liquid storage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-02-02
This report consists of three separate but related reports: (1) Human Resource Development, (2) Carbon-based Structural Materials Research Cluster, and (3) Data Parallel Algorithms for Scientific Computing. To meet the objectives of the Human Resource Development plan, the plan includes K-12 enrichment activities, undergraduate research opportunities for students at the state's two Historically Black Colleges and Universities, graduate research through cluster assistantships and through a traineeship program targeted specifically to minorities, women and the disabled, and faculty development through participation in research clusters. One research cluster is the chemistry and physics of carbon-based materials. The objective of this cluster is to develop a self-sustaining group of researchers in carbon-based materials research within the institutions of higher education in the state of West Virginia. The projects will involve analysis of cokes, graphites and other carbons in order to understand the properties that provide desirable structural characteristics, including resistance to oxidation, levels of anisotropy and structural characteristics of the carbons themselves. In the proposed cluster on parallel algorithms, the research areas of four WVU faculty and three state liberal arts college faculty are: (1) modeling of self-organized critical systems by cellular automata; (2) multiprefix algorithms and fat-free embeddings; (3) offline and online partitioning of data computation; and (4) manipulating and rendering three-dimensional objects. This cluster furthers the state Experimental Program to Stimulate Competitive Research plan by building on existing strengths at WVU in parallel algorithms.
Tolerance of Erythrocytes in Poultry: Induction and Specificity
Mitchison, N. A.
1962-01-01
Measurement of the rate of elimination of 51Cr-labelled erythrocytes provides a reliable test of immunity in fowls. Chickens can be rendered tolerant of homologous and turkey erythrocytes, as judged by this test, by receiving a series of transfusions of irradiated blood. The series were arranged so that foreign cells remained present in the circulation from the time of hatching. Tolerance induced by this treatment is generally incomplete, but can last indefinitely. In some chickens the manifestation of tolerance of turkey erythrocytes is delayed, probably because of passive transmission of antibody from the dam. Chickens old enough to react against small transfusions of homologous blood can still be rendered tolerant by massive transfusions. Tolerance of the erythrocytes from an individual donor extends only slightly to those from other donors. Tolerance acquired in this way, through transfusion of irradiated blood, stands in contrast to the more stable and complete tolerance that can be acquired through administration of viable cells. Viable cells, on the other hand, provide a less sensitive test, for birds which tolerate skin homografts often eliminate rapidly erythrocytes from the same donor. PMID:14474652
Correlative visualization techniques for multidimensional data
NASA Technical Reports Server (NTRS)
Treinish, Lloyd A.; Goettsche, Craig
1989-01-01
Critical to the understanding of data is the ability to provide pictorial or visual representations of those data, particularly in support of correlative data analysis. Despite the advancement of visualization techniques for scientific data over the last several years, there are still significant problems in bringing today's hardware and software technology into the hands of the typical scientist. For example, computer science domains outside of computer graphics, such as data management, are required to make visualization effective. Well-defined, flexible mechanisms for data access and management must be combined with rendering algorithms, data transformation, etc. to form a generic visualization pipeline. A generalized approach to data visualization is critical for the correlative analysis of distinct, complex, multidimensional data sets in the space and Earth sciences. Different classes of data representation techniques must be used within such a framework, ranging from simple, static two- and three-dimensional line plots to animation, surface rendering, and volumetric imaging. Static examples of actual data analyses illustrate the importance of an effective pipeline in a data visualization system.
Mitochondrial inheritance in budding yeasts: towards an integrated understanding.
Solieri, Lisa
2010-11-01
Recent advances in yeast mitogenomics have significantly contributed to our understanding of the diversity of organization, structure and topology in the mitochondrial genome of budding yeasts. In parallel, new insights on mitochondrial DNA (mtDNA) inheritance in the model organism Saccharomyces cerevisiae highlighted an integrated scenario where recombination, replication and segregation of mtDNA are intricately linked to mitochondrial nucleoid (mt-nucleoid) structure and organelle sorting. In addition to this, recent discoveries of bifunctional roles of some mitochondrial proteins have interesting implications on mito-nuclear genome interactions and the relationship between mtDNA inheritance, yeast fitness and speciation. This review summarizes the current knowledge on yeast mitogenomics, mtDNA inheritance with regard to mt-nucleoid structure and organelle dynamics, and mito-nuclear genome interactions. Copyright © 2010 Elsevier Ltd. All rights reserved.
Enhanced and selective optical trapping in a slot-graphite photonic crystal.
Krishnan, Aravind; Huang, Ningfeng; Wu, Shao-Hua; Martínez, Luis Javier; Povinelli, Michelle L
2016-10-03
The applicability of optical trapping tools for nanomanipulation is limited by the available laser power and trap efficiency. We utilized the strong confinement of light in a slot-graphite photonic crystal to develop high-efficiency parallel trapping over a large area. The stiffness is 35 times higher than that of our previously demonstrated on-chip, near-field traps. We demonstrate the ability to trap both dielectric and metallic particles of sub-micron size. We find that the growth kinetics of nanoparticle arrays on the slot-graphite template depend on particle size. This difference is exploited to selectively trap one type of particle out of a binary colloidal mixture, creating an efficient optical sieve. This technique has rich potential for analysis, diagnostics, and the enrichment and sorting of microscopic entities.
Taylor, T; Massey, C
2001-01-01
Karl Sims' work on evolving body shapes and controllers for three-dimensional, physically simulated creatures generated wide interest on its publication in 1994. The purpose of this article is threefold: (a) to highlight a spate of recent work by a number of researchers in replicating, and in some cases extending, Sims' results using standard PCs (Sims' original work was done on a Connection Machine CM-5 parallel computer). In particular, a re-implementation of Sims' work by the authors will be described and discussed; (b) to illustrate how off-the-shelf physics engines can be used in this sort of work, and also to highlight some deficiencies of these engines and pitfalls when using them; and (c) to indicate how these recent studies stand in respect to Sims' original work.
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2013-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The first version of this tool was a serial code and the current version is a parallel code, which has greatly increased the analysis capabilities. This paper describes the new implementation of this analysis tool on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
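The core idea of such a tool is to sweep a large table of Monte Carlo runs and report which input dispersions separate failures from successes. A minimal sketch follows; the column names, dispersion distributions, and failure flag are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy Monte Carlo table: one row per run, one column per dispersed input.
n_runs = 100_000
inputs = {
    "mass_error":   rng.normal(0.0, 1.0, n_runs),
    "thrust_error": rng.normal(0.0, 1.0, n_runs),
    "wind_gust":    rng.normal(0.0, 1.0, n_runs),
}
# Hypothetical failure flag (a real tool would read this from sim output).
failed = (inputs["thrust_error"] > 1.5) & (inputs["wind_gust"] > 0.5)

for name, values in inputs.items():
    shift = abs(values[failed].mean() - values[~failed].mean())
    print(f"{name:>13}: failure-vs-success mean shift = {shift:.2f} sigma")
# Large shifts point the analyst at the dispersions driving the failures.
```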
Stochastic Model of Vesicular Sorting in Cellular Organelles
NASA Astrophysics Data System (ADS)
Vagne, Quentin; Sens, Pierre
2018-02-01
The proper sorting of membrane components by regulated exchange between cellular organelles is crucial to intracellular organization. This process relies on the budding and fusion of transport vesicles, and should be strongly influenced by stochastic fluctuations, considering the relatively small size of many organelles. We identify the perfect sorting of two membrane components initially mixed in a single compartment as a first passage process, and we show that the mean sorting time exhibits two distinct regimes as a function of the ratio of vesicle fusion to budding rates. Low ratio values lead to fast sorting but result in a broad size distribution of sorted compartments dominated by small entities. High ratio values result in two well-defined sorted compartments but sorting is exponentially slow. Our results suggest an optimal balance between vesicle budding and fusion for the rapid and efficient sorting of membrane components and highlight the importance of stochastic effects for the steady-state organization of intracellular compartments.
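The first-passage framing lends itself to a direct Gillespie-style simulation: draw exponential waiting times from the total budding and fusion rates, apply a random event, and stop when no mixed compartment remains. The sketch below is a simplified toy in the spirit of the abstract, not the paper's exact model; the rate rules and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def time_to_sort(n=50, fusion_to_budding=0.01, max_events=200_000):
    """First-passage time until every compartment is pure. Compartments
    hold counts of two membrane species; budding ejects one random patch
    as a new (pure) vesicle, fusion merges two random compartments."""
    comps = [np.array([n // 2, n // 2])]
    t = 0.0
    for _ in range(max_events):
        if not any(c[0] > 0 and c[1] > 0 for c in comps):
            return t                                    # fully sorted
        rate_bud = sum(int(c.sum()) for c in comps)     # one per patch
        rate_fuse = fusion_to_budding * len(comps) * (len(comps) - 1) / 2
        total = rate_bud + rate_fuse
        t += rng.exponential(1.0 / total)
        if rng.random() < rate_bud / total:             # budding event
            w = np.array([c.sum() for c in comps], float)
            i = rng.choice(len(comps), p=w / w.sum())
            s = rng.choice(2, p=comps[i] / comps[i].sum())
            comps[i] = comps[i] - np.eye(2, dtype=int)[s]
            comps.append(np.eye(2, dtype=int)[s])
            comps = [c for c in comps if c.sum() > 0]
        elif len(comps) >= 2:                           # fusion event
            i, j = rng.choice(len(comps), size=2, replace=False)
            comps[i] = comps[i] + comps[j]
            del comps[j]
    return None                                         # did not sort in time

print("sorting time:", time_to_sort())
```

Raising `fusion_to_budding` reproduces the trade-off described above: fusion keeps compartments large and well defined but remixing makes complete sorting exponentially slower.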
Identification and genetic analysis of cancer cells with PCR-activated cell sorting
Eastburn, Dennis J.; Sciambi, Adam; Abate, Adam R.
2014-01-01
Cell sorting is a central tool in life science research for analyzing cellular heterogeneity or enriching rare cells out of large populations. Although methods like FACS and FISH-FC can characterize and isolate cells from heterogeneous populations, they are limited by their reliance on antibodies, or the requirement to chemically fix cells. We introduce a new cell sorting technology that robustly sorts based on sequence-specific analysis of cellular nucleic acids. Our approach, PCR-activated cell sorting (PACS), uses TaqMan PCR to detect nucleic acids within single cells and trigger their sorting. With this method, we identified and sorted prostate cancer cells from a heterogeneous population by performing >132 000 simultaneous single-cell TaqMan RT-PCR reactions targeting vimentin mRNA. Following vimentin-positive droplet sorting and downstream analysis of recovered nucleic acids, we found that cancer-specific genomes and transcripts were significantly enriched. Additionally, we demonstrate that PACS can be used to sort and enrich cells via TaqMan PCR reactions targeting single-copy genomic DNA. PACS provides a general new technical capability that expands the application space of cell sorting by enabling sorting based on cellular information not amenable to existing approaches. PMID:25030902
A Gravity-Driven Microfluidic Particle Sorting Device with Hydrodynamic Separation Amplification
Huh, Dongeun; Bahng, Joong Hwan; Ling, Yibo; Wei, Hsien-Hung; Kripfgans, Oliver D.; Fowlkes, J. Brian; Grotberg, James B.; Takayama, Shuichi
2008-01-01
This paper describes a simple microfluidic sorting system that can perform size-profiling and continuous mass-dependent separation of particles through combined use of gravity (1g) and hydrodynamic flows capable of rapidly amplifying sedimentation-based separation between particles. Operation of the device relies on two microfluidic transport processes: i) initial hydrodynamic focusing of particles in a microchannel oriented parallel to gravity, ii) subsequent sample separation where positional difference between particles with different mass generated by sedimentation is further amplified by hydrodynamic flows whose streamlines gradually widen out due to the geometry of a widening microchannel oriented perpendicular to gravity. The microfluidic sorting device was fabricated in poly(dimethylsiloxane) (PDMS), and hydrodynamic flows in microchannels were driven by gravity without using external pumps. We conducted theoretical and experimental studies on fluid dynamic characteristics of laminar flows in widening microchannels and hydrodynamic amplification of particle separation. Direct trajectory monitoring, collection, and post-analysis of separated particles were performed using polystyrene microbeads with different sizes to demonstrate rapid (< 1 min) and high-purity (> 99.9 %) separation. Finally, we demonstrated biomedical applications of our system by isolating small-sized (diameter < 6 μm) perfluorocarbon liquid droplets from polydisperse droplet emulsions, which is crucial in preparing contrast agents for safe, reliable ultrasound medical imaging, tracers for magnetic resonance imaging, or transpulmonary droplets used in ultrasound-based occlusion therapy for cancer treatment. Our method enables straightforward, rapid real-time size-monitoring and continuous separation of particles in simple stand-alone microfabricated devices without the need for bulky and complex external power sources. We believe that this system will provide a useful tool to separate colloids and particles for various analytical and preparative applications, and may hold potential for separation of cells or development of diagnostic tools requiring point-of-care sample preparation or testing. PMID:17297936
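The physics the device exploits can be checked on the back of an envelope: Stokes settling speed grows with the square of particle radius, so a short settling section already separates sizes, and the widening channel then amplifies the positional gap geometrically. The numbers below are illustrative only, including the assumed amplification factor.

```python
# Stokes settling: v = 2 * d_rho * g * r^2 / (9 * mu)
g = 9.81          # m/s^2
mu = 1.0e-3       # water viscosity, Pa*s
d_rho = 50.0      # density mismatch vs water, kg/m^3 (e.g., polystyrene)

def stokes_velocity(radius_m):
    return 2.0 * d_rho * g * radius_m**2 / (9.0 * mu)

for r_um in (1.0, 3.0, 7.5):
    v = stokes_velocity(r_um * 1e-6)
    print(f"r = {r_um:4.1f} um -> settling speed {v * 1e6:7.3f} um/s")

# In 10 s of residence time, 3 um and 7.5 um beads drift apart by:
dz = 10.0 * (stokes_velocity(7.5e-6) - stokes_velocity(3.0e-6))
amplification = 10.0            # widening-channel gain, assumed
print(f"raw gap {dz * 1e6:.1f} um, amplified ~{dz * 1e6 * amplification:.0f} um")
```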
Montano, G A; Kraemer, D C; Love, C C; Robeck, T R; O'Brien, J K
2012-06-01
Artificial insemination (AI) with sex-sorted frozen-thawed spermatozoa has led to enhanced management of ex situ bottlenose dolphin populations. Extended distance of animals from the sorting facility can be overcome by the use of frozen-thawed, sorted and recryopreserved spermatozoa. Although one bottlenose dolphin calf had been born using sexed frozen-thawed spermatozoa derived from frozen semen, a critical evaluation of in vitro sperm quality is needed to justify the routine use of such samples in AI programs. Sperm motility parameters and plasma membrane integrity were influenced by stage of the sex-sorting process, sperm type (non-sorted and sorted) and freezing method (straw and directional) (P<0.05). After recryopreservation, sorted spermatozoa frozen with the directional freezing method maintained higher (P<0.05) motility parameters over a 24-h incubation period compared to spermatozoa frozen using straws. Quality of sperm DNA of non-sorted spermatozoa, as assessed by the sperm chromatin structure assay (SCSA), was high and remained unchanged throughout freeze-thawing and incubation processes. Though a possible interaction between Hoechst 33342 and the SCSA-derived acridine orange was observed in stained and sorted samples, the proportion of sex-sorted, recryopreserved spermatozoa exhibiting denatured DNA was low (6.6±4.1%) at 6 h after the second thawing step and remained unchanged (P>0.05) at 24 h. The viability of sorted spermatozoa was higher (P<0.05) than that of non-sorted spermatozoa across all time points after recryopreservation. Collective results indicate that bottlenose dolphin spermatozoa undergoing cryopreservation, sorting and recryopreservation are of adequate quality for use in AI.
Parasitic momentum flux in the tokamak core
Stoltzfus-Dueck, T.
2017-03-06
A geometrical correction to the E × B drift causes an outward flux of co-current momentum whenever electrostatic potential energy is transferred to ion parallel flows. The robust, fully nonlinear symmetry breaking follows from the free-energy flow in phase space and does not depend on any assumed linear eigenmode structure. The resulting rotation peaking is counter-current and scales as temperature over plasma current. Lastly, this peaking mechanism can only act when fluctuations are low-frequency enough to excite ion parallel flows, which may explain some recent experimental observations related to rotation reversals.
A Fast Algorithm for Massively Parallel, Long-Term, Simulation of Complex Molecular Dynamics Systems
NASA Technical Reports Server (NTRS)
Jaramillo-Botero, Andres; Goddard, William A, III; Fijany, Amir
1997-01-01
The advances in theory and computing technology over the last decade have led to enormous progress in applying atomistic molecular dynamics (MD) methods to the characterization, prediction, and design of chemical, biological, and material systems.
Haunting Echoes of the Last Round-Up: "9066" Revisited.
ERIC Educational Resources Information Center
Trager, James G.
1980-01-01
Discusses the discrimination against and internment of Japanese Americans during World War II, and reminds readers that Congress and the Supreme Court approved the mass discriminatory action. Draws a parallel to current discrimination against Iranians in the United States. (GC)
Synergetic computer and holonics - information dynamics of a semantic computer
NASA Astrophysics Data System (ADS)
Shimizu, H.; Yamaguchi, Y.
1987-12-01
The dynamics of semantic information in biosystems is studied based on holons, generators of mutual relations. Any biosystem has an internal world, a so-called "self", which has an intrinsic purpose: rendering the system continuously alive and as developed as possible against a fluctuating external world. External signals to the system through sensory organs are classified by the self into two basic categories: semantic information, with some meaning and value for the purpose, and inputs from background and noise sources. Due to this breaking of semantic symmetry, input signals are transformed into a figure and a background, respectively. As a typical example, the visual perception of vertebrates is studied. For such semantic transformation the external signal is first decomposed and converted into a number of elementary signs named "syntons", which are then transmitted to a sensory area of cortex corresponding to an image synthesizer. The synthesizer is a sort of autonomic parallel processor composed of autonomic units, "holons", which are characterized by many internal modes. Syntons are fed into the holons one by one. A set of elementary meanings, the so-called "semons", associated with the synton is encoded in the internal modes of the holon; that is, each internal mode encodes a semon. A dynamic information theory for the transformation of external signals into semantic information is developed based on our model, which we call holovision. Holovision is a dynamic model of visual perception that possesses an autonomic ability to self-organize visual images. Autonomous oscillators are utilized as the line processors to encode line elements with specific orientations in their phases as semons. An information space is defined according to the assembly of holons; the spatial plane on which holons are arranged is a syntactic subspace, while the internal modes of the holons span a semantic subspace in the orthogonal direction. In this information space, the image of a figure is self-organized - as a sort of spatiotemporal pattern - by autonomic coordination of the holons that select relevant internal modes, accompanied by compression of irrelevant syntons that correspond to the background. Holons coded by a synton are relevantly connected by means of coherent relations, i.e., dynamic connections with time-coherence, in order to represent the image that varies in time depending on the instantaneous state of the external object. These connections depend on the internal modes that are cooperatively selected by the holons. The image is regarded as a bridge between the external and internal worlds that has both external and internal consistency. The meaning of the image, i.e., transformed semantic information, is spontaneously transferred from semantic items that have a coherent relation with the image, and the external signal is perceived by the self through the image. We demonstrate that images are indeed self-organized in holovision in the previously described sense. Simulated processes of the creation of semantic information in holovision are shown to display typical features of the foregoing steps of information compression. Based on these results, we propose quantitative indices that represent the value of semantic information in the image processor as well as in the memory.
An economic analysis of the processing technologies in CDW recycling platforms.
Oliveira Neto, Raul; Gastineau, Pascal; Cazacliu, Bogdan Grigore; Le Guen, Lauredan; Paranhos, Régis Sebben; Petter, Carlos Otávio
2017-02-01
This paper proposes an economic analysis of three different types of processing in CDW (construction and demolition waste) recycling platforms, classified according to the sophistication of the processing technologies (current advanced, advanced and advanced sorting). The methodology adopted follows the concept of economic evaluation of projects and corresponds to a scoping study phase. In this context, three processing capacities of CDW recycling platforms are analyzed (100, 300 and 600 thousand tons per year). The article draws on databases from similar projects published in the specialized literature; the data sources are primarily from the European continent. The paper shows that the current advanced process has better economic performance, in terms of IRR, than the other two processes. The IRR associated with the advanced and advanced sorting processes could be raised by (i) a higher price of secondary primary material, and/or (ii) a higher capacity of the platforms, and/or (iii) a higher share of secondary primary material in the total production. The first two points depend on market conditions (prices and total quantity of CDW available) and (potential) fiscal or incentive policies. The last one depends on technological progress. Copyright © 2016 Elsevier Ltd. All rights reserved.
Percival, J M; Thomas, G; Cock, T A; Gardiner, E M; Jeffrey, P L; Lin, J J; Weinberger, R P; Gunning, P
2000-11-01
The nonmuscle actin cytoskeleton consists of multiple networks of actin microfilaments. Many of these filament systems are bound by the actin-binding protein tropomyosin (Tm). We investigated whether Tm isoforms could be cell cycle regulated during G0 and G1 phases of the cell cycle in synchronised NIH 3T3 fibroblasts. Using Tm isoform-specific antibodies, we investigated protein expression levels of specific Tms in G0 and G1 phases and whether co-expressed isoforms could be sorted into different compartments. Protein levels of Tms 1, 2, 5a, 6, from the alpha Tm(fast) and beta-Tm genes increased approximately 2-fold during mid-late G1. Tm 3 levels did not change appreciably during G1 progression. In contrast, Tm 5NM gene isoform levels (Tm 5NM-1-11) increased 2-fold at 5 h into G1 and this increase was maintained for the following 3 h. However, Tm 5NM-1 and -2 levels decreased by a factor of three during this time. Comparison of the staining of the antibodies CG3 (detects all Tm 5NM gene products), WS5/9d (detects only two Tms from the Tm 5NM gene, Tm 5NM-1 and -2) and alpha(f)9d (detects specific Tms from the alpha Tm(fast) and beta-Tm genes) antibodies revealed 3 spatially distinct microfilament systems. Tm isoforms detected by alpha(f)9d were dramatically sorted from isoforms from the Tm 5NM gene detected by CG3. Tm 5NM-1 and Tm 5NM-2 were not incorporated into stress fibres, unlike other Tm 5NM isoforms, and marked a discrete, punctate, and highly polarised compartment in NIH 3T3 fibroblasts. All microfilament systems, excluding that detected by the WS5/9d antibody, were observed to coalign into parallel stress fibres at 8 h into G1. However, Tms detected by the CG3 and alpha(f)9d antibodies were incorporated into filaments at different times indicating distinct temporal control mechanisms. Microfilaments in NIH 3T3 cells containing Tm 5NM isoforms were more resistant to cytochalasin D-mediated actin depolymerisation than filaments containing isoforms from the alpha Tm(fast) and beta-Tm genes. This suggests that Tm 5NM isoforms may be in different microfilaments to alpha Tm(fast) and beta-Tm isoforms even when present in the same stress fibre. Staining of primary mouse fibroblasts showed identical Tm sorting patterns to those seen in cultured NIH 3T3 cells. Furthermore, we demonstrate that sorting of Tms is not restricted to cultured cells and can be observed in human columnar epithelial cells in vivo. We conclude that the expression and localisation of Tm isoforms are differentially regulated in G0 and G1 phase of the cell cycle. Tms mark multiple microfilament compartments with restricted tropomyosin composition. The creation of distinct microfilament compartments by differential sorting of Tm isoforms is observable in primary fibroblasts, cultured 3T3 cells and epithelial cells in vivo. Copyright 2000 Wiley-Liss, Inc.
A Quality Sorting of Fruit Using a New Automatic Image Processing Method
NASA Astrophysics Data System (ADS)
Amenomori, Michihiro; Yokomizu, Nobuyuki
This paper presents an innovative approach for quality sorting of objects, such as apples in an agricultural factory, using an image processing algorithm. The objectives of our approach are, first, to sort the objects precisely by their colors and, second, to efficiently detect any irregularity of the colors over the surface of the apples. An experiment was conducted and the results were compared with those obtained by a human sorting process and by color-sensor sorting devices. The results demonstrate that our approach is capable of sorting the objects rapidly, and the classification accuracy was 100%.
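The abstract does not specify the algorithm itself; as a hedged illustration of color-based grading of this general kind, here is a minimal NumPy sketch in which the background mask, thresholds, and red-fraction criterion are all invented for the example.

```python
import numpy as np

# Generic color-based grading sketch in the spirit of the approach above.
# All thresholds are assumptions; the paper's actual algorithm differs.
def grade_apple(rgb_image: np.ndarray) -> str:
    """rgb_image: HxWx3 uint8 array of an apple on a dark background."""
    r = rgb_image[..., 0].astype(float)
    g = rgb_image[..., 1].astype(float)
    b = rgb_image[..., 2].astype(float)
    foreground = (r + g + b) > 60.0            # crude background mask
    redness = (r - np.maximum(g, b))[foreground]
    red_fraction = np.mean(redness > 30.0)     # share of clearly red pixels
    if red_fraction > 0.8:
        return "grade A"                       # uniformly colored
    elif red_fraction > 0.5:
        return "grade B"                       # partly irregular color
    return "reject"                            # irregular surface color

# Example: a synthetic 100x100 all-red "apple" is graded A.
demo = np.zeros((100, 100, 3), dtype=np.uint8)
demo[..., 0] = 200
print(grade_apple(demo))
```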
Müller, Johannes; Hipsley, Christy A; Maisano, Jessica A
2016-11-01
The fossorial amphisbaenians, or worm lizards, are characterized by a suite of specialized characters in the skull and postcranium; however, fossil evidence suggests that at least some of these shared derived traits evolved convergently. Unfortunately, the lack of detailed knowledge of many fossil taxa has rendered a more precise interpretation difficult. Here we describe the cranial anatomy of the oldest-known well-preserved amphisbaenian, Spathorhynchus fossorium, from the Eocene Green River Formation, Wyoming, USA, using high-resolution X-ray computed tomography (HRXCT). This taxon possesses one of the most strongly reinforced crania known among amphisbaenians, with many dermal bones overlapping each other internally. In contrast to modern taxa, S. fossorium has a paired orbitosphenoid, lacks a true compound bone in the mandible, and possesses a fully enclosed orbital rim. The last feature represents a highly derived structure in that the jugal establishes contact with the frontal internally, reinforcing the posterior orbital margin. S. fossorium also possesses a strongly modified Vidian canal with a previously unknown connection to the ventral surface of the parabasisphenoid. Comparison with the closely related fossil taxon Dyticonastis rensbergeri reveals that these derived traits are also shared by the latter species and potentially represent synapomorphies of an extinct Paleogene clade of amphisbaenians. The presence of a reinforced orbital rim suggests selection against the loss of a functional eye and indicates an ecology potentially different from modern taxa. Given the currently accepted phylogenetic position of Spathorhynchus and Dyticonastis, we predict that supposedly 'unique' cranial traits traditionally linked to fossoriality, such as a fused orbitosphenoid and the reduction of the eye, show a more complex character history than previously assumed, including both parallel evolution and reversals to superficially primitive conditions. © 2016 Anatomical Society.
Wilson, Robert L.; Frisz, Jessica F.; Hanafin, William P.; Carpenter, Kevin J.; Hutcheon, Ian D.; Weber, Peter K.; Kraft, Mary L.
2014-01-01
The local abundance of specific lipid species near a membrane protein is hypothesized to influence the protein’s activity. The ability to simultaneously image the distributions of specific protein and lipid species in the cell membrane would facilitate testing these hypotheses. Recent advances in imaging the distribution of cell membrane lipids with mass spectrometry have created the desire for membrane protein probes that can be simultaneously imaged with isotope labeled lipids. Such probes would enable conclusive tests of whether specific proteins co-localize with particular lipid species. Here, we describe the development of fluorine-functionalized colloidal gold immunolabels that facilitate the detection and imaging of specific proteins in parallel with lipids in the plasma membrane using high-resolution SIMS performed with a NanoSIMS. First, we developed a method to functionalize colloidal gold nanoparticles with a partially fluorinated mixed monolayer that permitted NanoSIMS detection and rendered the functionalized nanoparticles dispersible in aqueous buffer. Then, to allow for selective protein labeling, we attached the fluorinated colloidal gold nanoparticles to the nonbinding portion of antibodies. By combining these functionalized immunolabels with metabolic incorporation of stable isotopes, we demonstrate that influenza hemagglutinin and cellular lipids can be imaged in parallel using NanoSIMS. These labels enable a general approach to simultaneously imaging specific proteins and lipids with high sensitivity and lateral resolution, which may be used to evaluate predictions of protein co-localization with specific lipid species. PMID:22284327
Development of the PARVMEC Code for Rapid Analysis of 3D MHD Equilibrium
NASA Astrophysics Data System (ADS)
Seal, Sudip; Hirshman, Steven; Cianciosa, Mark; Wingen, Andreas; Unterberg, Ezekiel; Wilcox, Robert; ORNL Collaboration
2015-11-01
The VMEC three-dimensional (3D) MHD equilibrium code has been used extensively for designing stellarator experiments and analyzing experimental data in such strongly 3D systems. Recent applications of VMEC include 2D systems such as tokamaks (in particular, the D3D experiment), where the application of very small (δB/B ~ 10^-3) 3D resonant magnetic field perturbations renders the underlying assumption of axisymmetry invalid. In order to facilitate the rapid analysis of such equilibria (for example, for reconstruction purposes), we have undertaken the task of parallelizing the VMEC code (PARVMEC) to produce a scalable and rapidly convergent equilibrium code for use on parallel distributed memory platforms. The parallelization task naturally splits into three distinct parts: 1) radial surfaces in the fixed-boundary part of the calculation; 2) two 2D angular meshes needed to compute the Green's function integrals over the plasma boundary for the free-boundary part of the code; and 3) the block tridiagonal matrix needed to compute the full (3D) pre-conditioner near the final equilibrium state. Preliminary results show that scalability is achieved for tasks 1 and 3, with task 2 nearing completion. The impact of this work on the rapid reconstruction of D3D plasmas using PARVMEC in the V3FIT code will be discussed. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.
NASA Astrophysics Data System (ADS)
Umbarkar, A. J.; Balande, U. T.; Seth, P. D.
2017-06-01
The field of nature-inspired computing and optimization has evolved to solve difficult optimization problems in diverse areas of engineering, science and technology. The Firefly Algorithm (FA) mimics the attraction process of fireflies to solve optimization problems. In FA, fireflies are ranked using a sorting algorithm; the original FA uses bubble sort for this ranking. In this paper, quick sort replaces bubble sort to decrease the time complexity of FA. The benchmark dataset consists of unconstrained functions from CEC 2005 [22]. FA with bubble sort and FA with quick sort are compared with respect to best, worst and mean results, standard deviation, number of comparisons, and execution time. The experimental results show that FA using quick sort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, and the algorithm performs better at lower dimensions than at higher ones.
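A minimal sketch of the ranking step under discussion: the same brightness ranking done with bubble sort versus a simple quicksort, counting comparisons. This illustrates the sorting swap only, not the full FA of the paper.

```python
import random

def bubble_sort(vals):
    """Rank brightness values with bubble sort, counting comparisons."""
    a, comparisons = list(vals), 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

def quick_sort(vals):
    """Rank the same values with a simple quicksort, counting comparisons."""
    comparisons = 0
    def qs(a):
        nonlocal comparisons
        if len(a) <= 1:
            return a
        pivot, rest = a[0], a[1:]
        comparisons += len(rest)
        return (qs([x for x in rest if x < pivot]) + [pivot]
                + qs([x for x in rest if x >= pivot]))
    return qs(list(vals)), comparisons

brightness = [random.random() for _ in range(200)]   # one value per firefly
_, nb = bubble_sort(brightness)
_, nq = quick_sort(brightness)
print(f"bubble sort comparisons: {nb}, quick sort comparisons: {nq}")
```

On random inputs the quicksort performs far fewer comparisons (roughly n log n versus n(n-1)/2), consistent with the reported result; the extra execution time found in the paper plausibly reflects recursion and partitioning overhead.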
NASA Astrophysics Data System (ADS)
Neff, John A.
1989-12-01
Experiments originating from Gestalt psychology have shown that representing information in a symbolic form provides a more effective means to understanding. Computer scientists have been struggling for the last two decades to determine how best to create, manipulate, and store collections of symbolic structures. In the past, much of this struggling led to software innovations because that was the path of least resistance. For example, the development of heuristics for organizing the searching through knowledge bases was much less expensive than building massively parallel machines that could search in parallel. That is now beginning to change with the emergence of parallel architectures which are showing the potential for handling symbolic structures. This paper will review the relationships between symbolic computing and parallel computing architectures, and will identify opportunities for optics to significantly impact the performance of such computing machines. Although neural networks are an exciting subset of massively parallel computing structures, this paper will not touch on this area since it is receiving a great deal of attention in the literature. That is, the concepts presented herein do not consider the distributed representation of knowledge.
NASA Astrophysics Data System (ADS)
Brächer, T.; Pirro, P.; Hillebrands, B.
2017-06-01
Magnonics and magnon spintronics aim at the utilization of spin waves and magnons, their quanta, for the construction of wave-based logic networks via the generation of pure all-magnon spin currents and their interfacing with electric charge transport. The promise of efficient parallel data processing and low power consumption renders this field one of the most promising research areas in spintronics. In this context, the process of parallel parametric amplification, i.e., the conversion of microwave photons into magnons at one half of the microwave frequency, has proven to be a versatile tool to excite and to manipulate spin waves. Its beneficial and unique properties such as frequency and mode-selectivity, the possibility to excite spin waves in a wide wavevector range and the creation of phase-correlated wave pairs, have enabled the achievement of important milestones like the magnon Bose-Einstein condensation and the cloning and trapping of spin-wave packets. Parallel parametric amplification, which allows for the selective amplification of magnons while conserving their phase is, thus, one of the key methods of spin-wave generation and amplification. The application of parallel parametric amplification to CMOS-compatible micro- and nano-structures is an important step towards the realization of magnonic networks. This is motivated not only by the fact that amplifiers are an important tool for the construction of any extended logic network but also by the unique properties of parallel parametric amplification. In particular, the creation of phase-correlated wave pairs allows for rewarding alternative logic operations such as a phase-dependent amplification of the incident waves. Recently, the successful application of parallel parametric amplification to metallic microstructures has been reported which constitutes an important milestone for the application of magnonics in practical devices. It has been demonstrated that parametric amplification provides an excellent tool to generate and to amplify spin waves in these systems in a wide wavevector range. In particular, the amplification greatly benefits from the discreteness of the spin-wave spectra since the size of the microstructures is comparable to the spin-wave wavelength. This opens up new, interesting routes of spin-wave amplification and manipulation. In this review, we will give an overview over the recent developments and achievements in this field.
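As a compact reminder of the process this review centers on, the conservation relations for parallel parametric amplification can be stated as follows (standard notation, assumed here rather than quoted from the review):

```latex
% One microwave photon (frequency \omega_p, negligible wavevector) converts
% into a pair of phase-correlated magnons at half the pump frequency:
\[
  \hbar\omega_p = \hbar\omega_{\mathbf{k}} + \hbar\omega_{-\mathbf{k}},
  \qquad
  \omega_{\mathbf{k}} = \omega_{-\mathbf{k}} = \frac{\omega_p}{2},
  \qquad
  \mathbf{k} + (-\mathbf{k}) \approx \mathbf{0}.
\]
```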
Learning cellular sorting pathways using protein interactions and sequence motifs.
Lin, Tien-Ho; Bar-Joseph, Ziv; Murphy, Robert F
2011-11-01
Proper subcellular localization is critical for proteins to perform their roles in cellular functions. Proteins are transported by different cellular sorting pathways, some of which take a protein through several intermediate locations until reaching its final destination. The pathway a protein is transported through is determined by carrier proteins that bind to specific sequence motifs. In this article, we present a new method that integrates protein interaction and sequence motif data to model how proteins are sorted through these sorting pathways. We use a hidden Markov model (HMM) to represent protein sorting pathways. The model is able to determine intermediate sorting states and to assign carrier proteins and motifs to the sorting pathways. In simulation studies, we show that the method can accurately recover an underlying sorting model. Using data for yeast, we show that our model leads to accurate prediction of subcellular localization. We also show that the pathways learned by our model recover many known sorting pathways and correctly assign proteins to the path they utilize. The learned model identified new pathways and their putative carriers and motifs and these may represent novel protein sorting mechanisms. Supplementary results and software implementation are available from http://murphylab.web.cmu.edu/software/2010_RECOMB_pathways/.
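To make the modeling idea concrete, here is a toy forward-probability computation over an invented sorting HMM in which hidden states are compartments and observations are sequence motifs. The states, motifs, and probabilities below are illustrative assumptions, not the paper's learned model (which is available from the linked supplement).

```python
import numpy as np

states = ["cytosol", "ER", "Golgi", "membrane"]      # hidden compartments
motifs = ["signal_peptide", "KDEL", "none"]          # observed motifs

start = np.array([1.0, 0.0, 0.0, 0.0])               # proteins start in cytosol
trans = np.array([[0.2, 0.8, 0.0, 0.0],              # cytosol -> ER mostly
                  [0.0, 0.3, 0.7, 0.0],              # ER -> Golgi
                  [0.0, 0.0, 0.4, 0.6],              # Golgi -> membrane
                  [0.0, 0.0, 0.0, 1.0]])             # membrane is terminal
emit = np.array([[0.7, 0.0, 0.3],                    # motif probs per state
                 [0.1, 0.6, 0.3],
                 [0.2, 0.2, 0.6],
                 [0.1, 0.1, 0.8]])

def forward_prob(observed):
    """P(observed motif sequence) under the toy sorting HMM."""
    alpha = start * emit[:, motifs.index(observed[0])]
    for motif in observed[1:]:
        alpha = (alpha @ trans) * emit[:, motifs.index(motif)]
    return alpha.sum()

print(forward_prob(["signal_peptide", "KDEL", "none"]))
```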
Simic, Vladimir
2015-01-01
End-of-life vehicles (ELVs) are vehicles that have reached the end of their useful lives and are no longer registered or licensed for use. The ELV recycling problem has become very serious in the last decade and more and more efforts are made in order to reduce the impact of ELVs on the environment. This paper proposes the fuzzy risk explicit interval linear programming model for ELV recycling planning in the EU. It has advantages in reflecting uncertainties presented in terms of intervals in the ELV recycling systems and fuzziness in decision makers' preferences. The formulated model has been applied to a numerical study in which different decision maker types and several ELV types under two EU ELV Directive legislative cases were examined. This study is conducted in order to examine the influences of the decision maker type, the α-cut level, the EU ELV Directive and the ELV type on decisions about vehicle hulks procuring, storing unprocessed hulks, sorting generated material fractions, allocating sorted waste flows and allocating sorted metals. Decision maker type can influence quantity of vehicle hulks kept in storages. The EU ELV Directive and decision maker type have no influence on which vehicle hulk type is kept in the storage. Vehicle hulk type, the EU ELV Directive and decision maker type do not influence the creation of metal allocation plans, since each isolated metal has its regular destination. The valid EU ELV Directive eco-efficiency quotas can be reached even when advanced thermal treatment plants are excluded from the ELV recycling process. The introduction of the stringent eco-efficiency quotas will significantly reduce the quantities of land-filled waste fractions regardless of the type of decision makers who will manage vehicle recycling system. In order to reach these stringent quotas, significant quantities of sorted waste need to be processed in advanced thermal treatment plants. Proposed model can serve as the support for the European vehicle recycling managers in creating more successful ELV recycling plans. Copyright © 2014 Elsevier Ltd. All rights reserved.
Concentrated formulations and methods for neutralizing chemical and biological toxants
Tucker, Mark D.; Betty, Rita G.; Tadros, Maher E.
2004-04-20
A formulation and method of making and using that neutralizes the adverse health effects of both chemical and biological toxants, especially chemical warfare (CW) and biological warfare (BW) agents. The aqueous formulation is non-toxic and non-corrosive and can be delivered as a long-lasting foam, spray, or fog. The formulation includes solubilizing compounds that serve to effectively render the CW or BW toxant susceptible to attack, so that a nucleophilic agent can attack the compound via a hydrolysis or oxidation reaction. The formulation can kill up to 99.99999% of bacterial spores within one hour of exposure.
On the Diversity of Linguistic Data and the Integration of the Language Sciences.
D'Alessandro, Roberta; van Oostendorp, Marc
2017-01-01
An integrated science of language is usually advocated as a step forward for linguistic research. In this paper, we maintain that integration of this sort is premature, and cannot take place before we identify a common object of study. We advocate instead a science of language that is inherently multi-faceted, and takes into account the different viewpoints as well as the different definitions of the object of study. We also advocate the use of different data sources, which, if non-contradictory, can provide more solid evidence for linguistic analysis. Last, we argue that generative grammar is an important tile in the puzzle.
Displacement of particles in microfluidics by laser-generated tandem bubbles
NASA Astrophysics Data System (ADS)
Lautz, Jaclyn; Sankin, Georgy; Yuan, Fang; Zhong, Pei
2010-11-01
The dynamic interaction between laser-generated tandem bubble and individual polystyrene particles of 2 and 10 μm in diameter is studied in a microfluidic channel (25 μm height) by high-speed imaging and particle image velocimetry. The asymmetric collapse of the tandem bubble produces a pair of microjets and associated long-lasting vortices that can propel a single particle to a maximum velocity of 1.4 m/s in 30 μs after the bubble collapse with a resultant directional displacement up to 60 μm in 150 μs. This method may be useful for high-throughput cell sorting in microfluidic devices.
Birth of kids after artificial insemination with sex-sorted, frozen-thawed goat spermatozoa.
Bathgate, R; Mace, N; Heasman, K; Evans, G; Maxwell, W M C; de Graaf, S P
2013-12-01
Successful sex-sorting of goat spermatozoa and subsequent birth of pre-sexed kids have yet to be reported. As such, a series of experiments were conducted to develop protocols for sperm-sorting (using a modified flow cytometer, MoFlo SX(®) ) and cryopreservation of goat spermatozoa. Saanen goat spermatozoa (n = 2 males) were (i) collected into Salamon's or Tris catch media post-sorting and (ii) frozen in Tris-citrate-glucose media supplemented with 5, 10 or 20% egg yolk in (iii) 0.25 ml pellets on dry ice or 0.25 ml straws in a controlled-rate freezer. Post-sort and post-thaw sperm quality were assessed by motility (CASA), viability and acrosome integrity (PI/FITC-PNA). Sex-sorted goat spermatozoa frozen in pellets displayed significantly higher post-thaw motility and viability than spermatozoa frozen in straws. Catch media and differing egg yolk concentration had no effect on the sperm parameters tested. The in vitro and in vivo fertility of sex-sorted goat spermatozoa produced with this optimum protocol were then tested by means of a heterologous ova binding assay and intrauterine artificial insemination of Saanen goat does, respectively. Sex-sorted goat spermatozoa bound to sheep ova zona pellucidae in similar numbers (p > 0.05) to non-sorted goat spermatozoa, non-sorted ram spermatozoa and sex-sorted ram spermatozoa. Following intrauterine artificial insemination with sex-sorted spermatozoa, 38% (5/13) of does kidded with 83% (3/5) of kids being of the expected sex. Does inseminated with non-sorted spermatozoa achieved a 50% (3/6) kidding rate and a sex ratio of 3 : 1 (F : M). This study demonstrates for the first time that goat spermatozoa can be sex-sorted by flow cytometry, successfully frozen and used to produce pre-sexed kids. © 2013 Blackwell Verlag GmbH.
Gambardella, Stefano; Biagioni, Francesca; Ferese, Rosangela; Busceti, Carla L; Frati, Alessandro; Novelli, Giuseppe; Ruggieri, Stefano; Fornai, Francesco
2016-01-01
Mammalian retromers play a critical role in protein trans-membrane sorting from endosome to the trans-Golgi network (TGN). Recently, retromer alterations have been related to the onset of Parkinson's Disease (PD) since the variant p.Asp620Asn in VPS35 (Vacuolar Protein Sorting 35) was identified as a cause of late onset PD. This variant causes a primary defect in endosomal trafficking and retromers formation. Other mutations in VPS genes have been reported in both sporadic and familial PD. These mutations are less defined. Understanding the specific prevalence of all VPS gene mutations is key to understand the relevance of retromers impairment in the onset of PD. A number of PD-related mutations despite affecting different biochemical systems (autophagy, mitophagy, proteasome, endosomes, protein folding), all converge in producing an impairment in cell clearance. This may explain how genetic predispositions to PD may derive from slightly deleterious VPS mutations when combined with environmental agents overwhelming the clearance of the cell. This manuscript reviews genetic data produced in the last 5 years to re-define the actual prevalence of VPS gene mutations in the onset of PD. The prevalence of p.Asp620Asn mutation in VPS35 is 0.286 of familial PD. This increases up to 0.548 when considering mutations affecting all VPS genes. This configures mutations in VPS genes as the second most frequent autosomal dominant PD genotype. This high prevalence, joined with increased awareness of the role played by retromers in the neurobiology of PD, suggests environmentally-induced VPS alterations as crucial in the genesis of PD.
Heasly, Benjamin S; Cottaris, Nicolas P; Lichtman, Daniel P; Xiao, Bei; Brainard, David H
2014-02-07
RenderToolbox3 provides MATLAB utilities and prescribes a workflow that should be useful to researchers who want to employ graphics in the study of vision and perhaps in other endeavors as well. In particular, RenderToolbox3 facilitates rendering scene families in which various scene attributes and renderer behaviors are manipulated parametrically, enables spectral specification of object reflectance and illuminant spectra, enables the use of physically based material specifications, helps validate renderer output, and converts renderer output to physical units of radiance. This paper describes the design and functionality of the toolbox and discusses several examples that demonstrate its use. We have designed RenderToolbox3 to be portable across computer hardware and operating systems and to be free and open source (except for MATLAB itself). RenderToolbox3 is available at https://github.com/DavidBrainard/RenderToolbox3.
Huang, Jiu; Zhu, Zhuangzhuang; Tian, Chuyuan; Bian, Zhengfu
2018-01-01
With the increase in the worldwide consumption of vehicles, end-of-life vehicles (ELVs) have kept rapidly increasing in the last two decades. Metallic parts and materials of ELVs can be easily reused and recycled, but the automobile shredder residues (ASRs), of which elastomer and plastic materials make up the vast majority, are difficult to recycle. ASRs are classified as hazardous materials in the main industrial countries, and are required to be materially recycled up to 85–95% by mass by 2020. However, there is neither sufficient theoretical nor practical experience for sorting ASR polymers. In this research, we provide a novel method using S-band microwave irradiation together with 3D scanning and infrared thermal imaging sensors for the recognition and sorting of typical plastics and elastomers from the ASR mixture. In this study, an industrial magnetron array with 2.45 GHz irradiation was utilized as the microwave source. Seven kinds of ELV polymer (PVC, ABS, PP, EPDM, NBR, CR, and SBR) crushed scrap residues were tested. After microwave irradiation at a specific power for a certain time, the tested polymer materials were heated up to different extents corresponding to their respective sensitivities to microwave irradiation. Due to variations in polymer chemical structure and additive agents, polymers have different sensitivities to microwave radiation, which leads to differences in temperature rise. The differences in temperature increase were obtained by a thermal infrared sensor, and the position and geometrical features of the tested scraps were acquired by a 3D imaging sensor. With this information, the scrap material could be recognized and then sorted. The results showed that this method was effective when the tested polymer materials were heated up to more than 30 °C. For full recognition of the tested polymer scraps, minimum temperature variations of 5 °C and 10.5 °C were needed for plastics and elastomers, respectively. The sorting efficiency was independent of particle size but depended on the power and duration of the microwave irradiation. Generally, more than 75% (by mass) of the tested polymer materials could be successfully recognized and sorted under an irradiation power of 3 kW. Plastics were much less sensitive to microwave irradiation than elastomers. With this method, the tested mixture of the plastic group (PVC, ABS, PP) and the mixture of the elastomer group (EPDM, NBR, CR, and SBR) could be fully separated with an efficiency of 100%. PMID:29702564
Huang, Jiu; Zhu, Zhuangzhuang; Tian, Chuyuan; Bian, Zhengfu
2018-04-27
With the increase in the worldwide consumption of vehicles, end-of-life vehicles (ELVs) have kept rapidly increasing in the last two decades. Metallic parts and materials of ELVs can be easily reused and recycled, but the automobile shredder residues (ASRs), of which elastomer and plastic materials make up the vast majority, are difficult to recycle. ASRs are classified as hazardous materials in the main industrial countries, and are required to be materially recycled up to 85–95% by mass by 2020. However, there is neither sufficient theoretical nor practical experience for sorting ASR polymers. In this research, we provide a novel method using S-band microwave irradiation together with 3D scanning and infrared thermal imaging sensors for the recognition and sorting of typical plastics and elastomers from the ASR mixture. In this study, an industrial magnetron array with 2.45 GHz irradiation was utilized as the microwave source. Seven kinds of ELV polymer (PVC, ABS, PP, EPDM, NBR, CR, and SBR) crushed scrap residues were tested. After microwave irradiation at a specific power for a certain time, the tested polymer materials were heated up to different extents corresponding to their respective sensitivities to microwave irradiation. Due to variations in polymer chemical structure and additive agents, polymers have different sensitivities to microwave radiation, which leads to differences in temperature rise. The differences in temperature increase were obtained by a thermal infrared sensor, and the position and geometrical features of the tested scraps were acquired by a 3D imaging sensor. With this information, the scrap material could be recognized and then sorted. The results showed that this method was effective when the tested polymer materials were heated up to more than 30 °C. For full recognition of the tested polymer scraps, minimum temperature variations of 5 °C and 10.5 °C were needed for plastics and elastomers, respectively. The sorting efficiency was independent of particle size but depended on the power and duration of the microwave irradiation. Generally, more than 75% (by mass) of the tested polymer materials could be successfully recognized and sorted under an irradiation power of 3 kW. Plastics were much less sensitive to microwave irradiation than elastomers. With this method, the tested mixture of the plastic group (PVC, ABS, PP) and the mixture of the elastomer group (EPDM, NBR, CR, and SBR) could be fully separated with an efficiency of 100%.
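A hedged sketch of the recognition rule implied by the reported temperature differences: classify each scrap by the nearest per-material reference temperature rise. The reference values below are invented; the abstract only constrains their minimum separations (about 5 °C for plastics, 10.5 °C for elastomers).

```python
# Nearest-reference classification of one scrap by its temperature rise
# after a fixed microwave dose. Reference rises are illustrative only;
# the 2.45 GHz source and material list come from the study.
reference_rise_C = {
    "PP": 31.0, "ABS": 37.0, "PVC": 43.0,                 # plastics
    "EPDM": 55.0, "NBR": 66.0, "CR": 78.0, "SBR": 90.0,   # elastomers
}

def recognize(measured_rise_C):
    """Return the material whose reference rise is closest to the measurement."""
    return min(reference_rise_C,
               key=lambda m: abs(reference_rise_C[m] - measured_rise_C))

def material_group(material):
    return "plastic" if material in ("PP", "ABS", "PVC") else "elastomer"

for rise in (32.5, 68.0, 88.0):
    m = recognize(rise)
    print(f"rise {rise:5.1f} C -> {m} ({material_group(m)})")
```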
Sorting drops and cells with acoustics: acoustic microfluidic fluorescence-activated cell sorter.
Schmid, Lothar; Weitz, David A; Franke, Thomas
2014-10-07
We describe a versatile microfluidic fluorescence-activated cell sorter that uses acoustic actuation to sort cells or drops at ultra-high rates. Our acoustic sorter combines the advantages of traditional fluorescence-activated cell (FACS) and droplet sorting (FADS) and is applicable for a multitude of objects. We sort aqueous droplets, at rates as high as several kHz, into two or even more outlet channels. We can also sort cells directly from the medium without prior encapsulation into drops; we demonstrate this by sorting fluorescently labeled mouse melanoma cells in a single phase fluid. Our acoustic microfluidic FACS is compatible with standard cell sorting cytometers, yet, at the same time, enables a rich variety of more sophisticated applications.
Surface acoustic wave actuated cell sorting (SAWACS).
Franke, T; Braunmüller, S; Schmid, L; Wixforth, A; Weitz, D A
2010-03-21
We describe a novel microfluidic cell sorter which operates in continuous flow at high sorting rates. The device is based on a surface acoustic wave cell-sorting scheme and combines many advantages of fluorescence activated cell sorting (FACS) and fluorescence activated droplet sorting (FADS) in microfluidic channels. It is fully integrated on a PDMS device, and allows fast electronic control of cell diversion. We direct cells by acoustic streaming excited by a surface acoustic wave which deflects the fluid independently of the contrast in material properties of deflected objects and the continuous phase; thus the device underlying principle works without additional enhancement of the sorting by prior labelling of the cells with responsive markers such as magnetic or polarizable beads. Single cells are sorted directly from bulk media at rates as fast as several kHz without prior encapsulation into liquid droplet compartments as in traditional FACS. We have successfully directed HaCaT cells (human keratinocytes), fibroblasts from mice and MV3 melanoma cells. The low shear forces of this sorting method ensure that cells survive after sorting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terawaki, Shin-ichi, E-mail: terawaki@gunma-u.ac.jp; SPring-8 Center, RIKEN, 1-1-1 Koto, Sayo-cho, Sayo-gun, Hyogo 679-5148; Yoshikane, Asuka
Bicaudal-D1 (BICD1) is an α-helical coiled-coil protein mediating the attachment of specific cargo to cytoplasmic dynein. It plays an essential role in minus end-directed intracellular transport along microtubules. The third C-terminal coiled-coil region of BICD1 (BICD1 CC3) has an important role in cargo sorting, including intracellular vesicles associating with the small GTPase Rab6 and the nuclear pore complex Ran binding protein 2 (RanBP2), and inhibiting the association with cytoplasmic dynein by binding to the first N-terminal coiled-coil region (CC1). The crystal structure of BICD1 CC3 revealed a parallel homodimeric coiled-coil with asymmetry and complementary knobs-into-holes interactions, differing from Drosophila BicD CC3. Furthermore, our binding study indicated that BICD1 CC3 possesses a binding surface for two distinct cargos, Rab6 and RanBP2, and that the CC1-binding site overlaps with the Rab6-binding site. These findings suggest a molecular basis for cargo recognition and autoinhibition of BICD proteins during dynein-dependent intracellular retrograde transport. - Highlights: • BICD1 CC3 is a parallel homodimeric coiled-coil with axial asymmetry. • The coiled-coil packing of BICD1 CC3 is adapted to the equivalent heptad position. • BICD1 CC3 has distinct binding sites for two classes of cargo, Rab6 and RanBP2. • The CC1-binding site of BICD1 CC3 overlaps with the Rab6-binding site.
Multi-Depth-Map Raytracing for Efficient Large-Scene Reconstruction.
Arikan, Murat; Preiner, Reinhold; Wimmer, Michael
2016-02-01
With the enormous advances in acquisition technology over the last years, fast processing and high-quality visualization of large point clouds have gained increasing attention. Commonly, a mesh surface is reconstructed from the point cloud, and a high-resolution texture is generated over the mesh from the images taken at the site to represent surface materials. However, this global reconstruction and texturing approach becomes impractical with increasing data sizes. Recently, due to its potential for scalability and extensibility, a method has been proposed that textures a set of depth maps in a preprocessing step and stitches them at runtime to represent large scenes. However, the rendering performance of this method depends strongly on the number of depth maps and their resolution. Moreover, for the proposed scene representation, every single depth map has to be textured by the images, which in practice heavily increases processing costs. In this paper, we present a novel method that breaks these dependencies by introducing an efficient raytracing of multiple depth maps. In a preprocessing phase, we first generate high-resolution textured depth maps by rendering the input points from the image cameras and then perform a graph-cut based optimization to assign a small subset of these points to the images. At runtime, we use the resulting point-to-image assignments (1) to identify for each view ray which depth map contains the closest ray-surface intersection and (2) to efficiently compute this intersection point. The resulting algorithm accelerates both the texturing and the rendering of the depth maps by an order of magnitude.
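A toy NumPy illustration of the runtime lookup described above, with orthographic rays and random depth maps standing in for real data; the per-pixel assignment here is an argmin stand-in for the paper's graph-cut result.

```python
import numpy as np

# Each view ray reads exactly one depth value: the assignment map names
# which depth map holds the closest ray-surface intersection, so no ray
# has to test all maps. Orthographic rays and random data are assumed.
H, W, n_maps = 4, 4, 3
rng = np.random.default_rng(0)

depth_maps = rng.uniform(1.0, 5.0, size=(n_maps, H, W))   # per-map depths
assignment = depth_maps.argmin(axis=0)   # stand-in for the graph-cut result

# (1) pick the assigned map per pixel, (2) back-project to an intersection
rows, cols = np.mgrid[0:H, 0:W]
closest_depth = depth_maps[assignment, rows, cols]
intersections = np.dstack([cols, rows, closest_depth])    # x, y, z per ray

assert np.allclose(closest_depth, depth_maps.min(axis=0))
print(intersections[0, 0])    # intersection point of the top-left view ray
```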
Comrades' Power: Student Representation and Activism in Universities in Kenya
ERIC Educational Resources Information Center
Macharia, Mwangi J.
2015-01-01
In the last decade, student politics and governance of universities in Kenya and in other African countries have undergone a tremendous transformation. The unprecedented expansion and massification of public universities, the introduction of "Module 2" programmes, the admission of private, "parallel" and…
Research of grasping algorithm based on scara industrial robot
NASA Astrophysics Data System (ADS)
Peng, Tao; Zuo, Ping; Yang, Hai
2018-04-01
As the tobacco industry grows and faces the challenge of international tobacco giants, efficient logistics service is one of the key competitive factors, and completing tobacco sorting tasks efficiently and economically is the goal of tobacco sorting and optimization research. Current cigarette distribution systems use a single line to carry out single-brand sorting tasks; this article adopts a single line to realize cigarette sorting for different brands. Using a dedicated grasping algorithm for sorting and packaging on a SCARA robot, the optimization scheme significantly improves the performance indicators of the cigarette sorting system, saving labor and improving production efficiency.
Learning Cellular Sorting Pathways Using Protein Interactions and Sequence Motifs
Lin, Tien-Ho; Bar-Joseph, Ziv
2011-01-01
Abstract Proper subcellular localization is critical for proteins to perform their roles in cellular functions. Proteins are transported by different cellular sorting pathways, some of which take a protein through several intermediate locations until reaching its final destination. The pathway a protein is transported through is determined by carrier proteins that bind to specific sequence motifs. In this article, we present a new method that integrates protein interaction and sequence motif data to model how proteins are sorted through these sorting pathways. We use a hidden Markov model (HMM) to represent protein sorting pathways. The model is able to determine intermediate sorting states and to assign carrier proteins and motifs to the sorting pathways. In simulation studies, we show that the method can accurately recover an underlying sorting model. Using data for yeast, we show that our model leads to accurate prediction of subcellular localization. We also show that the pathways learned by our model recover many known sorting pathways and correctly assign proteins to the path they utilize. The learned model identified new pathways and their putative carriers and motifs and these may represent novel protein sorting mechanisms. Supplementary results and software implementation are available from http://murphylab.web.cmu.edu/software/2010_RECOMB_pathways/. PMID:21999284
A New Algorithm Using the Non-Dominated Tree to Improve Non-Dominated Sorting.
Gustavsson, Patrik; Syberfeldt, Anna
2018-01-01
Non-dominated sorting is a technique often used in evolutionary algorithms to determine the quality of solutions in a population. The most common algorithm is the Fast Non-dominated Sort (FNS). This algorithm, however, has the drawback that its performance deteriorates when the population size grows. The same drawback applies also to other non-dominating sorting algorithms such as the Efficient Non-dominated Sort with Binary Strategy (ENS-BS). An algorithm suggested to overcome this drawback is the Divide-and-Conquer Non-dominated Sort (DCNS) which works well on a limited number of objectives but deteriorates when the number of objectives grows. This article presents a new, more efficient algorithm called the Efficient Non-dominated Sort with Non-Dominated Tree (ENS-NDT). ENS-NDT is an extension of the ENS-BS algorithm and uses a novel Non-Dominated Tree (NDTree) to speed up the non-dominated sorting. ENS-NDT is able to handle large population sizes and a large number of objectives more efficiently than existing algorithms for non-dominated sorting. In the article, it is shown that with ENS-NDT the runtime of multi-objective optimization algorithms such as the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) can be substantially reduced.
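For reference, here is a compact Python implementation of the baseline Fast Non-dominated Sort that ENS-NDT is compared against (the NDTree data structure itself is not reproduced here); minimization of all objectives is assumed.

```python
def dominates(a, b):
    """True if solution a Pareto-dominates solution b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_nondominated_sort(population):
    """Classic FNS: group solution indices into successive Pareto fronts."""
    fronts, dominated_by, counts = [[]], {}, {}
    for i, p in enumerate(population):
        dominated_by[i], counts[i] = [], 0
        for j, q in enumerate(population):
            if dominates(p, q):
                dominated_by[i].append(j)
            elif dominates(q, p):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

pop = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(fast_nondominated_sort(pop))   # [[0, 1, 2], [3], [4]]
```

The quadratic pairwise dominance check is exactly the cost that grows prohibitive with population size, which motivates tree-based alternatives such as the NDTree.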
Davidson, Meghan M; Kaushanskaya, Margarita; Ellis Weismer, Susan
2018-05-25
Word reading and oral language predict reading comprehension, which is generally poor, in individuals with autism spectrum disorder (ASD). However, working memory (WM), despite documented weaknesses, has not been thoroughly investigated as a predictor of reading comprehension in ASD. This study examined the role of three parallel WM N-back tasks using abstract shapes, familiar objects, and written words in children (8-14 years) with ASD (n = 19) and their typically developing peers (n = 24). All three types of WM were significant predictors of reading comprehension when considered alone. However, these relationships were rendered non-significant with the addition of age, word reading, vocabulary, and group entered into the models. Oral vocabulary emerged as the strongest predictor of reading comprehension.
PyEPL: a cross-platform experiment-programming library.
Geller, Aaron S; Schlefer, Ian K; Sederberg, Per B; Jacobs, Joshua; Kahana, Michael J
2007-11-01
PyEPL (the Python Experiment-Programming Library) is a Python library which allows cross-platform and object-oriented coding of behavioral experiments. It provides functions for displaying text and images onscreen, as well as playing and recording sound, and is capable of rendering 3-D virtual environments for spatial-navigation tasks. It is currently tested for Mac OS X and Linux. It interfaces with Activewire USB cards (on Mac OS X) and the parallel port (on Linux) for synchronization of experimental events with physiological recordings. In this article, we first present two sample programs which illustrate core PyEPL features. The examples demonstrate visual stimulus presentation, keyboard input, and simulation and exploration of a simple 3-D environment. We then describe the components and strategies used in implementing PyEPL.
PyEPL: A cross-platform experiment-programming library
Geller, Aaron S.; Schleifer, Ian K.; Sederberg, Per B.; Jacobs, Joshua; Kahana, Michael J.
2009-01-01
PyEPL (the Python Experiment-Programming Library) is a Python library which allows cross-platform and object-oriented coding of behavioral experiments. It provides functions for displaying text and images onscreen, as well as playing and recording sound, and is capable of rendering 3-D virtual environments for spatial-navigation tasks. It is currently tested for Mac OS X and Linux. It interfaces with Activewire USB cards (on Mac OS X) and the parallel port (on Linux) for synchronization of experimental events with physiological recordings. In this article, we first present two sample programs which illustrate core PyEPL features. The examples demonstrate visual stimulus presentation, keyboard input, and simulation and exploration of a simple 3-D environment. We then describe the components and strategies used in implementing PyEPL. PMID:18183912
Proceedings of the 14th International Conference on the Numerical Simulation of Plasmas
NASA Astrophysics Data System (ADS)
Partial Contents are as follows: Numerical Simulations of the Vlasov-Maxwell Equations by Coupled Particle-Finite Element Methods on Unstructured Meshes; Electromagnetic PIC Simulations Using Finite Elements on Unstructured Grids; Modelling Travelling Wave Output Structures with the Particle-in-Cell Code CONDOR; SST--A Single-Slice Particle Simulation Code; Graphical Display and Animation of Data Produced by Electromagnetic, Particle-in-Cell Codes; A Post-Processor for the PEST Code; Gray Scale Rendering of Beam Profile Data; A 2D Electromagnetic PIC Code for Distributed Memory Parallel Computers; 3-D Electromagnetic PIC Simulation on the NRL Connection Machine; Plasma PIC Simulations on MIMD Computers; Vlasov-Maxwell Algorithm for Electromagnetic Plasma Simulation on Distributed Architectures; MHD Boundary Layer Calculation Using the Vortex Method; and Eulerian Codes for Plasma Simulations.
Non-physician Clinicians in Sub-Saharan Africa and the Evolving Role of Physicians
Eyal, Nir; Cancedda, Corrado; Kyamanywa, Patrick; Hurst, Samia A.
2016-01-01
Responding to critical shortages of physicians, most sub-Saharan countries have scaled up training of non-physician clinicians (NPCs), resulting in a gradual but decisive shift to NPCs as the cornerstone of healthcare delivery. This development should unfold in parallel with strategic rethinking about the role of physicians and with innovations in physician education and in-service training. In important ways, a growing number of NPCs only renders physicians more necessary – for example, as specialized healthcare providers and as leaders, managers, mentors, and public health administrators. Physicians in sub-Saharan Africa ought to be trained in all of these capacities. This evolution in the role of physicians may also help address known challenges to the successful integration of NPCs in the health system. PMID:26927585
Statistical Analysis of NAS Parallel Benchmarks and LINPACK Results
NASA Technical Reports Server (NTRS)
Meuer, Hans-Werner; Simon, Horst D.; Strohmeier, Erich; Lasinski, T. A. (Technical Monitor)
1994-01-01
In the last three years extensive performance data have been reported for parallel machines, both based on the NAS Parallel Benchmarks (NPB) and on LINPACK. In this study we have used the reported benchmark results and performed a number of statistical experiments using factor, cluster, and regression analyses. In addition to the performance results of LINPACK and the eight NAS Parallel Benchmarks, we have also included the peak performance of each machine and the LINPACK n and n_1/2 values. Some of the results and observations can be summarized as follows: 1) All benchmarks are strongly correlated with peak performance. 2) LINPACK and EP each have a unique signature. 3) The remaining NPB can be grouped into three groups as follows: (CG and IS), (LU and SP), and (MG, FT, and BT). Hence three (or four with EP) benchmarks are sufficient to characterize overall NPB performance. Our poster presentation will follow a standard poster format and will present the data of our statistical analysis in detail.
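The flavor of such an analysis can be sketched in a few lines of Python; the performance matrix below is random, so unlike the real NPB/LINPACK data it yields no meaningful groupings.

```python
import numpy as np

# Correlate per-machine benchmark results and group benchmarks by
# similarity. Random stand-in data; the reported groupings came from
# real measurements, not from this toy.
rng = np.random.default_rng(1)
benchmarks = ["LINPACK", "EP", "CG", "IS", "LU", "SP", "MG", "FT", "BT"]
perf = rng.lognormal(mean=3.0, sigma=1.0, size=(20, len(benchmarks)))

corr = np.corrcoef(np.log(perf), rowvar=False)   # benchmark-benchmark corr

# Naive single-linkage grouping: join benchmarks whose correlation with any
# existing group member exceeds a cut.
cut, groups = 0.8, []
for i, name in enumerate(benchmarks):
    placed = next((g for g in groups
                   if any(corr[i, benchmarks.index(m)] > cut for m in g)), None)
    (placed.append(name) if placed else groups.append([name]))
print(groups)
```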
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cziczo, Daniel
2016-05-01
The formation of clouds is an essential element in understanding the Earth's radiative budget. Liquid water clouds form when the relative humidity exceeds saturation and condensed-phase water nucleates on atmospheric particulate matter. The effect of aerosol properties such as size, morphology, and composition on cloud droplet formation has been studied theoretically as well as in the laboratory and field. Almost without exception these studies have been limited to parallel measurements of aerosol properties and cloud formation or collection of material after the cloud has formed, at which point nucleation information has been lost. Studies of this sort are adequate when a large fraction of the aerosol activates, but correlations and resulting model parameterizations are much more uncertain at lower supersaturations and activated fractions.
Drewes, Rich; Zou, Quan; Goodman, Philip H
2009-01-01
Neuroscience modeling experiments often involve multiple complex neural network and cell model variants, complex input stimuli and input protocols, followed by complex data analysis. Coordinating all this complexity becomes a central difficulty for the experimenter. The Python programming language, along with its extensive library packages, has emerged as a leading "glue" tool for managing all sorts of complex programmatic tasks. This paper describes a toolkit called Brainlab, written in Python, that leverages Python's strengths for the task of managing the general complexity of neuroscience modeling experiments. Brainlab was also designed to overcome the major difficulties of working with the NCS (NeoCortical Simulator) environment in particular. Brainlab is an integrated model-building, experimentation, and data analysis environment for the powerful parallel spiking neural network simulator system NCS.
Python for large-scale electrophysiology.
Spacek, Martin; Blanche, Tim; Swindale, Nicholas
2008-01-01
Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54-channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analysing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation ("dimstim"); one for electrophysiological waveform visualization and spike sorting ("spyke"); and one for spike train and stimulus analysis ("neuropy"). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience.
Drewes, Rich; Zou, Quan; Goodman, Philip H.
2008-01-01
Neuroscience modeling experiments often involve multiple complex neural network and cell model variants, complex input stimuli and input protocols, followed by complex data analysis. Coordinating all this complexity becomes a central difficulty for the experimenter. The Python programming language, along with its extensive library packages, has emerged as a leading “glue” tool for managing all sorts of complex programmatic tasks. This paper describes a toolkit called Brainlab, written in Python, that leverages Python's strengths for the task of managing the general complexity of neuroscience modeling experiments. Brainlab was also designed to overcome the major difficulties of working with the NCS (NeoCortical Simulator) environment in particular. Brainlab is an integrated model-building, experimentation, and data analysis environment for the powerful parallel spiking neural network simulator system NCS. PMID:19506707
Proteolipidic Composition of Exosomes Changes during Reticulocyte Maturation*
Carayon, Kévin; Chaoui, Karima; Ronzier, Elsa; Lazar, Ikrame; Bertrand-Michel, Justine; Roques, Véronique; Balor, Stéphanie; Terce, François; Lopez, André; Salomé, Laurence; Joly, Etienne
2011-01-01
During the orchestrated process leading to mature erythrocytes, reticulocytes must synthesize large amounts of hemoglobin, while eliminating numerous cellular components. Exosomes are small secreted vesicles that play an important role in this process of specific elimination. To understand the mechanisms of proteolipidic sorting leading to their biogenesis, we have explored changes in the composition of exosomes released by reticulocytes during their differentiation, in parallel to their physical properties. By combining proteomic and lipidomic approaches, we found dramatic alterations in the composition of the exosomes retrieved over the course of a 7-day in vitro differentiation protocol. Our data support a previously proposed model, whereby in reticulocytes the biogenesis of exosomes involves several distinct mechanisms for the preferential recruitment of particular proteins and lipids and suggest that the respective prominence of those pathways changes over the course of the differentiation process. PMID:21828046
Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks
Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian
2014-01-01
In order to meet the demands of operation monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. By analyzing the detection process, the CNN parameter relationship is mapped to an optimization problem, which is solved with an improved particle swarm optimization algorithm based on bubble sort. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and the evidence indicates that it is amenable to parallelism and to analog very large scale integration (VLSI) implementation, allowing VM migration detection to be performed better. PMID:24959631
Measuring the orbital angular momentum spectrum of an electron beam
Grillo, Vincenzo; Tavabi, Amir H.; Venturi, Federico; Larocque, Hugo; Balboni, Roberto; Gazzadi, Gian Carlo; Frabboni, Stefano; Lu, Peng-Han; Mafakheri, Erfan; Bouchard, Frédéric; Dunin-Borkowski, Rafal E.; Boyd, Robert W.; Lavery, Martin P. J.; Padgett, Miles J.; Karimi, Ebrahim
2017-01-01
Electron waves that carry orbital angular momentum (OAM) are characterized by a quantized and unbounded magnetic dipole moment parallel to their propagation direction. When interacting with magnetic materials, the wavefunctions of such electrons are inherently modified. Such variations therefore motivate the need to analyse electron wavefunctions, especially their wavefronts, to obtain information regarding the material's structure. Here, we propose, design and demonstrate the performance of a device based on nanoscale holograms for measuring an electron's OAM components by spatially separating them. We sort pure and superposed OAM states of electrons with OAM values between −10 and 10. We employ the device to analyse the OAM spectrum of electrons that have been affected by a micron-scale magnetic dipole, thus establishing that our sorter can be an instrument for nanoscale magnetic spectroscopy. PMID:28537248
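For readers who want to experiment with the idea numerically, the sketch below (not the authors' holographic device) recovers an OAM spectrum from a sampled complex beam profile by projecting onto azimuthal harmonics; the Gaussian envelope and the test value l = 3 are assumptions for the demo.

    import numpy as np

    # Test beam carrying a single OAM value (l = 3) on a Cartesian grid.
    N = 256
    x = np.linspace(-1.0, 1.0, N)
    X, Y = np.meshgrid(x, x)
    r, phi = np.hypot(X, Y), np.arctan2(Y, X)
    psi = np.exp(-r**2 / 0.2) * np.exp(1j * 3 * phi)

    # Project onto azimuthal harmonics: c_l ~ sum over the plane of psi * e^{-i*l*phi}.
    ells = np.arange(-10, 11)
    power = np.array([abs(np.sum(psi * np.exp(-1j * l * phi)))**2 for l in ells])
    power /= power.sum()
    print(ells[power.argmax()])  # prints 3: the spectrum peaks at the imprinted OAM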
Particle Transport and Size Sorting in Bubble Microstreaming Flow
NASA Astrophysics Data System (ADS)
Thameem, Raqeeb; Rallabandi, Bhargav; Wang, Cheng; Hilgenfeldt, Sascha
2014-11-01
Ultrasonic driving of sessile semicylindrical bubbles results in powerful steady streaming flows that are robust over a wide range of driving frequencies. In a microchannel, this flow field pattern can be fine-tuned to achieve size-sensitive sorting and trapping of particles at scales much smaller than the bubble itself; the sorting mechanism has been successfully described based on simple geometrical considerations. We investigate the sorting process in more detail, both experimentally (using new parameter variations that allow greater control over the sorting) and theoretically (incorporating the device geometry as well as the superimposed channel flow into an asymptotic theory). This results in optimized criteria for size sorting and a theoretical description that closely matches the particle behavior close to the bubble, the crucial region for size sorting.
Approaches to Macroevolution: 1. General Concepts and Origin of Variation.
Jablonski, David
2017-01-01
Approaches to macroevolution require integration of its two fundamental components, i.e. the origin and the sorting of variation, in a hierarchical framework. Macroevolution occurs in multiple currencies that are only loosely correlated, notably taxonomic diversity, morphological disparity, and functional variety. The origin of variation within this conceptual framework is increasingly understood in developmental terms, with the semi-hierarchical structure of gene regulatory networks (GRNs, used here in a broad sense incorporating not just the genetic circuitry per se but the factors controlling the timing and location of gene expression and repression), the non-linear relation between magnitude of genetic change and the phenotypic results, the evolutionary potential of co-opting existing GRNs, and developmental responsiveness to nongenetic signals (i.e. epigenetics and plasticity), all requiring modification of standard microevolutionary models, and rendering difficult any simple definition of evolutionary novelty. The developmental factors underlying macroevolution create anisotropic probabilities (i.e., an uneven density distribution) of evolutionary change around any given phenotypic starting point, and the potential for coordinated changes among traits that can accommodate change via epigenetic mechanisms. From this standpoint, "punctuated equilibrium" and "phyletic gradualism" simply represent two cells in a matrix of evolutionary models of phenotypic change, and the origin of trends and evolutionary novelty are not simply functions of ecological opportunity. Over long timescales, contingency becomes especially important, and can be viewed in terms of macroevolutionary lags (the temporal separation between the origin of a trait or clade and subsequent diversification); such lags can arise by several mechanisms: as geological or phylogenetic artifacts, or when diversifications require synergistic interactions among traits, or between traits and external events. The temporal and spatial patterns of the origins of evolutionary novelties are a challenge to macroevolutionary theory; individual events can be described retrospectively, but a general model relating development, genetics, and ecology is needed. An accompanying paper (Jablonski in Evol Biol 2017) reviews diversity dynamics and the sorting of variation, with some general conclusions.
OCTGRAV: Sparse Octree Gravitational N-body Code on Graphics Processing Units
NASA Astrophysics Data System (ADS)
Gaburov, Evghenii; Bédorf, Jeroen; Portegies Zwart, Simon
2010-10-01
Octgrav is a very fast tree-code which runs on massively parallel Graphics Processing Units (GPUs) with the NVIDIA CUDA architecture. The algorithms are based on parallel-scan and sort methods. The tree construction and calculation of multipole moments is carried out on the host CPU, while the force calculation, which consists of tree walks and evaluation of interaction lists, is carried out on the GPU. In this way, a sustained performance of about 100 GFLOP/s and data transfer rates of about 50 GB/s are achieved. It takes about a second to compute forces on a million particles with an opening angle of θ ≈ 0.5. To test the performance and feasibility, we implemented the algorithms in CUDA in the form of a gravitational tree-code which completely runs on the GPU. The tree construction and traverse algorithms are portable to many-core devices which have support for the CUDA or OpenCL programming languages. The gravitational tree-code outperforms tuned CPU code during the tree construction and shows a performance improvement of more than a factor of 20 overall, resulting in a processing rate of more than 2.8 million particles per second. The code has a convenient user interface and is freely available for use.
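The "sort" stage of such GPU tree builds is typically a reordering of particles along a space-filling curve. As a rough CPU-side illustration in the spirit of the abstract (not Octgrav's CUDA kernels), the sketch below sorts particles by Morton (Z-order) keys so that spatially nearby particles become contiguous in memory before the octree is linked; the 10-bit key width and random particle cloud are assumptions.

    import numpy as np

    def morton_keys(pos, bits=10):
        """Interleave the bits of quantized x, y, z coordinates into one Z-order key."""
        q = np.clip((pos * (1 << bits)).astype(np.uint64), 0, (1 << bits) - 1)
        keys = np.zeros(len(pos), dtype=np.uint64)
        for b in range(bits):
            for axis in range(3):
                keys |= ((q[:, axis] >> np.uint64(b)) & np.uint64(1)) << np.uint64(3 * b + axis)
        return keys

    rng = np.random.default_rng(0)
    pos = rng.random((1_000_000, 3))          # particle positions in the unit cube
    order = np.argsort(morton_keys(pos))      # the sort stage of tree construction
    pos = pos[order]                          # spatial neighbours are now memory neighbours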
NASA Astrophysics Data System (ADS)
Sewell, Stephen
This thesis introduces a software framework that effectively utilizes low-cost commercially available Graphics Processing Units (GPUs) to simulate complex scientific plasma phenomena that are modeled using the Particle-In-Cell (PIC) paradigm. The software framework that was developed conforms to the Compute Unified Device Architecture (CUDA), a standard for general-purpose graphics processing that was introduced by NVIDIA Corporation. This framework has been verified for correctness and applied to advance the state of understanding of the electromagnetic aspects of the development of the Aurora Borealis and Aurora Australis. For each phase of the PIC methodology, this research has identified one or more methods to exploit the problem's natural parallelism and effectively map it for execution on the graphics processing unit and its host processor. The sources of overhead that can reduce the effectiveness of parallelization for each of these methods have also been identified. One of the novel aspects of this research was the utilization of particle sorting during the grid interpolation phase. The final representation resulted in simulations that executed about 38 times faster than simulations that were run on a single-core general-purpose processing system. The scalability of this framework to larger problem sizes and future generation systems has also been investigated.
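The benefit of particle sorting in the grid-interpolation phase is easy to demonstrate outside CUDA: once particles are ordered by cell index, charge deposition streams through memory one cell at a time. The sketch below is a minimal NumPy analogue under assumed grid and deposition choices (nearest-grid-point), not the thesis's GPU implementation.

    import numpy as np

    rng = np.random.default_rng(1)
    nx = ny = 64                                        # assumed grid resolution
    x = rng.random(500_000) * nx
    y = rng.random(500_000) * ny
    cell = y.astype(np.int64) * nx + x.astype(np.int64) # flattened cell index per particle

    order = np.argsort(cell, kind='stable')             # the sorting pass
    x, y, cell = x[order], y[order], cell[order]

    # Nearest-grid-point deposition now touches rho in near-sequential order.
    rho = np.bincount(cell, minlength=nx * ny).astype(float).reshape(ny, nx)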
Accelerating next generation sequencing data analysis with system level optimizations.
Kathiresan, Nagarajan; Temanni, Ramzi; Almabrazi, Hakeem; Syed, Najeeb; Jithesh, Puthen V; Al-Ali, Rashid
2017-08-22
Next generation sequencing (NGS) data analysis is highly compute intensive. In-memory computing, vectorization, bulk data transfer, and CPU frequency scaling are some of the hardware features in modern computing architectures. To get the best execution time and utilize these hardware features, it is necessary to tune the system level parameters before running the application. We studied GATK-HaplotypeCaller, a component of common NGS workflows that consumes more than 43% of the total execution time. Multiple GATK 3.x versions were benchmarked and the execution time of HaplotypeCaller was optimized via various system level parameters, which included: (i) tuning the parallel garbage collection and kernel shared memory to simulate in-memory computing; (ii) architecture-specific tuning in the PairHMM library for vectorization; (iii) including Java 1.8 features through GATK source code compilation and building a runtime environment for parallel sorting and bulk data transfer; and (iv) switching the CPU frequency from the default 'on-demand' mode to 'performance' mode to accelerate the Java multi-threads. As a result, the HaplotypeCaller execution time was reduced by 82.66% in GATK 3.3 and 42.61% in GATK 3.7. Overall, the execution time of the NGS pipeline was reduced to 70.60% and 34.14% for GATK 3.3 and GATK 3.7, respectively.
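A launcher along these lines shows where such knobs live. The JVM flags and the cpupower governor switch are real interfaces, but the specific values, paths, and the GATK 3.x command below are illustrative assumptions rather than the paper's exact configuration.

    import subprocess

    # Switch the CPU frequency governor from 'ondemand' to 'performance' (needs root).
    subprocess.run(["cpupower", "frequency-set", "-g", "performance"], check=False)

    java_cmd = [
        "java",
        "-XX:+UseParallelGC",             # parallel garbage collection
        "-XX:ParallelGCThreads=16",       # assumed: match the physical core count
        "-Djava.io.tmpdir=/dev/shm/gatk", # stage temporaries in shared memory
        "-jar", "GenomeAnalysisTK.jar",   # GATK 3.x style invocation
        "-T", "HaplotypeCaller",
        "-R", "ref.fasta", "-I", "sample.bam", "-o", "sample.vcf",
    ]
    subprocess.run(java_cmd, check=True)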
O'Brien, J K; Roth, T L; Stoops, M A; Ball, R L; Steinman, K J; Montano, G A; Love, C C; Robeck, T R
2015-01-01
White rhinoceros ejaculates (n=9) collected by electroejaculation from four males were shipped (10°C, 12h) to develop procedures for the production of chilled and frozen-thawed sex-sorted spermatozoa of adequate quality for artificial insemination (AI). Of all electroejaculate fractions, 39.7% (31/78) exhibited high quality post-collection (≥70% total motility and membrane integrity) and of those, 54.8% (17/31) presented reduced in vitro quality after transport and were retrospectively determined to exhibit urine-contamination (≥21.0μg creatinine/ml). Of fractions analyzed for creatinine concentration, 69% (44/64) were classified as urine-contaminated. For high quality non-contaminated fractions, in vitro parameters (motility, velocity, membrane, acrosome and DNA integrity) of chilled non-sorted and sorted spermatozoa were well-maintained at 5°C up to 54h post-collection, whereby >70% of post-transport (non-sorted) or post-sort (sorted) values were retained. By 54h post-collection, some motility parameters were higher (P<0.05) for non-sorted spermatozoa (total motility, rapid velocity, average path velocity) whereas all remaining motion parameters as well as membrane, acrosome and DNA integrity were similar between sperm types. In comparison with a straw method, directional freezing resulted in enhanced (P<0.05) motility and velocity of non-sorted and sorted spermatozoa, with comparable overall post-thaw quality between sperm types. High purity enrichment of X-bearing (89±6%) or Y-bearing (86±3%) spermatozoa was achieved using moderate sorting rates (2540±498X-spermatozoa/s; 1800±557Y-spermatozoa/s). Collective in vitro characteristics of sorted-chilled or sorted-frozen-thawed spermatozoa derived from high quality electroejaculates indicate acceptable fertility potential for use in AI.
Encapsulation of sex sorted boar semen: sperm membrane status and oocyte penetration parameters.
Spinaci, Marcella; Chlapanidas, Theodora; Bucci, Diego; Vallorani, Claudia; Perteghella, Sara; Lucconi, Giulia; Communod, Ricardo; Vigo, Daniele; Galeati, Giovanna; Faustini, Massimo; Torre, Maria Luisa
2013-03-01
Although sorted semen is experimentally used for artificial, intrauterine, and intratubal insemination and in vitro fertilization, its commercial application in swine species is still far from a reality. This is because of the low sort rate and the large number of sperm required for routine artificial insemination in the pig, compared with other production animals, and the greater susceptibility of porcine spermatozoa to the stress induced by the different sex sorting steps and the postsorting handling protocols. Encapsulation technology could overcome this limitation in vivo, protecting and allowing the slow release of low-dose sorted semen. The aim of this work was to evaluate the impact of the encapsulation process on viability, acrosome integrity, and the in vitro fertilizing potential of sorted boar semen. Our results indicate that the encapsulation technique does not damage boar sorted semen; in fact, during a 72-hour storage, no differences were observed between liquid-stored sorted semen and encapsulated sorted semen in terms of plasma membrane (39.98 ± 14.38% vs. 44.32 ± 11.72%, respectively) and acrosome integrity (74.32 ± 12.17% vs. 66.07 ± 10.83%, respectively). Encapsulated sorted spermatozoa presented a lower penetration potential than nonencapsulated ones (47.02% vs. 24.57%, respectively, P < 0.0001), and a significant reduction of polyspermic fertilization (60.76% vs. 36.43%, respectively, polyspermic ova/total ova; P < 0.0001). However, no difference (P > 0.05) was observed in terms of total efficiency of fertilization expressed as normospermic oocytes/total oocytes (18.45% vs. 15.43% for sorted diluted and sorted encapsulated semen, respectively). Encapsulation could be an alternative method of storing pig sex-sorted spermatozoa and is potentially a promising technique for optimizing the use of low doses of sexed spermatozoa in vivo.
Li, Jibiao; Woolbright, Benjamin L; Zhao, Wen; Wang, Yifeng; Matye, David; Hagenbuch, Bruno; Jaeschke, Hartmut; Li, Tiangang
2018-01-01
Sortilin 1 (Sort1) is an intracellular trafficking receptor that mediates protein sorting in the endocytic or secretory pathways. Recent studies revealed a role of Sort1 in the regulation of cholesterol and bile acid (BA) metabolism. This study further investigated the role of Sort1 in modulating BA detoxification and cholestatic liver injury in bile duct-ligated mice. We found that Sort1 knockout (KO) mice had attenuated liver injury 24 h after bile duct ligation (BDL), which was mainly attributed to less bile infarct formation. Sham-operated Sort1 KO mice had about a 20% larger BA pool size than sham-operated wildtype (WT) mice, but 24 h after BDL Sort1 KO mice had significantly attenuated hepatic BA accumulation and a smaller BA pool size. After 14 days of BDL, Sort1 KO mice showed significantly lower hepatic BA concentration and reduced expression of inflammatory and fibrotic marker genes, but a similar degree of liver fibrosis compared with WT mice. Unbiased quantitative proteomics revealed that Sort1 KO mice had increased hepatic BA sulfotransferase 2A1, but unaltered phase-I BA-metabolizing cytochrome P450s and phase-III BA efflux transporters. Consistently, Sort1 KO mice showed elevated plasma sulfated taurocholate after BDL. Finally, we found that liver Sort1 was repressed after BDL, which may be due to BA activation of the farnesoid X receptor. In conclusion, we report a role of Sort1 in the regulation of hepatic BA detoxification and cholestatic liver injury in mice. The mechanisms underlying increased hepatic BA elimination in Sort1 KO mice after BDL require further investigation.
Reducing 4D CT artifacts using optimized sorting based on anatomic similarity.
Johnston, Eric; Diehn, Maximilian; Murphy, James D; Loo, Billy W; Maxim, Peter G
2011-05-01
Four-dimensional (4D) computed tomography (CT) has been widely used as a tool to characterize respiratory motion in radiotherapy. The two most commonly used 4D CT algorithms sort images by the associated respiratory phase or displacement into a predefined number of bins, and are prone to image artifacts at transitions between bed positions. The purpose of this work is to demonstrate a method of reducing motion artifacts in 4D CT by incorporating anatomic similarity into phase or displacement based sorting protocols. Ten patient datasets were retrospectively sorted using both the displacement and phase based sorting algorithms. Conventional sorting methods allow selection of only the nearest-neighbor image in time or displacement within each bin. In our method, for each bed position either the displacement or the phase defines the center of a bin range about which several candidate images are selected. The two-dimensional correlation coefficients between slices bordering the interface between adjacent couch positions are then calculated for all candidate pairings. Two slices have a high correlation if they are anatomically similar. Candidates from each bin are then selected to maximize the slice correlation over the entire dataset using Dijkstra's shortest path algorithm. To assess the reduction of artifacts, two thoracic radiation oncologists independently compared the resorted 4D datasets pairwise with conventionally sorted datasets, blinded to the sorting method, to choose which had the least motion artifacts. Agreement between reviewers was evaluated using the weighted kappa score. Anatomically based image selection resulted in 4D CT datasets with significantly reduced motion artifacts with both displacement (P = 0.0063) and phase sorting (P = 0.00022). There was good agreement between the two reviewers, with complete agreement 34 times and complete disagreement 6 times. Optimized sorting using anatomic similarity significantly reduces 4D CT motion artifacts compared to conventional phase or displacement based sorting. This improved sorting algorithm is a straightforward extension of the two most common 4D CT sorting algorithms.
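Because the bed positions form a chain, the shortest-path search over candidate images can be written as a stage-wise dynamic program (equivalent to Dijkstra on this layered graph, with edge cost = -correlation). The sketch below assumes each candidate image is represented by its two border slices under hypothetical 'top'/'bottom' keys; that data layout is an assumption, not the paper's.

    import numpy as np

    def corr(a, b):
        """Two-dimensional correlation coefficient between two border slices."""
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

    def resort(candidates):
        """candidates[p] = list of {'top': slice, 'bottom': slice} dicts for bed
        position p. Returns one candidate index per position, maximizing summed
        correlation across all adjacent interfaces."""
        best = [np.zeros(len(candidates[0]))]       # running best score per candidate
        back = []
        for p in range(1, len(candidates)):
            w = np.array([[corr(u['bottom'], v['top']) for u in candidates[p - 1]]
                          for v in candidates[p]])  # w[j, i]: candidate i -> j
            scores = w + best[-1][None, :]
            back.append(scores.argmax(axis=1))
            best.append(scores.max(axis=1))
        sel = [int(best[-1].argmax())]
        for p in range(len(candidates) - 2, -1, -1):
            sel.append(int(back[p][sel[-1]]))       # trace the best chain backwards
        return sel[::-1]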
NIH Toolbox Cognition Battery (NIHTB-CB): list sorting test to measure working memory.
Tulsky, David S; Carlozzi, Noelle; Chiaravalloti, Nancy D; Beaumont, Jennifer L; Kisala, Pamela A; Mungas, Dan; Conway, Kevin; Gershon, Richard
2014-07-01
The List Sorting Working Memory Test was designed to assess working memory (WM) as part of the NIH Toolbox Cognition Battery. List Sorting is a sequencing task requiring children and adults to sort and sequence stimuli that are presented visually and auditorily. Validation data are presented for 268 participants ages 20 to 85 years. A subset of participants (N=89) was retested 7 to 21 days later. As expected, the List Sorting Test had moderately high correlations with other measures of working memory and executive functioning (convergent validity) but a low correlation with a test of receptive vocabulary (discriminant validity). Furthermore, List Sorting demonstrates expected changes over the age span and has excellent test-retest reliability. Collectively, these results provide initial support for the construct validity of the List Sorting Working Memory Measure as a measure of working memory. However, the relationship between the List Sorting Test and general executive function has yet to be determined.
Manual sorting to eliminate aflatoxin from peanuts.
Galvez, F C F; Francisco, M L D L; Villarino, B J; Lustre, A O; Resurreccion, A V A
2003-10-01
A manual sorting procedure was developed to eliminate aflatoxin contamination from peanuts. The efficiency of the sorting process in eliminating aflatoxin-contaminated kernels from lots of raw peanuts was verified. The blanching of 20 kg of peanuts at 140°C for 25 min in preheated roasters facilitated the manual sorting of aflatoxin-contaminated kernels after deskinning. The manual sorting of raw materials with initially high aflatoxin contents (300 ppb) resulted in aflatoxin-free peanuts (i.e., peanuts in which no aflatoxin was detected). Verification procedures showed that the sorted sound peanuts contained no aflatoxin or contained low levels (<15 ppb) of aflatoxin. The results obtained confirmed that the sorting process was effective in separating contaminated peanuts whether or not contamination was extensive. At the commercial level, when roasters were not preheated, the dry blanching of 50 kg of peanuts for 45 to 55 min facilitated the proper deskinning and subsequent manual sorting of aflatoxin-contaminated peanut kernels from sound kernels.
Automated spike sorting algorithm based on Laplacian eigenmaps and k-means clustering.
Chah, E; Hok, V; Della-Chiesa, A; Miller, J J H; O'Mara, S M; Reilly, R B
2011-02-01
This study presents a new automatic spike sorting method based on feature extraction by Laplacian eigenmaps combined with k-means clustering. The performance of the proposed method was compared against previously reported algorithms such as principal component analysis (PCA) and amplitude-based feature extraction. Two types of classifier (namely k-means and classification expectation-maximization) were incorporated within the spike sorting algorithms, in order to find a suitable classifier for the feature sets. Simulated data sets and in-vivo tetrode multichannel recordings were employed to assess the performance of the spike sorting algorithms. The results show that the proposed algorithm yields significantly improved performance, with a mean sorting accuracy of 73% and sorting error of 10%, compared with PCA combined with k-means, which had a sorting accuracy of 58% and sorting error of 10%.
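The described two-stage pipeline maps directly onto standard library calls; scikit-learn's SpectralEmbedding is an implementation of Laplacian eigenmaps. The sketch below is a minimal version under assumed parameters (embedding dimension, neighbourhood size) and toy waveforms, not the authors' code.

    import numpy as np
    from sklearn.manifold import SpectralEmbedding   # Laplacian eigenmaps
    from sklearn.cluster import KMeans

    def sort_spikes(waveforms, n_units, n_dims=2):
        """waveforms: (n_spikes, n_samples) aligned snippets -> unit label per spike."""
        feats = SpectralEmbedding(n_components=n_dims, n_neighbors=20).fit_transform(waveforms)
        return KMeans(n_clusters=n_units, n_init=10).fit_predict(feats)

    # Toy demo: two synthetic units with different spike widths plus noise.
    rng = np.random.default_rng(0)
    t = np.linspace(-1, 1, 48)
    spikes = np.vstack([np.exp(-t**2 / 0.02) + 0.05 * rng.standard_normal((200, 48)),
                        np.exp(-t**2 / 0.10) + 0.05 * rng.standard_normal((200, 48))])
    labels = sort_spikes(spikes, n_units=2)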
A Simple Deep Learning Method for Neuronal Spike Sorting
NASA Astrophysics Data System (ADS)
Yang, Kai; Wu, Haifeng; Zeng, Yu
2017-10-01
Spike sorting is one of the key techniques for understanding brain activity. With the development of modern electrophysiology technology, recent multi-electrode technologies have become able to record the activity of thousands of neurons simultaneously, which greatly increases the computational burden of conventional sorting algorithms. In this paper, we focus on reducing this complexity and introduce a deep learning algorithm, the principal component analysis network (PCANet), to spike sorting. The introduced method starts from a conventional model and establishes a Toeplitz matrix. From the column vectors of this matrix, we train a PCANet from which eigenvector features of the spikes can be extracted. Finally, a support vector machine (SVM) is used to sort the spikes. In experiments, we chose two groups of simulated data from publicly available databases and compared the introduced method with conventional methods. The results indicate that the introduced method indeed has lower complexity, with the same sorting errors as the conventional methods.
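A drastically simplified, one-stage PCANet-style extractor conveys the idea: sliding windows of each waveform form the columns of a Toeplitz-style patch matrix, PCA over those patches yields a small filter bank, and the filter responses feed an SVM. The patch size, filter count, and single-stage depth are assumptions for brevity, not the paper's configuration.

    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    def pcanet_features(spikes, patch=8, n_filters=4):
        """spikes: (n_spikes, n_samples). One-stage PCANet-like features."""
        patches = sliding_window_view(spikes, patch, axis=1)      # (n, L-patch+1, patch)
        patches = patches - patches.mean(axis=2, keepdims=True)   # remove patch means
        filters = PCA(n_components=n_filters).fit(patches.reshape(-1, patch)).components_
        resp = np.einsum('nlp,fp->nfl', patches, filters)         # filter responses
        return resp.reshape(len(spikes), -1)

    def train_sorter(spikes, labels):
        return SVC(kernel='linear').fit(pcanet_features(spikes), labels)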
An Ultrasonic Multi-Beam Concentration Meter with a Neuro-Fuzzy Algorithm for Water Treatment Plants
Lee, Ho-Hyun; Jang, Sang-Bok; Shin, Gang-Wook; Hong, Sung-Taek; Lee, Dae-Jong; Chun, Myung Geun
2015-01-01
Ultrasonic concentration meters have been widely used at water purification, sewage treatment and waste water treatment plants to sort and transfer high concentration sludges and to control the amount of chemical dosage. When an unusual substance is contained in the sludge, however, the attenuation of ultrasonic waves can increase, or the waves may not be transmitted to the receiver at all; in such cases the value measured by a concentration meter is higher than the actual density value, or fluctuates. In addition, it is difficult to automate the residuals treatment process in the face of problems such as sludge attachment or sensor failure. An ultrasonic multi-beam concentration sensor was considered to solve these problems, but an abnormal concentration value from a single ultrasonic beam degrades the accuracy of the entire measurement when a conventional arithmetic mean of all measurement values is used. This paper therefore proposes a method to improve the accuracy of the sludge concentration determination by choosing reliable sensor values and applying a neuro-fuzzy learning algorithm. The newly developed meter is shown to render useful results in a variety of experiments at a real water treatment plant. PMID:26512666
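The full neuro-fuzzy stage is beyond a short sketch, but the first step the abstract describes (choosing reliable beam values before fusing them) can be illustrated with a simple robust filter; the median absolute deviation test and its threshold below are assumptions standing in for the paper's reliability selection.

    import numpy as np

    def fuse_beams(readings, k=3.0):
        """Discard ultrasonic beams whose readings deviate strongly from the
        median (MAD test), then average the reliable ones."""
        r = np.asarray(readings, dtype=float)
        med = np.median(r)
        mad = np.median(np.abs(r - med)) + 1e-12
        reliable = np.abs(r - med) < k * 1.4826 * mad   # ~k-sigma under normality
        return r[reliable].mean(), reliable

    conc, mask = fuse_beams([2.1, 2.2, 2.0, 7.9, 2.15])  # the 7.9 beam is rejected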
Economic and environmental optimization of waste treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Münster, M.; Ravn, H.; Hedegaard, K.
2015-04-15
Highlights:
• Optimizing waste treatment by incorporating LCA methodology.
• Applying different objectives (minimizing costs or GHG emissions).
• Prioritizing multiple objectives given different weights.
• Optimum depends on objective and assumed displaced electricity production.
Abstract: This article presents the new systems engineering optimization model, OptiWaste, which incorporates a life cycle assessment (LCA) methodology and captures important characteristics of waste management systems. As part of the optimization, the model identifies the most attractive waste management options. The model renders it possible to apply different optimization objectives, such as minimizing costs or greenhouse gas emissions, or to prioritize several objectives given different weights. A simple illustrative case is analysed, covering alternative treatments of one tonne of residual household waste: incineration of the full amount, or sorting out organic waste for biogas production for either combined heat and power generation or use as fuel in vehicles. The case study illustrates that the optimal solution depends on the objective and on assumptions regarding the background system (illustrated with different assumptions regarding displaced electricity production). The article shows that it is feasible to combine LCA methodology with optimization. Furthermore, it highlights the need for including the integrated waste and energy system in the model.
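The weighted-priority formulation reduces to a small linear program. The sketch below reproduces the shape of the illustrative case with placeholder cost and GHG coefficients (assumptions, not the paper's LCA data).

    import numpy as np
    from scipy.optimize import linprog

    # Treatment options for one tonne of residual household waste; the cost
    # (EUR/t) and GHG (kg CO2e/t) coefficients are placeholders, not LCA data.
    options = ['incineration', 'biogas + CHP', 'biogas + vehicle fuel']
    cost = np.array([60.0, 80.0, 90.0])
    ghg = np.array([120.0, 40.0, 20.0])

    def optimize(w_cost, w_ghg):
        """Minimize a weighted sum of objectives; x[i] = tonnes sent to option i."""
        c = w_cost * cost + w_ghg * ghg
        res = linprog(c, A_eq=[[1.0, 1.0, 1.0]], b_eq=[1.0], bounds=[(0, 1)] * 3)
        return dict(zip(options, res.x.round(3)))

    print(optimize(1.0, 0.0))  # pure cost minimization
    print(optimize(0.0, 1.0))  # pure GHG minimization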
CARDS - comprehensive aerological reference data set. Station history, Version 2.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1994-03-01
The possibility of anthropogenic climate change has reached the attention of Government officials and researchers. However, one cannot study climate change without climate data. The CARDS project will produce high-quality upper-air data for the research community and for policy-makers. The authors intend to produce a dataset which is: easy to use, as complete as possible, and as free of random errors as possible. They will also attempt to identify biases and remove them whenever possible. In this report, they relate progress toward their goal. They created a robust new format for archiving upper-air data, and designed a relational database structure to hold them. The authors have converted 13 datasets to the new format and have archived over 10,000,000 individual soundings from 10 separate data sources. They produce and archive a metadata summary of each sounding they load. They have researched station histories, and have built a preliminary upper-air station history database. They have converted station-sorted data from their primary database into synoptic-sorted data in a parallel database. They have tested and will soon implement an advanced quality-control procedure, capable of detecting and often repairing errors in geopotential height, temperature, humidity, and wind. This unique quality-control method uses simultaneous vertical, horizontal, and temporal checks of several meteorological variables. It can detect errors other methods cannot. This report contains the station histories for the CARDS data set.
NASA Astrophysics Data System (ADS)
Zhang, J.; Lei, X.; Liu, P.; Wang, H.; Li, Z.
2017-12-01
Flood control operation of multi-reservoir systems, such as parallel reservoirs and hybrid reservoirs, often suffers from complex interactions and trade-offs among tributaries and the mainstream. The optimization of such systems is computationally intensive due to nonlinear storage curves, numerous constraints and complex hydraulic connections. This paper aims to derive optimal flood control operating rules based on the trade-off among tributaries and the mainstream using a new algorithm known as the weighted non-dominated sorting genetic algorithm II (WNSGA II). WNSGA II can locate the Pareto frontier in the non-dominated region efficiently due to directed searching by weighted crowding distance, and the results are compared with those of conventional operating rules (COR) and a single-objective genetic algorithm (GA). The Xijiang River basin in China is selected as a case study, with eight reservoirs and five flood control sections within four tributaries and the mainstream. Furthermore, the effects of inflow uncertainty have been assessed. Results indicate that: (1) WNSGA II locates the non-dominated solutions faster and provides a better Pareto frontier than the traditional non-dominated sorting genetic algorithm II (NSGA II) due to the weighted crowding distance; (2) WNSGA II outperforms COR and GA on flood control in the whole basin; (3) the multi-objective operating rules from WNSGA II deal with inflow uncertainties better than COR. Therefore, WNSGA II can be used to derive stable operating rules for large-scale reservoir systems effectively and efficiently.
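The weighting enters NSGA II through the crowding distance used to rank solutions within a front. One plausible weighted variant is sketched below; the exact weighting scheme of WNSGA II is not specified in the abstract, so this is an assumption for illustration.

    import numpy as np

    def weighted_crowding_distance(F, w):
        """F: (n_points, n_objectives) values of one non-dominated front.
        w: per-objective weights steering the search toward preferred trade-offs.
        Returns the crowding distance of each point."""
        n, m = F.shape
        d = np.zeros(n)
        for j in range(m):
            order = np.argsort(F[:, j])
            span = float(F[order[-1], j] - F[order[0], j]) or 1.0
            d[order[0]] = d[order[-1]] = np.inf      # always keep the extremes
            gaps = (F[order[2:], j] - F[order[:-2], j]) / span
            d[order[1:-1]] += w[j] * gaps            # weighted neighbour gap
        return d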
UNC-108/Rab2 Regulates Postendocytic Trafficking in Caenorhabditis elegans
Chun, Denise K.; McEwen, Jason M.; Burbea, Michelle
2008-01-01
After endocytosis, membrane proteins are often sorted between two alternative pathways: a recycling pathway and a degradation pathway. Relatively little is known about how trafficking through these alternative pathways is differentially regulated. Here, we identify UNC-108/Rab2 as a regulator of postendocytic trafficking in both neurons and coelomocytes. Mutations in the Caenorhabditis elegans Rab2 gene unc-108 caused the green fluorescent protein (GFP)-tagged glutamate receptor GLR-1 (GLR-1::GFP) to accumulate in the ventral cord and in neuronal cell bodies. In neuronal cell bodies of unc-108/Rab2 mutants, GLR-1::GFP was found in tubulovesicular structures that colocalized with markers for early and recycling endosomes, including Syntaxin-13 and Rab8. GFP-tagged Syntaxin-13 also accumulated in the ventral cord of unc-108/Rab2 mutants. UNC-108/Rab2 was not required for ubiquitin-mediated sorting of GLR-1::GFP into the multivesicular body (MVB) degradation pathway. Mutations disrupting the MVB pathway and unc-108/Rab2 mutations had additive effects on GLR-1::GFP levels in the ventral cord. In coelomocytes, postendocytic trafficking of the marker Texas Red-bovine serum albumin was delayed. These results demonstrate that UNC-108/Rab2 regulates postendocytic trafficking, most likely at the level of early or recycling endosomes, and that UNC-108/Rab2 and the MVB pathway define alternative postendocytic trafficking mechanisms that operate in parallel. These results define a new function for Rab2 in protein trafficking. PMID:18434599
Regulation of synaptic activity by snapin-mediated endolysosomal transport and sorting
Di Giovanni, Jerome; Sheng, Zu-Hang
2015-01-01
Recycling synaptic vesicles (SVs) transit through early endosomal sorting stations, which raises a fundamental question: are SVs sorted toward endolysosomal pathways? Here, we used snapin mutants as tools to assess how endolysosomal sorting and trafficking impact presynaptic activity in wild-type and snapin−/− neurons. Snapin acts as a dynein adaptor that mediates the retrograde transport of late endosomes (LEs) and interacts with dysbindin, a subunit of the endosomal sorting complex BLOC-1. Expressing dynein-binding defective snapin mutants induced SV accumulation at presynaptic terminals, mimicking the snapin−/− phenotype. Conversely, over-expressing snapin reduced SV pool size by enhancing SV trafficking to the endolysosomal pathway. Using a SV-targeted Ca2+ sensor, we demonstrate that snapin–dysbindin interaction regulates SV positional priming through BLOC-1/AP-3-dependent sorting. Our study reveals a bipartite regulation of presynaptic activity by endolysosomal trafficking and sorting: LE transport regulates SV pool size, and BLOC-1/AP-3-dependent sorting fine-tunes the Ca2+ sensitivity of SV release. Therefore, our study provides new mechanistic insights into the maintenance and regulation of SV pool size and synchronized SV fusion through snapin-mediated LE trafficking and endosomal sorting. PMID:26108535
Yu, Jessica S; Pertusi, Dante A; Adeniran, Adebola V; Tyo, Keith E J
2017-03-15
High throughput screening by fluorescence activated cell sorting (FACS) is a common task in protein engineering and directed evolution. It can also be a rate-limiting step if high false positive or negative rates necessitate multiple rounds of enrichment. Current FACS software requires the user to define sorting gates by intuition and is practically limited to two dimensions. In cases when multiple rounds of enrichment are required, the software cannot forecast the enrichment effort required. We have developed CellSort, a support vector machine (SVM) algorithm that identifies optimal sorting gates based on machine learning using positive and negative control populations. CellSort can take advantage of more than two dimensions to enhance the ability to distinguish between populations. We also present a Bayesian approach to predict the number of sorting rounds required to enrich a population from a given library size. This Bayesian approach allowed us to determine strategies for biasing the sorting gates in order to reduce the required number of enrichment rounds. This algorithm should be generally useful for improving sorting outcomes and reducing effort when using FACS. Source code is available at http://tyolab.northwestern.edu/tools/. Contact: k-tyo@northwestern.edu. Supplementary data are available at Bioinformatics online.
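The two ingredients are easy to prototype: an SVM gate trained on control populations, and an enrichment recursion for forecasting rounds. The sketch below is a generic illustration, with fluorescence-channel features and the gate's true/false positive rates as assumed inputs; the simple recursion stands in for the paper's Bayesian treatment and is not the published CellSort code.

    import numpy as np
    from sklearn.svm import SVC

    def train_gate(pos_events, neg_events):
        """Fit an SVM gate from positive/negative control populations.
        Events are rows of fluorescence-channel measurements (any dimension)."""
        X = np.vstack([pos_events, neg_events])
        y = np.r_[np.ones(len(pos_events)), np.zeros(len(neg_events))]
        return SVC(kernel='rbf', gamma='scale').fit(X, y)

    def rounds_to_enrich(p0, tpr, fpr, target=0.9, max_rounds=20):
        """Forecast sorting rounds until the positive fraction exceeds `target`,
        given the gate's true/false positive rates."""
        p = p0
        for r in range(1, max_rounds + 1):
            p = tpr * p / (tpr * p + fpr * (1 - p))   # positive fraction after one pass
            if p >= target:
                return r
        return max_rounds

    print(rounds_to_enrich(p0=1e-4, tpr=0.8, fpr=0.01))  # -> 3 rounds in this model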