A new deadlock resolution protocol and message matching algorithm for the extreme-scale simulator
Engelmann, Christian; Naughton, III, Thomas J.
2016-03-22
Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different HPC architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement. The simulation overhead for running the NAS Parallel Benchmark suite was reduced from 102% to 0% for the embarrassingly parallel (EP) benchmark and from 1,020% to 238% for the conjugate gradient (CG) benchmark. xSim offers a highly accurate simulation mode for better tracking of injected MPI process failures. Furthermore, with highly accurate simulation, the overhead was reduced from 3,332% to 204% for EP and from 37,511% to 13,808% for CG.
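The abstract reports the overhead reductions but does not spell out the new matching algorithm itself. As background, the C sketch below illustrates the classic two-queue matching discipline (a posted-receive queue and an unexpected-message queue) that a simulator such as xSim must reproduce for every virtual MPI rank; all names are illustrative, and nothing here is taken from xSim's implementation.

```c
/* Two-queue MPI message matching, sketched for illustration only. */
#include <stdio.h>
#include <stdlib.h>

#define ANY_SOURCE -1   /* stands in for MPI_ANY_SOURCE */
#define ANY_TAG    -1   /* stands in for MPI_ANY_TAG */

typedef struct Msg {
    int source, tag;            /* match keys; communicator omitted */
    struct Msg *next;
} Msg;

typedef struct { Msg *head, *tail; } Queue;

static void enqueue(Queue *q, int source, int tag) {
    Msg *m = malloc(sizeof *m);
    m->source = source; m->tag = tag; m->next = NULL;
    if (q->tail) q->tail->next = m; else q->head = m;
    q->tail = m;
}

/* FIFO search honouring wildcards on either side; unlinks and returns
 * the first match, or NULL if none matches. */
static Msg *match(Queue *q, int source, int tag) {
    Msg *prev = NULL;
    for (Msg *m = q->head; m; prev = m, m = m->next) {
        int src_ok = source == ANY_SOURCE || m->source == ANY_SOURCE ||
                     m->source == source;
        int tag_ok = tag == ANY_TAG || m->tag == ANY_TAG || m->tag == tag;
        if (src_ok && tag_ok) {
            if (prev) prev->next = m->next; else q->head = m->next;
            if (q->tail == m) q->tail = prev;
            return m;
        }
    }
    return NULL;
}

int main(void) {
    Queue posted = {0}, unexpected = {0};
    enqueue(&posted, ANY_SOURCE, 7);          /* a posted wildcard receive */
    /* An arriving message first searches the posted-receive queue... */
    Msg *m = match(&posted, 3, 7);
    printf("arrival matched posted receive: %s\n", m ? "yes" : "no");
    free(m);
    /* ...and is parked on the unexpected queue otherwise, to be matched
     * when a suitable receive is eventually posted. */
    enqueue(&unexpected, 5, 9);
    m = match(&unexpected, ANY_SOURCE, 9);
    printf("late receive matched unexpected message: %s\n", m ? "yes" : "no");
    free(m);
    return 0;
}
```

With millions of simulated ranks, each virtual rank carries its own pair of queues, which is plausibly where the oversubscription management overhead targeted by the paper arises.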
Specification and Analysis of Parallel Machine Architecture
1990-03-17
Parallel Machine Architecture. C.V. Ramamoorthy, Computer Science Division, Dept. of Electrical Engineering and Computer Science, University of California. ...capacity. (4) Adaptive: the overhead of resolving deadlocks, etc. should be proportional to their frequency. (5) Avoid rollbacks: rollbacks can be... snapshots of system state displayed graphically at a rate proportional to simulation time. Some examples follow: (1) when the simulation clock of...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elmagarmid, A.K.
The availability of distributed data bases is directly affected by the timely detection and resolution of deadlocks. Consequently, mechanisms are needed to make deadlock detection algorithms resilient to failures. Presented first is a centralized algorithm that allows transactions to have multiple requests outstanding. Next, a new distributed deadlock detection algorithm (DDDA) is presented, using a global detector (GD) to detect global deadlocks and local detectors (LDs) to detect local deadlocks. This algorithm essentially identifies transaction-resource interactions that may cause global (multisite) deadlocks. Third, a deadlock detection algorithm utilizing a transaction-wait-for (TWF) graph is presented. It is a fully disjoint algorithm that allows multiple outstanding requests. The proposed algorithm can achieve improved overall performance by using multiple disjoint controllers coupled with the two-phase property while maintaining the simplicity of centralized schemes. Fourth, an algorithm that combines deadlock detection and avoidance is given. This algorithm uses concurrent transaction controllers and resource coordinators to achieve maximum distribution. The language of CSP is used to describe this algorithm. Finally, two efficient deadlock resolution protocols are given along with some guidelines to be used in choosing a transaction for abortion.
Reducing False Positives in Runtime Analysis of Deadlocks
NASA Technical Reports Server (NTRS)
Bensalem, Saddek; Havelund, Klaus; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper presents an improvement of a standard algorithm for detecting deadlock potentials in multi-threaded programs, in that it reduces the number of false positives. The standard algorithm works as follows. The multi-threaded program under observation is executed, while lock and unlock events are observed. A graph of locks is built, with edges between locks symbolizing locking orders. Any cycle in the graph signifies a potential for a deadlock. The typical standard example is the group of dining philosophers sharing forks. The algorithm is interesting because it can catch deadlock potentials even though no deadlocks occur in the examined trace, and at the same time it scales very well in contrast to more formal approaches to deadlock detection. The algorithm, however, can yield false positives (as well as false negatives). The extension of the algorithm described in this paper reduces the number of false positives for three particular cases: when a gate lock protects a cycle, when a single thread introduces a cycle, and when the code segments in different threads that cause the cycle cannot actually execute in parallel. The paper formalizes a theory for dynamic deadlock detection and compares it to model checking and static analysis techniques. It furthermore describes an implementation for analyzing Java programs and its application to two case studies: a planetary rover and a spacecraft attitude control system.
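The standard algorithm summarized above is compact enough to sketch. In the C illustration below (an assumed structure, not the paper's Java implementation), an edge a→b records that lock b was acquired while lock a was held, and a depth-first search reports any cycle as a deadlock potential; the paper's gate-lock, single-thread, and happens-in-parallel refinements are omitted.

```c
/* Lock-order graph with DFS cycle detection (sketch). */
#include <stdio.h>
#include <stdbool.h>

#define NLOCKS 4
static bool edge[NLOCKS][NLOCKS];  /* edge[a][b]: b acquired while holding a */
static int  color[NLOCKS];         /* 0 = white, 1 = grey (on stack), 2 = black */

static bool dfs_cycle(int u) {
    color[u] = 1;
    for (int v = 0; v < NLOCKS; v++) {
        if (!edge[u][v]) continue;
        if (color[v] == 1) return true;           /* back edge: cycle found */
        if (color[v] == 0 && dfs_cycle(v)) return true;
    }
    color[u] = 2;
    return false;
}

int main(void) {
    /* Trace from a dining-philosophers-style run: thread 1 took L0 then L1,
     * thread 2 took L1 then L0 -- no deadlock occurred on this run, but the
     * potential shows up as a cycle. */
    edge[0][1] = true;
    edge[1][0] = true;
    for (int u = 0; u < NLOCKS; u++)
        if (color[u] == 0 && dfs_cycle(u)) {
            puts("deadlock potential: cycle in lock graph");
            return 0;
        }
    puts("no cycle found");
    return 0;
}
```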
Applications of colored petri net and genetic algorithms to cluster tool scheduling
NASA Astrophysics Data System (ADS)
Liu, Tung-Kuan; Kuo, Chih-Jen; Hsiao, Yung-Chin; Tsai, Jinn-Tsong; Chou, Jyh-Horng
2005-12-01
In this paper, we propose a method which uses Coloured Petri Nets (CPNs) and genetic algorithms (GAs) to obtain an optimal deadlock-free schedule and to solve the re-entrant problem for the flexible process of the cluster tool. The process of the cluster tool for producing a wafer can usually be classified into three types: 1) sequential process, 2) parallel process, and 3) sequential parallel process. But these processes are not economical enough for producing a variety of wafers in small volumes. Therefore, this paper proposes a flexible process in which the operations of fabricating wafers are randomly arranged to achieve the best utilization of the cluster tool. However, the flexible process may have deadlock and re-entrant problems, which can be detected by CPN. On the other hand, GAs have been applied to find the optimal schedule for many types of manufacturing processes. Therefore, we successfully integrate CPN and GAs to obtain an optimal schedule free of the deadlock and re-entrant problems for the flexible process of the cluster tool.
Napolitano, Jr., Leonard M.
1995-01-01
The Lambda network is a single stage, packet-switched interprocessor communication network for a distributed memory, parallel processor computer. Its design arises from the desired network characteristics of minimizing mean and maximum packet transfer time, local routing, expandability, deadlock avoidance, and fault tolerance. The network is based on fixed degree nodes, and its mean and maximum packet transfer distances are functions of n, the number of processors. The routing method is detailed, as are methods for expandability, deadlock avoidance, and fault tolerance.
Napolitano, L.M. Jr.
1995-11-28
The Lambda network is a single stage, packet-switched interprocessor communication network for a distributed memory, parallel processor computer. Its design arises from the desired network characteristics of minimizing mean and maximum packet transfer time, local routing, expandability, deadlock avoidance, and fault tolerance. The network is based on fixed degree nodes, and its mean and maximum packet transfer distances are functions of n, the number of processors. The routing method is detailed, as are methods for expandability, deadlock avoidance, and fault tolerance. 14 figs.
Reliability models for dataflow computer systems
NASA Technical Reports Server (NTRS)
Kavi, K. M.; Buckles, B. P.
1985-01-01
The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.
NASA Astrophysics Data System (ADS)
Matsakis, Nicholas D.; Gross, Thomas R.
Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.
Preventing messaging queue deadlocks in a DMA environment
Blocksome, Michael A; Chen, Dong; Gooding, Thomas; Heidelberger, Philip; Parker, Jeff
2014-01-14
Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.
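A rough C sketch of the swap-to-larger-queue strategy described in the abstract follows; all types and names are invented, and a real DMA descriptor queue is a hardware-managed ring buffer rather than a malloc'd array.

```c
#include <stdlib.h>
#include <string.h>

typedef struct { int payload; } Descriptor;      /* stand-in descriptor */

typedef struct {
    Descriptor *slots;
    size_t      count, capacity;
} MsgQueue;

/* Invoked from the "queue full" interrupt handler: with the DMA stopped,
 * move every descriptor into a queue twice the size, then let the caller
 * restart the DMA. Returns 0 on success. */
static int swap_to_larger_queue(MsgQueue *q) {
    size_t newcap = q->capacity * 2;
    Descriptor *bigger = malloc(newcap * sizeof *bigger);
    if (!bigger) return -1;   /* would fall back to the block-allocation path */
    memcpy(bigger, q->slots, q->count * sizeof *bigger);
    free(q->slots);
    q->slots = bigger;
    q->capacity = newcap;
    return 0;
}

int main(void) {
    MsgQueue q = { malloc(4 * sizeof(Descriptor)), 4, 4 };   /* a full queue */
    return swap_to_larger_queue(&q);
}
```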
Deciding the liveness for a subclass of weighted Petri nets based on structurally circular wait
NASA Astrophysics Data System (ADS)
Liu, GuanJun; Chen, LiJing
2016-05-01
Weighted Petri nets, as a kind of formal language, are widely used to model and verify discrete event systems related to resource allocation, like flexible manufacturing systems. The System of Simple Sequential Processes with Multi-Resources (S3PMR), a subclass of weighted Petri nets and an important extension to the well-known System of Simple Sequential Processes with Resources, can model many discrete event systems in which (1) multiple processes may run in parallel and (2) each execution step of each process may use multiple units from multiple resource types. This paper gives a necessary and sufficient condition for the liveness of S3PMR. A new structural concept called Structurally Circular Wait (SCW) is proposed for S3PMR. The Blocking Marking (BM) associated with an SCW is defined. It is proven that a marked S3PMR is live if and only if each SCW has no BM. We use an example of a multi-processor system-on-chip to show that SCW and BM can precisely characterise the (partial) deadlocks of S3PMR. In addition, two examples are used to show the advantages of SCW in preventing deadlocks of S3PMR. These results are significant for further research on the deadlock problem.
Increasing available FIFO space to prevent messaging queue deadlocks in a DMA environment
Blocksome, Michael A [Rochester, MN; Chen, Dong [Croton On Hudson, NY; Gooding, Thomas [Rochester, MN; Heidelberger, Philip [Cortlandt Manor, NY; Parker, Jeff [Rochester, MN
2012-02-07
Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.
Xing, KeYi; Han, LiBin; Zhou, MengChu; Wang, Feng
2012-06-01
Deadlock-free control and scheduling are vital for optimizing the performance of automated manufacturing systems (AMSs) with shared resources and route flexibility. Based on the Petri net models of AMSs, this paper embeds the optimal deadlock avoidance policy into a genetic algorithm and develops a novel deadlock-free genetic scheduling algorithm for AMSs. A possible solution of the scheduling problem is coded as a chromosome representation that is a permutation with repetition of parts. By using the one-step look-ahead method in the optimal deadlock control policy, the feasibility of a chromosome is checked, and infeasible chromosomes are amended into feasible ones, which can be easily decoded into a feasible deadlock-free schedule. The chromosome representation, together with the polynomial complexity of the checking and amending procedures, strongly supports the cooperative aspect of genetic search for scheduling problems.
NASA Astrophysics Data System (ADS)
Aono, Masashi; Gunji, Yukio-Pegio
2004-08-01
How can non-algorithmic/non-deterministic computational syntax be computed? "The hyperincursive system" introduced by Dubois is an anticipatory system embracing contradiction/uncertainty. Although it may provide a novel viewpoint for the understanding of complex systems, conventional digital computers cannot, in a strict sense, run faithfully as the hyperincursive computational syntax specifies. Is it then an imaginary story? In this paper we argue that it is not. We show that a model of complex systems, "Elementary Conflictable Cellular Automata (ECCA)" proposed by Aono and Gunji, embraces hyperincursivity and nonlocality. ECCA is basically based on locality-only settings, as with other CA models, and/but at the same time each cell is required to refer to a globality-dominant regularity. Due to this contradictory locality-globality loop, the time evolution equation specifies that the system reaches a deadlock/infinite-loop. However, we show that there is a possibility of resolving these problems if the computing system has a parallel and/but non-distributed property, like an amoeboid organism. This paper is an introduction to "the slime mold computing", an attempt to cultivate an unconventional notion of computation.
Utilization Elementary Siphons of Petri Net to Solved Deadlocks in Flexible Manufacturing Systems
NASA Astrophysics Data System (ADS)
Abdul-Hussin, Mowafak Hassan
2015-07-01
This article presents a structural-analysis approach to a class of Petri nets in which elementary siphons are used to develop a deadlock control policy for flexible manufacturing systems (FMSs); the technique has been exploited successfully for the design of supervisors in several supervisory control problems. Deadlock-free operation of FMSs is a significant objective of siphon-based control in Petri nets. Structural analysis of Petri net models is efficient for the control of FMSs, though different policies can be implemented for deadlock prevention. Petri-net-based deadlock prevention for FMSs has gained considerable interest in the development of control theory and methods for the design, control, operation, and performance evaluation of a special class of Petri nets called S3PR. Both structural analysis and reachability-tree analysis are used for the analysis, simulation, and control of Petri nets. Our experimental siphon-based approach is able to resolve deadlocks occurring in Petri nets, which is illustrated with an FMS.
Performance issues for domain-oriented time-driven distributed simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1987-01-01
It has long been recognized that simulations form an interesting and important class of computations that may benefit from distributed or parallel processing. Since the point of parallel processing is improved performance, the recent proliferation of multiprocessors requires that we consider the performance issues that naturally arise when attempting to implement a distributed simulation. Three such issues are: (1) the problem of mapping the simulation onto the architecture, (2) the possibilities for performing redundant computation in order to reduce communication, and (3) the avoidance of deadlock due to distributed contention for message-buffer space. These issues are discussed in the context of a battlefield simulation implemented on a medium-scale multiprocessor message-passing architecture.
Multitasking-Pascal extensions solve concurrency problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackie, P.H.
1982-09-29
To avoid deadlock (one process waiting for a resource that another process can't release) and indefinite postponement (one process being continually denied a resource request) in a multitasking-system application, it is possible to use a high-level development language with built-in concurrency handlers. Parallel Pascal is one such language; it extends standard Pascal via special task synchronizers: a new data type called signal, new system procedures called wait and send, and a Boolean function termed awaited. To illustrate the language's use, the author examines the problems it helps solve.
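As a rough illustration of these primitives, the C sketch below approximates the signal type with a POSIX condition variable. This is an assumption-laden emulation, not Parallel Pascal's runtime; in particular, whether a send with no waiting process is remembered is a guess (here it is remembered as a ticket).

```c
/* Emulating Parallel Pascal's signal / wait / send / awaited with pthreads.
 * Compile with: cc -pthread signal_sketch.c */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    pthread_mutex_t mu;
    pthread_cond_t  cv;
    int waiters;   /* threads currently blocked in sig_wait */
    int tickets;   /* sends not yet consumed (assumed semantics) */
} Signal;

static void sig_init(Signal *s) {
    pthread_mutex_init(&s->mu, NULL);
    pthread_cond_init(&s->cv, NULL);
    s->waiters = s->tickets = 0;
}

static void sig_wait(Signal *s) {                 /* Pascal: wait(s) */
    pthread_mutex_lock(&s->mu);
    s->waiters++;
    while (s->tickets == 0) pthread_cond_wait(&s->cv, &s->mu);
    s->tickets--; s->waiters--;
    pthread_mutex_unlock(&s->mu);
}

static void sig_send(Signal *s) {                 /* Pascal: send(s) */
    pthread_mutex_lock(&s->mu);
    s->tickets++;                                 /* wake one waiter */
    pthread_cond_signal(&s->cv);
    pthread_mutex_unlock(&s->mu);
}

static bool sig_awaited(Signal *s) {              /* Pascal: awaited(s) */
    pthread_mutex_lock(&s->mu);
    bool b = s->waiters > 0;
    pthread_mutex_unlock(&s->mu);
    return b;
}

static Signal ready;

static void *worker(void *arg) {
    (void)arg;
    sig_wait(&ready);
    puts("worker released");
    return NULL;
}

int main(void) {
    sig_init(&ready);
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    while (!sig_awaited(&ready)) ;   /* crude spin until the worker blocks */
    sig_send(&ready);
    pthread_join(t, NULL);
    return 0;
}
```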
Random vs. Combinatorial Methods for Discrete Event Simulation of a Grid Computer Network
NASA Technical Reports Server (NTRS)
Kuhn, D. Richard; Kacker, Raghu; Lei, Yu
2010-01-01
This study compared random and t-way combinatorial inputs to a network simulator, to determine if these two approaches produce significantly different deadlock detection for varying network configurations. Modeling deadlock detection is important for analyzing configuration changes that could inadvertently degrade network operations, or for determining modifications that could be made by attackers to deliberately induce deadlock. Discrete event simulation of a network may be conducted using random generation of inputs. In this study, we compare random with combinatorial generation of inputs. Combinatorial (or t-way) testing requires every combination of any t parameter values to be covered by at least one test. Combinatorial methods can be highly effective because empirical data suggest that nearly all failures involve the interaction of a small number of parameters (1 to 6). Thus, for example, if all deadlocks involve at most 5-way interactions between n parameters, then exhaustive testing of all n-way interactions adds no additional information that would not be obtained by testing all 5-way interactions. While the maximum degree of interaction between parameters involved in the deadlocks clearly cannot be known in advance, covering all t-way interactions may be more efficient than using random generation of inputs. In this study we tested this hypothesis for t = 2, 3, and 4 for deadlock detection in a network simulation. Achieving the same degree of coverage provided by 4-way tests would have required approximately 3.2 times as many random tests; thus combinatorial methods were more efficient for detecting deadlocks involving a higher degree of interaction. The paper reviews explanations for these results and implications for modeling and simulation.
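The coverage measure behind the comparison is easy to make concrete. The C sketch below, with illustrative parameter counts rather than the study's network configuration model, counts how many of the C(n,2)·v·v possible 2-way value combinations a random test suite covers.

```c
/* Count 2-way combination coverage of a random test suite.
 * N parameters with V values each; TESTS random tests. Illustrative only. */
#include <stdio.h>
#include <stdlib.h>

#define N 10
#define V 4
#define TESTS 30

static int covered[N][N][V][V];   /* pair (i<j) with values (a,b) seen? */

int main(void) {
    srand(1);
    long hit = 0, total = (long)N * (N - 1) / 2 * V * V;
    for (int t = 0; t < TESTS; t++) {
        int test[N];
        for (int i = 0; i < N; i++) test[i] = rand() % V;
        for (int i = 0; i < N; i++)
            for (int j = i + 1; j < N; j++)
                if (!covered[i][j][test[i]][test[j]]++) hit++;
    }
    printf("%d random tests cover %ld of %ld 2-way combinations (%.1f%%)\n",
           TESTS, hit, total, 100.0 * hit / total);
    return 0;
}
```

A covering-array generator guarantees 100% of these combinations with far fewer tests, which is the efficiency gap the study quantifies empirically.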
Synthesizing parallel imaging applications using the CAP (computer-aided parallelization) tool
NASA Astrophysics Data System (ADS)
Gennart, Benoit A.; Mazzariol, Marc; Messerli, Vincent; Hersch, Roger D.
1997-12-01
Imaging applications such as filtering, image transforms and compression/decompression require vast amounts of computing power when applied to large data sets. These applications would potentially benefit from the use of parallel processing. However, dedicated parallel computers are expensive and their processing power per node lags behind that of the most recent commodity components. Furthermore, developing parallel applications remains a difficult task: writing and debugging the application is difficult (deadlocks), programs may not be portable from one parallel architecture to another, and performance often falls short of expectations. In order to facilitate the development of parallel applications, we propose the CAP computer-aided parallelization tool, which enables application programmers to specify at a high level of abstraction the flow of data between pipelined-parallel operations. In addition, the CAP tool supports the programmer in developing parallel imaging and storage operations. CAP enables efficiently combining parallel storage access routines and sequential image processing operations. This paper shows how processing- and I/O-intensive imaging applications must be implemented to take advantage of parallelism and pipelining between data access and processing. This paper's contribution is (1) to show how such implementations can be compactly specified in CAP, and (2) to demonstrate that CAP-specified applications achieve the performance of custom parallel code. The paper analyzes theoretically the performance of CAP-specified applications and demonstrates the accuracy of the theoretical analysis through experimental measurements.
Towards a High-Performance and Robust Implementation of MPI-IO on Top of GPFS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prost, J.P.; Treumann, R.; Blackmore, R.
2000-01-11
MPI-IO/GPFS is a prototype implementation of the I/O chapter of the Message Passing Interface (MPI) 2 standard. It uses the IBM General Parallel File System (GPFS), with prototyped extensions, as the underlying file system. This paper describes the features of this prototype which support its high performance and robustness. The use of hints at the file system level and at the MPI-IO level allows tailoring the use of the file system to the application's needs. Error handling in collective operations provides robust error reporting and deadlock prevention in case errors are returned.
The Design and Implementation of a Parallel Persistent Object System
1992-02-01
Wu, Naiqi; Zhou, MengChu
2005-12-01
An automated manufacturing system (AMS) contains a number of versatile machines (or workstations), buffers, and an automated material handling system (MHS), and is computer-controlled. An effective and flexible alternative for implementing the MHS is to use an automated guided vehicle (AGV) system. The deadlock issue in AMSs is very important to their operation and has been studied extensively. Deadlock problems have been treated separately for parts in production and in transportation, and many techniques were developed for each problem. However, such treatment does not take advantage of the flexibility offered by multiple AGVs. In general, it is intractable to obtain a maximally permissive control policy for either problem. Instead, this paper investigates the two problems in an integrated way. First, we model the AGV system and the part processing processes by resource-oriented Petri nets. Then the two models are integrated by using macro transitions. Based on the combined model, a novel control policy for deadlock avoidance is proposed. It is shown to be maximally permissive with computational complexity of O(n²), where n is the number of machines in the AMS, if the complexity of controlling the part transportation by AGVs is not considered. Thus, the complexity of deadlock avoidance for the whole system is bounded by the complexity of controlling the AGV system. An illustrative example shows its application and power.
Comparing digraph and Petri net approaches to deadlock avoidance in FMS.
Fanti, M P; Maione, B; Turchiano, B
2000-01-01
Flexible manufacturing systems (FMSs) are modern production facilities with easy adaptability to variable production plans and goals. These systems may exhibit deadlock situations occurring when a circular wait arises because each piece in a set requires a resource currently held by another job in the same set. Several authors have proposed different policies to control resource allocation in order to avoid deadlock problems. These approaches are mainly based on some formal models of manufacturing systems, such as Petri nets (PNs), directed graphs, etc. Since they describe various peculiarities of the FMS operation in a modular and systematic way, PNs are the most extensively used tool to model such systems. On the other hand, digraphs are more synthetic than PNs because their vertices are just the system resources. So, digraphs describe the interactions between jobs and resources only, while neglecting other details on the system operation. The aim of this paper is to show the tight connections between the two approaches to the deadlock problem, by proposing a unitary framework that links graph-theoretic and PN models and results. In this context, we establish a direct correspondence between the structural elements of the PN (empty siphons) and those of the digraphs (maximal-weight zero-outdegree strong components) characterizing a deadlock occurrence. The paper also shows that the avoidance policies derived from digraphs can be implemented by controlled PNs.
A nonlinear q-voter model with deadlocks on the Watts-Strogatz graph
NASA Astrophysics Data System (ADS)
Sznajd-Weron, Katarzyna; Michal Suszczynski, Karol
2014-07-01
We study the nonlinear q-voter model with deadlocks on a Watts-Strogatz graph. Using Monte Carlo simulations, we obtain the so-called exit probability and exit time. We determine how network properties, such as randomness or density of links, influence the exit properties of the model.
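For readers unfamiliar with the setup, the following C sketch estimates an exit probability by Monte Carlo for a standard q-voter update on a Watts-Strogatz graph. Parameters are illustrative, and the paper's deadlock-specific update rules are not reproduced here.

```c
/* Monte Carlo estimate of the exit probability for a q-voter update on a
 * Watts-Strogatz graph. Sketch only. */
#include <stdio.h>
#include <stdlib.h>

#define N    100      /* nodes */
#define K    4        /* even; K/2 forward links per node before rewiring */
#define BETA 0.1      /* rewiring probability */
#define Q    3        /* q-panel size (drawn with repetition) */
#define RUNS 200

static int adj[N][N], deg[N], nbr[N][N], spin[N];

static double frand(void) { return (double)rand() / RAND_MAX; }

static void watts_strogatz(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) adj[i][j] = 0;
    for (int i = 0; i < N; i++)
        for (int s = 1; s <= K / 2; s++) {
            int j = (i + s) % N;
            if (frand() < BETA)                       /* rewire this edge */
                do { j = rand() % N; } while (j == i || adj[i][j]);
            adj[i][j] = adj[j][i] = 1;
        }
    for (int i = 0; i < N; i++) {
        deg[i] = 0;
        for (int j = 0; j < N; j++) if (adj[i][j]) nbr[i][deg[i]++] = j;
    }
}

static int run_to_consensus(double p_up) {
    int up = 0;
    for (int i = 0; i < N; i++) up += spin[i] = frand() < p_up;
    /* step cap guards the rare case of a disconnected rewired graph */
    for (long s = 0; up != 0 && up != N && s < 50000000L; s++) {
        int i = rand() % N;
        int first = spin[nbr[i][rand() % deg[i]]], agree = 1;
        for (int k = 1; k < Q; k++)                   /* unanimous q-panel? */
            if (spin[nbr[i][rand() % deg[i]]] != first) { agree = 0; break; }
        if (agree && spin[i] != first) { spin[i] = first; up += first ? 1 : -1; }
    }
    return up == N;
}

int main(void) {
    srand(7);
    int wins = 0;
    for (int r = 0; r < RUNS; r++) { watts_strogatz(); wins += run_to_consensus(0.5); }
    printf("estimated exit probability: %.3f\n", (double)wins / RUNS);
    return 0;
}
```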
Deadlock Detection in Computer Networks
1977-09-01
Structural insights into Rhino-Deadlock complex for germline piRNA cluster specification.
Yu, Bowen; Lin, Yu An; Parhad, Swapnil S; Jin, Zhaohui; Ma, Jinbiao; Theurkauf, William E; Zhang, Zz Zhao; Huang, Ying
2018-06-01
PIWI-interacting RNAs (piRNAs) silence transposons in germ cells to maintain genome stability and animal fertility. Rhino, a rapidly evolving heterochromatin protein 1 (HP1) family protein, binds Deadlock in a species-specific manner and so defines the piRNA-producing loci in the Drosophila genome. Here, we determine the crystal structures of the Rhino-Deadlock complex in Drosophila melanogaster and simulans. In both species, one Rhino binds the N-terminal helix-hairpin-helix motif of one Deadlock protein through a novel interface formed by the beta-sheet in the Rhino chromoshadow domain. Disrupting the interface leads to infertility and transposon hyperactivation in flies. Our structural and functional experiments indicate that electrostatic repulsion at the interaction interface causes cross-species incompatibility between the sibling species. By determining the molecular architecture of this piRNA-producing machinery, we discover a novel HP1-partner interacting mode that is crucial to piRNA biogenesis and transposon silencing. We thus explain the cross-species incompatibility of two sibling species at the molecular level. © 2018 The Authors.
Performance analysis of static locking in replicated distributed database systems
NASA Technical Reports Server (NTRS)
Kuang, Yinghong; Mukkamala, Ravi
1991-01-01
Data replication and transaction deadlocks can severely affect the performance of distributed database systems. Many current evaluation techniques ignore these aspects, because they are difficult to evaluate through analysis and time consuming to evaluate through simulation. A technique is used that combines simulation and analysis to closely illustrate the impact of deadlock and to evaluate the performance of replicated distributed databases with both shared and exclusive locks.
MPI Runtime Error Detection with MUST: Advances in Deadlock Detection
Hilbrich, Tobias; Protze, Joachim; Schulz, Martin; ...
2013-01-01
The widely used Message Passing Interface (MPI) is complex and rich. As a result, application developers require automated tools to avoid and to detect MPI programming errors. We present the Marmot Umpire Scalable Tool (MUST) that detects such errors with significantly increased scalability. We present improvements to our graph-based deadlock detection approach for MPI, which cover future MPI extensions. Our enhancements also check complex MPI constructs that no previous graph-based detection approach handled correctly. Finally, we present optimizations for the processing of MPI operations that reduce runtime deadlock detection overheads. Existing approaches often require O(p) analysis time per MPI operation, for p processes. We empirically observe that our improvements lead to sub-linear or better analysis time per operation for a wide range of real world applications.
Performance analysis of static locking in replicated distributed database systems
NASA Technical Reports Server (NTRS)
Kuang, Yinghong; Mukkamala, Ravi
1991-01-01
Data replication and transaction deadlocks can severely affect the performance of distributed database systems. Many current evaluation techniques ignore these aspects, because they are difficult to evaluate through analysis and time consuming to evaluate through simulation. Here, a technique is discussed that combines simulation and analysis to closely illustrate the impact of deadlock and to evaluate the performance of replicated distributed databases with both shared and exclusive locks.
NASA Astrophysics Data System (ADS)
Zhao, Mi; Hou, Yifan; Liu, Ding
2010-10-01
In this article we deal with deadlock prevention problems for S4PR, a class of generalised Petri nets which can model well a large class of flexible manufacturing systems in which deadlocks are caused by insufficiently marked siphons. We present a deadlock prevention methodology that is an iterative approach consisting of two stages. The first one is called siphon control, which adds, for each insufficiently marked minimal siphon, a control place to the original net. Its objective is to prevent a minimal siphon from being insufficiently marked. The second one, called control-induced siphon control, adds a control place to the augmented net with its output arcs connecting to the source transitions, which ensures that no new insufficiently marked siphons are generated. At each iteration, a mixed integer programming approach is adopted for generalised Petri nets to obtain an insufficiently marked minimal siphon from the maximal deadly siphon. In this way, complete siphon enumeration, which is much more time-consuming for a sizeable plant model than the proposed method, is avoided. The relation between the proposed method and the liveness and reversibility of the controlled net is established. Examples are presented to demonstrate the presented method.
A Petri Net Approach Based Elementary Siphons Supervisor for Flexible Manufacturing Systems
NASA Astrophysics Data System (ADS)
Abdul-Hussin, Mowafak Hassan
2015-05-01
This paper presents an approach to constructing a class of S3PR nets for modeling, simulation, and control of processes occurring in flexible manufacturing systems (FMSs), based on the elementary siphons of a Petri net. Siphons are very important to the analysis and control of deadlocks in FMSs, and deadlock-free operation is a significant objective of siphon-based control. Petri net models are efficient for the structural analysis and utilization of FMSs, where different policies can be implemented to achieve deadlock prevention. We present an effective deadlock-free policy for a special class of Petri nets called S3PR. Both structural analysis and reachability-graph analysis are used for the analysis and control of Petri nets. Petri nets have been used successfully as one of the most powerful tools for modelling FMSs. Using structural analysis, we show that liveness of such systems can be attributed to the absence of undermarked siphons.
Material Implementation of Hyperincursive Field on Slime Mold Computer
NASA Astrophysics Data System (ADS)
Aono, Masashi; Gunji, Yukio-Pegio
2004-08-01
"Elementary Conflictable Cellular Automaton (ECCA)" was introduced by Aono and Gunji as a problematic computational syntax embracing the non-deterministic/non-algorithmic property due to its hyperincursivity and nonlocality. Although ECCA's hyperincursive evolution equation indicates the occurrence of the deadlock/infinite-loop, we do not consider that this problem declares the fundamental impossibility of implementing ECCA materially. Dubois proposed to call a computing system where uncertainty/contradiction occurs "the hyperincursive field". In this paper we introduce a material implementation of the hyperincursive field by using plasmodia of the true slime mold Physarum polycephalum. The amoeboid organism is adopted as a computing media of ECCA slime mold computer (ECCA-SMC) mainly because; it is a parallel non-distributed system whose locally branched tips (components) can act in parallel with asynchronism and nonlocal correlation. A notable characteristic of ECCA-SMC is that a cell representing a spatio-temporal segment of computation is occupied (overlapped) redundantly by multiple spatially adjacent computing operations and by temporally successive computing events. The overlapped time representation may contribute to the progression of discussions on unconventional notions of the time.
Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas
2016-04-01
Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes through systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations which take at most several hours to analyze a common input on a modern desktop station; however, due to multiple invocations for a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods as well as a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, a new computer software mpiWrapper has been developed to accommodate non-parallel implementations of scientific algorithms within the parallel supercomputing environment. The Message Passing Interface has been implemented to exchange information between nodes. Two specialized threads - one for task management and communication, and another for subtask execution - are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. The mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source codes and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper .
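The two-thread pattern described above can be sketched compactly. In the C example below (a minimal sketch under assumed semantics, not mpiWrapper's actual code; task encoding and tags are invented), one thread speaks MPI with a dispatcher while the other executes the subtask, so the process is never blocked inside an MPI call while a subtask runs. For brevity, the manager here waits for subtask completion before requesting more work; the real scheduling is more elaborate.

```c
/* One management thread (main) blocks in MPI; a second thread runs subtasks.
 * Compile with: mpicc -pthread two_threads.c && mpirun -np 4 ./a.out */
#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

#define TAG_TASK 1

static int current_task = -1;                /* -1: idle, 0: shutdown */
static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;

static void *executor(void *arg) {           /* subtask-execution thread */
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&mu);
        while (current_task < 0) pthread_cond_wait(&cv, &mu);
        int task = current_task;
        pthread_mutex_unlock(&mu);
        if (task == 0) return NULL;
        printf("executing subtask %d\n", task);   /* e.g. fork/exec a tool */
        pthread_mutex_lock(&mu);
        current_task = -1;                   /* report completion */
        pthread_cond_signal(&cv);
        pthread_mutex_unlock(&mu);
    }
}

int main(int argc, char **argv) {
    int provided, rank, size;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank == 0) {                         /* dispatcher rank */
        for (int task = 1; task <= 3; task++)
            for (int r = 1; r < size; r++)
                MPI_Send(&task, 1, MPI_INT, r, TAG_TASK, MPI_COMM_WORLD);
        int stop = 0;
        for (int r = 1; r < size; r++)
            MPI_Send(&stop, 1, MPI_INT, r, TAG_TASK, MPI_COMM_WORLD);
    } else {                                 /* management thread */
        pthread_t th;
        pthread_create(&th, NULL, executor, NULL);
        for (;;) {
            int task;
            MPI_Recv(&task, 1, MPI_INT, 0, TAG_TASK, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            pthread_mutex_lock(&mu);
            current_task = task;
            pthread_cond_signal(&cv);
            while (task != 0 && current_task != -1)
                pthread_cond_wait(&cv, &mu);
            pthread_mutex_unlock(&mu);
            if (task == 0) break;
        }
        pthread_join(th, NULL);
    }
    MPI_Finalize();
    return 0;
}
```

Only the main thread makes MPI calls here, which is why MPI_THREAD_FUNNELED suffices instead of the costlier MPI_THREAD_MULTIPLE.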
Gauthier, J-M; Widart, F
2008-09-01
Our investigation into dream and delirium in schizophrenic subjects was based on the notion of magical thought developed by Sami-Ali. Starting from this notion, we attempted to determine whether they somehow differentiate the psychic space of dream from that of delirium, whether either of these two spaces, or both, are caught in a relational deadlock, and eventually to analyse the quality of the relationship to others. The underlying assumption is that magical thought is foregrounded in the psychic life of schizophrenic subjects, and that these subjects do not distinguish between the psychic spaces of dream and delirium, nor between the world, others, and themselves. Results show first that the prevalence of magical thought has the following consequences: (a) features characterizing space are those of an imaginary space, i.e. internal and external realities are blurred, what is outside is reflected inside, and vice versa, the subject-object distinction is cancelled to leave one single reality that ignores contradiction; (b) the time of discourse is an imaginary time: their discourses express past and future as belonging to an absolute "perpetual" present. Events they mention are experienced as contemporary. Causal relations, being imaginary, can be reversed. However, some socially sanctioned landmarks in time are often maintained. These are rarely related to any event in their own emotional lives. Second, our results provide evidence for some permeability between the space of dream and the space of delirium. Yet, this permeability can vary from one subject to another. Third, they show that relational deadlocks recur regularly, though not systematically, in the lives of our subjects. Relational deadlocks in dreams are not easy to detect. Finally, the kind of relationship those people have to others is quite paradoxical: physical closeness results in emotional distance, and conversely emotional closeness is only possible in physical distance. The others cannot really exist in the relationship; they are a horizon towards which those people yearn without ever being able to reach it. This kind of paradox, of relational deadlock, may be the ground on which psychosis thrives: it is impossible to be close to the nonself, while it is also deeply wished for. Even the logic of imaginary space - in which everything should be possible - cannot help overcome this paradox. And - a paradox within the paradox - this logic still comes upon a dead end in the potentially infinite and indefinite space it produces. This boundary could be the deadlock that cannot and yet must be overcome, a deadlock that would endlessly fuel the patients' delirium.
Heterogeneous Concurrent Modeling and Design in Java (Volume 3: Ptolemy II Domains)
2008-04-15
Table-of-contents fragments (deadlock-related entries): Starting the model; 6.5.3 Atomic Communication in Concurrent Execution; 6.5.4 Detecting Deadlocks; 6.6 Application to Resource Management; 6.6.1 Resource Management Demo; 6.6.2 ResourcePool; 6.7 Threads in an Actor; 6.7.1 Creating Extra Threads in an Actor; 6.7.2 Manually Blocking...; 8.4.1 Local Time Management; 8.4.2 Detecting Deadlock; 8.4.3 Ending Execution; 8.5 Example DDE Applications; 9 PN Domain.
Building a generalized distributed system model
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi; Foudriat, E. C.
1991-01-01
A modeling tool for both analysis and design of distributed systems is discussed. Since many research institutions have access to networks of workstations, the researchers decided to build a tool running on top of the workstations to function as a prototype as well as a distributed simulator for a computing system. The effects of system modeling on performance prediction in distributed systems and the effect of static locking and deadlocks on the performance predictions of distributed transactions are also discussed. While the probability of deadlock is considerably small, its effects on performance could be significant.
Recursive solution of number of reachable states of a simple subclass of FMS
NASA Astrophysics Data System (ADS)
Chao, Daniel Yuh
2014-03-01
This paper aims to compute the number of reachable (forbidden, live and deadlock) states for flexible manufacturing systems (FMS) without the construction of reachability graph. The problem is nontrivial and takes, in general, an exponential amount of time to solve. Hence, this paper focusses on a simple version of Systems of Simple Sequential Processes with Resources (S3PR), called kth-order system, where each resource place holds one token to be shared between two processes. The exact number of reachable (forbidden, live and deadlock) states can be computed recursively.
Leung, Vitus J [Albuquerque, NM; Phillips, Cynthia A [Albuquerque, NM; Bender, Michael A [East Northport, NY; Bunde, David P [Urbana, IL
2009-07-21
In a multiple processor computing apparatus, directional routing restrictions and a logical channel construct permit fault tolerant, deadlock-free routing. Processor allocation can be performed by creating a linear ordering of the processors based on routing rules used for routing communications between the processors. The linear ordering can assume a loop configuration, and bin-packing is applied to this loop configuration. The interconnection of the processors can be conceptualized as a generally rectangular 3-dimensional grid, and the MC allocation algorithm is applied with respect to the 3-dimensional grid.
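The patent's MC allocation algorithm is not reproduced in the abstract; purely as a flavor of the bin-packing idea, the C sketch below allocates jobs first-fit onto contiguous runs of a circular linear ordering of processors. The names and the first-fit choice are assumptions, not the patented method.

```c
/* First-fit allocation of jobs onto a circular processor ordering (sketch). */
#include <stdio.h>
#include <stdbool.h>

#define NPROC 16

static bool busy[NPROC];

/* Return the start index of a free run of `want` processors on the loop,
 * marking it busy, or -1 if no such run exists. */
static int alloc_run(int want) {
    for (int start = 0; start < NPROC; start++) {
        int len = 0;
        while (len < want && !busy[(start + len) % NPROC]) len++;
        if (len == want) {
            for (int k = 0; k < want; k++) busy[(start + k) % NPROC] = true;
            return start;
        }
    }
    return -1;
}

int main(void) {
    int jobs[] = {5, 3, 6, 4};   /* the last job no longer fits */
    for (int j = 0; j < 4; j++)
        printf("job of %d procs -> start %d\n", jobs[j], alloc_run(jobs[j]));
    return 0;
}
```

Keeping each job on a contiguous arc of the ordering is what lets the directional routing restrictions stay deadlock-free, per the abstract.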
NASA Astrophysics Data System (ADS)
Chao, Daniel Yuh; Yu, Tsung Hsien
2016-01-01
Due to the state explosion problem, it has been unimaginable to enumerate reachable states for Petri nets. Chao broke the barrier earlier by developing the very first closed-form solution of the number of reachable and other states for marked graphs and the kth order system. Instead of using the first-met bad marking, we propose 'the moment to launch resource allocation' (MLR) as a partial deadlock avoidance policy for a large, real-time dynamic resource allocation system. Presently, we can use the future deadlock ratio of the current state as the indicator of MLR, since the ratio can be obtained in real time from a closed-form formula. This paper progresses the application of the MLR concept one step further to Gen-Left kth order systems (one non-sharing resource place in any position of the left-side process), which is also the most fundamental asymmetric net structure, by constructing the system's closed-form solution of the control-related states (reachable, forbidden, live and deadlock states) with a formula depending on the parameter k and the location of the non-sharing resource. Here, we kick off a new era of real-time, dynamic resource allocation decisions by constructing a generalisation formula for kth order systems (Gen-Left) with r* on the left side but at arbitrary locations.
Monitor design with multiple self-loops for maximally permissive supervisors.
Chen, YuFeng; Li, ZhiWu; Barkaoui, Kamel; Uzam, Murat
2016-03-01
In this paper, we improve the previous work by considering that a control place can have multiple self-loops. Then, two integer linear programming problems (ILPPs) are formulated. Based on the first ILPP, an iterative deadlock control policy is developed, where a control place is computed at each iteration to implement as many marking/transition separation instances (MTSIs) as possible. The second ILPP can find a set of control places to implement all MTSIs and the objective function is used to minimize the number of control places. It is a non-iterative deadlock control strategy since we need to solve the ILPP only once. Both ILPPs can make all legal markings reachable in the controlled system, i.e., the obtained supervisor is behaviorally optimal. Finally, we provide examples to illustrate the proposed approaches. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Beyond Assertiveness Training: A Problem-Solving Approach.
ERIC Educational Resources Information Center
Scott, Nancy A.
1979-01-01
Assertiveness training models show shortcomings in those situations where assertiveness results in stalemates or conflicts, or both. Deadlocks may occur when antagonists demonstrate appropriate assertive behavior. Conflict management using problem-solving skills allows individuals to learn appropriate methods of dealing with conflictual or…
A verification strategy for web services composition using enhanced stacked automata model.
Nagamouttou, Danapaquiame; Egambaram, Ilavarasan; Krishnan, Muthumanickam; Narasingam, Poonkuzhali
2015-01-01
Currently, Service-Oriented Architecture (SOA) is becoming the most popular software architecture for contemporary enterprise applications, and one crucial technique for its implementation is web services. An individual service offered by some service provider may represent limited business functionality; however, by composing individual services from different service providers, a composite service describing the intact business process of an enterprise can be made. Many standards have been defined to address the web service composition problem, notably the Business Process Execution Language (BPEL). BPEL provides initial groundwork for an Extensible Markup Language (XML) specification language for defining and implementing business practice workflows for web services. The problem with most realistic approaches to service composition is the verification of the composed web services: one has to depend on formal verification methods to ensure the correctness of composed services. A few research works have been carried out in the literature on the verification of web services for deterministic systems. Moreover, the existing models did not address verification properties like dead transitions, deadlock, reachability and safety. In this paper, a new model to verify composed web services using an Enhanced Stacked Automata Model (ESAM) has been proposed. The correctness properties of the non-deterministic system have been evaluated based on properties like dead transitions, deadlock, safety, liveness and reachability. Initially, web services are composed using the Business Process Execution Language for Web Services (BPEL4WS), converted into ESAM (a combination of Muller Automata (MA) and Push Down Automata (PDA)), and then transformed into Promela, the input language of the Simple ProMeLa Interpreter (SPIN) tool. The model is verified using the SPIN tool, and the results revealed better performance in finding dead transitions and deadlocks in contrast to the existing models.
School Consolidation Efforts in Mississippi
ERIC Educational Resources Information Center
Peters, Gary B.; Freeman, David
2007-01-01
Mississippi lawmakers have struggled with budgetary matters as turf wars intensified between political parties. The somber mood of legislators permeated the state. Hostile environments in both chambers had driven legislators to uncompromising positions, which created deadlock in the state capital. School consolidation, which not long ago was…
Eyre, Catherine; Muftah, Wafa; Hiscox, Jennifer; Hunt, Julie; Kille, Peter; Boddy, Lynne; Rogers, Hilary J
2010-08-01
Trametes versicolor is an important white rot fungus of both industrial and ecological interest. Saprotrophic basidiomycetes are the major decomposition agents in woodland ecosystems, and rarely form monospecific populations, therefore interspecific mycelial interactions continually occur. Interactions have different outcomes including replacement of one species by the other or deadlock. We have made subtractive cDNA libraries to enrich for genes that are expressed when T. versicolor interacts with another saprotrophic basidiomycete, Stereum gausapatum, an interaction that results in the replacement of the latter. Expressed sequence tags (ESTs) (1920) were used for microarray analysis, and their expression compared during interaction with three different fungi: S. gausapatum (replaced by T. versicolor), Bjerkandera adusta (deadlock) and Hypholoma fasciculare (replaced T. versicolor). Expression of significantly more probes changed in the interaction between T. versicolor and S. gausapatum or B. adusta compared to H. fasciculare, suggesting a relationship between interaction outcome and changes in gene expression. Copyright © 2010 The British Mycological Society. Published by Elsevier Ltd. All rights reserved.
A Computer Simulation of the System-Wide Effects of Parallel-Offset Route Maneuvers
NASA Technical Reports Server (NTRS)
Lauderdale, Todd A.; Santiago, Confesor; Pankok, Carl
2010-01-01
Most aircraft managed by air-traffic controllers in the National Airspace System are capable of flying parallel-offset routes. This paper presents the results of two related studies on the effects of increased use of offset routes as a conflict resolution maneuver. The first study analyzes offset routes in the context of all standard resolution types which air-traffic controllers currently use. This study shows that by utilizing parallel-offset route maneuvers, significant system-wide savings in delay due to conflict resolution of up to 30% are possible. It also shows that most offset resolutions replace horizontal-vectoring resolutions. The second study builds on the results of the first and directly compares offset resolutions and standard horizontal-vectoring maneuvers to determine that in-trail conflicts are often more efficiently resolved by offset maneuvers.
26 CFR 1.957-1 - Definition of controlled foreign corporation.
Code of Federal Regulations, 2011 CFR
2011-04-01
... of that body of persons exercising, with respect to such corporation, the powers ordinarily exercised...-half of the members of such governing body of such foreign corporation, either to cast a vote deciding an evenly divided vote of such body or, for the duration of any deadlock which may arise, to exercise...
26 CFR 1.957-1 - Definition of controlled foreign corporation.
Code of Federal Regulations, 2014 CFR
2014-04-01
... of that body of persons exercising, with respect to such corporation, the powers ordinarily exercised...-half of the members of such governing body of such foreign corporation, either to cast a vote deciding an evenly divided vote of such body or, for the duration of any deadlock which may arise, to exercise...
26 CFR 1.957-1 - Definition of controlled foreign corporation.
Code of Federal Regulations, 2013 CFR
2013-04-01
... of that body of persons exercising, with respect to such corporation, the powers ordinarily exercised...-half of the members of such governing body of such foreign corporation, either to cast a vote deciding an evenly divided vote of such body or, for the duration of any deadlock which may arise, to exercise...
26 CFR 1.957-1 - Definition of controlled foreign corporation.
Code of Federal Regulations, 2012 CFR
2012-04-01
... of that body of persons exercising, with respect to such corporation, the powers ordinarily exercised...-half of the members of such governing body of such foreign corporation, either to cast a vote deciding an evenly divided vote of such body or, for the duration of any deadlock which may arise, to exercise...
A Reflection on Educating College Students about the Value of Public-Sector Unions
ERIC Educational Resources Information Center
Mooney, Christine; Volchok, Edward
2016-01-01
This year, labor unions got a reprieve: The Supreme Court deadlocked in a much-anticipated case that could have turned almost every state into Wisconsin, where partisan interests have crippled union power. The case, "Friedrichs vs. California Teachers Association," addressed a previous case, "Abood v. Detroit Board of…
Questioning the Scholarly Discussion around Decentralization in Turkish Education System
ERIC Educational Resources Information Center
Yildiz, Soner Onder
2016-01-01
From the beginning of Turkish Republic till date, Turkish Education System (TES) has been steered by a handful of politicians and civil servants, who enjoy maximum centralized authority. Over the years, therefore, centralized management has repeatedly been blamed for the deadlocks hampering progress in the TES. Turkish scholars often seem to find…
Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Torres, Aaron
2015-10-20
Techniques are provided for storing files in a parallel computing system using different resolutions. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a sub-file. The method comprises the steps of obtaining semantic information related to the file; generating a plurality of replicas of the file with different resolutions based on the semantic information; and storing the file and the plurality of replicas of the file in one or more storage nodes of the parallel computing system. The different resolutions comprise, for example, a variable number of bits and/or a different sub-set of data elements from the file. A plurality of the sub-files can be merged to reproduce the file.
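As a flavor of the replica idea only (the patent's semantic-information plumbing and sub-file handling are omitted, and all names are invented), the C sketch below derives lower-resolution replicas by keeping every k-th data element of a file's contents.

```c
/* Derive reduced-resolution replicas by subsampling data elements (sketch). */
#include <stdio.h>
#include <stdlib.h>

/* Keep every `stride`-th element of `data`; the caller owns the result. */
static double *make_replica(const double *data, size_t n, size_t stride,
                            size_t *out_n) {
    *out_n = (n + stride - 1) / stride;
    double *r = malloc(*out_n * sizeof *r);
    for (size_t i = 0; i < *out_n; i++)
        r[i] = data[i * stride];
    return r;
}

int main(void) {
    double data[16];
    for (int i = 0; i < 16; i++) data[i] = i * 0.5;   /* stand-in file data */
    size_t n2, n4;
    double *half    = make_replica(data, 16, 2, &n2);  /* 1/2 resolution */
    double *quarter = make_replica(data, 16, 4, &n4);  /* 1/4 resolution */
    printf("replicas hold %zu and %zu of 16 elements\n", n2, n4);
    free(half); free(quarter);
    return 0;
}
```

Storing the full file alongside such replicas lets an analysis job read only the resolution it needs, which is the trade-off the patent describes.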
NASA Astrophysics Data System (ADS)
Choi, Yong Nam; Kim, Shin Ae; Kim, Sung Kyu; Kim, Sung Baek; Lee, Chang-Hee; Mikula, Pavel
2004-07-01
In a conventional diffractometer having a single monochromator, only one position, the parallel position, is used for the diffraction experiment (i.e. detection) because the resolution property of the other one, the anti-parallel position, is very poor. However, a bent perfect crystal (BPC) monochromator at the monochromatic focusing condition can provide a quite flat and equal resolution property at both parallel and anti-parallel positions, and thus one has the chance to use both sides for the diffraction experiment. From the FWHM and Δd/d data measured in three diffraction geometries (symmetric, asymmetric compression and asymmetric expansion), we can conclude that simultaneous diffraction measurement at both parallel and anti-parallel positions can be achieved.
Parallel adaptive wavelet collocation method for PDEs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nejadmalayeri, Alireza, E-mail: Alireza.Nejadmalayeri@gmail.com; Vezolainen, Alexei, E-mail: Alexei.Vezolainen@Colorado.edu; Brown-Dymkoski, Eric, E-mail: Eric.Browndymkoski@Colorado.edu
2015-10-01
A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.
Demi, Libertario; Viti, Jacopo; Kusters, Lieneke; Guidi, Francesco; Tortoli, Piero; Mischi, Massimo
2013-11-01
The speed of sound in the human body limits the achievable data acquisition rate of pulsed ultrasound scanners. To overcome this limitation, parallel beamforming techniques are used in ultrasound 2-D and 3-D imaging systems. Different parallel beamforming approaches have been proposed. They may be grouped into two major categories: parallel beamforming in reception and parallel beamforming in transmission. The first category is not optimal for harmonic imaging; the second category may be more easily applied to harmonic imaging. However, inter-beam interference represents an issue. To overcome these shortcomings and exploit the benefit of combining harmonic imaging and high data acquisition rate, a new approach has been recently presented which relies on orthogonal frequency division multiplexing (OFDM) to perform parallel beamforming in transmission. In this paper, parallel transmit beamforming using OFDM is implemented for the first time on an ultrasound scanner. An advanced open platform for ultrasound research is used to investigate the axial resolution and interbeam interference achievable with parallel transmit beamforming using OFDM. Both fundamental and second-harmonic imaging modalities have been considered. Results show that, for fundamental imaging, axial resolution in the order of 2 mm can be achieved in combination with interbeam interference in the order of -30 dB. For second-harmonic imaging, axial resolution in the order of 1 mm can be achieved in combination with interbeam interference in the order of -35 dB.
OpenMP parallelization of a gridded SWAT (SWATG)
NASA Astrophysics Data System (ADS)
Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin
2017-12-01
Large-scale, long-term and high spatial resolution simulation is a common issue in environmental modeling. A Gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG) that integrates a grid modeling scheme with different spatial representations also faces this problem: long run times hinder applications of very high resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling at the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling over a roughly 2000 km² watershed with one CPU in a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.
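The paper's parallelization lives inside SWATG itself (OpenMP over the HRU loop); the Python sketch below only mimics that pattern with a process pool over independent per-HRU updates. simulate_hru and its toy water balance are placeholders, not SWAT code:

```python
# Standalone sketch of HRU-level parallelism: the per-HRU updates within
# one time step are independent, so they can be farmed out to workers
# (the paper does this with OpenMP threads inside the model itself).
from multiprocessing import Pool

def simulate_hru(hru):
    """Placeholder daily water balance for one gridded HRU."""
    hru_id, precip, storage = hru
    runoff = max(0.0, precip - 0.2 * storage)   # toy curve-number-like rule
    return hru_id, storage + precip - runoff, runoff

if __name__ == "__main__":
    hrus = [(i, 10.0 + i % 7, 50.0) for i in range(100_000)]
    with Pool(processes=16) as pool:            # analogous to ~15 threads
        results = pool.map(simulate_hru, hrus, chunksize=4096)
    print(len(results), "HRUs updated this time step")
```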
Research on Automatic Programming
1975-12-31
Sequential processes, deadlocks, and semaphore primitives, Ph.D. Thesis, Harvard University, November 1974; Center for Research in Computing...verified. Code generated to effect the synchronization makes use of the ECL control extension facility (Prenner's CI, see [Prenner]). The... semaphore operations [Dijkstra] is being developed. Initial results for this code generator are very encouraging; in many cases generated code is
Overcoming the current deadlock in antibiotic research.
Schäberle, Till F; Hack, Ingrid M
2014-04-01
Antibiotic-resistant bacteria are on the rise, making it harder to treat bacterial infections. The situation is aggravated by the shrinking of the antibiotic development pipeline. To finance urgently needed incentives for antibiotic research, creative financing solutions are needed. Public-private partnerships (PPPs) are a successful model for moving forward. Copyright © 2013 Elsevier Ltd. All rights reserved.
Somalia’s Endless Transition: Breaking the Deadlock (Strategic Forum, Number 257, June 2010)
2010-06-01
the TFG is not to follow the example of its predecessor, the Transitional National Government, in a slow slide to oblivion, this requires the use...Somalia: "The Trouble with Puntland," Africa Briefing, no. 64 (August 12, 2009), and Human Rights Watch, "Hostages to Peace: Threats to Human Rights and
On Robustness of Deadlock Detection Algorithms for Distributed Computing Systems.
1982-02-01
terms: make it much more difficult to detect, avoid or prevent than in the earlier multiprogramming centralized computing systems. Deadlock preven...failure of site C would not have been critical after the B had been sent. The effect of a type c site (site _ in our example) failing would have no
Submicron Systems Architecture Project
1981-11-01
This project is concerned with the architecture, design, and testing of VLSI systems. The principal activities in this report period include: The Tree Machine; COPE, The Homogeneous Machine; Computational Arrays; Switch-Level Model for MOS Logic Design; Testing; Local Network and Designer Workstations; Self-timed Systems; Characterization of Deadlock-Free Resource Contention; Concurrency Algebra; Language Design and Logic for Program Verification.
Closest Presidential Race Ever...Or Is It?
ERIC Educational Resources Information Center
Constitutional Rights Foundation, Los Angeles, CA.
All evening on election night 2000, candidates George W. Bush and Al Gore were deadlocked in the tightest-ever race for the office of President of the United States. As the numbers were reported from each state, the battle for votes in the electoral college swung back and forth from Republicans to Democrats. The next morning, the issue was still…
Analysis of DIRAC's behavior using model checking with process algebra
NASA Astrophysics Data System (ADS)
Remenska, Daniela; Templon, Jeff; Willemse, Tim; Bal, Henri; Verstoep, Kees; Fokkink, Wan; Charpentier, Philippe; Graciani Diaz, Ricardo; Lanciotti, Elisa; Roiser, Stefan; Ciba, Krzysztof
2012-12-01
DIRAC is the grid solution developed to support LHCb production activities as well as user data analysis. It consists of distributed services and agents delivering the workload to the grid resources. Services maintain database back-ends to store dynamic state information of entities such as jobs, queues, staging requests, etc. Agents use polling to check and possibly react to changes in the system state. Each agent's logic is relatively simple; the main complexity lies in their cooperation. Agents run concurrently, and collaborate using the databases as shared memory. The databases can be accessed directly by the agents if running locally, or through a DIRAC service interface if necessary. This shared-memory model causes entities to occasionally get into inconsistent states. Tracing and fixing such problems becomes formidable due to the inherent parallelism present. We propose more rigorous methods to cope with this. Model checking is one such technique for analysis of an abstract model of a system. Unlike conventional testing, it allows full control over the execution of the parallel processes, and supports exhaustive state-space exploration. We used the mCRL2 language and toolset to model the behavior of two related DIRAC subsystems: the workload and storage management system. Based on process algebra, mCRL2 allows defining custom data types as well as functions over them. This makes it suitable for modeling the data manipulations made by DIRAC's agents. By visualizing the state space and replaying scenarios with the toolkit's simulator, we have detected race conditions and deadlocks in these systems, which, in several cases, were confirmed to occur in reality. Several properties of interest were formulated and verified with the tool. Our future direction is automating the translation from DIRAC to a formal model.
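The contrast with testing can be made concrete with a toy exhaustive search. The sketch below (an invented two-agent lock example, not DIRAC or mCRL2 code) enumerates every interleaving and reports states from which no progress is possible:

```python
# Toy exhaustive state-space exploration: two agents acquire two shared
# locks in opposite order; a breadth-first search over all interleavings
# finds the deadlocked state that testing could easily miss.
from collections import deque

ORDER = {0: ("a", "b"), 1: ("b", "a")}   # opposite acquisition orders

def moves(holders, pcs):
    for agent, pc in enumerate(pcs):
        if pc < 2:                                    # acquire next lock
            lock = ORDER[agent][pc]
            if lock not in holders:
                h = dict(holders); h[lock] = agent
                p = list(pcs); p[agent] += 1
                yield (tuple(sorted(h.items())), tuple(p))
        elif pc == 2:                                 # done: release both
            h = {k: v for k, v in holders.items() if v != agent}
            p = list(pcs); p[agent] = 3
            yield (tuple(sorted(h.items())), tuple(p))

init = ((), (0, 0))
seen, queue, deadlocks = {init}, deque([init]), []
while queue:
    holders, pcs = state = queue.popleft()
    succ = list(moves(dict(holders), pcs))
    if not succ and pcs != (3, 3):                    # stuck before finishing
        deadlocks.append(state)
    for s in succ:
        if s not in seen:
            seen.add(s); queue.append(s)

print(deadlocks)   # the classic state: each agent holds one lock
```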
Parallel Spectral Acquisition with an Ion Cyclotron Resonance Cell Array.
Park, Sung-Gun; Anderson, Gordon A; Navare, Arti T; Bruce, James E
2016-01-19
Mass measurement accuracy is a critical analytical figure-of-merit in most areas of mass spectrometry application. However, the time required for acquisition of high-resolution, high mass accuracy data limits many applications and is an aspect under continual pressure for development. Current efforts target implementation of higher electrostatic and magnetic fields because ion oscillatory frequencies increase linearly with field strength. As such, the time required for spectral acquisition of a given resolving power and mass accuracy decreases linearly with increasing fields. Mass spectrometer developments to include multiple high-resolution detectors that can be operated in parallel could further decrease the acquisition time by a factor of n, the number of detectors. Efforts described here resulted in development of an instrument with a set of Fourier transform ion cyclotron resonance (ICR) cells as detectors that constitute the first MS array capable of parallel high-resolution spectral acquisition. ICR cell array systems consisting of three or five cells were constructed with printed circuit boards and installed within a single superconducting magnet and vacuum system. Independent ion populations were injected and trapped within each cell in the array. Upon filling the array, all ions in all cells were simultaneously excited and ICR signals from each cell were independently amplified and recorded in parallel. Presented here are the initial results of successful parallel spectral acquisition, parallel mass spectrometry (MS) and MS/MS measurements, and parallel high-resolution acquisition with the MS array system.
2010-10-14
High-Resolution Functional Mapping of the Venezuelan Equine Encephalitis Virus Genome by Insertional Mutagenesis and Massively Parallel Sequencing...Venezuelan equine encephalitis virus (VEEV) genome. We initially used a capillary electrophoresis method to gain insight into the role of the VEEV...Smith JM, Schmaljohn CS (2010) High-Resolution Functional Mapping of the Venezuelan Equine Encephalitis Virus Genome by Insertional Mutagenesis and
Exploiting Virtual Synchrony in Distributed Systems
1987-02-01
for distributed systems yield the best performance relative to the level of synchronization guaranteed by the primitive. A programmer could then... synchronization facility. Semaphores: replicated binary and general semaphores. Monitors: monitor lock, condition variables and signals. Deadlock detection...We describe applications of a new software abstraction called the virtually synchronous process group. Such a group consists of a set of processes
Coordination control of flexible manufacturing systems
NASA Astrophysics Data System (ADS)
Menon, Satheesh R.
This work describes one of the first attempts to develop a model-driven system for coordination control of Flexible Manufacturing Systems (FMS). The structure and activities of the FMS are modeled using a colored Petri net based system. This approach has the advantage of being able to model the concurrency inherent in the system. It provides a method for encoding the system state, state transitions, and the feasible transitions at any given state. Furthermore, structural analysis (for detecting conflicting actions, deadlocks which might occur during operation, etc.) can be performed. The problem of implementing and testing the behavior of existing dynamic scheduling approaches in simulations of realistic situations is also addressed. A simulation architecture was proposed, and performance evaluation was carried out to establish the correctness of the model, the stability of the system from structural (deadlocks) and temporal (boundedness of backlogs) points of view, and to collect statistics for performance measures such as machine and robot utilization, average wait times, and idle times of resources. A real-time implementation architecture for the coordination controller was also developed and implemented in a software-simulated environment. Given the current technology of FMS control, the model-driven colored Petri net based approach promises a very flexible control environment.
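A minimal place/transition sketch (colors omitted) of the state encoding described above: a marking, the transitions enabled in it, and detection of a dead marking. The two-transition net is an invented FMS fragment, not the paper's model:

```python
# Minimal place/transition net: a marking encodes the system state,
# enabled() lists feasible transitions, and a marking with no enabled
# transitions is a deadlock. The net is an invented two-machine fragment.
PRE  = {"load_m1":   {"part": 1, "m1_free": 1},
        "unload_m1": {"m1_busy": 1, "buffer_free": 1}}
POST = {"load_m1":   {"m1_busy": 1},
        "unload_m1": {"m1_free": 1, "buffer_full": 1}}

def enabled(marking):
    return [t for t, pre in PRE.items()
            if all(marking.get(p, 0) >= n for p, n in pre.items())]

def fire(marking, t):
    m = dict(marking)
    for p, n in PRE[t].items():  m[p] = m[p] - n
    for p, n in POST[t].items(): m[p] = m.get(p, 0) + n
    return m

m = {"part": 2, "m1_free": 1, "buffer_free": 0}
while enabled(m):
    m = fire(m, enabled(m)[0])
print("dead marking reached:", m)   # no buffer space blocks unloading
```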
NASA Astrophysics Data System (ADS)
Zhao, Mi; Li, ZhiWu; Hu, HeSuan
2010-09-01
This article develops a deadlock prevention policy for a class of generalised Petri nets, which can model a large class of flexible manufacturing systems well. The analysis of such a system leads us to characterise the deadlock situations in terms of the insufficiently marked siphons in its generalised Petri-net model. The proposed policy is carried out in an iterative way. At each step a minimal siphon is derived from a maximal deadly marked siphon that is found by solving a mixed integer programming (MIP) problem. An algorithm is formalised that can efficiently compute such a minimal siphon from a maximal one. A monitor is added for a derived minimal siphon such that it is max-controlled if it is elementary with respect to the siphons that have been derived. The liveness of the controlled system is decided by the fact that no further siphon can be derived from the MIP solution. Once a liveness-enforcing net supervisor has been computed without complete siphon enumeration, the output arcs of the additional monitors are rearranged so that the monitors act while restricting the system less. Examples are presented to demonstrate the proposed method.
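The structural object underlying the policy can be checked in a few lines for ordinary nets. In the hedged sketch below (an invented three-transition net, not the article's MIP formulation), a place set S is a siphon when every transition that outputs into S also consumes from S, so an emptied siphon can never be re-marked:

```python
# Siphon check for an ordinary Petri net: S is a siphon iff
# preset(S) is a subset of postset(S). The net encoding
# (transition -> input/output place sets) is an invented example.
PRE  = {"t1": {"p1"}, "t2": {"p2"}, "t3": {"p1", "p2"}}
POST = {"t1": {"p2"}, "t2": {"p1"}, "t3": set()}

def is_siphon(S, pre=PRE, post=POST):
    inputs_to_S  = {t for t, outs in post.items() if outs & S}   # preset(S)
    outputs_of_S = {t for t, ins  in pre.items()  if ins  & S}   # postset(S)
    return inputs_to_S <= outputs_of_S

print(is_siphon({"p1", "p2"}))   # True: t3 can empty it permanently
print(is_siphon({"p1"}))         # False: t2 refills p1 without consuming it
```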
Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.
Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang
2017-01-01
Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of the simulations are multi-resolution spatiotemporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs in different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs, are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to existing correlation visualization techniques. We present the Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plot that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatiotemporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, based on real-world use cases from our collaborators in computational and predictive science.
Limpanuparb, Taweetham; Milthorpe, Josh; Rendell, Alistair P
2014-10-30
Use of the modern parallel programming language X10 for computing long-range Coulomb and exchange interactions is presented. By using X10, a partitioned global address space language with support for task parallelism and the explicit representation of data locality, the resolution of the Ewald operator can be parallelized in a straightforward manner, including use of both intranode and internode parallelism. We evaluate four different schemes for dynamic load balancing of the integral calculation using X10's work-stealing runtime, and report performance results for long-range HF energy calculations of a large molecule with a high-quality basis set running on up to 1024 cores of a high-performance cluster machine. Copyright © 2014 Wiley Periodicals, Inc.
Otazo, Ricardo; Lin, Fa-Hsuan; Wiggins, Graham; Jordan, Ramiro; Sodickson, Daniel; Posse, Stefan
2009-01-01
Standard parallel magnetic resonance imaging (MRI) techniques suffer from residual aliasing artifacts when the coil sensitivities vary within the image voxel. In this work, a parallel MRI approach known as Superresolution SENSE (SURE-SENSE) is presented in which acceleration is performed by acquiring only the central region of k-space instead of increasing the sampling distance over the complete k-space matrix, and reconstruction is explicitly based on intra-voxel coil sensitivity variation. In SURE-SENSE, parallel MRI reconstruction is formulated as a superresolution imaging problem where a collection of low-resolution images acquired with multiple receiver coils is combined into a single image with higher spatial resolution using coil sensitivities acquired with high spatial resolution. The effective acceleration relative to conventional gradient encoding is given by the gain in spatial resolution, which is dictated by the degree of variation of the different coil sensitivity profiles within the low-resolution image voxel. Since SURE-SENSE is an ill-posed inverse problem, Tikhonov regularization is employed to control noise amplification. Unlike standard SENSE, for which acceleration is constrained to the phase-encoding dimension(s), SURE-SENSE allows acceleration along all encoding directions, for example two-dimensional acceleration of a 2D echo-planar acquisition. SURE-SENSE is particularly suitable for low spatial resolution imaging modalities such as spectroscopic imaging and functional imaging with high temporal resolution. Application to echo-planar functional and spectroscopic imaging in human brain is presented using two-dimensional acceleration with a 32-channel receiver coil. PMID:19341804
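The regularization step is the generic Tikhonov-stabilized least squares shown below; E is a random stand-in for the coil-sensitivity encoding operator, not an MRI model:

```python
# Generic Tikhonov-regularized least squares of the kind used to
# stabilize an ill-posed reconstruction:
#   x = argmin ||E x - y||^2 + lam ||x||^2
import numpy as np

rng = np.random.default_rng(0)
E = rng.standard_normal((64, 128))       # underdetermined "encoding" matrix
x_true = np.zeros(128); x_true[::16] = 1.0
y = E @ x_true + 0.01 * rng.standard_normal(64)

lam = 0.1
# Normal equations with the Tikhonov term: (E^T E + lam I) x = E^T y
x_hat = np.linalg.solve(E.T @ E + lam * np.eye(128), E.T @ y)
print("residual:", np.linalg.norm(E @ x_hat - y))
```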
Capabilities of Fully Parallelized MHD Stability Code MARS
NASA Astrophysics Data System (ADS)
Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang
2016-10-01
Results of the full parallelization of the plasma stability code MARS are reported. MARS calculates eigenmodes of 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. A parallel version of MARS, named PMARS, has recently been developed at FAR-TECH. The parallelized MARS is an efficient tool for simulating MHD instabilities with low, intermediate and high toroidal mode numbers within both the fluid and kinetic plasma models implemented in MARS. Parallelization of the code included parallelizing the construction of the matrix for the eigenvalue problem and the inverse vector iteration algorithm used in MARS to solve the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. The solution of the eigenvalue problem is parallelized by repeating the steps of the MARS algorithm using parallel libraries and procedures. The parallelized MARS is capable of calculating eigenmodes with significantly increased spatial resolution: up to 5,000 adapted radial grid points with up to 500 poloidal harmonics. Such resolution is sufficient for simulating kink, tearing and peeling-ballooning instabilities with physically relevant parameters. Work is supported by the U.S. DOE SBIR program.
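The serial kernel being parallelized, inverse vector iteration, is compact enough to sketch. The matrix here is a small stand-in, and np.linalg.inv substitutes for the factorization a production code would use:

```python
# Inverse vector iteration: repeatedly solve (A - sigma*I) w = v to
# converge on the eigenpair whose eigenvalue is nearest the shift sigma.
import numpy as np

def inverse_iteration(A, sigma, iters=50):
    n = A.shape[0]
    Ainv = np.linalg.inv(A - sigma * np.eye(n))  # invert once; real codes
                                                 # factor and reuse instead
    v = np.random.default_rng(1).standard_normal(n)
    for _ in range(iters):
        v = Ainv @ v
        v /= np.linalg.norm(v)
    return v @ A @ v, v                          # Rayleigh quotient, vector

A = np.diag([1.0, 4.0, 9.0]) + 0.01              # small symmetric stand-in
lam, _ = inverse_iteration(A, sigma=4.2)
print(lam)                                       # eigenvalue closest to 4.2
```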
The Open Source Hardening Project
2009-07-01
invariants have been violated. Further, despite the severity of storage system bugs, deployed testing methods remain primitive, typically a combination of...an exponentially increasing number all-at-once. As a result, EXPLODE unsurprisingly looks like a primitive operating system: it has a queue of...via semaphores. However, doing so requires intrusive changes and, in our experience [30], backfires with unexpected deadlock since semaphores prevent a
Detecting Potential Synchronization Constraint Deadlocks from Formal System Specifications
1992-03-01
family of languages, consisting of the Larch Shared Language and a series of Larch interface languages, specific to particular programming languages...specify sequential (non-concurrent) programs, and explicitly does not include the ability to specify atomic actions (Guttag, 1985). Larch is therefore...synchronized communication between two such agents is considered as a single action. The transitions in CCS trees are labelled to show how they are
Worldwide Report, Arms Control.
1985-06-20
originate with the source. Times within items are as given by source. The contents of this publication in no way represent the policies, views or...U.S. Responsible for Deadlock. Weekly Moscow Discussion Show Views Talks (Moscow Domestic Service, 17 May 85) Lavrentyev, Shishkin...Avoid Space Militarization (A. Kozyrev; Moscow MEZHDUNARODNAYA ZHIZN, No 4, Apr 85). Soviet Chief of Staff Says SDI Violates ABM
NASA Astrophysics Data System (ADS)
Ji, X.; Shen, C.
2017-12-01
Flood inundation presents substantial societal hazards and also changes biogeochemistry for systems like the Amazon. It is often expensive to simulate high-resolution flood inundation and propagation in a long-term watershed-scale model. Due to the Courant-Friedrichs-Lewy (CFL) restriction, high resolution and large local flow velocities both demand prohibitively small time steps, even for parallel codes. Here we develop a parallel surface-subsurface process-based model enhanced by multi-resolution meshes that are adaptively switched on or off. The high-resolution overland flow meshes are enabled only when the flood wave invades the floodplains. This model applies a semi-implicit, semi-Lagrangian (SISL) scheme to solve the dynamic wave equations, and, with the assistance of the multi-mesh method, it adaptively applies the dynamic wave equation only in areas of deep inundation. The model therefore achieves a balance between accuracy and computational cost.
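The CFL restriction driving the design can be stated in one line: dt <= C dx / max|u|. The toy numbers below (not from the paper) show why a ten-times-finer flood mesh forces a ten-times-smaller explicit time step:

```python
# The explicit time step must satisfy dt <= C * dx / max|u|, so refining
# the mesh (or a faster flood wave) shrinks dt proportionally.
def cfl_dt(dx, max_speed, courant=0.7):
    return courant * dx / max_speed

print(cfl_dt(dx=30.0, max_speed=3.0))   # coarse mesh: dt = 7 s
print(cfl_dt(dx=3.0,  max_speed=3.0))   # 10x finer flood mesh: dt = 0.7 s
```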
Rideaux, Reuben; Apthorp, Deborah; Edwards, Mark
2015-02-12
Recent findings have indicated that the capacity to consolidate multiple items into visual short-term memory in parallel varies as a function of the type of information. That is, while color can be consolidated in parallel, evidence suggests that orientation cannot. Here we investigated the capacity to consolidate multiple motion directions in parallel and reexamined this capacity using orientation. This was achieved by determining the shortest exposure duration necessary to consolidate a single item, then examining whether two items, presented simultaneously, could be consolidated in that time. The results show that parallel consolidation of direction and orientation information is possible, and that parallel consolidation of direction appears to be limited to two items. Additionally, we demonstrate the importance of adequate separation between the feature intervals used to define items when attempting to consolidate in parallel, suggesting that when multiple items are consolidated in parallel, as opposed to serially, the resolution of the representations suffers. Finally, we used facilitation of spatial attention to show that the deterioration of item resolution occurs during parallel consolidation, as opposed to storage. © 2015 ARVO.
Grammatical Role Parallelism Influences Ambiguous Pronoun Resolution in German
Sauermann, Antje; Gagarina, Natalia
2017-01-01
Previous research on pronoun resolution in German has revealed that personal pronouns tend to refer to subject or topic antecedents; however, these results are based on studies involving subject personal pronouns. We report a visual-world eye-tracking study that investigated the impact of word order and grammatical-role parallelism on the online comprehension of pronouns in German-speaking adults. The word order of the antecedents and the parallelism in grammatical role between anaphor and antecedent were manipulated. The results show that grammatical-role parallelism had an early and strong effect on the processing of the pronoun, with subject anaphors being resolved to subject antecedents and object anaphors to object antecedents, regardless of the word order (information status) of the antecedents. Our results demonstrate that personal pronouns may not in general be associated with the subject or topic of a sentence; rather, their resolution is modulated by additional factors such as grammatical role. Further studies are required to investigate whether parallelism also affects offline antecedent choices. PMID:28790940
NASA Technical Reports Server (NTRS)
Wang, P.; Li, P.
1998-01-01
A high-resolution numerical study of three-dimensional, time-dependent thermal convective flows on parallel systems is reported. A parallel implementation of the finite volume method with a multigrid scheme is discussed, and a parallel visualization system for visualizing the flow is developed on distributed systems.
NASA Astrophysics Data System (ADS)
Kreituss, Imants; Bode, Jeffrey W.
2017-05-01
Kinetic resolution is a common method to obtain enantioenriched material from a racemic mixture. This process will deliver enantiopure unreacted material when the selectivity factor of the process, s, is greater than 1; however, the scalemic reaction product is often discarded. Parallel kinetic resolution, on the other hand, provides access to two enantioenriched products from a single racemic starting material, but suffers from a variety of practical challenges regarding experimental design that limit its applications. Here, we describe the development of a flow-based system that enables practical parallel kinetic resolution of saturated N-heterocycles. This process provides access to both enantiomers of the starting material in good yield and high enantiopurity; similar results with classical kinetic resolution would require selectivity factors in the range of s = 100. To achieve this, two immobilized quasienantiomeric acylating agents were designed for the asymmetric acylation of racemic saturated N-heterocycles. Using the flow-based system we could efficiently separate, recover and reuse the polymer-supported reagents. The amide products could be readily separated and hydrolysed to the corresponding amines without detectable epimerization.
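The quoted selectivity requirement follows from the standard Kagan relation between conversion c, the enantiomeric excess of the recovered starting material, and s. The illustrative numbers below (not from the paper) show that ~99% ee at just over 50% conversion already demands s on the order of 100:

```python
# Kagan equation for classical kinetic resolution:
#   s = ln[(1-c)(1-ee)] / ln[(1-c)(1+ee)]
# with c the conversion and ee that of the recovered starting material.
import math

def selectivity(c, ee_sm):
    return (math.log((1 - c) * (1 - ee_sm)) /
            math.log((1 - c) * (1 + ee_sm)))

# Recovering starting material in 99% ee at 52% conversion:
print(round(selectivity(0.52, 0.99), 1))   # ~116, i.e. on the order of 100
```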
Kuwait: Security, Reform, and U.S. Policy
2012-04-11
franchise for the National Assembly and other elected positions (such as the Kuwait City municipal council). For at least two decades, the extent of the... franchise has been a closely watched indicator of Kuwait’s political liberalization. The government has expanded the electorate gradually, first by...extending the franchise to sons of naturalized Kuwaitis and Kuwaitis naturalized for at least 20 (as opposed to 30) years. The long deadlock on female
Kuwait: Security, Reform, and U.S. Policy
2011-04-26
past two decades in extending the franchise for the National Assembly and other elected positions (such as the Kuwait City municipal council). The...extent of the franchise has been a closely watched indicator of Kuwait’s political liberalization. The government has expanded the electorate gradually...first by extending the franchise to sons of naturalized Kuwaitis and Kuwaitis naturalized for at least 20 (as opposed to 30) years. The long deadlock
Breaking the Nordic Defense Deadlock
2015-02-01
popular hopes for international achievements in the disarmament field all contributed to the perception among liberals in Sweden that reductions...during World War II and neither was it in her interest. Sweden was nonaligned, and adapted to the changing war situation. Strong popular support for...United Nations] was a 'luxury good', only affordable because the Nordics were allowed a free ride on a security order created by the presence of an
A Welcome Proposal to Amend the GMO Legislation of the EU.
Eriksson, Dennis; Harwood, Wendy; Hofvander, Per; Jones, Huw; Rogowsky, Peter; Stöger, Eva; Visser, Richard G F
2018-05-25
Is the European Union (EU) regulatory framework for genetically modified organisms (GMOs) adequate for emerging techniques, such as genome editing? This has been discussed extensively for more than 10 years. A recent proposal from The Netherlands offers a way to break the deadlock. Here, we discuss how the proposal would affect examples from public plant research. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
PRISMATIC: Unified Hierarchical Probabilistic Verification Tool
2011-09-01
security protocols such as for anonymity and quantum cryptography; and biological reaction pathways. PRISM is currently the leading probabilistic...a whole will only deadlock and fail with a probability ≤ p/2. The assumption allows us to partition the overall system verification problem into two...run on any port using the standard HTTP protocol. In this way multiple instances of the PRISMATIC web service can respond to different requests when
A Checkmate, Not a Stalemate: Turkey Versus the PKK
2014-06-01
ETA) cases. The conditions that contribute to a deadlock are the source of much debate. Stalemates may emanate from each side's perception...analyze Turkey's conflict with the PKK. Dean G. Pruitt, "Escalation and De-escalation in Asymmetric... De ora G. eeling, "Terrorism in Turkey: History, Ideology, Structure and Strategy," in The PKK: A Decades-Old Brutal Marxist-Leninist Separatist
An Overview of the Runtime Verification Tool Java PathExplorer
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Rosu, Grigore; Clancy, Daniel (Technical Monitor)
2002-01-01
We present an overview of the Java PathExplorer runtime verification tool, in short referred to as JPAX. JPAX can monitor the execution of a Java program and check that it conforms with a set of user provided properties formulated in temporal logic. JPAX can in addition analyze the program for concurrency errors such as deadlocks and data races. The concurrency analysis requires no user provided specification. The tool facilitates automated instrumentation of a program's bytecode, which when executed will emit an event stream, the execution trace, to an observer. The observer dispatches the incoming event stream to a set of observer processes, each performing a specialized analysis, such as the temporal logic verification, the deadlock analysis and the data race analysis. Temporal logic specifications can be formulated by the user in the Maude rewriting logic, where Maude is a high-speed rewriting system for equational logic, but here extended with executable temporal logic. The Maude rewriting engine is then activated as an event driven monitoring process. Alternatively, temporal specifications can be translated into efficient automata, which check the event stream. JPAX can be used during program testing to gain increased information about program executions, and can potentially furthermore be applied during operation to survey safety critical systems.
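One common ingredient of such deadlock analyses, the lock-order graph, fits in a short sketch. The fabricated trace below (not JPAX output or its algorithm) records which locks each thread held when acquiring another; a cycle in the resulting graph flags a potential deadlock:

```python
# Lock-order-graph sketch: build edges "lock held -> lock acquired" from
# an execution trace, then report a cycle as a potential deadlock.
from collections import defaultdict

trace = [("T1", "acquire", "A"), ("T1", "acquire", "B"),
         ("T1", "release", "B"), ("T1", "release", "A"),
         ("T2", "acquire", "B"), ("T2", "acquire", "A"),
         ("T2", "release", "A"), ("T2", "release", "B")]

held = defaultdict(set)              # thread -> locks currently held
edges = defaultdict(set)             # lock -> locks taken while holding it
for thread, op, lock in trace:
    if op == "acquire":
        for h in held[thread]:
            edges[h].add(lock)
        held[thread].add(lock)
    else:
        held[thread].discard(lock)

def has_cycle(edges):
    def dfs(node, stack):
        if node in stack:
            return True
        return any(dfs(n, stack | {node}) for n in edges[node])
    return any(dfs(n, frozenset()) for n in list(edges))

print("potential deadlock:", has_cycle(edges))   # True: A->B and B->A
```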
NASA Astrophysics Data System (ADS)
Vivoni, Enrique R.; Mascaro, Giuseppe; Mniszewski, Susan; Fasel, Patricia; Springer, Everett P.; Ivanov, Valeriy Y.; Bras, Rafael L.
2011-10-01
A major challenge in the use of fully-distributed hydrologic models has been the lack of computational capabilities for high-resolution, long-term simulations in large river basins. In this study, we present the parallel model implementation and real-world hydrologic assessment of the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). Our parallelization approach is based on the decomposition of a complex watershed using the channel network as a directed graph. The resulting sub-basin partitioning divides effort among processors and handles hydrologic exchanges across boundaries. Through numerical experiments in a set of nested basins, we quantify parallel performance relative to serial runs for a range of processors, simulation complexities and lengths, and sub-basin partitioning methods, while accounting for inter-run variability on a parallel computing system. In contrast to serial simulations, the parallel model speed-up depends on the variability of hydrologic processes. Load balancing significantly improves parallel speed-up with proportionally faster runs as simulation complexity (domain resolution and channel network extent) increases. The best strategy for large river basins is to combine a balanced partitioning with an extended channel network, with potential savings through a lower TIN resolution. Based on these advances, a wider range of applications for fully-distributed hydrologic models are now possible. This is illustrated through a set of ensemble forecasts that account for precipitation uncertainty derived from a statistical downscaling model.
Practical Formal Verification of MPI and Thread Programs
NASA Astrophysics Data System (ADS)
Gopalakrishnan, Ganesh; Kirby, Robert M.
Large-scale simulation codes in science and engineering are written using the Message Passing Interface (MPI). Shared memory threads are widely used directly, or to implement higher level programming abstractions. Traditional debugging methods for MPI or thread programs are incapable of providing useful formal guarantees about coverage. They get bogged down in the sheer number of interleavings (schedules), often missing shallow bugs. In this tutorial we will introduce two practical formal verification tools: ISP (for MPI C programs) and Inspect (for Pthread C programs). Unlike other formal verification tools, ISP and Inspect run directly on user source codes (much like a debugger). They pursue only the relevant set of process interleavings, using our own customized Dynamic Partial Order Reduction algorithms. For a given test harness, DPOR allows these tools to guarantee the absence of deadlocks, instrumented MPI object leaks and communication races (using ISP), and shared memory races (using Inspect). ISP and Inspect have been used to verify large pieces of code: in excess of 10,000 lines of MPI/C for ISP in under 5 seconds, and about 5,000 lines of Pthread/C code in a few hours (and much faster with the use of a cluster or by exploiting special cases such as symmetry) for Inspect. We will also demonstrate the Microsoft Visual Studio and Eclipse Parallel Tools Platform integrations of ISP (these will be available on the LiveCD).
17 CFR 12.24 - Parallel proceedings.
Code of Federal Regulations, 2010 CFR
2010-04-01
...) Definition. For purposes of this section, a parallel proceeding shall include: (1) An arbitration proceeding... the receivership includes the resolution of claims made by customers; or (3) A petition filed under... any of the foregoing with knowledge of a parallel proceeding shall promptly notify the Commission, by...
NASA Astrophysics Data System (ADS)
Yu, Zhongzhi; Liu, Shaocong; Sun, Shiyi; Kuang, Cuifang; Liu, Xu
2018-06-01
Parallel detection, which can use the additional information of a pinhole-plane image taken at every excitation scan position, is an efficient method to enhance the resolution of a confocal laser scanning microscope. In this paper, we discuss images obtained under different conditions and using different image restoration methods with parallel detection to quantitatively compare imaging quality. The conditions include different noise levels and different detector array settings. The image restoration methods include linear deconvolution and pixel reassignment with Richardson-Lucy deconvolution and with maximum-likelihood-estimation deconvolution. The results show that linear deconvolution offers high efficiency and the best performance under all conditions, and is therefore expected to be of use for future routine biomedical research.
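The Richardson-Lucy step named above has a compact textbook form; the 1-D sketch below uses a synthetic signal and PSF and omits the pixel-reassignment stage, so it illustrates the deconvolution alone:

```python
# Plain 1-D Richardson-Lucy iteration (textbook form, synthetic data).
import numpy as np

def richardson_lucy(observed, psf, iters=50):
    psf_flip = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iters):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate

psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
truth = np.zeros(64); truth[20] = 1.0; truth[24] = 0.8   # two close peaks
observed = np.convolve(truth, psf, mode="same")
print(np.argmax(richardson_lucy(observed, psf)))          # sharpened peak
```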
2014-04-01
synchronization primitives based on preset templates can result in over-synchronization if unchecked, possibly creating deadlock situations. Further...inputs rather than enforcing synchronization with a global clock. MRICDF models software as a network of communicating actors. Four primitive actors...control wants to send an interrupt or not. Since this is a shared buffer, a semaphore mechanism is assumed to synchronize the reads and writes of this buffer. The
Replication and Reconfiguration in a Distributed Mail Repository.
1987-04-01
a single machine and sees no improvement in availability over the old repository. Further, the static allocation of users to particular machines means...Reconfiguration: Good old Watson! You are the one fixed point in a changing universe! -Sir Arthur Conan Doyle. How can I be sure in a world that's constantly...automatic storage allocation, and the Argus debugger. Then I discuss the drawbacks involved in using Argus: deadlocks, the awkwardness of retrying actions
Fast I/O for Massively Parallel Applications
NASA Technical Reports Server (NTRS)
OKeefe, Matthew T.
1996-01-01
The two primary goals of this report were the design, construction and modeling of parallel disk arrays for scientific visualization and animation, and a study of the I/O requirements of highly parallel applications. In addition, further work was performed on the parallel display systems required to project and animate the very high-resolution frames resulting from our supercomputing simulations in ocean circulation and compressible gas dynamics.
The role of parallelism in the real-time processing of anaphora.
Poirier, Josée; Walenski, Matthew; Shapiro, Lewis P
2012-06-01
Parallelism effects refer to the facilitated processing of a target structure when it follows a similar, parallel structure. In coordination, a parallelism-related conjunction triggers the expectation that a second conjunct with the same structure as the first conjunct should occur. It has been proposed that parallelism effects reflect the use of the first structure as a template that guides the processing of the second. In this study, we examined the role of parallelism in real-time anaphora resolution by charting activation patterns in coordinated constructions containing anaphora, Verb-Phrase Ellipsis (VPE) and Noun-Phrase Traces (NP-traces). Specifically, we hypothesised that an expectation of parallelism would incite the parser to assume a structure similar to the first conjunct in the second, anaphora-containing conjunct. The speculation of a similar structure would result in early postulation of covert anaphora. Experiment 1 confirms that following a parallelism-related conjunction, first-conjunct material is activated in the second conjunct. Experiment 2 reveals that an NP-trace in the second conjunct is posited immediately where licensed, which is earlier than previously reported in the literature. In light of our findings, we propose an intricate relation between structural expectations and anaphor resolution.
The role of parallelism in the real-time processing of anaphora
Poirier, Josée; Walenski, Matthew; Shapiro, Lewis P.
2012-01-01
Parallelism effects refer to the facilitated processing of a target structure when it follows a similar, parallel structure. In coordination, a parallelism-related conjunction triggers the expectation that a second conjunct with the same structure as the first conjunct should occur. It has been proposed that parallelism effects reflect the use of the first structure as a template that guides the processing of the second. In this study, we examined the role of parallelism in real-time anaphora resolution by charting activation patterns in coordinated constructions containing anaphora, Verb-Phrase Ellipsis (VPE) and Noun-Phrase Traces (NP-traces). Specifically, we hypothesised that an expectation of parallelism would incite the parser to assume a structure similar to the first conjunct in the second, anaphora-containing conjunct. The speculation of a similar structure would result in early postulation of covert anaphora. Experiment 1 confirms that following a parallelism-related conjunction, first-conjunct material is activated in the second conjunct. Experiment 2 reveals that an NP-trace in the second conjunct is posited immediately where licensed, which is earlier than previously reported in the literature. In light of our findings, we propose an intricate relation between structural expectations and anaphor resolution. PMID:23741080
Yazawa, Koji; Suzuki, Furitsu; Nishiyama, Yusuke; Ohata, Takuya; Aoki, Akihiro; Nishimura, Katsuyuki; Kaji, Hironori; Shimizu, Tadashi; Asakura, Tetsuo
2012-11-25
The accurate ¹H positions of the alanine tripeptide, A₃, with anti-parallel and parallel β-sheet structures could be determined from highly resolved ¹H DQMAS solid-state NMR spectra together with ¹H chemical shift calculations using the gauge-including projector augmented wave method.
NASA Astrophysics Data System (ADS)
Mandai, Shingo; Jain, Vishwas; Charbon, Edoardo
2014-02-01
This paper presents a digital silicon photomultiplier (SiPM) partitioned into columns, where each column is connected to a column-parallel time-to-digital converter (TDC), in order to improve the timing resolution of single-photon detection. By reducing the number of pixels per TDC using a sharing scheme with three TDCs per column, the pixel-to-pixel skew is reduced. We report the basic characterization of the SiPM, comprising 416 single-photon avalanche diodes (SPADs); the characterization includes photon detection probability, dark count rate, afterpulsing, and crosstalk. We achieved a 264-ps full-width at half maximum timing resolution of single-photon detection using a 48-fold column-parallel TDC with a temporal resolution of 51.8 ps (least significant bit), fully integrated in standard complementary metal-oxide semiconductor technology.
NASA Astrophysics Data System (ADS)
Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng
2018-02-01
De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In this paper, we propose a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). We then design an imaging-point parallel strategy to achieve optimal parallel computing performance, and adopt an asynchronous double-buffering scheme across multiple streams to perform the GPU/CPU parallel computation. Moreover, several key optimization strategies for computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.
Anti-parallel EUV Flows Observed along Active Region Filament Threads with Hi-C
NASA Astrophysics Data System (ADS)
Alexander, Caroline E.; Walsh, Robert W.; Régnier, Stéphane; Cirtain, Jonathan; Winebarger, Amy R.; Golub, Leon; Kobayashi, Ken; Platt, Simon; Mitchell, Nick; Korreck, Kelly; DePontieu, Bart; DeForest, Craig; Weber, Mark; Title, Alan; Kuzin, Sergey
2013-09-01
Plasma flows within prominences/filaments have been observed for many years and hold valuable clues concerning the mass and energy balance within these structures. Previous observations of these flows primarily come from Hα and cool extreme-ultraviolet (EUV) lines (e.g., 304 Å), where estimates of the size of the prominence threads have been limited by the resolution of the available instrumentation. Evidence of "counter-streaming" flows has previously been inferred from these cool plasma observations, but now, for the first time, these flows have been directly imaged along fundamental filament threads within the million-degree corona (at 193 Å). In this work, we present observations of an AR filament observed with the High-resolution Coronal Imager (Hi-C) that exhibits anti-parallel flows along adjacent filament threads. Complementary data from the Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly (AIA) and Helioseismic and Magnetic Imager are presented. The ultra-high spatial and temporal resolution of Hi-C allows the anti-parallel flow velocities to be measured (70-80 km s⁻¹) and gives an indication of the resolvable thickness of the individual strands (0.8″ ± 0.1″). The temperature of the plasma flows was estimated to be log T (K) = 5.45 ± 0.10 using Emission Measure loci analysis. We find that SDO/AIA cannot clearly observe these anti-parallel flows or measure their velocity or thread width due to its larger pixel size. We suggest that anti-parallel/counter-streaming flows are likely commonplace within all filaments and are currently not observed in EUV due to current instrument spatial resolution.
Instability in the Horn of Africa: An Assessment of Ethiopian-Eritrean Conflict
2010-09-01
2000. Zenebeworke Tadesse, "The Ethiopia-Eritrea Conflict," 113, at: http://www.conflicttransform.net/Eritrea%20&%20Ethiopia.pdf (accessed July 20...dispute. The United States' rapid diplomatic response team led by Susan Rice, U.S. Assistant Secretary of State, worked closely with Paul Kagame...actors to advance implementation have failed, resulting in continuing deadlock. B. THE U.S.-RWANDA PEACE PLAN. The U.S.-Rwanda team was the first to
1990-01-12
Communications COM-28 (April 1980), 425-432. [6] Munch-Anderson, B. and T.U. Zahle, Scheduling According to Job Priority With Prevention of Deadlock and...Interconnection, IEEE Transactions on Communications COM-28 (April 1980), 425-432. [4] Cook, R.P., StarMod, A Language for Distributed Programming...
[The powerlessness of care: practice analysis as a way out].
Trégouet, Stéphane
2016-01-01
Some clinical situations, particularly those associated with psychosis, push care teams into entrenched positions. Here, the caregiver's professional ideal is attacked from all sides by the patient's resistance and denial, resulting in a feeling of therapeutic deadlock. Through discussion, the caregiving team may find a way out via the implementation of therapeutic strategies, encompassing a process of creating, consolidating and relaxing a professional stance. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
1988-03-01
Mechanism; Computer Security....denial of service. This paper assumes that the reader is a computer science or engineering professional working in the area of formal specification and...recovery from such events as deadlocks and crashes can be accounted for in the computation of the waiting time for each service in the service hierarchy
Evidence for the speed-value trade-off: human and monkey decision making is magnitude sensitive.
Pirrone, Angelo; Azab, Habiba; Hayden, Benjamin Y; Stafford, Tom; Marshall, James A R
2018-04-01
Complex natural systems from brains to bee swarms have evolved to make adaptive multifactorial decisions. Recent theoretical and empirical work suggests that many evolved systems may take advantage of common motifs across multiple domains. We are particularly interested in value sensitivity (i.e., sensitivity to the magnitude or intensity of the stimuli or reward under consideration) as a mechanism to resolve deadlocks adaptively. This mechanism favours long-term reward maximization over accuracy in a simple manner, because it avoids costly delays associated with ambivalence between similar options; speed-value trade-offs have been proposed to be evolutionarily advantageous for many kinds of decision. A key prediction of the value-sensitivity hypothesis is that choices between equally-valued options will proceed faster when the options have a high value than when they have a low value. However, value-sensitivity is not part of idealised choice models such as diffusion to bound. Here we examine two different choice behaviours in two different species, perceptual decisions in humans and economic choices in rhesus monkeys, to test this hypothesis. We observe the same value sensitivity in both human perceptual decisions and monkey value-based decisions. These results endorse the idea that neural decision systems make use of the same basic principle of value-sensitivity in order to resolve costly deadlocks and thus improve long-term reward intake.
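The prediction is easy to reproduce in a toy magnitude-sensitive race model (all parameters invented, not the authors' analysis): two accumulators drift at rates proportional to the option values, so equal-value trials terminate sooner when the shared value is larger:

```python
# Toy race model: two noisy accumulators drift toward a common threshold
# at rates proportional to option values; equal-value trials finish
# faster when the common value is high (the value-sensitivity prediction).
import random

def decision_time(v1, v2, threshold=1.0, noise=0.05, dt=0.01):
    x1 = x2 = 0.0
    t = 0.0
    while x1 < threshold and x2 < threshold:
        x1 += v1 * dt + random.gauss(0, noise)
        x2 += v2 * dt + random.gauss(0, noise)
        t += dt
    return t

random.seed(3)
low  = sum(decision_time(0.2, 0.2) for _ in range(500)) / 500
high = sum(decision_time(0.8, 0.8) for _ in range(500)) / 500
print(f"mean RT low-value {low:.2f}s  high-value {high:.2f}s")  # high < low
```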
Evidence for the speed-value trade-off: human and monkey decision making is magnitude sensitive
Pirrone, Angelo; Azab, Habiba; Hayden, Benjamin Y.; Stafford, Tom; Marshall, James A. R.
2017-01-01
Complex natural systems from brains to bee swarms have evolved to make adaptive multifactorial decisions. Recent theoretical and empirical work suggests that many evolved systems may take advantage of common motifs across multiple domains. We are particularly interested in value sensitivity (i.e., sensitivity to the magnitude or intensity of the stimuli or reward under consideration) as a mechanism to resolve deadlocks adaptively. This mechanism favours long-term reward maximization over accuracy in a simple manner, because it avoids costly delays associated with ambivalence between similar options; speed-value trade-offs have been proposed to be evolutionarily advantageous for many kinds of decision. A key prediction of the value-sensitivity hypothesis is that choices between equally-valued options will proceed faster when the options have a high value than when they have a low value. However, value-sensitivity is not part of idealised choice models such as diffusion to bound. Here we examine two different choice behaviours in two different species, perceptual decisions in humans and economic choices in rhesus monkeys, to test this hypothesis. We observe the same value sensitivity in both human perceptual decisions and monkey value-based decisions. These results endorse the idea that neural decision systems make use of the same basic principle of value-sensitivity in order to resolve costly deadlocks and thus improve long-term reward intake. PMID:29682592
Parallel simulation of tsunami inundation on a large-scale supercomputer
NASA Astrophysics Data System (ADS)
Oishi, Y.; Imamura, F.; Sugawara, D.
2013-12-01
An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation and the computational power of recent massively parallel supercomputers is helpful to enable faster than real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), and so it is expected that very fast parallel computers will be more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on TUNAMI-N2 model of Tohoku University, which is based on a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computation load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the finite difference calculation, (2) communication between adjacent layers for the calculations to connect each layer, and (3) global communication to obtain the time step which satisfies the CFL condition in the whole domain. A preliminary test on the K computer showed the parallel efficiency on 1024 cores was 57% relative to 64 cores. We estimate that the parallel efficiency will be considerably improved by applying a 2-D domain decomposition instead of the present 1-D domain decomposition in future work. The present parallel tsunami model was applied to the 2011 Great Tohoku tsunami. The coarsest resolution layer covers a 758 km × 1155 km region with a 405 m grid spacing. A nesting of five layers was used with the resolution ratio of 1/3 between nested layers. The finest resolution region has 5 m resolution and covers most of the coastal region of Sendai city. To complete 2 hours of simulation time, the serial (non-parallel) computation took approximately 4 days on a workstation. To complete the same simulation on 1024 cores of the K computer, it took 45 minutes which is more than two times faster than real-time. This presentation discusses the updated parallel computational performance and the efficient use of the K computer when considering the characteristics of the tsunami inundation simulation model in relation to the characteristics and capabilities of the K computer.
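Two of the three communication patterns listed, the neighbour exchange and the global CFL reduction, can be sketched with mpi4py; the inter-layer nesting exchange is omitted, and this is not the authors' Fortran/C code:

```python
# mpi4py sketch of 1-D domain decomposition communication:
# (1) halo exchange with adjacent neighbours for the stencil update, and
# (3) a global reduction for the CFL-limited time step.
# Run with e.g. `mpiexec -n 4 python tsunami_sketch.py` (invented name).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Local strip of the water-height field plus one ghost row on each side.
h = np.full((102, 50), float(rank))
left  = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# (1) neighbour exchange: send edge rows, receive into ghost rows.
comm.Sendrecv(h[1],  dest=left,  recvbuf=h[-1], source=right)
comm.Sendrecv(h[-2], dest=right, recvbuf=h[0],  source=left)

# (3) global minimum of the locally admissible CFL time step.
local_dt = 0.7 * 5.0 / np.sqrt(9.81 * h.max() + 1.0)
dt = comm.allreduce(local_dt, op=MPI.MIN)
if rank == 0:
    print("global dt =", dt)
```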
Park, Jong Kang; Rowlands, Christopher J; So, Peter T C
2017-01-01
Temporal focusing multiphoton microscopy is a technique for performing highly parallelized multiphoton microscopy while still maintaining depth discrimination. While the conventional wide-field configuration for temporal focusing suffers from sub-optimal axial resolution, line scanning temporal focusing, implemented here using a digital micromirror device (DMD), can provide substantial improvement. The DMD-based line scanning temporal focusing technique dynamically trades off the degree of parallelization, and hence imaging speed, for axial resolution, allowing performance parameters to be adapted to the experimental requirements. We demonstrate this new instrument in calibration specimens and in biological specimens, including a mouse kidney slice.
Park, Jong Kang; Rowlands, Christopher J.; So, Peter T. C.
2017-01-01
Temporal focusing multiphoton microscopy is a technique for performing highly parallelized multiphoton microscopy while still maintaining depth discrimination. While the conventional wide-field configuration for temporal focusing suffers from sub-optimal axial resolution, line scanning temporal focusing, implemented here using a digital micromirror device (DMD), can provide substantial improvement. The DMD-based line scanning temporal focusing technique dynamically trades off the degree of parallelization, and hence imaging speed, for axial resolution, allowing performance parameters to be adapted to the experimental requirements. We demonstrate this new instrument in calibration specimens and in biological specimens, including a mouse kidney slice. PMID:29387484
Development of high-resolution x-ray CT system using parallel beam geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoneyama, Akio, E-mail: akio.yoneyama.bu@hitachi.com; Baba, Rika; Hyodo, Kazuyuki
2016-01-28
For fine three-dimensional observations of large biomedical and organic material samples, we developed a high-resolution X-ray CT system. The system consists of a sample positioner, a 5-μm scintillator, microscopy lenses, and a water-cooled sCMOS detector. Parallel-beam geometry was adopted to attain a field of view of a few millimeters square. A fine three-dimensional image of a birch branch was obtained using 9-keV X-rays at BL16XU of SPring-8 in Japan. The spatial resolution estimated from the line profile of a sectional image was about 3 μm.
Tao, Shengzhen; Trzasko, Joshua D; Shu, Yunhong; Weavers, Paul T; Huston, John; Gray, Erin M; Bernstein, Matt A
2016-06-01
To describe how integrated gradient nonlinearity (GNL) correction can be used within noniterative partial Fourier (homodyne) and parallel (SENSE and GRAPPA) MR image reconstruction strategies, and demonstrate that performing GNL correction during, rather than after, these routines mitigates the image blurring and resolution loss caused by postreconstruction image domain based GNL correction. Starting from partial Fourier and parallel magnetic resonance imaging signal models that explicitly account for GNL, noniterative image reconstruction strategies for each accelerated acquisition technique are derived under the same core mathematical assumptions as their standard counterparts. A series of phantom and in vivo experiments on retrospectively undersampled data were performed to investigate the spatial resolution benefit of integrated GNL correction over conventional postreconstruction correction. Phantom and in vivo results demonstrate that the integrated GNL correction reduces the image blurring introduced by the conventional GNL correction, while still correcting GNL-induced coarse-scale geometrical distortion. Images generated from undersampled data using the proposed integrated GNL strategies offer superior depiction of fine image detail, for example, phantom resolution inserts and anatomical tissue boundaries. Noniterative partial Fourier and parallel imaging reconstruction methods with integrated GNL correction reduce the resolution loss that occurs during conventional postreconstruction GNL correction while preserving the computational efficiency of standard reconstruction techniques. Magn Reson Med 75:2534-2544, 2016. © 2015 Wiley Periodicals, Inc.
Compact holographic optical neural network system for real-time pattern recognition
NASA Astrophysics Data System (ADS)
Lu, Taiwei; Mintzer, David T.; Kostrzewski, Andrew A.; Lin, Freddie S.
1996-08-01
One of the important characteristics of artificial neural networks is their capability for massive interconnection and parallel processing. Recently, specialized electronic neural network processors and VLSI neural chips have been introduced in the commercial market. The number of parallel channels they can handle is limited because of the limited parallel interconnections that can be implemented with 1D electronic wires. High-resolution pattern recognition problems can require a large number of neurons for parallel processing of an image. This paper describes a holographic optical neural network (HONN) that is based on high-resolution volume holographic materials and is capable of performing massive 3D parallel interconnection of tens of thousands of neurons. A HONN with more than 16,000 neurons packaged in an attache case has been developed. Rotation-, shift-, and scale-invariant pattern recognition operations have been demonstrated with this system. System parameters such as the signal-to-noise ratio, dynamic range, and processing speed are discussed.
Bammer, Roland; Hope, Thomas A.; Aksoy, Murat; Alley, Marcus T.
2012-01-01
Exact knowledge of blood flow characteristics in the major cerebral vessels is of great relevance for diagnosing cerebrovascular abnormalities. This involves the assessment of hemodynamically critical areas as well as the derivation of biomechanical parameters such as wall shear stress and pressure gradients. A time-resolved, 3D phase-contrast (PC) MRI method using parallel imaging was implemented to measure blood flow in three dimensions at multiple instances over the cardiac cycle. The 4D velocity data obtained from 14 healthy volunteers were used to investigate dynamic blood flow with the use of multiplanar reformatting, 3D streamlines, and 4D particle tracing. In addition, the effects of magnetic field strength, parallel imaging, and temporal resolution on the data were investigated in a comparative evaluation at 1.5T and 3T using three different parallel imaging reduction factors and three different temporal resolutions in eight of the 14 subjects. Studies were consistently performed faster at 3T than at 1.5T because of better parallel imaging performance. A high temporal resolution (65 ms) was required to follow dynamic processes in the intracranial vessels. The 4D flow measurements provided a high degree of vascular conspicuity. Time-resolved streamline analysis provided features that have not been reported previously for the intracranial vasculature. PMID:17195166
Rauniyar, Navin
2015-01-01
The parallel reaction monitoring (PRM) assay has emerged as an alternative method of targeted quantification. The PRM assay is performed in a high resolution and high mass accuracy mode on a mass spectrometer. This review presents the features that make PRM a highly specific and selective method for targeted quantification using quadrupole-Orbitrap hybrid instruments. In addition, this review discusses the label-based and label-free methods of quantification that can be performed with the targeted approach. PMID:26633379
Arms Control Verification Is No Longer a Stumbling Block to True Disarmament
1992-05-01
that they had had since 1945. The Atomic Energy Commission failed to reach an agreement due to the deadlock between the two plans. The Soviet...the Soviets exploding their first nuclear device on September 2, 1949. The talks on general and complete disarmament had failed. The failure to...and by the general state of international and domestic politics at the time. Adequate verification involves the acceptance of a degree of
Point-to-Point Multicast Communications Protocol
NASA Technical Reports Server (NTRS)
Byrd, Gregory T.; Nakano, Russell; Delagi, Bruce A.
1987-01-01
This paper describes a protocol to support point-to-point interprocessor communications with multicast. Dynamic, cut-through routing with local flow control is used to provide a high-throughput, low-latency communications path between processors. In addition multicast transmissions are available, in which copies of a packet are sent to multiple destinations using common resources as much as possible. Special packet terminators and selective buffering are introduced to avoid a deadlock during multicasts. A simulated implementation of the protocol is also described.
Fellner, C; Doenitz, C; Finkenzeller, T; Jung, E M; Rennert, J; Schlaier, J
2009-01-01
Geometric distortions and low spatial resolution are current limitations in functional magnetic resonance imaging (fMRI). The aim of this study was to evaluate if application of parallel imaging or significant reduction of voxel size in combination with a new 32-channel head array coil can reduce those drawbacks at 1.5 T for a simple hand motor task. Therefore, maximum t-values (tmax) in different regions of activation, time-dependent signal-to-noise ratios (SNR(t)) as well as distortions within the precentral gyrus were evaluated. Comparing fMRI with and without parallel imaging in 17 healthy subjects revealed significantly reduced geometric distortions in anterior-posterior direction. Using parallel imaging, tmax only showed a mild reduction (7-11%) although SNR(t) was significantly diminished (25%). In 7 healthy subjects high-resolution (2 x 2 x 2 mm3) fMRI was compared with standard fMRI (3 x 3 x 3 mm3) in a 32-channel coil and with high-resolution fMRI in a 12-channel coil. The new coil yielded a clear improvement for tmax (21-32%) and SNR(t) (51%) in comparison with the 12-channel coil. Geometric distortions were smaller due to the smaller voxel size. Therefore, the reduction in tmax (8-16%) and SNR(t) (52%) in the high-resolution experiment seems to be tolerable with this coil. In conclusion, parallel imaging is an alternative to reduce geometric distortions in fMRI at 1.5 T. Using a 32-channel coil, reduction of the voxel size might be the preferable way to improve spatial accuracy.
ANTI-PARALLEL EUV FLOWS OBSERVED ALONG ACTIVE REGION FILAMENT THREADS WITH HI-C
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, Caroline E.; Walsh, Robert W.; Régnier, Stéphane
Plasma flows within prominences/filaments have been observed for many years and hold valuable clues concerning the mass and energy balance within these structures. Previous observations of these flows primarily come from Hα and cool extreme-ultraviolet (EUV) lines (e.g., 304 Å) where estimates of the size of the prominence threads have been limited by the resolution of the available instrumentation. Evidence of 'counter-streaming' flows has previously been inferred from these cool plasma observations, but now, for the first time, these flows have been directly imaged along fundamental filament threads within the million degree corona (at 193 Å). In this work, we present observations of an AR filament observed with the High-resolution Coronal Imager (Hi-C) that exhibits anti-parallel flows along adjacent filament threads. Complementary data from the Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly (AIA) and Helioseismic and Magnetic Imager are presented. The ultra-high spatial and temporal resolution of Hi-C allow the anti-parallel flow velocities to be measured (70-80 km s⁻¹) and gives an indication of the resolvable thickness of the individual strands (0.8″ ± 0.1″). The temperature of the plasma flows was estimated to be log T (K) = 5.45 ± 0.10 using Emission Measure loci analysis. We find that SDO/AIA cannot clearly observe these anti-parallel flows or measure their velocity or thread width due to its larger pixel size. We suggest that anti-parallel/counter-streaming flows are likely commonplace within all filaments and are currently not observed in EUV due to current instrument spatial resolution.
Development of Parallel Code for the Alaska Tsunami Forecast Model
NASA Astrophysics Data System (ADS)
Bahng, B.; Knight, W. R.; Whitmore, P.
2014-12-01
The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion. That is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communications between domains of each parent-child pair as waves get closer to coastal waters. Even with the pre-computation the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest resolution Digital Elevation Models (DEM) used by ATFM are 1/3 arc-seconds. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase the model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results with the long term aim of tsunami forecasts from source to high resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast model database with new DEMs; and, will make possible future inclusion of new physics such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.
Robust High-Resolution Cloth Using Parallelism, History-Based Collisions and Accurate Friction
Selle, Andrew; Su, Jonathan; Irving, Geoffrey; Fedkiw, Ronald
2015-01-01
In this paper we simulate high resolution cloth consisting of up to 2 million triangles which allows us to achieve highly detailed folds and wrinkles. Since the level of detail is also influenced by object collision and self collision, we propose a more accurate model for cloth-object friction. We also propose a robust history-based repulsion/collision framework where repulsions are treated accurately and efficiently on a per time step basis. Distributed memory parallelism is used for both time evolution and collisions and we specifically address Gauss-Seidel ordering of repulsion/collision response. This algorithm is demonstrated by several high-resolution and high-fidelity simulations. PMID:19147895
Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.
2014-01-01
Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868
Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P
2014-07-01
Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.
Choice of Grating Orientation for Evaluation of Peripheral Vision
Venkataraman, Abinaya Priya; Winter, Simon; Rosén, Robert; Lundström, Linda
2016-01-01
Purpose: Peripheral resolution acuity depends on the orientation of the stimuli. However, it is uncertain if such a meridional effect also exists for peripheral detection tasks because they are affected by optical errors. Knowledge of the quantitative differences in acuity for different grating orientations is crucial for choosing the appropriate stimuli for evaluations of peripheral resolution and detection tasks. We assessed resolution and detection thresholds for different grating orientations in the peripheral visual field. Methods: Resolution and detection thresholds were evaluated for gratings of four different orientations in eight different visual field meridians in the 20-deg visual field in white light. Detection measurements in monochromatic light (543 nm; bandwidth, 10 nm) were also performed to evaluate the effects of chromatic aberration on the meridional effect. A combination of trial lenses and an adaptive optics system was used to correct the monochromatic lower- and higher-order aberrations. Results: For both resolution and detection tasks, gratings parallel to the visual field meridian had better threshold compared with the perpendicular gratings, whereas the two oblique gratings had similar thresholds. The parallel and perpendicular grating acuity differences for resolution and detection tasks were 0.16 logMAR and 0.11 logMAD, respectively. Elimination of chromatic errors did not affect the meridional preference in detection acuity. Conclusions: Similar to peripheral resolution, detection also shows a meridional effect that appears to have a neural origin. The threshold difference seen for parallel and perpendicular gratings suggests the use of two oblique gratings as stimuli in alternative forced-choice procedures for peripheral vision evaluation to reduce measurement variation. PMID:26889822
Choice of Grating Orientation for Evaluation of Peripheral Vision.
Venkataraman, Abinaya Priya; Winter, Simon; Rosén, Robert; Lundström, Linda
2016-06-01
Peripheral resolution acuity depends on the orientation of the stimuli. However, it is uncertain if such a meridional effect also exists for peripheral detection tasks because they are affected by optical errors. Knowledge of the quantitative differences in acuity for different grating orientations is crucial for choosing the appropriate stimuli for evaluations of peripheral resolution and detection tasks. We assessed resolution and detection thresholds for different grating orientations in the peripheral visual field. Resolution and detection thresholds were evaluated for gratings of four different orientations in eight different visual field meridians in the 20-deg visual field in white light. Detection measurements in monochromatic light (543 nm; bandwidth, 10 nm) were also performed to evaluate the effects of chromatic aberration on the meridional effect. A combination of trial lenses and adaptive optics system was used to correct the monochromatic lower- and higher-order aberrations. For both resolution and detection tasks, gratings parallel to the visual field meridian had better threshold compared with the perpendicular gratings, whereas the two oblique gratings had similar thresholds. The parallel and perpendicular grating acuity differences for resolution and detection tasks were 0.16 logMAR and 0.11 logMAD, respectively. Elimination of chromatic errors did not affect the meridional preference in detection acuity. Similar to peripheral resolution, detection also shows a meridional effect that appears to have a neural origin. The threshold difference seen for parallel and perpendicular gratings suggests the use of two oblique gratings as stimuli in alternative forced-choice procedures for peripheral vision evaluation to reduce measurement variation.
Parallelization of a blind deconvolution algorithm
NASA Astrophysics Data System (ADS)
Matson, Charles L.; Borelli, Kathy J.
2006-09-01
Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.
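As a concrete illustration of the joint estimation that blind deconvolution performs, the following is a minimal sketch in Python/NumPy of an alternating Richardson-Lucy-style blind deconvolution loop. It is not the algorithm parallelized in the paper (the abstract does not specify one); the function name `rl_blind`, the update counts, and the initial kernel support are illustrative assumptions. Independent frames, and independent sub-frame tiles within them, are the natural units for the parallelization the abstract describes.

```python
import numpy as np
from scipy.signal import fftconvolve

def rl_blind(blurred, n_outer=10, n_inner=5, eps=1e-12):
    # Hypothetical sketch: alternating Richardson-Lucy updates that jointly
    # estimate the image and the blur kernel from a single blurred frame.
    b = np.asarray(blurred, dtype=float)
    img = np.full_like(b, b.mean())
    psf = np.zeros_like(b)
    cy, cx = psf.shape[0] // 2, psf.shape[1] // 2
    psf[cy - 3:cy + 4, cx - 3:cx + 4] = 1.0       # crude initial kernel support
    psf /= psf.sum()
    for _ in range(n_outer):
        for _ in range(n_inner):                  # refine PSF, image held fixed
            ratio = b / (fftconvolve(img, psf, mode="same") + eps)
            psf *= fftconvolve(ratio, img[::-1, ::-1], mode="same")
            psf = np.clip(psf, 0.0, None)
            psf /= psf.sum() + eps
        for _ in range(n_inner):                  # refine image, PSF held fixed
            ratio = b / (fftconvolve(img, psf, mode="same") + eps)
            img *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")
    return img, psf
```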
Parallelization and Algorithmic Enhancements of High Resolution IRAS Image Construction
NASA Technical Reports Server (NTRS)
Cao, Yu; Prince, Thomas A.; Terebey, Susan; Beichman, Charles A.
1996-01-01
The Infrared Astronomical Satellite carried out a nearly complete survey of the infrared sky, and the survey data are important for the study of many astrophysical phenomena. However, many data sets at other wavelengths have higher resolutions than that of the co-added IRAS maps, and high resolution IRAS images are strongly desired both for their own information content and their usefulness in correlation. The HIRES program was developed by the Infrared Processing and Analysis Center (IPAC) to produce high resolution (approx. 1') images from IRAS data using the Maximum Correlation Method (MCM). We describe the port of HIRES to the Intel Paragon, a massively parallel supercomputer, other software developments for mass production of HIRES images, and the IRAS Galaxy Atlas, a project to map the Galactic plane at 60 and 100 μm.
Controllability of control and mixture weakly dependent siphons in S3PR
NASA Astrophysics Data System (ADS)
Hong, Liang; Chao, Daniel Y.
2013-08-01
Deadlocks in a flexible manufacturing system modelled by Petri nets arise from insufficiently marked siphons. Monitors are added to control these siphons and thereby avoid deadlocks, but this renders the system too complicated since the total number of monitors grows exponentially. Li and Zhou propose to add monitors only to elementary siphons while controlling the other (strongly or weakly) dependent siphons by adjusting control depth variables. To avoid generating new siphons, the control arcs are ended at the source transitions of process nets. This disturbs the original model more and hence loses more live states. Negative terms in the controllability make the control policy for weakly dependent siphons rather conservative. We studied earlier the controllability of strongly dependent siphons and proposed to add monitors in the order of basic, compound, control, partial mixture and full mixture (strongly dependent) siphons to reduce the number of mixed integer programming iterations and redundant monitors. This article further investigates the controllability of siphons derived from weakly 2-compound siphons. We discover that the controllability for weakly and strongly compound siphons is similar, but that this similarity no longer holds for control and mixture siphons. Some control and mixture siphons derived from strongly 2-compound siphons are not redundant; in contrast, all control and mixture siphons derived from weakly 2-compound siphons are redundant. Their control need not be the conservative one proposed by Li and Zhou. Thus, we can adopt the maximally permissive control policy even though new siphons are generated.
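For readers unfamiliar with the terminology, the structural property at the heart of this line of work can be sketched in a few lines of Python (a hedged illustration, not the authors' method): a siphon is a place set that, once emptied, can never be refilled, which is why keeping every elementary siphon sufficiently marked via monitor places prevents deadlock.

```python
def is_siphon(places, pre, post):
    """Check whether `places` is a siphon of a Petri net.

    pre[t] / post[t]: sets of input / output places of transition t.
    A siphon S satisfies preset(S) <= postset(S): every transition that can
    deposit a token into S must also consume one from S, so an empty siphon
    stays empty forever and permanently disables its output transitions.
    """
    S = set(places)
    return all(pre[t] & S for t in pre if post[t] & S)

def siphon_controlled(places, marking, minimum=1):
    # A monitor place enforces this marking invariant in the supervised net.
    return sum(marking[p] for p in places) >= minimum

# Tiny net: t1 moves a token p1 -> p2, t2 moves it back.
pre = {"t1": {"p1"}, "t2": {"p2"}}
post = {"t1": {"p2"}, "t2": {"p1"}}
print(is_siphon({"p1", "p2"}, pre, post))                    # True
print(siphon_controlled({"p1", "p2"}, {"p1": 1, "p2": 0}))   # True: still marked
```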
Bedez, Mathieu; Belhachmi, Zakaria; Haeberlé, Olivier; Greget, Renaud; Moussaoui, Saliha; Bouteiller, Jean-Marie; Bischoff, Serge
2016-01-15
The resolution of a model describing the electrical activity of neural tissue and its propagation within this tissue is highly demanding in terms of computing time and requires strong computing power to achieve good results. In this study, we present a method to solve a model describing the electrical propagation in neuronal tissue, using the parareal algorithm coupled with spatial parallelization using CUDA on a graphics processing unit (GPU). We applied the method of resolution to different dimensions of the geometry of our model (1-D, 2-D and 3-D). The GPU results are compared with simulations from a multi-core processor cluster, using the message-passing interface (MPI), where the spatial scale was parallelized in order to reach a calculation time comparable to that of the presented GPU method. A gain of a factor of 100 in computational time between the sequential results and those obtained using the GPU was achieved in the case of 3-D geometry. Given the structure of the GPU, this factor increases with the fineness of the geometry used in the computation. To the best of our knowledge, it is the first time such a method has been used, at least in the field of neuroscience. Time parallelization coupled with GPU space parallelization drastically reduces computational time while preserving a fine resolution of the model describing the propagation of the electrical signal in a neuronal tissue. Copyright © 2015 Elsevier B.V. All rights reserved.
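The parareal algorithm referenced above combines a cheap coarse propagator with an accurate fine propagator whose evaluations over the time slices are independent and can run concurrently. Below is a minimal serial sketch in Python/NumPy; the forward-Euler propagators, step counts, and parameter names are illustrative assumptions, and in the paper's setting the fine sweep would run on the GPU (with the spatial work parallelized in CUDA) rather than in a Python loop.

```python
import numpy as np

def parareal(f, y0, t0, t1, n_slices, coarse_steps=1, fine_steps=50, iters=5):
    """Parareal time-parallel integration sketch for y' = f(t, y)."""
    def euler(y, ta, tb, n):
        h = (tb - ta) / n
        for k in range(n):
            y = y + h * f(ta + k * h, y)
        return y

    ts = np.linspace(t0, t1, n_slices + 1)
    U = [y0]
    for i in range(n_slices):                          # initial coarse sweep
        U.append(euler(U[-1], ts[i], ts[i + 1], coarse_steps))
    for _ in range(iters):
        # Fine propagation of every slice: the embarrassingly parallel part.
        F = [euler(U[i], ts[i], ts[i + 1], fine_steps) for i in range(n_slices)]
        G_old = [euler(U[i], ts[i], ts[i + 1], coarse_steps) for i in range(n_slices)]
        U_new = [y0]
        for i in range(n_slices):                      # sequential correction
            g_new = euler(U_new[-1], ts[i], ts[i + 1], coarse_steps)
            U_new.append(g_new + F[i] - G_old[i])
        U = U_new
    return ts, U

# Example: exponential decay y' = -y on [0, 5], ten time slices.
ts, U = parareal(lambda t, y: -y, 1.0, 0.0, 5.0, n_slices=10)
```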
NASA Astrophysics Data System (ADS)
Marx, Alain; Lütjens, Hinrich
2017-03-01
A hybrid MPI/OpenMP parallel version of the XTOR-2F code [Lütjens and Luciani, J. Comput. Phys. 229 (2010) 8130], solving the two-fluid MHD equations in full tokamak geometry by means of an iterative Newton-Krylov matrix-free method, has been developed. The present work shows that the code has been parallelized significantly despite the numerical profile of the problem solved by XTOR-2F, i.e. a discretization with pseudo-spectral representations in all angular directions, the stiffness of the two-fluid stability problem in tokamaks, and the use of a direct LU decomposition to invert the physical pre-conditioner at every Krylov iteration of the solver. The execution time of the parallelized version is an order of magnitude smaller than that of the sequential one for low resolution cases, with an increasing speedup as the discretization mesh is refined. Moreover, it makes it possible to perform simulations at higher resolutions, previously out of reach because of memory limitations.
Narcissism and Object Relations in Hypochondria.
Albarracin, Dolorès
2015-08-01
Hypochondria remains little studied from a theoretical point of view. Whereas psychoanalysts know how difficult it is to handle hypochondriac subjects, the few works studying the relationship between patients and physicians resort to a cognitive-behavioral approach. The latter conclude that the quality of this relationship is more important than the disappearance of the symptoms. The aim of this work is to show how psychoanalysis can conceptualize hypochondria as a disruption of narcissism, leading to an apparent relational deadlock. Considering hypochondria as a narcissistic transference constitutes a useful contribution of psychoanalysis to medicine and psychotherapeutic care.
NASA Technical Reports Server (NTRS)
Mehlitz, Peter
2005-01-01
JPF is an explicit-state software model checker for Java bytecode. Today, JPF is a Swiss army knife for all sorts of runtime-based verification purposes. This basically means JPF is a Java virtual machine that executes your program not just once (like a normal VM), but theoretically in all possible ways, checking for property violations like deadlocks or unhandled exceptions along all potential execution paths. If it finds an error, JPF reports the whole execution that leads to it. Unlike a normal debugger, JPF keeps track of every step of how it got to the defect.
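To make the idea concrete, here is a toy explicit-state search in Python, a hedged sketch of the concept rather than JPF itself or its Java bytecode semantics: two threads acquire two locks in opposite orders, every interleaving is explored breadth-first, and the first reachable state in which no thread can move and the program has not finished is reported together with the schedule that leads to it.

```python
from collections import deque

PROGRAMS = {
    "T1": [("acquire", "A"), ("acquire", "B"), ("release", "B"), ("release", "A")],
    "T2": [("acquire", "B"), ("acquire", "A"), ("release", "A"), ("release", "B")],
}
TIDS = tuple(PROGRAMS)

def step(state, tid):
    """Return the successor state if thread `tid` can move, else None."""
    pcs, locks = state
    pc = dict(pcs)[tid]
    if pc >= len(PROGRAMS[tid]):
        return None                                # thread already finished
    op, lock = PROGRAMS[tid][pc]
    held = dict(locks)
    if op == "acquire":
        if lock in held:                           # lock taken: thread blocks
            return None
        held[lock] = tid
    else:
        del held[lock]
    pcs2 = tuple((t, p + 1 if t == tid else p) for t, p in pcs)
    return (pcs2, tuple(sorted(held.items())))

def find_deadlock():
    init = (tuple((t, 0) for t in TIDS), ())
    seen, queue = {init}, deque([(init, [])])
    while queue:
        state, trace = queue.popleft()
        live = [(t, s) for t in TIDS if (s := step(state, t)) is not None]
        done = all(p >= len(PROGRAMS[t]) for t, p in state[0])
        if not live and not done:
            return trace                           # deadlock: nobody can move
        for t, s in live:
            if s not in seen:
                seen.add(s)
                queue.append((s, trace + [t]))
    return None

print(find_deadlock())   # ['T1', 'T2']: each thread grabbed its first lock
```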
Project Integration Architecture: Distributed Lock Management, Deadlock Detection, and Set Iteration
NASA Technical Reports Server (NTRS)
Jones, William Henry
2005-01-01
The migration of the Project Integration Architecture (PIA) to the distributed object environment of the Common Object Request Broker Architecture (CORBA) brings with it the nearly unavoidable requirements of multiaccessor, asynchronous operations. In order to maintain the integrity of data structures in such an environment, it is necessary to provide a locking mechanism capable of protecting the complex operations typical of the PIA architecture. This paper reports on the implementation of a locking mechanism to treat that need. Additionally, the ancillary features necessary to make the distributed lock mechanism work are discussed.
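The abstract does not detail the detection mechanism, but the classical building block for deadlock detection in such a lock manager is a cycle search over the transaction wait-for graph. The Python sketch below (names and structure are our own illustrative assumptions, not PIA's API) returns one cycle of mutually waiting lock holders, which a manager could break by aborting a participant.

```python
def find_deadlock_cycle(wait_for):
    """Detect a cycle in a transaction wait-for graph via iterative DFS.

    wait_for: dict mapping each transaction to the transactions whose
    locks it is waiting on. Returns one deadlocked cycle, or None.
    """
    WHITE, GREY, BLACK = 0, 1, 2
    color = {t: WHITE for t in wait_for}

    def dfs(start):
        stack = [(start, iter(wait_for.get(start, ())))]
        color[start] = GREY
        path = [start]                       # grey nodes on the current path
        while stack:
            node, it = stack[-1]
            for nxt in it:
                c = color.get(nxt, WHITE)
                if c == GREY:                # back edge: a wait cycle
                    return path[path.index(nxt):] + [nxt]
                if c == WHITE:
                    color[nxt] = GREY
                    path.append(nxt)
                    stack.append((nxt, iter(wait_for.get(nxt, ()))))
                    break
            else:                            # all successors explored
                color[node] = BLACK
                path.pop()
                stack.pop()
        return None

    for t in wait_for:
        if color[t] == WHITE:
            cycle = dfs(t)
            if cycle:
                return cycle
    return None

# T1 waits on T2, T2 on T3, T3 back on T1 -> ['T1', 'T2', 'T3', 'T1']
print(find_deadlock_cycle({"T1": ["T2"], "T2": ["T3"], "T3": ["T1"]}))
```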
High-resolution brain SPECT imaging by combination of parallel and tilted detector heads.
Suzuki, Atsuro; Takeuchi, Wataru; Ishitsu, Takafumi; Morimoto, Yuichi; Kobashi, Keiji; Ueno, Yuichiro
2015-10-01
To improve the spatial resolution of brain single-photon emission computed tomography (SPECT), we propose a new brain SPECT system in which the detector heads are tilted towards the rotation axis so that they are closer to the brain. In addition, parallel detector heads are used to obtain the complete projection data set. We evaluated this parallel and tilted detector head system (PT-SPECT) in simulations. In the simulation study, the tilt angle of the detector heads relative to the axis was 45°. The distance from the collimator surface of the parallel detector heads to the axis was 130 mm. The distance from the collimator surface of the tilted detector heads to the origin on the axis was 110 mm. A CdTe semiconductor panel with a 1.4 mm detector pitch and a parallel-hole collimator were employed in both types of detector head. A line source phantom, cold-rod brain-shaped phantom, and cerebral blood flow phantom were evaluated. The projection data were generated by forward-projection of the phantom images using physics models, and Poisson noise at clinical levels was applied to the projection data. The ordered-subsets expectation maximization algorithm with physics models was used. We also evaluated conventional SPECT using four parallel detector heads for the sake of comparison. The evaluation of the line source phantom showed that the transaxial FWHM in the central slice for conventional SPECT ranged from 6.1 to 8.5 mm, while that for PT-SPECT ranged from 5.3 to 6.9 mm. The cold-rod brain-shaped phantom image showed that conventional SPECT could visualize rods down to 8 mm in diameter. By contrast, PT-SPECT could visualize rods down to 6 mm in diameter in upper slices of the cerebrum. The cerebral blood flow phantom image showed that the PT-SPECT system provided higher resolution at the thalamus and caudate nucleus as well as at the longitudinal fissure of the cerebrum compared with conventional SPECT. PT-SPECT provides improved image resolution not only at upper but also at central slices of the cerebrum.
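The reconstruction named above, ordered-subsets expectation maximization, is built on the MLEM update. As a hedged, geometry-free illustration (the actual system folds the parallel/tilted head geometry, collimator response, and other physics models into the system matrix), a minimal MLEM iteration in Python/NumPy looks like this; all names are illustrative.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    # A: (n_bins x n_voxels) system matrix; y: measured projection counts.
    x = np.ones(A.shape[1])              # flat initial activity estimate
    sens = A.sum(axis=0) + eps           # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x + eps               # forward projection of the estimate
        x *= (A.T @ (y / proj)) / sens   # multiplicative Poisson-EM update
    return x
```

The ordered-subsets variant (OSEM) applies this same update cyclically to subsets of the projection rows to accelerate convergence.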
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ban, H. Y.; Kavuri, V. C., E-mail: venk@physics.up
Purpose: The authors introduce a state-of-the-art all-optical clinical diffuse optical tomography (DOT) imaging instrument which collects spatially dense, multispectral, frequency-domain breast data in the parallel-plate geometry. Methods: The instrument utilizes a CCD-based heterodyne detection scheme that permits massively parallel detection of diffuse photon density wave amplitude and phase for a large number of source–detector pairs (10⁶). The stand-alone clinical DOT instrument thus offers high spatial resolution with reduced crosstalk between absorption and scattering. Other novel features include a fringe profilometry system for breast boundary segmentation, real-time data normalization, and a patient bed design which permits both axial and sagittal breast measurements. Results: The authors validated the instrument using tissue simulating phantoms with two different chromophore-containing targets and one scattering target. The authors also demonstrated the instrument in a case study breast cancer patient; the reconstructed 3D image of endogenous chromophores and scattering gave tumor localization in agreement with MRI. Conclusions: Imaging with a novel parallel-plate DOT breast imager that employs highly parallel, high-resolution CCD detection in the frequency-domain was demonstrated.
Multiplexed EFPI sensors with ultra-high resolution
NASA Astrophysics Data System (ADS)
Ushakov, Nikolai; Liokumovich, Leonid
2014-05-01
An investigation of the performance of multiplexed displacement sensors based on extrinsic Fabry-Perot interferometers has been carried out. We have considered serial and parallel configurations and analyzed the issues and advantages of both. We have also extended the previously developed baseline demodulation algorithm to the case of a system of multiplexed sensors. Serial and parallel multiplexing schemes have been experimentally implemented with 3 and 4 sensing elements, respectively. For both configurations the achieved baseline standard deviations were between 30 and 200 pm, which is, to the best of our knowledge, more than an order of magnitude finer than any other multiplexed EFPI resolution ever reported.
USDA-ARS?s Scientific Manuscript database
With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...
Simultaneous fluoroscopic and nuclear imaging: impact of collimator choice on nuclear image quality.
van der Velden, Sandra; Beijst, Casper; Viergever, Max A; de Jong, Hugo W A M
2017-01-01
X-ray-guided oncological interventions could benefit from the availability of simultaneously acquired nuclear images during the procedure. To this end, a real-time, hybrid fluoroscopic and nuclear imaging device, consisting of an X-ray c-arm combined with gamma imaging capability, is currently being developed (Beijst C, Elschot M, Viergever MA, de Jong HW. Radiol. 2015;278:232-238). The setup comprises four gamma cameras placed adjacent to the X-ray tube. The four camera views are used to reconstruct an intermediate three-dimensional image, which is subsequently converted to a virtual nuclear projection image that overlaps with the X-ray image. The purpose of the present simulation study is to evaluate the impact of gamma camera collimator choice (parallel hole versus pinhole) on the quality of the virtual nuclear image. Simulation studies were performed with a digital image quality phantom including realistic noise and resolution effects, with a dynamic frame acquisition time of 1 s and a total activity of 150 MBq. Projections were simulated for 3, 5, and 7 mm pinholes and for three parallel hole collimators (low-energy all-purpose (LEAP), low-energy high-resolution (LEHR) and low-energy ultra-high-resolution (LEUHR)). Intermediate reconstruction was performed with maximum likelihood expectation-maximization (MLEM) with point spread function (PSF) modeling. In the virtual projection derived therefrom, contrast, noise level, and detectability were determined and compared with the ideal projection, that is, as if a gamma camera were located at the position of the X-ray detector. Furthermore, image deformations and spatial resolution were quantified. Additionally, simultaneous fluoroscopic and nuclear images of a sphere phantom were acquired with a physical prototype system and compared with the simulations. For small hot spots, contrast is comparable for all simulated collimators. Noise levels are, however, 3 to 8 times higher in pinhole geometries than in parallel hole geometries. This results in higher contrast-to-noise ratios for parallel hole geometries. Smaller spheres can thus be detected with parallel hole collimators than with pinhole collimators (17 mm vs 28 mm). Pinhole geometries show larger image deformations than parallel hole geometries. Spatial resolution varied between 1.25 cm for the 3 mm pinhole and 4 cm for the LEAP collimator. The simulation method was successfully validated by the experiments with the physical prototype. A real-time hybrid fluoroscopic and nuclear imaging device is currently being developed. Image quality of nuclear images obtained with different collimators was compared in terms of contrast, noise, and detectability. Parallel hole collimators showed lower noise and better detectability than pinhole collimators. © 2016 American Association of Physicists in Medicine.
Van Audenhaege, Karen; Van Holen, Roel; Vandenberghe, Stefaan; Vanhove, Christian; Metzler, Scott D.; Moore, Stephen C.
2015-01-01
In single photon emission computed tomography, the choice of the collimator has a major impact on the sensitivity and resolution of the system. Traditional parallel-hole and fan-beam collimators used in clinical practice, for example, have a relatively poor sensitivity and subcentimeter spatial resolution, while in small-animal imaging, pinhole collimators are used to obtain submillimeter resolution and multiple pinholes are often combined to increase sensitivity. This paper reviews methods for production, sensitivity maximization, and task-based optimization of collimation for both clinical and preclinical imaging applications. New opportunities for improved collimation are now arising primarily because of (i) new collimator-production techniques and (ii) detectors with improved intrinsic spatial resolution that have recently become available. These new technologies are expected to impact the design of collimators in the future. The authors also discuss concepts like septal penetration, high-resolution applications, multiplexing, sampling completeness, and adaptive systems, and the authors conclude with an example of an optimization study for a parallel-hole, fan-beam, cone-beam, and multiple-pinhole collimator for different applications. PMID:26233207
Fiber optic cable-based high-resolution, long-distance VGA extenders
NASA Astrophysics Data System (ADS)
Rhee, Jin-Geun; Lee, Iksoo; Kim, Heejoon; Kim, Sungjoon; Koh, Yeon-Wan; Kim, Hoik; Lim, Jiseok; Kim, Chur; Kim, Jungwon
2013-02-01
Remote transfer of high-resolution video information finds growing application in detached displays for large facilities such as theaters, sports complexes, airports, and security facilities. Active optical cables (AOCs) provide a promising approach for enhancing both the transmittable resolution and the distance that standard copper-based cables cannot reach. In addition to standard digital formats such as HDMI, the high-resolution, long-distance transfer of VGA format signals is important for applications where high-resolution analog video ports must also be supported, such as military/defense applications and high-resolution video camera links. In this presentation we present the development of compressionless, high-resolution (up to WUXGA, 1920x1200), long-distance (up to 2 km) VGA extenders based on a serialized transmission technique. We employed asynchronous serial transmission and clock regeneration techniques, which enable lower-cost implementation of VGA extenders by removing the need for clock transmission and large memory at the receiver. Two 3.125-Gbps transceivers are used in parallel to meet the required maximum video data rate of 6.25 Gbps. As the data are transmitted asynchronously, a 24-bit pixel clock time stamp is employed to regenerate the video pixel clock accurately at the receiver side. In parallel to the video information, stereo audio and RS-232 control signals are transmitted as well.
Driver, Christine
2005-04-01
This paper explores the dynamics brought into analytic work when there is a symmetric fusion between psyche and soma within the patient. It will consider how such a fusion may emerge from reverberations between physical constitution and a lack of maternal attunement, containment and reflective function. I will describe the work with a patient, Jane, who was diagnosed with Myalgic Encephalomyelitis (ME) during the course of her analysis. The dynamic of her physical symptoms within the analytic work, and the impact of her internal affects and internal 'objects' within the transference and countertransference, indicated a difficulty in finding a homeostatic balance, resulting in overactivity and underactivity at both somatic and psychological levels. Using the clinical work with Jane this paper will also examine the interrelationship between mother-infant attachment, an inadequate internalized maternal reflective function, affect dysregulation, unconscious fusion, the lack of psyche-soma differentiation and the impact of the latter in relation to internal regulation systems, or lack thereof, in patients with Chronic Fatigue Syndrome (CFS) and Myalgic Encephalomyelitis (ME). I will draw on similar work carried out by Holland (1997), Simpson (1997) and Simpson et al. (1997). The paper will also employ the concept of the reflective function (Fonagy 2001; Knox 2003), and consider Matte-Blanco's (1999) concepts of generalization and unconscious symmetry in relation to the patient's internal world. I go on to consider how analysis provides a point outside the 'fusion' that can enable the 'deadlock' to be broken.
Dynamic grid refinement for partial differential equations on parallel computers
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems.
Novel 16-channel receive coil array for accelerated upper airway MRI at 3 Tesla.
Kim, Yoon-Chul; Hayes, Cecil E; Narayanan, Shrikanth S; Nayak, Krishna S
2011-06-01
Upper airway MRI can provide a noninvasive assessment of speech and swallowing disorders and sleep apnea. Recent work has demonstrated the value of high-resolution three-dimensional imaging and dynamic two-dimensional imaging and the importance of further improvements in spatio-temporal resolution. The purpose of the study was to describe a novel 16-channel 3 Tesla receive coil that is highly sensitive to the human upper airway and investigate the performance of accelerated upper airway MRI with the coil. In three-dimensional imaging of the upper airway during static posture, 6-fold acceleration is demonstrated using parallel imaging, potentially leading to capturing a whole three-dimensional vocal tract with 1.25 mm isotropic resolution within 9 sec of sustained sound production. Midsagittal spiral parallel imaging of vocal tract dynamics during natural speech production is demonstrated with 2 × 2 mm(2) in-plane spatial and 84 ms temporal resolution. Copyright © 2010 Wiley-Liss, Inc.
Time-frequency model for echo-delay resolution in wideband biosonar.
Neretti, Nicola; Sanderson, Mark I; Intrator, Nathan; Simmons, James A
2003-04-01
A time/frequency model of the bat's auditory system was developed to examine the basis for the fine (approximately 2 μs) echo-delay resolution of big brown bats (Eptesicus fuscus), and its performance at resolving closely spaced FM sonar echoes in the bat's 20-100-kHz band at different signal-to-noise ratios was computed. The model uses parallel bandpass filters spaced over this band to generate envelopes that individually can have much lower bandwidth than the bat's ultrasonic sonar sounds and still achieve fine delay resolution. Because fine delay separations are inside the integration time of the model's filters (approximately 250-300 μs), resolving them means using interference patterns along the frequency dimension (spectral peaks and notches). The low bandwidth content of the filter outputs is suitable for relay of information to higher auditory areas that have intrinsically poor temporal response properties. If implemented in fully parallel analog-digital hardware, the model is computationally extremely efficient and would improve resolution in military and industrial sonar receivers.
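A minimal version of such a parallel-filterbank front end can be sketched in Python with SciPy. This is an illustrative stand-in, not the published model: the channel count, quarter-octave bandwidths, and logarithmic spacing are assumptions, and the model's 250-300 μs integration stage is not reproduced. Each channel envelope is narrowband, yet notches produced by overlapping echoes remain visible as a pattern across channel center frequencies (the sampling rate is assumed to satisfy fs > 2*f_hi, e.g. 400 kHz).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def filterbank_envelopes(signal, fs, n_channels=20, f_lo=20e3, f_hi=100e3):
    """Envelopes of parallel bandpass channels spanning 20-100 kHz."""
    centres = np.geomspace(f_lo, f_hi, n_channels)   # log-spaced channels
    envelopes = []
    for fc in centres:
        lo, hi = fc / 1.15, min(fc * 1.15, 0.49 * fs)   # ~1/4-octave band
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)                  # zero-phase filtering
        envelopes.append(np.abs(hilbert(band)))          # analytic envelope
    return centres, np.array(envelopes)
```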
Simultaneous Multi-Slice fMRI using Spiral Trajectories
Zahneisen, Benjamin; Poser, Benedikt A.; Ernst, Thomas; Stenger, V. Andrew
2014-01-01
Parallel imaging methods using multi-coil receiver arrays have been shown to be effective for increasing MRI acquisition speed. However parallel imaging methods for fMRI with 2D sequences show only limited improvements in temporal resolution because of the long echo times needed for BOLD contrast. Recently, Simultaneous Multi-Slice (SMS) imaging techniques have been shown to increase fMRI temporal resolution by factors of four and higher. In SMS fMRI multiple slices can be acquired simultaneously using Echo Planar Imaging (EPI) and the overlapping slices are un-aliased using a parallel imaging reconstruction with multiple receivers. The slice separation can be further improved using the “blipped-CAIPI” EPI sequence that provides a more efficient sampling of the SMS 3D k-space. In this paper a blipped-spiral SMS sequence for ultra-fast fMRI is presented. The blipped-spiral sequence combines the sampling efficiency of spiral trajectories with the SMS encoding concept used in blipped-CAIPI EPI. We show that blipped spiral acquisition can achieve almost whole brain coverage at 3 mm isotropic resolution in 168 ms. It is also demonstrated that the high temporal resolution allows for dynamic BOLD lag time measurement using visual/motor and retinotopic mapping paradigms. The local BOLD lag time within the visual cortex following the retinotopic mapping stimulation of expanding flickering rings is directly measured and easily translated into an eccentricity map of the cortex. PMID:24518259
Matthew Parks; Richard Cronn; Aaron Liston
2009-01-01
We reconstruct the infrageneric phylogeny of Pinus from 37 nearly-complete chloroplast genomes (average 109 kilobases each of an approximately 120 kilobase genome) generated using multiplexed massively parallel sequencing. We found that 30/33 ingroup nodes resolved with > 95-percent bootstrap support; this is a substantial improvement relative...
NASA Astrophysics Data System (ADS)
Yedra, Lluís; Eswara, Santhana; Dowsett, David; Wirtz, Tom
2016-06-01
Isotopic analysis is of paramount importance across the entire gamut of scientific research. To advance the frontiers of knowledge, a technique for nanoscale isotopic analysis is indispensable. Secondary Ion Mass Spectrometry (SIMS) is a well-established technique for analyzing isotopes, but its spatial resolution is fundamentally limited. Transmission Electron Microscopy (TEM) is a well-known method for high-resolution imaging down to the atomic scale. However, isotopic analysis in TEM is not possible. Here, we introduce a powerful new paradigm for in-situ correlative microscopy called Parallel Ion Electron Spectrometry, which synergizes SIMS with TEM. We demonstrate this technique by distinguishing lithium carbonate nanoparticles according to the isotopic label of lithium, viz. ⁶Li and ⁷Li, and imaging them at high resolution by TEM, adding a new dimension to correlative microscopy.
Analysis techniques for diagnosing runaway ion distributions in the reversed field pinch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, J., E-mail: jkim536@wisc.edu; Anderson, J. K.; Capecchi, W.
2016-11-15
An advanced neutral particle analyzer (ANPA) on the Madison Symmetric Torus measures deuterium ions over an energy range of 8-45 keV with an energy resolution of 2-4 keV and a time resolution of 10 μs. Three different experimental configurations measure distinct portions of the naturally occurring fast ion distributions: fast ions moving parallel, anti-parallel, or perpendicular to the plasma current. On a radial-facing port, fast ions moving perpendicular to the current have the necessary pitch to be measured by the ANPA. With the diagnostic positioned on a tangent line through the plasma core, a chord integration over fast ion density, background neutral density, and locally appropriate pitch defines the measured sample. The plasma current can be reversed to measure anti-parallel fast ions in the same configuration. Comparisons of energy distributions for the three configurations show an anisotropic fast ion distribution favoring high pitch ions.
Far Infrared Imaging Spectrometer for Large Aperture Infrared Telescope System
1985-12-01
resolution Fabry-Perot spectrometer (10³ < Resolution < 10⁴) for wavelengths from about 50 to 200 micrometers, employing extended field diffraction limited...photometry. The Naval Research Laboratory will provide a high resolution Far Infrared Imaging Spectrometer (FIRIS) using Fabry-Perot techniques in...detectors to provide spatial information. The Fabry-Perot uses electromagnetic coil displacement drivers with a lead screw drive to obtain parallel
State 'laboratories' test health care reform solutions.
Elliott, B A
1993-02-01
Widely recognized by the states as a pressing policy issue, health care reform appears to have moved up on the national policy agenda as well. President Clinton has promised to address the issue during his first 100 days in office. Previously, however, the federal government has been deadlocked on health care reform, leaving the states to become the laboratories for developing and testing proposed solutions to our health care crisis. By passing MinnesotaCare in last year's legislative session, Minnesota joined the growing number of states attempting to provide access to affordable, quality health care to their citizens.
ERIC Educational Resources Information Center
Gow, David W., Jr.; Keller, Corey J.; Eskandar, Emad; Meng, Nate; Cash, Sydney S.
2009-01-01
In this work, we apply Granger causality analysis to high spatiotemporal resolution intracranial EEG (iEEG) data to examine how different components of the left perisylvian language network interact during spoken language perception. The specific focus is on the characterization of serial versus parallel processing dependencies in the dominant…
Competitive Parallel Processing For Compression Of Data
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Fender, Antony R. H.
1990-01-01
Momentarily-best compression algorithm selected. Proposed competitive-parallel-processing system compresses data for transmission in channel of limited bandwidth. Likely application for compression lies in high-resolution, stereoscopic color-television broadcasting. Data from information-rich source like color-television camera compressed by several processors, each operating with different algorithm. Referee processor selects momentarily-best compressed output.
A Successful Test of Parallel Replication Teams in Teaching Research Methods
ERIC Educational Resources Information Center
Standing, Lionel G.; Astrologo, Lisa; Benbow, Felecia F.; Cyr-Gauthier, Chelsea S.; Williams, Charlotte A.
2016-01-01
This paper describes the novel use of parallel student teams from a research methods course to perform a replication study, and suggests that this approach offers pedagogical benefits for both students and teachers, as well as potentially contributing to a resolution of the replication crisis in psychology today. Four teams, of five undergraduates…
Parallel detecting super-resolution microscopy using correlation based image restoration
NASA Astrophysics Data System (ADS)
Yu, Zhongzhi; Liu, Shaocong; Zhu, Dazhao; Kuang, Cuifang; Liu, Xu
2017-12-01
A novel approach to image restoration is proposed in which each detector's relative position in the detector array is no longer a necessity. We can identify each detector's relative location by extracting a certain area from one detector's image and scanning it across the other detectors' images. According to this location, we can generate the point spread functions (PSF) for each detector and perform deconvolution for image restoration. Equipped with this method, a microscope with an arbitrarily designed detector array can be easily constructed without concern for the exact relative locations of the detectors. The simulated and experimental results show a total improvement in resolution by a factor of 1.7 compared to conventional confocal fluorescence microscopy. With its significant enhancement in resolution and ease of application, this method should have potential for a wide range of applications in fluorescence microscopy based on parallel detection.
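The registration-by-correlation idea described above can be sketched as follows in Python/NumPy. This is a simplified stand-in for the paper's method: instead of the full per-detector PSF generation and deconvolution, the second function applies the common pixel-reassignment heuristic of shifting each detector image by half its estimated offset before summing; both function names and the half-shift choice are our own assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def detector_offsets(images, ref=0):
    """Estimate each detector's shift relative to a reference detector by
    locating the peak of the cross-correlation of their scan images."""
    offsets = []
    r = images[ref] - images[ref].mean()
    for img in images:
        c = fftconvolve(img - img.mean(), r[::-1, ::-1], mode="same")
        dy, dx = np.unravel_index(np.argmax(c), c.shape)
        cy, cx = c.shape[0] // 2, c.shape[1] // 2
        offsets.append((dy - cy, dx - cx))
    return offsets

def shift_and_add(images, offsets):
    """Pixel reassignment: shift each detector image by half its offset
    and sum, sharpening the composite image."""
    out = np.zeros_like(images[0], dtype=float)
    for img, (dy, dx) in zip(images, offsets):
        out += np.roll(img, (-dy // 2, -dx // 2), axis=(0, 1))
    return out
```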
Model-based spectral estimation of Doppler signals using parallel genetic algorithms.
Solano González, J; Rodríguez Vázquez, K; García Nocetti, D F
2000-05-01
Conventional spectral analysis methods use a fast Fourier transform (FFT) on consecutive or overlapping windowed data segments. For Doppler ultrasound signals, this approach suffers from an inadequate frequency resolution due to the time segment duration and the non-stationarity characteristics of the signals. Parametric or model-based estimators can give significant improvements in the time-frequency resolution at the expense of a higher computational complexity. This work describes an approach which implements in real-time a parametric spectral estimation method using genetic algorithms (GAs) in order to find the optimum set of parameters for the adaptive filter that minimises the error function. The aim is to reduce the computational complexity of the conventional algorithm by using the simplicity associated with GAs and exploiting their parallel characteristics. This will allow the implementation of higher order filters, increasing the spectrum resolution, and opening a greater scope for using more complex methods.
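As a hedged illustration of model-based spectral estimation with a GA (not the authors' implementation; the all-pole model order, operators, and rates below are assumptions), the sketch fits predictor coefficients by minimizing the mean squared prediction error. The fitness evaluations across the population are independent, which is exactly the part a parallel GA distributes; the power spectrum then follows from the fitted coefficients as 1/|1 - Σ a_k e^{-jωk}|².

```python
import numpy as np
rng = np.random.default_rng(0)

def prediction_error(a, x):
    # MSE of the all-pole predictor x[n] ~ sum_{k=1..p} a[k-1] * x[n-k].
    p = len(a)
    pred = np.zeros(len(x) - p)
    for k in range(p):
        pred += a[k] * x[p - 1 - k : len(x) - 1 - k]
    return float(np.mean((x[p:] - pred) ** 2))

def ga_ar_fit(x, order=6, pop=60, gens=80, mut=0.1):
    P = rng.uniform(-1.0, 1.0, (pop, order))         # random initial population
    for _ in range(gens):
        err = np.array([prediction_error(ind, x) for ind in P])  # parallelizable
        elite = P[np.argsort(err)[: pop // 2]]       # truncation selection
        n_kids = pop - len(elite)
        ma, pa = elite[rng.integers(0, len(elite), (2, n_kids))]
        cut = rng.integers(1, order, n_kids)[:, None]
        kids = np.where(np.arange(order)[None, :] < cut, ma, pa)  # crossover
        kids += mut * rng.standard_normal(kids.shape)             # mutation
        P = np.vstack([elite, kids])
    return min(P, key=lambda ind: prediction_error(ind, x))
```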
A mirror for lab-based quasi-monochromatic parallel x-rays
NASA Astrophysics Data System (ADS)
Nguyen, Thanhhai; Lu, Xun; Lee, Chang Jun; Jung, Jin-Ho; Jin, Gye-Hwan; Kim, Sung Youb; Jeon, Insu
2014-09-01
A multilayered parabolic mirror with six W/Al bilayers was designed and fabricated to generate monochromatic parallel x-rays using a lab-based x-ray source. Using this mirror, curved bright bands corresponding to the reflected x-rays were obtained in x-ray images. The parallelism of the reflected x-rays was investigated using the shape of the bands. The intensity and monochromatic characteristics of the reflected x-rays were evaluated through measurements of the x-ray spectra in the band. High-intensity, nearly monochromatic, parallel x-rays, which can be used for high resolution x-ray microscopes and local radiation therapy systems, were obtained.
Verification and Planning Based on Coinductive Logic Programming
NASA Technical Reports Server (NTRS)
Bansal, Ajay; Min, Richard; Simon, Luke; Mallya, Ajay; Gupta, Gopal
2008-01-01
Coinduction is a powerful technique for reasoning about unfounded sets, unbounded structures, infinite automata, and interactive computations [6]. Whereas induction corresponds to least fixed point semantics, coinduction corresponds to greatest fixed point semantics. Recently coinduction has been incorporated into logic programming and an elegant operational semantics developed for it [11, 12]. This operational semantics is the greatest fixed point counterpart of SLD resolution (SLD resolution imparts operational semantics to least fixed point based computations) and is termed co-SLD resolution. In co-SLD resolution, a predicate goal p(t) succeeds if it unifies with one of its ancestor calls. In addition, rational infinite terms are allowed as arguments of predicates. Infinite terms are represented as solutions to unification equations and the occurs check is omitted during the unification process. Coinductive Logic Programming (Co-LP) and co-SLD resolution can be used to elegantly perform model checking and planning. A combined SLD and co-SLD resolution based LP system forms the common basis for planning, scheduling, verification, model checking, and constraint solving [9, 4]. This is achieved by amalgamating SLD resolution, co-SLD resolution, and constraint logic programming [13] in a single logic programming system. Given that parallelism in logic programs can be implicitly exploited [8], complex, compute-intensive applications (planning, scheduling, model checking, etc.) can be executed in parallel on multi-core machines. Parallel execution can result in speed-ups as well as in larger instances of the problems being solved. In the remainder we elaborate on (i) how planning can be elegantly and efficiently performed under real-time constraints, (ii) how real-time systems can be elegantly and efficiently model-checked, as well as (iii) how hybrid systems can be verified in a combined system with both co-SLD and SLD resolution. Implementations of co-SLD resolution as well as preliminary implementations of the planning and verification applications have been developed [4]. Co-LP and Model Checking: The vast majority of properties that are to be verified can be classified into safety properties and liveness properties. It is well known within model checking that safety properties can be verified by reachability analysis, i.e., if a counter-example to the property exists, it can be finitely determined by enumerating all the reachable states of the Kripke structure.
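The operational reading of co-SLD resolution described above (a goal succeeds if it matches an ancestor call) can be illustrated for the propositional case in a few lines of Python. This is a hedged sketch: real co-LP systems perform full unification over rational infinite terms, which this toy omits.

```python
# Propositional co-SLD sketch: a goal succeeds coinductively if it matches
# an ancestor call (greatest fixed point reading); otherwise it is unfolded
# through program clauses, as in ordinary SLD resolution.
def co_sld(goal, program, ancestors=frozenset()):
    if goal in ancestors:                   # coinductive success: cyclic proof
        return True
    for body in program.get(goal, []):      # try each clause  goal :- body.
        if all(co_sld(g, program, ancestors | {goal}) for g in body):
            return True
    return False

# stream :- stream.  Ordinary SLD resolution would loop forever on this
# query; co-SLD succeeds because the call matches its own ancestor.
print(co_sld("stream", {"stream": [["stream"]]}))   # True
```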
A maximum likelihood method for high resolution proton radiography/proton CT
NASA Astrophysics Data System (ADS)
Collins-Fekete, Charles-Antoine; Brousmiche, Sébastien; Portillo, Stephen K. N.; Beaulieu, Luc; Seco, Joao
2016-12-01
Multiple Coulomb scattering (MCS) is the largest contributor to blurring in proton imaging. In this work, we developed a maximum likelihood least squares estimator that improves proton radiography’s spatial resolution. The water equivalent thicknesses (WET) through projections defined from the source to the detector pixels were estimated such that they maximize the likelihood of the energy loss of every proton crossing the volume. The length spent in each projection was calculated through the optimized cubic spline path estimate. The proton radiographies were produced using Geant4 simulations. Three phantoms were studied here: a slanted cube in a tank of water to measure 2D spatial resolution, a voxelized head phantom for clinical performance evaluation as well as a parametric Catphan phantom (CTP528) for 3D spatial resolution. Two proton beam configurations were used: a parallel and a conical beam. Proton beams of 200 and 330 MeV were simulated to acquire the radiography. Spatial resolution is increased from 2.44 lp cm⁻¹ to 4.53 lp cm⁻¹ in the 200 MeV beam and from 3.49 lp cm⁻¹ to 5.76 lp cm⁻¹ in the 330 MeV beam. Beam configurations do not affect the reconstructed spatial resolution as investigated between a radiography acquired with the parallel (3.49 lp cm⁻¹ to 5.76 lp cm⁻¹) or conical beam (from 3.49 lp cm⁻¹ to 5.56 lp cm⁻¹). The improved images were then used as input in a photon tomography algorithm. The proton CT reconstruction of the Catphan phantom shows high spatial resolution (from 2.79 to 5.55 lp cm⁻¹ for the parallel beam and from 3.03 to 5.15 lp cm⁻¹ for the conical beam) and the reconstruction of the head phantom, although qualitative, shows high contrast in the gradient region. The proposed formulation of the optimization demonstrates serious potential to increase the spatial resolution (up by 65%) in proton radiography and greatly accelerate proton computed tomography reconstruction.
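In symbols, the estimator described above admits one plausible formalization as a weighted least-squares problem; the notation below is ours, not taken from the paper. With x_j the WET of projection j, L_ij the cubic-spline path length of proton i in projection j, W_i its measured water-equivalent path length, and σ_i² the energy-loss straggling variance, maximizing the Gaussian likelihood reads:

```latex
\hat{\mathbf{x}}
  = \arg\max_{\mathbf{x}} \prod_i \exp\!\left[
      -\frac{\bigl(W_i - \sum_j L_{ij}\,x_j\bigr)^2}{2\sigma_i^2}\right]
  = \arg\min_{\mathbf{x}} \sum_i
      \frac{\bigl(W_i - \sum_j L_{ij}\,x_j\bigr)^2}{\sigma_i^2}
```

Under this reading, the estimate solves the weighted normal equations (LᵀΣ⁻¹L)x = LᵀΣ⁻¹W, which is the least-squares structure that keeps the method computationally cheap.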
SU-C-207A-01: A Novel Maximum Likelihood Method for High-Resolution Proton Radiography/proton CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins-Fekete, C; Centre Hospitalier Universitaire de Québec, Québec, QC; Mass General Hospital
2016-06-15
Purpose: Multiple Coulomb scattering is the largest contributor to blurring in proton imaging. Here we tested a maximum likelihood least squares estimator (MLLSE) to improve the spatial resolution of proton radiography (pRad) and proton computed tomography (pCT). Methods: The object is discretized into voxels and the average relative stopping power through voxel columns defined from the source to the detector pixels is optimized such that it maximizes the likelihood of the proton energy loss. The length spent by individual protons in each column is calculated through an optimized cubic spline estimate. pRad images were first produced using Geant4 simulations. An anthropomorphic head phantom and the Catphan line-pair module for 3-D spatial resolution were studied and the resulting images were analyzed. Both parallel and conical beams were investigated for simulated pRad acquisition. Then, experimental data of a pediatric head phantom (CIRS) were acquired using a recently completed experimental pCT scanner. Specific filters were applied on proton angle and energy loss data to remove proton histories that underwent nuclear interactions. The MTF10% (lp/mm) was used to evaluate and compare spatial resolution. Results: Numerical simulations showed improvement in the pRad spatial resolution for the parallel (2.75 to 6.71 lp/cm) and conical beam (3.08 to 5.83 lp/cm) reconstructed with the MLLSE compared to averaging detector pixel signals. For full tomographic reconstruction, the improved pRad were used as input into a simultaneous algebraic reconstruction algorithm. The Catphan pCT reconstruction based on the MLLSE-enhanced projections showed spatial resolution improvement for the parallel (2.83 to 5.86 lp/cm) and conical beam (3.03 to 5.15 lp/cm). The anthropomorphic head pCT displayed important contrast gains in high-gradient regions. Experimental results also demonstrated significant improvement in spatial resolution of the pediatric head radiography. Conclusion: The proposed MLLSE shows promising potential to increase the spatial resolution (up to 244%) in proton imaging.
Kristiansen, Thomas B; Pedersen, Anders G; Eugen-Olsen, Jesper; Katzenstein, Terese L; Lundgren, Jens D
2005-01-01
Our objective was to investigate whether steadily increasing resistance levels are inevitable in the course of a failing but unchanged Highly Active Antiretroviral Therapy (HAART) regimen. Patients having an unchanged HAART regimen and a good CD4 response (100 cells/microl above nadir) despite consistent HIV-RNA levels above 200 copies/ml were included in the study. The study period spanned at least 12 months and included 47 plasma samples from 17 patients that were sequenced and analysed with respect to evolutionary changes. At inclusion, the median CD4 count was 300 cells/microl (inter-quartile range (IQR): 231-380) and the median HIV-RNA was 2000 copies/ml (IQR: 1301-6090). Reverse transcriptase inhibitor (RTI) mutations accumulated at a rate of 0.5 mutations per year (STD = 0.8), major protease inhibitor (PI) resistance mutations at 0.2 mutations per year (STD = 0.8), and minor PI resistance mutations at 0.3 mutations per year (STD = 0.7). The rate at which RTI mutations accumulated decreased during the study period (p = 0.035). Interestingly, the rate of mutation accumulation was not associated with HIV-RNA level. The majority of patients kept accumulating new resistance mutations. However, 3 out of 17 patients with viral failure were caught in an apparent mutational deadlock; thus, the development of additional resistance during failing HAART is not inevitable. We hypothesize that certain patterns of mutations can cause a mutational deadlock in which the evolutionary benefit of further resistance mutations is limited if the patient is kept on a stable HAART regimen.
Stereo depth distortions in teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Vonsydow, Marika
1988-01-01
In teleoperation, a typical application of stereo vision is to view a work space located a short distance (1 to 3 m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors on the order of 2 cm were measured. A geometric analysis was made of the distortion of the fronto-parallel plane of divergence for stereo TV viewing, and the results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration giving high stereo depth resolution while minimizing stereo depth distortion. It is found that for converged cameras at a fixed camera-to-object viewing distance, larger intercamera distances allow higher depth resolution but cause greater depth distortion. Thus, with larger intercamera distances, operators will make greater depth errors (because of the greater distortion) but will be more certain that they are not errors (because of the higher resolution).
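The trade-off the abstract reports can be illustrated with the standard small-angle stereo approximation. This is textbook geometry, not the paper's exact converged-camera analysis, and all numbers are illustrative.

```python
# Back-of-the-envelope stereo depth-resolution trade-off,
# using dz ~ z^2 * d_disp / (f * b).
def depth_resolution(z_m: float, baseline_m: float, focal_m: float,
                     disparity_step_m: float) -> float:
    """Smallest resolvable depth step at viewing distance z_m."""
    return (z_m ** 2) * disparity_step_m / (focal_m * baseline_m)

# At 1.4 m, tripling the intercamera distance shrinks the resolvable
# depth step threefold (but, per the abstract, increases distortion):
print(depth_resolution(1.4, 0.10, 0.016, 1e-5))  # ~0.0122 m
print(depth_resolution(1.4, 0.30, 0.016, 1e-5))  # ~0.0041 m
```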
NASA Astrophysics Data System (ADS)
Adamek, J.; Seidl, J.; Horacek, J.; Komm, M.; Eich, T.; Panek, R.; Cavalier, J.; Devitre, A.; Peterka, M.; Vondracek, P.; Stöckel, J.; Sestak, D.; Grover, O.; Bilkova, P.; Böhm, P.; Varju, J.; Havranek, A.; Weinzettl, V.; Lovell, J.; Dimitrova, M.; Mitosinkova, K.; Dejarnac, R.; Hron, M.; The COMPASS Team; The EUROfusion MST1 Team
2017-11-01
A new system of probes was recently installed in the divertor of the COMPASS tokamak in order to investigate the ELM energy density with high spatial and temporal resolution. The new system consists of two arrays of rooftop-shaped Langmuir probes (LPs), used to measure the floating potential or the ion saturation current density, and one array of ball-pen probes (BPPs), used to measure the plasma potential, with a spatial resolution of ~3.5 mm. The combination of floating BPPs and LPs yields the electron temperature with microsecond temporal resolution. We report on the design of the new divertor probe arrays and first results of electron temperature profile measurements in ELMy H-mode and L-mode. We also present comparative measurements of the parallel heat flux using the new probe arrays and fast infrared thermography (IR) data during L-mode, with excellent agreement between both techniques using a heat power transmission coefficient γ = 7. The ELM energy density ε∥ was measured during a set of NBI-assisted ELMy H-mode discharges. The peak values of ε∥ were compared with those predicted by the model and with experimental data from JET, AUG and MAST, with good agreement.
Biocellion: accelerating computer simulation of multicellular biological system models
Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya
2014-01-01
Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572
A 24-ch Phased-Array System for Hyperpolarized Helium Gas Parallel MRI to Evaluate Lung Functions.
Lee, Ray; Johnson, Glyn; Stefanescu, Cornel; Trampel, Robert; McGuinness, Georgeann; Stoeckel, Bernd
2005-01-01
Hyperpolarized 3He gas MRI has serious potential for assessing pulmonary function. Because the non-equilibrium polarization of the gas results in a steady depletion of the signal level over the course of the excitations, the signal-to-noise ratio (SNR) can be independent of the number of data acquisitions under certain circumstances. This provides a unique opportunity for parallel MRI to gain both temporal and spatial resolution without reducing SNR. We have built a 24-channel receive / 2-channel transmit phased-array system for 3He parallel imaging. Our in vivo experimental results showed that significant temporal and spatial resolution gains can be achieved at no cost to the SNR. With 3D data acquisition, an eight-fold (2x4) scan-time reduction can be achieved without any aliasing in images. Additionally, a rigorous analysis of the use of low-impedance preamplifiers for decoupling presented evidence of strong coupling.
Characterization of Harmonic Signal Acquisition with Parallel Dipole and Multipole Detectors
NASA Astrophysics Data System (ADS)
Park, Sung-Gun; Anderson, Gordon A.; Bruce, James E.
2018-04-01
Fourier transform ion cyclotron resonance mass spectrometry (FTICR-MS) is a powerful instrument for the study of complex biological samples due to its high resolution and mass measurement accuracy. However, the relatively long signal acquisition periods needed to achieve high resolution can serve to limit applications of FTICR-MS. The use of multiple pairs of detector electrodes enables detection of harmonic frequencies present at integer multiples of the fundamental cyclotron frequency, and the obtained resolving power for a given acquisition period increases linearly with the order of harmonic signal. However, harmonic signal detection also increases spectral complexity and presents challenges for interpretation. In the present work, ICR cells with independent dipole and harmonic detection electrodes and preamplifiers are demonstrated. A benefit of this approach is the ability to independently acquire fundamental and multiple harmonic signals in parallel using the same ions under identical conditions, enabling direct comparison of achieved performance as parameters are varied. Spectra from harmonic signals showed generally higher resolving power than spectra acquired with fundamental signals and equal signal duration. In addition, the maximum observed signal to noise (S/N) ratio from harmonic signals exceeded that of fundamental signals by 50 to 100%. Finally, parallel detection of fundamental and harmonic signals enables deconvolution of overlapping harmonic signals since observed fundamental frequencies can be used to unambiguously calculate all possible harmonic frequencies. Thus, the present application of parallel fundamental and harmonic signal acquisition offers a general approach to improve utilization of harmonic signals to yield high-resolution spectra with decreased acquisition time.
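The deconvolution idea in the final point can be sketched in a few lines: every fundamental frequency predicts its harmonics exactly, so harmonic peaks can be assigned by matching against integer multiples. Names and tolerances below are illustrative, not the instrument's actual processing code.

```python
# Assign observed harmonic peaks to detected fundamentals.
def assign_harmonics(fundamentals_hz, harmonic_peaks_hz, order, tol_hz=0.5):
    """Map each observed harmonic peak to the fundamental whose
    order-th harmonic lies within tol_hz, or None if ambiguous/absent."""
    assignments = []
    for peak in harmonic_peaks_hz:
        hits = [f for f in fundamentals_hz if abs(order * f - peak) <= tol_hz]
        assignments.append(hits[0] if len(hits) == 1 else None)
    return assignments

print(assign_harmonics([100_000.0, 150_000.0], [300_000.2], order=3))
# [100000.0] -- the 3rd harmonic of the 100 kHz fundamental
```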
3D hyperpolarized C-13 EPI with calibrationless parallel imaging
NASA Astrophysics Data System (ADS)
Gordon, Jeremy W.; Hansen, Rie B.; Shin, Peter J.; Feng, Yesu; Vigneron, Daniel B.; Larson, Peder E. Z.
2018-04-01
With the translation of metabolic MRI with hyperpolarized 13C agents into the clinic, imaging approaches will require large volumetric FOVs to support clinical applications. Parallel imaging techniques will be crucial to increasing volumetric scan coverage while minimizing RF requirements and temporal resolution. Calibrationless parallel imaging approaches are well-suited for this application because they eliminate the need to acquire coil profile maps or auto-calibration data. In this work, we explored the utility of a calibrationless parallel imaging method (SAKE) and corresponding sampling strategies to accelerate and undersample hyperpolarized 13C data using 3D blipped EPI acquisitions and multichannel receive coils, and demonstrated its application in a human study of [1-13C]pyruvate metabolism.
Nose, Atsushi; Yamazaki, Tomohiro; Katayama, Hironobu; Uehara, Shuji; Kobayashi, Masatsugu; Shida, Sayaka; Odahara, Masaki; Takamiya, Kenichi; Matsumoto, Shizunori; Miyashita, Leo; Watanabe, Yoshihiro; Izawa, Takashi; Muramatsu, Yoshinori; Nitta, Yoshikazu; Ishikawa, Masatoshi
2018-04-24
We have developed a high-speed vision chip using 3D stacking technology to address the increasing demand for high-speed vision chips in diverse applications. The chip is a 1/3.2-inch, 1.27 Mpixel, 500 fps (0.31 Mpixel, 1000 fps with 2 × 2 binning) vision chip with 3D-stacked column-parallel analog-to-digital converters (ADCs) and 140 Giga Operations per Second (GOPS) programmable Single Instruction Multiple Data (SIMD) column-parallel processing elements (PEs) for new sensing applications. The 3D-stacked structure and column-parallel processing architecture achieve high sensitivity, high resolution, and high-accuracy object positioning.
NASA Astrophysics Data System (ADS)
Sourbier, F.; Operto, S.; Virieux, J.
2006-12-01
We present a distributed-memory parallel algorithm for 2D visco-acoustic full-waveform inversion of wide-angle seismic data. Our code is written in Fortran 90 and uses MPI for parallelism. The algorithm was applied to a real wide-angle data set recorded by 100 OBSs with a 1-km spacing in the eastern Nankai trough (Japan) to image the deep structure of the subduction zone. Full-waveform inversion is applied sequentially to discrete frequencies, proceeding from low to high frequencies. The inverse problem is solved with a classic gradient method. Full-waveform modeling is performed with a frequency-domain finite-difference method. In the frequency domain, solving the wave equation requires the resolution of a large unsymmetric system of linear equations. We use the massively parallel direct solver MUMPS (http://www.enseeiht.fr/irit/apo/MUMPS) for distributed-memory computers to solve this system. The MUMPS solver is based on a multifrontal method for the parallel factorization. The MUMPS algorithm is subdivided into three main steps. First, a symbolic analysis step performs re-ordering of the matrix coefficients to minimize the fill-in of the matrix during the subsequent factorization and estimates the assembly tree of the matrix. Second, the factorization is performed with dynamic scheduling to accommodate numerical pivoting, and provides the LU factors distributed over all the processors. Third, the resolution (forward/backward substitution) is performed for multiple sources. To compute the gradient of the cost function, two simulations per shot are required (one to compute the forward wavefield and one to back-propagate residuals). The multi-source resolutions can be performed in parallel with MUMPS. In the end, each processor stores in core a sub-domain of all the solutions. These distributed solutions can be exploited to compute the gradient of the cost function in parallel. Since the gradient of the cost function is a weighted stack of the shot and residual solutions of MUMPS, each processor computes the corresponding sub-domain of the gradient. Finally, the gradient is centralized on the master processor using a collective communication. The gradient is scaled by the diagonal elements of the Hessian matrix. This scaling is computed only once per frequency, before the first iteration of the inversion. Estimation of the diagonal terms of the Hessian requires one simulation per non-redundant shot and receiver position. The same strategy as the one used for the gradient is used to compute the diagonal Hessian in parallel. This algorithm was applied to a dense wide-angle data set recorded by 100 OBSs in the eastern Nankai trough, offshore Japan. Thirteen frequencies ranging from 3 to 15 Hz were inverted. Twenty iterations per frequency were computed, leading to 260 tomographic velocity models of increasing resolution. The velocity model dimensions are 105 km x 25 km, corresponding to a finite-difference grid of 4201 x 1001 points with a 25-m grid interval. The number of shots was 1005 and the number of inverted OBS gathers was 93. The inversion requires 20 days on six 32-bit bi-processor nodes with 4 GB of RAM per node when only the LU factorization is performed in parallel. Preliminary estimates of the time required to perform the inversion with the fully parallelized code are 6 and 4 days using 20 and 50 processors, respectively.
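The shot-parallel gradient assembly described here follows a common MPI pattern, sketched below with mpi4py. The MUMPS solves are replaced by placeholder stubs and the grid is a toy size; this illustrates the pattern only, not the authors' Fortran code.

```python
# Schematic shot-parallel gradient assembly with a collective reduction.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
nx, nz, n_shots = 421, 101, 100   # toy grid, not the 4201 x 1001 case

def solve_shot(shot):             # placeholder: forward wavefield solve
    return np.ones((nx, nz), dtype=complex)

def residual_field(shot):         # placeholder: back-propagated residuals
    return np.ones((nx, nz), dtype=complex)

local_grad = np.zeros((nx, nz))
for shot in range(rank, n_shots, size):          # round-robin shot distribution
    u, r = solve_shot(shot), residual_field(shot)
    local_grad += np.real(u * np.conj(r))        # weighted stack of both fields

grad = comm.reduce(local_grad, op=MPI.SUM, root=0)  # centralize on the master
if rank == 0:
    print(grad.shape)
```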
NASA Astrophysics Data System (ADS)
Gribovskaya, I. V.; Gladchenko, I. A.; Zinenko, G. K.
Two methods of extracting mineral elements from otherwise deadlock products of a life-support system are presented. We first describe optimum conditions for recovering elements by water extraction from dry plant wastes, biomass ash, and solid human wastes after passing them through the catalytic furnace; second, we describe acid extraction of biogenic elements with 1 N and 2 N HNO3 from these products. Ways to use the extracts of elements in plant nutrition are considered in order to increase the extent to which the mineral loop of a life-support system can be closed.
Micro-bias and macro-performance.
Seaver, S M D; Moreira, A A; Sales-Pardo, M; Malmgren, R D; Diermeier, D; Amaral, L A N
2009-02-01
We use agent-based modeling to investigate the effect of conservatism and partisanship on the efficiency with which large populations solve the density classification task - a paradigmatic problem for information aggregation and consensus building. We find that conservative agents enhance the populations' ability to efficiently solve the density classification task despite large levels of noise in the system. In contrast, we find that the presence of even a small fraction of partisans holding the minority position will result in deadlock or a consensus on an incorrect answer. Our results provide a possible explanation for the emergence of conservatism and suggest that even low levels of partisanship can lead to significant social costs.
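A toy version of the density classification setting can convey the flavor of such experiments. This is a deliberately simplified sketch: the update scheme, network structure, and noise model all differ from the paper's, and every parameter is illustrative.

```python
# Toy density classification with immovable "partisan" agents.
import random

def simulate(n=101, frac_ones=0.6, partisans=0, steps=5000):
    state = [1 if random.random() < frac_ones else 0 for _ in range(n)]
    fixed = set(random.sample(range(n), partisans))  # partisans hold 0 forever
    for i in fixed:
        state[i] = 0
    for _ in range(steps):
        i = random.randrange(n)
        if i in fixed:
            continue
        nbhd = [state[(i + d) % n] for d in (-1, 0, 1)]
        state[i] = 1 if sum(nbhd) >= 2 else 0       # adopt local majority
    return sum(state) / n

random.seed(0)
print(simulate(partisans=0))   # tends toward the initial majority (1)
print(simulate(partisans=15))  # minority partisans can block correct consensus
```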
Instrumentation of Java Bytecode for Runtime Analysis
NASA Technical Reports Server (NTRS)
Goldberg, Allen; Havelund, Klaus
2003-01-01
This paper describes JSpy, a system for high-level instrumentation of Java bytecode, and its use with JPaX, our system for runtime analysis of Java programs. JPaX monitors the execution of temporal logic formulas and performs predicative analysis of deadlocks and data races. JSpy's input is an instrumentation specification, which consists of a collection of rules, where a rule is a predicate/action pair. The predicate is a conjunction of syntactic constraints on a Java statement, and the action is a description of logging information to be inserted in the bytecode corresponding to the statement. JSpy is built using JTrek, an instrumentation package at a lower level of abstraction.
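The predicate/action rule structure can be sketched as follows. This is a hypothetical Python stand-in; JSpy's actual input language and APIs are not reproduced here, and every name is invented for illustration.

```python
# Hypothetical predicate/action rules over bytecode statements.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stmt:
    kind: str        # e.g. "monitorenter", "putfield"
    class_name: str
    method: str

@dataclass
class Rule:
    predicate: Callable[[Stmt], bool]   # syntactic constraints on a statement
    action: Callable[[Stmt], str]       # logging info to weave in at that site

rules = [
    Rule(lambda s: s.kind == "monitorenter",                 # every lock acquisition...
         lambda s: f"log.lock({s.class_name}.{s.method})"),  # ...gets a log call
]

def instrument(statements: list) -> list:
    """Return the logging snippets a JSpy-like tool would insert."""
    return [r.action(s) for s in statements for r in rules if r.predicate(s)]

print(instrument([Stmt("monitorenter", "Worker", "run")]))
```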
Toward 10-km mesh global climate simulations
NASA Astrophysics Data System (ADS)
Ohfuchi, W.; Enomoto, T.; Takaya, K.; Yoshioka, M. K.
2002-12-01
An atmospheric general circulation model (AGCM) that runs very efficiently on the Earth Simulator (ES) was developed. The ES is a gigantic vector-parallel computer with a peak performance of 40 Tflops. The AGCM, named AFES (AGCM for ES), was based on version 5.4.02 of an AGCM developed jointly by the Center for Climate System Research of the University of Tokyo and the Japanese National Institute for Environmental Studies. AFES was, however, totally rewritten in Fortran 90 and MPI, while the original AGCM was written in FORTRAN 77 and was not capable of parallel computing. AFES achieved 26 Tflops (about 65% of the peak performance of the ES) at a resolution of T1279L96 (10-km horizontal resolution and 500-m vertical resolution from the middle troposphere to the lower stratosphere). Some results of 10- to 20-day global simulations will be presented. At this moment, only short-term simulations are possible due to data storage limitations. As tens-of-teraflops computing is achieved, petabyte data storage is necessary to conduct climate-type simulations at this super-high resolution. Some possibilities for future research topics in global super-high-resolution climate simulations will be discussed. Target topics include mesoscale structures and self-organization of the Baiu-Meiyu front over Japan, cyclogenesis over the North Pacific, and typhoons around the Japan area. An improvement in local precipitation with increasing horizontal resolution will also be demonstrated.
NASA Astrophysics Data System (ADS)
Fairchild, A.; Chirayath, V.; Gladen, R.; McDonald, A.; Lim, Z.; Chrysler, M.; Koymen, A.; Weiss, A.
Simion 8.1® simulations were used to determine the energy resolution of a 1-meter-long Time of Flight Positron annihilation induced Auger Electron Spectrometer (TOF-PAES). The spectrometer consists of: (1) a magnetic gradient section used to parallelize, along the beam axis, the electrons leaving the sample; (2) an electric-field-free time-of-flight tube; and (3) a detection section with a set of ExB plates that deflect electrons exiting the TOF tube into a micro-channel plate (MCP). Simulated time-of-flight distributions of electrons emitted according to a known secondary electron emission distribution, for various sample biases, were compared to experimental energy calibration peaks and found to be in excellent agreement. The TOF spectrum at the highest sample bias was used to determine the timing resolution function describing the timing spread due to the electronics. Simulations were then performed to calculate the energy resolution at various electron energies in order to deconvolute the combined influence of the magnetic field parallelizer, the timing resolution, and the voltage gradient at the ExB plates. The energy resolution of the 1-m TOF-PAES was compared to that of a newly constructed 3-meter-long system. The results were used to optimize the geometry and the potentials of the ExB plates for obtaining the best energy resolution. This work was supported by NSF Grants No. DMR 1508719 and DMR 1338130.
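The underlying time-of-flight/energy relation is simple to state. The sketch below is textbook nonrelativistic kinematics, not the Simion model, and the numbers are illustrative.

```python
# Nonrelativistic TOF/energy relation E = (1/2) m (L/t)^2 and the
# energy spread implied by a timing spread dt at fixed flight length.
M_E = 9.109e-31   # electron mass, kg
EV = 1.602e-19    # joules per eV

def tof_energy_ev(length_m: float, t_s: float) -> float:
    return 0.5 * M_E * (length_m / t_s) ** 2 / EV

def energy_resolution_ev(length_m: float, t_s: float, dt_s: float) -> float:
    # dE/E = 2 dt/t for fixed flight length
    return 2.0 * tof_energy_ev(length_m, t_s) * dt_s / t_s

t = 1.0e-6                                 # 1 us flight over 1 m
print(tof_energy_ev(1.0, t))               # ~2.84 eV
print(energy_resolution_ev(1.0, t, 2e-9))  # ~0.011 eV for a 2 ns spread
```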
Wargo, Christopher J.; Gore, John C.
2013-01-01
Localized high-resolution diffusion tensor images (DTI) from the midbrain were obtained using reduced field-of-view (rFOV) methods combined with SENSE parallel imaging and single-shot echo planar (EPI) acquisitions at 7 T. This combination aimed to diminish sensitivities of DTI to motion, susceptibility variations, and EPI artifacts at ultra-high field. Outer-volume suppression (OVS) was applied in DTI acquisitions at 2- and 1-mm2 resolutions, b=1000 s/mm2, and six diffusion directions, resulting in scans of 7- and 14-min durations. Mean apparent diffusion coefficient (ADC) and fractional anisotropy (FA) values were measured in various fiber tract locations at the two resolutions and compared. Geometric distortion and signal-to-noise ratio (SNR) were additionally measured and compared for reduced-FOV and full-FOV DTI scans. Up to an eight-fold data reduction was achieved using DTI-OVS with SENSE at 1 mm2, and geometric distortion was halved. The localization of fiber tracts was improved, enabling targeted FA and ADC measurements. Significant differences in diffusion properties were observed between resolutions for a number of regions suggesting that FA values are impacted by partial volume effects even at a 2-mm2 resolution. The combined SENSE DTI-OVS approach allows large reductions in DTI data acquisition and provides improved quality for high-resolution diffusion studies of the human brain. PMID:23541390
NASA Astrophysics Data System (ADS)
Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves
2009-03-01
This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the resolution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the resolution of a large sparse system of linear equations which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, this latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the following companion paper. The parallel efficiency and the scalability of the code will also be quantified.
ERIC Educational Resources Information Center
Fletcher, Richard K., Jr.
This description of procedures for dumping high and low resolution graphics using the Apple IIe microcomputer system focuses on two special hardware configurations that are commonly used in schools--the Apple Dot Matrix Printer with the Apple Parallel Interface Card, and the Imagewriter Printer with the Apple Super Serial Interface Card. Special…
Performance Improvements of the CYCOFOS Flow Model
NASA Astrophysics Data System (ADS)
Radhakrishnan, Hari; Moulitsas, Irene; Syrakos, Alexandros; Zodiatis, George; Nikolaides, Andreas; Hayes, Daniel; Georgiou, Georgios C.
2013-04-01
The CYCOFOS-Cyprus Coastal Ocean Forecasting and Observing System has been operational since early 2002, providing daily sea current, temperature, salinity and sea level forecasts for the next 4 and 10 days to end-users in the Levantine Basin, necessary for operational applications in marine safety, particularly oil spill and floating object predictions. The CYCOFOS flow model, similar to most coastal and sub-regional operational hydrodynamic forecasting systems of the MONGOOS-Mediterranean Oceanographic Network for Global Ocean Observing System, is based on the POM-Princeton Ocean Model. CYCOFOS is nested with the MyOcean Mediterranean regional forecasting data and with SKIRON and ECMWF for surface forcing. The increasing demand for ever higher resolution data to meet coastal and offshore downstream applications motivated the parallelization of the CYCOFOS POM model. This development was carried out in the frame of the IPcycofos project, funded by the Cyprus Research Promotion Foundation. Parallel processing provides a viable solution to satisfy these demands without sacrificing accuracy or omitting any physical phenomena. Prior to the IPcycofos project, there have been several attempts to parallelise POM, for example MP-POM. These existing parallel codes rely on specific, outdated hardware architectures and associated software. The objective of the IPcycofos project is to produce an operational parallel version of the CYCOFOS POM code that can replicate the results of the serial version of the POM code used in CYCOFOS. The parallelization of the CYCOFOS POM model uses the Message Passing Interface (MPI), implemented on commodity computing clusters running open-source software and not depending on any specialized vendor hardware. The parallel CYCOFOS POM code is constructed in a modular fashion, allowing a fast re-locatable downscaled implementation. The MPI implementation takes advantage of the Cartesian nature of the POM mesh and uses built-in MPI routines to split the mesh, using a weighting scheme, along longitude and latitude among the processors. Each processor works on its part of the model domain based on domain decomposition techniques. The new parallel CYCOFOS POM code has been benchmarked against the serial POM version of CYCOFOS for speed, accuracy, and resolution, and the results are more than satisfactory. With a higher-resolution CYCOFOS Levantine model domain, the forecasts need much less time than the serial, coarser CYCOFOS POM version, with identical accuracy.
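The Cartesian split described here corresponds to a standard MPI idiom, sketched below with mpi4py. Grid sizes are illustrative, and the operational CYCOFOS code itself is Fortran-based POM, not Python.

```python
# Schematic 2-D Cartesian domain decomposition along longitude/latitude.
from mpi4py import MPI

comm = MPI.COMM_WORLD
dims = MPI.Compute_dims(comm.Get_size(), [0, 0])   # processors per axis
cart = comm.Create_cart(dims, periods=[False, False])
px, py = cart.Get_coords(cart.Get_rank())

NLON, NLAT = 800, 600                              # global mesh (illustrative)
lon0, lon1 = px * NLON // dims[0], (px + 1) * NLON // dims[0]
lat0, lat1 = py * NLAT // dims[1], (py + 1) * NLAT // dims[1]
print(f"rank {cart.Get_rank()} owns lon {lon0}:{lon1}, lat {lat0}:{lat1}")
# Each rank time-steps its block and would exchange halo rows/columns
# with the neighbours returned by cart.Shift every step.
```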
NASA Astrophysics Data System (ADS)
Jang, W.; Engda, T. A.; Neff, J. C.; Herrick, J.
2017-12-01
Many crop models are increasingly used to evaluate crop yields at regional and global scales. However, implementation of these models across large areas using fine-scale grids is limited by computational time requirements. In order to facilitate global gridded crop modeling with various scenarios (i.e., different crops, management schedules, fertilizer, and irrigation) using the Environmental Policy Integrated Climate (EPIC) model, we developed a distributed parallel computing framework in Python. Our local desktop with 14 cores (28 threads) was used to test the distributed parallel computing framework in Iringa, Tanzania, which has 406,839 grid cells. High-resolution soil data, SoilGrids (250 x 250 m), and climate data, AgMERRA (0.25 x 0.25 deg), were also used as input data for the gridded EPIC model. The framework includes a master file for parallel computing, an input database, input data formatters, EPIC model execution, and output analyzers. Through the master file, the user-defined number of CPU threads divides the EPIC simulation into jobs. Using the EPIC input data formatters, the raw database is formatted into EPIC input data, and the formatted data moves into the EPIC simulation jobs. Then 28 EPIC jobs run simultaneously, and only the result files of interest are parsed and moved into the output analyzers. We applied various scenarios with seven different slopes and twenty-four fertilizer ranges. Parallelized input generators create the different scenarios as a list for distributed parallel computing. After all simulations are completed, parallelized output analyzers are used to analyze all outputs according to the different scenarios. This saves significant computing time and resources, making it possible to conduct gridded modeling at regional to global scales with high-resolution data. For example, serial processing for the Iringa test case would require 113 hours, while the framework developed in this study requires only approximately 6 hours, a nearly 95% reduction in computing time.
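Since the framework is Python-based, the fan-out it describes resembles a standard process-pool idiom. The sketch below is hedged: run_epic and the scenario tuples are invented stand-ins for the paper's formatters and executors, not the authors' actual master file.

```python
# Fan independent model runs out over local cores with a process pool.
import itertools
from multiprocessing import Pool

def run_epic(job):
    cell_id, slope, fert = job
    # Here one would format inputs for this grid cell/scenario and
    # launch the model, e.g. via subprocess.run(["epic", ...]).
    return cell_id, slope, fert  # placeholder result for the analyzers

if __name__ == "__main__":
    # cells x 7 slope classes x 24 fertilizer levels, as in the abstract
    jobs = itertools.product(range(1000), range(7), range(24))
    with Pool(processes=28) as pool:            # 28 threads, as in the paper
        for result in pool.imap_unordered(run_epic, jobs, chunksize=64):
            pass                                # hand off to output analyzers
```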
DOE Office of Scientific and Technical Information (OSTI.GOV)
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
2012-10-01
... using the open-source code Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) (http://lammps.sandia.gov) (23). The commercial ... parameters are proprietary and cannot be ported to the LAMMPS simulation code. In our molecular dynamics simulations at the atomistic resolution, we ... Abbreviations: IBI, iterative Boltzmann inversion; LAMMPS, Large-scale Atomic/Molecular Massively Parallel Simulator; MAPS, Materials Processes and Simulations; MS ...
Chen, I-Wen; Papagiakoumou, Eirini; Emiliani, Valentina
2018-06-01
Optogenetic neuronal targeting combined with single-photon wide-field illumination has already proved its enormous potential in neuroscience, enabling the optical control of entire neuronal networks and disentangling their role in the control of specific behaviors. However, establishing how a single neuron or a subset of neurons controls a specific behavior, how functionally identical neurons are connected in a particular task, or how behaviors can be modified in real time by the complex wiring diagram of neuronal connections requires more sophisticated approaches that can drive neuronal circuit activity with single-cell precision and millisecond temporal resolution. This has motivated, on one side, the development of flexible optical methods for two-photon (2P) optogenetic activation using two approaches, scanning and parallel illumination, or a hybrid of the two. On the other side, it has stimulated the engineering of new opsins with modified spectral characteristics, channel kinetics, and spatial distributions of expression, offering the flexibility to choose the appropriate opsin for each application. The need for optical manipulation of multiple targets with millisecond temporal resolution has established three-dimensional (3D) parallel holographic illumination as the technique of choice for the optical control of neuronal circuits organized in 3D. Today, 3D parallel illumination exists in several complementary variants, each with a different degree of simplicity, light uniformity, temporal precision, and axial resolution. In parallel, the possibility of reaching hundreds of targets in 3D volumes has prompted the development of low-repetition-rate amplified laser sources enabling high peak power while keeping the average power delivered to each cell low. Together, these advances open the way for optical manipulation of neuronal circuits with unprecedented precision and flexibility. Copyright © 2018 Elsevier Ltd. All rights reserved.
Kalman filter techniques for accelerated Cartesian dynamic cardiac imaging.
Feng, Xue; Salerno, Michael; Kramer, Christopher M; Meyer, Craig H
2013-05-01
In dynamic MRI, spatial and temporal parallel imaging can be exploited to reduce scan time. Real-time reconstruction enables immediate visualization during the scan. Commonly used view-sharing techniques suffer from limited temporal resolution, and many of the more advanced reconstruction methods are either retrospective, time-consuming, or both. A Kalman filter model capable of real-time reconstruction can be used to increase the spatial and temporal resolution in dynamic MRI reconstruction. The original study describing the use of the Kalman filter in dynamic MRI was limited to non-Cartesian trajectories because of a limitation intrinsic to the dynamic model used in that study. Here the limitation is overcome, and the model is applied to the more commonly used Cartesian trajectory with fast reconstruction. Furthermore, a combination of the Kalman filter model with Cartesian parallel imaging is presented to further increase the spatial and temporal resolution and signal-to-noise ratio. Simulations and experiments were conducted to demonstrate that the Kalman filter model can increase the temporal resolution of the image series compared with view-sharing techniques and decrease the spatial aliasing compared with TGRAPPA. The method requires relatively little computation, and thus is suitable for real-time reconstruction. Copyright © 2012 Wiley Periodicals, Inc.
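For readers unfamiliar with the filter itself, a generic linear Kalman update looks as follows. This is the textbook form with an identity state transition; the paper's specific dynamic model and Cartesian encoding are not reproduced, and the sizes are toy values.

```python
# Generic linear Kalman predict/correct step; x is the image state,
# H the (undersampled encoding) measurement operator.
import numpy as np

def kalman_update(x, P, z, H, R, Q):
    """One predict/correct step with an identity state transition."""
    P = P + Q                              # predict covariance
    S = H @ P @ H.conj().T + R             # innovation covariance
    K = P @ H.conj().T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)                # correct state with new data
    P = (np.eye(len(x)) - K @ H) @ P       # update covariance
    return x, P

n, m = 64, 16                              # state and measurement sizes (toy)
rng = np.random.default_rng(0)
H = rng.standard_normal((m, n))
x, P = np.zeros(n), np.eye(n)
z = H @ rng.standard_normal(n)
x, P = kalman_update(x, P, z, H, 0.01 * np.eye(m), 0.001 * np.eye(n))
```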
High-Resolution Study of the First Stretching Overtones of H3Si79Br.
Ceausu; Graner; Bürger; Mkadmi; Pracna; Lafferty
1998-11-01
The Fourier transform infrared spectrum of monoisotopic H₃Si⁷⁹Br (resolution 7.7 × 10⁻³ cm⁻¹) was studied from 4200 to 4520 cm⁻¹, in the region of the first overtones of the Si-H stretching vibration. The investigation of the spectrum revealed the presence of two band systems, the first consisting of one parallel (ν₀ = 4340.2002 cm⁻¹) and one perpendicular (ν₀ = 4342.1432 cm⁻¹) strong component, and the second of one parallel (ν₀ = 4405.789 cm⁻¹) and one perpendicular (ν₀ = 4416.233 cm⁻¹) weak component. The rovibrational analysis shows strong local perturbations for both the strong and weak systems. Seven hundred eighty-one nonzero-weighted transitions belonging to the strong system [the (200) manifold in the local mode picture] were fitted to a simple model involving a perpendicular component interacting through a weak Coriolis resonance with a parallel component. The most severely perturbed transitions (whose |obs − calc| values exceeded 3 × 10⁻³ cm⁻¹) were given zero weights. The standard deviations of the fit were 1.0 × 10⁻³ and 0.69 × 10⁻³ cm⁻¹ for the parallel and perpendicular components, respectively. The weak band system, severely perturbed by many "dark" perturbers, was fitted to a model involving one parallel and one perpendicular band connected by a Coriolis-type resonance. The K″·ΔK = +10 to +18 subbands of the perpendicular component, which showed very large observed − calculated values (approximately 0.5 cm⁻¹), were excluded from this calculation. The standard deviations of the fit were 11 × 10⁻³ and 13 × 10⁻³ cm⁻¹ for the parallel and perpendicular components, respectively. Copyright 1998 Academic Press.
NASA Technical Reports Server (NTRS)
Wang, Yu (Inventor)
2006-01-01
A miniature, ultra-high resolution, color scanning microscope using microchannel and solid-state technology that does not require focus adjustment. One embodiment includes: a source of collimated radiant energy for illuminating a sample; a plurality of narrow-angle filters comprising a microchannel structure that permits the passage of only unscattered radiant energy through the microchannels, with some portion of the radiant energy entering the microchannels from the sample; and a solid-state sensor array attached to the microchannel structure, each microchannel being aligned with an element of the sensor array. That portion of the radiant energy entering the microchannels parallel to the microchannel walls travels to the sensor element, generating an electrical signal from which an image is reconstructed by an external device; a moving element provides movement of the microchannel structure relative to the sample. Also disclosed is a method for scanning samples whereby the sensor array elements trace parallel paths that are arbitrarily close to the parallel paths traced by other elements of the array.
Efficient parallel resolution of the simplified transport equations in mixed-dual formulation
NASA Astrophysics Data System (ADS)
Barrault, M.; Lathuilière, B.; Ramet, P.; Roman, J.
2011-03-01
A reactivity computation consists of computing the highest eigenvalue of a generalized eigenvalue problem, for which an inverse power algorithm is commonly used. Very fine modelizations are difficult to treat with our sequential solver, based on the simplified transport equations, in terms of memory consumption and computational time. A first implementation of a Lagrangian-based domain decomposition method leads to poor parallel efficiency because of an increase in the number of power iterations [1]. In order to obtain high parallel efficiency, we improve the parallelization scheme by changing the location of the loop over the subdomains in the overall algorithm and by benefiting from the characteristics of the Raviart-Thomas finite element. The new parallel algorithm still allows us to locally adapt the numerical scheme (mesh, finite element order). However, it can be significantly optimized for the matching-grid case. The good behavior of the new parallelization scheme is demonstrated for the matching-grid case on several hundreds of nodes for computations based on a pin-by-pin discretization.
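The inverse power algorithm mentioned in the first sentence can be sketched in dense form. Dense NumPy stands in for the parallel mixed-dual solver; the linear solve inside the loop is where the domain-decomposed solver would be called.

```python
# Inverse power iteration for the generalized problem B x = k A x:
# iterating x <- A^-1 B x converges to the largest eigenvalue k.
import numpy as np

def inverse_power(A, B, tol=1e-10, max_iter=500):
    x = np.ones(A.shape[0])
    k = 1.0
    for _ in range(max_iter):
        x_new = np.linalg.solve(A, B @ x)   # one "inverse" application of A
        k_new = np.linalg.norm(x_new) / np.linalg.norm(x)
        x_new /= np.linalg.norm(x_new)
        if abs(k_new - k) < tol:
            return k_new, x_new
        k, x = k_new, x_new
    return k, x

A = np.diag([2.0, 3.0, 5.0])
B = np.eye(3)
print(inverse_power(A, B)[0])  # ~0.5, the largest k with B x = k A x
```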
NASA Astrophysics Data System (ADS)
Zhao, Shaoshuai; Ni, Chen; Cao, Jing; Li, Zhengqiang; Chen, Xingfeng; Ma, Yan; Yang, Leiku; Hou, Weizhen; Qie, Lili; Ge, Bangyu; Liu, Li; Xing, Jin
2018-03-01
The remote sensing image is usually polluted by atmospheric components, especially aerosol particles. For quantitative remote sensing applications, radiative-transfer-model-based atmospheric correction is used to obtain the surface reflectance by decoupling the atmosphere and the surface, at the cost of long computational times. Parallel computing is a solution for temporal acceleration. A parallel strategy that uses multiple CPUs working simultaneously is designed to perform atmospheric correction for a multispectral remote sensing image. The parallel framework's flow and the main parallel body of the atmospheric correction are described. Then, a multispectral remote sensing image from the Chinese Gaofen-2 satellite is used to test the acceleration efficiency. As the number of CPUs increases from 1 to 8, the computational speed increases correspondingly, with a maximum acceleration rate of 6.5. Under the 8-CPU working mode, atmospheric correction of the whole image costs 4 minutes.
Segmentation of remotely sensed data using parallel region growing
NASA Technical Reports Server (NTRS)
Tilton, J. C.; Cox, S. C.
1983-01-01
The improved spatial resolution of the new earth resources satellites will increase the need for effective utilization of spatial information in machine processing of remotely sensed data. One promising technique is scene segmentation by region growing. Region growing can use spatial information in two ways: only spatially adjacent regions merge together, and merging criteria can be based on region-wide spatial features. A simple region growing approach is described in which the similarity criterion is based on region mean and variance (a simple spatial feature). An effective way to implement region growing for remote sensing is as an iterative parallel process on a large parallel processor. A straightforward parallel pixel-based implementation of the algorithm is explored, and its efficiency is compared with sequential pixel-based, sequential region-based, and parallel region-based implementations. Experimental results from an aircraft scanner data set are presented, as is a discussion of proposed improvements to the segmentation algorithm.
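A toy sequential version of the mean-based merging rule may help fix ideas. Data and threshold are illustrative, only the region mean (not variance) is used, and a parallel implementation would evaluate many adjacent merges simultaneously rather than sweeping pixels in order.

```python
# Toy region growing: merge adjacent regions with similar means.
import numpy as np

def region_grow(img: np.ndarray, thresh: float) -> np.ndarray:
    labels = np.arange(img.size).reshape(img.shape)   # one region per pixel
    changed = True
    while changed:
        changed = False
        for (y, x), lab in np.ndenumerate(labels):
            for dy, dx in ((0, 1), (1, 0)):           # right and down neighbours
                ny, nx = y + dy, x + dx
                if ny >= img.shape[0] or nx >= img.shape[1]:
                    continue
                a, b = labels == lab, labels == labels[ny, nx]
                if lab != labels[ny, nx] and \
                   abs(img[a].mean() - img[b].mean()) < thresh:
                    labels[b] = lab                   # merge the two regions
                    changed = True
    return labels

img = np.array([[1., 1., 9.], [1., 2., 9.]])
print(region_grow(img, thresh=2.0))   # two regions: low values vs high values
```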
NASA Technical Reports Server (NTRS)
Shen, Bo-Wen; Cheung, Samson; Li, Jui-Lin F.; Wu, Yu-ling
2013-01-01
In this study, we discuss the performance of the parallel ensemble empirical mode decomposition (EEMD) in the analysis of tropical waves that are associated with tropical cyclone (TC) formation. To efficiently analyze high-resolution, global, multidimensional data sets, we first implement multilevel parallelism into the EEMD and obtain a parallel speedup of 720 using 200 eight-core processors. We then apply the parallel EEMD (PEEMD) to extract the intrinsic mode functions (IMFs) from preselected data sets that represent (1) idealized tropical waves and (2) large-scale environmental flows associated with Hurricane Sandy (2012). Results indicate that the PEEMD is efficient and effective in revealing the major wave characteristics of the data, such as wavelengths and periods, by sifting out the dominant (wave) components. This approach has potential for hurricane climate studies examining the statistical relationship between tropical waves and TC formation.
A parallel graded-mesh FDTD algorithm for human-antenna interaction problems.
Catarinucci, Luca; Tarricone, Luciano
2009-01-01
The finite-difference time-domain (FDTD) method is frequently used for the numerical solution of a wide variety of electromagnetic (EM) problems and, among them, those concerning human exposure to EM fields. In many practical cases related to the assessment of occupational EM exposure, large simulation domains are modeled and high spatial resolution is adopted, so that strong memory and central processing unit power requirements have to be satisfied. To better afford the computational effort, the use of parallel computing is a winning approach; alternatively, subgridding techniques are often implemented. However, the simultaneous use of subgridding schemes and parallel algorithms is very new. In this paper, an easy-to-implement and highly efficient parallel graded-mesh (GM) FDTD scheme is proposed and applied to human-antenna interaction problems, demonstrating its appropriateness in dealing with complex occupational tasks and showing its capability to guarantee the advantages of a traditional subgridding technique without affecting the parallel FDTD performance.
Generalized parallel-perspective stereo mosaics from airborne video.
Zhu, Zhigang; Hanson, Allen R; Riseman, Edward M
2004-02-01
In this paper, we present a new method for automatically and efficiently generating stereoscopic mosaics by seamless registration of images collected by a video camera mounted on an airborne platform. Using a parallel-perspective representation, a pair of geometrically registered stereo mosaics can be precisely constructed under quite general motion. A novel parallel ray interpolation for stereo mosaicing (PRISM) approach is proposed to make stereo mosaics seamless in the presence of obvious motion parallax and for rather arbitrary scenes. Parallel-perspective stereo mosaics generated with the PRISM method have better depth resolution than perspective stereo due to the adaptive baseline geometry. Moreover, unlike previous results showing that parallel-perspective stereo has a constant depth error, we conclude that the depth estimation error of stereo mosaics is in fact a linear function of the absolute depths of a scene. Experimental results on long video sequences are given.
Liao, Congyu; Chen, Ying; Cao, Xiaozhi; Chen, Song; He, Hongjian; Mani, Merry; Jacob, Mathews; Magnotta, Vincent; Zhong, Jianhui
2017-03-01
To propose a novel reconstruction method using parallel imaging with a low rank constraint to accelerate high-resolution multishot spiral diffusion imaging. The undersampled high-resolution diffusion data were reconstructed based on a low rank (LR) constraint, using similarities between the data of different interleaves from a multishot spiral acquisition. Self-navigated phase compensation using the low-resolution phase data in the center of k-space was applied to correct shot-to-shot phase variations induced by motion. The low rank reconstruction was combined with sensitivity encoding (SENSE) for further acceleration. The efficiency of the proposed joint reconstruction framework, dubbed LR-SENSE, was evaluated through error quantification and compared with an ℓ1-regularized compressed sensing method and a conventional iterative SENSE method using the same datasets. It was shown that, for the same acceleration factor, the proposed LR-SENSE method had the smallest normalized sum-of-squares errors among all the compared methods in all diffusion-weighted images and DTI-derived index maps, when evaluated with different acceleration factors (R = 2, 3, 4) and for all the acquired diffusion directions. Robust high-resolution diffusion-weighted images can be efficiently reconstructed from highly undersampled multishot spiral data with the proposed LR-SENSE method. Magn Reson Med 77:1359-1366, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Mars-solar wind interaction: LatHyS, an improved parallel 3-D multispecies hybrid model
NASA Astrophysics Data System (ADS)
Modolo, Ronan; Hess, Sebastien; Mancini, Marco; Leblanc, Francois; Chaufray, Jean-Yves; Brain, David; Leclercq, Ludivine; Esteban-Hernández, Rosa; Chanteur, Gerard; Weill, Philippe; González-Galindo, Francisco; Forget, Francois; Yagi, Manabu; Mazelle, Christian
2016-07-01
In order to better represent the Mars-solar wind interaction, we present an unprecedented model achieving spatial resolution down to 50 km, a so far unexplored resolution for global kinetic models of the Martian ionized environment. Such resolution approaches the ionospheric plasma scale height. In practice, the model is derived from a first version described in Modolo et al. (2005). An important parallelization effort has been conducted and is presented here. A better description of the ionosphere was also implemented, including ionospheric chemistry, electrical conductivities, and a drag force modeling ion-neutral collisions in the ionosphere. This new version of the code, named LatHyS (Latmos Hybrid Simulation), is here used to characterize the impact of various spatial resolutions on simulation results. In addition, and following a global model challenge effort, we present simulation results for three cases, which allow us to address the effect of the suprathermal corona and of solar EUV activity on the magnetospheric plasma boundaries and on the global escape. Simulation results showed that global patterns are relatively similar across the different spatial resolution runs, but the finest-grid runs provide a better representation of the ionosphere and display more details of the planetary plasma dynamics. The results suggest that a significant fraction of escaping O+ ions originates from below 1200 km altitude.
Java PathExplorer: A Runtime Verification Tool
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Rosu, Grigore; Clancy, Daniel (Technical Monitor)
2001-01-01
We describe recent work on designing an environment called Java PathExplorer for monitoring the execution of Java programs. This environment facilitates the testing of execution traces against high level specifications, including temporal logic formulae. In addition, it contains algorithms for detecting classical error patterns in concurrent programs, such as deadlocks and data races. An initial prototype of the tool has been applied to the executive module of the planetary Rover K9, developed at NASA Ames. In this paper we describe the background and motivation for the development of this tool, including comments on how it relates to formal methods tools as well as to traditional testing, and we then present the tool itself.
Client - server programs analysis in the EPOCA environment
NASA Astrophysics Data System (ADS)
Donatelli, Susanna; Mazzocca, Nicola; Russo, Stefano
1996-09-01
Client-server processing is a popular paradigm for distributed computing. In the development of client-server programs, the designer first has to ensure that the implementation behaves correctly, in particular that it is deadlock free. Second, the designer has to guarantee that the program meets predefined performance requirements. This paper addresses the issues in the analysis of client-server programs in EPOCA. EPOCA is a computer-aided software engineering (CASE) support system that allows the automated construction and analysis of generalized stochastic Petri net (GSPN) models of concurrent applications. The paper describes, on the basis of a realistic case study, how client-server systems are modelled in EPOCA, and the kind of qualitative and quantitative analysis supported by its tools.
Task allocation among multiple intelligent robots
NASA Technical Reports Server (NTRS)
Gasser, L.; Bekey, G.
1987-01-01
Researchers describe the design of a decentralized mechanism for allocating assembly tasks in a multiple robot assembly workstation. Currently, the approach focuses on distributed allocation to explore its feasibility and its potential for adaptability to changing circumstances, rather than for optimizing throughput. Individual greedy robots make their own local allocation decisions using both dynamic allocation policies, which propagate through a network of allocation goals, and local static and dynamic constraints describing which robots are eligible for which assembly tasks. Global coherence is achieved by proper weighting of allocation pressures propagating through the assembly plan. Deadlock avoidance and synchronization are achieved using periodic reassessments of local allocation decisions, ageing of allocation goals, and short-term allocation locks on goals.
NASA Technical Reports Server (NTRS)
Goethel, Thomas; Glesner, Sabine
2009-01-01
The correctness of safety-critical embedded software is crucial, and non-functional properties like deadlock-freedom and real-time constraints are particularly important. The real-time calculus Timed Communicating Sequential Processes (CSP) is capable of expressing such properties and can therefore be used to verify embedded software. In this paper, we present our formalization of Timed CSP in the Isabelle/HOL theorem prover, which we have formulated as an operational coalgebraic semantics together with bisimulation equivalences and coalgebraic invariants. Furthermore, we apply these techniques to an abstract specification with real-time constraints, which is the basis for current work in which we verify the components of a simple real-time operating system deployed on a satellite.
Virtual Control Policy for Binary Ordered Resources Petri Net Class.
Rovetto, Carlos A; Concepción, Tomás J; Cano, Elia Esther
2016-08-18
Prevention and avoidance of deadlocks in sensor networks that use the wormhole routing algorithm is an active research domain. Diverse control policies address this problem; our approach contributes a new method. In this paper we present a virtual control policy for the new specialized Petri net subclass called Binary Ordered Resources Petri Net (BORPN). Essentially, it is an ordinary class constructed from various state machines that share unitary resources in a complex form, which allows branching and joining of processes. The reduced structure of this new class gives advantages that allow analysis of the entire system's behavior, a task that is otherwise prohibitive for large systems because of their complexity and routing algorithms.
National Laboratory for Advanced Scientific Visualization at UNAM - Mexico
NASA Astrophysics Data System (ADS)
Manea, Marina; Constantin Manea, Vlad; Varela, Alfredo
2016-04-01
In 2015, the National Autonomous University of Mexico (UNAM) joined the family of universities and research centers where advanced visualization and computing plays a key role to promote and advance missions in research, education, community outreach, and business-oriented consulting. This initiative provides access to a great variety of advanced hardware and software resources and offers a range of consulting services spanning areas related to scientific visualization, among which are: neuroanatomy, embryonic development, genome-related studies, geosciences, geography, and physics- and mathematics-related disciplines. The National Laboratory for Advanced Scientific Visualization delivers services through three main infrastructure environments: the 3D fully immersive display system Cave, the high resolution parallel visualization system Powerwall, and the high resolution spherical display Earth Simulator. The entire visualization infrastructure is interconnected to a high-performance computing cluster (HPCC) called ADA in honor of Ada Lovelace, considered to be the first computer programmer. The Cave is an extra-large, 3.6 m-wide room with images projected on the front, left and right walls, as well as the floor. Specialized crystal-eyes LCD-shutter glasses provide strong stereo depth perception, and a variety of tracking devices allow software to track the position of a user's hand, head and wand. The Powerwall is designed to bring large amounts of complex data together through parallel computing for team interaction and collaboration. This system is composed of 24 (6×4) high-resolution ultra-thin (2 mm) bezel monitors connected to a high-performance GPU cluster. The Earth Simulator is a large (60") high-resolution spherical display used for global-scale data visualization of geophysical, meteorological, climate and ecology data. The HPCC-ADA is a 1000+ computing core system, which offers parallel computing resources to applications that require large amounts of memory as well as large, fast parallel storage systems. The entire system's temperature is controlled by an energy- and space-efficient cooling solution based on large rear-door liquid-cooled heat exchangers. This state-of-the-art infrastructure will boost research activities in the region, offer a powerful scientific tool for teaching at undergraduate and graduate levels, and enhance association and cooperation with business-oriented organizations.
NASA Technical Reports Server (NTRS)
Schunk, Richard Gregory; Chung, T. J.
2001-01-01
A parallelized version of the Flowfield Dependent Variation (FDV) method is developed to analyze a problem of current research interest, the flowfield resulting from a triple shock/boundary layer interaction. Such flowfields are often encountered in the inlets of high speed air-breathing vehicles, including the NASA Hyper-X research vehicle. In order to resolve the complex shock structure and to provide adequate resolution for boundary layer computations of the convective heat transfer from surfaces inside the inlet, models containing over 500,000 nodes are needed. Efficient parallelization of the computation is essential to achieving results in a timely manner. Results from a parallelization scheme based upon multi-threading, as implemented on multiple-processor supercomputers and workstations, are presented.
CFD Analysis and Design Optimization Using Parallel Computers
NASA Technical Reports Server (NTRS)
Martinelli, Luigi; Alonso, Juan Jose; Jameson, Antony; Reuther, James
1997-01-01
A versatile and efficient multi-block method is presented for the simulation of both steady and unsteady flow, as well as aerodynamic design optimization of complete aircraft configurations. The compressible Euler and Reynolds Averaged Navier-Stokes (RANS) equations are discretized using a high resolution scheme on body-fitted structured meshes. An efficient multigrid implicit scheme is implemented for time-accurate flow calculations. Optimum aerodynamic shape design is achieved at very low cost using an adjoint formulation. The method is implemented on parallel computing systems using the MPI message passing interface standard to ensure portability. The results demonstrate that, by combining highly efficient algorithms with parallel computing, it is possible to perform detailed steady and unsteady analysis as well as automatic design for complex configurations using the present generation of parallel computers.
Megavolt parallel potentials arising from double-layer streams in the Earth's outer radiation belt.
Mozer, F S; Bale, S D; Bonnell, J W; Chaston, C C; Roth, I; Wygant, J
2013-12-06
Huge numbers of double layers carrying electric fields parallel to the local magnetic field line have been observed on the Van Allen Probes in connection with in situ relativistic electron acceleration in the Earth's outer radiation belt. For one case with adequate high time resolution data, 7000 double layers were observed in an interval of 1 min to produce a 230,000 V net parallel potential drop crossing the spacecraft. Lower resolution data show that this event lasted for 6 min and that more than 1,000,000 volts of net parallel potential crossed the spacecraft during this time. A double layer traverses the length of a magnetic field line in about 15 s, and the orbital motion of the spacecraft perpendicular to the magnetic field was about 700 km during this 6 min interval. Thus, the instantaneous parallel potential along a single magnetic field line was on the order of tens of kilovolts. Electrons on the field line might experience many such potential steps in their lifetimes, accelerating them to energies where they serve as the seed population for relativistic acceleration by coherent, large amplitude whistler mode waves. Because the double-layer speed of 3100 km/s is on the order of the electron acoustic speed (and not the ion acoustic speed) of a 25 eV plasma, the double layers may result from a new electron acoustic mode. Acceleration mechanisms involving double layers may also be important in planetary radiation belts such as those of Jupiter, Saturn, Uranus, and Neptune, in the solar corona during flares, and in astrophysical objects.
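A quick consistency check on these figures (my arithmetic, not from the abstract): the average potential per double layer is

$$ \frac{\Delta\Phi}{N} = \frac{2.3\times10^{5}\,\mathrm{V}}{7000} \approx 33\ \mathrm{V}, $$

so each individual double layer carries only tens of volts; the megavolt total accumulates from the sheer number of structures crossing the spacecraft.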
Smoldyn on graphics processing units: massively parallel Brownian dynamics simulations.
Dematté, Lorenzo
2012-01-01
Space is a very important aspect in the simulation of biochemical systems; recently, the need for simulation algorithms able to cope with space has become more and more compelling. Complex and detailed models of biochemical systems need to deal with the movement of single molecules and particles, taking into consideration localized fluctuations, transportation phenomena, and diffusion. A common drawback of spatial models lies in their complexity: models can become very large, and their simulation can be time consuming, especially if we want to capture the system's behavior in a reliable way using stochastic methods in conjunction with a high spatial resolution. In order to deliver on the promise made by systems biology of understanding a system as a whole, we need to scale up the size of the models we are able to simulate, moving from sequential to parallel simulation algorithms. In this paper, we analyze Smoldyn, a widely used algorithm for stochastic simulation of chemical reactions with spatial resolution and single molecule detail, and we propose an alternative, innovative implementation that exploits the parallelism of Graphics Processing Units (GPUs). The implementation executes the most computationally demanding steps (computation of diffusion, unimolecular and bimolecular reactions, as well as the most common cases of molecule-surface interaction) on the GPU, computing them in parallel for each molecule of the system. The implementation offers good speed-ups and real time, high quality graphics output.
NASA Astrophysics Data System (ADS)
Li, Gaohua; Fu, Xiang; Wang, Fuxin
2017-10-01
The low-dissipation high-order accurate hybrid upwind/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model and flow feature-based adaptive mesh refinement (AMR), are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows with high spatial resolution. The overset grid assembly (OGA) process, based on collection detection theory and an implicit hole-cutting algorithm, achieves automatic coupling of the near-body and off-body solvers, and a trial-and-error method is used for obtaining a globally balanced load distribution among the composed multiple codes. The results of flows over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of the high-order hybrid scheme, advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution for the simulation of turbulent wake eddies.
Wilson, Robert L.; Frisz, Jessica F.; Hanafin, William P.; Carpenter, Kevin J.; Hutcheon, Ian D.; Weber, Peter K.; Kraft, Mary L.
2014-01-01
The local abundance of specific lipid species near a membrane protein is hypothesized to influence the protein’s activity. The ability to simultaneously image the distributions of specific protein and lipid species in the cell membrane would facilitate testing these hypotheses. Recent advances in imaging the distribution of cell membrane lipids with mass spectrometry have created the desire for membrane protein probes that can be simultaneously imaged with isotope labeled lipids. Such probes would enable conclusive tests of whether specific proteins co-localize with particular lipid species. Here, we describe the development of fluorine-functionalized colloidal gold immunolabels that facilitate the detection and imaging of specific proteins in parallel with lipids in the plasma membrane using high-resolution SIMS performed with a NanoSIMS. First, we developed a method to functionalize colloidal gold nanoparticles with a partially fluorinated mixed monolayer that permitted NanoSIMS detection and rendered the functionalized nanoparticles dispersible in aqueous buffer. Then, to allow for selective protein labeling, we attached the fluorinated colloidal gold nanoparticles to the nonbinding portion of antibodies. By combining these functionalized immunolabels with metabolic incorporation of stable isotopes, we demonstrate that influenza hemagglutinin and cellular lipids can be imaged in parallel using NanoSIMS. These labels enable a general approach to simultaneously imaging specific proteins and lipids with high sensitivity and lateral resolution, which may be used to evaluate predictions of protein co-localization with specific lipid species. PMID:22284327
"What Should I Be When I Grow Up?" Helping Gifted Children Set Lifelong Goals
ERIC Educational Resources Information Center
Lindbom-Cho, Desiree R.
2013-01-01
The new year brings about one's desire to change and to improve one's self. These emotions quickly fade and turn into lofty resolutions that are not fulfilled. For parents of gifted children, many parallels can be made between making New Year's resolutions and setting more long-term goals related to their education and/or career.…
Impact of buildings on surface solar radiation over urban Beijing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Bin; Liou, Kuo-Nan; Gu, Yu
The rugged surface of an urban area due to varying buildings can interact with solar beams and affect both the magnitude and spatiotemporal distribution of surface solar fluxes. Here we systematically examine the impact of buildings on downward surface solar fluxes over urban Beijing by using a 3-D radiation parameterization that accounts for 3-D building structures vs. the conventional plane-parallel scheme. We find that the resulting downward surface solar flux deviations between the 3-D and the plane-parallel schemes are generally ±1–10 W m^-2 at 800 m grid resolution and within ±1 W m^-2 at 4 km resolution. Pairs of positive–negative flux deviations on different sides of buildings are resolved at 800 m resolution, while they offset each other at 4 km resolution. Flux deviations from the unobstructed horizontal surface at 4 km resolution are positive around noon but negative in the early morning and late afternoon. The corresponding deviations at 800 m resolution, in contrast, show diurnal variations that are strongly dependent on the location of the grids relative to the buildings. Both the magnitude and spatiotemporal variations of flux deviations are largely dominated by the direct flux. Furthermore, we find that flux deviations can potentially be an order of magnitude larger by using a finer grid resolution. Atmospheric aerosols can reduce the magnitude of downward surface solar flux deviations by 10–65 %, while the surface albedo generally has a rather moderate impact on flux deviations. The results imply that the effect of buildings on downward surface solar fluxes may not be critically significant in mesoscale atmospheric models with a grid resolution of 4 km or coarser. However, the effect can play a crucial role in meso-urban atmospheric models as well as microscale urban dispersion models with resolutions of 1 m to 1 km.
Li, Yiming; Ishitsuka, Yuji; Hedde, Per Niklas; Nienhaus, G Ulrich
2013-06-25
In localization-based super-resolution microscopy, individual fluorescent markers are stochastically photoactivated and subsequently localized within a series of camera frames, yielding a final image with a resolution far beyond the diffraction limit. Yet, before localization can be performed, the subregions within the frames where the individual molecules are present have to be identified, oftentimes in the presence of high background. In this work, we address the importance of reliable molecule identification for the quality of the final reconstructed super-resolution image. We present a fast and robust algorithm (a-livePALM) that vastly improves the molecule detection efficiency while minimizing false assignments that can lead to image artifacts.
NASA Astrophysics Data System (ADS)
Lee, Minsuk; Won, Youngjae; Park, Byungjun; Lee, Seungrag
2017-02-01
Not only the static characteristics but also the dynamic characteristics of the red blood cell (RBC) contain useful information for blood diagnosis. Quantitative phase imaging (QPI) can capture sample images with subnanometer-scale depth resolution and millisecond-scale temporal resolution. Various studies have used QPI for RBC diagnosis, and recently several approaches have been developed to decrease the processing time of RBC information extraction from QPI using parallel computing algorithms; however, previous studies focused on static parameters, such as cell morphology, or simple dynamic parameters, such as the root mean square (RMS) of the membrane fluctuations. Previously, we presented a practical blood test method using time series correlation analysis of RBC membrane flickering with QPI. However, this method showed a limit to clinical application because of its long computation time. In this study, we present an accelerated time series correlation analysis of RBC membrane flickering using a parallel computing algorithm. This method showed fractal scaling exponent results for the surrounding medium and normal RBCs consistent with our previous research.
Parallel MR imaging: a user's guide.
Glockner, James F; Hu, Houchun H; Stanley, David W; Angelos, Lisa; King, Kevin
2005-01-01
Parallel imaging is a recently developed family of techniques that take advantage of the spatial information inherent in phased-array radiofrequency coils to reduce acquisition times in magnetic resonance imaging. In parallel imaging, the number of sampled k-space lines is reduced, often by a factor of two or greater, thereby significantly shortening the acquisition time. Parallel imaging techniques have only recently become commercially available, and the wide range of clinical applications is just beginning to be explored. The potential clinical applications primarily involve reduction in acquisition time, improved spatial resolution, or a combination of the two. Improvements in image quality can be achieved by reducing the echo train lengths of fast spin-echo and single-shot fast spin-echo sequences. Parallel imaging is particularly attractive for cardiac and vascular applications and will likely prove valuable as 3-T body and cardiovascular imaging becomes part of standard clinical practice. Limitations of parallel imaging include reduced signal-to-noise ratio and reconstruction artifacts. It is important to consider these limitations when deciding when to use these techniques. (c) RSNA, 2005.
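The SNR penalty noted above is commonly summarized by the standard parallel-imaging relation (a textbook formula, not quoted from this article):

$$ \mathrm{SNR}_{\mathrm{parallel}} = \frac{\mathrm{SNR}_{\mathrm{full}}}{g\sqrt{R}}, $$

where R is the acceleration factor (the factor by which the number of sampled k-space lines is reduced) and g ≥ 1 is the coil-geometry factor, which grows as the coil sensitivities become less independent.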
Transmission Index Research of Parallel Manipulators Based on Matrix Orthogonal Degree
NASA Astrophysics Data System (ADS)
Shao, Zhu-Feng; Mo, Jiao; Tang, Xiao-Qiang; Wang, Li-Ping
2017-11-01
The performance index is the standard of performance evaluation and the foundation of both performance analysis and optimal design for the parallel manipulator. Seeking suitable kinematic indices is always an important and challenging issue for the parallel manipulator. So far, there are extensive studies in this field, but few existing indices can meet all the requirements, such as simplicity, intuitiveness, and universality. To solve this problem, the matrix orthogonal degree is adopted, and generalized transmission indices that can evaluate the motion/force transmissibility of fully parallel manipulators are proposed. Transmission performance analysis of typical branches, end effectors, and parallel manipulators is given to illustrate the proposed indices and analysis methodology. Simulation and analysis results reveal that the proposed transmission indices possess significant advantages: they are normalized and finite (ranging from 0 to 1), dimensionally homogeneous, frame-free, intuitive and easy to calculate. Besides, the proposed indices indicate the good-transmission region and the closeness to singularity with better resolution than the traditional local conditioning index, and provide a novel tool for kinematic analysis and optimal design of fully parallel manipulators.
NASA Astrophysics Data System (ADS)
Sauvage, Marc; Amiaux, Jérome; Austin, James; Bello, Mara; Bianucci, Giovanni; Chesné, Simon; Citterio, Oberto; Collette, Christophe; Correia, Sébastien; Durand, Gilles A.; Molinari, Sergio; Pareschi, Giovanni; Penfornis, Yann; Sironi, Giorgia; Valsecchi, Giuseppe; Verpoort, Sven; Wittrock, Ulrich
2016-07-01
Astronomy is driven by the quest for higher sensitivity and improved angular resolution in order to detect fainter or smaller objects. The far-infrared to submillimeter domain is a unique probe of the cold and obscured Universe, harboring for instance the precious signatures of key elements such as water. Space observations are mandatory given the blocking effect of our atmosphere. However, the methods we have relied on so far to develop increasingly larger telescopes are now reaching a hard limit, with the JWST illustrating this in more than one way (e.g. it will be launched by one of the most powerful rockets, and it requires the largest existing facility on Earth to be qualified). With the Thinned Aperture Light Collector (TALC) project, a concept of a deployable 20 m annular telescope, we propose to break out of this deadlock by developing novel technologies for space telescopes, which are disruptive in three aspects: • An innovative deployable mirror whose topology, based on stacking rather than folding, leads to an optimum ratio of collecting area over volume, and creates a telescope with an eight times larger collecting area and three times higher angular resolution compared to JWST from the same pre-deployed volume; • An ultra-lightweight segmented primary mirror, based on electrodeposited nickel, composite and honeycomb stacks, built with a replica process to control costs and mitigate the industrial risks; • An active optics control layer based on piezo-electric layers incorporated into the mirror rear shell, allowing control of the shape by internal stress rather than by reaction on a structure. We present in this paper the roadmap we have built to bring these three disruptive technologies to technology readiness level 3. We will achieve this goal through the design and realization of representative elements: segments of mirrors for optical quality verification, active optics implemented on representative mirror stacks to characterize the shape correction capabilities, and mechanical models for validation of the deployment concept. Accompanying these developments, a strong system activity will ensure that the ultimate goal of having an integrated system can be met, especially in terms of (a) scalability toward a larger structure, and (b) verification philosophy.
Regional-scale calculation of the LS factor using parallel processing
NASA Astrophysics Data System (ADS)
Liu, Kai; Tang, Guoan; Jiang, Ling; Zhu, A.-Xing; Yang, Jianyi; Song, Xiaodong
2015-05-01
With the increase of data resolution and the increasing application of USLE over large areas, the existing serial implementation of algorithms for computing the LS factor is becoming a bottleneck. In this paper, a parallel processing model based on the message passing interface (MPI) is presented for the calculation of the LS factor, so that massive datasets at a regional scale can be processed efficiently. The parallel model contains algorithms for calculating flow direction, flow accumulation, drainage network, slope, slope length and the LS factor. According to the existence of data dependence, the algorithms are divided into local algorithms and global algorithms. Parallel strategies are designed according to the algorithms' characteristics, including a decomposition method that maintains the integrity of the results, an optimized workflow that reduces the time spent exporting unnecessary intermediate data, and a buffer-communication-computation strategy that improves communication efficiency. Experiments on a multi-node system show that the proposed parallel model allows efficient calculation of the LS factor at a regional scale with a massive dataset.
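As a hedged illustration of the local/global distinction described above, the sketch below (hypothetical, using mpi4py rather than the authors' code) computes slope, a purely local operator, on a row-decomposed DEM with one-row ghost exchanges; global operators like flow accumulation additionally need the iterative cross-boundary communication that the paper's buffer-communication-computation strategy addresses.

```python
# Row-wise domain decomposition with ghost-row exchange; illustrative only.
# Requires numpy and mpi4py; run e.g. with: mpiexec -n 4 python slope_mpi.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
cell = 30.0                                  # assumed grid resolution in meters

# Each rank holds a horizontal strip of the DEM (toy random data here).
local = np.random.rand(100, 400)

# Exchange one ghost row with each neighbor so edge gradients are correct.
up   = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL
ghost_top = np.empty(local.shape[1]); ghost_bot = np.empty(local.shape[1])
comm.Sendrecv(local[0],  dest=up,   recvbuf=ghost_bot, source=down)
comm.Sendrecv(local[-1], dest=down, recvbuf=ghost_top, source=up)

# Duplicate the edge row at physical domain boundaries (zero-gradient choice).
padded = np.vstack([ghost_top if up   != MPI.PROC_NULL else local[0],
                    local,
                    ghost_bot if down != MPI.PROC_NULL else local[-1]])

# Central-difference slope (rise/run), a building block of LS-factor formulas.
dz_dy, dz_dx = np.gradient(padded, cell)
slope = np.hypot(dz_dx, dz_dy)[1:-1]         # drop the ghost rows again
print(rank, slope.mean())
```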
NASA Astrophysics Data System (ADS)
Yan, Hui; Wang, K. G.; Jones, Jim E.
2016-06-01
A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics of phase coarsening in the region of ultrahigh volume fraction is found. The parallel implementation is capable of harnessing the greater computer power available from high-performance architectures. The parallelized code enables an increase in the three-dimensional simulation system size up to a 512^3 grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability which improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows good agreement with actual run times from numerical tests.
NASA Astrophysics Data System (ADS)
Yu, A. R.; Park, S.-J.; Choi, Y. Y.; Kim, K. M.; Kim, H.-J.
2015-09-01
Triumph X-SPECT is a newly released CZT-based preclinical small-animal SPECT system with interchangeable collimators. The purpose of this work was to evaluate and systematically compare the imaging performances of three different collimators in the CZT-based preclinical small-animal system: a single-pinhole collimator (SPH), a multi-pinhole collimator (MPH) and a parallel-hole collimator. We measured the spatial resolutions and sensitivities of the three collimators with 99mTc sources, considering three distinct energy window widths (5, 10, and 20%), and used the NEMA NU4-2008 Image Quality phantom to test the imaging performance of the three collimators in terms of uniformity and spill-over ratio (SOR) for each energy window. With a 10% energy window width at a radius of rotation (ROR) of 30 mm, the system resolution of the SPH, MPH and parallel-hole collimators was 0.715, 0.855 and 3.270 mm FWHM, respectively. For the same energy window, the sensitivity of the system with the SPH, MPH and parallel-hole collimators was 32.860, 152.514 and 49.205 counts/sec/MBq at a 100 mm source-to-detector distance and 6.790, 33.376 and 49.038 counts/sec/MBq at a 130 mm source-to-detector distance, respectively. The image noise and SOR_air for the three collimators were 20.137, 12.278 and 11.232 (%STD_unif) and 0.106, 0.140 and 0.161, respectively. Overall, the results show that the SPH had better spatial resolution than the other collimators. The MPH had the highest sensitivity at a 100 mm source-to-collimator distance, and the parallel-hole collimator had the highest sensitivity at a 130 mm source-to-detector distance. Therefore, the proper collimator for the Triumph X-SPECT system must be determined by the task. These results provide valuable reference data and insight into the imaging performance of various collimators in CZT-based preclinical small-animal SPECT.
NASA Astrophysics Data System (ADS)
Hofierka, Jaroslav; Lacko, Michal; Zubal, Stanislav
2017-10-01
In this paper, we describe the parallelization of three complex and computationally intensive modules of GRASS GIS using the OpenMP application programming interface for multi-core computers. These include the v.surf.rst module for spatial interpolation, the r.sun module for solar radiation modeling and the r.sim.water module for water flow simulation. We briefly describe the functionality of the modules and parallelization approaches used in the modules. Our approach includes the analysis of the module's functionality, identification of source code segments suitable for parallelization and proper application of OpenMP parallelization code to create efficient threads processing the subtasks. We document the efficiency of the solutions using the airborne laser scanning data representing land surface in the test area and derived high-resolution digital terrain model grids. We discuss the performance speed-up and parallelization efficiency depending on the number of processor threads. The study showed a substantial increase in computation speeds on a standard multi-core computer while maintaining the accuracy of results in comparison to the output from original modules. The presented parallelization approach showed the simplicity and efficiency of the parallelization of open-source GRASS GIS modules using OpenMP, leading to an increased performance of this geospatial software on standard multi-core computers.
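The modules themselves are C code parallelized with OpenMP directives; as a language-neutral analogy to the pattern described (identify independent per-cell work, split the raster into chunks, process them concurrently, reassemble), here is a hedged Python sketch with an invented stand-in kernel:

```python
# Analogous sketch only; the actual GRASS modules use OpenMP threads in C.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    # Stand-in for per-cell work such as interpolation or radiation terms.
    return np.sqrt(chunk) + 0.1 * chunk

def parallel_map_raster(raster, workers=4):
    chunks = np.array_split(raster, workers, axis=0)   # row-wise subtasks
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(process_chunk, chunks))
    return np.vstack(results)

if __name__ == "__main__":
    dem = np.random.rand(2000, 2000)
    print(parallel_map_raster(dem).shape)
```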
Opinion strength influences the spatial dynamics of opinion formation
Baumgaertner, Bert O.; Tyson, Rebecca T.; Krone, Stephen M.
2016-01-01
Opinions are rarely binary; they can be held with different degrees of conviction, and this expanded attitude spectrum can affect the influence one opinion has on others. Our goal is to understand how different aspects of influence lead to recognizable spatio-temporal patterns of opinions and their strengths. To do this, we introduce a stochastic spatial agent-based model of opinion dynamics that includes a spectrum of opinion strengths and various possible rules for how the opinion strength of one individual affects the influence that this individual has on others. Through simulations, we find that even a small amount of amplification of opinion strength through interaction with like-minded neighbors can tip the scales in favor of polarization and deadlock. PMID:28529381
NASA Technical Reports Server (NTRS)
Havelund, Klaus
1999-01-01
The JAVA PATHFINDER, JPF, is a translator from a subset of JAVA 1.0 to PROMELA, the programming language of the SPIN model checker. The purpose of JPF is to establish a framework for verification and debugging of JAVA programs based on model checking. The main goal is to automate program verification such that a programmer can apply it in daily work without the need for a specialist to manually reformulate a program into a different notation in order to analyze it. The system is especially suited for analyzing multi-threaded JAVA applications, where normal testing usually falls short. The system can find deadlocks and violations of boolean assertions stated by the programmer in a special assertion language. This document explains how to use JPF.
European policymaking on the tobacco advertising ban: the importance of escape routes.
Adamini, Sandra; Versluis, Esther; Maarse, Hans
2011-01-01
This article analyses the European Union policymaking process regarding tobacco advertising. While others have already highlighted the importance of intergovernmental bargaining between member states to explain the outcome of the tobacco advertising case, the main aim of this article is to identify the use of escape routes by the Commission, the European Parliament, the Council and interest groups that played an important role in overcoming the deadlock. When looking at the different institutions that structure policymaking, we argue that focusing on escape routes indeed provides clear insight into the process and into the strategies that were necessary to 'make Europe work'. In the end, it appears to be a combination of escape routes that resulted in the final decision.
NASA Astrophysics Data System (ADS)
Liu, GaiYun; Chao, Daniel Yuh
2015-08-01
To date, research on supervisor design for flexible manufacturing systems has focused on speeding up the computation of optimal (maximally permissive) liveness-enforcing controllers. Recent deadlock prevention policies for systems of simple sequential processes with resources (S3PR) reduce the computational burden by considering only the minimal portion of all first-met bad markings (FBMs). Maximal permissiveness is ensured by not forbidding any live state. This paper proposes a method to further reduce the size of the minimal set of FBMs, using a vector-covering approach to efficiently solve the integer linear programming problems while maintaining maximal permissiveness. This paper improves on previous work and achieves the simplest structure with the minimal number of monitors.
Living conditions in Iraq: 10 years after the US-led invasion
Hassounah, S; Dubois, E; Abdalrahman, B; Raheem, M; Jamil, H; Majeed, A
2014-01-01
In the early 1980s, Iraq was a middle-income and rapidly developing country with a well-developed health system. A few decades later – after wars, sanctions and a violent sectarian upsurge – child and maternal health indicators have deteriorated, its poverty headcount index is at 22.9%, and diseases such as cholera have re-emerged. Today Iraq is beset by chronic political deadlock and a complex set of economic challenges; accordingly, all aspects of life are suffering, including health. Irrespective of the monumental investment to improve components of the health system, via national and international efforts, the health status of the population can only advance through a resounding and synergistic effort in the other aspects of life affecting health: the social determinants of health. PMID:24833655
TDat: An Efficient Platform for Processing Petabyte-Scale Whole-Brain Volumetric Images.
Li, Yuxin; Gong, Hui; Yang, Xiaoquan; Yuan, Jing; Jiang, Tao; Li, Xiangning; Sun, Qingtao; Zhu, Dan; Wang, Zhenyu; Luo, Qingming; Li, Anan
2017-01-01
Three-dimensional imaging of whole mammalian brains at single-neuron resolution has generated terabyte (TB)- and even petabyte (PB)-sized datasets. Due to their size, processing these massive image datasets can be hindered by the computer hardware and software typically found in biological laboratories. To fill this gap, we have developed an efficient platform named TDat, which adopts a novel data reformatting strategy that reads cuboid data and employs parallel computing. In data reformatting, TDat is more efficient than existing software. In data accessing, we adopted parallelization to fully exploit the data-transmission capability of computers. We applied TDat to large-volume rigid registration and to neuron tracing in whole-brain data at single-neuron resolution, which has never been demonstrated in other studies. We also showed its compatibility with various computing platforms, image processing software and imaging systems.
The CAnadian NIRISS Unbiased Cluster Survey (CANUCS)
NASA Astrophysics Data System (ADS)
Ravindranath, Swara; NIRISS GTO Team
2017-06-01
The CANUCS GTO program is a JWST spectroscopy and imaging survey of five massive galaxy clusters and ten parallel fields using the NIRISS low-resolution grisms, NIRCam imaging and NIRSpec multi-object spectroscopy. The primary goal is to understand the evolution of low mass galaxies across cosmic time. The resolved emission line maps and line ratios for many galaxies, some at 100 pc resolution thanks to magnification by gravitational lensing, will enable determination of the spatial distribution of star formation, dust and metals. Other science goals include the detection and characterization of galaxies within the reionization epoch, using multiply-imaged lensed galaxies to constrain cluster mass distributions and dark matter substructure, and understanding star-formation suppression in the most massive galaxy clusters. In this talk I will describe the science goals of the CANUCS program. The proposed prime and parallel observations will be presented, with details of the implementation of the observation strategy using JWST proposal planning tools.
Development of an optical inspection platform for surface defect detection in touch panel glass
NASA Astrophysics Data System (ADS)
Chang, Ming; Chen, Bo-Cheng; Gabayno, Jacque Lynn; Chen, Ming-Fu
2016-04-01
An optical inspection platform combining parallel image processing with a high resolution opto-mechanical module was developed for defect inspection of touch panel glass. Dark field images were acquired using a 12288-pixel line CCD camera with 3.5 µm per pixel resolution and a 12 kHz line rate. Key features of the glass surface were analyzed by parallel image processing on combined CPU and GPU platforms. Defect inspection of touch panel glass, which produced 386 megapixels of image data per sample, was completed in roughly 5 seconds. A high detection rate for surface scratches on the touch panel glass was realized, with a minimum detectable defect size of about 10 µm. The implementation of a custom illumination source significantly improved the scattering efficiency at the surface, thereby enhancing the contrast in the acquired images and the overall performance of the inspection system.
Dual-resolution dose assessments for proton beamlet using MCNPX 2.6.0
NASA Astrophysics Data System (ADS)
Chao, T. C.; Wei, S. C.; Wu, S. W.; Tung, C. J.; Tu, S. J.; Cheng, H. W.; Lee, C. C.
2015-11-01
The purpose of this study is to assess proton dose distributions in dual resolution phantoms using MCNPX 2.6.0. The dual resolution phantom uses higher resolution at the Bragg peak, in areas near large dose gradients, or at heterogeneous interfaces, and lower resolution in the rest. MCNPX 2.6.0 was installed in Ubuntu 10.04 with MPI for parallel computing. FMesh1 tallies, which are specially designed for voxel phantoms and convert fluence into dose deposition, were utilized to record the energy deposition. 60 and 120 MeV narrow proton beams were incident on coarse, dual and fine resolution phantoms with pure-water, water-bone-water and water-air-water setups. The doses in coarse resolution phantoms are underestimated owing to the partial volume effect. The dose distributions in dual and high resolution phantoms agreed well with each other, and dual resolution phantoms were at least 10 times more efficient than the fine resolution one. Because the secondary particle range is much longer in air than in water, the dose in a low density region may be underestimated if the resolution or calculation grid is not small enough.
Diederichs, Tim; Nguyen, Quoc Hung; Urban, Michael; Tampé, Robert; Tornow, Marc
2018-06-13
Membrane proteins involved in transport processes are key targets for pharmaceutical research and industry. Despite continuous improvements and new developments in the field of electrical readouts for the analysis of transport kinetics, a well-suited methodology for high-throughput characterization of single transporters with nonionic substrates and slow turnover rates is still lacking. Here, we report on a novel architecture of silicon chips with embedded nanopore microcavities, based on a silicon-on-insulator technology for high-throughput optical readouts. Arrays containing more than 14,000 inverted-pyramidal cavities of 50 femtoliter volume and 80 nm circular pore openings were constructed via high-resolution electron-beam lithography in combination with reactive ion etching and anisotropic wet etching. These cavities feature both an optically transparent bottom and top cap. Atomic force microscopy analysis reveals an overall extremely smooth chip surface, particularly in the vicinity of the nanopores, which exhibit well-defined edges. Our unprecedented transparent chip design provides parallel and independent fluorescent readout of both cavities and buffer reservoir for unbiased single-transporter recordings. Spreading of large unilamellar vesicles with efficiencies up to 96% created nanopore-supported lipid bilayers, which are stable for more than 1 day. A high lipid mobility in the supported membrane was determined by fluorescence recovery after photobleaching. Flux kinetics of α-hemolysin were characterized at single-pore resolution with a rate constant of 0.96 ± 0.06 × 10^-3 s^-1. Here, we deliver an ideal chip platform for pharmaceutical research, which features high parallelism and throughput, synergistically combined with single-transporter resolution.
A tool for simulating parallel branch-and-bound methods
NASA Astrophysics Data System (ADS)
Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail
2016-01-01
The Branch-and-Bound method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution. Therefore, the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolution of the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
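A toy version of the simulator's core idea, hedged and with invented parameters: subproblem solving is replaced by a stochastic branching process, time is counted in logical steps, and a naive work-stealing policy stands in for the load balancing algorithms under study.

```python
# Illustrative sketch of a stochastic-branching B&B simulator; not the tool itself.
import random

def simulate_bnb(n_procs=4, p_branch=0.49, seed=1):
    random.seed(seed)
    # One work queue per simulated processor; the root node starts on proc 0.
    queues = [[0] if p == 0 else [] for p in range(n_procs)]
    clock = 0                                   # logical time steps
    while any(queues):
        clock += 1
        for p in range(n_procs):
            if queues[p]:
                depth = queues[p].pop()
                if random.random() < p_branch:  # node branches into two children
                    queues[p] += [depth + 1, depth + 1]
        # Simple load balancing: idle processors steal from the longest queue.
        for p in range(n_procs):
            if not queues[p]:
                donor = max(range(n_procs), key=lambda q: len(queues[q]))
                if len(queues[donor]) > 1:
                    queues[p].append(queues[donor].pop(0))
    return clock

# p_branch < 0.5 keeps the branching process subcritical, so it terminates.
print(simulate_bnb())   # logical steps until the simulated tree is exhausted
```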
Parallel Adaptive Mesh Refinement Library
NASA Technical Reports Server (NTRS)
Mac-Neice, Peter; Olson, Kevin
2005-01-01
Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
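A minimal sketch of the block hierarchy PARAMESH manages, in 2D for brevity (the library itself is Fortran 90 and supports oct-trees in 3D); the refinement criterion here is an invented stand-in for an application-supplied test:

```python
# Toy quad-tree of logically Cartesian blocks; illustrative pseudostructure only.
from dataclasses import dataclass, field

@dataclass
class Block:
    x0: float
    y0: float
    size: float
    level: int
    children: list = field(default_factory=list)

def refine(block, needs_refinement, max_level=4):
    if block.level < max_level and needs_refinement(block):
        h = block.size / 2
        block.children = [Block(block.x0 + i * h, block.y0 + j * h, h,
                                block.level + 1)
                          for i in (0, 1) for j in (0, 1)]
        for child in block.children:
            refine(child, needs_refinement, max_level)

def leaves(block):
    if not block.children:
        yield block
    else:
        for c in block.children:
            yield from leaves(c)

# Refine toward a "feature" at the origin, as an AMR criterion might.
root = Block(0.0, 0.0, 1.0, 0)
refine(root, lambda b: (b.x0**2 + b.y0**2) ** 0.5 < b.size)
print(sum(1 for _ in leaves(root)), "leaf blocks")
```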
Fast parallel 3D profilometer with DMD technology
NASA Astrophysics Data System (ADS)
Hou, Wenmei; Zhang, Yunbo
2011-12-01
The confocal microscope is a powerful tool for three-dimensional profile analysis, but a conventional single-point confocal microscope is limited by scanning speed. This paper presents a 3D profilometer prototype of a parallel confocal microscope based on a DMD (Digital Micromirror Device). In this system the DMD takes the place of the Nipkow disk, a classical parallel scanning scheme, to realize parallel lateral scanning. Operated with a suitable pattern, the DMD generates a virtual pinhole array which separates the light into multiple beams. The key parameters that affect the measurement (pinhole size and lateral scanning distance) can be configured conveniently by different patterns sent to the DMD chip. To avoid disturbance between two virtual pinholes working at the same time, a scanning strategy is adopted. Depth response curves, both axial and abaxial, were extracted. Measurement experiments were carried out on a structured silicon sample, and an axial resolution of 55 nm was achieved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lecomte, Roger; Arpin, Louis; Beaudoin, Jean-Franç
Purpose: LabPET II is a new generation APD-based PET scanner designed to achieve sub-mm spatial resolution using truly pixelated detectors and highly integrated parallel front-end processing electronics. Methods: The basic element uses a 4×8 array of 1.12×1.12 mm^2 Lu1.9Y0.1SiO5:Ce (LYSO) scintillator pixels with one-to-one coupling to a 4×8 pixelated monolithic APD array mounted on a ceramic carrier. Four detector arrays are mounted on a daughter board carrying two flip-chip, 64-channel, mixed-signal, application-specific integrated circuits (ASICs) on the backside, each interfacing to two detector arrays. Fully parallel signal processing was implemented in silicon by encoding time and energy information using a dual-threshold Time-over-Threshold (ToT) scheme. The self-contained 128-channel detector module was designed as a generic component for ultra-high resolution PET imaging of small to medium-size animals. Results: Energy and timing performance were optimized by carefully setting the ToT thresholds to minimize the noise/slope ratio. ToT spectra clearly show a resolved 511 keV photopeak and Compton edge, with ToT resolution well below 10%. After correction for the nonlinear ToT response, the energy resolution is typically 24±2% FWHM. Coincidence time resolution between opposing 128-channel modules is below 4 ns FWHM. Initial imaging results demonstrate that the 0.8 mm hot spots of a Derenzo phantom can be resolved. Conclusion: A new generation PET scanner featuring truly pixelated detectors was developed and shown to achieve a spatial resolution approaching the physical limit of PET. Future plans are to integrate a small-bore dedicated mouse version of the scanner within a PET/CT platform.
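A hedged illustration of the dual-threshold ToT idea described in the Methods: the time a shaped pulse spends above a threshold encodes energy nonlinearly (hence the correction step mentioned), while the lower threshold crossing provides the timestamp. The pulse model and threshold values below are assumptions, not LabPET II calibration data.

```python
# Toy dual-threshold Time-over-Threshold readout; illustrative only.
import numpy as np

t = np.linspace(0, 200, 2000)                      # ns

def pulse(energy, tau_r=5.0, tau_f=40.0):
    """Toy bi-exponential shaper output, amplitude proportional to energy."""
    p = np.exp(-t / tau_f) - np.exp(-t / tau_r)
    return energy * p / p.max()

def tot(waveform, threshold):
    above = waveform > threshold
    if not above.any():
        return 0.0
    idx = np.where(above)[0]
    return t[idx[-1]] - t[idx[0]]                  # time spent over threshold

low, high = 30.0, 60.0                             # assumed threshold levels
for energy in (122.0, 511.0):
    w = pulse(energy)
    print("%.0f keV -> ToT_low %.1f ns, ToT_high %.1f ns"
          % (energy, tot(w, low), tot(w, high)))
# The ToT-vs-energy relation is logarithmic-like rather than linear, which is
# why a nonlinearity correction is applied before quoting energy resolution.
```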
Atomic resolution characterization of a SrTiO3 grain boundary in the STEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGibbon, M.M.; Browning, N.D.; Chisholm, M.F.
This paper uses the complementary techniques of high resolution Z-contrast imaging and PEELS (parallel detection electron energy loss spectroscopy) to investigate the atomic structure and chemistry of a 25 degree symmetric tilt boundary in a bicrystal of the electroceramic SrTiO3. The grain boundary is composed of two different boundary structural units which occur in about equal numbers: one which contains Ti-O columns and one without.
2002-01-01
their expression profile and for classification of cells into tumorous and non-tumorous classes. Then we will present a parallel tree method for... cancerous cells. We will use the same dataset and use tree-structured classifiers with multi-resolution analysis for classifying cancerous from non-cancerous... cells. We have the expressions of 4096 genes from 98 different cell types. Of these 98, 72 are cancerous while 26 are non-cancerous. We are interested
Cascaded VLSI neural network architecture for on-line learning
NASA Technical Reports Server (NTRS)
Thakoor, Anilkumar P. (Inventor); Duong, Tuan A. (Inventor); Daud, Taher (Inventor)
1992-01-01
High-speed, analog, fully-parallel, and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A computation intensive feature classification application was demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as an application specific coprocessor for solving real world problems at extremely high data rates.
Cascaded VLSI neural network architecture for on-line learning
NASA Technical Reports Server (NTRS)
Duong, Tuan A. (Inventor); Daud, Taher (Inventor); Thakoor, Anilkumar P. (Inventor)
1995-01-01
High-speed, analog, fully-parallel and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware-compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A comparison-intensive feature classification application has been demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as application-specific-coprocessors for solving real-world problems at extremely high data rates.
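Both records describe learning that succeeds despite limited weight resolution. A hedged sketch of the general idea (not the patented algorithm): keep weights on a coarse quantization grid, as analog synapse hardware would, and round every update to the nearest representable level.

```python
# Toy "hardware-in-the-loop" learning with quantized weights; the perceptron
# task and the 6-bit resolution are illustrative assumptions.
import numpy as np

BITS = 6
LEVELS = 2 ** BITS

def quantize(w, w_max=1.0):
    step = 2 * w_max / (LEVELS - 1)
    return np.clip(np.round(w / step) * step, -w_max, w_max)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)          # linearly separable labels

w, b, lr = quantize(rng.normal(size=2)), 0.0, 0.05
for _ in range(50):
    for xi, yi in zip(X, y):
        pred = float(w @ xi + b > 0)
        w = quantize(w + lr * (yi - pred) * xi)    # round to hardware levels
        b += lr * (yi - pred)
acc = np.mean((X @ w + b > 0) == y)
print("accuracy with %d-bit weights: %.2f" % (BITS, acc))
```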
Nael, Kambiz; Fenchel, Michael; Krishnam, Mayil; Finn, J Paul; Laub, Gerhard; Ruehm, Stefan G
2007-06-01
To evaluate the technical feasibility of high spatial resolution contrast-enhanced magnetic resonance angiography (CE-MRA) with highly accelerated parallel acquisition at 3.0 T using a 32-channel phased array coil and a high relaxivity contrast agent. Ten adult healthy volunteers (5 men, 5 women, aged 21-66 years) underwent high spatial resolution CE-MRA of the pulmonary circulation. Imaging was performed at 3 T using a 32-channel phased array coil. After intravenous injection of 1 mL of gadobenate dimeglumine (Gd-BOPTA) at 1.5 mL/s, a timing bolus was used to measure the transit time from the arm vein to the main pulmonary artery. Subsequently, following intravenous injection of 0.1 mmol/kg of Gd-BOPTA at the same rate, isotropic high spatial resolution data sets (1 x 1 x 1 mm^3) of the entire pulmonary circulation were acquired using a fast gradient-recalled echo sequence (TR/TE 3/1.2 milliseconds, FA 18 degrees) and highly accelerated parallel acquisition (GRAPPA x 6) during a 20-second breath hold. The presence of artifact, noise, and image quality of the pulmonary arterial segments were evaluated independently by 2 radiologists. Phantom measurements were performed to assess the signal-to-noise ratio (SNR). Statistical analysis of the data was performed using the Wilcoxon rank sum test and the 2-sample Student t test. Interobserver variability was tested by kappa coefficient. All studies were of diagnostic quality as determined by both observers. The pulmonary arteries were routinely identified up to fifth-order branches, with definition in the diagnostic range and excellent interobserver agreement (kappa = 0.84, 95% confidence interval 0.77-0.90). Phantom measurements showed significantly lower SNR (P < 0.01) using GRAPPA (17.3 +/- 18.8) compared with measurements without parallel acquisition (58 +/- 49.4). The described 3 T CE-MRA protocol, in addition to the high T1 relaxivity of Gd-BOPTA, provides sufficient SNR to support highly accelerated parallel acquisition (GRAPPA x 6), resulting in acquisition of isotropic (1 x 1 x 1 mm^3) voxels over the entire pulmonary circulation in 20 seconds.
Parallel mapping of optical near-field interactions by molecular motor-driven quantum dots.
Groß, Heiko; Heil, Hannah S; Ehrig, Jens; Schwarz, Friedrich W; Hecht, Bert; Diez, Stefan
2018-04-30
In the vicinity of metallic nanostructures, absorption and emission rates of optical emitters can be modulated by several orders of magnitude [1,2]. Control of such near-field light-matter interaction is essential for applications in biosensing [3], light harvesting [4] and quantum communication [5,6], and requires precise mapping of optical near-field interactions, for which single-emitter probes are promising candidates [7-11]. However, currently available techniques are limited in terms of throughput, resolution and/or non-invasiveness. Here, we present an approach for the parallel mapping of optical near-field interactions with a resolution of <5 nm using surface-bound motor proteins to transport microtubules carrying single emitters (quantum dots). The deterministic motion of the quantum dots allows for the interpolation of their tracked positions, resulting in an increased spatial resolution and a suppression of localization artefacts. We apply this method to map the near-field distribution of nanoslits engraved into gold layers and find excellent agreement with finite-difference time-domain simulations. Our technique can be readily applied to a variety of surfaces for scalable, nanometre-resolved and artefact-free near-field mapping using conventional wide-field microscopes.
The structure of the electron diffusion region during asymmetric anti-parallel magnetic reconnection
NASA Astrophysics Data System (ADS)
Swisdak, M.; Drake, J. F.; Price, L.; Burch, J. L.; Cassak, P.
2017-12-01
The structure of the electron diffusion region during asymmetric magnetic reconnection is explored with high-resolution particle-in-cell simulations that focus on a magnetopause event observed by the Magnetospheric Multiscale Mission (MMS). A major surprise is the development of a standing, oblique whistler-like structure with regions of intense positive and negative dissipation. This structure arises from high-speed electrons that flow along the magnetosheath magnetic separatrices, converge in the dissipation region and jet across the x-line into the magnetosphere. The jet produces a region of negative charge and generates intense parallel electric fields that eject the electrons downstream along the magnetospheric separatrices. The ejected electrons produce the parallel velocity-space crescents documented by MMS.
Parallel multiplex laser feedback interferometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Song; Tan, Yidong; Zhang, Shulian, E-mail: zsl-dpi@mail.tsinghua.edu.cn
2013-12-15
We present a parallel multiplex laser feedback interferometer based on spatial multiplexing which avoids the signal crosstalk of earlier feedback interferometers. The interferometer outputs two close parallel laser beams, whose frequencies are simultaneously shifted by 2Ω by two acousto-optic modulators. A static reference mirror is inserted into one of the optical paths as the reference optical path. The other beam impinges on the target as the measurement optical path. Phase variations of the two feedback laser beams are simultaneously measured through heterodyne demodulation with two different detectors. Their subtraction accurately reflects the target displacement. Under typical room conditions, experimental results show a resolution of 1.6 nm and accuracy of 7.8 nm within the range of 100 μm.
Parallel Visualization of Large-Scale Aerodynamics Calculations: A Case Study on the Cray T3E
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Crockett, Thomas W.
1999-01-01
This paper reports the performance of a parallel volume rendering algorithm for visualizing a large-scale, unstructured-grid dataset produced by a three-dimensional aerodynamics simulation. This dataset, containing over 18 million tetrahedra, allows us to extend our performance results to a problem which is more than 30 times larger than the one we examined previously. This high resolution dataset also allows us to see fine, three-dimensional features in the flow field. All our tests were performed on the Silicon Graphics Inc. (SGI)/Cray T3E operated by NASA's Goddard Space Flight Center. Using 511 processors, a rendering rate of almost 9 million tetrahedra/second was achieved with a parallel overhead of 26%.
High-Resolution Adaptive Optics Test-Bed for Vision Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilks, S C; Thomspon, C A; Olivier, S S
2001-09-27
We discuss the design and implementation of a low-cost, high-resolution adaptive optics test-bed for vision research. It is well known that high-order aberrations in the human eye reduce optical resolution and limit visual acuity. However, the effects of aberration-free eyesight on vision are only now beginning to be studied using adaptive optics to sense and correct the aberrations in the eye. We are developing a high-resolution adaptive optics system for this purpose using a Hamamatsu Parallel Aligned Nematic Liquid Crystal Spatial Light Modulator. Phase-wrapping is used to extend the effective stroke of the device, and the wavefront sensing and wavefront correction are done at different wavelengths. Issues associated with these techniques will be discussed.
THz holography in reflection using a high resolution microbolometer array.
Zolliker, Peter; Hack, Erwin
2015-05-04
We demonstrate a digital holographic setup for Terahertz imaging of surfaces in reflection. The set-up is based on a high-power continuous wave (CW) THz laser and a high-resolution (640 × 480 pixel) bolometer detector array. Wave propagation to non-parallel planes is used to reconstruct the object surface that is rotated relative to the detector plane. In addition we implement synthetic aperture methods for resolution enhancement and compare Fourier transform phase retrieval to phase stepping methods. A lateral resolution of 200 μm and a relative phase sensitivity of about 0.4 rad corresponding to a depth resolution of 6 μm are estimated from reconstructed images of two specially prepared test targets, respectively. We highlight the use of digital THz holography for surface profilometry as well as its potential for video-rate imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-05-17
PeleC is an adaptive-mesh compressible hydrodynamics code for reacting flows. It solves the compressible Navier-Stokes equations with multispecies transport in a block-structured framework. The resulting algorithm is well suited for flows with localized resolution requirements and is robust to discontinuities. User-controllable refinement criteria have the potential to yield extremely small numerical dissipation and dispersion, making this code appropriate for both research and applied usage. The code is built on the AMReX library, which facilitates hierarchical parallelism and manages distributed-memory parallelism. PeleC algorithms are implemented to express shared-memory parallelism.
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1994-01-01
A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.
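A minimal sketch of the multi-level residual idea described above, assuming scalar codebooks and full-search VQ (illustrative only; the paper's SIMD implementation and codebook training are not reproduced here):

```python
import numpy as np

def progressive_vq(image, codebooks):
    """Progressive residual VQ: each level full-search-quantizes the
    residual left by the previous level; the final residual is kept
    losslessly. `codebooks` is a list of 1-D arrays of codeword values
    (scalar codebooks keep the sketch short)."""
    residual = image.astype(np.float64).ravel()
    levels = []
    for cb in codebooks:
        idx = np.argmin((residual[:, None] - cb[None, :]) ** 2, axis=1)
        levels.append(idx)
        residual = residual - cb[idx]      # pass the residual down a level
    return levels, residual                # residual is stored losslessly

def reconstruct(levels, codebooks, residual, shape):
    """Summing all decoded levels plus the lossless residual is exact."""
    out = residual.copy()
    for idx, cb in zip(levels, codebooks):
        out = out + cb[idx]
    return out.reshape(shape)
```

Because the last residual is stored losslessly, summing the decoded levels with it reconstructs the input exactly, which is the property the abstract's lossless final level provides.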
Ream, Justin M; Doshi, Ankur; Lala, Shailee V; Kim, Sooah; Rusinek, Henry; Chandarana, Hersh
2015-06-01
The purpose of this article was to assess the feasibility of golden-angle radial acquisition with compressed sensing reconstruction (Golden-angle RAdial Sparse Parallel [GRASP]) for acquiring high temporal resolution data for pharmacokinetic modeling while maintaining high image quality in patients with Crohn disease terminal ileitis. Fourteen patients with biopsy-proven Crohn terminal ileitis were scanned using both contrast-enhanced GRASP and Cartesian breath-hold (volume-interpolated breath-hold examination [VIBE]) acquisitions. GRASP data were reconstructed with 2.4-second temporal resolution and fitted to the generalized kinetic model using an individualized arterial input function to derive the volume transfer coefficient (K(trans)) and interstitial volume (v(e)). Reconstructions, including data from the entire GRASP acquisition and Cartesian VIBE acquisitions, were rated for image quality, artifact, and detection of typical Crohn ileitis features. Inflamed loops of ileum had significantly higher K(trans) (3.36 ± 2.49 vs 0.86 ± 0.49 min(-1), p < 0.005) and v(e) (0.53 ± 0.15 vs 0.20 ± 0.11, p < 0.005) compared with normal bowel loops. There were no significant differences between GRASP and Cartesian VIBE for overall image quality (p = 0.180) or detection of Crohn ileitis features, although streak artifact was worse with the GRASP acquisition (p = 0.001). High temporal resolution data for pharmacokinetic modeling and high spatial resolution data for morphologic image analysis can be achieved in the same acquisition using GRASP.
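For reference, the generalized kinetic model mentioned above is usually written in the Tofts form, relating the tissue concentration C_t to the arterial input function C_p through K(trans) and v(e) (a standard restatement, not spelled out in the abstract itself):

```latex
C_t(t) \;=\; K^{\mathrm{trans}} \int_{0}^{t} C_p(\tau)\,
\exp\!\left(-\tfrac{K^{\mathrm{trans}}}{v_e}\,(t-\tau)\right) d\tau
```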
Shayovitz, Dror; Herrmann, Harald; Sohler, Wolfgang; Ricken, Raimund; Silberhorn, Christine; Marom, Dan M
2012-11-19
We demonstrate high resolution and increased efficiency background-free time-to-space conversion using spectrally resolved non-degenerate and collinear SFG in a bulk PPLN crystal. A serial-to-parallel resolution factor of 95 and a time window of 42 ps were achieved. A 60-fold increase in conversion efficiency slope compared with our previous work using a BBO crystal [D. Shayovitz and D. M. Marom, Opt. Lett. 36, 1957 (2011)] was recorded. Finally, the measured 40 GHz narrow linewidth of the output SFG signal implies the possibility of extracting phase information by employing coherent detection techniques.
A parallel adaptive mesh refinement algorithm
NASA Technical Reports Server (NTRS)
Quirk, James J.; Hanebutte, Ulf R.
1993-01-01
Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.
Advances and challenges in cryo ptychography at the Advanced Photon Source.
Deng, J; Vine, D J; Chen, S; Nashed, Y S G; Jin, Q; Peterka, T; Vogt, S; Jacobsen, C
Ptychography has emerged as a nondestructive tool to quantitatively study extended samples at a high spatial resolution. In this manuscript, we report on recent developments from our team. We have combined cryo ptychography and fluorescence microscopy to provide simultaneous views of ultrastructure and elemental composition, we have developed multi-GPU parallel computation to speed up ptychographic reconstructions, and we have implemented fly-scan ptychography to allow for faster data acquisition. We conclude with a discussion of future challenges in high-resolution 3D ptychography.
High Resolution Simulations of Arctic Sea Ice, 1979-1993
2003-01-01
William H. Lipscomb. To evaluate improvements in modelling Arctic sea ice, we compare results from two regional models at 1/12° horizontal resolution. The first is a coupled ice-ocean model of the Arctic Ocean, consisting of an ocean model (adapted from the Parallel Ocean Program, Los Alamos National Laboratory [LANL]) and the "old" sea ice model. The second model uses the same grid but consists of an improved "new" sea ice model (LANL
Global Swath and Gridded Data Tiling
NASA Technical Reports Server (NTRS)
Thompson, Charles K.
2012-01-01
This software generates cylindrically projected tiles of swath-based or gridded satellite data for the purpose of dynamically generating high-resolution global images covering various time periods, scaling ranges, and colors called "tiles." It reconstructs a global image given a set of tiles covering a particular time range, scaling values, and a color table. The program is configurable in terms of tile size, spatial resolution, format of input data, location of input data (local or distributed), number of processes run in parallel, and data conditioning.
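To make the tiling scheme concrete, a small hypothetical sketch of the index arithmetic for a cylindrically projected global tile grid (`tile_deg` is an assumed parameter; the actual software makes tile size and resolution configurable):

```python
def tile_index(lat, lon, tile_deg=10.0):
    """Map a latitude/longitude (degrees) to the (row, col) of its tile in
    a cylindrically projected global grid; row 0 sits at the north edge."""
    nrows = int(180.0 / tile_deg)
    ncols = int(360.0 / tile_deg)
    row = min(int((90.0 - lat) // tile_deg), nrows - 1)
    col = min(int((lon + 180.0) // tile_deg), ncols - 1)
    return row, col

# e.g. tile_index(51.5, -0.1) -> (3, 17) with 10-degree tiles
```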
Macro-actor execution on multilevel data-driven architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaudiot, J.L.; Najjar, W.
1988-12-31
The data-flow model of computation brings high programmability to multiprocessors at the expense of increased overhead. Applying the model at a higher level leads to better performance but also introduces a loss of parallelism. We demonstrate here syntax-directed program decomposition methods for the creation of large macro-actors in numerical algorithms. In order to alleviate some of the problems introduced by the lower-resolution interpretation, we describe a multi-level resolution approach and analyze the requirements for its actual hardware and software integration.
Combining points and lines in rectifying satellite images
NASA Astrophysics Data System (ADS)
Elaksher, Ahmed F.
2017-09-01
The rapid advance in remote sensing technologies has established the potential to gather accurate and reliable information about the Earth surface using high resolution satellite images. Remote sensing satellite images of less than one-meter pixel size are currently used in large-scale mapping. Rigorous photogrammetric equations are usually used to describe the relationship between image coordinates and ground coordinates. These equations require knowledge of the exterior and interior orientation parameters of the image, which might not be available. On the other hand, the parallel projection transformation can be used to represent the mathematical relationship between the image-space and object-space coordinate systems and provides the required accuracy for large-scale mapping using fewer ground control features. This article investigates the differences between point-based and line-based parallel projection transformation models in rectifying satellite images with different resolutions. The point-based parallel projection transformation model and its extended form are presented, and the corresponding line-based forms are developed. Results showed that the RMS errors computed using the point- or line-based transformation models are equivalent and satisfy the requirements for large-scale mapping. The differences between the transformation parameters computed using the point- and line-based transformation models are insignificant. The results also showed a high correlation between the differences in ground elevation and the RMS errors.
Moslemi, Vahid; Ashoor, Mansour
2017-05-01
In addition to the trade-off between resolution and sensitivity, which is a common problem among all types of parallel-hole collimators (PCs), images obtained with high-energy PCs (HEPCs) suffer from hole-pattern artifact (HPA) due to their greater septal thickness. In this study, a new collimator design is proposed to improve the trade-off between resolution and sensitivity and to eliminate the HPA. A novel PC, the high-energy extended PC (HEEPC), is proposed and compared to HEPCs. In the new PC, trapezoidal denticles are added on top of the septa on the detector side. The performance of the HEEPCs was evaluated and compared to that of HEPCs using a Monte Carlo N-Particle version 5 (MCNP5) simulation. The point spread functions (PSFs) of HEPCs and HEEPCs were obtained, along with parameters such as resolution, sensitivity, scattering and penetration ratios, and the HPA of the collimators was assessed. Furthermore, a Picker phantom study was performed to examine the effects of the collimators on the quality of planar images. It was found that the HEEPC D, with a resolution identical to that of the HEPC C, increased sensitivity by 34.7%, improved the trade-off between resolution and sensitivity, and eliminated the HPA. In the Picker phantom study, the HEEPC D showed the hot and cold lesions with higher contrast, lower noise, and a higher contrast-to-noise ratio (CNR). Since the HEEPCs modify the shape of the PSFs, they are able to improve the trade-off between resolution and sensitivity; consequently, planar images can be achieved with higher contrast resolution. Furthermore, because the HEEPCs reduce the HPA and produce images with a higher CNR compared to HEPCs, the images obtained with HEEPCs have a higher quality, which can help physicians provide better diagnoses.
Parallel processing architecture for H.264 deblocking filter on multi-core platforms
NASA Astrophysics Data System (ADS)
Prasad, Durga P.; Sonachalam, Sekar; Kunchamwar, Mangesh K.; Gunupudi, Nageswara Rao
2012-03-01
Massively parallel computing (multi-core) chips offer outstanding new solutions that satisfy the increasing demand for high resolution and high quality video compression technologies such as H.264. Such solutions not only provide exceptional quality but also efficiency, low power, and low latency, previously unattainable in software-based designs. While custom hardware and Application Specific Integrated Circuit (ASIC) technologies may achieve low-latency, low-power, and real-time performance in some consumer devices, many applications require a flexible and scalable software-defined solution. The deblocking filter in H.264 encoders/decoders poses difficult implementation challenges because of heavy data dependencies and the conditional nature of the computations. Deblocking filter implementations tend to be fixed and difficult to reconfigure for different needs. The ability to scale up for higher quality requirements such as 10-bit pixel depth or a 4:2:2 chroma format often reduces the throughput of a parallel architecture designed for a lower feature set. A scalable architecture for deblocking filtering, created with a massively parallel processor based solution, means that the same encoder or decoder can be deployed in a variety of applications, at different video resolutions, for different power requirements, and at higher bit-depths and richer color subsampling patterns like YUV 4:2:2 or 4:4:4. Low-power, software-defined encoders/decoders may be implemented using a massively parallel processor array, like that found in HyperX technology, with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. This software programming model for massively parallel processors offers a flexible implementation and a power efficiency close to that of ASIC solutions. This work describes a scalable parallel architecture for an H.264-compliant deblocking filter for multi-core platforms such as HyperX technology. Parallel techniques such as parallel processing at the level of independent macroblocks, sub-blocks, and pixel rows are examined in this work. The deblocking architecture consists of a basic cell called the deblocking filter unit (DFU) and a dependent data buffer manager (DFM). The DFU can be instantiated several times, catering to different performance needs. The DFM serves the data required by the different numbers of DFUs and also manages all the neighboring data required for future DFU processing. This approach achieves the scalability, flexibility, and performance excellence required in deblocking filters.
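For the macroblock-level parallelism mentioned above, the usual H.264 deblocking dependency (a macroblock needs its left and top neighbors finished first) yields anti-diagonal wavefronts of mutually independent macroblocks. A generic sketch of that batching, not the paper's DFU/DFM design:

```python
def wavefront_batches(rows, cols):
    """Group macroblocks into batches that can be filtered in parallel:
    MB (r, c) depends on (r, c-1) and (r-1, c), so all MBs on the same
    anti-diagonal d = r + c are mutually independent."""
    return [[(r, d - r) for r in range(rows) if 0 <= d - r < cols]
            for d in range(rows + cols - 1)]

# e.g. a 3x4 MB grid: batch 0 = [(0,0)], batch 1 = [(0,1),(1,0)], ...
batches = wavefront_batches(3, 4)
```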
MMS Observations of Parallel Electric Fields During a Quasi-Perpendicular Bow Shock Crossing
NASA Astrophysics Data System (ADS)
Goodrich, K.; Schwartz, S. J.; Ergun, R.; Wilder, F. D.; Holmes, J.; Burch, J. L.; Gershman, D. J.; Giles, B. L.; Khotyaintsev, Y. V.; Le Contel, O.; Lindqvist, P. A.; Strangeway, R. J.; Russell, C.; Torbert, R. B.
2016-12-01
Previous observations of the terrestrial bow shock have frequently shown large-amplitude fluctuations in the parallel electric field. These parallel electric fields are seen as both nonlinear solitary structures, such as double layers and electron phase-space holes, and short-wavelength waves, which can reach amplitudes greater than 100 mV/m. The Magnetospheric Multi-Scale (MMS) Mission has crossed the Earth's bow shock more than 200 times. The parallel electric field signatures observed in these crossings are seen in very discrete packets and evolve over time scales of less than a second, indicating the presence of a wealth of kinetic-scale activity. The high time resolution of the Fast Particle Instrument (FPI) available on MMS offers greater detail of the kinetic-scale physics that occur at bow shocks than ever before, allowing greater insight into the overall effect of these observed electric fields. We present a characterization of these parallel electric fields found in a single bow shock event and how it reflects the kinetic-scale activity that can occur at the terrestrial bow shock.
PRATHAM: Parallel Thermal Hydraulics Simulations using Advanced Mesoscopic Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joshi, Abhijit S; Jain, Prashant K; Mudrich, Jaime A
2012-01-01
At the Oak Ridge National Laboratory, efforts are under way to develop a 3D, parallel LBM code called PRATHAM (PaRAllel Thermal Hydraulic simulations using Advanced Mesoscopic Methods) to demonstrate the accuracy and scalability of LBM for turbulent flow simulations in nuclear applications. The code has been developed using FORTRAN-90 and parallelized using the Message Passing Interface (MPI) library. The Silo library is used to compact and write the data files, and the VisIt visualization software is used to post-process the simulation data in parallel. Both the single relaxation time (SRT) and multi relaxation time (MRT) LBM schemes have been implemented in PRATHAM. To capture turbulence without prohibitively increasing the grid resolution requirements, an LES approach [5] is adopted, allowing large-scale eddies to be numerically resolved while modeling the smaller (subgrid) eddies. In this work, a Smagorinsky model has been used, which modifies the fluid viscosity by an additional eddy viscosity depending on the magnitude of the rate-of-strain tensor. In LBM, this is achieved by locally varying the relaxation time of the fluid.
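The local relaxation-time adjustment described in the last two sentences can be sketched as follows (lattice units with sound speed squared 1/3; the Smagorinsky constant and filter width are illustrative, not PRATHAM's values):

```python
import numpy as np

def smagorinsky_tau(tau0, strain_mag, c_s=0.1, delta=1.0):
    """Local LBM relaxation time with a Smagorinsky eddy viscosity.
    In lattice units the kinematic viscosity is nu = (tau - 1/2)/3, so an
    eddy viscosity nu_t = (c_s * delta)**2 * |S| raises tau point by point."""
    nu0 = (tau0 - 0.5) / 3.0                   # molecular viscosity
    nu_t = (c_s * delta) ** 2 * strain_mag     # eddy viscosity from |S|
    return 3.0 * (nu0 + nu_t) + 0.5            # local effective tau

# a strong local strain rate raises tau, i.e. adds dissipation locally
tau_field = smagorinsky_tau(0.6, np.array([0.0, 0.02, 0.1]))
```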
Integrated testing and verification system for research flight software design document
NASA Technical Reports Server (NTRS)
Taylor, R. N.; Merilatt, R. L.; Osterweil, L. J.
1979-01-01
The NASA Langley Research Center is developing the MUST (Multipurpose User-oriented Software Technology) program to cut the cost of producing research flight software through a system of software support tools. The HAL/S language is the primary subject of the design. Boeing Computer Services Company (BCS) has designed an integrated verification and testing capability as part of MUST. Documentation, verification, and test options are provided, with special attention to real-time, multiprocessing issues. The needs of the entire software production cycle have been considered, with effective management and reduced life-cycle costs as foremost goals. Capabilities have been included in the design for static detection of data flow anomalies involving communicating concurrent processes. Some types of ill-formed process synchronization and deadlock are also detected statically.
Using Runtime Analysis to Guide Model Checking of Java Programs
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Norvig, Peter (Technical Monitor)
2001-01-01
This paper describes how two runtime analysis algorithms, an existing data race detection algorithm and a new deadlock detection algorithm, have been implemented to analyze Java programs. Runtime analysis is based on the idea of executing the program once and observing the generated run to extract various kinds of information. This information can then be used to predict whether other, different runs may violate some properties of interest, in addition, of course, to demonstrating whether the generated run itself violates such properties. These runtime analyses can be performed stand-alone to generate a set of warnings. It is furthermore demonstrated how these warnings can be used to guide a model checker, thereby reducing the search space. The described techniques have been implemented in the home-grown Java model checker called PathFinder.
Improving generalized inverted index lock wait times
NASA Astrophysics Data System (ADS)
Borodin, A.; Mirvoda, S.; Porshnev, S.; Ponomareva, O.
2018-01-01
Concurrent operations on tree-like data structures are a cornerstone of any database system. Concurrent operations are intended to improve read/write performance and are usually implemented via some form of locking. Deadlock-free methods of concurrency control are known as tree locking protocols. These protocols provide basic operations (verbs) and algorithms (ways of invoking those operations) for applying them to any tree-like data structure. These algorithms operate on data managed by a storage engine, and storage engines differ widely among RDBMS implementations. In this paper, we discuss a tree locking protocol implementation for the Generalized Inverted Index (GIN) applied to the multiversion concurrency control (MVCC) storage engine inside the PostgreSQL RDBMS. We then introduce improvements to the locking protocol and provide usage statistics from an evaluation of our improvement in a very-high-load environment at one of the world's largest IT companies.
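For orientation, a minimal sketch of the classic deadlock-free tree locking idea ("lock coupling"): every thread acquires locks root-to-leaf, taking the child's lock before releasing the parent's, so no wait cycle can form. This is a generic illustration, not PostgreSQL's actual GIN code:

```python
import threading

class Node:
    def __init__(self, keys, children=()):
        self.lock = threading.Lock()
        self.keys = keys
        self.children = list(children)

def find_leaf(root, key):
    """Lock coupling ("crabbing"): acquire the child's lock before releasing
    the parent's, so every thread takes locks in root-to-leaf order and no
    wait cycle -- hence no deadlock -- can form. The caller must release
    the returned leaf's lock when done."""
    node = root
    node.lock.acquire()
    while node.children:
        # toy routing: left child for small keys, right child otherwise
        child = node.children[0 if key < node.keys[0] else -1]
        child.lock.acquire()    # couple: take the child first...
        node.lock.release()     # ...then drop the parent
        node = child
    return node
```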
Stochastic sensitivity of a bistable energy model for visual perception
NASA Astrophysics Data System (ADS)
Pisarchik, Alexander N.; Bashkirtseva, Irina; Ryashko, Lev
2017-01-01
Modern trends in physiology, psychology and cognitive neuroscience suggest that noise is an essential component of brain functionality and self-organization. With adequate noise, the brain as a complex dynamical system can easily access different ordered states and improve signal detection for decision-making by preventing deadlocks. Using a stochastic sensitivity function approach, we analyze how sensitive equilibrium points are to Gaussian noise in a bistable energy model often used for qualitative description of visual perception. The probability distribution of noise-induced transitions between two coexisting percepts is calculated for different noise intensities and system stabilities. Stochastic squeezing of the hysteresis range and its transition from positive (bistable regime) to negative (intermittency regime) are demonstrated as the noise intensity increases. The hysteresis is more sensitive to noise in the system with higher stability.
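A minimal numerical companion to this kind of model (an assumption-laden sketch, not the authors' exact equations): overdamped Langevin dynamics in a double-well energy, where well-to-well crossings mimic perceptual switches:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(eps, a=1.0, dt=1e-3, n=200_000):
    """Euler-Maruyama integration of overdamped motion in the double-well
    energy U(x) = x**4/4 - a*x**2/2 with Gaussian noise of intensity eps.
    The two wells stand in for the two coexisting percepts."""
    x = np.empty(n)
    x[0] = np.sqrt(a)                          # start in the right well
    for i in range(1, n):
        drift = -(x[i-1]**3 - a*x[i-1])        # -U'(x)
        x[i] = x[i-1] + drift*dt + np.sqrt(2*eps*dt)*rng.standard_normal()
    return x

# counting sign changes gives the number of percept-to-percept switches
switches = np.sum(np.diff(np.sign(simulate(eps=0.15))) != 0)
```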
Applying Jlint to Space Exploration Software
NASA Technical Reports Server (NTRS)
Artho, Cyrille; Havelund, Klaus
2004-01-01
Java is a very successful programming language which is also becoming widespread in embedded systems, where software correctness is critical. Jlint is a simple but highly efficient static analyzer that checks a Java program for several common errors, such as null pointer exceptions and overflow errors. It also includes checks for multi-threading problems, such as deadlocks and data races. The case study described here shows the effectiveness of Jlint in finding such problems; analyzing the false positives in the multi-threading warnings gives an insight into design patterns commonly used in multi-threaded code. The results show that a few analysis techniques are sufficient to avoid almost all false positives. These techniques include investigating all possible callers and a few code idioms. Verifying the correct application of these patterns is still crucial, because their correct usage is not trivial.
Modeling and stochastic analysis of dynamic mechanisms of the perception
NASA Astrophysics Data System (ADS)
Pisarchik, A.; Bashkirtseva, I.; Ryashko, L.
2017-10-01
Modern studies in physiology and cognitive neuroscience consider noise an important constructive factor of brain functionality. Under adequate noise, the brain can rapidly access different ordered states and provide decision-making by preventing deadlocks. Bistable dynamic models are often used for the study of the underlying mechanisms of visual perception. In the present paper, we consider a bistable energy model subject to both additive and parametric noise. Using the catastrophe theory formalism and the stochastic sensitivity functions technique, we analyze the response of the equilibria to noise and study noise-induced transitions between equilibria. We demonstrate and analyse the effect of hysteresis squeezing when the intensity of noise is increased. Stochastic bifurcations connected with the suppression of oscillations by parametric noise are discussed.
Petri net modelling of buffers in automated manufacturing systems.
Zhou, M; Dicesare, F
1996-01-01
This paper presents Petri net models of buffers and a methodology by which buffers can be included in a system without introducing deadlocks or overflows. The context is automated manufacturing. The buffers and models are classified as random order or order preserved (first-in-first-out or last-in-first-out), single-input-single-output or multiple-input-multiple-output, part type and/or space distinguishable or indistinguishable, and bounded or safe. Theoretical results for the development of Petri net models which include buffer modules are developed. This theory provides the conditions under which the system properties of boundedness, liveness, and reversibility are preserved. The results are illustrated through two manufacturing system examples: a multiple machine and multiple buffer production line and an automatic storage and retrieval system in the context of flexible manufacturing.
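To make the token-game semantics concrete, here is a tiny hypothetical sketch (not one of the paper's case studies): a bounded single-input-single-output buffer modeled with a complementary free-slots place, so overflow is structurally impossible:

```python
def enabled(marking, pre):
    """A transition is enabled iff every input place holds enough tokens."""
    return all(marking.get(p, 0) >= w for p, w in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume input tokens, produce output tokens."""
    assert enabled(marking, pre)
    m = dict(marking)
    for p, w in pre.items():
        m[p] -= w
    for p, w in post.items():
        m[p] = m.get(p, 0) + w
    return m

# 3-slot buffer: "put" needs a free slot, "get" needs a full one, so the
# marking of "full" can never exceed 3 -- boundedness by construction.
marking = {"free": 3, "full": 0}
put = ({"free": 1}, {"full": 1})
get = ({"full": 1}, {"free": 1})
marking = fire(marking, *put)     # -> {'free': 2, 'full': 1}
```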
Parallel robot for micro assembly with integrated innovative optical 3D-sensor
NASA Astrophysics Data System (ADS)
Hesselbach, Juergen; Ispas, Diana; Pokar, Gero; Soetebier, Sven; Tutsch, Rainer
2002-10-01
Recent advances in the fields of MEMS and MOEMS often require precise assembly of very small parts with an accuracy of a few microns. In order to meet this demand, a new approach using a robot based on parallel mechanisms in combination with a novel 3D-vision system has been chosen. The planar parallel robot structure with 2 DOF provides a high resolution in the XY-plane. It carries two additional serial axes for linear and rotational movement in/about the z direction. In order to achieve high precision as well as good dynamic capabilities, the drive concept for the parallel (main) axes incorporates air bearings in combination with linear electric servo motors. High accuracy position feedback is provided by optical encoders with a resolution of 0.1 μm. To allow for visualization and visual control of assembly processes, a camera module fits into the hollow tool head. It consists of a miniature CCD camera and a light source. In addition, a modular gripper support is integrated into the tool head. To increase the accuracy, a control loop based on an optoelectronic sensor will be implemented. As a result of an in-depth analysis of different approaches, a photogrammetric system using a single camera and special beam-splitting optics was chosen. A pattern of elliptical marks is applied to the surfaces of the workpiece and gripper. Using a model-based recognition algorithm, the image processing software identifies the gripper and the workpiece and determines their relative position. A deviation vector is calculated and fed into the robot control to guide the gripper.
Fast Particle Methods for Multiscale Phenomena Simulations
NASA Technical Reports Server (NTRS)
Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew
2000-01-01
We are developing particle methods oriented at improving computational modeling capabilities of multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle-laden and interfacial flows, (iii) molecular dynamics studies of nanoscale droplets and studies of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented in parallel computer architectures. The inherent adaptivity, robustness and efficiency of particle methods makes them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are: [i] the proper formulation of particle methods at the molecular and continuum level for the discretization of the governing equations; [ii] the resolution of the wide range of time and length scales governing the phenomena under investigation; [iii] the minimization of numerical artifacts that may interfere with the physics of the systems under consideration; [iv] the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics and smooth particle hydrodynamics, exploiting their unifying concepts such as: the solution of the N-body problem in parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend among seemingly unrelated areas of research.
NASA Astrophysics Data System (ADS)
Breuillard, H.; Matteini, L.; Argall, M. R.; Sahraoui, F.; Andriopoulou, M.; Le Contel, O.; Retinò, A.; Mirioni, L.; Huang, S. Y.; Gershman, D. J.; Ergun, R. E.; Wilder, F. D.; Goodrich, K. A.; Ahmadi, N.; Yordanova, E.; Vaivads, A.; Turner, D. L.; Khotyaintsev, Yu. V.; Graham, D. B.; Lindqvist, P.-A.; Chasapis, A.; Burch, J. L.; Torbert, R. B.; Russell, C. T.; Magnes, W.; Strangeway, R. J.; Plaschke, F.; Moore, T. E.; Giles, B. L.; Paterson, W. R.; Pollock, C. J.; Lavraud, B.; Fuselier, S. A.; Cohen, I. J.
2018-06-01
The Earth's magnetosheath, which is characterized by highly turbulent fluctuations, is usually divided into two regions of different properties as a function of the angle between the interplanetary magnetic field and the shock normal. In this study, we make use of high time resolution instruments on board the Magnetospheric MultiScale spacecraft to determine and compare the properties of subsolar magnetosheath turbulence in both regions, i.e., downstream of the quasi-parallel and quasi-perpendicular bow shocks. In particular, we take advantage of the unprecedented temporal resolution of the Fast Plasma Investigation instrument to show the density fluctuations down to sub-ion scales for the first time. We show that the nature of turbulence is highly compressible down to electron scales, particularly in the quasi-parallel magnetosheath. In this region, the magnetic turbulence also shows an inertial (Kolmogorov-like) range, indicating that the fluctuations are not formed locally, in contrast with the quasi-perpendicular magnetosheath. We also show that the electromagnetic turbulence is dominated by electric fluctuations at sub-ion scales (f > 1 Hz) and that magnetic and electric spectra steepen at the largest electron scale. The latter indicates a change in the nature of turbulence at electron scales. Finally, we show that the electric fluctuations around the electron gyrofrequency are mostly parallel in the quasi-perpendicular magnetosheath, where intense whistlers are observed. This result suggests that energy dissipation, plasma heating, and acceleration might be driven by intense electrostatic parallel structures/waves, which can be linked to whistler waves.
NASA Astrophysics Data System (ADS)
Rybakin, B.; Bogatencov, P.; Secrieru, G.; Iliuha, N.
2013-10-01
The paper deals with a parallel algorithm for calculations on multiprocessor computers and GPU accelerators. Results are presented for the interaction of shock waves with a low-density bubble and for the problem of gas flow under gravitational forces. The algorithm combines the ability to capture shock waves at high resolution, second-order accuracy via TVD schemes, and low-level diffusion of the advection scheme. Many complex problems of continuum mechanics are numerically solved on structured or unstructured grids. To improve the accuracy of the calculations it is necessary to choose a sufficiently small grid (with a small cell size), which has the drawback of substantially increasing computation time. Therefore, for the calculation of complex problems it is reasonable to use the method of Adaptive Mesh Refinement (AMR). That is, the grid refinement is performed only in the areas of interest, where, e.g., shock waves are generated, or where complex geometry or other such features exist. Thus, the computing time is greatly reduced. In addition, the execution of the application on the resulting sequence of nested, shrinking grids can be parallelized. The proposed algorithm is based on the AMR method. Utilization of the AMR method can significantly improve the resolution of the difference grid in areas of high interest and, at the same time, accelerate the calculation of multi-dimensional problems. Parallel algorithms for the analyzed difference models are realized for calculations on graphics processors using CUDA technology [1].
Definition of the Spatial Resolution of X-Ray Microanalysis in Thin Foils
NASA Technical Reports Server (NTRS)
Williams, D. B.; Michael, J. R.; Goldstein, J. I.; Romig, A. D., Jr.
1992-01-01
The spatial resolution of X-ray microanalysis in thin foils is defined in terms of the incident electron beam diameter and the average beam broadening. The beam diameter is defined as the full width tenth maximum of a Gaussian intensity distribution. The spatial resolution is calculated by a convolution of the beam diameter and the average beam broadening. This definition of the spatial resolution can be related simply to experimental measurements of composition profiles across interphase interfaces. Monte Carlo calculations using a high-speed parallel supercomputer show good agreement with this definition of the spatial resolution and calculations based on this definition. The agreement is good over a range of specimen thicknesses and atomic number, but is poor when excessive beam tailing distorts the assumed Gaussian electron intensity distributions. Beam tailing occurs in low-Z materials because of fast secondary electrons and in high-Z materials because of plural scattering.
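Since the convolution of two Gaussian profiles is again Gaussian (variances add), the stated definition reduces to widths adding in quadrature. With d_FWTM the incident beam's full width at tenth maximum and b the average beam broadening, a compact restatement under the abstract's Gaussian assumptions is:

```latex
R \;=\; \sqrt{\,d_{\mathrm{FWTM}}^{2} + b^{2}\,}
```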
Recent Advances in 3D Time-Resolved Contrast-Enhanced MR Angiography
Riederer, Stephen J.; Haider, Clifton R.; Borisch, Eric A.; Weavers, Paul T.; Young, Phillip M.
2015-01-01
Contrast-enhanced MR angiography (CE-MRA) was first introduced for clinical studies approximately 20 years ago. Early work provided 3 to 4 mm spatial resolution with acquisition times in the 30 sec range. Since that time there has been continuing effort to provide improved spatial resolution with reduced acquisition time, allowing high resolution three-dimensional (3D) time-resolved studies. The purpose of this work is to describe how this has been accomplished. Specific technical enablers have been: improved gradients allowing reduced repetition times, improved k-space sampling and reconstruction methods, parallel acquisition particularly in two directions, and improved and higher count receiver coil arrays. These have collectively made high resolution time-resolved studies readily available for many anatomic regions. Depending on the application, approximate 1 mm isotropic resolution is now possible with frame times of several seconds. Clinical applications of time-resolved CE-MRA are briefly reviewed. PMID:26032598
A SPECT Scanner for Rodent Imaging Based on Small-Area Gamma Cameras
NASA Astrophysics Data System (ADS)
Lage, Eduardo; Villena, José L.; Tapias, Gustavo; Martinez, Naira P.; Soto-Montenegro, Maria L.; Abella, Mónica; Sisniega, Alejandro; Pino, Francisco; Ros, Domènec; Pavia, Javier; Desco, Manuel; Vaquero, Juan J.
2010-10-01
We developed a cost-effective SPECT scanner prototype (rSPECT) for in vivo imaging of rodents based on small-area gamma cameras. Each detector consists of a position-sensitive photomultiplier tube (PS-PMT) coupled to a 30 × 30 NaI(Tl) scintillator array and electronics attached to the PS-PMT sockets for adapting the detector signals to an in-house developed data acquisition system. The detector components are enclosed in a lead-shielded case with a receptacle to insert the collimators. System performance was assessed using 99mTc for a high-resolution parallel-hole collimator, and for a 0.75-mm pinhole collimator with a 60° aperture angle and a 42-mm collimator length. The energy resolution is about 10.7% of the photopeak energy. The overall system sensitivity is about 3 cps/μCi/detector and planar spatial resolution ranges from 2.4 mm at 1 cm source-to-collimator distance to 4.1 mm at 4.5 cm with parallel-hole collimators. With pinhole collimators planar spatial resolution ranges from 1.2 mm at 1 cm source-to-collimator distance to 2.4 mm at 4.5 cm; sensitivity at these distances ranges from 2.8 to 0.5 cps/μCi/detector. Tomographic hot-rod phantom images are presented together with images of bone, myocardium and brain of living rodents to demonstrate the feasibility of preclinical small-animal studies with the rSPECT.
PP-SWAT: A Python-based computing software for efficient multiobjective calibration of SWAT
USDA-ARS?s Scientific Manuscript database
With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...
Microfluidic local perfusion chambers for the visualization and manipulation of synapses
Taylor, Anne M.; Dieterich, Daniela C.; Ito, Hiroshi T.; Kim, Sally A.; Schuman, Erin M.
2010-01-01
The polarized nature of neurons as well as the size and density of synapses complicates the manipulation and visualization of cell biological processes that control synaptic function. Here we developed a microfluidic local perfusion (μLP) chamber to access and manipulate synaptic regions and pre- and post-synaptic compartments in vitro. This chamber directs the formation of synapses in >100 parallel rows connecting separate neuron populations. A perfusion channel transects the parallel rows allowing access to synaptic regions with high spatial and temporal resolution. We used this chamber to investigate synapse-to-nucleus signaling. Using the calcium indicator dye Fluo-4, we measured changes in calcium at dendrites and somata following local perfusion of glutamate. Exploiting the high temporal resolution of the chamber, we exposed synapses to "spaced" or "massed" application of glutamate and then examined levels of pCREB in somata. Lastly, we applied the metabotropic receptor agonist DHPG to dendrites and observed increases in Arc transcription and Arc transcript localization. PMID:20399729
Structure of a thermostable serralysin from Serratia sp. FS14 at 1.1 Å resolution
Wu, Dongxia; Ran, Tinting; Wang, Weiwu; Xu, Dongqing
2016-01-01
Serralysin is a well studied metalloprotease, and typical serralysins are not thermostable. The serralysin isolated from Serratia sp. FS14 was found to be thermostable, and in order to reveal the mechanism responsible for its thermostability, the crystal structure of serralysin from Serratia sp. FS14 was solved to a crystallographic R factor of 0.1619 at 1.10 Å resolution. Similar to its homologues, it mainly consists of two domains: an N-terminal catalytic domain and a ‘parallel β-roll’ C-terminal domain. Comparative studies show that the shape of the catalytic active-site cavity is more open owing to the 189–198 loop, with a short 310-helix protruding further from the molecular surface, and that the β-sheets comprising the ‘parallel β-roll’ are longer than those in its homologues. The formation of hydrogen bonds from one of the nonconserved residues (Asn200) to Lys27 may contribute to the thermostability. PMID:26750478
An Automated Parallel Image Registration Technique Based on the Correlation of Wavelet Features
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline; Campbell, William J.; Cromp, Robert F.; Zukor, Dorothy (Technical Monitor)
2001-01-01
With the increasing importance of multiple platform/multiple remote sensing missions, fast and automatic integration of digital data from disparate sources has become critical to the success of these endeavors. Our work utilizes maxima of wavelet coefficients to form the basic features of a correlation-based automatic registration algorithm. Our wavelet-based registration algorithm is tested successfully with data from the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) and the Landsat/Thematic Mapper (TM), which differ by translation and/or rotation. By the choice of high-frequency wavelet features, this method is similar to an edge-based correlation method, but by exploiting the multi-resolution nature of a wavelet decomposition, our method achieves higher computational speeds for comparable accuracies. This algorithm has been implemented on a Single Instruction Multiple Data (SIMD) massively parallel computer, the MasPar MP-2, as well as on the Cray T3D, the Cray T3E and a Beowulf cluster of Pentium workstations.
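A compact sketch of correlating high-frequency wavelet features to estimate a translation (assumes the PyWavelets and SciPy packages; the authors' maxima-thresholded, rotation-capable, MasPar implementation is not reproduced):

```python
import numpy as np
import pywt
from scipy.signal import fftconvolve

def register_translation(ref, tgt, wavelet="db2"):
    """Estimate an integer translation between two images by correlating
    edge-like wavelet detail coefficients (one decomposition level)."""
    feats = []
    for img in (ref, tgt):
        _, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)
        feats.append(np.abs(cH) + np.abs(cV) + np.abs(cD))
    f_ref, f_tgt = feats
    corr = fftconvolve(f_ref, f_tgt[::-1, ::-1], mode="same")  # correlation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # offset of the peak from the centre, times 2 because one DWT level
    # halves the resolution (sign convention follows the correlation order)
    return 2 * (np.array(peak) - np.array(corr.shape) // 2)
```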
Single-spin stochastic optical reconstruction microscopy
Pfender, Matthias; Aslam, Nabeel; Waldherr, Gerald; Neumann, Philipp; Wrachtrup, Jörg
2014-01-01
We experimentally demonstrate precision addressing of single-quantum emitters by combined optical microscopy and spin resonance techniques. To this end, we use nitrogen vacancy (NV) color centers in diamond confined within a few tens of nanometers as individually resolvable quantum systems. By developing a stochastic optical reconstruction microscopy (STORM) technique for NV centers, we are able to simultaneously perform sub-diffraction-limit imaging and optically detected magnetic resonance (ODMR) measurements on NV spins. This allows the assignment of spin resonance spectra to individual NV center locations with nanometer-scale resolution and thus further improves spatial discrimination. For example, we resolved formerly indistinguishable emitters by their spectra. Furthermore, ODMR spectra contain metrology information allowing for sub-diffraction-limit sensing of, for instance, magnetic or electric fields with inherently parallel data acquisition. As an example, we have detected nuclear spins with nanometer-scale precision. Finally, we give prospects of how this technique can evolve into a fully parallel quantum sensor for nanometer resolution imaging of delocalized quantum correlations. PMID:25267655
Super-resolution for asymmetric resolution of FIB-SEM 3D imaging using AI with deep learning.
Hagita, Katsumi; Higuchi, Takeshi; Jinnai, Hiroshi
2018-04-12
Scanning electron microscopy equipped with a focused ion beam (FIB-SEM) is a promising three-dimensional (3D) imaging technique for nano- and meso-scale morphologies. In FIB-SEM, the specimen surface is stripped by an ion beam and imaged by an SEM installed orthogonally to the FIB. The lateral resolution is governed by the SEM, while the depth resolution, i.e., the FIB milling direction, is determined by the thickness of the stripped thin layer. In most cases, the lateral resolution is superior to the depth resolution; hence, asymmetric resolution is generated in the 3D image. Here, we propose a new approach based on an image-processing or deep-learning-based method for super-resolution of 3D images with such asymmetric resolution, so as to restore the depth resolution to achieve symmetric resolution. The deep-learning-based method learns from high-resolution sub-images obtained via SEM and recovers low-resolution sub-images parallel to the FIB milling direction. The 3D morphologies of polymeric nano-composites are used as test images, which are subjected to the deep-learning-based method as well as conventional methods. We find that the former yields superior restoration, particularly as the asymmetric resolution is increased. Our super-resolution approach for images having asymmetric resolution enables observation time reduction.
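The training scheme described above can be sketched as follows (an assumption-laden illustration: the tiny network, degradation model and optimizer are stand-ins, not the paper's architecture): learn restoration on lateral SEM slices that have been synthetically down-sampled along one axis, then apply the model to the genuinely low-resolution direction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRNet(nn.Module):
    """Tiny SRCNN-style restorer: maps an upsampled low-resolution slice
    back toward its high-resolution original."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 9, padding=4), nn.ReLU(),
            nn.Conv2d(32, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, 5, padding=2),
        )
    def forward(self, x):
        return self.body(x)

def degrade(hr, factor=4):
    """Mimic the coarse FIB milling axis: subsample rows only, then
    re-interpolate to the original grid to form the network input."""
    lr = hr[:, :, ::factor, :]
    return F.interpolate(lr, size=hr.shape[-2:], mode="bilinear",
                         align_corners=False)

net = SRNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
hr = torch.rand(8, 1, 64, 64)     # stand-in for high-resolution SEM patches
loss = F.mse_loss(net(degrade(hr)), hr)
opt.zero_grad(); loss.backward(); opt.step()
# at inference, the trained net is applied to slices cut parallel to the
# FIB milling direction, i.e. the genuinely low-resolution orientation
```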
Visual resolution in incoherent and coherent light: preliminary investigation
NASA Astrophysics Data System (ADS)
Sarnowska-Habrat, Katarzyna; Dubik, Boguslawa; Zajac, Marek
2001-05-01
In ophthalmology and optometry a number of measures are used to describe the quality of human vision, such as resolution, visual acuity, contrast sensitivity function, etc. In this paper we concentrate on vision quality understood as the resolution of a periodic object: a set of equidistant parallel lines of given spacing and direction. The measurement procedure is based on presenting the test to the investigated person and determining the highest spatial frequency he/she can still resolve. We describe a number of experiments in which we use test tables illuminated with both coherent and incoherent light of different spectral characteristics. Our experiments suggest that under incoherent polychromatic illumination the resolution in blue light is substantially worse than in white light. Under coherent illumination the speckle effect degrades resolution. With laser light it is easy to generate a sinusoidal interference pattern which can serve as a test object. In the paper we compare the results of resolution measurements with test tables and with interference fringes.
MPgrafic: A parallel MPI version of Grafic-1
NASA Astrophysics Data System (ADS)
Prunet, Simon; Pichon, Christophe
2013-04-01
MPgrafic is a parallel MPI version of Grafic-1 which can produce large cosmological initial conditions on a cluster without requiring shared memory. The real Fourier transforms are carried out in place using fftw while minimizing the amount of memory used (at the expense of performance), in the spirit of Grafic-1. The writing of the output file is also carried out in parallel. In addition to the technical parallelization, it provides three extensions over Grafic-1: it can produce power spectra with baryon wiggles (D. J. Eisenstein and W. Hu, Ap. J. 496); it has the optional ability to load a lower resolution noise map corresponding to the low frequency component, which will fix the larger scale modes of the simulation (extra flag 0/1 at the end of the input process), in the spirit of Grafic-2; and it can be used in conjunction with constrfield, which generates initial condition phases from a list of local constraints on density, tidal field, density gradient and velocity.
Processing large remote sensing image data sets on Beowulf clusters
Steinwand, Daniel R.; Maddox, Brian; Beckmann, Tim; Schmidt, Gail
2003-01-01
High-performance computing is often concerned with the speed at which floating- point calculations can be performed. The architectures of many parallel computers and/or their network topologies are based on these investigations. Often, benchmarks resulting from these investigations are compiled with little regard to how a large dataset would move about in these systems. This part of the Beowulf study addresses that concern by looking at specific applications software and system-level modifications. Applications include an implementation of a smoothing filter for time-series data, a parallel implementation of the decision tree algorithm used in the Landcover Characterization project, a parallel Kriging algorithm used to fit point data collected in the field on invasive species to a regular grid, and modifications to the Beowulf project's resampling algorithm to handle larger, higher resolution datasets at a national scale. Systems-level investigations include a feasibility study on Flat Neighborhood Networks and modifications of that concept with Parallel File Systems.
NASA Technical Reports Server (NTRS)
Schutz, Bob E.; Baker, Gregory A.
1997-01-01
The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter addresses specifically the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory resident and out-of-core parallel linear algebra techniques along with data parallel batch algorithms form the foundation of the least squares application structure. A secondary topic includes the adoption of object oriented programming techniques to enhance modularity and reusability of code. Applications implementing the parallel and object oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory resident gradiometer application exhibits an overall application performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun synchronous gradiometer orbit produces average geoid height variances of 17 millimeters.
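As a serial illustration of the normal-equation step described above (a sketch assuming a dense, full-rank design matrix; the paper's parallel out-of-core solver is not reproduced):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def normal_solve(A, b):
    """Least squares via the normal matrix N = A^T A: form N and A^T b,
    then solve with a Cholesky factorization (N is symmetric positive
    definite when A has full column rank)."""
    N = A.T @ A
    return cho_solve(cho_factor(N), A.T @ b)

# toy stand-in for gradiometer/GPS observation equations
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 60))            # observations x coefficients
x_true = rng.standard_normal(60)
b = A @ x_true + 1e-3 * rng.standard_normal(500)
x_hat = normal_solve(A, b)
```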
Parallel Computation of the Regional Ocean Modeling System (ROMS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, P; Song, Y T; Chao, Y
2005-04-05
The Regional Ocean Modeling System (ROMS) is a regional ocean general circulation modeling system solving the free surface, hydrostatic, primitive equations over varying topography. It is free software distributed world-wide for studying both complex coastal ocean problems and the basin-to-global scale ocean circulation. The original ROMS code could only be run on shared-memory systems. With the increasing need to simulate larger model domains with finer resolutions and on a variety of computer platforms, there is a need in the ocean-modeling community to have a ROMS code that can be run on any parallel computer ranging from 10 to hundreds of processors. Recently, we have explored parallelization for ROMS using the MPI programming model. In this paper, an efficient parallelization strategy for such a large-scale scientific software package, based on an existing shared-memory computing model, is presented. In addition, scientific applications and data-performance issues on a couple of SGI systems, including Columbia, the world's third-fastest supercomputer, are discussed.
Developing parallel GeoFEST(P) using the PYRAMID AMR library
NASA Technical Reports Server (NTRS)
Norton, Charles D.; Lyzenga, Greg; Parker, Jay; Tisdale, Robert E.
2004-01-01
The PYRAMID parallel unstructured adaptive mesh refinement (AMR) library has been coupled with the GeoFEST geophysical finite element simulation tool to support parallel active tectonics simulations. Specifically, we have demonstrated modeling of coseismic and postseismic surface displacement due to a simulated Earthquake for the Landers system of interacting faults in Southern California. The new software demonstrated a 25-times resolution improvement and a 4-times reduction in time to solution over the sequential baseline milestone case. Simulations on workstations using a few tens of thousands of stress displacement finite elements can now be expanded to multiple millions of elements with greater than 98% scaled efficiency on various parallel platforms over many hundreds of processors. Our most recent work has demonstrated that we can dynamically adapt the computational grid as stress grows on a fault. In this paper, we will describe the major issues and challenges associated with coupling these two programs to create GeoFEST(P). Performance and visualization results will also be described.
Parallel computing in experimental mechanics and optical measurement: A review (II)
NASA Astrophysics Data System (ADS)
Wang, Tianyi; Kemao, Qian
2018-05-01
With advantages such as non-destructiveness, high sensitivity and high accuracy, optical techniques have been successfully applied to the measurement of various important physical quantities in experimental mechanics (EM) and optical measurement (OM). However, in pursuit of higher image resolution for higher accuracy, the computational burden of optical techniques has become much heavier. Therefore, in recent years, heterogeneous platforms composed of hardware such as CPUs and GPUs have been widely employed to accelerate these techniques due to their cost-effectiveness, short development cycle, easy portability, and high scalability. In this paper, we analyze various works by first illustrating their different architectures, followed by introducing their various parallel patterns for high speed computation. Next, we review the effects of CPU and GPU parallel computing specifically in EM & OM applications in a broad scope, which includes digital image/volume correlation, fringe pattern analysis, tomography, hyperspectral imaging, computer-generated holograms, and integral imaging. In our survey, we have found that high parallelism can always be exploited in such applications for the development of high-performance systems.
Enhancing GIS Capabilities for High Resolution Earth Science Grids
NASA Astrophysics Data System (ADS)
Koziol, B. W.; Oehmke, R.; Li, P.; O'Kuinghttons, R.; Theurich, G.; DeLuca, C.
2017-12-01
Applications for high performance GIS will continue to increase as Earth system models pursue more realistic representations of Earth system processes. Finer spatial resolution model input and output, unstructured or irregular modeling grids, data assimilation, and regional coordinate systems present novel challenges for GIS frameworks operating in the Earth system modeling domain. This presentation provides an overview of two GIS-driven applications that combine high performance software with big geospatial datasets to produce value-added tools for the modeling and geoscientific community. First, a large-scale interpolation experiment using National Hydrography Dataset (NHD) catchments, a high resolution rectilinear CONUS grid, and the Earth System Modeling Framework's (ESMF) conservative interpolation capability will be described. ESMF is a parallel, high-performance software toolkit that provides capabilities (e.g. interpolation) for building and coupling Earth science applications. ESMF is developed primarily by the NOAA Environmental Software Infrastructure and Interoperability (NESII) group. The purpose of this experiment was to test and demonstrate the utility of high performance scientific software in traditional GIS domains. Special attention will be paid to the nuanced requirements for dealing with high resolution, unstructured grids in scientific data formats. Second, a chunked interpolation application using ESMF and OpenClimateGIS (OCGIS) will demonstrate how spatial subsetting can virtually remove computing resource ceilings for very high spatial resolution interpolation operations. OCGIS is a NESII-developed Python software package designed for the geospatial manipulation of high-dimensional scientific datasets. An overview of the data processing workflow, why a chunked approach is required, and how the application could be adapted to meet operational requirements will be discussed here. In addition, we'll provide a general overview of OCGIS's parallel subsetting capabilities including challenges in the design and implementation of a scientific data subsetter.
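A minimal sketch of the chunked-interpolation pattern described in the last paragraph; `regrid_chunk` is a hypothetical placeholder for the actual ESMF/OCGIS regridding call, and a real conservative regrid would also need the source cells overlapping each chunk's halo:

```python
import numpy as np

def chunked_regrid(src_field, regrid_chunk, n_chunks=8):
    """Subset-regrid-write pattern: split the source rows into chunks so
    that only one chunk is resident in memory at a time, which removes
    the memory ceiling on very high resolution interpolation jobs."""
    pieces = []
    for rows in np.array_split(np.arange(src_field.shape[0]), n_chunks):
        pieces.append(regrid_chunk(src_field[rows]))  # one chunk in memory
    return np.concatenate(pieces)
```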
Khachaturian, Mark Haig
2010-01-01
Awake monkey fMRI and diffusion MRI combined with conventional neuroscience techniques have the potential to reveal structural and functional neural networks. The majority of monkey fMRI and diffusion MRI experiments are performed with single coils, which suffer from severe EPI distortions that limit resolution. By constructing phased array coils for monkey MRI studies, gains in SNR and anatomical accuracy (i.e., reduction of EPI distortions) can be achieved using parallel imaging. The major challenges associated with constructing phased array coils for monkeys are the variation in head size and space constraints. Here, we apply phased array technology to a 4-channel phased array coil capable of improving the resolution and image quality of full-brain awake monkey fMRI and diffusion MRI experiments. The phased array coil can adapt to different rhesus monkey head sizes (ages 4-8), fits in the limited space provided by monkey stereotactic equipment, provides SNR gains in primary visual cortex and anatomical accuracy in conjunction with parallel imaging, and improves resolution in fMRI experiments by a factor of 2 (1.25 mm to 1.0 mm isotropic) and in diffusion MRI experiments by a factor of 4 (1.5 mm to 0.9 mm isotropic).
Imaging characteristics of scintimammography using parallel-hole and pinhole collimators
NASA Astrophysics Data System (ADS)
Tsui, B. M. W.; Wessell, D. E.; Zhao, X. D.; Wang, W. T.; Lewis, D. P.; Frey, E. C.
1998-08-01
The purpose of the study is to investigate the imaging characteristics of scintimammography (SM) using parallel-hole (PR) and pinhole (PN) collimators in a clinical setting. Experimental data were acquired from a phantom that models the breast with small lesions using a low energy high resolution (LEHR) PR and a PN collimator. At close distances, the PN collimator provides better spatial resolution and higher detection efficiency than the PR collimator, at the expense of a smaller field-of-view (FOV). Detection of small breast lesions can be further enhanced by noise smoothing, field uniformity correction, scatter subtraction and resolution recovery filtering. Monte Carlo (MC) simulation data were generated from the 3D MCAT phantom that realistically models the Tc-99m sestamibi uptake and attenuation distributions in an average female patient. For both PR and PN collimation, the scatter to primary ratio (S/P) decreases from the base of the breast to the nipple and is higher in the left than right breast due to scatter of photons from the heart. Results from the study add to understanding of the imaging characteristics of SM using PR and PN collimators and assist in the design of data acquisition and image processing methods to enhance the detection of breast lesions using SM.
Slip-parallel seismic lineations on the Northern Hayward Fault, California
Waldhauser, F.; Ellsworth, W.L.; Cole, A.
1999-01-01
A high-resolution relative earthquake location procedure is used to image the fine-scale seismicity structure of the northern Hayward fault, California. The seismicity defines a narrow, near-vertical fault zone containing horizontal alignments of hypocenters extending along the fault zone. The lineations persist over the 15-year observation interval, implying the localization of conditions on the fault where brittle failure conditions are met. The horizontal orientation of the lineations parallels the slip direction of the fault, suggesting that they are the result of the smearing of frictionally weak material along the fault plane over thousands of years.
The WISDOM Study: breaking the deadlock in the breast cancer screening debate.
Esserman, Laura J
2017-01-01
There are few medical issues that have generated as much controversy as screening for breast cancer. In science, controversy often stimulates innovation; however, the intensely divisive debate over mammographic screening has had the opposite effect and has stifled progress. The same two questions (whether it is better to screen annually or every two years, and whether women are best served by beginning screening at 40 or at some later age) have been debated for 20 years, based on data generated three to four decades ago. The controversy has continued largely because our current approach to screening assumes all women have the same risk for the same type of breast cancer. In fact, we now know that cancers vary tremendously in terms of timing of onset, rate of growth, and probability of metastasis. In an era of personalized medicine, we have the opportunity to investigate tailored screening based on a woman's specific risk for a specific tumor type, generating new data that can inform best practices rather than continuing the rancorous debate. It is time to move from debate to wisdom by asking new questions and generating new knowledge. The WISDOM Study (Women Informed to Screen Depending On Measures of risk) is a pragmatic, adaptive, randomized clinical trial comparing a comprehensive risk-based, or personalized, approach to traditional annual breast cancer screening. The multicenter trial will enroll 100,000 women, powered for a primary endpoint of non-inferiority with respect to the number of late-stage cancers detected. The trial will determine whether screening based on personalized risk is as safe and less morbid, whether it is preferred by women, whether it facilitates prevention for those most likely to benefit, and whether it can adapt as we learn who is at risk for what kind of cancer. Funded by the Patient Centered Outcomes Research Institute, WISDOM is the product of a multi-year stakeholder engagement process that has brought together consumers, advocates, primary care physicians, specialists, policy makers, technology companies and payers to help break the deadlock in this debate and advance towards a new, dynamic approach to breast cancer screening.
Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin; Manoonpong, Poramate
2015-01-01
Walking animals, like insects, with little neural computing can effectively perform complex behaviors. For example, they can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking robot. The turning information is transmitted as descending steering signals to the neural locomotion control which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations. The adaptation also enables the robot to effectively escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles. Consequently, it can successfully explore and navigate in complex environments. We first tested our approach in a physical simulation environment and then applied it to our real biomechanical walking robot AMOSII with 19 DOFs to adaptively avoid obstacles and navigate in the real world.
Parallelization of a Fully-Distributed Hydrologic Model using Sub-basin Partitioning
NASA Astrophysics Data System (ADS)
Vivoni, E. R.; Mniszewski, S.; Fasel, P.; Springer, E.; Ivanov, V. Y.; Bras, R. L.
2005-12-01
A primary obstacle towards advances in watershed simulations has been the limited computational capacity available to most models. The growing trend of model complexity, data availability and physical representation has not been matched by adequate developments in computational efficiency. This situation has created a serious bottleneck which limits existing distributed hydrologic models to small domains and short simulations. In this study, we present novel developments in the parallelization of a fully-distributed hydrologic model. Our work is based on the TIN-based Real-time Integrated Basin Simulator (tRIBS), which provides continuous hydrologic simulation using a multiple resolution representation of complex terrain based on a triangulated irregular network (TIN). While the use of TINs reduces computational demand, the sequential version of the model is currently limited over large basins (>10,000 km2) and long simulation periods (>1 year). To address this, a parallel MPI-based version of the tRIBS model has been implemented and tested using high performance computing resources at Los Alamos National Laboratory. Our approach utilizes domain decomposition based on sub-basin partitioning of the watershed. A stream reach graph based on the channel network structure is used to guide the sub-basin partitioning. Individual sub-basins or sub-graphs of sub-basins are assigned to separate processors to carry out internal hydrologic computations (e.g. rainfall-runoff transformation). Routed streamflow from each sub-basin forms the major hydrologic data exchange along the stream reach graph. Individual sub-basins also share subsurface hydrologic fluxes across adjacent boundaries. We demonstrate how the sub-basin partitioning provides computational feasibility and efficiency for a set of test watersheds in northeastern Oklahoma. We compare the performance of the sequential and parallelized versions to highlight the efficiency gained as the number of processors increases. We also discuss how the coupled use of TINs and parallel processing can lead to feasible long-term simulations in regional watersheds while preserving basin properties at high-resolution.
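The sub-basin coordination described above reduces to processing a directed reach graph in upstream-to-downstream order, so that routed streamflow is always available before a downstream basin needs it. A hedged Python sketch of that ordering step (data layout and names are illustrative, not tRIBS code):

```python
# Hedged sketch: topological ordering of a sub-basin reach graph.
from collections import deque

def topological_order(downstream):
    """downstream: dict mapping sub_basin -> its downstream neighbor (or None)."""
    indeg = {b: 0 for b in downstream}
    for b, d in downstream.items():
        if d is not None:
            indeg[d] += 1
    ready = deque(b for b, k in indeg.items() if k == 0)  # headwater basins
    order = []
    while ready:
        b = ready.popleft()
        order.append(b)
        d = downstream[b]
        if d is not None:
            indeg[d] -= 1
            if indeg[d] == 0:
                ready.append(d)
    return order  # process/communicate in this order to respect flow direction
```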
Watershed Influences on Nearshore Waters Across the Entire US Great Lakes Coastal Region
We have combined three elements of observation to enable a comprehensive characterization of the Great Lakes nearshore that links nearshore conditions with their adjacent coastal watersheds. The three elements are: 1) a shore-parallel, high-resolution survey of the nearshore usin...
NASA Astrophysics Data System (ADS)
Bonano, Manuela; Buonanno, Sabatino; Ojha, Chandrakanta; Berardino, Paolo; Lanari, Riccardo; Zeni, Giovanni; Manunta, Michele
2017-04-01
The advanced DInSAR technique referred to as the Small BAseline Subset (SBAS) algorithm has already demonstrated its effectiveness in carrying out multi-scale and multi-platform surface deformation analyses relevant to both natural and man-made hazards. Thanks to its capability to generate displacement maps and long-term deformation time series at both regional (low resolution analysis) and local (full resolution analysis) spatial scales, it makes it possible to gain insight into the spatial and temporal patterns of localized displacements relevant to single buildings and infrastructures over extended urban areas, with a key role in supporting risk mitigation and preservation activities. The extensive application of the multi-scale SBAS-DInSAR approach in many scientific contexts has gone hand in hand with the development of new SAR satellite missions, characterized by different frequency bands, spatial resolutions, revisit times and ground coverage. This has led to the generation of huge DInSAR data stacks that must be efficiently handled, processed and archived, with a strong impact on both the data storage and the computational requirements needed for generating the full resolution SBAS-DInSAR results. Accordingly, innovative and effective solutions for the automatic processing of massive SAR data archives and for the operational management of the derived SBAS-DInSAR products need to be designed and implemented, by exploiting the high efficiency (in terms of portability, scalability and computing performance) of new ICT methodologies. In this work, we present a novel parallel implementation of the full resolution SBAS-DInSAR processing chain, aimed at investigating localized displacements affecting single buildings and infrastructures over very large urban areas, relying on parallelization strategies with different granularity levels. The image granularity level is applied in most steps of the SBAS-DInSAR processing chain and exploits multiprocessor systems with distributed memory. Moreover, in some computationally heavy processing steps, Graphics Processing Units (GPUs) are exploited to process blocks working on a pixel-by-pixel basis, which requires substantial modifications to some key parts of the sequential full resolution SBAS-DInSAR processing chain. GPU processing is implemented by efficiently exploiting parallel processing architectures (such as CUDA) to increase computing performance, in terms of optimization of the available GPU memory as well as reduction of the input/output operations on the GPU and of the whole processing time for specific blocks with respect to the corresponding sequential implementation, which is particularly critical in the presence of huge DInSAR datasets. Moreover, to efficiently handle the massive amount of DInSAR measurements provided by the new-generation SAR constellations (CSK and Sentinel-1), we adopt a re-design strategy aimed at the robust assimilation of the full resolution SBAS-DInSAR results into the web-based GeoNode platform of the Spatial Data Infrastructure, thus allowing the efficient management, analysis and integration of the interferometric results with different data sources.
LTE modeling of inhomogeneous chromospheric structure using high-resolution limb observations
NASA Technical Reports Server (NTRS)
Lindsey, C.
1987-01-01
The paper discusses considerations relevant to LTE modeling of rough atmospheres. Particular attention is given to the application of recent high-resolution observations of the solar limb in the far-infrared and radio continuum to the modeling of chromospheric spicules. It is explained how the continuum limb observations can be combined with morphological knowledge of spicule structure to model the physical conditions in chromospheric spicules. This discussion forms the basis for a chromospheric model presented in a parallel publication based on observations ranging from 100 microns to 2.6 mm.
NASA Astrophysics Data System (ADS)
Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie
2016-05-01
This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and the partial separability (PS) model. In data acquisition, k-space data are moderately randomly undersampled at the central k-space navigator locations, but highly undersampled at the outer k-space for each temporal frame. In reconstruction, the navigator data are reconstructed from the undersampled data using structured low-rank matrix completion. After all the unacquired navigator data are estimated, the PS model is used to obtain partial k-t data. Then a parallel imaging method is used to reconstruct the entire dynamic image series from the highly undersampled data. The proposed method has been shown to achieve high-quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, where the conventional PS method fails.
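The structured low-rank completion step can be illustrated with a generic singular-value soft-thresholding iteration that keeps acquired samples fixed; the sketch below is a textbook-style stand-in under assumed conventions, not the authors' algorithm:

```python
# Hedged sketch: low-rank matrix completion by singular-value soft-thresholding.
import numpy as np

def lowrank_complete(Y, mask, tau=1.0, iters=100):
    """Y: k-t navigator matrix with zeros at unacquired entries; mask: bool array
    of acquired entries; tau: shrinkage threshold (assumed tuning parameter)."""
    X = Y.copy()
    for _ in range(iters):
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - tau, 0.0)    # shrink singular values -> low rank
        X = (U * s) @ Vh                # low-rank estimate
        X[mask] = Y[mask]               # enforce consistency with acquired data
    return X
```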
Parallel discontinuous Galerkin FEM for computing hyperbolic conservation law on unstructured grids
NASA Astrophysics Data System (ADS)
Ma, Xinrong; Duan, Zhijian
2018-04-01
High-order discontinuous Galerkin finite element methods (DGFEM) are known to be effective for solving the Euler and Navier-Stokes equations on unstructured grids, but they demand substantial computational resources. An efficient parallel algorithm was presented for solving the compressible Euler equations. Moreover, a multigrid strategy based on a three-stage, third-order TVD Runge-Kutta scheme was used in order to improve the computational efficiency of DGFEM and accelerate the convergence of the solution of the unsteady compressible Euler equations. In order to keep each processor load-balanced, the domain decomposition method was employed. Numerical experiments were performed for inviscid transonic flow problems around the NACA0012 airfoil and the M6 wing. The results indicate that our parallel algorithm achieves significant speedup and efficiency, making it suitable for computing complex flows.
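The three-stage, third-order TVD Runge-Kutta scheme mentioned in the abstract is well defined (the Shu-Osher form), so a small sketch can be given; the upwind spatial residual and parameters below are illustrative stand-ins for the DG residual:

```python
# Hedged sketch: three-stage, third-order TVD Runge-Kutta (Shu-Osher) stepping.
import numpy as np

def tvd_rk3_step(u, dt, L):
    """Advance u by one step of size dt; L(u) is the spatial residual du/dt."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

# usage sketch: advect u_t + u_x = 0 with a periodic first-order upwind residual
dx = 1.0 / 200
x = np.arange(0.0, 1.0, dx)
u = np.sin(2 * np.pi * x)
residual = lambda v: -(v - np.roll(v, 1)) / dx
for _ in range(100):
    u = tvd_rk3_step(u, 0.4 * dx, residual)   # CFL number 0.4
```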
A 64Cycles/MB, Luma-Chroma Parallelized H.264/AVC Deblocking Filter for 4K × 2K Applications
NASA Astrophysics Data System (ADS)
Shen, Weiwei; Fan, Yibo; Zeng, Xiaoyang
In this paper, a high-throughput deblocking filter is presented for the H.264/AVC standard, catering to video applications with 4K × 2K (4096 × 2304) ultra-definition resolution. In order to strengthen the parallelism without simply increasing the area, we propose a luma-chroma parallel method. Meanwhile, this work reduces the number of processing cycles, the amount of external memory traffic and the working frequency, by using triple four-stage pipeline filters and a luma-chroma interlaced sequence. Furthermore, it eliminates most unnecessary off-chip memory bandwidth with a highly reusable memory scheme, and adopts a "sliding window" buffer scheme. As a result, our design can support 4K × 2K applications at 30 fps at a working frequency of only 70.8 MHz.
A new collimator for I-123-IMP SPECT imaging of the brain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oyamada, H.; Fukukita, H.; Tanaka, E.
1985-05-01
At present, commercially available I-123-IMP is contaminated with I-124, and its concentration on the assay date is said to be approximately 5%. Therefore, the application of the medium energy parallel hole collimator (MEPC) used in many places for SPECT results in deterioration of the image quality. Recently, the authors have developed a new collimator for I-123-IMP SPECT imaging comprised of 4 slat type units: ultrahigh resolution (UHR), high resolution (HR), high sensitivity (HS), and ultrahigh sensitivity (UHS). The slit width/septum thickness in mm for UHR, HR, HS, and UHS are 0.9/0.5, 1.5/0.85, 3.2/1.5, and 5.2/2.0, respectively. In practice, either UHR or HR is set to the detector (Shimadzu LFOV-E, modified type) together with either HS or UHS. The former is always set to the detector with the slit direction parallel to the rotation axis, and the latter is set with its slit direction at a right angle to the former. This is based on the idea that, upon sacrifice of resolution to some extent, sensitivity can be gained in the axial direction while the resolution on the transaxial slice will still be sufficiently preserved. Resolutions (transaxial direction/axial direction) in FWHM (mm) for each combination (UHR-HS, UHR-UHS, HR-HS, and HR-UHS) were 15.9/31.4, 15.9/36.5, 23.2/33.3, and 23.9/40.7, respectively, whereas the resolution of MEPC was 28.7/29.5. On the other hand, relative sensitivities to MEPC were 0.57, 0.86, 0.80, and 1.16. The authors conclude that the combination of UHR and HS is best suited for clinical practice and, at present, they are obtaining I-123-IMP SPECT images of good quality.
Lee, Si Hoon; Lindquist, Nathan C.; Wittenberg, Nathan J.; Jordan, Luke R.; Oh, Sang-Hyun
2012-01-01
With recent advances in high-throughput proteomics and systems biology, there is a growing demand for new instruments that can precisely quantify a wide range of receptor-ligand binding kinetics in a high-throughput fashion. Here we demonstrate a surface plasmon resonance (SPR) imaging spectroscopy instrument capable of extracting binding kinetics and affinities from 50 parallel microfluidic channels simultaneously. The instrument utilizes large-area (~cm²) metallic nanohole arrays as SPR sensing substrates and combines a broadband light source, a high-resolution imaging spectrometer and a low-noise CCD camera to extract spectral information from every channel in real time with a refractive index resolution of 7.7 × 10⁻⁶. To demonstrate the utility of our instrument for quantifying a wide range of biomolecular interactions, each parallel microfluidic channel is coated with a biomimetic supported lipid membrane containing ganglioside (GM1) receptors. The binding kinetics of cholera toxin b (CTX-b) to GM1 are then measured in a single experiment from 50 channels. By combining the highly parallel microfluidic device with large-area periodic nanohole array chips, our SPR imaging spectrometer system enables high-throughput, label-free, real-time SPR biosensing, and its full-spectral imaging capability combined with nanohole arrays could enable integration of SPR imaging with concurrent surface-enhanced Raman spectroscopy. PMID:22895607
NASA Astrophysics Data System (ADS)
Tellman, B.; Schwarz, B.
2014-12-01
This talk describes the development of a web application to predict and communicate vulnerability to floods given publicly available data, disaster science, and geospatial cloud capabilities. A proof of concept built on the Google Earth Engine API, with initial testing on case studies in New York and Uttarakhand, India, demonstrates the potential of highly parallelized cloud computing to model socio-ecological disaster vulnerability at high spatial and temporal resolution and in near real time. Cloud computing facilitates statistical modeling with variables derived from large public social and ecological data sets, including census data, nighttime lights (NTL), and WorldPop to derive social parameters, together with elevation, satellite imagery, rainfall, and observed flood data from the Dartmouth Flood Observatory to derive biophysical parameters. While more traditional, physically based hydrological models that rely on flow algorithms and numerical methods are currently unavailable in parallelized computing platforms like Google Earth Engine, there is high potential to explore "data driven" modeling that trades physics for statistics in a parallelized environment. A data-driven approach to flood modeling with geographically weighted logistic regression has been initially tested on Hurricane Irene in southeastern New York. Comparison of model results with observed flood data reveals a 97% accuracy of the model in predicting flooded pixels. Testing on multiple storms is required to further validate this initially promising approach. A statistical social-ecological flood model that could produce rapid vulnerability assessments, predicting who might require immediate evacuation and where, could serve as an early warning. This type of early warning system would be especially relevant in data-poor places lacking the computing power, high-resolution data such as LiDAR and stream gauges, or hydrologic expertise to run physically based models in real time. As the data-driven model presented relies on globally available data, the only real-time data input required would be typical data from a weather service, e.g., precipitation or coarse-resolution flood prediction. However, model uncertainty will vary locally depending upon the resolution and frequency of observed flood and socio-economic damage impact data.
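Geographically weighted logistic regression can be sketched generically: fit a local logistic model whose training pixels are weighted by a spatial kernel centered on the prediction location. The features, coordinates, and bandwidth below are assumptions, not the talk's configuration:

```python
# Hedged sketch of geographically weighted logistic regression for flood pixels.
import numpy as np
from sklearn.linear_model import LogisticRegression

def gw_logistic_fit(X, coords, y, x0, bandwidth=0.1):
    """Fit a local flood/no-flood model at location x0.
    X: per-pixel features (e.g. elevation, rainfall); coords: (n, 2) locations;
    y: observed flooded (1) / dry (0); bandwidth: assumed kernel scale."""
    d2 = ((coords - x0) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))   # Gaussian spatial kernel
    model = LogisticRegression()
    model.fit(X, y, sample_weight=w)         # local, distance-weighted fit
    return model                              # use model.predict_proba at x0
```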
NASA Astrophysics Data System (ADS)
Steinke, R. C.; Ogden, F. L.; Lai, W.; Moreno, H. A.; Pureza, L. G.
2014-12-01
Physics-based watershed models are useful tools for hydrologic studies, water resources management, and economic analyses in the contexts of climate, land-use, and water-use changes. This poster presents a parallel implementation of a quasi 3-dimensional, physics-based, high-resolution, distributed water resources model suitable for simulating large watersheds in a massively parallel computing environment. Developing this model is one of the objectives of the NSF EPSCoR RII Track II CI-WATER project, a joint effort between the Wyoming and Utah EPSCoR jurisdictions. The model, which we call ADHydro, is aimed at simulating important processes in the Rocky Mountain west, including: rainfall and infiltration, snowfall and snowmelt in complex terrain, vegetation and evapotranspiration, soil heat flux and freezing, overland flow, channel flow, groundwater flow, water management and irrigation. Model forcing is provided by the Weather Research and Forecasting (WRF) model, and ADHydro is coupled with the NOAH-MP land-surface scheme for calculating fluxes between the land and atmosphere. The ADHydro implementation uses the Charm++ parallel run-time system. Charm++ is based on location-transparent message passing between migratable C++ objects. Each object represents an entity in the model, such as a mesh element. These objects can be migrated between processors or serialized to disk, allowing the Charm++ system to automatically provide capabilities such as load balancing and checkpointing. Objects interact with each other by passing messages that the Charm++ system routes to the correct destination object regardless of its current location. This poster discusses the algorithms, communication patterns, and caching strategies used to implement ADHydro with Charm++. The ADHydro model code will be released to the hydrologic community in late 2014.
NASA Astrophysics Data System (ADS)
Shcherbakov, Alexandre S.; Chavez Dagostino, Miguel; Arellanes, Adan O.; Aguirre Lopez, Arturo
2016-09-01
We develop a multi-band spectrometer with a few spatially parallel optical arms for the combined processing of their data flows. Such multi-band capability has various applications in astrophysical scenarios at different scales: from objects in the distant universe to planetary atmospheres in the Solar system. Each optical arm exhibits its own performance, so that the arms together provide parallel multi-band observations on different scales simultaneously. This capability is based on designing each optical arm individually, exploiting different materials for acousto-optical cells operating in various regimes, frequency ranges and light wavelengths from independent light sources. Individual beam shapers provide both the needed incident light polarization and the required apodization to increase the dynamic range of the system. After parallel acousto-optical processing, the data flows are united on a joint CCD matrix at the stage of combined electronic data processing. At the moment, the prototype combines three bands, i.e., it includes three spatial optical arms. The first, low-frequency arm operates at central frequencies of 60-80 MHz with a frequency bandwidth of 40 MHz. The second arm is oriented to middle frequencies of 350-500 MHz with a frequency bandwidth of 200-300 MHz. The third arm is intended for ultra-high-frequency radio-wave signals at about 1.0-1.5 GHz with a frequency bandwidth of <300 MHz. To date, this spectrometer has the following preliminary performance. The first arm exhibits a frequency resolution of 20 kHz, while the second and third arms give a resolution of 150-200 kHz. The numbers of resolvable spots are 1500-2000, depending on the regime of operation. A fourth optical arm for the frequency range around 3.5 GHz is currently under construction.
We combine three elements for a comprehensive characterization that links nearshore conditions with coastal watershed disturbance metrics. The three elements are: 1) a shore-parallel, high-resolution nearshore survey using continuous in situ towed sensors; 2) a spatially-balanc...
Evaluation of Holographic Technology in Close Air Support Mission Planning and Execution
2008-03-18
[Fragment of the report's list of figures and body text: Figure 1 is a 2D representation of a 3D hologram of the Baghdad area built from 1-meter-resolution LIDAR data; other entries reference Alias Maya software and a suburban scene. The surviving body sentence notes that collection of light detection and ranging (LIDAR) sensor data for several geographic areas was performed in parallel with formulation of the approach.]
ERIC Educational Resources Information Center
Gasevic, Dragan; Devedzic, Vladan
2004-01-01
This paper presents the Petri net software tool P3, developed for training purposes in the Architecture and Organization of Computers (AOC) course. P3 has the following features: a graphical modeling interface, interactive simulation by single and parallel (with prior conflict resolution) transition firing, two well-known Petri net…
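Parallel transition firing with prior conflict resolution, as implemented in tools like P3, can be sketched compactly: among the transitions enabled in the current marking, fire a conflict-free subset, resolving competition for shared tokens (here simply in list order). The dictionary-based net encoding below is an illustrative assumption, not P3's format:

```python
# Hedged sketch: Petri net parallel firing with prior conflict resolution.
def enabled(marking, pre):
    return [t for t, needs in pre.items()
            if all(marking[p] >= n for p, n in needs.items())]

def fire_parallel(marking, pre, post):
    """Fire a maximal conflict-free set of enabled transitions."""
    m = dict(marking)
    fired = []
    for t in enabled(marking, pre):
        if all(m[p] >= n for p, n in pre[t].items()):  # conflict resolution:
            for p, n in pre[t].items():                # first-come wins
                m[p] -= n
            for p, n in post[t].items():
                m[p] = m.get(p, 0) + n
            fired.append(t)
    return m, fired

# usage: t1 and t2 compete for the single token in place "p1"; only t1 fires
pre  = {"t1": {"p1": 1}, "t2": {"p1": 1}}
post = {"t1": {"p2": 1}, "t2": {"p3": 1}}
print(fire_parallel({"p1": 1, "p2": 0, "p3": 0}, pre, post))
```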
A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Owen, Jeffrey E.
1988-01-01
A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code since the need for a high-level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.
Visual analysis of inter-process communication for large-scale parallel computing.
Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu
2009-01-01
In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also communication between the different processes, which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.
Scaling Optimization of the SIESTA MHD Code
NASA Astrophysics Data System (ADS)
Seal, Sudip; Hirshman, Steven; Perumalla, Kalyan
2013-10-01
SIESTA is a parallel three-dimensional plasma equilibrium code capable of resolving magnetic islands at high spatial resolutions for toroidal plasmas. Originally designed to exploit small-scale parallelism, SIESTA has now been scaled to execute efficiently over several thousands of processors P. This scaling improvement was accomplished with minimal intrusion to the execution flow of the original version. First, the efficiency of the iterative solutions was improved by integrating the parallel tridiagonal block solver code BCYCLIC. Krylov-space generation in GMRES was then accelerated using a customized parallel matrix-vector multiplication algorithm. Novel parallel Hessian generation algorithms were integrated and memory access latencies were dramatically reduced through loop nest optimizations and data layout rearrangement. These optimizations sped up equilibria calculations by factors of 30-50. It is possible to compute solutions with granularity N/P near unity on extremely fine radial meshes (N > 1024 points). Grid separation in SIESTA, which manifests itself primarily in the resonant components of the pressure far from rational surfaces, is strongly suppressed by finer meshes. Large problem sizes of up to 300 K simultaneous non-linear coupled equations have been solved on the NERSC supercomputers. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.
Monitoring Java Programs with Java PathExplorer
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Rosu, Grigore; Clancy, Daniel (Technical Monitor)
2001-01-01
We present recent work on the development of Java PathExplorer (JPAX), a tool for monitoring the execution of Java programs. JPAX can be used during program testing to gain increased information about program executions, and can potentially furthermore be applied during operation to survey safety-critical systems. The tool facilitates automated instrumentation of a program's byte code, which will then emit events to an observer during its execution. The observer checks the events against user-provided high-level requirement specifications, for example temporal logic formulae, and against lower-level error detection procedures, for example concurrency-related algorithms such as deadlock and data race detection. High-level requirement specifications together with their underlying logics are defined in the Maude rewriting logic, and can then either be directly checked using the Maude rewriting engine, or be first translated to efficient data structures and then checked in Java.
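A typical lower-level concurrency-analysis procedure of the kind such observers run is lock-order analysis: record which locks a thread already holds whenever it acquires another, and flag a potential deadlock when the resulting lock-order graph acquires a cycle. The sketch below is a generic illustration in Python, not JPAX code; event format and names are assumptions:

```python
# Hedged sketch: lock-order graph construction with cycle (deadlock) detection.
from collections import defaultdict

class LockOrderMonitor:
    def __init__(self):
        self.held = defaultdict(set)     # thread -> locks currently held
        self.edges = defaultdict(set)    # lock -> locks acquired while held

    def acquire(self, thread, lock):
        for h in self.held[thread]:
            self.edges[h].add(lock)      # h was held when lock was taken
        self.held[thread].add(lock)
        return self._has_cycle(lock)     # True -> potential deadlock

    def release(self, thread, lock):
        self.held[thread].discard(lock)

    def _has_cycle(self, start):
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            for nxt in self.edges[node]:
                if nxt == start:
                    return True          # cyclic lock order detected
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return False

m = LockOrderMonitor()
m.acquire("T1", "A"); m.acquire("T1", "B")
m.release("T1", "B"); m.release("T1", "A")
m.acquire("T2", "B")
print(m.acquire("T2", "A"))              # True: orders A->B and B->A conflict
```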
Experiments with Test Case Generation and Runtime Analysis
NASA Technical Reports Server (NTRS)
Artho, Cyrille; Drusinsky, Doron; Goldberg, Allen; Havelund, Klaus; Lowry, Mike; Pasareanu, Corina; Rosu, Grigore; Visser, Willem; Koga, Dennis (Technical Monitor)
2003-01-01
Software testing is typically an ad hoc process where human testers manually write many test inputs and expected test results, perhaps automating their execution in a regression suite. This process is cumbersome and costly. This paper reports preliminary results on an approach to further automate this process. The approach consists of combining automated test case generation, based on systematically exploring the program's input domain, with runtime analysis, where execution traces are monitored and verified against temporal logic specifications, or analyzed using advanced algorithms for detecting concurrency errors such as data races and deadlocks. The approach suggests generating specifications dynamically per input instance rather than statically once and for all. The paper describes experiments with variants of this approach in the context of two examples, a planetary rover controller and a spacecraft fault protection system.
Chen, Dong; Eisley, Noel A.; Steinmacher-Burow, Burkhard; Heidelberger, Philip
2013-01-29
A computer implemented method and a system for routing data packets in a multi-dimensional computer network. The method comprises routing a data packet among nodes along one dimension towards a root node, each node having input and output communication links, said root node not having any outgoing uplinks, and determining at each node if the data packet has reached a predefined coordinate for the dimension or an edge of the subrectangle for the dimension, and if the data packet has reached the predefined coordinate for the dimension or the edge of the subrectangle for the dimension, determining if the data packet has reached the root node, and if the data packet has not reached the root node, routing the data packet among nodes along another dimension towards the root node.
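The routing rule in the patent abstract reads directly as a dimension-ordered algorithm: advance along one dimension until the packet reaches the root's coordinate for that dimension, then move on to the next dimension. A hedged Python sketch (plain mesh coordinates only; the patent's torus wrap-around links and subrectangle-edge handling are omitted):

```python
# Hedged sketch: dimension-ordered next-hop routing toward a root node.
def next_hop(node, root):
    """node, root: coordinate tuples in an n-dimensional mesh."""
    for d in range(len(node)):                 # dimension-ordered traversal
        if node[d] != root[d]:
            step = 1 if root[d] > node[d] else -1
            return node[:d] + (node[d] + step,) + node[d + 1:]
    return None                                # already at the root

# usage: route from (3, 1) to root (0, 2)
path = [(3, 1)]
while True:
    hop = next_hop(path[-1], (0, 2))
    if hop is None:
        break
    path.append(hop)
print(path)   # [(3, 1), (2, 1), (1, 1), (0, 1), (0, 2)]
```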
A depth-of-interaction PET detector using mutual gain-equalized silicon photomultiplier
DOE Office of Scientific and Technical Information (OSTI.GOV)
W. Xi, A.G, Weisenberger, H. Dong, Brian Kross, S. Lee, J. McKisson, Carl Zorn
We developed a prototype high-resolution, high-efficiency depth-encoding detector for PET applications based on dual-ended readout of a LYSO array with two silicon photomultipliers (SiPMs). Flood images, energy resolution, and depth-of-interaction (DOI) resolution were measured for a LYSO array - 0.7 mm in crystal pitch and 10 mm in thickness - with four unpolished parallel sides. Flood images were obtained such that each individual crystal element in the array is resolved. The energy resolution of the entire array was measured to be 33%, while individual crystal pixel elements utilizing the signal from both sides ranged from 23.3% to 27%. By applying a mutual-gain equalization method, a DOI resolution of 2 mm for the crystal array was obtained in the experiments, while simulations indicate that ≈1 mm DOI resolution could possibly be achieved. The experimental DOI resolution can be further improved by obtaining revised detector supporting electronics with better energy resolutions. This study provides a detailed detector calibration and DOI response characterization of dual-ended readout SiPM-based PET detectors, which will be important in the design and calibration of a PET scanner in the future.
Spiral Transformation for High-Resolution and Efficient Sorting of Optical Vortex Modes.
Wen, Yuanhui; Chremmos, Ioannis; Chen, Yujie; Zhu, Jiangbo; Zhang, Yanfeng; Yu, Siyuan
2018-05-11
Mode sorting is an essential function for optical multiplexing systems that exploit the orthogonality of the orbital angular momentum mode space. The familiar log-polar optical transformation provides a simple yet efficient approach whose resolution is, however, restricted by a considerable overlap between adjacent modes resulting from the limited excursion of the phase along a complete circle around the optical vortex axis. We propose and experimentally verify a new optical transformation that maps spirals (instead of concentric circles) to parallel lines. As the phase excursion along a spiral in the wave front of an optical vortex is theoretically unlimited, this new optical transformation can separate orbital angular momentum modes with superior resolution while maintaining unity efficiency.
Organization and Dynamics of Receptor Proteins in a Plasma Membrane.
Koldsø, Heidi; Sansom, Mark S P
2015-11-25
The interactions of membrane proteins are influenced by their lipid environment, with key lipid species able to regulate membrane protein function. Advances in high-resolution microscopy can reveal the organization and dynamics of proteins and lipids within living cells at resolutions <200 nm. Parallel advances in molecular simulations provide near-atomic-resolution models of the dynamics of the organization of membranes of in vivo-like complexity. We explore the dynamics of proteins and lipids in crowded and complex plasma membrane models, thereby closing the gap in length and complexity between computations and experiments. Our simulations provide insights into the mutual interplay between lipids and proteins in determining mesoscale (20-100 nm) fluctuations of the bilayer, and in enabling oligomerization and clustering of membrane proteins.
High-resolution whole-brain diffusion MRI at 7T using radiofrequency parallel transmission.
Wu, Xiaoping; Auerbach, Edward J; Vu, An T; Moeller, Steen; Lenglet, Christophe; Schmitter, Sebastian; Van de Moortele, Pierre-François; Yacoub, Essa; Uğurbil, Kâmil
2018-03-30
To investigate the utility of RF parallel transmission (pTx) for Human Connectome Project (HCP)-style whole-brain diffusion MRI (dMRI) data at 7 Tesla (7T), healthy subjects were scanned in pTx and single-transmit (1Tx) modes. Multiband (MB), single-spoke pTx pulses were designed to image sagittal slices. HCP-style dMRI data (i.e., 1.05-mm resolution, MB2, b-values = 1000/2000 s/mm², 286 images, and a 40-min scan) and data with higher accelerations (MB3 and MB4) were acquired with pTx. pTx significantly improved flip-angle detected signal uniformity across the brain, yielding a ∼19% increase in temporal SNR (tSNR) averaged over the brain relative to 1Tx. This allowed significantly enhanced estimation of multiple fiber orientations (with a ∼21% decrease in dispersion) in HCP-style 7T dMRI datasets. Additionally, pTx pulses achieved substantially lower power deposition, permitting higher accelerations and enabling collection of the same data in 2/3 to 1/2 of the scan time, or of more data in the same scan time. pTx provides a solution to two major limitations of slice-accelerated high-resolution whole-brain dMRI at 7T: it improves flip-angle uniformity and enables higher slice acceleration relative to the current state of the art. As such, pTx provides significant advantages for rapid acquisition of high-quality, high-resolution, truly whole-brain dMRI data. © 2018 International Society for Magnetic Resonance in Medicine.
Fast generation of computer-generated hologram by graphics processing unit
NASA Astrophysics Data System (ADS)
Matsuda, Sho; Fujii, Tomohiko; Yamaguchi, Takeshi; Yoshikawa, Hiroshi
2009-02-01
A cylindrical hologram is well known to be viewable from 360 deg, but it requires very high pixel resolution. Therefore, a computer-generated cylindrical hologram (CGCH) requires a huge amount of calculation. In our previous research, we used a look-up table method for fast calculation with an Intel Pentium 4 at 2.8 GHz. It took 480 hours to calculate a high-resolution CGCH (504,000 x 63,000 pixels, with an average of 27,000 object points). To improve the quality of the CGCH reconstructed image, the fringe pattern requires higher spatial frequency and resolution. Therefore, to increase the calculation speed, we have to change the calculation method. In this paper, to reduce the calculation time of a CGCH (912,000 x 108,000 pixels), we employ a graphics processing unit (GPU). It would take 4,406 hours to calculate this high-resolution CGCH on a Xeon at 3.4 GHz. Since a GPU has many streaming processors and a parallel processing structure, it works as a high-performance parallel processor. In addition, a GPU performs best on two-dimensional and streaming data. Recently, GPUs have also become usable for general-purpose computation (GPGPU). For example, NVIDIA's GeForce 7 series became a programmable processor with the Cg programming language, and the subsequent GeForce 8 series supports CUDA, a software development kit made by NVIDIA. Theoretically, the calculation ability of the GPU is announced as 500 GFLOPS. From the experimental results, we achieved a calculation 47 times faster than our previous work, which used a CPU. Therefore, the CGCH can be generated in 95 hours, and the total time to calculate and print the CGCH is 110 hours.
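The size of the computation follows from the kernel itself: each hologram pixel accumulates a phase term from every object point, and all pixels are independent, which is exactly what maps onto GPU threads. A minimal NumPy sketch of that point-source fringe sum (wavelength, pitch, grid size, and object points are illustrative placeholders for the paper's much larger 912,000 x 108,000-pixel problem):

```python
# Hedged sketch: point-source fringe computation for a CGH (toy parameters).
import numpy as np

wavelength = 633e-9
k = 2 * np.pi / wavelength
pitch = 1e-6                                   # hologram pixel pitch [m]

xs = (np.arange(1024) - 512) * pitch           # hologram plane coordinates
ys = (np.arange(1024) - 512) * pitch
X, Y = np.meshgrid(xs, ys)

# a few object points: (x, y, z, amplitude); real scenes have thousands
points = [(0.0, 0.0, 0.05, 1.0), (1e-4, -2e-4, 0.06, 0.8)]

fringe = np.zeros_like(X)
for px, py, pz, amp in points:                 # each pixel's sum is independent
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    fringe += amp * np.cos(k * r)              # point-source contribution
```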
NASA Technical Reports Server (NTRS)
Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Suarez, Max; Sawyer, William; Govindaraju, Ravi C.
1999-01-01
The results obtained with the variable resolution stretched grid (SG) GEOS GCM (Goddard Earth Observing System General Circulation Model) are discussed, with the emphasis on the regional down-scaling effects and their dependence on the stretched grid design and parameters. A variable resolution SG-GCM and SG-DAS, using a global stretched grid with fine resolution over an area of interest, is a viable new approach to REGIONAL and subregional CLIMATE studies and applications. The stretched grid approach is an ideal tool for representing regional to global scale interactions. It is an alternative to the widely used nested grid approach introduced a decade ago as a pioneering step in regional climate modeling. The GEOS SG-GCM is used for simulations of the anomalous U.S. climate events of the 1988 drought and 1993 flood, with enhanced regional resolution. The height, low-level jet, precipitation and other diagnostic patterns are successfully simulated and show efficient down-scaling over the area of interest, the U.S. An imitation of the nested grid approach is performed using the developed SG-DAS (Data Assimilation System) that incorporates the SG-GCM. The SG-DAS is run while withholding data over the area of interest. The design imitates the nested grid framework with boundary conditions provided from analyses. No boundary condition buffer is needed in this case due to the global domain of integration used for the SG-GCM and SG-DAS. The experiments based on the newly developed versions of the GEOS SG-GCM and SG-DAS, with finer 0.5 degree (and higher) regional resolution, are briefly discussed. The major aspects of parallelization of the SG-GCM code are outlined. The KEY OBJECTIVES of the study are: 1) obtaining efficient DOWN-SCALING over the area of interest with fine and very fine resolution; 2) providing CONSISTENT interactions between regional and global scales, including the consistent representation of regional ENERGY and WATER BALANCES; 3) providing high computational efficiency for future SG-GCM and SG-DAS versions using PARALLEL codes.
Dal Palù, Alessandro; Pontelli, Enrico; He, Jing; Lu, Yonggang
2007-01-01
The paper describes a novel framework, constructed using Constraint Logic Programming (CLP) and parallelism, to determine the association between parts of the primary sequence of a protein and alpha-helices extracted from 3D low-resolution descriptions of large protein complexes. The association is determined by extracting constraints from the 3D information, regarding length, relative position and connectivity of helices, and solving these constraints with the guidance of a secondary structure prediction algorithm. Parallelism is employed to enhance performance on large proteins. The framework provides a fast, inexpensive alternative to determine the exact tertiary structure of unknown proteins.
Aberration-free superresolution imaging via binary speckle pattern encoding and processing
NASA Astrophysics Data System (ADS)
Ben-Eliezer, Eyal; Marom, Emanuel
2007-04-01
We present an approach that provides superresolution beyond the classical limit as well as image restoration in the presence of aberrations; in particular, the ability to obtain superresolution while simultaneously extending the depth of field (DOF) is tested experimentally. It is based on a recently proposed approach shown to significantly increase the resolution of in-focus images by speckle encoding and decoding. In our approach, an object multiplied by a fine binary speckle pattern may be located anywhere along an extended DOF region. Since the exact magnification is not known in the presence of defocus aberration, the acquired low-resolution image is electronically processed via a parallel-branch decoding scheme, where in each branch the image is multiplied by the same high-resolution, synchronized, time-varying binary speckle but with a different magnification. Finally, a hard-decision algorithm chooses the branch that provides the highest-resolution output image, thus achieving insensitivity to aberrations as well as to DOF variations. Simulation as well as experimental results are presented, exhibiting significant resolution improvement factors.
A Petascale Non-Hydrostatic Atmospheric Dynamical Core in the HOMME Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tufo, Henry
The High-Order Method Modeling Environment (HOMME) is a framework for building scalable, conservative atmospheric models for climate simulation and general atmospheric-modeling applications. Its spatial discretizations are based on Spectral-Element (SE) and Discontinuous Galerkin (DG) methods. These are local methods employing high-order accurate spectral basis functions that have been shown to perform well on massively parallel supercomputers at any resolution and to scale particularly well at high resolutions. HOMME provides the framework upon which the CAM-SE community atmosphere model dynamical core is constructed. In its current incarnation, CAM-SE employs the hydrostatic primitive equations (PE) of motion, which limits its resolution to simulations coarser than 0.1° per grid cell. The primary objective of this project is to remove this resolution limitation by providing HOMME with the capabilities needed to build nonhydrostatic models that solve the compressible Euler/Navier-Stokes equations.
NASA Astrophysics Data System (ADS)
Jian, Aoqun; Zou, Lu; Tang, Haiquan; Duan, Qianqian; Ji, Jianlong; Zhang, Qianwu; Zhang, Xuming; Sang, Shengbo
2017-06-01
The issue of thermal effects is inevitable in ultrahigh-resolution refractive index (RI) measurement. A biosensor with a parallel-coupled dual-microring resonator configuration is proposed to achieve high-resolution, thermal-effect-free measurement. Based on the coupled-resonator-induced transparency effect, the design and principle of the biosensor are introduced in detail, and the performance of the sensor is deduced by simulations. Compared to a biosensor based on a single-ring configuration, the designed biosensor has a 10-fold increased Q value according to the simulation results; thus the sensor is expected to achieve a particularly high resolution. In addition, the output signal of the mathematical model of the proposed sensor can eliminate the thermal influence by adopting an algorithm. This work is expected to have great application potential in areas requiring high-resolution RI measurement, such as biomedical discoveries, virus screening, and drinking water safety.
Paparelli, Laura; Corthout, Nikky; Pavie, Benjamin; Annaert, Wim; Munck, Sebastian
2016-01-01
The spatial distribution of proteins within the cell affects their capability to interact with other molecules and directly influences cellular processes and signaling. At the plasma membrane, multiple factors drive protein compartmentalization into specialized functional domains, leading to the formation of clusters in which intermolecule interactions are facilitated. Therefore, quantifying protein distributions is a necessity for understanding their regulation and function. The recent advent of super-resolution microscopy has opened up the possibility of imaging protein distributions at the nanometer scale. In parallel, new spatial analysis methods have been developed to quantify distribution patterns in super-resolution images. In this chapter, we provide an overview of super-resolution microscopy and summarize the factors influencing protein arrangements on the plasma membrane. Finally, we highlight methods for analyzing clusterization of plasma membrane proteins, including examples of their applications.
Parallel hyperbolic PDE simulation on clusters: Cell versus GPU
NASA Astrophysics Data System (ADS)
Rostrup, Scott; De Sterck, Hans
2010-12-01
Increasingly, high-performance computing is looking towards data-parallel computational devices to enhance computational performance. Two technologies that have received significant attention are IBM's Cell Processor and NVIDIA's CUDA programming model for graphics processing unit (GPU) computing. In this paper we investigate the acceleration of parallel hyperbolic partial differential equation simulation on structured grids with explicit time integration on clusters with Cell and GPU backends. The message passing interface (MPI) is used for communication between nodes at the coarsest level of parallelism. Optimizations of the simulation code at the several finer levels of parallelism that the data-parallel devices provide are described in terms of data layout, data flow and data-parallel instructions. Optimized Cell and GPU performance are compared with reference code performance on a single x86 central processing unit (CPU) core in single and double precision. We further compare the CPU, Cell and GPU platforms on a chip-to-chip basis, and compare performance on single cluster nodes with two CPUs, two Cell processors or two GPUs in a shared memory configuration (without MPI). We finally compare performance on clusters with 32 CPUs, 32 Cell processors, and 32 GPUs using MPI. Our GPU cluster results use NVIDIA Tesla GPUs with GT200 architecture, but some preliminary results on recently introduced NVIDIA GPUs with the next-generation Fermi architecture are also included. This paper provides computational scientists and engineers who are considering porting their codes to accelerator environments with insight into how structured grid based explicit algorithms can be optimized for clusters with Cell and GPU accelerators. It also provides insight into the speed-up that may be gained on current and future accelerator architectures for this class of applications.
Program summary
Program title: SWsolver
Catalogue identifier: AEGY_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGY_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPL v3
No. of lines in distributed program, including test data, etc.: 59 168
No. of bytes in distributed program, including test data, etc.: 453 409
Distribution format: tar.gz
Programming language: C, CUDA
Computer: Parallel computing clusters. Individual compute nodes may consist of x86 CPU, Cell processor, or x86 CPU with attached NVIDIA GPU accelerator.
Operating system: Linux
Has the code been vectorised or parallelized?: Yes. Tested on 1-128 x86 CPU cores, 1-32 Cell processors, and 1-32 NVIDIA GPUs.
RAM: Tested on problems requiring up to 4 GB per compute node.
Classification: 12
External routines: MPI, CUDA, IBM Cell SDK
Nature of problem: MPI-parallel simulation of shallow water equations using a high-resolution 2D hyperbolic equation solver on regular Cartesian grids for x86 CPU, Cell processor, and NVIDIA GPU using CUDA.
Solution method: SWsolver provides 3 implementations of a high-resolution 2D shallow water equation solver on regular Cartesian grids, for CPU, Cell processor, and NVIDIA GPU. Each implementation uses MPI to divide work across a parallel computing cluster.
Additional comments: Sub-program numdiff is used for the test run.
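The data-parallel character these backends exploit can be seen even in a drastically simplified 1D analogue of such a solver, where every cell update depends only on its immediate neighbors. The Lax-Friedrichs step below is an illustrative toy, not SWsolver code:

```python
# Hedged sketch: 1D Lax-Friedrichs step for the shallow water equations.
import numpy as np

g = 9.81

def flux(h, hu):
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

def lax_friedrichs_step(h, hu, dx, dt):
    q = np.array([h, hu])
    f = flux(h, hu)
    qL, qR = np.roll(q, 1, axis=1), np.roll(q, -1, axis=1)  # periodic BCs
    fL, fR = np.roll(f, 1, axis=1), np.roll(f, -1, axis=1)
    qn = 0.5 * (qL + qR) - dt / (2 * dx) * (fR - fL)        # per-cell update
    return qn[0], qn[1]

# usage: dam-break initial condition; every cell update is independent
h = np.ones(400); h[180:220] = 2.0
hu = np.zeros(400)
for _ in range(200):
    h, hu = lax_friedrichs_step(h, hu, dx=1.0, dt=0.05)
```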
Wang, Sheng; Ding, Miao; Chen, Xuanze; Chang, Lei; Sun, Yujie
2017-01-01
Direct visualization of protein-protein interactions (PPIs) at high spatial and temporal resolution in live cells is crucial for understanding the intricate and dynamic behaviors of signaling protein complexes. Recently, bimolecular fluorescence complementation (BiFC) assays have been combined with super-resolution imaging techniques, including PALM and SOFI, to visualize PPIs at nanometer spatial resolution. RESOLFT nanoscopy has proven to be a powerful live-cell super-resolution imaging technique. To detect and visualize PPIs in live cells with high temporal and spatial resolution, we developed a BiFC assay using split rsEGFP2, a highly photostable and reversibly photoswitchable fluorescent protein previously developed for RESOLFT nanoscopy. Combined with parallelized RESOLFT microscopy, we demonstrated the high spatiotemporal resolving capability of the rsEGFP2-based BiFC assay by specifically detecting and visualizing the heterodimerization interactions between Bcl-xL and Bak, as well as the dynamics of the complex on the mitochondrial membrane in live cells. PMID:28663931
Ji, Jim; Wright, Steven
2005-01-01
Parallel imaging using multiple phased-array coils and receiver channels has become an effective approach to high-speed magnetic resonance imaging (MRI). To obtain high spatiotemporal resolution, k-space is subsampled and later interpolated using multiple-channel data. Higher subsampling factors result in faster image acquisition. However, the subsampling factors are upper-bounded by the number of parallel channels. Phase constraints have previously been proposed to overcome this limitation with some success. In this paper, we demonstrate that in certain applications it is possible to obtain acceleration factors of potentially up to twice the number of channels by using a real-image constraint. Data acquisition and processing methods to manipulate and estimate the image phase information are presented for improving image reconstruction. In-vivo brain MRI experimental results show that accelerations up to 6 are feasible with 4-channel data.
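The real-image constraint works because the Fourier transform of a real-valued image is conjugate-symmetric: each measured k-space sample also pins down its mirrored conjugate, roughly doubling the usable equations per coil. A minimal NumPy check of that symmetry (an illustration of the principle, not the authors' reconstruction code):

```python
import numpy as np

# A real-valued "image": its k-space is conjugate (Hermitian) symmetric,
# so samples at k and -k carry redundant information.
rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))           # real image
k = np.fft.fft2(img)                        # its k-space data

# Hermitian symmetry: K[-kx, -ky] == conj(K[kx, ky]).
idx = (-np.arange(8)) % 8                   # index map i -> -i mod N
mirrored = np.conj(k[idx][:, idx])
print(np.allclose(k, mirrored))             # True

# Consequence: half of k-space suffices to recover a real image, which is
# the headroom the real-image constraint taps to push acceleration
# beyond the coil count.
```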
NASA Astrophysics Data System (ADS)
Nakhostin, M.; Baba, M.
2014-06-01
Parallel-plate avalanche counters have long been recognized as timing detectors for heavily ionizing particles. However, these detectors suffer from poor pulse-height resolution, which limits their ability to discriminate between different ionizing particles. In this paper, a new approach for discriminating between charged particles of different specific energy loss with avalanche counters is demonstrated. We show that the effect of the self-induced space charge in parallel-plate avalanche counters leads to a strong correlation between the shape of the output current pulses and the amount of primary ionization created by the incident charged particles. This correlation is then exploited to discriminate charged particles with different energy losses in the detector. Experimental results obtained with α-particles from an 241Am α-source demonstrate a discrimination capability far beyond that achievable with the standard pulse-height discrimination method.
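A common way to turn a pulse-shape/ionization correlation like this into a discriminator is a charge-comparison statistic: the fraction of the pulse integral arriving in its early part. The two-exponential pulses below are synthetic stand-ins for the detector's current pulses, offered only to illustrate the statistic:

```python
import numpy as np

def charge_ratio(pulse, split):
    """Fraction of total charge arriving before sample `split`
    (a generic pulse-shape discrimination statistic)."""
    return pulse[:split].sum() / pulse.sum()

t = np.arange(200)  # sample index, arbitrary units
# Two synthetic current pulses with equal total charge but different
# shapes, mimicking different amounts of primary ionization.
fast = np.exp(-t / 10.0)
slow = 0.4 * np.exp(-t / 10.0) + 0.6 * np.exp(-t / 60.0)
fast /= fast.sum()
slow /= slow.sum()

print("fast pulse ratio:", round(charge_ratio(fast, 20), 3))
print("slow pulse ratio:", round(charge_ratio(slow, 20), 3))
# Pulse *height* alone could hide this difference; the shape statistic
# separates the two populations.
```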
SNSPD with parallel nanowires (Conference Presentation)
NASA Astrophysics Data System (ADS)
Ejrnaes, Mikkel; Parlato, Loredana; Gaggero, Alessandro; Mattioli, Francesco; Leoni, Roberto; Pepe, Giampiero; Cristiano, Roberto
2017-05-01
Superconducting nanowire single-photon detectors (SNSPDs) have been shown to be promising in applications such as quantum communication and computation, quantum optics, imaging, metrology and sensing. They offer the advantages of a low dark count rate, high efficiency, a broadband response, a short time jitter, a high repetition rate, and no need for gated-mode operation. Several SNSPD designs have been proposed in the literature. Here, we discuss the so-called parallel-nanowire configurations. They were introduced with the aim of improving particular SNSPD properties, such as detection efficiency, speed, signal-to-noise ratio, or photon-number resolution. Although apparently similar, the various parallel designs are not the same, and no single design improves all of these properties at once. In fact, each design has its own characteristics, with specific advantages and drawbacks. In this work, we discuss the various designs, outlining their peculiarities and possible improvements.
A seismic reflection image for the base of a tectonic plate.
Stern, T A; Henrys, S A; Okaya, D; Louie, J N; Savage, M K; Lamb, S; Sato, H; Sutherland, R; Iwasaki, T
2015-02-05
Plate tectonics successfully describes the surface of Earth as a mosaic of moving lithospheric plates. But it is not clear what happens at the base of the plates, the lithosphere-asthenosphere boundary (LAB). The LAB has been well imaged with converted teleseismic waves, whose 10-40-kilometre wavelength controls the structural resolution. Here we use explosion-generated seismic waves (of about 0.5-kilometre wavelength) to form a high-resolution image for the base of an oceanic plate that is subducting beneath North Island, New Zealand. Our 80-kilometre-wide image is based on P-wave reflections and shows an approximately 15° dipping, abrupt, seismic wave-speed transition (less than 1 kilometre thick) at a depth of about 100 kilometres. The boundary is parallel to the top of the plate and seismic attributes indicate a P-wave speed decrease of at least 8 ± 3 per cent across it. A parallel reflection event approximately 10 kilometres deeper shows that the decrease in P-wave speed is confined to a channel at the base of the plate, which we interpret as a sheared zone of ponded partial melts or volatiles. This is independent, high-resolution evidence for a low-viscosity channel at the LAB that decouples plates from mantle flow beneath, and allows plate tectonics to work.
NASA Astrophysics Data System (ADS)
Ding, Xuemei; Wang, Bingyuan; Liu, Dongyuan; Zhang, Yao; He, Jie; Zhao, Huijuan; Gao, Feng
2018-02-01
During the past two decades there has been a dramatic rise in the use of functional near-infrared spectroscopy (fNIRS) as a neuroimaging technique in cognitive neuroscience research. Diffuse optical tomography (DOT) and optical topography (OT) can be employed as optical imaging techniques for investigating brain activity. However, most current imagers with analogue detection are limited in sensitivity and dynamic range. Although photon-counting detection can significantly improve detection sensitivity, the intrinsic nature of sequential excitations reduces temporal resolution. To improve temporal resolution, sensitivity and dynamic range, we developed a multi-channel continuous-wave (CW) system for brain functional imaging based on a novel lock-in photon-counting technique. The system consists of 60 light-emitting diode (LED) sources at three wavelengths of 660 nm, 780 nm and 830 nm, which are modulated by current-stabilized square-wave signals at different frequencies, and 12 photomultiplier tubes (PMTs) based on the lock-in photon-counting technique. This design combines the ultra-high sensitivity of the photon-counting technique with the parallelism of the digital lock-in technique. We can therefore acquire the diffused light intensity for all the source-detector pairs (SD-pairs) in parallel. Performance assessments of the system, conducted using phantom experiments, demonstrate its excellent measurement linearity, negligible inter-channel crosstalk, strong noise robustness and high temporal resolution.
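The core of lock-in photon counting is that each source is tagged by its own modulation frequency, so a single detector's count stream can be demodulated into per-source channels in parallel. Below is a toy NumPy demodulation of two frequency-multiplexed sources; all rates and frequencies are invented, and the real system performs this with hardware counters rather than offline arrays:

```python
import numpy as np

fs, T = 10_000.0, 1.0                    # sampling rate (Hz), duration (s)
t = np.arange(int(fs * T)) / fs
f1, f2 = 170.0, 230.0                    # illustrative modulation frequencies

# Square-wave modulated photon rates from two sources plus background.
rate = (50 * (np.sign(np.sin(2 * np.pi * f1 * t)) + 1) / 2
        + 80 * (np.sign(np.sin(2 * np.pi * f2 * t)) + 1) / 2
        + 20)
counts = np.random.default_rng(1).poisson(rate / fs)  # detected counts/bin

def lock_in(counts, f):
    """Demodulate the count stream at frequency f (first harmonic)."""
    ref_i = np.cos(2 * np.pi * f * t)
    ref_q = np.sin(2 * np.pi * f * t)
    return np.hypot((counts * ref_i).mean(), (counts * ref_q).mean())

# Each channel recovers its own source's amplitude from the shared
# detector stream; background and the other source average out.
print(lock_in(counts, f1), lock_in(counts, f2))
```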
García-Cabezas, Miguel Ángel; Barbas, Helen
2018-01-01
Noninvasive imaging and tractography methods have yielded information on broad communication networks but lack resolution to delineate intralaminar cortical and subcortical pathways in humans. An important unanswered question is whether we can use the wealth of precise information on pathways from monkeys to understand connections in humans. We addressed this question within a theoretical framework of systematic cortical variation and used identical high-resolution methods to compare the architecture of cortical gray matter and the white matter beneath, which gives rise to short- and long-distance pathways in humans and rhesus monkeys. We used the prefrontal cortex as a model system because of its key role in attention, emotions, and executive function, which are processes often affected in brain diseases. We found striking parallels and consistent trends in the gray and white matter architecture in humans and monkeys and between the architecture and actual connections mapped with neural tracers in rhesus monkeys and, by extension, in humans. Using the novel architectonic portrait as a base, we found significant changes in pathways between nearby prefrontal and distant areas in autism. Our findings reveal that a theoretical framework allows study of normal neural communication in humans at high resolution and specific disruptions in diverse psychiatric and neurodegenerative diseases. PMID:29401206
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barker, Andrew T.; Gelever, Stephan A.; Lee, Chak S.
2017-12-12
smoothG is a collection of parallel C++ classes/functions that algebraically constructs reduced models of different resolutions from a given high-fidelity graph model. In addition, smoothG provides efficient linear solvers for the reduced models. Beyond pure graph problems, the software finds application in subsurface flow and power grid simulations, in which graph Laplacians arise.
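A generic way to build a reduced ("coarse") model from a graph Laplacian is Galerkin projection onto vertex aggregates, L_c = Pᵀ L P. The sketch below illustrates that construction on a toy graph; it is a standard textbook device, not smoothG's actual algorithm or API:

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A from a symmetric adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

# A 6-vertex graph: two triangles joined by a single edge (2, 3).
adj = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0
L = laplacian(adj)

# Aggregate vertices {0,1,2} and {3,4,5} into two coarse vertices.
P = np.zeros((6, 2))
P[:3, 0] = 1.0
P[3:, 1] = 1.0

Lc = P.T @ L @ P      # reduced-resolution model
print(Lc)             # [[ 1. -1.] [-1.  1.]]: only the coupling edge survives
```

The coarse operator keeps the inter-aggregate connectivity while collapsing internal detail, which is the essence of a reduced-resolution graph model.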
Dumping Low and High Resolution Graphics on the Apple IIe Microcomputer System.
ERIC Educational Resources Information Center
Fletcher, Richard K., Jr.; Ruckman, Frank, Jr.
This paper discusses and outlines procedures for obtaining a hard copy of the graphic output of a microcomputer or "dumping a graphic" using the Apple Dot Matrix Printer with the Apple Parallel Interface Card, and the Imagewriter Printer with the Apple Super Serial Interface Card. Hardware configurations and instructions for high…
ERIC Educational Resources Information Center
Smith, Stacie Nicole; Fairman, David
2004-01-01
Conflict resolution education (CRE) grew out of several parallel efforts: integrating social justice into schools, concerns about safety and youth violence, and desires to enhance responsible citizenship. Today, CRE encompasses, or is a component of, a broad range of initiatives in schools: violence prevention programs, diversity and tolerance…
Hybrid parallel computing architecture for multiview phase shifting
NASA Astrophysics Data System (ADS)
Zhong, Kai; Li, Zhongwei; Zhou, Xiaohui; Shi, Yusheng; Wang, Congjun
2014-11-01
The multiview phase-shifting method shows its powerful capability in achieving high-resolution three-dimensional (3-D) shape measurement. Unfortunately, this ability comes at a very high computation cost, and 3-D computations have had to be processed offline. To realize real-time 3-D shape measurement, a hybrid parallel computing architecture is proposed for multiview phase shifting. In this architecture, the central processing unit cooperates with the graphics processing unit (GPU) to achieve hybrid parallel computing. The high-computation-cost procedures, including lens distortion rectification, phase computation, correspondence, and 3-D reconstruction, are implemented on the GPU, and a three-layer kernel function model is designed to simultaneously realize coarse-grained and fine-grained parallel computing. Experimental results verify that the developed system can perform real-time 3-D measurement at 50 fps (frames per second) with 260 K 3-D points per frame. A speedup of up to 180 times is obtained using an NVIDIA GT560Ti graphics card compared with a sequential C implementation on a 3.4 GHz Intel Core i7 3770.
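The per-pixel nature of phase retrieval is what makes it so GPU-friendly: each pixel's phase is an independent arctangent. A minimal NumPy version of the classic four-step formula φ = atan2(I₃ − I₁, I₀ − I₂) on synthetic fringes follows; it illustrates the phase-computation stage only and is not the authors' pipeline:

```python
import numpy as np

# Synthetic fringes: I_k = A + B*cos(phi + k*pi/2), k = 0..3.
h, w = 480, 640
x = np.linspace(0, 8 * np.pi, w)
phi_true = np.tile(x, (h, 1))                     # ground-truth phase
A, B = 120.0, 100.0
I = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]

# Four-step phase retrieval, wrapped to (-pi, pi]. One independent
# atan2 per pixel -- ideal for fine-grained GPU parallelism.
phi = np.arctan2(I[3] - I[1], I[0] - I[2])

err = np.angle(np.exp(1j * (phi - phi_true)))     # compare modulo 2*pi
print("max wrapped error:", np.abs(err).max())    # ~ machine precision
```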
NASA Astrophysics Data System (ADS)
Hou, Zhenlong; Huang, Danian
2017-09-01
In this paper, we first study the inversion of probability tomography (IPT) with gravity gradiometry data. The spatial resolution of the results is improved by multi-tensor joint inversion, a depth weighting matrix, and other methods. To address the problems posed by large data volumes in exploration, we present a parallel algorithm and its performance analysis, combining Compute Unified Device Architecture (CUDA) with Open Multi-Processing (OpenMP) based on Graphics Processing Unit (GPU) acceleration. Tests on a synthetic model and on real data from the Vinton Dome yield improved results, and the improved inversion algorithm is shown to be effective and feasible. The parallel algorithm we designed outperforms the other CUDA-based implementations; the maximum speedup can exceed 200. In the performance analysis, multi-GPU speedup and multi-GPU efficiency are applied to analyze the scalability of the multi-GPU programs. The designed parallel algorithm is demonstrated to be able to process larger-scale data, and the new analysis method is practical.
NASA Astrophysics Data System (ADS)
Hara, Tatsuhiko
2004-08-01
We implement the Direct Solution Method (DSM) on a vector-parallel supercomputer and show that it is possible to significantly improve its computational efficiency through parallel computing. We apply the parallel DSM calculation to waveform inversion of long period (250-500 s) surface wave data for three-dimensional (3-D) S-wave velocity structure in the upper and uppermost lower mantle. We use a spherical harmonic expansion to represent lateral variation with the maximum angular degree 16. We find significant low velocities under south Pacific hot spots in the transition zone. This is consistent with other seismological studies conducted in the Superplume project, which suggests deep roots of these hot spots. We also perform simultaneous waveform inversion for 3-D S-wave velocity and Q structure. Since resolution for Q is not good, we develop a new technique in which power spectra are used as data for inversion. We find good correlation between long wavelength patterns of Vs and Q in the transition zone such as high Vs and high Q under the western Pacific.
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
NASA Astrophysics Data System (ADS)
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; Stuehn, Torsten
2017-11-01
Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also demonstrate the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work. These two systems comprise an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
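In one dimension, an a priori rearrangement of subdomain walls can be pictured as placing walls at particle-count quantiles, so each subdomain carries a comparable load even when the density is inhomogeneous. The following sketch is an illustrative stand-in under invented particle distributions, not the paper's heterogeneity-sensitive algorithm:

```python
import numpy as np

def wall_positions(x, n_domains):
    """Place subdomain walls at particle-count quantiles so every
    domain holds ~len(x)/n_domains particles (1D illustration)."""
    qs = np.linspace(0, 1, n_domains + 1)[1:-1]
    return np.quantile(x, qs)

rng = np.random.default_rng(2)
# Inhomogeneous system: a dense (e.g., solvated) region plus a dilute one.
x = np.concatenate([rng.normal(0.25, 0.05, 80_000),
                    rng.uniform(0.0, 1.0, 20_000)])

walls = wall_positions(x, 4)
counts = np.histogram(x, bins=np.concatenate(([0.0], walls, [1.0])))[0]
print("walls:          ", np.round(walls, 3))
print("load per domain:", counts)   # ~25000 each, vs. wildly unequal
                                    # counts for equally spaced walls
```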
NASA Astrophysics Data System (ADS)
Muraviev, A. V.; Smolski, V. O.; Loparo, Z. E.; Vodopyanov, K. L.
2018-04-01
Mid-infrared spectroscopy offers supreme sensitivity for the detection of trace gases, solids and liquids based on tell-tale vibrational bands specific to this spectral region. Here, we present a new platform for mid-infrared dual-comb Fourier-transform spectroscopy based on a pair of ultra-broadband subharmonic optical parametric oscillators pumped by two phase-locked thulium-fibre combs. Our system provides fast (7 ms for a single interferogram), moving-parts-free, simultaneous acquisition of 350,000 spectral data points, spaced by a 115 MHz intermodal interval over the 3.1-5.5 µm spectral range. Parallel detection of 22 trace molecular species in a gas mixture, including isotopologues containing isotopes such as 13C, 18O, 17O, 15N, 34S, 33S and deuterium, with part-per-billion sensitivity and sub-Doppler resolution is demonstrated. The technique also features absolute optical frequency referencing to an atomic clock, a high degree of mutual coherence between the two mid-infrared combs with a relative comb-tooth linewidth of 25 mHz, coherent averaging and feasibility for kilohertz-scale spectral resolution.
Embedded Implementation of VHR Satellite Image Segmentation
Li, Chao; Balla-Arabé, Souleymane; Ginhac, Dominique; Yang, Fan
2016-01-01
Processing and analysis of Very High Resolution (VHR) satellite images provide a mass of crucial information, which can be used for urban planning, security issues or environmental monitoring. However, they are computationally expensive and, thus, time consuming, while some of the applications, such as natural disaster monitoring and prevention, require high-efficiency performance. Fortunately, parallel computing techniques and embedded systems have made great progress in recent years, and a series of massively parallel image processing devices, such as digital signal processors or Field Programmable Gate Arrays (FPGAs), have been made available to engineers at a very convenient price and demonstrate significant advantages in terms of running cost, embeddability, power consumption, flexibility, etc. In this work, we designed a texture region segmentation method for very high resolution satellite images by using the level set algorithm and multi-kernel theory in a high-abstraction C environment, and realized its register-transfer level implementation with the help of a newly proposed high-level synthesis-based design flow. The evaluation experiments demonstrate that the proposed design can produce high-quality image segmentation with a significant running-cost advantage. PMID:27240370
WaveJava: Wavelet-based network computing
NASA Astrophysics Data System (ADS)
Ma, Kun; Jiao, Licheng; Shi, Zhuoer
1997-04-01
Wavelet analysis is a powerful theory, but its successful application still needs suitable programming tools. Java is a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multi-threaded, dynamic language. This paper addresses the design and development of a cross-platform software environment for experimenting with and applying wavelet theory. WaveJava, a wavelet class library designed with object-oriented programming, is developed to take advantage of wavelet features such as multi-resolution analysis and parallel processing in network computing. A new application architecture is designed for the net-wide distributed client-server environment. The data are transmitted as multi-resolution packets. At the distributed sites around the net, these data packets undergo matching or recognition processing in parallel. The results are fed back to determine the next operation, so more robust results can be reached quickly. WaveJava is easy to use and to extend for special applications. This paper gives a solution for a distributed fingerprint information processing system. It also suits other net-based multimedia information processing, such as network libraries, remote teaching and filmless picture archiving and communication systems.
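The multi-resolution packets mentioned above can be pictured with a one-level Haar transform: the signal splits into a coarse approximation (transmitted first) and a detail band (sent later to refine). WaveJava is a Java library; the sketch below uses Python only to keep this document's examples in one language, and the function names are invented:

```python
import numpy as np

def haar_1level(x):
    """One level of the orthonormal Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass)
    return a, d

def haar_1level_inv(a, d):
    """Exact inverse of haar_1level."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_1level(x)
# The coarse packet `a` alone already approximates the signal;
# the detail packet `d` refines it to perfect reconstruction.
print(np.allclose(haar_1level_inv(a, d), x))   # True
```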
Wyatt, Madison; Nave, Gillian
2017-01-01
We evaluated the use of a commercial flatbed scanner for digitizing photographic plates used for spectroscopy. The scanner has a bed size of 420 mm by 310 mm and a pixel size of about 0.0106 mm. Our tests show that the closest line pairs that can be resolved with the scanner are 0.024 mm apart, only slightly larger than the Nyquist resolution of 0.021 mm expected from the 0.0106 mm pixel size. We measured periodic errors in the scanner using both a calibrated length scale and a photographic plate. We find no noticeable periodic errors in the direction parallel to the linear detector in the scanner, but errors with an amplitude of 0.03 mm to 0.05 mm in the direction perpendicular to the detector. We conclude that large periodic errors in measurements of spectroscopic plates using flatbed scanners can be eliminated by scanning the plates with the dispersion direction parallel to the linear detector, i.e., by placing the plate along the short side of the scanner. PMID:28463262
Performance, results, and prospects of the visible spectrograph VEGA on CHARA
NASA Astrophysics Data System (ADS)
Mourard, Denis; Challouf, Mounir; Ligi, Roxanne; Bério, Philippe; Clausse, Jean-Michel; Gerakis, Jérôme; Bourges, Laurent; Nardetto, Nicolas; Perraut, Karine; Tallon-Bosc, Isabelle; McAlister, H.; ten Brummelaar, T.; Ridgway, S.; Sturmann, J.; Sturmann, L.; Turner, N.; Farrington, C.; Goldfinger, P. J.
2012-07-01
In this paper, we review the current performance of the VEGA/CHARA visible spectrograph and survey the most recent astrophysical results. The science programs benefit from the exceptional angular resolution, the unique spectral resolution, and one of the main features of CHARA: parallel infrared and visible operation. We also discuss recent developments concerning the tools for the preparation of observations and important features of the data reduction software. A short discussion of future developments, directed towards new detectors and possible new beam combination schemes for improved sensitivity and imaging capabilities, completes the presentation.
Integrated sensor with frame memory and programmable resolution for light adaptive imaging
NASA Technical Reports Server (NTRS)
Zhou, Zhimin (Inventor); Fossum, Eric R. (Inventor); Pain, Bedabrata (Inventor)
2004-01-01
An image sensor operable to vary the output spatial resolution according to a received light level while maintaining a desired signal-to-noise ratio. Signals from neighboring pixels in a pixel patch with an adjustable size are added to increase both the image brightness and signal-to-noise ratio. One embodiment comprises a sensor array for receiving input signals, a frame memory array for temporarily storing a full frame, and an array of self-calibration column integrators for uniform column-parallel signal summation. The column integrators are capable of substantially canceling fixed pattern noise.
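Summing an N-pixel patch multiplies the signal by N while uncorrelated read noise grows only as √N, so the SNR improves by √N at the cost of spatial resolution; this is the trade-off behind the sensor's programmable patch size. A quick Monte Carlo check under an invented additive-read-noise model (numbers are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
signal, read_noise = 5.0, 4.0            # electrons/pixel (illustrative)
# 10,000 noisy readouts of a dim, uniform 4x4 pixel region.
frames = rng.normal(0, read_noise, (10_000, 4, 4)) + signal

def snr_of_patch(frames, k):
    """Empirical SNR after summing a k-by-k pixel patch."""
    sums = frames[:, :k, :k].sum(axis=(1, 2))
    return sums.mean() / sums.std()

for k in (1, 2, 4):
    print(f"{k}x{k} patch: SNR ~ {snr_of_patch(frames, k):.2f}")
# SNR scales roughly as k, i.e. sqrt(N) for N = k*k pixels summed,
# which is why binning brightens dim scenes at reduced resolution.
```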
Moral dilemmas and moral principles: when emotion and cognition unite.
Manfrinati, Andrea; Lotto, Lorella; Sarlo, Michela; Palomba, Daniela; Rumiati, Rino
2013-01-01
Traditional studies on moral judgement used resolutions of moral dilemmas that were framed in terms of acceptability of the consequentialist action promoting a greater good, thus overlooking the deontological implications (choices cannot be justified by their consequences). Recently, some authors have suggested a parallelism between automatic, unreflective emotional responses and deontological moral judgements. In this study, we developed a novel experimental paradigm in which participants were required to choose between two resolutions of a moral dilemma (consequentialist and deontological). To assess whether emotions are engaged in each of the two resolutions, we asked participants to evaluate their emotional experience through the ratings of valence and arousal. Results showed that emotion is involved not only in deontological but also in consequentialist resolutions. Moreover, response times pointed out a different interplay between emotion and cognition in determining a conflict in the dilemma's resolution. In particular, when people were faced with trolley-like dilemmas we found that decisions leading to deontological resolutions were slower than decisions leading to consequentialist resolutions. We propose that this finding reflects the special (but not accepted) permission provided by the doctrine of the double effect for incidentally causing death for the sake of a good end.
NASA Astrophysics Data System (ADS)
Al-Saadi, Osamah; Schmidt, Volkmar; Becken, Michael; Fritsch, Thomas
2017-04-01
Electrical resistivity tomography (ERT) methods have been used increasingly in shallow-depth archaeological prospection over the last few decades. These non-invasive techniques are very useful in saving time, costs, and effort. Both 2D and 3D ERT techniques are used to obtain detailed images of subsurface anomalies. In two surveyed areas near Nonnweiler (Germany), we present the results of a full 3D setup with a roll-along technique and of a quasi-3D setup (parallel and orthogonal profiles in dipole-dipole configuration). In area A, a dipole-dipole array with 96 electrodes in a uniform rectangular survey grid was used in full 3D to investigate a presumed Roman building. A roll-along technique was utilized to cover a large part of the archaeological site with an electrode spacing of 1 meter, and of 0.5 meter for a more detailed image. Additional dense parallel 2D profiles were carried out in a dipole-dipole array with 0.25 meter electrode spacing and 0.25 meter between adjacent profiles in both directions for higher-resolution subsurface images. We designed a new field procedure that uses an electrode array fixed in a frame. This facilitates efficient field operation, which comprised 2376 electrode positions. With the quasi-3D imaging, we confirmed the full 3D inversion model but at a much better resolution. In area B, dense parallel 2D profiles were used directly to survey the second target, also with 0.25 meter electrode spacing and profile separation. The same field measurement design was utilized, comprising 9648 electrode positions in total. The quasi-3D inversion results clearly revealed the main structures of the Roman construction. These ERT inversion results coincide well with the archaeological excavation that has been carried out in some parts of this area. The ERT results successfully image parts of the walls as well as smaller internal structures of the Roman building.
Adaptive optics parallel spectral domain optical coherence tomography for imaging the living retina
NASA Astrophysics Data System (ADS)
Zhang, Yan; Rha, Jungtae; Jonnal, Ravi S.; Miller, Donald T.
2005-06-01
Although optical coherence tomography (OCT) can axially resolve and detect reflections from individual cells, there are no reports of imaging cells in the living human retina using OCT. To supplement the axial resolution and sensitivity of OCT with the necessary lateral resolution and speed, we developed a novel spectral domain OCT (SD-OCT) camera based on a free-space parallel illumination architecture and equipped with adaptive optics (AO). Conventional flood illumination, also with AO, was integrated into the camera and provided confirmation of the focus position in the retina with an accuracy of ±10.3 μm. Short bursts of narrow B-scans (100x560 μm) of the living retina were subsequently acquired at 500 Hz during dynamic compensation (up to 14 Hz) that successfully corrected the most significant ocular aberrations across a dilated 6 mm pupil. Camera sensitivity (up to 94 dB) was sufficient for observing reflections from essentially all neural layers of the retina. Signal-to-noise of the detected reflection from the photoreceptor layer was highly sensitive to the level of ocular aberrations and defocus, with changes of 11.4 and 13.1 dB (single pass) observed when the ocular aberrations (astigmatism, 3rd order and higher) were corrected and when the focus was shifted by 200 μm (0.54 diopters) in the retina, respectively. The 3D resolution of the B-scans (3.0x3.0x5.7 μm) is the highest reported to date in the living human eye and was sufficient to observe the interface between the inner and outer segments of individual photoreceptor cells, resolved in both lateral and axial dimensions. However, high-contrast speckle, which is intrinsic to OCT, was present throughout the AO parallel SD-OCT B-scans and obstructed correlating retinal reflections to cell-sized retinal structures.
NASA Astrophysics Data System (ADS)
Stellmach, Stephan; Hansen, Ulrich
2008-05-01
Numerical simulations of the process of convection and magnetic field generation in planetary cores still fail to reach geophysically realistic control parameter values. Future progress in this field depends crucially on efficient numerical algorithms which are able to take advantage of the newest generation of parallel computers. Desirable features of simulation algorithms include (1) spectral accuracy, (2) an operation count per time step that is small and roughly proportional to the number of grid points, (3) memory requirements that scale linearly with resolution, (4) an implicit treatment of all linear terms including the Coriolis force, (5) the ability to treat all kinds of common boundary conditions, and (6) reasonable efficiency on massively parallel machines with tens of thousands of processors. So far, algorithms for fully self-consistent dynamo simulations in spherical shells do not achieve all these criteria simultaneously, resulting in strong restrictions on the possible resolutions. In this paper, we demonstrate that local dynamo models in which the process of convection and magnetic field generation is only simulated for a small part of a planetary core in Cartesian geometry can achieve the above goal. We propose an algorithm that fulfills the first five of the above criteria and demonstrate that a model implementation of our method on an IBM Blue Gene/L system scales impressively well for up to O(10^4) processors. This allows for numerical simulations at rather extreme parameter values.
An Efficient Objective Analysis System for Parallel Computers
NASA Technical Reports Server (NTRS)
Stobie, J.
1999-01-01
A new atmospheric objective analysis system designed for parallel computers will be described. The system can produce a global analysis (on a 1 x 1 lat-lon grid with 18 levels of heights and winds and 10 levels of moisture) using 120,000 observations in 17 minutes on 32 CPUs (SGI Origin 2000). No special parallel code is needed (e.g., MPI or multitasking), and the 32 CPUs do not have to be on the same platform. The system is totally portable and can run on several different architectures at once. In addition, the system can easily scale up to 100 or more CPUs. This will allow for much higher resolution and significant increases in input data. The system scales linearly with the number of observations and the number of grid points. The cost overhead in going from 1 to 32 CPUs is 18%. In addition, the analysis results are identical regardless of the number of processors used. This system has all the characteristics of optimal interpolation, combining detailed instrument and first-guess error statistics to produce the best estimate of the atmospheric state. Static tests with a 2 x 2.5 resolution version of this system showed its analysis increments are comparable to the latest NASA operational system, including maintenance of mass-wind balance. Results from several months of cycling tests in the Goddard EOS Data Assimilation System (GEOS DAS) show this new analysis retains the same level of agreement between the first guess and observations (O-F statistics) as the current operational system.
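The optimal interpolation this system is built on has a compact matrix form: the analysis is x_a = x_b + K(y − Hx_b) with gain K = BHᵀ(HBHᵀ + R)⁻¹, where B and R are the background and observation error covariances. The two-gridpoint sketch below uses invented covariance numbers purely to show the mechanics:

```python
import numpy as np

# Background (first guess) at two grid points and its error covariance B.
xb = np.array([10.0, 12.0])
B = np.array([[1.0, 0.6],
              [0.6, 1.0]])          # correlated background errors

# One observation located at grid point 0.
H = np.array([[1.0, 0.0]])          # observation operator
R = np.array([[0.5]])               # observation error variance
y = np.array([11.0])

# Optimal-interpolation (BLUE) analysis update.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + K @ (y - H @ xb)
print("gain:\n", K)        # [[0.667], [0.4]]: point 1 is nudged too,
print("analysis:", xa)     # via the background error correlation
```

Because the gain depends only on the error statistics, the analysis is identical however the observations are partitioned across processors, which is consistent with the reproducibility property claimed above.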
PENTACLE: Parallelized particle-particle particle-tree code for planet formation
NASA Astrophysics Data System (ADS)
Iwasawa, Masaki; Oshino, Shoichi; Fujii, Michiko S.; Hori, Yasunori
2017-10-01
We have newly developed a parallelized particle-particle particle-tree code for planet formation, PENTACLE, which is a parallelized hybrid N-body integrator executed on a CPU-based (super)computer. PENTACLE uses a fourth-order Hermite algorithm to calculate gravitational interactions between particles within a cut-off radius and a Barnes-Hut tree method for gravity from particles beyond. It also implements an open-source library designed for full automatic parallelization of particle simulations, FDPS (Framework for Developing Particle Simulator), to parallelize a Barnes-Hut tree algorithm for a memory-distributed supercomputer. These allow us to handle 1-10 million particles in a high-resolution N-body simulation on CPU clusters for collisional dynamics, including physical collisions in a planetesimal disc. In this paper, we show the performance and the accuracy of PENTACLE in terms of the cut-off radius R̃_cut and the time step Δt. It turns out that the accuracy of a hybrid N-body simulation is controlled through Δt/R̃_cut, and Δt/R̃_cut ≈ 0.1 is necessary to simulate accurately the accretion process of a planet for ≥10^6 yr. For all those interested in large-scale particle simulations, PENTACLE, customized for planet formation, will be freely available from https://github.com/PENTACLE-Team/PENTACLE under the MIT licence.
Ardekani, Siamak; Selva, Luis; Sayre, James; Sinha, Usha
2006-11-01
Single-shot echo-planar based diffusion tensor imaging is prone to geometric and intensity distortions. Parallel imaging is a means of reducing these distortions while preserving spatial resolution. A quantitative comparison at 3 T of parallel imaging for diffusion tensor images (DTI) using k-space (generalized auto-calibrating partially parallel acquisitions; GRAPPA) and image domain (sensitivity encoding; SENSE) reconstructions at different acceleration factors, R, is reported here. Images were evaluated using 8 human subjects with repeated scans for 2 subjects to estimate reproducibility. Mutual information (MI) was used to assess the global changes in geometric distortions. The effects of parallel imaging techniques on random noise and reconstruction artifacts were evaluated by placing 26 regions of interest and computing the standard deviation of apparent diffusion coefficient and fractional anisotropy along with the error of fitting the data to the diffusion model (residual error). The larger positive values in mutual information index with increasing R values confirmed the anticipated decrease in distortions. Further, the MI index of GRAPPA sequences for a given R factor was larger than the corresponding mSENSE images. The residual error was lowest in the images acquired without parallel imaging and among the parallel reconstruction methods, the R = 2 acquisitions had the least error. The standard deviation, accuracy, and reproducibility of the apparent diffusion coefficient and fractional anisotropy in homogenous tissue regions showed that GRAPPA acquired with R = 2 had the least amount of systematic and random noise and of these, significant differences with mSENSE, R = 2 were found only for the fractional anisotropy index. Evaluation of the current implementation of parallel reconstruction algorithms identified GRAPPA acquired with R = 2 as optimal for diffusion tensor imaging.
NASA Astrophysics Data System (ADS)
Lachhwani, Kailash; Poonia, Mahaveer Prasad
2012-08-01
In this paper, we present a procedure for solving multilevel fractional programming problems in a large hierarchical decentralized organization using a fuzzy goal programming approach. In the proposed method, the tolerance membership functions for the fuzzily described numerator and denominator parts of the objective functions at all levels, as well as for the control vectors of the higher-level decision makers, are defined by determining the individual optimal solutions of each of the level decision makers. A possible relaxation of the higher-level decision is considered to avoid decision deadlock arising from the conflicting nature of the objective functions. Fuzzy goal programming is then used to achieve the highest degree of each membership goal by minimizing the negative deviational variables. We also provide a sensitivity analysis with variation of the tolerance values on the decision vectors, showing, with the help of a numerical example, how the solution responds to changes in the tolerance values.
Belkaid, Marwen; Cuperlier, Nicolas; Gaussier, Philippe
2017-01-01
Emotions play a significant role in internal regulatory processes. In this paper, we advocate four key ideas. First, novelty detection can be grounded in the sensorimotor experience and allow higher-order appraisal. Second, cognitive processes, such as those involved in self-assessment, influence emotional states by eliciting affects like boredom and frustration. Third, emotional processes such as those triggered by self-assessment influence attentional processes. Last, close emotion-cognition interactions implement an efficient feedback loop for the purpose of top-down behavior regulation. The latter is what we call 'Emotional Metacontrol'. We introduce a model based on artificial neural networks. This architecture is used to control a robotic system in a visual search task. The emotional metacontrol intervenes to bias the robot's visual attention during active object recognition. Through a behavioral and statistical analysis, we show that this mechanism increases the robot's performance and fosters exploratory behavior that avoids deadlocks.
Starfish Behavior as an Anticipatory System: Its Flexibility in Obstacle Avoidance
NASA Astrophysics Data System (ADS)
Migita, Masao
2006-06-01
As starfish do not have central nervous systems, their behaviors, such as walking, righting, and feeding, must be produced by processes of self-organization among many motor organs. It has been noticed that self-organized behavioral patterns are not strictly determined by external stimuli, though such stimuli may elicit the very self-organization processes. In this sense, starfish are not merely reactive, as conventional discourse in comparative psychology has presupposed. In this study, I will show the diversity of self-organized behavior that a starfish exhibits in obstacle-avoidance experiments. The starfish may be considered an anticipatory system, because it usually appeared to be free from serious deadlock at the obstacles. I will also discuss how viewing the animal as an anticipatory system may have interesting implications for behavioral biology and comparative psychology.
Experience Using Formal Methods for Specifying a Multi-Agent System
NASA Technical Reports Server (NTRS)
Rouff, Christopher; Rash, James; Hinchey, Michael; Szczur, Martha R. (Technical Monitor)
2000-01-01
The process and results of using formal methods to specify the Lights Out Ground Operations System (LOGOS) is presented in this paper. LOGOS is a prototype multi-agent system developed to show the feasibility of providing autonomy to satellite ground operations functions at NASA Goddard Space Flight Center (GSFC). After the initial implementation of LOGOS the development team decided to use formal methods to check for race conditions, deadlocks and omissions. The specification exercise revealed several omissions as well as race conditions. After completing the specification, the team concluded that certain tools would have made the specification process easier. This paper gives a sample specification of two of the agents in the LOGOS system and examples of omissions and race conditions found. It concludes with describing an architecture of tools that would better support the future specification of agents and other concurrent systems.
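Deadlock, one of the properties the LOGOS specification exercise checked for, can be detected mechanically once agent interactions are abstracted as a wait-for graph: a cycle among waiting agents is a deadlock. The DFS sketch below is a generic illustration of that check and has no connection to the actual LOGOS tooling:

```python
def find_deadlock(wait_for):
    """Return a cycle in a wait-for graph (dict: agent -> agents it
    waits on), or None. A cycle means those agents are deadlocked."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in wait_for}
    stack = []

    def dfs(v):
        color[v] = GRAY
        stack.append(v)
        for w in wait_for.get(v, ()):
            if color.get(w, WHITE) == GRAY:       # back edge: found a cycle
                return stack[stack.index(w):] + [w]
            if color.get(w, WHITE) == WHITE:
                cycle = dfs(w)
                if cycle:
                    return cycle
        stack.pop()
        color[v] = BLACK
        return None

    for v in list(wait_for):
        if color[v] == WHITE:
            cycle = dfs(v)
            if cycle:
                return cycle
    return None

# Agent A waits on B, B on C, C back on A: a three-way deadlock.
print(find_deadlock({"A": ["B"], "B": ["C"], "C": ["A"], "D": []}))
# -> ['A', 'B', 'C', 'A']
```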
Adaptive Gait Control for a Quadruped Robot on 3D Path Planning
NASA Astrophysics Data System (ADS)
Igarashi, Hiroshi; Kakikura, Masayoshi
A legged walking robot is able not only to move on irregular terrain but also to change its posture. For example, the robot can pass under overhead obstacles by crouching. The purpose of our research is to realize efficient path planning with a quadruped robot. Therefore, the path planning is expected to extend into three dimensions because of this mobility. However, several issues with quadruped robots, namely instability, workspace limitation, deadlock, and slippage, complicate realizing such an application. In order to address these issues and reinforce mobility, a new static gait pattern for a quadruped robot, called TFG (Trajectory Following Gait), is proposed. The TFG is intended to attain high controllability, like that of a wheeled robot. Additionally, the TFG allows the robot to change its posture during the walk. In this paper, experimental results show that the TFG mitigates these issues and enables efficient locomotion in three-dimensional environments.
The Wonderful World of Active Many-Particle Systems
NASA Astrophysics Data System (ADS)
Helbing, Dirk
Since the subject of traffic dynamics has captured the interest of physicists, many astonishing effects have been revealed and explained. Some of the questions now understood are the following: Why are vehicles sometimes stopped by so-called ``phantom traffic jams'', although they all like to drive fast? What are the mechanisms behind stop-and-go traffic? Why are there several different kinds of congestion, and how are they related? Why do most traffic jams occur considerably before the road capacity is reached? Can a temporary reduction of the traffic volume cause a lasting traffic jam? Why do pedestrians moving in opposite directions normally organize in lanes, while nervous crowds are ``freezing by heating''? Why do panicking pedestrians produce dangerous deadlocks? All these questions have been answered by applying and extending methods from statistical physics and non-linear dynamics to self-driven many-particle systems.
Automated Methodologies for the Design of Flow Diagrams for Development and Maintenance Activities
NASA Astrophysics Data System (ADS)
Shivanand M., Handigund; Shweta, Bhat
The Software Requirements Specification (SRS) of an organization is a text document prepared by strategic management that incorporates the organization's requirements. The requirements of the ongoing business/project development process involve software tools, hardware devices, manual procedures, application programs, and communication commands. These components are appropriately ordered to achieve the mission of the concerned process, both in project development and in ongoing business processes, in different flow diagrams, viz. the activity chart, workflow diagram, activity diagram, component diagram, and deployment diagram. This paper proposes two generic, automatic methodologies for the design of the various flow diagrams of (i) project development activities and (ii) the ongoing business process. The methodologies also resolve the ensuing deadlocks in the flow diagrams and determine the critical paths for the activity chart. Though both methodologies are independent, each complements the other in authenticating its correctness and completeness.
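Critical-path determination on an activity chart reduces to a longest-path computation over a directed acyclic graph, which a single topological pass solves. A minimal sketch with invented activities and durations (illustrating the standard technique, not the paper's methodology):

```python
from graphlib import TopologicalSorter

# Invented activity chart: activity -> (duration, prerequisites).
activities = {
    "spec":   (3, []),
    "design": (5, ["spec"]),
    "code":   (8, ["design"]),
    "docs":   (4, ["design"]),
    "test":   (6, ["code", "docs"]),
}

order = TopologicalSorter(
    {a: deps for a, (_, deps) in activities.items()}).static_order()

finish, pred = {}, {}
for a in order:                         # one pass in dependency order
    dur, deps = activities[a]
    start = max((finish[d] for d in deps), default=0)
    pred[a] = max(deps, key=finish.get) if deps else None
    finish[a] = start + dur

end = max(finish, key=finish.get)       # walk predecessors back from the end
path = []
while end:
    path.append(end)
    end = pred[end]
print("duration:", max(finish.values()), "critical path:", path[::-1])
# -> duration: 22  critical path: ['spec', 'design', 'code', 'test']
```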
3D sensitivity encoded ellipsoidal MR spectroscopic imaging of gliomas at 3T☆
Ozturk-Isik, Esin; Chen, Albert P.; Crane, Jason C.; Bian, Wei; Xu, Duan; Han, Eric T.; Chang, Susan M.; Vigneron, Daniel B.; Nelson, Sarah J.
2010-01-01
Purpose: The goal of this study was to implement time-efficient data acquisition and reconstruction methods for 3D magnetic resonance spectroscopic imaging (MRSI) of gliomas at a field strength of 3T using parallel imaging techniques.
Methods: The point spread functions, signal-to-noise ratio (SNR), spatial resolution, metabolite intensity distributions and Cho:NAA ratio of 3D ellipsoidal, 3D sensitivity encoding (SENSE) and 3D combined ellipsoidal and SENSE (e-SENSE) k-space sampling schemes were compared with conventional k-space data acquisition methods.
Results: The 3D SENSE and e-SENSE methods resulted in similar spectral patterns as the conventional MRSI methods. The Cho:NAA ratios were highly correlated (P<.05 for SENSE and P<.001 for e-SENSE) with the ellipsoidal method, and all methods exhibited significantly different spectral patterns in tumor regions compared to normal-appearing white matter. The geometry factors ranged between 1.2 and 1.3 for both the SENSE and e-SENSE spectra. When corrected for these factors and for differences in data acquisition times, the empirical SNRs were similar to values expected on theoretical grounds. The effective spatial resolution of the SENSE spectra was estimated to be the same as for the corresponding fully sampled k-space data, while the spectra acquired with ellipsoidal and e-SENSE k-space samplings were estimated to have a 2.36–2.47-fold loss in spatial resolution due to the differences in their point spread functions.
Conclusion: The 3D SENSE method retained the same spatial resolution as full k-space sampling but with a 4-fold reduction in scan time and an acquisition time of 9.28 min. The 3D e-SENSE method had a similar spatial resolution as the corresponding ellipsoidal sampling with a scan time of 4:36 min. Both parallel imaging methods provided clinically interpretable spectra with volumetric coverage and adequate SNR for evaluating Cho, Cr and NAA. PMID:19766422
NASA Technical Reports Server (NTRS)
Lin, Shian-Jiann; Atlas, Robert (Technical Monitor)
2002-01-01
The Data Assimilation Office (DAO) has been developing a new generation of ultra-high resolution General Circulation Model (GCM) that is suitable for 4-D data assimilation, numerical weather predictions, and climate simulations. These three applications have conflicting requirements. For 4-D data assimilation and weather predictions, it is highly desirable to run the model at the highest possible spatial resolution (e.g., 55 km or finer) so as to be able to resolve and predict socially and economically important weather phenomena such as tropical cyclones, hurricanes, and severe winter storms. For climate change applications, the model simulations need to be carried out for decades, if not centuries. To reduce uncertainty in climate change assessments, the next generation model would also need to be run at a fine enough spatial resolution that can at least marginally simulate the effects of intense tropical cyclones. Scientific problems (e.g., parameterization of subgrid scale moist processes) aside, all three areas of application require the model's computational performance to be dramatically improved as compared to the previous generation. In this talk, I will present the current and future developments of the "finite-volume dynamical core" at the Data Assimilation Office. This dynamical core applies modern monotonicity-preserving algorithms and is genuinely conservative by construction, not by an ad hoc fixer. The "discretization" of the conservation laws is purely local, which is clearly advantageous for resolving sharp gradient flow features. In addition, the local nature of the finite-volume discretization also has a significant advantage on distributed memory parallel computers. Together with a unique vertically Lagrangian control volume discretization that essentially reduces the dimension of the computational problem from three to two, the finite-volume dynamical core is very efficient, particularly at high resolutions. I will also present the computational design of the dynamical core using a hybrid distributed-shared memory programming paradigm that is portable to virtually any of today's high-end parallel super-computing clusters.
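The monotonicity-preserving algorithms mentioned above can be illustrated with the standard minmod-limited finite-volume update for 1D linear advection, a textbook stand-in rather than the dynamical core itself; the grid size and Courant number below are invented:

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: takes the smaller slope, zero at extrema."""
    return np.where(a * b > 0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect(q, c):
    """One finite-volume step of linear advection (speed > 0, periodic)
    with minmod-limited piecewise-linear reconstruction; c = u*dt/dx."""
    s = minmod(q - np.roll(q, 1), np.roll(q, -1) - q)   # limited slopes
    # Upwind face values from the cell-centred linear reconstruction.
    q_face = q + 0.5 * (1 - c) * s
    return q - c * (q_face - np.roll(q_face, 1))

n = 200
q = np.where(np.abs(np.arange(n) - 50) < 10, 1.0, 0.0)  # square pulse
for _ in range(100):
    q = advect(q, c=0.5)
# Monotone: the limiter prevents spurious over/undershoots at the
# sharp edges (no ad hoc fixer needed -- conservation is built in).
print(q.min() >= -1e-12, q.max() <= 1.0 + 1e-12)        # True True
```

Note how, like the core described above, the update touches only nearest neighbours, which is what keeps halo exchanges cheap on distributed-memory machines.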
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samedov, V. V., E-mail: v-samedov@yandex.ru
2016-12-15
In this study, we theoretically analyze the processes in a plane-parallel high-purity germanium (HPGe) detector. The generating function of factorial moments describing the process of registration of low-energy X-rays by the HPGe detector with consideration of capture of charge carriers by traps is obtained. It is demonstrated that the coefficients of expansion of the average signal amplitude and variance in power series over the quantity inversely proportional to the bias voltage of the detector allow one to determine the Fano factor, the product of the charge carrier lifetime and mobility, and other characteristics of the semiconductor material of the detector.
Image sensor with high dynamic range linear output
NASA Technical Reports Server (NTRS)
Yadid-Pecht, Orly (Inventor); Fossum, Eric R. (Inventor)
2007-01-01
Designs and operational methods to increase the dynamic range of image sensors, and of APS devices in particular, by achieving more than one integration time for each pixel. An APS system with more than one column-parallel signal chain for readout is described for maintaining a high frame rate during readout. Each active pixel is sampled multiple times during a single frame readout, thus resulting in multiple integration times. The operation methods can also be used to obtain multiple integration times for each pixel with an APS design having a single column-parallel signal chain for readout. Furthermore, analog-to-digital conversion of high speed and high resolution can be implemented.
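With two integration times per pixel, a linear high-dynamic-range output can be formed by using the long exposure where it is unsaturated and the rescaled short exposure elsewhere. A sketch under invented full-well and exposure-ratio numbers (not the patented readout scheme itself):

```python
import numpy as np

full_well, ratio = 4095.0, 16.0          # illustrative ADC cap, T_long/T_short
rng = np.random.default_rng(4)
scene = rng.uniform(1.0, 60_000.0, (4, 4))   # "true" linear radiance

# Two readouts of the same pixel: long and short integration times.
long_exp = np.minimum(scene, full_well)            # clips in highlights
short_exp = np.minimum(scene / ratio, full_well)   # keeps highlights

# Linear HDR output: trust the long exposure until it saturates,
# then switch to the short exposure scaled by the exposure ratio.
hdr = np.where(long_exp < full_well, long_exp, short_exp * ratio)
print("max relative error:",
      np.max(np.abs(hdr - scene) / scene))         # ~0 (noise ignored)
```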
Coté, Gregory A; Slivka, Adam; Tarnasky, Paul; Mullady, Daniel K; Elmunzer, B Joseph; Elta, Grace; Fogel, Evan; Lehman, Glen; McHenry, Lee; Romagnuolo, Joseph; Menon, Shyam; Siddiqui, Uzma D; Watkins, James; Lynch, Sheryl; Denski, Cheryl; Xu, Huiping; Sherman, Stuart
Importance: Endoscopic placement of multiple plastic stents in parallel is the first-line treatment for most benign biliary strictures; it is possible that fully covered, self-expandable metallic stents (cSEMS) may require fewer endoscopic retrograde cholangiopancreatography procedures (ERCPs) to achieve resolution.
Objective: To assess whether use of cSEMS is noninferior to plastic stents with respect to stricture resolution.
Design, Setting, and Participants: Multicenter (8 endoscopic referral centers), open-label, parallel, randomized clinical trial involving patients with treatment-naive, benign biliary strictures (N = 112) due to orthotopic liver transplant (n = 73), chronic pancreatitis (n = 35), or postoperative injury (n = 4), who were enrolled between April 2011 and September 2014 (with follow-up ending October 2015). Patients with a bile duct diameter less than 6 mm and those with an intact gallbladder in whom the cystic duct would be overlapped by a cSEMS were excluded.
Interventions: Patients (N = 112) were randomized to receive multiple plastic stents or a single cSEMS, stratified by stricture etiology and with endoscopic reassessment for resolution every 3 months (plastic stents) or every 6 months (cSEMS). Patients were followed up for 12 months after stricture resolution to assess for recurrence.
Main Outcomes and Measures: Primary outcome was stricture resolution after no more than 12 months of endoscopic therapy. The sample size was estimated based on the noninferiority of cSEMS to plastic stents, with a noninferiority margin of -15%.
Results: There were 55 patients in the plastic stent group (mean [SD] age, 57 [11] years; 17 women [31%]) and 57 patients in the cSEMS group (mean [SD] age, 55 [10] years; 19 women [33%]). Compared with plastic stents (41/48, 85.4%), the cSEMS resolution rate was 50 of 54 patients (92.6%), with a rate difference of 7.2% (1-sided 95% CI, -3.0% to ∞; P < .001). Given the prespecified noninferiority margin of -15%, the null hypothesis that cSEMS is less effective than plastic stents was rejected. The mean number of ERCPs to achieve resolution was lower for cSEMS (2.14) vs plastic (3.24; mean difference, 1.10; 95% CI, 0.74 to 1.46; P < .001).
Conclusions and Relevance: Among patients with benign biliary strictures and a bile duct diameter 6 mm or more in whom the covered metallic stent would not overlap the cystic duct, cSEMS were not inferior to multiple plastic stents after 12 months in achieving stricture resolution. Metallic stents should be considered an appropriate option in patients such as these.
Trial Registration: clinicaltrials.gov Identifier: NCT01221311
Triggs, G. J.; Fischer, M.; Stellinga, D.; Scullion, M. G.; Evans, G. J. O.; Krauss, T. F.
2015-01-01
By depositing a resolution test pattern on top of a Si3N4 photonic crystal resonant surface, we have measured the dependence of spatial resolution on refractive index contrast Δn. Our experimental results and finite-difference time-domain (FDTD) simulations at different refractive index contrasts show that the spatial resolution of our device degrades as the contrast is reduced, which is an important consideration in biosensing, where the contrast may be of order 10^-2. We also compare 1-D and 2-D gratings, taking into account different incidence polarizations, leading to a better understanding of the excitation and propagation of the resonant modes in these structures, as well as how this contributes to the spatial resolution. At Δn = 0.077, we observe resolutions of 2 and 6 μm parallel to and perpendicular to the grooves of a 1-D grating, respectively, and show that for polarized illumination of a 2-D grating, the resolution remains asymmetrical. Illumination of a 2-D grating at 45° results in symmetric resolution. At very low index contrast, the resolution worsens dramatically, particularly for Δn < 0.01, where we observe a resolution exceeding 10 μm for our device. In addition, we measure a reduction in the resonance linewidth as the index contrast becomes lower, corresponding to a longer resonant mode propagation length in the structure and contributing to the change in spatial resolution. PMID:26356353
Adaptive multi-resolution 3D Hartree-Fock-Bogoliubov solver for nuclear structure
NASA Astrophysics Data System (ADS)
Pei, J. C.; Fann, G. I.; Harrison, R. J.; Nazarewicz, W.; Shi, Yue; Thornton, S.
2014-08-01
Background: Complex many-body systems, such as triaxial and reflection-asymmetric nuclei, weakly bound halo states, cluster configurations, nuclear fragments produced in heavy-ion fusion reactions, cold Fermi gases, and pasta phases in neutron star crust, are all characterized by large sizes and complex topologies in which many geometrical symmetries characteristic of ground-state configurations are broken. A tool of choice to study such complex forms of matter is an adaptive multi-resolution wavelet analysis. This method has generated much excitement since it provides a common framework linking many diversified methodologies across different fields, including signal processing, data compression, harmonic analysis and operator theory, fractals, and quantum field theory.
Purpose: To describe complex superfluid many-fermion systems, we introduce an adaptive pseudospectral method for solving self-consistent equations of nuclear density functional theory in three dimensions, without symmetry restrictions.
Methods: The numerical method is based on the multi-resolution and computational harmonic analysis techniques with a multi-wavelet basis. The application of state-of-the-art parallel programming techniques include sophisticated object-oriented templates which parse the high-level code into distributed parallel tasks with a multi-thread task queue scheduler for each multi-core node. The internode communications are asynchronous. The algorithm is variational and is capable of solving coupled complex-geometric systems of equations adaptively, with functional and boundary constraints, in a finite spatial domain of very large size, limited by existing parallel computer memory. For smooth functions, user-defined finite precision is guaranteed.
Results: The new adaptive multi-resolution Hartree-Fock-Bogoliubov (HFB) solver madness-hfb is benchmarked against a two-dimensional coordinate-space solver hfb-ax that is based on the B-spline technique and a three-dimensional solver hfodd that is based on the harmonic-oscillator basis expansion. Several examples are considered, including the self-consistent HFB problem for spin-polarized trapped cold fermions and the Skyrme-Hartree-Fock (+BCS) problem for triaxial deformed nuclei.
Conclusions: The new madness-hfb framework has many attractive features when applied to nuclear and atomic problems involving many-particle superfluid systems. Of particular interest are weakly bound nuclear configurations close to particle drip lines, strongly elongated and dinuclear configurations such as those present in fission and heavy-ion fusion, and exotic pasta phases that appear in neutron star crust.
GPU-based parallel algorithm for blind image restoration using midfrequency-based methods
NASA Astrophysics Data System (ADS)
Xie, Lang; Luo, Yi-han; Bao, Qi-liang
2013-08-01
GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for GPU hardware architecture is of great significance. In order to solve the problem of high computational complexity and poor real-time performance in blind image restoration, the midfrequency-based algorithm for blind image restoration was analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is used to restore the image with hardly any recursion or iteration. Combining the algorithm with data intensiveness, data-parallel computing and the GPU execution model of single instruction, multiple threads, a new parallel midfrequency-based algorithm for blind image restoration is proposed in this paper, which is suitable for stream computing on a GPU. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and midfrequency-based filtering. Aiming at better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of data access and of the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data, to sustain the transmission rate and work around the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.
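The core frequency-domain filtering step can be sketched on the CPU with NumPy; this is a stand-in only — the paper's thread/grid decomposition and the band limits below are assumptions, not its actual values.

```python
# Hypothetical CPU sketch of midfrequency-based filtering: keep an annular
# band of spatial frequencies and suppress the rest. The paper's GPU version
# maps this per-frequency work onto device threads; here NumPy stands in.
import numpy as np

def midfrequency_filter(img, r_lo=0.05, r_hi=0.35):
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    r = np.hypot(fy, fx)                  # radial frequency of each bin
    band = (r >= r_lo) & (r <= r_hi)      # annular mid-frequency mask
    spec = np.fft.fft2(img)
    return np.real(np.fft.ifft2(spec * band))

blurred = np.random.rand(256, 256)        # stand-in for a degraded image
restored = midfrequency_filter(blurred)
```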
Hong, Ie-Hong; Liao, Yung-Cheng; Tsai, Yung-Feng
2013-11-05
The perfectly ordered parallel arrays of periodic Ce silicide nanowires can self-organize with atomic precision on single-domain Si(110)-16 × 2 surfaces. The growth evolution of self-ordered parallel Ce silicide nanowire arrays is investigated over a broad range of Ce coverages on single-domain Si(110)-16 × 2 surfaces by scanning tunneling microscopy (STM). Three different types of well-ordered parallel arrays, consisting of uniformly spaced and atomically identical Ce silicide nanowires, are self-organized through the heteroepitaxial growth of Ce silicides on a long-range grating-like 16 × 2 reconstruction at various Ce coverages. Each atomically precise Ce silicide nanowire consists of a bundle of chains and rows with different atomic structures. The atomic-resolution dual-polarity STM images reveal that the interchain coupling leads to the formation of the registry-aligned chain bundles within individual Ce silicide nanowires. The nanowire width and the interchain coupling can be adjusted systematically by varying the Ce coverage on a Si(110) surface. This natural template-directed self-organization of perfectly regular parallel nanowire arrays allows for the precise control of the feature size and positions within ±0.2 nm over a large area. Thus, it is a promising route to produce parallel nanowire arrays in a straightforward, low-cost, high-throughput process.
NASA Astrophysics Data System (ADS)
Omidi, Parsa; Diop, Mamadou; Carson, Jeffrey; Nasiriavanaki, Mohammadreza
2017-03-01
Linear-array-based photoacoustic computed tomography is a popular methodology for deep and high resolution imaging. However, issues such as phase aberration, side-lobe effects, and propagation limitations deteriorate the resolution. The effect of phase aberration due to acoustic attenuation and the assumption of a constant speed of sound (SoS) can be reduced by applying an adaptive weighting method such as the coherence factor (CF). Utilizing an adaptive beamforming algorithm such as minimum variance (MV) can improve the resolution at the focal point by eliminating the side-lobes. Moreover, the invisibility of directional objects emitting parallel to the detection plane, such as vessels and other absorbing structures stretched in the direction perpendicular to the detection plane, can degrade resolution. In this study, we propose a full-view array-level weighting algorithm in which different weights are assigned to different positions of the linear array based on an orientation algorithm that uses the histogram of oriented gradients (HOG). Simulation results obtained from a synthetic phantom show the superior performance of the proposed method over existing reconstruction methods.
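A minimal sketch of coherence-factor-weighted delay-and-sum beamforming, in our own notation rather than the authors' code (array sizes and delays below are illustrative): the CF is high where the delayed channel signals are in phase and suppresses side-lobes elsewhere.

```python
# Delay-and-sum with a coherence-factor (CF) weight for one image pixel.
import numpy as np

def das_cf(rf, delays, fs):
    """rf: (n_channels, n_samples) array; delays: per-channel delays in s."""
    n_ch, n_s = rf.shape
    idx = np.clip((delays * fs).astype(int), 0, n_s - 1)
    s = rf[np.arange(n_ch), idx]          # delayed sample from each channel
    coherent = np.abs(s.sum()) ** 2
    incoherent = n_ch * np.sum(np.abs(s) ** 2) + 1e-12
    cf = coherent / incoherent            # 1 for perfectly coherent signals
    return cf * s.sum()                   # CF-weighted beam sum

rf = np.random.randn(64, 2048)            # stand-in channel data
delays = np.full(64, 5e-6)                # hypothetical focal delays
pixel_value = das_cf(rf, delays, fs=40e6)
```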
Experimental study on microsphere assisted nanoscope in non-contact mode
NASA Astrophysics Data System (ADS)
Ling, Jinzhong; Li, Dancui; Liu, Xin; Wang, Xiaorui
2018-07-01
Microsphere-assisted nanoscopes have been proposed in the existing literature to capture super-resolution images of nano-structures beneath a microsphere attached to the sample surface. In this paper, a microsphere-assisted nanoscope working in non-contact mode is designed and demonstrated, in which the microsphere is held with a gap separating it from the sample surface. With a gap, the microsphere can be moved parallel to the sample surface non-invasively, so as to observe all areas of interest. Furthermore, the influence of gap size on image resolution is studied experimentally. Only when the microsphere is close enough to the sample surface can a super-resolution image be obtained. Generally, the resolution decreases as the gap increases, because the contribution of the evanescent wave disappears. To keep an appropriate gap size, a quantitative method is implemented to estimate the gap variation by observing Newton's rings around the microsphere, serving as real-time feedback for tuning the gap size. With a constant gap, a large-area image with high resolution can be obtained during microsphere scanning. Our study of the non-contact mode makes the microsphere-assisted nanoscope more practicable and easier to implement.
Hurricane Forecasting with the High-resolution NASA Finite-volume General Circulation Model
NASA Technical Reports Server (NTRS)
Atlas, R.; Reale, O.; Shen, B.-W.; Lin, S.-J.; Chern, J.-D.; Putman, W.; Lee, T.; Yeh, K.-S.; Bosilovich, M.; Radakovich, J.
2004-01-01
A high-resolution finite-volume General Circulation Model (fvGCM), resulting from a development effort of more than ten years, is now being run operationally at the NASA Goddard Space Flight Center and Ames Research Center. The model is based on a finite-volume dynamical core with terrain-following Lagrangian control-volume discretization and performs efficiently on massive parallel architectures. The computational efficiency allows simulations at a resolution of a quarter of a degree, which is double the resolution currently adopted by most global models in operational weather centers. Such fine global resolution brings us closer to overcoming a fundamental barrier in global atmospheric modeling for both weather and climate, because tropical cyclones and even tropical convective clusters can be more realistically represented. In this work, preliminary results of the fvGCM are shown. Fifteen simulations of four Atlantic tropical cyclones in 2002 and 2004 are chosen because of strong and varied difficulties presented to numerical weather forecasting. It is shown that the fvGCM, run at the resolution of a quarter of a degree, can produce very good forecasts of these tropical systems, adequately resolving problems like erratic track, abrupt recurvature, intense extratropical transition, multiple landfall and reintensification, and interaction among vortices.
Circuit for high resolution decoding of multi-anode microchannel array detectors
NASA Technical Reports Server (NTRS)
Kasle, David B. (Inventor)
1995-01-01
A circuit for high resolution decoding of multi-anode microchannel array detectors consisting of input registers accepting transient inputs from the anode array; anode encoding logic circuits connected to the input registers; midpoint pipeline registers connected to the anode encoding logic circuits; and pixel decoding logic circuits connected to the midpoint pipeline registers is described. A high resolution algorithm circuit operates in parallel with the pixel decoding logic circuit and computes a high resolution least significant bit to enhance the multianode microchannel array detector's spatial resolution by halving the pixel size and doubling the number of pixels in each axis of the anode array. A multiplexer is connected to the pixel decoding logic circuit and allows a user selectable pixel address output according to the actual multi-anode microchannel array detector anode array size. An output register concatenates the high resolution least significant bit onto the standard ten bit pixel address location to provide an eleven bit pixel address, and also stores the full eleven bit pixel address. A timing and control state machine is connected to the input registers, the anode encoding logic circuits, and the output register for managing the overall operation of the circuit.
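The address-concatenation step described above reduces to a single bit operation; the sketch below assumes the 10-bit coarse pixel address and the computed high-resolution least significant bit are already available (the function name is illustrative).

```python
# Concatenate the high-resolution LSB onto a 10-bit pixel address, yielding
# the 11-bit address and halving the effective pixel size along one axis.
def full_pixel_address(coarse_10bit: int, hi_res_lsb: int) -> int:
    assert 0 <= coarse_10bit < 1024 and hi_res_lsb in (0, 1)
    return (coarse_10bit << 1) | hi_res_lsb

print(full_pixel_address(0b1011001110, 1))  # -> an 11-bit address
```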
Neuropharmacology of vision in goldfish: a review.
Mora-Ferrer, Carlos; Neumeyer, Christa
2009-05-01
The goldfish is one of the few animals exceptionally well analyzed in behavioral experiments and also in electrophysiological and neuroanatomical investigations of the retina. To get insight into the functional organization of the retina we studied color vision, motion detection and temporal resolution before and after intra-ocular injection of neuropharmaca with known effects on retinal neurons. Bicuculline, strychnine, curare, atropine, and dopamine D1- and D2-receptor antagonists were used. The results reviewed here indicate separate and parallel processing of L-cone contribution to different visual functions, and the influence of several neurotransmitters (dopamine, acetylcholine, glycine, and GABA) on motion vision, color vision, and temporal resolution.
Quantum key distribution with 1.25 Gbps clock synchronization.
Bienfang, J; Gross, A; Mink, A; Hershman, B; Nakassis, A; Tang, X; Lu, R; Su, D; Clark, Charles; Williams, Carl; Hagley, E; Wen, Jesse
2004-05-03
We have demonstrated the exchange of sifted quantum cryptographic key over a 730 meter free-space link at rates of up to 1.0 Mbps, two orders of magnitude faster than previously reported results. A classical channel at 1550 nm operates in parallel with a quantum channel at 845 nm. Clock recovery techniques on the classical channel at 1.25 Gbps enable quantum transmission at up to the clock rate. System performance is currently limited by the timing resolution of our silicon avalanche photodiode detectors. With improved detector resolution, our technique will yield another order of magnitude increase in performance, with existing technology.
NASA Technical Reports Server (NTRS)
Jerebets, Sergei
2004-01-01
We report our recent experiments on thermal conductivity measurements of superfluid He-4 near its phase transition in a two-dimensional (2D) confinement under saturated vapor pressure. A 2D confinement is created by 2-mm- and 1-mm-thick glass capillary plates, consisting of densely populated parallel microchannels with cross-sections of 5 x 50 and 1 x 10 microns, respectively. A heat current (2 < Q < 400 nW/cm²) was applied along the long direction of the channels. High-resolution measurements were provided by DC SQUID-based high-resolution paramagnetic salt thermometers (HRTs) with nanokelvin resolution. We may find that the thermal conductivity of confined helium remains finite at the bulk superfluid transition temperature. Our 2D results will be compared with those in a bulk and 1D confinement.
NASA Astrophysics Data System (ADS)
Jiang, Yao; Li, Tie-Min; Wang, Li-Ping
2015-09-01
This paper investigates the stiffness modeling of compliant parallel mechanism (CPM) based on the matrix method. First, the general compliance matrix of a serial flexure chain is derived. The stiffness modeling of CPMs is next discussed in detail, considering the relative positions of the applied load and the selected displacement output point. The derived stiffness models have simple and explicit forms, and the input, output, and coupling stiffness matrices of the CPM can easily be obtained. The proposed analytical model is applied to the stiffness modeling and performance analysis of an XY parallel compliant stage with input and output decoupling characteristics. Then, the key geometrical parameters of the stage are optimized to obtain the minimum input decoupling degree. Finally, a prototype of the compliant stage is developed and its input axial stiffness, coupling characteristics, positioning resolution, and circular contouring performance are tested. The results demonstrate the excellent performance of the compliant stage and verify the effectiveness of the proposed theoretical model. The general stiffness models provided in this paper will be helpful for performance analysis, especially in determining coupling characteristics, and the structure optimization of the CPM.
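As a hedged illustration of the matrix method for a serial flexure chain (planar case only; hinge positions and compliances below are made up, and this is not the paper's formulation), each flexure's local compliance can be mapped to the tool frame with an adjoint transform and summed, since serial compliances add.

```python
# Planar compliance-matrix sketch: twists are (vx, vy, w), wrenches (fx, fy, m).
import numpy as np

def adjoint_planar(rx, ry):
    """Adjoint mapping a twist at a flexure to the tool frame;
    (rx, ry) is the vector from the flexure to the tool point."""
    return np.array([[1.0, 0.0, -ry],
                     [0.0, 1.0,  rx],
                     [0.0, 0.0,  1.0]])

def chain_compliance(flexures):
    """flexures: list of (local 3x3 compliance, (rx, ry) offset to tool)."""
    C = np.zeros((3, 3))
    for C_i, (rx, ry) in flexures:
        Ad = adjoint_planar(rx, ry)
        C += Ad @ C_i @ Ad.T               # serial compliances add
    return C

# Two notch hinges, compliant mainly in rotation (1/k_theta on the diagonal):
C_hinge = np.diag([1e-8, 1e-8, 1e-3])      # m/N, m/N, rad/(N*m)
C_total = chain_compliance([(C_hinge, (0.05, 0.0)),
                            (C_hinge, (0.02, 0.0))])
K_input = np.linalg.inv(C_total)           # stiffness seen at the tool point
```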
An automated workflow for parallel processing of large multiview SPIM recordings
Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel
2016-01-01
Summary: Selective Plane Illumination Microscopy (SPIM) makes it possible to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing, performed interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. Availability and implementation: The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary information: Supplementary data are available at Bioinformatics online.
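The parallelization pattern the workflow exploits is simple to sketch: each time point is independent, so processing maps cleanly onto a pool of workers (on a cluster, snakemake dispatches one job per step; the function and file layout below are hypothetical placeholders, not the pipeline's actual steps).

```python
# Illustrative per-timepoint parallelism; process_timepoint is a stand-in
# for the registration/fusion of one SPIM time point.
from concurrent.futures import ProcessPoolExecutor

def process_timepoint(t: int) -> str:
    return f"timepoint_{t:04d}_fused.tif"   # placeholder output name

if __name__ == "__main__":
    timepoints = range(240)                  # e.g. a 240-frame time lapse
    with ProcessPoolExecutor(max_workers=8) as pool:
        outputs = list(pool.map(process_timepoint, timepoints))
    print(f"{len(outputs)} time points processed independently")
```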
NASA Technical Reports Server (NTRS)
Keppenne, Christian L.; Rienecker, Michele M.; Koblinsky, Chester (Technical Monitor)
2001-01-01
A multivariate ensemble Kalman filter (MvEnKF) implemented on a massively parallel computer architecture has been developed for the Poseidon ocean circulation model and tested with a Pacific Basin model configuration. There are about two million prognostic state-vector variables. Parallelism for the data assimilation step is achieved by regionalization of the background-error covariances that are calculated from the phase-space distribution of the ensemble. Each processing element (PE) collects elements of a matrix measurement functional from nearby PEs. To avoid the introduction of spurious long-range covariances associated with finite ensemble sizes, the background-error covariances are given compact support by means of a Hadamard (element by element) product with a three-dimensional canonical correlation function. The methodology and the MvEnKF configuration are discussed. It is shown that the regionalization of the background covariances has a negligible impact on the quality of the analyses. The parallel algorithm is very efficient for large numbers of observations but does not scale well beyond 100 PEs at the current model resolution. On a platform with distributed memory, memory rather than speed is the limiting factor.
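The compact-support trick is easy to sketch in one dimension: the rank-deficient sample covariance from a small ensemble is tapered by a Hadamard product with a compactly supported correlation function, zeroing spurious long-range covariances. The linear taper below is a simple stand-in, not the paper's canonical correlation function.

```python
# Covariance localization by Hadamard product (1-D toy example).
import numpy as np

n, m = 200, 20                            # state size, ensemble size
x = np.linspace(0.0, 100.0, n)            # 1-D grid coordinates
ens = np.random.randn(n, m)               # ensemble of state vectors
anom = ens - ens.mean(axis=1, keepdims=True)
P = anom @ anom.T / (m - 1)               # rank-deficient sample covariance

L = 15.0                                  # localization length scale
dist = np.abs(x[:, None] - x[None, :])
taper = np.clip(1.0 - dist / L, 0.0, 1.0) # compact support: zero beyond L
P_loc = P * taper                         # Hadamard-product localization
```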
NASA Astrophysics Data System (ADS)
Noh, S. J.; Lee, J. H.; Lee, S.; Zhang, Y.; Seo, D. J.
2017-12-01
Hurricane Harvey was one of the most extreme weather events in Texas history and left significant damages in the Houston and adjoining coastal areas. To understand better the relative impact to urban flooding of extreme amount and spatial extent of rainfall, unique geography, land use and storm surge, high-resolution water modeling is necessary such that natural and man-made components are fully resolved. In this presentation, we reconstruct spatiotemporal evolution of inundation during Hurricane Harvey using hyper-resolution modeling and quantitative image reanalysis. The two-dimensional urban flood model used is based on dynamic wave approximation and 10 m-resolution terrain data, and is forced by the radar-based multisensor quantitative precipitation estimates. The model domain includes Buffalo, Brays, Greens and White Oak Bayous in Houston. The model is simulated using hybrid parallel computing. To evaluate dynamic inundation mapping, we combine various qualitative crowdsourced images and video footages with LiDAR-based terrain data.
NASA Astrophysics Data System (ADS)
Barajas-Solano, D. A.; Tartakovsky, A. M.
2017-12-01
We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine scale) flow and transport models with lower resolution (coarse) models to locally refine both spatial resolution and transport models. The fine scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what is possible with state-of-the-art AMR techniques, and the capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.
High resolution ultrasonic spectroscopy system for nondestructive evaluation
NASA Technical Reports Server (NTRS)
Chen, C. H.
1991-01-01
With increased demand for high resolution ultrasonic evaluation, computer based systems or work stations become essential. The ultrasonic spectroscopy method of nondestructive evaluation (NDE) was used to develop a high resolution ultrasonic inspection system supported by modern signal processing, pattern recognition, and neural network technologies. The completed basic system consists of a 386/20 MHz PC (IBM AT compatible), a pulser/receiver, a digital oscilloscope with serial and parallel communications to the computer, an immersion tank with motor control of X-Y axis movement, and the supporting software package, IUNDE, for interactive ultrasonic evaluation. Although the hardware components are commercially available, the software development is entirely original. By integrating signal processing, pattern recognition, maximum entropy spectral analysis, and artificial neural network functions into the system, many NDE tasks can be performed. The high resolution graphics capability provides visualization of complex NDE problems. The phase 3 efforts involve intensive marketing of the software package and collaborative work with industrial sectors.
Three-dimensional laser microvision.
Shimotahira, H; Iizuka, K; Chu, S C; Wah, C; Costen, F; Yoshikuni, Y
2001-04-10
A three-dimensional (3-D) optical imaging system offering high resolution in all three dimensions, requiring minimum manipulation and capable of real-time operation, is presented. The system derives its capabilities from use of the superstructure grating laser source in the implementation of a laser step frequency radar for depth information acquisition. A synthetic aperture radar technique was also used to further enhance its lateral resolution as well as extend the depth of focus. High-speed operation was made possible by a dual computer system consisting of a host and a remote microcomputer supported by a dual-channel Small Computer System Interface parallel data transfer system. The system is capable of operating near real time. The 3-D display of a tunneling diode, a microwave integrated circuit, and a see-through image taken by the system operating near real time are included. The depth resolution is 40 μm; lateral resolution with a synthetic aperture approach is a fraction of a micrometer and that without it is approximately 10 μm.
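Step-frequency ranging, the mechanism behind the depth axis, can be sketched as follows: sampling the return at N equally spaced frequencies and inverse-FFTing gives a depth profile with resolution c/(2B), where B is the total swept bandwidth. The step size and reflector depths below are made up for illustration, not the instrument's parameters.

```python
# Toy step-frequency ranging: two reflectors resolved after an IFFT.
import numpy as np

c = 3e8
N, df = 256, 50e9                          # 256 steps of 50 GHz (assumed)
B = N * df                                 # total swept bandwidth
print("depth resolution ~", c / (2 * B))   # ~11.7 micrometers here

f = np.arange(N) * df
depths = [200e-6, 240e-6]                  # two reflectors, 40 um apart
sig = sum(np.exp(-1j * 4 * np.pi * f * d / c) for d in depths)
profile = np.abs(np.fft.ifft(sig))         # peaks at the reflector depths
```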
Using parallel computing methods to improve log surface defect detection methods
R. Edward Thomas; Liya Thomas
2013-01-01
Determining the size and location of surface defects is crucial to evaluating the potential yield and value of hardwood logs. Recently a surface defect detection algorithm was developed using the Java language. This algorithm was developed around an earlier laser scanning system that had poor resolution along the length of the log (15 scan lines per foot). A newer...
Parallel Force Assay for Protein-Protein Interactions
Aschenbrenner, Daniela; Pippig, Diana A.; Klamecka, Kamila; Limmer, Katja; Leonhardt, Heinrich; Gaub, Hermann E.
2014-01-01
Quantitative proteome research is greatly promoted by high-resolution parallel format assays. A characterization of protein complexes based on binding forces offers an unparalleled dynamic range and allows for the effective discrimination of non-specific interactions. Here we present a DNA-based Molecular Force Assay to quantify protein-protein interactions, namely the bond between different variants of GFP and GFP-binding nanobodies. We present different strategies to adjust the maximum sensitivity window of the assay by influencing the binding strength of the DNA reference duplexes. The binding of the nanobody Enhancer to the different GFP constructs is compared at high sensitivity of the assay. Whereas the binding strength to wild type and enhanced GFP are equal within experimental error, stronger binding to superfolder GFP is observed. This difference in binding strength is attributed to alterations in the amino acids that form contacts according to the crystal structure of the initial wild type GFP-Enhancer complex. Moreover, we outline the potential for large-scale parallelization of the assay.
Design of k-Space Channel Combination Kernels and Integration with Parallel Imaging
Beatty, Philip J.; Chang, Shaorong; Holmes, James H.; Wang, Kang; Brau, Anja C. S.; Reeder, Scott B.; Brittain, Jean H.
2014-01-01
Purpose In this work, a new method is described for producing local k-space channel combination kernels using a small amount of low-resolution multichannel calibration data. Additionally, this work describes how these channel combination kernels can be combined with local k-space unaliasing kernels produced by the calibration phase of parallel imaging methods such as GRAPPA, PARS and ARC. Methods Experiments were conducted to evaluate both the image quality and computational efficiency of the proposed method compared to a channel-by-channel parallel imaging approach with image-space sum-of-squares channel combination. Results Results indicate comparable image quality overall, with some very minor differences seen in reduced field-of-view imaging. It was demonstrated that this method enables a speed up in computation time on the order of 3–16X for 32-channel data sets. Conclusion The proposed method enables high quality channel combination to occur earlier in the reconstruction pipeline, reducing computational and memory requirements for image reconstruction.
Parallel detection experiment of fluorescence confocal microscopy using DMD.
Wang, Qingqing; Zheng, Jihong; Wang, Kangni; Gui, Kun; Guo, Hanming; Zhuang, Songlin
2016-05-01
Parallel detection of fluorescence confocal microscopy (PDFCM) based on a Digital Micromirror Device (DMD) is reported in this paper, in order to realize simultaneous multi-channel imaging and improve detection speed. The DMD is added into the PDFCM system, taking the place of the traditional single pinhole in the confocal system and dividing the laser source into multiple excitation beams. The DMD-based PDFCM imaging system is experimentally set up. A multi-channel image of the fluorescence signal from a potato cell sample is detected by parallel lateral scanning to verify the feasibility of introducing the DMD into a fluorescence confocal microscope. In addition, for the purpose of characterizing the microscope, the depth response curve is also acquired. The experimental results show that, in contrast to conventional microscopy, the DMD-based PDFCM system has higher axial resolution and faster detection speed, which may bring some potential benefits to biological and medical analysis.
Parallel ptychographic reconstruction
Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; ...
2014-12-19
Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and the detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source.
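The iterative phasing kernel that gets distributed across GPUs and scan positions can be sketched with a single-position ePIE-style object update; this is a generic textbook step under toy inputs, not the authors' pipeline.

```python
# One ePIE-style object update: enforce the measured diffraction amplitude,
# then update the object patch under the current probe estimate.
import numpy as np

def epie_object_update(obj_patch, probe, measured_amp, alpha=1.0):
    psi = probe * obj_patch                          # exit wave
    Psi = np.fft.fft2(psi)
    Psi = measured_amp * np.exp(1j * np.angle(Psi))  # keep phase, fix modulus
    psi_new = np.fft.ifft2(Psi)
    upd = alpha * np.conj(probe) / (np.abs(probe).max() ** 2 + 1e-12)
    return obj_patch + upd * (psi_new - psi)

probe = np.exp(-np.linspace(-2, 2, 64)[:, None] ** 2
               - np.linspace(-2, 2, 64)[None, :] ** 2).astype(complex)
obj_patch = np.ones((64, 64), dtype=complex)
measured_amp = np.abs(np.fft.fft2(probe))            # toy "measurement"
obj_patch = epie_object_update(obj_patch, probe, measured_amp)
```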
Inversion of potential field data using the finite element method on parallel computers
NASA Astrophysics Data System (ADS)
Gross, L.; Altinay, C.; Shaw, S.
2015-11-01
In this paper we present a formulation of the joint inversion of potential field anomaly data as an optimization problem with partial differential equation (PDE) constraints. The problem is solved using the iterative Broyden-Fletcher-Goldfarb-Shanno (BFGS) method with the Hessian operator of the regularization and cross-gradient component of the cost function as preconditioner. We will show that each iterative step requires the solution of several PDEs namely for the potential fields, for the adjoint defects and for the application of the preconditioner. In extension to the traditional discrete formulation the BFGS method is applied to continuous descriptions of the unknown physical properties in combination with an appropriate integral form of the dot product. The PDEs can easily be solved using standard conforming finite element methods (FEMs) with potentially different resolutions. For two examples we demonstrate that the number of PDE solutions required to reach a given tolerance in the BFGS iteration is controlled by weighting regularization and cross-gradient but is independent of the resolution of PDE discretization and that as a consequence the method is weakly scalable with the number of cells on parallel computers. We also show a comparison with the UBC-GIF GRAV3D code.
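A tiny analogue of the optimization loop described above can be written with SciPy's L-BFGS in place of the preconditioned BFGS on PDE solutions; the forward operator, data, and regularization weight below are toy stand-ins, not the paper's PDE-constrained setup.

```python
# Minimize data misfit plus a first-difference smoothness regularizer.
import numpy as np
from scipy.optimize import minimize

G = np.random.randn(30, 100)              # toy linear forward operator
d_obs = G @ np.ones(100)                  # synthetic observed anomaly
mu = 1e-2                                 # regularization weight

def cost_and_grad(m):
    r = G @ m - d_obs                     # data residual
    dm = np.diff(m)                       # smoothness term
    J = 0.5 * r @ r + 0.5 * mu * dm @ dm
    g = G.T @ r
    g[:-1] -= mu * dm                     # gradient of the regularizer
    g[1:] += mu * dm
    return J, g

res = minimize(cost_and_grad, np.zeros(100), jac=True, method="L-BFGS-B")
print("converged:", res.success, "misfit:", res.fun)
```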
NASA Astrophysics Data System (ADS)
von Zanthier, Christoph; Holl, Peter; Kemmer, Josef; Lechner, Peter; Maier, B.; Soltau, Heike; Stoetter, R.; Braeuninger, Heinrich W.; Dennerl, Konrad; Haberl, Frank; Hartmann, R.; Hartner, Gisela D.; Hippmann, H.; Kastelic, E.; Kink, W.; Krause, N.; Meidinger, Norbert; Metzner, G.; Pfeffermann, Elmar; Popp, M.; Reppin, Claus; Stoetter, Diana; Strueder, Lothar; Truemper, Joachim; Weber, U.; Carathanassis, D.; Engelhard, S.; Gebhart, Th.; Hauff, D.; Lutz, G.; Richter, R. H.; Seitz, H.; Solc, P.; Bihler, Edgar; Boettcher, H.; Kendziorra, Eckhard; Kraemer, J.; Pflueger, Bernhard; Staubert, Ruediger
1998-04-01
The concept and performance of the fully depleted pn-junction CCD system, developed for the European XMM and the German ABRIXAS satellite missions for soft x-ray imaging and spectroscopy in the 0.1 keV to 15 keV photon range, is presented. The 58 mm X 60 mm large pn-CCD array uses pn-junctions for the registers and for the backside instead of MOS registers. This concept naturally allows the detector volume to be fully depleted, making it an efficient detector for photons with energies up to 15 keV. For high detection efficiency in the soft x-ray region down to 100 eV, an ultrathin pn-CCD backside deadlayer has been realized. Each pn-CCD channel is equipped with an on-chip JFET amplifier which, in combination with the CAMEX amplifier and multiplexing chip, facilitates parallel readout with a pixel read rate of 3 MHz and an electronic noise floor of ENC < e-. With the complete parallel readout, very fast pn-CCD readout modes can be implemented in the system, which allow for high resolution photon spectroscopy of even the brightest x-ray sources in the sky.
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; ...
2017-11-27
Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation and from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach and paper, the theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also show the new approach's capabilities, by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work. Finally, these two systems comprise an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
NASA Astrophysics Data System (ADS)
Cossart-Magos, Claudina; Launay, Françoise; Parkin, James E.
The absorption spectrum of CO2 gas between 175 and 200 nm was photographed at high resolution some years ago. This very weak spectral region proved to be extremely rich in bands showing rotational fine structure. In Part 1 [C. Cossart-Magos, F. Launay, J. E. Parkin, Mol. Phys., 75, 835 (1992)], nine perpendicular-type bands were assigned to the lowest singlet-singlet transition, 1¹A₂ ← X̃¹Σg⁺, in combination with the ν′3 (b2) vibration. Here, the parallel-type bands observed at 185.7 and 175.6 nm are assigned to the lowest triplet-singlet transition, 1³B₂ ← X̃¹Σg⁺, in combination with the ν′2 (a1) vibration. The assignment and the rotational and spin constant values obtained are discussed in relation to previous experimental data and ab initio calculation results on the lowest excited states of CO2. The actual role of the 1³B₂ state in CO2 photodissociation, O(³P)+CO(X¹Σ⁺) recombination, and O(¹D) emission quenching by CO(X) molecules is reviewed.
NASA Astrophysics Data System (ADS)
Pourteau, Marie-Line; Servin, Isabelle; Lepinay, Kévin; Essomba, Cyrille; Dal'Zotto, Bernard; Pradelles, Jonathan; Lattard, Ludovic; Brandt, Pieter; Wieland, Marco
2016-03-01
The emerging Massively Parallel-Electron Beam Direct Write (MP-EBDW) is an attractive high resolution high throughput lithography technology. As previously shown, Chemically Amplified Resists (CARs) meet process/integration specifications in terms of dose-to-size, resolution, contrast, and energy latitude. However, they are still limited by their line width roughness. To overcome this issue, we tested an alternative advanced non-CAR and showed it brings a substantial gain in sensitivity compared to CAR. We also implemented and assessed in-line post-lithographic treatments for roughness mitigation. For outgassing-reduction purpose, a top-coat layer is added to the total process stack. A new generation top-coat was tested and showed improved printing performances compared to the previous product, especially avoiding dark erosion: SEM cross-section showed a straight pattern profile. A spin-coatable charge dissipation layer based on conductive polyaniline has also been tested for conductivity and lithographic performances, and compatibility experiments revealed that the underlying resist type has to be carefully chosen when using this product. Finally, the Process Of Reference (POR) trilayer stack defined for 5 kV multi-e-beam lithography was successfully etched with well opened and straight patterns, and no lithography-etch bias.
Data for polarization in charmless B→φK*: A signal for new physics?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Prasanta Kumar; Yang, K.-C.
2005-05-01
The recent observations of sizable transverse fractions of B→φK* may hint at the existence of new physics. We analyze all possible new-physics four-quark operators and find that two classes of new-physics operators could offer resolutions to the B→φK* polarization anomaly. The operators in the first class have structures (1−γ5)⊗(1−γ5) and σ(1−γ5)⊗σ(1−γ5), and in the second class (1+γ5)⊗(1+γ5) and σ(1+γ5)⊗σ(1+γ5). For each class, the new-physics effects can be lumped into a single parameter. Two possible experimental results for the polarization phases, arg(A⊥)−arg(A∥) ≈ π or 0, originating from the phase ambiguity in the data, could be separately accounted for by our two new-physics scenarios: the first (second) scenario with the first (second) class of new-physics operators. The consistency between the data and our new-physics analysis suggests a small new-physics weak phase, together with a large(r) strong phase. We obtain sizable transverse fractions Λ∥∥ + Λ⊥⊥ ≈ Λ00, in accordance with the observations. We find Λ∥∥ ≈ 0.8Λ⊥⊥ in the first scenario but Λ∥∥ ≳ Λ⊥⊥ in the second scenario. We discuss the impact of the new-physics weak phase on observations.
A general CFD framework for fault-resilient simulations based on multi-resolution information fusion
NASA Astrophysics Data System (ADS)
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2017-10-01
We develop a general CFD framework for multi-resolution simulations to target multiscale problems but also resilience in exascale simulations, where faulty processors may lead to gappy, in space-time, simulated fields. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g. coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained with fine and low resolution assuming gappy data sets. We investigate the influence of various parameters for this framework, including the correlation kernel, the size of a buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g. in continuum-atomistic simulations.
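The statistical-learning step can be illustrated with a standard Gaussian process in place of the paper's coKriging machinery: fit the GP to coarse, gappy samples of a field and use it to estimate the boundary values a fine-resolution "patch" needs, with uncertainty for free. The field and patch below are synthetic.

```python
# GP stand-in for coKriging-based boundary-condition estimation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

x_coarse = np.linspace(0.0, 1.0, 12)[:, None]       # sparse coarse samples
y_coarse = np.sin(2 * np.pi * x_coarse).ravel()      # synthetic field values

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2))
gp.fit(x_coarse, y_coarse)

# Estimated Dirichlet boundary conditions for a fine patch on [0.4, 0.6]:
x_bc = np.array([[0.4], [0.6]])
bc_mean, bc_std = gp.predict(x_bc, return_std=True)
print("patch BCs:", bc_mean, "+/-", bc_std)
```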
The Role of Nonlinear Gradients in Parallel Imaging: A k-Space Based Analysis.
Galiana, Gigi; Stockmann, Jason P; Tam, Leo; Peters, Dana; Tagare, Hemant; Constable, R Todd
2012-09-01
Sequences that encode the spatial information of an object using nonlinear gradient fields are a new frontier in MRI, with potential to provide lower peripheral nerve stimulation, windowed fields of view, tailored spatially-varying resolution, curved slices that mirror physiological geometry, and, most importantly, very fast parallel imaging with multichannel coils. The acceleration for multichannel images is generally explained by the fact that curvilinear gradient isocontours better complement the azimuthal spatial encoding provided by typical receiver arrays. However, the details of this complementarity have been more difficult to specify. We present a simple and intuitive framework for describing the mechanics of image formation with nonlinear gradients, and we use this framework to review some of the main classes of nonlinear encoding schemes.
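A small sketch of the k-space picture involved: with a gradient field phi(r), linear or nonlinear, each acquired sample is an inner product of the coil-weighted object with exp(-i k phi(r)); swapping the linear ramp for a quadratic field changes the encoding isocontours from straight lines to curves. The 1-D object and coil profile here are synthetic stand-ins, not the paper's framework.

```python
# Encoding-matrix view of (non)linear gradient encoding in 1-D.
import numpy as np

n = 128
x = np.linspace(-1.0, 1.0, n)
obj = (np.abs(x) < 0.5).astype(float)     # toy object
coil = np.exp(-(x - 0.3) ** 2 / 0.1)      # one receive-coil sensitivity

def encode(phi, n_k=64):
    """Samples s_k = sum_r coil(r) * obj(r) * exp(-i*pi*k*phi(r))."""
    ks = np.arange(-n_k // 2, n_k // 2)
    E = np.exp(-1j * np.pi * np.outer(ks, phi))
    return E @ (coil * obj)

s_linear = encode(x)                      # conventional linear encoding
s_quad = encode(x ** 2)                   # nonlinear (quadratic) encoding
print(s_linear.shape, s_quad.shape)
```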
The Observing Modes of JWST/NIRISS
NASA Astrophysics Data System (ADS)
Taylor, Joanna M.; NIRISS Team
2018-06-01
The Near Infrared Imager and Slitless Spectrograph (NIRISS) is a contribution of the Canadian Space Agency to the James Webb Space Telescope (JWST). NIRISS complements the other near-infrared science instruments onboard JWST by providing capabilities for (a) low resolution grism spectroscopy between 0.8 and 2.2 µm over the entire field of view, with the possibility of observing the same scene with orthogonal dispersion directions to disentangle blended objects; (b) medium-resolution grism spectroscopy between 0.6 and 2.8 µm that has been optimized to provide high spectrophotometric stability for time-series observations of transiting exoplanets; (c) aperture masking interferometry that provides high angular resolution of 70 - 400 mas at wavelengths between 2.8 and 4.8 µm; and (d) parallel imaging through a set of filters that are closely matched to NIRCam's. In this poster, we discuss each of these modes and present simulations of how they might typically be used to address specific scientific questions.
Acute organic brain syndrome due to drug-induced eosinophilia.
Ng, S C; Lee, M K; Teh, A
1989-11-01
A 72 year old man developed acute organic brain syndrome associated with marked eosinophilia following self-medication with a variety of drugs. Investigations revealed no other known causes of eosinophilia. Withdrawal of the drugs resulted in a dramatic drop in the eosinophil count, paralleled by clinical resolution of the neurological problems. To our knowledge drug-induced eosinophilia has not previously been associated with acute organic brain syndrome.
NASA Technical Reports Server (NTRS)
Egen, N. B.; Twitty, G. E.; Bier, M.
1979-01-01
Isoelectric focusing is a high-resolution technique for separating and purifying large peptides, proteins, and other biomolecules. The apparatus described in the present paper constitutes a new approach to fluid stabilization and increased throughput. Stabilization is achieved by flowing the process fluid uniformly through an array of closely spaced filter elements oriented parallel both to the electrodes and the direction of the flow. This seems to overcome the major difficulties of parabolic flow and electroosmosis at the walls, while limiting the convection to chamber compartments defined by adjacent spacers. Increased throughput is achieved by recirculating the process fluid through external heat exchange reservoirs, where the Joule heat is dissipated.
Development of ATHENA mirror modules
NASA Astrophysics Data System (ADS)
Collon, Maximilien J.; Vacanti, Giuseppe; Barrière, Nicolas M.; Landgraf, Boris; Günther, Ramses; Vervest, Mark; van der Hoeven, Roy; Dekker, Danielle; Chatbi, Abdel; Girou, David; Sforzini, Jessica; Beijersbergen, Marco W.; Bavdaz, Marcos; Wille, Eric; Fransen, Sebastiaan; Shortt, Brian; Haneveld, Jeroen; Koelewijn, Arenda; Booysen, Karin; Wijnperle, Maurice; van Baren, Coen; Eigenraam, Alexander; Müller, Peter; Krumrey, Michael; Burwitz, Vadim; Pareschi, Giovanni; Massahi, Sonny; Christensen, Finn E.; Della Monica Ferreira, Desirée.; Valsecchi, Giuseppe; Oliver, Paul; Checquer, Ian; Ball, Kevin; Zuknik, Karl-Heinz
2017-08-01
Silicon Pore Optics (SPO), developed at cosine with the European Space Agency (ESA) and several academic and industrial partners, provides lightweight, yet stiff, high-resolution x-ray optics. This technology enables ATHENA to reach an unprecedentedly large effective area in the 0.2 - 12 keV band with an angular resolution better than 5''. After developing the technology for 50 m and 20 m focal length, this year has witnessed the first 12 m focal length mirror modules being produced. The technology development is also gaining momentum with three different radii under study: mirror modules for the inner radii (Rmin = 250 mm), outer radii (Rmax = 1500 mm) and middle radii (Rmid = 737 mm) are being developed in parallel.
A Fresnel zone plate collimator: potential and aberrations
NASA Astrophysics Data System (ADS)
Menz, Benedikt; Bräuninger, Heinrich; Burwitz, Vadim; Hartner, Gisela; Predehl, Peter
2015-09-01
A collimator that parallelizes an X-ray beam provides a significant improvement of the metrology used to characterize X-ray optics for space instruments at MPE's PANTER X-ray test facility. A Fresnel zone plate was selected as the collimating optic, as it combines a good angular resolution (< 0.1'') with a large active area (> 10 cm²). Such an optic is ideally suited to illuminate Silicon Pore Optic (SPO) modules as proposed for ATHENA. This paper provides the theoretical description of such a Fresnel zone plate, especially considering resolution and efficiency. Based on these theoretical results, the collimator setup performance is analyzed and requirements for fabrication and alignment are calculated.
Use of high-resolution ground-penetrating radar in kimberlite delineation
Kruger, J.M.; Martinez, A.; Berendsen, P.
1997-01-01
High-resolution ground-penetrating radar (GPR) was used to image the near-surface extent of two exposed Late Cretaceous kimberlites intruded into lower Permian limestone and dolomite host rocks in northeast Kansas. Six parallel GPR profiles identify the margin of the Randolph 1 kimberlite by the up-bending and termination of limestone reflectors. Five radially-intersecting GPR profiles identify the elliptical margin of the Randolph 2 kimberlite by the termination of dolomite reflectors near or below the kimberlite's mushroom-shaped cap. These results suggest GPR may augment magnetic methods for the delineation of kimberlites or other forceful intrusions in a layered host rock where thick, conductive soil or shale is not present at the surface.
Abdellah, Marwan; Eldeib, Ayman; Owis, Mohamed I
2015-01-01
This paper features an advanced implementation of the X-ray rendering algorithm that harnesses the giant computing power of current commodity graphics processors to accelerate the generation of high resolution digitally reconstructed radiographs (DRRs). The presented pipeline exploits the latest features of NVIDIA Graphics Processing Unit (GPU) architectures, mainly bindless texture objects and dynamic parallelism. The rendering throughput is substantially improved by exploiting the interoperability mechanisms between CUDA and OpenGL. The benchmarks of our optimized rendering pipeline reflect its capability of generating DRRs with resolutions of 2048² and 4096² at interactive and semi-interactive frame-rates using an NVIDIA GeForce GTX 970 device.
Scanning tunneling microscope with two-dimensional translator.
Nichols, J; Ng, K-W
2011-01-01
Since the invention of the scanning tunneling microscope (STM), it has been a powerful tool for probing the electronic properties of materials. Typically STM designs capable of obtaining resolution on the atomic scale are limited to a small area which can be probed. We have built an STM capable of coarse motion in two dimensions, the z- and x-directions which are, respectively, parallel and perpendicular to the tip. This allows us to image samples with very high resolution at sites separated by macroscopic distances. This device is a single unit with a compact design making it very stable. It can operate in either a horizontal or vertical configuration and at cryogenic temperatures.
Coté, Gregory A.; Slivka, Adam; Tarnasky, Paul; Mullady, Daniel K.; Elmunzer, B. Joseph; Elta, Grace; Fogel, Evan; Lehman, Glen; McHenry, Lee; Romagnuolo, Joseph; Menon, Shyam; Siddiqui, Uzma D.; Watkins, James; Lynch, Sheryl; Denski, Cheryl; Xu, Huiping; Sherman, Stuart
2017-01-01
IMPORTANCE Endoscopic placement of multiple plastic stents in parallel is the first-line treatment for most benign biliary strictures; it is possible that fully covered, self-expandable metallic stents (cSEMS) may require fewer endoscopic retrograde cholangiopancreatography procedures (ERCPs) to achieve resolution. OBJECTIVE To assess whether use of cSEMS is noninferior to plastic stents with respect to stricture resolution. DESIGN, SETTING, AND PARTICIPANTS Multicenter (8 endoscopic referral centers), open-label, parallel, randomized clinical trial involving patients with treatment-naive, benign biliary strictures (N = 112) due to orthotopic liver transplant (n = 73), chronic pancreatitis (n = 35), or postoperative injury (n = 4), who were enrolled between April 2011 and September 2014 (with follow-up ending October 2015). Patients with a bile duct diameter less than 6 mm and those with an intact gallbladder in whom the cystic duct would be overlapped by a cSEMS were excluded. INTERVENTIONS Patients (N = 112) were randomized to receive multiple plastic stents or a single cSEMS, stratified by stricture etiology and with endoscopic reassessment for resolution every 3 months (plastic stents) or every 6 months (cSEMS). Patients were followed up for 12 months after stricture resolution to assess for recurrence. MAIN OUTCOMES AND MEASURES Primary outcome was stricture resolution after no more than 12 months of endoscopic therapy. The sample size was estimated based on the noninferiority of cSEMS to plastic stents, with a noninferiority margin of −15%. RESULTS There were 55 patients in the plastic stent group (mean [SD] age, 57 [11] years; 17 women [31%]) and 57 patients in the cSEMS group (mean [SD] age, 55 [10] years; 19 women [33%]). Compared with plastic stents (41/48, 85.4%), the cSEMS resolution rate was 50 of 54 patients (92.6%), with a rate difference of 7.2% (1-sided 95% CI, −3.0% to ∞; P < .001). Given the prespecified noninferiority margin of −15%, the null hypothesis that cSEMS is less effective than plastic stents was rejected. The mean number of ERCPs to achieve resolution was lower for cSEMS (2.14) vs plastic (3.24; mean difference, 1.10; 95% CI, 0.74 to 1.46; P < .001). CONCLUSIONS AND RELEVANCE Among patients with benign biliary strictures and a bile duct diameter 6 mm or more in whom the covered metallic stent would not overlap the cystic duct, cSEMS were not inferior to multiple plastic stents after 12 months in achieving stricture resolution. Metallic stents should be considered an appropriate option in patients such as these. TRIAL REGISTRATION clinicaltrials.gov Identifier: NCT01221311
Hu, Gang; He, Bin
2011-01-01
Magnetoacoustic tomography with magnetic induction (MAT-MI) is an emerging approach for noninvasively imaging the electrical impedance properties of biological tissues. The MAT-MI imaging system measures ultrasound waves generated by the Lorentz force, induced by magnetic stimulation, which is related to the electrical conductivity distribution in tissue samples. MAT-MI promises fine spatial resolution for biological tissue imaging, comparable to ultrasound resolution. In the present study, we first estimated the imaging spatial resolution by calculating the full width at half maximum (FWHM) of the system point spread function (PSF). The actual spatial resolution of our MAT-MI system was experimentally determined to be 1.51 mm using a parallel-line-source phantom with the Rayleigh criterion. Reconstructed images made from tissue-mimicking gel phantoms, as well as animal tissue samples, were consistent with the morphological structures of the samples. The electrical conductivity value of the samples was determined directly by a calibrated four-electrode system. It has been demonstrated that MAT-MI is able to image the electrical impedance properties of biological tissues with better than 2 mm spatial resolution. These results suggest the potential of MAT-MI for application to early detection of small-size diseased tissues (e.g. small breast cancer).
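The resolution estimate described above reduces to reading the FWHM off a sampled PSF; the sketch below uses a Gaussian stand-in for the system's simulated PSF (a sigma of 0.64 mm gives a FWHM of about 1.51 mm, matching the figure quoted).

```python
# FWHM of a sampled point spread function via linear interpolation.
import numpy as np

def fwhm(x, psf):
    half = psf.max() / 2.0
    above = np.where(psf >= half)[0]
    i0, i1 = above[0], above[-1]
    # Interpolate the half-maximum crossings on both flanks.
    xl = np.interp(half, [psf[i0 - 1], psf[i0]], [x[i0 - 1], x[i0]])
    xr = np.interp(half, [psf[i1 + 1], psf[i1]], [x[i1 + 1], x[i1]])
    return xr - xl

x = np.linspace(-5.0, 5.0, 1001)          # mm
psf = np.exp(-x**2 / (2 * 0.64**2))       # Gaussian stand-in PSF
print("FWHM ~", fwhm(x, psf), "mm")       # ~1.51 mm (2.355 * sigma)
```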
Liang, Yicheng; Peng, Hao
2015-02-07
Depth-of-interaction (DOI) poses a major challenge for a PET system to achieve uniform spatial resolution across the field-of-view, particularly for small animal and organ-dedicated PET systems. In this work, we implemented an analytical method to model system matrix for resolution recovery, which was then incorporated in PET image reconstruction on a graphical processing unit platform, due to its parallel processing capacity. The method utilizes the concepts of virtual DOI layers and multi-ray tracing to calculate the coincidence detection response function for a given line-of-response. The accuracy of the proposed method was validated for a small-bore PET insert to be used for simultaneous PET/MR breast imaging. In addition, the performance comparisons were studied among the following three cases: 1) no physical DOI and no resolution modeling; 2) two physical DOI layers and no resolution modeling; and 3) no physical DOI design but with a different number of virtual DOI layers. The image quality was quantitatively evaluated in terms of spatial resolution (full-width-half-maximum and position offset), contrast recovery coefficient and noise. The results indicate that the proposed method has the potential to be used as an alternative to other physical DOI designs and achieve comparable imaging performances, while reducing detector/system design cost and complexity.
NASA Astrophysics Data System (ADS)
Harms, F.; Dalimier, E.; Vermeulen, P.; Fragola, A.; Boccara, A. C.
2012-03-01
Optical Coherence Tomography (OCT) is an efficient technique for in-depth optical biopsy of biological tissues, relying on interferometric selection of ballistic photons. Full-Field Optical Coherence Tomography (FF-OCT) is an alternative approach to Fourier-domain OCT (spectral or swept-source), allowing parallel acquisition of en-face optical sections. Using a medium numerical aperture objective, it is possible to reach an isotropic resolution of about 1×1×1 µm. After stitching a grid of acquired images, FF-OCT gives access to the architecture of the tissue, for both macroscopic and microscopic structures, in a non-invasive process, which makes the technique particularly suitable for applications in pathology. Here we report a multimodal approach to FF-OCT, combining two full-field techniques for collecting a backscattered endogenous OCT image and an exogenous fluorescence image in parallel. Considering pathological diagnosis of cancer, visualization of cell nuclei is of paramount importance. OCT images, even at the highest resolution, usually fail to identify individual nuclei due to the nature of the optical contrast used. We have built a multimodal optical microscope based on the combination of FF-OCT and Structured Illumination Microscopy (SIM). We used x30 immersion objectives, with a numerical aperture of 1.05, allowing for sub-micron transverse resolution. Fluorescent staining of nuclei was obtained using specific fluorescent dyes such as acridine orange. We present multimodal images of healthy and pathological skin tissue at various scales. This instrumental development paves the way for improvements of standard pathology procedures, as a faster, non-sacrificial, operator-independent digital optical method compared to frozen sections.
GENESIS: new self-consistent models of exoplanetary spectra
NASA Astrophysics Data System (ADS)
Gandhi, Siddharth; Madhusudhan, Nikku
2017-12-01
We are entering the era of high-precision and high-resolution spectroscopy of exoplanets. Such observations herald the need for robust self-consistent spectral models of exoplanetary atmospheres to investigate intricate atmospheric processes and to make observable predictions. Spectral models of plane-parallel exoplanetary atmospheres exist, mostly adapted from other astrophysical applications, with different levels of sophistication and accuracy. There is a growing need for a new generation of models custom-built for exoplanets and incorporating state-of-the-art numerical methods and opacities. The present work is a step in this direction. Here we introduce GENESIS, a plane-parallel, self-consistent, line-by-line exoplanetary atmospheric modelling code that includes (a) formal solution of radiative transfer using the Feautrier method, (b) radiative-convective equilibrium with temperature correction based on the Rybicki linearization scheme, (c) latest absorption cross-sections, and (d) internal flux and external irradiation, under the assumptions of hydrostatic equilibrium, local thermodynamic equilibrium and thermochemical equilibrium. We demonstrate the code here with cloud-free models of giant exoplanetary atmospheres over a range of equilibrium temperatures, metallicities, C/O ratios and spanning non-irradiated and irradiated planets, with and without thermal inversions. We provide the community with theoretical emergent spectra and pressure-temperature profiles over this range, along with those for several known hot Jupiters. The code can generate self-consistent spectra at high resolution and has the potential to be integrated into general circulation and non-equilibrium chemistry models as it is optimized for efficiency and convergence. GENESIS paves the way for high-fidelity remote sensing of exoplanetary atmospheres at high resolution with current and upcoming observations.
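For readers unfamiliar with the Feautrier method named in (a), the standard textbook formulation (our summary, not taken from the paper) combines the intensities propagating in opposite directions along a ray into the symmetric mean u_ν = (I⁺_ν + I⁻_ν)/2, which turns the two first-order transfer equations into a single second-order two-point boundary-value problem,

\[ \mu^2 \, \frac{d^2 u_\nu}{d\tau_\nu^2} = u_\nu - S_\nu , \]

where μ is the direction cosine, τ_ν the optical depth and S_ν the source function; discretized on a depth grid, this yields a tridiagonal system solved efficiently by forward elimination and back substitution.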
NASA Astrophysics Data System (ADS)
Macedo, Milton P.; Correia, C. M. B. A.
2013-04-01
This work aims at showing the applicability of a scanning-stage bench microscope in bright-field reflection mode for wire-bonding inspection of integrated circuits (IC) as well as quality assurance of tracks in printed circuit boards (PCB). The main issues of our laboratory prototype arise from the use of a linear image sensor, taking advantage of its geometry to achieve lower acquisition time in comparison to the traditional (pinhole) confocal approach. The use of a slit detector is normally associated with resolution degradation for details parallel to the sensor. However, an improvement arises from using the light distribution along the line of pixels of the sensor, which establishes a clear advantage in comparison to (pure) slit detectors. The versatility of this bench microscope affords excellent means to develop and test algorithms, such as those that improve lateral-resolution isotropy as well as image visualization and 3D mesh reconstruction under different setups, namely illumination modes. Based on the results of these tests, both wide-field illumination and parallel slit illumination and detection configurations were used in these two applications. Results from IC wire bonding show the ability of the system to extract 3D information. A comparison of auto-focus images and 3D profiles obtained using different 3D reconstruction algorithms, as well as a method for the determination of the diameter of the bond wire, are presented. Measurements of PCB track width and thickness were performed, and the comparison of these results from both longitudinal and transverse tracks highlights the limitations of a lower spatial sampling rate imposed by the resolution of the object-stage positioners.
Galan, Maxime; Pons, Jean-Baptiste; Tournayre, Orianne; Pierre, Éric; Leuchtmann, Maxime; Pontier, Dominique; Charbonnel, Nathalie
2018-05-01
Assessing diet variability is of main importance to better understand the biology of bats and design conservation strategies. Although the advent of metabarcoding has facilitated such analyses, this approach does not come without challenges. Biases may occur throughout the whole experiment, from fieldwork to biostatistics, resulting in the detection of false negatives, false positives or low taxonomic resolution. We detail a rigorous metabarcoding approach based on a short COI minibarcode and a two-step PCR protocol enabling the "all at once" taxonomic identification of bats and their arthropod prey for several hundred samples. Our study includes faecal pellets collected in France from 357 bats representing 16 species, as well as insect mock communities that mimic bat meals of known composition, and negative and positive controls. All samples were analysed using three replicates. We compare the efficiency of DNA extraction methods, and we evaluate the effectiveness of our protocol using identification success, taxonomic resolution, sensitivity and amplification biases. Our parallel identification strategy of predators and prey reduces the risk of mis-assigning prey to the wrong predators and decreases the number of molecular steps. Controls and replicates enable filtering of the data and limit the risk of false positives, hence guaranteeing high-confidence results for both prey occurrence and bat species identification. We validate 551 COI variants from arthropods spanning 18 orders, 117 families, 282 genera and 290 species. Our method therefore provides a rapid, high-resolution and cost-effective screening tool for addressing evolutionary ecological issues or developing "chirosurveillance" and conservation strategies. © 2017 John Wiley & Sons Ltd.
Image scanning fluorescence emission difference microscopy based on a detector array.
Li, Y; Liu, S; Liu, D; Sun, S; Kuang, C; Ding, Z; Liu, X
2017-06-01
We propose a novel imaging method that significantly enhances the three-dimensional resolution of confocal microscopy and experimentally achieves, for the first time, a new fluorescence emission difference (FED) method based on parallel detection with a detector array. Following the principles of photon reassignment in image scanning microscopy, the images captured by the detector array were rearranged, and by selecting appropriate reassignment patterns, an imaging result with enhanced resolution can be achieved with the fluorescence emission difference method. Two specific methods are proposed in this paper, showing that the difference between an image scanning microscopy image and a confocal image achieves an improvement in transverse resolution of approximately 43% compared with confocal microscopy, while the axial resolution can be enhanced by at least 22% experimentally and 35% theoretically. Moreover, the methods presented in this paper can improve the lateral resolution by around 10% compared with fluorescence emission difference and 15% compared with Airyscan. The mechanism of our methods is verified by numerical simulations and experimental results, and it has significant potential in biomedical applications. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
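The following minimal Python sketch (our addition; the half-offset reassignment shifts, the subtraction weight gamma, and the normalization are assumptions, and the published method's exact reassignment patterns are not reproduced) illustrates the pipeline the abstract describes: photon reassignment of the detector-array images into an ISM image, a confocal-like sum, and an FED-style subtraction:

```python
import numpy as np

def ism_fed(stack, offsets, gamma=0.7):
    """stack: (K, H, W) images, one per detector element; offsets: (K, 2) element
    displacements (dy, dx) in pixels relative to the array center.
    Returns an FED-style difference of the reassigned (ISM) and summed (confocal) images."""
    ism = np.zeros(stack.shape[1:])
    conf = np.zeros(stack.shape[1:])
    for img, (dy, dx) in zip(stack, offsets):
        # Photon reassignment: shift each element image by half its offset, then sum.
        shifted = np.roll(img, (-int(round(dy / 2)), -int(round(dx / 2))), axis=(0, 1))
        ism += shifted
        conf += img                    # plain sum ~ confocal image with an open pinhole
    fed = ism / ism.max() - gamma * conf / conf.max()
    return np.clip(fed, 0.0, None)     # negative values discarded, as in FED
```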
A High-Resolution Capability for Large-Eddy Simulation of Jet Flows
NASA Technical Reports Server (NTRS)
DeBonis, James R.
2011-01-01
A large-eddy simulation (LES) code that utilizes high-resolution numerical schemes is described and applied to a compressible jet flow. The code is written in a general manner such that the accuracy/resolution of the simulation can be selected by the user. Time discretization is performed using a family of low-dispersion Runge-Kutta schemes, selectable from first- to fourth-order. Spatial discretization is performed using central differencing schemes. Both standard schemes, from second- to twelfth-order (3- to 13-point stencils), and Dispersion Relation Preserving (DRP) schemes, from 7- to 13-point stencils, are available. The code is written in Fortran 90 and uses hybrid MPI/OpenMP parallelization. The code is applied to the simulation of a Mach 0.9 jet flow. Four-stage third-order Runge-Kutta time stepping and the 13-point DRP spatial discretization scheme of Bogey and Bailly are used. The high-resolution numerics allow for the use of relatively sparse grids. Three levels of grid resolution are examined: 3.5, 6.5, and 9.2 million points. Mean flow, first-order turbulent statistics and turbulent spectra are reported. Good agreement with experimental data for mean flow and first-order turbulent statistics is shown.
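As a small illustration of the selectable-order central differencing described above (a sketch of the standard second- and fourth-order stencils on a periodic grid; the DRP coefficients of Bogey and Bailly are not reproduced here):

```python
import numpy as np

# Standard central-difference first-derivative stencils (interior coefficients).
STENCILS = {
    2: np.array([-0.5, 0.0, 0.5]),                  # 3-point, 2nd order
    4: np.array([1/12, -2/3, 0.0, 2/3, -1/12]),     # 5-point, 4th order
}

def ddx(f, dx, order=4):
    """First derivative of a periodic sample f with grid spacing dx."""
    c = STENCILS[order]
    m = len(c) // 2
    return sum(ck * np.roll(f, m - k) for k, ck in enumerate(c)) / dx

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
err = np.abs(ddx(np.sin(x), x[1] - x[0]) - np.cos(x)).max()
print(err)   # O(dx**4): halving dx reduces the error roughly 16-fold
```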
NASA Technical Reports Server (NTRS)
Ford, J. P.; Arvidson, R. E.
1989-01-01
The high sensitivity of imaging radars to slope at moderate to low incidence angles enhances the perception of linear topography on images. It reveals broad spatial patterns that are essential to landform mapping and interpretation. As radar responses are strongly directional, the ability to discriminate linear features on images varies with their orientation. Landforms that appear prominent on images where they are transverse to the illumination may be obscure to indistinguishable on images where they are parallel to it. Landform detection is also influenced by the spatial resolution of radar images. Seasat radar images of the Gran Desierto dune complex, Sonora, Mexico; the Appalachian Valley and Ridge Province; and accreted terranes in eastern interior Alaska were processed to simulate both Venera 15 and 16 images (1000 to 3000 m resolution) and the image data expected from the Magellan mission (120 to 300 m resolution). The Gran Desierto dunes are not discernible in the Venera simulation, whereas the higher-resolution Magellan simulation shows the dominant dune patterns produced from differential erosion of the rocks. The Magellan simulation also shows that fluvial processes have dominated erosion and exposure of the folds.
High-resolution ionization detector and array of such detectors
McGregor, Douglas S [Ypsilanti, MI; Rojeski, Ronald A [Pleasanton, CA
2001-01-16
A high-resolution ionization detector and an array of such detectors are described which utilize a reference pattern of conductive or semiconductive material to form interaction, pervious and measurement regions in an ionization substrate of, for example, CdZnTe material. The ionization detector is a room-temperature semiconductor radiation detector. Various geometries of such a detector and an array of such detectors produce room-temperature gamma-ray spectrometers with relatively high resolution. For example, a 1 cm³ detector is capable of measuring ¹³⁷Cs 662 keV gamma rays with room-temperature energy resolution approaching 2% FWHM. Two major types of such detectors include the parallel-strip semiconductor Frisch grid detector and the geometrically weighted trapezoid prism semiconductor Frisch grid detector. The geometrically weighted detector records room-temperature (24 °C) energy resolutions of 2.68% FWHM for ¹³⁷Cs 662 keV gamma rays and 2.45% FWHM for ⁶⁰Co 1.332 MeV gamma rays. The detectors perform well without any electronic pulse rejection, correction or compensation techniques. The devices operate at room temperature with simple commercially available NIM bin electronics and do not require special preamplifiers or cooling stages for good spectroscopic results.
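For concreteness, the quoted fractional resolutions convert to absolute FWHM energy widths as

\[ \Delta E = 0.0268 \times 662\,\mathrm{keV} \approx 17.7\,\mathrm{keV}, \qquad \Delta E = 0.0245 \times 1332\,\mathrm{keV} \approx 32.6\,\mathrm{keV}, \]

i.e. the peak widths are simply the quoted percentages of the photopeak energies (our arithmetic, derived from the figures above).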
First results of high-resolution modeling of Cenozoic subduction orogeny in Andes
NASA Astrophysics Data System (ADS)
Liu, S.; Sobolev, S. V.; Babeyko, A. Y.; Krueger, F.; Quinteros, J.; Popov, A.
2016-12-01
The Andean Orogeny is the result of upper-plate crustal shortening during the Cenozoic subduction of the Nazca plate beneath the South America plate. With up to 300 km of shortening, the Earth's second-highest plateau, the Altiplano-Puna Plateau, was formed with a pronounced N-S oriented diversity of deformation. Furthermore, the tectonic shortening in the Southern Andes was much less intensive and started much later. The mechanism of the shortening and the nature of the N-S variation of its magnitude remain controversial. Previous studies of the Central Andes suggested that they might be related to N-S variations in the strength of the lithosphere and in the friction coupling at the slab interface, and are probably influenced by the interaction of the climate and tectonic systems. However, the exact nature of the strength variation was not explored due to the lack of high numerical resolution and 3D numerical models at that time. Here we employ large-scale subduction models with a high resolution to reveal and quantify the factors controlling the strength of lithospheric structures and their effect on the magnitude of tectonic shortening in the South America plate between 18°-35°S. These high-resolution models are performed using the highly scalable parallel 3D code LaMEM (Lithosphere and Mantle Evolution Model). This code is based on a finite-difference staggered-grid approach and employs massive linear and non-linear solvers within the PETSc library to achieve high-performance MPI-based parallelization in geodynamic modeling. Currently, in addition to benchmark models, we are developing high-resolution (< 1 km) 2D subduction models with application to Nazca-South America convergence. In particular, we will present models focusing on the effect of friction reduction in the Paleozoic-Cenozoic sediments above the uppermost crust in the Subandean Ranges. Future work will focus on the origin of different styles of deformation and topography evolution in the Altiplano-Puna Plateau and the Central-Southern Andes through 3D modeling of the large-scale interaction of subducting and overriding plates.
Synergies Between Grace and Regional Atmospheric Modeling Efforts
NASA Astrophysics Data System (ADS)
Kusche, J.; Springer, A.; Ohlwein, C.; Hartung, K.; Longuevergne, L.; Kollet, S. J.; Keune, J.; Dobslaw, H.; Forootan, E.; Eicker, A.
2014-12-01
In the meteorological community, efforts converge towards the implementation of high-resolution (< 12 km) data-assimilating regional climate modelling/monitoring systems based on numerical weather prediction (NWP) cores. This is driven by requirements of improving process understanding, better representation of land surface interactions, atmospheric convection, orographic effects, and better forecasting on shorter timescales. This is relevant for the GRACE community since (1) these models may provide improved atmospheric mass separation / de-aliasing and smaller topography-induced errors, compared to global (ECMWF-Op, ERA-Interim) data; (2) they inherit high temporal resolution from NWP models; (3) parallel efforts towards improving the land surface component and coupling groundwater models may provide realistic hydrological mass estimates with sub-diurnal resolution; (4) parallel efforts towards re-analyses aim at providing consistent time series; and (5) GRACE, in turn, can help validate models and aids in the identification of processes needing improvement. A coupled atmosphere - land surface - groundwater modelling system is currently being implemented for the European CORDEX region at 12.5 km resolution, based on the TerrSysMP platform (COSMO-EU NWP, CLM land surface and ParFlow groundwater models). We report results from Springer et al. (J. Hydromet., accepted) on validating the water cycle in COSMO-EU using GRACE and precipitation, evapotranspiration and runoff data, confirming that the model performs favorably in representing observations. We show that after GRACE-derived bias correction, basin-average hydrological conditions prior to 2002 can be reconstructed better than before. Next, comparing GRACE with CLM forced by EURO-CORDEX simulations allows identifying processes needing improvement in the model. Finally, we compare COSMO-EU atmospheric pressure, a proxy for mass corrections in satellite gravimetry, with ERA-Interim over Europe at timescales shorter/longer than 1 month, and spatial scales below/above the ERA resolution. We find differences between the regional and global models more pronounced at high frequencies, with magnitudes at sub-grid and larger scales corresponding to 1-3 hPa (1-3 cm EWH); this is relevant for the assessment of post-GRACE concepts.
Resolution Enhancement in PET Reconstruction Using Collimation
NASA Astrophysics Data System (ADS)
Metzler, Scott D.; Matej, Samuel; Karp, Joel S.
2013-02-01
Collimation can improve both the spatial resolution and the sampling properties compared to the same scanner without collimation. Spatial resolution improves because each original crystal can be conceptually split into two (i.e., doubling the number of in-plane crystals) by masking half the crystal with a high-density attenuator (e.g., tungsten); this reduces coincidence efficiency by 4× since both crystals comprising the line of response (LOR) are masked, but yields 4× as many resolution-enhanced (RE) LORs. All the new RE LORs can be measured by scanning with the collimator in different configurations. In this simulation study, the collimator was assumed to be ideal, neither allowing gamma penetration nor truncating the field of view. Comparisons were made in 2D between an uncollimated small-animal system with 2-mm crystals that were assumed to be perfectly absorbing and the same system with collimation that narrowed the effective crystal size to 1 mm. Digital phantoms included a hot-rod and a single-hot-spot, both in a uniform background with an activity ratio of 4:1. In addition to the collimated and uncollimated configurations, angular and spatial wobbling acquisitions of the 2-mm case were also simulated. Similarly, configurations with different combinations of the RE LORs were considered, including (i) all LORs, (ii) only those parallel to the 2-mm LORs, and (iii) only cross pairs that are not parallel to the 2-mm LORs. Lastly, quantitative studies were conducted for collimated and uncollimated data using the contrast recovery coefficient and mean-squared error (MSE) as metrics. The reconstructions show that for most noise levels there is a substantial improvement in image quality (i.e., visual quality, resolution, and a reduction in artifacts) by using collimation, even when there are 4× fewer counts or, in some cases, when comparing with the noiseless uncollimated reconstruction. By comparing various configurations of sampling, the results show that it is the matched combination of both the improved spatial resolution of each LOR and the increase in the number of LORs that yields improved reconstructions. Further, the quantitative studies show that for low-count scans, the collimated data give better MSE for small lesions and the uncollimated data give better MSE for larger lesions; for high-count studies, the collimated data yield better quantitative values for the entire range of lesion sizes that were evaluated.
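The efficiency and sampling trade-off stated above follows from simple counting (our restatement of the abstract's numbers): masking half of each crystal halves its open area, and a coincidence requires both crystals of the LOR, so

\[ \varepsilon_{\mathrm{coinc}} \propto \left(\tfrac{1}{2}\right)^2 = \tfrac{1}{4}, \]

while each original crystal pair now resolves into 2 × 2 = 4 distinct half-crystal combinations, i.e. 4× as many resolution-enhanced LORs, which are collected by stepping the collimator through its configurations.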
NASA Astrophysics Data System (ADS)
Kim, Stephan D.; Luo, Jiajun; Buchholz, D. Bruce; Chang, R. P. H.; Grayson, M.
2016-09-01
A modular time division multiplexer (MTDM) device is introduced to enable parallel measurement of multiple samples with both fast and slow decay transients spanning from millisecond to month-long time scales. This is achieved by dedicating a single high-speed measurement instrument for rapid data collection at the start of a transient, and by multiplexing a second low-speed measurement instrument for slow data collection of several samples in parallel for the later transients. The MTDM is a high-level design concept that can in principle measure an arbitrary number of samples, and the low cost implementation here allows up to 16 samples to be measured in parallel over several months, reducing the total ensemble measurement duration and equipment usage by as much as an order of magnitude without sacrificing fidelity. The MTDM was successfully demonstrated by simultaneously measuring the photoconductivity of three amorphous indium-gallium-zinc-oxide thin films with 20 ms data resolution for fast transients and an uninterrupted parallel run time of over 20 days. The MTDM has potential applications in many areas of research that manifest response times spanning many orders of magnitude, such as photovoltaics, rechargeable batteries, amorphous semiconductors such as silicon and amorphous indium-gallium-zinc-oxide.
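A minimal Python sketch of the measurement schedule such a multiplexer could follow (our illustration, not the authors' firmware; the channel count, burst length, and sampling intervals are assumed values): a dedicated fast instrument covers the start of each transient densely, after which one slow instrument is multiplexed round-robin over all samples.

```python
import itertools

def mtdm_schedule(n_samples=16, fast_dt=0.02, fast_span=60.0, slow_dt=10.0):
    """Yield (time_s, sample_id, instrument) tuples.

    Each sample gets a dense fast-instrument burst at the start of its transient;
    afterwards a single slow instrument cycles over all samples indefinitely."""
    for k in range(n_samples):                    # fast bursts, one sample at a time
        t0, t = k * fast_span, 0.0
        while t < fast_span:
            yield (t0 + t, k, "fast")
            t += fast_dt
    t = n_samples * fast_span                     # slow round-robin phase
    for k in itertools.cycle(range(n_samples)):
        yield (t, k, "slow")
        t += slow_dt

# First few slow-phase entries after the 16 fast bursts:
sched = (e for e in mtdm_schedule() if e[2] == "slow")
print(list(itertools.islice(sched, 3)))
```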
Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam
2013-01-01
Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on the average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness. PMID:23867746
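To make the angle-level parallelism concrete, here is a minimal software sketch in Python/NumPy (our addition; it vectorizes the vote accumulation over all angles at once, standing in for the FPGA pipeline, and the 180-angle discretization is an assumption):

```python
import numpy as np

def hough_accumulate(edges, n_theta=180):
    """edges: binary edge image. Returns the (rho, theta) vote accumulator,
    with every angle processed in one vectorized batch (angle-level parallelism)."""
    ys, xs = np.nonzero(edges)
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.ceil(np.hypot(*edges.shape)))
    # rho = x*cos(theta) + y*sin(theta), computed for all (pixel, angle) pairs at once.
    rho = np.round(xs[:, None] * np.cos(thetas) + ys[:, None] * np.sin(thetas)).astype(int)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    cols = np.broadcast_to(np.arange(n_theta), rho.shape)
    np.add.at(acc, (rho + diag, cols), 1)         # unbuffered scatter-add of votes
    return acc

# Accumulator peaks (votes above a threshold) give the line parameters:
# rho = rho_idx - diag and theta = theta_idx degrees for each detected line.
```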
Configuration of twins in glass-embedded silver nanoparticles of various origin
NASA Astrophysics Data System (ADS)
Hofmeister, H.; Dubiel, M.; Tan, G. L.; Schicke, K.-D.
2005-09-01
Structural characterization using high-resolution electron microscopy and diffractogram analysis of silver nanoparticles embedded in glass by various routes of fabrication was aimed at revealing the characteristic features of the twin faults occurring in such particles. Nearly spherical silver nanoparticles well below 10 nm in size embedded in commercial soda-lime silicate float glass have been fabricated either by silver/sodium ion exchange or by Ag+ ion implantation. Twinned nanoparticles, besides single-crystalline species, have frequently been observed for both fabrication routes, mainly at sizes above 5 nm, but also at smaller sizes, even around 1 nm. The variety of particle forms comprises single-crystalline particles of nearly cuboctahedral shape, particles containing single twin faults, and multiply twinned particles containing parallel twin lamellae or cyclic twinned segments arranged around axes of fivefold symmetry. Parallel twinning is distinctly favoured by ion implantation, whereas cyclic twinning preferably occurs upon ion exchange processing. Regardless of single or repeated twinning, parallel or cyclic twin arrangement, one may classify simple twin faults of regular atomic configuration and compound twin faults whose irregular configuration contains additional planar defects like associated stacking faults or secondary twin faults. In addition, a particular superstructure composed of parallel twin lamellae of only three atomic layers' thickness is observed.
A Microfluidic Platform for Correlative Live-Cell and Super-Resolution Microscopy
Tam, Johnny; Cordier, Guillaume Alan; Bálint, Štefan; Sandoval Álvarez, Ángel; Borbely, Joseph Steven; Lakadamyali, Melike
2014-01-01
Recently, super-resolution microscopy methods such as stochastic optical reconstruction microscopy (STORM) have enabled visualization of subcellular structures below the optical resolution limit. Due to the poor temporal resolution, however, these methods have mostly been used to image fixed cells or dynamic processes that evolve on slow time-scales. In particular, fast dynamic processes and their relationship to the underlying ultrastructure or nanoscale protein organization cannot be discerned. To overcome this limitation, we have recently developed a correlative and sequential imaging method that combines live-cell and super-resolution microscopy. This approach adds dynamic background to ultrastructural images providing a new dimension to the interpretation of super-resolution data. However, currently, it suffers from the need to carry out tedious steps of sample preparation manually. To alleviate this problem, we implemented a simple and versatile microfluidic platform that streamlines the sample preparation steps in between live-cell and super-resolution imaging. The platform is based on a microfluidic chip with parallel, miniaturized imaging chambers and an automated fluid-injection device, which delivers a precise amount of a specified reagent to the selected imaging chamber at a specific time within the experiment. We demonstrate that this system can be used for live-cell imaging, automated fixation, and immunostaining of adherent mammalian cells in situ followed by STORM imaging. We further demonstrate an application by correlating mitochondrial dynamics, morphology, and nanoscale mitochondrial protein distribution in live and super-resolution images. PMID:25545548
Wedi, Nils P
2014-06-28
The steady path of doubling the global horizontal resolution approximately every 8 years in numerical weather prediction (NWP) at the European Centre for Medium-Range Weather Forecasts may be substantially altered with emerging novel computing architectures. It coincides with the need to appropriately address and determine forecast uncertainty with increasing resolution, in particular when convective-scale motions start to be resolved. Blunt increases in model resolution will quickly become unaffordable and may not lead to improved NWP forecasts. Consequently, there is a need to adjust proven numerical techniques accordingly. An informed decision on the modelling strategy for harnessing exascale, massively parallel computing power thus also requires a deeper understanding of the sensitivity to uncertainty, for each part of the model, and ultimately a deeper understanding of multi-scale interactions in the atmosphere and their numerical realization in ultra-high-resolution NWP and climate simulations. This paper explores opportunities for substantial increases in forecast efficiency by judicious adjustment of the formal accuracy or relative resolution in spectral and physical space. One path is to reduce the formal accuracy with which the spectral transforms are computed. The other pathway explores the importance of the ratio used for the horizontal resolution in gridpoint space versus wavenumbers in spectral space. This is relevant for both high-resolution simulations and ensemble-based uncertainty estimation. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Quantification of resolution in multiplanar reconstructions for digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Vent, Trevor L.; Acciavatti, Raymond J.; Kwon, Young Joon; Maidment, Andrew D. A.
2016-03-01
Multiplanar reconstruction (MPR) in digital breast tomosynthesis (DBT) allows tomographic images to be portrayed in various orientations. We have conducted research to determine the resolution of tomosynthesis MPR. We built a phantom that houses a star test pattern to measure resolution. This phantom provides three rotational degrees of freedom. The design consists of two hemispheres with longitudinal and latitudinal grooves that reference angular increments. When joined together, the hemispheres form a dome that sits inside a cylindrical encasement. The cylindrical encasement contains reference notches to match the longitudinal and latitudinal grooves that guide the phantom's rotations. With this design, any orientation of the star-pattern can be analyzed. Images of the star-pattern were acquired using a DBT mammography system at the Hospital of the University of Pennsylvania. Images taken were reconstructed and analyzed by two different methods. First, the maximum visible frequency (in line pairs per millimeter) of the star test pattern was measured. Then, the contrast was calculated at a fixed spatial frequency. These analyses confirm that resolution decreases with tilt relative to the breast support. They also confirm that resolution in tomosynthesis MPR is dependent on object orientation. Current results verify that the existence of super-resolution depends on the orientation of the frequency; the direction parallel to x-ray tube motion shows super-resolution. In conclusion, this study demonstrates that the direction of the spatial frequency relative to the motion of the x-ray tube is a determinant of resolution in MPR for DBT.
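The conversion from a position on the star test pattern to a spatial frequency follows the standard Siemens-star relation (our addition; the number of cycles N of the authors' pattern is not given in the abstract): a star with N line-pair cycles around its circumference presents, at radius r,

\[ f(r) = \frac{N}{2\pi r} \]

line pairs per unit length, so reading off the smallest radius at which the spokes remain separated gives the limiting frequency; e.g. a 36-cycle star resolved down to r ≈ 2.9 mm corresponds to f ≈ 2 lp/mm.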
Ribot, Emeline J.; Wecker, Didier; Trotier, Aurélien J.; Dallaudière, Benjamin; Lefrançois, William; Thiaudière, Eric; Franconi, Jean-Michel; Miraux, Sylvain
2015-01-01
Introduction: The purpose of this paper is to develop an easy method to generate both fat-signal-free and banding-artifact-free 3D balanced steady-state free precession (bSSFP) images at high magnetic field. Methods: In order to suppress the fat signal and bSSFP banding artifacts, two or four images were acquired with the excitation frequency of the water-selective binomial radiofrequency pulse set on resonance or shifted by a maximum of 3/4TR. Mice and human volunteers were imaged at 7T and 3T, respectively, to perform whole-body and musculoskeletal imaging. “Sum-of-Squares” reconstruction was performed, combined or not with parallel imaging. Results: The frequency selectivity of the 1-2-3-2-1 and 1-3-3-1 binomial pulses was preserved after the (3/4TR) frequency shifting. Consequently, whole-body small-animal 3D imaging was performed at 7T and enabled visualization of small structures within adipose tissue, such as lymph nodes. In parallel, this method allowed 3D musculoskeletal imaging in humans with high spatial resolution at 3T. The combination with parallel imaging allowed the acquisition of knee images with ~500 µm resolution in less than 2 min. In addition, ankles, full head coverage and legs of volunteers were imaged, demonstrating the possible application of the method also for large FOVs. Conclusion: This robust method can be applied in small animals and humans at high magnetic fields. The high SNR and tissue contrast obtained in short acquisition times allow the bSSFP sequence to be prescribed for several preclinical and clinical applications. PMID:26426849
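One standard reading of the frequency shifts quoted above (our interpretation, not spelled out in the abstract): the bSSFP banding pattern is periodic in off-resonance with period 1/TR, so acquiring M images with the excitation frequency offset by

\[ \Delta f_k = \frac{k}{M\,\mathrm{TR}}, \qquad k = 0, \dots, M-1, \]

displaces the bands by the fraction k/M of a period; for M = 4 the largest offset is 3/(4 TR), matching the "3/4TR" shift above, and the sum-of-squares combination of the shifted images flattens the banding.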
Modern gyrokinetic particle-in-cell simulation of fusion plasmas on top supercomputers
Wang, Bei; Ethier, Stephane; Tang, William; ...
2017-06-29
The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon Phi (MIC) co-processors and performance comparisons with state-of-the-art homogeneous HPC systems such as Blue Gene/Q. New discovery science capabilities in the magnetic fusion energy application domain are enabled, including investigations of Ion-Temperature-Gradient (ITG) driven turbulence simulations with unprecedented spatial resolution and long temporal duration. Performance studies with realistic fusion experimental parameters are carried out on multiple supercomputing systems spanning a wide range of cache capacities, cache-sharing configurations, memory bandwidth, interconnects and network topologies. These performance comparisons using a realistic discovery-science-capable domain application code provide valuable insights on optimization techniques across one of the broadest sets of current high-end computing platforms worldwide.
Xia, Yang; Mittelstaedt, Daniel; Ramakrishnan, Nagarajan; Szarko, Matthew; Bidthanapally, Aruna
2010-01-01
Full-thickness blocks of canine humeral cartilage were microtomed into both perpendicular sections and a series of 100 parallel sections, each 6 μm thick. Fourier Transform Infrared Imaging (FTIRI) was used to image each tissue section eleven times under different infrared polarizations (from 0° to 180° polarization states in 20° increments, with an additional 90° polarization), at a spatial resolution of 6.25 μm and a wavenumber step of 8 cm⁻¹. With increasing depth from the articular surface, amide anisotropies increased in the perpendicular sections and decreased in the parallel sections. Both types of tissue sectioning identified a 90° difference between amide I and amide II in the superficial zone of cartilage. The fibrillar distribution in the parallel sections from the superficial zone was shown to be nonrandom. Sugar had the greatest anisotropy in the upper part of the radial zone in the perpendicular sections. The depth-dependent anisotropy data were fitted with a theoretical equation containing three signature parameters, which illustrate the arcade structure of collagen with the aid of a fibril model. Infrared imaging of both perpendicular and parallel sections offers the possibility of determining three-dimensional macromolecular structures in articular cartilage. Sensitivity to the orientation of the macromolecular structure in healthy articular cartilage aids the prospect of detecting the early onset of the tissue degradation that may lead to pathological conditions such as osteoarthritis. PMID:21274999
Research on moving object detection based on frog's eyes
NASA Astrophysics Data System (ADS)
Fu, Hongwei; Li, Dongguang; Zhang, Xinyuan
2008-12-01
On the basis of the object-information processing mechanism of frog's eyes, this paper discusses a bionic detection technology suitable for object-information processing based on frog vision. First, a bionic detection theory imitating frog vision is established; it is a parallel processing mechanism that includes pick-up and pretreatment of object information, parallel separation of the digital image, parallel processing, and information synthesis. A computer vision detection system is described to detect moving objects of a given color and shape; experiments indicate that such objects can be detected even against a cluttered, interfering background. A moving-object detection electronic model imitating biological vision based on frog's eyes is established: the analog video signal is first digitized, then the digital signal is separated in parallel by an FPGA. In the parallel processing, the video information can be captured, processed and displayed at the same time, and information fusion is performed through the DSP HPI ports in order to transmit the data processed by the DSP. This system can cover a larger visual field and obtain higher image resolution than ordinary monitoring systems. In summary, simulation experiments for edge detection of moving objects with the Canny algorithm based on this system indicate that the system can detect the edges of moving objects in real time; the feasibility of the bionic model was fully demonstrated in the engineering system, laying a solid foundation for future study of detection technology imitating biological vision.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Causgrove, T.P.; Yang, S.; Struve, W.S.
1988-11-17
The polarization of the Q_x electronic transition in the BChl a-protein complex from the green sulfur bacterium Prosthecochloris aestuarii was monitored by pump-probe spectroscopy with approx. 1.5-ps resolution at 598, 603, and 609 nm. At 603 nm, the polarization decays with a mean lifetime of 4.78 ps. Substantial residual polarization appears at long times (the ratio A∥/A⊥ of optical densities for probe pulses polarized parallel and perpendicular to the excitation pulse is approx. 1.7) in consequence of the nonrandom chromophore orientations. The polarized pump-probe transients have been analyzed in terms of an exciton hopping model that incorporates the known geometry of the BChl a-protein.
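Using the standard transient-absorption anisotropy definition (our addition; the abstract quotes only the intensity ratio), the residual ratio of about 1.7 corresponds to a residual anisotropy

\[ r = \frac{A_\parallel - A_\perp}{A_\parallel + 2A_\perp} = \frac{1.7 - 1}{1.7 + 2} \approx 0.19, \]

well below the single-transition limit r = 0.4 but clearly nonzero, consistent with the nonrandom chromophore orientations invoked above.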
Luminosity variations in several parallel auroral arcs before auroral breakup
NASA Astrophysics Data System (ADS)
Safargaleev, V.; Lyatsky, W.; Tagirov, V.
1997-08-01
Variation of the luminosity in two parallel auroral arcs before auroral breakup has been studied using digitised TV data with high temporal and spatial resolution. Intervals when a new arc appears near an already existing one were chosen for analysis. It is shown, for all cases, that the appearance of a new arc is accompanied by fading or disappearance of the other arc. We have named these out-of-phase (OP) events. Another type of luminosity variation is characterised by almost simultaneous enhancement of intensity in both arcs (in-phase, IP, events). The characteristic time of IP events is 10-20 s, whereas OP events last about one minute. Sometimes out-of-phase events begin as IP events. Possible mechanisms for OP and IP events are discussed.
Use of PZT's for adaptive control of Fabry-Perot etalon plate figure
NASA Technical Reports Server (NTRS)
Skinner, Wilbert; Niciejewski, R.
2005-01-01
A Fabry-Perot etalon, consisting of two spaced reflective glass flats, provides the mechanism by which high-resolution spectroscopy may be performed over narrow spectral regions. Space-based applications include direct measurements of Doppler shifts of airglow absorption and emission features and of the Doppler broadening of spectral lines. The technique requires a high degree of parallelism between the two flats to be maintained through harsh launch conditions. Monitoring and adjusting the plate figure by illuminating the Fabry-Perot interferometer with a suitable monochromatic source may be performed on orbit to actively control the parallelism of the flats. This report describes the use of such a technique in a laboratory environment, applied to a piezo-electric stack attached to the center of a Fabry-Perot etalon.
Applications of massively parallel computers in telemetry processing
NASA Technical Reports Server (NTRS)
El-Ghazawi, Tarek A.; Pritchard, Jim; Knoble, Gordon
1994-01-01
Telemetry processing refers to the reconstruction of full-resolution raw instrumentation data with the artifacts of space and ground recording and transmission removed. Being the first processing phase of satellite data, this process is also referred to as level-zero processing. This study is aimed at investigating the use of massively parallel computing technology to provide level-zero processing for spaceflights that adhere to the recommendations of the Consultative Committee on Space Data Systems (CCSDS). The workload characteristics of level-zero processing are used to identify processing requirements for high-performance computing systems. An example of level-zero functions on a SIMD MPP, such as the MasPar, is discussed. The requirements in this paper are based in part on the Earth Observing System (EOS) Data and Operations System (EDOS).
Parallel 3D-TLM algorithm for simulation of the Earth-ionosphere cavity
NASA Astrophysics Data System (ADS)
Toledo-Redondo, Sergio; Salinas, Alfonso; Morente-Molinera, Juan Antonio; Méndez, Antonio; Fornieles, Jesús; Portí, Jorge; Morente, Juan Antonio
2013-03-01
A parallel 3D algorithm for solving time-domain electromagnetic problems with arbitrary geometries is presented. The technique employed is the Transmission Line Modeling (TLM) method implemented in Shared Memory (SM) environments. The benchmarking performed reveals that the maximum speedup depends on the memory size of the problem as well as on multiple hardware factors, like the disposition of CPUs, cache, or memory. A maximum speedup of 15 has been measured for the largest problem. In certain circumstances of low memory requirements, superlinear speedup is achieved with our algorithm. The algorithm is employed to model the Earth-ionosphere cavity, thus enabling a study of the natural electromagnetic phenomena that occur in it. It allows complete 3D simulations of the cavity with a resolution of 10 km within a reasonable timescale.
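As a reminder of how the quoted figures are read (a toy calculation, not the paper's data), speedup and parallel efficiency follow directly from wall-clock timings; an efficiency above 1 is what the abstract calls superlinear speedup:

```python
def speedup(t_serial, t_parallel, n_procs):
    """Return (speedup, parallel efficiency) from wall-clock times."""
    s = t_serial / t_parallel
    return s, s / n_procs      # efficiency > 1.0 indicates superlinear speedup

# E.g. a run that takes 3600 s serially and 240 s in parallel on 16 cores:
print(speedup(3600.0, 240.0, 16))   # (15.0, 0.9375), cf. the maximum speedup of 15 above
```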
Nakada, Tsutomu; Matsuzawa, Hitoshi; Fujii, Yukihiko; Takahashi, Hitoshi; Nishizawa, Masatoyo; Kwee, Ingrid L
2006-07-01
Clinical magnetic resonance imaging (MRI) has recently entered the "high-field" era, and systems equipped with 3.0-4.0T superconductive magnets are becoming the gold standard for diagnostic imaging. While a higher signal-to-noise ratio (S/N) is a definite advantage of higher-field systems, a stronger susceptibility effect remains a significant trade-off. To take advantage of a higher-field system in performing routine clinical imaging at higher anatomical resolution, we implemented a vector contrast imaging technique, three-dimensional anisotropy contrast (3DAC), at 3.0T with a PROPELLER (Periodically Rotated Overlapping Parallel Lines with Enhanced Reconstruction) sequence, a method capable of effectively eliminating undesired artifacts in rapid diffusion imaging sequences. One hundred subjects (20 normal volunteers and 80 volunteers with various central nervous system diseases) participated in the study. Anisotropic diffusion-weighted PROPELLER images were obtained on a General Electric (Waukesha, WI, USA) Signa 3.0T for each axis, with a b-value of 1100 s/mm². Subsequently, 3DAC images were constructed using in-house software written in MATLAB (MathWorks, Natick, MA, USA). The vector contrast provides exquisite anatomical detail, illustrated by clear identification of all major tracts through the entire brain. 3DAC images provide better anatomical resolution for brainstem glioma than higher-resolution T2 reversed images. Degenerative processes of disease-specific tracts were clearly identified, as illustrated in cases of multiple system atrophy and Machado-Joseph disease. Anatomical images of significantly higher resolution than the best current standard, T2 reversed images, were successfully obtained. As a technique readily applicable in a routine clinical setting, 3DAC PROPELLER on a 3.0T system will be a powerful addition to diagnostic imaging.
NASA Astrophysics Data System (ADS)
Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.
2015-12-01
Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models at sufficient spatial and temporal resolutions. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational times than those in homogeneous media. Uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with more than thousands of processors have become available to scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolutions within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles to utilizing large numbers of processors effectively in general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2-family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector and scalar) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million-grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolution to simulate the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale. The performance measurements confirmed that both simulators exhibit excellent scalability, showing almost linear speedup with the number of processors up to over ten thousand cores. Generally this allows us to perform coupled multi-physics (THC) simulations on high-resolution geologic models with multi-million grids in a practical time (e.g., less than a second per time step).
A high-resolution physically-based global flood hazard map
NASA Astrophysics Data System (ADS)
Kaheil, Y.; Begnudelli, L.; McCollum, J.
2016-12-01
We present results from a physically-based global flood hazard model. The model uses a physically-based hydrologic model to simulate river discharges and a 2D hydrodynamic model to simulate inundation. The model is set up such that it allows application to large-scale flood hazard assessment through efficient use of parallel computing. For hydrology, we use the Hillslope River Routing (HRR) model. HRR accounts for surface hydrology using a Green-Ampt parameterization. The model is calibrated against observed discharge data from the Global Runoff Data Centre (GRDC) network, among other publicly available datasets. The parallel-computing framework takes advantage of the river network structure to minimize cross-processor messages, and thus significantly increases computational efficiency. For inundation, we implemented a computationally efficient 2D finite-volume model with wetting/drying. The approach consists of simulating flooding along the river network by forcing the hydraulic model with the streamflow hydrographs simulated by HRR, scaled up to given return levels, e.g. 100 years. The model is distributed such that each available processor takes the next simulation. Given an approximate criterion, the simulations are ordered from most demanding to least demanding to ensure that all processors finish almost simultaneously. Upon completing all simulations, the maximum envelope of flood depth is taken to generate the final map. The model is applied globally, with selected results shown from different continents and regions. The maps shown depict flood depth and extent at different return periods. These maps, which are currently available at 3 arc-sec resolution (~90 m), can be made available at higher resolutions where high-resolution DEMs are available. The maps can be utilized by flood risk managers at the national, regional, and even local levels to further understand their flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs.
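For reference, a minimal Python sketch of the Green-Ampt parameterization named above (our illustration with assumed soil parameters, not HRR's implementation): cumulative infiltration F under ponded conditions satisfies the implicit relation F − ψΔθ ln(1 + F/(ψΔθ)) = Kt, which a simple fixed-point iteration solves.

```python
import math

def green_ampt_F(t, K=1.0e-6, psi=0.11, dtheta=0.3, iters=50):
    """Cumulative infiltration F [m] after t seconds (ponded Green-Ampt).
    K: saturated hydraulic conductivity [m/s]; psi: wetting-front suction head [m];
    dtheta: soil moisture deficit [-]. Parameter values here are illustrative."""
    s = psi * dtheta
    F = max(K * t, 1e-9)                 # initial guess
    for _ in range(iters):               # contraction mapping; converges quickly
        F = K * t + s * math.log(1.0 + F / s)
    return F

def infiltration_rate(F, K=1.0e-6, psi=0.11, dtheta=0.3):
    """Infiltration capacity f = K * (1 + psi*dtheta / F) [m/s]."""
    return K * (1.0 + psi * dtheta / F)

F1h = green_ampt_F(3600.0)               # after one hour of ponding
print(F1h, infiltration_rate(F1h))
```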
Super-Resolution in Plenoptic Cameras Using FPGAs
Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime
2014-01-01
Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field programmable graphic array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes. PMID:24841246
Exploring the potential of machine learning to break deadlock in convection parameterization
NASA Astrophysics Data System (ADS)
Pritchard, M. S.; Gentine, P.
2017-12-01
We explore the potential of modern machine learning tools (via TensorFlow) to replace the parameterization of deep convection in climate models. Our strategy begins by generating a large (~1 TB) training dataset from time-step-level (30-min) output harvested from a one-year integration of a zonally symmetric, uniform-SST aquaplanet configuration of the SuperParameterized Community Atmosphere Model (SPCAM). We harvest the inputs and outputs connecting each of SPCAM's 8,192 embedded cloud-resolving model (CRM) arrays to its host climate model's arterial thermodynamic state variables, yielding 143M independent training instances. We demonstrate that this dataset is sufficiently large to induce preliminary convergence for neural network prediction of the desired outputs of SP, i.e., CRM-mean convective heating and moistening profiles. Sensitivity of the machine learning convergence to the nuances of the TensorFlow implementation is discussed, as are results from pilot tests of the neural network operating inline within SPCAM as a replacement for the (super)parameterization of convection.
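A minimal sketch of such an emulator in TensorFlow/Keras is shown below, assuming illustrative input/output sizes (stacked temperature and humidity profiles in, heating and moistening profiles out); the authors' actual network shape and hyperparameters are not specified in the abstract.

```python
import tensorflow as tf

# Hypothetical sizes: two stacked 30-level input profiles (T, q) and
# two stacked 30-level output tendencies (heating, moistening).
n_in, n_out = 60, 60

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_in,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(n_out),  # linear output for tendencies
])
model.compile(optimizer="adam", loss="mse")

# x, y would be the harvested CRM input/output pairs (143M rows in the paper).
# model.fit(x, y, batch_size=1024, epochs=10)
```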
Cuperlier, Nicolas; Gaussier, Philippe
2017-01-01
Emotions play a significant role in internal regulatory processes. In this paper, we advocate four key ideas. First, novelty detection can be grounded in the sensorimotor experience and allow higher-order appraisal. Second, cognitive processes, such as those involved in self-assessment, influence emotional states by eliciting affects like boredom and frustration. Third, emotional processes such as those triggered by self-assessment influence attentional processes. Last, close emotion-cognition interactions implement an efficient feedback loop for the purpose of top-down behavior regulation. The latter is what we call ‘Emotional Metacontrol’. We introduce a model based on artificial neural networks. This architecture is used to control a robotic system in a visual search task. The emotional metacontrol intervenes to bias the robot's visual attention during active object recognition. Through a behavioral and statistical analysis, we show that this mechanism improves the robot's performance and fosters exploratory behavior that avoids deadlocks. PMID:28934291
Heuristic Search for Planning with Different Forced Goal-Ordering Constraints
Zhang, Weiming; Cui, Jing; Zhu, Cheng; Huang, Jincai; Liu, Zhong
2013-01-01
Planning with forced goal-ordering (FGO) constraints has been studied many times over the years, but there are still major difficulties in handling FGOs during plan generation. In certain planning domains, all the FGOs exist in the initial state: no matter which approach is adopted to achieve a subgoal, all the subgoals must be achieved in a given sequence from the initial state; otherwise, the planning may arrive at a deadlock. In other planning domains, there is no FGO in the initial state, but an FGO may arise during the planning process if a subgoal is achieved by an inappropriate approach. This paper illustrates that it is the excludable constraints among the goal achievement operations (GAO) of different subgoals that introduce FGOs into the planning problem, and that planning with FGOs remains a challenge for heuristic-search-based planners. A novel multistep forward search algorithm is then proposed which can solve planning problems with different FGOs efficiently. PMID:23935443
Looking back: Italian psychiatry from its origins to Law 180 of 1978.
Babini, Valeria Paola
2014-06-01
Italian psychiatry is usually renowned for its radical anti-institutional movement and the Reform Law of 1978, which require historical analysis to understand. In the 1960s, Italian psychiatric culture had reached a deadlock because of its obsolete biological, strictly neuroanatomical, explanatory approach (so-called organicism), postulated by academic institutions and awkwardly implemented in asylum practice. One prominent figure in shaping this philosophy was Cesare Lombroso, internationally known as the father of criminal anthropology. This attitude became sharper and more oppressive under the Fascist regime, when international exchanges and collaboration were discouraged, when not outright repressed. So anachronistic was the situation in the 1960s that the anti-institutional movement founded and led by Franco Basaglia swept professionals, politicians, and public opinion along in its wake. What in most countries took the form of gradual reform became a radical reaction in Italy and, in less than 20 years, turned one of the most deprived institutional systems into the most radical community mental health care system in the world.
Topological analysis of metabolic networks based on Petri net theory.
Zevedei-Oancea, Ionela; Schuster, Stefan
2011-01-01
Petri net concepts provide additional tools for the modelling of metabolic networks. Here, the similarities between the counterparts in traditional biochemical modelling and Petri net theory are discussed. For example, the stoichiometry matrix of a metabolic network corresponds to the incidence matrix of the Petri net, and flux modes and conservation relations have T-invariants and P-invariants, respectively, as counterparts. We reveal the biological meaning of some notions specific to the Petri net framework (traps, siphons, deadlocks, liveness). We focus on topological analysis rather than on analysis of the dynamic behaviour. The treatment of external metabolites is discussed. Some simple theoretical examples are presented for illustration, and Petri nets corresponding to some biochemical networks are built to support our results. For example, the role of triose phosphate isomerase (TPI) in Trypanosoma brucei metabolism is evaluated by detecting siphons and traps. All Petri net properties treated in this contribution are exemplified on a system extracted from nucleotide metabolism.
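To make the stoichiometry/incidence correspondence concrete, here is a small Python sketch that computes T-invariants as integer vectors in the nullspace of a toy incidence matrix; the example network is invented for illustration.

```python
import sympy as sp

# Incidence matrix N of a toy Petri net (rows: places/metabolites,
# columns: transitions/reactions). In biochemical terms this is the
# stoichiometry matrix.
N = sp.Matrix([
    [1, -1,  0],
    [0,  1, -1],
])

# T-invariants are vectors v with N*v = 0: firing each transition
# v[i] times returns the net to its original marking (cf. flux modes).
for v in N.nullspace():
    v = v * sp.lcm([entry.q for entry in v])  # clear denominators
    print(v.T)
```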
Order and disorder in traffic and self-driven many-particle systems
NASA Astrophysics Data System (ADS)
Helbing, Dirk
2002-07-01
During the last decade, physicists have identified various spatio-temporal patterns of motion in vehicle and pedestrian traffic. Moreover, by applying and extending methods from statistical physics and non-linear dynamics, these have been successfully explained by means of self-driven many-particle models. Some of the questions now understood are the following: Why are vehicles sometimes stopped by so-called "phantom traffic jams," although they all like to drive fast? What are the mechanisms behind stop-and-go traffic? Why are there several different kinds of congestion, and how are they related? Why do most traffic jams occur considerably before the road capacity is reached? Can a temporary reduction of the traffic volume cause a lasting traffic jam? What is the origin of fluctuations in traffic systems and which consequences do they have? Why do pedestrians moving in opposite directions normally organize in lanes, while nervous crowds are "freezing by heating?" Why do panicking pedestrians produce dangerous deadlocks?
Fallopian tubes--literature review of anatomy and etiology in female infertility.
Briceag, I; Costache, A; Purcarea, V L; Cergan, R; Dumitru, M; Briceag, I; Sajin, M; Ispas, A T
2015-01-01
Around 30% of infertile women worldwide have associated Fallopian tube pathology. Unfortunately, this aspect of infertility has long been neglected because the deadlock can be bypassed through IVF. Up-to-date free full-text literature was reviewed, comprising 4 major textbooks and around 100 articles centered on tubal infertility, in order to raise awareness of this subject. The anatomy of the Fallopian tube is complex, starting from its embryological development and continuing with its vascular supply and ciliated microstructure, which is the key to the process of egg transport to the site of fertilization. There are many strongly documented causes of tubal infertility: infections (Chlamydia trachomatis, gonorrhea, and genital tuberculosis), intrauterine contraceptive devices, endometriosis, complications after abdominal surgery, etc. Although there are still many controversies about the etiology of tubal sterility, the advent of molecular diagnosis of infections has clarified the pathway from infection, endometriosis, or ciliary immobility towards tubal obstruction.
Investigating a method of producing "red and dead" galaxies
NASA Astrophysics Data System (ADS)
Skory, Stephen
2010-08-01
In optical wavelengths, galaxies are observed to be either red or blue. The overall color of a galaxy is due to the distribution of the ages of its stellar population. Galaxies with currently active star formation appear blue, while those with no recent star formation at all (greater than about a Gyr) have only old, red stars. This strong bimodality has led to the idea of star formation quenching, and various proposed physical mechanisms. In this dissertation, I attempt to reproduce with Enzo the results of Naab et al. (2007), in which red and dead galaxies are formed using gravitational quenching, rather than with one of the more typical methods of quenching. My initial attempts are unsuccessful, and I explore the reasons why I think they failed. Then, using simpler methods better suited to Enzo + AMR, I am successful in producing a galaxy that appears to be similar in color and formation history to those in Naab et al. However, quenching is achieved using unphysically high star formation efficiencies, which is a different mechanism than Naab et al. suggests. Preliminary results of a much higher resolution, follow-on simulation of the above show some possible contradiction with the results of Naab et al.: cold gas is streaming into the galaxy to fuel starbursts, while at a similar epoch the galaxies in Naab et al. have largely already ceased forming stars. On the other hand, the results of the high resolution simulation are qualitatively similar to other works in the literature that show a somewhat different gravitational quenching mechanism than Naab et al. I also discuss my work using halo finders to analyze simulated cosmological data, and my work improving the Enzo/AMR analysis tool "yt". This includes two parallelizations of the halo finder HOP (Eisenstein and Hut, 1998), which allow analysis of very large cosmological datasets on parallel machines. The first version is "yt-HOP," which works well for datasets between about 256³ and 512³ particles, but has memory bottlenecks as the datasets get larger. These bottlenecks inspired the second version, "Parallel HOP," which is a fully parallelized method and implementation of HOP that has worked on datasets with more than 2048³ particles on hundreds of processing cores. Both methods are described in detail, as are the various effects of performance-related runtime options. Additionally, both halo finders are subjected to a full suite of performance benchmarks varying both dataset sizes and computational resources used. I conclude with descriptions of four new tools I added to yt. A Parallel Structure Function Generator allows analysis of two-point functions, such as correlation functions, using memory- and workload-parallelism. A Parallel Merger Tree Generator leverages the parallel halo finders in yt, such as Parallel HOP, to build the merger tree of halos in a cosmological simulation, and outputs the result to a SQLite database for simple and powerful data extraction. A Star Particle Analysis toolkit takes a group of star particles and can output the rate of formation as a function of time, and/or a synthetic Spectral Energy Distribution (S.E.D.) using the Bruzual and Charlot (2003) data tables. Finally, a Halo Mass Function toolkit takes as input a list of halo masses and can output the halo mass function for the halos, as well as an analytical fit for those halos using several previously published fits.
Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism,
1985-10-07
[Unrecoverable scanned-form residue; legible fragments identify the report as from the Massachusetts Institute of Technology Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, by G. Agha et al.]
Richard Tran Mills; Jitendra Kumar; Forrest M. Hoffman; William W. Hargrove; Joseph P. Spruce; Steven P. Norman
2013-01-01
We investigated the use of principal components analysis (PCA) to visualize dominant patterns and identify anomalies in a multi-year land surface phenology data set (231 m × 231 m normalized difference vegetation index (NDVI) values derived from the Moderate Resolution Imaging Spectroradiometer (MODIS)) used for detecting threats to forest health in the conterminous...
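A minimal sketch of the PCA step with scikit-learn is given below, assuming a synthetic matrix of per-pixel NDVI time series; it illustrates the dominant-pattern/anomaly idea, not the authors' processing chain.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in: 10,000 pixels x 46 NDVI composites per year.
rng = np.random.default_rng(1)
seasonal = np.sin(np.linspace(0, 2 * np.pi, 46))
ndvi = 0.5 + 0.3 * seasonal + 0.02 * rng.standard_normal((10_000, 46))

pca = PCA(n_components=3)
scores = pca.fit_transform(ndvi)

# Pixels far from the origin in score space are candidate anomalies.
dist = np.linalg.norm(scores, axis=1)
print("variance explained:", pca.explained_variance_ratio_)
print("most anomalous pixels:", np.argsort(dist)[-5:])
```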
Development of Parallel Architectures for Sensor Array Processing. Volume 1
1993-08-01
required for the DOA estimation [1-7]. The Multiple Signal Classification (MUSIC) [1] and the Estimation of Signal Parameters by Rotational... manifold and the estimated subspace. Although MUSIC is a high-resolution algorithm, it has several drawbacks, including the fact that complete knowledge of... thoroughly, the MUSIC algorithm was selected to develop special-purpose hardware for real-time computation. A summary of the MUSIC algorithm is as follows
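The fragment above references the standard MUSIC direction-of-arrival estimator; a minimal NumPy sketch of its core steps (sample covariance, noise subspace, pseudospectrum scan) is given below for a uniform linear array, with all array parameters chosen for illustration.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, grid=np.linspace(-90, 90, 361)):
    """MUSIC pseudospectrum for an M-element uniform linear array.
    X: M x N matrix of array snapshots; d: element spacing in wavelengths."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    w, V = np.linalg.eigh(R)                  # eigenvalues ascending
    En = V[:, : M - n_sources]                # noise subspace
    p = []
    for theta in np.deg2rad(grid):
        a = np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))
        p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return grid, np.array(p)

# Two sources at -20 and 30 degrees on an 8-element array.
M, N = 8, 200
rng = np.random.default_rng(2)
angles = np.deg2rad([-20, 30])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(angles)))
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
grid, p = music_spectrum(X, n_sources=2)
# Crude peak pick; a full implementation would search for local maxima.
print("peaks near:", grid[np.argsort(p)[-2:]])
```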
IceT users' guide and reference.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.
2011-01-01
The Image Composition Engine for Tiles (IceT) is a high-performance sort-last parallel rendering library. In addition to providing accelerated rendering for a standard display, IceT provides the unique ability to generate images for tiled displays. The overall resolution of the display may be several times larger than any viewport that may be rendered by a single machine. This document is an overview of the user interface to IceT.
NASA Astrophysics Data System (ADS)
Gruenwald, J.; Kocoń, D.; Khikhlukha, D.
2018-03-01
In order to introduce spatially resolved measurements of the plasma density in a laser-accelerated plasma, a novel concept is proposed in this work. We suggest the use of an array of miniaturized Rogowski coils to measure the current contributions parallel to the laser beam with a spatial resolution in the sub-mm range. The principle of the experimental setup is shown in 3-D CAD models. The coils are coaxial to the plasma channel (e.g. a hydrogen-filled capillary, which is frequently used in laser-plasma acceleration experiments). This plasma diagnostic method is simple and robust, and it is a passive measurement technique that does not disturb the plasma itself. Because such coils rely on Biot-Savart induction, they allow the contributions of currents parallel and perpendicular to the laser beam to be separated. Rogowski coils do not have a ferromagnetic core; hence, non-linear effects resulting from such a core are negligible, which increases the reliability of the obtained data. They also allow the diagnosis of transient signals that carry high currents (up to several hundred kA) on very short timescales. Within this paper, some predictions about the time resolution of such coils are presented along with simple theoretical considerations.
Miyashita, Kiyoteru; Oude Vrielink, Timo; Mylonas, George
2018-05-01
Endomicroscopy (EM) provides high-resolution, non-invasive histological tissue information and can be used for scanning large areas of tissue to assess cancerous and pre-cancerous lesions and their margins. However, current robotic solutions do not provide the accuracy and force sensitivity required to perform safe and accurate tissue scanning. A new surgical instrument has been developed that uses a cable-driven parallel mechanism (CDPM) to manipulate an EM probe. End-effector forces are determined by measuring the tensions in each cable. As a result, the instrument allows a contact force to be applied accurately to the tissue, while at the same time offering high-resolution and highly repeatable probe movement. Force sensitivities of 0.2 and 0.6 N were found for the 1- and 2-DoF image acquisition methods, respectively. A back-stepping technique can be used when higher force sensitivity is required for the acquisition of high-quality tissue images. This method was successful in acquiring images on ex vivo liver tissue. The proposed approach offers high force sensitivity and precise control, which is essential for robotic EM. The technical benefits of the current system can also be used for other surgical robotic applications, including safe autonomous control, haptic feedback, and palpation.
Performance-scalable volumetric data classification for online industrial inspection
NASA Astrophysics Data System (ADS)
Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.
2002-03-01
Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders-of-magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability, by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.
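As an illustration of the ellipse-detection stage described above, the sketch below uses scikit-image's Hough ellipse transform on a synthetic slice; the thresholds and sizes are arbitrary, and the paper's accelerated two-stage variant and correlation step are not reproduced here.

```python
import numpy as np
from skimage.draw import ellipse_perimeter
from skimage.transform import hough_ellipse

# Synthetic 2D slice of a solid-of-revolution: one elliptical cross-section.
img = np.zeros((80, 80), dtype=np.uint8)
rr, cc = ellipse_perimeter(40, 40, 14, 24)
img[rr, cc] = 1

# Accumulate ellipse candidates; a second stage would correlate detected
# ellipses across slices for invariant component classification.
result = hough_ellipse(img, accuracy=10, threshold=20, min_size=8, max_size=30)
if result.size:
    result.sort(order='accumulator')
    best = result[-1]
    print("center=(%.0f, %.0f) axes=(%.0f, %.0f)" %
          (best['yc'], best['xc'], best['a'], best['b']))
```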
Wu, Jiayi; Ma, Yong-Bei; Congdon, Charles; Brett, Bevin; Chen, Shuobing; Xu, Yaofang; Ouyang, Qi; Mao, Youdong
2017-01-01
Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes with decreasing signal-to-noise-ratio (SNR) in the image data, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization.
A ring transducer system for medical ultrasound research.
Waag, Robert C; Fedewa, Russell J
2006-10-01
An ultrasonic ring transducer system has been developed for experimental studies of scattering and imaging. The transducer consists of 2048 rectangular elements with a 2.5-MHz center frequency, a 67% −6-dB bandwidth, and a 0.23-mm pitch arranged in a 150-mm-diameter ring with a 25-mm elevation. At the center frequency, the element size is 0.30λ × 42λ and the pitch is 0.38λ. The system has 128 parallel transmit channels, 16 parallel receive channels, a 2048:128 transmit multiplexer, a 2048:16 receive multiplexer, independently programmable transmit waveforms with 8-bit resolution, and receive amplifiers with time-variable gain independently programmable over a 40-dB range. Receive signals are sampled at 20 MHz with 12-bit resolution. Arbitrary transmit and receive apertures can be synthesized. Calibration software minimizes system nonidealities caused by noncircularity of the ring and element-to-element response differences. Application software enables the system to be used by specification of high-level parameters in control files, from which low-level hardware-dependent parameters are derived by specialized code. Use of the system is illustrated by producing focused and steered beams, synthesizing a spatially limited plane wave, measuring angular scattering, and forming B-scan images.
NASA Astrophysics Data System (ADS)
Takashima, Ichiro; Kajiwara, Riichi; Murano, Kiyo; Iijima, Toshio; Morinaka, Yasuhiro; Komobuchi, Hiroyoshi
2001-04-01
We have designed and built a high-speed CCD imaging system for monitoring neural activity in an exposed animal cortex stained with a voltage-sensitive dye. Two types of custom-made CCD sensors were developed for this system. The type I chip has a resolution of 2664 (H) × 1200 (V) pixels and a wide imaging area of 28.1 × 13.8 mm, while the type II chip has 1776 × 1626 pixels and an active imaging area of 20.4 × 18.7 mm. The CCD arrays were constructed with multiple output amplifiers in order to accelerate the readout rate. The two chips were divided into either 24 (I) or 16 (II) distinct areas that were driven in parallel. The parallel CCD outputs were digitized by 12-bit A/D converters and then stored in the frame memory. The frame memory was constructed with synchronous DRAM modules, which provided a capacity of 128 MB per channel. On-chip and on-memory binning methods were incorporated into the system, e.g., this enabled us to capture 444 × 200 pixel images for periods of 36 seconds at a rate of 500 frames/second. This system was successfully used to visualize neural activity in the cortices of rats, guinea pigs, and monkeys.
He, Guanglin; Wang, Zheng; Wang, Mengge; Luo, Tao; Liu, Jing; Zhou, You; Gao, Bo; Hou, Yiping
2018-06-04
Ancestry inference based on single nucleotide polymorphisms (SNPs) with marked allele frequency differences among diverse populations (called ancestry-informative SNPs, AISNPs) has developed rapidly with advances in massively parallel sequencing (MPS). Despite a decade of exploration and broad public interest in the peopling of East Asia, the genetic landscape of Chinese Silk Road populations based on AISNPs is still little known. In this work, 206 unrelated individuals from the Chinese Uyghur and Hui populations were first genotyped with 165 AISNPs (the Precision ID Ancestry Panel) using the Ion Torrent PGM system. The ethnic origins of the two investigated populations, their population structures, and their genetic relationships were subsequently investigated. The 165-AISNP panel not only can differentiate the Uyghur and Hui populations but also has potential applications in individual identification. Comprehensive population comparisons and admixture estimates demonstrated a predominantly higher European-related ancestry (36.30%) in Uyghurs than in Huis (3.66%). Overall, the Precision ID Ancestry Panel can provide good resolution at the intercontinental level, but has limitations for genetically homogeneous populations, such as the Hui and Han. Additional population-specific AISNPs remain necessary to achieve finer-scale resolution within geographically proximate populations in East Asia.
MHD code using multi graphical processing units: SMAUG+
NASA Astrophysics Data System (ADS)
Gyenge, N.; Griffiths, M. K.; Erdélyi, R.
2018-01-01
This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques, and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with the Brio-Wu shock tube simulations, with a model size of 800 employing up to 10 GPUs. Based on the test results, we observed speed-ups and slow-downs, depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.
Cloud computing-based TagSNP selection algorithm for human genome data.
Hung, Che-Lun; Chen, Wen-Pei; Hua, Guan-Jie; Zheng, Huiru; Tsai, Suh-Jen Jane; Lin, Yaw-Ling
2015-01-05
Single nucleotide polymorphisms (SNPs) play a fundamental role in human genetic variation and are used in medical diagnostics, phylogeny construction, and drug design. They provide the highest-resolution genetic fingerprint for identifying disease associations and human features. Haplotypes are regions of linked genetic variants that are closely spaced on the genome and tend to be inherited together. Genetics research has revealed SNPs within certain haplotype blocks that introduce few distinct common haplotypes into most of the population. Haplotype block structures are used in association-based methods to map disease genes. In this paper, we propose an efficient algorithm for identifying haplotype blocks in the genome. In chromosomal haplotype data retrieved from the HapMap project website, the proposed algorithm identified longer haplotype blocks than an existing algorithm. To enhance its performance, we extended the proposed algorithm into a parallel algorithm that copies data in parallel via the Hadoop MapReduce framework. The proposed MapReduce-parallelized combinatorial algorithm performed well on real-world data obtained from the HapMap dataset; the improvement in computational efficiency was proportional to the number of processors used.
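To illustrate the flavor of haplotype-block identification, here is a small Python sketch that greedily extends a block while a few distinct haplotypes still cover most samples; the coverage threshold and toy data are invented, and this is far simpler than the paper's combinatorial algorithm.

```python
from collections import Counter

def common_haplotype_coverage(haps, start, end, top_k=4):
    """Fraction of samples covered by the top_k distinct haplotypes
    over SNP columns [start, end)."""
    counts = Counter(h[start:end] for h in haps)
    top = sum(c for _, c in counts.most_common(top_k))
    return top / len(haps)

def greedy_blocks(haps, coverage=0.8):
    """Greedily grow blocks: extend while a few common haplotypes
    still account for at least `coverage` of the samples."""
    n_snps = len(haps[0])
    blocks, start = [], 0
    while start < n_snps:
        end = start + 1
        while end < n_snps and common_haplotype_coverage(haps, start, end + 1) >= coverage:
            end += 1
        blocks.append((start, end))
        start = end
    return blocks

# Toy phased haplotypes (strings of 0/1 alleles across 8 SNPs).
haps = ["00110101", "00110110", "00111101", "11010101",
        "11010110", "00110101", "11011101", "00110110"]
print(greedy_blocks(haps))
```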
NASA Technical Reports Server (NTRS)
Keyes, David E.; Smooke, Mitchell D.
1987-01-01
A parallelized finite difference code based on the Newton method for systems of nonlinear elliptic boundary value problems in two dimensions is analyzed in terms of computational complexity and parallel efficiency. An approximate cost function depending on 15 dimensionless parameters is derived for algorithms based on stripwise and boxwise decompositions of the domain and a one-to-one assignment of the strip or box subdomains to processors. The sensitivity of the cost functions to the parameters is explored in regions of parameter space corresponding to model small-order systems with inexpensive function evaluations and also a coupled system of nineteen equations with very expensive function evaluations. The algorithm was implemented on the Intel Hypercube, and some experimental results for the model problems with stripwise decompositions are presented and compared with the theory. In the context of computational combustion problems, multiprocessors of either message-passing or shared-memory type may be employed with stripwise decompositions to realize speedup of O(n), where n is mesh resolution in one direction, for reasonable n.
Super-resolved Parallel MRI by Spatiotemporal Encoding
Schmidt, Rita; Baishya, Bikash; Ben-Eliezer, Noam; Seginer, Amir; Frydman, Lucio
2016-01-01
Recent studies described an alternative “ultrafast” scanning method based on spatiotemporal encoding (SPEN) principles. SPEN demonstrates numerous potential advantages over EPI-based alternatives, at no additional expense in experimental complexity. An important aspect that SPEN still needs to achieve to provide a competitive acquisition alternative entails exploiting parallel imaging algorithms without compromising its proven capabilities. The present work introduces a combination of multi-band frequency-swept pulses simultaneously encoding multiple partial fields-of-view, together with a new algorithm merging super-resolved SPEN image reconstruction with SENSE multiple-receiver methods. The ensuing approach enables one to reduce both the excitation and acquisition times of ultrafast SPEN acquisitions by the customary acceleration factor R, without compromising the ensuing spatial resolution, SAR deposition, or the capability to operate in multi-slice mode. The performance of these new single-shot imaging sequences and their ancillary algorithms was explored on phantoms and human volunteers at 3T. The gains of the parallelized approach were particularly evident when dealing with heterogeneous systems subject to major T2/T2* effects, as is the case in single-scan imaging near tissue/air interfaces. PMID:24120293
Effect of surface morphology on drag and roughness sublayer in flows over regular roughness elements
NASA Astrophysics Data System (ADS)
Placidi, Marco; Ganapathisubramani, Bharathram
2014-11-01
The effects of systematically varied roughness morphology on bulk drag and on the spatial structure of turbulent boundary layers are examined by performing a series of wind tunnel experiments. In this study, rough surfaces consisting of regularly and uniformly distributed LEGO™ bricks are employed. Twelve different patterns are adopted in order to methodically examine the individual effects of frontal solidity (λF, frontal area of the roughness elements per unit wall-parallel area) and plan solidity (λP, plan area of roughness elements per unit wall-parallel area), on both the bulk drag and the turbulence structure. A floating element friction balance based on Krogstad & Efros (2010) was designed and manufactured to measure the drag generated by the different surfaces. In parallel, high resolution planar and stereoscopic Particle Image Velocimetry (PIV) was applied to investigate the flow features. This talk will focus on the effects of each solidity parameter on the bulk drag and attempt to relate the observed trends to the flow structures in the roughness sublayer. Currently at City University London.
Podoleanu, Adrian Gh; Bradu, Adrian
2013-08-12
Conventional spectral domain interferometry (SDI) methods suffer from the need of data linearization. When applied to optical coherence tomography (OCT), conventional SDI methods are limited in their 3D capability, as they cannot deliver direct en-face cuts. Here we introduce a novel SDI method, which eliminates these disadvantages. We denote this method as Master - Slave Interferometry (MSI), because a signal is acquired by a slave interferometer for an optical path difference (OPD) value determined by a master interferometer. The MSI method radically changes the main building block of an SDI sensor and of a spectral domain OCT set-up. The serially provided signal in conventional technology is replaced by multiple signals, a signal for each OPD point in the object investigated. This opens novel avenues in parallel sensing and in parallelization of signal processing in 3D-OCT, with applications in high-resolution medical imaging and microscopy investigation of biosamples. Eliminating the need of linearization leads to lower cost OCT systems and opens potential avenues in increasing the speed of production of en-face OCT images in comparison with conventional SDI.
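The essence of the master-slave idea can be sketched as correlating each measured channeled spectrum with a set of masks pre-recorded at known OPD values; the NumPy toy below assumes idealized cosine masks and is only a conceptual illustration of the method.

```python
import numpy as np

k = np.linspace(6.0, 8.0, 1024)       # wavenumber axis (rad/um), illustrative
opds = np.arange(0.0, 200.0, 2.0)     # OPD values "chosen by the master" (um)

# Masks: one reference channeled spectrum per OPD value.
masks = np.cos(np.outer(opds, k))

def msi_signals(spectrum, masks):
    """One output per OPD point: correlation of the measured
    spectrum with each pre-recorded mask (no linearization needed)."""
    return masks @ spectrum

# Toy object: a single reflector at OPD = 60 um.
spectrum = np.cos(60.0 * k)
a = msi_signals(spectrum - spectrum.mean(),
                masks - masks.mean(axis=1, keepdims=True))
print("strongest response at OPD =", opds[np.argmax(np.abs(a))], "um")
```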
Research on the Application of Fast-steering Mirror in Stellar Interferometer
NASA Astrophysics Data System (ADS)
Mei, R.; Hu, Z. W.; Xu, T.; Sun, C. S.
2017-07-01
For a stellar interferometer, the fast-steering mirror (FSM) is widely utilized to correct wavefront tilt caused by atmospheric turbulence and internal instrumental vibration, owing to its high resolution and fast response frequency. In this study, the non-coplanar error between the FSM and actuator deflection axes introduced by manufacture, assembly, and adjustment is analyzed. Via a numerical method, the additional optical path difference (OPD) caused by the above factors is studied, and its effects on the tracking accuracy of the stellar interferometer are also discussed. On the other hand, the starlight parallelism between the beams of the two arms is one of the main factors in the loss of fringe visibility. By analyzing the influence of wavefront tilt caused by atmospheric turbulence on fringe visibility, a simple and efficient real-time correction scheme for starlight parallelism is proposed based on a single array detector. The feasibility of this scheme is demonstrated by a laboratory experiment. The results show that, after correction by the fast-steering mirror, the starlight parallelism preliminarily meets the requirements of the stellar interferometer.
[Basic examination of image characteristics in Multivane].
Ohshita, Tsuyoshi
2011-01-01
Image degradation caused by patient movement is a persistent problem in MRI examinations. To address this problem, an imaging procedure named Multivane was developed. Its principle is similar to the periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) method. Multivane provides strong body-motion correction; however, it fills k-space differently from the conventional Cartesian method. A basic examination of the image characteristics of Multivane and Cartesian acquisitions was performed with a stationary phantom. The examined items were SNR, CNR, and spatial resolution. As a result, SNR was higher with Multivane, whereas contrast and spatial resolution were higher with Cartesian. It is important to recognize these features when using Multivane.
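To visualize how PROPELLER/Multivane-style acquisition covers k-space, the sketch below generates rotated rectangular "blades" of sample coordinates in NumPy; blade width, line count, and rotation step are illustrative assumptions.

```python
import numpy as np

def propeller_blades(n_blades=8, lines_per_blade=16, samples=64):
    """Return k-space sample coordinates for rotated blades.
    Each blade is a narrow Cartesian strip through the k-space center;
    rotating the strips oversamples the center, enabling motion correction."""
    kx = np.linspace(-0.5, 0.5, samples)            # readout direction
    ky = np.linspace(-0.08, 0.08, lines_per_blade)  # blade width
    gx, gy = np.meshgrid(kx, ky)
    blade = np.stack([gx.ravel(), gy.ravel()])      # 2 x (lines*samples)
    coords = []
    for b in range(n_blades):
        t = np.pi * b / n_blades                    # rotation angle
        rot = np.array([[np.cos(t), -np.sin(t)],
                        [np.sin(t),  np.cos(t)]])
        coords.append(rot @ blade)
    return np.concatenate(coords, axis=1)

pts = propeller_blades()
print(pts.shape)  # (2, n_blades * lines_per_blade * samples)
```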
Set-up and demonstration of a Low Energy Electron Magnetometer (LEEM)
NASA Technical Reports Server (NTRS)
Rayborn, G. H.
1986-01-01
Described are the design, construction and test results of a Low Energy Electron Magnetometer (LEEM). The electron source is a commercial electron gun capable of providing several microamperes of electron beam. These electrons, after acceleration through a selected potential difference of 100-300 volts, are sent through two 30-degree second-order focussing parallel plate electrostatic analyzers. The first analyzer acts as a monochromator located in the field-free space. It is capable of providing energy resolution of better than 10⁻³. The second analyzer, located in the test field region, acts as the detector for electrons deflected by the test field. The entire magnetometer system is expected to have a resolution of 1 part in 1000 or better.
Acoustic phonons in chrysotile asbestos probed by high-resolution inelastic x-ray scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mamontov, Eugene; Vakhrushev, S. B.; Kumzerov, Yu. A.
Acoustic phonons in an individual, oriented fiber of chrysotile asbestos (chemical formula Mg₃Si₂O₅(OH)₄) were observed at room temperature in an inelastic x-ray measurement with very high (meV) resolution. The x-ray scattering vector was aligned along the [1 0 0] direction of the reciprocal lattice, nearly parallel to the long axis of the fiber. The latter coincides with the [1 0 0] direction of the direct lattice and the axes of the nano-channels. The data were analyzed using a damped harmonic oscillator model. Analysis of the phonon dispersion in the first Brillouin zone yielded a longitudinal sound velocity of (9200 ± 600) m/s.
Lightweight and High-Resolution Single Crystal Silicon Optics for X-ray Astronomy
NASA Technical Reports Server (NTRS)
Zhang, William W.; Biskach, Michael P.; Chan, Kai-Wing; Mazzarella, James R.; McClelland, Ryan S.; Riveros, Raul E.; Saha, Timo T.; Solly, Peter M.
2016-01-01
We describe an approach to building mirror assemblies for next generation X-ray telescopes. It incorporates knowledge and lessons learned from building existing telescopes, including Chandra, XMM-Newton, Suzaku, and NuSTAR, as well as from our direct experience of the last 15 years developing mirror technology for the Constellation-X and International X-ray Observatory mission concepts. This approach combines single crystal silicon and precision polishing, thus has the potential of achieving the highest possible angular resolution with the least possible mass. Moreover, it is simple, consisting of several technical elements that can be developed independently in parallel. Lastly, it is highly amenable to mass production, therefore enabling the making of telescopes of very large photon collecting areas.
NASA Technical Reports Server (NTRS)
Strong, James P.
1987-01-01
A local area matching algorithm was developed on the Massively Parallel Processor (MPP). It is an iterative technique that first matches coarse or low resolution areas and at each iteration performs matches of higher resolution. Results so far show that when good matches are possible in the two images, the MPP algorithm matches corresponding areas as well as a human observer. To aid in developing this algorithm, a control or shell program was developed for the MPP that allows interactive experimentation with various parameters and procedures to be used in the matching process. (This would not be possible without the high speed of the MPP). With the system, optimal techniques can be developed for different types of matching problems.
NASA Technical Reports Server (NTRS)
Hussaini, M. Y. (Editor); Kumar, A. (Editor); Salas, M. D. (Editor)
1993-01-01
The purpose here is to assess the state of the art in the areas of numerical analysis that are particularly relevant to computational fluid dynamics (CFD), to identify promising new developments in various areas of numerical analysis that will impact CFD, and to establish a long-term perspective focusing on opportunities and needs. Overviews are given of discretization schemes, computational fluid dynamics, algorithmic trends in CFD for aerospace flow field calculations, simulation of compressible viscous flow, and massively parallel computation. Also discussed are acceleration methods, spectral and high-order methods, multi-resolution and subcell resolution schemes, and inherently multidimensional schemes.
30-lens interferometer for high energy x-rays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyubomirskiy, M., E-mail: lyubomir@esrf.fr; Snigireva, I., E-mail: irina@esrf.fr; Vaughan, G.
2016-07-27
We report a hard X-ray multilens interferometer consisting of 30 parallel compound refractive lenses (CRLs). Under coherent illumination, each CRL creates a diffraction-limited focal spot, a secondary source. The overlapping of coherent beams from these sources results in an interference pattern with a rich longitudinal structure, in accordance with the Talbot imaging formalism. The proposed interferometer was experimentally tested at the ESRF beamline ID11 for photon energies of 32 keV and 65 keV. The fundamental and fractional Talbot images were recorded with a high-resolution CCD camera. An effective source size on the order of 15 µm was determined from the first Talbot image, proving that the multilens interferometer can be used as a high-resolution beam diagnostic tool.
Anti-parallel Filament Flows and Bright Dots Observed in the EUV with Hi-C
NASA Technical Reports Server (NTRS)
Alexander, Caroline E.; Regnier, Stephane; Walsh, Robert; Winebarger, Amy
2013-01-01
Hi-C obtained the highest spatial and temporal resolution observations ever taken in the solar EUV corona. Hi-C reveals dynamics and structure at the limit of its temporal and spatial resolution. Hi-C observed various fine-scale features that SDO/AIA could not pick out. For the first time in the corona, Hi-C revealed magnetic braiding and component reconnection consistent with coronal heating. Hi-C shows evidence of reconnection and heating in several different regions and magnetic configurations with plasma being heated to temperatures of 0.3-8 × 10⁶ K. Surprisingly, many of the first results highlight plasma at temperatures that are not at the peak of the response functions.
Characteristics of a high pressure gas proportional counter filled with xenon
NASA Technical Reports Server (NTRS)
Sakurai, H.; Ramsey, B. D.
1991-01-01
The characteristics of a conventional cylindrical-geometry proportional counter filled with high-pressure xenon gas up to 10 atm were investigated for use as a detector in hard X-ray astronomy. With a 2 percent methane gas mixture, the energy resolutions at 10 atm were 9.8 percent and 7.3 percent for 22 keV and 60 keV X-rays, respectively. From calculations of the Townsend ionization coefficient, it is shown that proportional counters at high pressure operate at a weaker reduced electric field than low-pressure counters. The characteristics of a parallel-grid proportional counter at low pressure showed a similar pressure dependence. It is suggested that this is the fundamental reason for the degradation of resolution observed with increasing pressure.
Fast 3D Surface Extraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sewell, Christopher Meyer; Patchett, John M.; Ahrens, James P.
Ocean scientists searching for isosurfaces and/or thresholds of interest in high-resolution 3D datasets previously faced a tedious and time-consuming interactive exploration experience. PISTON research and development activities are enabling ocean scientists to rapidly and interactively explore isosurfaces and thresholds in their large data sets using a simple slider, with real-time calculation and visualization of these features. Ocean scientists can now visualize more features in less time, helping them gain a better understanding of the high-resolution data sets they work with on a daily basis. Isosurface timings (512³ grid): VTK 7.7 s, Parallel VTK (48-core) 1.3 s, PISTON OpenMP (48-core) 0.2 s, PISTON CUDA (Quadro 6000) 0.1 s.
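For comparison with those timings, a single-threaded CPU isosurface extraction can be reproduced in a few lines with scikit-image's marching cubes; the synthetic volume and level below are placeholders (PISTON itself is a C++/CUDA library, not shown here).

```python
import time
import numpy as np
from skimage.measure import marching_cubes

# Synthetic 256^3 scalar field (a radial blob); 512^3 works the same
# but needs several hundred MB of memory.
n = 256
x = np.linspace(-1, 1, n, dtype=np.float32)
xx, yy, zz = np.meshgrid(x, x, x, indexing="ij")
vol = xx**2 + yy**2 + zz**2

t0 = time.perf_counter()
verts, faces, normals, values = marching_cubes(vol, level=0.5)
print(f"{len(verts)} vertices in {time.perf_counter() - t0:.2f} s")
```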
Highly multiplexed subcellular RNA sequencing in situ
Lee, Je Hyuk; Daugharthy, Evan R.; Scheiman, Jonathan; Kalhor, Reza; Ferrante, Thomas C.; Yang, Joyce L.; Terry, Richard; Jeanty, Sauveur S. F.; Li, Chao; Amamoto, Ryoji; Peters, Derek T.; Turczyk, Brian M.; Marblestone, Adam H.; Inverso, Samuel A.; Bernard, Amy; Mali, Prashant; Rios, Xavier; Aach, John; Church, George M.
2014-01-01
Understanding the spatial organization of gene expression with single nucleotide resolution requires localizing the sequences of expressed RNA transcripts within a cell in situ. Here we describe fluorescent in situ RNA sequencing (FISSEQ), in which stably cross-linked cDNA amplicons are sequenced within a biological sample. Using 30-base reads from 8,742 genes in situ, we examined RNA expression and localization in human primary fibroblasts using a simulated wound healing assay. FISSEQ is compatible with tissue sections and whole mount embryos, and reduces the limitations of optical resolution and noisy signals on single molecule detection. Our platform enables massively parallel detection of genetic elements, including gene transcripts and molecular barcodes, and can be used to investigate cellular phenotype, gene regulation, and environment in situ. PMID:24578530
Complete description of the optical path difference of a novel spectral zooming imaging spectrometer
NASA Astrophysics Data System (ADS)
Li, Jie; Wu, Haiying; Qi, Chun
2018-03-01
A complete description of the optical path difference of a novel spectral zooming imaging spectrometer (SZIS) is presented. SZIS is designed around two identical Wollaston prisms with an adjustable air gap. Thus, interferograms with arbitrary spectral resolution and a greatly reduced spectral image size can be conveniently formed to adapt to different application requirements. Ray-tracing modeling at arbitrary incidence with a quasi-parallel-plate approximation scheme is proposed to analyze the optical path difference of SZIS. To characterize the apparatus, exact expressions for the corresponding spectral resolution and field of view are derived and analyzed in detail. We also present a comparison of calculation and experiment to prove the validity of the theory.
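For orientation, the textbook small-angle expression for the optical path difference of a single Wollaston prism (which the paper generalizes to two prisms with an adjustable air gap and arbitrary incidence) is, under the assumption of near-normal incidence,

\[
  \Delta(x) \approx 2\,(n_o - n_e)\,x \tan\alpha ,
\]

where \(x\) is the lateral distance from the prism center, \(\alpha\) is the wedge angle, and \(n_o, n_e\) are the ordinary and extraordinary refractive indices. The paper's quasi-parallel-plate scheme refines this first-order formula for oblique rays.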
CHICO2, a two-dimensional pixelated parallel-plate avalanche counter
Wu, C. Y.; Cline, D.; Hayes, A.; ...
2016-01-27
CHICO2 (Compact Heavy Ion COunter) is a large solid-angle, charged-particle detector array developed to provide both θ and Φ angle resolutions matching those of GRETINA (Gamma-Ray Energy Tracking In-beam Nuclear Array). CHICO2 was successfully tested at Argonne National Laboratory, where it was fielded as an auxiliary detector with GRETINA for γ-ray spectroscopic studies of nuclei using a ²⁵²Cf spontaneous fission source, stable beams, and radioactive beams from CARIBU. In field tests of ⁷²,⁷⁶Ge beams on a 0.5 mg/cm² ²⁰⁸Pb target at sub-barrier energy, CHICO2 provided charged-particle angle resolutions (FWHM) of 1.55° in θ and 2.47° in Φ. This achieves the design goal for both coordinates assuming a beam-spot size (>3 mm) and the target thickness (>0.5 mg/cm²). The combined angular resolution of GRETINA/CHICO2 resulted in a Doppler-shift-corrected energy resolution of 0.60% for 1 MeV coincident de-excitation γ-rays. This is nearly a factor of two improvement in resolution and sensitivity compared to Gammasphere/CHICO. Kinematically coincident detection of scattered ions by CHICO2 still maintains a mass resolution (ΔM/M) of ~5%, which enhances isolation of scattered weak beams of interest from scattered contaminant beams.
Computational imaging through a fiber-optic bundle
NASA Astrophysics Data System (ADS)
Lodhi, Muhammad A.; Dumas, John Paul; Pierce, Mark C.; Bajwa, Waheed U.
2017-05-01
Compressive sensing (CS) has proven to be a viable method for reconstructing high-resolution signals from low-resolution measurements. Integrating CS principles into an optical system allows for higher-resolution imaging using lower-resolution sensor arrays. In contrast to prior works on CS-based imaging, our focus in this paper is on imaging through fiber-optic bundles, in which manufacturing constraints limit individual fiber spacing to around 2 μm. This limitation essentially renders fiber-optic bundles as low-resolution sensors with relatively few resolvable points per unit area. These fiber bundles are often used in minimally invasive medical instruments for viewing tissue at macro and microscopic levels. While the compact nature and flexibility of fiber bundles allow for excellent tissue access in vivo, imaging through fiber bundles does not provide the fine details of tissue features that are demanded in some medical situations. Our hypothesis is that adapting existing CS principles to fiber bundle-based optical systems will overcome the resolution limitation inherent in fiber-bundle imaging. In a previous paper we examined the practical challenges involved in implementing a highly parallel version of the single-pixel camera while focusing on synthetic objects. This paper extends the same architecture to fiber-bundle imaging under incoherent illumination and addresses some practical issues associated with imaging physical objects. Additionally, we model the optical non-idealities in the system to reduce modeling errors.
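A minimal NumPy sketch of the CS reconstruction step is shown below, recovering a sparse signal from fewer measurements than unknowns with ISTA (iterative soft thresholding); sizes, sparsity, and regularization are illustrative, and the paper's actual fiber-bundle sensing model is not reproduced here.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient step
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(3)
n, m, k = 256, 96, 8                       # unknowns, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true
x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```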
Pandit, Prachi; Johnston, Samuel M; Qi, Yi; Story, Jennifer; Nelson, Rendon; Johnson, G Allan
2013-04-01
The liver is a common site for distant metastases in colon and rectal cancer. Numerous clinical studies have analyzed the relative merits of different imaging modalities for detection of liver metastases. Several exciting new therapies are being investigated in preclinical models. But technical challenges in preclinical imaging make it difficult to translate conclusions from clinical studies to the preclinical environment. This study addresses the technical challenges of preclinical magnetic resonance imaging (MRI) and micro-computed tomography (CT) to enable comparison of state-of-the-art methods for following metastatic liver disease. We optimized two promising preclinical protocols to enable a parallel longitudinal study tracking metastatic human colon carcinoma growth in a mouse model: T2-weighted MRI using two-shot PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) and contrast-enhanced micro-CT using a liposomal contrast agent. Both methods were tailored for high throughput with attention to animal support and anesthesia to limit biological stress. Each modality has its strengths. Micro-CT permitted more rapid acquisition (<10 minutes) with the highest spatial resolution (88-micron isotropic resolution), but detection of metastatic lesions requires the use of a blood pool contrast agent, which could introduce a confound in the evaluation of new therapies. MRI was slower (30 minutes) and had lower anisotropic spatial resolution, but it eliminates the need for a contrast agent, and the contrast-to-noise between tumor and normal parenchyma was higher, making earlier detection of small lesions possible. Both methods supported a relatively high-throughput, longitudinal study of the development of metastatic lesions.
High-resolution, high-throughput imaging with a multibeam scanning electron microscope
Eberle, A. L.; Mikula, S.; Schalek, R.; Lichtman, J.; Knothe Tate, M. L.; Zeidler, D.
2015-01-01
Electron–electron interactions and detector bandwidth limit the maximal imaging speed of single-beam scanning electron microscopes. We use multiple electron beams in a single column and detect secondary electrons in parallel to increase the imaging speed by close to two orders of magnitude and demonstrate imaging for a variety of samples ranging from biological brain tissue to semiconductor wafers. Lay Description The composition of our world and our bodies on the very small scale has always fascinated people, making them search for ways to make this visible to the human eye. Where light microscopes reach their resolution limit at a certain magnification, electron microscopes can go beyond. But their capability of visualizing extremely small features comes at the cost of a very small field of view. Some of the questions researchers seek to answer today deal with the ultrafine structure of brains, bones or computer chips. Capturing these objects with electron microscopes takes a lot of time – maybe even exceeding the time span of a human being – or new tools that do the job much faster. A new type of scanning electron microscope scans with 61 electron beams in parallel, acquiring 61 adjacent images of the sample at the same time a conventional scanning electron microscope captures one of these images. In principle, the multibeam scanning electron microscope’s field of view is 61 times larger and therefore coverage of the sample surface can be accomplished in less time. This enables researchers to think about large-scale projects, for example in the rather new field of connectomics. A very good introduction to imaging a brain at nanometre resolution can be found within course material from Harvard University on http://www.mcb80x.org/# as featured media entitled ‘connectomics’. PMID:25627873
Parastar, Hadi; Akvan, Nadia
2014-03-13
In the present contribution, a new combination of multivariate curve resolution-correlation optimized warping (MCR-COW) with trilinear parallel factor analysis (PARAFAC) is developed to exploit the second-order advantage in complex chromatographic measurements. In MCR-COW, the complexity of the chromatographic data is reduced by arranging the data in a column-wise augmented matrix, analyzing it with a bilinear MCR model, and aligning the resolved elution profiles using COW in a component-wise manner. The aligned chromatographic data are then decomposed with the trilinear PARAFAC model to extract pure chromatographic and spectroscopic information. The performance of this strategy is evaluated using simulated and real high-performance liquid chromatography-diode array detection (HPLC-DAD) datasets. The results show that MCR-COW can efficiently correct elution time shifts of target compounds that are completely overlapped by coeluting interferences in complex chromatographic data. In addition, PARAFAC analysis of the aligned data has the advantage of uniquely decomposing overlapped chromatographic peaks, allowing identification and quantification of the target compounds in the presence of interferences. Finally, to confirm the reliability of the proposed strategy, MCR-COW-PARAFAC is compared with the frequently used methods PARAFAC, COW-PARAFAC, multivariate curve resolution-alternating least squares (MCR-ALS), and MCR-COW-MCR. In most cases, MCR-COW-PARAFAC showed an improvement in terms of lack of fit (LOF), relative error (RE), and spectral correlation coefficients.
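The final trilinear step of the pipeline can be sketched with an off-the-shelf PARAFAC implementation. The fragment below assumes MCR-COW has already produced an aligned (samples x elution time x wavelength) tensor, and it runs tensorly on random stand-in data; shapes, rank, and variable names are illustrative assumptions, not the paper's code.

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    # Hypothetical aligned HPLC-DAD data: 8 runs x 200 time points x 50 channels.
    rng = np.random.default_rng(1)
    aligned = tl.tensor(rng.random((8, 200, 50)))

    # Rank = assumed number of target analytes plus coeluting interferences.
    cp = parafac(aligned, rank=3, n_iter_max=500, tol=1e-8)
    concentrations, elution, spectra = cp.factors

    # Under a trilinear model, each column of `elution` is a pure elution
    # profile and each column of `spectra` a pure spectrum (the second-order
    # advantage: quantification despite unmodeled interferences).
    print(concentrations.shape, elution.shape, spectra.shape)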
Large scale cardiac modeling on the Blue Gene supercomputer.
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U; Weiss, Daniel L; Seemann, Gunnar; Dössel, Olaf; Pitman, Michael C; Rice, John J
2008-01-01
Multi-scale, multi-physics heart models have not yet achieved high accuracy and resolution in both model detail and spatial discretization, owing to the computational limitations of current systems. We propose a framework to compute large-scale cardiac models. Decomposition of anatomical data into segments distributed across a parallel computer is carried out by optimal recursive bisection (ORB). The algorithm takes into account a computational load parameter that must be adjusted according to the cell models used. The diffusion term is realized by the monodomain equations. The anatomical dataset comprised both ventricles of the Visible Female dataset at 0.2 mm resolution; heterogeneous anisotropy was included in the computation. Model weights as input for the decomposition and load balancing were set to (a) 1 for tissue and 0 for non-tissue elements, and (b) 10 for tissue and 1 for non-tissue elements. Scaling results for 512, 1024, 2048, 4096, and 8192 computational nodes were obtained for 10 ms of simulation time. The simulations were carried out on an IBM Blue Gene/L parallel computer. A 1 s simulation was then carried out on 2048 nodes for the optimal model load. Load balance did not differ significantly across computational nodes, even when the number of data elements assigned to each node differed greatly. Since the ORB algorithm did not account for the computational load of communication cycles, the speedup is close to optimal for computation time but not optimal overall, due to communication overhead. Nevertheless, simulation times were reduced from 87 minutes on 512 nodes to 11 minutes on 8192 nodes. This work demonstrates that simulations of the presented detailed cardiac model can be run within hours for the simulation of a heart beat.
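The weighted-decomposition idea can be illustrated with a deliberately simplified, one-dimensional recursive bisection in Python: elements carry the load weights of scheme (b) above (tissue 10, non-tissue 1) and are split into contiguous chunks of near-equal total load. The real ORB operates on 3-D anatomical data; this toy version shows only the splitting criterion.

    def bisect(weights, parts):
        """Split `weights` into `parts` contiguous chunks of near-equal load."""
        if parts == 1 or len(weights) <= 1:
            return [list(range(len(weights)))]
        left_parts = parts // 2
        target = sum(weights) * left_parts / parts
        acc, split = 0.0, 0
        for w in weights:
            if acc + w > target:
                break
            acc += w
            split += 1
        split = max(1, min(split, len(weights) - 1))  # keep halves non-empty
        left = bisect(weights[:split], left_parts)
        right = [[split + j for j in chunk]
                 for chunk in bisect(weights[split:], parts - left_parts)]
        return left + right

    # 0 = non-tissue (weight 1), 1 = tissue (weight 10), as in scheme (b).
    elements = [0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1]
    w = [10 if e else 1 for e in elements]
    for rank, chunk in enumerate(bisect(w, 4)):
        print(f"node {rank}: elements {chunk}, load {sum(w[i] for i in chunk)}")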
Next-Generation Climate Modeling Science Challenges for Simulation, Workflow and Analysis Systems
NASA Astrophysics Data System (ADS)
Koch, D. M.; Anantharaj, V. G.; Bader, D. C.; Krishnan, H.; Leung, L. R.; Ringler, T.; Taylor, M.; Wehner, M. F.; Williams, D. N.
2016-12-01
We will present two examples of current and future high-resolution climate-modeling research that are challenging existing simulation run-time I/O, model-data movement, storage and publishing, and analysis systems. In each case, we will consider lessons learned as current workflow systems are broken by these large-data science challenges, as well as strategies to repair or rebuild those systems. First, we consider the science and workflow challenges posed by the CMIP6 multi-model HighResMIP, involving around a dozen modeling groups performing quarter-degree simulations in 3-member ensembles for 100 years, with high-frequency (1-6 hourly) diagnostics, which is expected to generate over 4 PB of data. An example of science derived from these experiments will be to study how resolution affects the ability of models to capture extreme events such as hurricanes or atmospheric rivers. Expected methods to transfer (using parallel Globus) and analyze (using the parallel TECA software tools) HighResMIP data for such feature tracking by the DOE CASCADE project will be presented. A second example comes from the Accelerated Climate Modeling for Energy (ACME) project, which is currently addressing challenges involving multiple century-scale, coupled, high-resolution (quarter-degree) climate simulations on DOE Leadership Class computers. ACME anticipates producing over 5 PB of data during the next two years of simulations, in order to investigate the drivers of water-cycle changes, sea-level rise, and carbon-cycle evolution. The ACME workflow, from simulation to data transfer, storage, analysis, and publication, will be presented, along with current and planned methods to accelerate it, including run-time diagnostics and server-side analysis to avoid moving large datasets.
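The data volumes quoted above make clear why server-side analysis matters: a quick, assumption-laden estimate of raw transfer times (illustrative bandwidths and efficiency, not measured figures) shows that wholesale data movement does not scale.

    # Rough transfer-time estimates for the quoted archives; all rates assumed.
    PB = 1e15  # bytes

    def transfer_days(volume_bytes, gbit_per_s, efficiency=0.8):
        """Days to move `volume_bytes` at a sustained fraction of line rate."""
        bytes_per_s = gbit_per_s * 1e9 / 8 * efficiency
        return volume_bytes / bytes_per_s / 86400

    for name, volume in (("HighResMIP", 4 * PB), ("ACME", 5 * PB)):
        for rate_gbps in (10, 100):
            days = transfer_days(volume, rate_gbps)
            print(f"{name}: {volume / PB:.0f} PB at {rate_gbps} Gbit/s "
                  f"~ {days:.0f} days")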
PAGANI Toolkit: Parallel graph-theoretical analysis package for brain network big data.
Du, Haixiao; Xia, Mingrui; Zhao, Kang; Liao, Xuhong; Yang, Huazhong; Wang, Yu; He, Yong
2018-05-01
The recent collection of unprecedented quantities of neuroimaging data with high spatial resolution has led to brain network big data. However, a toolkit for fast and scalable computational solutions is still lacking. Here, we developed the PArallel Graph-theoretical ANalysIs (PAGANI) Toolkit based on a hybrid central processing unit-graphics processing unit (CPU-GPU) framework with a graphical user interface to facilitate the mapping and characterization of high-resolution brain networks. Specifically, the toolkit provides flexible parameters for users to customize computations of graph metrics in brain network analyses. As an empirical example, the PAGANI Toolkit was applied to individual voxel-based brain networks with ∼200,000 nodes that were derived from a resting-state fMRI dataset of 624 healthy young adults from the Human Connectome Project. Using a personal computer, the toolkit completed all computations in ∼27 h for one subject, markedly less than the 118 h required with a single-thread implementation. The voxel-based functional brain networks exhibited prominent small-world characteristics and densely connected hubs, which were mainly located in the medial and lateral fronto-parietal cortices. Moreover, the female group had significantly higher modularity and nodal betweenness centrality, mainly in the medial/lateral fronto-parietal and occipital cortices, than the male group. Significant correlations between intelligence quotient and nodal metrics were also observed in several frontal regions. Collectively, the PAGANI Toolkit shows high computational performance and good scalability for analyzing connectome big data and provides a friendly interface without complicated configuration of computing environments, thereby facilitating high-resolution connectomics research in health and disease.
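The nodal metrics reported above (betweenness centrality, modularity) are standard graph-theoretical quantities; the toy sketch below computes them with networkx on a small synthetic small-world graph. It illustrates the metrics only; PAGANI's CPU-GPU implementation and API are not reproduced here, and the graph is a made-up stand-in.

    import networkx as nx
    from networkx.algorithms.community import (greedy_modularity_communities,
                                               modularity)

    # Toy stand-in for a thresholded functional connectivity network.
    G = nx.watts_strogatz_graph(n=1000, k=10, p=0.1, seed=42)

    betweenness = nx.betweenness_centrality(G)      # nodal betweenness
    communities = greedy_modularity_communities(G)  # community detection
    Q = modularity(G, communities)                  # network modularity

    hubs = sorted(betweenness, key=betweenness.get, reverse=True)[:5]
    print(f"modularity Q = {Q:.3f}, top-5 hub nodes: {hubs}")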
NASA Astrophysics Data System (ADS)
Doisneau, François; Arienti, Marco; Oefelein, Joseph C.
2017-01-01
For sprays, as described by a kinetic disperse-phase model strongly coupled to the Navier-Stokes equations, the resolution strategy is constrained by accuracy objectives, robustness needs, and the computing architecture. In order to leverage the good properties of the Eulerian formalism, we introduce a deterministic particle-based numerical method to solve transport in physical space that is simple to adapt to the many types of closures and moment systems. The method is inspired by the semi-Lagrangian schemes developed for gas dynamics. We show how semi-Lagrangian formulations are relevant for a disperse phase far from equilibrium, where particle-particle coupling barely influences the transport, i.e., when particle pressure is negligible; the particle behavior is then close to free streaming. The new method uses the assumption of parcel transport and avoids computing fluxes and their limiters, which makes it robust. Because it is a deterministic resolution method, it requires no effort on statistical convergence, noise control, or post-processing. All couplings are performed on data in the form of Eulerian fields, which allows one to use efficient algorithms and to anticipate the computational load. This makes the method both accurate and efficient in the context of parallel computing. After a complete verification of the new transport method on various academic test cases, we demonstrate the overall strategy's ability to solve a strongly coupled liquid jet with fine spatial resolution, and we apply it to a high-fidelity Large Eddy Simulation of a dense spray flow. A fuel spray is simulated after atomization at Diesel-engine combustion-chamber conditions. The large, parallel, strongly coupled computation proves the efficiency of the method for dense, polydisperse, reacting spray flows.
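The free-streaming parcel idea can be made concrete with a one-dimensional toy: parcels advance at their own velocities (negligible particle pressure, so no fluxes or limiters are needed) and are then deposited back onto an Eulerian grid. This sketches only the transport concept; the paper's closures, moment systems, and two-way coupling are omitted, and all numbers are assumptions.

    import numpy as np

    n_cells, L, dt, steps = 100, 1.0, 5e-4, 200
    edges = np.linspace(0.0, L, n_cells + 1)

    # Parcels: position, velocity, and the droplet count each represents.
    rng = np.random.default_rng(3)
    x = rng.uniform(0.1, 0.3, size=20_000)
    v = rng.normal(5.0, 1.0, size=20_000)
    weight = np.ones_like(x)

    for _ in range(steps):
        x += v * dt  # free streaming: no flux computation, hence no limiters
        x %= L       # periodic box, for simplicity

    # Deposit parcel weights onto the grid to recover an Eulerian density field.
    density, _ = np.histogram(x, bins=edges, weights=weight)
    density /= L / n_cells
    print(f"peak number density: {density.max():.1f} per unit length")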