Wang, Xingmei; Liu, Shu; Liu, Zhipeng
2017-01-01
This paper proposes a combination of non-local spatial information and a quantum-inspired shuffled frog leaping algorithm to detect underwater objects in sonar images. Specifically, for the first time, the problem of an inappropriate filtering degree parameter, which commonly occurs with non-local spatial information and seriously degrades denoising performance in sonar images, is solved with a novel filtering degree parameter. Then, a quantum-inspired shuffled frog leaping algorithm based on a new search mechanism (QSFLA-NSM) is proposed to detect objects in sonar images precisely and quickly. Each frog individual is directly encoded by real numbers, which greatly simplifies the evolution process of the quantum-inspired shuffled frog leaping algorithm (QSFLA). Meanwhile, a fitness function combining intra-class difference with inter-class difference is adopted to evaluate frog positions more accurately. On this basis, drawing on an analysis of quantum-behaved particle swarm optimization (QPSO) and the shuffled frog leaping algorithm (SFLA), a new search mechanism is developed to improve search ability and detection accuracy while further reducing time complexity. Finally, the results of comparative experiments using the original sonar images, the UCI data sets and the benchmark functions demonstrate the effectiveness and adaptability of the proposed method. PMID:28542266
Koh, Wonryull; Blackwell, Kim T
2011-04-21
Stochastic simulation of reaction-diffusion systems enables the investigation of stochastic events arising from the small numbers and heterogeneous distribution of molecular species in biological cells. Stochastic variations in intracellular microdomains and in diffusional gradients play a significant part in the spatiotemporal activity and behavior of cells. Although an exact stochastic simulation that simulates every individual reaction and diffusion event gives a most accurate trajectory of the system's state over time, it can be too slow for many practical applications. We present an accelerated algorithm for discrete stochastic simulation of reaction-diffusion systems designed to improve the speed of simulation by reducing the number of time-steps required to complete a simulation run. This method is unique in that it employs two strategies that have not been incorporated in existing spatial stochastic simulation algorithms. First, diffusive transfers between neighboring subvolumes are based on concentration gradients. This treatment necessitates sampling of only the net or observed diffusion events from higher to lower concentration gradients rather than sampling all diffusion events regardless of local concentration gradients. Second, we extend the non-negative Poisson tau-leaping method that was originally developed for speeding up nonspatial or homogeneous stochastic simulation algorithms. This method calculates each leap time in a unified step for both reaction and diffusion processes while satisfying the leap condition that the propensities do not change appreciably during the leap and ensuring that leaping does not cause molecular populations to become negative. Numerical results are presented that illustrate the improvement in simulation speed achieved by incorporating these two new strategies.
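The gradient-based diffusive-transfer idea described above can be sketched in a few lines. The following is a minimal 1-D illustration rather than the authors' implementation; the rate constant d, the leap time tau, and the capping of transfers at the gradient size are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def net_diffusion_step(x, d, tau):
    # One leap of gradient-based diffusion on a 1-D chain of subvolumes:
    # only net transfers from higher to lower concentration are sampled,
    # mirroring the "observed diffusion events" strategy described above.
    x = x.copy()
    for i in range(len(x) - 1):
        grad = x[i] - x[i + 1]                  # local concentration difference
        if grad == 0:
            continue
        # capping at |grad| keeps populations non-negative in this sketch
        n = min(rng.poisson(d * abs(grad) * tau), abs(grad))
        if grad > 0:                            # net flow is always "downhill"
            x[i] -= n
            x[i + 1] += n
        else:
            x[i] += n
            x[i + 1] -= n
    return x

x = np.array([100, 40, 10, 0])
for _ in range(200):
    x = net_diffusion_step(x, d=0.1, tau=0.1)
print(x)                                        # counts relax toward uniform
```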
Besozzi, Daniela; Pescini, Dario; Mauri, Giancarlo
2014-01-01
Tau-leaping is a stochastic simulation algorithm that efficiently reconstructs the temporal evolution of biological systems, modeled according to the stochastic formulation of chemical kinetics. The analysis of dynamical properties of these systems in physiological and perturbed conditions usually requires the execution of a large number of simulations, leading to high computational costs. Since each simulation can be executed independently of the others, a massive parallelization of tau-leaping can yield substantial reductions of the overall running time. The emerging field of General Purpose Graphics Processing Units (GPGPU) provides power-efficient high-performance computing at a relatively low cost. In this work we introduce cuTauLeaping, a stochastic simulator of biological systems that makes use of GPGPU computing to execute multiple parallel tau-leaping simulations, by fully exploiting Nvidia's Fermi GPU architecture. We show how a considerable computational speedup is achieved on the GPU by partitioning the execution of tau-leaping into multiple separated phases, and we describe how to avoid some implementation pitfalls related to the scarcity of memory resources on the GPU streaming multiprocessors. Our results show that cuTauLeaping largely outperforms the CPU-based tau-leaping implementation when the number of parallel simulations increases, with a break-even point depending directly on the size of the biological system and on the complexity of its emergent dynamics. In particular, cuTauLeaping is exploited to investigate the probability distribution of bistable states in the Schlögl model, and to carry out a bidimensional parameter sweep analysis to study the oscillatory regimes in the Ras/cAMP/PKA pathway in S. cerevisiae. PMID:24663957
A hierarchical exact accelerated stochastic simulation algorithm
NASA Astrophysics Data System (ADS)
Orendorff, David; Mjolsness, Eric
2012-12-01
A new algorithm, "HiER-leap" (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled "blocks" and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms.
A mesh partitioning algorithm for preserving spatial locality in arbitrary geometries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nivarti, Girish V., E-mail: g.nivarti@alumni.ubc.ca; Salehi, M. Mahdi; Bushe, W. Kendal
2015-01-15
Highlights: •An algorithm for partitioning computational meshes is proposed. •The Morton order space-filling curve is modified to achieve improved locality. •A spatial locality metric is defined to compare results with existing approaches. •Results indicate improved performance of the algorithm in complex geometries. -- Abstract: A space-filling curve (SFC) is a proximity preserving linear mapping of any multi-dimensional space and is widely used as a clustering tool. Equi-sized partitioning of an SFC ignores the loss in clustering quality that occurs due to inaccuracies in the mapping. Often, this results in poor locality within partitions, especially for the conceptually simple Morton order curves. We present a heuristic that improves partition locality in arbitrary geometries by slicing a Morton order curve at points where spatial locality is sacrificed. In addition, we develop algorithms that evenly distribute points to the extent possible while maintaining spatial locality. A metric is defined to estimate relative inter-partition contact as an indicator of communication in parallel computing architectures. Domain partitioning tests have been conducted on geometries relevant to turbulent reactive flow simulations. The results obtained highlight the performance of our method as an unsupervised and computationally inexpensive domain partitioning tool.
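The Morton (Z-order) key that the heuristic slices is computed by interleaving coordinate bits. A small sketch of the 2-D case (the bit width is an arbitrary choice):

```python
def morton2d(x, y, bits=16):
    # Interleave the bits of (x, y) into a single Z-order key; points that
    # are close in 2-D tend to be close along the key, which is the
    # clustering property the partitioning heuristic above starts from.
    key = 0
    for i in range(bits):
        key |= (x >> i & 1) << (2 * i) | (y >> i & 1) << (2 * i + 1)
    return key

pts = [(3, 5), (3, 6), (10, 1)]
print(sorted(pts, key=lambda p: morton2d(*p)))  # traversal in Z-order
```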
2006-11-30
except in the simplest of circumstances. This belief has driven the computational research community to devise clever kinetic Monte Carlo (KMC ... KMC routine is very slow; cutting the error in half requires four times the number of simulations. Since a single simulation may contain huge numbers...subintervals [9–14]. Both approximation types, system partitioning and τ-leaping, have been very successful in increasing the scope of problems to which KMC
Binomial tau-leap spatial stochastic simulation algorithm for applications in chemical kinetics.
Marquez-Lago, Tatiana T; Burrage, Kevin
2007-09-14
In cell biology, cell signaling pathway problems are often tackled with deterministic temporal models, well-mixed stochastic simulators, and/or hybrid methods. But, in fact, three-dimensional stochastic spatial modeling of reactions happening inside the cell is needed in order to fully understand these cell signaling pathways. This is because noise effects, low molecular concentrations, and spatial heterogeneity can all affect the cellular dynamics. However, there are ways in which important effects can be accounted for without going to the extent of using highly resolved spatial simulators (such as single-particle software), hence reducing the overall computation time significantly. We present a new coarse-grained modified version of the next subvolume method that allows the user to consider both diffusion and reaction events in relatively long simulation time spans as compared with the original method and other commonly used fully stochastic computational methods. Benchmarking of the simulation algorithm was performed through comparison with the next subvolume method and well-mixed models (MATLAB), as well as stochastic particle reaction and transport simulations (CHEMCELL, Sandia National Laboratories). Additionally, we construct a model based on a set of chemical reactions in the epidermal growth factor receptor pathway. For this particular application and a bistable chemical system example, we analyze and outline the advantages of our presented binomial tau-leap spatial stochastic simulation algorithm, in terms of efficiency and accuracy, in scenarios of both molecular homogeneity and heterogeneity.
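The non-negativity property that motivates binomial sampling is easy to demonstrate on a toy 1-D diffusion leap. This is a schematic of the binomial idea only, with an assumed per-leap jump probability p_move; it is not the paper's coarse-grained next-subvolume scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def binomial_diffusion_leap(x, p_move):
    # One leap of diffusion on a 1-D subvolume chain. A binomial draw can
    # never move more molecules than are present, which is the built-in
    # non-negativity of binomial tau-leaping (requires 2*p_move <= 1;
    # reaction events are omitted).
    movers = rng.binomial(x, 2 * p_move)        # molecules that jump this leap
    to_right = rng.binomial(movers, 0.5)        # each mover picks a direction
    to_left = movers - to_right
    new = x - movers
    new[1:] += to_right[:-1]
    new[:-1] += to_left[1:]
    new[0] += to_left[0]                        # reflecting boundaries
    new[-1] += to_right[-1]
    return new

x = np.array([0, 0, 1000, 0, 0])
for _ in range(100):
    x = binomial_diffusion_leap(x, p_move=0.1)
print(x, x.sum())                               # mass is conserved exactly
```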
Hybrid stochastic simulation of reaction-diffusion systems with slow and fast dynamics.
Strehl, Robert; Ilie, Silvana
2015-12-21
In this paper, we present a novel hybrid method to simulate discrete stochastic reaction-diffusion models arising in biochemical signaling pathways. We study moderately stiff systems, for which we can partition each reaction or diffusion channel into either a slow or fast subset, based on its propensity. Numerical approaches missing this distinction are often limited with respect to computational run time or approximation quality. We design an approximate scheme that remedies these pitfalls by using a new blending strategy of the well-established inhomogeneous stochastic simulation algorithm and the tau-leaping simulation method. The advantages of our hybrid simulation algorithm are demonstrated on three benchmarking systems, with special focus on approximation accuracy and efficiency.
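The slow/fast split that drives such a blending strategy can be sketched as a simple threshold rule; the threshold here is an assumed tuning knob, not the authors' exact partitioning criterion.

```python
def partition_channels(propensities, threshold):
    # Split reaction/diffusion channels into slow and fast subsets by
    # propensity; a hybrid scheme would advance the fast subset by
    # tau-leaping and the slow subset by the (inhomogeneous) SSA.
    slow = [j for j, a in enumerate(propensities) if a <= threshold]
    fast = [j for j, a in enumerate(propensities) if a > threshold]
    return slow, fast

print(partition_channels([0.2, 150.0, 3.1, 2400.0], threshold=10.0))
# -> ([0, 2], [1, 3])
```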
Wang, Gai-Ge; Feng, Qingjiang; Zhao, Xiang-Jun
2014-01-01
An effective hybrid cuckoo search algorithm (CS) with an improved shuffled frog-leaping algorithm (ISFLA) is put forward for solving the 0-1 knapsack problem. First of all, within the framework of SFLA, an improved frog-leap operator is designed that combines the effect of global optimal information on frog leaping and information exchange between frog individuals with a small-probability genetic mutation. Subsequently, in order to improve the convergence speed and enhance the exploitation ability, a novel CS model is proposed that considers the specific advantages of Lévy flights and the frog-leap operator. Furthermore, the greedy transform method is used to repair infeasible solutions and optimize feasible ones. Finally, numerical simulations are carried out on six different types of 0-1 knapsack instances, and the comparative results show the effectiveness of the proposed algorithm and its ability to achieve good-quality solutions, outperforming the binary cuckoo search, the binary differential evolution, and the genetic algorithm. PMID:25404940
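The greedy repair step mentioned above is straightforward to illustrate for the 0-1 knapsack; the sketch below assumes repair by value density (drop the worst items while infeasible, then refill), which is one common reading of the greedy transform, not necessarily the paper's exact rule.

```python
def greedy_repair(bits, weights, values, capacity):
    # Rank items by value density; drop low-density chosen items until the
    # solution is feasible, then greedily add high-density items that fit.
    order = sorted(range(len(bits)), key=lambda i: values[i] / weights[i])
    bits = list(bits)
    load = sum(w for b, w in zip(bits, weights) if b)
    for i in order:                     # repair: drop worst while overweight
        if load <= capacity:
            break
        if bits[i]:
            bits[i] = 0
            load -= weights[i]
    for i in reversed(order):           # optimize: add best items that fit
        if not bits[i] and load + weights[i] <= capacity:
            bits[i] = 1
            load += weights[i]
    return bits

print(greedy_repair([1, 1, 1, 1], weights=[4, 3, 2, 1],
                    values=[10, 4, 7, 1], capacity=6))   # -> [1, 0, 1, 0]
```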
Systems and methods for knowledge discovery in spatial data
Obradovic, Zoran; Fiez, Timothy E.; Vucetic, Slobodan; Lazarevic, Aleksandar; Pokrajac, Dragoljub; Hoskinson, Reed L.
2005-03-08
Systems and methods are provided for knowledge discovery in spatial data, as well as for optimizing recipes used in spatial environments such as may be found in precision agriculture. A spatial data analysis and modeling module is provided which allows users to interactively and flexibly analyze and mine spatial data. The spatial data analysis and modeling module applies spatial data mining algorithms through a number of steps. The data loading and generation module obtains or generates spatial data and allows for basic partitioning. The inspection module provides basic statistical analysis. The preprocessing module smoothes and cleans the data and allows for basic manipulation of the data. The partitioning module provides for more advanced data partitioning. The prediction module applies regression and classification algorithms on the spatial data. The integration module enhances prediction methods by combining and integrating models. The recommendation module provides the user with site-specific recommendations as to how to optimize a recipe for a spatial environment, such as a fertilizer recipe for an agricultural field.
Automatic partitioning of head CTA for enabling segmentation
NASA Astrophysics Data System (ADS)
Suryanarayanan, Srikanth; Mullick, Rakesh; Mallya, Yogish; Kamath, Vidya; Nagaraj, Nithin
2004-05-01
Radiologists perform a CT Angiography procedure to examine vascular structures and associated pathologies such as aneurysms. Volume rendering is used to exploit volumetric capabilities of CT that provides complete interactive 3-D visualization. However, bone forms an occluding structure and must be segmented out. The anatomical complexity of the head creates a major challenge in the segmentation of bone and vessel. An analysis of the head volume reveals varying spatial relationships between vessel and bone that can be separated into three sub-volumes: "proximal", "middle", and "distal". The "proximal" and "distal" sub-volumes contain good spatial separation between bone and vessel (carotid referenced here). Bone and vessel appear contiguous in the "middle" partition that remains the most challenging region for segmentation. The partition algorithm is used to automatically identify these partition locations so that different segmentation methods can be developed for each sub-volume. The partition locations are computed using bone, image entropy, and sinus profiles along with a rule-based method. The algorithm is validated on 21 cases (varying volume sizes, resolution, clinical sites, pathologies) using ground truth identified visually. The algorithm is also computationally efficient, processing a 500+ slice volume in 6 seconds (an impressive 0.01 seconds per slice), which makes it an attractive algorithm for pre-processing large volumes. The partition algorithm is integrated into the segmentation workflow. Fast and simple algorithms are implemented for processing the "proximal" and "distal" partitions. Complex methods are restricted to only the "middle" partition. The partition-enabled segmentation has been successfully tested and results are shown from multiple cases.
Fast neural solution of a nonlinear wave equation
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad; Barhen, Jacob
1992-01-01
A neural algorithm for rapidly simulating a certain class of nonlinear wave phenomena using analog VLSI neural hardware is presented and applied to the Korteweg-de Vries partial differential equation. The corresponding neural architecture is obtained from a pseudospectral representation of the spatial dependence, along with a leap-frog scheme for the temporal evolution. Numerical simulations demonstrated the robustness of the proposed approach.
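A compact sketch of that discretization: pseudospectral derivatives in space, a leap-frog step in time, applied to the KdV equation u_t + 6uu_x + u_xxx = 0. Grid size, time step, and the initial profile are illustrative choices, and the single Euler step used to seed the two-level scheme is one common convention.

```python
import numpy as np

N, L, dt = 64, 2 * np.pi, 1e-5
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)      # spectral wavenumbers

def rhs(u):
    # pseudospectral evaluation of -6*u*u_x - u_xxx
    u_hat = np.fft.fft(u)
    ux = np.fft.ifft(1j * k * u_hat).real
    uxxx = np.fft.ifft((1j * k) ** 3 * u_hat).real
    return -6.0 * u * ux - uxxx

u_prev = 2.0 / np.cosh(x - np.pi) ** 2          # soliton-like initial data
u = u_prev + dt * rhs(u_prev)                   # Euler step to start leap-frog
for _ in range(2000):
    # leap-frog in time: u^{n+1} = u^{n-1} + 2*dt*f(u^n)
    u_prev, u = u, u_prev + 2 * dt * rhs(u)
print(float(u.max()))                           # the pulse stays coherent
```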
NASA Astrophysics Data System (ADS)
Arya, L. D.; Koshti, Atul
2018-05-01
This paper investigates Distributed Generation (DG) capacity optimization at a given location, based on an incremental voltage sensitivity criterion, for sub-transmission networks. A Modified Shuffled Frog Leaping Algorithm (MSFLA) has been used to optimize the DG capacity. An induction generator model of DG (wind-based generating units) has been considered for the study, using the standard IEEE-30 bus test system. The obtained results are also validated against the shuffled frog leaping algorithm and a modified version of bare bones particle swarm optimization (BBExp). For the real power loss minimization problem, MSFLA has been found to be more efficient than the other two algorithms.
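For reference, the basic frog-leap move that MSFLA and similar variants modify looks roughly as follows. This is the textbook SFLA worst-frog update on a toy minimization problem, not the modified algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def sfla_memeplex_step(frogs, fitness, lo, hi):
    # The worst frog leaps toward the memeplex best; if the leap does not
    # improve it, the frog is repositioned at random (the "censor" rule).
    f = np.array([fitness(v) for v in frogs])
    wi, bi = f.argmax(), f.argmin()             # minimization
    trial = np.clip(frogs[wi] + rng.random() * (frogs[bi] - frogs[wi]), lo, hi)
    if fitness(trial) < f[wi]:
        frogs[wi] = trial
    else:
        frogs[wi] = lo + rng.random(frogs.shape[1]) * (hi - lo)
    return frogs

sphere = lambda v: float((v ** 2).sum())
frogs = rng.uniform(-5, 5, (10, 3))
for _ in range(500):
    frogs = sfla_memeplex_step(frogs, sphere, -5.0, 5.0)
print(min(sphere(v) for v in frogs))            # approaches 0
```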
NASA Astrophysics Data System (ADS)
Chen, Guangye; Luis, Chacon; Bird, Robert; Stark, David; Yin, Lin; Albright, Brian
2017-10-01
Leap-frog based explicit algorithms, either "energy-conserving" or "momentum-conserving", do not conserve energy discretely. Time-centered fully implicit algorithms can conserve discrete energy exactly, but introduce large dispersion errors in the light-wave modes, regardless of timestep size. This can lead to intolerable simulation errors where highly accurate light propagation is needed (e.g. laser-plasma interactions, LPI). In this study, we selectively combine the leap-frog and Crank-Nicolson methods to produce a low-dispersion, exactly energy- and charge-conserving PIC algorithm. Specifically, we employ the leap-frog method for the Maxwell equations, and the Crank-Nicolson method for the particle equations. Such an algorithm admits exact global energy conservation, exact local charge conservation, and preserves the dispersion properties of the leap-frog method for the light wave. The algorithm has been implemented in a code named iVPIC, based on the VPIC code developed at LANL. We will present numerical results that demonstrate the properties of the scheme on sample test problems (e.g. a Weibel instability run for 10^7 timesteps, and LPI applications).
Spatial partitioning algorithms for data visualization
NASA Astrophysics Data System (ADS)
Devulapalli, Raghuveer; Quist, Mikael; Carlsson, John Gunnar
2013-12-01
Spatial partitions of an information space are frequently used for data visualization. Weighted Voronoi diagrams are among the most popular ways of dividing a space into partitions. However, the problem of computing such a partition efficiently can be challenging. For example, a natural objective is to select the weights so as to force each Voronoi region to take on a pre-defined area, which might represent the relevance or market share of an informational object. In this paper, we present an easy and fast algorithm to compute these weights of the Voronoi diagrams. Unlike previous approaches whose convergence properties are not well-understood, we give a formulation to the problem based on convex optimization with excellent performance guarantees in theory and practice. We also show how our technique can be used to control the shape of these partitions. More specifically we show how to convert undesirable skinny and long regions into fat regions while maintaining the areas of the partitions. As an application, we use these to visualize the amount of website traffic for the top 101 websites.
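A crude way to see the weight-selection problem is a fixed-point iteration that grows the weights of undersized cells. The sketch below estimates power-diagram cell areas on a grid and nudges weights toward target areas; it is a heuristic stand-in for the paper's convex formulation, with an assumed step size.

```python
import numpy as np

rng = np.random.default_rng(3)

def power_cell_areas(sites, w, grid=200):
    # Assign each grid point to the site minimizing ||p - s||^2 - w_s
    # (the additively weighted / power diagram rule) and count fractions.
    g = np.linspace(0, 1, grid)
    X, Y = np.meshgrid(g, g)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)
    d2 = ((pts[:, None, :] - sites[None, :, :]) ** 2).sum(-1) - w
    return np.bincount(d2.argmin(axis=1), minlength=len(sites)) / len(pts)

sites = rng.random((5, 2))
target = np.full(5, 0.2)                 # equal target areas
w = np.zeros(5)
for _ in range(100):
    w += 0.05 * (target - power_cell_areas(sites, w))  # grow small cells
print(np.round(power_cell_areas(sites, w), 3))
```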
An adaptive tau-leaping method for stochastic simulations of reaction-diffusion systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Padgett, Jill M. A.; Ilie, Silvana, E-mail: silvana@ryerson.ca
2016-03-15
Stochastic modelling is critical for studying many biochemical processes in a cell, in particular when some reacting species have low population numbers. For many such cellular processes the spatial distribution of the molecular species plays a key role. The evolution of spatially heterogeneous biochemical systems with some species in low amounts is accurately described by the mesoscopic model of the Reaction-Diffusion Master Equation. The Inhomogeneous Stochastic Simulation Algorithm provides an exact strategy to numerically solve this model, but it is computationally very expensive on realistic applications. We propose a novel adaptive time-stepping scheme for the tau-leaping method for approximating the solution of the Reaction-Diffusion Master Equation. This technique combines effective strategies for variable time-stepping with path preservation to reduce the computational cost, while maintaining the desired accuracy. The numerical tests on various examples arising in applications show the improved efficiency achieved by the new adaptive method.
New "Tau-Leap" Strategy for Accelerated Stochastic Simulation.
Ramkrishna, Doraiswami; Shu, Che-Chi; Tran, Vu
2014-12-10
The "Tau-Leap" strategy for stochastic simulations of chemical reaction systems due to Gillespie and co-workers has had considerable impact on various applications. This strategy is reexamined with Chebyshev's inequality for random variables as it provides a rigorous probabilistic basis for a measured τ-leap thus adding significantly to simulation efficiency. It is also shown that existing strategies for simulation times have no probabilistic assurance that they satisfy the τ-leap criterion while the use of Chebyshev's inequality leads to a specified degree of certainty with which the τ-leap criterion is satisfied. This reduces the loss of sample paths which do not comply with the τ-leap criterion. The performance of the present algorithm is assessed, with respect to one discussed by Cao et al. ( J. Chem. Phys. 2006 , 124 , 044109), a second pertaining to binomial leap (Tian and Burrage J. Chem. Phys. 2004 , 121 , 10356; Chatterjee et al. J. Chem. Phys. 2005 , 122 , 024112; Peng et al. J. Chem. Phys. 2007 , 126 , 224109), and a third regarding the midpoint Poisson leap (Peng et al., 2007; Gillespie J. Chem. Phys. 2001 , 115 , 1716). The performance assessment is made by estimating the error in the histogram measured against that obtained with the so-called stochastic simulation algorithm. It is shown that the current algorithm displays notably less histogram error than its predecessor for a fixed computation time and, conversely, less computation time for a fixed accuracy. This computational advantage is an asset in repetitive calculations essential for modeling stochastic systems. The importance of stochastic simulations is derived from diverse areas of application in physical and biological sciences, process systems, and economics, etc. Computational improvements such as those reported herein are therefore of considerable significance.
Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao
2018-06-01
To improve the compression rates for lossless compression of medical images, an efficient algorithm based on irregular segmentation and region-based prediction is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least-squares (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits the spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
Multi-jagged: A scalable parallel spatial partitioning algorithm
Deveci, Mehmet; Rajamanickam, Sivasankaran; Devine, Karen D.; ...
2015-03-18
Geometric partitioning is fast and effective for load-balancing dynamic applications, particularly those requiring geometric locality of data (particle methods, crash simulations). We present, to our knowledge, the first parallel implementation of a multidimensional-jagged geometric partitioner. In contrast to the traditional recursive coordinate bisection algorithm (RCB), which recursively bisects subdomains perpendicular to their longest dimension until the desired number of parts is obtained, our algorithm does recursive multi-section with a given number of parts in each dimension. By computing multiple cut lines concurrently and intelligently deciding when to migrate data while computing the partition, we minimize data movement compared to efficient implementations of recursive bisection. We demonstrate the algorithm's scalability and quality relative to the RCB implementation in Zoltan on both real and synthetic datasets. Our experiments show that the proposed algorithm performs and scales better than RCB in terms of run-time without degrading the load balance. Lastly, our implementation partitions 24 billion points into 65,536 parts within a few seconds and exhibits near perfect weak scaling up to 6K cores.
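The multi-section idea (several cuts per dimension rather than repeated bisection) can be sketched serially with quantile cuts; unit point weights, four parts per axis, and independent y-cuts inside each x-slab are illustrative choices, with all the parallelism of the paper omitted.

```python
import numpy as np

def multisection_cuts(coords, parts):
    # Cut one coordinate axis into `parts` near-equally weighted slabs;
    # with unit weights the cut lines are simply quantiles.
    qs = np.quantile(coords, np.linspace(0, 1, parts + 1)[1:-1])
    return np.searchsorted(qs, coords)

pts = np.random.default_rng(5).random((1000, 2))
part_x = multisection_cuts(pts[:, 0], 4)        # 4 slabs in x
part = np.empty(len(pts), dtype=int)
for s in range(4):                              # jagged: y-cuts differ per slab
    m = part_x == s
    part[m] = 4 * s + multisection_cuts(pts[m, 1], 4)
print(np.bincount(part))                        # 16 near-equal parts
```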
Leapfrog variants of iterative methods for linear algebra equations
NASA Technical Reports Server (NTRS)
Saylor, Paul E.
1988-01-01
Two iterative methods are considered, Richardson's method and a general second order method. For both methods, a variant of the method is derived for which only even numbered iterates are computed. The variant is called a leapfrog method. Comparisons between the conventional form of the methods and the leapfrog form are made under the assumption that the number of unknowns is large. In the case of Richardson's method, it is possible to express the final iterate in terms of only the initial approximation, a variant of the iteration called the grand-leap method. In the case of the grand-leap variant, a set of parameters is required. An algorithm is presented to compute these parameters that is related to algorithms to compute the weights and abscissas for Gaussian quadrature. General algorithms to implement the leapfrog and grand-leap methods are presented. Algorithms for the important special case of the Chebyshev method are also given.
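The leapfrog construction follows from composing two Richardson updates: with x_{k+1} = x_k + w_k r_k and r_{k+1} = (I - w_k A) r_k, two steps combine into x_{k+2} = x_k + (w_k + w_{k+1}) r_k - w_k w_{k+1} A r_k, so odd-numbered iterates are never formed. A dense-matrix sketch with a fixed parameter sequence (the parameter computation discussed above is omitted):

```python
import numpy as np

def richardson_leapfrog(A, b, omegas, x0):
    # Advance two Richardson iterates per loop pass using the combined
    # update; only even-numbered iterates are ever computed.
    x = x0.astype(float).copy()
    r = b - A @ x
    for w1, w2 in zip(omegas[0::2], omegas[1::2]):
        x += (w1 + w2) * r - w1 * w2 * (A @ r)
        r = b - A @ x                   # refresh the residual
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = richardson_leapfrog(A, b, omegas=[0.2] * 40, x0=np.zeros(2))
print(x, np.linalg.solve(A, b))         # the two agree to high accuracy
```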
LEAP: biomarker inference through learning and evaluating association patterns.
Jiang, Xia; Neapolitan, Richard E
2015-03-01
Single nucleotide polymorphism (SNP) high-dimensional datasets are available from Genome Wide Association Studies (GWAS). Such data provide researchers opportunities to investigate the complex genetic basis of diseases. Much of genetic risk might be due to undiscovered epistatic interactions, which are interactions in which combinations of several genes affect disease. Research aimed at discovering interacting SNPs from GWAS datasets has proceeded in two directions. First, tools were developed to evaluate candidate interactions. Second, algorithms were developed to search over the space of candidate interactions. Another problem when learning interacting SNPs, which has not received much attention, is evaluating how likely it is that the learned SNPs are associated with the disease. A complete system should provide this information as well. We develop such a system. Our system, called LEAP, includes a new heuristic search algorithm for learning interacting SNPs, and a Bayesian network based algorithm for computing the probability of their association. We evaluated the performance of LEAP using 100 1,000-SNP simulated datasets, each of which contains 15 SNPs involved in interactions. When learning interacting SNPs from these datasets, LEAP outperformed seven other methods. Furthermore, only SNPs involved in interactions were found to be probable. We also used LEAP to analyze real Alzheimer's disease and breast cancer GWAS datasets. We obtained interesting and new results from the Alzheimer's dataset, but limited results from the breast cancer dataset. We conclude that our results support that LEAP is a useful tool for extracting candidate interacting SNPs from high-dimensional datasets and determining their probability. © 2015 The Authors. Genetic Epidemiology published by Wiley Periodicals, Inc.
Li, Tiejun; Min, Bin; Wang, Zhiming
2013-03-14
The stochastic integral ensuring the Newton-Leibnitz chain rule is essential in stochastic energetics. The Marcus canonical integral has this property and can be understood as the Wong-Zakai type smoothing limit when the driving process is non-Gaussian. However, this important concept seems not well known to physicists. In this paper, we discuss the Marcus integral for non-Gaussian processes and its computation in the context of stochastic energetics. We give a comprehensive introduction to the Marcus integral and compare three equivalent definitions in the literature. We introduce the exact pathwise simulation algorithm and give an error analysis. We show how to compute thermodynamic quantities based on the pathwise simulation algorithm. We highlight the information hidden in the Marcus mapping, which plays the key role in determining thermodynamic quantities. We further propose a tau-leaping algorithm, which advances the process with deterministic time steps when the tau-leaping condition is satisfied. The numerical experiments and the accompanying efficiency analysis show that it is very promising.
NASA Astrophysics Data System (ADS)
Dash, Rajashree
2017-11-01
Forecasting the purchasing power of one currency with respect to another is always an interesting topic in the field of financial time series prediction. Despite the existence of several traditional and computational models for currency exchange rate forecasting, there is always a need for developing a simpler and more efficient model with better prediction capability. In this paper, an evolutionary framework is proposed that uses an improved shuffled frog leaping (ISFL) algorithm with a computationally efficient functional link artificial neural network (CEFLANN) for prediction of currency exchange rates. The model is validated by observing the monthly prediction measures obtained for three currency exchange data sets, USD/CAD, USD/CHF, and USD/JPY, accumulated over the same period of time. The model performance is also compared with two other evolutionary learning techniques, the shuffled frog leaping algorithm and the particle swarm optimization algorithm. Practical analysis of the results suggests that the proposed model, developed using the ISFL algorithm with the CEFLANN network, is a promising predictor for currency exchange rates compared to the other models included in the study.
Implementation of MPEG-2 encoder to multiprocessor system using multiple MVPs (TMS320C80)
NASA Astrophysics Data System (ADS)
Kim, HyungSun; Boo, Kenny; Chung, SeokWoo; Choi, Geon Y.; Lee, YongJin; Jeon, JaeHo; Park, Hyun Wook
1997-05-01
This paper presents an efficient algorithm mapping for real-time MPEG-2 encoding on the KAIST image computing system (KICS), which has a parallel architecture using five multimedia video processors (MVPs). The MVP is a general-purpose digital signal processor (DSP) from Texas Instruments. It combines one floating-point processor and four fixed-point DSPs on a single chip. The KICS uses the MVP as a primary processing element (PE). Two PEs form a cluster, and there are two processing clusters in the KICS. The real-time MPEG-2 encoder is implemented through spatial and functional partitioning strategies. The encoding process for a spatially partitioned half of the video input frame is assigned to one processing cluster. Two PEs perform the functionally partitioned MPEG-2 encoding tasks in pipelined operation mode. One PE of a cluster carries out the transform coding part and the other performs the predictive coding part of the MPEG-2 encoding algorithm. One MVP among the five is used for system control and interfacing with the host computer. This paper introduces an implementation of the MPEG-2 algorithm with a parallel processing architecture.
Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework
Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.
2016-01-01
Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of TOF scanners, with their lower sensitivity to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (DIRECT: Direct Image Reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias versus variance performance to iterative TOF reconstruction with a matched resolution model. PMID:27032968
Design and implementation of intelligent electronic warfare decision making algorithm
NASA Astrophysics Data System (ADS)
Peng, Hsin-Hsien; Chen, Chang-Kuo; Hsueh, Chi-Shun
2017-05-01
Electromagnetic signals and the requirement of timely responses have grown rapidly in modern electronic warfare. Although jammers are limited resources, it is possible to achieve the best electronic warfare efficiency through tactical decisions. This paper proposes an intelligent electronic warfare decision support system. In this work, we develop a novel hybrid algorithm, Digital Pheromone Particle Swarm Optimization, based on Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO) and the Shuffled Frog Leaping Algorithm (SFLA). We use PSO to solve the problem and combine the concept of pheromones from ACO to accumulate more useful information in the spatial solving process and speed up finding the optimal solution. The proposed algorithm finds the optimal solution in reasonable computation time by using the matrix conversion method of SFLA. The results indicate that jammer allocation is more effective. The system based on the hybrid algorithm provides electronic warfare commanders with critical information to assist them in effectively managing the complex electromagnetic battlefield.
Specht, Alicia T; Li, Jun
2017-03-01
To construct gene co-expression networks based on single-cell RNA-Sequencing data, we present an algorithm called LEAP, which utilizes the estimated pseudotime of the cells to find gene co-expression that involves time delay. R package LEAP available on CRAN. jun.li@nd.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
A high-performance spatial database based approach for pathology imaging algorithm evaluation
Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A.D.; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J.; Saltz, Joel H.
2013-01-01
Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The validated data were formatted based on the PAIS data model and loaded into a spatial database. To support efficient data loading, we have implemented a parallel data loading tool that takes advantage of multi-core CPUs to accelerate data injection. The spatial database manages both geometric shapes and image features or classifications, and enables spatial sampling, result comparison, and result aggregation through expressive structured query language (SQL) queries with spatial extensions. To provide scalable and efficient query support, we have employed a shared nothing parallel database architecture, which distributes data homogenously across multiple database partitions to take advantage of parallel computation power and implements spatial indexing to achieve high I/O throughput. Results: Our work proposes a high performance, parallel spatial database platform for algorithm validation and comparison. This platform was evaluated by storing, managing, and comparing analysis results from a set of brain tumor whole slide images. The tools we develop are open source and available to download. Conclusions: Pathology image algorithm validation and comparison are essential to iterative algorithm development and refinement. One critical component is the support for queries involving spatial predicates and comparisons. In our work, we develop an efficient data model and parallel database approach to model, normalize, manage and query large volumes of analytical image result data. 
Our experiments demonstrate that the data partitioning strategy and the grid-based indexing result in good data distribution across database nodes and reduce I/O overhead in spatial join queries through parallel retrieval of relevant data and quick subsetting of datasets. The set of tools in the framework provide a full pipeline to normalize, load, manage and query analytical results for algorithm evaluation. PMID:23599905
Efficient Boundary Extraction of BSP Solids Based on Clipping Operations.
Wang, Charlie C L; Manocha, Dinesh
2013-01-01
We present an efficient algorithm to extract the manifold surface that approximates the boundary of a solid represented by a Binary Space Partition (BSP) tree. Our polygonization algorithm repeatedly performs clipping operations on volumetric cells that correspond to a spatial convex partition and computes the boundary by traversing the connected cells. We use point-based representations along with finite-precision arithmetic to improve the efficiency and generate the B-rep approximation of a BSP solid. The core of our polygonization method is a novel clipping algorithm that uses a set of logical operations to make it resistant to degeneracies resulting from limited precision of floating-point arithmetic. The overall BSP to B-rep conversion algorithm can accurately generate boundaries with sharp and small features, and is faster than prior methods. At the end of this paper, we use this algorithm for a few geometric processing applications including Boolean operations, model repair, and mesh reconstruction.
NASA Astrophysics Data System (ADS)
Marchetti, Luca; Priami, Corrado; Thanh, Vo Hong
2016-07-01
This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm which relies on propensity bounds to select next reaction firings and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating performance and accuracy of HRSSA against other state of the art algorithms.
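The propensity-bound selection at the core of RSSA (and hence of HRSSA) is a rejection sampler: draw a candidate channel from cheap upper bounds and accept it with probability exact/bound, so exact propensities are evaluated only on demand. A minimal sketch with assumed toy propensities; the hybrid partitioning layer is omitted.

```python
import numpy as np

rng = np.random.default_rng(6)

def select_next_reaction(prop_upper, prop_exact, state):
    # Candidate j is drawn proportionally to its upper bound and accepted
    # with probability a_j(state) / a_j^max, which yields selection
    # proportional to the exact propensities.
    p = prop_upper / prop_upper.sum()
    while True:
        j = rng.choice(len(prop_upper), p=p)
        if rng.random() * prop_upper[j] <= prop_exact(j, state):
            return j

bounds = np.array([2.0, 6.0])
exact = lambda j, state: [1.5, 5.0][j]          # exact values below the bounds
draws = [select_next_reaction(bounds, exact, None) for _ in range(5000)]
print(np.bincount(draws) / 5000.0)              # ~ [1.5/6.5, 5.0/6.5]
```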
NASA Astrophysics Data System (ADS)
Chen, Naijin
2013-03-01
A Level Based Partitioning (LBP) algorithm, a Cluster Based Partitioning (CBP) algorithm and an Enhanced Static List (ESL) temporal partitioning algorithm, based on the adjacency matrix and adjacency table, are designed and implemented in this paper. Partitioning time and memory occupation of the three algorithms are also compared. Experimental results show that the LBP algorithm has the shortest partitioning time and better parallelism; as far as memory occupation and partitioning time are concerned, algorithms based on the adjacency table require less partitioning time and less memory.
Liu, Haorui; Yi, Fengyan; Yang, Heli
2016-01-01
The shuffled frog leaping algorithm (SFLA) easily falls into local optima when solving multi-optimum function optimization problems, which impacts accuracy and convergence speed. This paper therefore presents a grouped SFLA for solving continuous optimization problems, combined with the excellent characteristics of the cloud model's transformation between qualitative and quantitative representations. The algorithm divides the definition domain into several groups and gives each group a set of frogs. The frogs of each region search within their memeplex, and during the search the algorithm uses an "elite strategy" to update the location information of existing elite frogs through the cloud model algorithm. This method narrows the search space and can effectively escape local optima; thus convergence speed and accuracy are significantly improved. The results of computer simulation confirm this conclusion. PMID:26819584
Hierarchical image feature extraction by an irregular pyramid of polygonal partitions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skurikhin, Alexei N
2008-01-01
We present an algorithmic framework for hierarchical image segmentation and feature extraction. We build a successive fine-to-coarse hierarchy of irregular polygonal partitions of the original image. This multiscale hierarchy forms the basis for object-oriented image analysis. The framework incorporates the Gestalt principles of visual perception, such as proximity and closure, and exploits spectral and textural similarities of polygonal partitions, while iteratively grouping them until dissimilarity criteria are exceeded. Seed polygons are built upon a triangular mesh composed of irregularly sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on top of detected spectral discontinuities (such as edges), which form a network of constraints for the Delaunay triangulation. The image is then represented as a spatial network in the form of a graph with vertices corresponding to the polygonal partitions and edges reflecting their relations. The iterative agglomeration of partitions into object-oriented segments is formulated as Minimum Spanning Tree (MST) construction. An important characteristic of the approach is that the agglomeration of polygonal partitions is constrained by the detected edges; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects. The constructed partitions and their spatial relations are characterized using spectral, textural and structural features based on proximity graphs. The framework allows searching for object-oriented features of interest across multiple levels of detail of the built hierarchy and can be generalized to the multi-criteria MST to account for multiple criteria important for an application.
Zhang, Weizhe; Bai, Enci; He, Hui; Cheng, Albert M.K.
2015-01-01
Reducing energy consumption is becoming very important for extending battery life and lowering overall operational costs in heterogeneous real-time multiprocessor systems. In this paper, we first formulate this as a combinatorial optimization problem. Then, a successful meta-heuristic, called the Shuffled Frog Leaping Algorithm (SFLA), is proposed to reduce the energy consumption. Precocity remission and local optimum avoidance techniques are proposed to avoid premature convergence and improve solution quality. Convergence acceleration significantly reduces the search time. Experimental results show that the SFLA-based energy-aware meta-heuristic uses 30% less energy than the Ant Colony Optimization (ACO) algorithm, and 60% less energy than the Genetic Algorithm (GA). Remarkably, the running time of the SFLA-based meta-heuristic is 20 and 200 times less than that of ACO and GA, respectively, for finding the optimal solution. PMID:26110406
Aungkulanon, Pasura; Luangpaiboon, Pongchanun
2016-01-01
Response surface methods via first- or second-order models are important in manufacturing processes. This study, however, proposes differently structured mechanisms of vertical transportation systems (VTS) embedded in a shuffled frog leaping-based approach. Three VTS scenarios are considered: a motion reaching a normal operating velocity, and transitional motions that either reach it or do not. These variants were performed to simultaneously inspect multiple responses affected by machining parameters in multi-pass turning processes. The numerical results of two machining optimisation problems demonstrated the high performance of the proposed methods when compared to other optimisation algorithms for an actual deep-cut design.
Boundary identification and error analysis of shocked material images
NASA Astrophysics Data System (ADS)
Hock, Margaret; Howard, Marylesa; Cooper, Leora; Meehan, Bernard; Nelson, Keith
2017-06-01
To compute quantities such as pressure and velocity from laser-induced shock waves propagating through materials, high-speed images are captured and analyzed. Shock images typically display high noise and spatially-varying intensities, causing conventional analysis techniques to have difficulty identifying boundaries in the images without making significant assumptions about the data. We present a novel machine learning algorithm that efficiently segments, or partitions, images with high noise and spatially-varying intensities, and provides error maps that describe a level of uncertainty in the partitioning. The user trains the algorithm by providing locations of known materials within the image but no assumptions are made on the geometries in the image. The error maps are used to provide lower and upper bounds on quantities of interest, such as velocity and pressure, once boundaries have been identified and propagated through equations of state. This algorithm will be demonstrated on images of shock waves with noise and aberrations to quantify properties of the wave as it progresses. DOE/NV/25946-3126 This work was done by National Security Technologies, LLC, under Contract No. DE-AC52-06NA25946 with the U.S. Department of Energy and supported by the SDRD Program.
The finite state projection algorithm for the solution of the chemical master equation.
Munsky, Brian; Khammash, Mustafa
2006-01-28
This article introduces the finite state projection (FSP) method for use in the stochastic analysis of chemically reacting systems. One can describe the chemical populations of such systems with probability density vectors that evolve according to a set of linear ordinary differential equations known as the chemical master equation (CME). Unlike Monte Carlo methods such as the stochastic simulation algorithm (SSA) or tau leaping, the FSP directly solves or approximates the solution of the CME. If the CME describes a system that has a finite number of distinct population vectors, the FSP method provides an exact analytical solution. When an infinite or extremely large number of population variations is possible, the state space can be truncated, and the FSP method provides a certificate of accuracy for how closely the truncated space approximation matches the true solution. The proposed FSP algorithm systematically increases the projection space in order to meet prespecified tolerance in the total probability density error. For any system in which a sufficiently accurate FSP exists, the FSP algorithm is shown to converge in a finite number of steps. The FSP is utilized to solve two examples taken from the field of systems biology, and comparisons are made between the FSP, the SSA, and tau leaping algorithms. In both examples, the FSP outperforms the SSA in terms of accuracy as well as computational efficiency. Furthermore, due to very small molecular counts in these particular examples, the FSP also performs far more effectively than tau leaping methods.
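The FSP idea is concrete even for a one-species birth-death process: build the truncated generator, exponentiate it, and read the lost probability mass as the error certificate. The example below is an illustrative stand-in with assumed rates and truncation size, not one of the paper's systems-biology examples.

```python
import numpy as np
from scipy.linalg import expm

# Birth-death process: 0 -> M at rate kb, M -> 0 at rate g*m,
# truncated to the states m = 0..N.
kb, g, N, t = 10.0, 1.0, 40, 2.0
A = np.zeros((N + 1, N + 1))
for m in range(N + 1):
    A[m, m] -= kb                # birth propensity always leaves state m...
    if m < N:
        A[m + 1, m] += kb        # ...but lands in m+1 only inside the projection
    if m > 0:
        A[m - 1, m] += g * m     # death: m -> m-1
        A[m, m] -= g * m

p0 = np.zeros(N + 1)
p0[0] = 1.0
p = expm(A * t) @ p0             # dp/dt = A p on the projected state space
print(p.sum())                   # = 1 - truncation error (the FSP certificate)
```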
A Leap-Frog Discontinuous Galerkin Method for the Time-Domain Maxwell's Equations in Metamaterials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, J.; Waters, J. W.; Machorro, E. A.
2012-06-01
Numerical simulation of metamaterials plays a very important role in the design of invisibility cloaks and sub-wavelength imaging. In this paper, we propose a leap-frog discontinuous Galerkin method to solve the time-dependent Maxwell's equations in metamaterials. Conditional stability and error estimates are proved for the scheme. The proposed algorithm is implemented and numerical results supporting the analysis are provided.
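As a hedged illustration of the leap-frog time staggering the scheme above builds on, here is a minimal 1D Yee-style vacuum update in NumPy; the discontinuous Galerkin spatial discretization and the metamaterial constitutive terms of the paper are not reproduced.

    # 1D leap-frog (staggered) update for vacuum Maxwell's equations.
    import numpy as np

    nx, nt = 200, 400
    c, dx = 1.0, 1.0
    dt = 0.5 * dx / c                    # CFL-limited step (conditional stability)
    Ez = np.zeros(nx)
    Hy = np.zeros(nx - 1)                # H lives on the staggered half-grid

    for n in range(nt):
        Hy += dt / dx * (Ez[1:] - Ez[:-1])               # H at half time steps
        Ez[1:-1] += c**2 * dt / dx * (Hy[1:] - Hy[:-1])  # E at integer steps
        Ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source

    print("field energy:", np.sum(Ez**2) + np.sum(Hy**2))

The half-step offset between E and H updates is what makes the scheme explicit and second-order in time, at the price of the CFL condition noted in the comment.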
Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan
2016-01-01
Dust storms have serious disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted, dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering spatial and communication constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves simulation performance through better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical modeling. PMID:27044039
Ocean surface partitioning strategies using ocean colour remote Sensing: A review
NASA Astrophysics Data System (ADS)
Krug, Lilian Anne; Platt, Trevor; Sathyendranath, Shubha; Barbosa, Ana B.
2017-06-01
The ocean surface is organized into regions with distinct properties reflecting the complexity of interactions between environmental forcing and biological responses. The delineation of these functional units, each with unique, homogeneous properties and underlying ecosystem structure and dynamics, can be defined as ocean surface partitioning. The main purposes and applications of ocean partitioning include the evaluation of particular marine environments; generation of more accurate satellite ocean colour products; assimilation of data into biogeochemical and climate models; and establishment of ecosystem-based management practices. This paper reviews the diverse approaches implemented for partitioning the ocean surface into functional units using ocean colour remote sensing (OCRS) data, including their purposes, criteria, methods and scales. OCRS offers synoptic, high spatial-temporal resolution, multi-decadal coverage of bio-optical properties, relevant to the applications and value of ocean surface partitioning. In combination with other biotic and/or abiotic data, OCRS-derived data (e.g., chlorophyll-a, optical properties) provide a broad and varied source of information that can be analysed using different delineation methods, ranging from subjective, expert-based approaches to unsupervised learning approaches (e.g., cluster, fuzzy and empirical orthogonal function analyses). Partition schemes are applied at global to mesoscale spatial coverage, with static (time-invariant) or dynamic (time-varying) representations. A case study, the highly heterogeneous area off the SW Iberian Peninsula (NE Atlantic), illustrates how the selection of spatial coverage and temporal representation affects the discrimination of distinct environmental drivers of phytoplankton variability. Advances in operational oceanography and in the subject area of satellite ocean colour, including development of new sensors, algorithms and products, are among the potential benefits from extended use, scope and applications of ocean surface partitioning using OCRS.
Fast Time-Varying Volume Rendering Using Time-Space Partition (TSP) Tree
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Chiang, Ling-Jen; Ma, Kwan-Liu
1999-01-01
We present a new algorithm for rapid rendering of time-varying volumes. A new hierarchical data structure that is capable of capturing both the temporal and the spatial coherence is proposed. Conventional hierarchical data structures such as octrees are effective in characterizing the homogeneity of the field values existing in the spatial domain. However, when treating time merely as another dimension for a time-varying field, difficulties frequently arise due to the discrepancy between the field's spatial and temporal resolutions. In addition, treating spatial and temporal dimensions equally often prevents the possibility of detecting the coherence that is unique to the temporal domain. Using the proposed data structure, our algorithm can meet the following goals. First, both spatial and temporal coherence are identified and exploited for accelerating the rendering process. Second, our algorithm allows the user to supply the desired error tolerances at run time for the purpose of image-quality/rendering-speed trade-off. Third, the amount of data that is required to be loaded into main memory is reduced, and thus the I/O overhead is minimized. This low I/O overhead makes our algorithm suitable for out-of-core applications.
ICER-3D Hyperspectral Image Compression Software
NASA Technical Reports Server (NTRS)
Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh
2010-01-01
Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received prior to the loss can be used to reconstruct that partition at lower fidelity. By virtue of the compression improvement it achieves relative to previous means of onboard data compression, this software enables (1) increased return of hyperspectral scientific data in the presence of limits on the rates of transmission of data from spacecraft to Earth via radio communication links and/or (2) reduction in spacecraft radio-communication power and/or cost through reduction in the amounts of data required to be downlinked and stored onboard prior to downlink. The software is also suitable for compressing hyperspectral images for ground storage or archival purposes.
A Partitioning Algorithm for Block-Diagonal Matrices With Overlap
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guy Antoine Atenekeng Kahou; Laura Grigori; Masha Sosonkina
2008-02-02
We present a graph partitioning algorithm that aims at partitioning a sparse matrix into a block-diagonal form, such that any two consecutive blocks overlap. We denote this form of the matrix as the overlapped block-diagonal matrix. The partitioned matrix is suitable for applying the explicit formulation of Multiplicative Schwarz preconditioner (EFMS) described in [3]. The graph partitioning algorithm partitions the graph of the input matrix into K partitions, such that every partition Ω_i has at most two neighbors, Ω_{i-1} and Ω_{i+1}. First, an ordering algorithm, such as the reverse Cuthill-McKee algorithm, that reduces the matrix profile is performed. An initial overlapped block-diagonal partition is obtained from the profile of the matrix. An iterative strategy is then used to further refine the partitioning by allowing nodes to be transferred between neighboring partitions. Experiments are performed on matrices arising from real-world applications to show the feasibility and usefulness of this approach.
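A rough Python sketch of the first stage described above, assuming SciPy and a symmetric matrix: reverse Cuthill-McKee reduces the matrix profile, and the resulting ordering is cut into K consecutive index blocks that overlap their neighbors. The iterative refinement stage of the paper is omitted.

    # Overlapped block-diagonal partition from an RCM ordering (sketch).
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.csgraph import reverse_cuthill_mckee

    def overlapped_blocks(A, K, overlap):
        perm = reverse_cuthill_mckee(sp.csr_matrix(A), symmetric_mode=True)
        n = A.shape[0]
        size = int(np.ceil(n / K))
        blocks = []
        for i in range(K):
            lo = max(0, i * size - overlap // 2)
            hi = min(n, (i + 1) * size + overlap // 2)
            blocks.append(perm[lo:hi])     # row/column indices of block i
        return blocks

    A = sp.random(200, 200, density=0.02, random_state=0)
    A = A + A.T                            # symmetrize for this sketch
    blocks = overlapped_blocks(A, K=4, overlap=10)
    print([len(b) for b in blocks])

Each block shares roughly `overlap` indices with its neighbors, so consecutive blocks overlap as the EFMS formulation requires.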
NASA Astrophysics Data System (ADS)
Sridhar, R.; Jeevananthan, S.; Dash, S. S.; Vishnuram, Pradeep
2017-05-01
Maximum Power Point Trackers (MPPTs) are power electronic conditioners used in photovoltaic (PV) systems to ensure that PV structures feed maximum power for the given ambient temperature and solar irradiation. When a fraction of the PV panels is shaded due to environmental obstructions, conventional MPPT schemes may fail to track the appropriate peak power, as there will be multiple power peaks. In this work, a shuffled frog leap algorithm (SFLA) is proposed, and it successfully identifies the global maximum power point among the other local maxima. The SFLA MPPT is compared with the well-entrenched conventional perturb and observe (P&O) MPPT algorithm and a global-search particle swarm optimisation (PSO) MPPT. The simulation results reveal that the proposed algorithm is highly advantageous compared to P&O, as it tracks nearly 30% more power for a given shading pattern. The credibility of the proposed SFLA is confirmed when it outperforms PSO MPPT in convergence. The whole system is realised in the MATLAB/Simulink environment.
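For readers unfamiliar with SFLA, the following Python sketch shows the core memeplex update on a toy one-dimensional "power curve" over a duty cycle in [0, 1]; the PV array and converter models of the paper are not reproduced, and the objective function here is purely illustrative.

    # Core shuffled frog leaping update (sketch): worst frog in each
    # memeplex jumps toward the memeplex best, then the global best,
    # and is randomly reset if neither jump improves it.
    import numpy as np

    rng = np.random.default_rng(0)

    def sfla_step(frogs, f, n_memeplex):
        order = np.argsort([-f(x) for x in frogs])      # best first
        frogs = frogs[order]
        g_best = frogs[0]
        for m in range(n_memeplex):                     # strided memeplex assignment
            idx = np.arange(m, len(frogs), n_memeplex)
            best, worst = idx[0], idx[-1]
            for target in (frogs[best], g_best):
                step = rng.random() * (target - frogs[worst])
                trial = np.clip(frogs[worst] + step, 0.0, 1.0)
                if f(trial) > f(frogs[worst]):
                    frogs[worst] = trial
                    break
            else:                                       # censor: random reset
                frogs[worst] = rng.random()
        return frogs

    frogs = rng.random(15)
    power = lambda d: -(d - 0.63) ** 2 + 0.02 * np.sin(25 * d)  # toy multi-peak curve
    for _ in range(50):
        frogs = sfla_step(frogs, power, n_memeplex=3)
    print("tracked duty cycle ~", max(frogs, key=power))

The random reset of a frog that fails to improve is what lets the population escape local power peaks under partial shading.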
Leap Frog and Time Step Sub-Cycle Scheme for Coupled Neutronics and Thermal-Hydraulic Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, S.
2002-07-01
As the result of the advancing TCP/IP based inter-process communication technology, more and more legacy thermal-hydraulic codes have been coupled with neutronics codes to provide best-estimate capabilities for reactivity related reactor transient analysis. Most of the coupling schemes are based on closely coupled serial or parallel approaches. Therefore, the execution of the coupled codes usually requires significant CPU time when a complicated system is analyzed. The Leap Frog scheme has been used to reduce the run time. The extent of the decoupling is usually determined based on a trial and error process for a specific analysis. It is the intent of this paper to develop a set of general criteria, which can be used to invoke the automatic Leap Frog algorithm. The algorithm will not only provide the run time reduction but also preserve the accuracy. The criteria will also serve as the base of an automatic time step sub-cycle scheme when a sudden reactivity change is introduced and the thermal-hydraulic code is marching with a relatively large time step. (authors)
Raster Data Partitioning for Supporting Distributed GIS Processing
NASA Astrophysics Data System (ADS)
Nguyen Thai, B.; Olasz, A.
2015-08-01
Big data concepts have also made an impact in the geospatial sector. Several studies apply techniques originating in computer science to GIS processing of huge amounts of geospatial data; in other studies, geospatial data are considered as having always been big data (Lee and Kang, 2015). Nevertheless, data acquisition methods have improved substantially, increasing not only the amount of raw data but also its spectral, spatial and temporal resolution. A significant portion of big data is geospatial data, and the size of such data is growing rapidly, by at least 20% every year (Dasgupta, 2013). From the increasing volume of raw data, produced in different formats and representations and for different purposes, the wealth of derived information is what constitutes the valuable result. However, computing capability and processing speed face limitations, even when semi-automatic or automatic procedures are applied to complex geospatial data (Kristóf et al., 2014). Recently, distributed computing has reached many interdisciplinary areas of computer science, including remote sensing and geographic information processing. Cloud computing, moreover, requires appropriate processing algorithms that can be distributed to handle geospatial big data. The Map-Reduce programming model and distributed file systems have proven their capabilities for processing non-GIS big data, but it is often inconvenient or inefficient to rewrite existing algorithms for the Map-Reduce programming model, and GIS data cannot be partitioned like text-based data by line or by bytes. Hence, we seek an alternative solution for data partitioning, data distribution and execution of existing algorithms without rewriting them, or with only minor modifications. This paper gives a technical overview of currently available distributed computing environments, as well as of GIS (raster) data partitioning, distribution and distributed processing of GIS algorithms. A proof-of-concept implementation has been made for raster data partitioning, distribution and processing, and the first performance results have been compared against the commercial software ERDAS IMAGINE 2011 and 2014. Partitioning methods depend heavily on the application area, so data partitioning may be considered a preprocessing step before applying processing services to the data. As a proof of concept, we have implemented a simple tile-based partitioning method that splits an image into smaller grids (NxM tiles), and compared the processing time against existing methods using an NDVI calculation. The concept is demonstrated using our own open-source processing framework.
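A minimal Python sketch of the tile-based partitioning step described above; the band order (red, then near-infrared) and the NDVI demo are assumptions, and the distribution machinery of the framework is omitted.

    # Split a raster into an N x M grid of tiles and compute per-tile NDVI.
    import numpy as np

    def tile_raster(raster, n, m):
        """Yield (row, col, tile) views over an H x W x bands array."""
        h_edges = np.linspace(0, raster.shape[0], n + 1, dtype=int)
        w_edges = np.linspace(0, raster.shape[1], m + 1, dtype=int)
        for i in range(n):
            for j in range(m):
                yield i, j, raster[h_edges[i]:h_edges[i+1], w_edges[j]:w_edges[j+1]]

    def ndvi(tile):
        red, nir = tile[..., 0].astype(float), tile[..., 1].astype(float)
        return (nir - red) / np.maximum(nir + red, 1e-9)

    image = np.random.randint(1, 255, size=(1024, 2048, 2))
    results = {(i, j): ndvi(t) for i, j, t in tile_raster(image, 4, 8)}

Because NDVI is a purely per-pixel operation, the tiles are independent and can be dispatched to workers in any order, which is what makes this partitioning a natural preprocessing step for distributed execution.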
Performance of low-rank QR approximation of the finite element Biot-Savart law
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, D; Fasenfest, B
2006-10-16
In this paper we present a low-rank QR method for evaluating the discrete Biot-Savart law. Our goal is to develop an algorithm that is easily implemented on parallel computers. It is assumed that the known current density and the unknown magnetic field are both expressed in a finite element expansion, and we wish to compute the degrees-of-freedom (DOF) in the basis function expansion of the magnetic field. The matrix that maps the current DOF to the field DOF is full, but if the spatial domain is properly partitioned the matrix can be written as a block matrix, with blocks representing distant interactions being low rank and having a compressed QR representation. While an octree partitioning of the matrix may be ideal, for ease of parallel implementation we employ a partitioning based on the number of processors. The rank of each block (i.e. the compression) is determined by the specific geometry and is computed dynamically. In this paper we provide the algorithmic details and present computational results for large-scale computations.
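A hedged NumPy/SciPy sketch of compressing a "distant interaction" block with a truncated pivoted QR, as in the low-rank representation described above; the finite-element Biot-Savart assembly itself is omitted, and the 1/r kernel below is only a stand-in for a smooth far-field block.

    # Truncated pivoted QR compression of a far-field interaction block.
    import numpy as np
    from scipy.linalg import qr

    rng = np.random.default_rng(9)
    src = rng.random((300, 3)) + np.array([10.0, 0, 0])   # distant sources
    obs = rng.random((200, 3))                            # observation points
    B = 1.0 / np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=2)

    Q, R, piv = qr(B, mode='economic', pivoting=True)
    tol = 1e-8 * abs(R[0, 0])
    k = int(np.sum(np.abs(np.diag(R)) > tol))             # numerical rank
    B_approx = Q[:, :k] @ R[:k, :][:, np.argsort(piv)]    # undo column pivoting
    print(k, np.linalg.norm(B - B_approx) / np.linalg.norm(B))

Because the sources are well separated from the observation points, the numerical rank k comes out far smaller than min(200, 300), so storing Q[:, :k] and R[:k, :] compresses the block substantially.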
A computing method for spatial accessibility based on grid partition
NASA Astrophysics Data System (ADS)
Ma, Linbing; Zhang, Xinchang
2007-06-01
An accessibility computing method and process based on grid partition is put forward in this paper. Two important factors affecting traffic, the density of the road network and the relative spatial resistance of different land uses, are integrated into the computation of traffic cost in each grid cell. The A* algorithm is introduced to search for the optimum traffic cost along a path of grid cells; a detailed search process and the definition of the heuristic evaluation function are described in the paper. The method can therefore be implemented simply, and its source data are easily obtained. Moreover, by changing the heuristic search information, more reasonable results can be computed. To validate the research, a software package was developed in the C# language under the ArcEngine 9 environment. Applying the computing method, a case study on the accessibility of business districts in Guangzhou city was carried out.
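The following Python sketch illustrates the described computation under stated assumptions: a per-cell traffic cost combining road-network density and land-use resistance (the weights and cost formula are invented for illustration), with A* searching the cheapest 4-connected grid path.

    # A* over a grid of per-cell traversal costs (sketch).
    import heapq
    import numpy as np

    def astar(cost, start, goal):
        # Manhattan heuristic; admissible here because every cell costs >= 1.
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        open_set = [(h(start), 0.0, start)]
        g = {start: 0.0}
        while open_set:
            _, g_cur, cur = heapq.heappop(open_set)
            if cur == goal:
                return g_cur
            for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + d[0], cur[1] + d[1])
                if 0 <= nxt[0] < cost.shape[0] and 0 <= nxt[1] < cost.shape[1]:
                    g_new = g_cur + cost[nxt]
                    if g_new < g.get(nxt, np.inf):
                        g[nxt] = g_new
                        heapq.heappush(open_set, (g_new + h(nxt), g_new, nxt))
        return np.inf

    road_density = np.random.rand(100, 100)
    resistance = np.random.rand(100, 100)       # stand-in for land-use resistance
    cell_cost = 1.0 + 5.0 * (1.0 - road_density) + 3.0 * resistance
    print("optimum traffic cost:", astar(cell_cost, (0, 0), (99, 99)))

Changing the heuristic h (the "heuristic search information" of the abstract) trades search speed against how strictly optimality is guaranteed.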
Python Leap Second Management and Implementation of Precise Barycentric Correction (barycorrpy)
NASA Astrophysics Data System (ADS)
Kanodia, Shubham; Wright, Jason
2018-01-01
We announce barycorrpy (BCPy), a Python implementation to calculate precise barycentric corrections well below the 1 cm/s level, following the algorithm of Wright and Eastman (2014). This level of precision is required in the search for 1 Earth-mass planets in the Habitable Zones of Sun-like stars by the Radial Velocity (RV) method, where the maximum semi-amplitude is about 9 cm/s. We have developed BCPy to be used in the pipeline for the next generation Doppler spectrometers Habitable-zone Planet Finder (HPF) and NEID. In this work, we also develop an automated leap second management routine to improve upon the one available in Astropy. It checks for and downloads a new leap second file before converting from the UTC time scale to TDB.
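As a small, hedged illustration of the time-scale conversion that this leap-second bookkeeping supports, using Astropy's Time machinery (barycorrpy's own routines and its file-update logic are not reproduced here):

    # UTC -> TDB conversion and the TAI-UTC offset after a leap second.
    from astropy.time import Time

    t_utc = Time('2017-01-01T00:00:00', scale='utc')  # just after the 2016-12-31 leap second
    print(t_utc.tdb.jd)                               # the same instant on the TDB scale
    offset_s = (t_utc.tai.jd - t_utc.jd) * 86400.0    # TAI-UTC label offset in seconds
    print(round(offset_s))                            # 37 once the new leap second applies

A stale leap second table would make the offset above come out one second low, which at the cm/s RV precision level is exactly the kind of error the automated update routine is meant to prevent.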
Mesoscopic-microscopic spatial stochastic simulation with automatic system partitioning.
Hellander, Stefan; Hellander, Andreas; Petzold, Linda
2017-12-21
The reaction-diffusion master equation (RDME) is a model that allows for efficient on-lattice simulation of spatially resolved stochastic chemical kinetics. Compared to off-lattice hard-sphere simulations with Brownian dynamics or Green's function reaction dynamics, the RDME can be orders of magnitude faster if the lattice spacing can be chosen coarse enough. However, strongly diffusion-controlled reactions mandate a very fine mesh resolution for acceptable accuracy. It is common that reactions in the same model differ in their degree of diffusion control and therefore require different degrees of mesh resolution. This renders mesoscopic simulation inefficient for systems with multiscale properties. Mesoscopic-microscopic hybrid methods address this problem by resolving the most challenging reactions with a microscale, off-lattice simulation. However, all methods to date require manual partitioning of a system, effectively limiting their usefulness as "black-box" simulation codes. In this paper, we propose a hybrid simulation algorithm with automatic system partitioning based on indirect a priori error estimates. We demonstrate the accuracy and efficiency of the method on models of diffusion-controlled networks in 3D.
NASA Astrophysics Data System (ADS)
Pathak, Sayan D.; Haynor, David R.; Thompson, Carol L.; Lein, Ed; Hawrylycz, Michael
2009-02-01
Understanding the geography of gene expression in the mouse brain has opened previously unexplored avenues in neuroinformatics. The Allen Brain Atlas (www.brain-map.org) (ABA) provides genome-wide colorimetric in situ hybridization (ISH) gene expression images at high spatial resolution, all mapped to a common three-dimensional 200 μm³ spatial framework defined by the Allen Reference Atlas (ARA), and is a unique data set for studying expression-based structural and functional organization of the brain. The goal of this study was to facilitate an unbiased data-driven structural partitioning of the major structures in the mouse brain. We have developed an algorithm that uses nonnegative matrix factorization (NMF) to perform parts-based analysis of ISH gene expression images. The standard NMF approach and its variants are limited in their ability to flexibly integrate prior knowledge in the context of spatial data. In this paper, we introduce spatial connectivity as an additional regularization in the NMF decomposition via the use of Markov Random Fields (mNMF). The mNMF algorithm alternates neighborhood updates with iterations of the standard NMF algorithm to exploit spatial correlations in the data. We present the algorithm and show the subdivisions of the hippocampus and somatosensory cortex obtained via this approach. The results are compared with established neuroanatomic knowledge. We also highlight novel gene-expression-based subdivisions of the hippocampus identified using the mNMF algorithm.
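A crude Python stand-in for the alternation described above, assuming standard Lee-Seung multiplicative NMF updates interleaved with a simple 4-neighbor smoothing of the per-voxel coefficients; the actual MRF formulation of mNMF, and its parameters, are not reproduced.

    # Alternate multiplicative NMF updates (X ~ W @ H) with a spatial
    # neighborhood step on H, as a rough sketch of the mNMF alternation.
    import numpy as np

    rng = np.random.default_rng(1)
    n_genes, nx, ny, k = 50, 20, 20, 4
    X = rng.random((n_genes, nx * ny))               # genes x voxels expression
    W = rng.random((n_genes, k)) + 1e-3
    H = rng.random((k, nx * ny)) + 1e-3

    for it in range(100):
        # Standard multiplicative NMF updates (Lee & Seung).
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
        if it % 5 == 4:                              # periodic neighborhood update
            Hg = H.reshape(k, nx, ny)
            smooth = np.copy(Hg)                     # self + 4 grid neighbors
            smooth[:, 1:, :] += Hg[:, :-1, :]; smooth[:, :-1, :] += Hg[:, 1:, :]
            smooth[:, :, 1:] += Hg[:, :, :-1]; smooth[:, :, :-1] += Hg[:, :, 1:]
            H = (0.8 * Hg + 0.2 * smooth / 5.0).reshape(k, nx * ny)

    labels = H.argmax(axis=0).reshape(nx, ny)        # parts-based subdivision

The smoothing pulls each voxel's coefficients toward its grid neighbors, so the argmax labeling tends to form spatially contiguous subdivisions rather than salt-and-pepper assignments.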
Path-integral Monte Carlo method for Rényi entanglement entropies.
Herdman, C M; Inglis, Stephen; Roy, P-N; Melko, R G; Del Maestro, A
2014-07-01
We introduce a quantum Monte Carlo algorithm to measure the Rényi entanglement entropies in systems of interacting bosons in the continuum. This approach is based on a path-integral ground state method that can be applied to interacting itinerant bosons in any spatial dimension with direct relevance to experimental systems of quantum fluids. We demonstrate how it may be used to compute spatial mode entanglement, particle partitioned entanglement, and the entanglement of particles, providing insights into quantum correlations generated by fluctuations, indistinguishability, and interactions. We present proof-of-principle calculations and benchmark against an exactly soluble model of interacting bosons in one spatial dimension. As this algorithm retains the fundamental polynomial scaling of quantum Monte Carlo when applied to sign-problem-free models, future applications should allow for the study of entanglement entropy in large-scale many-body systems of interacting bosons.
Spatial pattern recognition of seismic events in South West Colombia
NASA Astrophysics Data System (ADS)
Benítez, Hernán D.; Flórez, Juan F.; Duque, Diana P.; Benavides, Alberto; Lucía Baquero, Olga; Quintero, Jiber
2013-09-01
Recognition of seismogenic zones in geographical regions supports seismic hazard studies. This recognition is usually based on visual, qualitative and subjective analysis of data. Spatial pattern recognition provides a well-founded means to obtain relevant information from large amounts of data. The purpose of this work is to identify and classify spatial patterns in instrumental data of the South West Colombian seismic database. In this research, clustering tendency analysis validates whether the seismic database possesses a clustering structure. A non-supervised fuzzy clustering algorithm creates groups of seismic events. Given the sensitivity of fuzzy clustering algorithms to the initial positions of centroids, we propose a methodology to initialize centroids that generates partitions stable with respect to centroid initialization. As a result of this work, a public software tool provides the user with the routines developed for the clustering methodology. The analysis of the seismogenic zones obtained reveals meaningful spatial patterns in South-West Colombia. The clustering analysis provides a quantitative location and dispersion of seismogenic zones that facilitates seismological interpretation of seismic activity in South West Colombia.
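For reference, here is a compact fuzzy c-means iteration in NumPy of the kind used above; the stability-oriented centroid initialization proposed in the paper is replaced by a fixed-seed random draw, so this sketch does not reproduce their methodology, and the event coordinates are synthetic.

    # Fuzzy c-means (FCM) with the standard membership/centroid updates.
    import numpy as np

    def fcm(X, c, m=2.0, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), c, replace=False)]
        for _ in range(n_iter):
            d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
            u = 1.0 / (d ** (2 / (m - 1)))                # membership degrees
            u /= u.sum(axis=1, keepdims=True)
            um = u ** m
            centroids = (um.T @ X) / um.sum(axis=0)[:, None]
        return u, centroids

    events = np.random.default_rng(8).random((300, 3))    # lon, lat, depth (scaled)
    u, centers = fcm(events, c=4)
    labels = u.argmax(axis=1)

Running fcm with different seeds and comparing the resulting partitions is a simple way to see the initialization sensitivity that motivates the paper's centroid-initialization methodology.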
Unsupervised classification of multivariate geostatistical data: Two algorithms
NASA Astrophysics Data System (ADS)
Romary, Thomas; Ors, Fabien; Rivoirard, Jacques; Deraisme, Jacques
2015-12-01
With the increasing development of remote sensing platforms and the evolution of sampling facilities in the mining and oil industries, spatial datasets are becoming increasingly large, include a growing number of variables, and cover wider and wider areas. Therefore, it is often necessary to split the domain of study to account for radically different behaviors of the natural phenomenon over the domain and to simplify the subsequent modeling step. The definition of these areas can be seen as a problem of unsupervised classification, or clustering, where we try to divide the domain into homogeneous subdomains with respect to the values taken by the variables at hand. The application of classical clustering methods, designed for independent observations, does not ensure the spatial coherence of the resulting classes. Image segmentation methods, based on, e.g., Markov random fields, are not adapted to irregularly sampled data. Other existing approaches, based on mixtures of Gaussian random functions estimated via the expectation-maximization algorithm, are limited to reasonable sample sizes and a small number of variables. In this work, we propose two algorithms based on adaptations of classical algorithms to multivariate geostatistical data. Both algorithms are model free and can handle large volumes of multivariate, irregularly spaced data. The first one proceeds by agglomerative hierarchical clustering. The spatial coherence is ensured by a proximity condition imposed for two clusters to merge. This proximity condition relies on a graph organizing the data in the coordinate space. The hierarchical algorithm can then be seen as a graph-partitioning algorithm. Following this interpretation, a spatial version of the spectral clustering algorithm is also proposed. The performances of both algorithms are assessed on toy examples and a mining dataset.
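A short scikit-learn sketch of the ingredients of the first (hierarchical) algorithm, under the assumption that a k-nearest-neighbor graph in coordinate space is an acceptable stand-in for the paper's proximity condition: clustering runs on the variable values, while the connectivity graph restricts merges to spatially adjacent clusters.

    # Connectivity-constrained agglomerative clustering (sketch).
    import numpy as np
    from sklearn.cluster import AgglomerativeClustering
    from sklearn.neighbors import kneighbors_graph

    rng = np.random.default_rng(2)
    coords = rng.random((500, 2))                 # sample locations
    features = np.hstack([rng.random((500, 3)),   # multivariate values at samples
                          coords[:, :1] > 0.5])   # inject a spatial trend

    connectivity = kneighbors_graph(coords, n_neighbors=8, include_self=False)
    model = AgglomerativeClustering(n_clusters=4, connectivity=connectivity,
                                    linkage='ward')
    domains = model.fit_predict(features)

Because merges are only allowed along the coordinate-space graph, the resulting classes stay spatially coherent even though the distance being minimized lives in the space of variable values.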
An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-09-01
Poisson disk sampling has excellent spatial and spectral properties, and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have been reported with regard to the problem of generating Poisson disks on surfaces, due to the complicated nature of the surface. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to the conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in ℝⁿ. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
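A serial Python sketch of the priority idea in the planar Euclidean case (the intrinsic surface metric and the parallel conflict resolution of the paper are not reproduced): every candidate receives a random unique priority, and a candidate is accepted only if no higher-priority sample lies within radius r.

    # Priority-ordered dart throwing for Poisson disk sampling (sketch).
    import numpy as np

    rng = np.random.default_rng(3)
    r, n_candidates = 0.05, 4000
    cands = rng.random((n_candidates, 2))
    priority = rng.permutation(n_candidates)       # random, unique, unbiased
    accepted = []
    for i in np.argsort(priority):                 # process in priority order
        p = cands[i]
        if all(np.sum((p - q) ** 2) >= r * r for q in accepted):
            accepted.append(p)
    samples = np.array(accepted)
    print(len(samples), "samples accepted")

In the parallel setting, many candidates are examined at once and a candidate may commit as soon as every conflicting neighbor of higher priority has been resolved, which is why the unique random priorities leave the distribution unbiased.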
Hershberg, Julie A; Rose, Dorian K; Tilson, Julie K; Brutsch, Bettina; Correa, Anita; Gallichio, Joann; McLeod, Molly; Moore, Craig; Wu, Sam; Duncan, Pamela W; Behrman, Andrea L
2017-01-01
Despite efforts to translate knowledge into clinical practice, barriers often arise in adapting the strict protocols of a randomized, controlled trial (RCT) to the individual patient. The Locomotor Experience Applied Post-Stroke (LEAPS) RCT demonstrated equal effectiveness of two intervention protocols for walking recovery poststroke; both protocols were more effective than usual-care physical therapy. The purpose of this article was to provide knowledge-translation tools to facilitate implementation of the LEAPS RCT protocols into clinical practice. Participants from two of the trial's intervention arms, (1) the early Locomotor Training Program (LTP) and (2) the Home Exercise Program (HEP), were chosen for case presentation. The two cases illustrate how the protocols are used in synergy with individual patient presentations and clinical expertise. Decision algorithms and guidelines for progression represent the interface between implementation of an RCT standardized intervention protocol and clinical decision-making. In each case, the participant presents with a distinct clinical challenge that the therapist addresses by integrating the participant's unique presentation with the therapist's expertise while maintaining fidelity to the LEAPS protocol. Both participants progressed through an increasingly challenging intervention despite their own unique presentations. Decision algorithms and exercise progressions for the LTP and HEP protocols facilitate translation of the RCT protocols to the real world of clinical practice. The two case examples facilitate translation of the LEAPS RCT into clinical practice by enhancing understanding of the protocols, their progression, and their application to individual participants. Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, available at: http://links.lww.com/JNPT/A147).
NASA Astrophysics Data System (ADS)
Leier, André; Marquez-Lago, Tatiana T.; Burrage, Kevin
2008-05-01
The delay stochastic simulation algorithm (DSSA) by Barrio et al. [PLoS Comput. Biol. 2, e117 (2006)] was developed to simulate delayed processes in cell biology in the presence of intrinsic noise, that is, when there are small-to-moderate numbers of certain key molecules present in a chemical reaction system. These delayed processes can faithfully represent complex interactions and mechanisms that imply a number of spatiotemporal processes often not explicitly modeled, such as transcription and translation, which are basic in the modeling of cell signaling pathways. However, for systems with widely varying reaction rate constants or large numbers of molecules, the simulation time steps of both the stochastic simulation algorithm (SSA) and the DSSA can become very small, causing considerable computational overheads. In order to overcome the limit of small step sizes, various τ-leap strategies have been suggested for improving the computational performance of the SSA. In this paper, we present a binomial τ-DSSA method that extends the τ-leap idea to the delay setting and avoids drawing insufficient numbers of reactions, a common shortcoming of existing binomial τ-leap methods that becomes evident when dealing with complex chemical interactions. The resulting inaccuracies are most evident in the delayed case, even when considering reaction products as potential reactants within the same time step in which they are produced. Moreover, we extend the framework to account for multicellular systems with different degrees of intercellular communication. We apply these ideas to two important genetic regulatory models, namely, the hes1 gene, implicated as a molecular clock, and a Her1/Her7 model for coupled oscillating cells.
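To illustrate the binomial ingredient, here is a minimal single-channel τ-leap step in Python; the delay handling and multicellular coupling of the paper are not reproduced, and the rate constants are invented for illustration.

    # Binomial tau-leap step for a degradation channel X -> 0 (sketch):
    # the number of firings in [t, t+tau) is binomial with a cap equal to
    # the reactant count, so the population can never go negative.
    import numpy as np

    rng = np.random.default_rng(4)

    def binomial_tau_leap_step(x, k, tau):
        a = k * x                                   # propensity
        n_max = x                                   # at most x molecules can react
        p = min(1.0, a * tau / max(n_max, 1))       # per-molecule firing probability
        fires = rng.binomial(n_max, p)
        return x - fires

    x = 500
    for _ in range(20):
        x = binomial_tau_leap_step(x, k=0.1, tau=0.2)
    print("remaining molecules:", x)

The cap n_max is what distinguishes binomial leaping from Poisson τ-leaping, where an unbounded draw can overshoot the available reactant population.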
Temporal acceleration of spatially distributed kinetic Monte Carlo simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Abhijit; Vlachos, Dionisios G.
The computational intensity of kinetic Monte Carlo (KMC) simulation is a major impediment in simulating large length and time scales. In recent work, an approximate method for KMC simulation of spatially uniform systems, termed the binomial τ-leap method, was introduced [A. Chatterjee, D.G. Vlachos, M.A. Katsoulakis, Binomial distribution based τ-leap accelerated stochastic simulation, J. Chem. Phys. 122 (2005) 024112], where molecular bundles instead of individual processes are executed over coarse-grained time increments. This temporal coarse-graining can lead to significant computational savings, but its generalization to spatial lattice KMC simulation has not been realized yet. Here we extend the binomial τ-leap method to lattice KMC simulations by combining it with spatially adaptive coarse-graining. Absolute stability and computational speed-up analyses for spatial systems, along with simulations, provide insights into the conditions where accuracy and substantial acceleration of the new spatio-temporal coarse-graining method are ensured. Model systems demonstrate that the r-time increment criterion of Chatterjee et al. obeys the absolute stability limit for values of r up to near 1.
Spatial coding-based approach for partitioning big spatial data in Hadoop
NASA Astrophysics Data System (ADS)
Yao, Xiaochuang; Mokbel, Mohamed F.; Alarabi, Louai; Eldawy, Ahmed; Yang, Jianyu; Yun, Wenju; Li, Lin; Ye, Sijing; Zhu, Dehai
2017-09-01
Spatial data partitioning (SDP) plays a powerful role in distributed storage and parallel computing for spatial data. However, due to the skewed distribution of spatial data and the varying volume of spatial vector objects, it is a significant challenge to ensure both optimal performance of spatial operations and data balance in the cluster. To tackle this problem, we propose a spatial coding-based approach for partitioning big spatial data in Hadoop. This approach first compresses the whole body of big spatial data based on a spatial coding matrix to create a sensing information set (SIS), including spatial code, size, count and other information. The SIS is then employed to build a spatial partitioning matrix, which is finally used to split all spatial objects into different partitions in the cluster. Based on our approach, neighbouring spatial objects can be partitioned into the same block. At the same time, it can also minimize the data skew in the Hadoop distributed file system (HDFS). The presented approach is compared, with a case study in this paper, against random sampling based partitioning, with three measurement standards, namely the spatial index quality, the data skew in HDFS, and range query performance. The experimental results show that our method based on the spatial coding technique can improve the query performance of big spatial data, as well as the data balance in HDFS. We implemented and deployed this approach in Hadoop, and it is also able to efficiently support any other distributed big spatial data systems.
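A toy Python sketch of spatial-coding-based partitioning, assuming a Z-order (Morton) code as the spatial code and equal-count splitting in place of the paper's partitioning matrix: nearby objects share code prefixes, so contiguous code ranges yield spatially coherent, size-balanced partitions.

    # Z-order (Morton) coding and equal-count range partitioning (sketch).
    import numpy as np

    def morton_code(x, y, bits=16):
        """Interleave the bits of quantized x, y in [0, 1)."""
        xi, yi = int(x * (1 << bits)), int(y * (1 << bits))
        code = 0
        for b in range(bits):
            code |= ((xi >> b) & 1) << (2 * b) | ((yi >> b) & 1) << (2 * b + 1)
        return code

    rng = np.random.default_rng(5)
    centroids = rng.random((10_000, 2))            # object centroids
    codes = np.array([morton_code(x, y) for x, y in centroids])
    order = np.argsort(codes)
    partitions = np.array_split(order, 16)         # 16 blocks of neighboring objects
    print([len(p) for p in partitions])

Splitting the sorted code sequence into equal-count ranges is a crude counterpart of the data-balance goal: each partition holds the same number of objects regardless of how skewed the spatial distribution is.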
Monkey search algorithm for ECE components partitioning
NASA Astrophysics Data System (ADS)
Kuliev, Elmar; Kureichik, Vladimir; Kureichik, Vladimir, Jr.
2018-05-01
The paper considers one of the important design problems: the partitioning of electronic computer equipment (ECE) components (blocks). The problem belongs to the NP-hard class and has a combinatorial and logical nature. In the paper, the partitioning problem is formulated as a partition of a graph into parts. To solve the given problem, the authors suggest using a bioinspired approach based on a monkey search algorithm. Computational experiments carried out with the developed software show the algorithm's efficiency, as well as its recommended settings for obtaining more effective solutions in comparison with a genetic algorithm.
NASA Astrophysics Data System (ADS)
Anderson, Ray; Skaggs, Todd; Alfieri, Joseph; Kustas, William; Wang, Dong; Ayars, James
2016-04-01
Partitioned land surface fluxes (e.g. evaporation, transpiration, photosynthesis, and ecosystem respiration) are needed as input, calibration, and validation data for numerous hydrological and land surface models. However, one of the most commonly used techniques for measuring land surface fluxes, Eddy Covariance (EC), can directly measure only net, combined water and carbon fluxes (evapotranspiration and net ecosystem exchange/productivity). Analysis of the correlation structure of high-frequency EC time series (hereafter flux partitioning or FP) has been proposed to directly partition net EC fluxes into their constituent components, using leaf-level water use efficiency (WUE) data to separate stomatal and non-stomatal transport processes. FP has significant logistical and spatial-representativeness advantages over other partitioning approaches (e.g. isotopic fluxes, sap flow, microlysimeters), but the performance of the FP algorithm relies on the accuracy of the intercellular CO2 (ci) concentration used to parameterize WUE for each flux averaging interval. In this study, we tested several parameterizations of ci as a function of atmospheric CO2 (ca), including (1) a constant ci/ca ratio for C3 and C4 photosynthetic pathway plants, (2) species-specific ci/ca-Vapor Pressure Deficit (VPD) relationships (quadratic and linear), and (3) generalized C3 and C4 photosynthetic pathway ci/ca-VPD relationships. We tested these ci parameterizations at three agricultural EC towers operating from 2011 to present in C4 and C3 crops (sugarcane - Saccharum officinarum L. and peach - Prunus persica), and validated against sap-flow sensors installed at the peach site. The peach results show that the FP algorithm driven by the species-specific parameterizations came to convergence significantly more frequently (~20% more often) than with the constant ci/ca ratio or the generic C3-VPD relationship. The FP algorithm parameterizations with a generic VPD relationship also yielded slightly higher transpiration (~5 W m-2 difference) than the constant ci/ca ratio. However, photosynthesis and respiration fluxes over sugarcane were ~15% lower with a VPD-ci/ca relationship than with a constant ci/ca ratio. The results illustrate the importance of combining leaf-level physiological observations with EC to improve the performance of the FP algorithm.
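A hedged Python sketch of the WUE parameterizations being compared: the Fick's-law form WUE ~ ca(1 - ci/ca)/(1.6 VPD) is standard, but the coefficients below are placeholders rather than the fitted species-specific values from the study, and the FP algorithm itself is not reproduced.

    # Leaf-level WUE under different ci/ca parameterizations (sketch).
    import numpy as np

    def wue(ca, vpd_kpa, mode='constant_c3'):
        """Water use efficiency ~ ca * (1 - ci/ca) / (1.6 * VPD)."""
        if mode == 'constant_c3':
            ci_over_ca = 0.7                     # typical C3 assumption
        elif mode == 'constant_c4':
            ci_over_ca = 0.4                     # typical C4 assumption
        else:                                    # illustrative linear VPD dependence
            ci_over_ca = np.clip(0.9 - 0.12 * vpd_kpa, 0.1, 0.95)
        return ca * (1.0 - ci_over_ca) / (1.6 * vpd_kpa)

    ca = 400.0                                   # ambient CO2, ppm
    for vpd in (0.5, 1.5, 3.0):
        print(vpd, wue(ca, vpd, 'constant_c3'), wue(ca, vpd, 'vpd_linear'))

The comparison in the loop shows why the choice matters: at high VPD the two parameterizations diverge, and that difference propagates directly into the transpiration fraction the FP algorithm assigns.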
Redrawing the map of Great Britain from a network of human interactions.
Ratti, Carlo; Sobolevsky, Stanislav; Calabrese, Francesco; Andris, Clio; Reades, Jonathan; Martino, Mauro; Claxton, Rob; Strogatz, Steven H
2010-12-08
Do regional boundaries defined by governments respect the more natural ways that people interact across space? This paper proposes a novel, fine-grained approach to regional delineation, based on analyzing networks of billions of individual human transactions. Given a geographical area and some measure of the strength of links between its inhabitants, we show how to partition the area into smaller, non-overlapping regions while minimizing the disruption to each person's links. We tested our method on the largest non-Internet human network, inferred from a large telecommunications database in Great Britain. Our partitioning algorithm yields geographically cohesive regions that correspond remarkably well with administrative regions, while unveiling unexpected spatial structures that had previously only been hypothesized in the literature. We also quantify the effects of partitioning, showing for instance that a possible secession of Wales from Great Britain would be twice as disruptive for the human network as that of Scotland.
Evolutionary Computation with Spatial Receding Horizon Control to Minimize Network Coding Resources
Leeson, Mark S.
2014-01-01
The minimization of network coding resources, such as coding nodes and links, is a challenging task, not only because it is an NP-hard problem, but also because the problem scale is huge; for example, networks in the real world may have thousands or even millions of nodes and links. Genetic algorithms (GAs) have good potential for resolving NP-hard problems like the network coding problem (NCP), but as population-based algorithms, they often confront serious scalability and applicability problems when applied to large- or huge-scale systems. Inspired by the temporal receding horizon control in control engineering, this paper proposes a novel spatial receding horizon control (SRHC) strategy as a network partitioning technology, and then designs an efficient GA to tackle the NCP. Traditional network partitioning methods can be viewed as a special case of the proposed SRHC, that is, one-step-wide SRHC, whilst the method in this paper is a generalized N-step-wide SRHC, which can make better use of global information about network topologies. Besides the SRHC strategy, some useful designs are also reported in this paper. The advantages of the proposed SRHC and GA for the NCP are illustrated by extensive experiments, and they have good potential for being extended to other large-scale complex problems. PMID:24883371
New Parallel Algorithms for Landscape Evolution Model
NASA Astrophysics Data System (ADS)
Jin, Y.; Zhang, H.; Shi, Y.
2017-12-01
Most landscape evolution models (LEMs) developed in the last two decades solve the diffusion equation to simulate the transportation of surface sediments. This numerical approach is difficult to parallelize due to the computation of the drainage area for each node, which requires a huge amount of communication if run in parallel. In order to overcome this difficulty, we developed two parallel algorithms for LEMs with a stream net. One algorithm handles the partition of the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other algorithm is based on a new partition algorithm, which partitions the nodes in catchments between processes first, and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massive computing techniques, and numerical experiments show that they are both adequate to handle large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
Computationally efficient algorithm for high sampling-frequency operation of active noise control
NASA Astrophysics Data System (ADS)
Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati
2015-05-01
In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency domain block ANC algorithms have been proposed in the past. These full-block frequency domain ANC algorithms are associated with some disadvantages, such as large block delay, quantization error due to the computation of large-size transforms, and implementation difficulties in existing low-end DSP hardware. To overcome these shortcomings, the partitioned block ANC algorithm is newly proposed, where the long filters in ANC are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency domain partitioned block FXLMS (FPBFXLMS) algorithm is much reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analyses for different filter orders and partition sizes are presented. Systematic computer simulations are carried out for both of the proposed partitioned block ANC algorithms to show their accuracy compared to the time domain FXLMS algorithm.
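To show the partitioning device the algorithm rests on, here is a Python sketch of uniformly partitioned overlap-save filtering (the adaptive FXLMS update and the secondary-path modeling are omitted): the long filter is split into equal partitions whose precomputed spectra multiply a frequency-domain delay line of input-block spectra.

    # Uniformly partitioned overlap-save convolution (sketch).
    import numpy as np

    def partitioned_fft_filter(x, h, part_len):
        parts = h.reshape(-1, part_len)                    # P partitions of the filter
        H = np.fft.rfft(parts, n=2 * part_len, axis=1)     # precomputed per partition
        n_blocks = len(x) // part_len
        X_hist = np.zeros((len(parts), part_len + 1), dtype=complex)
        y = np.zeros(n_blocks * part_len)
        buf = np.zeros(2 * part_len)
        for b in range(n_blocks):
            buf = np.concatenate([buf[part_len:], x[b*part_len:(b+1)*part_len]])
            X_hist = np.roll(X_hist, 1, axis=0)            # delay line of spectra
            X_hist[0] = np.fft.rfft(buf)
            acc = (X_hist * H).sum(axis=0)                 # one multiply-add per partition
            y[b*part_len:(b+1)*part_len] = np.fft.irfft(acc)[part_len:]  # keep last half
        return y

    rng = np.random.default_rng(6)
    x = rng.standard_normal(4096)
    h = rng.standard_normal(512)                           # long filter, 8 partitions
    y = partitioned_fft_filter(x, h, part_len=64)
    assert np.allclose(y[512:], np.convolve(x, h)[512:4096], atol=1e-8)

The block delay is one partition length rather than the full filter length, which is precisely the trade-off that makes partitioned schemes attractive against full-block frequency domain ANC.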
A Genetic Algorithm That Exchanges Neighboring Centers for Fuzzy c-Means Clustering
ERIC Educational Resources Information Center
Chahine, Firas Safwan
2012-01-01
Clustering algorithms are widely used in pattern recognition and data mining applications. Due to their computational efficiency, partitional clustering algorithms are better suited for applications with large datasets than hierarchical clustering algorithms. K-means is among the most popular partitional clustering algorithms, but has a major…
Performance analysis of a dual-tree algorithm for computing spatial distance histograms
Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni
2011-01-01
Many scientific and engineering fields produce large volumes of spatiotemporal data. The storage, retrieval, and analysis of such data impose great challenges on database system design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytic, the Spatial Distance Histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversing strategy to process point-to-point distances in the visited tree nodes in batches, and thus require less time than the brute-force approach, where all pairwise distances have to be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances that are left to be processed decreases exponentially with more levels of the tree visited. This leads to the proof of a time complexity lower than the quadratic time needed for a brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options in building the tree. PMID:21804753
Yang, Xue; Li, Xue-You; Li, Jia-Guo; Ma, Jun; Zhang, Li; Yang, Jan; Du, Quan-Ye
2014-02-01
The fast Fourier transform (FFT) is a basic approach to remote sensing image processing. With the improved capacity of remote sensing image capture, featuring hyperspectral, high spatial resolution and high temporal resolution data, using FFT technology to efficiently process huge remote sensing images has become a critical step and a research hot spot in current image processing technology. The FFT algorithm, one of the basic algorithms of image processing, can be used for stripe noise removal, image compression, image registration, etc., in processing remote sensing images. The CUFFT function library is a GPU-based FFT algorithm library; FFTW is an FFT algorithm library developed for the CPU on the PC platform, and is currently the fastest CPU-based FFT function library. However, both methods share a common problem: once the available memory is less than the capacity of the image, out-of-memory errors or memory overflow occur when using them to compute the FFT of the image. To address this problem, a GPU- and partitioning-technology-based Huge Remote-sensing-image Fast Fourier Transform (HRFFT) algorithm is proposed in this paper. By improving the FFT algorithm in the CUFFT function library, the problem of out-of-memory errors and memory overflow is solved. Moreover, the method is validated by experiments with CCD images from the HJ-1A satellite. When applied to practical image processing, it improves the effect of the processing and speeds up the computation, saving time and achieving sound results.
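A minimal NumPy sketch of the partitioning idea (CUFFT, GPU transfers, and the paper's exact scheme are not reproduced): a large 2D FFT is computed as chunked 1D FFTs along rows, a transpose, and chunked 1D FFTs along the former columns, with the image held in a disk-backed memmap so only one chunk needs to fit in memory at a time.

    # Chunked 2D FFT via two passes of 1D FFTs over a memory-mapped image.
    import numpy as np

    def chunked_fft2(path, shape, chunk_rows=256, dtype=np.complex64):
        img = np.memmap(path, dtype=dtype, mode='r+', shape=shape)
        for r in range(0, shape[0], chunk_rows):          # pass 1: FFT along rows
            img[r:r+chunk_rows] = np.fft.fft(img[r:r+chunk_rows], axis=1)
        imgT = img.T.copy()   # transpose; a real implementation tiles this too
        for r in range(0, shape[1], chunk_rows):          # pass 2: FFT along columns
            imgT[r:r+chunk_rows] = np.fft.fft(imgT[r:r+chunk_rows], axis=1)
        return imgT.T

    data = (np.random.rand(1024, 1024) + 0j).astype(np.complex64)
    data.tofile('image.bin')
    ref = np.fromfile('image.bin', dtype=np.complex64).reshape(1024, 1024).copy()
    out = chunked_fft2('image.bin', (1024, 1024))
    assert np.allclose(out, np.fft.fft2(ref), rtol=1e-3)

The row/column separability of the 2D DFT is what makes this partitioning exact rather than approximate; the only engineering difficulty left is the out-of-core transpose between the two passes.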
Li, Fei; Yu, Peicheng; Xu, Xinlu; ...
2017-01-12
In this study we present a customized finite-difference-time-domain (FDTD) Maxwell solver for the particle-in-cell (PIC) algorithm. The solver is customized to effectively eliminate the numerical Cerenkov instability (NCI) which arises when a plasma (neutral or non-neutral) relativistically drifts on a grid when using the PIC algorithm. We control the EM dispersion curve in the direction of the plasma drift of a FDTD Maxwell solver by using a customized higher order finite difference operator for the spatial derivative along the direction of the drift (1ˆ direction). We show that this eliminates the main NCI modes with moderate |k1|, while keeping additional main NCI modes well outside the range of physical interest with higher |k1|. These main NCI modes can be easily filtered out along with first spatial aliasing NCI modes which are also at the edge of the fundamental Brillouin zone. The customized solver has the possible advantage of improved parallel scalability because it can be easily partitioned along 1ˆ, which typically has many more cells than other directions for the problems of interest. We show that FFTs can be performed locally to current on each partition to filter out the main and first spatial aliasing NCI modes, and to correct the current so that it satisfies the continuity equation for the customized spatial derivative. This ensures that Gauss' Law is satisfied. Lastly, we present simulation examples of one relativistically drifting plasma, of two colliding relativistically drifting plasmas, and of nonlinear laser wakefield acceleration (LWFA) in a Lorentz boosted frame that show no evidence of the NCI when using this customized Maxwell solver together with its NCI elimination scheme.
NASA Astrophysics Data System (ADS)
Li, Fei; Yu, Peicheng; Xu, Xinlu; Fiuza, Frederico; Decyk, Viktor K.; Dalichaouch, Thamine; Davidson, Asher; Tableman, Adam; An, Weiming; Tsung, Frank S.; Fonseca, Ricardo A.; Lu, Wei; Mori, Warren B.
2017-05-01
In this paper we present a customized finite-difference-time-domain (FDTD) Maxwell solver for the particle-in-cell (PIC) algorithm. The solver is customized to effectively eliminate the numerical Cerenkov instability (NCI) which arises when a plasma (neutral or non-neutral) relativistically drifts on a grid when using the PIC algorithm. We control the EM dispersion curve in the direction of the plasma drift of a FDTD Maxwell solver by using a customized higher order finite difference operator for the spatial derivative along the direction of the drift (1ˆ direction). We show that this eliminates the main NCI modes with moderate |k1|, while keeping additional main NCI modes well outside the range of physical interest with higher |k1|. These main NCI modes can be easily filtered out along with first spatial aliasing NCI modes which are also at the edge of the fundamental Brillouin zone. The customized solver has the possible advantage of improved parallel scalability because it can be easily partitioned along 1ˆ, which typically has many more cells than other directions for the problems of interest. We show that FFTs can be performed locally to current on each partition to filter out the main and first spatial aliasing NCI modes, and to correct the current so that it satisfies the continuity equation for the customized spatial derivative. This ensures that Gauss' Law is satisfied. We present simulation examples of one relativistically drifting plasma, of two colliding relativistically drifting plasmas, and of nonlinear laser wakefield acceleration (LWFA) in a Lorentz boosted frame that show no evidence of the NCI when using this customized Maxwell solver together with its NCI elimination scheme.
An iterative network partition algorithm for accurate identification of dense network modules
Sun, Siqi; Dong, Xinran; Fu, Yao; Tian, Weidong
2012-01-01
A key step in network analysis is to partition a complex network into dense modules. Currently, modularity is one of the most popular benefit functions used to partition network modules. However, recent studies suggested that it has an inherent limitation in detecting dense network modules. In this study, we observed that despite the limitation, modularity has the advantage of preserving the primary network structure of the undetected modules. Thus, we have developed a simple iterative Network Partition (iNP) algorithm to partition a network. The iNP algorithm provides a general framework in which any modularity-based algorithm can be implemented in the network partition step. Here, we tested iNP with three modularity-based algorithms: multi-step greedy (MSG), spectral clustering and Qcut. Compared with the original three methods, iNP achieved a significant improvement in the quality of network partition in a benchmark study with simulated networks, identified more modules with significantly better enrichment of functionally related genes in both yeast protein complex network and breast cancer gene co-expression network, and discovered more cancer-specific modules in the cancer gene co-expression network. As such, iNP should have a broad application as a general method to assist in the analysis of biological networks. PMID:22121225
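A rough NetworkX sketch of the iNP idea, with greedy modularity maximization as the plug-in partitioner; the density criterion below (average within-module degree above a threshold) is a simplified stand-in for the paper's module test, and the benchmark graph is synthetic.

    # Iterative network partition (sketch): partition, keep dense modules,
    # re-partition the remaining (undetected) part of the network.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def iterative_partition(G, min_density=2.0, max_iter=10):
        modules, rest = [], G.copy()
        for _ in range(max_iter):
            if rest.number_of_edges() == 0:
                break
            found_dense = False
            for comm in greedy_modularity_communities(rest):
                sub = rest.subgraph(comm)
                if len(comm) > 2 and 2 * sub.number_of_edges() / len(comm) >= min_density:
                    modules.append(set(comm))
                    rest.remove_nodes_from(comm)
                    found_dense = True
            if not found_dense:
                break
        return modules, set(rest.nodes)

    G = nx.planted_partition_graph(4, 25, p_in=0.5, p_out=0.02, seed=7)
    modules, leftovers = iterative_partition(G)
    print([len(m) for m in modules], "nodes undetected:", len(leftovers))

Because modularity tends to preserve the structure of the modules it fails to detect, re-running the partitioner on the residual graph gives those modules another chance to emerge, which is the observation the iNP framework exploits.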
NASA Astrophysics Data System (ADS)
Olsson, O.
2018-01-01
We present a novel heuristic derived from a probabilistic cost model for approximate N-body simulations. We show that this new heuristic can be used to guide tree construction towards higher quality trees with improved performance over current N-body codes. This represents an important step beyond the current practice of using spatial partitioning for N-body simulations, and enables adoption of a range of state-of-the-art algorithms developed for computer graphics applications to yield further improvements in N-body simulation performance. We outline directions for further developments and review the most promising such algorithms.
NASA Astrophysics Data System (ADS)
Parrish, Robert M.; Sherrill, C. David
2014-07-01
We develop a physically-motivated assignment of symmetry adapted perturbation theory for intermolecular interactions (SAPT) into atom-pairwise contributions (the A-SAPT partition). The basic precept of A-SAPT is that the many-body interaction energy components are computed normally under the formalism of SAPT, following which a spatially-localized two-body quasiparticle interaction is extracted from the many-body interaction terms. For electrostatics and induction source terms, the relevant quasiparticles are atoms, which are obtained in this work through the iterative stockholder analysis (ISA) procedure. For the exchange, induction response, and dispersion terms, the relevant quasiparticles are local occupied orbitals, which are obtained in this work through the Pipek-Mezey procedure. The local orbital atomic charges obtained from ISA additionally allow the terms involving local orbitals to be assigned in an atom-pairwise manner. Further summation over the atoms of one or the other monomer allows for a chemically intuitive visualization of the contribution of each atom and interaction component to the overall noncovalent interaction strength. Herein, we present the intuitive development and mathematical form for A-SAPT applied in the SAPT0 approximation (the A-SAPT0 partition). We also provide an efficient series of algorithms for the computation of the A-SAPT0 partition with essentially the same computational cost as the corresponding SAPT0 decomposition. We probe the sensitivity of the A-SAPT0 partition to the ISA grid and convergence parameter, orbital localization metric, and induction coupling treatment, and recommend a set of practical choices which closes the definition of the A-SAPT0 partition. We demonstrate the utility and computational tractability of the A-SAPT0 partition in the context of side-on cation-π interactions and the intercalation of DNA by proflavine. A-SAPT0 clearly shows the key processes in these complicated noncovalent interactions, in systems with up to 220 atoms and 2845 basis functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parrish, Robert M.; Sherrill, C. David, E-mail: sherrill@gatech.edu
2014-07-28
We develop a physically-motivated assignment of symmetry adapted perturbation theory for intermolecular interactions (SAPT) into atom-pairwise contributions (the A-SAPT partition). The basic precept of A-SAPT is that the many-body interaction energy components are computed normally under the formalism of SAPT, following which a spatially-localized two-body quasiparticle interaction is extracted from the many-body interaction terms. For electrostatics and induction source terms, the relevant quasiparticles are atoms, which are obtained in this work through the iterative stockholder analysis (ISA) procedure. For the exchange, induction response, and dispersion terms, the relevant quasiparticles are local occupied orbitals, which are obtained in this work through the Pipek-Mezey procedure. The local orbital atomic charges obtained from ISA additionally allow the terms involving local orbitals to be assigned in an atom-pairwise manner. Further summation over the atoms of one or the other monomer allows for a chemically intuitive visualization of the contribution of each atom and interaction component to the overall noncovalent interaction strength. Herein, we present the intuitive development and mathematical form for A-SAPT applied in the SAPT0 approximation (the A-SAPT0 partition). We also provide an efficient series of algorithms for the computation of the A-SAPT0 partition with essentially the same computational cost as the corresponding SAPT0 decomposition. We probe the sensitivity of the A-SAPT0 partition to the ISA grid and convergence parameter, orbital localization metric, and induction coupling treatment, and recommend a set of practical choices which closes the definition of the A-SAPT0 partition. We demonstrate the utility and computational tractability of the A-SAPT0 partition in the context of side-on cation-π interactions and the intercalation of DNA by proflavine. A-SAPT0 clearly shows the key processes in these complicated noncovalent interactions, in systems with up to 220 atoms and 2845 basis functions.
A New Model for Solving Time-Cost-Quality Trade-Off Problems in Construction
Fu, Fang; Zhang, Tao
2016-01-01
Poor quality affects project makespan and total costs negatively, but it can be recovered by repair work during construction. We construct a new non-linear programming model based on the classic multi-mode resource-constrained project scheduling problem, considering repair work. In order to obtain satisfactory quality without a large increase in project cost, the objective is to minimize the total quality cost, which consists of the prevention cost and failure cost according to quality-cost analysis. A binary dependent normal distribution function is adopted to describe activity quality; cumulative quality is defined to determine whether to initiate repair work, according to the different relationships among activity qualities, namely the coordinative and precedence relationships. Furthermore, a shuffled frog-leaping algorithm is developed to solve this discrete trade-off problem based on an adaptive serial schedule generation scheme and an adjusted activity list. In the algorithm, the frog-leaping process combines the crossover operator of the genetic algorithm with a permutation-based local search. Finally, an example of a construction project for a framed railway overpass is provided to examine the algorithm's performance and to assist decision making in searching for the appropriate makespan and quality threshold with minimal cost. PMID:27911939
Nonlinear Rayleigh wave inversion based on the shuffled frog-leaping algorithm
NASA Astrophysics Data System (ADS)
Sun, Cheng-Yu; Wang, Yan-Yan; Wu, Dun-Shi; Qin, Xiao-Jun
2017-12-01
At present, near-surface shear wave velocities are mainly calculated through Rayleigh wave dispersion-curve inversions in engineering surface investigations, but the required calculations pose a highly nonlinear global optimization problem. In order to alleviate the risk of falling into a local optimal solution, this paper introduces a new global optimization method, the shuffled frog-leaping algorithm (SFLA), into the Rayleigh wave dispersion-curve inversion process. SFLA is a swarm-intelligence-based algorithm that simulates a group of frogs searching for food. It uses few parameters, achieves rapid convergence, and is capable of effective global search. In order to test the reliability and calculation performance of SFLA, noise-free and noisy synthetic datasets were inverted. We conducted a comparative analysis with other established algorithms using the noise-free dataset, and then tested the ability of SFLA to cope with data noise. Finally, we inverted a real-world example to examine the applicability of SFLA. Results from both synthetic and field data demonstrated the effectiveness of SFLA in the interpretation of Rayleigh wave dispersion curves. We found that SFLA is superior to the established methods in terms of both reliability and computational efficiency, so it offers great potential to improve our ability to solve geophysical inversion problems.
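For readers unfamiliar with SFLA, here is a compact sketch of the canonical algorithm on a generic minimization problem; the memeplex layout, leap rule, and re-randomization step follow the textbook description and are not necessarily the settings used in this inversion study:

```python
import numpy as np

def sfla(f, dim, bounds=(-5.0, 5.0), n_frogs=30, n_memeplexes=5,
         n_iter=200, seed=0):
    """Minimal shuffled frog-leaping sketch for minimising f (assumed
    standard SFLA, not the authors' tuned implementation)."""
    lo, hi = bounds
    rng = np.random.default_rng(seed)
    frogs = rng.uniform(lo, hi, (n_frogs, dim))
    for _ in range(n_iter):
        fit = np.apply_along_axis(f, 1, frogs)
        frogs = frogs[np.argsort(fit)]              # best frog first
        g_best = frogs[0].copy()
        for m in range(n_memeplexes):
            idx = np.arange(m, n_frogs, n_memeplexes)  # shuffle into memeplexes
            b, w = idx[0], idx[-1]                  # local best / worst frog
            step = rng.random(dim) * (frogs[b] - frogs[w])
            trial = np.clip(frogs[w] + step, lo, hi)
            if f(trial) < f(frogs[w]):
                frogs[w] = trial
                continue
            step = rng.random(dim) * (g_best - frogs[w])  # leap toward global best
            trial = np.clip(frogs[w] + step, lo, hi)
            frogs[w] = trial if f(trial) < f(frogs[w]) else rng.uniform(lo, hi, dim)
    fit = np.apply_along_axis(f, 1, frogs)
    i = int(np.argmin(fit))
    return frogs[i], float(fit[i])

# e.g. sfla(lambda x: float(np.sum(x**2)), dim=4) approaches the zero minimum
```

In the inversion setting, f would be the misfit between observed and modeled dispersion curves as a function of the layered shear-velocity model.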
Task-specific image partitioning.
Kim, Sungwoong; Nowozin, Sebastian; Kohli, Pushmeet; Yoo, Chang D
2013-02-01
Image partitioning is an important preprocessing step for many of the state-of-the-art algorithms used for performing high-level computer vision tasks. Typically, partitioning is conducted without regard to the task at hand. We propose a task-specific image partitioning framework to produce a region-based image representation that leads to higher task performance than that reached using any task-oblivious partitioning framework or the few existing supervised partitioning frameworks. The proposed method partitions the image by means of correlation clustering, maximizing a linear discriminant function defined over a superpixel graph. The parameters of the discriminant function that define task-specific similarity/dissimilarity among superpixels are estimated based on a structured support vector machine (S-SVM) using task-specific training data. The S-SVM learning leads to better generalization ability, while the construction of the superpixel graph used to define the discriminant function allows a rich set of features to be incorporated to improve discriminability and robustness. We evaluate the learned task-aware partitioning algorithms on three benchmark datasets. Results show that task-aware partitioning leads to better labeling performance than the partitioning computed by state-of-the-art general-purpose and supervised partitioning algorithms. We believe that the task-specific image partitioning paradigm is widely applicable to improving performance in high-level image understanding tasks.
DNA Encoding Training Using 3D Gesture Interaction.
Nicola, Stelian; Handrea, Flavia-Laura; Crişan-Vida, Mihaela; Stoicu-Tivadar, Lăcrămioara
2017-01-01
The work described in this paper summarizes the development process and presents the results of a human genetics training application for studying the 20 amino acids formed by combinations of 3 DNA nucleotides, targeting mainly medical and bioinformatics students. Currently, applications in this domain that use human gestures recognized by the Leap Motion sensor serve for controlling molecules, learning from the Mendeleev table, or visualizing animated reactions of specific molecules with water. The novelty of the current application consists in using the Leap Motion sensor with newly created gestures for application control, and in a tag-based algorithm for each amino acid that depends on the position in 3D virtual space of the 4 nucleotides of DNA and their type. The team proposes a 3D application based on the Unity editor and the Leap Motion sensor in which the user has the liberty of forming different combinations of the 20 amino acids. The results confirm that this new way of studying medicine/biochemistry, using the Leap Motion sensor to handle amino acids, is suitable for students. The application is original and interactive, and users can create their own amino acid structures in a 3D-like environment, which they could not do using traditional pen and paper.
Semi-supervised clustering for parcellating brain regions based on resting state fMRI data
NASA Astrophysics Data System (ADS)
Cheng, Hewei; Fan, Yong
2014-03-01
Many unsupervised clustering techniques have been adopted for parcellating brain regions of interest into functionally homogeneous subregions based on resting state fMRI data. However, the unsupervised clustering techniques are not able to take advantage of existing knowledge of the functional neuroanatomy readily available from studies of cytoarchitectonic parcellation or meta-analysis of the literature. In this study, we propose a semi-supervised clustering method for parcellating the amygdala into functionally homogeneous subregions based on resting state fMRI data. Particularly, the semi-supervised clustering is implemented under the framework of graph partitioning, and adopts prior information and spatial consistency constraints to obtain a spatially contiguous parcellation result. The graph partitioning problem is solved using an efficient algorithm similar to the well-known weighted kernel k-means algorithm. Our method has been validated for parcellating the amygdala into 3 subregions based on resting state fMRI data of 28 subjects. The experiment results have demonstrated that the proposed method is more robust than unsupervised clustering and able to parcellate the amygdala into centromedial, laterobasal, and superficial parts with improved functional homogeneity compared with the cytoarchitectonic parcellation result. The validity of the parcellation results is also supported by distinctive functional and structural connectivity patterns of the subregions and high consistency between coactivation patterns derived from a meta-analysis and functional connectivity patterns of corresponding subregions.
Marginal Consistency: Upper-Bounding Partition Functions over Commutative Semirings.
Werner, Tomás
2015-07-01
Many inference tasks in pattern recognition and artificial intelligence lead to partition functions in which addition and multiplication are abstract binary operations forming a commutative semiring. By generalizing max-sum diffusion (one of the convergent message passing algorithms for approximate MAP inference in graphical models), we propose an iterative algorithm to upper bound such partition functions over commutative semirings. The iteration of the algorithm is remarkably simple: change any two factors of the partition function such that their product remains the same and their overlapping marginals become equal. In many commutative semirings, repeating this iteration for different pairs of factors converges to a fixed point when the overlapping marginals of every pair of factors coincide. We call this state marginal consistency. During this process, an upper bound on the partition function monotonically decreases. This abstract algorithm unifies several existing algorithms, including max-sum diffusion and basic constraint propagation (or local consistency) algorithms in constraint programming. We further construct a hierarchy of marginal consistencies of increasingly higher levels and show that any such level can be enforced by adding identity factors of higher arity (order). Finally, we discuss instances of the framework for several semirings, including the distributive lattice and the max-sum and sum-product semirings.
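The iteration described here is easy to state concretely in the sum-product semiring. The sketch below equalizes the overlapping marginals of two factor tables while keeping their product fixed; the paper's general formulation abstracts the sum and product operations over any commutative semiring:

```python
import numpy as np

def equalize_pair(f, g, sum_axis_f, sum_axis_g, eps=1e-12):
    """One marginal-consistency update in the sum-product semiring:
    rescale two factors so their marginals over the shared variable
    coincide, leaving the product of the two factors unchanged."""
    mf = f.sum(axis=sum_axis_f)           # marginal of f over shared variable
    mg = g.sum(axis=sum_axis_g)
    r = np.sqrt((mg + eps) / (mf + eps))  # geometric-mean rescaling
    f = f * np.expand_dims(r, sum_axis_f)
    g = g / np.expand_dims(r, sum_axis_g)
    return f, g

# f(x, y) and g(y, z) share the variable y
rng = np.random.default_rng(0)
f0, g0 = rng.random((3, 4)), rng.random((4, 5))
f1, g1 = equalize_pair(f0, g0, 0, 1)
assert np.allclose(f1.sum(0), g1.sum(1))  # overlapping marginals now coincide
```

Repeating this update over all overlapping factor pairs is the fixed-point iteration the abstract describes; the monotone decrease of the upper bound belongs to the semiring-general analysis.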
A 2D electrostatic PIC code for the Mark III Hypercube
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferraro, R.D.; Liewer, P.C.; Decyk, V.K.
We have implemented a 2D electrostatic plasma particle-in-cell (PIC) simulation code on the Caltech/JPL Mark IIIfp Hypercube. The code simulates plasma effects by evolving in time the trajectories of thousands to millions of charged particles subject to their self-consistent fields. Each particle's position and velocity is advanced in time using a leap-frog method for integrating Newton's equations of motion in electric and magnetic fields. The electric field due to these moving charged particles is calculated on a spatial grid at each time step by solving Poisson's equation in Fourier space. These two tasks represent the largest part of the computation. To obtain efficient operation on a distributed memory parallel computer, we are using the General Concurrent PIC (GCPIC) algorithm previously developed for a 1D parallel PIC code.
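The particle push described here is the standard leap-frog scheme. A generic kick-drift-kick sketch (not the Mark III implementation, and omitting the magnetic rotation step a full electromagnetic push would need) is:

```python
import numpy as np

def leapfrog_push(x, v, accel, dt):
    """One kick-drift-kick leap-frog step for particle trajectories.
    accel(x) returns the acceleration from the self-consistent fields
    interpolated to the particle positions."""
    v_half = v + 0.5 * dt * accel(x)           # half kick
    x_new = x + dt * v_half                    # drift
    v_new = v_half + 0.5 * dt * accel(x_new)   # half kick
    return x_new, v_new
```

Production PIC codes usually keep velocities staggered at half time steps instead of re-synchronizing them every step, which is algebraically equivalent to this form.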
Basto, Mafalda P; Santos-Reis, Margarida; Simões, Luciana; Grilo, Clara; Cardoso, Luís; Cortes, Helder; Bruford, Michael W; Fernandes, Carlos
2016-01-01
The identification of populations and spatial genetic patterns is important for ecological and conservation research, and spatially explicit individual-based methods have been recognised as powerful tools in this context. Mammalian carnivores are intrinsically vulnerable to habitat fragmentation but not much is known about the genetic consequences of fragmentation in common species. Stone martens (Martes foina) and red foxes (Vulpes vulpes) share a widespread Palearctic distribution and are considered habitat generalists, but in the Iberian Peninsula stone martens tend to occur in higher quality habitats. We compared their genetic structure in Portugal to see if they are consistent with their differences in ecological plasticity, and also to illustrate an approach to explicitly delineate the spatial boundaries of consistently identified genetic units. We analysed microsatellite data using spatial Bayesian clustering methods (implemented in the software BAPS, GENELAND and TESS), a progressive partitioning approach and a multivariate technique (Spatial Principal Components Analysis-sPCA). Three consensus Bayesian clusters were identified for the stone marten. No consensus was achieved for the red fox, but one cluster was the most probable clustering solution. Progressive partitioning and sPCA suggested additional clusters in the stone marten but they were not consistent among methods and were geographically incoherent. The contrasting results between the two species are consistent with the literature reporting stricter ecological requirements of the stone marten in the Iberian Peninsula. The observed genetic structure in the stone marten may have been influenced by landscape features, particularly rivers, and fragmentation. We suggest that an approach based on a consensus clustering solution of multiple different algorithms may provide an objective and effective means to delineate potential boundaries of inferred subpopulations. sPCA and progressive partitioning offer further verification of possible population structure and may be useful for revealing cryptic spatial genetic patterns worth further investigation.
Basto, Mafalda P.; Santos-Reis, Margarida; Simões, Luciana; Grilo, Clara; Cardoso, Luís; Cortes, Helder; Bruford, Michael W.; Fernandes, Carlos
2016-01-01
The identification of populations and spatial genetic patterns is important for ecological and conservation research, and spatially explicit individual-based methods have been recognised as powerful tools in this context. Mammalian carnivores are intrinsically vulnerable to habitat fragmentation but not much is known about the genetic consequences of fragmentation in common species. Stone martens (Martes foina) and red foxes (Vulpes vulpes) share a widespread Palearctic distribution and are considered habitat generalists, but in the Iberian Peninsula stone martens tend to occur in higher quality habitats. We compared their genetic structure in Portugal to see if they are consistent with their differences in ecological plasticity, and also to illustrate an approach to explicitly delineate the spatial boundaries of consistently identified genetic units. We analysed microsatellite data using spatial Bayesian clustering methods (implemented in the software BAPS, GENELAND and TESS), a progressive partitioning approach and a multivariate technique (Spatial Principal Components Analysis-sPCA). Three consensus Bayesian clusters were identified for the stone marten. No consensus was achieved for the red fox, but one cluster was the most probable clustering solution. Progressive partitioning and sPCA suggested additional clusters in the stone marten but they were not consistent among methods and were geographically incoherent. The contrasting results between the two species are consistent with the literature reporting stricter ecological requirements of the stone marten in the Iberian Peninsula. The observed genetic structure in the stone marten may have been influenced by landscape features, particularly rivers, and fragmentation. We suggest that an approach based on a consensus clustering solution of multiple different algorithms may provide an objective and effective means to delineate potential boundaries of inferred subpopulations. sPCA and progressive partitioning offer further verification of possible population structure and may be useful for revealing cryptic spatial genetic patterns worth further investigation. PMID:26727497
Redrawing the Map of Great Britain from a Network of Human Interactions
Ratti, Carlo; Sobolevsky, Stanislav; Calabrese, Francesco; Andris, Clio; Reades, Jonathan; Martino, Mauro; Claxton, Rob; Strogatz, Steven H.
2010-01-01
Do regional boundaries defined by governments respect the more natural ways that people interact across space? This paper proposes a novel, fine-grained approach to regional delineation, based on analyzing networks of billions of individual human transactions. Given a geographical area and some measure of the strength of links between its inhabitants, we show how to partition the area into smaller, non-overlapping regions while minimizing the disruption to each person's links. We tested our method on the largest non-Internet human network, inferred from a large telecommunications database in Great Britain. Our partitioning algorithm yields geographically cohesive regions that correspond remarkably well with administrative regions, while unveiling unexpected spatial structures that had previously only been hypothesized in the literature. We also quantify the effects of partitioning, showing for instance that the effects of a possible secession of Wales from Great Britain would be twice as disruptive for the human network as that of Scotland. PMID:21170390
Handling Data Skew in MapReduce Cluster by Using Partition Tuning
Gao, Yufei; Zhou, Yanjie; Zhou, Bing; Shi, Lei; Zhang, Jiacai
2017-01-01
The healthcare industry has generated large amounts of data, and analyzing these has emerged as an important problem in recent years. The MapReduce programming model has been successfully used for big data analytics. However, data skew invariably occurs in big data analytics and seriously affects efficiency. To overcome the data skew problem in MapReduce, we have in the past proposed a data processing algorithm called Partition Tuning-based Skew Handling (PTSH). In comparison with the one-stage partitioning strategy used in the traditional MapReduce model, PTSH uses a two-stage strategy and the partition tuning method to disperse key-value pairs in virtual partitions and recombines each partition in case of data skew. The robustness and efficiency of the proposed algorithm were tested on a wide variety of simulated datasets and real healthcare datasets. The results showed that PTSH algorithm can handle data skew in MapReduce efficiently and improve the performance of MapReduce jobs in comparison with the native Hadoop, Closer, and locality-aware and fairness-aware key partitioning (LEEN). We also found that the time needed for rule extraction can be reduced significantly by adopting the PTSH algorithm, since it is more suitable for association rule mining (ARM) on healthcare data.
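A toy illustration of the two-stage idea as described in the abstract: hash key-value pairs into many small virtual partitions, then pack the virtual partitions onto reducers to even out skewed loads. The greedy largest-first packing is an assumption for illustration; the paper's partition tuning method may differ:

```python
from collections import defaultdict

def partition_tuning(pairs, n_virtual, n_reducers):
    """Sketch of a PTSH-style two-stage partitioner (assumed greedy
    packing, not the authors' exact tuning rule)."""
    # Stage 1: scatter key-value pairs into many small virtual partitions.
    virtual = defaultdict(list)
    for k, v in pairs:
        virtual[hash(k) % n_virtual].append((k, v))
    # Stage 2: pack virtual partitions onto reducers, heaviest first,
    # always onto the currently least-loaded reducer.
    loads = [0] * n_reducers
    assignment = defaultdict(list)
    for _, items in sorted(virtual.items(), key=lambda kv: -len(kv[1])):
        r = loads.index(min(loads))
        assignment[r].extend(items)
        loads[r] += len(items)
    return dict(assignment)
```

With one-stage hashing, a single hot key can overload a reducer; splitting into many virtual partitions first gives the packer finer-grained units to rebalance.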
Handling Data Skew in MapReduce Cluster by Using Partition Tuning.
Gao, Yufei; Zhou, Yanjie; Zhou, Bing; Shi, Lei; Zhang, Jiacai
2017-01-01
The healthcare industry has generated large amounts of data, and analyzing these has emerged as an important problem in recent years. The MapReduce programming model has been successfully used for big data analytics. However, data skew invariably occurs in big data analytics and seriously affects efficiency. To overcome the data skew problem in MapReduce, we have in the past proposed a data processing algorithm called Partition Tuning-based Skew Handling (PTSH). In comparison with the one-stage partitioning strategy used in the traditional MapReduce model, PTSH uses a two-stage strategy and the partition tuning method to disperse key-value pairs in virtual partitions and recombines each partition in case of data skew. The robustness and efficiency of the proposed algorithm were tested on a wide variety of simulated datasets and real healthcare datasets. The results showed that PTSH algorithm can handle data skew in MapReduce efficiently and improve the performance of MapReduce jobs in comparison with the native Hadoop, Closer, and locality-aware and fairness-aware key partitioning (LEEN). We also found that the time needed for rule extraction can be reduced significantly by adopting the PTSH algorithm, since it is more suitable for association rule mining (ARM) on healthcare data.
Handling Data Skew in MapReduce Cluster by Using Partition Tuning
Zhou, Yanjie; Zhou, Bing; Shi, Lei
2017-01-01
The healthcare industry has generated large amounts of data, and analyzing these has emerged as an important problem in recent years. The MapReduce programming model has been successfully used for big data analytics. However, data skew invariably occurs in big data analytics and seriously affects efficiency. To overcome the data skew problem in MapReduce, we have in the past proposed a data processing algorithm called Partition Tuning-based Skew Handling (PTSH). In comparison with the one-stage partitioning strategy used in the traditional MapReduce model, PTSH uses a two-stage strategy and the partition tuning method to disperse key-value pairs in virtual partitions and recombines each partition in case of data skew. The robustness and efficiency of the proposed algorithm were tested on a wide variety of simulated datasets and real healthcare datasets. The results showed that PTSH algorithm can handle data skew in MapReduce efficiently and improve the performance of MapReduce jobs in comparison with the native Hadoop, Closer, and locality-aware and fairness-aware key partitioning (LEEN). We also found that the time needed for rule extraction can be reduced significantly by adopting the PTSH algorithm, since it is more suitable for association rule mining (ARM) on healthcare data. PMID:29065568
Sharifahmadian, Ershad
2006-01-01
The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here the author modifies the algorithm to provide even better performance than the SPIHT algorithm. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm performs faster than the SPIHT algorithm. In addition, the proposed algorithm reduces the number of bits in a bit stream which is stored or transmitted. I applied it to compression of multichannel ECG data. Also, I present a specific procedure based on the modified algorithm for more efficient compression of multichannel ECG data. This method was employed on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results regarding compression of multichannel ECG data. Furthermore, in order to compress one signal which is stored for a long time, the proposed multichannel compression method can be utilized efficiently.
An agglomerative hierarchical clustering approach to visualisation in Bayesian clustering problems
Dawson, Kevin J.; Belkhir, Khalid
2009-01-01
Clustering problems (including the clustering of individuals into outcrossing populations, hybrid generations, full-sib families and selfing lines) have recently received much attention in population genetics. In these clustering problems, the parameter of interest is a partition of the set of sampled individuals: the sample partition. In a fully Bayesian approach to clustering problems of this type, our knowledge about the sample partition is represented by a probability distribution on the space of possible sample partitions. Since the number of possible partitions grows very rapidly with the sample size, we cannot visualise this probability distribution in its entirety unless the sample is very small. As a solution to this visualisation problem, we recommend using an agglomerative hierarchical clustering algorithm, which we call the exact linkage algorithm. This algorithm is a special case of the maximin clustering algorithm that we introduced previously. The exact linkage algorithm is now implemented in our software package Partition View. The exact linkage algorithm takes the posterior co-assignment probabilities as input, and yields as output a rooted binary tree or, more generally, a forest of such trees. Each node of this forest defines a set of individuals, and the node height is the posterior co-assignment probability of this set. This provides a useful visual representation of the uncertainty associated with the assignment of individuals to categories. It is also a useful starting point for a more detailed exploration of the posterior distribution in terms of the co-assignment probabilities. PMID:19337306
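The input/output contract described here (co-assignment matrix in, tree with probability heights out) can be emulated with standard hierarchical clustering. The sketch below uses SciPy's complete linkage on the distance 1 - P as a stand-in for the exact linkage algorithm, which has its own special linkage rule:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

def coassignment_tree(P, labels=None):
    """Visualise posterior co-assignment probabilities P (n x n,
    symmetric, ones on the diagonal) as a dendrogram; an approximation
    of the paper's exact linkage output, not its algorithm."""
    D = 1.0 - np.asarray(P, dtype=float)   # probabilities -> distances
    np.fill_diagonal(D, 0.0)
    Z = linkage(squareform(D, checks=False), method='complete')
    return dendrogram(Z, labels=labels, no_plot=True)
```

Node heights in the resulting tree then correspond to 1 minus the co-assignment probability at which groups of individuals merge.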
Improved parallel data partitioning by nested dissection with applications to information retrieval.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, Michael M.; Chevalier, Cedric; Boman, Erik Gunnar
The computational work in many information retrieval and analysis algorithms is based on sparse linear algebra. Sparse matrix-vector multiplication is a common kernel in many of these computations. Thus, an important related combinatorial problem in parallel computing is how to distribute the matrix and the vectors among processors so as to minimize the communication cost. We focus on minimizing the total communication volume while keeping the computation balanced across processes. In [1], the first two authors presented a new 2D partitioning method, the nested dissection partitioning algorithm. In this paper, we improve on that algorithm and show that it is a good option for data partitioning in information retrieval. We also show partitioning time can be substantially reduced by using the SCOTCH software, and quality improves in some cases, too.
Efficient algorithms for a class of partitioning problems
NASA Technical Reports Server (NTRS)
Iqbal, M. Ashraf; Bokhari, Shahid H.
1990-01-01
The problem of optimally partitioning the modules of chain- or tree-like tasks over chain-structured or host-satellite multiple computer systems is addressed. This important class of problems includes many signal processing and industrial control applications. Prior research has resulted in a succession of faster exact and approximate algorithms for these problems. Polynomial exact and approximate algorithms are described for this class that are better than any of the previously reported algorithms. The approach is based on a preprocessing step that condenses the given chain or tree structured task into a monotonic chain or tree. The partitioning of this monotonic task can then be carried out using fast search techniques.
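As a reference point for the problem class, here is a classic dynamic program for splitting a chain of module weights into contiguous blocks that minimize the bottleneck load; the paper's contribution is a condensation preprocessing step and faster search on top of problems of this kind:

```python
def chain_bottleneck(weights, n_procs):
    """Minimum achievable bottleneck (max block sum) when partitioning a
    chain of module weights into n_procs contiguous blocks. A baseline
    O(n_procs * n^2) DP, not the paper's faster algorithms."""
    n = len(weights)
    prefix = [0.0]
    for w in weights:
        prefix.append(prefix[-1] + w)
    INF = float('inf')
    # dp[k][i]: best bottleneck using k blocks for the first i modules
    dp = [[INF] * (n + 1) for _ in range(n_procs + 1)]
    dp[0][0] = 0.0
    for k in range(1, n_procs + 1):
        for i in range(1, n + 1):
            for j in range(k - 1, i):   # j = modules handled by first k-1 blocks
                cost = max(dp[k - 1][j], prefix[i] - prefix[j])
                dp[k][i] = min(dp[k][i], cost)
    return dp[n_procs][n]

# chain_bottleneck([4, 2, 7, 1, 5], 2) == 13.0  (blocks [4, 2, 7] | [1, 5])
```

Communication costs between adjacent blocks, which the chain-on-chain formulation also accounts for, are omitted here for brevity.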
Multiscale stochastic simulations of chemical reactions with regulated scale separation
NASA Astrophysics Data System (ADS)
Koumoutsakos, Petros; Feigelman, Justin
2013-07-01
We present a coupling of multiscale frameworks with accelerated stochastic simulation algorithms for systems of chemical reactions with disparate propensities. The algorithms regulate the propensities of the fast and slow reactions of the system, using alternating micro and macro sub-steps simulated with accelerated algorithms such as τ-leaping and R-leaping. The proposed algorithms are shown to provide significant speedups in simulations of stiff systems of chemical reactions, with a trade-off in accuracy controlled by a regulating parameter. More importantly, the error of the methods exhibits a cutoff phenomenon that allows for optimal parameter choices. Numerical experiments demonstrate that hybrid algorithms involving accelerated stochastic simulations can be, in certain cases, both more accurate and faster than their corresponding stochastic simulation algorithm counterparts.
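The accelerated sub-steps mentioned here build on the basic Poisson tau-leap, one step of which can be sketched as follows (the clipping guard is a crude stand-in for the more careful non-negative variants in the literature, which constrain the leap time instead):

```python
import numpy as np

def tau_leap_step(x, propensities, stoich, tau, rng):
    """One Poisson tau-leap for a well-mixed reaction system.
    x: species counts; propensities(x) -> array of a_j;
    stoich: (n_reactions, n_species) state-change matrix."""
    a = propensities(x)
    k = rng.poisson(a * tau)       # number of firings of each channel
    x_new = x + k @ stoich
    return np.maximum(x_new, 0)    # crude guard against negative counts
```

The leap condition requires tau to be small enough that the propensities stay roughly constant over the step; the regulated schemes described above adapt this trade-off per reaction group.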
Influence of Soil Heterogeneity on Mesoscale Land Surface Fluxes During Washita '92
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Jin, Hao
1998-01-01
The influence of soil heterogeneity on the partitioning of mesoscale land surface energy fluxes at diurnal time scales is investigated over a 10^6 km^2 domain centered on the Little Washita Basin, Oklahoma, for the period June 10 - 18, 1992. The sensitivity study is carried out using MM5/PLACE, the Penn State/NCAR MM5 model enhanced with the Parameterization for Land-Atmosphere-Cloud Exchange or PLACE. PLACE is a one-dimensional land surface model possessing detailed plant and soil water physics algorithms, multiple soil layers, and the capacity to model subgrid heterogeneity. A series of 12-hour simulations were conducted with identical atmospheric initialization and land surface characterization but with different initial soil moisture and texture. A comparison then was made of the simulated land surface energy flux fields, the partitioning of net radiation into latent and sensible heat, and the soil moisture fields. Results indicate that heterogeneity in both soil moisture and texture affects the spatial distribution and partitioning of mesoscale energy balance. Spatial averaging results in an overprediction of latent heat flux, and an underestimation of sensible heat flux. In addition to the primary focus on the partitioning of the land surface energy, the modeling effort provided an opportunity to examine the issue of initializing the soil moisture fields for coupled three-dimensional models. For the present case, the initial soil moisture and temperature were determined from off-line modeling using PLACE at each grid box, driven with a combination of observed and assimilated data fields.
Community detection in complex networks by using membrane algorithm
NASA Astrophysics Data System (ADS)
Liu, Chuang; Fan, Linan; Liu, Zhou; Dai, Xiang; Xu, Jiamei; Chang, Baoren
Community detection in complex networks is a key problem of network analysis. In this paper, a new membrane algorithm is proposed to solve community detection in complex networks. The proposed algorithm is based on membrane systems, which consist of objects, reaction rules, and a membrane structure. Each object represents a candidate partition of a complex network, and the quality of objects is evaluated according to network modularity. The reaction rules include evolutionary rules and communication rules. Evolutionary rules are responsible for improving the quality of objects and employ the differential evolution algorithm to evolve objects. Communication rules implement the information exchange among membranes. Finally, the proposed algorithm is evaluated on synthetic networks, real-world networks with known partitions, and large-scale networks with unknown partitions. The experimental results indicate the superior performance of the proposed algorithm in comparison with other experimental algorithms.
NASA Astrophysics Data System (ADS)
Hopcroft, Peter O.; Gallagher, Kerry; Pain, Christopher C.
2009-08-01
Collections of suitably chosen borehole profiles can be used to infer large-scale trends in ground-surface temperature (GST) histories for the past few hundred years. These reconstructions are based on a large database of carefully selected borehole temperature measurements from around the globe. Since non-climatic thermal influences are difficult to identify, representative temperature histories are derived by averaging individual reconstructions to minimize the influence of these perturbing factors. This may lead to three potentially important drawbacks: the net signal of non-climatic factors may not be zero, meaning that the average does not reflect the best estimate of past climate; the averaging over large areas restricts the useful amount of more local climate change information available; and the inversion methods used to reconstruct the past temperatures at each site must be mathematically identical and are therefore not necessarily best suited to all data sets. In this work, we avoid these issues by using a Bayesian partition model (BPM), which is computed using a trans-dimensional form of a Markov chain Monte Carlo algorithm. This then allows the number and spatial distribution of different GST histories to be inferred from a given set of borehole data by partitioning the geographical area into discrete partitions. Profiles that are heavily influenced by non-climatic factors will be partitioned separately. Conversely, profiles with climatic information, which is consistent with neighbouring profiles, will then be inferred to lie in the same partition. The geographical extent of these partitions then leads to information on the regional extent of the climatic signal. In this study, three case studies are described using synthetic and real data. The first demonstrates that the Bayesian partition model method is able to correctly partition a suite of synthetic profiles according to the inferred GST history. In the second, more realistic case, a series of temperature profiles are calculated using surface air temperatures of a global climate model simulation. In the final case, 23 real boreholes from the United Kingdom, previously used for climatic reconstructions, are examined and the results compared with a local instrumental temperature series and the previous estimate derived from the same borehole data. The results indicate that the majority (17) of the 23 boreholes are unsuitable for climatic reconstruction purposes, at least without including other thermal processes in the forward model.
A general algorithm using finite element method for aerodynamic configurations at low speeds
NASA Technical Reports Server (NTRS)
Balasubramanian, R.
1975-01-01
A finite element algorithm for numerical simulation of two-dimensional, incompressible, viscous flows was developed. The Navier-Stokes equations are suitably modelled to facilitate direct solution for the essential flow parameters. A leap-frog time differencing and Galerkin minimization of these model equations yields the finite element algorithm. The finite elements are triangular with bicubic shape functions approximating the solution space. The finite element matrices are unsymmetrically banded to facilitate savings in storage. An unsymmetric L-U decomposition is performed on the finite element matrices to obtain the solution for the boundary value problem.
Enhancing data locality by using terminal propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendrickson, B.; Leland, R.; Van Driessche, R.
1995-12-31
Terminal propagation is a method developed in the circuit placement community for adding constraints to graph partitioning problems. This paper adapts and expands this idea, and applies it to the problem of partitioning data structures among the processors of a parallel computer. We show how the constraints in terminal propagation can be used to encourage partitions in which messages are communicated only between architecturally near processors. We then show how these constraints can be handled in two important partitioning algorithms, spectral bisection and multilevel-KL. We compare the quality of partitions generated by these algorithms to each other and to partitions generated by more familiar techniques.
Improving Design Efficiency for Large-Scale Heterogeneous Circuits
NASA Astrophysics Data System (ADS)
Gregerson, Anthony
Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment. Together, these research efforts help to improve the efficiency and decrease the cost of developing the large-scale, heterogeneous circuits needed to enable large-scale applications in high-energy physics and other important areas.
Simulation methods with extended stability for stiff biochemical kinetics.
Rué, Pau; Villà-Freixa, Jordi; Burrage, Kevin
2010-08-11
With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, tau, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where tau can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called tau-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as tau grows. In this paper we extend Poisson tau-leap methods to a general class of Runge-Kutta (RK) tau-leap methods. We show that with the proper selection of the coefficients, the variance of the extended tau-leap can be well-behaved, leading to significantly larger step sizes. The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original tau-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
Development of method for quantifying essential tremor using a small optical device.
Chen, Kai-Hsiang; Lin, Po-Chieh; Chen, Yu-Jung; Yang, Bing-Shiang; Lin, Chin-Hsien
2016-06-15
Clinical assessment scales are the most common means used by physicians to assess tremor severity. Some scientific tools that may be able to replace these scales to objectively assess the severity, such as accelerometers, digital tablets, electromyography (EMG) measurement devices, and motion capture cameras, are currently available. However, most of the operational modes of these tools are relatively complex or are only able to capture part of the clinical information; furthermore, using these tools is sometimes time consuming. Currently, there is no tool available for automatically quantifying tremor severity in clinical environments. We aimed to develop a rapid, objective, and quantitative system for measuring the severity of finger tremor using a small portable optical device (Leap Motion). A single test took 15s to conduct, and three algorithms were proposed to quantify the severity of finger tremor. The system was tested with four patients diagnosed with essential tremor. The proposed algorithms were able to quantify different characteristics of tremor in clinical environments, and could be used as references for future clinical assessments. A portable, easy-to-use, small-sized, and noncontact device (Leap Motion) was used to clinically detect and record finger movement, and three algorithms were proposed to describe tremor amplitudes.
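The paper's three algorithms are not spelled out in the abstract, but tremor measures of this kind typically reduce a fingertip position trace to an amplitude and a dominant frequency. A hypothetical sketch along those lines (the 2-12 Hz band and moving-average detrending are assumptions, not the authors' choices):

```python
import numpy as np

def tremor_metrics(pos, fs):
    """Illustrative tremor measures from a 1D fingertip position trace
    pos (metres) sampled at fs Hz: RMS amplitude after detrending, and
    the dominant oscillation frequency."""
    pos = np.asarray(pos, dtype=float)
    win = max(int(fs), 1)
    drift = np.convolve(pos, np.ones(win) / win, mode='same')
    osc = pos - drift                        # remove slow voluntary movement
    rms = np.sqrt(np.mean(osc ** 2))         # amplitude measure
    spec = np.abs(np.fft.rfft(osc)) ** 2
    freqs = np.fft.rfftfreq(len(osc), d=1.0 / fs)
    band = (freqs >= 2.0) & (freqs <= 12.0)  # assumed tremor band
    dominant = freqs[band][np.argmax(spec[band])]
    return rms, dominant
```

For a 15 s recording at the sensor's typical sampling rate, the band-limited spectrum gives reasonably stable peak estimates.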
A Partitioning and Bounded Variable Algorithm for Linear Programming
ERIC Educational Resources Information Center
Sheskin, Theodore J.
2006-01-01
An interesting new partitioning and bounded variable algorithm (PBVA) is proposed for solving linear programming problems. The PBVA is a variant of the simplex algorithm which uses a modified form of the simplex method followed by the dual simplex method for bounded variables. In contrast to the two-phase method and the big M method, the PBVA does…
Number Partitioning via Quantum Adiabatic Computation
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadim N.; Toussaint, Udo
2002-01-01
We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with the direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
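For contrast with the quantum adiabatic approach analysed here, the classical largest-differencing (Karmarkar-Karp) heuristic for the same two-way set partition problem is only a few lines:

```python
import heapq

def karmarkar_karp(numbers):
    """Largest-differencing heuristic for two-way number partitioning
    (assumes non-negative inputs). Returns the final partition
    difference |sum(A) - sum(B)|."""
    heap = [-n for n in numbers]       # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)       # the two largest residuals...
        b = -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b)) # ...are committed to opposite subsets
    return -heap[0] if heap else 0
```

The hard instances discussed above are those where even good heuristics like this leave an exponentially small residual to resolve.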
Software Partitioning Schemes for Advanced Simulation Computer Systems. Final Report.
ERIC Educational Resources Information Center
Clymer, S. J.
Conducted to design software partitioning techniques for use by the Air Force to partition a large flight simulator program for optimal execution on alternative configurations, this study resulted in a mathematical model which defines characteristics for an optimal partition, and a manually demonstrated partitioning algorithm design which…
A similarity based agglomerative clustering algorithm in networks
NASA Astrophysics Data System (ADS)
Liu, Zhiyuan; Wang, Xiujuan; Ma, Yinghong
2018-04-01
The detection of clusters is beneficial for understanding the organization and function of networks. Clusters, or communities, are usually groups of nodes densely interconnected but sparsely linked with other clusters. To identify communities, an efficient and effective agglomerative community detection algorithm based on node similarity is proposed. The proposed method initially calculates similarities between each pair of nodes, and forms pre-partitions according to the principle that each node is in the same community as its most similar neighbor. After that, each pre-partition is checked against a community criterion. Pre-partitions that do not satisfy it are merged with the partitions that attract them most strongly, until no further changes occur. To measure the attraction ability of a partition, we propose an attraction index based on the importance of linked nodes in the network. Our proposed method can therefore better exploit node properties and network structure. To test the performance of our algorithm, both synthetic and empirical networks of different scales are tested. Simulation results show that the proposed algorithm obtains superior clustering results compared with six other widely used community detection algorithms.
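The pre-partition rule ("each node joins its most similar neighbor") can be sketched directly; Jaccard similarity of neighborhoods is an assumed stand-in for the paper's similarity measure:

```python
import networkx as nx

def pre_partition(G):
    """Sketch of the pre-partition step: union each node with its most
    similar neighbour (Jaccard similarity assumed), then read off the
    connected groups via union-find."""
    parent = {v: v for v in G}

    def find(v):                          # union-find with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for v in G:
        neigh = list(G[v])
        if not neigh:
            continue
        pairs = [(v, u) for u in neigh]
        best = max(nx.jaccard_coefficient(G, pairs), key=lambda t: t[2])[1]
        parent[find(v)] = find(best)      # join v's group to its best match

    groups = {}
    for v in G:
        groups.setdefault(find(v), set()).add(v)
    return list(groups.values())
```

The subsequent merging stage would then test each group against the community criterion and absorb failing groups into their most attractive neighbors, per the attraction index described above.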
Natural Scales in Geographical Patterns
NASA Astrophysics Data System (ADS)
Menezes, Telmo; Roth, Camille
2017-04-01
Human mobility is known to be distributed across several orders of magnitude of physical distances, which makes it generally difficult to endogenously find or define typical and meaningful scales. Relevant analyses, from movements to geographical partitions, seem to be relative to some ad-hoc scale, or no scale at all. Relying on geotagged data collected from photo-sharing social media, we apply community detection to movement networks constrained by increasing percentiles of the distance distribution. Using a simple parameter-free discontinuity detection algorithm, we discover clear phase transitions in the community partition space. The detection of these phases constitutes the first objective method of characterising endogenous, natural scales of human movement. Our study covers nine regions, ranging from cities to countries of various sizes and a transnational area. For all regions, the number of natural scales is remarkably low (2 or 3). Further, our results hint at scale-related behaviours rather than scale-related users. The partitions of the natural scales allow us to draw discrete multi-scale geographical boundaries, potentially capable of providing key insights in fields such as epidemiology or cultural contagion where the introduction of spatial boundaries is pivotal.
Velocity and stress autocorrelation decay in isothermal dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Chaudhri, Anuj; Lukes, Jennifer R.
2010-02-01
The velocity and stress autocorrelation decay in a dissipative particle dynamics ideal fluid model is analyzed in this paper. The autocorrelation functions are calculated at three different friction parameters and three different time steps using the well-known Groot/Warren algorithm and newer algorithms including self-consistent leap-frog, self-consistent velocity Verlet and Shardlow first and second order integrators. At low friction values, the velocity autocorrelation function decays exponentially at short times, shows slower-than-exponential decay at intermediate times, and approaches zero at long times for all five integrators. As the friction value increases, the deviation from exponential behavior occurs earlier and is more pronounced. At small time steps, all the integrators give identical decay profiles. As the time step increases, there are qualitative and quantitative differences between the integrators. The stress correlation behavior is markedly different for the algorithms. The self-consistent velocity Verlet and the Shardlow algorithms show very similar stress autocorrelation decay with change in friction parameter, whereas the Groot/Warren and leap-frog schemes show variations at higher friction factors. Diffusion coefficients and shear viscosities are calculated using Green-Kubo integration of the velocity and stress autocorrelation functions. The diffusion coefficients match well-known theoretical results in the low friction limit. Although the stress autocorrelation function is different for each integrator, fluctuates rapidly, and gives poor statistics for most of the cases, the calculated shear viscosities still fall within the range of theoretical predictions and nonequilibrium studies.
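The Green-Kubo post-processing referred to here is integrator-independent. A sketch for the velocity autocorrelation function and the resulting diffusion coefficient, assuming a trajectory array of shape (time, particle, dimension):

```python
import numpy as np

def vacf(vel):
    """Velocity autocorrelation <v(0).v(t)>, averaged over particles and
    time origins; vel has shape (T, n_particles, dim)."""
    T = vel.shape[0]
    return np.array([np.mean(np.sum(vel[:T - lag] * vel[lag:], axis=-1))
                     for lag in range(T)])

def green_kubo_diffusion(vel, dt, dim=3):
    """D = (1/dim) * integral of the VACF, truncated at the trajectory
    length (in practice one truncates where the VACF has decayed)."""
    return np.trapz(vacf(vel), dx=dt) / dim
```

The shear viscosity follows the same Green-Kubo pattern with the off-diagonal stress autocorrelation in place of the VACF, which is where the poor statistics noted above become a practical issue.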
The Refinement-Tree Partition for Parallel Solution of Partial Differential Equations
Mitchell, William F.
1998-01-01
Dynamic load balancing is considered in the context of adaptive multilevel methods for partial differential equations on distributed memory multiprocessors. An approach that periodically repartitions the grid is taken. The important properties of a partitioning algorithm are presented and discussed in this context. A partitioning algorithm based on the refinement tree of the adaptive grid is presented and analyzed in terms of these properties. Theoretical and numerical results are given. PMID:28009355
The Refinement-Tree Partition for Parallel Solution of Partial Differential Equations.
Mitchell, William F
1998-01-01
Dynamic load balancing is considered in the context of adaptive multilevel methods for partial differential equations on distributed memory multiprocessors. An approach that periodically repartitions the grid is taken. The important properties of a partitioning algorithm are presented and discussed in this context. A partitioning algorithm based on the refinement tree of the adaptive grid is presented and analyzed in terms of these properties. Theoretical and numerical results are given.
An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-03-08
Poisson disk sampling plays an important role in a variety of visual computing applications, due to its useful statistical distribution properties and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported on the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
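A serialized 2D Euclidean illustration of the priority idea (the actual algorithm works intrinsically on surfaces with geodesic distances and resolves conflicts concurrently across threads):

```python
import numpy as np

def priority_dart_throwing(candidates, radius, seed=0):
    """Sketch: every candidate gets a random unique priority; a candidate
    is accepted when no conflicting candidate of higher priority survives.
    Processing candidates in priority order serialises what the parallel
    version resolves per thread."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(candidates, dtype=float)
    order = rng.permutation(len(pts))       # random unique priorities
    accepted = []
    for i in order:
        p = pts[i]
        if all(np.linalg.norm(p - q) >= radius for q in accepted):
            accepted.append(p)
    return np.array(accepted)
```

Because the priorities are independent of position, the accepted set is distributed exactly as sequential dart throwing would produce, which is the unbiasedness property claimed above.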
An Integrative Object-Based Image Analysis Workflow for Uav Images
NASA Astrophysics Data System (ADS)
Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong
2016-06-01
In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking, and hierarchical image segmentation for later Object-Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm after the geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor and locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the superpixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing the post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of our proposed method.
Li, Zhenping; Zhang, Xiang-Sun; Wang, Rui-Sheng; Liu, Hongwei; Zhang, Shihua
2013-01-01
Identification of communities in complex networks is an important topic and issue in many fields such as sociology, biology, and computer science. Communities are often defined as groups of related nodes or links that correspond to functional subunits in the corresponding complex systems. While most conventional approaches have focused on discovering communities of nodes, some recent studies start partitioning links to find overlapping communities straightforwardly. In this paper, we propose a new quantity function for link community identification in complex networks. Based on this quantity function, we formulate the link community partition problem as an integer programming model, which allows us to partition a complex network into overlapping communities. We further propose a genetic algorithm for link community detection which can partition a network into overlapping communities without knowing the number of communities. We test our model and algorithm on both artificial networks and real-world networks. The results demonstrate that the model and algorithm are efficient in detecting overlapping community structure in complex networks. PMID:24386268
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Lesoinne, Michel
1993-01-01
Most of the recently proposed computational methods for solving partial differential equations on multiprocessor architectures stem from the 'divide and conquer' paradigm and involve some form of domain decomposition. For those methods which also require grids of points or patches of elements, it is often necessary to explicitly partition the underlying mesh, especially when working with local memory parallel processors. In this paper, a family of cost-effective algorithms for the automatic partitioning of arbitrary two- and three-dimensional finite element and finite difference meshes is presented and discussed in view of a domain decomposed solution procedure and parallel processing. The influence of the algorithmic aspects of a solution method (implicit/explicit computations), and the architectural specifics of a multiprocessor (SIMD/MIMD, startup/transmission time), on the design of a mesh partitioning algorithm are discussed. The impact of the partitioning strategy on load balancing, operation count, operator conditioning, rate of convergence and processor mapping is also addressed. Finally, the proposed mesh decomposition algorithms are demonstrated with realistic examples of finite element, finite volume, and finite difference meshes associated with the parallel solution of solid and fluid mechanics problems on the iPSC/2 and iPSC/860 multiprocessors.
NASA Astrophysics Data System (ADS)
Shope, C. L.; Maharjan, G. R.; Tenhunen, J.; Seo, B.; Kim, K.; Riley, J.; Arnhold, S.; Koellner, T.; Ok, Y. S.; Peiffer, S.; Kim, B.; Park, J.-H.; Huwe, B.
2014-02-01
Watershed-scale modeling can be a valuable tool to aid in quantification of water quality and yield; however, several challenges remain. In many watersheds, it is difficult to adequately quantify hydrologic partitioning. Data scarcity is prevalent, accuracy of spatially distributed meteorology is difficult to quantify, forest encroachment and land use issues are common, and surface water and groundwater abstractions substantially modify watershed-based processes. Our objective is to assess the capability of the Soil and Water Assessment Tool (SWAT) model to capture event-based and long-term monsoonal rainfall-runoff processes in complex mountainous terrain. To accomplish this, we developed a unique quality-control, gap-filling algorithm for interpolation of high-frequency meteorological data. We used a novel multi-location, multi-optimization calibration technique to improve estimations of catchment-wide hydrologic partitioning. The interdisciplinary model was calibrated to a unique combination of statistical, hydrologic, and plant growth metrics. Our results indicate scale-dependent sensitivity of hydrologic partitioning and substantial influence of engineered features. The addition of hydrologic and plant growth objective functions identified the importance of culverts in catchment-wide flow distribution. While this study shows the challenges of applying the SWAT model to complex terrain and extreme environments, it also shows that by incorporating anthropogenic features into modeling scenarios, we can enhance our understanding of the hydroecological impact.
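As an illustration of what a quality-control gap-filling step for high-frequency meteorological series might involve, here is a toy version that linearly interpolates only short gaps; the gap-length threshold and the fallback policy for long gaps are assumptions, not the authors' algorithm:

```python
import numpy as np

def fill_short_gaps(series, max_gap=6):
    """Linearly interpolate runs of up to max_gap consecutive NaNs and
    leave longer gaps untouched for a secondary method (e.g. values
    transferred from neighbouring stations)."""
    x = np.asarray(series, dtype=float)
    missing = np.isnan(x)
    if missing.all() or not missing.any():
        return x.copy()
    idx = np.arange(len(x))
    filled = x.copy()
    filled[missing] = np.interp(idx[missing], idx[~missing], x[~missing])
    run = 0
    for i, gap in enumerate(missing):     # re-open gaps that are too long
        run = run + 1 if gap else 0
        if run > max_gap:
            filled[i - run + 1:i + 1] = np.nan
    return filled
```

Separating short-gap interpolation from long-gap treatment keeps interpolation error bounded while still producing the continuous forcing series a model such as SWAT requires.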
Managing Network Partitions in Structured P2P Networks
NASA Astrophysics Data System (ADS)
Shafaat, Tallat M.; Ghodsi, Ali; Haridi, Seif
Structured overlay networks form a major class of peer-to-peer systems, which are touted for their abilities to scale, tolerate failures, and self-manage. Any long-lived Internet-scale distributed system is destined to face network partitions. The problem of network partitions and mergers is therefore closely tied to fault-tolerance and self-management in large-scale systems, making resilience to partitions a crucial requirement for building any structured peer-to-peer system. Nevertheless, it has hardly been studied in the context of structured peer-to-peer systems. Structured overlays have mainly been studied under churn (frequent joins/failures), which as a side effect handles network partitions, since a partition is similar to massive node failures. Yet, the crucial aspect of network mergers has been ignored. In fact, it has been claimed that ring-based structured overlay networks, which constitute the majority of the structured overlays, are intrinsically ill-suited for merging rings. In this chapter, we motivate the problem of network partitions and mergers in structured overlays. We discuss how a structured overlay can automatically detect a network partition and merger. We present an algorithm for merging multiple similar ring-based overlays when the underlying network merges. We examine the solution in dynamic conditions, showing how our solution is resilient to churn during the merger, something widely believed to be difficult or impossible. We evaluate the algorithm for various scenarios and show that even when falsely detecting a merger, the algorithm quickly terminates and does not clutter the network with many messages. The algorithm is flexible, as the tradeoff between message complexity and time complexity can be adjusted by a parameter.
Scoring and staging systems using cox linear regression modeling and recursive partitioning.
Lee, J W; Um, S H; Lee, J B; Mun, J; Cho, H
2006-01-01
Scoring and staging systems are used to determine the order and class of data according to predictors. Systems used for medical data, such as the Child-Turcotte-Pugh scoring and staging systems for ordering and classifying patients with liver disease, are often derived strictly from physicians' experience and intuition. We construct objective and data-based scoring/staging systems using statistical methods. We consider Cox linear regression modeling and recursive partitioning techniques for censored survival data. In particular, to obtain a target number of stages we propose cross-validation and amalgamation algorithms. We also propose an algorithm for constructing scoring and staging systems by integrating local Cox linear regression models into recursive partitioning, so that we can retain the merits of both methods such as superior predictive accuracy, ease of use, and detection of interactions between predictors. The staging system construction algorithms are compared by cross-validation evaluation of real data. The data-based cross-validation comparison shows that Cox linear regression modeling is somewhat better than recursive partitioning when there are only continuous predictors, while recursive partitioning is better when there are significant categorical predictors. The proposed local Cox linear recursive partitioning has better predictive accuracy than Cox linear modeling and simple recursive partitioning. This study indicates that integrating local linear modeling into recursive partitioning can significantly improve prediction accuracy in constructing scoring and staging systems.
Bi-Partition of Shared Binary Decision Diagrams
2002-12-01
Partitioned shared binary decision diagrams (SBDDs) are considered a special case of partitioned BDDs and free BDDs (FBDDs), and their applications are similar to those of partitioned BDDs and FBDDs. A partitioned SBDD is more canonical than partitioned BDDs and FBDDs. A heuristic bi-partition algorithm for SBDDs was developed and shown to be effective in several cases.
Implementation of a partitioned algorithm for simulation of large CSI problems
NASA Technical Reports Server (NTRS)
Alvin, Kenneth F.; Park, K. C.
1991-01-01
The implementation of a partitioned numerical algorithm for determining the dynamic response of coupled structure/controller/estimator finite-dimensional systems is reviewed. The partitioned approach leads to a set of coupled first and second-order linear differential equations which are numerically integrated with extrapolation and implicit step methods. The present software implementation, ACSIS, utilizes parallel processing techniques at various levels to optimize performance on a shared-memory concurrent/vector processing system. A general procedure for the design of controller and filter gains is also implemented, which utilizes the vibration characteristics of the structure to be solved. Also presented are: example problems; a user's guide to the software; the procedures and algorithm scripts; a stability analysis for the algorithm; and the source code for the parallel implementation.
Implementation of spectral clustering on microarray data of carcinoma using k-means algorithm
NASA Astrophysics Data System (ADS)
Frisca, Bustamam, Alhadi; Siswantining, Titin
2017-03-01
Clustering is a data analysis method that aims to group data with similar characteristics. Spectral clustering is one of the most popular modern clustering algorithms; as an effective clustering technique, it emerged from the concepts of spectral graph theory. Spectral clustering requires a partitioning algorithm, and there are several partitioning methods, including PAM, SOM, Fuzzy c-means, and k-means. Based on research by Capital and Choudhury in 2013, the k-means algorithm provides better accuracy than the PAM algorithm when using Euclidean distance, so in this paper we use k-means as our partitioning algorithm. The major advantage of spectral clustering is in reducing data dimension, which in this case means reducing the dimension of a large microarray dataset. A microarray is a small chip made of a glass plate containing thousands or even tens of thousands of kinds of genes in DNA fragments derived from doubled cDNA. Microarray data are widely used to detect cancer, for example carcinoma, in which cancer cells express abnormalities in their genes. The purpose of this research is to group data with high similarity in the same cluster and data with low similarity in different clusters. This research uses carcinoma microarray data with 7457 genes. The result of partitioning using the k-means algorithm is two clusters.
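To make the pipeline above concrete, here is a minimal spectral clustering sketch with k-means as the partitioning step; the affinity and normalization choices (Gaussian kernel, symmetric normalized Laplacian) are common defaults and are assumptions, not necessarily the study's exact settings.

```python
import numpy as np
from sklearn.cluster import KMeans  # k-means as the partitioning step

def spectral_kmeans(X, n_clusters, sigma=1.0):
    """Gaussian affinity -> normalized Laplacian eigenvectors
    (the dimension-reduction step) -> k-means on the embedding."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma ** 2))        # pairwise affinities
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt  # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)
    U = vecs[:, :n_clusters]                  # smallest eigenvectors
    U /= np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(U)
```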
Multiphase complete exchange: A theoretical analysis
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1993-01-01
Complete Exchange requires each of N processors to send a unique message to each of the remaining N-1 processors. For a circuit-switched hypercube with N = 2^d processors, the Direct and Standard algorithms for Complete Exchange are optimal for very large and very small message sizes, respectively. For intermediate sizes, a hybrid Multiphase algorithm is better. This carries out Direct exchanges on a set of subcubes whose dimensions are a partition of the integer d. The best such algorithm for a given message size m could hitherto only be found by enumerating all partitions of d. The Multiphase algorithm is analyzed assuming a high performance communication network. It is proved that only algorithms corresponding to equipartitions of d (partitions in which the maximum and minimum elements differ by at most 1) can possibly be optimal. The run times of these algorithms plotted against m form a hull of optimality. It is proved that, although there is an exponential number of partitions, (1) the number of faces on this hull is Θ(√d), (2) the hull can be found in Θ(√d) time, and (3) once it has been found, the optimal algorithm for any given m can be found in Θ(log d) time. These results provide a very fast technique for minimizing communication overhead in many important applications, such as matrix transpose, Fast Fourier transform, and ADI.
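Because an equipartition is fully determined by its number of parts, only d candidates need to be examined for a given message size. The sketch below enumerates them and picks the cheapest under a toy per-phase cost model; the cost formula and the startup/per-byte constants are illustrative assumptions, not the paper's exact communication model.

```python
def equipartitions(d):
    """All partitions of d whose parts differ by at most 1: for j parts,
    (d % j) parts of size d//j + 1 and the rest of size d//j."""
    for j in range(1, d + 1):
        q, r = divmod(d, j)
        yield [q + 1] * r + [q] * (j - r)

def best_equipartition(d, m, startup=100.0, per_byte=1.0):
    """Cheapest equipartition for message size m under an assumed cost:
    a phase on a subcube of dimension di performs 2**di - 1 exchanges,
    each moving m * 2**(d - di) bytes."""
    def cost(parts):
        return sum((2**di - 1) * (startup + per_byte * m * 2**(d - di))
                   for di in parts)
    return min(equipartitions(d), key=cost)
```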
Computer code for controller partitioning with IFPC application: A user's manual
NASA Technical Reports Server (NTRS)
Schmidt, Phillip H.; Yarkhan, Asim
1994-01-01
A user's manual for the computer code for partitioning a centralized controller into decentralized subcontrollers with applicability to Integrated Flight/Propulsion Control (IFPC) is presented. Partitioning of a centralized controller into two subcontrollers is described, and the algorithm on which the code is based is discussed. The algorithm uses parameter optimization of a cost function, which is described. The major data structures and functions are described, and specific usage instructions are given. The user is led through an example of an IFPC application.
An adaptive multi-level simulation algorithm for stochastic biological systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
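As a minimal illustration of the base ingredient, the sketch below runs fixed-step Poisson tau-leaping on a birth-death process at a coarse and a fine resolution; the paper's contributions, adapting tau to each sample path and coupling paired paths through shared randomness, are deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_leap(x0, k_prod, k_decay, tau, t_end):
    """Fixed-step Poisson tau-leaping for production at rate k_prod
    and decay at rate k_decay * x. A sketch of one level only."""
    x, t, traj = x0, 0.0, [x0]
    while t < t_end:
        births = rng.poisson(k_prod * tau)
        deaths = rng.poisson(k_decay * x * tau)
        x = max(x + births - deaths, 0)  # crude guard against negatives
        t += tau
        traj.append(x)
    return np.array(traj)

# cheap, biased level vs. expensive, accurate level
coarse = tau_leap(10, k_prod=5.0, k_decay=0.1, tau=0.5, t_end=50.0)
fine   = tau_leap(10, k_prod=5.0, k_decay=0.1, tau=0.05, t_end=50.0)
```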
He, Chenlong; Feng, Zuren; Ren, Zhigang
2018-02-03
For Wireless Sensor Networks (WSNs), the Voronoi partition of a region is a challenging problem owing to the limited sensing ability of each sensor and the distributed organization of the network. In this paper, an algorithm is proposed for each sensor having a limited sensing range to compute its limited Voronoi cell autonomously, so that the limited Voronoi partition of the entire WSN is generated in a distributed manner. Inspired by Graham's Scan (GS) algorithm used to compute the convex hull of a point set, the limited Voronoi cell of each sensor is obtained by sequentially scanning two consecutive bisectors between the sensor and its neighbors. The proposed algorithm called the Boundary Scan (BS) algorithm has a lower computational complexity than the existing Range-Constrained Voronoi Cell (RCVC) algorithm and reaches the lower bound of the computational complexity of algorithms for problems of this kind. Moreover, it also improves the time efficiency of a key step in the Adjust-Sensing-Radius (ASR) algorithm used to compute the exact Voronoi cell. Extensive numerical simulations are performed to demonstrate the correctness and effectiveness of the BS algorithm. The distributed realization of the BS combined with a localization algorithm in WSNs is used to justify the WSN nature of the proposed algorithm.
Distributed Algorithm for Voronoi Partition of Wireless Sensor Networks with a Limited Sensing Range
Feng, Zuren; Ren, Zhigang
2018-01-01
For Wireless Sensor Networks (WSNs), the Voronoi partition of a region is a challenging problem owing to the limited sensing ability of each sensor and the distributed organization of the network. In this paper, an algorithm is proposed for each sensor having a limited sensing range to compute its limited Voronoi cell autonomously, so that the limited Voronoi partition of the entire WSN is generated in a distributed manner. Inspired by Graham’s Scan (GS) algorithm used to compute the convex hull of a point set, the limited Voronoi cell of each sensor is obtained by sequentially scanning two consecutive bisectors between the sensor and its neighbors. The proposed algorithm called the Boundary Scan (BS) algorithm has a lower computational complexity than the existing Range-Constrained Voronoi Cell (RCVC) algorithm and reaches the lower bound of the computational complexity of algorithms for problems of this kind. Moreover, it also improves the time efficiency of a key step in the Adjust-Sensing-Radius (ASR) algorithm used to compute the exact Voronoi cell. Extensive numerical simulations are performed to demonstrate the correctness and effectiveness of the BS algorithm. The distributed realization of the BS combined with a localization algorithm in WSNs is used to justify the WSN nature of the proposed algorithm. PMID:29401649
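Geometrically, a limited Voronoi cell is the sensing disc clipped by the perpendicular bisector with each in-range neighbor. The centralized sketch below builds it by Sutherland-Hodgman half-plane clipping of a polygonal disc; it reproduces the cell but not the ordered bisector scan that gives the BS algorithm its lower complexity.

```python
import numpy as np

def clip_halfplane(poly, a, b):
    """Keep the part of polygon poly with a . p <= b (one clip step)."""
    out = []
    for i in range(len(poly)):
        p, q = poly[i], poly[(i + 1) % len(poly)]
        pin, qin = a @ p <= b, a @ q <= b
        if pin:
            out.append(p)
        if pin != qin:                      # edge crosses the bisector
            t = (b - a @ p) / (a @ (q - p))
            out.append(p + t * (q - p))
    return out

def limited_voronoi_cell(s, neighbors, r, sides=64):
    """Sensing disc of radius r around sensor s, clipped by the
    bisector with every neighbor; a centralized geometric sketch of
    what each node computes locally."""
    ang = np.linspace(0, 2 * np.pi, sides, endpoint=False)
    poly = [s + r * np.array([np.cos(t), np.sin(t)]) for t in ang]
    for nb in neighbors:
        d = nb - s                          # points closer to s than nb
        poly = clip_halfplane(poly, d, d @ (s + nb) / 2.0)
        if not poly:
            break
    return np.array(poly)
```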
NASA Astrophysics Data System (ADS)
Schlögel, R.; Marchesini, I.; Alvioli, M.; Reichenbach, P.; Rossi, M.; Malet, J.-P.
2018-01-01
We perform landslide susceptibility zonation with slope units using three digital elevation models (DEMs) of varying spatial resolution of the Ubaye Valley (South French Alps). In so doing, we applied a recently developed algorithm automating slope unit delineation, given a number of parameters, in order to optimize simultaneously the partitioning of the terrain and the performance of a logistic regression susceptibility model. The method allowed us to obtain optimal slope units for each available DEM spatial resolution. For each resolution, we studied the susceptibility model performance by analyzing in detail the relevance of the conditioning variables. The analysis is based on landslide morphology data, considering either the whole landslide or only the source area outline as inputs. The procedure allowed us to select the most useful information, in terms of DEM spatial resolution, thematic variables and landslide inventory, in order to obtain the most reliable slope unit-based landslide susceptibility assessment.
GALAXY: A new hybrid MOEA for the optimal design of Water Distribution Systems
NASA Astrophysics Data System (ADS)
Wang, Q.; Savić, D. A.; Kapelan, Z.
2017-03-01
A new hybrid optimizer, called genetically adaptive leaping algorithm for approximation and diversity (GALAXY), is proposed for dealing with the discrete, combinatorial, multiobjective design of Water Distribution Systems (WDSs), which is NP-hard and computationally intensive. The merit of GALAXY is its ability to alleviate to a great extent the parameterization issue and the high computational overhead. It follows the generational framework of Multiobjective Evolutionary Algorithms (MOEAs) and includes six search operators and several important strategies. These operators are selected based on their leaping ability in the objective space from the global and local search perspectives. These strategies steer the optimization and balance the exploration and exploitation aspects simultaneously. A highlighted feature of GALAXY is that it eliminates the majority of parameters, thus being robust and easy to use. The comparative studies between GALAXY and three representative MOEAs on five benchmark WDS design problems confirm its competitiveness. GALAXY can identify better converged and distributed boundary solutions efficiently and consistently, indicating a much more balanced capability between the global and local search. Moreover, its advantages over other MOEAs become more substantial as the complexity of the design problem increases.
Tensor Spectral Clustering for Partitioning Higher-order Network Structures.
Benson, Austin R; Gleich, David F; Leskovec, Jure
2015-01-01
Spectral graph theory-based methods represent an important class of tools for studying the structure of networks. Spectral methods are based on a first-order Markov chain derived from a random walk on the graph and thus they cannot take advantage of important higher-order network substructures such as triangles, cycles, and feed-forward loops. Here we propose a Tensor Spectral Clustering (TSC) algorithm that allows for modeling higher-order network structures in a graph partitioning framework. Our TSC algorithm allows the user to specify which higher-order network structures (cycles, feed-forward loops, etc.) should be preserved by the network clustering. Higher-order network structures of interest are represented using a tensor, which we then partition by developing a multilinear spectral method. Our framework can be applied to discovering layered flows in networks as well as graph anomaly detection, which we illustrate on synthetic networks. In directed networks, a higher-order structure of particular interest is the directed 3-cycle, which captures feedback loops in networks. We demonstrate that our TSC algorithm produces large partitions that cut fewer directed 3-cycles than standard spectral clustering algorithms.
Tensor Spectral Clustering for Partitioning Higher-order Network Structures
Benson, Austin R.; Gleich, David F.; Leskovec, Jure
2016-01-01
Spectral graph theory-based methods represent an important class of tools for studying the structure of networks. Spectral methods are based on a first-order Markov chain derived from a random walk on the graph and thus they cannot take advantage of important higher-order network substructures such as triangles, cycles, and feed-forward loops. Here we propose a Tensor Spectral Clustering (TSC) algorithm that allows for modeling higher-order network structures in a graph partitioning framework. Our TSC algorithm allows the user to specify which higher-order network structures (cycles, feed-forward loops, etc.) should be preserved by the network clustering. Higher-order network structures of interest are represented using a tensor, which we then partition by developing a multilinear spectral method. Our framework can be applied to discovering layered flows in networks as well as graph anomaly detection, which we illustrate on synthetic networks. In directed networks, a higher-order structure of particular interest is the directed 3-cycle, which captures feedback loops in networks. We demonstrate that our TSC algorithm produces large partitions that cut fewer directed 3-cycles than standard spectral clustering algorithms. PMID:27812399
NASA Technical Reports Server (NTRS)
Schmidt, Phillip; Garg, Sanjay; Holowecky, Brian
1992-01-01
A parameter optimization framework is presented to solve the problem of partitioning a centralized controller into a decentralized hierarchical structure suitable for integrated flight/propulsion control implementation. The controller partitioning problem is briefly discussed and a cost function to be minimized is formulated, such that the resulting 'optimal' partitioned subsystem controllers will closely match the performance (including robustness) properties of the closed-loop system with the centralized controller while maintaining the desired controller partitioning structure. The cost function is written in terms of parameters in a state-space representation of the partitioned sub-controllers. Analytical expressions are obtained for the gradient of this cost function with respect to parameters, and an optimization algorithm is developed using modern computer-aided control design and analysis software. The capabilities of the algorithm are demonstrated by application to partitioned integrated flight/propulsion control design for a modern fighter aircraft in the short approach to landing task. The partitioning optimization is shown to lead to reduced-order subcontrollers that match the closed-loop command tracking and decoupling performance achieved by a high-order centralized controller.
NASA Technical Reports Server (NTRS)
Schmidt, Phillip H.; Garg, Sanjay; Holowecky, Brian R.
1993-01-01
A parameter optimization framework is presented to solve the problem of partitioning a centralized controller into a decentralized hierarchical structure suitable for integrated flight/propulsion control implementation. The controller partitioning problem is briefly discussed and a cost function to be minimized is formulated, such that the resulting 'optimal' partitioned subsystem controllers will closely match the performance (including robustness) properties of the closed-loop system with the centralized controller while maintaining the desired controller partitioning structure. The cost function is written in terms of parameters in a state-space representation of the partitioned sub-controllers. Analytical expressions are obtained for the gradient of this cost function with respect to parameters, and an optimization algorithm is developed using modern computer-aided control design and analysis software. The capabilities of the algorithm are demonstrated by application to partitioned integrated flight/propulsion control design for a modern fighter aircraft in the short approach to landing task. The partitioning optimization is shown to lead to reduced-order subcontrollers that match the closed-loop command tracking and decoupling performance achieved by a high-order centralized controller.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Xiaonan; Schreiber, Daniel K.; Neeway, James J.
Atom probe tomography (APT) is a novel analytical microscopy method that provides three dimensional elemental mapping with sub-nanometer spatial resolution and has only recently been applied to insulating glass and ceramic samples. In this paper, we have studied the influence of the optical absorption in glass samples on APT characterization by introducing different transition metal optical dopants to a model borosilicate nuclear waste glass (international simple glass). A systematic comparison is presented of the glass optical properties and the resulting APT data quality in terms of compositional accuracy and the mass spectra quality for two APT systems: one with a green laser (532 nm, LEAP 3000X HR) and one with a UV laser (355 nm, LEAP 4000X HR). These data were also compared to the study of a more complex borosilicate glass (SON68). The results show that the analysis data quality, such as compositional accuracy and total ions collected, was clearly linked to optical absorption when using a green laser, while for the UV laser optical doping aided in improving data yield but did not have a significant effect on compositional accuracy. Comparisons of data between the LEAP systems suggest that the smaller laser spot size of the LEAP 4000X HR played a more critical role for optimum performance than the optical dopants themselves. The smaller spot size resulted in more accurate composition measurements due to a reduced background level independent of the material’s optical properties.
NASA Astrophysics Data System (ADS)
Arimbi, Mentari Dian; Bustamam, Alhadi; Lestari, Dian
2017-03-01
Data clustering can be executed through partition or hierarchical methods for many types of data, including DNA sequences. Both clustering methods can be combined by applying a partition algorithm in the first level and a hierarchical algorithm in the second level, called hybrid clustering. In the partition phase, popular methods such as PAM, k-means, or Fuzzy c-means could be applied; in this study we selected partitioning around medoids (PAM) for our partition stage. Following the partition algorithm, in the hierarchical stage we applied the divisive analysis algorithm (DIANA) in order to obtain more specific clusters and sub-cluster structures. The number of main clusters is determined using the Davies-Bouldin Index (DBI); we choose the number of clusters that minimizes the DBI value. In this work, we conduct the clustering on 1252 HPV DNA sequences from GenBank. Characteristic extraction is performed first, followed by normalization and genetic distance calculation using Euclidean distance. In our implementation, we used the hybrid PAM and DIANA in the R open source programming tool. In our results, we obtained 3 main clusters with an average DBI value of 0.979 using PAM in the first stage. After executing DIANA in the second stage, we obtained 4 sub-clusters for Cluster-1, 9 sub-clusters for Cluster-2 and 2 sub-clusters for Cluster-3, with DBI values of 0.972, 0.771, and 0.768 for each main cluster, respectively. Since the second stage produces lower DBI values than the first stage, we conclude that this hybrid approach can improve the accuracy of our clustering results.
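A minimal sketch of the cluster-count selection step: compute the Davies-Bouldin Index over a range of candidate counts and keep the minimizer. scikit-learn's k-means stands in for PAM here (PAM is not in scikit-learn, and the paper's implementation was in R).

```python
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def pick_k_by_dbi(X, k_range=range(2, 11)):
    """Return the cluster count minimizing DBI, plus all scores."""
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)
        scores[k] = davies_bouldin_score(X, labels)
    return min(scores, key=scores.get), scores
```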
Predicting chroma from luma with frequency domain intra prediction
NASA Astrophysics Data System (ADS)
Egge, Nathan E.; Valin, Jean-Marc
2015-03-01
This paper describes a technique for performing intra prediction of the chroma planes based on the reconstructed luma plane in the frequency domain. This prediction exploits the fact that while RGB to YUV color conversion decorrelates the color planes globally across an image, there is still some correlation locally at the block level. Previous proposals compute a linear model of the spatial relationship between the luma plane (Y) and the two chroma planes (U and V). In codecs that use lapped transforms this is not possible, since transform support extends across the block boundaries and thus neighboring blocks are unavailable during intra-prediction. We design a frequency domain intra predictor for chroma that exploits the same local correlation with lower complexity than the spatial predictor and which works with lapped transforms. We then describe a low-complexity algorithm that directly uses luma coefficients as a chroma predictor based on gain-shape quantization and band partitioning. An experiment is performed that compares these two techniques inside the experimental Daala video codec and shows the lower complexity algorithm to be a better chroma predictor.
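A toy sketch of the gain-shape idea in the DCT domain: the luma block's AC coefficients, normalized to a unit "shape" vector, are scaled by a separately signaled chroma gain. This omits the band partitioning and the lapped transform of the actual codec; names and parameters are illustrative.

```python
import numpy as np
from scipy.fft import dctn

def cfl_predict_chroma_ac(luma_block, chroma_gain):
    """Predict a chroma block's AC coefficients from the luma block:
    luma supplies the shape (direction), chroma_gain the magnitude."""
    L = dctn(luma_block.astype(float), norm='ortho')
    ac = L.copy()
    ac[0, 0] = 0.0                           # DC is predicted separately
    norm = np.linalg.norm(ac)
    shape = ac / norm if norm > 0 else ac    # unit-norm shape vector
    return chroma_gain * shape
```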
Inter-method Performance Study of Tumor Volumetry Assessment on Computed Tomography Test-retest Data
Buckler, Andrew J.; Danagoulian, Jovanna; Johnson, Kjell; Peskin, Adele; Gavrielides, Marios A.; Petrick, Nicholas; Obuchowski, Nancy A.; Beaumont, Hubert; Hadjiiski, Lubomir; Jarecha, Rudresh; Kuhnigk, Jan-Martin; Mantri, Ninad; McNitt-Gray, Michael; Moltz, Jan Hendrik; Nyiri, Gergely; Peterson, Sam; Tervé, Pierre; Tietjen, Christian; von Lavante, Etienne; Ma, Xiaonan; Pierre, Samantha St.; Athelogou, Maria
2015-01-01
Rationale and objectives: Tumor volume change has potential as a biomarker for diagnosis, therapy planning, and treatment response. Precision was evaluated and compared among semi-automated lung tumor volume measurement algorithms from clinical thoracic CT datasets. The results inform approaches and testing requirements for establishing conformance with the Quantitative Imaging Biomarker Alliance (QIBA) CT Volumetry Profile. Materials and Methods: Industry and academic groups participated in a challenge study. Intra-algorithm repeatability and inter-algorithm reproducibility were estimated. Relative magnitudes of various sources of variability were estimated using a linear mixed effects model. Segmentation boundaries were compared to provide a basis on which to optimize algorithm performance for developers. Results: Intra-algorithm repeatability ranged from 13% (best performing) to 100% (least performing), with most algorithms demonstrating improved repeatability as the tumor size increased. Inter-algorithm reproducibility was determined in three partitions and found to be 58% for the four best performing groups, 70% for the set of groups meeting repeatability requirements, and 84% when all groups but the least performer were included. The best performing partition performed markedly better on tumors with equivalent diameters above 40 mm. Larger tumors benefitted from human editing but smaller tumors did not. One-fifth to one-half of the total variability came from sources independent of the algorithms. Segmentation boundaries differed substantially, not just in overall volume but in detail. Conclusions: Nine of the twelve participating algorithms pass precision requirements similar to what is indicated in the QIBA Profile, with the caveat that the current study was not designed to explicitly evaluate algorithm Profile conformance. Change in tumor volume can be measured with confidence to within ±14% using any of these nine algorithms on tumor sizes above 10 mm. No partition of the algorithms was able to meet the QIBA requirements for interchangeability down to 10 mm, though the partition comprised of the best performing algorithms did meet this requirement above a tumor size of approximately 40 mm. PMID:26376841
An Efficient Algorithm for Partitioning and Authenticating Problem-Solutions of eLearning Contents
ERIC Educational Resources Information Center
Dewan, Jahangir; Chowdhury, Morshed; Batten, Lynn
2013-01-01
Content authenticity and correctness is one of the important challenges in eLearning as there can be many solutions to one specific problem in cyber space. Therefore, the authors feel it is necessary to map problems to solutions using graph partition and weighted bipartite matching. This article proposes an efficient algorithm to partition…
Spatial-Temporal dynamics of Newtonian and viscoelastic turbulence
NASA Astrophysics Data System (ADS)
Wang, Sung-Ning; Graham, Michael
2015-11-01
Introducing a trace amount of polymer into liquid turbulent flows can result in substantial reduction of friction drag. This phenomenon has been widely used in fluid transport, such as the Alaska crude oil pipeline. However, the mechanism is not well understood. We conduct direct numerical simulations of Newtonian and viscoelastic turbulence in large domains, in which the flow shows different characteristics in different regions. In some areas the drag is low and vortex motions are quiescent, while in other areas the drag is higher and the motions are more active. To identify these regions, we apply a statistical method, k-means clustering, which partitions the observations into k clusters by assigning each observation to its nearest centroid. The resulting partition maximizes the between-cluster variance. In the simulations, the observations are the instantaneous wall shear rate. Regions with different levels of drag are automatically identified by the partitioning algorithm. We find that the velocity profiles of the centroids exhibit characteristics similar to the individual coherent structures observed in minimal domain simulations. In addition, as viscoelasticity increases, polymer stretch becomes strongly correlated with wall shear stress. This work was supported by NSF grant CBET-1510291.
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automatons (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity O(n) than a naive comparison of transitions O(n^2). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
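For contrast with the two-phase method above, here is the classic partition-refinement baseline (Moore's algorithm), which repeatedly splits blocks by transition signatures until stable; the paper's backward-depth coarsening and hash-table refinement replace these repeated full passes.

```python
from collections import defaultdict

def minimize_dfa(states, alphabet, delta, accepting):
    """Moore-style minimization of a complete DFA.
    delta: dict state -> dict symbol -> state.
    Returns a map state -> block id of the minimal automaton."""
    part = {s: (s in accepting) for s in states}   # coarse initial split
    while True:
        # signature = own block plus blocks reached on each symbol
        sig = {s: (part[s],) + tuple(part[delta[s][a]] for a in alphabet)
               for s in states}
        groups = defaultdict(list)
        for s in states:
            groups[sig[s]].append(s)
        new_part = {s: i for i, members in enumerate(groups.values())
                    for s in members}
        if len(set(new_part.values())) == len(set(part.values())):
            return new_part                        # partition is stable
        part = new_part
```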
Robust rotational-velocity-Verlet integration methods.
Rozmanov, Dmitri; Kusalik, Peter G
2010-05-01
Two rotational integration algorithms for rigid-body dynamics are proposed in velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method by Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time consistent positions and momenta. The second method is also formulated in terms of quaternions but it is not quaternion specific and can be easily adapted for any other orientational representation. Both the methods are tested extensively and compared to existing rotational integrators. The proposed integrators demonstrated performance at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.
Robust rotational-velocity-Verlet integration methods
NASA Astrophysics Data System (ADS)
Rozmanov, Dmitri; Kusalik, Peter G.
2010-05-01
Two rotational integration algorithms for rigid-body dynamics are proposed in velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method by Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time consistent positions and momenta. The second method is also formulated in terms of quaternions but it is not quaternion specific and can be easily adapted for any other orientational representation. Both the methods are tested extensively and compared to existing rotational integrators. The proposed integrators demonstrated performance at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.
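To make the quaternion kinematics concrete, the sketch below advances an orientation with the elementary relation dq/dt = q (x) (0, omega)/2 using a naive Euler step plus renormalization; it is intentionally simpler than the velocity-Verlet formulations proposed in the paper.

```python
import numpy as np

def quat_mul(q, p):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotate_step(q, omega_body, dt):
    """One naive orientation update: dq/dt = 0.5 * q * (0, omega),
    advanced by forward Euler, then renormalized to unit length."""
    dq = 0.5 * quat_mul(q, np.concatenate(([0.0], omega_body)))
    q_new = q + dt * dq
    return q_new / np.linalg.norm(q_new)
```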
Jaspard, Emmanuel; Macherel, David; Hunault, Gilles
2012-01-01
Late Embryogenesis Abundant Proteins (LEAPs) are ubiquitous proteins expected to play major roles in desiccation tolerance. Little is known about their structure - function relationships because of the scarcity of 3-D structures for LEAPs. The previous building of LEAPdb, a database dedicated to LEAPs from plants and other organisms, led to the classification of 710 LEAPs into 12 non-overlapping classes with distinct properties. Using this resource, numerous physico-chemical properties of LEAPs and amino acid usage by LEAPs have been computed and statistically analyzed, revealing distinctive features for each class. This unprecedented analysis allowed a rigorous characterization of the 12 LEAP classes, which differed also in multiple structural and physico-chemical features. Although most LEAPs can be predicted as intrinsically disordered proteins, the analysis indicates that LEAP class 7 (PF03168) and probably LEAP class 11 (PF04927) are natively folded proteins. This study thus provides a detailed description of the structural properties of this protein family opening the path toward further LEAP structure - function analysis. Finally, since each LEAP class can be clearly characterized by a unique set of physico-chemical properties, this will allow development of software to predict proteins as LEAPs. PMID:22615859
Data Driven Estimation of Transpiration from Net Water Fluxes: the TEA Algorithm
NASA Astrophysics Data System (ADS)
Nelson, J. A.; Carvalhais, N.; Cuntz, M.; Delpierre, N.; Knauer, J.; Migliavacca, M.; Ogee, J.; Reichstein, M.; Jung, M.
2017-12-01
The eddy covariance method, while powerful, can only provide a net accounting of ecosystem fluxes. Particularly for water cycle components, efforts to partition total evapotranspiration (ET) into its biotic component (transpiration, T) and abiotic component (here evaporation, E) have seen limited success, with no one method emerging as a standard. Here we demonstrate a novel method that uses ecosystem water use efficiency (WUE) to predict transpiration in two steps: (1) a filtering step that isolates periods where E is minimized and ET is likely dominated by the signal of T; and (2) a step that predicts WUE using meteorological variables, as well as information derived from the carbon and energy fluxes. To assess the underlying assumptions, we tested the proposed method on three ecological models, allowing validation where the underlying carbon:water relationships, as well as the transpiration estimates, are known. The partitioning method shows high correlation (R² > 0.8) between T_model/ET and T_TEA/ET across timescales from half-hourly to annually, as well as capturing spatial variability across sites. Apart from predictive performance, we explore the sensitivities of the method to the underlying assumptions, such as the effects of residual evaporation in the training dataset. Furthermore, we show initial transpiration estimates from the algorithm at global scale, via the FLUXNET dataset.
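A hedged sketch of the two-step logic (not the published TEA code): keep periods where evaporation is plausibly negligible, learn WUE = GPP/ET there from meteorology, then invert the prediction into T = GPP/WUE everywhere. The column names, the dry-period filter, and the random-forest regressor are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def tea_like_partition(df):
    """df: pandas DataFrame with illustrative columns gpp, et,
    days_since_rain, rg, ta, vpd. Returns a transpiration estimate."""
    dry = (df['days_since_rain'] > 3) & (df['et'] > 0) & (df['gpp'] > 0)
    feats = ['rg', 'ta', 'vpd']
    wue = df.loc[dry, 'gpp'] / df.loc[dry, 'et']   # ~GPP/T when E ~ 0
    model = RandomForestRegressor(n_estimators=200)
    model.fit(df.loc[dry, feats], wue)
    wue_hat = model.predict(df[feats])             # WUE at all times
    return df['gpp'] / np.maximum(wue_hat, 1e-6)   # T = GPP / WUE
```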
Stochastic simulation of reaction-diffusion systems: A fluctuating-hydrodynamics approach
NASA Astrophysics Data System (ADS)
Kim, Changho; Nonaka, Andy; Bell, John B.; Garcia, Alejandro L.; Donev, Aleksandar
2017-03-01
We develop numerical methods for stochastic reaction-diffusion systems based on approaches used for fluctuating hydrodynamics (FHD). For hydrodynamic systems, the FHD formulation is formally described by stochastic partial differential equations (SPDEs). In the reaction-diffusion systems we consider, our model becomes similar to the reaction-diffusion master equation (RDME) description when our SPDEs are spatially discretized and reactions are modeled as a source term having Poisson fluctuations. However, unlike the RDME, which becomes prohibitively expensive for an increasing number of molecules, our FHD-based description naturally extends from the regime where fluctuations are strong, i.e., each mesoscopic cell has few (reactive) molecules, to regimes with moderate or weak fluctuations, and ultimately to the deterministic limit. By treating diffusion implicitly, we avoid the severe restriction on time step size that limits all methods based on explicit treatments of diffusion and construct numerical methods that are more efficient than RDME methods, without compromising accuracy. Guided by an analysis of the accuracy of the distribution of steady-state fluctuations for the linearized reaction-diffusion model, we construct several two-stage (predictor-corrector) schemes, where diffusion is treated using a stochastic Crank-Nicolson method, and reactions are handled by the stochastic simulation algorithm of Gillespie or a weakly second-order tau leaping method. We find that an implicit midpoint tau leaping scheme attains second-order weak accuracy in the linearized setting and gives an accurate and stable structure factor for a time step size of an order of magnitude larger than the hopping time scale of diffusing molecules. We study the numerical accuracy of our methods for the Schlögl reaction-diffusion model both in and out of thermodynamic equilibrium. We demonstrate and quantify the importance of thermodynamic fluctuations to the formation of a two-dimensional Turing-like pattern and examine the effect of fluctuations on three-dimensional chemical front propagation. By comparing stochastic simulations to deterministic reaction-diffusion simulations, we show that fluctuations accelerate pattern formation in spatially homogeneous systems and lead to a qualitatively different disordered pattern behind a traveling wave.
Stochastic simulation of reaction-diffusion systems: A fluctuating-hydrodynamics approach
Kim, Changho; Nonaka, Andy; Bell, John B.; ...
2017-03-24
Here, we develop numerical methods for stochastic reaction-diffusion systems based on approaches used for fluctuating hydrodynamics (FHD). For hydrodynamic systems, the FHD formulation is formally described by stochastic partial differential equations (SPDEs). In the reaction-diffusion systems we consider, our model becomes similar to the reaction-diffusion master equation (RDME) description when our SPDEs are spatially discretized and reactions are modeled as a source term having Poisson fluctuations. However, unlike the RDME, which becomes prohibitively expensive for an increasing number of molecules, our FHD-based description naturally extends from the regime where fluctuations are strong, i.e., each mesoscopic cell has few (reactive) molecules, to regimes with moderate or weak fluctuations, and ultimately to the deterministic limit. By treating diffusion implicitly, we avoid the severe restriction on time step size that limits all methods based on explicit treatments of diffusion and construct numerical methods that are more efficient than RDME methods, without compromising accuracy. Guided by an analysis of the accuracy of the distribution of steady-state fluctuations for the linearized reaction-diffusion model, we construct several two-stage (predictor-corrector) schemes, where diffusion is treated using a stochastic Crank-Nicolson method, and reactions are handled by the stochastic simulation algorithm of Gillespie or a weakly second-order tau leaping method. We find that an implicit midpoint tau leaping scheme attains second-order weak accuracy in the linearized setting and gives an accurate and stable structure factor for a time step size of an order of magnitude larger than the hopping time scale of diffusing molecules. We study the numerical accuracy of our methods for the Schlögl reaction-diffusion model both in and out of thermodynamic equilibrium. We demonstrate and quantify the importance of thermodynamic fluctuations to the formation of a two-dimensional Turing-like pattern and examine the effect of fluctuations on three-dimensional chemical front propagation. Furthermore, by comparing stochastic simulations to deterministic reaction-diffusion simulations, we show that fluctuations accelerate pattern formation in spatially homogeneous systems and lead to a qualitatively different disordered pattern behind a traveling wave.
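The 1-D toy step below shows why implicit diffusion pays off: a Crank-Nicolson tridiagonal solve is unconditionally stable, so the time step is not throttled by dx²/D, while reactions enter as a Poisson (tau-leap) source. This operator-split simplification, with zero-flux boundaries and a bare production reaction, is an assumption; it is not the paper's two-stage predictor-corrector FHD scheme.

```python
import numpy as np
from scipy.linalg import solve_banded

rng = np.random.default_rng(1)

def cn_diffusion_reaction_step(n, D, dx, dt, k_prod):
    """One step: Crank-Nicolson diffusion (implicit, tridiagonal)
    followed by a Poisson tau-leap production term."""
    r = D * dt / (2 * dx**2)
    m = len(n)
    ab = np.zeros((3, m))            # banded matrix for (I - r*Lap)
    ab[0, 1:] = -r                   # superdiagonal
    ab[2, :-1] = -r                  # subdiagonal
    ab[1, :] = 1 + 2 * r
    ab[1, 0] = ab[1, -1] = 1 + r     # zero-flux (Neumann) ends
    rhs = n.copy()                   # explicit half: (I + r*Lap) n
    rhs[1:-1] += r * (n[2:] - 2 * n[1:-1] + n[:-2])
    rhs[0] += r * (n[1] - n[0])
    rhs[-1] += r * (n[-2] - n[-1])
    n_new = solve_banded((1, 1), ab, rhs)
    return n_new + rng.poisson(k_prod * dt, size=m)  # reaction source
```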
NASA Astrophysics Data System (ADS)
Wang, X. Y.; Dou, J. M.; Shen, H.; Li, J.; Yang, G. S.; Fan, R. Q.; Shen, Q.
2018-03-01
As power grids are continuously reinforced, their network structure becomes increasingly complicated. An open, regional data model is used to compute protection settings on a per-region basis. At the same time, a high-precision, quasi-real-time boundary fusion technique is needed to seamlessly integrate the regions into a unified fault-computation platform that can perform high-accuracy, multi-mode transient stability analysis covering the whole network, handle the consequences of non-single and cascading faults, and build “the first line of defense” of the power grid. The boundary fusion algorithm in this paper automatically and accurately couples the boundaries of the partitioned grid regions. Taking the actual operation mode as its qualification, it performs boundary coupling for weakly coupled partitions in open-loop mode, improving fusion efficiency, truly reflecting transient stability levels, and effectively addressing the problems of excessive data volume, difficult partition fusion, and fusion failures caused by mutually exclusive conditions. This paper first introduces the basic principle of the fusion process, then describes the customization of boundary fusion through a scene description. Finally, an example illustrates how the algorithm effectively implements boundary fusion after grid partitioning and verifies its accuracy and efficiency.
Accelerating Time-Varying Hardware Volume Rendering Using TSP Trees and Color-Based Error Metrics
NASA Technical Reports Server (NTRS)
Ellsworth, David; Chiang, Ling-Jen; Shen, Han-Wei; Kwak, Dochan (Technical Monitor)
2000-01-01
This paper describes a new hardware volume rendering algorithm for time-varying data. The algorithm uses the Time-Space Partitioning (TSP) tree data structure to identify regions within the data that have spatial or temporal coherence. By using this coherence, the rendering algorithm can improve performance when the volume data is larger than the texture memory capacity by decreasing the amount of textures required. This coherence can also allow improved speed by appropriately rendering flat-shaded polygons instead of textured polygons, and by not rendering transparent regions. To reduce the polygonization overhead caused by the use of the hierarchical data structure, we introduce an optimization method using polygon templates. The paper also introduces new color-based error metrics, which more accurately identify coherent regions compared to the earlier scalar-based metrics. By showing experimental results from runs using different data sets and error metrics, we demonstrate that the new methods give substantial improvements in volume rendering performance.
Minimum nonuniform graph partitioning with unrelated weights
NASA Astrophysics Data System (ADS)
Makarychev, K. S.; Makarychev, Yu S.
2017-12-01
We give a bi-criteria approximation algorithm for the Minimum Nonuniform Graph Partitioning problem, recently introduced by Krauthgamer, Naor, Schwartz and Talwar. In this problem, we are given a graph G = (V, E) and k numbers ρ_1, ..., ρ_k. The goal is to partition V into k disjoint sets (bins) P_1, ..., P_k satisfying |P_i| ≤ ρ_i |V| for all i, so as to minimize the number of edges cut by the partition. Our bi-criteria algorithm gives an O(√(log|V| log k)) approximation for the objective function in general graphs and an O(1) approximation in graphs excluding a fixed minor. The approximate solution satisfies the relaxed capacity constraints |P_i| ≤ (5 + ε)ρ_i |V|. This algorithm is an improvement upon the O(log|V|)-approximation algorithm by Krauthgamer, Naor, Schwartz and Talwar. We extend our results to the case of 'unrelated weights' and to the case of 'unrelated d-dimensional weights'. A preliminary version of this work was presented at the 41st International Colloquium on Automata, Languages and Programming (ICALP 2014). Bibliography: 7 titles.
Constant-Time Pattern Matching For Real-Time Production Systems
NASA Astrophysics Data System (ADS)
Parson, Dale E.; Blank, Glenn D.
1989-03-01
Many intelligent systems must respond to sensory data or critical environmental conditions in fixed, predictable time. Rule-based systems, including those based on the efficient Rete matching algorithm, cannot guarantee this result. Improvement in execution-time efficiency is not all that is needed here; it is important to ensure constant, O(1) time limits for portions of the matching process. Our approach is inspired by two observations about human performance. First, cognitive psychologists distinguish between automatic and controlled processing. Analogously, we partition the matching process across two networks. The first is the automatic partition; it is characterized by predictable O(1) time and space complexity, lack of persistent memory, and is reactive in nature. The second is the controlled partition; it includes the search-based goal-driven and data-driven processing typical of most production system programming. The former is responsible for recognition and response to critical environmental conditions. The latter is responsible for the more flexible problem-solving behaviors consistent with the notion of intelligence. Support for learning and refining the automatic partition can be placed in the controlled partition. Our second observation is that people are able to attend to more critical stimuli or requirements selectively. Our match algorithm uses priorities to focus matching. It compares priority of information during matching, rather than deferring this comparison until conflict resolution. Messages from the automatic partition are able to interrupt the controlled partition, enhancing system responsiveness. Our algorithm has numerous applications for systems that must exhibit time-constrained behavior.
Distributed Sleep Scheduling in Wireless Sensor Networks via Fractional Domatic Partitioning
NASA Astrophysics Data System (ADS)
Schumacher, André; Haanpää, Harri
We consider setting up sleep scheduling in sensor networks. We formulate the problem as an instance of the fractional domatic partition problem and obtain a distributed approximation algorithm by applying linear programming approximation techniques. Our algorithm is an application of the Garg-Könemann (GK) scheme that requires solving an instance of the minimum weight dominating set (MWDS) problem as a subroutine. Our two main contributions are a distributed implementation of the GK scheme for the sleep-scheduling problem and a novel asynchronous distributed algorithm for approximating MWDS based on a primal-dual analysis of Chvátal's set-cover algorithm. We evaluate our algorithm with
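As a centralized stand-in for the MWDS subroutine, the sketch below runs the Chvátal-style greedy: repeatedly pick the node with the smallest weight per newly dominated node. The paper's contribution is a distributed, primal-dual realization of this idea; the adjacency representation here is an assumption.

```python
def greedy_mwds(graph, weight):
    """Greedy minimum weight dominating set.
    graph: dict node -> set of neighbors; weight: dict node -> float."""
    undominated = set(graph)
    chosen = set()
    while undominated:
        def ratio(v):
            newly = ({v} | graph[v]) & undominated
            return weight[v] / len(newly) if newly else float('inf')
        v = min(graph, key=ratio)        # best weight-per-coverage node
        chosen.add(v)
        undominated -= {v} | graph[v]
    return chosen
```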
NASA Astrophysics Data System (ADS)
Lakshmi, V.; Mladenova, I. E.; Narayan, U.
2009-12-01
Soil moisture is known to be an essential factor in controlling the partitioning of rainfall into surface runoff and infiltration, and of solar energy into latent and sensible heat fluxes. Remote sensing has long proven its capability to obtain soil moisture in near real-time. However, at present the Advanced Microwave Scanning Radiometer (AMSR-E) on board NASA’s AQUA platform is the only satellite sensor that supplies a soil moisture product. AMSR-E's coarse spatial resolution (~50 km at 6.9 GHz) strongly limits its applicability for small-scale studies. A very promising technique for spatial disaggregation by combining radar and radiometer observations has been demonstrated by the authors, using a methodology based on the assumption that any change in measured brightness temperature and backscatter from one time step to the next is due primarily to change in soil wetness. The approach uses radiometric estimates of soil moisture at a lower resolution to compute the sensitivity of radar to soil moisture at the lower resolution. This estimate of sensitivity is then disaggregated using vegetation water content, vegetation type and soil texture information, which are the variables that determine the radar sensitivity to soil moisture and are generally available at the scale of the radar observation. This change detection algorithm is applied to several locations. We have used aircraft-observed active and passive data over the Walnut Creek watershed in central Iowa in 2002, the Little Washita watershed in Oklahoma in 2003, and the Murrumbidgee Catchment in southeastern Australia for 2006. All of these locations have different soils and land cover conditions, which leads to a rigorous test of the disaggregation algorithm. Furthermore, we compare the derived high spatial resolution soil moisture to in-situ sampling and ground observation networks.
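In sketch form, the change-detection update is a single rescaling: the fine-scale change in backscatter, divided by the disaggregated sensitivity of backscatter to soil moisture, gives the fine-scale change in wetness. Array names and the guard constant are illustrative; in the method described above, the sensitivity is first estimated at radiometer scale and then disaggregated using vegetation and soil texture information.

```python
import numpy as np

def update_soil_moisture(theta_prev, dsigma_fine_db, sens_fine_db):
    """theta_prev: previous soil moisture estimate (fine grid);
    dsigma_fine_db: backscatter change (dB); sens_fine_db: sensitivity
    of backscatter to soil moisture (dB per unit wetness)."""
    dtheta = dsigma_fine_db / np.maximum(sens_fine_db, 1e-6)
    return theta_prev + dtheta
```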
A parallel computer implementation of fast low-rank QR approximation of the Biot-Savart law
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, D A; Fasenfest, B J; Stowell, M L
2005-11-07
In this paper we present a low-rank QR method for evaluating the discrete Biot-Savart law on parallel computers. It is assumed that the known current density and the unknown magnetic field are both expressed in a finite element expansion, and we wish to compute the degrees-of-freedom (DOF) in the basis function expansion of the magnetic field. The matrix that maps the current DOF to the field DOF is full, but if the spatial domain is properly partitioned the matrix can be written as a block matrix, with blocks representing distant interactions being low rank and having a compressed QR representation. The matrix partitioning is determined by the number of processors; the rank of each block (i.e. the compression) is determined by the specific geometry and is computed dynamically. In this paper we provide the algorithmic details and present computational results for large-scale computations.
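A compact sketch of the block-compression step using SciPy's pivoted QR: columns are kept while the diagonal of R stays above a relative tolerance, so a far-interaction block can be applied at low-rank cost. The tolerance rule is an assumption; as stated above, the paper computes each block's rank dynamically from the geometry.

```python
import numpy as np
from scipy.linalg import qr

def compress_block(A, tol=1e-6):
    """Rank-revealing pivoted QR of a far-interaction block.
    Returns (Q1, R1, perm) with A[:, perm] ~= Q1 @ R1, so a product
    A @ x ~= Q1 @ (R1 @ x[perm]) costs O((m + n) k) for rank k."""
    Q, R, perm = qr(A, mode='economic', pivoting=True)
    diag = np.abs(np.diag(R))
    k = max(int((diag >= tol * diag[0]).sum()), 1) if diag[0] > 0 else 1
    return Q[:, :k], R[:k, :], perm
```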
Mahrooghy, Majid; Ashraf, Ahmed B; Daye, Dania; Mies, Carolyn; Feldman, Michael; Rosen, Mark; Kontos, Despina
2013-01-01
Breast tumors are heterogeneous lesions. Intra-tumor heterogeneity presents a major challenge for cancer diagnosis and treatment. Few studies have worked on capturing tumor heterogeneity from imaging. Most studies to date consider aggregate measures for tumor characterization. In this work we capture tumor heterogeneity by partitioning tumor pixels into subregions and extracting heterogeneity wavelet kinetic (HetWave) features from breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) to obtain the spatiotemporal patterns of the wavelet coefficients and contrast agent uptake from each partition. Using a genetic algorithm for feature selection, and a logistic regression classifier with leave one-out cross validation, we tested our proposed HetWave features for the task of classifying breast cancer recurrence risk. The classifier based on our features gave an ROC AUC of 0.78, outperforming previously proposed kinetic, texture, and spatial enhancement variance features which give AUCs of 0.69, 0.64, and 0.65, respectively.
Velocity of motion across the skin influences perception of tactile location
Nguyen, Elizabeth H. L.; Taylor, Janet L.; Brooks, Jack
2015-01-01
We investigated the influence of motion context on tactile localization, using a paradigm similar to the cutaneous rabbit or sensory saltation (Geldard FA, Sherrick CE. Science 178: 178–179, 1972). In one of its forms, the rabbit stimulus consists of a tap in one location quickly followed by another tap elsewhere, creating the illusion that the two taps are near each other. Instead of taps, we used position of a halted brush and instead of distance judgment, localization responses. The brush moved across the skin of the left forearm, creating a clear motion signal before and after a rabbitlike leap of 10 cm (at 100 cm/s). Three before-and-after velocities (7.5, 15, or 30 cm/s) were used. Participants (n = 13) pointed with their right arm at the felt location of the brush when it halted either 1 cm before or after the leap. These stops were 12 cm apart, but distances computed from localization responses were only 5.4, 6.5, and 7.5 cm for the three velocities, respectively (F[2,11] = 15.19, P = 0.001). Thus the leap resulted in compressive position shift, as described previously for sensory saltation, but modulated by motion velocity before the leap: the slower the motion, the greater the shift opposite to motion direction. No gap in stimulation was perceived. We propose that velocity extrapolation causes the position shift: extrapolated motion does not have enough time to bridge the real spatial gap and thus assigns a closer location to the skin on the opposite side of the gap. PMID:26609112
Velocity of motion across the skin influences perception of tactile location.
Nguyen, Elizabeth H L; Taylor, Janet L; Brooks, Jack; Seizova-Cajic, Tatjana
2016-02-01
We investigated the influence of motion context on tactile localization, using a paradigm similar to the cutaneous rabbit or sensory saltation (Geldard FA, Sherrick CE. Science 178: 178-179, 1972). In one of its forms, the rabbit stimulus consists of a tap in one location quickly followed by another tap elsewhere, creating the illusion that the two taps are near each other. Instead of taps, we used position of a halted brush and instead of distance judgment, localization responses. The brush moved across the skin of the left forearm, creating a clear motion signal before and after a rabbitlike leap of 10 cm (at 100 cm/s). Three before-and-after velocities (7.5, 15, or 30 cm/s) were used. Participants (n = 13) pointed with their right arm at the felt location of the brush when it halted either 1 cm before or after the leap. These stops were 12 cm apart, but distances computed from localization responses were only 5.4, 6.5, and 7.5 cm for the three velocities, respectively (F[2,11] = 15.19, P = 0.001). Thus the leap resulted in compressive position shift, as described previously for sensory saltation, but modulated by motion velocity before the leap: the slower the motion, the greater the shift opposite to motion direction. No gap in stimulation was perceived. We propose that velocity extrapolation causes the position shift: extrapolated motion does not have enough time to bridge the real spatial gap and thus assigns a closer location to the skin on the opposite side of the gap. Copyright © 2016 the American Physiological Society.
Measurements by A LEAP-Based Virtual Glove for the Hand Rehabilitation
Cinque, Luigi; Polsinelli, Matteo; Spezialetti, Matteo
2018-01-01
Hand rehabilitation is fundamental after stroke or surgery. Traditional rehabilitation requires a therapist and implies high costs, stress for the patient, and subjective evaluation of the therapy effectiveness. Alternative approaches, based on mechanical and tracking-based gloves, can be really effective when used in virtual reality (VR) environments. Mechanical devices are often expensive, cumbersome, patient specific and hand specific, while tracking-based devices are not affected by these limitations but, especially if based on a single tracking sensor, could suffer from occlusions. In this paper, the implementation of a multi-sensor approach, the Virtual Glove (VG), based on the simultaneous use of two orthogonal LEAP motion controllers, is described. The VG is calibrated and static positioning measurements are compared with those collected with an accurate spatial positioning system. The positioning error is lower than 6 mm in a cylindrical region of interest of radius 10 cm and height 21 cm. Real-time hand tracking measurements are also performed, analysed and reported. Hand tracking measurements show that VG operated in real-time (60 fps), reduced occlusions, and managed two LEAP sensors correctly, without any temporal and spatial discontinuity when skipping from one sensor to the other. A video demonstrating the good performance of VG is also collected and presented in the Supplementary Materials. Results are promising but further work must be done to allow the calculation of the forces exerted by each finger when constrained by mechanical tools (e.g., peg-boards) and for reducing occlusions when grasping these tools. Although the VG is proposed for rehabilitation purposes, it could also be used for tele-operation of tools and robots, and for other VR applications. PMID:29534448
Measurements by A LEAP-Based Virtual Glove for the Hand Rehabilitation.
Placidi, Giuseppe; Cinque, Luigi; Polsinelli, Matteo; Spezialetti, Matteo
2018-03-10
Hand rehabilitation is fundamental after stroke or surgery. Traditional rehabilitation requires a therapist and implies high costs, stress for the patient, and subjective evaluation of the therapy effectiveness. Alternative approaches, based on mechanical and tracking-based gloves, can be really effective when used in virtual reality (VR) environments. Mechanical devices are often expensive, cumbersome, patient specific and hand specific, while tracking-based devices are not affected by these limitations but, especially if based on a single tracking sensor, could suffer from occlusions. In this paper, the implementation of a multi-sensor approach, the Virtual Glove (VG), based on the simultaneous use of two orthogonal LEAP motion controllers, is described. The VG is calibrated and static positioning measurements are compared with those collected with an accurate spatial positioning system. The positioning error is lower than 6 mm in a cylindrical region of interest of radius 10 cm and height 21 cm. Real-time hand tracking measurements are also performed, analysed and reported. Hand tracking measurements show that VG operated in real-time (60 fps), reduced occlusions, and managed two LEAP sensors correctly, without any temporal and spatial discontinuity when skipping from one sensor to the other. A video demonstrating the good performance of VG is also collected and presented in the Supplementary Materials. Results are promising but further work must be done to allow the calculation of the forces exerted by each finger when constrained by mechanical tools (e.g., peg-boards) and for reducing occlusions when grasping these tools. Although the VG is proposed for rehabilitation purposes, it could also be used for tele-operation of tools and robots, and for other VR applications.
Chan, An-Wen; Fung, Kinwah; Tran, Jennifer M; Kitchen, Jessica; Austin, Peter C; Weinstock, Martin A; Rochon, Paula A
2016-10-01
Keratinocyte carcinoma (nonmelanoma skin cancer) accounts for substantial burden in terms of high incidence and health care costs but is excluded by most cancer registries in North America. Administrative health insurance claims databases offer an opportunity to identify these cancers using diagnosis and procedural codes submitted for reimbursement purposes. To apply recursive partitioning to derive and validate a claims-based algorithm for identifying keratinocyte carcinoma with high sensitivity and specificity. Retrospective study using population-based administrative databases linked to 602 371 pathology episodes from a community laboratory for adults residing in Ontario, Canada, from January 1, 1992, to December 31, 2009. The final analysis was completed in January 2016. We used recursive partitioning (classification trees) to derive an algorithm based on health insurance claims. The performance of the derived algorithm was compared with 5 prespecified algorithms and validated using an independent academic hospital clinic data set of 2082 patients seen in May and June 2011. Sensitivity, specificity, positive predictive value, and negative predictive value using the histopathological diagnosis as the criterion standard. We aimed to achieve maximal specificity, while maintaining greater than 80% sensitivity. Among 602 371 pathology episodes, 131 562 (21.8%) had a diagnosis of keratinocyte carcinoma. Our final derived algorithm outperformed the 5 simple prespecified algorithms and performed well in both community and hospital data sets in terms of sensitivity (82.6% and 84.9%, respectively), specificity (93.0% and 99.0%, respectively), positive predictive value (76.7% and 69.2%, respectively), and negative predictive value (95.0% and 99.6%, respectively). Algorithm performance did not vary substantially during the 18-year period. This algorithm offers a reliable mechanism for ascertaining keratinocyte carcinoma for epidemiological research in the absence of cancer registry data. Our findings also demonstrate the value of recursive partitioning in deriving valid claims-based algorithms.
Lung partitioning for x-ray CAD applications
NASA Astrophysics Data System (ADS)
Annangi, Pavan; Raja, Anand
2011-03-01
Partitioning the inside region of the lung into homogeneous regions is a crucial step in any computer-aided diagnosis (CAD) application based on chest X-rays. The ribs, air pockets, and clavicle occupy major space inside the lung as seen in the chest X-ray PA image, and segmenting the ribs and clavicle to partition the lung into homogeneous regions allows a CAD application to better classify abnormalities. In this paper we present two separate algorithms that segment the ribs and the clavicle bone in a completely automated way. The posterior ribs are segmented based on phase congruency features, and the clavicle is segmented using mean curvature features followed by a Radon transform. Both algorithms work on the premise that each of these anatomical structures presents inside the left and right lung within a specific orientation range. The search space for both algorithms is limited to the region inside the lung, which is obtained by an automated lung segmentation algorithm previously developed in our group. Both algorithms were tested on 100 images of normal patients and patients affected with pneumoconiosis.
Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning
NASA Technical Reports Server (NTRS)
Smelyanskiy, V. N.; Toussaint, U. V.; Timucin, D. A.
2002-01-01
We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with a slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^(-n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As a result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n^2). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102
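The two-phase scheme in this abstract can be made concrete. Below is a minimal Python sketch, not the authors' implementation: it seeds the partition with a backward depth taken here as the shortest distance to an accepting state (a quantity that language-equivalent states must share), then refines blocks by hashed transition signatures to a fixpoint; the dict-based DFA encoding and all names are assumptions for illustration.

```python
from collections import deque, defaultdict

def minimize(states, alphabet, delta, accepting):
    """Moore-style DFA minimization seeded with backward-depth blocks.

    states: ordered list of states; delta: dict (state, symbol) -> state
    (a total DFA is assumed). Backward depth = length of the shortest
    word leading to acceptance; equivalent states share it, so it is a
    safe coarse partition.
    """
    states = list(states)
    # Phase 1: backward depth via BFS over reversed transitions.
    rev = defaultdict(set)
    for (q, a), r in delta.items():
        rev[r].add(q)
    depth = {q: 0 for q in accepting}
    queue = deque(accepting)
    while queue:
        r = queue.popleft()
        for q in rev[r]:
            if q not in depth:
                depth[q] = depth[r] + 1
                queue.append(q)
    block = {q: depth.get(q, -1) for q in states}   # -1: never accepts

    # Phase 2: refine blocks by hashed transition signatures to a fixpoint.
    while True:
        sig = {q: (block[q], tuple(block[delta[(q, a)]] for a in alphabet))
               for q in states}
        renum = {}
        for q in states:
            renum.setdefault(sig[q], len(renum))
        new_block = {q: renum[sig[q]] for q in states}
        if new_block == block:
            return block    # block ids label the minimal DFA's states
        block = new_block
```

Because the depth seed already separates most inequivalent states, the signature refinement typically has little left to do, which is the intuition behind the near-linear running time claimed above.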
Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadim N.; von Toussaint, Udo V.; Timucin, Dogan A.; Clancy, Daniel (Technical Monitor)
2002-01-01
We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with a slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^(-n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As a result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.
NASA Astrophysics Data System (ADS)
Patil, Riya Raghuvir
Networks of communicating agents require distributed algorithms for a variety of tasks in the field of network analysis and control. For applications such as swarms of autonomous vehicles, ad hoc and wireless sensor networks, and military and civilian applications such as exploration and patrolling, a robust autonomous system that uses a distributed algorithm for self-partitioning can be significantly helpful. A single team of autonomous vehicles in a field may need to split into multiple teams, conducive to completing multiple control tasks. Moreover, because communicating agents are subject to changes, namely the addition or failure of an agent or link, a distributed or decentralized algorithm is preferable to having a central agent. A framework for studying the self-partitioning of such multi-agent systems with the most basic mobility model not only saves conception time but also gives a cost-effective prototype without compromising the physical realization of the proposed idea. In this thesis I present my work on the implementation of a flexible and distributed stochastic partitioning algorithm on the Lego(TM) Mindstorms NXT, on a graphical programming platform using National Instruments' LabVIEW(TM), forming a team of communicating agents via the NXT-Bee radio module. We single out mobility, communication, and self-partitioning as the core elements of the work. The goal is to randomly explore a precinct for reference sites. Agents that have discovered the reference sites announce their target acquisition; a network is then formed based upon the distance of each agent from the others, within which the self-partitioning begins to find an optimal partition. Further, to illustrate the work, an experimental test-bench of five Lego NXT robots is presented.
Quantum speedup of Monte Carlo methods.
Montanaro, Ashley
2015-09-08
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
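As a worked restatement of the claimed near-quadratic speedup (not a derivation), the sample costs for estimating a mean to additive error epsilon from a subroutine A with variance sigma^2 compare as follows; the tilde-O hides logarithmic factors.

```latex
% Classical Monte Carlo (Chebyshev-type bound) versus quantum mean
% estimation: number of uses of the subroutine A needed to estimate
% \mu = E[v(A)] to additive error \epsilon when Var[v(A)] = \sigma^2.
\[
  N_{\mathrm{classical}} = O\!\left(\frac{\sigma^{2}}{\epsilon^{2}}\right)
  \qquad\longrightarrow\qquad
  N_{\mathrm{quantum}} = \tilde{O}\!\left(\frac{\sigma}{\epsilon}\right),
\]
% a near-quadratic reduction, which is what accelerates the multi-stage
% Markov chain Monte Carlo estimators for partition functions.
```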
Quantum speedup of Monte Carlo methods
Montanaro, Ashley
2015-01-01
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079
Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob
2013-01-01
We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays and writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.
Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob
2013-01-01
We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes—neural connectivity maps of the brain—using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems—reads to parallel disk arrays and writes to solid-state storage—to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992
Elmetwaly, Shereef; Schlick, Tamar
2014-01-01
Graph representations have been widely used to analyze and design various economic, social, military, political, and biological networks. In systems biology, networks of cells and organs are useful for understanding disease and medical treatments and, in structural biology, structures of molecules can be described, including RNA structures. In our RNA-As-Graphs (RAG) framework, we represent RNA structures as tree graphs by translating unpaired regions into vertices and helices into edges. Here we explore the modularity of RNA structures by applying graph partitioning known in graph theory to divide an RNA graph into subgraphs. To our knowledge, this is the first application of graph partitioning to biology, and the results suggest a systematic approach for modular design in general. The graph partitioning algorithms utilize mathematical properties of the Laplacian eigenvector (µ2) corresponding to the second eigenvalue (λ2) associated with the topology matrix defining the graph: λ2 describes the overall topology, and the sum of µ2's components is zero. The three types of algorithms, termed median, sign, and gap cuts, divide a graph by determining the nodes of the cut by the median, zero crossing, and largest gap of µ2's components, respectively. We apply these algorithms to 45 graphs corresponding to all solved RNA structures up through 11 vertices (∼220 nucleotides). While we observe that the median cut divides a graph into two similar-sized subgraphs, the sign and gap cuts partition a graph into two topologically distinct subgraphs. We find that the gap cut produces the best biologically-relevant partitioning for RNA because it divides RNAs at less stable connections while maintaining junctions intact. The iterative gap cuts suggest basic modules and assembly protocols to design large RNA structures. Our graph substructuring thus suggests a systematic approach to explore the modularity of biological networks. In our applications to RNA structures, subgraphs also suggest design strategies for novel RNA motifs. PMID:25188578
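A small sketch of the spectral machinery behind the three cuts, assuming the RNA tree graph is supplied as a symmetric adjacency matrix; this is an illustrative reconstruction, not the RAG code.

```python
import numpy as np

def fiedler_cuts(adjacency):
    """Median, sign, and gap cuts from the Fiedler vector (mu_2).

    adjacency: symmetric (n, n) array, weights allowed. Returns three
    boolean masks, one per cut rule described in the abstract.
    """
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    vals, vecs = np.linalg.eigh(L)          # eigenvalues ascending
    mu2 = vecs[:, 1]                        # eigenvector for lambda_2
    order = np.argsort(mu2)

    median_cut = mu2 > np.median(mu2)       # two similar-sized halves
    sign_cut = mu2 > 0.0                    # split at the zero crossing

    gaps = np.diff(mu2[order])              # largest gap between sorted
    split = np.argmax(gaps) + 1             # consecutive components
    gap_cut = np.zeros(len(mu2), dtype=bool)
    gap_cut[order[split:]] = True
    return median_cut, sign_cut, gap_cut
```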
NASA Astrophysics Data System (ADS)
Liu, Yan; Deng, Honggui; Ren, Shuang; Tang, Chengying; Qian, Xuewen
2018-01-01
We propose an efficient partial transmit sequence technique based on a genetic algorithm and a peak-value optimization algorithm (GAPOA) to reduce the high peak-to-average power ratio (PAPR) in visible light communication systems based on orthogonal frequency division multiplexing (VLC-OFDM). After analyzing the pros and cons of the hill-climbing algorithm, we propose the POA, with excellent local search ability, to further process the signals whose PAPR is still over the threshold after being processed by the genetic algorithm (GA). To verify the effectiveness of the proposed technique and algorithm, we evaluate the PAPR performance and the bit error rate (BER) performance and compare them with the partial transmit sequence (PTS) technique based on GA (GA-PTS), the PTS technique based on genetic and hill-climbing algorithms (GH-PTS), and PTS based on the shuffled frog leaping algorithm and hill-climbing algorithm (SFLAHC-PTS). The results show that our technique and algorithm have not only better PAPR performance but also lower computational complexity and BER than the GA-PTS, GH-PTS, and SFLAHC-PTS techniques.
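For concreteness, the PTS mechanics that GA and POA search over can be sketched as follows; the exhaustive loop below stands in for the heuristic search, and the interleaved subblock partition and phase set are common choices assumed here for illustration.

```python
import numpy as np
from itertools import product

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def pts_exhaustive(X, n_blocks=4, phases=(1, -1, 1j, -1j)):
    """Partial transmit sequences with exhaustive phase search.

    X: frequency-domain OFDM symbol (complex, length N). Subcarriers
    are split into interleaved subblocks; each subblock's IFFT is
    weighted by a phase factor and the lowest-PAPR combination is kept.
    """
    N = len(X)
    parts = []
    for v in range(n_blocks):
        Xv = np.zeros(N, dtype=complex)
        Xv[v::n_blocks] = X[v::n_blocks]    # interleaved partition
        parts.append(np.fft.ifft(Xv))
    best_b, best_x, best_p = None, None, np.inf
    for b in product(phases, repeat=n_blocks):
        x = sum(bv * pv for bv, pv in zip(b, parts))
        p = papr_db(x)
        if p < best_p:
            best_b, best_x, best_p = b, x, p
    return best_b, best_x, best_p
```

A heuristic such as GA simply replaces the `product(...)` enumeration with a population-based search over the same phase vectors, which is where the complexity savings reported above come from.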
A physics-motivated Centroidal Voronoi Particle domain decomposition method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de
2017-04-15
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
A physics-motivated Centroidal Voronoi Particle domain decomposition method
NASA Astrophysics Data System (ADS)
Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-04-01
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
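The Lloyd iteration at the heart of the CVT construction can be sketched on a point sample of the domain; this k-means-style discretization is illustrative only and omits the Voronoi Particle relaxation that the paper couples to it.

```python
import numpy as np

def lloyd_cvt(points, k, iters=100, seed=0):
    """Lloyd sweeps toward a centroidal Voronoi tessellation.

    points: (n, d) sample of the domain (e.g., element centroids).
    Each sweep assigns points to the nearest generator and moves every
    generator to the centroid of its Voronoi cell, monotonically
    decreasing the CVT energy.
    """
    rng = np.random.default_rng(seed)
    gen = points[rng.choice(len(points), size=k, replace=False)]
    label = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - gen[None, :, :], axis=2)
        label = d.argmin(axis=1)            # Voronoi assignment
        new_gen = np.array([points[label == j].mean(axis=0)
                            if np.any(label == j) else gen[j]
                            for j in range(k)])
        if np.allclose(new_gen, gen):       # converged to a CVT
            break
        gen = new_gen
    return gen, label
```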
Applying graph partitioning methods in measurement-based dynamic load balancing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhatele, Abhinav; Fourestier, Sebastien; Menon, Harshitha
Load imbalance leads to an increasing waste of resources as an application is scaled to more and more processors. Achieving the best parallel efficiency for a program requires optimal load balancing, which is an NP-hard problem. However, finding near-optimal solutions to this problem for complex computational science and engineering applications is becoming increasingly important. Charm++, a migratable objects based programming model, provides a measurement-based dynamic load balancing framework. This framework instruments and then migrates over-decomposed objects to balance computational load and communication at runtime. This paper explores the use of graph partitioning algorithms, traditionally used for partitioning physical domains/meshes, for measurement-based dynamic load balancing of parallel applications. In particular, we present repartitioning methods developed in a graph partitioning toolbox called SCOTCH that consider the previous mapping to minimize migration costs. We also discuss a new imbalance reduction algorithm for graphs with irregular load distributions. We compare several load balancing algorithms using microbenchmarks on Intrepid and Ranger and evaluate the effect of communication, number of cores, and number of objects on the benefit achieved from load balancing. New algorithms developed in SCOTCH lead to better performance compared to the METIS partitioners for several cases, both in terms of application execution time and the number of objects migrated.
NASA Astrophysics Data System (ADS)
Kumar, S.; Kaushal, D. R.; Gosain, A. K.
2017-12-01
Urban hydrology will have an increasing role to play in the sustainability of human settlements. Expansion of urban areas brings significant changes in the physical characteristics of land use. Problems with the administration of urban flooding have their roots in the concentration of population within a relatively small area. As watersheds are urbanized, infiltration decreases and the pattern of surface runoff changes, generating high peak flows and large runoff volumes from urban areas. Conceptual rainfall-runoff models have become a foremost tool for predicting surface runoff and flood forecasting. Manual calibration is often time consuming and tedious because of the subjectivity involved, which makes an automatic approach preferable. The calibration of parameters usually includes numerous criteria for evaluating performance with respect to the observed data, and the derivation of an objective function associated with the calibration of model parameters is quite challenging. Studies dealing with optimization methods have encouraged the adoption of evolution-based optimization algorithms. In this paper, a systematic comparison of two evolutionary approaches to multi-objective optimization, namely the shuffled frog leaping algorithm (SFLA) and genetic algorithms (GA), is presented. SFLA is a cooperative search metaphor inspired by natural memetics and based on a population, while GA is based on the principle of survival of the fittest and natural evolution. SFLA and GA have been employed for optimizing the major parameters, i.e., width, imperviousness, Manning's coefficient, and depression storage, for the highly urbanized catchment of Delhi, India. The study summarizes the auto-tuning of the widely used storm water management model (SWMM) by internal coupling of SWMM with SFLA and GA separately. The values of statistical parameters such as the Nash-Sutcliffe efficiency (NSE) and percent bias (PBIAS) were found to lie within acceptable limits, indicating reasonably good model performance. Overall, this study proved promising for assessing risk in urban drainage systems and should prove useful for improving the integrity and reliability of urban systems and for guiding inundation preparedness. Keywords: hydrologic model, SWMM, urbanization, SFLA, GA.
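A minimal sketch of the SFLA update loop may help make the comparison concrete; the objective, bounds, and the SWMM coupling are assumptions for illustration (e.g., f could evaluate 1 - NSE for a candidate parameter vector), and the published SFLA includes an extra leap toward the global best that is omitted here.

```python
import numpy as np

def sfla_minimize(f, lo, hi, n_frogs=30, n_memeplexes=5, sweeps=50, seed=0):
    """Minimal shuffled frog leaping sketch for parameter calibration.

    f: objective to minimize for a candidate (width, imperviousness,
    Manning's n, depression storage) vector; lo, hi: search-box bounds.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    frogs = rng.uniform(lo, hi, size=(n_frogs, len(lo)))
    for _ in range(sweeps):
        fit = np.array([f(x) for x in frogs])
        order = np.argsort(fit)                 # best fitness first
        frogs, fit = frogs[order], fit[order]
        for m in range(n_memeplexes):           # shuffle into memeplexes
            idx = np.arange(m, n_frogs, n_memeplexes)
            best, worst = idx[0], idx[-1]
            leap = rng.random() * (frogs[best] - frogs[worst])
            trial = np.clip(frogs[worst] + leap, lo, hi)
            if f(trial) < fit[worst]:
                frogs[worst] = trial            # leap toward memeplex best
            else:
                frogs[worst] = rng.uniform(lo, hi)  # censor: random frog
    fit = np.array([f(x) for x in frogs])
    return frogs[fit.argmin()], fit.min()
```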
OpenRBC: Redefining the Frontier of Red Blood Cell Simulations at Protein Resolution
NASA Astrophysics Data System (ADS)
Tang, Yu-Hang; Lu, Lu; Li, He; Grinberg, Leopold; Sachdeva, Vipin; Evangelinos, Constantinos; Karniadakis, George
We present a from-scratch development of OpenRBC, a coarse-grained molecular dynamics code, which is capable of performing an unprecedented in silico experiment: simulating an entire mammalian red blood cell lipid bilayer and cytoskeleton, modeled by 4 million mesoscopic particles, on a single shared-memory node. To achieve this, we invented an adaptive spatial searching algorithm to accelerate the computation of short-range pairwise interactions in an extremely sparse 3D space. The algorithm is based on a Voronoi partitioning of the point cloud of coarse-grained particles, and is continuously updated over the course of the simulation. The algorithm enables the construction of a lattice-free cell list, i.e. the key spatial searching data structure in our code, in O(N) time and space, with cells whose position and shape adapt automatically to the local density and curvature. The code implements NUMA/NUCA-aware OpenMP parallelization and achieves perfect scaling with up to hundreds of hardware threads. The code outperforms a legacy solver by more than 8 times in time-to-solution and more than 20 times in problem size, thus providing a new venue for probing the cytomechanics of red blood cells. This work was supported by the Department of Energy (DOE) Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4). YHT acknowledges partial financial support from an IBM Ph.D. Scholarship Award.
Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng
2016-01-01
With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051
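The sample-point optimization can be sketched as a standard simulated annealing loop; the coverage-style cost below is a generic stand-in for the paper's prediction-accuracy criterion, and all names are illustrative.

```python
import numpy as np

def sa_select(coords, m, sweeps=20000, t0=1.0, cool=0.9995, seed=0):
    """Simulated-annealing layout optimization for monitoring points.

    coords: (n, 2) candidate sample locations. Keeps m of them so that
    the mean distance from any candidate to its nearest kept point is
    small, a simple spatial-coverage surrogate for the prediction
    criterion used in the paper.
    """
    rng = np.random.default_rng(seed)
    n = len(coords)
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
    kept = rng.choice(n, size=m, replace=False)

    def cost(sel):
        return d[:, sel].min(axis=1).mean()

    c, t = cost(kept), t0
    for _ in range(sweeps):
        out = rng.integers(m)               # propose swapping one point
        cand = kept.copy()
        cand[out] = rng.integers(n)
        if cand[out] in kept:               # avoid duplicate selections
            continue
        c2 = cost(cand)
        if c2 < c or rng.random() < np.exp((c - c2) / t):
            kept, c = cand, c2              # Metropolis acceptance
        t *= cool                           # geometric cooling schedule
    return kept, c
```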
Optimum and Heuristic Algorithms for Finite State Machine Decomposition and Partitioning
Ashar, Pranav; Devadas, Srinivas; Newton, A. Richard
1989-09-01
A finite state machine is represented by its State Transition Graph.
NASA Technical Reports Server (NTRS)
Schmidt, Phillip; Garg, Sanjay
1991-01-01
A framework for a decentralized hierarchical controller partitioning structure is developed. This structure allows for the design of separate airframe and propulsion controllers which, when assembled, will meet the overall design criterion for the integrated airframe/propulsion system. An algorithm based on parameter optimization of the state-space representation for the subsystem controllers is described. The algorithm is currently being applied to an integrated flight propulsion control design example.
A hybrid MLP-CNN classifier for very fine resolution remotely sensed image classification
NASA Astrophysics Data System (ADS)
Zhang, Ce; Pan, Xin; Li, Huapeng; Gardiner, Andy; Sargent, Isabel; Hare, Jonathon; Atkinson, Peter M.
2018-06-01
The contextual-based convolutional neural network (CNN) with deep architecture and pixel-based multilayer perceptron (MLP) with shallow structure are well-recognized neural network algorithms, representing the state-of-the-art deep learning method and the classical non-parametric machine learning approach, respectively. The two algorithms, which have very different behaviours, were integrated in a concise and effective way using a rule-based decision fusion approach for the classification of very fine spatial resolution (VFSR) remotely sensed imagery. The decision fusion rules, designed primarily based on the classification confidence of the CNN, reflect the generally complementary patterns of the individual classifiers. In consequence, the proposed ensemble classifier MLP-CNN harvests the complementary results acquired from the CNN based on deep spatial feature representation and from the MLP based on spectral discrimination. Meanwhile, limitations of the CNN due to the adoption of convolutional filters such as the uncertainty in object boundary partition and loss of useful fine spatial resolution detail were compensated. The effectiveness of the ensemble MLP-CNN classifier was tested in both urban and rural areas using aerial photography together with an additional satellite sensor dataset. The MLP-CNN classifier achieved promising performance, consistently outperforming the pixel-based MLP, spectral and textural-based MLP, and the contextual-based CNN in terms of classification accuracy. This research paves the way to effectively address the complicated problem of VFSR image classification.
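The flavor of the rule-based decision fusion can be sketched with a single confidence gate; the paper's actual rule set is richer, and the threshold tau here is an assumed free parameter.

```python
import numpy as np

def fuse(cnn_proba, mlp_proba, tau=0.9):
    """Rule-based decision fusion of a CNN and an MLP classifier.

    cnn_proba, mlp_proba: (n, classes) per-pixel class probabilities.
    Trust the contextual CNN where its top probability is high; fall
    back to the spectral MLP elsewhere (e.g., near object boundaries,
    where convolutional filters blur fine detail).
    """
    use_cnn = cnn_proba.max(axis=1) >= tau
    labels = np.where(use_cnn,
                      cnn_proba.argmax(axis=1),
                      mlp_proba.argmax(axis=1))
    return labels, use_cnn
```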
Chen, Wenbin; Hendrix, William; Samatova, Nagiza F
2017-12-01
The problem of aligning multiple metabolic pathways is one of the most challenging problems in computational biology. A metabolic pathway consists of three types of entities: reactions, compounds, and enzymes. Based on similarities between enzymes, Tohsato et al. gave an algorithm for aligning multiple metabolic pathways. However, their algorithm neglects the similarities among reactions and compounds as well as the pathway topology. Designing algorithms for the alignment of multiple metabolic pathways based on the similarities of reactions, compounds, and enzymes is a difficult computational problem. In this article, we propose an algorithm for the problem of aligning multiple metabolic pathways based on the similarities among reactions, compounds, enzymes, and pathway topology. First, we compute a weight between each pair of like entities in different input pathways based on the entities' similarity score and topological structure, using Ay et al.'s methods. We then construct a weighted k-partite graph for the reactions, compounds, and enzymes. We extract a mapping between these entities by solving the maximum-weighted k-partite matching problem with a novel heuristic algorithm. By analyzing the alignment results of multiple pathways in different organisms, we show that the alignments found by our algorithm correctly identify common subnetworks among multiple pathways.
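A simple greedy pass conveys the matching step; it is a stand-in for the paper's novel heuristic, applied per entity type (reactions, compounds, enzymes) with the precomputed weights, and the names are hypothetical.

```python
def greedy_match(weights):
    """Greedy heuristic for maximum-weighted matching.

    weights: dict mapping (entity_a, entity_b) -> similarity score for
    one entity type across two pathways. Pairs are taken in decreasing
    weight, keeping each entity at most once.
    """
    used_a, used_b, matching = set(), set(), {}
    for (a, b), w in sorted(weights.items(), key=lambda kv: -kv[1]):
        if a not in used_a and b not in used_b:
            matching[a] = b
            used_a.add(a)
            used_b.add(b)
    return matching

# Example: two reactions in pathway 1 matched against two in pathway 2.
print(greedy_match({("r1", "s1"): 0.9, ("r1", "s2"): 0.8,
                    ("r2", "s1"): 0.7, ("r2", "s2"): 0.2}))
```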
NASA Astrophysics Data System (ADS)
Schratz, Patrick; Herrmann, Tobias; Brenning, Alexander
2017-04-01
Computational and statistical prediction methods such as the support vector machine have gained popularity in remote-sensing applications in recent years and are often compared to more traditional approaches like maximum-likelihood classification. However, the accuracy assessment of such predictive models in a spatial context needs to account for the presence of spatial autocorrelation in geospatial data by using spatial cross-validation and bootstrap strategies instead of their now more widely used non-spatial equivalents. The R package sperrorest by A. Brenning [IEEE International Geoscience and Remote Sensing Symposium, 1, 374 (2012)] provides a generic interface for performing (spatial) cross-validation of any statistical or machine-learning technique available in R. Since spatial statistical models as well as flexible machine-learning algorithms can be computationally expensive, parallel computing strategies are required to perform cross-validation efficiently. The most recent major release of sperrorest therefore comes with two new features (aside from improved documentation): The first one is the parallelized version of sperrorest(), parsperrorest(). This function features two parallel modes to greatly speed up cross-validation runs. Both parallel modes are platform independent and provide progress information. par.mode = 1 relies on the pbapply package and calls internally (depending on the platform) parallel::mclapply() or parallel::parApply() in the background. While forking is used on Unix systems, Windows systems use a cluster approach for parallel execution. par.mode = 2 uses the foreach package to perform parallelization. This method uses a different way of cluster parallelization than the parallel package does. In summary, the robustness of parsperrorest() is increased with the implementation of two independent parallel modes. A new way of partitioning the data in sperrorest is provided by partition.factor.cv(). This function allows the user to perform cross-validation at the level of some grouping structure. As an example, in remote sensing of agricultural land uses, pixels from the same field contain nearly identical information and will thus be jointly placed in either the test set or the training set. Other spatial resampling strategies are already available and can be extended by the user.
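Since sperrorest is an R package, the following Python sketch only mirrors the grouping idea behind partition.factor.cv(): whole groups (e.g., all pixels of one field) go into the same fold, so spatially near-identical observations never straddle the train/test split. The function name and interface are illustrative, not the package API.

```python
import numpy as np

def group_kfold(groups, k=5, seed=0):
    """Grouped cross-validation partitioning.

    groups: array of group labels, one per observation. Returns a list
    of (train_indices, test_indices) pairs; every group is assigned to
    exactly one test fold as a whole.
    """
    rng = np.random.default_rng(seed)
    ids = np.unique(groups)
    rng.shuffle(ids)
    fold_of_group = {g: i % k for i, g in enumerate(ids)}
    folds = np.array([fold_of_group[g] for g in groups])
    return [(np.where(folds != j)[0], np.where(folds == j)[0])
            for j in range(k)]
```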
High performance genetic algorithm for VLSI circuit partitioning
NASA Astrophysics Data System (ADS)
Dinu, Simona
2016-12-01
Partitioning is one of the biggest challenges in computer-aided design for VLSI circuits (very large-scale integrated circuits). This work addresses the min-cut balanced circuit partitioning problem: dividing the graph that models the circuit into k almost equal-sized sub-graphs while minimizing the number of edges cut, i.e., minimizing the number of edges connecting the sub-graphs. The problem may be formulated as a combinatorial optimization problem; it is known to be NP-hard, and thus it is important to design an efficient heuristic algorithm to solve it. The approach proposed in this study is a parallel implementation of a genetic algorithm, namely an island model. The information exchange between the evolving subpopulations is modeled using a fuzzy controller, which determines an optimal balance between exploration and exploitation of the solution space. The results of simulations show that the proposed algorithm outperforms the standard sequential genetic algorithm both in terms of solution quality and convergence speed. As a direction for future study, this research can be further extended to incorporate local search operators which should include problem-specific knowledge. In addition, the adaptive configuration of mutation and crossover rates is another avenue for future research.
Operating experience with LEAP from the perspective of the computing applications analyst
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ford, W.E. III; Horwedel, J.E.; McAdoo, J.W.
1981-05-01
The Long-Term Energy Analysis Program (LEAP), which was used for the energy price-quantity projections in the 1978 Annual Report to Congress (ARC '78) and used in an ORNL research program to develop and demonstrate a procedure for evaluating energy-economic modeling computer codes and the important results derived therefrom, is discussed. The LEAP system used in the ORNL research, the mechanics of executing LEAP, and the personnel skills required to execute the system are described. In addition, a LEAP sample problem, subroutine hierarchical flowcharts, and input tables for the ARC '78 energy-economic model are included. Results of a study to test the capability of the LEAP system used in the ORNL research to reproduce the ARC '78 results credited to LEAP are presented.
Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data.
Veganzones, Miguel A; Simoes, Miguel; Licciardi, Giorgio; Yokoya, Naoto; Bioucas-Dias, Jose M; Chanussot, Jocelyn
2016-01-01
Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low dimensional subspace/manifold. This has been recently exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods decreases, mainly because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real world HSIs are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, i.e., lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough, such that the problem is not ill-posed anymore. We propose two alternative approaches to define the hyperspectral super-resolution through local dictionary learning using endmember induction algorithms. We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.
Predicting long-range transport: a systematic evaluation of two multimedia transport models.
Bennett, D H; Scheringer, M; McKone, T E; Hungerbühler, K
2001-03-15
The United Nations Environment Program has recently developed criteria to identify and restrict chemicals with a potential for persistence and long-range transport (persistent organic pollutants or POPs). There are many stakeholders involved, and the issues are not only scientific but also include social, economic, and political factors. This work focuses on one aspect of the POPs debate, the criteria for determining the potential for long-range transport (LRT). Our goal is to determine if current models are reliable enough to support decisions that classify a chemical based on the LRT potential. We examine the robustness of two multimedia fate models for determining the relative ranking and absolute spatial range of various chemicals in the environment. We also consider the effect of parameter uncertainties and the model uncertainty associated with the selection of an algorithm for gas-particle partitioning on the model results. Given the same chemical properties, both models give virtually the same ranking. However, when chemical parameter uncertainties and model uncertainties such as particle partitioning are considered, the spatial range distributions obtained for the individual chemicals overlap, preventing a distinct rank order. The absolute values obtained for the predicted spatial range or travel distance differ significantly between the two models for the uncertainties evaluated. We find that to evaluate a chemical when large and unresolved uncertainties exist, it is more informative to use two or more models and include multiple types of uncertainty. Model differences and uncertainties must be explicitly confronted to determine how the limitations of scientific knowledge impact predictions in the decision-making process.
Leap motion evaluation for assessment of upper limb motor skills in Parkinson's disease.
Butt, A H; Rovini, E; Dolciotti, C; Bongioanni, P; De Petris, G; Cavallo, F
2017-07-01
The main goal of this study is to investigate the potential of the Leap Motion Controller (LMC) for the objective assessment of motor dysfunctioning in patients with Parkinson's disease (PwPD). The most relevant clinical signs in Parkinson's Disease (PD), such as slowness of movements, frequency variation, amplitude variation, and speed, were extracted from the recorded LMC data. Data were clinically quantified using the LMC software development kit (SDK). In this study, 16 PwPD subjects and 12 control healthy subjects were involved. A neurologist assessed the subjects during the task execution, assigning them a score according to the MDS/UPDRS-Section III items. Features of motor performance from both subject groups (patients and healthy controls) were extracted with dedicated algorithms. Furthermore, to find out the significance of such features from the clinical point of view, machine learning based methods were used. Overall, our findings showed the moderate potential of LMC to extract the motor performance of PwPD.
Combinatorial approximation algorithms for MAXCUT using random walks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seshadhri, Comandur; Kale, Satyen
We give the first combinatorial approximation algorithm for MaxCut that beats the trivial 0.5 factor by a constant. The main partitioning procedure is very intuitive, natural, and easily described. It essentially performs a number of random walks and aggregates the information to provide the partition. We can control the running time to get an approximation factor-running time tradeoff. We show that for any constant b > 1.5, there is an Õ(n^b) algorithm that outputs a (0.5 + δ)-approximation for MaxCut, where δ = δ(b) is some positive constant. One of the components of our algorithm is a weak local graph partitioning procedure that may be of independent interest. Given a starting vertex i and a conductance parameter φ, unless a random walk of length ℓ = O(log n) starting from i mixes rapidly (in terms of φ and ℓ), we can find a cut of conductance at most φ close to the vertex. The work done per vertex found in the cut is sublinear in n.
Semi-implicit time integration of atmospheric flows with characteristic-based flux partitioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Debojyoti; Constantinescu, Emil M.
2016-06-23
This paper presents a characteristic-based flux partitioning for the semi-implicit time integration of atmospheric flows. Nonhydrostatic models require the solution of the compressible Euler equations. The acoustic time scale is significantly faster than the advective scale, yet it is typically not relevant to atmospheric and weather phenomena. The acoustic and advective components of the hyperbolic flux are separated in the characteristic space. High-order, conservative additive Runge-Kutta methods are applied to the partitioned equations so that the acoustic component is integrated in time implicitly with an unconditionally stable method, while the advective component is integrated explicitly. The time step of the overall algorithm is thus determined by the advective scale. Benchmark flow problems are used to demonstrate the accuracy, stability, and convergence of the proposed algorithm. The computational cost of the partitioned semi-implicit approach is compared with that of explicit time integration.
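Schematically, the partitioned integration can be written as an additive (IMEX) Runge-Kutta step; the notation below is generic and not the paper's exact formulation.

```latex
% Split the hyperbolic flux into a nonstiff advective part and a stiff
% acoustic part, u_t + \nabla\cdot F(u) = 0 with
% F = F_{adv} + F_{ac}, and advance each stage of an additive
% Runge-Kutta method with the advective part explicit (a_{ij}) and the
% acoustic part implicit (\tilde{a}_{ij}):
\[
  U^{(i)} = u^n
  - \Delta t \sum_{j<i} a_{ij}\,\nabla\cdot F_{\mathrm{adv}}\!\left(U^{(j)}\right)
  - \Delta t \sum_{j\le i} \tilde{a}_{ij}\,\nabla\cdot F_{\mathrm{ac}}\!\left(U^{(j)}\right).
\]
% The stable time step is then set by the advective CFL condition
% rather than by the much faster acoustic scale.
```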
Competitive learning with pairwise constraints.
Covões, Thiago F; Hruschka, Eduardo R; Ghosh, Joydeep
2013-01-01
Constrained clustering has been an active research topic since the last decade. Most studies focus on batch-mode algorithms. This brief introduces two algorithms for on-line constrained learning, named on-line linear constrained vector quantization error (O-LCVQE) and constrained rival penalized competitive learning (C-RPCL). The former is a variant of the LCVQE algorithm for on-line settings, whereas the latter is an adaptation of the (on-line) RPCL algorithm to deal with constrained clustering. The accuracy results, in terms of the normalized mutual information (NMI), from experiments with nine datasets show that the partitions induced by O-LCVQE are competitive with those found by the (batch-mode) LCVQE. Compared with this formidable baseline algorithm, it is surprising that C-RPCL can provide better partitions (in terms of the NMI) for most of the datasets. Also, experiments on a large dataset show that on-line algorithms for constrained clustering can significantly reduce the computational time.
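The on-line update underlying RPCL-style methods is compact enough to sketch; C-RPCL adds must-link/cannot-link handling on top of this basic rule, which is omitted here, and the learning rates are illustrative.

```python
import numpy as np

def rpcl_step(x, centroids, lr=0.05, delr=0.005):
    """One rival-penalized competitive learning update (on-line).

    The winning centroid moves toward sample x, while its closest
    rival is pushed away with a much smaller de-learning rate, which
    discourages redundant clusters.
    """
    d = np.linalg.norm(centroids - x, axis=1)
    win, rival = np.argsort(d)[:2]
    centroids[win] += lr * (x - centroids[win])
    centroids[rival] -= delr * (x - centroids[rival])
    return centroids
```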
Multi-Agent Graph Patrolling and Partitioning
NASA Astrophysics Data System (ADS)
Elor, Y.; Bruckstein, A. M.
2012-12-01
We introduce a novel multi-agent patrolling algorithm inspired by the behavior of gas-filled balloons. Ant-like agents of very low capability are considered, with the task of patrolling an unknown area modeled as a graph. While executing the proposed algorithm, the agents dynamically partition the graph between them using simple local interactions, every agent assuming responsibility for patrolling its subgraph. Balanced graph partition is an emergent behavior due to the local interactions between the agents in the swarm. Extensive simulations on various graphs (environments) showed that the average time to reach a balanced partition is linear in the graph size. The simulations yielded a convincing argument for conjecturing that if the graph being patrolled contains a balanced partition, the agents will find it; however, we could not prove this. Nevertheless, we have proved that if a balanced partition is reached, the maximum time lag between two successive visits to any vertex using the proposed strategy is at most twice the optimal, so the patrol quality is at least half the optimal. In the case of weighted graphs the patrol quality is at least (1/2)(l_min/l_max) of the optimal, where l_max (l_min) is the length of the longest (shortest) edge in the graph.
NASA Astrophysics Data System (ADS)
Ramazani, Saba; Jackson, Delvin L.; Selmic, Rastko R.
2013-05-01
In search and surveillance operations, deploying a team of mobile agents provides a robust solution that has multiple advantages over using a single agent in efficiency and in minimizing exploration time. This paper addresses the challenge of identifying a target in a given environment when using a team of mobile agents by proposing a novel method of mapping and movement of agent teams in a cooperative manner. The approach consists of two parts. First, the region is partitioned into a hexagonal beehive structure in order to provide equidistant movements in every direction and to allow for more natural and flexible environment mapping. Additionally, in search environments that are partitioned into hexagons, mobile agents have an efficient travel path while performing searches due to this partitioning approach. Second, we use a team of mobile agents that move in a cooperative manner and utilize the Tabu Random algorithm to search for the target. Due to the ever-increasing use of robotics and Unmanned Aerial Vehicle (UAV) platforms, the field of cooperative multi-agent search has recently developed many applications that would benefit from the use of the approach presented in this work, including search and rescue operations, surveillance, data collection, and border patrol. In this paper, the increased efficiency of the Tabu Random Search algorithm in combination with hexagonal partitioning is simulated and analyzed, and the advantages of this approach are presented and discussed.
Revisiting the choice of the driving temperature for eddy covariance CO2 flux partitioning
Wohlfahrt, Georg; Galvagno, Marta
2017-01-01
So-called CO2 flux partitioning algorithms are widely used to partition the net ecosystem CO2 exchange into the two component fluxes, gross primary productivity and ecosystem respiration. Common CO2 flux partitioning algorithms conceptualize ecosystem respiration to originate from a single source, requiring the choice of a corresponding driving temperature. Using a conceptual dual-source respiration model, consisting of an above- and a below-ground respiration source each driven by a corresponding temperature, we demonstrate that the typical phase shift between air and soil temperature gives rise to a hysteresis relationship between ecosystem respiration and temperature. The hysteresis proceeds in a clockwise fashion if soil temperature is used to drive ecosystem respiration, while a counter-clockwise response is observed when ecosystem respiration is related to air temperature. As a consequence, nighttime ecosystem respiration is smaller than daytime ecosystem respiration when referenced to soil temperature, while the reverse is true for air temperature. We confirm these qualitative modelling results using measurements of day and night ecosystem respiration made with opaque chambers in a short-statured mountain grassland. Inferring daytime from nighttime ecosystem respiration or vice versa, as attempted by CO2 flux partitioning algorithms, using a single-source respiration model is thus an oversimplification resulting in biased estimates of ecosystem respiration. We discuss the likely magnitude of the bias, options for minimizing it and conclude by emphasizing that the systematic uncertainty of gross primary productivity and ecosystem respiration inferred through CO2 flux partitioning needs to be better quantified and reported. PMID:28439145
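The conceptual dual-source model can be written down directly; the Q10 form below is one common choice of exponential temperature response, used here only to make the hysteresis argument concrete.

```latex
% Dual-source ecosystem respiration: an above-ground source driven by
% air temperature and a below-ground source driven by soil temperature,
% each with a Q10-type response around a reference temperature T_ref:
\[
  R_{\mathrm{eco}}(t)
  = R_{a,\mathrm{ref}}\, Q_{10,a}^{\,(T_{\mathrm{air}}(t) - T_{\mathrm{ref}})/10}
  + R_{s,\mathrm{ref}}\, Q_{10,s}^{\,(T_{\mathrm{soil}}(t) - T_{\mathrm{ref}})/10}.
\]
% Because T_soil lags T_air over the diurnal cycle, plotting R_eco
% against either single temperature traces a loop (clockwise for
% T_soil, counter-clockwise for T_air) rather than the single-valued
% curve that one-source flux partitioning algorithms assume.
```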
An Element-Based Concurrent Partitioner for Unstructured Finite Element Meshes
NASA Technical Reports Server (NTRS)
Ding, Hong Q.; Ferraro, Robert D.
1996-01-01
A concurrent partitioner for partitioning unstructured finite element meshes on distributed memory architectures is developed. The partitioner uses an element-based partitioning strategy. Its main advantage over the more conventional node-based partitioning strategy is its modular programming approach to the development of parallel applications. The partitioner first partitions element centroids using a recursive inertial bisection algorithm. Elements and nodes then migrate according to the partitioned centroids, using a data request communication template for unpredictable incoming messages. Our scalable implementation is contrasted to a non-scalable implementation which is a straightforward parallelization of a sequential partitioner.
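The recursive inertial bisection of element centroids reduces to a short recursion; this sketch ignores the concurrent migration machinery and assumes centroids are given as an (n, d) array.

```python
import numpy as np

def inertial_bisect(centroids, depth):
    """Recursive inertial bisection into 2**depth parts.

    At each level, project the points on the principal axis of inertia
    (eigenvector of the largest covariance eigenvalue) and split at the
    median projection, then recurse on each half.
    """
    def rec(ids, d):
        if d == 0:
            return [ids]
        pts = centroids[ids] - centroids[ids].mean(axis=0)
        _, vecs = np.linalg.eigh(pts.T @ pts)   # ascending eigenvalues
        proj = pts @ vecs[:, -1]                # principal-axis coords
        half = np.argsort(proj)
        mid = len(ids) // 2
        return (rec(ids[half[:mid]], d - 1) +
                rec(ids[half[mid:]], d - 1))

    return rec(np.arange(len(centroids)), depth)
```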
The LEAP Challenge: Education for a World of Unscripted Problems
ERIC Educational Resources Information Center
Liberal Education, 2015
2015-01-01
This article was adapted from "The LEAP Challenge: Education for a World of Unscripted Problems," a folio distributed at the opening plenary session of the 2015 annual meeting of the Association of American Colleges and Universities at which the LEAP Challenge was formally launched. Liberal Education and America's Promise (LEAP) prepares…
Mesh Algorithms for PDE with Sieve I: Mesh Distribution
Knepley, Matthew G.; Karpeev, Dmitry A.
2009-01-01
We have developed a new programming framework, called Sieve, to support parallel numerical partial differential equation(s) (PDE) algorithms operating over distributed meshes. We have also developed a reference implementation of Sieve in C++ as a library of generic algorithms operating on distributed containers conforming to the Sieve interface. Sieve makes instances of the incidence relation, or arrows, the conceptual first-class objects represented in the containers. Further, generic algorithms acting on this arrow container are systematically used to provide natural geometric operations on the topology and also, through duality, on the data. Finally, coverings and duality are used to encode not only individual meshes, but all types of hierarchies underlying PDE data structures, including multigrid and mesh partitions. In order to demonstrate the usefulness of the framework, we show how the mesh partition data can be represented and manipulated using the same fundamental mechanisms used to represent meshes. We present the complete description of an algorithm to encode a mesh partition and then distribute a mesh, which is independent of the mesh dimension, element shape, or embedding. Moreover, data associated with the mesh can be similarly distributed with exactly the same algorithm. The use of a high level of abstraction within the Sieve leads to several benefits in terms of code reuse, simplicity, and extensibility. We discuss these benefits and compare our approach to other existing mesh libraries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Havu, V.; Fritz Haber Institute of the Max Planck Society, Berlin; Blum, V.
2009-12-01
We consider the problem of developing O(N) scaling grid-based operations needed in many central operations when performing electronic structure calculations with numeric atom-centered orbitals as basis functions. We outline the overall formulation of localized algorithms, and specifically the creation of localized grid batches. The choice of the grid partitioning scheme plays an important role in the performance and memory consumption of the grid-based operations. Three different top-down partitioning methods are investigated, and compared with formally more rigorous yet much more expensive bottom-up algorithms. We show that a conceptually simple top-down grid partitioning scheme achieves essentially the same efficiency as the more rigorous bottom-up approaches.
Regier, Michael D; Moodie, Erica E M
2016-05-01
We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when the standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
Qin, Jiahu; Fu, Weiming; Gao, Huijun; Zheng, Wei Xing
2016-03-03
This paper is concerned with developing a distributed k-means algorithm and a distributed fuzzy c-means algorithm for wireless sensor networks (WSNs) where each node is equipped with sensors. The underlying topology of the WSN is assumed to be strongly connected. The consensus algorithm in multiagent consensus theory is utilized to exchange the measurement information of the sensors in the WSN. To obtain a faster convergence speed as well as a higher possibility of having the global optimum, a distributed k-means++ algorithm is first proposed to find the initial centroids before executing the distributed k-means algorithm and the distributed fuzzy c-means algorithm. The proposed distributed k-means algorithm is capable of partitioning the data observed by the nodes into measure-dependent groups which have small in-group and large out-group distances, while the proposed distributed fuzzy c-means algorithm is capable of partitioning the data observed by the nodes into different measure-dependent groups with degrees of membership values ranging from 0 to 1. Simulation results show that the proposed distributed algorithms can achieve almost the same results as those given by the centralized clustering algorithms.
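For illustration, the k-means++ seeding that the paper runs in distributed form can be sketched in its standard centralized version; this is a generic rendering, not the authors' consensus-based WSN implementation:

    import numpy as np

    def kmeans_pp_seeds(X, k, seed=0):
        # Choose k initial centroids, each new one drawn with probability
        # proportional to its squared distance from the nearest centroid so far.
        rng = np.random.default_rng(seed)
        centroids = [X[rng.integers(len(X))]]
        for _ in range(k - 1):
            d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centroids], axis=0)
            centroids.append(X[rng.choice(len(X), p=d2 / d2.sum())])
        return np.array(centroids)

Seeding this way raises the chance that the subsequent k-means or fuzzy c-means iterations converge near the global optimum, which is the motivation the abstract gives.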
Weights and topology: a study of the effects of graph construction on 3D image segmentation.
Grady, Leo; Jolly, Marie-Pierre
2008-01-01
Graph-based algorithms have become increasingly popular for medical image segmentation. The fundamental process for each of these algorithms is to use the image content to generate a set of weights for the graph and then set conditions for an optimal partition of the graph with respect to these weights. To date, the heuristics used for generating the weighted graphs from image intensities have largely been ignored, while the primary focus of attention has been on the details of providing the partitioning conditions. In this paper we empirically study the effects of graph connectivity and weighting function on the quality of the segmentation results. To control for algorithm-specific effects, we employ both the Graph Cuts and Random Walker algorithms in our experiments.
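As a concrete reference point, the weighting function most commonly used in this setting maps intensity differences to edge weights through a Gaussian. Below is a minimal sketch for a 4-connected image grid, with beta as the kind of free parameter such studies vary; it is not the paper's code:

    import numpy as np

    def grid_graph_weights(img, beta=50.0):
        # Weights for horizontal and vertical edges of a 2-D image grid:
        # large where neighboring intensities agree, small across image edges.
        img = img.astype(float)
        dh = img[:, 1:] - img[:, :-1]   # horizontal neighbor differences
        dv = img[1:, :] - img[:-1, :]   # vertical neighbor differences
        return np.exp(-beta * dh ** 2), np.exp(-beta * dv ** 2)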
Normalized Cut Algorithm for Automated Assignment of Protein Domains
NASA Technical Reports Server (NTRS)
Samanta, M. P.; Liang, S.; Zha, H.; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We present a novel computational method for automatic assignment of protein domains from structural data. At the core of our algorithm lies a recently proposed clustering technique that has been very successful for image-partitioning applications. This graph-theory-based clustering method uses the notion of a normalized cut to partition an undirected graph into its strongly connected components. Computer implementation of our method tested on the standard comparison set of proteins from the literature shows a high success rate (84%), better than most existing alternatives. In addition, several other features of our algorithm, such as reliance on few adjustable parameters, linear run-time with respect to the size of the protein, and reduced complexity compared to other graph-theory-based algorithms, make it an attractive tool for structural biologists.
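The normalized-cut step at the core of the method can be illustrated with the standard spectral bipartition: threshold the second-smallest eigenvector of the symmetric normalized Laplacian. This textbook sketch assumes a small dense adjacency matrix and omits the paper's domain-assignment pipeline:

    import numpy as np

    def ncut_bipartition(W):
        # Split an undirected weighted graph (adjacency matrix W) in two.
        d = W.sum(axis=1)
        d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
        # Symmetric normalized Laplacian: I - D^{-1/2} W D^{-1/2}.
        L = np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
        _, eigvecs = np.linalg.eigh(L)
        return eigvecs[:, 1] >= 0   # sign of the Fiedler vector partitions nodes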
DOE Office of Scientific and Technical Information (OSTI.GOV)
Satake, Shin-ichi; Kanamori, Hiroyuki; Kunugi, Tomoaki
2007-02-01
We have developed a parallel algorithm for microdigital-holographic particle-tracking velocimetry. The algorithm is used in (1) numerical reconstruction of a particle image computed from a digital hologram, and (2) searching for particles. The numerical reconstruction from the digital hologram makes use of the Fresnel diffraction equation and the FFT (fast Fourier transform), whereas the particle search algorithm looks for local maximum gradation in a reconstruction field represented by a 3D matrix. To achieve high performance computing for both calculations (reconstruction and particle search), two memory partitions are allocated to the 3D matrix. In this matrix, the reconstruction part consists of horizontally placed 2D memory partitions on the x-y plane for the FFT, whereas the particle search part consists of vertically placed 2D memory partitions set along the z axis. Consequently, scalability is obtained in proportion to the number of processor elements; benchmarks were carried out for parallel computation on an SGI Altix machine.
Zhang, Pan; Moore, Cristopher
2014-01-01
Modularity is a popular measure of community structure. However, maximizing the modularity can lead to many competing partitions, with almost the same modularity, that are poorly correlated with each other. It can also produce illusory "communities" in random graphs where none exist. We address this problem by using the modularity as a Hamiltonian at finite temperature and using an efficient belief propagation algorithm to obtain the consensus of many partitions with high modularity, rather than looking for a single partition that maximizes it. We show analytically and numerically that the proposed algorithm works all of the way down to the detectability transition in networks generated by the stochastic block model. It also performs well on real-world networks, revealing large communities in some networks where previous work has claimed no communities exist. Finally we show that by applying our algorithm recursively, subdividing communities until no statistically significant subcommunities can be found, we can detect hierarchical structure in real-world networks more efficiently than previous methods. PMID:25489096
NVU dynamics. I. Geodesic motion on the constant-potential-energy hypersurface.
Ingebrigtsen, Trond S; Toxvaerd, Søren; Heilmann, Ole J; Schrøder, Thomas B; Dyre, Jeppe C
2011-09-14
An algorithm is derived for computer simulation of geodesics on the constant-potential-energy hypersurface of a system of N classical particles. First, a basic time-reversible geodesic algorithm is derived by discretizing the geodesic stationarity condition and implementing the constant-potential-energy constraint via standard Lagrangian multipliers. The basic NVU algorithm is tested by single-precision computer simulations of the Lennard-Jones liquid. Excellent numerical stability is obtained if the force cutoff is smoothed and the two initial configurations have identical potential energy within machine precision. Nevertheless, just as for NVE algorithms, stabilizers are needed for very long runs in order to compensate for the accumulation of numerical errors that eventually lead to "entropic drift" of the potential energy towards higher values. A modification of the basic NVU algorithm is introduced that ensures potential-energy and step-length conservation; center-of-mass drift is also eliminated. Analytical arguments confirmed by simulations demonstrate that the modified NVU algorithm is absolutely stable. Finally, we present simulations showing that the NVU algorithm and the standard leap-frog NVE algorithm have identical radial distribution functions for the Lennard-Jones liquid. © 2011 American Institute of Physics
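For reference, the standard leap-frog NVE step that the NVU algorithm is compared against can be sketched as follows; this is a minimal, assumed form, and the NVU geodesic update with Lagrangian multipliers is not reproduced:

    def leapfrog_step(pos, vel_half, mass, dt, force_fn):
        # Drift with half-step velocities, then kick with forces at new positions.
        pos_new = pos + vel_half * dt
        vel_half_new = vel_half + force_fn(pos_new) / mass * dt
        return pos_new, vel_half_new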
QuartetS: A Fast and Accurate Algorithm for Large-Scale Orthology Detection
2011-01-01
…of these two genes with all other genes of the other species. In addition, to be considered orthologs, the BBH pairs had to satisfy two conditions… For the BBH pair computations employed as part of the outgroup and QuartetS methods, we used the same two conditions as the ones described above…
Crashworthiness simulations with DYNA3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schauer, D.A.; Hoover, C.G.; Kay, G.J.
1996-04-01
Current progress in parallel algorithm research and applications in vehicle crash simulation is described for the explicit, finite element algorithms in DYNA3D. Problem partitioning methods and parallel algorithms for contact at material interfaces are the two challenging algorithm research problems that are addressed. Two prototype parallel contact algorithms have been developed for treating the cases of local and arbitrary contact. Demonstration problems for local contact are crashworthiness simulations with 222 locally defined contact surfaces and a vehicle/barrier collision modeled with arbitrary contact. A simulation of crash tests conducted for a vehicle impacting a U-channel small sign post embedded in soil has been run on both the serial and parallel versions of DYNA3D. A significant reduction in computational time has been observed when running these problems on the parallel version. However, to achieve maximum efficiency, complex problems must be appropriately partitioned, especially when contact dominates the computation.
Xie, Rui; Wan, Xianrong; Hong, Sheng; Yi, Jianxin
2017-06-14
The performance of a passive radar network can be greatly improved by an optimal radar network structure. Generally, radar network structure optimization consists of two aspects, namely the placement of receivers in suitable places and the selection of appropriate illuminators. The present study investigates issues concerning the joint optimization of receiver placement and illuminator selection for a passive radar network. Firstly, the required radar cross section (RCS) for target detection is chosen as the performance metric, and the joint optimization model boils down to the partition p-center problem (PPCP). The PPCP is then solved by a proposed bisection algorithm. The key to the bisection algorithm lies in solving the partition set covering problem (PSCP), which can be solved by a hybrid algorithm developed by coupling convex optimization with a greedy dropping algorithm. In the end, the performance of the proposed algorithm is validated via numerical simulations.
Mahrooghy, Majid; Ashraf, Ahmed B; Daye, Dania; McDonald, Elizabeth S; Rosen, Mark; Mies, Carolyn; Feldman, Michael; Kontos, Despina
2015-06-01
Heterogeneity in cancer can affect response to therapy and patient prognosis. Histologic measures have classically been used to measure heterogeneity, although a reliable noninvasive measurement is needed both to establish baseline risk of recurrence and monitor response to treatment. Here, we propose using spatiotemporal wavelet kinetic features from dynamic contrast-enhanced magnetic resonance imaging to quantify intratumor heterogeneity in breast cancer. Tumor pixels are first partitioned into homogeneous subregions using pharmacokinetic measures. Heterogeneity wavelet kinetic (HetWave) features are then extracted from these partitions to obtain spatiotemporal patterns of the wavelet coefficients and the contrast agent uptake. The HetWave features are evaluated in terms of their prognostic value using a logistic regression classifier with genetic algorithm wrapper-based feature selection to classify breast cancer recurrence risk as determined by a validated gene expression assay. Receiver operating characteristic analysis and area under the curve (AUC) are computed to assess classifier performance using leave-one-out cross validation. The HetWave features outperform other commonly used features (AUC = 0.88 for HetWave versus 0.70 for standard features). The combination of HetWave and standard features further increases classifier performance (AUC = 0.94). The rate of the spatial frequency pattern over the pharmacokinetic partitions can provide valuable prognostic information. HetWave could be a powerful feature extraction approach for characterizing tumor heterogeneity, providing valuable prognostic information.
Focusing light through random photonic layers by four-element division algorithm
NASA Astrophysics Data System (ADS)
Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin
2018-02-01
The propagation of waves in turbid media is a fundamental problem of optics with vast applications. Optical phase optimization approaches for focusing light through turbid media using phase control algorithms have been widely studied in recent years due to the rapid development of spatial light modulators. The existing approaches include element-based algorithms (the stepwise sequential algorithm and the continuous sequential algorithm) and whole-element optimization approaches (the partitioning algorithm, the transmission matrix approach, and the genetic algorithm). The advantage of element-based approaches is that the phase contribution of each element is very clear; however, because the intensity contribution of each element to the focal point is small, especially for a large number of elements, determining the optimal phase for a single element is difficult. In other words, the signal-to-noise ratio of the measurement is weak, and the optimization can become trapped in local maxima. As for whole-element optimization approaches, all elements are employed in the optimization, so the signal-to-noise ratio during the optimization is improved. However, because more randomness is introduced into the process, these optimizations take more time to converge than the single-element-based approaches. Building on the advantages of both single-element-based and whole-element optimization approaches, we propose the four-element division algorithm (FEDA). Comparisons with the existing approaches show that FEDA takes only one third of the measurement time to reach the optimum, which means that FEDA is promising for practical applications such as deep tissue imaging.
NASA Astrophysics Data System (ADS)
Malof, Jordan M.; Reichman, Daniël.; Collins, Leslie M.
2018-04-01
A great deal of research has been focused on the development of computer algorithms for buried threat detection (BTD) in ground penetrating radar (GPR) data. Most recently proposed BTD algorithms are supervised, and therefore they employ machine learning models that infer their parameters using training data. Cross-validation (CV) is a popular method for evaluating the performance of such algorithms, in which the available data is systematically split into N disjoint subsets, and an algorithm is repeatedly trained on N-1 subsets and tested on the excluded subset. There are several common types of CV in BTD, which vary principally in the spatial criterion used to partition the data: site-based, lane-based, region-based, etc. The performance metrics obtained via CV are often used to suggest the superiority of one model over others; however, most studies utilize just one type of CV, and the impact of this choice is unclear. Here we employ several types of CV to evaluate algorithms from a recent large-scale BTD study. The results indicate that the rank-order of the performance of the algorithms varies substantially depending upon which type of CV is used. For example, the rank-1 algorithm for region-based CV is the lowest-ranked algorithm for site-based CV. This suggests that any algorithm results should be interpreted carefully with respect to the type of CV employed. We discuss some potential interpretations of performance, given a particular type of CV.
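The different types of CV named here differ only in the grouping key used to form the disjoint folds. A minimal sketch follows; the field names "site" and "lane" are illustrative, not the study's metadata schema:

    from collections import defaultdict

    def grouped_folds(samples, key):
        # One fold per distinct value of the chosen spatial grouping key.
        folds = defaultdict(list)
        for i, sample in enumerate(samples):
            folds[sample[key]].append(i)
        return list(folds.values())

    data = [{"site": "A", "lane": 1}, {"site": "A", "lane": 2},
            {"site": "B", "lane": 1}]
    site_folds = grouped_folds(data, "site")   # [[0, 1], [2]]
    lane_folds = grouped_folds(data, "lane")   # [[0, 2], [1]]

Training on all folds but one and testing on the held-out fold then yields site-based or lane-based CV scores that, as the study shows, can rank the same detectors differently.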
Prosperi, Mattia C F; De Luca, Andrea; Di Giambenedetto, Simona; Bracciale, Laura; Fabbiani, Massimiliano; Cauda, Roberto; Salemi, Marco
2010-10-25
Phylogenetic methods produce hierarchies of molecular species, inferring knowledge about taxonomy and evolution. However, there is not yet a consensus methodology that provides a crisp partition of taxa, desirable when considering the problem of intra/inter-patient quasispecies classification or infection transmission event identification. We introduce the threshold bootstrap clustering (TBC), a new methodology for partitioning molecular sequences that does not require a phylogenetic tree estimation. The TBC is an incremental partition algorithm, inspired by the stochastic Chinese restaurant process, and takes advantage of resampling techniques and models of sequence evolution. TBC uses as input a multiple alignment of molecular sequences and its output is a crisp partition of the taxa into an automatically determined number of clusters. By varying initial conditions, the algorithm can produce different partitions. We describe a procedure that selects a prime partition among a set of candidate ones and calculates a measure of cluster reliability. TBC was successfully tested for the identification of type-1 human immunodeficiency and hepatitis C virus subtypes, and compared with previously established methodologies. It was also evaluated in the problem of HIV-1 intra-patient quasispecies clustering, and for transmission cluster identification, using a set of sequences from patients with known transmission event histories. TBC has been shown to be effective for the subtyping of HIV and HCV, and for identifying intra-patient quasispecies. To some extent, the algorithm was also able to infer clusters corresponding to events of infection transmission. The computational complexity of TBC is quadratic in the number of taxa, lower than other established methods; in addition, TBC has been enhanced with a measure of cluster reliability. The TBC can be useful to characterise molecular quasispecies in a broad context.
Ghalyan, Najah F; Miller, David J; Ray, Asok
2018-06-12
Estimation of a generating partition is critical for symbolization of measurements from discrete-time dynamical systems, where a sequence of symbols from a (finite-cardinality) alphabet may uniquely specify the underlying time series. Such symbolization is useful for computing measures (e.g., Kolmogorov-Sinai entropy) to identify or characterize the (possibly unknown) dynamical system. It is also useful for time series classification and anomaly detection. The seminal work of Hirata, Judd, and Kilminster (2004) derives a novel objective function, akin to a clustering objective, that measures the discrepancy between a set of reconstruction values and the points from the time series. They cast estimation of a generating partition as the minimization of their objective function. Unfortunately, their proposed algorithm is nonconvergent, with no guarantee of finding even locally optimal solutions with respect to their objective. The difficulty is a heuristic nearest-neighbor symbol assignment step. Alternatively, we develop a novel, locally optimal algorithm for their objective. We apply iterative nearest-neighbor symbol assignments with guaranteed discrepancy descent, by which joint, locally optimal symbolization of the entire time series is achieved. While most previous approaches frame generating partition estimation as a state-space partitioning problem, we recognize that minimizing the Hirata et al. (2004) objective function does not induce an explicit partitioning of the state space, but rather of the space consisting of the entire time series (effectively, clustering in a (countably) infinite-dimensional space). Our approach also amounts to a novel type of sliding-block lossy source coding. Improvement, with respect to several measures, is demonstrated over popular methods for symbolizing chaotic maps. We also apply our approach to time-series anomaly detection, considering both chaotic maps and a failure application in a polycrystalline alloy material.
A Framework for an Automated Compilation System for Reconfigurable Architectures
1997-03-01
HDLs, Hardware C requires the designer to be thoroughly familiar with digital hardware design. Vahid, Gong, and Gajski focus on the partitioning… of hardware used. Vahid, Gong, and Gajski suggest that the greedy approach used by Gupta and De Micheli is easily trapped in local minima [46:216]… iterative algorithm. To overcome this limitation, Vahid, Gong, and Gajski suggest a binary constraint partitioning approach. The partitioning
Leadership Education for Advancement and Promotion (LEAP)
NASA Astrophysics Data System (ADS)
Rankin, Patricia
2004-05-01
An NSF ADVANCE Institutional Transformation award funds the Leadership Education for Advancement and Promotion (LEAP) project at the University of Colorado, Boulder (UCB). LEAP is in the third year of a five-year program. The purpose of LEAP is to increase the number of women in leadership positions in the sciences and engineering. The author, who is PI of the project, will discuss what approaches the LEAP project is taking at UCB to improve faculty retention and to help faculty be more successful. Questions that will be addressed include 1) Is this a historic problem? 2) Is the playing field level? 3) Why are LEAP programs not aimed solely at women faculty? 4) What helps? 5) What is needed to change an institution? (The NSF (SBE-0123636) funds this work.)
Modeling Political Populations with Bacteria
NASA Astrophysics Data System (ADS)
Cleveland, Chris; Liao, David
2011-03-01
Results from lattice-based simulations of micro-environments with heterogeneous nutrient resources reveal that competition between wild-type and GASP rpoS819 strains of E. coli offers mutual benefit, particularly in nutrient-deprived regions. Our computational model spatially maps bacteria populations and energy sources onto a set of 3D lattices that collectively resemble the topology of North America. By implementing Wright-Fisher reproduction into a probabilistic leap-frog scheme, we observe populations of wild-type and GASP rpoS819 cells compete for resources and yet aid each other's long-term survival. The connection to how spatial political ideologies map in a similar way is discussed.
A Novel Coarsening Method for Scalable and Efficient Mesh Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, A; Hysom, D; Gunney, B
2010-12-02
In this paper, we propose a novel mesh coarsening method called the brick coarsening method. The proposed method can be used in conjunction with any graph partitioner and scales to very large meshes. This method reduces the problem space by decomposing the original mesh into fixed-size blocks of nodes called bricks, layered in a way similar to conventional brick laying, and then assigning each node of the original mesh to the appropriate brick. Our experiments indicate that the proposed method scales to very large meshes while allowing the simple RCB partitioner to produce higher-quality partitions with significantly fewer edge cuts. Our results further indicate that the proposed brick-coarsening method allows more complicated partitioners like PT-Scotch to scale to very large problem sizes while still maintaining good partitioning performance with a relatively good edge-cut metric. Graph partitioning is an important problem that has many scientific and engineering applications in such areas as VLSI design, scientific computing, and resource management. Given a graph G = (V,E), where V is the set of vertices and E is the set of edges, the (k-way) graph partitioning problem is to partition the vertices of the graph (V) into k disjoint groups such that each group contains a roughly equal number of vertices and the number of edges connecting vertices in different groups is minimized. Graph partitioning plays a key role in large scientific computing, especially in mesh-based computations, as it is used as a tool to minimize the volume of communication and to ensure well-balanced load across computing nodes. The impact of graph partitioning on the reduction of communication can be easily seen, for example, in different iterative methods to solve a sparse system of linear equations. Here, a graph partitioning technique is applied to the matrix, which is basically a graph in which each edge is a non-zero entry in the matrix, to allocate groups of vertices to processors in such a way that much of the matrix-vector multiplication can be performed locally on each processor, minimizing communication. Furthermore, a good graph partitioning scheme ensures an equal amount of computation performed on each processor. Graph partitioning is a well-known NP-complete problem, and thus the most commonly used graph partitioning algorithms employ some form of heuristics. These algorithms vary in terms of their complexity, partition generation time, and the quality of partitions, and they tend to trade off these factors. A significant challenge we are currently facing at the Lawrence Livermore National Laboratory is how to partition very large meshes on massive-size distributed memory machines like IBM BlueGene/P, where scalability becomes a big issue. For example, we have found that ParMetis, a very popular graph partitioning tool, can only scale to 16K processors. An ideal graph partitioning method in such an environment should be fast and scale to very large meshes, while producing high-quality partitions. This is an extremely challenging task: to scale to that level, the partitioning algorithm should be simple and be able to produce partitions that minimize inter-processor communication and balance the load imposed on the processors. Our goals in this work are two-fold: (1) to develop a new scalable graph partitioning method with good load balancing and communication reduction capability; (2) to study the performance of the proposed partitioning method on very large parallel machines using actual data sets and compare the performance to that of existing methods. The proposed method achieves the desired scalability by reducing the mesh size. For this, it coarsens an input mesh into a smaller mesh by coalescing the vertices and edges of the original mesh into a set of mega-vertices and mega-edges. A new coarsening method called the brick algorithm is developed in this research. In the brick algorithm, the zones in a given mesh are first grouped into fixed-size blocks called bricks. These bricks are then laid in a way similar to the conventional brick-laying technique, which reduces the number of neighboring blocks each block needs to communicate with. The contributions of this research are as follows: (1) we have developed a novel method that scales to a really large problem size while producing high-quality mesh partitions; (2) we measured the performance and scalability of the proposed method on a machine of massive size using a set of actual large complex data sets, where we have scaled to a mesh with 110 million zones; to the best of our knowledge, this is the largest complex mesh that a partitioning method has been successfully applied to; and (3) we have shown that the proposed method can reduce the number of edge cuts by as much as 65%.
Time lagged ordinal partition networks for capturing dynamics of continuous dynamical systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCullough, Michael; Iu, Herbert Ho-Ching; Small, Michael
2015-05-15
We investigate a generalised version of the recently proposed ordinal partition time series to network transformation algorithm. First, we introduce a fixed time lag for the elements of each partition that is selected using techniques from traditional time delay embedding. The resulting partitions define regions in the embedding phase space that are mapped to nodes in the network space. Edges are allocated between nodes based on temporal succession thus creating a Markov chain representation of the time series. We then apply this new transformation algorithm to time series generated by the Rössler system and find that periodic dynamics translate to ring structures whereas chaotic time series translate to band or tube-like structures—thereby indicating that our algorithm generates networks whose structure is sensitive to system dynamics. Furthermore, we demonstrate that simple network measures including the mean out degree and variance of out degrees can track changes in the dynamical behaviour in a manner comparable to the largest Lyapunov exponent. We also apply the same analysis to experimental time series generated by a diode resonator circuit and show that the network size, mean shortest path length, and network diameter are highly sensitive to the interior crisis captured in this particular data set.
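The transform itself (lag-embed the series, symbolize each window by its ordinal pattern, and link temporally successive symbols) can be sketched briefly; the embedding parameters below are illustrative, not the paper's choices:

    from collections import Counter

    def ordinal_partition_edges(x, dim=4, tau=5):
        # Map each lagged window to its ordinal pattern (the permutation that
        # sorts it), then count transitions between consecutive patterns.
        symbols = []
        for i in range(len(x) - (dim - 1) * tau):
            window = [x[i + j * tau] for j in range(dim)]
            symbols.append(tuple(sorted(range(dim), key=window.__getitem__)))
        return Counter(zip(symbols, symbols[1:]))  # directed, weighted edges

Measures such as the mean out-degree are then computed on the resulting weighted digraph.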
Multitasking a three-dimensional Navier-Stokes algorithm on the Cray-2
NASA Technical Reports Server (NTRS)
Swisshelm, Julie M.
1989-01-01
A three-dimensional computational aerodynamics algorithm has been multitasked for efficient parallel execution on the Cray-2. It provides a means for examining the multitasking performance of a complete CFD application code. An embedded zonal multigrid scheme is used to solve the Reynolds-averaged Navier-Stokes equations for an internal flow model problem. The explicit nature of each component of the method allows a spatial partitioning of the computational domain to achieve a well-balanced task load for MIMD computers with vector-processing capability. Experiments have been conducted with both two- and three-dimensional multitasked cases. The best speedup attained by an individual task group was 3.54 on four processors of the Cray-2, while the entire solver yielded a speedup of 2.67 on four processors for the three-dimensional case. The multiprocessing efficiency of various types of computational tasks is examined, performance on two Cray-2s with different memory access speeds is compared, and extrapolation to larger problems is discussed.
Correction of clipped pixels in color images.
Xu, Di; Doutre, Colin; Nasiopoulos, Panos
2011-03-01
Conventional images store a very limited dynamic range of brightness. The true luma in the bright area of such images is often lost due to clipping. When clipping changes the R, G, B color ratios of a pixel, color distortion also occurs. In this paper, we propose an algorithm to enhance both the luma and chroma of the clipped pixels. Our method is based on the strong chroma spatial correlation between clipped pixels and their surrounding unclipped area. After identifying the clipped areas in the image, we partition the clipped areas into regions with similar chroma, and estimate the chroma of each clipped region based on the chroma of its surrounding unclipped region. We correct the clipped R, G, or B color channels based on the estimated chroma and the unclipped color channel(s) of the current pixel. The last step involves smoothing of the boundaries between regions of different clipping scenarios. Both objective and subjective experimental results show that our algorithm is very effective in restoring the color of clipped pixels. © 2011 IEEE
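The central correction step can be sketched as below: a clipped channel is re-estimated from an unclipped channel of the same pixel, scaled by the chroma ratio measured in the surrounding unclipped region. The ratio estimate is assumed given here, and the paper's region partitioning and boundary smoothing are omitted:

    import numpy as np

    def correct_clipped_channel(clipped_vals, unclipped_vals, ratio_estimate):
        # Reconstruct the clipped channel as ratio * (unclipped channel); the
        # true value can only sit at or above the recorded clipping level.
        restored = ratio_estimate * unclipped_vals
        return np.maximum(restored, clipped_vals)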
NASA Astrophysics Data System (ADS)
Ouillon, G.; Ducorbier, C.; Sornette, D.
2008-01-01
We propose a new pattern recognition method that is able to reconstruct the three-dimensional structure of the active part of a fault network using the spatial location of earthquakes. The method is a generalization of the so-called dynamic clustering (or k-means) method, which partitions a set of data points into clusters using a global minimization criterion on the variance of the hypocenter locations about their centers of mass. The new method improves on the original k-means method by taking into account the full spatial covariance tensor of each cluster in order to partition the data set into fault-like, anisotropic clusters. Given a catalog of seismic events, the output is the optimal set of plane segments that fits the spatial structure of the data. Each plane segment is fully characterized by its location, size, and orientation. The main tunable parameter is the accuracy of the earthquake locations, which fixes the resolution, i.e., the residual variance of the fit. The resolution determines the number of fault segments needed to describe the earthquake catalog: the better the resolution, the finer the structure of the reconstructed fault segments. The algorithm successfully reconstructs the fault segments of synthetic earthquake catalogs. Applied to the real catalog composed of a subset of the aftershock sequence of the 28 June 1992 Landers earthquake in southern California, the reconstructed plane segments fully agree with faults already known on geological maps or with blind faults that appear quite obvious in longer-term catalogs. Future improvements of the method are discussed, as well as its potential use in the multiscale study of the inner structure of fault zones.
An algorithm to track laboratory zebrafish shoals.
Feijó, Gregory de Oliveira; Sangalli, Vicenzo Abichequer; da Silva, Isaac Newton Lima; Pinho, Márcio Sarroglia
2018-05-01
In this paper, a semi-automatic multi-object tracking method to track a group of unmarked zebrafish is proposed. This method can handle partial occlusion cases, maintaining the correct identity of each individual. For every object, we extracted a set of geometric features to be used in the two main stages of the algorithm. The first stage selected the best candidate, based both on the blobs identified in the image and the estimate generated by a Kalman Filter instance. In the second stage, if the same candidate blob is selected by two or more instances, a blob-partitioning algorithm takes place in order to split this blob and reestablish the instances' identities. If the algorithm cannot determine the identity of a blob, a manual intervention is required. This procedure was compared against a manually labeled ground truth on four video sequences with different numbers of fish and spatial resolutions. The performance of the proposed method is then compared against two well-known zebrafish tracking methods found in the literature: one that treats occlusion scenarios and one that only tracks fish that are not in occlusion. Based on the data set used, the proposed method outperforms the first method in correctly separating fish in occlusion, improving on it in at least 8.15% of the cases. As for the second, the proposed method's overall performance was better in some of the tested videos, especially those with lower image quality, because the second method requires high-spatial-resolution images, which the proposed method does not. Yet, the proposed method was able to separate fish involved in occlusion and correctly assign their identities in up to 87.85% of the cases, without accounting for user intervention. Copyright © 2018 Elsevier Ltd. All rights reserved.
The Leadership Education for Advancement and Promotion (LEAP) Project
NASA Astrophysics Data System (ADS)
Rankin, Patricia
2003-10-01
An NSF ADVANCE Institutional Transformation award funds the Leadership Education for Advancement and Promotion (LEAP) project at the University of Colorado, Boulder (UCB). LEAP is in the second year of a five-year program. The purpose of LEAP is to increase the number of women in leadership positions in the sciences and engineering. The author, who is PI of the project, will discuss what approaches the LEAP project is taking at UCB to improve faculty retention and to help faculty be more successful. Questions that will be addressed include 1) What has been learned so far? 2) Why is it important to ensure that the playing field is truly level? 3) What makes it hard to level the playing field? 4) Why are LEAP programs not aimed solely at women faculty? 5) What is needed to change an institution?
Partitioning Rectangular and Structurally Nonsymmetric Sparse Matrices for Parallel Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
B. Hendrickson; T.G. Kolda
1998-09-01
A common operation in scientific computing is the multiplication of a sparse, rectangular or structurally nonsymmetric matrix and a vector. In many applications the matrix- transpose-vector product is also required. This paper addresses the efficient parallelization of these operations. We show that the problem can be expressed in terms of partitioning bipartite graphs. We then introduce several algorithms for this partitioning problem and compare their performance on a set of test matrices.
A New Approach to Parallel Dynamic Partitioning for Adaptive Unstructured Meshes
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Gao, Guang R.
1999-01-01
Classical mesh partitioning algorithms were designed for rather static situations, and their straightforward application in a dynamical framework may lead to unsatisfactory results, e.g., excessive data migration among processors. Furthermore, special attention should be paid to their amenability to parallelization. In this paper, a novel parallel method for the dynamic partitioning of adaptive unstructured meshes is described. It is based on a linear representation of the mesh using self-avoiding walks.
The Application of Leap Motion in Astronaut Virtual Training
NASA Astrophysics Data System (ADS)
Qingchao, Xie; Jiangang, Chao
2017-03-01
With the development of computer vision, virtual reality has been applied to astronaut virtual training. As an advanced optical device for hand tracking, Leap Motion provides precise and fluid tracking of the hands, making it suitable as a gesture input device in astronaut virtual training. This paper builds an astronaut virtual training system based on Leap Motion and establishes a mathematical model of hand occlusion. Finally, the ability of Leap Motion to handle occlusion is analysed. A virtual assembly simulation platform was developed for astronaut training, in which occluded gestures influence the recognition process. The experimental results can guide astronaut virtual training.
NASA Astrophysics Data System (ADS)
Oblow, E. M.
1982-10-01
An evaluation was made of the mathematical and economic basis for conversion processes in the Long-term Energy Analysis Program (LEAP) energy economy model. Conversion processes are the main modeling subunit in LEAP used to represent energy conversion industries and are supposedly based on the classical economic theory of the firm. Questions about uniqueness and existence of LEAP solutions and their relation to classical equilibrium economic theory prompted the study. An analysis of classical theory and LEAP model equations was made to determine their exact relationship. The conclusions drawn from this analysis were that LEAP theory is not consistent with the classical theory of the firm. Specifically, the capacity factor formalism used by LEAP does not support a classical interpretation in terms of a technological production function for energy conversion processes. The economic implications of this inconsistency are suboptimal process operation and short term negative profits in years where plant operation should be terminated. A new capacity factor formalism, which retains the behavioral features of the original model, is proposed to resolve these discrepancies.
Gunavathi, Chellamuthu; Premalatha, Kandasamy
2014-01-01
Feature selection in cancer classification is a central area of research in the field of bioinformatics, used to select informative genes from the thousands of genes on a microarray. The genes are ranked based on T-statistics, signal-to-noise ratio (SNR), and F-test values. A swarm intelligence (SI) technique then finds the informative genes among the top-m ranked genes, and these selected genes are used for classification. In this paper, shuffled frog leaping with Lévy flight (SFLLF) is proposed for feature selection. In SFLLF, the Lévy flight is included to avoid premature convergence of the shuffled frog leaping (SFL) algorithm. The SI techniques particle swarm optimization (PSO), cuckoo search (CS), SFL, and SFLLF are used for feature selection, identifying informative genes for classification. The k-nearest neighbour (k-NN) technique is used to classify the samples. The proposed work is applied to 10 different benchmark datasets and examined with the SI techniques. The experimental results show that the results obtained from the k-NN classifier through SFLLF feature selection outperform PSO, CS, and SFL.
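The Lévy-flight perturbation that SFLLF adds to SFL is commonly drawn with Mantegna's algorithm; the sketch below uses an illustrative stability index beta = 1.5 and is not necessarily the paper's exact update rule:

    import numpy as np
    from math import gamma, sin, pi

    def levy_step(dim, beta=1.5, seed=None):
        # One Levy-distributed step (Mantegna's algorithm).
        rng = np.random.default_rng(seed)
        sigma = (gamma(1 + beta) * sin(pi * beta / 2)
                 / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rng.normal(0.0, sigma, dim)   # heavy-tailed numerator
        v = rng.normal(0.0, 1.0, dim)
        return u / np.abs(v) ** (1 / beta)

The occasional long jumps from the heavy tail are what allow frogs to escape local optima, countering the premature convergence the abstract mentions.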
X-ray Optics Development at MSFC
NASA Technical Reports Server (NTRS)
Sharma, Dharma P.
2017-01-01
Development of high resolution focusing telescopes has led to a tremendous leap in sensitivity, revolutionizing observational X-ray astronomy. High sensitivity and high spatial resolution X-ray observations have been possible due to the use of grazing incidence optics (paraboloid/hyperboloid) coupled with high spatial resolution and high efficiency detectors/imagers. The best X-ray telescope flown so far is mounted onboard the Chandra observatory, launched on July 23, 1999. The telescope has a spatial resolution of 0.5 arc seconds with compatible imaging instruments in the energy range of 0.1 to 10 keV. The Chandra observatory has been responsible for a large number of discoveries and has provided X-ray insights on a large number of celestial objects including stars, supernova remnants, pulsars, magnetars, black holes, active galactic nuclei, galaxies, clusters and our own solar system.
Digital halftoning methods for selectively partitioning error into achromatic and chromatic channels
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.
1990-01-01
A method is described for reducing the visibility of artifacts arising in the display of quantized color images on CRT displays. The method is based on the differential spatial sensitivity of the human visual system to chromatic and achromatic modulations. Because the visual system has the highest spatial and temporal acuity for the luminance component of an image, a technique which will reduce luminance artifacts at the expense of introducing high-frequency chromatic errors is sought. A method based on controlling the correlations between the quantization errors in the individual phosphor images is explored. The luminance component is greatest when the phosphor errors are positively correlated, and is minimized when the phosphor errors are negatively correlated. The greatest effect of the correlation is obtained when the intensity quantization step sizes of the individual phosphors have equal luminances. For the ordered dither algorithm, a version of the method can be implemented by simply inverting the matrix of thresholds for one of the color components.
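The threshold-matrix inversion mentioned for ordered dither can be sketched as follows: two channels share a threshold matrix while the third uses complementary thresholds, pushing the per-channel quantization errors toward negative correlation. The 4x4 Bayer matrix and the complement construction are standard assumptions made for illustration, not necessarily the paper's exact matrices:

    import numpy as np

    BAYER4 = np.array([[ 0,  8,  2, 10],
                       [12,  4, 14,  6],
                       [ 3, 11,  1,  9],
                       [15,  7, 13,  5]]) / 16.0
    INVERTED = 15.0 / 16.0 - BAYER4   # complementary thresholds for one channel

    def dither_channel(channel, thresholds):
        # Binary ordered dither of a [0, 1] image with a tiled threshold matrix.
        h, w = channel.shape
        tile = np.tile(thresholds, (h // 4 + 1, w // 4 + 1))[:h, :w]
        return (channel > tile).astype(float)

    def dither_rgb_negatively_correlated(r, g, b):
        # Inverting the matrix for one component flips where its errors occur,
        # steering the combined error away from the luminance channel.
        return (dither_channel(r, BAYER4), dither_channel(g, BAYER4),
                dither_channel(b, INVERTED))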
NASA Astrophysics Data System (ADS)
Beretta, Elena; Micheletti, Stefano; Perotto, Simona; Santacesaria, Matteo
2018-01-01
In this paper, we develop a shape optimization-based algorithm for the electrical impedance tomography (EIT) problem of determining a piecewise constant conductivity on a polygonal partition from boundary measurements. The key tool is to use a distributed shape derivative of a suitable cost functional with respect to movements of the partition. Numerical simulations showing the robustness and accuracy of the method are presented for simulated test cases in two dimensions.
Site partitioning for distributed redundant disk arrays
NASA Technical Reports Server (NTRS)
Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.
1992-01-01
Distributed redundant disk arrays can be used in a distributed computing system or database system to provide recovery in the presence of temporary and permanent failures of single sites. In this paper, we look at the problem of partitioning the sites into redundant arrays in such a way that the communication costs for maintaining the parity information are minimized. We show that the partitioning problem is NP-complete and we propose two heuristic algorithms for finding approximate solutions.
Hybrid Nested Partitions and Math Programming Framework for Large-scale Combinatorial Optimization
2010-03-31
optimization problems: 1) exact algorithms and 2) metaheuristic algorithms. This project will integrate concepts from these two technologies to develop…optimal solutions within an acceptable amount of computation time, and 2) metaheuristic algorithms such as genetic algorithms, tabu search, and the…integer programming decomposition approaches, such as Dantzig-Wolfe decomposition and Lagrangian relaxation, and metaheuristics such as the Nested
Miranian, A; Abdollahzade, M
2013-02-01
Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on the least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition and therefore normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yield a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to modeling and predictions of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and old studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.
A Voxel-Based Filtering Algorithm for Mobile LiDAR Data
NASA Astrophysics Data System (ADS)
Qin, H.; Guan, G.; Yu, Y.; Zhong, L.
2018-04-01
This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points, in xy-plane, are first partitioned into a set of two-dimensional (2-D) blocks with a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward growing processing is performed to roughly separate terrain from non-terrain points with global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. This voxel-based filtering algorithm is comprehensively discussed in the analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
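The first partitioning step (2-D blocks in the xy-plane, then 3-D voxels) can be sketched with integer grid keys. Block and voxel sizes are the algorithm's free parameters; the dictionary layout below is an assumption made for illustration, not the paper's data structure:

    from collections import defaultdict

    def voxelize(points, block_size=25.0, voxel_size=0.5):
        # Map (block_key, voxel_key) -> indices of the points falling inside.
        cells = defaultdict(list)
        for i, (x, y, z) in enumerate(points):
            block = (int(x // block_size), int(y // block_size))
            voxel = (int(x // voxel_size), int(y // voxel_size),
                     int(z // voxel_size))
            cells[(block, voxel)].append(i)
        return cells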
On global optimization using an estimate of Lipschitz constant and simplicial partition
NASA Astrophysics Data System (ADS)
Gimbutas, Albertas; Žilinskas, Antanas
2016-10-01
A new algorithm is proposed for finding the global minimum of a multi-variate black-box Lipschitz function with an unknown Lipschitz constant. The feasible region is initially partitioned into simplices; in subsequent iterations, the most suitable simplices are selected and bisected via the middle point of the longest edge. The suitability of a simplex for bisection is evaluated by minimizing a surrogate function which mimics the lower bound for the considered objective function over that simplex. The surrogate function is defined using an estimate of the Lipschitz constant and the objective function values at the vertices of a simplex. The novelty of the algorithm is the sophisticated method of estimating the Lipschitz constant, and the appropriate method to minimize the surrogate function. The proposed algorithm was tested using 600 random test problems of different complexity, showing competitive results with two popular advanced algorithms which are based on similar assumptions.
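The surrogate lower bound follows directly from the Lipschitz property: for any simplex vertex v with known value f(v), f(x) >= f(v) - L*||x - v||, so the tightest bound at x is the maximum over the vertices. A minimal sketch with the Lipschitz estimate L assumed given (the paper's estimation of L is the novel part and is not reproduced):

    import numpy as np

    def surrogate_lower_bound(x, vertices, f_values, L):
        # Tightest Lipschitz lower bound for f at x from the vertex values.
        dists = np.linalg.norm(vertices - x, axis=1)
        return np.max(f_values - L * dists)

Minimizing this bound over a simplex scores its suitability for the next bisection.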
Dynamic partitioning for hybrid simulation of the bistable HIV-1 transactivation network.
Griffith, Mark; Courtney, Tod; Peccoud, Jean; Sanders, William H
2006-11-15
The stochastic kinetics of a well-mixed chemical system, governed by the chemical Master equation, can be simulated using the exact methods of Gillespie. However, these methods do not scale well as systems become more complex and larger models are built to include reactions with widely varying rates, since the computational burden of simulation increases with the number of reaction events. Continuous models may provide an approximate solution and are computationally less costly, but they fail to capture the stochastic behavior of small populations of macromolecules. In this article we present a hybrid simulation algorithm that dynamically partitions the system into subsets of continuous and discrete reactions, approximates the continuous reactions deterministically as a system of ordinary differential equations (ODE) and uses a Monte Carlo method for generating discrete reaction events according to a time-dependent propensity. Our approach to partitioning is improved such that we dynamically partition the system of reactions, based on a threshold relative to the distribution of propensities in the discrete subset. We have implemented the hybrid algorithm in an extensible framework, utilizing two rigorous ODE solvers to approximate the continuous reactions, and use an example model to illustrate the accuracy and potential speedup of the algorithm when compared with exact stochastic simulation. Software and benchmark models used for this publication can be made available upon request from the authors.
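The dynamic partitioning rule can be sketched as a propensity threshold; here the threshold is a fixed multiple of the smallest nonzero propensity, a simplification of the paper's threshold relative to the propensity distribution of the discrete subset:

    def partition_reactions(propensities, factor=100.0):
        # Split reaction indices into (continuous, discrete) groups: fast
        # reactions are integrated as ODEs, slow ones fire as discrete events.
        nonzero = [a for a in propensities if a > 0]
        if not nonzero:
            return [], list(range(len(propensities)))
        threshold = factor * min(nonzero)
        continuous = {i for i, a in enumerate(propensities) if a > threshold}
        discrete = [i for i in range(len(propensities)) if i not in continuous]
        return sorted(continuous), discrete

Re-evaluating this split as propensities change during the simulation is what makes the partitioning dynamic.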
Overlapping clusters for distributed computation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mirrokni, Vahab; Andersen, Reid; Gleich, David F.
2010-11-01
Scalable, distributed algorithms must address communication problems. We investigate overlapping clusters, or vertex partitions that intersect, for graph computations. This setup stores more of the graph than required but then affords the ease of implementation of vertex partitioned algorithms. Our hope is that this technique allows us to reduce communication in a computation on a distributed graph. The motivation above draws on recent work in communication avoiding algorithms. Mohiyuddin et al. (SC09) design a matrix-powers kernel that gives rise to an overlapping partition. Fritzsche et al. (CSC2009) develop an overlapping clustering for a Schwarz method. Both techniques extend an initial partitioning with overlap. Our procedure generates overlap directly. Indeed, Schwarz methods are commonly used to capitalize on overlap. Elsewhere, overlapping communities (Ahn et al., Nature 2009; Mishra et al., WAW2007) are now a popular model of structure in social networks. These have long been studied in statistics (Cole and Wishart, CompJ 1970). We present two types of results: (i) an estimated swapping probability ρ∞; and (ii) the communication volume of a parallel PageRank solution (link-following α = 0.85) using an additive Schwarz method. The volume ratio is the amount of extra storage for the overlap (2 means we store the graph twice). Below, as the ratio increases, the swapping probability and PageRank communication volume decrease.
Resource partitioning facilitates coexistence in sympatric cetaceans in the California Current.
Fossette, Sabrina; Abrahms, Briana; Hazen, Elliott L; Bograd, Steven J; Zilliacus, Kelly M; Calambokidis, John; Burrows, Julia A; Goldbogen, Jeremy A; Harvey, James T; Marinovic, Baldo; Tershy, Bernie; Croll, Donald A
2017-11-01
Resource partitioning is an important process driving habitat use and foraging strategies in sympatric species that potentially compete. Differences in foraging behavior are hypothesized to contribute to species coexistence by facilitating resource partitioning, but little is known about the multiple mechanisms for partitioning that may occur simultaneously. Studies are further limited in the marine environment, where the spatial and temporal distribution of resources is highly dynamic and subsequently difficult to quantify. We investigated potential pathways by which foraging behavior may facilitate resource partitioning in two of the largest co-occurring and closely related species on Earth, blue (Balaenoptera musculus) and humpback (Megaptera novaeangliae) whales. We integrated multiple long-term datasets (line-transect surveys, whale-watching records, net sampling, stable isotope analysis, and remote-sensing of oceanographic parameters) to compare the diet, phenology, and distribution of the two species during their foraging periods in the highly productive waters of Monterey Bay, California, USA within the California Current Ecosystem. Our long-term study reveals that blue and humpback whales likely facilitate sympatry by partitioning their foraging along three axes: trophic, temporal, and spatial. Blue whales were specialists foraging on krill, predictably targeting a seasonal peak in krill abundance, were present in the bay for an average of 4.7 months, and were spatially restricted at the continental shelf break. In contrast, humpback whales were generalists apparently feeding on a mixed diet of krill and fishes depending on relative abundances, were present in the bay for a more extended period (average of 6.6 months), and had a broader spatial distribution at the shelf break and inshore. Ultimately, competition for common resources can lead to behavioral, morphological, and physiological character displacement between sympatric species. Understanding the mechanisms for species coexistence is both fundamental to maintaining biodiverse ecosystems, and provides insight into the evolutionary drivers of morphological differences in closely related species.
NASA Astrophysics Data System (ADS)
Wagstaff, Kiri L.
2012-03-01
On obtaining a new data set, the researcher is immediately faced with the challenge of obtaining a high-level understanding from the observations. What does a typical item look like? What are the dominant trends? How many distinct groups are included in the data set, and how is each one characterized? Which observable values are common, and which rarely occur? Which items stand out as anomalies or outliers from the rest of the data? This challenge is exacerbated by the steady growth in data set size [11] as new instruments push into new frontiers of parameter space, via improvements in temporal, spatial, and spectral resolution, or by the desire to "fuse" observations from different modalities and instruments into a larger-picture understanding of the same underlying phenomenon. Data clustering algorithms provide a variety of solutions for this task. They can generate summaries, locate outliers, compress data, identify dense or sparse regions of feature space, and build data models. It is useful to note up front that "clusters" in this context refer to groups of items within some descriptive feature space, not (necessarily) to "galaxy clusters" which are dense regions in physical space. The goal of this chapter is to survey a variety of data clustering methods, with an eye toward their applicability to astronomical data analysis. In addition to improving the individual researcher's understanding of a given data set, clustering has led directly to scientific advances, such as the discovery of new subclasses of stars [14] and gamma-ray bursts (GRBs) [38]. All clustering algorithms seek to identify groups within a data set that reflect some observed, quantifiable structure. Clustering is traditionally an unsupervised approach to data analysis, in the sense that it operates without any direct guidance about which items should be assigned to which clusters. There has been a recent trend in the clustering literature toward supporting semisupervised or constrained clustering, in which some partial information about item assignments or other components of the resulting output are already known and must be accommodated by the solution. Some algorithms seek a partition of the data set into distinct clusters, while others build a hierarchy of nested clusters that can capture taxonomic relationships. Some produce a single optimal solution, while others construct a probabilistic model of cluster membership. More formally, clustering algorithms operate on a data set X composed of items represented by one or more features (dimensions). These could include physical location, such as right ascension and declination, as well as other properties such as brightness, color, temporal change, size, texture, and so on. Let D be the number of dimensions used to represent each item, x_i ∈ R^D. The clustering goal is to produce an organization P of the items in X that optimizes an objective function f : P → R, which quantifies the quality of solution P. Often f is defined so as to maximize similarity within a cluster and minimize similarity between clusters. To that end, many algorithms make use of a measure d : X × X → R of the distance between two items. A partitioning algorithm produces a set of clusters P = {c_1, …, c_k} such that the clusters are nonoverlapping (c_i ∩ c_j = ∅ for i ≠ j) subsets of the data set (∪_i c_i = X). Hierarchical algorithms produce a series of partitions P = {p_1, …, p_n}.
For a complete hierarchy, the number of partitions n equals the number of items in the data set; the top partition is a single cluster containing all items, and the bottom partition contains n clusters, each containing a single item. For model-based clustering, each cluster c_j is represented by a model m_j, such as the cluster center or a Gaussian distribution. The wide array of available clustering algorithms may seem bewildering, and covering all of them is beyond the scope of this chapter. Choosing among them for a particular application involves considerations of the kind of data being analyzed, algorithm runtime efficiency, and how much prior knowledge is available about the problem domain, which can dictate the nature of clusters sought. Fundamentally, the clustering method and its representation of clusters carries with it a definition of what a cluster is, and it is important that this be aligned with the analysis goals for the problem at hand. In this chapter, I emphasize this point by identifying for each algorithm the cluster representation as a model, m_j, even for algorithms that are not typically thought of as creating a "model." This chapter surveys a basic collection of clustering methods useful to any practitioner who is interested in applying clustering to a new data set. The algorithms include k-means (Section 25.2), EM (Section 25.3), agglomerative (Section 25.4), and spectral (Section 25.5) clustering, with side mentions of variants such as kernel k-means and divisive clustering. The chapter also discusses each algorithm's strengths and limitations and provides pointers to additional in-depth reading for each subject. Section 25.6 discusses methods for incorporating domain knowledge into the clustering process. This chapter concludes with a brief survey of interesting applications of clustering methods to astronomy data (Section 25.7). The chapter begins with k-means because it is both generally accessible and so widely used that understanding it can be considered a necessary prerequisite for further work in the field. EM can be viewed as a more sophisticated version of k-means that uses a generative model for each cluster and probabilistic item assignments. Agglomerative clustering is the most basic form of hierarchical clustering and provides a basis for further exploration of algorithms in that vein. Spectral clustering permits a departure from feature-vector-based clustering and can operate on data sets instead represented as affinity, or similarity, matrices—cases in which only pairwise information is known. The list of algorithms covered in this chapter is representative of those most commonly in use, but it is by no means comprehensive. There is an extensive collection of existing books on clustering that provide additional background and depth. Three early books that remain useful today are Anderberg's Cluster Analysis for Applications [3], Hartigan's Clustering Algorithms [25], and Gordon's Classification [22]. The latter covers basics on similarity measures, partitioning and hierarchical algorithms, fuzzy clustering, overlapping clustering, conceptual clustering, validation methods, and visualization or data reduction techniques such as principal components analysis (PCA), multidimensional scaling, and self-organizing maps. More recently, Jain et al. provided a useful and informative survey [27] of a variety of different clustering algorithms, including those mentioned here as well as fuzzy, graph-theoretic, and evolutionary clustering.
Everitt’s Cluster Analysis [19] provides a modern overview of algorithms, similarity measures, and evaluation methods.
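To ground the notation above, here is a minimal k-means sketch in NumPy: each cluster's model m_j is its centroid, items are assigned to the nearest centroid, and centroids are recomputed until they stop moving. The data and all parameter choices are illustrative only, not drawn from the chapter.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal k-means: the model m_j for cluster c_j is its centroid."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # initial models
    for _ in range(n_iter):
        # Assignment step: attach each item x_i to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: recompute each centroid from its members.
        new_centers = np.array([X[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Example: two Gaussian blobs in D = 2 dimensions.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
labels, centers = kmeans(X, k=2)
```

Here the objective f being (locally) optimized is the sum of squared distances from items to their assigned centroids.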
The Optimization of Automatically Generated Compilers.
1987-01-01
...than their procedural counterparts, and are also easier to analyze for storage optimizations; (2) AGs can be algorithmically checked to be non-circular... Providing algorithms to move the storage for many attributes from the structure tree into global stacks and variables... Creating AEs which build and... 3.5.2. Partitioning algorithm
Generalized fuzzy C-means clustering algorithm with improved fuzzy partitions.
Zhu, Lin; Chung, Fu-Lai; Wang, Shitong
2009-06-01
The fuzziness index m has an important influence on the clustering result of fuzzy clustering algorithms, and it should not be forced to the usual value m = 2. In view of the distinctive features of improved fuzzy partitions in applications and their limitation of allowing only m = 2, a recent advance of fuzzy clustering called fuzzy c-means clustering with improved fuzzy partitions (IFP-FCM) is extended in this paper, and a generalized algorithm called GIFP-FCM is proposed for more effective clustering. By introducing a novel membership constraint function, a new objective function is constructed, from which GIFP-FCM clustering is derived. Meanwhile, from the viewpoints of the L_p norm distance measure and competitive learning, the robustness and convergence of the proposed algorithm are analyzed. Furthermore, the classical fuzzy c-means algorithm (FCM) and IFP-FCM can be taken as two special cases of the proposed algorithm. Several experimental results, including an application to noisy image texture segmentation, are presented to demonstrate its average advantage over FCM and IFP-FCM in both clustering and robustness capabilities.
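To make the role of the fuzziness index m concrete, the following sketch implements the standard FCM updates (not GIFP-FCM itself, whose novel membership constraint function is defined in the paper); m is the exponent discussed above, and the data and parameters are placeholders.

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, eps=1e-9, seed=0):
    """Standard fuzzy c-means; the fuzziness index m controls how soft
    the membership matrix U is (m close to 1 approaches hard k-means)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=c, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        U = 1.0 / ratio.sum(axis=2)
        # Prototype update: weighted means with weights u_ij^m
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, centers

X = np.random.default_rng(1).random((200, 2))
U, centers = fcm(X, c=3, m=2.0)   # try m = 1.5 or 3.0 to see the effect
```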
A Parallel Point Matching Algorithm for Landmark Based Image Registration Using Multicore Platform
Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.
2013-01-01
Point matching is crucial for many computer vision applications. Establishing the correspondence between a large number of data points is a computationally intensive process. Some point matching related applications, such as medical image registration, require real-time or near real-time performance if applied to critical clinical applications like image-assisted surgery. In this paper, we report a new multicore platform based parallel algorithm for fast point matching in the context of landmark-based medical image registration. We introduce a non-regular data partition algorithm which utilizes the K-means clustering algorithm to group the landmarks based on the number of available processing cores, which optimizes memory usage and data transfer. We have tested our method using the IBM Cell Broadband Engine (Cell/B.E.) platform. The results demonstrate a significant speedup over the sequential implementation. The proposed data partition and parallelization algorithm, though tested only on one multicore platform, is generic by design. Therefore the parallel algorithm can be extended to other computing platforms, as well as to other point matching related applications. PMID:24308014
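The data-partition idea, grouping landmarks with K-means so that the number of groups equals the number of available cores and then processing the groups concurrently, can be sketched generically as below. The match_group function is a hypothetical stand-in for the per-group matching kernel, and scikit-learn's KMeans replaces whatever clustering routine the authors used on the Cell/B.E.

```python
import numpy as np
from multiprocessing import Pool, cpu_count
from sklearn.cluster import KMeans

def match_group(args):
    """Placeholder for the per-group point-matching kernel."""
    group_id, points = args
    return group_id, points.mean(axis=0)   # stand-in for the real matching

if __name__ == "__main__":
    landmarks = np.random.default_rng(1).random((1000, 3))
    k = cpu_count()                        # one partition per processing core
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(landmarks)
    groups = [(j, landmarks[labels == j]) for j in range(k)]
    with Pool(processes=k) as pool:        # process all groups concurrently
        results = pool.map(match_group, groups)
```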
Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng
2014-01-01
Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms. PMID:24723806
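Both objectives named here are standard quantities; as a reference, below are minimal NumPy versions of Newman's modularity (from an adjacency matrix and hard labels) and of normalized mutual information between two partitions. These helpers are illustrative, not the authors' code.

```python
import numpy as np

def modularity(A, labels):
    """Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) * delta(c_i, c_j)."""
    k = A.sum(axis=1)
    two_m = k.sum()
    same = labels[:, None] == labels[None, :]
    return (A - np.outer(k, k) / two_m)[same].sum() / two_m

def nmi(a, b):
    """Normalized mutual information between two labelings a and b."""
    n, eps = len(a), 1e-12
    ca, cb = np.unique(a), np.unique(b)
    joint = np.array([[np.sum((a == x) & (b == y)) for y in cb]
                      for x in ca]) / n
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    mi = np.sum(joint * np.log(joint / (np.outer(pa, pb) + eps) + eps))
    ha = -np.sum(pa * np.log(pa + eps))
    hb = -np.sum(pb * np.log(pb + eps))
    return 2 * mi / (ha + hb)
```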
A heuristic re-mapping algorithm reducing inter-level communication in SAMR applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steensland, Johan; Ray, Jaideep
2003-07-01
This paper aims at decreasing execution time for large-scale structured adaptive mesh refinement (SAMR) applications by proposing a new heuristic re-mapping algorithm and experimentally showing its effectiveness in reducing inter-level communication. Tests were done for five different SAMR applications. The overall goal is to engineer a dynamically adaptive meta-partitioner capable of selecting and configuring the most appropriate partitioning strategy at run-time based on current system and application state. Such a meta-partitioner can significantly reduce execution times for general SAMR applications. Computer simulations of physical phenomena are becoming increasingly popular as they constitute an important complement to real-life testing. In many cases, such simulations are based on solving partial differential equations by numerical methods. Adaptive methods are crucial to efficiently utilize computer resources such as memory and CPU. But even with adaption, the simulations are computationally demanding and yield huge data sets. Thus parallelization and the efficient partitioning of data become issues of utmost importance. Adaption causes the workload to change dynamically, calling for dynamic (re-)partitioning to maintain efficient resource utilization. The proposed heuristic algorithm reduced inter-level communication substantially. Since the complexity of the proposed algorithm is low, this decrease comes at a relatively low cost. As a consequence, we draw the conclusion that the proposed re-mapping algorithm would be useful to lower overall execution times for many large SAMR applications. Due to its usefulness and its parameterization, the proposed algorithm would constitute a natural and important component of the meta-partitioner.
Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data
Aji, Ablimit; Wang, Fusheng; Saltz, Joel H.
2013-01-01
Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven by not only geospatial problems in numerous fields, but also emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging provides high potential to support image based computer aided diagnosis. One major requirement for this is effective querying of such enormous amount of data with fast response, which is faced with two major challenges: the “big data” challenge and the high computation complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for microanatomic objects. To reduce query response time, we propose cost based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce. PMID:24501719
A Polyhedral Outer-approximation, Dynamic-discretization optimization solver, 1.x
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, Rusell; Nagarajan, Harsha; Sundar, Kaarthik
2017-09-25
In this software, we implement an adaptive, multivariate partitioning algorithm for solving mixed-integer nonlinear programs (MINLP) to global optimality. The algorithm combines ideas that exploit the structure of convex relaxations to MINLPs and bound tightening procedures.
Assessing Sustainability of Lifestyle Education for Activity Program (LEAP)
ERIC Educational Resources Information Center
Saunders, R. P.; Pate, R. R.; Dowda, M.; Ward, D. S.; Epping, J. N.; Dishman, R. K.
2012-01-01
Sustained intervention effects are needed for positive health impacts in populations; however, few published examples illustrate methods for assessing sustainability in health promotion programs. This paper describes the methods for assessing sustainability of the Lifestyle Education for Activity Program (LEAP). LEAP was a comprehensive…
NASA Astrophysics Data System (ADS)
Cahyaningrum, Rosalia D.; Bustamam, Alhadi; Siswantining, Titin
2017-03-01
Microarray technology has become one of the imperative tools in the life sciences for observing gene expression levels, including the expression of genes in patients with carcinoma. Carcinoma is a cancer that forms in epithelial tissue. These data can be analyzed, for example, to identify hereditary gene expression and to build classifications that can improve the diagnosis of carcinoma. Microarray data usually come in large dimensions, so most methods require large computing time for the grouping. Therefore, this study uses the spectral clustering method, which reduces the dimension and allows working with any kind of object. Spectral clustering is based on the spectral decomposition of a matrix representing the data in the form of a graph. After the data dimensions are reduced, the data are partitioned. One famous partitioning method is Partitioning Around Medoids (PAM), which minimizes the objective function by iteratively swapping non-medoid points with medoid points until convergence. The objective of this research is to implement spectral clustering and the PAM partitioning algorithm to obtain groups of 7457 carcinoma genes based on their similarity values. The result of this study is two groups of carcinoma genes.
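A reduced sketch of the described pipeline, spectral dimension reduction followed by a PAM-style medoid search, is given below; the Gaussian affinity, the normalized Laplacian, and the simple medoid-update loop are common textbook choices rather than details taken from the paper.

```python
import numpy as np

def spectral_embedding(X, k, sigma=1.0):
    """Embed items via the k smallest eigenvectors of the normalized Laplacian."""
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))                 # affinity graph
    D = W.sum(axis=1)
    L = np.eye(len(X)) - W / np.sqrt(np.outer(D, D))   # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)                     # ascending eigenvalues
    return vecs[:, :k]

def pam(Y, k, n_iter=100):
    """Simple medoid-update loop in the spirit of PAM: move each medoid to
    the member that minimizes total within-cluster distance, until stable."""
    D = np.linalg.norm(Y[:, None] - Y[None], axis=2)
    medoids = list(range(k))                           # naive initialization
    for _ in range(n_iter):
        labels = D[:, medoids].argmin(axis=1)
        improved = False
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members) == 0:
                continue
            costs = D[np.ix_(members, members)].sum(axis=0)
            best = members[costs.argmin()]
            if best != medoids[j]:
                medoids[j] = best
                improved = True
        if not improved:
            break
    return D[:, medoids].argmin(axis=1), medoids

X = np.random.default_rng(0).random((60, 5))   # stand-in expression matrix
labels, _ = pam(spectral_embedding(X, k=2), k=2)
```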
High Performance Computing Based Parallel HIearchical Modal Association Clustering (HPAR HMAC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patlolla, Dilip R; Surendran Nair, Sujithkumar; Graves, Daniel A.
For many applications, clustering is a crucial step in order to gain insight into the makeup of a dataset. The best approach to a given problem often depends on a variety of factors, such as the size of the dataset, time restrictions, and soft clustering requirements. The HMAC algorithm seeks to combine the strengths of two particular clustering approaches: model-based and linkage-based clustering. One particular weakness of HMAC is its computational complexity: HMAC is not practical for mega-scale data clustering. For high-definition imagery, a user would have to wait months or years for a result; for a 16-megapixel image, the estimated runtime skyrockets to over a decade! To improve the execution time of HMAC, it is reasonable to consider a multi-core implementation that utilizes available system resources. An existing implementation (Ray and Cheng 2014) divides the dataset into N partitions - one for each thread - prior to executing the HMAC algorithm. This implementation benefits from two types of optimization: parallelization and divide-and-conquer. By running each partition in parallel, the program is able to accelerate computation by utilizing more system resources. Although the parallel implementation provides considerable improvement over the serial HMAC, it still suffers from poor computational complexity, O(N^2). Once the maximum number of cores on a system is exhausted, the program exhibits slower behavior. We now consider a modification to HMAC that involves a recursive partitioning scheme. Our modification aims to exploit the divide-and-conquer benefits seen in the parallel HMAC implementation. At each level in the recursion tree, partitions are divided into two sub-partitions until a threshold size is reached. When a partition can no longer be divided without falling below the threshold size, the base HMAC algorithm is applied. This results in a significant speedup over the parallel HMAC.
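The recursive partitioning scheme can be sketched generically: bisect until partitions fall under the threshold size, then apply the base clustering. Here base_cluster is a placeholder for the serial HMAC step, and the median split along the widest dimension is an assumed choice, not the paper's.

```python
import numpy as np

def recursive_cluster(X, idx, threshold, base_cluster, results):
    """Divide-and-conquer: bisect partitions until they fall under the
    threshold size, then apply the base clustering algorithm to each."""
    if len(idx) <= threshold:
        results.append(base_cluster(X[idx]))
        return
    # Split along the dimension of largest spread, at the median.
    dim = X[idx].std(axis=0).argmax()
    median = np.median(X[idx, dim])
    left = idx[X[idx, dim] <= median]
    right = idx[X[idx, dim] > median]
    if len(left) == 0 or len(right) == 0:   # degenerate split; stop here
        results.append(base_cluster(X[idx]))
        return
    recursive_cluster(X, left, threshold, base_cluster, results)
    recursive_cluster(X, right, threshold, base_cluster, results)

X = np.random.default_rng(0).random((1000, 2))
results = []
recursive_cluster(X, np.arange(len(X)), threshold=200,
                  base_cluster=lambda pts: pts.mean(axis=0),  # stand-in
                  results=results)
```

Each leaf call is independent, so in a parallel setting the leaves can be dispatched to separate cores, which is the divide-and-conquer benefit the abstract describes.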
Site Partitioning for Redundant Arrays of Distributed Disks
NASA Technical Reports Server (NTRS)
Mourad, Antoine N.; Fuchs, W. Kent; Saab, Daniel G.
1996-01-01
Redundant arrays of distributed disks (RADD) can be used in a distributed computing system or database system to provide recovery in the presence of disk crashes and temporary and permanent failures of single sites. In this paper, we look at the problem of partitioning the sites of a distributed storage system into redundant arrays in such a way that the communication costs for maintaining the parity information are minimized. We show that the partitioning problem is NP-hard. We then propose and evaluate several heuristic algorithms for finding approximate solutions. Simulation results show that significant reduction in remote parity update costs can be achieved by optimizing the site partitioning scheme.
Fully Decomposable Split Graphs
NASA Astrophysics Data System (ADS)
Broersma, Hajo; Kratsch, Dieter; Woeginger, Gerhard J.
We discuss various questions around partitioning a split graph into connected parts. Our main result is a polynomial time algorithm that decides whether a given split graph is fully decomposable, i.e., whether it can be partitioned into connected parts of order α_1, α_2, ..., α_k for every α_1, α_2, ..., α_k summing up to the order of the graph. In contrast, we show that the decision problem whether a given split graph can be partitioned into connected parts of order α_1, α_2, ..., α_k for a given partition α_1, α_2, ..., α_k of the order of the graph, is NP-hard.
Signature Work: A Survey of Current Practices
ERIC Educational Resources Information Center
Peden, Wilson
2015-01-01
At the centennial annual meeting of the "Association of American Colleges and Universities" (AAC&U), President Carol Geary Schneider introduced the "LEAP Challenge," the next phase of AAC&U's Liberal Education and America's Promise (LEAP) initiative. The LEAP Challenge calls on all colleges and universities to engage…
A Novel Space Partitioning Algorithm to Improve Current Practices in Facility Placement
Jimenez, Tamara; Mikler, Armin R; Tiwari, Chetan
2012-01-01
In the presence of naturally occurring and man-made public health threats, the feasibility of regional bio-emergency contingency plans plays a crucial role in the mitigation of such emergencies. While the analysis of in-place response scenarios provides a measure of quality for a given plan, it involves human judgment to identify improvements in plans that are otherwise likely to fail. Since resource constraints and government mandates limit the availability of service provided in case of an emergency, computational techniques can determine optimal locations for providing emergency response, assuming that the uniform distribution of demand across homogeneous resources will yield an optimal service outcome. This paper presents an algorithm that recursively partitions the geographic space into sub-regions while equally distributing the population across the partitions. For this method, we have proven the existence of an upper bound on the deviation from the optimal population size for sub-regions. PMID:23853502
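A minimal version of the described recursion, splitting geographic space at the population median so both halves hold (nearly) equal population, assuming each individual is represented as a coordinate pair:

```python
import numpy as np

def partition_population(points, depth, axis=0):
    """Recursively split points at the population median so both
    sub-regions hold (as nearly as possible) equal population."""
    if depth == 0:
        return [points]
    cut = np.median(points[:, axis])
    left = points[points[:, axis] <= cut]
    right = points[points[:, axis] > cut]
    nxt = (axis + 1) % points.shape[1]        # alternate the split axis
    return (partition_population(left, depth - 1, nxt) +
            partition_population(right, depth - 1, nxt))

# 2^3 = 8 sub-regions of roughly equal population.
pop = np.random.default_rng(0).random((10000, 2))
regions = partition_population(pop, depth=3)
```

Ties at the median make the halves slightly unequal, which is exactly the kind of deviation from the optimal sub-region population that the paper bounds.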
H-PoP and H-PoPG: heuristic partitioning algorithms for single individual haplotyping of polyploids.
Xie, Minzhu; Wu, Qiong; Wang, Jianxin; Jiang, Tao
2016-12-15
Some economically important plants including wheat and cotton have more than two copies of each chromosome. With the decreasing cost and increasing read length of next-generation sequencing technologies, reconstructing the multiple haplotypes of a polyploid genome from its sequence reads becomes practical. However, the computational challenge in polyploid haplotyping is much greater than that in diploid haplotyping, and there are few related methods. This article models the polyploid haplotyping problem as an optimal poly-partition problem of the reads, called the Polyploid Balanced Optimal Partition model. For the reads sequenced from a k-ploid genome, the model tries to divide the reads into k groups such that the difference between the reads of the same group is minimized while the difference between the reads of different groups is maximized. When the genotype information is available, the model is extended to the Polyploid Balanced Optimal Partition with Genotype constraint problem. These models are all NP-hard. We propose two heuristic algorithms, H-PoP and H-PoPG, based on dynamic programming and a strategy of limiting the number of intermediate solutions at each iteration, to solve the two models, respectively. Extensive experimental results on simulated and real data show that our algorithms can solve the models effectively, and are much faster and more accurate than the recent state-of-the-art polyploid haplotyping algorithms. The experiments also show that our algorithms can deal with long reads and deep read coverage effectively and accurately. Furthermore, H-PoP might be applied to help determine the ploidy of an organism. The software is available at https://github.com/MinzhuXie/H-PoPG. CONTACT: xieminzhu@hotmail.com. Supplementary information: Supplementary data are available at Bioinformatics online.
The Partition of Multi-Resolution LOD Based on Qtm
NASA Astrophysics Data System (ADS)
Hou, M.-L.; Xing, H.-Q.; Zhao, X.-S.; Chen, J.
2011-08-01
The partition hierarchy of the Quaternary Triangular Mesh (QTM) determines the accuracy of spatial analysis and applications based on QTM. In order to resolve the problem that the partition hierarchy of QTM is limited by the level of the computer hardware, a new method of Multi-Resolution LOD (Level of Details) based on QTM is discussed in this paper. This method can make the resolution of the cells vary with the viewpoint position by partitioning the cells of QTM and selecting the particular area according to the viewpoint; by dealing with the cracks caused by different subdivisions, it satisfies the requirement of unlimited partitioning of local areas.
The Great Leap Forward: Anatomy of a Central Planning Disaster
ERIC Educational Resources Information Center
Li, Wei; Yang, Dennis Tao
2005-01-01
The Great Leap Forward disaster, characterized by a collapse in grain production and a widespread famine in China between 1959 and 1961, is found attributable to a systemic failure in central planning. Wishfully expecting a great leap in agricultural productivity from collectivization, the Chinese government accelerated its aggressive…
Domain decomposition by the advancing-partition method for parallel unstructured grid generation
NASA Technical Reports Server (NTRS)
Banihashemi, legal representative, Soheila (Inventor); Pirzadeh, Shahyar Z. (Inventor)
2012-01-01
In a method for domain decomposition for generating unstructured grids, a surface mesh is generated for a spatial domain. A location of a partition plane dividing the domain into two sections is determined. Triangular faces on the surface mesh that intersect the partition plane are identified. A partition grid of tetrahedral cells, dividing the domain into two sub-domains, is generated using a marching process in which a front comprises only faces of new cells which intersect the partition plane. The partition grid is generated until no active faces remain on the front. Triangular faces on each side of the partition plane are collected into two separate subsets. Each subset of triangular faces is renumbered locally and a local/global mapping is created for each sub-domain. A volume grid is generated for each sub-domain. The partition grid and volume grids are then merged using the local-global mapping.
Spatially extended hybrid methods: a review
2018-01-01
Many biological and physical systems exhibit behaviour at multiple spatial, temporal or population scales. Multiscale processes provide challenges when they are to be simulated using numerical techniques. While coarser methods such as partial differential equations are typically fast to simulate, they lack the individual-level detail that may be required in regions of low concentration or small spatial scale. However, to simulate at such an individual level throughout a domain and in regions where concentrations are high can be computationally expensive. Spatially coupled hybrid methods provide a bridge, allowing for multiple representations of the same species in one spatial domain by partitioning space into distinct modelling subdomains. Over the past 20 years, such hybrid methods have risen to prominence, leading to what is now a very active research area across multiple disciplines including chemistry, physics and mathematics. There are three main motivations for undertaking this review. Firstly, we have collated a large number of spatially extended hybrid methods and presented them in a single coherent document, while comparing and contrasting them, so that anyone who requires a multiscale hybrid method will be able to find the most appropriate one for their need. Secondly, we have provided canonical examples with algorithms and accompanying code, serving to demonstrate how these types of methods work in practice. Finally, we have presented papers that employ these methods on real biological and physical problems, demonstrating their utility. We also consider some open research questions in the area of hybrid method development and the future directions for the field. PMID:29491179
Some practicable applications of quadtree data structures/representation in astronomy
NASA Technical Reports Server (NTRS)
Pasztor, L.
1992-01-01
Development of the quadtree as a hierarchical data structuring technique for representing spatial data (like points, regions, surfaces, lines, curves, volumes, etc.) has been motivated to a large extent by the storage requirements of images, maps, and other multidimensional (spatially structured) data. For many spatial algorithms, the time-efficiency of quadtrees in terms of execution may be as important as their space-efficiency concerning storage conditions. Briefly, the quadtree is a class of hierarchical data structures based on the recursive partition of a square region into quadrants and sub-quadrants until a predefined limit is reached. Beyond the wide applicability of quadtrees in image processing, spatial information analysis, and building digital databases (processes becoming ordinary for the astronomical community), there may be numerous further applications in astronomy. Some of these practicable applications based on quadtree representation of astronomical data are presented and suggested for further consideration. Examples are shown for the use of point as well as region quadtrees. Statistics of different leaf and non-leaf nodes (homogeneous and heterogeneous sub-quadrants, respectively) at different levels may provide useful information on the spatial structure of the astronomical data in question. By altering the principle guiding the decomposition process, different types of spatial data may be focused on. Finally, a sampling method based on the quadtree representation of an image is proposed, which may prove efficient in the elaboration of a sampling strategy for a region where observations were carried out previously with different resolution and/or in different bands.
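The recursive decomposition underlying a point quadtree is compact enough to sketch; this toy version subdivides a square until each leaf holds at most `capacity` points (the predefined limit mentioned above). All names and parameters are illustrative.

```python
class QuadTree:
    """Point quadtree: recursively split a square into four quadrants
    until each leaf holds at most `capacity` points (assumes the points
    are not all coincident, so the recursion terminates)."""
    def __init__(self, x, y, size, points, capacity=4):
        self.x, self.y, self.size = x, y, size      # lower-left corner, side
        self.children = []
        inside = [p for p in points
                  if x <= p[0] < x + size and y <= p[1] < y + size]
        if len(inside) <= capacity:
            self.points = inside                    # leaf node
        else:
            self.points = []
            half = size / 2.0
            for dx in (0, half):
                for dy in (0, half):
                    self.children.append(
                        QuadTree(x + dx, y + dy, half, inside, capacity))

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

import random
random.seed(0)
pts = [(random.random(), random.random()) for _ in range(200)]
tree = QuadTree(0.0, 0.0, 1.0, pts)
print(len(tree.leaves()))   # node counts per level give the statistics above
```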
A hardware-oriented concurrent TZ search algorithm for High-Efficiency Video Coding
NASA Astrophysics Data System (ADS)
Doan, Nghia; Kim, Tae Sung; Rhee, Chae Eun; Lee, Hyuk-Jae
2017-12-01
High-Efficiency Video Coding (HEVC) is the latest video coding standard, in which the compression performance is double that of its predecessor, the H.264/AVC standard, while the video quality remains unchanged. In HEVC, the test zone (TZ) search algorithm is widely used for integer motion estimation because it effectively searches for a good-quality motion vector with a relatively small amount of computation. However, the complex computation structure of the TZ search algorithm makes it difficult to implement in hardware. This paper proposes a new integer motion estimation algorithm which is designed for hardware execution by modifying the conventional TZ search to allow parallel motion estimations of all prediction unit (PU) partitions. The algorithm consists of the three phases of zonal, raster, and refinement searches. At the beginning of each phase, the algorithm obtains the search points required by the original TZ search for all PU partitions in a coding unit (CU). Then, all redundant search points are removed prior to the estimation of the motion costs, and the best search points are then selected for all PUs. Compared to the conventional TZ search algorithm, experimental results show that the proposed algorithm significantly decreases the Bjøntegaard Delta bitrate (BD-BR) by 0.84%, and it also reduces the computational complexity by 54.54%.
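The gather-then-deduplicate idea can be illustrated in a few lines: collect the zonal (diamond) search points for every PU partition in a CU, then keep each point only once so that its motion cost is computed a single time. This is a schematic sketch, not the paper's hardware design; the diamond pattern and strides are illustrative.

```python
def diamond_points(center, stride):
    """Four diamond-pattern search points around a predictor."""
    cx, cy = center
    return {(cx + dx * stride, cy + dy * stride)
            for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]}

def concurrent_search_points(pu_predictors, strides=(1, 2, 4, 8)):
    """Gather zonal search points for every PU partition in the CU,
    then deduplicate so each point's cost is evaluated only once."""
    points = set()
    for center in pu_predictors:           # one predictor per PU partition
        for s in strides:
            points |= diamond_points(center, s)
    return points                          # redundant points removed

pts = concurrent_search_points([(0, 0), (2, 0), (0, 2)])
print(len(pts))   # fewer evaluations than 3 partitions x 4 strides x 4 points
```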
NASA Astrophysics Data System (ADS)
Aungkulanon, P.; Luangpaiboon, P.
2010-10-01
Nowadays, engineering problem systems are large and complicated. An effective finite sequence of instructions for solving these problems can be categorised into optimisation and meta-heuristic algorithms. Though the best decision variable levels cannot always be determined from the sets of available alternatives, meta-heuristics offer experience-based techniques that rapidly help in problem solving, learning and discovery in the hope of obtaining a more efficient or more robust procedure. All meta-heuristics provide auxiliary procedures in terms of their own toolbox of functions. It has been shown that the effectiveness of meta-heuristics depends almost exclusively on these auxiliary functions. In fact, an auxiliary procedure from one can be implemented in other meta-heuristics. The well-known meta-heuristics of the harmony search algorithm (HSA) and the shuffled frog-leaping algorithm (SFLA) are compared with their hybridisations. HSA is used to produce a near optimal solution under a consideration of the perfect state of harmony in the improvisation process of musicians. The SFLA, a population-based meta-heuristic, is a cooperative search metaphor inspired by natural memetics. It includes elements of local search and global information exchange. This study presents solution procedures for constrained and unconstrained problems with different natures of single and multi-peak surfaces, including a curved ridge surface. Both meta-heuristics are modified via the variable neighbourhood search method (VNSM) philosophy, including a modified simplex method (MSM). The basic idea is the change of neighbourhoods during the search for a better solution. The hybridisations proceed by a descent method to a local minimum and then explore, systematically or at random, increasingly distant neighbourhoods of this local solution. The results show that the variant of HSA with VNSM and MSM seems to be better in terms of the mean and variance of the design points and yields achieved.
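For reference, the skeleton of the SFLA discussed here, partitioning the population into memeplexes by rank and leaping the worst frog toward better ones, can be sketched as follows using the common SFLA update rule; the sphere objective and all parameter values are placeholders.

```python
import numpy as np

def sfla(f, dim, n_frogs=30, n_memeplexes=5, n_gen=100, bound=5.0, seed=0):
    """Skeleton shuffled frog-leaping algorithm (minimization)."""
    rng = np.random.default_rng(seed)
    frogs = rng.uniform(-bound, bound, (n_frogs, dim))
    for _ in range(n_gen):
        order = np.argsort([f(x) for x in frogs])      # best frog first
        frogs = frogs[order]
        best_global = frogs[0]
        # Shuffle frogs into memeplexes by rank (1st to m1, 2nd to m2, ...).
        for m in range(n_memeplexes):
            idx = np.arange(m, n_frogs, n_memeplexes)
            worst, best_local = idx[-1], idx[0]
            step = rng.random(dim) * (frogs[best_local] - frogs[worst])
            trial = frogs[worst] + step                # leap toward local best
            if f(trial) >= f(frogs[worst]):
                step = rng.random(dim) * (best_global - frogs[worst])
                trial = frogs[worst] + step            # leap toward global best
            if f(trial) < f(frogs[worst]):
                frogs[worst] = trial
            else:                                      # still worse: censor
                frogs[worst] = rng.uniform(-bound, bound, dim)
    return min(frogs, key=f)

best = sfla(lambda x: np.sum(x ** 2), dim=10)   # sphere benchmark surface
```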
The Leap Challenge: Transforming for Students, Essential for Liberal Education
ERIC Educational Resources Information Center
Schneider, Carol Geary
2015-01-01
At the centennial annual meeting of the "Association of American Colleges & Universities" (AAC&U) in January 2015, there was an announcement to participants of the release of the "LEAP Challenge." The key concept at the center of the LEAP Challenge is that all college students need to prepare to contribute in a world…
Tyne, Julian A; Johnston, David W; Christiansen, Fredrik; Bejder, Lars
2017-01-01
Selective forces shape the evolution of wildlife behavioural strategies and influence the spatial and temporal partitioning of behavioural activities to maximize individual fitness. Globally, wildlife is increasingly exposed to human activities which may affect their behavioural activities. The ability of wildlife to compensate for the effects of human activities may have implications for their resilience to disturbance. Resilience theory suggests that behavioural systems which are constrained in their repertoires are less resilient to disturbance than flexible systems. Using behavioural time-series data, we show that spinner dolphins (Stenella longirostris) spatially and temporally partition their behavioural activities on a daily basis. Specifically, spinner dolphins were never observed foraging during daytime, where resting was the predominant activity. Travelling and socializing probabilities were higher in early mornings and late afternoons when dolphins were returning from or preparing for nocturnal feeding trips, respectively. The constrained nature of spinner dolphin behaviours suggests they are less resilient to human disturbance than other cetaceans. These dolphins experience the highest exposure rates to human activities ever reported for any cetaceans. Over the last 30 years human activities have increased significantly in Hawaii, but the spinner dolphins still inhabit these bays. Recent abundance estimates (2011 and 2012) however, are lower than all previous estimates (1979-1981, 1989-1992 and 2003), indicating a possible long-term impact. Quantification of the spatial and temporal partitioning of wildlife behavioural schedules provides critical insight for conservation measures that aim to mitigate the effects of human disturbance.
Exact and approximate stochastic simulation of intracellular calcium dynamics.
Wieder, Nicolas; Fink, Rainer H A; Wegner, Frederic von
2011-01-01
In simulations of chemical systems, the main task is to find an exact or approximate solution of the chemical master equation (CME) that satisfies certain constraints with respect to computation time and accuracy. While Brownian motion simulations of single molecules are often too time consuming to represent the mesoscopic level, the classical Gillespie algorithm is a stochastically exact algorithm that provides satisfying results in the representation of calcium microdomains. Gillespie's algorithm can be approximated via the tau-leap method and the chemical Langevin equation (CLE). Both methods lead to a substantial acceleration in computation time and a relatively small decrease in accuracy. Elimination of the noise terms leads to the classical, deterministic reaction rate equations (RRE). For complex multiscale systems, hybrid simulations are increasingly proposed to combine the advantages of stochastic and deterministic algorithms. An often used exemplary cell type in this context are striated muscle cells (e.g., cardiac and skeletal muscle cells). The properties of these cells are well described and they express many common calcium-dependent signaling pathways. The purpose of the present paper is to provide an overview of the aforementioned simulation approaches and their mutual relationships in the spectrum ranging from stochastic to deterministic algorithms.
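As a concrete anchor for the methods surveyed, here is a minimal Gillespie direct-method SSA for a toy reversible reaction A <-> B; the species and rate constants are illustrative only.

```python
import numpy as np

def gillespie_ssa(x0, rates, stoich, t_end, seed=0):
    """Gillespie direct method: a stochastically exact trajectory."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=float)
    history = [(t, x.copy())]
    while t < t_end:
        a = np.array([r(x) for r in rates])   # propensities
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)        # time to the next reaction
        j = rng.choice(len(a), p=a / a0)      # which reaction fires
        x += stoich[j]
        history.append((t, x.copy()))
    return history

# Toy system: A -> B (k1 = 1.0), B -> A (k2 = 0.5).
rates = [lambda x: 1.0 * x[0], lambda x: 0.5 * x[1]]
stoich = np.array([[-1, 1], [1, -1]])
traj = gillespie_ssa([100, 0], rates, stoich, t_end=10.0)
```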
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.; Crockett, Thomas W.; Nicol, David M.
1993-01-01
Binary dissection is widely used to partition non-uniform domains over parallel computers. This algorithm does not consider the perimeter, surface area, or aspect ratio of the regions being generated and can yield decompositions that have a poor communication to computation ratio. Parametric Binary Dissection (PBD) is a new algorithm in which each cut is chosen to minimize load + λ × (shape). In a 2 (or 3) dimensional problem, load is the amount of computation to be performed in a subregion and shape could refer to the perimeter (respectively surface) of that subregion. Shape is a measure of communication overhead, and the parameter λ permits us to trade off load imbalance against communication overhead. When λ is zero, the algorithm reduces to plain binary dissection. This algorithm can be used to partition graphs embedded in 2- or 3-d. Load is the number of nodes in a subregion, shape the number of edges that leave that subregion, and λ the ratio of the time to communicate over an edge to the time to compute at a node. An algorithm is presented that finds the depth-d parametric dissection of an embedded graph with n vertices and e edges in O(max(n log n, de)) time, which is an improvement over the O(dn log n) time of plain binary dissection. Parallel versions of this algorithm are also presented; the best of these requires O((n/p) log^3 p) time on a p-processor hypercube, assuming graphs of bounded degree. How PBD is applied to 3-d unstructured meshes and yields partitions that are better than those obtained by plain dissection is described. Its application to the color image quantization problem is also discussed, in which samples in a high-resolution color space are mapped onto a lower-resolution space in a way that minimizes the color error.
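The heart of PBD is the choice of cut. The sketch below scores candidate cuts for a 2-d point set by the larger of the two subregion costs load + λ × shape, with a crude near-cut point count standing in for the perimeter term; this scoring is a simplified reading of the paper's cost function, not its actual procedure.

```python
import numpy as np

def pbd_cut(points, lam, axis=0):
    """Pick the cut along `axis` minimizing the larger subregion cost
    load + lam * shape. Load = point count; shape = points near the
    cut line, a crude proxy for edges crossing the boundary."""
    coords = points[:, axis]
    best_cut, best_score = None, np.inf
    for c in np.unique(coords)[1:-1]:
        near = (np.abs(coords - c) < 0.05).sum()       # boundary proxy
        cost_left = (coords < c).sum() + lam * near
        cost_right = (coords >= c).sum() + lam * near
        score = max(cost_left, cost_right)
        if score < best_score:
            best_cut, best_score = c, score
    return best_cut

pts = np.random.default_rng(0).random((500, 2))
cut = pbd_cut(pts, lam=0.5)   # lam = 0 reduces to plain balanced bisection
```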
The number comb for a soil physical properties dynamic measurement
NASA Astrophysics Data System (ADS)
Olechko, K.; Patiño, P.; Tarquis, A. M.
2012-04-01
We propose the prime numbers distribution extracted from soil digital multiscale images and from time series of some physical properties as a precise indicator of the spatial and temporal dynamics under soil management changes. With this new indicator, soil dynamics can be studied as a critical phenomenon, where each phase transition is estimated and modeled by the graph-partitioning-induced phase transition. The critical point of the prime numbers distribution was correlated with the beginning of the physical degradation of Andosols, Vertisols and saline soils under unsustainable soil management in the Michoacan, Guanajuato and Veracruz States of Mexico. The data banks covering long time periods (between 10 and 28 years) were statistically compared using RISK 5.0 software and our own algorithms. Our approach enables us to distill free-form natural laws of the dynamics of soil physical properties directly from the experimental data. The original approaches of Richter (1987) and Schmidt and Lipson (2009) were very useful in designing the algorithms to identify Hamiltonians, Lagrangians and other laws of geometric and momentum conservation, especially for the erosion case.
Pautz, Shawn D.; Bailey, Teresa S.
2016-11-29
Here, the efficiency of discrete ordinates transport sweeps depends on the scheduling algorithm, the domain decomposition, the problem to be solved, and the computational platform. Sweep scheduling algorithms may be categorized by their approach to several issues. In this paper we examine the strategy of domain overloading for mesh partitioning as one of the components of such algorithms. In particular, we extend the domain overloading strategy, previously defined and analyzed for structured meshes, to the general case of unstructured meshes. We also present computational results for both the structured and unstructured domain overloading cases. We find that an appropriate amount of domain overloading can greatly improve the efficiency of parallel sweeps for both structured and unstructured partitionings of the test problems examined on up to 10^5 processor cores.
Partitioning Algorithms for Simultaneously Balancing Iterative and Direct Methods
2004-03-03
...is defined as the ratio of the highest partition weight over the average partition weight..., and the edge cut is minimized. The load imbalance is the constraint we have to satisfy, and... that the initial partitioning can be improved [16, 19, 20]. 3 Problem Definition and Challenges. Consider a graph G with n vertices...
Bringloe, Trevor T; Cottenie, Karl; Martin, Gillian K; Adamowicz, Sarah J
2016-12-01
Additive diversity partitioning (α, β, and γ) is commonly used to study the distribution of species-level diversity across spatial scales. Here, we first investigate whether published studies of additive diversity partitioning show signs of difficulty attaining species-level resolution due to inherent limitations with morphological identifications. Second, we present a DNA barcoding approach to delineate specimens of stream caddisfly larvae (order Trichoptera) and consider the importance of taxonomic resolution on classical (additive) measures of beta (β) diversity. Caddisfly larvae were sampled using a hierarchical spatial design in two regions (subarctic Churchill, Manitoba, Canada; temperate Pennsylvania, USA) and then additively partitioned according to Barcode Index Numbers (molecular clusters that serve as a proxy for species), genus, and family levels; diversity components were expressed as proportional species turnover. We screened 114 articles of additive diversity partitioning and found that a third reported difficulties with achieving species-level identifications, with a clear taxonomic tendency towards challenges identifying invertebrate taxa. Regarding our own study, caddisfly BINs appeared to show greater subregional turnover (e.g., proportional additive β) compared to genus or family levels. Diversity component studies failing to achieve species resolution due to morphological identifications may therefore be underestimating diversity turnover at larger spatial scales.
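For readers unfamiliar with the additive framework, β is simply what remains of the regional diversity γ after subtracting the mean local diversity α; a toy arithmetic sketch with made-up species sets:

```python
# Additive diversity partitioning: gamma = mean(alpha) + beta.
# Toy example with species (or BIN) sets from three local sites.
sites = [{"sp1", "sp2", "sp3"},
         {"sp2", "sp3", "sp4"},
         {"sp1", "sp4", "sp5"}]

alpha_bar = sum(len(s) for s in sites) / len(sites)   # mean local richness
gamma = len(set.union(*sites))                        # regional richness
beta = gamma - alpha_bar                              # among-site component
turnover = beta / gamma                               # proportional turnover

print(alpha_bar, gamma, beta, turnover)   # 3.0 5 2.0 0.4
```

Expressing β as a proportion of γ, as in the last line, is the "proportional species turnover" used in the study; coarser taxonomic resolution (genus, family) shrinks γ and typically deflates this quantity.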
Multiway spectral community detection in networks
NASA Astrophysics Data System (ADS)
Zhang, Xiao; Newman, M. E. J.
2015-11-01
One of the most widely used methods for community detection in networks is the maximization of the quality function known as modularity. Of the many maximization techniques that have been used in this context, some of the most conceptually attractive are the spectral methods, which are based on the eigenvectors of the modularity matrix. Spectral algorithms have, however, been limited, by and large, to the division of networks into only two or three communities, with divisions into more than three being achieved by repeated two-way division. Here we present a spectral algorithm that can directly divide a network into any number of communities. The algorithm makes use of a mapping from modularity maximization to a vector partitioning problem, combined with a fast heuristic for vector partitioning. We compare the performance of this spectral algorithm with previous approaches and find it to give superior results, particularly in cases where community sizes are unbalanced. We also give demonstrative applications of the algorithm to two real-world networks and find that it produces results in good agreement with expectations for the networks studied.
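The starting point of the spectral approach is compact: form the modularity matrix B = A - k k^T / 2m and take its leading eigenvectors, which supply the vertex vectors fed into the vector-partitioning step. The sketch below covers only that starting point; the paper's vector-partitioning heuristic itself is not reproduced here.

```python
import numpy as np

def modularity_matrix(A):
    """B_ij = A_ij - k_i k_j / 2m for adjacency matrix A."""
    k = A.sum(axis=1)
    return A - np.outer(k, k) / k.sum()

def leading_vectors(A, p):
    """Top p eigenvectors of B, scaled by the square roots of their
    (clipped, non-negative) eigenvalues: one vertex vector per row."""
    vals, vecs = np.linalg.eigh(modularity_matrix(A))   # ascending order
    top = np.argsort(vals)[::-1][:p]
    scale = np.sqrt(np.clip(vals[top], 0, None))
    return vecs[:, top] * scale
```

Assigning each vertex vector to whichever community vector sum it most reinforces is the vector-partitioning problem the algorithm then solves heuristically.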
A new memetic algorithm for mitigating tandem automated guided vehicle system partitioning problem
NASA Astrophysics Data System (ADS)
Pourrahimian, Parinaz
2017-11-01
Automated Guided Vehicle Systems (AGVS) provide the flexibility and automation demanded by Flexible Manufacturing Systems (FMS). However, with the growing concern about responsible management of resource use, it is crucial to manage these vehicles in an efficient way in order to reduce travel time and control conflicts and congestion. This paper presents the development of a new Memetic Algorithm (MA) for optimizing the partitioning problem of tandem AGVS. MAs employ a Genetic Algorithm (GA) as a global search and apply a local search to bring the solutions to a local optimum point. A new Tabu Search (TS) has been developed and combined with a GA to refine the individuals newly generated by the GA. The aim of the proposed algorithm is to minimize the maximum workload of the system. Finally, the performance of the proposed algorithm is evaluated using Matlab. This study also compares the objective function of the proposed MA with that of the GA. The results show that the TS, as a local search, significantly improves the objective function of the GA for different system sizes with large and small numbers of zones, by 1.26 on average.
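A compact sketch of the memetic pattern described, GA-style selection and mutation plus a local refinement applied to newly generated individuals, for a toy zone-assignment encoding; the greedy local_search here is a simplified stand-in for the paper's Tabu Search, and the workload model is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATIONS, N_ZONES = 20, 4
work = rng.random(N_STATIONS)                  # workload per station

def max_workload(assign):
    """Objective to minimize: the busiest zone's total workload."""
    return max(work[assign == z].sum() for z in range(N_ZONES))

def mutate(assign):
    child = assign.copy()
    child[rng.integers(N_STATIONS)] = rng.integers(N_ZONES)
    return child

def local_search(assign, steps=30):
    """Greedy single-move refinement standing in for the Tabu Search."""
    best = assign.copy()
    for _ in range(steps):
        trial = mutate(best)
        if max_workload(trial) < max_workload(best):
            best = trial
    return best

# Memetic loop: selection and mutation, then local refinement of children.
pop = [rng.integers(N_ZONES, size=N_STATIONS) for _ in range(20)]
for _ in range(50):
    pop.sort(key=max_workload)
    pop = pop[:10] + [local_search(mutate(p)) for p in pop[:10]]
best = min(pop, key=max_workload)
```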
NASA Astrophysics Data System (ADS)
Sallum, Ulysses W.; Zheng, Xiang; Verma, Sarika; Hasan, Tayyaba
2009-06-01
We report on a β-lactamase enzyme-activated photosensitizer (β-LEAP). We aim to exploit drug resistance mechanisms to selectively release photosensitizers (PSs) for a specific photodynamic antimicrobial effect and reduced host tissue damage. Consequently, the fluorescence emission intensity of the PSs increases and allows for the detection of enzyme activity. In this work we sought to evaluate β-LEAP for use as a sensitive molecular probe. We have previously reported the enzyme-specific antibacterial action of β-LEAP. Here we report the use of β-LEAP for the rapid functional definition of a β-lactamase.
Research of image retrieval technology based on color feature
NASA Astrophysics Data System (ADS)
Fu, Yanjun; Jiang, Guangyu; Chen, Fengying
2009-10-01
Recently, with the development of communication and computer technology and the improvement of storage technology and the capability of digital imaging equipment, more image resources are available to us than ever. A solution for locating the desired image quickly and accurately is therefore needed. The early method was to attach keywords to images for database search, but this method becomes very difficult as the number of images grows. In order to overcome the limitations of traditional keyword searching, content-based image retrieval technology arose, and it is now a hot research subject. Color image retrieval is an important part of it, and color is the most important feature for color image retrieval. Three key questions on how to make use of the color characteristic are discussed in the paper: the representation of color, the extraction of color features and the measurement of similarity based on color. On this basis, the extraction of the color histogram feature is discussed in particular. Considering the advantages and disadvantages of the overall histogram and the partition histogram, a new method based on the partition-overall histogram is proposed. Its basic idea is to divide the image space according to a certain strategy, and then calculate the color histogram of each block as the color feature of that block. Users choose the blocks that contain important spatial information and assign the corresponding weights. The system calculates the distances between the corresponding blocks that the users chose. The remaining blocks are merged into partial overall histograms, and the distances between them are calculated as well. All the distances are then accumulated as the real distance between two pictures. The partition-overall histogram comprehensively utilizes the advantages of the two methods above: choosing blocks makes the feature contain more spatial information, which improves performance, and the distances between partition-overall histograms are invariant to rotation and translation. The HSV color space is used to represent the color characteristics of the image, which suits the visual characteristics of humans. Taking advantage of human perception of color, it quantifies the color sectors with unequal intervals and obtains the feature vector. Finally, it matches the similarity of images with the histogram intersection algorithm applied to the partition-overall histogram. Users can choose a demonstration image to express the desired visual query, and can also adjust the weights through relevance feedback to obtain the best search results. An image retrieval system based on these approaches is presented. The results of the experiments show that image retrieval based on the partition-overall histogram can keep the spatial distribution information while extracting color features efficiently, and that it is superior to normal color histograms in precision rate when searching. The query precision rate is more than 95%. In addition, the efficient block representation lowers the complexity of the images to be searched, and thus the searching efficiency is increased. The image retrieval algorithm based on the partition-overall histogram proposed in this paper is efficient and effective.
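A reduced sketch of the histogram machinery described: divide the image into blocks, build a quantized hue histogram per block, and compare images with the histogram-intersection measure, optionally weighting user-chosen blocks. The 4x4 grid and 16 uniform hue bins are illustrative simplifications of the paper's unequal-interval HSV quantization.

```python
import numpy as np

def block_histograms(hsv, grid=4, bins=16):
    """Per-block hue histograms for an (H, W, 3) HSV image in [0, 1)."""
    H, W, _ = hsv.shape
    hs, ws = H // grid, W // grid
    hists = []
    for i in range(grid):
        for j in range(grid):
            block = hsv[i*hs:(i+1)*hs, j*ws:(j+1)*ws, 0]   # hue channel
            h, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
            hists.append(h / h.sum())
    return np.array(hists)                  # shape (grid*grid, bins)

def intersection(h1, h2):
    """Histogram intersection similarity: sum of bin-wise minima."""
    return np.minimum(h1, h2).sum()

def image_distance(hsv_a, hsv_b, weights=None):
    """Accumulate per-block intersection distances (1 - similarity);
    `weights` lets a user emphasize blocks, as in relevance feedback."""
    ha, hb = block_histograms(hsv_a), block_histograms(hsv_b)
    sims = np.array([intersection(a, b) for a, b in zip(ha, hb)])
    w = np.ones(len(sims)) if weights is None else np.asarray(weights)
    return np.sum(w * (1.0 - sims))

rng = np.random.default_rng(0)
img_a, img_b = rng.random((64, 64, 3)), rng.random((64, 64, 3))
print(image_distance(img_a, img_b))
```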
Discrete stochastic simulation methods for chemically reacting systems.
Cao, Yang; Samuels, David C
2009-01-01
Discrete stochastic chemical kinetics describe the time evolution of a chemically reacting system by taking into account the fact that, in reality, chemical species are present with integer populations and exhibit some degree of randomness in their dynamical behavior. In recent years, with the development of new techniques to study biochemistry dynamics in a single cell, there are increasing studies using this approach to chemical kinetics in cellular systems, where the small copy number of some reactant species in the cell may lead to deviations from the predictions of the deterministic differential equations of classical chemical kinetics. This chapter reviews the fundamental theory related to stochastic chemical kinetics and several simulation methods based on that theory. We focus on nonstiff biochemical systems and the two most important discrete stochastic simulation methods: Gillespie's stochastic simulation algorithm (SSA) and the tau-leaping method. Different implementation strategies of these two methods are discussed. Then we recommend a relatively simple and efficient strategy that combines the strengths of the two methods: the hybrid SSA/tau-leaping method. The implementation details of the hybrid strategy are given here and a related software package is introduced. Finally, the hybrid method is applied to simple biochemical systems as a demonstration of its application.
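To complement the exact SSA, a single plain tau-leap step can be written as follows: all reaction counts over an interval tau are drawn at once from Poisson distributions, trading exactness for speed. The clip guarding against negative populations is a crude placeholder for the proper safeguards discussed in the chapter, and the toy A <-> B system is illustrative only.

```python
import numpy as np

def tau_leap_step(x, rates, stoich, tau, rng):
    """One plain tau-leap: fire Poisson(a_j * tau) copies of each
    reaction simultaneously, then clip to keep populations non-negative
    (a crude guard; proper methods shrink tau instead)."""
    a = np.array([r(x) for r in rates])       # propensities at the leap start
    counts = rng.poisson(a * tau)             # reaction firings over tau
    return np.maximum(x + counts @ stoich, 0)

rng = np.random.default_rng(0)
rates = [lambda x: 1.0 * x[0], lambda x: 0.5 * x[1]]   # A -> B, B -> A
stoich = np.array([[-1, 1], [1, -1]])
x = np.array([100.0, 0.0])
for _ in range(100):
    x = tau_leap_step(x, rates, stoich, tau=0.05, rng=rng)
```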
Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications
NASA Technical Reports Server (NTRS)
Sun, Xian-He
1997-01-01
Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm and Reduced Parallel Diagonal Dominant (RPDD) algorithm have been carefully studied on different parallel platforms for different applications, and a NASA simulation code developed by Man M. Rai and his colleagues has been parallelized and implemented based on data dependency analysis. These achievements are addressed in detail in the paper.
Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry
Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna
2015-01-01
Mixture modeling of mass spectra is an approach with many potential applications including peak detection and quantification, smoothing, de-noising, feature extraction and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite the potential advantages of mixture modeling of mass spectra of peptide/protein mixtures highlighted in several papers, along with some preliminary results, the mixture modeling approach has so far not been developed to a stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the mixture models of the fragments are then aggregated to form the mixture model of the whole spectrum. We compare the elaborated algorithm to existing algorithms for peak detection and demonstrate improvements in peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the elaborated algorithm to real proteomic datasets of low and high resolution. PMID:26230717
Scheduling Independent Partitions in Integrated Modular Avionics Systems
Du, Chenglie; Han, Pengcheng
2016-01-01
Recently the integrated modular avionics (IMA) architecture has been widely adopted by the avionics industry due to its strong partition mechanism. Although the IMA architecture can achieve effective cost reduction and reliability enhancement in the development of avionics systems, it results in a complex allocation and scheduling problem. All partitions in an IMA system should be integrated together according to a proper schedule such that their deadlines will be met even under the worst case situations. In order to help provide a proper scheduling table for all partitions in IMA systems, we study the schedulability of independent partitions on a multiprocessor platform in this paper. We firstly present an exact formulation to calculate the maximum scaling factor and determine whether all partitions are schedulable on a limited number of processors. Then with a Game Theory analogy, we design an approximation algorithm to solve the scheduling problem of partitions, by allowing each partition to optimize its own schedule according to the allocations of the others. Finally, simulation experiments are conducted to show the efficiency and reliability of the approach proposed in terms of time consumption and acceptance ratio. PMID:27942013
Graph Partitioning for Parallel Applications in Heterogeneous Grid Environments
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Kumar, Shailendra; Das, Sajal K.; Biegel, Bryan (Technical Monitor)
2002-01-01
The problem of partitioning irregular graphs and meshes for parallel computations on homogeneous systems has been extensively studied. However, these partitioning schemes fail when the target system architecture exhibits heterogeneity in resource characteristics. With the emergence of technologies such as the Grid, it is imperative to study the partitioning problem taking into consideration the differing capabilities of such distributed heterogeneous systems. In our model, the heterogeneous system consists of processors with varying processing power and an underlying non-uniform communication network. We present in this paper a novel multilevel partitioning scheme for irregular graphs and meshes, that takes into account issues pertinent to Grid computing environments. Our partitioning algorithm, called MiniMax, generates and maps partitions onto a heterogeneous system with the objective of minimizing the maximum execution time of the parallel distributed application. For experimental performance study, we have considered both a realistic mesh problem from NASA as well as synthetic workloads. Simulation results demonstrate that MiniMax generates high quality partitions for various classes of applications targeted for parallel execution in a distributed heterogeneous environment.
Listing of Education in Archaeological Programs: The LEAP Clearinghouse 1990-1991 Summary Report.
ERIC Educational Resources Information Center
Knoll, Patricia C., Ed.
This is the second catalog of the National Park Service's Listing of Education in Archaeological Programs (LEAP). It consists of the information incorporated into the LEAP computerized database between 1990 and 1991. The database is a listing of federal, state, local, and private projects promoting public awareness of U.S. archaeology including…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-06
... Rule Change To Increase the Maximum Term for LEAPS to Fifteen Years November 30, 2012. Pursuant to... maximum term for Long-Term Equity Options Series (``LEAPS'') to fifteen years. The text of the proposed... rule change is to increase the maximum term for all LEAPS. Currently, the maximum term on BOX for...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-25
... Period for Commission Action on Proposed Rule Change To Increase the Maximum Term for LEAPS to Fifteen... proposed rule change to increase the maximum term for Long-Term Equity Options Series (``LEAPS'') to... to the comment. Currently, the maximum term for equity and interest rate LEAPS is three years and the...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-13
... Rule Change To Increase the Maximum Term for LEAPS to Fifteen Years November 6, 2012. I. Introduction... change to increase the maximum term for Long-Term Equity Options Series (``LEAPS'') to fifteen years. The.... Description of the Proposal Currently, the maximum term for equity and interest rate LEAPS is 36 months (three...
Four-Year Follow-Up of Children in the LEAP Randomized Trial: Some Planned and Accidental Findings
ERIC Educational Resources Information Center
Strain, Phillip S.
2017-01-01
This article reports on a 4-year follow-up study from the Learning Experiences and Alternative Program for Preschoolers and Their Parents (LEAP) randomized trial of early intervention for young children with autism. Overall, participants from LEAP classes were marginally superior to comparison class children on elementary school outcomes specific…
A fast exact simulation method for a class of Markov jump processes.
Li, Yao; Hu, Lili
2015-11-14
A new method of the stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulations of a class of Markov jump processes, is presented in this paper. The HLM has a conditionally constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly implement a hash-table-like bucket sort algorithm for all times of occurrence covered by a time step with length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large scale problems.
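The bucket-sort idea behind the HLM can be sketched in a few lines: clock firing times that fall inside the current window of length τ are hashed into equal-width buckets, so events are retrieved in time order without maintaining a full priority queue over all clocks. This is an illustrative reduction; the bucket count and data layout are chosen for brevity rather than taken from the paper.

```python
def bucket_schedule(event_times, t0, tau, n_buckets=64):
    """Hash event firing times in [t0, t0 + tau) into equal-width buckets;
    processing buckets in order yields events in time order, with an exact
    sort needed only within each small bucket."""
    buckets = [[] for _ in range(n_buckets)]
    width = tau / n_buckets
    for clock_id, t in event_times.items():
        if t0 <= t < t0 + tau:
            idx = min(int((t - t0) / width), n_buckets - 1)
            buckets[idx].append((t, clock_id))
    for b in buckets:
        b.sort()                      # cheap: each bucket holds few events
        for t, clock_id in b:
            yield t, clock_id

# usage: three exponential clocks, one firing outside the current window
times = {"r1": 0.12, "r2": 0.03, "r3": 1.7}
print(list(bucket_schedule(times, t0=0.0, tau=1.0)))
```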
Algorithms for parallel flow solvers on message passing architectures
NASA Technical Reports Server (NTRS)
Vanderwijngaart, Rob F.
1995-01-01
The purpose of this project has been to identify and test suitable technologies for implementation of fluid flow solvers -- possibly coupled with structures and heat equation solvers -- on MIMD parallel computers. In the course of this investigation much attention has been paid to efficient domain decomposition strategies for ADI-type algorithms. Multi-partitioning derives its efficiency from the assignment of several blocks of grid points to each processor in the parallel computer. A coarse-grain parallelism is obtained, and a near-perfect load balance results. In uni-partitioning every processor receives responsibility for exactly one block of grid points instead of several. This necessitates fine-grain pipelined program execution in order to obtain a reasonable load balance. Although fine-grain parallelism is less desirable on many systems, especially high-latency networks of workstations, uni-partition methods are still in wide use in production codes for flow problems. Consequently, it remains important to achieve good efficiency with this technique, even though it has essentially been superseded by multi-partitioning for parallel ADI-type algorithms. Another reason for concentrating on improving the performance of pipeline methods is their applicability in other types of flow solver kernels with stronger implied data dependence. Analytical expressions can be derived for the size of the dynamic load imbalance incurred in traditional pipelines. From these, the optimal first-processor retardation that leads to the shortest total completion time for the pipeline process can be determined. Theoretical predictions of pipeline performance with and without optimization match experimental observations on the iPSC/860 very well. Analysis of pipeline performance also highlights the effect of careless grid partitioning in flow solvers that employ pipeline algorithms. If grid blocks at boundaries are not at least as large in the wall-normal direction as those immediately adjacent to them, then the first processor in the pipeline will receive a computational load that is less than that of subsequent processors, magnifying the pipeline slowdown effect. Extra compensation is needed for grid boundary effects, even if all grid blocks are equally sized.
Finding reproducible cluster partitions for the k-means algorithm
Lisboa, Paulo J G; Etchells, Terence A; Jarman, Ian H; Chambers, Simon J
2013-01-01
K-means clustering is widely used for exploratory data analysis. While its dependence on initialisation is well-known, it is common practice to assume that the partition with the lowest total sum-of-squares (SSQ), i.e. within-cluster variance, is both reproducible under repeated initialisations and also the closest that k-means can provide to true structure, when applied to synthetic data. We show that this is generally the case for small numbers of clusters, but for values of k that are still of theoretical and practical interest, similar values of SSQ can correspond to markedly different cluster partitions. This paper extends stability measures previously presented in the context of finding optimal values of cluster number, into a component of a 2-d map of the local minima found by the k-means algorithm, from which not only can values of k be identified for further analysis but, more importantly, it is made clear whether the best SSQ is a suitable solution or whether obtaining a consistently good partition requires further application of the stability index. The proposed method is illustrated by application to five synthetic datasets replicating a real world breast cancer dataset with varying data density, and a large bioinformatics dataset. PMID:23369085
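A minimal version of the reproducibility check described here can be scripted directly: run k-means many times, collect the sum-of-squares (SSQ) of each run, and measure how well the near-minimal-SSQ partitions agree. The sketch below uses scikit-learn and the adjusted Rand index as a generic stand-in for the paper's stability index; the 1% SSQ tolerance is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def kmeans_stability(X, k, n_runs=50, seed=0):
    """Repeat k-means from random initialisations and check whether
    near-minimal-SSQ solutions agree as partitions."""
    rng = np.random.default_rng(seed)
    runs = [KMeans(n_clusters=k, n_init=1,
                   random_state=int(rng.integers(1 << 31))).fit(X)
            for _ in range(n_runs)]
    ssq = np.array([r.inertia_ for r in runs])
    best = runs[int(ssq.argmin())]
    # compare the lowest-SSQ partition against other runs of similar SSQ
    close = [r for r, s in zip(runs, ssq) if s <= 1.01 * ssq.min()]
    agreement = [adjusted_rand_score(best.labels_, r.labels_) for r in close]
    return float(ssq.min()), float(np.mean(agreement))

# usage: three well-separated Gaussian blobs -> agreement near 1.0
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in (0.0, 2.0, 4.0)])
print(kmeans_stability(X, k=3))
```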
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huesemann, M.; Williams, P.; Edmundson, S.
A bench-scale photobioreactor system, termed the Laboratory Environmental Algae Pond Simulator (LEAPS), was designed and constructed to simulate outdoor pond cultivation for a wide range of geographical locations and seasons. The LEAPS consists of six well-mixed glass column photobioreactors sparged with CO2-enriched air to maintain a set-point pH, illuminated from above by programmable multicolor LED lighting (0 to 2,500 µmol/m2-sec), and submerged in a temperature-controlled water bath (-2 °C to >60 °C). Measured incident light intensities and water temperatures deviated from the respective light and temperature set-points by on average only 2.3% and 0.9%, demonstrating accurate simulation of light and temperature conditions measured in outdoor ponds. In order to determine whether microalgae strains cultured in the LEAPS exhibit the same linear-phase biomass productivity as in outdoor ponds, Chlorella sorokiniana and Nannochloropsis salina were cultured in the LEAPS bioreactors using light and temperature scripts measured previously in the respective outdoor pond studies. For Chlorella sorokiniana, the summer season biomass productivity in the LEAPS was 6.6% and 11.3% lower than in the respective outdoor ponds in Rimrock, Arizona, and Delhi, California; however, these differences were not statistically significant. For Nannochloropsis salina, the winter season biomass productivity in the LEAPS was statistically significantly higher (15.2%) during the 27-day experimental period than in the respective outdoor ponds in Tucson, Arizona. However, when considering only the first 14 days, the LEAPS biomass productivity was only 9.2% higher than in the outdoor ponds, a difference shown to be not statistically significant. Potential reasons for the positive or negative divergence in LEAPS performance, relative to outdoor ponds, are discussed. To demonstrate the utility of the LEAPS in predicting productivity, two other strains – Scenedesmus obliquus and Stichococcus minor – were evaluated using the summer season script for Rimrock, Arizona. For both strains, the productivity was around 11.6 g/m2-day at a 25 cm culture depth. In conclusion, the LEAPS is an accurate pond simulator and thus offers a reliable, fast, and cost-effective way of screening microalgae strains and operating conditions for high biomass productivity and co-product yields, using sunlight intensity and water temperature scripts generated by the Biomass Assessment Tool for any geographic location of choice where meteorological data are available.
Spatially-partitioned many-body vortices
NASA Astrophysics Data System (ADS)
Klaiman, S.; Alon, O. E.
2016-02-01
A vortex in Bose-Einstein condensates is a localized object which looks much like a tiny tornado storm. It is well described by mean-field theory. In the present work we go beyond the current paradigm and introduce many-body vortices. These are made of spatially-partitioned clouds, carry definite total angular momentum, and are fragmented rather than condensed objects which can only be described beyond mean-field theory. A phase diagram based on a mean-field model assists in predicting the parameters where many-body vortices occur. Implications are briefly discussed.
Initial Experiments with the Leap Motion as a User Interface in Robotic Endonasal Surgery
Travaglini, T. A.; Swaney, P. J.; Weaver, Kyle D.; Webster, R. J.
2016-01-01
The Leap Motion controller is a low-cost, optically-based hand tracking system that has recently been introduced on the consumer market. Prior studies have investigated its precision and accuracy, toward evaluating its usefulness as a surgical robot master interface. Yet due to the diversity of potential slave robots and surgical procedures, as well as the dynamic nature of surgery, it is challenging to make general conclusions from published accuracy and precision data. Thus, our goal in this paper is to explore the use of the Leap in the specific scenario of endonasal pituitary surgery. We use it to control a concentric tube continuum robot in a phantom study, and compare user performance using the Leap to previously published results using the Phantom Omni. We find that the users were able to achieve nearly identical average resection percentage and overall surgical duration with the Leap. PMID:26752501
Wang, Xingmei; Hao, Wenqian; Li, Qiming
2017-12-18
This paper proposes an adaptive cultural algorithm with improved quantum-behaved particle swarm optimization (ACA-IQPSO) to detect underwater objects in sonar images. In the population space, to improve the searching ability of particles, the iteration count and the fitness values of particles are used as factors to adaptively adjust the contraction-expansion coefficient of the quantum-behaved particle swarm optimization algorithm (QPSO). The improved quantum-behaved particle swarm optimization algorithm (IQPSO) allows particles to adjust their behaviour according to their quality. In the belief space, a new update strategy is adopted to update cultural individuals according to the idea of the update strategy in the shuffled frog leaping algorithm (SFLA). Moreover, to enhance the utilization of information in the population space and belief space, the acceptance function and influence function are redesigned in the new communication protocol. The experimental results show that ACA-IQPSO can obtain good clustering centres according to the grey-level distribution information of underwater sonar images, and accurately complete underwater object detection. Compared with other algorithms, the proposed ACA-IQPSO has good effectiveness, excellent adaptability, a powerful searching ability and high convergence efficiency. Meanwhile, the experimental results on the benchmark functions further demonstrate that the proposed ACA-IQPSO has better searching ability, convergence efficiency and stability.
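For reference, a generic QPSO loop with an adaptively scheduled contraction-expansion coefficient looks roughly like the sketch below. The specific adaptation rule (linear decay with iteration, modulated by fitness rank) is an assumption for illustration and is not the paper's exact ACA-IQPSO update, which additionally maintains a belief space.

```python
import numpy as np

def qpso_adaptive(f, bounds, n_particles=30, n_iter=200, seed=0):
    """QPSO with an iteration- and fitness-adapted contraction-expansion
    coefficient beta (illustrative schedule, not the paper's rule)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for t in range(n_iter):
        base = 1.0 - 0.5 * t / n_iter                 # decays 1.0 -> 0.5
        rank = pval.argsort().argsort() / (n_particles - 1)
        beta = base * (0.5 + rank)                    # worse particles search wider
        mbest = pbest.mean(axis=0)                    # mean best position
        phi = rng.random((n_particles, dim))
        p = phi * pbest + (1 - phi) * g               # local attractor
        u = rng.random((n_particles, dim))
        sign = np.where(rng.random((n_particles, dim)) < 0.5, -1.0, 1.0)
        x = p + sign * beta[:, None] * np.abs(mbest - x) * np.log(1.0 / u)
        x = np.clip(x, lo, hi)
        val = np.array([f(p_) for p_ in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

# usage: minimize the 5-D sphere function
g, v = qpso_adaptive(lambda z: float((z ** 2).sum()),
                     (np.full(5, -5.0), np.full(5, 5.0)))
```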
Advanced Algorithms for Local Routing Strategy on Complex Networks
Lin, Benchuan; Chen, Bokui; Gao, Yachun; Tse, Chi K.; Dong, Chuanfei; Miao, Lixin; Wang, Binghong
2016-01-01
Despite the significant improvement on network performance provided by global routing strategies, their applications are still limited to small-scale networks, due to the need for acquiring global information of the network which grows and changes rapidly with time. Local routing strategies, however, need much less local information, though their transmission efficiency and network capacity are much lower than that of global routing strategies. In view of this, three algorithms are proposed and a thorough investigation is conducted in this paper. These algorithms include a node duplication avoidance algorithm, a next-nearest-neighbor algorithm and a restrictive queue length algorithm. After applying them to typical local routing strategies, the critical generation rate of information packets Rc increases by over ten-fold and the average transmission time 〈T〉 decreases by 70–90 percent, both of which are key physical quantities to assess the efficiency of routing strategies on complex networks. More importantly, in comparison with global routing strategies, the improved local routing strategies can yield better network performance under certain circumstances. This is a revolutionary leap for communication networks, because local routing strategy enjoys great superiority over global routing strategy not only in terms of the reduction of computational expense, but also in terms of the flexibility of implementation, especially for large-scale networks. PMID:27434502
Robust numerical electromagnetic eigenfunction expansion algorithms
NASA Astrophysics Data System (ADS)
Sainath, Kamalesh
This thesis summarizes developments in rigorous, full-wave, numerical spectral-domain (integral plane wave eigenfunction expansion [PWE]) evaluation algorithms concerning time-harmonic electromagnetic (EM) fields radiated by generally-oriented and positioned sources within planar and tilted-planar layered media exhibiting general anisotropy, thickness, layer number, and loss characteristics. The work is motivated by the need to accurately and rapidly model EM fields radiated by subsurface geophysical exploration sensors probing layered, conductive media, where complex geophysical and man-made processes can lead to micro-laminate and micro-fractured geophysical formations exhibiting, at the lower (sub-2MHz) frequencies typically employed for deep EM wave penetration through conductive geophysical media, bulk-scale anisotropic (i.e., directional) electrical conductivity characteristics. When the planar-layered approximation (layers of piecewise-constant material variation and transversely-infinite spatial extent) is locally, near the sensor region, considered valid, numerical spectral-domain algorithms are suitable due to their strong low-frequency stability characteristic, and ability to numerically predict time-harmonic EM field propagation in media with response characterized by arbitrarily lossy and (diagonalizable) dense, anisotropic tensors. If certain practical limitations are addressed, PWE can robustly model sensors with general position and orientation that probe generally numerous, anisotropic, lossy, and thick layers. The main thesis contributions, leading to a sensor and geophysical environment-robust numerical modeling algorithm, are as follows: (1) Simple, rapid estimator of the region (within the complex plane) containing poles, branch points, and branch cuts (critical points) (Chapter 2), (2) Sensor and material-adaptive azimuthal coordinate rotation, integration contour deformation, integration domain sub-region partition and sub-region-dependent integration order (Chapter 3), (3) Integration partition-extrapolation-based (Chapter 3) and Gauss-Laguerre Quadrature (GLQ)-based (Chapter 4) evaluations of the deformed, semi-infinite-length integration contour tails, (4) Robust in-situ-based (i.e., at the spectral-domain integrand level) direct/homogeneous-medium field contribution subtraction and analytical curbing of the source current spatial spectrum function's ill behavior (Chapter 5), and (5) Analytical re-casting of the direct-field expressions when the source is embedded within a NBAM, short for non-birefringent anisotropic medium (Chapter 6). The benefits of these contributions are, respectively, (1) Avoiding computationally intensive critical-point location and tracking (computation time savings), (2) Sensor and material-robust curbing of the integrand's oscillatory and slow decay behavior, as well as preventing undesirable critical-point migration within the complex plane (computation speed, precision, and instability-avoidance benefits), (3) sensor and material-robust reduction (or, for GLQ, elimination) of integral truncation error, (4) robustly stable modeling of scattered fields and/or fields radiated from current sources modeled as spatially distributed (10 to 1000-fold compute-speed acceleration also realized for distributed-source computations), and (5) numerically stable modeling of fields radiated from sources within NBAM layers. Having addressed these limitations, are PWE algorithms applicable to modeling EM waves in tilted planar-layered geometries too? 
This question is explored in Chapter 7 using a Transformation Optics-based approach, allowing one to model wave propagation through layered media that (in the sensor's vicinity) possess tilted planar interfaces. The technique leads to spurious wave scattering, however, whose induced degradation of computational accuracy requires analysis. The mathematical exposition of this novel tilted-layer modeling formulation, together with an exhaustive simulation-based study and analysis of its limitations, is Chapter 7's main contribution.
Random Partition Distribution Indexed by Pairwise Information
Dahl, David B.; Day, Ryan; Tsai, Jerry W.
2017-01-01
We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318
Baca, Stephen M; Toussaint, Emmanuel F A; Miller, Kelly B; Short, Andrew E Z
2017-02-01
The first molecular phylogenetic hypothesis for the aquatic beetle family Noteridae is inferred using DNA sequence data from five gene fragments (mitochondrial and nuclear): COI, H3, 16S, 18S, and 28S. Our analysis is the most comprehensive phylogenetic reconstruction of Noteridae to date, and includes 53 species representing all subfamilies, tribes and 16 of the 17 genera within the family. We examine the impact of data partitioning on phylogenetic inference by comparing two different algorithm-based partitioning strategies: one using predefined subsets of the dataset, and another recently introduced method, which uses the k-means algorithm to iteratively divide the dataset into clusters of sites evolving at similar rates across sampled loci. We conducted both maximum likelihood and Bayesian inference analyses using these different partitioning schemes. Resulting trees are strongly incongruent with prior classifications of Noteridae. We recover variant tree topologies and support values among the implemented partitioning schemes. Bayes factors calculated with marginal likelihoods of Bayesian analyses support a priori partitioning over k-means and unpartitioned data strategies. Our study substantiates the importance of data partitioning in phylogenetic inference, and underscores the use of comparative analyses to determine optimal analytical strategies. Our analyses recover Noterini Thomson to be paraphyletic with respect to three other tribes. The genera Suphisellus Crotch and Hydrocanthus Say are also recovered as paraphyletic. Following the results of the preferred partitioning scheme, we here propose a revised classification of Noteridae, comprising two subfamilies, three tribes and 18 genera. The following taxonomic changes are made: Notomicrinae sensu n. (= Phreatodytinae syn. n.) is expanded to include the tribe Phreatodytini; Noterini sensu n. (= Neohydrocoptini syn. n., Pronoterini syn. n., Tonerini syn. n.) is expanded to include all genera of the Noterinae; The genus Suphisellus Crotch is expanded to include species of Pronoterus Sharp syn. n.; and the former subgenus Sternocanthus Guignot stat. rev. is resurrected from synonymy and elevated to genus rank. Copyright © 2016 Elsevier Inc. All rights reserved.
Modified symplectic schemes with nearly-analytic discrete operators for acoustic wave simulations
NASA Astrophysics Data System (ADS)
Liu, Shaolin; Yang, Dinghui; Lang, Chao; Wang, Wenshuai; Pan, Zhide
2017-04-01
Using a structure-preserving algorithm significantly increases the computational efficiency of solving wave equations. However, only a few explicit symplectic schemes are available in the literature, and the capabilities of these symplectic schemes have not been sufficiently exploited. Here, we propose a modified strategy to construct explicit symplectic schemes for time advance. The acoustic wave equation is transformed into a Hamiltonian system. The classical symplectic partitioned Runge-Kutta (PRK) method is used for the temporal discretization. Additional spatial differential terms are added to the PRK schemes to form the modified symplectic methods, and two modified time-advancing symplectic methods, each with all-positive symplectic coefficients, are then constructed. The spatial differential operators are approximated by nearly-analytic discrete (NAD) operators, and we call the fully discretized scheme the modified symplectic nearly analytic discrete (MSNAD) method. Theoretical analyses show that the MSNAD methods exhibit less numerical dispersion and higher stability limits than conventional methods. Three numerical experiments are conducted to verify the advantages of the MSNAD methods, such as their numerical accuracy, computational cost, stability, and long-term calculation capability.
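As background for the temporal scheme, the classical Störmer-Verlet (leapfrog) integrator is the simplest explicit symplectic partitioned Runge-Kutta method. The sketch below applies it to the 1-D acoustic wave equation written as a Hamiltonian system, with a plain second-order centered Laplacian standing in for the paper's nearly-analytic discrete operators; it is a baseline illustration, not the MSNAD scheme itself.

```python
import numpy as np

def leapfrog_wave(u0, v0, c, dx, dt, n_steps):
    """Störmer-Verlet (a symplectic partitioned RK method) for
    u_tt = c^2 u_xx, written as q' = p, p' = c^2 L q, with a centered
    3-point Laplacian L and homogeneous Dirichlet ends."""
    q, p = u0.copy(), v0.copy()
    def lap(q):
        out = np.zeros_like(q)
        out[1:-1] = (q[2:] - 2 * q[1:-1] + q[:-2]) / dx**2
        return out
    for _ in range(n_steps):
        p += 0.5 * dt * c**2 * lap(q)    # half kick
        q += dt * p                      # drift
        p += 0.5 * dt * c**2 * lap(q)    # half kick
    return q, p

# usage: propagate a Gaussian pulse on [0, 1] at CFL number 0.4
x = np.linspace(0.0, 1.0, 201)
u0 = np.exp(-200.0 * (x - 0.5) ** 2)
q, p = leapfrog_wave(u0, np.zeros_like(u0), c=1.0, dx=x[1] - x[0],
                     dt=0.4 * (x[1] - x[0]), n_steps=500)
```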
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-10
... Increase the Maximum Term for LEAPS to Fifteen Years August 6, 2012. Pursuant to Section 19(b)(1) of the... maximum term for Long-Term Equity Options Series (``LEAPS'') to fifteen years. The text of the proposed... purpose of the proposed rule change is to increase the maximum term for all LEAPS. Currently, the maximum...
Algorithm of regional surface evaporation using remote sensing: A case study of Haihe basin, China
NASA Astrophysics Data System (ADS)
Xiong, Jun; Wu, Bingfang; Yan, Nana; Hu, Minggang
2007-11-01
Evapotranspiration (ET, or latent heat flux) is the most essential and uncertain factor in water resource management. Remote sensing is a promising tool for estimating the spatial distribution of ET at the regional scale with limited ground observations. We developed an algorithm for estimating regional evapotranspiration from MODIS 1B data and ancillary meteorological data. The algorithm is an integration of the Penman-Monteith equation and the SEBS (Surface Energy Balance System) model. The former combines energy balance theory and the mass transfer method to compute evaporation from cropped surfaces using standard climatological records of sunshine, temperature, humidity and wind speed by introducing resistance factors, and the latter determines the spatio-temporal variability of the regional evaporative condition. First, we characterized key land surface parameters on satellite overpass days, including fractional vegetation cover (fc), roughness height for momentum (z0m), net radiation (Rn) and soil heat flux (G0). Second, SEBS was applied to partition the sensible heat (H) from the latent heat (LE) in combination with Planetary Boundary Layer (PBL) information from seven meteorological stations; a parameterization of surface roughness accounting for topographic influence was applied in mountainous areas. Third, we chose available surface resistance (RS) as the temporal-scaling factor. With the bulk surface resistance properly defined, the P-M method is valid for both soil and vegetation canopy. We validated ET from this algorithm against the limited actual observations of ET available, including two eddy covariance system datasets and one lysimeter site. The water balance equation was used as a trend-analysis tool to show the consistency between rainfall and ET in four drainage areas. As a result, the prototype products showed different accuracy and applicability for different underlying surfaces and time scales, which demonstrates the potential of this approach for estimating ET from the 1-km to the regional spatial scale in the North China Plain.
Regulation of the Demographic Structure in Isomorphic Biphasic Life Cycles at the Spatial Fine Scale
Vieira, Vasco Manuel Nobre de Carvalho da Silva; Mateus, Marcos Duarte
2014-01-01
Isomorphic biphasic algal life cycles often occur in the environment at ploidy abundance ratios (Haploid:Diploid) different from 1. Its spatial variability occurs within populations related to intertidal height and hydrodynamic stress, possibly reflecting the niche partitioning driven by their diverging adaptation to the environment argued necessary for their prevalence (evolutionary stability). Demographic models based in matrix algebra were developed to investigate which vital rates may efficiently generate an H:D variability at a fine spatial resolution. It was also taken into account time variation and type of life strategy. Ploidy dissimilarities in fecundity rates set an H:D spatial structure miss-fitting the ploidy fitness ratio. The same happened with ploidy dissimilarities in ramet growth whenever reproductive output dominated the population demography. Only through ploidy dissimilarities in looping rates (stasis, breakage and clonal growth) did the life cycle respond to a spatially heterogeneous environment efficiently creating a niche partition. Marginal locations were more sensitive than central locations. Related results have been obtained experimentally and numerically for widely different life cycles from the plant and animal kingdoms. Spore dispersal smoothed the effects of ploidy dissimilarities in fertility and enhanced the effects of ploidy dissimilarities looping rates. Ploidy dissimilarities in spore dispersal could also create the necessary niche partition, both over the space and time dimensions, even in spatial homogeneous environments and without the need for conditional differentiation of the ramets. Fine scale spatial variability may be the key for the prevalence of isomorphic biphasic life cycles, which has been neglected so far. PMID:24658603
Binomial leap methods for simulating stochastic chemical kinetics.
Tian, Tianhai; Burrage, Kevin
2004-12-01
This paper discusses efficient simulation methods for stochastic chemical kinetics. Based on the tau-leap and midpoint tau-leap methods of Gillespie [D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001)], binomial random variables are used in these leap methods rather than Poisson random variables. The motivation for this approach is to improve the efficiency of the Poisson leap methods by using larger stepsizes. Unlike Poisson random variables, whose range of sample values is from zero to infinity, binomial random variables have a finite range of sample values. This probabilistic property has been used to restrict possible reaction numbers and to avoid negative molecular numbers in stochastic simulations when larger stepsizes are used. In this approach a binomial random variable is defined for a single reaction channel in order to keep the reaction number of this channel below the number of molecules that undergo this reaction channel. A sampling technique is also designed for the total reaction number of a reactant species that undergoes two or more reaction channels. Samples for the total reaction number are not greater than the molecular number of this species. In addition, probability properties of the binomial random variables provide stepsize conditions for restricting reaction numbers in a chosen time interval. These stepsize conditions are important properties of robust leap control strategies. Numerical results indicate that the proposed binomial leap methods can be applied to a wide range of chemical reaction systems with very good accuracy and significant improvement in efficiency over existing approaches. (c) 2004 American Institute of Physics.
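The core trick (replacing Poisson with binomial counts so a channel can never fire more often than its reactants allow) can be sketched as follows. This shows only the single-channel rule; the paper's pooled sampling for species shared by several channels, and its stepsize conditions, are omitted, and the handling of channels that consume no reactants is an ad hoc assumption for illustration.

```python
import numpy as np

def binomial_leap_step(x, stoich, propensities, tau, rng):
    """One binomial tau-leap (single-channel rule only): reaction j fires
    k_j ~ Binomial(N_j, a_j*tau/N_j), where N_j is the largest number of
    firings the current state allows, so populations cannot go negative."""
    a = propensities(x)
    k = np.zeros_like(a, dtype=np.int64)
    for j, (aj, vj) in enumerate(zip(a, stoich)):
        if aj == 0.0:
            continue
        consumed = -vj[vj < 0]                          # molecules used per firing
        if consumed.size:
            n_max = int(np.min(x[vj < 0] // consumed))  # max allowed firings
        else:
            # channel consumes nothing: a generous ad hoc cap (assumption)
            n_max = max(int(np.ceil(aj * tau)) * 10, 1)
        p = min(aj * tau / n_max, 1.0) if n_max > 0 else 0.0
        k[j] = rng.binomial(n_max, p)
    return x + k @ stoich
```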
Adaptively loaded IM/DD optical OFDM based on set-partitioned QAM formats.
Zhao, Jian; Chen, Lian-Kuan
2017-04-17
We investigate the constellation design and symbol error rate (SER) of set-partitioned (SP) quadrature amplitude modulation (QAM) formats. Based on the SER analysis, we derive the adaptive bit and power loading algorithm for SP QAM based intensity-modulation direct-detection (IM/DD) orthogonal frequency division multiplexing (OFDM). We experimentally show that the proposed system significantly outperforms the conventional adaptively-loaded IM/DD OFDM and can increase the data rate from 36 Gbit/s to 42 Gbit/s in the presence of severe dispersion-induced spectral nulls after 40-km single-mode fiber. It is also shown that the adaptive algorithm greatly enhances the tolerance to fiber nonlinearity and allows for more power budget.
Partitioning and packing mathematical simulation models for calculation on parallel computers
NASA Technical Reports Server (NTRS)
Arpasi, D. J.; Milner, E. J.
1986-01-01
The development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system is described. Degrees of parallelism (i.e., coupling between the equations) and their impact on parallel processing are discussed. The problem of identifying computational parallelism within sets of closely coupled equations that require the exchange of current values of variables is described. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. An algorithm which packs the equations into a minimum number of processors is also described. The results of the packing algorithm when applied to a turbojet engine model are presented in terms of processor utilization.
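As a flavor of the packing step, a standard first-fit-decreasing heuristic assigns equation-evaluation costs to the fewest processors that keep each processor's per-step workload under a budget. This generic sketch is an assumption for illustration; the report's packing algorithm may differ in its cost model and coupling constraints.

```python
def pack_first_fit_decreasing(costs, capacity):
    """First-fit-decreasing packing of equation-evaluation costs into the
    fewest processors whose per-step workload stays under `capacity`.
    An equation costlier than `capacity` gets a dedicated processor."""
    items = sorted(enumerate(costs), key=lambda kv: kv[1], reverse=True)
    processors = []                      # list of [load, [equation indices]]
    for eq, c in items:
        for proc in processors:
            if proc[0] + c <= capacity:  # first processor with room
                proc[0] += c
                proc[1].append(eq)
                break
        else:
            processors.append([c, [eq]])
    return [proc[1] for proc in processors]

# usage: 6 equations with unequal costs, processor budget of 10 time units
print(pack_first_fit_decreasing([7, 5, 4, 3, 2, 1], capacity=10))
```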
Approximation algorithm for the problem of partitioning a sequence into clusters
NASA Astrophysics Data System (ADS)
Kel'manov, A. V.; Mikhailova, L. V.; Khamidullin, S. A.; Khandeev, V. I.
2017-08-01
We consider the problem of partitioning a finite sequence of Euclidean points into a given number of clusters (subsequences) using the criterion of the minimal sum (over all clusters) of intercluster sums of squared distances from the elements of the clusters to their centers. It is assumed that the center of one of the desired clusters is at the origin, while the center of each of the other clusters is unknown and determined as the mean value over all elements in this cluster. Additionally, the partition obeys two structural constraints on the indices of sequence elements contained in the clusters with unknown centers: (1) the concatenation of the indices of elements in these clusters is an increasing sequence, and (2) the difference between an index and the preceding one is bounded above and below by prescribed constants. It is shown that this problem is strongly NP-hard. A 2-approximation algorithm is constructed that is polynomial-time for a fixed number of clusters.
NASA Astrophysics Data System (ADS)
Fang, Y.; Hou, J.; Engel, D.; Lin, G.; Yin, J.; Han, B.; Fang, Z.; Fountoulakis, V.
2011-12-01
In this study, we introduce an uncertainty quantification (UQ) software framework for carbon sequestration, with a focus on the effect of spatial heterogeneity of reservoir properties on CO2 migration. We use a sequential Gaussian simulation method (SGSIM) to generate realizations of permeability fields with various spatial statistical attributes. To deal with the computational difficulties, we integrate the following ideas/approaches: 1) first, we use three different sampling approaches (probabilistic collocation, quasi-Monte Carlo, and adaptive sampling) to reduce the required forward calculations while exploring the parameter space and quantifying the input uncertainty; 2) second, we use eSTOMP as the forward modeling simulator. eSTOMP is implemented using the Global Arrays toolkit (GA), which is based on one-sided inter-processor communication and supports a shared-memory programming style on distributed-memory platforms, providing highly scalable performance. It uses a data model that partitions most of the large-scale data structures into a relatively small number of distinct classes. The lower-level simulator infrastructure (e.g., meshing support, associated data structures, and data mapping to processors) is separated from the higher-level physics and chemistry algorithmic routines using a grid component interface; and 3) besides the faster model and more efficient algorithms to speed up the forward calculation, we built an adaptive system infrastructure to select the best possible data transfer mechanisms, to optimally allocate system resources to improve performance, and to integrate software packages and data for composing carbon sequestration simulation, computation, analysis, estimation and visualization. We demonstrate the framework with a given CO2 injection scenario in a heterogeneous sandstone reservoir.
An Analytic Equation Partitioning Climate Variation and Human Impacts on River Sediment Load
NASA Astrophysics Data System (ADS)
Zhang, J.; Gao, G.; Fu, B.
2017-12-01
Spatial or temporal patterns and process-based equations can co-exist in a hydrologic model. Yet existing approaches for quantifying the impacts of these variables on river sediment load (RSL) changes are severely limited, and new ways to evaluate their contributions are needed. Newtonian (fully mechanistic) modeling is hardly achievable for this process owing to the limitations of both observations and knowledge of the mechanisms, whereas laws based on the Darwinian approach can provide one component of a developed hydrologic model. Since streamflow is the carrier of suspended sediment, sediment load changes are documented in changes of streamflow and suspended sediment concentration (SSC) - water discharge relationships. Consequently, an analytic equation for river sediment load changes is proposed to explicitly quantify the relative contributions of climate variation and direct human impacts to river sediment load changes. First, the sediment rating curve, which is of great significance in RSL change analysis, was decomposed into the probability distribution of streamflow and the corresponding SSC - water discharge relationships at equally spaced discharge classes. Next, a proposed segmentation algorithm based on fractal theory was used to decompose RSL changes attributed to these two portions. Additionally, the water balance framework was utilized and the corresponding elasticity parameters were calculated. Finally, the contributions of changes in climate variables (i.e. precipitation and potential evapotranspiration) and direct human impacts to river sediment load can be quantified. The efficiency of the segmentation algorithm was verified by data simulation. The analytic equation provides a superior Darwinian approach for partitioning climate and human impacts on RSL changes, as only data series of precipitation, potential evapotranspiration and SSC - water discharge are required.
SU-E-J-71: Spatially Preserving Prior Knowledge-Based Treatment Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, H; Xing, L
2015-06-15
Purpose: Prior knowledge-based treatment planning is impeded by the use of a single dose volume histogram (DVH) curve. Critical spatial information is lost when the dose distribution is collapsed into a histogram. Even similar patients possess geometric variations that become inaccessible in the form of a single DVH. We propose a simple prior knowledge-based planning scheme that extracts features from the prior dose distribution while still preserving the spatial information. Methods: A prior patient plan is not used as a mere starting point for a new patient; rather, stopping criteria are constructed. Each structure from the prior patient is partitioned into multiple shells. For instance, the PTV is partitioned into an inner, middle, and outer shell. Prior dose statistics are then extracted for each shell and translated into the appropriate Dmin and Dmax parameters for the new patient. Results: The partitioned dose information from a prior case was applied to 14 2-D prostate cases. Using the prior case yielded final DVHs that were comparable to manual planning, even though the DVH for the prior case was different from the DVHs for the 14 cases. For comparison, planning using a single DVH for the entire organ was also performed but showed much poorer performance. Different ways of translating the prior dose statistics into parameters for the new patient were also tested. Conclusion: Prior knowledge-based treatment planning needs to salvage the spatial information without transforming the patients on a voxel-to-voxel basis. An efficient balance between the anatomy and dose domains is gained by partitioning the organs into multiple shells. The use of prior knowledge not only serves as a starting point for a new case; the information extracted from the partitioned shells is also translated into stopping criteria for the optimization problem at hand.
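The shell-partitioning step translates naturally into array operations: compute each voxel's depth inside the structure, split the depth range into shells, and read off per-shell dose statistics. A minimal NumPy/SciPy sketch, where the quantile-based shell edges and the returned statistics are illustrative assumptions rather than the abstract's exact procedure:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def shell_dose_stats(mask, dose, n_shells=3):
    """Partition a boolean structure mask into concentric shells by depth
    from its boundary, then report per-shell (Dmin, Dmax); n_shells=3
    mirrors the abstract's inner/middle/outer split."""
    inside = mask.astype(bool)
    depth = distance_transform_edt(inside)      # 0 outside, grows inward
    edges = np.quantile(depth[inside], np.linspace(0.0, 1.0, n_shells + 1))
    stats = []
    for i in range(n_shells):
        upper = depth <= edges[i + 1] if i == n_shells - 1 else depth < edges[i + 1]
        sel = inside & (depth >= edges[i]) & upper
        if sel.any():                            # guard against empty shells
            stats.append((float(dose[sel].min()), float(dose[sel].max())))
    return stats                                 # outermost shell first
```

The per-shell extrema would then be mapped onto the Dmin/Dmax optimization parameters for the new patient.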
Niche Partitioning of Feather Mites within a Seabird Host, Calonectris borealis
Stefan, Laura M.; Gómez-Díaz, Elena; Elguero, Eric; Proctor, Heather C.; McCoy, Karen D.; González-Solís, Jacob
2015-01-01
According to classic niche theory, species can coexist in heterogeneous environments by reducing interspecific competition via niche partitioning, e.g. trophic or spatial partitioning. However, support for the role of competition on niche partitioning remains controversial. Here, we tested for spatial and trophic partitioning in feather mites, a diverse and abundant group of arthropods. We focused on the two dominant mite species, Microspalax brevipes and Zachvatkinia ovata, inhabiting flight feathers of the Cory’s shearwater, Calonectris borealis. We performed mite counts across and within primary and tail feathers on free-living shearwaters breeding on an oceanic island (Gran Canaria, Canary Islands). We then investigated trophic relationships between the two mite species and the host using stable isotope analyses of carbon and nitrogen on mite tissues and potential host food sources. The distribution of the two mite species showed clear spatial segregation among feathers; M. brevipes showed high preference for the central wing primary feathers, whereas Z. ovata was restricted to the two outermost primaries. Morphological differences between M. brevipes and Z. ovata support an adaptive basis for the spatial segregation of the two mite species. However, the two mites overlap in some central primaries and statistical modeling showed that Z. ovata tends to outcompete M. brevipes. Isotopic analyses indicated similar isotopic values for the two mite species and a strong correlation in carbon signatures between mites inhabiting the same individual host suggesting that diet is mainly based on shared host-associated resources. Among the four candidate tissues examined (blood, feather remains, skin remains and preen gland oil), we conclude that the diet is most likely dominated by preen gland oil, while the contribution of exogenous material to mite diets is less marked. Our results indicate that ongoing competition for space and resources plays a central role in structuring feather mite communities. They also illustrate that symbiotic infracommunities are excellent model systems to study trophic ecology, and can improve our understanding of mechanisms of niche differentiation and species coexistence. PMID:26650672
NASA Astrophysics Data System (ADS)
Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.; Tang, Qi
2017-08-01
A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added-mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this first part of a two-part series, the properties of the AMP scheme are motivated and evaluated through the development and analysis of some model problems. The analysis shows when and why the traditional partitioned scheme becomes unstable due to either added-mass or added-damping effects. The analysis also identifies the proper form of the added-damping which depends on the discrete time-step and the grid-spacing normal to the rigid body. The results of the analysis are confirmed with numerical simulations that also demonstrate a second-order accurate implementation of the AMP scheme.
On models of the genetic code generated by binary dichotomic algorithms.
Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz
2015-02-01
In this paper we introduce the concept of a BDA-generated model of the genetic code, which is based on binary dichotomic algorithms (BDAs). Such a BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes, ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible. However, we present models that describe the standard genetic code with only a few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
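The mechanics of sequentially applied dichotomies are easy to demonstrate: each BDA answers a binary question about a codon, and codons sharing all answers fall in the same class. The two toy dichotomies below (strong G/C vs. weak A/U bases at fixed positions) are illustrative stand-ins, not the paper's actual BDAs, which are defined by a more involved two-question procedure.

```python
from itertools import product
from collections import defaultdict

BASES = "UCAG"
CODONS = ["".join(c) for c in product(BASES, repeat=3)]   # all 64 codons

# Two toy dichotomies, each splitting the 64 codons 32/32:
bda1 = lambda c: int(c[0] in "GC")    # first base strong (G/C) vs weak (A/U)
bda2 = lambda c: int(c[1] in "GC")    # second base strong vs weak

def classes(codons, bdas):
    """Apply a sequence of BDAs; codons sharing all answers share a class."""
    table = defaultdict(list)
    for c in codons:
        table[tuple(b(c) for b in bdas)].append(c)
    return dict(table)

model = classes(CODONS, [bda1, bda2])   # here: 4 classes of 16 codons each
print({k: len(v) for k, v in model.items()})
```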
Decision tree modeling using R.
Zhang, Zhongheng
2016-08-01
In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable. The process continues until some stopping criteria are met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. While a single grown tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests come from random sampling and the restricted set of input variables available for selection. Finally, I introduce R functions to perform model-based recursive partitioning. This method incorporates recursive partitioning into conventional parametric model building.
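The abstract's examples are written in R (conditional inference trees and model-based recursive partitioning live in the party/partykit packages). To keep this document's code examples in a single language, here is the analogous workflow in Python with scikit-learn, shown purely to make the recursive-partitioning and random-forest ideas concrete:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# a single tree: recursive binary partitioning with a stopping criterion
tree = DecisionTreeClassifier(min_samples_leaf=5, random_state=0).fit(X_tr, y_tr)
print(export_text(tree))                     # the fitted partition, as rules
print("tree accuracy:", tree.score(X_te, y_te))

# a random forest: many trees on resampled data with random feature subsets,
# addressing the single tree's sensitivity to small changes in training data
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("forest accuracy:", forest.score(X_te, y_te))
```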
ERIC Educational Resources Information Center
Metropolitan Baltimore Council of AFL-CIO Unions, MD.
Maryland's Labor Education Achievement Program (LEAP) worked with a wide diversity of union workers in multiple industries and within numerous private companies and public agencies over a dispersed geographic area. Staff development included a workshop for local coordinators and a teacher inservice training session. LEAP provided…
Cities-LEAP Analysis Reveals Findings on the Most Efficient and Least Polluting Cities in the U.S.
NREL
September 12, 2016, by Megan Day
Fast depth decision for HEVC inter prediction based on spatial and temporal correlation
NASA Astrophysics Data System (ADS)
Chen, Gaoxing; Liu, Zhenyu; Ikenaga, Takeshi
2016-07-01
High efficiency video coding (HEVC) is a video compression standard that outperforms its predecessor H.264/AVC by doubling the compression efficiency. To enhance compression accuracy, partition sizes in HEVC range from 4x4 to 64x64. However, the manifold partition sizes dramatically increase the encoding complexity. This paper proposes a fast depth decision based on spatial and temporal correlation: spatial correlation utilizes the coding tree unit (CTU) splitting information of neighboring CTUs, and temporal correlation utilizes the CTU indicated by the motion vector predictor in inter prediction, to determine the maximum depth searched in each CTU. Experimental results show that the proposed method saves about 29.1% of the original processing time with a 0.9% BD-bitrate increase on average.
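A minimal sketch of the depth-bound idea, assuming the depths of the spatially adjacent CTUs and of the temporally referenced CTU are already known; the function name, the margin, and the decision rule are illustrative, not the paper's exact criterion.

```python
# HEVC quadtree depths 0..3 correspond to CU sizes 64x64 down to 8x8.
def max_search_depth(left_depth, above_depth, temporal_depth, margin=1, cap=3):
    """Bound the CU depth search for the current CTU by the depths used in
    correlated CTUs, skipping the most expensive small partitions."""
    predicted = max(left_depth, above_depth, temporal_depth)
    return min(predicted + margin, cap)

# Neighbours that never split below depth 1 let the encoder stop at depth 2,
# avoiding the rate-distortion search over all 8x8 partitions.
print(max_search_depth(left_depth=1, above_depth=1, temporal_depth=0))  # -> 2
```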
Equal Graph Partitioning on Estimated Infection Network as an Effective Epidemic Mitigation Measure
Hadidjojo, Jeremy; Cheong, Siew Ann
2011-01-01
Controlling severe outbreaks remains the most important problem in the field of infectious disease. With time, this problem will only become more severe as population density in urban centers grows. Social interactions play a very important role in determining how infectious diseases spread, and the organization of people along social lines gives rise to non-spatial networks in which the infections spread. Infection networks are different for diseases with different transmission modes, but are likely to be identical or highly similar for diseases that spread the same way. Hence, infection networks estimated from common infections can be useful to contain epidemics of a more severe disease with the same transmission mode. Here we present a proof-of-concept study demonstrating the effectiveness of epidemic mitigation based on such estimated infection networks. We first generate artificial social networks of different sizes and average degrees, but with roughly the same clustering characteristic. We then start SIR epidemics on these networks, censor the simulated incidences, and use them to reconstruct the infection network. We then efficiently fragment the estimated network by removing the smallest number of nodes identified by a graph partitioning algorithm. Finally, we demonstrate the effectiveness of this targeted strategy, by comparing it against traditional untargeted strategies, in slowing down and reducing the size of advancing epidemics. PMID:21799777
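A minimal sketch of the fragmentation step, assuming networkx is available: kernighan_lin_bisection stands in for whichever equal graph partitioning algorithm the authors used, and a toy small-world graph stands in for the estimated infection network.

```python
# Fragment a contact network by removing the nodes incident to the cut of
# an (approximately) equal bisection; removals concentrate exactly where
# they disconnect the two halves.
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

G = nx.connected_watts_strogatz_graph(200, k=6, p=0.1, seed=1)
part_a, part_b = kernighan_lin_bisection(G, seed=1)

# "Immunize" the part_a endpoint of every edge crossing the cut.
cut_nodes = {u for u, v in nx.edge_boundary(G, part_a, part_b)}
G.remove_nodes_from(cut_nodes)

sizes = sorted((len(c) for c in nx.connected_components(G)), reverse=True)
print(len(cut_nodes), sizes[:3])  # removals spent, and the new component sizes
```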
NASA Astrophysics Data System (ADS)
Diner, David
2010-05-01
The Multi-angle Imaging SpectroRadiometer (MISR) instrument has been collecting global Earth data from NASA's Terra satellite since February 2000. With its 9 along-track view angles, 4 spectral bands, intrinsic spatial resolution of 275 m, and stable radiometric and geometric calibration, no instrument that combines MISR's attributes has previously flown in space, nor is there a similar capability currently available on any other satellite platform. Multiangle imaging offers several tools for remote sensing of aerosol and cloud properties, including bidirectional reflectance and scattering measurements, stereoscopic pattern matching, time lapse sequencing, and potentially, optical tomography. Current data products from MISR employ several of these techniques. Observations of the intensity of scattered light as a function of view angle and wavelength provide accurate measures of aerosol optical depths (AOD) over land, including bright desert and urban source regions. Partitioning of AOD according to retrieved particle classification and incorporation of height information improves the relationship between AOD and surface PM2.5 (fine particulate matter, a regulated air pollutant), constituting an important step toward a satellite-based particulate pollution monitoring system. Stereoscopic cloud-top heights provide a unique metric for detecting interannual variability of clouds and exceptionally high quality and sensitivity for detection and height retrieval for low-level clouds. Using the several-minute time interval between camera views, MISR has enabled a pole-to-pole, height-resolved atmospheric wind measurement system. Stereo imagery also makes possible global measurement of the injection heights and advection speeds of smoke plumes, volcanic plumes, and dust clouds, for which a large database is now available. To build upon what has been learned during the first decade of MISR observations, we are evaluating algorithm updates that not only refine retrieval accuracies but also include enhancements (e.g., finer spatial resolution) that would have been computationally prohibitive just ten years ago. In addition, we are developing technological building blocks for future sensors that enable broader spectral coverage, wider swath, and incorporation of high-accuracy polarimetric imaging. Prototype cameras incorporating photoelastic modulators have been constructed. To fully capitalize on the rich information content of the current and next generation of multiangle imagers, several algorithmic paradigms currently employed need to be re-examined, e.g., the use of aerosol look-up tables, neglect of 3-D effects, and binary partitioning of the atmosphere into "cloudy" or "clear" designations. Examples of progress in algorithm and technology developments geared toward advanced application of multiangle imaging to remote sensing of aerosols and clouds will be presented.
A novel partitioning method for block-structured adaptive meshes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Lin, E-mail: lin.fu@tum.de; Litvinov, Sergej, E-mail: sergej.litvinov@aer.mw.tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de
We propose a novel partitioning method for block-structured adaptive meshes utilizing the meshless Lagrangian particle concept. With the observation that an optimum partitioning has high analogy to the relaxation of a multi-phase fluid to steady state, physically motivated model equations are developed to characterize the background mesh topology and are solved by multi-phase smoothed-particle hydrodynamics. In contrast to well established partitioning approaches, all optimization objectives are implicitly incorporated and achieved during the particle relaxation to stationary state. Distinct partitioning sub-domains are represented by colored particles and separated by a sharp interface with a surface tension model. In order to obtain the particle relaxation, special viscous and skin friction models, coupled with a tailored time integration algorithm are proposed. Numerical experiments show that the present method has several important properties: generation of approximately equal-sized partitions without dependence on the mesh-element type, optimized interface communication between distinct partitioning sub-domains, continuous domain decomposition which is physically localized and implicitly incremental. Therefore it is particularly suitable for load-balancing of high-performance CFD simulations.
A novel partitioning method for block-structured adaptive meshes
NASA Astrophysics Data System (ADS)
Fu, Lin; Litvinov, Sergej; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-07-01
We propose a novel partitioning method for block-structured adaptive meshes utilizing the meshless Lagrangian particle concept. With the observation that an optimum partitioning has high analogy to the relaxation of a multi-phase fluid to steady state, physically motivated model equations are developed to characterize the background mesh topology and are solved by multi-phase smoothed-particle hydrodynamics. In contrast to well established partitioning approaches, all optimization objectives are implicitly incorporated and achieved during the particle relaxation to stationary state. Distinct partitioning sub-domains are represented by colored particles and separated by a sharp interface with a surface tension model. In order to obtain the particle relaxation, special viscous and skin friction models, coupled with a tailored time integration algorithm are proposed. Numerical experiments show that the present method has several important properties: generation of approximately equal-sized partitions without dependence on the mesh-element type, optimized interface communication between distinct partitioning sub-domains, continuous domain decomposition which is physically localized and implicitly incremental. Therefore it is particularly suitable for load-balancing of high-performance CFD simulations.
NASA Astrophysics Data System (ADS)
Reygondeau, Gabriel; Olivier Irisson, Jean; Guieu, Cecile; Gasparini, Stephane; Ayata, Sakina; Koubbi, Philippe
2013-04-01
In recent decades, it has been found useful to ecoregionalise the pelagic environment assuming that within each partition environmental conditions are distinguishable and unique. Indeed, each proposed partition of the ocean aims to delineate the main oceanographic and ecological patterns to provide a geographical framework of marine ecosystems for ecological studies and management purposes. The aim of the present work is to integrate and process existing data on the pelagic environment of the Mediterranean Sea in order to define biogeochemical regions. Open access databases including remote sensing observations, oceanographic campaign data and physical modeling simulations are used. These various datasets allow the multidisciplinary view required to understand the interactions between climate and Mediterranean marine ecosystems. The first step of our study consisted of a statistical selection of a set of crucial environmental factors to propose the most parsimonious biogeographical approach that allows detecting the main oceanographic structure of the Mediterranean Sea. Second, based on the identified set of environmental parameters, both non-hierarchical and hierarchical clustering algorithms have been tested. Outputs from each methodology are then inter-compared to propose a robust map of the biotopes (unique ranges of environmental parameters) of the area. Each biotope was then modeled using a nonparametric environmental niche method to infer a dynamic biogeochemical partition. Last, the seasonal, interannual and long-term spatial changes of each biogeochemical region were investigated. The future of this work will be to perform a second partition to subdivide the biogeochemical regions according to biotic features of the Mediterranean Sea (ecoregions). This second level of division will thus be used as a geographical framework to identify ecosystems that have been altered by human activities (i.e. pollution, fishery, invasive species) for the European project PERSEUS (Protecting EuRopean Seas and borders through the intelligent Use of Surveillance) and the French program MERMEX (Marine Ecosystems Response in the Mediterranean Experiment).
Feenstra, Peter; Brunsteiner, Michael; Khinast, Johannes
2014-10-01
The interaction between drug products and polymeric packaging materials is an important topic in the pharmaceutical industry and often associated with high costs because of the required elaborate interaction studies. Therefore, a theoretical prediction of such interactions would be beneficial. Often, material parameters such as the octanol-water partition coefficient are used to predict the partitioning of migrant molecules between a solvent and a polymeric packaging material. Here, we present an investigation of the partitioning of various migrant molecules between polymers and solvents using molecular dynamics simulations for the calculation of interaction energies. Our results show that the use of a model for the interaction between the migrant and the polymer at atomistic detail can yield significantly better results when predicting the polymer-solvent partitioning than a model based on the octanol-water partition coefficient. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Matrix-vector multiplication using digital partitioning for more accurate optical computing
NASA Technical Reports Server (NTRS)
Gary, C. K.
1992-01-01
Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers as well as the ability to perform analog operations at a much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and the speed required for an equivalent throughput as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms if coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
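A small Python sketch of the digit-partitioning idea itself: each operand is split into base-b digits, all digit-level partial matrix-vector products are formed (these are the low-precision operations an analog processor would perform), and the exact result is recovered by recombining with powers of b. The base and digit count here are illustrative.

```python
import numpy as np

def digits(x, base, n):
    """Little-endian base-`base` digits of the non-negative integers in x."""
    out = []
    for _ in range(n):
        out.append(x % base)
        x = x // base
    return out

def mvm_digital_partitioning(A, v, base=4, n=4):
    A_digits, v_digits = digits(A, base, n), digits(v, base, n)
    acc = 0
    for i, Ai in enumerate(A_digits):
        for j, vj in enumerate(v_digits):
            acc += (Ai @ vj) * base ** (i + j)  # low-precision partial product
    return acc

rng = np.random.default_rng(0)
A = rng.integers(0, 256, size=(3, 3))   # entries fit in 4 base-4 digits
v = rng.integers(0, 256, size=3)
assert np.array_equal(mvm_digital_partitioning(A, v), A @ v)
print(mvm_digital_partitioning(A, v))
```

Each partial product only ever multiplies digits smaller than the base, which is what lets a low-accuracy analog device contribute to a high-accuracy result.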
EU-AIMS Longitudinal European Autism Project (LEAP): the autism twin cohort.
Isaksson, Johan; Tammimies, Kristiina; Neufeld, Janina; Cauvet, Élodie; Lundin, Karl; Buitelaar, Jan K; Loth, Eva; Murphy, Declan G M; Spooren, Will; Bölte, Sven
2018-01-01
EU-AIMS is the largest European research program aiming to identify stratification biomarkers and novel interventions for autism spectrum disorder (ASD). Within the program, the Longitudinal European Autism Project (LEAP) has recruited and comprehensively phenotyped a rare sample of 76 monozygotic and dizygotic twins, discordant or concordant for ASD, plus 30 typically developing twins. The aim of this letter is to complete previous descriptions of the LEAP case-control sample, clinically characterize it, investigate the suitability of the sample for ASD twin-control analyses, and share some 'lessons learnt.' Among the twins, a diagnosis of ASD is associated with increased symptom levels of ADHD, higher rates of intellectual disability, and lower family income. For the future, we conclude that the LEAP twin cohort offers multiple options for analyses of genetic and shared and non-shared environmental factors to generate new hypotheses for the larger cohort of LEAP singletons, and particularly to cross-validate and refine evidence from it.
Leap and strike kinetics of an acoustically ‘hunting’ barn owl (Tyto alba)
Usherwood, James R.; Sparkes, Emily L.; Weller, Renate
2014-01-01
Barn owls are effective hunters of small rodents. One hunting technique is a leap from the ground followed by a brief flight and a plummeting ‘strike’ onto an acoustically targeted – and potentially entirely hidden – prey. We used forceplate measurements to derive kinetics of the leap and strike. Leaping performance was similar to reported values for guinea fowl. This is likely achieved despite the owl's considerably smaller size because of its relatively long legs and use of wing upstroke. Strikes appear deliberately forceful: impulses could have been spread over larger periods during greater deflections of the centre of mass, as observed in leaping and an alighting landing measurement. The strike, despite forces around 150 times that of a mouse body weight, is not thought to be crucial to the kill; rather, forceful strikes may function primarily to enable rapid penetration of leaf litter or snow cover, allowing grasping of hidden prey. PMID:24948629
Leap and strike kinetics of an acoustically 'hunting' barn owl (Tyto alba).
Usherwood, James R; Sparkes, Emily L; Weller, Renate
2014-09-01
Barn owls are effective hunters of small rodents. One hunting technique is a leap from the ground followed by a brief flight and a plummeting 'strike' onto an acoustically targeted - and potentially entirely hidden - prey. We used forceplate measurements to derive kinetics of the leap and strike. Leaping performance was similar to reported values for guinea fowl. This is likely achieved despite the owl's considerably smaller size because of its relatively long legs and use of wing upstroke. Strikes appear deliberately forceful: impulses could have been spread over larger periods during greater deflections of the centre of mass, as observed in leaping and an alighting landing measurement. The strike, despite forces around 150 times that of a mouse body weight, is not thought to be crucial to the kill; rather, forceful strikes may function primarily to enable rapid penetration of leaf litter or snow cover, allowing grasping of hidden prey. © 2014. Published by The Company of Biologists Ltd.
NASA Astrophysics Data System (ADS)
Garcia-Estringana, P.; Latron, J.; Molina, A. J.; Llorens, P.
2012-04-01
Rainfall partitioning fluxes (throughfall and stemflow) have a large degree of temporal and spatial variability and may consequently lead to significant changes in the volume and composition of water that reach the understory and the soil. The objective of this work is to study the effect of rainfall partitioning on the seasonal and spatial variability of the soil water content in a Mediterranean downy oak forest (Quercus pubescens), located in the Vallcebre research catchments (42° 12'N, 1° 49'E). The monitoring design, started in July 2011, consists of a set of 20 automatic rain recorders and 40 automatic soil moisture probes located below the canopy. One hundred hemispheric photographs of the canopy were used to place the instruments at representative locations (in terms of canopy cover) within the plot. Bulk rainfall, stemflow and meteorological conditions above the forest cover are also automatically recorded. Canopy cover, in leaf and leafless periods, as well as biometric characteristics of the plot, are also regularly measured. This work presents the first results describing throughfall and soil moisture spatial variability during both the leaf and leafless periods. The main drivers of throughfall variability, such as canopy structure and meteorological conditions, are also analysed.
Task Parallel Incomplete Cholesky Factorization using 2D Partitioned-Block Layout
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kyungjoo; Rajamanickam, Sivasankaran; Stelle, George Widgery
We introduce a task-parallel algorithm for sparse incomplete Cholesky factorization that utilizes a 2D sparse partitioned-block layout of a matrix. Our factorization algorithm follows the idea of algorithms-by-blocks by using the block layout. The algorithms-by-blocks approach induces a task graph for the factorization. These tasks are inter-related to each other through their data dependences in the factorization algorithm. To process the tasks on various manycore architectures in a portable manner, we also present a portable tasking API that incorporates different tasking backends and device-specific features using an open-source framework for manycore platforms, i.e., Kokkos. A performance evaluation is presented on both Intel Sandybridge and Xeon Phi platforms for matrices from the University of Florida sparse matrix collection to illustrate the merits of the proposed task-based factorization. Experimental results demonstrate that our task-parallel implementation delivers about 26.6x speedup (geometric mean) over single-threaded incomplete Cholesky-by-blocks and 19.2x speedup over serial Cholesky performance, which does not carry tasking overhead, using 56 threads on the Intel Xeon Phi processor for sparse matrices arising from various application problems.
Antigen-activated dendritic cells ameliorate influenza A infections
Boonnak, Kobporn; Vogel, Leatrice; Orandle, Marlene; Zimmerman, Daniel; Talor, Eyal; Subbarao, Kanta
2013-01-01
Influenza A viruses cause significant morbidity and mortality worldwide. There is a need for alternative or adjunct therapies, as resistance to currently used antiviral drugs is emerging rapidly. We tested ligand epitope antigen presentation system (LEAPS) technology as a new immune-based treatment for influenza virus infection in a mouse model. Influenza-J-LEAPS peptides were synthesized by conjugating the binding ligand derived from the β2-microglobulin chain of the human MHC class I molecule (J-LEAPS) with 15 to 30 amino acid–long peptides derived from influenza virus NP, M, or HA proteins. DCs were stimulated with influenza-J-LEAPS peptides (influenza-J-LEAPS) and injected intravenously into infected mice. Antigen-specific LEAPS-stimulated DCs were effective in reducing influenza virus replication in the lungs and enhancing survival of infected animals. Additionally, they augmented influenza-specific T cell responses in the lungs and reduced the severity of disease by limiting excessive cytokine responses, which are known to contribute to morbidity and mortality following influenza virus infection. Our data demonstrate that influenza-J-LEAPS–pulsed DCs reduce virus replication in the lungs, enhance survival, and modulate the protective immune responses that eliminate the virus while preventing excessive cytokines that could injure the host. This approach shows promise as an adjunct to antiviral treatment of influenza virus infections. PMID:23934125
NASA Astrophysics Data System (ADS)
Lund, M.; Zona, D.; Jackowicz-Korczynski, M.; Xu, X.
2017-12-01
The eddy covariance methodology is the primary tool for studying landscape-scale land-atmosphere exchange of greenhouse gases. Since the choice of instrumental setup and processing algorithms may influence the results, efforts within the international flux community have been made towards methodological harmonization and standardization. Performing eddy covariance measurements in high-latitude, Arctic tundra sites involves several challenges, related not only to remoteness and harsh climate conditions but also to the choice of processing algorithms. Partitioning of net ecosystem exchange (NEE) of CO2 into gross primary production (GPP) and ecosystem respiration (Reco) in the FLUXNET2015 dataset is made using either Nighttime or Daytime methods. These variables, GPP and Reco, are essential for calibration and validation of Earth system models. North of the Arctic Circle, the sun remains visible at local midnight for a period of time, with the number of days per year with midnight sun depending on latitude. The absence of nighttime conditions during Arctic summers renders the Nighttime method uncertain; however, no extensive assessment of the implications for flux partitioning has yet been made. In this study, we will assess the performance and validity of both partitioning methods along a latitudinal transect of northern sites included in the FLUXNET2015 dataset. We will evaluate the partitioned flux components against model simulations using the Community Land Model (CLM). Our results will be valuable for users interested in simulating Arctic and global carbon cycling.
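A minimal sketch of the Nighttime method's logic, assuming a Lloyd-and-Taylor-type respiration model: fit ecosystem respiration to nighttime NEE (when GPP is zero), extrapolate Reco to daytime temperatures, and take GPP = Reco - NEE. The data and parameter values are synthetic, and the actual FLUXNET2015 procedure involves windowed fits and further quality control.

```python
import numpy as np
from scipy.optimize import curve_fit

def lloyd_taylor(T, r_ref, E0, T_ref=15.0, T0=-46.02):
    """Respiration as a function of air temperature T (degrees C)."""
    return r_ref * np.exp(E0 * (1 / (T_ref - T0) - 1 / (T - T0)))

# Nighttime: NEE equals Reco because photosynthesis is shut off.
T_night = np.array([2.0, 5.0, 8.0, 10.0, 12.0])
nee_night = lloyd_taylor(T_night, 2.0, 200.0) + 0.05   # synthetic "observations"
(r_ref, E0), _ = curve_fit(lloyd_taylor, T_night, nee_night, p0=(1.0, 100.0))

# Daytime: extrapolate Reco and partition (sign convention: uptake is negative NEE).
T_day, nee_day = 18.0, -6.0
reco_day = lloyd_taylor(T_day, r_ref, E0)
gpp_day = reco_day - nee_day
print(round(reco_day, 2), round(gpp_day, 2))
```

Under midnight sun there are no true nighttime samples to fit, which is exactly the weakness the study sets out to quantify.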
Partridge, Roland W; Brown, Fraser S; Brennan, Paul M; Hennessey, Iain A M; Hughes, Mark A
2016-02-01
To assess the potential of the LEAP™ infrared motion tracking device to map laparoscopic instrument movement in a simulated environment. Simulator training is optimized when augmented by objective performance feedback. We explore the potential LEAP has to provide this in a way compatible with affordable take-home simulators. LEAP and the previously validated InsTrac visual tracking tool mapped expert and novice performances of a standardized simulated laparoscopic task. Ability to distinguish between the 2 groups (construct validity) and correlation between techniques (concurrent validity) were the primary outcome measures. Forty-three expert and 38 novice performances demonstrated significant differences in LEAP-derived metrics for instrument path distance (P < .001), speed (P = .002), acceleration (P < .001), motion smoothness (P < .001), and distance between the instruments (P = .019). Only instrument path distance demonstrated a correlation between LEAP and InsTrac tracking methods (novices: r = .663, P < .001; experts: r = .536, P < .001). Consistency of LEAP tracking was poor (average % time hands not tracked: 31.9%). The LEAP motion device is able to track the movement of hands using instruments in a laparoscopic box simulator. Construct validity is demonstrated by its ability to distinguish novice from expert performances. Only time and instrument path distance demonstrated concurrent validity with an existing tracking method however. A number of limitations to the tracking method used by LEAP have been identified. These need to be addressed before it can be considered an alternative to visual tracking for the delivery of objective performance metrics in take-home laparoscopic simulators. © The Author(s) 2015.
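A sketch of how such kinematic metrics can be computed from a tracked 3-D position time series; the metric definitions here (path length, mean speed, integrated squared jerk as a smoothness surrogate) are common choices but are not necessarily the study's exact formulas, and the trajectory is synthetic.

```python
import numpy as np

def motion_metrics(pos, dt):
    """pos: (n, 3) array of tracked positions sampled every dt seconds."""
    vel = np.gradient(pos, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    jerk = np.gradient(acc, dt, axis=0)
    step = np.linalg.norm(np.diff(pos, axis=0), axis=1)
    return {
        'path_distance': step.sum(),
        'mean_speed': np.linalg.norm(vel, axis=1).mean(),
        'jerk_smoothness': (np.linalg.norm(jerk, axis=1) ** 2).sum() * dt,
    }

t = np.linspace(0.0, 2.0, 201)
toy_path = np.stack([np.sin(t), np.cos(t), 0.1 * t], axis=1)  # synthetic trajectory
print(motion_metrics(toy_path, dt=t[1] - t[0]))
```

Dropped tracking frames (the 31.9% untracked time reported above) would appear as gaps in pos and would need interpolation or exclusion before metrics like these are trustworthy.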
A fast exact simulation method for a class of Markov jump processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yao, E-mail: yaoli@math.umass.edu; Hu, Lili, E-mail: lilyhu86@gmail.com
2015-11-14
A new method of the stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulations of a class of Markov jump processes, is presented in this paper. The HLM has a conditional constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly implement a hash-table-like bucket sort algorithm for all times of occurrence covered by a time step with length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large scale problems.
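A toy sketch of the bucket-sort ingredient, assuming clock times are given: events falling in the current window [t, t + τ) are hashed into equal-width buckets so the earliest one is found by scanning to the first non-empty bucket rather than sorting every clock.

```python
import random

def next_event(clock_times, t, tau, n_buckets=64):
    """Return (time, clock index) of the earliest event in [t, t + tau)."""
    buckets = [[] for _ in range(n_buckets)]
    for idx, s in enumerate(clock_times):
        if t <= s < t + tau:
            buckets[int((s - t) / tau * n_buckets)].append((s, idx))
    for bucket in buckets:          # the first non-empty bucket contains
        if bucket:                  # the minimum time in the window
            return min(bucket)
    return None                     # no event this leap: advance t by tau

random.seed(2)
clocks = [random.expovariate(1.0) for _ in range(10)]
print(next_event(clocks, t=0.0, tau=0.5))
```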
In Silico Constraint-Based Strain Optimization Methods: the Quest for Optimal Cell Factories
Maia, Paulo; Rocha, Miguel
2015-01-01
SUMMARY Shifting from chemical to biotechnological processes is one of the cornerstones of 21st century industry. The production of a great range of chemicals via biotechnological means is a key challenge on the way toward a bio-based economy. However, this shift is occurring at a pace slower than initially expected. The development of efficient cell factories that allow for competitive production yields is of paramount importance for this leap to happen. Constraint-based models of metabolism, together with in silico strain design algorithms, promise to reveal insights into the best genetic design strategies, a step further toward achieving that goal. In this work, a thorough analysis of the main in silico constraint-based strain design strategies and algorithms is presented, their application in real-world case studies is analyzed, and a path for the future is discussed. PMID:26609052
Multiscale 3-D shape representation and segmentation using spherical wavelets.
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2007-04-01
This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. We show: 1) a reconstruction task of a test set to validate the expressiveness of our multiscale prior and 2) a segmentation task. In the reconstruction task, our results show that for a given training set size, our algorithm significantly improves the approximation of shapes in a testing set over the Point Distribution Model, which tends to oversmooth data. In the segmentation task, our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm, by capturing finer shape details.
Multiscale 3-D Shape Representation and Segmentation Using Spherical Wavelets
Nain, Delphine; Haker, Steven; Bobick, Aaron
2013-01-01
This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. We show: 1) a reconstruction task of a test set to validate the expressiveness of our multiscale prior and 2) a segmentation task. In the reconstruction task, our results show that for a given training set size, our algorithm significantly improves the approximation of shapes in a testing set over the Point Distribution Model, which tends to oversmooth data. In the segmentation task, our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm, by capturing finer shape details. PMID:17427745
Moore, Talia Y; Rivera, Alberto M; Biewener, Andrew A
2017-01-01
Numerous historical descriptions of the Lesser Egyptian jerboa, Jaculus jaculus, a small bipedal mammal with elongate hindlimbs, make special note of their extraordinary leaping ability. We observed jerboa locomotion in a laboratory setting and performed inverse dynamics analysis to understand how this small rodent generates such impressive leaps. We combined kinematic data from video, kinetic data from a force platform, and morphometric data from dissections to calculate the relative contributions of each hindlimb muscle and tendon to the total movement. Jerboas leapt in excess of 10 times their hip height. At the maximum recorded leap height (not the maximum observed leap height), peak moments for the metatarso-phalangeal, ankle, knee, and hip joints were 13.1, 58.4, 65.1, and 66.9 N mm, respectively. Muscles acting at the ankle joint contributed the most work (mean 231.6 mJ/kg body mass) to produce the energy of vertical leaping, while muscles acting at the metatarso-phalangeal joint produced the most stress (peak 317.1 kPa). The plantaris, digital flexors, and gastrocnemius tendons encountered peak stresses of 25.6, 19.1, and 6.0 MPa, respectively, transmitting the forces of their corresponding muscles (peak force 3.3, 2.0, and 3.8 N, respectively). Notably, we found that the mean elastic energy recovered in the primary tendons of both hindlimbs comprised on average only 4.4% of the energy of the associated leap. The limited use of tendon elastic energy storage in the jerboa parallels the morphologically similar heteromyid kangaroo rat, Dipodomys spectabilis. When compared to larger saltatory kangaroos and wallabies that sustain hopping over longer periods of time, these small saltatory rodents store and recover less elastic strain energy in their tendons. The large contribution of muscle work, rather than elastic strain energy, to the vertical leap suggests that the fitness benefit of rapid acceleration for predator avoidance dominated over the need to enhance locomotor economy in the evolutionary history of jerboas.
Sarment: Python modules for HMM analysis and partitioning of sequences.
Guéguen, Laurent
2005-08-15
Sarment is a package of Python modules for easy building and manipulation of sequence segmentations. It provides efficient implementations of the usual algorithms for hidden Markov model computation, as well as for maximal predictive partitioning. Owing to its very large variety of criteria for computing segmentations, Sarment can handle many kinds of models. Because of object-oriented programming, the results of the segmentation are very easy to manipulate.
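Since Sarment itself is Python, a generic illustration fits naturally here: a minimal Viterbi decoder that recovers the most probable hidden-state segmentation of a sequence under a two-state HMM. This is a textbook sketch, not Sarment's API.

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely state path: log_pi (k,), log_A (k,k), log_B (k, n_symbols)."""
    n, k = len(obs), len(log_pi)
    delta = np.zeros((n, k))
    back = np.zeros((n, k), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, n):
        scores = delta[t - 1][:, None] + log_A    # scores[i, j]: transition i -> j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

obs = [0, 0, 1, 1, 1, 0, 0]
log = np.log
path = viterbi(obs, log_pi=log([0.5, 0.5]),
               log_A=log([[0.9, 0.1], [0.1, 0.9]]),
               log_B=log([[0.9, 0.1], [0.1, 0.9]]))
print(path)  # [0, 0, 1, 1, 1, 0, 0]: three segments
```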
Evaluation of Hierarchical Clustering Algorithms for Document Datasets
2002-06-03
link, complete-link, and group average (UPGMA)) and a new set of merging criteria derived from the six partitional criterion functions. Overall, we ... used the single-link, complete-link, and UPGMA schemes, as well as the various partitional criterion functions described in Section 3.1. The single-link ... other (complete-link approach). The UPGMA scheme [16] (also known as group average) overcomes these problems by measuring the similarity of two clusters
Design of a Dual Waveguide Normal Incidence Tube (DWNIT) Utilizing Energy and Modal Methods
NASA Technical Reports Server (NTRS)
Betts, Juan F.; Jones, Michael G. (Technical Monitor)
2002-01-01
This report investigates the partition design of the proposed Dual Waveguide Normal Incidence Tube (DWNIT). Some advantages provided by the DWNIT are (1) Assessment of coupling relationships between resonators in close proximity, (2) Evaluation of "smart liners", (3) Experimental validation for parallel element models, and (4) Investigation of effects of simulated angles of incidence of acoustic waves. Energy models of the two chambers were developed to determine the Sound Pressure Level (SPL) drop across the two chambers, through the use of an intensity transmission function for the chamber's partition. The models allowed the chamber's lengthwise end samples to vary. The initial partition design (2" high, 16" long, 0.25" thick) was predicted to provide at least 160 dB SPL drop across the partition with a compressive model, and at least 240 dB SPL drop with a bending model using a damping loss factor of 0.01. The end chamber sample transmissions coefficients were set to 0.1. Since these results predicted more SPL drop than required, a plate thickness optimization algorithm was developed. The results of the algorithm routine indicated that a plate with the same height and length, but with a thickness of 0.1" and 0.05 structural damping loss, would provide an adequate SPL isolation between the chambers.
Rate-distortion analysis of directional wavelets.
Maleki, Arian; Rajaei, Boshra; Pourreza, Hamid Reza
2012-02-01
The inefficiency of separable wavelets in representing smooth edges has led to a great interest in the study of new 2-D transformations. The most popular criterion for analyzing these transformations is the approximation power. Transformations with near-optimal approximation power are useful in many applications such as denoising and enhancement. However, they are not necessarily good for compression. Therefore, most of the nearly optimal transformations such as curvelets and contourlets have not found any application in image compression yet. One of the most promising schemes for image compression is the elegant idea of directional wavelets (DIWs). While these algorithms outperform the state-of-the-art image coders in practice, our theoretical understanding of them is very limited. In this paper, we adopt the notion of rate-distortion and calculate the performance of the DIW on a class of edge-like images. Our theoretical analysis shows that if the edges are not "sharp," the DIW will compress them more efficiently than the separable wavelets. It also demonstrates the inefficiency of the quadtree partitioning that is often used with the DIW. To solve this issue, we propose a new partitioning scheme called megaquad partitioning. Our simulation results on real-world images confirm the benefits of the proposed partitioning algorithm, promised by our theoretical analysis. © 2011 IEEE
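A compact sketch of the quadtree partitioning being criticized: blocks are split recursively until they are nearly homogeneous, so small blocks pile up along any edge that does not align with the dyadic grid. The smoothness test (block standard deviation against a threshold) is an illustrative stand-in for a rate-distortion criterion.

```python
import numpy as np

def quadtree(block, x, y, min_size=4, tol=10.0, leaves=None):
    """Recursively split `block` until homogeneous or at minimum size."""
    leaves = [] if leaves is None else leaves
    h, w = block.shape
    if max(h, w) <= min_size or block.std() <= tol:
        leaves.append((x, y, w, h))            # accept as a leaf
    else:
        h2, w2 = h // 2, w // 2
        for dy, dx in ((0, 0), (0, w2), (h2, 0), (h2, w2)):
            quadtree(block[dy:dy + h2, dx:dx + w2], x + dx, y + dy,
                     min_size, tol, leaves)
    return leaves

img = np.zeros((64, 64))
img[:, 20:] = 255.0                            # a step edge off the dyadic grid
print(len(quadtree(img, 0, 0)))                # many leaves cluster along the edge
```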
Intelligent automated surface grid generation
NASA Technical Reports Server (NTRS)
Yao, Ke-Thia; Gelsey, Andrew
1995-01-01
The goal of our research is to produce a flexible, general grid generator for automated use by other programs, such as numerical optimizers. The current trend in the gridding field is toward interactive gridding, which more readily taps into the spatial reasoning abilities of the human user through a graphical interface with a mouse. However, a sometimes fruitful approach to generating new designs is to apply an optimizer with shape modification operators to improve an initial design. For this approach to be useful, the optimizer must be able to automatically grid and evaluate the candidate designs. This paper describes an intelligent gridder that is capable of analyzing the topology of the spatial domain and predicting approximate physical behaviors based on the geometry of the spatial domain to automatically generate grids for computational fluid dynamics simulators. Typically, gridding programs are given a partitioning of the spatial domain to assist the gridder. Our gridder is capable of performing this partitioning itself, which enables it to automatically grid spatial domains of a wide range of configurations.
Yu, P.; Sun, J.; Wolz, R.; Stephenson, D.; Brewer, J.; Fox, N.C.; Cole, P.E.; Jack, C.R.; Hill, D.L.G.; Schwarz, A.J.
2014-01-01
Objective To evaluate the effect of computational algorithm, measurement variability and cut-point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). Methods We used normal control and amnestic MCI subjects from ADNI-1 as normative reference and screening cohorts. We evaluated the enrichment performance of four widely-used hippocampal segmentation algorithms (FreeSurfer, HMAPS, LEAP and NeuroQuant) in terms of two-year changes in MMSE, ADAS-Cog and CDR-SB. We modeled the effect of algorithm, test-retest variability and cut-point on sample size, screen fail rates and trial cost and duration. Results HCV-based patient selection yielded not only reduced sample sizes (by ~40–60%) but also lower trial costs (by ~30–40%) across a wide range of cut-points. Overall, the dependence on the cut-point value was similar for the three clinical instruments considered. Conclusion These results provide a guide to the choice of HCV cut-point for aMCI clinical trials, allowing an informed trade-off between statistical and practical considerations. PMID:24211008
Minimum Sample Size Requirements for Mokken Scale Analysis
ERIC Educational Resources Information Center
Straat, J. Hendrik; van der Ark, L. Andries; Sijtsma, Klaas
2014-01-01
An automated item selection procedure in Mokken scale analysis partitions a set of items into one or more Mokken scales, if the data allow. Two algorithms are available that pursue the same goal of selecting Mokken scales of maximum length: Mokken's original automated item selection procedure (AISP) and a genetic algorithm (GA). Minimum…
Research on the information security system in electrical gis system in mobile application
NASA Astrophysics Data System (ADS)
Zhou, Chao; Feng, Renjun; Jiang, Haitao; Huang, Wei; Zhu, Daohua
2017-05-01
With the rapid development of social informatization, the demand of governments, enterprises, and individuals for spatial information is growing. In addition, the combination of wireless network technology and spatial information technology has promoted the emergence and development of mobile technologies. Network technology and mobile communication have developed by leaps and bounds into two pillar industries, absorbing and adopting the latest advances in information, communication, computer, and electronics technologies. Network coverage grows ever wider, transmission rates ever faster, and user terminals ever smaller. From LAN to WAN, from wired to wireless networks, and from wired access to mobile wireless access, the demands placed on communication technology keep increasing, so mobile communication technology faces unprecedented challenges as well as unprecedented opportunities. However, because such systems inherently depend on existing computer communication networks, information security problems cannot be ignored. Information security now penetrates all aspects of life: an information system is a complex computer system, and its physical, operational, and management vulnerabilities constitute the security vulnerability of the system as a whole. This paper first analyzes the composition of a mobile enterprise network and its information security threats; it then puts forward security planning and measures, and constructs an information security structure.
Spatial partitions systematize visual search and enhance target memory.
Solman, Grayden J F; Kingstone, Alan
2017-02-01
Humans are remarkably capable of finding desired objects in the world, despite the scale and complexity of naturalistic environments. Broadly, this ability is supported by an interplay between exploratory search and guidance from episodic memory for previously observed target locations. Here we examined how the environment itself may influence this interplay. In particular, we examined how partitions in the environment, like buildings, rooms, and furniture, can impact memory during repeated search. We report that the presence of partitions in a display, independent of item configuration, reliably improves episodic memory for item locations. Repeated search through partitioned displays was faster overall and was characterized by more rapid ballistic orienting in later repetitions. Explicit recall was also both faster and more accurate when displays were partitioned. Finally, we found that search paths were more regular and systematic when displays were partitioned. Given the ubiquity of partitions in real-world environments, these results provide important insights into the mechanisms of naturalistic search and its relation to memory.
Network immunization under limited budget using graph spectra
NASA Astrophysics Data System (ADS)
Zahedi, R.; Khansari, M.
2016-03-01
In this paper, we propose a new algorithm that minimizes the worst expected growth of an epidemic by reducing the size of the largest connected component (LCC) of the underlying contact network. The proposed algorithm is applicable to any level of available resources and, despite the greedy approaches of most immunization strategies, selects nodes simultaneously. In each iteration, the proposed method partitions the LCC into two groups. These are the best candidates for communities in that component, and the available resources are sufficient to separate them. Using Laplacian spectral partitioning, the proposed method performs community detection inference with a time complexity that rivals that of the best previous methods. Experiments show that our method outperforms targeted immunization approaches in both real and synthetic networks.
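A minimal numerical sketch of one bisection step, assuming a dense adjacency matrix: the Fiedler vector of the graph Laplacian splits the component, and the immunization budget is then spent on the nodes adjacent to the cut. Budget handling and the iteration over shrinking components are omitted.

```python
import numpy as np

def fiedler_split(adj):
    """Side labels from the eigenvector of the second-smallest Laplacian eigenvalue."""
    lap = np.diag(adj.sum(axis=1)) - adj
    _, vecs = np.linalg.eigh(lap)
    return vecs[:, 1] >= 0

# ring of 8 nodes: the spectral split yields two contiguous arcs
n = 8
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0

side = fiedler_split(adj)
separators = [i for i in range(n)
              if any(adj[i, j] and side[i] != side[j] for j in range(n))]
print(side.astype(int), separators)  # candidate nodes for immunization
```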
Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chiou, Jin-Chern
1990-01-01
Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAEs) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by using an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out. The DAEs and the constraint treatment techniques were transformed into arrowhead matrices, from which the Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
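A small sketch of the Schur-complement pattern described above, with a diagonal block standing in for the arrowhead structure and an unpreconditioned conjugate gradient solve for brevity; the sizes and matrices are synthetic.

```python
# Solve [[A, B], [B^T, C]] [x; y] = [f; g] by eliminating the easily
# invertible block A and applying CG to the smaller Schur system
# (C - B^T A^{-1} B) y = g - B^T A^{-1} f.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n, m = 40, 6
A = np.diag(rng.uniform(1.0, 2.0, n))     # block-diagonal "body" equations
B = 0.1 * rng.normal(size=(n, m))         # constraint coupling
C = 3.0 * np.eye(m)
f, g = rng.normal(size=n), rng.normal(size=m)

A_inv = np.diag(1.0 / np.diag(A))         # trivial inverse of the diagonal block
S = C - B.T @ A_inv @ B                   # Schur complement (SPD here)
y, info = cg(S, g - B.T @ (A_inv @ f))
x = A_inv @ (f - B @ y)
assert info == 0
print(np.linalg.norm(A @ x + B @ y - f), np.linalg.norm(B.T @ x + C @ y - g))
```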
Robust and efficient overset grid assembly for partitioned unstructured meshes
NASA Astrophysics Data System (ADS)
Roget, Beatrice; Sitaraman, Jayanarayanan
2014-03-01
This paper presents a method to perform efficient and automated Overset Grid Assembly (OGA) on a system of overlapping unstructured meshes in a parallel computing environment where all meshes are partitioned into multiple mesh-blocks and processed on multiple cores. The main task of the overset grid assembler is to identify, in parallel, among all points in the overlapping mesh system, at which points the flow solution should be computed (field points), interpolated (receptor points), or ignored (hole points). Point containment search or donor search, an algorithm to efficiently determine the cell that contains a given point, is the core procedure necessary for accomplishing this task. Donor search is particularly challenging for partitioned unstructured meshes because of the complex irregular boundaries that are often created during partitioning.
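The donor-search primitive reduces, cell by cell, to a point-in-cell test; here is a hedged sketch for triangular cells using barycentric coordinates (a real assembler wraps this in a spatial search structure over the partitioned mesh-blocks).

```python
import numpy as np

def contains(tri, p, eps=1e-12):
    """True if point p lies inside the triangle with vertices tri."""
    a, b, c = (np.asarray(v, dtype=float) for v in tri)
    m = np.column_stack([b - a, c - a])
    try:
        u, v = np.linalg.solve(m, np.asarray(p, dtype=float) - a)
    except np.linalg.LinAlgError:
        return False                   # degenerate cell
    return u >= -eps and v >= -eps and u + v <= 1.0 + eps

cells = [[(0, 0), (1, 0), (0, 1)], [(1, 0), (1, 1), (0, 1)]]
query = (0.7, 0.6)
donor = next((i for i, tri in enumerate(cells) if contains(tri, query)), None)
print(donor)  # -> 1: this cell supplies the interpolation data for a receptor point
```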
Jachowski, David S.; Dobony, Christopher A.; Coleman, Laci S.; Ford, W. Mark; Britzke, Eric R.; Rodrigue, Jane L.
2014-01-01
AimEmerging infectious diseases present a major perturbation with apparent direct effects such as reduced population density, extirpation and/or extinction. Comparatively less is known about the potential indirect effects of disease that likely alter community structure and larger ecosystem function. Since 2006, white-nose syndrome (WNS) has resulted in the loss of over 6 million hibernating bats in eastern North America. Considerable evidence exists concerning niche partitioning in sympatric bat species in this region, and the unprecedented, rapid decline in multiple species following WNS may provide an opportunity to observe a dramatic restructuring of the bat community.LocationWe conducted our study at Fort Drum Army Installation in Jefferson and Lewis counties, New York, USA, where WNS first impacted extant bat species in winter 2007–2008.MethodsAcoustical monitoring during 2003–2011 allowed us to test the hypothesis that spatial and temporal niche partitioning by bats was relaxed post-WNS.ResultsWe detected nine bat species pre- and post-WNS. Activity for most bat species declined post-WNS. Dramatic post-WNS declines in activity of little brown bat (Myotis lucifugus, MYLU), formerly the most abundant bat species in the region, were associated with complex, often species-specific responses by other species that generally favoured increased spatial and temporal overlap with MYLU.Main conclusionsIn addition to the obvious direct effects of disease on bat populations and activity levels, our results provide evidence that disease can have cascading indirect effects on community structure. Recent occurrence of WNS in North America, combined with multiple existing stressors, is resulting in dramatic shifts in temporal and spatial niche partitioning within bat communities. These changes might influence long-term population viability of some bat species as well as broader scale ecosystem structure and function.
NASA Astrophysics Data System (ADS)
Buaria, D.; Yeung, P. K.
2017-12-01
A new parallel algorithm utilizing a partitioned global address space (PGAS) programming model to achieve high scalability is reported for particle tracking in direct numerical simulations of turbulent fluid flow. The work is motivated by the desire to obtain Lagrangian information necessary for the study of turbulent dispersion at the largest problem sizes feasible on current and next-generation multi-petaflop supercomputers. A large population of fluid particles is distributed among parallel processes dynamically, based on instantaneous particle positions, such that all of the interpolation information needed for each particle is available either locally on its host process or on neighboring processes holding adjacent sub-domains of the velocity field. With cubic splines as the preferred interpolation method, the new algorithm is designed to minimize the need for communication, by transferring between adjacent processes only those spline coefficients determined to be necessary for specific particles. This transfer is implemented very efficiently as a one-sided communication, using Co-Array Fortran (CAF) features which facilitate small data movements between different local partitions of a large global array. The cost of monitoring transfer of particle properties between adjacent processes for particles migrating across sub-domain boundaries is found to be small. Detailed benchmarks are obtained on the Cray petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign. For operations on the particles in an 8192^3 simulation (0.55 trillion grid points) on 262,144 Cray XE6 cores, the new algorithm is found to be orders of magnitude faster than a prior algorithm in which each particle is tracked by the same parallel process at all times. This large speedup reduces the additional cost of tracking of order 300 million particles to just over 50% of the cost of computing the Eulerian velocity field at this scale. The improving support of PGAS models in major compilers suggests that this algorithm will be of wide applicability on most upcoming supercomputers.
Fourier transform spectrometer controller for partitioned architectures
NASA Astrophysics Data System (ADS)
Tamas-Selicean, D.; Keymeulen, D.; Berisford, D.; Carlson, R.; Hand, K.; Pop, P.; Wadsworth, W.; Levy, R.
The current trend in spacecraft computing is to integrate applications of different criticality levels on the same platform using no separation. This approach increases the complexity of the development, verification and integration processes, with an impact on the whole system life cycle. Researchers at ESA and NASA advocated for the use of partitioned architecture to reduce this complexity. Partitioned architectures rely on platform mechanisms to provide robust temporal and spatial separation between applications. Such architectures have been successfully implemented in several industries, such as avionics and automotive. In this paper we investigate the challenges of developing and the benefits of integrating a scientific instrument, namely a Fourier Transform Spectrometer, in such a partitioned architecture.
Connected Component Model for Multi-Object Tracking.
He, Zhenyu; Li, Xin; You, Xinge; Tao, Dacheng; Tang, Yuan Yan
2016-08-01
In multi-object tracking, it is critical to explore the data associations by exploiting the temporal information from a sequence of frames rather than the information from the adjacent two frames. Since straightforwardly obtaining data associations from multi-frames is an NP-hard multi-dimensional assignment (MDA) problem, most existing methods solve this MDA problem by either developing complicated approximate algorithms, or simplifying MDA as a 2D assignment problem based upon the information extracted only from adjacent frames. In this paper, we show that the relation between associations of two observations is the equivalence relation in the data association problem, based on the spatial-temporal constraint that the trajectories of different objects must be disjoint. Therefore, the MDA problem can be equivalently divided into independent subproblems by equivalence partitioning. In contrast to existing works for solving the MDA problem, we develop a connected component model (CCM) by exploiting the constraints of the data association and the equivalence relation on the constraints. Based upon CCM, we can efficiently obtain the global solution of the MDA problem for multi-object tracking by optimizing a sequence of independent data association subproblems. Experiments on challenging public data sets demonstrate that our algorithm outperforms the state-of-the-art approaches.
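A minimal sketch of the equivalence-partitioning idea with union-find: must-link association constraints between detections form an equivalence relation, so its equivalence classes split the multi-frame problem into independent subproblems. The detection indices and links are synthetic.

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# detections 0..5 across several frames; pairs that could be the same object
candidate_links = [(0, 2), (2, 4), (1, 3), (3, 5)]
uf = UnionFind(6)
for a, b in candidate_links:
    uf.union(a, b)

components = {}
for d in range(6):
    components.setdefault(uf.find(d), []).append(d)
print(list(components.values()))  # [[0, 2, 4], [1, 3, 5]]: two independent subproblems
```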
Cisneros, Laura M; Fagan, Matthew E; Willig, Michael R
2016-01-01
Assembly of species into communities following human disturbance (e.g., deforestation, fragmentation) may be governed by spatial (e.g., dispersal) or environmental (e.g., niche partitioning) mechanisms. Variation partitioning has been used to broadly disentangle spatial and environmental mechanisms, and approaches utilizing functional and phylogenetic characteristics of communities have been implemented to determine the relative importance of particular environmental (or niche-based) mechanisms. Nonetheless, few studies have integrated these quantitative approaches to comprehensively assess the relative importance of particular structuring processes. We employed a novel variation partitioning approach to evaluate the relative importance of particular spatial and environmental drivers of taxonomic, functional, and phylogenetic aspects of bat communities in a human-modified landscape in Costa Rica. Specifically, we estimated the amount of variation in species composition (taxonomic structure) and in two aspects of functional and phylogenetic structure (i.e., composition and dispersion) along a forest loss and fragmentation gradient that are uniquely explained by landscape characteristics (i.e., environment) or space to assess the importance of competing mechanisms. The unique effects of space on taxonomic, functional and phylogenetic structure were consistently small. In contrast, landscape characteristics (i.e., environment) played an appreciable role in structuring bat communities. Spatially-structured landscape characteristics explained 84% of the variation in functional or phylogenetic dispersion, and the unique effects of landscape characteristics significantly explained 14% of the variation in species composition. Furthermore, variation in bat community structure was primarily due to differences in dispersion of species within functional or phylogenetic space along the gradient, rather than due to differences in functional or phylogenetic composition. Variation among bat communities was related to environmental mechanisms, especially niche-based (i.e., environmental) processes, rather than spatial mechanisms. High variation in functional or phylogenetic dispersion, as opposed to functional or phylogenetic composition, suggests that loss or gain of niche space is driving the progressive loss or gain of species with particular traits from communities along the human-modified gradient. Thus, environmental characteristics associated with landscape structure influence functional or phylogenetic aspects of bat communities by effectively altering the ways in which species partition niche space.
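A minimal numerical sketch of variation partitioning itself, using ordinary least squares and adjusted R^2 on a synthetic scalar response; the study's analysis operates on community-level matrices (and functional/phylogenetic structure), which this toy example does not reproduce.

```python
import numpy as np

def adj_r2(X, y):
    """Adjusted R^2 of an ordinary least-squares fit of y on X."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    r2 = 1.0 - (y - X1 @ beta).var() / y.var()
    n, p = X.shape
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

rng = np.random.default_rng(0)
S = rng.normal(size=(300, 2))                    # "spatial" predictors
E = 0.8 * S[:, :1] + rng.normal(size=(300, 1))   # environment, spatially structured
y = 2.0 * E[:, 0] + rng.normal(size=300)         # response driven by environment

r_e, r_s, r_es = adj_r2(E, y), adj_r2(S, y), adj_r2(np.column_stack([E, S]), y)
print({'unique_env': r_es - r_s,                 # fraction [a]
       'shared': r_e + r_s - r_es,               # fraction [b]
       'unique_space': r_es - r_e})              # fraction [c]
```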
On spatial coalescents with multiple mergers in two dimensions.
Heuer, Benjamin; Sturm, Anja
2013-08-01
We consider the genealogy of a sample of individuals taken from a spatially structured population when the variance of the offspring distribution is relatively large. The space is structured into discrete sites of a graph G. If the population size at each site is large, spatial coalescents with multiple mergers, so-called spatial Λ-coalescents, for which ancestral lines migrate in space and coalesce according to some Λ-coalescent mechanism, are shown to be appropriate approximations to the genealogy of a sample of individuals. We then consider as the graph G the two-dimensional torus with side length 2L+1 and show that as L tends to infinity, and time is rescaled appropriately, the partition structure of spatial Λ-coalescents of individuals sampled far enough apart converges to the partition structure of a non-spatial Kingman coalescent. From a biological point of view this means that in certain circumstances both the spatial structure and the larger variance of the underlying offspring distribution are harder to detect from the sample. However, supplemental simulations show that for moderately large L the different structure is still evident.
Evaluation of the leap motion controller as a new contact-free pointing device.
Bachmann, Daniel; Weichert, Frank; Rinkenauer, Gerhard
2014-12-24
This paper presents a Fitts' law-based analysis of the user's performance in selection tasks with the Leap Motion Controller compared with a standard mouse device. The Leap Motion Controller (LMC) is a new contact-free input system for gesture-based human-computer interaction with declared sub-millimeter accuracy. To date, hardly any systematic evaluation of this new system has been available. With an error rate of 7.8% for the LMC and 2.8% for the mouse device, movement times twice as large as for a mouse device, and high overall effort ratings, the Leap Motion Controller's performance as an input device for everyday generic computer pointing tasks is rather limited, at least with regard to the selection recognition provided by the LMC.
NASA Astrophysics Data System (ADS)
Vansteenberge, Stef; Verheyden, Sophie; Quinif, Yves; Genty, Dominique; Blamart, Dominique; Deprez, Maxim; Van Stappen, Jeroen; Cnudde, Veerle; Cheng, Hai; Edwards, R. Lawrence; Claeys, Philippe
2017-04-01
Interglacial-glacial transitions represent important turnovers in the climate system. In contrast with glacial terminations, they are described as a more gradual cooling. So far, the last interglacial has yielded a wealth of knowledge regarding climate dynamics during past warm periods. On top of the assumed gradual temperature drop starting at 119 ka, evidence for the presence of a drastic drying/cooling event in northern Europe has been observed. In lake records from Germany, a distinct shift in pollen assembly at 117.5 ka is interpreted as the consequence of a short dry event lasting 470 years, defined as the Late Eemian Aridity Pulse (LEAP, Sirocko et al., 2005). In a Belgian stalagmite from Han-sur-Lesse Cave, the LEAP is characterized by a 5‰ increase in δ13C occurring in just 200 years. The δ13C enrichment is dated at 117.5 ka and associated with a vegetation change above the cave, induced by a drying and/or cooling event (Vansteenberge et al., 2016). Also, within North Atlantic sediment cores, an increase in ice-rafted debris was linked to the occurrence of a colder period at 117 ka (Irvali et al., 2016). Its coevality with the LEAP indicates a likely wider, more than regional extent than previously thought. Up to now, no independent chronology exists and little is known about the continental climatic expression of the LEAP. This study aims at 1) constructing an improved and independent chronology for the LEAP event, 2) characterizing this event in terms of its climatic expression and 3) placing the LEAP within the context of an interglacial-glacial transition. For this, two additional speleothems (Han-8, RSM-17) from two different Belgian caves (Han-sur-Lesse, Remouchamps) are added to the existing Han-9 dataset. Exceptionally high growth rates (0.5 mm yr-1) and a presumed annual layering of the RSM-17 sample enable an annual to decadal resolution for investigating the LEAP. U-Th age models covering the glacial inception are constructed with 25 dates on the three speleothems. All samples are investigated through a multiproxy approach consisting of growth rate, stable isotopes (δ13C and δ18O) and trace elements (Mg, Sr, Ba, Zn, Pb, U). Furthermore, µCT scans with a resolution down to 10 µm characterize pronounced changes in speleothem morphology. First results show the presence of similar δ13C excursions in the two newly analyzed speleothems. The abundance of U-Th dates now confirms the timing of the LEAP at 117.5 ka, as determined from Han-9, but significantly reduces the age error to 0.4 ka. Also, the various proxies demonstrate that pre-LEAP climate conditions were not reestablished after the event, indicating that, at least in Belgium, the LEAP may have had a more severe impact than previously thought. This study shows that events such as the LEAP are an important feature within the gradual cooling occurring during glacial inceptions, and they contribute to a better understanding of the dynamics of an interglacial-glacial transition. References: Irvali, N., et al., 2016, Quaternary Science Reviews, 150, 184-199. Sirocko, F., et al., 2005, Nature, 436, 833-836. Vansteenberge, S., et al., 2016, Climate of the Past, 12, 1445-1458.
ERIC Educational Resources Information Center
Beddard, Godfrey S.
2011-01-01
Thermodynamic quantities such as the average energy, heat capacity, and entropy are calculated using a Monte Carlo method based on the Metropolis algorithm. This method is illustrated with reference to the harmonic oscillator but is particularly useful when the partition function cannot be evaluated; an example using a one-dimensional spin system…
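A minimal sketch of the approach the abstract describes, in Python: Metropolis sampling of a classical one-dimensional harmonic oscillator (in reduced units, an assumption of this sketch) estimates the average potential energy without ever evaluating the partition function.

```python
# Metropolis Monte Carlo for a 1-D harmonic oscillator, E(x) = x^2 / 2.
import math, random

def metropolis_average_energy(beta, n_steps=200_000, step=1.0):
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + random.uniform(-step, step)
        dE = 0.5 * (x_new**2 - x**2)
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            x = x_new                     # accept the trial move
        e_sum += 0.5 * x**2               # accumulate potential energy
    return e_sum / n_steps

# Equipartition predicts <E_pot> = 1/(2*beta) for a quadratic potential.
print(metropolis_average_energy(beta=2.0))   # ~0.25
```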
Skeletonization and Partitioning of Digital Images Using Discrete Morse Theory.
Delgado-Friedrichs, Olaf; Robins, Vanessa; Sheppard, Adrian
2015-03-01
We show how discrete Morse theory provides a rigorous and unifying foundation for defining skeletons and partitions of grayscale digital images. We model a grayscale image as a cubical complex with a real-valued function defined on its vertices (the voxel values). This function is extended to a discrete gradient vector field using the algorithm presented in Robins, Wood, Sheppard TPAMI 33:1646 (2011). In the current paper we define basins (the building blocks of a partition) and segments of the skeleton using the stable and unstable sets associated with critical cells. The natural connection between Morse theory and homology allows us to prove the topological validity of these constructions; for example, that the skeleton is homotopic to the initial object. We simplify the basins and skeletons via Morse-theoretic cancellation of critical cells in the discrete gradient vector field using a strategy informed by persistent homology. Simple working Python code for our algorithms for efficient vector field traversal is included. Example data are taken from micro-CT images of porous materials, an application area where accurate topological models of pore connectivity are vital for fluid-flow modelling.
Mapping to Irregular Torus Topologies and Other Techniques for Petascale Biomolecular Simulation
Phillips, James C.; Sun, Yanhua; Jain, Nikhil; Bohm, Eric J.; Kalé, Laxmikant V.
2014-01-01
Currently deployed petascale supercomputers typically use toroidal network topologies in three or more dimensions. While these networks perform well for topology-agnostic codes on a few thousand nodes, leadership machines with 20,000 nodes require topology awareness to avoid network contention for communication-intensive codes. Topology adaptation is complicated by irregular node allocation shapes and holes due to dedicated input/output nodes or hardware failure. In the context of the popular molecular dynamics program NAMD, we present methods for mapping a periodic 3-D grid of fixed-size spatial decomposition domains to 3-D Cray Gemini and 5-D IBM Blue Gene/Q toroidal networks to enable hundred-million atom full machine simulations, and to similarly partition node allocations into compact domains for smaller simulations using multiple-copy algorithms. Additional enabling techniques are discussed and performance is reported for NCSA Blue Waters, ORNL Titan, ANL Mira, TACC Stampede, and NERSC Edison. PMID:25594075
Ray tracing a three-dimensional scene using a hierarchical data structure
Wald, Ingo; Boulos, Solomon; Shirley, Peter
2012-09-04
Ray tracing a three-dimensional scene made up of geometric primitives that are spatially partitioned into a hierarchical data structure. One example embodiment is a method for ray tracing such a scene. In this example embodiment, the hierarchical data structure includes at least a parent node and a corresponding plurality of child nodes. The method includes a first act of determining that a first active ray in a ray packet hits the parent node and a second act of descending to each of the plurality of child nodes.
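A rough, self-contained illustration of the traversal logic described in the claims, sketched with toy 1-D rays and interval "bounding boxes"; the types and tests here are hypothetical placeholders, not the patent's actual interfaces.

```python
# Toy packet traversal: descend as soon as the first active ray hits a node.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Ray:
    origin: float
    active: bool = True          # toy 1-D rays travelling in the +x direction

@dataclass
class Node:
    lo: float
    hi: float                    # the node's 1-D "bounding box" interval
    children: List["Node"] = field(default_factory=list)
    primitives: List[float] = field(default_factory=list)

def node_hit(ray, node):
    return node.hi >= ray.origin          # forward half-line meets interval

def traverse(packet, node, hits):
    # First act: does any active ray in the packet hit this node?
    if not any(node_hit(r, node) for r in packet if r.active):
        return
    if not node.children:                 # leaf: test primitives per ray
        for p in node.primitives:
            for r in packet:
                if r.active and p >= r.origin:
                    hits.append((r.origin, p))
    for child in node.children:           # second act: descend to children
        traverse(packet, child, hits)

root = Node(0.0, 2.0, children=[Node(0.0, 1.0, primitives=[0.5]),
                                Node(1.0, 2.0, primitives=[1.5])])
hits = []
traverse([Ray(0.2), Ray(1.2)], root, hits)
print(hits)    # [(0.2, 0.5), (0.2, 1.5), (1.2, 1.5)]
```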
A Collaborative Recommend Algorithm Based on Bipartite Community
Fu, Yuchen; Liu, Quan; Cui, Zhiming
2014-01-01
The recommendation algorithm based on bipartite networks is superior to traditional methods in accuracy and diversity, which shows that considering the network topology of recommendation systems can help improve recommendation results. However, existing algorithms mainly focus on the overall topology, while local characteristics can also play an important role in collaborative recommendation. Therefore, in view of the data characteristics and application requirements of collaborative recommendation systems, we propose a link-community partitioning algorithm based on label propagation and a collaborative recommendation algorithm based on bipartite communities. We then design numerical experiments to verify the algorithm's validity on benchmark and real-world databases. PMID:24955393
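As a hedged illustration of the label-propagation step, the sketch below runs plain synchronous label propagation over an adjacency dictionary; it is one simple reading of that step, not the paper's exact link-community algorithm.

```python
# Simple label propagation on a (bipartite) user-item graph.
from collections import Counter

def label_propagation(adj, n_iter=20):
    """adj: dict node -> list of neighbour nodes."""
    labels = {v: v for v in adj}             # start with unique labels
    for _ in range(n_iter):
        changed = False
        for v, neigh in adj.items():
            if not neigh:
                continue
            # Adopt the most common label among neighbours.
            best = Counter(labels[u] for u in neigh).most_common(1)[0][0]
            if labels[v] != best:
                labels[v], changed = best, True
        if not changed:
            break
    return labels

adj = {"u1": ["i1", "i2"], "u2": ["i1"], "i1": ["u1", "u2"], "i2": ["u1"]}
print(label_propagation(adj))
```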
Shi, Yan; Wang, Hao Gang; Li, Long; Chan, Chi Hou
2008-10-01
A multilevel Green's function interpolation method based on two kinds of multilevel partitioning schemes--the quasi-2D and the hybrid partitioning scheme--is proposed for analyzing electromagnetic scattering from objects comprising both conducting and dielectric parts. The problem is formulated using the surface integral equation for homogeneous dielectric and conducting bodies. A quasi-2D multilevel partitioning scheme is devised to improve the efficiency of the Green's function interpolation. In contrast to previous multilevel partitioning schemes, noncubic groups are introduced to discretize the whole EM structure in this quasi-2D multilevel partitioning scheme. Based on the detailed analysis of the dimension of the group in this partitioning scheme, a hybrid quasi-2D/3D multilevel partitioning scheme is proposed to effectively handle objects with fine local structures. Selection criteria for some key parameters relating to the interpolation technique are given. The proposed algorithm is ideal for the solution of problems involving objects such as missiles, microstrip antenna arrays, photonic bandgap structures, etc. Numerical examples are presented to show that CPU time is between O(N) and O(N log N) while the computer memory requirement is O(N).
LEAP: An Innovative Direction Dependent Ionospheric Calibration Scheme for Low Frequency Arrays
NASA Astrophysics Data System (ADS)
Rioja, María J.; Dodson, Richard; Franzen, Thomas M. O.
2018-05-01
The ambitious scientific goals of the SKA require a matching capability for calibration of atmospheric propagation errors, which contaminate the observed signals. We demonstrate a scheme for correcting the direction-dependent ionospheric and instrumental phase effects at the low frequencies and with the wide fields of view planned for SKA-Low. It leverages bandwidth smearing to filter out signals from off-axis directions, allowing the measurement of the direction-dependent antenna-based gains in the visibility domain; by doing this towards multiple directions it is possible to calibrate across wide fields of view. This strategy removes the need for a global sky model, therefore all directions are independent. We use MWA results at 88 and 154 MHz under various weather conditions to characterise the performance and applicability of the technique. We conclude that this method is suitable to measure and correct for temporal fluctuations and direction-dependent spatial ionospheric phase distortions on a wide range of scales: both larger and smaller than the array size. The latter are the most intractable and pose a major challenge for future instruments. Moreover, this scheme is an embarrassingly parallel process, as multiple directions can be processed independently and simultaneously. This is an important consideration for the SKA, where the current planned architecture is one of compute-islands with limited interconnects. The current implementation of the algorithm and on-going developments are discussed.
NASA Astrophysics Data System (ADS)
Kruglov, V. E.; Malyshev, D. S.; Pochinka, O. V.
2018-01-01
Studying the dynamics of a flow on surfaces by partitioning the phase space into cells with the same limit behaviour of trajectories within a cell goes back to the classical papers of Andronov, Pontryagin, Leontovich and Maier. The types of cells (the number of which is finite) and how the cells adjoin one another completely determine the topological equivalence class of a flow with finitely many special trajectories. If one trajectory is chosen in every cell of a rough flow without periodic orbits, then the cells are partitioned into so-called triangular regions of the same type. A combinatorial description of such a partition gives rise to the three-colour Oshemkov-Sharko graph, the vertices of which correspond to the triangular regions, and the edges to separatrices connecting them. Oshemkov and Sharko proved that such flows are topologically equivalent if and only if the three-colour graphs of the flows are isomorphic, and described an algorithm for distinguishing three-colour graphs. However, their algorithm is not efficient from the graph-theoretic point of view. In the present paper, we describe the dynamics of Ω-stable flows without periodic trajectories on surfaces in the language of four-colour graphs, present an efficient algorithm for distinguishing such graphs, and develop a realization of a flow from an abstract graph. Bibliography: 17 titles.
NASA Technical Reports Server (NTRS)
Garg, Sanjay; Schmidt, Phillip H.
1993-01-01
A parameter optimization framework was developed earlier to solve the problem of partitioning a centralized controller into a decentralized, hierarchical structure suitable for integrated flight/propulsion control (IFPC) implementation. This paper presents results from the application of the controller partitioning optimization procedure to IFPC design for a Short Take-Off and Vertical Landing (STOVL) aircraft in transition flight. The controller partitioning problem and the parameter optimization algorithm are briefly described. Insight is provided into choosing various user-selected parameters in the optimization cost function such that the resulting optimized subcontrollers will meet the characteristics of the centralized controller that are crucial to achieving the desired closed-loop performance and robustness, while maintaining the desired subcontroller structure constraints that are crucial for IFPC implementation. The optimization procedure is shown to improve upon the initial partitioned subcontrollers and lead to performance comparable to that achieved with the centralized controller. This application also provides insight into the issues that should be addressed at the centralized control design level in order to obtain implementable partitioned subcontrollers.
Partial storage optimization and load control strategy of cloud data centers.
Al Nuaimi, Klaithem; Mohamed, Nader; Al Nuaimi, Mariam; Al-Jaroodi, Jameela
2015-01-01
We present a novel approach to solve the cloud storage issues and provide a fast load balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of the files from multiple cloud nodes. Partitions of the files are saved on the cloud rather than the full files, which substantially optimizes cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve performance and optimize storage usage by providing Data as a Service (DaaS) on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased, since multiple cloud servers will collaborate to provide the data to the cloud clients faster.
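A minimal sketch of the dual-direction idea under stated assumptions (hypothetical replica names, a single file partition split between two readers): one replica streams chunks forward from the start while the other streams backward from the end, and the client stops when the two cursors meet.

```python
# Schedule byte ranges for a concurrent dual-direction download.
def dual_direction_schedule(size, chunk=256):
    front, back, plan = 0, size, []
    while front < back:
        take = min(chunk, back - front)
        plan.append(("node_a", front, front + take - 1))   # forward reader
        front += take
        if front >= back:
            break
        take = min(chunk, back - front)
        plan.append(("node_b", back - take, back - 1))     # backward reader
        back -= take
    return plan

for node, start, end in dual_direction_schedule(1000):
    print(f"{node}: bytes {start}-{end}")
```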
On factors structuring the flatfish assemblage in the southern North Sea
NASA Astrophysics Data System (ADS)
Piet, G. J.; Pfisterer, A. B.; Rijnsdorp, A. D.
1998-09-01
Ten species of flatfish were studied to determine to what extent interspecific competition influences their diet or spatial distribution and whether the potential of these flatfish species to avoid interspecific competition through resource partitioning is constrained by specific morphological characteristics. For this, seven morphological characteristics were measured, diet composition was determined from gut content analyses, and overlap in distribution was determined from the co-occurrence in trawl hauls. Canonical correspondence analysis revealed the morphological characteristics that were most strongly correlated with the diet composition. Based on these findings, the mouth gape was considered to be the most important morphological constraint affecting the choice of food. Two resource dimensions were distinguished along which interspecific competition can act on the flatfish assemblage: the trophic dimension (diet composition) and the spatial dimension (distribution). Resource partitioning was observed along both dimensions separately and, more importantly, the degree of resource partitioning along the two dimensions was negatively correlated. The latter especially was considered strong circumstantial evidence that interspecific competition is a major factor structuring the flatfish assemblage. Resource partitioning along the two resource dimensions increased with decreasing mouth gape, suggesting that interspecific competition mainly acts on the small-mouthed fish, i.e. juveniles.
Kyle J. Haynes; Ottar N. Bjornstad; Andrew J. Allstadt; Andrew M. Liebhold
2012-01-01
Despite the pervasiveness of spatial synchrony of population fluctuations in virtually every taxon, it remains difficult to disentangle its underlying mechanisms, such as environmental perturbations and dispersal. We used multiple regression of distance matrices (MRMs) to statistically partition the importance of several factors potentially synchronizing the dynamics...
Sheng, Xi
2012-07-01
This thesis studies automated replenishment algorithms for the hospital medical supply chain. The mathematical model and algorithm for automated replenishment of medical supplies are designed with reference to practical hospital data, on the basis of inventory theory, a greedy algorithm and a partition algorithm. The automated replenishment algorithm is shown to compute medical supply distribution amounts automatically and to optimize the medical supply distribution scheme. It can be concluded that the model and algorithm, if applied in the medical supply circulation field, can provide theoretical and technological support for automated replenishment along the hospital medical supply chain.
Beta-diversity of ectoparasites at two spatial scales: nested hierarchy, geography and habitat type.
Warburton, Elizabeth M; van der Mescht, Luther; Stanko, Michal; Vinarski, Maxim V; Korallo-Vinarskaya, Natalia P; Khokhlova, Irina S; Krasnov, Boris R
2017-06-01
Beta-diversity of biological communities can be decomposed into (a) dissimilarity of communities among units of finer scale within units of broader scale and (b) dissimilarity of communities among units of broader scale. We investigated compositional, phylogenetic/taxonomic and functional beta-diversity of compound communities of fleas and gamasid mites parasitic on small Palearctic mammals in a nested hierarchy at two spatial scales: (a) continental scale (across the Palearctic) and (b) regional scale (across sites within Slovakia). At each scale, we analyzed beta-diversity among smaller units within larger units and among larger units with partitioning based on either geography or ecology. We asked (a) whether compositional, phylogenetic/taxonomic and functional dissimilarities of flea and mite assemblages are scale dependent; (b) how geographical (partitioning of sites according to geographic position) or ecological (partitioning of sites according to habitat type) characteristics affect phylogenetic/taxonomic and functional components of dissimilarity of ectoparasite assemblages and (c) whether assemblages of fleas and gamasid mites differ in their degree of dissimilarity, all else being equal. We found that compositional, phylogenetic/taxonomic, or functional beta-diversity was greater on a continental rather than a regional scale. Compositional and phylogenetic/taxonomic components of beta-diversity were greater among larger units than among smaller units within larger units, whereas functional beta-diversity did not exhibit any consistent trend regarding site partitioning. Geographic partitioning resulted in higher values of beta-diversity of ectoparasites than ecological partitioning. Compositional and phylogenetic components of beta-diversity were higher in fleas than mites but the opposite was true for functional beta-diversity in some, but not all, traits.
NASA Astrophysics Data System (ADS)
Matsakis, Demetrios; Defraigne, Pascale; Hosokawa, M.; Leschiutta, S.; Petit, G.; Zhai, Z.-C.
2007-03-01
The most intensely discussed and controversial issue in timekeeping has been the proposal before the International Telecommunications Union (ITU) to redefine Coordinated Universal Time (UTC) so as to replace leap seconds by leap hours. Should this proposal be adopted, the practice of inserting leap seconds would cease after a specific date. Should the Earth's rotation continue to decelerate at its historical rate, the next discontinuity in UTC would be an hour inserted several centuries from now. Advocates of this proposal cite the need to synchronize satellite and other systems, such as GPS, Galileo, and GLONASS, which did not exist and were not envisioned when the current system was adopted. They note that leap second insertions can be and have been incorrectly implemented or accounted for. Such errors have to date had localized impact, but they could cause serious mishaps involving loss of life. For example, some GPS receivers have been known to fail simply because there was no leap second after a long enough interval, other GPS receivers failed because the leap second information was broadcast more than three months in advance, and some commercial software used for internet time transfer via the Network Time Protocol (NTP) could either discard all data received after a leap second or interpret it as a frequency change. The ambiguity associated with the extra second could also disrupt financial accounting and certain forms of encryption. Those opposed to the proposal question the need for a change, and also point out the costs of adjusting to the proposed change and its inconvenience to amateur astronomers and others who rely upon astronomical calculations published in advance. Reports have been circulated that the cost of checking and correcting software to accommodate the new definition of UTC would be many millions of dollars for some systems. In October 2005 the American Astronomical Society asked the ITU for a year's time to study the issue. This commission has supported the efforts of the IAU's Committee on the Leap Second to make an informed recommendation, and anticipates considerable discussion at the IAU's 26th General Assembly in 2006.
A Sharp methodology for VLSI layout
NASA Astrophysics Data System (ADS)
Bapat, Shekhar
1993-01-01
The layout problem for VLSI circuits is recognized as a very difficult problem and has been traditionally decomposed into the several seemingly independent sub-problems of placement, global routing, and detailed routing. Although this structure achieves a reduction in programming complexity, it is also typically accompanied by a reduction in solution quality. Most current placement research recognizes that the separation is artificial, and that the placement and routing problems should ideally be solved in tandem. We propose a new interconnection model, Sharp, and an associated partitioning algorithm. The Sharp interconnection model uses a partitioning shape that roughly resembles the musical sharp 'number sign' and makes extensive use of pre-computed rectilinear Steiner trees. The model is designed to generate strategic routing information along with the partitioning results. Additionally, the Sharp model also generates estimates of the routing congestion. We also propose the Sharp layout heuristic that solves the layout problem in its entirety. The Sharp layout heuristic makes extensive use of the Sharp partitioning model. The use of precomputed Steiner tree forms enables the method to model net characteristics accurately. For example, the Steiner tree forms can model both the length of the net and, more importantly, its route. In fact, the tree forms are also appropriate for modeling the timing delays of nets. The Sharp heuristic works to minimize both the total layout area, by minimizing total net length (thus reducing the total wiring area), and the congestion imbalances in the various channels (thus reducing the unused or wasted channel area). Our heuristic uses circuit element movements amongst the different partitioning blocks and selection of alternate minimal Steiner tree forms to achieve this goal. The objective function for the algorithm can be modified readily to include other important circuit constraints like propagation delays. The layout technique first computes a very high-level approximation of the layout solution (i.e., the positions of the circuit elements and the associated net routes). The approximate solution is then alternately refined with respect to the objective function. The technique creates well-defined sub-problems and offers intermediary steps that can be solved in parallel, as well as a parallel mechanism to merge the sub-problem solutions.
Optimal steering for kinematic vehicles with applications to spatially distributed agents
NASA Astrophysics Data System (ADS)
Brown, Scott; Praeger, Cheryl E.; Giudici, Michael
While there is no universal method to address control problems involving networks of autonomous vehicles, there exist a few promising schemes that apply to different specific classes of problems, which have attracted the attention of many researchers from different fields. In particular, one way to extend techniques that address problems involving a single autonomous vehicle to those involving teams of autonomous vehicles is to use the concept of Voronoi diagram. The Voronoi diagram provides a spatial partition of the environment the team of vehicles operate in, where each element of this partition is associated with a unique vehicle from the team. The partition induces a graph abstraction of the operating space that is in a one-to-one correspondence with the network abstraction of the team of autonomous vehicles; a fact that can provide both conceptual and analytical advantages during mission planning and execution. In this dissertation, we propose the use of a new class of Voronoi-like partitioning schemes with respect to state-dependent proximity (pseudo-) metrics rather than the Euclidean distance or other generalized distance functions, which are typically used in the literature. An important nuance here is that, in contrast to the Euclidean distance, state-dependent metrics can succinctly capture system theoretic features of each vehicle from the team (e.g., vehicle kinematics), as well as the environment-vehicle interactions, which are induced, for example, by local winds/currents. We subsequently illustrate how the proposed concept of state-dependent Voronoi-like partition can induce local control schemes for problems involving networks of spatially distributed autonomous vehicles by examining a sequential pursuit problem of a maneuvering target by a group of pursuers distributed in the plane. The construction of generalized Voronoi diagrams with respect to state-dependent metrics poses some significant challenges. First, the generalized distance metric may be a function of the direction of motion of the vehicle (anisotropic pseudo-distance function) and/or may not be expressible in closed form. Second, such problems fall under the general class of partitioning problems for which the vehicles' dynamics must be taken into account. The topology of the vehicle's configuration space may be non-Euclidean, for example, it may be a manifold embedded in a Euclidean space. In other words, these problems may not be reducible to generalized Voronoi diagram problems for which efficient construction schemes, analytical and/or computational, exist in the literature. This research effort pursues three main objectives. First, we present the complete solution of different steering problems involving a single vehicle in the presence of motion constraints imposed by the maneuverability envelope of the vehicle and/or the presence of a drift field induced by winds/currents in its vicinity. The analysis of each steering problem involving a single vehicle provides us with a state-dependent generalized metric, such as the minimum time-to-go/come. We subsequently use these state-dependent generalized distance functions as the proximity metrics in the formulation of generalized Voronoi-like partitioning problems. The characterization of the solutions of these state-dependent Voronoi-like partitioning problems using either analytical or computational techniques constitutes the second main objective of this dissertation.
The third objective of this research effort is to illustrate the use of the proposed concept of state-dependent Voronoi-like partition as a means for passing from control techniques that apply to problems involving a single vehicle to problems involving networks of spatially distributed autonomous vehicles. To this aim, we formulate the problem of sequential/relay pursuit of a maneuvering target by a group of spatially distributed pursuers and subsequently propose a distributed group pursuit strategy that directly derives from the solution of a state-dependent Voronoi-like partitioning problem. (Abstract shortened by UMI.)
Evolving bipartite authentication graph partitions
Pope, Aaron Scott; Tauritz, Daniel Remy; Kent, Alexander D.
2017-01-16
As large scale enterprise computer networks become more ubiquitous, finding the appropriate balance between user convenience and user access control is an increasingly challenging proposition. Suboptimal partitioning of users’ access and available services contributes to the vulnerability of enterprise networks. Previous edge-cut partitioning methods unduly restrict users’ access to network resources. This paper introduces a novel method of network partitioning superior to the current state-of-the-art which minimizes user impact by providing alternate avenues for access that reduce vulnerability. Networks are modeled as bipartite authentication access graphs and a multi-objective evolutionary algorithm is used to simultaneously minimize the size of large connected components while minimizing overall restrictions on network users. Lastly, results are presented on a real world data set that demonstrate the effectiveness of the introduced method compared to previous naive methods.
NASA Astrophysics Data System (ADS)
Želi, Velibor; Zorica, Dušan
2018-02-01
Generalization of the heat conduction equation is obtained by considering the system of equations consisting of the energy balance equation and a fractional-order constitutive heat conduction law, assumed in the form of the distributed-order Cattaneo type. The Cauchy problem for the system of energy balance equation and constitutive heat conduction law is treated analytically through Fourier and Laplace integral transform methods, as well as numerically by the method of finite differences, using the Adams-Bashforth and Grünwald-Letnikov schemes for approximating derivatives in the temporal domain and the leapfrog scheme for spatial derivatives. Numerical examples, showing the time evolution of temperature and heat flux spatial profiles, demonstrate the applicability and good agreement of both methods in cases of multi-term and power-type distributed-order heat conduction laws.
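For the temporal side, the Grünwald-Letnikov approximation mentioned above reduces to a simple weight recurrence; the sketch below is a generic illustration of that scheme, not the paper's distributed-order solver.

```python
# Grünwald-Letnikov weights w_k = (-1)^k C(alpha, k) via the standard
# recurrence, used to approximate a fractional derivative of order alpha
# on a uniform grid with step h.
def gl_weights(alpha, n):
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_derivative(f_vals, alpha, h):
    """Approximate D^alpha f at the last grid point from samples f_vals."""
    w = gl_weights(alpha, len(f_vals) - 1)
    return sum(wk * fk for wk, fk in zip(w, reversed(f_vals))) / h**alpha

# Example: D^0.5 of f(t) = t at t = 1 is 2*sqrt(t/pi) ≈ 1.1284.
h = 1e-3
ts = [i * h for i in range(int(1 / h) + 1)]
print(gl_derivative(ts, 0.5, h))
```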
NASA Astrophysics Data System (ADS)
Wang, Liping; Ji, Yusheng; Liu, Fuqiang
The integration of multihop relays with orthogonal frequency-division multiple access (OFDMA) cellular infrastructures can meet the growing demands for better coverage and higher throughput. Resource allocation in the OFDMA two-hop relay system is more complex than that in the conventional single-hop OFDMA system. With time division between transmissions from the base station (BS) and those from relay stations (RSs), fixed partitioning of the BS subframe and RS subframes cannot adapt to various traffic demands. Moreover, single-hop scheduling algorithms cannot be used directly in the two-hop system. Therefore, we propose a semi-distributed algorithm called ASP to adjust the length of every subframe adaptively, and suggest two ways to extend single-hop scheduling algorithms into multihop scenarios: link-based and end-to-end approaches. Simulation results indicate that the ASP algorithm increases system utilization and fairness. The max carrier-to-interference ratio (Max C/I) and proportional fairness (PF) scheduling algorithms extended using the end-to-end approach obtain higher throughput than those using the link-based approach, but at the expense of more overhead for information exchange between the BS and RSs. The resource allocation scheme using ASP and end-to-end PF scheduling achieves a tradeoff between system throughput maximization and fairness.
Watling, James I.; Brandt, Laura A.; Bucklin, David N.; Fujisaki, Ikuko; Mazzotti, Frank J.; Romañach, Stephanie; Speroterra, Carolina
2015-01-01
Species distribution models (SDMs) are widely used in basic and applied ecology, making it important to understand sources and magnitudes of uncertainty in SDM performance and predictions. We analyzed SDM performance and partitioned variance among prediction maps for 15 rare vertebrate species in the southeastern USA using all possible combinations of seven potential sources of uncertainty in SDMs: algorithms, climate datasets, model domain, species presences, variable collinearity, CO2 emissions scenarios, and general circulation models. The choice of modeling algorithm was the greatest source of uncertainty in SDM performance and prediction maps, with some additional variation in performance associated with the comprehensiveness of the species presences used for modeling. Other sources of uncertainty that have received attention in the SDM literature such as variable collinearity and model domain contributed little to differences in SDM performance or predictions in this study. Predictions from different algorithms tended to be more variable at northern range margins for species with more northern distributions, which may complicate conservation planning at the leading edge of species' geographic ranges. The clear message emerging from this work is that researchers should use multiple algorithms for modeling rather than relying on predictions from a single algorithm, invest resources in compiling a comprehensive set of species presences, and explicitly evaluate uncertainty in SDM predictions at leading range margins.
COLA: Optimizing Stream Processing Applications via Graph Partitioning
NASA Astrophysics Data System (ADS)
Khandekar, Rohit; Hildrum, Kirsten; Parekh, Sujay; Rajan, Deepak; Wolf, Joel; Wu, Kun-Lung; Andrade, Henrique; Gedik, Buğra
In this paper, we describe an optimization scheme for fusing compile-time operators into reasonably-sized run-time software units called processing elements (PEs). Such PEs are the basic deployable units in System S, a highly scalable distributed stream processing middleware system. Finding a high quality fusion significantly benefits the performance of streaming jobs. In order to maximize throughput, our solution approach attempts to minimize the processing cost associated with inter-PE stream traffic while simultaneously balancing load across the processing hosts. Our algorithm computes a hierarchical partitioning of the operator graph based on a minimum-ratio cut subroutine. We also incorporate several fusion constraints in order to support real-world System S jobs. We experimentally compare our algorithm with several other reasonable alternative schemes, highlighting the effectiveness of our approach.
NASA Astrophysics Data System (ADS)
Shang, J. S.; Andrienko, D. A.; Huang, P. G.; Surzhikov, S. T.
2014-06-01
An efficient computational capability for nonequilibrium radiation simulation via the ray tracing technique has been accomplished. The radiative rate equation is iteratively coupled with the aerodynamic conservation laws, including nonequilibrium chemical and chemical-physical kinetic models. The spectral properties along tracing rays are determined by a space partition algorithm based on a nearest-neighbor search process, and the numerical accuracy is further enhanced by a local resolution refinement using the Gauss-Lobatto polynomial. The interdisciplinary governing equations are solved by an implicit delta formulation through the diminishing residual approach. The axisymmetric radiating flow fields over the reentry RAM-CII probe have been simulated and verified with flight data and previous solutions by traditional methods. A computational efficiency gain of nearly forty times is realized over that of the existing simulation procedures.
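The nearest-neighbor search step can be illustrated with a standard k-d tree; the sketch below uses SciPy's cKDTree with random stand-in points rather than the paper's flow-field data.

```python
# Nearest-neighbour lookup with a k-d tree (space-partitioning structure).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
grid_points = rng.random((10_000, 3))    # e.g. flow-field cell centres
tree = cKDTree(grid_points)              # build the spatial partition once

ray_samples = rng.random((5, 3))         # sample points along a tracing ray
dist, idx = tree.query(ray_samples)      # nearest grid point for each sample
print(idx, dist)
```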
Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm.
Zhang, Man; Wang, Guanyong; Zhang, Lei
2017-10-26
Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks according to the azimuth-dependent residual errors. However, image-domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), suffer from declining robustness when strong motion errors are present in the coarse-focused image. In this case, in order to capture the complete motion blurring function within each image block, both the block size and the overlapped part need to be extended, which inevitably degrades efficiency and robustness. Herein, a frequency domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA disposes of the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber-domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse-resolution image via the back-projection integral. Then, the sub-aperture images are straightforwardly fused together in the azimuth wavenumber domain to obtain a full-resolution image. Moreover, the chirp-Z transform (CZT) is also introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By discarding the image-domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed algorithm.
Using Optimisation Techniques to Granulise Rough Set Partitions
NASA Astrophysics Data System (ADS)
Crossingham, Bodie; Marwala, Tshilidzi
2007-11-01
This paper presents an approach to optimise rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely, genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The results of the optimised methods are compared to those of a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to the EWB accuracy of 59.86%. In addition to providing the plausibilities of the estimated HIV status, the rough sets also provide the linguistic rules describing how the demographic parameters drive the risk of HIV.
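A minimal sketch of the baseline and one optimiser: equal-width-bin cut points plus a naive hill-climbing refinement. The `accuracy` callback is a hypothetical stand-in for the rough-set classification accuracy; the toy demo objective is illustrative only.

```python
# Equal-width binning plus hill climbing over partition cut points.
import random

def equal_width_cuts(values, n_bins):
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    return [lo + i * width for i in range(1, n_bins)]

def hill_climb(cuts, accuracy, n_iter=100, step=0.05):
    best, best_acc = list(cuts), accuracy(cuts)
    for _ in range(n_iter):
        cand = sorted(c + random.uniform(-step, step) for c in best)
        acc = accuracy(cand)
        if acc > best_acc:                  # keep only improving moves
            best, best_acc = cand, acc
    return best, best_acc

# Toy demo: reward cut points that balance counts across two bins.
vals = [random.gauss(0, 1) for _ in range(500)]
acc = lambda cuts: -abs(sum(v < cuts[0] for v in vals) - len(vals) / 2)
print(hill_climb(equal_width_cuts(vals, 2), acc))
```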
Spatial and temporal patterns of coexistence between competing Aedes mosquitoes in urban Florida.
Leisnham, Paul T; Juliano, S A
2009-05-01
Understanding mechanisms fostering coexistence between invasive and resident species is important in predicting ecological, economic, or health impacts of invasive species. The mosquito Aedes aegypti coexists at some urban sites in southeastern United States with invasive Aedes albopictus, which is often superior in interspecific competition. We tested predictions for three hypotheses of species coexistence: seasonal condition-specific competition, aggregation among individual water-filled containers, and colonization-competition tradeoff across spatially partitioned habitat patches (cemeteries) that have high densities of containers. We measured spatial and temporal patterns of abundance for both species among water-filled resident cemetery vases and experimentally positioned standard cemetery vases and ovitraps in metropolitan Tampa, Florida. Consistent with the seasonal condition-specific competition hypothesis, abundances of both species in resident and standard cemetery vases were higher early in the wet season (June) versus late in the wet season (September), but the proportional increase of A. albopictus was greater than that of A. aegypti, presumably due to higher dry-season egg mortality and strong wet-season competitive superiority of larval A. albopictus. Spatial partitioning was not evident among cemeteries, a result inconsistent with the colonization-competition tradeoff hypothesis, but both species were highly independently aggregated among standard cemetery vases and ovitraps, which is consistent with the aggregation hypothesis. Densities of A. aegypti but not A. albopictus differed among land use categories, with A. aegypti more abundant in ovitraps in residential areas compared to industrial and commercial areas. Spatial partitioning among land use types probably results from effects of land use on conditions in both terrestrial and aquatic-container environments. These results suggest that both temporal and spatial variation may contribute to local coexistence between these Aedes in urban areas.
Two modified symplectic partitioned Runge-Kutta methods for solving the elastic wave equation
NASA Astrophysics Data System (ADS)
Su, Bo; Tuo, Xianguo; Xu, Ling
2017-08-01
Based on a modified strategy, two modified symplectic partitioned Runge-Kutta (PRK) methods are proposed for the temporal discretization of the elastic wave equation. The two symplectic schemes are similar in form but different in nature. After the spatial discretization of the elastic wave equation, the ordinary Hamiltonian formulation for the elastic wave equation is presented. The PRK scheme is then applied for time integration. An additional term associated with spatial discretization is inserted into the different stages of the PRK scheme. Theoretical analyses are conducted to evaluate the numerical dispersion and stability of the two novel PRK methods. A finite difference method is used to approximate the spatial derivatives, since the two schemes are independent of the spatial discretization technique used. The numerical solutions computed by the two new schemes are compared with those computed by a conventional symplectic PRK. The numerical results, which verify the new method, are superior to those generated by conventional methods in seismic wave modeling.
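For orientation, the classical Störmer-Verlet scheme, itself a symplectic partitioned Runge-Kutta method, applied to the spatially discretized 1-D wave equation looks as follows; this is a generic sketch, not the authors' modified PRK schemes.

```python
# Störmer-Verlet (kick-drift-kick) for u_tt = c^2 u_xx with fixed ends,
# after second-order central-difference discretization in space.
import numpy as np

def verlet_wave(u0, v0, c, dx, dt, n_steps):
    u, v = u0.copy(), v0.copy()
    lap = np.zeros_like(u)                   # boundaries stay fixed (zero)
    for _ in range(n_steps):
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        v += 0.5 * dt * c**2 * lap           # half kick
        u += dt * v                          # drift
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        v += 0.5 * dt * c**2 * lap           # half kick
    return u, v

x = np.linspace(0.0, 1.0, 201)
u0 = np.exp(-200 * (x - 0.5) ** 2)           # Gaussian pulse initial condition
u, v = verlet_wave(u0, np.zeros_like(u0), c=1.0, dx=x[1] - x[0],
                   dt=0.4 * (x[1] - x[0]), n_steps=500)  # dt under CFL limit
```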
Prediction of distribution coefficient from structure. 1. Estimation method.
Csizmadia, F; Tsantili-Kakoulidou, A; Panderi, I; Darvas, F
1997-07-01
A method has been developed for the estimation of the distribution coefficient (D), which considers the microspecies of a compound. D is calculated from the microscopic dissociation constants (microconstants), the partition coefficients of the microspecies, and the counterion concentration. A general equation for the calculation of D at a given pH is presented. The microconstants are calculated from the structure using Hammett and Taft equations. The partition coefficients of the ionic microspecies are predicted by empirical equations using the dissociation constants and the partition coefficient of the uncharged species, which are estimated from the structure by a Linear Free Energy Relationship method. The algorithm is implemented in a program module called PrologD.
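For the special case of a monoprotic acid, the relation between D, pH, pKa and the microspecies partition coefficients takes a simple textbook form; the sketch below illustrates that case only and is not PrologD's general equation (the numeric inputs are illustrative).

```python
# D for a monoprotic acid: D = (P_N + P_I * r) / (1 + r), r = 10^(pH - pKa).
import math

def log_d_monoprotic_acid(pH, pKa, logP_neutral, logP_ion):
    ratio = 10 ** (pH - pKa)                  # [A-]/[HA] from the pKa
    d = (10 ** logP_neutral + 10 ** logP_ion * ratio) / (1 + ratio)
    return math.log10(d)

# Illustrative, roughly ibuprofen-like inputs (assumed values):
print(log_d_monoprotic_acid(pH=7.4, pKa=4.4, logP_neutral=3.8, logP_ion=0.5))
```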
NASA Astrophysics Data System (ADS)
Li, Yan; Wu, Mingwei; Du, Xinwei; Xu, Zhuoran; Gurusamy, Mohan; Yu, Changyuan; Kam, Pooi-Yuen
2018-02-01
A novel soft-decision-aided maximum likelihood (SDA-ML) carrier phase estimation method and its simplified version, the decision-aided and soft-decision-aided maximum likelihood (DA-SDA-ML) methods are tested in a nonlinear phase noise-dominant channel. The numerical performance results show that both the SDA-ML and DA-SDA-ML methods outperform the conventional DA-ML in systems with constant-amplitude modulation formats. In addition, modified algorithms based on constellation partitioning are proposed. With partitioning, the modified SDA-ML and DA-SDA-ML are shown to be useful for compensating the nonlinear phase noise in multi-level modulation systems.
[An object-oriented remote sensing image segmentation approach based on edge detection].
Tan, Yu-Min; Huai, Jian-Zhu; Tang, Zhong-Shi
2010-06-01
Advances in satellite sensor technology have enabled better discrimination of various landscape objects. Image segmentation approaches to extracting conceptual objects and patterns have hence been explored, and a wide variety of such algorithms abound. To this end, in order to effectively utilize edge and topological information in high-resolution remote sensing imagery, an object-oriented algorithm combining edge detection and region merging is proposed. The Susan edge filter is first applied to the panchromatic band of Quickbird imagery with a spatial resolution of 0.61 m to obtain the edge map. Using the resulting edge map, a two-phase region-based segmentation method operates on the fusion image from panchromatic and multispectral Quickbird images to produce the final partition result. In the first phase, a quadtree grid consisting of squares with sides parallel to the image left and top borders recursively agglomerates the square subsets where the uniformity measure is satisfied, to derive image object primitives. Before the merging step of the second phase, the contextual and spatial information (e.g., neighbor relationships, boundary coding) of the resulting squares is retrieved efficiently by means of the quadtree structure. Then a region merging operation is performed with those primitives, during which the criterion for region merging integrates the edge map and region-based features. This approach has been tested on QuickBird images of a site in the Sanxia area and the result is compared with those of ENVI Zoom and Definiens. In addition, a quantitative evaluation of the quality of the segmentation results is also presented. Experimental results demonstrate stable convergence and efficiency.
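The first-phase quadtree construction can be sketched as a recursive split on a uniformity measure; the variance threshold below is a hypothetical stand-in for the paper's uniformity criterion.

```python
# Recursive quadtree split: keep a square if its grey-level variance is low.
import numpy as np

def quadtree_split(img, x, y, size, var_thresh=25.0, min_size=4):
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.var() <= var_thresh:
        return [(x, y, size)]              # uniform: keep as one primitive
    half = size // 2
    leaves = []
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        leaves += quadtree_split(img, x + dx, y + dy, half,
                                 var_thresh, min_size)
    return leaves

img = np.zeros((64, 64))
img[16:48, 16:48] = 255.0                  # toy two-region image
print(len(quadtree_split(img, 0, 0, 64)))  # number of object primitives
```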
Slattery, Stuart R.
2015-12-02
In this study we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. Finally, these scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
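A minimal serial sketch of the spline-interpolation flavor of mesh-free transfer, assuming the Wendland C2 function as the compactly supported radial basis (a common choice; the report does not mandate it). The parallel radius search and distributed sparse solves discussed above are omitted.

```python
import numpy as np

def wendland_c2(r):
    """Wendland C2 compactly supported RBF, support radius normalized to 1."""
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def rbf_transfer(src_pts, src_vals, dst_pts, radius):
    """Interpolate src_vals from src_pts onto dst_pts (dense, serial sketch)."""
    d_ss = np.linalg.norm(src_pts[:, None] - src_pts[None, :], axis=-1)
    a = wendland_c2(d_ss / radius)         # interpolation matrix (sparse in practice)
    coeff = np.linalg.solve(a, src_vals)   # distributed solve in the real setting
    d_ds = np.linalg.norm(dst_pts[:, None] - src_pts[None, :], axis=-1)
    return wendland_c2(d_ds / radius) @ coeff

src = np.random.default_rng(1).uniform(size=(200, 3))
vals = np.sin(src[:, 0]) + src[:, 1] ** 2
dst = np.random.default_rng(2).uniform(size=(50, 3))
print(rbf_transfer(src, vals, dst, radius=0.4)[:3])
```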
Teh, Seng Khoon; Zheng, Wei; Lau, David P; Huang, Zhiwei
2009-06-01
In this work, we evaluated the diagnostic ability of near-infrared (NIR) Raman spectroscopy associated with the ensemble recursive partitioning algorithm based on random forests for identifying cancer from normal tissue in the larynx. A rapid-acquisition NIR Raman system was utilized for tissue Raman measurements at 785 nm excitation, and 50 human laryngeal tissue specimens (20 normal; 30 malignant tumors) were used for NIR Raman studies. The random forests method was introduced to develop effective diagnostic algorithms for classification of Raman spectra of different laryngeal tissues. High-quality Raman spectra in the range of 800-1800 cm(-1) can be acquired from laryngeal tissue within 5 seconds. Raman spectra differed significantly between normal and malignant laryngeal tissues. Classification results obtained from the random forests algorithm on tissue Raman spectra yielded a diagnostic sensitivity of 88.0% and specificity of 91.4% for laryngeal malignancy identification. The random forests technique also provided variable importance measures that facilitate correlation of significant Raman spectral features with cancer transformation. This study shows that NIR Raman spectroscopy in conjunction with the random forests algorithm has great potential for the rapid diagnosis and detection of malignant tumors in the larynx.
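A minimal sketch of the classification step using scikit-learn's random forest on a samples-by-wavenumbers matrix; the data below are synthetic stand-ins, and the paper's preprocessing and validation protocol are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for tissue Raman spectra: 50 samples x 500 channels
X = rng.normal(size=(50, 500))
y = np.array([0] * 20 + [1] * 30)     # 0 = normal, 1 = malignant
X[y == 1, 100] += 1.0                 # fake tumor-associated spectral band

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
# Variable importances point at the spectral features driving the split
print("most important channel:", np.argmax(clf.feature_importances_))
```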
BIG DATA ANALYTICS AND PRECISION ANIMAL AGRICULTURE SYMPOSIUM: Data to decisions.
White, B J; Amrine, D E; Larson, R L
2018-04-14
Big data are frequently used in many facets of business and agronomy to enhance knowledge needed to improve operational decisions. Livestock operations collect data of sufficient quantity to perform predictive analytics. Predictive analytics can be defined as a methodology and suite of data evaluation techniques to generate a prediction for specific target outcomes. The objective of this manuscript is to describe the process of using big data and the predictive analytic framework to create tools to drive decisions in livestock production, health, and welfare. The predictive analytic process involves selecting a target variable, managing the data, partitioning the data, then creating algorithms, refining algorithms, and finally comparing accuracy of the created classifiers. The partitioning of the datasets allows model building and refining to occur prior to testing the predictive accuracy of the model with naive data to evaluate overall accuracy. Many different classification algorithms are available for predictive use and testing multiple algorithms can lead to optimal results. Application of a systematic process for predictive analytics using data that is currently collected or that could be collected on livestock operations will facilitate precision animal management through enhanced livestock operational decisions.
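A minimal sketch of the partition-build-compare workflow described above, with hypothetical feature data and two candidate classifiers; the naive test set is held out until the final accuracy comparison, as in the described process.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))        # hypothetical records, e.g. weight, intake
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

# Partition: a model-building set vs a naive test set held out to the end
X_build, X_test, y_build, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
candidates = {"logistic": LogisticRegression(max_iter=1000),
              "tree": DecisionTreeClassifier(max_depth=4)}
for name, model in candidates.items():
    model.fit(X_build, y_build)       # build/refine on the building partition
    print(name, "naive-data accuracy:", round(model.score(X_test, y_test), 3))
```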
REACTT: an algorithm for solving spatial equilibrium problems.
D.J. Brooks; J. Kincaid
1987-01-01
The problem of determining equilibrium prices and quantities in spatially separated markets is reviewed. Algorithms that compute spatial equilibria are discussed. A computer program using the reactive programming algorithm for solving spatial equilibrium problems that involve multiple commodities is presented, along with detailed documentation. A sample data set,...
Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce.
Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel
2013-08-01
Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS - a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and high scalability to run on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive.
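The global partition indexing idea can be illustrated with a uniform grid: each object's bounding box is mapped to tile keys that serve as partition keys, and objects crossing tile boundaries are duplicated so that boundary handling can later amend results. A minimal sketch (not the RESQUE engine):

```python
from collections import defaultdict

def tile_keys(xmin, ymin, xmax, ymax, tile=1.0):
    """All grid tiles a bounding box overlaps (duplicates boundary objects)."""
    return [(ix, iy)
            for ix in range(int(xmin // tile), int(xmax // tile) + 1)
            for iy in range(int(ymin // tile), int(ymax // tile) + 1)]

partitions = defaultdict(list)
objects = {"a": (0.2, 0.2, 0.4, 0.4),   # fits inside one tile
           "b": (0.8, 0.8, 1.3, 1.1)}   # straddles tile boundaries
for oid, bbox in objects.items():
    for key in tile_keys(*bbox):
        partitions[key].append(oid)     # the key plays the MapReduce role

print(dict(partitions))   # 'b' is duplicated into every tile its box overlaps
```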
Time and Space Partition Platform for Safe and Secure Flight Software
NASA Astrophysics Data System (ADS)
Esquinas, Angel; Zamorano, Juan; de la Puente, Juan A.; Masmano, Miguel; Crespo, Alfons
2012-08-01
There are a number of research and development activities that are exploring Time and Space Partition (TSP) to implement safe and secure flight software. This approach allows to execute different real-time applications with different levels of criticality in the same computer board. In order to do that, flight applications must be isolated from each other in the temporal and spatial domains. This paper presents the first results of a partitioning platform based on the Open Ravenscar Kernel (ORK+) and the XtratuM hypervisor. ORK+ is a small, reliable realtime kernel supporting the Ada Ravenscar Computational model that is central to the ASSERT development process. XtratuM supports multiple virtual machines, i.e. partitions, on a single computer and is being used in the Integrated Modular Avionics for Space study. ORK+ executes in an XtratuM partition enabling Ada applications to share the computer board with other applications.
Investigation of the Arithmetical or Tabular Islamic calendar
NASA Astrophysics Data System (ADS)
Rashed, M. G.; Moklof, M. G.; Hamza, Alaa E.
2018-06-01
The arithmetical (or tabular) calendar is sometimes referred to as the Fātimid calendar, but it is in fact one of several almost identical tabular Islamic calendars. This calendar was introduced by Muslim astronomers in the 9th century CE to predict the approximate beginning of the months in the Islamic lunar calendar. Chronologists adopted 11 leap years in a 30-year cycle; in a leap Hijri year, one day is added to the last month of the year. The cycle of this calendar agrees with the smaller cycles (2-5.333 years) discovered by Galal and Rashed (2011) and coincides with the lag criterion given by Galal (1988). We propose a revised Islamic tabular calendar whose leap years within the 30-year cycle are 2, 5, 7, 10, 13, 15, 18, 21, 23, 26 and 29. Our proposed arithmetical calendar follows a regular mathematical pattern, whereas the old arithmetical (tabular) calendar does not satisfy a known fixed rule. We derive an empirical formula for the proposed calendar, from which it can be calculated whether a given Hijri year after the Hijra is a leap or a non-leap year.
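A minimal sketch of a leap-year test over the 30-year cycle using the leap-year set proposed in the abstract; the authors' closed-form empirical formula is not reproduced here.

```python
PROPOSED_LEAP_YEARS = {2, 5, 7, 10, 13, 15, 18, 21, 23, 26, 29}

def is_leap_hijri(year: int) -> bool:
    """True if a Hijri year is a leap year of the proposed tabular calendar.

    Years are counted within a repeating 30-year cycle; a leap year gets
    one extra day appended to its last month.
    """
    pos = year % 30
    return (pos if pos != 0 else 30) in PROPOSED_LEAP_YEARS

print([y for y in range(1, 31) if is_leap_hijri(y)])
# -> [2, 5, 7, 10, 13, 15, 18, 21, 23, 26, 29]
```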
Scalable, Finite Element Analysis of Electromagnetic Scattering and Radiation
NASA Technical Reports Server (NTRS)
Cwik, T.; Lou, J.; Katz, D.
1997-01-01
In this paper a method for simulating electromagnetic fields scattered from complex objects is reviewed; namely, an unstructured finite element code that does not use traditional mesh partitioning algorithms.
Srinivasan, A.; Galbán, C.J.; Johnson, T.D.; Chenevert, T.L.; Ross, B.D.; Mukherji, S.K.
2014-01-01
Purpose The objective of our study was to analyze the differences between apparent diffusion coefficient (ADC) partitions (created using the K-Means algorithm) between benign and malignant neck lesions and evaluate its benefit in distinguishing these entities. Material and methods MRI studies of 10 benign and 10 malignant proven neck pathologies were post-processed on a PC using in-house software developed in MATLAB (The MathWorks, Inc., Natick, MA). Lesions were manually contoured by two neuroradiologists with the ADC values within each lesion clustered into two (low ADC-ADCL, high ADC-ADCH) and three partitions (ADCL, intermediate ADC-ADCI, ADCH) using the K-Means clustering algorithm. An unpaired two-tailed Student’s t-test was performed for all metrics to determine statistical differences in the means between the benign and malignant pathologies. Results Statistically significant difference between the mean ADCL clusters in benign and malignant pathologies was seen in the 3 cluster models of both readers (p=0.03, 0.022 respectively) and the 2 cluster model of reader 2 (p=0.04) with the other metrics (ADCH, ADCI, whole lesion mean ADC) not revealing any significant differences. Receiver operating characteristics curves demonstrated the quantitative difference in mean ADCH and ADCL in both the 2 and 3 cluster models to be predictive of malignancy (2 clusters: p=0.008, area under curve=0.850, 3 clusters: p=0.01, area under curve=0.825). Conclusion The K-Means clustering algorithm that generates partitions of large datasets may provide a better characterization of neck pathologies and may be of additional benefit in distinguishing benign and malignant neck pathologies compared to whole lesion mean ADC alone. PMID:20007723
Srinivasan, A; Galbán, C J; Johnson, T D; Chenevert, T L; Ross, B D; Mukherji, S K
2010-04-01
Does the K-means algorithm do a better job of differentiating benign and malignant neck pathologies compared to only mean ADC? The objective of our study was to analyze the differences between ADC partitions to evaluate whether the K-means technique can be of additional benefit to whole-lesion mean ADC alone in distinguishing benign and malignant neck pathologies. MR imaging studies of 10 benign and 10 malignant proved neck pathologies were postprocessed on a PC by using in-house software developed in Matlab. Two neuroradiologists manually contoured the lesions, with the ADC values within each lesion clustered into 2 (low, ADC-ADC(L); high, ADC-ADC(H)) and 3 partitions (ADC(L); intermediate, ADC-ADC(I); ADC(H)) by using the K-means clustering algorithm. An unpaired 2-tailed Student t test was performed for all metrics to determine statistical differences in the means of the benign and malignant pathologies. A statistically significant difference between the mean ADC(L) clusters in benign and malignant pathologies was seen in the 3-cluster models of both readers (P = .03 and .022, respectively) and the 2-cluster model of reader 2 (P = .04), with the other metrics (ADC(H), ADC(I); whole-lesion mean ADC) not revealing any significant differences. ROC curves demonstrated the quantitative differences in mean ADC(H) and ADC(L) in both the 2- and 3-cluster models to be predictive of malignancy (2 clusters: P = .008, area under curve = 0.850; 3 clusters: P = .01, area under curve = 0.825). The K-means clustering algorithm that generates partitions of large datasets may provide a better characterization of neck pathologies and may be of additional benefit in distinguishing benign and malignant neck pathologies compared with whole-lesion mean ADC alone.
Leap Frog Digital Sensors and Definition, Integration & Testing FY 2003 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meitzler, Wayne D.; Ouderkirk, Steven J.; Shoemaker, Steven V.
2003-12-31
The objective of Leap Frog is to develop a comprehensive security tool that is transparent to the user community and more effective than current methods for preventing and detecting security compromises of critical physical and digital assets. Current security tools intrude on the people that interact with these critical assets by requiring them to perform additional functions or having additional visible sensors. Leap Frog takes security to the next level by being more effective and reducing the adverse impact on the people interacting with protected assets.
NASA Technical Reports Server (NTRS)
Jones, Ross M.
1988-01-01
The current status and potential scientific applications of intelligent 1-5-kg projectiles being developed by SDIO and DARPA for military missions are discussed. The importance of advanced microelectronics for such small spacecraft is stressed, and it is pointed out that both chemical rockets and EM launchers are currently under consideration for these lightweight exoatmospheric projectiles (LEAPs). Long-duration power supply is identified as the primary technological change required if LEAPs are to be used for interplanetary scientific missions, and the design concept of a solar-powered space-based railgun to accelerate LEAPs on such missions is considered.
A segmentation algorithm based on image projection for complex text layout
NASA Astrophysics Data System (ADS)
Zhu, Wangsheng; Chen, Qin; Wei, Chuanyi; Li, Ziyang
2017-10-01
The segmentation algorithm is an important part of layout analysis. Considering the efficiency advantage of the top-down approach and the particularities of the objects involved, a projection-based layout segmentation algorithm is proposed. The algorithm first partitions the text image into several columns, and then scans each column with a projection profile, dividing the text image into several sub-regions through multiple projections. The experimental results show that this method inherits the fast computation of the projection approach itself, while avoiding the effect of arc-shaped image information on page segmentation, and it can accurately segment text images with complex layouts.
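A minimal sketch of the projection idea on a binarized page (1 = ink): the vertical projection profile locates gaps between columns, and row profiles within each column locate gaps between blocks. The thresholds are illustrative.

```python
import numpy as np

def gaps(profile, thresh=0):
    """Index ranges where the projection profile is at or below thresh."""
    low = profile <= thresh
    edges = np.flatnonzero(np.diff(low.astype(int)))
    bounds = np.concatenate(([0], edges + 1, [len(profile)]))
    return [(a, b) for a, b in zip(bounds[:-1], bounds[1:]) if low[a]]

page = np.zeros((60, 90), dtype=int)
page[5:25, 5:40] = 1       # column 1, block 1
page[35:55, 5:40] = 1      # column 1, block 2
page[5:55, 50:85] = 1      # column 2

col_profile = page.sum(axis=0)            # vertical projection -> columns
print("column gaps:", gaps(col_profile))  # [(0, 5), (40, 50), (85, 90)]
left_column = page[:, 5:40]
print("block gaps in column 1:", gaps(left_column.sum(axis=1)))
```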
Recursive partitioned inversion of large (1500 x 1500) symmetric matrices
NASA Technical Reports Server (NTRS)
Putney, B. H.; Brownd, J. E.; Gomez, R. A.
1976-01-01
A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
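The flavor of partitioned inversion can be shown with a two-block Schur-complement identity, which is what allows a recursive scheme to factor one block at a time and keep only a fraction of the matrix in core. A minimal dense sketch (the SOLVE program's out-of-core bookkeeping is omitted):

```python
import numpy as np

def block_inverse(m: np.ndarray, k: int) -> np.ndarray:
    """Invert a symmetric positive definite matrix via a 2x2 block partition.

    With M = [[A, B], [B.T, C]], the inverse is assembled from inv(A) and
    the inverse of the Schur complement S = C - B.T inv(A) B.
    """
    a, b, c = m[:k, :k], m[:k, k:], m[k:, k:]
    a_inv = np.linalg.inv(a)
    s_inv = np.linalg.inv(c - b.T @ a_inv @ b)     # Schur complement inverse
    tl = a_inv + a_inv @ b @ s_inv @ b.T @ a_inv   # top-left block
    tr = -a_inv @ b @ s_inv                        # top-right block
    return np.block([[tl, tr], [tr.T, s_inv]])

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 6))
m = x @ x.T + 6 * np.eye(6)                        # SPD test matrix
print(np.allclose(block_inverse(m, 3), np.linalg.inv(m)))   # True
```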
Spatial cluster detection using dynamic programming.
Sverchkov, Yuriy; Jiang, Xia; Cooper, Gregory F
2012-03-25
The task of spatial cluster detection involves finding spatial regions where some property deviates from the norm or the expected value. In a probabilistic setting this task can be expressed as finding a region where some event is significantly more likely than usual. Spatial cluster detection is of interest in fields such as biosurveillance, mining of astronomical data, military surveillance, and analysis of fMRI images. In almost all such applications we are interested both in the question of whether a cluster exists in the data, and if it exists, we are interested in finding the most accurate characterization of the cluster. We present a general dynamic programming algorithm for grid-based spatial cluster detection. The algorithm can be used for both Bayesian maximum a posteriori (MAP) estimation of the most likely spatial distribution of clusters and Bayesian model averaging over a large space of spatial cluster distributions to compute the posterior probability of an unusual spatial clustering. The algorithm is explained and evaluated in the context of a biosurveillance application, specifically the detection and identification of influenza outbreaks based on emergency department visits. A relatively simple underlying model is constructed for the purpose of evaluating the algorithm, and the algorithm is evaluated using the model and semi-synthetic test data. When compared to baseline methods, tests indicate that the new algorithm can improve MAP estimates under certain conditions: the greedy algorithm we compared our method to was found to be more sensitive to smaller outbreaks, while as the size of the outbreaks increases, in terms of area affected and proportion of individuals affected, our method overtakes the greedy algorithm in spatial precision and recall. The new algorithm performs on par with baseline methods in the task of Bayesian model averaging. We conclude that the dynamic programming algorithm performs on par with other available methods for spatial cluster detection and point to its low computational cost and extendability as advantages in favor of further research and use of the algorithm.
NASA Astrophysics Data System (ADS)
Abatzoglou, John T.; Ficklin, Darren L.
2017-09-01
The geographic variability in the partitioning of precipitation into surface runoff (Q) and evapotranspiration (ET) is fundamental to understanding regional water availability. The Budyko equation suggests this partitioning is strictly a function of aridity, yet observed deviations from this relationship for individual watersheds impede using the framework to model surface water balance in ungauged catchments and under future climate and land use scenarios. A set of climatic, physiographic, and vegetation metrics were used to model the spatial variability in the partitioning of precipitation for 211 watersheds across the contiguous United States (CONUS) within Budyko's framework through the free parameter ω. A generalized additive model found that four widely available variables, precipitation seasonality, the ratio of soil water holding capacity to precipitation, topographic slope, and the fraction of precipitation falling as snow, explained 81.2% of the variability in ω. The ω model applied to the Budyko equation explained 97% of the spatial variability in long-term Q for an independent set of watersheds. The ω model was also applied to estimate the long-term water balance across the CONUS for both contemporary and mid-21st century conditions. The modeled partitioning of observed precipitation to Q and ET compared favorably across the CONUS with estimates from more sophisticated land-surface modeling efforts. For mid-21st century conditions, the model simulated an increase in the fraction of precipitation used by ET across the CONUS with declines in Q for much of the eastern CONUS and mountainous watersheds across the western United States.
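A minimal sketch of the framework, assuming Fu's form of the Budyko curve, ET/P = 1 + PET/P - (1 + (PET/P)^ω)^(1/ω), so that long-term runoff follows as Q = P - ET; the paper's regression for ω is not reproduced, and ω is passed in directly.

```python
def budyko_fu(p_mm: float, pet_mm: float, omega: float):
    """Long-term ET and Q (mm/yr) from Fu's form of the Budyko equation:
    ET/P = 1 + PET/P - (1 + (PET/P)**omega)**(1/omega).
    """
    phi = pet_mm / p_mm                                   # aridity index
    et_ratio = 1.0 + phi - (1.0 + phi ** omega) ** (1.0 / omega)
    et = et_ratio * p_mm
    return et, p_mm - et                                  # (ET, Q)

# Illustrative humid watershed: P = 1200 mm/yr, PET = 800 mm/yr, omega = 2.6
et, q = budyko_fu(1200.0, 800.0, 2.6)
print(f"ET = {et:.0f} mm/yr, Q = {q:.0f} mm/yr")
```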
On k-ary n-cubes: Theory and applications
NASA Technical Reports Server (NTRS)
Mao, Weizhen; Nicol, David M.
1994-01-01
Many parallel processing networks can be viewed as graphs called k-ary n-cubes, whose special cases include rings, hypercubes and toruses. In this paper, combinatorial properties of k-ary n-cubes are explored. In particular, the problem of characterizing the subgraph of a given number of nodes with the maximum edge count is studied. These theoretical results are then used to compute a lower bounding function in branch-and-bound partitioning algorithms and to establish the optimality of some irregular partitions.
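For concreteness, a k-ary n-cube can be generated directly from its definition (vertices in Z_k^n, with edges between vertices differing by +1 mod k in one coordinate); the small sketch below also confirms the special cases named in the abstract.

```python
from itertools import product

def k_ary_n_cube(k: int, n: int):
    """Vertex and edge sets of the k-ary n-cube (n-dim torus, k per side)."""
    vertices = list(product(range(k), repeat=n))
    edges = set()
    for v in vertices:
        for dim in range(n):
            w = list(v)
            w[dim] = (w[dim] + 1) % k          # wrap-around neighbor
            edges.add(frozenset((v, tuple(w))))  # frozenset dedupes directions
    return vertices, edges

for k, n, name in ((4, 1, "ring"), (2, 3, "hypercube"), (3, 2, "torus")):
    vs, es = k_ary_n_cube(k, n)
    print(f"{name}: {len(vs)} nodes, {len(es)} edges")
```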
Architecture Aware Partitioning Algorithms
2006-01-19
follows: Given a graph $G = (V, E)$, where $V$ is the set of vertices, $n = |V|$ is the number of vertices, and $E$ is the set of edges in the graph, partition the... communication link $l(p_i, p_j)$ is associated with a graph edge weight $e^*(p_i, p_j)$ that represents the communication cost per unit of communication between... one that is local for each one. For our model we assume that communication in either direction across a given link is the same, therefore $e^*(p_i, p_j)$...
Analysis of the Accuracy and Robustness of the Leap Motion Controller
Weichert, Frank; Bachmann, Daniel; Rudak, Bartholomäus; Fisseler, Denis
2013-01-01
The Leap Motion Controller is a new device for hand gesture controlled user interfaces with declared sub-millimeter accuracy. However, up to this point its capabilities in real environments have not been analyzed. Therefore, this paper presents a first study of a Leap Motion Controller. The main focus of attention is on the evaluation of the accuracy and repeatability. For an appropriate evaluation, a novel experimental setup was developed making use of an industrial robot with a reference pen allowing a position accuracy of 0.2 mm. Thereby, a deviation between a desired 3D position and the average measured positions below 0.2 mm has been obtained for static setups and of 1.2 mm for dynamic setups. Using the conclusion of this analysis can improve the development of applications for the Leap Motion controller in the field of Human-Computer Interaction. PMID:23673678
Dynamic Airspace Configuration
NASA Technical Reports Server (NTRS)
Bloem, Michael J.
2014-01-01
In air traffic management systems, airspace is partitioned into regions in part to distribute the tasks associated with managing air traffic among different systems and people. These regions, as well as the systems and people allocated to each, are changed dynamically so that air traffic can be safely and efficiently managed. It is expected that new air traffic control systems will enable greater flexibility in how airspace is partitioned and how resources are allocated to airspace regions. In this talk, I will begin by providing an overview of some previous work and open questions in Dynamic Airspace Configuration research, which is concerned with how to partition airspace and assign resources to regions of airspace. For example, I will introduce airspace partitioning algorithms based on clustering, integer programming optimization, and computational geometry. I will conclude by discussing the development of a tablet-based tool that is intended to help air traffic controller supervisors configure airspace and controllers in current operations.
Partitioning problems in parallel, pipelined and distributed computing
NASA Technical Reports Server (NTRS)
Bokhari, S.
1985-01-01
The problem of optimally assigning the modules of a parallel program over the processors of a multiple computer system is addressed. A Sum-Bottleneck path algorithm is developed that permits the efficient solution of many variants of this problem under some constraints on the structure of the partitions. In particular, the following problems are solved optimally for a single-host, multiple satellite system: partitioning multiple chain structured parallel programs, multiple arbitrarily structured serial programs and single tree structured parallel programs. In addition, the problems of partitioning chain structured parallel programs across chain connected systems and across shared memory (or shared bus) systems are also solved under certain constraints. All solutions for parallel programs are equally applicable to pipelined programs. These results extend prior research in this area by explicitly taking concurrency into account and permit the efficient utilization of multiple computer architectures for a wide range of problems of practical interest.
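The flavor of bottleneck partitioning can be shown in its simplest setting: assigning a chain of modules to p processors as contiguous blocks so that the heaviest block is as light as possible. The dynamic program below is a standard textbook sketch that ignores communication costs; it is not Bokhari's Sum-Bottleneck path algorithm itself.

```python
from functools import lru_cache

def min_bottleneck(weights, p):
    """Min over contiguous partitions into p blocks of the max block sum."""
    n = len(weights)
    prefix = [0]
    for w in weights:
        prefix.append(prefix[-1] + w)

    @lru_cache(maxsize=None)
    def best(i, k):                        # modules i.. assigned to k processors
        if k == 1:
            return prefix[n] - prefix[i]   # one block takes everything left
        return min(max(prefix[j] - prefix[i], best(j, k - 1))
                   for j in range(i + 1, n - k + 2))

    return best(0, p)

print(min_bottleneck([4, 7, 2, 9, 1, 5], 3))   # -> 11, e.g. [4,7] [2,9] [1,5]
```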
Leibold, Mathew A; Loeuille, Nicolas
2015-12-01
Metacommunity theory indicates that variation in local community structure can be partitioned into components including those related to local environmental conditions vs. spatial effects and that these can be quantified using statistical methods based on variation partitioning. It has been hypothesized that joint associations of community composition with environment and space could be due to patch dynamics involving colonization-extinction processes in environmentally heterogeneous landscapes but this has yet to be theoretically shown. We develop a two-patch, type-two, species competition model in such a "harlequin" landscape (where different patches have different environments) to evaluate how composition is related to environmental and spatial effects as a function of background extinction rate. Using spatially implicit analytical models, we find that the environmental association of community composition declines with extinction rate as expected. Using spatially explicit simulation models, we further find that there is an increase in the spatial structure with extinction due to spatial patterning into clusters that are not related to environmental conditions but that this increase is limited. Natural metacommunities often show both environment and spatial determination even under conditions of relatively high isolation and these could be more easily explained by our model than alternative metacommunity models.
Procedure of Partitioning Data Into Number of Data Sets or Data Group - A Review
NASA Astrophysics Data System (ADS)
Kim, Tai-Hoon
The goal of clustering is to decompose a dataset into similar groups based on an objective function. Several well-established algorithms exist for data clustering. The objective of these data clustering algorithms is to divide the data points of the feature space into a number of groups (or classes) so that a predefined set of criteria is satisfied. This article presents a comparative study of the effectiveness and efficiency of traditional data clustering algorithms. To evaluate the performance of the clustering algorithms, the Minkowski score is used here for different data sets.
NASA Adds Leap Second to Master Clock
2017-12-08
On Dec. 31, 2016, official clocks around the world will add a leap second just before midnight Coordinated Universal Time — which corresponds to 6:59:59 p.m. EST. NASA missions will also have to make the switch, including the Solar Dynamics Observatory, or SDO, which watches the sun 24/7. Clocks do this to keep in sync with Earth's rotation, which gradually slows down over time. When the dinosaurs roamed Earth, for example, our globe took only 23 hours to make a complete rotation. In space, millisecond accuracy is crucial to understanding how satellites orbit. "SDO moves about 1.9 miles every second," said Dean Pesnell, the project scientist for SDO at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "So does every other object in orbit near SDO. We all have to use the same time to make sure our collision avoidance programs are accurate. So we all add a leap second to the end of 2016, delaying 2017 by one second." The leap second is also key to making sure that SDO is in sync with the Coordinated Universal Time, or UTC, used to label each of its images. SDO has a clock that counts the number of seconds since the beginning of the mission. To convert that count to UTC requires knowing just how many leap seconds have been added to Earth-bound clocks since the mission started. When the spacecraft wants to provide a time in UTC, it calls a software module that takes into consideration both the mission's second count and the number of leap seconds — and then returns a time in UTC.
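A minimal sketch of the bookkeeping described above: converting an onboard seconds count to UTC requires the table of leap seconds inserted since the mission epoch. The epoch and leap-second dates below are illustrative placeholders, not SDO's actual values.

```python
from datetime import datetime, timedelta

MISSION_EPOCH = datetime(2010, 2, 11)    # hypothetical launch epoch (UTC)
# Hypothetical UTC instants after which a leap second was inserted
LEAP_SECONDS_AFTER_EPOCH = [datetime(2012, 6, 30, 23, 59, 59),
                            datetime(2015, 6, 30, 23, 59, 59),
                            datetime(2016, 12, 31, 23, 59, 59)]

def mission_seconds_to_utc(count: float) -> datetime:
    """Convert an onboard seconds-since-epoch count to a UTC timestamp.

    Each inserted leap second makes the onboard elapsed-seconds count run
    one second ahead of the UTC label, so the accumulated leap seconds
    are subtracted (a simplified sketch of the conversion described above).
    """
    raw = MISSION_EPOCH + timedelta(seconds=count)
    leaps = sum(1 for t in LEAP_SECONDS_AFTER_EPOCH if t < raw)
    return raw - timedelta(seconds=leaps)

print(mission_seconds_to_utc(220_000_000))
```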
Spatial niche partitioning in dinosaurs from the latest cretaceous (Maastrichtian) of North America.
Lyson, Tyler R; Longrich, Nicholas R
2011-04-22
We examine patterns of occurrence of associated dinosaur specimens (n = 343) from the North American Upper Cretaceous Hell Creek Formation and equivalent beds, by comparing their relative abundance in sandstone and mudstone. Ceratopsians preferentially occur in mudstone, whereas hadrosaurs and the small ornithopod Thescelosaurus show a strong association with sandstone. By contrast, the giant carnivore Tyrannosaurus rex shows no preferred association with either lithology. These lithologies are used as an indicator of environment of deposition, with sandstone generally representing river environments, and finer grained sediments typically representing floodplain environments. Given these patterns of occurrence, we argue that spatial niche partitioning helped reduce competition for resources between the herbivorous dinosaurs. Within coastal lowlands ceratopsians preferred habitats farther away from rivers, whereas hadrosaurs and Thescelosaurus preferred habitats in close proximity to rivers, and T. rex, the ecosystem's sole large carnivore, inhabited both palaeoenvironments. Spatial partitioning of the environment helps explain how several species of large herbivorous dinosaurs coexisted. This study emphasizes that different lithologies can preserve dramatically dissimilar vertebrate assemblages, even when deposited in close proximity and within a narrow window of time. The lithology in which fossils are preserved should be recorded as these data can provide unique insights into the palaeoecology of the animals they preserve.
NASA Astrophysics Data System (ADS)
Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.; Tang, Qi
2017-08-01
A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this second part of a two-part series, the general formulation of the AMP scheme is presented including the form of the AMP interface conditions and added-damping tensors for general geometries. A fully second-order accurate implementation of the AMP scheme is developed in two dimensions based on a fractional-step method for the incompressible Navier-Stokes equations using finite difference methods and overlapping grids to handle the moving geometry. The numerical scheme is verified on a number of difficult benchmark problems.
Possibilistic clustering for shape recognition
NASA Technical Reports Server (NTRS)
Keller, James M.; Krishnapuram, Raghu
1993-01-01
Clustering methods have been used extensively in computer vision and pattern recognition. Fuzzy clustering has been shown to be advantageous over crisp (or traditional) clustering in that total commitment of a vector to a given class is not required at each iteration. Recently fuzzy clustering methods have shown spectacular ability to detect not only hypervolume clusters, but also clusters which are actually 'thin shells', i.e., curves and surfaces. Most analytic fuzzy clustering approaches are derived from Bezdek's Fuzzy C-Means (FCM) algorithm. The FCM uses the probabilistic constraint that the memberships of a data point across classes sum to one. This constraint was used to generate the membership update equations for an iterative algorithm. Unfortunately, the memberships resulting from FCM and its derivatives do not correspond to the intuitive concept of degree of belonging, and moreover, the algorithms have considerable trouble in noisy environments. Recently, the clustering problem was cast into the framework of possibility theory. Our approach was radically different from the existing clustering methods in that the resulting partition of the data can be interpreted as a possibilistic partition, and the membership values may be interpreted as degrees of possibility of the points belonging to the classes. An appropriate objective function whose minimum will characterize a good possibilistic partition of the data was constructed, and the membership and prototype update equations from necessary conditions for minimization of our criterion function were derived. The ability of this approach to detect linear and quartic curves in the presence of considerable noise is shown.
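A minimal sketch of the possibilistic membership update in the general form of possibilistic c-means, where memberships need not sum to one across classes; the scale parameters eta are fixed by hand here rather than estimated as in the full algorithm.

```python
import numpy as np

def pcm_memberships(x, prototypes, eta, m=2.0):
    """Possibilistic memberships u[i, j] of point j in class i.

    u = 1 / (1 + (d^2 / eta_i)^(1/(m-1))): each membership depends only on
    the distance to its own prototype, so a noise point far from every
    prototype receives low possibility in all classes.
    """
    d2 = ((prototypes[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    return 1.0 / (1.0 + (d2 / eta[:, None]) ** (1.0 / (m - 1.0)))

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [50.0, 50.0]])  # last = noise
protos = np.array([[0.0, 0.0], [5.0, 5.0]])
u = pcm_memberships(pts, protos, eta=np.array([1.0, 1.0]))
print(np.round(u, 3))   # the noise point has near-zero membership in both rows
```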
Possibilistic clustering for shape recognition
NASA Technical Reports Server (NTRS)
Keller, James M.; Krishnapuram, Raghu
1992-01-01
Clustering methods have been used extensively in computer vision and pattern recognition. Fuzzy clustering has been shown to be advantageous over crisp (or traditional) clustering in that total commitment of a vector to a given class is not required at each iteration. Recently fuzzy clustering methods have shown spectacular ability to detect not only hypervolume clusters, but also clusters which are actually 'thin shells', i.e., curves and surfaces. Most analytic fuzzy clustering approaches are derived from Bezdek's Fuzzy C-Means (FCM) algorithm. The FCM uses the probabilistic constraint that the memberships of a data point across classes sum to one. This constraint was used to generate the membership update equations for an iterative algorithm. Unfortunately, the memberships resulting from FCM and its derivatives do not correspond to the intuitive concept of degree of belonging, and moreover, the algorithms have considerable trouble in noisy environments. Recently, we cast the clustering problem into the framework of possibility theory. Our approach was radically different from the existing clustering methods in that the resulting partition of the data can be interpreted as a possibilistic partition, and the membership values may be interpreted as degrees of possibility of the points belonging to the classes. We constructed an appropriate objective function whose minimum will characterize a good possibilistic partition of the data, and we derived the membership and prototype update equations from necessary conditions for minimization of our criterion function. In this paper, we show the ability of this approach to detect linear and quartic curves in the presence of considerable noise.
International Aviation (Selected Articles)
1991-06-05
[Table-of-contents fragment] Recoverable entry: "The Manufacturing Capabilities of the Shanghai Aviation Company Leap to a New Level," by Yang Xinbang, Zhang Shiyuan, and Zheng Huilin; the remaining entries, on international cooperation, are truncated.
Spatial partitioning by mule deer and elk in relation to traffic.
Michael J. Wisdom; Norman J. Cimon; Bruce K. Johnson; Edward O. Garton; Jack Ward. Thomas
2004-01-01
Elk (Cervus elaphus) and mule deer (Odocoileus hemionus) have overlapping ranges on millions of acres of forests and rangelands in western North America. Accurate prediction of their spatial distributions within these ranges is essential to effective land-use planning, stocking allocation and population management (Wisdom and...
Expression of host defense peptides in the intestine of Eimeria-challenged chickens.
Su, S; Dwyer, D M; Miska, K B; Fetterer, R H; Jenkins, M C; Wong, E A
2017-07-01
Avian coccidiosis is caused by the intracellular protozoan Eimeria, which produces intestinal lesions leading to weight gain depression. Current control methods include vaccination and anticoccidial drugs. An alternative approach involves modulating the immune system. The objective of this study was to profile the expression of host defense peptides such as avian beta-defensins (AvBDs) and liver expressed antimicrobial peptide 2 (LEAP2), which are part of the innate immune system. The mRNA expression of AvBD family members 1, 6, 8, 10, 11, 12, and 13 and LEAP2 was examined in chickens challenged with either E. acervulina, E. maxima, or E. tenella. The duodenum, jejunum, ileum, and ceca were collected 7 d post challenge. In study 1, E. acervulina challenge resulted in down-regulation of AvBD1, AvBD6, AvBD10, AvBD11, AvBD12, and AvBD13 in the duodenum. E. maxima challenge caused down-regulation of AvBD6, AvBD10, and AvBD11 in the duodenum, down-regulation of AvBD10 in the jejunum, but up-regulation of AvBD8 and AvBD13 in the ceca. E. tenella challenge showed no change in AvBD expression in any tissue. In study 2, which involved challenge with only E. maxima, there was down-regulation of AvBD1 in the ileum, AvBD11 in the jejunum and ileum, and LEAP2 in all 3 segments of the small intestine. The expression of LEAP2 was further examined by in situ hybridization in the jejunum of chickens from study 2. LEAP2 mRNA was expressed similarly in the enterocytes lining the villi, but not in the crypts of control and Eimeria challenged chickens. The lengths of the villi in the Eimeria challenged chickens were less than those in the control chickens, which may in part account for the observed down-regulation of LEAP2 mRNA quantified by PCR. Overall, the AvBD response to Eimeria challenge was not consistent; whereas LEAP2 was consistently down-regulated, which suggests that LEAP2 plays an important role in modulating an Eimeria infection. Published by Oxford University Press on behalf of Poultry Science Association 2017.
X-Ray Phase Imaging for Breast Cancer Detection
2012-09-01
the Gerchberg-Saxton algorithm in the Fresnel diffraction regime, and is much more robust against image noise than the TIE-based method. For details... developed efficient coding with the software modules for image registration, flat-field correction, and phase retrieval. In addition, we... X, Liu H. 2010. Performance analysis of the attenuation-partition based iterative phase retrieval algorithm for in-line phase-contrast imaging
Nonparametric Bayesian Context Learning for Buried Threat Detection
2012-01-01
[Figure-list fragment] ...scans of field-collected GPR data collected on dirt (top) and gravel (bottom) lanes at an Eastern US test site... partition the data into M components; they differ in the information used to partition the data. First, approaches that assume independence of observations... The advantages and disadvantages of using each are discussed. Furthermore, the merits of incorporating spatial information are also highlighted.
NASA Astrophysics Data System (ADS)
Yu, Xu; Shao, Quanqin; Zhu, Yunhai; Deng, Yuejin; Yang, Haijun
2006-10-01
With the development of informatization and the separation between data management departments and application departments, spatial data sharing becomes one of the most important objectives of spatial information infrastructure construction, and spatial metadata management systems, data transmission security, and data compression are the key technologies for realizing spatial data sharing. This paper discusses the key technologies for metadata-based data interoperability; studies data compression algorithms such as the adaptive Huffman algorithm and the LZ77 and LZ78 algorithms; applies digital signature techniques to spatial data, which can not only identify the transmitter of the spatial data but also detect in a timely manner whether the data have been tampered with during network transmission; and, based on an analysis of symmetric encryption algorithms (3DES, AES) and the asymmetric encryption algorithm RSA, combined with a hash algorithm, presents an improved mixed encryption method for spatial data. Digital signature technology and digital watermarking technology are also discussed. A new solution for spatial data network distribution is then put forward, which adopts a three-layer architecture. Based on this framework, we present a spatial data network distribution system that is efficient and safe, and we demonstrate the feasibility and validity of the proposed solution.
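A minimal sketch of the LZ77 idea studied above: greedily emit (offset, length, next-byte) triples by matching the longest prefix of the lookahead inside a sliding window. Production implementations add hash chains and bit packing; this is illustration only.

```python
def lz77_compress(data: bytes, window: int = 255):
    """Greedy LZ77: a list of (offset, length, next_byte) triples."""
    out, i = [], 0
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):   # candidate match starts
            k = 0
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1                           # overlapping matches allowed
            if k > best_len:
                best_off, best_len = i - j, k
        nxt = data[i + best_len] if i + best_len < len(data) else 0
        out.append((best_off, best_len, nxt))
        i += best_len + 1
    return out

def lz77_decompress(triples):
    buf = bytearray()
    for off, length, nxt in triples:
        for _ in range(length):
            buf.append(buf[-off])                # byte-wise copy handles overlap
        buf.append(nxt)
    return bytes(buf)

msg = b"abababababcababab"
assert lz77_decompress(lz77_compress(msg)).rstrip(b"\x00") == msg
```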
A graph based algorithm for adaptable dynamic airspace configuration for NextGen
NASA Astrophysics Data System (ADS)
Savai, Mehernaz P.
The National Airspace System (NAS) is a complicated large-scale aviation network, consisting of many static sectors wherein each sector is controlled by one or more controllers. The main purpose of the NAS is to enable safe and prompt air travel in the U.S. However, such static configuration of sectors will not be able to handle the continued growth of air travel which is projected to be more than double the current traffic by 2025. Under the initiative of the Next Generation of Air Transportation system (NextGen), the main objective of Adaptable Dynamic Airspace Configuration (ADAC) is that the sectors should change to the changing traffic so as to reduce the controller workload variance with time while increasing the throughput. Change in the resectorization should be such that there is a minimal increase in exchange of air traffic among controllers. The benefit of a new design (improvement in workload balance, etc.) should sufficiently exceed the transition cost, in order to deserve a change. This leads to the analysis of the concept of transition workload which is the cost associated with a transition from one sectorization to another. Given two airspace configurations, a transition workload metric which considers the air traffic as well as the geometry of the airspace is proposed. A solution to reduce this transition workload is also discussed. The algorithm is specifically designed to be implemented for the Dynamic Airspace Configuration (DAC) Algorithm. A graph model which accurately represents the air route structure and air traffic in the NAS is used to formulate the airspace configuration problem. In addition, a multilevel graph partitioning algorithm is developed for Dynamic Airspace Configuration which partitions the graph model of airspace with given user defined constraints and hence provides the user more flexibility and control over various partitions. In terms of air traffic management, vertices represent airports and waypoints. Some of the major (busy) airports need to be given more importance and hence treated separately. Thus the algorithm takes into account the air route structure while finding a balance between sector workloads. The performance of the proposed algorithms and performance metrics is validated with the Enhanced Traffic Management System (ETMS) air traffic data.
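A minimal sketch of the refinement idea at the heart of such partitioning algorithms: a greedy, Kernighan-Lin-flavored pass that swaps vertices between two sector groups whenever the swap reduces the weight crossing the cut (pairwise swaps keep the partition balanced). The dissertation's multilevel scheme and workload metrics are not reproduced.

```python
import itertools
import networkx as nx

def refine_bisection(g, part_a, part_b, max_passes=10):
    """Greedily swap vertex pairs across the cut while the cut weight drops."""
    def cut(a):
        return sum(d.get("weight", 1) for u, v, d in g.edges(data=True)
                   if (u in a) != (v in a))
    a, b = set(part_a), set(part_b)
    for _ in range(max_passes):
        best = min(((cut((a - {u}) | {v}), u, v)
                    for u, v in itertools.product(a, b)),
                   default=(None, None, None))
        if best[0] is None or best[0] >= cut(a):
            return a, b                       # no improving swap remains
        _, u, v = best
        a.remove(u); a.add(v); b.remove(v); b.add(u)
    return a, b

g = nx.Graph()
g.add_weighted_edges_from([(0, 1, 5), (2, 3, 5), (0, 2, 1),
                           (1, 3, 1), (0, 3, 1)])
print(refine_bisection(g, {0, 2}, {1, 3}))    # -> ({2, 3}, {0, 1}), cut 3
```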
Spatial compression algorithm for the analysis of very large multivariate images
Keenan, Michael R [Albuquerque, NM
2008-07-15
A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
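A minimal sketch of the idea, assuming one level of a 2-D Haar transform: the quarter-size approximation band retains most of the image's information, and small detail coefficients can be zeroed before analysis. The patent's specific transform and block algorithm are not reproduced.

```python
import numpy as np

def haar2d_level(img: np.ndarray):
    """One level of the 2-D Haar transform: (LL, (LH, HL, HH)) bands."""
    a = (img[0::2] + img[1::2]) / 2.0      # row averages
    d = (img[0::2] - img[1::2]) / 2.0      # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0   # approximation band
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

rng = np.random.default_rng(0)
img = rng.normal(100, 1, (128, 128))       # smooth stand-in image
ll, details = haar2d_level(img)
kept = sum(int((np.abs(band) > 0.5).sum()) for band in details)
print("LL shape:", ll.shape, "| significant detail coefficients:", kept)
```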
HNBody: A Simulation Package for Hierarchical N-Body Systems
NASA Astrophysics Data System (ADS)
Rauch, Kevin P.
2018-04-01
HNBody (http://www.hnbody.org/) is an extensible software package for integrating the dynamics of N-body systems. Although general purpose, it incorporates several features and algorithms particularly well-suited to systems containing a hierarchy (wide dynamic range) of masses. HNBody version 1 focused heavily on symplectic integration of nearly-Keplerian systems. Here I describe the capabilities of the redesigned and expanded package version 2, which includes: symplectic integrators up to eighth order (both leap frog and Wisdom-Holman type methods), with symplectic corrector and close encounter support; variable-order, variable-timestep Bulirsch-Stoer and Störmer integrators; post-Newtonian and multipole physics options; advanced round-off control for improved long-term stability; multi-threading and SIMD vectorization enhancements; seamless availability of extended precision arithmetic for all calculations; and extremely flexible configuration and output. Tests of the physical correctness of the algorithms are presented using JPL Horizons ephemerides (https://ssd.jpl.nasa.gov/?horizons) and previously published results for reference. The features and performance of HNBody are also compared to several other freely available N-body codes, including MERCURY (Chambers), SWIFT (Levison & Duncan), and WHFAST (Rein & Tamayo).
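A minimal sketch of the kick-drift-kick leap frog scheme that such symplectic integrators build on, applied to a circular Kepler orbit; this illustrates the method class, not HNBody's Wisdom-Holman machinery.

```python
import numpy as np

def kepler_accel(r, gm=1.0):
    """Acceleration toward a point mass at the origin."""
    return -gm * r / np.linalg.norm(r) ** 3

def leapfrog(r, v, dt, steps):
    """Kick-drift-kick leap frog: second order and symplectic."""
    for _ in range(steps):
        v = v + 0.5 * dt * kepler_accel(r)   # half kick
        r = r + dt * v                       # full drift
        v = v + 0.5 * dt * kepler_accel(r)   # half kick
    return r, v

r0, v0 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # circular orbit, GM = 1
r, v = leapfrog(r0, v0, dt=0.01, steps=int(2 * np.pi / 0.01))
energy = 0.5 * v @ v - 1.0 / np.linalg.norm(r)
print("energy drift:", energy - (-0.5))   # stays near zero over one orbit
```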
Partitioning in parallel processing of production systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oflazer, K.
1987-01-01
This thesis presents research on certain issues related to parallel processing of production systems. It first presents a parallel production system interpreter that has been implemented on a four-processor multiprocessor. This parallel interpreter is based on Forgy's OPS5 interpreter and exploits production-level parallelism in production systems. Runs on the multiprocessor system indicate that it is possible to obtain speed-up of around 1.7 in the match computation for certain production systems when productions are split into three sets that are processed in parallel. The next issue addressed is that of partitioning a set of rules to processors in a parallel interpreter with production-level parallelism, and the extent of additional improvement in performance. The partitioning problem is formulated and an algorithm for approximate solutions is presented. The thesis next presents a parallel processing scheme for OPS5 production systems that allows some redundancy in the match computation. This redundancy enables the processing of a production to be divided into units of medium granularity each of which can be processed in parallel. Subsequently, a parallel processor architecture for implementing the parallel processing algorithm is presented.
Efficient Extraction of High Centrality Vertices in Distributed Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumbhare, Alok; Frincu, Marc; Raghavendra, Cauligi S.
2014-09-09
Betweenness centrality (BC) is an important measure for identifying high value or critical vertices in graphs, in variety of domains such as communication networks, road networks, and social graphs. However, calculating betweenness values is prohibitively expensive and, more often, domain experts are interested only in the vertices with the highest centrality values. In this paper, we first propose a partition-centric algorithm (MS-BC) to calculate BC for a large distributed graph that optimizes resource utilization and improves overall performance. Further, we extend the notion of approximate BC by pruning the graph and removing a subset of edges and vertices that contribute the least to the betweenness values of other vertices (MSL-BC), which further improves the runtime performance. We evaluate the proposed algorithms using a mix of real-world and synthetic graphs on an HPC cluster and analyze its strengths and weaknesses. The experimental results show an improvement in performance of upto 12x for large sparse graphs as compared to the state-of-the-art, and at the same time highlights the need for better partitioning methods to enable a balanced workload across partitions for unbalanced graphs such as small-world or power-law graphs.
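For reference, exact betweenness values of the kind being approximated can be computed on one machine with NetworkX (Brandes' algorithm); the distributed MS-BC/MSL-BC algorithms are not reproduced here, but the leaf-pruning line below gives the flavor of removing vertices that contribute least.

```python
import networkx as nx

g = nx.barabasi_albert_graph(1000, 3, seed=42)   # power-law-ish test graph
bc = nx.betweenness_centrality(g)                # exact Brandes algorithm
top5 = sorted(bc, key=bc.get, reverse=True)[:5]
print("highest-centrality vertices:", top5)

# Leaves have betweenness 0, so dropping them (loosely in the spirit of
# the paper's pruning) shrinks the problem without disturbing the top ranks.
pruned = g.copy()
pruned.remove_nodes_from([n for n, deg in g.degree() if deg == 1])
print("vertices removed by leaf pruning:", len(g) - len(pruned))
```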
NASA Astrophysics Data System (ADS)
Haka, Abigail S.; Kidder, Linda H.; Lewis, E. Neil
2001-07-01
We have applied Fourier transform infrared (FTIR) spectroscopic imaging, coupling a mercury cadmium telluride (MCT) focal plane array detector (FPA) and a Michelson step scan interferometer, to the investigation of various states of malignant human prostate tissue. The MCT FPA used consists of 64x64 pixels, each 61 μm(2), and has a spectral range of 2-10.5 microns. Each imaging data set was collected at 16 cm(-1) spectral resolution, resulting in 512 image planes and a total of 4096 interferograms. In this article we describe a method for separating different tissue types contained within FTIR spectroscopic imaging data sets of human prostate tissue biopsies. We present images, generated by the Fuzzy C-Means clustering algorithm, which demonstrate the successful partitioning of distinct tissue type domains. Additionally, analysis of differences in the centroid spectra corresponding to different tissue types provides an insight into their biochemical composition. Lastly, we demonstrate the ability to partition tissue type regions in a different data set using centroid spectra calculated from the original data set. This has implications for the use of the Fuzzy C-Means algorithm as an automated technique for the separation and examination of tissue domains in biopsy samples.
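A minimal sketch of the standard Fuzzy C-Means iteration used to partition pixel spectra (not the authors' exact settings): alternate the centroid and membership updates until the fuzzy partition stabilizes.

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy C-Means on rows of x; returns (memberships U, centroids V)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(x)))
    u /= u.sum(0)                              # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        v = um @ x / um.sum(1, keepdims=True)  # centroid update
        d2 = ((v[:, None, :] - x[None, :, :]) ** 2).sum(-1) + 1e-12
        u = d2 ** (-1.0 / (m - 1.0))
        u /= u.sum(0)                          # membership update
    return u, v

# Two synthetic "tissue type" spectra clusters, 20 channels each
rng = np.random.default_rng(1)
spectra = np.vstack([rng.normal(0, 0.1, (100, 20)),
                     rng.normal(1, 0.1, (100, 20))])
u, centroids = fcm(spectra)
print("cluster sizes:", np.bincount(u.argmax(0)))   # roughly [100, 100]
```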
Tear fluid proteomics multimarkers for diabetic retinopathy screening
2013-01-01
Background The aim of the project was to develop a novel method for diabetic retinopathy screening based on the examination of tear fluid biomarker changes. In order to evaluate the usability of protein biomarkers for pre-screening purposes, several different approaches were used, including machine learning algorithms. Methods All persons involved in the study had diabetes. Diabetic retinopathy (DR) was diagnosed by capturing 7-field fundus images, evaluated by two independent ophthalmologists. 165 eyes from 119 patients were examined; 55 were diagnosed healthy and 110 showed signs of DR. Tear samples were taken from all eyes, and state-of-the-art nano-HPLC coupled ESI-MS/MS mass spectrometry protein identification was performed on all samples. Applicability of protein biomarkers was evaluated by six different optimally parameterized machine learning algorithms: Support Vector Machine, Recursive Partitioning, Random Forest, Naive Bayes, Logistic Regression, and K-Nearest Neighbor. Results Of the six investigated machine learning algorithms, Recursive Partitioning proved to be the most accurate; the system implementing this algorithm reached 74% sensitivity and 48% specificity. Conclusions Protein biomarkers selected and classified with machine learning algorithms alone are at present not recommended for screening purposes because of low specificity and sensitivity values. This tool can potentially be used to improve the results of image processing methods as a complementary tool in automatic or semiautomatic systems. PMID:23919537
Comments on "The multisynapse neural network and its application to fuzzy clustering".
Yu, Jian; Hao, Pengwei
2005-05-01
In the above-mentioned paper, Wei and Fahn proposed a neural architecture, the multisynapse neural network, to solve constrained optimization problems including high-order, logarithmic, and sinusoidal forms, etc. As one of its main applications, a fuzzy bidirectional associative clustering network (FBACN) was proposed for fuzzy-partition clustering according to the objective-functional method. The connection between the objective-functional-based fuzzy c-partition algorithms and FBACN is the Lagrange multiplier approach. Unfortunately, the Lagrange multiplier approach was incorrectly applied, so that FBACN does not equivalently minimize its corresponding constrained objective function. Additionally, Wei and Fahn adopted the traditional definition of fuzzy c-partition, which is not satisfied by FBACN. Therefore, FBACN cannot solve constrained optimization problems, either.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Do, Hainam, E-mail: h.do@nottingham.ac.uk; Wheatley, Richard J., E-mail: richard.wheatley@nottingham.ac.uk
A robust and model-free Monte Carlo simulation method is proposed to address the challenge in computing the classical density of states and partition function of solids. Starting from the minimum configurational energy, the algorithm partitions the entire energy range in the increasing energy direction (“upward”) into subdivisions whose integrated density of states is known. When combined with the density of states computed from the “downward” energy partitioning approach [H. Do, J. D. Hirst, and R. J. Wheatley, J. Chem. Phys. 135, 174105 (2011)], the equilibrium thermodynamic properties can be evaluated at any temperature and in any phase. The method is illustrated in the context of the Lennard-Jones system and can readily be extended to other molecular systems and clusters for which the structures are known.
Assessment of multiresolution segmentation for delimiting drumlins in digital elevation models.
Eisank, Clemens; Smith, Mike; Hillier, John
2014-06-01
Mapping or "delimiting" landforms is one of geomorphology's primary tools. Computer-based techniques such as land-surface segmentation allow the emulation of the process of manual landform delineation. Land-surface segmentation exhaustively subdivides a digital elevation model (DEM) into morphometrically-homogeneous irregularly-shaped regions, called terrain segments. Terrain segments can be created from various land-surface parameters (LSP) at multiple scales, and may therefore potentially correspond to the spatial extents of landforms such as drumlins. However, this depends on the segmentation algorithm, the parameterization, and the LSPs. In the present study we assess the widely used multiresolution segmentation (MRS) algorithm for its potential in providing terrain segments which delimit drumlins. Supervised testing was based on five 5-m DEMs that represented a set of 173 synthetic drumlins at random but representative positions in the same landscape. Five LSPs were tested, and four variants were computed for each LSP to assess the impact of median filtering of DEMs, and logarithmic transformation of LSPs. The testing scheme (1) employs MRS to partition each LSP exhaustively into 200 coarser scales of terrain segments by increasing the scale parameter ( SP ), (2) identifies the spatially best matching terrain segment for each reference drumlin, and (3) computes four segmentation accuracy metrics for quantifying the overall spatial match between drumlin segments and reference drumlins. Results of 100 tests showed that MRS tends to perform best on LSPs that are regionally derived from filtered DEMs, and then log-transformed. MRS delineated 97% of the detected drumlins at SP values between 1 and 50. Drumlin delimitation rates with values up to 50% are in line with the success of manual interpretations. Synthetic DEMs are well-suited for assessing landform quantification methods such as MRS, since subjectivity in the reference data is avoided which increases the reliability, validity and applicability of results.
Draft genome sequence of Xylella fastidiosa subsp. fastidiosa strain Stag’s Leap
USDA-ARS?s Scientific Manuscript database
Xylella fastidiosa subsp. fastidiosa causes Pierce’s disease of grapevine. Presented here is the draft genome sequence of the Stag’s Leap strain, previously used in pathogenicity/virulence assays to evaluate grapevine germplasm bearing Pierce’s disease....
MicroRNA polymorphisms: a giant leap towards personalized medicine
Mishra, Prasun J
2010-01-01
“An individual’s genetic inheritance of microRNA polymorphisms associated with disease progression, prognosis and treatment holds the key to create safer and more personalized drugs and can be a giant leap towards personalized medicine.” PMID:20428464
An efficient CU partition algorithm for HEVC based on improved Sobel operator
NASA Astrophysics Data System (ADS)
Sun, Xuebin; Chen, Xiaodong; Xu, Yong; Sun, Gang; Yang, Yunsheng
2018-04-01
As the latest video coding standard, High Efficiency Video Coding (HEVC) achieves over 50% bit rate reduction with similar video quality compared with the previous standard H.264/AVC. However, the higher compression efficiency is attained at the cost of a significantly increased computational load. In order to reduce the complexity, this paper proposes a fast coding unit (CU) partition technique to speed up the process. To detect the edge features of each CU, a more accurate, improved Sobel filter is developed and applied. By analyzing the textural features of a CU, an early CU splitting termination is proposed to decide whether a CU should be decomposed into four lower-dimension CUs. Compared with the reference software HM16.7, experimental results indicate the proposed algorithm can reduce encoding time by 44.09% on average, with a negligible bit rate increase of 0.24% and quality losses below 0.03 dB. In addition, the proposed algorithm achieves a better trade-off between complexity and rate-distortion performance than other published works.
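As an illustration of the general idea (not the authors' implementation), the sketch below uses mean Sobel gradient magnitude as a texture proxy for the early split/terminate decision; the function name and threshold value are hypothetical and would need tuning against the HM reference encoder.

```python
import numpy as np
from scipy.ndimage import sobel

def cu_should_split(cu: np.ndarray, threshold: float = 20.0) -> bool:
    """Early-termination test for one CU: split only if it is textured."""
    block = cu.astype(float)
    gx = sobel(block, axis=1)   # horizontal gradients
    gy = sobel(block, axis=0)   # vertical gradients
    edge_strength = np.hypot(gx, gy).mean()
    # Smooth CUs terminate splitting early; textured CUs are
    # decomposed into four lower-dimension CUs.
    return edge_strength > threshold
```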
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le Hardy, D.; Favennec, Y., E-mail: yann.favennec@univ-nantes.fr; Rousseau, B.
The contribution of this paper lies in the development of numerical algorithms for the mathematical treatment of specular reflection on borders when dealing with the numerical solution of radiative transfer problems. The radiative transfer equation being integro-differential, the discrete ordinates method allows one to write down a set of semi-discrete equations in which weights are to be calculated. The calculation of these weights is well known to be based on either a quadrature or an angular discretization, making the use of such a method straightforward for the state equation. Also, the diffuse contribution of reflection on borders is usually well taken into account. However, the calculation of accurate partition ratio coefficients is much trickier for the specular condition applied on arbitrary geometrical borders. This paper presents algorithms that analytically calculate the partition ratio coefficients needed in numerical treatments. The developed algorithms, combined with a decentered finite element scheme, are validated with the help of comparisons with analytical solutions before being applied on complex geometries.
Fine-Scale Analysis Reveals Cryptic Landscape Genetic Structure in Desert Tortoises
Latch, Emily K.; Boarman, William I.; Walde, Andrew; Fleischer, Robert C.
2011-01-01
Characterizing the effects of landscape features on genetic variation is essential for understanding how landscapes shape patterns of gene flow and spatial genetic structure of populations. Most landscape genetics studies have focused on patterns of gene flow at a regional scale. However, the genetic structure of populations at a local scale may be influenced by a unique suite of landscape variables that have little bearing on connectivity patterns observed at broader spatial scales. We investigated fine-scale spatial patterns of genetic variation and gene flow in relation to features of the landscape in desert tortoise (Gopherus agassizii), using 859 tortoises genotyped at 16 microsatellite loci with associated data on geographic location, sex, elevation, slope, and soil type, and spatial relationship to putative barriers (power lines, roads). We used spatially explicit and non-explicit Bayesian clustering algorithms to partition the sample into discrete clusters, and characterize the relationships between genetic distance and ecological variables to identify factors with the greatest influence on gene flow at a local scale. Desert tortoises exhibit weak genetic structure at a local scale, and we identified two subpopulations across the study area. Although genetic differentiation between the subpopulations was low, our landscape genetic analysis identified both natural (slope) and anthropogenic (roads) landscape variables that have significantly influenced gene flow within this local population. We show that desert tortoise movements at a local scale are influenced by features of the landscape, and that these features are different than those that influence gene flow at larger scales. Our findings are important for desert tortoise conservation and management, particularly in light of recent translocation efforts in the region. More generally, our results indicate that recent landscape changes can affect gene flow at a local scale and that their effects can be detected almost immediately. PMID:22132143
Fine-scale analysis reveals cryptic landscape genetic structure in desert tortoises.
Latch, Emily K; Boarman, William I; Walde, Andrew; Fleischer, Robert C
2011-01-01
Characterizing the effects of landscape features on genetic variation is essential for understanding how landscapes shape patterns of gene flow and spatial genetic structure of populations. Most landscape genetics studies have focused on patterns of gene flow at a regional scale. However, the genetic structure of populations at a local scale may be influenced by a unique suite of landscape variables that have little bearing on connectivity patterns observed at broader spatial scales. We investigated fine-scale spatial patterns of genetic variation and gene flow in relation to features of the landscape in desert tortoise (Gopherus agassizii), using 859 tortoises genotyped at 16 microsatellite loci with associated data on geographic location, sex, elevation, slope, and soil type, and spatial relationship to putative barriers (power lines, roads). We used spatially explicit and non-explicit Bayesian clustering algorithms to partition the sample into discrete clusters, and characterize the relationships between genetic distance and ecological variables to identify factors with the greatest influence on gene flow at a local scale. Desert tortoises exhibit weak genetic structure at a local scale, and we identified two subpopulations across the study area. Although genetic differentiation between the subpopulations was low, our landscape genetic analysis identified both natural (slope) and anthropogenic (roads) landscape variables that have significantly influenced gene flow within this local population. We show that desert tortoise movements at a local scale are influenced by features of the landscape, and that these features are different than those that influence gene flow at larger scales. Our findings are important for desert tortoise conservation and management, particularly in light of recent translocation efforts in the region. More generally, our results indicate that recent landscape changes can affect gene flow at a local scale and that their effects can be detected almost immediately.
NASA Astrophysics Data System (ADS)
Jiang, Feng; Gu, Qing; Hao, Huizhen; Li, Na; Wang, Bingqian; Hu, Xiumian
2018-06-01
Automatic grain segmentation of sandstone partitions the mineral grains into separate regions in a thin section, which is the first step for computer-aided mineral identification and sandstone classification. Sandstone microscopic images contain a large number of mixed mineral grains whose differences among adjacent grains, i.e., quartz, feldspar and lithic grains, are usually ambiguous, which makes grain segmentation difficult. In this paper, we take advantage of multi-angle cross-polarized microscopic images and propose a method for grain segmentation with high accuracy. The method consists of two stages. In the first stage, we enhance the SLIC (Simple Linear Iterative Clustering) algorithm, named MSLIC, to make use of multi-angle images and segment the images into boundary-adherent superpixels. In the second stage, we propose a region merging technique which combines coarse and fine merging algorithms. The coarse merging merges adjacent superpixels with less evident boundaries, and the fine merging merges the ambiguous superpixels using spatially enhanced fuzzy clustering. Experiments are designed on 9 sets of multi-angle cross-polarized images taken from the three major types of sandstones. The results demonstrate both the effectiveness and potential of the proposed method, compared to available segmentation methods.
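The following is a minimal single-angle sketch of the over-segment-then-merge pipeline using scikit-image's SLIC and region-adjacency-graph utilities; it stands in for the authors' MSLIC and fuzzy fine-merging stages, and the merge threshold is a hypothetical value to tune (in older scikit-image releases the graph module lives under skimage.future.graph).

```python
import numpy as np
from skimage.segmentation import slic
from skimage import graph

def segment_grains(image: np.ndarray, n_segments: int = 400,
                   merge_thresh: float = 30.0) -> np.ndarray:
    """Over-segment one cross-polarized image into boundary-adherent
    superpixels, then coarsely merge adjacent superpixels whose mean
    colours differ by less than merge_thresh."""
    labels = slic(image, n_segments=n_segments, compactness=10.0)
    rag = graph.rag_mean_color(image, labels)      # region adjacency graph
    return graph.cut_threshold(labels, rag, merge_thresh)
```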
A monolithic mass tracking formulation for bubbles in incompressible flow
NASA Astrophysics Data System (ADS)
Aanjaneya, Mridul; Patkar, Saket; Fedkiw, Ronald
2013-08-01
We devise a novel method for treating bubbles in incompressible flow that relies on the conservative advection of bubble mass and an associated equation of state in order to determine pressure boundary conditions inside each bubble. We show that executing this algorithm in a traditional manner leads to stability issues similar to those seen for partitioned methods for solid-fluid coupling. Therefore, we reformulate the problem monolithically. This is accomplished by first proposing a new fully monolithic approach to coupling incompressible flow to fully nonlinear compressible flow including the effects of shocks and rarefactions, and then subsequently making a number of simplifying assumptions on the air flow removing not only the nonlinearities but also the spatial variations of both the density and the pressure. The resulting algorithm is quite robust, has been shown to converge to known solutions for test problems, and has been shown to be quite effective on more realistic problems including those with multiple bubbles, merging and pinching, etc. Notably, this approach departs from a standard two-phase incompressible flow model where the air flow preserves its volume despite potentially large forces and pressure differentials in the surrounding incompressible fluid that should change its volume. Our bubbles readily change volume according to an isothermal equation of state.
Tung, James Y; Lulic, Tea; Gonzalez, Dave A; Tran, Johnathan; Dickerson, Clark R; Roy, Eric A
2015-05-01
Although motion analysis systems are frequently employed in upper limb motor assessment (e.g. visually-guided reaching), they are resource-intensive and limited to laboratory settings. This study evaluated the reliability and accuracy of a new markerless motion capture device, the Leap Motion controller, to measure finger position. Testing conditions that influence reliability and agreement between the Leap and a research-grade motion capture system were examined. Nine healthy young adults pointed to 15 targets on a computer screen under two conditions: (1) touching the target (touch) and (2) 4 cm away from the target (no-touch). Leap data were compared to an Optotrak marker attached to the index finger. Across all trials, root mean square (RMS) error of the Leap system was 17.30 ± 9.56 mm (mean ± SD), sampled at 65.47 ± 21.53 Hz. The percentage of viable trials and the mean sampling rate were significantly lower in the touch condition (44% versus 64%, p < 0.001; 52.02 ± 2.93 versus 73.98 ± 4.48 Hz, p = 0.003). While linear correlations were high (horizontal: r(2) = 0.995, vertical: r(2) = 0.945), the limits of agreement were large (horizontal: -22.02 to +26.80 mm, vertical: -29.41 to +30.14 mm). While not as precise as more sophisticated optical motion capture systems, the Leap Motion controller is sufficiently reliable for measuring motor performance in pointing tasks that do not require high positional accuracy (e.g. reaction time, Fitts' tasks, trails, bimanual coordination).
Evolution and Allometry of Calcaneal Elongation in Living and Extinct Primates
Boyer, Doug M.; Seiffert, Erik R.; Gladman, Justin T.; Bloch, Jonathan I.
2013-01-01
Specialized acrobatic leaping has been recognized as a key adaptive trait tied to the origin and subsequent radiation of euprimates based on its observed frequency in extant primates and inferred frequency in extinct early euprimates. Hypothesized skeletal correlates include elongated tarsal elements, which would be expected to aid leaping by allowing for increased rates and durations of propulsive acceleration at takeoff. Alternatively, authors of a recent study argued that pronounced distal calcaneal elongation of euprimates (compared to other mammalian taxa) was related primarily to specialized pedal grasping. Testing for correlations between calcaneal elongation and leaping versus grasping is complicated by body size differences and associated allometric effects. We re-assess allometric constraints on, and the functional significance of, calcaneal elongation using phylogenetic comparative methods, and present an evolutionary hypothesis for the evolution of calcaneal elongation in primates using a Bayesian approach to ancestral state reconstruction (ASR). Results show that among all primates, logged ratios of distal calcaneal length to total calcaneal length are inversely correlated with logged body mass proxies derived from the area of the calcaneal facet for the cuboid. Results from phylogenetic ANOVA on residuals from this allometric line suggest that deviations are explained by degree of leaping specialization in prosimians, but not anthropoids. Results from ASR suggest that non-allometric increases in calcaneal elongation began in the primate stem lineage and continued independently in haplorhines and strepsirrhines. Anthropoid and lorisid lineages show stasis and decreasing elongation, respectively. Initial increases in calcaneal elongation in primate evolution may be related to either development of hallucal-grasping or a combination of grasping and more specialized leaping behaviors. As has been previously suggested, subsequent increases in calcaneal elongation are likely adaptations for more effective acrobatic leaping, highlighting the importance of this behavior in early euprimate evolution. PMID:23844094
Symplectic partitioned Runge-Kutta scheme for Maxwell's equations
NASA Astrophysics Data System (ADS)
Huang, Zhi-Xiang; Wu, Xian-Liang
Using the symplectic partitioned Runge-Kutta (PRK) method, we construct, for the first time, a new scheme for approximating the solution to infinite-dimensional nonseparable Hamiltonian systems of Maxwell's equations. The scheme is obtained by discretizing Maxwell's equations in the time direction based on the symplectic PRK method, and then evaluating the equation in the spatial direction with a suitable finite difference approximation. Several numerical examples are presented to verify the efficiency of the scheme.
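A staggered leapfrog update is the textbook example of a symplectic partitioned time integrator for Maxwell's equations; the sketch below, a 1D toy under assumed units (c = 1, Gaussian initial pulse), illustrates the split kick/drift structure rather than the paper's particular PRK scheme.

```python
import numpy as np

def maxwell_1d_leapfrog(n=200, steps=500, c=1.0, dx=1.0, cfl=0.9):
    """Staggered leapfrog (a simple symplectic partitioned update) for the
    1D curl equations dE/dt = c^2 dB/dx, dB/dt = dE/dx on a Yee-like grid."""
    dt = cfl * dx / c                                # CFL-limited time step
    E = np.exp(-0.01 * (np.arange(n) - n / 2) ** 2)  # Gaussian pulse in E
    B = np.zeros(n - 1)                              # B lives on half-cells
    for _ in range(steps):
        # "Kick": advance B using the current E field
        B += dt / dx * (E[1:] - E[:-1])
        # "Drift": advance interior E using the freshly updated B
        E[1:-1] += (c ** 2) * dt / dx * (B[1:] - B[:-1])
    return E, B
```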
Unsupervised Spatial, Temporal and Relational Models for Social Processes
2012-02-01
The report draws on the partitioning approach to structural balance of Doreian and Mrvar (Social Networks, 18(2):149-168, 1996) for community detection. Doreian and Mrvar used a block-modeling approach that optimizes structural balance, a measure of cohesion, and demonstrated increasing evidence over time that the foursome studied was a genuine group.
Detecting communities in large networks
NASA Astrophysics Data System (ADS)
Capocci, A.; Servedio, V. D. P.; Caldarelli, G.; Colaiori, F.
2005-07-01
We develop an algorithm to detect community structure in complex networks. The algorithm is based on spectral methods and takes into account weights and link orientation. Since the method efficiently detects clustered nodes in large networks even when these are not sharply partitioned, it turns out to be especially suitable for the analysis of social and information networks. We test the algorithm on a large-scale dataset from a psychological experiment of word association. In this case, it proves to be successful both in clustering words and in uncovering mental association patterns.
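As a hedged illustration of the spectral approach (the authors' algorithm differs in detail), the sketch below symmetrizes a weighted directed adjacency matrix and bipartitions the network by the sign of the Fiedler vector of the graph Laplacian.

```python
import numpy as np

def spectral_bipartition(W: np.ndarray) -> np.ndarray:
    """Split a weighted (possibly asymmetric) network into two communities
    using the Fiedler vector of the symmetrized graph Laplacian."""
    A = 0.5 * (W + W.T)                # fold link orientation into weights
    D = np.diag(A.sum(axis=1))
    L = D - A                          # combinatorial Laplacian
    vals, vecs = np.linalg.eigh(L)     # eigenvalues in ascending order
    fiedler = vecs[:, 1]               # eigenvector of 2nd-smallest eigenvalue
    return (fiedler > 0).astype(int)   # sign pattern defines the two groups
```

Applying the bipartition recursively, or clustering several low eigenvectors at once, extends the idea to more than two communities.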
Fast Nonparametric Machine Learning Algorithms for High-Dimensional Massive Data and Applications
2006-03-01
Using Lemma 2 and the union bound, the failure probability of the "i-am-feeling-lucky" search algorithm can be bounded for any query q. For n points in a d-dimensional space, a naive k-NN search must do a linear scan of the dataset T for every single query q, making the computational time prohibitive for massive data. An alternative algorithm based on partition trees with priority search gives an expected query time of O((1/ε)^d log n), but the constant in the O((1/ε)^d log n) bound can be large.
NASA Astrophysics Data System (ADS)
Tedesco, M.; Alexander, P.; Porter, D. F.; Fettweis, X.; Luthcke, S. B.; Mote, T. L.; Rennermalm, A.; Hanna, E.
2017-12-01
Despite recent changes in Greenland surface mass losses and atmospheric circulation over the Arctic, little attention has been given to the potential role of large-scale atmospheric processes in the spatial and temporal variability of mass loss and the partitioning of the GrIS mass loss. Using a combination of satellite gravimetry measurements, outputs of the MAR regional climate model, and reanalysis data, we show that changes in atmospheric patterns since 2013 over the North Atlantic region of the Arctic (NAA) modulate total mass loss trends over Greenland together with the spatial and temporal distribution of mass loss partitioning. For example, during the 2002-2012 period, melting persistently increased, especially along the west coast, as a consequence of increased insolation and the negative NAO conditions characterizing that period. Starting in 2013, runoff along the west coast decreased while snowfall increased substantially, when the NAO turned to a more neutral/positive state. Modeled surface mass balance terms since 1950 indicate that part of the GRACE period, specifically the period between 2002 and 2012, was exceptional in terms of snowfall over the east and northeast regions. During that period the snowfall trend decreased to almost 0 Gt/yr from a long-term increasing trend, which resumed in 2013. To identify the potential impact of atmospheric patterns on mass balance and its partitioning, we studied the spatial and temporal correlations between the NAO and snowfall/runoff. Our results indicate that the correlation between summer snowfall and the NAO is not stable during the 1950-2015 period. We further looked at changes in patterns of circulation using self-organizing maps (SOMs) to identify the atmospheric patterns characterizing snowfall during different periods. We discuss potential implications for past changes and future GCM and RCM simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schilling, Jonathan
Consolidated bioprocessing (CBP) of lignocellulose combines enzymatic sugar release (saccharification) with fermentation, but pretreatments remain separate and costly. In nature, lignocellulose-degrading brown rot fungi consolidate pretreatment and saccharification, likely using spatial gradients to partition these incompatible reactions. With the field of biocatalysis maturing, reaction partitioning is increasingly reproducible for commercial use. Therefore, my goal was to resolve the reaction partitioning mechanisms of brown rot fungi so that they can be applied to bioconversion of lignocellulosic feedstocks. Brown rot fungi consolidate oxidative pretreatments with saccharification and are a focus for biomass refining because 1) they attain >99% sugar yield without destroying lignin, 2) they use a simplified cellulase suite that lacks exoglucanase, and 3) their non-enzymatic pretreatment is facilitative and may be accelerated. Specifically, I hypothesized that during brown rot, oxidative pretreatments occur ahead of enzymatic saccharification, spatially, and the fungus partitions these reactions using gradients in pH, lignin reactivity, and plant cell wall porosity. In fact, we found three key results during these experiments for this work: 1) Brown rot fungi have an inducible cellulase system, unlike previous descriptions of a constitutive mechanism. 2) The induction of cellulases is delayed until there is repression of oxidatively-linked genes, allowing the brown rot fungi to coordinate two incompatible reactions (oxidative pretreatment with enzymatic saccharification, to release wood sugars) in the same pieces of wood. 3) This transition is mediated by the same wood sugar, cellobiose, released by the oxidative pretreatment step. Collectively, these findings have been published in excellent journal outlets and have been presented at conferences around the United States, and they offer clear targets for gene discovery en route to making biofuels and biochemicals affordable, commercially.
Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H
2017-10-25
Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.
Scalable clustering algorithms for continuous environmental flow cytometry.
Hyrkas, Jeremy; Clayton, Sophie; Ribalet, Francois; Halperin, Daniel; Armbrust, E Virginia; Howe, Bill
2016-02-01
Recent technological innovations in flow cytometry now allow oceanographers to collect high-frequency flow cytometry data from particles in aquatic environments on a scale far surpassing conventional flow cytometers. The SeaFlow cytometer continuously profiles microbial phytoplankton populations across thousands of kilometers of the surface ocean. The data streams produced by instruments such as SeaFlow challenge the traditional sample-by-sample approach in cytometric analysis and highlight the need for scalable clustering algorithms to extract population information from these large-scale, high-frequency flow cytometers. We explore how available algorithms commonly used for medical applications perform at classifying such large-scale environmental flow cytometry data. We apply large-scale Gaussian mixture models to massive datasets using Hadoop. This approach outperforms current state-of-the-art cytometry classification algorithms in accuracy and can be coupled with manual or automatic partitioning of data into homogeneous sections for further classification gains. We propose the Gaussian mixture model with partitioning approach for classification of large-scale, high-frequency flow cytometry data. Source code is available for download at https://github.com/jhyrkas/seaflow_cluster, implemented in Java for use with Hadoop. hyrkas@cs.washington.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
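A single-node analogue of the approach (the Hadoop distribution and SeaFlow preprocessing are omitted) might look like the following, using scikit-learn's GaussianMixture on log-scaled channel data; the component count and log transform are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_populations(X: np.ndarray, n_populations: int = 4) -> np.ndarray:
    """Label each cytometry event (rows of X are scatter/fluorescence
    channels) with a Gaussian mixture model fitted on log-scaled data."""
    gmm = GaussianMixture(n_components=n_populations,
                          covariance_type="full", random_state=0)
    return gmm.fit_predict(np.log1p(X))   # one population label per event
```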
On program restructuring, scheduling, and communication for parallel processor systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polychronopoulos, Constantine D.
1986-08-01
This dissertation discusses several software and hardware aspects of program execution on large-scale, high-performance parallel processor systems. The issues covered are program restructuring, partitioning, scheduling and interprocessor communication, synchronization, and hardware design issues of specialized units. All this work was performed focusing on a single goal: to maximize program speedup, or equivalently, to minimize parallel execution time. Parafrase, a Fortran restructuring compiler, was used to transform programs into a parallel form and conduct experiments. Two new program restructuring techniques are presented, loop coalescing and subscript blocking. Compile-time and run-time scheduling schemes are covered extensively. Depending on the program construct, these algorithms generate optimal or near-optimal schedules. For the case of arbitrarily nested hybrid loops, two optimal scheduling algorithms for dynamic and static scheduling are presented. Simulation results are given for a new dynamic scheduling algorithm. The performance of this algorithm is compared to that of self-scheduling. Techniques for program partitioning and minimization of interprocessor communication for idealized program models and for real Fortran programs are also discussed. The close relationship between scheduling, interprocessor communication, and synchronization becomes apparent at several points in this work. Finally, the impact of various types of overhead on program speedup and experimental results are presented.
The Kaye effect revisited: High speed imaging of leaping shampoo
NASA Astrophysics Data System (ADS)
Versluis, Michel; Blom, Cock; van der Meer, Devaraj; van der Weele, Ko; Lohse, Detlef
2003-11-01
When a visco-elastic fluid such as shampoo or shower gel is poured onto a flat surface, the fluid piles up, forming a heap on which rather irregular combinations of fluid buckling, coiling and folding are observed. Under specific conditions a string of fluid leaps from the heap and forms a steady jet fed by the incoming stream. Momentum transfer of the incoming jet, combined with the shear-thinning properties of the fluid, leads to a spoon-like dimple in the highly viscous fluid pool in which the jet recoils. The jet can be stable for several seconds. This effect is known as the Kaye effect. In order to reveal its mechanism we analyzed leaping shampoo through high-speed imaging. We studied the jet formation, jet stability and jet disruption mechanisms. We measured the velocity of both the incoming and the recoiled jet, which was found to be thicker and slower. By inclining the surface on which the fluid was poured we observed jets leaping up to five times.
Reducing Memory Cost of Exact Diagonalization using Singular Value Decomposition
NASA Astrophysics Data System (ADS)
Weinstein, Marvin; Chandra, Ravi; Auerbach, Assa
2012-02-01
We present a modified Lanczos algorithm to diagonalize lattice Hamiltonians with dramatically reduced memory requirements. In contrast to variational approaches and most implementations of DMRG, Lanczos rotations towards the ground state do not involve incremental minimizations (e.g. sweeping procedures) which may get stuck in false local minima. The lattice of size N is partitioned into two subclusters. At each iteration the rotating Lanczos vector is compressed into two sets of nsvd small subcluster vectors using singular value decomposition. For low entanglement entropy See (satisfied by short-range Hamiltonians), the truncation error is bounded by exp(-nsvd^(1/See)). Convergence is tested for the Heisenberg model on Kagomé clusters of 24, 30 and 36 sites, with no lattice symmetries exploited, using less than 15GB of dynamical memory. Generalization of the Lanczos-SVD algorithm to multiple partitioning is discussed, and comparisons to other techniques are given. Reference: arXiv:1105.0007
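The core compression step can be sketched as follows: reshape the state vector into a matrix over the left/right subcluster bases and keep only the nsvd largest singular triplets. This is a generic truncated-SVD illustration under that bipartition, not the paper's full algorithm.

```python
import numpy as np

def compress_state(psi: np.ndarray, dim_left: int, n_svd: int):
    """Compress a lattice wavefunction into n_svd pairs of subcluster
    vectors via a truncated SVD over a left/right bipartition."""
    M = psi.reshape(dim_left, -1)            # matrix over the two subclusters
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    k = min(n_svd, s.size)
    # Keep the k largest singular values: psi ~ sum_i s_i u_i (x) v_i
    return U[:, :k], s[:k], Vt[:k, :]

def reconstruct(U, s, Vt):
    return (U * s) @ Vt   # low-rank approximation of the original matrix
```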
Clustering of financial time series
NASA Astrophysics Data System (ADS)
D'Urso, Pierpaolo; Cappelli, Carmela; Di Lallo, Dario; Massari, Riccardo
2013-05-01
This paper addresses the topic of classifying financial time series in a fuzzy framework, proposing two fuzzy clustering models both based on GARCH models. In general, clustering of financial time series, due to their peculiar features, requires the definition of suitable distance measures. To this aim, the first fuzzy clustering model exploits the autoregressive representation of GARCH models and employs, in the framework of a partitioning around medoids algorithm, the classical autoregressive metric. The second fuzzy clustering model, also based on the partitioning around medoids algorithm, uses the Caiado distance, a Mahalanobis-like distance based on estimated GARCH parameters and covariances, that takes into account the information about the volatility structure of time series. In order to illustrate the merits of the proposed fuzzy approaches, an application to the problem of classifying 29 time series of Euro exchange rates against international currencies is presented and discussed, also comparing the fuzzy models with their crisp versions.
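To make the first model's ingredients concrete, here is a sketch pairing the autoregressive metric (Euclidean distance between fitted AR coefficient vectors) with a naive crisp partitioning-around-medoids loop; the fuzzy memberships and the Caiado distance of the second model are not shown, and the AR order and simulated data are illustrative.

```python
import numpy as np

def ar_coefficients(x: np.ndarray, order: int = 5) -> np.ndarray:
    """Least-squares AR(order) fit; under the autoregressive metric, the
    distance between two series is the Euclidean distance between their
    AR coefficient vectors."""
    X = np.column_stack([x[order - 1 - j:len(x) - 1 - j] for j in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def pam(D: np.ndarray, k: int = 2, iters: int = 50):
    """Naive (crisp) partitioning-around-medoids on a distance matrix D."""
    rng = np.random.default_rng(0)
    medoids = rng.choice(len(D), size=k, replace=False)
    for _ in range(iters):
        labels = D[:, medoids].argmin(axis=1)
        new = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if members.size:
                # new medoid: member minimizing total intra-cluster distance
                new[j] = members[D[np.ix_(members, members)].sum(axis=1).argmin()]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return labels, medoids

# Usage: series -> AR coefficients -> pairwise distances -> medoid clusters
series = [np.random.default_rng(s).normal(size=500).cumsum() for s in range(6)]
F = np.array([ar_coefficients(np.diff(s)) for s in series])
D = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=-1)
labels, medoids = pam(D, k=2)
```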
DTN routing in body sensor networks with dynamic postural partitioning.
Quwaider, Muhannad; Biswas, Subir
2010-11-01
This paper presents novel store-and-forward packet routing algorithms for Wireless Body Area Networks (WBAN) with frequent postural partitioning. A prototype WBAN has been constructed for experimentally characterizing on-body topology disconnections in the presence of ultra-short-range radio links, unpredictable RF attenuation, and human postural mobility. On-body DTN routing protocols are then developed using a stochastic link cost formulation, capturing multi-scale topological localities in human postural movements. Performance of the proposed protocols is evaluated experimentally and via simulation, and is compared with a number of existing single-copy DTN routing protocols and an on-body packet flooding mechanism that serves as a performance benchmark with a delay lower-bound. It is shown that via multi-scale modeling of the spatio-temporal locality of on-body link disconnection patterns, the proposed algorithms can provide better routing performance compared to a number of existing probabilistic, opportunistic, and utility-based DTN routing protocols in the literature.
Three-Dimensional High-Lift Analysis Using a Parallel Unstructured Multigrid Solver
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1998-01-01
A directional implicit unstructured agglomeration multigrid solver is ported to shared and distributed memory massively parallel machines using the explicit domain-decomposition and message-passing approach. Because the algorithm operates on local implicit lines in the unstructured mesh, special care is required in partitioning the problem for parallel computing. A weighted partitioning strategy is described which avoids breaking the implicit lines across processor boundaries, while incurring minimal additional communication overhead. Good scalability is demonstrated on a 128 processor SGI Origin 2000 machine and on a 512 processor CRAY T3E machine for reasonably fine grids. The feasibility of performing large-scale unstructured grid calculations with the parallel multigrid algorithm is demonstrated by computing the flow over a partial-span flap wing high-lift geometry on a highly resolved grid of 13.5 million points in approximately 4 hours of wall clock time on the CRAY T3E.
Local performance optimization for a class of redundant eight-degree-of-freedom manipulators
NASA Technical Reports Server (NTRS)
Williams, Robert L., II
1994-01-01
Local performance optimization for joint limit avoidance and manipulability maximization (singularity avoidance) is obtained by using the Jacobian matrix pseudoinverse and by projecting the gradient of an objective function into the Jacobian null space. Real-time redundancy optimization control is achieved for an eight-joint redundant manipulator having a three-axis spherical shoulder, a single elbow joint, and a four-axis spherical wrist. Symbolic solutions are used for both full-Jacobian and wrist-partitioned pseudoinverses, partitioned null-space projection matrices, and all objective function gradients. A kinematic limitation of this class of manipulators and the limitation's effect on redundancy resolution are discussed. Results obtained with graphical simulation are presented to demonstrate the effectiveness of local redundant manipulator performance optimization. Actual hardware experiments performed to verify the simulated results are also discussed. A major result is that the partitioned solution is desirable because of low computation requirements. The partitioned solution is suboptimal compared with the full solution because translational and rotational terms are optimized separately; however, the results show that the difference is not significant. Singularity analysis reveals that no algorithmic singularities exist for the partitioned solution. The partitioned and full solutions share the same physical manipulator singular conditions. When compared with the full solution, the partitioned solution is shown to be ill-conditioned in smaller neighborhoods of the shared singularities.
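The underlying gradient-projection law is standard and easy to sketch: the joint velocity tracks the task-space command through the pseudoinverse while the objective gradient is projected into the Jacobian null space. Function and gain names below are illustrative, not taken from the report.

```python
import numpy as np

def redundancy_resolution(J: np.ndarray, xdot: np.ndarray,
                          grad_H: np.ndarray, k: float = 1.0) -> np.ndarray:
    """Resolve redundancy for an 8-DOF arm: track the end-effector twist
    xdot while ascending an objective H (e.g., manipulability, joint-limit
    avoidance) in the Jacobian null space:
        qdot = J+ xdot + k (I - J+ J) grad_H
    """
    J_pinv = np.linalg.pinv(J)
    n = J.shape[1]
    null_proj = np.eye(n) - J_pinv @ J     # projector onto null(J)
    return J_pinv @ xdot + k * null_proj @ grad_H
```

Partitioning the problem into translational and rotational sub-Jacobians, as in the report's wrist-partitioned solution, applies the same law to each block at lower computational cost.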
Communications oriented programming of parallel iterative solutions of sparse linear systems
NASA Technical Reports Server (NTRS)
Patrick, M. L.; Pratt, T. W.
1986-01-01
Parallel algorithms are developed for a class of scientific computational problems by partitioning the problems into smaller problems which may be solved concurrently. The effectiveness of the resulting parallel solutions is determined by the amount and frequency of communication and synchronization and the extent to which communication can be overlapped with computation. Three different parallel algorithms for solving the same class of problems are presented, and their effectiveness is analyzed from this point of view. The algorithms are programmed using a new programming environment. Run-time statistics and experience obtained from the execution of these programs assist in measuring the effectiveness of these algorithms.
2013-01-01
Background Tailored psychosocial activity-based interventions have been shown to improve mood, behaviour and quality of life for nursing home residents. Occupational therapist delivered activity programs have shown benefits when delivered in home care settings for people with dementia. The primary aim of this study is to evaluate the effect of LEAP (Lifestyle Engagement Activity Program) for Life, a training and practice change program, on the engagement of home care clients by care workers. Secondary aims are to evaluate the impact of the program on changes in client mood and behaviour. Methods/design The 12-month LEAP program has three components: 1) engaging site management and care staff in the program; 2) employing a LEAP champion one day a week to support program activities; 3) delivering an evidence-based training program to care staff. Specifically, case managers will be trained and supported to set meaningful social or recreational goals with clients and incorporate these into care plans. Care workers will be trained in and encouraged to practise good communication, promote client independence and choice, and tailor meaningful activities using Montessori principles, reminiscence, music, physical activity and play. LEAP Champions will be given information about theories of organisational change and trained in the interpersonal skills required for their role. LEAP will be evaluated in five home care sites, including two that service ethnic minority groups. A quasi-experimental design will be used, with evaluation data collected four times: 6 months prior to program commencement; at the start of the program; and then after 6 and 12 months. Mixed-effects models will enable comparison of change in outcomes for the periods before and during the program. The primary outcome measure is client engagement. Secondary outcomes for clients are satisfaction with care, dysphoria/depression, loneliness, apathy and agitation; and work satisfaction for care workers. A process evaluation will also be undertaken. Discussion LEAP for Life may prove a cost-effective way to improve client engagement and other outcomes in the community setting. Trial registration Australian New Zealand Clinical Trials Registry ACTRN12612001064897. PMID:24238067
NASA Technical Reports Server (NTRS)
Pagnutti, Mary
2006-01-01
This viewgraph presentation reviews the creation of a prototype algorithm for atmospheric correction using high spatial resolution earth observing imaging systems. The objective of the work was to evaluate the accuracy of a prototype algorithm that uses satellite-derived atmospheric products to generate scene reflectance maps for high spatial resolution (HSR) systems. This presentation focused on preliminary results of the satellite-based atmospheric correction algorithm only.
van Soest, Johan; Sun, Chang; Mussmann, Ole; Puts, Marco; van den Berg, Bob; Malic, Alexander; van Oppen, Claudia; Towend, David; Dekker, Andre; Dumontier, Michel
2018-01-01
Conventional data mining algorithms are unable to satisfy the current requirements for analyzing big data in some fields such as medicine, policy making, and judicial and tax records. However, combining diverse datasets from different institutes (both healthcare and non-healthcare related) can enrich information and insights. To our knowledge, no approach yet exists for analyzing these data in an automated, privacy-preserving manner. In this work, we propose an infrastructure, and a proof-of-concept, for privacy-preserving analytics on vertically partitioned data.
Modelling of land use change in Indramayu District, West Java Province
NASA Astrophysics Data System (ADS)
Handayani, L. D. W.; Tejaningrum, M. A.; Damrah, F.
2017-01-01
Indramayu District is a strategic stopover area on routes from East Java because it lies on the main north-coast corridor, the economic lifeblood of Java Island. As part of this main economic pathway, the physical development of the area, its population density and community activities have grown by leaps and bounds. This accelerated growth has raised the rate of land use change, and land use change and population activities in the coastal area reduce the carrying capacity and degrade environmental quality. This research aims to analyse land use change between 2000 and 2011 in Indramayu District. Using this land use change map, we can predict the land use conditions of 2022 in Indramayu District. The Cellular Automata Markov (Markov CA) method is used to create a spatial model of land use changes. The results of this study are a prediction of land use in 2022 and its suitability with the Spatial Plan (RTRW).
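The Markov half of a Markov CA projection can be sketched in a few lines: a transition matrix cross-tabulated from the 2000 and 2011 maps is applied to the 2011 class proportions to project 2022. All numbers below are hypothetical, and the cellular-automata step that spatially allocates the projected quantities is not shown.

```python
import numpy as np

# Transition matrix P[i, j]: probability that land-use class i in 2000
# becomes class j by 2011 (hypothetical values; classes: settlement,
# agriculture, water/pond), estimated by cross-tabulating the two maps.
P = np.array([[0.95, 0.04, 0.01],
              [0.10, 0.85, 0.05],
              [0.02, 0.03, 0.95]])

v_2011 = np.array([0.20, 0.65, 0.15])   # class proportions observed in 2011

# One more 11-year Markov step projects the 2022 class proportions.
v_2022 = v_2011 @ P
print(dict(zip(["settlement", "agriculture", "water"], v_2022.round(3))))
```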
Niechwiej-Szwedo, Ewa; Gonzalez, David; Nouredanesh, Mina; Tung, James
2018-01-01
Kinematic analysis of upper limb reaching provides insight into the central nervous system control of movements. Until recently, kinematic examination of motor control has been limited to studies conducted in traditional research laboratories because motion capture equipment used for data collection is not easily portable and expensive. A recently developed markerless system, the Leap Motion Controller (LMC), is a portable and inexpensive tracking device that allows recording of 3D hand and finger position. The main goal of this study was to assess the concurrent reliability and validity of the LMC as compared to the Optotrak, a criterion-standard motion capture system, for measures of temporal accuracy and peak velocity during the performance of upper limb, visually-guided movements. In experiment 1, 14 participants executed aiming movements to visual targets presented on a computer monitor. Bland-Altman analysis was conducted to assess the validity and limits of agreement for measures of temporal accuracy (movement time, duration of deceleration interval), peak velocity, and spatial accuracy (endpoint accuracy). In addition, a one-sample t-test was used to test the hypothesis that the error difference between measures obtained from Optotrak and LMC is zero. In experiment 2, 15 participants performed a Fitts' type aiming task in order to assess whether the LMC is capable of assessing a well-known speed-accuracy trade-off relationship. Experiment 3 assessed the temporal coordination pattern during the performance of a sequence consisting of a reaching, grasping, and placement task in 15 participants. Results from the t-test showed that the error difference in temporal measures was significantly different from zero. Based on the results from the 3 experiments, the average temporal error in movement time was 40±44 ms, and the error in peak velocity was 0.024±0.103 m/s. The limits of agreement between the LMC and Optotrak for spatial accuracy measures ranged between 2-5 cm. Although the LMC system is a low-cost, highly portable system, which could facilitate collection of kinematic data outside of the traditional laboratory settings, the temporal and spatial errors may limit the use of the device in some settings.
Gonzalez, David; Nouredanesh, Mina; Tung, James
2018-01-01
Kinematic analysis of upper limb reaching provides insight into the central nervous system control of movements. Until recently, kinematic examination of motor control has been limited to studies conducted in traditional research laboratories because motion capture equipment used for data collection is not easily portable and expensive. A recently developed markerless system, the Leap Motion Controller (LMC), is a portable and inexpensive tracking device that allows recording of 3D hand and finger position. The main goal of this study was to assess the concurrent reliability and validity of the LMC as compared to the Optotrak, a criterion-standard motion capture system, for measures of temporal accuracy and peak velocity during the performance of upper limb, visually-guided movements. In experiment 1, 14 participants executed aiming movements to visual targets presented on a computer monitor. Bland-Altman analysis was conducted to assess the validity and limits of agreement for measures of temporal accuracy (movement time, duration of deceleration interval), peak velocity, and spatial accuracy (endpoint accuracy). In addition, a one-sample t-test was used to test the hypothesis that the error difference between measures obtained from Optotrak and LMC is zero. In experiment 2, 15 participants performed a Fitts’ type aiming task in order to assess whether the LMC is capable of assessing a well-known speed-accuracy trade-off relationship. Experiment 3 assessed the temporal coordination pattern during the performance of a sequence consisting of a reaching, grasping, and placement task in 15 participants. Results from the t-test showed that the error difference in temporal measures was significantly different from zero. Based on the results from the 3 experiments, the average temporal error in movement time was 40±44 ms, and the error in peak velocity was 0.024±0.103 m/s. The limits of agreement between the LMC and Optotrak for spatial accuracy measures ranged between 2–5 cm. Although the LMC system is a low-cost, highly portable system, which could facilitate collection of kinematic data outside of the traditional laboratory settings, the temporal and spatial errors may limit the use of the device in some settings. PMID:29529064
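The agreement statistics reported in these studies come from a Bland-Altman analysis, which is straightforward to reproduce; the sketch below uses simulated movement times whose error magnitude merely mimics the reported ~40 ± 44 ms, not the actual study data.

```python
import numpy as np

def bland_altman(ref: np.ndarray, test: np.ndarray):
    """Bland-Altman agreement between a criterion system (e.g., Optotrak)
    and a test device (e.g., Leap Motion) on paired measurements."""
    diff = test - ref
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    return bias, loa

# Simulated movement times (seconds), error scaled to the reported values:
rng = np.random.default_rng(1)
ref = rng.normal(0.60, 0.08, 200)
test = ref + rng.normal(0.040, 0.044, 200)
bias, (lo, hi) = bland_altman(ref, test)
print(f"bias = {bias*1000:.1f} ms, LoA = [{lo*1000:.1f}, {hi*1000:.1f}] ms")
```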
Enrofloxacin and Probiotic Lactobacilli Influence PepT1 and LEAP-2 mRNA Expression in Poultry.
Pavlova, Ivelina; Milanova, Aneliya; Danova, Svetla; Fink-Gremmels, Johanna
2016-12-01
Expression of peptide transporter 1 (PepT1) and liver-expressed antimicrobial peptide 2 (LEAP-2) in chickens can be influenced by food deprivation, pathological conditions and drug administration. The effect of three putative probiotic Lactobacillus strains and enrofloxacin on the expression of PepT1 and LEAP-2 mRNA was investigated in Ross 308 chickens. One-day-old chicks (n = 24) were allocated to the following groups: control (without treatment); a group treated with probiotics via feed; a group treated with a combination of probiotics and enrofloxacin; and a group given enrofloxacin only. The drug was administered at a dose of 10 mg kg⁻¹, via drinking water, for 5 days. Samples from liver, duodenum and jejunum were collected 126 h after the start of the treatment. Expression levels of PepT1 and LEAP-2 were determined by real-time polymerase chain reaction and were statistically evaluated by the Mann-Whitney test. Enrofloxacin administered alone or in combination with probiotics provoked a statistically significant up-regulation of PepT1 mRNA levels in the measured organ sites. These changes may be associated with a tendency toward improved utilization of dietary peptides and improved body weight gain. LEAP-2 mRNA expression levels did not change significantly in enrofloxacin-treated chickens in comparison with the control group.
Algorithm Visualization System for Teaching Spatial Data Algorithms
ERIC Educational Resources Information Center
Nikander, Jussi; Helminen, Juha; Korhonen, Ari
2010-01-01
TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.
A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this second part of a two-part series, the general formulation of the AMP scheme is presented including the form of the AMP interface conditions and added-damping tensors for general geometries. A fully second-order accurate implementation of the AMP scheme is developed in two dimensions based on a fractional-step method for the incompressible Navier-Stokes equations using finite difference methods and overlapping grids to handle the moving geometry. Here, the numerical scheme is verified on a number of difficult benchmark problems.
Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.; ...
2017-01-20
A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this second part of a two-part series, the general formulation of the AMP scheme is presented including the form of the AMP interface conditions and added-damping tensors for general geometries. A fully second-order accurate implementation of the AMP scheme is developed in two dimensions based on a fractional-step method for the incompressible Navier-Stokes equations using finite difference methods and overlapping grids to handle the moving geometry. Here, the numerical scheme is verified on a number of difficult benchmark problems.
StochKit2: software for discrete stochastic simulation of biochemical systems with events.
Sanft, Kevin R; Wu, Sheng; Roh, Min; Fu, Jin; Lim, Rone Kwei; Petzold, Linda R
2011-09-01
StochKit2 is the first major upgrade of the popular StochKit stochastic simulation software package. StochKit2 provides highly efficient implementations of several variants of Gillespie's stochastic simulation algorithm (SSA), and tau-leaping with automatic step size selection. StochKit2 features include automatic selection of the optimal SSA method based on model properties, event handling, and automatic parallelism on multicore architectures. The underlying structure of the code has been completely updated to provide a flexible framework for extending its functionality. StochKit2 runs on Linux/Unix, Mac OS X and Windows. It is freely available under GPL version 3 and can be downloaded from http://sourceforge.net/projects/stochkit/. petzold@engineering.ucsb.edu.
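For readers unfamiliar with tau-leaping, the sketch below shows the basic Poisson leap with a naive non-negativity guard; it is a didactic illustration, not StochKit2's optimized implementation or its automatic step-size selection, and the dimerization example and rate constant are arbitrary.

```python
import numpy as np

def tau_leap_step(x, nu, propensities, tau, rng):
    """One Poisson tau-leap: fire each reaction k ~ Poisson(a_k(x) * tau)
    times and apply the stoichiometry matrix nu (species x reactions)."""
    a = propensities(x)
    k = rng.poisson(a * tau)
    x_new = x + nu @ k
    # Guard against negative populations (cf. non-negative tau-leaping):
    # reject and retry with a smaller tau if any species went negative.
    if np.any(x_new < 0):
        return tau_leap_step(x, nu, propensities, tau / 2, rng)
    return x_new

# Example: dimerization 2A -> B with rate constant c = 1e-3
nu = np.array([[-2], [1]])
prop = lambda x: np.array([1e-3 * x[0] * (x[0] - 1) / 2])
rng = np.random.default_rng(0)
x = np.array([1000, 0])
for _ in range(100):
    x = tau_leap_step(x, nu, prop, tau=0.1, rng=rng)
print(x)
```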
Wang, Kai; Mao, Jiafu; Dickinson, Robert; ...
2013-06-05
This paper examines a land surface solar radiation partitioning scheme, i.e., that of the Community Land Model version 4 (CLM4) with coupled carbon and nitrogen cycles. Taking advantage of a unique 30-year fraction of absorbed photosynthetically active radiation (FPAR) dataset derived from the Global Inventory Modeling and Mapping Studies (GIMMS) normalized difference vegetation index (NDVI) data set, multiple other remote sensing datasets, and site-level observations, we evaluated the CLM4 FPAR's seasonal cycle, diurnal cycle, long-term trends and spatial patterns. These findings show that the model generally agrees with observations in the seasonal cycle, long-term trends, and spatial patterns, but does not reproduce the diurnal cycle. Discrepancies also exist in seasonality magnitudes, peak value months, and spatial heterogeneity. Here, we identify the discrepancy in the diurnal cycle as due to the absence of dependence on sun angle in the model. Implementation of sun angle dependence in a one-dimensional (1-D) model is proposed. The need for better relating of vegetation to climate in the model, indicated by long-term trends, is also noted. Evaluation of the CLM4 land surface solar radiation partitioning scheme using remote sensing and site-level FPAR datasets provides targets for future development in its representation of this naturally complicated process.
A robust fuzzy local Information c-means clustering algorithm with noise detection
NASA Astrophysics Data System (ADS)
Shang, Jiayu; Li, Shiren; Huang, Junwei
2018-04-01
Fuzzy c-means clustering (FCM), especially with spatial constraints (FCM_S), is an effective algorithm for image segmentation. Its reliability derives not only from representing the fuzzy belongingness of every pixel but also from exploiting spatial contextual information. However, these algorithms still have problems when processing noisy images: they are sensitive to parameters that must be tuned according to prior knowledge of the noise. In this paper, we propose a new FCM algorithm combining gray-level constraints and spatial constraints, called the spatial and gray-level denoised fuzzy c-means (SGDFCM) algorithm. The new algorithm overcomes the parameter drawbacks mentioned above by considering the noise possibility of each pixel, which improves robustness and preserves more detailed information. Furthermore, the noise possibility can be calculated in advance, making the algorithm both effective and efficient.
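For context, the following is a minimal sketch of the standard FCM update steps that SGDFCM builds on, i.e., alternating membership and center updates; the noise-possibility weighting and spatial/gray constraints of the paper are not reproduced here.

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means: X is (n_samples, n_features), c is cluster count."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                      # fuzzy memberships, columns sum to 1
    for _ in range(n_iter):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))  # inverse-distance memberships
        U /= U.sum(axis=0)                  # renormalize per sample
    return centers, U

X = np.random.default_rng(1).random((200, 1))   # e.g. gray levels as features
centers, U = fcm(X, c=3)
print(centers.ravel())
```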
Mining Co-Location Patterns with Clustering Items from Spatial Data Sets
NASA Astrophysics Data System (ADS)
Zhou, G.; Li, Q.; Deng, G.; Yue, T.; Zhou, X.
2018-05-01
The explosive growth of spatial data and the widespread use of spatial databases emphasize the need for spatial data mining. Co-location pattern discovery is an important branch of spatial data mining. Spatial co-locations represent subsets of features that are frequently located together in geographic space. However, the appearance of a spatial feature C is often determined not by a single spatial feature A or B but by the two features together; that is, where A and B appear together, C often appears. We note that this co-location pattern differs from the traditional one. This paper therefore introduces a new concept, clustering items, and calls the resulting patterns co-location patterns with clustering items. Because traditional algorithms cannot mine this kind of pattern, we describe the related concepts in detail and propose a novel algorithm that extends the join-based approach proposed by Huang. Finally, we evaluate the performance of the algorithm.
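As a toy illustration of the clustering-item idea (does C tend to appear wherever A and B co-occur?), here is a hedged sketch with a naive distance-threshold count; the function names and thresholds are assumptions for illustration, not the paper's join-based algorithm.

```python
from math import dist

def colocated(p, q, d):
    return dist(p, q) <= d

def clustering_item_confidence(A, B, C, d):
    """A, B, C are lists of (x, y) feature instances; d is the neighbor distance."""
    ab_pairs = [(a, b) for a in A for b in B if colocated(a, b, d)]
    if not ab_pairs:
        return 0.0
    hits = sum(1 for a, b in ab_pairs
               if any(colocated(a, c, d) and colocated(b, c, d) for c in C))
    return hits / len(ab_pairs)   # fraction of (A,B) co-occurrences joined by C

A = [(0, 0), (5, 5)]
B = [(0, 1), (5, 6)]
C = [(0.5, 0.5)]
print(clustering_item_confidence(A, B, C, d=2.0))   # 0.5
```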
A Data Cleaning Method for Big Trace Data Using Movement Consistency
Tang, Luliang; Zhang, Xia; Li, Qingquan
2018-01-01
Given the popularization of GPS technologies, the massive amounts of spatiotemporal GPS traces collected by vehicles are becoming a new kind of big data source for urban geographic information extraction. The growing volume of the dataset, however, creates processing and management difficulties, while its low quality generates uncertainties when investigating human activities. Drawing on the error distribution law and position accuracy of GPS data, we propose in this paper a data cleaning method for this kind of spatial big data using movement consistency. First, a trajectory is partitioned into a set of sub-trajectories at movement characteristic points, i.e., the GPS points indicating that the motion status of the vehicle has changed from one state to another. Then, GPS data are cleaned based on the similarities of GPS points and the movement consistency model of each sub-trajectory. The movement consistency model is built using the random sample consensus algorithm, exploiting the high spatial consistency of high-quality GPS data. The proposed method is evaluated through extensive experiments using GPS trajectories generated by a sample of vehicles over a 7-day period in Wuhan, China. The results show the effectiveness and efficiency of the proposed method. PMID:29522456
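The following is a minimal sketch of the movement-consistency idea: fit a consensus line to a sub-trajectory with RANSAC and flag GPS fixes that deviate from it. The segmentation into sub-trajectories and the similarity test from the paper are omitted, and the tolerance value is an illustrative assumption.

```python
import numpy as np

def ransac_line_outliers(pts, n_iter=200, tol=5.0, seed=0):
    """pts: (n, 2) projected GPS coordinates; returns a boolean inlier mask."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, v = pts[i], pts[j] - pts[i]
        n = np.hypot(*v)
        if n == 0.0:
            continue
        # perpendicular distance of every point to the candidate line
        d = np.abs(v[0] * (pts[:, 1] - p[1]) - v[1] * (pts[:, 0] - p[0])) / n
        mask = d <= tol
        if mask.sum() > best.sum():
            best = mask                      # keep the largest consensus set
    return best

track = np.array([[0, 0], [10, 1], [20, 2], [30, 40], [40, 4]], float)
print(ransac_line_outliers(track))   # the (30, 40) fix is flagged as an outlier
```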
Multi-Objective Community Detection Based on Memetic Algorithm
Wu, Peng; Pan, Li
2015-01-01
Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single-objective optimization methods have intrinsic drawbacks in identifying multiple significant community structures, some methods formulate community detection as a multi-objective problem and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability but have difficulty in locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining a multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. First, nondominated solutions generated by evolutionary operations and solutions in the dominant population are set as initial individuals for the local search procedure. Then, a new direction vector, named the pseudonormal vector, is proposed to integrate the two objective functions into a single fitness function. Finally, a network-specific local search strategy based on the label propagation rule is extended to search for local optimal solutions efficiently. Extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. First, experiments on the influence of the local search procedure demonstrate that it speeds up convergence to better partitions and makes the algorithm more stable. Second, comparisons with a set of classic community detection methods illustrate that the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks, which is beneficial for analyzing networks at multiple resolution levels. PMID:25932646
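As a hedged sketch of the kind of local move the paper's network-specific strategy extends, here is generic label propagation: each node adopts the most frequent label among its neighbors. This is only the baseline rule, not the full memetic loop with its pseudonormal-vector fitness.

```python
from collections import Counter
import random

def label_propagation(adj, labels, sweeps=10, seed=0):
    """adj: {node: set(neighbors)}; labels: {node: community id}."""
    rng = random.Random(seed)
    nodes = list(adj)
    for _ in range(sweeps):
        rng.shuffle(nodes)                      # random visiting order
        changed = False
        for u in nodes:
            if not adj[u]:
                continue
            counts = Counter(labels[v] for v in adj[u])
            best = max(counts, key=counts.get)  # most frequent neighbor label
            if best != labels[u]:
                labels[u] = best
                changed = True
        if not changed:
            break                               # converged
    return labels

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
labels = {u: u for u in adj}
print(label_propagation(adj, labels))   # two communities typically emerge
```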
Dyson, Greg; Frikke-Schmidt, Ruth; Nordestgaard, Børge G; Tybjaerg-Hansen, Anne; Sing, Charles F
2009-05-01
This article extends the Patient Rule-Induction Method (PRIM) for modeling cumulative incidence of disease developed by Dyson et al. (Genet Epidemiol 31:515-527) to include the simultaneous consideration of non-additive combinations of predictor variables, a significance test of each combination, an adjustment for multiple testing and a confidence interval for the estimate of the cumulative incidence of disease in each partition. We employ the partitioning algorithm component of the Combinatorial Partitioning Method to construct combinations of predictors, permutation testing to assess the significance of each combination, theoretical arguments for incorporating a multiple testing adjustment and bootstrap resampling to produce the confidence intervals. An illustration of this revised PRIM utilizing a sample of 2,258 European male participants from the Copenhagen City Heart Study is presented that assesses the utility of genetic variants in predicting the presence of ischemic heart disease beyond the established risk factors.
Dyson, Greg; Frikke-Schmidt, Ruth; Nordestgaard, Børge G.; Tybjærg-Hansen, Anne; Sing, Charles F.
2009-01-01
This paper extends the Patient Rule-Induction Method (PRIM) for modeling cumulative incidence of disease developed by Dyson et al. (2007) to include the simultaneous consideration of non-additive combinations of predictor variables, a significance test of each combination, an adjustment for multiple testing and a confidence interval for the estimate of the cumulative incidence of disease in each partition. We employ the partitioning algorithm component of the Combinatorial Partitioning Method (CPM) to construct combinations of predictors, permutation testing to assess the significance of each combination, theoretical arguments for incorporating a multiple testing adjustment and bootstrap resampling to produce the confidence intervals. An illustration of this revised PRIM utilizing a sample of 2258 European male participants from the Copenhagen City Heart Study is presented that assesses the utility of genetic variants in predicting the presence of ischemic heart disease beyond the established risk factors. PMID:19025787
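The permutation-testing step can be made concrete with a minimal sketch: shuffle outcomes to build the null distribution of incidence inside one candidate partition. The combination search, multiplicity adjustment, and bootstrap intervals from the paper are omitted, and the simulated risk model is purely illustrative.

```python
import numpy as np

def partition_pvalue(disease, in_partition, n_perm=10000, seed=0):
    """disease, in_partition: boolean arrays over subjects."""
    rng = np.random.default_rng(seed)
    observed = disease[in_partition].mean()     # incidence in the partition
    null = np.empty(n_perm)
    for k in range(n_perm):
        perm = rng.permutation(disease)         # break predictor-outcome link
        null[k] = perm[in_partition].mean()
    return observed, (null >= observed).mean()  # one-sided p-value

rng = np.random.default_rng(1)
disease = rng.random(2258) < 0.10               # ~10% baseline incidence
carrier = rng.random(2258) < 0.20               # a hypothetical genetic partition
disease |= carrier & (rng.random(2258) < 0.10)  # carriers get extra risk
print(partition_pvalue(disease, carrier))
```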
This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms--theory and practice.
Harmany, Zachary T; Marcia, Roummel F; Willett, Rebecca M
2012-03-01
Observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where the number of unknowns may potentially be larger than the number of observations and f* admits sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objective function at each iteration and penalization terms related to l1 norms of coefficient vectors, total variation seminorms, and partition-based multiscale estimation methods.
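The following is a hedged sketch of one building block of this family of methods: a projected gradient step on the penalized negative Poisson log-likelihood with an l1 penalty and a nonnegativity constraint. The separable quadratic approximations and multiscale penalties of SPIRAL-TAP are not reproduced, and exact support recovery is not guaranteed in this toy setup.

```python
import numpy as np

def poisson_l1_pg(y, A, tau=1.0, step=5e-3, n_iter=3000):
    """Projected gradient on tau*||f||_1 - y^T log(Af) + 1^T Af, with f >= 0."""
    f = np.full(A.shape[1], y.mean() / A.shape[1])    # positive initial guess
    for _ in range(n_iter):
        Af = A @ f + 1e-12
        grad = A.T @ (1.0 - y / Af)                   # gradient of -log-likelihood
        f = np.maximum(f - step * (grad + tau), 0.0)  # descend, shrink, project
    return f

rng = np.random.default_rng(0)
A = rng.random((50, 100))                 # more unknowns than observations
f_true = np.zeros(100)
f_true[[5, 40, 77]] = 20.0                # sparse nonnegative intensity
y = rng.poisson(A @ f_true).astype(float)
f_hat = poisson_l1_pg(y, A)
print(sorted(np.argsort(f_hat)[-3:]))     # largest entries tend to sit on the support
```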
Distributed wavefront reconstruction with SABRE for real-time large scale adaptive optics control
NASA Astrophysics Data System (ADS)
Brunner, Elisabeth; de Visser, Cornelis C.; Verhaegen, Michel
2014-08-01
We present advances on Spline-based ABerration REconstruction (SABRE) from (Shack-)Hartmann (SH) wavefront measurements for large-scale adaptive optics systems. SABRE locally models the wavefront with simplex B-spline basis functions on triangular partitions defined on the SH subaperture array. This approach allows high accuracy through the possible use of nonlinear basis functions and great adaptability to any wavefront sensor and pupil geometry. The main contribution of this paper is a distributed wavefront reconstruction method, D-SABRE, a two-stage procedure based on decomposing the sensor domain into sub-domains, each supporting a local SABRE model. D-SABRE greatly decreases the computational complexity of the method and removes the need for centralized reconstruction, while its reconstruction accuracy for simulated E-ELT turbulence remains within 1% of the global method's accuracy. Furthermore, a generalization of the methodology is proposed that makes direct use of SH intensity measurements, leading to improved reconstruction accuracy compared with centroid algorithms using spatial gradients.
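A hedged sketch of the two-stage distributed idea follows: reconstruct the wavefront locally on each sub-domain from slope measurements, then align the unknown piston offsets of neighboring sub-domains. A 1-D cumulative-sum integrator stands in for the local B-spline fit of D-SABRE; everything here is illustrative.

```python
import numpy as np

def distributed_reconstruct(slopes, dx, n_domains):
    """slopes: 1-D array of measured wavefront gradients."""
    parts = np.array_split(slopes, n_domains)
    # stage 1: integrate slopes locally on each sub-domain
    locals_ = [np.concatenate([[0.0], np.cumsum(s) * dx]) for s in parts]
    # stage 2: shift each local solution so adjacent edges agree (piston alignment)
    out = locals_[0]
    for w in locals_[1:]:
        out = np.concatenate([out[:-1], w + (out[-1] - w[0])])
    return out

x = np.linspace(0, 1, 65)
phi = np.sin(2 * np.pi * x)                    # a toy wavefront
slopes = np.diff(phi) / (x[1] - x[0])          # ideal slope sensor
rec = distributed_reconstruct(slopes, dx=x[1] - x[0], n_domains=4)
print(np.max(np.abs(rec - (phi - phi[0]))))    # small reconstruction error
```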
Gambling score in earthquake prediction analysis
NASA Astrophysics Data System (ADS)
Molchan, G.; Romashkova, L.
2011-03-01
The number of successes and the space-time alarm rate are commonly used to characterize the strength of an earthquake prediction method and the significance of prediction results. It has recently been suggested to evaluate the forecaster's skill with a new characteristic, the gambling score (GS), which incorporates the difficulty of guessing each target event by using different weights for different alarms. We expand the parametrization of the GS and use the M8 prediction algorithm to illustrate the difficulties of the new approach in the analysis of prediction significance. We show that the level of significance strongly depends (1) on the choice of alarm weights, (2) on the partitioning of the entire alarm volume into component parts and (3) on the accuracy of the spatial rate measure of target events. These tools are at the disposal of the researcher and can affect the significance estimate. Formally, all reasonable GSs discussed here corroborate that the M8 method is non-trivial in the prediction of 8.0 ≤ M < 8.5 events, because the point estimates of the significance are in the range 0.5-5 per cent. However, the conservative estimate of 3.7 per cent based on the number of successes seems preferable owing to two circumstances: (1) it is based on relative values of the spatial rate and hence is more stable, and (2) the statistic of successes enables us to construct analytically an upper estimate of the significance that takes into account the uncertainty of the spatial rate measure.
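To make the weighting idea concrete, here is a hedged sketch of one common gambling-score tally (the assumption here follows Zhuang-style scoring, not necessarily the expanded parametrization of this paper): an alarm that captures a target event earns (1 - p)/p points, where p is the reference-model probability of an event inside that alarm's space-time volume, and an empty alarm loses 1 point.

```python
def gambling_score(alarms):
    """alarms: list of (p_reference, hit) pairs, one per declared alarm."""
    score = 0.0
    for p, hit in alarms:
        score += (1.0 - p) / p if hit else -1.0  # rare-event hits pay more
    return score

# three alarms: two rare-event successes, one false alarm
print(gambling_score([(0.02, True), (0.05, True), (0.10, False)]))  # 67.0
```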
QCE: A Simulator for Quantum Computer Hardware
NASA Astrophysics Data System (ADS)
Michielsen, Kristel; de Raedt, Hans
2003-09-01
The Quantum Computer Emulator (QCE) described in this paper consists of a simulator of a generic, general purpose quantum computer and a graphical user interface. The latter is used to control the simulator, to define the hardware of the quantum computer and to debug and execute quantum algorithms. QCE runs in a Windows 98/NT/2000/ME/XP environment. It can be used to validate designs of physically realizable quantum processors and as an interactive educational tool to learn about quantum computers and quantum algorithms. A detailed exposition is given of the implementation of the CNOT and the Toffoli gate, the quantum Fourier transform, Grover's database search algorithm, an order finding algorithm, Shor's algorithm, a three-input adder and a number partitioning algorithm. We also review the results of simulations of an NMR-like quantum computer.
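As a minimal illustration of what a state-vector emulator like QCE does internally, the sketch below applies a Hadamard and a CNOT to a two-qubit register stored as four complex amplitudes, producing a Bell state; this is a generic textbook construction, not QCE's actual implementation.

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],      # basis order |00>, |01>, |10>, |11>
                 [0, 1, 0, 0],      # control = first qubit,
                 [0, 0, 0, 1],      # target  = second qubit
                 [0, 0, 1, 0]], dtype=complex)

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

psi = np.zeros(4, dtype=complex)
psi[0] = 1.0                                   # start in |00>
psi = np.kron(H, np.eye(2)) @ psi              # Hadamard on the control qubit
psi = CNOT @ psi                               # entangle the two qubits
print(np.round(psi, 3))                        # (|00> + |11>)/sqrt(2)
```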
The Leap from Patterns to Formulas
ERIC Educational Resources Information Center
Beigie, Darin
2011-01-01
Initial exposure to algebraic thinking involves the critical leap from working with numbers to thinking with variables. The transition to thinking mathematically using variables has many layers, and for all students an abstraction that is clear in one setting may be opaque in another. Geometric counting and the resulting algebraic patterns provide…
The Impossible Capture: Towards a Leaping Methodology
ERIC Educational Resources Information Center
Zaliwska, Zofia
2016-01-01
I offer Klein's "Leap into the void" as an entrée into exploring the complexities of qualitative research in education. In exposing the ways in which performance photography/documentation performs on the boundaries of representation, Klein helps us to think about representation and dissemination differently. Through this article I will…
34 CFR 692.3 - What regulations apply to the LEAP Program?
Code of Federal Regulations, 2010 CFR
2010-07-01
What regulations apply to the LEAP Program? Section 692.3, Regulations of the Offices of the Department of Education (Continued), Office of Postsecondary Education, Department of Education, Leveraging Educational Assistance Partnership Program...
ERIC Educational Resources Information Center
Phillips, Loraine; Roach, David; Williamson, Celia
2014-01-01
In Texas, educators working to coordinate the efforts of fifty community colleges, thirty-eight universities, and six university systems are bringing the resources of the Association of American Colleges and Universities (AAC&U) Liberal Education and America's Promise (LEAP) initiative to bear in order to ensure that the state's nearly 1.5…
Project LEAP: Learning English-for-Academic-Purposes. Training Manual Year Three.
ERIC Educational Resources Information Center
Snow, Marguerite Ann, Ed.
Project LEAP (Learning English-for-Academic-Purposes) is a three-year faculty development and supplemental instruction partnership to improve the academic literacy skills of native-born, immigrant, and international language minority students. This manual is the third set of faculty development materials produced by the project, presenting…
Constellation design with geometric and probabilistic shaping
NASA Astrophysics Data System (ADS)
Zhang, Shaoliang; Yaman, Fatih
2018-02-01
A systematic study, including theory, simulation and experiments, is carried out to review the generalized pairwise optimization algorithm for designing optimized constellations. To verify its effectiveness, the algorithm is applied to three test cases: two-dimensional 8 quadrature amplitude modulation (QAM), four-dimensional set-partitioning QAM, and probabilistically shaped (PS) 32QAM. The results suggest that geometric shaping can work together with PS to further bridge the gap toward the Shannon limit.
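For intuition about the probabilistic-shaping side, here is a hedged sketch: impose a Maxwell-Boltzmann distribution P(x) proportional to exp(-lambda*|x|^2) on constellation points and sweep lambda to trade entropy (information rate) against average symbol energy. A square 16QAM grid stands in for the paper's 32QAM, and the lambda values are illustrative.

```python
import numpy as np

def mb_distribution(points, lam):
    """Maxwell-Boltzmann probabilities over constellation points."""
    p = np.exp(-lam * np.abs(points) ** 2)
    return p / p.sum()

def entropy_bits(p):
    return float(-(p * np.log2(p)).sum())

levels = np.array([-3, -1, 1, 3], dtype=float)
const = (levels[:, None] + 1j * levels[None, :]).ravel()   # square 16QAM

for lam in (0.0, 0.05, 0.15):
    p = mb_distribution(const, lam)
    print(f"lambda={lam:.2f}  entropy={entropy_bits(p):.3f} bits/symbol")
# entropy drops below 4 bits as shaping concentrates mass on low-energy points
```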