Sample records for core algorithm cover

  1. Optimization of Selected Remote Sensing Algorithms for Embedded NVIDIA Kepler GPU Architecture

    NASA Technical Reports Server (NTRS)

    Riha, Lubomir; Le Moigne, Jacqueline; El-Ghazawi, Tarek

    2015-01-01

This paper evaluates the potential of the embedded Graphics Processing Unit in the NVIDIA Tegra K1 for onboard processing. The performance is compared to a general-purpose multi-core CPU and a full-fledged GPU accelerator. This study uses two algorithms: Wavelet Spectral Dimension Reduction of Hyperspectral Imagery and the Automated Cloud-Cover Assessment (ACCA) Algorithm. The Tegra K1 achieved 51% of the performance of a high-end 8-core Intel Xeon server CPU for the ACCA algorithm and 20% for the dimension reduction algorithm, while the Xeon consumes 13.5 times more power.

  2. Crisis management during anaesthesia: the development of an anaesthetic crisis management manual

    PubMed Central

    Runciman, W; Kluger, M; Morris, R; Paix, A; Watterson, L; Webb, R

    2005-01-01

Background: All anaesthetists have to handle life-threatening crises with little or no warning. However, some cognitive strategies and work practices that are appropriate for speed and efficiency under normal circumstances may become maladaptive in a crisis. It was judged in a previous study that the use of a structured "core" algorithm (based on the mnemonic COVER ABCD–A SWIFT CHECK) would diagnose and correct the problem in 60% of cases and provide a functional diagnosis in virtually all of the remaining 40%. It was recommended that specific sub-algorithms be developed for managing the problems underlying the remaining 40% of crises and assembled in an easy-to-use manual. Sub-algorithms were therefore developed for these problems so that they could be checked for applicability and validity against the first 4000 anaesthesia incidents reported to the Australian Incident Monitoring Study (AIMS). Methods: The need for 24 specific sub-algorithms was identified. Teams of practising anaesthetists were assembled and sets of incidents relevant to each sub-algorithm were identified from the first 4000 reported to AIMS. Based largely on successful strategies identified in these reports, a set of 24 specific sub-algorithms was developed for trial against the 4000 AIMS reports and assembled into an easy-to-use manual. A process was developed for applying each component of the core algorithm COVER at one of four levels (scan-check-alert/ready-emergency) according to the degree of perceived urgency, and incorporated into the manual. The manual was disseminated at a World Congress and feedback was obtained. Results: Each of the 24 specific crisis management sub-algorithms was tested against the relevant incidents among the first 4000 reported to AIMS and compared with the actual management by the anaesthetist at the time. It was judged that, if the core algorithm had been correctly applied, the appropriate sub-algorithm would have resolved the problem better and/or faster in one in eight of all incidents, and would have been unlikely to cause harm to any patient. The descriptions of the validation of each of the 24 sub-algorithms constitute the remaining 24 papers in this set. Feedback from five meetings each attended by 60–100 anaesthetists was then collated and is included. Conclusion: The 24 sub-algorithms developed form the basis for developing a rational evidence-based approach to crisis management during anaesthesia. The COVER component has been found to be satisfactory in real-life resuscitation situations and the sub-algorithms have been used successfully for several years. It would now be desirable for carefully designed simulator-based studies, using naive trainees at the start of their training, to systematically examine the merits and demerits of various aspects of the sub-algorithms. It would seem prudent that these sub-algorithms be regarded, for the moment, as decision aids to support and back up clinicians' natural responses to a crisis when all is not progressing as expected. PMID:15933282

  3. Crisis management during anaesthesia: the development of an anaesthetic crisis management manual.

    PubMed

    Runciman, W B; Kluger, M T; Morris, R W; Paix, A D; Watterson, L M; Webb, R K

    2005-06-01

All anaesthetists have to handle life-threatening crises with little or no warning. However, some cognitive strategies and work practices that are appropriate for speed and efficiency under normal circumstances may become maladaptive in a crisis. It was judged in a previous study that the use of a structured "core" algorithm (based on the mnemonic COVER ABCD-A SWIFT CHECK) would diagnose and correct the problem in 60% of cases and provide a functional diagnosis in virtually all of the remaining 40%. It was recommended that specific sub-algorithms be developed for managing the problems underlying the remaining 40% of crises and assembled in an easy-to-use manual. Sub-algorithms were therefore developed for these problems so that they could be checked for applicability and validity against the first 4000 anaesthesia incidents reported to the Australian Incident Monitoring Study (AIMS). The need for 24 specific sub-algorithms was identified. Teams of practising anaesthetists were assembled and sets of incidents relevant to each sub-algorithm were identified from the first 4000 reported to AIMS. Based largely on successful strategies identified in these reports, a set of 24 specific sub-algorithms was developed for trial against the 4000 AIMS reports and assembled into an easy-to-use manual. A process was developed for applying each component of the core algorithm COVER at one of four levels (scan-check-alert/ready-emergency) according to the degree of perceived urgency, and incorporated into the manual. The manual was disseminated at a World Congress and feedback was obtained. Each of the 24 specific crisis management sub-algorithms was tested against the relevant incidents among the first 4000 reported to AIMS and compared with the actual management by the anaesthetist at the time. It was judged that, if the core algorithm had been correctly applied, the appropriate sub-algorithm would have resolved the problem better and/or faster in one in eight of all incidents, and would have been unlikely to cause harm to any patient. The descriptions of the validation of each of the 24 sub-algorithms constitute the remaining 24 papers in this set. Feedback from five meetings each attended by 60-100 anaesthetists was then collated and is included. The 24 sub-algorithms developed form the basis for developing a rational evidence-based approach to crisis management during anaesthesia. The COVER component has been found to be satisfactory in real-life resuscitation situations and the sub-algorithms have been used successfully for several years. It would now be desirable for carefully designed simulator-based studies, using naive trainees at the start of their training, to systematically examine the merits and demerits of various aspects of the sub-algorithms. It would seem prudent that these sub-algorithms be regarded, for the moment, as decision aids to support and back up clinicians' natural responses to a crisis when all is not progressing as expected.

  4. A Population of Assessment Tasks

    ERIC Educational Resources Information Center

    Daro, Phil; Burkhardt, Hugh

    2012-01-01

    We propose the development of a "population" of high-quality assessment tasks that cover the performance goals set out in the "Common Core State Standards for Mathematics." The population will be published. Tests are drawn from this population as a structured random sample guided by a "balancing algorithm."

  5. Crisis management during anaesthesia: hypotension.

    PubMed

    Morris, R W; Watterson, L M; Westhorpe, R N; Webb, R K

    2005-06-01

Hypotension is commonly encountered in association with anaesthesia and surgery. Uncorrected and sustained, it puts the brain, heart, kidneys, and the fetus in pregnancy at risk of permanent or even fatal damage. Its recognition and correction are time-critical, especially in patients with pre-existing disease that compromises organ perfusion. The aim was to examine the role of a previously described core algorithm "COVER ABCD-A SWIFT CHECK", supplemented by a specific sub-algorithm for hypotension, in the management of hypotension when it occurs in association with anaesthesia. Reports of hypotension during anaesthesia were extracted and studied from the first 4000 incidents reported to the Australian Incident Monitoring Study (AIMS). The potential performance of the COVER ABCD algorithm and the sub-algorithm for hypotension was compared with the actual management as reported by the anaesthetist involved. There were 438 reports that mentioned hypotension, cardiovascular collapse, or cardiac arrest. In 17% of reports more than one cause was attributed and 550 causative events were identified overall. The most common causes identified were drugs (26%), regional anaesthesia (14%), and hypovolaemia (9%). Concomitant changes were reported in heart rate or rhythm in 39% and in oxygen saturation or ventilation in 21% of reports. Cardiac arrest was documented in 25% of reports. As hypotension was frequently associated with abnormalities of other vital signs, it could not always be adequately addressed by a single algorithm. The sub-algorithm for hypotension is adequate when hypotension occurs in association with sinus tachycardia. However, when it occurs in association with bradycardia, non-sinus tachycardia, desaturation or signs of anaphylaxis or other problems, the sub-algorithm for hypotension recommends cross-referencing to other relevant sub-algorithms. It was considered that, correctly applied, the core algorithm COVER ABCD would have diagnosed 18% of cases and led to resolution in two thirds of these. It was further estimated that completion of this, followed by the specific sub-algorithm for hypotension, would have led to earlier recognition of the problem and/or better management in 6% of cases compared with the actual management reported. Pattern recognition enables anaesthetists to determine the cause and manage hypotension in most cases. However, an algorithm-based approach is likely to improve the management of a small proportion of atypical but potentially life-threatening cases. While an algorithm-based approach will facilitate crisis management, the frequency of co-existing abnormalities in other vital signs means that not all cases of hypotension can be dealt with using a single algorithm. Diagnosis, in particular, may potentially be assisted by cross-referencing to the specific sub-algorithms for these abnormalities.

  6. Statistical mechanics of the vertex-cover problem

    NASA Astrophysics Data System (ADS)

    Hartmann, Alexander K.; Weigt, Martin

    2003-10-01

    We review recent progress in the study of the vertex-cover problem (VC). The VC belongs to the class of NP-complete graph theoretical problems, which plays a central role in theoretical computer science. On ensembles of random graphs, VC exhibits a coverable-uncoverable phase transition. Very close to this transition, depending on the solution algorithm, easy-hard transitions in the typical running time of the algorithms occur. We explain a statistical mechanics approach, which works by mapping the VC to a hard-core lattice gas, and then applying techniques such as the replica trick or the cavity approach. Using these methods, the phase diagram of the VC could be obtained exactly for connectivities c < e, where the VC is replica symmetric. Recently, this result could be confirmed using traditional mathematical techniques. For c > e, the solution of the VC exhibits full replica symmetry breaking. The statistical mechanics approach can also be used to study analytically the typical running time of simple complete and incomplete algorithms for the VC. Finally, we describe recent results for the VC when studied on other ensembles of finite- and infinite-dimensional graphs.
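
    As a concrete illustration of the kind of simple algorithm whose typical running time such studies analyze, the sketch below implements the leaf-removal heuristic for vertex cover in Python. This is an illustrative sketch only, not code from the paper; the Erdős–Rényi test graph and the degree-based fallback for the leafless residual core are arbitrary choices.

```python
import networkx as nx

def leaf_removal_vertex_cover(G):
    """Leaf-removal heuristic for minimum vertex cover.

    Repeatedly take a degree-1 vertex (a leaf), put its neighbour into the
    cover and delete both; if no leaf is left, fall back to covering a
    highest-degree vertex of the remaining 'core'.
    """
    H = G.copy()
    cover = set()
    H.remove_nodes_from([v for v in list(H) if H.degree(v) == 0])
    while H.number_of_edges() > 0:
        leaves = [v for v in H if H.degree(v) == 1]
        if leaves:
            leaf = leaves[0]
            neighbour = next(iter(H[leaf]))
            cover.add(neighbour)
            H.remove_nodes_from([leaf, neighbour])
        else:
            # residual core with no leaves: simple greedy degree heuristic
            v = max(H, key=H.degree)
            cover.add(v)
            H.remove_node(v)
        H.remove_nodes_from([v for v in list(H) if H.degree(v) == 0])
    return cover

# Erdos-Renyi graph with mean degree c = 2 < e.
G = nx.gnp_random_graph(1000, 2.0 / 999, seed=1)
cover = leaf_removal_vertex_cover(G)
assert all(u in cover or v in cover for u, v in G.edges())
print(len(cover))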
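```

    For mean degree c < e, leaf removal alone almost always exhausts the graph without invoking the fallback, which is one way of seeing the replica-symmetric regime mentioned above; for c > e an extensive leafless core remains.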

  7. Deriving Continuous Fields of Tree Cover at 1-m over the Continental United States From the National Agriculture Imagery Program (NAIP) Imagery to Reduce Uncertainties in Forest Carbon Stock Estimation

    NASA Astrophysics Data System (ADS)

    Ganguly, S.; Basu, S.; Mukhopadhyay, S.; Michaelis, A.; Milesi, C.; Votava, P.; Nemani, R. R.

    2013-12-01

An unresolved issue with coarse-to-medium resolution satellite-based forest carbon mapping over regional to continental scales is the high level of uncertainty in above ground biomass (AGB) estimates caused by the absence of forest cover information at a high enough spatial resolution (current spatial resolution is limited to 30-m). To put confidence in existing satellite-derived AGB density estimates, it is imperative to create continuous fields of tree cover at a sufficiently high resolution (e.g. 1-m) such that large uncertainties in forested area are reduced. The proposed work will provide a means to reduce uncertainty in present satellite-derived AGB maps and Forest Inventory and Analysis (FIA) based regional estimates. Our primary objective will be to create Very High Resolution (VHR) estimates of tree cover at a spatial resolution of 1-m for the Continental United States using all available National Agriculture Imagery Program (NAIP) color-infrared imagery from 2010 to 2012. We will leverage the existing capabilities of the NASA Earth Exchange (NEX) high performance computing and storage facilities. The proposed 1-m tree cover map can be further aggregated to provide percent tree cover at any medium-to-coarse resolution spatial grid, which will aid in reducing uncertainties in AGB density estimation at the respective grid and overcome current limitations imposed by medium-to-coarse resolution land cover maps. We have implemented a scalable and computationally efficient parallelized framework for tree-cover delineation; the core components of the algorithm include a feature extraction process, a Statistical Region Merging image segmentation algorithm, and a classification step based on a Deep Belief Network and a feedforward backpropagation neural network. An initial pilot exercise has been performed over the state of California (~11,000 scenes) to create a wall-to-wall 1-m tree cover map, and the classification accuracy has been assessed. Results show an improvement in the accuracy of tree-cover delineation as compared to existing forest cover maps from NLCD, especially over fragmented, heterogeneous and urban landscapes. Estimates of VHR tree cover will complement and enhance the accuracy of present remote-sensing based AGB modeling approaches and forest inventory based estimates at both national and local scales. A requisite step will be to characterize the inherent uncertainties in tree cover estimates and propagate them to the AGB estimates.

  8. The Snow Data System at NASA JPL

    NASA Astrophysics Data System (ADS)

    Laidlaw, R.; Painter, T. H.; Mattmann, C. A.; Ramirez, P.; Bormann, K.; Brodzik, M. J.; Burgess, A. B.; Rittger, K.; Goodale, C. E.; Joyce, M.; McGibbney, L. J.; Zimdars, P.

    2014-12-01

NASA JPL's Snow Data System has a data-processing pipeline powered by Apache OODT, an open source software tool. The pipeline has been running for several years and has successfully generated a significant amount of cryosphere data, including MODIS-based products such as MODSCAG, MODDRFS and MODICE, with historical and near-real-time windows, covering regions such as the Arctic, Western US, Alaska, Central Europe, Asia, South America, Australia and New Zealand. The team continues to improve the pipeline, using monitoring tools such as Ganglia to give an overview of operations, and improving fault-tolerance with automated recovery scripts. Several alternative adaptations of the Snow Covered Area and Grain size (SCAG) algorithm are being investigated. These include using VIIRS and Landsat TM/ETM+ satellite data as inputs. Parallel computing techniques are being considered for core SCAG processing, such as using the PyCUDA Python API to utilize multi-core GPU architectures. An experimental version of MODSCAG is also being developed for the Google Earth Engine platform, a cloud-based service.

  9. Multicore and GPU algorithms for Nussinov RNA folding

    PubMed Central

    2014-01-01

Background One segment of an RNA sequence might be paired with another segment of the same RNA sequence due to the force of hydrogen bonds. This two-dimensional structure is called the RNA sequence's secondary structure. Several algorithms have been proposed to predict an RNA sequence's secondary structure. These algorithms are referred to as RNA folding algorithms. Results We develop cache-efficient, multicore, and GPU algorithms for RNA folding using Nussinov's algorithm. Conclusions Our cache-efficient algorithm provides a speedup between 1.6 and 3.0 relative to a naive straightforward single-core code. The multicore version of the cache-efficient single-core algorithm provides a speedup, relative to the naive single-core algorithm, between 7.5 and 14.0 on a 6-core hyperthreaded CPU. Our GPU algorithm for the NVIDIA C2050 is up to 1582 times as fast as the naive single-core algorithm and between 5.1 and 11.2 times as fast as the fastest previously known GPU algorithm for Nussinov RNA folding. PMID:25082539
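
    For reference, the recurrence that all of these implementations accelerate is small enough to state in full. The following plain single-core Python sketch of Nussinov's O(n³) dynamic program is illustrative only (standard Watson-Crick plus G-U pairs, and a minimum hairpin-loop length), not the paper's optimized code.

```python
def nussinov(seq, min_loop=3):
    """Nussinov maximum base-pairing dynamic program (O(n^3) time).

    N[i][j] = maximum number of non-crossing base pairs in seq[i..j];
    min_loop enforces a minimum loop length between paired bases.
    """
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):          # span = j - i
        for i in range(n - span):
            j = i + span
            best = max(N[i + 1][j], N[i][j - 1])        # i or j unpaired
            if (seq[i], seq[j]) in pairs:
                best = max(best, N[i + 1][j - 1] + 1)   # i pairs with j
            for k in range(i + 1, j):                   # bifurcation
                best = max(best, N[i][k] + N[k + 1][j])
            N[i][j] = best
    return N[0][n - 1]

print(nussinov("GGGAAAUCC"))   # small example sequence
```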

  10. Nuclear reactor

    DOEpatents

    Wade, Elman E.

    1979-01-01

    A nuclear reactor including two rotatable plugs and a positive top core holddown structure. The top core holddown structure is divided into two parts: a small core cover, and a large core cover. The small core cover, and the upper internals associated therewith, are attached to the small rotating plug, and the large core cover, with its associated upper internals, is attached to the large rotating plug. By so splitting the core holddown structures, under-the-plug refueling is accomplished without the necessity of enlarging the reactor pressure vessel to provide a storage space for the core holddown structure during refueling. Additionally, the small and large rotating plugs, and their associated core covers, are arranged such that the separation of the two core covers to permit rotation is accomplished without the installation of complex lifting mechanisms.

  11. A distributed-memory approximation algorithm for maximum weight perfect bipartite matching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azad, Ariful; Buluc, Aydin; Li, Xiaoye S.

We design and implement an efficient parallel approximation algorithm for the problem of maximum weight perfect matching in bipartite graphs, i.e. the problem of finding a set of non-adjacent edges that covers all vertices and has maximum weight. This problem differs from the maximum weight matching problem, for which scalable approximation algorithms are known. It is primarily motivated by finding good pivots in scalable sparse direct solvers before factorization, where sequential implementations of maximum weight perfect matching algorithms, such as those available in MC64, are widely used due to the lack of scalable alternatives. To overcome this limitation, we propose a fully parallel distributed-memory algorithm that first generates a perfect matching and then searches for weight-augmenting cycles of length four in parallel and iteratively augments the matching with a vertex-disjoint set of such cycles. For most practical problems the weights of the perfect matchings generated by our algorithm are very close to the optimum. An efficient implementation of the algorithm scales up to 256 nodes (17,408 cores) on a Cray XC40 supercomputer and can solve instances that are too large to be handled by a single node using the sequential algorithm.
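
    For context, the exact sequential problem that such parallel approximation algorithms target can be solved on a dense weight matrix with SciPy's assignment solver. The snippet below is an illustrative baseline with random weights, not the distributed algorithm described above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Dense weight matrix for a small bipartite graph: rows = left vertices,
# columns = right vertices.  linear_sum_assignment solves the assignment
# problem exactly, i.e. a maximum-weight perfect matching on this matrix,
# which is the kind of sequential baseline the parallel approximation is
# compared against.
rng = np.random.default_rng(0)
W = rng.random((6, 6))

rows, cols = linear_sum_assignment(W, maximize=True)
matching = list(zip(rows.tolist(), cols.tolist()))
print("matching:", matching)
print("total weight:", W[rows, cols].sum())
```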

  12. Multi-Temporal Multi-Sensor Analysis of Urbanization and Environmental/Climate Impact in China for Sustainable Urban Development

    NASA Astrophysics Data System (ADS)

    Ban, Yifang; Gong, Peng; Gamba, Paolo; Taubenbock, Hannes; Du, Peijun

    2016-08-01

The overall objective of this research is to investigate multi-temporal, multi-scale, multi-sensor satellite data for analysis of urbanization and environmental/climate impact in China to support sustainable planning. Multi-temporal, multi-scale SAR and optical data have been evaluated for urban information extraction using innovative methods and algorithms, including the KTH-Pavia Urban Extractor, Pavia UEXT, an "exclusion-inclusion" framework for urban extent extraction, and KTH-SEG, a novel object-based classification method for detailed urban land cover mapping. Various pixel-based and object-based change detection algorithms were also developed to extract urban changes. Several Chinese cities, including Beijing, Shanghai and Guangzhou, are selected as study areas. Spatio-temporal urbanization patterns and environmental impact at the regional, metropolitan and city-core scales were evaluated through ecosystem services, landscape metrics, spatial indices, and/or their combinations. The relationship between land surface temperature and land-cover classes was also analyzed. The urban extraction results showed that urban areas and small towns could be well extracted from multitemporal SAR data with the KTH-Pavia Urban Extractor and UEXT. The fusion of SAR data at multiple scales from multiple sensors was proven to improve urban extraction. For urban land cover mapping, the results show that the fusion of multitemporal SAR and optical data could produce detailed land cover maps with higher accuracy than SAR or optical data alone. The pixel-based and object-based change detection algorithms developed within the project were effective in extracting urban changes. Comparing the urban land cover results from multitemporal multisensor data, the environmental impact analysis indicates major losses for food supply, noise reduction, runoff mitigation, waste treatment and global climate regulation services through landscape structural changes, in terms of decreases in service area, edge contamination and fragmentation. In terms of climate impact, the results indicate that land surface temperature can be related to land use/land cover classes.

  13. Theoretical studies of massive stars. I - Evolution of a 15-solar-mass star from the zero-age main sequence to neon ignition

    NASA Technical Reports Server (NTRS)

    Endal, A. S.

    1975-01-01

    The evolution of a star with mass 15 times that of the sun from the zero-age main sequence to neon ignition has been computed by the Henyey method. The hydrogen-rich envelope and all shell sources were explicitly included in the models. An algorithm has been developed for approximating the results of carbon burning, including the branching ratio for the C-12 + C-12 reaction and taking some secondary reactions into account. Penetration of the convective envelope into the core is found to be unimportant during the stages covered by the models. Energy transfer from the carbon-burning shell to the core by degenerate electron conduction becomes important after the core carbon-burning stage. Neon ignition will occur in a semidegenerate core and will lead to a mild 'flash.' Detailed numerical results are given in an appendix. Continuation of the calculations into later stages and variations with the total mass of the star will be discussed in later papers.

  14. [Delimitation of urban growth boundary based on ecological suitability and risk control: A case of Taibai Lake New District in Jining City, Shandong, China.

    PubMed

    Liu, Yan Xu; Peng, Jian; Sun, Mao Long; Yang, Yang

    2016-08-01

Urban growth boundary, with full consideration of regional ecological constraints, can effectively control unordered urban sprawl. Thus, the urban growth boundary is a significant planning concept integrating regional ecological protection and urban construction. Finding the preferential position for urban construction, as well as controlling the ecological risk, has always been the core content of urban growth boundary delimitation. This study selected Taibai Lake New District in Jining City as a case area, and analyzed the scenario of ecological suitability by an ordered weighted averaging algorithm. Surface temperature retrieval and rain flooding simulation were used to identify the spatial ecological risk. In the result of ecological suitability, the suitable construction zone accounted for 25.3% of the total area, the unsuitable construction zone accounted for 20.4%, and the other area was in the limited construction zone. Excluding the ecological risk control region, the flexible urban growth boundary covered 2975 hm² in the near term and 6754 hm² in the long term. The final inflexible urban growth boundary covered 9405 hm². As a new method, the scenario algorithms of ordered weighted averaging and ecological risk modeling could provide effective support in urban growth boundary identification.
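
    The ordered weighted averaging (OWA) operator at the heart of the suitability scenarios is simple to state. The sketch below is a minimal illustration with made-up criterion scores and weights, not the study's actual criteria or weighting scheme.

```python
import numpy as np

def ordered_weighted_average(values, weights):
    """Ordered weighted averaging (OWA): sort the criterion scores in
    descending order and take a weighted sum with position weights.
    Shifting weight towards the best (or worst) scores moves the scenario
    between 'optimistic' and 'pessimistic'."""
    values = np.sort(np.asarray(values, dtype=float))[::-1]
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0)
    return float(np.dot(values, weights))

# Hypothetical suitability scores of one land cell for four criteria
# (e.g. slope, distance to water, land cover, flood risk) - illustrative only.
scores = [0.9, 0.4, 0.7, 0.2]
print(ordered_weighted_average(scores, [0.7, 0.2, 0.1, 0.0]))  # optimistic scenario
print(ordered_weighted_average(scores, [0.0, 0.1, 0.2, 0.7]))  # pessimistic scenario
```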

  15. A highly efficient multi-core algorithm for clustering extremely large datasets

    PubMed Central

    2010-01-01

Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities in current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared-memory parallel algorithms are shown to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. The computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
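
    The paper's implementation is Java-based and built on transactional-memory design principles. The Python sketch below only illustrates the general idea of parallelizing the expensive assignment step of k-means across workers; the chunking strategy, worker count and toy data are arbitrary illustrative choices.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def assign_chunk(args):
    """Assign each point in one data chunk to its nearest centroid."""
    chunk, centroids = args
    d = ((chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def parallel_kmeans(X, k, n_workers=4, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    chunks = np.array_split(X, n_workers)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        for _ in range(n_iter):
            # E-step: nearest-centroid assignment, split across worker processes
            labels = np.concatenate(
                list(pool.map(assign_chunk, [(c, centroids) for c in chunks]))
            )
            # M-step: recompute centroids serially (cheap compared to the E-step)
            for j in range(k):
                if np.any(labels == j):
                    centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(10000, 5))
    labels, centroids = parallel_kmeans(X, k=8)
    print(np.bincount(labels))
```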

  16. Different Scalable Implementations of Collision and Streaming for Optimal Computational Performance of Lattice Boltzmann Simulations

    NASA Astrophysics Data System (ADS)

    Geneva, Nicholas; Wang, Lian-Ping

    2015-11-01

In the past 25 years, the mesoscopic lattice Boltzmann method (LBM) has become an increasingly popular approach to simulate incompressible flows, including turbulent flows. While LBM solves for more solution variables than the conventional CFD approach based on the macroscopic Navier-Stokes equation, it also offers opportunities for more efficient parallelization. In this talk we will describe several different algorithms that have been developed over the past 10-plus years, which can be used to implement the two core steps of LBM, collision and streaming, more effectively than standard approaches. The application of these algorithms spans LBM simulations ranging from basic channel flows to particle-laden flows. We will cover everything from the essential details of implementing each algorithm for simple 2D flows to the challenges one faces when using a given algorithm for more complex simulations. The key is to explore the best use of data structures and cache memory. Two basic data structures will be discussed and the importance of effective data storage to maximize a CPU's cache utilization will be addressed. The performance of a 3D turbulent channel flow simulation using these different algorithms and data structures will be compared, along with important hardware-related issues.
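
    As a minimal illustration of the two core steps, the NumPy sketch below performs the D2Q9 streaming step on a structure-of-arrays layout with periodic boundaries; collision is local to each node and is typically fused with streaming in cache-optimized variants. This is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

# D2Q9 lattice velocities (x and y components of the 9 discrete directions).
CX = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])
CY = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])

def stream_soa(f):
    """Streaming step with a structure-of-arrays layout f[q, y, x]:
    each population q is shifted one lattice node along its velocity
    (periodic boundaries).  Collision is purely local per node, so a
    cache-friendly implementation typically fuses it with this loop."""
    for q in range(9):
        f[q] = np.roll(f[q], shift=(int(CY[q]), int(CX[q])), axis=(0, 1))
    return f

ny, nx = 64, 128
f = np.ones((9, ny, nx)) / 9.0          # uniform initial populations
f[1, ny // 2, nx // 2] += 0.1           # perturb the +x population at one node
f = stream_soa(f)
print(f[1, ny // 2, nx // 2 + 1])       # the perturbation has moved one node in +x
```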

  17. A novel complex networks clustering algorithm based on the core influence of nodes.

    PubMed

    Tong, Chao; Niu, Jianwei; Dai, Bin; Xie, Zhongyu

    2014-01-01

In complex networks, cluster structure, identified by the heterogeneity of nodes, has become a common and important topological property. Network clustering methods are thus significant for the study of complex networks. Currently, many typical clustering algorithms have weaknesses such as inaccuracy and slow convergence. In this paper, we propose a clustering algorithm based on calculating the core influence of nodes. The clustering process is a simulation of the process of cluster formation in sociology. The algorithm detects the nodes with core influence through their betweenness centrality, and builds the cluster's core structure by discriminant functions. Next, the algorithm obtains the final cluster structure by clustering the rest of the nodes in the network with an optimization method. Experiments on different datasets show that the clustering accuracy of this algorithm is superior to that of the classical Fast-Newman clustering algorithm. It also clusters faster and helps to reveal the real cluster structure of complex networks precisely.
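
    A toy version of the core-influence idea, selecting cores by betweenness centrality and attaching the remaining nodes to the nearest core, can be written in a few lines with networkx. This illustrates the concept only; it is not the paper's discriminant-function algorithm.

```python
import networkx as nx

def core_influence_clusters(G, n_cores=2):
    """Toy variant of core-influence clustering: pick the n_cores nodes with
    the highest betweenness centrality as cluster cores, then attach every
    other node to the core it is closest to (shortest-path distance)."""
    bc = nx.betweenness_centrality(G)
    cores = sorted(bc, key=bc.get, reverse=True)[:n_cores]
    dist = {c: nx.single_source_shortest_path_length(G, c) for c in cores}
    clusters = {c: {c} for c in cores}
    for v in G:
        if v in cores:
            continue
        nearest = min(cores, key=lambda c: dist[c].get(v, float("inf")))
        clusters[nearest].add(v)
    return clusters

G = nx.karate_club_graph()               # classic two-community test network
clusters = core_influence_clusters(G, n_cores=2)
for core, members in clusters.items():
    print(core, sorted(members))
```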

  18. Quality Evaluation of Land-Cover Classification Using Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Dang, Y.; Zhang, J.; Zhao, Y.; Luo, F.; Ma, W.; Yu, F.

    2018-04-01

Land-cover classification is one of the most important products of earth observation. It focuses mainly on profiling the physical character of the land surface with temporal and distribution attributes, and contains information on both natural and man-made coverage elements, such as vegetation, soil, glaciers, rivers, lakes, marsh wetlands and various man-made structures. In recent years, the amount of high-resolution remote sensing data has increased sharply. Accordingly, the volume of land-cover classification products increases, and so does the need to evaluate such frequently updated products, which is a big challenge. Conventionally, the automatic quality evaluation of land-cover classification is made through pixel-based classifying algorithms, which is a much trickier task and consequently hard to keep pace with the required updating frequency. In this paper, we propose a novel quality evaluation approach for land-cover classification based on a scene classification method using a Convolutional Neural Network (CNN) model. By learning from remote sensing data, the randomly generated kernels that serve as filter matrices evolve into operators with functions similar to hand-crafted operators, such as the Sobel or Canny operator, while other kernels learned by the CNN model are much more complex and cannot be understood as existing filters. The method, with the CNN approach as its core algorithm, serves quality-evaluation tasks well since it calculates a set of outputs that directly represent the image's membership grade to certain classes. An automatic quality evaluation approach for land-cover DLG-DOM coupling data (DLG for Digital Line Graphic, DOM for Digital Orthophoto Map) is introduced in this paper. The robustness of the CNN model for image evaluation motivated the idea of an automatic quality evaluation approach for land-cover classification. Based on this experiment, new ideas for quality evaluation of DLG-DOM coupling land-cover classification or other kinds of labelled remote sensing data can be further studied.

  19. Core-periphery structure requires something else in the network

    NASA Astrophysics Data System (ADS)

    Kojaku, Sadamori; Masuda, Naoki

    2018-04-01

    A network with core-periphery structure consists of core nodes that are densely interconnected. In contrast to a community structure, which is a different meso-scale structure of networks, core nodes can be connected to peripheral nodes and peripheral nodes are not densely interconnected. Although core-periphery structure sounds reasonable, we argue that it is merely accounted for by heterogeneous degree distributions, if one partitions a network into a single core block and a single periphery block, which the famous Borgatti–Everett algorithm and many succeeding algorithms assume. In other words, there is a strong tendency that high-degree and low-degree nodes are judged to be core and peripheral nodes, respectively. To discuss core-periphery structure beyond the expectation of the node’s degree (as described by the configuration model), we propose that one needs to assume at least one block of nodes apart from the focal core-periphery structure, such as a different core-periphery pair, community or nodes not belonging to any meso-scale structure. We propose a scalable algorithm to detect pairs of core and periphery in networks, controlling for the effect of the node’s degree. We illustrate our algorithm using various empirical networks.

  20. Incremental k-core decomposition: Algorithms and evaluation

    DOE PAGES

Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-Silva, Gabriela; ...

    2016-02-01

A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. Furthermore, for a graph of 16 million vertices, we observe relative throughputs reaching a million times that of the non-incremental algorithms.
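
    For reference, the static decomposition that the incremental algorithms avoid recomputing is the classic peeling procedure sketched below (a naive minimum search is used for brevity; bucket-based implementations run in linear time). This is an illustrative sketch, not the paper's code.

```python
def core_numbers(adj):
    """Static k-core decomposition by repeated peeling: remove the vertex of
    minimum remaining degree; its core number is the largest minimum degree
    seen so far.  The incremental algorithms in the paper avoid rerunning
    this whole procedure after each edge insertion or deletion."""
    degree = {v: len(ns) for v, ns in adj.items()}
    removed = set()
    core = {}
    k = 0
    while len(removed) < len(adj):
        v = min((u for u in adj if u not in removed), key=degree.get)
        k = max(k, degree[v])
        core[v] = k
        removed.add(v)
        for w in adj[v]:
            if w not in removed:
                degree[w] -= 1
    return core

# Small example: a triangle (2-core) with a pendant vertex (1-core).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(core_numbers(adj))   # {3: 1, 0: 2, 1: 2, 2: 2}
```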

  1. Moats and Drawbridges: An Isolation Primitive for Reconfigurable Hardware Based Systems

    DTIC Science & Technology

    2007-05-01

[Garbled extract from the report:] "…these systems, and after being run through an optimizing CAD tool the resulting circuit is a single entangled mess of gates and wires. To prevent the…" "…translates MATLAB [48] algorithms into HDL, logic synthesis translates this HDL into a netlist, a synthesis tool uses a place-and-route algorithm to…" [The remainder of the snippet is residue from a tool-flow figure showing hard and soft microprocessor cores and the MATLAB/C-code/HDL/netlist/bitstream tool chain.]

  2. Application of the artificial bee colony algorithm for solving the set covering problem.

    PubMed

    Crawford, Broderick; Soto, Ricardo; Cuesta, Rodrigo; Paredes, Fernando

    2014-01-01

    The set covering problem is a formal model for many practical optimization problems. In the set covering problem the goal is to choose a subset of the columns of minimal cost that covers every row. Here, we present a novel application of the artificial bee colony algorithm to solve the non-unicost set covering problem. The artificial bee colony algorithm is a recent swarm metaheuristic technique based on the intelligent foraging behavior of honey bees. Experimental results show that our artificial bee colony algorithm is competitive in terms of solution quality with other recent metaheuristic approaches for the set covering problem.
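
    To make the problem structure concrete, the sketch below shows the classic greedy heuristic for non-unicost set covering on a toy instance. It is a simple baseline for comparison, not the artificial bee colony metaheuristic itself, and the instance is made up.

```python
def greedy_set_cover(rows, columns, costs):
    """Classic greedy heuristic for (non-unicost) set covering: repeatedly
    pick the column with the lowest cost per newly covered row."""
    uncovered = set(rows)
    chosen, total = [], 0.0
    while uncovered:
        best = min(
            (j for j in columns if columns[j] & uncovered),
            key=lambda j: costs[j] / len(columns[j] & uncovered),
        )
        chosen.append(best)
        total += costs[best]
        uncovered -= columns[best]
    return chosen, total

# Toy instance: rows to cover, columns as row subsets, and column costs.
rows = {1, 2, 3, 4, 5}
columns = {"a": {1, 2, 3}, "b": {2, 4}, "c": {3, 4, 5}, "d": {5}}
costs = {"a": 2.0, "b": 1.0, "c": 2.5, "d": 0.5}
print(greedy_set_cover(rows, columns, costs))
```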

  3. Application of the Artificial Bee Colony Algorithm for Solving the Set Covering Problem

    PubMed Central

    Crawford, Broderick; Soto, Ricardo; Cuesta, Rodrigo; Paredes, Fernando

    2014-01-01

    The set covering problem is a formal model for many practical optimization problems. In the set covering problem the goal is to choose a subset of the columns of minimal cost that covers every row. Here, we present a novel application of the artificial bee colony algorithm to solve the non-unicost set covering problem. The artificial bee colony algorithm is a recent swarm metaheuristic technique based on the intelligent foraging behavior of honey bees. Experimental results show that our artificial bee colony algorithm is competitive in terms of solution quality with other recent metaheuristic approaches for the set covering problem. PMID:24883356

  4. Software for Project-Based Learning of Robot Motion Planning

    ERIC Educational Resources Information Center

    Moll, Mark; Bordeaux, Janice; Kavraki, Lydia E.

    2013-01-01

    Motion planning is a core problem in robotics concerned with finding feasible paths for a given robot. Motion planning algorithms perform a search in the high-dimensional continuous space of robot configurations and exemplify many of the core algorithmic concepts of search algorithms and associated data structures. Motion planning algorithms can…

  5. Improvement of Speckle Contrast Image Processing by an Efficient Algorithm.

    PubMed

    Steimers, A; Farnung, W; Kohl-Bareis, M

    2016-01-01

We demonstrate an efficient algorithm for the temporally and spatially based calculation of speckle contrast for the imaging of blood flow by laser speckle contrast analysis (LASCA). It reduces the numerical complexity of the necessary calculations, facilitates multi-core and many-core implementations of the speckle analysis, and decouples temporal or spatial resolution from SNR. The new algorithm was evaluated for both spatially and temporally based analysis of speckle patterns with different image sizes and numbers of recruited pixels, as sequential, multi-core and many-core code.
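
    The quantity being computed is the speckle contrast K = σ/μ over a sliding window. The NumPy/SciPy sketch below shows a spatial version using two box filters, so the per-pixel cost does not grow with the window size; it is an illustrative sketch with a synthetic frame, not the published algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_speckle_contrast(raw, window=7):
    """Spatial speckle contrast K = sigma / mean over a sliding window,
    computed with two box filters (local mean and local mean of squares)."""
    raw = raw.astype(np.float64)
    mean = uniform_filter(raw, size=window)
    mean_sq = uniform_filter(raw * raw, size=window)
    var = np.clip(mean_sq - mean * mean, 0.0, None)   # guard against round-off
    return np.sqrt(var) / (mean + 1e-12)

# Synthetic speckle-like frame: blurring by flow would lower the contrast.
rng = np.random.default_rng(0)
frame = rng.exponential(scale=1.0, size=(256, 256))
K = spatial_speckle_contrast(frame)
print(K.mean())   # close to 1 for fully developed, unblurred speckle
```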

  6. Density-based cluster algorithms for the identification of core sets

    NASA Astrophysics Data System (ADS)

    Lemke, Oliver; Keller, Bettina G.

    2016-10-01

    The core-set approach is a discretization method for Markov state models of complex molecular dynamics. Core sets are disjoint metastable regions in the conformational space, which need to be known prior to the construction of the core-set model. We propose to use density-based cluster algorithms to identify the cores. We compare three different density-based cluster algorithms: the CNN, the DBSCAN, and the Jarvis-Patrick algorithm. While the core-set models based on the CNN and DBSCAN clustering are well-converged, constructing core-set models based on the Jarvis-Patrick clustering cannot be recommended. In a well-converged core-set model, the number of core sets is up to an order of magnitude smaller than the number of states in a conventional Markov state model with comparable approximation error. Moreover, using the density-based clustering one can extend the core-set method to systems which are not strongly metastable. This is important for the practical application of the core-set method because most biologically interesting systems are only marginally metastable. The key point is to perform a hierarchical density-based clustering while monitoring the structure of the metric matrix which appears in the core-set method. We test this approach on a molecular-dynamics simulation of a highly flexible 14-residue peptide. The resulting core-set models have a high spatial resolution and can distinguish between conformationally similar yet chemically different structures, such as register-shifted hairpin structures.
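
    A minimal illustration of the idea, using scikit-learn's DBSCAN on synthetic two-dimensional data standing in for projected molecular-dynamics frames: dense basins become core sets, while sparse transition points remain unassigned. The data and parameters are made up; the paper's CNN (common-nearest-neighbour) variant is not shown.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Three metastable "basins" in 2D plus sparse transition points.
rng = np.random.default_rng(0)
basins = [rng.normal(loc=c, scale=0.15, size=(300, 2)) for c in ((0, 0), (2, 0), (1, 2))]
transitions = rng.uniform(low=-0.5, high=2.5, size=(60, 2))
X = np.vstack(basins + [transitions])

# Density-based clustering keeps only the dense basin centres as core sets
# and leaves the transition region unassigned (label -1), which is exactly
# what the core-set construction needs.
labels = DBSCAN(eps=0.2, min_samples=10).fit_predict(X)
print("core sets found:", sorted(set(labels) - {-1}))
print("unassigned (transition) points:", int(np.sum(labels == -1)))
```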

  7. Cloud cover detection combining high dynamic range sky images and ceilometer measurements

    NASA Astrophysics Data System (ADS)

    Román, R.; Cazorla, A.; Toledano, C.; Olmo, F. J.; Cachorro, V. E.; de Frutos, A.; Alados-Arboledas, L.

    2017-11-01

This paper presents a new algorithm for cloud detection based on high dynamic range images from a sky camera and ceilometer measurements. The algorithm is also able to detect the obstruction of the sun. This algorithm, called CPC (Camera Plus Ceilometer), is based on the assumption that under cloud-free conditions the sky field must show symmetry. The symmetry criteria are applied depending on ceilometer measurements of the cloud base height. The CPC algorithm is applied at two Spanish locations (Granada and Valladolid). The performance of CPC in retrieving the sun conditions (obstructed or unobstructed) is analyzed in detail using pyranometer measurements at Granada as reference. CPC retrievals are in agreement with those derived from the reference pyranometer in 85% of the cases (this agreement does not appear to depend on aerosol size or optical depth). The agreement percentage drops to only 48% when another algorithm, based on the Red-Blue Ratio (RBR), is applied to the sky camera images. The retrieved cloud cover at Granada and Valladolid is compared with that registered by trained meteorological observers. CPC cloud cover is in agreement with the reference, showing a slight overestimation and a mean absolute error around 1 okta. A major advantage of the CPC algorithm with respect to the RBR method is that the determined cloud cover is independent of aerosol properties; the RBR algorithm overestimates cloud cover for coarse aerosols and high loads. Cloud cover obtained only from the ceilometer shows results similar to the CPC algorithm, but the horizontal distribution cannot be obtained. In addition, it has been observed that under quick and strong changes in cloud cover, the ceilometer-derived cloud cover fits the real cloud cover less well.

  8. T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors.

    PubMed

    Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun

    2016-07-08

    Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentations of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction.

  9. Multi-Sensor Registration of Earth Remotely Sensed Imagery

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Cole-Rhodes, Arlene; Eastman, Roger; Johnson, Kisha; Morisette, Jeffrey; Netanyahu, Nathan S.; Stone, Harold S.; Zavorin, Ilya; Zukor, Dorothy (Technical Monitor)

    2001-01-01

Assuming that approximate registration is given within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps: feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms where the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, and mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4m), Landsat-7/ETM+ (30m), MODIS (500m), and SeaWIFS (1000m).
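
    Of the matching criteria listed above, mutual information is the easiest to sketch from scratch. The snippet below estimates it from the joint grey-level histogram of two images; it is illustrative only, not the authors' implementation, and the test images are synthetic.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two images from their joint grey-level
    histogram; a standard similarity measure for multi-sensor registration
    because it does not assume a linear relation between intensities."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# A shifted copy of an image has lower MI with the original than a perfectly
# aligned copy, which is what a registration search exploits.
rng = np.random.default_rng(0)
img = rng.random((128, 128))
shifted = np.roll(img, 3, axis=1)
print(mutual_information(img, img), mutual_information(img, shifted))
```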

  10. Integration of ANFIS, NN and GA to determine core porosity and permeability from conventional well log data

    NASA Astrophysics Data System (ADS)

    Ja'fari, Ahmad; Hamidzadeh Moghadam, Rasoul

    2012-10-01

Routine core analysis provides useful information for the petrophysical study of hydrocarbon reservoirs. Effective porosity and fluid conductivity (permeability) can be obtained from core analysis in the laboratory. Coring hydrocarbon-bearing intervals and analysing the obtained cores in the laboratory is expensive and time consuming. In this study, an improved method is proposed to make a quantitative correlation between porosity and permeability obtained from cores and conventional well log data by integration of different artificial intelligence systems. The proposed method combines the results of adaptive neuro-fuzzy inference system (ANFIS) and neural network (NN) algorithms for overall estimation of core data from conventional well log data. The method multiplies the output of each algorithm by a weight factor. Simple averaging and weighted averaging were used for determining the weight factors; in the weighted averaging method the genetic algorithm (GA) is used to determine the weight factors. The overall algorithm was applied in one of SW Iran's oil fields with two cored wells. One-third of all data were used as the test dataset and the rest were used for training the networks. Results show that the GA-weighted averaging method provided the best mean square error and also the best correlation coefficient with real core data.

  11. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND RAM and 256 MB SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application in various template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented, measuring the speedup obtained due to the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.

  12. Reconstruction of a digital core containing clay minerals based on a clustering algorithm.

    PubMed

    He, Yanlong; Pu, Chunsheng; Jing, Cheng; Gu, Xiaoyu; Chen, Qingdong; Liu, Hongzhi; Khan, Nasir; Dong, Qiaoling

    2017-10-01

It is difficult to obtain core samples and information for digital core reconstruction of mature sandstone reservoirs around the world, especially for unconsolidated sandstone reservoirs. Meanwhile, reconstruction and division of clay minerals play a vital role in the reconstruction of digital cores, although two-dimensional data-based reconstruction methods are specifically applicable as microstructure simulation methods for sandstone reservoirs. However, the reconstruction of the various clay minerals in digital cores remains a challenging research problem. In the present work, the content of clay minerals was considered on the basis of two-dimensional information about the reservoir. After application of the hybrid method, and in comparison with the model reconstructed by the process-based method, the output was a digital core containing clay clusters without labels for the clusters' number, size, and texture. The statistics and geometry of the reconstructed model were similar to those of the reference model. In addition, the Hoshen-Kopelman algorithm was used to label the various connected, unclassified clay clusters in the initial model, and the number and size of the clay clusters were recorded. At the same time, the K-means clustering algorithm was applied to divide the labeled, large connected clusters into smaller clusters on the basis of differences in the clusters' characteristics. According to the clay minerals' characteristics, such as types, textures, and distributions, the digital core containing clay minerals was reconstructed by means of the clustering algorithm and a judgment of the clay clusters' structure. The distributions and textures of the clay minerals in the digital core were reasonable. The clustering algorithm improved the digital core reconstruction and provides an alternative method for simulating different clay minerals in digital cores.
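
    The two labelling steps described above can be illustrated with standard library routines: SciPy's connected-component labelling plays the role of the Hoshen-Kopelman algorithm, and k-means splits an oversized cluster by voxel coordinates. The synthetic volume and all parameters below are made up for illustration; this is not the paper's workflow.

```python
import numpy as np
from scipy.ndimage import label
from sklearn.cluster import KMeans

# Synthetic 3-D binary "clay" volume (True = clay voxel).
rng = np.random.default_rng(0)
clay = rng.random((40, 40, 40)) < 0.2

# Step 1: label connected clay clusters and record their number and sizes.
labels, n_clusters = label(clay)
sizes = np.bincount(labels.ravel())[1:]        # cluster sizes (skip background 0)
biggest = int(np.argmax(sizes)) + 1
print("connected clay clusters:", n_clusters, "largest size:", int(sizes.max()))

# Step 2: split the largest connected cluster into smaller sub-clusters
# by k-means on its voxel coordinates.
coords = np.argwhere(labels == biggest)
split = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(coords)
print("largest cluster split into sub-clusters of sizes:", np.bincount(split))
```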

  13. Petri nets SM-cover-based on heuristic coloring algorithm

    NASA Astrophysics Data System (ADS)

    Tkacz, Jacek; Doligalski, Michał

    2015-09-01

In this paper, a heuristic coloring algorithm for interpreted Petri nets is presented. Coloring is used to determine the State Machine (SM) subnets. The presented algorithm reduces the Petri net in order to lower the computational complexity and finds one of its possible State Machine covers. The proposed algorithm uses elements of the interpretation of Petri nets. The obtained result may not be the best, but it is sufficient for use in rapid prototyping of logic controllers. The found SM-cover will also be used in the development of algorithms for decomposition, and for modular synthesis and implementation of parallel logic controllers. The correctness of the developed heuristic algorithm was verified using the Gentzen formal reasoning system.

  14. Efficacy of the core DNA barcodes in identifying processed and poorly conserved plant materials commonly used in South African traditional medicine

    PubMed Central

    Mankga, Ledile T.; Yessoufou, Kowiyou; Moteetee, Annah M.; Daru, Barnabas H.; van der Bank, Michelle

    2013-01-01

Medicinal plants cover a broad range of taxa, which may be phylogenetically less related but morphologically very similar. Such morphological similarity between species may lead to misidentification and inappropriate use. Also the substitution of a medicinal plant by a cheaper alternative (e.g. other non-medicinal plant species), either due to misidentification, or deliberately to cheat consumers, is an issue of growing concern. In this study, we used DNA barcoding to identify commonly used medicinal plants in South Africa. Using the core plant barcodes, matK and rbcLa, obtained from processed and poorly conserved materials sold at the muthi traditional medicine market, we tested efficacy of the barcodes in species discrimination. Based on genetic divergence, PCR amplification efficiency and BLAST algorithm, we revealed varied discriminatory potentials for the DNA barcodes. In general, the barcodes exhibited high discriminatory power, indicating their effectiveness in verifying the identity of the most common plant species traded in South African medicinal markets. BLAST algorithm successfully matched 61% of the queries against a reference database, suggesting that most of the information supplied by sellers at traditional medicinal markets in South Africa is correct. Our findings reinforce the utility of DNA barcoding technique in limiting false identification that can harm public health. PMID:24453559

  15. T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors

    PubMed Central

    Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun

    2016-01-01

    Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentations of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction. PMID:27399722

  16. Parallel transformation of K-SVD solar image denoising algorithm

    NASA Astrophysics Data System (ADS)

    Liang, Youwen; Tian, Yu; Li, Mei

    2017-02-01

The images obtained by observing the sun through a large telescope always suffer from noise due to the low SNR. The K-SVD denoising algorithm can effectively remove Gaussian white noise. However, training dictionaries for sparse representations is a time-consuming task, due to the large size of the data involved and to the complexity of the training algorithms. In this paper, OpenMP parallel programming is used to transform the serial algorithm into a parallel version. A data-parallelism model is used to transform the algorithm; the biggest change is that multiple atoms, rather than a single atom, are updated simultaneously. The denoising effect and acceleration performance were tested after completion of the parallel algorithm. The speedup of the program is 13.563 when using 16 cores. This parallel version can fully utilize multi-core CPU hardware resources, greatly reduce running time, and is easy to port to multi-core platforms.

  17. Improved Snow Mapping Accuracy with Revised MODIS Snow Algorithm

    NASA Technical Reports Server (NTRS)

    Riggs, George; Hall, Dorothy K.

    2012-01-01

    The MODIS snow cover products have been used in over 225 published studies. From those reports, and our ongoing analysis, we have learned about the accuracy and errors in the snow products. Revisions have been made in the algorithms to improve the accuracy of snow cover detection in Collection 6 (C6), the next processing/reprocessing of the MODIS data archive planned to start in September 2012. Our objective in the C6 revision of the MODIS snow-cover algorithms and products is to maximize the capability to detect snow cover while minimizing snow detection errors of commission and omission. While the basic snow detection algorithm will not change, new screens will be applied to alleviate snow detection commission and omission errors, and only the fractional snow cover (FSC) will be output (the binary snow cover area (SCA) map will no longer be included).
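
    The core snow test behind these products is the Normalized Difference Snow Index. The sketch below shows an illustrative NDSI computation and threshold only; it does not reproduce the additional C6 screens described above, and the pixel values are hypothetical.

```python
import numpy as np

def ndsi_snow_map(green, swir, ndsi_threshold=0.4):
    """Illustrative NDSI-based snow test in the spirit of the MODIS snow
    algorithm: NDSI = (green - SWIR) / (green + SWIR), with snow flagged
    above a threshold (0.4 is the value commonly quoted for the binary
    product).  The operational algorithm adds further screens (thermal,
    cloud, low illumination) that are not reproduced here."""
    green = np.asarray(green, dtype=float)
    swir = np.asarray(swir, dtype=float)
    ndsi = (green - swir) / np.clip(green + swir, 1e-6, None)
    return ndsi, ndsi > ndsi_threshold

# Two hypothetical pixels: a bright snow pixel (high green, low SWIR
# reflectance) and a vegetated pixel.
ndsi, snow = ndsi_snow_map([0.8, 0.3], [0.1, 0.25])
print(ndsi, snow)
```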

  18. Land cover and land use mapping of the iSimangaliso Wetland Park, South Africa: comparison of oblique and orthogonal random forest algorithms

    NASA Astrophysics Data System (ADS)

    Bassa, Zaakirah; Bob, Urmilla; Szantoi, Zoltan; Ismail, Riyad

    2016-01-01

    In recent years, the popularity of tree-based ensemble methods for land cover classification has increased significantly. Using WorldView-2 image data, we evaluate the potential of the oblique random forest algorithm (oRF) to classify a highly heterogeneous protected area. In contrast to the random forest (RF) algorithm, the oRF algorithm builds multivariate trees by learning the optimal split using a supervised model. The oRF binary algorithm is adapted to a multiclass land cover and land use application using both the "one-against-one" and "one-against-all" combination approaches. Results show that the oRF algorithms are capable of achieving high classification accuracies (>80%). However, there was no statistical difference in classification accuracies obtained by the oRF algorithms and the more popular RF algorithm. For all the algorithms, user accuracies (UAs) and producer accuracies (PAs) >80% were recorded for most of the classes. Both the RF and oRF algorithms poorly classified the indigenous forest class as indicated by the low UAs and PAs. Finally, the results from this study advocate and support the utility of the oRF algorithm for land cover and land use mapping of protected areas using WorldView-2 image data.
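
    Oblique random forests are not part of standard libraries, but the orthogonal RF baseline that the study compares against is. The scikit-learn sketch below uses synthetic "band" features and made-up classes purely for illustration; it is not the study's workflow or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Orthogonal (axis-aligned) random forest on synthetic pixel spectra.
# An oblique RF learns multivariate splits instead and is not in scikit-learn.
rng = np.random.default_rng(0)
n_per_class, n_bands, n_classes = 200, 8, 4
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_bands))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, rf.predict(X_te)))
```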

  19. Automated podosome identification and characterization in fluorescence microscopy images.

    PubMed

    Meddens, Marjolein B M; Rieger, Bernd; Figdor, Carl G; Cambi, Alessandra; van den Dries, Koen

    2013-02-01

    Podosomes are cellular adhesion structures involved in matrix degradation and invasion that comprise an actin core and a ring of cytoskeletal adaptor proteins. They are most often identified by staining with phalloidin, which binds F-actin and therefore visualizes the core. However, not only podosomes, but also many other cytoskeletal structures contain actin, which makes podosome segmentation by automated image processing difficult. Here, we have developed a quantitative image analysis algorithm that is optimized to identify podosome cores within a typical sample stained with phalloidin. By sequential local and global thresholding, our analysis identifies up to 76% of podosome cores excluding other F-actin-based structures. Based on the overlap in podosome identifications and quantification of podosome numbers, our algorithm performs equally well compared to three experts. Using our algorithm we show effects of actin polymerization and myosin II inhibition on the actin intensity in both podosome core and associated actin network. Furthermore, by expanding the core segmentations, we reveal a previously unappreciated differential distribution of cytoskeletal adaptor proteins within the podosome ring. These applications illustrate that our algorithm is a valuable tool for rapid and accurate large-scale analysis of podosomes to increase our understanding of these characteristic adhesion structures.
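
    The sequential local-plus-global thresholding idea can be sketched with scikit-image as follows; the Otsu global cut, the block size, and the minimum blob area are illustrative choices, not the published parameter settings.

      import numpy as np
      from skimage import filters, measure

      def segment_cores(img, block_size=51, min_area=20):
          """Two-stage segmentation: a global Otsu cut followed by a local adaptive cut."""
          img = np.asarray(img, dtype=float)
          global_mask = img > filters.threshold_otsu(img)               # coarse foreground
          local_mask = img > filters.threshold_local(img, block_size)   # locally bright peaks
          labels = measure.label(global_mask & local_mask)
          # Keep only blobs above a minimal size, analogous to rejecting non-podosome specks.
          regions = [r for r in measure.regionprops(labels) if r.area >= min_area]
          return labels, [r.centroid for r in regions]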

  20. Large-Scale Image Analytics Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Ganguly, S.; Nemani, R. R.; Basu, S.; Mukhopadhyay, S.; Michaelis, A.; Votava, P.

    2014-12-01

    High resolution land cover classification maps are needed to increase the accuracy of current land ecosystem and climate model outputs. Few studies demonstrate the state of the art in deriving very high resolution (VHR) land cover products. In addition, most methods heavily rely on commercial software that is difficult to scale given the region of study (e.g., continents to globe). Complexities in present approaches relate to (a) scalability of the algorithm, (b) large image data processing (compute and memory intensive), (c) computational cost, (d) massively parallel architecture, and (e) machine learning automation. In addition, VHR satellite datasets are of the order of terabytes and features extracted from these datasets are of the order of petabytes. In our present study, we have acquired the National Agriculture Imagery Program (NAIP) dataset for the Continental United States at a spatial resolution of 1 m. This data comes as image tiles (a total of a quarter million image scenes with ~60 million pixels) and has a total size of ~100 terabytes for a single acquisition. Features extracted from the entire dataset would amount to ~8-10 petabytes. In our proposed approach, we have implemented a novel semi-automated machine learning algorithm rooted in the principles of "deep learning" to delineate the percentage of tree cover. In order to perform image analytics in such a granular system, it is mandatory to devise an intelligent archiving and query system for image retrieval, file structuring, metadata processing and filtering of all available image scenes. Using the Open NASA Earth Exchange (NEX) initiative, which is a partnership with Amazon Web Services (AWS), we have developed an end-to-end architecture for designing the database and the deep belief network (following the DistBelief computing model) to solve a grand challenge of scaling this process across a quarter million NAIP tiles that cover the entire Continental United States. The AWS core components that we use to solve this problem are DynamoDB along with S3 for database query and storage, ElastiCache shared memory architecture for image segmentation, Elastic Map Reduce (EMR) for image feature extraction, and the memory-optimized Elastic Compute Cloud (EC2) for the learning algorithm.

  1. VLBI-resolution radio-map algorithms: Performance analysis of different levels of data-sharing on multi-socket, multi-core architectures

    NASA Astrophysics Data System (ADS)

    Tabik, S.; Romero, L. F.; Mimica, P.; Plata, O.; Zapata, E. L.

    2012-09-01

    A broad area in astronomy focuses on simulating extragalactic objects based on Very Long Baseline Interferometry (VLBI) radio-maps. Several algorithms in this scope simulate the radio-maps that would be observed if emitted from a predefined extragalactic object. This work analyzes the performance and scaling of this kind of algorithm on multi-socket, multi-core architectures. In particular, we evaluate a sharing approach, a privatizing approach and a hybrid approach on systems with a complex memory hierarchy that includes a shared Last Level Cache (LLC). In addition, we investigate which manual processes can be systematized and then automated in future works. The experiments show that the data-privatizing model scales efficiently on medium-scale multi-socket, multi-core systems (up to 48 cores), whereas, regardless of algorithmic and scheduling optimizations, the sharing approach is unable to reach acceptable scalability on more than one socket. However, the hybrid model with a specific level of data-sharing provides the best scalability across all of the multi-socket, multi-core systems used.

  2. Predator selection of prairie landscape features and its relation to duck nest success

    USGS Publications Warehouse

    Phillips, M.L.; Clark, W.R.; Sovada, M.A.; Horn, D.J.; Koford, Rolf R.; Greenwood, R.J.

    2003-01-01

    Mammalian predation is a major cause of mortality for breeding waterfowl in the U.S. Northern Great Plains, and yet we know little about the selection of prairie habitats by predators or how this influences nest success in grassland nesting cover. We selected two 41.4-km2 study areas in both 1996 and 1997 in North Dakota, USA, with contrasting compositions of perennial grassland. A study area contained either 15-20% perennial grassland (Low Grassland Composition [LGC]) or 45-55% perennial grassland (High Grassland Composition [HGC]). We used radiotelemetry to investigate the selection of 9 landscape cover types by red fox (Vulpes vulpes) and striped skunk (Mephitis mephitis), while simultaneously recording duck nest success within planted cover. The cover types included the edge and core areas of planted cover, wetland edges within planted cover or surrounded by cropland, pastureland, hayland, cropland, roads, and miscellaneous cover types. Striped skunks selected wetland edges surrounded by agriculture over all other cover types in LGC landscapes (P-values for all pairwise comparisons were <0.05). Striped skunks also selected wetland edges surrounded by agriculture over all other cover types in HGC landscapes (P < 0.05), except for wetland edges within planted cover (P = 0.12). Red foxes selected the edge and core areas of planted cover, as well as wetland edges within planted cover in LGC landscapes (i.e., they were attracted to the more isolated patches of planted cover). However, in HGC landscapes, red foxes did not select interior areas of planted cover (i.e., core areas of planted cover and wetland edges in planted cover) as frequently as edges of planted cover (P < 0.05). Red foxes selected core areas of planted cover more frequently in LGC than in HGC landscapes (P < 0.05) and selected pastureland more frequently in HGC than in LGC landscapes (P < 0.05). Furthermore, red foxes selected the isolated patches of planted cover more than pastureland in LGC landscapes (P < 0.05). Duck nest success was greater in HGC landscapes than in LGC landscapes for planted-cover core (P < 0.0001), planted-cover edge (P < 0.001) and planted cover-wetland edge (P < 0.001). Both the increased amount of planted-cover core area and the increased pastureland selection in HGC landscapes may have diluted predator foraging efficiency in the interior areas of planted cover and contributed to higher nest success in HGC landscapes. Our observations of predator cover-type selection not only support the restoration and management of large blocks of grassland but also indicate the influence of alternative cover types for mitigating nest predation in the Prairie Pothole Region.

  3. Stochastic Local Search for Core Membership Checking in Hedonic Games

    NASA Astrophysics Data System (ADS)

    Keinänen, Helena

    Hedonic games have emerged as an important tool in economics and show promise as a useful formalism to model multi-agent coalition formation in AI as well as group formation in social networks. We consider a coNP-complete problem of core membership checking in hedonic coalition formation games. No previous algorithms to tackle the problem have been presented. In this work, we overcome this by developing two stochastic local search algorithms for core membership checking in hedonic games. We demonstrate the usefulness of the algorithms by showing experimentally that they find solutions efficiently, particularly for large agent societies.

  4. Incorporating Added Sugar Improves the Performance of the Health Star Rating Front-of-Pack Labelling System in Australia.

    PubMed

    Peters, Sanne A E; Dunford, Elizabeth; Jones, Alexandra; Ni Mhurchu, Cliona; Crino, Michelle; Taylor, Fraser; Woodward, Mark; Neal, Bruce

    2017-07-05

    The Health Star Rating (HSR) is an interpretive front-of-pack labelling system that rates the overall nutritional profile of packaged foods. The algorithm underpinning the HSR includes total sugar content as one of the components. This has been criticised because intrinsic sugars naturally present in dairy, fruits, and vegetables are treated the same as sugars added during food processing. We assessed whether the HSR could better discriminate between core and discretionary foods by including added sugar in the underlying algorithm. Nutrition information was extracted for 34,135 packaged foods available in The George Institute's Australian FoodSwitch database. Added sugar levels were imputed from food composition databases. Products were classified as 'core' or 'discretionary' based on the Australian Dietary Guidelines. The ability of each of the nutrients included in the HSR algorithm, as well as added sugar, to discriminate between core and discretionary foods was estimated using the area under the curve (AUC). 15,965 core and 18,350 discretionary foods were included. Of these, 8230 (52%) core foods and 15,947 (87%) discretionary foods contained added sugar. Median (Q1, Q3) HSRs were 4.0 (3.0, 4.5) for core foods and 2.0 (1.0, 3.0) for discretionary foods. Median added sugar contents (g/100 g) were 3.3 (1.5, 5.5) for core foods and 14.6 (1.8, 37.2) for discretionary foods. Of all the nutrients used in the current HSR algorithm, total sugar had the greatest individual capacity to discriminate between core and discretionary foods; AUC 0.692 (0.686; 0.697). Added sugar alone achieved an AUC of 0.777 (0.772; 0.782). A model with all nutrients in the current HSR algorithm had an AUC of 0.817 (0.812; 0.821), which increased to 0.871 (0.867; 0.874) with inclusion of added sugar. The HSR nutrients discriminate well between core and discretionary packaged foods. However, discrimination was improved when added sugar was also included. These data argue for inclusion of added sugar in an updated HSR algorithm and declaration of added sugar as part of mandatory nutrient declarations.
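
    The per-nutrient analysis amounts to computing an AUC for each candidate predictor against the core/discretionary label; a minimal scikit-learn sketch, using made-up stand-in data rather than the FoodSwitch database, is:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n = 1000
      # Hypothetical stand-in data: nutrient columns and a 0/1 "discretionary" label.
      X = {"total_sugar": rng.gamma(2.0, 5.0, n),
           "added_sugar": rng.gamma(1.5, 6.0, n),
           "sodium":      rng.gamma(2.0, 100.0, n)}
      y = (X["added_sugar"] + 0.3 * X["total_sugar"] + rng.normal(0, 5, n) > 12).astype(int)

      # Single-nutrient AUCs: rank-based, so no model is needed for one predictor.
      for name, col in X.items():
          print(name, round(roc_auc_score(y, col), 3))

      # AUC of a model combining all nutrients (in-sample, for illustration only).
      M = np.column_stack(list(X.values()))
      p = LogisticRegression(max_iter=1000).fit(M, y).predict_proba(M)[:, 1]
      print("combined", round(roc_auc_score(y, p), 3))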

  5. Software for project-based learning of robot motion planning

    NASA Astrophysics Data System (ADS)

    Moll, Mark; Bordeaux, Janice; Kavraki, Lydia E.

    2013-12-01

    Motion planning is a core problem in robotics concerned with finding feasible paths for a given robot. Motion planning algorithms perform a search in the high-dimensional continuous space of robot configurations and exemplify many of the core algorithmic concepts of search algorithms and associated data structures. Motion planning algorithms can be explained in a simplified two-dimensional setting, but this masks many of the subtleties and complexities of the underlying problem. We have developed software for project-based learning of motion planning that enables deep learning. The projects that we have developed allow advanced undergraduate students and graduate students to reflect on the performance of existing textbook algorithms and their own variations on such algorithms. Formative assessment has been conducted at three institutions. The core of the software used for this teaching module is also used within the Robot Operating System, a platform widely adopted by the robotics research community. This allows for transfer of knowledge and skills to robotics research projects involving a large variety of robot hardware platforms.

  6. Clustering algorithms for identifying core atom sets and for assessing the precision of protein structure ensembles.

    PubMed

    Snyder, David A; Montelione, Gaetano T

    2005-06-01

    An important open question in the field of NMR-based biomolecular structure determination is how best to characterize the precision of the resulting ensemble of structures. Typically, the RMSD, as minimized in superimposing the ensemble of structures, is the preferred measure of precision. However, the presence of poorly determined atomic coordinates and multiple "RMSD-stable domains"--locally well-defined regions that are not aligned in global superimpositions--complicate RMSD calculations. In this paper, we present a method, based on a novel, structurally defined order parameter, for identifying a set of core atoms to use in determining superimpositions for RMSD calculations. In addition we present a method for deciding whether to partition that core atom set into "RMSD-stable domains" and, if so, how to determine partitioning of the core atom set. We demonstrate our algorithm and its application in calculating statistically sound RMSD values by applying it to a set of NMR-derived structural ensembles, superimposing each RMSD-stable domain (or the entire core atom set, where appropriate) found in each protein structure under consideration. A parameter calculated by our algorithm using a novel, kurtosis-based criterion, the epsilon-value, is a measure of precision of the superimposition that complements the RMSD. In addition, we compare our algorithm with previously described algorithms for determining core atom sets. The methods presented in this paper for biomolecular structure superimposition are quite general, and have application in many areas of structural bioinformatics and structural biology.
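
    Computing the RMSD of a core atom set after optimal superposition is typically done with the Kabsch algorithm; the NumPy sketch below is a generic implementation that assumes the two (N, 3) coordinate arrays list the same core atoms in the same order, and it does not reproduce the authors' order-parameter-based core selection or domain partitioning.

      import numpy as np

      def kabsch_rmsd(P, Q):
          """RMSD of two (N, 3) coordinate arrays after optimal rigid superposition (Kabsch)."""
          P = P - P.mean(axis=0)
          Q = Q - Q.mean(axis=0)
          H = P.T @ Q                                # covariance of the two centred point sets
          U, S, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against an improper rotation (reflection)
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # optimal rotation mapping P onto Q
          P_rot = P @ R.T
          return float(np.sqrt(((P_rot - Q) ** 2).sum() / len(P)))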

  7. Efficient parallel and out of core algorithms for constructing large bi-directed de Bruijn graphs.

    PubMed

    Kundeti, Vamsi K; Rajasekaran, Sanguthevar; Dinh, Hieu; Vaughn, Matthew; Thapar, Vishal

    2010-11-15

    Assembling genomic sequences from a set of overlapping reads is one of the most fundamental problems in computational biology. Algorithms addressing the assembly problem fall into two broad categories based on the data structures they employ. The first class uses an overlap/string graph and the second type uses a de Bruijn graph. However, with the recent advances in short read sequencing technology, de Bruijn graph based algorithms seem to play a vital role in practice. Efficient algorithms for building these massive de Bruijn graphs are very essential in large sequencing projects based on short reads. In an earlier work, an O(n/p) time parallel algorithm has been given for this problem. Here n is the size of the input and p is the number of processors. This algorithm enumerates all possible bi-directed edges which can overlap with a node and ends up generating Θ(nΣ) messages (Σ being the size of the alphabet). In this paper we present a Θ(n/p) time parallel algorithm with a communication complexity that is equal to that of parallel sorting and is not sensitive to Σ. The generality of our algorithm makes it very easy to extend it even to the out-of-core model and in this case it has an optimal I/O complexity of Θ((n/B) log(n/B)/log(M/B)) (M being the main memory size and B being the size of the disk block). We demonstrate the scalability of our parallel algorithm on an SGI/Altix computer. A comparison of our algorithm with the previous approaches reveals that our algorithm is faster--both asymptotically and practically. We demonstrate the scalability of our sequential out-of-core algorithm by comparing it with the algorithm used by VELVET to build the bi-directed de Bruijn graph. Our experiments reveal that our algorithm can build the graph with a constant amount of memory, which clearly outperforms VELVET. We also provide efficient algorithms for the bi-directed chain compaction problem. The bi-directed de Bruijn graph is a fundamental data structure for any sequence assembly program based on the Eulerian approach. Our algorithms for constructing bi-directed de Bruijn graphs are efficient in parallel and out-of-core settings. These algorithms can be used in building large scale bi-directed de Bruijn graphs. Furthermore, our algorithms do not employ any all-to-all communications in a parallel setting and perform better than the prior algorithms. Finally, our out-of-core algorithm is extremely memory efficient and can replace the existing graph construction algorithm in VELVET.
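
    For orientation, a toy (sequential, in-memory, single-stranded) de Bruijn graph construction from short reads looks like the sketch below; the parallel, bi-directed, out-of-core machinery described above is exactly what makes this idea scale and is not reproduced here.

      from collections import defaultdict

      def de_bruijn_graph(reads, k):
          """Toy single-stranded de Bruijn graph: nodes are (k-1)-mers, edges are k-mers."""
          adj = defaultdict(set)
          for read in reads:
              for i in range(len(read) - k + 1):
                  kmer = read[i:i + k]
                  adj[kmer[:-1]].add(kmer[1:])   # edge (prefix -> suffix) for this k-mer
          return adj

      reads = ["ACGTACGT", "GTACGTTA"]
      for node, succs in de_bruijn_graph(reads, 4).items():
          print(node, "->", sorted(succs))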

  8. Efficient sequential and parallel algorithms for record linkage.

    PubMed

    Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar

    2014-01-01

    Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Our sequential and parallel algorithms have been tested on a real dataset of 1,083,878 records and synthetic datasets ranging in size from 50,000 to 9,000,000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm.
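
    The "link similar records, then take connected components" step can be sketched with a union-find structure; the blocking key and similarity test below are illustrative placeholders rather than the paper's radix-sorting and edit-distance machinery.

      from collections import defaultdict

      def find(parent, x):
          while parent[x] != x:                 # path-halving union-find
              parent[x] = parent[parent[x]]
              x = parent[x]
          return x

      def cluster_records(records, key, similar):
          """Group records whose blocking key matches and that pass a pairwise similarity test."""
          parent = list(range(len(records)))
          blocks = defaultdict(list)
          for i, r in enumerate(records):
              blocks[key(r)].append(i)          # blocking step: only compare within a block
          for ids in blocks.values():
              for a in range(len(ids)):
                  for b in range(a + 1, len(ids)):
                      i, j = ids[a], ids[b]
                      if similar(records[i], records[j]):
                          parent[find(parent, i)] = find(parent, j)   # link similar records
          clusters = defaultdict(list)
          for i in range(len(records)):
              clusters[find(parent, i)].append(i)   # connected components = linked entities
          return list(clusters.values())

      recs = [("john", "smith", "1970"), ("jon", "smith", "1970"), ("mary", "moss", "1980")]
      print(cluster_records(recs, key=lambda r: r[1][0],
                            similar=lambda a, b: a[1] == b[1] and a[2] == b[2]))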

  9. Protein complex prediction for large protein protein interaction networks with the Core&Peel method.

    PubMed

    Pellegrini, Marco; Baglioni, Miriam; Geraci, Filippo

    2016-11-08

    Biological networks play an increasingly important role in the exploration of functional modularity and cellular organization at a systemic level. Quite often the first tools used to analyze these networks are clustering algorithms. We concentrate here on the specific task of predicting protein complexes (PC) in large protein-protein interaction networks (PPIN). Currently, many state-of-the-art algorithms work well for networks of small or moderate size. However, their performance on much larger networks, which are becoming increasingly common in modern proteome-wide studies, needs to be re-assessed. We present a new fast algorithm for clustering large sparse networks: Core&Peel, which runs essentially in time and storage O(a(G)m+n) for a network G of n nodes and m arcs, where a(G) is the arboricity of G (which is roughly proportional to the maximum average degree of any induced subgraph in G). We evaluated Core&Peel on five PPI networks of large size and one of medium size from both yeast and Homo sapiens, comparing its performance against those of ten state-of-the-art methods. We demonstrate that Core&Peel consistently outperforms the ten competitors in its ability to identify known protein complexes and in the functional coherence of its predictions. Our method is remarkably robust, being quite insensitive to the injection of random interactions. Core&Peel is also empirically efficient, attaining the second-best running time on large networks among the tested algorithms. Our algorithm Core&Peel pushes forward the state of the art in PPIN clustering, providing an algorithmic solution with polynomial running time that attains demonstrably good output quality and speed on challenging large real networks.
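
    The dense-core extraction that gives Core&Peel its name is related to classical k-core peeling; a minimal illustration with networkx (standard k-core decomposition, not the authors' Core&Peel procedure) is:

      import networkx as nx

      # Build a small PPI-like graph: a dense 5-clique "complex" plus a sparse periphery.
      G = nx.complete_graph(5)
      G.add_edges_from([(0, 5), (5, 6), (6, 7), (2, 8)])

      core_numbers = nx.core_number(G)   # largest k for which each node survives k-core peeling
      dense_part = nx.k_core(G, k=3)     # iteratively peel nodes of degree < 3
      print(core_numbers)
      print(sorted(dense_part.nodes()))  # the 5-clique remains as the dense candidate complex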

  10. On the Critical Behaviour, Crossover Point and Complexity of the Exact Cover Problem

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Shumow, Daniel; Koga, Dennis (Technical Monitor)

    2003-01-01

    Research into quantum algorithms for NP-complete problems has rekindled interest in the detailed study of a broad class of combinatorial problems. A recent paper applied the quantum adiabatic evolution algorithm to the Exact Cover problem for 3-sets (EC3), and provided empirical evidence that the algorithm was polynomial. In this paper we provide a detailed study of the characteristics of the exact cover problem. We present the annealing approximation applied to EC3, which gives an over-estimate of the phase transition point. We also identify the phase transition point empirically. We also study the complexity of two classical algorithms on this problem: Davis-Putnam and Simulated Annealing. For these algorithms, EC3 is significantly easier than 3-SAT.
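
    For readers unfamiliar with the problem, an Exact Cover instance asks for a sub-collection of the given sets that covers every element exactly once. The brute-force backtracking sketch below, a simplified relative of Knuth's Algorithm X rather than any of the quantum or annealing approaches studied in the paper, makes the definition concrete.

      def exact_cover(universe, sets, chosen=None):
          """Return indices of sets covering each element of `universe` exactly once, or None."""
          if chosen is None:
              chosen = []
          if not universe:
              return chosen
          elem = next(iter(universe))                 # branch on some uncovered element
          for i, s in enumerate(sets):
              if elem in s and s <= universe:         # s must not re-cover an already covered element
                  result = exact_cover(universe - s, sets, chosen + [i])
                  if result is not None:
                      return result
          return None

      U = {1, 2, 3, 4, 5, 6, 7}
      S = [{1, 4, 7}, {1, 4}, {4, 5, 7}, {3, 5, 6}, {2, 3, 6, 7}, {2, 7}]
      print(exact_cover(U, S))   # e.g. the indices of {1, 4}, {2, 7}, {3, 5, 6}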

  11. Set covering algorithm, a subprogram of the scheduling algorithm for mission planning and logistic evaluation

    NASA Technical Reports Server (NTRS)

    Chang, H.

    1976-01-01

    A computer program using Lemke, Salkin and Spielberg's Set Covering Algorithm (SCA) to optimize a traffic model problem in the Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE) was documented. SCA forms a submodule of SAMPLE and provides for input and output, subroutines, and an interactive feature for performing the optimization and arranging the results in a readily understandable form for output.
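
    Set covering itself is easy to state: choose the fewest subsets whose union is the whole universe. The classic greedy approximation below is only a stand-in illustration; the SCA routine documented in the report is the exact Lemke-Salkin-Spielberg method.

      def greedy_set_cover(universe, subsets):
          """Greedy approximation: repeatedly take the subset covering the most uncovered elements."""
          uncovered = set(universe)
          chosen = []
          while uncovered:
              best = max(range(len(subsets)), key=lambda i: len(subsets[i] & uncovered))
              if not subsets[best] & uncovered:
                  raise ValueError("universe cannot be covered by the given subsets")
              chosen.append(best)
              uncovered -= subsets[best]
          return chosen

      U = set(range(1, 11))
      S = [set(range(1, 6)), set(range(4, 9)), {7, 8, 9, 10}, {1, 10}]
      print(greedy_set_cover(U, S))   # indices of a small (not necessarily optimal) cover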

  12. Continuous Change Detection and Classification (CCDC) of Land Cover Using All Available Landsat Data

    NASA Astrophysics Data System (ADS)

    Zhu, Z.; Woodcock, C. E.

    2012-12-01

    A new algorithm for Continuous Change Detection and Classification (CCDC) of land cover using all available Landsat data is developed. This new algorithm is capable of detecting many kinds of land cover change as new images are collected and, at the same time, providing land cover maps for any given time. To better identify land cover change, a two-step cloud, cloud shadow, and snow masking algorithm is used for eliminating "noisy" observations. Next, a time series model with seasonality, trend, and break components estimates the surface reflectance and temperature. The time series model is updated continuously with newly acquired observations. Due to the high variability in spectral response for different kinds of land cover change, the CCDC algorithm uses a data-driven threshold derived from all seven Landsat bands. When the difference between observed and predicted values exceeds the thresholds three consecutive times, a pixel is identified as land cover change. Land cover classification is done after change detection. Coefficients from the time series models and the Root Mean Square Error (RMSE) from model fitting are used as classification inputs for the Random Forest Classifier (RFC). We applied this new algorithm to one Landsat scene (Path 12 Row 31) that includes all of Rhode Island as well as much of Eastern Massachusetts and parts of Connecticut. A total of 532 Landsat images acquired between 1982 and 2011 were processed. During this period, 619,924 pixels were detected to change once (91% of total changed pixels) and 60,199 pixels were detected to change twice (8% of total changed pixels). The most frequent land cover change category is from mixed forest to low density residential, which occupies more than 8% of total land cover change pixels.
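
    The seasonal-plus-trend surface reflectance model at the heart of CCDC can be sketched as a harmonic regression; the code below fits one band with a single annual harmonic and flags an observation whose residual exceeds a multiple of the RMSE. The number of coefficients, the threshold, and the three-consecutive-anomalies rule of the real algorithm are simplified here.

      import numpy as np

      def fit_harmonic_model(t, y):
          """Least-squares fit of y ~ a0 + a1*t + a2*cos(2*pi*t) + a3*sin(2*pi*t), t in years."""
          A = np.column_stack([np.ones_like(t), t,
                               np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
          coef, *_ = np.linalg.lstsq(A, y, rcond=None)
          rmse = float(np.sqrt(np.mean((A @ coef - y) ** 2)))
          return coef, rmse

      def flag_change(t_new, y_new, coef, rmse, k=3.0):
          """Flag an observation whose residual exceeds k times the model RMSE."""
          pred = (coef[0] + coef[1] * t_new
                  + coef[2] * np.cos(2 * np.pi * t_new) + coef[3] * np.sin(2 * np.pi * t_new))
          return abs(y_new - pred) > k * rmse

      t = np.linspace(2000, 2005, 120)   # roughly monthly clear observations
      y = (0.3 + 0.01 * (t - 2000) + 0.05 * np.sin(2 * np.pi * t)
           + np.random.default_rng(1).normal(0, 0.01, t.size))
      coef, rmse = fit_harmonic_model(t, y)
      print(flag_change(2005.5, 0.8, coef, rmse))   # a large reflectance jump is flagged as change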

  13. Visualization assisted by parallel processing

    NASA Astrophysics Data System (ADS)

    Lange, B.; Rey, H.; Vasques, X.; Puech, W.; Rodriguez, N.

    2011-01-01

    This paper discusses the experimental results of our visualization model for data extracted from sensors. The objective is to find a computationally efficient method for producing a real-time rendered visualization of a large amount of data. We develop a visualization method to monitor the temperature variance of a data center. Sensors are placed on three layers and do not cover the entire room, so we use a particle paradigm to interpolate the sensor data; the particles model the "space" of the room. In this work we partition the particle set using two mathematical methods, Delaunay triangulation and Voronoi cells, both presented by Avis and Bhattacharya. Particles provide information on the room temperature at different coordinates over time. To locate particles and update their data we define a computational cost function. To evaluate this function efficiently, we use a client-server paradigm: the server computes the data and clients display it on different kinds of hardware. This paper is organized as follows. The first part presents related algorithms used to visualize large flows of data. The second part presents the different platforms and methods used, which were evaluated to determine the best solution for the proposed task. The benchmark is based on the computational cost of our algorithm, namely locating particles relative to sensors and updating particle values. The benchmark was run on a personal computer using a single CPU core, multi-core programming, GPU programming, and a hybrid GPU/CPU approach. GPU programming is a growing method in this research field; it allows real-time rendering instead of precomputed rendering. To improve our results, we also ran our algorithm on a High Performance Computing (HPC) platform; this benchmark was used to improve the multi-core method. HPC is commonly used in data visualization (astronomy, physics, etc.) to improve rendering and achieve real-time performance.
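
    Partitioning the sensor positions with Delaunay triangulation and Voronoi cells, as described above, is directly available in SciPy; the sketch below uses made-up sensor coordinates and shows the kind of enclosing-triangle and nearest-sensor queries a particle update would rely on.

      import numpy as np
      from scipy.spatial import Delaunay, Voronoi, cKDTree

      rng = np.random.default_rng(0)
      sensors = rng.uniform(0, 10, size=(20, 2))     # hypothetical 2-D sensor positions

      tri = Delaunay(sensors)                        # triangulation of the sensor layout
      vor = Voronoi(sensors)                         # one Voronoi cell per sensor
      print(len(tri.simplices), "triangles,", len(vor.regions), "Voronoi regions")

      # A "particle" is interpolated from the sensors of its enclosing triangle.
      particle = sensors.mean(axis=0, keepdims=True)   # centroid is guaranteed inside the hull
      simplex = tri.find_simplex(particle)[0]
      print("particle lies in triangle", simplex, "with sensor indices", tri.simplices[simplex])

      # Or simply read the nearest sensor, i.e. the Voronoi cell the particle falls in.
      _, nearest = cKDTree(sensors).query(particle)
      print("nearest sensor:", nearest[0])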

  14. Recursive partitioned inversion of large (1500 x 1500) symmetric matrices

    NASA Technical Reports Server (NTRS)

    Putney, B. H.; Brownd, J. E.; Gomez, R. A.

    1976-01-01

    A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
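
    For normal-equation systems of the kind SOLVE handles, the standard in-memory counterpart of the decomposition approach is a Cholesky factor-and-solve; the SciPy sketch below illustrates only that step, not the recursive, partitioned, limited-core algorithm of the report.

      import numpy as np
      from scipy.linalg import cho_factor, cho_solve

      rng = np.random.default_rng(0)
      A = rng.normal(size=(500, 50))     # design matrix of an overdetermined problem
      b = rng.normal(size=500)

      N = A.T @ A                        # symmetric positive definite normal matrix
      rhs = A.T @ b
      c, low = cho_factor(N)             # Cholesky factorization (about n**3/3 flops)
      x = cho_solve((c, low), rhs)       # triangular solves reuse the stored factor
      print(np.allclose(N @ x, rhs))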

  15. New Scheduling Algorithms for Agile All-Photonic Networks

    NASA Astrophysics Data System (ADS)

    Mehri, Mohammad Saleh; Ghaffarpour Rahbar, Akbar

    2017-12-01

    An optical overlaid star network is a class of agile all-photonic networks that consists of one or more core node(s) at the center of the star network and a number of edge nodes around the core node. In this architecture, a core node may use a scheduling algorithm for transmission of traffic through the network. A core node is responsible for scheduling optical packets that arrive from edge nodes and switching them toward their destinations. Nowadays, most edge nodes use virtual output queue (VOQ) architecture for buffering client packets to achieve high throughput. This paper presents two efficient scheduling algorithms called discretionary iterative matching (DIM) and adaptive DIM. These schedulers find maximum matching in a small number of iterations and provide high throughput and incur low delay. The number of arbiters in these schedulers and the number of messages exchanged between inputs and outputs of a core node are reduced. We show that DIM and adaptive DIM can provide better performance in comparison with iterative round-robin matching with SLIP (iSLIP). SLIP means the act of sliding for a short distance to select one of the requested connections based on the scheduling algorithm.

  16. Overview of NASA's MODIS and Visible Infrared Imaging Radiometer Suite (VIIRS) snow-cover Earth System Data Records

    NASA Technical Reports Server (NTRS)

    Riggs, George A.; Hall, Dorothy K.; Roman, Miguel O.

    2017-01-01

    Knowledge of the distribution, extent, duration and timing of snowmelt is critical for characterizing the Earth's climate system and its changes. As a result, snow cover is one of the Global Climate Observing System (GCOS) essential climate variables (ECVs). Consistent, long-term datasets of snow cover are needed to study interannual variability and snow climatology. The NASA snow-cover datasets generated from the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra and Aqua spacecraft and the Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) are NASA Earth System Data Records (ESDR). The objective of the snow-cover detection algorithms is to optimize the accuracy of mapping snow-cover extent (SCE) and to minimize snow-cover detection errors of omission and commission using automated, globally applied algorithms to produce SCE data products. Advancements in snow-cover mapping have been made with each of the four major reprocessings of the MODIS data record, which extends from 2000 to the present. MODIS Collection 6 (C6) and VIIRS Collection 1 (C1) represent the state-of-the-art global snow cover mapping algorithms and products for NASA Earth science. There were many revisions made in the C6 algorithms which improved snow-cover detection accuracy and information content of the data products. These improvements have also been incorporated into the NASA VIIRS snow cover algorithms for C1. Both information content and usability were improved by including the Normalized Snow Difference Index (NDSI) and a quality assurance (QA) data array of algorithm processing flags in the data product, along with the SCE map. The increased data content allows flexibility in using the datasets for specific regions and end-user applications. Though there are important differences between the MODIS and VIIRS instruments (e.g., the VIIRS 375 m native resolution compared to MODIS 500 m), the snow detection algorithms and data products are designed to be as similar as possible so that the 16+ year MODIS ESDR of global SCE can be extended into the future with the S-NPP VIIRS snow products and with products from future Joint Polar Satellite System (JPSS) platforms. These NASA datasets are archived and accessible through the NASA Distributed Active Archive Center at the National Snow and Ice Data Center in Boulder, Colorado.

  17. Data-driven CT protocol review and management—experience from a large academic hospital.

    PubMed

    Zhang, Da; Savage, Cristy A; Li, Xinhua; Liu, Bob

    2015-03-01

    Protocol review plays a critical role in CT quality assurance, but large numbers of protocols and inconsistent protocol names on scanners and in exam records make thorough protocol review formidable. In this investigation, we report on a data-driven cataloging process that can be used to assist in the reviewing and management of CT protocols. We collected lists of scanner protocols, as well as 18 months of recent exam records, for 10 clinical scanners. We developed computer algorithms to automatically deconstruct the protocol names on the scanner and in the exam records into core names and descriptive components. Based on the core names, we were able to group the scanner protocols into a much smaller set of "core protocols," and to easily link exam records with the scanner protocols. We calculated the percentage of usage for each core protocol, from which the most heavily used protocols were identified. From the percentage-of-usage data, we found that, on average, 18, 33, and 49 core protocols per scanner covered 80%, 90%, and 95%, respectively, of all exams. These numbers are one order of magnitude smaller than the typical numbers of protocols that are loaded on a scanner (200-300, as reported in the literature). Duplicated, outdated, and rarely used protocols on the scanners were easily pinpointed in the cataloging process. The data-driven cataloging process can facilitate the task of protocol review. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
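
    The cataloging step, stripping descriptive modifiers down to a core name and computing percentage of usage, can be sketched with plain string processing; the modifier list and name formats below are illustrative and not the rules used at the authors' institution.

      import re
      from collections import Counter

      # Hypothetical modifiers that would be stripped when deriving a core protocol name.
      MODIFIERS = {"WITH", "WITHOUT", "W", "WO", "CONTRAST", "ROUTINE", "LOW", "DOSE"}

      def core_name(protocol_name):
          """Deconstruct a protocol name into its core name by dropping descriptive tokens."""
          tokens = re.split(r"[\s_/-]+", protocol_name.upper())
          return " ".join(t for t in tokens if t and t not in MODIFIERS)

      exam_records = ["CT Head without contrast", "CT_HEAD_W_CONTRAST", "CT Chest routine",
                      "CT chest low-dose", "CT Head WO contrast"]

      usage = Counter(core_name(name) for name in exam_records)
      total = sum(usage.values())
      for name, count in usage.most_common():
          print(f"{name}: {count} exams ({100 * count / total:.0f}% of usage)")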

  18. COVERING A CORE BY EXTRUSION

    DOEpatents

    Karnie, A.J.

    1963-07-16

    A method of covering a cylindrical fuel core with a cladding metal is described. The metal is forced between dies around the core from both ends in two opposing skirts, and as these meet the ends turn outward into an annular recess in the dies. By cutting off the raised portion formed by the recess, oxide impurities are eliminated. (AEC)

  19. Efficient sequential and parallel algorithms for record linkage

    PubMed Central

    Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar

    2014-01-01

    Background and objective Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Methods Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Results Our sequential and parallel algorithms have been tested on a real dataset of 1 083 878 records and synthetic datasets ranging in size from 50 000 to 9 000 000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). Conclusions We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm. PMID:24154837

  20. SIZE AND SURFACE AREA OF ICY DUST AGGREGATES AFTER A HEATING EVENT AT A PROTOPLANETARY NEBULA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sirono, Sin-iti

    2013-03-01

    The activity of a young star rises abruptly during an FU Orionis outburst. This event causes a temporary temperature increase in the protoplanetary nebula. H2O icy grains are sublimated by this event, and silicate cores embedded inside the ice are ejected. During the high-temperature phase, the silicate grains coagulate to form silicate core aggregates. After the heating event, the temperature drops, and the ice recondenses onto the aggregates. I determined numerically the size distribution of the ice-covered aggregates. The size of the aggregates exceeds 10 μm around the snow line. Because of the migration of the ice to large aggregates, only a small fraction of the silicate core aggregate is covered with H2O ice. After the heating event, the surface of an ice-covered aggregate is totally covered by silicate core aggregates. This might reduce the fragmentation velocity of aggregates when they collide. It is possible that the covering silicate cores shield the UV radiation field which induces photodissociation of H2O ice. This effect may cause the shortage of cold H2O vapor observed by Herschel.

  1. Enhanced encrypted reversible data hiding algorithm with minimum distortion through homomorphic encryption

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Rupali

    2018-03-01

    Reversible data hiding means embedding a secret message in a cover image in such a way that, upon extraction of the secret message, both the cover image and the secret message are recovered with no error. The goal of most reversible data hiding algorithms is an improved embedding rate and enhanced visual quality of the stego image. This paper employs an improved encrypted-domain reversible data hiding algorithm that embeds two binary bits in each gray pixel of the original cover image with minimum distortion of the stego-pixels. Highlights of the proposed algorithm are minimum distortion of pixel values, elimination of the underflow and overflow problems, and equivalence of the stego image and cover image with a PSNR of ∞ (for the Lena, Goldhill, and Barbara images). The experimental outcomes reveal that, in terms of average PSNR and embedding rate for natural images, the proposed algorithm performs better than other conventional ones.

  2. Precipitation from the GPM Microwave Imager and Constellation Radiometers

    NASA Astrophysics Data System (ADS)

    Kummerow, Christian; Randel, David; Kirstetter, Pierre-Emmanuel; Kulie, Mark; Wang, Nai-Yu

    2014-05-01

    Satellite precipitation retrievals from microwave sensors are fundamentally underconstrained, requiring either implicit or explicit a-priori information to constrain solutions. The radiometer algorithm designed for the GPM core and constellation satellites makes this a-priori information explicit in the form of a database of possible rain structures from the GPM core satellite and a Bayesian retrieval scheme. The a-priori database will eventually come from the GPM core satellite's combined radar/radiometer retrieval algorithm. That product is physically constrained to ensure radiometric consistency between the radars and radiometers and is thus ideally suited to create the a-priori databases for all radiometers in the GPM constellation. Until a robust product exists, however, the a-priori databases are being generated from the combination of existing sources over land and oceans. Over oceans, the Day-1 GPM radiometer algorithm uses the TRMM PR/TMI physically derived hydrometeor profiles that are available from the tropics through sea surface temperatures of approximately 285K. For colder sea surface temperatures, the existing profiles are used with lower hydrometeor layers removed to correspond to colder conditions. While not ideal, the results appear to be reasonable placeholders until the full GPM database can be constructed. It is more difficult to construct physically consistent profiles over land due to ambiguities in surface emissivities as well as details of the ice scattering that dominates brightness temperature signatures over land. Over land, the a-priori databases have therefore been constructed by matching satellite overpasses to surface radar data derived from the WSR-88 network over the continental United States through the National Mosaic and Multi-Sensor QPE (NMQ) initiative. Databases are generated as a function of land type (4 categories of increasing vegetation cover as well as 4 categories of increasing snow depth), land surface temperature and total precipitable water. One year of coincident observations, generating between 20 and 80 million database entries depending upon the sensor, is used in the retrieval algorithm. The remaining areas such as sea ice and high latitude coastal zones are filled with a combination of CloudSat and AMSR-E plus MHS observations together with a model to create the equivalent databases for other radiometers in the constellation. The most noteworthy result from the Day-1 algorithm is the quality of the land products when compared to existing products. Unlike previous versions of land algorithms that depended upon complex screening routines to decide if pixels were precipitating or not, the current scheme is free of conditional rain statements and appears to produce rain rates with much greater fidelity than previous schemes. These results will be shown.

  3. Mono and multi-objective optimization techniques applied to a large range of industrial test cases using Metamodel assisted Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Fourment, Lionel; Ducloux, Richard; Marie, Stéphane; Ejday, Mohsen; Monnereau, Dominique; Massé, Thomas; Montmitonnet, Pierre

    2010-06-01

    The use of material processing numerical simulation allows a strategy of trial and error to improve virtual processes without incurring material costs or interrupting production, and therefore saves a lot of money, but it requires user time to analyze the results, adjust the operating conditions and restart the simulation. Automatic optimization is the perfect complement to simulation. An evolutionary algorithm coupled with metamodelling makes it possible to obtain industrially relevant results on a very large range of applications within a few tens of simulations and without any specific automatic optimization technique knowledge. Ten industrial partners have been selected to cover the different areas of the mechanical forging industry and provide different examples of the forming simulation tools; the project aims to demonstrate this across the full range of applications. The large computational time is handled by a metamodel approach. It allows interpolating the objective function on the entire parameter space by only knowing the exact function values at a reduced number of "master points". Two algorithms are used: an evolution strategy combined with a Kriging metamodel and a genetic algorithm combined with a Meshless Finite Difference Method. The latter approach is extended to multi-objective optimization. The set of solutions, which corresponds to the best possible compromises between the different objectives, is then computed in the same way. The population-based approach allows the parallel capabilities of the computer to be used with high efficiency. An optimization module, fully embedded within the Forge2009 IHM, makes it possible to cover all the defined examples, and the use of new multi-core hardware to compute several simulations at the same time dramatically reduces the time needed. The presented examples demonstrate the method's versatility. They include billet shape optimization of a common rail, the cogging of a bar and a wire drawing problem.

  4. A Semi-Automated Machine Learning Algorithm for Tree Cover Delineation from 1-m Naip Imagery Using a High Performance Computing Architecture

    NASA Astrophysics Data System (ADS)

    Basu, S.; Ganguly, S.; Nemani, R. R.; Mukhopadhyay, S.; Milesi, C.; Votava, P.; Michaelis, A.; Zhang, G.; Cook, B. D.; Saatchi, S. S.; Boyda, E.

    2014-12-01

    Accurate tree cover delineation is a useful instrument in the derivation of Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree cover delineation in high to coarse resolution satellite imagery, but most of them do not scale to terabytes of data, typical in these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data as obtained from the National Agriculture Imagery Program (NAIP) for deriving tree cover estimates for the whole of Continental United States, using a High Performance Computing Architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field (CRF), which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by incorporating expert knowledge through the relabeling of misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 sq. miles. Our framework produced correct detection rates of around 85% for fragmented forests and 70% for urban tree cover areas, with false positive rates lower than 3% for both regions. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR high-resolution canopy height model shows the effectiveness of our algorithm in generating accurate high-resolution tree cover maps.

  5. Public Conceptions of Algorithms and Representations in the Common Core State Standards for Mathematics

    ERIC Educational Resources Information Center

    Nanna, Robert J.

    2016-01-01

    Algorithms and representations have been an important aspect of the work of mathematics, especially for understanding concepts and communicating ideas about concepts and mathematical relationships. They have played a key role in various mathematics standards documents, including the Common Core State Standards for Mathematics. However, there have…

  6. “Skin-Core-Skin” Structure of Polymer Crystallization Investigated by Multiscale Simulation

    PubMed Central

    Ruan, Chunlei

    2018-01-01

    “Skin-core-skin” structure is a typical crystal morphology in injection products. Previous numerical works have rarely focused on crystal evolution; rather, they have mostly been based on the prediction of temperature distribution or crystallization kinetics. The aim of this work was to achieve the “skin-core-skin” structure and investigate the role of external flow and temperature fields on crystal morphology. Therefore, the multiscale algorithm was extended to the simulation of polymer crystallization in a pipe flow. The multiscale algorithm contains two parts: a collocated finite volume method at the macroscopic level and a morphological Monte Carlo method at the microscopic level. The SIMPLE (semi-implicit method for pressure linked equations) algorithm was used to calculate the polymeric model at the macroscopic level, while the Monte Carlo method with stochastic birth-growth process of spherulites and shish-kebabs was used at the microscopic level. Results show that our algorithm is valid to predict “skin-core-skin” structure, and the initial melt temperature and the maximum velocity of melt at the inlet mainly affects the morphology of shish-kebabs. PMID:29659516

  7. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures

    NASA Astrophysics Data System (ADS)

    Romano, Paul Kollath

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N) whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes---in doing so it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented/tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs mit.edu)

  8. Options for Parallelizing a Planning and Scheduling Algorithm

    NASA Technical Reports Server (NTRS)

    Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin D.

    2011-01-01

    Space missions have a growing interest in putting multi-core processors onboard spacecraft. For many missions processing power significantly slows operations. We investigate how continual planning and scheduling algorithms can exploit multi-core processing and outline different potential design decisions for a parallelized planning architecture. This organization of choices and challenges helps us with an initial design for parallelizing the CASPER planning system for a mesh multi-core processor. This work extends that presented at another workshop with some preliminary results.

  9. Acceleration of the Particle Swarm Optimization for Peierls-Nabarro modeling of dislocations in conventional and high-entropy alloys

    NASA Astrophysics Data System (ADS)

    Pei, Zongrui; Eisenbach, Markus

    2017-06-01

    Dislocations are among the most important defects in determining the mechanical properties of both conventional alloys and high-entropy alloys. The Peierls-Nabarro model supplies an efficient pathway to their geometries and mobility. The difficulty in solving the integro-differential Peierls-Nabarro equation is how to effectively avoid the local minima in the energy landscape of a dislocation core. Among the available methods for optimizing dislocation core structures, we choose Particle Swarm Optimization, an algorithm that simulates the social behavior of organisms. By employing more particles (a bigger swarm) and more iterative steps (allowing them to explore for a longer time), the local minima can be effectively avoided, but at a higher computational cost. The advantage of this algorithm is that it is readily parallelized on modern high-performance computing architectures. We demonstrate that the performance of our parallelized algorithm scales linearly with the number of cores employed.
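
    A generic global-best Particle Swarm Optimization loop, here minimizing a simple shifted sphere function rather than the Peierls-Nabarro energy functional, looks like the sketch below; the swarm size, inertia, and acceleration coefficients are common textbook defaults, not the paper's settings.

      import numpy as np

      def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
              w=0.7, c1=1.5, c2=1.5, seed=0):
          """Minimize f over a box with a basic global-best Particle Swarm Optimization."""
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          x = rng.uniform(lo, hi, (n_particles, dim))        # positions
          v = np.zeros_like(x)                               # velocities
          pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
          g = pbest[np.argmin(pbest_val)].copy()             # global best position
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              vals = np.apply_along_axis(f, 1, x)
              improved = vals < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], vals[improved]
              g = pbest[np.argmin(pbest_val)].copy()
          return g, float(pbest_val.min())

      # Example: the swarm locates the minimum of a shifted sphere function at (1, 2, 3).
      best_x, best_f = pso(lambda z: float(np.sum((z - np.array([1.0, 2.0, 3.0])) ** 2)), dim=3)
      print(best_x.round(3), best_f)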

  10. Design Considerations for a Computationally-Lightweight Authentication Mechanism for Passive RFID Tags

    DTIC Science & Technology

    2009-09-01

    suffer the power and complexity requirements of a public key system. In [18], a simulation of the SHA-1 algorithm is performed on a Xilinx FPGA ... 256 bits. Thus, the construction of a hash table would need 2512 independent comparisons. It is known that hash collisions of the SHA-1 algorithm... SHA-1 algorithm for small-core FPGA design. Small-core FPGA design is the process by which a circuit is adapted to use the minimal amount of logic

  11. Mathematical Foundation for Plane Covering Using Hexagons

    NASA Technical Reports Server (NTRS)

    Johnson, Gordon G.

    1999-01-01

    This work describes the development and mathematical underpinnings of the algorithms previously developed for covering the plane and for addressing the elements of the covering. The algorithms are of interest in that they provide a simple systematic way of increasing or decreasing resolution, in the sense that if we have the covering in place and there is an image superimposed upon the covering, then we may view the image in a rough form or in a very detailed form with minimal effort. Such an ability allows for quick searches of crude forms to determine a class in which to make a detailed search. In addition, the addressing algorithms provide an efficient way to process large data sets that have related subsets. The algorithms produced were based in part upon the work of D. Lucas "A Multiplication in N Space", which suggested a set of three vectors, any two of which would serve as a basis for the plane, and also that the hexagon is the natural geometric object to be used in a covering with the suggested basis. The second portion is a refinement of the eyeball vision system, the globular viewer.

  12. Overview of NASA's MODIS and Visible Infrared Imaging Radiometer Suite (VIIRS) snow-cover Earth System Data Records

    NASA Astrophysics Data System (ADS)

    Riggs, George A.; Hall, Dorothy K.; Román, Miguel O.

    2017-10-01

    Knowledge of the distribution, extent, duration and timing of snowmelt is critical for characterizing the Earth's climate system and its changes. As a result, snow cover is one of the Global Climate Observing System (GCOS) essential climate variables (ECVs). Consistent, long-term datasets of snow cover are needed to study interannual variability and snow climatology. The NASA snow-cover datasets generated from the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra and Aqua spacecraft and the Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) are NASA Earth System Data Records (ESDR). The objective of the snow-cover detection algorithms is to optimize the accuracy of mapping snow-cover extent (SCE) and to minimize snow-cover detection errors of omission and commission using automated, globally applied algorithms to produce SCE data products. Advancements in snow-cover mapping have been made with each of the four major reprocessings of the MODIS data record, which extends from 2000 to the present. MODIS Collection 6 (C6; https://nsidc.org/data/modis/data_summaries) and VIIRS Collection 1 (C1; https://doi.org/10.5067/VIIRS/VNP10.001) represent the state-of-the-art global snow-cover mapping algorithms and products for NASA Earth science. There were many revisions made in the C6 algorithms which improved snow-cover detection accuracy and information content of the data products. These improvements have also been incorporated into the NASA VIIRS snow-cover algorithms for C1. Both information content and usability were improved by including the Normalized Snow Difference Index (NDSI) and a quality assurance (QA) data array of algorithm processing flags in the data product, along with the SCE map. The increased data content allows flexibility in using the datasets for specific regions and end-user applications. Though there are important differences between the MODIS and VIIRS instruments (e.g., the VIIRS 375 m native resolution compared to MODIS 500 m), the snow detection algorithms and data products are designed to be as similar as possible so that the 16+ year MODIS ESDR of global SCE can be extended into the future with the S-NPP VIIRS snow products and with products from future Joint Polar Satellite System (JPSS) platforms. These NASA datasets are archived and accessible through the NASA Distributed Active Archive Center at the National Snow and Ice Data Center in Boulder, Colorado.

  13. Coverability graphs for a class of synchronously executed unbounded Petri nets

    NASA Technical Reports Server (NTRS)

    Stotts, P. David; Pratt, Terrence W.

    1990-01-01

    After detailing a variant of the concurrent-execution rule for firing of maximal subsets, in which the simultaneous firing of conflicting transitions is prohibited, an algorithm is constructed for generating the coverability graph of a net executed under this synchronous firing rule. The omega insertion criteria in the algorithm are shown to be valid for any net on which the algorithm terminates. It is accordingly shown that the set of nets on which the algorithm terminates includes the 'conflict-free' class.

  14. A matrix-algebraic formulation of distributed-memory maximal cardinality matching algorithms in bipartite graphs

    DOE PAGES

    Azad, Ariful; Buluç, Aydın

    2016-05-16

    We describe parallel algorithms for computing maximal cardinality matching in a bipartite graph on distributed-memory systems. Unlike traditional algorithms that match one vertex at a time, our algorithms process many unmatched vertices simultaneously using a matrix-algebraic formulation of maximal matching. This generic matrix-algebraic framework is used to develop three efficient maximal matching algorithms with minimal changes. The newly developed algorithms have two benefits over existing graph-based algorithms. First, unlike existing parallel algorithms, the cardinality of matching obtained by the new algorithms stays constant with increasing processor counts, which is important for predictable and reproducible performance. Second, relying on bulk-synchronous matrix operations, these algorithms expose a higher degree of parallelism on distributed-memory platforms than existing graph-based algorithms. We report high-performance implementations of three maximal matching algorithms using hybrid OpenMP-MPI and evaluate the performance of these algorithms using more than 35 real and randomly generated graphs. On real instances, our algorithms achieve up to 200× speedup on 2048 cores of a Cray XC30 supercomputer. Even higher speedups are obtained on larger synthetically generated graphs, where our algorithms show good scaling on up to 16,384 cores.
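
    For orientation, the serial baseline that the matrix-algebraic formulation is designed to replace is the classic one-edge-at-a-time greedy construction of a maximal (not necessarily maximum) matching. The sketch below is only that baseline, not the distributed algorithm of the paper, and the edge-list representation is an assumption.

    ```python
    # One-edge-at-a-time greedy maximal matching in a bipartite graph: the serial
    # pattern that the paper's bulk-synchronous, matrix-algebraic algorithms avoid.
    def greedy_maximal_matching(edges):
        """Return a maximal matching as a dict {left_vertex: right_vertex}."""
        matched_left, matched_right = set(), set()
        matching = {}
        for u, v in edges:                       # one edge considered at a time
            if u not in matched_left and v not in matched_right:
                matching[u] = v
                matched_left.add(u)
                matched_right.add(v)
        return matching

    edges = [("a", 1), ("a", 2), ("b", 1), ("c", 3)]
    print(greedy_maximal_matching(edges))        # -> {'a': 1, 'c': 3} (maximal, not maximum)
    ```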

  15. Incorporating Added Sugar Improves the Performance of the Health Star Rating Front-of-Pack Labelling System in Australia

    PubMed Central

    Peters, Sanne A. E.; Jones, Alexandra; Crino, Michelle; Taylor, Fraser; Woodward, Mark; Neal, Bruce

    2017-01-01

    Background: The Health Star Rating (HSR) is an interpretive front-of-pack labelling system that rates the overall nutritional profile of packaged foods. The algorithm underpinning the HSR includes total sugar content as one of the components. This has been criticised because intrinsic sugars naturally present in dairy, fruits, and vegetables are treated the same as sugars added during food processing. We assessed whether the HSR could better discriminate between core and discretionary foods by including added sugar in the underlying algorithm. Methods: Nutrition information was extracted for 34,135 packaged foods available in The George Institute’s Australian FoodSwitch database. Added sugar levels were imputed from food composition databases. Products were classified as ‘core’ or ‘discretionary’ based on the Australian Dietary Guidelines. The ability of each of the nutrients included in the HSR algorithm, as well as added sugar, to discriminate between core and discretionary foods was estimated using the area under the curve (AUC). Results: 15,965 core and 18,350 discretionary foods were included. Of these, 8230 (52%) core foods and 15,947 (87%) discretionary foods contained added sugar. Median (Q1, Q3) HSRs were 4.0 (3.0, 4.5) for core foods and 2.0 (1.0, 3.0) for discretionary foods. Median added sugar contents (g/100 g) were 3.3 (1.5, 5.5) for core foods and 14.6 (1.8, 37.2) for discretionary foods. Of all the nutrients used in the current HSR algorithm, total sugar had the greatest individual capacity to discriminate between core and discretionary foods; AUC 0.692 (0.686; 0.697). Added sugar alone achieved an AUC of 0.777 (0.772; 0.782). A model with all nutrients in the current HSR algorithm had an AUC of 0.817 (0.812; 0.821), which increased to 0.871 (0.867; 0.874) with inclusion of added sugar. Conclusion: The HSR nutrients discriminate well between core and discretionary packaged foods. However, discrimination was improved when added sugar was also included. These data argue for inclusion of added sugar in an updated HSR algorithm and declaration of added sugar as part of mandatory nutrient declarations. PMID:28678187
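
    A minimal sketch of the kind of discrimination analysis described above: the AUC of a single nutrient compared with a multi-nutrient logistic model for separating core from discretionary foods. The toy data, feature choices and modelling details are assumptions, not the FoodSwitch dataset or the exact HSR analysis.

    ```python
    # AUC of a single nutrient vs. a combined logistic model on simulated labels.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 1000
    is_core = rng.integers(0, 2, n)                              # 1 = core, 0 = discretionary
    added_sugar = rng.gamma(2.0, 3.0, n) + 10 * (1 - is_core)    # discretionary foods sweeter (toy)
    sodium = rng.gamma(2.0, 50.0, n) + 100 * (1 - is_core)       # and saltier (toy)

    # Single-nutrient discrimination: higher added sugar indicates "discretionary",
    # so we score the negated value when predicting the "core" label.
    print("added sugar alone:", roc_auc_score(is_core, -added_sugar))

    # Multi-nutrient model, analogous to combining all HSR nutrients.
    X = np.column_stack([added_sugar, sodium])
    model = LogisticRegression(max_iter=1000).fit(X, is_core)
    print("combined model:  ", roc_auc_score(is_core, model.predict_proba(X)[:, 1]))
    ```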

  16. Detection of honeycomb cell walls from measurement data based on Harris corner detection algorithm

    NASA Astrophysics Data System (ADS)

    Qin, Yan; Dong, Zhigang; Kang, Renke; Yang, Jie; Ayinde, Babajide O.

    2018-06-01

    A honeycomb core is a discontinuous material with a thin-wall structure—a characteristic that makes accurate surface measurement difficult. This paper presents a cell wall detection method based on the Harris corner detection algorithm using laser measurement data. The vertexes of honeycomb cores are recognized with two different methods: one method is the reduction of data density, and the other is the optimization of the threshold of the Harris corner detection algorithm. Each cell wall is then identified in accordance with the neighboring relationships of its vertexes. Experiments were carried out for different types and surface shapes of honeycomb cores, where the proposed method was proved effective in dealing with noise due to burrs and/or deformation of cell walls.
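
    A minimal sketch of Harris-corner-based vertex detection of the sort described above, applied to a rasterised measurement image. The downsampling factor (standing in for data-density reduction), the Harris parameters and the response threshold are illustrative assumptions, not the paper's tuned values.

    ```python
    # Harris corner detection for locating cell-wall vertices in a grayscale image.
    import cv2
    import numpy as np

    def detect_vertices(gray, downsample=2, block_size=2, ksize=3, k=0.04, rel_thresh=0.01):
        """Return (row, col) coordinates of Harris responses above a relative threshold."""
        small = gray[::downsample, ::downsample]            # crude data-density reduction
        response = cv2.cornerHarris(np.float32(small), block_size, ksize, k)
        mask = response > rel_thresh * response.max()
        rows, cols = np.nonzero(mask)
        return np.column_stack([rows, cols]) * downsample   # back to original pixel scale

    if __name__ == "__main__":
        # Synthetic test image: a bright 'L'-shaped wall produces corner responses.
        img = np.zeros((200, 200), dtype=np.uint8)
        img[100, 50:150] = 255
        img[50:100, 100] = 255
        print(detect_vertices(img)[:5])
    ```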

  17. CQPSO scheduling algorithm for heterogeneous multi-core DAG task model

    NASA Astrophysics Data System (ADS)

    Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng

    2017-07-01

    Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. The paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list was built, and the processor with the minimum cumulative earliest finish time (EFT) was selected as the target of the first task assignment. The task precedence relationships were satisfied and the total execution time of all tasks was minimized. The experimental results show that the proposed algorithm has strong optimization ability, is simple and feasible, converges quickly, and can be applied to task scheduling optimization in other heterogeneous and distributed environments.

  18. Test Scheduling for Core-Based SOCs Using Genetic Algorithm Based Heuristic Approach

    NASA Astrophysics Data System (ADS)

    Giri, Chandan; Sarkar, Soumojit; Chattopadhyay, Santanu

    This paper presents a Genetic algorithm (GA) based solution to co-optimize test scheduling and wrapper design for core based SOCs. Core testing solutions are generated as a set of wrapper configurations, represented as rectangles with width equal to the number of TAM (Test Access Mechanism) channels and height equal to the corresponding testing time. A locally optimal best-fit heuristic based bin packing algorithm has been used to determine the placement of rectangles minimizing the overall test times, whereas GA has been utilized to generate the sequence of rectangles to be considered for placement. Experimental results on ITC'02 benchmark SOCs show that the proposed method provides better solutions compared to the recent works reported in the literature.

  19. MC64-ClustalWP2: A Highly-Parallel Hybrid Strategy to Align Multiple Sequences in Many-Core Architectures

    PubMed Central

    Díaz, David; Esteban, Francisco J.; Hernández, Pilar; Caballero, Juan Antonio; Guevara, Antonio

    2014-01-01

    We have developed the MC64-ClustalWP2 as a new implementation of the Clustal W algorithm, integrating a novel parallelization strategy and significantly increasing the performance when aligning long sequences in architectures with many cores. It must be stressed that in such a process, the detailed analysis of both the software and hardware features and peculiarities is of paramount importance to reveal key points to exploit and optimize the full potential of parallelism in many-core CPU systems. The new parallelization approach has focused on the most time-consuming stages of this algorithm. In particular, the so-called progressive alignment has drastically improved the performance, due to a fine-grained approach where the forward and backward loops were unrolled and parallelized. Another key approach has been the implementation of the new algorithm in a hybrid-computing system, integrating both an Intel Xeon multi-core CPU and a Tilera Tile64 many-core card. A comparison with other Clustal W implementations reveals the high performance of the new algorithm and strategy in many-core CPU architectures, in a scenario where the sequences to align are relatively long (more than 10 kb) and, hence, a many-core GPU hardware cannot be used. Thus, the MC64-ClustalWP2 runs multiple alignments more than 18x faster than the original Clustal W algorithm, and more than 7x faster than the best x86 parallel implementation to date, being publicly available through a web service. Besides, these developments have been deployed in cost-effective personal computers and should be useful for life-science researchers, including the identification of identities and differences for mutation/polymorphism analyses, biodiversity and evolutionary studies and for the development of molecular markers for paternity testing, germplasm management and protection, to assist breeding, illegal traffic control, fraud prevention and for the protection of the intellectual property (identification/traceability), including the protected designation of origin, among other applications. PMID:24710354

  20. Combinatorial optimization games

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, X.; Ibaraki, Toshihide; Nagamochi, Hiroshi

    1997-06-01

    We introduce a general integer programming formulation for a class of combinatorial optimization games, which immediately allows us to improve the algorithmic result for finding imputations in the core (an important solution concept in cooperative game theory) of the network flow game on simple networks by Kalai and Zemel. An interesting result is a general theorem that the core for this class of games is nonempty if and only if a related linear program has an integer optimal solution. We study the properties for this mathematical condition to hold for several interesting problems, and apply them to resolve algorithmic and complexity issues for their cores along the following lines: decide whether the core is empty; if the core is not empty, find an imputation in the core; given an imputation x, test whether x is in the core. We also explore the properties of totally balanced games in this succinct formulation of cooperative games.
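
    For reference, the core referred to above is the standard solution concept of cooperative game theory: for a game (N, v) it is the set of payoff vectors that distribute v(N) exactly and that no coalition can improve upon.

    ```latex
    % The core of a cooperative game (N, v): efficient payoff vectors that no
    % coalition S can improve upon.
    \[
      \mathrm{Core}(N, v) \;=\;
      \Bigl\{\, x \in \mathbb{R}^{N} :
        \sum_{i \in N} x_i = v(N),
        \quad
        \sum_{i \in S} x_i \ge v(S) \ \ \forall\, S \subseteq N
      \,\Bigr\}.
    \]
    ```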

  1. TAMEE: data management and analysis for tissue microarrays.

    PubMed

    Thallinger, Gerhard G; Baumgartner, Kerstin; Pirklbauer, Martin; Uray, Martina; Pauritsch, Elke; Mehes, Gabor; Buck, Charles R; Zatloukal, Kurt; Trajanoski, Zlatko

    2007-03-07

    With the introduction of tissue microarrays (TMAs) researchers can investigate gene and protein expression in tissues on a high-throughput scale. TMAs generate a wealth of data calling for extended, high level data management. Enhanced data analysis and systematic data management are required for traceability and reproducibility of experiments and provision of results in a timely and reliable fashion. Robust and scalable applications have to be utilized, which allow secure data access, manipulation and evaluation for researchers from different laboratories. TAMEE (Tissue Array Management and Evaluation Environment) is a web-based database application for the management and analysis of data resulting from the production and application of TMAs. It facilitates storage of production and experimental parameters, of images generated throughout the TMA workflow, and of results from core evaluation. Database content consistency is achieved using structured classifications of parameters. This allows the extraction of high quality results for subsequent biologically-relevant data analyses. Tissue cores in the images of stained tissue sections are automatically located and extracted and can be evaluated using a set of predefined analysis algorithms. Additional evaluation algorithms can be easily integrated into the application via a plug-in interface. Downstream analysis of results is facilitated via a flexible query generator. We have developed an integrated system tailored to the specific needs of research projects using high density TMAs. It covers the complete workflow of TMA production, experimental use and subsequent analysis. The system is freely available for academic and non-profit institutions from http://genome.tugraz.at/Software/TAMEE.

  2. An Improved SoC Test Scheduling Method Based on Simulated Annealing Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Jingjing; Shen, Zhihang; Gao, Huaien; Chen, Bianna; Zheng, Weida; Xiong, Xiaoming

    2017-02-01

    In this paper, we propose an improved SoC test scheduling method based on the simulated annealing algorithm (SA). The method first perturbs the IP core assignment for each TAM to produce a new solution for SA, allocates the TAM width for each TAM using a greedy algorithm, and calculates the corresponding testing time; the core assignment is then accepted or rejected according to the simulated annealing acceptance criterion until the optimum solution is attained. We ran the test scheduling experiment with the international reference circuits provided by the International Test Conference 2002 (ITC'02), and the results show that our algorithm is superior to the conventional integer linear programming algorithm (ILP), simulated annealing algorithm (SA) and genetic algorithm (GA). When the TAM width reaches 48, 56 and 64, the testing time based on our algorithm is less than that of the classic methods, and the optimization rates are 30.74%, 3.32% and 16.13%, respectively. Moreover, the testing time based on our algorithm is very close to that of the improved genetic algorithm (IGA), which is state-of-the-art at present.
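
    A skeletal version of the loop described above, assuming a placeholder cost function in place of the paper's greedy TAM-width allocation: perturb the core-to-TAM assignment, evaluate the resulting test time, and accept by the Metropolis criterion. All parameters and the toy cost are assumptions.

    ```python
    # Simulated-annealing skeleton for core-to-TAM assignment.
    import math
    import random

    def evaluate_test_time(assignment):
        """Placeholder cost: each TAM's time is the sum of its cores' test lengths."""
        per_tam = {}
        for core, tam in assignment.items():
            per_tam[tam] = per_tam.get(tam, 0) + core_test_len[core]
        return max(per_tam.values())              # overall schedule length

    def perturb(assignment, n_tams):
        """Move one randomly chosen core to a randomly chosen TAM."""
        new = dict(assignment)
        core = random.choice(list(new))
        new[core] = random.randrange(n_tams)
        return new

    def anneal(n_cores, n_tams, t0=100.0, cooling=0.95, steps=2000):
        current = {c: random.randrange(n_tams) for c in range(n_cores)}
        best, t = current, t0
        for _ in range(steps):
            candidate = perturb(current, n_tams)
            delta = evaluate_test_time(candidate) - evaluate_test_time(current)
            if delta <= 0 or random.random() < math.exp(-delta / t):   # Metropolis criterion
                current = candidate
                if evaluate_test_time(current) < evaluate_test_time(best):
                    best = current
            t *= cooling
        return best, evaluate_test_time(best)

    random.seed(1)
    core_test_len = {c: random.randint(50, 500) for c in range(20)}    # toy test lengths
    print(anneal(n_cores=20, n_tams=4))
    ```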

  3. Efficient Scalable Median Filtering Using Histogram-Based Operations.

    PubMed

    Green, Oded

    2018-05-01

    Median filtering is a smoothing technique for noise removal in images. While there are various implementations of median filtering for a single-core CPU, there are few implementations for accelerators and multi-core systems. Many parallel implementations of median filtering use a sorting algorithm for rearranging the values within a filtering window and taking the median of the sorted values. While using sorting algorithms allows for simple parallel implementations, the cost of the sorting becomes prohibitive as the filtering windows grow. This makes such algorithms, sequential and parallel alike, inefficient. In this work, we introduce the first software parallel median filtering that is non-sorting-based. The new algorithm uses efficient histogram-based operations. These reduce the computational requirements of the new algorithm while also accessing the image fewer times. We show an implementation of our algorithm for both the CPU and NVIDIA's CUDA-supported graphics processing unit (GPU). The new algorithm is compared with several other leading CPU and GPU implementations. The CPU implementation has near perfect linear scaling with a speedup on a quad-core system. The GPU implementation is several orders of magnitude faster than the other GPU implementations for mid-size median filters. For small kernels, comparison-based approaches are preferable as fewer operations are required. Lastly, the new algorithm is open-source and can be found in the OpenCV library.
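
    To illustrate the histogram idea in one dimension: keep a running 256-bin histogram of the window and read the median by accumulating counts, so no per-window sort is required. The paper's algorithm is the scalable, parallel, two-dimensional generalisation of this; the sketch below only shows the principle for 8-bit data.

    ```python
    # Running-histogram median filtering of a 1-D, 8-bit signal (no sorting).
    import numpy as np

    def median_from_hist(hist, window_len):
        """Smallest intensity whose cumulative count passes the middle of the window."""
        target = (window_len + 1) // 2
        running = 0
        for value in range(256):
            running += hist[value]
            if running >= target:
                return value
        return 255

    def median_filter_1d(signal, radius=2):
        signal = np.asarray(signal, dtype=np.uint8)
        padded = np.pad(signal, radius, mode="edge")
        window_len = 2 * radius + 1
        hist = np.zeros(256, dtype=int)
        for v in padded[:window_len]:                 # histogram of the first window
            hist[v] += 1
        out = np.empty_like(signal)
        out[0] = median_from_hist(hist, window_len)
        for i in range(1, len(signal)):               # slide: one bin out, one bin in
            hist[padded[i - 1]] -= 1
            hist[padded[i + window_len - 1]] += 1
            out[i] = median_from_hist(hist, window_len)
        return out

    noisy = np.array([10, 10, 200, 10, 10, 10, 10], dtype=np.uint8)
    print(median_filter_1d(noisy, radius=1))          # the 200 spike is removed
    ```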

  4. LArSoft: toolkit for simulation, reconstruction and analysis of liquid argon TPC neutrino detectors

    NASA Astrophysics Data System (ADS)

    Snider, E. L.; Petrillo, G.

    2017-10-01

    LArSoft is a set of detector-independent software tools for the simulation, reconstruction and analysis of data from liquid argon (LAr) neutrino experiments. The common features of LAr time projection chambers (TPCs) enable sharing of algorithm code across detectors of very different size and configuration. LArSoft is currently used in production simulation and reconstruction by the ArgoNeuT, DUNE, LArlAT, MicroBooNE, and SBND experiments. The software suite offers a wide selection of algorithms and utilities, including those for associated photo-detectors and the handling of auxiliary detectors outside the TPCs. Available algorithms cover the full range of simulation and reconstruction, from raw waveforms to high-level reconstructed objects, event topologies and classification. The common code within LArSoft is contributed by adopting experiments, which also provide detector-specific geometry descriptions, and code for the treatment of electronic signals. LArSoft is also a collaboration of experiments, Fermilab and associated software projects which cooperate in setting requirements, priorities, and schedules. In this talk, we outline the general architecture of the software and the interaction with external libraries and detector-specific code. We also describe the dynamics of LArSoft software development between the contributing experiments, the projects supporting the software infrastructure LArSoft relies on, and the core LArSoft support project.

  5. MIMO signal progressing with RLSCMA algorithm for multi-mode multi-core optical transmission system

    NASA Astrophysics Data System (ADS)

    Bi, Yuan; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Zhang, Qi; Wang, Yong-jun; Tian, Qing-hua; Tian, Feng; Mao, Ya-ya

    2018-01-01

    When signals are transmitted over multi-mode multi-core fiber, mode coupling occurs between modes, and mode dispersion also occurs because each mode has a different transmission speed in the link. Mode coupling and mode dispersion damage the useful signal in the transmission link, so the receiver needs to process the received signal with digital signal processing and compensate for the damage introduced in the link. We first analyze the influence of mode coupling and mode dispersion in the process of transmitting signals over multi-mode multi-core fiber, and then present the relationship between the coupling coefficient and the dispersion coefficient. We then carry out adaptive signal processing with MIMO equalizers based on the recursive least squares constant modulus algorithm (RLSCMA). The MIMO equalization algorithm offers adaptive equalization taps according to the degree of crosstalk in cores or modes, which eliminates the interference among different modes and cores in the space division multiplexing (SDM) transmission system. The simulation results show that the distorted signals are restored efficiently with fast convergence speed.
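
    For illustration only, a stochastic-gradient constant-modulus equalizer for a single stream; the paper uses the recursive least squares variant (RLSCMA) and a full MIMO tap matrix, neither of which is reproduced here, and the tap count, step size and toy channel below are assumptions.

    ```python
    # Blind constant-modulus (CMA) equalization, LMS-style gradient update.
    import numpy as np

    def cma_equalize(received, n_taps=7, mu=1e-3, r2=1.0):
        """Blind CMA equalization of one received stream of unit-modulus symbols."""
        w = np.zeros(n_taps, dtype=complex)
        w[n_taps // 2] = 1.0                          # centre-spike initialisation
        out = np.zeros(len(received), dtype=complex)
        for n in range(n_taps, len(received)):
            x = received[n - n_taps:n][::-1]          # tap-delay-line contents
            y = np.dot(w, x)                          # equalizer output
            e = (np.abs(y) ** 2 - r2) * y             # CMA error term
            w = w - mu * e * np.conj(x)               # stochastic gradient on (|y|^2 - R2)^2
            out[n] = y
        return out

    rng = np.random.default_rng(0)
    symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=5000) / np.sqrt(2)
    received = np.convolve(symbols, [1.0, 0.3 + 0.2j])[: len(symbols)]   # toy ISI channel
    equalized = cma_equalize(received)
    # Average output power of the last symbols; expected to move toward r2 = 1.
    print(round(float(np.mean(np.abs(equalized[-1000:]) ** 2)), 3))
    ```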

  6. Case for a field-programmable gate array multicore hybrid machine for an image-processing application

    NASA Astrophysics Data System (ADS)

    Rakvic, Ryan N.; Ives, Robert W.; Lira, Javier; Molina, Carlos

    2011-01-01

    General purpose computer designers have recently begun adding cores to their processors in order to increase performance. For example, Intel has adopted a homogeneous quad-core processor as a base for general purpose computing. PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at a high level. Can modern image-processing algorithms utilize these additional cores? On the other hand, modern advancements in configurable hardware, most notably field-programmable gate arrays (FPGAs) have created an interesting question for general purpose computer designers. Is there a reason to combine FPGAs with multicore processors to create an FPGA multicore hybrid general purpose computer? Iris matching, a repeatedly executed portion of a modern iris-recognition algorithm, is parallelized on an Intel-based homogeneous multicore Xeon system, a heterogeneous multicore Cell system, and an FPGA multicore hybrid system. Surprisingly, the cheaper PS3 slightly outperforms the Intel-based multicore on a core-for-core basis. However, both multicore systems are beaten by the FPGA multicore hybrid system by >50%.

  7. A NUMERICAL ALGORITHM FOR MODELING MULTIGROUP NEUTRINO-RADIATION HYDRODYNAMICS IN TWO SPATIAL DIMENSIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swesty, F. Douglas; Myra, Eric S.

    It is now generally agreed that multidimensional, multigroup, neutrino-radiation hydrodynamics (RHD) is an indispensable element of any realistic model of stellar-core collapse, core-collapse supernovae, and proto-neutron star instabilities. We have developed a new, two-dimensional, multigroup algorithm that can model neutrino-RHD flows in core-collapse supernovae. Our algorithm uses an approach similar to the ZEUS family of algorithms, originally developed by Stone and Norman. However, this completely new implementation extends that previous work in three significant ways: first, we incorporate multispecies, multigroup RHD in a flux-limited-diffusion approximation. Our approach is capable of modeling pair-coupled neutrino-RHD, and includes effects of Pauli blocking in the collision integrals. Blocking gives rise to nonlinearities in the discretized radiation-transport equations, which we evolve implicitly in time. We employ parallelized Newton-Krylov methods to obtain a solution of these nonlinear, implicit equations. Our second major extension to the ZEUS algorithm is the inclusion of an electron conservation equation that describes the evolution of electron-number density in the hydrodynamic flow. This permits calculating deleptonization of a stellar core. Our third extension modifies the hydrodynamics algorithm to accommodate realistic, complex equations of state, including those having nonconvex behavior. In this paper, we present a description of our complete algorithm, giving sufficient details to allow others to implement, reproduce, and extend our work. Finite-differencing details are presented in appendices. We also discuss implementation of this algorithm on state-of-the-art, parallel-computing architectures. Finally, we present results of verification tests that demonstrate the numerical accuracy of this algorithm on diverse hydrodynamic, gravitational, radiation-transport, and RHD sample problems. We believe our methods to be of general use in a variety of model settings where radiation transport or RHD is important. Extension of this work to three spatial dimensions is straightforward.

  8. Extracting paleo-climate signals from sediment laminae: A new, automated image processing method

    NASA Astrophysics Data System (ADS)

    Gan, S. Q.; Scholz, C. A.

    2010-12-01

    Lake sediment laminations commonly represent depositional seasonality in lacustrine environments. Their occurrence and quantitative attributes contain various signals of their depositional environment, limnological conditions and climate. However, the identification and measurement of laminae remains a mainly manual process that is not only tedious and labor intensive, but also subjective and error prone. We present a batch method to identify laminae and extract lamina properties automatically and accurately from sediment core images. Our algorithm is focused on image enhancement that improves the signal-to-noise ratio and maximizes and normalizes image contrast. The unique feature of these algorithms is that they are all direction-sensitive, i.e., the algorithms treat images in the horizontal direction and vertical direction differently and independently. The core process of lamina identification is to use a one-dimensional (1-D) lamina identification algorithm to produce a lamina map, and to use image blob analyses and lamina connectivity analyses to aggregate and smash two-dimensional (2-D) lamina data for the best representation of fine-scale stratigraphy in the sediment profile. The primary output datasets of the system are definitions of laminae and primary color values for each pixel and each lamina in the depth direction; other derived datasets can be retrieved at users’ discretion. Sediment core images from Lake Hitchcock, USA and Lake Bosumtwi, Ghana, were used for algorithm development and testing. As a demonstration of the utility of the software, we processed sediment core images from the top 50 meters of drill core (representing the past ~100 ky) from Lake Bosumtwi, Ghana.
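
    A sketch of the 1-D lamina identification step, assuming a column-averaged brightness profile and SciPy peak detection; the direction-sensitive enhancement, blob analysis and lamina-connectivity steps of the actual system are not reproduced, and the prominence value and synthetic profile are illustrative.

    ```python
    # Down-core brightness profile with light/dark lamina centres picked as peaks.
    import numpy as np
    from scipy.signal import find_peaks

    def lamina_centres(image, prominence=5.0):
        """Return indices (down-core pixel rows) of light and dark lamina centres."""
        profile = image.mean(axis=1)                       # average across the core width
        light, _ = find_peaks(profile, prominence=prominence)
        dark, _ = find_peaks(-profile, prominence=prominence)
        return light, dark

    # Synthetic "core image": sinusoidal light/dark banding plus noise.
    rows = np.arange(600)
    profile = 128 + 40 * np.sin(2 * np.pi * rows / 30)
    image = profile[:, None] + np.random.default_rng(0).normal(0, 3, size=(600, 80))
    light, dark = lamina_centres(image)
    print(len(light), len(dark))                           # roughly one pair per 30-pixel couplet
    ```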

  9. [Site selection of nature reserve based on the self-learning tabu search algorithm with space-ecology set covering problem: An example from Daiyun Mountain, Southeast China].

    PubMed

    Huang, Jia Hang; Liu, Jin Fu; Lin, Zhi Wei; Zheng, Shi Qun; He, Zhong Sheng; Zhang, Hui Guang; Li, Wen Zhou

    2017-01-01

    Designing nature reserves is an effective approach to protecting biodiversity. The traditional approaches to designing nature reserves could only identify the core area for protecting the species, without specifying an appropriate land area for the nature reserve. Site selection approaches, which are based on mathematical models, can select part of the land from the planning area to compose the nature reserve and to protect specific species or ecosystems; they are useful approaches for alleviating the contradiction between ecological protection and development. The existing site selection methods do not consider the ecological differences among units and have a computational-efficiency bottleneck in the optimization algorithm. In this study, we first constructed an ecological value assessment system appropriate for forest ecosystems, which was used for calculating the ecological value of Daiyun Mountain and for drawing its distribution map. Then, the Ecological Set Covering Problem (ESCP) was established by integrating the ecological values, and the Space-ecology Set Covering Problem (SSCP) was generated by adding spatial compactness to the ESCP. Finally, the STS algorithm, which possesses good optimizing performance, was utilized to search for the approximate optimal solution under diverse protection targets, and the optimized solution for the built-up area of Daiyun Mountain was proposed. According to the experimental results, the difference in ecological values across the spatial distribution was obvious. The ecological value of the sites selected by ESCP was higher than that of SCP, and SSCP could aggregate the sites with high ecological value based on ESCP; the level of aggregation increased with the weight of the perimeter. We suggested that the range of the existing reserve could be expanded by about 136 km² and that the site of Tsuga longibracteata, located in the northwest of the study area, should be included. Our research aimed at providing an optimization scheme for the sustainable development of the Daiyun Mountain nature reserve and the optimal allocation of land resources, and a novel idea for designing nature reserves for forest ecosystems in China.
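
    For orientation, the plain weighted set covering problem that ESCP and SSCP extend asks for a minimum-cost subset of sites whose species sets cover all target species. The textbook greedy heuristic below is only a baseline sketch, not the paper's self-learning tabu search, and it ignores the ecological-value and spatial-compactness terms; the toy sites, costs and species are assumptions.

    ```python
    # Greedy heuristic for the weighted set covering problem (baseline only).
    def greedy_set_cover(sites, costs, universe):
        """sites: {site: set(species)}; returns a list of selected sites covering `universe`."""
        uncovered = set(universe)
        selected = []
        while uncovered:
            # Pick the site covering the most uncovered species per unit cost.
            best = max(
                (s for s in sites if s not in selected and sites[s] & uncovered),
                key=lambda s: len(sites[s] & uncovered) / costs[s],
                default=None,
            )
            if best is None:
                raise ValueError("remaining species cannot be covered")
            selected.append(best)
            uncovered -= sites[best]
        return selected

    sites = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}, "D": {1, 5}}
    costs = {"A": 2.0, "B": 1.0, "C": 1.0, "D": 1.5}
    # -> ['B', 'D', 'A'] here; the optimum is ['A', 'C'], showing the heuristic gap
    print(greedy_set_cover(sites, costs, universe={1, 2, 3, 4, 5}))
    ```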

  10. Accelerating moderately stiff chemical kinetics in reactive-flow simulations using GPUs

    NASA Astrophysics Data System (ADS)

    Niemeyer, Kyle E.; Sung, Chih-Jen

    2014-01-01

    The chemical kinetics ODEs arising from operator-split reactive-flow simulations were solved on GPUs using explicit integration algorithms. Nonstiff chemical kinetics of a hydrogen oxidation mechanism (9 species and 38 irreversible reactions) were computed using the explicit fifth-order Runge-Kutta-Cash-Karp method, and the GPU-accelerated version performed faster than single- and six-core CPU versions by factors of 126 and 25, respectively, for 524,288 ODEs. Moderately stiff kinetics, represented with mechanisms for hydrogen/carbon-monoxide (13 species and 54 irreversible reactions) and methane (53 species and 634 irreversible reactions) oxidation, were computed using the stabilized explicit second-order Runge-Kutta-Chebyshev (RKC) algorithm. The GPU-based RKC implementation demonstrated an increase in performance of nearly 59 and 10 times, for problem sizes consisting of 262,144 ODEs and larger, than the single- and six-core CPU-based RKC algorithms using the hydrogen/carbon-monoxide mechanism. With the methane mechanism, RKC-GPU performed more than 65 and 11 times faster, for problem sizes consisting of 131,072 ODEs and larger, than the single- and six-core RKC-CPU versions, and up to 57 times faster than the six-core CPU-based implicit VODE algorithm on 65,536 ODEs. In the presence of more severe stiffness, such as ethylene oxidation (111 species and 1566 irreversible reactions), RKC-GPU performed more than 17 times faster than RKC-CPU on six cores for 32,768 ODEs and larger, and at best 4.5 times faster than VODE on six CPU cores for 65,536 ODEs. With a larger time step size, RKC-GPU performed at best 2.5 times slower than six-core VODE for 8192 ODEs and larger. Therefore, the need for developing new strategies for integrating stiff chemistry on GPUs was discussed.

  11. A PML-FDTD ALGORITHM FOR SIMULATING PLASMA-COVERED CAVITY-BACKED SLOT ANTENNAS. (R825225)

    EPA Science Inventory

    A three-dimensional frequency-dependent finite-difference time-domain (FDTD) algorithm with perfectly matched layer (PML) absorbing boundary condition (ABC) and recursive convolution approaches is developed to model plasma-covered open-ended waveguide or cavity-backed slot antenn...

  12. Cloud cover estimation optical package: New facility, algorithms and techniques

    NASA Astrophysics Data System (ADS)

    Krinitskiy, Mikhail

    2017-02-01

    Short- and long-wave radiation is an important component of the surface heat budget over sea and land. Estimating it requires accurate observations of the cloud cover. While cloud cover is widely observed visually, building accurate parameterizations also requires that it be quantified using precise instrumental measurements. Major disadvantages of most existing cloud cameras are associated with their complicated design and the inaccuracy of post-processing algorithms, which typically result in uncertainties of 20% to 30% in the camera-based estimates of cloud cover. The accuracy of these types of algorithms in terms of true scoring compared to human-observed values is typically less than 10%. We developed a new-generation package for cloud cover estimation, which provides much more accurate results and also allows for measuring additional characteristics. A new algorithm, namely SAIL GrIx, based on a routine approach, was also developed for this package. It uses a synthetic controlling index (the "grayness rate index"), which allows the background sunburn effect to be suppressed. This makes it possible to increase the reliability of the detection of optically thin clouds. The accuracy of this algorithm in terms of true scoring became 30%. A further approach, namely SAIL GrIx ML, was used to increase the cloud cover estimation accuracy: an algorithm that uses machine learning along with other signal processing techniques. The sun disk condition appears to be a strong feature in this kind of model, and an Artificial Neural Network model demonstrates the best quality; its accuracy in terms of true scoring increases up to 95.5%. Application of the new algorithm let us modify the design of the optical sensing package and avoid the use of solar trackers, which made the design of the cloud camera much more compact. The new cloud camera has already been tested in several missions across the Atlantic and Indian oceans on board IORAS research vessels.

  13. A hybrid algorithm for parallel molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Mangiardi, Chris M.; Meyer, R.

    2017-10-01

    This article describes algorithms for the hybrid parallelization and SIMD vectorization of molecular dynamics simulations with short-range forces. The parallelization method combines domain decomposition with a thread-based parallelization approach. The goal of the work is to enable efficient simulations of very large (tens of millions of atoms) and inhomogeneous systems on many-core processors with hundreds or thousands of cores and SIMD units with large vector sizes. In order to test the efficiency of the method, simulations of a variety of configurations with up to 74 million atoms have been performed. Results are shown that were obtained on multi-core systems with Sandy Bridge and Haswell processors as well as systems with Xeon Phi many-core processors.

  14. Acceleration of the Particle Swarm Optimization for Peierls–Nabarro modeling of dislocations in conventional and high-entropy alloys

    DOE PAGES

    Pei, Zongrui; Max-Planck-Inst. für Eisenforschung, Düsseldorf; Eisenbach, Markus

    2017-02-06

    Dislocations are among the most important defects in determining the mechanical properties of both conventional alloys and high-entropy alloys. The Peierls-Nabarro model supplies an efficient pathway to their geometries and mobility. The difficulty in solving the integro-differential Peierls-Nabarro equation is how to effectively avoid the local minima in the energy landscape of a dislocation core. Among the methods available to optimize the dislocation core structures, we choose the algorithm of Particle Swarm Optimization, an algorithm that simulates the social behaviors of organisms. By employing more particles (a bigger swarm) and more iterative steps (allowing them to explore for a longer time), the local minima can be effectively avoided, but this requires more computational cost. The advantage of this algorithm is that it is readily parallelized on modern high-performance computing architectures. We demonstrate that the performance of our parallelized algorithm scales linearly with the number of employed cores.
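
    A generic particle swarm optimizer showing the velocity/position update the approach builds on; the per-particle objective evaluations are independent, which is why the method parallelizes almost perfectly across cores. The objective below is a toy stand-in for the Peierls-Nabarro core energy, and the swarm parameters are conventional assumed defaults.

    ```python
    # Generic particle swarm optimization; per-particle evaluations are independent.
    import numpy as np

    def pso_minimize(objective, dim, n_particles=30, iters=200,
                     w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_particles, dim))           # positions
        v = np.zeros_like(x)                                  # velocities
        pbest = x.copy()
        pbest_val = np.array([objective(p) for p in x])       # embarrassingly parallel step
        gbest = pbest[pbest_val.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            vals = np.array([objective(p) for p in x])        # embarrassingly parallel step
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    # Toy objective with many local minima (Rastrigin); a bigger swarm and more
    # iterations make it easier to escape them, at proportionally higher cost.
    rastrigin = lambda p: 10 * len(p) + np.sum(p ** 2 - 10 * np.cos(2 * np.pi * p))
    print(pso_minimize(rastrigin, dim=5))
    ```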

  15. Identifying Dynamic Protein Complexes Based on Gene Expression Profiles and PPI Networks

    PubMed Central

    Li, Min; Chen, Weijie; Wang, Jianxin; Pan, Yi

    2014-01-01

    Identification of protein complexes from protein-protein interaction networks has become a key problem for understanding cellular life in the postgenomic era. Many computational methods have been proposed for identifying protein complexes. Up to now, the existing computational methods have mostly been applied on static PPI networks. However, proteins and their interactions are dynamic in reality. Identifying dynamic protein complexes is more meaningful and challenging. In this paper, a novel algorithm, named DPC, is proposed to identify dynamic protein complexes by integrating PPI data and gene expression profiles. According to the Core-Attachment assumption, those proteins which are always active in the molecular cycle are regarded as core proteins. The protein-complex cores are identified from these always active proteins by detecting dense subgraphs. Final protein complexes are extended from the protein-complex cores by adding attachments based on a topological character of “closeness” and dynamic meaning. The protein complexes produced by our algorithm DPC contain two parts: a static core expressed throughout the molecular cycle, and short-lived dynamic attachments. The proposed algorithm DPC was applied on the data of Saccharomyces cerevisiae and the experimental results show that DPC outperforms CMC, MCL, SPICi, HC-PIN, COACH, and Core-Attachment based on the validation of matching with known complexes and hF-measures. PMID:24963481

  16. Potential for Monitoring Snow Cover in Boreal Forests by Combining MODIS Snow Cover and AMSR-E SWE Maps

    NASA Technical Reports Server (NTRS)

    Riggs, George A.; Hall, Dorothy K.; Foster, James L.

    2009-01-01

    Monitoring of snow cover extent and snow water equivalent (SWE) in boreal forests is important for determining the amount of potential runoff and the beginning date of snowmelt. The great expanse of the boreal forest necessitates the use of satellite measurements to monitor snow cover. Snow cover in the boreal forest can be mapped with either the Moderate Resolution Imaging Spectroradiometer (MODIS) or the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) microwave instrument. The extent of snow cover is estimated from the MODIS data and SWE is estimated from the AMSR-E. Environmental limitations affect both sensors in different ways to limit their ability to detect snow in some situations. Forest density, snow wetness, and snow depth are factors that limit the effectiveness of both sensors for snow detection. Cloud cover is a significant hindrance to monitoring snow cover extent using MODIS but is not a hindrance to the use of the AMSR-E. These limitations could be mitigated by combining MODIS and AMSR-E data to allow for improved interpretation of snow cover extent and SWE on a daily basis and provide temporal continuity of snow mapping across the boreal forest regions in Canada. The purpose of this study is to investigate whether temporal monitoring of snow cover using a combination of MODIS and AMSR-E data could yield a better interpretation of changing snow cover conditions. The MODIS snow mapping algorithm is based on snow detection using the Normalized Difference Snow Index (NDSI) and the Normalized Difference Vegetation Index (NDVI) to enhance snow detection in dense vegetation. (Other spectral threshold tests are also used to map snow using MODIS.) Snow cover under a forest canopy may have an effect on the NDVI; thus we use the NDVI in snow detection. A MODIS snow fraction product is also generated but not used in this study. In this study the NDSI and NDVI components of the snow mapping algorithm were calculated and analyzed to determine how they changed through the seasons. A blended snow product, the Air Force Weather Agency and NASA (ANSA) snow algorithm and product, has recently been developed. The ANSA algorithm blends the MODIS snow cover and AMSR-E SWE products into a single snow product that has been shown to improve the performance of snow cover mapping. In this study, components of the ANSA snow algorithm are used along with additional MODIS data to monitor daily changes in snow cover over the period of 1 February to 30 June 2008.

  17. Novel density-based and hierarchical density-based clustering algorithms for uncertain data.

    PubMed

    Zhang, Xianchao; Liu, Han; Zhang, Xiaotong

    2017-09-01

    Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we firstly propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN from the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN to an improved version (PDBSCANi), by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulties for clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of the datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over the existing algorithms in accuracy and efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Core Hunter 3: flexible core subset selection.

    PubMed

    De Beukelaer, Herman; Davenport, Guy F; Fack, Veerle

    2018-05-31

    Core collections provide genebank curators and plant breeders a way to reduce the size of their collections and populations, while minimizing the impact on genetic diversity and allele frequency. Many methods have been proposed to generate core collections, often using distance metrics to quantify the similarity of two accessions, based on genetic marker data or phenotypic traits. Core Hunter is a multi-purpose core subset selection tool that uses local search algorithms to generate subsets relying on one or more metrics, including several distance metrics and allelic richness. In version 3 of Core Hunter (CH3) we have incorporated two new, improved methods for summarizing distances to quantify diversity or representativeness of the core collection. A comparison of CH3 and Core Hunter 2 (CH2) showed that these new metrics can be effectively optimized with less complex algorithms, as compared to those used in CH2. CH3 is more effective at maximizing the improved diversity metric than CH2, still ensures a high average and minimum distance, and is faster for large datasets. Using CH3, a simple stochastic hill-climber is able to find highly diverse core collections, and the more advanced parallel tempering algorithm further increases the quality of the core and further reduces variability across independent samples. We also evaluate the ability of CH3 to simultaneously maximize diversity, and either representativeness or allelic richness, and compare the results with those of the GDOpt and SimEli methods. CH3 can sample equally representative cores as GDOpt, which was specifically designed for this purpose, and is able to construct cores that are simultaneously more diverse, and either are more representative or have higher allelic richness, than those obtained by SimEli. In version 3, Core Hunter has been updated to include two new core subset selection metrics that construct cores for representativeness or diversity, with improved performance. It combines and outperforms the strengths of other methods, as it (simultaneously) optimizes a variety of metrics. In addition, CH3 is an improvement over CH2, with the option to use genetic marker data or phenotypic traits, or both, and improved speed. Core Hunter 3 is freely available on http://www.corehunter.org.

  19. Analysing the Effects of Different Land Cover Types on Land Surface Temperature Using Satellite Data

    NASA Astrophysics Data System (ADS)

    Şekertekin, A.; Kutoglu, Ş. H.; Kaya, S.; Marangoz, A. M.

    2015-12-01

    Monitoring Land Surface Temperature (LST) via remote sensing images is one of the most important contributions to climatology. LST is an important parameter governing the energy balance on the Earth, and it also helps us to understand the behavior of urban heat islands. There are many algorithms for obtaining LST by remote sensing techniques; the most commonly used are the split-window algorithm, the temperature/emissivity separation method, the mono-window algorithm and the single channel method. In this research, the mono-window algorithm was applied to a Landsat 5 TM image acquired on 28.08.2011, together with meteorological data such as humidity and temperature. Moreover, high resolution Geoeye-1 and Worldview-2 images acquired on 29.08.2011 and 12.07.2013, respectively, were used to investigate the relationships between LST and land cover type. As a result of the analyses, the area with vegetation cover has approximately 5 ºC lower temperatures than the city center and arid land. LST values change by about 10 ºC within the city center because of different surface properties such as reinforced concrete construction, green zones and sandbank. The temperature around some places in the Çatalağzı thermal power plant region (ÇATES and ZETES) is about 5 ºC higher than the city center. Sandbank and agricultural areas have the highest temperatures due to the land cover structure.

  20. Ice Core Depth-Age Relation for Vostok delta-D and Dome Fuji delta-18O Records Based on the Devils Hole Paleotemperature Chronology

    USGS Publications Warehouse

    Landwehr, Jurate Maciunas

    2002-01-01

    This report presents the data for the Vostok - Devils Hole chronology, termed V-DH chronology, for the Antarctic Vostok ice core record. This depth - age relation is based on a join between the Vostok deuterium profile (δD) and the stable oxygen isotope ratio (δ18O) record of paleotemperature from a calcitic core at Devils Hole, Nevada, using the algorithm developed by Landwehr and Winograd (2001). Both the control points defining the V-DH chronology and the numeric values for the chronology are given. In addition, a plausible chronology for a deformed bottom portion of the Vostok core developed with this algorithm is presented. Landwehr and Winograd (2001) demonstrated the broader utility of their algorithm by applying it to another appropriate Antarctic paleotemperature record, the Antarctic Dome Fuji ice core δ18O record. Control points for this chronology are also presented in this report but deemed preliminary because, to date, investigators have published only the visual trace and not the numeric values for the Dome Fuji δ18O record. The total uncertainty that can be associated with the assigned ages is also given.

  1. Dynamic Voltage-Frequency and Workload Joint Scaling Power Management for Energy Harvesting Multi-Core WSN Node SoC

    PubMed Central

    Li, Xiangyu; Xie, Nijie; Tian, Xinyue

    2017-01-01

    This paper proposes a scheduling and power management solution for an energy harvesting heterogeneous multi-core WSN node SoC, such that the system continues to operate perennially and uses the harvested energy efficiently. The solution consists of a heterogeneous multi-core system oriented task scheduling algorithm and a low-complexity dynamic workload scaling and configuration optimization algorithm suitable for light-weight platforms. Moreover, considering that the power consumption of most WSN applications is data dependent, we introduce a branch-handling mechanism into the solution as well. The experimental results show that the proposed algorithm can operate in real-time on a lightweight embedded processor (MSP430), that it enables the system to do more valuable work, and that it makes use of more than 99.9% of the power budget. PMID:28208730

  2. Dynamic Voltage-Frequency and Workload Joint Scaling Power Management for Energy Harvesting Multi-Core WSN Node SoC.

    PubMed

    Li, Xiangyu; Xie, Nijie; Tian, Xinyue

    2017-02-08

    This paper proposes a scheduling and power management solution for an energy harvesting heterogeneous multi-core WSN node SoC, such that the system continues to operate perennially and uses the harvested energy efficiently. The solution consists of a heterogeneous multi-core system oriented task scheduling algorithm and a low-complexity dynamic workload scaling and configuration optimization algorithm suitable for light-weight platforms. Moreover, considering that the power consumption of most WSN applications is data dependent, we introduce a branch-handling mechanism into the solution as well. The experimental results show that the proposed algorithm can operate in real-time on a lightweight embedded processor (MSP430), that it enables the system to do more valuable work, and that it makes use of more than 99.9% of the power budget.

  3. Selection of core animals in the Algorithm for Proven and Young using a simulation model.

    PubMed

    Bradford, H L; Pocrnić, I; Fragomeni, B O; Lourenco, D A L; Misztal, I

    2017-12-01

    The Algorithm for Proven and Young (APY) enables the implementation of single-step genomic BLUP (ssGBLUP) in large, genotyped populations by separating genotyped animals into core and non-core subsets and creating a computationally efficient inverse for the genomic relationship matrix (G). As APY became the choice for large-scale genomic evaluations in BLUP-based methods, a common question is how to choose the animals in the core subset. We compared several core definitions to answer this question. Simulations comprised a moderately heritable trait for 95,010 animals and 50,000 genotypes for animals across five generations. Genotypes consisted of 25,500 SNP distributed across 15 chromosomes. Genotyping errors and missing pedigree were also mimicked. Core animals were defined based on individual generations, equal representation across generations, and at random. For a sufficiently large core size, core definitions had the same accuracies and biases, even if the core animals had imperfect genotypes. When genotyped animals had unknown parents, accuracy and bias were significantly better (p ≤ .05) for random and across generation core definitions. © 2017 The Authors. Journal of Animal Breeding and Genetics Published by Blackwell Verlag GmbH.
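
    The computational advantage of APY comes from the structure of the inverse of G when genotyped animals are split into core (c) and non-core (n) subsets. The form below is a sketch of the expression commonly quoted in the APY literature and should be checked against the original sources; the key point is that M_nn is diagonal, so its inverse is trivial even for very many non-core animals.

    ```latex
    % Sketch of the commonly quoted APY inverse of G (core c, non-core n subsets);
    % M_{nn} is diagonal, which is what keeps the inverse cheap for large n.
    \[
      G^{-1}_{\mathrm{APY}}
      \;=\;
      \begin{bmatrix} G_{cc}^{-1} & 0 \\ 0 & 0 \end{bmatrix}
      \;+\;
      \begin{bmatrix} -G_{cc}^{-1} G_{cn} \\ I \end{bmatrix}
      M_{nn}^{-1}
      \begin{bmatrix} -G_{nc} G_{cc}^{-1} & I \end{bmatrix},
      \qquad
      m_{ii} \;=\; g_{ii} - g_{ic}\, G_{cc}^{-1}\, g_{ci}.
    \]
    ```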

  4. Scalable Parallel Density-based Clustering and Applications

    NASA Astrophysics Data System (ADS)

    Patwary, Mostofa Ali

    2014-04-01

    Recently, density-based clustering algorithms (DBSCAN and OPTICS) have received significant attention from the scientific community due to their unique capability of discovering arbitrarily shaped clusters and eliminating noise data. These algorithms have several applications that require high performance computing, including finding halos and subhalos (clusters) in massive cosmology data in astrophysics, analyzing satellite images, X-ray crystallography, and anomaly detection. However, parallelization of these algorithms is extremely challenging as they exhibit an inherently sequential data access order and unbalanced workloads, resulting in low parallel efficiency. To break the data access sequentiality and to achieve high parallelism, we develop new parallel algorithms, both for DBSCAN and OPTICS, designed using graph algorithmic techniques. For example, our parallel DBSCAN algorithm exploits the similarities between DBSCAN and computing connected components. Using datasets containing up to a billion floating point numbers, we show that our parallel density-based clustering algorithms significantly outperform the existing algorithms, achieving speedups up to 27.5 on 40 cores on a shared memory architecture and speedups up to 5,765 using 8,192 cores on a distributed memory architecture. In our experiments, we found that while achieving this scalability, our algorithms produce clustering results with quality comparable to the classical algorithms.
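
    As a serial illustration of the DBSCAN/connected-components analogy mentioned above (not the distributed, load-balanced algorithm itself): core points are those with enough eps-neighbours, and clusters are the connected components of the eps-neighbour graph restricted to core points, found here with union-find. The eps and min_pts values are arbitrary assumptions, and border points are simply left unlabelled.

    ```python
    # DBSCAN-style clustering as connected components of the eps-neighbour graph.
    import numpy as np

    def find(parent, i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]       # path halving
            i = parent[i]
        return i

    def dbscan_via_components(points, eps=0.5, min_pts=4):
        points = np.asarray(points, dtype=float)
        n = len(points)
        dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        neighbours = dist <= eps
        is_core = neighbours.sum(axis=1) >= min_pts          # counts include the point itself
        parent = list(range(n))
        for i in range(n):
            if not is_core[i]:
                continue
            for j in range(i + 1, n):
                if is_core[j] and neighbours[i, j]:
                    parent[find(parent, i)] = find(parent, j)   # union the two components
        return [find(parent, i) if is_core[i] else -1 for i in range(n)]   # -1: noise/border

    rng = np.random.default_rng(0)
    blob_a = rng.normal(0.0, 0.2, (50, 2))
    blob_b = rng.normal(5.0, 0.2, (50, 2))
    labels = dbscan_via_components(np.vstack([blob_a, blob_b]))
    print(len({lab for lab in labels if lab != -1}))          # -> 2 clusters
    ```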

  5. A decision tree algorithm for investigation of model biases related to dynamical cores and physical parameterizations: CESM/CAM EVALUATION BY DECISION TREES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soner Yorgun, M.; Rood, Richard B.

    An object-based evaluation method using a pattern recognition algorithm (i.e., classification trees) is applied to the simulated orographic precipitation for idealized experimental setups using the National Center of Atmospheric Research (NCAR) Community Atmosphere Model (CAM) with the finite volume (FV) and the Eulerian spectral transform dynamical cores with varying resolutions. Daily simulations were analyzed and three different types of precipitation features were identified by the classification tree algorithm. The statistical characteristics of these features (i.e., maximum value, mean value, and variance) were calculated to quantify the difference between the dynamical cores and changing resolutions. Even with the simple and smooth topography in the idealized setups, complexity in the precipitation fields simulated by the models develops quickly. The classification tree algorithm using objective thresholding successfully detected different types of precipitation features even as the complexity of the precipitation field increased. The results show that the complexity and the bias introduced in small-scale phenomena due to the spectral transform method of CAM Eulerian spectral dynamical core is prominent, and is an important reason for its dissimilarity from the FV dynamical core. The resolvable scales, both in horizontal and vertical dimensions, have significant effect on the simulation of precipitation. The results of this study also suggest that an efficient and informative study about the biases produced by GCMs should involve daily (or even hourly) output (rather than monthly mean) analysis over local scales.

  6. A decision tree algorithm for investigation of model biases related to dynamical cores and physical parameterizations: CESM/CAM EVALUATION BY DECISION TREES

    DOE PAGES

    Soner Yorgun, M.; Rood, Richard B.

    2016-11-11

    An object-based evaluation method using a pattern recognition algorithm (i.e., classification trees) is applied to the simulated orographic precipitation for idealized experimental setups using the National Center of Atmospheric Research (NCAR) Community Atmosphere Model (CAM) with the finite volume (FV) and the Eulerian spectral transform dynamical cores with varying resolutions. Daily simulations were analyzed and three different types of precipitation features were identified by the classification tree algorithm. The statistical characteristics of these features (i.e., maximum value, mean value, and variance) were calculated to quantify the difference between the dynamical cores and changing resolutions. Even with the simple and smooth topography in the idealized setups, complexity in the precipitation fields simulated by the models develops quickly. The classification tree algorithm using objective thresholding successfully detected different types of precipitation features even as the complexity of the precipitation field increased. The results show that the complexity and the bias introduced in small-scale phenomena due to the spectral transform method of CAM Eulerian spectral dynamical core is prominent, and is an important reason for its dissimilarity from the FV dynamical core. The resolvable scales, both in horizontal and vertical dimensions, have significant effect on the simulation of precipitation. The results of this study also suggest that an efficient and informative study about the biases produced by GCMs should involve daily (or even hourly) output (rather than monthly mean) analysis over local scales.

  7. AthenaMT: upgrading the ATLAS software framework for the many-core world with multi-threading

    NASA Astrophysics Data System (ADS)

    Leggett, Charles; Baines, John; Bold, Tomasz; Calafiura, Paolo; Farrell, Steven; van Gemmeren, Peter; Malon, David; Ritsch, Elmar; Stewart, Graeme; Snyder, Scott; Tsulaia, Vakhtang; Wynne, Benjamin; ATLAS Collaboration

    2017-10-01

    ATLAS’s current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognized for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. After concluding a rigorous requirements phase, where many design components were examined in detail, ATLAS has begun the migration to a new data-flow driven, multi-threaded framework, which enables the simultaneous processing of singleton, thread unsafe legacy Algorithms, cloned Algorithms that execute concurrently in their own threads with different Event contexts, and fully re-entrant, thread safe Algorithms. In this paper we report on the process of modifying the framework to safely process multiple concurrent events in different threads, which entails significant changes in the underlying handling of features such as event and time dependent data, asynchronous callbacks, metadata, integration with the online High Level Trigger for partial processing in certain regions of interest, concurrent I/O, as well as ensuring thread safety of core services. We also report on upgrading the framework to handle Algorithms that are fully re-entrant.

  8. The systems biology simulation core algorithm

    PubMed Central

    2013-01-01

    Background: With the increasing availability of high dimensional time course data for metabolites, genes, and fluxes, the mathematical description of dynamical systems has become an essential aspect of research in systems biology. Models are often encoded in formats such as SBML, whose structure is very complex and difficult to evaluate due to many special cases. Results: This article describes an efficient algorithm to solve SBML models that are interpreted in terms of ordinary differential equations. We begin our consideration with a formal representation of the mathematical form of the models and explain all parts of the algorithm in detail, including several preprocessing steps. We provide a flexible reference implementation as part of the Systems Biology Simulation Core Library, a community-driven project providing a large collection of numerical solvers and a sophisticated interface hierarchy for the definition of custom differential equation systems. To demonstrate the capabilities of the new algorithm, it has been tested with the entire SBML Test Suite and all models of BioModels Database. Conclusions: The formal description of the mathematics behind the SBML format facilitates the implementation of the algorithm within specifically tailored programs. The reference implementation can be used as a simulation backend for Java™-based programs. Source code, binaries, and documentation can be freely obtained under the terms of the LGPL version 3 from http://simulation-core.sourceforge.net. Feature requests, bug reports, contributions, or any further discussion can be directed to the mailing list simulation-core-development@lists.sourceforge.net. PMID:23826941
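
    The central idea, interpreting a reaction network as a system of ordinary differential equations via its stoichiometry and kinetic laws and handing the result to a numerical solver, can be sketched briefly. The Simulation Core Library itself is a Java project; the Python sketch below, with an assumed two-species toy model, only illustrates the general ODE interpretation it performs.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # A minimal reaction system in the spirit of an SBML model:
    # S1 -> S2 with rate k1*[S1]; S2 -> S1 with rate k2*[S2].
    # Stoichiometry matrix: rows = species, columns = reactions.
    N = np.array([[-1.0,  1.0],
                  [ 1.0, -1.0]])
    k1, k2 = 0.8, 0.3

    def rates(t, y):
        s1, s2 = y
        return np.array([k1 * s1, k2 * s2])      # kinetic laws

    def rhs(t, y):
        return N @ rates(t, y)                   # dy/dt = N * v(y)

    sol = solve_ivp(rhs, (0.0, 20.0), [10.0, 0.0], dense_output=True)
    print(sol.y[:, -1])   # approaches the steady state proportions k2/(k1+k2), k1/(k1+k2)
    ```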

  9. Ultrafast adiabatic quantum algorithm for the NP-complete exact cover problem

    PubMed Central

    Wang, Hefeng; Wu, Lian-Ao

    2016-01-01

    An adiabatic quantum algorithm may lose quantumness such as quantum coherence entirely in its long runtime, and consequently the expected quantum speedup of the algorithm does not show up. Here we present a general ultrafast adiabatic quantum algorithm. We show that by applying a sequence of fast random or regular signals during evolution, the runtime can be reduced substantially, whereas advantages of the adiabatic algorithm remain intact. We also propose a randomized Trotter formula and show that the driving Hamiltonian and the proposed sequence of fast signals can be implemented simultaneously. We illustrate the algorithm by solving the NP-complete 3-bit exact cover problem (EC3), where NP stands for nondeterministic polynomial time, and put forward an approach to implementing the problem with trapped ions. PMID:26923834

  10. EVALUATION OF THE HTA CORE MODEL FOR NATIONAL HEALTH TECHNOLOGY ASSESSMENT REPORTS: COMPARATIVE STUDY AND EXPERIENCES FROM EUROPEAN COUNTRIES.

    PubMed

    Kõrge, Kristina; Berndt, Nadine; Hohmann, Juergen; Romano, Florence; Hiligsmann, Mickael

    2017-01-01

    The health technology assessment (HTA) Core Model® is a tool for defining and standardizing the elements of HTA analyses within several domains for producing structured reports. This study explored the parallels between the Core Model and a national HTA report. Experiences from various European HTA agencies were also investigated to determine the Core Model's adaptability to national reports. A comparison between a national report on Genetic Counseling, produced by the Cellule d'expertise médicale Luxembourg, and the Core Model was performed to identify parallels in terms of relevant and comparable assessment elements (AEs). Semi-structured interviews with five representatives from European HTA agencies were performed to assess their user experiences with the Core Model. The comparative study revealed that 50 percent of the total number (n = 144) of AEs in the Core Model were relevant for the national report. Of these 144 AEs from the Core Model, 34 (24 percent) were covered in the national report. Some AEs were covered only partly. The interviewees emphasized flexibility in using the Core Model and stated that the most important aspects to be evaluated include characteristics of the disease and technology, clinical effectiveness, economic aspects, and safety. In the present study, the national report covered an acceptable number of AEs of the Core Model. These results need to be interpreted with caution because only one comparison was performed. The Core Model can be used in a flexible manner, applying only those elements that are relevant from the perspective of the technology assessment and specific country context.

  11. The TSP-approach to approximate solving the m-Cycles Cover Problem

    NASA Astrophysics Data System (ADS)

    Gimadi, Edward Kh.; Rykov, Ivan; Tsidulko, Oxana

    2016-10-01

    In the m-Cycles Cover problem it is required to find a collection of m vertex-disjoint cycles that covers all vertices of the graph such that the total weight of the edges in the cover is minimum (or maximum). The problem is a generalization of the Traveling Salesman Problem and is strongly NP-hard. We discuss a TSP-approach that gives polynomial-time approximate solutions for this problem by transforming an approximation TSP algorithm into an approximation m-CCP algorithm. In this paper we present a number of successful transformations with proven performance guarantees for the obtained solutions.
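
    The transformation idea can be made concrete with a small sketch: build an approximate TSP tour (here, nearest neighbour), delete m edges of the tour (here, the heaviest ones for the minimization case), and close each resulting path into a cycle. This is a generic illustration of the approach, not one of the specific transformations with proven guarantees from the paper; degenerate one-vertex cycles are not handled.

    ```python
    import numpy as np

    def nearest_neighbour_tour(dist):
        n = len(dist)
        tour, unvisited = [0], set(range(1, n))
        while unvisited:
            last = tour[-1]
            nxt = min(unvisited, key=lambda j: dist[last][j])
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    def m_cycles_from_tour(tour, dist, m):
        """Split a Hamiltonian tour into m cycles by removing its m heaviest edges."""
        n = len(tour)
        edges = [(dist[tour[i]][tour[(i + 1) % n]], i) for i in range(n)]
        cut_positions = sorted(i for _, i in sorted(edges, reverse=True)[:m])
        cycles, start = [], cut_positions[-1] + 1       # each path runs between two cut edges
        for cut in cut_positions:
            path = [tour[j % n] for j in range(start, start + (cut - start) % n + 1)]
            cycles.append(path)                          # closing edge path[-1] -> path[0] is implied
            start = cut + 1
        return cycles

    rng = np.random.default_rng(5)
    pts = rng.random((10, 2))
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    tour = nearest_neighbour_tour(dist)
    print(m_cycles_from_tour(tour, dist, m=3))
    ```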

  12. Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations

    NASA Astrophysics Data System (ADS)

    Bang, Youngsuk

    Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis used to determine important core attribute variations due to input parameter variations, and uncertainty quantification employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, the state variable, or the output response spaces, by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct an ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm to render reduction with the reduction errors bounded based on a user-defined error tolerance which represents the main challenge of existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications. Providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel hybrid ROM algorithms which can be readily integrated into existing methods and offer higher computational efficiency and defendable accuracy of the reduced models. For example, the snapshots ROM algorithm is hybridized with the range finding algorithm to render reduction in the state space, e.g. the flux in reactor calculations. In another implementation, the perturbation theory used to calculate first order derivatives of responses with respect to parameters is hybridized with a forward sensitivity analysis approach to render reduction in the parameter space. Reduction at the state and parameter spaces can be combined to render further reduction at the interface between different physics codes in a multi-physics model with the accuracy quantified in a similar manner to the single physics case. Although the proposed algorithms are generic in nature, we focus here on radiation transport models used in support of the design and analysis of nuclear reactor cores. In particular, we focus on replacing the traditional assembly calculations by ROM models to facilitate the generation of homogenized cross-sections for downstream core calculations. The implication is that assembly calculations could be done instantaneously therefore precluding the need for the expensive evaluation of the few-group cross-sections for all possible core conditions. Given the generic natures of the algorithms, we make an effort to introduce the material in a general form to allow non-nuclear engineers to benefit from this work.

  13. Satellite Snow-Cover Mapping: A Brief Review

    NASA Technical Reports Server (NTRS)

    Hall, Dorothy K.

    1995-01-01

    Satellite snow mapping has been accomplished since 1966, initially using data from the reflective part of the electromagnetic spectrum, and now also employing data from the microwave part of the spectrum. Visible and near-infrared sensors can provide excellent spatial resolution from space, enabling detailed snow mapping. When digital elevation models are also used, snow mapping can provide realistic measurements of snow extent even in mountainous areas. Passive-microwave satellite data permit global snow cover to be mapped on a near-daily basis and estimates of snow depth to be made, but with relatively poor spatial resolution (approximately 25 km). Dense forest cover limits both techniques, and optical remote sensing is limited further by cloud-cover conditions. Satellite remote sensing of snow cover with imaging radars is still in the early stages of research, but shows promise at least for mapping wet or melting snow using C-band (5.3 GHz) synthetic aperture radar (SAR) data. Algorithms are being developed to map global snow and ice cover using Earth Observing System (EOS) Moderate Resolution Imaging Spectroradiometer (MODIS) data beginning with the launch of the first EOS platform in 1998. Digital maps will be produced that will provide daily, and maximum weekly, global snow, sea ice and lake ice cover at 1-km spatial resolution. Statistics will be generated on the extent and persistence of snow or ice cover in each pixel for each weekly map, cloud cover permitting. It will also be possible to generate snow- and ice-cover maps using MODIS data at 250- and 500-m resolution, and to study and map snow and ice characteristics such as albedo. Algorithms to map global snow cover using passive-microwave data have also been under development. Passive-microwave data offer the potential for determining not only snow cover, but snow water equivalent, depth and wetness under all sky conditions. A number of algorithms have been developed to utilize passive-microwave brightness temperatures to provide information on snow cover and water equivalent. The variability of vegetative cover and of snow grain size, globally, limits the utility of a single algorithm to map global snow cover.
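
    For the visible/near-infrared approach discussed above, a widely used building block is the normalized difference snow index (NDSI), computed from a green band and a shortwave-infrared band and thresholded (commonly around 0.4) to flag snow-covered pixels. The sketch below is a generic illustration with assumed band arrays and threshold, not the operational MODIS snow-mapping algorithm.

    ```python
    import numpy as np

    def ndsi_snow_mask(green, swir, threshold=0.4):
        """Return a boolean snow mask from green and shortwave-infrared reflectances."""
        green = np.asarray(green, dtype=float)
        swir = np.asarray(swir, dtype=float)
        ndsi = (green - swir) / np.clip(green + swir, 1e-6, None)
        return ndsi > threshold

    # Toy reflectance values: snow is bright in the green band and dark in SWIR.
    green = np.array([[0.80, 0.25], [0.70, 0.10]])
    swir  = np.array([[0.10, 0.30], [0.15, 0.12]])
    print(ndsi_snow_mask(green, swir))
    ```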

  14. The parallel algorithm for the 2D discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Barina, David; Najman, Pavel; Kleparnik, Petr; Kula, Michal; Zemcik, Pavel

    2018-04-01

    The discrete wavelet transform can be found at the heart of many image-processing algorithms. Until now, the transform on general-purpose processors (CPUs) was mostly computed using a separable lifting scheme. As the lifting scheme consists of a small number of operations, it is preferred for processing on single-core CPUs. However, for parallel processing on multi-core processors, this scheme is inappropriate due to its large number of steps. On such architectures, the number of steps corresponds to the number of synchronization points at which data are exchanged, and these points often form a performance bottleneck. Our approach appropriately rearranges the calculations inside the transform and thereby reduces the number of steps. In other words, we propose a new scheme that is friendly to parallel environments. When evaluated on multi-core CPUs, it consistently outperforms the original lifting scheme. The evaluation was performed on 61-core Intel Xeon Phi and 8-core Intel Xeon processors.
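
    As a point of reference for the separable lifting scheme discussed here, a minimal single-level 2-D transform built from the simplest (Haar) lifting steps is sketched below. The paper's contribution concerns how such computations are reorganized for multi-core processors, which this sketch does not attempt to reproduce.

    ```python
    import numpy as np

    def haar_lifting_1d(x):
        """One level of the Haar wavelet transform via lifting (predict + update)."""
        even, odd = x[0::2].astype(float), x[1::2].astype(float)
        detail = odd - even          # predict step
        approx = even + detail / 2   # update step
        return approx, detail

    def dwt2_single_level(img):
        """Separable single-level 2-D DWT: rows first, then columns."""
        rows_lo, rows_hi = zip(*(haar_lifting_1d(r) for r in img))
        lo, hi = np.array(rows_lo), np.array(rows_hi)
        ll, lh = zip(*(haar_lifting_1d(c) for c in lo.T))
        hl, hh = zip(*(haar_lifting_1d(c) for c in hi.T))
        return (np.array(ll).T, np.array(lh).T, np.array(hl).T, np.array(hh).T)

    img = np.arange(64, dtype=float).reshape(8, 8)
    ll, lh, hl, hh = dwt2_single_level(img)
    print(ll.shape, lh.shape, hl.shape, hh.shape)   # each subband is 4x4
    ```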

  15. Virtual optical network mapping and core allocation in elastic optical networks using multi-core fibers

    NASA Astrophysics Data System (ADS)

    Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli

    2017-11-01

    Virtualization technology can greatly improve the efficiency of networks by allowing virtual optical networks to share the resources of the physical networks. However, it faces some challenges, such as finding efficient strategies for virtual node mapping, virtual link mapping and spectrum assignment. The problem is even more complex and challenging when the physical elastic optical networks use multi-core fibers. To tackle these challenges, we establish a constrained optimization model to determine the optimal schemes of optical network mapping, core allocation and spectrum assignment. To solve the model efficiently, a tailor-made encoding scheme and crossover and mutation operators are designed. Based on these, an efficient genetic algorithm is proposed to obtain the optimal schemes of virtual node mapping, virtual link mapping and core allocation. The simulation experiments are conducted on three widely used networks, and the experimental results show the effectiveness of the proposed model and algorithm.

  16. Ultrafast fingerprint indexing for embedded systems

    NASA Astrophysics Data System (ADS)

    Zhou, Ru; Sin, Sang Woo; Li, Dongju; Isshiki, Tsuyoshi; Kunieda, Hiroaki

    2011-10-01

    A novel core-based fingerprint indexing scheme for embedded systems is presented in this paper. Our approach is enabled by our new precise and fast core-detection algorithm based on the direction map. It introduces the feature of the CMP (core minutiae pair), which describes the coordinates of minutiae and the direction of the ridges associated with the minutiae relative to the uniquely defined core coordinates. Since each CMP is invariant to shift and rotation of the fingerprint image, the CMP comparison between a template and an input image can be performed without any alignment. The proposed indexing algorithm based on CMP is suitable for embedded systems because it achieves a tremendous speed-up and a large memory reduction. In fact, experiments with the fingerprint database FVC2002 show that identification becomes about 40 times faster than with conventional approaches, even though the database includes fingerprints with no core.
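
    The key idea, expressing each minutia relative to a detected core point so that comparison needs no explicit alignment, can be sketched as follows. The feature layout used here (distance, angle, and ridge-direction offset relative to the core) is a simplified assumption, not the exact CMP definition used by the authors.

    ```python
    import math

    def core_relative_minutiae(core_xy, core_dir, minutiae):
        """Convert (x, y, theta) minutiae into shift/rotation-invariant features
        relative to a core point with orientation core_dir (radians)."""
        cx, cy = core_xy
        features = []
        for x, y, theta in minutiae:
            dx, dy = x - cx, y - cy
            r = math.hypot(dx, dy)                                  # distance to the core
            phi = (math.atan2(dy, dx) - core_dir) % (2 * math.pi)   # angle w.r.t. the core axis
            dtheta = (theta - core_dir) % (2 * math.pi)             # ridge direction offset
            features.append((r, phi, dtheta))
        return features

    core = (120.0, 150.0)
    minutiae = [(140.0, 170.0, 1.0), (100.0, 130.0, 2.5)]
    print(core_relative_minutiae(core, 0.3, minutiae))
    ```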

  17. A High Performance Computing Approach to Tree Cover Delineation in 1-m NAIP Imagery Using a Probabilistic Learning Framework

    NASA Technical Reports Server (NTRS)

    Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Votava, Petr; Roy, Anshuman; Mukhopadhyay, Supratik; Nemani, Ramakrishna

    2015-01-01

    Tree cover delineation is a useful instrument in deriving Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) airborne imagery data. Numerous algorithms have been designed to address this problem, but most of them do not scale to these datasets, which are of the order of terabytes. In this paper, we present a semi-automated probabilistic framework for the segmentation and classification of 1-m National Agriculture Imagery Program (NAIP) for tree-cover delineation for the whole of Continental United States, using a High Performance Computing Architecture. Classification is performed using a multi-layer Feedforward Backpropagation Neural Network and segmentation is performed using a Statistical Region Merging algorithm. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field, which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by relabeling misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the whole state of California, spanning a total of 11,095 NAIP tiles covering a total geographical area of 163,696 sq. miles. The framework produced true positive rates of around 88% for fragmented forests and 74% for urban tree cover areas, with false positive rates lower than 2% for both landscapes. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR canopy height model (CHM) showed the effectiveness of our framework for generating accurate high-resolution tree-cover maps.

  18. A High Performance Computing Approach to Tree Cover Delineation in 1-m NAIP Imagery using a Probabilistic Learning Framework

    NASA Astrophysics Data System (ADS)

    Basu, S.; Ganguly, S.; Michaelis, A.; Votava, P.; Roy, A.; Mukhopadhyay, S.; Nemani, R. R.

    2015-12-01

    Tree cover delineation is a useful instrument in deriving Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) airborne imagery data. Numerous algorithms have been designed to address this problem, but most of them do not scale to these datasets which are of the order of terabytes. In this paper, we present a semi-automated probabilistic framework for the segmentation and classification of 1-m National Agriculture Imagery Program (NAIP) for tree-cover delineation for the whole of Continental United States, using a High Performance Computing Architecture. Classification is performed using a multi-layer Feedforward Backpropagation Neural Network and segmentation is performed using a Statistical Region Merging algorithm. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field, which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by relabeling misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the whole state of California, spanning a total of 11,095 NAIP tiles covering a total geographical area of 163,696 sq. miles. The framework produced true positive rates of around 88% for fragmented forests and 74% for urban tree cover areas, with false positive rates lower than 2% for both landscapes. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR canopy height model (CHM) showed the effectiveness of our framework for generating accurate high-resolution tree-cover maps.

  19. Operational performance of the three bean salad control algorithm on the ACRR (Annular Core Research Reactor)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ball, R.M.; Madaras, J.J.; Trowbridge, F.R. Jr.

    Experimental tests on the Annular Core Research Reactor have confirmed that the "Three-Bean-Salad" control algorithm, based on the Pontryagin maximum principle, can change the power of a nuclear reactor over many decades with a very fast startup rate and minimal overshoot. The paper describes the results of simulations and operations up to 25 MW and 87 decades per minute. 3 refs., 4 figs., 1 tab.

  20. Correlation of Wissler Human Thermal Model Blood Flow and Shiver Algorithms

    NASA Technical Reports Server (NTRS)

    Bue, Grant; Makinen, Janice; Cognata, Thomas

    2010-01-01

    The Wissler Human Thermal Model (WHTM) is a thermal math model of the human body that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. The model has been shown to predict core temperature and skin temperatures higher and lower, respectively, than in tests of subjects in a crew escape suit working in controlled hot environments. Conversely, the model predicts core temperature and skin temperatures lower and higher, respectively, than in tests of lightly clad subjects immersed in cold water. The blood flow algorithms of the model have been investigated to allow for more and less flow, respectively, for the cold and hot cases. These changes to the model have yielded better correlation of skin and core temperatures in the cold and hot cases. The algorithm for the onset of shiver did not need to be modified to achieve good agreement in cold immersion simulations.

  1. Experiments with a Parallel Multi-Objective Evolutionary Algorithm for Scheduling

    NASA Technical Reports Server (NTRS)

    Brown, Matthew; Johnston, Mark D.

    2013-01-01

    Evolutionary multi-objective algorithms have great potential for scheduling in those situations where tradeoffs among competing objectives represent a key requirement. One challenge, however, is runtime performance, as a consequence of evolving not just a single schedule, but an entire population, while attempting to sample the Pareto frontier as accurately and uniformly as possible. The growing availability of multi-core processors in end user workstations, and even laptops, has raised the question of the extent to which such hardware can be used to speed up evolutionary algorithms. In this paper we report on early experiments in parallelizing a Generalized Differential Evolution (GDE) algorithm for scheduling long-range activities on NASA's Deep Space Network. Initial results show that significant speedups can be achieved, but that performance does not necessarily improve as more cores are utilized. We describe our preliminary results and some initial suggestions from parallelizing the GDE algorithm. Directions for future work are outlined.
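
    A common way to parallelize population-based methods like GDE is to farm out fitness evaluations across worker processes while keeping the evolutionary loop serial; the sketch below does this for a plain differential-evolution variant on a toy objective. It is not the scheduling objective or the GDE code used in the paper, and the population size, control parameters, and worker count are assumptions.

    ```python
    import numpy as np
    from multiprocessing import Pool

    def fitness(x):
        """Toy objective (sphere function); a real scheduler would score a schedule here."""
        return float(np.sum(x ** 2))

    def differential_evolution(dim=10, pop_size=32, gens=50, F=0.7, CR=0.9, workers=4):
        rng = np.random.default_rng(1)
        pop = rng.uniform(-5, 5, (pop_size, dim))
        with Pool(workers) as pool:
            fit = np.array(pool.map(fitness, pop))              # parallel evaluation
            for _ in range(gens):
                idx = np.array([rng.choice(pop_size, 3, replace=False) for _ in range(pop_size)])
                mutants = pop[idx[:, 0]] + F * (pop[idx[:, 1]] - pop[idx[:, 2]])
                cross = rng.random((pop_size, dim)) < CR
                trials = np.where(cross, mutants, pop)
                trial_fit = np.array(pool.map(fitness, trials))  # parallel evaluation
                improved = trial_fit < fit
                pop[improved], fit[improved] = trials[improved], trial_fit[improved]
        return pop[np.argmin(fit)], float(fit.min())

    if __name__ == "__main__":
        best, best_fit = differential_evolution()
        print(best_fit)
    ```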

  2. Multigroup Monte Carlo on GPUs: Comparison of history- and event-based algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Steven P.; Slattery, Stuart R.; Evans, Thomas M.

    This article presents an investigation of the performance of different multigroup Monte Carlo transport algorithms on GPUs, with a discussion of both history-based and event-based approaches. Several algorithmic improvements are introduced for both approaches. By modifying the history-based algorithm that is traditionally favored in CPU-based MC codes to occasionally filter out dead particles to reduce thread divergence, performance exceeds that of either the pure history-based or event-based approaches. The impacts of several algorithmic choices are discussed, including performance studies on Kepler and Pascal generation NVIDIA GPUs for fixed source and eigenvalue calculations. Single-device performance equivalent to 20–40 CPU cores on the K40 GPU and 60–80 CPU cores on the P100 GPU is achieved. In addition, nearly perfect multi-device parallel weak scaling is demonstrated on more than 16,000 nodes of the Titan supercomputer.
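
    The specific trick described here, occasionally filtering dead particles out of a history-based loop so that active work stays densely packed, can be illustrated with a CPU/NumPy analogue, where array compaction plays the role that reducing thread divergence plays on a GPU. The transport physics below is a trivial stand-in (fixed absorption probability), not the multigroup solver from the paper, and the filtering interval is an assumption.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    n_particles = 100_000
    absorb_prob = 0.1          # stand-in physics: absorb or keep scattering
    filter_every = 8           # compact the arrays every few steps

    weights = np.ones(n_particles)
    alive = np.ones(n_particles, dtype=bool)
    tally = 0.0
    step = 0

    while alive.any():
        step += 1
        active = np.nonzero(alive)[0]
        absorbed = rng.random(active.size) < absorb_prob
        tally += weights[active[absorbed]].sum()      # score absorptions
        alive[active[absorbed]] = False
        if step % filter_every == 0:                  # periodic dead-particle filtering
            keep = alive
            weights, alive = weights[keep], alive[keep]

    print(f"steps={step}, tally={tally:.1f}")
    ```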

  3. Multigroup Monte Carlo on GPUs: Comparison of history- and event-based algorithms

    DOE PAGES

    Hamilton, Steven P.; Slattery, Stuart R.; Evans, Thomas M.

    2017-12-22

    This article presents an investigation of the performance of different multigroup Monte Carlo transport algorithms on GPUs, with a discussion of both history-based and event-based approaches. Several algorithmic improvements are introduced for both approaches. By modifying the history-based algorithm that is traditionally favored in CPU-based MC codes to occasionally filter out dead particles to reduce thread divergence, performance exceeds that of either the pure history-based or event-based approaches. The impacts of several algorithmic choices are discussed, including performance studies on Kepler and Pascal generation NVIDIA GPUs for fixed source and eigenvalue calculations. Single-device performance equivalent to 20–40 CPU cores on the K40 GPU and 60–80 CPU cores on the P100 GPU is achieved. In addition, nearly perfect multi-device parallel weak scaling is demonstrated on more than 16,000 nodes of the Titan supercomputer.

  4. Observations on Student Misconceptions--A Case Study of the Build-Heap Algorithm

    ERIC Educational Resources Information Center

    Seppala, Otto; Malmi, Lauri; Korhonen, Ari

    2006-01-01

    Data structures and algorithms are core issues in computer programming. However, learning them is challenging for most students and many of them have various types of misconceptions on how algorithms work. In this study, we discuss the problem of identifying misconceptions on the principles of how algorithms work. Our context is algorithm…

  5. Super-channel oriented routing, spectrum and core assignment under crosstalk limit in spatial division multiplexing elastic optical networks

    NASA Astrophysics Data System (ADS)

    Zhao, Yongli; Zhu, Ye; Wang, Chunhui; Yu, Xiaosong; Liu, Chuan; Liu, Binglin; Zhang, Jie

    2017-07-01

    With the capacity increase in optical networks enabled by spatial division multiplexing (SDM) technology, spatial division multiplexing elastic optical networks (SDM-EONs) have attracted much attention from both academia and industry. The super-channel is an important type of service provisioning in SDM-EONs, and this paper focuses on the issue of super-channel construction in SDM-EONs. A mixed super-channel oriented routing, spectrum and core assignment (MS-RSCA) algorithm is proposed for SDM-EONs considering inter-core crosstalk. Simulation results show that MS-RSCA can improve spectrum resource utilization and reduce blocking probability significantly compared with the baseline RSCA algorithms.
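
    A minimal, first-fit flavour of spectrum and core assignment on a single link can be sketched as below: a request asking for a block of contiguous slots is placed in the first core and slot position where the slots are free and the number of neighbouring cores already using those slots stays under a crosstalk limit. The core-adjacency map, the crosstalk limit, and the single-link scope are simplifying assumptions; MS-RSCA itself handles routing and super-channel construction across a network.

    ```python
    import numpy as np

    NUM_CORES, NUM_SLOTS = 7, 80
    # Adjacency of cores in a 7-core fiber (center core 0 touches all outer cores).
    ADJACENT = {0: [1, 2, 3, 4, 5, 6],
                1: [0, 2, 6], 2: [0, 1, 3], 3: [0, 2, 4],
                4: [0, 3, 5], 5: [0, 4, 6], 6: [0, 1, 5]}

    occupied = np.zeros((NUM_CORES, NUM_SLOTS), dtype=bool)

    def assign(num_slots, xt_limit=2):
        """First-fit core/slot assignment under a simple inter-core crosstalk limit."""
        for core in range(NUM_CORES):
            for start in range(NUM_SLOTS - num_slots + 1):
                block = slice(start, start + num_slots)
                if occupied[core, block].any():
                    continue
                # count adjacent cores that already use any slot in this block
                xt = sum(occupied[adj, block].any() for adj in ADJACENT[core])
                if xt <= xt_limit:
                    occupied[core, block] = True
                    return core, start
        return None          # blocked request

    print(assign(4))
    print(assign(4))
    ```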

  6. Complete synthetic seismograms based on a spherical self-gravitating Earth model with an atmosphere-ocean-mantle-core structure

    NASA Astrophysics Data System (ADS)

    Wang, Rongjiang; Heimann, Sebastian; Zhang, Yong; Wang, Hansheng; Dahm, Torsten

    2017-04-01

    A hybrid method is proposed to calculate complete synthetic seismograms based on a spherically symmetric and self-gravitating Earth with a multi-layered structure of atmosphere, ocean, mantle, liquid core and solid core. For large wavelengths, a numerical scheme is used to solve the geodynamic boundary-value problem without any approximation on the deformation and gravity coupling. With the decreasing wavelength, the gravity effect on the deformation becomes negligible and the analytical propagator scheme can be used. Many useful approaches are used to overcome the numerical problems that may arise in both analytical and numerical schemes. Some of these approaches have been established in the seismological community and the others are developed for the first time. Based on the stable and efficient hybrid algorithm, an all-in-one code QSSP is implemented to cover the complete spectrum of seismological interests. The performance of the code is demonstrated by various tests including the curvature effect on teleseismic body and surface waves, the appearance of multiple reflected, teleseismic core phases, the gravity effect on long period surface waves and free oscillations, the simulation of near-field displacement seismograms with the static offset, the coupling of tsunami and infrasound waves, and free oscillations of the solid Earth, the atmosphere and the ocean. QSSP is open source software that can be used as a stand-alone FORTRAN code or may be applied in combination with a Python toolbox to calculate and handle Green's function databases for efficient coding of source inversion problems.

  7. Complete synthetic seismograms based on a spherical self-gravitating Earth model with an atmosphere-ocean-mantle-core structure

    NASA Astrophysics Data System (ADS)

    Wang, Rongjiang; Heimann, Sebastian; Zhang, Yong; Wang, Hansheng; Dahm, Torsten

    2017-09-01

    A hybrid method is proposed to calculate complete synthetic seismograms based on a spherically symmetric and self-gravitating Earth with a multilayered structure of atmosphere, ocean, mantle, liquid core and solid core. For large wavelengths, a numerical scheme is used to solve the geodynamic boundary-value problem without any approximation on the deformation and gravity coupling. With decreasing wavelength, the gravity effect on the deformation becomes negligible and the analytical propagator scheme can be used. Many useful approaches are used to overcome the numerical problems that may arise in both analytical and numerical schemes. Some of these approaches have been established in the seismological community and the others are developed for the first time. Based on the stable and efficient hybrid algorithm, an all-in-one code QSSP is implemented to cover the complete spectrum of seismological interests. The performance of the code is demonstrated by various tests including the curvature effect on teleseismic body and surface waves, the appearance of multiple reflected, teleseismic core phases, the gravity effect on long period surface waves and free oscillations, the simulation of near-field displacement seismograms with the static offset, the coupling of tsunami and infrasound waves, and free oscillations of the solid Earth, the atmosphere and the ocean. QSSP is open source software that can be used as a stand-alone FORTRAN code or may be applied in combination with a Python toolbox to calculate and handle Green's function databases for efficient coding of source inversion problems.

  8. Identifying protein complex by integrating characteristic of core-attachment into dynamic PPI network.

    PubMed

    Shen, Xianjun; Yi, Li; Jiang, Xingpeng; He, Tingting; Yang, Jincai; Xie, Wei; Hu, Po; Hu, Xiaohua

    2017-01-01

    How to identify protein complexes is an important and challenging task in proteomics, and it would make a great contribution to our knowledge of the molecular mechanisms of cell life activities. However, the inherent organization and dynamic characteristics of the cell system have rarely been incorporated into existing algorithms for detecting protein complexes because of the limitations of protein-protein interaction (PPI) data produced by high-throughput techniques. The availability of time course gene expression profiles enables us to uncover the dynamics of molecular networks and improve the detection of protein complexes. To achieve this goal, this paper proposes a novel algorithm, DCA (Dynamic Core-Attachment). It detects protein-complex cores comprising continually expressed and highly connected proteins in a dynamic PPI network, and then forms each protein complex by adding to the core the attachments with high adhesion. The integration of the core-attachment feature into the dynamic PPI network is responsible for the superiority of our algorithm. DCA has been applied to two different yeast dynamic PPI networks, and the experimental results show that it performs significantly better than state-of-the-art techniques in terms of prediction accuracy, hF-measure and statistical significance in biology. In addition, the identified complexes with strong biological significance provide potential candidate complexes for biologists to validate.

  9. An Energy-Efficient and Scalable Deep Learning/Inference Processor With Tetra-Parallel MIMD Architecture for Big Data Applications.

    PubMed

    Park, Seong-Wook; Park, Junyoung; Bong, Kyeongryeol; Shin, Dongjoo; Lee, Jinmook; Choi, Sungpill; Yoo, Hoi-Jun

    2015-12-01

    Deep learning algorithms are widely used for various pattern recognition applications such as text recognition, object recognition and action recognition because of their best-in-class recognition accuracy compared to hand-crafted and shallow-learning-based algorithms. The long learning time caused by their complex structure, however, has so far limited their usage to high-cost servers or many-core GPU platforms. On the other hand, the demand for customized pattern recognition within personal devices will grow gradually as more deep learning applications are developed. This paper presents an SoC implementation that enables deep learning applications to run on low-cost platforms such as mobile or portable devices. Different from conventional works that have adopted massively parallel architectures, this work adopts a task-flexible architecture and exploits multiple forms of parallelism to cover the complex functions of the convolutional deep belief network, which is one of the popular deep learning/inference algorithms. In this paper, we implement the most energy-efficient deep learning and inference processor for wearable systems. The implemented 2.5 mm × 4.0 mm deep learning/inference processor is fabricated using 65 nm 8-metal CMOS technology for a battery-powered platform with real-time deep inference and deep learning operation. It consumes 185 mW average power, and 213.1 mW peak power at 200 MHz operating frequency and 1.2 V supply voltage. It achieves 411.3 GOPS peak performance and 1.93 TOPS/W energy efficiency, which is 2.07× higher than the state-of-the-art.

  10. NASA Tech Briefs, March 2013

    NASA Technical Reports Server (NTRS)

    2013-01-01

    Topics covered include: Remote Data Access with IDL Data Compression Algorithm Architecture for Large Depth-of-Field Particle Image Velocimeters Vectorized Rebinning Algorithm for Fast Data Down-Sampling Display Provides Pilots with Real-Time Sonic-Boom Information Onboard Algorithms for Data Prioritization and Summarization of Aerial Imagery Monitoring and Acquisition Real-time System (MARS) Analog Signal Correlating Using an Analog-Based Signal Conditioning Front End Micro-Textured Black Silicon Wick for Silicon Heat Pipe Array Robust Multivariable Optimization and Performance Simulation for ASIC Design; Castable Amorphous Metal Mirrors and Mirror Assemblies; Sandwich Core Heat-Pipe Radiator for Power and Propulsion Systems; Apparatus for Pumping a Fluid; Cobra Fiber-Optic Positioner Upgrade; Improved Wide Operating Temperature Range of Li-Ion Cells; Non-Toxic, Non-Flammable, -80 C Phase Change Materials; Soft-Bake Purification of SWCNTs Produced by Pulsed Laser Vaporization; Improved Cell Culture Method for Growing Contracting Skeletal Muscle Models; Hand-Based Biometric Analysis; The Next Generation of Cold Immersion Dry Suit Design Evolution for Hypothermia Prevention; Integrated Lunar Information Architecture for Decision Support Version 3.0 (ILIADS 3.0); Relay Forward-Link File Management Services (MaROS Phase 2); Two Mechanisms to Avoid Control Conflicts Resulting from Uncoordinated Intent; XTCE GOVSAT Tool Suite 1.0; Determining Temperature Differential to Prevent Hardware Cross-Contamination in a Vacuum Chamber; SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws; Remote Data Exploration with the Interactive Data Language (IDL); Mixture-Tuned, Clutter Matched Filter for Remote Detection of Subpixel Spectral Signals; Partitioned-Interval Quantum Optical Communications Receiver; and Practical UAV Optical Sensor Bench with Minimal Adjustability.

  11. Crisis management during anaesthesia: embolism

    PubMed Central

    Williamson, J; Helps, S; Westhorpe, R; Mackay, P

    2005-01-01

    Background: Embolism with gas, thrombus, fat, amniotic fluid, or particulate matter may occur suddenly and unexpectedly during anaesthesia, posing a diagnostic and management problem for the anaesthetist. Objectives: To examine the role of a previously described core algorithm "COVER ABCD–A SWIFT CHECK" supplemented by a specific sub-algorithm for embolism, in the management of embolism occurring in association with anaesthesia. Methods: The potential performance of this structured approach for each of the relevant incidents among the first 4000 reported to the Australian Incident Monitoring Study (AIMS) was compared with the actual management as reported by the anaesthetists involved. Results: Among the first 4000 incidents reported to AIMS, 38 reports of embolism were found. A sudden fall in end-tidal carbon dioxide and oxygen saturation were the cardinal signs of embolism, each occurring in about two thirds of cases, with hypotension and electrocardiographic changes each occurring in about one third of cases. Conclusion: The potential value of an explicit structured approach to the diagnosis and management of embolism was assessed in the light of AIMS reports. It was considered that, correctly applied, it potentially would have led to earlier recognition of the problem and/or better management in over 40% of cases. PMID:15933290

  12. Crisis management during anaesthesia: obstruction of the natural airway.

    PubMed

    Visvanathan, T; Kluger, M T; Webb, R K; Westhorpe, R N

    2005-06-01

    Obstruction of the natural airway, while usually easily recognised and managed, may present simply as desaturation, have an unexpected cause, be very difficult to manage, and have serious consequences for the patient. To examine the role of a previously described core algorithm "COVER ABCD-A SWIFT CHECK", supplemented by a specific sub-algorithm for obstruction of the natural airway, in the management of acute airway obstruction occurring in association with anaesthesia. The potential performance for this structured approach for each of the relevant incidents among the first 4000 reported to the Australian Incident Monitoring Study (AIMS) was compared with the actual management as reported by the anaesthetists involved. There were 62 relevant incidents among the first 4000 reports to the AIMS. It was considered that the correct use of the structured approach would have led to earlier recognition of the problem and/or better management in 11% of cases. Airway management is a fundamental anaesthetic responsibility and skill. Airway obstruction demands a rapid and organised approach to its diagnosis and management and undue delay usually results in desaturation and a potential threat to life. An uncomplicated pre-learned sequence of airway rescue instructions is an essential part of every anaesthetist's clinical practice requirements.

  13. What Do Core Obligations under the Right to Health Bring to Universal Health Coverage?

    PubMed

    Forman, Lisa; Beiersmann, Claudia; Brolan, Claire E; Mckee, Martin; Hammonds, Rachel; Ooms, Gorik

    2016-12-01

    Can the right to health, and particularly the core obligations of states specified under this right, assist in formulating and implementing universal health coverage (UHC), now included in the post-2015 Sustainable Development Goals? In this paper, we examine how core obligations under the right to health could lead to a version of UHC that is likely to advance equity and rights. We first address the affinity between the right to health and UHC as evinced through changing definitions of UHC and the health domains that UHC explicitly covers. We then engage with relevant interpretations of the right to health, including core obligations. We turn to analyze what core obligations might bring to UHC, particularly in defining what and who is covered. Finally, we acknowledge some of the risks associated with both UHC and core obligations and consider potential avenues for mitigating these risks.

  14. Heuristic rules embedded genetic algorithm for in-core fuel management optimization

    NASA Astrophysics Data System (ADS)

    Alim, Fatih

    The objective of this study was to develop a unique methodology and a practical tool for designing the loading pattern (LP) and burnable poison (BP) pattern for a given Pressurized Water Reactor (PWR) core. Because of the large number of possible combinations for the fuel assembly (FA) loading in the core, the design of the core configuration is a complex optimization problem. It requires finding an optimal FA arrangement and BP placement in order to achieve maximum cycle length while satisfying the safety constraints. Genetic Algorithms (GA) have already been used to solve this problem for LP optimization for both PWRs and Boiling Water Reactors (BWR). The GA, which is a stochastic method, works with a group of solutions and uses random variables to make decisions. Based on the theory of evolution, the GA involves natural selection and reproduction of the individuals in the population for the next generation. The GA works by creating an initial population, evaluating it, and then improving the population by using the evolutionary operators. To solve this optimization problem, a LP optimization package, the GARCO (Genetic Algorithm Reactor Code Optimization) code, is developed in the framework of this thesis. This code is applicable to all types of PWR cores having different geometries and structures with an unlimited number of FA types in the inventory. To reach this goal, an innovative GA is developed by modifying the classical representation of the genotype. To obtain the best result in a shorter time, not only the representation but also the algorithm is changed to use in-core fuel management heuristic rules. The improved GA code was tested to demonstrate and verify the advantages of the new enhancements. The developed methodology is explained in this thesis, and preliminary results are shown for the VVER-1000 reactor hexagonal geometry core and the TMI-1 PWR. The core physics code used for the VVER in this research is Moby-Dick, which was developed to analyze the VVER by SKODA Inc. The SIMULATE-3 code, which is an advanced two-group nodal code, is used to analyze the TMI-1.
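
    The genetic-algorithm machinery described here (a population of candidate loading patterns, selection, crossover, mutation) can be sketched with a permutation genotype and a placeholder fitness function. In GARCO the fitness would come from a core physics code such as Moby-Dick or SIMULATE-3, and the genotype and heuristic rules are considerably more elaborate; everything below is an illustrative assumption.

    ```python
    import random

    random.seed(7)
    N_POSITIONS = 20                       # core positions to fill with assembly IDs 0..N-1

    def fitness(pattern):
        """Placeholder objective: a real evaluation would run a core simulator."""
        return -sum(abs(a - i) for i, a in enumerate(pattern))

    def order_crossover(p1, p2):
        """OX crossover preserving the permutation property."""
        a, b = sorted(random.sample(range(len(p1)), 2))
        child = [None] * len(p1)
        child[a:b] = p1[a:b]
        rest = [g for g in p2 if g not in child]
        return [rest.pop(0) if g is None else g for g in child]

    def mutate(pattern, rate=0.2):
        if random.random() < rate:
            i, j = random.sample(range(len(pattern)), 2)
            pattern[i], pattern[j] = pattern[j], pattern[i]
        return pattern

    pop = [random.sample(range(N_POSITIONS), N_POSITIONS) for _ in range(40)]
    for _ in range(100):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:20]                                 # truncation selection
        children = [mutate(order_crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(20)]
        pop = parents + children

    best = max(pop, key=fitness)
    print(best, fitness(best))
    ```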

  15. Optimized extreme learning machine for urban land cover classification using hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Su, Hongjun; Tian, Shufang; Cai, Yue; Sheng, Yehua; Chen, Chen; Najafian, Maryam

    2017-12-01

    This work presents a new urban land cover classification framework using the firefly algorithm (FA) to optimize an extreme learning machine (ELM). FA is adopted to optimize the regularization coefficient C and the Gaussian kernel width σ of the kernel ELM. Additionally, the effectiveness of spectral features derived from an FA-based band selection algorithm is studied for the proposed classification task. Three hyperspectral data sets, recorded with different sensors (HYDICE, HyMap, and AVIRIS), are used. Our study shows that the proposed method outperforms traditional classification algorithms such as SVM and reduces the computational cost significantly.
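
    The core computation being tuned here, a kernel ELM whose output weights have a closed form once the regularization coefficient C and RBF width σ are fixed, can be sketched as below. A simple random search stands in for the firefly algorithm, and the data are synthetic; the sketch only shows what FA is optimizing, not the authors' pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def rbf_kernel(A, B, sigma):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def kernel_elm_fit(X, T, C, sigma):
        """Closed-form kernel ELM: beta = (K + I/C)^-1 T."""
        K = rbf_kernel(X, X, sigma)
        return np.linalg.solve(K + np.eye(len(X)) / C, T)

    def kernel_elm_predict(X_train, beta, X_test, sigma):
        return rbf_kernel(X_test, X_train, sigma) @ beta

    # Synthetic two-class problem (stand-in for hyperspectral pixels).
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    T = np.stack([1 - y, y], axis=1)                      # one-hot targets
    X_tr, X_te, T_tr, y_te = X[:150], X[150:], T[:150], y[150:]

    best = (None, -1.0)
    for _ in range(50):                                   # random search over (C, sigma)
        C, sigma = 10 ** rng.uniform(-2, 4), 10 ** rng.uniform(-1, 1)
        beta = kernel_elm_fit(X_tr, T_tr, C, sigma)
        acc = (kernel_elm_predict(X_tr, beta, X_te, sigma).argmax(1) == y_te).mean()
        if acc > best[1]:
            best = ((C, sigma), acc)
    print(best)
    ```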

  16. The implement of Talmud property allocation algorithm based on graphic point-segment way

    NASA Astrophysics Data System (ADS)

    Cen, Haifeng

    2017-04-01

    Guided by the theory of the Talmud allocation scheme, the paper analyzes the implementation of the algorithm from the perspective of a graphic point-segment representation and designs a point-segment version of the Talmud property allocation algorithm. The core of the allocation algorithm is then implemented in Java, and an Android application provides a visual interface.
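
    For readers unfamiliar with the allocation scheme itself, the standard Aumann-Maschler formulation of the Talmud rule (constrained equal awards on half-claims below the halfway point, constrained equal losses above it) is easy to state in code. The sketch below is that textbook rule in Python, not the paper's Java/Android point-segment implementation.

    ```python
    def _equal_split(caps, total):
        """Give each claimant min(cap, lam), with lam chosen so the awards sum to total."""
        lo, hi = 0.0, max(caps)
        for _ in range(100):                       # bisection on lam
            lam = (lo + hi) / 2
            if sum(min(c, lam) for c in caps) < total:
                lo = lam
            else:
                hi = lam
        return [min(c, lam) for c in caps]

    def talmud_rule(claims, estate):
        half = [c / 2 for c in claims]
        if estate <= sum(half):
            return _equal_split(half, estate)                     # constrained equal awards
        losses = _equal_split(half, sum(claims) - estate)         # constrained equal losses
        return [c - l for c, l in zip(claims, losses)]

    # The classical Talmud example: claims of 100, 200, 300 against various estates.
    for estate in (100, 200, 300, 400):
        print(estate, [round(x, 1) for x in talmud_rule([100, 200, 300], estate)])
    ```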

  17. Core-core and core-valence correlation energy atomic and molecular benchmarks for Li through Ar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ranasinghe, Duminda S.; Frisch, Michael J.; Petersson, George A., E-mail: gpetersson@wesleyan.edu

    2015-12-07

    We have established benchmark core-core, core-valence, and valence-valence absolute coupled-cluster singles and doubles (triples) correlation energies (±0.1%) for 210 species covering the first and second rows of the periodic table. These species provide 194 energy differences (±0.03 mE_h), including ionization potentials, electron affinities, and total atomization energies. These results can be used for calibration of less expensive methodologies for practical routine determination of core-core and core-valence correlation energies.

  18. The Interior Angular Momentum of Core Hydrogen Burning Stars from Gravity-mode Oscillations

    NASA Astrophysics Data System (ADS)

    Aerts, C.; Van Reeth, T.; Tkachenko, A.

    2017-09-01

    A major uncertainty in the theory of stellar evolution is the angular momentum distribution inside stars and its change during stellar life. We compose a sample of 67 stars in the core hydrogen burning phase with a log g value from high-resolution spectroscopy, as well as an asteroseismic estimate of the near-core rotation rate derived from gravity-mode oscillations detected in space photometry. This assembly includes 8 B-type stars and 59 AF-type stars, covering a mass range from 1.4 to 5 M⊙, i.e., it concerns intermediate-mass stars born with a well-developed convective core. The sample covers projected surface rotation velocities v sin i from 9 to 242 km s⁻¹ and core rotation rates up to 26 μHz, which corresponds to 50% of the critical rotation frequency. We find deviations from rigid rotation to be moderate in the single stars of this sample. We place the near-core rotation rates in an evolutionary context and find that the core rotation must drop drastically before or during the short phase between the end of core hydrogen burning and the onset of core helium burning. We compute the spin parameter, which is the ratio of twice the rotation rate to the mode frequency (also known as the inverse Rossby number), for 1682 gravity modes and find the majority (95%) to occur in the sub-inertial regime. The 10 stars with Rossby modes have spin parameters between 14 and 30, while the gravito-inertial modes cover the range from 1 to 15.
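
    The spin parameter used in the last sentences can be written out explicitly. In the notation below, Ω_rot is the near-core rotation rate and ω_co the gravity-mode frequency; taking ω_co in the corotating frame is the usual convention and is an assumption here, since the abstract only says "mode frequency".

    ```latex
    s = \frac{2\,\Omega_{\mathrm{rot}}}{\omega_{\mathrm{co}}},
    \qquad
    s > 1 \;\Longleftrightarrow\; \text{sub-inertial regime (gravito-inertial modes)}.
    ```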

  19. Evolving land cover classification algorithms for multispectral and multitemporal imagery

    NASA Astrophysics Data System (ADS)

    Brumby, Steven P.; Theiler, James P.; Bloch, Jeffrey J.; Harvey, Neal R.; Perkins, Simon J.; Szymanski, John J.; Young, Aaron C.

    2002-01-01

    The Cerro Grande/Los Alamos forest fire devastated over 43,000 acres (17,500 ha) of forested land, and destroyed over 200 structures in the town of Los Alamos and the adjoining Los Alamos National Laboratory. The need to measure the continuing impact of the fire on the local environment has led to the application of a number of remote sensing technologies. During and after the fire, remote-sensing data was acquired from a variety of aircraft- and satellite-based sensors, including Landsat 7 Enhanced Thematic Mapper (ETM+). We now report on the application of a machine learning technique to the automated classification of land cover using multi-spectral and multi-temporal imagery. We apply a hybrid genetic programming/supervised classification technique to evolve automatic feature extraction algorithms. We use a software package we have developed at Los Alamos National Laboratory, called GENIE, to carry out this evolution. We use multispectral imagery from the Landsat 7 ETM+ instrument from before, during, and after the wildfire. Using an existing land cover classification based on a 1992 Landsat 5 TM scene for our training data, we evolve algorithms that distinguish a range of land cover categories, and an algorithm to mask out clouds and cloud shadows. We report preliminary results of combining individual classification results using a K-means clustering approach. The details of our evolved classification are compared to the manually produced land-cover classification.

  20. Identification of both copy number variation-type and constant-type core elements in a large segmental duplication region of the mouse genome

    PubMed Central

    2013-01-01

    Background Copy number variation (CNV), an important source of diversity in genomic structure, is frequently found in clusters called CNV regions (CNVRs). CNVRs are strongly associated with segmental duplications (SDs), but the composition of these complex repetitive structures remains unclear. Results We conducted self-comparative-plot analysis of all mouse chromosomes using the high-speed and large-scale-homology search algorithm SHEAP. For eight chromosomes, we identified various types of large SD as tartan-checked patterns within the self-comparative plots. A complex arrangement of diagonal split lines in the self-comparative-plots indicated the presence of large homologous repetitive sequences. We focused on one SD on chromosome 13 (SD13M), and developed SHEPHERD, a stepwise ab initio method, to extract longer repetitive elements and to characterize repetitive structures in this region. Analysis using SHEPHERD showed the existence of 60 core elements, which were expected to be the basic units that form SDs within the repetitive structure of SD13M. The demonstration that sequences homologous to the core elements (>70% homology) covered approximately 90% of the SD13M region indicated that our method can characterize the repetitive structure of SD13M effectively. Core elements were composed largely of fragmented repeats of a previously identified type, such as long interspersed nuclear elements (LINEs), together with partial genic regions. Comparative genome hybridization array analysis showed that whereas 42 core elements were components of CNVR that varied among mouse strains, 8 did not vary among strains (constant type), and the status of the others could not be determined. The CNV-type core elements contained significantly larger proportions of long terminal repeat (LTR) types of retrotransposon than the constant-type core elements, which had no CNV. The higher divergence rates observed in the CNV-type core elements than in the constant type indicate that the CNV-type core elements have a longer evolutionary history than constant-type core elements in SD13M. Conclusions Our methodology for the identification of repetitive core sequences simplifies characterization of the structures of large SDs and detailed analysis of CNV. The results of detailed structural and quantitative analyses in this study might help to elucidate the biological role of one of the SDs on chromosome 13. PMID:23834397

  1. Conceptual Underpinnings of the Quality of Life in Neurological Disorders (Neuro-QoL): Comparisons of Core Sets for Stroke, Multiple Sclerosis, Spinal Cord Injury, and Traumatic Brain Injury.

    PubMed

    Wong, Alex W K; Lau, Stephen C L; Fong, Mandy W M; Cella, David; Lai, Jin-Shei; Heinemann, Allen W

    2018-04-03

    Objective: To determine the extent to which the content of the Quality of Life in Neurological Disorders (Neuro-QoL) covers the International Classification of Functioning, Disability and Health (ICF) Core Sets for multiple sclerosis (MS), stroke, spinal cord injury (SCI), and traumatic brain injury (TBI) using summary linkage indicators. Design: Content analysis by linking the content of the Neuro-QoL to the corresponding ICF codes of each Core Set for MS, stroke, SCI, and TBI. Setting: Three academic centers. Participants/Interventions: Not applicable. Main Outcome Measures: Four summary linkage indicators proposed by MacDermid et al were estimated to compare the content coverage between the Neuro-QoL and the ICF codes of the Core Sets for MS, stroke, SCI, and TBI. Results: The Neuro-QoL represented 20% to 30% of the Core Set codes for the different conditions, with more codes covered in the Core Sets for MS (29%), stroke (28%), and TBI (28%) than in those for SCI in the long-term (20%) and early postacute (19%) contexts. The Neuro-QoL represented nearly half of the unique Activity and Participation codes (43%-49%) and less than one third of the unique Body Function codes (12%-32%). It represented fewer Environmental Factors codes (2%-6%) and no Body Structures codes. Absolute linkage indicators found that at least 60% of Neuro-QoL items were linked to Core Set codes (63%-95%), but many items covered the same codes as revealed by the unique linkage indicators (7%-13%), suggesting high concept redundancy among items. Conclusions: The Neuro-QoL links more closely to the ICF Core Sets for stroke, MS, and TBI than to those for SCI, and primarily covers the activity and participation ICF domains. Other instruments are needed to address concepts not measured by the Neuro-QoL when a comprehensive health assessment is needed. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  2. Mapping and assessing variability in the Antarctic marginal ice zone, pack ice and coastal polynyas in two sea ice algorithms with implications on breeding success of snow petrels

    NASA Astrophysics Data System (ADS)

    Stroeve, Julienne C.; Jenouvrier, Stephanie; Campbell, G. Garrett; Barbraud, Christophe; Delord, Karine

    2016-08-01

    Sea ice variability within the marginal ice zone (MIZ) and polynyas plays an important role for phytoplankton productivity and krill abundance. Therefore, mapping their spatial extent as well as seasonal and interannual variability is essential for understanding how current and future changes in these biologically active regions may impact the Antarctic marine ecosystem. Knowledge of the distribution of MIZ, consolidated pack ice and coastal polynyas in the total Antarctic sea ice cover may also help to shed light on the factors contributing towards recent expansion of the Antarctic ice cover in some regions and contraction in others. The long-term passive microwave satellite data record provides the longest and most consistent record for assessing the proportion of the sea ice cover that is covered by each of these ice categories. However, estimates of the amount of MIZ, consolidated pack ice and polynyas depend strongly on which sea ice algorithm is used. This study uses two popular passive microwave sea ice algorithms, the NASA Team and Bootstrap, and applies the same thresholds to the sea ice concentrations to evaluate the distribution and variability in the MIZ, the consolidated pack ice and coastal polynyas. Results reveal that the seasonal cycle in the MIZ and pack ice is generally similar between both algorithms, yet the NASA Team algorithm has on average twice the MIZ and half the consolidated pack ice area as the Bootstrap algorithm. Trends also differ, with the Bootstrap algorithm suggesting statistically significant trends towards increased pack ice area and no statistically significant trends in the MIZ. The NASA Team algorithm on the other hand indicates statistically significant positive trends in the MIZ during spring. Potential coastal polynya area and amount of broken ice within the consolidated ice pack are also larger in the NASA Team algorithm. The timing of maximum polynya area may differ by as much as 5 months between algorithms. These differences lead to different relationships between sea ice characteristics and biological processes, as illustrated here with the breeding success of an Antarctic seabird.
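
    The ice-class partition being compared here is driven by concentration thresholds applied to each algorithm's sea ice concentration field; a minimal version is sketched below. The 15% and 80% cut-offs are commonly used values taken as assumptions, and the toy concentration arrays stand in for the NASA Team and Bootstrap fields.

    ```python
    import numpy as np

    def classify_ice(concentration, open_water_max=0.15, pack_min=0.80):
        """Classify sea ice concentration (0-1) into open water, MIZ, and consolidated pack ice.
        Threshold values are illustrative assumptions."""
        classes = np.full(concentration.shape, "open water", dtype=object)
        classes[(concentration >= open_water_max) & (concentration < pack_min)] = "MIZ"
        classes[concentration >= pack_min] = "pack ice"
        return classes

    conc_nasa_team = np.array([0.05, 0.30, 0.75, 0.90])
    conc_bootstrap = np.array([0.10, 0.55, 0.85, 0.95])
    print(classify_ice(conc_nasa_team))
    print(classify_ice(conc_bootstrap))
    ```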

  3. On-Board Cryospheric Change Detection By The Autonomous Sciencecraft Experiment

    NASA Astrophysics Data System (ADS)

    Doggett, T.; Greeley, R.; Castano, R.; Cichy, B.; Chien, S.; Davies, A.; Baker, V.; Dohm, J.; Ip, F.

    2004-12-01

    The Autonomous Sciencecraft Experiment (ASE) is operating on board Earth Observing-1 (EO-1) with the Hyperion hyperspectral visible/near-IR spectrometer. ASE science activities include autonomous monitoring of cryospheric changes, triggering the collection of additional data when change is detected and filtering of null data such as no change or cloud cover. This would have application to the study of cryospheres on Earth, Mars and the icy moons of the outer solar system. A cryosphere classification algorithm, in combination with a previously developed cloud algorithm [1], was tested on board ten times from March through August 2004. The cloud algorithm correctly screened out three scenes with total cloud cover, while the cryosphere algorithm detected alpine snow cover in the Rocky Mountains, lake thaw near Madison, Wisconsin, and the presence and subsequent break-up of sea ice in the Barrow Strait of the Canadian Arctic. Hyperion has 220 bands ranging from 400 to 2400 nm, with a spatial resolution of 30 m/pixel and a spectral resolution of 10 nm. Limited on-board memory and processing speed imposed the constraint that only partially processed Level 0.5 data, with dark image subtraction and gain factors applied but not full radiometric calibration, could be used. In addition, a maximum of 12 bands could be used for any stacked sequence of algorithms run for a scene on board. The cryosphere algorithm was developed to classify snow, water, ice and land, using six Hyperion bands at 427, 559, 661, 864, 1245 and 1649 nm. Of these, only 427 nm overlaps with the cloud algorithm. The cloud algorithm was developed with Level 1 data, which introduces complications because of the incomplete calibration of the SWIR in Level 0.5 data, including a high level of noise in the 1377 nm band used by the cloud algorithm. Development of a more robust cryosphere classifier, including cloud classification specifically adapted to Level 0.5, is in progress for deployment on EO-1 as part of continued ASE operations. [1] Griffin, M.K. et al., Cloud Cover Detection Algorithm for EO-1 Hyperion Imagery, SPIE 17, 2003.

  4. Multi-Core Programming Design Patterns: Stream Processing Algorithms for Dynamic Scene Perceptions

    DTIC Science & Technology

    2014-05-01

    processor developed by IBM and other companies, incorporates the POWER5 processor as the Power Processor Element (PPE), one of the early general...deliver a power-efficient single-precision peak performance of more than 256 GFlops. Substantially more raw power became available later, when nVIDIA ...algorithms, including IBM's Cell/B.E., GPUs from NVidia and AMD and many-core CPUs from Intel. The vast growth of digital video content has been a

  5. Application of ant colony Algorithm and particle swarm optimization in architectural design

    NASA Astrophysics Data System (ADS)

    Song, Ziyi; Wu, Yunfa; Song, Jianhua

    2018-02-01

    By studying the development of the ant colony algorithm and the particle swarm algorithm, this paper expounds the core ideas of the two algorithms, explores how they can be combined with architectural design, and sums up the application rules of intelligent algorithms in architectural design. Combining the characteristics of the two algorithms, it derives a research route and a practical way of applying intelligent algorithms in architectural design and establishes algorithm rules to assist the design process. Taking intelligent algorithms as a starting point for architectural design research, the authors provide a theoretical foundation for the use of the ant colony algorithm and the particle swarm algorithm in architectural design, broaden the range of applications of intelligent algorithms in architectural design, and offer a new line of thinking for architects.
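
    The abstract describes the algorithms only at a conceptual level; as a reference point, the sketch below is a minimal particle swarm optimiser in Python applied to a hypothetical "layout cost" objective. The objective, the bounds and all parameter values are illustrative assumptions, not the authors' formulation.

    ```python
    import numpy as np

    def pso(objective, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimiser for a continuous objective (minimisation)."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
        v = np.zeros_like(x)                         # particle velocities
        pbest = x.copy()
        pbest_val = np.array([objective(p) for p in x])
        gbest = pbest[pbest_val.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            val = np.array([objective(p) for p in x])
            better = val < pbest_val
            pbest[better], pbest_val[better] = x[better], val[better]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    # Hypothetical "layout cost" standing in for an architectural design objective
    cost = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
    print(pso(cost))  # converges near (1, -2)
    ```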

  6. Pricing the Services of Scientific Cores. Part I: Charging Subsidized and Unsubsidized Users.

    ERIC Educational Resources Information Center

    Fife, Jerry; Forrester, Robert

    2002-01-01

    Explaining that scientific cores at research institutions support shared resources and facilities, discusses devising a method of charging users for core services and controlling and managing the rates. Proposes the concept of program-based management to cover sources of core support that are funding similar work. (EV)

  7. Use of information technologies when designing multilayered plates and covers with filler of various types

    NASA Astrophysics Data System (ADS)

    Golova, T. A.; Magerramova, I. A.; Ivanov, S. A.

    2018-05-01

    Conventional calculation of multilayered plates and covers does not take the anisotropic properties of the structure into account: the problem is reduced to uniform isotropic covers and to determining the stress-strain state of the structure. Existing techniques account for the behaviour of multilayered structures by means of various coefficients. Based on the research conducted, the article describes an optimized sequence of operations for designing multilayered plates and covers with fillers of various types and presents an engineering algorithm for the calculation of multi-layer wall structures. Software has been created that allows the stress-strain state of wall structures to be assessed.

  8. PEG Enhancement for EM1 and EM2+ Missions

    NASA Technical Reports Server (NTRS)

    Von der Porten, Paul; Ahmad, Naeem; Hawkins, Matt

    2018-01-01

    NASA is currently building the Space Launch System (SLS) Block-1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. The next evolution of SLS, the Block-1B Exploration Mission 2 (EM-2), is currently being designed. The Block-1 and Block-1B vehicles will use the Powered Explicit Guidance (PEG) algorithm. Due to the relatively low thrust-to-weight ratio of the Exploration Upper Stage (EUS), certain enhancements to the Block-1 PEG algorithm are needed to perform Block-1B missions. In order to accommodate mission design for EM-2 and beyond, PEG has been significantly improved since its use on the Space Shuttle program. The current version of PEG has the ability to switch to different targets during Core Stage (CS) or EUS flight, and can automatically reconfigure for a single Engine Out (EO) scenario, loss of communication with the Launch Abort System (LAS), and Inertial Navigation System (INS) failure. The Thrust Factor (TF) algorithm uses measured state information in addition to a priori parameters, providing PEG with an improved estimate of propulsion information. This provides robustness against unknown or undetected engine failures. A loft parameter input allows LAS jettison while maximizing payload mass. The current PEG algorithm is now able to handle various classes of missions with burn arcs much longer than were seen in the shuttle program. These missions include targeting a circular LEO orbit with a low-thrust, long-burn-duration upper stage, targeting a highly eccentric Trans-Lunar Injection (TLI) orbit, targeting a disposal orbit using the low-thrust Reaction Control System (RCS), and targeting a hyperbolic orbit. This paper describes the design and implementation of the TF algorithm, the strategy to handle EO in various flight regimes, algorithms to cover off-nominal conditions, and other enhancements to the Block-1 PEG algorithm. The paper also illustrates challenges posed by the Block-1B vehicle, and results show that the improved PEG algorithm is suitable for use on the SLS Block-1B vehicle as part of the Guidance, Navigation, and Control System.

  9. Efficient Geometric Sound Propagation Using Visibility Culling

    NASA Astrophysics Data System (ADS)

    Chandak, Anish

    2011-07-01

    Simulating propagation of sound can improve the sense of realism in interactive applications such as video games and can lead to better designs in engineering applications such as architectural acoustics. In this thesis, we present geometric sound propagation techniques which are faster than prior methods and map well to upcoming parallel multi-core CPUs. We model specular reflections by using the image-source method and model finite-edge diffraction by using the well-known Biot-Tolstoy-Medwin (BTM) model. We accelerate the computation of specular reflections by applying novel visibility algorithms, FastV and AD-Frustum, which compute visibility from a point. We accelerate finite-edge diffraction modeling by applying a novel visibility algorithm which computes visibility from a region. Our visibility algorithms are based on frustum tracing and exploit recent advances in fast ray-hierarchy intersections, data-parallel computations, and scalable, multi-core algorithms. The AD-Frustum algorithm adapts its computation to the scene complexity and allows small errors in computing specular reflection paths for higher computational efficiency. FastV and our visibility algorithm from a region are general, object-space, conservative visibility algorithms that together significantly reduce the number of image sources compared to other techniques while preserving the same accuracy. Our geometric propagation algorithms are an order of magnitude faster than prior approaches for modeling specular reflections and two to ten times faster for modeling finite-edge diffraction. Our algorithms are interactive, scale almost linearly on multi-core CPUs, and can handle large, complex, and dynamic scenes. We also compare the accuracy of our sound propagation algorithms with other methods. Once sound propagation is performed, it is desirable to listen to the propagated sound in interactive and engineering applications. We can generate smooth, artifact-free output audio signals by applying efficient audio-processing algorithms. We also present the first efficient audio-processing algorithm for scenarios with simultaneously moving source and moving receiver (MS-MR) which incurs less than 25% overhead compared to static source and moving receiver (SS-MR) or moving source and static receiver (MS-SR) scenario.

  10. Optimization of the coherence function estimation for multi-core central processing unit

    NASA Astrophysics Data System (ADS)

    Cheremnov, A. G.; Faerman, V. A.; Avramchuk, V. S.

    2017-02-01

    The paper considers the use of parallel processing on a multi-core central processing unit to optimize the evaluation of the coherence function arising in digital signal processing. The coherence function, along with other methods of spectral analysis, is commonly used for vibration diagnosis of rotating machinery and its particular nodes. An algorithm is given for evaluating the function for signals represented by digital samples, and the algorithm is analyzed with respect to its software implementation and computational problems. Optimization measures are described, including algorithmic, architectural and compiler optimizations, and their results are assessed for multi-core processors from different manufacturers. The speed-up of parallel execution with respect to sequential execution was studied, and results are presented for Intel Core i7-4720HQ and AMD FX-9590 processors. The results show comparatively high efficiency of the optimization measures taken; in particular, acceleration indicators and average CPU utilization were significantly improved, showing a high degree of parallelism in the implemented functions. The developed software underwent state registration and will be used as part of a software and hardware solution for rotating machinery fault diagnosis and pipeline leak location with the acoustic correlation method.
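
    The paper's own optimised implementation is not reproduced here; the sketch below only illustrates the quantity being computed and the idea of spreading independent channel pairs over CPU cores, using SciPy's Welch-based coherence estimate and a process pool. The signals, delay and pool size are arbitrary assumptions.

    ```python
    import numpy as np
    from multiprocessing import Pool
    from scipy.signal import coherence

    fs = 10_000.0
    t = np.arange(0, 2.0, 1.0 / fs)
    rng = np.random.default_rng(1)
    x = np.sin(2 * np.pi * 500 * t) + 0.5 * rng.standard_normal(t.size)
    y = np.roll(x, 25) + 0.5 * rng.standard_normal(t.size)  # delayed, noisy copy of x

    def pair_coherence(pair):
        a, b = pair
        return coherence(a, b, fs=fs, nperseg=1024)  # Welch-averaged magnitude-squared coherence

    if __name__ == "__main__":
        # In practice the speed-up comes from distributing many channel pairs across cores
        with Pool(processes=4) as pool:
            results = pool.map(pair_coherence, [(x, y)] * 4)
        f, cxy = results[0]
        print("peak coherence %.2f near %.0f Hz" % (cxy.max(), f[cxy.argmax()]))
    ```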

  11. Dynamic optical resource allocation for mobile core networks with software defined elastic optical networking.

    PubMed

    Zhao, Yongli; Chen, Zhendong; Zhang, Jie; Wang, Xinbo

    2016-07-25

    Driven by the forthcoming 5G mobile communications, the all-IP architecture of mobile core networks, i.e. the evolved packet core (EPC) proposed by 3GPP, has been greatly challenged by users' demands for higher data rates and more reliable end-to-end connections, as well as operators' demands for low operational cost. These challenges can potentially be met by software defined optical networking (SDON), which enables dynamic resource allocation according to users' requirements. In this article, a novel network architecture for mobile core networks is proposed based on SDON. A software defined network (SDN) controller is designed to realize coordinated control over different entities in EPC networks. We analyze the requirements of the EPC-lightpath (EPCL) in the data plane and propose an optical switch load balancing (OSLB) algorithm for resource allocation in the optical layer. The procedure for establishing and adjusting EPCLs is demonstrated on an SDON-based EPC testbed with an extended OpenFlow protocol. We also evaluate the OSLB algorithm through simulation in terms of bandwidth blocking ratio, traffic load distribution, and resource utilization ratio compared with link-based load balancing (LLB) and MinHops algorithms.

  12. Simultaneous optimization of loading pattern and burnable poison placement for PWRs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alim, F.; Ivanov, K.; Yilmaz, S.

    2006-07-01

    To solve the in-core fuel management optimization problem, GARCO-PSU (Genetic Algorithm Reactor Core Optimization - Pennsylvania State Univ.) is developed. This code is applicable to all types and geometries of PWR core structures with an unlimited number of fuel assembly (FA) types in the inventory. For this purpose an innovative genetic algorithm is developed by modifying the classical representation of the genotype, and in-core fuel management heuristic rules are introduced into GARCO. Core re-load design optimization has two parts, loading pattern (LP) optimization and burnable poison (BP) placement optimization. These parts depend on each other, but it is difficult to solve the combined problem due to its large size. Separating the problem into two parts provides a practical way to solve it; however, the result of this method does not reflect the true optimal solution. GARCO-PSU manages to solve the LP optimization and BP placement optimization simultaneously in an efficient manner. (authors)

  13. A Teaching Approach from the Exhaustive Search Method to the Needleman-Wunsch Algorithm

    ERIC Educational Resources Information Center

    Xu, Zhongneng; Yang, Yayun; Huang, Beibei

    2017-01-01

    The Needleman-Wunsch algorithm has become one of the core algorithms in bioinformatics; however, this programming requires more suitable explanations for students with different major backgrounds. In supposing sample sequences and using a simple store system, the connection between the exhaustive search method and the Needleman-Wunsch algorithm…
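
    A minimal Python version of the dynamic-programming recurrence (score matrix only, without traceback) makes the connection to exhaustive search concrete; the match/mismatch/gap values below are common textbook choices, not ones prescribed by the article.

    ```python
    def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
        """Global alignment score by dynamic programming (Needleman-Wunsch)."""
        n, m = len(a), len(b)
        # score[i][j] = best score for aligning a[:i] with b[:j]
        score = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            score[i][0] = i * gap
        for j in range(1, m + 1):
            score[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
        return score[n][m]

    print(needleman_wunsch("GATTACA", "GCATGCU"))  # small illustrative sequence pair
    ```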

  14. Coding for parallel execution of hardware-in-the-loop millimeter-wave scene generation models on multicore SIMD processor architectures

    NASA Astrophysics Data System (ADS)

    Olson, Richard F.

    2013-05-01

    Rendering of point scatterer based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real-world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g. OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.

  15. Parallel heterogeneous architectures for efficient OMP compressive sensing reconstruction

    NASA Astrophysics Data System (ADS)

    Kulkarni, Amey; Stanislaus, Jerome L.; Mohsenin, Tinoosh

    2014-05-01

    Compressive Sensing (CS) is a novel scheme in which a signal that is sparse in a known transform domain can be reconstructed using fewer samples. The signal reconstruction techniques are computationally intensive and have sluggish performance, which makes them impractical for real-time processing applications. The paper presents novel architectures for the Orthogonal Matching Pursuit (OMP) algorithm, one of the popular CS reconstruction algorithms. We show implementation results of the proposed architectures on FPGA, ASIC and on a custom many-core platform. For the FPGA and ASIC implementations, a novel thresholding method is used to reduce the processing time for the optimization problem by at least 25%, whereas for the custom many-core platform efficient parallelization techniques are applied to reconstruct signals with varying signal lengths N and sparsity m. The algorithm is divided into three kernels; each kernel is parallelized to reduce execution time, while efficient reuse of the matrix operators allows us to reduce area. Matrix operations are efficiently parallelized by taking advantage of blocked algorithms. For demonstration purposes, all architectures reconstruct a 256-length signal with a maximum sparsity of 8 using 64 measurements. Implementation on a Xilinx Virtex-5 FPGA requires 27.14 μs to reconstruct the signal using basic OMP, and 18 μs with the thresholding method. The ASIC implementation reconstructs the signal in 13 μs, while our custom many-core, operating at 1.18 GHz, takes 18.28 μs to complete. Our results show that, compared to previously published work on the same algorithm and matrix size, the proposed FPGA and ASIC architectures are 1.3x and 1.8x faster, respectively. Also, the proposed many-core implementation performs 3000x faster than the CPU and 2000x faster than the GPU.
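
    For reference, a plain NumPy sketch of the basic OMP iteration (correlate with the residual, select an atom, re-fit by least squares) is given below using the 256/8/64 problem size quoted above. It is a software illustration of the algorithm only, not the FPGA, ASIC or many-core architectures described in the paper.

    ```python
    import numpy as np

    def omp(Phi, y, sparsity):
        """Basic Orthogonal Matching Pursuit: greedily pick atoms, re-fit by least squares."""
        residual, support = y.copy(), []
        for _ in range(sparsity):
            support.append(int(np.argmax(np.abs(Phi.T @ residual))))   # most correlated atom
            x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)  # least-squares re-fit
            residual = y - Phi[:, support] @ x_s
        x = np.zeros(Phi.shape[1])
        x[support] = x_s
        return x

    # 256-length signal, sparsity 8, 64 random Gaussian measurements
    rng = np.random.default_rng(0)
    x_true = np.zeros(256)
    x_true[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
    Phi = rng.standard_normal((64, 256)) / np.sqrt(64)
    x_hat = omp(Phi, Phi @ x_true, sparsity=8)
    print("max reconstruction error:", np.max(np.abs(x_hat - x_true)))
    ```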

  16. The Geomagnetic Field as a Transient: Constraints From Paleomagnetic Intensity Data

    NASA Astrophysics Data System (ADS)

    Aldridge, K. D.; Baker, R.; McMillan, D. G.

    2009-12-01

    Measurement of Earth’s magnetic field intensity from sedimentary rocks confirms that the field is a transient on millennial time scales. In accounting for this observation, parameters from dynamo models need to be compared with those obtained from observations. Here we model temporal changes in intensity of the geomagnetic field as either growths or decays, sometimes separated by stationary states. In order to obtain temporal properties of the geomagnetic field, our model, developed as a Matlab algorithm, searches records of relative paleointensity to measure objectively the rates of growth and decay of the field. Here we report on the application of our algorithm to six records of relative paleointensity obtained from oceanic cores. Our model for the fluid velocity field in Earth’s core is based on parametric instability produced externally through gradients of the gravitational field. It is well known that these gradients can lead to instability of the core fluid through both elliptical and shear straining of fluid streamlines. Such an instability will exist as long as the externally produced strain rate exceeds the dissipation rate in Earth’s fluid core. Since it is known from both theoretical models and experimental observations that a sequence of alternately growing and decaying velocities will develop in the fluid, our algorithm has searched the records of relative paleointensity for exponential growths and decays. Because a balance may exist between the strain and decay rates described above, our algorithm includes the possibility of a segment of relative paleointensity that is stationary. Such segments do indeed occur in the relative paleointensity record and are expected by the model of parametric instability. Results of the application of our algorithm spanning two Ma with broad geographical coverage will be presented.

  17. Accelerating Demand Paging for Local and Remote Out-of-Core Visualization

    NASA Technical Reports Server (NTRS)

    Ellsworth, David

    2001-01-01

    This paper describes a new algorithm that improves the performance of application-controlled demand paging for the out-of-core visualization of data sets that are on either local disks or disks on remote servers. The performance improvements come from better overlapping the computation with the page reading process, and by performing multiple page reads in parallel. The new algorithm can be applied to many different visualization algorithms since application-controlled demand paging is not specific to any visualization algorithm. The paper includes measurements that show that the new multi-threaded paging algorithm decreases the time needed to compute visualizations by one third when using one processor and reading data from local disk. The time needed when using one processor and reading data from remote disk decreased by up to 60%. Visualization runs using data from remote disk ran about as fast as ones using data from local disk because the remote runs were able to make use of the remote server's high performance disk array.

  18. Concurrent computation of attribute filters on shared memory parallel machines.

    PubMed

    Wilkinson, Michael H F; Gao, Hui; Hesselink, Wim H; Jonker, Jan-Eppo; Meijster, Arnold

    2008-10-01

    Morphological attribute filters have not previously been parallelized, mainly because they are both global and non-separable. We propose a parallel algorithm that achieves efficient parallelism for a large class of attribute filters, including attribute openings, closings, thinnings and thickenings, based on Salembier's Max-Trees and Min-trees. The image or volume is first partitioned in multiple slices. We then compute the Max-trees of each slice using any sequential Max-Tree algorithm. Subsequently, the Max-trees of the slices can be merged to obtain the Max-tree of the image. A C-implementation yielded good speed-ups on both a 16-processor MIPS 14000 parallel machine, and a dual-core Opteron-based machine. It is shown that the speed-up of the parallel algorithm is a direct measure of the gain with respect to the sequential algorithm used. Furthermore, the concurrent algorithm shows a speed gain of up to 72 percent on a single-core processor, due to reduced cache thrashing.

  19. MODIS Snow Cover Mapping Decision Tree Technique: Snow and Cloud Discrimination

    NASA Technical Reports Server (NTRS)

    Riggs, George A.; Hall, Dorothy K.

    2010-01-01

    Accurate mapping of snow cover continues to challenge cryospheric scientists and modelers. The Moderate-Resolution Imaging Spectroradiometer (MODIS) snow data products have been used since 2000 by many investigators to map and monitor snow cover extent for various applications. Users have reported on the utility of the products and also on problems encountered. Three problems or hindrances in the use of the MODIS snow data products that have been reported in the literature are: cloud obscuration, snow/cloud confusion, and snow omission errors in thin or sparse snow cover conditions. Implementation of the MODIS snow algorithm in a decision tree technique using surface reflectance input to mitigate those problems is being investigated. The objective of this work is to use a decision tree structure for the snow algorithm. This should alleviate snow/cloud confusion and omission errors and provide a snow map with classes that convey information on how snow was detected, e.g. snow under clear sky, snow under cloud, to give users flexibility in interpreting and deriving a snow map. Results of a snow cover decision tree algorithm are compared to the standard MODIS snow map and found to exhibit improved ability to alleviate snow/cloud confusion in some situations, allowing up to about a 5% increase in mapped snow cover extent, and thus accuracy, in some scenes.
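
    A toy decision-tree fragment along these lines is sketched below; the NDSI-style test, the thresholds and the class codes are illustrative assumptions and not the operational MODIS snow algorithm.

    ```python
    import numpy as np

    def snow_map(green, swir, cloud_mask):
        """Toy decision tree whose class codes convey *how* snow was detected.

        0 = no snow, 1 = snow under clear sky, 2 = possible snow under cloud.
        Thresholds (NDSI > 0.4, green > 0.1) are illustrative, not the MODIS values.
        """
        ndsi = (green - swir) / (green + swir + 1e-6)
        snow = (ndsi > 0.4) & (green > 0.1)
        out = np.zeros(green.shape, dtype=np.uint8)
        out[snow & ~cloud_mask] = 1   # clear-sky branch of the tree
        out[snow & cloud_mask] = 2    # cloudy branch: flag rather than discard
        return out
    ```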

  20. Cloud cover and solar disk state estimation using all-sky images: deep neural networks approach compared to routine methods

    NASA Astrophysics Data System (ADS)

    Krinitskiy, Mikhail; Sinitsyn, Alexey

    2017-04-01

    Shortwave radiation is an important component of the surface heat budget over sea and land. To estimate it accurately, observations of cloud conditions are needed, including total cloud cover and the spatial and temporal cloud structure. While massively observed visually, cloud structure also needs to be quantified with precise instrumental measurements in order to build accurate SW radiation parameterizations. Although several state-of-the-art land-based cloud cameras already satisfy researchers' needs, their major disadvantages are associated with the inaccuracy of all-sky image processing algorithms, which typically results in uncertainties of 2-4 octa in cloud cover estimates and a true-scoring cloud cover accuracy of about 7%. Moreover, none of these algorithms determine cloud types. We developed an approach for estimating cloud cover and structure which provides much more accurate estimates and also allows additional characteristics to be measured. This method is based on a synthetic controlling index, namely the "grayness rate index", that we introduced in 2014. Since then this index has demonstrated high efficiency when used, along with a technique we call "background sunburn effect suppression", to detect thin clouds. This made it possible to significantly increase the accuracy of total cloud cover estimation in various sky image states using this extension of the routine algorithm type. Errors in the cloud cover estimates decreased significantly, resulting in a mean squared error of about 1.5 octa and a true-scoring accuracy of more than 38%. The main source of uncertainty in this approach is errors in determining the state of the solar disk. While a deep neural network approach lets us estimate the solar disk state with 94% accuracy, the final result of total cloud estimation is still not satisfying. To solve this problem completely, we applied a set of machine learning algorithms directly to the problem of total cloud cover estimation. The accuracy of this approach varies depending on the choice of algorithm; deep neural networks demonstrated the best accuracy, at more than 96%. We will demonstrate some approaches and the most influential statistical features of all-sky images that let the algorithm reach such high accuracy. With the use of our new optical package, a set of over 480,000 samples was collected in several sea missions in 2014-2016, along with concurrent standard human-observed and instrumentally recorded meteorological parameters. We will demonstrate the results of the field measurements and will discuss some remaining problems and the potential for further development of the machine learning approach.

  1. Fast parallel algorithm for slicing STL based on pipeline

    NASA Astrophysics Data System (ADS)

    Ma, Xulong; Lin, Feng; Yao, Bo

    2016-05-01

    In the additive manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. A parallel algorithm has great advantages for improving efficiency and reducing slicing time; however, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of the number of threads and the number of layers are investigated in a series of experiments. The experimental results show that the number of threads and the number of layers are two remarkable factors for the speedup ratio. Speedup versus number of threads shows a positive relationship that agrees well with Amdahl's law, and speedup versus number of layers also shows a positive relationship, agreeing with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. Another parallel algorithm, based on data parallelism, is used in the experiments to show that the pipeline parallel mode is more efficient. A concluding case study shows the strong performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm can make full use of the multi-core CPU hardware and accelerate the slicing process; compared with the data-parallel slicing algorithm, the pipeline parallel model achieves a much higher speedup ratio and efficiency.
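
    The pipeline organisation can be illustrated with a short Python sketch using threads and queues; the per-layer work is a stand-in string rather than real triangle/plane intersection, and because of Python's global interpreter lock the sketch shows the structure of the pipeline, not the speedup reported in the paper.

    ```python
    import queue
    import threading

    def pipeline_slice(layers, n_workers=4):
        """Schematic slicing pipeline: a producer feeds layer heights, worker threads
        'intersect' the model per layer, and results are collected from an output queue."""
        todo, done = queue.Queue(), queue.Queue()

        def producer():
            for z in layers:
                todo.put(z)
            for _ in range(n_workers):
                todo.put(None)  # poison pills terminate the workers

        def worker():
            while (z := todo.get()) is not None:
                done.put((z, f"contour@z={z:.2f}"))  # stand-in for contour computation

        threads = [threading.Thread(target=producer)]
        threads += [threading.Thread(target=worker) for _ in range(n_workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return [done.get() for _ in range(len(layers))]

    print(len(pipeline_slice([i * 0.2 for i in range(50)])))
    ```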

  2. An exploration of the properties of the CORE problem list subset and how it facilitates the implementation of SNOMED CT

    PubMed Central

    Xu, Julia

    2015-01-01

    Objective Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) is the emergent international health terminology standard for encoding clinical information in electronic health records. The CORE Problem List Subset was created to facilitate the terminology’s implementation. This study evaluates the CORE Subset’s coverage and examines its growth pattern as source datasets are being incorporated. Methods Coverage of frequently used terms and the corresponding usage of the covered terms were assessed by “leave-one-out” analysis of the eight datasets constituting the current CORE Subset. The growth pattern was studied using a retrospective experiment, growing the Subset one dataset at a time and examining the relationship between the size of the starting subset and the coverage of frequently used terms in the incoming dataset. Linear regression was used to model that relationship. Results On average, the CORE Subset covered 80.3% of the frequently used terms of the left-out dataset, and the covered terms accounted for 83.7% of term usage. There was a significant positive correlation between the CORE Subset’s size and the coverage of the frequently used terms in an incoming dataset. This implies that the CORE Subset will grow at a progressively slower pace as it gets bigger. Conclusion The CORE Problem List Subset is a useful resource for the implementation of Systematized Nomenclature of Medicine Clinical Terms in electronic health records. It offers good coverage of frequently used terms, which account for a high proportion of term usage. If future datasets are incorporated into the CORE Subset, it is likely that its size will remain small and manageable. PMID:25725003

  3. A comprehensive change detection method for updating the National Land Cover Database to circa 2011

    USGS Publications Warehouse

    Jin, Suming; Yang, Limin; Danielson, Patrick; Homer, Collin G.; Fry, Joyce; Xian, George

    2013-01-01

    The importance of characterizing, quantifying, and monitoring land cover, land use, and their changes has been widely recognized by global and environmental change studies. Since the early 1990s, three U.S. National Land Cover Database (NLCD) products (circa 1992, 2001, and 2006) have been released as free downloads for users. The NLCD 2006 also provides land cover change products between 2001 and 2006. To continue providing updated national land cover and change datasets, a new initiative in developing NLCD 2011 is currently underway. We present a new Comprehensive Change Detection Method (CCDM) designed as a key component for the development of NLCD 2011 and the research results from two exemplar studies. The CCDM integrates spectral-based change detection algorithms including a Multi-Index Integrated Change Analysis (MIICA) model and a novel change model called Zone, which extracts change information from two Landsat image pairs. The MIICA model is the core module of the change detection strategy and uses four spectral indices (CV, RCVMAX, dNBR, and dNDVI) to obtain the changes that occurred between two image dates. The CCDM also includes a knowledge-based system, which uses critical information on historical and current land cover conditions and trends and the likelihood of land cover change, to combine the changes from MIICA and Zone. For NLCD 2011, the improved and enhanced change products obtained from the CCDM provide critical information on location, magnitude, and direction of potential change areas and serve as a basis for further characterizing land cover changes for the nation. An accuracy assessment from the two study areas shows 100% agreement between the CCDM-mapped no-change class and the reference dataset, and 18% and 82% disagreement for the change class for WRS path/row p22r39 and p33r33, respectively. The strength of the CCDM is that the method is simple, easy to operate, widely applicable, and capable of capturing a variety of natural and anthropogenic disturbances potentially associated with land cover changes on different landscapes.
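
    As a simplified illustration of spectral-index change detection in the spirit of MIICA, the sketch below flags pixels by differencing only NDVI and NBR between two dates, with an assumed threshold rather than the model's actual decision rules.

    ```python
    import numpy as np

    def change_mask(nir1, red1, swir1, nir2, red2, swir2, thr=0.15):
        """Flag pixels whose NDVI or NBR changed by more than `thr` between two dates."""
        ndvi = lambda nir, red: (nir - red) / (nir + red + 1e-6)
        nbr = lambda nir, swir: (nir - swir) / (nir + swir + 1e-6)
        d_ndvi = ndvi(nir2, red2) - ndvi(nir1, red1)
        d_nbr = nbr(nir2, swir2) - nbr(nir1, swir1)
        return (np.abs(d_ndvi) > thr) | (np.abs(d_nbr) > thr)
    ```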

  4. Application of modified Martinez-Silva algorithm in determination of net cover

    NASA Astrophysics Data System (ADS)

    Stefanowicz, Łukasz; Grobelna, Iwona

    2016-12-01

    In the article we present modifications of the Martinez-Silva algorithm, which allows the place invariants (p-invariants) of a Petri net to be determined. Their generation time is important in the parallel decomposition of discrete systems described by Petri nets. The decomposition process is essential from the point of view of discrete system design, as it allows smaller sequential parts to be separated. The proposed modifications of the Martinez-Silva method concern the net cover by p-invariants and focus on two important issues: cyclic reduction of the invariant matrix and cyclic checking of the net cover.

  5. A Text Corpus Approach to an Analysis of the Shared Use of Core Terminology.

    ERIC Educational Resources Information Center

    Patrick, Timothy B.; Sievert, MaryEllen; Reid, John C.; Rice, Frances Ellis; Gigantelli, James W.; Schiffman, Jade S.; Shelton, Mark E.

    2003-01-01

    Investigates the shared use of core Ophthalmology terms in the domains of Ophthalmology, Family Practice and Radiology. Core terms were searched for in a text corpus of 38,695 MEDLINE abstracts covering 1970-1999 from journals representing the three domains. Findings indicated core Ophthalmology terms were used significantly more by Ophthalmology…

  6. Protecting core networks with dual-homing: A study on enhanced network availability, resource efficiency, and energy-savings

    NASA Astrophysics Data System (ADS)

    Abeywickrama, Sandu; Furdek, Marija; Monti, Paolo; Wosinska, Lena; Wong, Elaine

    2016-12-01

    Core network survivability affects the reliability performance of telecommunication networks and remains one of the most important network design considerations. This paper critically examines the benefits arising from utilizing dual-homing in the optical access networks to provide resource-efficient protection against link and node failures in the optical core segment. Four novel, heuristic-based RWA algorithms that provide dedicated path protection in networks with dual-homing are proposed and studied. These algorithms protect against different failure scenarios (i.e. single link or node failures) and are implemented with different optimization objectives (i.e., minimization of wavelength usage and path length). Results obtained through simulations and comparison with baseline architectures indicate that exploiting dual-homed architecture in the access segment can bring significant improvements in terms of core network resource usage, connection availability, and power consumption.

  7. Expert system for identification of simultaneous and sequential reactor fuel failures with gas tagging

    DOEpatents

    Gross, Kenny C.

    1994-01-01

    Failure of a fuel element in a nuclear reactor core is determined by a gas tagging failure detection system and method. Failures are catalogued and characterized after the event so that samples of the reactor's cover gas are taken at regular intervals and analyzed by mass spectroscopy. Employing a first set of systematic heuristic rules which are applied in a transformed node space allows the number of node combinations which must be processed within a barycentric algorithm to be substantially reduced. A second set of heuristic rules treats the tag nodes of the most recent one or two leakers as "background" gases, further reducing the number of trial node combinations. Lastly, a "fuzzy" set theory formalism minimizes experimental uncertainties in the identification of the most likely volumes of tag gases. This approach allows for the identification of virtually any number of sequential leaks and up to five simultaneous gas leaks from fuel elements.

  8. Hybrid Architectures for Evolutionary Computing Algorithms

    DTIC Science & Technology

    2008-01-01

    other EC algorithms to FPGA Core Burns P1026/MAPLD 2005 Genetic Algorithm Hardware References S. Scott, A. Samal, and S. Seth, "HGA: A Hardware Based...on Parallel and Distributed Processing (IPPS/SPDP '98), pp. 316-320, Proceedings. IEEE Computer Society 1998. [12] Scott, S. D., Samal, A., and...S. Seth, "HGA: A Hardware Based Genetic Algorithm", Proceedings of the 1995 ACM Third

  9. Forward-looking Assimilation of MODIS-derived Snow Covered Area into a Land Surface Model

    NASA Technical Reports Server (NTRS)

    Zaitchik, Benjamin F.; Rodell, Matthew

    2008-01-01

    Snow cover over land has a significant impact on the surface radiation budget, turbulent energy fluxes to the atmosphere, and local hydrological fluxes. For this reason, inaccuracies in the representation of snow covered area (SCA) within a land surface model (LSM) can lead to substantial errors in both offline and coupled simulations. Data assimilation algorithms have the potential to address this problem. However, the assimilation of SCA observations is complicated by an information deficit in the observation (SCA indicates only the presence or absence of snow, not snow volume) and by the fact that assimilated SCA observations can introduce inconsistencies with atmospheric forcing data, leading to non-physical artifacts in the local water balance. In this paper we present a novel assimilation algorithm that introduces MODIS SCA observations to the Noah LSM in global, uncoupled simulations. The algorithm utilizes observations from up to 72 hours ahead of the model simulation in order to correct against emerging errors in the simulation of snow cover while preserving the local hydrologic balance. This is accomplished by using future snow observations to adjust air temperature and, when necessary, precipitation within the LSM. In global, offline integrations, this new assimilation algorithm provided improved simulation of SCA and snow water equivalent relative to open loop integrations and integrations that used an earlier SCA assimilation algorithm. These improvements, in turn, influenced the simulation of surface water and energy fluxes both during the snow season and, in some regions, on into the following spring.

  10. Classification of simple vegetation types using POLSAR image data

    NASA Technical Reports Server (NTRS)

    Freeman, A.

    1993-01-01

    Mapping basic vegetation or land cover types is a fairly common problem in remote sensing. Knowledge of the land cover type is a key input to algorithms which estimate geophysical parameters, such as soil moisture, surface roughness, leaf area index or biomass, from remotely sensed data. In an earlier paper, an algorithm for fitting a simple three-component scattering model to POLSAR data was presented. The algorithm yielded estimates for surface scatter, double-bounce scatter and volume scatter for each pixel in a POLSAR image data set. In this paper, we show how the relative levels of each of the three components can be used as inputs to a simple classifier for vegetation type. Vegetation classes include no vegetation cover (e.g. bare soil or desert), low vegetation cover (e.g. grassland), moderate vegetation cover (e.g. fully developed crops), forest and urban areas. Implementation of the approach requires estimates of the three components at all three frequencies available from the NASA/JPL AIRSAR, i.e. C-, L- and P-bands. The research described in this paper was carried out by the Jet Propulsion Laboratory, California Institute of Technology under a contract with the National Aeronautics and Space Administration.

  11. GPU Accelerated Browser for Neuroimaging Genomics.

    PubMed

    Zigon, Bob; Li, Huang; Yao, Xiaohui; Fang, Shiaofen; Hasan, Mohammad Al; Yan, Jingwen; Moore, Jason H; Saykin, Andrew J; Shen, Li

    2018-04-25

    Neuroimaging genomics is an emerging field that provides exciting opportunities to understand the genetic basis of brain structure and function. The unprecedented scale and complexity of the imaging and genomics data, however, have presented critical computational bottlenecks. In this work we present our initial efforts towards building an interactive visual exploratory system for mining big data in neuroimaging genomics. A GPU accelerated browsing tool for neuroimaging genomics is created that implements the ANOVA algorithm for single nucleotide polymorphism (SNP) based analysis and the VEGAS algorithm for gene-based analysis, and executes them at interactive rates. The ANOVA algorithm is 110 times faster than the 4-core OpenMP version, while the VEGAS algorithm is 375 times faster than its 4-core OpenMP counterpart. This approach lays a solid foundation for researchers to address the challenges of mining large-scale imaging genomics datasets via interactive visual exploration.

  12. Spectral unmixing of urban land cover using a generic library approach

    NASA Astrophysics Data System (ADS)

    Degerickx, Jeroen; Lordache, Marian-Daniel; Okujeni, Akpona; Hermy, Martin; van der Linden, Sebastian; Somers, Ben

    2016-10-01

    Remote sensing based land cover classification in urban areas generally requires the use of subpixel classification algorithms to take into account the high spatial heterogeneity. These spectral unmixing techniques often rely on spectral libraries, i.e. collections of pure material spectra (endmembers, EM), which ideally cover the large EM variability typically present in urban scenes. Despite the advent of several (semi-) automated EM detection algorithms, the collection of such image-specific libraries remains a tedious and time-consuming task. As an alternative, we suggest the use of a generic urban EM library, containing material spectra under varying conditions, acquired from different locations and sensors. This approach requires an efficient EM selection technique, capable of selecting only those spectra relevant for a specific image. In this paper, we evaluate and compare the potential of different existing library pruning algorithms (Iterative Endmember Selection and MUSIC) using simulated hyperspectral (APEX) data of the Brussels metropolitan area. In addition, we develop a new hybrid EM selection method which is shown to be highly efficient in dealing with both image-specific and generic libraries, subsequently yielding more robust land cover classification results compared to existing methods. Future research will include further optimization of the proposed algorithm and additional tests on both simulated and real hyperspectral data.
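
    A very reduced form of library-based unmixing, leaving out the pruning step that is the subject of the paper, can be written as a per-pixel non-negative least-squares fit; the tiny synthetic library, pixel and sum-to-one renormalisation below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def unmix(pixel, library):
        """Per-pixel unmixing sketch: non-negative least squares, then renormalise
        the abundances so they sum to one. `library` has shape (bands, endmembers)."""
        fractions, _ = nnls(library, pixel)
        total = fractions.sum()
        return fractions / total if total > 0 else fractions

    # 5 bands, 3 library spectra, a noisy 60/30/10 mixture
    rng = np.random.default_rng(0)
    library = rng.uniform(0, 1, (5, 3))
    pixel = library @ np.array([0.6, 0.3, 0.1]) + 0.01 * rng.standard_normal(5)
    print(np.round(unmix(pixel, library), 2))
    ```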

  13. An Automated Algorithm for Producing Land Cover Information from Landsat Surface Reflectance Data Acquired Between 1984 and Present

    NASA Astrophysics Data System (ADS)

    Rover, J.; Goldhaber, M. B.; Holen, C.; Dittmeier, R.; Wika, S.; Steinwand, D.; Dahal, D.; Tolk, B.; Quenzer, R.; Nelson, K.; Wylie, B. K.; Coan, M.

    2015-12-01

    Multi-year land cover mapping from remotely sensed data poses challenges. Producing land cover products at the spatial and temporal scales required for assessing longer-term trends in land cover change is typically a resource-limited process. A recently developed approach utilizes open source software libraries to automatically generate datasets, decision tree classifications, and data products while requiring minimal user interaction. Users are only required to supply coordinates for an area of interest, land cover from an existing source such as the National Land Cover Database and percent slope from a digital terrain model for the same area of interest, two target acquisition year-day windows, and the years of interest between 1984 and present. The algorithm queries the Landsat archive for Landsat data intersecting the area and dates of interest. Cloud-free pixels meeting the user's criteria are mosaicked to create composite images for training the classifiers and applying the classifiers. Stratification of training data is determined by the user and redefined during an iterative process of reviewing classifiers and resulting predictions. The algorithm outputs include yearly land cover raster format data, graphics, and supporting databases for further analysis. Additional analytical tools are also incorporated into the automated land cover system and enable statistical analysis after data are generated. Applications tested include the impact of land cover change and water permanence. For example, land cover conversions in areas where shrubland and grassland were replaced by shale oil pads during hydrofracking of the Bakken Formation were quantified. Analysis of spatial and temporal changes in surface water included identifying wetlands in the Prairie Pothole Region of North Dakota with potential connectivity to ground water, indicating subsurface permeability and geochemistry.

  14. Crisis management during anaesthesia: myocardial ischaemia and infarction.

    PubMed

    Ludbrook, G L; Webb, R K; Currie, M; Watterson, L M

    2005-06-01

    Myocardial ischaemia and infarction are significant perioperative complications which are associated with poor patient outcome. Anaesthetic practice should therefore focus, particularly in the at risk patient, on their prevention, their accurate detection, on the identification of precipitating factors, and on rapid effective management. To examine the role of a previously described core algorithm "COVER ABCD-A SWIFT CHECK" supplemented by a specific sub-algorithm for myocardial ischaemia and infarction in the management of myocardial ischaemia and/or infarction occurring in association with anaesthesia. The potential performance of this structured approach for each of the relevant incidents among the first 4000 reported to the Australian Incident Monitoring Study (AIMS) was compared with the actual management as reported by the anaesthetists involved. Of the 125 incidents retrieved from the 4000 reports, 40 (1%) were considered to demonstrate myocardial infarction or ischaemia. The use of the structured approach described in this paper would have led to appropriate management in 90% of cases, with the remaining 10% requiring other sub-algorithms. It was considered that the application of this structured approach would have led to earlier recognition and/or better management of the problem in 45% of cases. Close and continuous monitoring of patients at risk of myocardial ischaemia during anaesthesia is necessary, using optimal ECG lead configurations, but sensitivity of this monitoring is not 100%. Coronary vasodilatation with glyceryl trinitrate (GTN) should not be withheld when indicated and the early use of beta blocking drugs should be considered even with normal blood pressures and heart rates.

  15. Parallel Lattice Basis Reduction Using a Multi-threaded Schnorr-Euchner LLL Algorithm

    NASA Astrophysics Data System (ADS)

    Backes, Werner; Wetzel, Susanne

    In this paper, we introduce a new parallel variant of the LLL lattice basis reduction algorithm. Our new, multi-threaded algorithm is the first to provide an efficient, parallel implementation of the Schnorr-Euchner algorithm for today's multi-processor, multi-core computer architectures. Experiments with sparse and dense lattice bases show a speed-up factor of about 1.8 for the 2-thread version and about 3.2 for the 4-thread version of our new parallel lattice basis reduction algorithm in comparison to the traditional non-parallel algorithm.

  16. A service relation model for web-based land cover change detection

    NASA Astrophysics Data System (ADS)

    Xing, Huaqiao; Chen, Jun; Wu, Hao; Zhang, Jun; Li, Songnian; Liu, Boyu

    2017-10-01

    Change detection with remotely sensed imagery is a critical step in land cover monitoring and updating. Although a variety of algorithms or models have been developed, none of them can be universal for all cases. The selection of appropriate algorithms and construction of processing workflows depend largely on the expertise of experts about the "algorithm-data" relations among change detection algorithms and the imagery data used. This paper presents a service relation model for land cover change detection by integrating the experts' knowledge about the "algorithm-data" relations into the web-based geo-processing. The "algorithm-data" relations are mapped into a set of web service relations with the analysis of functional and non-functional service semantics. These service relations are further classified into three different levels, i.e., interface, behavior and execution levels. A service relation model is then established using the Object and Relation Diagram (ORD) approach to represent the multi-granularity services and their relations for change detection. A set of semantic matching rules are built and used for deriving on-demand change detection service chains from the service relation model. A web-based prototype system is developed in .NET development environment, which encapsulates nine change detection and pre-processing algorithms and represents their service relations as an ORD. Three test areas from Shandong and Hebei provinces, China with different imagery conditions are selected for online change detection experiments, and the results indicate that on-demand service chains can be generated according to different users' demands.

  17. Mapping forested wetlands in the Great Zhan River Basin through integrating optical, radar, and topographical data classification techniques.

    PubMed

    Na, X D; Zang, S Y; Wu, C S; Li, W L

    2015-11-01

    Knowledge of the spatial extent of forested wetlands is essential to many studies including wetland functioning assessment, greenhouse gas flux estimation, and wildlife suitable habitat identification. For discriminating forested wetlands from their adjacent land cover types, researchers have resorted to image analysis techniques applied to numerous remotely sensed data. Despite some success, there is still no consensus on the optimal approaches for mapping forested wetlands. To address this problem, we examined two machine learning approaches, random forest (RF) and K-nearest neighbor (KNN) algorithms, and applied these two approaches to the framework of pixel-based and object-based classifications. The RF and KNN algorithms were constructed using predictors derived from Landsat 8 imagery, Radarsat-2 advanced synthetic aperture radar (SAR), and topographical indices. The results show that the object-based classifications performed better than per-pixel classifications using the same algorithm (RF) in terms of overall accuracy, and the difference in their kappa coefficients is statistically significant (p<0.01). There were noticeable omissions for forested and herbaceous wetlands in the per-pixel classifications using the RF algorithm. As for the object-based image analysis, there were also statistically significant differences (p<0.01) in kappa coefficient between the results based on the RF and KNN algorithms. The object-based classification using RF provided a more visually adequate distribution of the land cover types of interest, while the object-based classifications using the KNN algorithm showed noticeable commission errors for forested wetlands and omission errors for agricultural land. This research demonstrates that object-based classification with RF using optical, radar, and topographical data improved the mapping accuracy of land covers and provided a feasible approach to discriminating forested wetlands from the other land cover types in forestry areas.
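
    The RF/KNN comparison can be reproduced in outline with scikit-learn; the synthetic features below merely stand in for the Landsat 8, Radarsat-2 and topographic predictors, and the hyperparameters are illustrative rather than those tuned in the study.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Synthetic stand-in for per-object predictors (optical + SAR + topographic indices)
    X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                               n_classes=4, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for name, clf in [("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
                      ("KNN", KNeighborsClassifier(n_neighbors=7))]:
        clf.fit(X_tr, y_tr)
        print(name, "overall accuracy: %.3f" % accuracy_score(y_te, clf.predict(X_te)))
    ```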

  18. Competitive repetition suppression (CoRe) clustering: a biologically inspired learning model with application to robust clustering.

    PubMed

    Bacciu, Davide; Starita, Antonina

    2008-11-01

    Determining a compact neural coding for a set of input stimuli is an issue that encompasses several biological memory mechanisms as well as various artificial neural network models. In particular, establishing the optimal network structure is still an open problem when dealing with unsupervised learning models. In this paper, we introduce a novel learning algorithm, named competitive repetition-suppression (CoRe) learning, inspired by a cortical memory mechanism called repetition suppression (RS). We show how such a mechanism is used, at various levels of the cerebral cortex, to generate compact neural representations of visual stimuli. From the general CoRe learning model, we derive a clustering algorithm, named CoRe clustering, that can automatically estimate the unknown cluster number from the data without using a priori information concerning the input distribution. We illustrate how CoRe clustering, besides its biological plausibility, possesses strong theoretical properties in terms of robustness to noise and outliers, and we provide an error function describing CoRe learning dynamics. Such a description is used to analyze CoRe's relationships with state-of-the-art clustering models and to highlight its similarity with rival penalized competitive learning (RPCL), showing how CoRe extends such a model by strengthening the rival penalization estimation by means of loss functions from robust statistics.

  19. An Out-of-Core GPU based dimensionality reduction algorithm for Big Mass Spectrometry Data and its application in bottom-up Proteomics.

    PubMed

    Awan, Muaaz Gul; Saeed, Fahad

    2017-08-01

    Modern high resolution Mass Spectrometry instruments can generate millions of spectra in a single systems biology experiment. Each spectrum consists of thousands of peaks, but only a small number of peaks actively contribute to deduction of peptides. Therefore, pre-processing of MS data to detect noisy and non-useful peaks is an active area of research. Most of the sequential noise reducing algorithms are impractical to use as a pre-processing step due to high time-complexity. In this paper, we present a GPU based dimensionality-reduction algorithm, called G-MSR, for MS2 spectra. Our proposed algorithm uses novel data structures which optimize the memory and computational operations inside the GPU. These novel data structures include Binary Spectra and Quantized Indexed Spectra (QIS). The former helps in communicating essential information between CPU and GPU using a minimum amount of data, while the latter enables us to store and process a complex 3-D data structure as a 1-D array structure while maintaining the integrity of MS data. Our proposed algorithm also takes into account the limited memory of GPUs and switches between in-core and out-of-core modes based upon the size of input data. G-MSR achieves a peak speed-up of 386x over its sequential counterpart and is shown to process over a million spectra in just 32 seconds. The code for this algorithm is available as GPL open-source code on GitHub at the following link: https://github.com/pcdslab/G-MSR.

  20. The Teaching and Learning of Algorithms in School Mathematics. 1998 Yearbook.

    ERIC Educational Resources Information Center

    Morrow, Lorna J., Ed.; Kenney, Margaret J., Ed.

    This 1998 yearbook aims to stimulate and answer questions that all educators of mathematics need to consider to adapt school mathematics for the 21st century. The papers included in this book cover a wide variety of topics, including student-invented algorithms, the assessment of such algorithms, algorithms from history and other cultures, ways…

  1. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
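
    A minimal hybrid (memetic) sketch in Python is given below, pairing a toy genetic algorithm with hill-climbing local search on an assumed one-dimensional objective; it illustrates the combination described above, not the geometric model-matching application itself.

    ```python
    import random

    def fitness(x):
        return -(x - 3.0) ** 2  # toy objective: maximum at x = 3

    def local_search(x, step=0.05, iters=20):
        """Simple hill climbing applied to each offspring (the 'hybrid' part)."""
        for _ in range(iters):
            cand = x + random.uniform(-step, step)
            if fitness(cand) > fitness(x):
                x = cand
        return x

    def hybrid_ga(pop_size=20, gens=50):
        pop = [random.uniform(-10, 10) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]                  # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                child = (a + b) / 2 + random.gauss(0, 0.5)  # crossover + mutation
                children.append(local_search(child))        # refine with local search
            pop = parents + children
        return max(pop, key=fitness)

    random.seed(0)
    print(round(hybrid_ga(), 3))
    ```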

  2. Sensitivity of MODIS evapotranspiration algorithm (MOD16) to the accuracy of meteorological data and land use and land cover parameterization

    NASA Astrophysics Data System (ADS)

    Ruhoff, Anderson; Santini Adamatti, Daniela

    2017-04-01

    MODIS evapotranspiration (MOD16) is currently available with 1 km spatial resolution over 109.03 million km2 of vegetated land surface, and this information is widely used to evaluate the linkages between hydrological, energy and carbon cycles. The algorithm is driven by meteorological reanalysis data and MODIS remotely-sensed data, which include land use and land cover classification (MCD12Q1), leaf area index (LAI) and fraction of absorbed photosynthetically active radiation (FPAR) (MOD15A2) and albedo (MOD43b3). For calibration and parameterization, the algorithm uses a Biome Property Look-up Table (BPLUT) based on the MCD12Q1 land cover classification. Several studies evaluated MOD16 accuracy using evapotranspiration measurements and water balance analysis, showing that this product can reproduce global evapotranspiration effectively under a variety of climate conditions, from local to wide-basin scale, with uncertainties up to 25%. In this study, we evaluated the sensitivity of the MOD16 algorithm to land use and land cover parameterization and to meteorological data. Considering that MCD12Q1 has an accuracy between 70 and 85% at continental scale, we changed the land cover parameterization to understand the influence of land use and land cover classification on MOD16 evapotranspiration estimations. Knowing that meteorological reanalysis data also have uncertainties (mostly related to the coarse spatial resolution), we compared MOD16 evapotranspiration driven by observed meteorological data to that driven by the reanalysis data. Our analyses were carried out in South America, with evapotranspiration and meteorological measurements from the Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) at 8 different sites, including tropical rainforest, tropical dry forest, selectively logged forest, seasonally flooded forest and pasture/agriculture. Our results indicate that land use and land cover classification has a strong influence on the MOD16 algorithm. The use of incorrect parameterization due to land use and land cover misclassification can introduce large errors in estimates of evapotranspiration. We also found that the biases in meteorological reanalysis data can introduce considerable errors into the estimations. Overall, there is significant potential for mapping and monitoring global evapotranspiration using MODIS remotely-sensed images combined with meteorological reanalysis data.

  3. General form of a cooperative gradual maximal covering location problem

    NASA Astrophysics Data System (ADS)

    Bagherinejad, Jafar; Bashiri, Mahdi; Nikzad, Hamideh

    2018-07-01

    Cooperative and gradual covering are two new methods for developing covering location models. In this paper, a cooperative maximal covering location-allocation model is developed (CMCLAP). In addition, both cooperative and gradual covering concepts are applied to the maximal covering location simultaneously (CGMCLP). Then, we develop an integrated form of a cooperative gradual maximal covering location problem, which is called a general CGMCLP. By setting the model parameters, the proposed general model can easily be transformed into other existing models, facilitating general comparisons. The proposed models are developed without allocation for physical signals and with allocation for non-physical signals in discrete location space. Comparison of the previously introduced gradual maximal covering location problem (GMCLP) and cooperative maximal covering location problem (CMCLP) models with our proposed CGMCLP model in similar data sets shows that the proposed model can cover more demands and acts more efficiently. Sensitivity analyses are performed to show the effect of related parameters and the model's validity. Simulated annealing (SA) and a tabu search (TS) are proposed as solution algorithms for the developed models for large-sized instances. The results show that the proposed algorithms are efficient solution approaches, considering solution quality and running time.

  4. Image based book cover recognition and retrieval

    NASA Astrophysics Data System (ADS)

    Sukhadan, Kalyani; Vijayarajan, V.; Krishnamoorthi, A.; Bessie Amali, D. Geraldine

    2017-11-01

    In this work, we develop a graphical user interface in MATLAB that lets users retrieve information about books in real time. A photo of the book cover is captured through the GUI; the MSER algorithm then automatically detects candidate text features in the input image and filters out non-text features based on the morphological differences between text and non-text regions. We implemented a text character alignment algorithm that improves the accuracy of the original text detection. We also examine the built-in MATLAB OCR algorithm and a commonly used open-source OCR engine to obtain better detection results, and apply a post-detection algorithm with natural language processing to perform word correction and suppress false detections. Finally, the detection result is linked to the internet to perform online matching. More than 86% accuracy can be obtained by this algorithm.

  5. On the improvement of blood sample collection at clinical laboratories

    PubMed Central

    2014-01-01

    Background Blood samples are usually collected daily from different collection points, such as hospitals and health centers, and transported to a core laboratory for testing. This paper presents a project to improve the collection routes of two of the largest clinical laboratories in Spain. These routes must be designed in a cost-efficient manner while satisfying two important constraints: (i) two-hour time windows between collection and delivery, and (ii) vehicle capacity. Methods A heuristic method based on a genetic algorithm has been designed to solve the problem of blood sample collection. The user enters the following information for each collection point: postal address, average collecting time, and average demand (in thermal containers). After implementing the algorithm in C, it runs and, in a few seconds, obtains optimal (or near-optimal) collection routes that specify the collection sequence for each vehicle. Different scenarios using various types of vehicles have been considered. Unless new collection points are added or problem parameters are changed substantially, routes need to be designed only once. Results The two laboratories in this study previously planned routes manually for 43 and 74 collection points, respectively. These routes were covered by an external carrier company. With the implementation of this algorithm, the number of routes could be reduced from ten to seven in one laboratory and from twelve to nine in the other, which represents significant annual savings in transportation costs. Conclusions The algorithm presented can be easily implemented in other laboratories that face this type of problem, and it is particularly interesting and useful as the number of collection points increases. The method designs blood collection routes with reduced costs that meet the time and capacity constraints of the problem. PMID:24406140
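
    The decoding step of such a genetic algorithm can be illustrated with a small sketch: a chromosome is a visiting order of collection points that is split greedily into capacity-feasible routes and scored by total travel distance. The coordinates, demands, capacity, and distance measure below are invented for illustration and are not the laboratories' data; the time-window check is omitted.

      # Sketch of a fitness evaluation a GA for sample-collection routing might use:
      # a chromosome is a permutation of collection points, split greedily into routes
      # that respect vehicle capacity; the fitness is the total travel distance.
      # Coordinates, demands, and capacity are illustrative assumptions.
      import math

      DEPOT = (0.0, 0.0)                       # core laboratory location
      POINTS = {1: (2, 1), 2: (4, 5), 3: (1, 7), 4: (6, 2), 5: (3, 3)}
      DEMAND = {1: 3, 2: 2, 3: 4, 4: 1, 5: 2}  # thermal containers per point
      CAPACITY = 6                             # containers per vehicle

      def dist(a, b):
          return math.hypot(a[0] - b[0], a[1] - b[1])

      def split_into_routes(chromosome):
          routes, current, load = [], [], 0
          for p in chromosome:
              if load + DEMAND[p] > CAPACITY:  # start a new vehicle route
                  routes.append(current)
                  current, load = [], 0
              current.append(p)
              load += DEMAND[p]
          if current:
              routes.append(current)
          return routes

      def total_distance(routes):
          cost = 0.0
          for route in routes:
              stops = [DEPOT] + [POINTS[p] for p in route] + [DEPOT]
              cost += sum(dist(stops[i], stops[i + 1]) for i in range(len(stops) - 1))
          return cost

      chromosome = [3, 1, 5, 2, 4]             # one candidate visiting order
      routes = split_into_routes(chromosome)
      print(routes, round(total_distance(routes), 2))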

  6. Sentinel-1 Archive and Processing in the Cloud using the Hybrid Pluggable Processing Pipeline (HyP3) at the ASF DAAC

    NASA Astrophysics Data System (ADS)

    Arko, S. A.; Hogenson, R.; Geiger, A.; Herrmann, J.; Buechler, B.; Hogenson, K.

    2016-12-01

    In the coming years there will be an unprecedented amount of SAR data available on a free and open basis to research and operational users around the globe. The Alaska Satellite Facility (ASF) DAAC hosts, through an international agreement, data from the Sentinel-1 spacecraft and will be hosting data from the upcoming NASA ISRO SAR (NISAR) mission. To more effectively manage and exploit these vast datasets, ASF DAAC has begun moving portions of the archive to the cloud and utilizing cloud services to provide higher-level processing on the data. The Hybrid Pluggable Processing Pipeline (HyP3) project is designed to support higher-level data processing in the cloud and extend the capabilities of researchers to larger scales. Built upon a set of core Amazon cloud services, the HyP3 system allows users to request data processing using a number of canned algorithms or their own algorithms once they have been uploaded to the cloud. The HyP3 system automatically accesses the ASF cloud-based archive through the DAAC RESTful application programming interface and processes the data on Amazon's elastic compute cluster (EC2). Final products are distributed through Amazon's simple storage service (S3) and are available for user download. This presentation will provide an overview of ASF DAAC's activities moving the Sentinel-1 archive into the cloud and developing the integrated HyP3 system, covering both the benefits and difficulties of working in the cloud. Additionally, we will focus on the utilization of HyP3 for higher-level processing of SAR data. Two example algorithms, for sea-ice tracking and change detection, will be discussed as well as the mechanism for integrating new algorithms into the pipeline for community use.

  7. 40 CFR 35.6235 - Cost sharing.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... ASSISTANCE Cooperative Agreements and Superfund State Contracts for Superfund Response Actions Core Program... indirect costs of all activities covered by the Core Program Cooperative Agreement. Indian Tribes are not required to share in the cost of Core Program activities. The State must provide its cost share with non...

  8. Can Psychiatric Rehabilitation Be Core to CORE?

    ERIC Educational Resources Information Center

    Olney, Marjorie F.; Gill, Kenneth J.

    2016-01-01

    Purpose: In this article, we seek to determine whether psychiatric rehabilitation principles and practices have been more fully incorporated into the Council on Rehabilitation Education (CORE) standards, the extent to which they are covered in four rehabilitation counseling "foundations" textbooks, and how they are reflected in the…

  9. Multi-Resolution Indexing for Hierarchical Out-of-Core Traversal of Rectilinear Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pascucci, V.

    2000-07-10

    The real time processing of very large volumetric meshes introduces specific algorithmic challenges due to the impossibility of fitting the input data in the main memory of a computer. The basic assumption (RAM computational model) of uniform-constant-time access to each memory location is not valid because part of the data is stored out-of-core or in external memory. The performance of most algorithms does not scale well in the transition from the in-core to the out-of-core processing conditions. The performance degradation is due to the high frequency of I/O operations that may start dominating the overall running time. Out-of-core computing [28] addresses specifically the issues of algorithm redesign and data layout restructuring to enable data access patterns with minimal performance degradation in out-of-core processing. Results in this area are also valuable in parallel and distributed computing, where one has to deal with the similar issue of balancing processing time with data migration time. The solution of the out-of-core processing problem is typically divided into two parts: (i) analysis of a specific algorithm to understand its data access patterns and, when possible, redesign of the algorithm to maximize their locality; and (ii) storage of the data in secondary memory with a layout consistent with the access patterns of the algorithm, to amortize the cost of each I/O operation over several memory access operations. In the case of a hierarchical visualization algorithm for volumetric data, the 3D input hierarchy is traversed to build derived geometric models with adaptive levels of detail. The shape of the output models is then modified dynamically with incremental updates of their level of detail. The parameters that govern this continuous modification of the output geometry depend on the runtime user interaction, making it impossible to determine a priori which levels of detail are going to be constructed. For example, they can depend on external parameters such as the viewpoint of the current display window or on internal parameters such as the isovalue of an isocontour or the position of an orthogonal slice. The structure of the access pattern can be summarized in two main points: (i) the input hierarchy is traversed level by level, so that data in the same level of resolution or in adjacent levels is traversed at the same time, and (ii) within each level of resolution the data is mostly traversed at the same time in regions that are geometrically close. In this paper I introduce a new static indexing scheme that induces a data layout satisfying both requirements (i) and (ii) for the hierarchical traversal of n-dimensional regular grids. In one particular implementation the scheme exploits in a new way the recursive construction of the Z-order space filling curve. The standard indexing that maps the input nD data onto a 1D sequence for the Z-order curve is based on a simple bit-interleaving operation that merges the n input indices into one index n times longer. This helps in grouping the data for geometric proximity, but only for a specific level of detail. In this paper I show how this indexing can be transformed into an alternative index that allows the data to be grouped per level of resolution first and then, within each level, per geometric proximity. This yields a data layout that is appropriate for hierarchical out-of-core processing of large grids.
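
    The bit-interleaving step mentioned above is the standard Z-order (Morton) index; the short sketch below computes it and then re-sorts grid cells by a toy notion of resolution level before geometric proximity. The level definition used here is only an analogy for the paper's reindexing scheme, not its actual construction.

      # Standard Z-order (Morton) index by bit interleaving, plus a simplified
      # re-ordering that groups cells by hierarchy level before geometric locality.
      # The level definition (position of the highest set bit of the 1D index) is an
      # illustration of the idea only, not the exact indexing scheme of the paper.

      def morton2d(i, j, bits=8):
          """Interleave the bits of (i, j) into a single Z-order index."""
          z = 0
          for b in range(bits):
              z |= ((i >> b) & 1) << (2 * b) | ((j >> b) & 1) << (2 * b + 1)
          return z

      def level(z):
          """Toy 'level of resolution': position of the highest set bit (0 for z == 0)."""
          return z.bit_length()

      n = 8
      cells = [(i, j) for i in range(n) for j in range(n)]
      # plain Z-order: geometric locality only
      z_order = sorted(cells, key=lambda c: morton2d(*c))
      # hierarchical layout: coarse levels first, Z-order within each level
      hier_order = sorted(cells, key=lambda c: (level(morton2d(*c)), morton2d(*c)))

      print(z_order[:4])
      print(hier_order[:4])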

  10. Towards better understanding of high-mountain cryosphere changes using GPM data: A Joint Snowfall and Snow-cover Passive Microwave Retrieval Algorithm

    NASA Astrophysics Data System (ADS)

    Ebtehaj, A.; Foufoula-Georgiou, E.

    2016-12-01

    Scientific evidence suggests that the duration and frequency of snowfall and the extent of snow cover are rapidly declining under global warming. Both precipitation and snow cover scatter the upwelling surface microwave emission and decrease the observed high-frequency brightness temperatures. The mixture of these two scattering signals is amongst the largest sources of ambiguity and error in passive microwave retrievals of both precipitation and snow-cover. The dual-frequency radar and the high-frequency radiometer on board the GPM satellite provide a unique opportunity to improve passive retrievals of precipitation and snow-cover physical properties and fill the gaps in our understanding of their variability in view of climate change. Recently, a new Bayesian rainfall retrieval algorithm (called ShARP) was developed using modern approximation methods and shown to yield improvements over other algorithms in retrieval of rainfall over radiometrically complex land surfaces. However, ShARP uses a large database of input rainfall and output brightness temperatures, which might be undersampled. Furthermore, it is not capable of discriminating between the solid and liquid phases of precipitation, and specifically of separating the background snow-cover emission and its contamination effects on the retrievals. We address these problems by extending it to a new Bayesian land-atmosphere retrieval framework (ShARP-L) that allows joint retrievals of atmospheric constituents and land surface physical properties. Using modern sparse approximation techniques, the database is reduced to atomic microwave signatures in a family of compact class-consistent dictionaries. These dictionaries can efficiently represent the entire database and allow us to discriminate between different land-atmosphere states. First the algorithm makes use of the dictionaries to detect the phase of the precipitation and the type of the land cover, and then it estimates the physical properties of precipitation and snow cover using an extended version of the Dantzig Selector, which is robust to non-Gaussian and correlated geophysical noise. Promising results are presented in retrievals of snowfall and snow-cover over coastal orographic features of North America's Coast Range and South America's Andes.

  11. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.; Qin, J.

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigen-solution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization algorithm and domain decomposition. The source code for many of these algorithms is available from NASA Langley.

  12. Influence of different post core materials on the color of Empress 2 full ceramic crowns.

    PubMed

    Ge, Jing; Wang, Xin-zhi; Feng, Hai-lan

    2006-10-20

    For esthetic consideration, dentin color post core materials were normally used for all-ceramic crown restorations. However, in some cases, clinicians have to consider combining a full ceramic crown with a metal post core. Therefore, this experiment was conducted to test the esthetical possibility of applying cast metal post core in a full ceramic crown restoration. The color of full ceramic crowns on gold and Nickel-Chrome post cores was compared with the color of the same crowns on tooth colored post cores. Different try-in pastes were used to imitate the influence of a composite cementation on the color of different restorative combinations. The majority of patients could not detect any color difference less than DeltaE 1.8 between the two ceramic samples. So, DeltaE 1.8 was taken as the objective evaluative criterion for the evaluation of color matching and patients' satisfaction. When the Empress 2 crown was combined with the gold alloy post core, the color of the resulting material was similar to that of a glass fiber reinforced resin post core (DeltaE = 0.3). The gold alloy post core and the try-in paste did not show a perceptible color change in the full ceramic crowns, which indicated that the color of the crowns might not be susceptible to change between lab and clinic as well as during the process of composite cementation. Without an opaque covering the Ni-Cr post core would cause an unacceptable color effect on the crown (DeltaE = 2.0), but with opaque covering, the color effect became more clinically satisfactory (DeltaE = 1.8). It may be possible to apply a gold alloy post core in the Empress 2 full ceramic crown restoration when necessary. If a non-extractible Ni-Cr post core exists in the root canal, it might be possible to restore the tooth with an Empress 2 crown after covering the labial surface of the core with one layer of opaque resin cement.

  13. Visual Prediction of Rover Slip: Learning Algorithms and Field Experiments

    DTIC Science & Technology

    2008-01-01

    ...rover mobility [23, 78]. Remote slip prediction will enable safe traversals on large slopes covered with sand, drift material or loose crater ejecta...aqueous processes, e.g., mineral-rich outcrops which imply exposure to water [92] or putative lake formations or shorelines, layered deposits, etc.

  14. Least square regularized regression in sum space.

    PubMed

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For the sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
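
    The idea of approximating low- and high-frequency components with large- and small-scale kernels can be illustrated by regularized least squares with a sum of two Gaussian kernels; the sketch below is a generic kernel ridge regression on a toy signal, with kernel widths and regularization weight chosen arbitrarily rather than taken from the paper.

      # Regularized least squares with a sum of two Gaussian kernels (wide + narrow),
      # illustrating the idea of capturing low- and high-frequency components in a
      # sum of RKHSs. Kernel widths, regularization weight, and the toy target are
      # illustrative assumptions, not the paper's experimental setup.
      import numpy as np

      def gauss_kernel(X, Y, sigma):
          d2 = (X[:, None] - Y[None, :]) ** 2
          return np.exp(-d2 / (2.0 * sigma ** 2))

      rng = np.random.default_rng(0)
      x = np.sort(rng.uniform(0, 1, 80))
      y = np.sin(2 * np.pi * x) + 0.2 * np.sin(40 * np.pi * x) + 0.05 * rng.normal(size=80)

      sigma_lo, sigma_hi, lam = 0.2, 0.01, 1e-3
      K = gauss_kernel(x, x, sigma_lo) + gauss_kernel(x, x, sigma_hi)   # sum kernel
      alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)              # (K + lam*I) a = y

      x_test = np.linspace(0, 1, 200)
      K_test = gauss_kernel(x_test, x, sigma_lo) + gauss_kernel(x_test, x, sigma_hi)
      y_pred = K_test @ alpha
      print("train RMSE:", float(np.sqrt(np.mean((K @ alpha - y) ** 2))))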

  15. Improving the MODIS Global Snow-Mapping Algorithm

    NASA Technical Reports Server (NTRS)

    Klein, Andrew G.; Hall, Dorothy K.; Riggs, George A.

    1997-01-01

    An algorithm (Snowmap) is under development to produce global snow maps at 500 meter resolution on a daily basis using data from the NASA MODIS instrument. MODIS, the Moderate Resolution Imaging Spectroradiometer, will be launched as part of the first Earth Observing System (EOS) platform in 1998. Snowmap is a fully automated, computationally frugal algorithm that will be ready to implement at launch. Forests represent a major limitation to the global mapping of snow cover as a forest canopy both obscures and shadows the snow underneath. Landsat Thematic Mapper (TM) and MODIS Airborne Simulator (MAS) data are used to investigate the changes in reflectance that occur as a forest stand becomes snow covered and to propose changes to the Snowmap algorithm that will improve snow classification accuracy in forested areas.

  16. An algorithm for encryption of secret images into meaningful images

    NASA Astrophysics Data System (ADS)

    Kanso, A.; Ghebleh, M.

    2017-03-01

    Image encryption algorithms typically transform a plain image into a noise-like cipher image, whose appearance is an indication of encrypted content. Bao and Zhou [Image encryption: Generating visually meaningful encrypted images, Information Sciences 324, 2015] propose encrypting the plain image into a visually meaningful cover image. This improves security by masking existence of encrypted content. Following their approach, we propose a lossless visually meaningful image encryption scheme which improves Bao and Zhou's algorithm by making the encrypted content, i.e. distortions to the cover image, more difficult to detect. Empirical results are presented to show high quality of the resulting images and high security of the proposed algorithm. Competence of the proposed scheme is further demonstrated by means of comparison with Bao and Zhou's scheme.

  17. A new calibration of the effective scattering albedo and soil roughness parameters in the SMOS SM retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Fernandez-Moran, R.; Wigneron, J.-P.; De Lannoy, G.; Lopez-Baeza, E.; Parrens, M.; Mialon, A.; Mahmoodi, A.; Al-Yaari, A.; Bircher, S.; Al Bitar, A.; Richaume, P.; Kerr, Y.

    2017-10-01

    This study focuses on the calibration of the effective vegetation scattering albedo (ω) and surface soil roughness parameters (HR, and NRp, p = H,V) in the Soil Moisture (SM) retrieval from L-band passive microwave observations using the L-band Microwave Emission of the Biosphere (L-MEB) model. In the current Soil Moisture and Ocean Salinity (SMOS) Level 2 (L2), v620, and Level 3 (L3), v300, SM retrieval algorithms, low vegetated areas are parameterized by ω = 0 and HR = 0.1, whereas values of ω = 0.06 - 0.08 and HR = 0.3 are used for forests. Several parameterizations of the vegetation and soil roughness parameters (ω, HR and NRp, p = H,V) were tested in this study, treating SMOS SM retrievals as homogeneous over each pixel instead of retrieving SM over a representative fraction of the pixel, as implemented in the operational SMOS L2 and L3 algorithms. Globally-constant values of ω = 0.10, HR = 0.4 and NRp = -1 (p = H,V) were found to yield SM retrievals that compared best with in situ SM data measured at many sites worldwide from the International Soil Moisture Network (ISMN). The calibration was repeated for collections of in situ sites classified in different land cover categories based on the International Geosphere-Biosphere Programme (IGBP) scheme. Depending on the IGBP land cover class, values of ω and HR varied, respectively, in the range 0.08-0.12 and 0.1-0.5. A validation exercise based on in situ measurements confirmed that using either a global or an IGBP-based calibration, there was an improvement in the accuracy of the SM retrievals compared to the SMOS L3 SM product considering all statistical metrics (R = 0.61, bias = -0.019 m3 m-3, ubRMSE = 0.062 m3 m-3 for the IGBP-based calibration; against R = 0.54, bias = -0.034 m3 m-3 and ubRMSE = 0.070 m3 m-3 for the SMOS L3 SM product). This result is a key step in the calibration of the roughness and vegetation parameters in the operational SMOS retrieval algorithm. The approach presented here is the core of a new forthcoming SMOS optimized SM product.

  18. Monitoring conterminous United States (CONUS) land cover change with Web-Enabled Landsat Data (WELD)

    USGS Publications Warehouse

    Hansen, M.C.; Egorov, Alexey; Potapov, P.V.; Stehman, S.V.; Tyukavina, A.; Turubanova, S.A.; Roy, David P.; Goetz, S.J.; Loveland, Thomas R.; Ju, J.; Kommareddy, A.; Kovalskyy, Valeriy; Forsyth, C.; Bents, T.

    2014-01-01

    Forest cover loss and bare ground gain from 2006 to 2010 for the conterminous United States (CONUS) were quantified at a 30 m spatial resolution using Web-Enabled Landsat Data available from the USGS Center for Earth Resources Observation and Science (EROS) (http://landsat.usgs.gov/WELD.php). The approach related multi-temporal WELD metrics and expert-derived training data for forest cover loss and bare ground gain through a decision tree classification algorithm. Forest cover loss was reported at state and ecoregional scales, and the identification of core forests absent of change was made and verified using LiDAR data from the GLAS (Geoscience Laser Altimetry System) instrument. Bare ground gain correlated with population change for large metropolitan statistical areas (MSAs) outside of desert or semi-desert environments. GoogleEarth™ time-series images were used to validate the products. Mapped forest cover loss totaled 53,084 km2 and was found to be depicted conservatively, with a user's accuracy of 78% and a producer's accuracy of 68%. Excluding errors of adjacency, user's and producer's accuracies rose to 93% and 89%, respectively. Mapped bare ground gain equaled 5974 km2 and nearly matched the estimated area from the reference (GoogleEarth™) classification; however, user's (42%) and producer's (49%) accuracies were much less than those of the forest cover loss product. Excluding errors of adjacency, user's and producer's accuracies rose to 62% and 75%, respectively. Compared to recent 2001–2006 USGS National Land Cover Database validation data for forest loss (82% and 30% for respective user's and producer's accuracies) and urban gain (72% and 18% for respective user's and producer's accuracies), results using a single CONUS-scale model with WELD data are promising and point to the potential for national-scale operational mapping of key land cover transitions. However, validation results highlighted limitations, some of which can be addressed by improving training data, creating a more robust image feature space, adding contemporaneous Landsat 5 data to the inputs, and modifying definition sets to account for differences in temporal and spatial observational scales. The presented land cover extent and change data are available via the official WELD website (ftp://weldftp.cr.usgs.gov/CONUS_5Y_LandCover/).

  19. Switching portfolios.

    PubMed

    Singer, Y

    1997-08-01

    A constant rebalanced portfolio is an asset allocation algorithm which keeps the same distribution of wealth among a set of assets along a period of time. Recently, there has been work on on-line portfolio selection algorithms which are competitive with the best constant rebalanced portfolio determined in hindsight (Cover, 1991; Helmbold et al., 1996; Cover and Ordentlich, 1996). By their nature, these algorithms employ the assumption that high returns can be achieved using a fixed asset allocation strategy. However, stock markets are far from being stationary and in many cases the wealth achieved by a constant rebalanced portfolio is much smaller than the wealth achieved by an ad hoc investment strategy that adapts to changes in the market. In this paper we present an efficient portfolio selection algorithm that is able to track a changing market. We also describe a simple extension of the algorithm for the case of a general transaction cost, including the transactions cost models recently investigated in (Blum and Kalai, 1997). We provide a simple analysis of the competitiveness of the algorithm and check its performance on real stock data from the New York Stock Exchange accumulated during a 22-year period. On this data, our algorithm outperforms all the algorithms referenced above, with and without transaction costs.
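
    For reference, the benchmark that these on-line algorithms compete against, the constant rebalanced portfolio, can be computed in a few lines; the sketch below evaluates its final wealth on synthetic price relatives and is not the switching algorithm itself.

      # Wealth of a constant rebalanced portfolio (CRP): at the start of each period
      # the holdings are rebalanced to fixed weights b. Price relatives are synthetic.
      import numpy as np

      rng = np.random.default_rng(1)
      T, m = 250, 4                                             # trading days, assets
      price_relatives = 1.0 + 0.01 * rng.normal(size=(T, m))    # x_t[i] = price_t / price_{t-1}

      def crp_wealth(b, X):
          """Final wealth of a CRP with weights b given price-relative matrix X."""
          b = np.asarray(b, dtype=float)
          return float(np.prod(X @ b))          # wealth multiplies by b . x_t each period

      uniform = np.full(m, 1.0 / m)
      print("uniform CRP wealth:", round(crp_wealth(uniform, price_relatives), 4))
      # best single asset (a degenerate CRP) for comparison
      best_asset = max(range(m), key=lambda i: np.prod(price_relatives[:, i]))
      print("best single asset wealth:",
            round(float(np.prod(price_relatives[:, best_asset])), 4))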

  20. Efficient parallel linear scaling construction of the density matrix for Born-Oppenheimer molecular dynamics.

    PubMed

    Mniszewski, S M; Cawkwell, M J; Wall, M E; Mohd-Yusof, J; Bock, N; Germann, T C; Niklasson, A M N

    2015-10-13

    We present an algorithm for the calculation of the density matrix that for insulators scales linearly with system size and parallelizes efficiently on multicore, shared memory platforms with small and controllable numerical errors. The algorithm is based on an implementation of the second-order spectral projection (SP2) algorithm [ Niklasson, A. M. N. Phys. Rev. B 2002 , 66 , 155115 ] in sparse matrix algebra with the ELLPACK-R data format. We illustrate the performance of the algorithm within self-consistent tight binding theory by total energy calculations of gas phase poly(ethylene) molecules and periodic liquid water systems containing up to 15,000 atoms on up to 16 CPU cores. We consider algorithm-specific performance aspects, such as local vs nonlocal memory access and the degree of matrix sparsity. Comparisons to sparse matrix algebra implementations using off-the-shelf libraries on multicore CPUs, graphics processing units (GPUs), and the Intel many integrated core (MIC) architecture are also presented. The accuracy and stability of the algorithm are illustrated with long duration Born-Oppenheimer molecular dynamics simulations of 1000 water molecules and a 303 atom Trp cage protein solvated by 2682 water molecules.
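
    The basic SP2 recursion underlying the paper can be sketched with dense matrices: starting from a linearly rescaled Hamiltonian, the matrix is repeatedly squared or reflected so that its trace converges to the occupation number. The toy Hamiltonian, Gershgorin spectral bounds, and dense NumPy algebra below stand in for the sparse ELLPACK-R implementation described above.

      # Dense-matrix sketch of the second-order spectral projection (SP2) recursion
      # for the single-particle density matrix D (trace D = n_occ). The cited work
      # uses sparse parallel matrix algebra; here a random symmetric H and Gershgorin
      # spectral bounds are used purely for illustration.
      import numpy as np

      rng = np.random.default_rng(0)
      n, n_occ = 20, 8
      A = rng.normal(size=(n, n))
      H = (A + A.T) / 2.0                              # toy symmetric Hamiltonian

      # spectral bounds via Gershgorin circles
      r = np.sum(np.abs(H), axis=1) - np.abs(np.diag(H))
      e_min, e_max = np.min(np.diag(H) - r), np.max(np.diag(H) + r)

      X = (e_max * np.eye(n) - H) / (e_max - e_min)    # maps spectrum of H into [0, 1]
      for _ in range(100):
          X2 = X @ X
          err = abs(np.trace(X2 - X))                  # idempotency error; -> 0 at convergence
          # choose the projection step that moves the trace toward n_occ
          if abs(np.trace(X2) - n_occ) <= abs(np.trace(2 * X - X2) - n_occ):
              X = X2
          else:
              X = 2 * X - X2
          if err < 1e-12:
              break

      D = X
      E, _ = np.linalg.eigh(H)
      print("trace(D) =", round(float(np.trace(D)), 6), "(target", n_occ, ")")
      print("band energy trace(DH) =", round(float(np.trace(D @ H)), 6),
            " exact =", round(float(np.sum(E[:n_occ])), 6))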

  1. Low-cost Assessment for Early Vigor and Canopy Cover Estimation in Durum Wheat Using RGB Images.

    NASA Astrophysics Data System (ADS)

    Fernandez-Gallego, J. A.; Kefauver, S. C.; Aparicio Gutiérrez, N.; Nieto-Taladriz, M. T.; Araus, J. L.

    2017-12-01

    Early vigor and canopy cover are important agronomical components for determining grain yield in wheat. Estimates of the canopy cover area at early stages of the crop cycle may contribute to the efficiency of crop management practices and breeding programs. Canopy-image segmentation is complicated in field conditions by numerous factors, including soil, shadows and unexpected objects such as rocks, weeds, plant remains, or even part of the photographer's boots (which often appear in the scene); the algorithms must be robust to accommodate these conditions. Field trials were carried out in two sites (Aranjuez and Valladolid, Spain) during the 2016/2017 crop season. A set of 24 varieties of durum wheat in two growing conditions (rainfed and support irrigation) per site were used to create the image database. This work uses zenithal RGB images taken from above the crop in natural light conditions. The images were taken with a Canon IXUS 320HS camera in Aranjuez, holding the camera by hand, and with a Nikon D300 camera in Valladolid, using a monopod. The algorithm for early vigor and canopy cover area estimation uses three main steps: (i) image decorrelation, (ii) colour space transformation and (iii) canopy cover segmentation using an automatic threshold based on the image histogram. The first step was chosen to enhance visual interpretation and separate the pixel colours in the scene; the colour space transformation contributes to further separating the colours. Finally, an automatic threshold using a minimum method allows correct segmentation and quantification of the canopy pixels. The percentage of area covered by the canopy was calculated using a simple algorithm for counting pixels in the final binary segmented image. The comparative results demonstrate the algorithm's effectiveness through significant correlations of the early vigor and canopy cover estimates with NDVI (normalized difference vegetation index) and grain yield.
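
    A rough sketch of the histogram-threshold step is given below: a simple greenness index stands in for the decorrelation and colour-space transformation, the threshold is placed at the histogram minimum between the two dominant peaks, and the cover fraction is the proportion of pixels above it. The synthetic image and the index are illustrative assumptions, not the paper's processing chain.

      # Hedged sketch of histogram-based canopy segmentation on an RGB image: an
      # excess-green index stands in for the paper's decorrelation + colour-space
      # steps, and the threshold is taken at the histogram minimum between two peaks.
      # The synthetic image and the index are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(0)
      h, w = 120, 160
      img = rng.uniform(0.3, 0.5, size=(h, w, 3))          # soil-like background
      img[30:90, 40:120, 1] += 0.35                        # a greener "canopy" patch
      img = np.clip(img, 0, 1)

      r, g, b = img[..., 0], img[..., 1], img[..., 2]
      greenness = 2 * g - r - b                            # excess-green index

      hist, edges = np.histogram(greenness, bins=64)
      smooth = np.convolve(hist, np.ones(5) / 5, mode="same")
      p1 = int(np.argmax(smooth))                          # strongest histogram peak
      # second peak: strongest bin well separated from the first
      candidates = [i for i in range(len(smooth)) if abs(i - p1) > 15]
      p2 = max(candidates, key=lambda i: smooth[i])
      lo, hi = sorted((p1, p2))
      valley = lo + int(np.argmin(smooth[lo:hi + 1]))      # minimum between the peaks
      threshold = 0.5 * (edges[valley] + edges[valley + 1])

      canopy = greenness > threshold
      print("estimated canopy cover: %.1f%%" % (100.0 * canopy.mean()))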

  2. Silicon oxynitride-on-glass waveguide array refractometer with wide sensing range and integrated read-out (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Viegas, Jaime; Mayeh, Mona; Srinivasan, Pradeep; Johnson, Eric G.; Marques, Paulo V. S.; Farahi, Faramarz

    2017-02-01

    In this work, a silicon oxynitride-on-silica refractometer is presented, based on sub-wavelength coupled arrayed waveguide interference, and capable of low-cost, high-resolution, large-scale deployment. The sensor has an experimental spectral sensitivity as high as 3200 nm/RIU, covering refractive indices ranging from 1 (air) up to 1.43 (oils). The sensor readout can be performed by standard spectrometer techniques or by pattern projection onto a camera, followed by optical pattern recognition. Positive identification of the refractive index of an unknown species is obtained by pattern cross-correlation with an algorithm based on a look-up calibration table. Given the lower contrast between core and cladding in such devices, higher mode overlap with single mode fiber is achieved, leading to a larger coupling efficiency and more relaxed alignment requirements as compared to the silicon photonics platform. Also, the optical transparency of the sensor in the visible range allows operation with visible light sources and camera detectors, at much lower capital cost for a complete sensor system. Furthermore, the choice of refractive indices of core and cladding in the sensor head with integrated readout allows the fabrication of the same device in polymers, for mass-production replication of disposable sensors.

  3. Accuracy of Geophysical Parameters Derived from AIRS/AMSU as a Function of Fractional Cloud Cover

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Barnet, Chris; Blaisdell, John; Iredell, Lena; Keita, Fricky; Kouvaris, Lou; Molnar, Gyula; Chahine, Moustafa

    2005-01-01

    AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1K, and layer precipitable water with an rms error of 20%, in cases with up to 80% effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, was described previously. Pre-launch simulation studies using this algorithm indicated that these results should be achievable. Some modifications have been made to the at-launch retrieval algorithm as described in this paper. Sample fields of parameters retrieved from AIRS/AMSU/HSB data are presented and validated as a function of retrieved fractional cloud cover. As in simulation, the degradation of retrieval accuracy with increasing cloud cover is small. HSB failed in February 2005, and consequently HSB channel radiances are not used in the results shown in this paper. The AIRS/AMSU retrieval algorithm described in this paper, called Version 4, became operational at the Goddard DAAC in April 2005 and is being used to analyze near-real time AIRS/AMSU data. Historical AIRS/AMSU data, going backwards from March 2005 through September 2002, are also being analyzed by the DAAC using the Version 4 algorithm.

  4. Optimization of composite sandwich cover panels subjected to compressive loadings

    NASA Technical Reports Server (NTRS)

    Cruz, Juan R.

    1991-01-01

    An analysis and design method is presented for the design of composite sandwich cover panels that includes the transverse shear effects and damage tolerance considerations. This method is incorporated into a sandwich optimization computer program entitled SANDOP. As a demonstration of its capabilities, SANDOP is used in the present study to design optimized composite sandwich cover panels for transport aircraft wing applications. The results of this design study indicate that optimized composite sandwich cover panels have approximately the same structural efficiency as stiffened composite cover panels designed to satisfy individual constraints. The results also indicate that inplane stiffness requirements have a large effect on the weight of these composite sandwich cover panels at higher load levels. Increasing the maximum allowable strain and the upper percentage limit of the 0 degree and +/- 45 degree plies can yield significant weight savings. The results show that the structural efficiency of these optimized composite sandwich cover panels is relatively insensitive to changes in core density. Thus, core density should be chosen by criteria other than minimum weight (e.g., damage tolerance, ease of manufacture, etc.).

  5. A Simulation Tool for Distributed Databases.

    DTIC Science & Technology

    1981-09-01

    Reed's multiversion system [RE1T8] may also be viewed as updating only copies until the commit is made. The decision to make the changes...distributed voting, and Ellis' ring algorithm. Other, significantly different algorithms not covered in his work include Reed's multiversion algorithm, the...

  6. An Evaluation of Emergency Medicine Core Content Covered by Free Open Access Medical Education Resources.

    PubMed

    Stuntz, Robert; Clontz, Robert

    2016-05-01

    Emergency physicians are using free open access medical education (FOAM) resources at an increasing rate. The extent to which FOAM resources cover the breadth of emergency medicine core content is unknown. We hypothesize that the content of FOAM resources does not provide comprehensive or balanced coverage of the scope of knowledge necessary for emergency medicine providers. Our objective is to quantify emergency medicine core content covered by FOAM resources and identify the predominant FOAM topics. This is an institutional review board-approved, retrospective review of all English-language FOAM posts between July 1, 2013, and June 30, 2014, as aggregated on http://FOAMem.com. The topics of FOAM posts were compared with those of the emergency medicine core content, as defined by the American Board of Emergency Medicine's Model of the Clinical Practice of Emergency Medicine (MCPEM). Each FOAM post could cover more than 1 topic. Repeated posts and summaries were excluded. Review of the MCPEM yielded 915 total emergency medicine topics grouped into 20 sections. Review of 6,424 FOAM posts yielded 7,279 total topics and 654 unique topics, representing 71.5% coverage of the 915 topics outlined by the MCPEM. The procedures section was covered most often, representing 2,285 (31.4%) FOAM topics. The 4 sections with the least coverage were cutaneous disorders, hematologic disorders, nontraumatic musculoskeletal disorders, and obstetric and gynecologic disorders, each representing 0.6% of FOAM topics. Airway techniques; ECG interpretation; research, evidence-based medicine, and interpretation of the literature; resuscitation; and ultrasonography were the most overrepresented subsections, equaling 1,674 (23.0%) FOAM topics when combined. The data suggest an imbalanced and incomplete coverage of emergency medicine core content in FOAM. The study is limited by its retrospective design and use of a single referral Web site to obtain available FOAM resources. More comprehensive and balanced coverage of emergency medicine core content is needed if FOAM is to serve as a primary educational resource. Copyright © 2016 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.

  7. Massively parallel algorithm and implementation of RI-MP2 energy calculation for peta-scale many-core supercomputers.

    PubMed

    Katouda, Michio; Naruse, Akira; Hirano, Yukihiko; Nakajima, Takahito

    2016-11-15

    A new parallel algorithm and its implementation for the RI-MP2 energy calculation utilizing peta-flop-class many-core supercomputers are presented. Some improvements over the previous algorithm (J. Chem. Theory Comput. 2013, 9, 5373) have been made: (1) a dual-level hierarchical parallelization scheme that enables the use of more than 10,000 Message Passing Interface (MPI) processes and (2) a new data communication scheme that reduces network communication overhead. A multi-node and multi-GPU implementation of the present algorithm is presented for calculations on a central processing unit (CPU)/graphics processing unit (GPU) hybrid supercomputer. Benchmark results of the new algorithm and its implementation using the K computer (CPU clustering system) and TSUBAME 2.5 (CPU/GPU hybrid system) demonstrate high efficiency. The peak performance of 3.1 PFLOPS is attained using 80,199 nodes of the K computer. The peak performance of the multi-node and multi-GPU implementation is 514 TFLOPS using 1349 nodes and 4047 GPUs of TSUBAME 2.5. © 2016 Wiley Periodicals, Inc.

  8. TomoPhantom, a software package to generate 2D-4D analytical phantoms for CT image reconstruction algorithm benchmarks

    NASA Astrophysics Data System (ADS)

    Kazantsev, Daniil; Pickalov, Valery; Nagella, Srikanth; Pasca, Edoardo; Withers, Philip J.

    2018-01-01

    In the field of computerized tomographic imaging, many novel reconstruction techniques are routinely tested using simplistic numerical phantoms, e.g. the well-known Shepp-Logan phantom. These phantoms cannot sufficiently cover the broad spectrum of applications in CT imaging where, for instance, smooth or piecewise-smooth 3D objects are common. TomoPhantom provides quick access to an external library of modular analytical 2D/3D phantoms with temporal extensions. In TomoPhantom, quite complex phantoms can be built using additive combinations of geometrical objects, such as Gaussians, parabolas, cones, ellipses, rectangles and volumetric extensions of them. Newly designed phantoms are better suited for benchmarking and testing of different image processing techniques. Specifically, tomographic reconstruction algorithms which employ 2D and 3D scanning geometries can be rigorously analyzed using the software. TomoPhantom also provides a capability of obtaining analytical tomographic projections, which further extends the applicability of the software towards more realistic testing, free from the "inverse crime". All core modules of the package are written in the C-OpenMP language and wrappers for Python and MATLAB are provided to enable easy access. Due to the C-based multi-threaded implementation, volumetric phantoms of high spatial resolution can be obtained with computational efficiency.
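
    The additive construction can be imitated in a few lines of NumPy; the sketch below sums two Gaussians and an ellipse on a regular grid to form a simple 2D phantom, and is not the TomoPhantom API itself.

      # Minimal illustration of building a 2D phantom as an additive combination of
      # analytical objects (two Gaussians and one ellipse), in the spirit of the
      # package described above; this is not the TomoPhantom library interface.
      import numpy as np

      n = 256
      y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]

      def gaussian(cx, cy, sx, sy, amp):
          return amp * np.exp(-(((x - cx) / sx) ** 2 + ((y - cy) / sy) ** 2))

      def ellipse(cx, cy, ax, ay, amp):
          return amp * ((((x - cx) / ax) ** 2 + ((y - cy) / ay) ** 2) <= 1.0)

      phantom = (ellipse(0.0, 0.0, 0.8, 0.6, 1.0)          # bright elliptical body
                 + gaussian(-0.3, 0.1, 0.15, 0.2, 0.5)     # smooth inclusion
                 + gaussian(0.35, -0.2, 0.1, 0.1, -0.4))   # darker smooth inclusion

      print(phantom.shape, float(phantom.min()), float(phantom.max()))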

  9. Toward a comprehensive landscape vegetation monitoring framework

    NASA Astrophysics Data System (ADS)

    Kennedy, Robert; Hughes, Joseph; Neeti, Neeti; Larrue, Tara; Gregory, Matthew; Roberts, Heather; Ohmann, Janet; Kane, Van; Kane, Jonathan; Hooper, Sam; Nelson, Peder; Cohen, Warren; Yang, Zhiqiang

    2016-04-01

    Blossoming Earth observation resources provide great opportunity to better understand land vegetation dynamics, but also require new techniques and frameworks to exploit their potential. Here, I describe several parallel projects that leverage time-series Landsat imagery to describe vegetation dynamics at regional and continental scales. At the core of these projects are the LandTrendr algorithms, which distill time-series earth observation data into periods of consistent long or short-duration dynamics. In one approach, we built an integrated, empirical framework to blend these algorithmically-processed time-series data with field data and lidar data to ascribe yearly change in forest biomass across the US states of Washington, Oregon, and California. In a separate project, we expanded from forest-only monitoring to full landscape land cover monitoring over the same regional scale, including both categorical class labels and continuous-field estimates. In these and other projects, we apply machine-learning approaches to ascribe all changes in vegetation to driving processes such as harvest, fire, urbanization, etc., allowing full description of both disturbance and recovery processes and drivers. Finally, we are moving toward extension of these same techniques to continental and eventually global scales using Google Earth Engine. Taken together, these approaches provide one framework for describing and understanding processes of change in vegetation communities at broad scales.

  10. Cover song identification by sequence alignment algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Chih-Li; Zhong, Qian; Wang, Szu-Ying; Roychowdhury, Vwani

    2011-10-01

    Content-based music analysis has drawn much attention due to the rapidly growing digital music market. This paper describes a method that can be used to effectively identify cover songs. A cover song is a song that preserves only the crucial melody of its reference song but differs in some other acoustic properties. Hence, the beat/chroma-synchronous chromagram, which is insensitive to variations in the timbre or rhythm of songs but sensitive to the melody, is chosen. Key transposition is achieved by cyclically shifting the chromatic domain of the chromagram. By using a Hidden Markov Model (HMM) to obtain the time sequences of songs, the system is made even more robust. Because the Smith-Waterman alignment algorithm is used, similar structure or length between a cover song and its reference is not required.
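
    A minimal version of the alignment step is sketched below: Smith-Waterman local alignment applied to two sequences of 12-dimensional chroma vectors, with a correlation-based frame similarity and a fixed gap penalty. The similarity score, penalty, and random test sequences are illustrative assumptions; the key-transposition and HMM steps described above are omitted.

      # Hedged sketch of Smith-Waterman local alignment on two chroma sequences
      # (one 12-dimensional vector per beat). Scores and test data are illustrative.
      import numpy as np

      def similarity(u, v):
          # correlation of two chroma frames, shifted so unrelated frames score
          # negative on average (keeps local alignments of random data short)
          return float(np.corrcoef(u, v)[0, 1]) - 0.25

      def smith_waterman(A, B, gap=0.5):
          H = np.zeros((len(A) + 1, len(B) + 1))
          for i in range(1, len(A) + 1):
              for j in range(1, len(B) + 1):
                  H[i, j] = max(0.0,
                                H[i - 1, j - 1] + similarity(A[i - 1], B[j - 1]),
                                H[i - 1, j] - gap,
                                H[i, j - 1] - gap)
          return float(H.max())          # best local-alignment score

      rng = np.random.default_rng(0)
      song = rng.random((60, 12))
      cover = np.vstack([rng.random((10, 12)), song[15:45], rng.random((8, 12))])
      other = rng.random((48, 12))
      print("score vs. cover:", round(smith_waterman(song, cover), 2))
      print("score vs. unrelated:", round(smith_waterman(song, other), 2))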

  11. A Parallel Saturation Algorithm on Shared Memory Architectures

    NASA Technical Reports Server (NTRS)

    Ezekiel, Jonathan; Siminiceanu

    2007-01-01

    Symbolic state-space generators are notoriously hard to parallelize. However, the Saturation algorithm implemented in the SMART verification tool differs from other sequential symbolic state-space generators in that it exploits the locality of firing events in asynchronous system models. This paper explores whether event locality can be utilized to efficiently parallelize Saturation on shared-memory architectures. Conceptually, we propose to parallelize the firing of events within a decision diagram node, which is technically realized via a thread pool. We discuss the challenges involved in our parallel design and conduct experimental studies on its prototypical implementation. On a dual-processor dual-core PC, our studies show speed-ups for several example models, e.g., of up to 50% for a Kanban model, when compared to running our algorithm only on a single core.

  12. Radioisotopic heat source

    DOEpatents

    Sayell, E.H.

    1973-10-23

    A radioisotopic heat source is described which includes a core of heat productive, radioisotopic material, an impact resistant layer of graphite surrounding said core, and a shell of iridium metal intermediate the core and the impact layer. The source may also include a compliant mat of iridium between the core and the iridium shell, as well as an outer covering of iridium metal about the entire heat source. (Official Gazette)

  13. Technical Note: A fast online adaptive replanning method for VMAT using flattening filter free beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ates, Ozgur; Ahunbay, Ergun E.; Li, X. Allen, E-mail: ali@mcw.edu

    Purpose: To develop a fast replanning algorithm based on segment aperture morphing (SAM) for online replanning of volumetric modulated arc therapy (VMAT) with flattening filter free (FFF) beams. Methods: A software tool was developed to interface with a VMAT research planning system, which enables the input and output of beam and machine parameters of VMAT plans. The SAM algorithm was used to modify multileaf collimator positions for each segment aperture based on the changes of the target from the planning (CT/MR) to daily image [CT/CBCT/magnetic resonance imaging (MRI)]. The leaf travel distance was controlled for large shifts to prevent the increase of VMAT delivery time. The SAM algorithm was tested for 11 patient cases including prostate, pancreatic, and lung cancers. For each daily image set, three types of VMAT plans, image-guided radiation therapy (IGRT) repositioning, SAM adaptive, and full-scope reoptimization plans, were generated and compared. Results: The SAM adaptive plans were found to have improved the plan quality in target and/or critical organs when compared to the IGRT repositioning plans and were comparable to the reoptimization plans based on the data of planning target volume (PTV)-V100 (volume covered by 100% of prescription dose). For the cases studied, the average PTV-V100 was 98.85% ± 1.13%, 97.61% ± 1.45%, and 92.84% ± 1.61% with FFF beams for the reoptimization, SAM adaptive, and repositioning plans, respectively. The execution of the SAM algorithm takes less than 10 s using 16-CPU (2.6 GHz dual core) hardware. Conclusions: The SAM algorithm can generate adaptive VMAT plans using FFF beams with comparable plan qualities as those from the full-scope reoptimization plans based on daily CT/CBCT/MRI and can be used for online replanning to address interfractional variations.

  14. Optical Algorithm for Cloud Shadow Detection Over Water

    DTIC Science & Technology

    2013-02-01

    ...particularly over humid tropical regions. Throughout the year, about two-thirds of the Earth's surface is always covered by clouds [1]. The problem...V. Khlopenkov and A. P. Trishchenko, "SPARC: New cloud, snow, cloud shadow detection scheme for historical 1-km AVHRR data over Canada," J. Atmos...

  15. Proposed hybrid-classifier ensemble algorithm to map snow cover area

    NASA Astrophysics Data System (ADS)

    Nijhawan, Rahul; Raman, Balasubramanian; Das, Josodhir

    2018-01-01

    A metaclassification ensemble approach is known to improve the prediction performance for snow-covered area mapping. The methodology adopted here is based on a neural network along with four state-of-the-art machine learning algorithms (support vector machines, artificial neural networks, spectral angle mapper, and K-means clustering) and a snow index, the normalized difference snow index. An AdaBoost ensemble algorithm based on decision trees for snow-cover mapping is also proposed. According to the available literature, these methods have rarely been used for snow-cover mapping. Employing the above techniques, a study was conducted for the Raktavarn and Chaturangi Bamak glaciers, Uttarakhand, Himalaya, using a multispectral Landsat 7 ETM+ (enhanced thematic mapper) image. The study also compares the results with those obtained from statistical combination methods (majority rule and belief functions) and the accuracies of individual classifiers. Accuracy assessment is performed by computing the quantity and allocation disagreement, analyzing statistical measures (accuracy, precision, specificity, AUC, and sensitivity) and receiver operating characteristic curves. A total of 225 combinations of parameters for individual classifiers were trained and tested on the dataset and the results were compared with the proposed approach. It was observed that the proposed methodology produced the highest classification accuracy (95.21%), close to that (94.01%) produced by the proposed AdaBoost ensemble algorithm. From the sets of observations, it was concluded that the ensemble of classifiers produced better results compared to individual classifiers.
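
    The AdaBoost component can be illustrated with a generic scikit-learn sketch on synthetic per-pixel features standing in for Landsat bands; the classifier settings and data below are assumptions, not the study's configuration.

      # Hedged sketch of an AdaBoost-of-decision-trees classifier for a two-class
      # (snow / no-snow) mapping task, using scikit-learn on synthetic "band" values;
      # the real study uses multispectral Landsat 7 ETM+ pixels and several classifiers.
      from sklearn.datasets import make_classification
      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.metrics import accuracy_score
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=2000, n_features=6, n_informative=4,
                                 random_state=0)      # stand-in for per-pixel band values
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      clf = AdaBoostClassifier(n_estimators=100, random_state=0)  # tree stumps by default
      clf.fit(X_tr, y_tr)
      print("snow / no-snow test accuracy: %.3f" % accuracy_score(y_te, clf.predict(X_te)))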

  16. Quantum algorithm for support matrix machines

    NASA Astrophysics Data System (ADS)

    Duan, Bojia; Yuan, Jiabin; Liu, Ying; Li, Dan

    2017-09-01

    We propose a quantum algorithm for support matrix machines (SMMs) that efficiently addresses an image classification problem by introducing a least-squares reformulation. This algorithm consists of two core subroutines: a quantum matrix inversion (Harrow-Hassidim-Lloyd, HHL) algorithm and a quantum singular value thresholding (QSVT) algorithm. The two algorithms can be implemented on a universal quantum computer with complexity O[log(npq)] and O[log(pq)], respectively, where n is the number of training data and p × q is the size of the feature space. By iterating the algorithms, we can find the parameters for the SMM classification model. Our analysis shows that both the HHL and QSVT algorithms achieve an exponential speedup over their classical counterparts.

  17. Current Status of Japan's Activity for GPM/DPR and Global Rainfall Map algorithm development

    NASA Astrophysics Data System (ADS)

    Kachi, M.; Kubota, T.; Yoshida, N.; Kida, S.; Oki, R.; Iguchi, T.; Nakamura, K.

    2012-04-01

    The Global Precipitation Measurement (GPM) mission is composed of two categories of satellites: 1) a Tropical Rainfall Measuring Mission (TRMM)-like non-sun-synchronous orbit satellite (GPM Core Observatory); and 2) a constellation of satellites carrying microwave radiometer instruments. The GPM Core Observatory carries the Dual-frequency Precipitation Radar (DPR), which is being developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), and a microwave radiometer provided by the National Aeronautics and Space Administration (NASA). The GPM Core Observatory will be launched in February 2014, and development of the algorithms is underway. The DPR Level 1 algorithm, which provides the DPR L1B product including received power, is being developed by JAXA. The first version was submitted in March 2011. Development of the second version of the DPR L1B algorithm (Version 2) will be completed in March 2012. The Version 2 algorithm includes all basic functions, a preliminary database, the HDF5 I/F, and minimum error handling. The pre-launch code will be developed by the end of October 2012. The DPR Level 2 algorithm has been developed by the DPR Algorithm Team led by Japan, which is under the NASA-JAXA Joint Algorithm Team. The first version of the GPM/DPR Level-2 Algorithm Theoretical Basis Document was completed in November 2010. The second version, the "baseline code", was completed in January 2012. The baseline code includes the main module and eight basic sub-modules (Preparation module, Vertical Profile module, Classification module, SRT module, DSD module, Solver module, Input module, and Output module). The Level-2 algorithms will provide KuPR-only products, KaPR-only products, and dual-frequency precipitation products, with estimated precipitation rate, radar reflectivity, and precipitation information such as drop size distribution and bright band height. It is important to develop an algorithm applicable to both TRMM/PR and KuPR in order to produce a long-term continuous data set. The pre-launch code will be developed by autumn 2012. The Global Rainfall Map algorithm has been developed by the Global Rainfall Map Algorithm Development Team in Japan. The algorithm builds on the heritage of the Global Satellite Mapping of Precipitation (GSMaP) project between 2002 and 2007 and the near-real-time version operating at JAXA since 2007. The baseline code uses the current operational GSMaP code (V5.222), and its development was completed in January 2012. The pre-launch code will be developed by autumn 2012, including an update of the database for rain type classification and rain/no-rain classification, and the introduction of rain-gauge correction.

  18. An automated approach for annual layer counting in ice cores

    NASA Astrophysics Data System (ADS)

    Winstrup, M.; Svensson, A.; Rasmussen, S. O.; Winther, O.; Steig, E.; Axelrod, A.

    2012-04-01

    The temporal resolution of some ice cores is sufficient to preserve seasonal information in the ice core record. In such cases, annual layer counting represents one of the most accurate methods to produce a chronology for the core. Yet, manual layer counting is a tedious and sometimes ambiguous job. As reliable layer recognition becomes more difficult, a manual approach increasingly relies on human interpretation of the available data. Thus, much may be gained by an automated and therefore objective approach for annual layer identification in ice cores. We have developed a novel method for automated annual layer counting in ice cores, which relies on Bayesian statistics. It uses algorithms from the statistical framework of Hidden Markov Models (HMM), originally developed for use in machine speech recognition. The strength of this layer detection algorithm lies in the way it is able to imitate the manual procedures for annual layer counting, while being based on purely objective criteria for annual layer identification. With this methodology, it is possible to determine the most likely position of multiple layer boundaries in an entire section of ice core data at once. It provides a probabilistic uncertainty estimate of the resulting layer count, hence ensuring a proper treatment of ambiguous layer boundaries in the data. Furthermore multiple data series can be incorporated to be used at once, hence allowing for a full multi-parameter annual layer counting method similar to a manual approach. In this study, the automated layer counting algorithm has been applied to data from the NGRIP ice core, Greenland. The NGRIP ice core has very high temporal resolution with depth, and hence the potential to be dated by annual layer counting far back in time. In previous studies [Andersen et al., 2006; Svensson et al., 2008], manual layer counting has been carried out back to 60 kyr BP. A comparison between the counted annual layers based on the two approaches will be presented and their differences discussed. Within the estimated uncertainties, the two methodologies agree. This shows the potential for a fully automated annual layer counting method to be operational for data sections where the annual layering is unknown.

  19. Effect of a single 3-hour exposure to bright light on core body temperature and sleep in humans.

    PubMed

    Dijk, D J; Cajochen, C; Borbély, A A

    1991-01-02

    Seven human subjects were exposed to bright light (BL, approx. 2500 lux) and dim light (DL, approx. 6 lux) during 3 h prior to nocturnal sleep, in a cross-over design. At the end of the BL exposure period core body temperature was significantly higher than at the end of the DL exposure period. The difference in core body temperature persisted during the first 4 h of sleep. The latency to sleep onset was increased after BL exposure. Rapid-eye movement sleep (REMS) and slow-wave sleep (SWS; stage 3 + 4 of non-REMS) were not significantly changed. Eight subjects were exposed to BL from 20.30 to 23.30 h while their eyes were covered or uncovered. During BL exposure with uncovered eyes, core body temperature decreased significantly less than during exposure with covered eyes. We conclude that bright light immediately affects core body temperature and that this effect is mediated via the eyes.

  20. Scalable and Power Efficient Data Analytics for Hybrid Exascale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhary, Alok; Samatova, Nagiza; Wu, Kesheng

    This project developed a generic and optimized set of core data analytics functions. These functions organically consolidate a broad constellation of high performance analytical pipelines. As the architectures of emerging HPC systems become inherently heterogeneous, there is a need to design algorithms for data analysis kernels accelerated on hybrid multi-node, multi-core HPC architectures composed of a mix of CPUs, GPUs, and SSDs. Furthermore, the trend toward power-aware computing drives the advances in our performance-energy tradeoff analysis framework, which enables our data analysis kernel algorithms and software to be parameterized so that users can choose the right power-performance optimizations.

  1. Parallelized seeded region growing using CUDA.

    PubMed

    Park, Seongjin; Lee, Jeongjin; Lee, Hyunna; Shin, Juneseuk; Seo, Jinwook; Lee, Kyoung Ho; Shin, Yeong-Gil; Kim, Bohyoung

    2014-01-01

    This paper presents a novel method for parallelizing the seeded region growing (SRG) algorithm using Compute Unified Device Architecture (CUDA) technology, with the intention of overcoming the theoretical weakness of the SRG algorithm, namely that its computation time is directly proportional to the size of the segmented region. The segmentation performance of the proposed CUDA-based SRG is compared with SRG implementations on single-core CPUs, quad-core CPUs, and shader language programming, using synthetic datasets and 20 body CT scans. Based on the experimental results, the CUDA-based SRG outperforms the other three implementations, suggesting that it can substantially assist segmentation during massive CT screening tests.
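    As context for why serial SRG is a bottleneck, the following minimal sequential sketch grows a region pixel by pixel from a seed; the tolerance, 4-neighbourhood, and running-mean criterion are illustrative choices, and the CUDA parallelisation that is the paper's contribution is not reproduced here.

    ```python
    # Minimal sequential seeded region growing (SRG) sketch; parameters are illustrative.
    import heapq
    import numpy as np

    def seeded_region_growing(img, seed, tol=10.0):
        h, w = img.shape
        region = np.zeros((h, w), dtype=bool)
        region[seed] = True
        mean, count = float(img[seed]), 1
        heap = []

        def push_neighbours(y, x):
            # queue 4-neighbours, prioritised by distance to the current region mean
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                    heapq.heappush(heap, (abs(float(img[ny, nx]) - mean), ny, nx))

        push_neighbours(*seed)
        while heap:
            diff, y, x = heapq.heappop(heap)
            if region[y, x] or diff > tol:
                continue
            region[y, x] = True                       # grow the region one pixel at a time
            mean = (mean * count + float(img[y, x])) / (count + 1)
            count += 1
            push_neighbours(y, x)
        return region

    mask = seeded_region_growing(np.random.default_rng(1).normal(100, 5, (64, 64)), (32, 32))
    print(mask.sum(), "pixels in grown region")
    ```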

  2. ALGORITHM OF CARDIO COMPLEX DETECTION AND SORTING FOR PROCESSING THE DATA OF CONTINUOUS CARDIO SIGNAL MONITORING.

    PubMed

    Krasichkov, A S; Grigoriev, E B; Nifontov, E M; Shapovalov, V V

    The paper presents an algorithm for cardio complex classification as part of processing the data of continuous cardiac monitoring. R-wave detection performed concurrently with cardio complex sorting is discussed. The core of this approach is the use of prior information about cardio complex forms, segmental structure, and degree of kindness. Results of testing the sorting algorithm are provided.

  3. Accuracy of Geophysical Parameters Derived from AIRS/AMSU as a Function of Fractional Cloud Cover

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Barnet, Chris; Blaisdell, John; Iredell, Lena; Keita, Fricky; Kouvaris, Lou; Molnar, Gyula; Chahine, Moustafa

    2006-01-01

    AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next-generation polar-orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze Atmospheric InfraRed Sounder/Advanced Microwave Sounding Unit/Humidity Sounder Brazil (AIRS/AMSU/HSB) data in the presence of clouds, called the at-launch algorithm, was described previously. Pre-launch simulation studies using this algorithm indicated that these results should be achievable. Some modifications have been made to the at-launch retrieval algorithm as described in this paper. Sample fields of parameters retrieved from AIRS/AMSU/HSB data are presented and validated as a function of retrieved fractional cloud cover. As in simulation, the degradation of retrieval accuracy with increasing cloud cover is small, and the RMS accuracy of lower tropospheric temperature retrieved with 80 percent cloud cover is about 0.5 K poorer than for clear cases. HSB failed in February 2003, and consequently HSB channel radiances are not used in the results shown in this paper. The AIRS/AMSU retrieval algorithm described in this paper, called Version 4, became operational at the Goddard DAAC (Distributed Active Archive Center) in April 2003 and is being used to analyze near-real-time AIRS/AMSU data. Historical AIRS/AMSU data, going backwards from March 2005 through September 2002, are also being analyzed by the DAAC using the Version 4 algorithm.

  4. Early Results from the Global Precipitation Measurement (GPM) Mission in Japan

    NASA Astrophysics Data System (ADS)

    Kachi, Misako; Kubota, Takuji; Masaki, Takeshi; Kaneko, Yuki; Kanemaru, Kaya; Oki, Riko; Iguchi, Toshio; Nakamura, Kenji; Takayabu, Yukari N.

    2015-04-01

    The Global Precipitation Measurement (GPM) mission is an international collaboration to achieve highly accurate and highly frequent global precipitation observations. The GPM mission consists of the GPM Core Observatory, jointly developed by the U.S. and Japan, and Constellation Satellites that carry microwave radiometers and are provided by the GPM partner agencies. The Dual-frequency Precipitation Radar (DPR) was developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), and installed on the GPM Core Observatory. The GPM Core Observatory uses a non-sun-synchronous orbit to continue the diurnal-cycle observations of rainfall begun by the Tropical Rainfall Measuring Mission (TRMM) satellite, and was successfully launched at 3:37 a.m. on February 28, 2014 (JST). The Constellation Satellites, including JAXA's Global Change Observation Mission (GCOM) - Water (GCOM-W1), or "SHIZUKU," are launched by each partner agency sometime around 2014 and contribute to expanding observation coverage and increasing observation frequency. JAXA develops the DPR Level 1 algorithm, and the NASA-JAXA Joint Algorithm Team develops the DPR Level 2 and DPR-GMI combined Level 2 algorithms. JAXA also develops the Global Rainfall Map (GPM-GSMaP) algorithm, the latest version of the Global Satellite Mapping of Precipitation (GSMaP), as a national product to distribute an hourly, 0.1-degree horizontal resolution rainfall map. Major improvements in the GPM-GSMaP algorithm are: 1) improvements in the microwave imager algorithm based on the AMSR2 precipitation standard algorithm, including a new land algorithm and a new coast detection scheme; 2) development of an orographic rainfall correction method for warm rainfall in coastal areas (Taniguchi et al., 2012); 3) an update of databases, including rainfall detection over land and the land surface emission database; 4) development of a microwave sounder algorithm over land (Kida et al., 2012); and 5) development of a gauge-calibrated GSMaP algorithm (Ushio et al., 2013). In addition to these algorithm improvements, the number of passive microwave imagers and/or sounders used in GPM-GSMaP was increased compared with the previous version. After early calibration and validation of the products and evaluation confirming that all products achieved the release criteria, all GPM standard products and the GPM-GSMaP product have been released to the public since September 2014. The GPM products can be downloaded via the internet through the JAXA G-Portal (https://www.gportal.jaxa.jp).

  5. Computer-assisted design of flux-cored wires

    NASA Astrophysics Data System (ADS)

    Dubtsov, Yu N.; Zorin, I. V.; Sokolov, G. N.; Antonov, A. A.; Artem'ev, A. A.; Lysak, V. I.

    2017-02-01

    The algorithm and a description of the AlMe-WireLaB software for the computer-assisted design of flux-cored wires are introduced. The software functionality is illustrated with the selection of components for a flux-cored wire that yields deposited metal of the Fe-Cr-C-Mo-Ni-Ti-B system. It is demonstrated that the developed software enables a technologically reliable flux-cored wire to be designed for surfacing, producing deposited metal of the specified composition.

  6. Multipoint to multipoint routing and wavelength assignment in multi-domain optical networks

    NASA Astrophysics Data System (ADS)

    Qin, Panke; Wu, Jingru; Li, Xudong; Tang, Yongli

    2018-01-01

    In multi-point to multi-point (MP2MP) routing and wavelength assignment (RWA) problems, researchers usually assume the optical network to be a single domain. In practice, however, optical networks are evolving toward multi-domain and larger-scale deployments. In this context, multi-core shared tree (MST)-based MP2MP RWA introduces new problems, including selecting the optimal multicast domain sequence and deciding in which domains the core nodes should reside. In this letter, we focus on MST-based MP2MP RWA problems in multi-domain optical networks, and present mixed integer linear programming (MILP) formulations to optimally construct MP2MP multicast trees. A heuristic algorithm based on network virtualization and a weighted clustering algorithm (NV-WCA) is proposed. Simulation results show that, under different traffic patterns, the proposed algorithm achieves significant improvement in network resource occupation and multicast tree setup latency compared with conventional algorithms that were proposed for a single-domain network environment.

  7. Comparison of Snow Mass Estimates from a Prototype Passive Microwave Snow Algorithm, a Revised Algorithm and a Snow Depth Climatology

    NASA Technical Reports Server (NTRS)

    Foster, J. L.; Chang, A. T. C.; Hall, D. K.

    1997-01-01

    While it is recognized that no single snow algorithm is capable of producing accurate global estimates of snow depth, for research purposes it is useful to test an algorithm's performance in different climatic areas in order to see how it responds to a variety of snow conditions. This study is one of the first to develop separate passive microwave snow algorithms for North America and Eurasia by including parameters that consider the effects of variations in forest cover and crystal size on microwave brightness temperature. A new algorithm (GSFC 1996) is compared to a prototype algorithm (Chang et al., 1987) and to a snow depth climatology (SDC), which for this study is considered to be a standard reference or baseline. It is shown that the GSFC 1996 algorithm compares much more favorably to the SDC than does the Chang et al. (1987) algorithm. For example, in North America in February there is a 15% difference between the GSFC 1996 algorithm and the SDC, but with the Chang et al. (1987) algorithm the difference is greater than 50%. In Eurasia, also in February, there is only a 1.3% difference between the GSFC 1996 algorithm and the SDC, whereas with the Chang et al. (1987) algorithm the difference is about 20%. As expected, differences tend to be less when the snow cover extent is greater, particularly for Eurasia. The GSFC 1996 algorithm performs better in North America in each month than does the Chang et al. (1987) algorithm. This is also the case in Eurasia, except in April and May, when the Chang et al. (1987) algorithm is in closer accord with the SDC than is the GSFC 1996 algorithm.

  8. Algorithmic Puzzles: History, Taxonomies, and Applications in Human Problem Solving

    ERIC Educational Resources Information Center

    Levitin, Anany

    2017-01-01

    The paper concerns an important but underappreciated genre of algorithmic puzzles, explaining what these puzzles are, reviewing milestones in their long history, and giving two different ways to classify them. Also covered are major applications of algorithmic puzzles in cognitive science research, with an emphasis on insight problem solving, and…

  9. A Hybrid Cellular Genetic Algorithm for Multi-objective Crew Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Jolai, Fariborz; Assadipour, Ghazal

    Crew scheduling is one of the important problems of the airline industry. The problem assigns crew members to a set of flights such that every flight is covered. In a robust schedule, the assignment should be made so that total cost, delays, and unbalanced utilization are minimized. As the problem is NP-hard and the objectives are in conflict with each other, a multi-objective meta-heuristic called CellDE, which is a hybrid cellular genetic algorithm, is implemented as the optimization method. The proposed algorithm provides the decision maker with a set of non-dominated or Pareto-optimal solutions, and enables them to choose the best one according to their preferences. A set of problems of different sizes is generated and solved using the proposed algorithm. To evaluate the performance of the proposed algorithm, three metrics are suggested, and the diversity and the convergence of the achieved Pareto front are appraised. Finally, a comparison is made between CellDE and PAES, another meta-heuristic algorithm. The results show the superiority of CellDE.

  10. On Connected Target k-Coverage in Heterogeneous Wireless Sensor Networks.

    PubMed

    Yu, Jiguo; Chen, Ying; Ma, Liran; Huang, Baogui; Cheng, Xiuzhen

    2016-01-15

    Coverage and connectivity are two important performance evaluation indices for wireless sensor networks (WSNs). In this paper, we focus on the connected target k-coverage (CTC k) problem in heterogeneous wireless sensor networks (HWSNs). A centralized connected target k-coverage algorithm (CCTC k) and a distributed connected target k-coverage algorithm (DCTC k) are proposed so as to generate connected cover sets for energy-efficient connectivity and coverage maintenance. To be specific, our proposed algorithms aim at achieving minimum connected target k-coverage, where each target in the monitored region is covered by at least k active sensor nodes. In addition, these two algorithms strive to minimize the total number of active sensor nodes and guarantee that each sensor node is connected to a sink, such that the sensed data can be forwarded to the sink. Our theoretical analysis and simulation results show that our proposed algorithms outperform a state-of-the-art connected k-coverage protocol for HWSNs.
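    To make the coverage objective concrete, here is a hedged sketch of a generic greedy heuristic for target k-coverage (every target observed by at least k chosen sensors). It is not the paper's CCTC k or DCTC k algorithm, and it ignores the connectivity-to-sink constraint entirely.

    ```python
    # Generic greedy heuristic for target k-coverage (illustration only, not CCTC k / DCTC k).
    def greedy_k_coverage(coverage, num_targets, k):
        """coverage: dict sensor -> set of target indices it can observe."""
        need = {t: k for t in range(num_targets)}          # outstanding coverage demand per target
        active, remaining = [], dict(coverage)
        while any(v > 0 for v in need.values()) and remaining:
            # pick the sensor that reduces the most outstanding demand
            best = max(remaining, key=lambda s: sum(1 for t in remaining[s] if need[t] > 0))
            gain = sum(1 for t in remaining[best] if need[t] > 0)
            if gain == 0:
                break                                       # no remaining sensor helps
            for t in remaining[best]:
                if need[t] > 0:
                    need[t] -= 1
            active.append(best)
            del remaining[best]
        return active, all(v == 0 for v in need.values())

    sensors = {"s1": {0, 1}, "s2": {1, 2}, "s3": {0, 2}, "s4": {0, 1, 2}}
    chosen, feasible = greedy_k_coverage(sensors, num_targets=3, k=2)
    print(chosen, feasible)
    ```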

  11. Development of 2010 national land cover database for the Nepal.

    PubMed

    Uddin, Kabir; Shrestha, Him Lal; Murthy, M S R; Bajracharya, Birendra; Shrestha, Basanta; Gilani, Hammad; Pradhan, Sudip; Dangol, Bikash

    2015-01-15

    Land cover and its change analysis across the Hindu Kush Himalayan (HKH) region is recognized as an urgent need to support diverse issues of environmental conservation. This study presents the first and most complete national land cover database of Nepal, prepared using public domain Landsat TM data of 2010 and a replicable methodology. The study estimated that 39.1% of Nepal is covered by forests and 29.83% by agriculture. Patch and edge forests, constituting 23.4% of national forest cover, revealed proximate biotic interference with the forests. Core forests constituted 79.3% of the forests within protected areas, whereas 63% of the forest area outside protected areas was core forest. Physiographic region-wise forest fragmentation analysis revealed specific conservation requirements for the productive hill and mid-mountain regions. Comparative analysis with a Landsat TM-based global land cover product showed differences of the order of 30-60% among land cover classes, stressing the need for significant improvements before national-level adoption. An online web-based land cover validation tool was developed for continual improvement of the land cover product. The potential use of the data set for national and regional sustainable land use planning strategies and for meeting several global commitments is also highlighted. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Modeling Hubble Space Telescope flight data by Q-Markov cover identification

    NASA Technical Reports Server (NTRS)

    Liu, K.; Skelton, R. E.; Sharkey, J. P.

    1992-01-01

    A state space model for the Hubble Space Telescope under the influence of unknown disturbances in orbit is presented. This model was obtained from flight data by applying the Q-Markov covariance equivalent realization identification algorithm. This state space model guarantees the match of the first Q-Markov parameters and covariance parameters of the Hubble system. The flight data were partitioned into high- and low-frequency components for more efficient Q-Markov cover modeling, to reduce some computational difficulties of the Q-Markov cover algorithm. This identification revealed more than 20 lightly damped modes within the bandwidth of the attitude control system. Comparisons with the analytical (TREETOPS) model are also included.

  13. Genetic algorithm for nuclear data evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arthur, Jennifer Ann

    These are slides on a genetic algorithm for nuclear data evaluation. The following is covered: initial population, fitness (outer loop), calculate fitness, selection (first part of inner loop), reproduction (second part of inner loop), solution, and examples.
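    The slide topics map directly onto the canonical genetic-algorithm loop; the toy sketch below mirrors those steps (initial population, fitness, selection, reproduction, solution) using a bit-string representation and a made-up fitness function unrelated to any actual nuclear-data objective.

    ```python
    # Minimal generic GA loop; representation and fitness are illustrative only.
    import random
    random.seed(0)

    GENES, POP, GENERATIONS, MUT = 20, 30, 60, 0.02

    def fitness(ind):                      # toy objective: count of 1-bits
        return sum(ind)

    def select(pop):                       # tournament selection of size 2
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    def reproduce(p1, p2):                 # one-point crossover + bit-flip mutation
        cut = random.randrange(1, GENES)
        child = p1[:cut] + p2[cut:]
        return [g ^ 1 if random.random() < MUT else g for g in child]

    population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        population = [reproduce(select(population), select(population)) for _ in range(POP)]
    best = max(population, key=fitness)    # the "solution" step
    print("best fitness:", fitness(best))
    ```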

  14. A quantitative comparison of soil moisture inversion algorithms

    NASA Technical Reports Server (NTRS)

    Zyl, J. J. van; Kim, Y.

    2001-01-01

    This paper compares the performance of four bare surface radar soil moisture inversion algorithms in the presence of measurement errors. The particular errors considered include calibration errors, system thermal noise, local topography and vegetation cover.

  15. The design of multi-core DSP parallel model based on message passing and multi-level pipeline

    NASA Astrophysics Data System (ADS)

    Niu, Jingyu; Hu, Jian; He, Wenjing; Meng, Fanrong; Li, Chuanrong

    2017-10-01

    Currently, the design of embedded signal processing systems is often based on a specific application, an approach that is not conducive to the rapid development of signal processing technology. In this paper, a parallel processing model architecture based on a multi-core DSP platform is designed; it is mainly suitable for complex algorithms composed of different modules. This model combines the ideas of multi-level pipeline parallelism and message passing, and incorporates the advantages of the mainstream multi-core DSP models (the Master-Slave model and the Data Flow model), so that it achieves better performance. This paper uses a three-dimensional image generation algorithm to validate the efficiency of the proposed model by comparing it with the Master-Slave and Data Flow models.

  16. Mastery Multiplied

    ERIC Educational Resources Information Center

    Shumway, Jessica F.; Kyriopoulos, Joan

    2014-01-01

    Being able to find the correct answer to a math problem does not always indicate solid mathematics mastery. A student who knows how to apply the basic algorithms can correctly solve problems without understanding the relationships between numbers or why the algorithms work. The Common Core standards require that students actually understand…

  17. The Process of Parallelizing the Conjunction Prediction Algorithm of ESA's SSA Conjunction Prediction Service Using GPGPU

    NASA Astrophysics Data System (ADS)

    Fehr, M.; Navarro, V.; Martin, L.; Fletcher, E.

    2013-08-01

    Space Situational Awareness[8] (SSA) is defined as the comprehensive knowledge, understanding and maintained awareness of the population of space objects, the space environment and existing threats and risks. As ESA's SSA Conjunction Prediction Service (CPS) requires the repetitive application of a processing algorithm against a data set of man-made space objects, it is crucial to exploit the highly parallelizable nature of this problem. Currently, the CPS system makes use of OpenMP[7] for parallelization purposes using CPU threads, but only a GPU with its hundreds of cores can fully benefit from such high levels of parallelism. This paper presents the adaptation of several core algorithms[5] of the CPS for general-purpose computing on graphics processing units (GPGPU) using NVIDIA's Compute Unified Device Architecture (CUDA).

  18. Detection of core-periphery structure in networks based on 3-tuple motifs

    NASA Astrophysics Data System (ADS)

    Ma, Chuang; Xiang, Bing-Bing; Chen, Han-Shuang; Small, Michael; Zhang, Hai-Feng

    2018-05-01

    Detecting mesoscale structure, such as community structure, is of vital importance for analyzing complex networks. Recently, a new mesoscale structure, core-periphery (CP) structure, has been identified in many real-world systems. In this paper, we propose an effective algorithm for detecting CP structure based on a 3-tuple motif. In this algorithm, we first define a 3-tuple motif in terms of the patterns of edges as well as the property of nodes, and then a motif adjacency matrix is constructed based on the 3-tuple motif. Finally, the problem is converted into finding the cluster with the smallest motif conductance. Our algorithm works well for different CP structures, including single or multiple CP structures and local or global CP structures. Results on synthetic and empirical networks validate the high performance of our method.

  19. Out-of-Core Streamline Visualization on Large Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Ueng, Shyh-Kuang; Sikorski, K.; Ma, Kwan-Liu

    1997-01-01

    It's advantageous for computational scientists to have the capability to perform interactive visualization on their desktop workstations. For data on large unstructured meshes, this capability is not generally available. In particular, particle tracing on unstructured grids can result in a high percentage of non-contiguous memory accesses and therefore may perform very poorly with virtual memory paging schemes. The alternative of visualizing a lower resolution of the data degrades the original high-resolution calculations. This paper presents an out-of-core approach for interactive streamline construction on large unstructured tetrahedral meshes containing millions of elements. The out-of-core algorithm uses an octree to partition and restructure the raw data into subsets stored into disk files for fast data retrieval. A memory management policy tailored to the streamline calculations is used such that during the streamline construction only a very small amount of data are brought into the main memory on demand. By carefully scheduling computation and data fetching, the overhead of reading data from the disk is significantly reduced and good memory performance results. This out-of-core algorithm makes possible interactive streamline visualization of large unstructured-grid data sets on a single mid-range workstation with relatively low main-memory capacity: 5-20 megabytes. Our test results also show that this approach is much more efficient than relying on virtual memory and operating system's paging algorithms.

  20. Real-time core body temperature estimation from heart rate for first responders wearing different levels of personal protective equipment.

    PubMed

    Buller, Mark J; Tharion, William J; Duhamel, Cynthia M; Yokota, Miyo

    2015-01-01

    First responders often wear personal protective equipment (PPE) for protection from on-the-job hazards. While PPE ensembles offer individuals protection, they limit one's ability to thermoregulate, and can place the wearer in danger of heat exhaustion and higher cardiac stress. Automatically monitoring thermal-work strain is one means to manage these risks, but measuring core body temperature (Tc) has proved problematic. An algorithm that estimates Tc from sequential measures of heart rate (HR) was compared to the observed Tc from 27 US soldiers participating in three different chemical/biological training events (45-90 min duration) while wearing PPE. Hotter participants (higher Tc) averaged heart rates (HRs) of 140 bpm and reached Tc around 39 °C. Overall the algorithm had a small bias (0.02 °C) and root mean square error (0.21 °C). Limits of agreement (LoA ± 0.48 °C) were similar to comparisons of Tc measured by oesophageal and rectal probes. The algorithm shows promise for use in real-time monitoring of encapsulated first responders. An algorithm to estimate core temperature (Tc) from non-invasive measures of HR was validated. Three independent studies (n = 27) compared the estimated Tc to the observed Tc in humans participating in chemical/biological hazard training. The algorithm's bias and variance relative to the observed data were similar to those found from comparisons of oesophageal and rectal measurements.
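    As a rough illustration of what a sequential HR-to-Tc estimator can look like, the sketch below runs a scalar Kalman-filter-style update with a random-walk temperature state and HR modelled as a noisy linear function of Tc. All coefficients and noise variances are invented for the example; they are not the published model parameters.

    ```python
    # Heavily hedged sketch of a Kalman-filter-style Tc-from-HR estimator.
    # The linear HR model and all variances below are invented for illustration.
    def estimate_tc(hr_series, tc0=37.1):
        a, b = 40.0, -1375.0        # assumed linear map HR ~ a*Tc + b (illustrative only)
        q, r = 0.0005, 19.0         # assumed process / observation noise variances
        tc, p = tc0, 0.0
        estimates = []
        for hr in hr_series:
            p = p + q                             # predict (random-walk state)
            k = p * a / (a * a * p + r)           # Kalman gain for the scalar observation
            tc = tc + k * (hr - (a * tc + b))     # update with the HR innovation
            p = (1.0 - k * a) * p
            estimates.append(tc)
        return estimates

    print(estimate_tc([110, 125, 140, 150, 150, 145])[-1])
    ```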

  1. Comparing performance of many-core CPUs and GPUs for static and motion compensated reconstruction of C-arm CT data.

    PubMed

    Hofmann, Hannes G; Keck, Benjamin; Rohkohl, Christopher; Hornegger, Joachim

    2011-01-01

    Interventional reconstruction of 3-D volumetric data from C-arm CT projections is a computationally demanding task. Hardware optimization is not an option but mandatory for interventional image processing and, in particular, for image reconstruction due to the high demands on performance. Several groups have published fast analytical 3-D reconstruction on highly parallel hardware such as GPUs to mitigate this issue. The authors show that the performance of modern CPU-based systems is in the same order as current GPUs for static 3-D reconstruction and outperforms them for a recent motion compensated (3-D+time) image reconstruction algorithm. This work investigates two algorithms: Static 3-D reconstruction as well as a recent motion compensated algorithm. The evaluation was performed using a standardized reconstruction benchmark, RABBITCT, to get comparable results and two additional clinical data sets. The authors demonstrate for a parametric B-spline motion estimation scheme that the derivative computation, which requires many write operations to memory, performs poorly on the GPU and can highly benefit from modern CPU architectures with large caches. Moreover, on a 32-core Intel Xeon server system, the authors achieve linear scaling with the number of cores used and reconstruction times almost in the same range as current GPUs. Algorithmic innovations in the field of motion compensated image reconstruction may lead to a shift back to CPUs in the future. For analytical 3-D reconstruction, the authors show that the gap between GPUs and CPUs became smaller. It can be performed in less than 20 s (on-the-fly) using a 32-core server.

  2. Coevolutionary Free Lunches

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Macready, William G.

    2005-01-01

    Recent work on the mathematical foundations of optimization has begun to uncover its rich structure. In particular, the "No Free Lunch" (NFL) theorems state that any two algorithms are equivalent when their performance is averaged across all possible problems. This highlights the need for exploiting problem-specific knowledge to achieve better than random performance. In this paper we present a general framework covering more search scenarios. In addition to the optimization scenarios addressed in the NFL results, this framework covers multi-armed bandit problems and evolution of multiple co-evolving players. As a particular instance of the latter, it covers "self-play" problems. In these problems the set of players work together to produce a champion, who then engages one or more antagonists in a subsequent multi-player game. In contrast to the traditional optimization case where the NFL results hold, we show that in self-play there are free lunches: in coevolution some algorithms have better performance than other algorithms, averaged across all possible problems. We consider the implications of these results to biology where there is no champion.

  3. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.

  4. Deploy Nalu/Kokkos algorithmic infrastructure with performance benchmarking.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Domino, Stefan P.; Ananthan, Shreyas; Knaus, Robert C.

    The former Nalu interior heterogeneous algorithm design, which was originally designed to manage matrix assembly operations over all elemental topology types, has been modified to operate over homogeneous collections of mesh entities. This newly templated kernel design allows for removal of workset variable resize operations that were formerly required at each loop over a Sierra ToolKit (STK) bucket (nominally, 512 entities in size). Extensive usage of the Standard Template Library (STL) std::vector has been removed in favor of intrinsic Kokkos memory views. In this milestone effort, the transition to Kokkos as the underlying infrastructure to support performance and portability on many-core architectures has been deployed for key matrix algorithmic kernels. A unit-test driven design effort has developed a homogeneous entity algorithm that employs a team-based thread parallelism construct. The STK Single Instruction Multiple Data (SIMD) infrastructure is used to interleave data for improved vectorization. The collective algorithm design, which allows for concurrent threading and SIMD management, has been deployed for the core low-Mach element-based algorithm. Several tests to ascertain SIMD performance on Intel KNL and Haswell architectures have been carried out. The performance test matrix includes evaluation of both low- and higher-order methods. The higher-order low-Mach methodology builds on polynomial promotion of the core low-order control volume finite element method (CVFEM). Performance testing of the Kokkos-view/SIMD design indicates low-order matrix assembly kernel speed-up ranging between two and four times depending on mesh loading and node count. Better speedups are observed for higher-order meshes (currently only P=2 has been tested) especially on KNL. The increased workload per element on higher-order meshes benefits from the wide SIMD width on KNL machines. Combining multiple threads with SIMD on KNL achieves a 4.6x speedup over the baseline, with assembly timings faster than that observed on Haswell architecture. The computational workload of higher-order meshes, therefore, seems ideally suited for the many-core architecture and justifies further exploration of higher-order on NGP platforms. A Trilinos/Tpetra-based multi-threaded GMRES preconditioned by symmetric Gauss Seidel (SGS) represents the core solver infrastructure for the low-Mach advection/diffusion implicit solves. The threaded solver stack has been tested on small problems on NREL's Peregrine system using the newly developed and deployed Kokkos-view/SIMD kernels. Efforts are underway to deploy the Tpetra-based solver stack on the NERSC Cori system to benchmark its performance at scale on KNL machines.

  5. Change detection with heterogeneous data using ecoregional stratification, statistical summaries and a land allocation algorithm

    Treesearch

    Kathleen M. Bergen; Daniel G. Brown; James F. Rutherford; Eric J. Gustafson

    2005-01-01

    A ca. 1980 national-scale land-cover classification based on aerial photo interpretation was combined with 2000 AVHRR satellite imagery to derive land cover and land-cover change information for forest, urban, and agriculture categories over a seven-state region in the U.S. To derive useful land-cover change data using a heterogeneous dataset and to validate our...

  6. [GNU Pattern: open source pattern hunter for biological sequences based on SPLASH algorithm].

    PubMed

    Xu, Ying; Li, Yi-xue; Kong, Xiang-yin

    2005-06-01

    To construct a high-performance open source software engine based on the IBM SPLASH algorithm for later research on pattern discovery. Gpat, which is based on the SPLASH algorithm, was developed using open source software. The GNU Pattern (Gpat) software was developed, efficiently implementing the core part of the SPLASH algorithm. The full source code of Gpat is also available for other researchers to modify the program under the GNU license. Gpat is a successful implementation of the SPLASH algorithm and can be used as a basic framework for later research on pattern recognition in biological sequences.

  7. Validation of SMAP surface soil moisture products with core validation sites

    USDA-ARS?s Scientific Manuscript database

    The NASA Soil Moisture Active Passive (SMAP) mission has utilized a set of core validation sites as the primary methodology in assessing the soil moisture retrieval algorithm performance. Those sites provide well-calibrated in situ soil moisture measurements within SMAP product grid pixels for diver...

  8. Minimum Covers of Fixed Cardinality in Weighted Graphs.

    ERIC Educational Resources Information Center

    White, Lee J.

    Reported is the result of research on combinatorial and algorithmic techniques for information processing. A method is discussed for obtaining minimum covers of specified cardinality from a given weighted graph. By the indicated method, it is shown that the family of minimum covers of varying cardinality is related to the minimum spanning tree of…

  9. Deconvolving instrumental and intrinsic broadening in core-shell x-ray spectroscopies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fister, T. T.; Seidler, G. T.; Rehr, J. J.

    2007-05-01

    Intrinsic and experimental mechanisms frequently lead to broadening of spectral features in core-shell spectroscopies. For example, intrinsic broadening occurs in x-ray absorption spectroscopy (XAS) measurements of heavy elements where the core-hole lifetime is very short. On the other hand, nonresonant x-ray Raman scattering (XRS) and other energy loss measurements are more limited by instrumental resolution. Here, we demonstrate that the Richardson-Lucy (RL) iterative algorithm provides a robust method for deconvolving instrumental and intrinsic resolutions from typical XAS and XRS data. For the K-edge XAS of Ag, we find nearly complete removal of approximately 9.3 eV full width at half maximum broadening from the combined effects of the short core-hole lifetime and instrumental resolution. We are also able to remove nearly all instrumental broadening in an XRS measurement of diamond, with the resulting improved spectrum comparing favorably with prior soft x-ray XAS measurements. We present a practical methodology for implementing the RL algorithm in these problems, emphasizing the importance of testing for stability of the deconvolution process against noise amplification, perturbations in the initial spectra, and uncertainties in the core-hole lifetime.
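    For reference, the Richardson-Lucy iteration mentioned above can be written in a few lines for a 1-D signal with a known broadening kernel; the Lorentzian width and iteration count below are arbitrary, and, as the authors stress, stability against noise amplification should be checked in any real application.

    ```python
    # Richardson-Lucy deconvolution sketch for a 1-D spectrum with a known kernel.
    import numpy as np

    def richardson_lucy(observed, kernel, iterations=50, eps=1e-12):
        kernel = kernel / kernel.sum()
        mirror = kernel[::-1]                       # flipped kernel for the correction step
        estimate = np.full_like(observed, observed.mean())
        for _ in range(iterations):
            blurred = np.convolve(estimate, kernel, mode="same")
            ratio = observed / np.maximum(blurred, eps)
            estimate = estimate * np.convolve(ratio, mirror, mode="same")
        return estimate

    # Toy usage: sharpen a step edge blurred by a Lorentzian of arbitrary width.
    x = np.linspace(-100, 100, 801)
    kernel = 1.0 / (x**2 + (9.3 / 2) ** 2)
    true = (x > 0).astype(float)
    observed = np.convolve(true, kernel / kernel.sum(), mode="same") + 1e-4
    recovered = richardson_lucy(observed, kernel, iterations=200)
    print("recovered values around the edge:", np.round(recovered[398:403], 3))
    ```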

  10. A scalable, fully implicit algorithm for the reduced two-field low-β extended MHD model

    DOE PAGES

    Chacon, Luis; Stanier, Adam John

    2016-12-01

    Here, we demonstrate a scalable fully implicit algorithm for the two-field low-β extended MHD model. This reduced model describes plasma behavior in the presence of strong guide fields, and is of significant practical impact both in nature and in laboratory plasmas. The model displays strong hyperbolic behavior, as manifested by the presence of fast dispersive waves, which make a fully implicit treatment very challenging. In this study, we employ a Jacobian-free Newton–Krylov nonlinear solver, for which we propose a physics-based preconditioner that renders the linearized set of equations suitable for inversion with multigrid methods. As a result, the algorithm is shown to scale both algorithmically (i.e., the iteration count is insensitive to grid refinement and timestep size) and in parallel in a weak-scaling sense, with the wall-clock time scaling weakly with the number of cores for up to 4096 cores. For a 4096 × 4096 mesh, we demonstrate a wall-clock-time speedup of ~6700 with respect to explicit algorithms. The model is validated linearly (against linear theory predictions) and nonlinearly (against fully kinetic simulations), demonstrating excellent agreement.
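    The following is not the paper's solver, only a tiny demonstration of the Jacobian-free Newton-Krylov idea using SciPy's newton_krylov, which approximates Jacobian-vector products by finite differences so that no Jacobian matrix is ever formed; the toy 1-D nonlinear diffusion residual is unrelated to the reduced extended-MHD equations.

    ```python
    # JFNK demonstration on a toy nonlinear boundary-value problem (not the paper's model).
    import numpy as np
    from scipy.optimize import newton_krylov

    N = 64
    h = 1.0 / (N + 1)

    def residual(u):
        # discrete -u'' + u**3 = 1 with homogeneous Dirichlet boundaries
        upad = np.concatenate(([0.0], u, [0.0]))
        lap = (upad[:-2] - 2 * upad[1:-1] + upad[2:]) / h**2
        return -lap + u**3 - 1.0

    # newton_krylov builds Jacobian-vector products by finite differences internally.
    solution = newton_krylov(residual, np.zeros(N), method="lgmres", f_tol=1e-10)
    print("max |residual| =", np.abs(residual(solution)).max())
    ```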

  11. Phase Transitions in Combinatorial Optimization Problems: Basics, Algorithms and Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    Hartmann, Alexander K.; Weigt, Martin

    2005-10-01

    A concise, comprehensive introduction to the topic of statistical physics of combinatorial optimization, bringing together theoretical concepts and algorithms from computer science with analytical methods from physics. The result bridges the gap between statistical physics and combinatorial optimization, investigating problems taken from theoretical computing, such as the vertex-cover problem, with the concepts and methods of theoretical physics. The authors cover rapid developments and analytical methods that are both extremely complex and spread by word-of-mouth, providing all the necessary basics in required detail. Throughout, the algorithms are shown with examples and calculations, while the proofs are given in a way suitable for graduate students, post-docs, and researchers. Ideal for newcomers to this young, multidisciplinary field.

  12. Limited distortion in LSB steganography

    NASA Astrophysics Data System (ADS)

    Kim, Younhee; Duric, Zoran; Richards, Dana

    2006-02-01

    It is well known that all information hiding methods that modify the least significant bits introduce distortions into the cover objects. Those distortions have been utilized by steganalysis algorithms to detect that the objects had been modified. It has been proposed that only coefficients whose modification does not introduce large distortions should be used for embedding. In this paper we propose an efficient algorithm for information hiding in the LSBs of JPEG coefficients. Our algorithm uses parity coding to choose the coefficients whose modifications introduce minimal additional distortion. We derive the expected value of the additional distortion as a function of the message length and the probability distribution of the JPEG quantization errors of cover images. Our experiments show close agreement between the theoretical prediction and the actual additional distortion.
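    A minimal sketch of plain LSB embedding and extraction over integer coefficients makes the per-coefficient distortion explicit; it omits the parity-coding coefficient selection that the paper proposes, since choosing which coefficients to modify is exactly what that scheme optimises.

    ```python
    # Plain LSB embed/extract over integer coefficients (illustration; no parity coding).
    import numpy as np

    def embed_lsb(coeffs, bits):
        stego = coeffs.copy()
        # clear the LSB of the first len(bits) coefficients, then write the message bits
        stego[: len(bits)] = (stego[: len(bits)] & ~1) | np.asarray(bits)
        return stego

    def extract_lsb(stego, n_bits):
        return (stego[:n_bits] & 1).tolist()

    cover = np.array([12, 7, 33, 40, 5, 18], dtype=np.int64)
    message = [1, 0, 1, 1]
    stego = embed_lsb(cover, message)
    assert extract_lsb(stego, len(message)) == message
    print("per-coefficient distortion:", np.abs(stego - cover))
    ```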

  13. LSB Based Quantum Image Steganography Algorithm

    NASA Astrophysics Data System (ADS)

    Jiang, Nan; Zhao, Na; Wang, Luo

    2016-01-01

    Quantum steganography is the technique of hiding a secret message in quantum covers such as quantum images. In this paper, two blind LSB steganography algorithms in the form of quantum circuits are proposed based on the novel enhanced quantum representation (NEQR) for quantum images. One algorithm is plain LSB, which uses the message bits to substitute for the pixels' LSBs directly. The other is block LSB, which embeds a message bit into a number of pixels that belong to one image block. The extraction circuits can recover the secret message from the stego cover alone. Analysis and simulation-based experimental results demonstrate that the invisibility is good, and that the balance between capacity and robustness can be adjusted according to the needs of applications.

  14. An Online Algorithm for Maximizing Submodular Functions

    DTIC Science & Technology

    2007-12-20

    Streeter, Matthew; Golovin, Daniel. An Online Algorithm for Maximizing Submodular Functions. CMU-CS-07-171, School of Computer Science, December 20, 2007. Abstract fragment recovered from this record: "...dynamics of the social network are known. In theory, our online algorithms could be used to adapt a marketing campaign to unknown or time-varying social..."

  15. Ion Structure Near a Core-Shell Dielectric Nanoparticle

    NASA Astrophysics Data System (ADS)

    Ma, Manman; Gan, Zecheng; Xu, Zhenli

    2017-02-01

    A generalized image charge formulation is proposed for the Green's function of a core-shell dielectric nanoparticle for which theoretical and simulation investigations are rarely reported due to the difficulty of resolving the dielectric heterogeneity. Based on the formulation, an efficient and accurate algorithm is developed for calculating electrostatic polarization charges of mobile ions, allowing us to study related physical systems using the Monte Carlo algorithm. The computer simulations show that a fine-tuning of the shell thickness or the ion-interface correlation strength can greatly alter electric double-layer structures and capacitances, owing to the complicated interplay between dielectric boundary effects and ion-interface correlations.

  16. Parallelized Seeded Region Growing Using CUDA

    PubMed Central

    Park, Seongjin; Lee, Hyunna; Seo, Jinwook; Lee, Kyoung Ho; Shin, Yeong-Gil; Kim, Bohyoung

    2014-01-01

    This paper presents a novel method for parallelizing the seeded region growing (SRG) algorithm using Compute Unified Device Architecture (CUDA) technology, with the intention of overcoming the theoretical weakness of the SRG algorithm, namely that its computation time is directly proportional to the size of the segmented region. The segmentation performance of the proposed CUDA-based SRG is compared with SRG implementations on single-core CPUs, quad-core CPUs, and shader language programming, using synthetic datasets and 20 body CT scans. Based on the experimental results, the CUDA-based SRG outperforms the other three implementations, suggesting that it can substantially assist segmentation during massive CT screening tests. PMID:25309619

  17. Crowded Cluster Cores. Algorithms for Deblending in Dark Energy Survey Images

    DOE PAGES

    Zhang, Yuanyuan; McKay, Timothy A.; Bertin, Emmanuel; ...

    2015-10-26

    Deep optical images are often crowded with overlapping objects. We found that this is especially true in the cores of galaxy clusters, where images of dozens of galaxies may lie atop one another. Accurate measurements of cluster properties require deblending algorithms designed to automatically extract a list of individual objects and decide what fraction of the light in each pixel comes from each object. In this article, we introduce a new software tool called the Gradient And Interpolation based (GAIN) deblender. GAIN is used as a secondary deblender to improve the separation of overlapping objects in galaxy cluster cores in Dark Energy Survey images. It uses image intensity gradients and an interpolation technique originally developed to correct flawed digital images. Our paper is dedicated to describing the algorithm of the GAIN deblender and its applications, but we additionally include modest tests of the software based on real Dark Energy Survey co-add images. GAIN helps to extract an unbiased photometry measurement for blended sources and improve detection completeness, while introducing few spurious detections. When applied to processed Dark Energy Survey data, GAIN serves as a useful quick fix when a high level of deblending is desired.

  18. Comparing Four Age Model Techniques using Nine Sediment Cores from the Iberian Margin

    NASA Astrophysics Data System (ADS)

    Lisiecki, L. E.; Jones, A. M.; Lawrence, C.

    2017-12-01

    Interpretations of paleoclimate records from ocean sediment cores rely on age models, which provide estimates of age as a function of core depth. Here we compare four methods used to generate age models for sediment cores for the past 140 kyr. The first method is based on radiocarbon dating using the Bayesian statistical software, Bacon [Blaauw and Christen, 2011]. The second method aligns benthic δ18O to a target core using the probabilistic alignment algorithm, HMM-Match, which also generates age uncertainty estimates [Lin et al., 2014]. The third and fourth methods are planktonic δ18O and sea surface temperature (SST) alignments to the same target core, using the alignment algorithm Match [Lisiecki and Lisiecki, 2002]. Unlike HMM-Match, Match requires parameter tuning and does not produce uncertainty estimates. The results of these four age model techniques are compared for nine high-resolution cores from the Iberian margin. The root mean square error between the individual age model results and each core's average estimated age is 1.4 kyr. Additionally, HMM-Match and Bacon age estimates agree to within uncertainty and have similar 95% confidence widths of 1-2 kyr for the highest resolution records. In one core, the planktonic and SST alignments did not fall within the 95% confidence intervals from HMM-Match. For this core, the surface proxy alignments likely produce more reliable results due to millennial-scale SST variability and the presence of several gaps in the benthic δ18O data. Similar studies of other oceanographic regions are needed to determine the spatial extents over which these climate proxies may be stratigraphically correlated.

  19. Evolution of Canada’s Boreal Forest Spatial Patterns as Seen from Space

    PubMed Central

    Pickell, Paul D.; Coops, Nicholas C.; Gergel, Sarah E.; Andison, David W.; Marshall, Peter L.

    2016-01-01

    Understanding the development of landscape patterns over broad spatial and temporal scales is a major contribution to ecological sciences and is a critical area of research for forested land management. Boreal forests represent an excellent case study for such research because these forests have undergone significant changes over recent decades. We analyzed the temporal trends of four widely-used landscape pattern indices for boreal forests of Canada: forest cover, largest forest patch index, forest edge density, and core (interior) forest cover. The indices were computed over landscape extents ranging from 5,000 ha (n = 18,185) to 50,000 ha (n = 1,662) and across nine major ecozones of Canada. We used 26 years of Landsat satellite imagery to derive annualized trends of the landscape pattern indices. The largest declines in forest cover, largest forest patch index, and core forest cover were observed in the Boreal Shield, Boreal Plain, and Boreal Cordillera ecozones. Forest edge density increased at all landscape extents for all ecozones. Rapidly changing landscapes, defined as the 90th percentile of forest cover change, were among the most forested initially and were characterized by four times greater decrease in largest forest patch index, three times greater increase in forest edge density, and four times greater decrease in core forest cover compared with all 50,000 ha landscapes. Moreover, approximately 18% of all 50,000 ha landscapes did not change due to a lack of disturbance. The pattern database results provide important context for forest management agencies committed to implementing ecosystem-based management strategies. PMID:27383055

  20. Evolution of Canada's Boreal Forest Spatial Patterns as Seen from Space.

    PubMed

    Pickell, Paul D; Coops, Nicholas C; Gergel, Sarah E; Andison, David W; Marshall, Peter L

    2016-01-01

    Understanding the development of landscape patterns over broad spatial and temporal scales is a major contribution to ecological sciences and is a critical area of research for forested land management. Boreal forests represent an excellent case study for such research because these forests have undergone significant changes over recent decades. We analyzed the temporal trends of four widely-used landscape pattern indices for boreal forests of Canada: forest cover, largest forest patch index, forest edge density, and core (interior) forest cover. The indices were computed over landscape extents ranging from 5,000 ha (n = 18,185) to 50,000 ha (n = 1,662) and across nine major ecozones of Canada. We used 26 years of Landsat satellite imagery to derive annualized trends of the landscape pattern indices. The largest declines in forest cover, largest forest patch index, and core forest cover were observed in the Boreal Shield, Boreal Plain, and Boreal Cordillera ecozones. Forest edge density increased at all landscape extents for all ecozones. Rapidly changing landscapes, defined as the 90th percentile of forest cover change, were among the most forested initially and were characterized by four times greater decrease in largest forest patch index, three times greater increase in forest edge density, and four times greater decrease in core forest cover compared with all 50,000 ha landscapes. Moreover, approximately 18% of all 50,000 ha landscapes did not change due to a lack of disturbance. The pattern database results provide important context for forest management agencies committed to implementing ecosystem-based management strategies.

  1. Superscattering of light optimized by a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mirzaei, Ali; Miroshnichenko, Andrey E.; Shadrivov, Ilya V.; Kivshar, Yuri S.

    2014-07-01

    We analyse scattering of light from multi-layer plasmonic nanowires and employ a genetic algorithm for optimizing the scattering cross section. We apply the mode-expansion method using experimental data for material parameters to demonstrate that our genetic algorithm allows designing realistic core-shell nanostructures with the superscattering effect achieved at any desired wavelength. This approach can be employed for optimizing both superscattering and cloaking at different wavelengths in the visible spectral range.

  2. An assessment of coupling algorithms for nuclear reactor core physics simulations

    DOE PAGES

    Hamilton, Steven; Berrill, Mark; Clarno, Kevin; ...

    2016-04-01

    This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods are evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Furthermore, numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.
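    To illustrate the kind of comparison being made, the sketch below applies plain Picard (fixed-point) iteration and a simple Anderson acceleration to the same toy two-variable coupling map; the map and tolerances are invented, and none of the reactor-physics operators from the paper appear here.

    ```python
    # Picard iteration vs. a simple Anderson acceleration on a toy coupled fixed-point map.
    import numpy as np

    def coupled_map(x):
        # toy contraction mimicking two coupled fields updating each other
        a, b = x
        return np.array([0.5 * np.cos(b) + 0.2, 0.5 * np.sin(a) + 0.1])

    def picard(g, x0, tol=1e-12, maxit=200):
        x = x0
        for k in range(maxit):
            xn = g(x)
            if np.linalg.norm(xn - x) < tol:
                return xn, k + 1
            x = xn
        return x, maxit

    def anderson(g, x0, m=3, tol=1e-12, maxit=200):
        x, X, F = x0, [], []                      # histories of iterates and residuals
        for k in range(maxit):
            f = g(x) - x                          # fixed-point residual
            if np.linalg.norm(f) < tol:
                return x, k + 1
            X.append(x); F.append(f)
            X, F = X[-(m + 1):], F[-(m + 1):]
            if len(F) > 1:
                dF = np.stack([F[i + 1] - F[i] for i in range(len(F) - 1)], axis=1)
                dX = np.stack([X[i + 1] - X[i] for i in range(len(X) - 1)], axis=1)
                gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)   # least-squares mixing weights
                x = x + f - (dX + dF) @ gamma
            else:
                x = x + f                         # first step: plain Picard update
        return x, maxit

    x0 = np.zeros(2)
    print("Picard iterations:  ", picard(coupled_map, x0)[1])
    print("Anderson iterations:", anderson(coupled_map, x0)[1])
    ```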

  3. An assessment of coupling algorithms for nuclear reactor core physics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Steven; Berrill, Mark; Clarno, Kevin

    This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods are evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Furthermore, numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.

  4. An assessment of coupling algorithms for nuclear reactor core physics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Steven, E-mail: hamiltonsp@ornl.gov; Berrill, Mark, E-mail: berrillma@ornl.gov; Clarno, Kevin, E-mail: clarnokt@ornl.gov

    This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods are evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.

  5. Hybrid Parallelism for Volume Rendering on Large-, Multi-, and Many-Core Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2012-01-01

    With the computing industry trending towards multi- and many-core processors, we study how a standard visualization algorithm, ray-casting volume rendering, can benefit from a hybrid parallelism approach. Hybrid parallelism provides the best of both worlds: using distributed-memory parallelism across a large numbers of nodes increases available FLOPs and memory, while exploiting shared-memory parallelism among the cores within each node ensures that each node performs its portion of the larger calculation as efficiently as possible. We demonstrate results from weak and strong scaling studies, at levels of concurrency ranging up to 216,000, and with datasets as large as 12.2 trillion cells. The greatest benefit from hybrid parallelism lies in the communication portion of the algorithm, the dominant cost at higher levels of concurrency. We show that reducing the number of participants with a hybrid approach significantly improves performance.

  6. Ground-truth collections at the MTI core sites

    NASA Astrophysics Data System (ADS)

    Garrett, Alfred J.; Kurzeja, Robert J.; Parker, Matthew J.; O'Steen, Byron L.; Pendergast, Malcolm M.; Villa-Aleman, Eliel

    2001-08-01

    The Savannah River Technology Center (SRTC) selected 13 sites across the continental US and one site in the western Pacific to serve as the primary or core site for collection of ground truth data for validation of MTI science algorithms. Imagery and ground truth data from several of these sites are presented in this paper. These sites are the Comanche Peak, Pilgrim and Turkey Point power plants, Ivanpah playas, Crater Lake, Stennis Space Center and the Tropical Western Pacific ARM site on the island of Nauru. Ground truth data includes water temperatures (bulk and skin), radiometric data, meteorological data and plant operating data. The organizations that manage these sites assist SRTC with its ground truth data collections and also give the MTI project a variety of ground truth measurements that they make for their own purposes. Collectively, the ground truth data from the 14 core sites constitute a comprehensive database for science algorithm validation.

  7. A Particle Swarm Optimization-Based Approach with Local Search for Predicting Protein Folding.

    PubMed

    Yang, Cheng-Hong; Lin, Yu-Shiun; Chuang, Li-Yeh; Chang, Hsueh-Wei

    2017-10-01

    The hydrophobic-polar (HP) model is commonly used for predicting protein folding structures and hydrophobic interactions. This study developed a particle swarm optimization (PSO)-based algorithm combined with local search algorithms; specifically, the high exploration PSO (HEPSO) algorithm (which can execute global search processes) was combined with three local search algorithms (hill-climbing algorithm, greedy algorithm, and Tabu table), yielding the proposed HE-L-PSO algorithm. By using 20 known protein structures, we evaluated the performance of the HE-L-PSO algorithm in predicting protein folding in the HP model. The proposed HE-L-PSO algorithm exhibited favorable performance in predicting both short and long amino acid sequences with high reproducibility and stability, compared with seven reported algorithms. The HE-L-PSO algorithm yielded optimal solutions for all predicted protein folding structures. All HE-L-PSO-predicted protein folding structures possessed a hydrophobic core that is similar to normal protein folding.

  8. GPU Accelerated Chemical Similarity Calculation for Compound Library Comparison

    PubMed Central

    Ma, Chao; Wang, Lirong; Xie, Xiang-Qun

    2012-01-01

    Chemical similarity calculation plays an important role in compound library design, virtual screening, and “lead” optimization. In this manuscript, we present a novel GPU-accelerated algorithm for all-vs-all Tanimoto matrix calculation and nearest neighbor search. By taking advantage of multi-core GPU architecture and CUDA parallel programming technology, the algorithm is up to 39 times faster than the existing commercial software that runs on CPUs. Because of the utilization of intrinsic GPU instructions, this approach is nearly 10 times faster than the existing GPU-accelerated sparse vector algorithm when Unity fingerprints are used for Tanimoto calculation. The GPU program that implements this new method takes about 20 minutes to complete the calculation of Tanimoto coefficients between 32M PubChem compounds and 10K Active Probes compounds, i.e., 324G Tanimoto coefficients, on a 128-CUDA-core GPU. PMID:21692447
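    For binary fingerprints the Tanimoto coefficient is T = c / (a + b - c), where a and b are the numbers of bits set in the two fingerprints and c is the number of bits they share. A minimal CPU-side NumPy sketch of the all-vs-all matrix (not the paper's CUDA kernel, and with random placeholder fingerprints) is:

```python
import numpy as np

def tanimoto_matrix(A, B):
    """All-vs-all Tanimoto coefficients for two sets of binary fingerprints.

    A : (n, k) boolean array, B : (m, k) boolean array -> (n, m) matrix.
    """
    A = A.astype(np.int32)
    B = B.astype(np.int32)
    common = A @ B.T                          # c: shared "on" bits for every pair
    a = A.sum(axis=1)[:, None]                # bits set in each fingerprint of A
    b = B.sum(axis=1)[None, :]                # bits set in each fingerprint of B
    return common / (a + b - common)

# Placeholder data: 1000 vs 500 random 1024-bit fingerprints
rng = np.random.default_rng(0)
A = rng.random((1000, 1024)) < 0.1
B = rng.random((500, 1024)) < 0.1
print(tanimoto_matrix(A, B).shape)
```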

  9. Baseline Design Compliance Matrix for the Rotary Mode Core Sampling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LECHELT, J.A.

    2000-10-17

    The purpose of the design compliance matrix (DCM) is to provide a single-source document of all design requirements associated with the fifteen subsystems that make up the rotary mode core sampling (RMCS) system. It is intended to be the baseline requirement document for the RMCS system and to be used in governing all future design and design verification activities associated with it. This document is the DCM for the RMCS system used on Hanford single-shell radioactive waste storage tanks. This includes the Exhauster System, Rotary Mode Core Sample Trucks, Universal Sampling System, Diesel Generator System, Distribution Trailer, X-Ray Cart System, Breathing Air Compressor, Nitrogen Supply Trailer, Casks and Cask Truck, Service Trailer, Core Sampling Riser Equipment, Core Sampling Support Trucks, Foot Clamp, Ramps and Platforms and Purged Camera System. Excluded items are tools such as light plants and light stands. Other items such as the breather inlet filter are covered by a different design baseline. In this case, the inlet breather filter is covered by the Tank Farms Design Compliance Matrix.

  10. Metaphor Identification in Large Texts Corpora

    PubMed Central

    Neuman, Yair; Assaf, Dan; Cohen, Yohai; Last, Mark; Argamon, Shlomo; Howard, Newton; Frieder, Ophir

    2013-01-01

    Identifying metaphorical language-use (e.g., sweet child) is one of the challenges facing natural language processing. This paper describes three novel algorithms for automatic metaphor identification. The algorithms are variations of the same core algorithm. We evaluate the algorithms on two corpora of Reuters and New York Times articles. The paper presents the most comprehensive study of metaphor identification in terms of the scope of metaphorical phrases and annotated corpora size. The algorithms’ performance in identifying linguistic phrases as metaphorical or literal has been compared to human judgment. Overall, the algorithms outperform the state-of-the-art algorithm with 71% precision and 27% averaged improvement in prediction over the base-rate of metaphors in the corpus. PMID:23658625

  11. Expert system for identification of simultaneous and sequential reactor fuel failures with gas tagging

    DOEpatents

    Gross, K.C.

    1994-07-26

    Failure of a fuel element in a nuclear reactor core is determined by a gas tagging failure detection system and method. So that failures can be catalogued and characterized after the event, samples of the reactor's cover gas are taken at regular intervals and analyzed by mass spectroscopy. Employing a first set of systematic heuristic rules which are applied in a transformed node space allows the number of node combinations which must be processed within a barycentric algorithm to be substantially reduced. A second set of heuristic rules treats the tag nodes of the most recent one or two leakers as "background" gases, further reducing the number of trial node combinations. Lastly, a "fuzzy" set theory formalism minimizes experimental uncertainties in the identification of the most likely volumes of tag gases. This approach allows for the identification of virtually any number of sequential leaks and up to five simultaneous gas leaks from fuel elements. 14 figs.

  12. Development of the Landsat Data Continuity Mission Cloud Cover Assessment Algorithms

    USGS Publications Warehouse

    Scaramuzza, Pat; Bouchard, M.A.; Dwyer, John L.

    2012-01-01

    The upcoming launch of the Operational Land Imager (OLI) will start the next era of the Landsat program. However, the Automated Cloud-Cover Assessment (CCA) algorithm (ACCA) used on Landsat 7 requires a thermal band and is thus not suited for OLI. There will be a thermal instrument on the Landsat Data Continuity Mission (LDCM), the Thermal Infrared Sensor, which may not be available during all OLI collections. This illustrates a need for CCA for LDCM in the absence of thermal data. To research possibilities for full-resolution OLI cloud assessment, a global data set of 207 Landsat 7 scenes with manually generated cloud masks was created. It was used to evaluate the ACCA algorithm, showing that the algorithm correctly classified 79.9% of a standard test subset of 3.95 × 10⁹ pixels. The data set was also used to develop and validate two successor algorithms for use with OLI data: one derived from an off-the-shelf machine learning package and one based on ACCA but enhanced by a simple neural network. These comprehensive CCA algorithms were shown to correctly classify pixels as cloudy or clear 88.5% and 89.7% of the time, respectively.

  13. Temporal expansion of annual crop classification layers for the CONUS using the C5 decision tree classifier

    USGS Publications Warehouse

    Friesz, Aaron M.; Wylie, Bruce K.; Howard, Daniel M.

    2017-01-01

    Crop cover maps have become widely used in a range of research applications. Multiple crop cover maps have been developed to suit particular research interests. The National Agricultural Statistics Service (NASS) Cropland Data Layers (CDL) are a series of commonly used crop cover maps for the conterminous United States (CONUS) that span from 2008 to 2013. In this investigation, we sought to contribute to the availability of consistent CONUS crop cover maps by extending the temporal coverage of the NASS CDL archive back eight additional years to 2000 by creating annual NASS CDL-like crop cover maps derived from a classification tree model algorithm. We used over 11 million records to train a classification tree algorithm and develop a crop classification model (CCM). The model was used to create crop cover maps for the CONUS for years 2000–2013 at 250 m spatial resolution. The CCM and the maps for years 2008–2013 were assessed for accuracy relative to resampled NASS CDLs. The CCM performed well against a withheld test data set with a model prediction accuracy of over 90%. The assessment of the crop cover maps indicated that the model performed well spatially, placing crop cover pixels within their known domains; however, the model did show a bias towards the ‘Other’ crop cover class, which caused frequent misclassifications of pixels around the periphery of large crop cover patch clusters and of pixels that form small, sparsely dispersed crop cover patches.
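    The CCM is a classification tree trained on tabular records and then applied per pixel. Since C5 itself is a commercial package, the sketch below uses scikit-learn's CART-style DecisionTreeClassifier as a stand-in, with synthetic predictor variables and crop-class labels as placeholders; it only illustrates the train/withhold/assess workflow described above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Placeholder training table: rows = records, columns = predictor variables
# (e.g., phenology metrics); labels = crop cover classes (0 = 'Other').
rng = np.random.default_rng(0)
X = rng.normal(size=(50_000, 12))
y = rng.integers(0, 5, size=50_000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# CART tree standing in for the C5 classifier used in the paper
ccm = DecisionTreeClassifier(max_depth=12, min_samples_leaf=50, random_state=0)
ccm.fit(X_train, y_train)

# Accuracy against the withheld test set (the paper reports over 90% for the real CCM;
# random placeholder data will of course not reproduce that number)
print("withheld-test accuracy:", accuracy_score(y_test, ccm.predict(X_test)))
```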

  14. Prediction and Optimization of Key Performance Indicators in the Production of Stator Core Using a GA-NN Approach

    NASA Astrophysics Data System (ADS)

    Rajora, M.; Zou, P.; Xu, W.; Jin, L.; Chen, W.; Liang, S. Y.

    2017-12-01

    With the rapidly changing demands of the manufacturing market, intelligent techniques are being used to solve engineering problems due to their ability to handle nonlinear complex problems. For example, the conventional production of stator cores relies upon experienced engineers to make an initial plan on the number of compensation sheets to be added to achieve uniform pressure distribution throughout the laminations. Additionally, these engineers must use their experience to revise the initial plans based upon the measurements made during production of the stator core. However, this method yields inconsistent results, as humans are incapable of storing and analysing large amounts of data. In this article, first, a Neural Network (NN), trained using a hybrid Levenberg-Marquardt (LM)-Genetic Algorithm (GA) scheme, is developed to assist the engineers with the decision-making process. Next, the trained NN is used as a fitness function in an optimization algorithm to find the optimal values of the initial compensation sheet plan with the aim of minimizing the required revisions during the production of the stator core.
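    The key idea is to use the trained NN as a surrogate fitness function inside an optimizer. The sketch below is a minimal genetic algorithm whose fitness calls a surrogate; the encoding of a compensation-sheet plan as an integer vector and the placeholder surrogate_nn function (standing in for the trained LM-GA network) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
N_SECTIONS, MAX_SHEETS, POP, GENERATIONS = 8, 5, 40, 60

def surrogate_nn(plan):
    """Placeholder for the trained NN: predicts a pressure-nonuniformity score
    (lower is better) for a compensation-sheet plan."""
    target = np.array([2, 3, 1, 4, 2, 3, 1, 2])          # illustrative optimum
    return float(np.sum((plan - target) ** 2))

def evolve():
    pop = rng.integers(0, MAX_SHEETS + 1, size=(POP, N_SECTIONS))
    for _ in range(GENERATIONS):
        fitness = np.array([surrogate_nn(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: POP // 2]]    # keep the best half
        children = []
        for _ in range(POP - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N_SECTIONS)
            child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
            if rng.random() < 0.3:                        # mutation
                child[rng.integers(N_SECTIONS)] = rng.integers(0, MAX_SHEETS + 1)
            children.append(child)
        pop = np.vstack([parents, children])
    return pop[np.argmin([surrogate_nn(ind) for ind in pop])]

print("best initial plan found:", evolve())
```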

  15. 2 × 2 MIMO OFDM/OQAM radio signals over an elliptical core few-mode fiber.

    PubMed

    Mo, Qi; He, Jiale; Yu, Dawei; Deng, Lei; Fu, Songnian; Tang, Ming; Liu, Deming

    2016-10-01

    We experimentally demonstrate a 4.46 Gb/s 2 × 2 multi-input multi-output (MIMO) orthogonal frequency division multiplexing (OFDM)/OQAM radio signal over a 2 km elliptical core 3-mode fiber, together with 0.4 m wireless transmission. Meanwhile, to cope with differential channel delay (DCD) among the involved MIMO channels, we propose a time-offset crosstalk cancellation algorithm to extend the DCD tolerance from 10 to 60 ns without using a cyclic prefix (CP), leading to an 18.7% improvement of spectral efficiency. For the purpose of comparison, we also examine the transmission performance of CP-OFDM signals with different lengths of CPs, under the same system configuration. The proposed algorithm is also effective for the DCD compensation of a radio signal over a 2 km 7-core fiber. These results not only demonstrate the feasibility of space division multiplexing for RoF application but also validate that the elliptical core few-mode fiber can provide the same independent channels as the multicore fiber.

  16. Dual Super-Systolic Core for Real-Time Reconstructive Algorithms of High-Resolution Radar/SAR Imaging Systems

    PubMed Central

    Atoche, Alejandro Castillo; Castillo, Javier Vázquez

    2012-01-01

    A high-speed dual super-systolic core for reconstructive signal processing (SP) operations consists of a double parallel systolic array (SA) machine in which each processing element of the array is also conceptualized as another SA in a bit-level fashion. In this study, we addressed the design of a high-speed dual super-systolic array (SSA) core for the enhancement/reconstruction of remote sensing (RS) imaging of radar/synthetic aperture radar (SAR) sensor systems. The selected reconstructive SP algorithms are efficiently transformed into their parallel representation and then mapped onto an efficient high performance embedded computing (HPEC) architecture on reconfigurable Xilinx field programmable gate array (FPGA) platforms. As an implementation test case, the proposed approach was aggregated in a HW/SW co-design scheme in order to solve the nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) from a remotely sensed scene. We show how such a dual SSA core drastically reduces the computational load of complex RS regularization techniques, achieving the required real-time operational mode. PMID:22736964

  17. A new JPEG-based steganographic algorithm for mobile devices

    NASA Astrophysics Data System (ADS)

    Agaian, Sos S.; Cherukuri, Ravindranath C.; Schneider, Erik C.; White, Gregory B.

    2006-05-01

    Currently, cellular phones constitute a significant portion of the global telecommunications market. Modern cellular phones offer sophisticated features such as Internet access, on-board cameras, and expandable memory which provide these devices with excellent multimedia capabilities. Because of the high volume of cellular traffic, as well as the ability of these devices to transmit nearly all forms of data, the need for an increased level of security in wireless communications is becoming a growing concern. Steganography could provide a solution to this important problem. In this article, we present a new algorithm for JPEG-compressed images which is applicable to mobile platforms. This algorithm embeds sensitive information into quantized discrete cosine transform coefficients obtained from the cover JPEG. These coefficients are rearranged based on certain statistical properties and the inherent processing and memory constraints of mobile devices. Based on the energy variation and block characteristics of the cover image, the sensitive data is hidden by using a switching embedding technique proposed in this article. The proposed system offers high capacity while simultaneously withstanding visual and statistical attacks. Based on simulation results, the proposed method demonstrates an improved retention of first-order statistics when compared to existing JPEG-based steganographic algorithms, while maintaining a capacity which is comparable to F5 for certain cover images.
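    For context, JPEG-domain steganography hides payload bits in the quantized DCT coefficients of the cover image. The sketch below is a deliberately simple LSB-style embedding into nonzero AC coefficients; it is not the switching/rearrangement technique proposed in the article, and the 8x8 coefficient blocks are assumed to come from an external JPEG codec.

```python
import numpy as np

def embed_bits(quantized_blocks, bits):
    """Embed payload bits into the LSBs of nonzero AC coefficients.

    quantized_blocks : int array (n_blocks, 8, 8) of quantized DCT coefficients.
    Zero coefficients and the DC term are skipped so the run-length structure
    is preserved. Note: flipping the LSB of a +/-1 coefficient can create a new
    zero; real schemes such as F5 handle that shrinkage case explicitly.
    """
    stego = quantized_blocks.copy()
    bit_iter = iter(bits)
    for block in stego:
        flat = block.reshape(-1)
        for i in range(1, flat.size):            # index 0 is the DC coefficient
            if flat[i] == 0:
                continue
            try:
                bit = next(bit_iter)
            except StopIteration:
                return stego                      # payload fully embedded
            flat[i] = (flat[i] & ~1) | bit        # overwrite the LSB
    raise ValueError("cover image too small for this payload")

# Illustrative cover blocks and payload
rng = np.random.default_rng(0)
blocks = rng.integers(-20, 21, size=(64, 8, 8))
payload = rng.integers(0, 2, size=200)
stego = embed_bits(blocks, payload)
```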

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon, Luis; Stanier, Adam John

    Here, we demonstrate a scalable fully implicit algorithm for the two-field low-β extended MHD model. This reduced model describes plasma behavior in the presence of strong guide fields, and is of significant practical impact both in nature and in laboratory plasmas. The model displays strong hyperbolic behavior, as manifested by the presence of fast dispersive waves, which make a fully implicit treatment very challenging. In this study, we employ a Jacobian-free Newton–Krylov nonlinear solver, for which we propose a physics-based preconditioner that renders the linearized set of equations suitable for inversion with multigrid methods. As a result, the algorithm is shown to scale both algorithmically (i.e., the iteration count is insensitive to grid refinement and timestep size) and in parallel in a weak-scaling sense, with the wall-clock time scaling weakly with the number of cores for up to 4096 cores. For a 4096 × 4096 mesh, we demonstrate a wall-clock-time speedup of ~6700 with respect to explicit algorithms. The model is validated linearly (against linear theory predictions) and nonlinearly (against fully kinetic simulations), demonstrating excellent agreement.
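    At the heart of JFNK is the matrix-free Jacobian-vector product Jv ≈ (F(u + εv) − F(u))/ε, which lets a Krylov method invert the linearized system without ever forming the Jacobian. The sketch below uses the generic finite-difference approximation with SciPy's GMRES on an illustrative discretized nonlinear diffusion-reaction system; it includes neither the physics-based preconditioner described above nor the multigrid inversion.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Illustrative nonlinear residual: 1D Laplacian stencil plus a cubic reaction term
n = 20
A = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
u_true = np.sin(np.linspace(0.0, np.pi, n))
b = A @ u_true + u_true**3                      # manufactured right-hand side

def F(u):
    return A @ u + u**3 - b

def jfnk(F, u0, tol=1e-8, maxit=50, eps=1e-7):
    u = np.asarray(u0, dtype=float)
    for _ in range(maxit):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        # Matrix-free Jacobian-vector product: J v ~ (F(u + eps v) - F(u)) / eps
        jv = lambda v: (F(u + eps * v) - r) / eps
        J = LinearOperator((u.size, u.size), matvec=jv)
        du, _ = gmres(J, -r)                    # Krylov solve of J du = -F(u)
        u = u + du
    return u

u = jfnk(F, np.zeros(n))
print("max error vs manufactured solution:", np.max(np.abs(u - u_true)))
```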

  19. New Features of the Collection 4 MODIS LAI and FPAR Product

    NASA Astrophysics Data System (ADS)

    Bin, T.; Yang, W.; Dong, H.; Shabanov, N.; Knyazikhin, Y.; Myneni, R.

    2003-12-01

    An algorithm based on the physics of radiative transfer in vegetation canopies for the retrieval of vegetation green leaf area index (LAI) and fraction of absorbed photosynthetically active radiation (FPAR) from MODIS surface reflectance data was developed, prototyped, and has been in operational production at NASA computing facilities since June 2000. This poster highlights recent changes in the operational MODIS LAI and FPAR algorithm introduced for Collection 4 data reprocessing. The changes to the algorithm are targeted to improve agreement of retrieved LAI and FPAR with corresponding field measurements, improve consistency of Quality Control (QC) definitions, and fix miscellaneous bugs, as summarized below.
    * Improved LUTs for the main and back-up algorithms for biomes 1 and 3. Benefits: (a) increase in quality of retrievals; (b) non-physical peaks in the global LAI distribution have been removed; (c) improved agreement with field measurements.
    * Improved QA scheme. Benefits: (a) consistency between MODLAND and SCF quality flags has been achieved; (b) ambiguity in QA has been resolved.
    * New 8-day compositing scheme. Benefits: (a) compositing over best-quality retrievals instead of all retrievals; (b) lower LAI values, decreased saturation, and fewer pixels generated by the back-up algorithm.
    * The at-launch static IGBP land cover, input to the LAI/FPAR algorithm, was replaced with the MODIS land cover map. Benefits: (a) crosswalking of the 17-class IGBP scheme to the 6-biome land cover scheme has been eliminated; (b) uncertainties in the MODIS LAI/FPAR product due to uncertainties in the land cover map have been reduced.

  20. Derived crop management data for the LandCarbon Project

    USGS Publications Warehouse

    Schmidt, Gail; Liu, Shu-Guang; Oeding, Jennifer

    2011-01-01

    The LandCarbon project is assessing potential carbon pools and greenhouse gas fluxes under various scenarios and land management regimes to provide information to support the formulation of policies governing climate change mitigation, adaptation and land management strategies. The project is unique in that spatially explicit maps of annual land cover and land-use change are created at the 250-meter pixel resolution. The project uses vast amounts of data as input to the models, including satellite, climate, land cover, soil, and land management data. Management data have been obtained from the U.S. Department of Agriculture (USDA) National Agricultural Statistics Service (NASS) and USDA Economic Research Service (ERS) that provides information regarding crop type, crop harvesting, manure, fertilizer, tillage, and cover crop (U.S. Department of Agriculture, 2011a, b, c). The LandCarbon team queried the USDA databases to pull historic crop-related management data relative to the needs of the project. The data obtained was in table form with the County or State Federal Information Processing Standard (FIPS) and the year as the primary and secondary keys. Future projections were generated for the A1B, A2, B1, and B2 Intergovernmental Panel on Climate Change (IPCC) Special Report on Emissions Scenarios (SRES) scenarios using the historic data values along with coefficients generated by the project. The PBL Netherlands Environmental Assessment Agency (PBL) Integrated Model to Assess the Global Environment (IMAGE) modeling framework (Integrated Model to Assess the Global Environment, 2006) was used to develop coefficients for each IPCC SRES scenario, which were applied to the historic management data to produce future land management practice projections. The LandCarbon project developed algorithms for deriving gridded data, using these tabular management data products as input. The derived gridded crop type, crop harvesting, manure, fertilizer, tillage, and cover crop products are used as input to the LandCarbon models to represent the historic and the future scenario management data. The overall algorithm to generate each of the gridded management products is based on the land cover and the derived crop type. For each year in the land cover dataset, the algorithm loops through each 250-meter pixel in the ecoregion. If the current pixel in the land cover dataset is an agriculture pixel, then the crop type is determined. Once the crop type is derived, then the crop harvest, manure, fertilizer, tillage, and cover crop values are derived independently for that crop type. The following is the overall algorithm used for the set of derived grids. The specific algorithm to generate each management dataset is discussed in the respective section for that dataset, along with special data handling and a description of the output product.
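    A minimal sketch of the per-pixel derivation described above follows. The land cover codes, the lookup-table structure, and the helper derive_crop_type are hypothetical placeholders used only to make the loop concrete; the real LandCarbon implementation and its table formats are documented in the respective sections of the report.

```python
import numpy as np

AGRICULTURE = 1   # land cover code assumed for illustration

def derive_management_grids(land_cover, county_fips, year, usda_tables, derive_crop_type):
    """Per-pixel derivation of gridded crop management data for one year.

    land_cover   : 2-D array of land cover codes (250 m grid)
    county_fips  : 2-D array giving the county FIPS code of each pixel
    usda_tables  : dict keyed by (fips, year, crop, variable) -> tabular USDA value
    derive_crop_type : hypothetical function assigning a crop type to a pixel
    """
    shape = land_cover.shape
    variables = ("harvest", "manure", "fertilizer", "tillage", "cover_crop")
    grids = {v: np.zeros(shape) for v in ("crop_type",) + variables}
    for i in range(shape[0]):
        for j in range(shape[1]):
            if land_cover[i, j] != AGRICULTURE:
                continue                                   # non-agriculture pixels skipped
            crop = derive_crop_type(i, j, year)            # e.g., corn, soybeans, ...
            grids["crop_type"][i, j] = crop
            fips = county_fips[i, j]
            for var in variables:
                # County/state tabular value for this crop, year, and variable
                grids[var][i, j] = usda_tables.get((fips, year, crop, var), 0.0)
    return grids
```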

  1. Tractable flux-driven temperature, density, and rotation profile evolution with the quasilinear gyrokinetic transport model QuaLiKiz

    NASA Astrophysics Data System (ADS)

    Citrin, J.; Bourdelle, C.; Casson, F. J.; Angioni, C.; Bonanomi, N.; Camenen, Y.; Garbet, X.; Garzotti, L.; Görler, T.; Gürcan, O.; Koechl, F.; Imbeaux, F.; Linder, O.; van de Plassche, K.; Strand, P.; Szepesi, G.; Contributors, JET

    2017-12-01

    Quasilinear turbulent transport models are a successful tool for prediction of core tokamak plasma profiles in many regimes. Their success hinges on the reproduction of local nonlinear gyrokinetic fluxes. We focus on significant progress in the quasilinear gyrokinetic transport model QuaLiKiz (Bourdelle et al 2016 Plasma Phys. Control. Fusion 58 014036), which employs an approximated solution of the mode structures to significantly speed up computation time compared to full linear gyrokinetic solvers. Optimisation of the dispersion relation solution algorithm within integrated modelling applications leads to flux calculations 10⁶-10⁷ times faster than local nonlinear simulations. This allows tractable simulation of flux-driven dynamic profile evolution including all transport channels: ion and electron heat, main particles, impurities, and momentum. Furthermore, QuaLiKiz now includes the impact of rotation and temperature anisotropy induced poloidal asymmetry on heavy impurity transport, important for W-transport applications. Application within the JETTO integrated modelling code results in 1 s of JET plasma simulation within 10 h using 10 CPUs. Simultaneous predictions of core density, temperature, and toroidal rotation profiles for both JET hybrid and baseline experiments are presented, covering both ion and electron turbulence scales. The simulations are successfully compared to measured profiles, with agreement mostly in the 5%-25% range according to standard figures of merit. QuaLiKiz is now open source and available at www.qualikiz.com.

  2. Effective user guidance in online interactive semantic segmentation

    NASA Astrophysics Data System (ADS)

    Petersen, Jens; Bendszus, Martin; Debus, Jürgen; Heiland, Sabine; Maier-Hein, Klaus H.

    2017-03-01

    With the recent success of machine learning based solutions for automatic image parsing, the availability of reference image annotations for algorithm training is one of the major bottlenecks in medical image segmentation. We are interested in interactive semantic segmentation methods that can be used in an online fashion to generate expert segmentations. These can be used to train automated segmentation techniques or, from an application perspective, for quick and accurate tumor progression monitoring. Using simulated user interactions in a MRI glioblastoma segmentation task, we show that if the user possesses knowledge of the correct segmentation it is significantly (p <= 0.009) better to present data and current segmentation to the user in such a manner that they can easily identify falsely classified regions compared to guiding the user to regions where the classifier exhibits high uncertainty, resulting in differences of mean Dice scores between +0.070 (Whole tumor) and +0.136 (Tumor Core) after 20 iterations. The annotation process should cover all classes equally, which results in a significant (p <= 0.002) improvement compared to completely random annotations anywhere in falsely classified regions for small tumor regions such as the necrotic tumor core (mean Dice +0.151 after 20 it.) and non-enhancing abnormalities (mean Dice +0.069 after 20 it.). These findings provide important insights for the development of efficient interactive segmentation systems and user interfaces.

  3. Hail detection algorithm for the Global Precipitation Measuring mission core satellite sensors

    NASA Astrophysics Data System (ADS)

    Mroz, Kamil; Battaglia, Alessandro; Lang, Timothy J.; Tanelli, Simone; Cecil, Daniel J.; Tridon, Frederic

    2017-04-01

    By exploiting an abundant number of extreme storms observed simultaneously by the Global Precipitation Measurement (GPM) mission core satellite's suite of sensors and by the ground-based S-band Next-Generation Radar (NEXRAD) network over the continental US, proxies for the identification of hail are developed based on the GPM core satellite observables. The full capabilities of the GPM observatory are tested by analyzing more than twenty observables and adopting the hydrometeor classification based on ground-based polarimetric measurements as truth. The proxies have been tested using the Critical Success Index (CSI) as a verification measure. The hail detection algorithm based on the mean Ku reflectivity in the mixed-phase layer performs the best out of all considered proxies (CSI of 45%). Outside the Dual-frequency Precipitation Radar (DPR) swath, the Polarization Corrected Temperature at 18.7 GHz shows the greatest potential for hail detection among all GMI channels (CSI of 26% at a threshold value of 261 K). When dual-variable proxies are considered, the combination involving the mixed-phase reflectivity values at both Ku- and Ka-bands outperforms all the other proxies, with a CSI of 49%. The best-performing radar-radiometer algorithm is based on the mixed-phase reflectivity at Ku-band and on the brightness temperature (TB) at 10.7 GHz (CSI of 46%). When only radiometric data are available, the algorithm based on the TBs at 36.6 and 166 GHz is the most efficient, with a CSI of 27.5%.
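    The verification measure used throughout is the Critical Success Index, CSI = hits / (hits + misses + false alarms). The snippet below scores a single-threshold proxy (mean mixed-phase Ku reflectivity above a dBZ value) against a hail/no-hail truth mask; the synthetic storm sample and the threshold sweep are placeholders, not GPM or NEXRAD data.

```python
import numpy as np

def critical_success_index(predicted, observed):
    """CSI = hits / (hits + misses + false alarms) for boolean hail masks."""
    hits = np.sum(predicted & observed)
    misses = np.sum(~predicted & observed)
    false_alarms = np.sum(predicted & ~observed)
    return hits / (hits + misses + false_alarms)

# Placeholder data: per-storm mean Ku reflectivity in the mixed-phase layer (dBZ)
# and a ground-based hail truth flag for the same storms.
rng = np.random.default_rng(0)
ku_mixed_phase_dbz = rng.normal(40, 8, size=5000)
hail_truth = ku_mixed_phase_dbz + rng.normal(0, 6, size=5000) > 48

best = max((critical_success_index(ku_mixed_phase_dbz >= t, hail_truth), t)
           for t in np.arange(30, 60, 0.5))
print("best CSI %.2f at threshold %.1f dBZ" % best)
```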

  4. MACCS : Multi-Mission Atmospheric Correction and Cloud Screening tool for high-frequency revisit data processing

    NASA Astrophysics Data System (ADS)

    Petrucci, B.; Huc, M.; Feuvrier, T.; Ruffel, C.; Hagolle, O.; Lonjou, V.; Desjardins, C.

    2015-10-01

    For the production of Level-2A products during Sentinel-2 commissioning in the Technical Expertise Center Sentinel-2 at CNES, CESBIO proposed to adapt the Venus Level-2 processor, taking advantage of the similarities between the two missions: image acquisition at a high frequency (2 days for Venus, 5 days with the two Sentinel-2 satellites), high resolution (5 m for Venus; 10, 20 and 60 m for Sentinel-2), and image acquisition under constant viewing conditions. The Multi-Mission Atmospheric Correction and Cloud Screening (MACCS) tool was born: based on the CNES Orfeo Toolbox library, the Venμs processor, which was already able to process Formosat2 and VENμS data, was adapted to process Sentinel-2 and Landsat5-7 data. Since then, a great effort has been made to review the MACCS software architecture in order to ease the addition of new missions that also acquire images at high resolution, with high revisit, and under constant viewing angles, such as Spot4/Take5 and Landsat8. The recursive, multi-temporal algorithm is implemented in a core that is the same for all the sensors and that combines several processing steps: estimation of cloud cover, cloud shadow, water, snow and shadow masks, of water vapor content and aerosol optical thickness, and atmospheric correction. This core is accessed via a number of plug-ins where the specificities of the sensor and of the user project are taken into account: product formats, algorithmic processing chaining, and parameters. After a presentation of the MACCS architecture and functionalities, the paper will give an overview of the production facilities integrating MACCS and the associated specificities: interest in this tool has grown worldwide, and MACCS will be used for extensive production within the THEIA land data center and the Agri-S2 project. Finally, the paper will focus on the use of MACCS during the Sentinel-2 In-Orbit Test phase, showing the first Level-2A products.

  5. Mold with improved core for metal casting operation

    DOEpatents

    Gritzner, Verne B.; Hackett, Donald W.

    1977-01-01

    The present invention is directed to a mold containing an improved core for use in casting hollow, metallic articles. The core is formed of, or covered with, a layer of cellular material which possesses sufficient strength to maintain its structural integrity during casting, but will crush to alleviate the internal stresses that build up if the normal contraction during solidification and cooling is restricted.

  6. High-performance sparse matrix-matrix products on Intel KNL and multicore architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagasaka, Y; Matsuoka, S; Azad, A

    Sparse matrix-matrix multiplication (SpGEMM) is a computational primitive that is widely used in areas ranging from traditional numerical applications to recent big data analysis and machine learning. Although many SpGEMM algorithms have been proposed, hardware-specific optimizations for multi- and many-core processors are lacking, and a detailed analysis of their performance under various use cases and matrices is not available. We first identify and mitigate multiple bottlenecks with memory management and thread scheduling on Intel Xeon Phi (Knights Landing or KNL). Specifically targeting multi- and many-core processors, we develop a hash-table-based algorithm and optimize a heap-based shared-memory SpGEMM algorithm. We examine their performance together with other publicly available codes. Different from the literature, our evaluation also includes use cases that are representative of real graph algorithms, such as multi-source breadth-first search or triangle counting. Our hash-table-based and heap-based algorithms show significant speedups over existing libraries in the majority of the cases, while different algorithms dominate the other scenarios depending on matrix size, sparsity, compression factor, and operation type. We summarize the in-depth evaluation results and provide a recipe for choosing the best SpGEMM algorithm for a target scenario. A critical finding is that hash-table-based SpGEMM gets a significant performance boost if the nonzeros are not required to be sorted within each row of the output matrix.
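    For reference, the core of a hash-based SpGEMM is Gustavson's row-by-row formulation: for each row i of A, the partial products A[i,k]·B[k,:] are accumulated into a per-row hash table keyed by column index. The sketch below uses a plain Python dict as the hash accumulator over SciPy CSR arrays; it illustrates only the accumulation pattern, not the vectorized, thread-scheduled kernels evaluated in the paper. The per-row sort is exactly the step the paper notes can be skipped when sorted output is not required.

```python
import numpy as np
import scipy.sparse as sp

def spgemm_hash(A, B):
    """Row-wise SpGEMM C = A @ B with a dict as the per-row hash accumulator."""
    A, B = A.tocsr(), B.tocsr()
    indptr, indices, data = [0], [], []
    for i in range(A.shape[0]):
        acc = {}                                        # column index -> accumulated value
        for idx in range(A.indptr[i], A.indptr[i + 1]):
            k, a_ik = A.indices[idx], A.data[idx]
            for jdx in range(B.indptr[k], B.indptr[k + 1]):
                j = B.indices[jdx]
                acc[j] = acc.get(j, 0.0) + a_ik * B.data[jdx]
        cols = sorted(acc)                              # optional sort of each output row
        indices.extend(cols)
        data.extend(acc[j] for j in cols)
        indptr.append(len(indices))
    return sp.csr_matrix((data, indices, indptr), shape=(A.shape[0], B.shape[1]))

A = sp.random(200, 300, density=0.02, format="csr", random_state=0)
B = sp.random(300, 150, density=0.02, format="csr", random_state=1)
C = spgemm_hash(A, B)
assert np.allclose(C.toarray(), (A @ B).toarray())
```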

  7. Linear Subpixel Learning Algorithm for Land Cover Classification from WELD using High Performance Computing

    NASA Technical Reports Server (NTRS)

    Kumar, Uttam; Nemani, Ramakrishna R.; Ganguly, Sangram; Kalia, Subodh; Michaelis, Andrew

    2017-01-01

    In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer volume of data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water and urban areas (with NPP-VIIRS, the National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite, nighttime lights data) over California, USA using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91 percent was achieved, which is a 6 percent improvement in unmixing-based classification relative to per-pixel-based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial-resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and for societal and policy-relevant applications needed at the watershed scale.
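    Per pixel, fully constrained least-squares unmixing solves min ||E a − x||² subject to a ≥ 0 and sum(a) = 1, where the columns of E are the substrate, vegetation, and dark-object endmember spectra. One common approximation appends a heavily weighted sum-to-one row and solves with non-negative least squares; the sketch below uses that trick with SciPy's nnls. The endmember spectra and pixel values are placeholders, and this is a generic FCLS formulation rather than the NEX processing chain used in the paper.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(E, x, delta=1e3):
    """Abundances a >= 0 with sum(a) ~ 1 that minimize ||E a - x||.

    E : (bands, endmembers) endmember matrix, x : (bands,) pixel spectrum.
    The sum-to-one constraint is enforced softly via a heavily weighted extra row.
    """
    E_aug = np.vstack([E, delta * np.ones((1, E.shape[1]))])
    x_aug = np.append(x, delta)
    a, _ = nnls(E_aug, x_aug)
    return a

# Placeholder endmembers (substrate, vegetation, dark object) and a mixed pixel
rng = np.random.default_rng(0)
E = np.abs(rng.normal(size=(6, 3)))            # 6 spectral bands, 3 endmembers
true_a = np.array([0.5, 0.3, 0.2])
x = E @ true_a + rng.normal(0.0, 0.01, size=6)
print(fcls_unmix(E, x))                        # approximately [0.5, 0.3, 0.2]
```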

  8. Linear Subpixel Learning Algorithm for Land Cover Classification from WELD using High Performance Computing

    NASA Astrophysics Data System (ADS)

    Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.

    2017-12-01

    In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer volume of data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water and urban areas (with NPP-VIIRS, the National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite, nighttime lights data) over California, USA using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, which is a 6% improvement in unmixing-based classification relative to per-pixel-based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial-resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and for societal and policy-relevant applications needed at the watershed scale.

  9. A CERES-like Cloud Property Climatology Using AVHRR Data

    NASA Astrophysics Data System (ADS)

    Minnis, P.; Bedka, K. M.; Yost, C. R.; Trepte, Q.; Bedka, S. T.; Sun-Mack, S.; Doelling, D.

    2015-12-01

    Clouds affect the climate system by modulating the radiation budget and distributing precipitation. Variations in cloud patterns and properties are expected to accompany changes in climate. The NASA Clouds and the Earth's Radiant Energy System (CERES) Project developed an end-to-end analysis system to measure broadband radiances from a radiometer and retrieve cloud properties from collocated high-resolution MODerate-resolution Imaging Spectroradiometer (MODIS) data to generate a long-term climate data record of clouds and clear-sky properties and top-of-atmosphere radiation budget. The first MODIS was not launched until 2000, so the current CERES record is only 15 years long at this point. The core of the algorithms used to retrieve the cloud properties from MODIS is based on the spectral complement of the Advanced Very High Resolution Radiometer (AVHRR), which has been aboard a string of satellites since 1978. The CERES cloud algorithms were adapted for application to AVHRR data and have been used to produce an ongoing CERES-like cloud property and surface temperature product that includes an initial narrowband-based radiation budget. This presentation will summarize this new product, which covers nearly 37 years, and its comparability with cloud parameters from CERES, CALIPSO, and other satellites. Examples of some applications of this dataset are given and the potential for generating a long-term radiation budget CDR is also discussed.

  10. Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures

    PubMed Central

    Manolakos, Elias S.

    2015-01-01

    Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub. PMID:26605332

  11. Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures.

    PubMed

    Sharma, Anuj; Manolakos, Elias S

    2015-01-01

    Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub.

  12. A foraminiferal δ(18)O record covering the last 2,200 years.

    PubMed

    Taricco, Carla; Alessio, Silvia; Rubinetti, Sara; Vivaldo, Gianna; Mancuso, Salvatore

    2016-06-21

    Thanks to the precise core dating and the high sedimentation rate of the drilling site (Gallipoli Terrace, Ionian Sea) we were able to measure a foraminiferal δ(18)O series covering the last 2,200 years with a time resolution shorter than 4 years. In order to support the quality of this data-set we link the δ(18)O values measured in the foraminifera shells to temperature and salinity measurements available for the last thirty years covered by the core. Moreover, we describe in detail the dating procedures based on the presence of volcanic markers along the core and on the measurement of (210)Pb and (137)Cs activity in the most recent sediment layers. The high time resolution allows for detecting a δ(18)O decennial-scale oscillation, together with centennial and multicentennial components. Due to the dependence of foraminiferal δ(18)O on environmental conditions, these oscillations can provide information about temperature and salinity variations in past millennia. The strategic location of the drilling area makes this record a unique tool for climate and oceanographic studies of the Central Mediterranean.

  13. Improved Passive Microwave Algorithms for North America and Eurasia

    NASA Technical Reports Server (NTRS)

    Foster, James; Chang, Alfred; Hall, Dorothy

    1997-01-01

    Microwave algorithms simplify complex physical processes in order to estimate geophysical parameters such as snow cover and snow depth. The microwave radiances received at the satellite sensor and expressed as brightness temperatures are a composite of contributions from the Earth's surface, the Earth's atmosphere and from space. Owing to the coarse resolution inherent to passive microwave sensors, each pixel value represents a mixture of contributions from different surface types including deep snow, shallow snow, forests and open areas. Algorithms are generated in order to resolve these mixtures. The accuracy of the retrieved information is affected by uncertainties in the assumptions used in the radiative transfer equation (Steffen et al., 1992). One such uncertainty in the Chang et al. (1987) snow algorithm is that the snow grain radius is 0.3 mm for all layers of the snowpack and for all physiographic regions. However, this is not usually the case. The influence of larger grain sizes appears to be of more importance for deeper snowpacks in the interior of Eurasia. Based on this consideration and the effects of forests, a revised SMMR snow algorithm produces more realistic snow mass values. The purpose of this study is to present results of the revised algorithm (referred to for the remainder of this paper as the GSFC 94 snow algorithm) which incorporates differences in both fractional forest cover and snow grain size. Results from the GSFC 94 algorithm will be compared to the original Chang et al. (1987) algorithm and to climatological snow depth data as well.

  14. Efficient and accurate Greedy Search Methods for mining functional modules in protein interaction networks.

    PubMed

    He, Jieyue; Li, Chaojun; Ye, Baoliu; Zhong, Wei

    2012-06-25

    Most computational algorithms mainly focus on detecting highly connected subgraphs in PPI networks as protein complexes but ignore their inherent organization. Furthermore, many of these algorithms are computationally expensive. However, recent analysis indicates that experimentally detected protein complexes generally contain core/attachment structures. In this paper, a Greedy Search Method based on Core-Attachment structure (GSM-CA) is proposed. The GSM-CA method detects densely connected regions in large protein-protein interaction networks based on the edge weight and two criteria for determining core nodes and attachment nodes. The GSM-CA method improves the prediction accuracy compared to other similar module detection approaches; however, it is computationally expensive. Many module detection approaches are based on traditional hierarchical methods, which are also computationally inefficient because the hierarchical tree structure produced by these approaches cannot provide adequate information to identify whether a network belongs to a module structure or not. In order to speed up the computational process, the Greedy Search Method based on Fast Clustering (GSM-FC) is proposed in this work. The edge-weight-based GSM-FC method uses a greedy procedure to traverse all edges just once to separate the network into a suitable set of modules. The proposed methods are applied to the protein interaction network of S. cerevisiae. Experimental results indicate that many significant functional modules are detected, most of which match known complexes. Results also demonstrate that the GSM-FC algorithm is faster and more accurate than other competing algorithms. Based on the new edge weight definition, the proposed algorithm takes advantage of the greedy search procedure to separate the network into a suitable set of modules. Experimental analysis shows that the identified modules are statistically significant. The algorithm can reduce the computational time significantly while keeping high prediction accuracy.
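    The single-pass greedy idea behind GSM-FC can be illustrated with a generic weighted-edge agglomeration: sort edges by weight, traverse each edge once, and merge the two endpoints' modules whenever the edge weight exceeds a threshold (essentially single-linkage clustering via union-find). This is a simplified stand-in for illustration, not the exact GSM-FC edge-weight definition or merge criteria; the toy network and threshold are placeholders.

```python
def greedy_edge_modules(edges, threshold):
    """One greedy pass over weighted edges; merge endpoint modules above a threshold.

    edges : iterable of (u, v, weight). Returns a dict node -> module representative.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]                # path halving
            x = parent[x]
        return x

    for u, v, w in sorted(edges, key=lambda e: -e[2]):   # heaviest edges first
        if w < threshold:
            break                                        # the single traversal ends here
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv                              # union the two modules
    return {node: find(node) for node in parent}

# Toy weighted interaction network
toy_edges = [("A", "B", 0.9), ("B", "C", 0.8), ("C", "D", 0.2),
             ("D", "E", 0.85), ("E", "F", 0.7), ("A", "F", 0.1)]
print(greedy_edge_modules(toy_edges, threshold=0.5))
```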

  15. The algorithm of central axis in surface reconstruction

    NASA Astrophysics Data System (ADS)

    Zhao, Bao Ping; Zhang, Zheng Mei; Cai Li, Ji; Sun, Da Ming; Cao, Hui Ying; Xing, Bao Liang

    2017-09-01

    Reverse engineering is an important technical means of product imitation and new product development. Its core technology, surface reconstruction, is an active research topic. Among the various surface reconstruction algorithms, reconstruction based on the medial axis is an important class of methods. This paper surveys medial-axis-based reconstruction algorithms, points out the problems of the various methods and the areas that need improvement, and discusses the future development of axis-based surface reconstruction.

  16. Machine learning based cloud mask algorithm driven by radiative transfer modeling

    NASA Astrophysics Data System (ADS)

    Chen, N.; Li, W.; Tanikawa, T.; Hori, M.; Shimada, R.; Stamnes, K. H.

    2017-12-01

    Cloud detection is a critically important first step required to derive many satellite data products. Traditional threshold based cloud mask algorithms require a complicated design process and fine tuning for each sensor, and have difficulty over snow/ice covered areas. With the advance of computational power and machine learning techniques, we have developed a new algorithm based on a neural network classifier driven by extensive radiative transfer modeling. Statistical validation results obtained by using collocated CALIOP and MODIS data show that its performance is consistent over different ecosystems and significantly better than the MODIS Cloud Mask (MOD35 C6) during the winter seasons over mid-latitude snow covered areas. Simulations using a reduced number of satellite channels also show satisfactory results, indicating its flexibility to be configured for different sensors.

  17. MODIS Collection 6 Data at the National Snow and Ice Data Center (NSIDC)

    NASA Astrophysics Data System (ADS)

    Fowler, D. K.; Steiker, A. E.; Johnston, T.; Haran, T. M.; Fowler, C.; Wyatt, P.

    2015-12-01

    For over 15 years, the NASA National Snow and Ice Data Center Distributed Active Archive Center (NSIDC DAAC) has archived and distributed snow and sea ice products derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments on the NASA Earth Observing System (EOS) Aqua and Terra satellites. Collection 6 represents the next revision to NSIDC's MODIS archive, mainly affecting the snow-cover products. Collection 6 specifically addresses the needs of the MODIS science community by targeting the scenarios that have historically confounded snow detection and introduced errors into the snow-cover and fractional snow-cover maps. Even though MODIS snow-cover maps are typically 90 percent accurate or better under good observing conditions, Collection 6 uses revised algorithms to discriminate between snow and clouds, resolve uncertainties along the edges of snow-covered regions, and detect summer snow cover in mountains. Furthermore, Collection 6 applies modified and additional snow detection screens and new Quality Assessment protocols that enhance the overall accuracy of the snow maps compared with Collection 5. Collection 6 also introduces several new MODIS snow products, including a daily Climate Modelling Grid (CMG) cloud-gap-filled (CGF) snow-cover map which generates cloud-free maps by using the most recent clear observations. The MODIS Collection 6 sea ice extent and ice surface temperature algorithms and products are much the same as in Collection 5; however, Collection 6 updates to algorithm inputs (in particular, the L1B calibrated radiances, the land and water mask, and the cloud mask products) have improved the sea ice outputs. The MODIS sea ice products are currently available at NSIDC, and the snow cover products are soon to follow in 2016. NSIDC offers a variety of methods for obtaining these data. Users can download data directly from an online archive or use the NASA Reverb Search & Order Tool to perform spatial, temporal, and parameter subsetting, reformatting, and re-projection of the data.

  18. A new strategy for snow-cover mapping using remote sensing data and ensemble based systems techniques

    NASA Astrophysics Data System (ADS)

    Roberge, S.; Chokmani, K.; De Sève, D.

    2012-04-01

    The snow cover plays an important role in the hydrological cycle of Quebec (Eastern Canada). Consequently, evaluating its spatial extent interests the authorities responsible for the management of water resources, especially hydropower companies. The main objective of this study is the development of a snow-cover mapping strategy using remote sensing data and ensemble-based systems techniques. Planned to be tested in a near real-time operational mode, this snow-cover mapping strategy has the advantage of providing the probability of a pixel being snow covered and its uncertainty. Ensemble systems are made of two key components. First, a method is needed to build an ensemble of classifiers that is as diverse as possible. Second, an approach is required to combine the outputs of individual classifiers that make up the ensemble in such a way that correct decisions are amplified and incorrect ones are cancelled out. In this study, we demonstrate the potential of ensemble systems for snow-cover mapping using remote sensing data. The chosen classifier is a sequential-thresholds algorithm using NOAA-AVHRR data adapted to conditions over Eastern Canada. Its special feature is the use of a combination of six sequential thresholds varying according to the day in the winter season. Two versions of the snow-cover mapping algorithm have been developed: one is specific to autumn (from October 1st to December 31st) and the other to spring (from March 16th to May 31st). In order to build the ensemble-based system, different versions of the algorithm are created by randomly varying its parameters. One hundred versions are included in the ensemble. The probability of a pixel being snow, no-snow or cloud covered corresponds to the number of votes the pixel received for that class across all classifiers. The overall performance of ensemble-based mapping is compared to the overall performance of the chosen classifier, and also with ground observations at meteorological stations.
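    The ensemble output is simply the vote fraction across the perturbed versions of the thresholds classifier. A minimal sketch, assuming a hypothetical classify(image, params) that returns per-pixel labels (0 = no-snow, 1 = snow, 2 = cloud), is shown below; the ±5% perturbation range of the thresholds is an illustrative placeholder.

```python
import numpy as np

N_MEMBERS, N_CLASSES = 100, 3              # classes: 0 = no-snow, 1 = snow, 2 = cloud

def ensemble_snow_probability(image, base_params, classify, seed=0):
    """Per-pixel class probabilities as vote fractions over a perturbed ensemble."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(image.shape[:2] + (N_CLASSES,))
    for _ in range(N_MEMBERS):
        # Randomly perturb the sequential thresholds of the base classifier
        params = {k: v * rng.uniform(0.95, 1.05) for k, v in base_params.items()}
        labels = classify(image, params)               # hypothetical classifier call
        for c in range(N_CLASSES):
            votes[..., c] += labels == c
    return votes / N_MEMBERS                           # probability of each class per pixel
```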

  19. Rigid polyurethane foam – kenaf core composites for structural applications

    USDA-ARS?s Scientific Manuscript database

    Kenaf (Hibiscus cannabinus L.) is a fast growing summer annual crop with numerous commercial applications (fibers, biofuels, bioremediation, paper pulp, building materials, cover crops, and livestock forages). The stalks of the kenaf plants contain two distinct fiber types, bast and core fibers. The...

  20. Topological Relations-Based Detection of Spatial Inconsistency in GLOBELAND30

    NASA Astrophysics Data System (ADS)

    Kang, S.; Chen, J.; Peng, S.

    2017-09-01

    Land cover is one of the fundamental data sets for environmental assessment, land management, biodiversity protection, etc. Hence, quality control of land cover data is extremely critical for geospatial analysis and decision making. Due to the similar remote-sensing reflectance of some land cover types, omission and commission errors in the preliminary classification can result in spatial inconsistency between land cover types. In post-classification, this error checking mainly depends on manual labour to assure data quality, which is time-consuming and labour intensive. A method for automatic detection in post-classification is therefore still an open issue. From a logical-consistency point of view, an inconsistency detection method is designed. This method consists of a grids extended 4-intersection model (GE4IM) for topological representation in single-valued space, with which three kinds of topological relations (disjoint, touch, and contain or contained-by) are described, and an algorithm of region overlay for the computation of spatial inconsistency. The rules are derived from universal relationships in nature between water body and wetland, and between cultivated land and artificial surface. In an experiment conducted in Linqu County, Shandong, data inconsistencies were identified within 6 minutes through calculation of topological inconsistency between cultivated land and artificial surface, and between water body and wetland. The detection results of the presented algorithm are verified against Google Earth images. Through comparative analysis, the algorithm proves promising for inconsistency detection in land cover data.

  1. Study of Anticyclogenesis Affecting the Mediterranean

    NASA Astrophysics Data System (ADS)

    Hatzaki, M.; Flocas, H. A.; Simmonds, I.; Kouroutzoglou, J.; Garde, L.; Keay, K.; Bitsa, E.

    2014-12-01

    A comprehensive climatology of migratory anticyclones affecting the Mediterranean was generated by the University of Melbourne finding and tracking algorithm (MS algorithm), applied to 34 years (1979-2012) of ERA-Interim MSLP on a 1.5°x1.5° resolution. The algorithm was employed for the first time for anticyclones in this region; thus, its robustness and reliability in efficiently capturing the individual characteristics of the anticyclonic tracks in such a closed basin with complex topography were checked and verified. Then, the tracks and the statistical properties of the migratory systems were calculated and analyzed. Considering that cold-core anticyclones are shallow and weaken with height, contrary to the warm-core ones that exhibit a vertically well-organized structure, the vertical thermal extent of the systems was studied with an algorithm developed as an extension module of the MS algorithm using ERA-Interim temperatures on several isobaric levels from 1000hPa to 100hPa on a 1.5°x1.5° resolution. The results verified that during both the cold and warm periods, cold-core anticyclones mainly affect the northern parts of the Mediterranean basin, with their behavior strongly regulated by cyclonic activity from the main storm track areas of the North Atlantic and Europe. On the other hand, warm-core anticyclones were found mainly in the southern Mediterranean and North African areas. Here, in order to get a perspective on the dynamic and thermodynamic processes in anticyclonic formation, a dynamical analysis at several vertical levels is performed. The study of mean fields of potential vorticity, temperature advection, and vorticity advection at various levels can elucidate the role of upper and low levels during anticyclogenesis and system evolvement and help to further understand the dynamic mechanisms which are responsible for anticyclogenesis over the Mediterranean region. Acknowledgement: This research project is implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General Secretariat for Research and Technology) and is co-financed by the European Social Fund (ESF) and the Greek State. Some funding from the Australian Research Council is also acknowledged.

  2. Twenty-four year record of Northern Hemisphere snow cover derived from passive microwave remote sensing

    NASA Astrophysics Data System (ADS)

    Armstrong, Richard L.; Brodzik, Mary Jo

    2003-04-01

    Snow cover is an important variable for climate and hydrologic models due to its effects on energy and moisture budgets. Seasonal snow can cover more than 50% of the Northern Hemisphere land surface during the winter, resulting in snow cover being the land surface characteristic responsible for the largest annual and interannual differences in albedo. Passive microwave satellite remote sensing can augment measurements based on visible satellite data alone because of the ability to acquire data through most clouds or during darkness as well as to provide a measure of snow depth or water equivalent. It is now possible to monitor the global fluctuation of snow cover over a 24 year period using passive microwave data (Scanning Multichannel Microwave Radiometer (SMMR), 1978-1987, and Special Sensor Microwave/Imager (SSM/I), 1987-present). Evaluation of snow extent derived from passive microwave algorithms is presented through comparison with the NOAA Northern Hemisphere snow extent data. For the period 1978 to 2002, both passive microwave and visible data sets show a similar pattern of inter-annual variability, although the maximum snow extents derived from the microwave data are consistently less than those provided by the visible satellite data, and the visible data typically show higher monthly variability. During shallow snow conditions of the early winter season, microwave data consistently indicate less snow-covered area than the visible data. This underestimate of snow extent results from the fact that shallow snow cover (less than about 5.0 cm) does not provide a scattering signal of sufficient strength to be detected by the algorithms. As the snow cover continues to build during the months of January through March, as well as on into the melt season, agreement between the two data types continually improves. This occurs because as the snow becomes deeper and the layered structure more complex, the negative spectral gradient driving the passive microwave algorithm is enhanced. Trends in annual averages are similar, decreasing at rates of approximately 2% per decade. The only region where the passive microwave data consistently indicate snow and the visible data do not is over the Tibetan Plateau and surrounding mountain areas. In the effort to determine the accuracy of the microwave algorithm over this region we are acquiring surface snow observations through a collaborative study with CAREERI/Lanzhou. In order to provide an optimal snow cover product in the future, we are developing a procedure that blends snow extent maps derived from MODIS data with snow water equivalent maps derived from both SSM/I and AMSR.

  3. Optimization of Self-Directed Target Coverage in Wireless Multimedia Sensor Network

    PubMed Central

    Yang, Yang; Wang, Yufei; Pi, Dechang; Wang, Ruchuan

    2014-01-01

    Video and image sensors in wireless multimedia sensor networks (WMSNs) have a directed view and a limited sensing angle, so methods that solve the target coverage problem for traditional sensor networks, which assume a circular sensing model, are not suitable for WMSNs. Based on the proposed FoV (field of view) sensing model and FoV disk model, the expected coverage of a target by a multimedia sensor is defined in terms of the deflection angle between the target and the sensor's current orientation and the distance between the target and the sensor. Target coverage optimization algorithms based on this expected coverage value are then presented separately for the single-sensor single-target, multisensor single-target, and single-sensor multitarget problems. For the multisensor multitarget problem, which is NP-complete, the orientation each sensor should rotate to in order to cover every target falling within its FoV disk is selected from the candidate orientations, and a genetic algorithm is used to obtain an approximately minimum subset of sensors that covers all targets in the network. Simulation results show the algorithm's performance and the effect of the number of targets on the resulting subset. PMID:25136667
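
    To make the idea concrete, the sketch below scores how well a directional sensor is expected to cover a target from the two quantities named above: the deflection angle from the sensor's current orientation and the distance to the target. The particular weighting, the function name and the parameters are illustrative assumptions, not the paper's exact definition.

        import math

        def expected_coverage(sensor_xy, orientation_deg, fov_deg, radius, target_xy):
            """Illustrative expected-coverage score of a target by a directional sensor:
            1.0 for a target at the sensor on the boresight, falling to 0.0 at the edge
            of the field of view or at the sensing radius."""
            dx = target_xy[0] - sensor_xy[0]
            dy = target_xy[1] - sensor_xy[1]
            dist = math.hypot(dx, dy)
            if dist > radius:
                return 0.0                                   # outside the FoV disk
            bearing = math.degrees(math.atan2(dy, dx))
            # deflection angle between target bearing and current orientation, in [0, 180]
            deflection = abs((bearing - orientation_deg + 180.0) % 360.0 - 180.0)
            angle_term = max(0.0, 1.0 - deflection / (fov_deg / 2.0))
            dist_term = 1.0 - dist / radius
            return angle_term * dist_term

        # example: a sensor at the origin facing east with a 60 degree FoV and range 10
        print(expected_coverage((0, 0), 0.0, 60.0, 10.0, (4, 1)))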

  4. Coevolutionary Free Lunches

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Macready, William G.

    2005-01-01

    Recent work on the foundations of optimization has begun to uncover its underlying rich structure. In particular, the "No Free Lunch" (NFL) theorems [WM97] state that any two algorithms are equivalent when their performance is averaged across all possible problems. This highlights the need for exploiting problem-specific knowledge to achieve better than random performance. In this paper we present a general framework covering most search scenarios. In addition to the optimization scenarios addressed in the NFL results, this framework covers multi-armed bandit problems and evolution of multiple co-evolving agents. As a particular instance of the latter, it covers "self-play" problems. In these problems the agents work together to produce a champion, who then engages one or more antagonists in a subsequent multi-player game. In contrast to the traditional optimization case where the NFL results hold, we show that in self-play there are free lunches: in coevolution some algorithms have better performance than other algorithms, averaged across all possible problems. However, in the typical coevolutionary scenarios encountered in biology, where there is no champion, NFL still holds.

  5. Spacewire router IP-core with priority adaptive routing

    NASA Astrophysics Data System (ADS)

    Shakhmatov, A. V.; Chekmarev, S. A.; Vergasov, M. Y.; Khanov, V. Kh

    2015-10-01

    Design of modern spacecraft focuses on using network principles for the interaction of on-board equipment, in particular the SpaceWire network. Routers are an integral part of most SpaceWire networks. The paper presents an adaptive routing algorithm with prioritization, allowing more flexibility in managing the routing process. This algorithm is designed to transmit SpaceWire packets over a redundant network. A method is also proposed for rapid restoration of working capacity after a power loss, by saving the routing table and the router configuration in an external non-volatile memory. The proposed solutions were used to create a router IP-core, which was then tested in an FPGA device. The results illustrate the feasibility and rationality of the proposed solutions.

  6. Seed robustness of oriented relative fuzzy connectedness: core computation and its applications

    NASA Astrophysics Data System (ADS)

    Tavares, Anderson C. M.; Bejar, Hans H. C.; Miranda, Paulo A. V.

    2017-02-01

    In this work, we present a formal definition and an efficient algorithm to compute the cores of Oriented Relative Fuzzy Connectedness (ORFC), a recent seed-based segmentation technique. The core is a region where the seed can be moved without altering the segmentation, an important property for robust techniques and for reducing user effort. We show how ORFC cores can be used to build a powerful hybrid image segmentation approach. We also provide some new theoretical relations between ORFC and Oriented Image Foresting Transform (OIFT), as well as between their cores. Experimental results comparing several methods show that the hybrid approach maintains high accuracy, avoids the shrinking problem and, owing to the properties of the cores, is robust to seed placement inside the desired object.

  7. Intelligent Use of CFAR Algorithms

    DTIC Science & Technology

    1993-05-01

    …the reference windows can raise the threshold too high in many CFAR algorithms and result in masking of targets. GCMLD is a modification of CMLD that… (Interim report RL-TR-93-75, Kaman Sciences Corporation, P. Antonik et al., May 1993; period covered Jan 92 - Sep 92; contract F30602-91-C-0017.)
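
    Since the record above is only a fragment, the sketch below shows the baseline cell-averaging CFAR detector that censored variants such as CMLD and GCMLD modify. It is a generic textbook implementation, not code from the report; the window sizes and false-alarm probability are illustrative parameters.

        import numpy as np

        def ca_cfar(x, num_ref=16, num_guard=2, pfa=1e-4):
            """Cell-averaging CFAR on a vector of power (squared-magnitude) samples:
            estimate the noise level from reference cells on either side of the cell
            under test (excluding guard cells) and scale it by a factor derived from
            the desired probability of false alarm."""
            n = 2 * num_ref                               # total reference cells
            alpha = n * (pfa ** (-1.0 / n) - 1.0)         # CA-CFAR threshold factor
            detections = np.zeros_like(x, dtype=bool)
            half = num_ref + num_guard
            for i in range(half, len(x) - half):
                lead = x[i - half:i - num_guard]          # num_ref cells before the CUT
                lag = x[i + num_guard + 1:i + half + 1]   # num_ref cells after the CUT
                noise = (lead.sum() + lag.sum()) / n
                detections[i] = x[i] > alpha * noise
            return detections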

  8. Undercut feature recognition for core and cavity generation

    NASA Astrophysics Data System (ADS)

    Yusof, Mursyidah Md; Salman Abu Mansor, Mohd

    2018-01-01

    The core and cavity are among the most important components in an injection mould, as the quality of the final product largely depends on them. In industry, mould designers with years of experience and skill commonly use commercial CAD software to design the core and cavity, which is time consuming. This paper proposes an algorithm that detects possible undercut features and generates the core and cavity. Two approaches are presented: edge convexity and face connectivity. The edge convexity approach is used to recognize undercut features, while face connectivity is used to divide the faces into top and bottom regions.

  9. Network Coding on Heterogeneous Multi-Core Processors for Wireless Sensor Networks

    PubMed Central

    Kim, Deokho; Park, Karam; Ro, Won W.

    2011-01-01

    While network coding is well known for its efficiency and usefulness in wireless sensor networks, the excessive costs associated with decoding computation and complexity still hinder its adoption into practical use. On the other hand, high-performance microprocessors with heterogeneous multi-cores are expected to be used as processing nodes of wireless sensor networks in the near future. To this end, this paper introduces an efficient network coding algorithm developed for heterogeneous multi-core processors. The proposed idea is fully tested on one of the currently available heterogeneous multi-core processors, the Cell Broadband Engine. PMID:22164053
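
    As background for the decoding cost discussed above, the sketch below implements a minimal random linear network code over GF(2): coded packets are random XOR combinations of the source packets, and decoding is Gauss-Jordan elimination over the received coefficient vectors. It is a plain serial illustration under those assumptions, not the paper's Cell Broadband Engine implementation.

        import numpy as np

        def encode(packets, num_coded, seed=0):
            """Random linear network coding over GF(2): each coded packet is the XOR
            (mod-2 sum) of a random subset of the source packets."""
            rng = np.random.default_rng(seed)
            src = np.asarray(packets, dtype=np.uint8)          # shape (k, payload_bits)
            coeffs = rng.integers(0, 2, size=(num_coded, src.shape[0]), dtype=np.uint8)
            coded = (coeffs.astype(int) @ src.astype(int)) % 2
            return coeffs, coded.astype(np.uint8)

        def decode(coeffs, coded):
            """Gauss-Jordan elimination over GF(2): recovers the k source packets
            provided the received coefficient vectors have full rank."""
            A = np.concatenate([coeffs, coded], axis=1).astype(np.uint8)
            rows, k = A.shape[0], coeffs.shape[1]
            r = 0
            for c in range(k):
                pivot = next((i for i in range(r, rows) if A[i, c]), None)
                if pivot is None:
                    raise ValueError("coefficient matrix is not full rank")
                A[[r, pivot]] = A[[pivot, r]]                  # move pivot row up
                for i in range(rows):
                    if i != r and A[i, c]:
                        A[i] ^= A[r]                           # eliminate column c elsewhere
                r += 1
            return A[:k, k:]                                   # decoded source packets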

  10. Accelerating k-NN Algorithm with Hybrid MPI and OpenSHMEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Jian; Hamidouche, Khaled; Zheng, Jie

    2015-08-05

    Machine Learning algorithms are benefiting from the continuous improvement of programming models, including MPI, MapReduce and PGAS. The k-Nearest Neighbors (k-NN) algorithm is a widely used machine learning algorithm, applied to supervised learning tasks such as classification. Several parallel implementations of k-NN have been proposed in the literature and in practice. However, on high-performance computing systems with high-speed interconnects, it is important to further accelerate existing designs of the k-NN algorithm by taking advantage of scalable programming models. To improve the performance of k-NN in large-scale environments with InfiniBand networks, this paper proposes several alternative hybrid MPI+OpenSHMEM designs and performs a systematic evaluation and analysis on typical workloads. The hybrid designs leverage one-sided memory access to better overlap communication with computation than the existing pure MPI design, and propose better schemes for efficient buffer management. The implementation, based on the k-NN program from MaTEx with MVAPICH2-X (Unified MPI+PGAS Communication Runtime over InfiniBand), shows up to 9.0% time reduction for training the KDD Cup 2010 workload over 512 cores, and 27.6% time reduction for a small workload with balanced communication and computation. Experiments running with varied numbers of cores show that our design maintains good scalability.
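
    For readers unfamiliar with the underlying computation, the sketch below is a brute-force serial k-NN classifier in NumPy; the distance matrix it builds is exactly the work that parallel designs such as the hybrid MPI+OpenSHMEM one distribute. It is an illustrative baseline only, not the MaTEx implementation discussed above.

        import numpy as np

        def knn_predict(train_X, train_y, test_X, k=5):
            """Brute-force k-NN classification: label each test point by a majority
            vote among its k nearest training points (Euclidean distance).
            train_y must be a 1-D array of non-negative integer class labels."""
            # pairwise squared distances, shape (n_test, n_train)
            d2 = ((test_X[:, None, :] - train_X[None, :, :]) ** 2).sum(axis=-1)
            nearest = np.argsort(d2, axis=1)[:, :k]     # indices of the k nearest neighbours
            votes = train_y[nearest]                    # their labels, shape (n_test, k)
            return np.array([np.bincount(row).argmax() for row in votes])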

  11. Large core plastic planar optical splitter fabricated by 3D printing technology

    NASA Astrophysics Data System (ADS)

    Prajzler, Václav; Kulha, Pavel; Knietel, Marian; Enser, Herbert

    2017-10-01

    We report on the design, fabrication and optical properties of a large-core multimode optical polymer splitter fabricated by filling a core polymer into a substrate made by 3D printing technology. The splitter was designed by the beam propagation method and is intended for assembly with large-core waveguide fibers of 735 μm diameter. The waveguide core layers were made of an optically clear liquid adhesive, and Veroclear polymer was used as the substrate and cover layers. Measurement of optical losses proved that the insertion optical loss was lower than 6.8 dB in the visible spectrum.

  12. Deep Belief Networks for Electroencephalography: A Review of Recent Contributions and Future Outlooks.

    PubMed

    Movahedi, Faezeh; Coyle, James L; Sejdic, Ervin

    2018-05-01

    Deep learning, a relatively new branch of machine learning, has been investigated for use in a variety of biomedical applications. Deep learning algorithms have been used to analyze different physiological signals and gain a better understanding of human physiology for automated diagnosis of abnormal conditions. In this paper, we provide an overview of deep learning approaches with a focus on deep belief networks in electroencephalography applications. We investigate the state-of-the-art algorithms for deep belief networks and then cover the application of these algorithms and their performances in electroencephalographic applications. We cover various applications of electroencephalography in medicine, including emotion recognition, sleep stage classification, and seizure detection, in order to understand how deep learning algorithms could be modified to better suit the desired tasks. This review is intended to provide researchers with a broad overview of the currently existing deep belief network methodology for electroencephalography signals, as well as to highlight potential challenges for future research.

  13. An Innovative Thinking-Based Intelligent Information Fusion Algorithm

    PubMed Central

    Hu, Liang; Liu, Gang; Zhou, Jin

    2013-01-01

    This study proposes an intelligent algorithm that realizes information fusion by drawing on research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. Furthermore, the five key parts of this algorithm, namely information sensing and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. The algorithm fully develops the innovative-thinking use of knowledge in information fusion and is an attempt to convert the abstract concepts of brain cognitive science into specific, operable research routes and strategies. Furthermore, the influence of each parameter on algorithm performance is analyzed and compared, through tests, with that of classical intelligent algorithms. Test results suggest that the proposed algorithm can obtain the optimum problem solution with fewer target evaluations, improve optimization effectiveness, and achieve the effective fusion of information. PMID:23956699

  14. An innovative thinking-based intelligent information fusion algorithm.

    PubMed

    Lu, Huimin; Hu, Liang; Liu, Gang; Zhou, Jin

    2013-01-01

    This study proposes an intelligent algorithm that realizes information fusion by drawing on research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. Furthermore, the five key parts of this algorithm, namely information sensing and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. The algorithm fully develops the innovative-thinking use of knowledge in information fusion and is an attempt to convert the abstract concepts of brain cognitive science into specific, operable research routes and strategies. Furthermore, the influence of each parameter on algorithm performance is analyzed and compared, through tests, with that of classical intelligent algorithms. Test results suggest that the proposed algorithm can obtain the optimum problem solution with fewer target evaluations, improve optimization effectiveness, and achieve the effective fusion of information.

  15. Fire behavior simulation in Mediterranean forests using the minimum travel time algorithm

    Treesearch

    Kostas Kalabokidis; Palaiologos Palaiologou; Mark A. Finney

    2014-01-01

    Recent large wildfires in Greece exemplify the need for pre-fire burn probability assessment and possible landscape fire flow estimation to enhance fire planning and resource allocation. The Minimum Travel Time (MTT) algorithm, incorporated as a module of FlamMap version five, provides valuable fire behavior functions while enabling multi-core utilization for the...

  16. Intercomparison of Eight Forward 1D Vector Radiative Transfer Models, with the Performance of Satellite Aerosol Remote Sensing Algorithms in Mind

    NASA Astrophysics Data System (ADS)

    Davis, Anthony B.; Kalashnikova, Olga V.; Diner, David J.; Garay, Michael J.; Lyapustin, Alexei I.; Korkin, Sergey V.; Martonchik, John V.; Natraj, Vijay; Sanghavi, Suniti V.; Xu, Feng; Zhai, Pengwang; Rozanov, Vladimir V.; Kokhanovsky, Alexander A.

    2014-05-01

    Quantification and characterization of the omnipresent atmospheric aerosol by remote sensing methods is key to answering many challenging questions in atmospheric science, in climate modeling and in air quality monitoring foremost. In recent years, accurate measurement of the state of polarization of photon fluxes at optical sensors in the visible and near-IR spectrum has been hailed as a very promising approach to aerosol remote sensing. Consequently, there has been a flurry of activity in polarized or 'vector' radiative transfer (vRT) model development. This covers the multiple scattering and ground reflection aspects of sensor signal prediction that complement single-particle scattering computation, and lies at the core of all physics-based retrieval algorithms. One can legitimately ask: What level of model fidelity (representativeness of natural scenes) and what computational accuracy should be achieved for this task in view of the practical constraints that apply? These constraints are, at a minimum: (i) the desired accuracy of the retrieved aerosol properties, (ii) observational uncertainties, and (iii) operational efficiency requirements as determined by throughput. We offer a rational and balanced approach to address these questions and illustrate it with a systematic inter-comparison of the performance of a diverse set of 1D vRT models using a small but representative set of test cases. This 'JPL' benchmarking suite of cases is naturally divided into two parts. First the emphasis is on stratified atmospheres with a continuous mixture of molecular and aerosol scattering and absorption over a black surface, with the corresponding pure cases treated for diagnostic purposes. Then the emphasis shifts to the variety of surfaces, both polarizing and not, that can be encountered in real observations and may confuse the aerosol retrieval algorithm if not properly treated.

  17. A privacy-preserving parallel and homomorphic encryption scheme

    NASA Astrophysics Data System (ADS)

    Min, Zhaoe; Yang, Geng; Shi, Jingqi

    2017-04-01

    In order to protect data privacy whilst allowing efficient access to data in multi-node cloud environments, a parallel homomorphic encryption (PHE) scheme is proposed based on the additive homomorphism of the Paillier encryption algorithm. In this paper we propose a PHE algorithm in which the plaintext is divided into several blocks and the blocks are encrypted in parallel. Experimental results demonstrate that the encryption algorithm can reach a speed-up ratio of about 7.1 in a MapReduce environment with 16 cores and 4 nodes.
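
    To show the additive homomorphism the scheme builds on, the sketch below is a from-scratch textbook Paillier implementation with toy primes: multiplying two ciphertexts modulo n^2 decrypts to the sum of the plaintexts. It is illustrative only (it shows neither the paper's block splitting nor its MapReduce parallelization) and the key size is far too small for real security.

        from math import gcd
        import random

        def keygen(p=104729, q=104723):
            """Toy Paillier key pair (textbook construction with g = n + 1).
            The primes are illustrative only; real use needs >= 2048-bit moduli."""
            n = p * q
            lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
            return n, lam

        def encrypt(n, m):
            """c = (1 + n)^m * r^n mod n^2 for a random r coprime to n."""
            n2 = n * n
            r = random.randrange(1, n)
            while gcd(r, n) != 1:
                r = random.randrange(1, n)
            return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

        def decrypt(n, lam, c):
            """m = L(c^lam mod n^2) * lam^(-1) mod n, where L(x) = (x - 1) // n."""
            n2 = n * n
            L = (pow(c, lam, n2) - 1) // n
            return (L * pow(lam, -1, n)) % n

        # additive homomorphism: the product of two ciphertexts decrypts to the sum
        n, lam = keygen()
        c = (encrypt(n, 7) * encrypt(n, 35)) % (n * n)
        assert decrypt(n, lam, c) == 42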

  18. Design Time Optimization for Hardware Watermarking Protection of HDL Designs

    PubMed Central

    Castillo, E.; Morales, D. P.; García, A.; Parrilla, L.; Todorovich, E.; Meyer-Baese, U.

    2015-01-01

    HDL-level design offers important advantages for the application of watermarking to IP cores, but its complexity also requires tools automating these watermarking algorithms. A new tool for signature distribution through combinational logic is proposed in this work. IPP@HDL, a previously proposed high-level watermarking technique, has been employed for evaluating the tool. IPP@HDL relies on spreading the bits of a digital signature at the HDL design level using combinational logic included within the original system. The development of this new tool for the signature distribution has not only extended and eased the applicability of this IPP technique, but it has also improved the signature hosting process itself. Three algorithms were studied in order to develop this automated tool. The selection of a cost function determines the best hosting solutions in terms of area and performance penalties on the IP core to protect. A 1D-DWT core and MD5 and SHA1 digital signatures were used in order to illustrate the benefits of the new tool and its optimization related to the extraction logic resources. Among the proposed algorithms, the alternative based on simulated annealing reduces the additional resources while maintaining an acceptable computation time and also saving designer effort and time. PMID:25861681
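
    The simulated-annealing alternative mentioned above searches for low-cost placements of the signature-hosting logic. The sketch below is a generic simulated-annealing loop under assumed interfaces: initial, neighbour and cost are placeholders the caller supplies (for example a cost combining area and delay penalties). It is not the IPP@HDL tool's actual code.

        import math
        import random

        def simulated_annealing(initial, neighbour, cost, t0=1.0, cooling=0.995, steps=5000):
            """Generic simulated annealing: accept worse candidates with probability
            exp(-delta / T) so the search can escape local minima while T decays."""
            current, c_cur = initial, cost(initial)
            best, c_best = current, c_cur
            t = t0
            for _ in range(steps):
                candidate = neighbour(current)
                c_cand = cost(candidate)
                delta = c_cand - c_cur
                if delta < 0 or random.random() < math.exp(-delta / t):
                    current, c_cur = candidate, c_cand
                    if c_cur < c_best:
                        best, c_best = current, c_cur
                t *= cooling
            return best, c_best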

  19. Performance Evaluation of NWChem Ab-Initio Molecular Dynamics (AIMD) Simulations on the Intel® Xeon Phi™ Processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bylaska, Eric J.; Jacquelin, Mathias; De Jong, Wibe A.

    2017-10-20

    Ab-initio Molecular Dynamics (AIMD) methods are an important class of algorithms, as they enable scientists to understand the chemistry and dynamics of molecular and condensed phase systems while retaining a first-principles-based description of their interactions. Many-core architectures such as the Intel® Xeon Phi™ processor are an interesting and promising target for these algorithms, as they can provide the computational power that is needed to solve interesting problems in chemistry. In this paper, we describe the efforts of refactoring the existing AIMD plane-wave method of NWChem from an MPI-only implementation to a scalable, hybrid code that employs MPI and OpenMP to exploit the capabilities of current and future many-core architectures. We describe the optimizations required to get close to optimal performance for the multiplication of the tall-and-skinny matrices that form the core of the computational algorithm. We present strong scaling results on the complete AIMD simulation for a test case that simulates 256 water molecules and that strong-scales well on a cluster of 1024 nodes of Intel Xeon Phi processors. We compare the performance obtained with a cluster of dual-socket Intel® Xeon® E5–2698v3 processors.

  20. Chapter 13. Exploring Use of the Reserved Core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holmen, John; Humphrey, Alan; Berzins, Martin

    2015-07-29

    In this chapter, we illustrate benefits of thinking in terms of thread management techniques when using a centralized scheduler model along with interoperability of MPI and PThread. This is facilitated through an exploration of thread placement strategies for an algorithm modeling radiative heat transfer with special attention to the 61st core. This algorithm plays a key role within the Uintah Computational Framework (UCF) and current efforts taking place at the University of Utah to model next-generation, large-scale clean coal boilers. In such simulations, this algorithm models the dominant form of heat transfer and consumes a large portion of compute time. Exemplified by a real-world example, this chapter presents our early efforts in porting a key portion of a scalability-centric codebase to the Intel Xeon Phi coprocessor. Specifically, this chapter presents results from our experiments profiling the native execution of a reverse Monte-Carlo ray tracing-based radiation model on a single coprocessor. These results demonstrate that our fastest run configurations utilized the 61st core and that performance was not profoundly impacted when explicitly oversubscribing the coprocessor operating system thread. Additionally, this chapter presents a portion of radiation model source code, a MIC-centric UCF cross-compilation example, and a less conventional thread management technique for developers utilizing the PThreads threading model.

  1. Integration of Landsat-based disturbance maps in the Landscape Change Monitoring System (LCMS)

    NASA Astrophysics Data System (ADS)

    Healey, S. P.; Cohen, W. B.; Eidenshink, J. C.; Hernandez, A. J.; Huang, C.; Kennedy, R. E.; Moisen, G. G.; Schroeder, T. A.; Stehman, S.; Steinwand, D.; Vogelmann, J. E.; Woodcock, C.; Yang, L.; Yang, Z.; Zhu, Z.

    2013-12-01

    Land cover change can have a profound effect upon an area's natural resources and its role in biogeochemical and hydrological cycles. Many land cover change processes are sensitive to climate, including fire, storm damage, and insect activity. Monitoring of both past and ongoing land cover change is critical, particularly as we try to understand the impact of a changing climate on the natural systems we manage. The Landsat series of satellites, which initially launched in 1972, has allowed land observation at spatial and spectral resolutions appropriate for identification of many types of land cover change. Over the years, and particularly since the opening of the Landsat archive in 2008, many approaches have been developed to meet individual monitoring needs. Algorithms vary by the cover type targeted, the rate of change sought, and the period between observations. The Landscape Change Monitoring System (LCMS) is envisioned as a sustained, inter-agency monitoring program that brings together and operationally provides the best available land cover change maps over the United States. Expanding upon the successful USGS/Forest Service Monitoring Trends in Burn Severity project, LCMS is designed to serve a variety of research and management communities. The LCMS Science Team is currently assessing the relative strengths of a variety of leading change detection approaches, primarily emphasizing Landsat observations. Using standardized image pre-processing methods, maps produced by these algorithms have been compared at intensive validation sites across the country. Additionally, LCMS has taken steps toward a data-mining framework, in which ensembles of algorithm outputs are used with non-parametric models to create integrated predictions of change across a variety of scenarios and change dynamics. We present initial findings from the LCMS Science Team, including validation results from individual algorithms and assessment of initial 'integrated' products from the data-mining framework. It is anticipated that these results will directly impact land change information that will in the future be routinely available across the country through LCMS. With a baseline observation period of more than 40 years and a national scope, these data should shed light upon how trends in disturbance may be linked to climatic changes.

  2. New optical package and algorithms for accurate estimation and interactive recording of the cloud cover information over land and sea

    NASA Astrophysics Data System (ADS)

    Krinitskiy, Mikhail; Sinitsyn, Alexey; Gulev, Sergey

    2014-05-01

    Cloud fraction is a critical parameter for the accurate estimation of short-wave and long-wave radiation - one of the most important surface fluxes over sea and land. Massive estimates of the total cloud cover as well as cloud amount for different layers of clouds are available from visual observations, satellite measurements and reanalyses. However, these data are subject to different uncertainties and need continuous validation against highly accurate in-situ measurements. Sky imaging with a high-resolution fish-eye camera provides an excellent opportunity for collecting cloud cover data supplemented with additional characteristics hardly available from routine visual observations (e.g. structure of cloud cover under broken cloud conditions, parameters of the distribution of cloud dimensions). We present an operational automatic observational package based on a fish-eye camera taking sky images at high temporal resolution (up to 1 Hz) and a spatial resolution of 968x648 px. This spatial resolution has been justified as optimal by several sensitivity experiments. For use of the package on a research vessel, where horizontal positioning becomes critical, a special hardware and software extension to the package has been developed. These modules provide explicit detection of the optimal moment for shooting. For the post-processing of sky images we developed software implementing an algorithm for filtering the sunburn effect in cases of small and moderate cloud cover and broken cloud conditions. The same algorithm accurately quantifies the cloud fraction by analyzing the color mixture for each point and introducing the so-called "grayness rate index" for every pixel. The accuracy of the algorithm has been tested using the data collected during several campaigns in 2005-2011 in the North Atlantic Ocean. The collection of images included more than 3000 images for different cloud conditions supplied with observations of standard parameters. The system is fully autonomous and has a block for digital data collection on the hard disk. The system has been tested for a wide range of open ocean cloud conditions and we will demonstrate some pilot results of data processing and physical interpretation of fractional cloud cover estimation.
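
    The "grayness rate index" itself is not defined in the abstract, so the sketch below uses a simple stand-in: a pixel is counted as cloud when its R, G and B values are nearly equal (grey or white) rather than blue-dominated, and the cloud fraction is the share of such pixels. The threshold and the index formula are assumptions for illustration, not the authors' algorithm.

        import numpy as np

        def cloud_fraction(rgb, grayness_threshold=0.12):
            """Fraction of sky pixels classified as cloud: a pixel counts as cloud when
            its channels are nearly equal relative to its brightness, whereas clear
            sky keeps a strong blue excess. rgb is a (H, W, 3) uint8 array."""
            img = rgb.astype(float) / 255.0
            r, g, b = img[..., 0], img[..., 1], img[..., 2]
            brightness = (r + g + b) / 3.0 + 1e-6
            # illustrative "grayness" measure: channel spread normalized by brightness
            spread = np.max(img, axis=-1) - np.min(img, axis=-1)
            grayness = spread / brightness
            cloudy = grayness < grayness_threshold
            return cloudy.mean(), cloudy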

  3. Mapping Mountain Front Recharge Areas in Arid Watersheds Based on a Digital Elevation Model and Land Cover Types

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowen, Esther E.; Hamada, Yuki; O’Connor, Ben L.

    Here, a recent assessment that quantified potential impacts of solar energy development on water resources in the southwestern United States necessitated the development of a methodology to identify locations of mountain front recharge (MFR) in order to guide land development decisions. A spatially explicit, slope-based algorithm was created to delineate MFR zones in 17 arid, mountainous watersheds using elevation and land cover data. Slopes were calculated from elevation data and grouped into 100 classes using iterative self-organizing classification. Candidate MFR zones were identified based on slope classes that were consistent with MFR. Land cover types that were inconsistent with groundwater recharge were excluded from the candidate areas to determine the final MFR zones. No MFR reference maps exist for comparison with the study’s results, so the reliability of the resulting MFR zone maps was evaluated qualitatively using slope, surficial geology, soil, and land cover datasets. MFR zones ranged from 74 km2 to 1,547 km2 and accounted for 40% of the total watershed area studied. Slopes and surficial geologic materials that were present in the MFR zones were consistent with conditions at the mountain front, while soils and land cover that were present would generally promote groundwater recharge. Visual inspection of the MFR zone maps also confirmed the presence of well-recognized alluvial fan features in several study watersheds. While qualitative evaluation suggested that the algorithm reliably delineated MFR zones in most watersheds overall, the algorithm was better suited for application in watersheds that had characteristic Basin and Range topography and relatively flat basin floors than areas without these characteristics. Because the algorithm performed well to reliably delineate the spatial distribution of MFR, it would allow researchers to quantify aspects of the hydrologic processes associated with MFR and help local land resource managers to consider protection of critical groundwater recharge regions in their development decisions.

  4. Mapping Mountain Front Recharge Areas in Arid Watersheds Based on a Digital Elevation Model and Land Cover Types

    DOE PAGES

    Bowen, Esther E.; Hamada, Yuki; O’Connor, Ben L.

    2014-06-01

    Here, a recent assessment that quantified potential impacts of solar energy development on water resources in the southwestern United States necessitated the development of a methodology to identify locations of mountain front recharge (MFR) in order to guide land development decisions. A spatially explicit, slope-based algorithm was created to delineate MFR zones in 17 arid, mountainous watersheds using elevation and land cover data. Slopes were calculated from elevation data and grouped into 100 classes using iterative self-organizing classification. Candidate MFR zones were identified based on slope classes that were consistent with MFR. Land cover types that were inconsistent with groundwater recharge were excluded from the candidate areas to determine the final MFR zones. No MFR reference maps exist for comparison with the study’s results, so the reliability of the resulting MFR zone maps was evaluated qualitatively using slope, surficial geology, soil, and land cover datasets. MFR zones ranged from 74 km2 to 1,547 km2 and accounted for 40% of the total watershed area studied. Slopes and surficial geologic materials that were present in the MFR zones were consistent with conditions at the mountain front, while soils and land cover that were present would generally promote groundwater recharge. Visual inspection of the MFR zone maps also confirmed the presence of well-recognized alluvial fan features in several study watersheds. While qualitative evaluation suggested that the algorithm reliably delineated MFR zones in most watersheds overall, the algorithm was better suited for application in watersheds that had characteristic Basin and Range topography and relatively flat basin floors than areas without these characteristics. Because the algorithm performed well to reliably delineate the spatial distribution of MFR, it would allow researchers to quantify aspects of the hydrologic processes associated with MFR and help local land resource managers to consider protection of critical groundwater recharge regions in their development decisions.

  5. Information Hiding: an Annotated Bibliography

    DTIC Science & Technology

    1999-04-13

    …parameters needed for reconstruction are enciphered using DES. The encrypted image is hidden in a cover image. [153] 074115, ‘Watermarking algorithm…’ The authors present a block-based watermarking algorithm for digital images. The D.C.T. of the block is increased by a certain value. Quality control… includes evaluation of the watermark robustness and the subjective visual image quality. Two algorithms use the frequency domain while the two others use…

  6. CYGNUS A: Hot Spots, Bow Shocks, Core Emission, and Exclusion of Cluster Gas by Radio Lobes

    NASA Technical Reports Server (NTRS)

    Harris, Daniel E.

    1999-01-01

    This report covers work performed on three ROSAT projects: (1) Monitoring the X-ray Intensity of the Core and Jet of M87; (2) the radio-optical jet in 3C-120; and (3) a search for cluster emission at high redshift.

  7. Competency-Based Common-Core Curriculum for Emergency Medical Technician Education.

    ERIC Educational Resources Information Center

    Arizona State Board of Directors for Community Colleges, Phoenix.

    This curriculum guide contains a listing of all common-core competencies that should be taught in Arizona community colleges in order to prepare students to meet the requirements of basic and refresher emergency medical technician training. Identified through a statewide project, the competencies cover the following topics: introduction to…

  8. Determination of Algorithm Parallelism in NP Complete Problems for Distributed Architectures

    DTIC Science & Technology

    1990-03-05

    …structure STACK declare OpenStack(S_NODE **TopPtr) -> TopPtr; FlushStack(S_NODE **TopPtr) -> TopPtr; PushOnStack(S_NODE **TopPtr, ITEM *NewItemPtr)… OfCoveringSets, CoveringSets, L, BestCoverTime, Vertex, Set… end SCND ADT. B.26 structure STACK declare OpenStack(S_NODE **TopPtr) -> TopPtr; FlushStack(S…

  9. Evaluation of GPM candidate algorithms on hurricane observations

    NASA Astrophysics Data System (ADS)

    Le, M.; Chandrasekar, C. V.

    2012-12-01

    The observation of precipitation on a global scale by the Tropical Rain Measuring Mission (TRMM) precipitation radar (PR) has enabled a large-scale study of precipitation over the ocean, especially tropical storms. The three-dimensional downward-looking observation characteristic of the TRMM-PR makes it possible to study the vertical structure of tropical storms. The Global Precipitation Measurement (GPM) mission will be the second mission following the success of TRMM. The GPM mission extends tropical storm tracking and forecasting capabilities into the middle and high latitudes, covering the area from 65°S to 65°N. This orbit will provide new insight into how and why some tropical storms intensify and others weaken as they move from tropical to mid-latitude systems. The GPM core satellite will be equipped with a dual-frequency precipitation radar (DPR) operating at K_u (13.6 GHz) and K_a (35.5 GHz) bands. The DPR aboard the GPM core satellite is expected to improve our knowledge of precipitation processes relative to the single-frequency (K_u band) radar used in TRMM by providing greater dynamic range, more detailed information on microphysics, and better accuracy in rainfall retrievals. The new K_a band channel observations of the DPR will help to improve the detection thresholds for light rain and snow relative to the TRMM PR [1]. The dual-frequency signals will allow us to better distinguish regions of liquid, frozen, and mixed-phase precipitation. In the GPM era, storms could be better tracked and characterized. In support of the NASA GPM mission, NASA JPL (Jet Propulsion Lab) developed the 2nd generation Airborne Precipitation Radar (APR-2) as a prototype of an advanced dual-frequency space radar which emulates the DPR on board the GPM core satellite before it is launched. GRIP (Genesis and Rapid Intensification Processes) is the most recent campaign of APR-2, conducted in 2010 over the Gulf of Mexico and Caribbean Sea with the major goal of better understanding tropical storms and hurricanes. In this paper, the performance of GPM candidate algorithms [2][3] for profile classification, melting region detection, and drop size distribution retrieval for hurricane Earl will be presented. This analysis will be compared with other storm observations that are not tropical storms. The philosophy of the algorithm is based on the vertical characteristics of the measured dual-frequency ratio (DFRm), defined as the difference in measured radar reflectivities at the two frequencies. It helps our understanding of how hurricanes such as Earl form and intensify rapidly. References: [1] T. Iguchi, R. Oki, A. Eric and Y. Furuhama, "Global precipitation measurement program and the development of dual-frequency precipitation radar," J. Commun. Res. Lab. (Japan), 49, 37-45, 2002. [2] M. Le and V. Chandrasekar, Recent updates on precipitation classification and hydrometeor identification algorithm for GPM-DPR, Geoscience and Remote Sensing Symposium, IGARSS 2012, IEEE International, Munich, Germany. [3] M. Le, V. Chandrasekar and S. Lim, Microphysical retrieval from dual-frequency precipitation radar on board GPM, Geoscience and Remote Sensing Symposium, IGARSS 2010, IEEE International, Honolulu, USA.
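
    For reference, the measured dual-frequency ratio described above is conventionally written as the difference of the measured reflectivities (in dB) at the two DPR frequencies; the notation below follows that common convention and is not quoted from the cited papers:

        $$ \mathrm{DFR}_m \;=\; 10\log_{10} Z_m(K_u) \;-\; 10\log_{10} Z_m(K_a) \quad [\mathrm{dB}] $$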

  10. Quantum red-green-blue image steganography

    NASA Astrophysics Data System (ADS)

    Heidari, Shahrokh; Pourarian, Mohammad Rasoul; Gheibi, Reza; Naseri, Mosayeb; Houshmand, Monireh

    One of the most important topics in the field of quantum information processing is quantum data hiding, including quantum steganography and quantum watermarking. This field provides an efficient tool for protecting any kind of digital data. In this paper, three quantum color image steganography algorithms are investigated based on the Least Significant Bit (LSB). The first algorithm employs only one of the image’s channels to cover secret data. The second procedure is based on an LSB XORing technique, and the last algorithm utilizes two channels of the color cover image for hiding secret quantum data. The performance of the proposed schemes is analyzed using software simulations in the MATLAB environment. The analysis of PSNR, BER and histogram graphs indicates that the presented schemes exhibit acceptable performance, and theoretical analysis demonstrates that the network complexity of the approaches scales quadratically.
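
    As a classical point of reference for the quantum LSB schemes above, the sketch below embeds a bit string in the least significant bits of one colour channel of an ordinary RGB image array and reads it back. It is an illustrative classical analogue only, not the quantum circuits studied in the paper.

        import numpy as np

        def embed_lsb(cover, bits, channel=2):
            """Hide a 0/1 bit array in the least significant bits of one colour
            channel (channel=2 is blue), one bit per pixel, row-major order.
            cover is a (H, W, 3) uint8 array."""
            bits = np.asarray(bits, dtype=np.uint8)
            stego = cover.copy()
            plane = stego[..., channel].reshape(-1).copy()
            if bits.size > plane.size:
                raise ValueError("message too long for this cover image")
            plane[:bits.size] = (plane[:bits.size] & 0xFE) | bits
            stego[..., channel] = plane.reshape(stego.shape[:2])
            return stego

        def extract_lsb(stego, n_bits, channel=2):
            """Recover the first n_bits hidden bits from the chosen channel."""
            return stego[..., channel].reshape(-1)[:n_bits] & 1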

  11. Enhancement of the MODIS Daily Snow Albedo Product

    NASA Technical Reports Server (NTRS)

    Hall, Dorothy K.; Schaaf, Crystal B.; Wang, Zhuosen; Riggs, George A.

    2009-01-01

    The MODIS daily snow albedo product is a data layer in the MOD10A1 snow-cover product that includes snow-covered area and fractional snow cover as well as quality information and other metadata. It was developed to augment the MODIS BRDF/Albedo algorithm (MCD43) that provides 16-day maps of albedo globally at 500-m resolution. But many modelers require daily snow albedo, especially during the snowmelt season when the snow albedo is changing rapidly. Many models have an unrealistic snow albedo feedback in both the estimated albedo and the change in albedo over the seasonal cycle. In this context, rapid changes in snow cover extent or brightness challenge the MCD43 algorithm; over a 16-day period, MCD43 determines whether the majority of clear observations was snow-covered or snow-free and then calculates albedo only for the majority condition. Thus changes in snow albedo and snow cover are not portrayed accurately during times of rapid change, and therefore the current MCD43 product is not ideal for snow work. The MODIS daily snow albedo from the MOD10 product provides more frequent, though less robust, maps for pixels defined as "snow" by the MODIS snow-cover algorithm. Though useful, the daily snow albedo product can be improved using a daily version of the MCD43 product as described in this paper. There are important limitations to the MOD10A1 daily snow albedo product, some of which can be mitigated. Utilizing the appropriate per-pixel Bidirectional Reflectance Distribution Functions (BRDFs) can be problematic, and correction for anisotropic scattering must be included. The BRDF describes how the reflectance varies with view and illumination geometry. Also, narrow-to-broadband conversion specific for snow on different surfaces must be calculated, and this can be difficult. In consideration of these limitations of MOD10A1, we are planning to improve the daily snow albedo algorithm by coupling the periodic per-pixel snow albedo from MCD43 with daily surface reflectance. In this paper, we compare a daily version of MCD43B3 with the daily albedo from MOD10A1, and MCD43B3 with a 16-day average of MOD10A1, over Greenland. We also discuss some near-future planned enhancements to MOD10A1.

  12. Mapping spatial patterns with morphological image processing

    Treesearch

    Peter Vogt; Kurt H. Riitters; Christine Estreguil; Jacek Kozak; Timothy G. Wade; James D. Wickham

    2006-01-01

    We use morphological image processing for classifying spatial patterns at the pixel level on binary land-cover maps. Land-cover pattern is classified as 'perforated,' 'edge,' 'patch,' and 'core' with higher spatial precision and thematic accuracy compared to a previous approach based on image convolution, while retaining the...
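
    To give a flavour of the approach, the sketch below assigns coarse pattern classes on a binary land-cover map with standard morphological operations: pixels that survive erosion are 'core', fragments with no core are 'patch', and the remaining boundary pixels are 'edge'. This is a simplified SciPy illustration (the 'perforated' class around interior holes is omitted), not the authors' full method.

        import numpy as np
        from scipy.ndimage import binary_erosion, label

        def classify_pattern(forest, edge_width=1):
            """Coarse pixel-level pattern classes on a binary land-cover map:
            'core'  - forest pixels that survive erosion by edge_width,
            'patch' - forest fragments that contain no core pixels at all,
            'edge'  - the remaining forest pixels bordering core areas."""
            forest = np.asarray(forest, dtype=bool)
            structure = np.ones((3, 3), dtype=bool)
            core = binary_erosion(forest, structure, iterations=edge_width)
            classes = np.full(forest.shape, "", dtype="U5")
            classes[core] = "core"
            components, n = label(forest, structure)
            for i in range(1, n + 1):
                component = components == i
                non_core = component & ~core
                classes[non_core] = "edge" if core[component].any() else "patch"
            return classes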

  13. Nuclear Thermal Propulsion: A Joint NASA/DOE/DOD Workshop

    NASA Technical Reports Server (NTRS)

    Clark, John S. (Editor)

    1991-01-01

    Papers presented at the joint NASA/DOE/DOD workshop on nuclear thermal propulsion are compiled. The following subject areas are covered: nuclear thermal propulsion programs; Rover/NERVA and NERVA systems; Low Pressure Nuclear Thermal Rocket (LPNTR); particle bed reactor nuclear rocket; hybrid propulsion systems; wire core reactor; pellet bed reactor; foil reactor; Droplet Core Nuclear Rocket (DCNR); open cycle gas core nuclear rockets; vapor core propulsion reactors; nuclear light bulb; Nuclear rocket using Indigenous Martian Fuel (NIMF); mission analysis; propulsion and reactor technology; development plans; and safety issues.

  14. Mapping Soil Carbon in the Yukon Kuskokwim River Delta Alaska

    NASA Astrophysics Data System (ADS)

    Natali, S.; Fiske, G.; Schade, J. D.; Mann, P. J.; Holmes, R. M.; Ludwig, S.; Melton, S.; Sae-lim, N.; Jardine, L. E.; Navarro-Perez, E.

    2017-12-01

    Arctic river deltas are hotspots for carbon storage, occupying <1% of the pan-Arctic watershed but containing >10% of carbon stored in arctic permafrost. The Yukon Kuskokwim (YK) Delta, Alaska is located in the lower latitudinal range of the northern permafrost region in an area of relatively warm permafrost that is particularly vulnerable to warming climate. Active layer depths range from 50 cm on peat plateaus to >100 cm in wetland and aquatic ecosystems. The size of the soil organic carbon pool and vulnerability of the carbon in the YK Delta is a major unknown and is critically important as climate warming and increasing fire frequency may make this carbon vulnerable to transport to aquatic and marine systems and the atmosphere. To characterize the size and distribution of soil carbon pools in the YK Delta, we mapped the land cover of a 1910 km2 watershed located in a region of the YK Delta that was impacted by fire in 2015. The map product was the result of an unsupervised classification using the Weka K Means clustering algorithm implemented in Google's Earth Engine. Inputs to the classification were Worldview2 resolution optical imagery (1m), Arctic DEM (5m), and Sentinel 2 level 1C multispectral imagery, including NDVI, (10 m). We collected 100 soil cores (0-30 cm) from sites of different land cover and landscape position, including moist and dry peat plateaus, high and low intensity burned plateaus, fens, and drained lakes; 13 lake sediment cores (0-50 cm); and 20 surface permafrost cores (to 100 cm) from burned and unburned peat plateaus. Active layer and permafrost soils were analyzed for organic matter content, soil moisture content, and carbon and nitrogen pools (30 and 100 cm). Soil carbon content varied across the landscape; average carbon content values for lake sediments were 12% (5- 17% range), fens 26% (9-44%), unburned peat plateaus 41% (34-44%), burned peat plateaus 19% (7-34%). These values will be used to estimate soil carbon pools, which will be applied to the spatial extent of each landcover class in our map, yielding a watershed-wide and spatially explicit map of soil carbon in the YK Delta. This map will provide the basis for understanding where carbon is stored in the watershed and the vulnerability of that carbon to climate change and fire.

  15. Optimal shortening of uniform covering arrays

    PubMed Central

    Rangel-Valdez, Nelson; Avila-George, Himer; Carrizalez-Turrubiates, Oscar

    2017-01-01

    Software test suites based on the concept of interaction testing are very useful for testing software components in an economical way. Test suites of this kind may be created using mathematical objects called covering arrays. A covering array, denoted by CA(N; t, k, v), is an N × k array over Z_v = {0, …, v−1} with the property that every N × t sub-array covers all t-tuples from Z_v^t at least once. Covering arrays can be used to test systems in which failures occur as a result of interactions among components or subsystems. They are often used in areas such as hardware Trojan detection, software testing, and network design. Because system testing is expensive, it is critical to reduce the amount of testing required. This paper addresses the Optimal Shortening of Covering ARrays (OSCAR) problem, an optimization problem whose objective is to construct, from an existing covering array matrix of uniform level, an array with dimensions of (N − δ) × (k − Δ) such that the number of missing t-tuples is minimized. Two applications of the OSCAR problem are (a) to produce smaller covering arrays from larger ones and (b) to obtain quasi-covering arrays (covering arrays in which the number of missing t-tuples is small) to be used as input to a meta-heuristic algorithm that produces covering arrays. In addition, it is proven that the OSCAR problem is NP-complete, and twelve different algorithms are proposed to solve it. An experiment was performed on 62 problem instances, and the results demonstrate the effectiveness of solving the OSCAR problem to facilitate the construction of new covering arrays. PMID:29267343
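
    The definition above translates directly into a check. The sketch below counts the missing t-tuples of an array, which is the quantity the OSCAR problem minimizes; an array is a covering array of strength t exactly when the count is zero. It is a brute-force illustration of the definition, not one of the twelve algorithms proposed in the paper.

        from itertools import combinations, product

        def missing_tuples(A, t, v):
            """Number of t-tuples not covered by the array A (a list of rows over
            {0, ..., v-1}); A is a covering array of strength t when this is 0."""
            k = len(A[0])
            needed = set(product(range(v), repeat=t))
            total = 0
            for cols in combinations(range(k), t):
                seen = {tuple(row[c] for c in cols) for row in A}
                total += len(needed - seen)
            return total

        # CA(4; 2, 3, 2): every pair of columns covers all four 2-tuples over {0, 1}
        A = [[0, 0, 0],
             [0, 1, 1],
             [1, 0, 1],
             [1, 1, 0]]
        assert missing_tuples(A, t=2, v=2) == 0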

  16. A modified approach combining FNEA and watershed algorithms for segmenting remotely-sensed optical images

    NASA Astrophysics Data System (ADS)

    Liu, Likun

    2018-01-01

    In the field of remote sensing image processing, image segmentation is a preliminary step for later analysis, semi-automatic human interpretation, and fully automatic machine recognition and learning. Since 2000, the object-oriented approach to remote sensing image processing and its underlying ideas have prevailed. The core of the approach is the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. This paper focuses on research into and improvement of that algorithm: existing segmentation algorithms are analyzed and the watershed algorithm is selected as the optimal initialization. The algorithm is then modified by adjusting an area parameter and further combining it with a heterogeneity parameter. After that, several experiments are carried out to show that the modified FNEA algorithm, compared with a traditional pixel-based method (an FCM algorithm based on neighborhood information) and the plain combination of FNEA and watershed, achieves better segmentation results.

  17. Dating the Vostok ice core record by importing the Devils Hole chronology

    USGS Publications Warehouse

    Landwehr, J.M.; Winograd, I.J.

    2001-01-01

    The development of an accurate chronology for the Vostok record continues to be an open research question because these invaluable ice cores cannot be dated directly. Depth-to-age relationships have been developed using many different approaches, but published age estimates are inconsistent, even for major paleoclimatic events. We have developed a chronology for the Vostok deuterium paleotemperature record using a simple and objective algorithm to transfer ages of major paleoclimatic events from the radiometrically dated 500,000-year ??18O-paleotemperature record from Devils Hole, Nevada. The method is based only on a strong inference that major shifts in paleotemperature recorded at both locations occurred synchronously, consistent with an atmospheric teleconnection. The derived depth-to-age relationship conforms with the physics of ice compaction, and internally produces ages for climatic events 5.4 and 11.24 which are consistent with the externally assigned ages that the Vostok team needed to assume in order to derive their most recent chronology, GT4. Indeed, the resulting V-DH chronology is highly correlated with GT4 because of the unexpected correspondence even in the timing of second-order climatic events that were not constrained by the algorithm. Furthermore, the algorithm developed herein is not specific to this problem; rather, the procedure can be used whenever two paleoclimate records are proxies for the same physical phenomenon, and paleoclimatic conditions forcing the two records can be considered to have occurred contemporaneously. The ability of the algorithm to date the East Antarctic Dome Fuji core is also demonstrated.

  18. Information Orientation, Information Technology Governance, and Information Technology Service Management: A Multi-Level Approach for Teaching the MBA Core Information Systems Course

    ERIC Educational Resources Information Center

    Beachboard, John; Aytes, Kregg

    2011-01-01

    Core MBA IT courses have tended to be survey courses that cover important topics but often do not sufficiently engage students. The result is that many top-ranked MBA programs have not found such courses useful enough to include in their core MBA requirements. In this paper, we present a design of an MBA course emphasizing information technology…

  19. Validation of Core Temperature Estimation Algorithm

    DTIC Science & Technology

    2016-01-29

    Figure captions excerpted from the report: (a) scatter plots of observed versus estimated core temperature, and of observed versus estimated PSI, each with the line of identity (dashed), the least-squares regression line (solid), and the line equation in the top left corner; (b) Bland-Altman plots for comparison. The root mean squared error (RMSE) was also computed, as given by Equation 2 of the report.
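
    The excerpt cites "Equation 2" for the RMSE without reproducing it; the standard definition, written here for n pairs of observed and estimated core temperatures, is:

        $$ \mathrm{RMSE} \;=\; \sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(T_{\mathrm{obs},i}-T_{\mathrm{est},i}\bigr)^{2}} $$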

  20. A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor

    PubMed Central

    Tayara, Hilal; Ham, Woonchul; Chong, Kil To

    2016-01-01

    This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single pass image segmentation and Feature Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in FPGA. Coplanar PosIT algorithm was implemented on the Nios II soft-core processor supplied with floating point hardware for accelerating floating point operations. Trigonometric functions have been approximated using Taylor series and cubic approximation using Lagrange polynomials. Inverse square root method has been implemented for approximating square root computations. Real time results have been achieved and pixel streams have been processed on the fly without any need to buffer the input frame for further implementation. PMID:27983714

  1. A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor.

    PubMed

    Tayara, Hilal; Ham, Woonchul; Chong, Kil To

    2016-12-15

    This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single pass image segmentation and Feature Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in FPGA. Coplanar PosIT algorithm was implemented on the Nios II soft-core processor supplied with floating point hardware for accelerating floating point operations. Trigonometric functions have been approximated using Taylor series and cubic approximation using Lagrange polynomials. Inverse square root method has been implemented for approximating square root computations. Real time results have been achieved and pixel streams have been processed on the fly without any need to buffer the input frame for further implementation.

  2. GaiaGrid : Its Implications and Implementation

    NASA Astrophysics Data System (ADS)

    Ansari, S. G.; Lammers, U.; Ter Linden, M.

    2005-12-01

    Gaia is an ESA space mission to determine positions of 1 billion objects in the Galaxy at micro-arcsecond precision. The data analysis and processing requirements of the mission involves about 20 institutes across Europe, each providing specific algorithms for specific tasks, which range from relativistic effects on positional determination, classification, astrometric binary star detection, photometric analysis, spectroscopic analysis etc. In an initial phase, a study has been ongoing over the past three years to determine the complexity of Gaia's data processing. Two processing categories have materialised: core and shell. While core deals with routine data processing, shell tasks are algorithms to carry out data analysis, which involves the Gaia Community at large. For this latter category, we are currently experimenting with use of Grid paradigms to allow access to the core data and to augment processing power to simulate and analyse the data in preparation for the actual mission. We present preliminary results and discuss the sociological impact of distributing the tasks amongst the community.

  3. Polarization image segmentation of radiofrequency ablated porcine myocardial tissue

    PubMed Central

    Ahmad, Iftikhar; Gribble, Adam; Murtza, Iqbal; Ikram, Masroor; Pop, Mihaela; Vitkin, Alex

    2017-01-01

    Optical polarimetry has previously imaged the spatial extent of a typical radiofrequency ablated (RFA) lesion in myocardial tissue, exhibiting significantly lower total depolarization at the necrotic core compared to healthy tissue, and intermediate values at the RFA rim region. Here, total depolarization in ablated myocardium was used to segment the total depolarization image into three (core, rim and healthy) zones. A local fuzzy thresholding algorithm was used for this multi-region segmentation, and then compared with a ground truth segmentation obtained from manual demarcation of RFA core and rim regions on the histopathology image. Quantitative comparison of the algorithm segmentation results was performed with evaluation metrics such as dice similarity coefficient (DSC = 0.78 ± 0.02 and 0.80 ± 0.02), sensitivity (Sn = 0.83 ± 0.10 and 0.91 ± 0.08), specificity (Sp = 0.76 ± 0.17 and 0.72 ± 0.17) and accuracy (Acc = 0.81 ± 0.09 and 0.71 ± 0.10) for RFA core and rim regions, respectively. This automatic segmentation of parametric depolarization images suggests a novel application of optical polarimetry, namely its use in objective RFA image quantification. PMID:28380013
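
    For reference, the evaluation metrics quoted above are standard functions of the confusion-matrix counts; the sketch below computes them for a predicted binary mask against a ground-truth mask. It is a generic illustration, not the authors' evaluation code.

        import numpy as np

        def segmentation_metrics(pred, truth):
            """Dice similarity coefficient, sensitivity, specificity and accuracy
            for a predicted binary mask against a ground-truth binary mask."""
            pred = np.asarray(pred, dtype=bool)
            truth = np.asarray(truth, dtype=bool)
            tp = np.sum(pred & truth)
            tn = np.sum(~pred & ~truth)
            fp = np.sum(pred & ~truth)
            fn = np.sum(~pred & truth)
            return {
                "DSC": 2 * tp / (2 * tp + fp + fn),
                "Sn": tp / (tp + fn),
                "Sp": tn / (tn + fp),
                "Acc": (tp + tn) / (tp + tn + fp + fn),
            }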

  4. LOD-based clustering techniques for efficient large-scale terrain storage and visualization

    NASA Astrophysics Data System (ADS)

    Bao, Xiaohong; Pajarola, Renato

    2003-05-01

    Large multi-resolution terrain data sets are usually stored out-of-core. To visualize terrain data at interactive frame rates, the data needs to be organized on disk, loaded into main memory part by part, then rendered efficiently. Many main-memory algorithms have been proposed for efficient vertex selection and mesh construction. Organization of terrain data on disk is quite difficult because the error, the triangulation dependency and the spatial location of each vertex all need to be considered. Previous terrain clustering algorithms did not consider the per-vertex approximation error of individual terrain data sets. Therefore, the vertex sequences on disk are exactly the same for any terrain. In this paper, we propose a novel clustering algorithm which introduces the level-of-detail (LOD) information to terrain data organization to map multi-resolution terrain data to external memory. In our approach the LOD parameters of the terrain elevation points are reflected during clustering. The experiments show that dynamic loading and paging of terrain data at varying LOD is very efficient and minimizes page faults. Additionally, the preprocessing of this algorithm is very fast and works from out-of-core.

  5. Intercomparison of Satellite-Derived Snow-Cover Maps

    NASA Technical Reports Server (NTRS)

    Hall, Dorothy K.; Tait, Andrew B.; Foster, James L.; Chang, Alfred T. C.; Allen, Milan

    1999-01-01

    In anticipation of the launch of the Earth Observing System (EOS) Terra, and the PM-1 spacecraft in 1999 and 2000, respectively, efforts are ongoing to determine errors of satellite-derived snow-cover maps. EOS Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Microwave Scanning Radiometer-E (AMSR-E) snow-cover products will be produced. For this study we compare snow maps covering the same study area acquired from different sensors using different snow-mapping algorithms. Four locations are studied: 1) southern Saskatchewan; 2) a part of New England (New Hampshire, Vermont and Massachusetts) and eastern New York; 3) central Idaho and western Montana; and 4) parts of North and South Dakota. Snow maps were produced using a prototype MODIS snow-mapping algorithm used on Landsat Thematic Mapper (TM) scenes of each study area at 30-m resolution and when the TM data were degraded to 1-km resolution. National Operational Hydrologic Remote Sensing Center (NOHRSC) 1-km resolution snow maps were also used, as were snow maps derived from 1/2 deg. x 1/2 deg. resolution Special Sensor Microwave Imager (SSM/I) data. A land-cover map derived from the International Geosphere-Biosphere Program (IGBP) land-cover map of North America was also registered to the scenes. The TM, NOHRSC and SSM/I snow maps, and land-cover maps were compared digitally. In most cases, TM-derived maps show less snow cover than the NOHRSC and SSM/I maps because areas of incomplete snow cover in forests (e.g., tree canopies, branches and trunks) are seen in the TM data, but not in the coarser-resolution maps. The snow maps generally agree with respect to the spatial variability of the snow cover. The 30-m resolution TM data provide the most accurate snow maps, and are thus used as the baseline for comparison with the other maps. Comparisons show that the percent change in amount of snow cover relative to the 30-m resolution TM maps is lowest using the TM 1-km resolution maps, ranging from 0 to 40%. The highest percent change (less than 100%) is found in the New England study area, probably due to the presence of patchy snow cover. A scene with patchy snow cover is more difficult to map accurately than is a scene with a well-defined snowline such as is found on the North and South Dakota scene, where the percent change ranged from 0 to 40%. There are also some important differences in the amount of snow mapped using the two different SSM/I algorithms because they utilize different channels.

  6. Scaled Runge-Kutta algorithms for handling dense output

    NASA Technical Reports Server (NTRS)

    Horn, M. K.

    1981-01-01

    Low order Runge-Kutta algorithms are developed which determine the solution of a system of ordinary differential equations at any point within a given integration step, as well as at the end of each step. The scaled Runge-Kutta methods are designed to be used with existing Runge-Kutta formulas, using the derivative evaluations of these defining algorithms as the core of the system. For a slight increase in computing time, the solution may be generated within the integration step, improving the efficiency of the Runge-Kutta algorithms, since the step length need no longer be severely reduced to coincide with the desired output point. Scaled Runge-Kutta algorithms are presented for orders 3 through 5, along with accuracy comparisons between the defining algorithms and their scaled versions for a test problem.
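
    As a concrete illustration of dense output within a step, the sketch below pairs a classical RK4 step with a cubic Hermite interpolant built from the endpoint values and slopes. This is a generic stand-in, not the scaled Runge-Kutta formulas of the report; the function names and the test problem are illustrative assumptions.

    ```python
    import numpy as np

    def rk4_step_with_dense_output(f, t0, y0, h):
        """One classical RK4 step plus a cubic Hermite interpolant for output
        at any point inside the step (a generic stand-in for the scaled
        Runge-Kutta dense-output formulas described in the report)."""
        k1 = f(t0, y0)
        k2 = f(t0 + h / 2, y0 + h / 2 * k1)
        k3 = f(t0 + h / 2, y0 + h / 2 * k2)
        k4 = f(t0 + h, y0 + h * k3)
        y1 = y0 + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        f1 = f(t0 + h, y1)                       # slope at the right endpoint

        def dense(t):
            # Cubic Hermite interpolation using endpoint values and slopes.
            s = (t - t0) / h
            h00 = 2 * s**3 - 3 * s**2 + 1
            h10 = s**3 - 2 * s**2 + s
            h01 = -2 * s**3 + 3 * s**2
            h11 = s**3 - s**2
            return h00 * y0 + h10 * h * k1 + h01 * y1 + h11 * h * f1

        return y1, dense

    # Example: y' = -y; get the solution at the midpoint without re-stepping.
    y1, dense = rk4_step_with_dense_output(lambda t, y: -y, 0.0, 1.0, 0.1)
    print(dense(0.05), np.exp(-0.05))
    ```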

  7. History matching by spline approximation and regularization in single-phase areal reservoirs

    NASA Technical Reports Server (NTRS)

    Lee, T. Y.; Kravaris, C.; Seinfeld, J.

    1986-01-01

    An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
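
    The interplay between the regularization term and the conjugate gradient core can be shown on a much simpler linear analogue. The sketch below solves a Tikhonov-regularized least-squares problem by conjugate gradients on the normal equations; it is an assumption-laden toy, since the actual history matching objective is nonlinear and evaluates a reservoir simulator.

    ```python
    import numpy as np

    def cg_regularized_lsq(G, d, alpha, n_iter=200, tol=1e-10):
        """Minimise ||G m - d||^2 + alpha ||m||^2 by conjugate gradients on the
        normal equations (G^T G + alpha I) m = G^T d.  A toy linear analogue of
        regularized history matching; the regularization term is what makes the
        otherwise ill-posed fit well posed."""
        A = G.T @ G + alpha * np.eye(G.shape[1])
        b = G.T @ d
        m = np.zeros(G.shape[1])
        r = b - A @ m                      # residual
        p = r.copy()                       # search direction
        rs = r @ r
        for _ in range(n_iter):
            Ap = A @ p
            step = rs / (p @ Ap)
            m += step * p
            r -= step * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return m
    ```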

  8. Exact diagonalization of quantum lattice models on coprocessors

    NASA Astrophysics Data System (ADS)

    Siro, T.; Harju, A.

    2016-10-01

    We implement the Lanczos algorithm on an Intel Xeon Phi coprocessor and compare its performance to a multi-core Intel Xeon CPU and an NVIDIA graphics processor. The Xeon and the Xeon Phi are parallelized with OpenMP and the graphics processor is programmed with CUDA. The performance is evaluated by measuring the execution time of a single step in the Lanczos algorithm. We study two quantum lattice models with different particle numbers, and conclude that for small systems, the multi-core CPU is the fastest platform, while for large systems, the graphics processor is the clear winner, reaching speedups of up to 7.6 compared to the CPU. The Xeon Phi outperforms the CPU with sufficiently large particle number, reaching a speedup of 2.5.
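
    The kernel that the timings above measure is essentially one Lanczos iteration, dominated by a (sparse) matrix-vector product. A minimal NumPy version of that single step is sketched below purely for orientation; the paper's implementations use OpenMP and CUDA kernels with sparse storage rather than the dense matvec shown here.

    ```python
    import numpy as np

    def lanczos_step(H, v_prev, v_curr, beta_prev):
        """One Lanczos iteration for a Hermitian matrix H.  The dominant cost is
        the matrix-vector product, which is what gets parallelized on the CPU,
        Xeon Phi and GPU.  Returns (alpha, beta, v_next)."""
        w = H @ v_curr                       # matrix-vector product (the hot spot)
        alpha = np.vdot(v_curr, w).real      # diagonal element of the tridiagonal matrix
        w -= alpha * v_curr + beta_prev * v_prev
        beta = np.linalg.norm(w)             # off-diagonal element
        v_next = w / beta if beta > 0 else w
        return alpha, beta, v_next
    ```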

  9. Correlation of Miocene strata on the submarine St. Croix Ridge and onland St. Croix, US Virgin Islands

    NASA Astrophysics Data System (ADS)

    von Salis, Katharina; Speed, Robert

    1995-03-01

    The nannofossils of a hydraulic piston core from the steep scarp between the St. Croix Ridge and Virgin Islands Basin were restudied. The cored sediments were formerly thought to represent a Pliocene debris flow; we interpret them as an early Miocene (NN1/2) hemipelagic deposit. We correlate the seismic unit sampled by the piston core with the Kingshill-Jealousy Formation present on St. Croix. These sediments likely belong to an extensive, thick, deep marine cover of the St. Croix Ridge, deposited on a metamorphic-igneous basement between early Eocene and early Miocene time. Faulting did not evidently affect this sediment cover until the late Neogene.

  10. Study of positron annihilation with core electrons at the clean and oxygen covered Ag(001) surface

    NASA Astrophysics Data System (ADS)

    Joglekar, P.; Shastry, K.; Olenga, A.; Fazleev, N. G.; Weiss, A. H.

    2013-03-01

    In this paper we present measurements of the energy spectrum of electrons emitted as a result of Positron Annihilation Induced Auger Electron Emission (PAES) from a clean and oxygen covered Ag(100) surface using a series of incident beam energies ranging from 20 eV down to 2 eV. A peak was observed at ~40 eV corresponding to the N23VV Auger transition, in agreement with previous PAES studies. Experimental results were investigated theoretically by calculations of positron states and annihilation probabilities of surface-trapped positrons with relevant core electrons at the clean and oxygen covered Ag(100) surface. An ab-initio investigation of the stability and associated electronic properties of different adsorption phases of oxygen on Ag(100) has been performed on the basis of density functional theory using the DMol3 code. The computed positron binding energy, positron surface state wave function, and positron annihilation probabilities of surface-trapped positrons with relevant core electrons demonstrate their sensitivity to oxygen coverage, elemental content, atomic structure of the topmost layers of surfaces, and charge transfer effects. Theoretical results are compared with experimental data. This work was supported in part by the National Science Foundation Grant # DMR-0907679.

  11. Study of Computational Structures for Multiobject Tracking Algorithms

    DTIC Science & Technology

    1986-12-01

    (DTIC report documentation fields omitted.) Authors: Allen, Thomas G.; Kurien, Thomas; Washburn, Robert B., Jr. The surviving abstract fragments indicate that possible restructurings of the tracking algorithm that increase the amount of available parallelism are investigated, and that the structure and computational requirements of the track-oriented approach are examined.

  12. Safety monitoring and reactor transient interpreter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hench, J. E.; Fukushima, T. Y.

    1983-12-20

    An apparatus which monitors a subset of control panel inputs in a nuclear reactor power plant, the subset being those indicators of plant status which are of a critical nature during an unusual event. A display (10) is provided for displaying primary information (14) as to whether the core is covered and likely to remain covered, including information as to the status of subsystems needed to cool the core and maintain core integrity. Secondary display information (18,20) is provided which can be viewed selectively for more detailed information when an abnormal condition occurs. The primary display information has messages (24) for prompting an operator as to which one of a number of pushbuttons (16) to press to bring up the appropriate secondary display (18,20). The apparatus utilizes a thermal-hydraulic analysis to more accurately determine key parameters (such as water level) from other measured parameters, such as power, pressure, and flow rate.

  13. TRMM precipitation analysis of extreme storms in South America: Bias and climatological contribution

    NASA Astrophysics Data System (ADS)

    Rasmussen, K. L.; Houze, R.; Zuluaga, M. D.; Choi, S. L.; Chaplin, M.

    2013-12-01

    The TRMM (Tropical Rainfall Measuring Mission) satellite was designed both to measure spatial and temporal variation of tropical rainfall around the globe and to understand the factors controlling the precipitation. TRMM observations have led to the realization that storms just east of the Andes in southeastern South America are among the most intense deep convective storms in the world. For a complete perspective of the impact of intense precipitation systems on the hydrologic cycle in South America, it is necessary to assess the contribution from various forms of extreme storms to the climatological rainfall. However, recent studies have suggested that the TRMM Precipitation Radar (PR) algorithm significantly underestimates surface rainfall in deep convection over land. Prior to investigating the climatological behavior, this research first investigates the range of the rain bias in storms containing four different types of extreme radar echoes: deep convective cores, deep and wide convective cores, wide convective cores, and broad stratiform regions over South America. The TRMM PR algorithm exhibits bias in all four extreme echo types considered here when the algorithm rates are compared to a range of conventional Z-R relations. Storms with deep convective cores, defined as high reflectivity echo volumes that extend above 10 km in altitude, show the greatest underestimation, and the bias is unrelated to their echo top height. The bias in wide convective cores, defined as high reflectivity echo volumes that extend horizontally over 1,000 km², relates to the echo top, indicating that storms with significant mixed phase and ice hydrometeors are similarly affected by assumptions in the TRMM PR algorithm. The subtropical region tends to have more intense precipitating systems than the tropics, but the relationship between the TRMM PR rain bias and storm type is the same regardless of the climatological regime. The most extreme storms are typically not collocated with regions of high climatological precipitation. A quantitative approach that accounts for the previously described bias using TRMM PR data is employed to investigate the role of the most extreme precipitating systems on the hydrological cycle in South America. These data are first used to investigate the relative contribution of precipitation from the TRMM-identified echo cores to each separate storm in which the convective cores are embedded. The second part of the study assesses how much of the climatological rainfall in South America is accounted for by storms containing deep convective, wide convective, and broad stratiform echo components. Systems containing these echoes produce very different hydrologic responses. From a hydrologic and climatological viewpoint, this empirical knowledge is critical, as the type of runoff and flooding that may occur depends on the specific character of the convective storm and has broad implications for the hydrological cycle in this region.

  14. Accuracy assessments and areal estimates using two-phase stratified random sampling, cluster plots, and the multivariate composite estimator

    Treesearch

    Raymond L. Czaplewski

    2000-01-01

    Consider the following example of an accuracy assessment. Landsat data are used to build a thematic map of land cover for a multicounty region. The map classifier (e.g., a supervised classification algorithm) assigns each pixel into one category of land cover. The classification system includes 12 different types of forest and land cover: black spruce, balsam fir,...

  15. A high performance load balance strategy for real-time multicore systems.

    PubMed

    Cho, Keng-Mao; Tsai, Chun-Wei; Chiu, Yi-Shiuan; Yang, Chu-Sing

    2014-01-01

    Finding ways to distribute workloads to each processor core and efficiently reduce power consumption is of vital importance, especially for real-time systems. In this paper, a novel scheduling algorithm is proposed for real-time multicore systems to balance the computation loads and save power. The developed algorithm, called power and deadline-aware multicore scheduling (PDAMS), simultaneously considers multiple criteria, including a novel factor and the task deadline. Experiment results show that the proposed algorithm can reduce energy consumption by up to 54.2% and also reduce the number of missed deadlines, as compared to the other scheduling algorithms outlined in this paper.
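
    The abstract does not spell out the PDAMS heuristics, so the sketch below is only a hedged illustration of the general idea of deadline-aware load balancing: tasks are taken in deadline order and each is placed on the currently least-loaded core. The task tuples, the core bookkeeping, and the omission of any power/DVFS model are all assumptions made for the example.

    ```python
    import heapq

    def greedy_deadline_balance(tasks, n_cores):
        """Toy load balancer: tasks are (exec_time, deadline) tuples.  Sort by
        deadline (EDF-style) and always place the next task on the currently
        least-loaded core.  This only illustrates the balancing idea; PDAMS
        additionally weighs power states, which are not modelled here."""
        cores = [(0.0, i, []) for i in range(n_cores)]   # (load, core id, assigned tasks)
        heapq.heapify(cores)
        missed = 0
        for exec_time, deadline in sorted(tasks, key=lambda t: t[1]):
            load, cid, assigned = heapq.heappop(cores)
            finish = load + exec_time
            if finish > deadline:
                missed += 1                              # task cannot meet its deadline
            assigned.append((exec_time, deadline))
            heapq.heappush(cores, (finish, cid, assigned))
        return sorted(cores, key=lambda c: c[1]), missed
    ```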

  16. featsel: A framework for benchmarking of feature selection algorithms and cost functions

    NASA Astrophysics Data System (ADS)

    Reis, Marcelo S.; Estrela, Gustavo; Ferreira, Carlos Eduardo; Barrera, Junior

    In this paper, we introduce featsel, a framework for benchmarking of feature selection algorithms and cost functions. This framework allows the user to deal with the search space as a Boolean lattice and has its core coded in C++ for computational efficiency purposes. Moreover, featsel includes Perl scripts to add new algorithms and/or cost functions, generate random instances, plot graphs and organize results into tables. In addition, the framework already comes with dozens of algorithms and cost functions for benchmarking experiments. We also provide illustrative examples, in which featsel outperforms the popular Weka workbench in feature selection procedures on data sets from the UCI Machine Learning Repository.

  17. Interaction sorting method for molecular dynamics on multi-core SIMD CPU architecture.

    PubMed

    Matvienko, Sergey; Alemasov, Nikolay; Fomin, Eduard

    2015-02-01

    Molecular dynamics (MD) is widely used in computational biology for studying binding mechanisms of molecules, molecular transport, conformational transitions, protein folding, etc. The method is computationally expensive; thus, the demand for the development of novel, much more efficient algorithms is still high. Therefore, the new algorithm designed in 2007 and called interaction sorting (IS) clearly attracted interest, as it outperformed the most efficient MD algorithms. In this work, a new IS modification is proposed which allows the algorithm to utilize SIMD processor instructions. This paper shows that the improvement provides an additional gain in performance, 9% to 45% in comparison to the original IS method.

  18. A High Performance Load Balance Strategy for Real-Time Multicore Systems

    PubMed Central

    Cho, Keng-Mao; Tsai, Chun-Wei; Chiu, Yi-Shiuan; Yang, Chu-Sing

    2014-01-01

    Finding ways to distribute workloads to each processor core and efficiently reduce power consumption is of vital importance, especially for real-time systems. In this paper, a novel scheduling algorithm is proposed for real-time multicore systems to balance the computation loads and save power. The developed algorithm, called power and deadline-aware multicore scheduling (PDAMS), simultaneously considers multiple criteria, including a novel factor and the task deadline. Experiment results show that the proposed algorithm can reduce energy consumption by up to 54.2% and also reduce the number of missed deadlines, as compared to the other scheduling algorithms outlined in this paper. PMID:24955382

  19. Land use/cover classification in the Brazilian Amazon using satellite images.

    PubMed

    Lu, Dengsheng; Batistella, Mateus; Li, Guiying; Moran, Emilio; Hetrick, Scott; Freitas, Corina da Costa; Dutra, Luciano Vieira; Sant'anna, Sidnei João Siqueira

    2012-09-01

    Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distribution is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and limitations of remote sensing data per se. This paper reviews experiments related to land use/cover classification in the Brazilian Amazon for a decade. Through comprehensive analysis of the classification results, it is concluded that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporation of suitable textural images into multispectral bands and use of segmentation-based method are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results. However, they often require more time to achieve parametric optimization. Proper use of hierarchical-based methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data.

  20. Land use/cover classification in the Brazilian Amazon using satellite images

    PubMed Central

    Lu, Dengsheng; Batistella, Mateus; Li, Guiying; Moran, Emilio; Hetrick, Scott; Freitas, Corina da Costa; Dutra, Luciano Vieira; Sant’Anna, Sidnei João Siqueira

    2013-01-01

    Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distribution is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and limitations of remote sensing data per se. This paper reviews experiments related to land use/cover classification in the Brazilian Amazon for a decade. Through comprehensive analysis of the classification results, it is concluded that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporation of suitable textural images into multispectral bands and use of segmentation-based method are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results. However, they often require more time to achieve parametric optimization. Proper use of hierarchical-based methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data. PMID:24353353

  1. Comparison of Support Vector Machine, Neural Network, and CART Algorithms for the Land-Cover Classification Using Limited Training Data Points

    EPA Science Inventory

    Support vector machine (SVM) was applied for land-cover characterization using MODIS time-series data. Classification performance was examined with respect to training sample size, sample variability, and landscape homogeneity (purity). The results were compared to two convention...

  2. Pathways from Toddler Information Processing to Adolescent Lexical Proficiency

    ERIC Educational Resources Information Center

    Rose, Susan A.; Feldman, Judith F.; Jankowski, Jeffery J.

    2015-01-01

    This study examined the relation of 3-year core information-processing abilities to lexical growth and development. The core abilities covered four domains--memory, representational competence (cross-modal transfer), processing speed, and attention. Lexical proficiency was assessed at 3 and 13 years with the Peabody Picture Vocabulary Test (PPVT)…

  3. DNA Copy Number Signature to Predict Recurrence in Early-Stage Ovarian Cancer

    DTIC Science & Technology

    2015-08-01

    (DTIC report documentation fields omitted; reporting period 1 August 2014 to 31 July 2015.) The surviving abstract fragments indicate that the work involved two core facilities: 1) the Partners Translational Core in Cambridge, MA, and 2) the RPCI Genomics Shared Resources at Roswell Park Cancer Institute, with results reported for both cores.

  4. Implications of Multi-Core Architectures on the Development of Multiple Independent Levels of Security (MILS) Compliant Systems

    DTIC Science & Technology

    2012-10-01

    (DTIC report documentation fields and table-of-contents fragments omitted; reporting period March 2010 to April 2012.) The surviving fragments indicate that the report examines the implications of multi-core architectures on the development of MILS-compliant systems, including a framework for multicore information flow analysis and a hypothetical reference architecture.

  5. Economics America: Content Statements for State Standards in Economics, K-12.

    ERIC Educational Resources Information Center

    National Council on Economic Education, New York, NY.

    This updated list of content standards covering economics is suggested for states developing their own economics standards. The list outlines the core requirements for basic literacy in economics for grades K-12. The statements are similar to designated content standards from other core subject areas. Key economic concepts describing their basic…

  6. JOBS. A Partnership between Education and Industry.

    ERIC Educational Resources Information Center

    Mann, Sandra; And Others

    This packet contains 15 lessons developed in a workplace basic skills project for the metal casting industry established jointly by Central Alabama Community College and Robinson Foundry, Inc. The lessons cover the following topics: (1) green sand schedule; (2) the core room; (3) the core room (continued); (4) figuring time; (5) the cleaning room;…

  7. 78 FR 59652 - Certain Corrosion-Resistant Carbon Steel Flat Products From the Republic of Korea: Notice of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-27

    ... DEPARTMENT OF COMMERCE International Trade Administration [A-580-816] Certain Corrosion-Resistant... corrosion-resistant carbon steel flat products (``CORE'') from the Republic of Korea (``Korea''), pursuant... administrative review of the antidumping duty order on CORE from Korea covering the period of review (``POR'') of...

  8. Optofluidic tuning of multimode interference fiber filters

    NASA Astrophysics Data System (ADS)

    Antonio-Lopez, J. E.; May-Arrioja, D. A.; LiKamWa, P.

    2009-05-01

    We report on the optofluidic tuning of MMI-based bandpass filters. It is well known that MMI devices exhibit their highest sensitivity when their diameter (D) is modified, since they have a D² wavelength dependence. In order to increase the MMF diameter we use a special fiber, called No-Core fiber, which is basically a MMF with a diameter of 125 μm with air as the cover. Therefore, when this No-Core fiber is immersed in liquids with different refractive indexes, the effective width (fundamental mode width) of the No-Core fiber is increased as a result of the Goos-Hänchen shift, and thus the peak wavelength is tuned. A tunability of almost 40 nm in going from water (n=1.333) to ethylene glycol (n=1.434) was easily obtained, with a minimum change in peak transmission, contrast, and bandwidth. Moreover, since replacing the entire liquid can be difficult, the device was placed vertically and the liquid level covering the No-Core fiber was raised in small steps. This provided a similar amount of tuning as before, but a more controllable tuning mechanism.

  9. High-performance 3D compressive sensing MRI reconstruction.

    PubMed

    Kim, Daehyun; Trzasko, Joshua D; Smelyanskiy, Mikhail; Haider, Clifton R; Manduca, Armando; Dubey, Pradeep

    2010-01-01

    Compressive Sensing (CS) is a nascent sampling and reconstruction paradigm that describes how sparse or compressible signals can be accurately approximated using many fewer samples than traditionally believed. In magnetic resonance imaging (MRI), where scan duration is directly proportional to the number of acquired samples, CS has the potential to dramatically decrease scan time. However, the computationally expensive nature of CS reconstructions has so far precluded their use in routine clinical practice; instead, more easily generated but lower-quality images continue to be used. We investigate the development and optimization of a proven inexact quasi-Newton CS reconstruction algorithm on several modern parallel architectures, including CPUs, GPUs, and Intel's Many Integrated Core (MIC) architecture. Our (optimized) baseline implementation on a quad-core Core i7 is able to reconstruct a 256 × 160 × 80 volume of the neurovasculature from an 8-channel, 10× undersampled data set within 56 seconds, which is already a significant improvement over existing implementations. The latest six-core Core i7 reduces the reconstruction time further to 32 seconds. Moreover, we show that the CS algorithm benefits from modern throughput-oriented architectures. Specifically, our CUDA-based implementation on an NVIDIA GTX480 reconstructs the same dataset in 16 seconds, while Intel's Knights Ferry (KNF) of the MIC architecture reduces the time even further, to 12 seconds. Such a level of performance allows the neurovascular dataset to be reconstructed within a clinically viable time.
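
    The paper's solver is an inexact quasi-Newton method, which is not reproduced here. As a hedged, much simpler stand-in, the sketch below runs a basic ISTA (proximal-gradient) iteration for a generic sparse recovery problem; it only illustrates why this class of reconstruction, a few matrix-vector products plus an elementwise shrinkage per iteration, maps well onto GPUs and many-core processors. All names and the dense operator A are assumptions for the example.

    ```python
    import numpy as np

    def ista(A, b, lam, n_iter=100):
        """Minimal ISTA iteration for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
        A simpler proximal-gradient stand-in for the paper's inexact
        quasi-Newton CS solver: each iteration is two matvecs plus an
        elementwise soft-threshold."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)           # gradient of the data-fit term
            z = x - grad / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return x
    ```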

  10. Hiding Techniques for Dynamic Encryption Text based on Corner Point

    NASA Astrophysics Data System (ADS)

    Abdullatif, Firas A.; Abdullatif, Alaa A.; al-Saffar, Amna

    2018-05-01

    A hiding technique for dynamic text encryption using an encoding table and a symmetric encryption method (the AES algorithm) is presented in this paper. The encoding table is generated dynamically from the MSBs of the cover-image points and is used as the first phase of encryption. The Harris corner-point algorithm is applied to the cover image to generate the corner points, which are used to generate a dynamic AES key for the second phase of text encryption. The embedding process uses the LSBs of the image pixels, excluding the Harris corner points, for greater robustness. Experimental results demonstrate that the proposed scheme achieves good embedding quality, error-free text recovery, and high PSNR values.
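
    To make the embedding step concrete, the sketch below writes payload bits into the least significant bits of a grayscale cover image while skipping pixels flagged as corner points. It is an assumption-based illustration only: the corner mask is taken as given (e.g. from a Harris detector), and the encoding-table and AES phases of the scheme are omitted.

    ```python
    import numpy as np

    def embed_lsb(cover, bits, corner_mask):
        """Embed a bit sequence into the least significant bits of a grayscale
        cover image, skipping pixels flagged in `corner_mask` (e.g. Harris
        corner points, which the scheme reserves for key generation).
        `cover` is a uint8 array, `bits` an iterable of 0/1 values."""
        stego = cover.copy()
        flat = stego.ravel()                   # view into the copy
        skip = corner_mask.ravel()
        bit_iter = iter(bits)
        for i in range(flat.size):
            if skip[i]:
                continue                       # leave corner pixels untouched
            try:
                b = next(bit_iter)
            except StopIteration:
                break                          # all payload bits embedded
            flat[i] = (flat[i] & 0xFE) | (b & 1)
        return stego
    ```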

  11. Multispectral and Panchromatic used Enhancement Resolution and Study Effective Enhancement on Supervised and Unsupervised Classification Land – Cover

    NASA Astrophysics Data System (ADS)

    Salman, S. S.; Abbas, W. A.

    2018-05-01

    The goal of this study is to support analysis of resolution enhancement and to examine its effect on classification methods that use the spectral information of the bands, with specific and quantitative approaches. The study introduces a method to enhance the resolution of Landsat 8 imagery by combining the 30-m-resolution spectral bands with the 15-m-resolution panchromatic band 8, given the importance of multispectral imagery for extracting land cover. Classification methods are used in this study to classify several land covers recorded from OLI-8 imagery. Data-mining methods can be classified as either supervised or unsupervised. In supervised methods there is a particular predefined target, meaning that the algorithm learns which values of the target are associated with which values of the predictor sample; the k-nearest neighbors and maximum likelihood algorithms are examined in this work as supervised methods. In unsupervised methods, no sample is identified as a target, and the data-extraction algorithm searches for structure and patterns among all the variables; this is represented here by the fuzzy C-means clustering method. The NDVI vegetation index is used to compare the results of the classification methods; the percentage of dense vegetation obtained with the maximum likelihood method gives the best results.

  12. A pipelined FPGA implementation of an encryption algorithm based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Thirer, Nonel

    2013-05-01

    With the evolution of digital data storage and exchange, it is essential to protect confidential information from any unauthorized access. High performance encryption algorithms have been developed and implemented in software and hardware, and many methods to attack the cipher text have also been developed. In recent years, the genetic algorithm has gained much interest in the cryptanalysis of cipher texts and also in encryption ciphers. This paper analyses the possibility of using a genetic algorithm as a multiple key sequence generator for an AES (Advanced Encryption Standard) cryptographic system, and also of using a three-stage pipeline (with four main blocks: Input data, AES Core, Key generator, Output data) to provide fast encryption and storage/transmission of a large amount of data.

  13. Substructure System Identification for Finite Element Model Updating

    NASA Technical Reports Server (NTRS)

    Craig, Roy R., Jr.; Blades, Eric L.

    1997-01-01

    This report summarizes research conducted under a NASA grant on the topic 'Substructure System Identification for Finite Element Model Updating.' The research concerns ongoing development of the Substructure System Identification Algorithm (SSID Algorithm), a system identification algorithm that can be used to obtain mathematical models of substructures, like Space Shuttle payloads. In the present study, particular attention was given to the following topics: making the algorithm robust to noisy test data, extending the algorithm to accept experimental FRF data that covers a broad frequency bandwidth, and developing a test analytical model (TAM) for use in relating test data to reduced-order finite element models.

  14. Scalable Triadic Analysis of Large-Scale Graphs: Multi-Core vs. Multi-Processor vs. Multi-Threaded Shared Memory Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, George; Marquez, Andres; Choudhury, Sutanay

    2012-09-01

    Triadic analysis encompasses a useful set of graph mining methods that is centered on the concept of a triad, which is a subgraph of three nodes and the configuration of directed edges across the nodes. Such methods are often applied in the social sciences as well as many other diverse fields. Triadic methods commonly operate on a triad census that counts the number of triads of every possible edge configuration in a graph. Like other graph algorithms, triadic census algorithms do not scale well when graphs reach tens of millions to billions of nodes. To enable the triadic analysis of large-scale graphs, we developed and optimized a triad census algorithm to efficiently execute on shared memory architectures. We will retrace the development and evolution of a parallel triad census algorithm. Over the course of several versions, we continually adapted the code's data structures and program logic to expose more opportunities to exploit parallelism on shared memory that would translate into improved computational performance. We will recall the critical steps and modifications that occurred during code development and optimization. Furthermore, we will compare the performances of triad census algorithm versions on three specific systems: Cray XMT, HP Superdome, and AMD multi-core NUMA machine. These three systems have shared memory architectures but with markedly different hardware capabilities to manage parallelism.
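
    For orientation, the sketch below shows a deliberately naive, serial triad enumeration: for every triple of nodes it counts how many of the six possible directed edges are present. A full census distinguishes all 16 triad isomorphism classes and, as the abstract explains, must be parallelized to reach large graphs; the simplified classification here is an assumption made to keep the example short.

    ```python
    from itertools import combinations

    def simplified_triad_census(nodes, edges):
        """Naive serial sketch of a triad census: for every triple of nodes,
        count how many of the six possible directed edges are present.  A real
        census distinguishes all 16 isomorphism classes and needs parallel,
        cache-aware data structures to scale past millions of nodes."""
        edge_set = set(edges)                      # directed edges as (u, v) pairs
        census = {k: 0 for k in range(7)}          # 0..6 directed edges per triad
        for a, b, c in combinations(nodes, 3):
            count = sum((u, v) in edge_set
                        for u, v in [(a, b), (b, a), (a, c), (c, a), (b, c), (c, b)])
            census[count] += 1
        return census
    ```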

  15. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    NASA Astrophysics Data System (ADS)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

    The identification of moving objects is a fundamental step in computer vision processing chains. The development of low cost and lightweight smart cameras steadily increases the demand for efficient and high performance circuits able to process high definition video in real time. The paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920×1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high performance probabilistic algorithm for segmentation of the background that is, however, computationally intensive and impossible to implement on a general purpose CPU under the constraint of real-time processing. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low power implementation of the algorithm is obtained. The reported performances are also the result of the use of state of the art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that overcome previously proposed implementations. The second circuit is oriented to an ASIC (UMC-90nm) standard cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
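
    For reference, the software counterpart of the hardware cores described above is OpenCV's Gaussian-mixture background subtractor. The snippet below uses the MOG2 variant from OpenCV-Python on a placeholder 1080p file; the exact GMM variant, the parameter values, and the file name are assumptions for illustration, not details taken from the paper.

    ```python
    import cv2

    # Software reference for the GMM background model that the paper's
    # processor cores implement in hardware: OpenCV's Gaussian-mixture
    # background subtractor (MOG2 variant), applied frame by frame to a
    # 1080p stream.  The file name is a placeholder.
    cap = cv2.VideoCapture("hd_input_1080p.mp4")
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                    detectShadows=False)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg_mask = subtractor.apply(frame)      # 0 = background, 255 = foreground
    cap.release()
    ```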

  16. The characteristics and interpretability of land surface change and implications for project design

    USGS Publications Warehouse

    Sohl, Terry L.; Gallant, Alisa L.; Loveland, Thomas R.

    2004-01-01

    The need for comprehensive, accurate information on land-cover change has never been greater. While remotely sensed imagery affords the opportunity to provide information on land-cover change over large geographic expanses at a relatively low cost, the characteristics of land-surface change bring into question the suitability of many commonly used methodologies. Algorithm-based methodologies to detect change generally cannot provide the same level of accuracy as the analyses done by human interpreters. Results from the Land Cover Trends project, a cooperative venture that includes the U.S. Geological Survey, Environmental Protection Agency, and National Aeronautics and Space Administration, have shown that land-cover conversion is a relatively rare event, occurs locally in small patches, varies geographically and temporally, and is spectrally ambiguous. Based on these characteristics of change and the type of information required, manual interpretation was selected as the primary means of detecting change in the Land Cover Trends project. Mixtures of algorithm-based detection and manual interpretation may often prove to be the most feasible and appropriate design for change-detection applications. Serious examination of the expected characteristics and measurability of change must be considered during the design and implementation phase of any change analysis project.

  17. Method of detecting leakage of reactor core components of liquid metal cooled fast reactors

    DOEpatents

    Holt, Fred E.; Cash, Robert J.; Schenter, Robert E.

    1977-01-01

    A method of detecting the failure of a sealed non-fueled core component of a liquid-metal cooled fast reactor having an inert cover gas. A gas mixture is incorporated in the component which includes Xenon-124; under neutron irradiation, Xenon-124 is converted to radioactive Xenon-125. The cover gas is scanned by a radiation detector. The occurrence of 188 keV gamma radiation and/or other identifying gamma radiation energy levels indicates the presence of Xenon-125 and therefore leakage of a component. Similarly, Xe-126 (which transmutes to Xe-127) and Kr-84 (which produces Kr-85m) can be used for detection of leakage. Different components are charged with mixtures including different ratios of isotopes other than Xenon-124. On detection of the identifying radiation, the cover gas is subjected to mass spectroscopic analysis to locate the leaking component.

  18. Fuel management optimization using genetic algorithms and expert knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1996-09-01

    The CIGARO fuel management optimization code based on genetic algorithms is described and tested. The test problem optimized the core lifetime for a pressurized water reactor with a penalty function constraint on the peak normalized power. A bit-string genotype encoded the loading patterns, and genotype bias was reduced with additional bits. Expert knowledge about fuel management was incorporated into the genetic algorithm. Regional crossover exchanged physically adjacent fuel assemblies and improved the optimization slightly. Biasing the initial population toward a known priority table significantly improved the optimization.
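
    The abstract's ingredients (bit-string genotypes, a penalty on the constraint violation, crossover, and biased seeding) fit the standard genetic-algorithm template. The sketch below is a generic, assumption-laden version of that template, not the CIGARO code: the fitness and penalty callables, the operator choices, and all parameter values are placeholders.

    ```python
    import random

    def genetic_optimize(fitness, penalty, n_bits=64, pop_size=40,
                         generations=200, p_mut=0.02, weight=10.0):
        """Generic bit-string GA with a penalty-constrained objective, sketching
        the structure of codes like CIGARO: maximize fitness(x) - weight*penalty(x),
        where penalty(x) > 0 when a constraint (e.g. peak normalized power) is
        violated.  Selection and crossover operators are deliberately simple."""
        def score(x):
            return fitness(x) - weight * penalty(x)

        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=score, reverse=True)
            survivors = pop[:pop_size // 2]              # elitist truncation selection
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, n_bits)
                child = a[:cut] + b[cut:]                                 # one-point crossover
                child = [bit ^ (random.random() < p_mut) for bit in child]  # bit-flip mutation
                children.append(child)
            pop = survivors + children
        return max(pop, key=score)
    ```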

  19. Vectorized algorithms for spiking neural network simulation.

    PubMed

    Brette, Romain; Goodman, Dan F M

    2011-06-01

    High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages.
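
    As a hedged illustration of the vector-based operations the abstract refers to, the sketch below updates a whole population of leaky integrate-and-fire neurons with NumPy array expressions instead of a per-neuron loop. It is not Brian's actual implementation; the neuron model, parameters, and input drive are assumptions chosen only to show the vectorization pattern.

    ```python
    import numpy as np

    def simulate_lif(n_neurons=1000, steps=1000, dt=0.1, tau=10.0,
                     v_thresh=1.0, v_reset=0.0, i_ext=1.1):
        """Minimal vectorized leaky integrate-and-fire simulation: the whole
        population is updated with array expressions, avoiding an interpreted
        per-neuron Python loop (the bottleneck the paper's algorithms remove)."""
        v = np.zeros(n_neurons)
        spike_counts = np.zeros(n_neurons, dtype=int)
        drive = i_ext * (1.0 + 0.1 * np.random.rand(n_neurons))   # heterogeneous input
        for _ in range(steps):
            v += dt * (-v + drive) / tau           # Euler step for dv/dt = (-v + I)/tau
            spiking = v >= v_thresh                # boolean mask of spiking neurons
            v[spiking] = v_reset                   # vectorized reset
            spike_counts += spiking
        return spike_counts
    ```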

  20. Leveraging Python Interoperability Tools to Improve Sapphire's Usability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gezahegne, A; Love, N S

    2007-12-10

    The Sapphire project at the Center for Applied Scientific Computing (CASC) develops and applies an extensive set of data mining algorithms for the analysis of large data sets. Sapphire's algorithms are currently available as a set of C++ libraries. However many users prefer higher level scripting languages such as Python for their ease of use and flexibility. In this report, we evaluate four interoperability tools for the purpose of wrapping Sapphire's core functionality with Python. Exposing Sapphire's functionality through a Python interface would increase its usability and connect its algorithms to existing Python tools.

  1. Effect of Ni Core Structure on the Electrocatalytic Activity of Pt-Ni/C in Methanol Oxidation

    PubMed Central

    Kang, Jian; Wang, Rongfang; Wang, Hui; Liao, Shijun; Key, Julian; Linkov, Vladimir; Ji, Shan

    2013-01-01

    Methanol oxidation catalysts comprising an outer Pt-shell with an inner Ni-core supported on carbon, (Pt-Ni/C), were prepared with either crystalline or amorphous Ni core structures. Structural comparisons of the two forms of catalyst were made using transmission electron microscopy (TEM), X-ray diffraction (XRD) and X-ray photoelectron spectroscopy (XPS), and methanol oxidation activity compared using CV and chronoamperometry (CA). While both the amorphous Ni core and crystalline Ni core structures were covered by similar Pt shell thickness and structure, the Pt-Ni(amorphous)/C catalyst had higher methanol oxidation activity. The amorphous Ni core thus offers improved Pt usage efficiency in direct methanol fuel cells. PMID:28811402

  2. Spatial scaling of core and dominant forest cover in the Upper Mississippi and Illinois River floodplains, USA

    USGS Publications Warehouse

    De Jager, Nathan R.; Rohweder, Jason J.

    2011-01-01

    Different organisms respond to spatial structure in different terms and across different spatial scales. As a consequence, efforts to reverse habitat loss and fragmentation through strategic habitat restoration ought to account for the different habitat density and scale requirements of various taxonomic groups. Here, we estimated the local density of floodplain forest surrounding each of ~20 million 10-m forested pixels of the Upper Mississippi and Illinois River floodplains by using moving windows of multiple sizes (1–100 ha). We further identified forest pixels that met two local density thresholds: 'core' forest pixels were nested in a 100% (unfragmented) forested window and 'dominant' forest pixels were those nested in a >60% forested window. Finally, we fit two scaling functions to declines in the proportion of forest cover meeting these criteria with increasing window length for 107 management-relevant focal areas: a power function (i.e. self-similar, fractal-like scaling) and an exponential decay function (fractal dimension depends on scale). The exponential decay function consistently explained more variation in changes to the proportion of forest meeting both the 'core' and 'dominant' criteria with increasing window length than did the power function, suggesting that elevation, soil type, hydrology, and human land use constrain these forest types to a limited range of scales. To examine these scales, we transformed the decay constants to measures of the distance at which the probability of forest meeting the 'core' and 'dominant' criteria was cut in half (S1/2, in m). S1/2 for core forest was typically between ~55 and ~95 m depending on location along the river, indicating that core forest cover is restricted to extremely fine scales. In contrast, half of all dominant forest cover was lost at scales that were typically between ~525 and 750 m, but S1/2 was as long as 1,800 m. S1/2 is a simple measure that (1) condenses information derived from multi-scale analyses, (2) allows for comparisons of the amount of forest habitat available to species with different habitat density and scale requirements, and (3) can be used as an index of the spatial continuity of habitat types that do not scale fractally.
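
    Under the exponential-decay scaling model described above, the reported half-distance follows directly from the fitted decay constant. The short derivation below is an interpretation, stated under the assumption that the fitted function has the form p(w) = p0·exp(-k·w) in window length w.

    ```latex
    % Assuming the fitted exponential decay p(w) = p_0 e^{-k w}, the
    % half-distance S_{1/2} is obtained from the decay constant k:
    \[
      p(S_{1/2}) = \tfrac{1}{2}\,p_0
      \;\Longrightarrow\;
      e^{-k S_{1/2}} = \tfrac{1}{2}
      \;\Longrightarrow\;
      S_{1/2} = \frac{\ln 2}{k}.
    \]
    ```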

  3. Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE). Volume 2: Mission payloads subsystem description

    NASA Technical Reports Server (NTRS)

    Dupnick, E.; Wiggins, D.

    1980-01-01

    The scheduling algorithm for mission planning and logistics evaluation (SAMPLE) is presented. Two major subsystems are included: The mission payloads program; and the set covering program. Formats and parameter definitions for the payload data set (payload model), feasible combination file, and traffic model are documented.

  4. Cytoprophet: a Cytoscape plug-in for protein and domain interaction networks inference.

    PubMed

    Morcos, Faruck; Lamanna, Charles; Sikora, Marcin; Izaguirre, Jesús

    2008-10-01

    Cytoprophet is a software tool that allows prediction and visualization of protein and domain interaction networks. It is implemented as a plug-in of Cytoscape, an open source software framework for analysis and visualization of molecular networks. Cytoprophet implements three algorithms that predict new potential physical interactions using the domain composition of proteins and experimental assays. The algorithms for protein and domain interaction inference include maximum likelihood estimation (MLE) using expectation maximization (EM); the set cover approach maximum specificity set cover (MSSC) and the sum-product algorithm (SPA). After accepting an input set of proteins with Uniprot ID/Accession numbers and a selected prediction algorithm, Cytoprophet draws a network of potential interactions with probability scores and GO distances as edge attributes. A network of domain interactions between the domains of the initial protein list can also be generated. Cytoprophet was designed to take advantage of the visual capabilities of Cytoscape and be simple to use. An example of inference in a signaling network of myxobacterium Myxococcus xanthus is presented and available at Cytoprophet's website. http://cytoprophet.cse.nd.edu.
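
    Of the three inference approaches listed, the set-cover flavour is the easiest to illustrate. The sketch below is the classic greedy set-cover approximation, offered only as a generic stand-in: Cytoprophet's MSSC variant maximizes specificity and differs in detail, and the mapping of labels to "explained interactions" here is purely an assumption for the example.

    ```python
    def greedy_set_cover(universe, subsets):
        """Classic greedy approximation to set cover, given as a generic stand-in
        for the set-cover style of inference (MSSC) mentioned in the abstract.
        `subsets` maps a label (e.g. a domain pair) to the set of observed
        interactions it can explain; `universe` is the set to be covered."""
        uncovered = set(universe)
        chosen = []
        while uncovered:
            best = max(subsets, key=lambda s: len(subsets[s] & uncovered))
            if not subsets[best] & uncovered:
                break                      # remaining elements cannot be covered
            chosen.append(best)
            uncovered -= subsets[best]
        return chosen
    ```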

  5. Discovering protein complexes in protein interaction networks via exploring the weak ties effect

    PubMed Central

    2012-01-01

    Background Studying protein complexes is very important in biological processes since it helps reveal the structure-functionality relationships in biological networks and much attention has been paid to accurately predict protein complexes from the increasing amount of protein-protein interaction (PPI) data. Most of the available algorithms are based on the assumption that dense subgraphs correspond to complexes, failing to take into account the inherent organization within protein complexes and the roles of edges. Thus, there is a critical need to investigate the possibility of discovering protein complexes using the topological information hidden in edges. Results To provide an investigation of the roles of edges in PPI networks, we show that the edges connecting less similar vertices in topology are more significant in maintaining the global connectivity, indicating the weak ties phenomenon in PPI networks. We further demonstrate that there is a negative relation between the weak tie strength and the topological similarity. By using the bridges, a reliable virtual network is constructed, in which each maximal clique corresponds to the core of a complex. By this notion, the detection of the protein complexes is transformed into a classic all-clique problem. A novel core-attachment based method is developed, which detects the cores and attachments, respectively. A comprehensive comparison among the existing algorithms and our algorithm has been made by comparing the predicted complexes against benchmark complexes. Conclusions We proved that the weak tie effect exists in the PPI network and demonstrated that the density is insufficient to characterize the topological structure of protein complexes. Furthermore, the experimental results on the yeast PPI network show that the proposed method outperforms the state-of-the-art algorithms. The analysis of detected modules by the present algorithm suggests that most of these modules have well biological significance in context of complexes, suggesting that the roles of edges are critical in discovering protein complexes. PMID:23046740
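
    The all-clique step described in the Results can be sketched with standard tooling. The example below enumerates maximal cliques of a pre-filtered graph with networkx as candidate complex cores; constructing the "reliable virtual network" from the weak-tie analysis is the paper's own procedure and is simply assumed to have been done beforehand, and the toy edge list is purely illustrative.

    ```python
    import networkx as nx

    def candidate_cores(reliable_graph, min_size=3):
        """Enumerate maximal cliques of a (pre-filtered) PPI graph as candidate
        protein-complex cores, mirroring the all-clique step described in the
        paper.  The 'reliable virtual network' construction from weak-tie and
        topological-similarity analysis is assumed to have been done already."""
        return [set(c) for c in nx.find_cliques(reliable_graph) if len(c) >= min_size]

    # Usage sketch on a toy graph (edge list is purely illustrative):
    g = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("D", "E")])
    print(candidate_cores(g))   # only the triangle {'A', 'B', 'C'} qualifies
    ```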

  6. Detection of QT prolongation using a novel electrocardiographic analysis algorithm applying intelligent automation: prospective blinded evaluation using the Cardiac Safety Research Consortium electrocardiographic database.

    PubMed

    Green, Cynthia L; Kligfield, Paul; George, Samuel; Gussak, Ihor; Vajdic, Branislav; Sager, Philip; Krucoff, Mitchell W

    2012-03-01

    The Cardiac Safety Research Consortium (CSRC) provides both "learning" and blinded "testing" digital electrocardiographic (ECG) data sets from thorough QT (TQT) studies annotated for submission to the US Food and Drug Administration (FDA) to developers of ECG analysis technologies. This article reports the first results from a blinded testing data set that examines developer reanalysis of original sponsor-reported core laboratory data. A total of 11,925 anonymized ECGs including both moxifloxacin and placebo arms of a parallel-group TQT in 181 subjects were blindly analyzed using a novel ECG analysis algorithm applying intelligent automation. Developer-measured ECG intervals were submitted to CSRC for unblinding, temporal reconstruction of the TQT exposures, and statistical comparison to core laboratory findings previously submitted to FDA by the pharmaceutical sponsor. Primary comparisons included baseline-adjusted interval measurements, baseline- and placebo-adjusted moxifloxacin QTcF changes (ddQTcF), and associated variability measures. Developer and sponsor-reported baseline-adjusted data were similar with average differences <1 ms for all intervals. Both developer- and sponsor-reported data demonstrated assay sensitivity with similar ddQTcF changes. Average within-subject SD for triplicate QTcF measurements was significantly lower for developer- than sponsor-reported data (5.4 and 7.2 ms, respectively; P < .001). The virtually automated ECG algorithm used for this analysis produced similar yet less variable TQT results compared with the sponsor-reported study, without the use of a manual core laboratory. These findings indicate that CSRC ECG data sets can be useful for evaluating novel methods and algorithms for determining drug-induced QT/QTc prolongation. Although the results should not constitute endorsement of specific algorithms by either CSRC or FDA, the value of a public domain digital ECG warehouse to provide prospective, blinded comparisons of ECG technologies applied for QT/QTc measurement is illustrated. Copyright © 2012 Mosby, Inc. All rights reserved.

  7. Detection of QT prolongation using a novel ECG analysis algorithm applying intelligent automation: Prospective blinded evaluation using the Cardiac Safety Research Consortium ECG database

    PubMed Central

    Green, Cynthia L.; Kligfield, Paul; George, Samuel; Gussak, Ihor; Vajdic, Branislav; Sager, Philip; Krucoff, Mitchell W.

    2013-01-01

    Background The Cardiac Safety Research Consortium (CSRC) provides both “learning” and blinded “testing” digital ECG datasets from thorough QT (TQT) studies annotated for submission to the US Food and Drug Administration (FDA) to developers of ECG analysis technologies. This manuscript reports the first results from a blinded “testing” dataset that examines Developer re-analysis of original Sponsor-reported core laboratory data. Methods 11,925 anonymized ECGs including both moxifloxacin and placebo arms of a parallel-group TQT in 191 subjects were blindly analyzed using a novel ECG analysis algorithm applying intelligent automation. Developer measured ECG intervals were submitted to CSRC for unblinding, temporal reconstruction of the TQT exposures, and statistical comparison to core laboratory findings previously submitted to FDA by the pharmaceutical sponsor. Primary comparisons included baseline-adjusted interval measurements, baseline- and placebo-adjusted moxifloxacin QTcF changes (ddQTcF), and associated variability measures. Results Developer and Sponsor-reported baseline-adjusted data were similar with average differences less than 1 millisecond (ms) for all intervals. Both Developer and Sponsor-reported data demonstrated assay sensitivity with similar ddQTcF changes. Average within-subject standard deviation for triplicate QTcF measurements was significantly lower for Developer than Sponsor-reported data (5.4 ms and 7.2 ms, respectively; p<0.001). Conclusion The virtually automated ECG algorithm used for this analysis produced similar yet less variable TQT results compared to the Sponsor-reported study, without the use of a manual core laboratory. These findings indicate CSRC ECG datasets can be useful for evaluating novel methods and algorithms for determining QT/QTc prolongation by drugs. While the results should not constitute endorsement of specific algorithms by either CSRC or FDA, the value of a public domain digital ECG warehouse to provide prospective, blinded comparisons of ECG technologies applied for QT/QTc measurement is illustrated. PMID:22424006

  8. Operational performance of the three bean salad control algorithm on the ACRR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ball, R.M.; Madaras, J.J.; Trowbridge, F.R. Jr.

    Experimental tests on the Annular Core Research Reactor have confirmed that the ``Three-Bean-Salad'' control algorithm based on the Pontryagin maximum principle can change the power of a nuclear reactor many decades with a very fast startup rate and minimal overshoot. The paper describes the results of simulations and operations up to 25 MW and 87 decades per minute.

  9. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiu, Dongbin

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.

  10. Operational performance of the three bean salad control algorithm on the ACRR

    NASA Astrophysics Data System (ADS)

    Ball, Russell M.; Madaras, John J.; Trowbridge, F. Ray; Talley, Darren G.; Parma, Edward J.

    1991-01-01

    Experimental tests on the Annular Core Research Reactor have confirmed that the ``Three-Bean-Salad'' control algorithm based on the Pontryagin maximum principle can change the power of a nuclear reactor many decades with a very fast startup rate and minimal overshoot. The paper describes the results of simulations and operations up to 25 MW and 87 decades per minute.

  11. Novel automated inversion algorithm for temperature reconstruction using gas isotopes from ice cores

    NASA Astrophysics Data System (ADS)

    Döring, Michael; Leuenberger, Markus C.

    2018-06-01

    Greenland past temperature history can be reconstructed by forcing the output of a firn-densification and heat-diffusion model to fit multiple gas-isotope data (δ15N or δ40Ar or δ15Nexcess) extracted from ancient air in Greenland ice cores using published accumulation-rate (Acc) datasets. We present here a novel methodology to solve this inverse problem, by designing a fully automated algorithm. To demonstrate the performance of this novel approach, we begin by intentionally constructing synthetic temperature histories and associated δ15N datasets, mimicking real Holocene data that we use as true values (targets) to be compared to the output of the algorithm. This allows us to quantify uncertainties originating from the algorithm itself. The presented approach is completely automated and therefore minimizes the subjective impact of manual parameter tuning, leading to reproducible temperature estimates. In contrast to many other ice-core-based temperature reconstruction methods, the presented approach is completely independent from ice-core stable-water isotopes, providing the opportunity to validate water-isotope-based reconstructions or reconstructions where water isotopes are used together with δ15N or δ40Ar. We solve the inverse problem T(δ15N, Acc) by using a combination of a Monte Carlo based iterative approach and the analysis of remaining mismatches between modelled and target data, based on cubic-spline filtering of random numbers and the laboratory-determined temperature sensitivity for nitrogen isotopes. Additionally, the presented reconstruction approach was tested by fitting measured δ40Ar and δ15Nexcess data, which led as well to a robust agreement between modelled and measured data. The obtained final mismatches follow a symmetric standard-distribution function. For the study on synthetic data, 95 % of the mismatches compared to the synthetic target data are in an envelope between 3.0 to 6.3 permeg for δ15N and 0.23 to 0.51 K for temperature (2σ, respectively). In addition to Holocene temperature reconstructions, the fitting approach can also be used for glacial temperature reconstructions. This is shown by fitting of the North Greenland Ice Core Project (NGRIP) δ15N data for two Dansgaard-Oeschger events using the presented approach, leading to results comparable to other studies.

  12. Generating Global Leaf Area Index from Landsat: Algorithm Formulation and Demonstration

    NASA Technical Reports Server (NTRS)

    Ganguly, Sangram; Nemani, Ramakrishna R.; Zhang, Gong; Hashimoto, Hirofumi; Milesi, Cristina; Michaelis, Andrew; Wang, Weile; Votava, Petr; Samanta, Arindam; Melton, Forrest

    2012-01-01

    This paper summarizes the implementation of a physically based algorithm for the retrieval of vegetation green Leaf Area Index (LAI) from Landsat surface reflectance data. The algorithm is based on the canopy spectral invariants theory and provides a computationally efficient way of parameterizing the Bidirectional Reflectance Factor (BRF) as a function of spatial resolution and wavelength. LAI retrievals from the application of this algorithm to aggregated Landsat surface reflectances are consistent with those of MODIS for homogeneous sites represented by different herbaceous and forest cover types. Example results illustrating the physics and performance of the algorithm suggest three key factors that influence the LAI retrieval process: 1) the atmospheric correction procedures used to estimate surface reflectances; 2) the proximity of the Landsat-observed surface reflectance to the corresponding reflectances characterized by the model simulation; and 3) the quality of the input land cover type in accurately delineating pure vegetated components as opposed to mixed pixels. Accounting for these factors, a pilot implementation of the LAI retrieval algorithm was demonstrated for the state of California utilizing the Global Land Survey (GLS) 2005 Landsat data archive. In a separate exercise, the performance of the LAI algorithm over California was evaluated by using the short-wave infrared band in addition to the red and near-infrared bands. Results show that the algorithm, when ingesting the short-wave infrared band, is able to delineate open canopies with understory effects and may provide useful information compared to a more traditional two-band retrieval. Future research will involve implementation of this algorithm at continental scales, and a validation exercise will be performed to evaluate the accuracy of the 30-m LAI products at several field sites.

  13. Tiled architecture of a CNN-mostly IP system

    NASA Astrophysics Data System (ADS)

    Spaanenburg, Lambert; Malki, Suleyman

    2009-05-01

    Multi-core architectures have been popularized with the advent of the IBM CELL. On a finer grain, the problems in scheduling multi-cores have already existed in tiled architectures such as the EPIC and Da Vinci. It is not easy to evaluate the performance of a schedule on such an architecture, as historical data are not available. One solution is to compile algorithms for which an optimal schedule is known by analysis. A typical example is an algorithm that is already defined in terms of many collaborating simple nodes, such as a Cellular Neural Network (CNN). A simple node with a local register stack together with a 'rotating wheel' internal communication mechanism has been proposed. Though the basic CNN allows for a tiled implementation of a tiled algorithm on a tiled structure, a practical CNN system will have to disturb this regularity by the additional need for arithmetical and logical operations. Arithmetic operations are needed, for instance, to accommodate low-level image processing, while logical operations are needed to fork and merge different data streams without use of the external memory. It is found that the 'rotating wheel' internal communication mechanism still supports such operations without the need for global control. Overall, the CNN system provides for a practical network size as implemented on an FPGA, can be easily used as embedded IP, and provides a clear benchmark for a multi-core compiler.
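
    For readers unfamiliar with cellular neural networks, the sketch below shows the standard discretized CNN state equation that such a node array implements; the templates A and B and the bias z are illustrative edge-detection-style values and are not taken from the tiled IP system described above.

      import numpy as np
      from scipy.signal import convolve2d

      def cnn_step(x, u, A, B, z, dt=0.1):
          # Standard CNN dynamics: dx/dt = -x + A*y + B*u + z, where '*' is a 2-D
          # neighbourhood convolution and y = 0.5*(|x+1| - |x-1|) is the output.
          y = 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))
          dx = -x + convolve2d(y, A, mode="same") + convolve2d(u, B, mode="same") + z
          return x + dt * dx

      # Illustrative templates (hypothetical values): feedback A, feedforward B, bias z.
      A = np.array([[0.0, 0.0, 0.0],
                    [0.0, 2.0, 0.0],
                    [0.0, 0.0, 0.0]])
      B = np.array([[-1.0, -1.0, -1.0],
                    [-1.0,  8.0, -1.0],
                    [-1.0, -1.0, -1.0]])
      z = -0.5

      u = (np.random.rand(64, 64) > 0.5).astype(float)  # binary input image
      x = np.zeros_like(u)                              # initial node states
      for _ in range(100):                              # iterate towards steady state
          x = cnn_step(x, u, A, B, z)

      y = 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))     # settled output
      print(y.shape, y.min(), y.max())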

  14. MODIS Snow Cover Recovery Using Variational Interpolation

    NASA Astrophysics Data System (ADS)

    Tran, H.; Nguyen, P.; Hsu, K. L.; Sorooshian, S.

    2017-12-01

    Cloud obscuration is one of the major problems that limit the use of satellite images in general, and of NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) global Snow-Covered Area (SCA) products in particular. Among the approaches to resolve the problem, the Variational Interpolation (VI) algorithm, proposed by Xia et al., 2012, obtains cloud-free dynamic SCA images from MODIS. This method is automatic and robust. However, computational inefficiency is its main drawback, which limits applying the method at larger spatial and temporal scales. To overcome this difficulty, this study introduces an improved version of the original VI. The modified VI algorithm integrates the MINimum RESidual (MINRES) iteration (Paige and Saunders, 1975) to prevent the system from breaking down when applied to much broader scales. An experiment demonstrated the robustness of the new algorithm in comparison with the original VI method, a property obtained by maintaining the distribution of the weights after solving the linear system. The new VI algorithm was then applied to the whole Contiguous United States (CONUS) over four winter months of 2016 and 2017, and validated using the snow station network (SNOTEL). The resulting cloud-free images capture the dynamical changes of snow with high accuracy, in contrast with the MODIS snow cover maps. Lastly, the algorithm was applied to create a cloud-free image dataset from March 10, 2000 to February 28, 2017, which provides an overview of snow trends over CONUS for nearly two decades. ACKNOWLEDGMENTS: We would like to acknowledge NASA, NOAA Office of Hydrologic Development (OHD) National Weather Service (NWS), Cooperative Institute for Climate and Satellites (CICS), Army Research Office (ARO), ICIWaRM, and UNESCO for supporting this research.
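
    MINRES, the solver integrated into the modified VI algorithm, is a Krylov method for symmetric (possibly indefinite) systems and is available in SciPy. The sketch below only demonstrates the solver call on a small symmetric interpolation system; the radial-basis kernel, the point set, and the regularization are assumptions, and the actual MODIS variational-interpolation matrix assembly is not reproduced.

      import numpy as np
      from scipy.sparse.linalg import minres

      rng = np.random.default_rng(1)

      # Hypothetical cloud-free observation points and their snow-cover values.
      points = rng.uniform(0.0, 100.0, size=(200, 2))
      values = rng.uniform(0.0, 1.0, size=200)

      # Symmetric interpolation system A w = v (thin-plate-spline-like kernel assumed).
      d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
      A = np.where(d > 0.0, d**2 * np.log(d + 1e-12), 0.0) + 1e-6 * np.eye(len(points))

      weights, info = minres(A, values)   # info == 0 indicates convergence
      print("converged:", info == 0, "max |residual|:", np.abs(A @ weights - values).max())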

  15. Improved Surface and Tropospheric Temperatures Determined Using Only Shortwave Channels: The AIRS Science Team Version-6 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena

    2011-01-01

    The Goddard DISC has generated products derived from AIRS/AMSU-A observations, starting from September 2002 when the AIRS instrument became stable, using the AIRS Science Team Version-5 retrieval algorithm. The AIRS Science Team Version-6 retrieval algorithm will be finalized in September 2011. This paper describes some of the significant improvements contained in the Version-6 retrieval algorithm, compared to that used in Version-5, with an emphasis on the improvement of atmospheric temperature profiles, ocean and land surface skin temperatures, and ocean and land surface spectral emissivities. AIRS contains 2378 spectral channels covering portions of the spectral region 650 cm⁻¹ (15.38 micrometers) to 2665 cm⁻¹ (3.752 micrometers). These spectral regions contain significant absorption features from two CO2 absorption bands: the 15 micrometer (longwave) CO2 band and the 4.3 micrometer (shortwave) CO2 band. There are also two atmospheric window regions: the 12-8 micrometer (longwave) window and the 4.17-3.75 micrometer (shortwave) window. Historically, determination of surface and atmospheric temperatures from satellite observations was performed primarily using observations in the longwave window and CO2 absorption regions. According to cloud clearing theory, more accurate soundings of both surface skin and atmospheric temperatures can be obtained under partial cloud cover conditions if one uses observations in longwave channels to determine the coefficients that generate cloud-cleared radiances R̂_i for all channels, and then uses R̂_i only from shortwave channels in the determination of surface and atmospheric temperatures. This procedure is now used in the AIRS Version-6 retrieval algorithm. Results are presented for both daytime and nighttime conditions showing improved Version-6 surface and atmospheric soundings under partial cloud cover.
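
    The cloud-clearing step referred to above can be illustrated with the standard two-field-of-view relation, in which the cloud-cleared radiance is extrapolated from two adjacent footprints that see the same scene with different cloud fractions; the radiances and cloud fractions below are hypothetical.

      import numpy as np

      # Hypothetical clear-sky and overcast radiances for a few channels.
      R_clear  = np.array([95.0, 60.0, 42.0])
      R_cloudy = np.array([70.0, 45.0, 30.0])

      a1, a2 = 0.3, 0.6   # cloud fractions in two adjacent fields of view (a1 < a2)
      R1 = (1 - a1) * R_clear + a1 * R_cloudy   # observed radiance, FOV 1
      R2 = (1 - a2) * R_clear + a2 * R_cloudy   # observed radiance, FOV 2

      # Cloud-cleared radiance: R_hat = R1 + eta * (R1 - R2). In practice eta is
      # determined from surface-sensitive channels; here it follows from the fractions.
      eta = a1 / (a2 - a1)
      R_hat = R1 + eta * (R1 - R2)

      print(np.allclose(R_hat, R_clear))   # True: the cloud signal cancels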

  16. Functional Analysis of OMICs Data and Small Molecule Compounds in an Integrated "Knowledge-Based" Platform.

    PubMed

    Dubovenko, Alexey; Nikolsky, Yuri; Rakhmatulin, Eugene; Nikolskaya, Tatiana

    2017-01-01

    Analysis of NGS and other sequencing data, gene variants, gene expression, proteomics, and other high-throughput (OMICs) data is challenging because of its biological complexity and high level of technical and biological noise. One way to deal with both problems is to perform analysis with a high-fidelity annotated knowledgebase of protein interactions, pathways, and functional ontologies. This knowledgebase has to be structured in a computer-readable format and must include software tools for managing experimental data, analysis, and reporting. Here, we present MetaCore™ and Key Pathway Advisor (KPA), an integrated platform for functional data analysis. On the content side, MetaCore and KPA encompass a comprehensive database of molecular interactions of different types, pathways, network models, and ten functional ontologies covering human, mouse, and rat genes. The analytical toolkit includes tools for gene/protein list enrichment analysis, a statistical "interactome" tool for the identification of over- and under-connected proteins in the dataset, and a biological network analysis module made up of network generation algorithms and filters. The suite also features Advanced Search, an application for combinatorial search of the database content, as well as a Java-based tool called Pathway Map Creator for drawing and editing custom pathway maps. Applications of MetaCore and KPA include research into the molecular mode of action of disease, identification of potential biomarkers and drug targets, pathway hypothesis generation, analysis of biological effects for novel small molecule compounds, and clinical applications (analysis of large cohorts of patients, and translational and personalized medicine).

  17. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Ming; Yu, Hengyong, E-mail: hengyong-yu@ieee.org

    2015-10-15

    Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++, which are linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended for horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.

  18. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation.

    PubMed

    Chen, Ming; Yu, Hengyong

    2015-10-01

    This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++, which are linked via a MEX interface. A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended for horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.
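
    The two records above describe a fan-beam filtered backprojection (FBP) reconstruction for a truncated triple-source geometry. The sketch below shows only the generic FBP principle in its simplest parallel-beam form (ramp filtering of each projection followed by backprojection); it is not the truncated fan-beam or Feldkamp-type algorithm of the paper.

      import numpy as np

      def fbp_parallel(sinogram, angles_deg):
          # sinogram: (n_angles, n_detectors) parallel-beam projections.
          n_angles, n_det = sinogram.shape
          # Ramp (Ram-Lak) filter applied to each projection in the Fourier domain.
          freqs = np.fft.fftfreq(n_det)
          filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1))
          # Backproject onto an n_det x n_det image grid.
          recon = np.zeros((n_det, n_det))
          centre = n_det // 2
          xs = np.arange(n_det) - centre
          X, Y = np.meshgrid(xs, xs, indexing="xy")
          for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
              t = X * np.cos(theta) + Y * np.sin(theta) + centre   # detector coordinate per pixel
              t0 = np.clip(np.floor(t).astype(int), 0, n_det - 2)
              w = t - t0
              recon += (1 - w) * proj[t0] + w * proj[t0 + 1]       # linear interpolation
          return recon * np.pi / n_angles

      # In practice the sinogram would come from the scanner geometry; a synthetic
      # phantom and forward projector are omitted here for brevity.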

  19. A Comparison of Soil Moisture Retrieval Models Using SIR-C Measurements over the Little Washita River Watershed

    NASA Technical Reports Server (NTRS)

    Wang, J. R.; Hsu, A.; Shi, J. C.; ONeill, P. E.; Engman, E. T.

    1997-01-01

    Six SIR-C L-band measurements over the Little Washita River watershed in Chickasha, Oklahoma during 11-17 April 1994 have been analyzed to study the change of soil moisture in the region. Two algorithms developed recently for the estimation of moisture content in bare soil were applied to these measurements, and the results were compared with those sampled on the ground. There is good agreement between the soil moisture values estimated by either algorithm and those measured by ground sampling for bare or sparsely vegetated fields. The standard error from this comparison is on the order of 0.05-0.06 cu cm/cu cm, which is comparable to that expected from a regression between backscattering coefficients and measured soil moisture. Both algorithms provide a poor estimation of soil moisture, or fail to give solutions, for areas covered with moderate or dense vegetation. Even for bare soils, the number of pixels for which neither algorithm yields a numerical solution is not negligible. Results from one of the algorithms indicate that the fraction of these pixels becomes larger as the bare soils become drier. The other algorithm generally gives a larger fraction of these pixels when the fields are vegetation-covered. The implication and impact of these features are discussed in this article.

  20. Large-pitch kagome-structured hollow-core photonic crystal fiber

    NASA Astrophysics Data System (ADS)

    Couny, F.; Benabid, F.; Light, P. S.

    2006-12-01

    We report the fabrication and characterization of a new type of hollow-core photonic crystal fiber based on a large-pitch (~12 μm) kagome lattice cladding. The optical characteristics of the 19-cell, 7-cell, and single-cell core defect fibers include broad optical transmission bands covering the visible and near-IR parts of the spectrum with relatively low loss and low chromatic dispersion, no detectable surface modes, and high confinement of light in the core. Various applications of such a novel fiber are also discussed, including gas sensing, quantum optics, and high harmonic generation.

  1. Testing alternative response designs for training forest disturbance and attribution models

    Treesearch

    T. Schroeder; G. Moisen; K. Schleeweis

    2014-01-01

    Understanding and modeling land cover and land use change is evolving into a foundational element of climate, environmental, and sustainability science. Land cover and land use data are core to applications such as carbon accounting, greenhouse gas emissions reporting, biomass and bioenergy assessments, hydrologic function assessments, fire and fuels planning and...

  2. MEG and EEG data analysis with MNE-Python.

    PubMed

    Gramfort, Alexandre; Luessi, Martin; Larson, Eric; Engemann, Denis A; Strohmeier, Daniel; Brodbeck, Christian; Goj, Roman; Jas, Mainak; Brooks, Teon; Parkkonen, Lauri; Hämäläinen, Matti

    2013-12-26

    Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals generated by neuronal activity in the brain. Using these signals to characterize and locate neural activation in the brain is a challenge that requires expertise in physics, signal processing, statistics, and numerical methods. As part of the MNE software suite, MNE-Python is an open-source software package that addresses this challenge by providing state-of-the-art algorithms implemented in Python that cover multiple methods of data preprocessing, source localization, statistical analysis, and estimation of functional connectivity between distributed brain regions. All algorithms and utility functions are implemented in a consistent manner with well-documented interfaces, enabling users to create M/EEG data analysis pipelines by writing Python scripts. Moreover, MNE-Python is tightly integrated with the core Python libraries for scientific computation (NumPy, SciPy) and visualization (matplotlib and Mayavi), as well as the greater neuroimaging ecosystem in Python via the Nibabel package. The code is provided under the new BSD license allowing code reuse, even in commercial products. Although MNE-Python has only been under heavy development for a couple of years, it has rapidly evolved with expanded analysis capabilities and pedagogical tutorials because multiple labs have collaborated during code development to help share best practices. MNE-Python also gives easy access to preprocessed datasets, helping users to get started quickly and facilitating reproducibility of methods by other researchers. Full documentation, including dozens of examples, is available at http://martinos.org/mne.
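
    A minimal MNE-Python pipeline of the kind described above might look like the sketch below; the file name, event ID, and filter band are placeholders, and the snippet assumes the recording contains a stim channel from which events can be read.

      import mne

      # Hypothetical raw MEG/EEG recording in FIF format.
      raw = mne.io.read_raw_fif("sample_raw.fif", preload=True)

      raw.filter(l_freq=1.0, h_freq=40.0)          # band-pass filter the continuous data

      events = mne.find_events(raw)                # read trigger events from the stim channel
      epochs = mne.Epochs(raw, events, event_id=1, # event_id=1 is a placeholder condition code
                          tmin=-0.2, tmax=0.5,
                          baseline=(None, 0), preload=True)

      evoked = epochs.average()                    # average epochs into an evoked response
      print(evoked)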

  3. Development of Enabling Scientific Tools to Characterize the Geologic Subsurface at Hanford

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kenna, Timothy C.; Herron, Michael M.

    2014-07-08

    This final report to the Department of Energy provides a summary of activities conducted under our exploratory grant, funded through the U.S. DOE Subsurface Biogeochemical Research Program in the category of enabling scientific tools, which covers the period from July 15, 2010 to July 14, 2013. The main goal of this exploratory project is to determine the parameters necessary to translate existing borehole log data into reservoir properties following scientifically sound petrophysical relationships. For this study, we focused on samples and Ge-based spectral gamma logging system (SGLS) data collected from wells located in the Hanford 300 Area. The main activities consisted of 1) the analysis of available core samples for a variety of mineralogical, chemical, and physical properties; 2) evaluation of selected spectral gamma logs, environmental corrections, and calibration; and 3) development of algorithms and a proposed workflow that permit translation of log responses into useful reservoir properties such as lithology, matrix density, porosity, and permeability. These techniques have been successfully employed in the petroleum industry; however, the approach is relatively new when applied to subsurface remediation. This exploratory project has been successful in meeting its stated objectives. We have demonstrated that our approach can lead to an improved interpretation of existing well log data. The algorithms we developed can utilize available log data, in particular gamma and spectral gamma logs, and continued optimization will improve their application to ERSP goals of understanding subsurface properties.

  4. High skill in low-frequency climate response through fluctuation dissipation theorems despite structural instability.

    PubMed

    Majda, Andrew J; Abramov, Rafail; Gershgorin, Boris

    2010-01-12

    Climate change science focuses on predicting the coarse-grained, planetary-scale, long-time changes in the climate system due to either changes in external forcing or internal variability, such as the impact of increased carbon dioxide. The predictions of climate change science are carried out through comprehensive computational atmospheric and oceanic simulation models, which necessarily parameterize physical features such as clouds, sea ice cover, etc. Recently, it has been suggested that there is irreducible imprecision in such climate models that manifests itself as structural instability in climate statistics and which can significantly hamper the skill of computer models for climate change. A systematic approach to deal with this irreducible imprecision is advocated through algorithms based on the Fluctuation Dissipation Theorem (FDT). There are important practical and computational advantages for climate change science when a skillful FDT algorithm is established. The FDT response operator can be utilized directly for multiple climate change scenarios, multiple changes in forcing, and other parameters such as damping, as well as for inverse modelling, without the need to run the complex climate model in each individual case. The high skill of FDT in predicting climate change, despite structural instability, is developed in an unambiguous fashion using mathematical theory as a guideline in three different test models: a generic class of analytical models mimicking the dynamical core of the computer climate models, reduced stochastic models for low-frequency variability, and models with a significant new type of irreducible imprecision involving many fast, unstable modes.
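
    The FDT idea above, predicting the mean response to a small external forcing from statistics of the unperturbed system, can be illustrated with a scalar Ornstein-Uhlenbeck process, for which the quasi-Gaussian estimate (the time integral of the normalized autocorrelation) equals the exact response 1/gamma. This toy model is our illustration and is not one of the three test models used in the paper.

      import numpy as np

      rng = np.random.default_rng(2)

      # Unperturbed Ornstein-Uhlenbeck process: du = -gamma*u dt + sigma dW.
      gamma, sigma, dt, n = 0.5, 1.0, 0.01, 200_000
      u = np.empty(n)
      u[0] = 0.0
      noise = rng.normal(0.0, np.sqrt(dt), n - 1)
      for i in range(n - 1):
          u[i + 1] = u[i] - gamma * u[i] * dt + sigma * noise[i]

      # Quasi-Gaussian FDT: the mean response to a small constant forcing f is
      # [ integral of C(tau)/C(0) dtau ] * f, estimated from lagged autocovariances.
      lag_step = 10                                   # sample the correlation every 10 steps
      lags = np.arange(0, int(20 / gamma / dt), lag_step)
      c0 = np.mean(u * u)
      corr = np.array([np.mean(u[: n - L] * u[L:]) for L in lags]) / c0
      fdt_operator = corr.sum() * lag_step * dt       # crude quadrature of the integral

      print(f"FDT response estimate: {fdt_operator:.3f}   exact 1/gamma: {1/gamma:.3f}")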

  5. MEG and EEG data analysis with MNE-Python

    PubMed Central

    Gramfort, Alexandre; Luessi, Martin; Larson, Eric; Engemann, Denis A.; Strohmeier, Daniel; Brodbeck, Christian; Goj, Roman; Jas, Mainak; Brooks, Teon; Parkkonen, Lauri; Hämäläinen, Matti

    2013-01-01

    Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals generated by neuronal activity in the brain. Using these signals to characterize and locate neural activation in the brain is a challenge that requires expertise in physics, signal processing, statistics, and numerical methods. As part of the MNE software suite, MNE-Python is an open-source software package that addresses this challenge by providing state-of-the-art algorithms implemented in Python that cover multiple methods of data preprocessing, source localization, statistical analysis, and estimation of functional connectivity between distributed brain regions. All algorithms and utility functions are implemented in a consistent manner with well-documented interfaces, enabling users to create M/EEG data analysis pipelines by writing Python scripts. Moreover, MNE-Python is tightly integrated with the core Python libraries for scientific computation (NumPy, SciPy) and visualization (matplotlib and Mayavi), as well as the greater neuroimaging ecosystem in Python via the Nibabel package. The code is provided under the new BSD license allowing code reuse, even in commercial products. Although MNE-Python has only been under heavy development for a couple of years, it has rapidly evolved with expanded analysis capabilities and pedagogical tutorials because multiple labs have collaborated during code development to help share best practices. MNE-Python also gives easy access to preprocessed datasets, helping users to get started quickly and facilitating reproducibility of methods by other researchers. Full documentation, including dozens of examples, is available at http://martinos.org/mne. PMID:24431986

  6. Parallel, stochastic measurement of molecular surface area.

    PubMed

    Juba, Derek; Varshney, Amitabh

    2008-08-01

    Biochemists often wish to compute surface areas of proteins. A variety of algorithms have been developed for this task, but they are designed for traditional single-processor architectures. The current trend in computer hardware is towards increasingly parallel architectures for which these algorithms are not well suited. We describe a parallel, stochastic algorithm for molecular surface area computation that maps well to the emerging multi-core architectures. Our algorithm is also progressive, providing a rough estimate of surface area immediately and refining this estimate as time goes on. Furthermore, the algorithm generates points on the molecular surface which can be used for point-based rendering. We demonstrate a GPU implementation of our algorithm and show that it compares favorably with several existing molecular surface computation programs, giving fast estimates of the molecular surface area with good accuracy.
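
    The stochastic surface-area idea can be sketched as follows: sample points on each atom's probe-expanded sphere and keep the fraction not buried inside any neighbouring sphere (a Shrake-Rupley-style estimate). The coordinates and radii below are hypothetical, and the paper's progressive, GPU-parallel formulation is not reproduced.

      import numpy as np

      rng = np.random.default_rng(3)

      def sasa_monte_carlo(centers, radii, probe=1.4, n_samples=500, rng=rng):
          # Solvent-accessible surface area by rejection sampling on each expanded sphere.
          expanded = radii + probe
          total = 0.0
          for i, (c, r) in enumerate(zip(centers, expanded)):
              v = rng.normal(size=(n_samples, 3))
              v /= np.linalg.norm(v, axis=1, keepdims=True)      # uniform points on the unit sphere
              pts = c + r * v
              buried = np.zeros(n_samples, dtype=bool)
              for j, (cj, rj) in enumerate(zip(centers, expanded)):
                  if j != i:
                      buried |= np.linalg.norm(pts - cj, axis=1) < rj
              total += (1.0 - buried.mean()) * 4.0 * np.pi * r**2  # exposed fraction times sphere area
          return total

      # Hypothetical three-atom "molecule" (coordinates in angstroms, carbon-like radii).
      centers = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.0, 0.0]])
      radii = np.array([1.7, 1.7, 1.7])
      print(f"estimated SASA: {sasa_monte_carlo(centers, radii):.1f} A^2")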

  7. An impatient evolutionary algorithm with probabilistic tabu search for unified solution of some NP-hard problems in graph and set theory via clique finding.

    PubMed

    Guturu, Parthasarathy; Dantu, Ram

    2008-06-01

    Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) for the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite.
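
    The unification described above rests on standard reductions: a maximum independent set of a graph G is a maximum clique of the complement of G, and a minimum vertex cover is the complement of a maximum independent set. The sketch below shows these mappings, with a simple greedy heuristic standing in for the IEA-PTS clique solver.

      import networkx as nx

      def greedy_clique(G):
          # Very simple heuristic stand-in for a clique solver: grow a clique by
          # repeatedly adding the candidate vertex with the most remaining candidates.
          clique, candidates = set(), set(G.nodes)
          while candidates:
              v = max(candidates, key=lambda u: len(candidates & set(G.neighbors(u))))
              clique.add(v)
              candidates &= set(G.neighbors(v))
          return clique

      G = nx.gnp_random_graph(30, 0.3, seed=4)

      # Maximum independent set of G  <->  maximum clique of the complement of G.
      independent_set = greedy_clique(nx.complement(G))
      # Minimum vertex cover  <->  complement of a maximum independent set.
      vertex_cover = set(G.nodes) - independent_set

      assert all(u in vertex_cover or v in vertex_cover for u, v in G.edges)
      print(len(independent_set), "independent vertices,", len(vertex_cover), "cover vertices")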

  8. IODP expedition 347: Baltic Sea basin paleoenvironment and biosphere

    NASA Astrophysics Data System (ADS)

    Andrén, T.; Barker Jørgensen, B.; Cotterill, C.; Green, S.; IODP expedition 347 scientific party, the

    2015-12-01

    The Integrated Ocean Drilling Program (IODP) expedition 347 cored sediments from different settings of the Baltic Sea covering the last glacial-interglacial cycle. The main aim was to study the geological development of the Baltic Sea in relation to the extreme climate variability of the region, with changing ice cover and major shifts in temperature, salinity, and biological communities. Using the Greatship Manisha as a European Consortium for Ocean Research Drilling (ECORD) mission-specific platform, we recovered 1.6 km of core from nine sites, of which four were additionally cored for microbiology. The sites covered the gateway to the North Sea and Atlantic Ocean, several sub-basins in the southern Baltic Sea, a deep basin in the central Baltic Sea, and a river estuary in the north. The waxing and waning of the Scandinavian ice sheet has profoundly affected the Baltic Sea sediments. During the Weichselian, advancing glaciers reshaped the submarine landscape and displaced sedimentary deposits from earlier Quaternary time. As the glaciers retreated they left a complex pattern of till, sand, and lacustrine clay, which in the basins has since been covered by a thick deposit of Holocene, organic-rich clay. Due to the stratified water column of the brackish Baltic Sea and the recurrent and widespread anoxia, the deeper basins harbor laminated sediments that provide a unique opportunity for high-resolution chronological studies. The Baltic Sea is a eutrophic intra-continental sea that is strongly impacted by terrestrial runoff and nutrient fluxes. The Holocene deposits are today up to 50 m thick and are geochemically affected by diagenetic alterations driven by organic matter degradation. Many of the cored sequences were highly supersaturated with respect to methane, which caused strong degassing upon core recovery. The depth distributions of conservative sea water ions still reflected the transition at the end of the last glaciation from fresh-water clays to Holocene brackish mud. High-resolution sampling and analyses of interstitial water chemistry revealed the intensive mineralization and zonation of the predominant biogeochemical processes. Quantification of microbial cells in the sediments yielded some of the highest cell densities yet recorded by scientific drilling.

  9. Diatoms in sediments of perennially ice-covered Lake Hoare, and implications for interpreting lake history in the McMurdo Dry Valleys of Antarctica

    USGS Publications Warehouse

    Spaulding, S.A.; McKnight, Diane M.; Stoermer, E.F.; Doran, P.T.

    1997-01-01

    Diatom assemblages in surficial sediments, sediment cores, sediment traps, and inflowing streams of perennially ice-covered Lake Hoare, southern Victoria Land, Antarctica, were examined to determine the distribution of diatom taxa and to ascertain whether diatom species composition has changed over time. Lake Hoare is a closed-basin lake with an area of 1.8 km2, a maximum depth of 34 m, and a mean depth of 14 m, although lake level has been rising at a rate of 0.09 m yr⁻¹ in recent decades. The lake has an unusual regime of sediment deposition: coarse-grained sediments accumulate on the ice surface and are deposited episodically on the lake bottom. Benthic microbial mats are covered in situ by the coarse episodic deposits, and the new surfaces are recolonized. Ice cover prevents wind-induced mixing, creating a unique depositional environment in which sediment cores record the history of a particular site rather than a lake-wide integration. Shallow-water (<1 m) diatom assemblages (Stauroneis anceps, Navicula molesta, Diadesmis contenta var. parallela, Navicula peraustralis) were distinct from mid-depth (4-16 m) assemblages (Diadesmis contenta, Luticola muticopsis fo. reducta, Stauroneis anceps, Diadesmis contenta var. parallela, Luticola murrayi) and deep-water (2-31 m) assemblages (Luticola murrayi, Luticola muticopsis fo. reducta, Navicula molesta). Analysis of a sediment core (30 cm long, from 11 m water depth) from Lake Hoare revealed two abrupt changes in diatom assemblages. The upper section of the sediment core contained the greatest biomass of benthic microbial mat, as well as the greatest total abundance and diversity of diatoms. Relative abundances of diatoms in this section are similar to the surficial samples from mid-depths. An intermediate zone contained less organic material and lower densities of diatoms. The bottom section of the core contained the least amount of microbial mat and organic material, and the lowest density of diatoms. The dominant process influencing species composition and abundance of diatom assemblages in the benthic microbial mats is episodic deposition of coarse sediment from the ice surface.

  10. Impulse Flashover Tests at Edgar Beauchamp High Voltage Test Facility, Dixon, California, in Support of Cutler Insulator Failure Investigation

    DTIC Science & Technology

    2006-07-01

    sites. The strength member of the safety core insulators is a fiberglass belt wrapped around pins in the end fittings. Porcelain tubes cover the belt... porcelain tube and heavily tracked the fiberglass belt but left the belt intact structurally (Figure 1). Figure 1. Cutler safety core insulator ...fail-safe insulators. For these tests, the porcelain tube of the safety core insulator was replaced with a plastic see-through tube. The test report [5

  11. Core II Materials for Rural Agriculture Programs. Units E-H.

    ERIC Educational Resources Information Center

    Biondo, Ron; And Others

    This curriculum guide includes teaching packets for 21 problem areas to be included in a core curriculum for 10th grade students enrolled in a rural agricultural program. Covered in the four units included in this volume are crop science (harvesting farm crops and growing small grains); soil science and conservation of natural resources…

  12. Agricultural Mechanics Unit for Plant Science Core Curriculum. Volume 15, Number 4. Instructor's Guide.

    ERIC Educational Resources Information Center

    Linhardt, Richard E.; Hunter, Bill

    This instructor's guide is intended for use in teaching the agricultural mechanics unit of a plant science core curriculum. Covered in the individual units of the guide are the following topics: arc welding (following safety procedures, controlling distortion, selecting and caring for electrodes, identifying the material to be welded, and welding…

  13. Basic Safety II. Apprentice Related Training Module.

    ERIC Educational Resources Information Center

    Rice, Eric; Spetz, Sally H.

    One in a series of core instructional materials for apprentices to use during the first or second years of apprentice-related subjects training, this booklet deals with basic safety. The first section consists of an outline of the content and scope of the core materials as well as a self-assessment pretest. Covered in the four instructional…

  14. Metrics. A Basic Core Curriculum for Teaching Metrics to Vocational Students.

    ERIC Educational Resources Information Center

    Albracht, James; Simmons, A. D.

    This core curriculum contains five units for use in teaching metrics to vocational students. Included in the first unit are a series of learning activities to familiarize students with the terminology of metrics, including the prefixes and their values. Measures of distance and speed are covered. Discussed next are measures of volume used with…

  15. Energy and Agriculture. A Basic Core Curriculum for Teaching Energy to Vocational Agriculture Students.

    ERIC Educational Resources Information Center

    Albracht, James; French, Byron

    This core curriculum contains five units of material for teaching energy to vocational agriculture students. Energy uses and the benefits of energy conservation are covered in a unit on the impact of energy on agriculture. Discussed next are tractor performance and Nebraska tractor test data for selecting and evaluating tractors for maximum fuel…

  16. A coupled/uncoupled deformation and fatigue damage algorithm utilizing the finite element method

    NASA Technical Reports Server (NTRS)

    Wilt, Thomas E.; Arnold, Steven M.

    1994-01-01

    A fatigue damage computational algorithm utilizing a multiaxial, isothermal, continuum-based fatigue damage model for unidirectional metal matrix composites has been implemented into the commercial finite element code MARC using MARC user subroutines. Damage is introduced into the finite element solution through the concept of effective stress, which fully couples the fatigue damage calculations with the finite element deformation solution. An axisymmetric stress analysis was performed on a circumferentially reinforced ring, wherein both the matrix cladding and the composite core were assumed to behave in an elastic-perfectly plastic manner. The composite core behavior was represented using Hill's anisotropic continuum-based plasticity model, and similarly, the matrix cladding was represented by an isotropic plasticity model. Results are presented in the form of S-N curves and damage distribution plots.
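
    The effective-stress coupling mentioned above can be sketched in its simplest scalar continuum-damage form, in which the stress driving further damage is amplified as sigma/(1 - D). The power-law damage evolution and its constants below are generic assumptions for illustration, not the multiaxial isothermal model implemented in MARC.

      import numpy as np

      def cycles_to_failure(stress_amplitude, A=3000.0, m=5.0, max_cycles=10_000_000):
          # Scalar continuum damage: effective stress sigma_eff = sigma / (1 - D),
          # with a generic per-cycle damage law dD/dN = (sigma_eff / A)**m.
          D, n = 0.0, 0
          while D < 1.0 and n < max_cycles:
              sigma_eff = stress_amplitude / (1.0 - D)
              D += (sigma_eff / A) ** m
              n += 1
          return n, D

      for s in (300.0, 400.0, 500.0):              # hypothetical stress amplitudes (MPa)
          n, D = cycles_to_failure(s)
          print(f"amplitude {s:5.0f} MPa -> failure after ~{n} cycles (D = {min(D, 1.0):.2f})")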

  17. On the effective implementation of a boundary element code on graphics processing units using an out-of-core LU algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Azevedo, Ed F; Nintcheu Fata, Sylvain

    2012-01-01

    A collocation boundary element code for solving the three-dimensional Laplace equation, publicly available from http://www.intetec.org, has been adapted to run on an Nvidia Tesla general purpose graphics processing unit (GPU). Global matrix assembly and LU factorization of the resulting dense matrix were performed on the GPU. Out-of-core techniques were used to solve problems larger than available GPU memory. The code achieved over eight times speedup in matrix assembly and about 56 Gflops/sec in the LU factorization using only 512 Mbytes of GPU memory. Details of the GPU implementation and comparisons with the standard sequential algorithm are included to illustrate the performance of the GPU code.
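
    The out-of-core strategy above factors the dense matrix one block panel at a time so that only a few panels need to reside in GPU memory at once. The sketch below shows the underlying right-looking blocked LU update entirely in core and without pivoting (stable here only because the test matrix is made diagonally dominant); the GPU streaming and memory staging of the paper are not reproduced.

      import numpy as np
      from scipy.linalg import solve_triangular

      def blocked_lu_nopivot(A, b=64):
          # Right-looking blocked LU without pivoting, overwriting A with L (unit
          # lower, implicit diagonal) and U. Each iteration touches one panel and
          # the trailing submatrix, the part an out-of-core solver streams through memory.
          A = A.copy()
          n = A.shape[0]
          for k in range(0, n, b):
              e = min(k + b, n)
              # Factor the diagonal block in place (unblocked, no pivoting).
              for i in range(k, e):
                  A[i + 1:e, i] /= A[i, i]
                  A[i + 1:e, i + 1:e] -= np.outer(A[i + 1:e, i], A[i, i + 1:e])
              if e < n:
                  L_kk = np.tril(A[k:e, k:e], -1) + np.eye(e - k)
                  U_kk = np.triu(A[k:e, k:e])
                  # Row panel: solve L_kk * U_panel = A[k:e, e:].
                  A[k:e, e:] = solve_triangular(L_kk, A[k:e, e:], lower=True)
                  # Column panel: solve L_panel * U_kk = A[e:, k:e].
                  A[e:, k:e] = solve_triangular(U_kk, A[e:, k:e].T, trans='T', lower=False).T
                  # Trailing update (the matrix-matrix product that dominates the flops).
                  A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]
          return A

      n = 512
      rng = np.random.default_rng(5)
      A0 = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally dominant, so no pivoting needed
      packed = blocked_lu_nopivot(A0)
      L = np.tril(packed, -1) + np.eye(n)
      U = np.triu(packed)
      print("max |LU - A| =", np.abs(L @ U - A0).max())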

  18. Automatic crown cover mapping to improve forest inventory

    Treesearch

    Claude Vidal; Jean-Guy Boureau; Nicolas Robert; Nicolas Py; Josiane Zerubia; Xavier Descombes; Guillaume Perrin

    2009-01-01

    To automatically analyze near-infrared aerial photographs, the French National Institute for Research in Computer Science and Control developed, together with the French National Forest Inventory (NFI), a method for automatic crown cover mapping. This method uses a Reversible Jump Markov Chain Monte Carlo algorithm to locate the crowns and describe them using ellipses or...

  19. SEASONAL EMISSIONS OF AMMONIA AND METHANE FROM A HOG WASTE LAGOON WITH BIOACTIVE COVER

    EPA Science Inventory

    The paper discusses the use of plane-integrated (PI) open-path Fourier transform infrared spectrometry (OP-FTIR) to measure the flux of ammonia and methane from a hog waste lagoon before and after the installation of a bioactive cover. A computed tomography algorithm using a smoo...

  20. RNAcode: Robust discrimination of coding and noncoding regions in comparative sequence data

    PubMed Central

    Washietl, Stefan; Findeiß, Sven; Müller, Stephan A.; Kalkhof, Stefan; von Bergen, Martin; Hofacker, Ivo L.; Stadler, Peter F.; Goldman, Nick

    2011-01-01

    With the availability of genome-wide transcription data and massive comparative sequencing, the discrimination of coding from noncoding RNAs and the assessment of coding potential in evolutionarily conserved regions arose as a core analysis task. Here we present RNAcode, a program to detect coding regions in multiple sequence alignments that is optimized for emerging applications not covered by current protein gene-finding software. Our algorithm combines information from nucleotide substitution and gap patterns in a unified framework and also deals with real-life issues such as alignment and sequencing errors. It uses an explicit statistical model with no machine learning component and can therefore be applied “out of the box,” without any training, to data from all domains of life. We describe the RNAcode method and apply it in combination with mass spectrometry experiments to predict and confirm seven novel short peptides in Escherichia coli and to analyze the coding potential of RNAs previously annotated as “noncoding.” RNAcode is open source software and available for all major platforms at http://wash.github.com/rnacode. PMID:21357752

  1. Automated simultaneous multiple feature classification of MTI data

    NASA Astrophysics Data System (ADS)

    Harvey, Neal R.; Theiler, James P.; Balick, Lee K.; Pope, Paul A.; Szymanski, John J.; Perkins, Simon J.; Porter, Reid B.; Brumby, Steven P.; Bloch, Jeffrey J.; David, Nancy A.; Galassi, Mark C.

    2002-08-01

    Los Alamos National Laboratory has developed and demonstrated a highly capable system, GENIE, for the two-class problem of detecting a single feature against a background of non-feature. In addition to the two-class case, however, a commonly encountered remote sensing task is the segmentation of multispectral image data into a larger number of distinct feature classes or land cover types. To this end we have extended our existing system to allow the simultaneous classification of multiple features/classes from multispectral data. The technique builds on previous work and its core continues to utilize a hybrid evolutionary-algorithm-based system capable of searching for image processing pipelines optimized for specific image feature extraction tasks. We describe the improvements made to the GENIE software to allow multiple-feature classification and describe the application of this system to the automatic simultaneous classification of multiple features from MTI image data. We show the application of the multiple-feature classification technique to the problem of classifying lava flows on Mauna Loa volcano, Hawaii, using MTI image data and compare the classification results with standard supervised multiple-feature classification techniques.

  2. RNAcode: robust discrimination of coding and noncoding regions in comparative sequence data.

    PubMed

    Washietl, Stefan; Findeiss, Sven; Müller, Stephan A; Kalkhof, Stefan; von Bergen, Martin; Hofacker, Ivo L; Stadler, Peter F; Goldman, Nick

    2011-04-01

    With the availability of genome-wide transcription data and massive comparative sequencing, the discrimination of coding from noncoding RNAs and the assessment of coding potential in evolutionarily conserved regions arose as a core analysis task. Here we present RNAcode, a program to detect coding regions in multiple sequence alignments that is optimized for emerging applications not covered by current protein gene-finding software. Our algorithm combines information from nucleotide substitution and gap patterns in a unified framework and also deals with real-life issues such as alignment and sequencing errors. It uses an explicit statistical model with no machine learning component and can therefore be applied "out of the box," without any training, to data from all domains of life. We describe the RNAcode method and apply it in combination with mass spectrometry experiments to predict and confirm seven novel short peptides in Escherichia coli and to analyze the coding potential of RNAs previously annotated as "noncoding." RNAcode is open source software and available for all major platforms at http://wash.github.com/rnacode.

  3. The Apache OODT Project: An Introduction

    NASA Astrophysics Data System (ADS)

    Mattmann, C. A.; Crichton, D. J.; Hughes, J. S.; Ramirez, P.; Goodale, C. E.; Hart, A. F.

    2012-12-01

    Apache OODT is a science data system framework, born over the past decade, with hundreds of FTEs of investment, tens of sponsoring agencies (NASA, NIH/NCI, DoD, NSF, universities, etc.), and hundreds of projects and science missions that it powers every day. At its core, Apache OODT carries two fundamental classes of software services and components: those that deal with information integration from existing science data repositories and archives, which themselves already have in-use business processes and models for populating those archives. Information integration allows search, retrieval, and dissemination across these heterogeneous systems, and ultimately rapid, interactive data access and retrieval. The other suite of services and components within Apache OODT handles population and processing of those data repositories and archives. Workflows, resource management, crawling, remote data retrieval, curation and ingestion, along with science data algorithm integration, are all part of these Apache OODT software elements. In this talk, I will provide an overview of the use of Apache OODT to unlock and populate information from science data repositories and archives. We'll cover the basics, along with some advanced use cases and success stories.

  4. Autonomous sensor manager agents (ASMA)

    NASA Astrophysics Data System (ADS)

    Osadciw, Lisa A.

    2004-04-01

    Autonomous sensor manager agents are presented as an algorithm to perform sensor management within a multisensor fusion network. The design of the hybrid ant system/particle swarm agents is described in detail with some insight into their performance. Although the algorithm is designed for the general sensor management problem, a simulation example involving two radar systems is presented. Algorithmic parameters are determined by the size of the region covered by the sensor network, the number of sensors, and the number of parameters to be selected. With straightforward modifications, this algorithm can be adapted for most sensor management problems.
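
    The particle swarm component of such agents can be sketched generically: each particle encodes a candidate set of sensor parameters (here, the 2-D positions of two sensors) and the swarm minimizes an illustrative coverage objective. The ant-system hybridization and the radar-specific objective of the paper are not reproduced, and all constants are assumptions.

      import numpy as np

      rng = np.random.default_rng(6)

      targets = rng.uniform(0.0, 100.0, size=(200, 2))      # points the sensors should cover
      sensor_range = 30.0

      def uncovered(params):
          # params = [x1, y1, x2, y2]: positions of two sensors; the objective is the
          # number of target points outside the range of every sensor.
          sensors = params.reshape(-1, 2)
          dists = np.linalg.norm(targets[:, None, :] - sensors[None, :, :], axis=-1)
          return np.sum(dists.min(axis=1) > sensor_range)

      # Standard PSO update: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x).
      n_particles, dim, w, c1, c2 = 30, 4, 0.7, 1.5, 1.5
      x = rng.uniform(0.0, 100.0, size=(n_particles, dim))
      v = np.zeros_like(x)
      pbest, pbest_val = x.copy(), np.array([uncovered(p) for p in x])
      gbest = pbest[pbest_val.argmin()].copy()

      for _ in range(200):
          r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          x = np.clip(x + v, 0.0, 100.0)
          vals = np.array([uncovered(p) for p in x])
          improved = vals < pbest_val
          pbest[improved], pbest_val[improved] = x[improved], vals[improved]
          gbest = pbest[pbest_val.argmin()].copy()

      print("uncovered targets with best placement:", uncovered(gbest))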

  5. DeepSAT: A Deep Learning Approach to Tree-Cover Delineation in 1-m NAIP Imagery for the Continental United States

    NASA Technical Reports Server (NTRS)

    Ganguly, Sangram; Basu, Saikat; Nemani, Ramakrishna R.; Mukhopadhyay, Supratik; Michaelis, Andrew; Votava, Petr

    2016-01-01

    High-resolution tree cover classification maps are needed to increase the accuracy of current land ecosystem and climate model outputs. Few studies demonstrate the state of the art in deriving very high resolution (VHR) tree cover products. In addition, most methods rely heavily on commercial software that is difficult to scale given the region of study (e.g., continents to globe). Complexities in present approaches relate to (a) scalability of the algorithm, (b) large image data processing (compute and memory intensive), (c) computational cost, (d) massively parallel architecture, and (e) machine learning automation. In addition, VHR satellite datasets are of the order of terabytes and features extracted from these datasets are of the order of petabytes. In our present study, we have acquired the National Agriculture Imagery Program (NAIP) dataset for the Continental United States at a spatial resolution of 1 m. This data comes as image tiles (a total of a quarter million image scenes with 60 million pixels each) and has a total size of 65 terabytes for a single acquisition. Features extracted from the entire dataset would amount to 8-10 petabytes. In our proposed approach, we have implemented a novel semi-automated machine learning algorithm rooted in the principles of "deep learning" to delineate the percentage of tree cover. Using the NASA Earth Exchange (NEX) initiative, we have developed an end-to-end architecture by integrating a segmentation module based on Statistical Region Merging, a classification algorithm using a Deep Belief Network, and a structured prediction algorithm using Conditional Random Fields to integrate the results from the segmentation and classification modules into per-pixel class labels. The training process is scaled up using the power of GPUs, and the prediction is scaled to a quarter million NAIP tiles spanning the whole of the Continental United States using the NEX HPC supercomputing cluster. An initial pilot over the state of California, spanning a total of 11,095 NAIP tiles covering a geographical area of 163,696 sq. miles, has produced true positive rates of around 88 percent for fragmented forests and 74 percent for urban tree cover areas, with false positive rates lower than 2 percent for both landscapes.

  6. DeepSAT: A Deep Learning Approach to Tree-cover Delineation in 1-m NAIP Imagery for the Continental United States

    NASA Astrophysics Data System (ADS)

    Ganguly, S.; Basu, S.; Nemani, R. R.; Mukhopadhyay, S.; Michaelis, A.; Votava, P.

    2016-12-01

    High-resolution tree cover classification maps are needed to increase the accuracy of current land ecosystem and climate model outputs. Few studies demonstrate the state of the art in deriving very high resolution (VHR) tree cover products. In addition, most methods rely heavily on commercial software that is difficult to scale given the region of study (e.g., continents to globe). Complexities in present approaches relate to (a) scalability of the algorithm, (b) large image data processing (compute and memory intensive), (c) computational cost, (d) massively parallel architecture, and (e) machine learning automation. In addition, VHR satellite datasets are of the order of terabytes and features extracted from these datasets are of the order of petabytes. In our present study, we have acquired the National Agriculture Imagery Program (NAIP) dataset for the Continental United States at a spatial resolution of 1 m. This data comes as image tiles (a total of a quarter million image scenes with 60 million pixels each) and has a total size of 65 terabytes for a single acquisition. Features extracted from the entire dataset would amount to 8-10 petabytes. In our proposed approach, we have implemented a novel semi-automated machine learning algorithm rooted in the principles of "deep learning" to delineate the percentage of tree cover. Using the NASA Earth Exchange (NEX) initiative, we have developed an end-to-end architecture by integrating a segmentation module based on Statistical Region Merging, a classification algorithm using a Deep Belief Network, and a structured prediction algorithm using Conditional Random Fields to integrate the results from the segmentation and classification modules into per-pixel class labels. The training process is scaled up using the power of GPUs, and the prediction is scaled to a quarter million NAIP tiles spanning the whole of the Continental United States using the NEX HPC supercomputing cluster. An initial pilot over the state of California, spanning a total of 11,095 NAIP tiles covering a geographical area of 163,696 sq. miles, has produced true positive rates of around 88% for fragmented forests and 74% for urban tree cover areas, with false positive rates lower than 2% for both landscapes.

  7. Use of Added Sugars Instead of Total Sugars May Improve the Capacity of the Health Star Rating System to Discriminate between Core and Discretionary Foods.

    PubMed

    Menday, Hannah; Neal, Bruce; Wu, Jason H Y; Crino, Michelle; Baines, Surinder; Petersen, Kristina S

    2017-12-01

    The Australian Government has introduced a voluntary front-of-package labeling system that includes total sugar in the calculation. Our aim was to determine the effect of substituting added sugars for total sugars when calculating Health Star Ratings (HSR) and to identify whether use of added sugars improves the capacity to distinguish between core and discretionary food products. This study included packaged food and beverage products available in Australian supermarkets (n=3,610). The product categories included in the analyses were breakfast cereals (n=513), fruit (n=571), milk (n=309), non-alcoholic beverages (n=1,040), vegetables (n=787), and yogurt (n=390). Added sugar values were estimated for each product using a validated method. HSRs were then estimated for every product according to the established method using total sugar, and then by substituting added sugar for total sugar. The scoring system was not modified when added sugar was used in place of total sugar in the HSR calculation. Products were classified as core or discretionary based on the Australian Dietary Guidelines. To investigate whether use of added sugar in the HSR algorithm improved the distinction between core and discretionary products as defined by the Australian Dietary Guidelines, the proportion of core products that received an HSR of ≥3.5 stars and the proportion of discretionary products that received an HSR of <3.5 stars were determined for algorithms based upon total vs added sugars. There were 2,263 core and 1,347 discretionary foods; 1,684 of 3,610 (47%) products contained added sugar (median 8.4 g/100 g, interquartile range=5.0 to 12.2 g). When the HSR was calculated with added sugar instead of total sugar, an additional 166 (7.3%) core products received an HSR of ≥3.5 stars and an additional 103 (7.6%) discretionary products received an HSR of <3.5 stars. The odds of correctly identifying a product as core vs discretionary were increased by 61% (odds ratio 1.61, 95% CI 1.26 to 2.06; P<0.001) when the algorithm was based on added rather than total sugars. In the six product categories examined, substitution of added sugars for total sugars better aligned the HSR with the Australian Dietary Guidelines. Future work is required to investigate the impact in other product categories.
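
    The improvement reported above is summarized as an odds ratio for correctly classifying products when the algorithm uses added rather than total sugars; the sketch below shows the odds-ratio arithmetic on hypothetical counts, not the study's data.

      # Odds ratio of correct core/discretionary classification for two versions of a
      # scoring algorithm, computed from hypothetical 2x2 counts (not the study data).
      def odds_ratio(correct_a, incorrect_a, correct_b, incorrect_b):
          return (correct_a / incorrect_a) / (correct_b / incorrect_b)

      # Hypothetical counts: products classified correctly/incorrectly under each algorithm.
      added_correct, added_incorrect = 2900, 710
      total_correct, total_incorrect = 2600, 1010

      print(f"odds ratio (added vs total sugars): "
            f"{odds_ratio(added_correct, added_incorrect, total_correct, total_incorrect):.2f}")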

  8. Comparison of build-up region doses in oblique tangential 6 MV photon beams calculated by AAA and CCC algorithms in breast Rando phantom

    NASA Astrophysics Data System (ADS)

    Masunun, P.; Tangboonduangjit, P.; Dumrongkijudom, N.

    2016-03-01

    The purpose of this study is to use two algorithms to compare the build-up region doses on the breast Rando phantom surface covered with bolus, the doses within the breast Rando phantom, and the doses in a lung, which is a heterogeneous region. The AAA in the Eclipse TPS and the collapsed cone convolution (CCC) algorithm in the Pinnacle treatment planning system were used to plan a tangential-field technique with a 6 MV photon beam and a total dose of 200 cGy to the breast Rando phantom covered with bolus (5 mm and 10 mm thick). TLDs were calibrated with Cobalt-60 and used to measure the doses during irradiation. The treatment planning results show that the doses in the build-up region and the doses in the breast phantom were closely matched between the two algorithms, with less than 2% difference. However, the doses in the lung (L2) were overestimated by AAA, with 13.78% and 6.06% differences at 5 mm and 10 mm bolus thickness, respectively, when compared with the CCC algorithm. The TLD measurements show underestimation in the build-up region and in the breast phantom, but overestimation of the doses in the lung (L2), when compared with the doses in the two treatment plans at both bolus thicknesses.

  9. Paleoenvironmental changes during the past 2000 years, evidence from Kongsfjorden, Svalbard

    NASA Astrophysics Data System (ADS)

    Jernas, P.; Kristensen, D.; Koc, N.; Skirbekk, K.

    2009-04-01

    Over the past decades the Arctic has received more attention due to the rapid warming that is more pronounced there than elsewhere on the globe. Instrumental time series are too short to capture the range of natural variability in the Arctic, and we therefore have to rely on proxy records to describe the whole range of natural variability. In this context the late-Holocene climate variations are particularly important because natural forcings and the Earth's boundary conditions have been approximately similar to those operating today. Documenting past natural climate variability therefore has a vital role to play in understanding the present climate and predicting future change. Here we present a high-resolution marine record from Kongsfjorden covering the last c. 2000 years. The core site is located in Kongsfjorden, situated on the western coast of Spitsbergen (Svalbard). We focus on this region because it lies along the path of inflow of warmer and saline subsurface waters via the West Spitsbergen Current, which is one of the important heat sources for the Arctic Ocean. This current is a major regulator of environmental changes, for example sea-ice distribution, in the west Svalbard area. Therefore, quantification of its spatial and temporal variations through time is essential for understanding past environmental and climate changes. We have investigated faunal variations in benthic foraminifera from the upper 60 cm (covering the last two millennia) of a gravity core (510 cm total length) sampled at 1 cm intervals. The chronology of the gravity core is established by AMS radiocarbon dating. The core was additionally investigated with grain-size analysis and x-ray imaging. The sediment analysis and x-ray show that the upper part of the core contains large amounts of IRD from 7 cm to 25 cm, corresponding to an age of 150-700 cal yr. This indicates that abundant icebergs melted over the core site, depositing IRD. Further down core (1000-1800 cal yr) there is a significant dominance of fine-grained sediment and a decrease in ice rafting, indicating less influence from glaciers. The foraminiferal species composition shows decreasing content of agglutinated foraminifera down core, caused by their low preservation potential. For this core site this confirms the importance of calcareous foraminifera as a fossil-record tool. The two dominant species in the core are Elphidium excavatum and Nonionellina labradorica. During the last 2000 years the percentage of E. excavatum shows a general tendency to decrease while N. labradorica increases towards the present. Elphidium excavatum is typical of arctic glaciomarine environments close to glaciers and ice caps, indicating harsh conditions (cold bottom-water temperatures, lower salinity) and probably extensive ice cover. Nonionellina labradorica indicates the vicinity of oceanographic fronts and high productivity. Another species, Islandiella spp., often associated with increased productivity and the presence of the sea-ice edge, shows a significant increase in percentage from 1000 to 800 cal yr BP. From 600 to 400 cal yr BP Buccella spp. start to decline, suggesting increased sea-ice cover and diminished influence of the Coastal Current on the inner shelf of Svalbard.

  10. Generic algorithms for high performance scalable geocomputing

    NASA Astrophysics Data System (ADS)

    de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek

    2016-04-01

    During the last decade, the characteristics of computing hardware have changed considerably. For example, instead of a single general-purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general-purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute-intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available to developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g. threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle the distribution of compute tasks to CPU cores internally. In the resulting model the low-level details of how this is done are separated from the model-specific logic representing the modeled system. This contrasts with practices in which code for distributing compute tasks is mixed with model-specific code, and it results in a more maintainable model. For flexibility and efficiency, the algorithms are configurable at compile time with respect to the following aspects: data type, value type, no-data handling, input value domain handling, and output value range handling. This makes the algorithms usable in very different contexts, without the need for making intrusive changes to existing models when using them. Applications that benefit from using the Fern library include the construction of forward simulation models in (global) hydrology (e.g. PCR-GLOBWB (Van Beek et al. 2011)), ecology, geomorphology, or land use change (e.g. PLUC (Verstegen et al. 2014)) and the manipulation of hyper-resolution land surface data such as digital elevation models and remote sensing data. Using the Fern library, we have also created an add-on to the PCRaster Python Framework (Karssenberg et al. 2010) allowing its users to speed up their spatio-temporal models, sometimes by changing just a single line of Python code in their model. In our presentation we will give an overview of the design of the algorithms, providing examples of different contexts where they can be used to replace existing sequential algorithms, including the PCRaster environmental modeling software (www.pcraster.eu). We will show how the algorithms can be configured to behave differently when necessary. References Karssenberg, D., Schmitz, O., Salamon, P., De Jong, K. and Bierkens, M.F.P., 2010, A software framework for construction of process-based stochastic spatio-temporal models and data assimilation. Environmental Modelling & Software, 25, pp. 489-502. Van Beek, L. P. H., Y. Wada, and M. F. P. Bierkens. 2011. Global monthly water stress: 1. Water balance and water availability. Water Resources Research 47. Verstegen, J. A., D. Karssenberg, F. van der Hilst, and A. P. C. Faaij. 2014. Identifying a land use change cellular automaton by Bayesian data assimilation. Environmental Modelling & Software 53:121-136.
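
    The record above describes compile-time configurable grid algorithms; as a rough illustration of the underlying idea (not the Fern C++ API itself), the following Python sketch separates a local grid operation from its no-data handling policy and distributes row blocks over worker threads. All names in the sketch are hypothetical.

        # Illustrative sketch only: separates the "policy" (no-data handling) from the
        # local operation and parallelises over row blocks. Not part of the Fern library.
        from concurrent.futures import ThreadPoolExecutor
        import numpy as np

        NODATA = -999.0

        def add_local(a_block, b_block):
            """Cell-by-cell addition that propagates the no-data value."""
            out = a_block + b_block
            mask = (a_block == NODATA) | (b_block == NODATA)
            out[mask] = NODATA
            return out

        def apply_local(op, a, b, n_workers=4):
            """Apply a local operation block-wise, one row block per worker task."""
            blocks = np.array_split(np.arange(a.shape[0]), n_workers)
            with ThreadPoolExecutor(max_workers=n_workers) as pool:
                results = pool.map(lambda rows: op(a[rows], b[rows]), blocks)
            return np.vstack(list(results))

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            a = rng.random((1000, 1000))
            b = rng.random((1000, 1000))
            a[0, 0] = NODATA                      # introduce a no-data cell
            result = apply_local(add_local, a, b)
            print(result.shape, result[0, 0])     # (1000, 1000) -999.0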

  11. Stability performance and interface shear strength of geocomposite drain/soil systems

    NASA Astrophysics Data System (ADS)

    Othman, Maidiana; Frost, Matthew; Dixon, Neil

    2018-02-01

    Landfill covers are designed as impermeable caps on top of waste containment facilities after the completion of landfill operations. Geocomposite drain (GD) materials consist of a geonet or geospacer (as a drainage core) sandwiched between non-woven geotextiles that act as separators and filters. A GD provides a drainage function as part of the cover system. The stability performance of a landfill cover system is largely controlled by the interface shear strength mobilized between the elements of the cover. If a GD is used, the interface shear strength properties between the upper surface of the GD and the overlying soil may govern the stability of the system. It is not uncommon for fine-grained materials to be used as cover soils. In these cases, understanding soil softening at the soil interface with the non-woven geotextile is important. Such softening can be caused by capillary break behaviour and the build-up of water pressures from the toe of the drain upwards into the cover soil. The interaction processes that allow water to flow into a GD core through the soil-geotextile interface are very complex. This paper reports the in-situ interface shear strength behaviour of soil-GD interfaces using field measurements on the trial landfill cover at Bletchley, UK. Soil softening at the interface due to soaking shows a reduction in interface shear strength, and this aspect should be emphasized in design specifications and construction control. The results also help to increase confidence in the understanding of the implications for the design of cover systems.

  12. The contour-buildup algorithm to calculate the analytical molecular surface.

    PubMed

    Totrov, M; Abagyan, R

    1996-01-01

    A new algorithm is presented to calculate the analytical molecular surface defined as a smooth envelope traced out by the surface of a probe sphere rolled over the molecule. The core of the algorithm is the sequential build-up of multi-arc contours on the van der Waals spheres. This algorithm yields a substantial reduction in both the memory and time requirements of surface calculations. Further, the contour-buildup principle is intrinsically "local", which makes calculations of partial molecular surfaces even more efficient. Additionally, the algorithm is equally applicable not only to convex patches, but also to concave triangular patches, which may have complex multiple intersections. The algorithm permits the rigorous calculation of the full analytical molecular surface for a 100-residue protein in about 2 seconds on an SGI Indigo with an R4400 processor at 150 MHz, with the performance scaling almost linearly with the protein size. The contour-buildup algorithm is faster than the original Connolly algorithm by an order of magnitude.
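
    To make the geometric starting point of such contour-buildup approaches concrete, here is a small, hedged Python sketch (not the authors' code) that computes, for one probe-expanded atom sphere, the intersection circle with a neighbour; the multi-arc contours described above are assembled from arcs of such circles.

        # Illustrative geometry only: intersection circles between probe-expanded spheres,
        # the raw material from which multi-arc contours are built up.
        import numpy as np

        def intersection_circle(c1, r1, c2, r2):
            """Return (center, radius, axis) of the circle where two spheres intersect,
            or None if they do not intersect in a circle."""
            c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
            d = np.linalg.norm(c2 - c1)
            if d == 0 or d >= r1 + r2 or d <= abs(r1 - r2):
                return None
            # Distance from c1 to the plane of intersection along the inter-center axis.
            a = (d * d + r1 * r1 - r2 * r2) / (2.0 * d)
            radius = np.sqrt(r1 * r1 - a * a)
            axis = (c2 - c1) / d
            center = c1 + a * axis
            return center, radius, axis

        if __name__ == "__main__":
            probe = 1.4                      # water probe radius in Angstrom
            # Two hypothetical atoms: (center, van der Waals radius)
            (ca, ra), (cb, rb) = ((0.0, 0.0, 0.0), 1.7), ((2.5, 0.0, 0.0), 1.5)
            circle = intersection_circle(ca, ra + probe, cb, rb + probe)
            if circle is not None:
                center, radius, axis = circle
                print("circle center", center, "radius", round(radius, 3))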

  13. Quantum Algorithms Based on Physical Processes

    DTIC Science & Technology

    2013-12-03

    quantum walks with hard-core bosons and the graph isomorphism problem,” American Physical Society March meeting, March 2011 Kenneth Rudinger, John...King Gamble, Mark Wellons, Mark Friesen, Dong Zhou, Eric Bach, Robert Joynt, and S.N. Coppersmith, “Quantum random walks of non-interacting bosons on...and noninteracting Bosons to distinguish nonisomorphic graphs. 1) We showed that quantum walks of two hard-core Bosons can distinguish all pairs of

  14. Quantum Algorithms Based on Physical Processes

    DTIC Science & Technology

    2013-12-02

    quantum walks with hard-core bosons and the graph isomorphism problem,” American Physical Society March meeting, March 2011 Kenneth Rudinger, John...King Gamble, Mark Wellons, Mark Friesen, Dong Zhou, Eric Bach, Robert Joynt, and S.N. Coppersmith, “Quantum random walks of non-interacting bosons on...and noninteracting Bosons to distinguish nonisomorphic graphs. 1) We showed that quantum walks of two hard-core Bosons can distinguish all pairs of

  15. Support the Design of Improved IUE NEWSIPS High Dispersion Extraction Algorithms: Improved IUE High Dispersion Extraction Algorithms

    NASA Technical Reports Server (NTRS)

    Lawton, Pat

    2004-01-01

    The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose of this work was to evaluate the use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, to evaluate various extraction methods, and to design algorithms for the evaluation of IUE high dispersion spectra. It was concluded that the use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
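
    As a reminder of the line shape chosen above, the Voigt profile is the convolution of a Gaussian (core) with a Lorentzian (wings); a standard way to evaluate it, shown here as a generic Python sketch rather than the NEWSIPS extraction code, uses the Faddeeva function.

        # Generic Voigt profile evaluation (not the IUE/NEWSIPS extraction code).
        import numpy as np
        from scipy.special import wofz

        def voigt(x, sigma, gamma):
            """Voigt profile: Gaussian of std-dev sigma convolved with a
            Lorentzian of half-width gamma, evaluated via the Faddeeva function."""
            z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
            return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

        if __name__ == "__main__":
            x = np.linspace(-5.0, 5.0, 201)
            profile = voigt(x, sigma=0.8, gamma=0.4)   # hypothetical sigma/gamma values
            print("peak value:", profile.max())
            print("area (should be ~1):", np.trapz(profile, x))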

  16. A GPU-paralleled implementation of an enhanced face recognition algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Liu, Xiyang; Shao, Shuai; Zan, Jiguo

    2013-03-01

    Face recognition algorithms based on compressed sensing and sparse representation have been widely discussed in recent years. This scheme increases the recognition rate as well as the anti-noise capability. However, the computational cost is high and has become a main limiting factor for real-world applications. In this paper, we introduce a GPU-accelerated hybrid variant of a face recognition algorithm, named the parallel face recognition algorithm (pFRA). We describe how to carry out parallel optimization to take full advantage of the many-core structure of a GPU. The pFRA is tested and compared with several other implementations under different data sample sizes. Finally, our pFRA, implemented with an NVIDIA GPU and the Compute Unified Device Architecture (CUDA) programming model, achieves a significant speedup over the traditional CPU implementations.

  17. Progress Towards a Rad-Hydro Code for Modern Computing Architectures LA-UR-10-02825

    NASA Astrophysics Data System (ADS)

    Wohlbier, J. G.; Lowrie, R. B.; Bergen, B.; Calef, M.

    2010-11-01

    We are entering an era of high performance computing where data movement is the overwhelming bottleneck to scalable performance, as opposed to the speed of floating-point operations per processor. All multi-core hardware paradigms, whether heterogeneous or homogeneous, be it the Cell processor, GPGPU, or multi-core x86, share this common trait. In multi-physics applications such as inertial confinement fusion or astrophysics, one may be solving multi-material hydrodynamics with tabular equation of state data lookups, radiation transport, nuclear reactions, and charged particle transport in a single time cycle. The algorithms are intensely data dependent (e.g., EOS, opacity, nuclear data), and multi-core hardware memory restrictions are forcing code developers to rethink code and algorithm design. For the past two years LANL has been funding a small effort referred to as Multi-Physics on Multi-Core to explore ideas for code design as pertaining to inertial confinement fusion and astrophysics applications. The near-term goals of this project are to have a multi-material radiation hydrodynamics capability, with tabular equation of state lookups, on Cartesian and curvilinear block-structured meshes. In the longer term we plan to add fully implicit multi-group radiation diffusion and material heat conduction, and block-structured AMR. We will report on our progress to date.

  18. Distributed Sleep Scheduling in Wireless Sensor Networks via Fractional Domatic Partitioning

    NASA Astrophysics Data System (ADS)

    Schumacher, André; Haanpää, Harri

    We consider setting up sleep scheduling in sensor networks. We formulate the problem as an instance of the fractional domatic partition problem and obtain a distributed approximation algorithm by applying linear programming approximation techniques. Our algorithm is an application of the Garg-Könemann (GK) scheme that requires solving an instance of the minimum weight dominating set (MWDS) problem as a subroutine. Our two main contributions are a distributed implementation of the GK scheme for the sleep-scheduling problem and a novel asynchronous distributed algorithm for approximating MWDS based on a primal-dual analysis of Chvátal's set-cover algorithm. We evaluate our algorithm with ns2 simulations.
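
    For readers unfamiliar with the set-cover view of dominating sets used above, the following Python sketch shows the centralized greedy (Chvátal-style) approximation for minimum weight dominating set; it illustrates only the idea behind the subroutine, not the distributed primal-dual algorithm of the paper.

        # Centralized greedy MWDS approximation via the set-cover view (illustrative only).
        def greedy_mwds(weights, adjacency):
            """weights: {node: weight}; adjacency: {node: set of neighbours}.
            Each node can 'cover' its closed neighbourhood."""
            closed = {v: adjacency[v] | {v} for v in weights}
            uncovered = set(weights)
            dominating_set = set()
            while uncovered:
                # Pick the node with the best weight per newly covered node.
                best = min((v for v in weights if closed[v] & uncovered),
                           key=lambda v: weights[v] / len(closed[v] & uncovered))
                dominating_set.add(best)
                uncovered -= closed[best]
            return dominating_set

        if __name__ == "__main__":
            # Small hypothetical sensor network.
            adjacency = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
            weights = {0: 2.0, 1: 1.0, 2: 1.5, 3: 1.0, 4: 3.0}
            print(greedy_mwds(weights, adjacency))   # {1, 3} for this example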

  19. Tracing Forest Change through 40 Years on Two Continents with the BULC Algorithm and Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Cardille, J. A.; Crowley, M.; Fortin, J. A.; Lee, J.; Perez, E.; Sleeter, B. M.; Thau, D.

    2016-12-01

    With the opening of the Landsat archive, researchers have a vast new data source teeming with imagery and potential. Beyond Landsat, data from other sensors are newly available as well: these include ALOS/PALSAR, Sentinel-1 and -2, MERIS, and many more. Google Earth Engine, developed to organize and provide analysis tools for these immense data sets, is an ideal platform for researchers trying to sift through huge image stacks. It offers nearly unlimited processing power and storage with a straightforward programming interface. Yet labeling land-cover change through time remains challenging given the current state of the art for interpreting remote sensing image sequences. Moreover, combining data from very different image platforms remains quite difficult. To address these challenges, we developed the BULC algorithm (Bayesian Updating of Land Cover), designed for the continuous updating of land-cover classifications through time in large data sets. The algorithm ingests data from any of the wide variety of earth-resources sensors; it maintains a running estimate of land-cover probabilities and the most probable class at all time points along a sequence of events. Here we compare BULC results from two study sites that witnessed considerable forest change in the last 40 years: the Pacific Northwest of the United States and the Mato Grosso region of Brazil. In Brazil, we incorporated rough classifications from more than 100 images of varying quality, mixing imagery from more than 10 different sensors. In the Pacific Northwest, we used BULC to identify forest changes due to logging and urbanization from 1973 to the present. Both regions had classification sequences that were better than many of the component days, effectively ignoring clouds and other unwanted noise while fusing the information contained across several platforms. As we leave remote sensing's data-poor era and enter a period with multiple looks at Earth's surface from multiple sensors over a short period of time, the BULC algorithm can help to sift through images of varying quality in Google Earth Engine to extract the most useful information for mapping the state and history of Earth's land cover.
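
    The Bayesian updating step described above can be illustrated with a minimal per-pixel sketch (our own simplification, not the BULC implementation in Earth Engine): the class probability vector is multiplied by a likelihood column taken from an accuracy/confusion matrix for the incoming classification, then renormalized.

        # Minimal per-pixel Bayesian update of land-cover class probabilities
        # (a simplification for illustration, not the BULC/Earth Engine code).
        import numpy as np

        def bulc_update(prior, observed_class, confusion):
            """prior: probabilities over true classes; confusion[i, j] is the
            probability that a pixel of true class i is labeled j by the new image."""
            likelihood = confusion[:, observed_class]
            posterior = prior * likelihood
            return posterior / posterior.sum()

        if __name__ == "__main__":
            classes = ["forest", "non-forest"]
            prior = np.array([0.5, 0.5])
            # Hypothetical accuracy of the incoming (noisy) classification.
            confusion = np.array([[0.8, 0.2],
                                  [0.3, 0.7]])
            for label in [0, 0, 1, 0]:          # a sequence of noisy per-date labels
                prior = bulc_update(prior, label, confusion)
                print(dict(zip(classes, np.round(prior, 3))))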

  20. State-based verification of RTCP-nets with nuXmv

    NASA Astrophysics Data System (ADS)

    Biernacka, Agnieszka; Biernacki, Jerzy; Szpyrka, Marcin

    2015-12-01

    The paper presents an algorithm for translating the coverability graphs of RTCP-nets (real-time coloured Petri nets) into nuXmv state machines. The approach enables users to verify RTCP-nets with the model checking techniques provided by the nuXmv tool. Full details of the algorithm are presented and an illustrative example of the approach's usefulness is provided.

  1. Algorithms for Differential Games with Bounded Control and States.

    DTIC Science & Technology

    1982-03-01

    ALGORITHMS FOR DIFFERENTIAL GAMES WITH BOUNDED CONTROL AND STATES (California Univ., Los Angeles, School of Engineering and Applied Science). Final report, covering 11/29/79-11/28/... ...problems are probably the most natural application of differential game theory and have been treated by many authors as such. Very few problems of this

  2. Accuracy of the Estimated Core Temperature (ECTemp) Algorithm in Estimating Circadian Rhythm Indicators

    DTIC Science & Technology

    2017-04-12

    measurement of CT outside of stringent laboratory environments. This study evaluated ECTempTM, a heart rate-based extended Kalman Filter CT...based CT-estimation algorithms [7, 13, 14]. One notable example is ECTempTM, which utilizes an extended Kalman Filter to estimate CT from...3. The extended Kalman filter mapping function variance coefficient (Ct) was computed using the following equation: = −9.1428 ×

  3. Peculiarities of stochastic regime of Arctic ice cover time evolution over 1987-2014 from microwave satellite sounding on the basis of NASA team 2 algorithm

    NASA Astrophysics Data System (ADS)

    Raev, M. D.; Sharkov, E. A.; Tikhonov, V. V.; Repina, I. A.; Komarova, N. Yu.

    2015-12-01

    The GLOBAL-RT database (DB) is composed of long-term multichannel microwave radiometry data received from the DMSP F08-F17 satellites; it is permanently supplemented with new data by the Earth exploration from space department of the Space Research Institute, Russian Academy of Sciences. Arctic ice-cover areas for regions above 60° N latitude were calculated using the DB polar version and the NASA Team 2 algorithm, which is widely used in the scientific literature. Based on the analysis of the variability of the Arctic ice cover during 1987-2014, the two months in which the Arctic ice cover is maximal (February) and minimal (September) were selected, and the average ice-cover area was calculated for these months. Confidence intervals of the average values lie within the 95-98% limits. Several approximations are derived for the time dependences of the ice-cover maximum and minimum over the period under study. Regression dependences were calculated for polynomials from the first degree (linear) to the sixth. It was found that the root-mean-square error of deviation from the approximating curve decreased sharply up to the biquadratic (fourth-degree) polynomial and then varied insignificantly: from 0.5593 for the third-degree polynomial to 0.4560 for the biquadratic polynomial. Hence, the commonly used strictly linear regression with a negative time gradient for the September Arctic ice-cover minimum over 30 years should be considered incorrect.
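
    The polynomial-regression comparison described above can be reproduced in outline with a short Python sketch on synthetic data (the GLOBAL-RT values themselves are not reproduced here); the RMSE of fits of increasing degree is compared exactly as in the text.

        # Compare RMSE of polynomial fits of degree 1..6 to a yearly ice-extent series.
        # Synthetic data for illustration only; not the GLOBAL-RT September minima.
        import numpy as np

        rng = np.random.default_rng(1)
        years = np.arange(1987, 2015)
        extent = (7.5 - 0.05 * (years - 1987) + 0.3 * np.sin((years - 1987) / 4.0)
                  + rng.normal(0.0, 0.2, years.size))   # hypothetical 10^6 km^2 values

        for degree in range(1, 7):
            coeffs = np.polyfit(years, extent, degree)
            fitted = np.polyval(coeffs, years)
            rmse = np.sqrt(np.mean((extent - fitted) ** 2))
            print(f"degree {degree}: RMSE = {rmse:.4f}")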

  4. We introduce an algorithm for the simultaneous reconstruction of faults and slip fields.

    NASA Astrophysics Data System (ADS)

    Volkov, D.

    2017-12-01

    We introduce an algorithm for the simultaneous reconstruction of faults and slip fields on those faults. We define a regularized functional to be minimized for the reconstruction. We prove that the minimum of that functional converges to the unique solution of the related fault inverse problem. Due to inherent uncertainties in measurements, rather than seeking a deterministic solution to the fault inverse problem, we consider a Bayesian approach. The advantage of such an approach is that we obtain a way of quantifying uncertainties as part of our final answer. On the downside, this Bayesian approach leads to a very large computation. To contend with the size of this computation, we developed an algorithm for the numerical solution of the stochastic minimization problem which can be easily implemented on a parallel multi-core platform, and we discuss techniques to save on computational time. After showing how this algorithm performs on simulated data and assessing the effect of noise, we apply it to measured data. The data were recorded during a slow slip event in Guerrero, Mexico.
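
    As a schematic of the deterministic part of such an inversion (not the authors' functional or fault parametrization), a Tikhonov-regularized least-squares slip inversion for one fixed, hypothetical fault geometry can be written in a few lines of Python; a Bayesian treatment then explores fault geometries around this kind of solve.

        # Schematic Tikhonov-regularized slip inversion for a fixed, hypothetical
        # fault geometry: minimize ||G m - d||^2 + alpha * ||m||^2.
        import numpy as np

        rng = np.random.default_rng(0)
        n_obs, n_slip = 40, 10
        G = rng.normal(size=(n_obs, n_slip))        # hypothetical Green's functions
        m_true = np.maximum(rng.normal(size=n_slip), 0.0)
        d = G @ m_true + rng.normal(scale=0.05, size=n_obs)   # noisy surface data

        alpha = 0.1                                  # regularization weight
        A = np.vstack([G, np.sqrt(alpha) * np.eye(n_slip)])
        b = np.concatenate([d, np.zeros(n_slip)])
        m_est, *_ = np.linalg.lstsq(A, b, rcond=None)

        print("relative error:", np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))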

  5. An assessment of support vector machines for land cover classification

    USGS Publications Warehouse

    Huang, C.; Davis, L.S.; Townshend, J.R.G.

    2002-01-01

    The support vector machine (SVM) is a group of theoretically superior machine learning algorithms. It was found competitive with the best available machine learning algorithms in classifying high-dimensional data sets. This paper gives an introduction to the theoretical development of the SVM and an experimental evaluation of its accuracy, stability and training speed in deriving land cover classifications from satellite images. The SVM was compared to three other popular classifiers, including the maximum likelihood classifier (MLC), neural network classifiers (NNC) and decision tree classifiers (DTC). The impacts of kernel configuration on the performance of the SVM and of the selection of training data and input variables on the four classifiers were also evaluated in this experiment.
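
    A minimal scikit-learn sketch of the kind of comparison described above (with synthetic spectral data rather than the satellite imagery used in the paper) might look like this; the classifier settings are illustrative, not those of the study.

        # Illustrative land-cover classifier comparison on synthetic "spectral" features;
        # not the data or settings of the cited study.
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import accuracy_score

        X, y = make_classification(n_samples=2000, n_features=6, n_informative=5,
                                   n_redundant=0, n_classes=4, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        classifiers = {
            "SVM (RBF kernel)": SVC(kernel="rbf", C=10.0, gamma="scale"),
            "Decision tree": DecisionTreeClassifier(random_state=0),
            "Neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                            random_state=0),
        }
        for name, clf in classifiers.items():
            clf.fit(X_train, y_train)
            print(name, accuracy_score(y_test, clf.predict(X_test)))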

  6. DCMIP2016: a review of non-hydrostatic dynamical core design and intercomparison of participating models

    NASA Astrophysics Data System (ADS)

    Ullrich, Paul A.; Jablonowski, Christiane; Kent, James; Lauritzen, Peter H.; Nair, Ramachandran; Reed, Kevin A.; Zarzycki, Colin M.; Hall, David M.; Dazlich, Don; Heikes, Ross; Konor, Celal; Randall, David; Dubos, Thomas; Meurdesoif, Yann; Chen, Xi; Harris, Lucas; Kühnlein, Christian; Lee, Vivian; Qaddouri, Abdessamad; Girard, Claude; Giorgetta, Marco; Reinert, Daniel; Klemp, Joseph; Park, Sang-Hun; Skamarock, William; Miura, Hiroaki; Ohno, Tomoki; Yoshida, Ryuji; Walko, Robert; Reinecke, Alex; Viner, Kevin

    2017-12-01

    Atmospheric dynamical cores are a fundamental component of global atmospheric modeling systems and are responsible for capturing the dynamical behavior of the Earth's atmosphere via numerical integration of the Navier-Stokes equations. These systems have existed in one form or another for over half of a century, with the earliest discretizations having now evolved into a complex ecosystem of algorithms and computational strategies. In essence, no two dynamical cores are alike, and their individual successes suggest that no perfect model exists. To better understand modern dynamical cores, this paper aims to provide a comprehensive review of 11 non-hydrostatic dynamical cores, drawn from modeling centers and groups that participated in the 2016 Dynamical Core Model Intercomparison Project (DCMIP) workshop and summer school. This review includes a choice of model grid, variable placement, vertical coordinate, prognostic equations, temporal discretization, and the diffusion, stabilization, filters, and fixers employed by each system.

  7. PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.

    PubMed

    Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A

    2016-06-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA, with the aim of significantly shortening the code's execution time. Selected routines were parallelised using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained.

  8. Development of an embedded atmospheric turbulence mitigation engine

    NASA Astrophysics Data System (ADS)

    Paolini, Aaron; Bonnett, James; Kozacik, Stephen; Kelmelis, Eric

    2017-05-01

    Methods to reconstruct pictures from imagery degraded by atmospheric turbulence have been under development for decades. The techniques were initially developed for observing astronomical phenomena from the Earth's surface, but have more recently been modified for ground and air surveillance scenarios. Such applications can impose significant constraints on deployment options because they both increase the computational complexity of the algorithms themselves and often dictate a requirement for low size, weight, and power (SWaP) form factors. Consequently, embedded implementations must be developed that can perform the necessary computations on low-SWaP platforms. Fortunately, there is an emerging class of embedded processors driven by the mobile and ubiquitous computing industries. We have leveraged these processors to develop embedded versions of the core atmospheric correction engine found in our ATCOM software. In this paper, we will present our experience adapting our algorithms for embedded systems on a chip (SoCs), namely the NVIDIA Tegra that couples general-purpose ARM cores with their graphics processing unit (GPU) technology and the Xilinx Zynq which pairs similar ARM cores with their field-programmable gate array (FPGA) fabric.

  9. Comparative Performance Analysis of Coarse Solvers for Algebraic Multigrid on Multicore and Manycore Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Druinsky, Alex; Ghysels, Pieter; Li, Xiaoye S.

    In this paper, we study the performance of a two-level algebraic-multigrid algorithm, with a focus on the impact of the coarse-grid solver on performance. We consider two algorithms for solving the coarse-space systems: the preconditioned conjugate gradient method and a new robust HSS-embedded low-rank sparse-factorization algorithm. Our test data come from the SPE Comparative Solution Project for oil-reservoir simulations. We contrast the performance of our code on one 12-core socket of a Cray XC30 machine with performance on a 60-core Intel Xeon Phi coprocessor. To obtain top performance, we optimized the code to take full advantage of fine-grained parallelism and made it thread-friendly for high thread counts. We also developed a bounds-and-bottlenecks performance model of the solver, which we used to guide us through the optimization effort, and carried out performance tuning in the solver's large parameter space. As a result, significant speedups were obtained on both machines.
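
    For context, the first coarse-grid solver mentioned above, the preconditioned conjugate gradient method, can be sketched in a few lines of Python with a Jacobi preconditioner; this is a textbook version, not the optimized implementation benchmarked in the paper.

        # Textbook preconditioned conjugate gradient with a Jacobi (diagonal) preconditioner.
        import numpy as np

        def pcg(A, b, tol=1e-8, max_iter=1000):
            M_inv = 1.0 / np.diag(A)                 # Jacobi preconditioner
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = M_inv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        if __name__ == "__main__":
            n = 200
            # Symmetric positive definite test matrix (1D Laplacian).
            A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
            b = np.ones(n)
            x = pcg(A, b)
            print("residual norm:", np.linalg.norm(b - A @ x))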

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trędak, Przemysław, E-mail: przemyslaw.tredak@fuw.edu.pl; Rudnicki, Witold R.; Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw, ul. Pawińskiego 5a, 02-106 Warsaw

    The second-generation Reactive Bond Order (REBO) empirical potential is commonly used to accurately model a wide range of hydrocarbon materials. It is also extensible to other atom types and interactions. The REBO potential assumes a complex multi-body interaction model that is difficult to represent efficiently in the SIMD or SIMT programming model. Hence, despite its importance, no efficient GPGPU implementation has been developed for this potential. Here we present a detailed description of a highly efficient GPGPU implementation of a molecular dynamics algorithm using the REBO potential. The presented algorithm takes advantage of rarely used properties of the SIMT architecture of a modern GPU to solve the difficult synchronization issues that arise in computations of multi-body potentials. Techniques developed for this problem may also be used to achieve efficient solutions of different problems. The performance of the proposed algorithm is assessed using a range of model systems. It is compared to the highly optimized CPU implementation (both single-core and OpenMP) available in the LAMMPS package. These experiments show up to a 6x improvement in force computation time using a single processor of the NVIDIA Tesla K80 compared to a high-end 16-core Intel Xeon processor.

  11. Highly efficient spatial data filtering in parallel using the opensource library CPPPO

    NASA Astrophysics Data System (ADS)

    Municchi, Federico; Goniva, Christoph; Radl, Stefan

    2016-10-01

    CPPPO is a compilation of parallel data processing routines developed with the aim of creating a library for "scale bridging" (i.e. connecting different scales by means of closure models) in a multi-scale approach. CPPPO features a number of parallel filtering algorithms designed for use with structured and unstructured Eulerian meshes, as well as Lagrangian data sets. In addition, data can be processed on the fly, allowing the collection of relevant statistics without saving individual snapshots of the simulation state. Our library is provided with an interface to the widely used CFD solver OpenFOAM®, and can be easily connected to any other software package via interface modules. Also, we introduce a novel, extremely efficient approach to parallel data filtering, and show that our algorithms scale super-linearly on multi-core clusters. Furthermore, we provide a guideline for choosing the optimal Eulerian cell selection algorithm depending on the number of CPU cores used. Finally, we demonstrate the accuracy and the parallel scalability of CPPPO in a showcase focusing on heat and mass transfer from a dense bed of particles.

  12. A VHDL Core for Intrinsic Evolution of Discrete Time Filters with Signal Feedback

    NASA Technical Reports Server (NTRS)

    Gwaltney, David A.; Dutton, Kenneth

    2005-01-01

    The design of an Evolvable Machine VHDL Core is presented, representing a discrete-time processing structure capable of supporting control system applications. This VHDL Core is implemented in an FPGA and is interfaced with an evolutionary algorithm implemented in firmware on a Digital Signal Processor (DSP) to create an evolvable system platform. The salient features of this architecture are presented. The capability to implement IIR filter structures is presented along with the results of the intrinsic evolution of a filter. The robustness of the evolved filter design is tested and its unique characteristics are described.

  13. Constraining Genome-Scale Models to Represent the Bow Tie Structure of Metabolism for 13C Metabolic Flux Analysis

    PubMed Central

    Ando, David; Singh, Jahnavi; Keasling, Jay D.; García Martín, Héctor

    2018-01-01

    Determination of internal metabolic fluxes is crucial for fundamental and applied biology because they map how carbon and electrons flow through metabolism to enable cell function. 13C Metabolic Flux Analysis (13C MFA) and Two-Scale 13C Metabolic Flux Analysis (2S-13C MFA) are two techniques used to determine such fluxes. Both operate on the simplifying approximation that metabolic flux from peripheral metabolism into central “core” carbon metabolism is minimal, and can be omitted when modeling isotopic labeling in core metabolism. The validity of this “two-scale” or “bow tie” approximation is supported both by the ability to accurately model experimental isotopic labeling data, and by experimentally verified metabolic engineering predictions using these methods. However, the boundaries of core metabolism that satisfy this approximation can vary across species, and across cell culture conditions. Here, we present a set of algorithms that (1) systematically calculate flux bounds for any specified “core” of a genome-scale model so as to satisfy the bow tie approximation and (2) automatically identify an updated set of core reactions that can satisfy this approximation more efficiently. First, we leverage linear programming to simultaneously identify the lowest fluxes from peripheral metabolism into core metabolism compatible with the observed growth rate and extracellular metabolite exchange fluxes. Second, we use Simulated Annealing to identify an updated set of core reactions that allow for a minimum of fluxes into core metabolism to satisfy these experimental constraints. Together, these methods accelerate and automate the identification of a biologically reasonable set of core reactions for use with 13C MFA or 2S-13C MFA, as well as provide for a substantially lower set of flux bounds for fluxes into the core as compared with previous methods. We provide an open source Python implementation of these algorithms at https://github.com/JBEI/limitfluxtocore. PMID:29300340
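
    The first step described above, using linear programming to find the smallest feasible fluxes from peripheral metabolism into the core, can be illustrated on a toy network with scipy (a hypothetical two-metabolite example, not the genome-scale models or the limitfluxtocore code):

        # Toy illustration of minimizing a peripheral influx into "core" metabolism
        # subject to steady state and measured exchange fluxes (not a genome-scale model).
        import numpy as np
        from scipy.optimize import linprog

        # Reactions: v1 uptake -> A, v2 A -> B (core), v3 B -> biomass, v4 peripheral -> B
        S = np.array([[1, -1,  0, 0],    # metabolite A: v1 - v2 = 0
                      [0,  1, -1, 1]])   # metabolite B: v2 - v3 + v4 = 0
        b_eq = np.zeros(2)

        bounds = [(0, 8),      # measured uptake limit for v1
                  (0, 100),    # core reaction v2
                  (10, 10),    # measured growth flux v3 fixed at 10
                  (0, 100)]    # peripheral influx v4 (the flux we bound)

        c = np.array([0, 0, 0, 1])       # minimize the peripheral influx v4
        res = linprog(c, A_eq=S, b_eq=b_eq, bounds=bounds, method="highs")
        print("minimum peripheral influx into core:", res.x[3])   # 2.0 here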

  14. Shock wave propagation in layered planetary embryos

    NASA Astrophysics Data System (ADS)

    Arkani-Hamed, Jafar; Ivanov, Boris A.

    2014-05-01

    The propagation of impact-induced shock wave inside a planetary embryo is investigated using the Hugoniot equations and a new scaling law, governing the particle velocity variations along a shock ray inside a spherical body. The scaling law is adopted to determine the impact heating of a growing embryo in its early stage when it is an undifferentiated and uniform body. The new scaling law, similar to other existing scaling laws, is not suitable for a large differentiated embryo consisting of a silicate mantle overlying an iron core. An algorithm is developed in this study on the basis of the ray theory in a spherically symmetric body which relates the shock parameters at the top of the core to those at the base of the mantle, thus enabling the adoption of scaling laws to estimate the impact heating of both the mantle and the core. The algorithm is applied to two embryo models: a simple two-layered model with a uniform mantle overlying a uniform core, and a model where the pre-shock density and acoustic velocity of the embryo are radially dependent. The former illustrates details of the particle velocity, shock pressure, and temperature increase behind the shock front in a 2D axisymmetric geometry. The latter provides a means to compare the results with those obtained by a hydrocode simulation. The agreement between the results of the two techniques in revealing the effects of the core-mantle boundary on the shock wave transmission across the boundary is encouraging.

  15. Adaptive control of nonlinear system using online error minimum neural networks.

    PubMed

    Jia, Chao; Li, Xiaoli; Wang, Kang; Ding, Dawei

    2016-11-01

    In this paper, a new learning algorithm named OEM-ELM (Online Error Minimized-ELM) is proposed, based on the ELM (Extreme Learning Machine) neural network algorithm and an incremental extension of its basic structure. The core idea of the OEM-ELM algorithm is: online learning, evaluation of network performance, and incremental growth of the number of hidden nodes. It combines the advantages of OS-ELM and EM-ELM, which improves the identification capability and avoids network redundancy. An adaptive controller based on the proposed OEM-ELM algorithm is constructed, which has a stronger capability to adapt to changes in the environment. Adaptive control of a Continuous Stirred Tank Reactor (CSTR) chemical process is also given as an application. The simulation results show that, compared with the traditional ELM algorithm, the proposed algorithm can avoid network redundancy and greatly improve control performance.
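
    To make the ELM building block concrete, here is a minimal batch ELM sketch in Python (our own illustration; the online error-minimizing and node-growing logic of OEM-ELM is only hinted at in the comments):

        # Minimal Extreme Learning Machine: random hidden layer, least-squares output weights.
        # OEM-ELM additionally updates beta online and grows the hidden layer when the
        # tracking error stays too high; that logic is omitted here.
        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        class ELM:
            def __init__(self, n_inputs, n_hidden, seed=0):
                rng = np.random.default_rng(seed)
                self.W = rng.normal(size=(n_inputs, n_hidden))   # fixed random input weights
                self.b = rng.normal(size=n_hidden)               # fixed random biases
                self.beta = None                                 # trained output weights

            def fit(self, X, y):
                H = sigmoid(X @ self.W + self.b)
                self.beta = np.linalg.pinv(H) @ y                # least-squares solution
                return self

            def predict(self, X):
                return sigmoid(X @ self.W + self.b) @ self.beta

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            X = rng.uniform(-3, 3, size=(500, 1))
            y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=500)
            model = ELM(n_inputs=1, n_hidden=30).fit(X, y)
            print("training RMSE:", np.sqrt(np.mean((model.predict(X) - y) ** 2)))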

  16. FPGA Online Tracking Algorithm for the PANDA Straw Tube Tracker

    NASA Astrophysics Data System (ADS)

    Liang, Yutie; Ye, Hua; Galuska, Martin J.; Gessler, Thomas; Kuhn, Wolfgang; Lange, Jens Soren; Wagner, Milan N.; Liu, Zhen'an; Zhao, Jingzhou

    2017-06-01

    A novel FPGA-based online tracking algorithm for helix track reconstruction in a solenoidal field, developed for the PANDA spectrometer, is described. Employing the Straw Tube Tracker detector with 4636 straw tubes, the algorithm includes a complex track finder and a track fitter. Implemented in VHDL, the algorithm is tested on a Xilinx Virtex-4 FX60 FPGA chip with different types of events, at different event rates. A processing time of 7 microseconds per event for an average of 6 charged tracks is obtained. The momentum resolution is about 3% (4%) for pt (pz) at 1 GeV/c. Compared to the algorithm running on a CPU chip (single-core Intel Xeon E5520 at 2.26 GHz), an improvement of three orders of magnitude in processing time is obtained. The algorithm can handle the severe overlapping of events that is typical for interaction rates above 10 MHz.

  17. Multi-threaded Sparse Matrix Sparse Matrix Multiplication for Many-Core and GPU Architectures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deveci, Mehmet; Trott, Christian Robert; Rajamanickam, Sivasankaran

    Sparse matrix-matrix multiplication is a key kernel that has applications in several domains, such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.
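
    The accumulator question discussed above can be illustrated with a tiny row-by-row (Gustavson-style) sparse matrix-matrix product in Python using a dictionary accumulator; this is a sketch of the idea only, not the Kokkos/kkSpGEMM kernels.

        # Row-wise (Gustavson-style) SpGEMM with a dictionary accumulator per row.
        # Illustration of the accumulator idea only; not the kkSpGEMM implementation.
        from scipy.sparse import csr_matrix, random as sparse_random

        def spgemm(A, B):
            """Multiply two CSR matrices row by row, accumulating into a dict per row."""
            rows, cols, vals = [], [], []
            for i in range(A.shape[0]):
                accumulator = {}
                for a_ptr in range(A.indptr[i], A.indptr[i + 1]):
                    k, a_val = A.indices[a_ptr], A.data[a_ptr]
                    for b_ptr in range(B.indptr[k], B.indptr[k + 1]):
                        j, b_val = B.indices[b_ptr], B.data[b_ptr]
                        accumulator[j] = accumulator.get(j, 0.0) + a_val * b_val
                for j, val in accumulator.items():
                    rows.append(i); cols.append(j); vals.append(val)
            return csr_matrix((vals, (rows, cols)), shape=(A.shape[0], B.shape[1]))

        if __name__ == "__main__":
            A = sparse_random(50, 40, density=0.1, format="csr", random_state=0)
            B = sparse_random(40, 60, density=0.1, format="csr", random_state=1)
            C = spgemm(A, B)
            print("max abs difference vs scipy:", abs((C - A @ B).toarray()).max())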

  18. Multi-threaded Sparse Matrix-Matrix Multiplication for Many-Core and GPU Architectures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deveci, Mehmet; Rajamanickam, Sivasankaran; Trott, Christian Robert

    Sparse matrix-matrix multiplication is a key kernel that has applications in several domains, such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.

  19. Cloud detection algorithm comparison and validation for operational Landsat data products

    USGS Publications Warehouse

    Foga, Steven Curtis; Scaramuzza, Pat; Guo, Song; Zhu, Zhe; Dilley, Ronald; Beckmann, Tim; Schmidt, Gail L.; Dwyer, John L.; Hughes, MJ; Laue, Brady

    2017-01-01

    Clouds are a pervasive and unavoidable issue in satellite-borne optical imagery. Accurate, well-documented, and automated cloud detection algorithms are necessary to effectively leverage large collections of remotely sensed data. The Landsat project is uniquely suited for comparative validation of cloud assessment algorithms because the modular architecture of the Landsat ground system allows for quick evaluation of new code, and because Landsat has the most comprehensive manual truth masks of any current satellite data archive. Currently, the Landsat Level-1 Product Generation System (LPGS) uses separate algorithms for determining clouds, cirrus clouds, and snow and/or ice probability on a per-pixel basis. With more bands onboard the Landsat 8 Operational Land Imager (OLI)/Thermal Infrared Sensor (TIRS) satellite, and a greater number of cloud masking algorithms, the U.S. Geological Survey (USGS) is replacing the current cloud masking workflow with a more robust algorithm that is capable of working across multiple Landsat sensors with minimal modification. Because of the inherent error from stray light and intermittent data availability of TIRS, these algorithms need to operate both with and without thermal data. In this study, we created a workflow to evaluate cloud and cloud shadow masking algorithms using cloud validation masks manually derived from both Landsat 7 Enhanced Thematic Mapper Plus (ETM +) and Landsat 8 OLI/TIRS data. We created a new validation dataset consisting of 96 Landsat 8 scenes, representing different biomes and proportions of cloud cover. We evaluated algorithm performance by overall accuracy, omission error, and commission error for both cloud and cloud shadow. We found that CFMask, C code based on the Function of Mask (Fmask) algorithm, and its confidence bands have the best overall accuracy among the many algorithms tested using our validation data. The Artificial Thermal-Automated Cloud Cover Algorithm (AT-ACCA) is the most accurate nonthermal-based algorithm. We give preference to CFMask for operational cloud and cloud shadow detection, as it is derived from a priori knowledge of physical phenomena and is operable without geographic restriction, making it useful for current and future land imaging missions without having to be retrained in a machine-learning environment.

  20. ASTER cloud coverage reassessment using MODIS cloud mask products

    NASA Astrophysics Data System (ADS)

    Tonooka, Hideyuki; Omagari, Kunjuro; Yamamoto, Hirokazu; Tachikawa, Tetsushi; Fujita, Masaru; Paitaer, Zaoreguli

    2010-10-01

    In the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Project, two kinds of algorithms are used for cloud assessment in Level-1 processing. The first algorithm, based on the LANDSAT-5 TM Automatic Cloud Cover Assessment (ACCA) algorithm, is used for the subset of daytime scenes observed with only VNIR bands and for all nighttime scenes, and the second algorithm, based on the LANDSAT-7 ETM+ ACCA algorithm, is used for most daytime scenes observed with all spectral bands. However, the first algorithm does not work well owing to the lack of some spectral bands sensitive to cloud detection, and the two algorithms have been less accurate over snow/ice covered areas since April 2008, when the SWIR subsystem developed problems. In addition, they perform less well for some combinations of surface type and sun elevation angle. We have therefore developed an ASTER cloud coverage reassessment system using MODIS cloud mask (MOD35) products, and have reassessed cloud coverage for all ASTER archived scenes (>1.7 million scenes). All of the new cloud coverage data are included in the Image Management System (IMS) databases of the ASTER Ground Data System (GDS) and NASA's Land Processes Distributed Active Archive Center (LP DAAC) and used for ASTER product searches by users, and cloud mask images are distributed to users through the Internet. Daily upcoming scenes (about 400 scenes per day) are reassessed and inserted into the IMS databases within 5 to 7 days after each scene observation date. Some validation studies for the new cloud coverage data and some mission-related analyses using those data are also presented in this paper.

  1. Fast and Accurate Support Vector Machines on Large Scale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vishnu, Abhinav; Narasimhan, Jayenthi; Holder, Larry

    Support Vector Machines (SVM) is a supervised Machine Learning and Data Mining (MLDM) algorithm which has become ubiquitous largely due to its high accuracy and obliviousness to dimensionality. The objective of SVM is to find an optimal boundary, also known as a hyperplane, which separates the samples (examples in a dataset) of different classes by a maximum margin. Usually, very few samples contribute to the definition of the boundary. However, existing parallel algorithms use the entire dataset for finding the boundary, which is sub-optimal for performance reasons. In this paper, we propose a novel distributed memory algorithm to eliminate the samples which do not contribute to the boundary definition in SVM. We propose several heuristics, ranging from early (aggressive) to late (conservative) elimination of samples, such that the overall time for generating the boundary is reduced considerably. In a few cases, a sample may be eliminated (shrunk) pre-emptively, potentially resulting in an incorrect boundary. We propose a scalable approach to synchronize the necessary data structures such that the proposed algorithm maintains its accuracy. We consider the necessary trade-offs of single/multiple synchronization using in-depth time-space complexity analysis. We implement the proposed algorithm using MPI and compare it with libsvm, the de facto sequential SVM software, which we enhance with OpenMP for multi-core/many-core parallelism. Our proposed approach shows excellent efficiency using up to 4096 processes on several large datasets such as the UCI HIGGS Boson dataset and the Offending URL dataset.

  2. The Data Transfer Kit: A geometric rendezvous-based tool for multiphysics data transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slattery, S. R.; Wilson, P. P. H.; Pawlowski, R. P.

    2013-07-01

    The Data Transfer Kit (DTK) is a software library designed to provide parallel data transfer services for arbitrary physics components based on the concept of geometric rendezvous. The rendezvous algorithm provides a means to geometrically correlate two geometric domains that may be arbitrarily decomposed in a parallel simulation. By repartitioning both domains such that they have the same geometric domain on each parallel process, efficient and load-balanced search operations and data transfer can be performed at a desirable algorithmic time complexity with low communication overhead relative to other types of mapping algorithms. With the increased development efforts in multiphysics simulation and other multiple mesh and geometry problems, generating parallel topology maps for transferring fields and other data between geometric domains is a common operation. The algorithms used to generate parallel topology maps based on the concept of geometric rendezvous as implemented in DTK are described with an example using a conjugate heat transfer calculation and thermal coupling with a neutronics code. In addition, we provide the results of initial scaling studies performed on the Jaguar Cray XK6 system at Oak Ridge National Laboratory for a worst-case-scenario problem in terms of algorithmic complexity, which show good scaling on O(1 x 10^4) cores for topology map generation and excellent scaling on O(1 x 10^5) cores for the data transfer operation with meshes of O(1 x 10^9) elements. (authors)

  3. The assessment of EUMETSAT HSAF Snow Products for mountainous areas in the eastern part of Turkey

    NASA Astrophysics Data System (ADS)

    Akyurek, Z.; Surer, S.; Beser, O.; Bolat, K.; Erturk, A. G.

    2012-04-01

    Monitoring snow parameters (e.g. snow cover area, snow water equivalent) is a challenging task. Because of its physical properties, snow strongly affects the evolution of the atmosphere, from weather on a daily basis to climate on longer time scales. The derivation of snow products over mountainous regions has been considered very challenging. It requires periodic and precise mapping of the snow cover, but inaccessibility and the scarcity of ground observations limit snow cover mapping in mountainous areas. Today, it is carried out operationally by means of optical satellite imagery and microwave radiometry. Retrieving the snow cover area from satellite images brings the problem of topographic variation within the footprint of satellite sensors and of the spatial and temporal variation of snow characteristics in mountainous areas. Most global and regional operational snow products use generic algorithms for flat and mountainous areas. However, the non-uniformity of snow characteristics can only be modeled with different algorithms for mountain and flat areas. In this study the early findings of the Satellite Application Facility on Hydrology (H-SAF) project, which is financially supported by EUMETSAT, will be presented. Turkey is a part of the H-SAF project, both in product generation (e.g. snow recognition, fractional snow cover and snow water equivalent) for mountainous regions for the whole of Europe, and in cal/val of satellite-derived snow products against ground observations and cal/val studies with hydrological modeling in the mountainous terrain of Europe. All the snow products are operational on a daily basis. For the snow recognition product (H10) for mountainous areas, spectral thresholding methods were applied at the sub-pixel scale of MSG-SEVIRI images. The different spectral characteristics of cloud, snow and land determined the structure of the algorithm, and these characteristics were obtained from subjective classification of known snow cover features in the MSG/SEVIRI images. The fractional snow cover (H12) algorithm is based on a sub-pixel reflectance model applied to METOP-AVHRR data. Given the effects of topography on satellite-measured radiances over rough terrain, the sun zenith and azimuth angles, as well as the direction of observation relative to these, are taken into account in estimating the target reflectances from the satellite images. The values of the SWE product (H13) were obtained using an assimilation process based on the Helsinki University of Technology model using Advanced Microwave Scanning Radiometer for EOS (AMSR-E) daily brightness-temperature values. Validation studies for the three products have been performed for the water years 2010 and 2011. Average values of 70% probability of detection for the snow recognition product, 60% overall accuracy for the fractional snow cover product and 45 mm RMSE for the snow water equivalent product have been obtained from the validation studies. Final versions of these three products will be presented and discussed. Key words: snow, satellite images, mountain, HSAF, snow cover, snow water equivalent

  4. Update to core reporting practices in structural equation modeling.

    PubMed

    Schreiber, James B

    This paper is a technical update to "Core Reporting Practices in Structural Equation Modeling" [1]. As such, the content covered in this paper includes sample size, missing data, specification and identification of models, estimation method choices, fit and residual concerns, nested, alternative, and equivalent models, and unique issues within the SEM family of techniques.

  5. Documentation to the 2015-16 Common Core of Data (CCD) Universe Files. NCES 2017-074

    ERIC Educational Resources Information Center

    Glander, Mark

    2017-01-01

    The Common Core of Data (CCD) is a national statistical program that collects and compiles administrative data from SEAs covering the universe of all public elementary and secondary schools and school districts in the United States. The first CCD collection was for SY 1986-87. The predecessor to CCD was the Elementary and Secondary General…

  6. Geostatistical analysis and isoscape of ice core derived water stable isotope records in an Antarctic macro region

    NASA Astrophysics Data System (ADS)

    Hatvani, István Gábor; Leuenberger, Markus; Kohán, Balázs; Kern, Zoltán

    2017-09-01

    Water stable isotopes preserved in ice cores provide essential information about polar precipitation. In the present study, multivariate regression and variogram analyses were conducted on 22 δ2H and 53 δ18O records from 60 ice cores covering the second half of the 20th century. Taking into account the multicollinearity of the explanatory variables, as well as the model's adjusted R2 and its mean absolute error, longitude, elevation and distance from the coast were found to be the main independent geographical factors governing the spatial δ18O variability of firn/ice in the chosen Antarctic macro region. After diminishing the effects of these factors, the weights for interpolation with kriging were obtained using variography, and the spatial autocorrelation structure of the dataset was revealed. This indicates an average area of influence with a radius of 350 km, which allows the determination of the areas that are not yet covered by the spatial variability of the existing network of ice cores. Finally, the regional isoscape was obtained for the study area, and this may be considered the first step towards a geostatistically improved isoscape for Antarctica.
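
    The variography step described above amounts to estimating a semivariogram from the detrended residuals; a bare-bones Python sketch (with random points standing in for the ice-core locations) is given below.

        # Bare-bones empirical semivariogram of detrended residuals; random points stand
        # in for ice-core locations, so the numbers are purely illustrative.
        import numpy as np

        def empirical_semivariogram(coords, values, bin_edges):
            """gamma(h) = 0.5 * mean((z_i - z_j)^2) over point pairs binned by distance."""
            dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            sqdiff = 0.5 * (values[:, None] - values[None, :]) ** 2
            iu = np.triu_indices(len(values), k=1)          # each pair counted once
            dists, sqdiff = dists[iu], sqdiff[iu]
            gamma = []
            for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
                in_bin = (dists >= lo) & (dists < hi)
                gamma.append(sqdiff[in_bin].mean() if in_bin.any() else np.nan)
            return np.array(gamma)

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            coords = rng.uniform(0, 1000, size=(60, 2))     # km, hypothetical core sites
            values = rng.normal(size=60)                    # detrended d18O residuals
            bins = np.linspace(0, 700, 8)
            print(np.round(empirical_semivariogram(coords, values, bins), 3))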

  7. Design of space-type electronic power transformers

    NASA Technical Reports Server (NTRS)

    Ahearn, J. F.; Lagadinos, J. C.

    1977-01-01

    Both open and encapsulated varieties of high reliability, low weight, and high efficiency moderate and high voltage transformers were investigated to determine the advantages and limitations of their construction in the ranges of power and voltage required for operation in the hard vacuum environment of space. Topics covered include: (1) selection of the core material; (2) preliminary calculation of core dimensions; (3) selection of insulating materials including magnet wire insulation, coil forms, and layer and interwinding insulation; (4) coil design; (5) calculation of copper losses, core losses and efficiency; (6) calculation of temperature rise; and (7) optimization of design with changes in core selection or coil design as required to meet specifications.

  8. Analysis of the Dryden Wet Bulb GLobe Temperature Algorithm for White Sands Missile Range

    NASA Technical Reports Server (NTRS)

    LaQuay, Ryan Matthew

    2011-01-01

    In locations where the workforce is exposed to high relative humidity and light winds, heat stress is a significant concern. Such is the case at the White Sands Missile Range in New Mexico. Heat stress is characterized by the wet bulb globe temperature, which is the official measurement used by the American Conference of Governmental Industrial Hygienists. The wet bulb globe temperature is measured by an instrument that is designed to be portable and requires routine maintenance. As an alternative to direct measurement, algorithms have been created to calculate the wet bulb globe temperature from basic meteorological observations. The algorithms are location dependent; therefore a specific algorithm is usually not suitable for multiple locations. Due to climatological similarities, the algorithm developed for use at the Dryden Flight Research Center was applied to data from the White Sands Missile Range. A study was performed that compared a wet bulb globe instrument to data from two Surface Atmospheric Measurement Systems applied to the Dryden wet bulb globe temperature algorithm. The period of study was from June to September of 2009, with focus on 0900 to 1800 local time. Analysis showed that the algorithm worked well, with a few exceptions. The algorithm becomes less accurate relative to the measurement when the dew point temperature is over 10 degrees Celsius. Cloud cover also has a significant effect on the measured wet bulb globe temperature. The algorithm does not capture red and black heat stress flags well due to the shorter time scales of such events. The results of this study show that it is plausible that the Dryden Flight Research Center wet bulb globe temperature algorithm is compatible with the White Sands Missile Range, except when there are elevated dew point temperatures and cloud cover or precipitation. During such occasions, the wet bulb globe temperature instrument would be the preferred method of measurement. Of the 30 dates examined, 23 showed good accuracy.
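
    For reference, the outdoor wet bulb globe temperature itself is a fixed weighting of three component temperatures; the sketch below computes that standard combination in Python. The Dryden algorithm's estimation of the natural wet bulb and globe temperatures from routine meteorological observations is not reproduced here, and the flag thresholds shown are illustrative only.

        # Standard outdoor WBGT weighting of measured component temperatures (deg C).
        # The Dryden algorithm estimates t_nwb and t_globe from routine met data; that
        # estimation step is not reproduced here.
        def wbgt_outdoor(t_nwb, t_globe, t_dry):
            """WBGT = 0.7*T_natural_wet_bulb + 0.2*T_globe + 0.1*T_dry_bulb."""
            return 0.7 * t_nwb + 0.2 * t_globe + 0.1 * t_dry

        def heat_stress_flag(wbgt):
            """Illustrative flag thresholds only; local guidance should be used."""
            if wbgt >= 32.0:
                return "black"
            if wbgt >= 31.0:
                return "red"
            if wbgt >= 29.5:
                return "yellow"
            return "green"

        if __name__ == "__main__":
            wbgt = wbgt_outdoor(t_nwb=24.0, t_globe=45.0, t_dry=33.0)
            print(round(wbgt, 1), heat_stress_flag(wbgt))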

  9. Intratumoral heterogeneity as a source of discordance in breast cancer biomarker classification.

    PubMed

    Allott, Emma H; Geradts, Joseph; Sun, Xuezheng; Cohen, Stephanie M; Zirpoli, Gary R; Khoury, Thaer; Bshara, Wiam; Chen, Mengjie; Sherman, Mark E; Palmer, Julie R; Ambrosone, Christine B; Olshan, Andrew F; Troester, Melissa A

    2016-06-28

    Spatial heterogeneity in biomarker expression may impact breast cancer classification. The aims of this study were to estimate the frequency of spatial heterogeneity in biomarker expression within tumors, to identify technical and biological factors contributing to spatial heterogeneity, and to examine the impact of discordant biomarker status within tumors on clinical record agreement. Tissue microarrays (TMAs) were constructed using two to four cores (1.0 mm) for each of 1085 invasive breast cancers from the Carolina Breast Cancer Study, which is part of the AMBER Consortium. Immunohistochemical staining for estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) was quantified using automated digital imaging analysis. The biomarker status for each core and for each case was assigned using clinical thresholds. Cases with core-to-core biomarker discordance were manually reviewed to distinguish intratumoral biomarker heterogeneity from misclassification of biomarker status by the automated algorithm. The impact of core-to-core biomarker discordance on case-level agreement between TMAs and the clinical record was evaluated. On the basis of automated analysis, discordant biomarker status between TMA cores occurred in 9 %, 16 %, and 18 % of cases for ER, PR, and HER2, respectively. Misclassification of benign epithelium and/or ductal carcinoma in situ as invasive carcinoma by the automated algorithm was implicated in discordance among cores. However, manual review of discordant cases confirmed spatial heterogeneity as a source of discordant biomarker status between cores in 2 %, 7 %, and 8 % of cases for ER, PR, and HER2, respectively. Overall, agreement between TMA and clinical record was high for ER (94 %), PR (89 %), and HER2 (88 %), but it was reduced in cases with core-to-core discordance (agreement 70 % for ER, 61 % for PR, and 57 % for HER2). Intratumoral biomarker heterogeneity may impact breast cancer classification accuracy, with implications for clinical management. Both manually confirmed biomarker heterogeneity and misclassification of biomarker status by automated image analysis contribute to discordant biomarker status between TMA cores. Given that manually confirmed heterogeneity is uncommon (<10 % of cases), large studies are needed to study the impact of heterogeneous biomarker expression on breast cancer classification and outcomes.

  10. Cloud Detection of Optical Satellite Images Using Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lee, Kuan-Yi; Lin, Chao-Hung

    2016-06-01

    Cloud cover is generally present in optical remote-sensing images, which limits the usage of acquired images and increases the difficulty of data analysis, such as image compositing, correction of atmospheric effects, calculation of vegetation indices, land cover classification, and land cover change detection. In previous studies, thresholding has been a common and useful method for cloud detection. However, a selected threshold is usually suitable only for certain cases or local study areas, and it may fail in other cases. In other words, thresholding-based methods are data-sensitive. Moreover, there are many exceptions to handle, and the environment changes dynamically, so using the same threshold value on various data is not effective. In this study, a threshold-free method based on Support Vector Machine (SVM) is proposed, which avoids the abovementioned problems. The main idea of this study is to adopt a statistical model to detect clouds instead of a subjective thresholding-based method. The features used in a classifier are the key to a successful classification. The Automatic Cloud Cover Assessment (ACCA) algorithm, which is based on physical characteristics of clouds, is therefore used to distinguish clouds from other objects. Similarly, the Fmask algorithm (Zhu et al., 2012) uses many thresholds and criteria to screen clouds, cloud shadows, and snow. The feature extraction is therefore based on the ACCA algorithm and Fmask. Spatial and temporal information is also important for satellite images; consequently, a co-occurrence matrix and temporal variance with uniformity of the major principal axis are used in the proposed method. We aim to classify images into three groups: cloud, non-cloud, and others. In the experiments, images acquired by the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) containing agricultural landscapes, snow areas, and islands are tested. Experimental results demonstrate that the detection accuracy of the proposed method is better than that of related methods.
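
    A minimal sketch of the proposed kind of classifier, assuming scikit-learn is available: ACCA/Fmask-inspired per-pixel features feed a standardized RBF-kernel SVM. The synthetic features and labels below are placeholders for the authors' feature extraction, which is not reproduced here.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Synthetic stand-in for ACCA/Fmask-style per-pixel features, e.g. band ratios,
      # thermal brightness temperature and a temporal-variance texture measure.
      gen = np.random.default_rng(1)
      n = 600
      X = gen.normal(size=(n, 5))
      y = (X[:, 0] + 0.5 * X[:, 2] > 0.3).astype(int)     # pretend label: 1 = cloud, 0 = other

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
      clf.fit(X[:500], y[:500])                           # train on labelled pixels
      print("held-out accuracy:", clf.score(X[500:], y[500:]))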

  11. The application of dynamic programming in production planning

    NASA Astrophysics Data System (ADS)

    Wu, Run

    2017-05-01

    Nowadays, with the popularity of computers, various industries and fields widely apply computer information technology, which creates huge demand for a variety of application software. In order to develop software that meets various needs at the most economical cost and with the best quality, programmers must design efficient algorithms. A superior algorithm not only solves the problem at hand, but also maximizes the benefit and minimizes the overhead. As one of the common algorithmic techniques, dynamic programming is used to solve problems with some sort of optimal substructure. When solving problems with a large number of sub-problems that require repetitive calculation, the ordinary recursive method consumes exponential time, whereas dynamic programming can reduce the time complexity to the polynomial level. Dynamic programming is therefore very efficient compared with other approaches, reducing the computational complexity and enriching the computational results. In this paper, we expound the concept, basic elements, properties, core ideas, solving steps and difficulties of the dynamic programming algorithm, and establish a dynamic programming model of the production planning problem.
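
    A minimal sketch of the kind of production-planning dynamic program the paper establishes: choose how much to produce in each period to meet known demand while minimizing setup, unit and holding costs. The demands, capacities and costs below are illustrative assumptions, not values from the paper; memoization keeps each (period, stock) state from being recomputed, which is the polynomial-time saving the abstract refers to.

      from functools import lru_cache

      demand = [3, 2, 4, 2]                 # units required in each period (illustrative)
      MAX_STOCK = 6                         # warehouse capacity
      MAX_PROD = 5                          # production capacity per period
      SETUP, UNIT, HOLD = 10.0, 2.0, 1.0    # illustrative cost parameters

      @lru_cache(maxsize=None)
      def min_cost(period, stock):
          """Minimum cost to satisfy demand from `period` onward with `stock` on hand."""
          if period == len(demand):
              return 0.0
          best = float("inf")
          for produce in range(MAX_PROD + 1):
              end_stock = stock + produce - demand[period]
              if 0 <= end_stock <= MAX_STOCK:          # demand met, capacities respected
                  cost = (SETUP if produce else 0.0) + UNIT * produce + HOLD * end_stock
                  best = min(best, cost + min_cost(period + 1, end_stock))
          return best

      print(min_cost(0, 0))    # optimal total cost starting with empty inventory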

  12. Mapping and Assessing Variability in the Antarctic Marginal Ice Zone, the Pack Ice and Coastal Polynyas

    NASA Astrophysics Data System (ADS)

    Stroeve, Julienne; Jenouvrier, Stephanie

    2016-04-01

    Sea ice variability within the marginal ice zone (MIZ) and polynyas plays an important role for phytoplankton productivity and krill abundance. Therefore, mapping their spatial extent and their seasonal and interannual variability is essential for understanding how current and future changes in these biologically active regions may impact the Antarctic marine ecosystem. Knowledge of the contribution of different ice types to the total Antarctic sea ice cover may also help to shed light on the factors contributing towards recent expansion of the Antarctic ice cover in some regions and contraction in others. The long-term passive microwave satellite data record provides the longest and most consistent data record for assessing different ice types. However, estimates of the amount of MIZ, consolidated pack ice and polynyas depend strongly on which sea ice algorithm is used. This study uses two popular passive microwave sea ice algorithms, NASA Team and Bootstrap, to evaluate the distribution and variability of the MIZ, the consolidated pack ice and coastal polynyas. Results reveal the NASA Team algorithm has on average twice the MIZ and half the consolidated pack ice area of the Bootstrap algorithm. Polynya area is also larger in the NASA Team algorithm, and the timing of maximum polynya area may differ by as much as 5 months between algorithms. These differences lead to different relationships between sea ice characteristics and biological processes, as illustrated here with the breeding success of an Antarctic seabird.
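
    Many passive-microwave studies define the MIZ as the area with 15-80% ice concentration and consolidated pack ice as greater than 80%; the record does not state the exact thresholds used, so the values in the sketch below are the commonly used ones and should be treated as assumptions. The example shows how two algorithms' concentration fields lead to different MIZ/pack classifications.

      import numpy as np

      def classify_ice(concentration, miz_lo=0.15, pack_lo=0.80):
          """Classify a sea ice concentration field (0-1) into open water, MIZ and pack ice.

          The 15% and 80% thresholds are the values commonly used in passive-microwave
          studies; the record does not state which thresholds were applied.
          """
          classes = np.zeros_like(concentration, dtype=np.int8)                # 0 = open water
          classes[(concentration >= miz_lo) & (concentration < pack_lo)] = 1   # 1 = MIZ
          classes[concentration >= pack_lo] = 2                                # 2 = pack ice
          return classes

      # Two algorithms' concentration estimates for the same grid cells give different classes:
      sic_nasa_team = np.array([0.10, 0.40, 0.70, 0.95])
      sic_bootstrap = np.array([0.10, 0.55, 0.85, 0.97])
      print(classify_ice(sic_nasa_team), classify_ice(sic_bootstrap))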

  13. iPTF Discoveries of Recent Core-Collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Taddia, F.; Ferretti, R.; Papadogiannakis, S.; Petrushevska, T.; Fremling, C.; Karamehmetoglu, E.; Nyholm, A.; Roy, R.; Hangard, L.; Horesh, A.; Khazov, D.; Knezevic, S.; Johansson, J.; Leloudas, G.; Manulis, I.; Rubin, A.; Soumagnac, M.; Vreeswijk, P.; Yaron, O.; Bar, I.; Cao, Y.; Kulkarni, S.; Blagorodnova, N.

    2016-05-01

    The intermediate Palomar Transient Factory (ATel #4807) reports the discovery and classification of the following core-collapse SNe. Our automated candidate vetting to distinguish a real astrophysical source (1.0) from bogus artifacts (0.0) is powered by three generations of machine learning algorithms: RB2 (Brink et al. 2013MNRAS.435.1047B), RB4 (Rebbapragada et al. 2015AAS...22543402R) and RB5 (Wozniak et al. 2013AAS...22143105W).

  14. iPTF Discoveries of Recent Core-Collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Taddia, F.; Ferretti, R.; Fremling, C.; Karamehmetoglu, E.; Nyholm, A.; Papadogiannakis, S.; Petrushevska, T.; Roy, R.; Hangard, L.; De Cia, A.; Vreeswijk, P.; Horesh, A.; Manulis, I.; Sagiv, I.; Rubin, A.; Yaron, O.; Leloudas, G.; Khazov, D.; Soumagnac, M.; Bilgi, P.

    2015-04-01

    The intermediate Palomar Transient Factory (ATel #4807) reports the discovery and classification of the following Core-Collapse SNe. Our automated candidate vetting to distinguish a real astrophysical source (1.0) from bogus artifacts (0.0) is powered by three generations of machine learning algorithms: RB2 (Brink et al. 2013MNRAS.435.1047B), RB4 (Rebbapragada et al. 2015AAS...22543402R) and RB5 (Wozniak et al. 2013AAS...22143105W).

  15. iPTF Discoveries of Recent Core-Collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Taddia, F.; Ferretti, R.; Fremling, C.; Karamehmetoglu, E.; Nyholm, A.; Papadogiannakis, S.; Petrushevska, T.; Roy, R.; Hangard, L.; Vreeswijk, P.; Horesh, A.; Manulis, I.; Rubin, A.; Yaron, O.; Leloudas, G.; Khazov, D.; Soumagnac, M.; Knezevic, S.; Johansson, J.; Duggan, G.; Lunnan, R.; Cao, Y.

    2015-09-01

    The intermediate Palomar Transient Factory (ATel #4807) reports the discovery and classification of the following Core-Collapse SNe. Our automated candidate vetting to distinguish a real astrophysical source (1.0) from bogus artifacts (0.0) is powered by three generations of machine learning algorithms: RB2 (Brink et al. 2013MNRAS.435.1047B), RB4 (Rebbapragada et al. 2015AAS...22543402R) and RB5 (Wozniak et al. 2013AAS...22143105W).

  16. iPTF Discoveries of Recent Core-Collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Taddia, F.; Ferretti, R.; Fremling, C.; Karamehmetoglu, E.; Nyholm, A.; Papadogiannakis, S.; Petrushevska, T.; Roy, R.; Hangard, L.; Vreeswijk, P.; Horesh, A.; Manulis, I.; Rubin, A.; Yaron, O.; Leloudas, G.; Khazov, D.; Soumagnac, M.; Knezevic, S.; Johansson, J.; Lunnan, R.; Cao, Y.; Miller, A.

    2015-11-01

    The intermediate Palomar Transient Factory (ATel #4807) reports the discovery and classification of the following Core-Collapse SNe. Our automated candidate vetting to distinguish a real astrophysical source (1.0) from bogus artifacts (0.0) is powered by three generations of machine learning algorithms: RB2 (Brink et al. 2013MNRAS.435.1047B), RB4 (Rebbapragada et al. 2015AAS...22543402R) and RB5 (Wozniak et al. 2013AAS...22143105W).

  17. Mapping Forest Edge Using Aerial Lidar

    NASA Astrophysics Data System (ADS)

    MacLean, M. G.

    2014-12-01

    Slightly more than 60% of Massachusetts is covered with forest; this land cover type is invaluable for the protection and maintenance of natural resources and serves as a carbon sink for the state. However, Massachusetts is currently experiencing a decline in forested lands, primarily due to the expansion of human development (Thompson et al., 2011). Of particular concern is the loss of "core areas", or the areas within forests that are not influenced by other land cover types. These areas are of significant importance to native flora and fauna, since they generally are not subject to invasion by exotic species and are more resilient to the effects of climate change (Campbell et al., 2009). The expansion of development has reduced the amount of this core area, but the exact amount lost is still unknown. Current methods of estimating core area are not particularly precise, since edge, or the portion of the forest most influenced by other land cover types, is quite variable and situation dependent. Therefore, the purpose of this study is to devise a new method for identifying areas that could qualify as "edge" within the Harvard Forest, in Petersham, MA, using new remote sensing techniques. We sampled along eight transects perpendicular to the edge of an abandoned golf course within the Harvard Forest property. Vegetation inventories as well as Photosynthetically Active Radiation (PAR) measurements at different heights within the canopy were used to determine edge depth. These measurements were then compared with small-footprint waveform aerial LiDAR datasets and imagery to model edge depths within Harvard Forest.

  18. The Mars Science Laboratory Entry, Descent, and Landing Flight Software

    NASA Technical Reports Server (NTRS)

    Gostelow, Kim P.

    2013-01-01

    This paper describes the design, development, and testing of the EDL program from the perspective of the software engineer. We briefly cover the overall MSL flight software organization, and then the organization of EDL itself. We discuss the timeline, the structure of the GNC code (but not the algorithms as they are covered elsewhere in this conference) and the command and telemetry interfaces. Finally, we cover testing and the influence that testability had on the EDL flight software design.

  19. The computational core and fixed point organization in Boolean networks

    NASA Astrophysics Data System (ADS)

    Correale, L.; Leone, M.; Pagnani, A.; Weigt, M.; Zecchina, R.

    2006-03-01

    In this paper, we analyse large random Boolean networks in terms of a constraint satisfaction problem. We first develop an algorithmic scheme which allows us to prune simple logical cascades and underdetermined variables, returning thereby the computational core of the network. Second, we apply the cavity method to analyse the number and organization of fixed points. We find in particular a phase transition between an easy and a complex regulatory phase, the latter being characterized by the existence of an exponential number of macroscopically separated fixed point clusters. The different techniques developed are reinterpreted as algorithms for the analysis of single Boolean networks, and they are applied in the analysis of and in silico experiments on the gene regulatory networks of baker's yeast (Saccharomyces cerevisiae) and the segment-polarity genes of the fruitfly Drosophila melanogaster.
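
    A much-simplified reading of the pruning step, not the paper's exact scheme: iteratively remove nodes whose Boolean function is constant (forced) and nodes that no other function reads, until nothing changes; the surviving nodes stand in for the computational core. Propagation of the constants into successor truth tables is omitted for brevity.

      def prune_core(functions):
          """Iteratively strip forced and output-only nodes from a Boolean network.

          `functions` maps node -> (inputs tuple, truth table over input bit-tuples).
          This is a simplified illustration, not the paper's algorithmic scheme.
          """
          funcs = dict(functions)
          changed = True
          while changed:
              changed = False
              used = {v for (ins, _) in funcs.values() for v in ins}
              for node in list(funcs):
                  ins, table = funcs[node]
                  if len(set(table.values())) <= 1 or node not in used:
                      # constant (forced) node, or a leaf no other node reads: prune it
                      del funcs[node]
                      changed = True
          return funcs          # surviving nodes approximate the computational core

      # Tiny example: c is constant, b is read by nothing once c is gone, so the core is {a}
      net = {
          "a": (("a",), {(0,): 1, (1,): 0}),
          "b": (("a",), {(0,): 0, (1,): 1}),
          "c": (("b",), {(0,): 1, (1,): 1}),
      }
      print(sorted(prune_core(net)))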

  20. Hiding the Disk and Network Latency of Out-of-Core Visualization

    NASA Technical Reports Server (NTRS)

    Ellsworth, David

    2001-01-01

    This paper describes an algorithm that improves the performance of application-controlled demand paging for out-of-core visualization by hiding the latency of reading data from both local disks and disks on remote servers. The performance improvements come from better overlapping the computation with the page reading process, and from performing multiple page reads in parallel. The paper includes measurements showing that the new multithreaded paging algorithm decreases the time needed to compute visualizations by one third when using one processor and reading data from local disk. The time needed when using one processor and reading data from remote disk decreased by two thirds. Visualization runs using data from remote disk actually ran faster than ones using data from local disk because the remote runs were able to make use of the remote server's high-performance disk array.
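
    A minimal sketch of the general idea, assuming Python's standard thread pool: page reads are issued ahead of use so that computation overlaps disk or network latency. The file names and page contents below are placeholders; this is not the paper's paging implementation.

      import os, tempfile
      from concurrent.futures import ThreadPoolExecutor

      def read_page(path):
          """Blocking read of one data page (a local file here; could be a remote fetch)."""
          with open(path, "rb") as f:
              return f.read()

      def process_page(data):
          return len(data)          # stand-in for the actual visualization computation

      def visualize(page_order, n_readers=4):
          """Overlap page reads with computation by prefetching on a thread pool."""
          total = 0
          with ThreadPoolExecutor(max_workers=n_readers) as pool:
              futures = [pool.submit(read_page, p) for p in page_order]   # reads issued ahead of use
              for fut in futures:
                  total += process_page(fut.result())   # compute while later pages keep loading
          return total

      # Demo with temporary files standing in for out-of-core data pages
      tmp = tempfile.mkdtemp()
      pages = []
      for i in range(8):
          p = os.path.join(tmp, f"page_{i:04d}.bin")
          with open(p, "wb") as f:
              f.write(bytes(1024))
          pages.append(p)
      print(visualize(pages))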

  1. Shared protection based virtual network mapping in space division multiplexing optical networks

    NASA Astrophysics Data System (ADS)

    Zhang, Huibin; Wang, Wei; Zhao, Yongli; Zhang, Jie

    2018-05-01

    Space Division Multiplexing (SDM) has been introduced to improve the capacity of optical networks. In SDM optical networks, there are multiple cores/modes in each fiber link, and spectrum resources are multiplexed in both the frequency and the core/mode dimensions. Enabled by network virtualization technology, one SDM optical network substrate can be shared by several virtual network operators. As with point-to-point connection services, virtual networks (VNs) also need a certain level of survivability to guard against network failures. Based on customers' heterogeneous requirements on the survivability of their virtual networks, this paper studies the shared-protection-based VN mapping problem and proposes a Minimum Free Frequency Slots (MFFS) mapping algorithm to improve spectrum efficiency. Simulation results show that the proposed algorithm can optimize SDM optical networks significantly in terms of blocking probability and spectrum utilization.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorentla Venkata, Manjunath; Graham, Richard L; Ladd, Joshua S

    This paper describes the design and implementation of InfiniBand (IB) CORE-Direct based blocking and nonblocking broadcast operations within the Cheetah collective operation framework. It describes a novel approach that fully offloads collective operations and employs only user-supplied buffers. For a 64-rank communicator, the latency of the CORE-Direct based hierarchical algorithm is better than production-grade Message Passing Interface (MPI) implementations: 150% better than the default Open MPI algorithm and 115% better than the shared-memory optimized MVAPICH implementation for a one kilobyte (KB) message, and for eight megabytes (MB) it is 48% and 64% better, respectively. Flat-topology broadcast achieves 99.9% overlap in a polling-based communication-computation test, and 95.1% overlap for a wait-based test, compared with 92.4% and 17.0%, respectively, for a similar Central Processing Unit (CPU) based implementation.

  3. Bayesian network representing system dynamics in risk analysis of nuclear systems

    NASA Astrophysics Data System (ADS)

    Varuttamaseni, Athi

    2011-12-01

    A dynamic Bayesian network (DBN) model is used in conjunction with the alternating conditional expectation (ACE) regression method to analyze the risk associated with the loss of feedwater accident coupled with a subsequent initiation of the feed and bleed operation in the Zion-1 nuclear power plant. The use of the DBN allows the joint probability distribution to be factorized, enabling the analysis to be done on many simpler network structures rather than on one complicated structure. The construction of the DBN model assumes conditional independence relations among certain key reactor parameters. The choice of parameters to model is based on considerations of the macroscopic balance statements governing the behavior of the reactor under a quasi-static assumption. The DBN is used to relate the peak clad temperature to a set of independent variables that are known to be important in determining the success of the feed and bleed operation. A simple linear relationship is then used to relate the clad temperature to the core damage probability. To obtain a quantitative relationship among different nodes in the DBN, surrogates of the RELAP5 reactor transient analysis code are used. These surrogates are generated by applying the ACE algorithm to output data obtained from about 50 RELAP5 cases covering a wide range of the selected independent variables. These surrogates allow important safety parameters such as the fuel clad temperature to be expressed as a function of key reactor parameters such as the coolant temperature and pressure together with important independent variables such as the scram delay time. The time-dependent core damage probability is calculated by sampling the independent variables from their probability distributions and propagating the information up through the Bayesian network to give the clad temperature. With the knowledge of the clad temperature and the assumption that the core damage probability has a one-to-one relationship to it, the core damage probability is calculated as a function of transient time. The use of the DBN model in combination with ACE allows risk analysis to be performed with much less effort than if the analysis were done using standard techniques.
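
    A minimal sketch of the final sampling step described above: draw the independent variables from assumed distributions, push them through a surrogate for peak clad temperature, and map the result to a core damage probability with a simple ramp. The surrogate, the distributions and the damage thresholds below are placeholders, not the ACE-derived RELAP5 surrogates or the study's actual relationship.

      import numpy as np

      rng = np.random.default_rng(42)

      def clad_temp_surrogate(scram_delay, coolant_temp, coolant_press):
          """Placeholder surrogate for peak clad temperature (K); the real study uses
          ACE-derived surrogates of RELAP5 output, which are not reproduced here."""
          return (600.0 + 8.0 * scram_delay + 0.5 * (coolant_temp - 560.0)
                  - 0.02 * (coolant_press - 15.0e6) / 1e3)

      def core_damage_prob(clad_temp, t_low=1200.0, t_high=1477.0):
          """Simple linear ramp between no-damage and certain-damage temperatures (assumed)."""
          return np.clip((clad_temp - t_low) / (t_high - t_low), 0.0, 1.0)

      n = 100_000
      scram_delay = rng.normal(60.0, 20.0, n)            # s   (assumed distribution)
      coolant_temp = rng.normal(565.0, 5.0, n)           # K   (assumed distribution)
      coolant_press = rng.normal(15.5e6, 0.3e6, n)       # Pa  (assumed distribution)

      temps = clad_temp_surrogate(scram_delay, coolant_temp, coolant_press)
      print("mean core damage probability:", float(core_damage_prob(temps).mean()))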

  4. Weighted compactness function based label propagation algorithm for community detection

    NASA Astrophysics Data System (ADS)

    Zhang, Weitong; Zhang, Rui; Shang, Ronghua; Jiao, Licheng

    2018-02-01

    Community detection in complex networks aims to detect community structure in which the internal connections are relatively compact and the external connections are relatively sparse, according to the topological relationships among the nodes in the network. In this paper, we propose a compactness function that incorporates node weights and use it as the objective function for node label propagation. Firstly, according to node degree, we find the sets of core nodes that have great influence on the network: the more connections the core nodes have to other nodes, the more information these core nodes receive and transmit. Then, according to the similarity between nodes and the core node sets, together with node degree, we assign weights to the nodes in the network, so that the labels of highly influential nodes take priority in the label propagation process, which effectively improves the accuracy of the label propagation. The compactness function between nodes and communities in this paper is based on node influence. It combines the connections between nodes and communities with the degree to which a node belongs to its neighboring communities, based on the calculated node weights. The function effectively uses the information of nodes and connections in the network. The experimental results show that the proposed algorithm achieves good results on artificial networks and large-scale real networks compared with 8 contrast algorithms.
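
    A minimal sketch of weight-aware label propagation in the spirit of the abstract: influential (high-weight) nodes update first and neighbour votes are weighted. Node degree stands in for the paper's compactness-based weights, which are not reproduced here; on very small graphs label propagation may merge communities.

      import random
      from collections import defaultdict

      def weighted_label_propagation(adj, weights, max_iter=50, seed=0):
          """Label propagation where high-weight (influential) nodes update first and
          neighbour votes are weighted. `adj`: node -> set of neighbours,
          `weights`: node -> influence weight (a stand-in for the compactness function)."""
          rng = random.Random(seed)
          labels = {v: v for v in adj}                      # every node starts in its own community
          order = sorted(adj, key=lambda v: -weights[v])    # influential nodes propagate first
          for _ in range(max_iter):
              changed = False
              for v in order:
                  votes = defaultdict(float)
                  for u in adj[v]:
                      votes[labels[u]] += weights[u]
                  if votes:
                      best = max(votes.values())
                      new = rng.choice([lab for lab, w in votes.items() if w == best])
                      if new != labels[v]:
                          labels[v], changed = new, True
              if not changed:
                  break
          return labels

      # Toy graph: two triangles joined by an edge; prints the final label of each node
      adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
      weights = {v: len(nbrs) for v, nbrs in adj.items()}
      print(weighted_label_propagation(adj, weights))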

  5. Bioinactivation: Software for modelling dynamic microbial inactivation.

    PubMed

    Garre, Alberto; Fernández, Pablo S; Lindqvist, Roland; Egea, Jose A

    2017-03-01

    This contribution presents the bioinactivation software, which implements functions for the modelling of isothermal and non-isothermal microbial inactivation. This software offers features such as user-friendliness, modelling of dynamic conditions, possibility to choose the fitting algorithm and generation of prediction intervals. The software is offered in two different formats: Bioinactivation core and Bioinactivation SE. Bioinactivation core is a package for the R programming language, which includes features for the generation of predictions and for the fitting of models to inactivation experiments using non-linear regression or a Markov Chain Monte Carlo algorithm (MCMC). The calculations are based on inactivation models common in academia and industry (Bigelow, Peleg, Mafart and Geeraerd). Bioinactivation SE supplies a user-friendly interface to selected functions of Bioinactivation core, namely the model fitting of non-isothermal experiments and the generation of prediction intervals. The capabilities of bioinactivation are presented in this paper through a case study, modelling the non-isothermal inactivation of Bacillus sporothermodurans. This study has provided a full characterization of the response of the bacteria to dynamic temperature conditions, including confidence intervals for the model parameters and a prediction interval of the survivor curve. We conclude that the MCMC algorithm produces a better characterization of the biological uncertainty and variability than non-linear regression. The bioinactivation software can be relevant to the food and pharmaceutical industry, as well as to regulatory agencies, as part of a (quantitative) microbial risk assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
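
    Bioinactivation is an R package; the snippet below is not that package but a minimal Python sketch of fitting the isothermal Bigelow log-linear model, one of the primary models the software supports, to illustrative survival data.

      import numpy as np
      from scipy.optimize import curve_fit

      def bigelow_log_survivors(t, log_n0, D):
          """Isothermal Bigelow model: log10 N(t) = log10 N0 - t / D."""
          return log_n0 - t / D

      # Illustrative isothermal survival data (time in min, log10 CFU/mL), not from the paper
      t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
      log_n = np.array([6.1, 5.4, 4.8, 4.1, 3.3, 2.6])

      params, cov = curve_fit(bigelow_log_survivors, t, log_n, p0=[6.0, 3.0])
      log_n0_hat, D_hat = params
      stderr = np.sqrt(np.diag(cov))
      print(f"log10 N0 = {log_n0_hat:.2f} +/- {stderr[0]:.2f}, D = {D_hat:.2f} min +/- {stderr[1]:.2f}")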

  6. Firefly Mating Algorithm for Continuous Optimization Problems

    PubMed Central

    Ritthipakdee, Amarita; Premasathian, Nol; Jitkongchuen, Duangjai

    2017-01-01

    This paper proposes a swarm intelligence algorithm, called firefly mating algorithm (FMA), for solving continuous optimization problems. FMA uses genetic algorithm as the core of the algorithm. The main feature of the algorithm is a novel mating pair selection method which is inspired by the following 2 mating behaviors of fireflies in nature: (i) the mutual attraction between males and females causes them to mate and (ii) fireflies of both sexes are of the multiple-mating type, mating with multiple opposite sex partners. A female continues mating until her spermatheca becomes full, and, in the same vein, a male can provide sperms for several females until his sperm reservoir is depleted. This new feature enhances the global convergence capability of the algorithm. The performance of FMA was tested with 20 benchmark functions (sixteen 30-dimensional functions and four 2-dimensional ones) against FA, ALC-PSO, COA, MCPSO, LWGSODE, MPSODDS, DFOA, SHPSOS, LSA, MPDPGA, DE, and GABC algorithms. The experimental results showed that the success rates of our proposed algorithm with these functions were higher than those of other algorithms and the proposed algorithm also required fewer numbers of iterations to reach the global optima. PMID:28808442

  7. Firefly Mating Algorithm for Continuous Optimization Problems.

    PubMed

    Ritthipakdee, Amarita; Thammano, Arit; Premasathian, Nol; Jitkongchuen, Duangjai

    2017-01-01

    This paper proposes a swarm intelligence algorithm, called firefly mating algorithm (FMA), for solving continuous optimization problems. FMA uses genetic algorithm as the core of the algorithm. The main feature of the algorithm is a novel mating pair selection method which is inspired by the following 2 mating behaviors of fireflies in nature: (i) the mutual attraction between males and females causes them to mate and (ii) fireflies of both sexes are of the multiple-mating type, mating with multiple opposite sex partners. A female continues mating until her spermatheca becomes full, and, in the same vein, a male can provide sperms for several females until his sperm reservoir is depleted. This new feature enhances the global convergence capability of the algorithm. The performance of FMA was tested with 20 benchmark functions (sixteen 30-dimensional functions and four 2-dimensional ones) against FA, ALC-PSO, COA, MCPSO, LWGSODE, MPSODDS, DFOA, SHPSOS, LSA, MPDPGA, DE, and GABC algorithms. The experimental results showed that the success rates of our proposed algorithm with these functions were higher than those of other algorithms and the proposed algorithm also required fewer numbers of iterations to reach the global optima.
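
    A heavily simplified sketch of the mating-pair selection idea described in the two records above, layered on a plain genetic algorithm: females mate with the most attractive males until their spermatheca is full, males have a finite sperm reservoir, and offspring come from one-point crossover with a small mutation. The population sizes, limits and the sphere test function are assumptions for illustration, not the authors' settings.

      import math
      import random

      def sphere(x):                      # benchmark objective (minimize)
          return sum(v * v for v in x)

      def attraction(a, b, gamma=0.1):    # closer individuals are more attractive
          dist2 = sum((u - v) ** 2 for u, v in zip(a, b))
          return math.exp(-gamma * dist2)

      def fma_sketch(dim=2, pop=20, gens=100, spermatheca=3, sperm=4, bounds=(-5.0, 5.0)):
          """Very simplified reading of FMA-style mating-pair selection on top of a GA."""
          rnd = random.Random(1)
          lo, hi = bounds
          swarm = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
          for _ in range(gens):
              swarm.sort(key=sphere)
              females, males = swarm[: pop // 2], swarm[pop // 2:]
              sperm_left = [sperm] * len(males)
              children = []
              for f in females:
                  mates = 0
                  order = sorted(range(len(males)), key=lambda i: -attraction(f, males[i]))
                  for i in order:                       # most attractive males first
                      if mates >= spermatheca:          # spermatheca full: stop mating
                          break
                      if sperm_left[i] == 0:            # this male's reservoir is depleted
                          continue
                      sperm_left[i] -= 1
                      mates += 1
                      cut = rnd.randrange(1, dim) if dim > 1 else 0
                      child = f[:cut] + males[i][cut:]  # one-point crossover
                      j = rnd.randrange(dim)            # small Gaussian mutation, kept in bounds
                      child[j] = min(hi, max(lo, child[j] + rnd.gauss(0.0, 0.1)))
                      children.append(child)
              swarm = sorted(swarm + children, key=sphere)[:pop]   # elitist survivor selection
          return swarm[0], sphere(swarm[0])

      best, value = fma_sketch()
      print(value)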

  8. On the recovery of electric currents in the liquid core of the Earth

    NASA Astrophysics Data System (ADS)

    Kuslits, Lukács; Prácser, Ernő; Lemperger, István

    2017-04-01

    Inverse geodynamo modelling has become a standard method for obtaining a more accurate image of the processes within the outer core. In this poster, excerpts from the preliminary results of another approach are presented, which concerns the possibility of recovering the currents within the liquid core directly, using Main Magnetic Field data. The approximation of different systems of the flow of charge is possible with various geometries. Based on previous geodynamo simulations, current coils can furnish a good initial geometry for such an estimation. The presentation introduces our preliminary test results and a study of the reliability of the applied inversion algorithm for different numbers of coils, distributed in a grid symbolizing the domain between the inner-core and core-mantle boundaries. We shall also present inverted current structures using Main Field model data.

  9. Preventing Heat Injuries by Predicting Individualized Human Core Temperature

    DTIC Science & Technology

    2015-10-14

    Recoverable excerpts from this report describe a hardware/software warning system intended to detect an impending rise in core temperature (TC) and generate alerts to potentially prevent heat injuries. The fragments outline (1) an algorithm that uses a time series of recent-past TC measurements to provide ahead-of-time alerts about an impending rise in TC, and (2) an individualized model that uses non-invasive measurements of AC...

  10. The influence of conifer forest canopy cover on the accuracy of two individual tree measurement algorithms using lidar data

    Treesearch

    Michael J. Falkowski; Alistair M.S. Smith; Paul E. Gessler; Andrew T. Hudak; Lee A. Vierling; Jeffrey S. Evans

    2008-01-01

    Individual tree detection algorithms can provide accurate measurements of individual tree locations, crown diameters (from aerial photography and light detection and ranging (lidar) data), and tree heights (from lidar data). However, to be useful for forest management goals relating to timber harvest, carbon accounting, and ecological processes, there is a need to...

  11. Comparative Results of AIRS AMSU and CrIS/ATMS Retrievals Using a Scientifically Equivalent Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2016-01-01

    The AIRS Science Team Version 6 retrieval algorithm is currently producing high quality Level-3 Climate Data Records (CDRs) from AIRS/AMSU which are critical for understanding climate processes. The AIRS Science Team is finalizing an improved Version-7 retrieval algorithm to reprocess all old and future AIRS data. AIRS CDRs should eventually cover the period September 2002 through at least 2020. CrIS/ATMS is the only scheduled follow-on to AIRS/AMSU. The objective of this research is to prepare for generation of long-term CrIS/ATMS Level-3 data using a finalized retrieval algorithm that is scientifically equivalent to AIRS/AMSU Version-7.

  12. Reversible Data Hiding Based on DNA Computing

    PubMed Central

    Xie, Yingjie

    2017-01-01

    Biocomputing, especially DNA computing, has undergone great development and is widely used in information security. In this paper, a novel algorithm for reversible data hiding based on DNA computing is proposed. Inspired by the histogram modification algorithm, a classical algorithm for reversible data hiding, we combine it with DNA computing to realize the algorithm with biological technology. Compared with previous results, our experimental results significantly improve the ER (embedding rate). Furthermore, the PSNR (peak signal-to-noise ratio) of some test images is also improved. Experimental results show that the method is suitable for protecting the copyright of cover images in DNA-based information security. PMID:28280504
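
    The classical histogram-modification embedding that the abstract cites as inspiration can be sketched as follows; the DNA-computing layer of the proposed method is not reproduced. The toy cover image and payload bits are illustrative.

      import numpy as np

      def embed_histogram_shift(img, bits):
          """Classic histogram-modification embedding on an 8-bit grayscale image.

          Finds the peak bin P and the nearest empty (zero) bin Z above it, shifts the
          pixels with values in (P, Z) up by one to free the bin next to the peak, then
          embeds one bit per peak pixel (1 -> P+1, 0 -> P). The reverse pass (not shown)
          restores the cover exactly, which is what makes the scheme reversible.
          """
          hist = np.bincount(img.ravel(), minlength=256)
          peak = int(hist.argmax())
          zeros = np.where(hist[peak + 1:] == 0)[0]
          if zeros.size == 0:
              raise ValueError("no zero bin above the peak; choose another image or strategy")
          zero = peak + 1 + int(zeros[0])
          out = img.astype(np.int32).copy()
          out[(out > peak) & (out < zero)] += 1          # shift to empty the bin peak+1
          for (r, c), bit in zip(np.argwhere(out == peak), bits):
              if bit:
                  out[r, c] = peak + 1
          return out.astype(np.uint8), peak, zero

      gen = np.random.default_rng(0)
      cover = gen.integers(80, 120, size=(64, 64), dtype=np.uint8)   # toy cover image
      stego, peak, zero = embed_histogram_shift(cover, bits=[1, 0, 1, 1, 0])
      print(peak, zero, int(np.abs(stego.astype(int) - cover.astype(int)).max()))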

  13. What does voice-processing technology support today?

    PubMed Central

    Nakatsu, R; Suzuki, Y

    1995-01-01

    This paper describes the state of the art in applications of voice-processing technologies. In the first part, technologies concerning the implementation of speech recognition and synthesis algorithms are described. Hardware technologies such as microprocessors and DSPs (digital signal processors) are discussed, as is the software development environment, a key technology in developing application software ranging from DSP software to support software. In the second part, the state of the art of algorithms from the standpoint of applications is discussed. Several issues concerning the evaluation of speech recognition/synthesis algorithms are covered, as well as issues concerning the robustness of algorithms in adverse conditions. PMID:7479720

  14. A parallel algorithm for the two-dimensional time fractional diffusion equation with implicit difference method.

    PubMed

    Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie

    2014-01-01

    It is very time consuming to solve fractional differential equations. The computational complexity of the two-dimensional time fractional diffusion equation (2D-TFDE) with an iterative implicit finite difference method is O(M_x M_y N^2). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and a data layout with virtual boundaries are designed for this parallel algorithm. The experimental results show that the results of the parallel algorithm compare well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency of 81 processes is up to 88.24% compared with 9 processes on a distributed-memory cluster system. We think that parallel computing technology will become a basic method for computationally intensive fractional applications in the near future.

  15. Operational monitoring of land-cover change using multitemporal remote sensing data

    NASA Astrophysics Data System (ADS)

    Rogan, John

    2005-11-01

    Land-cover change, manifested as either land-cover modification and/or conversion, can occur at all spatial scales, and changes at local scales can have profound, cumulative impacts at broader scales. The implication of operational land-cover monitoring is that researchers have access to a continuous stream of remote sensing data, with the long term goal of providing for consistent and repetitive mapping. Effective large area monitoring of land-cover (i.e., >1000 km2) can only be accomplished by using remotely sensed images as an indirect data source in land-cover change mapping and as a source for land-cover change model projections. Large area monitoring programs face several challenges: (1) choice of appropriate classification scheme/map legend over large, topographically and phenologically diverse areas; (2) issues concerning data consistency and map accuracy (i.e., calibration and validation); (3) very large data volumes; (4) time consuming data processing and interpretation. Therefore, this dissertation research broadly addresses these challenges in the context of examining state-of-the-art image pre-processing, spectral enhancement, classification, and accuracy assessment techniques to assist the California Land-cover Mapping and Monitoring Program (LCMMP). The results of this dissertation revealed that spatially varying haze can be effectively corrected from Landsat data for the purposes of change detection. The Multitemporal Spectral Mixture Analysis (MSMA) spectral enhancement technique produced more accurate land-cover maps than those derived from the Multitemporal Kauth Thomas (MKT) transformation in northern and southern California study areas. A comparison of machine learning classifiers showed that Fuzzy ARTMAP outperformed two classification tree algorithms, based on map accuracy and algorithm robustness. Variation in spatial data error (positional and thematic) was explored in relation to environmental variables using geostatistical interpolation techniques. Finally, the land-cover modification maps generated for three time intervals (1985--1990--1996--2000), with nine change-classes revealed important variations in land-cover gain and loss between northern and southern California study areas.

  16. Parallel and Scalable Clustering and Classification for Big Data in Geosciences

    NASA Astrophysics Data System (ADS)

    Riedel, M.

    2015-12-01

    Machine learning, data mining, and statistical computing are common techniques for performing analysis in the earth sciences. This contribution will focus on two concrete and widely used data analytics methods suitable for analysing 'big data' in the context of geoscience use cases: clustering and classification. From the broad class of available clustering methods we focus on the density-based spatial clustering of applications with noise (DBSCAN) algorithm, which enables the identification of outliers or interesting anomalies. A new open source parallel and scalable DBSCAN implementation will be discussed in the light of a scientific use case that detects water mixing events in the Koljoefjords. The second technique we cover is classification, with a focus on the support vector machine (SVM) algorithm, one of the best out-of-the-box classification algorithms. A parallel and scalable SVM implementation will be discussed in the light of a scientific use case in the field of remote sensing with 52 different classes of land cover types.
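
    A minimal sketch of outlier detection with DBSCAN, assuming scikit-learn: points that end up with label -1 are noise, which is how anomalies such as unusual mixing events could be flagged. The two-column feature matrix and the eps/min_samples values are illustrative assumptions, not the contribution's parallel implementation or data.

      import numpy as np
      from sklearn.cluster import DBSCAN
      from sklearn.preprocessing import StandardScaler

      # Illustrative feature matrix, e.g. one row per observation of (temperature, salinity)
      X = np.vstack([
          np.random.default_rng(0).normal([10.0, 34.0], 0.2, size=(200, 2)),   # background water mass
          np.array([[14.0, 31.0], [13.5, 31.5]]),                              # anomalous events
      ])

      labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(StandardScaler().fit_transform(X))
      print("points flagged as anomalies:", int(np.sum(labels == -1)))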

  17. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com; Suprijadi; Nuclear Physics and Biophysics Reaserch Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132

    Monte Carlo (MC) is one of the powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the codes based on the MC algorithm that is widely used for radiographic image simulation is MC-GPU, a code developed by Andrea Basal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and the comparison of image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial mode and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulation times on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained with the number of histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed with a statistical approach, the quality of the GPU and CPU images is relatively the same.

  18. Neogene sea surface temperature reconstructions from the Southern McMurdo Sound and the McMurdo Ice Shelf (ANDRILL Program, Antarctica)

    NASA Astrophysics Data System (ADS)

    Sangiorgi, Francesca; Willmott, Veronica; Kim, Jung-Hyun; Schouten, Stefan; Brinkhuis, Henk; Sinninghe Damsté, Jaap S.; Florindo, Fabio; Harwood, David; Naish, Tim; Powell, Ross

    2010-05-01

    During the austral summers 2006 and 2007 the ANtarctic DRILLing Program (ANDRILL) drilled two cores, each recovering more than 1000m of sediment from below the McMurdo Ice-Shelf (MIS, AND-1B), and sea-ice in Southern McMurdo Sound (SMS, AND-2A), respectively, revealing new information about Neogene Antarctic cryosphere evolution. Core AND-1B was drilled in a more distal location than core AND-2A. With the aim of obtaining important information for the understanding of the history of Antarctic climate and environment during selected interval of the Neogene, we applied novel organic geochemistry proxies such as TEX86 (Tetra Ether IndeX of lipids with 86 carbon atoms) using a new calibration equation specifically developed for polar areas and based on 116 surface sediment samples collected from polar oceans (Kim et al., subm.), and BIT (Branched and Isoprenoid Tetraether), to derive absolute (sea surface) temperature values and to evaluate the relative contribution of soil organic matter versus marine organic matter, respectively. We will present the state-of-the-art of the methodology applied, discussing its advantages and limitations, and the results so far obtained from the analysis of 60 samples from core AND-2A covering the Miocene Climatic Optimum (and the Mid-late Miocene transition) and of 20 pilot samples from core AND-1B covering the late Pliocene.

  19. Current Status of Japanese Global Precipitation Measurement (GPM) Research Project

    NASA Astrophysics Data System (ADS)

    Kachi, Misako; Oki, Riko; Kubota, Takuji; Masaki, Takeshi; Kida, Satoshi; Iguchi, Toshio; Nakamura, Kenji; Takayabu, Yukari N.

    2013-04-01

    The Global Precipitation Measurement (GPM) mission is a mission led by the Japan Aerospace Exploration Agency (JAXA) and the National Aeronautics and Space Administration (NASA) under collaboration with many international partners, who will provide constellation of satellites carrying microwave radiometer instruments. The GPM Core Observatory, which carries the Dual-frequency Precipitation Radar (DPR) developed by JAXA and the National Institute of Information and Communications Technology (NICT), and the GPM Microwave Imager (GMI) developed by NASA. The GPM Core Observatory is scheduled to be launched in early 2014. JAXA also provides the Global Change Observation Mission (GCOM) 1st - Water (GCOM-W1) named "SHIZUKU," as one of constellation satellites. The SHIZUKU satellite was launched in 18 May, 2012 from JAXA's Tanegashima Space Center, and public data release of the Advanced Microwave Scanning Radiometer 2 (AMSR2) on board the SHIZUKU satellite was planned that Level 1 products in January 2013, and Level 2 products including precipitation in May 2013. The Japanese GPM research project conducts scientific activities on algorithm development, ground validation, application research including production of research products. In addition, we promote collaboration studies in Japan and Asian countries, and public relations activities to extend potential users of satellite precipitation products. In pre-launch phase, most of our activities are focused on the algorithm development and the ground validation related to the algorithm development. As the GPM standard products, JAXA develops the DPR Level 1 algorithm, and the NASA-JAXA Joint Algorithm Team develops the DPR Level 2 and the DPR-GMI combined Level2 algorithms. JAXA also develops the Global Rainfall Map product as national product to distribute hourly and 0.1-degree horizontal resolution rainfall map. All standard algorithms including Japan-US joint algorithm will be reviewed by the Japan-US Joint Precipitation Measuring Mission (PMM) Science Team (JPST) before the release. DPR Level 2 algorithm has been developing by the DPR Algorithm Team led by Japan, which is under the NASA-JAXA Joint Algorithm Team. The Level-2 algorithms will provide KuPR only products, KaPR only products, and Dual-frequency Precipitation products, with estimated precipitation rate, radar reflectivity, and precipitation information such as drop size distribution and bright band height. At-launch code was developed in December 2012. In addition, JAXA and NASA have provided synthetic DPR L1 data and tests have been performed using them. Japanese Global Rainfall Map algorithm for the GPM mission has been developed by the Global Rainfall Map Algorithm Development Team in Japan. The algorithm succeeded heritages of the Global Satellite Mapping for Precipitation (GSMaP) project, which was sponsored by the Japan Science and Technology Agency (JST) under the Core Research for Evolutional Science and Technology (CREST) framework between 2002 and 2007. The GSMaP near-real-time version and reanalysis version have been in operation at JAXA, and browse images and binary data available at the GSMaP web site (http://sharaku.eorc.jaxa.jp/GSMaP/). The GSMaP algorithm for GPM is developed in collaboration with AMSR2 standard algorithm for precipitation product, and their validation studies are closely related. As JAXA GPM product, we will provide 0.1-degree grid and hourly product for standard and near-realtime processing. 
Outputs will include hourly rainfall, gauge-calibrated hourly rainfall, and several quality flags (a satellite information flag, a time information flag, and gauge quality information) over the global area from 60°S to 60°N. The at-launch code of GSMaP for GPM is under development and will be delivered to the JAXA GPM Mission Operation System by April 2013. The at-launch code will include several updates of the microwave imager and sounder algorithms and databases, and the introduction of rain-gauge correction.

  20. Benthic foraminiferal census data from Mobile Bay, Alabama--counts of surface samples and box cores

    USGS Publications Warehouse

    Richwine, Kathryn A.; Osterman, Lisa E.

    2012-01-01

    A study was undertaken in order to understand recent environmental change in Mobile Bay, Alabama. For this study a series of surface sediment and box core samples was collected. The surface benthic foraminiferal data provide the modern baseline conditions of the bay and can be used as a reference for changing paleoenvironmental parameters recorded in the box cores. The 14 sampling locations were chosen in the bay to cover the wide diversity of fluvial and marine-influenced environments on both sides of the shipping channel.
