Sample records for windows kernel architecture

  1. Combined multi-kernel head computed tomography images optimized for depicting both brain parenchyma and bone.

    PubMed

    Takagi, Satoshi; Nagase, Hiroyuki; Hayashi, Tatsuya; Kita, Tamotsu; Hayashi, Katsumi; Sanada, Shigeru; Koike, Masayuki

    2014-01-01

    The hybrid convolution kernel technique for computed tomography (CT) is known to enable the depiction of an image set using different window settings. Our purpose was to decrease the number of artifacts in the hybrid convolution kernel technique for head CT and to determine whether our improved combined multi-kernel head CT images enabled diagnosis as a substitute for both brain (low-pass kernel-reconstructed) and bone (high-pass kernel-reconstructed) images. Forty-four patients with nondisplaced skull fractures were included. Our improved multi-kernel images were generated so that pixels of >100 Hounsfield units in both brain and bone images were composed of CT values of bone images and other pixels were composed of CT values of brain images. Three radiologists compared the improved multi-kernel images with bone images. The improved multi-kernel images and brain images were displayed identically on the brain window settings. All three radiologists agreed that the improved multi-kernel images on the bone window settings were sufficient for diagnosing skull fractures in all patients. This improved multi-kernel technique has a simple algorithm and is practical for clinical use. Thus, simplified head CT examinations and fewer images that need to be stored can be expected.
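
    A minimal sketch of the pixel-composition rule the abstract describes, assuming the brain- and bone-kernel reconstructions are already available as Hounsfield-unit arrays; the function name and the toy data are illustrative, not from the paper.

    ```python
    import numpy as np

    def combine_multi_kernel(brain_hu, bone_hu, threshold=100.0):
        """Compose a combined head-CT image from brain- and bone-kernel
        reconstructions: where both reconstructions exceed the HU threshold,
        take the bone-kernel value; elsewhere keep the brain-kernel value."""
        use_bone = (brain_hu > threshold) & (bone_hu > threshold)
        return np.where(use_bone, bone_hu, brain_hu)

    # Toy example with random values standing in for real reconstructions.
    rng = np.random.default_rng(0)
    brain = rng.normal(40, 30, size=(4, 4))     # soft-tissue-like HU values
    bone = brain + rng.normal(0, 5, size=(4, 4))
    brain[1:3, 1:3] += 900                      # a patch of bone-like HU values
    bone[1:3, 1:3] += 900
    combined = combine_multi_kernel(brain, bone)
    ```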

  2. A personal computer-based, multitasking data acquisition system

    NASA Technical Reports Server (NTRS)

    Bailey, Steven A.

    1990-01-01

    A multitasking, data acquisition system was written to simultaneously collect meteorological radar and telemetry data from two sources. This system is based on the personal computer architecture. Data is collected via two asynchronous serial ports and is deposited to disk. The system is written in both the C programming language and assembler. It consists of three parts: a multitasking kernel for data collection, a shell with pull-down windows as the user interface, and a graphics processor for editing data and creating coded messages. An explanation of both system principles and program structure is presented.

  3. Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters

    PubMed Central

    Torres-Huitzil, Cesar

    2013-01-01

    Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k × k kernel requires k² − 1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computations can be achieved by kernel decomposition and using constant-time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture design uses less computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters on 1024 × 1024 images with up to 255 × 255 kernels in around 8.4 milliseconds (about 120 frames per second) at a clock frequency of 250 MHz. The implementation is highly scalable in the kernel size, with a good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding. PMID:24288456
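
    The hardware pipelines the one-dimensional van Herk/Gil-Werman recurrence; a minimal NumPy sketch of that recurrence (about three comparisons per sample, independent of the window size k) is shown below. The function name and the test vector are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def hgw_running_max(x, k):
        """1-D van Herk/Gil-Werman running max. Returns max over x[i:i+k]
        for each valid window start i, using block prefix/suffix maxima."""
        n = len(x)
        s = np.empty(n)  # prefix max within each block of length k
        r = np.empty(n)  # suffix max within each block of length k
        for j in range(n):
            s[j] = x[j] if j % k == 0 else max(s[j - 1], x[j])
        for j in range(n - 1, -1, -1):
            r[j] = x[j] if (j % k == k - 1 or j == n - 1) else max(r[j + 1], x[j])
        return np.array([max(r[i], s[i + k - 1]) for i in range(n - k + 1)])

    x = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5], dtype=float)
    assert np.array_equal(hgw_running_max(x, 3),
                          [max(x[i:i + 3]) for i in range(len(x) - 2)])
    ```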

  4. Design and Analysis of Architectures for Structural Health Monitoring Systems

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi; Sixto, S. L. (Technical Monitor)

    2002-01-01

    During the two-year project period, we have worked on several aspects of Health Usage and Monitoring Systems for structural health monitoring. In particular, we have made contributions in the following areas. 1. Reference HUMS architecture: We developed a high-level architecture for health monitoring and usage systems (HUMS). The proposed reference architecture is compatible with the Generic Open Architecture (GOA) proposed as a standard for avionics systems. 2. HUMS kernel: One of the critical layers of the HUMS reference architecture is the HUMS kernel. We developed a detailed design of a kernel to implement the high-level architecture. 3. Prototype implementation of HUMS kernel: We have implemented a preliminary version of the HUMS kernel on a Unix platform. We have implemented both a centralized system version and a distributed version. 4. SCRAMNet and HUMS: SCRAMNet (Shared Common Random Access Memory Network) is a system that is found to be suitable to implement HUMS. For this reason, we have conducted a simulation study to determine its stability in handling the input data rates in HUMS. 5. Architectural specification.

  5. A Frequency-Domain Implementation of a Sliding-Window Traffic Sign Detector for Large Scale Panoramic Datasets

    NASA Astrophysics Data System (ADS)

    Creusen, I. M.; Hazelhoff, L.; De With, P. H. N.

    2013-10-01

    In large-scale automatic traffic sign surveying systems, the primary computational effort is concentrated at the traffic sign detection stage. This paper focuses on reducing the computational load of the sliding-window object detection algorithm in particular, which is employed for traffic sign detection. Sliding-window object detectors often use a linear SVM to classify the features in a window. In this case, the classification can be seen as a convolution of the feature maps with the SVM kernel. It is well known that convolution can be efficiently implemented in the frequency domain, for kernels larger than a certain size. We show that by careful reordering of sliding-window operations, most of the frequency-domain transformations can be eliminated, leading to a substantial increase in efficiency. Additionally, we suggest using the overlap-add method to keep the memory use within reasonable bounds. This allows us to keep all the transformed kernels in memory, thereby eliminating even more domain transformations, and allows all scales in a multiscale pyramid to be processed using the same set of transformed kernels. For a typical sliding-window implementation, we have found that the detector execution performance improves by a factor of 5.3. As a bonus, many of the detector improvements from literature, e.g. chi-squared kernel approximations, sub-class splitting algorithms, etc., can be more easily applied at a lower performance penalty because of an improved scalability.
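
    The reordering of operations and the overlap-add bookkeeping are the paper's contribution; the sketch below only illustrates the underlying equivalence the method builds on, namely that scoring every window of a feature map with a linear SVM is a cross-correlation, which can equally be evaluated in the frequency domain. Names and sizes are illustrative; a real detector would also sum the result over feature channels.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(1)
    feature_map = rng.normal(size=(64, 48))   # one feature channel of an image
    w = rng.normal(size=(16, 12))             # linear-SVM weights for a 16x12 window
    b = -0.3                                  # SVM bias

    # Direct sliding-window scores: dot product of w with every window.
    H, W = feature_map.shape
    h, s = w.shape
    direct = np.array([[np.sum(feature_map[i:i + h, j:j + s] * w) + b
                        for j in range(W - s + 1)] for i in range(H - h + 1)])

    # Frequency-domain equivalent: correlation == convolution with a flipped kernel.
    fft_scores = fftconvolve(feature_map, w[::-1, ::-1], mode='valid') + b

    assert np.allclose(direct, fft_scores)
    ```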

  6. Architecture for removable media USB-ARM

    DOEpatents

    Shue, Craig A.; Lamb, Logan M.; Paul, Nathanael R.

    2015-07-14

    A storage device is coupled to a computing system comprising an operating system and application software. Access to the storage device is blocked by a kernel filter driver, except exclusive access is granted to a first anti-virus engine. The first anti-virus engine is directed to scan the storage device for malicious software and report results. Exclusive access may be granted to one or more other anti-virus engines and they may be directed to scan the storage device and report results. Approval of all or a portion of the information on the storage device is based on the results from the first anti-virus engine and the other anti-virus engines. The storage device is presented to the operating system and access is granted to the approved information. The operating system may be a Microsoft Windows operating system. The kernel filter driver and usage of anti-virus engines may be configurable by a user.

  7. An energy efficient and high speed architecture for convolution computing based on binary resistive random access memory

    NASA Astrophysics Data System (ADS)

    Liu, Chen; Han, Runze; Zhou, Zheng; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng

    2018-04-01

    In this work we present a novel convolution computing architecture based on metal oxide resistive random access memory (RRAM) to process the image data stored in the RRAM arrays. The proposed image storage architecture shows better speed and device-consumption efficiency than the previous kernel storage architecture. Further, we improve the architecture for high-accuracy and low-power computing by utilizing binary storage and a series resistor. For a 28 × 28 image and 10 kernels with a size of 3 × 3, compared with the previous kernel storage approach, the newly proposed architecture shows excellent performance, including: 1) almost 100% accuracy within 20% LRS variation and 90% HRS variation; 2) more than 67 times speed boost; 3) 71.4% energy saving.

  8. Parametric output-only identification of time-varying structures using a kernel recursive extended least squares TARMA approach

    NASA Astrophysics Data System (ADS)

    Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim

    2018-01-01

    The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.

  9. An Efficient Implementation For Real Time Applications Of The Wigner-Ville Distribution

    NASA Astrophysics Data System (ADS)

    Boashash, Boualem; Black, Peter; Whitehouse, Harper J.

    1986-03-01

    The Wigner-Ville Distribution (WVD) is a valuable tool for time-frequency signal analysis. In order to implement the WVD in real time an efficient algorithm and architecture have been developed which may be implemented with commercial components. This algorithm successively computes the analytic signal corresponding to the input signal, forms a weighted kernel function and analyses the kernel via a Discrete Fourier Transform (DFT). To evaluate the analytic signal required by the algorithm it is shown that the time domain definition implemented as a finite impulse response (FIR) filter is practical and more efficient than the frequency domain definition of the analytic signal. The windowed resolution of the WVD in the frequency domain is shown to be similar to the resolution of a windowed Fourier Transform. A real time signal processor has been designed for evaluation of the WVD analysis system. The system is easily paralleled and can be configured to meet a variety of frequency and time resolutions. The arithmetic unit is based on a pair of high speed VLSI floating-point multiplier and adder chips. Dual operand buses and an independent result bus maximize data transfer rates. The system is horizontally microprogrammed and utilizes a full instruction pipeline. Each microinstruction specifies two operand addresses, a result location, the type of arithmetic and the memory configuration. Input and output are via shared memory blocks with front-end processors to handle data transfers during the non-access periods of the analyzer.
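
    A compact sketch of the pipeline the abstract outlines (analytic signal, windowed kernel, DFT over the lag variable), assuming a Hamming lag window and SciPy's frequency-domain analytic-signal routine in place of the FIR implementation the paper recommends; the normalization and the exact window are illustrative.

    ```python
    import numpy as np
    from scipy.signal import hilbert, get_window

    def pseudo_wvd(x, win_len=63):
        """Discrete pseudo-Wigner-Ville distribution. For each time index n,
        form the windowed kernel K[n, m] = z[n+m] * conj(z[n-m]) * w[m] and
        take its DFT over the lag m."""
        z = hilbert(x)                      # analytic signal (frequency-domain method)
        n_samples = len(z)
        half = win_len // 2
        w = get_window('hamming', win_len)
        wvd = np.zeros((n_samples, win_len))
        for n in range(n_samples):
            kernel = np.zeros(win_len, dtype=complex)
            for m in range(-half, half + 1):
                if 0 <= n + m < n_samples and 0 <= n - m < n_samples:
                    kernel[m + half] = z[n + m] * np.conj(z[n - m]) * w[m + half]
            wvd[n] = 2.0 * np.real(np.fft.fft(kernel))
        return wvd

    t = np.arange(512) / 512.0
    chirp = np.cos(2 * np.pi * (20 * t + 60 * t ** 2))   # linear chirp test signal
    tfd = pseudo_wvd(chirp)
    ```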

  10. Self spectrum window method in wigner-ville distribution.

    PubMed

    Liu, Zhongguo; Liu, Changchun; Liu, Boqiang; Lv, Yangsheng; Lei, Yinsheng; Yu, Mengsun

    2005-01-01

    Wigner-Ville distribution (WVD) is an important type of time-frequency analysis in biomedical signal processing. The cross-term interference in WVD has a disadvantageous influence on its application. In this research, the Self Spectrum Window (SSW) method was put forward to suppress the cross-term interference, based on the fact that the cross-term and auto-WVD terms in the integral kernel function are orthogonal. With the Self Spectrum Window (SSW) algorithm, a real auto-WVD function was used as a template to cross-correlate with the integral kernel function, and the Short Time Fourier Transform (STFT) spectrum of the signal was used as window function to process the WVD in the time-frequency plane. The SSW method was confirmed by computer simulation with good analysis results. Satisfactory time-frequency distribution was obtained.

  11. Using the Intel Math Kernel Library on Peregrine | High-Performance Computing | NREL

    Science.gov Websites

    Learn how to use the Intel Math Kernel Library (MKL) with Peregrine system software. Core math functions in MKL include BLAS, LAPACK, ScaLAPACK, sparse solvers, and fast Fourier transforms.

  12. Fast Interrupt Priority Management in Operating System Kernels

    DTIC Science & Technology

    1993-05-01

    We present results for the Mach 3.0 microkernel operating system, although the technique is applicable to other kernel architectures, both micro and...protection in the Mach 3.0 microkernel for several different processor architectures. For example, on the Omron Luna88k, we observed a 50% reduction in...general interrupt mask raise/lower pair within the Mach 3.0 microkernel on a variety of architectures.

  13. The probabilistic neural network architecture for high speed classification of remotely sensed imagery

    NASA Technical Reports Server (NTRS)

    Chettri, Samir R.; Cromp, Robert F.

    1993-01-01

    In this paper we discuss a neural network architecture (the Probabilistic Neural Net or the PNN) that, to the best of our knowledge, has not previously been applied to remotely sensed data. The PNN is a supervised non-parametric classification algorithm as opposed to the Gaussian maximum likelihood classifier (GMLC). The PNN works by fitting a Gaussian kernel to each training point. The width of the Gaussian is controlled by a tuning parameter called the window width. If very small widths are used, the method is equivalent to the nearest neighbor method. For large windows, the PNN behaves like the GMLC. The basic implementation of the PNN requires no training time at all. In this respect it is far better than the commonly used backpropagation neural network which can be shown to take O(N⁶) time for training where N is the dimensionality of the input vector. In addition the PNN can be implemented in a feed forward mode in hardware. The disadvantage of the PNN is that it requires all the training data to be stored. Some solutions to this problem are discussed in the paper. Finally, we discuss the accuracy of the PNN with respect to the GMLC and the backpropagation neural network (BPNN). The PNN is shown to be better than GMLC and not as good as the BPNN with regards to classification accuracy.
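
    A minimal sketch of the PNN/Parzen-window decision rule the abstract describes: a Gaussian kernel on every training point, class scores formed by averaging the kernel responses per class, and the window width sigma as the single tuning parameter. The data and names are illustrative.

    ```python
    import numpy as np

    def pnn_classify(X_train, y_train, X_test, sigma=0.5):
        """Probabilistic neural network: place a Gaussian kernel on every
        training point; each class's score is the mean kernel response of
        its training points; predict the class with the highest score."""
        classes = np.unique(y_train)
        # Squared distances between every test and training sample.
        d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
        k = np.exp(-d2 / (2.0 * sigma ** 2))
        scores = np.stack([k[:, y_train == c].mean(axis=1) for c in classes], axis=1)
        return classes[np.argmax(scores, axis=1)]

    rng = np.random.default_rng(2)
    X0 = rng.normal([0, 0], 0.5, size=(50, 2))
    X1 = rng.normal([2, 2], 0.5, size=(50, 2))
    X = np.vstack([X0, X1]); y = np.array([0] * 50 + [1] * 50)
    print(pnn_classify(X, y, np.array([[0.1, 0.2], [1.9, 2.1]])))  # -> [0 1]
    ```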

  14. Lagged kernel machine regression for identifying time windows of susceptibility to exposures of complex mixtures.

    PubMed

    Liu, Shelley H; Bobb, Jennifer F; Lee, Kyu Ha; Gennings, Chris; Claus Henn, Birgit; Bellinger, David; Austin, Christine; Schnaas, Lourdes; Tellez-Rojo, Martha M; Hu, Howard; Wright, Robert O; Arora, Manish; Coull, Brent A

    2018-07-01

    The impact of neurotoxic chemical mixtures on children's health is a critical public health concern. It is well known that during early life, toxic exposures may impact cognitive function during critical time intervals of increased vulnerability, known as windows of susceptibility. Knowledge on time windows of susceptibility can help inform treatment and prevention strategies, as chemical mixtures may affect a developmental process that is operating at a specific life phase. There are several statistical challenges in estimating the health effects of time-varying exposures to multi-pollutant mixtures, such as: multi-collinearity among the exposures both within time points and across time points, and complex exposure-response relationships. To address these concerns, we develop a flexible statistical method, called lagged kernel machine regression (LKMR). LKMR identifies critical exposure windows of chemical mixtures, and accounts for complex non-linear and non-additive effects of the mixture at any given exposure window. Specifically, LKMR estimates how the effects of a mixture of exposures change with the exposure time window using a Bayesian formulation of a grouped, fused lasso penalty within a kernel machine regression (KMR) framework. A simulation study demonstrates the performance of LKMR under realistic exposure-response scenarios, and demonstrates large gains over approaches that consider each time window separately, particularly when serial correlation among the time-varying exposures is high. Furthermore, LKMR demonstrates gains over another approach that inputs all time-specific chemical concentrations together into a single KMR. We apply LKMR to estimate associations between neurodevelopment and metal mixtures in Early Life Exposures in Mexico and Neurotoxicology, a prospective cohort study of child health in Mexico City.

  15. Parameterized Micro-benchmarking: An Auto-tuning Approach for Complex Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Wenjing; Krishnamoorthy, Sriram; Agrawal, Gagan

    2012-05-15

    Auto-tuning has emerged as an important practical method for creating highly optimized implementations of key computational kernels and applications. However, the growing complexity of architectures and applications is creating new challenges for auto-tuning. Complex applications can involve a prohibitively large search space that precludes empirical auto-tuning. Similarly, architectures are becoming increasingly complicated, making it hard to model performance. In this paper, we focus on the challenge to auto-tuning presented by applications with a large number of kernels and kernel instantiations. While these kernels may share a somewhat similar pattern, they differ considerably in problem sizes and the exact computation performed. We propose and evaluate a new approach to auto-tuning which we refer to as parameterized micro-benchmarking. It is an alternative to the two existing classes of approaches to auto-tuning: analytical model-based and empirical search-based. Particularly, we argue that the former may not be able to capture all the architectural features that impact performance, whereas the latter might be too expensive for an application that has several different kernels. In our approach, different expressions in the application, different possible implementations of each expression, and the key architectural features are used to derive a simple micro-benchmark and a small parameter space. This allows us to learn the most significant features of the architecture that can impact the choice of implementation for each kernel. We have evaluated our approach in the context of GPU implementations of tensor contraction expressions encountered in excited state calculations in quantum chemistry. We have focused on two aspects of GPUs that affect tensor contraction execution: memory access patterns and kernel consolidation. Using our parameterized micro-benchmarking approach, we obtain a speedup of up to 2× over the version that used default optimizations but no auto-tuning. We demonstrate that observations made from micro-benchmarks match the behavior seen from real expressions. In the process, we make important observations about the memory hierarchy of two of the most recent NVIDIA GPUs, which can be used in other optimization frameworks as well.

  16. Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings

    NASA Astrophysics Data System (ADS)

    Slavakis, Konstantinos; Theodoridis, Sergios

    2008-12-01

    Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.

  17. KITTEN Lightweight Kernel 0.1 Beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pedretti, Kevin; Levenhagen, Michael; Kelly, Suzanne

    2007-12-12

    The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general purpose OS kernels.

  18. Patterns and Practices for Future Architectures

    DTIC Science & Technology

    2014-08-01

    Subject terms: computing architecture, graph algorithms, high-performance computing, big data, GPU. The report's figures cover the data structures created by Kernel 1 of the single-CPU list implementation (using the graph from the example in Section 1.2), Kernel 2 of the Graph500 BFS reference implementation (single CPU, list), and the data structures for the sequential CSR algorithm.

  19. Wilson Dslash Kernel From Lattice QCD Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.

    2015-07-01

    Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we will detail our work in optimizing the Wilson-Dslash kernels for Intel Xeon Phi; however, as we will show, the technique gives excellent performance on regular Xeon architecture as well.

  20. Modular Affective Reasoning-Based Versatile Introspective Architecture (MARVIN)

    DTIC Science & Technology

    2008-08-14

    monolithic kernels found in most mass market OSs, where these kinds of system processes run within the kernel, and thus need to be highly optimized as well as...without modifying pre-existing process management elements, we expect the process of transitioning this component from MINIX to monolithic kernels to...necessary to incorporate them into a monolithic kernel. To demonstrate how the APMM would work in practice, we used it as the basis for building a simulated

  1. SPLICER - A GENETIC ALGORITHM TOOL FOR SEARCH AND OPTIMIZATION, VERSION 1.0 (MACINTOSH VERSION)

    NASA Technical Reports Server (NTRS)

    Wang, L.

    1994-01-01

    SPLICER is a genetic algorithm tool which can be used to solve search and optimization problems. Genetic algorithms are adaptive search procedures (i.e. problem solving methods) based loosely on the processes of natural selection and Darwinian "survival of the fittest." SPLICER provides the underlying framework and structure for building a genetic algorithm application. These algorithms apply genetically-inspired operators to populations of potential solutions in an iterative fashion, creating new populations while searching for an optimal or near-optimal solution to the problem at hand. SPLICER 1.0 was created using a modular architecture that includes a Genetic Algorithm Kernel, interchangeable Representation Libraries, Fitness Modules and User Interface Libraries, and well-defined interfaces between these components. The architecture supports portability, flexibility, and extensibility. SPLICER comes with all source code and several examples. For instance, a "traveling salesperson" example searches for the minimum distance through a number of cities visiting each city only once. Stand-alone SPLICER applications can be used without any programming knowledge. However, to fully utilize SPLICER within new problem domains, familiarity with C language programming is essential. SPLICER's genetic algorithm (GA) kernel was developed independent of representation (i.e. problem encoding), fitness function or user interface type. The GA kernel comprises all functions necessary for the manipulation of populations. These functions include the creation of populations and population members, the iterative population model, fitness scaling, parent selection and sampling, and the generation of population statistics. In addition, miscellaneous functions are included in the kernel (e.g., random number generators). Different problem-encoding schemes and functions are defined and stored in interchangeable representation libraries. This allows the GA kernel to be used with any representation scheme. The SPLICER tool provides representation libraries for binary strings and for permutations. These libraries contain functions for the definition, creation, and decoding of genetic strings, as well as multiple crossover and mutation operators. Furthermore, the SPLICER tool defines the appropriate interfaces to allow users to create new representation libraries. Fitness modules are the only component of the SPLICER system a user will normally need to create or alter to solve a particular problem. Fitness functions are defined and stored in interchangeable fitness modules which must be created using C language. Within a fitness module, a user can create a fitness (or scoring) function, set the initial values for various SPLICER control parameters (e.g., population size), create a function which graphically displays the best solutions as they are found, and provide descriptive information about the problem. The tool comes with several example fitness modules, while the process of developing a fitness module is fully discussed in the accompanying documentation. The user interface is event-driven and provides graphic output in windows. SPLICER is written in Think C for Apple Macintosh computers running System 6.0.3 or later and Sun series workstations running SunOS. The UNIX version is easily ported to other UNIX platforms and requires MIT's X Window System, Version 11 Revision 4 or 5, MIT's Athena Widget Set, and the Xw Widget Set. Example executables and source code are included for each machine version. 
The standard distribution media for the Macintosh version is a set of three 3.5 inch Macintosh format diskettes. The standard distribution medium for the UNIX version is a .25 inch streaming magnetic tape cartridge in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. SPLICER was developed in 1991.

  2. PERI - Auto-tuning Memory Intensive Kernels for Multicore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, David H; Williams, Samuel; Datta, Kaushik

    2008-06-24

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.

  3. A fast non-local means algorithm based on integral image and reconstructed similar kernel

    NASA Astrophysics Data System (ADS)

    Lin, Zheng; Song, Enmin

    2018-03-01

    Image denoising is one of the essential methods in digital image processing. The non-local means (NLM) denoising approach is a remarkable denoising technique. However, its time complexity of the computation is high. In this paper, we design a fast NLM algorithm based on integral image and reconstructed similar kernel. First, the integral image is introduced in the traditional NLM algorithm. In doing so, it reduces a great deal of repetitive operations in the parallel processing, which greatly improves the running speed of the algorithm. Secondly, in order to amend the error of the integral image, we construct a similar window resembling the Gaussian kernel in the pyramidal stacking pattern. Finally, in order to eliminate the influence produced by replacing the Gaussian weighted Euclidean distance with Euclidean distance, we propose a scheme to construct a similar kernel with a size of 3 × 3 in a neighborhood window which will reduce the effect of noise on a single pixel. Experimental results demonstrate that the proposed algorithm is about seventeen times faster than the traditional NLM algorithm, yet produces comparable results in terms of Peak Signal-to-Noise Ratio (the PSNR increased by 2.9% on average) and perceptual image quality.
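
    A sketch of the integral-image step that drives the speed-up: for a fixed offset between a pixel and a candidate match, the per-pixel squared differences are summed over a patch in constant time per pixel via an integral image. The reconstructed similar kernel and the full NLM weighting from the paper are omitted, and the wrap-around border handling here is a simplification.

    ```python
    import numpy as np

    def patch_ssd_for_offset(img, dx, dy, half_patch):
        """Sum of squared differences between the patch around (y, x) and the
        patch around (y+dy, x+dx), for every pixel, computed with an integral
        image so the cost per pixel is constant in the patch size."""
        shifted = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)  # img[y+dy, x+dx]
        diff2 = (img - shifted) ** 2
        # Integral image with a zero top row/left column for easy box sums.
        ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
        ii[1:, 1:] = diff2.cumsum(axis=0).cumsum(axis=1)
        p = half_patch
        H, W = img.shape
        out = np.full((H, W), np.nan)
        ys, xs = np.arange(p, H - p), np.arange(p, W - p)
        Y, X = np.meshgrid(ys, xs, indexing='ij')
        out[p:H - p, p:W - p] = (ii[Y + p + 1, X + p + 1] - ii[Y - p, X + p + 1]
                                 - ii[Y + p + 1, X - p] + ii[Y - p, X - p])
        return out

    img = np.random.default_rng(3).normal(size=(32, 32))
    ssd = patch_ssd_for_offset(img, dx=2, dy=1, half_patch=1)
    ```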

  4. ASIC-based architecture for the real-time computation of 2D convolution with large kernel size

    NASA Astrophysics Data System (ADS)

    Shao, Rui; Zhong, Sheng; Yan, Luxin

    2015-12-01

    Bidimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-large kernels. To improve the efficiency of on-chip storage resources and to reduce off-chip bandwidth, a data-cache reuse scheme is proposed: multi-block SPRAM cross-caches image data, and an on-chip ping-pong operation takes full advantage of data reuse in the convolution calculation; on this basis, a new ASIC data scheduling scheme and overall architecture are designed. Experimental results show that the structure can perform real-time convolution with template (kernel) sizes up to 40 × 32, improves the utilization of on-chip memory bandwidth and on-chip memory resources, maximizes data throughput, and reduces the need for off-chip memory bandwidth.

  5. The Conserved and Unique Genetic Architecture of Kernel Size and Weight in Maize and Rice

    PubMed Central

    Lan, Liu; Wang, Hongze; Xu, Yuancheng; Yang, Xiaohong; Li, Wenqiang; Tong, Hao; Xiao, Yingjie; Pan, Qingchun; Qiao, Feng; Raihan, Mohammad Sharif; Liu, Haijun; Yang, Ning; Wang, Xiaqing; Deng, Min; Jin, Minliang; Zhao, Lijun; Luo, Xin; Zhan, Wei; Liu, Nannan; Wang, Hong; Chen, Gengshen

    2017-01-01

    Maize (Zea mays) is a major staple crop. Maize kernel size and weight are important contributors to its yield. Here, we measured kernel length, kernel width, kernel thickness, hundred kernel weight, and kernel test weight in 10 recombinant inbred line populations and dissected their genetic architecture using three statistical models. In total, 729 quantitative trait loci (QTLs) were identified, many of which were identified in all three models, including 22 major QTLs that each can explain more than 10% of phenotypic variation. To provide candidate genes for these QTLs, we identified 30 maize genes that are orthologs of 18 rice (Oryza sativa) genes reported to affect rice seed size or weight. Interestingly, 24 of these 30 genes are located in the identified QTLs or within 1 Mb of the significant single-nucleotide polymorphisms. We further confirmed the effects of five genes on maize kernel size/weight in an independent association mapping panel with 540 lines by candidate gene association analysis. Lastly, the function of ZmINCW1, a homolog of rice GRAIN INCOMPLETE FILLING1 that affects seed size and weight, was characterized in detail. ZmINCW1 is close to QTL peaks for kernel size/weight (less than 1 Mb) and contains significant single-nucleotide polymorphisms affecting kernel size/weight in the association panel. Overexpression of this gene can rescue the reduced weight of the Arabidopsis (Arabidopsis thaliana) homozygous mutant line in the AtcwINV2 gene (Arabidopsis ortholog of ZmINCW1). These results indicate that the molecular mechanisms affecting seed development are conserved in maize, rice, and possibly Arabidopsis. PMID:28811335

  6. The Conserved and Unique Genetic Architecture of Kernel Size and Weight in Maize and Rice.

    PubMed

    Liu, Jie; Huang, Juan; Guo, Huan; Lan, Liu; Wang, Hongze; Xu, Yuancheng; Yang, Xiaohong; Li, Wenqiang; Tong, Hao; Xiao, Yingjie; Pan, Qingchun; Qiao, Feng; Raihan, Mohammad Sharif; Liu, Haijun; Zhang, Xuehai; Yang, Ning; Wang, Xiaqing; Deng, Min; Jin, Minliang; Zhao, Lijun; Luo, Xin; Zhou, Yang; Li, Xiang; Zhan, Wei; Liu, Nannan; Wang, Hong; Chen, Gengshen; Li, Qing; Yan, Jianbing

    2017-10-01

    Maize (Zea mays) is a major staple crop. Maize kernel size and weight are important contributors to its yield. Here, we measured kernel length, kernel width, kernel thickness, hundred kernel weight, and kernel test weight in 10 recombinant inbred line populations and dissected their genetic architecture using three statistical models. In total, 729 quantitative trait loci (QTLs) were identified, many of which were identified in all three models, including 22 major QTLs that each can explain more than 10% of phenotypic variation. To provide candidate genes for these QTLs, we identified 30 maize genes that are orthologs of 18 rice (Oryza sativa) genes reported to affect rice seed size or weight. Interestingly, 24 of these 30 genes are located in the identified QTLs or within 1 Mb of the significant single-nucleotide polymorphisms. We further confirmed the effects of five genes on maize kernel size/weight in an independent association mapping panel with 540 lines by candidate gene association analysis. Lastly, the function of ZmINCW1, a homolog of rice GRAIN INCOMPLETE FILLING1 that affects seed size and weight, was characterized in detail. ZmINCW1 is close to QTL peaks for kernel size/weight (less than 1 Mb) and contains significant single-nucleotide polymorphisms affecting kernel size/weight in the association panel. Overexpression of this gene can rescue the reduced weight of the Arabidopsis (Arabidopsis thaliana) homozygous mutant line in the AtcwINV2 gene (Arabidopsis ortholog of ZmINCW1). These results indicate that the molecular mechanisms affecting seed development are conserved in maize, rice, and possibly Arabidopsis. © 2017 American Society of Plant Biologists. All Rights Reserved.

  7. gkmSVM: an R package for gapped-kmer SVM

    PubMed Central

    Ghandi, Mahmoud; Mohammad-Noori, Morteza; Ghareghani, Narges; Lee, Dongwon; Garraway, Levi; Beer, Michael A.

    2016-01-01

    Summary: We present a new R package for training gapped-kmer SVM classifiers for DNA and protein sequences. We describe an improved algorithm for kernel matrix calculation that speeds run time by about 2 to 5-fold over our original gkmSVM algorithm. This package supports several sequence kernels, including: gkmSVM, kmer-SVM, mismatch kernel and wildcard kernel. Availability and Implementation: The gkmSVM package is freely available through the Comprehensive R Archive Network (CRAN), for Linux, Mac OS and Windows platforms. The C++ implementation is available at www.beerlab.org/gkmsvm. Contact: mghandi@gmail.com or mbeer@jhu.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153639

  8. Towards Formal Verification of a Separation Microkernel

    NASA Astrophysics Data System (ADS)

    Butterfield, Andrew; Sanan, David; Hinchey, Mike

    2013-08-01

    The best approach to verifying an IMA separation kernel is to use a (fixed) time-space partitioning kernel with a multiple independent levels of separation (MILS) architecture. We describe an activity that explores the cost and feasibility of doing a formal verification of such a kernel to the Common Criteria (CC) levels mandated by the Separation Kernel Protection Profile (SKPP). We are developing a Reference Specification of such a kernel, and are using higher-order logic (HOL) to construct formal models of this specification and key separation properties. We then plan to do a dry run of part of a formal proof of those properties using the Isabelle/HOL theorem prover.

  9. Novel characterization method of impedance cardiography signals using time-frequency distributions.

    PubMed

    Escrivá Muñoz, Jesús; Pan, Y; Ge, S; Jensen, E W; Vallverdú, M

    2018-03-16

    The purpose of this document is to describe a methodology to select the most adequate time-frequency distribution (TFD) kernel for the characterization of impedance cardiography signals (ICG). The predominant ICG beat was extracted from a patient and was synthesized using time-frequency variant Fourier approximations. These synthesized signals were used to optimize several TFD kernels according to a performance maximization. The optimized kernels were tested for noise resistance on a clinical database. The resulting optimized TFD kernels are presented with their performance calculated using newly proposed methods. The procedure explained in this work showcases a new method to select an appropriate kernel for ICG signals and compares the performance of different time-frequency kernels found in the literature for the case of ICG signals. We conclude that, for ICG signals, the performance (P) of the spectrogram with either Hanning or Hamming windows (P = 0.780) and the extended modified beta distribution (P = 0.765) provided similar results, higher than the rest of the analyzed kernels. Graphical abstract: Flowchart for the optimization of time-frequency distribution kernels for impedance cardiography signals.

  10. Security Tagged Architecture Co-Design (STACD)

    DTIC Science & Technology

    2015-09-01

    components have access to all other system components whether they need it or not. Microkernels [8, 9, 10] seek to reduce the kernel size to improve...does not provide the fine-grained control to allow for formal verification. Microkernels reduce the size of the kernel enough to allow for a formal...verification of the kernel. Tanenbaum [14] documents many of the security virtues of microkernels and argues that...

  11. gkmSVM: an R package for gapped-kmer SVM.

    PubMed

    Ghandi, Mahmoud; Mohammad-Noori, Morteza; Ghareghani, Narges; Lee, Dongwon; Garraway, Levi; Beer, Michael A

    2016-07-15

    We present a new R package for training gapped-kmer SVM classifiers for DNA and protein sequences. We describe an improved algorithm for kernel matrix calculation that speeds run time by about 2 to 5-fold over our original gkmSVM algorithm. This package supports several sequence kernels, including: gkmSVM, kmer-SVM, mismatch kernel and wildcard kernel. The gkmSVM package is freely available through the Comprehensive R Archive Network (CRAN), for Linux, Mac OS and Windows platforms. The C++ implementation is available at www.beerlab.org/gkmsvm. Contact: mghandi@gmail.com or mbeer@jhu.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  12. Source Code Analysis Laboratory (SCALe)

    DTIC Science & Technology

    2012-04-01

    True positives (TP) versus flagged nonconformities (FNC), by software system (TP/FNC ratio): Mozilla Firefox version 2.0, 6/12, 50%; Linux kernel version 2.6.15, 10/126, 8...is inappropriately tuned for analysis of the Linux kernel, which has anomalous results. Customizing SCALe to work with software for a particular...servers support a collection of virtual machines (VMs) that can be configured to support analysis in various environments, such as Windows XP and Linux. A

  13. A Robustness Testing Campaign for IMA-SP Partitioning Kernels

    NASA Astrophysics Data System (ADS)

    Grixti, Stephen; Lopez Trecastro, Jorge; Sammut, Nicholas; Zammit-Mangion, David

    2015-09-01

    With time and space partitioned architectures becoming increasingly appealing to the European space sector, the dependability of partitioning kernel technology is a key factor to its applicability in European Space Agency projects. This paper explores the potential of the data type fault model, which injects faults through the Application Program Interface, in partitioning kernel robustness testing. This fault injection methodology has been tailored to investigate its relevance in uncovering vulnerabilities within partitioning kernels and potentially contributing towards fault removal campaigns within this domain. This is demonstrated through a robustness testing case study of the XtratuM partitioning kernel for SPARC LEON3 processors. The robustness campaign exposed a number of vulnerabilities in XtratuM, exhibiting the potential benefits of using such a methodology for the robustness assessment of partitioning kernels.

  14. Scalable and Power Efficient Data Analytics for Hybrid Exascale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhary, Alok; Samatova, Nagiza; Wu, Kesheng

    This project developed a generic and optimized set of core data analytics functions. These functions organically consolidate a broad constellation of high performance analytical pipelines. As the architectures of emerging HPC systems become inherently heterogeneous, there is a need to design algorithms for data analysis kernels accelerated on hybrid multi-node, multi-core HPC architectures comprised of a mix of CPUs, GPUs, and SSDs. Furthermore, the power-aware trend drives the advances in our performance-energy tradeoff analysis framework which enables our data analysis kernels algorithms and software to be parameterized so that users can choose the right power-performance optimizations.

  15. Detection of Splice Sites Using Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Varadwaj, Pritish; Purohit, Neetesh; Arora, Bhumika

    Automatic identification and annotation of exon and intron regions of genes from DNA sequences has been an important research area in the field of computational biology. Several approaches, viz. Hidden Markov Model (HMM), Artificial Intelligence (AI) based machine learning and Digital Signal Processing (DSP) techniques, have extensively and independently been used by various researchers to address this challenging task. In this work, we propose a Support Vector Machine based kernel learning approach for detection of splice sites (the exon-intron boundary) in a gene. Electron-Ion Interaction Potential (EIIP) values of nucleotides have been used for mapping character sequences to corresponding numeric sequences. A Radial Basis Function (RBF) SVM kernel is trained using the EIIP numeric sequences. Furthermore, this was tested on a test gene dataset for detection of splice sites by shifting a window of 12 residues. Optimum values of the window size and other important SVM kernel parameters have been selected for better accuracy. Receiver Operating Characteristic (ROC) curves have been utilized for displaying the sensitivity rate of the classifier, and results showed 94.82% accuracy for splice site detection on the test dataset.
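
    A small sketch of the described pipeline under stated assumptions: nucleotides are mapped to the commonly quoted EIIP values (A 0.1260, C 0.1340, G 0.0806, T 0.1335), 12-residue windows become numeric vectors, and an RBF-kernel SVM is trained on them. The data below is synthetic, so it only illustrates the interface, not the reported 94.82% accuracy.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    # Commonly quoted EIIP values for nucleotides (illustrative here).
    EIIP = {'A': 0.1260, 'C': 0.1340, 'G': 0.0806, 'T': 0.1335}

    def encode(seq):
        """Map a nucleotide window (e.g. 12 residues) to an EIIP numeric vector."""
        return np.array([EIIP[b] for b in seq])

    rng = np.random.default_rng(4)
    bases = np.array(list('ACGT'))

    def random_window(donor_site=False, size=12):
        w = rng.choice(bases, size=size)
        if donor_site:
            w[4:6] = ['G', 'T']          # toy "GT" donor-site consensus in the middle
        return ''.join(w)

    windows = [random_window(donor_site=i % 2 == 0) for i in range(400)]
    X = np.array([encode(w) for w in windows])
    y = np.array([i % 2 == 0 for i in range(400)], dtype=int)

    clf = SVC(kernel='rbf', C=10.0, gamma='scale').fit(X[:300], y[:300])
    print('held-out accuracy:', clf.score(X[300:], y[300:]))
    ```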

  16. Genetic analysis of grain attributes, milling performance, and end-use quality traits in hard red spring wheat (Triticum aestivum L.)

    USDA-ARS?s Scientific Manuscript database

    Wheat kernel texture dictates U.S. wheat market class and culinary end-uses. Of interest to wheat breeders is to identify quantitative trait loci (QTL) for wheat kernel texture, milling performance, or end-use quality because it is imperative for wheat breeders to ascertain the genetic architecture ...

  17. Comparative analysis of genetic architectures for nine developmental traits of rye.

    PubMed

    Masojć, Piotr; Milczarski, P; Kruszona, P

    2017-08-01

    Genetic architectures of plant height, stem thickness, spike length, awn length, heading date, thousand-kernel weight, kernel length, leaf area and chlorophyll content were aligned on the DArT-based high-density map of the 541 × Ot1-3 RILs population of rye using the genes interaction assorting by divergent selection (GIABDS) method. Complex sets of QTL for particular traits contained 1-5 loci of the epistatic D class and 10-28 loci of the hypostatic, mostly R and E classes controlling trait variation through D-E or D-R types of two-loci interactions. QTL were distributed on each of the seven rye chromosomes in unique positions or as coinciding loci for 2-8 traits. Detection of considerable numbers of the reversed (D', E' and R') classes of QTL might be attributed to the transgression effects observed for most of the studied traits. First examples of E* and F QTL classes, defined in the model, are reported for awn length, leaf area, thousand-kernel weight and kernel length. The results of this study extend experimental data to 11 quantitative traits (together with pre-harvest sprouting and alpha-amylase activity) for which genetic architectures fit the model of mechanism underlying alleles distribution within tails of bi-parental populations. They are also a valuable starting point for map-based search of genes underlying detected QTL and for planning advanced marker-assisted multi-trait breeding strategies.

  18. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    PubMed Central

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
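
    A toy sketch of the composite-kernel idea: candidate kernel matrices (here a linear kernel and an identity-by-state kernel, two common choices in this literature) are combined by a convex weighting so that the analysis does not hinge on a single kernel choice. The weights and the perturbation-based testing procedure from the paper are not shown; all names and data are illustrative.

    ```python
    import numpy as np

    def linear_kernel(G):
        return G @ G.T

    def ibs_kernel(G):
        """Identity-by-state kernel for genotypes coded 0/1/2."""
        n, p = G.shape
        return 1.0 - np.abs(G[:, None, :] - G[None, :, :]).sum(axis=2) / (2.0 * p)

    def composite_kernel(kernels, weights):
        """Convex combination of candidate kernel matrices."""
        weights = np.asarray(weights, dtype=float)
        weights /= weights.sum()
        return sum(w * K for w, K in zip(weights, kernels))

    rng = np.random.default_rng(5)
    G = rng.integers(0, 3, size=(30, 8)).astype(float)   # 30 subjects, 8 SNPs
    K = composite_kernel([linear_kernel(G), ibs_kernel(G)], [0.5, 0.5])
    ```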

  19. Multi-threaded Sparse Matrix Sparse Matrix Multiplication for Many-Core and GPU Architectures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deveci, Mehmet; Trott, Christian Robert; Rajamanickam, Sivasankaran

    Sparse Matrix-Matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.
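
    For illustration, a minimal Python sketch of the row-by-row (Gustavson) SpGEMM formulation with a hash-map accumulator, one of the accumulator types such algorithms compare; the dict-of-dicts storage and names are illustrative and unrelated to the kkSpGEMM implementation itself.

    ```python
    from collections import defaultdict

    def spgemm(A, B):
        """C = A @ B for sparse matrices stored as {row: {col: value}} dicts,
        using a per-row hash-map accumulator (Gustavson's algorithm)."""
        C = {}
        for i, a_row in A.items():
            acc = defaultdict(float)
            for k, a_ik in a_row.items():             # nonzeros of row i of A
                for j, b_kj in B.get(k, {}).items():  # nonzeros of row k of B
                    acc[j] += a_ik * b_kj
            C[i] = dict(acc)
        return C

    A = {0: {0: 1.0, 2: 2.0}, 1: {1: 3.0}}
    B = {0: {1: 4.0}, 1: {0: 5.0}, 2: {1: -1.0}}
    print(spgemm(A, B))   # {0: {1: 2.0}, 1: {0: 15.0}}
    ```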

  20. Multi-threaded Sparse Matrix-Matrix Multiplication for Many-Core and GPU Architectures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deveci, Mehmet; Rajamanickam, Sivasankaran; Trott, Christian Robert

    Sparse Matrix-Matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.

  1. Research on Separation of Three Powers Architecture for Trusted OS

    NASA Astrophysics Data System (ADS)

    Li, Yu; Zhao, Yong; Xin, Siyuan

    The privilege in the operating system (OS) often results in the break of confidentiality and integrity of the system. To solve this problem, several security mechanisms have been proposed, such as Role-Based Access Control and Separation of Duty. However, these mechanisms cannot eliminate the privilege in the OS kernel layer. This paper proposes a Separation of Three Powers Architecture (STPA). The authorizations in the OS are divided into three parts: the System Management Subsystem (SMS), the Security Management Subsystem (SEMS) and the Audit Subsystem (AS). Mutual support and mutual checks and balances, which are the design principles of STPA, eliminate the administrator in the kernel layer. Furthermore, the paper gives a formal description of the authorization division using graph theory. Finally, the implementation of STPA is given. Experiments show that the proposed Separation of Three Powers Architecture can provide reliable protection for the OS through authorization division.

  2. Adaptive Fault-Resistant Systems

    DTIC Science & Technology

    1994-10-01

    An Architectural Overview of the Alpha Real-Time Distributed Kernel. In Proceedings of the USENIX Workshop on Microkernels and Other Kernel...system and the controller are monolithic. We have noted earlier some of the problems of distributed systems, for example, the need to bound the...are monolithic. In practice, designers employ a layered structuring for their systems in order to manage complexity, and we expect that practical

  3. Secure Computer System: Unified Exposition and Multics Interpretation

    DTIC Science & Technology

    1976-03-01

    prearranged code to semaphore critical information to an undercleared subject/process. Neither of these topics is directly addressed by the mathematical...FURTHER CONSIDERATIONS. RULES OF OPERATION FOR A SECURE MULTICS: Kernel primitives for a secure Multics will be derived from a higher level user...the Multics architecture as little as possible; this will account to a large extent for radical differences in form between actual kernel primitives

  4. A comparative study of linear and nonlinear anomaly detectors for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Goldberg, Hirsh; Nasrabadi, Nasser M.

    2007-04-01

    In this paper we implement various linear and nonlinear subspace-based anomaly detectors for hyperspectral imagery. First, a dual window technique is used to separate the local area around each pixel into two regions - an inner-window region (IWR) and an outer-window region (OWR). Pixel spectra from each region are projected onto a subspace which is defined by projection bases that can be generated in several ways. Here we use three common pattern classification techniques (Principal Component Analysis (PCA), Fisher Linear Discriminant (FLD) Analysis, and the Eigenspace Separation Transform (EST)) to generate projection vectors. In addition to these three algorithms, the well-known Reed-Xiaoli (RX) anomaly detector is also implemented. Each of the four linear methods is then implicitly defined in a high- (possibly infinite-) dimensional feature space by using a nonlinear mapping associated with a kernel function. Using a common machine-learning technique known as the kernel trick all dot products in the feature space are replaced with a Mercer kernel function defined in terms of the original input data space. To determine how anomalous a given pixel is, we then project the current test pixel spectra and the spectral mean vector of the OWR onto the linear and nonlinear projection vectors in order to exploit the statistical differences between the IWR and OWR pixels. Anomalies are detected if the separation of the projection of the current test pixel spectra and the OWR mean spectra are greater than a certain threshold. Comparisons are made using receiver operating characteristics (ROC) curves.
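
    A minimal sketch of the dual-window RX baseline described above: background statistics are estimated from the outer-window ring only (the inner window acts as a guard around the pixel under test) and the anomaly score is the Mahalanobis distance of the test pixel. The kernelized variants in the paper replace these dot products with a Mercer kernel function; sizes and names here are illustrative.

    ```python
    import numpy as np

    def dual_window_rx(cube, inner=3, outer=7):
        """Local RX anomaly score per pixel of a (rows, cols, bands) cube.
        Background mean/covariance come from the outer-window ring, excluding
        the inner guard window around the pixel under test."""
        rows, cols, bands = cube.shape
        scores = np.zeros((rows, cols))
        ho, hi = outer // 2, inner // 2
        for r in range(ho, rows - ho):
            for c in range(ho, cols - ho):
                block = cube[r - ho:r + ho + 1, c - ho:c + ho + 1].reshape(-1, bands)
                mask = np.ones((outer, outer), dtype=bool)
                mask[ho - hi:ho + hi + 1, ho - hi:ho + hi + 1] = False  # drop IWR
                bg = block[mask.ravel()]
                mu = bg.mean(axis=0)
                cov = np.cov(bg, rowvar=False) + 1e-6 * np.eye(bands)
                d = cube[r, c] - mu
                scores[r, c] = d @ np.linalg.solve(cov, d)
        return scores

    cube = np.random.default_rng(6).normal(size=(20, 20, 5))
    cube[10, 10] += 5.0                      # implant an anomalous pixel
    idx = np.unravel_index(np.argmax(dual_window_rx(cube)), (20, 20))
    print(idx)                               # expected: the implanted pixel (10, 10)
    ```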

  5. SMART (Sandia's Modular Architecture for Robotics and Teleoperation) Ver. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert

    "SMART Ver. 0.8 Beta" provides a system developer with software tools to create a telerobotic control system, i.e., a system whereby an end-user can interact with mechatronic equipment. It consists of three main components: the SMART Editor (tsmed), the SMART Real-time kernel (rtos), and the SMART Supervisor (gui). The SMART Editor is a graphical icon-based code generation tool for creating end-user systems, given descriptions of SMART modules. The SMART real-time kernel implements behaviors that combine modules representing input devices, sensors, constraints, filters, and robotic devices. Included with this software release is a number of core modules, which can be combinedmore » with additional project and device specific modules to create a telerobotic controller. The SMART Supervisor is a graphical front-end for running a SMART system. It is an optional component of the SMART Environment and utilizes the TeVTk windowing and scripting environment. Although the code contained within this release is complete, and can be utilized for defining, running, and interfacing to a sample end-user SMART system, most systems will include additional project and hardware specific modules developed either by the system developer or obtained independently from a SMART module developer. SMART is a software system designed to integrate the different robots, input devices, sensors and dynamic elements required for advanced modes of telerobotic control. "SMART Ver. 0.8 Beta" defines and implements a telerobotic controller. A telerobotic system consists of combinations of modules that implement behaviors. Each real-time module represents an input device, robot device, sensor, constraint, connection or filter. The underlying theory utilizes non-linear discretized multidimensional network elements to model each individual module, and guarantees that upon a valid connection, the resulting system will perform in a stable fashion. Different combinations of modules implement different behaviors. Each module must have at a minimum an initialization routine, a parameter adjustment routine, and an update routine. The SMART runtime kernel runs continuously within a real-time embedded system. Each module is first set-up by the kernel, initialized, and then updated at a fixed rate whenever it is in context. The kernel responds to operator directed commands by changing the state of the system, changing parameters on individual modules, and switching behavioral modes. The SMART Editor is a tool used to define, verify, configure and generate source code for a SMART control system. It uses icon representations of the modules, code patches from valid configurations of the modules, and configuration files describing how a module can be connected into a system to lead the end-user in through the steps needed to create a final system. The SMART Supervisor serves as an interface to a SMART run-time system. It provides an interface on a host computer that connects to the embedded system via TCPIIP ASCII commands. It utilizes a scripting language (Tel) and a graphics windowing environment (Tk). This system can either be customized to fit an end-user's needs or completely replaced as needed.« less

  6. Computer Architecture's Changing Role in Rebooting Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeBenedictis, Erik P.

    Windows 95 started the Wintel era, in which Microsoft Windows running on Intel x86 microprocessors dominated the computer industry and changed the world. Retaining the x86 instruction set across many generations let users buy new and more capable microprocessors without having to buy software to work with new architectures.

  7. Computer Architecture's Changing Role in Rebooting Computing

    DOE PAGES

    DeBenedictis, Erik P.

    2017-04-26

    Windows 95 started the Wintel era, in which Microsoft Windows running on Intel x86 microprocessors dominated the computer industry and changed the world. Retaining the x86 instruction set across many generations let users buy new and more capable microprocessors without having to buy software to work with new architectures.

  8. Predicting complex traits using a diffusion kernel on genetic markers with an application to dairy cattle and wheat data

    PubMed Central

    2013-01-01

    Background Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
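
    For readers unfamiliar with the Gaussian kernel baseline discussed above, a minimal sketch of building a Gaussian kernel from marker genotypes and using it for kernel ridge (RKHS-style) prediction is given below; the marker coding, bandwidth heuristic, and data are hypothetical, and the diffusion kernel itself is not implemented here.

    ```python
    import numpy as np

    def gaussian_kernel(A, B, theta):
        """Gaussian kernel between marker matrices (rows = individuals, SNPs coded 0/1/2)."""
        d2 = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T
        return np.exp(-theta * np.maximum(d2, 0.0))

    def kernel_ridge_predict(K, y, K_star, lam=1.0):
        """RKHS-style prediction: alpha = (K + lam*I)^-1 y, y_hat = K_star @ alpha."""
        alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
        return K_star @ alpha

    rng = np.random.default_rng(1)
    M_train = rng.integers(0, 3, size=(200, 500)).astype(float)   # hypothetical genotypes
    y_train = M_train[:, :10].sum(axis=1) + rng.normal(size=200)  # hypothetical phenotype
    M_test = rng.integers(0, 3, size=(20, 500)).astype(float)

    theta = 1.0 / M_train.shape[1]            # simple bandwidth heuristic (an assumption)
    K = gaussian_kernel(M_train, M_train, theta)
    K_star = gaussian_kernel(M_test, M_train, theta)
    y_hat = kernel_ridge_predict(K, y_train, K_star)
    print(y_hat.shape)                        # (20,) predicted phenotypic values
    ```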

  9. Laser Ignition Technology for Bi-Propellant Rocket Engine Applications

    NASA Technical Reports Server (NTRS)

    Thomas, Matthew E.; Bossard, John A.; Early, Jim; Trinh, Huu; Dennis, Jay; Turner, James (Technical Monitor)

    2001-01-01

    The fiber optically coupled laser ignition approach summarized is under consideration for use in igniting bi-propellant rocket thrust chambers. This laser ignition approach is based on a novel dual-pulse format capable of effectively increasing laser-generated plasma lifetimes by up to 1000% over conventional laser ignition methods. In the dual-pulse format under consideration here, an initial laser pulse is used to generate a small plasma kernel. A second laser pulse that effectively irradiates the plasma kernel follows this pulse. Energy transfer into the kernel is much more efficient because of its absorption characteristics, thereby allowing the kernel to develop into a much more effective ignition source for subsequent combustion processes. In this research effort both single and dual-pulse formats were evaluated in a small testbed rocket thrust chamber. The rocket chamber was designed to evaluate several bipropellant combinations. Optical access to the chamber was provided through small sapphire windows. Test results from gaseous oxygen (GOx) and RP-1 propellants are presented here. Several variables were evaluated during the test program, including spark location, pulse timing, and relative pulse energy. These variables were evaluated in an effort to identify the conditions in which laser ignition of bi-propellants is feasible. Preliminary results and analysis indicate that this laser ignition approach may provide superior ignition performance relative to squib and torch igniters, while simultaneously eliminating some of the logistical issues associated with these systems. Further research focused on enhancing the system robustness, multiplexing, and window durability/cleaning and fiber optic enhancements is in progress.

  10. Defense Information Systems Agency Technical Integration Support (DISA- TIS). MUMPS Study.

    DTIC Science & Technology

    1993-01-01

    usable in DoD, MUMPS must continue to improve in its support of DoD and OSE standards such as SQL, X-Windows, POSIX, PHIGS, etc. MUMPS and large AISs...Language (SQL), X-Windows, and Graphical Kernel Services (GKS)) 2.2.2.3 FIPS Adoption by NIST The National Institute of Standards and Technology (NIST...many of the performance tuning mechanisms that must be performed explicitly with other systems. The VA looks forward to the SQL binding (1993 ANS) that

  11. Performance Evaluation of Synthetic Benchmarks and Image Processing (IP) Kernels on Intel and PowerPC Processors

    DTIC Science & Technology

    2013-08-01

    2006 Linux Q1 2005 Pentium D (830) 3 2/2 2511 1148 3617 Windows Vista Q2 2005 Pentium D (830) 3 2/2 2938 1155 3556 Windows XP Q2 2005 PowerPC 970MP 2...1 3734 3439 1304 Cell Broadband Engine 3.2 1/1 0.207 2006 239 441 Pentium D (830) 3 2/2 2 3617 2511 1148 Pentium D (830) 3 2/2 2 3556 2938 1155

  12. A Security Architecture for Fault-Tolerant Systems

    DTIC Science & Technology

    1993-06-03

    aspect of our effort to achieve better performance is integrating the system into microkernel-based operating systems. 4 Summary and discussion In...135-171, June 1983. [vRBC+92] R. van Renesse, K. Birman, R. Cooper, B. Glade, and P. Stephenson. Reliable multicast between microkernels. In...Proceedings of the USENIX Microkernels and Other Kernel Architectures Workshop, April 1992. 29

  13. Compiler-Driven Performance Optimization and Tuning for Multicore Architectures

    DTIC Science & Technology

    2015-04-10

    develop a powerful system for auto-tuning of library routines and compute-intensive kernels, driven by the Pluto system for multicores that we are developing. The work here is motivated by recent advances in two major areas of...automatic C-to-CUDA code generator using a polyhedral compiler transformation framework. We have used and adapted PLUTO (our state-of-the-art tool

  14. Parallel language constructs for tensor product computations on loosely coupled architectures

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Van Rosendale, John

    1989-01-01

    A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The authors focus on tensor product array computations, a simple but important class of numerical algorithms. They consider first the problem of programming one-dimensional kernel routines, such as parallel tridiagonal solvers, and then look at how such parallel kernels can be combined to form parallel tensor product algorithms.

  15. An Efficient Implementation of Deep Convolutional Neural Networks for MRI Segmentation.

    PubMed

    Hoseini, Farnaz; Shahbahrami, Asadollah; Bayat, Peyman

    2018-02-27

    Image segmentation is one of the most common steps in digital image processing, classifying a digital image into different segments. The main goal of this paper is to segment brain tumors in magnetic resonance images (MRI) using deep learning. Tumors having different shapes, sizes, brightness and textures can appear anywhere in the brain. These complexities are the reasons to choose a high-capacity Deep Convolutional Neural Network (DCNN) containing more than one layer. The proposed DCNN contains two parts: architecture and learning algorithms. The architecture and the learning algorithms are used to design a network model and to optimize parameters for the network training phase, respectively. The architecture contains five convolutional layers, all using 3 × 3 kernels, and one fully connected layer. Stacking small kernels in this way reproduces the receptive field of larger kernels with fewer parameters and fewer computations. Using the Dice Similarity Coefficient metric, we report accuracy results on the BRATS 2016 brain tumor segmentation challenge dataset for the complete, core, and enhancing regions as 0.90, 0.85, and 0.84 respectively. The learning algorithm exploits task-level parallelism. All the pixels of an MR image are classified using a patch-based approach for segmentation. We attain a good performance and the experimental results show that the proposed DCNN increases the segmentation accuracy compared to previous techniques.
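
    A minimal sketch of the kind of architecture described (five 3 × 3 convolutional layers followed by one fully connected layer, applied to image patches) is given below in PyTorch; the channel widths, patch size, and number of classes are assumptions, not values from the paper.

    ```python
    import torch
    import torch.nn as nn

    class PatchDCNN(nn.Module):
        """Sketch of a patch classifier with five 3x3 conv layers and one FC layer."""
        def __init__(self, in_ch=4, n_classes=5, width=32, patch=33):
            super().__init__()
            layers, ch = [], in_ch
            for _ in range(5):                   # five convolutional layers, all 3x3
                layers += [nn.Conv2d(ch, width, kernel_size=3, padding=1), nn.ReLU()]
                ch = width
            self.features = nn.Sequential(*layers)
            self.fc = nn.Linear(width * patch * patch, n_classes)  # one FC layer

        def forward(self, x):
            h = self.features(x)
            return self.fc(h.flatten(1))

    # Hypothetical usage: a batch of 4-channel (e.g. multi-modal MRI) 33x33 patches.
    model = PatchDCNN()
    logits = model(torch.randn(8, 4, 33, 33))
    print(logits.shape)                          # torch.Size([8, 5])
    ```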

  16. Deep learning architecture for iris recognition based on optimal Gabor filters and deep belief network

    NASA Astrophysics Data System (ADS)

    He, Fei; Han, Ye; Wang, Han; Ji, Jinchao; Liu, Yuanning; Ma, Zhiqiang

    2017-03-01

    Gabor filters are widely utilized to detect iris texture information in several state-of-the-art iris recognition systems. However, the proper Gabor kernels and the generative pattern of iris Gabor features need to be predetermined in application. The traditional empirical Gabor filters and shallow iris encoding schemes are incapable of dealing with the complex variations in iris imaging, including illumination, aging, deformation, and device variations. To address this, an adaptive Gabor filter selection strategy and deep learning architecture are presented. We first employ a particle swarm optimization approach and its binary version to define a set of data-driven Gabor kernels for fitting the most informative filtering bands, and then capture complex patterns from the optimal Gabor filtered coefficients by a trained deep belief network. A succession of comparative experiments validates that our optimal Gabor filters may produce more distinctive Gabor coefficients and that our iris deep representations are more robust and stable than traditional iris Gabor codes. Furthermore, the depth and scales of the deep learning architecture are also discussed.
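
    The Gabor kernels that the proposed strategy tunes can be generated from the standard Gabor formulation; the sketch below builds the real part of a 2-D Gabor filter with arbitrary (hypothetical) parameters of the sort a particle swarm search would optimize.

    ```python
    import numpy as np

    def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lam=10.0, gamma=0.5, psi=0.0):
        """Real part of a 2-D Gabor filter; sigma, theta, and lambda are the kind of
        parameters a data-driven search would tune (values here are arbitrary)."""
        half = ksize // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
        x_t = x * np.cos(theta) + y * np.sin(theta)
        y_t = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2.0 * sigma ** 2))
        carrier = np.cos(2.0 * np.pi * x_t / lam + psi)
        return envelope * carrier

    # A small bank over orientations; each kernel would be convolved with the
    # normalized iris image, and the filtered coefficients would then feed the
    # deep belief network described in the abstract.
    bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 8, endpoint=False)]
    print(len(bank), bank[0].shape)
    ```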

  17. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization

    PubMed Central

    Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996

  18. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization.

    PubMed

    Zhang, Chunyuan; Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms.
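
    The online sparsification step mentioned in both records is commonly realized with an approximate-linear-dependence (ALD) test; the sketch below shows a generic ALD dictionary with an RBF kernel and an arbitrary threshold, and is not claimed to be the exact procedure of these papers.

    ```python
    import numpy as np

    def rbf(a, b, sigma=1.0):
        return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

    class OnlineDictionary:
        """Online sparsification via an approximate-linear-dependence (ALD) test:
        a new state is added only if it is poorly represented by the dictionary."""
        def __init__(self, nu=0.1, sigma=1.0):
            self.nu, self.sigma, self.D = nu, sigma, []

        def observe(self, x):
            if not self.D:
                self.D.append(x)
                return True
            K = np.array([[rbf(a, b, self.sigma) for b in self.D] for a in self.D])
            k = np.array([rbf(a, x, self.sigma) for a in self.D])
            c = np.linalg.solve(K + 1e-9 * np.eye(len(self.D)), k)
            delta = rbf(x, x, self.sigma) - k @ c        # ALD residual
            if delta > self.nu:
                self.D.append(x)
                return True
            return False

    d = OnlineDictionary()
    for s in np.random.default_rng(0).normal(size=(200, 2)):   # hypothetical states
        d.observe(s)
    print("dictionary size:", len(d.D))
    ```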

  19. A message passing kernel for the hypercluster parallel processing test bed

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Quealy, Angela; Cole, Gary L.

    1989-01-01

    A Message-Passing Kernel (MPK) for the Hypercluster parallel-processing test bed is described. The Hypercluster is being developed at the NASA Lewis Research Center to support investigations of parallel algorithms and architectures for computational fluid and structural mechanics applications. The Hypercluster resembles the hypercube architecture except that each node consists of multiple processors communicating through shared memory. The MPK efficiently routes information through the Hypercluster, using a message-passing protocol when necessary and faster shared-memory communication whenever possible. The MPK also interfaces all of the processors with the Hypercluster operating system (HYCLOPS), which runs on a Front-End Processor (FEP). This approach distributes many of the I/O tasks to the Hypercluster processors and eliminates the need for a separate I/O support program on the FEP.

  20. Robotic intelligence kernel

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK at least the cognitive level includes the dynamic autonomy structure.

  1. Depth-time interpolation of feature trends extracted from mobile microelectrode data with kernel functions.

    PubMed

    Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F

    2012-01-01

    Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
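
    A kernel-interpolated spatial profile of the kind evaluated here can be sketched with a simple Nadaraya-Watson estimator; the data, bandwidth, and depth units below are hypothetical and only illustrate the kernel-width trade-off the abstract mentions.

    ```python
    import numpy as np

    def kernel_interpolate(depths, activity, query_depths, bandwidth=0.25):
        """Nadaraya-Watson style kernel interpolation of feature activity vs. depth;
        the bandwidth plays the role of the kernel width discussed above."""
        d = query_depths[:, None] - depths[None, :]
        w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weights
        return (w @ activity) / w.sum(axis=1)

    # Hypothetical trend: irregularly sampled depths (mm) and a feature value.
    rng = np.random.default_rng(2)
    depths = np.sort(rng.uniform(-5, 5, size=60))
    activity = np.sin(depths) + 0.1 * rng.normal(size=60)
    profile = kernel_interpolate(depths, activity, np.linspace(-5, 5, 101))
    print(profile.shape)                                 # (101,) interpolated profile
    ```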

  2. False Windows - Yesterday and Today

    NASA Astrophysics Data System (ADS)

    Niewitecki, Stefan

    2017-10-01

    The article is concerned with a very interesting aspect of architectural design, namely, a contradiction between the building functions and the necessity of giving the building a desired external appearance. One of the possibilities of reconciling this contradiction is using pseudo windows that are visible on the elevation and generally have the form of a black-painted recess accompanied by frames and sashes and often single glazing. Of course, there are no windows or openings in the corresponding places in the walls inside the building. The article discusses the differences between false windows and blind windows (German: blende), also known as blank windows, which, in fact, are shallow recesses in the wall having the external appearance of an arcade or a window and which had already been used in Gothic architecture mostly for aesthetic reasons and sometimes to reduce the load of the wall. Moreover, the article describes various false windows that appeared later than blind windows because they did not appear until the 17th century. Contemporary false windows are also discussed and it is shown that, contrary to the common belief, they are widely used. In his research, the author not only used Internet data but also carried out his own in situ exploration. False windows constitute very interesting albeit rare elements of the architectural design of buildings. They have been used successfully for a few hundred years. It might seem that they should have been discarded by now but this has not happened. Quite the contrary, since the second half of the 20th century there has been a rapid development of glass curtain walls that serve a similar function in contemporary buildings as the false windows once did, only in a more extensive way.

  3. Fine-mapping of qGW4.05, a major QTL for kernel weight and size in maize.

    PubMed

    Chen, Lin; Li, Yong-xiang; Li, Chunhui; Wu, Xun; Qin, Weiwei; Li, Xin; Jiao, Fuchao; Zhang, Xiaojing; Zhang, Dengfeng; Shi, Yunsu; Song, Yanchun; Li, Yu; Wang, Tianyu

    2016-04-12

    Kernel weight and size are important components of grain yield in cereals. Although some information is available concerning the map positions of quantitative trait loci (QTL) for kernel weight and size in maize, little is known about the molecular mechanisms of these QTLs. qGW4.05 is a major QTL that is associated with kernel weight and size in maize. We combined linkage analysis and association mapping to fine-map and identify candidate gene(s) at qGW4.05. QTL qGW4.05 was fine-mapped to a 279.6-kb interval in a segregating population derived from a cross of Huangzaosi with LV28. By combining the results of regional association mapping and linkage analysis, we identified GRMZM2G039934 as a candidate gene responsible for qGW4.05. Candidate gene-based association mapping was conducted using a panel of 184 inbred lines with variable kernel weights and kernel sizes. Six polymorphic sites in the gene GRMZM2G039934 were significantly associated with kernel weight and kernel size. The results of linkage analysis and association mapping revealed that GRMZM2G039934 is the most likely candidate gene for qGW4.05. These results will improve our understanding of the genetic architecture and molecular mechanisms underlying kernel development in maize.

  4. Continuation of research into software for space operations support: Conversion of the display manager to X Windows/Motif, volume 2

    NASA Technical Reports Server (NTRS)

    Collier, Mark D.; Killough, Ronnie; Martin, Nancy L.

    1990-01-01

    NASA is currently using a set of applications called the Display Builder and Display Manager. They run on Concurrent systems and heavily depend on the Graphical Kernel System (GKS). At this time, however, these two applications would more appropriately be developed in X Windows, in which low-level X is used for all actual text and graphics display and a standard widget set (such as Motif) is used for the user interface. Use of X Windows will increase performance, improve the user interface, enhance portability, and improve reliability. The prototype of the X Windows/Motif-based Display Manager provides the following advantages over a GKS-based application: improved performance, because by using low-level X Windows the display of graphics and text is more efficient; an improved user interface by using Motif; improved portability by operating on both Concurrent and Sun workstations; and improved reliability.

  5. High-Throughput, Adaptive FFT Architecture for FPGA-Based Spaceborne Data Processors

    NASA Technical Reports Server (NTRS)

    NguyenKobayashi, Kayla; Zheng, Jason X.; He, Yutao; Shah, Biren N.

    2011-01-01

    Exponential growth in microelectronics technology such as field-programmable gate arrays (FPGAs) has enabled high-performance spaceborne instruments with increasing onboard data processing capabilities. As a commonly used digital signal processing (DSP) building block, the fast Fourier transform (FFT) has been of great interest in onboard data processing applications, which need to strike a reasonable balance between high performance (throughput, block size, etc.) and low resource usage (power, silicon footprint, etc.). It is also desirable that a single design can be reused and adapted into instruments with different requirements. The Multi-Pass Wide Kernel FFT (MPWK-FFT) architecture was developed, in which the high-throughput benefits of the parallel FFT structure and the low resource usage of Singleton's single-butterfly method are exploited. The result is a wide-kernel, multipass, adaptive FFT architecture. The 32K-point MPWK-FFT architecture includes 32 radix-2 butterflies, 64 FIFOs to store the real inputs, 64 FIFOs to store the imaginary inputs, complex twiddle factor storage, and FIFO logic to route the outputs to the correct FIFO. The inputs are stored in sequential fashion into the FIFOs, and the outputs of each butterfly are sequentially written first into the even FIFO, then the odd FIFO. Because of the order of the outputs written into the FIFOs, the depth of the even FIFOs, which are 768 each, is 1.5 times larger than that of the odd FIFOs, which are 512 each. The total memory needed for data storage, assuming that each sample is 36 bits, is 2.95 Mbits. The twiddle factors are stored in internal ROM inside the FPGA for fast access time. The total memory size to store the twiddle factors is 589.9 Kbits. This FFT structure combines the benefits of high throughput from the parallel FFT kernels and low resource usage from the multi-pass FFT kernels with desired adaptability. Space instrument missions that need onboard FFT capabilities such as the proposed DESDynI, SWOT (Surface Water Ocean Topography), and Europa sounding radar missions would greatly benefit from this technology with significant reductions in non-recurring cost and risk.
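
    The quoted data-storage figure can be checked directly from the stated FIFO depths, assuming that half of the 128 input FIFOs (64) are the deeper even FIFOs and half are the odd FIFOs:

    ```latex
    \[
    \underbrace{64 \times 768}_{\text{even FIFOs}} + \underbrace{64 \times 512}_{\text{odd FIFOs}}
      = 81{,}920 \ \text{samples},
    \qquad
    81{,}920 \times 36 \ \text{bits} = 2{,}949{,}120 \ \text{bits} \approx 2.95 \ \text{Mbits}.
    \]
    ```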

  6. Design and Implementation of Davis Social Links OSN Kernel

    NASA Astrophysics Data System (ADS)

    Tran, Thomas; Chan, Kelcey; Ye, Shaozhi; Bhattacharyya, Prantik; Garg, Ankush; Lu, Xiaoming; Wu, S. Felix

    Social network popularity continues to rise as they broaden out to more users. Hidden away within these social networks is a valuable set of data that outlines everyone’s relationships. Networks have created APIs such as the Facebook Development Platform and OpenSocial that allow developers to create applications that can leverage user information. However, at the current stage, the social network support for these new applications is fairly limited in its functionality. Most, if not all, of the existing internet applications such as email, BitTorrent, and Skype cannot benefit from the valuable social network among their own users. In this paper, we present an architecture that couples two different communication layers together: the end2end communication layer and the social context layer, under the Davis Social Links (DSL) project. Our proposed architecture attempts to preserve the original application semantics (i.e., we can use Thunderbird or Outlook, unmodified, to read our SMTP emails) and provides the communicating parties (email sender and receivers) a social context for control and management. For instance, the receiver can set trust policy rules based on the social context between the pair, to determine how a particular email in question should be prioritized for delivery to the SMTP layer. Furthermore, as our architecture includes two coupling layers, it is then possible, as an option, to shift some of the services from the original applications into the social context layer. In the context of email, for example, our architecture allows users to choose operations, such as reply, reply-all, and forward, to be realized in either the application layer or the social network layer. And, the realization of these operations under the social network layer offers powerful features unavailable in the original applications. To validate our coupling architecture, we have implemented a DSL kernel prototype as a Facebook application called CyrusDSL (currently about 40 local users) and a simple communication application combined into the DSL kernel but is unaware of Facebook’s API.

  7. A support architecture for reliable distributed computing systems

    NASA Technical Reports Server (NTRS)

    Mckendry, Martin S.

    1986-01-01

    The Clouds kernel design went through several design phases and is nearly complete. The object manager, the process manager, the storage manager, the communications manager, and the actions manager are examined.

  8. Rootkit Detection Using a Cross-View Clean Boot Method

    DTIC Science & Technology

    2013-03-01

    FindNextFile: [2] Kernel32.dll 4. SSDT Hooks ... CALL NtQueryDirectoryFile 5. Code Patching 6. Layered Driver NtQueryDirectoryFile 7...NTFS Driver, Volume Manager, Disk Driver [2] 1. Disk Driver ... IAT hooks take advantage of function calls in applications [13]. When an...f36e923898161fa7be50810288e2f48a 61 Appendix D: Windows Source Code Windows Batch File @echo off python walk.py pause shutdown -r -t 0 Walk.py in

  9. Computer Program Development Specification for Ada Integrated Environment: KAPSE (Kernel Ada Programming Support Environment)/Database, Type B5, B5-AIE(1).KAPSE(1).

    DTIC Science & Technology

    1982-11-12

    [Figure 3 block-diagram labels: File I/O, Program Invocation, Other Access and Control Services; KAPSE/Host Interface; Host Operating System; Peripherals/Networks]...3.2.4.3.8.5 Transitory Windows The TRANSITORY flag is used to prevent permanent dependence on temporary windows created simply for focusing on a part of the...KAPSE/Tool interfaces in terms of these low-level host-independent interfaces. In addition, the KAPSE/Host interface packages prevent the application

  10. A custom hardware classifier for bruised apple detection in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Cárdenas, Javier; Figueroa, Miguel; Pezoa, Jorge E.

    2015-09-01

    We present a custom digital architecture for bruised apple classification using hyperspectral images in the near infrared (NIR) spectrum. The algorithm classifies each pixel in an image into one of three classes: bruised, non-bruised, and background. We extract two 5-element feature vectors for each pixel using only 10 out of the 236 spectral bands provided by the hyperspectral camera, thereby greatly reducing both the requirements of the imager and the computational complexity of the algorithm. We then use two linear-kernel support vector machines (SVMs) to classify each pixel. Each SVM was trained, for each class, with 504 windows of 17×17 pixels taken from 14 hyperspectral images of 320×320 pixels each. The architecture then computes the percentage of bruised pixels in each apple in order to adequately classify the fruit. We implemented the architecture on a Xilinx Zynq Z-7010 field-programmable gate array (FPGA) and tested it on images from a NIR N17E push-broom camera with a frame rate of 25 fps, a band-pixel rate of 1.888 MHz, and 236 spectral bands between 900 and 1700 nanometers in laboratory conditions. Using 28-bit fixed-point arithmetic, the circuit accurately discriminates 95.2% of the pixels corresponding to an apple, 81% of the pixels corresponding to a bruised apple, and 96.4% of the background. With the default threshold settings, the highest false positive (FP) rate for a bruised apple is 18.7%. The circuit operates at the native frame rate of the camera, consumes 67 mW of dynamic power, and uses less than 10% of the logic resources on the FPGA.
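
    A software sketch of the per-pixel classification and fruit-level decision described above is given below using a linear-kernel SVM from scikit-learn; the feature values, class balance, and decision threshold are hypothetical, and the actual system is a fixed-point FPGA implementation rather than Python.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Hypothetical 5-element NIR feature vectors per pixel and labels
    # (0 = background, 1 = non-bruised, 2 = bruised); in the paper the features
    # come from 10 of the 236 spectral bands.
    X = rng.normal(size=(3000, 5))
    y = rng.integers(0, 3, size=3000)
    X[y == 2] += 1.5                          # give the "bruised" class some separation

    clf = SVC(kernel="linear").fit(X, y)      # linear-kernel SVM, as in the abstract

    pixels = rng.normal(size=(17 * 17, 5)) + 1.0          # one hypothetical apple region
    pred = clf.predict(pixels)
    apple_mask = pred != 0
    bruised_fraction = (pred[apple_mask] == 2).mean() if apple_mask.any() else 0.0
    print(f"bruised pixel fraction: {bruised_fraction:.1%}")  # threshold to classify fruit
    ```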

  11. A robust, high-throughput method for computing maize ear, cob, and kernel attributes automatically from images.

    PubMed

    Miller, Nathan D; Haase, Nicholas J; Lee, Jonghyun; Kaeppler, Shawn M; de Leon, Natalia; Spalding, Edgar P

    2017-01-01

    Grain yield of the maize plant depends on the sizes, shapes, and numbers of ears and the kernels they bear. An automated pipeline that can measure these components of yield from easily-obtained digital images is needed to advance our understanding of this globally important crop. Here we present three custom algorithms designed to compute such yield components automatically from digital images acquired by a low-cost platform. One algorithm determines the average space each kernel occupies along the cob axis using a sliding-window Fourier transform analysis of image intensity features. A second counts individual kernels removed from ears, including those in clusters. A third measures each kernel's major and minor axis after a Bayesian analysis of contour points identifies the kernel tip. Dimensionless ear and kernel shape traits that may interrelate yield components are measured by principal components analysis of contour point sets. Increased objectivity and speed compared to typical manual methods are achieved without loss of accuracy as evidenced by high correlations with ground truth measurements and simulated data. Millimeter-scale differences among ear, cob, and kernel traits that ranged more than 2.5-fold across a diverse group of inbred maize lines were resolved. This system for measuring maize ear, cob, and kernel attributes is being used by multiple research groups as an automated Web service running on community high-throughput computing and distributed data storage infrastructure. Users may create their own workflow using the source code that is staged for download on a public repository. © 2016 The Authors. The Plant Journal published by Society for Experimental Biology and John Wiley & Sons Ltd.
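
    The sliding-window Fourier step described above (estimating the average space each kernel occupies along the cob axis) can be sketched for a 1-D intensity profile; the window size, step, pixel scale, and test signal below are all illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    def kernel_spacing_profile(intensity, window=256, step=64, px_per_mm=10.0):
        """Sliding-window Fourier analysis of a 1-D intensity profile along the cob
        axis: the dominant spatial frequency in each window gives the average space
        one kernel occupies (all parameter values are illustrative)."""
        spacings = []
        for start in range(0, len(intensity) - window + 1, step):
            seg = intensity[start:start + window]
            seg = (seg - seg.mean()) * np.hanning(window)
            spectrum = np.abs(np.fft.rfft(seg))
            freqs = np.fft.rfftfreq(window, d=1.0)        # cycles per pixel
            k = np.argmax(spectrum[1:]) + 1               # skip the DC bin
            spacings.append(1.0 / freqs[k] / px_per_mm)   # kernel pitch in mm
        return np.array(spacings)

    # Hypothetical profile: roughly a 12-pixel kernel pitch plus noise.
    x = np.arange(2000)
    rng = np.random.default_rng(4)
    profile = 1.0 + 0.3 * np.cos(2 * np.pi * x / 12.0) + 0.05 * rng.normal(size=x.size)
    print(kernel_spacing_profile(profile).mean(), "mm per kernel (toy data)")
    ```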

  12. EOS: A project to investigate the design and construction of real-time distributed embedded operating systems

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.; Essick, R. B.; Grass, J.; Johnston, G.; Kenny, K.; Russo, V.

    1986-01-01

    The EOS project is investigating the design and construction of a family of real-time distributed embedded operating systems for reliable, distributed aerospace applications. Using the real-time programming techniques developed in co-operation with NASA in earlier research, the project staff is building a kernel for a multiple processor networked system. The first six months of the grant included a study of scheduling in an object-oriented system, the design philosophy of the kernel, and the architectural overview of the operating system. In this report, the operating system and kernel concepts are described. An environment for the experiments has been built and several of the key concepts of the system have been prototyped. The kernel and operating system is intended to support future experimental studies in multiprocessing, load-balancing, routing, software fault-tolerance, distributed data base design, and real-time processing.

  13. Trust-Management, Intrusion-Tolerance, Accountability, and Reconstitution Architecture (TIARA)

    DTIC Science & Technology

    2009-12-01

    Tainting, tagged, metadata, architecture, hardware, processor, microkernel, zero-kernel, co-design 16. SECURITY CLASSIFICATION OF: 17. LIMITATION OF... microkernels (e.g., [27]) embraced the idea that it was beneficial to reduce the kernel, separating out services as separate processes isolated from...limited adoption. More recently Tanenbaum [72] notes the security virtues of microkernels and suggests the modern importance of security makes it

  14. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    NASA Astrophysics Data System (ADS)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assert their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
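
    The Roofline model referenced above bounds attainable performance by arithmetic intensity; a standard statement of the model is (symbols below are generic, not the paper's measurements):

    ```latex
    \[
    P_{\text{attainable}} \;=\; \min\!\bigl(P_{\text{peak}},\; I \cdot B_{\text{mem}}\bigr),
    \qquad
    I \;=\; \frac{\text{floating-point operations}}{\text{bytes moved to/from memory}}.
    \]
    ```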

  15. An alternative covariance estimator to investigate genetic heterogeneity in populations.

    PubMed

    Heslot, Nicolas; Jannink, Jean-Luc

    2015-11-26

    For genomic prediction and genome-wide association studies (GWAS) using mixed models, covariance between individuals is estimated using molecular markers. Based on the properties of mixed models, using available molecular data for prediction is optimal if this covariance is known. Under this assumption, adding individuals to the analysis should never be detrimental. However, some empirical studies showed that increasing training population size decreased prediction accuracy. Recently, results from theoretical models indicated that even if marker density is high and the genetic architecture of traits is controlled by many loci with small additive effects, the covariance between individuals, which depends on relationships at causal loci, is not always well estimated by the whole-genome kinship. We propose an alternative covariance estimator named K-kernel, to account for potential genetic heterogeneity between populations that is characterized by a lack of genetic correlation, and to limit the information flow between a priori unknown populations in a trait-specific manner. This is similar to a multi-trait model and parameters are estimated by REML and, in extreme cases, it can allow for an independent genetic architecture between populations. As such, K-kernel is useful to study the problem of the design of training populations. K-kernel was compared to other covariance estimators or kernels to examine its fit to the data, cross-validated accuracy and suitability for GWAS on several datasets. It provides a significantly better fit to the data than the genomic best linear unbiased prediction model and, in some cases it performs better than other kernels such as the Gaussian kernel, as shown by an empirical null distribution. In GWAS simulations, alternative kernels control type I errors as well as or better than the classical whole-genome kinship and increase statistical power. No or small gains were observed in cross-validated prediction accuracy. This alternative covariance estimator can be used to gain insight into trait-specific genetic heterogeneity by identifying relevant sub-populations that lack genetic correlation between them. Genetic correlation can be 0 between identified sub-populations by performing automatic selection of relevant sets of individuals to be included in the training population. It may also increase statistical power in GWAS.

  16. Study of heterogeneous and reconfigurable architectures in the communication domain

    NASA Astrophysics Data System (ADS)

    Feldkaemper, H. T.; Blume, H.; Noll, T. G.

    2003-05-01

    One of the most challenging design issues for next generations of (mobile) communication systems is fulfilling the computational demands while finding an appropriate trade-off between flexibility and implementation aspects, especially power consumption. Flexibility of modern architectures is desirable, e.g. concerning adaptation to new standards and reduction of time-to-market of a new product. Typical target architectures for future communication systems include embedded FPGAs, dedicated macros as well as programmable digital signal and control oriented processor cores, as each of these has its specific advantages. These will be integrated as a System-on-Chip (SoC). For such a heterogeneous architecture, design space exploration and an appropriate partitioning play a crucial role. Using a Viterbi decoder, as frequently used in communication systems, as an exemplary vehicle, we show which costs in terms of ATE complexity arise when implementing typical components on different types of architecture blocks. A factor of about seven orders of magnitude spans between a physically optimised implementation and an implementation on a programmable DSP kernel. An implementation on an embedded FPGA kernel lies between these two, representing an attractive compromise with high flexibility and low power consumption. Extending this comparison to further components, it is shown quantitatively that the cost ratio between different implementation alternatives is closely related to the operation to be performed. This information is essential for the appropriate partitioning of heterogeneous systems.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krueger, Jens; Micikevicius, Paulius; Williams, Samuel

    Reverse Time Migration (RTM) is one of the main approaches in the seismic processing industry for imaging the subsurface structure of the Earth. While RTM provides qualitative advantages over its predecessors, it has a high computational cost warranting implementation on HPC architectures. We focus on three progressively more complex kernels extracted from RTM: for isotropic (ISO), vertical transverse isotropic (VTI) and tilted transverse isotropic (TTI) media. In this work, we examine performance optimization of forward wave modeling, which describes the computational kernels used in RTM, on emerging multi- and manycore processors and introduce a novel common subexpression elimination optimization for TTI kernels. We compare attained performance and energy efficiency in both the single-node and distributed memory environments in order to satisfy industry’s demands for fidelity, performance, and energy efficiency. Moreover, we discuss the interplay between architecture (chip and system) and optimizations (both on-node computation) highlighting the importance of NUMA-aware approaches to MPI communication. Ultimately, our results show we can improve CPU energy efficiency by more than 10× on Magny Cours nodes while acceleration via multiple GPUs can surpass the energy-efficient Intel Sandy Bridge by as much as 3.6×.

  18. Towards Seismic Tomography Based Upon Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Liu, Q.; Tape, C.; Maggi, A.

    2006-12-01

    We outline the theory behind tomographic inversions based on 3D reference models, fully numerical 3D wave propagation, and adjoint methods. Our approach involves computing the Fréchet derivatives for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an 'adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a spectral-element method (SEM) and a heterogeneous wave-speed model, and stored as synthetic seismograms at particular receivers for which there is data. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the differences between the data and the synthetics are time reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call event kernels. These kernels may be thought of as weighted sums of measurement-specific banana-donut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, i.e., the Fréchet derivatives. A conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function. Using 2D examples for Rayleigh wave phase-speed maps of southern California, we illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions, and joint source-structure inversions. We also illustrate the characteristics of these 3D finite-frequency kernels based upon adjoint simulations for a variety of global arrivals, e.g., Pdiff, P'P', and SKS, and we illustrate how the approach may be used to investigate body- and surface-wave anisotropy. In adjoint tomography any time segment in which the data and synthetics match reasonably well is suitable for measurement, and this implies a much greater number of phases per seismogram can be used compared to classical tomography, in which the sensitivity of the measurements is determined analytically for specific arrivals, e.g., P. We use an automated picking algorithm based upon short-term/long-term averages and strict phase and amplitude anomaly criteria to determine arrivals and time windows suitable for measurement. For shallow global events the algorithm typically identifies of the order of 1000 windows suitable for measurement, whereas for a deep event the number can reach 4000. For southern California earthquakes the number of phases is of the order of 100 for a magnitude 4.0 event and up to 450 for a magnitude 5.0 event. We will show examples of event kernels for both global and regional earthquakes. These event kernels form the basis of adjoint tomography.

  19. The structure of the clouds distributed operating system

    NASA Technical Reports Server (NTRS)

    Dasgupta, Partha; Leblanc, Richard J., Jr.

    1989-01-01

    A novel system architecture, based on the object model, is the central structuring concept used in the Clouds distributed operating system. This architecture makes Clouds attractive over a wide class of machines and environments. Clouds is a native operating system, designed and implemented at Georgia Tech, and runs on a set of general-purpose computers connected via a local area network. The system architecture of Clouds is composed of a system-wide global set of persistent (long-lived) virtual address spaces, called objects, that contain persistent data and code. The object concept is implemented at the operating system level, thus presenting a single-level storage view to the user. Lightweight threads carry computational activity through the code stored in the objects. The persistent objects and threads give rise to a programming environment composed of shared permanent memory, dispensing with the need for hardware-derived concepts such as file systems and message systems. Though the hardware may be distributed and may have disks and networks, Clouds provides the applications with a logically centralized system, based on a shared, structured, single-level store. The current design of Clouds uses a minimalist philosophy with respect to both the kernel and the operating system. That is, the kernel and the operating system support a bare minimum of functionality. Clouds also adheres to the concept of separation of policy and mechanism. Most low-level operating system services are implemented above the kernel and most high-level services are implemented at the user level. From the measured performance of the kernel mechanisms, we are able to demonstrate that efficient implementations of the object model are feasible on commercially available hardware. Clouds provides a rich environment for conducting research in distributed systems. Some of the topics addressed in this paper include distributed programming environments, consistency of persistent data and fault-tolerance.

  20. The Crystal Hole.

    ERIC Educational Resources Information Center

    Law, Maxine S.

    1980-01-01

    Presented are three curriculum sequences about windows that were designed as part of an architectural education course for junior high school students in Ohio. The three units, or "encounters," deal with stained glass, styles of windows, and functions of the window. Each includes student objectives and selected activities. (SJL)

  1. Hardware, Languages, and Architectures for Defense Against Hostile Operating Systems

    DTIC Science & Technology

    2015-05-14

    ...in privileged system services. ExpressOS requires only a modest annotation burden (annotations were about 3% of the kernel code), modest performance...INT benchmarks. In addition to enabling the development of an architecture-neutral instrumentation framework, our approach can take advantage of the

  2. Interaction with Machine Improvisation

    NASA Astrophysics Data System (ADS)

    Assayag, Gerard; Bloch, George; Cont, Arshia; Dubnov, Shlomo

    We describe two multi-agent architectures for an improvisation oriented musician-machine interaction systems that learn in real time from human performers. The improvisation kernel is based on sequence modeling and statistical learning. We present two frameworks of interaction with this kernel. In the first, the stylistic interaction is guided by a human operator in front of an interactive computer environment. In the second framework, the stylistic interaction is delegated to machine intelligence and therefore, knowledge propagation and decision are taken care of by the computer alone. The first framework involves a hybrid architecture using two popular composition/performance environments, Max and OpenMusic, that are put to work and communicate together, each one handling the process at a different time/memory scale. The second framework shares the same representational schemes with the first but uses an Active Learning architecture based on collaborative, competitive and memory-based learning to handle stylistic interactions. Both systems are capable of processing real-time audio/video as well as MIDI. After discussing the general cognitive background of improvisation practices, the statistical modelling tools and the concurrent agent architecture are presented. Then, an Active Learning scheme is described and considered in terms of using different improvisation regimes for improvisation planning. Finally, we provide more details about the different system implementations and describe several performances with the system.

  3. GPU-accelerated Kernel Regression Reconstruction for Freehand 3D Ultrasound Imaging.

    PubMed

    Wen, Tiexiang; Li, Ling; Zhu, Qingsong; Qin, Wenjian; Gu, Jia; Yang, Feng; Xie, Yaoqin

    2017-07-01

    The volume reconstruction method plays an important role in improving reconstructed volumetric image quality for freehand three-dimensional (3D) ultrasound imaging. By utilizing the capability of a programmable graphics processing unit (GPU), we can achieve real-time incremental volume reconstruction at a speed of 25-50 frames per second (fps). After incremental reconstruction and visualization, hole-filling is performed on the GPU to fill remaining empty voxels. However, traditional pixel nearest-neighbor-based hole-filling fails to reconstruct volumes with high image quality. In contrast, kernel regression provides an accurate volume reconstruction method for 3D ultrasound imaging, but at the cost of heavy computational complexity. In this paper, a GPU-based fast kernel regression method is proposed for high-quality volume reconstruction after the incremental reconstruction of freehand ultrasound. The experimental results show that improved image quality for speckle reduction and detail preservation can be obtained with the parameter setting of kernel window size of [Formula: see text] and kernel bandwidth of 1.0. The computational performance of the proposed GPU-based method can be over 200 times faster than that on a central processing unit (CPU), and a volume of 50 million voxels in our experiment can be reconstructed within 10 seconds.
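
    A CPU-side sketch of zeroth-order (Nadaraya-Watson) kernel-regression hole-filling, the general idea behind the method described, is shown below; the window size, bandwidth, and volume are hypothetical, and a GPU implementation would map one output voxel to one thread.

    ```python
    import numpy as np

    def fill_holes_kernel_regression(volume, filled_mask, window=3, bandwidth=1.0):
        """Each empty voxel becomes a Gaussian-weighted average of the filled voxels
        inside a local window (a toy stand-in for GPU kernel-regression hole-filling)."""
        out = volume.copy()
        half = window // 2
        zi, yi, xi = np.nonzero(~filled_mask)
        for z, y, x in zip(zi, yi, xi):
            zs, ys, xs = (slice(max(0, i - half), min(n, i + half + 1))
                          for i, n in zip((z, y, x), volume.shape))
            patch, mask = volume[zs, ys, xs], filled_mask[zs, ys, xs]
            if not mask.any():
                continue
            gz, gy, gx = np.mgrid[zs, ys, xs]
            d2 = (gz - z) ** 2 + (gy - y) ** 2 + (gx - x) ** 2
            w = np.exp(-d2 / (2.0 * bandwidth ** 2)) * mask
            out[z, y, x] = (w * patch).sum() / w.sum()
        return out

    vol = np.random.default_rng(5).random((20, 20, 20))              # hypothetical volume
    mask = np.random.default_rng(6).random((20, 20, 20)) > 0.1       # ~10% empty voxels
    print(fill_holes_kernel_regression(vol, mask).shape)
    ```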

  4. Advanced Computing Architectures for Cognitive Processing

    DTIC Science & Technology

    2009-07-01

    [Table-of-contents and figure-list fragments: Evolution (p. 20); Figure 9: Logic diagram, smart block-based neuron (p. 48); Figure 21: Naive Grid Potential Kernel]...processing would be helpful for Air Force systems acquisition. Specific cognitive processing approaches addressed herein include global information grid

  5. Simplified Computation for Nonparametric Windows Method of Probability Density Function Estimation.

    PubMed

    Joshi, Niranjan; Kadir, Timor; Brady, Michael

    2011-08-01

    Recently, Kadir and Brady proposed a method for estimating probability density functions (PDFs) for digital signals which they call the Nonparametric (NP) Windows method. The method involves constructing a continuous space representation of the discrete space and sampled signal by using a suitable interpolation method. NP Windows requires only a small number of observed signal samples to estimate the PDF and is completely data driven. In this short paper, we first develop analytical formulae to obtain the NP Windows PDF estimates for 1D, 2D, and 3D signals, for different interpolation methods. We then show that the original procedure to calculate the PDF estimate can be significantly simplified and made computationally more efficient by a judicious choice of the frame of reference. We have also outlined specific algorithmic details of the procedures enabling quick implementation. Our reformulation of the original concept has directly demonstrated a close link between the NP Windows method and the Kernel Density Estimator.
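
    The abstract notes a close link between NP Windows and the kernel density estimator; for contrast, a plain Gaussian KDE for a 1-D signal is sketched below (the NP Windows method itself instead interpolates the sampled signal, which is not reproduced here).

    ```python
    import numpy as np

    def kde(samples, grid, bandwidth=0.1):
        """Standard Gaussian kernel density estimate, shown for contrast with the
        NP Windows method discussed above."""
        u = (grid[:, None] - samples[None, :]) / bandwidth
        return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(samples) * bandwidth * np.sqrt(2 * np.pi))

    # Hypothetical sampled signal values in [0, 1]; only a few samples are needed.
    signal = np.clip(np.random.default_rng(7).normal(0.5, 0.15, size=64), 0, 1)
    grid = np.linspace(0, 1, 200)
    pdf = kde(signal, grid)
    print(np.trapz(pdf, grid))   # approximately 1 over the support
    ```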

  6. Efficient Comparison between Windows and Linux Platform Applicable in a Virtual Architectural Walkthrough Application

    NASA Astrophysics Data System (ADS)

    Thubaasini, P.; Rusnida, R.; Rohani, S. M.

    This paper describes Linux, an open source platform used to develop and run a virtual architectural walkthrough application. It proposes some qualitative reflections and observations on the nature of Linux in the context of Virtual Reality (VR) and on the most popular and important claims associated with the open source approach. The ultimate goal of this paper is to measure and evaluate the performance of Linux used to build the virtual architectural walkthrough and to develop a proof of concept based on the results obtained through this project. In addition, this study highlights the benefits of using Linux in the field of virtual reality and presents a basic comparison and evaluation between Windows- and Linux-based operating systems. The Windows platform is used as a baseline to evaluate the performance of Linux. The performance of Linux is measured on three main criteria: frame rate, image quality, and mouse motion.

  7. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice.

    PubMed

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel "trick" concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Some parametric and kernel methods, namely the least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR), and RKHS regression, were then compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction, followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.
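    As a minimal stand-in for the RKHS regression that the KRMM package performs (the package itself is in R and is not reproduced here), the following NumPy sketch fits kernel ridge regression with a Gaussian kernel to simulated marker data. The simulated data, regularization parameter, and kernel width are illustrative assumptions only.

```python
import numpy as np

def gaussian_kernel(X1, X2, gamma):
    """Gaussian (RBF) kernel matrix between marker matrices X1 and X2."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit_predict(X_train, y_train, X_test, lam=1.0, gamma=0.01):
    """Kernel ridge (RKHS) regression: alpha = (K + lam*I)^-1 y, the dual form
    of ridge regression, so nonlinear (epistatic-like) effects enter only via K."""
    K = gaussian_kernel(X_train, X_train, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(y_train)), y_train)
    return gaussian_kernel(X_test, X_train, gamma) @ alpha

# toy genomic data: 200 lines x 500 biallelic markers coded {0, 1, 2}
rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(200, 500)).astype(float)
beta = rng.normal(0, 0.1, 500)
y = X @ beta + 0.5 * (X[:, 0] * X[:, 1]) + rng.normal(0, 1.0, 200)  # additive + one epistatic term
y_hat = krr_fit_predict(X[:150], y[:150], X[150:], lam=1.0, gamma=1.0 / 500)
print(np.corrcoef(y_hat, y[150:])[0, 1])       # predictive correlation on the hold-out lines
```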

  8. Temporal Effects on Internal Fluorescence Emissions Associated with Aflatoxin Contamination from Corn Kernel Cross-Sections Inoculated with Toxigenic and Atoxigenic Aspergillus flavus.

    PubMed

    Hruska, Zuzana; Yao, Haibo; Kincaid, Russell; Brown, Robert L; Bhatnagar, Deepak; Cleveland, Thomas E

    2017-01-01

    Non-invasive, easy to use and cost-effective technology offers a valuable alternative for rapid detection of carcinogenic fungal metabolites, namely aflatoxins, in commodities. One relatively recent development in this area is the use of spectral technology. Fluorescence hyperspectral imaging, in particular, offers a potential rapid and non-invasive method for detecting the presence of aflatoxins in maize infected with the toxigenic fungus Aspergillus flavus. Earlier studies have shown that whole maize kernels contaminated with aflatoxins exhibit different spectral signatures from uncontaminated kernels based on the external fluorescence emission of the whole kernels. Here, the effect of time on the internal fluorescence spectral emissions from cross-sections of kernels infected with toxigenic and atoxigenic A. flavus was examined in order to elucidate the interaction between the fluorescence signals emitted by some aflatoxin contaminated maize kernels and the fungal invasion resulting in the production of aflatoxins. First, the difference in internal fluorescence emissions between cross-sections of kernels incubated in toxigenic and atoxigenic inoculum was assessed. Kernels were inoculated with each strain for 5, 7, and 9 days before cross-sectioning and imaging. There were 270 kernels (540 halves) imaged, including controls. Second, in a different set of kernels (15 kernels/group; 135 total), the germ of each kernel was separated from the endosperm to determine the major areas of aflatoxin accumulation and progression over nine growth days. Kernels were inoculated with toxigenic and atoxigenic fungal strains for 5, 7, and 9 days before the endosperm and germ were separated, followed by fluorescence hyperspectral imaging and chemical aflatoxin determination. A marked difference in fluorescence intensity was shown between the toxigenic and atoxigenic strains on day nine post-inoculation, which may be a useful indicator of the location of aflatoxin contamination. This finding suggests that both the fluorescence peak shift and intensity, as well as timing, may be essential in distinguishing toxigenic and atoxigenic fungi based on spectral features. Results also reveal a possible preferential difference in the internal colonization of maize kernels between the toxigenic and atoxigenic strains of A. flavus, suggesting a potential window for differentiating the strains based on fluorescence spectra at specific time points.

  9. Temporal Effects on Internal Fluorescence Emissions Associated with Aflatoxin Contamination from Corn Kernel Cross-Sections Inoculated with Toxigenic and Atoxigenic Aspergillus flavus

    PubMed Central

    Hruska, Zuzana; Yao, Haibo; Kincaid, Russell; Brown, Robert L.; Bhatnagar, Deepak; Cleveland, Thomas E.

    2017-01-01

    Non-invasive, easy to use and cost-effective technology offers a valuable alternative for rapid detection of carcinogenic fungal metabolites, namely aflatoxins, in commodities. One relatively recent development in this area is the use of spectral technology. Fluorescence hyperspectral imaging, in particular, offers a potential rapid and non-invasive method for detecting the presence of aflatoxins in maize infected with the toxigenic fungus Aspergillus flavus. Earlier studies have shown that whole maize kernels contaminated with aflatoxins exhibit different spectral signatures from uncontaminated kernels based on the external fluorescence emission of the whole kernels. Here, the effect of time on the internal fluorescence spectral emissions from cross-sections of kernels infected with toxigenic and atoxigenic A. flavus was examined in order to elucidate the interaction between the fluorescence signals emitted by some aflatoxin contaminated maize kernels and the fungal invasion resulting in the production of aflatoxins. First, the difference in internal fluorescence emissions between cross-sections of kernels incubated in toxigenic and atoxigenic inoculum was assessed. Kernels were inoculated with each strain for 5, 7, and 9 days before cross-sectioning and imaging. There were 270 kernels (540 halves) imaged, including controls. Second, in a different set of kernels (15 kernels/group; 135 total), the germ of each kernel was separated from the endosperm to determine the major areas of aflatoxin accumulation and progression over nine growth days. Kernels were inoculated with toxigenic and atoxigenic fungal strains for 5, 7, and 9 days before the endosperm and germ were separated, followed by fluorescence hyperspectral imaging and chemical aflatoxin determination. A marked difference in fluorescence intensity was shown between the toxigenic and atoxigenic strains on day nine post-inoculation, which may be a useful indicator of the location of aflatoxin contamination. This finding suggests that both the fluorescence peak shift and intensity, as well as timing, may be essential in distinguishing toxigenic and atoxigenic fungi based on spectral features. Results also reveal a possible preferential difference in the internal colonization of maize kernels between the toxigenic and atoxigenic strains of A. flavus, suggesting a potential window for differentiating the strains based on fluorescence spectra at specific time points. PMID:28966606

  10. A novel configurable VLSI architecture design of window-based image processing method

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Sang, Hongshi; Shen, Xubang

    2018-03-01

    Most window-based image processing architectures can only implement a specific class of algorithms, such as 2D convolution, and therefore lack flexibility and breadth of application. In addition, improper handling of the image boundary can cause a loss of accuracy or consume additional logic resources. To address these problems, this paper proposes a new, configurable VLSI architecture for window-based image processing operations that explicitly accounts for the image boundary. An efficient technique is used to manage the image borders by overlapping and flushing phases at the end of each row and at the end of the frame, which introduces no additional delay and reduces overhead in real-time applications. Reuse of on-chip memory data is maximized in order to reduce hardware complexity and external bandwidth requirements. Different scalar-function and reduction-function operations can be performed in a pipeline, so the architecture supports a variety of window-based image processing applications. Compared with other reported structures, the new structure performs comparably to some and better than others; in particular, compared with the systolic array processor CWP at the same frequency, it achieves a speed increase of approximately 12.9%. The proposed parallel VLSI architecture was implemented with SIMC 0.18-μm CMOS technology, and the maximum clock frequency, power consumption, and area are 125 MHz, 57 mW, and 104.8K gates, respectively. Furthermore, the processing time is independent of the particular window-based algorithm mapped to the structure.
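    A software model of the scalar-function/reduction-function pairing that the configurable architecture pipelines in hardware might look like the sketch below. Edge replication is used here as a simplification of the paper's overlap-and-flush boundary scheme, and the example operators are illustrative only.

```python
import numpy as np

def window_op(image, k, scalar_fn, reduce_fn):
    """Generic window-based operator: apply scalar_fn to every k x k window,
    then reduce_fn across the window. Edge pixels are replicated, a
    simplification of the overlap/flush boundary handling in the paper."""
    pad = k // 2
    padded = np.pad(image, pad, mode='edge')
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            win = padded[i:i + k, j:j + k]
            out[i, j] = reduce_fn(scalar_fn(win))
    return out

img = np.random.default_rng(0).random((64, 64))
gauss1d = np.exp(-((np.arange(5) - 2) ** 2) / 2.0)
g = np.outer(gauss1d, gauss1d)                                   # 5 x 5 Gaussian weights
blur    = window_op(img, 5, lambda w: w * g / g.sum(), np.sum)   # 2-D convolution-style filter
erosion = window_op(img, 3, lambda w: w, np.min)                 # morphological erosion
maxfilt = window_op(img, 3, lambda w: w, np.max)                 # running max filter
```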

  11. Empowering open systems through cross-platform interoperability

    NASA Astrophysics Data System (ADS)

    Lyke, James C.

    2014-06-01

    Most of the motivations for open systems lie in the expectation of interoperability, sometimes referred to as "plug-and-play". Nothing in the notion of "open-ness", however, guarantees this outcome, which makes the increased interest in open architecture more perplexing. In this paper, we explore certain themes of open architecture. We introduce the concept of "windows of interoperability", which can be used to align disparate portions of architecture. Such "windows of interoperability", which concentrate on a reduced set of protocol and interface features, might achieve many of the broader purposes assigned as benefits in open architecture. Since it is possible to engineer proprietary systems that interoperate effectively, this nuanced definition of interoperability may in fact be a more important concept to understand and nurture for effective systems engineering and maintenance.

  12. An Integrated Architecture for Automatic Indication, Avoidance and Profiling of Kernel Rootkit Attacks

    DTIC Science & Technology

    2014-08-20

    [Report-documentation fragment only: performing-organization addresses at Lafayette, IN 47907 and North Carolina State University, 890 Oval Dr., Raleigh, NC 27695; contact (919) 513-7835, jiang@cs.ncsu.edu. The abstract itself is not preserved beyond the heading "1. Project Summary".]

  13. Architectural Heritage: An Experiment in Montreal's Schools.

    ERIC Educational Resources Information Center

    Leveille, Chantal

    1982-01-01

    A museum program in Montreal encourages elementary and secondary school students to examine their surroundings and neighborhoods. Units focus on stained glass windows, houses, history of Montreal, the neighborhood, and architectural heritage. (KC)

  14. A research on the security of wisdom campus based on geospatial big data

    NASA Astrophysics Data System (ADS)

    Wang, Haiying

    2018-05-01

    A wisdom (smart) campus faces several difficulties, such as sharing geospatial big data, expanding functionality, and managing, analyzing, and mining geospatial big data for characteristic features; in particular, the problem that data security cannot be guaranteed has attracted increasing attention. In this article we put forward a data-oriented software architecture, designed around the ideology of orienting on data and treating data as the kernel, which addresses the problems of traditional software architectures, broadens campus spatial data research, and develops wisdom campus applications.

  15. Deep neural mapping support vector machines.

    PubMed

    Li, Yujian; Zhang, Ting

    2017-09-01

    The choice of kernel has an important effect on the performance of a support vector machine (SVM). The effect could be reduced by NEUROSVM, an architecture using a multilayer perceptron for feature extraction and an SVM for classification. In binary classification, a general linear kernel NEUROSVM can be theoretically simplified as an input layer, many hidden layers, and an SVM output layer. As a feature extractor, the sub-network composed of the input and hidden layers is first trained together with a virtual ordinary output layer by backpropagation, then with the output of its last hidden layer taken as input of the SVM classifier for further training separately. By taking the sub-network as a kernel mapping from the original input space into a feature space, we present a novel model, called deep neural mapping support vector machine (DNMSVM), from the viewpoint of deep learning. This model is also a new and general kernel learning method, where the kernel mapping is indeed an explicit function expressed as a sub-network, different from an implicit function induced by a kernel function traditionally. Moreover, we exploit a two-stage procedure of contrastive divergence learning and gradient descent for DNMSVM, jointly training an adaptive kernel mapping instead of a kernel function, without requiring kernel tricks. Treating the sub-network and the SVM classifier as a whole, the joint training of DNMSVM is done by using gradient descent to optimize the objective function, with the sub-network layer-wise pre-trained via contrastive divergence learning of restricted Boltzmann machines. Compared to the separate training of NEUROSVM, this joint training is a new algorithm that gives DNMSVM advantages over NEUROSVM. Experimental results show that DNMSVM can outperform NEUROSVM and RBFSVM (i.e., SVM with the kernel of radial basis function), demonstrating its effectiveness. Copyright © 2017 Elsevier Ltd. All rights reserved.
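    A minimal sketch of the DNMSVM idea (an explicit kernel mapping realized as a sub-network, jointly trained with a linear max-margin output layer under a hinge loss) is shown below in PyTorch. The layer sizes and toy data are assumptions, and the RBM/contrastive-divergence pre-training stage of the paper is omitted for brevity.

```python
import torch
import torch.nn as nn

class DNMSVMLike(nn.Module):
    def __init__(self, d_in, d_hidden=64, d_feat=32):
        super().__init__()
        self.mapping = nn.Sequential(            # explicit kernel mapping phi(x)
            nn.Linear(d_in, d_hidden), nn.Sigmoid(),
            nn.Linear(d_hidden, d_feat), nn.Sigmoid())
        self.svm = nn.Linear(d_feat, 1)          # linear SVM in the feature space

    def forward(self, x):
        return self.svm(self.mapping(x)).squeeze(-1)

def hinge_objective(scores, labels, w, c=1.0):
    """Soft-margin objective: 0.5*||w||^2 + C * mean hinge loss (labels in {-1, +1})."""
    return 0.5 * (w ** 2).sum() + c * torch.clamp(1 - labels * scores, min=0).mean()

# toy nonlinearly separable binary problem
torch.manual_seed(0)
X = torch.randn(512, 20)
y = (X[:, :2].pow(2).sum(1) > 2.0).float() * 2 - 1
model = DNMSVMLike(20)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(300):                              # joint training of mapping + SVM layer
    opt.zero_grad()
    loss = hinge_objective(model(X), y, model.svm.weight)
    loss.backward()
    opt.step()
pred = torch.sign(model(X))
print((pred == y).float().mean().item())          # training accuracy of the joint model
```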

  16. 33. DETAIL VIEW OF THE WEST WINDOW AT THE FIRST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    33. DETAIL VIEW OF THE WEST WINDOW AT THE FIRST FLOOR LEVEL OF THE TOWER (NOTE LEVEL OF ARCHITECTURAL FINISH SEEN IN THE ROUNDHEADED WINDOW WITH ITS ROUND-ARCHED, BROWNSTONE HOOD THAT TERMINATES IN BRACKETS AS WELL AS IN THE STRINGCOURSE AND IN BRICK INSET-PANELING BELOW THE BROWNSTONE SILL) - Kenworthy Hall, State Highway 14 (Greensboro Road), Marion, Perry County, AL

  17. A New GPU-Enabled MODTRAN Thermal Model for the PLUME TRACKER Volcanic Emission Analysis Toolkit

    NASA Astrophysics Data System (ADS)

    Acharya, P. K.; Berk, A.; Guiang, C.; Kennett, R.; Perkins, T.; Realmuto, V. J.

    2013-12-01

    Real-time quantification of volcanic gaseous and particulate releases is important for (1) recognizing rapid increases in SO2 gaseous emissions which may signal an impending eruption; (2) characterizing ash clouds to enable safe and efficient commercial aviation; and (3) quantifying the impact of volcanic aerosols on climate forcing. The Jet Propulsion Laboratory (JPL) has developed state-of-the-art algorithms, embedded in their analyst-driven Plume Tracker toolkit, for performing SO2, NH3, and CH4 retrievals from remotely sensed multi-spectral Thermal InfraRed spectral imagery. While Plume Tracker provides accurate results, it typically requires extensive analyst time. A major bottleneck in this processing is the relatively slow but accurate FORTRAN-based MODTRAN atmospheric and plume radiance model, developed by Spectral Sciences, Inc. (SSI). To overcome this bottleneck, SSI in collaboration with JPL, is porting these slow thermal radiance algorithms onto massively parallel, relatively inexpensive and commercially-available GPUs. This paper discusses SSI's efforts to accelerate the MODTRAN thermal emission algorithms used by Plume Tracker. Specifically, we are developing a GPU implementation of the Curtis-Godson averaging and the Voigt in-band transmittances from near line center molecular absorption, which comprise the major computational bottleneck. The transmittance calculations were decomposed into separate functions, individually implemented as GPU kernels, and tested for accuracy and performance relative to the original CPU code. Speedup factors of 14 to 30× were realized for individual processing components on an NVIDIA GeForce GTX 295 graphics card with no loss of accuracy. Due to the separate host (CPU) and device (GPU) memory spaces, a redesign of the MODTRAN architecture was required to ensure efficient data transfer between host and device, and to facilitate high parallel throughput. Currently, we are incorporating the separate GPU kernels into a single function for calculating the Voigt in-band transmittance, and subsequently for integration into the re-architectured MODTRAN6 code. Our overall objective is that by combining the GPU processing with more efficient Plume Tracker retrieval algorithms, a 100-fold increase in the computational speed will be realized. Since the Plume Tracker runs on Windows-based platforms, the GPU-enhanced MODTRAN6 will be packaged as a DLL. We do however anticipate that the accelerated option will be made available to the general MODTRAN community through an application programming interface (API).

  18. The Benefits of Aluminum Windows.

    ERIC Educational Resources Information Center

    Goyal, R. C.

    2002-01-01

    Discusses benefits of aluminum windows for college construction and renovation projects, including that aluminum is the most successfully recycled material, that it meets architectural glass deflection standards, that it has positive thermal energy performance, and that it is a preferred exterior surface. (EV)

  19. TRUST: TDRSS Resource User Support Tool

    NASA Technical Reports Server (NTRS)

    Sparn, Thomas P.; Gablehouse, R. Daniel

    1991-01-01

    TRUST, the TDRSS (Tracking and Data Relay Satellite System) Resource User Support Tool, is presented in the form of viewgraphs. The following subject areas are covered: TRUST development cycle; the TRUST system; scheduling window; ODM/GCMR window; TRUST architecture; surpass; and summary.

  20. Dissection of Genetic Factors underlying Wheat Kernel Shape and Size in an Elite × Nonadapted Cross using a High Density SNP Linkage Map.

    PubMed

    Kumar, Ajay; Mantovani, E E; Seetan, R; Soltani, A; Echeverry-Solarte, M; Jain, S; Simsek, S; Doehlert, D; Alamri, M S; Elias, E M; Kianian, S F; Mergoum, M

    2016-03-01

    Wheat kernel shape and size have been under selection since early domestication. Kernel morphology is a major consideration in wheat breeding, as it impacts grain yield and quality. A population of 160 recombinant inbred lines (RIL), developed using an elite (ND 705) and a nonadapted genotype (PI 414566), was extensively phenotyped in replicated field trials and genotyped using Infinium iSelect 90K assay to gain insight into the genetic architecture of kernel shape and size. A high density genetic map consisting of 10,172 single nucleotide polymorphism (SNP) markers, with an average marker density of 0.39 cM/marker, identified a total of 29 genomic regions associated with six grain shape and size traits; ∼80% of these regions were associated with multiple traits. The analyses showed that kernel length (KL) and width (KW) are genetically independent, while a large number (∼59%) of the quantitative trait loci (QTL) for kernel shape traits were in common with genomic regions associated with kernel size traits. The most significant QTL was identified on chromosome 4B and could be an ortholog of a major rice grain size and shape gene. Major and stable loci also were identified on the homeologous regions of Group 5 chromosomes, and in the regions of (6A) and (7A) genes. Both parental genotypes contributed equivalent positive QTL alleles, suggesting that the nonadapted germplasm has a great potential for enhancing the gene pool for grain shape and size. This study provides new knowledge on the genetic dissection of kernel morphology, with a much higher resolution, which may aid further improvement in wheat yield and quality using genomic tools. Copyright © 2016 Crop Science Society of America.

  1. An orthogonal oriented quadrature hexagonal image pyramid

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.

    1987-01-01

    An image pyramid has been developed with basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The pyramid operates on a hexagonal sample lattice. The set of seven basis functions consist of three even high-pass kernels, three odd high-pass kernels, and one low-pass kernel. The three even kernels are identified when rotated by 60 or 120 deg, and likewise for the odd. The seven basis functions occupy a point and a hexagon of six nearest neighbors on a hexagonal sample lattice. At the lowest level of the pyramid, the input lattice is the image sample lattice. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing sq rt 7 larger than the previous level, so that the number of coefficients is reduced by a factor of 7 at each level. The relationship between this image code and the processing architecture of the primate visual cortex is discussed.

  2. Deploy Nalu/Kokkos algorithmic infrastructure with performance benchmarking.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Domino, Stefan P.; Ananthan, Shreyas; Knaus, Robert C.

    The former Nalu interior heterogeneous algorithm design, which was originally designed to manage matrix assembly operations over all elemental topology types, has been modified to operate over homogeneous collections of mesh entities. This newly templated kernel design allows for removal of workset variable resize operations that were formerly required at each loop over a Sierra ToolKit (STK) bucket (nominally, 512 entities in size). Extensive usage of the Standard Template Library (STL) std::vector has been removed in favor of intrinsic Kokkos memory views. In this milestone effort, the transition to Kokkos as the underlying infrastructure to support performance and portability on many-core architectures has been deployed for key matrix algorithmic kernels. A unit-test driven design effort has developed a homogeneous entity algorithm that employs a team-based thread parallelism construct. The STK Single Instruction Multiple Data (SIMD) infrastructure is used to interleave data for improved vectorization. The collective algorithm design, which allows for concurrent threading and SIMD management, has been deployed for the core low-Mach element-based algorithm. Several tests to ascertain SIMD performance on Intel KNL and Haswell architectures have been carried out. The performance test matrix includes evaluation of both low- and higher-order methods. The higher-order low-Mach methodology builds on polynomial promotion of the core low-order control volume finite element method (CVFEM). Performance testing of the Kokkos-view/SIMD design indicates low-order matrix assembly kernel speed-up ranging between two and four times depending on mesh loading and node count. Better speedups are observed for higher-order meshes (currently only P=2 has been tested) especially on KNL. The increased workload per element on higher-order meshes benefits from the wide SIMD width on KNL machines. Combining multiple threads with SIMD on KNL achieves a 4.6x speedup over the baseline, with assembly timings faster than that observed on Haswell architecture. The computational workload of higher-order meshes, therefore, seems ideally suited for the many-core architecture and justifies further exploration of higher-order on NGP platforms. A Trilinos/Tpetra-based multi-threaded GMRES preconditioned by symmetric Gauss Seidel (SGS) represents the core solver infrastructure for the low-Mach advection/diffusion implicit solves. The threaded solver stack has been tested on small problems on NREL's Peregrine system using the newly developed and deployed Kokkos-view/SIMD kernels. Efforts are underway to deploy the Tpetra-based solver stack on the NERSC Cori system to benchmark its performance at scale on KNL machines.

  3. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    NASA Astrophysics Data System (ADS)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implement real-time signal processing algorithms for general surveillance radar based on NVIDIA graphical processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as CUDA basic linear algebra subroutines and CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for the NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and investigated. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with the CPU performance in terms of processing accelerations. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense and avoid radar, and aerospace surveillance radar.
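    As a CPU reference for the basic (non-adaptive) pulse compression step that the CUDA FFT library accelerates, the following NumPy sketch performs frequency-domain matched filtering of a noisy chirp. The waveform parameters are illustrative, and the adaptive pulse compression variants discussed above are not shown.

```python
import numpy as np

def pulse_compress(rx, ref):
    """Frequency-domain matched filtering (basic pulse compression):
    correlate the received samples with the transmitted reference waveform.
    On a GPU this maps to cuFFT forward/inverse transforms plus an
    element-wise multiply; NumPy serves as the CPU reference here."""
    n = len(rx) + len(ref) - 1
    RX = np.fft.fft(rx, n)
    REF = np.fft.fft(ref, n)
    return np.fft.ifft(RX * np.conj(REF))

# toy example: an LFM (chirp) pulse buried in noise at a known delay
rng = np.random.default_rng(0)
fs, T, bw = 1e6, 100e-6, 200e3
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (bw / T) * t ** 2)
rx = np.zeros(1024, dtype=complex)
rx[300:300 + len(chirp)] += chirp
rx += 0.5 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))
profile = np.abs(pulse_compress(rx, chirp))
print(int(np.argmax(profile)))          # compressed peak at the 300-sample target delay
```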

  4. Dynamic experiment design regularization approach to adaptive imaging with array radar/SAR sensor systems.

    PubMed

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics in such a solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.

  5. Development of a graphical user interface for the global land information system (GLIS)

    USGS Publications Warehouse

    Alstad, Susan R.; Jackson, David A.

    1993-01-01

    The process of developing a Motif Graphical User Interface for the Global Land Information System (GLIS) involved incorporating user requirements, in-house visual and functional design requirements, and Open Software Foundation (OSF) Motif style guide standards. Motif user interface windows were developed, and the software supporting the Motif window functions was written in the C programming language. The GLIS architecture was modified to support multiple servers and remote handlers running the X Window System by forming a network of servers and handlers connected by TCP/IP communications. In April 1993, prior to release, the GLIS graphical user interface and system architecture modifications were tested by developers and users located at the EROS Data Center and 11 beta test sites across the country.

  6. Genetic architecture of kernel composition in global sorghum germplasm

    USDA-ARS?s Scientific Manuscript database

    Sorghum [Sorghum bicolor (L.) Moench] is an important cereal crop for dryland areas in the United States and for small-holder farmers in Africa. Natural variation of sorghum grain composition (protein, fat, and starch) between accessions can be used for crop improvement, but the genetic controls are...

  7. Parallel heterogeneous architectures for efficient OMP compressive sensing reconstruction

    NASA Astrophysics Data System (ADS)

    Kulkarni, Amey; Stanislaus, Jerome L.; Mohsenin, Tinoosh

    2014-05-01

    Compressive Sensing (CS) is a novel scheme, in which a signal that is sparse in a known transform domain can be reconstructed using fewer samples. The signal reconstruction techniques are computationally intensive and have sluggish performance, which make them impractical for real-time processing applications. The paper presents novel architectures for the Orthogonal Matching Pursuit algorithm, one of the popular CS reconstruction algorithms. We show the implementation results of proposed architectures on FPGA, ASIC and on a custom many-core platform. For FPGA and ASIC implementation, a novel thresholding method is used to reduce the processing time for the optimization problem by at least 25%. For the custom many-core platform, efficient parallelization techniques are applied to reconstruct signals with varying signal lengths N and sparsity m. The algorithm is divided into three kernels. Each kernel is parallelized to reduce execution time, whereas efficient reuse of the matrix operators allows us to reduce area. Matrix operations are efficiently parallelized by taking advantage of blocked algorithms. For demonstration purposes, all architectures reconstruct a 256-length signal with maximum sparsity of 8 using 64 measurements. Implementation on a Xilinx Virtex-5 FPGA requires 27.14 μs to reconstruct the signal using basic OMP, whereas with the thresholding method it requires 18 μs. ASIC implementation reconstructs the signal in 13 μs. However, our custom many-core, operating at 1.18 GHz, takes 18.28 μs to complete. Our results show that, compared to previously published work on the same algorithm and matrix size, the proposed architectures for FPGA and ASIC implementations perform 1.3x and 1.8x faster, respectively. Also, the proposed many-core implementation performs 3000x faster than the CPU and 2000x faster than the GPU.
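    A plain NumPy version of the Orthogonal Matching Pursuit loop, at the problem scale quoted above (N = 256, sparsity 8, 64 measurements), is sketched below. It is a reference implementation of the algorithm itself, not of the FPGA/ASIC/many-core architectures or of the thresholding optimization.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: greedily select the column of A most
    correlated with the residual, then re-fit by least squares on the
    selected support. A is the M x N measurement matrix (M << N)."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# toy CS problem matching the scale quoted above: N = 256, m = 8, 64 measurements
rng = np.random.default_rng(0)
N, M, m = 256, 64, 8
A = rng.normal(size=(M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, m, replace=False)] = rng.normal(size=m)
x_hat = omp(A, A @ x_true, m)
print(np.allclose(x_hat, x_true, atol=1e-6))   # exact recovery is typical at this sparsity level
```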

  8. Photographic copy of architectural drawing, 1921 (original located at University ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photographic copy of architectural drawing, 1921 (original located at University of Minnesota Facilities Management Office, Minneapolis). EXTERIOR ENTRANCE AND WINDOW BAY - Mines Experiment Station, University of Minnesota, Twin Cities Campus, 56 East River Road, Minneapolis, Hennepin County, MN

  9. Parallel language constructs for tensor product computations on loosely coupled architectures

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Vanrosendale, John

    1989-01-01

    Distributed memory architectures offer high levels of performance and flexibility, but have proven awkward to program. Current languages for nonshared memory architectures provide a relatively low-level programming environment and are poorly suited to modular programming and to the construction of libraries. A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The focus is on tensor product array computations, a simple but important class of numerical algorithms. The problem of programming 1-D kernel routines, such as parallel tridiagonal solvers, is addressed first, and then how such parallel kernels can be combined to form parallel tensor product algorithms is examined.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaskey, Alexander J.

    Hybrid programming models for beyond-CMOS technologies will prove critical for integrating new computing technologies alongside our existing infrastructure. Unfortunately the software infrastructure required to enable this is lacking or not available. XACC is a programming framework for extreme-scale, post-exascale accelerator architectures that integrates alongside existing conventional applications. It is a pluggable framework for programming languages developed for next-gen computing hardware architectures like quantum and neuromorphic computing. It lets computational scientists efficiently off-load classically intractable work to attached accelerators through user-friendly Kernel definitions. XACC makes post-exascale hybrid programming approachable for domain computational scientists.

  11. Polymorphous Computing Architecture (PCA) Kernel Benchmark Measurements on the MIT Raw Microprocessor

    DTIC Science & Technology

    2006-06-14

    Robert Graybill. A Raw board for the use of this project was provided by the Computer Architecture Group at the Massachusetts Institute of Technology...simulator is presented by MIT as being an accurate model of the Raw chip, we have found that it does not accurately model the board. Our comparison...G4 processor, model 7410, with a 32 kbyte level-1 cache on-chip and a 2 Mbyte L2 cache connected through a 250 MHz bus [12]. Each node has 256 Mbyte

  12. Deep Recurrent Neural Networks for Human Activity Recognition

    PubMed Central

    Murad, Abdulmajid

    2017-01-01

    Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs. PMID:29113103

  13. Deep Recurrent Neural Networks for Human Activity Recognition.

    PubMed

    Murad, Abdulmajid; Pyun, Jae-Young

    2017-11-06

    Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.
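    A minimal PyTorch sketch of the unidirectional LSTM variant described above is given below: variable-length windows of body-worn sensor samples are classified from the final hidden state. The channel count, layer sizes, and packing of variable-length sequences are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ActivityLSTM(nn.Module):
    """Unidirectional LSTM DRNN for activity recognition over variable-length windows."""
    def __init__(self, n_channels=9, hidden=64, n_classes=6, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x, lengths):
        packed = nn.utils.rnn.pack_padded_sequence(
            x, lengths, batch_first=True, enforce_sorted=False)
        _, (h, _) = self.lstm(packed)        # h: (layers, batch, hidden)
        return self.head(h[-1])              # logits from the last layer's final hidden state

# toy batch: 4 sensor windows of different lengths, 9 channels each
x = torch.randn(4, 128, 9)
lengths = torch.tensor([128, 100, 90, 64])
logits = ActivityLSTM()(x, lengths)
print(logits.shape)                           # torch.Size([4, 6])
```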

  14. Sensitivity analysis of seismic waveforms to upper-mantle discontinuities using the adjoint method

    NASA Astrophysics Data System (ADS)

    Koroni, Maria; Bozdağ, Ebru; Paulssen, Hanneke; Trampert, Jeannot

    2017-09-01

    Using spectral-element simulations of wave propagation, we investigated the sensitivity of seismic waveforms, recorded on transverse components, to upper-mantle discontinuities in 1-D and 3-D background models. These sensitivity kernels, or Fréchet derivatives, illustrate the spatial sensitivity to model parameters, of which those for shear wave speed and the surface topography of internal boundaries are discussed in this paper. We focus on the boundaries at 400 and 670 km depth of the mantle transition zone. SS precursors have frequently been used to infer the topography of upper-mantle discontinuities. These seismic phases are underside reflections off these boundaries and are usually analysed in the distance range of 110°-160°. This distance range is chosen to minimize the interference from other waves. We show sensitivity kernels for consecutive time windows at three characteristic epicentral distances within the 110°-160° range. The sensitivity kernels are computed with the adjoint method using synthetic data. From our simulations we can draw three main conclusions: (i) The exact Fréchet derivatives show that in all time windows, and also in those centred on the SS precursors, there is interference from other waves. This explains the difficulty reported in the literature to correct for 3-D shear wave speed perturbations, even if the 3-D structure is perfectly known. (ii) All studies attempting to map the topography of the 400 and 670 km discontinuities to date assume that the traveltimes of SS precursors can be linearly decomposed into a 3-D elastic structure and a topography part. We recently showed that such a linear decomposition is not possible for SS precursors, and the sensitivity kernels presented in this paper explain why. (iii) In agreement with previous work, we show that other parts of the seismograms have greater sensitivity to upper-mantle discontinuities than SS precursors, especially multiply bouncing S waves exploiting the S-wave triplications due to the mantle transition zone. These phases can potentially improve the inference of global topographic variations of the upper-mantle discontinuities in the context of full waveform inversion in a joint inversion for (an)elastic parameters and topography.

  15. Genetic Architecture of Grain Chalk in Rice and Interactions with a Low Phytic Acid Locus

    USDA-ARS?s Scientific Manuscript database

    Grain quality characteristics have a major impact on the value of the harvested rice crop. In addition to grain dimensions which determine rice grain market classes, translucent milled kernels are also important for assuring the highest grain quality and crop value. Over the last several years, ther...

  16. System Architectural Concepts: Army Battlefield Command and Control Information Utility (CCIU).

    DTIC Science & Technology

    1982-07-25

    [Report fragment only: listing of device attributes (device-type, required-host, device-number), line printers, and glossary entries such as Kernel (a layer of the PEOS that implements the basic system primitives), LUS (Local Name Space), and Locking.]

  17. Understanding Portability of a High-Level Programming Model on Contemporary Heterogeneous Architectures

    DOE PAGES

    Sabne, Amit J.; Sakdhnagool, Putt; Lee, Seyong; ...

    2015-07-13

    Accelerator-based heterogeneous computing is gaining momentum in the high-performance computing arena. However, the increased complexity of heterogeneous architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle this problem. Although the abstraction provided by OpenACC offers productivity, it raises questions concerning both functional and performance portability. In this article, the authors propose HeteroIR, a high-level, architecture-independent intermediate representation, to map high-level programming models, such as OpenACC, to heterogeneous architectures. They present a compiler approach that translates OpenACC programs into HeteroIR and accelerator kernels to obtain OpenACC functional portability. They then evaluate the performance portability obtained by OpenACC with their approach on 12 OpenACC programs on Nvidia CUDA, AMD GCN, and Intel Xeon Phi architectures. They study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.

  18. Optimum-AIV: A planning and scheduling system for spacecraft AIV

    NASA Technical Reports Server (NTRS)

    Arentoft, M. M.; Fuchs, Jens J.; Parrod, Y.; Gasquet, Andre; Stader, J.; Stokes, I.; Vadon, H.

    1991-01-01

    A project undertaken for the European Space Agency (ESA) is presented. The project is developing a knowledge based software system for planning and scheduling of activities for spacecraft assembly, integration, and verification (AIV). The system extends into the monitoring of plan execution and the plan repair phase. The objectives are to develop an operational kernel of a planning, scheduling, and plan repair tool, called OPTIMUM-AIV, and to provide facilities which will allow individual projects to customize the kernel to suit its specific needs. The kernel shall consist of a set of software functionalities for assistance in initial specification of the AIV plan, in verification and generation of valid plans and schedules for the AIV activities, and in interactive monitoring and execution problem recovery for the detailed AIV plans. Embedded in OPTIMUM-AIV are external interfaces which allow integration with alternative scheduling systems and project databases. The current status of the OPTIMUM-AIV project, as of Jan. 1991, is that a further analysis of the AIV domain has taken place through interviews with satellite AIV experts, a software requirement document (SRD) for the full operational tool was approved, and an architectural design document (ADD) for the kernel excluding external interfaces is ready for review.

  19. Local and Global Gestalt Laws: A Neurally Based Spectral Approach.

    PubMed

    Favali, Marta; Citti, Giovanna; Sarti, Alessandro

    2017-02-01

    This letter presents a mathematical model of figure-ground articulation that takes into account both local and global gestalt laws and is compatible with the functional architecture of the primary visual cortex (V1). The local gestalt law of good continuation is described by means of suitable connectivity kernels that are derived from Lie group theory and quantitatively compared with long-range connectivity in V1. Global gestalt constraints are then introduced in terms of spectral analysis of a connectivity matrix derived from these kernels. This analysis performs grouping of local features and individuates perceptual units with the highest salience. Numerical simulations are performed, and results are obtained by applying the technique to a number of stimuli.
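    As a generic illustration of grouping by spectral analysis of a connectivity matrix, the sketch below clusters oriented edge elements using a simple proximity-and-co-orientation affinity. The paper's actual kernels are derived from Lie group theory and compared with V1 long-range connectivity, so this affinity is only a stand-in.

```python
import numpy as np

# Hypothetical sketch: group oriented edge elements by the leading eigenvectors
# of a connectivity (affinity) matrix; a simple proximity * co-orientation
# affinity stands in for the Lie-group-derived kernels of the paper.
def connectivity(points, thetas, sigma_d=1.0, sigma_t=0.5):
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    dt = thetas[:, None] - thetas[None, :]
    return np.exp(-d2 / (2 * sigma_d ** 2)) * np.exp(-dt ** 2 / (2 * sigma_t ** 2))

rng = np.random.default_rng(0)
# two perceptual units: a horizontal contour (24 elements) and a vertical one (16)
pts = np.vstack([np.c_[np.linspace(0, 6, 24), np.zeros(24)],
                 np.c_[np.full(16, 9.0), np.linspace(0, 4, 16)]])
ang = np.r_[np.zeros(24), np.full(16, np.pi / 2)] + 0.05 * rng.normal(size=40)
A = connectivity(pts, ang)
vals, vecs = np.linalg.eigh(A)                               # ascending eigenvalues
labels = (np.abs(vecs[:, -1]) > np.abs(vecs[:, -2])).astype(int)
print(labels[:24].sum(), labels[24:].sum())                  # each contour gets one consistent label
```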

  20. GeantV: from CPU to accelerators

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Arora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Sehgal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The GeantV project aims to research and develop the next-generation simulation software describing the passage of particles through matter. While the modern CPU architectures are being targeted first, resources such as GPGPU, Intel® Xeon Phi, Atom or ARM cannot be ignored anymore by HEP CPU-bound applications. The proof of concept GeantV prototype has been mainly engineered for CPUs having vector units but we have foreseen from early stages a bridge to arbitrary accelerators. A software layer consisting of architecture/technology specific backends currently supports this concept. This approach allows one to abstract out the basic types such as scalar/vector but also to formalize generic computation kernels transparently using library- or device-specific constructs based on Vc, CUDA, Cilk+ or Intel intrinsics. While the main goal of this approach is portable performance, as a bonus, it comes with the insulation of the core application and algorithms from the technology layer. This allows our application to be long term maintainable and versatile to changes at the backend side. The paper presents the first results of basket-based GeantV geometry navigation on the Intel® Xeon Phi KNC architecture. We present the scalability and vectorization study, conducted using Intel performance tools, as well as our preliminary conclusions on the use of accelerators for GeantV transport. We also describe the current work and preliminary results for using the GeantV transport kernel on GPUs.

  1. Dynamic Experiment Design Regularization Approach to Adaptive Imaging with Array Radar/SAR Sensor Systems

    PubMed Central

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics in such a solution space. Next, the “model-free” variational analysis (VA)-based image enhancement approach and the “model-based” descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations. PMID:22163859

  2. Accelerated Prediction of the Polar Ice and Global Ocean (APPIGO)

    DTIC Science & Technology

    2015-09-30

    and so HYCOM was 10x slower on the K20X Keplers than on the 16-Core AMDs alone. The initial lack of performance was not a surprise. Our goal was to...MPI ranks to take advantage of the Hyper-Q capabilities on newer Kepler architectures. Hyper-Q allows multiple GPU kernels originating from

  3. Support Vector Data Description Model to Map Specific Land Cover with Optimal Parameters Determined from a Window-Based Validation Set.

    PubMed

    Zhang, Jinshui; Yuan, Zhoumiqi; Shuai, Guanyuan; Pan, Yaozhong; Zhu, Xiufang

    2017-04-26

    This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, in which the validation set included target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by outlier pixels located neighboring the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covers only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window size scales. Moreover, WVS-SVDD showed much less sensitivity to untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits in using the optimal parameters, tradeoff coefficient (C) and kernel width (s), for mapping homogeneous specific land cover.
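    A rough analogue of the parameter-selection idea, using scikit-learn's OneClassSVM as an SVDD stand-in, is sketched below: the validation set mixes target samples with spectrally neighbouring outliers, mimicking the window-derived validation pixels, and nu/gamma play roles loosely corresponding to the tradeoff coefficient C and kernel width s. The toy data and parameter grids are assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hypothetical sketch: choose one-class (SVDD-like) parameters with a validation
# set containing target pixels plus near-boundary outliers.
rng = np.random.default_rng(0)
target_train = rng.normal(0.0, 1.0, size=(300, 4))             # "wheat" spectra (toy)
val_target   = rng.normal(0.0, 1.0, size=(60, 4))
val_outlier  = rng.normal(2.5, 1.0, size=(60, 4))               # spectrally neighbouring outliers
X_val = np.vstack([val_target, val_outlier])
y_val = np.r_[np.ones(60), -np.ones(60)]                        # +1 target, -1 outlier

best = None
for nu in (0.01, 0.05, 0.1, 0.2):                               # tradeoff, loosely like C
    for gamma in (0.01, 0.1, 1.0):                              # kernel width, loosely like s
        model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(target_train)
        acc = (model.predict(X_val) == y_val).mean()
        if best is None or acc > best[0]:
            best = (acc, nu, gamma, model)
print("validation accuracy %.3f with nu=%s gamma=%s" % best[:3])
```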

  4. Architectural development of an advanced EVA Electronic System

    NASA Technical Reports Server (NTRS)

    Lavelle, Joseph

    1992-01-01

    An advanced electronic system for future EVA missions (including zero gravity, the lunar surface, and the surface of Mars) is under research and development within the Advanced Life Support Division at NASA Ames Research Center. As a first step in the development, an optimum system architecture has been derived from an analysis of the projected requirements for these missions. The open, modular architecture centers around a distributed multiprocessing concept where the major subsystems independently process their own I/O functions and communicate over a common bus. Supervision and coordination of the subsystems is handled by an embedded real-time operating system kernel employing multitasking software techniques. A discussion of how the architecture most efficiently meets the electronic system functional requirements, maximizes flexibility for future development and mission applications, and enhances the reliability and serviceability of the system in these remote, hostile environments is included.

  5. Level-2 Milestone 5588: Deliver Strategic Plan and Initial Scalability Assessment by Advanced Architecture and Portability Specialists Team

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draeger, Erik W.

    This report documents the fact that the work of creating a strategic plan and beginning customer engagements has been completed. The description of the milestone is: The newly formed advanced architecture and portability specialists (AAPS) team will develop a strategic plan to meet the goals of 1) sharing knowledge and experience with code teams to ensure that ASC codes run well on new architectures, and 2) supplying skilled computational scientists to put the strategy into practice. The plan will be delivered to ASC management in the first quarter. By the fourth quarter, the team will identify their first customers within PEM and IC, perform an initial assessment of scalability and performance bottlenecks for next-generation architectures, and embed AAPS team members with customer code teams to assist with initial portability development within standalone kernels or proxy applications.

  6. Body-wave traveltime and amplitude shifts from asymptotic travelling wave coupling

    USGS Publications Warehouse

    Pollitz, F.

    2006-01-01

    We explore the sensitivity of finite-frequency body-wave traveltimes and amplitudes to perturbations in 3-D seismic velocity structure relative to a spherically symmetric model. Using the approach of coupled travelling wave theory, we consider the effect of a structural perturbation on an isolated portion of the seismogram. By convolving the spectrum of the differential seismogram with the spectrum of a narrow window taper, and using a Taylor's series expansion for wavenumber as a function of frequency on a mode dispersion branch, we derive semi-analytic expressions for the sensitivity kernels. Far-field effects of wave interactions with the free surface or internal discontinuities are implicitly included, as are wave conversions upon scattering. The kernels may be computed rapidly for the purpose of structural inversions. We give examples of traveltime sensitivity kernels for regional wave propagation at 1 Hz. For the direct SV wave in a simple crustal velocity model, they are generally complicated because of interfering waves generated by interactions with the free surface and the Mohorovičić discontinuity. A large part of the interference effects may be eliminated by restricting the travelling wave basis set to those waves within a certain range of horizontal phase velocity. © Journal compilation © 2006 RAS.

  7. A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations

    DOE PAGES

    Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel; ...

    2017-06-01

    As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMM^T by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.

  8. A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel

    As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMM^T by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.

  9. Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmalz, Mark S

    2011-07-24

    Statement of Problem - Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G̲ for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G → G̲, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphic Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - Department of Energy has many simulation codes that must compute faster, to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.

  10. Active impulsive noise control using maximum correntropy with adaptive kernel size

    NASA Astrophysics Data System (ADS)

    Lu, Lu; Zhao, Haiquan

    2017-03-01

    The active noise control (ANC) based on the principle of superposition is an attractive method to attenuate noise signals. However, impulsive noise in ANC systems will degrade the performance of the controller. In this paper, a filtered-x recursive maximum correntropy (FxRMC) algorithm is proposed based on the maximum correntropy criterion (MCC) to reduce the effect of outliers. The proposed FxRMC algorithm does not require any a priori information about the noise characteristics and outperforms the filtered-x least mean square (FxLMS) algorithm for impulsive noise. Meanwhile, in order to adjust the kernel size of the FxRMC algorithm online, a recursive approach is proposed that takes into account the past estimates of error signals over a sliding window. Simulation and experimental results in the context of active impulsive noise control demonstrate that the proposed algorithms achieve much better performance than existing algorithms in various noise environments.
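
    The following is a minimal sketch of a correntropy-weighted LMS-style update, illustrating how the Gaussian kernel in the MCC suppresses impulsive errors; it is not the FxRMC algorithm itself (which also filters the reference signal through the secondary-path model), and the step size and kernel size below are arbitrary illustrative values.

```python
# Minimal sketch of a maximum-correntropy-style adaptive update (not the
# exact FxRMC algorithm). The Gaussian factor down-weights impulsive errors.
import numpy as np

def mcc_lms_step(w, x, d, mu=0.05, sigma=1.0):
    """One update of an LMS-like filter under the maximum correntropy criterion.

    w     : current weight vector (length L)
    x     : most recent L reference samples (newest first)
    d     : desired/disturbance sample
    sigma : kernel size; small errors behave like LMS, large (impulsive)
            errors are suppressed by exp(-e^2 / (2 sigma^2)).
    """
    e = d - w @ x
    g = np.exp(-e**2 / (2.0 * sigma**2))
    return w + mu * g * e * x, e

# Toy run with occasional impulsive noise in the desired signal.
rng = np.random.default_rng(0)
L, N = 8, 2000
w_true = rng.standard_normal(L)
w = np.zeros(L)
buf = np.zeros(L)
for n in range(N):
    buf = np.roll(buf, 1); buf[0] = rng.standard_normal()
    d = w_true @ buf + (50.0 if rng.random() < 0.01 else 0.0)  # impulses
    w, e = mcc_lms_step(w, buf, d)
print("weight error:", np.linalg.norm(w - w_true))
```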

  11. Antiterrorist Software

    NASA Technical Reports Server (NTRS)

    Clark, David A.

    1998-01-01

    In light of the escalation of terrorism, the Department of Defense spearheaded the development of new antiterrorist software for all Government agencies by issuing a Broad Agency Announcement to solicit proposals. This Government-wide competition resulted in a team that includes NASA Lewis Research Center's Computer Services Division, who will develop the graphical user interface (GUI) and test it in their usability lab. The team launched a program entitled Joint Sphere of Security (JSOS), crafted a design architecture (see the following figure), and is testing the interface. This software system has a state-of-the-art, object-oriented architecture, with a main kernel composed of the Dynamic Information Architecture System (DIAS) developed by Argonne National Laboratory. DIAS will be used as the software "breadboard" for assembling the components of explosions, such as blast and collapse simulations.

  12. Characterization of Transient Plasma Ignition Flame Kernel Growth for Varying Inlet Conditions

    DTIC Science & Technology

    2009-12-01

    Pulse detonation engines (PDEs) have the ... produced little to no new chemical propulsion developments; only improvements to existing architectures. The Pulse Detonation Engine (PDE) is a

  13. GeantV: From CPU to accelerators

    DOE PAGES

    Amadio, G.; Ananya, A.; Apostolakis, J.; ...

    2016-01-01

    The GeantV project aims to research and develop the next-generation simulation software describing the passage of particles through matter. While modern CPU architectures are being targeted first, resources such as GPGPU, Intel® Xeon Phi, Atom or ARM cannot be ignored anymore by HEP CPU-bound applications. The proof-of-concept GeantV prototype has been mainly engineered for CPUs having vector units, but we have foreseen from early stages a bridge to arbitrary accelerators. A software layer consisting of architecture/technology-specific backends currently supports this concept. This approach allows us to abstract out the basic types such as scalar/vector, but also to formalize generic computation kernels using, transparently, library- or device-specific constructs based on Vc, CUDA, Cilk+ or Intel intrinsics. While the main goal of this approach is portable performance, as a bonus it comes with the insulation of the core application and algorithms from the technology layer. This allows our application to be long-term maintainable and versatile to changes at the backend side. The paper presents the first results of basket-based GeantV geometry navigation on the Intel® Xeon Phi KNC architecture. We present the scalability and vectorization study, conducted using Intel performance tools, as well as our preliminary conclusions on the use of accelerators for GeantV transport. Lastly, we also describe the current work and preliminary results for using the GeantV transport kernel on GPUs.

  14. Roofline Analysis in the Intel® Advisor to Deliver Optimized Performance for applications on Intel® Xeon Phi™ Processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koskela, Tuomas S.; Lobet, Mathieu; Deslippe, Jack

    In this session we show, in two case studies, how the roofline feature of Intel Advisor has been utilized to optimize the performance of kernels of the XGC1 and PICSAR codes in preparation for the Intel Knights Landing architecture. The impact of the implemented optimizations and the benefits of using the automatic roofline feature of Intel Advisor to study the performance of large applications will be presented. This demonstrates an effective optimization strategy that has enabled these science applications to achieve up to 4.6 times speed-up and prepare for future exascale architectures. # Goal/Relevance of Session The roofline model [1,2] is a powerful tool for analyzing the performance of applications with respect to the theoretical peak achievable on a given computer architecture. It allows one to graphically represent the performance of an application in terms of operational intensity, i.e. the ratio of flops performed and bytes moved from memory, in order to guide optimization efforts. Given the scale and complexity of modern science applications, it can often be a tedious task for the user to perform the analysis on the level of functions or loops to identify where performance gains can be made. With new Intel tools, it is now possible to automate this task, as well as base the estimates of peak performance on measurements rather than vendor specifications. The goal of this session is to demonstrate how the roofline feature of Intel Advisor can be used to balance memory- vs. computation-related optimization efforts and effectively identify performance bottlenecks. A series of typical optimization techniques: cache blocking, structure refactoring, data alignment, and vectorization, illustrated by the kernel cases, will be addressed. # Description of the codes ## XGC1 The XGC1 code [3] is a magnetic fusion Particle-In-Cell code that uses an unstructured mesh for its Poisson solver that allows it to accurately resolve the edge plasma of a magnetic fusion device. After recent optimizations to its collision kernel [4], most of the computing time is spent in the electron push (pushe) kernel, where these optimization efforts have been focused. The kernel code scaled well with MPI+OpenMP but had almost no automatic compiler vectorization, in part due to indirect memory addresses and in part due to low trip counts of low-level loops that would be candidates for vectorization. Particle blocking and sorting have been implemented to increase trip counts of low-level loops and improve memory locality, and OpenMP directives have been added to vectorize compute-intensive loops that were identified by Advisor. The optimizations have improved the performance of the pushe kernel 2x on Haswell processors and 1.7x on KNL. The KNL node-for-node performance has been brought to within 30% of a NERSC Cori phase I Haswell node and we expect to bridge this gap by reducing the memory footprint of compute-intensive routines to improve cache reuse. ## PICSAR PICSAR is a Fortran/Python high-performance Particle-In-Cell library targeting MIC architectures, first designed to be coupled with the PIC code WARP for the simulation of laser-matter interaction and particle accelerators. PICSAR also contains a FORTRAN stand-alone kernel for performance studies and benchmarks. An MPI domain decomposition is used between NUMA domains and a tile decomposition (cache-blocking) handled by OpenMP has been added for shared-memory parallelism and better cache management.
    The so-called current deposition and field gathering steps that compose the PIC time loop constitute major hotspots that have been rewritten to enable more efficient vectorization. Particle communications between tiles and MPI domains have been merged and parallelized. All considered, these improvements provide speedups of 3.1 for order 1 and 4.6 for order 3 interpolation shape factors on KNL configured in SNC4 quadrant flat mode. Performance is similar between a node of Cori phase 1 and KNL at order 1, and better on KNL by a factor of 1.6 at order 3 with the considered test case (homogeneous thermal plasma).
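
    As an aside, the particle blocking/tile decomposition mentioned for both XGC1 and PICSAR can be illustrated with a short sketch: particles are binned by grid tile so that deposition and gather loops touch a small, cache-resident patch of the grid. The function, tile size, and layout below are illustrative assumptions, not the codes' actual implementations.

```python
# Illustrative sketch of the particle "tiling"/blocking idea used for cache
# blocking in PIC codes: particles are grouped by tile so that the
# deposition/gather loops work on a small, cache-resident patch of the grid.
import numpy as np

def bin_particles(pos, grid_size, tile):
    """Group particle indices by tile for a 2-D domain [0, grid_size)^2."""
    ix = (pos[:, 0] // tile).astype(int)
    iy = (pos[:, 1] // tile).astype(int)
    ntiles = int(np.ceil(grid_size / tile))
    tile_id = ix * ntiles + iy
    order = np.argsort(tile_id, kind="stable")   # sort particles by tile
    bounds = np.searchsorted(tile_id[order], np.arange(ntiles * ntiles + 1))
    return order, bounds                          # CSR-like tile layout

# 100k particles on a 1024^2 grid, 32x32-cell tiles.
rng = np.random.default_rng(1)
pos = rng.uniform(0, 1024, size=(100_000, 2))
order, bounds = bin_particles(pos, 1024, 32)
# Particles in tile t are pos[order[bounds[t]:bounds[t+1]]]; each OpenMP
# thread would own whole tiles, improving locality and avoiding write races.
print("particles in tile 0:", bounds[1] - bounds[0])
```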

  15. VO2 thermochromic smart window for energy savings and generation

    PubMed Central

    Zhou, Jiadong; Gao, Yanfeng; Zhang, Zongtao; Luo, Hongjie; Cao, Chuanxiang; Chen, Zhang; Dai, Lei; Liu, Xinling

    2013-01-01

    The ability to achieve energy saving in architectures and optimal solar energy utilisation affects the sustainable development of the human race. Traditional smart windows and solar cells cannot be combined into one device for energy saving and electricity generation. A VO2 film can respond to the environmental temperature to intelligently regulate infrared transmittance while maintaining visible transparency, and can be applied as a thermochromic smart window. Herein, we report for the first time a novel VO2-based smart window that partially utilises light scattering to solar cells around the glass panel for electricity generation. This smart window combines energy-saving and generation in one device, and offers potential to intelligently regulate and utilise solar radiation in an efficient manner. PMID:24157625

  16. VO₂ thermochromic smart window for energy savings and generation.

    PubMed

    Zhou, Jiadong; Gao, Yanfeng; Zhang, Zongtao; Luo, Hongjie; Cao, Chuanxiang; Chen, Zhang; Dai, Lei; Liu, Xinling

    2013-10-24

    The ability to achieve energy saving in architectures and optimal solar energy utilisation affects the sustainable development of the human race. Traditional smart windows and solar cells cannot be combined into one device for energy saving and electricity generation. A VO2 film can respond to the environmental temperature to intelligently regulate infrared transmittance while maintaining visible transparency, and can be applied as a thermochromic smart window. Herein, we report for the first time a novel VO2-based smart window that partially utilises light scattering to solar cells around the glass panel for electricity generation. This smart window combines energy-saving and generation in one device, and offers potential to intelligently regulate and utilise solar radiation in an efficient manner.

  17. How architecture wins technology wars.

    PubMed

    Morris, C R; Ferguson, C H

    1993-01-01

    Signs of revolutionary transformation in the global computer industry are everywhere. A roll call of the major industry players reads like a waiting list in the emergency room. The usual explanations for the industry's turmoil are at best inadequate. Scale, friendly government policies, manufacturing capabilities, a strong position in desktop markets, excellent software, top design skills--none of these is sufficient, either by itself or in combination, to ensure competitive success in information technology. A new paradigm is required to explain patterns of success and failure. Simply stated, success flows to the company that manages to establish proprietary architectural control over a broad, fast-moving, competitive space. Architectural strategies have become crucial to information technology because of the astonishing rate of improvement in microprocessors and other semiconductor components. Since no single vendor can keep pace with the outpouring of cheap, powerful, mass-produced components, customers insist on stitching together their own local systems solutions. Architectures impose order on the system and make the interconnections possible. The architectural controller is the company that controls the standard by which the entire information package is assembled. Microsoft's Windows is an excellent example of this. Because of the popularity of Windows, companies like Lotus must conform their software to its parameters in order to compete for market share. In the 1990s, proprietary architectural control is not only possible but indispensable to competitive success. What's more, it has broader implications for organizational structure: architectural competition is giving rise to a new form of business organization.

  18. Minimizing energy dissipation of matrix multiplication kernel on Virtex-II

    NASA Astrophysics Data System (ADS)

    Choi, Seonil; Prasanna, Viktor K.; Jang, Ju-wook

    2002-07-01

    In this paper, we develop energy-efficient designs for matrix multiplication on FPGAs. To analyze the energy dissipation, we develop a high-level model using domain-specific modeling techniques. In this model, we identify architecture parameters that significantly affect the total energy (system-wide energy) dissipation. Then, we explore design trade-offs by varying these parameters to minimize the system-wide energy. For matrix multiplication, we consider a uniprocessor architecture and a linear array architecture to develop energy-efficient designs. For the uniprocessor architecture, the cache size is a parameter that affects the I/O complexity and the system-wide energy. For the linear array architecture, the amount of storage per processing element is a parameter affecting the system-wide energy. By using the maximum amount of storage per processing element and the minimum number of multipliers, we obtain a design that minimizes the system-wide energy. We develop several energy-efficient designs for matrix multiplication. For example, for 6×6 matrix multiplication, energy savings of up to 52% for the uniprocessor architecture and 36% for the linear array architecture are achieved over an optimized library for the Virtex-II FPGA from Xilinx.
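
    The trade-off described for the uniprocessor design can be made concrete with a schematic energy model (generic notation, not the paper's exact model): off-chip I/O volume for blocked n×n matrix multiplication shrinks as the on-chip cache of C words grows, while cache storage energy rises.

```latex
% Schematic system-wide energy model (illustrative, not the paper's model):
% computation energy + off-chip I/O energy + cache storage energy, where the
% I/O volume of blocked matrix multiplication follows the classical
% Theta(n^3 / sqrt(C)) bound for a cache of C words.
\[
E_{\text{total}}(C) \;\approx\; n^{3}\,e_{\text{mult}}
\;+\; Q(C)\,e_{\text{io}}
\;+\; C\,e_{\text{cache}},
\qquad
Q(C) = \Theta\!\left(\frac{n^{3}}{\sqrt{C}}\right),
\]
% so enlarging the cache lowers I/O energy but raises storage energy,
% giving an energy-minimizing cache size.
```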

  19. 24. Photocopy of Drawing (November 1934 Architectural Drawings by Burge ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    24. Photocopy of Drawing (November 1934 Architectural Drawings by Burge and Stevens, in Possession of the Engineering and Capital Improvements Department of the Atlanta Housing Authority, Atlanta, Georgia). SCHEDULES OF DOORS, WINDOWS, AND INTERIOR FINISHES OF TECHWOOD PROJECT #1101, SHEET A-56. - Techwood Homes, Building No. 1, 575-579 Techwood Drive, Atlanta, Fulton County, GA

  20. Photocopy of drawing (this photograph is an 8" x 10" ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of drawing (this photograph is an 8" x 10" copy of an 8" x 10" negative; 1991 original architectural drawing located at Building No. 458, NAS Pensacola, Florida) Replace windows, Building No. 45, Architectural building elevations, Sheet 2 of 2 - U.S. Naval Air Station, Equipment Shops & Offices, 206 South Avenue, Pensacola, Escambia County, FL

  1. India's Vernacular Architecture as a Reflection of Culture.

    ERIC Educational Resources Information Center

    Masalski, Kathleen Woods

    This paper contains the narrative for a slide presentation on the architecture of India. Through the narration, the geography and climate of the country and the social conditions of the Indian people are discussed. Roofs and windows are adapted for the hot, rainy climate, while the availability of building materials ranges from palm leaves to mud…

  2. CMOS VLSI Active-Pixel Sensor for Tracking

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata; Sun, Chao; Yang, Guang; Heynssens, Julie

    2004-01-01

    An architecture for a proposed active-pixel sensor (APS) and a design to implement the architecture in a complementary metal oxide semiconductor (CMOS) very-large-scale integrated (VLSI) circuit provide for some advanced features that are expected to be especially desirable for tracking pointlike features of stars. The architecture would also make this APS suitable for robotic-vision and general pointing and tracking applications. CMOS imagers in general are well suited for pointing and tracking because they can be configured for random access to selected pixels and to provide readout from windows of interest within their fields of view. However, until now, the architectures of CMOS imagers have not supported multiwindow operation or low-noise data collection. Moreover, smearing and motion artifacts in collected images have made prior CMOS imagers unsuitable for tracking applications. The proposed CMOS imager (see figure) would include an array of 1,024 by 1,024 pixels containing high-performance photodiode-based APS circuitry. The pixel pitch would be 9 μm. The operations of the pixel circuits would be sequenced and otherwise controlled by an on-chip timing and control block, which would enable the collection of image data, during a single frame period, from either the full frame (that is, all 1,024 by 1,024 pixels) or from within as many as 8 different arbitrarily placed windows as large as 8 by 8 pixels each. A typical prior CMOS APS operates in a row-at-a-time ("rolling-shutter") readout mode, which gives rise to exposure skew. In contrast, the proposed APS would operate in a sample-first/read-later mode, suppressing rolling-shutter effects. In this mode, the analog readout signals from the pixels corresponding to the windows of interest (which windows, in the star-tracking application, would presumably contain guide stars) would be sampled rapidly by routing them through a programmable diagonal switch array to an on-chip parallel analog memory array. The diagonal-switch and memory addresses would be generated by the on-chip controller. The memory array would be large enough to hold differential signals acquired from all 8 windows during a frame period. Following the rapid sampling from all the windows, the contents of the memory array would be read out sequentially by use of a capacitive transimpedance amplifier (CTIA) at a maximum data rate of 10 MHz. This data rate is compatible with an update rate of almost 10 Hz, even in full-frame operation.

  3. FOS: A Factored Operating Systems for High Assurance and Scalability on Multicores

    DTIC Science & Technology

    2012-08-01

    computing. It builds on previous work in distributed and microkernel OSes by factoring services out of the kernel, and then further distributing each ... cooperating servers. We term such a service a fleet. Figure 2 shows the high-level architecture of fos. A small microkernel runs on every core

  4. A WPS Based Architecture for Climate Data Analytic Services (CDAS) at NASA

    NASA Astrophysics Data System (ADS)

    Maxwell, T. P.; McInerney, M.; Duffy, D.; Carriere, L.; Potter, G. L.; Doutriaux, C.

    2015-12-01

    Faced with unprecedented growth in the Big Data domain of climate science, NASA has developed the Climate Data Analytic Services (CDAS) framework. This framework enables scientists to execute trusted and tested analysis operations in a high performance environment close to the massive data stores at NASA. The data is accessed in standard (NetCDF, HDF, etc.) formats in a POSIX file system and processed using trusted climate data analysis tools (ESMF, CDAT, NCO, etc.). The framework is structured as a set of interacting modules allowing maximal flexibility in deployment choices. The current set of module managers includes: Staging Manager: Runs the computation locally on the WPS server or remotely using tools such as celery or SLURM. Compute Engine Manager: Runs the computation serially or distributed over nodes using a parallelization framework such as celery or spark. Decomposition Manager: Manages strategies for distributing the data over nodes. Data Manager: Handles the import of domain data from long term storage and manages the in-memory and disk-based caching architectures. Kernel Manager: A kernel is an encapsulated computational unit which executes a processor's compute task. Each kernel is implemented in python exploiting existing analysis packages (e.g. CDAT) and is compatible with all CDAS compute engines and decompositions. CDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be executed using either direct web service calls, a python script or application, or a javascript-based web application. Client packages in python or javascript contain everything needed to make CDAS requests. The CDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale data sets, where the data resides, to ultimately produce societal benefits. It is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service permits decision makers to investigate climate changes around the globe, inspect model trends, compare multiple reanalysis datasets, and examine variability.

  5. The Workstation Approach to Laboratory Computing

    PubMed Central

    Crosby, P.A.; Malachowski, G.C.; Hall, B.R.; Stevens, V.; Gunn, B.J.; Hudson, S.; Schlosser, D.

    1985-01-01

    There is a need for a Laboratory Workstation which specifically addresses the problems associated with computing in the scientific laboratory. A workstation based on the IBM PC architecture and including a front end data acquisition system which communicates with a host computer via a high speed communications link; a new graphics display controller with hardware window management and window scrolling; and an integrated software package is described.

  6. A Software Architecture for Adaptive Modular Sensing Systems

    PubMed Central

    Lyle, Andrew C.; Naish, Michael D.

    2010-01-01

    By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration. PMID:22163614

  7. A software architecture for adaptive modular sensing systems.

    PubMed

    Lyle, Andrew C; Naish, Michael D

    2010-01-01

    By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration.

  8. A feasibility study on porting the community land model onto accelerators using OpenACC

    DOE PAGES

    Wang, Dali; Wu, Wei; Winkler, Frank; ...

    2014-01-01

    As environmental models (such as the Accelerated Climate Model for Energy (ACME), the Parallel Reactive Flow and Transport Model (PFLOTRAN), the Arctic Terrestrial Simulator (ATS), etc.) become more and more complicated, we are facing enormous challenges regarding porting those applications onto hybrid computing architectures. OpenACC appears to be a very promising technology; therefore, we have conducted a feasibility analysis on porting the Community Land Model (CLM), a terrestrial ecosystem model within the Community Earth System Model (CESM). Specifically, we used an automatic function testing platform to extract a small computing kernel out of CLM, then applied this kernel to the actual CLM dataflow procedure, and investigated the strategy of data parallelization and the benefit of data movement provided by the current implementation of OpenACC. Even though it is a non-intensive kernel, on a single 16-core computing node, the performance (based on the actual computation time using one GPU) of the OpenACC implementation is 2.3 times faster than that of the OpenMP implementation using a single OpenMP thread, but 2.8 times slower than the performance of the OpenMP implementation using 16 threads. On multiple nodes, the MPI_OpenACC implementation demonstrated very good scalability on up to 128 GPUs on 128 computing nodes. This study also provides useful information for us to look into the potential benefits of the “deep copy” capability and the “routine” feature of the OpenACC standard. In conclusion, we believe that our experience with the environmental model CLM can be beneficial to many other scientific research programs that are interested in porting their large-scale scientific codes onto high-end computers, empowered by hybrid computing architectures, using OpenACC.

  9. Open architecture CNC system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tal, J.; Lopez, A.; Edwards, J.M.

    1995-04-01

    In this paper, an alternative solution to the traditional CNC machine tool controller has been introduced. Software and hardware modules have been described and their incorporation in a CNC control system has been outlined. This type of CNC machine tool controller demonstrates that technology is accessible and can be readily implemented into an open architecture machine tool controller. Benefit to the user is greater controller flexibility, while being economically achievable. PC-based motion as well as non-motion features will provide flexibility through a Windows environment. Upgrading this type of controller system through software revisions will keep the machine tool in a competitive state with minimal effort. Software and hardware modules are mass produced, permitting competitive procurement and incorporation. Open architecture CNC systems provide diagnostics, thus enhancing maintainability and machine tool up-time. A major concern of traditional CNC systems has been operator training time. Training time can be greatly minimized by making use of Windows environment features.

  10. High-frequency modes in a two-dimensional rectangular room with windows

    NASA Astrophysics Data System (ADS)

    Shabalina, E. D.; Shirgina, N. V.; Shanin, A. V.

    2010-07-01

    We examine a two-dimensional model problem of architectural acoustics on sound propagation in a rectangular room with windows. It is supposed that the walls are ideally flat and hard; the windows absorb all energy that falls upon them. We search for the modes of such a room having minimal attenuation indices, which have the expressed structure of billiard trajectories. The main attenuation mechanism for such modes is diffraction at the edges of the windows. We construct estimates for the attenuation indices of the given modes based on the solution to the Weinstein problem. We formulate diffraction problems similar to the statement of the Weinstein problem that describe the attenuation of billiard modes in complex situations.

  11. Bio-inspired adaptive feedback error learning architecture for motor control.

    PubMed

    Tolu, Silvia; Vanegas, Mauricio; Luque, Niceto R; Garrido, Jesús A; Ros, Eduardo

    2012-10-01

    This study proposes an adaptive control architecture based on an accurate regression method called Locally Weighted Projection Regression (LWPR) and on a bio-inspired module, such as a cerebellar-like engine. This hybrid architecture takes full advantage of the machine learning module (LWPR kernel) to abstract an optimized representation of the sensorimotor space while the cerebellar component integrates this to generate corrective terms in the framework of a control task. Furthermore, we illustrate how the use of a simple adaptive error feedback term allows the proposed architecture to be used even in the absence of an accurate analytic reference model. The presented approach achieves accurate control with low-gain corrective terms (for compliant control schemes). We evaluate the contribution of the different components of the proposed scheme, comparing the obtained performance with alternative approaches. Then, we show that the presented architecture can be used for accurate manipulation of different objects when their physical properties are not directly known by the controller. We evaluate how the scheme scales for simulated plants with a high number of degrees of freedom (7 DOFs).

  12. SOMKE: kernel density estimation over data streams by sequences of self-organizing maps.

    PubMed

    Cao, Yuan; He, Haibo; Man, Hong

    2012-08-01

    In this paper, we propose a novel method, SOMKE, for kernel density estimation (KDE) over data streams based on sequences of self-organizing maps (SOMs). In many stream data mining applications, the traditional KDE methods are infeasible because of the high computational cost, processing time, and memory requirement. To reduce the time and space complexity, we propose a SOM structure in this paper to obtain well-defined data clusters to estimate the underlying probability distributions of incoming data streams. The main idea of this paper is to build a series of SOMs over the data streams via two operations, that is, creating and merging the SOM sequences. The creation phase produces the SOM sequence entries for windows of the data, which obtains clustering information of the incoming data streams. The size of the SOM sequences can be further reduced by combining the consecutive entries in the sequence based on the measure of Kullback-Leibler divergence. Finally, the probability density functions over arbitrary time periods along the data streams can be estimated using such SOM sequences. We compare SOMKE with two other KDE methods for data streams, the M-kernel approach and the cluster kernel approach, in terms of accuracy and processing time for various stationary data streams. Furthermore, we also investigate the use of SOMKE over nonstationary (evolving) data streams, including a synthetic nonstationary data stream, a real-world financial data stream and a group of network traffic data streams. The simulation results illustrate the effectiveness and efficiency of the proposed approach.
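
    A minimal sketch of the underlying idea (an illustration, not the SOMKE algorithm itself): summarize a window of the stream by prototype vectors with hit counts, then estimate the density as a weighted Gaussian mixture over the prototypes instead of over all raw samples. The histogram-based prototypes below merely stand in for SOM weight vectors.

```python
# Sketch: weighted Gaussian KDE over prototype vectors (1-D case), as a
# stand-in for the SOM-based stream summarization described above.
import numpy as np

def kde_from_prototypes(x_eval, prototypes, counts, h):
    """Weighted Gaussian KDE evaluated at the points x_eval."""
    w = counts / counts.sum()
    diffs = (x_eval[:, None] - prototypes[None, :]) / h
    k = np.exp(-0.5 * diffs**2) / (h * np.sqrt(2 * np.pi))
    return k @ w

# Toy usage: pretend a SOM compressed 10,000 stream samples into 20 prototypes.
rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(-2, 0.5, 6000), rng.normal(3, 1.0, 4000)])
edges = np.linspace(data.min(), data.max(), 21)
counts, _ = np.histogram(data, bins=edges)            # stand-in for SOM hit counts
prototypes = 0.5 * (edges[:-1] + edges[1:])           # stand-in for SOM weight vectors
grid = np.linspace(-5, 7, 200)
density = kde_from_prototypes(grid, prototypes, counts, h=0.4)
print(float(np.trapz(density, grid)))                 # should be close to 1
```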

  13. Cross-Platform Development Techniques for Mobile Devices

    DTIC Science & Technology

    2013-09-01

    many other platforms including Windows, Blackberry, and Symbian. Each of these platforms has its own distinct architecture and programming language ... sales of iPhones and the increasing use of Android-based devices have forced less successful competitors such as Microsoft, Blackberry, and Symbian ... Blackberry and Windows Phone are planned [12] in this tool's attempt to reuse code with a unified JavaScript API while at the same time supporting unique

  14. Combat Vehicle Command and Control System Architecture Overview

    DTIC Science & Technology

    1994-10-01

    inserted in the software. Interactive interface displays and controls were prepared using rapidly prototyped software and were retained at the MWTB for ... being simulated: controls, sensor displays, and out-the-window displays for the crew; computer image generators (CIGs) for out-the-window and ... black hot viewing modes. The commander may access a number of capabilities of the CITV simulation, described below, from controls located around the

  15. A Prototype SSVEP Based Real Time BCI Gaming System

    PubMed Central

    Martišius, Ignas

    2016-01-01

    Although brain-computer interface technology is mainly designed with disabled people in mind, it can also be beneficial to healthy subjects, for example, in gaming or virtual reality systems. In this paper we discuss the typical architecture, paradigms, requirements, and limitations of electroencephalogram-based gaming systems. We have developed a prototype three-class brain-computer interface system, based on the steady state visually evoked potentials paradigm and the Emotiv EPOC headset. An online target shooting game, implemented in the OpenViBE environment, has been used for user feedback. The system utilizes wave atom transform for feature extraction, achieving an average accuracy of 78.2% using linear discriminant analysis classifier, 79.3% using support vector machine classifier with a linear kernel, and 80.5% using a support vector machine classifier with a radial basis function kernel. PMID:27051414

  16. A Prototype SSVEP Based Real Time BCI Gaming System.

    PubMed

    Martišius, Ignas; Damaševičius, Robertas

    2016-01-01

    Although brain-computer interface technology is mainly designed with disabled people in mind, it can also be beneficial to healthy subjects, for example, in gaming or virtual reality systems. In this paper we discuss the typical architecture, paradigms, requirements, and limitations of electroencephalogram-based gaming systems. We have developed a prototype three-class brain-computer interface system, based on the steady state visually evoked potentials paradigm and the Emotiv EPOC headset. An online target shooting game, implemented in the OpenViBE environment, has been used for user feedback. The system utilizes wave atom transform for feature extraction, achieving an average accuracy of 78.2% using linear discriminant analysis classifier, 79.3% using support vector machine classifier with a linear kernel, and 80.5% using a support vector machine classifier with a radial basis function kernel.
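
    The classification stage described above can be sketched with a standard SVM library; the snippet below is illustrative only (random features stand in for the wave atom transform features, and the accuracies printed have no relation to those reported in the paper).

```python
# Hedged sketch: a three-class SVM with linear and RBF kernels, mirroring the
# classifier comparison described above. Features are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
X = rng.standard_normal((300, 32))          # stand-in feature vectors
y = rng.integers(0, 3, size=300)            # three SSVEP target classes
X[y == 1] += 0.8; X[y == 2] -= 0.8          # make the classes separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel, C=1.0, gamma="scale").fit(X_tr, y_tr)
    print(kernel, accuracy_score(y_te, clf.predict(X_te)))
```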

  17. Key Technologies of Phone Storage Forensics Based on ARM Architecture

    NASA Astrophysics Data System (ADS)

    Zhang, Jianghan; Che, Shengbing

    2018-03-01

    Smart phones mainly run one of three mobile platform operating systems: Android, iOS and Windows Phone. Android smart phones have the largest market share, and their processor chips are almost all of the ARM architecture. The memory address mapping mechanism of the ARM architecture differs from that of the x86 architecture. To perform forensics on an Android smart phone, we need to understand three key technologies: memory data acquisition, the conversion mechanism from virtual addresses to physical addresses, and locating the system's key data. This article presents a viable solution to these three issues that does not rely on the operating system API.
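
    The virtual-to-physical conversion mentioned as the second key technology can be illustrated with a simplified two-level page-table walk. The field widths below loosely mimic an ARMv7 short-descriptor split (1 MB sections, 4 KB small pages), but this is a didactic sketch with fabricated table contents, not the descriptor format of any particular Android kernel.

```python
# Simplified illustration of virtual-to-physical address translation via a
# two-level page-table walk over a memory image held as {phys_addr: word}.
def translate(vaddr, ttbr, phys_mem):
    l1_index = (vaddr >> 20) & 0xFFF          # bits 31:20 -> first-level index
    l1_desc = phys_mem[ttbr + l1_index * 4]
    if l1_desc & 0x3 == 0b10:                 # section descriptor: 1 MB mapping
        return (l1_desc & 0xFFF00000) | (vaddr & 0x000FFFFF)
    l2_base = l1_desc & 0xFFFFFC00            # coarse page-table base
    l2_index = (vaddr >> 12) & 0xFF           # bits 19:12 -> second-level index
    l2_desc = phys_mem[l2_base + l2_index * 4]
    return (l2_desc & 0xFFFFF000) | (vaddr & 0xFFF)   # 4 KB small page

# Tiny fabricated page table: map virtual 0x40001xxx -> physical 0x80235xxx.
phys_mem = {
    0x1000_0000 + 0x400 * 4: 0x2000_0000 | 0b01,      # L1 entry -> coarse table
    0x2000_0000 + 0x1 * 4:   0x8023_5000 | 0b10,      # L2 small-page entry
}
print(hex(translate(0x4000_1234, 0x1000_0000, phys_mem)))  # 0x80235234
```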

  18. Photocopy of drawing (this photograph is an 8" x 10" ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of drawing (this photograph is an 8" x 10" copy of an 8" x 10" negative; 1989 original architectural drawing located at Building No. 458, NAS Pensacola, Florida) Interior renovation, Navy recruiting orientation unit, Building No. 45, Architectural floor plan and window details, Sheet 3 of 29 - U.S. Naval Air Station, Equipment Shops & Offices, 206 South Avenue, Pensacola, Escambia County, FL

  19. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    NASA Astrophysics Data System (ADS)

    Liu, Yuyu; Sato, Yoichi

    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph-cut-based video segmentation to achieve a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background by expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.

  20. Optimization of knowledge sharing through multi-forum using cloud computing architecture

    NASA Astrophysics Data System (ADS)

    Madapusi Vasudevan, Sriram; Sankaran, Srivatsan; Muthuswamy, Shanmugasundaram; Ram, N. Sankar

    2011-12-01

    Knowledge sharing is done through various knowledge-sharing forums, which requires multiple logins through multiple browser instances. Here a single multi-forum knowledge-sharing concept is introduced which requires only one login session, allowing the user to connect to multiple forums and display the data in a single browser window. A few optimization techniques are also introduced to speed up access time using a cloud computing architecture.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel; Patterson, David; Oliker, Leonid

    This article consists of a collection of slides from the authors' conference presentation. The Roofline model is a visually intuitive figure for kernel analysis and optimization. We believe undergraduates will find it useful in assessing performance and scalability limitations. It is easily extended to other architectural paradigms and to other metrics: performance (sort, graphics, crypto, ...), bandwidth (L2, PCIe, ...). Furthermore, performance counters could be used to generate a runtime-specific roofline that would greatly aid optimization.
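
    A worked example of the Roofline bound itself (the peak compute and bandwidth numbers below are illustrative placeholders, not measurements): attainable performance is the minimum of peak compute and arithmetic intensity times peak memory bandwidth.

```python
# Small worked example of the Roofline bound: attainable performance is
# min(peak compute, arithmetic intensity x peak bandwidth).
def roofline(ai_flops_per_byte, peak_gflops=2600.0, peak_gbps=450.0):
    """Attainable GFLOP/s for a kernel with the given operational intensity."""
    return min(peak_gflops, ai_flops_per_byte * peak_gbps)

for ai in (0.25, 1.0, 4.0, 16.0):           # FLOPs per byte moved from memory
    print(f"AI={ai:5.2f} -> bound {roofline(ai):7.1f} GFLOP/s")
# Kernels left of the "ridge point" (peak_gflops / peak_gbps, about 5.8
# FLOP/byte here) are bandwidth-bound; to the right they are compute-bound.
```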

  2. A coarse-to-fine kernel matching approach for mean-shift based visual tracking

    NASA Astrophysics Data System (ADS)

    Liangfu, L.; Zuren, F.; Weidong, C.; Ming, J.

    2009-03-01

    Mean shift is an efficient pattern matching algorithm. It is widely used in visual tracking because it need not perform an exhaustive search of the image space. It employs a gradient optimization method to reduce the time of feature matching and realize rapid object localization, and uses the Bhattacharyya coefficient as the similarity measure between the object template and the candidate template. This paper presents a mean shift algorithm based on a coarse-to-fine search for the best kernel match, and studies mean-shift tracking of objects with large inter-frame motion. To realize efficient tracking of such an object, we present a kernel matching method that proceeds from coarse to fine. If the motion of the object between two frames is very large and the object regions do not overlap in image space, then the traditional mean shift method can only obtain a local optimum by iterating within the old object window area, so the real position cannot be found and tracking fails. Our proposed algorithm efficiently uses a similarity measure function to obtain a rough location of the moving object, and then uses the mean shift method to obtain the accurate local optimum by iterative computation, which successfully realizes object tracking with large motion. Experimental results show its good performance in accuracy and speed when compared with the background-weighted histogram algorithm in the literature.
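
    For reference, one localization step of the standard Bhattacharyya-weighted mean shift reads as sketched below; the coarse-to-fine search layer proposed in the paper is not reproduced, and the toy data and bin counts are arbitrary.

```python
# Minimal sketch of one mean-shift localization step (standard
# Bhattacharyya-weight formulation, not the paper's full method).
import numpy as np

def mean_shift_step(pixels, bins, q, center, radius, nbins=16):
    """One shift of the window center toward higher template similarity.

    pixels : (N, 2) pixel coordinates
    bins   : (N,) histogram bin index of each pixel's color
    q      : (nbins,) normalized target (template) histogram
    """
    in_win = np.linalg.norm(pixels - center, axis=1) <= radius
    px, pb = pixels[in_win], bins[in_win]
    p = np.bincount(pb, minlength=nbins).astype(float)
    p /= p.sum()                                   # candidate histogram
    w = np.sqrt(q[pb] / np.maximum(p[pb], 1e-12))  # per-pixel weights
    return (w[:, None] * px).sum(axis=0) / w.sum() # new window center

# Toy usage: target histogram concentrated in bin 3, candidate window offset.
rng = np.random.default_rng(4)
pixels = rng.uniform(0, 100, size=(5000, 2))
bins = rng.integers(0, 16, size=5000)
bins[(pixels[:, 0] > 60) & (pixels[:, 1] > 60)] = 3   # "object" region
q = np.full(16, 1e-3); q[3] = 1.0; q /= q.sum()
center = np.array([55.0, 55.0])
for _ in range(5):
    center = mean_shift_step(pixels, bins, q, center, radius=20)
print(center)   # drifts toward the bin-3 (object) region
```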

  3. Calculating Reuse Distance from Source Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayanan, Sri Hari Krishna; Hovland, Paul

    The efficient use of a system is of paramount importance in high-performance computing. Applications need to be engineered for future systems even before the architecture of such a system is clearly known. Static performance analysis that generates performance bounds is one way to approach the task of understanding application behavior. Performance bounds provide an upper limit on the performance of an application on a given architecture. Predicting cache hierarchy behavior and accesses to main memory is a requirement for accurate performance bounds. This work presents our static reuse distance algorithm to generate reuse distance histograms. We then use these histograms to predict cache miss rates. Experimental results for kernels studied show that the approach is accurate.
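
    The link between reuse distance and cache misses that this approach relies on can be sketched directly: for a fully associative LRU cache of C lines, a reference misses exactly when its reuse distance (the number of distinct lines touched since the previous access to the same line) is at least C. The snippet below measures a histogram from a synthetic trace for illustration, whereas the paper derives the histogram statically from source code.

```python
# Sketch: reuse-distance histogram from an address trace, and the resulting
# miss rate for an LRU, fully associative cache of a given capacity.
from collections import OrderedDict

def reuse_distance_histogram(trace):
    """Return {distance: count}; distance is float('inf') for first touches."""
    stack, hist = OrderedDict(), {}
    for line in trace:
        if line in stack:
            dist = list(reversed(stack)).index(line)  # distinct lines since last use
            del stack[line]
        else:
            dist = float("inf")
        stack[line] = None
        hist[dist] = hist.get(dist, 0) + 1
    return hist

def miss_rate(hist, capacity):
    misses = sum(c for d, c in hist.items() if d == float("inf") or d >= capacity)
    return misses / sum(hist.values())

# Streaming through 8 lines twice: every reference misses in a 4-line cache.
trace = list(range(8)) * 2
h = reuse_distance_histogram(trace)
print(miss_rate(h, capacity=4), miss_rate(h, capacity=16))  # 1.0 vs 0.5
```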

  4. Crowd counting via region based multi-channel convolution neural network

    NASA Astrophysics Data System (ADS)

    Cao, Xiaoguang; Gao, Siqi; Bai, Xiangzhi

    2017-11-01

    This paper proposes a novel region-based multi-channel convolutional neural network architecture for crowd counting. In order to effectively address the perspective distortion in crowd datasets with a great diversity of scales, this work combines a main channel with three branch channels. These channels extract both global and region features, and the results are used to estimate the density map. Moreover, kernels with ladder-shaped sizes are designed across the branch channels, which generate adaptive region features. The branch channels also use relatively deep and shallow networks to achieve a more accurate detector. By using these strategies, the proposed architecture achieves state-of-the-art performance on the ShanghaiTech datasets and competitive performance on the UCF_CC_50 dataset.

  5. Method and system for compact efficient laser architecture

    DOEpatents

    Bayramian, Andrew James; Erlandson, Alvin Charles; Manes, Kenneth Rene; Spaeth, Mary Louis; Caird, John Allyn; Deri, Robert J.

    2015-09-15

    A laser amplifier module having an enclosure includes an input window, a mirror optically coupled to the input window and disposed in a first plane, and a first amplifier head disposed along an optical amplification path adjacent a first end of the enclosure. The laser amplifier module also includes a second amplifier head disposed along the optical amplification path adjacent a second end of the enclosure and a cavity mirror disposed along the optical amplification path.

  6. Profugus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Thomas; Hamilton, Steven; Slattery, Stuart

    Profugus is an open-source mini-application (mini-app) for radiation transport and reactor applications. It contains the fundamental computational kernels used in the Exnihilo code suite from Oak Ridge National Laboratory. However, Exnihilo is production code with a substantial user base. Furthermore, Exnihilo is export controlled. This makes collaboration with computer scientists and computer engineers difficult. Profugus is designed to bridge that gap. By encapsulating the core numerical algorithms in an abbreviated code base that is open-source, computer scientists can analyze the algorithms and easily make code-architectural changes to test performance without compromising the production code values of Exnihilo. Profugus is not meant to be production software with respect to problem analysis. The computational kernels in Profugus are designed to analyze performance, not correctness. Nonetheless, users of Profugus can set up and run problems with enough real-world features to be useful as proof-of-concept for actual production work.

  7. Validation of a weather forecast model at radiance level against satellite observations allowing quantification of temperature, humidity, and cloud-related biases

    NASA Astrophysics Data System (ADS)

    Bani Shahabadi, Maziar; Huang, Yi; Garand, Louis; Heilliette, Sylvain; Yang, Ping

    2016-09-01

    An established radiative transfer model (RTM) is adapted for simulating all-sky infrared radiance spectra from the Canadian Global Environmental Multiscale (GEM) model in order to validate its forecasts at the radiance level against Atmospheric InfraRed Sounder (AIRS) observations. Synthetic spectra are generated for 2 months from short-term (3-9 h) GEM forecasts. The RTM uses a monthly climatological land surface emissivity/reflectivity atlas. An updated ice particle optical property library was introduced for cloudy radiance calculations. Forward model brightness temperature (BT) biases are assessed to be of the order of ~1 K for both clear-sky and overcast conditions. To quantify biases in the GEM forecast meteorological variables, spectral sensitivity kernels are generated and used to attribute radiance biases to biases in surface and atmospheric temperatures, atmospheric humidity, and clouds. The kernel method, supplemented with retrieved profiles based on AIRS observations in collocation with a microwave sounder, achieves good closure in explaining clear-sky radiance biases, which are attributed mostly to surface temperature and upper tropospheric water vapor biases. Cloudy-sky radiance biases are dominated by cloud-induced radiance biases. Prominent GEM biases are identified as: (1) too low surface temperature over land, causing about -5 K bias in the atmospheric window region; (2) too high upper tropospheric water vapor, inducing about -3 K bias in the water vapor absorption band; (3) too few high clouds in the convective regions, generating about +10 K bias in the window band and about +6 K bias in the water vapor band.
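
    Schematically, the kernel-based attribution described above takes the following linearized form (the notation is generic, not taken from the paper):

```latex
% Generic linearized attribution of a brightness-temperature bias in channel
% nu to forecast-variable biases via sensitivity kernels K = dBT/dx.
\[
\Delta BT_{\nu} \;\approx\;
K^{\nu}_{T_s}\,\Delta T_s
\;+\; \sum_{k} K^{\nu}_{T}(p_k)\,\Delta T(p_k)
\;+\; \sum_{k} K^{\nu}_{q}(p_k)\,\Delta q(p_k)
\;+\; \Delta BT^{\nu}_{\mathrm{cloud}},
\qquad
K^{\nu}_{x} = \frac{\partial BT_{\nu}}{\partial x}.
\]
```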

  8. Activity-Centric Approach to Distributed Programming

    NASA Technical Reports Server (NTRS)

    Levy, Renato; Satapathy, Goutam; Lang, Jun

    2004-01-01

    The first phase of an effort to develop a NASA version of the Cybele software system has been completed. To give meaning to even a highly abbreviated summary of the modifications to be embodied in the NASA version, it is necessary to present the following background information on Cybele: Cybele is a proprietary software infrastructure for use by programmers in developing agent-based application programs [complex application programs that contain autonomous, interacting components (agents)]. Cybele provides support for event handling from multiple sources, multithreading, concurrency control, migration, and load balancing. A Cybele agent follows a programming paradigm, called activity-centric programming, that enables an abstraction over system-level thread mechanisms. Activity centric programming relieves application programmers of the complex tasks of thread management, concurrency control, and event management. In order to provide such functionality, activity-centric programming demands support of other layers of software. This concludes the background information. In the first phase of the present development, a new architecture for Cybele was defined. In this architecture, Cybele follows a modular service-based approach to coupling of the programming and service layers of software architecture. In a service-based approach, the functionalities supported by activity-centric programming are apportioned, according to their characteristics, among several groups called services. A well-defined interface among all such services serves as a path that facilitates the maintenance and enhancement of such services without adverse effect on the whole software framework. The activity-centric application-program interface (API) is part of a kernel. The kernel API calls the services by use of their published interface. This approach makes it possible for any application code written exclusively under the API to be portable for any configuration of Cybele.

  9. Performance Analysis of a Hybrid Overset Multi-Block Application on Multiple Architectures

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Biswas, Rupak

    2003-01-01

    This paper presents a detailed performance analysis of a multi-block overset grid computational fluid dynamics application on multiple state-of-the-art computer architectures. The application is implemented using a hybrid MPI+OpenMP programming paradigm that exploits both coarse and fine-grain parallelism; the former via MPI message passing and the latter via OpenMP directives. The hybrid model also extends the applicability of multi-block programs to large clusters of SMP nodes by overcoming the restriction that the number of processors be less than the number of grid blocks. A key kernel of the application, namely the LU-SGS linear solver, had to be modified to enhance the performance of the hybrid approach on the target machines. Investigations were conducted on cacheless Cray SX6 vector processors, cache-based IBM Power3 and Power4 architectures, and single system image SGI Origin3000 platforms. Overall results for complex vortex dynamics simulations demonstrate that the SX6 achieves the highest performance and outperforms the RISC-based architectures; however, the best scaling performance was achieved on the Power3.

  10. A specification of 3D manipulation in virtual environments

    NASA Technical Reports Server (NTRS)

    Su, S. Augustine; Furuta, Richard

    1994-01-01

    In this paper we discuss the modeling of three basic kinds of 3-D manipulations in the context of a logical hand device and our virtual panel architecture. The logical hand device is a useful software abstraction representing hands in virtual environments. The virtual panel architecture is the 3-D component of the 2-D window systems. Both of the abstractions are intended to form the foundation for adaptable 3-D manipulation.

  11. NPS-NRL-Rice-UIUC Collaboration on Navy Atmosphere-Ocean Coupled Models on Many-Core Computer Architectures Annual Report

    DTIC Science & Technology

    2015-09-30

    NPS-NRL-Rice-UIUC Collaboration on Navy Atmosphere ... portability. There is still a gap in the OCCA support for Fortran programmers who do not have accelerator experience. Activities at Rice/Virginia Tech are ... for automated data movement and for kernel optimization using source code analysis and run-time detective work. In this quarter the Rice/Virginia

  12. A Framework for Adaptable Operating and Runtime Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sterling, Thomas

    The emergence of new classes of HPC systems where performance improvement is enabled by Moore's Law for technology is manifest through multi-core-based architectures including specialized GPU structures. Operating systems were originally designed for control of uniprocessor systems. By the 1980s multiprogramming, virtual memory, and network interconnection were integral services incorporated as part of most modern computers. HPC operating systems were primarily derivatives of the Unix model with Linux dominating the Top-500 list. The use of Linux for commodity clusters was first pioneered by the NASA Beowulf Project. However, the rapid increase in number of cores to achieve performance gain through technology advances has exposed the limitations of POSIX general-purpose operating systems in scaling and efficiency. This project was undertaken through the leadership of Sandia National Laboratories and in partnership with the University of New Mexico to investigate the alternative of composable lightweight kernels on scalable HPC architectures to achieve superior performance for a wide range of applications. The use of composable operating systems is intended to provide a minimalist set of services specifically required by a given application to preclude overheads and operational uncertainties (“OS noise”) that have been demonstrated to degrade efficiency and operational consistency. This project was undertaken as an exploration to investigate possible strategies and methods for composable lightweight kernel operating systems towards support for extreme scale systems.

  13. Parametric and Nonparametric Statistical Methods for Genomic Selection of Traits with Additive and Epistatic Genetic Architectures

    PubMed Central

    Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.

    2014-01-01

    Parametric and nonparametric methods have been developed for purposes of predicting phenotypes. These methods are based on retrospective analyses of empirical data consisting of genotypic and phenotypic scores. Recent reports have indicated that parametric methods are unable to predict phenotypes of traits with known epistatic genetic architectures. Herein, we review parametric methods including least squares regression, ridge regression, Bayesian ridge regression, least absolute shrinkage and selection operator (LASSO), Bayesian LASSO, best linear unbiased prediction (BLUP), Bayes A, Bayes B, Bayes C, and Bayes Cπ. We also review nonparametric methods including Nadaraya-Watson estimator, reproducing kernel Hilbert space, support vector machine regression, and neural networks. We assess the relative merits of these 14 methods in terms of accuracy and mean squared error (MSE) using simulated genetic architectures consisting of completely additive or two-way epistatic interactions in an F2 population derived from crosses of inbred lines. Each simulated genetic architecture explained either 30% or 70% of the phenotypic variability. The greatest impact on estimates of accuracy and MSE was due to genetic architecture. Parametric methods were unable to predict phenotypic values when the underlying genetic architecture was based entirely on epistasis. Parametric methods were slightly better than nonparametric methods for additive genetic architectures. Distinctions among parametric methods for additive genetic architectures were incremental. Heritability, i.e., proportion of phenotypic variability, had the second greatest impact on estimates of accuracy and MSE. PMID:24727289
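
    As an illustration of the simplest nonparametric method reviewed above, the following sketch implements the Nadaraya-Watson estimator with a Gaussian kernel for a single predictor; the bandwidth and toy data are assumptions, not values from the study.

```cpp
// Nadaraya-Watson kernel regression with a Gaussian kernel:
//   yhat(x) = sum_i K((x - x_i)/h) * y_i / sum_i K((x - x_i)/h)
// Single-predictor sketch; bandwidth h and the toy data are assumptions.
#include <cmath>
#include <cstdio>
#include <vector>

double nadaraya_watson(double x, const std::vector<double>& xs,
                       const std::vector<double>& ys, double h) {
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < xs.size(); ++i) {
        double u = (x - xs[i]) / h;
        double k = std::exp(-0.5 * u * u);   // Gaussian kernel (unnormalized)
        num += k * ys[i];
        den += k;
    }
    return den > 0.0 ? num / den : 0.0;
}

int main() {
    std::vector<double> xs = {0.0, 1.0, 2.0, 3.0, 4.0};
    std::vector<double> ys = {0.1, 0.9, 2.1, 2.9, 4.2};   // noisy toy phenotypes
    std::printf("yhat(2.5) = %.3f\n", nadaraya_watson(2.5, xs, ys, 0.75));
    return 0;
}
```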

  14. Information Management Functional Economic Analysis for Finance Workstations to the Defense Information Technology Services Organization

    DTIC Science & Technology

    1993-03-01

    values themselves. The tools perform risk-adjusted present-value comparisons and compute the ROI using discount factors. The assessment of risk in a...developed X Window system, the de facto industry standard window system in the UNIX environment. An X-terminal's use is limited to display. It has no...2.1 IT HARDWARE The DOS-based PC used in this analysis costs $2,060. It includes an ASL 486DX-33 Industry Standard Architecture (ISA) computer with 8

  15. Feature-Oriented Domain Analysis (FODA) Feasibility Study

    DTIC Science & Technology

    1990-11-01

    controlling the synchronous behavior of the task. A task may wait for one or more synchronizing or message queue events. • Each task is designed using the...Comparative Study 13 2.2.1. The Genesis System 13 2.2.2. MCC Work 15 2.2.2.1. The DESIRE Design Recovery Tool 15 2.2.2.2. Domain Analysis Method 1f...Illustration 43 Figure 6-1: Architectural Layers 48 Figure 6-2: Window Management Subsystem Design Structure 49 Figure 7-1: Function of a Window Manager

  16. Processing Cones: A Computational Structure for Image Analysis.

    DTIC Science & Technology

    1981-12-01

    image analysis applications, referred to as a processing cone, is described and sample algorithms are presented. A fundamental characteristic of the structure is its hierarchical organization into two-dimensional arrays of decreasing resolution. In this architecture, a prototypical function is defined on a local window of data and applied uniformly to all windows in a parallel manner. Three basic modes of processing are supported in the cone: reduction operations (upward processing), horizontal operations (processing at a single level) and projection operations (downward

  17. 54. The Curtis Music Hall (15 West Park) dates from ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    54. The Curtis Music Hall (15 West Park) dates from 1892. This is one of the more architecturally interesting buildings remaining in Butte, with a variety of window types, a corbelled parapet extending over one bay, a central gable flanked by decorative square towers, a turret, and a richly decorated facade. The storefront has been modernized with plate glass windows and a metal canopy. - Butte Historic District, Bounded by Copper, Arizona, Mercury & Continental Streets, Butte, Silver Bow County, MT

  18. An Error-Entropy Minimization Algorithm for Tracking Control of Nonlinear Stochastic Systems with Non-Gaussian Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yunlong; Wang, Aiping; Guo, Lei

    This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
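
    The following sketch illustrates the Parzen-windowing step described above: a Gaussian-kernel density estimate of the tracking error, plugged into an empirical entropy estimate. It is a generic illustration under assumed data and kernel width, not the paper's controller design.

```cpp
// Parzen-window (kernel) density estimate of the tracking error, used to
// approximate the error entropy H = -E[log p(e)]. Sigma and data are assumptions.
#include <cmath>
#include <cstdio>
#include <vector>

double parzen_density(double e, const std::vector<double>& samples, double sigma) {
    const double kPi = 3.14159265358979323846;
    const double norm = 1.0 / (std::sqrt(2.0 * kPi) * sigma);
    double p = 0.0;
    for (double s : samples) {
        double u = (e - s) / sigma;
        p += norm * std::exp(-0.5 * u * u);   // Gaussian kernel centered at each sample
    }
    return p / static_cast<double>(samples.size());
}

double empirical_entropy(const std::vector<double>& samples, double sigma) {
    // Plug-in estimate: H ~ -(1/N) * sum_j log p(e_j)
    double h = 0.0;
    for (double e : samples) h -= std::log(parzen_density(e, samples, sigma));
    return h / static_cast<double>(samples.size());
}

int main() {
    std::vector<double> errors = {0.2, -0.1, 0.05, 0.3, -0.25, 0.0};
    std::printf("estimated error entropy: %.3f\n", empirical_entropy(errors, 0.15));
    return 0;
}
```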

  19. Detailed Primitive-Based 3d Modeling of Architectural Elements

    NASA Astrophysics Data System (ADS)

    Remondino, F.; Lo Buglio, D.; Nony, N.; De Luca, L.

    2012-07-01

    The article describes a pipeline, based on image data, for the 3D reconstruction of building façades or architectural elements and the successive modeling using geometric primitives. The approach overcomes some existing problems in modeling architectural elements and delivers efficient-in-size, reality-based, textured 3D models useful for metric applications. For the 3D reconstruction, an open-source pipeline developed within the TAPENADE project is employed. In the successive modeling steps, the user manually selects an area containing an architectural element (capital, column, bas-relief, window tympanum, etc.) and then the procedure fits geometric primitives and computes disparity and displacement maps in order to tie visual and geometric information together in a light but detailed 3D model. Examples are reported and commented.

  20. Finite-frequency structural sensitivities of short-period compressional body waves

    NASA Astrophysics Data System (ADS)

    Fuji, Nobuaki; Chevrot, Sébastien; Zhao, Li; Geller, Robert J.; Kawai, Kenji

    2012-07-01

    We present an extension of the method recently introduced by Zhao & Chevrot for calculating Fréchet kernels from a precomputed database of strain Green's tensors by normal mode summation. The extension involves two aspects: (1) we compute the strain Green's tensors using the Direct Solution Method, which allows us to go up to frequencies as high as 1 Hz; and (2) we develop a spatial interpolation scheme so that the Green's tensors can be computed with a relatively coarse grid, thus improving the efficiency in the computation of the sensitivity kernels. The only requirement is that the Green's tensors be computed with a fine enough spatial sampling rate to avoid spatial aliasing. The Green's tensors can then be interpolated to any location inside the Earth, avoiding the need to store and retrieve strain Green's tensors for a fine sampling grid. The interpolation scheme not only significantly reduces the CPU time required to calculate the Green's tensor database and the disk space to store it, but also enhances the efficiency in computing the kernels by reducing the number of I/O operations needed to retrieve the Green's tensors. Our new implementation allows us to calculate sensitivity kernels for high-frequency teleseismic body waves with very modest computational resources such as a laptop. We illustrate the potential of our approach for seismic tomography by computing traveltime and amplitude sensitivity kernels for high frequency P, PKP and Pdiff phases. A comparison of our PKP kernels with those computed by asymptotic ray theory clearly shows the limits of the latter. With ray theory, it is not possible to model waves diffracted by internal discontinuities such as the core-mantle boundary, and it is also difficult to compute amplitudes for paths close to the B-caustic of the PKP phase. We also compute waveform partial derivatives for different parts of the seismic wavefield, a key ingredient for high resolution imaging by waveform inversion. Our computations of partial derivatives in the time window where PcP precursors are commonly observed show that the distribution of sensitivity is complex and counter-intuitive, with a large contribution from the mid-mantle region. This clearly emphasizes the need to use accurate and complete partial derivatives in waveform inversion.

  1. A non-synonymous SNP within the isopentenyl transferase 2 locus is associated with kernel weight in Chinese maize inbreds (Zea mays L.).

    PubMed

    Weng, Jianfeng; Li, Bo; Liu, Changlin; Yang, Xiaoyan; Wang, Hongwei; Hao, Zhuanfang; Li, Mingshun; Zhang, Degui; Ci, Xiaoke; Li, Xinhai; Zhang, Shihuang

    2013-07-05

    Kernel weight, controlled by quantitative trait loci (QTL), is an important component of grain yield in maize. Cytokinins (CKs) participate in determining grain morphology and final grain yield in crops. ZmIPT2, which is expressed mainly in the basal transfer cell layer, endosperm, and embryo during maize kernel development, encodes an isopentenyl transferase (IPT) that is involved in CK biosynthesis. The coding region of ZmIPT2 was sequenced across a panel of 175 maize inbred lines that are currently used in Chinese maize breeding programs. Only 16 single nucleotide polymorphisms (SNPs) and seven haplotypes were detected among these inbred lines. Nucleotide diversity (π) within the ZmIPT2 window and coding region were 0.347 and 0.0047, respectively, and they were significantly lower than the mean nucleotide diversity value of 0.372 for maize Chromosome 2 (P < 0.01). Association mapping revealed that a single nucleotide change from cytosine (C) to thymine (T) in the ZmIPT2 coding region, which converted a proline residue into a serine residue, was significantly associated with hundred kernel weight (HKW) in three environments (P <0.05), and explained 4.76% of the total phenotypic variation. In vitro characterization suggests that the dimethylallyl diphospate (DMAPP) IPT activity of ZmIPT2-T is higher than that of ZmIPT2-C, as the amounts of adenosine triphosphate (ATP), adenosine diphosphate (ADP), and adenosine monophosphate (AMP) consumed by ZmIPT2-T were 5.48-, 2.70-, and 1.87-fold, respectively, greater than those consumed by ZmIPT2-C. The effects of artificial selection on the ZmIPT2 coding region were evaluated using Tajima's D tests across six subgroups of Chinese maize germplasm, with the most frequent favorable allele identified in subgroup PB (Partner B). These results showed that ZmIPT2, which is associated with kernel weight, was subjected to artificial selection during the maize breeding process. ZmIPT2-T had higher IPT activity than ZmIPT2-C, and this favorable allele for kernel weight could be used in molecular marker-assisted selection for improvement of grain yield components in Chinese maize breeding programs.

  2. The Relationship of Built Environment to Perceived Social Support and Psychological Distress in Hispanic Elders: The Role of “Eyes on the Street”

    PubMed Central

    Mason, Craig A.; Lombard, Joanna L.; Martinez, Frank; Plater-Zyberk, Elizabeth; Spokane, Arnold R.; Newman, Frederick L.; Pantin, Hilda; Szapocznik, José

    2009-01-01

    Background Research on contextual and neighborhood effects increasingly includes the built (physical) environment's influences on health and social well-being. A population-based study examined whether architectural features of the built environment theorized to promote observations and social interactions (e.g., porches, windows) predict Hispanic elders’ psychological distress. Methods Coding of built environment features of all 3,857 lots across 403 blocks in East Little Havana, Florida, and enumeration of elders in 16,000 households was followed by assessments of perceived social support and psychological distress in a representative sample of 273 low socioeconomic status (SES) Hispanic elders. Structural-equation modeling was used to assess relationships between block-level built environment features, elders’ perceived social support, and psychological distress. Results Architectural features of the front entrance such as porches that promote visibility from a building's exterior were positively associated with perceived social support. In contrast, architectural features such as window areas that promote visibility from a building's interior were negatively associated with perceived social support. Perceived social support in turn was associated with reduced psychological distress after controlling for demographics. Additionally, perceived social support mediated the relationship of built environment variables to psychological distress. Conclusions Architectural features that facilitate direct, in-person interactions may be beneficial for Hispanic elders’ mental health. PMID:19196696

  3. Improvements to Integrated Tradespace Analysis of Communications Architectures (ITACA) Network Loading Analysis Tool

    NASA Technical Reports Server (NTRS)

    Lee, Nathaniel; Welch, Bryan W.

    2018-01-01

    NASA's SCENIC project aims to simplify and reduce the cost of space mission planning by replicating the analysis capabilities of commercially licensed software which are integrated with relevant analysis parameters specific to SCaN assets and SCaN supported user missions. SCENIC differs from current tools that perform similar analyses in that it 1) does not require any licensing fees, 2) will provide an all-in-one package for various analysis capabilities that normally require add-ons or multiple tools to complete. As part of SCENIC's capabilities, the ITACA network loading analysis tool will be responsible for assessing the loading on a given network architecture and generating a network service schedule. ITACA will allow users to evaluate the quality of service of a given network architecture and determine whether or not the architecture will satisfy the mission's requirements. ITACA is currently under development, and the following improvements were made during the fall of 2017: optimization of runtime, augmentation of network asset pre-service configuration time, augmentation of Brent's method of root finding, augmentation of network asset FOV restrictions, augmentation of mission lifetimes, and the integration of a SCaN link budget calculation tool. The improvements resulted in (a) 25% reduction in runtime, (b) more accurate contact window predictions when compared to STK(Registered Trademark) contact window predictions, and (c) increased fidelity through the use of specific SCaN asset parameters.

  4. Energy scaling advantages of resistive memory crossbar based computation and its application to sparse coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Sapan; Quach, Tu -Thach; Parekh, Ojas

    In this study, the exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.

  5. Energy scaling advantages of resistive memory crossbar based computation and its application to sparse coding

    DOE PAGES

    Agarwal, Sapan; Quach, Tu -Thach; Parekh, Ojas; ...

    2016-01-06

    In this study, the exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
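
    A serial sketch of the two kernels described above may help make the energy argument concrete: a crossbar "read" corresponds to a vector-matrix multiply and a crossbar "write" to a rank-1 outer-product update; on an N × N analog array each is performed in parallel rather than element by element. All names and sizes below are illustrative.

```cpp
// The two crossbar kernels written out serially for clarity: a "parallel read"
// is a vector-matrix multiply, a "parallel write" is a rank-1 update.
#include <cstdio>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

std::vector<double> vec_mat_mul(const std::vector<double>& x, const Matrix& W) {
    std::vector<double> y(W[0].size(), 0.0);
    for (std::size_t i = 0; i < W.size(); ++i)
        for (std::size_t j = 0; j < W[i].size(); ++j)
            y[j] += x[i] * W[i][j];           // crossbar "read": currents summed per column
    return y;
}

void rank1_update(Matrix& W, const std::vector<double>& x,
                  const std::vector<double>& d, double lr) {
    for (std::size_t i = 0; i < W.size(); ++i)
        for (std::size_t j = 0; j < W[i].size(); ++j)
            W[i][j] += lr * x[i] * d[j];      // crossbar "write": outer-product update
}

int main() {
    Matrix W(3, std::vector<double>(2, 0.1));
    std::vector<double> x = {1.0, 0.5, -0.5}, d = {0.2, -0.1};
    auto y = vec_mat_mul(x, W);
    rank1_update(W, x, d, 0.01);
    std::printf("y = [%.3f, %.3f]\n", y[0], y[1]);
    return 0;
}
```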

  6. Power and Performance Trade-offs for Space Time Adaptive Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gawande, Nitin A.; Manzano Franco, Joseph B.; Tumeo, Antonino

    Computational efficiency – performance relative to power or energy – is one of the most important concerns when designing RADAR processing systems. This paper analyzes power and performance trade-offs for a typical Space Time Adaptive Processing (STAP) application. We study STAP implementations for CUDA and OpenMP on two computationally efficient architectures, the Intel Haswell Core i7-4770TE and NVIDIA Kayla with a GK208 GPU. We analyze the power and performance of STAP's computationally intensive kernels across the two hardware testbeds. We also show the impact and trade-offs of GPU optimization techniques. We show that data parallelism can be exploited for efficient implementation on the Haswell CPU architecture. The GPU architecture is able to process large data sets without an increase in power requirement. The use of shared memory has a significant impact on the power requirement for the GPU. A balance between the use of shared memory and main memory access leads to improved performance in a typical STAP application.

  7. Production Level CFD Code Acceleration for Hybrid Many-Core Architectures

    NASA Technical Reports Server (NTRS)

    Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.

    2012-01-01

    In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. For codes to scale and fully use resources on these and the next generation machines, codes will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010, and reflects the analysis and results of the time.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Womeldorff, Geoffrey Alan; Payne, Joshua Estes; Bergen, Benjamin Karl

    These are slides for a presentation on PARTISN Research and FleCSI Updates. The following topics are covered: SNAP vs PARTISN, Background Research, Production Code (structural design and changes, kernel design and implementation, lessons learned), NuT IMC Proxy, FleCSI Update (design and lessons learned). It can all be summarized in the following manner: Kokkos was shown to be effective in FY15 in implementing a C++ version of SNAP's kernel. This same methodology was applied to a production IC code, PARTISN. This was a much more complex endeavour than in FY15 for many reasons; a C++ kernel embedded in Fortran, overloading Fortran memory allocations, general language interoperability, and a fully fleshed out production code versus a simplified proxy code. Lessons learned are Legion. In no particular order: Interoperability between Fortran and C++ was really not that hard, and a useful engineering effort. Tracking down all necessary memory allocations for a kernel in a production code is pretty hard. Modifying a production code to work for more than a handful of use cases is also pretty hard. Figuring out the toolchain that will allow a successful implementation of design decisions is quite hard, if making use of "bleeding edge" design choices. In terms of performance, production code concurrency architecture can be a virtual showstopper; being too complex to easily rewrite and test in a short period of time, or depending on tool features which do not exist yet. Ultimately, while the tools used in this work were not successful in speeding up the production code, they helped to identify how work would be done, and provide requirements to tools.

  9. Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali

    Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel's second-generation Xeon Phi architecture code-named Knights Landing (KNL) for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.

  10. Design considerations of CareWindows, a Windows 3.0-based graphical front end to a Medical Information Management System using a pass-through-requester architecture.

    PubMed Central

    Ward, R. E.; Purves, T.; Feldman, M.; Schiffman, R. M.; Barry, S.; Christner, M.; Kipa, G.; McCarthy, B. D.; Stiphout, R.

    1991-01-01

    The Care Windows development project demonstrated the feasibility of an approach designed to add the benefits of an event-driven, graphically-oriented user interface to an existing Medical Information Management System (MIMS) without overstepping economic and logistic constraints. The design solution selected for the Care Windows project incorporates four important design features: (1) the effective de-coupling of servers from requesters, permitting the use of an extensive pre-existing library of MIMS servers, (2) the off-loading of program control functions of the requesters to the workstation processor, reducing the load per transaction on central resources and permitting the use of object-oriented development environments available for microcomputers, (3) the selection of a low-end, GUI-capable workstation consisting of a PC-compatible personal computer running Microsoft Windows 3.0, and (4) the development of a highly layered, modular workstation application, permitting the development of interchangeable modules to insure portability and adaptability. PMID:1807665

  11. Framewise phoneme classification with bidirectional LSTM and other neural network architectures.

    PubMed

    Graves, Alex; Schmidhuber, Jürgen

    2005-01-01

    In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it.

  12. The roofline model: A pedagogical tool for program analysis and optimization

    DOE PAGES

    Williams, Samuel; Patterson, David; Oliker, Leonid; ...

    2008-08-01

    This article consists of a collection of slides from the authors' conference presentation. The Roofline model is a visually intuitive figure for kernel analysis and optimization. We believe undergraduates will find it useful in assessing performance and scalability limitations. It is easily extended to other architectural paradigms and to other metrics: performance (sort, graphics, crypto, ...) and bandwidth (L2, PCIe, ...). Furthermore, performance counters could be used to generate a runtime-specific roofline that would greatly aid the optimization.
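
    The bound behind the Roofline figure can be written in one line: attainable performance is the minimum of peak compute throughput and peak memory bandwidth times arithmetic intensity. The sketch below evaluates that bound for a few assumed kernel intensities; the peak numbers are placeholders, not figures from the slides.

```cpp
// Roofline bound: attainable GFLOP/s = min(peak compute, peak bandwidth * arithmetic intensity).
// The peak numbers below are placeholders, not measurements.
#include <algorithm>
#include <cstdio>

int main() {
    const double peak_gflops = 100.0;   // assumed peak compute, GFLOP/s
    const double peak_bw     = 25.0;    // assumed peak DRAM bandwidth, GB/s

    // Arithmetic intensity (FLOPs per byte moved) for a few example kernels.
    const double intensities[] = {0.25, 1.0, 4.0, 16.0};
    for (double ai : intensities) {
        double attainable = std::min(peak_gflops, peak_bw * ai);
        std::printf("AI = %5.2f flop/byte -> roofline bound = %6.1f GFLOP/s\n",
                    ai, attainable);
    }
    return 0;
}
```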

  13. An Xdata Architecture for Federated Graph Models and Multi-tier Asymmetric Computing

    DTIC Science & Technology

    2014-01-01

    Wikipedia, a scale-free random graph (kron), Akamai trace route data, Bitcoin transaction data, and a Twitter follower network. We present results for...3x (SSSP on a random graph) and nearly 300x (Akamai and Bitcoin) over the CPU performance of a well-known and widely deployed CPU-based graph...provided better throughput for smaller frontiers such as roadmaps or the Bitcoin data set. In our work, we have focused on two-phase kernels, but it

  14. Design of an EEG-based brain-computer interface (BCI) from standard components running in real-time under Windows.

    PubMed

    Guger, C; Schlögl, A; Walterspacher, D; Pfurtscheller, G

    1999-01-01

    An EEG-based brain-computer interface (BCI) is a direct connection between the human brain and the computer. Such a communication system is needed by patients with severe motor impairments (e.g. late stage of Amyotrophic Lateral Sclerosis) and has to operate in real-time. This paper describes the selection of the appropriate components to construct such a BCI and also focuses on the selection of a suitable programming language and operating system. The multichannel system runs under Windows 95, equipped with a real-time kernel expansion to obtain reasonable real-time operation on a standard PC. Matlab controls the data acquisition and the presentation of the experimental paradigm, while Simulink is used to calculate the recursive least squares (RLS) algorithm that describes the current state of the EEG in real-time. First results of the new low-cost BCI show that the accuracy of differentiating imagined left and right hand movements is around 95%.
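
    For readers unfamiliar with the RLS algorithm mentioned above, the sketch below shows a generic recursive least squares update with a forgetting factor; it is not the BCI system's Simulink implementation, and the dimensions, forgetting factor, and toy data are assumptions.

```cpp
// Generic recursive least squares (RLS) update with forgetting factor lambda,
// tracking a linear signal model sample by sample. Illustrative sketch only.
#include <cstdio>
#include <vector>

struct RLS {
    std::size_t n;
    double lambda;
    std::vector<double> w;                    // weight vector
    std::vector<std::vector<double>> P;       // inverse correlation matrix estimate

    RLS(std::size_t n_, double lambda_) : n(n_), lambda(lambda_),
        w(n_, 0.0), P(n_, std::vector<double>(n_, 0.0)) {
        for (std::size_t i = 0; i < n; ++i) P[i][i] = 1000.0;  // large initial P
    }

    void update(const std::vector<double>& x, double d) {
        std::vector<double> Px(n, 0.0);
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j) Px[i] += P[i][j] * x[j];
        double denom = lambda;
        for (std::size_t i = 0; i < n; ++i) denom += x[i] * Px[i];
        std::vector<double> g(n);
        for (std::size_t i = 0; i < n; ++i) g[i] = Px[i] / denom;   // gain vector
        double err = d;
        for (std::size_t i = 0; i < n; ++i) err -= w[i] * x[i];     // a-priori error
        for (std::size_t i = 0; i < n; ++i) w[i] += g[i] * err;
        // P <- (P - g * (P x)^T) / lambda   (uses symmetry of P)
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j)
                P[i][j] = (P[i][j] - g[i] * Px[j]) / lambda;
    }
};

int main() {
    RLS rls(2, 0.99);
    // Toy stream with target d = 2*x0 - x1; the weights converge toward (2, -1).
    double xs[][2] = {{1, 0}, {0, 1}, {1, 1}, {0.5, -0.5}};
    for (auto& x : xs) rls.update({x[0], x[1]}, 2 * x[0] - x[1]);
    std::printf("w = (%.2f, %.2f)\n", rls.w[0], rls.w[1]);
    return 0;
}
```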

  15. Noise-induced hearing loss increases the temporal precision of complex envelope coding by auditory-nerve fibers

    PubMed Central

    Henry, Kenneth S.; Kale, Sushrut; Heinz, Michael G.

    2014-01-01

    While changes in cochlear frequency tuning are thought to play an important role in the perceptual difficulties of people with sensorineural hearing loss (SNHL), the possible role of temporal processing deficits remains less clear. Our knowledge of temporal envelope coding in the impaired cochlea is limited to two studies that examined auditory-nerve fiber responses to narrowband amplitude modulated stimuli. In the present study, we used Wiener-kernel analyses of auditory-nerve fiber responses to broadband Gaussian noise in anesthetized chinchillas to quantify changes in temporal envelope coding with noise-induced SNHL. Temporal modulation transfer functions (TMTFs) and temporal windows of sensitivity to acoustic stimulation were computed from 2nd-order Wiener kernels and analyzed to estimate the temporal precision, amplitude, and latency of envelope coding. Noise overexposure was associated with slower (less negative) TMTF roll-off with increasing modulation frequency and reduced temporal window duration. The results show that at equal stimulus sensation level, SNHL increases the temporal precision of envelope coding by 20–30%. Furthermore, SNHL increased the amplitude of envelope coding by 50% in fibers with CFs from 1–2 kHz and decreased mean response latency by 0.4 ms. While a previous study of envelope coding demonstrated a similar increase in response amplitude, the present study is the first to show enhanced temporal precision. This new finding may relate to the use of a more complex stimulus with broad frequency bandwidth and a dynamic temporal envelope. Exaggerated neural coding of fast envelope modulations may contribute to perceptual difficulties in people with SNHL by acting as a distraction from more relevant acoustic cues, especially in fluctuating background noise. Finally, the results underscore the value of studying sensory systems with more natural, real-world stimuli. PMID:24596545

  16. Wide-Range Motion Estimation Architecture with Dual Search Windows for High Resolution Video Coding

    NASA Astrophysics Data System (ADS)

    Dung, Lan-Rong; Lin, Meng-Chun

    This paper presents a memory-efficient motion estimation (ME) technique for high-resolution video compression. The main objective is to reduce external memory accesses, which is especially important when local memory resources are limited; reducing memory accesses in turn reduces power consumption. The key to reducing memory accesses is a center-biased algorithm that performs the motion vector (MV) search with the minimum amount of search data. To preserve data reusability, the proposed dual-search-windowing (DSW) approach loads a secondary search window only when the search requires it. By doing so, the loading of search windows is alleviated, reducing the required external memory bandwidth. The proposed techniques can save up to 81% of external memory bandwidth and require only 135 MBytes/s, while the quality degradation is less than 0.2 dB for 720p HDTV clips coded at 8 Mbits/s.

  17. The "Total Immersion" Meeting Environment.

    ERIC Educational Resources Information Center

    Finkel, Coleman

    1980-01-01

    The designing of intelligently planned meeting facilities can aid management communication and learning. The author examines the psychology of meeting attendance; architectural considerations (lighting, windows, color, etc.); design elements and learning modes (furniture, walls, audiovisuals, materials); and the idea of "total immersion meeting…

  18. ARC-2006-ACD06-0213-013

    NASA Image and Video Library

    2006-09-25

    Ames and Moffett Field (MFA) historical sites and memorials Entry of building N-210 Ames Flight System Research Laboratory architectural detail. Eastside showing NACA brass inset wing over front doors, light fixtures flanking the doors and glass brick window wall above the doors

  19. ARC-2006-ACD06-0213-014

    NASA Image and Video Library

    2006-09-25

    Ames and Moffett Field (MFA) historical sites and memorials Entry of building N-210 Ames Flight System Research Laboratory architectural detail. Eastside showing NACA brass inset wing over front doors, light fixtures flanking the doors and glass brick window wall above the doors

  20. Parallel mutual information estimation for inferring gene regulatory networks on GPUs

    PubMed Central

    2011-01-01

    Background Mutual information is a measure of similarity between two variables. It has been widely used in various application domains including computational biology, machine learning, statistics, image processing, and financial computing. Previously used simple histogram-based mutual information estimators lack precision compared to kernel-based methods. The recently introduced B-spline function based mutual information estimation method is competitive with kernel-based methods in terms of quality but has a lower computational complexity. Results We present a new approach to accelerate the B-spline function based mutual information estimation algorithm with commodity graphics hardware. To derive an efficient mapping onto this type of architecture, we have used the Compute Unified Device Architecture (CUDA) programming model to design and implement a new parallel algorithm. Our implementation, called CUDA-MI, can achieve speedups of up to 82 using double precision on a single GPU compared to a multi-threaded implementation on a quad-core CPU for large microarray datasets. We have used the results obtained by CUDA-MI to infer gene regulatory networks (GRNs) from microarray data. The comparisons to existing methods including ARACNE and TINGe show that CUDA-MI produces GRNs of higher quality in less time. Conclusions CUDA-MI is publicly available open-source software, written in CUDA and C++ programming languages. It obtains significant speedup over a sequential multi-threaded implementation by fully exploiting the compute capability of commonly used CUDA-enabled low-cost GPUs. PMID:21672264

  1. Locality Aware Concurrent Start for Stencil Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, Sunil; Gao, Guang R.; Manzano Franco, Joseph B.

    Stencil computations are at the heart of many physical simulations used in scientific codes. Thus, there exists a plethora of optimization efforts for this family of computations. Among these techniques, tiling techniques that allow concurrent start have proven to be very efficient in providing better performance for these critical kernels. Nevertheless, with many core designs being the norm, these optimization techniques might not be able to fully exploit locality (both spatial and temporal) on multiple levels of the memory hierarchy without compromising parallelism. It is no longer true that the machine can be seen as a homogeneous collection of nodes with caches, main memory and an interconnect network. New architectural designs exhibit complex grouping of nodes, cores, threads, caches and memory connected by an ever evolving network-on-chip design. These new designs may benefit greatly from carefully crafted schedules and groupings that encourage parallel actors (i.e. threads, cores or nodes) to be aware of the computational history of other actors in close proximity. In this paper, we provide an efficient tiling technique that allows hierarchical concurrent start for memory hierarchy aware tile groups. Each execution schedule and tile shape exploit the available parallelism, load balance and locality present in the given applications. We demonstrate our technique on the Intel Xeon Phi architecture with selected and representative stencil kernels. We show improvement ranging from 5.58% to 31.17% over existing state-of-the-art techniques.
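
    The sketch below shows plain spatial tiling of a one-dimensional Jacobi stencil with OpenMP, to illustrate the basic idea of tiling for locality; the paper's hierarchical concurrent-start tiling is considerably more sophisticated, and the tile and grid sizes here are assumptions.

```cpp
// Plain spatially tiled Jacobi stencil sweep with OpenMP. Each tile is a unit
// of work that stays resident in cache; threads pick up tiles independently.
// Not the paper's hierarchical concurrent-start scheme.
#include <omp.h>
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    const int N = 4096, TILE = 256, STEPS = 4;
    std::vector<double> a(N, 1.0), b(N, 0.0);

    for (int t = 0; t < STEPS; ++t) {
        #pragma omp parallel for schedule(static)
        for (int start = 1; start < N - 1; start += TILE) {
            int end = std::min(start + TILE, N - 1);
            for (int i = start; i < end; ++i)
                b[i] = (a[i - 1] + a[i] + a[i + 1]) / 3.0;   // 3-point stencil
        }
        std::swap(a, b);
    }
    std::printf("a[N/2] = %.6f\n", a[N / 2]);
    return 0;
}
```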

  2. Memory handling in the ATLAS submission system from job definition to sites limits

    NASA Astrophysics Data System (ADS)

    Forti, A. C.; Walker, R.; Maeno, T.; Love, P.; Rauschmayr, N.; Filipcic, A.; Di Girolamo, A.

    2017-10-01

    In the past few years the increased luminosity of the LHC, changes in the Linux kernel and a move to a 64-bit architecture have affected ATLAS job memory usage, and the ATLAS workload management system had to be adapted to be more flexible and to pass memory parameters to the batch systems, which in the past was not a necessity. This paper describes the steps required to add the capability to better handle memory requirements, including a review of how each component's definition and parametrization of memory is mapped to the other components, and what changes had to be applied to make the submission chain work. These changes go from the definition of tasks and the way task memory requirements are set using scout jobs, through the new memory tool developed for that purpose, to how these values are used by the submission component of the system and how the jobs are treated by the sites through the CEs, batch systems and ultimately the kernel.

  3. The devil's checkerboard: why cross-correlation delay times require a finite frequency interpretation

    NASA Astrophysics Data System (ADS)

    Nolet, G.; Mercerat, D.; Zaroli, C.

    2012-12-01

    We present the first complete test of finite frequency tomography with banana-doughnut kernels, from the generation of seismograms in a 3D model to the final inversion, and are able to lay to rest all of the so-called 'controversies' that have slowed down its adoption. Cross-correlation delay times are influenced by energy arriving in a time window that includes later arrivals, either scattered from, or diffracted around lateral heterogeneities. We present here the results of a 3D test in which we generate 1716 seismograms using the spectral element method in a cross-borehole experiment conducted in a checkerboard box. Delays are determined for the broadband signals as well as for five frequency bands (each one octave apart) by cross-correlating seismograms for a homogeneous pattern with those for a checkerboard. The large (10 per cent) velocity contrast and the regularity of the checkerboard pattern cause severe reverberations that arrive late in the cross-correlation window. Data errors are estimated by comparing linearity between delays measured for a model with 10 per cent velocity contrast with those with a 4 per cent contrast. Sensitivity kernels are efficiently computed with ray theory using the 'banana-doughnut' kernels from Dahlen et al. (GJI 141:157, 2000). The model resulting from the inversion with a data fit of reduced χ² = 1 shows an excellent correspondence with the input model and allows for a complete validation of the theory. Amplitudes in the (well resolved) top part of the model are close to the input amplitudes. Comparing a model derived from one band only shows the power of using multiple frequency bands in resolving detail - essentially the observed dispersion captures some of the waveform information. Finite frequency theory also allows us to image the checkerboard at some distance from the borehole plane. Most disconcertingly for advocates of ray theory are the results obtained when we interpret cross-correlation delays with ray theory. We shall present an extreme case of the devil's checkerboard (the term is from Jacobsen and Sigloch), in which the sign of the anomalies in the checkerboard is reversed in the ray-theoretical solution, a clear demonstration of the reality of the effects of the doughnut hole. We conclude that the test fully validates 'banana-doughnut' theory, and disqualifies ray-theoretical inversions of cross-correlation delays.

  4. Evaluating Multi-core Architectures through Accelerating the Three-Dimensional Lax–Wendroff Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Yang; Fu, Haohuan; Song, Shuaiwen

    2014-07-18

    Wave propagation forward modeling is a widely used computational method in oil and gas exploration. The iterative stencil loops in such problems have broad applications in scientific computing. However, executing such loops can be highly time-consuming, which greatly limits the application's performance and power efficiency. In this paper, we accelerate the forward modeling technique on the latest multi-core and many-core architectures such as Intel Sandy Bridge CPUs, the NVIDIA Fermi C2070 GPU, the NVIDIA Kepler K20x GPU, and the Intel Xeon Phi Co-processor. For the GPU platforms, we propose two parallel strategies to explore the performance optimization opportunities for our stencil kernels. For Sandy Bridge CPUs and MIC, we also employ various optimization techniques in order to achieve the best performance.

  5. Time reversal seismic imaging using laterally reflected surface waves in southern California

    NASA Astrophysics Data System (ADS)

    Tape, C.; Liu, Q.; Tromp, J.; Plesch, A.; Shaw, J. H.

    2010-12-01

    We use observed post-surface-wave seismic waveforms to image shallow (upper 10 km) lateral reflectors in southern California. Our imaging technique employs the 3D crustal model m16 of Tape et al. (2009), which is accurate for most local earthquakes over the period range 2-30 s. Model m16 captures the resonance of the major sedimentary basins in southern California, as well as some lateral surface wave reflections associated with these basins. We apply a 3D Gaussian smoothing function (12 km horizontal, 2 km vertical) to model m16. This smoothing has the effect of suppressing synthetic waveforms within the period range of interest (3-10 s) that are associated with reflections (single and multiple) from these basins. The smoothed 3D model serves as the background model within which we propagate an ``adjoint wavefield'' comprised of time-reversed windows of post-surface-wave coda waveforms that are initiated at the respective station locations. This adjoint wavefield constructively interferes with the regular wavefield in the locations of potential reflectors. The potential reflectors are revealed in an ``event kernel,'' which is the time-integrated volumetric field for each earthquake. By summing (or ``stacking'') the event kernels from 28 well-recorded earthquakes, we identify several reflectors using this imaging procedure. The most prominent lateral reflectors occur in proximity to: the southernmost San Joaquin basin, the Los Angeles basin, the San Pedro basin, the Ventura basin, the Manix basin, the San Clemente--Santa Cruz--Santa Barbara ridge, and isolated segments of the San Jacinto and San Andreas faults. The correspondence between observed coherent coda waveforms and the imaged reflectors provides a solid basis for interpreting the kernel features as material contrasts. The 3D spatial extent and amplitude of the kernel features provide constraints on the geometry and material contrast of the imaged reflectors.

  6. Gaining Cyber Dominance

    DTIC Science & Technology

    2015-01-01

    Robust team exercise and simulation • Air-gapped; isolation from production networks • “Train as you fight” scenarios • Advanced user and Internet...Security Onion • SIFT (Linux/Windows) • Kali • Rucksack • Docker • VTS...TEXN Architecture

  7. Performance of the Wavelet Decomposition on Massively Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek A.; LeMoigne, Jacqueline; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Traditionally, Fourier Transforms have been utilized for performing signal analysis and representation. But although it is straightforward to reconstruct a signal from its Fourier transform, no local description of the signal is included in its Fourier representation. To alleviate this problem, Windowed Fourier transforms and then wavelet transforms have been introduced, and it has been proven that wavelets give a better localization than traditional Fourier transforms, as well as a better division of the time- or space-frequency plane than Windowed Fourier transforms. Because of these properties and after the development of several fast algorithms for computing the wavelet representation of any signal, in particular the Multi-Resolution Analysis (MRA) developed by Mallat, wavelet transforms have increasingly been applied to signal analysis problems, especially real-life problems, in which speed is critical. In this paper we present and compare efficient wavelet decomposition algorithms on different parallel architectures. We report and analyze experimental measurements, using NASA remotely sensed images. Results show that our algorithms achieve significant performance gains on current high performance parallel systems, and meet scientific applications and multimedia requirements. The extensive performance measurements collected over a number of high-performance computer systems have revealed important architectural characteristics of these systems, in relation to the processing demands of the wavelet decomposition of digital images.
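
    A single level of the Haar case of Mallat's multi-resolution analysis is easy to write down: each pair of samples is replaced by a scaled sum (approximation) and difference (detail), and recursing on the approximation builds the pyramid. The sketch below is a minimal serial illustration, not the parallel algorithms evaluated in the paper.

```cpp
// One level of a Haar wavelet decomposition: pairwise scaled sums (approximation)
// and differences (detail). Repeating on the approximation gives the MRA pyramid.
#include <cmath>
#include <cstdio>
#include <vector>

void haar_step(const std::vector<double>& in,
               std::vector<double>& approx, std::vector<double>& detail) {
    const double s = 1.0 / std::sqrt(2.0);
    approx.clear(); detail.clear();
    for (std::size_t i = 0; i + 1 < in.size(); i += 2) {
        approx.push_back(s * (in[i] + in[i + 1]));   // low-pass half
        detail.push_back(s * (in[i] - in[i + 1]));   // high-pass half
    }
}

int main() {
    std::vector<double> signal = {4, 6, 10, 12, 8, 6, 5, 5};
    std::vector<double> a, d;
    haar_step(signal, a, d);
    for (std::size_t i = 0; i < a.size(); ++i)
        std::printf("approx %.3f  detail %.3f\n", a[i], d[i]);
    return 0;
}
```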

  8. SpF: Enabling Petascale Performance for Pseudospectral Dynamo Models

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Clune, T.; Vriesema, J.; Gutmann, G.

    2013-12-01

    Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical 'kernels' that can be performed entirely in-processor. The granularity of domain-decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe the basic architecture of SpF as well as preliminary performance data and experience with adapting legacy dynamo codes. We will conclude with a discussion of planned extensions to SpF that will provide pseudospectral applications with additional flexibility with regard to time integration, linear solvers, and discretization in the radial direction.

  9. A performance model for GPUs with caches

    DOE PAGES

    Dao, Thanh Tuan; Kim, Jungwon; Seo, Sangmin; ...

    2014-06-24

    To exploit the abundant computational power of the world's fastest supercomputers, an even workload distribution to the typically heterogeneous compute devices is necessary. While relatively accurate performance models exist for conventional CPUs, accurate performance estimation models for modern GPUs do not exist. This paper presents two accurate models for modern GPUs: a sampling-based linear model, and a model based on machine-learning (ML) techniques which improves the accuracy of the linear model and is applicable to modern GPUs with and without caches. We first construct the sampling-based linear model to predict the runtime of an arbitrary OpenCL kernel. Based on an analysis of NVIDIA GPUs' scheduling policies we determine the earliest sampling points that allow an accurate estimation. The linear model cannot capture well the significant effects that memory coalescing or caching as implemented in modern GPUs have on performance. We therefore propose a model based on ML techniques that takes several compiler-generated statistics about the kernel as well as the GPU's hardware performance counters as additional inputs to obtain a more accurate runtime performance estimation for modern GPUs. We demonstrate the effectiveness and broad applicability of the model by applying it to three different NVIDIA GPU architectures and one AMD GPU architecture. On an extensive set of OpenCL benchmarks, on average, the proposed model estimates the runtime performance with less than 7 percent error for a second-generation GTX 280 with no on-chip caches and less than 5 percent for the Fermi-based GTX 580 with hardware caches. On the Kepler-based GTX 680, the linear model has an error of less than 10 percent. On an AMD GPU architecture, the Radeon HD 6970, the model estimates with an error rate of 8 percent. As a result, the proposed technique outperforms existing models by a factor of 5 to 6 in terms of accuracy.

  10. Mark 4A antenna control system data handling architecture study

    NASA Technical Reports Server (NTRS)

    Briggs, H. C.; Eldred, D. B.

    1991-01-01

    A high-level review was conducted to provide an analysis of the existing architecture used to handle data and implement control algorithms for NASA's Deep Space Network (DSN) antennas and to make system-level recommendations for improving this architecture so that the DSN antennas can support the ever-tightening requirements of the next decade and beyond. It was found that the existing system is seriously overloaded, with processor utilization approaching 100 percent. A number of factors contribute to this overloading, including dated hardware, inefficient software, and a message-passing strategy that depends on serial connections between machines. At the same time, the system has shortcomings and idiosyncrasies that require extensive human intervention. A custom operating system kernel and an obscure programming language exacerbate the problems and should be modernized. A new architecture is presented that addresses these and other issues. Key features of the new architecture include a simplified message passing hierarchy that utilizes a high-speed local area network, redesign of particular processing function algorithms, consolidation of functions, and implementation of the architecture in modern hardware and software using mainstream computer languages and operating systems. The system would also allow incremental hardware improvements as better and faster hardware for such systems becomes available, and costs could potentially be low enough that redundancy would be provided economically. Such a system could support DSN requirements for the foreseeable future, though thorough consideration must be given to hard computational requirements, porting existing software functionality to the new system, and issues of fault tolerance and recovery.

  11. Convolutional neural network architectures for predicting DNA–protein binding

    PubMed Central

    Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.

    2016-01-01

    Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608

  12. Investigating Advances in the Acquisition of Secure Systems Based on Open Architecture, Open Source Software, and Software Product Lines

    DTIC Science & Technology

    2012-01-27

    example is found in games converted to serve a purpose other than entertainment, such as the development and use of games for science, technology, and...These play-session histories can then be further modded via video editing or remixing with other media (e.g., adding music) to better enable cinematic...available OSS (e.g., the Linux Kernel on the Sony PS3 game console) that game system hackers seek to undo. Finally, games are one of the most commonly

  13. A Case Study of Two NRL Pump Prototypes

    DTIC Science & Technology

    1996-01-01

    Figure 1. The simplified Pump architecture. Figure 4. STOP security ring structure. The security kernel provides basic system...

  14. Hybrid parallel computing architecture for multiview phase shifting

    NASA Astrophysics Data System (ADS)

    Zhong, Kai; Li, Zhongwei; Zhou, Xiaohui; Shi, Yusheng; Wang, Congjun

    2014-11-01

    The multiview phase-shifting method shows powerful capability in achieving high-resolution three-dimensional (3-D) shape measurement. Unfortunately, this ability comes at very high computation cost, and 3-D computations have had to be processed offline. To realize real-time 3-D shape measurement, a hybrid parallel computing architecture is proposed for multiview phase shifting. In this architecture, the central processing unit cooperates with the graphics processing unit (GPU) to achieve hybrid parallel computing. The high-computation-cost procedures, including lens distortion rectification, phase computation, correspondence, and 3-D reconstruction, are implemented on the GPU, and a three-layer kernel function model is designed to simultaneously realize coarse-grained and fine-grained parallel computing. Experimental results verify that the developed system can perform 50 fps (frames per second) real-time 3-D measurement with 260 K 3-D points per frame. A speedup of up to 180 times is obtained with the proposed technique using an NVIDIA GTX 560 Ti graphics card rather than a sequential C implementation on a 3.4 GHz Intel Core i7-3770.

  15. Kokkos: Enabling manycore performance portability through polymorphic memory access patterns

    DOE PAGES

    Carter Edwards, H.; Trott, Christian R.; Sunderland, Daniel

    2014-07-22

    The manycore revolution can be characterized by increasing thread counts, decreasing memory per thread, and diversity of continually evolving manycore architectures. High performance computing (HPC) applications and libraries must exploit increasingly finer levels of parallelism within their codes to sustain scalability on these devices. We found that a major obstacle to performance portability is the diverse and conflicting set of constraints on memory access patterns across devices. Contemporary portable programming models address manycore parallelism (e.g., OpenMP, OpenACC, OpenCL) but fail to address memory access patterns. The Kokkos C++ library enables applications and domain libraries to achieve performance portability on diverse manycore architectures by unifying abstractions for both fine-grain data parallelism and memory access patterns. In this paper we describe Kokkos' abstractions, summarize its application programmer interface (API), present performance results for unit-test kernels and mini-applications, and outline an incremental strategy for migrating legacy C++ codes to Kokkos. Furthermore, the Kokkos library is under active research and development to incorporate capabilities from new generations of manycore architectures, and to address a growing list of applications and domain libraries.
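
    A minimal Kokkos-style example of the abstractions described above (Views for data, parallel_for/parallel_reduce for dispatch) is sketched below; it follows the commonly documented API but should be checked against the current Kokkos documentation, and the problem size is an assumption.

```cpp
// Minimal Kokkos-style data-parallel kernels: a View abstracts the memory layout
// and parallel_for/parallel_reduce dispatch the loop body to the chosen backend
// (OpenMP, CUDA, ...). Sketch only; consult the Kokkos documentation.
#include <Kokkos_Core.hpp>
#include <cstdio>

int main(int argc, char* argv[]) {
    Kokkos::initialize(argc, argv);
    {
        const int N = 1 << 20;
        Kokkos::View<double*> x("x", N), y("y", N);

        // Fill and update; the same source targets host or device backends.
        Kokkos::parallel_for("init", N, KOKKOS_LAMBDA(const int i) {
            x(i) = 1.0; y(i) = 2.0;
        });
        Kokkos::parallel_for("axpy", N, KOKKOS_LAMBDA(const int i) {
            y(i) = 3.0 * x(i) + y(i);
        });

        double sum = 0.0;
        Kokkos::parallel_reduce("sum", N, KOKKOS_LAMBDA(const int i, double& s) {
            s += y(i);
        }, sum);
        std::printf("sum = %.1f\n", sum);
    }
    Kokkos::finalize();
    return 0;
}
```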

  16. Maintenance Task Data Base for Buildings: Architectural Systems

    DTIC Science & Technology

    1991-05-01

    [Excerpt of OCR-garbled maintenance task records for the EXTERIOR DOOR and EXTERIOR WINDOWS systems (e.g., "REPLACE METAL WIRE MESH PAINTED EXTERIOR DOOR", "REPLACE 2ND FLOOR STEEL FRAME PAINTED EXTERIOR WINDOWS"); each record lists a unit of measure, frequency of occurrence, persons per team, and task duration.]

  17. Gradient descent for robust kernel-based regression

    NASA Astrophysics Data System (ADS)

    Guo, Zheng-Chu; Hu, Ting; Shi, Lei

    2018-06-01

    In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, and can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of empirical risk minimization based on such losses: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. Numerical experiments on synthetic examples and a real data set also support our theoretical results.
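
    A minimal numerical sketch of this scheme follows, assuming a Gaussian windowing function (i.e., a Welsch-type robust loss) and an RBF kernel; the step size, iteration count, and kernel width here are illustrative rather than the paper's choices:

        import numpy as np

        def rbf_kernel(X, Y, gamma=1.0):
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def robust_kernel_gd(X, y, sigma=1.0, eta=0.5, n_iters=200, gamma=1.0):
            """Gradient descent in an RKHS under a Welsch-type robust loss.

            The windowing here is Gaussian: l'(r) = r * exp(-r^2 / (2 sigma^2)),
            so large residuals (outliers) contribute little to each update.
            Early stopping (n_iters) plays the role of the regularizer.
            """
            n = len(y)
            K = rbf_kernel(X, X, gamma)
            alpha = np.zeros(n)                  # f(x) = sum_i alpha_i k(x, x_i)
            for _ in range(n_iters):
                r = K @ alpha - y                # residuals f(x_i) - y_i
                grad = r * np.exp(-r ** 2 / (2 * sigma ** 2))
                alpha -= eta * grad / n
            return alpha

        # usage: alpha = robust_kernel_gd(X_train, y_train, sigma=0.5)
        #        y_hat = rbf_kernel(X_test, X_train) @ alpha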

  18. Authentication Architecture for Region-Wide e-Health System with Smartcards and a PKI

    NASA Astrophysics Data System (ADS)

    Zúquete, André; Gomes, Helder; Cunha, João Paulo Silva

    This paper describes the design and implementation of an e-Health authentication architecture using smartcards and a PKI. This architecture was developed to authenticate e-Health Professionals accessing the RTS (Rede Telemática da Saúde), a regional platform for sharing clinical data among a set of affiliated health institutions. The architecture had to accommodate specific RTS requirements, namely the security of Professionals' credentials, the mobility of Professionals, and the scalability to accommodate new health institutions. The adopted solution uses short-lived certificates and cross-certification agreements between RTS and e-Health institutions for authenticating Professionals accessing the RTS. These certificates carry as well the Professional's role at their home institution for role-based authorization. Trust agreements between e-Health institutions and RTS are necessary in order to make the certificates recognized by the RTS. As a proof of concept, a prototype was implemented with Windows technology. The presented authentication architecture is intended to be applied to other medical telematic systems.

  19. A reference web architecture and patterns for real-time visual analytics on large streaming data

    NASA Astrophysics Data System (ADS)

    Kandogan, Eser; Soroker, Danny; Rohall, Steven; Bak, Peter; van Ham, Frank; Lu, Jie; Ship, Harold-Jeffrey; Wang, Chun-Fu; Lai, Jennifer

    2013-12-01

    Monitoring and analysis of streaming data, such as social media, sensors, and news feeds, has become increasingly important for business and government. The volume and velocity of incoming data are key challenges. To effectively support monitoring and analysis, statistical and visual analytics techniques need to be seamlessly integrated; analytic techniques for a variety of data types (e.g., text, numerical) and scope (e.g., incremental, rolling-window, global) must be properly accommodated; interaction, collaboration, and coordination among several visualizations must be supported in an efficient manner; and the system should support the use of different analytics techniques in a pluggable manner. Especially in web-based environments, these requirements pose restrictions on the basic visual analytics architecture for streaming data. In this paper we report on our experience of building a reference web architecture for real-time visual analytics of streaming data, identify and discuss architectural patterns that address these challenges, and report on applying the reference architecture for real-time Twitter monitoring and analysis.

  20. Exploring Manycore Multinode Systems for Irregular Applications with FPGA Prototyping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceriani, Marco; Palermo, Gianluca; Secchi, Simone

    We present a prototype of a multi-core architecture implemented on FPGA, designed to enable efficient execution of irregular applications on distributed shared memory machines while maintaining high performance on regular workloads. The architecture is composed of off-the-shelf soft cores, local interconnection and a memory interface, integrated with custom components that optimize it for irregular applications. It relies on three key elements: a global address space, multithreading, and fine-grained synchronization. Global addresses are scrambled to reduce the formation of network hot-spots, while the latency of the transactions is covered by integrating a hardware scheduler within the custom load/store buffers to take advantage of the availability of multiple execution threads, increasing efficiency in a way that is transparent to the application. We evaluated a dual-node system on irregular kernels, showing scalability in the number of cores and threads.

  1. The component-based architecture of the HELIOS medical software engineering environment.

    PubMed

    Degoulet, P; Jean, F C; Engelmann, U; Meinzer, H P; Baud, R; Sandblad, B; Wigertz, O; Le Meur, R; Jagermann, C

    1994-12-01

    The constitution of highly integrated health information networks and the growth of multimedia technologies raise new challenges for the development of medical applications. We describe in this paper the general architecture of the HELIOS medical software engineering environment, devoted to the development and maintenance of multimedia distributed medical applications. HELIOS is made of a set of software components federated by a communication channel called the HELIOS Unification Bus. The HELIOS kernel includes three main components: the Analysis-Design Environment, the Object Information System and the Interface Manager. HELIOS services consist of a collection of toolkits providing the necessary facilities to medical application developers. They include Image-Related services, a Natural Language Processor, a Decision Support System and Connection services. The project gives special attention to both object-oriented approaches and software re-usability, which are considered crucial steps towards the development of more reliable, coherent and integrated applications.

  2. Optimizing Excited-State Electronic-Structure Codes for Intel Knights Landing: A Case Study on the BerkeleyGW Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deslippe, Jack; da Jornada, Felipe H.; Vigil-Fowler, Derek

    2016-10-06

    We profile and optimize calculations performed with the BerkeleyGW code on the Xeon-Phi architecture. BerkeleyGW depends both on hand-tuned critical kernels as well as on BLAS and FFT libraries. We describe the optimization process and performance improvements achieved. We discuss a layered parallelization strategy to take advantage of vector, thread and node-level parallelism. We discuss locality changes (including the consequence of the lack of L3 cache) and effective use of the on-package high-bandwidth memory. We show preliminary results on Knights-Landing including a roofline study of code performance before and after a number of optimizations. We find that the GW method is particularly well-suited for many-core architectures due to the ability to exploit a large amount of parallelism over plane-wave components, band-pairs, and frequencies.

  3. Gregarious Data Re-structuring in a Many Core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, Sunil; Manzano Franco, Joseph B.; Marquez, Andres

    In this paper, we have developed a new methodology that takes into consideration the access patterns of a single parallel actor (e.g. a thread) as well as the access patterns of “grouped” parallel actors that share a resource (e.g. a distributed Level 3 cache). We start with a hierarchical tile code for our target machine and apply a series of transformations at the tile level to improve data residence in a given memory hierarchy level. The contributions of this paper include (a) collaborative data restructuring for group reuse and (b) a low-overhead transformation technique to improve access patterns and bring closely connected data elements together. Preliminary results on a many-core architecture, the Tilera TileGX, show promising improvements over optimized OpenMP code (up to a 31% increase in GFLOPS) and over our own previous work on fine-grained runtimes (up to 16%) for selected kernels.

  4. Cross-stiffened continuous fiber structures

    NASA Technical Reports Server (NTRS)

    Ewen, John R.; Suarez, Jim A.

    1993-01-01

    Under NASA's Novel Composites for Wing and Fuselage Applications (NCWFA) program, Contract NAS1-18784, Grumman is evaluating the structural efficiency of graphite/epoxy cross-stiffened panel elements fabricated using innovative textile preforms and cost effective Resin Transfer Molding (RTM) and Resin Film Infusion (RFI) processes. Two three-dimensional woven preform assembly concepts have been defined for application to a representative window belt design typically found in a commercial transport airframe. The 3D woven architecture for each of these concepts is different; one is vertically woven in the plane of the window belt geometry and the other is loom woven in a compressed state similar to an unfolded eggcrate. The feasibility of both designs has been demonstrated in the fabrication of small test element assemblies. These elements and the final window belt assemblies will be structurally tested, and results compared.

  5. High-Power X-Band Semiconductor RF Switch for Pulse Compression Systems of Future Colliders

    NASA Astrophysics Data System (ADS)

    Tantawi, Sami G.; Tamura, Fumihiko

    2000-04-01

    We describe the potential of semiconductor X-band RF switch arrays as a means of developing high power RF pulse compression systems for future linear colliders. The switch systems described here have two designs. Both designs consist of two 3dB hybrids and active modules. In the first design the module is composed of a cascaded active phase shifter. In the second design the module uses arrays of SPST (Single Pole Single Throw) switches. Each cascaded element of the phase shifter and the SPST switch has similar design. The active element consists of symmetrical three-port tee-junctions and an active waveguide window in the symmetrical arm of the tee-junction. The design methodology of the elements and the architecture of the whole switch system are presented. We describe the scaling law that governs the relation between power handling capability and number of elements. The design of the active waveguide window is presented. The waveguide window is a silicon wafer with an array of four hundred PIN/NIP diodes covering the surface of the window. This waveguide window is located in an over-moded TE01 circular waveguide. The results of high power RF measurements of the active waveguide window are presented. The experiment is performed at power levels of tens of megawatts at X-band.

  6. Mars Hybrid Propulsion System Trajectory Analysis. Part I; Crew Missions

    NASA Technical Reports Server (NTRS)

    Chai, Patrick R.; Merrill, Raymond G.; Qu, Min

    2015-01-01

    NASA's Human Spaceflight Architecture Team is developing a reusable hybrid transportation architecture in which both chemical and electric propulsion systems are used to send crew and cargo to Mars destinations such as Phobos, Deimos, the surface of Mars, and other orbits around Mars. By combining chemical and electrical propulsion into a single spaceship and applying each where it is more effective, the hybrid architecture enables a series of Mars trajectories that are more fuel-efficient than an all-chemical architecture without significant increases in flight times. This paper provides the analysis of the interplanetary segments of the three Evolvable Mars Campaign crew missions to Mars using the hybrid transportation architecture. The trajectory analysis provides departure and arrival dates and propellant needs for the three crew missions that are used by the campaign analysis team for campaign build-up and logistics aggregation analysis. Sensitivity analyses were performed to investigate the impact of mass growth, departure window, and propulsion system performance on the hybrid transportation architecture. The results and system analysis from this paper contribute to analyses of the other human spaceflight architecture team tasks and feed into the definition of the Evolvable Mars Campaign.

  7. Accelerating a MPEG-4 video decoder through custom software/hardware co-design

    NASA Astrophysics Data System (ADS)

    Díaz, Jorge L.; Barreto, Dacil; García, Luz; Marrero, Gustavo; Carballo, Pedro P.; Núñez, Antonio

    2007-05-01

    In this paper we present a novel methodology to accelerate an MPEG-4 video decoder using software/hardware co-design for wireless DAB/DMB networks. Software support includes the services provided by the embedded kernel μC/OS-II, and the application tasks mapped to software. Hardware support includes several custom co-processors and a communication architecture with bridges to the main system bus and with a dual-port SRAM. Synchronization among tasks is achieved at two levels, by a hardware protocol and by kernel-level scheduling services. Our reference application is an MPEG-4 video decoder composed of several software functions and written using a special C++ library named CASSE. Profiling and space exploration techniques were previously applied to the Advanced Simple Profile (ASP) MPEG-4 decoder to determine the best HW/SW partition, which is developed here. This research is part of the ARTEMI project and its main goal is the establishment of methodologies for the design of real-time complex digital systems using Programmable Logic Devices with embedded microprocessors as target technology and the design of multimedia systems for broadcasting networks as reference application.

  8. A New Generation of Real-Time Systems in the JET Tokamak

    NASA Astrophysics Data System (ADS)

    Alves, Diogo; Neto, Andre C.; Valcarcel, Daniel F.; Felton, Robert; Lopez, Juan M.; Barbalace, Antonio; Boncagni, Luca; Card, Peter; De Tommasi, Gianmaria; Goodyear, Alex; Jachmich, Stefan; Lomas, Peter J.; Maviglia, Francesco; McCullen, Paul; Murari, Andrea; Rainford, Mark; Reux, Cedric; Rimini, Fernanda; Sartori, Filippo; Stephen, Adam V.; Vega, Jesus; Vitelli, Riccardo; Zabeo, Luca; Zastrow, Klaus-Dieter

    2014-04-01

    Recently, a new recipe for developing and deploying real-time systems has become increasingly adopted in the JET tokamak. Powered by the advent of x86 multi-core technology and the reliability of JET's well established Real-Time Data Network (RTDN) to handle all real-time I/O, an official Linux vanilla kernel has been demonstrated to be able to provide real-time performance to user-space applications that are required to meet stringent timing constraints. In particular, a careful rearrangement of the Interrupt ReQuests' (IRQs) affinities together with the kernel's CPU isolation mechanism allows one to obtain either soft or hard real-time behavior depending on the synchronization mechanism adopted. Finally, the Multithreaded Application Real-Time executor (MARTe) framework is used for building applications particularly optimised for exploring multi-core architectures. In the past year, four new systems based on this philosophy have been installed and are now part of JET's routine operation. The focus of the present work is on the configuration aspects that enable these new systems' real-time capability. Details are given about the common real-time configuration of these systems, followed by a brief description of each system together with results regarding their real-time performance. A cycle time jitter analysis of a user-space MARTe based application synchronizing over a network is also presented. The goal is to compare its deterministic performance while running on a vanilla and on a Messaging Real time Grid (MRG) Linux kernel.
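
    The IRQ-affinity and CPU-isolation mechanisms mentioned above are standard Linux facilities. The sketch below is a generic illustration of them (the IRQ number and CPU layout are hypothetical, and this is not JET's actual configuration):

        import os

        def pin_irq(irq: int, cpu: int) -> None:
            """Steer a device IRQ to one CPU by writing a hex mask to procfs (requires root)."""
            mask = 1 << cpu
            with open(f"/proc/irq/{irq}/smp_affinity", "w") as f:
                f.write(f"{mask:x}")

        def pin_self(cpu: int) -> None:
            """Pin the calling real-time process to a single (ideally isolated) CPU."""
            os.sched_setaffinity(0, {cpu})

        # Example: keep IRQ 42 on CPU 0 and run the real-time task on CPU 3,
        # assuming CPU 3 was isolated at boot (e.g. with the isolcpus=3 kernel parameter).
        # pin_irq(42, 0); pin_self(3)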

  9. Solar Transparent Radiators by Optical Nanoantennas.

    PubMed

    Jönsson, Gustav; Tordera, Daniel; Pakizeh, Tavakol; Jaysankar, Manoj; Miljkovic, Vladimir; Tong, Lianming; Jonsson, Magnus P; Dmitriev, Alexandre

    2017-11-08

    Architectural windows are a major cause of thermal discomfort, as the inner glazing during cold days can be several degrees colder than the indoor air. To mitigate this, the indoor temperature has to be increased, leading to unavoidable thermal losses. Here we present solar thermal surfaces based on complex nanoplasmonic antennas that can raise the temperature of window glazing by up to 8 K upon solar irradiation while transmitting light with a color rendering index of 98.76. The nanoantennas are directional, can be tuned to absorb in different spectral ranges, and possess a structural integrity that is not substrate-dependent; they thus open the way to applications on a broad range of surfaces.

  10. Open the Windows: Design New Spaces for Learning

    ERIC Educational Resources Information Center

    Johnson, Christopher

    2011-01-01

    As a technologist, the author is interested in how the digital world is changing the educational landscape. As he began to research effective learning spaces, he discovered that the architecture, design, and school facilities communities are making a great deal of progress in creating better classrooms and school buildings. Unfortunately, many in…

  11. Genetic architecture and biological basis for feed efficiency in dairy cattle

    USDA-ARS?s Scientific Manuscript database

    The genetic architecture of residual feed intake (RFI) and related traits was evaluated using a dataset of 2,894 cows. A Bayesian analysis estimated that markers accounted for 14% of the variance in RFI, and that RFI had considerable genetic variation. Effects of marker windows were small, but QTL p...

  12. Lightness computation by the human visual system

    NASA Astrophysics Data System (ADS)

    Rudd, Michael E.

    2017-05-01

    A model of achromatic color computation by the human visual system is presented, which is shown to account in an exact quantitative way for a large body of appearance matching data collected with simple visual displays. The model equations are closely related to those of the original Retinex model of Land and McCann. However, the present model differs in important ways from Land and McCann's theory in that it invokes additional biological and perceptual mechanisms, including contrast gain control, different inherent neural gains for incremental and decremental luminance steps, and two types of top-down influence on the perceptual weights applied to local luminance steps in the display: edge classification and spatial integration attentional windowing. Arguments are presented to support the claim that these various visual processes must be instantiated by a particular underlying neural architecture. By pointing to correspondences between the architecture of the model and findings from visual neurophysiology, this paper suggests that edge classification involves a top-down gating of neural edge responses in early visual cortex (cortical areas V1 and/or V2) while spatial integration windowing occurs in cortical area V4 or beyond.

  13. Reconfigurable vision system for real-time applications

    NASA Astrophysics Data System (ADS)

    Torres-Huitzil, Cesar; Arias-Estrada, Miguel

    2002-03-01

    Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for systems-on-chip designs, and makes it easy to import technology into a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted at computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to build a system for such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, and a mechanism for building systems with this architecture. Regarding the software part of the system, a library of pre-designed and general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.

  14. GPU Accelerated Vector Median Filter

    NASA Technical Reports Server (NTRS)

    Aras, Rifat; Shen, Yuzhong

    2011-01-01

    Noise reduction is an important step for most image processing tasks. For three-channel color images, a widely used technique is the vector median filter, in which the color values of pixels are treated as 3-component vectors. Vector median filters are computationally expensive: for a window size of n x n, each of the n^2 vectors has to be compared with the other n^2 - 1 vectors in terms of distances. General-purpose computation on graphics processing units (GPUs) is the paradigm of utilizing high-performance many-core GPU architectures for computation tasks that are normally handled by CPUs. In this work, NVIDIA's Compute Unified Device Architecture (CUDA) paradigm is used to accelerate vector median filtering, which has to the best of our knowledge never been done before. The performance of the GPU-accelerated vector median filter is compared to that of the CPU and MPI-based versions for different image and window sizes. Initial findings of the study showed a 100x performance improvement of the vector median filter implementation on GPUs over CPU implementations, and further speed-up is expected after more extensive optimizations of the GPU algorithm.
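
    For reference, the sketch below is a plain CPU implementation of the vector median filter the abstract describes (an illustration only, not the CUDA code evaluated in the work):

        import numpy as np

        def vector_median_filter(img, k=3):
            """Vector median filter for a color image of shape (H, W, 3) with a k x k window.

            For each window, the output pixel is the member vector whose summed
            L2 distance to all other vectors in the window is smallest.
            """
            pad = k // 2
            padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
            out = np.empty_like(img)
            H, W = img.shape[:2]
            for y in range(H):
                for x in range(W):
                    win = padded[y:y + k, x:x + k].reshape(-1, 3).astype(np.float64)
                    # pairwise distances between all k*k vectors in the window
                    d = np.linalg.norm(win[:, None, :] - win[None, :, :], axis=-1).sum(axis=1)
                    out[y, x] = win[np.argmin(d)]
            return out

        # usage: denoised = vector_median_filter(noisy_rgb, k=3)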

  15. Unified transform architecture for AVC, AVS, VC-1 and HEVC high-performance codecs

    NASA Astrophysics Data System (ADS)

    Dias, Tiago; Roma, Nuno; Sousa, Leonel

    2014-12-01

    A unified architecture for fast and efficient computation of the set of two-dimensional (2-D) transforms adopted by the most recent state-of-the-art digital video standards is presented in this paper. In contrast to other designs with similar functionality, the presented architecture is supported on a scalable, modular and completely configurable processing structure. This flexible structure not only allows the architecture to be easily reconfigured to support different transform kernels, but also permits its resizing to efficiently support transforms of different orders (e.g. order-4, order-8, order-16 and order-32). Consequently, not only is it highly suitable for realizing high-performance multi-standard transform cores, but it also offers highly efficient implementations of specialized processing structures addressing only the reduced subset of transforms used by a specific video standard. The experimental results that were obtained by prototyping several configurations of this processing structure in a Xilinx Virtex-7 FPGA show the superior performance and hardware efficiency levels provided by the proposed unified architecture for the implementation of transform cores for the Advanced Video Coding (AVC), Audio Video coding Standard (AVS), VC-1 and High Efficiency Video Coding (HEVC) standards. In addition, such results also demonstrate the ability of this processing structure to realize multi-standard transform cores supporting all the standards mentioned above that are capable of processing the 8k Ultra High Definition Television (UHDTV) video format (7,680 × 4,320 at 30 fps) in real time.
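
    These block transforms share the separable row-column form Y = C·X·Cᵀ, which is what lets a single configurable datapath serve several standards. A minimal software illustration follows, using the order-4 H.264/AVC integer kernel as the example matrix C (the paper's hardware structure is not reproduced here):

        import numpy as np

        def transform_2d(block, C):
            """Separable 2-D block transform: apply kernel C to rows and columns."""
            return C @ block @ C.T

        # Order-4 integer kernel used by H.264/AVC (scaling stage omitted);
        # other standards would substitute their own kernel matrix here.
        C4_AVC = np.array([[1,  1,  1,  1],
                           [2,  1, -1, -2],
                           [1, -1, -1,  1],
                           [1, -2,  2, -1]], dtype=np.int64)

        # usage: coeffs = transform_2d(residual_4x4, C4_AVC)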

  16. The genetic architecture of maize (Zea mays L.) kernel weight determination.

    PubMed

    Alvarez Prado, Santiago; López, César G; Senior, M Lynn; Borrás, Lucas

    2014-09-18

    Individual kernel weight is an important trait for maize yield determination. We have identified genomic regions controlling this trait by using the B73xMo17 population; however, the effect of genetic background on control of this complex trait and its physiological components is not yet known. The objective of this study was to understand how genetic background affected our previous results. Two nested stable recombinant inbred line populations (N209xMo17 and R18xMo17) were designed for this purpose. A total of 408 recombinant inbred lines were genotyped and phenotyped at two environments for kernel weight and five other traits related to kernel growth and development. All traits showed very high and significant (P < 0.001) phenotypic variability and medium-to-high heritability (0.60-0.90). When N209xMo17 and R18xMo17 were analyzed separately, a total of 23 environmentally stable quantitative trait loci (QTL) and five epistatic interactions were detected for N209xMo17. For R18xMo17, 59 environmentally stable QTL and 17 epistatic interactions were detected. A joint analysis detected 14 stable QTL regardless of the genetic background. Between 57 and 83% of detected QTL were population specific, denoting medium-to-high genetic background effects. This percentage was dependent on the trait. A meta-analysis including our previous B73xMo17 results identified five relevant genomic regions deserving further characterization. In summary, our grain filling traits were dominated by small additive QTL with several epistatic and few environmental interactions and medium-to-high genetic background effects. This study demonstrates that the number of detected QTL and additive effects for different physiologically related grain filling traits need to be understood relative to the specific germplasm. Copyright © 2014 Alvarez Prado et al.

  17. Nutritive value of corn silage as affected by maturity and mechanical processing: a contemporary review.

    PubMed

    Johnson, L; Harrison, J H; Hunt, C; Shinners, K; Doggett, C G; Sapienza, D

    1999-12-01

    Stage of maturity at harvest and mechanical processing affect the nutritive value of corn silage. The change in nutritive value of corn silage as maturity advances can be measured by animal digestion and macro in situ degradation studies among other methods. Predictive equations using climatic data, vitreousness of corn grain in corn silage, starch reactivity, gelatinization enthalpy, dry matter (DM) of corn grain in corn silage, and DM of corn silage can be used to estimate starch digestibility of corn silage. Whole plant corn silage can be mechanically processed either pre- or postensiling with a kernel processor mounted on a forage harvester, a recutter screen on a forage harvester, or a stationary roller mill. Mechanical processing of corn silage can improve ensiling characteristics, reduce DM losses during ensiling, and improve starch and fiber digestion as a result of fracturing the corn kernels and crushing and shearing the stover and cobs. Improvements in milk production have ranged from 0.2 to 2.0 kg/d when cows were fed mechanically processed corn silage. A consistent improvement in milk protein yield has also been observed when mechanically processed corn silage has been fed. With the advent of mechanical processors, alternative strategies are evident for corn silage management, such as a longer harvest window.

  18. Fault Detection and Diagnosis In Hall-Héroult Cells Based on Individual Anode Current Measurements Using Dynamic Kernel PCA

    NASA Astrophysics Data System (ADS)

    Yao, Yuchen; Bao, Jie; Skyllas-Kazacos, Maria; Welch, Barry J.; Akhmetov, Sergey

    2018-04-01

    Individual anode current signals in aluminum reduction cells provide localized cell conditions in the vicinity of each anode, which contain more information than the conventionally measured cell voltage and line current. One common use of this measurement is to identify process faults that can cause significant changes in the anode current signals. While this method is simple and direct, it ignores the interactions between anode currents and other important process variables. This paper presents an approach that applies multivariate statistical analysis techniques to individual anode currents and other process operating data, for the detection and diagnosis of local process abnormalities in aluminum reduction cells. Specifically, since the Hall-Héroult process is time-varying with its process variables dynamically and nonlinearly correlated, dynamic kernel principal component analysis with moving windows is used. The cell is discretized into a number of subsystems, with each subsystem representing one anode and cell conditions in its vicinity. The fault associated with each subsystem is identified based on multivariate statistical control charts. The results show that the proposed approach is able to not only effectively pinpoint the problematic areas in the cell, but also assess the effect of the fault on different parts of the cell.
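
    A compact sketch of the moving-window dynamic kernel PCA idea follows (time-lagged embedding, a Gaussian kernel, and a Hotelling T² monitoring statistic); the kernel choice, window length, and absence of control limits are simplifications for illustration, not the paper's settings:

        import numpy as np

        def time_lag(X, lags=2):
            """Augment each sample with its previous `lags` samples (dynamic embedding)."""
            rows = [X[i - lags:i + 1].ravel() for i in range(lags, len(X))]
            return np.asarray(rows)

        def kpca_t2(window, gamma=0.1, n_comp=3):
            """Hotelling T^2 statistic from kernel PCA fitted on one moving window."""
            K = np.exp(-gamma * ((window[:, None, :] - window[None, :, :]) ** 2).sum(-1))
            n = len(K)
            J = np.eye(n) - np.ones((n, n)) / n
            Kc = J @ K @ J                                   # centre the kernel matrix
            w, V = np.linalg.eigh(Kc)
            idx = np.argsort(w)[::-1][:n_comp]               # keep the leading components
            w, V = np.maximum(w[idx], 1e-12), V[:, idx]
            scores = Kc @ V / np.sqrt(w)                     # projections onto components
            return (scores ** 2 / w).sum(axis=1)             # T^2 per sample in the window

        # usage (synthetic stand-in for anode-current and cell variables, one row per scan):
        # X = np.random.randn(500, 8)
        # t2 = kpca_t2(time_lag(X, lags=2)[-100:])           # statistic over the latest window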

  19. Time-frequency analysis of electric motors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bentley, C.L.; Dunn, M.E.; Mattingly, J.K.

    1995-12-31

    Physical signals such as the current of an electric motor become nonstationary as a consequence of degraded operation and broken parts. In this instance, their power spectral densities become time dependent, and time-frequency analysis techniques become the appropriate tools for signal analysis. The first among these techniques, generally called the short-time Fourier transform (STFT) method, is the Gabor transform (GT) of a signal S(t), which decomposes the signal into time-local frequency modes, with the window function Φ(t − τ) being a normalized Gaussian. Alternatively, one can decompose the signal into its multi-resolution representation at different levels of magnification. This representation is achieved by the continuous wavelet transform (CWT), where the function g(t) is a kernel of zero average belonging to a family of scaled and shifted wavelet kernels. The CWT can be interpreted as the action of a microscope that locates the signal by the shift parameter b and adjusts its magnification by changing the scale parameter a. The Fourier-transformed CWT, W_g(a, ω), acts as a filter that places the high-frequency content of a signal into the lower end of the scale spectrum and vice versa for the low frequencies. Signals from a motor in three different states were analyzed.
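
    As a concrete (if unoptimized) illustration of the Gabor/STFT decomposition described above, the sketch below evaluates GT(τ, ω) = ∫ S(t) Φ(t − τ) e^(−iωt) dt with a Gaussian window; the signal in the usage note is synthetic, not the motor-current data of the study:

        import numpy as np

        def gabor_transform(s, fs, freqs, sigma=0.05):
            """Magnitude of the discrete Gabor/STFT of signal s with a Gaussian window."""
            t = np.arange(len(s)) / fs
            out = np.empty((len(freqs), len(s)))
            for i, f in enumerate(freqs):
                for j, tau in enumerate(t):
                    window = np.exp(-((t - tau) ** 2) / (2 * sigma ** 2))  # Phi(t - tau)
                    out[i, j] = np.abs(np.sum(s * window * np.exp(-2j * np.pi * f * t)) / fs)
            return out

        # usage with a synthetic nonstationary signal:
        # fs = 1000.0
        # t = np.arange(0, 1, 1 / fs)
        # s = np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 120 * t ** 2)
        # tf = gabor_transform(s, fs, freqs=np.linspace(10, 200, 40))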

  20. Scale-space measures for graph topology link protein network architecture to function.

    PubMed

    Hulsman, Marc; Dimitrakopoulos, Christos; de Ridder, Jeroen

    2014-06-15

    The network architecture of physical protein interactions is an important determinant for the molecular functions that are carried out within each cell. To study this relation, the network architecture can be characterized by graph topological characteristics such as shortest paths and network hubs. These characteristics have an important shortcoming: they do not take into account that interactions occur across different scales. This is important because some cellular functions may involve a single direct protein interaction (small scale), whereas others require more and/or indirect interactions, such as protein complexes (medium scale) and interactions between large modules of proteins (large scale). In this work, we derive generalized scale-aware versions of known graph topological measures based on diffusion kernels. We apply these to characterize the topology of networks across all scales simultaneously, generating a so-called graph topological scale-space. The comprehensive physical interaction network in yeast is used to show that scale-space based measures consistently give superior performance when distinguishing protein functional categories and three major types of functional interactions (genetic interaction, co-expression and perturbation interactions). Moreover, we demonstrate that graph topological scale spaces capture biologically meaningful features that provide new insights into the link between function and protein network architecture. Matlab(TM) code to calculate the scale-aware topological measures (STMs) is available at http://bioinformatics.tudelft.nl/TSSA © The Author 2014. Published by Oxford University Press.
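
    The scale parameter in such measures is typically the diffusion time of a kernel on the graph. A small sketch of that construction follows (a generic diffusion-kernel degree profile, not the paper's exact STM definitions):

        import numpy as np
        from scipy.linalg import expm

        def diffusion_kernel(A, beta=1.0):
            """Graph diffusion kernel K = exp(-beta * L) from an adjacency matrix A."""
            L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian
            return expm(-beta * L)

        def scale_aware_degree(A, beta=1.0):
            """A scale-dependent analogue of node degree: total diffusion mass reaching
            each node at scale beta (larger beta probes larger-scale structure)."""
            K = diffusion_kernel(A, beta)
            return K.sum(axis=1) - np.diag(K)

        # usage: betas = [0.1, 1.0, 10.0] give a small/medium/large-scale profile per protein
        # profiles = np.column_stack([scale_aware_degree(A, b) for b in betas])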

  1. The Traffic Noise Index: A Method of Controlling Noise Nuisance.

    ERIC Educational Resources Information Center

    Langdon, F. J.; Scholes, W. E.

    This building research survey is an analysis of the social nuisance caused by urban motor ways and their noise. The Traffic Noise Index is used to indicate traffic noises and their effects on architectural designs and planning, while suggesting the need for more and better window insulation and acoustical barriers. Overall concern is for--(1)…

  2. Parallel Architectures for Planetary Exploration Requirements (PAPER)

    NASA Astrophysics Data System (ADS)

    Cezzar, Ruknet

    1993-08-01

    The project's main contributions have been in the area of student support. Throughout the project, at least one, in some cases two, undergraduate students have been supported. By working with the project, these students gained valuable knowledge involving the scientific research project, including the not-so-pleasant reporting requirements to the funding agencies. The other important contribution was towards the establishment of a graduate program in computer science at Hampton University. Primarily, the PAPER project has served as the main research basis in seeking funds from other agencies, such as the National Science Foundation, for establishing a research infrastructure in the department. In technical areas, especially in the first phase, we believe the trip to Jet Propulsion Laboratory, and gathering together all the pertinent information involving experimental computer architectures aimed for planetary explorations was very helpful. Indeed, if this effort is to be revived in the future due to congressional funding for planetary explorations, say an unmanned mission to Mars, our interim report will be an important starting point. In other technical areas, our simulator has pinpointed and highlighted several important performance issues related to the design of operating system kernels for MIMD machines. In particular, the critical issue of how the kernel itself will run in parallel on a multiple-processor system has been addressed through the various ready list organization and access policies. In the area of neural computing, our main contribution was an introductory tutorial package to familiarize the researchers at NASA with this new and promising field. Finally, we have introduced the notion of reversibility in programming systems which may find applications in various areas of space research.

  3. Using SpF to Achieve Petascale for Legacy Pseudospectral Applications

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.; Jiang, Weiyuan

    2014-01-01

    Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical kernels that can be performed entirely in-processor. The granularity of domain decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe our experience in porting legacy pseudospectral models, MoSST and DYNAMO, to use SpF, as well as present preliminary performance results provided by the improved scalability.

  4. Parallel Architectures for Planetary Exploration Requirements (PAPER)

    NASA Technical Reports Server (NTRS)

    Cezzar, Ruknet

    1993-01-01

    The project's main contributions have been in the area of student support. Throughout the project, at least one, in some cases two, undergraduate students have been supported. By working with the project, these students gained valuable knowledge involving the scientific research project, including the not-so-pleasant reporting requirements to the funding agencies. The other important contribution was towards the establishment of a graduate program in computer science at Hampton University. Primarily, the PAPER project has served as the main research basis in seeking funds from other agencies, such as the National Science Foundation, for establishing a research infrastructure in the department. In technical areas, especially in the first phase, we believe the trip to Jet Propulsion Laboratory, and gathering together all the pertinent information involving experimental computer architectures aimed for planetary explorations was very helpful. Indeed, if this effort is to be revived in the future due to congressional funding for planetary explorations, say an unmanned mission to Mars, our interim report will be an important starting point. In other technical areas, our simulator has pinpointed and highlighted several important performance issues related to the design of operating system kernels for MIMD machines. In particular, the critical issue of how the kernel itself will run in parallel on a multiple-processor system has been addressed through the various ready list organization and access policies. In the area of neural computing, our main contribution was an introductory tutorial package to familiarize the researchers at NASA with this new and promising field. Finally, we have introduced the notion of reversibility in programming systems which may find applications in various areas of space research.

  5. Mask data processing in the era of multibeam writers

    NASA Astrophysics Data System (ADS)

    Abboud, Frank E.; Asturias, Michael; Chandramouli, Maesh; Tezuka, Yoshihiro

    2014-10-01

    Mask writers' architectures have evolved through the years in response to ever tightening requirements for better resolution, tighter feature placement, improved CD control, and tolerable write time. The unprecedented extension of optical lithography and the myriad of Resolution Enhancement Techniques have tasked current mask writers with ever increasing shot count and higher dose, and therefore, increasing write time. Once again, we see the need for a transition to a new type of mask writer based on massively parallel architecture. These platforms offer a step function improvement in both dose and the ability to process massive amounts of data. The higher dose and almost unlimited appetite for edge corrections open new windows of opportunity to further push the envelope. These architectures are also naturally capable of producing curvilinear shapes, making the need to approximate a curve with multiple Manhattan shapes unnecessary.

  6. Architecture of the software for LAMOST fiber positioning subsystem

    NASA Astrophysics Data System (ADS)

    Peng, Xiaobo; Xing, Xiaozheng; Hu, Hongzhuan; Zhai, Chao; Li, Weimin

    2004-09-01

    The architecture of the software which controls the LAMOST fiber positioning subsystem is described. The software is composed of two parts: a main control program running on a computer and a unit controller program stored in the ROM of an MCS51 single-chip microcomputer. The functions of the software include: client/server model establishment, observation planning, collision handling, data transmission, pulse generation, CCD control, image capture and processing, and data analysis. Particular attention is paid to the ways in which different parts of the software can communicate. Software techniques for multithreading, socket programming, Microsoft Windows message response, and serial communications are also discussed.

  7. VETA x ray data acquisition and control system

    NASA Technical Reports Server (NTRS)

    Brissenden, Roger J. V.; Jones, Mark T.; Ljungberg, Malin; Nguyen, Dan T.; Roll, John B., Jr.

    1992-01-01

    We describe the X-ray Data Acquisition and Control System (XDACS) used together with the X-ray Detection System (XDS) to characterize the X-ray image during testing of the AXAF P1/H1 mirror pair at the MSFC X-ray Calibration Facility. A variety of X-ray data were acquired, analyzed and archived during the testing including: mirror alignment, encircled energy, effective area, point spread function, system housekeeping and proportional counter window uniformity data. The system architecture is presented with emphasis placed on key features that include a layered UNIX tool approach, dedicated subsystem controllers, real-time X-window displays, flexibility in combining tools, network connectivity and system extensibility. The VETA test data archive is also described.

  8. Automatic 3D Moment tensor inversions for southern California earthquakes

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Tape, C.; Friberg, P.; Tromp, J.

    2008-12-01

    We present a new source mechanism (moment-tensor and depth) catalog for about 150 recent southern California earthquakes with Mw ≥ 3.5. We carefully select the initial solutions from a few available earthquake catalogs as well as our own preliminary 3D moment tensor inversion results. We pick useful data windows by assessing the quality of fits between the data and synthetics using an automatic windowing package FLEXWIN (Maggi et al 2008). We compute the source Fréchet derivatives of moment-tensor elements and depth for a recent 3D southern California velocity model inverted based upon finite-frequency event kernels calculated by the adjoint methods and a nonlinear conjugate gradient technique with subspace preconditioning (Tape et al 2008). We then invert for the source mechanisms and event depths based upon the techniques introduced by Liu et al 2005. We assess the quality of this new catalog, as well as the other existing ones, by computing the 3D synthetics for the updated 3D southern California model. We also plan to implement the moment-tensor inversion methods to automatically determine the source mechanisms for earthquakes with Mw ≥ 3.5 in southern California.

  9. Assessing Thermal Comfort Due to a Ventilated Double Window

    NASA Astrophysics Data System (ADS)

    Carlos, Jorge S.; Corvacho, Helena

    2017-10-01

    Building design and its components are the result of a complex process, which should provide pleasant conditions to its inhabitants. Acceptable indoor comfort is therefore influenced by the architectural design. ISO and ASHRAE standards define thermal comfort as the condition of mind that expresses satisfaction with the thermal environment. The energy demand for heating, besides depending on the building's physical properties, also depends on human behaviour, such as opening or closing windows. Generally, windows are the weakest façade element with respect to thermal performance: a lower thermal resistance allows higher thermal conduction through them. When a window is very hot or cold and the occupant is very close to it, the result may be thermal discomfort. The functionality of a ventilated double window introduces new physical considerations to a traditional window. In consequence, it is necessary to study the local effect on human comfort as a function of the boundary conditions. Wind, solar availability, air temperature, and therefore heating and indoor air quality conditions will affect the relationship between this passive system and the indoor environment. In the present paper, the influence of thermal performance and ventilation on human comfort resulting from the construction and geometry solutions is shown, helping to choose the best solution. The presented approach shows that, in order to save energy, it is possible to reduce the air changes of a room to the minimum without compromising air quality, while simultaneously enhancing local thermal performance and comfort. The results of a study on the effect of two parallel windows with a ventilated channel in the same fenestration on comfort conditions, for several different room dimensions, are also presented. As the room's proportions change, so does the window-to-floor ratio; therefore, under the same climatic conditions and the same construction solution, different results are obtained.

  10. Optimization of sparse matrix-vector multiplication on emerging multicore platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel; Oliker, Leonid; Vuduc, Richard

    2007-01-01

    We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD dual-core and Intel quad-core designs, the heterogeneous STI Cell, as well as the first scientific study of the highly multithreaded Sun Niagara2. We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural tradeoffs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.
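
    A baseline (unoptimized) CSR sparse matrix-vector multiply is sketched below for reference; it shows why the kernel is memory-bound and why the row-wise work parallelizes across cores. The optimizations studied in the paper (blocking, prefetching, and so on) are not reproduced here:

        import numpy as np

        def spmv_csr(values, col_idx, row_ptr, x):
            """y = A @ x for a sparse matrix A stored in CSR (compressed sparse row) form."""
            n_rows = len(row_ptr) - 1
            y = np.zeros(n_rows)
            for i in range(n_rows):                        # each row is independent work,
                for k in range(row_ptr[i], row_ptr[i + 1]):  # which multicore SpMV exploits
                    y[i] += values[k] * x[col_idx[k]]
            return y

        # 3x3 example:  [[10, 0, 0],
        #                [ 0, 0, 2],
        #                [ 3, 0, 4]]
        # values = [10, 2, 3, 4]; col_idx = [0, 2, 0, 2]; row_ptr = [0, 1, 2, 4]
        # y = spmv_csr(np.array([10., 2., 3., 4.]), [0, 2, 0, 2], [0, 1, 2, 4], np.ones(3))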

  11. The density matrix renormalization group algorithm on kilo-processor architectures: Implementation and trade-offs

    NASA Astrophysics Data System (ADS)

    Nemes, Csaba; Barcza, Gergely; Nagy, Zoltán; Legeza, Örs; Szolgay, Péter

    2014-06-01

    In the numerical analysis of strongly correlated quantum lattice models one of the leading algorithms developed to balance the size of the effective Hilbert space and the accuracy of the simulation is the density matrix renormalization group (DMRG) algorithm, in which the run-time is dominated by the iterative diagonalization of the Hamilton operator. As the most time-dominant step of the diagonalization can be expressed as a list of dense matrix operations, the DMRG is an appealing candidate to fully utilize the computing power residing in novel kilo-processor architectures. In the paper a smart hybrid CPU-GPU implementation is presented, which exploits the power of both CPU and GPU and tolerates problems exceeding the GPU memory size. Furthermore, a new CUDA kernel has been designed for asymmetric matrix-vector multiplication to accelerate the rest of the diagonalization. Besides the evaluation of the GPU implementation, the practical limits of an FPGA implementation are also discussed.

  12. Signed-negabinary-arithmetic-based optical computing by use of a single liquid-crystal-display panel.

    PubMed

    Datta, Asit K; Munshi, Soumika

    2002-03-10

    Based on the negabinary number representation, parallel one-step arithmetic operations (that is, addition and subtraction), logical operations, and matrix-vector multiplication on data have been optically implemented by use of a two-dimensional spatial-encoding technique. For addition and subtraction, one of the operands in decimal form is converted into the unsigned negabinary form, whereas the other decimal number is represented in the signed negabinary form. The result of the operation is obtained in the mixed negabinary form and is converted back into decimal. Matrix-vector multiplication for unsigned negabinary numbers is achieved through the convolution technique. Both of the operands for logical operations are converted to their signed negabinary forms. All operations are implemented by use of a unique optical architecture. The use of a single liquid-crystal-display panel to spatially encode the input data, operational kernels, and decoding masks has simplified the architecture as well as reduced the cost and complexity.
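
    For readers unfamiliar with base −2, the short sketch below converts integers to and from unsigned negabinary form in software; it only illustrates the number system, not the optical spatial encoding used in the paper:

        def to_negabinary(n: int) -> str:
            """Unsigned negabinary (base -2) digits of an integer, most significant first."""
            if n == 0:
                return "0"
            digits = []
            while n != 0:
                n, r = divmod(n, -2)
                if r < 0:          # keep each digit in {0, 1}
                    n += 1
                    r += 2
                digits.append(str(r))
            return "".join(reversed(digits))

        def from_negabinary(s: str) -> int:
            return sum(int(d) * (-2) ** i for i, d in enumerate(reversed(s)))

        # e.g. to_negabinary(6) == "11010", since 16 - 8 - 2 = 6,
        # and  from_negabinary("11010") == 6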

  13. Open-WiSe: a solar powered wireless sensor network platform.

    PubMed

    González, Apolinar; Aquino, Raúl; Mata, Walter; Ochoa, Alberto; Saldaña, Pedro; Edwards, Arthur

    2012-01-01

    Because battery-powered nodes are required in wireless sensor networks and energy consumption represents an important design consideration, alternate energy sources are needed to provide more effective and optimal function. The main goal of this work is to present an energy harvesting wireless sensor network platform, the Open Wireless Sensor node (WiSe). The design and implementation of the solar powered wireless platform is described including the hardware architecture, firmware, and a POSIX Real-Time Kernel. A sleep and wake up strategy was implemented to prolong the lifetime of the wireless sensor network. This platform was developed as a tool for researchers investigating Wireless sensor network or system integrators.

  14. A Biologically-Inspired Neural Network Architecture for Image Processing

    DTIC Science & Technology

    1990-12-01

    was organized into twelve groups of 8-by-8 node arrays. Weights were constrained for each group of nodes, with each node "viewing" a 5-by-5 pixel... single window... [The remainder of this excerpt is an OCR-garbled C fragment that accumulates one 64-element block of the first layer, approximately: for (j = 0; j < 64; j++) { sum = sum + xbor[i][j] * rfarray[j]; } /* finished calculating one block, one position (first layer) */]

  15. Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh

    Optimizing applications simultaneously for energy and performance is a complex problem. High-performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality, and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy-efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole-node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.

  16. GPU-accelerated non-uniform fast Fourier transform-based compressive sensing spectral domain optical coherence tomography.

    PubMed

    Xu, Daguang; Huang, Yong; Kang, Jin U

    2014-06-16

    We implemented the graphics processing unit (GPU) accelerated compressive sensing (CS) non-uniform in k-space spectral domain optical coherence tomography (SD OCT). The Kaiser-Bessel (KB) function and the Gaussian function are used independently as the convolution kernel in the gridding-based non-uniform fast Fourier transform (NUFFT) algorithm, with different oversampling ratios and kernel widths. Our implementation is compared with the GPU-accelerated modified non-uniform discrete Fourier transform (MNUDFT) matrix-based CS SD OCT and the GPU-accelerated fast Fourier transform (FFT)-based CS SD OCT. It was found that our implementation has performance comparable to the GPU-accelerated MNUDFT-based CS SD OCT in terms of image quality while providing more than 5 times speed enhancement. When compared to the GPU-accelerated FFT-based CS SD OCT, it shows smaller background noise and fewer side lobes while eliminating the need for the cumbersome k-space grid filling and the k-linear calibration procedure. Finally, we demonstrated that by using a conventional desktop computer architecture having three GPUs, real-time B-mode imaging can be obtained in excess of 30 fps for the GPU-accelerated NUFFT-based CS SD OCT with a frame size of 2,048 (axial) × 1,000 (lateral).
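
    The gridding step at the heart of such a NUFFT is sketched below with a Gaussian kernel (a simplified, CPU-side illustration; the paper's KB-kernel GPU implementation, oversampling choices, and deapodization step are not reproduced):

        import numpy as np

        def gridded_nufft(k_pos, samples, n_grid, kernel_width=4, oversamp=2.0, sigma=0.6):
            """Gridding-based NUFFT: convolve nonuniform k-space samples onto a uniform
            grid with a Gaussian kernel, then take an FFT to obtain a depth profile.

            k_pos   : sample positions normalized to [0, 1)
            samples : complex spectral samples at those positions
            """
            N = int(n_grid * oversamp)
            grid = np.zeros(N, dtype=complex)
            centers = k_pos * N
            for c, s in zip(centers, samples):
                base = int(np.floor(c)) - kernel_width // 2
                for j in range(kernel_width + 1):
                    idx = (base + j) % N
                    w = np.exp(-((base + j - c) ** 2) / (2 * sigma ** 2))  # Gaussian weight
                    grid[idx] += w * s
            return np.fft.ifft(grid)[:n_grid]   # deapodization omitted for brevity

        # usage (synthetic samples): k = np.sort(np.random.rand(600)); y = np.exp(2j * np.pi * 50 * k)
        # profile = np.abs(gridded_nufft(k, y, n_grid=512))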

  17. Automatic Thread-Level Parallelization in the Chombo AMR Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christen, Matthias; Keen, Noel; Ligocki, Terry

    2011-05-26

    The increasing on-chip parallelism has some substantial implications for HPC applications. Currently, hybrid programming models (typically MPI+OpenMP) are employed for mapping software to the hardware in order to leverage the hardware's architectural features. In this paper, we present an approach that automatically introduces thread-level parallelism into Chombo, a parallel adaptive mesh refinement framework for finite difference type PDE solvers. In Chombo, core algorithms are specified in ChomboFortran, a macro language extension to F77 that is part of the Chombo framework. This domain-specific language forms an already used target language for an automatic migration of the large number of existing algorithms into a hybrid MPI+OpenMP implementation. It also provides access to the auto-tuning methodology that enables tuning certain aspects of an algorithm to hardware characteristics. Performance measurements are presented for a few of the most relevant kernels with respect to a specific application benchmark using this technique, as well as benchmark results for the entire application. The kernel benchmarks show that, using auto-tuning, up to a factor of 11 in performance was gained with 4 threads with respect to the serial reference implementation.

  18. Prediction of cis/trans isomerization in proteins using PSI-BLAST profiles and secondary structure information.

    PubMed

    Song, Jiangning; Burrage, Kevin; Yuan, Zheng; Huber, Thomas

    2006-03-09

    The majority of peptide bonds in proteins are found to occur in the trans conformation. However, for proline residues, a considerable fraction of prolyl peptide bonds adopt the cis form. Proline cis/trans isomerization is known to play a critical role in protein folding, splicing, cell signaling and transmembrane active transport. Accurate prediction of proline cis/trans isomerization in proteins would have many important applications towards the understanding of protein structure and function. In this paper, we propose a new approach to predict the proline cis/trans isomerization in proteins using a support vector machine (SVM). The preliminary results indicated that using Radial Basis Function (RBF) kernels could lead to better prediction performance than that of polynomial and linear kernel functions. We used single sequence information of different local window sizes, amino acid compositions of different local sequences, multiple sequence alignment obtained from PSI-BLAST and the secondary structure information predicted by PSIPRED. We explored these different sequence encoding schemes in order to investigate their effects on the prediction performance. The training and testing of this approach was performed on a newly enlarged dataset of 2424 non-homologous proteins determined by the X-ray diffraction method using 5-fold cross-validation. Selecting a window size of 11 provided the best performance for determining the proline cis/trans isomerization based on the single amino acid sequence. It was found that using multiple sequence alignments in the form of PSI-BLAST profiles could significantly improve the prediction performance: the prediction accuracy increased from 62.8% with the single sequence to 69.8%, and the Matthews Correlation Coefficient (MCC) improved from 0.26 with the single local sequence to 0.40. Furthermore, if coupled with the secondary structure information predicted by PSIPRED, our method yielded a prediction accuracy of 71.5% and an MCC of 0.43, which are 9% and 0.17 higher, respectively, than the accuracy achieved based on the single sequence information. A new method has been developed to predict the proline cis/trans isomerization in proteins based on a support vector machine, which used the single amino acid sequence with different local window sizes, the amino acid compositions of the local sequences flanking the central proline residues, the position-specific scoring matrices (PSSMs) extracted by PSI-BLAST and the predicted secondary structures generated by PSIPRED. The successful application of the SVM approach in this study reinforced that SVM is a powerful tool in predicting proline cis/trans isomerization in proteins and in biological sequence analysis.
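
    As a schematic of the classification setup (not the authors' full feature pipeline), the sketch below one-hot encodes a local sequence window around each proline and evaluates an RBF-kernel SVM with 5-fold cross-validation on fabricated labels; the window size, hyperparameters, and toy data are illustrative only.

```python
# Hedged sketch: RBF-kernel SVM on one-hot encodings of an 11-residue window.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

AMINO_ACIDS = 'ACDEFGHIKLMNPQRSTVWY'
AA_INDEX = {a: i for i, a in enumerate(AMINO_ACIDS)}

def encode_window(window):
    """One-hot encode a sequence window (e.g. 11 residues -> 11*20 features)."""
    x = np.zeros(len(window) * len(AMINO_ACIDS))
    for pos, aa in enumerate(window):
        if aa in AA_INDEX:
            x[pos * len(AMINO_ACIDS) + AA_INDEX[aa]] = 1.0
    return x

# Toy data: random 11-residue windows labelled cis (1) / trans (0).
rng = np.random.default_rng(0)
windows = [''.join(rng.choice(list(AMINO_ACIDS), 11)) for _ in range(200)]
X = np.array([encode_window(w) for w in windows])
y = rng.integers(0, 2, size=len(windows))

clf = SVC(kernel='rbf', C=1.0, gamma='scale')
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation
print('mean CV accuracy: %.3f' % scores.mean())
```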

  19. Impact of windows and daylight exposure on overall health and sleep quality of office workers: a case-control pilot study.

    PubMed

    Boubekri, Mohamed; Cheung, Ivy N; Reid, Kathryn J; Wang, Chia-Hui; Zee, Phyllis C

    2014-06-15

    This research examined the impact of daylight exposure on the health of office workers from the perspective of subjective well-being and sleep quality as well as actigraphy measures of light exposure, activity, and sleep-wake patterns. Participants (N = 49) included 27 workers working in windowless environments and 22 comparable workers in workplaces with significantly more daylight. Windowless environment is defined as one without any windows or one where workstations were far away from windows and without any exposure to daylight. Well-being of the office workers was measured by Short Form-36 (SF-36), while sleep quality was measured by Pittsburgh Sleep Quality Index (PSQI). In addition, a subset of participants (N = 21; 10 workers in windowless environments and 11 workers in workplaces with windows) had actigraphy recordings to measure light exposure, activity, and sleep-wake patterns. Workers in windowless environments reported poorer scores than their counterparts on two SF-36 dimensions--role limitation due to physical problems and vitality--as well as poorer overall sleep quality from the global PSQI score and the sleep disturbances component of the PSQI. Compared to the group without windows, workers with windows at the workplace had more light exposure during the workweek, a trend toward more physical activity, and longer sleep duration as measured by actigraphy. We suggest that architectural design of office environments should place more emphasis on sufficient daylight exposure of the workers in order to promote office workers' health and well-being.

  20. Assessment and forensic application of laser-induced breakdown spectroscopy (LIBS) for the discrimination of Australian window glass.

    PubMed

    El-Deftar, Moteaa M; Speers, Naomi; Eggins, Stephen; Foster, Simon; Robertson, James; Lennard, Chris

    2014-08-01

    A commercially available laser-induced breakdown spectroscopy (LIBS) instrument was evaluated for the determination of elemental composition of twenty Australian window glass samples, consisting of 14 laminated samples and 6 non-laminated samples (or not otherwise specified) collected from broken windows at crime scenes. In this study, the LIBS figures of merit were assessed in terms of accuracy, limits of detection and precision using three standard reference materials (NIST 610, 612, and 1831). The discrimination potential of LIBS was compared to that obtained using laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), X-ray microfluorescence spectroscopy (μXRF) and scanning electron microscopy energy dispersive X-ray spectrometry (SEM-EDX) for the analysis of architectural window glass samples collected from crime scenes in the Canberra region, Australia. Pairwise comparisons were performed using a three-sigma rule, two-way ANOVA and Tukey's HSD test at 95% confidence limit in order to investigate the discrimination power for window glass analysis. The results show that the elemental analysis of glass by LIBS provides a discrimination power greater than 97% (>98% when combined with refractive index data), which was comparable to the discrimination powers obtained by LA-ICP-MS and μXRF. These results indicate that LIBS is a feasible alternative to the more expensive LA-ICP-MS and μXRF options for the routine forensic analysis of window glass samples. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
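
    To make the pairwise comparison step concrete, here is a small, self-contained sketch of a three-sigma interval comparison on replicate elemental measurements; the element ratios and replicate counts are fabricated, and it only loosely illustrates the rule rather than the study's full statistical workflow (which also used two-way ANOVA and Tukey's HSD).

```python
# Two glass samples are called "indistinguishable" if their mean +/- 3*SD
# intervals overlap for every element considered.
import numpy as np

def three_sigma_discriminated(a, b):
    """a, b: replicate measurement arrays (n_reps x n_elements) for two samples."""
    lo_a, hi_a = a.mean(0) - 3 * a.std(0, ddof=1), a.mean(0) + 3 * a.std(0, ddof=1)
    lo_b, hi_b = b.mean(0) - 3 * b.std(0, ddof=1), b.mean(0) + 3 * b.std(0, ddof=1)
    overlap = (lo_a <= hi_b) & (lo_b <= hi_a)   # per-element interval overlap
    return not overlap.all()                    # discriminated if any element differs

rng = np.random.default_rng(1)
glass1 = rng.normal(loc=[1.0, 0.20, 0.05], scale=0.01, size=(5, 3))
glass2 = rng.normal(loc=[1.0, 0.26, 0.05], scale=0.01, size=(5, 3))
print('discriminated:', three_sigma_discriminated(glass1, glass2))
```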

  1. ORNL Cray X1 evaluation status report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, P.K.; Alexander, R.A.; Apra, E.

    2004-05-01

    On August 15, 2002 the Department of Energy (DOE) selected the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL) to deploy a new scalable vector supercomputer architecture for solving important scientific problems in climate, fusion, biology, nanoscale materials and astrophysics. "This program is one of the first steps in an initiative designed to provide U.S. scientists with the computational power that is essential to 21st century scientific leadership," said Dr. Raymond L. Orbach, director of the department's Office of Science. In FY03, CCS procured a 256-processor Cray X1 to evaluate the processors, memory subsystem, scalability of the architecture, and software environment, and to predict the expected sustained performance on key DOE applications codes. The results of the micro-benchmarks and kernel benchmarks show the architecture of the Cray X1 to be exceptionally fast for most operations. The best results are shown on large problems, where it is not possible to fit the entire problem into the cache of the processors. These large problems are exactly the types of problems that are important for the DOE and ultra-scale simulation. Application performance is found to be markedly improved by this architecture: - Large-scale simulations of high-temperature superconductors run 25 times faster than on an IBM Power4 cluster using the same number of processors. - Best performance of the parallel ocean program (POP v1.4.3) is 50 percent higher than on Japan's Earth Simulator and 5 times higher than on an IBM Power4 cluster. - A fusion application, global GYRO transport, was found to be 16 times faster on the X1 than on an IBM Power3. The increased performance allowed simulations to fully resolve questions raised by a prior study. - The transport kernel in the AGILE-BOLTZTRAN astrophysics code runs 15 times faster than on an IBM Power4 cluster using the same number of processors. - Molecular dynamics simulations related to the phenomenon of photon echo run 8 times faster than previously achieved. Even at 256 processors, the Cray X1 system is already outperforming other supercomputers with thousands of processors for a certain class of applications such as climate modeling and some fusion applications. This evaluation is the outcome of a number of meetings with both high-performance computing (HPC) system vendors and application experts over the past 9 months and has received broad-based support from the scientific community and other agencies.

  2. Two-dimensional high efficiency thin-film silicon solar cells with a lateral light trapping architecture.

    PubMed

    Fang, Jia; Liu, Bofei; Zhao, Ying; Zhang, Xiaodan

    2014-08-22

    Introducing light trapping structures into thin-film solar cells has the potential to enhance their solar energy harvesting as well as the performance of the cells; however, current strategies have focused mainly on harvesting photons without considering the light re-escaping from cells in two dimensions. Indeed, the lateral out-coupling of solar energy from the marginal areas of cells reduces the electrical yield. We therefore herein propose a lateral light trapping structure (LLTS) as a means of improving the light-harvesting capacity and performance of cells, achieving a 13.07% initial efficiency and a greatly improved current output of an a-Si:H single-junction solar cell based on this architecture. Given the unique transparency characteristics of thin-film solar cells, this proposed architecture has great potential for integration into the windows of buildings, microelectronics and other applications requiring transparent components.

  3. The NIST Real-Time Control System (RCS): A Reference Model Architecture for Computational Intelligence

    NASA Technical Reports Server (NTRS)

    Albus, James S.

    1996-01-01

    The Real-time Control System (RCS) developed at NIST and elsewhere over the past two decades defines a reference model architecture for design and analysis of complex intelligent control systems. The RCS architecture consists of a hierarchically layered set of functional processing modules connected by a network of communication pathways. The primary distinguishing feature of the layers is the bandwidth of the control loops. The characteristic bandwidth of each level is determined by the spatial and temporal integration window of filters, the temporal frequency of signals and events, the spatial frequency of patterns, and the planning horizon and granularity of the planners that operate at each level. At each level, tasks are decomposed into sequential subtasks, to be performed by cooperating sets of subordinate agents. At each level, signals from sensors are filtered and correlated with spatial and temporal features that are relevant to the control function being implemented at that level.

  4. Centralized Monitoring of the Microsoft Windows-based computers of the LHC Experiment Control Systems

    NASA Astrophysics Data System (ADS)

    Varela Rodriguez, F.

    2011-12-01

    The control system of each of the four major Experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise, identify errors and troubleshoot such a large system. Although the monitoring of the performance of the Linux computers and their processes has been available since the first versions of the tool, it is only recently that the software package has been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and the functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the Experiments and it has already proven to be very efficient at optimizing the running systems and detecting misbehaving processes or nodes.
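
    For readers unfamiliar with WMI, the sketch below shows how a handful of WMI classes can be polled from Python using the third-party `wmi` package; the node name, credentials, and chosen classes are placeholders, and this is not the CERN tool or its interface to the SCADA layer.

```python
# Illustrative WMI polling sketch; requires a Windows node with pywin32 and the
# `wmi` package installed. Host and credentials below are hypothetical.
import wmi

def poll_node(host, user, password):
    conn = wmi.WMI(computer=host, user=user, password=password)
    # Logical disks: report free space so a monitoring layer could raise alarms.
    for disk in conn.Win32_LogicalDisk(DriveType=3):
        free_gb = int(disk.FreeSpace) / 1e9
        print('%s %s: %.1f GB free' % (host, disk.DeviceID, free_gb))
    # Running processes: detect misbehaving or missing control-system processes.
    names = [p.Name for p in conn.Win32_Process()]
    print('%s: %d processes running' % (host, len(names)))

if __name__ == '__main__':
    poll_node('pcw-node-01', 'monitor_user', 'secret')   # hypothetical node
```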

  5. JPARSS: A Java Parallel Network Package for Grid Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jie; Akers, Walter; Chen, Ying

    2002-03-01

    The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, because the TCP window size must be tuned to improve bandwidth and reduce latency on a high speed wide area network. This paper presents a Java package called JPARSS (Java Parallel Secure Stream (Socket)) that divides data into partitions that are sent over several parallel Java streams simultaneously and allows Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. This package enables single sign-on, certificate delegation and secure or plain-text data transfer using several security components based on X.509 certificates and SSL. Several experiments will be presented to show that using Java parallel streams is more effective than tuning the TCP window size. In addition, a simple architecture using Web services

  6. Kernel abortion in maize : I. Carbohydrate concentration patterns and Acid invertase activity of maize kernels induced to abort in vitro.

    PubMed

    Hanft, J M; Jones, R J

    1986-06-01

    Kernels cultured in vitro were induced to abort by high temperature (35 degrees C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35 degrees C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth.

  7. Multi-PON access network using a coarse AWG for smooth migration from TDM to WDM PON

    NASA Astrophysics Data System (ADS)

    Shachaf, Y.; Chang, C.-H.; Kourtessis, P.; Senior, J. M.

    2007-06-01

    An interoperable access network architecture based on a coarse array waveguide grating (AWG) is described, displaying dynamic wavelength assignment to manage the network load across multiple PONs. The multi-PON architecture utilizes coarse Gaussian channels of an AWG to facilitate scalability and a smooth migration path between TDM and WDM PONs. Network simulations of a cross-operational protocol platform confirmed successful routing of individual PON clusters through 7 nm-wide passband windows of the AWG. Furthermore, the polarization-dependent wavelength shift and phase errors of the device proved not to restrict the routing performance. Optical transmission tests at 2.5 Gbit/s for distances up to 20 km are demonstrated.

  8. High-performance reconfigurable coincidence counting unit based on a field programmable gate array.

    PubMed

    Park, Byung Kwon; Kim, Yong-Su; Kwon, Osung; Han, Sang-Wook; Moon, Sung

    2015-05-20

    We present a high-performance reconfigurable coincidence counting unit (CCU) using a low-end field programmable gate array (FPGA) and peripheral circuits. Because of the flexibility guaranteed by the FPGA program, we can easily change system parameters, such as internal input delays, coincidence configurations, and the coincidence time window. In spite of a low-cost implementation, the proposed CCU architecture outperforms previous ones in many aspects: it has 8 logic inputs and 4 coincidence outputs that can measure up to eight-fold coincidences. The minimum coincidence time window and the maximum input frequency are 0.47 ns and 163 MHz, respectively. The CCU will be useful in various experimental research areas, including the field of quantum optics and quantum information.

  9. Protect sensitive data with lightweight memory encryption

    NASA Astrophysics Data System (ADS)

    Zhou, Hongwei; Yuan, Jinhui; Xiao, Rui; Zhang, Kai; Sun, Jingyao

    2018-04-01

    Since current commercial processors are not able to operate on data in ciphertext, sensitive data have to be exposed in memory in plain text, which leaves a window of opportunity for an adversary. To protect the sensitive data, a direct idea is to encrypt the data when the processor does not access them. Based on this observation, we have developed a lightweight memory encryption scheme, called LeMe, to protect the sensitive data in an application. LeMe marks the sensitive data in memory using page table entries and encrypts the data when they are not in use. LeMe is built on Linux with a 3.17.6 kernel and provides four user interfaces as a dynamic link library. Our evaluations show that LeMe is effective in protecting the sensitive data and incurs an acceptable performance overhead.

  10. Porting plasma physics simulation codes to modern computing architectures using the libmrc framework

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Abbott, Stephen

    2015-11-01

    Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both using more cores and having more parallelism in each core, e.g. in GPUs and Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source libmrc framework that has been used to modularize and port three plasma physics codes: the extended MHD code MRCv3 with implicit time integration and curvilinear grids; the OpenGGCM global magnetosphere model; and the particle-in-cell code PSC. libmrc consolidates basic functionality needed for simulations based on structured grids (I/O, load balancing, time integrators), and also introduces a parallel object model that makes it possible to maintain multiple implementations of computational kernels, on e.g. conventional processors and GPUs. It handles data layout conversions and enables us to port performance-critical parts of a code to a new architecture step-by-step, while the rest of the code can remain unchanged. We will show examples of the performance gains and some physics applications.

  11. 7 CFR 810.602 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of flaxseed kernels that are badly ground-damaged, badly weather... instructions. Also, underdeveloped, shriveled, and small pieces of flaxseed kernels removed in properly... recleaning. (c) Heat-damaged kernels. Kernels and pieces of flaxseed kernels that are materially discolored...

  12. Kernel Abortion in Maize 1

    PubMed Central

    Hanft, Jonathan M.; Jones, Robert J.

    1986-01-01

    Kernels cultured in vitro were induced to abort by high temperature (35°C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35°C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth. PMID:16664846

  13. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.
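
    The sketch below illustrates one simple regression-style out-of-sample extension under stated assumptions: each column of a learned nonparametric kernel matrix is regressed onto a base RBF kernel with kernel ridge regression, so the learned kernel can be approximately evaluated at unseen points. It is a generic illustration, not the authors' hyper-RKHS formulation, and the data, bandwidth, and regularization values are arbitrary.

```python
# Regression-based extension of a nonparametric kernel matrix to new points.
import numpy as np

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def extend_kernel(X_train, K_learned, X_new, gamma=0.5, lam=1e-3):
    """Return approximate K(x_new, x_train) for a nonparametric kernel matrix."""
    K_base = rbf(X_train, X_train, gamma)
    alpha = np.linalg.solve(K_base + lam * np.eye(len(X_train)), K_learned)
    return rbf(X_new, X_train, gamma) @ alpha

# Toy check: if the "learned" kernel actually equals the base kernel, the
# extension should closely reproduce the true kernel values at new points.
rng = np.random.default_rng(0)
X_tr, X_te = rng.normal(size=(50, 3)), rng.normal(size=(10, 3))
K_learned = rbf(X_tr, X_tr)
approx = extend_kernel(X_tr, K_learned, X_te)
print(np.abs(approx - rbf(X_te, X_tr)).max())
```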

  14. 7 CFR 810.1202 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... kernels. Kernels, pieces of rye kernels, and other grains that are badly ground-damaged, badly weather.... Also, underdeveloped, shriveled, and small pieces of rye kernels removed in properly separating the...-damaged kernels. Kernels, pieces of rye kernels, and other grains that are materially discolored and...

  15. The Genetic Basis of Natural Variation in Kernel Size and Related Traits Using a Four-Way Cross Population in Maize.

    PubMed

    Chen, Jiafa; Zhang, Luyan; Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang

    2016-01-01

    Kernel size is an important component of grain yield in maize breeding programs. To extend the understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off of kernel size and yield components was discussed.

  16. The Genetic Basis of Natural Variation in Kernel Size and Related Traits Using a Four-Way Cross Population in Maize

    PubMed Central

    Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang

    2016-01-01

    Kernel size is an important component of grain yield in maize breeding programs. To extend the understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off of kernel size and yield components was discussed. PMID:27070143

  17. Glass-windowed ultrasound transducers.

    PubMed

    Yddal, Tostein; Gilja, Odd Helge; Cochran, Sandy; Postema, Michiel; Kotopoulis, Spiros

    2016-05-01

    In research and industrial processes, it is increasingly common practice to combine multiple measurement modalities. Nevertheless, experimental tools that allow the co-linear combination of optical and ultrasonic transmission have rarely been reported. The aim of this study was to develop and characterise a water-matched ultrasound transducer architecture using standard components, with a central optical window larger than 10 mm in diameter allowing for optical transmission. The window can be used to place illumination or imaging apparatus such as light guides, miniature cameras, or microscope objectives, simplifying experimental setups. Four design variations of a basic architecture were fabricated and characterised with the objective of assessing whether the variations influence the acoustic output. The basic architecture consisted of a piezoelectric ring and a glass disc, with an aluminium casing. The designs differed in piezoelectric element dimensions: inner diameter, ID=10 mm, outer diameter, OD=25 mm, thickness, TH=4 mm or ID=20 mm, OD=40 mm, TH=5 mm; glass disc dimensions OD=20-50 mm, TH=2-4 mm; and details of assembly. The transducers' frequency responses were characterised using electrical impedance spectroscopy and pulse-echo measurements, the acoustic propagation pattern using acoustic pressure field scans, the acoustic power output using radiation force balance measurements, and the acoustic pressure using a needle hydrophone. Depending on the design and piezoelectric element dimensions, the resonance frequency was in the range 350-630 kHz, the -6 dB bandwidth was in the range 87-97%, acoustic output power exceeded 1 W, and acoustic pressure exceeded 1 MPa peak-to-peak. 3D stress simulations were performed to predict the isostatic pressure required to induce material failure, together with 4D acoustic simulations. The pressure simulations indicated that specific design variations could sustain isostatic pressures up to 4.8 MPa. The acoustic simulations were able to predict the behaviour of the fabricated devices. A total of 480 simulations, varying material dimensions (piezoelectric ring ID, glass disc diameter, glass thickness) and drive frequency, indicated that the emitted acoustic profile varies nonlinearly with these parameters. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Impact of Windows and Daylight Exposure on Overall Health and Sleep Quality of Office Workers: A Case-Control Pilot Study

    PubMed Central

    Boubekri, Mohamed; Cheung, Ivy N.; Reid, Kathryn J.; Wang, Chia-Hui; Zee, Phyllis C.

    2014-01-01

    Study Objective: This research examined the impact of daylight exposure on the health of office workers from the perspective of subjective well-being and sleep quality as well as actigraphy measures of light exposure, activity, and sleep-wake patterns. Methods: Participants (N = 49) included 27 workers working in windowless environments and 22 comparable workers in workplaces with significantly more daylight. Windowless environment is defined as one without any windows or one where workstations were far away from windows and without any exposure to daylight. Well-being of the office workers was measured by Short Form-36 (SF-36), while sleep quality was measured by Pittsburgh Sleep Quality Index (PSQI). In addition, a subset of participants (N = 21; 10 workers in windowless environments and 11 workers in workplaces with windows) had actigraphy recordings to measure light exposure, activity, and sleep-wake patterns. Results: Workers in windowless environments reported poorer scores than their counterparts on two SF-36 dimensions—role limitation due to physical problems and vitality—as well as poorer overall sleep quality from the global PSQI score and the sleep disturbances component of the PSQI. Compared to the group without windows, workers with windows at the workplace had more light exposure during the workweek, a trend toward more physical activity, and longer sleep duration as measured by actigraphy. Conclusions: We suggest that architectural design of office environments should place more emphasis on sufficient daylight exposure of the workers in order to promote office workers' health and well-being. Citation: Boubekri M, Cheung IN, Reid KJ, Wang CH, Zee PC. Impact of windows and daylight exposure on overall health and sleep quality of office workers: a case-control pilot study. J Clin Sleep Med 2014;10(6):603-611. PMID:24932139

  19. Spectral decomposition of seismic data with reassigned smoothed pseudo Wigner-Ville distribution

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoyang; Liu, Tianyou

    2009-07-01

    Seismic signals are nonstationary mainly due to absorption and attenuation of seismic energy in strata. For spectral decomposition of seismic data, the conventional method using the short-time Fourier transform (STFT) limits temporal and spectral resolution by a predefined window length. The continuous-wavelet transform (CWT) uses dilation and translation of a wavelet to produce a time-scale map; however, the wavelets utilized should be orthogonal in order to obtain a satisfactory resolution. The less commonly applied Wigner-Ville distribution (WVD), although superior in energy concentration, suffers from cross-term interference (CTI) when signals are multi-component. To reduce the impact of CTI, the Cohen class of distributions uses a kernel function as a low-pass filter; nevertheless, this also weakens the energy concentration of the auto-terms. In this paper, we employ the smoothed pseudo Wigner-Ville distribution (SPWVD) with a Gaussian kernel function to reduce CTI in the time and frequency domains, and then reassign the values of the SPWVD (called the reassigned SPWVD) according to the center of gravity of the considered energy region so that the energy concentration of the distribution is maintained. We apply the method to a multi-component synthetic seismic record and compare it with STFT and CWT spectra. Two field examples reveal that the RSPWVD can potentially be applied to detect low-frequency shadows caused by hydrocarbons and to delineate the spatial distribution of abnormal geological bodies more precisely.
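
    As a rough illustration of the underlying time-frequency computation, the following sketch forms the instantaneous autocorrelation with a Gaussian lag window and Fourier transforms it, i.e. a pseudo-WVD; it omits the time-direction smoothing and the reassignment step described in the paper, and the test signal and window parameters are arbitrary.

```python
import numpy as np

def pseudo_wvd(x, nfft=None, sigma=16.0):
    """Pseudo Wigner-Ville distribution of a complex signal x with a Gaussian lag
    window (simplified sketch; no time smoothing or reassignment). Note that with
    this discretisation a tone at normalised frequency f appears near bin 2*f*nfft."""
    n = len(x)
    nfft = nfft or n
    half = nfft // 2
    lags = np.arange(-half, half)
    win = np.exp(-0.5 * (lags / sigma) ** 2)          # Gaussian kernel on the lag axis
    tfr = np.zeros((nfft, n), dtype=complex)
    for t in range(n):
        mmax = min(t, n - 1 - t, half - 1)
        m = np.arange(-mmax, mmax + 1)
        r = x[t + m] * np.conj(x[t - m]) * win[half + m]   # instantaneous autocorrelation
        tfr[m % nfft, t] = r
    return np.real(np.fft.fft(tfr, axis=0))

# Two-component test signal; in practice the input should be analytic
# (e.g. via scipy.signal.hilbert) to limit cross-terms around zero frequency.
t = np.arange(256)
x = np.exp(2j * np.pi * 0.1 * t) + np.exp(2j * np.pi * 0.3 * t)
tfr = pseudo_wvd(x)
```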

  20. Local Renyi entropic profiles of DNA sequences.

    PubMed

    Vinga, Susana; Almeida, Jonas S

    2007-10-16

    In a recent report the authors presented a new measure of continuous entropy for DNA sequences, which allows the estimation of their randomness level. The definition therein explored was based on the Rényi entropy of probability density estimation (pdf) using the Parzen's window method and applied to Chaos Game Representation/Universal Sequence Maps (CGR/USM). Subsequent work proposed a fractal pdf kernel as a more exact solution for the iterated map representation. This report extends the concepts of continuous entropy by defining DNA sequence entropic profiles using the new pdf estimations to refine the density estimation of motifs. The new methodology enables two results. On the one hand it shows that the entropic profiles are directly related with the statistical significance of motifs, allowing the study of under and over-representation of segments. On the other hand, by spanning the parameters of the kernel function it is possible to extract important information about the scale of each conserved DNA region. The computational applications, developed in Matlab m-code, the corresponding binary executables and additional material and examples are made publicly available at http://kdbio.inesc-id.pt/~svinga/ep/. The ability to detect local conservation from a scale-independent representation of symbolic sequences is particularly relevant for biological applications where conserved motifs occur in multiple, overlapping scales, with significant future applications in the recognition of foreign genomic material and inference of motif structures.
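
    To make the Parzen-window entropy idea concrete, here is a small sketch that computes the order-2 Rényi entropy of a Gaussian Parzen density estimate over CGR coordinates, using the standard closed form for the information potential; it uses an ordinary Gaussian kernel rather than the fractal kernel discussed in the report, and the sequence and bandwidth are arbitrary.

```python
import numpy as np

def renyi2_entropy(points, sigma):
    """Order-2 Renyi entropy of a Gaussian Parzen-window density estimate, using
    H2 = -log(mean of pairwise Gaussians with variance 2*sigma^2).
    points: (N, d) array, e.g. CGR coordinates."""
    n, d = points.shape
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    norm = (4.0 * np.pi * sigma ** 2) ** (d / 2.0)
    ip = np.exp(-d2 / (4.0 * sigma ** 2)).sum() / (n * n * norm)  # information potential
    return -np.log(ip)

# Toy example: CGR coordinates of a DNA string in the unit square.
corners = {'A': (0, 0), 'C': (0, 1), 'G': (1, 1), 'T': (1, 0)}
def cgr(seq):
    p, pts = np.array([0.5, 0.5]), []
    for s in seq:
        p = 0.5 * (p + np.array(corners[s], dtype=float))
        pts.append(p.copy())
    return np.array(pts)

pts = cgr('ACGTACGTTTGACA' * 10)
print(renyi2_entropy(pts, sigma=0.05))
```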

  1. Local Renyi entropic profiles of DNA sequences

    PubMed Central

    Vinga, Susana; Almeida, Jonas S

    2007-01-01

    Background In a recent report the authors presented a new measure of continuous entropy for DNA sequences, which allows the estimation of their randomness level. The definition therein explored was based on the Rényi entropy of probability density estimation (pdf) using the Parzen's window method and applied to Chaos Game Representation/Universal Sequence Maps (CGR/USM). Subsequent work proposed a fractal pdf kernel as a more exact solution for the iterated map representation. This report extends the concepts of continuous entropy by defining DNA sequence entropic profiles using the new pdf estimations to refine the density estimation of motifs. Results The new methodology enables two results. On the one hand it shows that the entropic profiles are directly related with the statistical significance of motifs, allowing the study of under and over-representation of segments. On the other hand, by spanning the parameters of the kernel function it is possible to extract important information about the scale of each conserved DNA region. The computational applications, developed in Matlab m-code, the corresponding binary executables and additional material and examples are made publicly available at http://kdbio.inesc-id.pt/~svinga/ep/. Conclusion The ability to detect local conservation from a scale-independent representation of symbolic sequences is particularly relevant for biological applications where conserved motifs occur in multiple, overlapping scales, with significant future applications in the recognition of foreign genomic material and inference of motif structures. PMID:17939871

  2. Architecture for a PACS primary diagnosis workstation

    NASA Astrophysics Data System (ADS)

    Shastri, Kaushal; Moran, Byron

    1990-08-01

    A major factor in determining the overall utility of a medical Picture Archiving and Communications (PACS) system is the functionality of the diagnostic workstation. Meyer-Ebrecht and Wendler [1] have proposed a modular picture computer architecture with high throughput, and Perry et al. [2] have defined performance requirements for radiology workstations. In order to be clinically useful, a primary diagnosis workstation must not only provide functions of current viewing systems (e.g. mechanical alternators [3,4]) such as acceptable image quality, simultaneous viewing of multiple images, and rapid switching of image banks, but must also provide a diagnostic advantage over the current systems. This includes window-level functions on any image, simultaneous display of multi-modality images, rapid image manipulation, image processing, dynamic image display (cine), electronic image archival, hardcopy generation, image acquisition, network support, and an easy user interface. Implementation of such a workstation requires an underlying hardware architecture which provides high speed image transfer channels, local storage facilities, and image processing functions. This paper describes the hardware architecture of the Siemens Diagnostic Reporting Console (DRC) which meets these requirements.

  3. 7 CFR 810.802 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of grain kernels for which standards have been established under the.... (d) Heat-damaged kernels. Kernels and pieces of grain kernels for which standards have been...

  4. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  5. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  6. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  7. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  8. Hijazi Architectural Object Library (haol)

    NASA Astrophysics Data System (ADS)

    Baik, A.; Boehm, J.

    2017-02-01

    As with many historical buildings around the world, building façades are of special interest; moreover, the details of windows, stonework, and ornaments give each historic building its individual character. Each object of these buildings must be classified in an architectural object library. Recently, a number of studies have focused on this topic in Europe and Canada. From this standpoint, the Hijazi Architectural Objects Library (HAOL) has reproduced Hijazi elements as 3D computer models, modelled using Revit Families (RFA). The HAOL is based on image surveys and point cloud data. Hijazi objects such as the Roshan and Mashrabiyah have become part of the architectural vocabulary of many Islamic cities in the Hijazi region, such as Jeddah in Saudi Arabia, and even of a number of historic Islamic cities such as Istanbul and Cairo. These architectural vocabularies are a principal source of the beauty of this heritage. However, there is a large gap in both the Islamic and the Hijazi architectural libraries when it comes to providing these unique elements. Moreover, both Islamic and Hijazi architecture contain a huge amount of information that has not yet been digitally classified according to period and style. For this reason, this paper focuses on developing Heritage BIM (HBIM) standards and the HAOL library to reduce the cost and delivery time of heritage and new projects that involve Hijazi architectural styles. Through this paper, the fundamentals of Hijazi architecture informatics are provided by developing a framework for HBIM models and standards. This framework provides schemas and critical information, for example classifying the different shapes, models, and forms of structure, construction, and ornamentation of Hijazi architecture in order to digitalize parametric building identity.

  9. Modeling and optimization by particle swarm embedded neural network for adsorption of zinc (II) by palm kernel shell based activated carbon from aqueous environment.

    PubMed

    Karri, Rama Rao; Sahu, J N

    2018-01-15

    Zn(II) is one of the common heavy-metal pollutants found in industrial effluents. Removal of such pollutants from industrial effluents can be accomplished by various techniques, among which adsorption has been found to be an efficient method. The application of adsorption is limited, however, by the high cost of adsorbents. In this regard, a low-cost adsorbent produced from palm oil kernel shell based agricultural waste is examined for its efficiency in removing Zn(II) from waste water and aqueous solution. The influence of independent process variables such as initial concentration, pH, residence time, activated carbon (AC) dosage and process temperature on the removal of Zn(II) by palm kernel shell based AC in a batch adsorption process is studied systematically. Based on the design-of-experiments matrix, 50 experimental runs are performed with each process variable within the experimental range. The optimal values of the process variables to achieve maximum removal efficiency are studied using response surface methodology (RSM) and artificial neural network (ANN) approaches. A quadratic model, consisting of first-order and second-order regression terms, is developed using analysis of variance within the RSM-CCD framework. Particle swarm optimization, a meta-heuristic optimization method, is embedded in the ANN architecture to optimize the search space of the neural network. The optimized trained neural network describes the testing and validation data well, with R² equal to 0.9106 and 0.9279, respectively. The outcomes indicate the superiority of the ANN-PSO based model predictions over the quadratic model predictions provided by RSM. Copyright © 2017 Elsevier Ltd. All rights reserved.
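
    As an illustration of embedding PSO in a neural-network architecture, the sketch below trains a tiny 5-6-1 MLP by optimizing its flattened weights with a plain global-best PSO on fabricated data; the network size, PSO coefficients, and data are placeholders and do not reproduce the study's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for the adsorption dataset (5 inputs -> 1 output).
X = rng.uniform(0, 1, size=(60, 5))
y = X @ np.array([0.5, -1.0, 2.0, 0.3, 1.5]) + 0.1 * rng.normal(size=60)

N_HIDDEN = 6
N_W = 5 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1   # weights + biases of a 5-6-1 MLP

def mlp_predict(w, X):
    W1 = w[:5 * N_HIDDEN].reshape(5, N_HIDDEN)
    b1 = w[5 * N_HIDDEN:5 * N_HIDDEN + N_HIDDEN]
    W2 = w[5 * N_HIDDEN + N_HIDDEN:5 * N_HIDDEN + 2 * N_HIDDEN]
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def mse(w):
    return np.mean((mlp_predict(w, X) - y) ** 2)

# Plain global-best PSO over the flattened network weights.
n_particles, iters = 30, 200
pos = rng.uniform(-1, 1, size=(n_particles, N_W))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print('final training MSE: %.4f' % mse(gbest))
```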

  10. Control Architecture for Robotic Agent Command and Sensing

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance; Aghazarian, Hrand; Estlin, Tara; Gaines, Daniel

    2008-01-01

    Control Architecture for Robotic Agent Command and Sensing (CARACaS) is a recent product of a continuing effort to develop architectures for controlling either a single autonomous robotic vehicle or multiple cooperating but otherwise autonomous robotic vehicles. CARACaS is potentially applicable to diverse robotic systems that could include aircraft, spacecraft, ground vehicles, surface water vessels, and/or underwater vessels. CARACaS includes an integral combination of three coupled agents: a dynamic planning engine, a behavior engine, and a perception engine. The perception and dynamic planning engines are also coupled with a memory in the form of a world model. CARACaS is intended to satisfy the need for two major capabilities essential for proper functioning of an autonomous robotic system: a capability for deterministic reaction to unanticipated occurrences and a capability for re-planning in the face of changing goals, conditions, or resources. The behavior engine incorporates the multi-agent control architecture, called CAMPOUT, described in An Architecture for Controlling Multiple Robots (NPO-30345), NASA Tech Briefs, Vol. 28, No. 11 (November 2004), page 65. CAMPOUT is used to develop behavior-composition and -coordination mechanisms. Real-time process algebra operators are used to compose a behavior network for any given mission scenario. These operators afford a capability for producing a formally correct kernel of behaviors that guarantee predictable performance. By use of a method based on multi-objective decision theory (MODT), recommendations from multiple behaviors are combined to form a set of control actions that represents their consensus. In this approach, all behaviors contribute simultaneously to the control of the robotic system in a cooperative rather than a competitive manner. This approach guarantees a solution that is good enough with respect to resolution of complex, possibly conflicting goals within the constraints of the mission to be accomplished by the vehicle(s).

  11. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    PubMed

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
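
    Below is a minimal sketch of the weighted multiple-kernel ELM idea, assuming fixed kernel weights and hyperparameters (in the paper these are optimized by QPSO): a composite kernel is formed as a weighted sum of base kernels and the kernel ELM output weights are obtained as beta = (I/C + K)^-1 T. The toy data are placeholders for e-nose features.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_kernel(A, B, degree=2, c=1.0):
    return (A @ B.T + c) ** degree

def wmk_elm_fit(X, T, weights, C=10.0, gamma=0.5):
    """Composite kernel = weighted sum of RBF and polynomial kernels; returns the
    kernel-ELM output weights beta = (I/C + K)^-1 T."""
    K = weights[0] * rbf_kernel(X, X, gamma) + weights[1] * poly_kernel(X, X)
    return np.linalg.solve(np.eye(len(X)) / C + K, T)

def wmk_elm_predict(Xtest, Xtrain, beta, weights, gamma=0.5):
    Kt = weights[0] * rbf_kernel(Xtest, Xtrain, gamma) + weights[1] * poly_kernel(Xtest, Xtrain)
    return (Kt @ beta).argmax(axis=1)

# Toy two-class data; in the paper the kernel weights and hyperparameters are
# found by QPSO, here they are simply fixed.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(40, 4)), rng.normal(1.5, 1.0, size=(40, 4))])
labels = np.array([0] * 40 + [1] * 40)
T = np.eye(2)[labels]                       # one-hot targets

beta = wmk_elm_fit(X, T, weights=[0.7, 0.3])
pred = wmk_elm_predict(X, X, beta, weights=[0.7, 0.3])
print('training accuracy: %.2f' % (pred == labels).mean())
```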

  12. Classification With Truncated Distance Kernel.

    PubMed

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross validation, implying the TL1 kernel a promising nonlinear kernel for classification tasks.
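
    A minimal sketch of using a truncated-distance kernel with a standard SVM via a precomputed kernel matrix follows, assuming the form K(x, y) = max(rho - ||x - y||_1, 0); the choice of rho and the toy data are illustrative, not taken from the paper's experiments.

```python
import numpy as np
from sklearn.svm import SVC

def tl1_kernel(A, B, rho):
    d1 = np.abs(A[:, None, :] - B[None, :, :]).sum(-1)   # pairwise L1 distances
    return np.maximum(rho - d1, 0.0)                      # truncated-distance kernel

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (np.abs(X[:, 0]) + np.abs(X[:, 1]) < 1.0).astype(int)   # diamond-shaped class

rho = 0.7 * X.shape[1]          # rho on the order of the input dimension (illustrative)
K_train = tl1_kernel(X, X, rho)
clf = SVC(kernel='precomputed', C=1.0).fit(K_train, y)

# Prediction uses the kernel between new points and all training points.
X_new = rng.uniform(-1, 1, size=(5, 2))
pred = clf.predict(tl1_kernel(X_new, X, rho))
```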

  13. Brownian motion of a nano-colloidal particle: the role of the solvent.

    PubMed

    Torres-Carbajal, Alexis; Herrera-Velarde, Salvador; Castañeda-Priego, Ramón

    2015-07-15

    Brownian motion is a feature of colloidal particles immersed in a liquid-like environment. Usually, it can be described by means of the generalised Langevin equation (GLE) within the framework of the Mori theory. In principle, all quantities that appear in the GLE can be calculated from the molecular information of the whole system, i.e., colloids and solvent molecules. In this work, by means of extensive Molecular Dynamics simulations, we study the effects of the microscopic details and the thermodynamic state of the solvent on the movement of a single nano-colloid. In particular, we consider a two-dimensional model system in which the mass and size of the colloid are two and one orders of magnitude, respectively, larger than the ones associated with the solvent molecules. The latter ones interact via a Lennard-Jones-type potential to tune the nature of the solvent, i.e., it can be either repulsive or attractive. We choose the linear momentum of the Brownian particle as the observable of interest in order to fully describe the Brownian motion within the Mori framework. We particularly focus on the colloid diffusion at different solvent densities and two temperature regimes: high and low (near the critical point) temperatures. To reach our goal, we have rewritten the GLE as a Volterra integral equation of the second kind in order to compute the memory kernel in real space. With this kernel, we evaluate the momentum-fluctuating force correlation function, which is of particular relevance since it allows us to establish when the stationarity condition has been reached. Our findings show that even at high temperatures, the details of the attractive interaction potential among solvent molecules induce important changes in the colloid dynamics. Additionally, near the critical point, the dynamical scenario becomes more complex; all the correlation functions decay slowly in an extended time window, however, the memory kernel seems to be only a function of the solvent density. Thus, the explicit inclusion of the solvent in the description of Brownian motion allows us to better understand the behaviour of the memory kernel at those thermodynamic states near the critical region without any further approximation. This information is useful to elaborate more realistic descriptions of Brownian motion that take into account the particular details of the host medium.
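
    As a sketch of the kind of computation involved (not the authors' exact scheme), the code below extracts a memory kernel from a normalized autocorrelation function by discretizing the memory equation dC/dt = -∫_0^t K(s) C(t-s) ds with a trapezoidal rule, and checks it against an analytically solvable exponential-kernel case.

```python
import numpy as np

def memory_kernel(C, dt):
    """Given C on a uniform grid with C[0] = 1, solve the Volterra equation for K."""
    n = len(C)
    Cdot = np.gradient(C, dt)
    K = np.zeros(n)
    K[0] = -2.0 * (C[1] - C[0]) / (dt ** 2 * C[0])   # from K(0) = -C''(0)/C(0)
    for i in range(1, n):
        conv = 0.5 * K[0] * C[i] + np.dot(K[1:i], C[i - 1:0:-1])
        K[i] = (-Cdot[i] / dt - conv) / (0.5 * C[0])
    return K

# Consistency check: for K(t) = w0^2 exp(-t/tau) the memory equation is equivalent
# to a damped oscillator, C(t) = exp(-t/(2 tau)) (cos(w t) + sin(w t)/(2 tau w)),
# with w = sqrt(w0^2 - 1/(4 tau^2)).
w0sq, tau, dt, n = 25.0, 0.5, 0.005, 1000
t = np.arange(n) * dt
w = np.sqrt(w0sq - 1.0 / (4 * tau ** 2))
C = np.exp(-t / (2 * tau)) * (np.cos(w * t) + np.sin(w * t) / (2 * tau * w))
K_rec = memory_kernel(C, dt)
print(K_rec[:3], w0sq * np.exp(-t[:3] / tau))   # should approximately agree
```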

  14. Genome-Wide Association Study for Identifying Loci that Affect Fillet Yield, Carcass, and Body Weight Traits in Rainbow Trout (Oncorhynchus mykiss).

    PubMed

    Gonzalez-Pena, Dianelys; Gao, Guangtu; Baranski, Matthew; Moen, Thomas; Cleveland, Beth M; Kenney, P Brett; Vallejo, Roger L; Palti, Yniv; Leeds, Timothy D

    2016-01-01

    Fillet yield (FY, %) is an economically-important trait in rainbow trout aquaculture that affects production efficiency. Despite that, FY has received little attention in breeding programs because it is difficult to measure on a large number of fish and cannot be directly measured on breeding candidates. The recent development of a high-density SNP array for rainbow trout has provided the needed tool for studying the underlying genetic architecture of this trait. A genome-wide association study (GWAS) was conducted for FY, body weight at 10 (BW10) and 13 (BW13) months post-hatching, head-off carcass weight (CAR), and fillet weight (FW) in a pedigreed rainbow trout population selectively bred for improved growth performance. The GWAS analysis was performed using the weighted single-step GBLUP method (wssGWAS). Phenotypic records of 1447 fish (1.5 kg at harvest) from 299 full-sib families in three successive generations, of which 875 fish from 196 full-sib families were genotyped, were used in the GWAS analysis. A total of 38,107 polymorphic SNPs were analyzed in a univariate model with hatch year and harvest group as fixed effects, harvest weight as a continuous covariate, and animal and common environment as random effects. A new linkage map was developed to create windows of 20 adjacent SNPs for use in the GWAS. The two windows with largest effect for FY and FW were located on chromosome Omy9 and explained only 1.0-1.5% of genetic variance, thus suggesting a polygenic architecture affected by multiple loci with small effects in this population. One window on Omy5 explained 1.4 and 1.0% of the genetic variance for BW10 and BW13, respectively. Three windows located on Omy27, Omy17, and Omy9 (same window detected for FY) explained 1.7, 1.7, and 1.0%, respectively, of genetic variance for CAR. Among the detected 100 SNPs, 55% were located directly in genes (intron and exons). Nucleotide sequences of intragenic SNPs were blasted to the Mus musculus genome to create a putative gene network. The network suggests that differences in the ability to maintain a proliferative and renewable population of myogenic precursor cells may affect variation in growth and fillet yield in rainbow trout.

  15. Genome-Wide Association Study for Identifying Loci that Affect Fillet Yield, Carcass, and Body Weight Traits in Rainbow Trout (Oncorhynchus mykiss)

    PubMed Central

    Gonzalez-Pena, Dianelys; Gao, Guangtu; Baranski, Matthew; Moen, Thomas; Cleveland, Beth M.; Kenney, P. Brett; Vallejo, Roger L.; Palti, Yniv; Leeds, Timothy D.

    2016-01-01

    Fillet yield (FY, %) is an economically-important trait in rainbow trout aquaculture that affects production efficiency. Despite that, FY has received little attention in breeding programs because it is difficult to measure on a large number of fish and cannot be directly measured on breeding candidates. The recent development of a high-density SNP array for rainbow trout has provided the needed tool for studying the underlying genetic architecture of this trait. A genome-wide association study (GWAS) was conducted for FY, body weight at 10 (BW10) and 13 (BW13) months post-hatching, head-off carcass weight (CAR), and fillet weight (FW) in a pedigreed rainbow trout population selectively bred for improved growth performance. The GWAS analysis was performed using the weighted single-step GBLUP method (wssGWAS). Phenotypic records of 1447 fish (1.5 kg at harvest) from 299 full-sib families in three successive generations, of which 875 fish from 196 full-sib families were genotyped, were used in the GWAS analysis. A total of 38,107 polymorphic SNPs were analyzed in a univariate model with hatch year and harvest group as fixed effects, harvest weight as a continuous covariate, and animal and common environment as random effects. A new linkage map was developed to create windows of 20 adjacent SNPs for use in the GWAS. The two windows with largest effect for FY and FW were located on chromosome Omy9 and explained only 1.0–1.5% of genetic variance, thus suggesting a polygenic architecture affected by multiple loci with small effects in this population. One window on Omy5 explained 1.4 and 1.0% of the genetic variance for BW10 and BW13, respectively. Three windows located on Omy27, Omy17, and Omy9 (same window detected for FY) explained 1.7, 1.7, and 1.0%, respectively, of genetic variance for CAR. Among the detected 100 SNPs, 55% were located directly in genes (intron and exons). Nucleotide sequences of intragenic SNPs were blasted to the Mus musculus genome to create a putative gene network. The network suggests that differences in the ability to maintain a proliferative and renewable population of myogenic precursor cells may affect variation in growth and fillet yield in rainbow trout. PMID:27920797

  16. Real-Time Visualization of Spacecraft Telemetry for the GLAST and LRO Missions

    NASA Technical Reports Server (NTRS)

    Stoneking, Eric T.; Shah, Neerav; Chai, Dean J.

    2010-01-01

    GlastCam and LROCam are closely-related tools developed at NASA Goddard Space Flight Center for real-time visualization of spacecraft telemetry, developed for the Gamma-Ray Large Area Space Telescope (GLAST) and Lunar Reconnaissance Orbiter (LRO) missions, respectively. Derived from a common simulation tool, they use related but different architectures to ingest real-time spacecraft telemetry and ground predicted ephemerides, and to compute and display features of special interest to each mission in its operational environment. We describe the architectures of GlastCam and LROCam, the customizations required to fit into the mission operations environment, and the features that were found to be especially useful in early operations for their respective missions. Both tools have a primary window depicting a three-dimensional Cam view of the spacecraft that may be freely manipulated by the user. The scene is augmented with fields of view, pointing constraints, and other features which enhance situational awareness. Each tool also has another "Map" window showing the spacecraft's groundtrack projected onto a map of the Earth or Moon, along with useful features such as the Sun, eclipse regions, and TDRS satellite locations. Additional windows support specialized checkout tasks. One such window shows the star tracker fields of view, with tracking window locations and the mission star catalog. This view was instrumental for GLAST in quickly resolving a star tracker mounting polarity issue; visualization made the 180-deg mismatch immediately obvious. Full access to GlastCam's source code also made possible a rapid coarse star tracker mounting calibration with some on the fly code adjustments; adding a fine grid to measure alignment offsets, and introducing a calibration quaternion which could be adjusted within GlastCam without perturbing the flight parameters. This calibration, from concept to completion, took less than half an hour. Both GlastCam and LROCam were developed in the C language, with non-proprietary support libraries, for ease of customization and portability. This no-blackboxes aspect enables engineers to adapt quickly to unforeseen circumstances in the intense operations environment. GlastCam and LROCam were installed on multiple workstations in the operations support rooms, allowing independent use by multiple subsystems, systems engineers and managers, with negligible draw on telemetry system resources.
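
    The star tracker calibration described above hinges on composing a ground-side calibration quaternion with the measured attitude without touching flight parameters. A minimal sketch of that composition is given below, assuming an [x, y, z, w] quaternion convention; the function names and the 180-degree boresight correction are illustrative, not GlastCam's actual C code.

        import numpy as np

        def quat_multiply(q1, q2):
            """Hamilton product of quaternions stored as [x, y, z, w]."""
            x1, y1, z1, w1 = q1
            x2, y2, z2, w2 = q2
            return np.array([
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2,
                w1*w2 - x1*x2 - y1*y2 - z1*z2,
            ])

        # 180-degree rotation about the boresight (z) axis, e.g. to test a
        # suspected mounting polarity error without changing flight tables
        q_cal = np.array([0.0, 0.0, np.sin(np.pi / 2), np.cos(np.pi / 2)])  # half-angle = 90 deg
        q_measured = np.array([0.0, 0.0, 0.0, 1.0])                         # identity attitude
        q_corrected = quat_multiply(q_cal, q_measured)
        print(q_corrected)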

  17. Flexible Tagged Architecture for Trustworthy Multi-core Platforms

    DTIC Science & Technology

    2015-06-01

    well as two kernel benchmarks for SHA-256 and GMAC, which are popular cryptographic standards. We compared the execution time of these benchmarks... [The remainder of this excerpt is residue from tables reporting FPGA resource utilization on the Flex fabric for the UMC, DIFT, and BC policies, along with normalized benchmark slowdowns (including sha and gmac); the tabular layout did not survive extraction.]

  18. Cooperative fault-tolerant distributed computing U.S. Department of Energy Grant DE-FG02-02ER25537 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunderam, Vaidy S.

    2007-01-09

    The Harness project has developed novel software frameworks for the execution of high-end simulations in a fault-tolerant manner on distributed resources. The H2O subsystem comprises the kernel of the Harness framework, and controls the key functions of resource management across multiple administrative domains, especially issues of access and allocation. It is based on a “pluggable” architecture that enables the aggregated use of distributed heterogeneous resources for high performance computing. The major contributions of the Harness II project result in significantly enhancing the overall computational productivity of high-end scientific applications by enabling robust, failure-resilient computations on cooperatively pooled resource collections.

  19. Implementation of High-Order Multireference Coupled-Cluster Methods on Intel Many Integrated Core Architecture.

    PubMed

    Aprà, E; Kowalski, K

    2016-03-08

    In this paper we discuss the implementation of multireference coupled-cluster formalism with singles, doubles, and noniterative triples (MRCCSD(T)), which is capable of taking advantage of the processing power of the Intel Xeon Phi coprocessor. We discuss the integration of two levels of parallelism underlying the MRCCSD(T) implementation with computational kernels designed to offload the computationally intensive parts of the MRCCSD(T) formalism to Intel Xeon Phi coprocessors. Special attention is given to the enhancement of the parallel performance by task reordering that has improved load balancing in the noniterative part of the MRCCSD(T) calculations. We also discuss aspects regarding efficient optimization and vectorization strategies.
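
    The abstract credits part of the speedup to task reordering that improves load balancing in the noniterative triples part. A common heuristic for this kind of reordering is longest-processing-time-first assignment; the sketch below illustrates that heuristic only and is not the MRCCSD(T) scheduler (task costs and names are made up).

        import heapq

        def lpt_schedule(task_costs, n_workers):
            """Assign tasks to workers, largest cost first, always to the
            currently least-loaded worker (longest-processing-time heuristic)."""
            loads = [(0.0, w) for w in range(n_workers)]
            heapq.heapify(loads)
            assignment = {w: [] for w in range(n_workers)}
            for task, cost in sorted(enumerate(task_costs), key=lambda t: -t[1]):
                load, w = heapq.heappop(loads)          # least-loaded worker
                assignment[w].append(task)
                heapq.heappush(loads, (load + cost, w))
            return assignment, max(load for load, _ in loads)

        costs = [9.0, 1.0, 7.0, 3.0, 3.0, 5.0, 2.0]     # illustrative task costs
        plan, makespan = lpt_schedule(costs, 3)
        print(plan, makespan)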

  20. Open-WiSe: A Solar Powered Wireless Sensor Network Platform

    PubMed Central

    González, Apolinar; Aquino, Raúl; Mata, Walter; Ochoa, Alberto; Saldaña, Pedro; Edwards, Arthur

    2012-01-01

    Because battery-powered nodes are required in wireless sensor networks and energy consumption represents an important design consideration, alternate energy sources are needed to provide more effective and optimal function. The main goal of this work is to present an energy harvesting wireless sensor network platform, the Open Wireless Sensor node (WiSe). The design and implementation of the solar powered wireless platform is described including the hardware architecture, firmware, and a POSIX Real-Time Kernel. A sleep and wake up strategy was implemented to prolong the lifetime of the wireless sensor network. This platform was developed as a tool for researchers investigating Wireless sensor network or system integrators. PMID:22969396

  1. ALMA Correlator Real-Time Data Processor

    NASA Astrophysics Data System (ADS)

    Pisano, J.; Amestica, R.; Perez, J.

    2005-10-01

    The design of a real-time Linux application utilizing Real-Time Application Interface (RTAI) to process real-time data from the radio astronomy correlator for the Atacama Large Millimeter Array (ALMA) is described. The correlator is a custom-built digital signal processor which computes the cross-correlation function of two digitized signal streams. ALMA will have 64 antennas with 2080 signal streams each with a sample rate of 4 giga-samples per second. The correlator's aggregate data output will be 1 gigabyte per second. The software is defined by hard deadlines with high input and processing data rates, while requiring interfaces to non real-time external computers. The designed computer system - the Correlator Data Processor or CDP, consists of a cluster of 17 SMP computers, 16 of which are compute nodes plus a master controller node all running real-time Linux kernels. Each compute node uses an RTAI kernel module to interface to a 32-bit parallel interface which accepts raw data at 64 megabytes per second in 1 megabyte chunks every 16 milliseconds. These data are transferred to tasks running on multiple CPUs in hard real-time using RTAI's LXRT facility to perform quantization corrections, data windowing, FFTs, and phase corrections for a processing rate of approximately 1 GFLOPS. Highly accurate timing signals are distributed to all seventeen computer nodes in order to synchronize them to other time-dependent devices in the observatory array. RTAI kernel tasks interface to the timing signals providing sub-millisecond timing resolution. The CDP interfaces, via the master node, to other computer systems on an external intra-net for command and control, data storage, and further data (image) processing. The master node accesses these external systems utilizing ALMA Common Software (ACS), a CORBA-based client-server software infrastructure providing logging, monitoring, data delivery, and intra-computer function invocation. The software is being developed in tandem with the correlator hardware which presents software engineering challenges as the hardware evolves. The current status of this project and future goals are also presented.
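
    As a rough illustration of the per-chunk processing chain (data windowing followed by an FFT), a minimal sketch follows. The chunk length, the Hann window, and the power-spectrum output are assumptions for illustration; the real CDP also applies quantization and phase corrections and runs under RTAI in hard real time.

        import numpy as np

        CHUNK_SAMPLES = 2 ** 18   # illustrative chunk length, not the real 1 MB layout

        def process_chunk(lags):
            """Window the correlator lag data and transform it to a power spectrum."""
            window = np.hanning(len(lags))          # data windowing to reduce leakage
            spectrum = np.fft.rfft(lags * window)   # FFT of the windowed lags
            return np.abs(spectrum) ** 2            # power spectrum

        chunk = np.random.default_rng(1).normal(size=CHUNK_SAMPLES)
        print(process_chunk(chunk)[:4])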

  2. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach

    PubMed Central

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-01-01

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radical basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202
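
    The core computation behind QWMK-ELM can be summarized as building a composite kernel from weighted base kernels and solving the standard kernel ELM system for the output weights. The sketch below shows only that core, with the QPSO search over weights and kernel parameters omitted; all function names, kernel parameters, and the toy data are illustrative assumptions.

        import numpy as np

        def gaussian_kernel(X, Y, gamma=0.5):
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def polynomial_kernel(X, Y, degree=2, c=1.0):
            return (X @ Y.T + c) ** degree

        def composite_kernel(X, Y, weights):
            """Weighted sum of base kernels; the weights would come from QPSO."""
            return weights[0] * gaussian_kernel(X, Y) + weights[1] * polynomial_kernel(X, Y)

        def kelm_train(X, T, weights, C=10.0):
            """Kernel ELM: solve (I/C + K) beta = T for the output weights."""
            K = composite_kernel(X, X, weights)
            return np.linalg.solve(np.eye(len(X)) / C + K, T)

        def kelm_predict(X_train, X_test, beta, weights):
            return composite_kernel(X_test, X_train, weights) @ beta

        rng = np.random.default_rng(2)
        X = rng.normal(size=(60, 8))                    # stand-in for e-nose feature vectors
        T = np.eye(3)[rng.integers(0, 3, size=60)]      # one-hot class targets
        w = np.array([0.7, 0.3])
        beta = kelm_train(X, T, w)
        print(kelm_predict(X, X[:5], beta, w).argmax(axis=1))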

  3. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
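
    A minimal sketch of kernel PCA with a fractional power polynomial model, retaining only components with positive eigenvalues as the paper does, is given below. The Gabor feature extraction stage is omitted, and the sign(.)|.|^d form used to keep the fractional power defined for negative inner products is an assumption, not necessarily the exact kernel form of the paper.

        import numpy as np

        def frac_poly_kernel(X, Y, d=0.8):
            """Fractional power polynomial model; sign(.)*|.|**d keeps the
            fractional power well defined for negative inner products (assumption)."""
            G = X @ Y.T
            return np.sign(G) * np.abs(G) ** d

        def kernel_pca(X, d=0.8, n_components=5):
            K = frac_poly_kernel(X, X, d)
            n = len(X)
            J = np.eye(n) - np.ones((n, n)) / n
            Kc = J @ K @ J                              # center the Gram matrix
            evals, evecs = np.linalg.eigh(Kc)
            order = np.argsort(evals)[::-1]
            evals, evecs = evals[order], evecs[:, order]
            keep = evals > 1e-10                        # keep only positive eigenvalues
            evals = evals[keep][:n_components]
            evecs = evecs[:, keep][:, :n_components]
            alphas = evecs / np.sqrt(evals)             # normalize expansion coefficients
            return Kc @ alphas                          # projections of the training data

        features = np.random.default_rng(3).normal(size=(40, 64))  # stand-in for Gabor features
        print(kernel_pca(features).shape)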

  4. A multi-label learning based kernel automatic recommendation method for support vector machine.

    PubMed

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with Support Vector Machine. So far, more attention has been paid on constructing new kernels and choosing suitable parameter values for a specific kernel function, but less on kernel selection. Furthermore, most of current kernel selection methods focus on seeking a best kernel with the highest classification accuracy via cross-validation, they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.

  5. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with Support Vector Machine. So far, more attention has been paid on constructing new kernels and choosing suitable parameter values for a specific kernel function, but less on kernel selection. Furthermore, most of current kernel selection methods focus on seeking a best kernel with the highest classification accuracy via cross-validation, they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896

  6. Assessing Server Fault Tolerance and Disaster Recovery Implementation in Thin Client Architectures

    DTIC Science & Technology

    2007-09-01

    [Extraction residue from thin-client hardware specification tables: terminals built around AMD Geode GX and AMD Geode NX 1500 processors, with 256 MB to 1 GB DDR SDRAM, 512 MB to 1 GB flash storage, and VGA-type (DB-15) video output, served by Windows 2000 Advanced Server and Windows 2003 Server hosts. The tabular layout did not survive extraction.]

  7. Naval Open Architecture Machinery Control Systems for Next Generation Integrated Power Systems

    DTIC Science & Technology

    2012-05-01

    [Extraction residue from an architecture diagram: a portable machinery control software stack with OS/RTOS adaptation middleware for operating-system portability, a machinery controller framework, and machinery, power, and ship control system services layered over the operating system (DOS, Windows, Linux, OS/2, QNX, SCO Unix, ...) and ISA-compatible computer hardware (Compaq, Dell, ...). The diagram layout did not survive extraction.]

  8. Strategic Implications of Human Exploration of Near-Earth Asteroids

    NASA Technical Reports Server (NTRS)

    Drake, Bret G.

    2011-01-01

    The current United States Space Policy [1], as articulated by the White House and later confirmed by the Congress [2], states that "[t]he extension of the human presence from low-Earth orbit to other regions of space beyond low-Earth orbit will enable missions to the surface of the Moon and missions to deep space destinations such as near-Earth asteroids and Mars." Human exploration of the Moon and Mars has been the focus of numerous exhaustive studies and planning, but missions to Near-Earth Asteroids (NEAs) have, by comparison, garnered relatively little attention in terms of mission and systems planning. This paper examines the strategic implications of human exploration of NEAs and how they can fit into the overall exploration strategy. It specifically addresses how accessible NEAs are in terms of mission duration, technologies required, and overall architecture construct. Example mission architectures utilizing different propulsion technologies such as chemical, nuclear thermal, and solar electric propulsion were formulated to determine resulting figures of merit, including the number of NEAs accessible, time of flight, mission mass, number of departure windows, and length of the launch windows. These data, in conjunction with what we currently know about these potential exploration targets (or need to know in the future), provide key insights necessary for future mission and strategic planning.

  9. High-performance image processing on the desktop

    NASA Astrophysics Data System (ADS)

    Jordan, Stephen D.

    1996-04-01

    The suitability of computers to the task of medical image visualization for the purposes of primary diagnosis and treatment planning depends on three factors: speed, image quality, and price. To be widely accepted the technology must increase the efficiency of the diagnostic and planning processes. This requires processing and displaying medical images of various modalities in real-time, with accuracy and clarity, on an affordable system. Our approach to meeting this challenge began with market research to understand customer image processing needs. These needs were translated into system-level requirements, which in turn were used to determine which image processing functions should be implemented in hardware. The result is a computer architecture for 2D image processing that is both high-speed and cost-effective. The architectural solution is based on the high-performance PA-RISC workstation with an HCRX graphics accelerator. The image processing enhancements are incorporated into the image visualization accelerator (IVX) which attaches to the HCRX graphics subsystem. The IVX includes a custom VLSI chip which has a programmable convolver, a window/level mapper, and an interpolator supporting nearest-neighbor, bi-linear, and bi-cubic modes. This combination of features can be used to enable simultaneous convolution, pan, zoom, rotate, and window/level control into 1 k by 1 k by 16-bit medical images at 40 frames/second.
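
    Of the IVX stages listed above, the window/level mapper is the simplest to illustrate: it linearly maps a chosen intensity window of a 16-bit image onto the 8-bit display range and clips everything outside it. The sketch below is a plain software version with illustrative window and level values, not the VLSI implementation.

        import numpy as np

        def window_level(image, window, level):
            """Map a 16-bit image to the 8-bit display range using window/level:
            values below level - window/2 clip to 0, above level + window/2 to 255."""
            lo = level - window / 2.0
            scaled = (image.astype(np.float32) - lo) / window * 255.0
            return np.clip(scaled, 0, 255).astype(np.uint8)

        ct = np.random.default_rng(4).integers(0, 4096, size=(1024, 1024), dtype=np.uint16)
        display = window_level(ct, window=400, level=1040)   # illustrative soft-tissue window
        print(display.dtype, display.min(), display.max())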

  10. 7 CFR 981.7 - Edible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...

  11. Kernel K-Means Sampling for Nyström Approximation.

    PubMed

    He, Li; Zhang, Hong

    2018-05-01

    A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which is shown in our work to minimize the upper bound of the matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius-norm error upper bound. Experimental results, with both the Gaussian kernel and the polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over state-of-the-art methods.
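
    A compact sketch of the Nyström construction K ≈ C pinv(W) C^T built from landmark points is shown below. For simplicity the landmarks come from ordinary k-means in input space, which only stands in for the kernel k-means sampling the paper actually proposes; the kernel choice, parameters, and data are illustrative.

        import numpy as np

        def rbf(X, Y, gamma=0.5):
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def nystrom(X, landmarks, gamma=0.5):
            """Nystrom approximation K ~= C W^+ C^T built from landmark points."""
            C = rbf(X, landmarks, gamma)              # n x m cross kernel
            W = rbf(landmarks, landmarks, gamma)      # m x m landmark kernel
            return C @ np.linalg.pinv(W) @ C.T

        def kmeans_centers(X, k, iters=50, seed=0):
            """Plain Lloyd's k-means in input space (stand-in for kernel k-means)."""
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), k, replace=False)]
            for _ in range(iters):
                labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
                centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                                    else centers[j] for j in range(k)])
            return centers

        X = np.random.default_rng(5).normal(size=(300, 10))
        K_approx = nystrom(X, kmeans_centers(X, 30))
        K_exact = rbf(X, X)
        print(np.linalg.norm(K_exact - K_approx, 'fro') / np.linalg.norm(K_exact, 'fro'))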

  12. Daylight Design for Urban Residential Planning in Poland in Regulations and in A Practice. A Comparison Study of Daylight Conditions Observed in the Four Neighbouring Residential Areas

    NASA Astrophysics Data System (ADS)

    Sokol, Natalia; Martyniuk-Peczek, Justyna

    2017-10-01

    This paper reports on the partial results of the research aiming to illustrate how an integration of daylight design into an architectural planning process can help designers to create the residential buildings in respect to the environmental issues, solar and illuminance gains, as well as, the residents’ needs and comfort. It describes how changing daylight recommendations affected the design of the block of flats regarding their orientation, the spacing, the forms, and the size of the windows in the four urban residential areas. The results of this study help to determine more precise characterization of daylight indicators useful in architectural planning.

  13. Assemble: an interactive graphical tool to analyze and build RNA architectures at the 2D and 3D levels.

    PubMed

    Jossinet, Fabrice; Ludwig, Thomas E; Westhof, Eric

    2010-08-15

    Assemble is an intuitive graphical interface to analyze, manipulate and build complex 3D RNA architectures. It provides several advanced and unique features within the framework of a semi-automated modeling process that can be performed by homology and ab initio with or without electron density maps. Those include the interactive editing of a secondary structure and a searchable, embedded library of annotated tertiary structures. Assemble helps users with performing recurrent and otherwise tedious tasks in structural RNA research. Assemble is released under an open-source license (MIT license) and is freely available at http://bioinformatics.org/assemble. It is implemented in the Java language and runs on MacOSX, Linux and Windows operating systems.

  14. Replica Exchange Molecular Dynamics in the Age of Heterogeneous Architectures

    NASA Astrophysics Data System (ADS)

    Roitberg, Adrian

    2014-03-01

    The rise of GPU-based codes has allowed MD to reach timescales only dreamed of only 5 years ago. Even within this new paradigm there is still need for advanced sampling techniques. Modern supercomputers (e.g. Blue Waters, Titan, Keeneland) have made available to users a significant number of GPUS and CPUS, which in turn translate into amazing opportunities for dream calculations. Replica-exchange based methods can optimally use tis combination of codes and architectures to explore conformational variabilities in large systems. I will show our recent work in porting the program Amber to GPUS, and the support for replica exchange methods, where the replicated dimension could be Temperature, pH, Hamiltonian, Umbrella windows and combinations of those schemes.
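
    Replica-exchange methods of the kind described rest on a Metropolis test for swapping neighboring replicas. A minimal sketch of the temperature-exchange acceptance criterion follows; the energies, temperatures, and bookkeeping are illustrative and this is not Amber's implementation.

        import numpy as np

        KB = 0.0019872041  # Boltzmann constant in kcal/(mol K)

        def attempt_swap(E_i, T_i, E_j, T_j, rng):
            """Metropolis criterion for exchanging neighboring temperature replicas:
            accept with probability min(1, exp[(1/kT_i - 1/kT_j) * (E_i - E_j)])."""
            delta = (1.0 / (KB * T_i) - 1.0 / (KB * T_j)) * (E_i - E_j)
            return delta >= 0 or rng.random() < np.exp(delta)

        rng = np.random.default_rng(6)
        print(attempt_swap(E_i=-1200.0, T_i=300.0, E_j=-1195.0, T_j=310.0, rng=rng))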

  15. Real-time data acquisition and control system for the measurement of motor and neural data

    PubMed Central

    Bryant, Christopher L.; Gandhi, Neeraj J.

    2013-01-01

    This paper outlines a powerful, yet flexible real-time data acquisition and control system for use in the triggering and measurement of both analog and digital events. Built using the LabVIEW development architecture (version 7.1) and freely available, this system provides precisely timed auditory and visual stimuli to a subject while recording analog data and timestamps of neural activity retrieved from a window discriminator. The system utilizes the most recent real-time (RT) technology in order to provide not only a guaranteed data acquisition rate of 1 kHz, but a much more difficult to achieve guaranteed system response time of 1 ms. The system interface is windows-based and easy to use, providing a host of configurable options for end-user customization. PMID:15698659

  16. Tuning the sensitivity of lanthanide-activated NIR nanothermometers in the biological windows.

    PubMed

    Cortelletti, P; Skripka, A; Facciotti, C; Pedroni, M; Caputo, G; Pinna, N; Quintanilla, M; Benayas, A; Vetrone, F; Speghini, A

    2018-02-01

    Lanthanide-activated SrF2 nanoparticles with a multishell architecture were investigated as optical thermometers in the biological windows. A ratiometric approach based on the relative changes in the intensities of different lanthanide (Nd3+ and Yb3+) NIR emissions was applied to investigate the thermometric properties of the nanoparticles. It was found that an appropriate doping with Er3+ ions can increase the thermometric properties of the Nd3+-Yb3+ coupled systems. In addition, a core containing Yb3+ and Tm3+ can generate light in the visible and UV regions upon near-infrared (NIR) laser excitation at 980 nm. The multishell structure combined with the rational choice of dopants proves to be particularly important to control and enhance the performance of nanoparticles as NIR nanothermometers.

  17. 53 W average power few-cycle fiber laser system generating soft x rays up to the water window.

    PubMed

    Rothhardt, Jan; Hädrich, Steffen; Klenke, Arno; Demmler, Stefan; Hoffmann, Armin; Gotschall, Thomas; Eidam, Tino; Krebs, Manuel; Limpert, Jens; Tünnermann, Andreas

    2014-09-01

    We report on a few-cycle laser system delivering sub-8-fs pulses with 353 μJ pulse energy and 25 GW of peak power at up to 150 kHz repetition rate. The corresponding average output power is as high as 53 W, which represents the highest average power obtained from any few-cycle laser architecture so far. The combination of both high average and high peak power provides unique opportunities for applications. We demonstrate high harmonic generation up to the water window and record-high photon flux in the soft x-ray spectral region. This tabletop source of high-photon flux soft x rays will, for example, enable coherent diffractive imaging with sub-10-nm resolution in the near future.

  18. Exploiting graph kernels for high performance biomedical relation extraction.

    PubMed

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods, when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task specific heuristic or rules. In comparison, the state of the art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule based system employing task specific post processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with APG kernel that attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM performed better than APG kernel for the BioInfer dataset, in the Area Under Curve (AUC) measure (74% vs 69%). However, for all the other PPI datasets, namely AIMed, HPRD50, IEPA and LLL, ASM is substantially outperformed by the APG kernel in F-score and AUC measures. We demonstrate a high performance Chemical Induced Disease relation extraction, without employing external knowledge sources or task specific heuristics. Our work shows that graph kernels are effective in extracting relations that are expressed in multiple sentences. We also show that the graph kernels, namely the ASM and APG kernels, substantially outperform the tree kernels. Among the graph kernels, we showed the ASM kernel as effective for biomedical relation extraction, with comparable performance to the APG kernel for datasets such as the CID-sentence level relation extraction and BioInfer in PPI. Overall, the APG kernel is shown to be significantly more accurate than the ASM kernel, achieving better performance on most datasets.

  19. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    NASA Astrophysics Data System (ADS)

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; Kalinkin, Alexander A.

    2017-02-01

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, widely used in multiphase flows and still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show that up to an 8.5x improvement at the selected kernel level was achieved with the first approach, and up to a 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.

  20. Optimization of Sparse Matrix-Vector Multiplication on Emerging Multicore Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel; Oliker, Leonid; Vuduc, Richard

    2008-10-16

    We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore-specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD quad-core, AMD dual-core, and Intel quad-core designs, the heterogeneous STI Cell, as well as one of the first scientific studies of the highly multithreaded Sun Victoria Falls (a Niagara2 SMP). We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural trade-offs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.
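
    For reference, the baseline kernel that such optimizations start from is a plain compressed sparse row (CSR) matrix-vector multiply. The sketch below is an illustrative scalar version with a tiny hand-built matrix, not one of the tuned multicore kernels from the study.

        import numpy as np

        def spmv_csr(values, col_idx, row_ptr, x):
            """y = A @ x for a matrix stored in compressed sparse row form."""
            n_rows = len(row_ptr) - 1
            y = np.zeros(n_rows)
            for i in range(n_rows):
                acc = 0.0
                for k in range(row_ptr[i], row_ptr[i + 1]):
                    acc += values[k] * x[col_idx[k]]
                y[i] = acc
            return y

        # 3x3 example matrix: [[4, 0, 1],
        #                      [0, 2, 0],
        #                      [3, 0, 5]]
        values  = np.array([4.0, 1.0, 2.0, 3.0, 5.0])
        col_idx = np.array([0, 2, 1, 0, 2])
        row_ptr = np.array([0, 2, 3, 5])
        print(spmv_csr(values, col_idx, row_ptr, np.array([1.0, 1.0, 1.0])))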

  1. Lattice Boltzmann Simulation Optimization on Leading Multicore Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel; Carter, Jonathan; Oliker, Leonid

    2008-02-01

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Clovertown, AMD Opteron X2, Sun Niagara2, and STI Cell, as well as the single-core Intel Itanium2. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 14x improvement compared with the original code. Additionally, we present detailed analysis of each optimization, which reveals surprising hardware bottlenecks and software challenges for future multicore systems and applications.

  2. Lattice Boltzmann simulation optimization on leading multicore platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, S.; Carter, J.; Oliker, L.

    2008-01-01

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Clovertown, AMD Opteron X2, Sun Niagara2, and STI Cell, as well as the single-core Intel Itanium2. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 14x improvement compared with the original code. Additionally, we present detailed analysis of each optimization, which reveals surprising hardware bottlenecks and software challenges for future multicore systems and applications.
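
    The search-based auto-tuning idea itself is simple: generate candidate parameterizations, time each one on the target machine, and keep the fastest. The toy sketch below tunes only a block size for a trivial array update; it is meant to illustrate the search loop, not the LBMHD code generator.

        import time
        import numpy as np

        def blocked_update(a, block):
            """Candidate kernel: traverse the array in blocks of a given size."""
            for start in range(0, len(a), block):
                a[start:start + block] *= 1.000001
            return a

        def autotune(candidate_blocks, n=1_000_000, repeats=3):
            """Time each candidate configuration and return the fastest one."""
            best_block, best_time = None, float('inf')
            for block in candidate_blocks:
                a = np.ones(n)
                t0 = time.perf_counter()
                for _ in range(repeats):
                    blocked_update(a, block)
                elapsed = (time.perf_counter() - t0) / repeats
                if elapsed < best_time:
                    best_block, best_time = block, elapsed
            return best_block, best_time

        print(autotune([1024, 4096, 16384, 65536]))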

  3. Global magnetohydrodynamic simulations on multiple GPUs

    NASA Astrophysics Data System (ADS)

    Wong, Un-Hong; Wong, Hon-Cheng; Ma, Yonghui

    2014-01-01

    Global magnetohydrodynamic (MHD) models play a major role in investigating the solar wind-magnetosphere interaction. However, the huge computational requirement of global MHD simulations remains the main problem to be solved. With the recent development of modern graphics processing units (GPUs) and the Compute Unified Device Architecture (CUDA), it is possible to perform global MHD simulations in a more efficient manner. In this paper, we present a global MHD simulator on multiple GPUs using CUDA 4.0 with GPUDirect 2.0. Our implementation is based on the modified leapfrog scheme, which is a combination of the leapfrog scheme and the two-step Lax-Wendroff scheme. GPUDirect 2.0 is used in our implementation to drive multiple GPUs. All data transfers and kernel processing are managed with the CUDA 4.0 API instead of using MPI or OpenMP. Performance measurements are made on a multi-GPU system with eight NVIDIA Tesla M2050 (Fermi architecture) graphics cards. These measurements show that our multi-GPU implementation achieves a peak performance of 97.36 GFLOPS in double precision.
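
    The two-step Lax-Wendroff update that the modified leapfrog scheme combines with leapfrog can be illustrated in one dimension for a single advected variable, as in the sketch below; the real solver advances the full 3D MHD state on GPUs, and the grid, time step, and initial condition here are illustrative.

        import numpy as np

        def lax_wendroff_step(u, c, dt, dx):
            """Two-step Lax-Wendroff for the advection equation u_t + c u_x = 0
            on a periodic grid: half-step values at cell faces, then a full-step update."""
            u_plus = np.roll(u, -1)                                         # u[i+1]
            u_half = 0.5 * (u + u_plus) - 0.5 * c * dt / dx * (u_plus - u)  # at i+1/2
            return u - c * dt / dx * (u_half - np.roll(u_half, 1))          # full step

        x = np.linspace(0, 1, 200, endpoint=False)
        u = np.exp(-200 * (x - 0.3) ** 2)          # initial Gaussian pulse
        for _ in range(100):
            u = lax_wendroff_step(u, c=1.0, dt=0.002, dx=x[1] - x[0])
        print(u.argmax() / len(x))                 # pulse has advected to about x = 0.5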

  4. 7 CFR 810.2202 - Definition of other terms.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kernels, foreign material, and shrunken and broken kernels. The sum of these three factors may not exceed... the removal of dockage and shrunken and broken kernels. (g) Heat-damaged kernels. Kernels, pieces of... sample after the removal of dockage and shrunken and broken kernels. (h) Other grains. Barley, corn...

  5. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  6. 7 CFR 51.1415 - Inedible kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or otherwise...

  7. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  8. Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.

    PubMed

    Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit

    2018-02-13

    Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Unconventional protein sources: apricot seed kernels.

    PubMed

    Gabrial, G N; El-Nahry, F I; Awadalla, M Z; Girgis, S M

    1981-09-01

    Hamawy apricot seed kernels (sweet), Amar apricot seed kernels (bitter) and treated Amar apricot kernels (bitterness removed) were evaluated biochemically. All kernels were found to be high in fat (42.2–50.91%), protein (23.74–25.70%) and fiber (15.08–18.02%). Phosphorus, calcium, and iron were determined in all experimental samples. The three different apricot seed kernels were used for extensive study including the qualitative determination of the amino acid constituents by acid hydrolysis, quantitative determination of some amino acids, and biological evaluation of the kernel proteins in order to use them as new protein sources. Weanling albino rats failed to grow on diets containing the Amar apricot seed kernels due to low food consumption because of their bitterness. There was no loss in weight in that case. The Protein Efficiency Ratio data and blood analysis results showed the Hamawy apricot seed kernels to be higher in biological value than treated apricot seed kernels. The Net Protein Ratio data, which account for both weight maintenance and growth, showed the treated apricot seed kernels to be higher in biological value than both Hamawy and Amar kernels. The Net Protein Ratio values for the last two kernels were nearly equal.

  10. The Genetic Architecture of Response to Long-Term Artificial Selection for Oil Concentration in the Maize Kernel

    PubMed Central

    Laurie, Cathy C.; Chasalow, Scott D.; LeDeaux, John R.; McCarroll, Robert; Bush, David; Hauge, Brian; Lai, Chaoqiang; Clark, Darryl; Rocheford, Torbert R.; Dudley, John W.

    2004-01-01

    In one of the longest-running experiments in biology, researchers at the University of Illinois have selected for altered composition of the maize kernel since 1896. Here we use an association study to infer the genetic basis of dramatic changes that occurred in response to selection for changes in oil concentration. The study population was produced by a cross between the high- and low-selection lines at generation 70, followed by 10 generations of random mating and the derivation of 500 lines by selfing. These lines were genotyped for 488 genetic markers and the oil concentration was evaluated in replicated field trials. Three methods of analysis were tested in simulations for ability to detect quantitative trait loci (QTL). The most effective method was model selection in multiple regression. This method detected ∼50 QTL accounting for ∼50% of the genetic variance, suggesting that >50 QTL are involved. The QTL effect estimates are small and largely additive. About 20% of the QTL have negative effects (i.e., not predicted by the parental difference), which is consistent with hitchhiking and small population size during selection. The large number of QTL detected accounts for the smooth and sustained response to selection throughout the twentieth century. PMID:15611182

  11. Tcl as a Software Environment for a TCS

    NASA Astrophysics Data System (ADS)

    Terrett, David L.

    2002-12-01

    This paper describes how the Tcl scripting language and C API have been used as the software environment for a telescope pointing kernel so that new pointing algorithms and software architectures can be developed and tested without needing a real-time operating system or real-time software environment. It has enabled development to continue outside the framework of a specific telescope project while continuing to build a system that is sufficiently complete to be capable of controlling real hardware but expending minimum effort on replacing the services that would normally be provided by a real-time software environment. Tcl is used as a scripting language for configuring the system at startup and then as the command interface for controlling the running system; the Tcl C language API is used to provide a system-independent interface to file and socket I/O and other operating system services. The pointing algorithms themselves are implemented as a set of C++ objects calling C library functions that implement the algorithms described in [2]. Although originally designed as a test and development environment, the system, running as a soft real-time process on Linux, has been used to test the SOAR mount control system and will be used as the pointing kernel of the SOAR telescope control system.

  12. A customizable system for real-time image processing using the Blackfin DSProcessor and the MicroC/OS-II real-time kernel

    NASA Astrophysics Data System (ADS)

    Coffey, Stephen; Connell, Joseph

    2005-06-01

    This paper presents a development platform for real-time image processing based on the ADSP-BF533 Blackfin processor and the MicroC/OS-II real-time operating system (RTOS). MicroC/OS-II is a completely portable, ROMable, pre-emptive, real-time kernel. The Blackfin Digital Signal Processors (DSPs), incorporating the Analog Devices/Intel Micro Signal Architecture (MSA), are a broad family of 16-bit fixed-point products with a dual Multiply Accumulate (MAC) core. In addition, they have a rich instruction set with variable instruction length and both DSP and MCU functionality thus making them ideal for media based applications. Using the MicroC/OS-II for task scheduling and management, the proposed system can capture and process raw RGB data from any standard 8-bit greyscale image sensor in soft real-time and then display the processed result using a simple PC graphical user interface (GUI). Additionally, the GUI allows configuration of the image capture rate and the system and core DSP clock rates thereby allowing connectivity to a selection of image sensors and memory devices. The GUI also allows selection from a set of image processing algorithms based in the embedded operating system.

  13. Multi-Target Regression via Robust Low-Rank Learning.

    PubMed

    Zhen, Xiantong; Yu, Mengyang; He, Xiaofei; Li, Shuo

    2018-02-01

    Multi-target regression has recently regained great popularity due to its capability of simultaneously learning multiple relevant regression tasks and its wide applications in data mining, computer vision and medical image analysis, while great challenges arise from jointly handling inter-target correlations and input-output relationships. In this paper, we propose Multi-layer Multi-target Regression (MMR) which enables simultaneously modeling intrinsic inter-target correlations and nonlinear input-output relationships in a general framework via robust low-rank learning. Specifically, the MMR can explicitly encode inter-target correlations in a structure matrix by matrix elastic nets (MEN); the MMR can work in conjunction with the kernel trick to effectively disentangle highly complex nonlinear input-output relationships; the MMR can be efficiently solved by a new alternating optimization algorithm with guaranteed convergence. The MMR leverages the strength of kernel methods for nonlinear feature learning and the structural advantage of multi-layer learning architectures for inter-target correlation modeling. More importantly, it offers a new multi-layer learning paradigm for multi-target regression which is endowed with high generality, flexibility and expressive ability. Extensive experimental evaluation on 18 diverse real-world datasets demonstrates that our MMR can achieve consistently high performance and outperforms representative state-of-the-art algorithms, which shows its great effectiveness and generality for multivariate prediction.

  14. Extracting physicochemical features to predict protein secondary structure.

    PubMed

    Huang, Yin-Fu; Chen, Shu-Ying

    2013-01-01

    We propose a protein secondary structure prediction method based on position-specific scoring matrix (PSSM) profiles and four physicochemical features including conformation parameters, net charges, hydrophobic, and side chain mass. First, the SVM with the optimal window size and the optimal parameters of the kernel function is found. Then, we train the SVM using the PSSM profiles generated from PSI-BLAST and the physicochemical features extracted from the CB513 data set. Finally, we use the filter to refine the predicted results from the trained SVM. For all the performance measures of our method, Q3 reaches 79.52, SOV94 reaches 86.10, and SOV99 reaches 74.60; all the measures are higher than those of the SVMpsi method and the SVMfreq method. This validates that considering these physicochemical features in predicting protein secondary structure would exhibit better performances.

  15. Extracting Physicochemical Features to Predict Protein Secondary Structure

    PubMed Central

    Chen, Shu-Ying

    2013-01-01

    We propose a protein secondary structure prediction method based on position-specific scoring matrix (PSSM) profiles and four physicochemical features including conformation parameters, net charges, hydrophobic, and side chain mass. First, the SVM with the optimal window size and the optimal parameters of the kernel function is found. Then, we train the SVM using the PSSM profiles generated from PSI-BLAST and the physicochemical features extracted from the CB513 data set. Finally, we use the filter to refine the predicted results from the trained SVM. For all the performance measures of our method, Q3 reaches 79.52, SOV94 reaches 86.10, and SOV99 reaches 74.60; all the measures are higher than those of the SVMpsi method and the SVMfreq method. This validates that considering these physicochemical features in predicting protein secondary structure would exhibit better performances. PMID:23766688
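
    A common way to feed a PSSM into an SVM of this kind is to build one feature vector per residue by concatenating the PSSM rows inside a sliding window centered on that residue. The sketch below shows that step only, with an assumed window size of 13 and zero padding at the termini; it is not the authors' feature pipeline.

        import numpy as np

        def pssm_window_features(pssm, window=13):
            """Build one feature vector per residue by concatenating the PSSM rows
            in a window centered on that residue; zero-pad at the termini."""
            half = window // 2
            padded = np.vstack([np.zeros((half, pssm.shape[1])),
                                pssm,
                                np.zeros((half, pssm.shape[1]))])
            return np.array([padded[i:i + window].ravel() for i in range(len(pssm))])

        pssm = np.random.default_rng(7).normal(size=(120, 20))  # 120 residues, 20 scores each
        X = pssm_window_features(pssm, window=13)
        print(X.shape)                                          # (120, 260): one 13x20 window per residue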

  16. Design and implementation of laser target simulator in hardware-in-the-loop simulation system based on LabWindows/CVI and RTX

    NASA Astrophysics Data System (ADS)

    Tong, Qiujie; Wang, Qianqian; Li, Xiaoyang; Shan, Bin; Cui, Xuntai; Li, Chenyu; Peng, Zhong

    2016-11-01

    In order to satisfy requirements for real-time performance and generality, a laser target simulator for a semi-physical (hardware-in-the-loop) simulation system based on an RTX + LabWindows/CVI platform is proposed in this paper. Compared with the upper/lower-computer simulation platform architecture used in most current real-time systems, this system has better maintainability and portability. The system runs on the Windows platform and uses the Windows RTX real-time extension subsystem, combined with a reflective memory network, to guarantee real-time performance for tasks such as computing the simulation model, transmitting the simulation data, and maintaining real-time communication. The real-time tasks of the simulation system run in the RTSS process. At the same time, LabWindows/CVI is used to build a graphical interface and to handle the non-real-time tasks of the simulation, such as man-machine interaction and the display and storage of the simulation data, which run in a Win32 process. Through the design of RTX shared memory and a task scheduling algorithm, data interaction between the real-time RTSS tasks and the non-real-time Win32 tasks is accomplished. The experimental results show that this system has strong real-time performance, high stability, and high simulation accuracy, together with good human-computer interaction.

  17. An introduction to kernel-based learning algorithms.

    PubMed

    Müller, K R; Mika, S; Rätsch, G; Tsuda, K; Schölkopf, B

    2001-01-01

    This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.

  18. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  19. Design of CT reconstruction kernel specifically for clinical lung imaging

    NASA Astrophysics Data System (ADS)

    Cody, Dianna D.; Hsieh, Jiang; Gladish, Gregory W.

    2005-04-01

    In this study we developed a new reconstruction kernel specifically for chest CT imaging. An experimental flat-panel CT scanner was used on large dogs to produce "ground-truth" reference chest CT images. These dogs were also examined using a clinical 16-slice CT scanner. We concluded from the dog images acquired on the clinical scanner that the loss of subtle lung structures was due mostly to the presence of the background noise texture when using currently available reconstruction kernels. This qualitative evaluation of the dog CT images prompted the design of a new recon kernel. This new kernel consisted of the combination of a low-pass and a high-pass kernel to produce a new reconstruction kernel, called the "Hybrid" kernel. The performance of this Hybrid kernel fell between the two kernels on which it was based, as expected. This Hybrid kernel was also applied to a set of 50 patient data sets; the analysis of these clinical images is underway. We are hopeful that this Hybrid kernel will produce clinical images with an acceptable tradeoff of lung detail, reliable HU, and image noise.
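
    The idea of combining a low-pass and a high-pass kernel can be illustrated as a weighted blend of two filters applied to a projection profile, as in the sketch below. The filter taps and weight are invented for illustration; the actual scanner kernels are proprietary and the study's combination method may differ.

        import numpy as np

        def hybrid_kernel(low_pass, high_pass, weight=0.5):
            """Blend a smooth and a sharp reconstruction filter into one kernel."""
            return weight * low_pass + (1.0 - weight) * high_pass

        # illustrative 1D filter kernels (not vendor kernels); both sum to 1
        low_pass  = np.array([0.05, 0.25, 0.40, 0.25, 0.05])      # smoothing
        high_pass = np.array([-0.10, -0.25, 1.70, -0.25, -0.10])  # edge enhancing
        kernel = hybrid_kernel(low_pass, high_pass, weight=0.6)

        profile = np.concatenate([np.zeros(20), np.ones(20)])     # an edge in a projection
        print(np.convolve(profile, kernel, mode='same')[18:23])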

  20. Quality changes in macadamia kernel between harvest and farm-gate.

    PubMed

    Walton, David A; Wallace, Helen M

    2011-02-01

    Macadamia integrifolia, Macadamia tetraphylla and their hybrids are cultivated for their edible kernels. After harvest, nuts-in-shell are partially dried on-farm and sorted to eliminate poor-quality kernels before consignment to a processor. During these operations, kernel quality may be lost. In this study, macadamia nuts-in-shell were sampled at five points of an on-farm postharvest handling chain from dehusking to the final storage silo to assess quality loss prior to consignment. Shoulder damage, weight of pieces and unsound kernel were assessed for raw kernels, and colour, mottled colour and surface damage for roasted kernels. Shoulder damage, weight of pieces and unsound kernel for raw kernels increased significantly between the dehusker and the final silo. Roasted kernels displayed a significant increase in dark colour, mottled colour and surface damage during on-farm handling. Significant loss of macadamia kernel quality occurred on a commercial farm during sorting and storage of nuts-in-shell before nuts were consigned to a processor. Nuts-in-shell should be dried as quickly as possible and on-farm handling minimised to maintain optimum kernel quality. 2010 Society of Chemical Industry.

  1. Application of Adjoint Method and Spectral-Element Method to Tomographic Inversion of Regional Seismological Structure Beneath Japanese Islands

    NASA Astrophysics Data System (ADS)

    Tsuboi, S.; Miyoshi, T.; Obayashi, M.; Tono, Y.; Ando, K.

    2014-12-01

    Recent progress in large-scale computing, combining waveform modeling techniques with high-performance computing facilities, has made full-waveform inversion of the three-dimensional (3D) seismological structure inside the Earth feasible. We apply the adjoint method (Liu and Tromp, 2006) to obtain the 3D structure beneath the Japanese Islands. We first implemented the Spectral-Element Method on the K-computer in Kobe, Japan, optimizing SPECFEM3D_GLOBE (Komatitsch and Tromp, 2002) with OpenMP so that the code fits the hybrid architecture of the K-computer. Using 82,134 nodes of the K-computer (657,072 cores), we can compute synthetic waveforms with about 1 s accuracy for a realistic 3D Earth model, at a sustained performance of 1.2 PFLOPS. With this optimized SPECFEM3D_GLOBE code we take one chunk of the global mesh around the Japanese Islands and compute synthetic seismograms with an accuracy of about 10 s. We use the GAP-P2 mantle tomography model (Obayashi et al., 2009) as the initial 3D model and use as many broadband seismic stations available in this region as possible for the inversion. Time windows for body waves and surface waves are then used to compute adjoint sources and calculate adjoint kernels for the seismic structure. After several iterations we obtained an improved 3D structure beneath the Japanese Islands, and the waveform misfits between observed and synthetic seismograms decrease as the iterations proceed. We are now preparing to use much shorter periods in the synthetic waveform computation and to obtain seismic structure at the basin scale, such as the Kanto basin, where there is a dense seismic network and high seismic activity. Acknowledgements: This research was partly supported by the MEXT Strategic Program for Innovative Research. We used F-net seismograms of the National Research Institute for Earth Science and Disaster Prevention.

  2. The NASA Mission Operations and Control Architecture Program

    NASA Technical Reports Server (NTRS)

    Ondrus, Paul J.; Carper, Richard D.; Jeffries, Alan J.

    1994-01-01

    The conflict between increases in space mission complexity and rapidly declining space mission budgets has created strong pressures to radically reduce the costs of designing and operating spacecraft. A key approach to achieving such reductions is through reducing the development and operations costs of the supporting mission operations systems. One of the efforts which the Communications and Data Systems Division at NASA Headquarters is using to meet this challenge is the Mission Operations Control Architecture (MOCA) project. Technical direction of this effort has been delegated to the Mission Operations Division (MOD) of the Goddard Space Flight Center (GSFC). MOCA is to develop a mission control and data acquisition architecture, and supporting standards, to guide the development of future spacecraft and mission control facilities at GSFC. The architecture will reduce the need for around-the-clock operations staffing, obtain a high level of reuse of flight and ground software elements from mission to mission, and increase overall system flexibility by enabling the migration of appropriate functions from the ground to the spacecraft. The end results are to be an established way of designing the spacecraft-ground system interface for GSFC's in-house developed spacecraft, and a specification of the end to end spacecraft control process, including data structures, interfaces, and protocols, suitable for inclusion in solicitation documents for future flight spacecraft. A flight software kernel may be developed and maintained in a condition that it can be offered as Government Furnished Equipment in solicitations. This paper describes the MOCA project, its current status, and the results to date.

  3. A new discriminative kernel from probabilistic models.

    PubMed

    Tsuda, Koji; Kawanabe, Motoaki; Rätsch, Gunnar; Sonnenburg, Sören; Müller, Klaus-Robert

    2002-10-01

    Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel, derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.
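
    For readers unfamiliar with kernels derived from probabilistic models, the toy sketch below computes Fisher-score features (gradients of the marginal log-likelihood with respect to the model means) for a one-dimensional two-component Gaussian model and forms the corresponding kernel. The TOP kernel proposed in the paper instead uses tangent vectors of the posterior log-odds; that variant is not reproduced here, and the model and data are placeholders.

```python
# Hedged toy sketch of a "kernel from a probabilistic model": Fisher-score features for a
# 1-D two-component Gaussian mixture, with the identity in place of the information matrix.
import numpy as np

def fisher_score(x, mu0, mu1, sigma=1.0, prior=0.5):
    """Gradient of log p(x) w.r.t. the two means, p(x) = prior*N(x;mu0) + (1-prior)*N(x;mu1)."""
    p0 = prior * np.exp(-0.5 * ((x - mu0) / sigma) ** 2)        # shared constants cancel
    p1 = (1 - prior) * np.exp(-0.5 * ((x - mu1) / sigma) ** 2)
    px = p0 + p1
    d_mu0 = p0 / px * (x - mu0) / sigma ** 2                    # d log p / d mu0
    d_mu1 = p1 / px * (x - mu1) / sigma ** 2                    # d log p / d mu1
    return np.stack([d_mu0, d_mu1], axis=-1)

x = np.array([-1.2, 0.1, 2.3])
U = fisher_score(x, mu0=-1.0, mu1=2.0)
K = U @ U.T        # Fisher-kernel Gram matrix for the three samples
print(K.shape)
```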

  4. Software structure for Vega/Chara instrument

    NASA Astrophysics Data System (ADS)

    Clausse, J.-M.

    2008-07-01

    VEGA (Visible spEctroGraph and polArimeter) is one of the focal instruments of the CHARA array at Mount Wilson near Los Angeles. Its control system is based on techniques developed for the GI2T interferometer (Grand Interferometre a 2 Telescopes) and for the SIRIUS fibered hyper-telescope testbed at OCA (Observatoire de la Cote d'Azur). This article describes the software and electronics architecture of the instrument. It is based on a local network architecture and also uses Virtual Private Network connections. The server part is based on Windows XP (VC++). The control software runs on Linux (C, GTK). For the control of the science detector and the fringe-tracking systems, distributed APIs use real-time techniques. The control software gathers all the necessary information about the instrument and allows automatic management of the instrument through an original task scheduler. This architecture is intended to allow the instrument to be driven from remote sites, such as our institute in the south of France.

  5. Architectural transitions in Vibrio cholerae biofilms at single-cell resolution

    PubMed Central

    Drescher, Knut; Dunkel, Jörn; Nadell, Carey D.; van Teeffelen, Sven; Grnja, Ivan; Wingreen, Ned S.; Stone, Howard A.; Bassler, Bonnie L.

    2016-01-01

    Many bacterial species colonize surfaces and form dense 3D structures, known as biofilms, which are highly tolerant to antibiotics and constitute one of the major forms of bacterial biomass on Earth. Bacterial biofilms display remarkable changes during their development from initial attachment to maturity, yet the cellular architecture that gives rise to collective biofilm morphology during growth is largely unknown. Here, we use high-resolution optical microscopy to image all individual cells in Vibrio cholerae biofilms at different stages of development, including colonies that range in size from 2 to 4,500 cells. From these data, we extracted the precise 3D cellular arrangements, cell shapes, sizes, and global morphological features during biofilm growth on submerged glass substrates under flow. We discovered several critical transitions of the internal and external biofilm architectures that separate the major phases of V. cholerae biofilm growth. Optical imaging of biofilms with single-cell resolution provides a new window into biofilm formation that will prove invaluable to understanding the mechanics underlying biofilm development. PMID:26933214

  6. The New Realm of 3-D Vision

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Dimension Technologies Inc. developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3-D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.

  7. Integration of ground-penetrating radar, ultrasonic tests and infrared thermography for the analysis of a precious medieval rose window

    NASA Astrophysics Data System (ADS)

    Nuzzo, L.; Calia, A.; Liberatore, D.; Masini, N.; Rizzo, E.

    2010-04-01

    The integration of high-resolution, non-invasive geophysical techniques (such as ground-penetrating radar or GPR) with emerging sensing techniques (acoustics, thermography) can complement limited destructive tests to provide a suitable methodology for a multi-scale assessment of the state of preservation, material and construction components of monuments. This paper presents the results of the application of GPR, infrared thermography (IRT) and ultrasonic tests to the 13th century rose window of Troia Cathedral (Apulia, Italy), affected by widespread decay and instability problems caused by the 1731 earthquake and reactivated by recent seismic activity. This integrated approach provided a wealth of complementary information at different scales, ranging from the sub-centimetre size of the metallic joints between the various architectural elements, narrow fractures and thin mortar fillings, up to the sub-metre scale of the internal masonry structure of the circular ashlar curb linking the rose window to the façade, which was essential to understand the original building technique and to design an effective restoration strategy.

  8. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally because the kernel matrix grows as additional training samples are introduced. In this paper, an incremental version of NPT (INPT) is proposed, based on the observation that the centering step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement a kernel version of that method. The effectiveness of INPT is shown by applying it to incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are applied to problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
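
    A hedged sketch of the batch nonlinear projection trick is given below: explicit coordinates Y satisfying Y Yᵀ = K are recovered from an eigendecomposition of the kernel matrix, after which any linear method can operate on Y directly. The incremental update that keeps old coordinates fixed, which is the paper's contribution, is not shown; the RBF kernel and the random data are placeholders.

```python
# Hedged sketch of the (batch) nonlinear projection trick: factor the Gram matrix K into
# explicit coordinates Y with Y @ Y.T == K via an eigendecomposition.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

X = np.random.rand(50, 4)
K = rbf_kernel(X, gamma=1.0)              # 50 x 50 Gram matrix

w, V = np.linalg.eigh(K)                  # eigendecomposition (ascending eigenvalues)
keep = w > 1e-10                          # drop numerically zero directions
Y = V[:, keep] * np.sqrt(w[keep])         # explicit kernel-space coordinates

print(np.allclose(Y @ Y.T, K, atol=1e-8))  # True: Y reproduces the kernel matrix
```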

  9. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell integration method, or σ ≤ 0.22 using the cell center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different than theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10⁻¹¹ and invasion time error to <5%.
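
    The sketch below illustrates, in one dimension and under assumed parameters (not the study's code), the two discretization strategies compared above: a cell-center evaluation of the Gaussian density versus a cell-integrated version that assigns each cell the probability mass between its edges. For σ small relative to the grid the two differ noticeably, which is the regime where the study reports large kernel errors.

```python
# Hedged sketch: cell-centre vs. cell-integrated discretisation of a 1-D Gaussian
# dispersal kernel on a unit grid (the study uses 2-D circular kernels and repeated
# convolution; this is illustrative only).
import numpy as np
from scipy.stats import norm

def cell_center_kernel(sigma, half_width):
    x = np.arange(-half_width, half_width + 1)          # evaluate density at cell centres
    k = norm.pdf(x, scale=sigma)
    return k / k.sum()

def cell_integrated_kernel(sigma, half_width):
    edges = np.arange(-half_width - 0.5, half_width + 1.5)
    k = np.diff(norm.cdf(edges, scale=sigma))            # probability mass per cell
    return k / k.sum()

sigma = 0.2                                              # small kernel relative to the grid
print(cell_center_kernel(sigma, 3))
print(cell_integrated_kernel(sigma, 3))                  # differs noticeably for small sigma
```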

  10. Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima blume)

    NASA Astrophysics Data System (ADS)

    Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.

    2016-08-01

    Anthraquinones (AQS) represent a group of secondary metabolic products in plants. AQS occur naturally in plants and microorganisms. In a previous study, we found that AQS were produced by an enzymatic browning reaction in Chinese chestnut kernels. To find out whether a non-enzymatic browning reaction in the kernels could also produce AQS, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatically browned kernels, and browned kernels, and the contents of AQS were determined. High performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained a high amount of AQS. Thus, we confirmed that AQS can be produced during both enzymatic and non-enzymatic browning processes. Rhein and emodin were the main components of AQS in the browned kernels.

  11. Architectural requirements for the Red Storm computing system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camp, William J.; Tomkins, James Lee

    This report is based on the Statement of Work (SOW) describing the various requirements for delivering a new supercomputer system to Sandia National Laboratories (Sandia) as part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program. This system is named Red Storm and will be a distributed memory, massively parallel processor (MPP) machine built primarily out of commodity parts. The requirements presented here distill extensive architectural and design experience accumulated over a decade and a half of research, development and production operation of similar machines at Sandia. Red Storm will have an unusually high bandwidth, low latency interconnect, specially designed hardware and software reliability features, a lightweight kernel compute node operating system and the ability to rapidly switch major sections of the machine between classified and unclassified computing environments. Particular attention has been paid to architectural balance in the design of Red Storm, and it is therefore expected to achieve an atypically high fraction of its peak speed of 41 TeraOPS on real scientific computing applications. In addition, Red Storm is designed to be upgradeable to many times this initial peak capability while still retaining appropriate balance in key design dimensions. Installation of the Red Storm computer system at Sandia's New Mexico site is planned for 2004, and it is expected that the system will be operated for a minimum of five years following installation.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madduri, Kamesh; Im, Eun-Jin; Ibrahim, Khaled Z.

    The next decade of high-performance computing (HPC) systems will see a rapid evolution and divergence of multi- and manycore architectures as power and cooling constraints limit increases in microprocessor clock speeds. Understanding efficient optimization methodologies on diverse multicore designs in the context of demanding numerical methods is one of the greatest challenges faced today by the HPC community. In this paper, we examine the efficient multicore optimization of GTC, a petascale gyrokinetic toroidal fusion code for studying plasma microturbulence in tokamak devices. For GTC's key computational components (charge deposition and particle push), we explore efficient parallelization strategies across a broad range of emerging multicore designs, including the recently-released Intel Nehalem-EX, the AMD Opteron Istanbul, and the highly multithreaded Sun UltraSparc T2+. We also present the first study on tuning gyrokinetic particle-in-cell (PIC) algorithms for graphics processors, using the NVIDIA C2050 (Fermi). Our work discusses several novel optimization approaches for gyrokinetic PIC, including mixed-precision computation, particle binning and decomposition strategies, grid replication, SIMDized atomic floating-point operations, and effective GPU texture memory utilization. Overall, we achieve significant performance improvements of 1.3–4.7× on these complex PIC kernels, despite the inherent challenges of data dependency and locality. Finally, our work also points to several architectural and programming features that could significantly enhance PIC performance and productivity on next-generation architectures.

  13. MVC for Content Management on the Cloud

    DTIC Science & Technology

    2011-09-01

    Windows, Linux, MacOS, PalmOS and other customized ones (Qiu). Figure 20 illustrates implementation of MVC architecture. Qiu examines a "universal...Listing of Unzipped Text Document (From O'Reilly & Associates, Inc, 2005) Figure 37 shows the results of unzipping this file in Linux. The contents of the...ODF Adoption TC, and the ODF Alliance include members from Adobe, BBC, Bristol City Council, Bull, Corel, EDS, EMC, GNOME, IBM, Intel, KDE, MySQL

  14. Future Directions for Astronomical Image Display

    NASA Technical Reports Server (NTRS)

    Mandel, Eric

    2000-01-01

    In the "Future Directions for Astronomical Image Displav" project, the Smithsonian Astrophysical Observatory (SAO) and the National Optical Astronomy Observatories (NOAO) evolved our existing image display program into fully extensible. cross-platform image display software. We also devised messaging software to support integration of image display into astronomical analysis systems. Finally, we migrated our software from reliance on Unix and the X Window System to a platform-independent architecture that utilizes the cross-platform Tcl/Tk technology.

  15. Broken rice kernels and the kinetics of rice hydration and texture during cooking.

    PubMed

    Saleh, Mohammed; Meullenet, Jean-Francois

    2013-05-01

    During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear as to how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work intended to study the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels forming treatments of 0, 40, 150, 350 or 1000 g kg(-1) broken kernels ratio. Rice samples were then cooked and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, rice sample texture became increasingly softer (P < 0.05) but the unbroken kernels became significantly harder. Moisture content and moisture uptake rate were positively correlated, and cooked rice hardness was negatively correlated to the percentage of broken kernels in rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.

  16. Triple shape memory effects of cross-linked polyethylene/polypropylene blends with cocontinuous architecture.

    PubMed

    Zhao, Jun; Chen, Min; Wang, Xiaoyan; Zhao, Xiaodong; Wang, Zhenwen; Dang, Zhi-Min; Ma, Lan; Hu, Guo-Hua; Chen, Fenghua

    2013-06-26

    In this paper, the triple shape memory effects (SMEs) observed in chemically cross-linked polyethylene (PE)/polypropylene (PP) blends with cocontinuous architecture are systematically investigated. The cocontinuous window of typical immiscible PE/PP blends is the volume fraction of PE (v(PE)) of ca. 30-70 vol %. This architecture can be stabilized by chemical cross-linking. Different initiators, 2,5-dimethyl-2,5-di(tert-butylperoxy)-hexane (DHBP), dicumylperoxide (DCP) coupled with divinylbenzene (DVB) (DCP-DVB), and their mixture (DHBP/DCP-DVB), are used for the cross-linking. According to the differential scanning calorimetry (DSC) measurements and gel fraction calculations, DHBP produces the best cross-linking and DCP-DVB the worst, and the mixture, DHBP/DCP-DVB, is in between. The chemical cross-linking causes lower melting temperature (Tm) and smaller melting enthalpy (ΔHm). The prepared triple shape memory polymers (SMPs) by cocontinuous immiscible PE/PP blends with v(PE) of 50 vol % show pronounced triple SMEs in the dynamic mechanical thermal analysis (DMTA) and visual observation. This new strategy of chemically cross-linked immiscible blends with cocontinuous architecture can be used to design and prepare new SMPs with triple SMEs.

  17. Open mHealth Architecture: A Primer for Tomorrow's Orthopedic Surgeon and Introduction to Its Use in Lower Extremity Arthroplasty.

    PubMed

    Ramkumar, Prem N; Muschler, George F; Spindler, Kurt P; Harris, Joshua D; McCulloch, Patrick C; Mont, Michael A

    2017-04-01

    The recent private-public partnership to unlock and utilize all available health data has large-scale implications for public health and personalized medicine, especially within orthopedics. Today, consumer based technologies such as smartphones and "wearables" store tremendous amounts of personal health data (known as "mHealth") that, when processed and contextualized, have the potential to open new windows of insight for the orthopedic surgeon about their patients. In the present report, the landscape, role, and future technical considerations of mHealth and open architecture are defined with particular examples in lower extremity arthroplasty. A limitation of the current mHealth landscape is the fragmentation and lack of interconnectivity between the myriad of available apps. The importance behind the currently lacking open mHealth architecture is underscored by the offer of improved research, increased workflow efficiency, and value capture for the orthopedic surgeon. There exists an opportunity to leverage existing mobile health data for orthopaedic surgeons, particularly those specializing in lower extremity arthroplasty, by transforming patient small data into insightful big data through the implementation of "open" architecture that affords universal data standards and a global interconnected network. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Prospective PET image quality gain calculation method by optimizing detector parameters.

    PubMed

    Theodorakis, Lampros; Loudos, George; Prassopoulos, Vasilios; Kappas, Constantine; Tsougos, Ioannis; Georgoulias, Panagiotis

    2015-12-01

    Lutetium-based scintillators with high-performance electronics introduced time-of-flight (TOF) reconstruction into the clinical setting. Let G' be the total signal-to-noise ratio gain in an image reconstructed using the TOF kernel compared with conventional reconstruction modes. G' is then the product of the gain G1 arising from the reconstruction process itself and (n-1) other gain factors (G2, G3, …, Gn) arising from the inherent properties of the detector. We calculated the G2 and G3 gains resulting from optimizing the coincidence and energy window widths for prompts and singles, respectively. Quantitatively and image-based validated Monte Carlo models of Lu2SiO5 (LSO) TOF-permitting and Bi4Ge3O12 (BGO) TOF-nonpermitting detectors were used for the calculations. The G2 and G3 values were 1.05 and 1.08 for the BGO detector, and G3 was 1.07 for the LSO. A value of almost unity for G2 of the LSO detector indicated a nonsignificant optimization by altering the energy window setting. G' was found to be ∼1.4 times higher for the TOF-permitting detector after reconstruction and optimization of the coincidence and energy windows. The method described could potentially predict image noise variations caused by altering detector acquisition parameters. It could also contribute to the long-running debate on the cost-efficiency of TOF scanners versus non-TOF ones. Some vendors are now returning to non-TOF product line designs in an effort to reduce crystal costs. Therefore, exploring the limits of image quality gain by altering the parameters of these detectors remains a topical issue.
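
    The gain decomposition described above can be restated compactly (a sketch in the abstract's notation, with the reported n = 3 factors):

```latex
% Total SNR gain as a product of the reconstruction gain G_1 and detector-related gains;
% G_2 and G_3 arise from optimizing the coincidence and energy windows, respectively.
G' = \prod_{i=1}^{n} G_i = G_1 \, G_2 \, G_3,
\qquad G_2^{\mathrm{BGO}} = 1.05,\; G_3^{\mathrm{BGO}} = 1.08,
\qquad G_2^{\mathrm{LSO}} \approx 1,\; G_3^{\mathrm{LSO}} = 1.07 .
```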

  19. SigHunt: horizontal gene transfer finder optimized for eukaryotic genomes.

    PubMed

    Jaron, Kamil S; Moravec, Jiří C; Martínková, Natália

    2014-04-15

    Genomic islands (GIs) are DNA fragments incorporated into a genome through horizontal gene transfer (also called lateral gene transfer), often with functions novel for a given organism. While methods for their detection are well researched in prokaryotes, the complexity of eukaryotic genomes makes direct utilization of these methods unreliable, and so labour-intensive phylogenetic searches are used instead. We present a surrogate method that investigates nucleotide base composition of the DNA sequence in a eukaryotic genome and identifies putative GIs. We calculate a genomic signature as a vector of tetranucleotide (4-mer) frequencies using a sliding window approach. Extending the neighbourhood of the sliding window, we establish a local kernel density estimate of the 4-mer frequency. We score the number of 4-mer frequencies in the sliding window that deviate from the credibility interval of their local genomic density using a newly developed discrete interval accumulative score (DIAS). To further improve the effectiveness of DIAS, we select informative 4-mers in a range of organisms using the tetranucleotide quality score developed herein. We show that the SigHunt method is computationally efficient and able to detect GIs in eukaryotic genomes that represent non-ameliorated integration. Thus, it is suited to scanning for change in organisms with different DNA composition. Source code and scripts freely available for download at http://www.iba.muni.cz/index-en.php?pg=research-data-analysis-tools-sighunt are implemented in C and R and are platform-independent. 376090@mail.muni.cz or martinkova@ivb.cz. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
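
    The genomic-signature step described above (tetranucleotide frequencies in a sliding window) can be sketched as follows; the window and step sizes are placeholders, and the local kernel density estimate and DIAS scoring that SigHunt adds on top of this signature are not reproduced here.

```python
# Hedged sketch of the genomic-signature step: 4-mer frequency vectors over a sliding window.
from collections import Counter
from itertools import product

def tetranucleotide_signature(seq: str, window: int = 5000, step: int = 1000):
    kmers = ["".join(p) for p in product("ACGT", repeat=4)]      # the 256 possible 4-mers
    signatures = []
    for start in range(0, len(seq) - window + 1, step):
        chunk = seq[start:start + window]
        counts = Counter(chunk[i:i + 4] for i in range(len(chunk) - 3))
        total = sum(counts[k] for k in kmers) or 1
        signatures.append([counts[k] / total for k in kmers])     # frequency vector per window
    return signatures

sig = tetranucleotide_signature("ACGT" * 3000)    # toy 12 kb sequence
print(len(sig), len(sig[0]))                      # number of windows, 256 frequencies each
```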

  20. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each involving a combination of several elementary or intermediate kernels, and resulting in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show a clear gain compared to several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.
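
    As a hedged toy illustration of the recursive construction described above, the sketch below builds a two-layer kernel: layer one is a fixed convex combination of elementary kernels, and layer two applies an elementwise exponential, a nonlinear activation that preserves positive semi-definiteness. The weights and kernels are fixed for illustration, whereas the paper learns them.

```python
# Hedged toy sketch of a two-layer "deep kernel": convex mix of elementary kernels,
# followed by a PSD-preserving elementwise exponential. Weights are placeholders.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel, linear_kernel

X = np.random.rand(30, 5)

# Layer 1: convex combination of elementary kernels.
K1 = (0.5 * rbf_kernel(X, gamma=0.5)
      + 0.3 * polynomial_kernel(X, degree=2)
      + 0.2 * linear_kernel(X))

# Layer 2: elementwise exponential of the (rescaled) layer-1 kernel, which stays PSD
# because exp(K) = sum_n K^{(Hadamard n)} / n! and Hadamard powers of PSD matrices are PSD.
K_deep = np.exp(K1 / K1.max())

print(np.all(np.linalg.eigvalsh(K_deep) > -1e-8))   # still (numerically) PSD
```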

  1. Multineuron spike train analysis with R-convolution linear combination kernel.

    PubMed

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
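
    A hedged sketch of the simplest member of this family, a weighted linear combination over neurons of a single-neuron spike-train kernel, is shown below; the Gaussian pairwise spike-interaction kernel and the uniform weights are placeholders, not the paper's choices.

```python
# Hedged sketch: lifting a single-neuron spike-train kernel to multineuron recordings as
# a weighted linear combination over neurons (placeholders throughout).
import numpy as np

def single_neuron_kernel(s1, s2, tau=0.01):
    """Sum of Gaussian interactions between all spike-time pairs of two trains."""
    s1, s2 = np.asarray(s1), np.asarray(s2)
    if s1.size == 0 or s2.size == 0:
        return 0.0
    d = s1[:, None] - s2[None, :]
    return float(np.exp(-d ** 2 / (2 * tau ** 2)).sum())

def multineuron_kernel(trial_a, trial_b, weights=None):
    """trial_a, trial_b: lists of spike-time arrays, one array per neuron."""
    if weights is None:
        weights = np.ones(len(trial_a)) / len(trial_a)
    return sum(w * single_neuron_kernel(a, b)
               for w, a, b in zip(weights, trial_a, trial_b))

trial1 = [np.array([0.01, 0.05, 0.12]), np.array([0.03, 0.07])]
trial2 = [np.array([0.02, 0.06]), np.array([0.04, 0.11])]
print(multineuron_kernel(trial1, trial2))
```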

  2. Study on Energy Productivity Ratio (EPR) at palm kernel oil processing factory: case study on PT-X at Sumatera Utara Plantation

    NASA Astrophysics Data System (ADS)

    Haryanto, B.; Bukit, R. Br; Situmeang, E. M.; Christina, E. P.; Pandiangan, F.

    2018-02-01

    The purpose of this study was to determine the performance, productivity and feasibility of operating a palm kernel processing plant based on the Energy Productivity Ratio (EPR). EPR is expressed as the ratio of output energy plus by-products to input energy. A palm kernel plant processes palm kernels into palm kernel oil. The procedure started from collecting the data needed on the energy input side, such as palm kernel prices, energy demand and depreciation of the factory. The energy output and its by-products comprise the whole production value, such as the palm kernel oil price and the prices of the remaining products such as shells and pulp. The energy equivalent of palm kernel oil was then calculated to analyse the EPR value based on the processing capacity per year. The investigation was carried out at the kernel oil processing plant PT-X at the Sumatera Utara plantation. The EPR value was 1.54 (EPR > 1), which indicates that processing palm kernels into palm kernel oil is feasible to operate in terms of energy productivity.
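
    The EPR computation described above reduces to a simple ratio; the sketch below uses placeholder figures (not the plant data from the study) purely to illustrate the feasibility criterion EPR > 1.

```python
# Hedged sketch of the Energy Productivity Ratio as defined above: output value (palm
# kernel oil plus by-products such as shells and pulp) over input value (kernels, energy
# demand, depreciation). All figures below are placeholders.
def energy_productivity_ratio(oil_value, byproduct_value, input_value):
    return (oil_value + byproduct_value) / input_value

epr = energy_productivity_ratio(oil_value=100.0, byproduct_value=15.0, input_value=82.0)
print(f"EPR = {epr:.2f} ->", "feasible" if epr > 1.0 else "not feasible")
```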

  3. Multi-kernel aggregation of local and global features in long-wave infrared for detection of SWAT teams in challenging environments

    NASA Astrophysics Data System (ADS)

    Arya, Ankit S.; Anderson, Derek T.; Bethel, Cindy L.; Carruth, Daniel

    2013-05-01

    A vision system was designed for people detection to support SWAT team members operating in challenging environments such as low-to-no light and smoke. When the vision system is mounted on a mobile robot platform, it will enable the robot to function as an effective member of the SWAT team: to provide surveillance information, to make first contact with suspects, and to provide safe entry for team members. The vision task is challenging because SWAT team members are typically concealed, carry various equipment such as shields, and perform tactical and stealthy maneuvers. Occlusion is a particular challenge because team members operate in close proximity to one another. An uncooled electro-optical/long-wave infrared (EO/LWIR) camera, 7.5 to 13.5 μm, was used. A unique thermal dataset was collected of SWAT team members from multiple teams performing tactical maneuvers during monthly training exercises. Our approach consisted of two stages: an object detector trained on people to find candidate windows, and a secondary feature extraction, multi-kernel (MK) aggregation and classification step to distinguish between SWAT team members and civilians. Two types of thermal features, local and global, are presented based on maximally stable extremal region (MSER) blob detection. Support vector machine (SVM) classification results of approximately [70, 93]% for SWAT team member detection are reported based on the exploration of different combinations of visual information in terms of training data.

  4. Safe and Secure Virtualization: Answers for IMA next Generation and Beyond

    NASA Astrophysics Data System (ADS)

    Almeida, Jose; Vatrinet, Francis

    2010-08-01

    This paper presents some of the challenges the aerospace industry is facing in the future and explains why and how a safe and secure virtualization technology can help solve these challenges. Efforts around the next generation of IMA have already started, such as the European FP7-funded project SCARLETT or the IDEE5 project, and many avionics players and working groups are focused on how new technologies, such as the SMP capabilities introduced in the latest CPU architectures, can help increase system performance in future avionics systems. We present PikeOS, a separation micro-kernel that applies state-of-the-art techniques and widely recognized standards such as ARINC 653 and MILS in order to guarantee safety and security properties while still improving overall performance.

  5. GERICOS: A Generic Framework for the Development of On-Board Software

    NASA Astrophysics Data System (ADS)

    Plasson, P.; Cuomo, C.; Gabriel, G.; Gauthier, N.; Gueguen, L.; Malac-Allain, L.

    2016-08-01

    This paper presents an overview of the GERICOS framework (GEneRIC Onboard Software), its architecture, its various layers and its future evolutions. The GERICOS framework, developed and qualified by LESIA, offers a set of generic, reusable and customizable software components for the rapid development of payload flight software. The GERICOS framework has a layered structure. The first layer (GERICOS::CORE) implements the concept of active objects and forms an abstraction layer over the top of real-time kernels. The second layer (GERICOS::BLOCKS) offers a set of reusable software components for building flight software based on generic solutions to recurrent functionalities. The third layer (GERICOS::DRIVERS) implements software drivers for several COTS IP cores of the LEON processor ecosystem.

  6. Vanadium dioxide nanogrid films for high transparency smart architectural window applications.

    PubMed

    Liu, Chang; Balin, Igal; Magdassi, Shlomo; Abdulhalim, Ibrahim; Long, Yi

    2015-02-09

    This study presents a novel approach towards achieving high luminous transmittance (T(lum)) for vanadium dioxide (VO(2)) thermochromic nanogrid films whilst maintaining the solar modulation ability (ΔT(sol)). The perforated VO(2)-based films employ orderly patterned nano-holes, which transmit visible light favorably while retaining large near-infrared modulation, thereby enhancing ΔT(sol). Numerical optimizations using parameter search algorithms have been implemented through a series of Finite Difference Time Domain (FDTD) simulations by varying film thickness, cell periodicity, grid dimensions and variations of grid arrangement. The best performing results of T(lum) (76.5%) and ΔT(sol) (14.0%) are comparable, if not superior, to the results calculated from nanothermochromism, nanoporosity and biomimetic nanostructuring. This opens up a new approach for thermochromic smart window applications.

  7. Programmable resistive-switch nanowire transistor logic circuits.

    PubMed

    Shim, Wooyoung; Yao, Jun; Lieber, Charles M

    2014-09-10

    Programmable logic arrays (PLAs) constitute a promising architecture for developing increasingly complex and functional circuits, up to nanocomputers, from nanoscale building blocks. Here we report a novel one-dimensional PLA element that incorporates resistive switch gate structures on a semiconductor nanowire and show that multiple elements can be integrated to realize functional PLAs. In our PLA element, the gate coupling to the nanowire transistor can be modulated by the memory state of the resistive switch to yield programmable active (transistor) or inactive (resistor) states within a well-defined logic window. Multiple PLA nanowire elements were integrated and programmed to yield a working 2-to-4 demultiplexer with long-term retention. The well-defined, controllable logic window and long-term retention of our new one-dimensional PLA element provide a promising route for building increasingly complex circuits with nanoscale building blocks.

  8. A Sensor Middleware for integration of heterogeneous medical devices.

    PubMed

    Brito, M; Vale, L; Carvalho, P; Henriques, J

    2010-01-01

    In this paper, the architecture of a modular, service-oriented Sensor Middleware for data acquisition and processing is presented. The described solution was developed with the purpose of solving two increasingly relevant problems in the context of modern pHealth systems: i) aggregating a number of heterogeneous, off-the-shelf devices from which clinical measurements can be acquired, and ii) providing access to and integration with an 802.15.4 network of wearable sensors. The modular nature of the Middleware provides the means to easily integrate pre-processing algorithms into processing pipelines, as well as new drivers for adding support for new sensor devices or communication technologies. Tests performed with both real and artificially generated data streams show that the presented solution is suitable for use on both a Windows PC and a Windows Mobile PDA with minimal overhead.

  9. Proceedings of the Workshop on Space Telerobotics, volume 1

    NASA Technical Reports Server (NTRS)

    Rodriguez, G. (Editor)

    1987-01-01

    These proceedings report the results of a workshop on space telerobotics, which was held at the Jet Propulsion Laboratory, January 20-22, 1987. Sponsored by the NASA Office of Aeronautics and Space Technology (OAST), the Workshop reflected NASA's interest in developing new telerobotics technology for automating the space systems planned for the 1990s and beyond. The workshop provided a window into NASA telerobotics research, allowing leading researchers in telerobotics to exchange ideas on manipulation, control, system architectures, artificial intelligence, and machine sensing. One of the objectives was to identify important unsolved problems of current interest. The workshop consisted of surveys, tutorials, and contributed papers of both theoretical and practical interest. Several sessions were held on the themes of sensing and perception, control execution, operator interface, planning and reasoning, and system architecture.

  10. 7 CFR 981.9 - Kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...

  11. An SVM model with hybrid kernels for hydrological time series

    NASA Astrophysics Data System (ADS)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use one single type of kernel function, e.g., radial basis kernel function. Provided that there are several featured kernel functions available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of radial basis kernel and polynomial kernel for the forecast of monthly flowrate in two gaging stations using SVM approach. The results indicate significant improvement in the accuracy of predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such hybrid kernel approach for SVM applications.
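
    A hedged sketch of such a hybrid kernel is given below: a fixed linear combination of an RBF and a polynomial kernel passed to a support vector regressor as a precomputed Gram matrix. The mixing weight, kernel parameters, and synthetic data are placeholders; the paper tunes the combination for monthly flowrate series.

```python
# Hedged sketch: SVR with a precomputed hybrid kernel, w * RBF + (1 - w) * polynomial.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def hybrid_kernel(A, B, w=0.7, gamma=0.1, degree=2):
    return w * rbf_kernel(A, B, gamma=gamma) + (1 - w) * polynomial_kernel(A, B, degree=degree)

rng = np.random.default_rng(0)
X_train, y_train = rng.random((60, 3)), rng.random(60)
X_test = rng.random((10, 3))

svr = SVR(kernel="precomputed")
svr.fit(hybrid_kernel(X_train, X_train), y_train)          # train Gram matrix: n_train x n_train
y_pred = svr.predict(hybrid_kernel(X_test, X_train))        # rows: test samples, cols: training samples
print(y_pred.shape)
```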

  12. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to compute and keep in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallel approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Multiple kernels learning-based biological entity relationship extraction method.

    PubMed

    Dongliang, Xu; Jingchang, Pan; Bailing, Wang

    2017-09-20

    Automatically extracting protein interaction information from the biomedical literature can help to build protein relation networks and design new drugs. More than 20 million literature abstracts are included in MEDLINE, the most authoritative textual database in the field of biomedicine, and their number grows exponentially over time. This rapid expansion of the biomedical literature can be difficult to absorb or analyze manually, so efficient and automated search engines are necessary to explore the biomedical literature using text mining techniques. The P, R, and F values of the tag graph method on the AIMed corpus are 50.82, 69.76, and 58.61%, respectively. The P, R, and F values of the tag graph kernel method on the other four evaluation corpora are 2-5% higher than those of the all-paths graph kernel. The P, R and F values of the two methods fusing the feature kernel and the tag graph kernel are 53.43, 71.62 and 61.30%, and 55.47, 70.29 and 60.37%, respectively. This indicates that the performance of the two kernel fusion methods is better than that of a single kernel. In comparison with the all-paths graph kernel method, the tag graph kernel method is superior in terms of overall performance. Experiments show that the performance of the multi-kernel method is better than that of the three separate single-kernel methods and the dual mutually-fused kernel method used here on five corpus sets.

  14. 7 CFR 51.2295 - Half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...

  15. 7 CFR 810.206 - Grades and grade requirements for barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... weight per bushel (pounds) Sound barley (percent) Maximum Limits of— Damaged kernels 1 (percent) Heat damaged kernels (percent) Foreign material (percent) Broken kernels (percent) Thin barley (percent) U.S... or otherwise of distinctly low quality. 1 Includes heat-damaged kernels. Injured-by-frost kernels and...

  16. 7 CFR 51.1449 - Damage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...

  17. 7 CFR 51.1449 - Damage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...

  18. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will not...

  19. 7 CFR 51.2296 - Three-fourths half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...

  20. The Classification of Diabetes Mellitus Using Kernel k-means

    NASA Astrophysics Data System (ADS)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is an algorithm developed from the k-means algorithm. Kernel k-means uses kernel learning, which can handle data that are not linearly separable; this is where it differs from ordinary k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means has good performance and performs considerably better than SOM.
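
    A hedged minimal sketch of kernel k-means is shown below: Lloyd-style iterations in which distances to cluster means in feature space are computed purely from the kernel matrix. The synthetic data, the RBF kernel and its parameter are placeholders rather than the diabetes dataset used in the study.

```python
# Hedged sketch of kernel k-means: assignments are updated using only kernel evaluations.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kernel_kmeans(K, n_clusters, n_iter=50, seed=0):
    n = K.shape[0]
    labels = np.random.default_rng(seed).integers(n_clusters, size=n)
    for _ in range(n_iter):
        dist = np.full((n, n_clusters), np.inf)
        for c in range(n_clusters):
            mask = labels == c
            if not mask.any():
                continue                      # empty clusters stay at infinite distance
            m = mask.sum()
            # ||phi(x_i) - mean_c||^2 expressed purely through kernel entries
            dist[:, c] = (np.diag(K) - 2.0 * K[:, mask].sum(axis=1) / m
                          + K[np.ix_(mask, mask)].sum() / m ** 2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

X = np.vstack([np.random.randn(40, 2), np.random.randn(40, 2) + 4.0])   # two toy blobs
print(np.bincount(kernel_kmeans(rbf_kernel(X, gamma=0.5), n_clusters=2)))
```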

  1. UNICOS Kernel Internals Application Development

    NASA Technical Reports Server (NTRS)

    Caredo, Nicholas; Craw, James M. (Technical Monitor)

    1995-01-01

    An understanding of UNICOS kernel internals is valuable information. However, having the knowledge is only half the value; the second half comes from knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be utilized. This paper discusses the intricacies of developing utilities that utilize kernel information. In addition, algorithms, logic, and code for accessing kernel information will be discussed. Code segments will be provided that demonstrate how to locate and read kernel structures. Types of applications that can utilize kernel information will also be discussed.

  2. Detection of maize kernels breakage rate based on K-means clustering

    NASA Astrophysics Data System (ADS)

    Yang, Liang; Wang, Zhuo; Gao, Lei; Bai, Xiaoping

    2017-04-01

    In order to improve the recognition accuracy and efficiency of maize kernel breakage detection, this paper uses computer vision technology to detect maize kernel breakage based on the K-means clustering algorithm. First, the collected RGB images are converted into Lab images, and the clarity of the original images is evaluated with an energy function based on the Sobel 8-direction gradient. Finally, maize kernel breakage is detected using images from different pixel acquisition devices and different shooting angles. In this paper, broken maize kernels are identified by the color difference between intact kernels and broken kernels. The clarity evaluation of the original images and the different shooting angles are used to verify that image clarity and shooting angle have a direct influence on feature extraction. The results show that the K-means clustering algorithm can distinguish broken maize kernels effectively.
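
    A hedged sketch of the pipeline outlined above is given below: RGB-to-Lab conversion, a Sobel-gradient clarity score, and K-means clustering of pixel colours to separate broken from intact kernel regions. The random stand-in image, the plain Sobel operator (used here in place of the 8-direction variant), and the cluster count are assumptions for illustration.

```python
# Hedged sketch: Lab conversion, gradient-energy clarity score, and colour clustering.
import numpy as np
from skimage import color, filters
from sklearn.cluster import KMeans

rgb = np.random.rand(120, 120, 3)             # stand-in for a maize kernel image in [0, 1]
lab = color.rgb2lab(rgb)                      # Lab separates lightness from colour

sharpness = filters.sobel(lab[..., 0]).sum()  # gradient-energy clarity measure on L channel
print("clarity score:", sharpness)

pixels = lab.reshape(-1, 3)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
print("fraction of pixels in cluster 1:", labels.mean())   # share of one colour cluster
```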

  3. Modeling adaptive kernels from probabilistic phylogenetic trees.

    PubMed

    Nicotra, Luca; Micheli, Alessio

    2009-01-01

    Modeling phylogenetic interactions is an open issue in many computational biology problems. In the context of gene function prediction we introduce a class of kernels for structured data that leverages a hierarchical probabilistic modeling of phylogeny among species. We derive three kernels belonging to this setting: a sufficient statistics kernel, a Fisher kernel, and a probability product kernel. The new kernels are used in the context of support vector machine learning. The kernels' adaptivity is obtained through the estimation of the parameters of a tree-structured model of evolution, using as observed data phylogenetic profiles encoding the presence or absence of specific genes in a set of fully sequenced genomes. We report results obtained in the prediction of the functional class of the proteins of the budding yeast Saccharomyces cerevisiae, which compare favorably to a standard vector-based kernel and to a non-adaptive tree kernel function. A further comparative analysis is performed in order to assess the impact of the different components of the proposed approach. We show that the key features of the proposed kernels are their adaptivity to the input domain and their ability to deal with structured data interpreted through a graphical model representation.

  4. Aflatoxin and nutrient contents of peanut collected from local market and their processed foods

    NASA Astrophysics Data System (ADS)

    Ginting, E.; Rahmianna, A. A.; Yusnawan, E.

    2018-01-01

    Peanuts are susceptible to aflatoxin contamination, and the source of the peanuts as well as the processing method considerably affect the aflatoxin content of the products. Therefore, a study on the aflatoxin and nutrient contents of peanuts collected from a local market and their processed foods was performed. Good kernels were prepared into fried peanut, pressed-fried peanut, peanut sauce, peanut press cake, fermented peanut press cake (tempe) and fried tempe, while blended kernels (good and poor kernels) were processed into peanut sauce and tempe, and poor kernels were processed only into tempe. The results showed that good and blended kernels, which had a high proportion of sound/intact kernels (82.46% and 62.09%), contained 9.8-9.9 ppb of aflatoxin B1, while a slightly higher level was seen in poor kernels (12.1 ppb). However, the moisture, ash, protein, and fat contents of the kernels were similar, as were those of the products. Peanut tempe and fried tempe showed the highest increase in protein content, while decreased fat contents were seen in all products. The increase in aflatoxin B1 of peanut tempe prepared from poor kernels > blended kernels > good kernels. However, it decreased by 61.2% on average after deep-frying. Excluding peanut tempe and fried tempe, aflatoxin B1 levels in all products derived from good kernels were below the permitted level (15 ppb). This suggests that sorting peanut kernels as ingredients, followed by heat processing, would decrease the aflatoxin content in the products.

  5. Partial Deconvolution with Inaccurate Blur Kernel.

    PubMed

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.

  6. Study of a brittle and precious medieval rose-window by means of the integration of GPR, stress wave tests and infrared thermography

    NASA Astrophysics Data System (ADS)

    Nuzzo, L.; Masini, N.; Rizzo, E.

    2009-04-01

    The correct management and restoration of architectural monuments of high cultural interest requires a comprehensive understanding of their state of preservation, the detection of the building features, the localization of damage, and possibly the identification of its causes, nature, and extent. To this aim, there has been growing interest in non-destructive and non-invasive geophysical methods as an invaluable tool for spatially correlating the information gained through destructive tests, which are restricted to a few locations of the investigated structure, and for optimizing the choice of their position in order to minimize their impact on the monument's structural stability. Moreover, the integration of classical geophysical techniques with emerging surface and subsurface sensing techniques (acoustics, thermography) provides a suitable methodology for a multi-scale assessment of the monument's state of preservation and of its material and building components, which is vital for addressing maintenance and restoration issues. The present case study focuses on the application of Ground Penetrating Radar (GPR), infrared thermography (IRT), and sonic and ultrasonic tests to analyze a precious 13th-century rose window in Southern Italy affected by widespread decay and instability problems. The Cathedral of Troia (Apulia, Italy) is a masterpiece of Apulian Romanesque architecture. Its façade is adorned with an astonishing 6 m diameter rose window consisting of 11 twin columns, in various stones and reused marbles, connected to a central oculus and to a ring of trapezoidal elements decorated with arched ribworks. Between the twin columns there are 11 triangular carved panels with different and strongly symbolic geometrical patterns. According to visual inspection and mineralogical and petrographic studies, different materials have been used for the different architectural elements: fine-grained limestone for the central oculus, medium-fine-grained calcarenite for the carved panels, and coarse-grained calcarenite for the elements with arched ribworks. The various elements are supposedly joined together by iron bolts and melted lead, as exposed at the base of a damaged column. As a consequence of the 1731 earthquake, the upper part of the façade underwent rotational strains which induced severe out-of-plane deformation in the rose window and caused disconnection and rotation of the capitals, compression failures, cracks, and detachments in various architectural elements. Despite the 19th-century consolidation works, the progression of strains caused by recent seismic activity made further structural rehabilitation necessary to preserve the historical rose window. The geometrical and structural complexity of the monument and the multiplicity of issues to be addressed required an integrated approach using various Non-Destructive Testing (NDT) techniques to complement the laboratory analyses for material characterization. Being based on different theoretical principles, the NDT techniques are sensitive to different physical parameters of the structure, which are in turn interpreted in terms of its engineering properties based on calibrations or assumptions. Therefore, especially in the case of monuments having exceptional artistic value or structural complexity, the integration of several methods becomes mandatory in order to compensate for the limitations of each single method, thus reducing interpretation ambiguities and the risk of failure in detecting structural defects. In our case the combination of GPR, IRT and sonic/ultrasonic tests was deemed the best strategy to obtain information on different physical parameters and to achieve different depths of penetration and resolution capabilities. A multi-scale integrated approach was necessary since the aspects to be investigated were themselves of different scales, ranging from the sub-centimeter size of small metallic joints, narrow fractures and thin mortar fillings up to the decimeter-meter scale of the masonry structure of the circular ashlar curb linking the rose window to the façade. This multi-technique NDT approach provided a wealth of high-resolution complementary information on the internal and surface characteristics, materials, state of conservation and building techniques, which was essential for the design of an effective restoration strategy.

  7. Creating and Manipulating Formalized Software Architectures to Support a Domain-Oriented Application Composition System

    DTIC Science & Technology

    1992-12-01

    OOD) Paradigm; 2.4.3 Feature-Oriented Domain Analysis (FODA); 2.4.4 Hierarchical Software Systems ... domain analysis (FODA) is one approach to domain analysis whose primary goal is to make domain products reusable (20:47). A domain model describes ... among others. 2.4.3 Feature-Oriented Domain Analysis (FODA): Kang and others used the complete FODA methodology to successfully develop a window

  8. Air and Space Operations Center-Weapon System Increment 10.2 (AOC-WS Inc 10.2)

    DTIC Science & Technology

    2016-03-01

    DAE - Defense Acquisition Executive DoD - Department of Defense DoDAF - DoD Architecture Framework FD - Full Deployment FDD - Full Deployment...the AOC 10.2 MAIS was approximately 11 months beyond the original schedule estimates for MS-C, July 31, 2015, and FDD , July 31, 2016. Accordingly...ADM effectively acknowledged selection of AOC-WS Inc 10.2 as the Preferred Alternative, thereby starting the 5-year development window to attain FDD

  9. Radiation Hardened Low Power Digital Signal Processor

    DTIC Science & Technology

    2005-04-15

    List of figures (excerpt): Point Spread Function (PSF); Restored Image and Restored PSF; Newly Created Array; Deblurred Image ... noise and interference rejection. WOAs of 32 taps and greater are easily managed by the TCSP. An architecture that could efficiently perform filter ... to quickly calculate a Remez filter impulse response to be used in place of the window function. Using the Remez exchange algorithm to calculate the

  10. Feature co-localization landscape of the human genome

    PubMed Central

    Ng, Siu-Kin; Hu, Taobo; Long, Xi; Chan, Cheuk-Hin; Tsang, Shui-Ying; Xue, Hong

    2016-01-01

    Although feature co-localizations could serve as useful guide-posts to genome architecture, a comprehensive and quantitative feature co-localization map of the human genome has been lacking. Herein we show that, in contrast to the conventional bipartite division of genomic sequences into genic and inter-genic regions, pairwise co-localizations of forty-two genomic features in the twenty-two autosomes based on 50-kb to 2,000-kb sequence windows indicate a tripartite zonal architecture comprising Genic zones enriched with gene-related features and Alu-elements; Proximal zones enriched with MIR- and L2-elements, transcription-factor-binding-sites (TFBSs), and conserved-indels (CIDs); and Distal zones enriched with L1-elements. Co-localizations between single-nucleotide-polymorphisms (SNPs) and copy-number-variations (CNVs) reveal a fraction of sequence windows displaying steeply enhanced levels of SNPs, CNVs and recombination rates that point to active adaptive evolution in such pathways as immune response, sensory perceptions, and cognition. The strongest positive co-localization observed between TFBSs and CIDs suggests a regulatory role of CIDs in cooperation with TFBSs. The positive co-localizations of cancer somatic CNVs (CNVT) with all Proximal zone and most Genic zone features, in contrast to the distinctly more restricted co-localizations exhibited by germline CNVs (CNVG), reveal disparate distributions of CNVTs and CNVGs indicative of dissimilarity in their underlying mechanisms. PMID:26854351
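
    As a rough illustration of window-based co-localization, the sketch below bins two feature tracks into fixed-size sequence windows and correlates the per-window counts; the 50-kb window size and the correlation measure are illustrative stand-ins for the statistics used in the study, and the feature tracks are synthetic.

        import numpy as np

        def window_counts(positions, chrom_len, window=50_000):
            """Count feature occurrences in fixed-size sequence windows."""
            bins = np.arange(0, chrom_len + window, window)
            counts, _ = np.histogram(positions, bins=bins)
            return counts

        def colocalization(pos_a, pos_b, chrom_len, window=50_000):
            """Pairwise co-localization as the correlation of windowed counts
            (illustrative measure; the paper's statistic may differ)."""
            a = window_counts(pos_a, chrom_len, window)
            b = window_counts(pos_b, chrom_len, window)
            return np.corrcoef(a, b)[0, 1]

        # Example: two synthetic feature tracks on a 10-Mb chromosome
        rng = np.random.default_rng(0)
        track_a = rng.integers(0, 10_000_000, 5000)   # e.g., Alu-element positions (synthetic)
        track_b = rng.integers(0, 10_000_000, 2000)   # e.g., TFBS positions (synthetic)
        print(colocalization(track_a, track_b, 10_000_000))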

  11. Thermally adapted design strategy of colonial houses in Surabaya

    NASA Astrophysics Data System (ADS)

    Antaryama, I. G. N.; Ekasiwi, S. N. N.; Mappajaya, A.; Ulum, M. S.

    2018-03-01

    Colonial buildings, including houses, have been considered a representation of climate-responsive architecture. The design was thought to be a hybrid of Dutch and tropical architecture, created by reinventing Dutch and tropical design principles and expressing them in a new form that resembles neither Dutch nor tropical building. Aside from this new image, the colonial house does show good climatic responses. Previous research on colonial houses has generally focused on qualitative assessment of the buildings' climate performance, and tends to concentrate on building elements such as walls and windows. The present study is designed to give a more complete picture of the architectural design strategy of the house by exploring and analysing the thermal performance of colonial buildings and their related design strategies. Field measurements were conducted during the dry season in several colonial buildings in Surabaya. Air temperature and humidity were both recorded, representing the internal and external thermal conditions of the building. These data were then evaluated to determine the thermal performance of the house. Finally, various design strategies were examined to reveal their contributions to that performance. The results for Surabaya confirm findings of previous research conducted in other locations, namely that the thermal performance of such houses is generally good. Passive design strategies such as mass effect and ventilation play an important role in determining the performance of the building.

  12. Patterns of bird-window collisions inform mitigation on a university campus

    PubMed Central

    Winton, R. Scott; Wu, Charlene J.; Zambello, Erika; Wittig, Thomas W.; Cagle, Nicolette L.

    2016-01-01

    Bird-window collisions cause an estimated one billion bird deaths annually in the United States. Building characteristics and surrounding habitat affect collision frequency. Given the importance of collisions as an anthropogenic threat to birds, mitigation is essential. Patterned glass and UV-reflective films have been proven to prevent collisions. At Duke University’s West campus in Durham, North Carolina, we set out to identify the buildings and building characteristics associated with the highest frequencies of collisions in order to propose a mitigation strategy. We surveyed six buildings, stratified by size, and measured architectural characteristics and surrounding area variables. During 21 consecutive days in spring and fall 2014, and spring 2015, we conducted carcass surveys to document collisions. In addition, we collected ad hoc collision data year-round and recorded them using the app iNaturalist. Consistent with previous studies, we found a positive relationship between glass area and collisions. Fitzpatrick, the building with the most window area, caused the most collisions. Schwartz and the Perk, the two small buildings with small window areas, had the lowest collision frequencies. Penn, the only building with a bird-deterrent pattern, caused just two collisions despite being almost completely made of glass. Unlike many research projects, our data collection led to mitigation action. A resolution supported by the student government, together with news stories in the local media, resulted in the application of a bird-deterrent film to the building with the most collisions: Fitzpatrick. We present our collision data and mitigation results to inspire other researchers and organizations to prevent bird-window collisions. PMID:26855877

  13. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  14. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  15. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  16. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  17. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  18. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  19. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  20. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  1. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  2. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  3. Wavelet SVM in Reproducing Kernel Hilbert Space for hyperspectral remote sensing image classification

    NASA Astrophysics Data System (ADS)

    Du, Peijun; Tan, Kun; Xing, Xiaoshi

    2010-12-01

    Combining the Support Vector Machine (SVM) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in a Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, SVM faces the bottleneck of kernel parameter selection, which leads to time-consuming training and low classification accuracy. The wavelet kernel in RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions; implications for semiparametric estimation are also proposed in this paper. An Airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing image with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to test the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. Compared with traditional classifiers, including Spectral Angle Mapping (SAM), Minimum Distance Classification (MDC), and an SVM classifier using a Radial Basis Function kernel, the proposed wavelet SVM classifier using the wavelet kernel function in a Reproducing Kernel Hilbert Space noticeably improves classification accuracy.
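
    A hedged sketch of a wavelet-kernel SVM using the common Morlet-style translation-invariant wavelet kernel, K(x, y) = prod_i cos(1.75 u_i) exp(-u_i^2/2) with u_i = (x_i - y_i)/a, plugged into scikit-learn as a precomputed Gram matrix. The paper's Coiflet-based kernel and hyperspectral data differ; the dilation parameter and toy data here are assumptions.

        import numpy as np
        from sklearn.svm import SVC

        def wavelet_kernel(X, Y, a=1.0):
            """Morlet-style wavelet kernel K(x, y) = prod_i h((x_i - y_i)/a) with
            h(u) = cos(1.75 u) * exp(-u^2 / 2). A common wavelet SVM kernel; the
            paper uses a Coiflet-based kernel instead."""
            diff = (X[:, None, :] - Y[None, :, :]) / a
            return np.prod(np.cos(1.75 * diff) * np.exp(-diff ** 2 / 2.0), axis=-1)

        # Toy usage with a precomputed Gram matrix (8 "bands", synthetic labels)
        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(100, 8))
        y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
        clf = SVC(kernel="precomputed").fit(wavelet_kernel(X_train, X_train), y_train)
        X_test = rng.normal(size=(20, 8))
        pred = clf.predict(wavelet_kernel(X_test, X_train))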

  4. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    PubMed

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, so it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has recently been proposed to learn a kernel from a set of base kernels that are seen as different descriptions of the data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions, and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on a regularized trace ratio, termed MKL-TR. Our method aims at learning a transformation into a lower-dimensional space and a corresponding kernel from the given base kernels, among which some may not be suitable for the given data. The solutions for the proposed framework can be found by trace ratio maximization. The experimental results demonstrate its effectiveness on benchmark datasets, including text, image, and sound datasets, in supervised, unsupervised, and semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature

    PubMed Central

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

    Automatic extraction of protein-protein interaction (PPI) pairs from the biomedical literature is a widely examined task in biological information extraction. Currently, many kernel-based approaches, such as linear kernels, tree kernels, graph kernels, and combinations of multiple kernels, have achieved promising results on the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction, known as the Distributed Smoothed Tree Kernel (DSTK), which exploits both syntactic (structural) information and semantic vector information. DSTK comprises distributed trees carrying syntactic information along with distributional semantic vectors representing the semantic information of the sentences or phrases. To generate a robust machine learning model, a feature-based kernel and the DSTK were combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used to evaluate the performance of our system. Experimental results show that our system achieves a better F-score on the five corpora than other state-of-the-art systems. PMID:29099838

  6. Hadamard Kernel SVM with applications for breast cancer outcome predictions.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong

    2017-12-21

    Breast cancer is one of the leading causes of death for women, so it is essential to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome prediction. Kernel SVM has attracted considerable attention for its discriminative power in small-sample pattern recognition problems, but how to select or construct an appropriate kernel for a specific problem still requires further investigation. Here we propose a novel kernel, the Hadamard Kernel, used in conjunction with Support Vector Machines (SVMs) to address breast cancer outcome prediction from gene expression data. On a number of real-world data sets, the Hadamard Kernel outperforms the classical kernels and the correlation kernel in terms of Area under the ROC Curve (AUC). Hadamard Kernel SVM is effective for breast cancer prediction, whether for prognosis or diagnosis, and may benefit patients by guiding therapeutic options. It would also be a valuable addition to the current family of SVM kernels, and we hope it will contribute to the wider biology and related communities.

  7. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature.

    PubMed

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

    Automatic extraction of protein-protein interaction (PPI) pairs from the biomedical literature is a widely examined task in biological information extraction. Currently, many kernel-based approaches, such as linear kernels, tree kernels, graph kernels, and combinations of multiple kernels, have achieved promising results on the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction, known as the Distributed Smoothed Tree Kernel (DSTK), which exploits both syntactic (structural) information and semantic vector information. DSTK comprises distributed trees carrying syntactic information along with distributional semantic vectors representing the semantic information of the sentences or phrases. To generate a robust machine learning model, a feature-based kernel and the DSTK were combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used to evaluate the performance of our system. Experimental results show that our system achieves a better F-score on the five corpora than other state-of-the-art systems.

  8. Telescope Array Control System Based on Wireless Touch Screen Platform

    NASA Astrophysics Data System (ADS)

    Fu, Xia-nan; Huang, Lei; Wei, Jian-yan

    2017-10-01

    The Ground-based Wide Angle Cameras (GWAC) are the ground-based observational facility for the SVOM (Space Variable Object Monitor) astronomical satellite, a Sino-French collaboration, and Mini-GWAC is the pathfinder and supplement of GWAC. In the context of the Mini-GWAC telescope array, this paper introduces the design and implementation of a telescope array control system based on a wireless touch-screen platform. We describe the development and implementation of the system in detail in terms of its control principles, hardware structure, software design, and experimental tests. The system uses a touch-control PC running Windows CE as the upper computer, while a wireless transceiver module and a PLC (Programmable Logic Controller) form the system kernel. It has the advantages of low cost, reliable data transmission, and simple operation, and the control system has been applied to Mini-GWAC successfully.

  9. Modeling returns volatility: Realized GARCH incorporating realized risk measure

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Ruan, Qingsong; Li, Jianfeng; Li, Ye

    2018-06-01

    This study applies realized GARCH models by introducing several risk measures of intraday returns into the measurement equation, to model the daily volatility of E-mini S&P 500 index futures returns. Besides using the conventional realized measures, realized volatility and realized kernel as our benchmarks, we also use generalized realized risk measures, realized absolute deviation, and two realized tail risk measures, realized value-at-risk and realized expected shortfall. The empirical results show that realized GARCH models using the generalized realized risk measures provide better volatility estimation for the in-sample and substantial improvement in volatility forecasting for the out-of-sample. In particular, the realized expected shortfall performs best for all of the alternative realized measures. Our empirical results reveal that future volatility may be more attributable to present losses (risk measures). The results are robust to different sample estimation windows.
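
    For concreteness, the sketch below computes standard daily realized measures from intraday returns: realized variance, a realized absolute deviation, and empirical intraday value-at-risk and expected shortfall. These are textbook forms under assumed toy data; the paper's exact definitions (in particular of the realized kernel) may differ.

        import numpy as np

        def realized_measures(intraday_returns, alpha=0.05):
            """Daily realized measures from intraday returns (common textbook forms;
            the paper's precise definitions may differ)."""
            r = np.asarray(intraday_returns, dtype=float)
            rv = np.sum(r ** 2)                              # realized variance
            rad = np.sum(np.abs(r))                          # realized absolute deviation (assumed form)
            q = np.quantile(r, alpha)                        # empirical intraday alpha-quantile
            rvar = -q                                        # realized value-at-risk
            res = -r[r <= q].mean()                          # realized expected shortfall
            return {"RV": rv, "RAD": rad, "RVaR": rvar, "RES": res}

        # Example: 78 synthetic five-minute returns for one trading day
        rng = np.random.default_rng(1)
        print(realized_measures(rng.normal(0.0, 0.001, 78)))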

  10. Pse-Analysis: a python package for DNA/RNA and protein/ peptide sequence analysis based on pseudo components and kernel methods.

    PubMed

    Liu, Bin; Wu, Hao; Zhang, Deyuan; Wang, Xiaolong; Chou, Kuo-Chen

    2017-02-21

    To expedite the pace in conducting genome/proteome analysis, we have developed a Python package called Pse-Analysis. The powerful package can automatically complete the following five procedures: (1) sample feature extraction, (2) optimal parameter selection, (3) model training, (4) cross validation, and (5) evaluating prediction quality. All the work a user needs to do is to input a benchmark dataset along with the query biological sequences concerned. Based on the benchmark dataset, Pse-Analysis will automatically construct an ideal predictor, followed by yielding the predicted results for the submitted query samples. All the aforementioned tedious jobs can be automatically done by the computer. Moreover, the multiprocessing technique was adopted to enhance computational speed by about 6-fold. The Pse-Analysis Python package is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/Pse-Analysis/, and can be directly run on Windows, Linux, and Unix.
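
    Since the Pse-Analysis API itself is not reproduced here, the sketch below only mirrors the same five-step workflow with generic scikit-learn tools, using a hypothetical k-mer counting function in place of the package's pseudo-component features; the dataset, parameters, and feature function are assumptions.

        from itertools import product
        import numpy as np
        from sklearn.model_selection import GridSearchCV, cross_val_score
        from sklearn.svm import SVC

        def kmer_features(seq, k=2, alphabet="ACGT"):
            """Hypothetical feature extraction: normalized k-mer counts of a DNA
            sequence (a stand-in for Pse-Analysis' pseudo-component features)."""
            kmers = ["".join(p) for p in product(alphabet, repeat=k)]
            counts = np.array([seq.count(km) for km in kmers], dtype=float)
            return counts / max(counts.sum(), 1.0)

        # (1) feature extraction from a toy benchmark dataset
        seqs = ["ACGTACGTGG", "TTTTACGATC", "GGGGCCCCAA", "ACACACACAC"] * 10
        labels = np.array([1, 0, 0, 1] * 10)
        X = np.vstack([kmer_features(s) for s in seqs])

        # (2) optimal parameter selection and (3) model training
        search = GridSearchCV(SVC(kernel="rbf"), {"C": [1, 10], "gamma": ["scale", 0.1]}, cv=3)
        search.fit(X, labels)

        # (4) cross validation and (5) evaluating prediction quality
        scores = cross_val_score(search.best_estimator_, X, labels, cv=5)
        print(search.best_params_, scores.mean())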

  11. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    NASA Astrophysics Data System (ADS)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

    The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor-product kernel and lower-bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least-squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing in more depth the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
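
    A simplified sketch of the two ingredients named above, on assumed toy data: truncated-SVD filtering of the measured data and a single-parameter Tikhonov-regularized least-squares inversion. The actual 2DUPEN/I2DUPEN algorithms use locally adapted multi-parameter regularization, mean windowing, and the full 2D tensor-product kernel.

        import numpy as np

        rng = np.random.default_rng(0)

        def svd_filter(S, rank):
            """Project the measured data matrix onto its leading singular subspace
            (a simple stand-in for the SVD filtering step)."""
            U, s, Vt = np.linalg.svd(S, full_matrices=False)
            return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

        def tikhonov_lsq(K, s, lam):
            """Single-parameter Tikhonov solution of K f = s (illustrative only)."""
            return np.linalg.solve(K.T @ K + lam * np.eye(K.shape[1]), K.T @ s)

        # Toy 1D relaxation example: K_ij = exp(-t_i / T_j), noisy data, regularized fit
        t = np.linspace(0.01, 1.0, 200)
        T = np.linspace(0.05, 0.5, 50)
        K = np.exp(-t[:, None] / T[None, :])
        f_true = np.exp(-0.5 * ((T - 0.2) / 0.03) ** 2)
        s_noisy = K @ f_true + 0.01 * rng.normal(size=t.size)
        f_rec = tikhonov_lsq(K, s_noisy, lam=1e-2)

        # SVD filtering applied to a noisy, nearly low-rank 2D data matrix
        S = np.outer(K @ f_true, K @ f_true) + 0.01 * rng.normal(size=(t.size, t.size))
        S_filtered = svd_filter(S, rank=5)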

  12. Quantitative architectural analysis: a new approach to cortical mapping.

    PubMed

    Schleicher, A; Palomero-Gallagher, N; Morosan, P; Eickhoff, S B; Kowalski, T; de Vos, K; Amunts, K; Zilles, K

    2005-12-01

    Recent progress in anatomical and functional MRI has revived the demand for a reliable topographic map of the human cerebral cortex. To date, interpretations of specific activations found in functional imaging studies and their topographical analysis in a spatial reference system are often still based on classical architectonic maps. The most commonly used reference atlas is that of Brodmann and his successors, despite its severe inherent drawbacks. One obvious weakness of traditional architectural mapping is the subjective nature of localising borders between cortical areas by a purely visual, microscopical examination of histological specimens. To overcome this limitation, more objective, quantitative mapping procedures have been established in recent years. The quantification of the neocortical laminar pattern by defining intensity line profiles across the cortical layers has a long tradition. In recent years, this method has been extended to enable a reliable, reproducible mapping of the cortex based on image analysis and multivariate statistics. Methodological approaches to such algorithm-based cortical mapping have been published for various architectural modalities. In our contribution, principles of algorithm-based mapping are described for cyto- and receptorarchitecture. In a cytoarchitectural parcellation of the human auditory cortex using a sliding window procedure, the classical areal pattern of the human superior temporal gyrus was revised by replacing Brodmann's areas 41, 42, 22 and parts of area 21 with a novel, more detailed map. An extension and optimisation of the sliding window procedure to the specific requirements of receptorarchitectonic mapping is also described, using the macaque central sulcus and adjacent superior parietal lobule as a second, biologically independent example. Algorithm-based mapping procedures, however, are not limited to these two architectural modalities, but can be applied to all images in which a laminar cortical pattern can be detected and quantified, e.g. myeloarchitectonic and in vivo high-resolution MR imaging. Defining cortical borders based on changes in cortical lamination in high-resolution, in vivo structural MR images will result in a rapid increase of our knowledge of the structural parcellation of the human cerebral cortex.

  13. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    PubMed

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is always symmetric and positive, always provides 1.0 for self-similarity, and can be used directly with Support Vector Machines (SVMs) in classification problems, unlike the normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the running time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than the local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram-based mismatch kernels, the hidden Markov model based SAM and Fisher kernel, and the protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed: three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
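
    The published LZW-Kernel has its own formula; the sketch below only illustrates the underlying ingredient: build each sequence's LZW code-word dictionary in a single pass and score similarity by a normalized code-word overlap, which equals 1.0 for self-similarity. The overlap score is an assumption, not the paper's kernel.

        def lzw_codewords(seq):
            """Return the set of code words that an LZW compressor learns while encoding seq."""
            dictionary = {c for c in seq}     # start from the single characters present
            w = ""
            for c in seq:
                wc = w + c
                if wc in dictionary:
                    w = wc                    # keep extending the current phrase
                else:
                    dictionary.add(wc)        # new code word learned by LZW
                    w = c
            return dictionary

        def lzw_similarity(a, b):
            """Normalized code-word overlap (illustrative score; the published
            LZW-Kernel is defined differently)."""
            A, B = lzw_codewords(a), lzw_codewords(b)
            return len(A & B) / max(len(A), len(B))

        print(lzw_similarity("MKVLAAGVL", "MKVLSAGVL"))
        print(lzw_similarity("MKVLAAGVL", "MKVLAAGVL"))  # -> 1.0 for self-similarity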

  14. A framework for optimal kernel-based manifold embedding of medical image data.

    PubMed

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensional reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim to generate the most optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.
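
    A hedged sketch of kernel selection for manifold embedding: embed the data with each candidate kernel via Kernel PCA and keep the kernel whose embedding best preserves local neighborhoods. The selection score used here (scikit-learn's trustworthiness) and the candidate pool are only stand-ins for the selection measures and kernels studied in the paper.

        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.manifold import trustworthiness

        def pick_kernel(X, candidates, n_components=2, n_neighbors=5):
            """Embed X with each candidate kernel and keep the one whose low-dimensional
            embedding best preserves local neighborhoods (illustrative criterion)."""
            best = None
            for name, params in candidates:
                emb = KernelPCA(n_components=n_components, kernel=name, **params).fit_transform(X)
                score = trustworthiness(X, emb, n_neighbors=n_neighbors)
                if best is None or score > best[2]:
                    best = (name, params, score)
            return best

        X = np.random.default_rng(0).normal(size=(200, 10))   # synthetic stand-in data
        candidates = [("rbf", {"gamma": 0.1}), ("rbf", {"gamma": 1.0}), ("poly", {"degree": 3})]
        print(pick_kernel(X, candidates))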

  15. Evaluating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Wilton, Donald R.; Champagne, Nathan J.

    2008-01-01

    Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.

  16. Completing sparse and disconnected protein-protein network by deep learning.

    PubMed

    Huang, Lei; Liao, Li; Wu, Cathy H

    2018-03-22

    Protein-protein interaction (PPI) prediction remains a central task in systems biology to achieve a better and holistic understanding of cellular and intracellular processes. Recently, an increasing number of computational methods have shifted from pair-wise prediction to network level prediction. Many of the existing network level methods predict PPIs under the assumption that the training network should be connected. However, this assumption greatly affects the prediction power and limits the application area because the current golden standard PPI networks are usually very sparse and disconnected. Therefore, how to effectively predict PPIs based on a training network that is sparse and disconnected remains a challenge. In this work, we developed a novel PPI prediction method based on deep learning neural network and regularized Laplacian kernel. We use a neural network with an autoencoder-like architecture to implicitly simulate the evolutionary processes of a PPI network. Neurons of the output layer correspond to proteins and are labeled with values (1 for interaction and 0 for otherwise) from the adjacency matrix of a sparse disconnected training PPI network. Unlike autoencoder, neurons at the input layer are given all zero input, reflecting an assumption of no a priori knowledge about PPIs, and hidden layers of smaller sizes mimic ancient interactome at different times during evolution. After the training step, an evolved PPI network whose rows are outputs of the neural network can be obtained. We then predict PPIs by applying the regularized Laplacian kernel to the transition matrix that is built upon the evolved PPI network. The results from cross-validation experiments show that the PPI prediction accuracies for yeast data and human data measured as AUC are increased by up to 8.4 and 14.9% respectively, as compared to the baseline. Moreover, the evolved PPI network can also help us leverage complementary information from the disconnected training network and multiple heterogeneous data sources. Tested by the yeast data with six heterogeneous feature kernels, the results show our method can further improve the prediction performance by up to 2%, which is very close to an upper bound that is obtained by an Approximate Bayesian Computation based sampling method. The proposed evolution deep neural network, coupled with regularized Laplacian kernel, is an effective tool in completing sparse and disconnected PPI networks and in facilitating integration of heterogeneous data sources.
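
    A small sketch of the regularized Laplacian kernel step, K = (I + alpha*L)^(-1) with L = D - A, applied here directly to a toy adjacency matrix to score candidate links. In the paper the kernel is applied to a transition matrix built from the network evolved by the neural network; the value of alpha and the toy graph below are assumptions.

        import numpy as np

        def regularized_laplacian_kernel(A, alpha=0.5):
            """K = (I + alpha * L)^-1 with L = D - A the graph Laplacian."""
            D = np.diag(A.sum(axis=1))
            L = D - A
            return np.linalg.inv(np.eye(A.shape[0]) + alpha * L)

        # Toy 5-protein network; higher kernel value suggests a more likely missing link
        A = np.array([[0, 1, 1, 0, 0],
                      [1, 0, 1, 0, 0],
                      [1, 1, 0, 1, 0],
                      [0, 0, 1, 0, 1],
                      [0, 0, 0, 1, 0]], dtype=float)
        K = regularized_laplacian_kernel(A)
        print(K[0, 4], K[0, 3])   # scores for the candidate links (0,4) and (0,3)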

  17. Efficient Scalable Median Filtering Using Histogram-Based Operations.

    PubMed

    Green, Oded

    2018-05-01

    Median filtering is a smoothing technique for noise removal in images. While there are various implementations of median filtering for a single-core CPU, there are few implementations for accelerators and multi-core systems. Many parallel implementations of median filtering use a sorting algorithm to rearrange the values within a filtering window and take the median of the sorted values. While using sorting algorithms allows for simple parallel implementations, the cost of the sorting becomes prohibitive as the filtering windows grow, making such algorithms, sequential and parallel alike, inefficient. In this work, we introduce the first software parallel median filtering that is not sorting-based. The new algorithm uses efficient histogram-based operations, which reduce its computational requirements while also accessing the image fewer times. We show an implementation of our algorithm for both the CPU and NVIDIA's CUDA-supported graphics processing unit (GPU). The new algorithm is compared with several other leading CPU and GPU implementations. The CPU implementation shows near-perfect linear scaling, with a corresponding speedup on a quad-core system. The GPU implementation is several orders of magnitude faster than the other GPU implementations for mid-size median filters. For small kernels, comparison-based approaches are preferable as fewer operations are required. Lastly, the new algorithm is open-source and can be found in the OpenCV library.
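
    A hedged 1D sketch of the histogram idea (in the spirit of Huang's classic running-median algorithm, not the paper's parallel 2D design): keep a 256-bin histogram of the window, update it incrementally as the window slides, and read the median from cumulative counts.

        import numpy as np

        def running_median_1d(x, k):
            """Running median of 8-bit data with odd window size k, using a 256-bin
            histogram updated incrementally as the window slides."""
            assert k % 2 == 1
            x = np.asarray(x, dtype=np.uint8)
            target = k // 2 + 1
            out = np.empty(len(x) - k + 1, dtype=np.uint8)
            hist = np.zeros(256, dtype=int)
            for v in x[:k]:                     # histogram of the first window
                hist[v] += 1
            for i in range(len(out)):
                if i > 0:                       # slide: drop left sample, add right sample
                    hist[x[i - 1]] -= 1
                    hist[x[i + k - 1]] += 1
                cum = 0
                for value in range(256):        # median = first bin reaching the target count
                    cum += hist[value]
                    if cum >= target:
                        out[i] = value
                        break
            return out

        print(running_median_1d([10, 200, 30, 40, 250, 60, 70], k=3))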

  18. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  19. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  20. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  1. graphkernels: R and Python packages for graph comparison

    PubMed Central

    Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-01-01

    Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. Availability and implementation: The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact: mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary information: Supplementary data are available online at Bioinformatics. PMID:29028902

  2. Aflatoxin variability in pistachios.

    PubMed Central

    Mahoney, N E; Rodriguez, S B

    1996-01-01

    Pistachio fruit components, including hulls (mesocarps and epicarps), seed coats (testas), and kernels (seeds), all contribute to variable aflatoxin content in pistachios. Fresh pistachio kernels were individually inoculated with Aspergillus flavus and incubated 7 or 10 days. Hulled, shelled kernels were either left intact or wounded prior to inoculation. Wounded kernels, with or without the seed coat, were readily colonized by A. flavus and after 10 days of incubation contained 37 times more aflatoxin than similarly treated unwounded kernels. The aflatoxin levels in the individual wounded pistachios were highly variable. Neither fungal colonization nor aflatoxin was detected in intact kernels without seed coats. Intact kernels with seed coats had limited fungal colonization and low aflatoxin concentrations compared with their wounded counterparts. Despite substantial fungal colonization of wounded hulls, aflatoxin was not detected in hulls. Aflatoxin levels were significantly lower in wounded kernels with hulls than in kernels of hulled pistachios. Both the seed coat and a water-soluble extract of hulls suppressed aflatoxin production by A. flavus. PMID:8919781

  3. graphkernels: R and Python packages for graph comparison.

    PubMed

    Sugiyama, Mahito; Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-02-01

    Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary data are available online at Bioinformatics. © The Author(s) 2017. Published by Oxford University Press.
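
    To make the baseline concrete without guessing at the package's API, the sketch below implements the simplest kernel mentioned, a label-histogram kernel, from scratch: each graph is represented by its node-label counts and the Gram matrix is the matrix of their dot products. The label alphabet and toy graphs are assumptions; the graphkernels package itself provides this and far richer kernels.

        import numpy as np

        def label_histogram_kernel(graphs, labels=("C", "N", "O", "H")):
            """Gram matrix of the label-histogram baseline kernel: each graph is mapped
            to a vector of node-label counts, and K = Phi @ Phi.T."""
            phi = np.array([[g.count(l) for l in labels] for g in graphs], dtype=float)
            return phi @ phi.T

        # Each "graph" here is reduced to its multiset of node labels; edges are
        # ignored by this baseline kernel (unlike random-walk or WL kernels).
        graphs = [list("CCCOHH"), list("CCNOH"), list("OOHH")]
        print(label_histogram_kernel(graphs))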

  4. Investigation of various energy deposition kernel refinements for the convolution/superposition method

    PubMed Central

    Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.

    2013-01-01

    Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we found that depth was the most dominant factor affecting the pattern of energy deposition; however, the effects of field size and off-axis distance were not negligible. For the material-specific kernels, we found that as the density of the material increased, more energy was deposited laterally by charged particles, as opposed to in the forward direction. Thus, density scaling of water kernels becomes a worse approximation as the density and the effective atomic number of the material differ more from water. Implementation of spatially variant, polyenergetic kernels increased the percent depth dose value at 25 cm depth by 2.1%–5.8% depending on the field size, while implementation of titanium kernels gave 4.9% higher dose upstream of the metal cavity (i.e., higher backscatter dose) and 8.2% lower dose downstream of the cavity. Conclusions: Of the various kernel refinements investigated, inclusion of depth-dependent and metal-specific kernels into the C/S method has the greatest potential to improve dose calculation accuracy. Implementation of spatially variant polyenergetic kernels resulted in a harder depth dose curve and thus has the potential to affect beam modeling parameters obtained in the commissioning process. For metal implants, the C/S algorithms generally underestimate the dose upstream and overestimate the dose downstream of the implant. Implementation of a metal-specific kernel mitigated both of these errors. PMID:24320507
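
    As a conceptual illustration only, the snippet below expresses the convolution/superposition idea in 1D: dose as TERMA convolved with an energy-deposition kernel. The attenuation and kernel values are toy assumptions; clinical C/S algorithms additionally density-scale the kernel along each ray for heterogeneities and typically use a single polyenergetic kernel, the two approximations whose impact the study quantifies.

        import numpy as np
        from scipy.signal import fftconvolve

        # Conceptual 1D convolution/superposition: dose = TERMA (*) energy-deposition kernel.
        depth = np.arange(200) * 0.1                        # depth grid in cm (toy values)
        terma = np.exp(-0.05 * depth)                       # toy exponential TERMA
        kernel = np.exp(-0.3 * np.abs(np.arange(-30, 31)))  # toy point-spread-like EDK
        kernel /= kernel.sum()                              # normalize deposited energy
        dose = fftconvolve(terma, kernel, mode="same")      # superposition as a convolution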

  5. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    PubMed

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
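
    A hedged sketch of the weighted eigenfunction expansion on a discrete Laplacian: expand the signal in Laplacian eigenvectors and damp each coefficient by exp(-lambda_i * t). The paper works with Laplace-Beltrami eigenfunctions on surface meshes; the dense path-graph Laplacian, bandwidth, and signal below are assumptions.

        import numpy as np

        def heat_kernel_smooth(L, f, t=1.0, n_eigs=None):
            """Heat-kernel smoothing of a signal f on a mesh/graph with Laplacian L:
            f_t = sum_i exp(-lambda_i * t) * <f, psi_i> * psi_i."""
            lam, psi = np.linalg.eigh(L)               # eigenvalues / eigenvectors
            if n_eigs is not None:
                lam, psi = lam[:n_eigs], psi[:, :n_eigs]
            coeffs = psi.T @ f                         # expansion coefficients <f, psi_i>
            return psi @ (np.exp(-lam * t) * coeffs)   # damped reconstruction

        # Toy example: path graph with 5 vertices, noisy linear signal
        A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
        L = np.diag(A.sum(axis=1)) - A
        f = np.linspace(0, 1, 5) + 0.1 * np.random.default_rng(0).normal(size=5)
        print(heat_kernel_smooth(L, f, t=0.5))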

  6. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…
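
    A simplified sketch of Gaussian-kernel continuization of a discrete score distribution, F_h(x) = sum_j r_j * Phi((x - x_j)/h). The full kernel equating method additionally applies a linear adjustment so that the continuized distribution keeps the mean and variance of the discrete one; that step and the bandwidth choice are omitted here, and the toy score distribution is an assumption.

        import numpy as np
        from scipy.stats import norm

        def continuized_cdf(scores, probs, h):
            """Simplified Gaussian-kernel continuization of a discrete score
            distribution (mean/variance-preserving rescaling omitted)."""
            scores = np.asarray(scores, dtype=float)
            probs = np.asarray(probs, dtype=float)
            return lambda x: np.sum(probs * norm.cdf((x - scores) / h))

        # Discrete score distribution on 0..5 and its continuized CDF evaluated at 2.3
        r = np.array([0.05, 0.15, 0.30, 0.25, 0.15, 0.10])
        F = continuized_cdf(np.arange(6), r, h=0.6)
        print(F(2.3))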

  7. 7 CFR 810.204 - Grades and grade requirements for Six-rowed Malting barley and Six-rowed Blue Malting barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...— Damaged kernels 1 (percent) Foreign material (percent) Other grains (percent) Skinned and broken kernels....0 10.0 15.0 1 Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered against sound barley. Notes: Malting barley shall not be infested in accordance with...

  8. Parametric Deformation of Discrete Geometry for Aerodynamic Shape Design

    NASA Technical Reports Server (NTRS)

    Anderson, George R.; Aftosmis, Michael J.; Nemec, Marian

    2012-01-01

    We present a versatile discrete geometry manipulation platform for aerospace vehicle shape optimization. The platform is based on the geometry kernel of an open-source modeling tool called Blender and offers access to four parametric deformation techniques: lattice, cage-based, skeletal, and direct manipulation. Custom deformation methods are implemented as plugins, and the kernel is controlled through a scripting interface. Surface sensitivities are provided to support gradient-based optimization. The platform architecture allows the use of geometry pipelines, where multiple modelers are used in sequence, enabling manipulation difficult or impossible to achieve with a constructive modeler or deformer alone. We implement an intuitive custom deformation method in which a set of surface points serve as the design variables and user-specified constraints are intrinsically satisfied. We test our geometry platform on several design examples using an aerodynamic design framework based on Cartesian grids. We examine inverse airfoil design and shape matching and perform lift-constrained drag minimization on an airfoil with thickness constraints. A transport wing-fuselage integration problem demonstrates the approach in 3D. In a final example, our platform is pipelined with a constructive modeler to parabolically sweep a wingtip while applying a 1-G loading deformation across the wingspan. This work is an important first step towards the larger goal of leveraging the investment of the graphics industry to improve the state-of-the-art in aerospace geometry tools.

  9. Towards Highly Scalable Ab Initio Molecular Dynamics (AIMD) Simulations on the Intel Knights Landing Manycore Processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacquelin, Mathias; De Jong, Wibe A.; Bylaska, Eric J.

    2017-07-03

    The Ab Initio Molecular Dynamics (AIMD) method allows scientists to treat the dynamics of molecular and condensed phase systems while retaining a first-principles-based description of their interactions. This extremely important method has tremendous computational requirements, because the electronic Schrödinger equation, approximated using Kohn-Sham Density Functional Theory (DFT), is solved at every time step. With the advent of manycore architectures, application developers have a significant amount of processing power within each compute node that can only be exploited through massive parallelism. A compute-intensive application such as AIMD forms a good candidate to leverage this processing power. In this paper, we focus on adding thread-level parallelism to the plane wave DFT methodology implemented in NWChem. Through a careful optimization of tall-skinny matrix products, which are at the heart of the Lagrange multiplier and nonlocal pseudopotential kernels, as well as 3D FFTs, our OpenMP implementation delivers excellent strong scaling on the latest Intel Knights Landing (KNL) processor. We assess the efficiency of our Lagrange multiplier kernels by building a Roofline model of the platform, and verify that our implementation is close to the roofline for various problem sizes. Finally, we present strong scaling results on the complete AIMD simulation for a 64-water-molecule test case, which scales up to all 68 cores of the Knights Landing processor.

  10. 7 CFR 51.1413 - Damage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...

  11. 7 CFR 51.1413 - Damage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...

  12. 7 CFR 810.205 - Grades and grade requirements for Two-rowed Malting barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... (percent) Maximum limits of— Wild oats (percent) Foreign material (percent) Skinned and broken kernels... Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered...

  13. Long-term optical stimulation of channelrhodopsin-expressing neurons to study network plasticity

    PubMed Central

    Lignani, Gabriele; Ferrea, Enrico; Difato, Francesco; Amarù, Jessica; Ferroni, Eleonora; Lugarà, Eleonora; Espinoza, Stefano; Gainetdinov, Raul R.; Baldelli, Pietro; Benfenati, Fabio

    2013-01-01

    Neuronal plasticity produces changes in excitability, synaptic transmission, and network architecture in response to external stimuli. Network adaptation to environmental conditions takes place on time scales ranging from a few seconds to days and modulates the entire network dynamics. To study the network response to defined long-term experimental protocols, we set up a system that combines optical and electrophysiological tools embedded in a cell incubator. Primary hippocampal neurons transduced with lentiviruses expressing channelrhodopsin-2/H134R were subjected to various photostimulation protocols over time windows on the order of days. To monitor the effects of light-induced gating of network activity, the stimulated transduced neurons were simultaneously recorded using multi-electrode arrays (MEAs). The developed experimental model allows discerning short-term, long-lasting, and adaptive plasticity responses of the same neuronal network to distinct stimulation frequencies applied over different temporal windows. PMID:23970852

  14. Alview: Portable Software for Viewing Sequence Reads in BAM Formatted Files.

    PubMed

    Finney, Richard P; Chen, Qing-Rong; Nguyen, Cu V; Hsu, Chih Hao; Yan, Chunhua; Hu, Ying; Abawi, Massih; Bian, Xiaopeng; Meerzaman, Daoud M

    2015-01-01

    The name Alview is a contraction of the term Alignment Viewer. Alview is a software tool, compiled to native architecture, for visualizing the alignment of sequencing data. Inputs are files of short-read sequences aligned to a reference genome in the SAM/BAM format and files containing reference genome data. Outputs are visualizations of these aligned short reads. Alview is written in portable C with optional graphical user interface (GUI) code written in C, C++, and Objective-C. The application can run in three different ways: as a web server, as a command line tool, or as a native GUI program. Alview is compatible with Microsoft Windows, Linux, and Apple OS X. It is available as a web demo at https://cgwb.nci.nih.gov/cgi-bin/alview. The source code and Windows/Mac/Linux executables are available via https://github.com/NCIP/alview.

  15. Long-term optical stimulation of channelrhodopsin-expressing neurons to study network plasticity.

    PubMed

    Lignani, Gabriele; Ferrea, Enrico; Difato, Francesco; Amarù, Jessica; Ferroni, Eleonora; Lugarà, Eleonora; Espinoza, Stefano; Gainetdinov, Raul R; Baldelli, Pietro; Benfenati, Fabio

    2013-01-01

    Neuronal plasticity produces changes in excitability, synaptic transmission, and network architecture in response to external stimuli. Network adaptation to environmental conditions takes place on time scales ranging from a few seconds to days, and modulates the entire network dynamics. To study the network response to defined long-term experimental protocols, we set up a system that combines optical and electrophysiological tools embedded in a cell incubator. Primary hippocampal neurons transduced with lentiviruses expressing channelrhodopsin-2/H134R were subjected to various photostimulation protocols over a time window on the order of days. To monitor the effects of light-induced gating of network activity, stimulated transduced neurons were simultaneously recorded using multi-electrode arrays (MEAs). The developed experimental model allows discerning short-term, long-lasting, and adaptive plasticity responses of the same neuronal network to distinct stimulation frequencies applied over different temporal windows.

  16. Multi-pass amplifier architecture for high power laser systems

    DOEpatents

    Manes, Kenneth R; Spaeth, Mary L; Erlandson, Alvin C

    2014-04-01

    A main amplifier system includes a first reflector operable to receive input light through a first aperture and direct the input light along an optical path. The input light is characterized by a first polarization. The main amplifier system also includes a first polarizer operable to reflect light characterized by the first polarization state. The main amplifier system further includes a first and second set of amplifier modules. Each of the first and second set of amplifier modules includes an entrance window, a quarter wave plate, a plurality of amplifier slablets arrayed substantially parallel to each other, and an exit window. The main amplifier system additionally includes a set of mirrors operable to reflect light exiting the first set of amplifier modules to enter the second set of amplifier modules and a second polarizer operable to reflect light characterized by a second polarization state.

  17. Deep neural ensemble for retinal vessel segmentation in fundus images towards achieving label-free angiography.

    PubMed

    Lahiri, A; Roy, Abhijit Guha; Sheet, Debdoot; Biswas, Prabir Kumar

    2016-08-01

    Automated segmentation of retinal blood vessels in label-free fundus images entails a pivotal role in computer-aided diagnosis of ophthalmic pathologies, viz., diabetic retinopathy, hypertensive disorders and cardiovascular diseases. The challenge remains active in medical image analysis research due to varied distribution of blood vessels, which manifest variations in their dimensions of physical appearance against a noisy background. In this paper we formulate the segmentation challenge as a classification task. Specifically, we employ unsupervised hierarchical feature learning using an ensemble of two levels of sparsely trained denoising stacked autoencoders. First-level training with bootstrap samples ensures decoupling, and the second-level ensemble, formed from different network architectures, ensures architectural revision. We show that ensemble training of autoencoders fosters diversity in learning the dictionary of visual kernels for vessel segmentation. A softmax classifier is used for fine-tuning each member autoencoder and multiple strategies are explored for 2-level fusion of ensemble members. On the DRIVE dataset, we achieve a maximum average accuracy of 95.33% with an impressively low standard deviation of 0.003 and a Kappa agreement coefficient of 0.708. Comparison with other major algorithms substantiates the high efficacy of our model.

  18. Optimization of a Lattice Boltzmann Computation on State-of-the-Art Multicore Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel; Carter, Jonathan; Oliker, Leonid

    2009-04-10

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon E5345 (Clovertown), AMD Opteron 2214 (Santa Rosa), AMD Opteron 2356 (Barcelona), Sun T5140 T2+ (Victoria Falls), as well as a QS20 IBM Cell Blade. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 15x improvement compared with the original code at a given concurrency. Additionally, we present detailed analysis of each optimization, which reveals surprising hardware bottlenecks and software challenges for future multicore systems and applications.
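    The heart of this approach is a simple search loop: generate candidate variants of a kernel, time each, and keep the fastest. The sketch below illustrates that loop for a stand-in blocked array update; the kernel, candidate block sizes, and timing method are illustrative choices, not the LBMHD code generator.

```python
# Sketch of search-based auto-tuning: time a kernel for each candidate tuning
# parameter and keep the fastest. The blocked update is a stand-in kernel.
import time
import numpy as np

def blocked_update(a, block):
    up, down = np.roll(a, 1, axis=0), np.roll(a, -1, axis=0)
    out = np.empty_like(a)
    for i0 in range(0, a.shape[0], block):
        i1 = min(i0 + block, a.shape[0])
        out[i0:i1] = 0.5 * (up[i0:i1] + down[i0:i1])
    return out

def autotune(a, candidate_blocks=(16, 64, 256, 1024)):
    best = None
    for block in candidate_blocks:
        t0 = time.perf_counter()
        blocked_update(a, block)
        elapsed = time.perf_counter() - t0
        if best is None or elapsed < best[1]:
            best = (block, elapsed)
    return best          # (best block size, runtime in seconds)

grid = np.random.default_rng(0).random((4096, 4096))
print("best block size and runtime:", autotune(grid))
```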

  19. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    DOE PAGES

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; ...

    2017-03-20

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, widely used in multiphase flows and still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Here, preliminary results show up to an 8.5x improvement at the selected kernel level with the first approach, and up to a 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.

  20. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, widely used in multiphase flows and still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Here, preliminary results show up to an 8.5x improvement at the selected kernel level with the first approach, and up to a 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.

  1. Database technology and the management of multimedia data in the Mirror project

    NASA Astrophysics Data System (ADS)

    de Vries, Arjen P.; Blanken, H. M.

    1998-10-01

    Multimedia digital libraries require an open distributed architecture instead of a monolithic database system. In the Mirror project, we use the Monet extensible database kernel to manage different representations of multimedia objects. To maintain independence between content, meta-data, and the creation of meta-data, we allow distribution of data and operations using CORBA. This open architecture introduces new problems for data access. From an end user's perspective, the problem is how to search the available representations to fulfill an actual information need; the conceptual gap between human perceptual processes and the meta-data is too large. From a system's perspective, several representations of the data may semantically overlap or be irrelevant. We address these problems with an iterative query process and active user participation through relevance feedback. A retrieval model based on inference networks assists the user with query formulation. The integration of this model into the database design has two advantages. First, the user can query both the logical and the content structure of multimedia objects. Second, the use of different data models in the logical and the physical database design provides data independence and allows algebraic query optimization. We illustrate query processing with a music retrieval application.

  2. Detection of ochratoxin A contamination in stored wheat using near-infrared hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Senthilkumar, T.; Jayas, D. S.; White, N. D. G.; Fields, P. G.; Gräfenhan, T.

    2017-03-01

    A near-infrared (NIR) hyperspectral imaging system was used to detect five concentration levels of ochratoxin A (OTA) in contaminated wheat kernels. The wheat kernels artificially inoculated with two different OTA-producing Penicillium verrucosum strains, two different non-toxigenic P. verrucosum strains, and sterile control wheat kernels were subjected to NIR hyperspectral imaging. The acquired three-dimensional data were reshaped into readable two-dimensional data. Principal Component Analysis (PCA) was applied to the two-dimensional data to identify the key wavelengths which had greater significance in detecting OTA contamination in wheat. Statistical and histogram features extracted at the key wavelengths were used in linear, quadratic and Mahalanobis statistical discriminant models to differentiate between the sterile control, five concentration levels of OTA contamination in wheat kernels, and five infection levels of non-OTA-producing P. verrucosum inoculated wheat kernels. The classification models differentiated sterile control samples from OTA-contaminated wheat kernels and non-OTA-producing P. verrucosum inoculated wheat kernels with 100% accuracy. The classification models also differentiated between five concentration levels of OTA-contaminated wheat kernels and between five infection levels of non-OTA-producing P. verrucosum inoculated wheat kernels with a correct classification of more than 98%. The non-OTA-producing P. verrucosum inoculated wheat kernels and OTA-contaminated wheat kernels subjected to hyperspectral imaging provided different spectral patterns.
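    A minimal sketch of the analysis chain described above (PCA to pick out influential wavelengths, followed by linear and quadratic discriminant classification) is shown below on synthetic spectra; the band count, class structure, and contamination signature are invented placeholders, not the wheat/OTA data.

```python
# Sketch: PCA-based key-wavelength selection followed by discriminant models.
# The spectra are synthetic; bands 40-60 carry an artificial contamination shift.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_bands = 60, 120
X = rng.normal(size=(3 * n_per_class, n_bands))
y = np.repeat(np.arange(3), n_per_class)        # control + two contamination levels
X[:, 40:60] += 2.0 * y[:, None]                 # contamination shifts a band window

pca = PCA(n_components=5).fit(X)
# Bands with the largest loadings on the first component act as "key" wavelengths.
key_bands = np.argsort(np.abs(pca.components_[0]))[::-1][:10]
print("key band indices:", np.sort(key_bands))

for model in (LinearDiscriminantAnalysis(), QuadraticDiscriminantAnalysis()):
    acc = cross_val_score(model, X[:, key_bands], y, cv=5).mean()
    print(type(model).__name__, "CV accuracy:", round(acc, 3))
```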

  3. Application of kernel method in fluorescence molecular tomography

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Baikejiang, Reheman; Li, Changqing

    2017-02-01

    Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem. Anatomical guidance can make FMT reconstruction more efficient. We have developed a kernel method to introduce the anatomical guidance into FMT robustly and easily. The kernel method comes from machine learning for pattern analysis and is an efficient way to represent anatomical features. For the finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. Then the fluorophore concentration at each node is represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we have a new system matrix by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process. We convert the FMT reconstruction problem into the kernel coefficient reconstruction problem. The desired fluorophore concentration at each node can be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance can be obtained directly from the anatomical image and is included in the forward modeling. One of the advantages is that we do not need to segment the anatomical image for the targets and background.
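    The core algebra of this kernel trick can be written in a few lines: the unknown image is expressed as x = K alpha, so the forward model A x = b becomes (A K) alpha = b and the kernel coefficients alpha are reconstructed instead of x. The toy sketch below uses random stand-ins for the sensitivity matrix and anatomical features and a plain least-squares solve in place of the iterative reconstruction.

```python
# Toy sketch of the kernel method: build K from anatomical features, replace
# the system matrix A with A @ K, solve for kernel coefficients, recover x = K @ alpha.
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_meas = 200, 80
anat = rng.normal(size=(n_nodes, 3))            # anatomical feature vector per node

# Gaussian kernel matrix built from anatomical features (bandwidth is a free choice).
d2 = ((anat[:, None, :] - anat[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * 0.5 ** 2))

A = rng.normal(size=(n_meas, n_nodes))          # stand-in sensitivity matrix
x_true = rng.random(n_nodes)
b = A @ x_true                                  # simulated measurements

AK = A @ K                                      # new system matrix
alpha, *_ = np.linalg.lstsq(AK, b, rcond=None)  # kernel coefficient vector
x_rec = K @ alpha                               # fluorophore concentration per node
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```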

  4. Credit scoring analysis using kernel discriminant

    NASA Astrophysics Data System (ADS)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    A credit scoring model is an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, such as the normal, Epanechnikov, biweight, and triweight kernels. The accuracies of the models were compared using data from a financial institution in Indonesia. The results show that the kernel discriminant can be an alternative method for determining who is eligible for a credit loan. For the data we use, the normal kernel is the relevant choice for credit scoring with the kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
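    A minimal sketch of a non-parametric kernel discriminant is shown below: a kernel density estimate is fitted per class and a new applicant is assigned to the class with the highest class-conditional density times prior. The data, bandwidth, and two-class setup are invented for illustration; scikit-learn's KernelDensity supports the Gaussian and Epanechnikov kernels used here, while biweight and triweight kernels would need a custom implementation.

```python
# Kernel discriminant sketch: per-class kernel density estimates combined with
# class priors, evaluated on synthetic two-dimensional "applicant" features.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(2)
X_good = rng.normal(loc=[1.0, 0.5], scale=0.8, size=(150, 2))
X_bad = rng.normal(loc=[-0.5, -0.5], scale=1.0, size=(100, 2))
X = np.vstack([X_good, X_bad])
y = np.array([1] * 150 + [0] * 100)

def kernel_discriminant(X_train, y_train, X_new, kernel="gaussian", bandwidth=0.5):
    classes = np.unique(y_train)
    log_scores = []
    for c in classes:
        kde = KernelDensity(kernel=kernel, bandwidth=bandwidth).fit(X_train[y_train == c])
        prior = np.mean(y_train == c)
        log_scores.append(kde.score_samples(X_new) + np.log(prior))  # log density + log prior
    return classes[np.argmax(np.vstack(log_scores), axis=0)]

pred = kernel_discriminant(X, y, X, kernel="epanechnikov")
print("training accuracy:", (pred == y).mean())
```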

  5. Unified Heat Kernel Regression for Diffusion, Kernel Smoothing and Wavelets on Manifolds and Its Application to Mandible Growth Modeling in CT Images

    PubMed Central

    Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.

    2014-01-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435
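    In one dimension the idea can be illustrated with a graph Laplacian standing in for the Laplace-Beltrami operator: noisy data are expanded in the Laplacian eigenvectors and each coefficient is weighted by exp(-lambda_j * t), which is exactly a heat-kernel-weighted eigenfunction expansion. The grid, signal, and bandwidth below are illustrative choices, not the mandible surface data.

```python
# 1-D analogue of heat kernel regression: eigenfunction expansion with
# exp(-lambda * t) weights, using a path-graph Laplacian.
import numpy as np

n, t = 200, 5.0
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # path-graph Laplacian
L[0, 0] = L[-1, -1] = 1.0                               # Neumann-like boundary rows
lam, psi = np.linalg.eigh(L)                            # eigenvalues, eigenvectors

x = np.linspace(0, 1, n)
signal = np.sin(2 * np.pi * x)
noisy = signal + 0.3 * np.random.default_rng(3).normal(size=n)

coeffs = psi.T @ noisy                                  # <f, psi_j>
smoothed = psi @ (np.exp(-lam * t) * coeffs)            # heat-kernel weighted expansion
print("RMSE noisy   :", np.sqrt(np.mean((noisy - signal) ** 2)))
print("RMSE smoothed:", np.sqrt(np.mean((smoothed - signal) ** 2)))
```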

  6. Implementation of a frame-based representation in CLIPS

    NASA Technical Reports Server (NTRS)

    Assal, Hisham; Myers, Leonard

    1990-01-01

    Knowledge representation is one of the major concerns in expert systems. The representation of domain-specific knowledge should agree with the nature of the domain entities and their use in the real world. For example, architectural applications deal with objects and entities such as spaces, walls, and windows. A natural way of representing these architectural entities is provided by frames. This research explores the potential of using the expert system shell CLIPS, developed by NASA, to implement a frame-based representation that can accommodate architectural knowledge. These frames are similar to, but quite distinct from, the 'template' construct in version 4.3 of CLIPS. Templates support only the grouping of related information and the assignment of default values to template fields. In addition to these features, frames provide other capabilities, including definition of classes, inheritance between classes and subclasses, relation of objects of different classes with 'has-a', association of methods (demons) of different types (standard and user-defined) to fields (slots), and creation of new fields at run-time. This frame-based representation is implemented completely in CLIPS. No change to the source code is necessary.

  7. Watt-level single-frequency tunable neodymium MOPA fiber laser operating at 915-937 nm

    NASA Astrophysics Data System (ADS)

    Rota-Rodrigo, S.; Gouhier, B.; Laroche, M.; Zhao, J.; Canuel, B.; Bertoldi, A.; Bouyer, P.; Traynor, N.; Cadier, B.; Robin, T.; Santarelli, G.

    2018-02-01

    We have developed a Watt-level single-frequency tunable fiber laser in the 915-937 nm spectral window. The laser is based on a neodymium-doped fiber master oscillator power amplifier architecture, with two amplification stages using a 20 mW extended cavity diode laser as seed. The system output power is higher than 2 W from 921 to 933 nm, with a stability better than 1.4% and a low relative intensity noise.

  8. Neurobiology as Information Physics

    PubMed Central

    Street, Sterling

    2016-01-01

    This article reviews thermodynamic relationships in the brain in an attempt to consolidate current research in systems neuroscience. The present synthesis supports proposals that thermodynamic information in the brain can be quantified to an appreciable degree of objectivity, that many qualitative properties of information in systems of the brain can be inferred by observing changes in thermodynamic quantities, and that many features of the brain’s anatomy and architecture illustrate relatively simple information-energy relationships. The brain may provide a unique window into the relationship between energy and information. PMID:27895560

  9. Environmental Assessment: Demolition of McGuire Central Heat Plant at Joint Base McGuire-Dix-Lakehurst, New Jersey

    DTIC Science & Technology

    2012-06-01

    insulation, boiler, holding tank and duct coverings, floor tiles, window caulking/glazing, and corrugated building siding. The asbestos insulation and...facility, with the Bulk Fuel Storage area and the golf course located between them. BOMARC is located several miles from the proposed solar sites...Architectural Resources The Central Heat Plant was constructed in 1956. It is a flat-roofed building originally rectangular in form, and is now L-shaped. The

  10. Gyrokinetic particle-in-cell optimization on emerging multi- and manycore platforms

    DOE PAGES

    Madduri, Kamesh; Im, Eun-Jin; Ibrahim, Khaled Z.; ...

    2011-03-02

    The next decade of high-performance computing (HPC) systems will see a rapid evolution and divergence of multi- and manycore architectures as power and cooling constraints limit increases in microprocessor clock speeds. Understanding efficient optimization methodologies on diverse multicore designs in the context of demanding numerical methods is one of the greatest challenges faced today by the HPC community. In this paper, we examine the efficient multicore optimization of GTC, a petascale gyrokinetic toroidal fusion code for studying plasma microturbulence in tokamak devices. For GTC's key computational components (charge deposition and particle push), we explore efficient parallelization strategies across a broad range of emerging multicore designs, including the recently-released Intel Nehalem-EX, the AMD Opteron Istanbul, and the highly multithreaded Sun UltraSparc T2+. We also present the first study on tuning gyrokinetic particle-in-cell (PIC) algorithms for graphics processors, using the NVIDIA C2050 (Fermi). Our work discusses several novel optimization approaches for gyrokinetic PIC, including mixed-precision computation, particle binning and decomposition strategies, grid replication, SIMDized atomic floating-point operations, and effective GPU texture memory utilization. Overall, we achieve significant performance improvements of 1.3–4.7× on these complex PIC kernels, despite the inherent challenges of data dependency and locality. Finally, our work also points to several architectural and programming features that could significantly enhance PIC performance and productivity on next-generation architectures.
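    The charge deposition step that this work optimizes is, at its core, a scatter-add from particles to grid points. The 1-D sketch below shows that pattern: np.add.at plays the role of the atomic floating-point adds, and sorting particles by cell index is a simple form of the particle binning mentioned above. Grid size, particle count, and weighting scheme are illustrative.

```python
# 1-D cloud-in-cell charge deposition sketch with particle binning.
import numpy as np

rng = np.random.default_rng(4)
n_grid, n_part = 64, 100_000
pos = rng.random(n_part) * n_grid                    # particle positions in cell units
charge = np.ones(n_part)

# Particle binning: reorder particles by cell index for better memory locality.
cell = np.floor(pos).astype(int)
order = np.argsort(cell)
pos, charge = pos[order], charge[order]

# Linear weighting: each particle deposits to its two nearest grid points.
left = np.floor(pos).astype(int)
frac = pos - left
rho = np.zeros(n_grid)
np.add.at(rho, left, charge * (1.0 - frac))          # serial analogue of atomic adds
np.add.at(rho, (left + 1) % n_grid, charge * frac)   # periodic wrap at the right edge

print("total deposited charge:", rho.sum(), "(should equal", float(n_part), ")")
```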

  11. Correlation and classification of single kernel fluorescence hyperspectral data with aflatoxin concentration in corn kernels inoculated with Aspergillus flavus spores.

    PubMed

    Yao, H; Hruska, Z; Kincaid, R; Brown, R; Cleveland, T; Bhatnagar, D

    2010-05-01

    The objective of this study was to examine the relationship between fluorescence emissions of corn kernels inoculated with Aspergillus flavus and aflatoxin contamination levels within the kernels. Aflatoxin contamination in corn has been a long-standing problem plaguing the grain industry with potentially devastating consequences to corn growers. In this study, aflatoxin-contaminated corn kernels were produced through artificial inoculation of corn ears in the field with toxigenic A. flavus spores. The kernel fluorescence emission data were taken with a fluorescence hyperspectral imaging system when corn kernels were excited with ultraviolet light. Raw fluorescence image data were preprocessed and regions of interest in each image were created for all kernels. The regions of interest were used to extract spectral signatures and statistical information. The aflatoxin contamination level of single corn kernels was then chemically measured using affinity column chromatography. A fluorescence peak shift phenomenon was noted among different groups of kernels with different aflatoxin contamination levels. The fluorescence peak shift was found to move more toward the longer wavelength in the blue region for the highly contaminated kernels and toward the shorter wavelengths for the clean kernels. Highly contaminated kernels were also found to have a lower fluorescence peak magnitude compared with the less contaminated kernels. It was also noted that a general negative correlation exists between measured aflatoxin and the fluorescence image bands in the blue and green regions. The coefficient of determination, r(2), was 0.72 for the multiple linear regression model. The multivariate analysis of variance found that the fluorescence means of four aflatoxin groups, <1, 1-20, 20-100, and ≥100 ng g(-1) (parts per billion), were significantly different from each other at the 0.01 level of alpha. Classification accuracy under a two-class schema ranged from 0.84 to 0.91 when a threshold of either 20 or 100 ng g(-1) was used. Overall, the results indicate that fluorescence hyperspectral imaging may be applicable in estimating aflatoxin content in individual corn kernels.

  12. Architectural Kansei of ‘Wall’ in The Façade Design by Le Corbusier

    NASA Astrophysics Data System (ADS)

    Sendai, Shoichiro

    The purpose of this paper is to discuss the modern architect Le Corbusier's architectural Kansei (sensibility) of the wall in its site environment through an analysis of his façade design, using Œuvres complètes (1910-1965, 8 vols., Les éditions d'architecture, Artemis, Zurich) and the Le Corbusier Archives (1982-1984, 32 vols., Garland Publishing, Inc. and Fondation Le Corbusier, New York, London, Paris). First, I arrange five façade types, according to the explanation by Le Corbusier: ‘fenêtre en longueur (strip window)’, ‘pan de verre (glass wall)’, ‘brise-soleil (sun-breaker)’, ‘loggia’ and ‘claustra’. Through the analysis of the relationship between these types and the design process of each building, we find that Le Corbusier's façade design includes the affirmation and the negation of the ‘wall’ at the same time. In fact, the nature of façade modification during the design process is diverse: increase in transparency, decrease in transparency and spatialization of the façade. That is, Le Corbusier studied the environmental conditions through these façade types, and tried to realize a phenomenal openness. This trial is based on the function of architectural Kansei as correspondence between body and environment beyond the physical design.

  13. A Comparative Study of the Traditional Houses Kaili and Bugis-Makassar in Indonesia

    NASA Astrophysics Data System (ADS)

    Suharto, M. F.; Kawet, R. S. S. I.; Tumanduk, M. S. S. S.

    2018-02-01

    In this study, I compared the physical elements of two Indonesian traditional houses, of the Kaili tribe (Central Sulawesi) and the Bugis-Makassar tribe (South Sulawesi). Viewed in terms of name, meaning, and function, both traditional houses show similarities, namely the Souraja/Saoraja house (House of the King); however, a more detailed look at the physical elements of architecture also reveals differences. The spatial, physical and stylistic systems (N. John Habraken’s theory) were applied to analyze the differences and similarities of the physical elements of architecture in those two traditional houses. The results of the analysis identified that physical elements of architecture such as the orientation, the function and distribution of rooms (the spatial system), the constructions and materials of floor, wall and roof (the physical system), and the opening types of the door and window as well as the ornaments used showed similarities. Meanwhile, physical elements of architecture such as the arrangement of columns, form and spatial pattern as well as the placement of the stairs (the spatial system), the constructions and materials of foundation, column and beam (the physical system), as well as the form of the roof and façade, showed differences between the two traditional houses.

  14. Classification of Phylogenetic Profiles for Protein Function Prediction: An SVM Approach

    NASA Astrophysics Data System (ADS)

    Kotaru, Appala Raju; Joshi, Ramesh C.

    Predicting the function of an uncharacterized protein is a major challenge in the post-genomic era due to the problem's complexity and scale. Knowledge of protein function is a crucial link in the development of new drugs, better crops, and even biochemicals such as biofuels. Recently, numerous high-throughput experimental procedures have been invented to investigate the mechanisms leading to the accomplishment of a protein's function, and the phylogenetic profile is one of them. A phylogenetic profile is a way of representing a protein that encodes the evolutionary history of the protein. In this paper we propose a method for classifying phylogenetic profiles using a supervised machine learning method, support vector machine classification with a radial basis function kernel, for identifying functionally linked proteins. We experimentally evaluated the performance of the classifier with the linear and polynomial kernels and compared the results with the existing tree kernel. In our study we used proteins of the budding yeast Saccharomyces cerevisiae genome. We generated the phylogenetic profiles of 2465 yeast genes and used the functional annotations available in the MIPS database. Our experiments show that the performance of the radial basis kernel is similar to that of the polynomial kernel in some functional classes, both are better than the linear and tree kernels, and overall the radial basis kernel outperformed the polynomial, linear, and tree kernels. These results indicate that it is feasible to use an SVM classifier with a radial basis function kernel to predict gene functionality from phylogenetic profiles.
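    In this setting a phylogenetic profile is simply a binary presence/absence vector over a set of reference genomes, and classification reduces to training an SVM on those vectors. The sketch below uses synthetic profiles and labels as stand-ins for the yeast data and compares linear, polynomial, and RBF kernels with cross-validation.

```python
# RBF-kernel SVM on synthetic binary phylogenetic profiles.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_genes, n_genomes = 400, 60
profiles = (rng.random((n_genes, n_genomes)) < 0.4).astype(float)
# Genes whose profiles overlap a hidden "pathway" pattern get the positive label.
pathway = (rng.random(n_genomes) < 0.3).astype(float)
labels = ((profiles @ pathway) > 0.5 * pathway.sum()).astype(int)

for kernel in ("linear", "poly", "rbf"):
    clf = SVC(kernel=kernel, gamma="scale")
    acc = cross_val_score(clf, profiles, labels, cv=5).mean()
    print(f"{kernel:>6} kernel CV accuracy: {acc:.3f}")
```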

  15. Intraear Compensation of Field Corn, Zea mays, from Simulated and Naturally Occurring Injury by Ear-Feeding Larvae.

    PubMed

    Steckel, S; Stewart, S D

    2015-06-01

    Ear-feeding larvae, such as corn earworm, Helicoverpa zea Boddie (Lepidoptera: Noctuidae), can be important insect pests of field corn, Zea mays L., by feeding on kernels. Recently introduced, stacked Bacillus thuringiensis (Bt) traits provide improved protection from ear-feeding larvae. Thus, our objective was to evaluate how injury to kernels in the ear tip might affect yield when this injury was inflicted at the blister and milk stages. In 2010, simulated corn earworm injury reduced total kernel weight (i.e., yield) at both the blister and milk stage. In 2011, injury to ear tips at the milk stage affected total kernel weight. No differences in total kernel weight were found in 2013, regardless of when or how much injury was inflicted. Our data suggested that kernels within the same ear could compensate for injury to ear tips by increasing in size, but this increase was not always statistically significant or sufficient to overcome high levels of kernel injury. For naturally occurring injury observed on multiple corn hybrids during 2011 and 2012, our analyses showed either no or a minimal relationship between number of kernels injured by ear-feeding larvae and the total number of kernels per ear, total kernel weight, or the size of individual kernels. The results indicate that intraear compensation for kernel injury to ear tips can occur under at least some conditions. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  16. Evidence-based Kernels: Fundamental Units of Behavioral Influence

    PubMed Central

    Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600

  17. Integrating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Wilton, Donald R.

    2008-01-01

    A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different than those of the wire kernel, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integration of the gradient of the wire kernel needs to be calculated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form

  18. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Dou, Yong

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms. PMID:28293256
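    Random Fourier features, one of the two approximations mentioned above, replace the kernel matrix by an explicit low-dimensional feature map whose inner products approximate the RBF kernel. The sketch below checks that approximation on a single pair of points; the dimensions, kernel width, and feature count are arbitrary, and the ranking-specific Newton optimization is not reproduced.

```python
# Random Fourier feature approximation of the RBF kernel
# k(x, y) = exp(-||x - y||^2 / (2 sigma^2)):  z(x)^T z(y) ~ k(x, y).
import numpy as np

rng = np.random.default_rng(6)
d, D, sigma = 20, 2000, 1.5
W = rng.normal(scale=1.0 / sigma, size=(D, d))   # frequencies drawn from N(0, sigma^-2 I)
b = rng.uniform(0, 2 * np.pi, size=D)            # random phases

def rff(X):
    """Map rows of X to D-dimensional random Fourier features."""
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))
approx = rff(x[None, :]) @ rff(y[None, :]).T
print("exact kernel:", round(exact, 4), " RFF approximation:", round(float(approx[0, 0]), 4))
```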

  19. Ranking Support Vector Machine with Kernel Approximation.

    PubMed

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.

  20. Performance Evaluation at the Hardware Architecture Level and the Operating System Kernel Design Level.

    DTIC Science & Technology

    1977-12-01

  1. A neural network approach for the blind deconvolution of turbulent flows

    NASA Astrophysics Data System (ADS)

    Maulik, R.; San, O.

    2017-11-01

    We present a single-layer feedforward artificial neural network architecture trained through a supervised learning approach for the deconvolution of flow variables from their coarse grained computations such as those encountered in large eddy simulations. We stress that the deconvolution procedure proposed in this investigation is blind, i.e. the deconvolved field is computed without any pre-existing information about the filtering procedure or kernel. This may be conceptually contrasted to the celebrated approximate deconvolution approaches where a filter shape is predefined for an iterative deconvolution process. We demonstrate that the proposed blind deconvolution network performs exceptionally well in the a-priori testing of both two-dimensional Kraichnan and three-dimensional Kolmogorov turbulence and shows promise in forming the backbone of a physics-augmented data-driven closure for the Navier-Stokes equations.
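    As a rough 1-D analogue of the approach described above, the sketch below trains a single-hidden-layer feedforward network to map a small stencil of filtered values back to the unfiltered value at the stencil center; the network never sees the filter itself, which is the 'blind' aspect. The signal, filter width, stencil size, and network size are illustrative choices, not those of the paper.

```python
# Blind deconvolution sketch: a single-hidden-layer net maps a stencil of
# filtered values to the unfiltered center value, without knowing the filter.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n, stencil = 4096, 5
signal = np.cumsum(rng.normal(size=n))                 # rough stand-in "turbulent" signal
kernel = np.ones(7) / 7.0                              # box filter, never shown to the net
filtered = np.convolve(signal, kernel, mode="same")

half = stencil // 2
X = np.array([filtered[i - half:i + half + 1] for i in range(half, n - half)])
y = signal[half:n - half]

net = MLPRegressor(hidden_layer_sizes=(40,), max_iter=2000, random_state=0)
net.fit(X[:3000], y[:3000])
print("held-out R^2:", round(net.score(X[3000:], y[3000:]), 3))
```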

  2. Enabling the High Level Synthesis of Data Analytics Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minutoli, Marco; Castellana, Vito G.; Tumeo, Antonino

    Conventional High Level Synthesis (HLS) tools mainly target compute intensive kernels typical of digital signal processing applications. We are developing techniques and architectural templates to enable HLS of data analytics applications. These applications are memory intensive, present fine-grained, unpredictable data accesses, and irregular, dynamic task parallelism. We discuss an architectural template based around a distributed controller to efficiently exploit thread level parallelism. We present a memory interface that supports parallel memory subsystems and enables implementing atomic memory operations. We introduce a dynamic task scheduling approach to efficiently execute heavily unbalanced workloads. The templates are validated by synthesizing queries from the Lehigh University Benchmark (LUBM), a well-known SPARQL benchmark.

  3. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  4. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  5. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  6. Wigner functions defined with Laplace transform kernels.

    PubMed

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function using Laplace transform kernels--Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the property of the Laplace transform, a broader range of signals can be represented in complex phase-space. We show that the Laplace kernel Wigner function exhibits similar properties in the marginals as the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polariton. © 2011 Optical Society of America

  7. Online learning control using adaptive critic designs with sparse kernel machines.

    PubMed

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
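    The sparsification step mentioned above is usually an approximate linear dependence (ALD) test: a new sample is added to the kernel dictionary only if its feature-space projection onto the current dictionary leaves a residual larger than a threshold. The sketch below implements that test directly; the RBF kernel, threshold, and random state samples are illustrative, and the full critic update of KHDP/KDHP is not reproduced.

```python
# Approximate linear dependence (ALD) test for building a sparse kernel dictionary.
import numpy as np

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def ald_dictionary(samples, nu=0.1, gamma=0.5):
    dictionary = [samples[0]]
    for x in samples[1:]:
        K = np.array([[rbf(u, v, gamma) for v in dictionary] for u in dictionary])
        k_vec = np.array([rbf(u, x, gamma) for u in dictionary])
        # delta = k(x, x) - k_vec^T K^{-1} k_vec  (feature-space projection residual)
        coeffs = np.linalg.solve(K + 1e-10 * np.eye(len(dictionary)), k_vec)
        delta = rbf(x, x, gamma) - k_vec @ coeffs
        if delta > nu:
            dictionary.append(x)
    return np.array(dictionary)

states = np.random.default_rng(8).uniform(-1, 1, size=(500, 2))
D = ald_dictionary(states)
print("dictionary size:", len(D), "of", len(states), "samples")
```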

  8. Influence of wheat kernel physical properties on the pulverizing process.

    PubMed

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar level of protein content (11.2-12.8 % w.b.), obtained from an organic farming system, were used for analysis. The kernels (moisture content 10 % w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJkg(-1) to 159 kJkg(-1). On the basis of the data obtained, many significant correlations (p < 0.05) were found between wheat kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.

  9. Relationship between processing score and kernel-fraction particle size in whole-plant corn silage.

    PubMed

    Dias Junior, G S; Ferraretto, L F; Salvati, G G S; de Resende, L C; Hoffman, P C; Pereira, M N; Shaver, R D

    2016-04-01

    Kernel processing increases starch digestibility in whole-plant corn silage (WPCS). Corn silage processing score (CSPS), the percentage of starch passing through a 4.75-mm sieve, is widely used to assess degree of kernel breakage in WPCS. However, the geometric mean particle size (GMPS) of the kernel-fraction that passes through the 4.75-mm sieve has not been well described. Therefore, the objectives of this study were (1) to evaluate particle size distribution and digestibility of kernels cut in varied particle sizes; (2) to propose a method to measure GMPS in WPCS kernels; and (3) to evaluate the relationship between CSPS and GMPS of the kernel fraction in WPCS. Composite samples of unfermented, dried kernels from 110 corn hybrids commonly used for silage production were kept whole (WH) or manually cut in 2, 4, 8, 16, 32 or 64 pieces (2P, 4P, 8P, 16P, 32P, and 64P, respectively). Dry sieving to determine GMPS, surface area, and particle size distribution using 9 sieves with nominal square apertures of 9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, and 0.59 mm and pan, as well as ruminal in situ dry matter (DM) digestibilities were performed for each kernel particle number treatment. Incubation times were 0, 3, 6, 12, and 24 h. The ruminal in situ DM disappearance of unfermented kernels increased with the reduction in particle size of corn kernels. Kernels kept whole had the lowest ruminal DM disappearance for all time points with maximum DM disappearance of 6.9% at 24 h and the greatest disappearance was observed for 64P, followed by 32P and 16P. Samples of WPCS (n=80) from 3 studies representing varied theoretical length of cut settings and processor types and settings were also evaluated. Each WPCS sample was divided in 2 and then dried at 60 °C for 48 h. The CSPS was determined in duplicate on 1 of the split samples, whereas on the other split sample the kernel and stover fractions were separated using a hydrodynamic separation procedure. After separation, the kernel fraction was redried at 60°C for 48 h in a forced-air oven and dry sieved to determine GMPS and surface area. Linear relationships between CSPS from WPCS (n=80) and kernel fraction GMPS, surface area, and proportion passing through the 4.75-mm screen were poor. Strong quadratic relationships between proportion of kernel fraction passing through the 4.75-mm screen and kernel fraction GMPS and surface area were observed. These findings suggest that hydrodynamic separation and dry sieving of the kernel fraction may provide a better assessment of kernel breakage in WPCS than CSPS. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
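    The kernel-fraction GMPS discussed above is typically computed from dry-sieving data with a mass-weighted log (geometric mean) formula, with each sieve assigned the geometric mean of the aperture the material passed through and the one it was retained on. The sketch below assumes that standard formula; the retained masses, the top-size assumption, and the nominal pan size are invented, not data from the study.

```python
# Geometric mean particle size (GMPS) from dry-sieving data, using the
# mass-weighted log formula. Apertures follow the sieve stack listed above;
# retained masses are illustrative placeholders.
import numpy as np

apertures = np.array([9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, 0.59])        # mm
pan = 0.30                                             # assumed nominal size for the pan (mm)
mass_retained = np.array([0.5, 2.0, 8.0, 20.0, 30.0, 20.0, 12.0, 5.0, 2.5])   # g, top sieve .. pan

# Mean size on each sieve: geometric mean of the aperture passed and the aperture retained on.
passed = np.concatenate(([apertures[0] * 1.4], apertures))   # assumed top size above 9.5 mm
retained_on = np.concatenate((apertures, [pan]))
mean_size = np.sqrt(passed * retained_on)

gmps = np.exp(np.sum(mass_retained * np.log(mean_size)) / mass_retained.sum())
print("GMPS of kernel fraction: %.2f mm" % gmps)
```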

  10. Classification of corn kernels contaminated with aflatoxins using fluorescence and reflectance hyperspectral images analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Fengle; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Brown, Robert; Bhatnagar, Deepak; Cleveland, Thomas

    2015-05-01

    Aflatoxins are secondary metabolites produced by certain fungal species of the Aspergillus genus. Aflatoxin contamination remains a problem in agricultural products due to its toxic and carcinogenic properties. Conventional chemical methods for aflatoxin detection are time-consuming and destructive. This study employed fluorescence and reflectance visible near-infrared (VNIR) hyperspectral images to classify aflatoxin contaminated corn kernels rapidly and non-destructively. Corn ears were artificially inoculated in the field with toxigenic A. flavus spores at the early dough stage of kernel development. After harvest, a total of 300 kernels were collected from the inoculated ears. Fluorescence hyperspectral imagery with UV excitation and reflectance hyperspectral imagery with halogen illumination were acquired on both endosperm and germ sides of kernels. All kernels were then subjected to chemical analysis individually to determine aflatoxin concentrations. A region of interest (ROI) was created for each kernel to extract averaged spectra. Compared with healthy kernels, fluorescence spectral peaks for contaminated kernels shifted to longer wavelengths with lower intensity, and reflectance values for contaminated kernels were lower with a different spectral shape in 700-800 nm region. Principal component analysis was applied for data compression before classifying kernels into contaminated and healthy based on a 20 ppb threshold utilizing the K-nearest neighbors algorithm. The best overall accuracy achieved was 92.67% for germ side in the fluorescence data analysis. The germ side generally performed better than endosperm side. Fluorescence and reflectance image data achieved similar accuracy.
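    The classification chain described above (ROI-averaged spectra, PCA compression, then a K-nearest-neighbors decision against a 20 ppb aflatoxin threshold) can be sketched in a few lines. The spectra and concentrations below are synthetic placeholders for the corn kernel data.

```python
# PCA + K-nearest-neighbors classification of synthetic kernel spectra against
# a 20 ppb aflatoxin threshold.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(9)
n_kernels, n_bands = 300, 100
aflatoxin_ppb = rng.lognormal(mean=2.0, sigma=1.5, size=n_kernels)
# Contamination adds a spectral tilt that grows with concentration.
tilt = np.outer(np.log1p(aflatoxin_ppb), np.linspace(-1, 1, n_bands))
spectra = rng.normal(size=(n_kernels, n_bands)) + tilt
labels = (aflatoxin_ppb >= 20.0).astype(int)            # 20 ppb threshold

clf = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
print("CV accuracy:", cross_val_score(clf, spectra, labels, cv=5).mean().round(3))
```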

  11. Influence of Kernel Age on Fumonisin B1 Production in Maize by Fusarium moniliforme

    PubMed Central

    Warfield, Colleen Y.; Gilchrist, David G.

    1999-01-01

    Production of fumonisins by Fusarium moniliforme on naturally infected maize ears is an important food safety concern due to the toxic nature of this class of mycotoxins. Assessing the potential risk of fumonisin production in developing maize ears prior to harvest requires an understanding of the regulation of toxin biosynthesis during kernel maturation. We investigated the developmental-stage-dependent relationship between maize kernels and fumonisin B1 production by using kernels collected at the blister (R2), milk (R3), dough (R4), and dent (R5) stages following inoculation in culture at their respective field moisture contents with F. moniliforme. Highly significant differences (P ≤ 0.001) in fumonisin B1 production were found among kernels at the different developmental stages. The highest levels of fumonisin B1 were produced on the dent stage kernels, and the lowest levels were produced on the blister stage kernels. The differences in fumonisin B1 production among kernels at the different developmental stages remained significant (P ≤ 0.001) when the moisture contents of the kernels were adjusted to the same level prior to inoculation. We concluded that toxin production is affected by substrate composition as well as by moisture content. Our study also demonstrated that fumonisin B1 biosynthesis on maize kernels is influenced by factors which vary with the developmental age of the tissue. The risk of fumonisin contamination may begin early in maize ear development and increases as the kernels reach physiological maturity. PMID:10388675

  12. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel-based techniques depends on the selection of kernel parameters; suitable parameter selection is therefore an important problem for many kernel-based techniques. This article presents a novel technique to learn the kernel parameters in a kernel Fukunaga-Koontz Transform (KFKT) based classifier. The proposed approach determines the appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of KFKT. For this purpose we have utilized the differential evolution algorithm (DEA). The new technique overcomes disadvantages such as the high time consumption of the traditional cross-validation method, and it can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
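    A minimal sketch of differential-evolution-based kernel parameter selection is shown below using scipy's differential_evolution. The objective here, cross-validated accuracy of an RBF-kernel SVM on a synthetic dataset, is a simple stand-in for the KFKT discrimination criterion used in the paper.

```python
# Differential evolution search over the RBF kernel width (gamma), using
# cross-validated SVM accuracy as a stand-in objective.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def neg_accuracy(params):
    gamma = 10.0 ** params[0]                        # search gamma on a log scale
    clf = SVC(kernel="rbf", gamma=gamma)
    return -cross_val_score(clf, X, y, cv=3).mean()  # minimize negative accuracy

result = differential_evolution(neg_accuracy, bounds=[(-4, 2)],
                                maxiter=10, popsize=8, seed=1)
print("best gamma: %.4g, CV accuracy: %.3f" % (10.0 ** result.x[0], -result.fun))
```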

  13. Design of a multiple kernel learning algorithm for LS-SVM by convex programming.

    PubMed

    Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou

    2011-06-01

    As a kernel based method, the performance of least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient in selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Novel near-infrared sampling apparatus for single kernel analysis of oil content in maize.

    PubMed

    Janni, James; Weinstock, B André; Hagen, Lisa; Wright, Steve

    2008-04-01

    A method of rapid, nondestructive chemical and physical analysis of individual maize (Zea mays L.) kernels is needed for the development of high value food, feed, and fuel traits. Near-infrared (NIR) spectroscopy offers a robust nondestructive method of trait determination. However, traditional NIR bulk sampling techniques cannot be applied successfully to individual kernels. Obtaining optimized single kernel NIR spectra for applied chemometric predictive analysis requires a novel sampling technique that can account for the heterogeneous forms, morphologies, and opacities exhibited in individual maize kernels. In this study such a novel technique is described and compared to less effective means of single kernel NIR analysis. Results of the application of a partial least squares (PLS) derived model for predictive determination of percent oil content per individual kernel are shown.
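    The chemometric step described above, a partial least squares (PLS) model regressing oil content on single-kernel NIR spectra, can be sketched as follows. The spectra and oil percentages are synthetic placeholders, and the number of PLS components is an arbitrary choice.

```python
# PLS regression of percent oil on synthetic single-kernel NIR spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(10)
n_kernels, n_wavelengths = 250, 200
oil_pct = rng.uniform(2.0, 10.0, size=n_kernels)
# Oil content adds a smooth spectral signature on top of noise.
signature = np.outer(oil_pct, np.sin(np.linspace(0, 3, n_wavelengths)))
spectra = rng.normal(size=(n_kernels, n_wavelengths)) + signature

X_tr, X_te, y_tr, y_te = train_test_split(spectra, oil_pct, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
print("R^2 on held-out kernels:", round(pls.score(X_te, y_te), 3))
```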

  15. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing

    PubMed Central

    Liu, Jiayin; Tang, Zhenmin; Cui, Ying; Wu, Guoxing

    2017-01-01

    Remote sensing technologies have been widely applied in urban environments’ monitoring, synthesis and modeling. Incorporating spatial information in perceptually coherent regions, superpixel-based approaches can effectively eliminate the “salt and pepper” phenomenon which is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is more and more popularly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism leads to energy terms locality and relativity, and thus, the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC could achieve consistent performance in different image regions. In addition, the Probability Density Function (PDF), which is estimated by Kernel Density Estimation (KDE) with the Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced to only handle boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm with the other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance, while the computation time-efficiency is still competitive. PMID:28604641

  16. Seismic tomography of the southern California crust based on spectral-element and adjoint methods

    NASA Astrophysics Data System (ADS)

    Tape, Carl; Liu, Qinya; Maggi, Alessia; Tromp, Jeroen

    2010-01-01

    We iteratively improve a 3-D tomographic model of the southern California crust using numerical simulations of seismic wave propagation based on a spectral-element method (SEM) in combination with an adjoint method. The initial 3-D model is provided by the Southern California Earthquake Center. The data set comprises three-component seismic waveforms (i.e. both body and surface waves), filtered over the period range 2-30 s, from 143 local earthquakes recorded by a network of 203 stations. Time windows for measurements are automatically selected by the FLEXWIN algorithm. The misfit function in the tomographic inversion is based on frequency-dependent multitaper traveltime differences. The gradient of the misfit function and related finite-frequency sensitivity kernels for each earthquake are computed using an adjoint technique. The kernels are combined using a source subspace projection method to compute a model update at each iteration of a gradient-based minimization algorithm. The inversion involved 16 iterations, which required 6800 wavefield simulations. The new crustal model, m16, is described in terms of independent shear (VS) and bulk-sound (VB) wave speed variations. It exhibits strong heterogeneity, including local changes of +/-30 per cent with respect to the initial 3-D model. The model reveals several features that relate to geological observations, such as sedimentary basins, exhumed batholiths, and contrasting lithologies across faults. The quality of the new model is validated by quantifying waveform misfits of full-length seismograms from 91 earthquakes that were not used in the tomographic inversion. The new model provides more accurate synthetic seismograms that will benefit seismic hazard assessment.

  17. Adjoint Tomography of the Southern California Crust (Invited)

    NASA Astrophysics Data System (ADS)

    Tape, C.; Liu, Q.; Maggi, A.; Tromp, J.

    2009-12-01

    We iteratively improve a three-dimensional tomographic model of the southern California crust using numerical simulations of seismic wave propagation based on a spectral-element method (SEM) in combination with an adjoint method. The initial 3D model is provided by the Southern California Earthquake Center. The dataset comprises three-component seismic waveforms (i.e. both body and surface waves), filtered over the period range 2-30 s, from 143 local earthquakes recorded by a network of 203 stations. Time windows for measurements are automatically selected by the FLEXWIN algorithm. The misfit function in the tomographic inversion is based on frequency-dependent multitaper traveltime differences. The gradient of the misfit function and related finite-frequency sensitivity kernels for each earthquake are computed using an adjoint technique. The kernels are combined using a source subspace projection method to compute a model update at each iteration of a gradient-based minimization algorithm. The inversion involved 16 iterations, which required 6800 wavefield simulations and a total of 0.8 million CPU hours. The new crustal model, m16, is described in terms of independent shear (Vs) and bulk-sound (Vb) wavespeed variations. It exhibits strong heterogeneity, including local changes of ±30% with respect to the initial 3D model. The model reveals several features that relate to geologic observations, such as sedimentary basins, exhumed batholiths, and contrasting lithologies across faults. The quality of the new model is validated by quantifying waveform misfits of full-length seismograms from 91 earthquakes that were not used in the tomographic inversion. The new model provides more accurate synthetic seismograms that will benefit seismic hazard assessment.

  18. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing.

    PubMed

    Liu, Jiayin; Tang, Zhenmin; Cui, Ying; Wu, Guoxing

    2017-06-12

    Remote sensing technologies have been widely applied in monitoring, synthesizing and modeling urban environments. By incorporating spatial information over perceptually coherent regions, superpixel-based approaches can effectively eliminate the "salt and pepper" phenomenon that is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm is increasingly used as a preprocessing technique in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism makes the energy terms local and relative, so the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC can achieve consistent performance across different image regions. In addition, the Probability Density Function (PDF), estimated by Kernel Density Estimation (KDE) with a Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced that handles only boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm against state-of-the-art methods on the Berkeley Segmentation Dataset (BSD) and on remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance while remaining competitive in computation time.
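
    The KDE step described above can be illustrated with SciPy's gaussian_kde, which fits a Gaussian-kernel density to the colour samples of one superpixel and then scores candidate boundary pixels; this is a minimal sketch, and the colour space, bandwidth rule and variable names are assumptions rather than the choices made in the paper.

        import numpy as np
        from scipy.stats import gaussian_kde

        # Toy stand-in for the RGB values of the pixels in one superpixel,
        # shaped (3, n_pixels) as gaussian_kde expects (dimensions x samples).
        rng = np.random.default_rng(1)
        superpixel_colors = rng.normal(loc=[[120.0], [80.0], [60.0]], scale=5.0, size=(3, 400))

        # Fit a Gaussian-kernel density estimate of the superpixel's colour distribution.
        pdf = gaussian_kde(superpixel_colors)   # default (Scott's rule) bandwidth: an assumption

        # Evaluate how well a candidate boundary pixel's colour fits this superpixel.
        candidate = np.array([[118.0], [82.0], [59.0]])
        likelihood = pdf(candidate)             # higher value = better colour match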

  19. Computed tomography coronary stent imaging with iterative reconstruction: a trade-off study between medium kernel and sharp kernel.

    PubMed

    Zhou, Qijing; Jiang, Biao; Dong, Fei; Huang, Peiyu; Liu, Hongtao; Zhang, Minming

    2014-01-01

    To evaluate the improvement of the iterative reconstruction in image space (IRIS) technique in computed tomographic (CT) coronary stent imaging with a sharp kernel, and to make a trade-off analysis. Fifty-six patients with 105 stents were examined by 128-slice dual-source CT coronary angiography (CTCA). Images were reconstructed using standard filtered back projection (FBP) and IRIS, each with both the medium kernel and the sharp kernel. Image noise and the stent diameter were investigated. Image noise was measured both in the background vessel and in the in-stent lumen as objective image evaluation. An image noise score and a stent score were used as subjective image evaluation. The CTCA images reconstructed with IRIS showed significant noise reduction compared with the CTCA images reconstructed using the FBP technique, in both the background vessel and the in-stent lumen (the background noise decreased by approximately 25.4% ± 8.2% with the medium kernel (P

  20. Multiple Kernel Sparse Representation based Orthogonal Discriminative Projection and Its Cost-Sensitive Extension.

    PubMed

    Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen

    2016-07-07

    Sparse representation based classification (SRC) has been developed and shown great potential for real-world applications. Based on SRC, Yang et al. [10] devised an SRC-steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle data with a highly nonlinear distribution. The kernel sparse representation-based classifier (KSRC) is a nonlinear extension of SRC that can remedy this drawback, but it requires a predetermined kernel function, and selecting the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC considers only the within-class reconstruction residual and ignores the between-class relationship when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC) and use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims to learn a projection matrix and a corresponding kernel from the given base kernels such that, in the low-dimensional subspace, the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss when performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. The solutions for the proposed method can be found efficiently with the trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm compared with state-of-the-art methods.
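
    The building block that MKL-SRC-style methods optimise over, a convex combination of base Gram matrices, can be written in a few lines; the sketch below only forms the combined kernel for fixed illustrative weights and does not implement the learning of the weights, the sparse coding, or the projection described in the abstract.

        import numpy as np

        def combine_kernels(base_kernels, weights):
            """Weighted sum K = sum_m w_m K_m of base kernel Gram matrices."""
            weights = np.asarray(weights, dtype=float)
            assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
            return sum(w * K for w, K in zip(weights, base_kernels))

        # Toy base kernels: a linear kernel and two Gaussian kernels of different widths.
        rng = np.random.default_rng(2)
        X = rng.normal(size=(50, 10))
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        base = [X @ X.T, np.exp(-sq / 2.0), np.exp(-sq / 20.0)]

        K = combine_kernels(base, weights=[0.2, 0.5, 0.3])   # illustrative weights only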

  1. Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.

    PubMed

    Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe

    2018-02-19

    Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Several computational methods have therefore been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well for complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is, however, an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel. We then combine the Min kernel, or its normalized form, with one of the pairwise kernels by plugging the former into the latter as the base kernel. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to the prediction of heterodimers. We then evaluate our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, which had been the best existing method so far. We propose new methods to predict heterodimers using a machine learning-based approach: we train a support vector machine (SVM) to discriminate interacting from non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state of the art.
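
    The kernels named above have compact standard definitions; the sketch below gives the Min kernel together with the TPPK and MLPK constructions built on a generic base kernel k, as they are usually stated in the pairwise-kernel literature. The normalisation variants studied in the paper are not reproduced, and the toy feature vectors are placeholders for the PPI, domain, phylogenetic-profile and localization features.

        import numpy as np

        def min_kernel(x, y):
            """Min (histogram-intersection) kernel for non-negative feature vectors."""
            return float(np.minimum(x, y).sum())

        def tppk(k, a, b, c, d):
            """Tensor Product Pairwise Kernel between protein pairs (a,b) and (c,d)."""
            return k(a, c) * k(b, d) + k(a, d) * k(b, c)

        def mlpk(k, a, b, c, d):
            """Metric Learning Pairwise Kernel between protein pairs (a,b) and (c,d)."""
            return (k(a, c) - k(a, d) - k(b, c) + k(b, d)) ** 2

        # Toy usage: random vectors standing in for per-protein feature profiles.
        rng = np.random.default_rng(3)
        a, b, c, d = rng.random((4, 8))
        print(tppk(min_kernel, a, b, c, d), mlpk(min_kernel, a, b, c, d))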

  2. Mapping QTLs controlling kernel dimensions in a wheat inter-varietal RIL mapping population.

    PubMed

    Cheng, Ruiru; Kong, Zhongxin; Zhang, Liwei; Xie, Quan; Jia, Haiyan; Yu, Dong; Huang, Yulong; Ma, Zhengqiang

    2017-07-01

    Seven kernel dimension QTLs were identified in wheat, and kernel thickness was found to be the most important dimension for grain weight improvement. Kernel morphology and weight of wheat (Triticum aestivum L.) affect both yield and quality; however, the genetic basis of these traits and their interactions is not fully understood. In this study, to investigate the genetic factors affecting kernel morphology and the association of kernel morphology traits with kernel weight, kernel length (KL), width (KW) and thickness (KT) were evaluated, together with hundred-grain weight (HGW), in a recombinant inbred line population derived from Nanda2419 × Wangshuibai, with data from five trials (two locations over 3 years). The results showed that HGW was more closely correlated with KT and KW than with KL. A whole-genome scan revealed four QTLs for KL, one for KW and two for KT, distributed on five different chromosomes. Of these, QKl.nau-2D for KL, and QKt.nau-4B and QKt.nau-5A for KT, were newly identified major QTLs for the respective traits, explaining up to 32.6 and 41.5% of the phenotypic variation, respectively. Increases in KW and KT and reductions in the KL/KT and KW/KT ratios always resulted in significantly higher grain weight. Lines combining the Nanda2419 alleles of the 4B and 5A intervals had wider, thicker, rounder kernels and a 14% higher grain weight in the genotype-based analysis. A strong negative linear relationship between the KW/KT ratio and grain weight was observed. Kernel thickness thus appears to be the most important kernel dimension factor in wheat improvement for higher yield. Mapping and marker identification of the kernel dimension-related QTLs will help realize these breeding goals.

  3. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
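
    At its core the LS-SVM classifier used here is the solution of one linear system in the dual variables and bias; the sketch below solves that system for a fixed ARD Gaussian kernel, treating the +/-1 labels as regression targets. The paper's contribution, jointly optimising the kernel lengthscales with the expansion coefficients under an extra regularisation term, would wrap a gradient-based optimiser around this solve and is not shown; all names and the toy data are assumptions.

        import numpy as np

        def ard_rbf(X1, X2, lengthscales):
            """ARD Gaussian kernel with one lengthscale per input dimension."""
            D = (X1[:, None, :] - X2[None, :, :]) / lengthscales
            return np.exp(-0.5 * (D ** 2).sum(-1))

        def lssvm_fit(X, y, lengthscales, gamma):
            """Solve the LS-SVM linear system (labels in {-1,+1} treated as targets)."""
            n = len(y)
            K = ard_rbf(X, X, lengthscales)
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = 1.0
            A[1:, 0] = 1.0
            A[1:, 1:] = K + np.eye(n) / gamma
            rhs = np.concatenate(([0.0], y.astype(float)))
            sol = np.linalg.solve(A, rhs)
            return sol[0], sol[1:]          # bias b, dual coefficients alpha

        def lssvm_decision(X_train, lengthscales, b, alpha, X_test):
            return ard_rbf(X_test, X_train, lengthscales) @ alpha + b

        # Toy usage: two Gaussian blobs.
        rng = np.random.default_rng(4)
        X = np.vstack([rng.normal(-1, 1, (30, 5)), rng.normal(+1, 1, (30, 5))])
        y = np.array([-1] * 30 + [+1] * 30)
        b, alpha = lssvm_fit(X, y, lengthscales=np.ones(5), gamma=10.0)
        pred = np.sign(lssvm_decision(X, np.ones(5), b, alpha, X))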

  4. Adaptive kernel function using line transect sampling

    NASA Astrophysics Data System (ADS)

    Albadareen, Baker; Ismail, Noriszura

    2018-04-01

    The estimation of f(0) is crucial in the line transect method, which is used for estimating population abundance in wildlife surveys. The classical kernel estimator of f(0) has a high negative bias. Our study proposes an adaptation of the kernel function that is shown to be more efficient than the usual kernel estimator. A simulation study is used to compare the performance of the proposed estimators with the classical kernel estimators.
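
    For context, the classical estimator referred to above can be written, for perpendicular distances x_1,...,x_n >= 0 and a symmetric kernel reflected at zero, as f(0)-hat = (2/(n h)) * sum_i K(x_i/h). The sketch below implements that baseline with a Gaussian kernel and a rule-of-thumb bandwidth; both choices are assumptions, and the paper's adaptive modification is not reproduced.

        import numpy as np

        def f0_classical(distances, h=None):
            """Classical (reflected) kernel estimate of f(0) for line-transect data.

            distances : perpendicular detection distances, all >= 0
            h         : bandwidth; a simple normal-reference rule is used if omitted
            """
            x = np.asarray(distances, dtype=float)
            n = x.size
            if h is None:
                h = 1.06 * x.std(ddof=1) * n ** (-1 / 5)   # rule-of-thumb bandwidth (assumption)
            gauss = np.exp(-0.5 * (x / h) ** 2) / np.sqrt(2 * np.pi)
            return 2.0 * gauss.sum() / (n * h)

        # Toy usage: distances drawn from a half-normal detection function.
        rng = np.random.default_rng(5)
        d = np.abs(rng.normal(0.0, 10.0, size=200))
        print(f0_classical(d))     # true f(0) for half-normal(sigma=10) is about 0.0798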

  5. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.
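
    A hedged sketch of kernel PLS component extraction in the spirit of the kernel NIPALS formulation is given below: score vectors are pulled from the centred Gram matrix and the response, and both are deflated after each component. Regressing the response on the extracted scores by ordinary least squares, as done at the end, is a simplification for illustration rather than the exact estimator of the paper.

        import numpy as np

        def kernel_pls_scores(K, Y, n_components, n_iter=100, tol=1e-10):
            """Extract kernel-PLS score vectors from a Gram matrix K (n x n)."""
            n = K.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n
            K = J @ K @ J                      # centre the kernel matrix in feature space
            Y = Y - Y.mean(axis=0)
            T = []
            for _ in range(n_components):
                u = Y[:, [0]].copy()
                for _ in range(n_iter):
                    t = K @ u
                    t /= np.linalg.norm(t)
                    c = Y.T @ t
                    u_new = Y @ c
                    u_new /= np.linalg.norm(u_new)
                    if np.linalg.norm(u_new - u) < tol:
                        u = u_new
                        break
                    u = u_new
                P = np.eye(n) - t @ t.T        # deflation projector for this component
                K = P @ K @ P
                Y = Y - t @ (t.T @ Y)
                T.append(t)
            return np.hstack(T)

        # Toy usage: nonlinear regression with an RBF kernel.
        rng = np.random.default_rng(6)
        X = rng.uniform(-2, 2, (80, 3))
        y = (np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)).reshape(-1, 1)
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-sq / 2.0)
        T = kernel_pls_scores(K, y, n_components=3)
        y_fit = T @ np.linalg.lstsq(T, y - y.mean(), rcond=None)[0] + y.mean()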

  6. Pollen source effects on growth of kernel structures and embryo chemical compounds in maize.

    PubMed

    Tanaka, W; Mantese, A I; Maddonni, G A

    2009-08-01

    Previous studies have reported effects of pollen source on the oil concentration of maize (Zea mays) kernels through modifications to both the embryo/kernel ratio and embryo oil concentration. The present study expands upon previous analyses by addressing pollen source effects on the growth of kernel structures (i.e. pericarp, endosperm and embryo), allocation of embryo chemical constituents (i.e. oil, protein, starch and soluble sugars), and the anatomy and histology of the embryos. Maize kernels with different oil concentrations were obtained from pollinations with two parental genotypes of contrasting oil concentration. The dynamics of the growth of kernel structures and allocation of embryo chemical constituents were analysed during the post-flowering period. Mature kernels were dissected to study the anatomy (embryonic axis and scutellum) and histology [cell number and cell size of the scutellums, presence of sub-cellular structures in scutellum tissue (starch granules, oil and protein bodies)] of the embryos. Plants of all crosses exhibited a similar kernel number and kernel weight. Pollen source modified neither the growth period of kernel structures, nor pericarp growth rate. By contrast, pollen source determined a trade-off between embryo and endosperm growth rates, which impacted on the embryo/kernel ratio of mature kernels. Modifications to the embryo size were mediated by scutellum cell number. Pollen source also affected (P < 0.01) the allocation of embryo chemical compounds. Negative correlations between embryo oil concentration and the concentrations of starch (r = 0.98, P < 0.01) and soluble sugars (r = 0.95, P < 0.05) were found. Coincidentally, embryos with low oil concentration had an increased (P < 0.05-0.10) scutellum cell area occupied by starch granules and fewer oil bodies. The effects of pollen source on both the embryo/kernel ratio and the allocation of embryo chemicals seem to be related to the early established sink strength (i.e. sink size and sink activity) of the embryos.

  7. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall be...

  8. 7 CFR 51.2090 - Serious damage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... defect which makes a kernel or piece of kernel unsuitable for human consumption, and includes decay...: Shriveling when the kernel is seriously withered, shrunken, leathery, tough or only partially developed: Provided, that partially developed kernels are not considered seriously damaged if more than one-fourth of...

  9. Anisotropic hydrodynamics with a scalar collisional kernel

    NASA Astrophysics Data System (ADS)

    Almaalol, Dekrayat; Strickland, Michael

    2018-04-01

    Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2 ↔ 2 scattering kernel in scalar λϕ⁴ theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.

  10. Ideal regularization for learning kernels from labels.

    PubMed

    Pan, Binbin; Lai, Jianhuang; Shen, Lixin

    2014-08-01

    In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization to some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently. Copyright © 2014 Elsevier Ltd. All rights reserved.
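
    One common way to make "labels as a kernel" concrete is the ideal kernel y y^T on the labelled samples (+1 for same-class pairs, -1 otherwise, for +/-1 labels); the sketch below nudges a standard kernel towards it on the labelled block. This only illustrates the general idea of a label-derived term that is linear in the kernel matrix and is not the specific ideal-regularization functional or the algorithms of the paper; the parameter lam and all names are assumptions.

        import numpy as np

        def label_adjusted_kernel(K, y_labeled, labeled_idx, lam):
            """Add a scaled ideal-kernel term on the labelled block of K.

            K           : (n, n) base kernel matrix
            y_labeled   : labels in {-1, +1} for the labelled samples
            labeled_idx : indices of the labelled samples within K
            lam         : strength of the label term (assumed hyperparameter)
            """
            K_new = K.copy()
            ideal = np.outer(y_labeled, y_labeled)          # +1 same class, -1 different
            ix = np.ix_(labeled_idx, labeled_idx)
            K_new[ix] = K_new[ix] + lam * ideal
            return K_new

        # Toy usage: 20 samples, 5 of them labelled.
        rng = np.random.default_rng(7)
        X = rng.normal(size=(20, 4))
        K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / 4.0)
        K2 = label_adjusted_kernel(K, np.array([1, 1, -1, -1, 1]), [0, 1, 2, 3, 4], lam=0.5)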

  11. Straight-chain halocarbon forming fluids for TRISO fuel kernel production - Tests with yttria-stabilized zirconia microspheres

    NASA Astrophysics Data System (ADS)

    Baker, M. P.; King, J. C.; Gorman, B. P.; Braley, J. C.

    2015-03-01

    Current methods of TRISO fuel kernel production in the United States use a sol-gel process with trichloroethylene (TCE) as the forming fluid. After contact with radioactive materials, the spent TCE becomes a mixed hazardous waste, and high costs are associated with its recycling or disposal. Reducing or eliminating this mixed waste stream would not only benefit the environment, but would also enhance the economics of kernel production. Previous research yielded three candidates for testing as alternatives to TCE: 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane. This study considers the production of yttria-stabilized zirconia (YSZ) kernels in silicone oil and the three chosen alternative formation fluids, with subsequent characterization of the produced kernels and used forming fluid. Kernels formed in silicone oil and bromotetradecane were comparable to those produced by previous kernel production efforts, while those produced in chlorooctadecane and iodododecane experienced gelation issues leading to poor kernel formation and geometry.

  12. Numerical study of the ignition behavior of a post-discharge kernel injected into a turbulent stratified cross-flow

    NASA Astrophysics Data System (ADS)

    Jaravel, Thomas; Labahn, Jeffrey; Ihme, Matthias

    2017-11-01

    The reliable initiation of flame ignition by high-energy spark kernels is critical for the operability of aviation gas turbines. The evolution of a spark kernel ejected by an igniter into a turbulent stratified environment is investigated using detailed numerical simulations with complex chemistry. At early times post ejection, comparisons of simulation results with high-speed Schlieren data show that the initial trajectory of the kernel is well reproduced, with a significant amount of air entrainment from the surrounding flow induced by the kernel ejection. After transiting through a non-flammable mixture, the kernel reaches a second stream of flammable methane-air mixture, where the success of kernel ignition was found to depend on the local flow state and operating conditions. By performing parametric studies, the probability of kernel ignition was identified and compared with experimental observations. The ignition behavior is characterized by analyzing the local chemical structure, and its stochastic variability is also investigated.

  13. The site, size, spatial stability, and energetics of an X-ray flare kernel

    NASA Technical Reports Server (NTRS)

    Petrasso, R.; Gerassimenko, M.; Nolte, J.

    1979-01-01

    The site, size evolution, and energetics of an X-ray kernel that dominated a solar flare during its rise and somewhat during its peak are investigated. The position of the kernel remained stationary to within about 3 arc sec over the 30-min interval of observations, despite pulsations in the kernel X-ray brightness in excess of a factor of 10. This suggests a tightly bound, deeply rooted magnetic structure, more plausibly associated with the near chromosphere or low corona rather than with the high corona. The H-alpha flare onset coincided with the appearance of the kernel, again suggesting a close spatial and temporal coupling between the chromospheric H-alpha event and the X-ray kernel. At the first kernel brightness peak its size was no larger than about 2 arc sec, when it accounted for about 40% of the total flare flux. In the second rise phase of the kernel, a source power input of order 2 × 10^24 erg/s is minimally required.

  14. The pre-image problem in kernel methods.

    PubMed

    Kwok, James Tin-yau; Tsang, Ivor Wai-hung

    2004-11-01

    In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method, which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra and does not suffer from numerical instability or local minimum problems. Evaluations on performing kernel PCA and kernel clustering on the USPS data set show much improved performance.
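
    For orientation, the classical iterative alternative that this paper improves upon, the fixed-point pre-image iteration for the Gaussian (RBF) kernel attributed to Mika et al., is sketched below. The paper's own method is non-iterative and based on distance constraints, which this code does not implement; the expansion coefficients in the toy example are a stand-in for a kernel-PCA projection.

        import numpy as np

        def rbf_preimage_fixed_point(X, gamma_coeffs, sigma, z0, n_iter=100, tol=1e-8):
            """Fixed-point pre-image iteration (Mika et al.) for an RBF kernel.

            X            : (n, d) training points spanning the feature-space expansion
            gamma_coeffs : (n,) expansion coefficients of the target feature vector
            sigma        : RBF kernel width
            z0           : (d,) initial guess for the pre-image
            """
            z = z0.copy()
            for _ in range(n_iter):
                w = gamma_coeffs * np.exp(-((X - z) ** 2).sum(1) / (2 * sigma ** 2))
                denom = w.sum()
                if abs(denom) < 1e-12:        # the iteration can collapse; bail out
                    break
                z_new = (w[:, None] * X).sum(0) / denom
                if np.linalg.norm(z_new - z) < tol:
                    return z_new
                z = z_new
            return z

        # Toy usage: pull a far-away initial guess towards the data, with uniform
        # coefficients 1/n standing in for a kernel-PCA reconstruction.
        rng = np.random.default_rng(8)
        X = rng.normal(size=(100, 2))
        coeffs = np.full(100, 1.0 / 100)
        z = rbf_preimage_fixed_point(X, coeffs, sigma=1.0, z0=np.array([2.0, 2.0]))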

  15. Effects of Amygdaline from Apricot Kernel on Transplanted Tumors in Mice.

    PubMed

    Yamshanov, V A; Kovan'ko, E G; Pustovalov, Yu I

    2016-03-01

    The effects of amygdaline from apricot kernels added to fodder on the growth of transplanted LYO-1 and Ehrlich carcinoma were studied in mice. Apricot kernels inhibited the growth of both tumors. Apricot kernels, raw and after thermal processing, given 2 days before transplantation produced a pronounced antitumor effect. Heat-processed apricot kernels given 3 days after transplantation modified the tumor growth and prolonged animal lifespan. Thermal treatment did not considerably reduce the antitumor effect of apricot kernels. It was hypothesized that the antitumor effect of amygdaline on Ehrlich carcinoma and LYO-1 lymphosarcoma was associated with the presence of a bacterial genome in the tumor.

  16. Development of a kernel function for clinical data.

    PubMed

    Daemen, Anneleen; De Moor, Bart

    2009-01-01

    For most diseases and examinations, clinical data such as age, gender and medical history guides clinical management, despite the rise of high-throughput technologies. To fully exploit such clinical information, appropriate modeling of the relevant parameters is required. As the widely used linear kernel function has several disadvantages when applied to clinical data, we propose a new kernel function specifically developed for this type of data. This "clinical kernel function" more accurately represents similarities between patients. Three data sets were studied, and significantly better performance was obtained with a Least Squares Support Vector Machine based on the clinical kernel function compared to the linear kernel function.
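
    As a rough illustration of the idea, the sketch below scores two patients per variable, using (range - |a - b|)/range for continuous or ordinal variables and an equality indicator for nominal ones, and averages over variables. This follows the clinical-kernel idea as commonly described, but the exact definition and normalisation should be checked against the original paper; the example variables and ranges are invented.

        import numpy as np

        def clinical_kernel(x, z, var_types, ranges):
            """Hedged sketch of a per-variable clinical kernel, averaged over variables.

            x, z      : 1-D arrays, one patient each
            var_types : list with 'continuous' (or 'ordinal') and 'nominal' entries
            ranges    : observed max - min of each continuous/ordinal variable
            """
            scores = []
            for j, t in enumerate(var_types):
                if t == 'nominal':
                    scores.append(1.0 if x[j] == z[j] else 0.0)
                else:  # continuous or ordinal: closer values give similarity nearer 1
                    scores.append((ranges[j] - abs(x[j] - z[j])) / ranges[j])
            return float(np.mean(scores))

        # Toy usage: age (continuous), gender (nominal), tumour grade (ordinal).
        patient_a = np.array([63.0, 1.0, 2.0])
        patient_b = np.array([58.0, 1.0, 3.0])
        types = ['continuous', 'nominal', 'ordinal']
        obs_ranges = np.array([50.0, 1.0, 3.0])   # taken from a training cohort (assumption)
        print(clinical_kernel(patient_a, patient_b, types, obs_ranges))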

  17. Manycore Performance-Portability: Kokkos Multidimensional Array Library

    DOE PAGES

    Edwards, H. Carter; Sunderland, Daniel; Porter, Vicki; ...

    2012-01-01

    Large, complex scientific and engineering application codes have a significant investment in computational kernels to implement their mathematical models. Porting these computational kernels to the collection of modern manycore accelerator devices is a major challenge in that these devices have diverse programming models, application programming interfaces (APIs), and performance requirements. The Kokkos Array programming model provides a library-based approach to implement computational kernels that are performance-portable to CPU-multicore and GPGPU accelerator devices. This programming model is based upon three fundamental concepts: (1) manycore compute devices, each with its own memory space, (2) data-parallel kernels and (3) multidimensional arrays. Kernel execution performance is, especially for NVIDIA® devices, extremely dependent on data access patterns. The optimal data access pattern can differ between manycore devices, potentially leading to different implementations of computational kernels specialized for different devices. The Kokkos Array programming model supports performance-portable kernels by (1) separating data access patterns from computational kernels through a multidimensional array API and (2) introducing device-specific data access mappings when a kernel is compiled. An implementation of Kokkos Array is available through Trilinos [Trilinos website, http://trilinos.sandia.gov/, August 2011].

  18. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    PubMed

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. This paper therefore introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the observation that, for suitable kernel parameters, the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized. Experiments with various standard data sets for protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than without KDA. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method produces an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.
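
    To make the role of the Gaussian scale parameter tangible, the sketch below implements a minimal two-class kernel Fisher discriminant (the binary special case of KDA) in numpy and evaluates the class separation of the training projections for several values of sigma. The reconstruction-error-based selection rule proposed in the paper is not implemented, and the ridge added for numerical stability is an assumption.

        import numpy as np

        def rbf_kernel(A, B, sigma):
            sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-sq / (2 * sigma ** 2))

        def kfd_fit(K, idx1, idx2, ridge=1e-6):
            """Two-class kernel Fisher discriminant on a precomputed Gram matrix K."""
            n = K.shape[0]
            m1 = K[:, idx1].mean(axis=1)
            m2 = K[:, idx2].mean(axis=1)
            N = np.zeros((n, n))
            for idx in (idx1, idx2):
                Kc = K[:, idx]
                nc = len(idx)
                center = np.eye(nc) - np.ones((nc, nc)) / nc
                N += Kc @ center @ Kc.T
            return np.linalg.solve(N + ridge * np.eye(n), m1 - m2)   # dual coefficients

        # Toy usage: the scale parameter sigma strongly changes the projection quality.
        rng = np.random.default_rng(9)
        X = np.vstack([rng.normal(-1, 1, (40, 6)), rng.normal(+1, 1, (40, 6))])
        idx1, idx2 = np.arange(40), np.arange(40, 80)
        for sigma in (0.1, 1.0, 10.0):
            K = rbf_kernel(X, X, sigma)
            alpha = kfd_fit(K, idx1, idx2)
            proj = K @ alpha                      # projections of the training samples
            gap = abs(proj[idx1].mean() - proj[idx2].mean()) / proj.std()
            print(sigma, round(gap, 3))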

  19. Impact of deep learning on the normalization of reconstruction kernel effects in imaging biomarker quantification: a pilot study in CT emphysema

    NASA Astrophysics Data System (ADS)

    Jin, Hyeongmin; Heo, Changyong; Kim, Jong Hyo

    2018-02-01

    Differing reconstruction kernels are known to strongly affect the variability of imaging biomarkers and thus remain a barrier to translating computer-aided quantification techniques into clinical practice. This study presents a deep learning application to CT kernel conversion, which converts a CT image reconstructed with a sharp kernel to one reconstructed with a standard kernel, and evaluates its impact on reducing the variability of a pulmonary imaging biomarker, the emphysema index (EI). Forty cases of low-dose chest CT exams obtained with 120 kVp, 40 mAs, 1 mm slice thickness, and two reconstruction kernels (B30f, B50f) were selected from the low-dose lung cancer screening database of our institution. A fully convolutional network was implemented with the Keras deep learning library. The model consisted of symmetric layers to capture the context and fine-structure characteristics of CT images from the standard and sharp reconstruction kernels. Pairs of the full-resolution CT data set were fed to the input and output nodes to train the convolutional network to learn the appropriate filter kernels for converting CT images from the sharp kernel to the standard kernel, with the mean squared error between the input and target images as the training criterion. EIs (RA950 and Perc15) were measured with a software package (ImagePrism Pulmo, Seoul, South Korea) and compared across the B50f, B30f, and converted B50f data sets. The effect of kernel conversion was evaluated with the mean and standard deviation of pair-wise differences in EI. The population mean of RA950 was 27.65 +/- 7.28% for the B50f data set, 10.82 +/- 6.71% for the B30f data set, and 8.87 +/- 6.20% for the converted B50f data set. The mean of pair-wise absolute differences in RA950 between B30f and B50f was reduced from 16.83% to 1.95% using kernel conversion. Our study demonstrates the feasibility of applying the deep learning technique to CT kernel conversion and to reducing the kernel-induced variability of EI quantification. The deep learning model has the potential to improve the reliability of imaging biomarkers, especially in evaluating longitudinal changes of EI when patient CT scans were performed with different kernels.
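
    The two endpoints have simple definitions: RA950 is the percentage of lung voxels with attenuation below -950 HU and Perc15 is the 15th percentile of the lung attenuation histogram. The sketch below computes both from a HU volume and a lung mask; the synthetic data and the assumption that a segmentation mask is already available are illustrative only.

        import numpy as np

        def emphysema_indices(hu_volume, lung_mask):
            """Compute RA-950 (%) and Perc15 (HU) over the segmented lung voxels."""
            lung_hu = hu_volume[lung_mask]
            ra950 = 100.0 * np.mean(lung_hu < -950.0)    # % of lung voxels below -950 HU
            perc15 = np.percentile(lung_hu, 15)          # 15th percentile of lung HU values
            return ra950, perc15

        # Toy usage with a synthetic volume; a real pipeline would load the CT series
        # (e.g. with pydicom) and a lung segmentation mask instead.
        rng = np.random.default_rng(10)
        vol = rng.normal(-870.0, 60.0, size=(64, 64, 64))
        mask = np.ones(vol.shape, dtype=bool)
        print(emphysema_indices(vol, mask))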

  20. Metabolic network prediction through pairwise rational kernels.

    PubMed

    Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian

    2014-09-26

    Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are series of biochemical reactions in which the product (output) from one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models depend on the annotation of the genes, which leads to error accumulation when pathways are predicted from incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pairs of entities. Some of these classification methods, e.g., pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been used effectively in problems that handle large amounts of sequence information, such as protein essentiality, natural language processing and machine translation. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernels (PRKs)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and pairwise SVMs to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy values are improved, while maintaining lower construction and execution times. The power of using kernels is that almost any sort of data can be represented using kernels. Therefore, completely disparate types of data can be combined to add power to kernel-based machine learning methods. When we compared our proposal using PRKs with other similar kernels, the execution times were decreased with no compromise of accuracy. We also showed that by combining PRKs with other kernels that include evolutionary information, the accuracy can also be improved. As our proposal can use any type of sequence data, genes do not need to be properly annotated, avoiding error accumulation due to incorrect previous annotations.
