Science.gov

Sample records for algorithm performs significantly

  1. Statistically significant performance results of a mine detector and fusion algorithm from an x-band high-resolution SAR

    NASA Astrophysics Data System (ADS)

    Williams, Arnold C.; Pachowicz, Peter W.

    2004-09-01

    Current mine detection research indicates that no single sensor or single look from a sensor will detect mines/minefields in a real-time manner at a performance level suitable for a forward maneuver unit. Hence, the integrated development of detectors and fusion algorithms is of primary importance. A problem in this development process has been the evaluation of these algorithms with relatively small data sets, leading to anecdotal and frequently overtrained results. These anecdotal results are often unreliable and conflicting among various sensors and algorithms. Consequently, the physical phenomena that ought to be exploited and the performance benefits of this exploitation are often ambiguous. The Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate has collected large amounts of multisensor data such that statistically significant evaluations of detection and fusion algorithms can be obtained. Even with these large data sets, care must be taken in algorithm design and data processing to achieve statistically significant performance results for combined detectors and fusion algorithms. This paper discusses statistically significant detection and combined multilook fusion results for the Ellipse Detector (ED) and the Piecewise Level Fusion Algorithm (PLFA). These statistically significant performance results are characterized by ROC curves obtained by processing multilook, high-resolution SAR data from the Veridian X-band radar. We discuss the implications of these results for mine detection and the importance of statistical significance, sample size, ground truth, and algorithm design in performance evaluation.
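
    The statistical qualification discussed above can be illustrated with a small sketch: an empirical ROC curve with bootstrap confidence bands computed from detector scores. The detector output, thresholds, and class balance below are synthetic placeholders, not the ED/PLFA data.

```python
# Hedged sketch: empirical ROC curve with bootstrap confidence bands,
# the kind of statistically qualified performance summary discussed above.
# All data below are synthetic.
import numpy as np

def roc_curve(scores, labels, thresholds):
    labels = np.asarray(labels, dtype=bool)
    pd = np.array([(scores[labels] >= t).mean() for t in thresholds])    # detection prob.
    pfa = np.array([(scores[~labels] >= t).mean() for t in thresholds])  # false-alarm prob.
    return pfa, pd

def bootstrap_pd_band(scores, labels, thresholds, n_boot=500, rng=None):
    """95% bootstrap band on the detection probability at each threshold."""
    rng = np.random.default_rng(rng)
    n = len(scores)
    curves = [roc_curve(scores[idx], labels[idx], thresholds)[1]
              for idx in rng.integers(0, n, size=(n_boot, n))]
    return np.percentile(curves, [2.5, 97.5], axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    labels = rng.uniform(size=2000) < 0.1            # 10% targets
    scores = rng.normal(labels * 1.5, 1.0)           # synthetic detector output
    thr = np.linspace(-3.0, 4.0, 50)
    pfa, pd = roc_curve(scores, labels, thr)
    lo, hi = bootstrap_pd_band(scores, labels, thr, rng=9)
    i = np.argmin(np.abs(pfa - 0.1))
    print(f"Pd at ~10% Pfa: {pd[i]:.2f} (95% CI {lo[i]:.2f}-{hi[i]:.2f})")
```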

  2. Algorithm for Detecting Significant Locations from Raw GPS Data

    NASA Astrophysics Data System (ADS)

    Kami, Nobuharu; Enomoto, Nobuyuki; Baba, Teruyuki; Yoshikawa, Takashi

    We present a fast algorithm for probabilistically extracting significant locations from raw GPS data based on data point density. Extracting significant locations from raw GPS data is the first essential step of algorithms designed for location-aware applications. Assuming that a location is significant if users spend a certain time around that area, most current algorithms compare spatial/temporal variables, such as stay duration and roaming diameter, with given fixed thresholds to extract significant locations. However, the appropriate threshold values are not clearly known a priori, and algorithms with fixed thresholds are inherently error-prone, especially under high noise levels. Moreover, for N data points, they are generally O(N^2) algorithms since distance computation is required. We developed a fast algorithm for selective data point sampling around significant locations based on density information by constructing random histograms using locality sensitive hashing. Evaluations show competitive performance in detecting significant locations even under high noise levels.
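
    As a rough illustration of the density-based sampling idea, the sketch below estimates point density by averaging bucket counts over several randomly shifted grid histograms, a simple stand-in for the locality-sensitive hashing construction described above; all names, thresholds, and parameters are illustrative.

```python
# Hedged sketch: density-based sampling of candidate "significant locations"
# from raw 2D GPS points using randomized grid histograms (a simple stand-in
# for a full locality-sensitive hashing scheme).
import numpy as np

def random_histogram_density(points, n_hashes=8, cell=0.001, rng=None):
    """Estimate relative point density by averaging bucket counts over
    several randomly shifted grid histograms."""
    rng = np.random.default_rng(rng)
    density = np.zeros(len(points))
    for _ in range(n_hashes):
        offset = rng.uniform(0.0, cell, size=2)                 # random grid shift
        keys = np.floor((points + offset) / cell).astype(np.int64)
        _, inverse, counts = np.unique(keys, axis=0,
                                       return_inverse=True,
                                       return_counts=True)
        density += counts[inverse.ravel()]                      # per-point bucket count
    return density / n_hashes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # synthetic track: background noise plus two "stay" clusters
    noise = rng.uniform(0, 0.05, size=(500, 2))
    stay1 = rng.normal([0.01, 0.01], 0.0005, size=(200, 2))
    stay2 = rng.normal([0.04, 0.03], 0.0005, size=(200, 2))
    pts = np.vstack([noise, stay1, stay2])
    d = random_histogram_density(pts)
    candidates = pts[d > np.percentile(d, 90)]                  # densest 10% of points
    print(len(candidates), "candidate points near significant locations")
```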

  3. High-performance combinatorial algorithms

    SciTech Connect

    Pinar, Ali

    2003-10-31

    Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.

  4. Benchmarking image fusion algorithm performance

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2012-06-01

    Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with regard to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be baselined against whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information-theoretic image metrics are presented and compared to perception test results. The results show that an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the established performance benchmark.
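
    One family of information-theoretic fusion metrics of the kind referred to above scores a fused image by the mutual information it shares with each input band. The sketch below is a generic example of such a metric, not necessarily one of the metrics used in this study.

```python
# Hedged sketch: a mutual-information-based fusion quality score of the kind
# often used to benchmark image fusion (generic illustration only).
import numpy as np

def mutual_information(a, b, bins=64):
    """MI between two images, estimated from their joint grey-level histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_score(band_a, band_b, fused):
    """Total information the fused image shares with both input bands."""
    return mutual_information(fused, band_a) + mutual_information(fused, band_b)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a = rng.integers(0, 256, (128, 128)).astype(float)
    b = rng.integers(0, 256, (128, 128)).astype(float)
    fused = 0.5 * (a + b)                      # trivial averaging "fusion"
    print("MI-based fusion score:", round(fusion_score(a, b, fused), 3))
```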

  5. Performance analysis of cone detection algorithms.

    PubMed

    Mariotti, Letizia; Devaney, Nicholas

    2015-04-01

    Many algorithms have been proposed to help clinicians evaluate cone density and spacing, as these may be related to the onset of retinal diseases. However, there has been no rigorous comparison of the performance of these algorithms. In addition, the performance of such algorithms is typically determined by comparison with human observers. Here we propose a technique to simulate realistic images of the cone mosaic. We use the simulated images to test the performance of three popular cone detection algorithms, and we introduce an algorithm which is used by astronomers to detect stars in astronomical images. We use Free Response Operating Characteristic (FROC) curves to evaluate and compare the performance of the four algorithms. This allows us to optimize the performance of each algorithm. We observe that performance is significantly enhanced by up-sampling the images. We investigate the effect of noise and image quality on cone mosaic parameters estimated using the different algorithms, finding that the estimated regularity is the most sensitive parameter. PMID:26366758
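
    A minimal sketch of how FROC operating points can be computed from per-detection confidence scores is shown below; the matching of detections to true cones is assumed to have been done already, and the data are synthetic.

```python
# Hedged sketch: building a free-response ROC (FROC) curve from per-detection
# confidence scores and a flag saying whether each detection matched a true cone.
import numpy as np

def froc_points(scores, is_true_positive, n_true, n_images):
    """Return (avg false positives per image, sensitivity) at every threshold."""
    is_true_positive = np.asarray(is_true_positive, dtype=bool)
    order = np.argsort(scores)[::-1]            # descending confidence
    tp = np.cumsum(is_true_positive[order])
    fp = np.cumsum(~is_true_positive[order])
    sensitivity = tp / n_true
    fppi = fp / n_images
    return fppi, sensitivity

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    scores = rng.uniform(size=300)
    is_tp = rng.uniform(size=300) < 0.7         # synthetic match flags
    fppi, sens = froc_points(scores, is_tp, n_true=250, n_images=10)
    print("sensitivity at ~1 FP/image:", round(sens[np.searchsorted(fppi, 1.0)], 2))
```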

  6. Belief network algorithms: A study of performance

    SciTech Connect

    Jitnah, N.

    1996-12-31

    This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.

  7. TIRS stray light correction: algorithms and performance

    NASA Astrophysics Data System (ADS)

    Gerace, Aaron; Montanaro, Matthew; Beckmann, Tim; Tyrrell, Kaitlin; Cozzo, Alexandra; Carney, Trevor; Ngan, Vicki

    2015-09-01

    The Thermal Infrared Sensor (TIRS) onboard Landsat 8 was tasked with continuing thermal band measurements of the Earth as part of the Landsat program. From first light in early 2013, there were obvious indications that stray light was contaminating the thermal image data collected from the instrument. Traditional calibration techniques did not perform adequately, as non-uniform banding was evident in the corrected data and error in absolute estimates of temperature over trusted buoy sites varied seasonally and, in worst cases, exceeded 9 K. The development of an operational technique to remove the effects of the stray light has become a high priority to enhance the utility of the TIRS data. This paper introduces the current algorithm being tested by Landsat's calibration and validation team to remove stray light from TIRS image data. The integration of the algorithm into the EROS test system is discussed, with strategies for operationalizing the method emphasized. Techniques for assessing the methodologies used are presented and potential refinements to the algorithm are suggested. Initial results indicate that the proposed algorithm significantly reduces stray light artifacts in the image data. Specifically, visual and quantitative evidence suggests that the algorithm practically eliminates banding in the image data. Additionally, the seasonal variation in absolute errors is flattened and, in the worst case, errors of over 9 K are reduced to within 2 K. Future work focuses on refining the algorithm based on these findings and applying traditional calibration techniques to enhance the final image product.

  8. Discovering simple DNA sequences by the algorithmic significance method.

    PubMed

    Milosavljević, A; Jurka, J

    1993-08-01

    A new method, 'algorithmic significance', is proposed as a tool for discovery of patterns in DNA sequences. The main idea is that patterns can be discovered by finding ways to encode the observed data concisely. In this sense, the method can be viewed as a formal version of Occam's Razor. In this paper the method is applied to discover significantly simple DNA sequences. We define DNA sequences to be simple if they contain repeated occurrences of certain 'words' and thus can be encoded in a small number of bits. This definition includes minisatellites and microsatellites. A standard dynamic programming algorithm for data compression is applied to compute the minimal encoding lengths of sequences in linear time. An electronic mail server for identification of simple sequences based on the proposed method has been installed at the Internet address pythia@anl.gov. PMID:8402207
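
    The core inequality usually quoted for the algorithmic significance method can be stated as follows; it comes from general coding-theoretic arguments, and the notation here is illustrative rather than the authors' own.

```latex
% If a decoding algorithm A encodes sequence s in I_A(s) bits while the null
% model p_0 needs I_0(s) = -log2 p_0(s) bits, the probability under the null
% of saving d or more bits by chance is bounded by 2^{-d}:
P_0\bigl( I_0(s) - I_A(s) \ge d \bigr) \le 2^{-d},
\qquad I_0(s) = -\log_2 p_0(s).
```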

  9. Algorithms for Detecting Significantly Mutated Pathways in Cancer

    NASA Astrophysics Data System (ADS)

    Vandin, Fabio; Upfal, Eli; Raphael, Benjamin J.

    Recent genome sequencing studies have shown that the somatic mutations that drive cancer development are distributed across a large number of genes. This mutational heterogeneity complicates efforts to distinguish functional mutations from sporadic, passenger mutations. Since cancer mutations are hypothesized to target a relatively small number of cellular signaling and regulatory pathways, a common approach is to assess whether known pathways are enriched for mutated genes. However, restricting attention to known pathways will not reveal novel cancer genes or pathways. An alternative strategy is to examine mutated genes in the context of genome-scale interaction networks that include both well-characterized pathways and additional gene interactions measured through various approaches. We introduce a computational framework for de novo identification of subnetworks in a large gene interaction network that are mutated in a significant number of patients. This framework includes two major features. First, we introduce a diffusion process on the interaction network to define a local neighborhood of "influence" for each mutated gene in the network. Second, we derive a two-stage multiple hypothesis test to bound the false discovery rate (FDR) associated with the identified subnetworks. We test these algorithms on a large human protein-protein interaction network using mutation data from two recent studies: glioblastoma samples from The Cancer Genome Atlas and lung adenocarcinoma samples from the Tumor Sequencing Project. We successfully recover pathways that are known to be important in these cancers, such as the p53 pathway. We also identify additional pathways, such as the Notch signaling pathway, that have been implicated in other cancers but not previously reported as mutated in these samples. Our approach is the first, to our knowledge, to demonstrate a computationally efficient strategy for de novo identification of statistically significant mutated subnetworks.
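
    A common way to realize a diffusion-based influence measure on an interaction network is a random walk with restart; the sketch below follows that generic formulation, which is not necessarily the exact diffusion process used by the authors.

```python
# Hedged sketch: a random-walk-with-restart style diffusion on an interaction
# network, one common way to define each mutated gene's local "influence"
# neighborhood (generic formulation, not necessarily the authors' exact one).
import numpy as np

def influence_matrix(adjacency, restart=0.4):
    """F[i, j] = steady-state influence of node j on node i under a walk that
    restarts at its source with probability `restart`."""
    A = np.asarray(adjacency, dtype=float)
    col_sums = A.sum(axis=0)
    W = A / np.where(col_sums == 0, 1.0, col_sums)   # column-stochastic walk, guard isolated nodes
    n = A.shape[0]
    return restart * np.linalg.inv(np.eye(n) - (1.0 - restart) * W)

if __name__ == "__main__":
    # toy 5-gene network: a path 0-1-2-3 plus a pendant gene 4 attached to 1
    A = np.zeros((5, 5))
    for i, j in [(0, 1), (1, 2), (2, 3), (1, 4)]:
        A[i, j] = A[j, i] = 1
    F = influence_matrix(A)
    print("influence of gene 0 on the network:", np.round(F[:, 0], 3))
```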

  10. Algorithms for improved performance in cryptographic protocols.

    SciTech Connect

    Schroeppel, Richard Crabtree; Beaver, Cheryl Lynn

    2003-11-01

    Public key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and encrypted messages and digital signatures consume considerable bandwidth. Accordingly, there are many environments (e.g. wireless, ad-hoc, remote sensing networks) where the requirements of public-key cryptography are prohibitive and it cannot be used. The use of elliptic curves in public-key computations has provided a means by which computations and bandwidth can be somewhat reduced. We report here on the research conducted in an LDRD aimed at finding even more efficient algorithms and at making public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent application has been filed. Further, we discovered some new problems and relations on which future cryptographic algorithms may be based.

  11. Passive MMW algorithm performance characterization using MACET

    NASA Astrophysics Data System (ADS)

    Williams, Bradford D.; Watson, John S.; Amphay, Sengvieng A.

    1997-06-01

    As passive millimeter wave sensor technology matures, algorithms which are tailored to exploit the benefits of this technology are being developed. The expedient development of such algorithms requires an understanding of not only the gross phenomenology, but also specific quirks and limitations inherent in the sensors and the data gathering methodology specific to this regime. This level of understanding is approached as the technology matures and increasing amounts of data become available for analysis. The Armament Directorate of Wright Laboratory, WL/MN, has spearheaded the advancement of passive millimeter-wave technology in algorithm development tools and modeling capability as well as sensor development. A passive MMW channel is available within WL/MN's popular multi-channel modeling program Irma, and a sample passive MMW algorithm is incorporated into the Modular Algorithm Concept Evaluation Tool, an algorithm development and evaluation system. The Millimeter Wave Analysis of Passive Signatures system provides excellent data collection capability in the 35, 60, and 95 GHz MMW bands. This paper exploits these assets to study the PMMW signature of a High Mobility Multipurpose Wheeled Vehicle in the three bands mentioned, and the effect of camouflage upon this signature and upon autonomous target recognition algorithm performance.

  12. Bootstrap performance profiles in stochastic algorithms assessment

    SciTech Connect

    Costa, Lino; Espírito Santo, Isabel A.C.P.; Oliveira, Pedro

    2015-03-10

    Optimization with stochastic algorithms has become a relevant research field. Due to its stochastic nature, its assessment is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
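
    The sketch below illustrates the two ingredients discussed above: bootstrapping a statistic over a small number of stochastic runs, and summarizing solvers with a performance profile. The data, the choice of statistic, and the profile form are illustrative assumptions, not the paper's experimental setup.

```python
# Hedged sketch: bootstrapping a per-problem statistic of stochastic-solver
# runs (here the median best objective value) and turning the results into a
# Dolan-More style performance profile.
import numpy as np

def bootstrap_stat(samples, stat=np.median, n_boot=1000, rng=None):
    """Bootstrap distribution of `stat` over a small set of runs."""
    rng = np.random.default_rng(rng)
    samples = np.asarray(samples)
    idx = rng.integers(0, len(samples), size=(n_boot, len(samples)))
    return stat(samples[idx], axis=1)

def performance_profile(costs, taus):
    """costs[s, p]: cost of solver s on problem p. Returns rho_s(tau)."""
    ratios = costs / costs.min(axis=0, keepdims=True)
    return np.array([[np.mean(r <= tau) for tau in taus] for r in ratios])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # 2 solvers x 20 problems: bootstrap the median of 15 noisy runs each
    costs = np.array([[bootstrap_stat(1 + 0.2 * s + rng.normal(0, 0.3, 15),
                                      rng=rng).mean()
                       for _ in range(20)] for s in range(2)])
    print(performance_profile(costs, taus=[1.0, 1.1, 1.5]))
```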

  13. The Real World Significance of Performance Prediction

    ERIC Educational Resources Information Center

    Pardos, Zachary A.; Wang, Qing Yang; Trivedi, Shubhendu

    2012-01-01

    In recent years, the educational data mining and user modeling communities have been aggressively introducing models for predicting student performance on external measures such as standardized tests as well as within-tutor performance. While these models have brought statistically reliable improvement to performance prediction, the real world…

  14. Performance of a streaming mesh refinement algorithm.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2004-08-01

    In SAND report 2004-1617, we outline a method for edge-based tetrahedral subdivision that does not rely on saving state or communication to produce compatible tetrahedralizations. This report analyzes the performance of the technique by characterizing (a) mesh quality, (b) execution time, and (c) traits of the algorithm that could affect quality or execution time differently for different meshes. It also details the method used to debug the several hundred subdivision templates that the algorithm relies upon. Mesh quality is on par with other similar refinement schemes and throughput on modern hardware can exceed 600,000 output tetrahedra per second. But if you want to understand the traits of the algorithm, you have to read the report!

  15. Evaluating Algorithm Performance Metrics Tailored for Prognostics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics has taken a center stage in Condition Based Maintenance (CBM), where it is desired to estimate the Remaining Useful Life (RUL) of the system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtimes. Validation of such predictions is an important but difficult proposition, and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess key performance aspects expected of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms as compared to other conventional metrics. Specifically, four algorithms, namely Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR), are compared. These algorithms vary in complexity and their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms in a different manner, and depending on the requirements and constraints suitable metrics may be chosen. Beyond these results, these metrics offer ideas about how metrics suitable for prognostics may be designed so that the evaluation procedure can be standardized.

  16. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features

    PubMed Central

    Amudha, P.; Karthik, S.; Sivakumari, S.

    2015-01-01

    Intrusion detection has become a main part of network security due to the huge number of attacks that affect computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to obtain better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with the other machine learning algorithms and found to be significantly different. PMID:26221625

  17. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features.

    PubMed

    Amudha, P; Karthik, S; Sivakumari, S

    2015-01-01

    Intrusion detection has become a main part of network security due to the huge number of attacks that affect computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to obtain better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with the other machine learning algorithms and found to be significantly different. PMID:26221625

  18. Predicting the performance of a spatial gamut mapping algorithm

    NASA Astrophysics Data System (ADS)

    Bakke, Arne M.; Farup, Ivar; Hardeberg, Jon Y.

    2009-01-01

    Gamut mapping algorithms are currently being developed to take advantage of the spatial information in an image to improve the utilization of the destination gamut. These algorithms try to preserve the spatial information between neighboring pixels in the image, such as edges and gradients, without sacrificing global contrast. Experiments have shown that such algorithms can result in significantly improved reproduction of some images compared with non-spatial methods. However, due to the spatial processing of images, they introduce unwanted artifacts when used on certain types of images. In this paper we perform basic image analysis to predict whether a spatial algorithm is likely to perform better or worse than a good, non-spatial algorithm. Our approach starts by detecting the relative amount of areas in the image that are made up of uniformly colored pixels, as well as the amount of areas that contain detail in out-of-gamut regions. A weighted difference is computed from these numbers, and we show that the result has a high correlation with the observed performance of the spatial algorithm in a previously conducted psychophysical experiment.
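
    A minimal sketch of the kind of image analysis described above follows: it estimates the share of near-uniform pixels and the share of detailed pixels that fall outside a crudely approximated destination gamut, then combines them into a weighted difference. The thresholds, the gamut test, and the weighting are illustrative placeholders, not the paper's actual predictor.

```python
# Hedged sketch: uniform-area fraction vs. out-of-gamut detail fraction,
# combined into a single weighted score (all thresholds illustrative).
import numpy as np

def local_range(channel, k=1):
    """Max-min over a (2k+1)^2 neighborhood, via shifted copies."""
    shifts = [np.roll(np.roll(channel, dy, 0), dx, 1)
              for dy in range(-k, k + 1) for dx in range(-k, k + 1)]
    stack = np.stack(shifts)
    return stack.max(axis=0) - stack.min(axis=0)

def spatial_gamut_score(rgb, uniform_thresh=0.02, chroma_thresh=0.5, w=1.0):
    lum = rgb.mean(axis=2)
    detail = local_range(lum)
    uniform_frac = np.mean(detail < uniform_thresh)
    out_of_gamut = rgb.max(axis=2) - rgb.min(axis=2) > chroma_thresh   # crude gamut proxy
    oog_detail_frac = np.mean((detail >= uniform_thresh) & out_of_gamut)
    return oog_detail_frac - w * uniform_frac    # higher => spatial algorithm favored

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    img = rng.uniform(size=(64, 64, 3))
    print("score:", round(spatial_gamut_score(img), 3))
```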

  19. Impact of Multiscale Retinex Computation on Performance of Segmentation Algorithms

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Hines, Glenn D.

    2004-01-01

    Classical segmentation algorithms subdivide an image into its constituent components based upon some metric that defines commonality between pixels. Often, these metrics incorporate some measure of "activity" in the scene, e.g. the amount of detail that is in a region. The Multiscale Retinex with Color Restoration (MSRCR) is a general purpose, non-linear image enhancement algorithm that significantly affects the brightness, contrast and sharpness within an image. In this paper, we will analyze the impact the MSRCR has on segmentation results and performance.

  20. Performance appraisal of estimation algorithms and application of estimation algorithms to target tracking

    NASA Astrophysics Data System (ADS)

    Zhao, Zhanlue

    This dissertation consists of two parts. The first part deals with the performance appraisal of estimation algorithms. The second part focuses on the application of estimation algorithms to target tracking. Performance appraisal is crucial for understanding, developing and comparing various estimation algorithms. In particular, with the evolution of estimation theory and the increase of problem complexity, performance appraisal is getting more and more challenging for engineers to make comprehensive conclusions. However, the existing theoretical results are inadequate for practical reference. The first part of this dissertation is dedicated to performance measures, which include local performance measures, global performance measures and a model distortion measure. The second part focuses on application of the recursive best linear unbiased estimation (BLUE), or linear minimum mean square error (LMMSE) estimation, to the nonlinear measurement problem in target tracking. The Kalman filter has been the dominant basis for dynamic state filtering for several decades. Beyond the Kalman filter, a more fundamental basis for recursive best linear unbiased filtering has been thoroughly investigated in a series of papers by my advisor Dr. X. Rong Li. Based on the so-called quasi-recursive best linear unbiased filtering technique, the linear-Gaussian assumptions of the Kalman filter can be relaxed such that a general linear filtering technique for nonlinear systems can be achieved. An approximate optimal BLUE filter is implemented for nonlinear measurements in target tracking, which outperforms the existing method significantly in terms of accuracy, credibility and robustness.

  1. Performance Comparison Of Evolutionary Algorithms For Image Clustering

    NASA Astrophysics Data System (ADS)

    Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.

    2014-09-01

    Evolutionary computation tools are able to process real-valued numerical sets in order to extract suboptimal solutions of a designed problem. Data clustering algorithms have been intensively used for image segmentation in remote sensing applications. Despite the wide usage of evolutionary algorithms for data clustering, their clustering performance has been scarcely studied using clustering validation indexes. In this paper, recently proposed evolutionary algorithms (i.e., the Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA) and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (i.e., k-means, FCM, and SOM networks) have been used to cluster images, and their performances have been compared using four clustering validation indexes. Experimental results showed that evolutionary algorithms give more reliable cluster centers than classical clustering techniques, but their convergence time is quite long.
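
    As an illustration of scoring a clustering with a validation index, the sketch below computes the Davies-Bouldin index for a tiny k-means clustering of synthetic pixel values; both the index and the clustering code are generic examples, not the specific indexes or algorithms compared in the paper.

```python
# Hedged sketch: a minimal k-means plus the Davies-Bouldin validation index
# (lower is better). Data, k, and the index choice are illustrative.
import numpy as np

def kmeans(X, k, iters=50, rng=None):
    rng = np.random.default_rng(rng)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])   # keep empty clusters in place
    return labels, centers

def davies_bouldin(X, labels, centers):
    k = len(centers)
    scatter = np.array([np.mean(np.linalg.norm(X[labels == j] - centers[j], axis=1))
                        for j in range(k)])
    worst = []
    for i in range(k):
        ratios = [(scatter[i] + scatter[j]) / np.linalg.norm(centers[i] - centers[j])
                  for j in range(k) if j != i]
        worst.append(max(ratios))
    return float(np.mean(worst))

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    pixels = np.vstack([rng.normal(m, 5, (300, 3)) for m in (40, 120, 200)])
    labels, centers = kmeans(pixels, k=3, rng=0)
    print("Davies-Bouldin index:", round(davies_bouldin(pixels, labels, centers), 3))
```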

  2. The significance of task significance: Job performance effects, relational mechanisms, and boundary conditions.

    PubMed

    Grant, Adam M

    2008-01-01

    Does task significance increase job performance? Correlational designs and confounded manipulations have prevented researchers from assessing the causal impact of task significance on job performance. To address this gap, 3 field experiments examined the performance effects, relational mechanisms, and boundary conditions of task significance. In Experiment 1, fundraising callers who received a task significance intervention increased their levels of job performance relative to callers in 2 other conditions and to their own prior performance. In Experiment 2, task significance increased the job dedication and helping behavior of lifeguards, and these effects were mediated by increases in perceptions of social impact and social worth. In Experiment 3, conscientiousness and prosocial values moderated the effects of task significance on the performance of new fundraising callers. The results provide fresh insights into the effects, relational mechanisms, and boundary conditions of task significance, offering noteworthy implications for theory, research, and practice on job design, social information processing, and work motivation and performance. PMID:18211139

  3. A clinical algorithm for triaging patients with significant lymphadenopathy in primary health care settings in Sudan

    PubMed Central

    El Hag, Imad A.; Elsiddig, Kamal E.; Elsafi, Mohamed E.M.O; Elfaki, Mona E.E.; Musa, Ahmed M.; Musa, Brima Y.; Elhassan, Ahmed M.

    2013-01-01

    Background: Tuberculosis is a major health problem in developing countries. The distinction between tuberculous lymphadenitis, non-specific lymphadenitis and malignant lymph node enlargement has to be made at primary health care levels using easy, simple and cheap methods. Objective: To develop a reliable clinical algorithm for primary care settings to triage cases of non-specific, tuberculous and malignant lymphadenopathies. Methods: Calculation of the odds ratios (OR) of the chosen predictor variables was carried out using logistic regression. The numerical score values of the predictor variables were weighted according to their respective ORs. The performance of the score was evaluated by the ROC (Receiver Operating Characteristic) curve. Results: Four predictor variables, Mantoux reading, erythrocyte sedimentation rate (ESR), nocturnal fever and discharging sinuses, correlated significantly with TB diagnosis and were included in the reduced model to establish score A. For score B, the reduced model included Mantoux reading, ESR, lymph-node size and lymph-node number as predictor variables for malignant lymph nodes. Score A ranged from 0 to 12, and a cut-off point of 6 gave a best sensitivity and specificity of 91% and 90%, respectively, whilst score B ranged from -3 to 8, and a cut-off point of 3 gave a best sensitivity and specificity of 83% and 76%, respectively. The calculated area under the ROC curve was 0.964 (95% CI, 0.949-0.980) and 0.856 (95% CI, 0.787-0.925) for scores A and B, respectively, indicating good performance. Conclusion: The developed algorithm can efficiently triage cases with tuberculous and malignant lymphadenopathies for treatment or referral to specialised centres for further work-up.

  4. Turbopump Performance Improved by Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2002-01-01

    The development of design optimization technology for turbomachinery has been initiated using the multiobjective evolutionary algorithm under NASA's Intelligent Synthesis Environment and Revolutionary Aeropropulsion Concepts programs. As an alternative to the traditional gradient-based methods, evolutionary algorithms (EA's) are emergent design-optimization algorithms modeled after the mechanisms found in natural evolution. EA's search from multiple points, instead of moving from a single point. In addition, they require no derivatives or gradients of the objective function, leading to robustness and simplicity in coupling any evaluation codes. Parallel efficiency also becomes very high by using a simple master-slave concept for function evaluations, since such evaluations, for example computational fluid dynamics analyses, often consume the most CPU time. Application of EA's to multiobjective design problems is also straightforward because EA's maintain a population of design candidates in parallel. Because of these advantages, EA's are a unique and attractive approach to real-world design optimization problems.

  5. Case study of isosurface extraction algorithm performance

    SciTech Connect

    Sutton, P M; Hansen, C D; Shen, H; Schikore, D

    1999-12-14

    Isosurface extraction is an important and useful visualization method. Over the past ten years, the field has seen numerous isosurface techniques published, leaving the user in a quandary about which one should be used. Some papers have published complexity analyses of the techniques, yet empirical evidence comparing different methods is lacking. This case study presents a comparative study of several representative isosurface extraction algorithms. It reports and analyzes empirical measurements of execution times and memory behavior for each algorithm. The results show that asymptotically optimal techniques may not be the best choice when implemented on modern computer architectures.

  6. Generic algorithms for high performance scalable geocomputing

    NASA Astrophysics Data System (ADS)

    de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek

    2016-04-01

    During the last decade, the characteristics of computing hardware have changed a lot. For example, instead of a single general purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available for developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g. threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. In the resulting model, the low-level details of how this is done are separated from the model-specific logic representing the modeled system.

  8. Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX) and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach were less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.

  9. Performance characterization of a combined material identification and screening algorithm

    NASA Astrophysics Data System (ADS)

    Green, Robert L.; Hargreaves, Michael D.; Gardner, Craig M.

    2013-05-01

    Portable analytical devices based on a gamut of technologies (Infrared, Raman, X-Ray Fluorescence, Mass Spectrometry, etc.) are now widely available. These tools have seen increasing adoption for field-based assessment by diverse users including military, emergency response, and law enforcement. Frequently, end-users of portable devices are non-scientists who rely on embedded software and the associated algorithms to convert collected data into actionable information. Two classes of problems commonly encountered in field applications are identification and screening. Identification algorithms are designed to scour a library of known materials and determine whether the unknown measurement is consistent with a stored response (or combination of stored responses). Such algorithms can be used to identify a material from many thousands of possible candidates. Screening algorithms evaluate whether at least a subset of features in an unknown measurement correspond to one or more specific substances of interest and are typically configured to detect analytes from a small list of potential targets. Thus, screening algorithms are much less broadly applicable than identification algorithms; however, they typically provide higher detection rates, which makes them attractive for specific applications such as chemical warfare agent or narcotics detection. This paper will present an overview and performance characterization of a combined identification/screening algorithm that has recently been developed. It will be shown that the combined algorithm provides enhanced detection capability more typical of screening algorithms while maintaining a broad identification capability. Additionally, we will highlight how this approach can enable users to incorporate situational awareness during a response.
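
    The distinction drawn above between identification and screening can be sketched as follows: identification searches an entire spectral library for the best correlation match, while screening checks only a short target list with a more permissive threshold. The spectra, thresholds, and function names are illustrative assumptions, not the algorithm characterized in the paper.

```python
# Hedged sketch: library-wide identification by best spectral correlation vs.
# screening a short target list with a lower alarm threshold.
import numpy as np

def correlate(a, b):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

def identify(measurement, library, accept=0.95):
    """Search the whole library; report the best match if it is strong enough."""
    scores = {name: correlate(measurement, ref) for name, ref in library.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= accept else (None, scores[best])

def screen(measurement, targets, alarm=0.80):
    """Check only a short list of target spectra; alarm on any adequate match."""
    return [name for name, ref in targets.items()
            if correlate(measurement, ref) >= alarm]

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    x = np.linspace(0, 1, 200)
    library = {f"compound_{i}": np.sin((i + 1) * 6 * x) for i in range(50)}
    unknown = library["compound_7"] + rng.normal(0, 0.1, x.size)   # noisy measurement
    print("identification:", identify(unknown, library))
    print("screening hits:", screen(unknown, {"compound_7": library["compound_7"]}))
```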

  10. Quantitative comparison of the performance of SAR segmentation algorithms.

    PubMed

    Caves, R; Quegan, S; White, R

    1998-01-01

    Methods to evaluate the performance of segmentation algorithms for synthetic aperture radar (SAR) images are developed, based on known properties of coherent speckle and a scene model in which areas of constant backscatter coefficient are separated by abrupt edges. Local and global measures of segmentation homogeneity are derived and applied to the outputs of two segmentation algorithms developed for SAR data, one based on iterative edge detection and segment growing, the other based on global maximum a posteriori (MAP) estimation using simulated annealing. The quantitative statistically based measures appear consistent with visual impressions of the relative quality of the segmentations produced by the two algorithms. On simulated data meeting algorithm assumptions, both algorithms performed well but MAP methods appeared visually and measurably better. On real data, MAP estimation was markedly the better method and retained performance comparable to that on simulated data, while the performance of the other algorithm deteriorated sharply. Improvements in the performance measures will require a more realistic scene model and techniques to recognize oversegmentation. PMID:18276219

  11. Significant Advances in the AIRS Science Team Version-6 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena; Molnar, Gyula

    2012-01-01

    AIRS/AMSU is the state of the art infrared and microwave atmospheric sounding system flying aboard EOS Aqua. The Goddard DISC has analyzed AIRS/AMSU observations, covering the period September 2002 until the present, using the AIRS Science Team Version-5 retrieval algorithm. These products have been used by many researchers to make significant advances in both climate and weather applications. The AIRS Science Team Version-6 retrieval, which will become operational in mid-2012, contains many significant theoretical and practical improvements compared to Version-5 which should further enhance the utility of AIRS products for both climate and weather applications. In particular, major changes have been made with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the retrieval procedure; 3) compute Outgoing Longwave Radiation; and 4) determine Quality Control. This paper will describe these advances found in the AIRS Version-6 retrieval algorithm and demonstrate the improvement of AIRS Version-6 products compared to those obtained using Version-5.

  12. Lytro camera technology: theory, algorithms, performance analysis

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio

    2013-03-01

    The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of Lytro camera from a system level perspective, considering the Lytro camera as a black box, and uses our interpretation of Lytro image data saved by the camera. We present our findings based on our interpretation of Lytro camera file structure, image calibration and image rendering; in this context, artifacts and final image resolution are discussed.

  13. A Hybrid Actuation System Demonstrating Significantly Enhanced Electromechanical Performance

    NASA Technical Reports Server (NTRS)

    Su, Ji; Xu, Tian-Bing; Zhang, Shujun; Shrout, Thomas R.; Zhang, Qiming

    2004-01-01

    A hybrid actuation system (HYBAS) has been developed that utilizes the combined electromechanical responses of an electroactive polymer (EAP), an electrostrictive copolymer, and an electroactive ceramic single crystal (PZN-PT). The system employs the contributions of the actuation elements cooperatively and exhibits a significantly enhanced electromechanical performance compared to the performances of devices made of each constituent material, the electroactive polymer or the ceramic single crystal, individually. The theoretical modeling of the performance of the HYBAS is in good agreement with experimental observation. The consistency between the theoretical modeling and the experimental tests makes the design concept an effective route for the development of high performance actuating devices for many applications. The theoretical modeling, fabrication of the HYBAS and the initial experimental results will be presented and discussed.

  14. Improved Ant Colony Clustering Algorithm and Its Performance Study.

    PubMed

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533
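
    For context, the classic pick/drop rule that ant colony clustering methods build on can be sketched as below, using Lumer-Faieta style probabilities; this is a generic illustration, not the abstraction or data-combination mechanism proposed in the paper, and all constants are illustrative.

```python
# Hedged sketch: an ant picks up an isolated item with high probability and
# drops it where similar items are dense (classic corpse-clustering rule).
import numpy as np

K1, K2, ALPHA = 0.1, 0.15, 0.5   # illustrative pick/drop/similarity constants

def local_density(grid, pos, item, radius=1):
    """Average similarity of `item` to items in the surrounding cells."""
    y, x = pos
    neigh = [grid[i][j] for i in range(y - radius, y + radius + 1)
                        for j in range(x - radius, x + radius + 1)
             if (i, j) != (y, x) and 0 <= i < len(grid) and 0 <= j < len(grid[0])
             and grid[i][j] is not None]
    if not neigh:
        return 0.0
    sims = [1 - abs(item - other) / ALPHA for other in neigh]
    return max(0.0, sum(sims) / ((2 * radius + 1) ** 2 - 1))

def p_pick(f):
    return (K1 / (K1 + f)) ** 2

def p_drop(f):
    return (f / (K2 + f)) ** 2

if __name__ == "__main__":
    grid = [[None] * 5 for _ in range(5)]
    grid[2][2], grid[2][3], grid[0][0] = 0.30, 0.32, 0.90   # two alike, one isolated
    for pos in [(2, 2), (0, 0)]:
        f = local_density(grid, pos, grid[pos[0]][pos[1]])
        print(pos, "pick prob:", round(p_pick(f), 2), "drop prob:", round(p_drop(f), 2))
```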

  15. Improved Ant Colony Clustering Algorithm and Its Performance Study

    PubMed Central

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533

  16. Developmental Changes in Adolescents' Olfactory Performance and Significance of Olfaction

    PubMed Central

    Klötze, Paula; Gerber, Friederike; Croy, Ilona; Hummel, Thomas

    2016-01-01

    Aim of the current work was to examine developmental changes in adolescents’ olfactory performance and personal significance of olfaction. In the first study olfactory identification abilities of 76 participants (31 males and 45 females aged between 10 and 18 years; M = 13.8, SD = 2.3) was evaluated with the Sniffin Stick identification test, presented in a cued and in an uncued manner. Verbal fluency was additionally examined for control purpose. In the second study 131 participants (46 males and 85 females aged between 10 and 18 years; (M = 14.4, SD = 2.2) filled in the importance of olfaction questionnaire. Odor identification abilities increased significantly with age and were significantly higher in girls as compared to boys. These effects were especially pronounced in the uncued task and partly related to verbal fluency. In line, the personal significance of olfaction increased with age and was generally higher among female compared to male participants. PMID:27332887

  17. Modeling and performance analysis of GPS vector tracking algorithms

    NASA Astrophysics Data System (ADS)

    Lashley, Matthew

    This dissertation provides a detailed analysis of GPS vector tracking algorithms and the advantages they have over traditional receiver architectures. Standard GPS receivers use a decentralized architecture that separates the tasks of signal tracking and position/velocity estimation. Vector tracking algorithms combine the two tasks into a single algorithm. The signals from the various satellites are processed collectively through a Kalman filter. The advantages of vector tracking over traditional, scalar tracking methods are thoroughly investigated. A method for making a valid comparison between vector and scalar tracking loops is developed. This technique avoids the ambiguities encountered when attempting to make a valid comparison between tracking loops (which are characterized by noise bandwidths and loop order) and the Kalman filters (which are characterized by process and measurement noise covariance matrices) that are used by vector tracking algorithms. The improvement in performance offered by vector tracking is calculated in multiple different scenarios. Rule of thumb analysis techniques for scalar Frequency Lock Loops (FLL) are extended to the vector tracking case. The analysis tools provide a simple method for analyzing the performance of vector tracking loops. The analysis tools are verified using Monte Carlo simulations. Monte Carlo simulations are also used to study the effects of carrier to noise power density (C/N0) ratio estimation and the advantage offered by vector tracking over scalar tracking. The improvement from vector tracking ranges from 2.4 to 6.2 dB in various scenarios. The difference in the performance of the three vector tracking architectures is analyzed. The effects of using a federated architecture with and without information sharing between the receiver's channels are studied. A combination of covariance analysis and Monte Carlo simulation is used to analyze the performance of the three algorithms. The federated algorithm without

  18. A High-Performance Genetic Algorithm: Using Traveling Salesman Problem as a Case

    PubMed Central

    Tsai, Chun-Wei; Tseng, Shih-Pang; Yang, Chu-Sing

    2014-01-01

    This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA. PMID:24892038

  19. Dentate Gyrus Circuitry Features Improve Performance of Sparse Approximation Algorithms

    PubMed Central

    Petrantonakis, Panagiotis C.; Poirazi, Panayiota

    2015-01-01

    Memory-related activity in the Dentate Gyrus (DG) is characterized by sparsity. Memory representations are seen as activated neuronal populations of granule cells, the main encoding cells in DG, which are estimated to engage 2–4% of the total population. This sparsity is assumed to enhance the ability of DG to perform pattern separation, one of the most valuable contributions of DG during memory formation. In this work, we investigate how features of the DG such as its excitatory and inhibitory connectivity diagram can be used to develop theoretical algorithms performing Sparse Approximation, a widely used strategy in the Signal Processing field. Sparse approximation stands for the algorithmic identification of few components from a dictionary that approximate a certain signal. The ability of DG to achieve pattern separation by sparsifying its representations is exploited here to improve the performance of the state-of-the-art sparse approximation algorithm "Iterative Soft Thresholding" (IST) by adding new algorithmic features inspired by the DG circuitry. Lateral inhibition of granule cells, either direct or indirect, via mossy cells, is shown to enhance the performance of the IST. Apart from revealing the potential of DG-inspired theoretical algorithms, this work presents new insights regarding the function of particular cell types in the pattern separation task of the DG. PMID:25635776
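
    A plain Iterative Soft Thresholding loop, the baseline that the DG-inspired features are added to, can be sketched as follows; the dictionary and signal are synthetic, and the step-size choice is the usual Lipschitz-constant rule.

```python
# Hedged sketch: Iterative Soft Thresholding (IST) for the sparse
# approximation problem  min_x ||y - D x||_2^2 + lam * ||x||_1.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ist(D, y, lam=0.1, iters=200):
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        x = soft(x + D.T @ (y - D @ x) / L, lam / L)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    D = rng.normal(size=(64, 256))
    D /= np.linalg.norm(D, axis=0)         # unit-norm dictionary atoms
    x_true = np.zeros(256)
    x_true[rng.choice(256, 5, replace=False)] = 1.0
    y = D @ x_true
    x_hat = ist(D, y, lam=0.05)
    print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```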

  20. Logit Model based Performance Analysis of an Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Hernández, J. A.; Ospina, J. D.; Villada, D.

    2011-09-01

    In this paper, the performance of the Multi Dynamics Algorithm for Global Optimization (MAGO) is studied through simulation using five standard test functions. To guarantee that the algorithm converges to a global optimum, a set of experiments searching for the best combination of the only two MAGO parameters, the number of iterations and the number of potential solutions, is considered. These parameters are sequentially varied while increasing the dimension of several test functions, and performance curves were obtained. The MAGO was originally designed to perform well with small populations; therefore, the self-adaptation task with small populations is more challenging as the problem dimension grows. The results showed that the convergence probability to an optimal solution increases according to growing patterns of the number of iterations and the number of potential solutions. However, the success rates decrease when the dimension of the problem increases. A logit model is used to determine the mutual effects between the parameters of the algorithm.
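
    The logit model referred to above has the general form below, relating the probability of reaching the optimum to the algorithm parameters and the problem dimension; the particular covariates and coefficients are illustrative, not the fitted model from the paper.

```latex
% Logistic (logit) model of the convergence probability as a function of the
% two MAGO parameters and the problem dimension (covariates illustrative):
\Pr(\text{success}) \;=\;
\frac{1}{1 + \exp\!\bigl(-(\beta_0 + \beta_1\,\text{iterations}
  + \beta_2\,\text{population} + \beta_3\,\text{dimension})\bigr)}.
```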

  1. Leukoaraiosis Significantly Worsens Driving Performance of Ordinary Older Drivers

    PubMed Central

    Zheng, Rencheng; Fang, Fang; Ohori, Masanori; Nakamura, Hiroki; Kumagai, Yasuhiho; Okada, Hiroshi; Teramura, Kazuhiko; Nakayama, Satoshi; Irimajiri, Akinori; Taoka, Hiroshi; Okada, Satoshi

    2014-01-01

    Background: Leukoaraiosis is defined as extracellular space caused mainly by atherosclerotic or demyelinated changes in the brain tissue and is commonly found in the brains of healthy older people. A significant association between leukoaraiosis and traffic crashes was reported in our previous study; however, the reason for this is still unclear. Method: This paper presents a comprehensive evaluation of driving performance in ordinary older drivers with leukoaraiosis. First, the degree of leukoaraiosis was examined in 33 participants, who underwent an actual-vehicle driving examination on a standard driving course, and a driver skill rating was also collected while the driver carried out a paced auditory serial addition test, which is a calculating task given verbally. At the same time, a steering entropy method was used to estimate steering operation performance. Results: The experimental results indicated that a normal older driver with leukoaraiosis was readily affected by external disturbances and made more operation errors and steered less smoothly than one without leukoaraiosis during driving; at the same time, their steering skill significantly deteriorated. Conclusions: Leukoaraiosis worsens the driving performance of older drivers because of their increased vulnerability to distraction. PMID:25295736

  2. Preliminary flight evaluation of an engine performance optimization algorithm

    NASA Technical Reports Server (NTRS)

    Lambert, H. H.; Gilyard, G. B.; Chisholm, J. D.; Kerr, L. J.

    1991-01-01

    A performance seeking control (PSC) algorithm has undergone initial flight test evaluation in subsonic operation of a PW 1128 engined F-15. This algorithm is designed to optimize the quasi-steady performance of an engine for three primary modes: (1) minimum fuel consumption; (2) minimum fan turbine inlet temperature (FTIT); and (3) maximum thrust. The flight test results have verified a thrust specific fuel consumption reduction of 1 pct., up to 100 R decreases in FTIT, and increases of as much as 12 pct. in maximum thrust. PSC technology promises to be of value in next generation tactical and transport aircraft.

  3. Performance of recovery time improvement algorithms for software RAIDs

    SciTech Connect

    Riegel, J.; Menon, Jai

    1996-12-31

    A software RAID is a RAID implemented purely in software running on a host computer. One problem with software RAIDs is that they do not have access to special hardware such as NVRAM. Thus, software RAIDs may need to check every parity group of an array for consistency following a host crash or power failure. This process of checking parity groups is called recovery, and it results in long delays when the software RAID is restarted. In this paper, we review two algorithms to reduce this recovery time for software RAIDs: the PGS Bitmap algorithm we proposed earlier and the previously proposed List Algorithm. We compare the performance of these two algorithms using trace-driven simulations. Our results show that the PGS Bitmap Algorithm can reduce recovery time by a factor of 12 with a response time penalty of less than 1%, or by a factor of 50 with a response time penalty of less than 2% and a memory requirement of around 9 Kbytes. The List Algorithm can reduce recovery time by a factor of 50 but cannot achieve a response time penalty of less than 16%.
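
    The PGS Bitmap algorithm itself is not specified in the abstract; the sketch below only illustrates the general idea such schemes rely on, namely persistently tracking which parity groups have writes in flight so that recovery can check those groups alone instead of scanning the whole array. The class name and interface are hypothetical.

      # Hypothetical sketch: mark parity groups "dirty" before writes so crash
      # recovery only rechecks groups with writes in flight.
      class ParityGroupBitmap:
          def __init__(self, n_groups):
              self.dirty = bytearray((n_groups + 7) // 8)   # one bit per parity group

          def mark_dirty(self, group):      # persisted before a write starts
              self.dirty[group // 8] |= 1 << (group % 8)

          def mark_clean(self, group):      # cleared lazily after the write completes
              self.dirty[group // 8] &= ~(1 << (group % 8))

          def groups_to_recover(self):      # after a crash, only these need a parity check
              return [g for g in range(len(self.dirty) * 8)
                      if self.dirty[g // 8] >> (g % 8) & 1]

      bm = ParityGroupBitmap(n_groups=1 << 16)
      bm.mark_dirty(42)
      print(bm.groups_to_recover())   # -> [42]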

  4. Atmospheric turbulence and sensor system effects on biometric algorithm performance

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Leonard, Kevin R.; Byrd, Kenneth A.; Potvin, Guy

    2015-05-01

    Biometric technologies composed of electro-optical/infrared (EO/IR) sensor systems and advanced matching algorithms are being used in various force protection/security and tactical surveillance applications. To date, most of these sensor systems have been widely used in controlled conditions with varying success (e.g., short range, uniform illumination, cooperative subjects). However the limiting conditions of such systems have yet to be fully studied for long range applications and degraded imaging environments. Biometric technologies used for long range applications will invariably suffer from the effects of atmospheric turbulence degradation. Atmospheric turbulence causes blur, distortion and intensity fluctuations that can severely degrade image quality of electro-optic and thermal imaging systems and, for the case of biometrics technology, translate to poor matching algorithm performance. In this paper, we evaluate the effects of atmospheric turbulence and sensor resolution on biometric matching algorithm performance. We use a subset of the Facial Recognition Technology (FERET) database and a commercial algorithm to analyze facial recognition performance on turbulence degraded facial images. The goal of this work is to understand the feasibility of long-range facial recognition in degraded imaging conditions, and the utility of camera parameter trade studies to enable the design of the next generation biometrics sensor systems.

  5. Significant improvements in long trace profiler measurement performance

    SciTech Connect

    Takacs, P.Z.; Bresloff, C.J.

    1996-07-01

    Modifications made to the Long Trace Profiler (LTP II) system at the Advanced Photon Source at Argonne National Laboratory have significantly improved the accuracy and repeatability of the instrument. The use of a Dove prism in the reference beam path corrects for phasing problems between mechanical errors and thermally-induced system errors. A single reference correction now completely removes both error signals from the measured surface profile. The addition of a precision air conditioner keeps the temperature in the metrology enclosure constant to within ±0.1°C over a 24 hour period and has significantly improved the stability and repeatability of the system. We illustrate the performance improvements with several sets of measurements. The improved environmental control has reduced thermal drift error to about 0.75 microradian RMS over a 7.5 hour time period. Measurements made in the forward scan direction and the reverse scan direction differ by only about 0.5 microradian RMS over a 500 mm trace length. We are now able to put a 1-sigma error bar of 0.3 microradian on an average of 10 slope profile measurements over a 500 mm long trace length, and we are now able to put a 0.2 microradian error bar on an average of 10 measurements over a 200 mm trace length. The corresponding 1-sigma height error bar for this measurement is 1.1 nm.

  6. Significant improvements in Long Trace Profiler measurement performance

    SciTech Connect

    Takacs, P.Z.; Bresloff, C.J.

    1996-12-31

    Modifications made to the Long Trace Profiler (LTP II) system at the Advanced Photon Source at Argonne National Laboratory have significantly improved the accuracy and repeatability of the instrument. The use of a Dove prism in the reference beam path corrects for phasing problems between mechanical errors and thermally-induced system errors. A single reference correction now completely removes both error signals from the measured surface profile. The addition of a precision air conditioner keeps the temperature in the metrology enclosure constant to within ±0.1°C over a 24 hour period and has significantly improved the stability and repeatability of the system. The authors illustrate the performance improvements with several sets of measurements. The improved environmental control has reduced thermal drift error to about 0.75 microradian RMS over a 7.5 hour time period. Measurements made in the forward scan direction and the reverse scan direction differ by only about 0.5 microradian RMS over a 500 mm trace length. They are now able to put a 1-sigma error bar of 0.3 microradian on an average of 10 slope profile measurements over a 500 mm long trace length, and they are now able to put a 0.2 microradian error bar on an average of 10 measurements over a 200 mm trace length. The corresponding 1-sigma height error bar for this measurement is 1.1 nm.

  7. On the performances of computer vision algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.

    2012-01-01

    Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task since these devices have poor image sensors and optics as well as limited processing power. In this paper we have considered different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performances of the involved mobile platforms: Nokia N900, LG Optimus One, and Samsung Galaxy SII.

  8. An algorithm for the contextual adaption of SURF octave selection with good matching performance: best octaves.

    PubMed

    Ehsan, Shoaib; Kanwal, Nadia; Clark, Adrian F; McDonald-Maier, Klaus D

    2012-01-01

    Speeded-Up Robust Features is a feature extraction algorithm designed for real-time execution, although this is rarely achievable on low-power hardware such as that in mobile robots. One way to reduce the computation is to discard some of the scale-space octaves, and previous research has simply discarded the higher octaves. This paper shows that this approach is not always the most sensible and presents an algorithm for choosing which octaves to discard based on the properties of the imagery. Results obtained with this best octaves algorithm show that it is able to achieve a significant reduction in computation without compromising matching performance. PMID:21712160

  9. Performance impact of dynamic parallelism on different clustering algorithms

    NASA Astrophysics Data System (ADS)

    DiMarco, Jeffrey; Taufer, Michela

    2013-05-01

    In this paper, we aim to quantify the performance gains of dynamic parallelism. The newest version of CUDA, CUDA 5, introduces dynamic parallelism, which allows GPU threads to create new threads, without CPU intervention, and adapt to its data. This effectively eliminates the superfluous back and forth communication between the GPU and CPU through nested kernel computations. The change in performance will be measured using two well-known clustering algorithms that exhibit data dependencies: the K-means clustering and the hierarchical clustering. K-means has a sequential data dependence wherein iterations occur in a linear fashion, while the hierarchical clustering has a tree-like dependence that produces split tasks. Analyzing the performance of these data-dependent algorithms gives us a better understanding of the benefits or potential drawbacks of CUDA 5's new dynamic parallelism feature.

  10. Performance evaluation of image segmentation algorithms on microscopic image data.

    PubMed

    Beneš, Miroslav; Zitová, Barbara

    2015-01-01

    In our paper, we present a performance evaluation of image segmentation algorithms on microscopic image data. In spite of the existence of many algorithms for image data partitioning, there is no universal 'best' method yet. Moreover, images of microscopic samples can vary in character and quality, which can negatively influence the performance of image segmentation algorithms. Thus, the issue of selecting a suitable method for a given set of image data is of great interest. We carried out a large number of experiments with a variety of segmentation methods to evaluate the behaviour of individual approaches on the testing set of microscopic images (cross-section images taken in three different modalities from the field of art restoration). The segmentation results were assessed by several indices used for measuring the output quality of image segmentation algorithms. Finally, the benefit of a segmentation combination approach is studied, and the applicability of the achieved results to another representative of the microscopic data category, biological samples, is shown. PMID:25233873
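
    The abstract does not list the specific quality indices used; as an illustration, the sketch below computes two region-overlap indices commonly used for this purpose (Dice and Jaccard) on a pair of hypothetical binary masks.

      import numpy as np

      def dice_jaccard(pred, truth):
          """Region-overlap scores for binary segmentation masks (1 = foreground)."""
          pred = pred.astype(bool)
          truth = truth.astype(bool)
          inter = np.logical_and(pred, truth).sum()
          dice = 2.0 * inter / (pred.sum() + truth.sum())
          jaccard = inter / np.logical_or(pred, truth).sum()
          return dice, jaccard

      # Hypothetical 2-D masks standing in for a microscopic-image segmentation result.
      truth = np.zeros((64, 64), dtype=np.uint8); truth[16:48, 16:48] = 1
      pred = np.zeros_like(truth); pred[20:52, 16:48] = 1
      print(dice_jaccard(pred, truth))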

  11. Performance analysis of bearing-only target location algorithms

    NASA Astrophysics Data System (ADS)

    Gavish, Motti; Weiss, Anthony J.

    1992-07-01

    The performance of two well-known bearing-only location techniques, the maximum likelihood (ML) and the Stansfield estimators, is examined. Analytical expressions are obtained for the bias and the covariance matrix of the estimation error, which permit performance comparison for any case of interest. It is shown that the Stansfield algorithm provides biased estimates even for large numbers of measurements, in contrast with the ML method. The rms error of the Stansfield technique is not necessarily larger than that of the ML technique. However, it is shown that the ML technique is superior to the Stansfield method when the number of measurements is large enough. Simulation results verify the predicted theoretical performance.
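
    For reference, the ML estimate under Gaussian bearing noise reduces to a nonlinear least-squares fit of the target position to the measured bearings; the sketch below illustrates this with SciPy on a hypothetical four-sensor scenario (it is not the paper's analytical bias/covariance derivation).

      import numpy as np
      from scipy.optimize import least_squares

      def ml_bearing_fix(sensors, bearings, x0):
          """ML target fix from noisy bearings (Gaussian noise => nonlinear least squares)."""
          def residuals(p):
              dx = p[0] - sensors[:, 0]
              dy = p[1] - sensors[:, 1]
              pred = np.arctan2(dy, dx)
              return np.angle(np.exp(1j * (bearings - pred)))   # wrap to (-pi, pi]
          return least_squares(residuals, x0).x

      # Hypothetical scenario: four sensors observing a target at (10, 5).
      sensors = np.array([[0.0, 0.0], [0.0, 10.0], [5.0, -5.0], [-5.0, 5.0]])
      target = np.array([10.0, 5.0])
      rng = np.random.default_rng(3)
      bearings = np.arctan2(target[1] - sensors[:, 1], target[0] - sensors[:, 0])
      bearings += rng.normal(0.0, 0.01, size=4)    # ~10 mrad bearing noise
      print(ml_bearing_fix(sensors, bearings, x0=np.array([1.0, 1.0])))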

  12. S-index: Measuring significant, not average, citation performance

    NASA Astrophysics Data System (ADS)

    Antonoyiannakis, Manolis

    2009-03-01

    We recently [1] introduced the ``citation density curve'' (or cumulative impact factor curve) that captures the full citation performance of a journal: its size, impact factor, the maximum number of citations per paper, the relative size of the different-cited portions of the journal, etc. The citation density curve displays a universal behavior across journals. We exploit this universality to extract a simple metric (the ``S-index'') to characterize the citation impact of ``significant'' papers in each journal. In doing so, we go beyond the journal impact factor, which only measures the impact of the average paper. The conventional wisdom of ranking journals according to their impact factors is thus challenged. Having shown the utility and robustness of the S-index in comparing and ranking journals of different sizes but within the same field, we explore the concept further, going beyond a single field, and beyond journals. Can we compare different scientific fields, departments, or universities? And how should one generalize the citation density curve and the S-index to address these questions? [1] M. Antonoyiannakis and S. Mitra, ``Is PRL too large to have an `impact'?'', Editorial, Physical Review Letters, December 2008.

  13. Scalable software-defined optical networking with high-performance routing and wavelength assignment algorithms.

    PubMed

    Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin

    2015-10-19

    The feasibility of software-defined optical networking (SDON) for a practical application critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed to achieve high network capacity with reduced computation cost, which is a significant attribute in a scalable centralized-control SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the routing table update procedures. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and computation complexity in the routing table update procedure through a simulation study. PMID:26480397
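
    The exact RWA procedures are not given in the abstract; the sketch below only illustrates the general idea of a hottest-request-first policy combined with shortest-path routing and first-fit wavelength assignment. The use of NetworkX, the request format, and the toy topology are assumptions for illustration.

      import networkx as nx

      def rwa_hottest_first(graph, requests, n_wavelengths):
          """Sketch: sort requests by (demand, path length) and do first-fit wavelength assignment."""
          used = {edge: set() for edge in graph.edges}      # occupied wavelengths per link
          paths = {(s, d): nx.shortest_path(graph, s, d, weight="weight")
                   for s, d, _ in requests}
          # "hottest" first: higher demand, then longer end-to-end path
          order = sorted(requests,
                         key=lambda r: (r[2], len(paths[(r[0], r[1])])), reverse=True)
          assignment = {}
          for s, d, demand in order:
              links = list(zip(paths[(s, d)], paths[(s, d)][1:]))
              for w in range(n_wavelengths):                # first-fit over wavelengths
                  if all(w not in used[l] for l in links):
                      for l in links:
                          used[l].add(w)
                      assignment[(s, d)] = w
                      break
              else:
                  assignment[(s, d)] = None                 # blocked request
          return assignment

      g = nx.DiGraph()
      g.add_weighted_edges_from([(0, 1, 1), (1, 2, 1), (0, 2, 3), (2, 3, 1), (1, 3, 2)])
      print(rwa_hottest_first(g, [(0, 3, 5), (1, 3, 2), (0, 2, 4)], n_wavelengths=2))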

  14. Effects of activity and energy budget balancing algorithm on laboratory performance of a fish bioenergetics model

    USGS Publications Warehouse

    Madenjian, Charles P.; David, Solomon R.; Pothoven, Steven A.

    2012-01-01

    We evaluated the performance of the Wisconsin bioenergetics model for lake trout Salvelinus namaycush that were fed ad libitum in laboratory tanks under regimes of low activity and high activity. In addition, we compared model performance under two different model algorithms: (1) balancing the lake trout energy budget on day t based on lake trout energy density on day t and (2) balancing the lake trout energy budget on day t based on lake trout energy density on day t + 1. Results indicated that the model significantly underestimated consumption for both inactive and active lake trout when algorithm 1 was used and that the degree of underestimation was similar for the two activity levels. In contrast, model performance substantially improved when using algorithm 2, as no detectable bias was found in model predictions of consumption for inactive fish and only a slight degree of overestimation was detected for active fish. The energy budget was accurately balanced by using algorithm 2 but not by using algorithm 1. Based on the results of this study, we recommend the use of algorithm 2 to estimate food consumption by fish in the field. Our study results highlight the importance of accurately accounting for changes in fish energy density when balancing the energy budget; furthermore, these results have implications for the science of evaluating fish bioenergetics model performance and for more accurate estimation of food consumption by fish in the field when fish energy density undergoes relatively rapid changes.

  15. Proper bibeta ROC model: algorithm, software, and performance evaluation

    NASA Astrophysics Data System (ADS)

    Chen, Weijie; Hu, Nan

    2016-03-01

    Semi-parametric models are often used to fit data collected in receiver operating characteristic (ROC) experiments to obtain a smooth ROC curve and ROC parameters for statistical inference purposes. The proper bibeta model as recently proposed by Mossman and Peng enjoys several theoretical properties. In addition to having explicit density functions for the latent decision variable and an explicit functional form of the ROC curve, the two parameter bibeta model also has simple closed-form expressions for true-positive fraction (TPF), false-positive fraction (FPF), and the area under the ROC curve (AUC). In this work, we developed a computational algorithm and R package implementing this model for ROC curve fitting. Our algorithm can deal with any ordinal data (categorical or continuous). To improve accuracy, efficiency, and reliability of our software, we adopted several strategies in our computational algorithm including: (1) the LABROC4 categorization to obtain the true maximum likelihood estimation of the ROC parameters; (2) a principled approach to initializing parameters; (3) analytical first-order and second-order derivatives of the likelihood function; (4) an efficient optimization procedure (the L-BFGS algorithm in the R package "nlopt"); and (5) an analytical delta method to estimate the variance of the AUC. We evaluated the performance of our software with intensive simulation studies and compared with the conventional binormal and the proper binormal-likelihood-ratio models developed at the University of Chicago. Our simulation results indicate that our software is highly accurate, efficient, and reliable.

  16. A hybrid genetic algorithm-extreme learning machine approach for accurate significant wave height reconstruction

    NASA Astrophysics Data System (ADS)

    Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.

    2015-08-01

    Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to missing data gaps. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough in assisting other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) to reconstruct Hs. The results show that all the ML methods explored achieve a good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).

  17. Framework for performance evaluation of face recognition algorithms

    NASA Astrophysics Data System (ADS)

    Black, John A., Jr.; Gargesha, Madhusudhana; Kahol, Kanav; Kuchi, Prem; Panchanathan, Sethuraman

    2002-07-01

    Face detection and recognition is becoming increasingly important in the contexts of surveillance, credit card fraud detection, assistive devices for the visually impaired, etc. A number of face recognition algorithms have been proposed in the literature. The availability of a comprehensive face database is crucial to test the performance of these face recognition algorithms. However, while existing publicly-available face databases contain face images with a wide variety of pose angles, illumination angles, gestures, face occlusions, and illuminant colors, these images have not been adequately annotated, thus limiting their usefulness for evaluating the relative performance of face detection algorithms. For example, many of the images in existing databases are not annotated with the exact pose angles at which they were taken. In order to compare the performance of various face recognition algorithms presented in the literature, there is a need for a comprehensive, systematically annotated database populated with face images that have been captured (1) at a variety of pose angles (to permit testing of pose invariance), (2) with a wide variety of illumination angles (to permit testing of illumination invariance), and (3) under a variety of commonly encountered illumination color temperatures (to permit testing of illumination color invariance). In this paper, we present a methodology for creating such an annotated database that employs a novel set of apparatus for the rapid capture of face images from a wide variety of pose angles and illumination angles. Four different types of illumination are used, including daylight, skylight, incandescent and fluorescent. The entire set of images, as well as the annotations and the experimental results, is being placed in the public domain and made available for download over the World Wide Web.

  18. A DRAM compiler algorithm for high performance VLSI embedded memories

    NASA Technical Reports Server (NTRS)

    Eldin, A. G.

    1992-01-01

    In many applications, the limited density of the embedded SRAM does not allow integrating the memory on the same chip with other logic and functional blocks. In such cases, the embedded DRAM provides the optimum combination of very high density, low power, and high performance. For ASICs to take full advantage of this design strategy, an efficient and highly reliable DRAM compiler must be used. The embedded DRAM architecture, cell, and peripheral circuit design considerations and the algorithm of a high performance memory compiler are presented.

  19. Classification of Non-Small Cell Lung Cancer Using Significance Analysis of Microarray-Gene Set Reduction Algorithm

    PubMed Central

    Zhang, Lei; Wang, Linlin; Du, Bochuan; Wang, Tianjiao; Tian, Pu

    2016-01-01

    Among non-small cell lung cancer (NSCLC), adenocarcinoma (AC), and squamous cell carcinoma (SCC) are two major histology subtypes, accounting for roughly 40% and 30% of all lung cancer cases, respectively. Since AC and SCC differ in their cell of origin, location within the lung, and growth pattern, they are considered as distinct diseases. Gene expression signatures have been demonstrated to be an effective tool for distinguishing AC and SCC. Gene set analysis is regarded as irrelevant to the identification of gene expression signatures. Nevertheless, we found that one specific gene set analysis method, significance analysis of microarray-gene set reduction (SAMGSR), can be adopted directly to select relevant features and to construct gene expression signatures. In this study, we applied SAMGSR to a NSCLC gene expression dataset. When compared with several novel feature selection algorithms, for example, LASSO, SAMGSR has equivalent or better performance in terms of predictive ability and model parsimony. Therefore, SAMGSR is a feature selection algorithm, indeed. Additionally, we applied SAMGSR to AC and SCC subtypes separately to discriminate their respective stages, that is, stage II versus stage I. Few overlaps between these two resulting gene signatures illustrate that AC and SCC are technically distinct diseases. Therefore, stratified analyses on subtypes are recommended when diagnostic or prognostic signatures of these two NSCLC subtypes are constructed. PMID:27446945

  20. Performance analysis of the attenuation-partition based iterative phase retrieval algorithm for in-line phase-contrast imaging

    PubMed Central

    Yan, Aimin; Wu, Xizeng; Liu, Hong

    2010-01-01

    Phase retrieval is an important task in x-ray phase-contrast imaging. The robustness of phase retrieval is especially important for potential medical imaging applications such as phase-contrast mammography. Recently the authors developed an iterative phase retrieval algorithm, the attenuation-partition based algorithm, for phase retrieval in in-line phase-contrast imaging [1]. Applied to experimental images, the algorithm was proven to be fast and robust. However, a quantitative analysis of the performance of this new algorithm is desirable. In this work, we systematically compared the performance of this algorithm with two other widely used phase retrieval algorithms, namely the Gerchberg-Saxton (GS) algorithm and the Transport of Intensity Equation (TIE) algorithm. The systematic comparison is conducted by analyzing phase retrieval performances with a digital breast specimen model. We show that the proposed algorithm converges faster than the GS algorithm in the Fresnel diffraction regime, and is more robust against image noise than the TIE algorithm. These results suggest the significance of the proposed algorithm for future medical applications with the x-ray phase contrast imaging technique. PMID:20720992
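
    The attenuation-partition based algorithm itself is not reproduced here; for orientation, the sketch below implements the classical Gerchberg-Saxton baseline it is compared against, recovering a phase consistent with amplitude measurements in two Fourier-related planes. The test data are synthetic.

      import numpy as np

      def gerchberg_saxton(source_amp, target_amp, n_iter=100):
          """Classic GS sketch: find a phase consistent with amplitudes in two planes."""
          phase = np.zeros_like(source_amp)
          for _ in range(n_iter):
              far = np.fft.fft2(source_amp * np.exp(1j * phase))
              far = target_amp * np.exp(1j * np.angle(far))   # impose far-field amplitude
              near = np.fft.ifft2(far)
              phase = np.angle(near)                          # keep phase, impose source amplitude
          return phase

      # Hypothetical test: build a wavefront with a known phase and try to recover it.
      rng = np.random.default_rng(7)
      src = np.ones((64, 64))
      true_phase = rng.uniform(-np.pi, np.pi, size=(64, 64))
      tgt = np.abs(np.fft.fft2(src * np.exp(1j * true_phase)))
      est = gerchberg_saxton(src, tgt)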

  1. Stereo matching: performance study of two global algorithms

    NASA Astrophysics Data System (ADS)

    Arunagiri, Sarala; Jordan, Victor J.; Teller, Patricia J.; Deroba, Joseph C.; Shires, Dale R.; Park, Song J.; Nguyen, Lam H.

    2011-06-01

    Techniques such as clinometry, stereoscopy, interferometry, and polarimetry are used for Digital Elevation Model (DEM) generation from Synthetic Aperture Radar (SAR) images. The choice of technique depends on the SAR configuration, the means used for image acquisition, and the relief type. The most popular techniques are interferometry for regions of high coherence and stereoscopy for regions such as steep forested mountain slopes. Stereo matching, which finds the disparity map or correspondence points between two images acquired from different sensor positions, is a core process in stereoscopy. Additionally, automatic stereo processing, which involves stereo matching, is an important process in other applications including vision-based obstacle avoidance for unmanned air vehicles (UAVs), extraction of weak targets in clutter, and automatic target detection. Due to its high computational complexity, stereo matching has traditionally been, and continues to be, one of the most heavily investigated topics in computer vision. A stereo matching algorithm performs a subset of the following four steps: cost computation, cost (support) aggregation, disparity computation/optimization, and disparity refinement. Based on the method used for cost computation, the algorithms are classified into feature-, phase-, and area-based algorithms; and they are classified as local or global based on how they perform disparity computation/optimization. We present a comparative performance study of two pairs, i.e., four versions, of global stereo matching codes. Each pair uses a different minimization technique: a simulated annealing or graph cut algorithm. The codes of a pair differ in terms of the employed global cost function: absolute difference (AD) or a variation of normalized cross correlation (NCC). The performance comparison is in terms of execution time, the global minimum cost achieved, power and energy consumption, and the quality of generated output. The results of
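
    The global optimization stages compared in the paper (simulated annealing, graph cuts) are not reproduced here; the sketch below only illustrates the cost-computation step discussed above, building an absolute-difference (AD) cost volume and taking a local winner-take-all disparity as a baseline, on a synthetic image pair.

      import numpy as np

      def ad_cost_volume(left, right, max_disp):
          """Absolute-difference matching cost for each pixel and candidate disparity."""
          h, w = left.shape
          cost = np.full((max_disp, h, w), np.inf)
          for d in range(max_disp):
              # compare left pixel x with right pixel x - d
              cost[d, :, d:] = np.abs(left[:, d:] - right[:, :w - d])
          return cost

      def wta_disparity(cost):
          """Local winner-take-all baseline: pick the lowest-cost disparity per pixel."""
          return np.argmin(cost, axis=0)

      left = np.random.default_rng(0).random((48, 64)).astype(np.float32)
      right = np.roll(left, -3, axis=1)   # synthetic pair: correct disparity is 3 (except at the wrapped columns)
      disp = wta_disparity(ad_cost_volume(left, right, max_disp=8))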

  2. Performance evaluation of operational atmospheric correction algorithms over the East China Seas

    NASA Astrophysics Data System (ADS)

    He, Shuangyan; He, Mingxia; Fischer, Jürgen

    2016-04-01

    To acquire high-quality operational data products for Chinese in-orbit and scheduled ocean color sensors, the performances of two operational atmospheric correction (AC) algorithms (ESA MEGS 7.4.1 and NASA SeaDAS 6.1) were evaluated over the East China Seas (ECS) using MERIS data. The spectral remote sensing reflectance Rrs(λ), aerosol optical thickness (AOT), and Ångström exponent (α) retrieved using the two algorithms were validated using in situ measurements obtained between May 2002 and October 2009. Match-ups of Rrs, AOT, and α between the in situ and MERIS data were obtained through strict exclusion criteria. Statistical analysis of Rrs(λ) showed a mean percentage difference (MPD) of 9%-13% in the 490-560 nm spectral range, and significant overestimation was observed at 413 nm (MPD>72%). The AOTs were overestimated (MPD>32%), and although the ESA algorithm outperformed the NASA algorithm in the blue-green bands, the situation was reversed in the red-near-infrared bands. The value of α was obviously underestimated by the ESA algorithm (MPD=41%) but not by the NASA algorithm (MPD=35%). To clarify why the NASA algorithm performed better in the retrieval of α, scatter plots of the α single scattering albedo (SSA) density were prepared. These α-SSA density scatter plots showed that the applicability of the aerosol models used by the NASA algorithm over the ECS is better than that used by the ESA algorithm, although neither aerosol model is suitable for the ECS region. The results of this study provide a reference to both data users and data agencies regarding the use of operational data products and the investigation into the improvement of current AC schemes over the ECS.

  3. Performance evaluation of PCA-based spike sorting algorithms.

    PubMed

    Adamos, Dimitrios A; Kosmidis, Efstratios K; Theophilidis, George

    2008-09-01

    Deciphering the electrical activity of individual neurons from multi-unit noisy recordings is critical for understanding complex neural systems. A widely used spike sorting algorithm is being evaluated for single-electrode nerve trunk recordings. The algorithm is based on principal component analysis (PCA) for spike feature extraction. In the neuroscience literature it is generally assumed that the use of the first two or most commonly three principal components is sufficient. We estimate the optimum PCA-based feature space by evaluating the algorithm's performance on simulated series of action potentials. A number of modifications are made to the open source nev2lkit software to enable systematic investigation of the parameter space. We introduce a new metric to define clustering error considering over-clustering more favorable than under-clustering as proposed by experimentalists for our data. Both the program patch and the metric are available online. Correlated and white Gaussian noise processes are superimposed to account for biological and artificial jitter in the recordings. We report that the employment of more than three principal components is in general beneficial for all noise cases considered. Finally, we apply our results to experimental data and verify that the sorting process with four principal components is in agreement with a panel of electrophysiology experts. PMID:18565614
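
    A generic version of the PCA-based pipeline evaluated above, projecting aligned spike waveforms onto a configurable number of principal components and clustering the result, can be sketched as follows; the templates, noise level, and use of k-means are illustrative assumptions, and this is not the nev2lkit implementation.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      def pca_spike_sort(waveforms, n_components=4, n_units=3):
          """waveforms: (n_spikes, n_samples) array of aligned spike snippets."""
          features = PCA(n_components=n_components).fit_transform(waveforms)
          return KMeans(n_clusters=n_units, n_init=10, random_state=0).fit_predict(features)

      # Hypothetical data: three template shapes plus noise standing in for real recordings.
      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 40)
      templates = np.stack([np.sin(2 * np.pi * f * t) * np.exp(-4 * t) for f in (3, 5, 8)])
      spikes = templates[rng.integers(0, 3, 300)] + 0.1 * rng.standard_normal((300, 40))
      print(np.bincount(pca_spike_sort(spikes)))   # cluster sizes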

  4. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    SciTech Connect

    Jimenez, Edward Steven,

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.

  5. Performance Trend of Different Algorithms for Structural Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimization technique SUMT) outperformed the others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.

  6. Restoration algorithms and system performance evaluation for active imagers

    NASA Astrophysics Data System (ADS)

    Gilles, Jérôme

    2007-10-01

    This paper deals with two fields related to active imaging systems. First, we begin by exploring image processing algorithms to restore artefacts like speckle, scintillation and image dancing caused by atmospheric turbulence. Next, we examine how to evaluate the performance of this kind of system. For this task, we propose a modified version of the German TRM3 metric, which provides MTF-like measures. We use the database acquired during the NATO-TG40 field trials to perform our tests.

  7. Empirical study of self-configuring genetic programming algorithm performance and behaviour

    NASA Astrophysics Data System (ADS)

    Semenkin, E.; Semenkina, M.

    2015-01-01

    The behaviour of the self-configuring genetic programming algorithm with a modified uniform crossover operator that implements selective pressure at the recombination stage is studied on symbolic programming problems. The interplay of the operator's probabilistic rates is studied, and the effect of operator variants on algorithm performance is investigated. Algorithm modifications based on the results of these investigations are suggested. The performance improvement of the algorithm is demonstrated by a comparative analysis of the suggested algorithms on benchmark and real-world problems.

  8. Performance comparison of accelerometer calibration algorithms based on 3D-ellipsoid fitting methods.

    PubMed

    Gietzelt, Matthias; Wolf, Klaus-Hendrik; Marschollek, Michael; Haux, Reinhold

    2013-07-01

    Calibration of accelerometers can be reduced to 3D-ellipsoid fitting problems. Changing extrinsic factors like temperature, pressure or humidity, as well as intrinsic factors like the battery status, demand that the measurements be calibrated continually. Thus, there is a need for fast calibration algorithms, e.g. for online analyses. The primary aim of this paper is to propose a non-iterative calibration algorithm for accelerometers with the focus on minimal execution time and low memory consumption. The secondary aim is to benchmark existing calibration algorithms based on 3D-ellipsoid fitting methods. We compared the algorithms regarding the calibration quality and the execution time as well as the number of quasi-static measurements needed for a stable calibration. As evaluation criteria for the calibration, both the norm of calibrated real-life measurements during inactivity and simulation data were used. The algorithms showed a high calibration quality, but the execution time differed significantly. The calibration method proposed in this paper showed the shortest execution time and a very good performance regarding the number of measurements needed to produce stable results. Furthermore, this algorithm was successfully implemented on a sensor node and calibrates the measured data on-the-fly while continuously storing the measured data to a microSD-card. PMID:23566707
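
    The paper's non-iterative algorithm handles the general ellipsoid; as a simplified illustration of the underlying idea, the sketch below fits quasi-static samples to an axis-aligned ellipsoid by linear least squares and derives per-axis offsets and gains. The simulated sensor parameters are hypothetical.

      import numpy as np

      def calibrate_axis_aligned(samples):
          """Least-squares fit of quasi-static samples to an axis-aligned ellipsoid.
          samples: (n, 3) raw readings taken in many static orientations.
          Returns (offset, gain) so that (samples - offset) / gain lies on the unit sphere."""
          x, y, z = samples.T
          A = np.column_stack([x**2, y**2, z**2, x, y, z])
          p, *_ = np.linalg.lstsq(A, np.ones(len(samples)), rcond=None)
          a, b, c, d, e, f = p
          offset = np.array([-d / (2 * a), -e / (2 * b), -f / (2 * c)])
          radius2 = 1 + d**2 / (4 * a) + e**2 / (4 * b) + f**2 / (4 * c)
          gain = np.sqrt(radius2 / np.array([a, b, c]))
          return offset, gain

      # Hypothetical check: simulate a sensor with known offsets/gains and recover them.
      rng = np.random.default_rng(5)
      g = rng.standard_normal((500, 3))
      g /= np.linalg.norm(g, axis=1, keepdims=True)          # true gravity directions
      raw = g * np.array([1.02, 0.97, 1.05]) + np.array([0.1, -0.05, 0.2])
      print(calibrate_axis_aligned(raw))   # ~([0.1, -0.05, 0.2], [1.02, 0.97, 1.05])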

  9. Genetic algorithm based task reordering to improve the performance of batch scheduled massively parallel scientific applications

    DOE PAGESBeta

    Sankaran, Ramanan; Angel, Jordan; Brown, W. Michael

    2015-04-08

    The growth in size of networked high performance computers along with novel accelerator-based node architectures has further emphasized the importance of communication efficiency in high performance computing. The world's largest high performance computers are usually operated as shared user facilities due to the costs of acquisition and operation. Applications are scheduled for execution in a shared environment and are placed on nodes that are not necessarily contiguous on the interconnect. Furthermore, the placement of tasks on the nodes allocated by the scheduler is sub-optimal, leading to performance loss and variability. Here, we investigate the impact of task placement on the performance of two massively parallel application codes on the Titan supercomputer, a turbulent combustion flow solver (S3D) and a molecular dynamics code (LAMMPS). Benchmark studies show a significant deviation from ideal weak scaling and variability in performance. The inter-task communication distance was determined to be one of the significant contributors to the performance degradation and variability. A genetic algorithm-based parallel optimization technique was used to optimize the task ordering. This technique provides an improved placement of the tasks on the nodes, taking into account the application's communication topology and the system interconnect topology. As a result, application benchmarks after task reordering through genetic algorithm show a significant improvement in performance and reduction in variability, therefore enabling the applications to achieve better time to solution and scalability on Titan during production.

  10. Genetic algorithm based task reordering to improve the performance of batch scheduled massively parallel scientific applications

    SciTech Connect

    Sankaran, Ramanan; Angel, Jordan; Brown, W. Michael

    2015-04-08

    The growth in size of networked high performance computers along with novel accelerator-based node architectures has further emphasized the importance of communication efficiency in high performance computing. The world's largest high performance computers are usually operated as shared user facilities due to the costs of acquisition and operation. Applications are scheduled for execution in a shared environment and are placed on nodes that are not necessarily contiguous on the interconnect. Furthermore, the placement of tasks on the nodes allocated by the scheduler is sub-optimal, leading to performance loss and variability. Here, we investigate the impact of task placement on the performance of two massively parallel application codes on the Titan supercomputer, a turbulent combustion flow solver (S3D) and a molecular dynamics code (LAMMPS). Benchmark studies show a significant deviation from ideal weak scaling and variability in performance. The inter-task communication distance was determined to be one of the significant contributors to the performance degradation and variability. A genetic algorithm-based parallel optimization technique was used to optimize the task ordering. This technique provides an improved placement of the tasks on the nodes, taking into account the application's communication topology and the system interconnect topology. As a result, application benchmarks after task reordering through genetic algorithm show a significant improvement in performance and reduction in variability, therefore enabling the applications to achieve better time to solution and scalability on Titan during production.

  11. Performance Analysis of Apriori Algorithm with Different Data Structures on Hadoop Cluster

    NASA Astrophysics Data System (ADS)

    Singh, Sudhakar; Garg, Rakhi; Mishra, P. K.

    2015-10-01

    Mining frequent itemsets from massive datasets has always been one of the most important problems in data mining. Apriori is the most popular and simplest algorithm for frequent itemset mining. To enhance the efficiency and scalability of Apriori, a number of algorithms have been proposed addressing the design of efficient data structures, minimization of database scans, and parallel and distributed processing. MapReduce is the emerging parallel and distributed technology for processing big datasets on a Hadoop cluster. To mine big datasets it is essential to re-design the data mining algorithms for this new paradigm. In this paper, we implement three variations of the Apriori algorithm using the hash tree, trie, and hash table trie (i.e., a trie with hashing) data structures on the MapReduce paradigm. We emphasize and investigate the significance of these three data structures for the Apriori algorithm on a Hadoop cluster, which has not been given attention yet. Experiments are carried out on both real-life and synthetic datasets, which show that the hash table trie data structure performs far better than the trie and the hash tree in terms of execution time. Moreover, the performance in the case of the hash tree is the worst.
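
    The MapReduce implementations are not reproduced here; the sketch below shows the single-node kernel those data structures accelerate, a plain Apriori with hash-based (dict) support counting over candidate itemsets, on a toy transaction set.

      from collections import defaultdict

      def apriori(transactions, min_support):
          """Plain Apriori sketch with dict-based (hashed) support counting."""
          transactions = [frozenset(t) for t in transactions]
          counts = defaultdict(int)
          for t in transactions:                  # frequent 1-itemsets
              for item in t:
                  counts[frozenset([item])] += 1
          frequent = {s for s, c in counts.items() if c >= min_support}
          all_frequent, k = dict(counts), 2
          while frequent:
              # candidate generation: unions of frequent (k-1)-itemsets of size k
              # (subset-based pruning omitted for brevity)
              candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
              counts = defaultdict(int)
              for t in transactions:              # support counting via hashed lookups
                  for c in candidates:
                      if c <= t:
                          counts[c] += 1
              frequent = {s for s, c in counts.items() if c >= min_support}
              all_frequent.update({s: counts[s] for s in frequent})
              k += 1
          return {s: c for s, c in all_frequent.items() if c >= min_support}

      data = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
      print(apriori(data, min_support=3))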

  12. Performance of humans vs. exploration algorithms on the Tower of London Test.

    PubMed

    Fimbel, Eric; Lauzon, Stéphane; Rainville, Constant

    2009-01-01

    The Tower of London Test (TOL) used to assess executive functions was inspired by Artificial Intelligence tasks used to test problem-solving algorithms. In this study, we compare the performance of humans and of exploration algorithms. Instead of absolute execution times, we focus on how the execution time varies with the tasks and/or the number of moves. This approach, used in Algorithmic Complexity, provides a fair comparison between humans and computers, although humans are several orders of magnitude slower. On easy tasks (1 to 5 moves), healthy elderly persons performed like exploration algorithms using bounded memory resources, i.e., the execution time grew exponentially with the number of moves. This result was replicated with a group of healthy young participants. However, for difficult tasks (5 to 8 moves) the execution time of young participants did not increase significantly, whereas for exploration algorithms the execution time keeps increasing exponentially. A pre- and post-test control task showed a 25% improvement in visuo-motor skills, but this was insufficient to explain this result. The findings suggest that naive participants used systematic exploration to solve the problem but that, under the effect of practice, they developed markedly more efficient strategies using the information acquired during the test. PMID:19787066

  13. A Genetic Algorithm for Learning Significant Phrase Patterns in Radiology Reports

    SciTech Connect

    Patton, Robert M; Potok, Thomas E; Beckerman, Barbara G; Treadwell, Jim N

    2009-01-01

    Radiologists disagree with each other over the characteristics and features of what constitutes a normal mammogram and the terminology to use in the associated radiology report. Recently, the focus has been on classifying abnormal or suspicious reports, but even this process needs further layers of clustering and gradation, so that individual lesions can be more effectively classified. Using a genetic algorithm, the approach described here successfully learns phrase patterns for two distinct classes of radiology reports (normal and abnormal). These patterns can then be used as a basis for automatically analyzing, categorizing, clustering, or retrieving relevant radiology reports for the user.

  14. Implementation and performance of a domain decomposition algorithm in Sisal

    SciTech Connect

    DeBoni, T.; Feo, J.; Rodrigue, G.; Muller, J.

    1993-09-23

    Sisal is a general-purpose functional language that hides the complexity of parallel processing, expedites parallel program development, and guarantees determinacy. Parallelism and management of concurrent tasks are realized automatically by the compiler and runtime system. Spatial domain decomposition is a widely-used method that focuses computational resources on the most active, or important, areas of a domain. Many complex programming issues are introduced in parallelizing this method, including: dynamic spatial refinement, dynamic grid partitioning and fusion, task distribution, data distribution, and load balancing. In this paper, we describe a spatial domain decomposition algorithm programmed in Sisal. We explain the compilation process, and present the execution performance of the resultant code on two different multiprocessor systems: a multiprocessor vector supercomputer and a cache-coherent scalar multiprocessor.

  15. Performance analysis of bearings-only tracking algorithm

    NASA Astrophysics Data System (ADS)

    van Huyssteen, David; Farooq, Mohamad

    1998-07-01

    A number of 'bearing-only' target motion analysis algorithms have appeared in the literature over the years, all suited to tracking an object based solely on noisy measurements of its angular position. In their paper 'Utilization of Modified Polar (MP) Coordinates for Bearings-Only Tracking', Aidala and Hammel advocate a filter in which the observable and unobservable states are naturally decoupled. While the MP filter has certain advantages over Cartesian and pseudolinear extended Kalman filters, it does not escape the requirement for the observer to steer an optimum maneuvering course to guarantee acceptable performance. This paper demonstrates by simulation the consequences when the observer deviates from this profile, even if the maneuver is sufficient to produce full state observability.

  16. Detrending moving average algorithm: Frequency response and scaling performances.

    PubMed

    Carbone, Anna; Kiyono, Ken

    2016-06-01

    The Detrending Moving Average (DMA) algorithm has been widely used in its several variants for characterizing long-range correlations of random signals and sets (one-dimensional sequences or high-dimensional arrays) over either time or space. In this paper, mainly based on analytical arguments, the scaling performances of the centered DMA, including higher-order ones, are investigated by means of a continuous time approximation and a frequency response approach. Our results are also confirmed by numerical tests. The study is carried out for higher-order DMA operating with moving average polynomials of different degree. In particular, detrending power degree, frequency response, asymptotic scaling, upper limit of the detectable scaling exponent, and finite scale range behavior will be discussed. PMID:27415389
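
    As a concrete illustration of the centered DMA discussed above, the sketch below computes a first-order (moving-average) DMA fluctuation function for a one-dimensional series and estimates the scaling exponent from the slope of log sigma(n) versus log n; the test signal and window choices are illustrative assumptions.

      import numpy as np

      def dma_fluctuation(y, window):
          """Centered first-order DMA: RMS deviation of the series from its moving average."""
          half = window // 2
          kernel = np.ones(window) / window
          trend = np.convolve(y, kernel, mode="same")        # centered moving average
          resid = (y - trend)[half:len(y) - half]            # drop edge effects
          return np.sqrt(np.mean(resid ** 2))

      # Test on a synthetic random walk (Brownian path), whose Hurst exponent is 0.5.
      rng = np.random.default_rng(0)
      y = np.cumsum(rng.standard_normal(2**14))
      windows = np.array([2**k + 1 for k in range(3, 10)])   # odd windows keep the average centered
      sigma = np.array([dma_fluctuation(y, n) for n in windows])
      alpha = np.polyfit(np.log(windows), np.log(sigma), 1)[0]
      print(round(alpha, 2))    # expected to be close to 0.5 for a Brownian path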

  17. Burg algorithm for enhancing measurement performance in wavelength scanning interferometry

    NASA Astrophysics Data System (ADS)

    Woodcock, Rebecca; Muhamedsalih, Hussam; Martin, Haydn; Jiang, Xiangqian

    2016-06-01

    Wavelength scanning interferometry (WSI) is a technique for measuring surface topography that is capable of resolving step discontinuities and does not require any mechanical movement of the apparatus or measurand, allowing measurement times to be reduced substantially in comparison to related techniques. The axial (height) resolution and measurement range in WSI depend in part on the algorithm used to evaluate the spectral interferograms. Previously reported Fourier transform based methods have a number of limitations, in part due to the short data lengths obtained. This paper compares the performance of auto-regressive model based techniques for frequency estimation in WSI. Specifically, the Burg method is compared with established Fourier transform based approaches using both simulation and experimental data taken from a WSI measurement of a step-height sample.

  18. Detrending moving average algorithm: Frequency response and scaling performances

    NASA Astrophysics Data System (ADS)

    Carbone, Anna; Kiyono, Ken

    2016-06-01

    The Detrending Moving Average (DMA) algorithm has been widely used in its several variants for characterizing long-range correlations of random signals and sets (one-dimensional sequences or high-dimensional arrays) over either time or space. In this paper, mainly based on analytical arguments, the scaling performances of the centered DMA, including higher-order ones, are investigated by means of a continuous time approximation and a frequency response approach. Our results are also confirmed by numerical tests. The study is carried out for higher-order DMA operating with moving average polynomials of different degree. In particular, detrending power degree, frequency response, asymptotic scaling, upper limit of the detectable scaling exponent, and finite scale range behavior will be discussed.

  19. A Matter of Timing: Identifying Significant Multi-Dose Radiotherapy Improvements by Numerical Simulation and Genetic Algorithm Search

    PubMed Central

    Angus, Simon D.; Piotrowska, Monika Joanna

    2014-01-01

    Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model, a constrained, non-linear search for better-performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) in tumour cell count compared to the two benchmarks, respectively. Noticing that a convergent feature of the top-performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17–18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost

  20. Full tensor gravity gradiometry data inversion: Performance analysis of parallel computing algorithms

    NASA Astrophysics Data System (ADS)

    Hou, Zhen-Long; Wei, Xiao-Hui; Huang, Da-Nian; Sun, Xu

    2015-09-01

    We apply reweighted inversion focusing to full tensor gravity gradiometry data using message-passing interface (MPI) and compute unified device architecture (CUDA) parallel computing algorithms, and then combine MPI with CUDA to formulate a hybrid algorithm. Parallel computing performance metrics are introduced to analyze and compare the performance of the algorithms. We summarize the rules for the performance evaluation of parallel algorithms. We use model data and real data from the Vinton salt dome to test the algorithms. We find a good match between the model and real density data, and verify the high efficiency and feasibility of parallel computing algorithms in the inversion of full tensor gravity gradiometry data.

  1. Performance comparison of neural network training algorithms in modeling of bimodal drug delivery.

    PubMed

    Ghaffari, A; Abdollahi, H; Khoshayand, M R; Bozchalooi, I Soltani; Dadgar, A; Rafiee-Tehrani, M

    2006-12-11

    The major aim of this study was to model the effect of two causal factors, i.e. coating weight gain and amount of pectin-chitosan in the coating solution, on the in vitro release profile of theophylline for bimodal drug delivery. An artificial neural network (ANN), in the form of a multilayer perceptron feedforward network, was used for developing a predictive model of the formulations. Five different training algorithms belonging to three classes, gradient descent, quasi-Newton (Levenberg-Marquardt, LM) and genetic algorithm (GA), were used to train an ANN containing a single hidden layer of four nodes. The next objective of the current study was to compare the performance of the aforementioned algorithms with regard to predictive ability. The ANNs were trained with those algorithms using the available experimental data as the training set. The divergence of the RMSE between the output and target values of the test set was monitored and used as a criterion to stop training. Two versions of the gradient descent backpropagation algorithm, i.e. incremental backpropagation (IBP) and batch backpropagation (BBP), outperformed the others. No significant differences were found between the predictive abilities of IBP and BBP, although the convergence speed of BBP is three- to four-fold higher than that of IBP. Although both gradient descent backpropagation and LM methodologies gave comparable results for the data modeling, training of ANNs with the genetic algorithm was erratic. The precision of predictive ability was measured for each training algorithm and their performances were in the order of: IBP, BBP>LM>QP (quick propagation)>GA. According to the BBP-ANN implementation, an increase in coating levels and a decrease in the amount of pectin-chitosan generally retarded the drug release. Moreover, the latter causal factor, namely the amount of pectin-chitosan, played a slightly more dominant role in determining the dissolution profiles. PMID:16959449

  2. Hybrid Neural-Network: Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics Developed and Demonstrated

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2002-01-01

    As part of the NASA Aviation Safety Program, a unique model-based diagnostics method that employs neural networks and genetic algorithms for aircraft engine performance diagnostics has been developed and demonstrated at the NASA Glenn Research Center against a nonlinear gas turbine engine model. Neural networks are applied to estimate the internal health condition of the engine, and genetic algorithms are used for sensor fault detection, isolation, and quantification. This hybrid architecture combines the excellent nonlinear estimation capabilities of neural networks with the capability to rank the likelihood of various faults given a specific sensor suite signature. The method requires a significantly smaller data training set than a neural network approach alone does, and it performs the combined engine health monitoring objectives of performance diagnostics and sensor fault detection and isolation in the presence of nominal and degraded engine health conditions.

  3. An efficient algorithm to perform multiple testing in epistasis screening

    PubMed Central

    2013-01-01

    Background Research in epistasis or gene-gene interaction detection for human complex traits has grown over the last few years. It has been marked by promising methodological developments, improved translation efforts of statistical epistasis to biological epistasis and attempts to integrate different omics information sources into the epistasis screening to enhance power. The quest for gene-gene interactions poses severe multiple-testing problems. In this context, the maxT algorithm is one technique to control the false-positive rate. However, the memory needed by this algorithm rises linearly with the number of hypothesis tests. Gene-gene interaction studies will require memory proportional to the squared number of SNPs. A genome-wide epistasis search would therefore require terabytes of memory. Hence, cache problems are likely to occur, increasing the computation time. In this work we present a new version of maxT, requiring an amount of memory independent of the number of genetic effects to be investigated. This algorithm was implemented in C++ in our epistasis screening software MBMDR-3.0.3. We evaluate the new implementation in terms of memory efficiency and speed using simulated data. The software is illustrated on real-life data for Crohn’s disease. Results In the case of a binary (affected/unaffected) trait, the parallel workflow of MBMDR-3.0.3 analyzes all gene-gene interactions with a dataset of 100,000 SNPs typed on 1000 individuals within 4 days and 9 hours, using 999 permutations of the trait to assess statistical significance, on a cluster composed of 10 blades, each containing four Quad-Core AMD Opteron(tm) 2352 processors at 2.1 GHz. In the case of a continuous trait, a similar run takes 9 days. Our program found 14 SNP-SNP interactions with a multiple-testing corrected p-value of less than 0.05 on real-life Crohn’s disease (CD) data. Conclusions Our software is the first implementation of the MB-MDR methodology able to solve large-scale SNP
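    A sketch of the generic maxT permutation idea underlying the work above: for each permutation of the trait, recompute all test statistics and keep only their maximum, then compare each observed statistic against that null distribution of maxima. This is not the memory-efficient MBMDR-3.0.3 implementation, and the per-feature statistic and data below are placeholders.

```python
import numpy as np

# Generic maxT-style multiple-testing correction on synthetic data.
rng = np.random.default_rng(1)


def test_statistics(y, X):
    """Placeholder per-feature statistic: |correlation| of each column with y."""
    yc = y - y.mean()
    Xc = X - X.mean(axis=0)
    return np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))


def maxt_adjusted_pvalues(y, X, n_perm=999):
    t_obs = test_statistics(y, X)
    max_null = np.empty(n_perm)
    for b in range(n_perm):
        # permute the trait, recompute all statistics, keep only the maximum
        max_null[b] = test_statistics(rng.permutation(y), X).max()
    # adjusted p-value: fraction of permutation maxima >= observed statistic
    return np.array([(1 + np.sum(max_null >= t)) / (n_perm + 1) for t in t_obs])


X = rng.normal(size=(200, 50))
y = (X[:, 0] + rng.normal(size=200) > 0).astype(float)   # trait linked to feature 0
print(maxt_adjusted_pvalues(y, X)[:5])
```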

  4. Specification of Selected Performance Monitoring and Commissioning Verification Algorithms for CHP Systems

    SciTech Connect

    Brambley, Michael R.; Katipamula, Srinivas

    2006-10-06

    Pacific Northwest National Laboratory (PNNL) is assisting the U.S. Department of Energy (DOE) Distributed Energy (DE) Program by developing advanced control algorithms that would lead to development of tools to enhance performance and reliability, and reduce emissions of distributed energy technologies, including combined heat and power technologies. This report documents phase 2 of the program, providing a detailed functional specification for algorithms for performance monitoring and commissioning verification, scheduled for development in FY 2006. The report identifies the systems for which algorithms will be developed, the specific functions of each algorithm, metrics which the algorithms will output, and inputs required by each algorithm.

  5. Computational Performance Assessment of k-mer Counting Algorithms.

    PubMed

    Pérez, Nelson; Gutierrez, Miguel; Vera, Nelson

    2016-04-01

    This article is about the assessment of several tools for k-mer counting, with the purpose of creating a reference framework for bioinformatics researchers to identify the computational requirements, parallelization, advantages, disadvantages, and bottlenecks of each of the algorithms proposed in the tools. The k-mer counters evaluated in this article were BFCounter, DSK, Jellyfish, KAnalyze, KHMer, KMC2, MSPKmerCounter, Tallymer, and Turtle. The measured parameters were RAM usage, processing time, parallelization, and read and write disk access. A dataset of 36,504,800 reads corresponding to the 14th human chromosome was used. The assessment was performed for two k-mer lengths: 31 and 55. The results were as follows: pure Bloom filter-based tools and disk-partitioning techniques showed lower RAM use; the tools with the shortest execution times were those using disk-partitioning techniques; and the tools achieving the greatest parallelization were those using disk partitioning, lock-free hash tables, or multiple hash tables. PMID:26982880
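    A minimal hash-table k-mer counter, included to make the task concrete; the benchmarked tools (Jellyfish, KMC2, etc.) use far more memory- and disk-efficient data structures, so this is an illustration only, with placeholder reads.

```python
from collections import Counter

# Count all length-k substrings (k-mers) across a collection of reads.
def count_kmers(reads, k):
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            if "N" not in kmer:          # skip ambiguous bases
                counts[kmer] += 1
    return counts


reads = ["ACGTACGTGACG", "TTACGTACGTTT"]     # placeholder reads
print(count_kmers(reads, k=5).most_common(3))
```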

  6. Performance study of a new time-delay estimation algorithm in ultrasonic echo signals and ultrasound elastography.

    PubMed

    Shaswary, Elyas; Xu, Yuan; Tavakkoli, Jahan

    2016-07-01

    Time-delay estimation has countless applications in ultrasound medical imaging. Previously, we proposed a new time-delay estimation algorithm based on the summation of the sign function to compute the time-delay estimate (Shaswary et al., 2015). We reported that the proposed algorithm performs similarly to the normalized cross-correlation (NCC) and sum squared differences (SSD) algorithms, even though it is significantly more computationally efficient. In this paper, we study the performance of the proposed algorithm using statistical analysis and image quality analysis in ultrasound elastography imaging. Field II simulation software was used for generation of ultrasound radio frequency (RF) echo signals for statistical analysis, and a clinical ultrasound scanner (Sonix® RP scanner, Ultrasonix Medical Corp., Richmond, BC, Canada) was used to scan a commercial ultrasound elastography tissue-mimicking phantom for image quality analysis. The statistical analysis results confirmed that, overall, the proposed algorithm performs similarly to the NCC and SSD algorithms. The image quality analysis results indicated that the proposed algorithm produces strain images with marginally higher signal-to-noise and contrast-to-noise ratios compared to the NCC and SSD algorithms. PMID:27010697
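    A sketch of time-delay estimation by normalized cross-correlation (NCC), one of the baseline algorithms the study compares against; the proposed sign-summation estimator itself is not reproduced here, and the signals are synthetic placeholders.

```python
import numpy as np

# Estimate the delay between two signals by maximising the normalized
# cross-correlation over candidate integer lags.
def ncc_delay(ref, sig, max_lag):
    ref = (ref - ref.mean()) / ref.std()
    sig = (sig - sig.mean()) / sig.std()
    lags = np.arange(-max_lag, max_lag + 1)
    scores = [np.mean(ref[max(0, -l):len(ref) - max(0, l)] *
                      sig[max(0, l):len(sig) - max(0, -l)]) for l in lags]
    return lags[int(np.argmax(scores))]


rng = np.random.default_rng(2)
x = rng.normal(size=500)
y = np.roll(x, 7) + 0.05 * rng.normal(size=500)   # delayed, slightly noisy copy
print("estimated delay:", ncc_delay(x, y, max_lag=20))
```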

  7. Using edge-preserving algorithm with non-local mean for significantly improved image-domain material decomposition in dual-energy CT

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Niu, Tianye; Xing, Lei; Xie, Yaoqin; Xiong, Guanglei; Elmore, Kimberly; Zhu, Jun; Wang, Luyao; Min, James K.

    2016-02-01

    Increased noise is a general concern for dual-energy material decomposition. Here, we develop an image-domain material decomposition algorithm for dual-energy CT (DECT) by incorporating an edge-preserving filter into the Local HighlY constrained backPRojection reconstruction (HYPR-LR) framework. With effective use of the non-local mean, the proposed algorithm, which is referred to as HYPR-NLM, reduces the noise in dual-energy decomposition while preserving the accuracy of quantitative measurement and spatial resolution of the material-specific dual-energy images. We demonstrate the noise reduction and resolution preservation of the algorithm with an iodine concentration numerical phantom by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). We also show the superior performance of HYPR-NLM over the existing methods by using two sets of cardiac perfusion imaging data. The DECT material decomposition comparison study shows that all four algorithms yield acceptable quantitative measurements of iodine concentration. Direct matrix inversion yields the highest noise level, followed by HYPR-LR and Iter-DECT. HYPR-NLM in an iterative formulation significantly reduces image noise, and the image noise is comparable to or even lower than that generated using Iter-DECT. For the HYPR-NLM method, there are marginal edge effects in the difference image, suggesting the high-frequency details are well preserved. In addition, when the search window size increases from 11 × 11 to 19 × 19, there are no significant changes or marginal edge effects in the HYPR-NLM difference images. The conclusions drawn from the comparison study are: (1) HYPR-NLM significantly reduces the DECT material decomposition noise while preserving quantitative measurements and high-frequency edge information, and (2) HYPR-NLM is robust with respect to parameter selection.
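    A sketch of the direct matrix inversion baseline mentioned above: each pixel's low/high-energy measurements are modelled as a linear mix of two basis materials and inverted pixel by pixel. The mixing matrix and images are placeholders, and the HYPR-NLM filtering itself is not reproduced.

```python
import numpy as np

# Two-material image-domain decomposition by per-pixel matrix inversion.
rng = np.random.default_rng(3)

# Hypothetical attenuation of (iodine, water) at low and high energy.
A = np.array([[3.0, 1.0],
              [1.5, 0.9]])

true_iodine = rng.uniform(0.0, 1.0, size=(64, 64))
true_water = rng.uniform(0.5, 1.0, size=(64, 64))
low_kvp = A[0, 0] * true_iodine + A[0, 1] * true_water + 0.01 * rng.normal(size=(64, 64))
high_kvp = A[1, 0] * true_iodine + A[1, 1] * true_water + 0.01 * rng.normal(size=(64, 64))

# Per-pixel inversion, vectorised: stack the two measurements and apply A^{-1}.
meas = np.stack([low_kvp.ravel(), high_kvp.ravel()])      # shape (2, N)
iodine, water = np.linalg.inv(A) @ meas
print("mean absolute iodine error:", np.abs(iodine.reshape(64, 64) - true_iodine).mean())
```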

  8. Performance of a parallel algorithm for standard cell placement on the Intel Hypercube

    NASA Technical Reports Server (NTRS)

    Jones, Mark; Banerjee, Prithviraj

    1987-01-01

    A parallel simulated annealing algorithm for standard cell placement on the Intel Hypercube is presented. A novel tree broadcasting strategy is used extensively for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than uniprocessor simulated annealing algorithms.
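    A toy serial simulated-annealing placement sketch (cells on a small grid, cost = total Manhattan wire length, moves = cell swaps), included to illustrate the underlying algorithm; the parallel Hypercube version with tree broadcasting of cell locations is not reproduced, and the netlist is a placeholder.

```python
import math
import random

# Serial simulated annealing for a tiny standard-cell placement problem.
random.seed(0)
nets = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]          # placeholder 2-pin nets
positions = [(0, 0), (0, 1), (1, 0), (1, 1)]             # initial cell placement


def wirelength(pos):
    return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1]) for a, b in nets)


temp, cooling = 10.0, 0.95
cost = wirelength(positions)
for _ in range(2000):
    i, j = random.sample(range(len(positions)), 2)
    positions[i], positions[j] = positions[j], positions[i]       # propose a swap
    new_cost = wirelength(positions)
    if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
        cost = new_cost                                           # accept the move
    else:
        positions[i], positions[j] = positions[j], positions[i]   # reject: undo the swap
    temp *= cooling
print("final wirelength:", cost)
```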

  9. Enhanced algorithm performance for land cover classification from remotely sensed data using bagging and boosting

    USGS Publications Warehouse

    Chan, J.C.-W.; Huang, C.; DeFries, R.

    2001-01-01

    Two ensemble methods, bagging and boosting, were investigated for improving algorithm performance. Our results confirmed the theoretical explanation [1] that bagging improves unstable, but not stable, learning algorithms. While boosting enhanced accuracy of a weak learner, its behavior is subject to the characteristics of each learning algorithm.
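    A minimal bagging sketch: unstable base learners (deep decision trees) are trained on bootstrap resamples and combined by majority vote. It illustrates the general technique discussed above on synthetic data, not the land-cover classification experiments.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Bagging: bootstrap resamples + majority vote over an ensemble of trees.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
rng = np.random.default_rng(0)


def bagged_predict(X_train, y_train, X_test, n_estimators=25):
    votes = np.zeros((n_estimators, len(X_test)), dtype=int)
    for m in range(n_estimators):
        idx = rng.integers(0, len(X_train), size=len(X_train))   # bootstrap sample
        tree = DecisionTreeClassifier(random_state=m).fit(X_train[idx], y_train[idx])
        votes[m] = tree.predict(X_test)
    return (votes.mean(axis=0) >= 0.5).astype(int)               # majority vote


pred = bagged_predict(X[:400], y[:400], X[400:])
print("holdout accuracy:", (pred == y[400:]).mean())
```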

  10. In-depth performance analysis of an EEG based neonatal seizure detection algorithm

    PubMed Central

    Mathieson, S.; Rennie, J.; Livingstone, V.; Temko, A.; Low, E.; Pressler, R.M.; Boylan, G.B.

    2016-01-01

    Objective To describe a novel neurophysiology based performance analysis of automated seizure detection algorithms for neonatal EEG to characterize features of detected and non-detected seizures and causes of false detections to identify areas for algorithmic improvement. Methods EEGs of 20 term neonates were recorded (10 seizure, 10 non-seizure). Seizures were annotated by an expert and characterized using a novel set of 10 criteria. ANSeR seizure detection algorithm (SDA) seizure annotations were compared to the expert to derive detected and non-detected seizures at three SDA sensitivity thresholds. Differences in seizure characteristics between groups were compared using univariate and multivariate analysis. False detections were characterized. Results The expert detected 421 seizures. The SDA at thresholds 0.4, 0.5, 0.6 detected 60%, 54% and 45% of seizures. At all thresholds, multivariate analyses demonstrated that the odds of detecting seizure increased with 4 criteria: seizure amplitude, duration, rhythmicity and number of EEG channels involved at seizure peak. Major causes of false detections included respiration and sweat artefacts or a highly rhythmic background, often during intermediate sleep. Conclusion This rigorous analysis allows estimation of how key seizure features are exploited by SDAs. Significance This study resulted in a beta version of ANSeR with significantly improved performance. PMID:27072097

  11. Determining the Effectiveness of Incorporating Geographic Information Into Vehicle Performance Algorithms

    SciTech Connect

    Sera White

    2012-04-01

    This thesis presents a research study using one year of driving data obtained from plug-in hybrid electric vehicles (PHEV) located in Sacramento and San Francisco, California to determine the effectiveness of incorporating geographic information into vehicle performance algorithms. Sacramento and San Francisco were chosen because of the availability of high resolution (1/9 arc second) digital elevation data. First, I present a method for obtaining instantaneous road slope, given a latitude and longitude, and introduce its use into common driving intensity algorithms. I show that for trips characterized by >40m of net elevation change (from key on to key off), the use of instantaneous road slope significantly changes the results of driving intensity calculations. For trips exhibiting elevation loss, algorithms ignoring road slope overestimated driving intensity by as much as 211 Wh/mile, while for trips exhibiting elevation gain these algorithms underestimated driving intensity by as much as 333 Wh/mile. Second, I describe and test an algorithm that incorporates vehicle route type into computations of city and highway fuel economy. Route type was determined by intersecting trip GPS points with ESRI StreetMap road types and assigning each trip as either city or highway route type according to whichever road type comprised the largest distance traveled. The fuel economy results produced by the geographic classification were compared to the fuel economy results produced by algorithms that assign route type based on average speed or driving style. Most results were within 1 mile per gallon (~3%) of one another; the largest difference was 1.4 miles per gallon for charge depleting highway trips. The methods for acquiring and using geographic data introduced in this thesis will enable other vehicle technology researchers to incorporate geographic data into their research problems.
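    A sketch of the first step described above, deriving instantaneous road slope from consecutive GPS points and an elevation lookup. The `elevation_m` function is a hypothetical stand-in for querying a digital elevation model, the trip coordinates are placeholders, and the 40 m net-elevation-change figure is the threshold quoted in the abstract.

```python
import math

# Hypothetical elevation lookup standing in for a DEM query.
def elevation_m(lat, lon):
    return 500.0 + 200.0 * math.sin(math.radians(lat)) * math.cos(math.radians(lon))


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


trip = [(38.58, -121.49), (38.59, -121.48), (38.60, -121.47)]   # placeholder GPS points
elev = [elevation_m(lat, lon) for lat, lon in trip]
for (a, b), e0, e1 in zip(zip(trip, trip[1:]), elev, elev[1:]):
    run = haversine_m(*a, *b)                                   # horizontal distance
    print(f"segment slope: {100.0 * (e1 - e0) / run:+.2f} %")
net_change = elev[-1] - elev[0]
print(f"net elevation change: {net_change:.1f} m (study threshold of interest: 40 m)")
```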

  12. Performance of a parallel algorithm for standard cell placement on the Intel Hypercube

    NASA Technical Reports Server (NTRS)

    Jones, Mark; Banerjee, Prithviraj

    1987-01-01

    A parallel simulated annealing algorithm for standard cell placement that is targeted to run on the Intel Hypercube is presented. A tree broadcasting strategy that is used extensively in our algorithm for updating cell locations in the parallel environment is presented. Studies on the performance of our algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms.

  13. Dual Engine application of the Performance Seeking Control algorithm

    NASA Technical Reports Server (NTRS)

    Mueller, F. D.; Nobbs, S. G.; Stewart, J. F.

    1993-01-01

    The Dual Engine Performance Seeking Control (PSC) flight/propulsion optimization program has been developed and will be flown during the second quarter of 1993. Previously, only single engine optimization was possible due to the limited capability of the on-board computer. The implementation of Dual Engine PSC has been made possible with the addition of a new state-of-the-art, higher throughput computer. As a result, the single engine PSC performance improvements already flown will be demonstrated on both engines, simultaneously. Dual Engine PSC will make it possible to directly compare aircraft performance with and without the improvements generated by PSC. With the additional thrust achieved with PSC, significant improvements in acceleration times and time to climb will be possible. PSC is also able to reduce deceleration time from supersonic speeds. This paper traces the history of the PSC program, describes the basic components of PSC, discusses the development and implementation of Dual Engine PSC including additions to the code, and presents predictions of the impact of Dual Engine PSC on aircraft performance.

  14. A novel ROC approach for performance evaluation of target detection algorithms

    NASA Astrophysics Data System (ADS)

    Ganapathy, Priya; Skipper, Julie A.

    2007-04-01

    Receiver operator characteristic (ROC) analysis is an emerging automated target recognition system performance assessment tool. The ROC metric, area under the curve (AUC), is a universally accepted measure of classifying accuracy. In the presented approach, the detection algorithm output, i.e., a response plane (RP), must consist of grayscale values wherein a maximum value (e.g. 255) corresponds to highest probability of target locations. AUC computation involves the comparison of the RP and the ground truth to classify RP pixels as true positives (TP), true negatives (TN), false positives (FP), or false negatives (FN). Ideally, the background and all objects other than targets are TN. Historically, evaluation methods have excluded the background, and only a few spoof objects likely to be considered as a hit by detection algorithms were a priori demarcated as TN. This can potentially exaggerate the algorithm's performance. Here, a new ROC approach has been developed that divides the entire image into mutually exclusive target (TP) and background (TN) grid squares with adjustable size. Based on the overlap of the thresholded RP with the TP and TN grids, the FN and FP fractions are computed. Variation of the grid square size can bias the ROC results by artificially altering specificity, so an assessment of relative performance under a constant grid square size is adopted in our approach. A pilot study was performed to assess the method's ability to capture RP changes under three different detection algorithm parameter settings on ten images with different backgrounds and target orientations. An ANOVA-based comparison of the AUCs for the three settings showed a significant difference (p<0.001) at 95% confidence interval.
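    A sketch of the grid-based ROC idea described above: tile the image into target and background grid squares, score each square from the response plane, sweep a threshold, and integrate TPF over FPF to obtain AUC. The response plane and ground truth are synthetic placeholders, and the per-square scoring rule (square maximum) is an assumption.

```python
import numpy as np

# Grid-based ROC/AUC on a synthetic response plane (RP).
rng = np.random.default_rng(4)
grid = 8                                                   # grid square size (pixels)
rp = rng.integers(0, 80, size=(64, 64)).astype(float)      # background response
target_mask = np.zeros((8, 8), dtype=bool)
target_mask[2, 3] = target_mask[5, 5] = True               # grid squares holding targets
rp[16:24, 24:32] += 150                                     # strong response on target squares
rp[40:48, 40:48] += 120


def square_scores(rp, grid):
    h, w = rp.shape
    return rp.reshape(h // grid, grid, w // grid, grid).max(axis=(1, 3))


scores = square_scores(rp, grid)
tpf, fpf = [], []
for thr in np.linspace(0, 255, 64):
    hit = scores >= thr
    tpf.append(hit[target_mask].mean())                    # detected target squares
    fpf.append(hit[~target_mask].mean())                   # false-alarm background squares
auc = np.trapz(tpf[::-1], fpf[::-1])                       # area under TPF-vs-FPF curve
print(f"AUC = {auc:.3f}")
```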

  15. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    USGS Publications Warehouse

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc correctly identified the locations of four of them. The space-time volumes of the alarms are 36% and 18%, respectively, when estimated with a normalized product measure of the empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% for both M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40%, and five were predicted by M8-MSc in 13%, of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reversed faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier

  16. Subsonic flight test evaluation of a performance seeking control algorithm on an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Gilyard, Glenn B.; Orme, John S.

    1992-01-01

    The subsonic flight test evaluation phase of the NASA F-15 (powered by F100 engines) performance seeking control program was completed for single-engine operation at part- and military-power settings. The subsonic performance seeking control algorithm optimizes the quasi-steady-state performance of the propulsion system for three modes of operation: the minimum fuel flow mode minimizes fuel consumption, the minimum temperature mode reduces fan turbine inlet temperature, and the maximum thrust mode maximizes thrust at military power. Decreases in thrust-specific fuel consumption of 1 to 2 percent were measured in the minimum fuel flow mode; these fuel savings are significant, especially for supersonic cruise aircraft. Decreases of up to approximately 100 degrees R in fan turbine inlet temperature were measured in the minimum temperature mode. Temperature reductions of this magnitude would more than double turbine life if inlet temperature were the only life factor. Measured thrust increases of up to approximately 15 percent in the maximum thrust mode caused substantial increases in aircraft acceleration. The system dynamics of the closed-loop algorithm operation were good. The subsonic flight phase has validated the performance seeking control technology, which can significantly benefit the next generation of fighter and transport aircraft.

  17. NETRA: A parallel architecture for integrated vision systems 2: Algorithms and performance evaluation

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra

    1989-01-01

    In Part 1, the architecture of NETRA is presented. A performance evaluation of NETRA using several common vision algorithms is also presented. The performance of algorithms when they are mapped onto one cluster is described. It is shown that SIMD, MIMD, and systolic algorithms can be easily mapped onto processor clusters, and almost linear speedups are possible. For some algorithms, analytical performance results are compared with implementation performance results. It is observed that the analysis is very accurate. Performance analysis of parallel algorithms when mapped across clusters is presented. Mappings across clusters illustrate the importance and use of shared as well as distributed memory in achieving high performance. The parameters for evaluation are derived from the characteristics of the parallel algorithms, and these parameters are used to evaluate the alternative communication strategies in NETRA. Furthermore, the effect of communication interference from other processors in the system on the execution of an algorithm is studied. Using the analysis, the performance of many algorithms with different characteristics is presented. It is observed that if communication speeds are matched with the computation speeds, good speedups are possible when algorithms are mapped across clusters.

  18. Performance and development plans for the Inner Detector trigger algorithms at ATLAS

    NASA Astrophysics Data System (ADS)

    Martin-Haugh, Stewart

    2015-12-01

    A description of the design and performance of the newly re-implemented tracking algorithms for the ATLAS trigger for LHC Run 2, to commence in spring 2015, is presented. The ATLAS High Level Trigger (HLT) has been restructured to run as a more flexible single stage process, rather than the two separate Level 2 and Event Filter stages used during Run 1. To make optimal use of this new scenario, a new tracking strategy has been implemented for Run 2. This new strategy will use a FastTrackFinder algorithm to directly seed the subsequent Precision Tracking, and will result in improved track parameter resolution, significantly faster execution times than were achieved during Run 1, and better efficiency. The timings of the algorithms for electron and tau track triggers are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves. The online deployment and commissioning are also discussed.

  19. High-Performance Algorithm for Solving the Diagnosis Problem

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Vatan, Farrokh

    2009-01-01

    An improved method of model-based diagnosis of a complex engineering system is embodied in an algorithm that involves considerably less computation than do prior such algorithms. This method and algorithm are based largely on developments reported in several NASA Tech Briefs articles: The Complexity of the Diagnosis Problem (NPO-30315), Vol. 26, No. 4 (April 2002), page 20; Fast Algorithms for Model-Based Diagnosis (NPO-30582), Vol. 29, No. 3 (March 2005), page 69; Two Methods of Efficient Solution of the Hitting-Set Problem (NPO-30584), Vol. 29, No. 3 (March 2005), page 73; and Efficient Model-Based Diagnosis Engine (NPO-40544), on the following page. Some background information from the cited articles is prerequisite to a meaningful summary of the innovative aspects of the present method and algorithm. In model-based diagnosis, the function of each component and the relationships among all the components of the engineering system to be diagnosed are represented as a logical system denoted the system description (SD). Hence, the expected normal behavior of the engineering system is the set of logical consequences of the SD. Faulty components lead to inconsistencies between the observed behaviors of the system and the SD. Diagnosis, the task of finding faulty components, is reduced to finding those components whose abnormalities could explain all the inconsistencies. The solution of the diagnosis problem should be a minimal diagnosis, which is a minimal set of faulty components. The calculation of a minimal diagnosis is inherently a hard problem, the solution of which requires amounts of computation time and memory that increase exponentially with the number of components of the engineering system. Among the developments to reduce the computational burden, as reported in the cited articles, is the mapping of the diagnosis problem onto the integer-programming (IP) problem. This mapping makes it possible to utilize a variety of algorithms developed previously
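    A brute-force sketch of the diagnosis-as-minimal-hitting-set idea described above: given conflict sets (sets of components that cannot all be healthy), a minimal diagnosis is a minimum-cardinality set of components that intersects every conflict. The exponential enumeration below is exactly the burden the cited IP mapping is designed to avoid; the conflicts are placeholders.

```python
from itertools import combinations

# Enumerate candidate diagnoses by increasing size and return all
# minimum-cardinality hitting sets of the conflict collection.
def minimal_diagnoses(conflicts):
    components = sorted(set().union(*conflicts))
    for size in range(1, len(components) + 1):
        hits = [set(c) for c in combinations(components, size)
                if all(set(c) & conflict for conflict in conflicts)]
        if hits:
            return hits                  # all minimum-cardinality diagnoses
    return []


conflicts = [{"valve1", "pump"}, {"valve2", "pump"}, {"valve1", "sensorA"}]
print(minimal_diagnoses(conflicts))
```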

  20. Obstacle Detection Algorithms for Aircraft Navigation: Performance Characterization of Obstacle Detection Algorithms for Aircraft Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Coraor, Lee

    2000-01-01

    The research reported here is a part of NASA's Synthetic Vision System (SVS) project for the development of a High Speed Civil Transport Aircraft (HSCT). One of the components of the SVS is a module for detection of potential obstacles in the aircraft's flight path by analyzing the images captured by an on-board camera in real-time. Design of such a module includes the selection and characterization of robust, reliable, and fast techniques and their implementation for execution in real-time. This report describes the results of our research in realizing such a design. It is organized into three parts. Part I. Data modeling and camera characterization; Part II. Algorithms for detecting airborne obstacles; and Part III. Real time implementation of obstacle detection algorithms on the Datacube MaxPCI architecture. A list of publications resulting from this grant as well as a list of relevant publications resulting from prior NASA grants on this topic are presented.

  1. Performance evaluation of power control algorithms in wireless cellular networks

    NASA Astrophysics Data System (ADS)

    Temaneh-Nyah, C.; Iita, V.

    2014-10-01

    Power control in a mobile communication network intends to control the transmission power levels in such a way that the required quality of service (QoS) for the users is guaranteed with the lowest possible transmission powers. Most studies of power control algorithms in the literature are based on simplified assumptions, which compromises the validity of the results when applied in a real environment. In this paper, a CDMA network was simulated. The real environment was accounted for by defining the analysis area, defining the base stations and mobile stations by their geographical coordinates, and accounting for the mobility of the mobile stations. The simulation also allowed a number of network parameters, including the network traffic and the wireless channel models, to be modified. Finally, we present the simulation results of a convergence-speed-based comparative analysis of three uplink power control algorithms.
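    As an illustration of the kind of iteration such convergence comparisons study, here is a sketch of the classic distributed power-control update p_i <- (gamma_target / SIR_i) * p_i (Foschini-Miljanic style). The link gains, noise level and target SIR are placeholders, not the paper's CDMA network model or its three algorithms.

```python
import numpy as np

# Distributed uplink power control: each user scales its power by the ratio of
# the target SIR to its current SIR.
rng = np.random.default_rng(5)
n_users, gamma_target, noise = 4, 5.0, 1e-3
G = rng.uniform(0.01, 0.05, size=(n_users, n_users))        # cross-link gains
np.fill_diagonal(G, rng.uniform(0.8, 1.0, size=n_users))    # own-link gains

p = np.full(n_users, 0.01)
for _ in range(50):
    interference = G @ p - np.diag(G) * p + noise           # other users' signals + noise
    sir = np.diag(G) * p / interference
    p = gamma_target / sir * p                              # distributed update

sir = np.diag(G) * p / (G @ p - np.diag(G) * p + noise)
print("final powers:", np.round(p, 4))
print("final SIRs  :", np.round(sir, 2))                    # should approach gamma_target
```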

  2. A high performance hardware implementation image encryption with AES algorithm

    NASA Astrophysics Data System (ADS)

    Farmani, Ali; Jafari, Mohamad; Miremadi, Seyed Sohrab

    2011-06-01

    This paper describes the implementation of a high-speed, high-throughput encryption algorithm for image encryption. We select the highly secure symmetric-key encryption algorithm AES (Advanced Encryption Standard) and increase speed and throughput using a four-stage pipeline, a control unit based on logic gates, an optimized design of the multiplier blocks in the MixColumns phase, and simultaneous generation of keys and rounds. This procedure makes AES suitable for fast image encryption. A 128-bit AES was implemented on an Altera FPGA, achieving a throughput of 6 Gbps at 471 MHz; encrypting a 32*32 test image takes 1.15 ms.
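    A software-only sketch of AES image encryption using the Python `cryptography` package in CTR mode; it illustrates the basic operation on a placeholder 32*32 image buffer and says nothing about the pipelined FPGA design, its throughput, or its key-schedule optimizations.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# AES-128 in CTR mode applied to a flat image buffer.
key = os.urandom(16)                      # 128-bit key
nonce = os.urandom(16)                    # initial counter block
image = os.urandom(32 * 32)               # placeholder for a 32x32 8-bit image

encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = encryptor.update(image) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == image
print("encrypted", len(ciphertext), "bytes")
```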

  3. GOES-R Geostationary Lightning Mapper Performance Specifications and Algorithms

    NASA Technical Reports Server (NTRS)

    Mach, Douglas M.; Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Petersen, William A.; Boldi, Robert A.; Carey, Lawrence D.; Bateman, Monte G.; Buchler, Dennis E.; McCaul, E. William, Jr.

    2008-01-01

    The Geostationary Lightning Mapper (GLM) is a single channel, near-IR imager/optical transient event detector, used to detect, locate and measure total lightning activity over the full disk. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series will carry a GLM that will provide continuous day and night observations of lightning. The mission objectives for the GLM are to: (1) Provide continuous, full-disk lightning measurements for storm warning and nowcasting, (2) Provide early warning of tornadic activity, and (3) Accumulate a long-term database to track decadal changes of lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13-year data record of global lightning activity. The GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms and applications. The science data will consist of lightning "events", "groups", and "flashes". The algorithm is being designed to be an efficient user of the computational resources. This may include parallelization of the code and the concept of sub-dividing the GLM FOV into regions to be processed in parallel. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama, Oklahoma, Central Florida, and the Washington DC Metropolitan area) are being used to develop the prelaunch algorithms and applications, and also improve our knowledge of thunderstorm initiation and evolution.

  4. Performance of a community detection algorithm based on semidefinite programming

    NASA Astrophysics Data System (ADS)

    Ricci-Tersenghi, Federico; Javanmard, Adel; Montanari, Andrea

    2016-03-01

    The problem of detecting communities in a graph is perhaps one of the most studied inference problems, given its simplicity and widespread diffusion among several disciplines. A very common benchmark for this problem is the stochastic block model or planted partition problem, where a phase transition takes place in the detection of the planted partition by changing the signal-to-noise ratio. Optimal algorithms for detection exist that are based on spectral methods, but we show these are extremely sensitive to slight modifications in the generative model. Recently, Javanmard, Montanari and Ricci-Tersenghi [1] used statistical physics arguments and numerical simulations to show that finding communities in the stochastic block model via semidefinite programming is quasi optimal. Further, the resulting semidefinite relaxation can be solved efficiently, and is very robust with respect to changes in the generative model. In this paper we study in detail several practical aspects of this new algorithm based on semidefinite programming for the detection of the planted partition. The algorithm turns out to be very fast, allowing the solution of problems with O(10^5) variables in a few seconds on a laptop computer.
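    A sketch of the semidefinite relaxation for two-community detection on a small planted-partition graph: maximize the inner product of a centered adjacency matrix with a PSD matrix of unit diagonal, then read labels from the sign of the leading eigenvector. This follows the general SDP formulation referenced above but uses a generic solver through cvxpy (an SDP-capable solver such as SCS is assumed to be installed) rather than the paper's fast method; graph size, probabilities and the centering are placeholders.

```python
import cvxpy as cp
import numpy as np

# Generate a small planted-partition (stochastic block model) graph.
rng = np.random.default_rng(0)
n, p_in, p_out = 40, 0.5, 0.05
sigma = np.repeat([1, -1], n // 2)                        # planted communities
P = np.where(sigma[:, None] == sigma[None, :], p_in, p_out)
A = np.triu((rng.random((n, n)) < P).astype(float), 1)
A = A + A.T                                               # symmetric adjacency matrix
B = A - A.sum() / (n * (n - 1))                           # crude centering of the adjacency

# SDP relaxation: maximise <B, X> over PSD X with unit diagonal.
X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Maximize(cp.trace(B @ X)), [X >> 0, cp.diag(X) == 1])
prob.solve()

labels = np.sign(np.linalg.eigh(X.value)[1][:, -1])       # leading eigenvector sign
print("estimated split:", int((labels > 0).sum()), "vs", int((labels < 0).sum()))
```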

  5. Thermal contact algorithms in SIERRA mechanics : mathematical background, numerical verification, and evaluation of performance.

    SciTech Connect

    Copps, Kevin D.; Carnes, Brian R.

    2008-04-01

    We examine algorithms for the finite element approximation of thermal contact models. We focus on the implementation of thermal contact algorithms in SIERRA Mechanics. Following the mathematical formulation of models for tied contact and resistance contact, we present three numerical algorithms: (1) the multi-point constraint (MPC) algorithm, (2) a resistance algorithm, and (3) a new generalized algorithm. We compare and contrast both the correctness and performance of the algorithms in three test problems. We tabulate the convergence rates of global norms of the temperature solution on sequentially refined meshes. We present the results of a parameter study of the effect of contact search tolerances. We outline best practices in using the software for predictive simulations, and suggest future improvements to the implementation.

  6. Distributed concurrency control performance: A study of algorithms, distribution, and replication

    SciTech Connect

    Carey, M.J.; Livny, M.

    1988-01-01

    Many concurrency control algorithms have been proposed for use in distributed database systems. Despite the large number of available algorithms, and the fact that distributed database systems are becoming a commercial reality, distributed concurrency control performance tradeoffs are still not well understood. In this paper the authors attempt to shed light on some of the important issues by studying the performance of four representative algorithms - distributed 2PL, wound-wait, basic timestamp ordering, and a distributed optimistic algorithm - using a detailed simulation model of a distributed DBMS. The authors examine the performance of these algorithms for various levels of contention, ''distributedness'' of the workload, and data replication. The results should prove useful to designers of future distributed database systems.

  7. A comprehensive performance evaluation on the prediction results of existing cooperative transcription factors identification algorithms

    PubMed Central

    2014-01-01

    Background Eukaryotic transcriptional regulation is known to be highly connected through the networks of cooperative transcription factors (TFs). Measuring the cooperativity of TFs is helpful for understanding the biological relevance of these TFs in regulating genes. The recent advances in computational techniques led to various predictions of cooperative TF pairs in yeast. As each algorithm integrated different data resources and was developed based on different rationales, it possessed its own merits and claimed to outperform the others. However, the claim was prone to subjectivity because each algorithm was compared with only a few other algorithms and only a small set of performance indices was used for comparison. This motivated us to propose a series of indices to objectively evaluate the prediction performance of existing algorithms, and based on the proposed performance indices, we conducted a comprehensive performance evaluation. Results We collected 14 sets of predicted cooperative TF pairs (PCTFPs) in yeast from 14 existing algorithms in the literature. Using the eight performance indices we adopted/proposed, the cooperativity of each PCTFP was measured, and a ranking score according to the mean cooperativity of the set was given to each set of PCTFPs under evaluation for each performance index. It was seen that the ranking scores of a set of PCTFPs vary with different performance indices, implying that an algorithm used to predict cooperative TF pairs may be strong under one index but weak under another. We finally made a comprehensive ranking for these 14 sets. The results showed that Wang J's study obtained the best performance evaluation on the prediction of cooperative TF pairs in yeast. Conclusions In this study, we adopted/proposed eight performance indices to make a comprehensive performance evaluation on the prediction results of 14 existing cooperative TFs identification algorithms. Most importantly, these proposed indices can be easily applied to

  8. Performance of a Chase-type decoding algorithm for Reed-Solomon codes on perpendicular magnetic recording channels

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chang, W.; Cruz, J. R.

    Algebraic soft-decision Reed-Solomon (RS) decoding algorithms with improved error-correcting capability and comparable complexity to standard algebraic hard-decision algorithms could be very attractive for possible implementation in the next generation of read channels. In this work, we investigate the performance of a low-complexity Chase (LCC)-type soft-decision RS decoding algorithm, recently proposed by Bellorado and Kavčić, on perpendicular magnetic recording channels for sector-long RS codes of practical interest. Previous results for additive white Gaussian noise channels have shown that for a moderately long high-rate code, the LCC algorithm can achieve a coding gain comparable to the Koetter-Vardy algorithm with much lower complexity. We present a set of numerical results that show that this algorithm provides small coding gains, on the order of a fraction of a dB, with similar complexity to the hard-decision algorithms currently used, and that larger coding gains can be obtained if we use more test patterns, which significantly increases its computational complexity.

  9. Differential estimation of verbal intelligence and performance intelligence scores from combined performance and demographic variables: the OPIE-3 verbal and performance algorithms.

    PubMed

    Schoenberg, Mike R; Duff, Kevin; Dorfman, Karen; Adams, Russell L

    2004-05-01

    Data from the WAIS-III standardization sample (The Psychological Corporation, 1997) were used to generate VIQ and PIQ estimation formulae using demographic variables and current WAIS-III subtest performances. The sample (n = 2450) was randomly divided into two groups; the first was used to develop the formulas and the second to validate the regression equations. Age, education, ethnicity, gender, and region of the country, as well as Vocabulary, Matrix Reasoning, and Picture Completion subtest raw scores, were used as predictor variables. Prediction formulas were generated using a single verbal and two performance subtest algorithms. The VIQ OPIE-3 model combined Vocabulary raw scores with demographic variables. The PIQ estimation algorithm used Matrix Reasoning and Picture Completion raw scores with demographic variables. The formulas for estimating premorbid VIQ and PIQ were highly significant and accurate. Differences in estimated VIQ and PIQ scores were evaluated, and the OPIE-3 algorithms were found to accurately predict VIQ and PIQ differences within the WAIS-III standardization sample. PMID:15587673

  10. Binocular self-calibration performed via adaptive genetic algorithm based on laser line imaging

    NASA Astrophysics Data System (ADS)

    Apolinar Muñoz Rodríguez, J.; Mejía Alanís, Francisco Carlos

    2016-07-01

    An accurate technique to perform binocular self-calibration by means of an adaptive genetic algorithm based on a laser line is presented. In this calibration, the genetic algorithm computes the vision parameters through simulated binary crossover (SBX). To carry it out, the genetic algorithm constructs an objective function from the binocular geometry of the laser line projection. Then, the SBX minimizes the objective function via chromosome recombination. In this algorithm, the adaptive procedure determines the search space via the line position to obtain the minimum convergence. Thus, the chromosomes of vision parameters provide the minimization. The approach of the proposed adaptive genetic algorithm is to calibrate and recalibrate the binocular setup without references and physical measurements. This procedure improves on traditional genetic algorithms, which calibrate the vision parameters by means of references and an unknown search space, because the proposed adaptive algorithm avoids the errors produced by missing references. Additionally, the three-dimensional vision is carried out based on the laser line position and vision parameters. The contribution of the proposed algorithm is corroborated by an evaluation of the accuracy of binocular calibration, which is performed via traditional genetic algorithms.
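    A sketch of simulated binary crossover (SBX), the recombination operator the abstract relies on: two parent parameter vectors produce two children whose spread around the parents is controlled by the distribution index eta. The parent vectors below are placeholder "vision parameter" values, not a real camera calibration.

```python
import numpy as np

# SBX: children are spread around the parents according to the distribution
# index eta (larger eta keeps children closer to the parents).
def sbx_crossover(p1, p2, eta=2.0, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    u = rng.random(p1.shape)
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2


parent1 = np.array([500.0, 320.0, 240.0])   # placeholder parameter vector
parent2 = np.array([520.0, 315.0, 250.0])
child1, child2 = sbx_crossover(parent1, parent2)
print(child1, child2)
```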

  11. On the estimation algorithm used in adaptive performance optimization of turbofan engines

    NASA Technical Reports Server (NTRS)

    Espana, Martin D.; Gilyard, Glenn B.

    1993-01-01

    The performance seeking control algorithm is designed to continuously optimize the performance of propulsion systems. The performance seeking control algorithm uses a nominal model of the propulsion system and estimates, in flight, the engine deviation parameters characterizing the engine deviations with respect to nominal conditions. In practice, because of measurement biases and/or model uncertainties, the estimated engine deviation parameters may not reflect the engine's actual off-nominal condition. This factor necessarily affects the overall performance seeking control scheme and is exacerbated by the open-loop character of the algorithm. The effects produced by unknown measurement biases on the estimation algorithm are evaluated. This evaluation allows for identification of the most critical measurements for application of the performance seeking control algorithm to an F100 engine. An equivalence relation between the biases and engine deviation parameters stems from an observability study; therefore, it cannot be decided whether the estimated engine deviation parameters represent the actual engine deviation or whether they simply reflect the measurement biases. A new algorithm, based on the engine's (steady-state) optimization model, is proposed and tested with flight data. When compared with previous Kalman filter schemes based on local engine dynamic models, the new algorithm is easier to design and tune, and it reduces the computational burden of the onboard computer.

  12. Performance of Thorup's Shortest Path Algorithm for Large-Scale Network Simulation

    NASA Astrophysics Data System (ADS)

    Sakumoto, Yusuke; Ohsaki, Hiroyuki; Imase, Makoto

    In this paper, we investigate the performance of Thorup's algorithm by comparing it to Dijkstra's algorithm for large-scale network simulations. One of the challenges toward the realization of large-scale network simulations is the efficient computation of shortest paths in a graph with N vertices and M edges. The time complexity for solving a single-source shortest path (SSSP) problem with Dijkstra's algorithm with a binary heap (DIJKSTRA-BH) is O((M+N)log N). A sophisticated algorithm, Thorup's algorithm, has been proposed. The original version of Thorup's algorithm (THORUP-FR) has a time complexity of O(M+N). A simplified version of Thorup's algorithm (THORUP-KL) has a time complexity of O(Mα(N)+N), where α(N) is the functional inverse of the Ackermann function. In this paper, we compare the performances (i.e., execution time and memory consumption) of THORUP-KL and DIJKSTRA-BH, since it is known that THORUP-FR is at least ten times slower than Dijkstra's algorithm with a Fibonacci heap. We find that (1) THORUP-KL is almost always faster than DIJKSTRA-BH for large-scale network simulations, and (2) the performances of THORUP-KL and DIJKSTRA-BH deviate from their time complexities due to the presence of the memory cache in the microprocessor.
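    A sketch of DIJKSTRA-BH, i.e. Dijkstra's single-source shortest-path algorithm with a binary heap, the baseline used in the comparison above; the adjacency list and edge weights are placeholders.

```python
import heapq

# Dijkstra's SSSP with a binary heap; adj maps each vertex to a list of
# (neighbour, weight) pairs.
def dijkstra_bh(adj, source):
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip it
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist


adj = {0: [(1, 2.0), (2, 5.0)], 1: [(2, 1.0), (3, 4.0)], 2: [(3, 1.0)], 3: []}
print(dijkstra_bh(adj, 0))               # {0: 0.0, 1: 2.0, 2: 3.0, 3: 4.0}
```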

  13. A Study on the Optimization Performance of Fireworks and Cuckoo Search Algorithms in Laser Machining Processes

    NASA Astrophysics Data System (ADS)

    Goswami, D.; Chakraborty, S.

    2014-11-01

    Laser machining is a promising non-contact process for effective machining of difficult-to-process advanced engineering materials. Increasing interest in the use of lasers for various machining operations can be attributed to its several unique advantages, like high productivity, non-contact processing, elimination of finishing operations, adaptability to automation, reduced processing cost, improved product quality, greater material utilization, minimum heat-affected zone and green manufacturing. To achieve the best desired machining performance and high quality characteristics of the machined components, it is extremely important to determine the optimal values of the laser machining process parameters. In this paper, fireworks algorithm and cuckoo search (CS) algorithm are applied for single as well as multi-response optimization of two laser machining processes. It is observed that although almost similar solutions are obtained for both these algorithms, CS algorithm outperforms fireworks algorithm with respect to average computation time, convergence rate and performance consistency.

  14. Algorithms and architectures for high performance analysis of semantic graphs.

    SciTech Connect

    Hendrickson, Bruce Alan

    2005-09-01

    analysis. Since intelligence datasets can be extremely large, the focus of this work is on the use of parallel computers. We have been working to develop scalable parallel algorithms that will be at the core of a semantic graph analysis infrastructure. Our work has involved two different thrusts, corresponding to two different computer architectures. The first architecture of interest is distributed memory, message passing computers. These machines are ubiquitous and affordable, but they are challenging targets for graph algorithms. Much of our distributed-memory work to date has been collaborative with researchers at Lawrence Livermore National Laboratory and has focused on finding short paths on distributed memory parallel machines. Our implementation on 32K processors of BlueGene/Light finds shortest paths between two specified vertices in just over a second for random graphs with 4 billion vertices.

  15. Towards Enhancement of Performance of K-Means Clustering Using Nature-Inspired Optimization Algorithms

    PubMed Central

    Deb, Suash; Yang, Xin-She

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario. PMID:25202730

  16. Towards enhancement of performance of K-means clustering using nature-inspired optimization algorithms.

    PubMed

    Fong, Simon; Deb, Suash; Yang, Xin-She; Zhuang, Yan

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario. PMID:25202730

  17. Comparison of Controller and Flight Deck Algorithm Performance During Interval Management with Dynamic Arrival Trees (STARS)

    NASA Technical Reports Server (NTRS)

    Battiste, Vernol; Lawton, George; Lachter, Joel; Brandt, Summer; Koteskey, Robert; Dao, Arik-Quang; Kraut, Josh; Ligda, Sarah; Johnson, Walter W.

    2012-01-01

    Managing the interval between arrival aircraft is a major part of the en route and TRACON controller's job. In an effort to reduce controller workload and low-altitude vectoring, algorithms have been developed to allow pilots to take responsibility for achieving and maintaining proper spacing. Additionally, algorithms have been developed to create dynamic weather-free arrival routes in the presence of convective weather. In a recent study we examined an algorithm to handle dynamic re-routing in the presence of convective weather and two distinct spacing algorithms. The spacing algorithms originated from different core algorithms; both were enhanced with trajectory intent data for the study. These two algorithms were used simultaneously in a human-in-the-loop (HITL) simulation where pilots performed weather-impacted arrival operations into Louisville International Airport while also performing interval management (IM) on some trials. The controllers retained responsibility for separation and for managing the en route airspace and, on some trials, for managing IM. The goal was a stress test of dynamic arrival algorithms with ground and airborne spacing concepts. The flight deck spacing algorithms or controller managed spacing not only had to be robust to the dynamic nature of aircraft re-routing around weather but also had to be compatible with two alternative algorithms for achieving the spacing goal. Flight deck interval management spacing in this simulation provided a clear reduction in controller workload relative to when controllers were responsible for spacing the aircraft. At the same time, spacing was much less variable with the flight deck automated spacing. Even though the approaches taken by the two spacing algorithms to achieve the interval management goals were slightly different, they proved compatible in achieving the interval management goal of 130 sec by the TRACON boundary.

  18. Performance measure of image and video quality assessment algorithms: subjective root-mean-square error

    NASA Astrophysics Data System (ADS)

    Nuutinen, Mikko; Virtanen, Toni; Häkkinen, Jukka

    2016-03-01

    Evaluating algorithms used to assess image and video quality requires performance measures. Traditional performance measures (e.g., Pearson's linear correlation coefficient, Spearman's rank-order correlation coefficient, and root mean square error) compare quality predictions of algorithms to subjective mean opinion scores (mean opinion score/differential mean opinion score). We propose a subjective root-mean-square error (SRMSE) performance measure for evaluating the accuracy of algorithms used to assess image and video quality. The SRMSE performance measure takes into account dispersion between observers. The other important property of the SRMSE performance measure is its measurement scale, which is calibrated to units of the number of average observers. The results of the SRMSE performance measure indicate the extent to which the algorithm can replace the subjective experiment (as the number of observers). Furthermore, we have presented the concept of target values, which define the performance level of the ideal algorithm. We have calculated the target values for all sample sets of the CID2013, CVD2014, and LIVE multiply distorted image quality databases. The target values and MATLAB implementation of the SRMSE performance measure are available on the project page of this study.
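    A sketch of the traditional performance measures mentioned above (Pearson, Spearman and RMSE) applied to hypothetical algorithm predictions versus mean opinion scores. The proposed SRMSE additionally requires per-image observer dispersion, which is not reproduced here.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Traditional quality-assessment performance measures on placeholder data.
mos = np.array([1.2, 2.5, 3.1, 3.8, 4.6])           # placeholder subjective scores
pred = np.array([1.0, 2.9, 3.0, 4.1, 4.4])           # placeholder algorithm outputs

rmse = np.sqrt(np.mean((pred - mos) ** 2))
print("PLCC :", round(pearsonr(pred, mos)[0], 3))    # Pearson linear correlation
print("SROCC:", round(spearmanr(pred, mos)[0], 3))   # Spearman rank-order correlation
print("RMSE :", round(rmse, 3))
```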

  19. Classifying performance impairment in response to sleep loss using pattern recognition algorithms on single session testing

    PubMed Central

    St. Hilaire, Melissa A.; Sullivan, Jason P.; Anderson, Clare; Cohen, Daniel A.; Barger, Laura K.; Lockley, Steven W.; Klerman, Elizabeth B.

    2012-01-01

    There is currently no “gold standard” marker of cognitive performance impairment resulting from sleep loss. We utilized pattern recognition algorithms to determine which features of data collected under controlled laboratory conditions could most reliably identify cognitive performance impairment in response to sleep loss using data from only one testing session, such as would occur in the “real world” or field conditions. A training set for testing the pattern recognition algorithms was developed using objective Psychomotor Vigilance Task (PVT) and subjective Karolinska Sleepiness Scale (KSS) data collected from laboratory studies during which subjects were sleep deprived for 26 – 52 hours. The algorithm was then tested in data from both laboratory and field experiments. The pattern recognition algorithm was able to identify performance impairment with a single testing session in individuals studied under laboratory conditions using PVT, KSS, length of time awake and time of day information with sensitivity and specificity as high as 82%. When this algorithm was tested on data collected under real-world conditions from individuals whose data were not in the training set, accuracy of predictions for individuals categorized with low performance impairment were as high as 98%. Predictions for medium and severe performance impairment were less accurate. We conclude that pattern recognition algorithms may be a promising method for identifying performance impairment in individuals using only current information about the individual’s behavior. Single testing features (e.g., number of PVT lapses) with high correlation with performance impairment in the laboratory setting may not be the best indicators of performance impairment under real-world conditions. Pattern recognition algorithms should be further tested for their ability to be used in conjunction with other assessments of sleepiness in real-world conditions to quantify performance impairment in

  20. The performance and development for the Inner Detector Trigger algorithms at ATLAS

    NASA Astrophysics Data System (ADS)

    Penc, Ondrej

    2015-05-01

    A redesign of the tracking algorithms for the ATLAS trigger for LHC's Run 2 starting in 2015 is in progress. The ATLAS HLT software has been restructured to run as a more flexible single stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. The new tracking strategy employed for Run 2 will use a Fast Track Finder (FTF) algorithm to seed subsequent Precision Tracking, and will result in improved track parameter resolution and faster execution times than achieved during Run 1. The performance of the new algorithms has been evaluated to identify those aspects where code optimisation would be most beneficial. The performance and timing of the algorithms for electron and muon reconstruction in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves.

  1. Performance Assessment Method for a Forged Fingerprint Detection Algorithm

    NASA Astrophysics Data System (ADS)

    Shin, Yong Nyuo; Jun, In-Kyung; Kim, Hyun; Shin, Woochang

    The threat of invasion of privacy and of the illegal appropriation of information both increase with the expansion of the biometrics service environment to open systems. However, while certificates or smart cards can easily be cancelled and reissued if found to be missing, there is no way to recover the unique biometric information of an individual following a security breach. With the recognition that this threat factor may disrupt the large-scale civil service operations approaching implementation, such as electronic ID cards and e-Government systems, many agencies and vendors around the world continue to develop forged fingerprint detection technology, but no objective performance assessment method has, to date, been reported. Therefore, in this paper, we propose a methodology designed to evaluate the objective performance of the forged fingerprint detection technology that is currently attracting a great deal of attention.

  2. On the Effectiveness of Nature-Inspired Metaheuristic Algorithms for Performing Phase Equilibrium Thermodynamic Calculations

    PubMed Central

    Fateen, Seif-Eddeen K.; Bonilla-Petriciolet, Adrian

    2014-01-01

    The search for reliable and efficient global optimization algorithms for solving phase stability and phase equilibrium problems in applied thermodynamics is an ongoing area of research. In this study, we evaluated and compared the reliability and efficiency of eight selected nature-inspired metaheuristic algorithms for solving difficult phase stability and phase equilibrium problems. These algorithms are the cuckoo search (CS), intelligent firefly (IFA), bat (BA), artificial bee colony (ABC), MAKHA, a hybrid between monkey algorithm and krill herd algorithm, covariance matrix adaptation evolution strategy (CMAES), magnetic charged system search (MCSS), and bare bones particle swarm optimization (BBPSO). The results clearly showed that CS is the most reliable of all methods as it successfully solved all thermodynamic problems tested in this study. CS proved to be a promising nature-inspired optimization method to perform applied thermodynamic calculations for process design. PMID:24967430

  3. On the effectiveness of nature-inspired metaheuristic algorithms for performing phase equilibrium thermodynamic calculations.

    PubMed

    Fateen, Seif-Eddeen K; Bonilla-Petriciolet, Adrian

    2014-01-01

    The search for reliable and efficient global optimization algorithms for solving phase stability and phase equilibrium problems in applied thermodynamics is an ongoing area of research. In this study, we evaluated and compared the reliability and efficiency of eight selected nature-inspired metaheuristic algorithms for solving difficult phase stability and phase equilibrium problems. These algorithms are the cuckoo search (CS), intelligent firefly (IFA), bat (BA), artificial bee colony (ABC), MAKHA, a hybrid between monkey algorithm and krill herd algorithm, covariance matrix adaptation evolution strategy (CMAES), magnetic charged system search (MCSS), and bare bones particle swarm optimization (BBPSO). The results clearly showed that CS is the most reliable of all methods as it successfully solved all thermodynamic problems tested in this study. CS proved to be a promising nature-inspired optimization method to perform applied thermodynamic calculations for process design. PMID:24967430

  4. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Tang, Jie; Nett, Brian E.; Chen, Guang-Hong

    2009-10-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms at a constant undersampling factor and several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.

  5. Dependence of Adaptive Cross-correlation Algorithm Performance on the Extended Scene Image Quality

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2008-01-01

    Recently, we reported an adaptive cross-correlation (ACC) algorithm to estimate with high accuracy the shift as large as several pixels between two extended-scene sub-images captured by a Shack-Hartmann wavefront sensor. It determines the positions of all extended-scene image cells relative to a reference cell in the same frame using an FFT-based iterative image-shifting algorithm. It works with point-source spot images as well as extended scene images. We have demonstrated previously based on some measured images that the ACC algorithm can determine image shifts with as high an accuracy as 0.01 pixel for shifts as large as 3 pixels, and yield similar results for both point source spot images and extended scene images. The shift estimate accuracy of the ACC algorithm depends on illumination level, background, and scene content in addition to the amount of the shift between two image cells. In this paper we investigate how the performance of the ACC algorithm depends on the quality and the frequency content of extended scene images captured by a Shack-Hartmann camera. We also compare the performance of the ACC algorithm with those of several other approaches, and introduce a failsafe criterion for the ACC algorithm-based extended scene Shack-Hartmann sensors.
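
    The iterative, sub-pixel ACC algorithm itself is described in the paper; the Python sketch below (with assumed variable names) only illustrates the FFT-based cross-correlation step that underlies this family of shift estimators, recovering an integer-pixel shift between a reference cell and a test cell. Sub-pixel refinement and the adaptive iteration would be layered on top.

        import numpy as np

        def fft_crosscorr_shift(ref, img):
            """Estimate the integer-pixel shift of `img` relative to `ref` via
            FFT-based cross-correlation (core step only; not the ACC code)."""
            ref = ref - ref.mean()
            img = img - img.mean()
            corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img))
            peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
            shift = np.array(peak, dtype=float)
            for ax, size in enumerate(corr.shape):
                if shift[ax] > size // 2:        # wrap large shifts to negative values
                    shift[ax] -= size
            return shift                         # (row_shift, col_shift)

        # toy usage: shift a random scene by (3, -2) pixels and recover the shift
        rng = np.random.default_rng(0)
        scene = rng.random((64, 64))
        shifted = np.roll(scene, (3, -2), axis=(0, 1))
        print(fft_crosscorr_shift(scene, shifted))   # approximately [ 3. -2.]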

  6. Dependence of adaptive cross-correlation algorithm performance on the extended scene image quality

    NASA Astrophysics Data System (ADS)

    Sidick, Erkin

    2008-08-01

    Recently, we reported an adaptive cross-correlation (ACC) algorithm to estimate with high accuracy the shift as large as several pixels between two extended-scene sub-images captured by a Shack-Hartmann wavefront sensor. It determines the positions of all extended-scene image cells relative to a reference cell in the same frame using an FFT-based iterative image-shifting algorithm. It works with point-source spot images as well as extended scene images. We have demonstrated previously based on some measured images that the ACC algorithm can determine image shifts with as high an accuracy as 0.01 pixel for shifts as large as 3 pixels, and yield similar results for both point source spot images and extended scene images. The shift estimate accuracy of the ACC algorithm depends on illumination level, background, and scene content in addition to the amount of the shift between two image cells. In this paper we investigate how the performance of the ACC algorithm depends on the quality and the frequency content of extended scene images captured by a Shack-Hartmann camera. We also compare the performance of the ACC algorithm with those of several other approaches, and introduce a failsafe criterion for the ACC algorithm-based extended scene Shack-Hartmann sensors.

  7. On the estimation algorithm for adaptive performance optimization of turbofan engines

    NASA Technical Reports Server (NTRS)

    Espana, Martin D.

    1993-01-01

    The performance seeking control (PSC) algorithm is designed to continuously optimize the performance of propulsion systems. The PSC algorithm uses a nominal propulsion system model and estimates, in flight, the engine deviation parameters (EDPs) characterizing the engine deviations with respect to nominal conditions. In practice, because of measurement biases and/or model uncertainties, the estimated EDPs may not reflect the engine's actual off-nominal condition. This factor has a direct impact on the PSC scheme exacerbated by the open-loop character of the algorithm. In this paper, the effects produced by unknown measurement biases over the estimation algorithm are evaluated. This evaluation allows for identification of the most critical measurements for application of the PSC algorithm to an F100 engine. An equivalence relation between the biases and EDPs stems from the analysis; therefore, it is undecided whether the estimated EDPs represent the actual engine deviation or whether they simply reflect the measurement biases. A new algorithm, based on the engine's (steady-state) optimization model, is proposed and tested with flight data. When compared with previous Kalman filter schemes, based on local engine dynamic models, the new algorithm is easier to design and tune and it reduces the computational burden of the onboard computer.

  8. Performance study of LMS based adaptive algorithms for unknown system identification

    SciTech Connect

    Javed, Shazia; Ahmad, Noor Atinah

    2014-07-10

    Adaptive filtering techniques have gained much popularity in the modeling of the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and their sensitivity to the spectral properties of input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of improved versions of LMS algorithms on their robustness and misalignment.
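
    As a concrete reference for the class of algorithms compared here, the following Python sketch shows a plain normalized LMS (NLMS) update used for system identification: the filter taps are adapted so that the filtered input tracks the measured (noisy) output of the unknown system. It is a generic textbook form, not the authors' simulation code, and all names are illustrative.

        import numpy as np

        def nlms_identify(x, d, num_taps=8, mu=0.5, eps=1e-8):
            """Minimal NLMS sketch for system identification: adapt taps w so the
            filtered input x tracks the measured (noisy) desired signal d."""
            w = np.zeros(num_taps)
            e = np.zeros(len(x))
            for n in range(num_taps - 1, len(x)):
                u = x[n - num_taps + 1:n + 1][::-1]        # [x[n], x[n-1], ...]
                y = np.dot(w, u)                           # filter output
                e[n] = d[n] - y                            # a priori error
                w += mu * e[n] * u / (eps + np.dot(u, u))  # normalized update
            return w, e

        # toy usage: identify a short unknown FIR system from its noisy output
        rng = np.random.default_rng(1)
        h_true = np.array([0.8, -0.4, 0.2, 0.1])
        x = rng.standard_normal(5000)
        d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
        w_est, _ = nlms_identify(x, d, num_taps=4, mu=0.5)
        print(np.round(w_est, 3))                          # close to h_true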

  9. Architecture-Aware Algorithms for Scalable Performance and Resilience on Heterogeneous Architectures. Final Report

    SciTech Connect

    Gropp, William D.

    2014-06-23

    With the coming end of Moore's law, it has become essential to develop new algorithms and techniques that can provide the performance needed by demanding computational science applications, especially those that are part of the DOE science mission. This work was part of a multi-institution, multi-investigator project that explored several approaches to develop algorithms that would be effective at the extreme scales and with the complex processor architectures that are expected at the end of this decade. The work by this group developed new performance models that have already helped guide the development of highly scalable versions of an algebraic multigrid solver, new programming approaches designed to support numerical algorithms on heterogeneous architectures, and a new, more scalable version of conjugate gradient, an important algorithm in the solution of very large linear systems of equations.
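
    The abstract does not detail the modified conjugate gradient method, so the sketch below is only the textbook CG iteration for a symmetric positive-definite system, written in Python for orientation; the scalability work referred to concerns reducing synchronization and communication around these same steps, which is not shown here.

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
            """Textbook conjugate gradient for a symmetric positive-definite
            system Ax = b (for orientation only; not the project's scalable variant)."""
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs_old = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

        # toy usage on a small SPD system
        rng = np.random.default_rng(2)
        M = rng.standard_normal((50, 50))
        A = M @ M.T + 50 * np.eye(50)       # make it SPD and well conditioned
        b = rng.standard_normal(50)
        x = conjugate_gradient(A, b)
        print(np.linalg.norm(A @ x - b))    # small residual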

  10. Performance study of LMS based adaptive algorithms for unknown system identification

    NASA Astrophysics Data System (ADS)

    Javed, Shazia; Ahmad, Noor Atinah

    2014-07-01

    Adaptive filtering techniques have gained much popularity in the modeling of the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and their sensitivity to the spectral properties of input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of improved versions of LMS algorithms on their robustness and misalignment.

  11. Performance evaluation of recommendation algorithms on Internet of Things services

    NASA Astrophysics Data System (ADS)

    Mashal, Ibrahim; Alsaryrah, Osama; Chung, Tein-Yaw

    2016-06-01

    Internet of Things (IoT) is the next wave of the industrial revolution that will initiate many services, such as personal health care and green energy monitoring, which people may subscribe to for their convenience. Recommending IoT services to users based on the objects they own will become crucial for the success of IoT. In this work, we introduce the concept of service recommender systems in IoT through a formal model. As a first attempt in this direction, we have proposed a hyper-graph model for the IoT recommender system in which each hyper-edge connects users, objects, and services. Next, we studied the usefulness of traditional recommendation schemes and their hybrid approaches on IoT service recommendation (IoTSRS) based on existing well-known metrics. The preliminary results show that existing approaches perform reasonably well but further extension is required for IoTSRS. Several challenges were discussed to point out the direction of future development in IoTSRS.

  12. Performance evaluation of trigger algorithm for the MACE telescope

    NASA Astrophysics Data System (ADS)

    Yadav, Kuldeep; Yadav, K. K.; Bhatt, N.; Chouhan, N.; Sikder, S. S.; Behere, A.; Pithawa, C. K.; Tickoo, A. K.; Rannot, R. C.; Bhattacharyya, S.; Mitra, A. K.; Koul, R.

    The MACE (Major Atmospheric Cherenkov Experiment) telescope, with a light collector diameter of 21 m, is being set up at Hanle (32.8° N, 78.9° E, 4200 m asl), India, to explore the gamma-ray sky in the tens of GeV energy range. The imaging camera of the telescope comprises 1088 pixels covering a total field-of-view of 4.3° × 4.0° with a trigger field-of-view of 2.6° × 3.0° and a uniform pixel resolution of 0.12°. In order to achieve a low energy trigger threshold of less than 30 GeV, a two level trigger scheme is being designed for the telescope. The first level trigger is generated within 16 pixels of the Camera Integrated Module (CIM) based on a 4 nearest neighbour (4NN) close cluster configuration within a coincidence gate window of 5 ns, while the second level trigger is generated by combining the first level triggers from neighbouring CIMs. Each pixel of the telescope is expected to operate at a single pixel threshold between 8-10 photo-electrons, where the single channel rate, dominated by after-pulsing, is expected to be ~500 kHz. The hardware implementation of the trigger logic is based on complex programmable logic devices (CPLD). The basic design concept, hardware implementation and performance evaluation of the trigger system in terms of threshold energy and trigger rate estimates based on Monte Carlo data for the MACE telescope will be presented at this meeting.

  13. Performance of an advanced lump correction algorithm for gamma-ray assays of plutonium

    SciTech Connect

    Prettyman, T.H.; Sprinkle, J.K. Jr.; Sheppard, G.A.

    1994-08-01

    The results of an experimental study to evaluate the performance of an advanced lump correction algorithm for gamma-ray assays of plutonium are presented. The algorithm is applied to correct segmented gamma scanner (SGS) and tomographic gamma scanner (TGS) assays of plutonium samples in 55-gal. drums containing heterogeneous matrices. The relative ability of the SGS and TGS to separate matrix and lump effects is examined, and a technique to detect gross heterogeneity in SGS assays is presented.

  14. Significant alterations in reported clinical practice associated with increased oversight of organ transplant center performance.

    PubMed

    Schold, Jesse D; Arrington, Charlotte J; Levine, Greg

    2010-09-01

    In the past several years, emphasis on quality metrics in the field of organ transplantation has increased significantly, largely because of the new conditions of participation issued by the Centers for Medicare and Medicaid Services. These regulations directly associate patients' outcomes and measured performance of centers with the distribution of public funding to institutions. Moreover, insurers and marketing ventures have used publicly available outcomes data from transplant centers for business decision making and advertisement purposes. We gave a 10-question survey to attendees of the Transplant Management Forum at the 2009 meeting of the United Network for Organ Sharing to ascertain how centers have responded to the increased oversight of performance. Of 63 responses, 55% indicated a low or near low performance rating at their center in the past 3 years. Respondents from low-performing centers were significantly more likely to indicate increased selection criteria for candidates (81% vs 38%, P = .001) and donors (77% vs 31%, P < .001) as well as alterations in clinical protocols (84% vs 52%, P = .007). Among respondents indicating lost insurance contracts (31%), these differences were also highly significant. Based on respondents' perceptions, outcomes of performance evaluations are associated with significant changes in clinical practice at transplant centers. The transplant community and policy makers should practice vigilance that performance evaluations and regulatory oversight do not inadvertently lead to diminished access to care among viable candidates or decreased transplant volume. PMID:20929114

  15. Measure profile surrogates: A method to validate the performance of epileptic seizure prediction algorithms

    NASA Astrophysics Data System (ADS)

    Kreuz, Thomas; Andrzejak, Ralph G.; Mormann, Florian; Kraskov, Alexander; Stögbauer, Harald; Elger, Christian E.; Lehnertz, Klaus; Grassberger, Peter

    2004-06-01

    In a growing number of publications it is claimed that epileptic seizures can be predicted by analyzing the electroencephalogram (EEG) with different characterizing measures. However, many of these studies suffer from a severe lack of statistical validation. Only rarely are results passed to a statistical test and verified against some null hypothesis H0 in order to quantify their significance. In this paper we propose a method to statistically validate the performance of measures used to predict epileptic seizures. From measure profiles rendered by applying a moving-window technique to the electroencephalogram we first generate an ensemble of surrogates by a constrained randomization using simulated annealing. Subsequently the seizure prediction algorithm is applied to the original measure profile and to the surrogates. If detectable changes before seizure onset exist, highest performance values should be obtained for the original measure profiles, and the null hypothesis “The measure is not suited for seizure prediction” can be rejected. We demonstrate our method by applying two measures of synchronization to a quasicontinuous EEG recording and by evaluating their predictive performance using a straightforward seizure prediction statistic. We would like to stress that the proposed method is rather universal and can be applied to many other prediction and detection problems.

  16. Independent component analysis algorithm FPGA design to perform real-time blind source separation

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Odom, Crispin; Botella, Guillermo; Meyer-Baese, Anke

    2015-05-01

    The conditions that arise in the Cocktail Party Problem prevail across many fields, creating a need for Blind Source Separation (BSS). The need for BSS has become prevalent in several fields of work, including array processing, communications, medical signal processing, speech processing, wireless communication, audio, acoustics and biomedical engineering. The concept of the cocktail party problem and BSS led to the development of Independent Component Analysis (ICA) algorithms. ICA proves useful for applications needing real-time signal processing. The goal of this research was to perform an extensive study on the ability and efficiency of Independent Component Analysis algorithms to perform blind source separation on mixed signals in software, and to implement them in hardware with a Field Programmable Gate Array (FPGA). The Algebraic ICA (A-ICA), Fast ICA, and Equivariant Adaptive Separation via Independence (EASI) ICA were examined and compared. The best algorithm required the least complexity and fewest resources while effectively separating mixed sources. The best algorithm was the EASI algorithm. The EASI ICA was implemented in hardware with a Field Programmable Gate Array (FPGA) to analyze its performance in real time.

  17. Thrust stand evaluation of engine performance improvement algorithms in an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Conners, Timothy R.

    1992-01-01

    An investigation is underway to determine the benefits of a new propulsion system optimization algorithm in an F-15 airplane. The performance seeking control (PSC) algorithm optimizes the quasi-steady-state performance of an F100 derivative turbofan engine for several modes of operation. The PSC algorithm uses an onboard software engine model that calculates thrust, stall margin, and other unmeasured variables for use in the optimization. As part of the PSC test program, the F-15 aircraft was operated on a horizontal thrust stand. Thrust was measured with highly accurate load cells. The measured thrust was compared to onboard model estimates and to results from posttest performance programs. Thrust changes using the various PSC modes were recorded. Those results were compared to benefits using the less complex highly integrated digital electronic control (HIDEC) algorithm. The PSC maximum thrust mode increased intermediate power thrust by 10 percent. The PSC engine model did very well at estimating measured thrust and closely followed the transients during optimization. Quantitative results from the evaluation of the algorithms and performance calculation models are included with emphasis on measured thrust results. The report presents a description of the PSC system and a discussion of factors affecting the accuracy of the thrust stand load measurements.

  18. A new multiobjective performance criterion used in PID tuning optimization algorithms

    PubMed Central

    Sahib, Mouayad A.; Ahmed, Bestoun S.

    2015-01-01

    In PID controller design, an optimization algorithm is commonly employed to search for the optimal controller parameters. The optimization algorithm is based on a specific performance criterion which is defined by an objective or cost function. To this end, different objective functions have been proposed in the literature to optimize the response of the controlled system. These functions include numerous weighted time and frequency domain variables. However, for an optimum desired response it is difficult to select the appropriate objective function or identify the best weight values required to optimize the PID controller design. This paper presents a new time domain performance criterion based on the multiobjective Pareto front solutions. The proposed objective function is tested in the PID controller design for an automatic voltage regulator system (AVR) application using particle swarm optimization algorithm. Simulation results show that the proposed performance criterion can highly improve the PID tuning optimization in comparison with traditional objective functions. PMID:26843978
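
    The specific Pareto-front-based criterion is defined in the paper; as background, the Python sketch below shows the kind of time-domain cost such tuning optimizers evaluate for each candidate PID gain set: a weighted mix of ITAE, overshoot and settling time computed from a recorded step response. The weights and names are illustrative assumptions, not the proposed criterion.

        import numpy as np

        def step_response_cost(t, y, setpoint=1.0, w_itae=1.0, w_os=1.0, w_ts=0.1,
                               settle_band=0.02):
            """Illustrative time-domain cost for PID tuning (not the paper's
            criterion): weighted ITAE + percent overshoot + settling time."""
            e = setpoint - y
            itae = np.trapz(t * np.abs(e), t)                 # integral of t*|e|
            overshoot = max(0.0, (y.max() - setpoint) / setpoint) * 100.0
            outside = np.abs(e) > settle_band * setpoint
            settling_time = t[outside][-1] if outside.any() else 0.0
            return w_itae * itae + w_os * overshoot + w_ts * settling_time

        # toy usage on a synthetic under-damped step response
        t = np.linspace(0, 10, 1000)
        y = 1 - np.exp(-0.8 * t) * np.cos(2.0 * t)
        print(step_response_cost(t, y))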

  19. A new multiobjective performance criterion used in PID tuning optimization algorithms.

    PubMed

    Sahib, Mouayad A; Ahmed, Bestoun S

    2016-01-01

    In PID controller design, an optimization algorithm is commonly employed to search for the optimal controller parameters. The optimization algorithm is based on a specific performance criterion which is defined by an objective or cost function. To this end, different objective functions have been proposed in the literature to optimize the response of the controlled system. These functions include numerous weighted time and frequency domain variables. However, for an optimum desired response it is difficult to select the appropriate objective function or identify the best weight values required to optimize the PID controller design. This paper presents a new time domain performance criterion based on the multiobjective Pareto front solutions. The proposed objective function is tested in the PID controller design for an automatic voltage regulator system (AVR) application using particle swarm optimization algorithm. Simulation results show that the proposed performance criterion can highly improve the PID tuning optimization in comparison with traditional objective functions. PMID:26843978

  20. Experimental Performance of a Genetic Algorithm for Airborne Strategic Conflict Resolution

    NASA Technical Reports Server (NTRS)

    Karr, David A.; Vivona, Robert A.; Roscoe, David A.; DePascale, Stephen M.; Consiglio, Maria

    2009-01-01

    The Autonomous Operations Planner, a research prototype flight-deck decision support tool to enable airborne self-separation, uses a pattern-based genetic algorithm to resolve predicted conflicts between the ownship and traffic aircraft. Conflicts are resolved by modifying the active route within the ownship's flight management system according to a predefined set of maneuver pattern templates. The performance of this pattern-based genetic algorithm was evaluated in the context of batch-mode Monte Carlo simulations running over 3600 flight hours of autonomous aircraft in en-route airspace under conditions ranging from typical current traffic densities to several times that level. Encountering over 8900 conflicts during two simulation experiments, the genetic algorithm was able to resolve all but three conflicts, while maintaining a required time of arrival constraint for most aircraft. Actual elapsed running time for the algorithm was consistent with conflict resolution in real time. The paper presents details of the genetic algorithm's design, along with mathematical models of the algorithm's performance and observations regarding the effectiveness of using complementary maneuver patterns when multiple resolutions by the same aircraft were required.

  1. Experimental Performance of a Genetic Algorithm for Airborne Strategic Conflict Resolution

    NASA Technical Reports Server (NTRS)

    Karr, David A.; Vivona, Robert A.; Roscoe, David A.; DePascale, Stephen M.; Consiglio, Maria

    2009-01-01

    The Autonomous Operations Planner, a research prototype flight-deck decision support tool to enable airborne self-separation, uses a pattern-based genetic algorithm to resolve predicted conflicts between the ownship and traffic aircraft. Conflicts are resolved by modifying the active route within the ownship's flight management system according to a predefined set of maneuver pattern templates. The performance of this pattern-based genetic algorithm was evaluated in the context of batch-mode Monte Carlo simulations running over 3600 flight hours of autonomous aircraft in en-route airspace under conditions ranging from typical current traffic densities to several times that level. Encountering over 8900 conflicts during two simulation experiments, the genetic algorithm was able to resolve all but three conflicts, while maintaining a required time of arrival constraint for most aircraft. Actual elapsed running time for the algorithm was consistent with conflict resolution in real time. The paper presents details of the genetic algorithm's design, along with mathematical models of the algorithm's performance and observations regarding the effectiveness of using complementary maneuver patterns when multiple resolutions by the same aircraft were required.

  2. Influence of Fiber Bragg Grating Spectrum Degradation on the Performance of Sensor Interrogation Algorithms

    PubMed Central

    Lamberti, Alfredo; Vanlanduit, Steve; De Pauw, Ben; Berghmans, Francis

    2014-01-01

    The working principle of fiber Bragg grating (FBG) sensors is mostly based on the tracking of the Bragg wavelength shift. To accomplish this task, different algorithms have been proposed, from conventional maximum and centroid detection algorithms to more recently-developed correlation-based techniques. Several studies regarding the performance of these algorithms have been conducted, but they did not take into account spectral distortions, which appear in many practical applications. This paper addresses this issue and analyzes the performance of four different wavelength tracking algorithms (maximum detection, centroid detection, cross-correlation and fast phase-correlation) when applied to distorted FBG spectra used for measuring dynamic loads. Both simulations and experiments are used for the analyses. The dynamic behavior of distorted FBG spectra is simulated using the transfer-matrix approach, and the amount of distortion of the spectra is quantified using dedicated distortion indices. The algorithms are compared in terms of achievable precision and accuracy. To corroborate the simulation results, experiments were conducted using three FBG sensors glued on a steel plate and subjected to a combination of transverse force and vibration loads. The analysis of the results showed that the fast phase-correlation algorithm guarantees the best combination of versatility, precision and accuracy. PMID:25521386
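
    Of the four trackers compared, the centroid detector has the simplest conventional definition; a minimal Python sketch of that baseline (threshold fraction and names assumed) is given below, computing the power-weighted mean wavelength of the samples above a fraction of the peak reflection. The paper's correlation-based trackers are not reproduced here.

        import numpy as np

        def centroid_bragg_wavelength(wavelengths, spectrum, threshold_frac=0.5):
            """Centroid-style Bragg-peak tracker (conventional baseline, sketched
            from its usual definition): power-weighted mean wavelength of all
            samples above a fraction of the peak power."""
            wavelengths = np.asarray(wavelengths, dtype=float)
            spectrum = np.asarray(spectrum, dtype=float)
            mask = spectrum >= threshold_frac * spectrum.max()
            return np.sum(wavelengths[mask] * spectrum[mask]) / np.sum(spectrum[mask])

        # toy usage: Gaussian-like FBG reflection peak centred at 1550.12 nm
        wl = np.linspace(1549.5, 1550.7, 600)
        refl = np.exp(-((wl - 1550.12) / 0.05) ** 2) + 0.01 * np.random.default_rng(3).random(600)
        print(round(centroid_bragg_wavelength(wl, refl), 3))   # ~1550.12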

  3. Performance Analysis of Selective Breeding Algorithm on One Dimensional Bin Packing Problems

    NASA Astrophysics Data System (ADS)

    Sriramya, P.; Parvathavarthini, B.

    2012-12-01

    The bin packing optimization problem packs a set of objects into a set of bins so that the amount of wasted space is minimized. The bin packing problem has many important applications. The objective is to find a feasible assignment of all weights to bins that minimizes the total number of bins used. The bin packing problem models several practical problems in such diverse areas as industrial control, computer systems, machine scheduling and VLSI chip layout. The selective breeding algorithm (SBA) is an iterative procedure which borrows ideas from artificial selection and the breeding process. By simulating artificial evolution in this way, the SBA can easily solve complex problems. One-dimensional bin packing benchmark problems are used to evaluate the performance of the SBA. The computational results of the SBA show optimal solutions for the tested benchmark problems. The proposed SBA is a good problem-solving technique for one-dimensional bin packing problems.
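
    The internals of the SBA are given in the paper itself; for readers unfamiliar with the problem, the Python sketch below shows the classic first-fit decreasing heuristic for the same one-dimensional bin packing task, purely as a baseline illustration and not as the selective breeding algorithm.

        def first_fit_decreasing(weights, capacity):
            """Classic first-fit decreasing heuristic for 1-D bin packing,
            shown only as a reference baseline (this is NOT the SBA)."""
            bins = []                      # remaining capacity of each open bin
            packing = []                   # items placed in each bin
            for w in sorted(weights, reverse=True):
                for i, free in enumerate(bins):
                    if w <= free:
                        bins[i] -= w
                        packing[i].append(w)
                        break
                else:                      # no open bin fits: open a new one
                    bins.append(capacity - w)
                    packing.append([w])
            return packing

        # toy usage
        items = [4, 8, 1, 4, 2, 1, 7, 3, 6, 2]
        print(first_fit_decreasing(items, capacity=10))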

  4. Comparative analysis of the speed performance of texture analysis algorithms on a graphic processing unit (GPU)

    NASA Astrophysics Data System (ADS)

    Triana-Martinez, J.; Orjuela-Vargas, S. A.; Philips, W.

    2013-03-01

    This paper compares the speed performance of a set of classic image algorithms for evaluating texture in images by using CUDA programming. We include a summary of the general programming model of CUDA. We select a set of texture algorithms, based on statistical analysis, that allow the use of repetitive functions, such as the co-occurrence matrix, Haralick features and local binary pattern techniques. The memory allocation time between the host and device memory is not taken into account. The results of this approach show a comparison of the texture algorithms in terms of speed when executed on CPU and GPU processors. The comparison shows that the algorithms can be accelerated more than 40 times when implemented using the CUDA environment.
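
    As a CPU reference for one of the repetitive kernels mentioned (the co-occurrence matrix), the Python sketch below builds a gray-level co-occurrence matrix for a single displacement and derives two Haralick-style features; the paper's contribution is accelerating this kind of loop on a GPU with CUDA, which is not shown here. Parameter names are illustrative.

        import numpy as np

        def glcm(image, levels=8, dx=1, dy=0):
            """CPU reference sketch of a gray-level co-occurrence matrix for one
            displacement (dx, dy), plus two Haralick-style features."""
            img = np.floor(image.astype(float) / (image.max() + 1) * levels).astype(int)
            h, w = img.shape
            mat = np.zeros((levels, levels), dtype=np.int64)
            for y in range(h - dy):
                for x in range(w - dx):
                    mat[img[y, x], img[y + dy, x + dx]] += 1
            p = mat / mat.sum()                               # normalize to probabilities
            idx = np.arange(levels)
            contrast = np.sum(p * (idx[:, None] - idx[None, :]) ** 2)
            energy = np.sum(p ** 2)
            return p, contrast, energy

        # toy usage on a random 8-bit image
        rng = np.random.default_rng(4)
        img = rng.integers(0, 256, size=(64, 64))
        _, contrast, energy = glcm(img)
        print(contrast, energy)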

  5. The performance of monotonic and new non-monotonic gradient ascent reconstruction algorithms for high-resolution neuroreceptor PET imaging.

    PubMed

    Angelis, G I; Reader, A J; Kotasidis, F A; Lionheart, W R; Matthews, J C

    2011-07-01

    Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [(11)C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG, for noisy PET data. Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice for
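
    For orientation, the multiplicative MLEM update that OSEM accelerates by subsetting is sketched below in Python with a toy system matrix; the gradient-based NMML and PCG methods discussed in the record maximize the same Poisson likelihood with different search directions, which are not reproduced here.

        import numpy as np

        def mlem(A, y, n_iter=50):
            """Minimal MLEM sketch for emission tomography: multiplicative
            update x <- x / (A^T 1) * A^T (y / (A x)).  OSEM applies the same
            update over subsets of the rows of A."""
            eps = 1e-12
            x = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0]) + eps       # sensitivity image A^T 1
            for _ in range(n_iter):
                proj = A @ x + eps                       # forward projection
                x *= (A.T @ (y / proj)) / sens           # back-project the ratio
            return x

        # toy usage with a small random system matrix and Poisson data
        rng = np.random.default_rng(5)
        A = rng.random((200, 30))
        x_true = rng.random(30) * 10
        y = rng.poisson(A @ x_true)
        x_hat = mlem(A, y, n_iter=200)
        print(np.corrcoef(x_hat, x_true)[0, 1])          # should be close to 1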

  6. Significant Differences in Pediatric Psychotropic Side Effects: Implications for School Performance

    ERIC Educational Resources Information Center

    Kubiszyn, Thomas; Mire, Sarah; Dutt, Sonia; Papathopoulos, Katina; Burridge, Andrea Backsheider

    2012-01-01

    Some side effects (SEs) of increasingly prescribed psychotropic medications can impact student performance in school. SE risk varies, even among drugs from the same class (e.g., antidepressants). Knowing which SEs occur significantly more often than others may enable school psychologists to enhance collaborative risk-benefit analysis, medication…

  7. Performance feedback, paraeducators, and literacy instruction for students with significant disabilities.

    PubMed

    Westover, Jennifer M; Martin, Emma J

    2014-12-01

    Literacy skills are fundamental for all learners. For students with significant disabilities, strong literacy skills provide a gateway to generative communication, genuine friendships, improved access to academic opportunities, access to information technology, and future employment opportunities. Unfortunately, many educators lack the knowledge to design or implement appropriate evidence-based literacy instruction for students with significant disabilities. Furthermore, students with significant disabilities often receive the majority of their instruction from paraeducators. This single-subject design study examined the effects of performance feedback on the delivery skills of paraeducators during systematic and explicit literacy instruction for students with significant disabilities. The specific skills targeted for feedback were planned opportunities for student responses and correct academic responses. Findings suggested that delivery of feedback on performance resulted in increased pacing, accuracy in student responses, and subsequent attainment of literacy skills for students with significant disabilities. Implications for the use of performance feedback as an evaluation and training tool for increasing effective instructional practices are provided. PMID:25271082

  8. A fast and high performance multiple data integration algorithm for identifying human disease genes

    PubMed Central

    2015-01-01

    Background Integrating multiple data sources is indispensable in improving disease gene identification. This is not only because disease genes associated with similar genetic diseases tend to lie close to each other in various biological networks, but also because gene-disease associations are complex. Although various algorithms have been proposed to identify disease genes, their prediction performance and computational time should still be further improved. Results In this study, we propose a fast and high performance multiple data integration algorithm for identifying human disease genes. A posterior probability of each candidate gene associated with individual diseases is calculated by using a Bayesian analysis method and a binary logistic regression model. Two prior probability estimation strategies and two feature vector construction methods are developed to test the performance of the proposed algorithm. Conclusions The proposed algorithm not only generates predictions with high AUC scores, but also runs very fast. When only a single PPI network is employed, the AUC score is 0.769 by using F2 as feature vectors. The average running time for each leave-one-out experiment is only around 1.5 seconds. When three biological networks are integrated, the AUC score using F3 as feature vectors increases to 0.830, and the average running time for each leave-one-out experiment is only about 12.54 seconds. This is better than many existing algorithms. PMID:26399620
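
    The F2/F3 feature vectors and prior-probability strategies are defined in the paper; the short Python sketch below only illustrates the general scoring pattern described, fitting a binary logistic regression on placeholder network-derived features and ranking candidate genes by the predicted probability. The feature values and labels are synthetic, and scikit-learn is used purely for convenience.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        # Placeholder features standing in for network-derived gene descriptors;
        # labels mark hypothetical known disease genes.
        rng = np.random.default_rng(6)
        n_genes, n_features = 500, 5
        X = rng.standard_normal((n_genes, n_features))
        w_true = np.array([1.5, -0.8, 0.0, 0.6, 0.0])
        p = 1.0 / (1.0 + np.exp(-(X @ w_true)))
        y = rng.binomial(1, p)                            # 1 = known disease gene

        model = LogisticRegression(max_iter=1000).fit(X, y)
        scores = model.predict_proba(X)[:, 1]             # posterior-like gene scores
        print("training AUC:", round(roc_auc_score(y, scores), 3))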

  9. Performance comparison of wavefront reconstruction and control algorithms for Extremely Large Telescopes.

    PubMed

    Montilla, I; Béchet, C; Le Louarn, M; Reyes, M; Tallon, M

    2010-11-01

    Extremely Large Telescopes (ELTs) are very challenging with respect to their adaptive optics (AO) requirements. Their diameters and the specifications required by the astronomical science for which they are being designed imply a huge increment in the number of degrees of freedom in the deformable mirrors. Faster algorithms are needed to implement the real-time reconstruction and control in AO at the required speed. We present the results of a study of the AO correction performance of three different algorithms applied to the case of a 42-m ELT: one considered as a reference, the matrix-vector multiply (MVM) algorithm; and two considered fast, the fractal iterative method (FrIM) and the Fourier transform reconstructor (FTR). The MVM and the FrIM both provide a maximum a posteriori estimation, while the FTR provides a least-squares one. The algorithms are tested on the European Southern Observatory (ESO) end-to-end simulator, OCTOPUS. The performance is compared using a natural guide star single-conjugate adaptive optics configuration. The results demonstrate that the methods have similar performance in a large variety of simulated conditions. However, with respect to system misregistrations, the fast algorithms demonstrate an interesting robustness. PMID:21045895

  10. Performance of the reconstruction algorithms of the FIRST experiment pixel sensors vertex detector

    NASA Astrophysics Data System (ADS)

    Rescigno, R.; Finck, Ch.; Juliani, D.; Spiriti, E.; Baudot, J.; Abou-Haidar, Z.; Agodi, C.; Alvarez, M. A. G.; Aumann, T.; Battistoni, G.; Bocci, A.; Böhlen, T. T.; Boudard, A.; Brunetti, A.; Carpinelli, M.; Cirrone, G. A. P.; Cortes-Giraldo, M. A.; Cuttone, G.; De Napoli, M.; Durante, M.; Gallardo, M. I.; Golosio, B.; Iarocci, E.; Iazzi, F.; Ickert, G.; Introzzi, R.; Krimmer, J.; Kurz, N.; Labalme, M.; Leifels, Y.; Le Fevre, A.; Leray, S.; Marchetto, F.; Monaco, V.; Morone, M. C.; Oliva, P.; Paoloni, A.; Patera, V.; Piersanti, L.; Pleskac, R.; Quesada, J. M.; Randazzo, N.; Romano, F.; Rossi, D.; Rousseau, M.; Sacchi, R.; Sala, P.; Sarti, A.; Scheidenberger, C.; Schuy, C.; Sciubba, A.; Sfienti, C.; Simon, H.; Sipala, V.; Tropea, S.; Vanstalle, M.; Younis, H.

    2014-12-01

    Hadrontherapy treatments use charged particles (e.g. protons and carbon ions) to treat tumors. During a therapeutic treatment with carbon ions, the beam undergoes nuclear fragmentation processes giving rise to significant yields of secondary charged particles. An accurate prediction of these production rates is necessary to estimate precisely the dose deposited into the tumours and the surrounding healthy tissues. Nowadays, a limited set of double differential carbon fragmentation cross-sections is available. Experimental data are necessary to benchmark Monte Carlo simulations for their use in hadrontherapy. The purpose of the FIRST experiment is to study nuclear fragmentation processes of ions with kinetic energy in the range from 100 to 1000 MeV/u. Tracks are reconstructed using information from a pixel silicon detector based on CMOS technology. The performance achieved using this device for hadrontherapy purposes is discussed. For each reconstruction step (clustering, tracking and vertexing), different methods are implemented. The algorithm performance and the accuracy of the reconstructed observables are evaluated on the basis of simulated and experimental data.

  11. Hardware acceleration of lucky-region fusion (LRF) algorithm for high-performance real-time video processing

    NASA Astrophysics Data System (ADS)

    Browning, Tyler; Jackson, Christopher; Cayci, Furkan; Carhart, Gary W.; Liu, J. J.; Kiamilev, Fouad

    2015-06-01

    "Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm extracts sharp regions of an image obtained from a series of short exposure frames from fast, high-resolution image sensors, and fuses the sharp regions into a final, improved image. In our previous research, the LRF algorithm had been implemented on CPU and field programmable gate array (FPGA) platforms. The CPU did not have sufficient processing power to handle real-time processing of video. Last year, we presented a real-time LRF implementation using an FPGA. However, due to the slow register-transfer level (RTL) development and simulation time, it was difficult to adjust and discover optimal LRF settings such as Gaussian kernel radius and synthetic frame buffer size. To overcome this limitation, we implemented the LRF algorithm on an off-the-shelf graphical processing unit (GPU) in order to take advantage of built-in parallelization and significantly faster development time. Our initial results show that the unoptimized GPU implementation has almost comparable turbulence mitigation to the FPGA version. In our presentation, we will explore optimization of the LRF algorithm on the GPU to achieve higher performance results, and adding new performance capabilities such as image stabilization.

  12. Performance assessment of an algorithm for the alignment of fMRI time series.

    PubMed

    Ciulla, Carlo; Deek, Fadi P

    2002-01-01

    This paper reports on performance assessment of an algorithm developed to align functional Magnetic Resonance Image (fMRI) time series. The algorithm is based on the assumption that the human brain is subject to rigid-body motion and has been devised by pipelining fiducial markers and tensor based registration methodologies. Feature extraction is performed on each fMRI volume to determine tensors of inertia and gradient image of the brain. A head coordinate system is determined on the basis of three fiducial markers found automatically at the head boundary by means of the tensors and is used to compute a point-based rigid matching transformation. Intensity correction is performed with sub-voxel accuracy by trilinear interpolation. Performance of the algorithm was preliminarily assessed by fMR brain images in which controlled motion has been simulated. Further experimentation has been conducted with real fMRI time series. Rigid-body transformations were retrieved automatically and the value of motion parameters compared to those obtained with the Statistical Parametric Mapping (SPM99) and the Automatic Image Registration (AIR 3.08). Results indicate that the algorithm offers sub-voxel accuracy in performing both misalignment and intensity correction of fMRI time series. PMID:12137364

  13. Performance comparison of independent component analysis algorithms for fetal cardiac signal reconstruction: a study on synthetic fMCG data

    NASA Astrophysics Data System (ADS)

    Mantini, D.; Hild, K. E., II; Alleva, G.; Comani, S.

    2006-02-01

    Independent component analysis (ICA) algorithms have been successfully used for signal extraction tasks in the field of biomedical signal processing. We studied the performances of six algorithms (FastICA, CubICA, JADE, Infomax, TDSEP and MRMI-SIG) for fetal magnetocardiography (fMCG). Synthetic datasets were used to check the quality of the separated components against the original traces. Real fMCG recordings were simulated with linear combinations of typical fMCG source signals: maternal and fetal cardiac activity, ambient noise, maternal respiration, sensor spikes and thermal noise. Clusters of different dimensions (19, 36 and 55 sensors) were prepared to represent different MCG systems. Two types of signal-to-interference ratios (SIR) were measured. The first involves averaging over all estimated components and the second is based solely on the fetal trace. The computation time to reach a minimum of 20 dB SIR was measured for all six algorithms. No significant dependency on gestational age or cluster dimension was observed. Infomax performed poorly when a sub-Gaussian source was included; TDSEP and MRMI-SIG were sensitive to additive noise, whereas FastICA, CubICA and JADE showed the best performances. Of all six methods considered, FastICA had the best overall performance in terms of both separation quality and computation times.

  14. Investigation of the relative significance of individual environmental parameters to sonar performance prediction uncertainty

    NASA Astrophysics Data System (ADS)

    Wang, Ding; Xu, Wen; Schmidt, Henrik

    2002-11-01

    A large part of sonar performance prediction uncertainty is associated with the uncertain ocean acoustic environment. An optimal in situ measurement strategy, i.e., adaptively capturing the most critical uncertain environmental parameters within operational constraints, can minimize the sonar performance prediction uncertainty. Understanding the relative significance of individual environmental parameters to sonar performance prediction uncertainty is fundamental to the heuristics used to determine the most critical environmental parameters. Based on this understanding, the optimal parametrization of ocean acoustic environments can be defined, which will significantly simplify the adaptive sampling pattern. As an example, matched-field processing is used to localize an unknown sound source position in a realistic ocean environment. Typical shallow water environmental models are used with some of the properties being stochastic variables. The ratio of the main lobe peak to the maximum side lobe peak of the ambiguity function and the main lobe peak displacement due to mismatch are chosen as performance metrics, respectively, in two different scenarios. The relative significance of some environmental parameters such as sediment thickness and the weights of empirical orthogonal functions (EOFs) has been computed. Some preliminary results are discussed.

  15. Performance evaluation of imaging seeker tracking algorithm based on multi-features

    NASA Astrophysics Data System (ADS)

    Li, Yujue; Yan, Jinglong

    2011-08-01

    The paper presents a new, efficient method for performance evaluation of imaging seeker tracking algorithms. The method utilizes multiple features associated with the tracking point of each video frame, computes a local score (LS) for every feature, and derives a global score (GS) for a given tracking algorithm according to a combination strategy. The method can be divided into three steps. In the first step, it extracts evaluation features from the neighborhood of each tracking point. The features may include tracking error, target shape, target area, tracking path, and so on. Then, for each feature, a local score is computed based on the number of targets tracked successfully; similarity measurements and empirical thresholds between the neighborhood of the tracking point and the target template define whether tracking is successful. Of course, the number should be 0 or 1 for single-target tracking. Finally, a weight is assigned to each feature according to its validity for performance assessment; the weights are multiplied by the local scores and the result is normalized between 0 and 1 to obtain the global score of a given tracking algorithm. By comparing the global scores of tracking algorithms on a certain type of scene, their performance can be evaluated quantitatively. The proposed method covers nearly all tracking error factors that can be introduced into the process of target tracking, so the evaluation result has higher reliability. Experimental results, obtained with in-flight video from an infrared imaging seeker and several target tracking algorithms, illustrate the performance of target tracking and demonstrate the effectiveness and robustness of the proposed method.
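
    A minimal Python sketch of the local-to-global score combination described above is given below; the weights and local scores are placeholder values, and the normalization choice is an assumption rather than the paper's exact rule.

        import numpy as np

        def global_score(local_scores, weights):
            """Sketch of the local-to-global combination: feature-wise local
            scores are weighted by their assumed validity and normalized to
            [0, 1] to give one global score per tracking algorithm."""
            local_scores = np.asarray(local_scores, dtype=float)
            weights = np.asarray(weights, dtype=float)
            weights = weights / weights.sum()            # normalize the weights
            return float(np.clip(np.dot(weights, local_scores), 0.0, 1.0))

        # toy usage: local scores for tracking error, target shape, area, path
        ls = [0.9, 0.7, 0.8, 0.6]
        w = [0.4, 0.2, 0.2, 0.2]
        print(global_score(ls, w))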

  16. Syndromic surveillance using veterinary laboratory data: data pre-processing and algorithm performance evaluation

    PubMed Central

    Dórea, Fernanda C.; McEwen, Beverly J.; McNab, W. Bruce; Revie, Crawford W.; Sanchez, Javier

    2013-01-01

    Diagnostic test orders to an animal laboratory were explored as a data source for monitoring trends in the incidence of clinical syndromes in cattle. Four years of real data and over 200 simulated outbreak signals were used to compare pre-processing methods that could remove temporal effects in the data, as well as temporal aberration detection algorithms that provided high sensitivity and specificity. Weekly differencing demonstrated solid performance in removing day-of-week effects, even in series with low daily counts. For aberration detection, the results indicated that no single algorithm showed performance superior to all others across the range of outbreak scenarios simulated. Exponentially weighted moving average charts and Holt–Winters exponential smoothing demonstrated complementary performance, with the latter offering an automated method to adjust to changes in the time series that will likely occur in the future. Shewhart charts provided lower sensitivity but earlier detection in some scenarios. Cumulative sum charts did not appear to add value to the system; however, the poor performance of this algorithm was attributed to characteristics of the data monitored. These findings indicate that automated monitoring aimed at early detection of temporal aberrations will likely be most effective when a range of algorithms are implemented in parallel. PMID:23576782
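
    As one concrete example of the detectors evaluated, the Python sketch below implements a basic EWMA control chart on an (already pre-processed) daily count series, with the control limit estimated from a baseline window; the parameter values and limit formula follow common textbook choices rather than the study's exact configuration.

        import numpy as np

        def ewma_alarms(counts, lam=0.3, k=3.0, baseline=30):
            """Basic EWMA aberration detector: the smoothed statistic is compared
            against a control limit derived from a baseline period."""
            counts = np.asarray(counts, dtype=float)
            mu = counts[:baseline].mean()
            sigma = counts[:baseline].std(ddof=1)
            limit = mu + k * sigma * np.sqrt(lam / (2 - lam))  # asymptotic limit
            z = mu
            alarms = []
            for t, c in enumerate(counts):
                z = lam * c + (1 - lam) * z                    # EWMA update
                if t >= baseline and z > limit:
                    alarms.append(t)
            return alarms

        # toy usage: flat baseline with an injected outbreak signal
        rng = np.random.default_rng(7)
        series = rng.poisson(10, size=120).astype(float)
        series[90:100] += np.linspace(5, 20, 10)               # simulated outbreak
        print(ewma_alarms(series))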

  17. Signal and image processing algorithm performance in a virtual and elastic computing environment

    NASA Astrophysics Data System (ADS)

    Bennett, Kelly W.; Robertson, James

    2013-05-01

    The U.S. Army Research Laboratory (ARL) supports the development of classification, detection, tracking, and localization algorithms using multiple sensing modalities including acoustic, seismic, E-field, magnetic field, PIR, and visual and IR imaging. Multimodal sensors collect large amounts of data in support of algorithm development. The resulting large amount of data, and their associated high-performance computing needs, increases and challenges existing computing infrastructures. Purchasing computer power as a commodity using a Cloud service offers low-cost, pay-as-you-go pricing models, scalability, and elasticity that may provide solutions to develop and optimize algorithms without having to procure additional hardware and resources. This paper provides a detailed look at using a commercial cloud service provider, such as Amazon Web Services (AWS), to develop and deploy simple signal and image processing algorithms in a cloud and run the algorithms on a large set of data archived in the ARL Multimodal Signatures Database (MMSDB). Analytical results will provide performance comparisons with existing infrastructure. A discussion on using cloud computing with government data will discuss best security practices that exist within cloud services, such as AWS.

  18. Performance evaluation of wavelet-based ECG compression algorithms for telecardiology application over CDMA network.

    PubMed

    Kim, Byung S; Yoo, Sun K

    2007-09-01

    The use of wireless networks bears great practical importance in instantaneous transmission of ECG signals during movement. In this paper, three typical wavelet-based ECG compression algorithms, Rajoub (RA), Embedded Zerotree Wavelet (EZ), and Wavelet Transform Higher-Order Statistics Coding (WH), were evaluated to find an appropriate ECG compression algorithm for scalable and reliable wireless tele-cardiology applications, particularly over a CDMA network. The short-term and long-term performance characteristics of the three algorithms were analyzed using normal, abnormal, and measurement noise-contaminated ECG signals from the MIT-BIH database. In addition to the processing delay measurement, compression efficiency and reconstruction sensitivity to error were also evaluated via simulation models including the noise-free channel model, random noise channel model, and CDMA channel model, as well as over an actual CDMA network currently operating in Korea. This study found that the EZ algorithm achieves the best compression efficiency within a low-noise environment, and that the WH algorithm is competitive for use in high-error environments with degraded short-term performance with abnormal or contaminated ECG signals. PMID:17701824

  19. Performance evaluation of a sequential minimal radial basis function (RBF) neural network learning algorithm.

    PubMed

    Lu, Y; Sundararajan, N; Saratchandran, P

    1998-01-01

    This paper presents a detailed performance analysis of the minimal resource allocation network (M-RAN) learning algorithm. M-RAN is a sequential learning radial basis function neural network which combines the growth criterion of the resource allocating network (RAN) of Platt (1991) with a pruning strategy based on the relative contribution of each hidden unit to the overall network output. The resulting network leads toward a minimal topology for the RAN. The performance of this algorithm is compared with that of multilayer feedforward networks (MFNs) trained with 1) a variant of the standard backpropagation algorithm, known as RPROP, and 2) the dependence identification (DI) algorithm of Moody and Antsaklis on several benchmark problems in the function approximation and pattern classification areas. For all these problems, the M-RAN algorithm is shown to realize networks with far fewer hidden neurons and better or the same approximation/classification accuracy. Further, the time taken for learning (training) is also considerably shorter, as M-RAN does not require repeated presentation of the training data. PMID:18252454
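
    M-RAN's growth and pruning criteria are detailed in the paper; the Python sketch below only shows the underlying radial basis function network output with Gaussian hidden units, which is the structure those criteria add units to and prune units from. All names and values are illustrative.

        import numpy as np

        def rbf_forward(x, centers, widths, weights, bias=0.0):
            """Forward pass of an RBF network with Gaussian hidden units
            (the structure that M-RAN grows and prunes sequentially)."""
            d2 = np.sum((centers - x) ** 2, axis=1)      # squared distances to centres
            phi = np.exp(-d2 / (2.0 * widths ** 2))      # Gaussian activations
            return float(np.dot(weights, phi) + bias)

        # toy usage: 3 hidden units evaluated at x = 0.4
        centers = np.array([[0.0], [0.5], [1.0]])
        widths = np.array([0.3, 0.3, 0.3])
        weights = np.array([0.2, 1.0, -0.5])
        print(rbf_forward(np.array([0.4]), centers, widths, weights))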

  20. Performance of 12 DIR algorithms in low-contrast regions for mass and density conserving deformation

    SciTech Connect

    Yeo, U. J.; Supple, J. R.; Franich, R. D.; Taylor, M. L.; Smith, R.; Kron, T.

    2013-10-15

    Purpose: Deformable image registration (DIR) has become a key tool for adaptive radiotherapy to account for inter- and intrafraction organ deformation. Of contemporary interest, the application to deformable dose accumulation requires accurate deformation even in low contrast regions where dose gradients may exist within near-uniform tissues. One expects high-contrast features to generally be deformed more accurately by DIR algorithms. The authors systematically assess the accuracy of 12 DIR algorithms and quantitatively examine, in particular, low-contrast regions, where accuracy has not previously been established. Methods: This work investigates DIR algorithms in three dimensions using deformable gel (DEFGEL) [U. J. Yeo, M. L. Taylor, L. Dunn, R. L. Smith, T. Kron, and R. D. Franich, “A novel methodology for 3D deformable dosimetry,” Med. Phys. 39, 2203–2213 (2012)], for application to mass- and density-conserving deformations. CT images of DEFGEL phantoms with 16 fiducial markers (FMs) implanted were acquired in deformed and undeformed states for three different representative deformation geometries. Nonrigid image registration was performed using 12 common algorithms in the public domain. The optimum parameter setup was identified for each algorithm and each was tested for deformation accuracy in three scenarios: (I) original images of the DEFGEL with 16 FMs; (II) images with eight of the FMs mathematically erased; and (III) images with all FMs mathematically erased. The deformation vector fields obtained for scenarios II and III were then applied to the original images containing all 16 FMs. The locations of the FMs estimated by the algorithms were compared to actual locations determined by CT imaging. The accuracy of the algorithms was assessed by evaluation of three-dimensional vectors between true marker locations and predicted marker locations. Results: The mean magnitude of 16 error vectors per sample ranged from 0.3 to 3.7, 1.0 to 6.3, and 1.3 to 7

  1. Significant differences in pediatric psychotropic side effects: Implications for school performance.

    PubMed

    Kubiszyn, Thomas; Mire, Sarah; Dutt, Sonia; Papathopoulos, Katina; Burridge, Andrea Backsheider

    2012-03-01

    Some side effects (SEs) of increasingly prescribed psychotropic medications can impact student performance in school. SE risk varies, even among drugs from the same class (e.g., antidepressants). Knowing which SEs occur significantly more often than others may enable school psychologists to enhance collaborative risk-benefit analysis, medication monitoring, and data-based decision-making, and to inform mitigation efforts. SE data from the Full Prescribing Information (PI) on the FDA website for ADHD drugs, atypical antipsychotics, and antidepressants with pediatric indications were analyzed. Risk ratios (RRs) are reported for each drug within a category compared with placebo. RR tables and graphs inform the reader about SE incidence differences for each drug and provide clear evidence of the wide variability in SE incidence in the FDA data. Breslow-Day and Cochran-Mantel-Haenszel methods were used to test for drug-placebo SE differences and for significance across drugs within each category based on odds ratios (ORs). Significant drug-placebo differences were found for each drug compared with placebo, when odds were pooled across all drugs in a category compared with placebo, and between some drugs within categories. Unexpectedly, many large RR differences did not reach significance. Potential explanations are offered, including limitations of the FDA data sets and statistical and methodological issues. Future research directions are offered. The potential impact of certain SEs on school performance, mitigation strategies, and the potential role of the school psychologist are discussed, with consideration for ethical and legal limitations. PMID:22582933
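
    As a worked illustration of the risk-ratio and odds-ratio comparison described above, the sketch below computes RR, OR, an approximate 95% confidence interval, and a simple chi-square test from a single made-up 2x2 table; the counts are invented, and the chi-square test stands in for the Breslow-Day and Cochran-Mantel-Haenszel procedures used in the paper.

```python
# RR and OR for one hypothetical drug-vs-placebo side-effect table.
import numpy as np
from scipy import stats

def rr_and_or(a, b, c, d):
    """a/b: SE yes/no on drug; c/d: SE yes/no on placebo (illustrative counts)."""
    risk_drug, risk_placebo = a / (a + b), c / (c + d)
    rr = risk_drug / risk_placebo
    odds_ratio = (a * d) / (b * c)
    # 95% CI for ln(OR) via the standard large-sample approximation.
    se = np.sqrt(1/a + 1/b + 1/c + 1/d)
    lo, hi = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se)
    # Chi-square test of drug vs. placebo SE incidence.
    chi2, p, _, _ = stats.chi2_contingency([[a, b], [c, d]])
    return rr, odds_ratio, (lo, hi), p

print(rr_and_or(a=30, b=170, c=12, d=188))
```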

  2. The FPGA realization of a real-time Bayer image restoration algorithm with better performance

    NASA Astrophysics Data System (ADS)

    Ma, Huaping; Liu, Shuang; Zhou, Jiangyong; Tang, Zunlie; Deng, Qilin; Zhang, Hongliu

    2014-11-01

    As FPGA implementations of Bayer color interpolation algorithms have become widespread, users increasingly demand better performance, real-time processing, and lower resource consumption. To achieve high-speed, high-quality Bayer image restoration with low resource consumption, this article designs and optimizes the color reconstruction from both the interpolation algorithm and its FPGA realization. The hardware implementation is then completed on an FPGA development platform, achieving real-time, high-fidelity image processing with low resource consumption in embedded image acquisition systems.
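
    For reference, the following is a plain software sketch of bilinear interpolation on an RGGB Bayer mosaic; it is a generic baseline useful for checking a hardware design, not the optimized reconstruction implemented on the FPGA in this article.

```python
# Bilinear demosaicing of an RGGB Bayer mosaic via sparse masks and small kernels.
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """raw: 2-D float array in RGGB order; returns an H x W x 3 RGB image."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0   # red/blue interpolation
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0    # green interpolation

    def interp(mask, kernel):
        return convolve(raw * mask, kernel, mode="mirror")

    return np.dstack([interp(r_mask, k_rb), interp(g_mask, k_g), interp(b_mask, k_rb)])
```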

  3. Global Precipitation Measurement (GPM) Microwave Imager Falling Snow Retrieval Algorithm Performance

    NASA Astrophysics Data System (ADS)

    Skofronick Jackson, Gail; Munchak, Stephen J.; Johnson, Benjamin T.

    2015-04-01

    Retrievals of falling snow from space represent an important data set for understanding the Earth's atmospheric, hydrological, and energy cycles. While satellite-based remote sensing provides global coverage of falling snow events, the science is relatively new and retrievals are still undergoing development with challenges and uncertainties remaining. This work reports on the development and post-launch testing of retrieval algorithms for the NASA Global Precipitation Measurement (GPM) mission Core Observatory satellite launched in February 2014. In particular, we will report on GPM Microwave Imager (GMI) radiometer instrument algorithm performance with respect to falling snow detection and estimation. Since GPM's launch, the at-launch GMI precipitation algorithms, based on a Bayesian framework, have been used with the new GPM data. The at-launch database is generated using proxy satellite data merged with surface measurements (instead of models). One year after launch, the Bayesian database will begin to be replaced with the more realistic observational data from the GPM spacecraft radar retrievals and GMI data. It is expected that the observational database will be much more accurate for falling snow retrievals because that database will take full advantage of the 166 and 183 GHz snow-sensitive channels. Furthermore, much retrieval algorithm work has been done to improve GPM retrievals over land. The Bayesian framework for GMI retrievals is dependent on the a priori database used in the algorithm and how profiles are selected from that database. Thus, a land classification sorts land surfaces into ~15 different categories for surface-specific databases (radiometer brightness temperatures are quite dependent on surface characteristics). In addition, our work has shown that knowing if the land surface is snow-covered, or not, can improve the performance of the algorithm. Improvements were made to the algorithm that allow for daily inputs of ancillary snow cover
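
    The toy sketch below illustrates the Bayesian a priori database idea described above: each database profile is weighted by the Gaussian likelihood of its simulated brightness temperatures given the observation, and the retrieved snowfall rate is the weighted mean. The channel set, error assumptions, and database values are invented for illustration and do not represent the GPM a priori database.

```python
# Toy Bayesian database retrieval of snowfall rate from brightness temperatures (Tb).
import numpy as np

rng = np.random.default_rng(0)
n_profiles, n_channels = 5000, 4           # e.g. high-frequency snow-sensitive channels (assumed)
db_tb = 180 + 60 * rng.random((n_profiles, n_channels))      # simulated Tb database [K]
db_snow = rng.gamma(shape=1.5, scale=0.4, size=n_profiles)   # associated snowfall rates [mm/h]
obs_tb = np.array([215.0, 205.0, 230.0, 225.0])              # observed Tb [K]
sigma = np.array([2.0, 2.0, 3.0, 3.0])                       # assumed channel errors [K]

# Gaussian likelihood of each database profile given the observation.
log_w = -0.5 * np.sum(((db_tb - obs_tb) / sigma) ** 2, axis=1)
w = np.exp(log_w - log_w.max())
w /= w.sum()

retrieved_rate = np.sum(w * db_snow)        # posterior-mean snowfall rate
print(f"retrieved snowfall rate ~ {retrieved_rate:.2f} mm/h")
```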

  4. Performance evaluation of dynamic assembly period algorithm in TCP over OBS networks

    NASA Astrophysics Data System (ADS)

    Peng, Shuping; Li, Zhengbin; He, Yongqi; Xu, Anshi

    2007-11-01

    Dynamic Assembly Period (DAP) is a novel assembly algorithm based on the dynamic TCP window. The assembly algorithm can track the variation of the current TCP window caused by burst loss events and update the assembly period dynamically for the next assembly. The analytical model provides the theoretical foundation for the proposed assembly algorithm. Several TCP flavors, such as Default, Tahoe, Reno, New Reno, and SACK, have been proposed to enhance TCP performance and are adopted in the current Internet. In this paper, we evaluated the performance of DAP under these different TCP flavors. The simulation results show that the performance of DAP under the Default TCP flavor is the best. The differences in DAP performance across flavors are correlated with the internal mechanisms of the flavors. We also compared the performance of DAP and FAP under the same TCP flavor, and the results indicate that DAP performs better than FAP over a wide range of burst loss rates.

  5. Benchmark for Peak Detection Algorithms in Fiber Bragg Grating Interrogation and a New Neural Network for its Performance Improvement

    PubMed Central

    Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander

    2011-01-01

    This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a newly proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
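
    Minimal versions of the two classical detectors mentioned above, the centroid and the Gaussian fit, are sketched below on a synthetic FBG reflection spectrum; the wavelength range, bandwidth, and noise level are illustrative assumptions, not the benchmark conditions of the paper.

```python
# Centroid and Gaussian-fit peak detection on a synthetic FBG reflection spectrum.
import numpy as np
from scipy.optimize import curve_fit

def centroid_peak(wl, refl):
    """Intensity-weighted centroid of the spectrum."""
    return np.sum(wl * refl) / np.sum(refl)

def gaussian_peak(wl, refl):
    """Bragg wavelength from a least-squares Gaussian fit."""
    gauss = lambda x, a, mu, sig, off: a * np.exp(-0.5 * ((x - mu) / sig) ** 2) + off
    p0 = [refl.max(), wl[np.argmax(refl)], 0.1, refl.min()]
    popt, _ = curve_fit(gauss, wl, refl, p0=p0)
    return popt[1]

wl = np.linspace(1549.0, 1551.0, 500)                 # wavelength grid [nm]
true_bragg = 1550.123
refl = np.exp(-0.5 * ((wl - true_bragg) / 0.08) ** 2) + 0.01 * np.random.randn(wl.size)
print(centroid_peak(wl, refl), gaussian_peak(wl, refl))
```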

  6. Assessing SWOT discharge algorithms performance across a range of river types

    NASA Astrophysics Data System (ADS)

    Durand, M. T.; Smith, L. C.; Gleason, C. J.; Bjerklie, D. M.; Garambois, P. A.; Roux, H.

    2014-12-01

    Scheduled for launch in 2020, the Surface Water and Ocean Topography (SWOT) satellite mission will measure river height, width, and slope globally, as well as characterize storage change in lakes and ocean surface dynamics. Four discharge algorithms have been formulated to solve the inverse problem of estimating river discharge from SWOT observations. Three of these approaches are based on Manning's equation, while the fourth utilizes at-many-stations hydraulic geometry relating width and discharge. In all cases, SWOT will provide some but not all of the information required to estimate discharge, so the focus of the inverse approaches is estimation of the unknown parameters. The algorithms use a range of a priori information. This paper will generate synthetic measurements of height, width, and slope for a number of rivers, including reaches of the Sacramento, Ohio, Mississippi, Platte, Amazon, Garonne, Po, Severn, St. Lawrence, and Tanana. These rivers have a wide range of flows, geometries, hydraulic regimes, floodplain interactions, and planforms. One-year synthetic datasets will be generated in each case. We will add white noise to the simulated quantities and generate scenarios with different repeat times. The focus will be on retrievability of the hydraulic parameters across a range of space-time sampling, rather than on the ability to retrieve them under the specific SWOT orbit. We will focus on several specific research questions affecting algorithm performance, including river characteristics, temporal sampling, and algorithm accuracy. The overall goal is to be able to predict which algorithms will work better for different kinds of rivers, and potentially to combine the outputs of the various algorithms to obtain more robust estimates. Preliminary results on the Sacramento River indicate that all algorithms perform well for this single-channel river with diffusive hydraulics, with relative RMSE values ranging from 9% to 26% for the various algorithms. Preliminary
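
    A worked form of the Manning-type relation underlying three of the four algorithms is sketched below: discharge from width, depth, slope, and a roughness coefficient. The reach values are arbitrary, not SWOT or Sacramento River data; note that width and slope are SWOT-like observables while depth and roughness are the unknowns the algorithms must infer.

```python
# Manning's equation for a wide rectangular channel: Q = (1/n) * A * R^(2/3) * S^(1/2).
def manning_discharge(width_m, depth_m, slope, n=0.03):
    area = width_m * depth_m                             # cross-sectional area A [m^2]
    hydraulic_radius = area / (width_m + 2 * depth_m)    # R = A / wetted perimeter [m]
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# Width and slope would come from SWOT; depth and n must be estimated.
print(f"Q ~ {manning_discharge(width_m=150.0, depth_m=3.0, slope=1e-4, n=0.03):.0f} m^3/s")
```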

  7. Measuring localization performance of super-resolution algorithms on very active samples.

    PubMed

    Wolter, Steve; Endesfelder, Ulrike; van de Linde, Sebastian; Heilemann, Mike; Sauer, Markus

    2011-04-11

    Super-resolution fluorescence imaging based on single-molecule localization relies critically on the availability of efficient processing algorithms to distinguish, identify, and localize emissions of single fluorophores. In multiple current applications, such as three-dimensional, time-resolved or cluster imaging, high densities of fluorophore emissions are common. Here, we provide an analytic tool to test the performance and quality of localization microscopy algorithms and demonstrate that common algorithms encounter difficulties for samples with high fluorophore density. We demonstrate that, for typical single-molecule localization microscopy methods such as dSTORM and the commonly used rapidSTORM scheme, computational precision limits the acceptable density of concurrently active fluorophores to 0.6 per square micrometer and that the number of successfully localized fluorophores per frame is limited to 0.2 per square micrometer. PMID:21503016

  8. A Hybrid Neural Network-Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2001-01-01

    In this paper, a model-based diagnostic method, which utilizes Neural Networks and Genetic Algorithms, is investigated. Neural networks are applied to estimate the engine internal health, and Genetic Algorithms are applied for sensor bias detection and estimation. This hybrid approach takes advantage of the nonlinear estimation capability provided by neural networks while improving the robustness to measurement uncertainty through the application of Genetic Algorithms. The hybrid diagnostic technique also has the ability to rank multiple potential solutions for a given set of anomalous sensor measurements in order to reduce false alarms and missed detections. The performance of the hybrid diagnostic technique is evaluated through some case studies derived from a turbofan engine simulation. The results show this approach is promising for reliable diagnostics of aircraft engines.

  9. Using modified fruit fly optimisation algorithm to perform the function test and case studies

    NASA Astrophysics Data System (ADS)

    Pan, Wen-Tsao

    2013-06-01

    Evolutionary computation is a common research method that simulates natural evolutionary processes based on Darwinian theory, and it has grown to include concepts of animal foraging and group behaviour. The main contribution of this paper is to strengthen the ability of the fruit fly optimization algorithm (FOA) to search for the optimal solution and avoid becoming trapped in local extrema. This study discussed three common evolutionary computation methods and compared them with the modified fruit fly optimization algorithm (MFOA), examining their ability to find the extreme values of three mathematical functions, their execution speed, and the forecast ability of forecasting models built using the optimised general regression neural network (GRNN) parameters. The findings indicated that there was no obvious difference between particle swarm optimization and the MFOA in the ability to compute extreme values; however, both were better than the artificial fish swarm algorithm and the FOA. In addition, the MFOA performed better than particle swarm optimization in algorithm execution speed, and the forecast ability of the forecasting model built using the MFOA's GRNN parameters was better than that of the other three forecasting models.
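
    For orientation, the sketch below is a bare-bones version of the basic FOA (not the MFOA modifications introduced in this paper): flies take random flights around the current swarm location, the smell concentration is judged as the reciprocal of the distance to the origin, and the swarm follows the best-smelling fly. Population size, flight range, and the test function are arbitrary assumptions.

```python
# Minimal fruit fly optimization (FOA) for a 1-D test function.
import numpy as np

def foa(fitness, pop=30, iters=200, seed=1):
    rng = np.random.default_rng(seed)
    x_axis, y_axis = rng.uniform(-1, 1, 2)        # initial swarm location
    best_s, best_val = None, np.inf
    for _ in range(iters):
        # Random flight of each fly around the current swarm location.
        x = x_axis + rng.uniform(-1, 1, pop)
        y = y_axis + rng.uniform(-1, 1, pop)
        d = np.sqrt(x**2 + y**2) + 1e-12          # distance to the origin
        s = 1.0 / d                               # smell concentration judgment value
        vals = fitness(s)
        i = np.argmin(vals)
        if vals[i] < best_val:                    # swarm follows the best-smelling fly
            best_val, best_s = vals[i], s[i]
            x_axis, y_axis = x[i], y[i]
    return best_s, best_val

# Example: find s minimizing (s - 3)^2; the optimum is s = 3.
print(foa(lambda s: (s - 3.0) ** 2))
```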

  10. Algorithms for Performance, Dependability, and Performability Evaluation using Stochastic Activity Networks

    NASA Technical Reports Server (NTRS)

    Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.

    1997-01-01

    Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling with Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state space explosion, where the number of states increases exponentially with the size of the high-level model. Thus, the corresponding Markov model becomes prohibitively large and its solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that can tolerate such large state spaces and that do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on the fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally efficient technique for Gauss-Seidel based solvers that avoids the need for generating rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
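
    A sketch of the "on-the-fly" idea described above: a Gauss-Seidel iteration in which rows of the matrix are produced by a callback instead of being stored. The tridiagonal row generator is a toy stand-in for a stochastic-activity-network state-space generator, which would be far more involved, and this is plain Gauss-Seidel, not the paper's modified adaptive variant.

```python
# Matrix-free Gauss-Seidel: rows of A are generated on demand, never stored.
import numpy as np

def row_of_A(i, n):
    """Return the nonzero entries of row i as (column, value) pairs (toy tridiagonal A)."""
    entries = [(i, 2.0)]
    if i > 0:
        entries.append((i - 1, -1.0))
    if i < n - 1:
        entries.append((i + 1, -1.0))
    return entries

def gauss_seidel_on_the_fly(n, b, sweeps=200):
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):
            diag, acc = 0.0, 0.0
            for j, a_ij in row_of_A(i, n):     # row produced on the fly
                if j == i:
                    diag = a_ij
                else:
                    acc += a_ij * x[j]
            x[i] = (b[i] - acc) / diag
    return x

n = 50
print(gauss_seidel_on_the_fly(n, b=np.ones(n))[:5])
```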

  11. Significantly Increasing the Ductility of High Performance Polymer Semiconductors through Polymer Blending.

    PubMed

    Scott, Joshua I; Xue, Xiao; Wang, Ming; Kline, R Joseph; Hoffman, Benjamin C; Dougherty, Daniel; Zhou, Chuanzhen; Bazan, Guillermo; O'Connor, Brendan T

    2016-06-01

    Polymer semiconductors based on donor-acceptor monomers have recently resulted in significant gains in field effect mobility in organic thin film transistors (OTFTs). These polymers incorporate fused aromatic rings and have been designed to have stiff planar backbones, resulting in strong intermolecular interactions, which subsequently result in stiff and brittle films. The complex synthesis typically required for these materials may also result in increased production costs. Thus, the development of methods to improve mechanical plasticity while lowering material consumption during fabrication will significantly improve opportunities for adoption in flexible and stretchable electronics. To achieve these goals, we consider blending a brittle donor-acceptor polymer, poly[4-(4,4-dihexadecyl-4H-cyclopenta[1,2-b:5,4-b']dithiophen-2-yl)-alt-[1,2,5]thiadiazolo[3,4-c]pyridine] (PCDTPT), with ductile poly(3-hexylthiophene). We found that the ductility of the blend films is significantly improved compared to that of neat PCDTPT films, and when the blend film is employed in an OTFT, the performance is largely maintained. The ability to maintain charge transport character is due to vertical segregation within the blend, while the improved ductility is due to intermixing of the polymers throughout the film thickness. Importantly, the application of large strains to the ductile films is shown to orient both polymers, which further increases charge carrier mobility. These results highlight a processing approach to achieve high performance polymer OTFTs that are electrically and mechanically optimized. PMID:27200458

  12. Proper nozzle location, bit profile, and cutter arrangement affect PDC-bit performance significantly

    SciTech Connect

    Garcia-Gavito, D.; Azar, J.J.

    1994-09-01

    During the past 20 years, the drilling industry has looked to new technology to halt the exponentially increasing costs of drilling oil, gas, and geothermal wells. This technology includes bit design innovations to improve overall drilling performance and reduce drilling costs. These innovations include development of drag bits that use PDC cutters, also called PDC bits, to drill long, continuous intervals of soft to medium-hard formations more economically than conventional three-cone roller-cone bits. The cost advantage is the result of higher rates of penetration (ROP's) and longer bit life obtained with the PDC bits. An experimental study comparing the effects of polycrystalline-diamond-compact (PDC)-bit design features on the dynamic pressure distribution at the bit/rock interface was conducted on a full-scale drilling rig. Results showed that nozzle location, bit profile, and cutter arrangement are significant factors in PDC-bit performance.

  13. Significantly enhanced robustness and electrochemical performance of flexible carbon nanotube-based supercapacitors by electrodepositing polypyrrole

    NASA Astrophysics Data System (ADS)

    Chen, Yanli; Du, Lianhuan; Yang, Peihua; Sun, Peng; Yu, Xiang; Mai, Wenjie

    2015-08-01

    Here, we report robust, flexible CNT-based supercapacitor (SC) electrodes fabricated by electrodepositing polypyrrole (PPy) on freestanding vacuum-filtered CNT film. These electrodes demonstrate significantly improved mechanical properties (with an ultimate tensile strength of 16 MPa) and greatly enhanced electrochemical performance (5.6 times larger areal capacitance). The major drawback of conductive polymer electrodes is fast capacitance decay caused by structural breakdown, which decreases cycling stability; however, this is not observed in our case. All-solid-state SCs assembled with the robust CNT/PPy electrodes exhibit excellent flexibility, long lifetime (95% capacitance retention after 10,000 cycles) and high electrochemical performance (a total device volumetric capacitance of 4.9 F/cm3). Moreover, a flexible SC pack is demonstrated to light up 53 LEDs or drive a digital watch, indicating the broad potential application of our SCs in portable/wearable electronics.

  14. Significantly improving electromagnetic performance of nanopaper and its shape-memory nanocomposite by aligned carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Lu, Haibao; Gou, Jan

    2012-04-01

    A new nanopaper that exhibits exciting electrical and electromagnetic performance is fabricated by incorporating magnetically aligned carbon nanotubes (CNTs) with carbon nanofibers (CNFs). Electromagnetic CNTs were blended with and aligned into the nanopaper using a magnetic field, to significantly improve the electrical and electromagnetic performance of the nanopaper and its enabled shape-memory polymer (SMP) composite. The morphology and structure of the aligned CNT arrays in the nanopaper were characterized with scanning electron microscopy (SEM). A continuous and compact network of CNFs and aligned CNTs indicated that the nanopaper could have highly conductive properties. Furthermore, the electromagnetic interference (EMI) shielding efficiency of the SMP composites with different weight contents of aligned CNT arrays was characterized. Finally, the aligned CNT arrays in the nanopapers were employed to achieve electrical actuation and accelerate the recovery speed of the SMP composites.

  15. Quantitative performance evaluation of a blurring restoration algorithm based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Greco, Mario; Huebner, Claudia; Marchi, Gabriele

    2008-10-01

    In the field of blind image deconvolution a new promising algorithm, based on Principal Component Analysis (PCA), has recently been proposed in the literature. The main advantages of the algorithm are the following: its computational complexity is generally lower than that of other deconvolution techniques (e.g., the widely used Iterative Blind Deconvolution, IBD, method); it is robust to white noise; and only the support of the blurring point spread function is required to perform single-observation deconvolution (i.e., when a single degraded observation of a scene is available), while multiple-observation deconvolution (i.e., when multiple degraded observations of a scene are available) is completely unsupervised. The effectiveness of the PCA-based restoration algorithm has so far been confirmed only by visual inspection and, to the best of our knowledge, no objective image quality assessment has been performed. In this paper a generalization of the original algorithm is proposed; this previously unexplored issue is then considered and the achieved results are compared with those of the IBD method, which is used as a benchmark.

  16. Algorithmic, LOCS and HOCS (chemistry) exam questions: performance and attitudes of college students

    NASA Astrophysics Data System (ADS)

    Zoller, Uri

    2002-02-01

    The performance of freshman biology and physics-mathematics majors, chemistry majors, and pre- and in-service chemistry teachers in two Israeli universities on algorithmic (ALG), lower-order cognitive skills (LOCS), and higher-order cognitive skills (HOCS) chemistry exam questions was studied. The driving force for the study was an interest in moving science and chemistry instruction from an algorithmic and factual recall orientation dominated by LOCS to a decision-making, problem-solving and critical system thinking approach dominated by HOCS. College students' responses to the specially designed ALG, LOCS and HOCS chemistry exam questions were scored and analysed for differences and correlations between the performance means within and across universities by question category. This was followed by a combined student interview and 'speaking aloud' problem-solving session for assessing the thinking processes involved in solving these types of questions and the students' attitudes towards them. The main findings were: (1) students in both universities performed consistently in each of the three categories in the order ALG > LOCS > HOCS; their 'ideological' preference was HOCS > algorithmic/LOCS (referred to as 'computational questions'), but their pragmatic preference was the reverse; (2) success on algorithmic/LOCS questions does not imply success on HOCS questions; algorithmic questions constitute a category of their own as far as students' success in solving them is concerned. Our study and its results support the effort being made, worldwide, to integrate HOCS-fostering teaching and assessment strategies and to develop HOCS-oriented science-technology-environment-society (STES)-type curricula within science and chemistry education.

  17. Performance Evaluation of Different Ground Filtering Algorithms for Uav-Based Point Clouds

    NASA Astrophysics Data System (ADS)

    Serifoglu, C.; Gungor, O.; Yilmaz, V.

    2016-06-01

    Digital Elevation Model (DEM) generation is one of the leading application areas in geomatics. Since a DEM represents the bare earth surface, the very first step of generating a DEM is to separate the ground and non-ground points, which is called ground filtering. Once the point cloud is filtered, the ground points are interpolated to generate the DEM. LiDAR (Light Detection and Ranging) point clouds have been used in many applications thanks to their success in representing the objects they belong to. Hence, various ground filtering algorithms have been reported in the literature to filter LiDAR data. Since LiDAR data acquisition is still a costly process, using point clouds generated from UAV images to produce DEMs is a reasonable alternative. In this study, point clouds with three different densities were generated from aerial photos taken from a UAV (Unmanned Aerial Vehicle) to examine the effect of point density on filtering performance. The point clouds were then filtered by means of five different ground filtering algorithms: Progressive Morphological 1D (PM1D), Progressive Morphological 2D (PM2D), Maximum Local Slope (MLS), Elevation Threshold with Expand Window (ETEW) and Adaptive TIN (ATIN). The filtering performance of each algorithm was investigated qualitatively and quantitatively. The results indicated that the ATIN and PM2D algorithms showed the best overall ground filtering performance, while the MLS and ETEW algorithms were the least successful. It was concluded that point clouds generated from UAVs can be a good alternative to LiDAR data.
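
    The sketch below is a deliberately crude grid-based elevation-threshold filter, shown only to make the ground/non-ground separation step concrete; it is much simpler than the five algorithms compared in the study, and the cell size and height tolerance are arbitrary assumptions.

```python
# Label a point as ground if it lies within dz of the lowest point in its grid cell.
import numpy as np

def simple_ground_filter(points, cell=2.0, dz=0.3):
    """points: (N, 3) array of x, y, z; returns a boolean mask of ground points."""
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    keys = ij[:, 0] * 1_000_003 + ij[:, 1]        # combine cell indices into one key
    ground = np.zeros(len(points), dtype=bool)
    for key in np.unique(keys):
        sel = keys == key
        zmin = points[sel, 2].min()
        ground[sel] = points[sel, 2] <= zmin + dz
    return ground

pts = np.random.rand(10_000, 3) * [100, 100, 5]   # synthetic point cloud [m]
print("ground points:", simple_ground_filter(pts).sum())
```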

  18. Subjective Significance Shapes Arousal Effects on Modified Stroop Task Performance: A Duality of Activation Mechanisms Account.

    PubMed

    Imbir, Kamil K

    2016-01-01

    Activation mechanisms such as arousal are known to be responsible for the slowdown observed in the Emotional Stroop and modified Stroop tasks. Using the duality-of-mind perspective, we may conclude that both ways of processing information (automatic or controlled) should have their own mechanisms of activation, namely, arousal for an experiential mind and subjective significance for a rational mind. To investigate the consequences of both, a factorial manipulation was prepared. Other factors that influence Stroop task processing, such as valence, concreteness, frequency, and word length, were controlled. Subjective significance was expected to influence arousal effects. In the first study, the task was to name the font color of activation-charged words. In the second study, activation-charged words were, at the same time, combined with an incongruent condition of the classical Stroop task around a fixation point, and the task was to indicate the font color for color-meaning words. In both studies, subjective significance was found to shape the arousal impact on performance in terms of the slowdown reduction for words charged with subjective significance. PMID:26869974

  20. Development of Analytical Algorithm for the Performance Analysis of Power Train System of an Electric Vehicle

    NASA Astrophysics Data System (ADS)

    Kim, Chul-Ho; Lee, Kee-Man; Lee, Sang-Heon

    Power train system design is one of the key R&D areas in the development of a new automobile, because the system design yields an optimally sized engine with an adaptable power transmission that can meet the design requirements of the new vehicle. For electric vehicle design in particular, a very reliable power train design algorithm is required to achieve energy efficiency. In this study, an analytical simulation algorithm is developed to estimate the driving performance of a designed power train system of an electric vehicle. The principal theory behind the simulation algorithm is conservation of energy, combined with analytical and experimental data such as rolling resistance, aerodynamic drag, and the mechanical efficiency of the power transmission. From the analytical calculation results, the running resistance of the designed vehicle is obtained as a function of operating conditions such as road inclination angle and vehicle speed. The tractive performance of the model vehicle with a given power train system is also calculated at each gear ratio of the transmission. Through analysis of these two calculation results, running resistance and tractive performance, the driving performance of the designed electric vehicle is estimated, and this estimate is used to evaluate the adaptability of the designed power train system for the vehicle.
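
    A worked form of the force balance described above is sketched below: running resistance as a function of grade and speed, and tractive force at the wheels for one gear ratio. All vehicle parameters (mass, rolling resistance coefficient, drag coefficient, frontal area, gear ratios, efficiency, wheel radius) are illustrative assumptions.

```python
# Running resistance and tractive force for an illustrative electric vehicle.
import math

def running_resistance(v_mps, grade_rad, mass=1500.0, f_roll=0.012,
                       rho=1.2, cd=0.30, area=2.2, g=9.81):
    rolling = f_roll * mass * g * math.cos(grade_rad)   # rolling resistance [N]
    climbing = mass * g * math.sin(grade_rad)           # grade resistance [N]
    aero = 0.5 * rho * cd * area * v_mps ** 2           # aerodynamic drag [N]
    return rolling + climbing + aero

def tractive_force(motor_torque, gear_ratio, final_drive, wheel_radius=0.3, eff=0.92):
    return motor_torque * gear_ratio * final_drive * eff / wheel_radius   # [N]

v = 100 / 3.6                                           # 100 km/h in m/s
print("resistance:", round(running_resistance(v, math.radians(3))), "N")
print("tractive  :", round(tractive_force(250, 1.0, 7.4)), "N")
```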

  1. Focused R&D For Electrochromic Smart Windows: Significant Performance and Yield Enhancements

    SciTech Connect

    Mark Burdis; Neil Sbar

    2003-01-31

    There is a need to improve the energy efficiency of building envelopes as they are the primary factor governing the heating, cooling, lighting and ventilation requirements of buildings, influencing 53% of building energy use. In particular, windows contribute significantly to the overall energy performance of building envelopes, thus there is a need to develop advanced energy efficient window and glazing systems. Electrochromic (EC) windows represent the next generation of advanced glazing technology that will (1) reduce the energy consumed in buildings, (2) improve the overall comfort of the building occupants, and (3) improve the thermal performance of the building envelope. "Switchable" EC windows provide, on demand, dynamic control of visible light, solar heat gain, and glare without blocking the view. As exterior light levels change, the window's performance can be electronically adjusted to suit conditions. A schematic illustrating how SageGlass® electrochromic windows work is shown in Figure I.1. SageGlass® EC glazings offer the potential to save cooling and lighting costs, with the added benefit of improving thermal and visual comfort. Control over solar heat gain will also result in the use of smaller HVAC equipment. If a step change in the energy efficiency and performance of buildings is to be achieved, there is a clear need to bring EC technology to the marketplace. This project addresses accelerating the widespread introduction of EC windows in buildings and thus maximizing total energy savings in the U.S. and worldwide. We report on R&D activities to improve the optical performance needed to broadly penetrate the full range of architectural markets. Also, processing enhancements have been implemented to reduce manufacturing costs. Finally, tests are being conducted to demonstrate the durability of the EC device and the dual pane insulating glass unit (IGU) to be at least equal to that of conventional windows.

  2. Behavioral Change and Building Performance: Strategies for Significant, Persistent, and Measurable Institutional Change

    SciTech Connect

    Wolfe, Amy K.; Malone, Elizabeth L.; Heerwagen, Judith H.; Dion, Jerome P.

    2014-04-01

    The people who use Federal buildings — Federal employees, operations and maintenance staff, and the general public — can significantly impact a building’s environmental performance and the consumption of energy, water, and materials. Many factors influence building occupants’ use of resources (use behaviors), including work process requirements; the ability to fulfill agency missions; new and possibly unfamiliar high-efficiency/high-performance building technologies; a lack of understanding, education, and training; inaccessible information or ineffective feedback mechanisms; and cultural norms and institutional rules and requirements, among others. While many strategies have been used to introduce new occupant use behaviors that promote sustainability and reduced resource consumption, few have been verified in the scientific literature or have properly documented case study results. This paper documents validated strategies that have been shown to encourage new use behaviors that can result in significant, persistent, and measurable reductions in resource consumption. From the peer-reviewed literature, the paper identifies relevant strategies for Federal facilities and commercial buildings that focus on the individual, groups of individuals (e.g., work groups), and institutions — their policies, requirements, and culture. The paper documents methods with evidence of success in changing use behaviors and enabling occupants to effectively interact with new technologies/designs. It also provides a case study of the strategies used at a Federal facility — Fort Carson, Colorado. The paper documents gaps in the current literature and approaches, and provides topics for future research.

  3. Performance Analysis of the ertPS Algorithm and Enhanced ertPS Algorithm for VoIP Services in IEEE 802.16e Systems

    NASA Astrophysics Data System (ADS)

    Kim, Bong Joo; Hwang, Gang Uk

    In this paper, we analyze the extended real-time Polling Service (ertPS) algorithm in IEEE 802.16e systems, which is designed to support Voice-over-Internet-Protocol (VoIP) services with data packets of various sizes and silence suppression. The analysis uses a two-dimensional Markov Chain, where the grant size and the voice packet state are considered, and an approximation formula for the total throughput in the ertPS algorithm is derived. Next, to improve the performance of the ertPS algorithm, we propose an enhanced uplink resource allocation algorithm, called the e2rtPS algorithm, for VoIP services in IEEE 802.16e systems. The e2rtPS algorithm considers the queue status information and tries to alleviate the queue congestion as soon as possible by using remaining network resources. Numerical results are provided to show the accuracy of the approximation analysis for the ertPS algorithm and to verify the effectiveness of the e2rtPS algorithm.

  4. An algorithm for automatic measurement of stimulation thresholds: clinical performance and preliminary results.

    PubMed

    Danilovic, D; Ohm, O J; Stroebel, J; Breivik, K; Hoff, P I; Markowitz, T

    1998-05-01

    We have developed an algorithmic method for automatic determination of stimulation thresholds in both cardiac chambers in patients with intact atrioventricular (AV) conduction. The algorithm utilizes ventricular sensing, may be used with any type of pacing leads, and may be downloaded via telemetry links into already implanted dual-chamber Thera pacemakers. Thresholds are determined with 0.5 V amplitude and 0.06 ms pulse-width resolution in unipolar, bipolar, or both lead configurations, with a programmable sampling interval from 2 minutes to 48 hours. Measured values are stored in the pacemaker memory for later retrieval and do not influence permanent output settings. The algorithm was intended to gather information on continuous behavior of stimulation thresholds, which is important in the formation of strategies for programming pacemaker outputs. Clinical performance of the algorithm was evaluated in eight patients who received bipolar tined steroid-eluting leads and were observed for a mean of 5.1 months. Patient safety was not compromised by the algorithm, except for the possibility of pacing during the physiologic refractory period. Methods for discrimination of incorrect data points were developed and incorrect values were discarded. Fine resolution threshold measurements collected during this study indicated that: (1) there were great differences in magnitude of threshold peaking in different patients; (2) the initial intensive threshold peaking was usually followed by another less intensive but longer-lasting wave of threshold peaking; (3) the pattern of tissue reaction in the atrium appeared different from that in the ventricle; and (4) threshold peaking in the bipolar lead configuration was greater than in the unipolar configuration. The algorithm proved to be useful in studying ambulatory thresholds. PMID:9604237

  5. Experimental Investigation of the Performance of Image Registration and De-aliasing Algorithms

    NASA Astrophysics Data System (ADS)

    Crabtree, P.; Dao, P.

    Various image de-aliasing algorithms and techniques have been developed to improve the resolution of sensor-aliased images captured with an undersampled point spread function. In the literature these types of algorithms are sometimes included under the broad umbrella of superresolution. Image restoration is a more appropriate categorization for this work because we aim to restore image resolution lost due to sensor aliasing, but only up to the limit imposed by diffraction. Specifically, the work presented here is focused on image de-aliasing using microscanning. Much of the previous work in this area demonstrates improvement using simulated imagery, or using imagery for which the subpixel shifts are unknown and must be estimated. This paper takes an experimental approach to investigate performance in both the visible and long-wave infrared (LWIR) regions. Two linear translation stages are used to provide two-axis camera control via an RS-232 interface. The translation stages use stepper motors but also include a microstepping capability that allows discrete steps of approximately 0.1 microns. However, there are several types of position error associated with these devices. Therefore, the microstepping error is investigated and partially quantified prior to performing microscan image capture and processing. We also consider the impact of a less than 100% fill factor on algorithm performance. For the visible region we use a CMOS camera and a resolution target to generate a contrast transfer function (CTF) for both the raw and microscanned images. This allows modulation transfer function (MTF) estimation, which gives a more complete and quantitative description of performance as opposed to simply estimating the limiting resolution and/or relying on visual inspection. The difference between the MTF curves for the raw and microscanned images will be explored as a means to describe performance as a function of spatial frequency. Finally, our goal is to also demonstrate
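
    A toy illustration of 2x2 microscanning is sketched below: four low-resolution frames, each nominally shifted by half a detector pitch, are interleaved onto a finer grid. Real data would have to account for the imperfect, measured stage shifts discussed above; the frame sizes and ideal shifts here are assumptions.

```python
# Interleave four half-pixel-shifted frames into one 2x-resolution image.
import numpy as np

def microscan_2x2(frames):
    """frames: four equal-size arrays captured at (0,0), (0,.5), (.5,0), (.5,.5) pixel shifts."""
    h, w = frames[0].shape
    hi = np.zeros((2 * h, 2 * w))
    hi[0::2, 0::2] = frames[0]
    hi[0::2, 1::2] = frames[1]
    hi[1::2, 0::2] = frames[2]
    hi[1::2, 1::2] = frames[3]
    return hi

frames = [np.random.rand(64, 64) for _ in range(4)]
print(microscan_2x2(frames).shape)          # (128, 128)
```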

  6. 48 CFR 1553.216-70 - EPA Form 1900-41A, CPAF Contract Summary of Significant Performance Observation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Contract Summary of Significant Performance Observation. 1553.216-70 Section 1553.216-70 Federal... 1553.216-70 EPA Form 1900-41A, CPAF Contract Summary of Significant Performance Observation. As prescribed in 1516.404-278, EPA Form 1900-41A shall be used to document significant performance...

  7. 48 CFR 1553.216-70 - EPA Form 1900-41A, CPAF Contract Summary of Significant Performance Observation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Contract Summary of Significant Performance Observation. 1553.216-70 Section 1553.216-70 Federal... 1553.216-70 EPA Form 1900-41A, CPAF Contract Summary of Significant Performance Observation. As prescribed in 1516.404-278, EPA Form 1900-41A shall be used to document significant performance...

  8. Information theoretic bounds of ATR algorithm performance for sidescan sonar target classification

    NASA Astrophysics Data System (ADS)

    Myers, Vincent L.; Pinto, Marc A.

    2005-05-01

    With research on autonomous underwater vehicles for minehunting beginning to focus on cooperative and adaptive behaviours, some effort is being spent on developing automatic target recognition (ATR) algorithms that are able to operate with high reliability under a wide range of scenarios, particularly in areas of high clutter density, and without human supervision. Because of the great diversity of pattern recognition methods and continuously improving sensor technology, there is an acute requirement for objective performance measures that are independent of any particular sensor, algorithm or target definitions. This paper approaches the ATR problem from the point of view of information theory in an attempt to place bounds on the performance of target classification algorithms that are based on the acoustic shadow of proud targets. Performance is bounded by analysing the simplest of shape classification tasks, that of differentiating between a circular and square shadow, thus allowing us to isolate system design criteria and assess their effect on the overall probability of classification. The information that can be used for target recognition in sidescan sonar imagery is examined and common information theory relationships are used to derive properties of the ATR problem. Some common bounds with analytical solutions are also derived.

  9. Orion Guidance and Control Ascent Abort Algorithm Design and Performance Results

    NASA Technical Reports Server (NTRS)

    Proud, Ryan W.; Bendle, John R.; Tedesco, Mark B.; Hart, Jeremy J.

    2009-01-01

    During the ascent flight phase of NASA's Constellation Program, the Ares launch vehicle propels the Orion crew vehicle to an agreed-to insertion target. If a failure occurs at any point during ascent, then a system must be in place to abort the mission and return the crew to a safe landing with a high probability of success. To achieve continuous abort coverage one of two sets of effectors is used: either the Launch Abort System (LAS), consisting of the Attitude Control Motor (ACM) and the Abort Motor (AM), or the Service Module (SM), consisting of the SM Orion Main Engine (OME), Auxiliary (Aux) jets, and Reaction Control System (RCS) jets. The LAS effectors are used for aborts from liftoff through the first 30 seconds of second-stage flight. The SM effectors are used from that point through Main Engine Cutoff (MECO). There are two distinct sets of Guidance and Control (G&C) algorithms that are designed to maximize the performance of these abort effectors. This paper will outline the necessary inputs to the G&C subsystem, the preliminary design of the G&C algorithms, the ability of the algorithms to predict which abort modes are achievable, and the resulting success of the abort system. Abort success will be measured against the Preliminary Design Review (PDR) abort performance metrics and overall performance will be reported. Finally, potential improvements to the G&C design will be discussed.

  10. Performance Enhancement of Radial Distributed System with Distributed Generators by Reconfiguration Using Binary Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Rajalakshmi, N.; Padma Subramanian, D.; Thamizhavel, K.

    2015-03-01

    The extent of real power loss and voltage deviation associated with overloaded feeders in a radial distribution system can be reduced by reconfiguration. Reconfiguration is normally achieved by changing the open/closed state of tie/sectionalizing switches. Finding the optimal switch combination is a complicated problem, as there are many switching combinations possible in a distribution system. Hence optimization techniques are finding greater importance in reducing the complexity of the reconfiguration problem. This paper presents the application of the firefly algorithm (FA) for optimal reconfiguration of a radial distribution system with distributed generators (DG). The algorithm is tested on the IEEE 33-bus system installed with DGs and the results are compared with a binary genetic algorithm. It is found that the binary FA is more effective than the binary genetic algorithm in achieving real power loss reduction and improving the voltage profile, and hence in enhancing the performance of the radial distribution system. Results are found to be optimum when DGs are added to the test system, which demonstrates the impact of DGs on the distribution system.

  11. Performance of a rain retrieval algorithm using TRMM data in the Eastern Mediterranean

    NASA Astrophysics Data System (ADS)

    Katsanos, D.; Viltard, N.; Lagouvardos, K.; Kotroni, V.

    2006-05-01

    This study aims to make a regional characterization of the performance of the rain retrieval algorithm BRAIN. This algorithm estimates the rain rate from brightness temperatures measured by the TRMM Microwave Imager (TMI) onboard the TRMM satellite. In this stage of the study, a comparison between the rain estimated from the Precipitation Radar (PR) onboard TRMM (2A25 version 5) and the rain retrieved by the BRAIN algorithm is presented, for about 30 satellite overpasses over the Central and Eastern Mediterranean during the period October 2003-March 2004, in order to assess the behavior of the algorithm in the Eastern Mediterranean region. BRAIN was built and tested using PR rain estimates distributed randomly over the whole TRMM sampling region. Characterization of the differences between PR and BRAIN over a specific region is thus interesting because it might reveal a local trend for one or the other of the instruments. Checking the BRAIN results against the PR rain estimate appears to be consistent with former results, i.e., a somewhat marked discrepancy for the highest rain rates. This difference arises from a known problem that affects rain retrievals based on passive microwave radiometer measurements, but some of the higher radar rain rates could also be questioned. As an independent test, a good correlation between the rain retrieved by BRAIN and lightning data (obtained by the UK Met Office long-range detection system) is also demonstrated in the paper.

  12. Injection Temperature Significantly Affects In Vitro and In Vivo Performance of Collagen-Platelet Scaffolds

    PubMed Central

    Palmer, M.P.; Abreu, E.L.; Mastrangelo, A.; Murray, M.M.

    2009-01-01

    Collagen-platelet composites have recently been successfully used as scaffolds to stimulate anterior cruciate ligament (ACL) wound healing in large animal models. These materials are typically kept on ice until use to prevent premature gelation; however, with surgical use, placement of a cold solution then requires up to an hour while the solution comes to body temperature (at which point gelation occurs). Bringing the solution to a higher temperature before injection would likely decrease this intra-operative wait; however, the effects of this on composite performance are not known. The hypothesis tested here was that increasing the temperature of the gel at the time of injection would significantly decrease the time to gelation, but would not significantly alter the mechanical properties of the composite or its ability to support functional tissue repair. Primary outcome measures included the maximum elastic modulus (stiffness) of the composite in vitro and the in vivo yield load of an ACL transection treated with an injected collagen-platelet composite. In vitro findings were that injection temperatures over 30°C resulted in a faster visco-elastic transition; however, the warmed composites had a 50% decrease in their maximum elastic modulus. In vivo studies found that warming the gels prior to injection also resulted in a decrease in the yield load of the healing ACL at 14 weeks. These studies suggest that increasing injection temperature of collagen-platelet composites results in a decrease in performance of the composite in vitro and in the strength of the healing ligament in vivo and this technique should be used only with great caution. PMID:19030174

  13. Performance comparison of multi-label learning algorithms on clinical data for chronic diseases.

    PubMed

    Zufferey, Damien; Hofer, Thomas; Hennebert, Jean; Schumacher, Michael; Ingold, Rolf; Bromuri, Stefano

    2015-10-01

    We are motivated by the issue of classifying diseases of chronically ill patients to assist physicians in their everyday work. Our goal is to provide a performance comparison of state-of-the-art multi-label learning algorithms for the analysis of multivariate sequential clinical data from medical records of patients affected by chronic diseases. The multi-label learning approach appears to be a good candidate for modeling the overlapping medical conditions specific to chronically ill patients, and with such a comparison study available, the evaluation of new algorithms should be enhanced. For the method, we choose a summary statistics approach to processing the sequential clinical data, so that the extracted features maintain an interpretable link to their corresponding medical records. The publicly available MIMIC-II dataset, which contains more than 19,000 patients with chronic diseases, is used in this study. For the comparison we selected the following multi-label algorithms: ML-kNN, AdaBoostMH, binary relevance, classifier chains, HOMER and RAkEL. Regarding the results, binary relevance approaches, despite their elementary design and their assumption that the chronic illnesses are independent, perform best in most scenarios, in particular for the detection of relevant diseases. In addition, binary relevance approaches scale up to large datasets and are easy to learn. However, the RAkEL algorithm, despite its scalability problems when confronted with large datasets, performs well in the scenario that ranks the labels according to the dominant disease of the patient. PMID:26275389
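
    A minimal binary-relevance baseline of the kind that performed well in this comparison is sketched below, using scikit-learn's MultiOutputClassifier to train one independent classifier per label. Synthetic multi-label data stands in for the MIMIC-II features; the base classifier and evaluation metric are assumptions for illustration.

```python
# Binary relevance: one logistic-regression classifier per label, labels treated independently.
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import f1_score

X, Y = make_multilabel_classification(n_samples=2000, n_features=40,
                                      n_classes=8, n_labels=3, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)

clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, Y_tr)
print("micro-F1:", round(f1_score(Y_te, clf.predict(X_te), average="micro"), 3))
```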

  14. Performance evaluation of hydrological models: Statistical significance for reducing subjectivity in goodness-of-fit assessments

    NASA Astrophysics Data System (ADS)

    Ritter, Axel; Muñoz-Carpena, Rafael

    2013-02-01

    Success in the use of computer models for simulating environmental variables and processes requires objective model calibration and verification procedures. Several methods for quantifying the goodness-of-fit of observations against model-calculated values have been proposed, but none of them is free of limitations and they are often ambiguous. When a single indicator is used it may lead to incorrect verification of the model. Instead, a combination of graphical results, absolute value error statistics (i.e., root mean square error), and normalized goodness-of-fit statistics (i.e., Nash-Sutcliffe Efficiency coefficient, NSE) is currently recommended. Interpretation of NSE values is often subjective, and may be biased by the magnitude and number of data points, data outliers and repeated data. The statistical significance of the performance statistics is an aspect generally ignored that helps in reducing subjectivity in the proper interpretation of model performance. In this work, approximated probability distributions for two common indicators (NSE and root mean square error) are derived with bootstrapping (block bootstrapping when dealing with time series), followed by bias-corrected and accelerated calculation of confidence intervals. Hypothesis testing of the indicators exceeding threshold values is proposed in a unified framework for statistically accepting or rejecting the model performance. It is illustrated how model performance is not linearly related to NSE, which is critical for its proper interpretation. Additionally, the sensitivity of the indicators to model bias, outliers and repeated data is evaluated. The potential of the difference between root mean square error and mean absolute error for detecting outliers is explored, showing that this may be considered a necessary but not a sufficient condition of outlier presence. The usefulness of the approach for the evaluation of model performance is illustrated with case studies including those with
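
    The sketch below computes NSE and RMSE and attaches a bootstrap confidence interval, to make the recommended procedure concrete. It uses a plain pairwise bootstrap with percentile intervals on synthetic data, which is a simplification of the block bootstrapping and bias-corrected-and-accelerated intervals described above.

```python
# NSE, RMSE, and a simple bootstrap confidence interval for NSE.
import numpy as np

def nse(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    return np.sqrt(np.mean((obs - sim) ** 2))

def bootstrap_ci(obs, sim, stat, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(obs)
    reps = np.array([stat(obs[idx], sim[idx])
                     for idx in (rng.integers(0, n, n) for _ in range(n_boot))])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

obs = np.random.gamma(2.0, 2.0, 365)                 # synthetic "observed" series
sim = obs + np.random.normal(0, 1.0, obs.size)       # synthetic model output
print("NSE =", round(nse(obs, sim), 3),
      "95% CI:", np.round(bootstrap_ci(obs, sim, nse), 3),
      "RMSE =", round(rmse(obs, sim), 3))
```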

  15. Nanoporosity Significantly Enhances the Biological Performance of Engineered Glass Tissue Scaffolds

    PubMed Central

    Wang, Shaojie; Kowal, Tia J.; Marei, Mona K.

    2013-01-01

    Nanoporosity is known to impact the performance of implants and scaffolds such as bioactive glass (BG) scaffolds, either by providing a higher concentration of bioactive chemical species from enhanced surface area, or due to inherent nanoscale topology, or both. To delineate the role of these two characteristics, BG scaffolds have been fabricated with nearly identical surface area (81 and 83±2 m2/g) but significantly different pore size (av. 3.7 and 17.7 nm) by varying both the sintering temperature and the ammonia concentration during the solvent exchange phase of the sol-gel fabrication process. In vitro tests performed with MC3T3-E1 preosteoblast cells on such scaffolds show that initial cell attachment is increased on samples with the smaller nanopore size, providing the first direct evidence of the influence of nanopore topography on cell response to a bioactive structure. Furthermore, in vivo animal tests in New Zealand rabbits (subcutaneous implantation) indicate that nanopores promote colonization and cell penetration into these scaffolds, further demonstrating the favorable effects of nanopores in tissue-engineering-relevant BG scaffolds. PMID:23427819

  16. Amorphous Semiconductor Nanowires Created by Site-Specific Heteroatom Substitution with Significantly Enhanced Photoelectrochemical Performance.

    PubMed

    He, Ting; Zu, Lianhai; Zhang, Yan; Mao, Chengliang; Xu, Xiaoxiang; Yang, Jinhu; Yang, Shihe

    2016-08-23

    Semiconductor nanowires that have been extensively studied are typically in a crystalline phase. Much less studied are amorphous semiconductor nanowires, owing to the difficulty of their synthesis, despite a set of characteristics desirable for photoelectric devices, such as higher surface area, higher surface activity, and higher light harvesting. In this combined experimental and computational work, taking Zn2GeO4 (ZGO) as an example, we propose a site-specific heteroatom substitution strategy through a solution-phase ions-alternative-deposition route to prepare amorphous/crystalline Si-incorporated ZGO nanowires with tunable band structures. The substitution of Si atoms for the Zn or Ge atoms distorts the bonding network to a different extent, leading to the formation of amorphous Zn1.7Si0.3GeO4 (ZSGO) or crystalline Zn2(GeO4)0.88(SiO4)0.12 (ZGSO) nanowires, respectively, with different bandgaps. The amorphous ZSGO nanowire arrays exhibit significantly enhanced performance in photoelectrochemical water splitting, such as higher and more stable photocurrent and faster photoresponse and recovery, relative to the crystalline ZGSO and ZGO nanowires in this work, as well as ZGO photocatalysts reported previously. The remarkable performance highlights the advantages of the amorphous ZSGO nanowires for photoelectric devices, such as higher light harvesting capability, faster charge separation, lower charge recombination, and higher surface catalytic activity. PMID:27494205

  17. Performance of MODIS Thermal Emissive Bands On-orbit Calibration Algorithms

    NASA Technical Reports Server (NTRS)

    Xiong, Xiaoxiong; Chang, T.

    2009-01-01

    serves as the thermal calibration source and the SV provides measurements for the sensor's background and offsets. The MODIS on-board BB is a v-grooved plate with its temperature measured using 12 platinum resistive thermistors (PRT) uniformly embedded in the BB substrate. All the BB thermistors were characterized pre-launch with reference to the NIST temperature standards. Unlike typical BB operations in many heritage sensors, which have no temperature control capability, the MODIS on-board BB can be operated at any temperature between instrument ambient (about 270 K) and 315 K and can also be varied continuously within this range. This feature has significantly enhanced the capability of MODIS to track and update the TEB nonlinear calibration coefficients over its entire mission. Following a brief description of MODIS TEB on-orbit calibration methodologies and its on-board BB operational activities, this paper provides a comprehensive performance assessment of the MODIS TEB quadratic calibration algorithm. It examines the scan-by-scan, orbit-by-orbit, daily, and seasonal variations of detector responses and the associated impact due to changes in the CFPA and instrument temperatures. Specifically, this paper analyzes the contribution by each individual thermal emissive source term (BB, scan cavity, and scan mirror) and the impact on the Level 1B data product quality due to pre-launch and on-orbit calibration uncertainties. A comparison of Terra and Aqua TEB on-orbit performance, lessons learned, and suggestions for future improvements will also be made.
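
    As a rough illustration of what a quadratic calibration entails (this is not the MODIS operational algorithm, and the numbers are invented), the sketch below fits radiance as L ≈ a0 + b1·dn + a2·dn² from blackbody views taken at several controlled temperatures, then applies the fit to an Earth-view response.

    ```python
    import numpy as np

    # Synthetic (dn, radiance) pairs standing in for blackbody views at several
    # controlled temperatures; the values are illustrative only.
    dn = np.array([410.0, 820.0, 1230.0, 1650.0, 2080.0])
    radiance = np.array([2.10, 4.25, 6.45, 8.80, 11.30])   # e.g. W m^-2 sr^-1 um^-1

    # Fit L = a0 + b1*dn + a2*dn^2 (np.polyfit returns the highest order first)
    a2, b1, a0 = np.polyfit(dn, radiance, deg=2)
    print(f"a0={a0:.4g}, b1={b1:.4g}, a2={a2:.4g}")

    # Apply the calibration to a hypothetical Earth-view detector response
    dn_ev = 1500.0
    L_ev = a0 + b1 * dn_ev + a2 * dn_ev ** 2
    print(f"calibrated Earth-view radiance: {L_ev:.3f}")
    ```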

  18. Graphene Oxide Quantum Dots Covalently Functionalized PVDF Membrane with Significantly-Enhanced Bactericidal and Antibiofouling Performances.

    PubMed

    Zeng, Zhiping; Yu, Dingshan; He, Ziming; Liu, Jing; Xiao, Fang-Xing; Zhang, Yan; Wang, Rong; Bhattacharyya, Dibakar; Tan, Timothy Thatt Yang

    2016-01-01

    Covalent bonding of graphene oxide quantum dots (GOQDs) onto an amino-modified polyvinylidene fluoride (PVDF) membrane has generated a new type of nano-carbon functionalized membrane with significantly enhanced antibacterial and antibiofouling properties. A continuous filtration test using E. coli-containing feedwater shows that the relative flux drop over GOQD-modified PVDF is 23%, which is significantly lower than those over pristine PVDF (86%) and GO-sheet modified PVDF (62%) after 10 h of filtration. The presence of the GOQD coating layer effectively inactivates E. coli and S. aureus cells and prevents biofilm formation on the membrane surface, producing excellent antimicrobial activity and potential antibiofouling capability superior to those of previously reported membranes modified with two-dimensional GO sheets and one-dimensional CNTs. The distinctive antimicrobial and antibiofouling performances could be attributed to the unique structure and uniform dispersion of GOQDs, enabling the exposure of a larger fraction of active edges and facilitating the generation of oxidative stress. Furthermore, the GOQD-modified membrane possesses satisfactory long-term stability and durability due to the strong covalent interaction between PVDF and GOQDs. This study opens up a new synthetic avenue in the fabrication of efficient surface-functionalized polymer membranes for potential wastewater treatment and biomolecule separation. PMID:26832603

  19. Graphene Oxide Quantum Dots Covalently Functionalized PVDF Membrane with Significantly-Enhanced Bactericidal and Antibiofouling Performances

    NASA Astrophysics Data System (ADS)

    Zeng, Zhiping; Yu, Dingshan; He, Ziming; Liu, Jing; Xiao, Fang-Xing; Zhang, Yan; Wang, Rong; Bhattacharyya, Dibakar; Tan, Timothy Thatt Yang

    2016-02-01

    Covalent bonding of graphene oxide quantum dots (GOQDs) onto an amino-modified polyvinylidene fluoride (PVDF) membrane has generated a new type of nano-carbon functionalized membrane with significantly enhanced antibacterial and antibiofouling properties. A continuous filtration test using E. coli-containing feedwater shows that the relative flux drop over GOQD-modified PVDF is 23%, which is significantly lower than those over pristine PVDF (86%) and GO-sheet modified PVDF (62%) after 10 h of filtration. The presence of the GOQD coating layer effectively inactivates E. coli and S. aureus cells and prevents biofilm formation on the membrane surface, producing excellent antimicrobial activity and potential antibiofouling capability superior to those of previously reported membranes modified with two-dimensional GO sheets and one-dimensional CNTs. The distinctive antimicrobial and antibiofouling performances could be attributed to the unique structure and uniform dispersion of GOQDs, enabling the exposure of a larger fraction of active edges and facilitating the generation of oxidative stress. Furthermore, the GOQD-modified membrane possesses satisfactory long-term stability and durability due to the strong covalent interaction between PVDF and GOQDs. This study opens up a new synthetic avenue in the fabrication of efficient surface-functionalized polymer membranes for potential wastewater treatment and biomolecule separation.

  20. Graphene Oxide Quantum Dots Covalently Functionalized PVDF Membrane with Significantly-Enhanced Bactericidal and Antibiofouling Performances

    PubMed Central

    Zeng, Zhiping; Yu, Dingshan; He, Ziming; Liu, Jing; Xiao, Fang-Xing; Zhang, Yan; Wang, Rong; Bhattacharyya, Dibakar; Tan, Timothy Thatt Yang

    2016-01-01

    Covalent bonding of graphene oxide quantum dots (GOQDs) onto an amino-modified polyvinylidene fluoride (PVDF) membrane has generated a new type of nano-carbon functionalized membrane with significantly enhanced antibacterial and antibiofouling properties. A continuous filtration test using E. coli-containing feedwater shows that the relative flux drop over GOQD-modified PVDF is 23%, which is significantly lower than those over pristine PVDF (86%) and GO-sheet modified PVDF (62%) after 10 h of filtration. The presence of the GOQD coating layer effectively inactivates E. coli and S. aureus cells and prevents biofilm formation on the membrane surface, producing excellent antimicrobial activity and potential antibiofouling capability superior to those of previously reported membranes modified with two-dimensional GO sheets and one-dimensional CNTs. The distinctive antimicrobial and antibiofouling performances could be attributed to the unique structure and uniform dispersion of GOQDs, enabling the exposure of a larger fraction of active edges and facilitating the generation of oxidative stress. Furthermore, the GOQD-modified membrane possesses satisfactory long-term stability and durability due to the strong covalent interaction between PVDF and GOQDs. This study opens up a new synthetic avenue in the fabrication of efficient surface-functionalized polymer membranes for potential wastewater treatment and biomolecule separation. PMID:26832603

  1. Field Significance of Performance Measures in the Context of Regional Climate Model Verification

    NASA Astrophysics Data System (ADS)

    Ivanov, Martin; Warrach-Sagi, Kirsten; Wulfmeyer, Volker

    2015-04-01

    The purpose of this study is to rigorously evaluate the skill of dynamically downscaled global climate simulations. We investigate a dynamical downscaling of the ERA-Interim reanalysis using the Weather Research and Forecasting (WRF) model, coupled with the NOAH land surface model, within the scope of EURO-CORDEX. WRF has a horizontal resolution of 0.11° and uses the following physics schemes: the Yonsei University atmospheric boundary layer parameterization, the Morrison two-moment microphysics, the Kain-Fritsch-Eta convection and the Community Atmosphere Model radiation schemes. Daily precipitation is verified over Germany for summer and winter against high-resolution observation data from the German weather service for the first time. The ability of WRF to reproduce the statistical distribution of daily precipitation is evaluated using metrics based on distribution characteristics. Skill against the large-scale ERA-Interim data gives insight into the potential additional skill of dynamical downscaling. To quantify it, we transform the absolute performance measures to relative skill measures against ERA-Interim. Their field significance is rigorously estimated and locally significant regions are highlighted. Statistical distributions are better reproduced in summer than in winter. In both seasons WRF is too dry over mountain tops, because high precipitation is underestimated and too rare while small precipitation is underestimated and too frequent. In winter WRF is too wet at windward sides and land-sea transition regions due to too frequent weak and moderate precipitation events. In summer it is too dry over land-sea transition regions due to underestimated small and too rare moderate precipitation, and too wet in some river valleys due to too frequent high precipitation. Additional skill relative to ERA-Interim is documented for overall measures as well as measures regarding the spread and tails of the statistical distribution, but not regarding mean seasonal precipitation. The added

  2. Performance analysis of visual tracking algorithms for motion-based user interfaces on mobile devices

    NASA Astrophysics Data System (ADS)

    Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing

    2008-02-01

    Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with their own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented reality-based applications on phones with inbuilt cameras. In this paper, we compare the performance of three feature- or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness and restrictions of each of the above methods. Our performance tests are conducted over two stages: the first stage uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare tracking accuracy, while the second stage analyzes the robustness of the algorithms by testing against factors such as image scaling and frame-skipping.

  3. Assessment of next-best-view algorithms performance with various 3D scanners and manipulator

    NASA Astrophysics Data System (ADS)

    Karaszewski, M.; Adamczyk, M.; Sitnik, R.

    2016-09-01

    The problem of calculating three dimensional (3D) sensor position (and orientation) during the digitization of real-world objects (called next best view planning or NBV) has been an active topic of research for over 20 years. While many solutions have been developed, it is hard to compare their quality based only on the exemplary results presented in papers. We implemented 13 of the most popular NBV algorithms and evaluated their performance by digitizing five objects of various properties, using three measurement heads with different working volumes mounted on a 6-axis robot with a rotating table for placing objects. The results obtained for the 13 algorithms were then compared based on four criteria: the number of directional measurements, digitization time, total positioning distance, and surface coverage required to digitize test objects with available measurement heads.

  4. Evaluation of Techniques to Detect Significant Network Performance Problems using End-to-End Active Network Measurements

    SciTech Connect

    Cottrell, R.Les; Logg, Connie; Chhaparia, Mahesh; Grigoriev, Maxim; Haro, Felipe; Nazir, Fawad; Sandford, Mark

    2006-01-25

    End-to-end detection of fault and performance problems in wide-area production networks is becoming increasingly hard as the complexity of the paths, the diversity of the performance, and dependency on the network increase. Several monitoring infrastructures have been built to monitor different network metrics and collect monitoring information from thousands of hosts around the globe. Typically there are hundreds to thousands of time-series plots of network metrics which need to be looked at to identify network performance problems or anomalous variations in the traffic. Furthermore, most commercial products rely on a comparison with user-configured static thresholds and often require access to SNMP-MIB information, to which a typical end-user does not usually have access. In our paper we propose new techniques to detect network performance problems proactively in close to real time, without relying on static thresholds or SNMP-MIB information. We describe and compare the use of several different algorithms that we have implemented to detect persistent network problems using anomalous variation analysis in real end-to-end Internet performance measurements. We also provide methods and/or guidance for how to set the user-settable parameters. The measurements are based on active probes running on 40 production network paths with bottlenecks varying from 0.5 Mbit/s to 1000 Mbit/s. For well-behaved data (no missed measurements and no very large outliers) with small seasonal changes most algorithms identify similar events. We compare the algorithms' robustness with respect to false positives and missed events, especially when there are large seasonal effects in the data. Our proposed techniques cover a wide variety of network paths and traffic patterns. We also discuss the applicability of the algorithms in terms of their intuitiveness, their speed of execution as implemented, and areas of applicability. Our encouraging results compare and evaluate the accuracy of our detection

  5. Heat Capacity Mapping Radiometer (HCMR) data processing algorithm, calibration, and flight performance evaluation

    NASA Technical Reports Server (NTRS)

    Bohse, J. R.; Bewtra, M.; Barnes, W. L.

    1979-01-01

    The rationale and procedures used in the radiometric calibration and correction of Heat Capacity Mapping Mission (HCMM) data are presented. Instrument-level testing and calibration of the Heat Capacity Mapping Radiometer (HCMR) were performed by the sensor contractor ITT Aerospace/Optical Division. The principal results are included. From the instrumental characteristics and calibration data obtained during ITT acceptance tests, an algorithm for post-launch processing was developed. Integrated spacecraft-level sensor calibration was performed at Goddard Space Flight Center (GSFC) approximately two months before launch. This calibration provided an opportunity to validate the data calibration algorithm. Instrumental parameters and results of the validation are presented and the performances of the instrument and the data system after launch are examined with respect to the radiometric results. Anomalies and their consequences are discussed. Flight data indicates a loss in sensor sensitivity with time. The loss was shown to be recoverable by an outgassing procedure performed approximately 65 days after the infrared channel was turned on. It is planned to repeat this procedure periodically.

  6. Prolonged Exercise in Type 1 Diabetes: Performance of a Customizable Algorithm to Estimate the Carbohydrate Supplements to Minimize Glycemic Imbalances

    PubMed Central

    Francescato, Maria Pia; Stel, Giuliana; Stenner, Elisabetta; Geat, Mario

    2015-01-01

    Physical activity in patients with type 1 diabetes (T1DM) is hindered because of the high risk of glycemic imbalances. A recently proposed algorithm (named Ecres) estimates the supplemental carbohydrates well enough for exercise lasting one hour, but its performance for prolonged exercise requires validation. Nine T1DM patients (5M/4F; 35–65 years; HbA1c 54±13 mmol·mol-1) performed, under free-life conditions, a 3-h walk at 30% heart rate reserve while insulin concentrations, whole-body carbohydrate oxidation rates (determined by indirect calorimetry) and supplemental carbohydrates (93% sucrose), together with glycemia, were measured every 30 min. Data were subsequently compared with the corresponding values estimated by the algorithm. No significant difference was found between the estimated insulin concentrations and the laboratory-measured values (p = NS). The carbohydrate oxidation rate decreased significantly with time (from 0.84±0.31 to 0.53±0.24 g·min-1; p<0.001) and was estimated well enough by the algorithm (p = NS). Estimated carbohydrate requirements were practically equal to the corresponding measured values (p = NS), the difference between the two quantities amounting to –1.0±6.1 g, independent of the elapsed exercise time (time effect, p = NS). Results confirm that Ecres provides a satisfactory estimate of the carbohydrates required to avoid glycemic imbalances during moderate intensity aerobic physical activity, opening the prospect of an intriguing method that could liberate patients from the fear of exercise-induced hypoglycemia. PMID:25918842

  7. Relevant priors prefetching algorithm performance for a picture archiving and communication system.

    PubMed

    Andriole, K P; Avrin, D E; Yin, L; Gould, R G; Luth, D M; Arenson, R L

    2000-05-01

    Proper prefetching of relevant prior examinations from a picture archiving and communication system (PACS) archive, when a patient is scheduled for a new imaging study, and sending the historic images to the display station where the new examination is expected to be routed and subsequently read out, can greatly facilitate interpretation and review, as well as enhance radiology departmental workflow and PACS performance. In practice, it has proven extremely difficult to implement an automatic prefetch as successful as an experienced fileroom clerk. An algorithm based on defined metagroup categories for examination type mnemonics has been designed and implemented as one possible solution to the prefetch problem. The metagroups, such as gastrointestinal (GI) tract, abdomen, chest, etc, can represent, in a small number of categories, the several hundred examination types performed by a typical radiology department. These metagroups can be defined in a table of examination mnemonics that maps a particular mnemonic to a metagroup or groups, and vice versa. This table is used to effect the prefetch rules of relevance. A given examination may relate to several prefetch categories, and preferences are easily configurable for a particular site. The prefetch algorithm metatable was implemented in database structured query language (SQL) using a many-to-many fetch category strategy. Algorithm performance was measured by analyzing the appropriateness of the priors fetched based on the examination type of the current study. Fetched relevant priors, missed relevant priors, fetched priors that were not relevant to the current examination, and priors not fetched that were not relevant were used to calculate sensitivity and specificity for the prefetch method. The time required for real-time requesting of priors not previously prefetched was also measured. The sensitivity of the prefetch algorithm was determined to be 98.3% and the specificity 100%. Time required for on
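
    The accuracy figures quoted above follow from a standard two-by-two tally of fetched versus relevant priors. A minimal sketch (Python; the counts are invented, not the study's data):

    ```python
    def prefetch_accuracy(fetched_relevant, missed_relevant,
                          fetched_irrelevant, not_fetched_irrelevant):
        """Sensitivity and specificity of a prior-exam prefetch rule.

        fetched_relevant       -- relevant priors that were prefetched (true positives)
        missed_relevant        -- relevant priors that were not prefetched (false negatives)
        fetched_irrelevant     -- irrelevant priors that were prefetched (false positives)
        not_fetched_irrelevant -- irrelevant priors correctly left in the archive (true negatives)
        """
        sensitivity = fetched_relevant / (fetched_relevant + missed_relevant)
        specificity = not_fetched_irrelevant / (not_fetched_irrelevant + fetched_irrelevant)
        return sensitivity, specificity

    # invented counts for illustration only
    sens, spec = prefetch_accuracy(fetched_relevant=580, missed_relevant=10,
                                   fetched_irrelevant=0, not_fetched_irrelevant=240)
    print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
    ```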

  8. No Significant Effect of Prefrontal tDCS on Working Memory Performance in Older Adults

    PubMed Central

    Nilsson, Jonna; Lebedev, Alexander V.; Lövdén, Martin

    2015-01-01

    Transcranial direct current stimulation (tDCS) has been put forward as a non-pharmacological alternative for alleviating cognitive decline in old age. Although results have shown some promise, little is known about the optimal stimulation parameters for modulation in the cognitive domain. In this study, the effects of tDCS over the dorsolateral prefrontal cortex (dlPFC) on working memory performance were investigated in thirty older adults. An N-back task assessed working memory before, during and after anodal tDCS at a current strength of 1 mA and 2 mA, in addition to sham stimulation. The study used a single-blind, cross-over design. The results revealed no significant effect of tDCS on accuracy or response times during or after stimulation, for any of the current strengths. These results suggest that a single session of tDCS over the dlPFC is unlikely to improve working memory, as assessed by an N-back task, in old age. PMID:26696882

  9. Improving the Response of a Rollover Sensor Placed in a Car under Performance Tests by Using a RLS Lattice Algorithm

    PubMed Central

    Hernandez, Wilmar

    2005-01-01

    In this paper, a sensor to measure the rollover angle of a car under performance tests is presented. Basically, the sensor consists of a dual-axis accelerometer, analog-electronic instrumentation stages, a data acquisition system and an adaptive filter based on a recursive least-squares (RLS) lattice algorithm. In short, the adaptive filter is used to improve the performance of the rollover sensor by carrying out an optimal prediction of the relevant signal coming from the sensor, which is buried in a broad-band noise background where we have little knowledge of the noise characteristics. The experimental results are satisfactory and show a significant improvement in the signal-to-noise ratio at the system output.
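
    As a rough illustration of the adaptive-prediction idea, the sketch below implements a conventional transversal RLS predictor in Python on a synthetic noisy signal. It is not the lattice form used in the paper, and the filter order, forgetting factor and test signal are assumptions.

    ```python
    import numpy as np

    def rls_predict(x, order=8, lam=0.99, delta=100.0):
        """One-step-ahead RLS prediction of signal x from its past 'order' samples."""
        w = np.zeros(order)                # filter weights
        P = np.eye(order) * delta          # inverse correlation matrix estimate
        y = np.zeros_like(x)
        for n in range(order, len(x)):
            u = x[n - order:n][::-1]       # regressor of the most recent samples
            y[n] = w @ u                   # prediction
            e = x[n] - y[n]                # a priori error
            k = P @ u / (lam + u @ P @ u)  # gain vector
            w = w + k * e
            P = (P - np.outer(k, u @ P)) / lam
        return y

    rng = np.random.default_rng(0)
    t = np.arange(2000) / 500.0
    signal = np.sin(2 * np.pi * 1.5 * t)                  # slow rollover-like component
    noisy = signal + 0.5 * rng.standard_normal(t.size)    # broad-band noise
    pred = rls_predict(noisy)
    # compare the broad-band noise level with the residual between prediction and clean component
    print("noise power: %.3f, prediction vs clean-signal error power: %.3f"
          % (np.mean((noisy - signal) ** 2), np.mean((pred[200:] - signal[200:]) ** 2)))
    ```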

  10. Boosting runtime-performance of photon pencil beam algorithms for radiotherapy treatment planning.

    PubMed

    Siggel, M; Ziegenhein, P; Nill, S; Oelfke, U

    2012-10-01

    Pencil beam algorithms are still considered standard photon dose calculation methods in radiotherapy treatment planning for many clinical applications. Despite their established role in radiotherapy planning, their performance and clinical applicability have to be continuously adapted to evolving complex treatment techniques such as adaptive radiation therapy (ART). We herewith report on a new, highly efficient version of a well-established pencil beam convolution algorithm which relies purely on measured input data. A method was developed that improves raytracing efficiency by exploiting the capabilities of modern CPU architectures to reduce runtime. Since most current desktop computers provide more than one calculation unit, we used symmetric multiprocessing extensively to parallelize the workload and thus decrease the algorithmic runtime. To maximize the advantage of code parallelization, we present two implementation strategies - one for the dose calculation in inverse planning software, and one for traditional forward planning. As a result, on a 16-core personal computer with AMD processors, we achieved a superlinear speedup factor of approximately 18 for calculating the dose distribution of typical forward IMRT treatment plans. PMID:22071169
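
    The parallelization strategy described here, splitting the per-beamlet dose workload across CPU cores, can be sketched with Python's standard multiprocessing module. The per-beamlet kernel below is a placeholder, not the pencil beam convolution itself.

    ```python
    import multiprocessing as mp
    import numpy as np

    def dose_for_beamlet(args):
        """Placeholder for the per-beamlet dose computation (raytrace + kernel convolution)."""
        beamlet_id, weight = args
        grid = np.zeros((64, 64, 64))
        # a real implementation would trace the ray through the CT grid and convolve a kernel
        grid[beamlet_id % 64, :, :] = weight
        return grid

    def total_dose(beamlets, workers=None):
        """Distribute beamlet dose computations over the available CPU cores and sum them."""
        with mp.Pool(processes=workers) as pool:
            partial_doses = pool.map(dose_for_beamlet, beamlets)
        return np.sum(partial_doses, axis=0)

    if __name__ == "__main__":
        beamlets = [(i, 1.0) for i in range(128)]   # hypothetical beamlet list
        dose = total_dose(beamlets)
        print(dose.shape, dose.sum())
    ```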

  11. Performance evaluation of a routing algorithm based on Hopfield Neural Network for network-on-chip

    NASA Astrophysics Data System (ADS)

    Esmaelpoor, Jamal; Ghafouri, Abdollah

    2015-12-01

    Network on chip (NoC) has emerged as a solution to overcome the growing complexity and design challenges of system on chip. A proper routing algorithm is a key issue of an NoC design. An appropriate routing method balances load across the network channels and keeps path length as short as possible. This study investigates the performance of a routing algorithm based on a Hopfield Neural Network. It uses dynamic programming to provide optimal paths and network monitoring in real time. The aim of this article is to analyse the possibility of using a neural network as a router. The algorithm selects the path with the lowest delay (cost) from source to destination. In other words, the path a message takes from source to destination depends on the network traffic situation at the time and is the fastest one available. The simulation results show that the proposed approach improves average delay, throughput and network congestion efficiently. At the same time, the increase in power consumption is almost negligible.

  12. The royal road for genetic algorithms: Fitness landscapes and GA performance

    SciTech Connect

    Mitchell, M.; Holland, J.H. ); Forrest, S. . Dept. of Computer Science)

    1991-01-01

    Genetic algorithms (GAs) play a major role in many artificial-life systems, but there is often little detailed understanding of why the GA performs as it does, and little theoretical basis on which to characterize the types of fitness landscapes that lead to successful GA performance. In this paper we propose a strategy for addressing these issues. Our strategy consists of defining a set of features of fitness landscapes that are particularly relevant to the GA, and experimentally studying how various configurations of these features affect the GA's performance along a number of dimensions. In this paper we informally describe an initial set of proposed feature classes, describe in detail one such class ("Royal Road" functions), and present some initial experimental results concerning the role of crossover and "building blocks" on landscapes constructed from features of this class. 27 refs., 1 fig., 5 tabs.
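
    A Royal Road function rewards a bit string for every fully completed block of contiguous ones. The sketch below (Python) is one hedged interpretation: an 8-block, 64-bit function optimized by a plain generational GA with one-point crossover and truncation selection; it is not the exact experimental setup of the paper.

    ```python
    import random
    random.seed(1)

    BLOCK, NBLOCKS = 8, 8
    LENGTH = BLOCK * NBLOCKS

    def royal_road(bits):
        """Fitness = sum of block sizes for every block made entirely of ones."""
        return sum(BLOCK for b in range(NBLOCKS)
                   if all(bits[b * BLOCK:(b + 1) * BLOCK]))

    def evolve(pop_size=128, generations=200, p_mut=1.0 / LENGTH, p_cross=0.7):
        pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(pop, key=royal_road, reverse=True)
            if royal_road(scored[0]) == LENGTH:        # all blocks completed
                break
            parents = scored[:pop_size // 2]           # truncation selection (a simplification)
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                if random.random() < p_cross:          # one-point crossover
                    cut = random.randrange(1, LENGTH)
                    a = a[:cut] + b[cut:]
                # bit-flip mutation
                children.append([bit ^ (random.random() < p_mut) for bit in a])
            pop = children
        return max(pop, key=royal_road)

    best = evolve()
    print("best fitness:", royal_road(best))
    ```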

  13. K-Means Re-Clustering-Algorithmic Options with Quantifiable Performance Comparisons

    SciTech Connect

    Meyer, A W; Paglieroni, D; Asteneh, C

    2002-12-17

    This paper presents various architectural options for implementing a K-Means Re-Clustering algorithm suitable for unsupervised segmentation of hyperspectral images. Performance metrics are developed based upon quantitative comparisons of convergence rates and segmentation quality. A methodology for making these comparisons is developed and used to establish K values that produce the best segmentations with minimal processing requirements. Convergence rates depend on the initial choice of cluster centers. Consequently, this same methodology may be used to evaluate the effectiveness of different initialization techniques.
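
    As a rough illustration of the underlying segmentation step, the sketch below (Python/NumPy, random data) runs basic K-means on the per-pixel spectra of a synthetic hyperspectral cube and reports the number of iterations to convergence, one of the comparison quantities mentioned above. The re-clustering and initialization variants studied in the paper are not reproduced.

    ```python
    import numpy as np

    def kmeans(pixels, k=5, max_iter=100, tol=1e-4, seed=0):
        """Basic K-means on an (n_pixels, n_bands) spectral array."""
        rng = np.random.default_rng(seed)
        centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
        for it in range(max_iter):
            # assign each spectrum to the nearest cluster center
            d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            new_centers = np.array([pixels[labels == j].mean(axis=0)
                                    if np.any(labels == j) else centers[j]
                                    for j in range(k)])
            if np.linalg.norm(new_centers - centers) < tol:   # convergence check
                return labels, new_centers, it + 1
            centers = new_centers
        return labels, centers, max_iter

    # synthetic "hyperspectral image": 32x32 pixels, 50 bands
    cube = np.random.default_rng(1).random((32, 32, 50))
    labels, centers, iters = kmeans(cube.reshape(-1, 50), k=5)
    segmentation = labels.reshape(32, 32)
    print("converged after", iters, "iterations")
    ```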

  14. An administrative data validation study of the accuracy of algorithms for identifying rheumatoid arthritis: the influence of the reference standard on algorithm performance

    PubMed Central

    2014-01-01

    Background We have previously validated administrative data algorithms to identify patients with rheumatoid arthritis (RA) using rheumatology clinic records as the reference standard. Here we reassessed the accuracy of the algorithms using primary care records as the reference standard. Methods We performed a retrospective chart abstraction study using a random sample of 7500 adult patients under the care of 83 family physicians contributing to the Electronic Medical Record Administrative data Linked Database (EMRALD) in Ontario, Canada. Using physician-reported diagnoses as the reference standard, we computed and compared the sensitivity, specificity, and predictive values for over 100 administrative data algorithms for RA case ascertainment. Results We identified 69 patients with RA for a lifetime RA prevalence of 0.9%. All algorithms had excellent specificity (>97%). However, sensitivity varied (75-90%) among physician billing algorithms. Despite the low prevalence of RA, most algorithms had adequate positive predictive value (PPV; 51-83%). The algorithm of “[1 hospitalization RA diagnosis code] or [3 physician RA diagnosis codes with ≥1 by a specialist over 2 years]” had a sensitivity of 78% (95% CI 69–88), specificity of 100% (95% CI 100–100), PPV of 78% (95% CI 69–88) and NPV of 100% (95% CI 100–100). Conclusions Administrative data algorithms for detecting RA patients achieved a high degree of accuracy amongst the general population. However, results varied slightly from our previous report, which can be attributed to differences in the reference standards with respect to disease prevalence, spectrum of disease, and type of comparator group. PMID:24956925
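
    The dependence of predictive values on prevalence, which underlies the authors' point about the reference standard, follows directly from Bayes' rule. A small illustration (Python): the sensitivity is taken from the quoted algorithm, the specificity is set just below 100% so the prevalence effect is visible, and the alternative prevalence values are hypothetical.

    ```python
    def predictive_values(sensitivity, specificity, prevalence):
        """PPV and NPV implied by Bayes' rule for a given disease prevalence."""
        ppv = (sensitivity * prevalence) / (
            sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
        npv = (specificity * (1 - prevalence)) / (
            specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
        return ppv, npv

    sens, spec = 0.78, 0.999   # roughly the quoted algorithm (specificity just under 100%)
    for prev in (0.009, 0.02, 0.05):   # study prevalence vs. hypothetical higher-prevalence settings
        ppv, npv = predictive_values(sens, spec, prev)
        print(f"prevalence={prev:.3f}: PPV={ppv:.2f}, NPV={npv:.3f}")
    ```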

  15. Flight assessment of the onboard propulsion system model for the Performance Seeking Control algorithm on an F-15 aircraft

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Schkolnik, Gerard S.

    1995-01-01

    Performance Seeking Control (PSC), an onboard, adaptive, real-time optimization algorithm, relies upon an onboard propulsion system model. Flight results illustrated propulsion system performance improvements as calculated by the model. These improvements were subject to uncertainty arising from modeling error. Thus to quantify uncertainty in the PSC performance improvements, modeling accuracy must be assessed. A flight test approach to verify PSC-predicted increases in thrust (FNP) and absolute levels of fan stall margin is developed and applied to flight test data. Application of the excess thrust technique shows that increases of FNP agree to within 3 percent of full-scale measurements for most conditions. Accuracy to these levels is significant because uncertainty bands may now be applied to the performance improvements provided by PSC. Assessment of PSC fan stall margin modeling accuracy was completed with analysis of in-flight stall tests. Results indicate that the model overestimates the stall margin by between 5 to 10 percent. Because PSC achieves performance gains by using available stall margin, this overestimation may represent performance improvements to be recovered with increased modeling accuracy. Assessment of thrust and stall margin modeling accuracy provides a critical piece for a comprehensive understanding of PSC's capabilities and limitations.

  16. Associating optical measurements and estimating orbits of geocentric objects with a Genetic Algorithm: performance limitations.

    NASA Astrophysics Data System (ADS)

    Zittersteijn, Michiel; Schildknecht, Thomas; Vananti, Alessandro; Dolado Perez, Juan Carlos; Martinot, Vincent

    2016-07-01

    Currently, several thousand objects are being tracked in the MEO and GEO regions through optical means. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the correlation and orbit determination problems simultaneously, and is able to efficiently process large data sets with minimal manual intervention. This problem is also known as the Multiple Target Tracking (MTT) problem. The complexity of the MTT problem is defined by its dimension S. Current research tends to focus on the S = 2 MTT problem. The reason for this is that for S = 2 the problem has polynomial complexity. However, with S = 2 the decision to associate a set of observations is based on the minimum amount of information; in ambiguous situations (e.g. satellite clusters) this will lead to incorrect associations. The S > 2 MTT problem is an NP-hard combinatorial optimization problem. In previous work an Elitist Genetic Algorithm (EGA) was proposed as a method to approximately solve this problem. It was shown that the EGA is able to find a good approximate solution with a polynomial time complexity. The EGA relies on solving the Lambert problem in order to perform the necessary orbit determinations. This means that the algorithm is restricted to orbits that are described by Keplerian motion. The work presented in this paper focuses on the impact that this restriction has on the algorithm performance.

  17. Algorithms for thermal and mechanical contact in nuclear fuel performance analysis

    SciTech Connect

    Hales, J. D.; Andrs, D.; Gaston, D. R.

    2013-07-01

    The transfer of heat and force from UO2 pellets to the cladding is an essential element in typical nuclear fuel performance modeling. Traditionally, this has been accomplished in a one-dimensional fashion, with a slice of fuel interacting with a slice of cladding. In this manner, the location at which the transfer occurs is set a priori. While straightforward, this limits the applicability and accuracy of the model. We propose finite element algorithms for the transfer of heat and force where the location for the transfer is not predetermined. This enables analysis of individual fuel pellets with large sliding between the fuel and the cladding. The simplest of these approaches is a node on face constraint. Heat and force are transferred from a node on the fuel to the cladding face opposite. Another option is a transfer based on quadrature point locations, which is applied here to the transfer of heat. The final algorithm outlined here is the so-called mortar method, with applicability to heat and force transfer. The mortar method promises to be a highly accurate approach which may be used for a transfer of other quantities and in other contexts, such as heat from cladding to a CFD mesh of the coolant. This paper reviews these approaches, discusses their strengths and weaknesses, and presents results from each on simplified nuclear fuel performance models. (authors)

  18. Performance Evaluation of Block Acquisition and Tracking Algorithms Using an Open Source GPS Receiver Platform

    NASA Technical Reports Server (NTRS)

    Ramachandran, Ganesh K.; Akopian, David; Heckler, Gregory W.; Winternitz, Luke B.

    2011-01-01

    Location technologies have many applications in wireless communications, military and space missions, etc. The US Global Positioning System (GPS) and other existing and emerging Global Navigation Satellite Systems (GNSS) are expected to provide accurate location information to enable such applications. While GNSS systems perform very well in strong signal conditions, their operation in many urban, indoor, and space applications is not robust or even impossible due to weak signals and strong distortions. The search for less costly, faster and more sensitive receivers is still in progress. As the research community addresses more and more complicated phenomena, there is a demand for flexible multimode reference receivers, associated SDKs, and development platforms that may accelerate and facilitate the research. One such concept is the software GPS/GNSS receiver (GPS SDR), which permits easy access to algorithmic libraries and the possibility to integrate more advanced algorithms without hardware or essential software updates. The GNU-SDR and GPS-SDR open source receiver platforms are popular examples. This paper evaluates the performance of recently proposed block-correlator techniques for acquisition and tracking of GPS signals using the open source GPS-SDR platform.

  19. Performance Comparison of Binary Search Tree and Framed ALOHA Algorithms for RFID Anti-Collision

    NASA Astrophysics Data System (ADS)

    Chen, Wen-Tzu

    Binary search tree and framed ALOHA algorithms are commonly adopted to solve the anti-collision problem in RFID systems. In this letter, the read efficiency of these two anti-collision algorithms is compared through computer simulations. Simulation results indicate the framed ALOHA algorithm requires less total read time than the binary search tree algorithm. The initial frame length strongly affects the uplink throughput for the framed ALOHA algorithm.
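
    The framed (slotted) ALOHA side of the comparison can be reproduced in outline with a simple Monte Carlo simulation. The sketch below (Python) counts how many frames and slots are needed to read a tag population when each unread tag picks a random slot per frame; frame-size adaptation and the binary search tree counterpart are omitted, and the frame length of 128 is an assumption.

    ```python
    import random
    random.seed(0)

    def framed_aloha_rounds(n_tags, frame_size=128):
        """Number of frames (and total slots) needed to read all tags."""
        unread, frames, slots = n_tags, 0, 0
        while unread:
            choices = [random.randrange(frame_size) for _ in range(unread)]
            # a slot is successful only if exactly one tag chose it
            singles = sum(1 for s in set(choices) if choices.count(s) == 1)
            unread -= singles
            frames += 1
            slots += frame_size
        return frames, slots

    for n in (50, 100, 200):
        frames, slots = framed_aloha_rounds(n)
        print(f"{n} tags: {frames} frames, {slots} slots")
    ```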

  20. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
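
    The basic veto algorithm analyzed in this paper samples an emission scale from a Sudakov-type distribution by generating trial scales from a simpler overestimate g ≥ f and accepting each trial with probability f/g. A minimal single-channel sketch with a cutoff (Python; the densities are toy choices, not a parton-shower splitting kernel):

    ```python
    import math, random
    random.seed(2)

    def veto_sample(t_start=1.0, t_cut=0.01, f=lambda t: t, g_const=1.0):
        """Sample the first 'emission' scale below t_start from the Sudakov form factor
        exp(-integral of f), using a constant overestimate g(t) = g_const >= f(t)."""
        t = t_start
        while True:
            # trial scale from the overestimate: solves exp(-g_const*(t_old - t)) = r
            t += math.log(random.random()) / g_const
            if t <= t_cut:
                return None                       # evolution reached the cutoff: no emission
            if random.random() < f(t) / g_const:  # veto step: accept with probability f/g
                return t

    samples = [veto_sample() for _ in range(100000)]
    accepted = [t for t in samples if t is not None]
    print(f"emission fraction: {len(accepted) / len(samples):.3f}, "
          f"mean accepted scale: {sum(accepted) / len(accepted):.3f}")
    ```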

  1. Analysis of reproductive performance of lactating cows on large dairy farms using machine learning algorithms.

    PubMed

    Caraviello, D Z; Weigel, K A; Craven, M; Gianola, D; Cook, N B; Nordlund, K V; Fricke, P M; Wiltbank, M C

    2006-12-01

    The fertility of lactating dairy cows is economically important, but the mean reproductive performance of Holstein cows has declined during the past 3 decades. Traits such as first-service conception rate and pregnancy status at 150 d in milk (DIM) are influenced by numerous explanatory factors common to specific farms or individual cows on these farms. Machine learning algorithms offer great flexibility with regard to problems of multicollinearity, missing values, or complex interactions among variables. The objective of this study was to use machine learning algorithms to identify factors affecting the reproductive performance of lactating Holstein cows on large dairy farms. This study used data from farms in the Alta Genetics Advantage progeny-testing program. Production and reproductive records from 153 farms were obtained from on-farm DHI-Plus, Dairy Comp 305, or PCDART herd management software. A survey regarding management, facilities, labor, nutrition, reproduction, genetic selection, climate, and milk production was completed by managers of 103 farms; body condition scores were measured by a single evaluator on 63 farms; and temperature data were obtained from nearby weather stations. The edited data consisted of 31,076 lactation records, 14,804 cows, and 317 explanatory variables for first-service conception rate and 17,587 lactation records, 9,516 cows, and 341 explanatory variables for pregnancy status at 150 DIM. An alternating decision tree algorithm for first-service conception rate classified 75.6% of records correctly and identified the frequency of hoof trimming maintenance, type of bedding in the dry cow pen, type of cow restraint system, and duration of the voluntary waiting period as key explanatory variables. An alternating decision tree algorithm for pregnancy status at 150 DIM classified 71.4% of records correctly and identified bunk space per cow, temperature for thawing semen, percentage of cows with low body condition scores, number of

  2. The performance of phylogenetic algorithms in estimating haplotype genealogies with migration.

    PubMed

    Salzburger, Walter; Ewing, Greg B; Von Haeseler, Arndt

    2011-05-01

    Genealogies estimated from haplotypic genetic data play a prominent role in various biological disciplines in general and in phylogenetics, population genetics and phylogeography in particular. Several software packages have specifically been developed for the purpose of reconstructing genealogies from closely related, and hence, highly similar haplotype sequence data. Here, we use simulated data sets to test the performance of traditional phylogenetic algorithms, neighbour-joining, maximum parsimony and maximum likelihood in estimating genealogies from nonrecombining haplotypic genetic data. We demonstrate that these methods are suitable for constructing genealogies from sets of closely related DNA sequences with or without migration. As genealogies based on phylogenetic reconstructions are fully resolved, but not necessarily bifurcating, and without reticulations, these approaches outperform widespread 'network' constructing methods. In our simulations of coalescent scenarios involving panmictic, symmetric and asymmetric migration, we found that phylogenetic reconstruction methods performed well, while the statistical parsimony approach as implemented in TCS performed poorly. Overall, parsimony as implemented in the PHYLIP package performed slightly better than other methods. We further point out that we are not making the case that widespread 'network' constructing methods are bad, but that traditional phylogenetic tree finding methods are applicable to haplotypic data and exhibit reasonable performance with respect to accuracy and robustness. We also discuss some of the problems of converting a tree to a haplotype genealogy, in particular that it is nonunique. PMID:21457168

  3. Basic Performance of the Standard Retrieval Algorithm for the Dual-frequency Precipitation Radar

    NASA Astrophysics Data System (ADS)

    Seto, S.; Iguchi, T.; Kubota, T.

    2013-12-01

    applied again by using the adjusted k-Z relations. By iterating a combination of the HB method and the DFR method, k-Z relations are improved. This is termed the HB-DFR method (Seto et al. 2013). Though k-Z relations are adjusted simultaneously for all range bins when the SRT method is used, the HB-DFR method can adjust the k-Z relation at a range bin independently of other range bins. Therefore, in this method, the DSD is represented on a two-dimensional plane. The HB-DFR method has been incorporated in the DPR Level 2 standard algorithm (L2). The basic performance of L2 is tested with a synthetic dataset produced from the TRMM/PR standard product. In L2, when only the KuPR radar measurement is used, precipitation estimates are in good agreement with the corresponding rain rate estimates in the PR standard product. However, when both KuPR and KaPR radar measurements are used and the HB-DFR method is applied, the precipitation rate estimates deviate from the estimates in the PR standard product. This is partly because of the poor performance of the HB-DFR method and partly because of the overestimation of PIA by the dual-frequency SRT. Improvement of the standard algorithm, particularly for the dual-frequency measurement, will be presented.

  4. Performance Analysis for Acoustic Echo Cancellation Systems based on Variable Step Size NLMS algorithms

    NASA Astrophysics Data System (ADS)

    Hegde, Rajeshwari; Balachandra, K.; Rao, Madhusudhan

    2011-12-01

    Acoustic echo cancellation is an essential signal enhancement tool in hands-free communication. Loudspeaker signals are picked up by a microphone and are fed back to the correspondent, resulting in an undesired echo. Nowadays, adaptive filtering techniques are typically employed to suppress this echo. In acoustic applications, long filters need to be adapted for sufficient echo suppression. Classical adaptation schemes such as LMS are quite expensive for accurate echo path modeling in highly reverberating environments. In order to cope with dynamic signals, the step size μ is often normalized by making it inversely proportional to the energy of the input signal x. This normalized version of LMS (NLMS) is typically used in practice. This paper discusses various variable step-size NLMS-based algorithms that can be implemented in acoustic echo cancellation applications. The performance of these algorithms is evaluated in terms of ERLE and NSEC curves, and a comparison between them is made. A simple and novel double-talk detection scheme is also proposed in this paper.
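
    The NLMS update that these variable step-size variants build on is compact enough to show directly. Below is a minimal echo-cancellation sketch in Python with a fixed normalized step size and a synthetic echo path; the variable step-size rules compared in the paper are not reproduced.

    ```python
    import numpy as np

    def nlms_echo_canceller(far_end, mic, order=64, mu=0.5, eps=1e-6):
        """Cancel the echo of far_end contained in mic using an NLMS adaptive filter."""
        w = np.zeros(order)
        err = np.zeros_like(mic)
        for n in range(order, len(mic)):
            x = far_end[n - order:n][::-1]     # most recent far-end samples
            y = w @ x                          # echo estimate
            e = mic[n] - y                     # residual after cancellation
            w += (mu / (eps + x @ x)) * e * x  # normalized LMS update
            err[n] = e
        return err

    rng = np.random.default_rng(0)
    far_end = rng.standard_normal(8000)
    echo_path = rng.standard_normal(64) * np.exp(-np.arange(64) / 10.0)   # synthetic room response
    mic = np.convolve(far_end, echo_path, mode="full")[:8000] + 0.01 * rng.standard_normal(8000)
    residual = nlms_echo_canceller(far_end, mic)
    print("echo attenuation (dB):",
          10 * np.log10(np.mean(mic[1000:] ** 2) / np.mean(residual[1000:] ** 2)))
    ```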

  5. Control performance evaluation of railway vehicle MR suspension using fuzzy sky-ground hook control algorithm

    NASA Astrophysics Data System (ADS)

    Ha, S. H.; Choi, S. B.; Lee, G. S.; Yoo, W. H.

    2013-02-01

    This paper presents a control performance evaluation of a railway vehicle featuring a semi-active suspension system that uses a magnetorheological (MR) fluid damper. In order to achieve this goal, a nine-degree-of-freedom railway vehicle model, which includes the car body and bogie, is established. The wheel-set data are loaded from measured values of the railway vehicle. The MR damper system is incorporated with the governing equation of motion of the railway vehicle model, which includes the secondary suspension. To illustrate the effectiveness of the controlled MR dampers on the railway vehicle suspension system, a control law using the sky-ground hook controller is adopted. This controller accounts for both vibration control of the car body and increased stability of the bogie by adopting a weighting parameter between the two performance requirements. The parameters are appropriately determined by employing a fuzzy algorithm associated with two fuzzy variables: the lateral speed of the car body and the lateral performance of the bogie. Computer simulation results of control performance, such as vibration control and stability analysis, are presented in the time and frequency domains.

  6. Student-Led Project Teams: Significance of Regulation Strategies in High- and Low-Performing Teams

    ERIC Educational Resources Information Center

    Ainsworth, Judith

    2016-01-01

    We studied group and individual co-regulatory and self-regulatory strategies of self-managed student project teams using data from intragroup peer evaluations and a postproject survey. We found that high team performers shared their research and knowledge with others, collaborated to advise and give constructive criticism, and demonstrated moral…

  7. MOF Thin Film-Coated Metal Oxide Nanowire Array: Significantly Improved Chemiresistor Sensor Performance.

    PubMed

    Yao, Ming-Shui; Tang, Wen-Xiang; Wang, Guan-E; Nath, Bhaskar; Xu, Gang

    2016-07-01

    A strategy for combining metal oxides and metal-organic frameworks is proposed, for the first time, to design new materials for sensing volatile organic compounds. The prepared ZnO@ZIF-CoZn core-sheath nanowire arrays show greatly enhanced performance not only in selectivity but also in response, recovery behavior, and working temperature. PMID:27153113

  8. Significant Returns in Engagement and Performance with a Free Teaching App

    ERIC Educational Resources Information Center

    Green, Alan

    2016-01-01

    Pedagogical research shows that teaching methods other than traditional lectures may result in better outcomes. However, lecture remains the dominant method in economics, likely due to high implementation costs of methods shown to be effective in the literature. In this article, the author shows significant benefits of using a teaching app for…

  9. The Performance Analysis of a 3d Map Embedded Ins/gps Fusion Algorithm for Seamless Vehicular Navigation in Elevated Highway Environments

    NASA Astrophysics Data System (ADS)

    Lee, Y. H.; Chiang, K. W.

    2012-07-01

    In this study, a 3D Map Matching (3D MM) algorithm is embedded in the current INS/GPS fusion algorithm to enhance the sustainability and accuracy of INS/GPS integration systems, especially the height component. In addition, this study proposes an effective solution to the limitation of current commercial vehicular navigation systems, which fail to distinguish whether the vehicle is moving on an elevated highway or on the road under it because those systems do not have sufficient height resolution. To validate the performance of the proposed 3D MM embedded INS/GPS integration algorithms, two scenarios were considered in the test area: paths under freeways and streets between tall buildings, where the GPS signal is easily obstructed or interfered with. The test platform was mounted on top of a land vehicle, with the systems installed inside the vehicle. The IMUs applied include the SPAN-LCI (0.1 deg/hr gyro bias) from NovAtel, which was used as the reference system, and two MEMS IMUs with different specifications for verifying the performance of the proposed algorithm. The preliminary results indicate that the proposed algorithms are able to significantly improve the accuracy of the positional components in GPS-denied environments with the use of INS/GPS integrated systems in SPP mode.

  10. Phytoplankton blooms in the Patagonian shelf-break and vicinities: bio-optical signature and performance of ocean color algorithms

    NASA Astrophysics Data System (ADS)

    Garcia, C. A.; Ferreira, A.; Dogliotti, A. I.; Tavano, V. M.; High Latitude Oceanography Group-GOAL

    2011-12-01

    The PATagonia EXperiment (PATEX) is a Brazilian research project, which has the overall objective of characterizing the environmental constraints, phytoplankton assemblages, primary production rates, bio-optical characteristics, and air-sea CO2 fluxes in waters along the Argentinean shelf-break during austral spring and summer. A set of seven PATEX cruises was conducted from 2004 to 2009 (a total of 189 CTD stations) covering a broad region, in waters whose surface chlorophyll-a concentration (chla) varied from 0.10 to 22.30 mg m-3. This wide range of phytoplankton biomass reflected several stages of the phytoplankton blooms, with relatively higher chlorophyll associated with microplankton (picoplankton and/or nanoplankton) dominance during the spring (late summer) cruises in the shelf-break blooms. A special cruise (PATEX 5) was specifically designed for sampling a coccolithophorid bloom on the Patagonia inner shelf (Garcia et al., 2011, JGR, 116, C03025). Overall, distinct efficiencies in absorption and scattering properties were observed due to differences in algal cell size and pigment composition. Cluster analysis performed on both chla-specific absorption and scattering coefficients has shown the relative contributions by each of three cell-size classes. A hierarchical cluster analysis was also applied to "in situ" hyperspectral remote sensing reflectance spectra in order to classify the whole spectra set into coherent groups. Three spectrally distinct classes were well defined, and they are significantly associated with chla ranges. The NASA OC4v6 chlorophyll algorithm showed relatively good performance when combining bio-optical data from all cruises (r2=0.78, slope of 0.86 and intercept of 0.03), with a positive bias (Mean Relative Percentage Difference, RPD=11.53%). The impact of chlorophyll-specific absorption and scattering coefficients on the performance of empirical ocean color algorithms is also assessed.
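
    The match-up statistics quoted for OC4v6 (r2, regression slope and intercept, and the mean relative percentage difference) are conventional and easy to reproduce. The sketch below (Python, made-up chlorophyll pairs) computes them, assuming the regression is done in log10 space as is common in ocean color validation; the OC4v6 band-ratio polynomial itself is not reimplemented.

    ```python
    import numpy as np

    def validation_stats(chla_insitu, chla_algo):
        """r^2, slope, intercept (log10 space) and mean relative percentage difference (RPD)."""
        x, y = np.log10(chla_insitu), np.log10(chla_algo)
        slope, intercept = np.polyfit(x, y, 1)
        r2 = np.corrcoef(x, y)[0, 1] ** 2
        rpd = 100.0 * np.mean((chla_algo - chla_insitu) / chla_insitu)
        return r2, slope, intercept, rpd

    # made-up match-up pairs spanning roughly 0.1 to 22 mg m-3
    rng = np.random.default_rng(3)
    chla_insitu = 10 ** rng.uniform(-1, 1.35, 150)
    chla_algo = chla_insitu * 10 ** (0.08 * rng.standard_normal(150)) * 1.1   # noisy, slightly biased
    print("r2=%.2f slope=%.2f intercept=%.2f RPD=%.1f%%"
          % validation_stats(chla_insitu, chla_algo))
    ```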

  11. Optimizing the performance of single-mode laser diode system using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Aydin, Elif; Yildirim, Remzi

    2004-07-01

    In this correspondence, micro-genetic algorithm (MGA) application results for optimizing the performance of electronic feedback of a laser diode are presented. The goal of optimization is to find the maximum bandwidth of the laser diode with electronic feedback used in fiber optic digital communication. A numerical analysis of the system theory of the single-mode laser diode to obtain numerical results of the gain, the pulse response, and the harmonic distortion for electronic feedback is also presented. The dependence of the system gain on the feedback gain and delay is examined. The pulse response is studied and it is shown that a transmission rate over 1 Gbyte/s can be achieved.

  12. Compressive Sensing on Manifolds Using a Nonparametric Mixture of Factor Analyzers: Algorithm and Performance Bounds

    PubMed Central

    Chen, Minhua; Silva, Jorge; Paisley, John; Wang, Chunping; Dunson, David; Carin, Lawrence

    2013-01-01

    Nonparametric Bayesian methods are employed to constitute a mixture of low-rank Gaussians, for data x ∈ ℝ^N that are of high dimension N but are constrained to reside in a low-dimensional subregion of ℝ^N. The number of mixture components and their rank are inferred automatically from the data. The resulting algorithm can be used for learning manifolds and for reconstructing signals from manifolds, based on compressive sensing (CS) projection measurements. The statistical CS inversion is performed analytically. We derive the required number of CS random measurements needed for successful reconstruction, based on easily-computed quantities, drawing on block-sparsity properties. The proposed methodology is validated on several synthetic and real datasets. PMID:23894225

  13. Lithium deficient mesoporous Li2-xMnSiO4 with significantly improved electrochemical performance

    NASA Astrophysics Data System (ADS)

    Wang, Haiyan; Hou, Tianli; Sun, Dan; Huang, Xiaobing; He, Hanna; Tang, Yougen; Liu, Younian

    2014-02-01

    Li2-xMnSiO4 compounds with mesoporous structure are first proposed in the present work. It is interesting to note that the lithium deficient compounds exhibit much higher electrochemical performance in comparison with the stoichiometric one. Among these compounds, Li1.8MnSiO4 shows the best electrochemical performance. It is found that mesoporous Li1.8MnSiO4 without carbon coating delivers a maximum discharge capacity of 110.9 mAh g-1 at 15 mA g-1, maintaining 90.8 mAh g-1 after 25 cycles, while that of the stoichiometric one is only 48.0 mAh g-1, with 12.5 mAh g-1 remaining. The superior properties are mainly due to the great improvement of electronic conductivity and structure stability, as well as suppressed charge-transfer resistance.

  14. The Proposal of Key Performance Indicators in Facility Management and Determination the Weights of Significance

    NASA Astrophysics Data System (ADS)

    Rimbalová, Jarmila; Vilčeková, Silvia

    2013-11-01

    The practice of facilities management is rapidly evolving with the increasing interest in the discourse of sustainable development. The industry and its market are forecast to develop to include non-core functions: activities traditionally not associated with this profession, but which are increasingly being addressed by facilities managers. The scale of growth in the built environment and the consequential growth of the facility management sector are anticipated to be enormous. Key Performance Indicators (KPIs) are measures that provide essential information about the performance of facility services delivery. In selecting KPIs, it is critical to limit them to those factors that are essential to the organization reaching its goals. It is also important to keep the number of KPIs small, so that everyone's attention stays focused on achieving the same KPIs. This paper deals with the determination of the weights of significance of facility management KPIs in terms of the design and use of sustainable buildings.

  15. Performance-based seismic design of steel frames utilizing colliding bodies algorithm.

    PubMed

    Veladi, H

    2014-01-01

    A pushover analysis method based on semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to the conventional design methods to show the power or weakness of the algorithm. PMID:25202717

  16. Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm

    PubMed Central

    Veladi, H.

    2014-01-01

    A pushover analysis method based on semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to the conventional design methods to show the power or weakness of the algorithm. PMID:25202717

  17. Performance Comparison of Attribute Set Reduction Algorithms in Stock Price Prediction - A Case Study on Indian Stock Data

    NASA Astrophysics Data System (ADS)

    Sivakumar, P. Bagavathi; Mohandas, V. P.

    Stock price prediction and stock trend prediction are the two major research problems of financial time series analysis. In this work, the performance of various attribute set reduction algorithms was compared for short-term stock price prediction. Forward selection, backward elimination, optimized selection, optimized selection based on brute force, weight-guided selection, and optimized selection based on evolutionary principles were used. Different selection schemes and crossover types were explored. To supplement learning and modeling, a support vector machine was also used in combination. The algorithms were applied to real-time Indian stock data, namely CNX Nifty. The experimental study was conducted using the open-source data mining tool RapidMiner. Performance was compared in terms of root mean squared error, squared error, and execution time. The results indicate the superiority of evolutionary algorithms; the optimized selection algorithm based on evolutionary principles outperforms the others.
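
    The attribute set reduction schemes compared above are wrapper-style feature selection methods. As a point of reference only, the sketch below shows wrapper-based forward selection around a support vector regressor on synthetic data; the data, scoring setup and stopping rule are illustrative assumptions, not the RapidMiner configuration used in the study.

```python
# Minimal sketch of wrapper-style forward selection around an SVM regressor.
# Synthetic data and the evaluation setup are illustrative assumptions only.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))          # 8 candidate attributes (e.g., lagged prices)
y = 0.7 * X[:, 0] - 0.4 * X[:, 3] + rng.normal(scale=0.1, size=200)

def cv_rmse(features):
    """Cross-validated RMSE of an SVR trained on the selected attribute subset."""
    scores = cross_val_score(SVR(kernel="rbf"), X[:, features], y,
                             scoring="neg_root_mean_squared_error", cv=5)
    return -scores.mean()

selected, remaining, best_rmse = [], list(range(X.shape[1])), np.inf
while remaining:
    # Try adding each remaining attribute and keep the one that helps most.
    candidate = min(remaining, key=lambda f: cv_rmse(selected + [f]))
    rmse = cv_rmse(selected + [candidate])
    if rmse >= best_rmse:              # stop when no attribute improves the score
        break
    selected.append(candidate)
    remaining.remove(candidate)
    best_rmse = rmse

print("selected attributes:", selected, "CV RMSE: %.3f" % best_rmse)
```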

  18. FOCUSED R&D FOR ELECTROCHROMIC SMART WINDOWS: SIGNIFICANT PERFORMANCE AND YIELD ENHANCEMENTS

    SciTech Connect

    Marcus Milling

    2004-09-23

    Developments made under this program will play a key role in underpinning the technology for producing EC devices. It is anticipated that the work begun during this period will continue to improve materials properties, drive yields up and costs down, increase durability, and make manufacture simpler and more cost-effective. It is hoped that this will contribute to a successful and profitable industry, which will help reduce energy consumption and improve comfort for building occupants worldwide. The first major task involved improvements to the materials used in the process. The improvements made as a result of the work done during this project have contributed to enhanced performance, including dynamic range, uniformity and electrical characteristics. Another major objective of the project was to develop technology to improve yield, reduce cost, and facilitate manufacturing of EC products. Improvements in overall EC device performance directly attributable to the work carried out as part of this project have been accompanied by an improvement in the repeatability and consistency of the production process. Innovative test facilities for characterizing devices in a timely and well-defined manner have been developed. The equipment has been designed in such a way as to make scaling up to accommodate the higher throughput necessary for manufacturing relatively straightforward. Finally, the third major goal was to assure the durability of the EC product, both through developments aimed at improving product performance and through the development of novel procedures to test the durability of this new product. Both aspects have been demonstrated: by carrying out a number of different durability tests, in-house and by independent third-party testers, and by developing several novel durability tests.

  19. Impact of different disassembly line balancing algorithms on the performance of dynamic kanban system for disassembly line

    NASA Astrophysics Data System (ADS)

    Kizilkaya, Elif A.; Gupta, Surendra M.

    2005-11-01

    In this paper, we compare the impact of different disassembly line balancing (DLB) algorithms on the performance of our recently introduced Dynamic Kanban System for Disassembly Line (DKSDL) to accommodate the vagaries of uncertainties associated with disassembly and remanufacturing processing. We consider a case study to illustrate the impact of various DLB algorithms on the DKSDL. The approach to the solution, scenario settings, results and the discussions of the results are included.

  20. Significantly enhancing supercapacitive performance of nitrogen-doped graphene nanosheet electrodes by phosphoric acid activation.

    PubMed

    Wang, Ping; He, Haili; Xu, Xiaolong; Jin, Yongdong

    2014-02-12

    In this work, we present a new method to synthesize phosphorus- and nitrogen-containing graphene nanosheets, which uses dicyandiamide both to prevent the aggregation of graphene oxide and to act as the nitrogen precursor, with phosphoric acid (H3PO4) as the activation reagent. We found that, through the H3PO4 activation, the samples exhibit remarkably enhanced supercapacitive performance; depending on the amount of H3PO4 introduced, the specific capacitance of the samples increases gradually from 7.6 to 244.6 F g(-1). The samples also exhibit good rate capability and excellent stability (up to 10 000 cycles). Based on transmission electron microscopy, high-resolution transmission electron microscopy, X-ray diffraction, X-ray photoelectron spectroscopy and Brunauer-Emmett-Teller analyses, the large pore volume and phosphorus-related functional groups induced in the product by the H3PO4 treatment are assumed to be responsible for the enhancement. PMID:24456232

  1. Postexercise Glycogen Recovery and Exercise Performance is Not Significantly Different Between Fast Food and Sport Supplements.

    PubMed

    Cramer, Michael J; Dumke, Charles L; Hailes, Walter S; Cuddy, John S; Ruby, Brent C

    2015-10-01

    A variety of dietary choices are marketed to enhance glycogen recovery after physical activity. Past research informs recommendations regarding the timing, dose, and nutrient compositions to facilitate glycogen recovery. This study examined the effects of isoenergetic sport supplements (SS) vs. fast food (FF) on glycogen recovery and exercise performance. Eleven males completed two experimental trials in a randomized, counterbalanced order. Each trial included a 90-min glycogen depletion ride followed by a 4-hr recovery period. Absolute amounts of macronutrients (1.54 ± 0.27 g·kg-1 carbohydrate, 0.24 ± 0.04 g·kg-1 fat, and 0.18 ± 0.03 g·kg-1 protein) as either SS or FF were provided at 0 and 2 hr. Muscle biopsies were collected from the vastus lateralis at 0 and 4 hr post exercise. Blood samples were analyzed at 0, 30, 60, 120, 150, 180, and 240 min post exercise for insulin and glucose, with blood lipids analyzed at 0 and 240 min. A 20k time-trial (TT) was completed following the final muscle biopsy. There were no differences in the blood glucose and insulin responses. Similarly, rates of glycogen recovery were not different across the diets (6.9 ± 1.7 and 7.9 ± 2.4 mmol·kg wet weight-1·hr-1 for SS and FF, respectively). There was also no difference across the diets for TT performance (34.1 ± 1.8 and 34.3 ± 1.7 min for SS and FF, respectively). These data indicate that short-term food options to initiate glycogen resynthesis can include dietary options not typically marketed as sports nutrition products, such as fast food menu items. PMID:25811308

  2. Significant Performance Enhancement in Asymmetric Supercapacitors based on Metal Oxides, Carbon nanotubes and Neutral Aqueous Electrolyte

    PubMed Central

    Singh, Arvinder; Chandra, Amreesh

    2015-01-01

    Amongst the materials being investigated for supercapacitor electrodes, carbon-based materials are the most widely studied. However, pure carbon materials suffer from inherent physical processes which limit the maximum specific energy and power that can be achieved in an energy storage device. Therefore, the use of carbon-based composites with suitable nanomaterials is attaining prominence. The synergistic effect between the pseudocapacitive nanomaterials (high specific energy) and carbon (high specific power) is expected to deliver the desired improvements. We report the fabrication of a high-capacitance asymmetric supercapacitor based on electrodes of composites of SnO2 and V2O5 with multiwall carbon nanotubes and a neutral 0.5 M Li2SO4 aqueous electrolyte. The advantages of the fabricated asymmetric supercapacitors are compared with the results published in the literature. The widened operating voltage window is due to the higher over-potential of electrolyte decomposition and a large difference in the work functions of the metal oxides used. The charge-balanced device delivers a specific capacitance of ~198 F g−1 with a corresponding specific energy of ~89 Wh kg−1 at 1 A g−1. The proposed composite systems have shown great potential for fabricating high-performance supercapacitors. PMID:26494197

  3. Significant Performance Enhancement in Asymmetric Supercapacitors based on Metal Oxides, Carbon nanotubes and Neutral Aqueous Electrolyte

    NASA Astrophysics Data System (ADS)

    Singh, Arvinder; Chandra, Amreesh

    2015-10-01

    Amongst the materials being investigated for supercapacitor electrodes, carbon-based materials are the most widely studied. However, pure carbon materials suffer from inherent physical processes which limit the maximum specific energy and power that can be achieved in an energy storage device. Therefore, the use of carbon-based composites with suitable nanomaterials is attaining prominence. The synergistic effect between the pseudocapacitive nanomaterials (high specific energy) and carbon (high specific power) is expected to deliver the desired improvements. We report the fabrication of a high-capacitance asymmetric supercapacitor based on electrodes of composites of SnO2 and V2O5 with multiwall carbon nanotubes and a neutral 0.5 M Li2SO4 aqueous electrolyte. The advantages of the fabricated asymmetric supercapacitors are compared with the results published in the literature. The widened operating voltage window is due to the higher over-potential of electrolyte decomposition and a large difference in the work functions of the metal oxides used. The charge-balanced device delivers a specific capacitance of ~198 F g-1 with a corresponding specific energy of ~89 Wh kg-1 at 1 A g-1. The proposed composite systems have shown great potential for fabricating high-performance supercapacitors.

  4. Authoring experience: the significance and performance of storytelling in Socratic dialogue with rehabilitating cancer patients.

    PubMed

    Knox, Jeanette Bresson Ladegaard; Svendsen, Mette Nordahl

    2015-08-01

    This article examines the storytelling aspect in philosophizing with rehabilitating cancer patients in small Socratic dialogue groups (SDG). Recounting an experience to illustrate a philosophical question chosen by the participants is the traditional point of departure for the dialogical exchange. However, narrating is much more than a beginning point or the skeletal framework of events and it deserves more scholarly attention than hitherto given. Storytelling pervades the whole Socratic process and impacts the conceptual analysis in a SDG. In this article we show how the narrative aspect became a rich resource for the compassionate bond between participants and how their stories cultivated the abstract reflection in the group. In addition, the aim of the article is to reveal the different layers in the performance of storytelling, or of authoring experience. By picking, poking and dissecting an experience through a collaborative effort, most participants had their initial experience existentially refined and the chosen concept of which the experience served as an illustration transformed into a moral compass to be used in self-orientation post cancer. PMID:25894237

  5. Performance of the Lidar Design and Data Algorithms for the GLAS Global Cloud and Aerosol Measurements

    NASA Technical Reports Server (NTRS)

    Spinhirne, James D.; Palm, Stephen P.; Hlavka, Dennis L.; Hart, William D.

    2007-01-01

    The Geoscience Laser Altimeter System (GLAS) launched in early 2003 is the first polar orbiting satellite lidar. The instrument design includes high performance observations of the distribution and optical scattering cross sections of atmospheric clouds and aerosol. The backscatter lidar operates at two wavelengths, 532 and 1064 nm. For the atmospheric cloud and aerosol measurements, the 532 nm channel was designed for ultra high efficiency with solid state photon counting detectors and etalon filtering. Data processing algorithms were developed to calibrate and normalize the signals and produce global scale data products of the height distribution of cloud and aerosol layers and their optical depths and particulate scattering cross sections up to the limit of optical attenuation. The paper will concentrate on the effectiveness and limitations of the lidar channel design and data product algorithms. Both atmospheric receiver channels meet and exceed their design goals. Geiger Mode Avalanche Photodiode modules are used for the 532 nm signal. The operational experience is that some signal artifacts and non-linearity require correction in data processing. As with all photon counting detectors, a pulse-pile-up calibration is an important aspect of the measurement. Additional signal corrections were found to be necessary relating to correction of a saturation signal-run-on effect and also, for daytime data, a small range dependent variation in the responsivity. It was possible to correct for these signal errors in data processing and achieve the requirement to accurately profile aerosol and cloud cross section down to 10-7 1/m-sr. The analysis procedure employs a precise calibration against molecular scattering in the mid-stratosphere. The 1064 nm channel detection employs a high-speed analog APD for surface and atmospheric measurements where the detection sensitivity is limited by detector noise and is over an order of magnitude less than at 532 nm. A unique feature of

  6. Performance analysis of hybrid algorithms for lossless compression of climate data

    NASA Astrophysics Data System (ADS)

    Mummadisetty, Bharath Chandra

    Climate data is very important and, at the same time, voluminous. Every minute a new entry is recorded for different climate parameters in climate databases around the world. Given the explosive growth of data that needs to be transmitted and stored, there is a necessity to focus on developing better transmission and storage technologies. Data compression is known to be a viable and effective solution to reduce bandwidth and storage requirements of bulk data. So, the goal is to develop the best compression methods for climate data. The methodology used is based on predictive analysis. The focus is to implement a hybrid algorithm which utilizes the functionality of Artificial Neural Networks (ANN) for prediction of climate data. ANN is a very efficient tool to generate models for predicting climate data with great accuracy. Two types of ANNs, the Multilayer Perceptron (MLP) and the Cascade Feedforward Neural Network (CFNN), are used. It is beneficial to take advantage of ANN and combine its output with lossless compression algorithms such as differential encoding and Huffman coding to generate high compression ratios. The performance of the two techniques based on the MLP and CFNN types is compared using metrics including compression ratio, Mean Square Error (MSE) and Root Mean Square Error (RMSE). The two methods are also compared with a conventional method of differential encoding followed by Huffman coding. The results indicate that MLP outperforms CFNN. Also, the compression ratios of both proposed methods are higher than those obtained by the standard method. Compression ratios as high as 10.3, 9.8, and 9.54 are obtained for precipitation, photosynthetically active radiation, and solar radiation datasets, respectively.
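
    The hybrid scheme described above chains a predictor, differential encoding of the prediction residuals, and Huffman coding. The sketch below illustrates that pipeline with a trivial previous-value predictor standing in for the MLP/CFNN models; the toy series, quantization step and bit-count baseline are assumptions.

```python
# Minimal sketch of the prediction + differencing + Huffman pipeline. A trivial
# previous-value predictor stands in for the paper's MLP/CFNN models; the toy
# data, quantization step and raw-bit baseline are illustrative assumptions.
import heapq
import itertools
from collections import Counter
import numpy as np

rng = np.random.default_rng(1)
signal = np.cumsum(rng.normal(size=1000))          # toy "climate" series
quantized = np.round(signal, 1)                    # fixed-point representation

predicted = np.concatenate(([quantized[0]], quantized[:-1]))   # predictor: previous value
residuals = np.round((quantized - predicted) * 10).astype(int) # differential encoding

def huffman_code_lengths(symbols):
    """Return {symbol: code length in bits} for a Huffman code of the symbols."""
    counts = Counter(symbols)
    if len(counts) == 1:                            # degenerate single-symbol case
        return {next(iter(counts)): 1}
    tie = itertools.count()                         # tie-breaker so heapq never compares dicts
    heap = [(c, next(tie), {s: 0}) for s, c in counts.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        c1, _, d1 = heapq.heappop(heap)
        c2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (c1 + c2, next(tie), merged))
    return heap[0][2]

lengths = huffman_code_lengths(residuals.tolist())
coded_bits = sum(lengths[s] for s in residuals.tolist())
raw_bits = residuals.size * 64                      # baseline: 64-bit floats stored directly
print("compression ratio: %.1f" % (raw_bits / coded_bits))
```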

  7. Light-concentrating plasmonic Au superstructures with significantly visible-light-enhanced catalytic performance.

    PubMed

    Yang, Jinhu; Li, Ying; Zu, Lianhai; Tong, Lianming; Liu, Guanglei; Qin, Yao; Shi, Donglu

    2015-04-22

    Noble metals are well-known for their surface plasmon resonance effect that enables strong light absorption, typically in the visible regions for gold and silver. However, unlike semiconductors, noble metals are commonly considered incapable of catalyzing reactions via photogenerated electron-hole pairs due to their continuous energy band structures. So far, photonically activated catalytic systems based on pure noble metal nanostructures have seldom been reported. Here, we report the development of three different novel plasmonic Au superstructures comprised of Au nanoparticles, multiple-twinned nanoparticles and nanoworms, respectively, assembled on the surfaces of SiO2 nanospheres via a well-designed synthetic strategy. It is found that these novel Au superstructures show enhanced broadband visible-light absorption due to the plasmon resonance coupling within the superstructures, and thus can effectively focus the energy of photon fluxes to generate many more excited hot electrons and holes for promoting catalytic reactions. Accordingly, these Au superstructures exhibit significantly visible-light-enhanced catalytic efficiency (up to ∼264% enhancement) for the commercial reaction of p-nitrophenol reduction. PMID:25840556

  8. System Performance of an Integrated Airborne Spacing Algorithm with Ground Automation

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.; Wilson, Sara R.; Baxley, Brian T.

    2016-01-01

    The National Aeronautics and Space Administration's (NASA's) first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature ATM technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the Terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools to enable precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise spacing behind another aircraft. Recent simulations and IM algorithm development at NASA have focused on trajectory-based IM operations where aircraft equipped with IM avionics are expected to achieve a spacing goal, assigned by air traffic controllers, at the final approach fix. The recently published IM Minimum Operational Performance Standards describe five types of IM operations. This paper discusses the results and conclusions of a human-in-the-loop simulation that investigated three of those IM operations. The results presented in this paper focus on system performance and integration metrics. Overall, the IM operations conducted in this simulation integrated well with ground-based decision support tools, and certain types of IM operations were able to provide improved spacing precision at the final approach fix; however, some issues were identified that should be addressed prior to implementing IM procedures into real-world operations.

  9. Performance-based semi-active control algorithm for protecting base isolated buildings from near-fault earthquakes

    NASA Astrophysics Data System (ADS)

    Mehrparvar, Behnam; Khoshnoudian, Taramarz

    2012-03-01

    Base isolated structures have been found to be at risk in near-fault regions as a result of long period pulses that may exist in near-source ground motions. Various control strategies, including passive, active and semi-active control systems, have been investigated to overcome this problem. This study focuses on the development of a semi-active control algorithm based on several performance levels anticipated from an isolated building during different levels of ground shaking corresponding to various earthquake hazard levels. The proposed performance-based algorithm is based on a modified version of the well-known semi-active skyhook control algorithm. The proposed control algorithm changes the control gain depending on the level of shaking imposed on the structure. The proposed control system has been evaluated using a series of analyses performed on a base isolated benchmark building subjected to seven pairs of scaled ground motion records. Simulation results show that the newly proposed algorithm is effective in improving the structural and nonstructural performance of the building for selected earthquakes.
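
    For context, the baseline on-off skyhook law that such performance-based variants modify can be stated in a few lines: command a high damping coefficient when the isolator's absolute velocity and the relative velocity across the damper have the same sign, and a low one otherwise, with the gain scaled by the level of shaking. The sketch below is an illustration under assumed gains and thresholds, not the authors' algorithm.

```python
# Minimal sketch of on-off semi-active skyhook damping logic, the baseline law
# that a performance-based variant modifies. Gains, thresholds and the
# shaking-level scaling below are assumptions, not the authors' values.
def skyhook_damping(v_abs, v_rel, c_max=1.0e6, c_min=1.0e5):
    """Return damper coefficient: high when added damping dissipates energy, else low."""
    return c_max if v_abs * v_rel > 0.0 else c_min

def performance_based_gain(peak_ground_accel):
    """Scale the skyhook gain with the shaking level (illustrative thresholds, m/s^2)."""
    if peak_ground_accel < 1.0:      # serviceability-level shaking
        return 0.5
    elif peak_ground_accel < 4.0:    # design-level shaking
        return 1.0
    return 1.5                       # maximum considered earthquake

# Example: isolation level moving upward (v_abs > 0) while the damper extends (v_rel > 0).
c = performance_based_gain(2.5) * skyhook_damping(0.2, 0.05)
print("commanded damping coefficient: %.2e N*s/m" % c)
```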

  10. Performance of a benchmark implementation of the Van Slyke and Wets algorithm for stochastic programs on the Alliant FX/8

    SciTech Connect

    Ariyawansa, K.A.

    1991-04-01

    A benchmark parallel implementation of the Van Slyke and Wets algorithm for stochastic linear programs, and the results of a carefully designed numerical experiment on the Sequent/Balance using the implementation, are presented. An important use of this implementation is as a benchmark to assess the performance of approximation algorithms for stochastic linear programs. These approximation algorithms are best suited for implementation on parallel vector processors like the Alliant FX/8. Therefore, the performance of the benchmark implementation on the Alliant FX/8 is of interest. In this paper, we present results observed when a portion of the numerical experiment is performed on the Alliant FX/8. These results indicate that the implementation makes satisfactory use of the concurrency capabilities of the Alliant FX/8. They also indicate that the vectorization capabilities of the Alliant FX/8 are not satisfactorily utilized by the implementation. 9 refs., 9 tabs.

  11. Performances of the Data Compression and Binning Algorithms adopted on the VIRTIS-M Spectrometer onboard Rosetta

    NASA Astrophysics Data System (ADS)

    Giuppi, S.; Coradini, A.; Capaccioni, F.; Capria, M. T.; de Sanctis, M. C.; Erard, S.; Filacchione, G.; Tosi, F.; Ammannito, E.

    2011-10-01

    This paper describes the analysis of the consequences, on the retrieval of reflectance spectra, of using several levels of data compression algorithms and of degrading the instrument resolution by means of spectral/spatial binning. These are software algorithms available on the spectrometer VIRTIS-M onboard ESA's Rosetta spacecraft. Accurate knowledge of the performance of the compression algorithms and of the spectral/spatial binning will be very useful when Rosetta arrives at its main target (comet 67P/Churyumov-Gerasimenko), in order to define the limits of operability of VIRTIS-M and therefore to plan a set of observation types which can be adapted to the various phases and scientific objectives. This analysis used observations performed during Earth swing-by#1 and Earth swing-by#3, which were planned explicitly for this purpose.

  12. Performance of and Uncertainties in the Global Precipitation Measurement (GPM) Microwave Imager Retrieval Algorithm for Falling Snow Estimates

    NASA Astrophysics Data System (ADS)

    Skofronick Jackson, G.; Munchak, S. J.; Johnson, B. T.

    2014-12-01

    Retrievals of falling snow from space represent an important data set for understanding the Earth's atmospheric, hydrological, and energy cycles. While satellite-based remote sensing provides global coverage of falling snow events, the science is relatively new and retrievals are still undergoing development with challenges and uncertainties remaining. This work reports on the development and early post-launch testing of retrieval algorithms for the Global Precipitation Measurement (GPM) mission Core Observatory satellite launched in February 2014. In particular, we will report on GPM Microwave Imager (GMI) radiometer instrument algorithm performance with respect to falling snow detection and estimation. Throughout 2014, the at-launch GMI precipitation algorithms, based on a Bayesian framework, have been used with the new GPM data. The Bayesian framework for GMI retrievals is dependent on the a priori database used in the algorithm and how profiles are selected from that database. Our work has shown that knowing if the land surface is snow-covered, or not, can improve the performance of the algorithm. Improvements were made to the algorithm that allow for daily inputs of ancillary snow cover values and also updated Bayesian channel weights for various surface types. We will evaluate the algorithm that was released to the public in July 2014 and has already shown that it can detect and estimate falling snow. Performance factors to be investigated include the ability to detect falling snow at various rates, causes of errors, and performance for various surface types. A major source of ground validation data will be the NOAA NMQ dataset. We will also provide qualitative information on known uncertainties and errors associated with both the satellite retrievals and the ground validation measurements. We will report on the analysis of our falling snow validation completed by the time of the AGU conference.

  13. Simulated Performance of Algorithms for the Localization of Radioactive Sources from a Position Sensitive Radiation Detecting System (COCAE)

    SciTech Connect

    Karafasoulis, K.; Zachariadou, K.; Seferlis, S.; Kaissas, I.; Potiriadis, C.; Lambropoulos, C.; Loukas, D.

    2011-12-13

    Simulation studies are presented regarding the performance of algorithms that localize point-like radioactive sources detected by a position sensitive portable radiation instrument (COCAE). The source direction is estimated by using the List Mode Maximum Likelihood Expectation Maximization (LM-ML-EM) imaging algorithm. Furthermore, the source-to-detector distance is evaluated by three different algorithms based on the photo-peak count information of each detecting layer, the quality of the reconstructed source image, and the triangulation method. These algorithms have been tested on a large number of simulated photons over a wide energy range (from 200 keV to 2 MeV) emitted by point-like radioactive sources located at different orientations and source-to-detector distances.

  14. An evaluation of the performance of the soil temperature simulation algorithms used in the PRZM model.

    PubMed

    Tsiros, I X; Dimopoulos, I F

    2007-04-01

    Soil temperature simulation is an important component in environmental modeling since it is involved in several aspects of pollutant transport and fate. This paper deals with the performance of the soil temperature simulation algorithms of the well-known environmental model PRZM. Model results are compared and evaluated on the basis of the model's ability to predict in situ measured soil temperature profiles in an experimental plot during a 3-year monitoring study. The evaluation of the performance is based on linear regression statistics and typical model statistical errors such as the root mean square error (RMSE) and the normalized objective function (NOF). Results show that the model required minimal calibration to match the observed response of the system. Values of the determination coefficient R(2) were found to be in all cases around the value of 0.98, indicating a very good agreement between measured and simulated data. Values of the RMSE were found to be in the range of 1.2 to 1.4 degrees C, 1.1 to 1.4 degrees C, 0.9 to 1.1 degrees C, and 0.8 to 1.1 degrees C, for the examined 2, 5, 10 and 20 cm soil depths, respectively. Sensitivity analyses were also performed to investigate the influence of various factors involved in the energy balance equation at the ground surface on the soil temperature profiles. The results showed that the model was able to represent important processes affecting the soil temperature regime such as the combined effect of the heat transfer by convection between the ground surface and the atmosphere and the latent heat flux due to soil water evaporation. PMID:17454373
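
    The two error statistics quoted above are simple to reproduce; the sketch below assumes the common definition of the normalized objective function as the RMSE divided by the mean of the observations, with toy temperature values.

```python
# Sketch of the error statistics used above. NOF is taken here as RMSE divided
# by the mean of the observations, a common definition (assumption).
import numpy as np

def rmse(observed, simulated):
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return float(np.sqrt(np.mean((observed - simulated) ** 2)))

def nof(observed, simulated):
    return rmse(observed, simulated) / float(np.mean(observed))

obs = [12.1, 13.4, 15.0, 16.2]   # soil temperatures in degrees C (toy values)
sim = [11.8, 13.9, 14.6, 16.9]
print("RMSE = %.2f C, NOF = %.3f" % (rmse(obs, sim), nof(obs, sim)))
```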

  15. The Doylestown Algorithm: A Test to Improve the Performance of AFP in the Detection of Hepatocellular Carcinoma.

    PubMed

    Wang, Mengjun; Devarajan, Karthik; Singal, Amit G; Marrero, Jorge A; Dai, Jianliang; Feng, Ziding; Rinaudo, Jo Ann S; Srivastava, Sudhir; Evans, Alison; Hann, Hie-Won; Lai, Yinzhi; Yang, Hushan; Block, Timothy M; Mehta, Anand

    2016-02-01

    Biomarkers for the early diagnosis of hepatocellular carcinoma (HCC) are needed to decrease mortality from this cancer. However, as new biomarkers have been slow to be brought to clinical practice, we have developed a diagnostic algorithm that utilizes commonly used clinical measurements in those at risk of developing HCC. Briefly, as α-fetoprotein (AFP) is routinely used, an algorithm that incorporated AFP values along with four other clinical factors was developed. Discovery analysis was performed on electronic data from patients who had liver disease (cirrhosis) alone or HCC in the background of cirrhosis. The discovery set consisted of 360 patients from two independent locations. A logistic regression algorithm was developed that incorporated log-transformed AFP values with age, gender, alkaline phosphatase, and alanine aminotransferase levels. We define this as the Doylestown algorithm. In the discovery set, the Doylestown algorithm improved the overall performance of AFP by 10%. In subsequent external validation in over 2,700 patients from three independent sites, the Doylestown algorithm improved detection of HCC as compared with AFP alone by 4% to 20%. In addition, at a fixed specificity of 95%, the Doylestown algorithm improved the detection of HCC as compared with AFP alone by 2% to 20%. In conclusion, the Doylestown algorithm consolidates clinical laboratory values, with age and gender, which are each individually associated with HCC risk, into a single value that can be used for HCC risk assessment. As such, it should be applicable and useful to the medical community that manages those at risk for developing HCC. PMID:26712941
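
    The described structure, a logistic regression over log-transformed AFP, age, gender, alkaline phosphatase and ALT, can be sketched as follows. The coefficients below are fit to synthetic data purely to show the form of the consolidated risk value; they are not the published Doylestown coefficients.

```python
# Structural sketch of a logistic-regression risk score over log(AFP), age,
# gender, alkaline phosphatase and ALT. Coefficients are fit to synthetic data;
# they are NOT the published Doylestown coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 500
X = np.column_stack([
    np.log(rng.lognormal(mean=2.0, sigma=1.0, size=n)),  # log AFP (ng/mL)
    rng.integers(40, 80, size=n),                        # age (years)
    rng.integers(0, 2, size=n),                          # gender (0/1)
    rng.normal(100, 30, size=n),                         # alkaline phosphatase (U/L)
    rng.normal(45, 20, size=n),                          # ALT (U/L)
])
# Synthetic outcome loosely driven by log-AFP and age, for illustration only.
logit = 0.8 * (X[:, 0] - 2.0) + 0.04 * (X[:, 1] - 60) - 0.5
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)

def hcc_risk(log_afp, age, gender, alp, alt):
    """Single consolidated risk value in [0, 1] for one patient."""
    return float(model.predict_proba([[log_afp, age, gender, alp, alt]])[0, 1])

print("example risk: %.2f" % hcc_risk(np.log(20.0), 62, 1, 110.0, 55.0))
```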

  16. In vivo optic nerve head biomechanics: performance testing of a three-dimensional tracking algorithm

    PubMed Central

    Girard, Michaël J. A.; Strouthidis, Nicholas G.; Desjardins, Adrien; Mari, Jean Martial; Ethier, C. Ross

    2013-01-01

    Measurement of optic nerve head (ONH) deformations could be useful in the clinical management of glaucoma. Here, we propose a novel three-dimensional tissue-tracking algorithm designed to be used in vivo. We carry out preliminary verification of the algorithm by testing its accuracy and its robustness. An algorithm based on digital volume correlation was developed to extract ONH tissue displacements from two optical coherence tomography (OCT) volumes of the ONH (undeformed and deformed). The algorithm was tested by applying artificial deformations to a baseline OCT scan while manipulating speckle noise, illumination and contrast enhancement. Tissue deformations determined by our algorithm were compared with the known (imposed) values. Errors in displacement magnitude, orientation and strain decreased with signal averaging and were 0.15 µm, 0.15° and 0.0019, respectively (for optimized algorithm parameters). Previous computational work suggests that these errors are acceptable to provide in vivo characterization of ONH biomechanics. Our algorithm is robust to OCT speckle noise as well as to changes in illumination conditions, and increasing signal averaging can produce better results. This algorithm has the potential to be used to quantify ONH three-dimensional strains in vivo, which would be of benefit in the diagnosis and identification of risk factors in glaucoma. PMID:23883953

  17. SU-E-T-605: Performance Evaluation of MLC Leaf-Sequencing Algorithms in Head-And-Neck IMRT

    SciTech Connect

    Jing, J; Lin, H; Chow, J

    2015-06-15

    Purpose: To investigate the efficiency of three multileaf collimator (MLC) leaf-sequencing algorithms proposed by Galvin et al, Chen et al and Siochi et al using external beam treatment plans for head-and-neck intensity modulated radiation therapy (IMRT). Methods: IMRT plans for head-and-neck were created using the CORVUS treatment planning system. The plans were optimized and the fluence maps for all photon beams determined. Three different MLC leaf-sequencing algorithms based on Galvin et al, Chen et al and Siochi et al were used to calculate the final photon segmental fields and their monitor units in delivery. For comparison purposes, the maximum intensity of the fluence map was kept constant in the different plans. The number of beam segments and total number of monitor units were calculated for the three algorithms. Results: From the results for the number of beam segments and total number of monitor units, we found that the algorithm of Galvin et al had the largest number of monitor units, about 70% larger than the other two algorithms. Moreover, the algorithms of Galvin et al and Siochi et al both have relatively lower numbers of beam segments compared to Chen et al. Although the numbers of beam segments and total numbers of monitor units calculated by the different algorithms varied with the head-and-neck plans, it can be seen that the algorithms of Galvin et al and Siochi et al performed well with a lower number of beam segments, though the algorithm of Galvin et al had a larger total number of monitor units than Siochi et al. Conclusion: Although the performance of the leaf-sequencing algorithms varied with different IMRT plans having different fluence maps, an evaluation is possible based on the calculated numbers of beam segments and monitor units. In this study, the algorithm by Siochi et al was found to be more efficient for head-and-neck IMRT. The Project Sponsored by the Fundamental Research Funds for the Central Universities (J2014HGXJ0094) and the Scientific Research Foundation for the

  18. Subpixelic measurement of large 1D displacements: principle, processing algorithms, performances and software.

    PubMed

    Guelpa, Valérian; Laurent, Guillaume J; Sandoz, Patrick; Zea, July Galeano; Clévy, Cédric

    2014-01-01

    This paper presents a visual measurement method able to sense 1D rigid body displacements with very high resolutions, large ranges and high processing rates. Sub-pixelic resolution is obtained thanks to a structured pattern placed on the target. The pattern is made of twin periodic grids with slightly different periods. The periodic frames are suited for Fourier-like phase calculations, leading to high resolution, while the period difference allows the removal of phase ambiguity and thus a high range-to-resolution ratio. The paper presents the measurement principle as well as the processing algorithms (source files are provided as supplementary materials). The theoretical and experimental performances are also discussed. The processing time is around 3 µs for a line of 780 pixels, which means that the measurement rate is mostly limited by the image acquisition frame rate. A 3-σ repeatability of 5 nm is experimentally demonstrated, which has to be compared with the 168 µm measurement range. PMID:24625736
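
    The twin-grid principle can be illustrated in a few lines: each grid period yields a precise but wrapped Fourier phase, and the phase difference between the two periods provides the coarse, unambiguous position that resolves the wrapping. The periods, amplitudes and noise level below are assumptions, not the actual pattern parameters.

```python
# Minimal sketch of the twin-grid principle: each grid period gives a precise
# but ambiguous Fourier phase, and the phase difference between the two periods
# removes the ambiguity. Periods, amplitudes and noise level are assumptions.
import numpy as np

p1, p2 = 20.0, 21.0                 # grid periods in pixels (slightly different)
n = np.arange(780)                  # one sensor line
true_x = 137.42                     # displacement to recover, in pixels

rng = np.random.default_rng(3)
line = (1.0 + 0.5 * np.cos(2 * np.pi * (n - true_x) / p1)
            + 0.5 * np.cos(2 * np.pi * (n - true_x) / p2)
            + 0.01 * rng.normal(size=n.size))

def grid_phase(signal, period):
    """Phase of the signal's component at the given grid period (radians)."""
    return np.angle(np.sum(signal * np.exp(-2j * np.pi * n / period)))

phi1, phi2 = grid_phase(line, p1), grid_phase(line, p2)

# Coarse, unambiguous estimate from the beat between the two phases.
beat_period = p1 * p2 / (p2 - p1)
x_coarse = (-(phi1 - phi2) / (2 * np.pi) * beat_period) % beat_period

# Fine estimate: pick the integer number of p1 periods consistent with x_coarse.
frac = (-phi1 / (2 * np.pi)) % 1.0
k = np.round((x_coarse - frac * p1) / p1)
x_fine = (k + frac) * p1

print("estimated displacement: %.3f px (true %.3f px)" % (x_fine, true_x))
```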

  19. Performance analysis for time-frequency MUSIC algorithm in presence of both additive noise and array calibration errors

    NASA Astrophysics Data System (ADS)

    Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim

    2012-12-01

    This article deals with the application of the Spatial Time-Frequency Distribution (STFD) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is performed for the method under consideration with respect to that using the data covariance matrix when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression of the Direction Of Arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters appearing in the derived expression on the algorithm performance. It is particularly observed that for low Signal to Noise Ratio (SNR) and high Signal to sensor Perturbation Ratio (SPR) the STFD method gives better performance, while for high SNR and the same SPR both methods give similar performance.
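
    For readers unfamiliar with the estimator being analysed, the sketch below shows classical covariance-based MUSIC for a uniform linear array; the STFD variant studied in the article replaces the sample covariance matrix with a spatial time-frequency distribution matrix. Array geometry, source angles and noise level are assumptions.

```python
# Sketch of classical covariance-based MUSIC for a uniform linear array; the
# STFD variant replaces the sample covariance with an STFD matrix. Geometry,
# source angles and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
m, snapshots, d = 8, 500, 0.5                  # sensors, samples, spacing (wavelengths)
true_doas = np.deg2rad([-20.0, 15.0])          # two sources

def steering(theta):
    """m x len(theta) steering matrix for the assumed ULA."""
    return np.exp(2j * np.pi * d * np.arange(m)[:, None] * np.sin(theta))

A = steering(true_doas)
S = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
N = 0.1 * (rng.normal(size=(m, snapshots)) + 1j * rng.normal(size=(m, snapshots)))
X = A @ S + N                                  # received array snapshots

R = X @ X.conj().T / snapshots                 # sample covariance matrix
_, eigvecs = np.linalg.eigh(R)                 # eigenvalues in ascending order
En = eigvecs[:, : m - 2]                       # noise subspace for 2 sources

scan = np.deg2rad(np.linspace(-90.0, 90.0, 721))
pseudo = 1.0 / np.sum(np.abs(En.conj().T @ steering(scan)) ** 2, axis=0)

# Keep the two largest local maxima of the pseudospectrum as DOA estimates.
interior = np.where((pseudo[1:-1] > pseudo[:-2]) & (pseudo[1:-1] > pseudo[2:]))[0] + 1
peaks = interior[np.argsort(pseudo[interior])[-2:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(scan[peaks])).round(1))
```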

  20. New dispenser types for integrated pest management of agriculturally significant insect pests: an algorithm with specialized searching capacity in electronic data bases.

    PubMed

    Hummel, H E; Eisinger, M T; Hein, D F; Breuer, M; Schmid, S; Leithold, G

    2012-01-01

    Pheromone effects, discovered some 130 years ago but scientifically defined just half a century ago, are a great bonus for basic and applied biology. Specifically, pest management efforts have been advanced in many insect orders, whether for purposes of monitoring, mass trapping, or mating disruption. By finding and applying a new search algorithm, nearly 20,000 entries in the pheromone literature have been counted, a number much higher than originally anticipated. This compilation contains identified and thus synthesizable structures for all major orders of insects. Among them are hundreds of agriculturally significant insect pests whose aggregated damages and costly control measures range in the multibillions of dollars annually. Unfortunately, and despite a lot of effort within the international entomological scene, the number of efficient and cheap engineering solutions for dispensing pheromones under variable field conditions is uncomfortably lagging behind. Some innovative approaches are cited from the relevant literature in an attempt to rectify this situation. Recently, specifically designed electrospun organic nanofibers offer a lot of promise. With their use, the mating communication of vineyard insects like Lobesia botrana (Lep.: Tortricidae) can be disrupted for periods of seven weeks. PMID:23885431

  1. Sensor and algorithm performance of the WAR HORSE hyperspectral sensor during the 2001 Camp Navajo wide-area collect

    NASA Astrophysics Data System (ADS)

    Olchowski, Frederick M.; Hazel, Geoffrey G.; Stellman, Christopher M.

    2002-08-01

    The following paper describes a recent data collection exercise in which the WAR HORSE visible-near-infrared hyperspectral sensor was employed in the collection of wide-area hyperspectral data sets. Two anomaly detection algorithms, Subspace RX (SSRX) and Gaussian Spectral Clustering (GSC), were used on the data and their performance is discussed.
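
    Subspace RX is a variant of the classical RX anomaly detector, which scores each pixel by its Mahalanobis distance from the scene's background statistics. The sketch below implements only the basic global RX form on a synthetic cube, as an illustration of the underlying idea rather than of SSRX or GSC themselves.

```python
# Sketch of the basic (global) RX anomaly detector that Subspace RX builds on:
# each pixel is scored by its Mahalanobis distance from the scene's mean and
# covariance. The synthetic cube and implanted anomaly are assumptions.
import numpy as np

rng = np.random.default_rng(2)
rows, cols, bands = 64, 64, 30
cube = rng.normal(size=(rows, cols, bands))       # background clutter
cube[10, 20] += 4.0                               # implant one anomalous pixel

pixels = cube.reshape(-1, bands)
mean = pixels.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))

centered = pixels - mean
rx_score = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
rx_map = rx_score.reshape(rows, cols)

r, c = np.unravel_index(np.argmax(rx_map), rx_map.shape)
print("most anomalous pixel at (%d, %d), score %.1f" % (r, c, rx_map[r, c]))
```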

  2. 40 CFR 141.723 - Requirements to respond to significant deficiencies identified in sanitary surveys performed by EPA.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... EPA. (a) A sanitary survey is an onsite review of the water source (identifying sources of... deficiency includes a defect in design, operation, or maintenance, or a failure or malfunction of the sources... performed by EPA, systems must respond in writing to significant deficiencies identified in sanitary...

  3. 40 CFR 141.723 - Requirements to respond to significant deficiencies identified in sanitary surveys performed by EPA.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... EPA. (a) A sanitary survey is an onsite review of the water source (identifying sources of... deficiency includes a defect in design, operation, or maintenance, or a failure or malfunction of the sources... performed by EPA, systems must respond in writing to significant deficiencies identified in sanitary...

  4. 40 CFR 141.723 - Requirements to respond to significant deficiencies identified in sanitary surveys performed by EPA.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... EPA. (a) A sanitary survey is an onsite review of the water source (identifying sources of... deficiency includes a defect in design, operation, or maintenance, or a failure or malfunction of the sources... performed by EPA, systems must respond in writing to significant deficiencies identified in sanitary...

  5. The Influence of Access to General Education Curriculum on Alternate Assessment Performance of Students with Significant Cognitive Disabilities

    ERIC Educational Resources Information Center

    Roach, Andrew T.; Elliott, Stephen N.

    2006-01-01

    The primary purpose of this investigation was to understand the influence of access to the general curriculum on the performance of students with significant cognitive disabilities, as measured by the Wisconsin Alternate Assessment (WAA) for Students with Disabilities. Special education teachers (N=113) submitted case materials for students with…

  6. Clustering performance comparison using K-means and expectation maximization algorithms

    PubMed Central

    Jung, Yong Gyu; Kang, Min Soo; Heo, Jun

    2014-01-01

    Clustering is an important means of data mining based on separating data categories by similar features. Unlike classification algorithms, clustering belongs to the unsupervised type of algorithms. Two representatives of clustering algorithms are K-means and the expectation maximization (EM) algorithm. Logistic regression extends linear regression analysis to a category-type dependent variable by using a linear combination of independent variables, and such a statistical approach is used to predict the possibility of occurrence of an event. However, classifying all data by means of logistic regression analysis alone cannot guarantee the accuracy of the results. In this paper, logistic regression analysis is applied to EM clusters and to the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results. PMID:26019610
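
    The contrast between the two algorithms is easy to demonstrate: K-means assigns each point to exactly one cluster, while EM for a Gaussian mixture returns soft membership probabilities. The sketch below uses a synthetic two-blob data set, not the red wine data analysed in the paper.

```python
# Minimal sketch contrasting K-means (hard assignments) with EM for a Gaussian
# mixture (soft assignments). The two-blob synthetic data set is an assumption.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(loc=[0, 0], scale=0.8, size=(150, 2)),
               rng.normal(loc=[4, 3], scale=1.2, size=(150, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
gm = GaussianMixture(n_components=2, random_state=0).fit(X)

print("K-means centres:\n", km.cluster_centers_.round(2))
print("EM (GMM) means:\n", gm.means_.round(2))
print("soft memberships of first point:", gm.predict_proba(X[:1]).round(3))
```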

  7. Some aspects of algorithm performance and modeling in transient thermal analysis of structures. [aerospace vehicle structures

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Robinson, J. C.

    1982-01-01

    The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit and implicit algorithms are discussed. A promising set of implicit algorithms with variable time steps, known as the GEAR package, is described. Four test problems, used for evaluating and comparing various algorithms, were selected and finite element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system, and a model of the space shuttle orbiter wing. Results generally indicate a preference for implicit over explicit algorithms for solution of transient structural heat transfer problems when the governing equations are stiff.
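
    The stiffness argument can be made concrete with the scalar test problem dT/dt = -k (T - T_env): for a large rate constant k, explicit (forward) Euler diverges unless the time step is very small, whereas implicit (backward) Euler remains stable at the same step. The parameter values below are illustrative only.

```python
# Sketch of why implicit methods are preferred for stiff transient thermal
# problems: on dT/dt = -k (T - T_env) with a large k, explicit Euler diverges
# at this step size while backward Euler stays stable. Values are illustrative.
k, T_env, T0, dt, steps = 1000.0, 20.0, 300.0, 0.01, 10

T_exp = T_imp = T0
for _ in range(steps):
    T_exp = T_exp + dt * (-k) * (T_exp - T_env)        # explicit (forward) Euler
    T_imp = (T_imp + dt * k * T_env) / (1.0 + dt * k)  # implicit (backward) Euler

print("explicit Euler after %d steps: %.3e" % (steps, T_exp))   # blows up
print("implicit Euler after %d steps: %.3f" % (steps, T_imp))   # approaches T_env
```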

  8. Performance of decoder-based algorithms for signal synchronization for DSSS waveforms

    NASA Astrophysics Data System (ADS)

    Matache, A.; Valles, E. L.

    This paper presents results on the implementation of pilotless carrier synchronization algorithms at low SNRs using joint decoding and decision-directed tracking. A software test bed was designed to simulate the effects of decision-directed carrier synchronization (DDCS) techniques. These techniques are compared to non-decision-directed algorithms used in phase-locked loops (PLLs) or Costas loops. In previous work by the authors, results for direct M-ARY modulation constellations, with no code spreading, were introduced. This paper focuses on the application of the proposed family of decision-directed algorithms to direct sequence spread spectrum (DSSS) waveforms, typical of GPS signals. The current algorithm can utilize feedback from turbo codes in addition to the prior support of LDPC codes.

  9. Methylphenidate significantly improves driving performance of adults with attention-deficit hyperactivity disorder: a randomized crossover trial.

    PubMed

    Verster, Joris C; Bekker, Evelijne M; de Roos, Marlise; Minova, Anita; Eijken, Erik J E; Kooij, J J Sandra; Buitelaar, Jan K; Kenemans, J Leon; Verbaten, Marinus N; Olivier, Berend; Volkerts, Edmund R

    2008-05-01

    Although patients with attention-deficit hyperactivity disorder (ADHD) have reported improved driving performance on methylphenidate, limited evidence exists to support an effect of treatment on driving performance, and some regions prohibit driving on methylphenidate. A randomized, crossover trial examining the effects of methylphenidate versus placebo on highway driving in 18 adults with ADHD was carried out. After three days of no treatment, patients received either their usual methylphenidate dose (mean: 14.7 mg; range: 10-30 mg) or placebo and then the opposite treatment after a six- to seven-day washout period. Patients performed a 100 km driving test during normal traffic, 1.5 h after treatment administration. Standard deviation of lateral position (SDLP), the weaving of the car, was the primary outcome measure. Secondary outcome measurements included the standard deviation of speed and patient reports of driving performance. Driving performance was significantly better in the methylphenidate than in the placebo condition, as reflected by the SDLP difference (2.3 cm, 95% CI = 0.8-3.8, P = 0.004). Variation in speed was similar on treatment and on placebo (-0.05 km/h, 95% CI = -0.4 to 0.2, P = 0.70). Among adults with ADHD with a history of a positive clinical response to methylphenidate, methylphenidate significantly improves driving performance. PMID:18308788

  10. Performance evaluation of gratings applied by genetic algorithm for the real-time optical interconnection

    NASA Astrophysics Data System (ADS)

    Yoon, Jin-Seon; Kim, Nam; Suh, HoHyung; Jeon, Seok Hee

    2000-03-01

    In this paper, gratings for optical interconnection are designed using a genetic algorithm (GA) as a robust and efficient scheme. The real-time optical interconnection system architecture is composed of an LC-SLM, a CCD array detector, an IBM-PC, a He-Ne laser, and a Fourier transform lens. A pixelated binary phase grating is displayed on the LC-SLM and can interconnect incoming beams to desired output spots freely in real time. To adapt the GA to finding near-globally-optimal solutions, a chromosome is coded as a binary string of length 32 X 32, the stochastic tournament method is used to decrease the stochastic sampling error, and single-point crossover with a 16 X 16 block size is used. The effects of several parameters on the desired grating design are analyzed. First, regarding the effect of the crossover probability, the grating designed with a crossover probability of 0.75 has a high diffraction efficiency of 74.7[%] and a uniformity of 1.73 X 10-1, where the mutation probability is 0.001 and the population size is 300. Second, regarding the mutation probability, the grating designed with a mutation probability of 0.001 has a high efficiency of 74.4[%] and a uniformity of 1.61 X 10-1, where the crossover probability is 1.0 and the population size is 300. Third, regarding the population size, the grating designed with a population size of 300 and 400 generations has above 74[%] diffraction efficiency, where the mutation probability is 0.001 and the crossover probability is 1.0.
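
    The GA machinery tuned in this study, binary chromosomes, stochastic tournament selection, single-point crossover and bit mutation, is sketched below on a placeholder objective; in the actual design problem the fitness would be the simulated diffraction efficiency and uniformity of the grating, and the population size and probabilities here simply echo the values explored above.

```python
# Stripped-down GA with the operators discussed above: binary chromosomes,
# tournament selection, single-point crossover and bit mutation. The toy
# "count the ones" fitness stands in for the grating's simulated diffraction
# efficiency; population size and probabilities are illustrative.
import numpy as np

rng = np.random.default_rng(6)
pop_size, chrom_len, generations = 60, 256, 200   # e.g. a flattened 16 x 16 block
p_crossover, p_mutation = 0.75, 0.001

def fitness(chrom):
    return chrom.sum()                            # placeholder objective

pop = rng.integers(0, 2, size=(pop_size, chrom_len))
for _ in range(generations):
    # Tournament selection: each parent is the fitter of two random individuals.
    fit = np.array([fitness(ind) for ind in pop])
    picks = rng.integers(0, pop_size, size=(pop_size, 2))
    parents = pop[np.where(fit[picks[:, 0]] >= fit[picks[:, 1]],
                           picks[:, 0], picks[:, 1])]
    # Single-point crossover on consecutive parent pairs.
    children = parents.copy()
    for i in range(0, pop_size - 1, 2):
        if rng.random() < p_crossover:
            cut = rng.integers(1, chrom_len)
            children[i, cut:] = parents[i + 1, cut:]
            children[i + 1, cut:] = parents[i, cut:]
    # Bit-flip mutation.
    children ^= (rng.random(children.shape) < p_mutation).astype(children.dtype)
    pop = children

print("best fitness: %d / %d" % (max(fitness(ind) for ind in pop), chrom_len))
```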

  11. Semioptimal practicable algorithmic cooling

    SciTech Connect

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-15

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  12. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    SciTech Connect

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-05-15

    Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in case of low dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered with an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithm yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the

  13. Improved multiprocessor garbage collection algorithms

    SciTech Connect

    Newman, I.A.; Stallard, R.P.; Woodward, M.C.

    1983-01-01

    Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.

  14. Compared performance of different centroiding algorithms for high-pass filtered laser guide star Shack-Hartmann wavefront sensors

    NASA Astrophysics Data System (ADS)

    Lardière, Olivier; Conan, Rodolphe; Clare, Richard; Bradley, Colin; Hubin, Norbert

    2010-07-01

    Variations of the sodium layer altitude and atom density profile induce errors in laser-guide-star (LGS) adaptive optics systems. These errors must be mitigated by (i) optimizing the LGS wavefront sensor (WFS) and the centroiding algorithm, and (ii) adding a high-pass filter on the LGS path and a low-bandwidth natural-guide-star WFS. In the context of the ESO E-ELT project, five centroiding algorithms, namely the centre-of-gravity (CoG), the weighted CoG, the matched filter, the quad-cell and the correlation, have been evaluated in closed loop on the University of Victoria LGS wavefront sensing test bed. The performance of each centroiding algorithm is compared for a central versus side-launch laser and for different fields of view, pixel sampling, and LGS flux.
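
    Two of the estimators compared above are easy to state: the plain centre-of-gravity is the intensity-weighted mean pixel position, and the weighted CoG multiplies the spot by a weighting window before taking the same mean. The sketch below applies both to one noisy synthetic subaperture spot; the spot size, noise level and weight width are assumptions.

```python
# Sketch of two of the centroiding estimators compared above, applied to one
# noisy Shack-Hartmann subaperture spot: plain centre-of-gravity (CoG) and a
# weighted CoG with a Gaussian weight. Spot and weight parameters are assumptions.
import numpy as np

rng = np.random.default_rng(8)
size, true_cx, true_cy, fwhm = 16, 8.7, 7.3, 4.0
y, x = np.mgrid[0:size, 0:size]
sigma = fwhm / 2.355
spot = np.exp(-((x - true_cx) ** 2 + (y - true_cy) ** 2) / (2 * sigma ** 2))
spot = 100 * spot + rng.normal(scale=1.0, size=spot.shape)   # add read noise

def cog(img):
    img = np.clip(img, 0, None)                  # crude suppression of negative noise
    return (img * x).sum() / img.sum(), (img * y).sum() / img.sum()

def weighted_cog(img, wx, wy, wsigma=3.0):
    w = np.exp(-((x - wx) ** 2 + (y - wy) ** 2) / (2 * wsigma ** 2))
    return cog(img * w)

cx0, cy0 = cog(spot)
cx1, cy1 = weighted_cog(spot, cx0, cy0)          # weight centred on the first guess
print("CoG:          (%.2f, %.2f)" % (cx0, cy0))
print("weighted CoG: (%.2f, %.2f)  true (%.2f, %.2f)" % (cx1, cy1, true_cx, true_cy))
```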

  15. Performance analysis of a dual-tree algorithm for computing spatial distance histograms.

    PubMed

    Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni

    2011-08-01

    Many scientific and engineering fields produce large volume of spatiotemporal data. The storage, retrieval, and analysis of such data impose great challenges to database systems design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytics, the Spatial Distance Histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversing strategy to process point-to-point distances in the visited tree nodes in batches, thus require less time when compared to the brute-force approach where all pairwise distances have to be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances that are left to be processed decreases exponentially with more levels of the tree visited. This leads to the proof of a time complexity lower than the quadratic time needed for a brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options in building the tree. PMID:21804753
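
    For reference, the brute-force baseline that the dual-tree algorithm is compared against simply bins all O(N^2) pairwise distances; the dual-tree approach avoids most of these distance evaluations by resolving whole node-to-node pairs at once. The point count and bucket width below are assumptions.

```python
# Reference brute-force computation of a Spatial Distance Histogram (SDH): all
# O(N^2) pairwise distances are binned with a fixed bucket width. Point count
# and bucket width are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(9)
points = rng.random((1000, 3))                   # N particles in a unit cube
bucket_width = 0.05

diff = points[:, None, :] - points[None, :, :]   # N x N x 3 pairwise differences
dist = np.sqrt((diff ** 2).sum(axis=-1))
iu = np.triu_indices(len(points), k=1)           # each unordered pair counted once
sdh, edges = np.histogram(dist[iu],
                          bins=np.arange(0.0, np.sqrt(3) + bucket_width, bucket_width))
print("pairs processed:", dist[iu].size, "non-empty buckets:", int((sdh > 0).sum()))
```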

  16. A comparison of spectral decorrelation techniques and performance evaluation metrics for a wavelet-based, multispectral data compression algorithm

    NASA Technical Reports Server (NTRS)

    Matic, Roy M.; Mosley, Judith I.

    1994-01-01

    Future space-based, remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based, image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques, including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.

  17. On the performance of explicit and implicit algorithms for transient thermal analysis

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.

    1980-01-01

    The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit and implicit algorithms are discussed. A promising set of implicit algorithms, known as the GEAR package, is described. Four test problems, used for evaluating and comparing various algorithms, have been selected and finite element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system and a model of the space shuttle orbiter wing. Calculations were carried out using the SPAR finite element program, the MITAS lumped parameter program and a special purpose finite element program incorporating the GEAR algorithms. Results generally indicate a preference for implicit over explicit algorithms for solution of transient structural heat transfer problems when the governing equations are stiff. Careful attention to modeling detail, such as avoiding thin or short high-conducting elements, can sometimes reduce the stiffness to the extent that explicit methods become advantageous.

  18. The use of algorithmic behavioural transfer functions in parametric EO system performance models

    NASA Astrophysics Data System (ADS)

    Hickman, Duncan L.; Smith, Moira I.

    2015-10-01

    The use of mathematical models to predict the overall performance of an electro-optic (EO) system is well established as a methodology and is used widely to support requirements definition and system design and to produce performance predictions. Traditionally these models have been based upon cascades of transfer functions derived from established physical theory, such as the calculation of signal levels from radiometry equations, as well as the use of statistical models. However, the performance of an EO system is increasingly dominated by the on-board processing of the image data, and this automated interpretation of image content is complex in nature and presents significant modelling challenges. Models and simulations of EO systems tend either to involve processing of image data as part of a performance simulation (image-flow) or else a series of mathematical functions that attempt to define the overall system characteristics (parametric). The former approach is generally more accurate but statistically and theoretically weak in terms of specific operational scenarios, and is also time consuming. The latter approach is generally faster but is unable to provide accurate predictions of a system's performance under operational conditions. An alternative and novel architecture is presented in this paper which combines the processing speed attributes of parametric models with the accuracy of image-flow representations in a statistically valid framework. An additional dimension needed to create an effective simulation is a robust software design whose architecture reflects the structure of the EO system and its interfaces. As such, the design of the simulator can be viewed as a software prototype of a new EO system or an abstraction of an existing design. This new approach has been used successfully to model a number of complex military systems and has been shown to combine improved performance estimation with speed of computation. Within the paper details of the approach

  19. Performance improvements of wavelength-shifting-fiber neutron detectors using high-resolution positioning algorithms.

    PubMed

    Wang, C L

    2016-05-01

    Three high-resolution positioning methods based on the FluoroBancroft linear-algebraic method [S. B. Andersson, Opt. Express 16, 18714 (2008)] are proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function, the non-linear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. After taking the super-Poissonian photon noise into account, the proposed algorithms give an average position error of 0.03-0.08 pixels, much smaller than that (0.29 pixel) from a traditional maximum photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and an equivalent or better instrument resolution in powder diffraction than the MPA. These improvements will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis. PMID:27250410
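
    As a generic illustration of the linearization idea described above (not the FluoroBancroft algorithm itself), the sketch below estimates a hit position from a Gaussian-shaped photon-count profile by fitting a parabola to the logarithm of the counts; the profile width, amplitude, and noise model are assumptions.

    ```python
    import numpy as np

    def gaussian_peak_position(counts):
        """Estimate the peak position of a Gaussian-shaped photon-count profile.

        Taking the logarithm of a Gaussian turns the fit into a linear
        least-squares problem (a parabola in x); the vertex gives the position.
        """
        x = np.arange(len(counts), dtype=float)
        mask = counts > 0                       # log is undefined for empty pixels
        a, b, _ = np.polyfit(x[mask], np.log(counts[mask]), 2)
        return -b / (2.0 * a)                   # vertex of the fitted parabola

    # Simulated profile: Gaussian centred at pixel 6.3 plus Poisson photon noise.
    rng = np.random.default_rng(1)
    x = np.arange(16)
    profile = rng.poisson(200 * np.exp(-(x - 6.3) ** 2 / (2 * 1.5 ** 2)))
    print(gaussian_peak_position(profile))      # close to 6.3
    ```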

  20. Enhancing Plasma Wakefield and E-cloud Simulation Performance Using a Pipelining Algorithm

    NASA Astrophysics Data System (ADS)

    Feng, B.; Huang, C.; Decyk, V.; Mori, W. B.; Katsouleas, T.; Muggli, P.

    2006-11-01

    Modeling long timescale propagation of beams in plasma wakefield accelerators at the energy frontier and in electron clouds in circular accelerators such as CERN-LHC requires faster and more efficient simulation codes. Simply increasing the number of processors does not scale beyond one-fifth of the number of cells in the decomposition direction. A pipelining algorithm applied to the fully parallelized code QuickPIC is suggested to overcome this limit. The pipelining algorithm uses many groups of processors and optimizes the job allocation on the processors in parallel computing. With the new algorithm, it is possible to use on the order of 10^2 groups of processors, expanding the scale and speed of simulations with QuickPIC by a similar factor.

  1. Enhancing plasma wakefield and e-cloud simulation performance using a pipelining algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Bing; Katsouleas, Tom; Huang, Chengkun; Decyk, Viktor; Mori, Warren B.

    2006-10-01

    Modeling long timescale propagation of beams in plasma wakefield accelerators at the energy frontier and in electron clouds in circular accelerators such as CERN-LHC requires a faster and more efficient simulation code. Simply increasing the number of processors does not scale beyond one-fifth of the number of cells in the decomposition direction. A pipelining algorithm applied to the fully parallelized code QUICKPIC is suggested to overcome this limit. The pipelining algorithm uses many groups of processors and optimizes the job allocation on the processors in parallel computing. With the new algorithm, it is possible to use on the order of 100 groups of processors, expanding the scale and speed of simulations with QuickPIC by a similar factor.

  2. Performance improvements of wavelength-shifting-fiber neutron detectors using high-resolution positioning algorithms

    DOE PAGES Beta

    Wang, C. L.

    2016-05-17

    On the basis of the FluoroBancroft linear-algebraic method [S. B. Andersson, Opt. Express 16, 18714 (2008)], three high-resolution positioning methods were proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function (LRF), the non-linear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. The proposed algorithms give an average position error of 0.03-0.08 pixels, much smaller than that (0.29 pixel) from a traditional maximum photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and an equivalent or better instrument resolution in powder diffraction than the MPA. Moreover, these characteristics will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis.

  3. Performing target specific band reduction using artificial neural networks and assessment of its efficacy using various target detection algorithms

    NASA Astrophysics Data System (ADS)

    Yadav, Deepti; Arora, M. K.; Tiwari, K. C.; Ghosh, J. K.

    2016-04-01

    Hyperspectral imaging is a powerful tool in the field of remote sensing and has been used for many applications such as mineral detection, landmine detection, and target detection. Major issues in target detection using hyperspectral imagery (HSI) are spectral variability, noise, small target size, huge data dimensions, high computation cost, and complex backgrounds. Many of the popular detection algorithms do not work for difficult targets, such as small or camouflaged ones, and may result in high false alarm rates. Thus, target/background discrimination is a key issue, and analyzing a target's behaviour in realistic environments is crucial for the accurate interpretation of hyperspectral imagery. Using standard spectral libraries to study a target's spectral behaviour has the limitation that targets are measured under environmental conditions different from those of the application. This study uses spectral data of the same targets collected during acquisition of the HSI image. This paper analyzes target spectra so that each target can be spectrally distinguished from a mixture of spectral data. An artificial neural network (ANN) has been used to identify the spectral ranges for reducing the data, and its efficacy for improving target detection is then verified. The ANN results propose discriminating band ranges for the targets; these ranges were then used to perform target detection using four popular spectral-matching target detection algorithms. The results of the algorithms were analyzed using ROC curves to evaluate the effectiveness of the ranges suggested by the ANN over the full spectrum for detection of the desired targets. In addition, a comparative assessment of the algorithms was also performed using ROC.
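
    The ROC-based comparison described above can be reproduced in outline with scikit-learn; the sketch below compares two hypothetical detectors on synthetic scores (the labels, score distributions, and detector names are assumptions, not the study's data).

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(0)
    labels = np.concatenate([np.ones(50), np.zeros(950)])   # 50 target pixels, 950 background pixels

    # Synthetic detection scores for two hypothetical detectors: the "reduced bands"
    # detector separates target and background slightly better than the "full spectrum" one.
    scores_full    = np.concatenate([rng.normal(1.0, 1.0, 50), rng.normal(0.0, 1.0, 950)])
    scores_reduced = np.concatenate([rng.normal(1.8, 1.0, 50), rng.normal(0.0, 1.0, 950)])

    for name, s in [("full spectrum", scores_full), ("reduced bands", scores_reduced)]:
        fpr, tpr, _ = roc_curve(labels, s)
        tpr_at_1pct = tpr[np.searchsorted(fpr, 0.01)]
        print(f"{name}: AUC = {roc_auc_score(labels, s):.3f}, TPR at 1% FPR = {tpr_at_1pct:.2f}")
    ```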

  4. A Real-time Spectrum Handoff Algorithm for VoIP based Cognitive Radio Networks: Design and Performance Analysis

    NASA Astrophysics Data System (ADS)

    Chakraborty, Tamal; Saha Misra, Iti

    2016-03-01

    Secondary Users (SUs) in a Cognitive Radio Network (CRN) face unpredictable interruptions in transmission due to the random arrival of Primary Users (PUs), leading to spectrum handoff or dropping instances. An efficient spectrum handoff algorithm thus becomes one of the indispensable components in a CRN, especially for real-time communication like Voice over IP (VoIP). In this regard, this paper investigates the effects of spectrum handoff on the Quality of Service (QoS) for VoIP traffic in CRN, and proposes a real-time spectrum handoff algorithm in two phases. The first phase (VAST: VoIP-based Adaptive Sensing and Transmission) adaptively varies the channel sensing and transmission durations to perform intelligent dropping decisions. The second phase (ProReact: Proactive and Reactive Handoff) deploys efficient channel selection mechanisms during spectrum handoff for resuming communication. Extensive performance analysis in analytical and simulation models confirms a decrease in spectrum handoff delay for VoIP SUs of more than 40% and 60% compared to existing proactive and reactive algorithms, respectively, and ensures a minimum 10% reduction in call-dropping probability with respect to previous works in this domain. The effective SU transmission duration is also maximized under the proposed algorithm, thereby making it suitable for successful VoIP communication.

  5. Performance impact of mutation operators of a subpopulation-based genetic algorithm for multi-robot task allocation problems.

    PubMed

    Liu, Chun; Kroll, Andreas

    2016-01-01

    Multi-robot task allocation determines the task sequence and distribution for a group of robots in multi-robot systems. It is a constrained combinatorial optimization problem that becomes more complex in the case of cooperative tasks, because they introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed when solving several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm can obtain better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems. PMID:27588254
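
    The swap and inversion operators referred to above are standard permutation mutations; a minimal sketch of the two, applied to an arbitrary task sequence rather than the paper's inspection problems, is given below.

    ```python
    import random

    def swap_mutation(seq):
        """Exchange two randomly chosen tasks in the sequence."""
        s = list(seq)
        i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]
        return s

    def inversion_mutation(seq):
        """Reverse the order of a randomly chosen contiguous segment."""
        s = list(seq)
        i, j = sorted(random.sample(range(len(s)), 2))
        s[i:j + 1] = reversed(s[i:j + 1])
        return s

    random.seed(0)
    tasks = list(range(8))                 # a task sequence for one robot (illustrative)
    print(swap_mutation(tasks))
    print(inversion_mutation(tasks))
    ```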

  6. An algorithm for calculating exam quality as a basis for performance-based allocation of funds at medical schools

    PubMed Central

    Kirschstein, Timo; Wolters, Alexander; Lenz, Jan-Hendrik; Fröhlich, Susanne; Hakenberg, Oliver; Kundt, Günther; Darmüntzel, Martin; Hecker, Michael; Altiner, Attila; Müller-Hilke, Brigitte

    2016-01-01

    Objective: The amendment of the Medical Licensing Act (ÄAppO) in Germany in 2002 led to the introduction of graded assessments in the clinical part of medical studies. This, in turn, lent new weight to the importance of written tests, even though the minimum requirements for exam quality are sometimes difficult to reach. Introducing exam quality as a criterion for the award of performance-based allocation of funds is expected to steer the attention of faculty members towards more quality and perpetuate higher standards. However, at present there is a lack of suitable algorithms for calculating exam quality. Methods: In the spring of 2014, the students' dean commissioned the "core group" for curricular improvement at the University Medical Center in Rostock to revise the criteria for the allocation of performance-based funds for teaching. In a first approach, we developed an algorithm that was based on the results of the most common type of exam in medical education, multiple choice tests. It included item difficulty and discrimination, reliability, and the distribution of grades achieved. Results: This algorithm quantitatively describes the exam quality of multiple choice exams. However, it can also be applied to exams involving short essay questions and the OSCE. It thus allows for the quantification of exam quality in the various subjects and, in analogy to impact factors and third-party grants, a ranking among faculty. Conclusion: Our algorithm can be applied to all test formats in which item difficulty, the discriminatory power of the individual items, the reliability of the exam, and the distribution of grades are measured. Even though the content validity of an exam is not considered here, we believe that our algorithm is suitable as a general basis for performance-based allocation of funds. PMID:27275509
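
    As an illustration of two of the quantities entering such an algorithm, the sketch below computes item difficulty and a point-biserial discrimination index from a 0/1 response matrix; the response data are random and purely illustrative, and the weighting and reliability terms of the actual algorithm are not reproduced.

    ```python
    import numpy as np

    def item_statistics(responses):
        """Compute per-item difficulty and discrimination for a 0/1 response matrix.

        responses: (num_students, num_items) array of 0/1 scores.
        Difficulty is the proportion of correct answers; discrimination is the
        point-biserial correlation between an item and the rest-of-test score.
        """
        responses = np.asarray(responses, dtype=float)
        difficulty = responses.mean(axis=0)
        total = responses.sum(axis=1)
        discrimination = np.array([
            np.corrcoef(responses[:, i], total - responses[:, i])[0, 1]
            for i in range(responses.shape[1])
        ])
        return difficulty, discrimination

    # Random responses for 100 students on 5 items with assumed success rates.
    rng = np.random.default_rng(0)
    scores = (rng.random((100, 5)) < [0.9, 0.7, 0.5, 0.3, 0.6]).astype(int)
    diff, disc = item_statistics(scores)
    print(diff.round(2), disc.round(2))
    ```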

  7. Improve the algorithmic performance of collaborative filtering by using the interevent time distribution of human behaviors

    NASA Astrophysics Data System (ADS)

    Jia, Chun-Xiao; Liu, Run-Ran

    2015-10-01

    Recently, many scaling laws of the interevent time distribution of human behaviors have been observed, and some quantitative understanding of human behaviors has also been provided by researchers. In this paper, we propose a modified collaborative filtering algorithm that makes use of this scaling law of human behaviors for information filtering. Extensive experimental analyses demonstrate that the accuracies on the MovieLens and Last.fm datasets could be improved greatly compared with standard collaborative filtering. Surprisingly, further statistical analyses suggest that the present algorithm could simultaneously improve the novelty and diversity of recommendations. This work provides a credible way for highly efficient information filtering.

  8. The effect of interfering ions on search algorithm performance for electron-transfer dissociation data.

    PubMed

    Good, David M; Wenger, Craig D; Coon, Joshua J

    2010-01-01

    Collision-activated dissociation and electron-transfer dissociation (ETD) each produce spectra containing unique features. Though several database search algorithms (e.g., SEQUEST, MASCOT, and the Open Mass Spectrometry Search Algorithm) have been modified to search ETD data, this consists chiefly of the ability to search for c- and z•-ions; additional ETD-specific features are often unaccounted for and may hinder identification. Removal of these features via spectral processing increased total search sensitivity by approximately 20% for both human and yeast data sets; unique peptide identifications increased by approximately 17% for the yeast data sets and approximately 16% for the human data set. PMID:19899080

  9. Performance analysis for the expanding search PN acquisition algorithm. [pseudonoise in spread spectrum transmission

    NASA Technical Reports Server (NTRS)

    Braun, W. R.

    1982-01-01

    An approach is described for approximating the cumulative probability distribution of the acquisition time of the serial pseudonoise (PN) search algorithm. The results are applicable to both variable and fixed dwell time systems. The theory is developed for the case where some a priori information is available on the PN code epoch (reacquisition problem or acquisition of very long codes). Also considered is the special case of a search over the whole code. The accuracy of the approximation is demonstrated by comparisons with published exact results for the fixed dwell time algorithm.

  10. An Improved Performance Frequency Estimation Algorithm for Passive Wireless SAW Resonant Sensors

    PubMed Central

    Liu, Boquan; Zhang, Chenrui; Ji, Xiaojun; Chen, Jing; Han, Tao

    2014-01-01

    Passive wireless surface acoustic wave (SAW) resonant sensors are suitable for applications in harsh environments. The traditional SAW resonant sensor system requires, however, a Fourier transformation (FT), which has a resolution restriction that decreases the accuracy. In order to improve the accuracy and resolution of the measurement, a singular value decomposition (SVD)-based frequency estimation algorithm is applied to the wireless SAW resonant sensor response, which is a combination of undamped and damped single-tone sinusoids of the same frequency. Compared with the FT algorithm, the accuracy and resolution of the method used in the self-developed wireless SAW resonant sensor system are validated. PMID:25429410

  11. CHEOPS Performance for Exomoons: The Detectability of Exomoons by Using Optimal Decision Algorithm

    NASA Astrophysics Data System (ADS)

    Simon, A. E.; Szabó, Gy. M.; Kiss, L. L.; Fortier, A.; Benz, W.

    2015-10-01

    Many attempts have already been made to detect exomoons around transiting exoplanets, but the first confirmed discovery is still pending. The experience gathered so far allows us to better optimize future space telescopes for this challenge already during the development phase. In this paper we focus on the forthcoming CHaracterising ExOPlanet Satellite (CHEOPS), describing an optimized decision algorithm with step-by-step evaluation, and calculating the number of transits required for an exomoon detection for various planet-moon configurations that can be observed by CHEOPS. We explore the most efficient way for such an observation to minimize the cost in observing time. Our study is based on PTV (photocentric transit timing variation) observations in simulated CHEOPS data, but the recipe does not depend on the actual detection method, and it can be substituted with, e.g., the photodynamical method for later applications. Using current state-of-the-art simulations of CHEOPS data, we analyzed transit observation sets for different star-planet-moon configurations and performed a bootstrap analysis to determine their detection statistics. We found that the detection limit is around an Earth-sized moon. In the case of favorable spatial configurations, systems with at least a large moon and a Neptune-sized planet, an 80% detection chance requires at least 5-6 transit observations on average. There is also a nonzero chance in the case of smaller moons, but the detection statistics deteriorate rapidly, while the necessary transit measurements increase quickly. After the CoRoT and Kepler spacecraft, CHEOPS will be the next dedicated space telescope that will observe exoplanetary transits and characterize systems with known Doppler planets. Although it has a smaller aperture than Kepler (the ratio of the mirror diameters is about 1/3) and is mounted with a CCD that is similar to Kepler's, it will observe brighter stars and operate with larger

  12. Performance Assessment of the Optical Transient Detector and Lightning Imaging Sensor. Part 2; Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Mach, Douglas M.; Christian, Hugh J.; Blakeslee, Richard; Boccippio, Dennis J.; Goodman, Steve J.; Boeck, William

    2006-01-01

    We describe the clustering algorithm used by the Lightning Imaging Sensor (LIS) and the Optical Transient Detector (OTD) for combining the lightning pulse data into events, groups, flashes, and areas. Events are single pixels that exceed the LIS/OTD background level during a single frame (2 ms). Groups are clusters of events that occur within the same frame and in adjacent pixels. Flashes are clusters of groups that occur within 330 ms and either 5.5 km (for LIS) or 16.5 km (for OTD) of each other. Areas are clusters of flashes that occur within 16.5 km of each other. Many investigators are utilizing the LIS/OTD flash data; therefore, we test how variations in the algorithms for the event-group and group-flash clustering affect the flash count for a subset of the LIS data. We divided the subset into areas with low (1-3), medium (4-15), high (16-63), and very high (64+) flash counts to see how changes in the clustering parameters affect the flash rates in these different sizes of areas. We found that as long as the cluster parameters are within about a factor of two of the current values, the flash counts do not change by more than about 20%. Therefore, the flash clustering algorithm used by the LIS and OTD sensors creates flash rates that are relatively insensitive to reasonable variations in the clustering algorithms.
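
    A greedy sketch of the group-to-flash step, using the 330 ms and 5.5 km thresholds quoted above, is shown below; it is an illustration of the clustering idea, not the LIS/OTD production code, and the linking rule (comparing only against the most recent member of each flash) is a simplifying assumption.

    ```python
    import numpy as np

    def cluster_groups_into_flashes(times_s, positions_km, dt_max=0.330, dr_max=5.5):
        """Greedily cluster optical 'groups' into 'flashes'.

        A group joins an existing flash if it occurs within dt_max seconds and
        dr_max km of the most recent group already assigned to that flash;
        otherwise it starts a new flash.
        """
        flashes = []                     # each flash is a list of group indices
        for i in np.argsort(times_s):
            for flash in flashes:
                j = flash[-1]            # compare against the most recent member
                close_in_time = times_s[i] - times_s[j] <= dt_max
                close_in_space = np.linalg.norm(positions_km[i] - positions_km[j]) <= dr_max
                if close_in_time and close_in_space:
                    flash.append(i)
                    break
            else:
                flashes.append([i])
        return flashes

    t = np.array([0.00, 0.10, 0.20, 1.00, 1.05])
    xy = np.array([[0, 0], [1, 1], [2, 1], [40, 40], [41, 40]], dtype=float)
    print(cluster_groups_into_flashes(t, xy))   # two flashes: [0, 1, 2] and [3, 4]
    ```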

  13. Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process

    NASA Astrophysics Data System (ADS)

    Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh

    2016-06-01

    Layered manufacturing machines use the stereolithography (STL) file to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, the result is geometric distortion and chordal error. Parts manufactured with this file might not satisfy geometric dimensioning and tolerance requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The mesh subdivision algorithms considered in this work are the modified butterfly subdivision technique, the Loop subdivision technique, and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is most suitable for the geometry under consideration. The wheel cap part is then manufactured on a Stratasys MOJO FDM machine. The surface roughness of the part is measured on a Talysurf surface roughness tester.

  14. Cognitive Correlates of Performance in Algorithms in a Computer Science Course for High School

    ERIC Educational Resources Information Center

    Avancena, Aimee Theresa; Nishihara, Akinori

    2014-01-01

    Computer science for high school faces many challenging issues. One of these is whether the students possess the appropriate cognitive ability for learning the fundamentals of computer science. Online tests were created based on known cognitive factors and fundamental algorithms and were implemented among the second grade students in the…

  15. A Comparative Study of Classification and Regression Algorithms for Modelling Students' Academic Performance

    ERIC Educational Resources Information Center

    Strecht, Pedro; Cruz, Luís; Soares, Carlos; Mendes-Moreira, João; Abreu, Rui

    2015-01-01

    Predicting the success or failure of a student in a course or program is a problem that has recently been addressed using data mining techniques. In this paper we evaluate some of the most popular classification and regression algorithms on this problem. We address two problems: prediction of approval/failure and prediction of grade. The former is…

  16. Comparing Learning Performance of Students Using Algorithm Visualizations Collaboratively on Different Engagement Levels

    ERIC Educational Resources Information Center

    Laakso, Mikko-Jussi; Myller, Niko; Korhonen, Ari

    2009-01-01

    In this paper, two emerging learning and teaching methods have been studied: collaboration in concert with algorithm visualization. When visualizations have been employed in collaborative learning, collaboration introduces new challenges for the visualization tools. In addition, new theories are needed to guide the development and research of the…

  17. Improving performance of computer-aided detection of pulmonary embolisms by incorporating a new pulmonary vascular-tree segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xingwei; Song, XiaoFei; Chapman, Brian E.; Zheng, Bin

    2012-03-01

    We developed a new pulmonary vascular tree segmentation/extraction algorithm. The purpose of this study was to assess whether adding this new algorithm to our previously developed computer-aided detection (CAD) scheme for pulmonary embolism (PE) could improve the CAD performance (in particular, reducing the false positive detection rate). A dataset containing 12 CT examinations with 384 verified pulmonary embolism regions associated with 24 three-dimensional (3-D) PE lesions was selected for this study. Our new CAD scheme includes the following image processing and feature classification steps. (1) A 3-D region growing process followed by a rolling-ball algorithm was utilized to segment the lung areas. (2) The complete pulmonary vascular trees were extracted by combining two approaches: an intensity-based region growing to extract the larger vessels and a vessel enhancement filtering to extract the smaller vessel structures. (3) A toboggan algorithm was implemented to identify suspicious PE candidates in the segmented lung or vessel areas. (4) A three-layer artificial neural network (ANN) with the topology 27-10-1 was developed to reduce false positive detections. (5) A k-nearest neighbor (KNN) classifier optimized by a genetic algorithm was used to compute detection scores for the PE candidates. (6) A grouping scoring method was designed to detect the final PE lesions in three dimensions. The study showed that integrating the pulmonary vascular tree extraction algorithm into the CAD scheme reduced the false positive rate by 16.2%. For case-based 3-D PE lesion detection, the integrated CAD scheme achieved 62.5% detection sensitivity with 17.1 false-positive lesions per examination.

  18. The Computational Complexity, Parallel Scalability, and Performance of Atmospheric Data Assimilation Algorithms

    NASA Technical Reports Server (NTRS)

    Lyster, Peter M.; Guo, J.; Clune, T.; Larson, J. W.; Atlas, Robert (Technical Monitor)

    2001-01-01

    The computational complexity of algorithms for Four Dimensional Data Assimilation (4DDA) at NASA's Data Assimilation Office (DAO) is discussed. In 4DDA, observations are assimilated with the output of a dynamical model to generate best estimates of the states of the system. It is thus a mapping problem, whereby scattered observations are converted into regular, accurate maps of wind, temperature, moisture, and other variables. The DAO is developing and using 4DDA algorithms that provide these datasets, or analyses, in support of Earth System Science research. Two large-scale algorithms are discussed. The first approach, the Goddard Earth Observing System Data Assimilation System (GEOS DAS), uses an atmospheric general circulation model (GCM) and an observation-space based analysis system, the Physical-space Statistical Analysis System (PSAS). GEOS DAS is very similar to global meteorological weather forecasting data assimilation systems, but is used at NASA for climate research. Systems of this size typically run at between 1 and 20 gigaflop/s. The second approach, the Kalman filter, uses a more consistent algorithm to determine the forecast error covariance matrix than does GEOS DAS. For atmospheric assimilation, the gridded dynamical fields typically have more than 10^6 variables; therefore, the full error covariance matrix may be in excess of a teraword. For the Kalman filter this problem can easily scale to petaflop/s proportions. We discuss the computational complexity of GEOS DAS and our implementation of the Kalman filter. We also discuss and quantify some of the technical issues and limitations in developing efficient, in terms of wall clock time, and scalable parallel implementations of the algorithms.
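
    For readers unfamiliar with the Kalman analysis step referred to above, the sketch below shows the standard update equations on a toy three-variable state; the state, covariances, and observation operator are illustrative assumptions and bear no relation to the operational GEOS DAS or Kalman filter systems.

    ```python
    import numpy as np

    def kalman_analysis(xb, Pb, y, H, R):
        """One Kalman analysis step: blend a background state with observations.

        xb : background (forecast) state estimate
        Pb : background error covariance
        y  : observation vector; H: observation operator; R: observation error covariance
        """
        S = H @ Pb @ H.T + R                     # innovation covariance
        K = Pb @ H.T @ np.linalg.inv(S)          # Kalman gain
        xa = xb + K @ (y - H @ xb)               # analysis state
        Pa = (np.eye(len(xb)) - K @ H) @ Pb      # analysis error covariance
        return xa, Pa

    # Toy example: a 3-variable state observed at its first two components.
    xb = np.array([10.0, 5.0, -2.0])
    Pb = np.diag([4.0, 4.0, 4.0])
    H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    R = np.diag([1.0, 1.0])
    y = np.array([12.0, 4.0])
    xa, Pa = kalman_analysis(xb, Pb, y, H, R)
    print(xa)
    ```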

  19. Computational performance comparison of wavefront reconstruction algorithms for the European Extremely Large Telescope on multi-CPU architecture.

    PubMed

    Feng, Lu; Fedrigo, Enrico; Béchet, Clémentine; Brunner, Elisabeth; Pirani, Werther

    2012-06-01

    The European Southern Observatory (ESO) is studying the next generation giant telescope, called the European Extremely Large Telescope (E-ELT). With a 42 m diameter primary mirror, it is a significant step beyond currently existing telescopes. Therefore, the E-ELT with its instruments poses new challenges in terms of cost and computational complexity for the control system, including its adaptive optics (AO). Since the conventional matrix-vector multiplication (MVM) method successfully used so far for AO wavefront reconstruction cannot be efficiently scaled to the size of the AO systems on the E-ELT, faster algorithms are needed. Among the recently developed wavefront reconstruction algorithms, three are studied in this paper from the point of view of design, implementation, and absolute speed on three multicore multi-CPU platforms. We focus on a single-conjugate AO system for the E-ELT. The algorithms are the MVM, the Fourier transform reconstructor (FTR), and the fractal iterative method (FRiM). This study examines how these algorithms scale with an increasing number of CPUs involved in the computation. We discuss implementation strategies, depending on various CPU architecture constraints, and we present the first quantitative execution times so far at the E-ELT scale. MVM suffers from a large computational burden, making the current computing platform undersized to reach timings short enough for AO wavefront reconstruction. In our study, the FTR currently provides the fastest reconstruction. FRiM is a recently developed algorithm, and several strategies are investigated and presented here in order to implement it for real-time AO wavefront reconstruction and to optimize its execution time. The difficulty of parallelizing the algorithm on such an architecture is highlighted. We also show that FRiM can provide interesting scalability using a sparse matrix approach. PMID:22695596

  20. Performance analysis of distributed symmetric sparse matrix vector multiplication algorithm for multi-core architectures

    SciTech Connect

    Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; Maris, Pieter; Vary, James P.

    2015-07-14

    Sparse matrix vector multiply (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large scale computations. In this paper, our target systems are high-end multi-core architectures and we use a message passing interface (MPI) + open multiprocessing (OpenMP) hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare it with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topologies. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.
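
    The symmetry exploitation described above amounts to storing only one triangle of the matrix and letting each stored entry update two result components; a serial Python sketch of that idea (not the MPI+OpenMP implementation studied in the paper) is given below.

    ```python
    import numpy as np
    from scipy.sparse import random as sparse_random, triu

    def symmetric_spmv(upper, x):
        """y = A @ x for symmetric A, given only its upper triangle (COO format).

        Each stored off-diagonal entry a_ij contributes to both y[i] and y[j],
        so roughly half the nonzeros (and half the memory traffic) suffice.
        """
        y = np.zeros_like(x)
        for i, j, v in zip(upper.row, upper.col, upper.data):
            y[i] += v * x[j]
            if i != j:
                y[j] += v * x[i]
        return y

    rng = np.random.default_rng(0)
    M = sparse_random(200, 200, density=0.05, random_state=0)
    A = M + M.T                        # make a symmetric test matrix
    U = triu(A).tocoo()                # keep only the upper triangle
    x = rng.random(200)
    print(np.allclose(symmetric_spmv(U, x), A @ x))   # True
    ```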

  1. Performance analysis of distributed symmetric sparse matrix vector multiplication algorithm for multi-core architectures

    DOE PAGES Beta

    Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; Maris, Pieter; Vary, James P.

    2015-07-14

    Sparse matrix vector multiply (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large scale computations. In this paper, our target systems are high-end multi-core architectures and we use a message passing interface (MPI) + open multiprocessing (OpenMP) hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare it with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topologies. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.

  2. CUDA-based high-performance computing of the S-BPF algorithm with no-waiting pipelining

    NASA Astrophysics Data System (ADS)

    Deng, Lin; Yan, Bin; Chang, Qingmei; Han, Yu; Zhang, Xiang; Xi, Xiaoqi; Li, Lei

    2015-10-01

    The backprojection-filtration (BPF) algorithm has become a good solution for local reconstruction in cone-beam computed tomography (CBCT). However, the reconstruction speed of BPF is a severe limitation for clinical applications. The selective-backprojection filtration (S-BPF) algorithm is developed to improve the parallel performance of BPF by selective backprojection. Furthermore, the general-purpose graphics processing unit (GP-GPU) is a popular tool for accelerating the reconstruction. Much work has been performed aiming at optimization of the cone-beam back-projection. As the cone-beam back-projection becomes faster, data transportation takes a much bigger proportion of the reconstruction time than before. This paper focuses on minimizing the total reconstruction time of the S-BPF algorithm by hiding the data transportation among hard disk, CPU and GPU. Based on an analysis of the S-BPF algorithm, several strategies are implemented: (1) asynchronous calls are used to overlap the execution of the CPU and the GPU, (2) an innovative strategy is applied to obtain the DBP image so as to hide the transport time effectively, (3) two streams for data transportation and calculation are synchronized by cudaEvent in the inverse finite Hilbert transform on the GPU. Our main contribution is an implementation of the S-BPF algorithm in which the GPU computes continuously and the data transportation cost is hidden: a 512^3 volume is reconstructed in less than 0.7 s on a single Tesla-based K20 GPU from 182 projection views of 512^2 pixels each.
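
    The overlap of data movement with computation described above can be illustrated with a simple double-buffering pattern; the sketch below uses Python threads and stand-in load/compute functions (illustrative assumptions, not the authors' CUDA streams implementation) to show how prefetching the next chunk hides its transfer time behind the current computation.

    ```python
    import time
    from concurrent.futures import ThreadPoolExecutor

    def load_projection(view):
        """Stand-in for reading one projection view from disk (or a host-to-device copy)."""
        time.sleep(0.01)
        return view

    def backproject(view):
        """Stand-in for the per-view backprojection computation."""
        time.sleep(0.02)
        return view * 2

    def reconstruct(num_views):
        results = []
        with ThreadPoolExecutor(max_workers=1) as io:
            pending = io.submit(load_projection, 0)              # prefetch the first view
            for v in range(num_views):
                data = pending.result()                          # waits only if I/O is the slower stage
                if v + 1 < num_views:
                    pending = io.submit(load_projection, v + 1)  # start the next load immediately
                results.append(backproject(data))                # compute while the next load runs
        return results

    start = time.time()
    reconstruct(50)
    print(f"pipelined: {time.time() - start:.2f} s")  # roughly 1.0 s here, versus ~1.5 s fully serial
    ```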

  3. Performance of soil moisture retrieval algorithms using multiangular L band brightness temperatures

    NASA Astrophysics Data System (ADS)

    Piles, M.; Camps, A.; Vall-Llossera, M.; Monerris, A.; Talone, M.; Sabater, J. M.

    2010-06-01

    The Soil Moisture and Ocean Salinity (SMOS) mission of the European Space Agency was successfully launched in November 2009 to provide global surface soil moisture and sea surface salinity maps. The SMOS single payload is the Microwave Imaging Radiometer by Aperture Synthesis (MIRAS), an L band two-dimensional aperture synthesis interferometric radiometer with multiangular and polarimetric imaging capabilities. SMOS-derived soil moisture products are expected to have an accuracy of 0.04 m^3/m^3 over 50 × 50 km^2 and a revisit time of 3 days. Previous studies have remarked on the necessity of combining SMOS brightness temperatures with auxiliary data to achieve the required accuracy. However, the required auxiliary data and the optimal soil moisture retrieval setup have yet to be determined. Also, the satellite operation mode (dual polarization or full polarimetric) is an open issue to be addressed during the commissioning phase activities. In this paper, an in-depth study of the different retrieval configurations and ancillary data needed for the retrieval of soil moisture from future SMOS observations is presented. A dedicated L2 Processor Simulator software has been developed to obtain soil moisture estimates from SMOS-like brightness temperatures generated using the SMOS End-to-End Performance Simulator (SEPS). Full-polarimetric brightness temperatures are generated in SEPS, and soil moisture retrievals are performed using vertical (Tvv) and horizontal (Thh) brightness temperatures and using the first Stokes parameter (TI). Results show the accuracy obtained with the different retrieval setups for four main surface conditions combining wet and dry soils with bare and vegetation-covered surfaces. Soil moisture retrievals using TI exhibit a significantly better performance than those using Thh and Tvv in all scenarios, which indicates that the dual-polarization mode should not be disregarded. The uncertainty of the ancillary data used in the minimization process and its effect on

  4. Drowsiness/alertness algorithm development and validation using synchronized EEG and cognitive performance to individualize a generalized model

    PubMed Central

    Johnson, Robin R.; Popovic, Djordje P.; Olmstead, Richard E.; Stikic, Maja; Levendowski, Daniel J.; Berka, Chris

    2011-01-01

    A great deal of research over the last century has focused on drowsiness/alertness detection, as fatigue-related physical and cognitive impairments pose a serious risk to public health and safety. Available drowsiness/alertness detection solutions are unsatisfactory for a number of reasons: 1) lack of generalizability, 2) failure to address individual variability in generalized models, and/or 3) lack of a portable, untethered application. The current study aimed to address these issues and determine whether an individualized electroencephalography (EEG) based algorithm could be defined to track performance decrements associated with sleep loss, as this is the first step in developing a field-deployable drowsiness/alertness detection system. The results indicated that an EEG-based algorithm, individualized using a series of brief "identification" tasks, was able to effectively track performance decrements associated with sleep deprivation. Future development will address the need for the algorithm to predict performance decrements due to sleep loss, and provide field applicability. PMID:21419826

  5. A performance comparison of static VAr compensator based on Goertzel and FFT algorithm and experimental validation.

    PubMed

    Kececioglu, O Fatih; Gani, Ahmet; Sekkeli, Mustafa

    2016-01-01

    The main objective of the present paper is to introduce a new approach for measuring and calculating fundamental power components in the case of various distorted waveforms, including those containing harmonics. The parameters of active, reactive, and apparent power and power factor are measured and calculated by using the Goertzel algorithm instead of the commonly used fast Fourier transform. The main advantage of utilizing the Goertzel algorithm is to minimize the computational load and trigonometric equations. The parameters measured with the new technique are applied to a fixed capacitor-thyristor controlled reactor based static VAr compensation system to achieve accurate power factor correction for the first time. This study is implemented both in simulation and experimentally. PMID:27047717
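
    For a single frequency of interest, the Goertzel recurrence referred to above evaluates one DFT bin at a cost linear in the number of samples; a minimal sketch follows, with the sampling rate and test tone chosen arbitrarily.

    ```python
    import math

    def goertzel_power(samples, target_freq, sample_rate):
        """Power of a single frequency bin via the Goertzel recurrence.

        Cheaper than a full FFT when only a few frequencies are of interest.
        """
        n = len(samples)
        k = round(n * target_freq / sample_rate)   # nearest DFT bin
        w = 2.0 * math.pi * k / n
        coeff = 2.0 * math.cos(w)
        s_prev = s_prev2 = 0.0
        for x in samples:
            s = x + coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

    # Example: detect a 1 kHz tone sampled at 8 kHz.
    fs, f0 = 8000.0, 1000.0
    sig = [math.sin(2 * math.pi * f0 * t / fs) for t in range(400)]
    print(goertzel_power(sig, 1000.0, fs) > goertzel_power(sig, 1500.0, fs))  # True
    ```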

  6. Performance analysis results of a battery fuel gauge algorithm at multiple temperatures

    NASA Astrophysics Data System (ADS)

    Balasingam, B.; Avvari, G. V.; Pattipati, K. R.; Bar-Shalom, Y.

    2015-01-01

    Evaluating a battery fuel gauge (BFG) algorithm is a challenging problem because there are no reliable mathematical models to represent the complex features of a Li-ion battery, such as hysteresis and relaxation effects, temperature effects on parameters, aging, power fade (PF), and capacity fade (CF) with respect to the chemical composition of the battery. The existing literature is largely focused on developing different BFG strategies, and BFG validation has received little attention. In this paper, using hardware-in-the-loop (HIL) data collected from three Li-ion batteries at nine different temperatures ranging from -20 °C to 40 °C, we demonstrate detailed validation results for a BFG algorithm. The validation is based on three different BFG metrics; we provide implementation details for these metrics by proposing three BFG validation load profiles that satisfy varying levels of user requirements.

  7. Performance analysis of compression algorithms for noisy multispectral underwater images of small targets

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    1997-07-01

    Underwater (UW) imagery presents several challenging problems for the developer of automated target recognition (ATR) algorithms, due to the presence of noise, point-spread function (PSF) effects resulting from camera or media inhomogeneities, and loss of contrast and resolution due to in-water scattering and absorption. Additional problems include the effects of sensor noise upon lossy image compression transformations, which can produce feature aliasing in the reconstructed imagery. Low-distortion, high-compression image transformations have been developed that facilitate transmission along a low-bandwidth uplink of compressed imagery acquired by a UW vehicle to a surface processing or viewing station. In early research that employed visual pattern image coding and the recently developed BLAST transform, compression ratios ranging from 6,500:1 to 16,500:1 were reported, based on prefiltered six-band multispectral imagery of resolution 720 X 480 pixels. The prefiltering step, which removes unwanted background objects, is key to achieving high compression. This paper contains an analysis of several common compression algorithms, together with BLAST, to determine the compression ratio, information loss, and computational efficiency achievable on a database of UW imagery. Information loss is derived from the modulation transfer function, as well as several measures of spatial complexity that have been reported in the literature. Algorithms are expressed in image algebra, a concise notation that rigorously unifies linear and nonlinear mathematics in the image domain and has been implemented on a variety of workstations and parallel processors. Thus, our algorithms are feasible, widely portable, and can be implemented on digital signal processors and fast parallel machines.

  8. Effects of using a 3D model on the performance of vision algorithms

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Lyons, Damian; Lynch, Robert

    2015-05-01

    In previous work, we have shown how a 3D model can be built in real time and synchronized with the environment. This world model permits a robot to predict dynamics in its environment and classify behaviors. In this paper we evaluate the effect of such a 3D model on the accuracy and speed of various computer vision algorithms, including tracking, optical flow and stereo disparity. We report results based on the KITTI database and on our own videos.

  9. The performance of flux-split algorithms in high-speed viscous flows

    NASA Astrophysics Data System (ADS)

    Gaitonde, Datta; Shang, J. S.

    1992-01-01

    The algorithms are investigated in terms of their behavior in 2D perfect gas laminar viscous flows, with attention given to the van Leer, Modified Steger-Warming (MSW), and Roe methods. The techniques are studied in the context of examples including a blunt-body flow at Mach 16, a Mach-14 flow past a 24-deg compression corner, and a Mach-8 type-IV shock-shock interaction. Existing experimental values are compared to the results of the corresponding grid-resolution studies. The algorithms indicate similar surface pressures for the blunt-body and corner flows, but the van Leer approach predicts a very high heat-transfer value. Anomalous carbuncle solutions appear in the blunt-body solutions for the MSW and Roe techniques. Accurate predictions of the separated flow regions are found with the MSW method, the Roe scheme, and the finer grids of the van Leer algorithm, but only the MSW scheme predicts an oscillatory supersonic jet structure in the limit cycle.

  10. A Semi-Automated Machine Learning Algorithm for Tree Cover Delineation from 1-m Naip Imagery Using a High Performance Computing Architecture

    NASA Astrophysics Data System (ADS)

    Basu, S.; Ganguly, S.; Nemani, R. R.; Mukhopadhyay, S.; Milesi, C.; Votava, P.; Michaelis, A.; Zhang, G.; Cook, B. D.; Saatchi, S. S.; Boyda, E.

    2014-12-01

    Accurate tree cover delineation is a useful instrument in the derivation of Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree cover delineation in high to coarse resolution satellite imagery, but most of them do not scale to the terabytes of data typical of these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data, as obtained from the National Agriculture Imagery Program (NAIP), for deriving tree cover estimates for the whole of the Continental United States, using a High Performance Computing Architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Fields (CRF), which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by incorporating expert knowledge through the relabeling of misclassified image patches. This leads to a significant improvement in the true positive rates and a reduction in false positive rates. The tree cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 sq. miles. Our framework produced correct detection rates of around 85% for fragmented forests and 70% for urban tree cover areas, with false positive rates lower than 3% for both regions. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR high-resolution canopy height model show the effectiveness of our algorithm in generating accurate high-resolution tree cover maps.

  11. G/SPLINES: A hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's genetic algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1991-01-01

    G/SPLINES are a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's Genetic Algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINE algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least squares computations, and allows significantly larger problems to be considered.

  12. Performance comparison of six independent components analysis algorithms for fetal signal extraction from real fMCG data

    NASA Astrophysics Data System (ADS)

    Hild, Kenneth E.; Alleva, Giovanna; Nagarajan, Srikantan; Comani, Silvia

    2007-01-01

    In this study we compare the performance of six independent components analysis (ICA) algorithms on 16 real fetal magnetocardiographic (fMCG) datasets for the application of extracting the fetal cardiac signal. We also compare the extraction results for real data with the results previously obtained for synthetic data. The six ICA algorithms are FastICA, CubICA, JADE, Infomax, MRMI-SIG and TDSEP. The results obtained using real fMCG data indicate that the FastICA method consistently outperforms the others in regard to separation quality and that the performance of an ICA method that uses temporal information suffers in the presence of noise. These two results confirm the previous results obtained using synthetic fMCG data. There were also two notable differences between the studies based on real and synthetic data. The differences are that all six ICA algorithms are independent of gestational age and sensor dimensionality for synthetic data, but depend on gestational age and sensor dimensionality for real data. It is possible to explain these differences by assuming that the number of point sources needed to completely explain the data is larger than the dimensionality used in the ICA extraction.
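
    As a small, self-contained illustration of the kind of blind source separation compared above, the sketch below applies scikit-learn's FastICA to a synthetic two-source mixture; the waveforms, mixing matrix, and noise level are assumptions and are not fMCG data.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 8, 2000)

    # Two synthetic "sources": a fast quasi-periodic signal (stand-in for a fetal
    # cardiac trace) and a slower one (stand-in for the maternal contribution).
    s1 = np.sign(np.sin(2 * np.pi * 2.3 * t))
    s2 = np.sin(2 * np.pi * 1.2 * t)
    S = np.c_[s1, s2]

    A = np.array([[1.0, 0.6], [0.4, 1.0], [0.8, 0.3]])   # mixing into 3 "sensors"
    X = S @ A.T + 0.05 * rng.standard_normal((len(t), 3))

    ica = FastICA(n_components=2, random_state=0)
    recovered = ica.fit_transform(X)        # columns approximate the original sources
    print(recovered.shape)                  # (2000, 2)
    ```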

  13. Advanced Transport Delay Compensation Algorithms: Results of Delay Measurement and Piloted Performance Tests

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.

    2007-01-01

    This report summarizes the results of delay measurement and piloted performance tests that were conducted to assess the effectiveness of the adaptive compensator and the state space compensator for alleviating the phase distortion of transport delay in the visual system in the VMS at the NASA Langley Research Center. Piloted simulation tests were conducted to assess the effectiveness of two novel compensators in comparison to the McFarland predictor and the baseline system with no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. The glideslope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper rating of the handling qualities were employed for the analyses. The overall analyses show that the adaptive predictor results in slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator. The analyses also show that the state space predictor is fairly superior for short delay and significantly superior for long delay than the McFarland compensator.

  14. MixSim : An R Package for Simulating Data to Study Performance of Clustering Algorithms

    SciTech Connect

    Melnykov, Volodymyr; Chen, Wei-Chen; Maitra, Ranjan

    2012-01-01

    The R package MixSim is a new tool that allows simulating mixtures of Gaussian distributions with different levels of overlap between mixture components. Pairwise overlap, defined as the sum of two misclassification probabilities, measures the degree of interaction between components and can be readily employed to control the clustering complexity of datasets simulated from mixtures. These datasets can then be used for systematic performance investigation of clustering and finite mixture modeling algorithms. Other capabilities of MixSim include computing the exact overlap for Gaussian mixtures, simulating Gaussian and non-Gaussian data, simulating outliers and noise variables, calculating various measures of agreement between two partitionings, and constructing parallel distribution plots for the graphical display of finite mixture models. All features of the package are illustrated in great detail. The utility of the package is highlighted through a small comparison study of several popular clustering algorithms.

  15. Effect of Algorithm Aggressiveness on the Performance of the Hypoglycemia-Hyperglycemia Minimizer (HHM) System

    PubMed Central

    McCann, Thomas W.; Rhein, Kathleen; Dassau, Eyal; Breton, Marc D.; Patek, Stephen D.; Anhalt, Henry; Kovatchev, Boris P.; Doyle, Francis J.; Anderson, Stacey M.; Zisser, Howard; Venugopalan, Ramakrishna

    2014-01-01

    Background: The Hypoglycemia-Hyperglycemia Minimizer (HHM) System aims to mitigate glucose excursions by preemptively modulating insulin delivery based on continuous glucose monitor (CGM) measurements. The “aggressiveness factor” is a key parameter in the HHM System algorithm, affecting how readily the system adjusts insulin infusion in response to changing CGM levels. Methods: Twenty adults with type 1 diabetes were studied in closed-loop in a clinical research center for approximately 26 hours. This analysis focused on the effect of the aggressiveness factor on the insulin dosing characteristics of the algorithm and, to a lesser extent, on the glucose control results observed. Results: As the aggressiveness factor increased from conservative to medium to aggressive: the maximum observed insulin dose delivered by the algorithm—which is designed to give doses that are corrective in nature every 5 minutes—increased (1.00 vs 1.15 vs 2.20 U, respectively); tendency to adhere to the subject’s nominal basal dose decreased (61.9% vs 56.6% vs 53.4%); and readiness to decrease insulin below basal also increased (18.4% vs 19.4% vs 25.2%). Glucose analyses by both CGM and Yellow Springs Instruments (YSI) indicated that the aggressive setting of the algorithm resulted in the least time spent at levels >180 mg/dL, and the most time spent between 70-180 mg/dL. There was no severe hyperglycemia, diabetic ketoacidosis, or severe hypoglycemia for any of the aggressiveness values investigated. Conclusions: These analyses underscore the importance of investigating the sensitivity of the HHM System to its key parameters, such as the aggressiveness factor, to guide future development decisions. PMID:24876443

  16. Effect of algorithm aggressiveness on the performance of the Hypoglycemia-Hyperglycemia Minimizer (HHM) System.

    PubMed

    Finan, Daniel A; McCann, Thomas W; Rhein, Kathleen; Dassau, Eyal; Breton, Marc D; Patek, Stephen D; Anhalt, Henry; Kovatchev, Boris P; Doyle, Francis J; Anderson, Stacey M; Zisser, Howard; Venugopalan, Ramakrishna

    2014-07-01

    The Hypoglycemia-Hyperglycemia Minimizer (HHM) System aims to mitigate glucose excursions by preemptively modulating insulin delivery based on continuous glucose monitor (CGM) measurements. The "aggressiveness factor" is a key parameter in the HHM System algorithm, affecting how readily the system adjusts insulin infusion in response to changing CGM levels. Twenty adults with type 1 diabetes were studied in closed-loop in a clinical research center for approximately 26 hours. This analysis focused on the effect of the aggressiveness factor on the insulin dosing characteristics of the algorithm and, to a lesser extent, on the glucose control results observed. As the aggressiveness factor increased from conservative to medium to aggressive: the maximum observed insulin dose delivered by the algorithm—which is designed to give doses that are corrective in nature every 5 minutes—increased (1.00 vs 1.15 vs 2.20 U, respectively); tendency to adhere to the subject's nominal basal dose decreased (61.9% vs 56.6% vs 53.4%); and readiness to decrease insulin below basal also increased (18.4% vs 19.4% vs 25.2%). Glucose analyses by both CGM and Yellow Springs Instruments (YSI) indicated that the aggressive setting of the algorithm resulted in the least time spent at levels >180 mg/dL, and the most time spent between 70-180 mg/dL. There was no severe hyperglycemia, diabetic ketoacidosis, or severe hypoglycemia for any of the aggressiveness values investigated. These analyses underscore the importance of investigating the sensitivity of the HHM System to its key parameters, such as the aggressiveness factor, to guide future development decisions. PMID:24876443

  17. Algorithms Performance Investigation of a Generalized Spreader-Bar Detection System

    SciTech Connect

    Robinson, Sean M.; Ashbaker, Eric D.; Hensley, Walter K.; Schweppe, John E.; Sandness, Gerald A.; Erikson, Luke E.; Ely, James H.

    2010-10-01

    A “generic” gantry-crane-mounted spreader bar detector has been simulated in the Monte-Carlo radiation transport code MCNP [1]. This model is intended to represent the largest realistically feasible number of detector crystals in a single gantry-crane apparatus sitting atop an InterModal Cargo Container (IMCC). Detectors were chosen from among large, commonly available sodium iodide (NaI) crystal scintillators and spaced as evenly as is thought possible for a detector apparatus attached to a gantry crane. Several scenarios were simulated with this model, based on a single IMCC being moved between a ship’s deck or cargo hold and the dock. During measurement, the gantry crane carries the IMCC through the air and lowers it onto a receiving vehicle (e.g., a chassis or a bomb cart). The case of an IMCC being moved through the air from an unknown radiological environment to the ground is somewhat complex; for this initial study a single location was chosen at which to simulate background. An HEU source based on earlier validated models was used and placed at varying depths in a wood cargo. Many statistical realizations of these scenarios were constructed from high-statistics simulations of the component spectra. The simulated data were evaluated by several different algorithms, each with its threshold set to a statistical-only false alarm probability of 0.001, and the resulting Minimum Detectable Amounts were generated for each cargo depth possible within the IMCC. Using GADRAS as an anomaly detector provided the greatest detection sensitivity, and it is expected that an algorithm of this kind will be of great use for detecting highly shielded sources.
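
    A hedged illustration of the thresholding step described above, in which an alarm level is chosen so that background alone triggers with the stated statistical-only false alarm probability of 0.001. The sketch assumes a simple Poisson gross-count background model; the study's actual algorithms (including the GADRAS-based anomaly detector) operate on full spectra and are more sophisticated.

    from scipy.stats import poisson

    def alarm_threshold(background_mean_counts, false_alarm_prob=1e-3):
        """Smallest count k such that P(counts > k | background) <= false_alarm_prob."""
        # ppf returns the smallest k with CDF(k) >= 1 - false_alarm_prob, so alarming
        # only when observed counts exceed k keeps the statistical-only false alarm
        # probability at or below the target.
        return int(poisson.ppf(1.0 - false_alarm_prob, background_mean_counts))

    # Example: a background of 400 expected counts in the measurement interval.
    # print(alarm_threshold(400))  # alarm when gross counts exceed this value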

  18. The Effect of Interfering Ions on Search Algorithm Performance for ETD Data

    PubMed Central

    Good, David M.; Wenger, Craig D.; Coon, Joshua J.

    2009-01-01

    Collision-activated dissociation (CAD) and electron-transfer dissociation (ETD) each produce spectra containing unique features. Though several database search algorithms (e.g., SEQUEST, Mascot, and OMSSA) have been modified to search ETD data, these modifications consist chiefly of the ability to search for c- and z•-ions; additional ETD-specific features are often unaccounted for and may hinder identification. Removal of these features via spectral processing increased total search sensitivity by ∼20% for both human and yeast datasets; unique identifications increased by ∼17% for the yeast datasets and ∼16% for the human dataset. PMID:19899080

  19. Improvement of Image Quality and Diagnostic Performance by an Innovative Motion-Correction Algorithm for Prospectively ECG Triggered Coronary CT Angiography

    PubMed Central

    Lu, Bin; Yan, Hong-Bing; Mu, Chao-Wei; Gao, Yang; Hou, Zhi-Hui; Wang, Zhi-Qiang; Liu, Kun; Parinella, Ashley H.; Leipsic, Jonathon A.

    2015-01-01

    Objective: To investigate the effect of a novel motion-correction algorithm (SnapShot Freeze, SSF) on image quality and diagnostic accuracy in patients undergoing prospectively ECG-triggered CCTA without administering rate-lowering medications. Materials and Methods: Forty-six consecutive patients suspected of CAD prospectively underwent CCTA using prospective ECG-triggering without rate control and invasive coronary angiography (ICA). Image quality, interpretability, and diagnostic performance of SSF were compared with conventional multisegment reconstruction without SSF, using ICA as the reference standard. Results: All subjects (35 men, 57.6 ± 8.9 years) successfully underwent ICA and CCTA. Mean heart rate was 68.8 ± 8.4 beats/min (range: 50–88 beats/min) without rate-controlling medications during CT scanning. Overall median image quality score (graded 1–4) was significantly increased from 3.0 to 4.0 by the new algorithm in comparison to conventional reconstruction. Overall interpretability was significantly improved, with a significant reduction in the number of non-diagnostic segments (690 of 694, 99.4% vs 659 of 694, 94.9%; P < 0.001). However, only the right coronary artery (RCA) showed a statistically significant difference (45 of 46, 97.8% vs 35 of 46, 76.1%; P = 0.004) on a per-vessel basis in this regard. Diagnostic accuracy for detecting ≥50% stenosis was improved using the motion-correction algorithm on per-vessel [96.2% (177/184) vs 87.0% (160/184); P = 0.002] and per-segment [96.1% (667/694) vs 86.6% (601/694); P < 0.001] levels, but there was not a statistically significant improvement on a per-patient level [97.8% (45/46) vs 89.1% (41/46); P = 0.203]. By artery analysis, diagnostic accuracy was improved only for the RCA [97.8% (45/46) vs 78.3% (36/46); P = 0.007]. Conclusion: The intracycle motion correction algorithm significantly improved image quality and diagnostic interpretability in patients undergoing CCTA with prospective ECG triggering and

  20. Cloud Algorithm Design and Performance for the 2002 Geoscience Laser Altimeter System Mission

    NASA Technical Reports Server (NTRS)

    Spinhirne, J. D.; Palm, S. P.; Hart, W. D.; Hlavka, D. L.; Mahesh, A.; Starr, David (Technical Monitor)

    2002-01-01

    A satellite-borne lidar instrument, the Geoscience Laser Altimeter System (GLAS), is to be launched in late 2002 and will provide continuous profiling of atmospheric clouds and aerosol on a global basis. Data processing algorithms have been developed to provide operational data products in near real time. Basic data products for cloud observations are the heights of the top and bottom of single to multiple cloud layers and the calibrated observed lidar backscatter cross section up to the level of signal attenuation. In addition, the optical depth and vertical profile of visible extinction cross section of many transmissive cloud layers and most haze layers are to be derived. The optical thickness is derivable in some cases from the attenuation of the molecular scattering below cloud base. In other cases an assumption of the scattering phase function is required. In both cases an estimated correction for multiple scattering is required. The data processing algorithms have been tested in part with aircraft measurements used to simulate satellite data. The GLAS lidar observations will be made from an orbit that will allow intercomparison with all other existing satellite cloud measurements.

  1. Architecture-Aware Algorithms for Scalable Performance and Resilience on Heterogeneous Architectures

    SciTech Connect

    Dongarra, Jack

    2013-03-14

    There is a widening gap between the peak performance of high performance computers and the performance realized by full applications. Over the next decade, extreme-scale systems will present major new challenges to software development that could widen the gap so much that it prevents the productive use of future DOE Leadership computers.

  2. Performance of a New Rapid Immunoassay Test Kit for Point-of-Care Diagnosis of Significant Bacteriuria.

    PubMed

    Stapleton, Ann E; Cox, Marsha E; DiNello, Robert K; Geisberg, Mark; Abbott, April; Roberts, Pacita L; Hooton, Thomas M

    2015-09-01

    Urinary tract infections (UTIs) are frequently encountered in clinical practice and most commonly caused by Escherichia coli and other Gram-negative uropathogens. We tested RapidBac, a rapid immunoassay for bacteriuria developed by Silver Lake Research Corporation (SLRC), compared with standard bacterial culture using 966 clean-catch urine specimens submitted to a clinical microbiology laboratory in an urban academic medical center. RapidBac was performed in accordance with instructions, providing a positive or negative result in 20 min. RapidBac identified as positive 245/285 (sensitivity 86%) samples with significant bacteriuria, defined as the presence of a Gram-negative uropathogen or Staphylococcus saprophyticus at ≥10(3) CFU/ml. The sensitivities for Gram-negative bacteriuria at ≥10(4) CFU/ml and ≥10(5) CFU/ml were 96% and 99%, respectively. The specificity of the test, detecting the absence of significant bacteriuria, was 94%. The sensitivity and specificity of RapidBac were similar on samples from inpatient and outpatient settings, from male and female patients, and across age groups from 18 to 89 years old, although specificity was higher in men (100%) compared with that in women (92%). The RapidBac test for bacteriuria may be effective as an aid in the point-of-care diagnosis of UTIs especially in emergency and primary care settings. PMID:26063858

  3. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging.

    PubMed

    Afik, Eldad

    2015-01-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, namely, fluid dynamics in microfluidic devices, bacteria taxis, and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when they are close to each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% and a false-detection rate of only 1%. PMID:26329642

  4. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    PubMed Central

    Afik, Eldad

    2015-01-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, namely, fluid dynamics in microfluidic devices, bacteria taxis, and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when they are close to each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% and a false-detection rate of only 1%. PMID:26329642

  5. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    NASA Astrophysics Data System (ADS)

    Afik, Eldad

    2015-09-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, namely, fluid dynamics in microfluidic devices, bacteria taxis, and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when they are close to each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% and a false-detection rate of only 1%.
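
    Since the three records above describe the detector as an offspring of the circle Hough transform, a minimal sketch of a plain circle Hough accumulator is given below for orientation. It is a generic illustration only: the input edge coordinates, radius grid, and vote resolution are assumptions, and the paper's classification-based detector and performance optimizations are not reproduced.

    import numpy as np

    def circle_hough(edge_points, shape, radii):
        """Accumulate votes for circle centers over a set of candidate radii.

        edge_points : (N, 2) array of (row, col) edge coordinates
        shape       : (rows, cols) of the source image
        radii       : iterable of candidate ring radii in pixels
        """
        thetas = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
        acc = np.zeros((len(radii), *shape), dtype=np.int32)
        for k, r in enumerate(radii):
            # Each edge point votes for all centers lying at distance r from it.
            dr = np.rint(r * np.cos(thetas)).astype(int)
            dc = np.rint(r * np.sin(thetas)).astype(int)
            for row, col in edge_points:
                rows = row + dr
                cols = col + dc
                ok = (rows >= 0) & (rows < shape[0]) & (cols >= 0) & (cols < shape[1])
                np.add.at(acc[k], (rows[ok], cols[ok]), 1)
        return acc  # peaks in acc indicate (radius, center) candidates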

  6. Performance optimization of EDFA-Raman hybrid optical amplifier using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Singh, Simranjit; Kaler, R. S.

    2015-05-01

    For the first time, a novel net-gain analytical model of an EDFA-Raman hybrid optical amplifier (HOA) is designed, and its various parameters are optimized using a genetic algorithm. Our method has been shown to be robust in the simultaneous analysis of multiple parameters, such as the Raman length, the EDFA length and its pump powers, to obtain the highest possible gain. The optimized HOA is further investigated and characterized at the system level in a 100×10 Gbps dense wavelength division multiplexed (DWDM) system with 25 GHz channel spacing. With the optimized HOA, a flat gain of >18 dB is obtained over the frequency region from 187 to 189.5 THz with a gain variation of less than 1.35 dB, without using any gain-flattening technique. The obtained noise figure is also the lowest value (<2 dB/channel) ever reported for the proposed hybrid optical amplifier at reduced channel spacing with an acceptable bit error rate.

  7. Performance of the Falling Snow Retrieval Algorithms for the Global Precipitation Measurement (GPM) Mission

    NASA Technical Reports Server (NTRS)

    Skofronick-Jackson, Gail; Munchak, Stephen J.; Ringerud, Sarah

    2016-01-01

    Retrievals of falling snow from space represent an important data set for understanding the Earth's atmospheric, hydrological, and energy cycles, especially during climate change. Estimates of falling snow must be captured to obtain the true global precipitation water cycle, snowfall accumulations are required for hydrological studies, and without knowledge of the frozen particles in clouds one cannot adequately understand the energy and radiation budgets. While satellite-based remote sensing provides global coverage of falling snow events, the science is relatively new and retrievals are still undergoing development, with challenges remaining. This work reports on the development and testing of retrieval algorithms for the Global Precipitation Measurement (GPM) mission Core Satellite, launched in February 2014.

  8. Improving TCP throughput performance on high-speed networks with a receiver-side adaptive acknowledgment algorithm

    NASA Astrophysics Data System (ADS)

    Yeung, Wing-Keung; Chang, Rocky K. C.

    1998-12-01

    A drastic TCP performance degradation was reported when TCP is operated over ATM networks. This deadlock problem is 'caused' by the high speed provided by the ATM networks, and is therefore shared by any high-speed networking technology on which TCP is run. The problems are caused by the interaction of the sender-side and receiver-side Silly Window Syndrome (SWS) avoidance algorithms, because the network's Maximum Segment Size (MSS) is no longer small when compared with the sender and receiver socket buffer sizes. Here we propose a new receiver-side adaptive acknowledgment algorithm (RSA3) to eliminate the deadlock problems while maintaining the SWS avoidance mechanisms. Unlike the current delayed acknowledgment strategy, the RSA3 does not rely on the exact values of the MSS and the receiver's buffer size to determine the acknowledgment threshold. Instead, the RSA3 periodically probes the sender to estimate the maximum amount of data that can be sent without receiving an acknowledgment from the receiver. The acknowledgment threshold is computed as 35 percent of this estimate. In this way, deadlock-free TCP transmission is guaranteed. Simulation studies have shown that the RSA3 even improves throughput performance in some non-deadlock regions, due to the quicker response taken by the RSA3 receiver. We have also evaluated different acknowledgment thresholds and found that the 35 percent setting gives the best performance when the sender and receiver buffer sizes are large.
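
    The following is a minimal sketch of the receiver-side idea summarized above: delay the acknowledgment until the data received since the last ACK reaches roughly 35 percent of the estimated maximum amount the sender can have in flight. Class and method names are illustrative, and the probing mechanism is only stubbed; this is not the paper's implementation.

    class AdaptiveAckReceiver:
        def __init__(self, fraction=0.35):
            self.fraction = fraction
            self.estimated_max_in_flight = 0   # refreshed by periodic sender probes
            self.bytes_since_last_ack = 0

        def on_probe_result(self, estimate):
            # Called when a probe of the sender yields a new in-flight estimate.
            self.estimated_max_in_flight = estimate

        def on_segment(self, nbytes):
            """Return True if an acknowledgment should be sent now."""
            self.bytes_since_last_ack += nbytes
            threshold = self.fraction * self.estimated_max_in_flight
            if self.estimated_max_in_flight and self.bytes_since_last_ack >= threshold:
                self.bytes_since_last_ack = 0
                return True
            return False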

  9. Performance enhancement of MC-CDMA system through novel sensitive bit algorithm aided turbo multi user detection.

    PubMed

    Kumaravel, Rasadurai; Narayanaswamy, Kumaratharan

    2015-01-01

    Multi carrier code division multiple access (MC-CDMA) is a promising multi carrier modulation (MCM) technique for high data rate wireless communication over frequency selective fading channels. MC-CDMA is a combination of code division multiple access (CDMA) and orthogonal frequency division multiplexing (OFDM). The OFDM part reduces multipath fading and inter symbol interference (ISI), and the CDMA part increases spectrum utilization. Advantages of this technique are its robustness to multipath propagation and improved security with minimized ISI. Nevertheless, due to the loss of orthogonality at the receiver in a mobile environment, multiple access interference (MAI) appears. MAI is one of the factors that degrade the bit error rate (BER) performance of MC-CDMA systems. Multiuser detection (MUD) and turbo coding are the two dominant techniques for enhancing the BER performance of MC-CDMA systems and overcoming the effects of MAI. In this paper a low complexity iterative soft sensitive bits algorithm (SBA) aided logarithmic Maximum a-Posteriori (Log MAP) based turbo MUD is proposed. Simulation results show that the proposed method provides better BER performance with low complexity decoding, by mitigating the detrimental effects of MAI. PMID:25714917

  10. Performance Enhancement of MC-CDMA System through Novel Sensitive Bit Algorithm Aided Turbo Multi User Detection

    PubMed Central

    Kumaravel, Rasadurai; Narayanaswamy, Kumaratharan

    2015-01-01

    Multi carrier code division multiple access (MC-CDMA) is a promising multi carrier modulation (MCM) technique for high data rate wireless communication over frequency selective fading channels. MC-CDMA is a combination of code division multiple access (CDMA) and orthogonal frequency division multiplexing (OFDM). The OFDM part reduces multipath fading and inter symbol interference (ISI), and the CDMA part increases spectrum utilization. Advantages of this technique are its robustness to multipath propagation and improved security with minimized ISI. Nevertheless, due to the loss of orthogonality at the receiver in a mobile environment, multiple access interference (MAI) appears. MAI is one of the factors that degrade the bit error rate (BER) performance of MC-CDMA systems. Multiuser detection (MUD) and turbo coding are the two dominant techniques for enhancing the BER performance of MC-CDMA systems and overcoming the effects of MAI. In this paper a low complexity iterative soft sensitive bits algorithm (SBA) aided logarithmic Maximum a-Posteriori (Log MAP) based turbo MUD is proposed. Simulation results show that the proposed method provides better BER performance with low complexity decoding, by mitigating the detrimental effects of MAI. PMID:25714917

  11. Performance analysis of different surface reconstruction algorithms for 3D reconstruction of outdoor objects from their digital images.

    PubMed

    Maiti, Abhik; Chakravarty, Debashish

    2016-01-01

    3D reconstruction of geo-objects from their digital images is a time-efficient and convenient way of studying the structural features of the object being modelled. This paper presents a 3D reconstruction methodology which can be used to generate photo-realistic, watertight 3D surfaces of irregularly shaped objects from digital image sequences of the objects. The 3D reconstruction approach described here is robust and simple and can be readily used to reconstruct a watertight 3D surface of any object from its digital image sequence. Here, digital images of different objects are used to build sparse, followed by dense, 3D point clouds of the objects. These image-obtained point clouds are then used for generation of photo-realistic 3D surfaces, using different surface reconstruction algorithms such as Poisson reconstruction and the ball-pivoting algorithm. Different control parameters of these algorithms are identified which affect the quality and computation time of the reconstructed 3D surface. The effects of these control parameters on the generation of 3D surfaces from point clouds of different density are studied. It is shown that the reconstructed surface quality of Poisson reconstruction depends significantly on the samples-per-node (SN) parameter, with greater SN values resulting in better quality surfaces. Also, the quality of the 3D surface generated using the ball-pivoting algorithm is found to be highly dependent upon the clustering radius and angle threshold values. The results obtained from this study give the reader a valuable insight into the effects of the different control parameters on the reconstructed surface quality. PMID:27386376

  12. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  13. A comparative study based on image quality and clinical task performance for CT reconstruction algorithms in radiotherapy.

    PubMed

    Li, Hua; Dolly, Steven; Chen, Hsin-Chen; Anastasio, Mark A; Low, Daniel A; Li, Harold H; Michalski, Jeff M; Thorstad, Wade L; Gay, Hiram; Mutic, Sasa

    2016-01-01

    CT image reconstruction is typically evaluated based on the ability to reduce the radiation dose to as-low-as-reasonably-achievable (ALARA) while maintaining acceptable image quality. However, the determination of common image quality metrics, such as noise, contrast, and contrast-to-noise ratio, is often insufficient for describing clinical radiotherapy task performance. In this study we designed and implemented a new comparative analysis method associating image quality, radiation dose, and patient size with radiotherapy task performance, with the purpose of guiding the clinical radiotherapy usage of CT reconstruction algorithms. The iDose4 iterative reconstruction algorithm was selected as the target for comparison, wherein filtered back-projection (FBP) reconstruction was regarded as the baseline. Both phantom and patient images were analyzed. A layer-adjustable anthropomorphic pelvis phantom capable of mimicking 38-58 cm lateral diameter-sized patients was imaged and reconstructed by the FBP and iDose4 algorithms with varying noise-reduction-levels, respectively. The resulting image sets were quantitatively assessed by two image quality indices, noise and contrast-to-noise ratio, and two clinical task-based indices, target CT Hounsfield number (for electron density determination) and structure contouring accuracy (for dose-volume calculations). Additionally, CT images of 34 patients reconstructed with iDose4 with six noise reduction levels were qualitatively evaluated by two radiation oncologists using a five-point scoring mechanism. For the phantom experiments, iDose4 achieved noise reduction up to 66.1% and CNR improvement up to 53.2%, compared to FBP without considering the changes of spatial resolution among images and the clinical acceptance of reconstructed images. Such improvements consistently appeared across different iDose4 noise reduction levels, exhibiting limited interlevel noise (< 5 HU) and target CT number variations (< 1 HU). The radiation

  14. Performance evaluation of an automated single-channel sleep–wake detection algorithm

    PubMed Central

    Kaplan, Richard F; Wang, Ying; Loparo, Kenneth A; Kelly, Monica R; Bootzin, Richard R

    2014-01-01

    Background: A need exists, from both a clinical and a research standpoint, for objective sleep measurement systems that are both easy to use and can accurately assess sleep and wake. This study evaluates the output of an automated sleep–wake detection algorithm (Z-ALG) used in the Zmachine (a portable, single-channel, electroencephalographic [EEG] acquisition and analysis system) against laboratory polysomnography (PSG) using a consensus of expert visual scorers. Methods: Overnight laboratory PSG studies from 99 subjects (52 females/47 males, 18–60 years, median age 32.7 years), including both normal sleepers and those with a variety of sleep disorders, were assessed. PSG data obtained from the differential mastoids (A1–A2) were assessed by Z-ALG, which determines sleep versus wake every 30 seconds using low-frequency, intermediate-frequency, high-frequency, and time-domain EEG features. PSG data were independently scored by two to four certified PSG technologists, using standard Rechtschaffen and Kales guidelines, and these score files were combined on an epoch-by-epoch basis, using a majority voting rule, to generate a single score file per subject to compare against the Z-ALG output. Both epoch-by-epoch and standard sleep indices (eg, total sleep time, sleep efficiency, latency to persistent sleep, and wake after sleep onset) were compared between the Z-ALG output and the technologist consensus score files. Results: Overall, the sensitivity and specificity for detecting sleep using the Z-ALG as compared to the technologist consensus are 95.5% and 92.5%, respectively, across all subjects, and the positive predictive value and the negative predictive value for detecting sleep are 98.0% and 84.2%, respectively. Overall κ agreement is 0.85 (approaching the level of agreement observed among sleep technologists). These results persist when the sleep disorder subgroups are analyzed separately. Conclusion: This study demonstrates that the Z-ALG automated sleep
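
    The epoch-by-epoch agreement statistics reported above (sensitivity, specificity, PPV, NPV, and Cohen's kappa) can be computed as in the following sketch, assuming two equal-length sequences of 30-second epoch labels with 1 = sleep and 0 = wake; the function name is illustrative.

    def epoch_agreement(algorithm, consensus):
        """Epoch-by-epoch agreement between an algorithm and a consensus score file."""
        tp = sum(a == 1 and c == 1 for a, c in zip(algorithm, consensus))
        tn = sum(a == 0 and c == 0 for a, c in zip(algorithm, consensus))
        fp = sum(a == 1 and c == 0 for a, c in zip(algorithm, consensus))
        fn = sum(a == 0 and c == 1 for a, c in zip(algorithm, consensus))
        n = tp + tn + fp + fn
        po = (tp + tn) / n                                            # observed agreement
        pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
            "kappa": (po - pe) / (1 - pe),
        }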

  15. Research on the influence of scan path of image on the performance of information hiding algorithm

    NASA Astrophysics Data System (ADS)

    Yan, Su; Xie, Chengjun; Huang, Ruirui; Xu, Xiaolong

    2015-12-01

    This paper presents a study of information hiding performance using histogram shifting combined with a hybrid transform. The scan path of the image is examined as the data-selection approach. Ten paths were designed and tested on international standard test images. Experimental results indicate that the scan path has a great influence on the performance of lossless image information hiding. For a selected test image, the peak value of the optimized path increased by up to 9.84% while that of the worst path dropped by 24.2%; that is, for different test images, the scan path greatly impacts information hiding performance by influencing image redundancy and the sparse matrix.

  16. Confronting Decision Cliffs: Diagnostic Assessment of Multi-Objective Evolutionary Algorithms' Performance for Addressing Uncertain Environmental Thresholds

    NASA Astrophysics Data System (ADS)

    Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.

    2014-12-01

    As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish if an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem abstracts a fictional town on a lake which hopes to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of potentially crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using 6 algorithms: Borg, MOEAD, eMOEA, eNSGAII, GDE3, and NSGAII to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate related management applications where there is the potential for crossing irreversible, nonlinear thresholds. We show that many modern MOEAs can fail on this test problem, indicating its suitability as a
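
    For orientation, the sketch below simulates the lake phosphorus dynamics in a form commonly used for this benchmark (after Carpenter et al.), with a nonlinear recycling term that creates the eutrophication threshold and a lognormal stochastic inflow representing non-point source pollution. The parameter values and the exact formulation used in the study above may differ, and the four objectives and the MOEA coupling are not shown.

    import numpy as np

    def simulate_lake(a, b=0.42, q=2.0, inflow_mean=0.03, inflow_var=1e-5, seed=0):
        """Simulate lake phosphorus concentration under a pollution policy.

        a : sequence of anthropogenic phosphorus loadings per year (the decisions)
        b : natural phosphorus removal rate
        q : recycling exponent controlling the nonlinear threshold
        inflow_mean, inflow_var : moments of the stochastic natural inflow
        """
        rng = np.random.default_rng(seed)
        # Moment-matched lognormal parameters for the natural inflow.
        mu = np.log(inflow_mean**2 / np.sqrt(inflow_var + inflow_mean**2))
        sigma = np.sqrt(np.log(1.0 + inflow_var / inflow_mean**2))
        inflow = rng.lognormal(mean=mu, sigma=sigma, size=len(a))
        x = np.zeros(len(a) + 1)
        for t, load in enumerate(a):
            recycling = x[t]**q / (1.0 + x[t]**q)   # self-reinforcing internal loading
            x[t + 1] = x[t] + load + recycling - b * x[t] + inflow[t]
        return x  # trajectories stuck at high phosphorus indicate a eutrophic lake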

  17. Design and performance testing of an avalanche photodiode receiver with multiplication gain control algorithm for intersatellite laser communication

    NASA Astrophysics Data System (ADS)

    Yu, Xiaonan; Tong, Shoufeng; Dong, Yan; Song, Yansong; Hao, Shicong; Lu, Jing

    2016-06-01

    An avalanche photodiode (APD) receiver for intersatellite laser communication links is proposed and its performance is experimentally demonstrated. In the proposed system, a series of analog circuits are used not only to adjust the temperature and control the bias voltage but also to monitor the current and recover the clock from the communication data. In addition, the temperature compensation and multiplication gain control algorithm are embedded in the microcontroller to improve the performance of the receiver. As shown in the experiment, with the change of communication rate from 10 to 2000 Mbps, the detection sensitivity of the APD receiver varies from -47 to -34 dBm. Moreover, due to the existence of the multiplication gain control algorithm, the dynamic range of the APD receiver is effectively improved, while the dynamic range at 10, 100, and 1000 Mbps is 38.7, 37.7, and 32.8 dB, respectively. As a result, the experimental results agree well with the theoretical predictions, and the receiver will improve the flexibility of the intersatellite links without increasing the cost.

  18. Comparative performance analysis of cervix ROI extraction and specular reflection removal algorithms for uterine cervix image analysis

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Jeronimo, Jose; Thoma, George R.

    2007-03-01

    Cervicography is a technique for visual screening of uterine cervix images for cervical cancer. One of our research goals is the automated detection in these images of acetowhite (AW) lesions, which are sometimes correlated with cervical cancer. These lesions are characterized by the whitening of regions along the squamocolumnar junction on the cervix when treated with 5% acetic acid. Image preprocessing is required prior to invoking AW detection algorithms on cervicographic images for two reasons: (1) to remove Specular Reflections (SR) caused by camera flash, and (2) to isolate the cervix region-of-interest (ROI) from image regions that are irrelevant to the analysis. These image regions may contain medical instruments, film markup, or other non-cervix anatomy or regions, such as vaginal walls. We have qualitatively and quantitatively evaluated the performance of alternative preprocessing algorithms on a test set of 120 images. For cervix ROI detection, all approaches use a common feature set, but with varying combinations of feature weights, normalization, and clustering methods. For SR detection, while one approach uses a Gaussian Mixture Model on an intensity/saturation feature set, a second approach uses Otsu thresholding on a top-hat transformed input image. Empirical results are analyzed to derive conclusions on the performance of each approach.
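
    As a concrete illustration of the second specular-reflection (SR) detection approach mentioned above (Otsu thresholding of a top-hat transformed image), the sketch below applies a white top-hat filter and a histogram-based Otsu threshold. The structuring-element size is an illustrative placeholder, and the GMM-based alternative and the cervix ROI feature set are not shown.

    import numpy as np
    from scipy import ndimage

    def otsu_threshold(values, bins=256):
        """Histogram-based Otsu threshold (maximizes between-class variance)."""
        hist, edges = np.histogram(values, bins=bins)
        centers = (edges[:-1] + edges[1:]) / 2.0
        weighted = hist * centers
        w0 = np.cumsum(hist)              # class 0: bins <= candidate threshold
        w1 = w0[-1] - w0                  # class 1: bins  > candidate threshold
        sum0 = np.cumsum(weighted)
        sum1 = sum0[-1] - sum0
        m0 = sum0 / np.maximum(w0, 1)
        m1 = sum1 / np.maximum(w1, 1)
        between = w0 * w1 * (m0 - m1) ** 2
        return centers[np.argmax(between)]

    def detect_specular_reflections(gray_image, tophat_size=15):
        # The top-hat keeps small bright structures (flash highlights) and
        # suppresses the smooth background illumination of the cervix.
        tophat = ndimage.white_tophat(gray_image.astype(float), size=tophat_size)
        return tophat > otsu_threshold(tophat.ravel())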

  19. Performance portability study of an automatic target detection and classification algorithm for hyperspectral image analysis using OpenCL

    NASA Astrophysics Data System (ADS)

    Bernabe, Sergio; Igual, Francisco D.; Botella, Guillermo; Garcia, Carlos; Prieto-Matias, Manuel; Plaza, Antonio

    2015-10-01

    Recent advances in heterogeneous high performance computing (HPC) have opened new avenues for demanding remote sensing applications. Perhaps one of the most popular algorithm in target detection and identification is the automatic target detection and classification algorithm (ATDCA) widely used in the hyperspectral image analysis community. Previous research has already investigated the mapping of ATDCA on graphics processing units (GPUs) and field programmable gate arrays (FPGAs), showing impressive speedup factors that allow its exploitation in time-critical scenarios. Based on these studies, our work explores the performance portability of a tuned OpenCL implementation across a range of processing devices including multicore processors, GPUs and other accelerators. This approach differs from previous papers, which focused on achieving the optimal performance on each platform. Here, we are more interested in the following issues: (1) evaluating if a single code written in OpenCL allows us to achieve acceptable performance across all of them, and (2) assessing the gap between our portable OpenCL code and those hand-tuned versions previously investigated. Our study includes the analysis of different tuning techniques that expose data parallelism as well as enable an efficient exploitation of the complex memory hierarchies found in these new heterogeneous devices. Experiments have been conducted using hyperspectral data sets collected by NASA's Airborne Visible Infra- red Imaging Spectrometer (AVIRIS) and the Hyperspectral Digital Imagery Collection Experiment (HYDICE) sensors. To the best of our knowledge, this kind of analysis has not been previously conducted in the hyperspectral imaging processing literature, and in our opinion it is very important in order to really calibrate the possibility of using heterogeneous platforms for efficient hyperspectral imaging processing in real remote sensing missions.

  20. Improving nonlinear performance of the HEPS baseline design with a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Jiao, Yi

    2016-07-01

    A baseline design for the High Energy Photon Source has been proposed, with a natural emittance of 60 pm·rad within a circumference of about 1.3 kilometers. Nevertheless, the nonlinear performance of the design needs further improvements to increase both the dynamic aperture and the momentum acceptance. In this study, genetic optimization of the linear optics is performed, so as to find all the possible solutions with weaker sextupoles and hence weaker nonlinearities, while keeping the emittance at the same level as the baseline design. The solutions obtained enable us to explore the dependence of nonlinear dynamics on the working point. The result indicates that with the same layout, it is feasible to obtain much better nonlinear performance with a delicate tuning of the magnetic field strengths and a wise choice of the working point. Supported by NSFC (11475202, 11405187) and Youth Innovation Promotion Association CAS (2015009)

  1. Significance of ground-water chemistry in performance of North Sahara Tube wells in Algeria and Tunisia

    USGS Publications Warehouse

    Clarke, Frank Eldridge; Jones, Blair F.

    1972-01-01

    Nine ground-water samples from the principal shallow and deep North Sahara aquifers of Algeria and Tunisia were examined to determine the relation of their chemical composition to corrosion and mineral encrustation thought to be contributing to observed decline in well capacities within a UNESCO/UNDP Special Fund Project area. Although the shallow and deep waters differ significantly in certain quality factors, all are sulfochloride types with corrosion potentials ranging from moderate to extreme. None appear to be sufficiently supersaturated with troublesome mineral species to cause rapid or severe encrustation of filter pipes or other well parts. However, calcium carbonate encrustation of deep-well cooling towers and related irrigation pipes can be expected because of loss of carbon dioxide and water during evaporative cooling. Corrosion products, particularly iron sulfide, can be expected to deposit in wells producing waters from the deep aquifers. This could reduce filterpipe openings and increase casing roughness sufficiently to cause significant reduction in well capacity. It seems likely, however, that normal pressure reduction due to exploitation of the artesian systems is a more important control of well performance. If troublesome corrosion and related encrustation are confirmed by downhole inspection, use of corrosion-resisting materials, such as fiber-glass casing and saw-slotted filter pipe (shallow wells only), or stainless-steel screen, will minimize the effects of the waters represented by these samples. A combination of corrosion-resisting stainless steel filter pipe electrically insulated from the casing with a nonconductive spacer and cathodic protection will minimize external corrosion of steel casing, if this is found to be a problem. However, such installations are difficult to make in very deep wells and difficult to control in remote areas. Both the shallow waters and the deep waters examined in this study will tend to cause soil

  2. Analysis of grid performance using an optical flow algorithm for medical image processing

    NASA Astrophysics Data System (ADS)

    Moreno, Ramon A.; Cunha, Rita de Cássio Porfírio; Gutierrez, Marco A.

    2014-03-01

    The development of bigger and faster computers has not yet provided the computing power for medical image processing required nowadays. This is the result of several factors, including: i) the increasing number of qualified medical image users requiring sophisticated tools; ii) the demand for more performance and quality of results; iii) researchers are addressing problems that were previously considered extremely difficult to achieve; iv) medical images are produced with higher resolution and on a larger number. These factors lead to the need of exploring computing techniques that can boost the computational power of Healthcare Institutions while maintaining a relative low cost. Parallel computing is one of the approaches that can help solving this problem. Parallel computing can be achieved using multi-core processors, multiple processors, Graphical Processing Units (GPU), clusters or Grids. In order to gain the maximum benefit of parallel computing it is necessary to write specific programs for each environment or divide the data in smaller subsets. In this article we evaluate the performance of the two parallel computing tools when dealing with a medical image processing application. We compared the performance of the EELA-2 (E-science grid facility for Europe and Latin- America) grid infrastructure with a small Cluster (3 nodes x 8 cores = 24 cores) and a regular PC (Intel i3 - 2 cores). As expected the grid had a better performance for a large number of processes, the cluster for a small to medium number of processes and the PC for few processes.

  3. Development of Automated Scoring Algorithms for Complex Performance Assessments: A Comparison of Two Approaches.

    ERIC Educational Resources Information Center

    Clauser, Brian E.; Margolis, Melissa J.; Clyman, Stephen G.; Ross, Linette P.

    1997-01-01

    Research on automated scoring is extended by comparing alternative automated systems for scoring a computer simulation of physicians' patient management skills. A regression-based system is more highly correlated with experts' evaluations than a system that uses complex rules to map performances into score levels, but both approaches are feasible.…

  4. The influence of the regularization parameter and the first estimate on the performance of tikhonov regularized non-linear image restoration algorithms

    PubMed

    Van Kempen GM; Van Vliet LJ

    2000-04-01

    This paper reports studies on the influence of the regularization parameter and the first estimate on the performance of iterative image restoration algorithms. We discuss regularization parameter estimation methods that have been developed for the linear Tikhonov-Miller filter to restore images distorted by additive Gaussian noise. We have performed experiments on synthetic data to show that these methods can be used to determine the regularization parameter of non-linear iterative image restoration algorithms, which we use to restore images contaminated by Poisson noise. We conclude that the generalized cross-validation method is an efficient method to determine a value of the regularization parameter close to the optimal value. We have also derived a method to estimate the regularization parameter of a Tikhonov regularized version of the Richardson-Lucy algorithm. These iterative image restoration algorithms need a first estimate to start their iteration. An obvious and frequently used choice for the first estimate is the acquired image. However, the restoration algorithm could be sensitive to the noise present in this image, which may hamper the convergence of the algorithm. We have therefore compared various choices of first estimates and tested the convergence of various iterative restoration algorithms. We found that most algorithms converged for most choices, but that smoothed first estimates resulted in a faster convergence. PMID:10781209
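
    The generalized cross-validation (GCV) idea discussed above can be illustrated with the linear Tikhonov filter, for which the GCV score has a closed form in the Fourier domain when the blur is spatially invariant. The sketch below is a simplified illustration under that assumption (the PSF is taken to be the same size as the image and centered); it is not the authors' implementation and does not cover the nonlinear, Poisson-noise algorithms they study.

    import numpy as np

    def gcv_score(lam, image, psf):
        """GCV score for the Fourier-domain Tikhonov filter W = |H|^2 / (|H|^2 + lam)."""
        H = np.fft.fft2(np.fft.ifftshift(psf))   # psf assumed image-sized and centered
        Y = np.fft.fft2(image)
        W = np.abs(H) ** 2 / (np.abs(H) ** 2 + lam)
        residual = np.sum(np.abs((1.0 - W) * Y) ** 2) / image.size   # ||y - A x_lam||^2
        trace_term = np.sum(1.0 - W) / image.size                    # trace(I - A_lam) / N
        return residual / trace_term ** 2

    def choose_lambda(image, psf, candidates=np.logspace(-6, 1, 30)):
        """Pick the regularization parameter that minimizes the GCV score."""
        scores = [gcv_score(lam, image, psf) for lam in candidates]
        return candidates[int(np.argmin(scores))]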

  5. Finite-volume versus streaming-based lattice Boltzmann algorithm for fluid-dynamics simulations: A one-to-one accuracy and performance study.

    PubMed

    Shrestha, Kalyan; Mompean, Gilmar; Calzavarini, Enrico

    2016-02-01

    A finite-volume (FV) discretization method for the lattice Boltzmann (LB) equation, which combines high accuracy with limited computational cost, is presented. In order to assess the performance of the FV method we carry out a systematic comparison, focused on accuracy and computational performance, with the standard streaming lattice Boltzmann equation algorithm. In particular we aim at clarifying whether and in which conditions the proposed algorithm, and more generally any FV algorithm, can be taken as the method of choice in fluid-dynamics LB simulations. For this reason the comparative analysis is further extended to the case of realistic flows, in particular thermally driven flows in turbulent conditions. We report the successful simulation of a high-Rayleigh-number convective flow performed by a lattice Boltzmann FV-based algorithm with wall grid refinement. PMID:26986438

  6. Finite-volume versus streaming-based lattice Boltzmann algorithm for fluid-dynamics simulations: A one-to-one accuracy and performance study

    NASA Astrophysics Data System (ADS)

    Shrestha, Kalyan; Mompean, Gilmar; Calzavarini, Enrico

    2016-02-01

    A finite-volume (FV) discretization method for the lattice Boltzmann (LB) equation, which combines high accuracy with limited computational cost, is presented. In order to assess the performance of the FV method we carry out a systematic comparison, focused on accuracy and computational performance, with the standard streaming lattice Boltzmann equation algorithm. In particular we aim at clarifying whether and in which conditions the proposed algorithm, and more generally any FV algorithm, can be taken as the method of choice in fluid-dynamics LB simulations. For this reason the comparative analysis is further extended to the case of realistic flows, in particular thermally driven flows in turbulent conditions. We report the successful simulation of a high-Rayleigh-number convective flow performed by a lattice Boltzmann FV-based algorithm with wall grid refinement.

  7. Implementing Legacy-C Algorithms in FPGA Co-Processors for Performance Accelerated Smart Payloads

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.; Hartzell, Christine

    2008-01-01

    Accurate, on-board classification of instrument data is used to increase science return by autonomously identifying regions of interest for priority transmission or generating summary products to conserve transmission bandwidth. Due to on-board processing constraints, such classification has been limited to using the simplest functions on a small subset of the full instrument data. FPGA co-processor designs for SVM1 classifiers will lead to significant improvement in on-board classification capability and accuracy.

  8. Genetic algorithm to design Laue lenses with optimal performance for focusing hard X- and γ-rays

    NASA Astrophysics Data System (ADS)

    Camattari, Riccardo; Guidi, Vincenzo

    2014-10-01

    To focus hard X- and γ-rays it is possible to use a Laue lens as a concentrator. With this optic it is possible to improve the detection of radiation for several applications, from the observation of the most violent phenomena in the sky to nuclear medicine applications for diagnostic and therapeutic purposes. We implemented a code named LaueGen, which is based on a genetic algorithm and aims to design optimized Laue lenses. The genetic algorithm was selected because optimizing a Laue lens is a complex and discretized problem. The output of the code is the design of a Laue lens composed of diffracting crystals that are selected and arranged in such a way as to maximize the lens performance. The code allows managing crystals of any material and crystallographic orientation. The program is structured in such a way that the user can control all the initial lens parameters. As a result, LaueGen is highly versatile and can be used to design very small lenses, for example, for nuclear medicine, or very large lenses, for example, for satellite-borne astrophysical missions.

  9. A Hybrid Feature Selection Method to Improve Performance of a Group of Classification Algorithms

    NASA Astrophysics Data System (ADS)

    Naseriparsa, Mehdi; Bidgoli, Amir-Masoud; Varaee, Touraj

    2013-05-01

    In this paper a hybrid feature selection method is proposed which takes advantage of wrapper subset evaluation at a lower cost and improves the performance of a group of classifiers. The method uses a combination of sample domain filtering and resampling to refine the sample domain, and two feature subset evaluation methods to select reliable features. This method utilizes both the feature space and the sample domain in two phases. The first phase filters and resamples the sample domain, and the second phase adopts a hybrid procedure using information gain, wrapper subset evaluation and genetic search to find the optimal feature space. Experiments were carried out on different types of datasets from the UCI Repository of Machine Learning databases; the results show a simultaneous rise in the average performance of five classifiers (Naive Bayes, Logistic, Multilayer Perceptron, Best First Decision Tree and JRIP), while the classification error for these classifiers decreases considerably. The experiments also show that this method outperforms other feature selection methods at a lower cost.
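
    A minimal sketch of the information-gain ranking used in the second phase described above is given below, assuming discrete-valued features and computing entropy in bits. It is a generic illustration; the wrapper evaluation, genetic search, and resampling steps of the hybrid method are not shown.

    import math
    from collections import Counter

    def entropy(labels):
        """Shannon entropy of a list of class labels, in bits."""
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(feature_values, labels):
        """Reduction in label entropy obtained by splitting on one feature."""
        n = len(labels)
        remainder = 0.0
        for v in set(feature_values):
            subset = [l for f, l in zip(feature_values, labels) if f == v]
            remainder += len(subset) / n * entropy(subset)
        return entropy(labels) - remainder

    def rank_features(rows, labels):
        """Return feature indices sorted by decreasing information gain."""
        n_features = len(rows[0])
        gains = [information_gain([r[j] for r in rows], labels) for j in range(n_features)]
        return sorted(range(n_features), key=lambda j: gains[j], reverse=True)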

  10. Conic algorithms for unconstrained minimization with special consideration of problems with large sparse hessians: A summary of research performed

    SciTech Connect

    Ariyawansa, K.A.

    1987-10-31

    This paper contains a brief description of research activities on conic algorithms for unconstrained minimization. Line search termination criteria and the rate of convergence for a certain class of conic algorithms for sparse problems are discussed. (LSP)

  11. An automatic and fast centerline extraction algorithm for virtual colonoscopy.

    PubMed

    Jiang, Guangxiang; Gu, Lixu

    2005-01-01

    This paper introduces a new refined centerline extraction algorithm, which is based on and significantly improved from distance mapping algorithms. The new approach includes three major parts: employing a colon segmentation method; designing and realizing a fast Euclidean distance transform algorithm; and introducing a boundary voxel cutting (BVC) approach. The main contribution is the BVC processing, which greatly speeds up the Dijkstra algorithm and improves the overall performance of the new algorithm. Experimental results demonstrate that the new centerline algorithm is more efficient and accurate compared with existing algorithms. PMID:17281406
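
    For orientation, the sketch below shows a generic distance-map-based centerline extraction: compute the Euclidean distance transform of the segmented colon, then run Dijkstra between two given endpoints with step costs that penalize proximity to the boundary, so the cheapest path follows the medial axis. The BVC speed-up that is the paper's main contribution is not reproduced, and the endpoint selection is assumed to be given.

    import heapq
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def centerline(mask, start, end):
        """mask: 3D boolean array of the segmented lumen; start/end: voxel tuples."""
        dist = distance_transform_edt(mask)
        cost = 1.0 / (dist + 1e-3)          # cheap near the center, expensive near walls
        visited = np.zeros(mask.shape, dtype=bool)
        prev, best = {}, {start: 0.0}
        heap = [(0.0, start)]
        offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
                   for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == end:
                break
            if visited[node]:
                continue
            visited[node] = True
            for off in offsets:
                nb = tuple(np.add(node, off))
                if all(0 <= nb[i] < mask.shape[i] for i in range(3)) and mask[nb]:
                    nd = d + cost[nb]
                    if nd < best.get(nb, np.inf):
                        best[nb] = nd
                        prev[nb] = node
                        heapq.heappush(heap, (nd, nb))
        # Walk back from end to start to recover the centerline path
        # (raises KeyError if the endpoints are not connected in the mask).
        path, node = [end], end
        while node != start:
            node = prev[node]
            path.append(node)
        return path[::-1]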

  12. A parallel-vector algorithm for rapid structural analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.

    1990-01-01

    A fast, accurate Choleski method for the solution of symmetric systems of linear equations is presented. This direct method is based on a variable-band storage scheme and takes advantage of column heights to reduce the number of operations in the Choleski factorization. The method employs parallel computation in the outermost DO-loop and vector computation via the loop unrolling technique in the innermost DO-loop. The method avoids computations with zeros outside the column heights, and as an option, zeros inside the band. The close relationship between Choleski and Gauss elimination methods is examined. The minor changes required to convert the Choleski code to a Gauss code to solve non-positive-definite symmetric systems of equations are identified. The results for two large scale structural analyses performed on supercomputers, demonstrate the accuracy and speed of the method.
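
    The column-height idea described above can be illustrated with a small column-oriented Choleski factorization that skips the structurally zero entries above each column's first nonzero (its "height"). The sketch below works on a dense symmetric positive-definite array for clarity; the parallel outer loop, vectorized loop unrolling, and the Gauss variant for non-positive-definite systems are not reproduced.

    import numpy as np

    def skyline_cholesky(A):
        """Choleski factorization of a symmetric positive-definite matrix A = L L^T."""
        n = A.shape[0]
        L = np.zeros_like(A, dtype=float)
        # Column height = first nonzero row in each column of the upper triangle.
        heights = [next(i for i in range(j + 1) if A[i, j] != 0 or i == j)
                   for j in range(n)]
        for j in range(n):
            start = heights[j]
            # Entries above the column height are structurally zero, so the
            # inner products only run over the profile [start, j).
            s = A[j, j] - np.dot(L[j, start:j], L[j, start:j])
            L[j, j] = np.sqrt(s)
            for i in range(j + 1, n):
                lo = max(start, heights[i])
                L[i, j] = (A[i, j] - np.dot(L[i, lo:j], L[j, lo:j])) / L[j, j]
        return L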

  13. Evaluation of Algorithm Performance in ChIP-Seq Peak Detection

    PubMed Central

    Wilbanks, Elizabeth G.; Facciotti, Marc T.

    2010-01-01

    Next-generation DNA sequencing coupled with chromatin immunoprecipitation (ChIP-seq) is revolutionizing our ability to interrogate whole genome protein-DNA interactions. Identification of protein binding sites from ChIP-seq data has required novel computational tools, distinct from those used for the analysis of ChIP-Chip experiments. The growing popularity of ChIP-seq spurred the development of many different analytical programs (at last count, we noted 31 open source methods), each with some purported advantage. Given that the literature is dense and empirical benchmarking challenging, selecting an appropriate method for ChIP-seq analysis has become a daunting task. Herein we compare the performance of eleven different peak calling programs on common empirical, transcription factor datasets and measure their sensitivity, accuracy and usability. Our analysis provides an unbiased critical assessment of available technologies, and should assist researchers in choosing a suitable tool for handling ChIP-seq data. PMID:20628599

  14. The road surveying system of the federal highway research institute - a performance evaluation of road segmentation algorithms

    NASA Astrophysics Data System (ADS)

    Streiter, R.; Wanielik, G.

    2013-07-01

    The construction of highways and federal roadways is subject to many restrictions and design rules. The focus is on safety, comfort and smooth driving. Unfortunately, planning information about roadways and their real constitution, such as course, number of lanes and lane widths, is often uncertain or not available. Because digital road map databases have raised much interest during the last years and have become a major cornerstone of innovative Advanced Driving Assistance Systems (ADASs), the demand for accurate and detailed road information has increased considerably. Within this project a measurement system for collecting highly accurate road data was developed. This paper gives an overview of the sensor configuration within the measurement vehicle, introduces the implemented algorithms and shows some applications implemented in the post-processing platform. The aim is to recover the original parametric description of the roadway, and the performance of the measurement system is evaluated against original road construction information.

  15. Improvement of Step-Down Converter Performance with Optimum Lqr and Pid Controller with Applied Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Nejati, R.; Eshtehardiha, S.; Poudeh, M. Bayati

    2008-10-01

    The DC converter can be employed alone for the stabilization or control of the DC voltage of a battery, or it can be a component of a more complex converter to control intermediate or output voltages. Due to the switching inherent in their structure, DC-DC converters exhibit nonlinear behavior, and their controller design is accordingly complex. By employing the averaging method, however, the system can be approximated by a linear one, and linear control methods can then be used. The dynamic performance of the buck converter's output voltage can be controlled by Linear Quadratic Regulator (LQR) and PID methods. The former requires the selection of a positive definite weighting matrix, and the latter depends on the desired pole locations in the complex plane. In this article, the matrix coefficients and the best constant values for the PID controller are selected using a genetic algorithm. The simulation results show an improvement in the voltage control response.
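
    As a hedged sketch of the LQR portion described above, the code below computes a state-feedback gain for a linear averaged model of a step-down (buck) converter by solving the continuous-time algebraic Riccati equation with SciPy. The A and B matrices and the Q and R weights are illustrative placeholders; in the article the weighting values (and the PID constants) are selected by the genetic algorithm, which is not shown here.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    def lqr_gain(A, B, Q, R):
        """State-feedback gain K = R^{-1} B^T P from the continuous-time ARE."""
        P = solve_continuous_are(A, B, Q, R)
        return np.linalg.solve(R, B.T @ P)

    # Averaged buck converter model with states [inductor current, capacitor voltage]
    # and the applied (duty-cycle-scaled) voltage as input; values are placeholders.
    L_h, C_f, R_load = 1e-4, 1e-4, 10.0
    A = np.array([[0.0, -1.0 / L_h],
                  [1.0 / C_f, -1.0 / (R_load * C_f)]])
    B = np.array([[1.0 / L_h], [0.0]])
    Q = np.diag([1.0, 100.0])     # candidate state weights a GA would search over
    R = np.array([[1.0]])
    K = lqr_gain(A, B, Q, R)      # u = -K x stabilizes the averaged model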

  16. Optimal Performance of a Nonlinear Gantry Crane System via Priority-based Fitness Scheme in Binary PSO Algorithm

    NASA Astrophysics Data System (ADS)

    Izzuan Jaafar, Hazriq; Mohd Ali, Nursabillilah; Mohamed, Z.; Asmiza Selamat, Nur; Faiz Zainal Abidin, Amar; Jamian, J. J.; Kassim, Anuar Mohamed

    2013-12-01

    This paper presents the development of optimal PID and PD controllers for controlling a nonlinear gantry crane system. The proposed Binary Particle Swarm Optimization (BPSO) algorithm, which uses a Priority-based Fitness Scheme, is adopted to obtain five optimal controller gains. The optimal gains are tested on a control structure that combines PID and PD controllers to examine system responses, including trolley displacement and payload oscillation. The dynamic model of the gantry crane system is derived using the Lagrange equation. Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady state error (SSE) and overshoot (OS). The proposed technique demonstrates that the implementation of the Priority-based Fitness Scheme in BPSO is effective and able to move the trolley as fast as possible to the various desired positions.

  17. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  18. Evaluation of Real-Time Performance of the Virtual Seismologist Earthquake Early Warning Algorithm in Switzerland and California

    NASA Astrophysics Data System (ADS)

    Behr, Y.; Cua, G. B.; Clinton, J. F.; Heaton, T. H.

    2012-12-01

    The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) originally formulated by Cua and Heaton (2007). Implementation of VS into real-time EEW codes has been an on-going effort of the Swiss Seismological Service at ETH Zürich since 2006, with support from ETH Zürich, various European projects, and the United States Geological Survey (USGS). VS is one of three EEW algorithms - the other two being ElarmS (Allen and Kanamori, 2003) and On-Site (Wu and Kanamori, 2005; Boese et al., 2008) algorithms - that form the basis of the California Integrated Seismic Network (CISN) ShakeAlert system, a USGS-funded prototype end-to-end EEW system that could potentially be implemented in California. In Europe, VS is currently operating as a real-time test system in Switzerland. As part of the on-going EU project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction), VS will be installed and tested at other European networks. VS has been running in real-time on stations of the Southern California Seismic Network (SCSN) since July 2008, and on stations of the Berkeley Digital Seismic Network (BDSN) and the USGS Menlo Park strong motion network in northern California since February 2009. In Switzerland, VS has been running in real-time on stations monitored by the Swiss Seismological Service (including stations from Austria, France, Germany, and Italy) since 2010. We present summaries of the real-time performance of VS in Switzerland and California over the past two and three years respectively. The empirical relationships used by VS to estimate magnitudes and ground motion, originally derived from southern California data, are demonstrated to perform well in northern California and Switzerland. Implementation in real-time and off-line testing in Europe will potentially be extended to southern Italy, western Greece, Istanbul, Romania, and Iceland. Integration of the VS algorithm into both the CISN Advanced

  19. Performance evaluation of iterative reconstruction algorithms for achieving CT radiation dose reduction - a phantom study.

    PubMed

    Dodge, Cristina T; Tamm, Eric P; Cody, Dianna D; Liu, Xinming; Jensen, Corey T; Wei, Wei; Kundra, Vikas; Rong, John

    2016-01-01

    The purpose of this study was to characterize image quality and dose performance with GE CT iterative reconstruction techniques, adaptive statistical iterative reconstruction (ASiR), and model-based iterative reconstruction (MBIR), over a range of typical to low-dose intervals using the Catphan 600 and the anthropomorphic Kyoto Kagaku abdomen phantoms. The scope of the project was to quantitatively describe the advantages and limitations of these approaches. The Catphan 600 phantom, supplemented with a fat-equivalent oval ring, was scanned using a GE Discovery HD750 scanner at 120 kVp, 0.8 s rotation time, and pitch factors of 0.516, 0.984, and 1.375. The mA was selected for each pitch factor to achieve CTDIvol values of 24, 18, 12, 6, 3, 2, and 1 mGy. Images were reconstructed at 2.5 mm thickness with filtered back-projection (FBP); 20%, 40%, and 70% ASiR; and MBIR. The potential for dose reduction and low-contrast detectability were evaluated from noise and contrast-to-noise ratio (CNR) measurements in the CTP 404 module of the Catphan. Hounsfield units (HUs) of several materials were evaluated from the cylinder inserts in the CTP 404 module, and the modulation transfer function (MTF) was calculated from the air insert. The results were confirmed in the anthropomorphic Kyoto Kagaku abdomen phantom at 6, 3, 2, and 1 mGy. MBIR reduced noise levels five-fold and increased CNR by a factor of five compared to FBP below 6 mGy CTDIvol, resulting in a substantial improvement in image quality. Compared to ASiR and FBP, HU in images reconstructed with MBIR were consistently lower, and this discrepancy was reversed by higher pitch factors in some materials. MBIR improved the conspicuity of the high-contrast spatial resolution bar pattern, and MTF quantification confirmed the superior spatial resolution performance of MBIR versus FBP and ASiR at higher dose levels. While ASiR and FBP were relatively insensitive to changes in dose and pitch, the spatial resolution for MBIR

  20. Slow light performance enhancement of Bragg slot photonic crystal waveguide with particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Abedi, Kambiz; Mirjalili, Seyed Mohammad

    2015-03-01

    Recently, the majority of research on designing Photonic Crystal Waveguides (PCWs) has focused on extracting relations between the output slow-light properties of a PCW and its structural parameters through a huge number of tedious, non-systematic simulations in order to introduce better designs. This paper proposes a novel systematic approach which can be considered a shortcut that alleviates the difficulties and human involvement in designing PCWs. In the proposed method, the problem of PCW design is first formulated as an optimization problem. Then, an optimizer is employed in order to automatically find the optimum design for the formulated PCWs. Meanwhile, different constraints are also considered during optimization with the purpose of applying physical limitations to the final optimum structure. As a case study, the structure of a Bragg-like Corrugation Slotted PCW (BCSPCW) is optimized by using the proposed method. One of the most computationally powerful techniques in Computational Intelligence (CI), called Particle Swarm Optimization (PSO), is employed as an optimizer to automatically find the optimum structure for the BCSPCW. The optimization process is carried out under five constraints to guarantee the feasibility of the final optimized structures and avoid band mixing. Numerical results demonstrate that the proposed method is able to find an optimum structure for the BCSPCW with substantial improvements of 172% and 100% in the bandwidth and Normalized Delay-Bandwidth Product (NDBP) respectively, compared to the best current structure in the literature. Moreover, a time domain analysis at the end of the paper verifies the performance of the optimized structure and shows that it has low distortion and attenuation simultaneously.
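
    The band-structure solver needed to evaluate a real BCSPCW design is far beyond a short example, so the sketch below only illustrates the optimization machinery described above: a particle swarm optimizer whose fitness adds a penalty for violated constraints. The objective, the single constraint and all bounds are analytic placeholders, not the paper's waveguide model.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # Stand-in for the (negated) NDBP figure of merit; smaller is better.
    return np.sum((x - 0.3) ** 2)

def constraints(x):
    # Stand-in feasibility constraints g_i(x) <= 0 (e.g. geometric limits,
    # band-mixing avoidance in the real design problem).
    return np.array([x[0] + x[1] - 1.2])

def penalized(x, weight=1e3):
    g = constraints(x)
    return objective(x) + weight * np.sum(np.maximum(g, 0.0) ** 2)

def pso(dim=5, n_particles=30, iters=200, lb=0.0, ub=1.0,
        w=0.72, c1=1.49, c2=1.49):
    pos = rng.uniform(lb, ub, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([penalized(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lb, ub)
        vals = np.array([penalized(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, penalized(gbest)

best_x, best_f = pso()
print("best design vector:", best_x, "penalized objective:", best_f)
```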

  1. Tower-scale performance of four observation-based evapotranspiration algorithms within the WACMOS-ET project

    NASA Astrophysics Data System (ADS)

    Michel, Dominik; Miralles, Diego; Jimenez, Carlos; Ershadi, Ali; McCabe, Matthew F.; Hirschi, Martin; Seneviratne, Sonia I.; Jung, Martin; Wood, Eric F.; (Bob) Su, Z.; Timmermans, Joris; Chen, Xuelong; Fisher, Joshua B.; Mu, Quiaozen; Fernandez, Diego

    2015-04-01

    Research on climate variations and the development of predictive capabilities largely rely on globally available reference data series of the different components of the energy and water cycles. Several efforts have recently aimed at producing large-scale and long-term reference data sets of these components, e.g. based on in situ observations and remote sensing, in order to allow for diagnostic analyses of the drivers of temporal variations in the climate system. Evapotranspiration (ET) is an essential component of the energy and water cycle, which cannot be monitored directly on a global scale by remote sensing techniques. In recent years, several global multi-year ET data sets have been derived from remote sensing-based estimates, observation-driven land surface model simulations or atmospheric reanalyses. The LandFlux-EVAL initiative presented an ensemble-evaluation of these data sets over the time periods 1989-1995 and 1989-2005 (Mueller et al. 2013). The WACMOS-ET project (http://wacmoset.estellus.eu) started in the year 2012 and constitutes an ESA contribution to the GEWEX initiative LandFlux. It focuses on advancing the development of ET estimates at global, regional and tower scales. WACMOS-ET aims at developing a Reference Input Data Set exploiting European Earth Observations assets and deriving ET estimates produced by a set of four ET algorithms covering the period 2005-2007. The algorithms used are the SEBS (Su et al., 2002), Penman-Monteith from MODIS (Mu et al., 2011), the Priestley and Taylor JPL model (Fisher et al., 2008) and GLEAM (Miralles et al., 2011). The algorithms are run with Fluxnet tower observations, reanalysis data (ERA-Interim), and satellite forcings. They are cross-compared and validated against in-situ data. In this presentation the performance of the different ET algorithms with respect to different temporal resolutions, hydrological regimes, land cover types (including grassland, cropland, shrubland, vegetation mosaic, savanna

  2. High-performance lossless and progressive image compression based on an improved integer lifting scheme and the Rice coding algorithm

    NASA Astrophysics Data System (ADS)

    Jun, Xie Cheng; Su, Yan; Wei, Zhang

    2006-08-01

    In this paper, a modified algorithm was introduced to improve the Rice coding algorithm, and image compression with the CDF (2,2) wavelet lifting scheme was investigated. Our experiments show that the lossless image compression performance is much better than that of Huffman, Zip, lossless JPEG, and RAR, and slightly better than (or equal to) the well-known SPIHT: the lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7%, and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's and its time efficiency can be improved by 162%; the decoder is about 12.3 times faster than SPIHT's and its time efficiency can be raised by about 148%. Rather than requiring the largest number of wavelet transform levels, the algorithm achieves high coding efficiency when the number of wavelet transform levels is larger than 3. For source models with distributions similar to the Laplacian, it improves coding efficiency and realizes progressive transmission coding and decoding.
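
    The paper's modification of the coder is not reproduced above, but a plain Rice (Golomb power-of-two) encoder and decoder for the kind of non-negative integer residuals a lifting-scheme wavelet transform produces can be sketched as follows; the bit-string representation is for clarity only.

```python
def rice_encode(values, k):
    """Encode non-negative integers with Rice parameter k: unary quotient,
    then k-bit remainder.  Returns a string of '0'/'1' bits for clarity."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append("1" * q + "0")                    # unary-coded quotient
        bits.append(format(r, f"0{k}b") if k else "")  # fixed-width remainder
    return "".join(bits)

def rice_decode(bitstream, k, count):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bitstream[i] == "1":                    # read unary quotient
            q += 1
            i += 1
        i += 1                                        # skip the terminating '0'
        r = int(bitstream[i:i + k], 2) if k else 0
        i += k
        out.append((q << k) | r)
    return out

residuals = [0, 3, 7, 1, 12, 2, 0, 5]                 # e.g. mapped wavelet residuals
code = rice_encode(residuals, k=2)
assert rice_decode(code, k=2, count=len(residuals)) == residuals
print(code, len(code), "bits")
```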

  3. The Functional Effect of Teacher Positive and Neutral Affect on Task Performance of Students with Significant Disabilities

    ERIC Educational Resources Information Center

    Park, Sungho; Singer, George H. S.; Gibson, Mary

    2005-01-01

    The study uses an alternating treatment design to evaluate the functional effect of teacher's affect on students' task performance. Tradition in special education holds that teachers should engage students using positive and enthusiastic affect for task presentations and praise. To test this assumption, we compared two affective conditions. Three…

  4. WISC-R Verbal and Performance IQ Discrepancy in an Unselected Cohort: Clinical Significance and Longitudinal Stability.

    ERIC Educational Resources Information Center

    Moffitt, Terrie E.; Silva, P. A.

    1987-01-01

    Examined children whose Wechsler Intelligence Scale for Children-Revised (WISC-R) verbal and performance Intelligence Quotient discrepancies placed them beyond the 90th percentile. Longitudinal study showed 23 percent of the discrepant cases to be discrepant at two or more ages. Studied frequency of perinatal difficulties, early childhood…

  5. The Relative Significance of Syntactic Knowledge and Vocabulary Breadth in the Prediction of Reading Comprehension Test Performance

    ERIC Educational Resources Information Center

    Shiotsu, Toshihiko; Weir, Cyril J.

    2007-01-01

    In the componential approach to modelling reading ability, a number of contributory factors have been empirically validated. However, research on their relative contribution to explaining performance on second language reading tests is limited. Furthermore, the contribution of knowledge of syntax has been largely ignored in comparison with the…

  6. SIGNIFICANCE OF SIZE REDUCTION IN SOLID WASTE MANAGEMENT. VOLUME 3. EFFECTS OF MACHINE PARAMETERS ON SHREDDER PERFORMANCE

    EPA Science Inventory

    Hammermill shredders for size reduction of refuse were examined at three sites to determine the influence of key machine parameters on their performance. Internal machine configuration and single-versus two stage size reduction were studied. Key parameters that were investigated ...

  7. Face recognition algorithms surpass humans matching faces over changes in illumination.

    PubMed

    O'Toole, Alice J; Jonathon Phillips, P; Jiang, Fang; Ayyad, Janet; Penard, Nils; Abdi, Hervé

    2007-09-01

    There has been significant progress in improving the performance of computer-based face recognition algorithms over the last decade. Although algorithms have been tested and compared extensively with each other, there has been remarkably little work comparing the accuracy of computer-based face recognition systems with humans. We compared seven state-of-the-art face recognition algorithms with humans on a face-matching task. Humans and algorithms determined whether pairs of face images, taken under different illumination conditions, were pictures of the same person or of different people. Three algorithms surpassed human performance matching face pairs prescreened to be "difficult" and six algorithms surpassed humans on "easy" face pairs. Although illumination variation continues to challenge face recognition algorithms, current algorithms compete favorably with humans. The superior performance of the best algorithms over humans, in light of the absolute performance levels of the algorithms, underscores the need to compare algorithms with the best current control--humans. PMID:17627050

  8. Improving the Performance of Highly Constrained Water Resource Systems using Multiobjective Evolutionary Algorithms and RiverWare

    NASA Astrophysics Data System (ADS)

    Smith, R.; Kasprzyk, J. R.; Zagona, E. A.

    2015-12-01

    Instead of building new infrastructure to increase their supply reliability, water resource managers are often tasked with better management of current systems. The managers often have existing simulation models that aid their planning, and lack methods for efficiently generating and evaluating planning alternatives. This presentation discusses how multiobjective evolutionary algorithm (MOEA) decision support can be used with the sophisticated water infrastructure model, RiverWare, in highly constrained water planning environments. We first discuss a study that performed a many-objective tradeoff analysis of water supply in the Tarrant Regional Water District (TRWD) in Texas. RiverWare is combined with the Borg MOEA to solve a seven objective problem that includes systemwide performance objectives and individual reservoir storage reliability. Decisions within the formulation balance supply in multiple reservoirs and control pumping between the eastern and western parts of the system. The RiverWare simulation model is forced by two stochastic hydrology scenarios to inform how management changes in wet versus dry conditions. The second part of the presentation suggests how a broader set of RiverWare-MOEA studies can inform tradeoffs in other systems, especially in political situations where multiple actors are in conflict over finite water resources. By incorporating quantitative representations of diverse parties' objectives during the search for solutions, MOEAs may provide support for negotiations and lead to more widely beneficial water management outcomes.

  9. Performance of a cavity-method-based algorithm for the prize-collecting Steiner tree problem on graphs

    NASA Astrophysics Data System (ADS)

    Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo

    2012-08-01

    We study the behavior of an algorithm derived from the cavity method for the prize-collecting Steiner tree (PCST) problem on graphs. The algorithm is based on the zero temperature limit of the cavity equations and as such is formally simple (a fixed point equation resolved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristics and the dhea solver, a branch and cut integer linear programming based approach. The comparison shows that the cavity algorithm outperforms the two algorithms in most large instances both in running time and quality of the solution. Finally we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two postprocessing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.

  10. Precision feeding can significantly reduce lysine intake and nitrogen excretion without compromising the performance of growing pigs.

    PubMed

    Andretta, I; Pomar, C; Rivest, J; Pomar, J; Radünz, J

    2016-07-01

    This study was developed to assess the impact on performance, nutrient balance, serum parameters and feeding costs resulting from switching from conventional to precision-feeding programs for growing-finishing pigs. A total of 70 pigs (30.4±2.2 kg BW) were used in a performance trial (84 days). The five treatments used in this experiment were a three-phase group-feeding program (control), obtained with fixed blending proportions of feeds A (high nutrient density) and B (low nutrient density), and four individual daily-phase feeding programs in which the blending proportions of feeds A and B were updated daily to meet 110%, 100%, 90% or 80% of the lysine requirements estimated using a mathematical model. Feed intake was recorded automatically by a computerized device in the feeders, and the pigs were weighed weekly during the project. Body composition traits were estimated by scanning with an ultrasound device and densitometer every 28 days. Nitrogen and phosphorus excretions were calculated by the difference between retention (obtained from densitometer measurements) and intake. Feeding costs were assessed using 2013 ingredient cost data. Feed intake, feed efficiency, back fat thickness, body fat mass and serum contents of total protein and phosphorus were similar among treatments. Feeding pigs in a daily-basis program providing 110%, 100% or 90% of the estimated individual lysine requirements also did not influence BW, body protein mass, weight gain and nitrogen retention in comparison with the animals in the group-feeding program. However, feeding pigs individually with diets tailored to match 100% of nutrient requirements made it possible to reduce (P<0.05) digestible lysine intake by 26%, estimated nitrogen excretion by 30% and feeding costs by US$7.60/pig (-10%) relative to group feeding. Precision feeding is an effective approach to make pig production more sustainable without compromising growth performance. PMID:26759074
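
    As a worked illustration of the daily blending idea only (not the authors' nutritional model), the proportion of feed A required for a two-feed blend to hit a target lysine concentration follows from a simple linear mixing equation; the lysine contents and the requirement used below are invented numbers.

```python
def blend_fraction(target, lys_a, lys_b):
    """Fraction a of feed A such that a*lys_a + (1-a)*lys_b == target,
    clipped to the feasible range [0, 1]."""
    a = (target - lys_b) / (lys_a - lys_b)
    return min(max(a, 0.0), 1.0)

# Hypothetical digestible-lysine contents (g/kg) and a daily requirement
# scaled to 110%, 100%, 90% and 80%, mirroring the trial design.
lys_a, lys_b = 11.0, 6.0
requirement = 9.0   # illustrative requirement for one pig on one day (g/kg equivalent)
for pct in (1.10, 1.00, 0.90, 0.80):
    a = blend_fraction(pct * requirement, lys_a, lys_b)
    print(f"{pct:.0%} of requirement -> {a:.2f} feed A, {1 - a:.2f} feed B")
```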

  11. Impact of image normalization and quantization on the performance of sonar computer-aided detection/computer-aided classification (CAD/CAC) algorithms

    NASA Astrophysics Data System (ADS)

    Ciany, Charles M.; Zurawski, William C.

    2007-04-01

    Raytheon has extensively processed high-resolution sonar images with its CAD/CAC algorithms to provide real-time classification of mine-like bottom objects in a wide range of shallow-water environments. The algorithm performance is measured in terms of probability of correct classification (Pcc) as a function of false alarm rate, and is impacted by variables associated with both the physics of the problem and the signal processing design choices. Some examples of prominent variables pertaining to the choices of signal processing parameters are image resolution (i.e., pixel dimensions), image normalization scheme, and pixel intensity quantization level (i.e., number of bits used to represent the intensity of each image pixel). Improvements in image resolution associated with the technology transition from sidescan to synthetic aperture sonars have prompted the use of image decimation algorithms to reduce the number of pixels per image that are processed by the CAD/CAC algorithms, in order to meet real-time processor throughput requirements. Additional improvements in digital signal processing hardware have also facilitated the use of an increased quantization level in converting the image data from analog to digital format. This study evaluates modifications to the normalization algorithm and image pixel quantization level within the image processing prior to CAD/CAC processing, and examines their impact on the resulting CAD/CAC algorithm performance. The study utilizes a set of at-sea data from multiple test exercises in varying shallow water environments.
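
    The CAD/CAC chain itself is not described in enough detail to reproduce, so the sketch below only shows the two preprocessing choices the study varies: requantizing pixel intensities to a given bit depth and decimating the image by block averaging. The function names and the Rayleigh-distributed stand-in image are illustrative assumptions.

```python
import numpy as np

def requantize(img, bits):
    """Map an image to 2**bits uniformly spaced intensity levels."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    levels = 2 ** bits - 1
    q = np.round((img - lo) / (hi - lo) * levels)
    return q / levels * (hi - lo) + lo

def decimate(img, factor):
    """Reduce resolution by averaging non-overlapping factor x factor blocks."""
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)
sonar = rng.rayleigh(scale=1.0, size=(512, 1024))   # stand-in sonar image snippet
coarse = decimate(requantize(sonar, bits=8), factor=2)
print(coarse.shape)
```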

  12. A novel iron (Ⅱ) polyphthalocyanine catalyst assembled on graphene with significantly enhanced performance for oxygen reduction reaction in alkaline medium

    NASA Astrophysics Data System (ADS)

    Lin, Lin; Li, Meng; Jiang, Liqing; Li, Yongfeng; Liu, Dajun; He, Xingquan; Cui, Lili

    2014-12-01

    To realize the large-scale commercial application of direct methanol fuel cells (DMFCs), the catalysts for oxygen reduction reaction (ORR) are the crucial obstacle. Here, an efficient non-noble-metal catalyst for ORR, denoted FePPc/PSS-Gr, has been obtained by anchoring p-phenyl-bis(3,4-dicyanophenyl) ether iron(Ⅱ) polyphthalocyanine (FePPc) on poly(sodium-p-styrenesulfonate) (PSS) modified graphene (PSS-Gr) through a solvothermally assisted π-π assembling approach. The Ultraviolet-visible (UV-vis) spectroscopy, Fourier transform infrared spectroscopy (FTIR) and X-ray photoelectron spectroscopy (XPS) results reveal the π-π interaction between FePPc and PSS-Gr. The rotating disk electrode (RDE) and rotating ring disk electrode (RRDE) measurements show that the proposed catalyst possesses an excellent catalytic performance towards ORR comparable with the commercial Pt/C catalyst in alkaline medium, such as high onset potential (-0.08 V vs. SCE), half-wave potential (-0.19 V vs. SCE), better tolerance to methanol crossover, excellent stability (81.1%, retention after 10,000 s) and an efficient four-electron pathway. The enhanced electrocatalytic performance could be chiefly attributed to its large electrochemically accessible surface area, fast electron transfer rate of PSS-Gr, in particular, the synergistic effect between the FePPc moieties and the PSS-Gr sheets.

  13. Assessment of the performance of algorithms for cervical cancer screening: Evidence from the Ludwig-McGill Cohort Study

    PubMed Central

    Chevarie-Davis, M; Ramanakumar, AV; Ferenczy, A; Villa, LL; Franco, EL

    2015-01-01

    Objective There are currently multiple tests available for cervical cancer screening and the existing screening policies vary from country to country. No single approach will satisfy the specific needs and variations in risk aversion of all populations, and screening algorithms should be tailored to specific groups. We performed long term risk stratification based on screening test results and compared the accuracy of different tests and their combinations. Methods A longitudinal cohort study of the natural history of HPV infection and cervical neoplasia enrolled 2462 women from a low-income population in Brazil. Interviews and cervical screening with cytology and HPV DNA testing were repeated according to a pre-established protocol and the subjects were referred for colposcopy and biopsy whenever high grade lesions were suspected. We compared the specificity, sensitivity and predictive values of each screening modality. Long term risk stratification was performed through time-to-event analyses using Kaplan-Meier analysis and Cox regression. Results The best optimization of sensitivity and specificity was achieved when using dual testing with cytology and HPV DNA testing, whereby the screening test is considered positive if either component yields an abnormal result. However, when allowing 12 months for the detection of lesions, cytology alone performed nearly as well. Risk stratification revealed that HPV DNA testing was not beneficial for HSIL cases, whereas it was for ASCUS and, in some combinations, for negative and LSIL cytology. Conclusion Our results suggest that some high risk populations may benefit equally from cytology or HPV DNA testing, and may require shorter intervals between repeat testing. PMID:23234804
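
    A minimal sketch of the "positive if either component is abnormal" combination rule evaluated in the study: given per-subject cytology and HPV DNA results plus a disease reference standard, sensitivity and specificity follow from the usual 2x2 counts. The toy arrays below are invented and carry no clinical meaning.

```python
import numpy as np

def sens_spec(test_pos, disease):
    tp = np.sum(test_pos & disease)
    fn = np.sum(~test_pos & disease)
    tn = np.sum(~test_pos & ~disease)
    fp = np.sum(test_pos & ~disease)
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(3)
disease = rng.random(2000) < 0.05                      # reference standard
cytology = (rng.random(2000) < 0.6) & disease | (rng.random(2000) < 0.04)
hpv_dna = (rng.random(2000) < 0.9) & disease | (rng.random(2000) < 0.10)
combined = cytology | hpv_dna                          # positive if either test is

for name, t in [("cytology", cytology), ("HPV DNA", hpv_dna), ("either", combined)]:
    se, sp = sens_spec(t, disease)
    print(f"{name:9s} sensitivity={se:.2f} specificity={sp:.2f}")
```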

  14. Open-loop control of SCExAO's MEMS deformable mirror using the Fast Iterative Algorithm: speckle control performances

    NASA Astrophysics Data System (ADS)

    Blain, Célia; Guyon, Olivier; Martinache, Frantz; Bradley, Colin; Clergeon, Christophe

    2012-07-01

    Micro-Electro-Mechanical Systems (MEMS) deformable mirrors (DMs) are widely utilized in astronomical Adaptive Optics (AO) instrumentation. High precision open-loop control of MEMS DMs has been achieved by developing a high accuracy DM model, the Fast Iterative Algorithm (FIA), a physics-based model allowing precise control of the DM shape. Accurate open-loop control is particularly critical for the wavefront control of High-Contrast Imaging (HCI) instruments to create a dark hole area free of most slow and quasi-static speckles which remain the limiting factor for direct detection and imaging of exoplanets. The Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) system is one of these high contrast imaging instruments and uses a 1024-actuator MEMS deformable mirror (DM) both in closed-loop and open-loop. The DM is used to modulate speckles in order to distinguish (i) speckles due to static and slow-varying residual aberrations from (ii) speckles due to genuine structures, such as exoplanets. The FIA has been fully integrated into the SCExAO wavefront control software and we report the FIA’s performance for the control of speckles in the focal plane.

  15. Scaling to 150K cores: recent algorithm and performance engineering developments enabling XGC1 to run at scale

    SciTech Connect

    Mark F. Adams; Seung-Hoe Ku; Patrick Worley; Ed D'Azevedo; Julian C. Cummings; C.S. Chang

    2009-10-01

    Particle-in-cell (PIC) methods have proven to be effective in discretizing the Vlasov-Maxwell system of equations describing the core of toroidal burning plasmas for many decades. Recent physical understanding of the importance of edge physics for stability and transport in tokamaks has led to the development of the first fully toroidal edge PIC code - XGC1. The edge region poses special problems in meshing for PIC methods due to the lack of closed flux surfaces, which makes field-line following meshes and coordinate systems problematic. We present a solution to this problem with a semi-field-line following mesh method in a cylindrical coordinate system. Additionally, modern supercomputers require highly concurrent algorithms and implementations, with all levels of the memory hierarchy being efficiently utilized to realize optimal code performance. This paper presents a mesh and particle partitioning method, suitable to our meshing strategy, for use on highly concurrent cache-based computing platforms.

  16. Performance of two commercial electron beam algorithms over regions close to the lung-mediastinum interface, against Monte Carlo simulation and point dosimetry in virtual and anthropomorphic phantoms.

    PubMed

    Ojala, J; Hyödynmaa, S; Barańczyk, R; Góra, E; Waligórski, M P R

    2014-03-01

    Electron radiotherapy is applied to treat the chest wall close to the mediastinum. The performance of the GGPB and eMC algorithms implemented in the Varian Eclipse treatment planning system (TPS) was studied in this region for 9 and 16 MeV beams, against Monte Carlo (MC) simulations, point dosimetry in a water phantom and dose distributions calculated in virtual phantoms. For the 16 MeV beam, the accuracy of these algorithms was also compared over the lung-mediastinum interface region of an anthropomorphic phantom, against MC calculations and thermoluminescence dosimetry (TLD). In the phantom with a lung-equivalent slab the results were generally congruent, the eMC results for the 9 MeV beam slightly overestimating the lung dose, and the GGPB results for the 16 MeV beam underestimating the lung dose. Over the lung-mediastinum interface, for 9 and 16 MeV beams, the GGPB code underestimated the lung dose and overestimated the dose in water close to the lung, compared to the congruent eMC and MC results. In the anthropomorphic phantom, results of TLD measurements and MC and eMC calculations agreed, while the GGPB code underestimated the lung dose. Good agreement between TLD measurements and MC calculations attests to the accuracy of "full" MC simulations as a reference for benchmarking TPS codes. Application of the GGPB code in chest wall radiotherapy may result in significant underestimation of the lung dose and overestimation of dose to the mediastinum, affecting plan optimization over volumes close to the lung-mediastinum interface, such as the lung or heart. PMID:23702438

  17. Evaluation of Real-Time and Off-Line Performance of the Virtual Seismologist Earthquake Early Warning Algorithm in Switzerland

    NASA Astrophysics Data System (ADS)

    Behr, Yannik; Clinton, John; Cua, Georgia; Cauzzi, Carlo; Heimers, Stefan; Kästli, Philipp; Becker, Jan; Heaton, Thomas

    2013-04-01

    The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) originally formulated by Cua and Heaton (2007). Implementation of VS into real-time EEW codes has been an on-going effort of the Swiss Seismological Service at ETH Zürich since 2006, with support from ETH Zürich, various European projects, and the United States Geological Survey (USGS). VS is one of three EEW algorithms that form the basis of the California Integrated Seismic Network (CISN) ShakeAlert system, a USGS-funded prototype end-to-end EEW system that could potentially be implemented in California. In Europe, VS is currently operating as a real-time test system in Switzerland. As part of the on-going EU project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction), VS installations in southern Italy, western Greece, Istanbul, Romania, and Iceland are planned or underway. In Switzerland, VS has been running in real-time on stations monitored by the Swiss Seismological Service (including stations from Austria, France, Germany, and Italy) since 2010. While originally based on the Earthworm system it has recently been ported to the SeisComp3 system. Besides taking advantage of SeisComp3's picking and phase association capabilities it greatly simplifies the potential installation of VS at networks in particular those already running SeisComp3. We present the architecture of the new SeisComp3 based version and compare its results from off-line tests with the real-time performance of VS in Switzerland over the past two years. We further show that the empirical relationships used by VS to estimate magnitudes and ground motion, originally derived from southern California data, perform well in Switzerland.

  18. A novel waveband routing algorithm in hierarchical WDM optical networks

    NASA Astrophysics Data System (ADS)

    Huang, Jun; Guo, Xiaojin; Qiu, Shaofeng; Luo, Jiangtao; Zhang, Zhizhong

    2007-11-01

    Hybrid waveband/wavelength switching in intelligent optical networks is gaining more and more academic attention. Developing algorithms that make efficient use of waveband switching capability is very challenging. In this paper, we propose a novel cross-layer routing algorithm, the waveband layered graph routing algorithm (WBLGR), for waveband-switching-enabled optical networks. Extensive simulations show that the WBLGR algorithm can significantly improve performance in terms of reduced call blocking probability.

  19. Short-range ground-based synthetic aperture radar imaging: performance comparison between frequency-wavenumber migration and back-projection algorithms

    NASA Astrophysics Data System (ADS)

    Yigit, Enes; Demirci, Sevket; Özdemir, Caner; Tekbaş, Mustafa

    2013-01-01

    Two popular synthetic aperture radar (SAR) reconstruction algorithms, namely the back-projection (BP) and the frequency-wavenumber (ω-k) algorithms, were tested and compared against each other, especially for their use in ground-based (GB) SAR applications aimed at foreign object debris removal. For this purpose, an experimental setup was assembled in a semi-anechoic chamber to obtain near-field SAR images of objects on the ground. The 90 to 95 GHz scattering data were then acquired using a stepped-frequency continuous-wave radar operation. The performances of the setup and the imaging algorithms were assessed by exploiting various metrics including the point spread function, signal-to-clutter ratio, integrated side-lobe ratio, and computational complexity. Results demonstrate that although both algorithms produce accurate images of targets, the BP algorithm is superior to the ω-k algorithm due to some inherent advantages that make it specifically suited to short-range GB-SAR applications.
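
    Neither of the paper's implementations is available here; the following is a minimal delay-and-sum (matched-filter) back-projection sketch for stepped-frequency data collected along a linear aperture, with a synthetic point target standing in for a measured scene. The geometry, frequency sweep and image grid are arbitrary illustrative values, not the paper's 90-95 GHz setup.

```python
import numpy as np

c = 3e8
freqs = np.linspace(90e9, 95e9, 64)             # stepped-frequency sweep (illustrative)
antenna_x = np.linspace(-0.5, 0.5, 64)          # linear synthetic aperture positions (m)
target = np.array([0.1, 2.0])                   # point target at (x, y) in metres

# Synthetic monostatic data: phase history of the point target.
ax = antenna_x[:, None]
r_t = np.hypot(target[0] - ax, target[1])       # one-way range per antenna position
data = np.exp(-1j * 4 * np.pi * freqs[None, :] * r_t / c)   # 4*pi*f*r/c = 2*pi*f*(2r)/c

# Back-projection: coherently sum every measurement at every image pixel.
xg = np.linspace(-0.4, 0.4, 81)
yg = np.linspace(1.6, 2.4, 81)
image = np.zeros((yg.size, xg.size), dtype=complex)
for ia, xa in enumerate(antenna_x):
    r = np.hypot(xg[None, :] - xa, yg[:, None])                  # pixel ranges
    steering = np.exp(1j * 4 * np.pi * freqs[:, None, None] * r[None, :, :] / c)
    image += np.tensordot(data[ia], steering, axes=(0, 0))       # sum over frequency

peak = np.unravel_index(np.argmax(np.abs(image)), image.shape)
print("peak at x=%.2f m, y=%.2f m" % (xg[peak[1]], yg[peak[0]]))
```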

  20. Performance Improvement of the Goertzel Algorithm in Estimating of Protein Coding Regions Using Modified Anti-notch Filter and Linear Predictive Coding Model

    PubMed Central

    Farsani, Mahsa Saffari; Sahhaf, Masoud Reza Aghabozorgi; Abootalebi, Vahid

    2016-01-01

    The aim of this paper is to improve the performance of the conventional Goertzel algorithm in determining the protein coding regions in deoxyribonucleic acid (DNA) sequences. First, the symbolic DNA sequences are converted into numerical signals using the electron ion interaction potential method. Then, by combining the modified anti-notch filter and a linear predictive coding model, we proposed an efficient algorithm to achieve the performance improvement in the Goertzel algorithm for estimating genetic regions. Finally, a thresholding method is applied to precisely identify the exon and intron regions. The proposed algorithm is applied to several genes, including genes available in the databases BG570 and HMR195, and the results are compared to other methods based on the nucleotide level evaluation criteria. Results demonstrate that our proposed method reduces the number of incorrect nucleotides which are estimated to be in the noncoding region. In addition, the area under the receiver operating characteristic curve has improved by factors of 1.35 and 1.12 in the HMR195 and BG570 datasets respectively, in comparison with the conventional Goertzel algorithm. PMID:27563569
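
    A minimal sketch of the conventional (unmodified) Goertzel step the paper builds on: DNA symbols are mapped to electron ion interaction potential (EIIP) numbers and the power at the period-3 bin (k = N/3) is evaluated over sliding windows, since coding regions tend to show elevated period-3 power. The EIIP values are the commonly quoted ones; the window length, step size and random test sequence are arbitrary.

```python
import numpy as np

EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}   # commonly quoted values

def goertzel_power(x, k):
    """Power of the k-th DFT bin of sequence x via the Goertzel recursion."""
    n = len(x)
    coeff = 2.0 * np.cos(2.0 * np.pi * k / n)
    s_prev = s_prev2 = 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

def period3_profile(dna, window=351, step=3):
    """Sliding-window period-3 (k = window/3) power along a DNA string."""
    signal = np.array([EIIP.get(b, 0.0) for b in dna.upper()])
    signal -= signal.mean()                      # remove the DC component
    k = window // 3
    centers, powers = [], []
    for start in range(0, len(signal) - window + 1, step):
        seg = signal[start:start + window]
        centers.append(start + window // 2)
        powers.append(goertzel_power(seg, k))
    return np.array(centers), np.array(powers)

rng = np.random.default_rng(0)
dna = "".join(rng.choice(list("ACGT"), size=2000))   # random stand-in sequence
pos, p3 = period3_profile(dna)
print("max period-3 power:", p3.max())
```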

  2. Efficient algorithms for optimising the optical performance of profiled smooth walled horns for future CMB and Far-IR missions

    NASA Astrophysics Data System (ADS)

    McCarthy, Darragh; Trappe, Neil; Murphy, J. Anthony; O'Sullivan, Créidhe; Gradziel, Marcin; Doherty, Stephen; Bracken, Colm; Tynan, Niall; Polegre, Arturo; Huggard, Peter

    2014-07-01

    Astronomical observations in the far-infrared are critical for investigation of cosmic microwave background (CMB) radiation and the formation and evolution of planets, stars and galaxies. In the case of space telescope receivers, a strong heritage exists for corrugated horn antenna feeds to couple the far-infrared signals to the detectors mounted in a waveguide or cavity structure. Such antenna feeds have been utilized, for example, in the Planck satellite in both single-mode channels for the observation of the CMB and the multi-mode channels optimized for the detection of foreground sources. Looking to the demands of the future space missions, it is clear that the development of new technology solutions for the optimization and simplification of horn antenna structures will be required for large arrays. Horn antennas will continue to offer excellent control of beam and polarization properties for CMB polarisation experiments satisfying stringent requirements on low sidelobe levels, symmetry, and low cross polarization in large arrays. Similarly for far infrared systems, multi-mode horn and waveguide cavity structures are proposed to enhance optical coupling of weak signals for cavity coupled bolometers. In this paper we present a computationally efficient approach for modelling and optimising horn characteristics. We investigate smooth-walled horns that have an equivalent optical performance to that of corrugated horns traditionally used for CMB measurements. We discuss the horn optimisation process and the algorithms available to maximise performance of a merit parameter such as low cross polarisation or high Gaussicity. A single moded horn resulting from this design process has been constructed and experimentally verified in the W band. The results of the measurement campaign are presented in this paper and compared to the simulated results, showing a high level of agreement in co and cross polarisation radiation patterns, with low levels of integrated cross

  3. Simulation-Based Evaluation of the Performances of an Algorithm for Detecting Abnormal Disease-Related Features in Cattle Mortality Records

    PubMed Central

    Perrin, Jean-Baptiste; Durand, Benoît; Gay, Emilie; Ducrot, Christian; Hendrikx, Pascal; Calavas, Didier; Hénaux, Viviane

    2015-01-01

    We performed a simulation study to evaluate the performances of an anomaly detection algorithm considered in the frame of an automated surveillance system of cattle mortality. The method consisted in a combination of temporal regression and spatial cluster detection which allows identifying, for a given week, clusters of spatial units showing an excess of deaths in comparison with their own historical fluctuations. First, we simulated 1,000 outbreaks of a disease causing extra deaths in the French cattle population (about 200,000 herds and 20 million cattle) according to a model mimicking the spreading patterns of an infectious disease and injected these disease-related extra deaths in an authentic mortality dataset, spanning from January 2005 to January 2010. Second, we applied our algorithm on each of the 1,000 semi-synthetic datasets to identify clusters of spatial units showing an excess of deaths considering their own historical fluctuations. Third, we verified if the clusters identified by the algorithm did contain simulated extra deaths in order to evaluate the ability of the algorithm to identify unusual mortality clusters caused by an outbreak. Among the 1,000 simulations, the median duration of simulated outbreaks was 8 weeks, with a median number of 5,627 simulated deaths and 441 infected herds. Within the 12-week trial period, 73% of the simulated outbreaks were detected, with a median timeliness of 1 week, and a mean of 1.4 weeks. The proportion of outbreak weeks flagged by an alarm was 61% (i.e. sensitivity) whereas one in three alarms was a true alarm (i.e. positive predictive value). The performances of the detection algorithm were evaluated for alternative combination of epidemiologic parameters. The results of our study confirmed that in certain conditions automated algorithms could help identifying abnormal cattle mortality increases possibly related to unidentified health events. PMID:26536596
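
    The published system couples temporal regression with spatial cluster detection; the spatial scan part is beyond a short example, but the temporal idea (flag a week whose death count exceeds what a unit's own history predicts) can be sketched as below. The seasonal-mean baseline, the Poisson-style threshold and the injected outbreak are simplifications, not the authors' model.

```python
import numpy as np

def weekly_excess_alarms(counts, n_seasons=52, z=3.0):
    """Flag weeks whose count exceeds the historical mean for that calendar
    week by more than z approximate Poisson standard deviations."""
    counts = np.asarray(counts, dtype=float)
    alarms = []
    for week in range(n_seasons, len(counts)):
        history = counts[week % n_seasons:week:n_seasons]   # same week in past years
        baseline = history.mean()
        threshold = baseline + z * np.sqrt(max(baseline, 1.0))
        if counts[week] > threshold:
            alarms.append((week, counts[week], baseline))
    return alarms

rng = np.random.default_rng(7)
weeks = 52 * 5
seasonal = 40 + 10 * np.sin(2 * np.pi * np.arange(weeks) / 52)
counts = rng.poisson(seasonal)
counts[240:248] += rng.poisson(30, size=8)          # injected "outbreak" deaths
for week, observed, baseline in weekly_excess_alarms(counts):
    print(f"week {week}: observed {observed:.0f}, baseline {baseline:.1f}")
```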

  4. Performance evaluation and optimization of BM4D-AV denoising algorithm for cone-beam CT images

    NASA Astrophysics Data System (ADS)

    Huang, Kuidong; Tian, Xiaofei; Zhang, Dinghua; Zhang, Hua

    2015-12-01

    The broadening application of cone-beam Computed Tomography (CBCT) in medical diagnostics and nondestructive testing necessitates advanced denoising algorithms for its 3D images. The block-matching and four dimensional filtering algorithm with adaptive variance (BM4D-AV) is applied to 3D image denoising in this research. To optimize it, the key filtering parameters of the BM4D-AV algorithm are first assessed based on simulated CBCT images, and a table of optimized filtering parameters is obtained. Then, considering the complexity of the noise in realistic CBCT images, possible noise standard deviations in BM4D-AV are evaluated to establish a selection principle for realistic denoising. The results of corresponding experiments demonstrate that the BM4D-AV algorithm with optimized parameters presents excellent denoising performance on realistic 3D CBCT images.

  5. Constraint satisfaction using a hybrid evolutionary hill-climbing algorithm that performs opportunistic arc and path revision

    SciTech Connect

    Bowen, J.; Dozier, G.

    1996-12-31

    This paper introduces a hybrid evolutionary hill-climbing algorithm that quickly solves Constraint Satisfaction Problems (CSPs). This hybrid uses opportunistic arc and path revision in an interleaved fashion to reduce the size of the search space and to recognize when to quit if a CSP is based on an inconsistent constraint network. This hybrid outperforms a well known hill-climbing algorithm, the Iterative Descent Method, on a test suite of 750 randomly generated CSPs.
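
    The interleaving of evolutionary search with arc and path revision is not reproduced here; as generic background for the hill-climbing half only, the sketch below runs a textbook min-conflicts search on randomly generated binary CSPs. The problem generator and all parameters are illustrative, not the authors' test suite.

```python
import random

def random_binary_csp(n_vars=20, domain=4, density=0.3, tightness=0.3, seed=0):
    """Random binary CSP: constraints[(i, j)] is the set of FORBIDDEN value pairs."""
    rng = random.Random(seed)
    constraints = {}
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            if rng.random() < density:
                pairs = [(a, b) for a in range(domain) for b in range(domain)]
                forbidden = set(rng.sample(pairs, int(tightness * len(pairs))))
                constraints[(i, j)] = forbidden
    return n_vars, domain, constraints

def conflicts(assign, var, value, constraints):
    count = 0
    for (i, j), forbidden in constraints.items():
        if i == var and (value, assign[j]) in forbidden:
            count += 1
        elif j == var and (assign[i], value) in forbidden:
            count += 1
    return count

def min_conflicts(n_vars, domain, constraints, max_steps=20000, seed=1):
    rng = random.Random(seed)
    assign = [rng.randrange(domain) for _ in range(n_vars)]
    for _ in range(max_steps):
        conflicted = [v for v in range(n_vars)
                      if conflicts(assign, v, assign[v], constraints) > 0]
        if not conflicted:
            return assign                     # all constraints satisfied
        var = rng.choice(conflicted)
        assign[var] = min(range(domain),
                          key=lambda val: conflicts(assign, var, val, constraints))
    return None                               # stuck or unsatisfiable within budget

csp = random_binary_csp()
print("solution:", min_conflicts(*csp))
```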

  6. Automatic control algorithm effects on energy production

    NASA Technical Reports Server (NTRS)

    Mcnerney, G. M.

    1981-01-01

    A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.

  7. Molecular simulation workflows as parallel algorithms: the execution engine of Copernicus, a distributed high-performance computing platform.

    PubMed

    Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik

    2015-06-01

    Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers-particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state dependencies of each constituent part, algorithms only need to be described on a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus. PMID:26575558
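
    As a language-agnostic illustration of the dataflow idea described above (and emphatically not the Copernicus API), the sketch below executes a dictionary of tasks whose declared inputs determine the order of execution, dispatching every task whose dependencies are already satisfied to a thread pool.

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def run_dataflow(tasks, max_workers=4):
    """tasks: {name: (function, [dependency names])}.  Each function receives the
    results of its dependencies, in order.  Independent tasks run in parallel."""
    results, pending = {}, {}
    remaining = dict(tasks)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        while remaining or pending:
            # Launch every task whose dependencies are all available.
            ready = [n for n, (_, deps) in remaining.items()
                     if all(d in results for d in deps)]
            if not ready and not pending:
                raise ValueError("cyclic or unresolvable dependencies")
            for name in ready:
                fn, deps = remaining.pop(name)
                future = pool.submit(fn, *[results[d] for d in deps])
                pending[future] = name
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            for future in done:
                results[pending.pop(future)] = future.result()
    return results

# Toy workflow: two independent "simulations" feed one "analysis" step.
tasks = {
    "sim_a": (lambda: sum(range(10**6)), []),
    "sim_b": (lambda: sum(range(10**6, 2 * 10**6)), []),
    "analyse": (lambda a, b: (a + b) / 2, ["sim_a", "sim_b"]),
}
print(run_dataflow(tasks)["analyse"])
```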

  8. Evaluation of Variable Refrigerant Flow Systems Performance and the Enhanced Control Algorithm on Oak Ridge National Laboratory s Flexible Research Platform

    SciTech Connect

    Im, Piljae; Munk, Jeffrey D; Gehl, Anthony C

    2015-06-01

    A research project “Evaluation of Variable Refrigerant Flow (VRF) Systems Performance and the Enhanced Control Algorithm on Oak Ridge National Laboratory’s (ORNL’s) Flexible Research Platform” was performed to (1) install and validate the performance of Samsung VRF systems compared with the baseline rooftop unit (RTU) variable-air-volume (VAV) system and (2) evaluate the enhanced control algorithm for the VRF system on the two-story flexible research platform (FRP) in Oak Ridge, Tennessee. Based on the VRF system designed by Samsung and ORNL, the system was installed from February 18 through April 15, 2014. The final commissioning and system optimization were completed on June 2, 2014, and the initial test for system operation was started the following day, June 3, 2014. In addition, the enhanced control algorithm was implemented and updated on June 18. After a series of additional commissioning actions, the energy performance data from the RTU and the VRF system were monitored from July 7, 2014, through February 28, 2015. Data monitoring and analysis were performed for the cooling season and heating season separately, and the calibrated simulation model was developed and used to estimate the energy performance of the RTU and VRF systems. This final report includes discussion of the design and installation of the VRF system, the data monitoring and analysis plan, the cooling season and heating season data analysis, and the building energy modeling study

  9. Comprehensive evaluation of fusion transcript detection algorithms and a meta-caller to combine top performing methods in paired-end RNA-seq data

    PubMed Central

    Liu, Silvia; Tsai, Wei-Hsiang; Ding, Ying; Chen, Rui; Fang, Zhou; Huo, Zhiguang; Kim, SungHwan; Ma, Tianzhou; Chang, Ting-Yu; Priedigkeit, Nolan Michael; Lee, Adrian V.; Luo, Jianhua; Wang, Hsei-Wei; Chung, I-Fang; Tseng, George C.

    2016-01-01

    Background: Fusion transcripts are formed by either fusion genes (DNA level) or trans-splicing events (RNA level). They have been recognized as a promising tool for diagnosing, subtyping and treating cancers. RNA-seq has become a precise and efficient standard for genome-wide screening of such aberration events. Many fusion transcript detection algorithms have been developed for paired-end RNA-seq data but their performance has not been comprehensively evaluated to guide practitioners. In this paper, we evaluated 15 popular algorithms by their precision and recall trade-off, accuracy of supporting reads and computational cost. We further combine top-performing methods for improved ensemble detection. Results: Fifteen fusion transcript detection tools were compared using three synthetic data sets under different coverage, read length, insert size and background noise, and three real data sets with selected experimental validations. No single method was dominant, but SOAPfuse generally performed well, followed by FusionCatcher and JAFFA. We further demonstrated the potential of a meta-caller algorithm by combining top performing methods to re-prioritize candidate fusion transcripts with high confidence that can be followed by experimental validation. Conclusion: Our result provides insightful recommendations when applying individual tools or combining top performers to identify fusion transcript candidates. PMID:26582927
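
    The paper's meta-caller re-prioritizes candidates using the individual callers' measured performance; a much simpler flavour of the same idea, keeping fusion candidates reported by at least k of the selected tools and ranking them by the number of agreeing tools, is sketched below with invented call sets. This is not the authors' scoring scheme.

```python
from collections import Counter

def meta_call(calls_per_tool, min_support=2):
    """calls_per_tool: {tool name: set of (gene5p, gene3p) fusion candidates}.
    Returns candidates supported by >= min_support tools, highest support first."""
    support = Counter()
    for calls in calls_per_tool.values():
        support.update(set(calls))            # one vote per tool per candidate
    kept = [(fusion, n) for fusion, n in support.items() if n >= min_support]
    return sorted(kept, key=lambda item: -item[1])

# Invented example call sets for three of the tools named in the abstract.
calls = {
    "SOAPfuse":      {("TMPRSS2", "ERG"), ("BCR", "ABL1"), ("GENE1", "GENE2")},
    "FusionCatcher": {("TMPRSS2", "ERG"), ("BCR", "ABL1"), ("GENE3", "GENE4")},
    "JAFFA":         {("TMPRSS2", "ERG"), ("GENE5", "GENE6")},
}
for fusion, n_tools in meta_call(calls):
    print(f"{fusion[0]}--{fusion[1]}: reported by {n_tools} tool(s)")
```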

  10. LEED I/V determination of the structure of a MoO3 monolayer on Au(111): Testing the performance of the CMA-ES evolutionary strategy algorithm, differential evolution, a genetic algorithm and tensor LEED based structural optimization

    NASA Astrophysics Data System (ADS)

    Primorac, E.; Kuhlenbeck, H.; Freund, H.-J.

    2016-07-01

    The structure of a thin MoO3 layer on Au(111) with a c(4 × 2) superstructure was studied with LEED I/V analysis. As proposed previously (Quek et al., Surf. Sci. 577 (2005) L71), the atomic structure of the layer is similar to that of a MoO3 single layer as found in regular α-MoO3. The layer on Au(111) has a glide plane parallel to the short unit vector of the c(4 × 2) unit cell and the molybdenum atoms are bridge-bonded to two surface gold atoms, with the structure of the gold surface being slightly distorted. The structural refinement was performed with the CMA-ES evolutionary strategy algorithm, which reached a Pendry R-factor of ∼0.044. In the second part, the performance of CMA-ES is compared with that of the differential evolution method, a genetic algorithm and the Powell optimization algorithm, employing I/V curves calculated with tensor LEED.
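
    A full dynamical LEED calculation is needed to evaluate the real R-factor surface, so purely to illustrate how two of the optimizers named above might be compared, the sketch below runs SciPy's differential evolution and Powell's method on a multimodal stand-in objective. CMA-ES is omitted because it requires an extra package; the test function, bounds and starting point are arbitrary.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def surrogate_r_factor(x):
    """Rastrigin-like multimodal stand-in for a Pendry R-factor landscape."""
    x = np.asarray(x)
    return 0.05 + 0.01 * np.sum(x**2 - np.cos(3 * np.pi * x) + 1)

bounds = [(-1.0, 1.0)] * 6          # e.g. six structural displacement parameters
x0 = np.full(6, 0.5)                # a deliberately poor starting guess

de = differential_evolution(surrogate_r_factor, bounds, seed=0, tol=1e-8)
powell = minimize(surrogate_r_factor, x0, method="Powell", options={"xtol": 1e-8})

print("differential evolution:", de.fun, "after", de.nfev, "evaluations")
print("Powell:                ", powell.fun, "after", powell.nfev, "evaluations")
```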

  11. Performance of a benchmark parallel implementation of the Van Slyke and Wets algorithm for two-stage stochastic programs on the Sequent/Balance

    SciTech Connect

    Ariyawansa, K.A.; Hudson, D.D.

    1989-01-01

    We describe a benchmark parallel version of the Van Slyke and Wets algorithm for two-stage stochastic programs and an implementation of that algorithm on the Sequent/Balance. We also report results of a numerical experiment using random test problems and our implementation. These performance results, to the best of our knowledge, are the first available for the Van Slyke and Wets algorithm on a parallel processor. They indicate that the benchmark implementation parallelizes well, and that even with the use of parallel processing, problems with random variables having large numbers of realizations can take prohibitively large amounts of computation for solution. Thus, they demonstrate the need for exploiting both parallelization and approximation for the solution of stochastic programs. 15 refs., 18 tabs.

  12. Assessment of the dose reduction potential of a model-based iterative reconstruction algorithm using a task-based performance metrology

    SciTech Connect

    Samei, Ehsan; Richard, Samuel

    2015-01-15

    Purpose: Different computed tomography (CT) reconstruction techniques offer different image quality attributes of resolution and noise, challenging the ability to compare their dose reduction potential against each other. The purpose of this study was to evaluate and compare the task-based imaging performance of CT systems to enable the assessment of the dose performance of a model-based iterative reconstruction (MBIR) to that of an adaptive statistical iterative reconstruction (ASIR) and a filtered back projection (FBP) technique. Methods: The ACR CT phantom (model 464) was imaged across a wide range of mA settings on a 64-slice CT scanner (GE Discovery CT750 HD, Waukesha, WI). Based on previous work, the resolution was evaluated in terms of a task-based modulation transfer function (MTF) using a circular-edge technique and images from the contrast inserts located in the ACR phantom. Noise performance was assessed in terms of the noise-power spectrum (NPS) measured from the uniform section of the phantom. The task-based MTF and NPS were combined with a task function to yield a task-based estimate of imaging performance, the detectability index (d′). The detectability index was computed as a function of dose for two imaging tasks corresponding to the detection of a relatively small and a relatively large feature (1.5 and 25 mm, respectively). The performance of MBIR in terms of the d′ was compared with that of ASIR and FBP to assess its dose reduction potential. Results: Results indicated that MBIR exhibits variable spatial resolution with respect to object contrast and noise while significantly reducing image noise. The NPS measurements for MBIR indicated a noise texture with a low-pass quality compared to the typical midpass noise found in FBP-based CT images. At comparable dose, the d′ for MBIR was higher than those of FBP and ASIR by at least 61% and 19% for the small feature and the large feature tasks, respectively. Compared to FBP and ASIR, MBIR
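
    For readers unfamiliar with the metric, a commonly used non-prewhitening form of the detectability index combines the task function W, the task-based MTF and the NPS as d'^2 = (∫∫ |W|^2 MTF^2 du dv)^2 / ∫∫ |W|^2 MTF^2 NPS du dv; whether this is exactly the observer model used in the paper is an assumption. The discrete sketch below uses made-up MTF and NPS shapes purely to show the computation.

```python
import numpy as np

def detectability_index(task_ft, mtf, nps, du, dv):
    """Non-prewhitening d' from sampled 2-D task function, MTF and NPS."""
    w2m2 = np.abs(task_ft) ** 2 * mtf**2
    numerator = (np.sum(w2m2) * du * dv) ** 2
    denominator = np.sum(w2m2 * nps) * du * dv
    return np.sqrt(numerator / denominator)

# Frequency grid and made-up system curves (illustration only).
f = np.fft.fftfreq(256, d=0.5)                 # cycles/mm for 0.5 mm sampling
du = dv = f[1] - f[0]
fu, fv = np.meshgrid(f, f)
rho = np.hypot(fu, fv)
mtf_fbp = np.exp(-(rho / 0.6) ** 2)
mtf_mbir = np.exp(-(rho / 0.45) ** 2)          # pretend lower resolution
nps_fbp = 1e-3 * rho * np.exp(-rho / 0.4)      # mid-pass, FBP-like texture
nps_mbir = 2e-4 * np.exp(-rho / 0.2)           # low-pass, much less noise

def gaussian_task(fwhm_mm, contrast=10.0):
    sigma = fwhm_mm / 2.355
    return contrast * np.exp(-2 * (np.pi * sigma * rho) ** 2)

for fwhm in (1.5, 25.0):                       # small vs large feature tasks
    for name, mtf, nps in [("FBP", mtf_fbp, nps_fbp), ("MBIR", mtf_mbir, nps_mbir)]:
        d = detectability_index(gaussian_task(fwhm), mtf, nps, du, dv)
        print(f"{fwhm:>5.1f} mm task, {name}: d' = {d:.1f}")
```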

  13. Performance on the Luria-Nebraska Neuropsychological Test Battery-Children's Revision: A Comparison of Children with and without Significant WISC-R VIQ-PIQ Discrepancies.

    ERIC Educational Resources Information Center

    Gilger, J. W.; Geary, D. C.

    1985-01-01

    Compared the performance of 56 children on the 11 subscales of the Luria-Nebraska Neuropsychological Battery-Children's Revision. Results revealed significant differences on Receptive Speech and Expressive Language subscales, suggesting a possible differential sensitivity of the children's Luria-Nebraska to verbal and nonverbal cognitive deficits.…

  14. Near real-time expectation-maximization algorithm: computational performance and passive millimeter-wave imaging field test results

    NASA Astrophysics Data System (ADS)

    Reynolds, William R.; Talcott, Denise; Hilgers, John W.

    2002-07-01

    A new iterative algorithm (EMLS) via the expectation maximization method is derived for extrapolating a non-negative object function from noisy, diffraction-blurred image data. The algorithm has the following desirable attributes: fast convergence is attained for high-frequency object components, it is less sensitive to constraint parameters, and it will accommodate randomly missing data. Speed and convergence results are presented. Field test imagery was obtained with a passive millimeter wave imaging sensor having a 30.5 cm aperture. The algorithm was implemented and tested in near real time using field test imagery. Theoretical results and experimental results using the field test imagery will be compared using an effective aperture measure of resolution increase. The effective aperture measure, based on examination of the edge-spread function, will be detailed.

  15. RISMA: A Rule-based Interval State Machine Algorithm for Alerts Generation, Performance Analysis and Monitoring Real-Time Data Processing

    NASA Astrophysics Data System (ADS)

    Laban, Shaban; El-Desouky, Aly

    2013-04-01

    Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). The CLIPS expert system shell has been used as the main rule engine for implementing the algorithm rules. The Python programming language and the module "PyCLIPS" are used for building the necessary code for algorithm implementation. More than 1.7 million intervals constituting the Concise List of Frames (CLF) from 20 different seismic stations have been used for evaluating the proposed algorithm and for evaluating station behaviour and performance. The initial results showed that the proposed algorithm can help in better understanding the operation and performance of those stations. Different important information, such as alerts and some station performance parameters, can be derived from the proposed algorithm. For IMS interval-based data and at any period of time it is possible to analyze station behavior, determine the missing data, generate necessary alerts, and measure some of the station performance attributes. The details of the proposed algorithm, methodology, implementation, experimental results, advantages, and limitations of this research are presented. Finally, future directions and recommendations are discussed.

  16. An Upperbound to the Performance of Ranked-Output Searching: Optimal Weighting of Query Terms Using A Genetic Algorithm.

    ERIC Educational Resources Information Center

    Robertson, Alexander M.; Willett, Peter

    1996-01-01

    Describes a genetic algorithm (GA) that assigns weights to query terms in a ranked-output document retrieval system. Experiments showed the GA often found weights slightly superior to those produced by deterministic weighting (F4). Many times, however, the two methods gave the same results and sometimes the F4 results were superior, indicating…
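
    As an illustration of the kind of weighting scheme described above, the sketch below evolves query-term weights with a toy genetic algorithm. It is not the authors' GA or the deterministic F4 scheme; the document collection, relevance judgements, and fitness proxy are all hypothetical.

```python
# Minimal sketch: evolve query-term weights so relevant documents rank highest.
import random

DOCS = {                       # hypothetical term-frequency vectors
    "d1": {"mine": 3, "detect": 1},
    "d2": {"mine": 1, "radar": 2},
    "d3": {"radar": 1},
}
RELEVANT = {"d1", "d2"}
TERMS = ["mine", "detect", "radar"]

def score(doc, weights):
    return sum(w * DOCS[doc].get(t, 0) for t, w in zip(TERMS, weights))

def fitness(weights):
    ranked = sorted(DOCS, key=lambda d: score(d, weights), reverse=True)
    # count relevant documents appearing in the top len(RELEVANT) positions
    return sum(1 for d in ranked[: len(RELEVANT)] if d in RELEVANT)

def evolve(pop_size=20, generations=50, mutation=0.2):
    pop = [[random.random() for _ in TERMS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TERMS))
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < mutation:            # random-reset mutation
                child[random.randrange(len(TERMS))] = random.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```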

  17. Offline Performance of the Filter Bank EEW Algorithm in the 2014 M6.0 South Napa Earthquake

    NASA Astrophysics Data System (ADS)

    Meier, M. A.; Heaton, T. H.; Clinton, J. F.

    2014-12-01

    Medium-size events like the M6.0 South Napa earthquake are very challenging for EEW: the damage such events produce can be severe, but it is generally confined to relatively small zones around the epicenter and the shaking duration is short. This leaves a very short window for timely EEW alerts. Algorithms that wait for several stations to trigger before sending out EEW alerts are typically not fast enough for these kinds of events because their blind zone (the zone where strong ground motions start before the warnings arrive) typically covers all or most of the area that experiences strong ground motions. At the same time, single-station algorithms are often too unreliable to provide useful alerts. The filter bank EEW algorithm is a new algorithm that is designed to provide maximally accurate and precise earthquake parameter estimates with minimum data input, with the goal of producing reliable EEW alerts when only a very small number of stations have been reached by the p-wave. It combines the strengths of single-station and network-based algorithms in that it starts parameter estimates as soon as 0.5 seconds of data are available from the first station, but then continually incorporates additional data from the same or from any number of other stations. The algorithm analyzes the time-dependent frequency content of real-time waveforms with a filter bank. It then uses an extensive training data set to find earthquake records from the past that have had similar frequency content at a given time since the p-wave onset. The source parameters of the most similar events are used to parameterize a likelihood function for the source parameters of the ongoing event, which can then be maximized to find the most likely parameter estimates. Our preliminary results show that the filter bank EEW algorithm correctly estimated the magnitude of the South Napa earthquake to be ~M6 with only 1 second's worth of data at the nearest station to the epicenter. This estimate is then
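
    The frequency-content feature at the heart of such an approach can be sketched with a simple bandpass filter bank. The band edges, filter order, and SciPy Butterworth implementation below are assumptions chosen for illustration, not the algorithm's actual filter bank or training procedure.

```python
# Minimal sketch: peak absolute amplitude of a waveform in several frequency bands.
import numpy as np
from scipy.signal import butter, sosfilt

def filter_bank_features(waveform, fs, bands=((0.5, 1), (1, 2), (2, 4), (4, 8))):
    """Return one amplitude feature per frequency band (Hz)."""
    feats = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        feats.append(np.max(np.abs(sosfilt(sos, waveform))))
    return np.array(feats)

# Toy usage: 1 s of 100 Hz data dominated by a decaying 3 Hz arrival
fs = 100.0
t = np.arange(0, 1, 1 / fs)
wave = np.sin(2 * np.pi * 3 * t) * np.exp(-3 * t)
print(filter_bank_features(wave, fs).round(2))
```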

  18. Acceleration of iterative image restoration algorithms.

    PubMed

    Biggs, D S; Andrews, M

    1997-03-10

    A new technique for the acceleration of iterative image restoration algorithms is proposed. The method is based on the principles of vector extrapolation and does not require the minimization of a cost function. The algorithm is derived and its performance illustrated with Richardson-Lucy (R-L) and maximum entropy (ME) deconvolution algorithms and the Gerchberg-Saxton magnitude and phase retrieval algorithms. Considerable reduction in restoration times is achieved with little image distortion or computational overhead per iteration. The speedup achieved is shown to increase with the number of iterations performed and is easily adapted to suit different algorithms. An example R-L restoration achieves an average speedup of 40 times after 250 iterations and an ME method 20 times after only 50 iterations. An expression for estimating the acceleration factor is derived and confirmed experimentally. Comparisons with other acceleration techniques in the literature reveal significant improvements in speed and stability. PMID:18250863
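
    The vector-extrapolation idea can be illustrated for Richardson-Lucy deconvolution with the sketch below. It assumes a 1-D signal, a known symmetric PSF, and a simple clipped extrapolation factor; it is a minimal sketch of the acceleration principle, not the authors' implementation.

```python
# Minimal sketch: predict ahead along recent update directions, then apply one
# ordinary Richardson-Lucy step.
import numpy as np

def rl_step(x, psf, data, eps=1e-12):
    """One Richardson-Lucy multiplicative update."""
    est = np.convolve(x, psf, mode="same")
    ratio = data / np.maximum(est, eps)
    return x * np.convolve(ratio, psf[::-1], mode="same")

def accelerated_rl(data, psf, n_iter=50):
    x = np.full_like(data, data.mean())
    x_prev = x.copy()
    g_prev = np.zeros_like(data)     # previous update direction
    alpha = 0.0
    for _ in range(n_iter):
        # extrapolate along the previous direction (alpha clipped for stability)
        y = np.maximum(x + alpha * (x - x_prev), 0)
        x_new = rl_step(y, psf, data)
        g = x_new - y                # current update direction
        denom = np.dot(g_prev, g_prev)
        alpha = np.clip(np.dot(g, g_prev) / denom, 0, 1) if denom > 0 else 0.0
        x_prev, x, g_prev = x, x_new, g
    return x

# Toy usage: blur a spike train and restore it
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
truth = np.zeros(64); truth[20] = 10; truth[40] = 5
blurred = np.convolve(truth, psf, mode="same")
print(accelerated_rl(blurred, psf).round(2))
```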

  19. Passive microwave algorithm development and evaluation

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.

    1995-01-01

    The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.

  20. Optimization of Deep Drilling Performance - Development and Benchmark Testing of Advanced Diamond Product Drill Bits & HP/HT Fluids to Significantly Improve Rates of Penetration

    SciTech Connect

    Alan Black; Arnis Judzis

    2005-09-30

    This document details the progress to date on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year starting October 2004 through September 2005. The industry cost shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark ''best in class'' diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit--fluid concepts, modify as necessary and commercialize products. As of report date, TerraTek has concluded all Phase 1 testing and is planning Phase 2 development.

  1. There are lots of big fish in this pond: The role of peer overqualification on task significance, perceived fit, and performance for overqualified employees.

    PubMed

    Hu, Jia; Erdogan, Berrin; Bauer, Talya N; Jiang, Kaifeng; Liu, Songbo; Li, Yuhui

    2015-07-01

    Research has uncovered mixed results regarding the influence of overqualification on employee performance outcomes, suggesting the existence of boundary conditions for such an influence. Using relative deprivation theory (Crosby, 1976) as the primary theoretical basis, in the current research, we examine the moderating role of peer overqualification and provide insights to the questions regarding whether, when, and how overqualification relates to employee performance. We tested the theoretical model with data gathered across three phases over 6 months from 351 individuals and their supervisors in 72 groups. Results showed that when working with peers whose average overqualification level was high, as opposed to low, employees who felt overqualified for their jobs perceived greater task significance and person-group fit, and demonstrated higher levels of in-role and extra-role performance. We discuss theoretical and managerial implications for overqualification at the individual level and within the larger group context. PMID:25546266

  2. Sampling Within k-Means Algorithm to Cluster Large Datasets

    SciTech Connect

    Bejarano, Jeremy; Bose, Koushiki; Brannan, Tyler; Thomas, Anita; Adragni, Kofi; Neerchal, Nagaraj; Ostrouchov, George

    2011-08-01

    Due to current data collection technology, our ability to gather data has surpassed our ability to analyze it. In particular, k-means, one of the simplest and fastest clustering algorithms, is ill-equipped to handle extremely large datasets on even the most powerful machines. Our new algorithm uses a sample from a dataset to decrease runtime by reducing the amount of data analyzed. We perform a simulation study to compare our sampling-based k-means to the standard k-means algorithm by analyzing both the speed and accuracy of the two methods. Results show that our algorithm is significantly more efficient than the existing algorithm with comparable accuracy. Further work on this project might include a more comprehensive study both on more varied test datasets and on real weather datasets. This is especially important considering that this preliminary study was performed on rather tame datasets. Also, future studies should analyze the performance of the algorithm for varied values of k. Lastly, this paper showed that the algorithm was accurate for relatively low sample sizes. We would like to analyze this further to see how accurate the algorithm is for even lower sample sizes. We could find the lowest sample sizes, by manipulating width and confidence level, for which the algorithm would be acceptably accurate. In order for our algorithm to be a success, it needs to meet two benchmarks: match the accuracy of the standard k-means algorithm and significantly reduce runtime. Both goals are accomplished for all six datasets analyzed. However, on datasets of three and four dimensions, as the data becomes more difficult to cluster, both algorithms fail to obtain the correct classifications on some trials. Nevertheless, our algorithm consistently matches the performance of the standard algorithm while becoming remarkably more efficient with time. Therefore, we conclude that analysts can use our algorithm, expecting accurate results in considerably less time.
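
    A minimal sketch of the sampling idea is given below, assuming uniform random sampling and using scikit-learn's KMeans as a stand-in for the standard algorithm; the cluster centers are learned on the sample and then used to label the full dataset.

```python
# Minimal sketch: cluster a random sample to cut runtime, then label everything.
import numpy as np
from sklearn.cluster import KMeans

def sampled_kmeans(X, k, sample_frac=0.1, random_state=0):
    rng = np.random.default_rng(random_state)
    n_sample = max(k, int(sample_frac * len(X)))
    idx = rng.choice(len(X), size=n_sample, replace=False)
    model = KMeans(n_clusters=k, n_init=10, random_state=random_state)
    model.fit(X[idx])                                   # cluster only the sample
    return model.predict(X), model.cluster_centers_     # label the full dataset

# Toy usage: three well-separated Gaussian blobs
X = np.vstack([np.random.randn(5000, 2) + c for c in ([0, 0], [6, 6], [0, 6])])
labels, centers = sampled_kmeans(X, k=3)
print(centers.round(1))
```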

  3. A puzzle form of a non-verbal intelligence test gives significantly higher performance measures in children with severe intellectual disability

    PubMed Central

    Bello, Katrina D; Goharpey, Nahal; Crewther, Sheila G; Crewther, David P

    2008-01-01

    Background Assessment of 'potential intellectual ability' of children with severe intellectual disability (ID) is limited, as current tests designed for normal children do not maintain their interest. Thus a manual puzzle version of the Raven's Coloured Progressive Matrices (RCPM) was devised to appeal to the attentional and sensory preferences and language limitations of children with ID. It was hypothesized that performance on the book and manual puzzle forms would not differ for typically developing children but that children with ID would perform better on the puzzle form. Methods The first study assessed the validity of this puzzle form of the RCPM for 76 typically developing children in a test-retest crossover design, with a 3 week interval between tests. A second study tested performance and completion rate for the puzzle form compared to the book form in a sample of 164 children with ID. Results In the first study, no significant difference was found between performance on the puzzle and book forms in typically developing children, irrespective of the order of completion. The second study demonstrated a significantly higher performance and completion rate for the puzzle form compared to the book form in the ID population. Conclusion Similar performance on book and puzzle forms of the RCPM by typically developing children suggests that both forms measure the same construct. These findings suggest that the puzzle form does not require greater cognitive ability but demands sensory-motor attention and limits distraction in children with severe ID. Thus, we suggest the puzzle form of the RCPM is a more reliable measure of the non-verbal mentation of children with severe ID than the book form. PMID:18671882

  4. Statistically significant relational data mining :

    SciTech Connect

    Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann; Pinar, Ali; Robinson, David Gerald; Berger-Wolf, Tanya; Bhowmick, Sanjukta; Casleton, Emily; Kaiser, Mark; Nordman, Daniel J.; Wilson, Alyson G.

    2014-02-01

    This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.

  5. The enhanced locating performance of an integrated cross-correlation and genetic algorithm for radio monitoring systems.

    PubMed

    Chang, Yao-Tang; Wu, Chi-Lin; Cheng, Hsu-Chih

    2014-01-01

    The rapid development of wireless broadband communication technology has affected the location accuracy of worldwide radio monitoring stations that employ time-difference-of-arrival (TDOA) location technology. In this study, TDOA-based location technology was implemented in Taiwan for the first time according to International Telecommunications Union Radiocommunication (ITU-R) recommendations regarding monitoring and location applications. To improve location accuracy, various scenarios, such as a three-dimensional environment (considering an unequal locating antenna configuration), were investigated. Subsequently, the proposed integrated cross-correlation and genetic algorithm was evaluated in the metropolitan area of Tainan. The results indicated that the location accuracy at a circular error probability of 50% was less than 60 m when a multipath effect was present in the area. Moreover, compared with hyperbolic algorithms that have been applied in conventional TDOA-based location systems, the proposed algorithm yielded 17-fold and 19-fold improvements in the mean difference when the position of the interference station was favorable and unfavorable, respectively. Hence, various forms of radio interference, such as low transmission power, burst and weak signals, and metropolitan interference, were shown to be easily identified, located, and removed. PMID:24763254

  6. Study of synthetic aperture radar data compression and encoding. Part 3: Performance evaluation of speckle suppression and data compression algorithms

    NASA Astrophysics Data System (ADS)

    Huisman, W. C.; Verhoef, W.; Okkes, R. W.

    1986-03-01

    Rate distortion bounds for SAR images are compared with rate versus distortion relations obtained with speckle suppression and data compression algorithms. A method for optimally processing multispectral SAR images is given. It uses the spectral correlation between the mean return power corresponding to each spectral channel. Real SAR data are processed with the algorithms and subjected to information extraction experiments. Synthetic SAR images cannot be efficiently processed by the speckle suppression algorithm for one and four looks if the goal is to obtain a least squares estimate of the reference image. For Seasat imagery (4 looks), data reduction with a compression ratio of 8, without speckle suppression, gives very acceptable results, with almost no impact on image segmentation for land scenes and on Fourier analysis for ocean scenes. The extraction of dominant ocean wavelength and direction is not influenced by data compression and speckle suppression applied to Seasat data, even when the compression ratio is 20, and the appearance of Seasat imagery improves if speckle suppression is applied.

  7. The Enhanced Locating Performance of an Integrated Cross-Correlation and Genetic Algorithm for Radio Monitoring Systems

    PubMed Central

    Chang, Yao-Tang; Wu, Chi-Lin; Cheng, Hsu-Chih

    2014-01-01

    The rapid development of wireless broadband communication technology has affected the location accuracy of worldwide radio monitoring stations that employ time-difference-of-arrival (TDOA) location technology. In this study, TDOA-based location technology was implemented in Taiwan for the first time according to International Telecommunications Union Radiocommunication (ITU-R) recommendations regarding monitoring and location applications. To improve location accuracy, various scenarios, such as a three-dimensional environment (considering an unequal locating antenna configuration), were investigated. Subsequently, the proposed integrated cross-correlation and genetic algorithm was evaluated in the metropolitan area of Tainan. The results indicated that the location accuracy at a circular error probability of 50% was less than 60 m when a multipath effect was present in the area. Moreover, compared with hyperbolic algorithms that have been applied in conventional TDOA-based location systems, the proposed algorithm yielded 17-fold and 19-fold improvements in the mean difference when the position of the interference station was favorable and unfavorable, respectively. Hence, various forms of radio interference, such as low transmission power, burst and weak signals, and metropolitan interference, were shown to be easily identified, located, and removed. PMID:24763254

  8. Fast adaptive OFDM-PON over single fiber loopback transmission using dynamic rate adaptation-based algorithm for channel performance improvement

    NASA Astrophysics Data System (ADS)

    Kartiwa, Iwa; Jung, Sang-Min; Hong, Moon-Ki; Han, Sang-Kook

    2014-03-01

    In this paper, we propose a novel fast adaptive approach that was applied to an OFDM-PON 20-km single-fiber loopback transmission system to improve channel performance in terms of a stabilized BER below 2 × 10^-3 and a throughput beyond 10 Gb/s. The upstream transmission is performed through light-source-seeded modulation using a 1-GHz RSOA at the ONU. Experimental results indicated that the dynamic rate adaptation algorithm based on greedy Levin-Campello loading could be an effective solution to mitigate channel instability and data rate degradation caused by the Rayleigh backscattering effect and inefficient subcarrier resource allocation.
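
    The greedy Levin-Campello idea referred to above can be sketched as follows, under simplified assumptions (margin-adaptive loading, hypothetical per-subcarrier channel gains, and a fixed SNR gap); the next bit always goes to whichever subcarrier needs the least additional power.

```python
# Minimal sketch: greedy Levin-Campello-style bit loading under a power budget.
import heapq

def levin_campello(gains, power_budget, gamma=4.0, max_bits=10):
    """Return a per-subcarrier bit allocation and the power it uses."""
    bits = [0] * len(gains)

    # incremental power needed to move subcarrier n from b bits to b+1 bits
    def delta_p(n, b):
        return (2 ** (b + 1) - 2 ** b) * gamma / gains[n]

    heap = [(delta_p(n, 0), n) for n in range(len(gains))]
    heapq.heapify(heap)
    used = 0.0
    while heap:
        cost, n = heapq.heappop(heap)
        if used + cost > power_budget:
            break                          # cheapest next bit no longer fits
        used += cost
        bits[n] += 1
        if bits[n] < max_bits:
            heapq.heappush(heap, (delta_p(n, bits[n]), n))
    return bits, used

# Toy usage: 8 subcarriers with unequal gains
gains = [1.0, 0.8, 0.5, 0.3, 0.25, 0.2, 0.1, 0.05]
print(levin_campello(gains, power_budget=200.0))
```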

  9. Dynamic Bubble-Check Algorithm for Check Node Processing in Q-Ary LDPC Decoders

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Bai, Baoming; Ma, Xiao; Sun, Rong

    A simplified algorithm for check node processing of extended min-sum (EMS) q-ary LDPC decoders is presented in this letter. Compared with the bubble check algorithm, the so-called dynamic bubble-check (DBC) algorithm aims to further reduce the computational complexity of the elementary check node (ECN) processing. By introducing two flag vectors in ECN processing, the DBC algorithm can use the minimum number of comparisons at each step. Simulation results show that the DBC algorithm uses significantly fewer comparison operations than the bubble check algorithm and presents no performance loss compared with the standard EMS algorithm on AWGN channels.

  10. Nonlinear dynamics optimization with particle swarm and genetic algorithms for SPEAR3 emittance upgrade

    NASA Astrophysics Data System (ADS)

    Huang, Xiaobiao; Safranek, James

    2014-09-01

    Nonlinear dynamics optimization is carried out for a low-emittance upgrade lattice of SPEAR3 in order to improve its dynamic aperture and Touschek lifetime. Two multi-objective optimization algorithms, a genetic algorithm and a particle swarm algorithm, are used for this study, and the performance of the two algorithms is compared. The results show that the particle swarm algorithm converges significantly faster to similar or better solutions than the genetic algorithm and that it does not require seeding the initial population with good solutions. These advantages of the particle swarm algorithm may make it more suitable for many accelerator optimization applications.
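
    For context, the basic particle swarm update that such an optimizer relies on can be sketched as below. This is a generic textbook PSO on a toy objective, not the SPEAR3 lattice optimization or its multi-objective variant.

```python
# Minimal sketch: each particle is pulled toward its personal best and the global best.
import numpy as np

def pso(objective, dim, n_particles=30, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(objective, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy usage: minimize a shifted sphere function
print(pso(lambda p: float(np.sum((p - 1.2) ** 2)), dim=4))
```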

  11. Versatility of the CFR algorithm for limited angle reconstruction

    SciTech Connect

    Fujieda, I.; Heiskanen, K.; Perez-Mendez, V.

    1990-04-01

    The constrained Fourier reconstruction (CFR) algorithm and the iterative reconstruction-reprojection (IRR) algorithm are evaluated based on their accuracy for three types of limited angle reconstruction problems. The CFR algorithm performs better for problems such as X-ray CT imaging of a nuclear reactor core with one large data gap due to structural blocking of the source and detector pair. For gated heart imaging by X-ray CT and for radioisotope distribution imaging by PET or SPECT using a polygonal array of gamma cameras with insensitive gaps between camera boundaries, the IRR algorithm has a slight advantage over the CFR algorithm, but the difference is not significant.

  12. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  13. Developing dataflow algorithms

    SciTech Connect

    Hiromoto, R.E.; Bohm, A.P.W. (Dept. of Computer Science)

    1991-01-01

    Our goal is to study the performance of a collection of numerical algorithms written in Id which is available to users of Motorola's dataflow machine Monsoon. We will study the dataflow performance of these implementations first under the parallel profiling simulator Id World, and second in comparison with actual dataflow execution on the Motorola Monsoon. This approach will allow us to follow the computational and structural details of the parallel algorithms as implemented on dataflow systems. When running our programs on the Id World simulator we will examine the behaviour of algorithms at the dataflow graph level, where each instruction takes one timestep and data becomes available at the next. This implies that important machine-level phenomena, such as the effect that global communication time may have on the computation, are not addressed. These phenomena will be addressed when we run our programs on the Monsoon hardware. Potential ramifications for compilation techniques, functional programming style, and program efficiency are significant to this study. In a later stage of our research we will compare the efficiency of Id programs to programs written in other languages. This comparison will be of a rather qualitative nature as there are too many degrees of freedom in a language implementation for a quantitative comparison to be of interest. We begin our study by examining one routine that exhibits distinct computational characteristics: the Fast Fourier Transform, whose characteristics are computational parallelism and the data dependences between the butterfly shuffles.

  14. ICA analysis of fMRI with real-time constraints: an evaluation of fast detection performance as function of algorithms, parameters and a priori conditions

    PubMed Central

    Soldati, Nicola; Calhoun, Vince D.; Bruzzone, Lorenzo; Jovicich, Jorge

    2013-01-01

    Independent component analysis (ICA) techniques offer a data-driven possibility to analyze brain functional MRI data in real-time. Typical ICA methods used in functional magnetic resonance imaging (fMRI), however, have been until now mostly developed and optimized for the off-line case in which all data is available. Real-time experiments are ill-posed for ICA in that several constraints are added: limited data, limited analysis time and dynamic changes in the data and computational speed. Previous studies have shown that particular choices of ICA parameters can be used to monitor real-time fMRI (rt-fMRI) brain activation, but it is unknown how other choices would perform. In this rt-fMRI simulation study we investigate and compare the performance of 14 different publicly available ICA algorithms systematically sampling different growing window lengths (WLs), model order (MO) as well as a priori conditions (none, spatial or temporal). Performance is evaluated by computing the spatial and temporal correlation to a target component as well as computation time. Four algorithms are identified as best performing (constrained ICA, fastICA, amuse, and evd), with their corresponding parameter choices. Both spatial and temporal priors are found to provide equal or improved performances in similarity to the target compared with their off-line counterpart, with greatly reduced computation costs. This study suggests parameter choices that can be further investigated in a sliding-window approach for a rt-fMRI experiment. PMID:23378835
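
    A much-simplified version of this kind of growing-window evaluation is sketched below, assuming scikit-learn's FastICA as the ICA engine, synthetic data, and spatial correlation to a known target map as the performance measure; it is not the study's 14-algorithm comparison.

```python
# Minimal sketch: re-run ICA on a growing data window and track how well the
# best-matching spatial map correlates with a known target component.
import numpy as np
from sklearn.decomposition import FastICA

def growing_window_ica(data, target_map, model_order=5, min_len=30, step=10):
    """data: (time, voxels) array; target_map: (voxels,) reference component."""
    correlations = []
    for t in range(min_len, data.shape[0] + 1, step):
        ica = FastICA(n_components=model_order, max_iter=500, random_state=0)
        ica.fit(data[:t])                              # ICA on the current window
        maps = ica.components_                         # (model_order, voxels)
        # best absolute spatial correlation with the target component
        r = max(abs(np.corrcoef(m, target_map)[0, 1]) for m in maps)
        correlations.append((t, r))
    return correlations

# Toy usage with synthetic data
rng = np.random.default_rng(1)
target = rng.standard_normal(200)
data = rng.standard_normal((120, 200)) + np.outer(np.sin(np.arange(120) / 5), target)
for t, r in growing_window_ica(data, target):
    print(f"window length {t:3d}: best spatial correlation {r:.2f}")
```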

  15. Performance Evaluation of Automatic Anatomy Segmentation Algorithm on Repeat or Four-Dimensional Computed Tomography Images Using Deformable Image Registration Method

    SciTech Connect

    Wang He; Garden, Adam S.; Zhang Lifei; Wei Xiong; Ahamad, Anesa; Kuban, Deborah A.; Komaki, Ritsuko; O'Daniel, Jennifer; Zhang Yongbin; Mohan, Radhe; Dong Lei

    2008-09-01

    Purpose: Auto-propagation of anatomic regions of interest from the planning computed tomography (CT) scan to the daily CT is an essential step in image-guided adaptive radiotherapy. The goal of this study was to quantitatively evaluate the performance of the algorithm in typical clinical applications. Methods and Materials: We had previously adopted an image intensity-based deformable registration algorithm to find the correspondence between two images. In the present study, the regions of interest delineated on the planning CT image were mapped onto daily CT or four-dimensional CT images using the same transformation. Postprocessing methods, such as boundary smoothing and modification, were used to enhance the robustness of the algorithm. Auto-propagated contours for 8 head-and-neck cancer patients with a total of 100 repeat CT scans, 1 prostate patient with 24 repeat CT scans, and 9 lung cancer patients with a total of 90 four-dimensional CT images were evaluated against physician-drawn contours and physician-modified deformed contours using the volume overlap index and mean absolute surface-to-surface distance. Results: The deformed contours were reasonably well matched with the daily anatomy on the repeat CT images. The volume overlap index and mean absolute surface-to-surface distance was 83% and 1.3 mm, respectively, compared with the independently drawn contours. Better agreement (>97% and <0.4 mm) was achieved if the physician was only asked to correct the deformed contours. The algorithm was also robust in the presence of random noise in the image. Conclusion: The deformable algorithm might be an effective method to propagate the planning regions of interest to subsequent CT images of changed anatomy, although a final review by physicians is highly recommended.

  16. CoastColour Round Robin datasets: a database to evaluate the performance of algorithms for the retrieval of water quality parameters in coastal waters

    NASA Astrophysics Data System (ADS)

    Nechad, B.; Ruddick, K.; Schroeder, T.; Oubelkheir, K.; Blondeau-Patissier, D.; Cherukuru, N.; Brando, V.; Dekker, A.; Clementson, L.; Banks, A. C.; Maritorena, S.; Werdell, J.; Sá, C.; Brotas, V.; Caballero de Frutos, I.; Ahn, Y.-H.; Salama, S.; Tilstone, G.; Martinez-Vicente, V.; Foley, D.; McKibben, M.; Nahorniak, J.; Peterson, T.; Siliò-Calzada, A.; Röttgers, R.; Lee, Z.; Peters, M.; Brockmann, C.

    2015-02-01

    The use of in situ measurements is essential in the validation and evaluation of the algorithms that provide coastal water quality data products from ocean colour satellite remote sensing. Over the past decade, various types of ocean colour algorithms have been developed to deal with the optical complexity of coastal waters. Yet there is a lack of a comprehensive inter-comparison due to the limited availability of quality-checked in situ databases. The CoastColour Round Robin (CCRR) project, funded by the European Space Agency (ESA), was designed to bring together a variety of reference datasets and to use these to test algorithms and assess their accuracy for retrieving water quality parameters. This information was then used to help end-users of remote sensing products select the most accurate algorithms for their coastal region. To facilitate this, an inter-comparison of the performance of algorithms for the retrieval of in-water properties over coastal waters was carried out. The comparison used three types of datasets on which ocean colour algorithms were tested. The description and comparison of the three datasets are the focus of this paper, and include the Medium Resolution Imaging Spectrometer (MERIS) Level 2 match-ups, in situ reflectance measurements and data generated by a radiative transfer model (HydroLight). These datasets are available from doi.pangaea.de/10.1594/PANGAEA.841950. The datasets mainly consisted of 6484 marine reflectances associated with various geometrical (sensor viewing and solar angles) and sky conditions and water constituents: Total Suspended Matter (TSM) and Chlorophyll a (CHL) concentrations, and the absorption of Coloured Dissolved Organic Matter (CDOM). Inherent optical properties were also provided in the simulated datasets (5000 simulations) and from 3054 match-up locations. The distributions of reflectance at selected MERIS bands and band ratios, CHL

  17. Significant Treasures.

    ERIC Educational Resources Information Center

    Andrews, Ian A.

    1999-01-01

    Provides a crossword puzzle with an answer key corresponding to the book entitled "Significant Treasures/Tresors Parlants" that is filled with color and black-and-white prints of paintings and artifacts from 131 museums and art galleries as a sampling of the 2,200 such Canadian institutions. (CMK)

  18. Performance comparison of the Prophecy (forecasting) Algorithm in FFT form for unseen feature and time-series prediction

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger; Handley, James

    2013-06-01

    We introduce a generalized numerical prediction and forecasting algorithm. We have previously published it for malware byte sequence feature prediction and generalized distribution modeling for disparate test article analysis. We show how non-trivial non-periodic extrapolation of a numerical sequence (forecast and backcast) from the starting data is possible. Our ancestor-progeny prediction can yield new options for evolutionary programming. Our equations enable analytical integrals and derivatives to any order. Interpolation is controllable from smooth continuous to fractal structure estimation. We show how our generalized trigonometric polynomial can be derived using a Fourier transform.

  19. An Iterative Soft-Decision Decoding Algorithm

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Koumoto, Takuya; Takata, Toyoo; Kasami, Tadao

    1996-01-01

    This paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. Simulation results for the RM(64,22), EBCH(64,24), RM(64,42) and EBCH(64,45) codes show that the proposed decoding algorithm achieves practically (or near) optimal error performance with significant reduction in decoding computational complexity. The average number of search iterations is also small even for low signal-to-noise ratio.

  20. Diagnostic Performance of Transluminal Attenuation Gradient and Noninvasive Fractional Flow Reserve Derived from 320-Detector Row CT Angiography to Diagnose Hemodynamically Significant Coronary Stenosis: An NXT Substudy.

    PubMed

    Ko, Brian S; Wong, Dennis T L; Nørgaard, Bjarne L; Leong, Darryl P; Cameron, James D; Gaur, Sara; Marwan, Mohamed; Achenbach, Stephan; Kuribayashi, Sachio; Kimura, Takeshi; Meredith, Ian T; Seneviratne, Sujith K

    2016-04-01

    Purpose To compare the diagnostic performance of 320-detector row computed tomography (CT) coronary angiography-derived computed fractional flow reserve (FFR; FFRCT), transluminal attenuation gradient (TAG; TAG320), and CT coronary angiography alone to diagnose hemodynamically significant stenosis as determined by invasive FFR. Materials and Methods This substudy of the prospective NXT study (no. NCT01757678) was approved by each participating institution's review board, and informed consent was obtained from all participants. Fifty-one consecutive patients who underwent 320-detector row CT coronary angiographic examination and invasive coronary angiography with FFR measurement were included. Independent core laboratories determined coronary artery disease severity by using CT coronary angiography, TAG320, FFRCT, and FFR. TAG320 is defined as the linear regression coefficient between luminal attenuation and axial distance from the coronary ostium. FFRCT was computed from CT coronary angiography data by using computational fluid dynamics technology. Diagnostic performance was evaluated and compared on a per-vessel basis by the area under the receiver operating characteristic (ROC) curve (AUC). Results Among 82 vessels, 24 lesions (29%) had ischemia by FFR (FFR ≤ 0.80). FFRCT exhibited a stronger correlation with invasive FFR compared with TAG320 (Spearman ρ, 0.78 vs 0.47, respectively). Overall per-vessel accuracy, sensitivity, specificity, and positive and negative predictive values for TAG320 (<15.37) were 78%, 58%, 86%, 64%, and 83%, respectively; and those of FFRCT were 83%, 92%, 79%, 65%, and 96%, respectively. ROC curve analysis showed a significantly larger AUC for FFRCT (0.93) compared with that for TAG320 (0.72; P = .003) and CT coronary angiography alone (0.68; P = .008). Conclusion FFRCT computed from 320-detector row CT coronary angiography provides better diagnostic performance for the diagnosis of hemodynamically significant coronary stenoses
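
    The TAG definition quoted above (a regression slope of luminal attenuation against distance from the ostium) can be sketched as follows; the sampling interval and the HU-per-10-mm reporting convention are assumptions for illustration, not the core laboratory's implementation.

```python
# Minimal sketch: TAG as the linear-regression slope of attenuation (HU) versus
# axial distance (mm) from the coronary ostium.
import numpy as np

def transluminal_attenuation_gradient(distance_mm, attenuation_hu):
    """Return the gradient expressed per 10 mm of vessel length."""
    slope, _intercept = np.polyfit(distance_mm, attenuation_hu, deg=1)
    return slope * 10.0

# Toy usage: attenuation sampled every 5 mm along a vessel
d = np.arange(0, 60, 5)
hu = 480 - 2.1 * d + np.random.default_rng(0).normal(0, 8, d.size)
print(round(transluminal_attenuation_gradient(d, hu), 1), "HU / 10 mm")
```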

  1. Preliminary results of real-time PPP-RTK positioning algorithm development for moving platforms and its performance validation

    NASA Astrophysics Data System (ADS)

    Won, Jihye; Park, Kwan-Dong

    2015-04-01

    Real-time PPP-RTK positioning algorithms were developed for the purpose of obtaining precise coordinates of moving platforms. In this implementation, corrections for the satellite orbit and satellite clock were taken from the IGS-RTS products, while the ionospheric delay was removed through the ionosphere-free combination and the tropospheric delay was either handled using the Global Pressure and Temperature (GPT) model or estimated as a stochastic parameter. To improve the convergence speed, all the available GPS and GLONASS measurements were used and the Extended Kalman Filter parameters were optimized. To validate our algorithms, we collected GPS and GLONASS data from a geodetic-quality receiver installed on the roof of a moving vehicle in an open-sky environment and used IGS final products of satellite orbits and clock offsets. The horizontal positioning error dropped below 10 cm within 5 minutes, and the error stayed below 10 cm even after the vehicle started moving. When the IGS-RTS products and the GPT model were used instead of the IGS precise products, the positioning accuracy of the moving vehicle was maintained at better than 20 cm once convergence was achieved at around 6 minutes.
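
    The ionosphere-free combination mentioned above can be sketched as below for dual-frequency GPS observations; the pseudorange values in the example are made up for illustration.

```python
# Minimal sketch: the dual-frequency ionosphere-free combination that removes
# the first-order ionospheric delay from pseudorange (or carrier-phase) data.
F1 = 1575.42e6   # GPS L1 frequency (Hz)
F2 = 1227.60e6   # GPS L2 frequency (Hz)

def ionosphere_free(obs_l1, obs_l2, f1=F1, f2=F2):
    """P_IF = (f1^2 * P1 - f2^2 * P2) / (f1^2 - f2^2)."""
    return (f1 ** 2 * obs_l1 - f2 ** 2 * obs_l2) / (f1 ** 2 - f2 ** 2)

# Toy usage: two pseudoranges (m) differing by a frequency-dependent iono delay
p1, p2 = 22_000_003.20, 22_000_005.27
print(f"{ionosphere_free(p1, p2):.2f} m")
```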

  2. Annealed Importance Sampling Reversible Jump MCMC algorithms

    SciTech Connect

    Karagiannis, Georgios; Andrieu, Christophe

    2013-03-20

    It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms were proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise to be able to routinely tackle transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is in the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see, the algorithm can be understood as being an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.

  3. A new frame-based registration algorithm.

    PubMed

    Yan, C H; Whalen, R T; Beaupre, G S; Sumanaweera, T S; Yen, S Y; Napel, S

    1998-01-01

    This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p ≤ 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required. PMID:9472834

  4. A new frame-based registration algorithm

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Sumanaweera, T. S.; Yen, S. Y.; Napel, S.

    1998-01-01

    This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p ≤ 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required.

  5. Iterative phase retrieval algorithms. I: optimization.

    PubMed

    Guo, Changliang; Liu, Shi; Sheridan, John T

    2015-05-20

    Two modified Gerchberg-Saxton (GS) iterative phase retrieval algorithms are proposed. The first we refer to as the spatial phase perturbation GS algorithm (SPP GSA). The second is a combined GS hybrid input-output algorithm (GS/HIOA). In this paper (Part I), it is demonstrated that the SPP GS and GS/HIO algorithms are both much better at avoiding stagnation during phase retrieval, allowing them to successfully locate superior solutions compared with either the GS or the HIO algorithms. The performances of the SPP GS and GS/HIO algorithms are also compared. Then, the error reduction (ER) algorithm is combined with the HIO algorithm (ER/HIOA) to retrieve the input object image and the phase, given only some knowledge of its extent and the amplitude in the Fourier domain. In Part II, the algorithms developed here are applied to carry out known plaintext and ciphertext attacks on amplitude encoding and phase encoding double random phase encryption systems. Significantly, ER/HIOA is then used to carry out a ciphertext-only attack on AE DRPE systems. PMID:26192504
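
    For reference, the basic Gerchberg-Saxton loop that the modified algorithms build on can be sketched as follows; this is the plain GS iteration on synthetic data, not the SPP GS or GS/HIO variants proposed in the paper.

```python
# Minimal sketch: alternately enforce the known Fourier-domain magnitude and the
# known object-domain magnitude to retrieve the object phase.
import numpy as np

def gerchberg_saxton(object_mag, fourier_mag, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    field = object_mag * np.exp(1j * 2 * np.pi * rng.random(object_mag.shape))
    for _ in range(n_iter):
        F = np.fft.fft2(field)
        F = fourier_mag * np.exp(1j * np.angle(F))         # impose Fourier magnitude
        field = np.fft.ifft2(F)
        field = object_mag * np.exp(1j * np.angle(field))  # impose object magnitude
    return np.angle(field)                                 # retrieved object phase

# Toy usage: recover the phase of a synthetic complex object
rng = np.random.default_rng(1)
amp = np.ones((64, 64))
true_phase = rng.uniform(-np.pi, np.pi, (64, 64))
fmag = np.abs(np.fft.fft2(amp * np.exp(1j * true_phase)))
est = gerchberg_saxton(amp, fmag)
print(est.shape)
```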

  6. Detecting the 11 March 2011 Tohoku tsunami arrival on sea-level records in the Pacific Ocean: application and performance of the Tsunami Early Detection Algorithm (TEDA)

    NASA Astrophysics Data System (ADS)

    Bressan, L.; Tinti, S.

    2012-05-01

    Real-time detection of a tsunami on instrumental sea-level records is quite an important task for a Tsunami Warning System (TWS), and in case of alert conditions for an ongoing tsunami it is often performed by visual inspection in operational warning centres. In this paper we stress the importance of automatic detection algorithms and apply the TEDA (Tsunami Early Detection Algorithm) to identify tsunami arrivals of the 2011 Tohoku tsunami in a real-time virtual exercise. TEDA is designed to work at the station level, that is, on sea-level data of a single station, and was calibrated on data from the Adak Island, Alaska, USA, tide-gauge station. Using the parameter configuration devised for the Adak station, the TEDA has been applied to 123 coastal sea-level records from the coasts of the Pacific Ocean, which enabled us to evaluate the efficiency and sensitivity of the algorithm over a wide range of background conditions and signal-to-noise ratios. The result is that TEDA is able to quickly detect the majority of the tsunami signals and therefore proves to have the potential for being a valid tool in operational TWS practice.

  7. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  8. Influence of injury risk thresholds on the performance of an algorithm to predict crashes with serious injuries.

    PubMed

    Bahouth, George; Digges, Kennerly; Schulman, Carl

    2012-01-01

    This paper presents methods to estimate crash injury risk based on crash characteristics captured by some passenger vehicles equipped with Advanced Automatic Crash Notification technology. The resulting injury risk estimates could be used within an algorithm to optimize rescue care. Regression analysis was applied to the National Automotive Sampling System / Crashworthiness Data System (NASS/CDS) to determine how variations in a specific injury risk threshold would influence the accuracy of predicting crashes with serious injuries. The recommended thresholds for classifying crashes with severe injuries are 0.10 for frontal crashes and 0.05 for side crashes. The regression analysis of NASS/CDS indicates that these thresholds will provide sensitivity above 0.67 while maintaining a positive predictive value in the range of 0.20. PMID:23169132
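
    Applying such thresholds is straightforward; the sketch below uses the recommended frontal and side thresholds on hypothetical crash records with precomputed injury-risk probabilities (the underlying risk regression model is not reproduced here).

```python
# Minimal sketch: flag crashes for enhanced rescue response when the estimated
# injury risk exceeds a crash-mode-specific threshold.
THRESHOLDS = {"frontal": 0.10, "side": 0.05}   # thresholds recommended in the study

def flag_for_rescue(crash):
    """crash: dict with 'mode' and a precomputed 'injury_risk' probability."""
    return crash["injury_risk"] >= THRESHOLDS[crash["mode"]]

# Toy usage with hypothetical crash records
crashes = [
    {"id": 1, "mode": "frontal", "injury_risk": 0.14},
    {"id": 2, "mode": "side", "injury_risk": 0.03},
    {"id": 3, "mode": "side", "injury_risk": 0.07},
]
for c in crashes:
    print(c["id"], "flag" if flag_for_rescue(c) else "no flag")
```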

  9. Influence of Injury Risk Thresholds on the Performance of an Algorithm to Predict Crashes with Serious Injuries

    PubMed Central

    Bahouth, George; Digges, Kennerly; Schulman, Carl

    2012-01-01

    This paper presents methods to estimate crash injury risk based on crash characteristics captured by some passenger vehicles equipped with Advanced Automatic Crash Notification technology. The resulting injury risk estimates could be used within an algorithm to optimize rescue care. Regression analysis was applied to the National Automotive Sampling System / Crashworthiness Data System (NASS/CDS) to determine how variations in a specific injury risk threshold would influence the accuracy of predicting crashes with serious injuries. The recommended thresholds for classifying crashes with severe injuries are 0.10 for frontal crashes and 0.05 for side crashes. The regression analysis of NASS/CDS indicates that these thresholds will provide sensitivity above 0.67 while maintaining a positive predictive value in the range of 0.20. PMID:23169132

  10. Full Monte Carlo and measurement-based overall performance assessment of improved clinical implementation of eMC algorithm with emphasis on lower energy range.

    PubMed

    Ojala, Jarkko; Kapanen, Mika; Hyödynmaa, Simo

    2016-06-01

    The new version 13.6.23 of the electron Monte Carlo (eMC) algorithm in the Varian Eclipse™ treatment planning system has a model for the 4 MeV electron beam and some general improvements for dose calculation. This study provides the first overall accuracy assessment of this algorithm against full Monte Carlo (MC) simulations for electron beams from 4 MeV to 16 MeV, with most emphasis on the lower energy range. Beams in a homogeneous water phantom and clinical treatment plans were investigated, including measurements in the water phantom. Two different material sets were used with full MC: (1) the one applied in the eMC algorithm and (2) the one included in Eclipse™ for other algorithms. The results of clinical treatment plans were also compared to those of the older eMC version 11.0.31. In the water phantom the dose differences against the full MC were mostly less than 3% with distance-to-agreement (DTA) values within 2 mm. Larger discrepancies were obtained in build-up regions, at depths near the maximum electron ranges, and with small apertures. For the clinical treatment plans the overall dose differences were mostly within 3% or 2 mm with the first material set. Larger differences were observed for a large 4 MeV beam entering a curved patient surface with an extended SSD and also in regions of large dose gradients. Still, the DTA values were within 3 mm. The discrepancies between the eMC and the full MC were generally larger for the second material set. Version 11.0.31 always performed inferiorly compared to version 13.6.23. PMID:27189311

  11. Advanced Algorithms and High-Performance Testbed for Large-Scale Site Characterization and Subsurface Target Detecting Using Airborne Ground Penetrating SAR

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Collier, James B.; Citak, Ari

    1997-01-01

    A team of the US Army Corps of Engineers, Omaha District and Engineering and Support Center, Huntsville, Jet Propulsion Laboratory (JPL), Stanford Research Institute (SRI), and Montgomery Watson is currently in the process of planning and conducting the largest ever survey at the Former Buckley Field (60,000 acres), in Colorado, by using SRI airborne, ground-penetrating, Synthetic Aperture Radar (SAR). The purpose of this survey is the detection of surface and subsurface Unexploded Ordnance (UXO) and, in a broader sense, the site characterization for identification of contaminated as well as clear areas. In preparation for such a large-scale survey, JPL has been developing advanced algorithms and a high-performance testbed for processing of the massive amount of expected SAR data from this site. Two key requirements of this project are the accuracy (in terms of UXO detection) and speed of SAR data processing. The first key feature of this testbed is a large degree of automation and a minimum degree of the need for human perception in the processing to achieve an acceptable processing rate of several hundred acres per day. For accurate UXO detection, novel algorithms have been developed and implemented. These algorithms analyze dual-polarized (HH and VV) SAR data. They are based on the correlation of HH and VV SAR data and involve a rather large set of parameters for accurate detection of UXO. For each specific site, this set of parameters can be optimized by using ground truth data (i.e., known surface and subsurface UXOs). In this paper, we discuss these algorithms and their successful application for the detection of surface and subsurface anti-tank mines by using a data set from Yuma Proving Ground, AZ, acquired by the SRI SAR.

  12. Advanced algorithms and high-performance testbed for large-scale site characterization and subsurface target detection using airborne ground-penetrating SAR

    NASA Astrophysics Data System (ADS)

    Fijany, Amir; Collier, James B.; Citak, Ari

    1999-08-01

    A team of the US Army Corps of Engineers, Omaha District and Engineering and Support Center, Huntsville, JPL, Stanford Research Institute (SRI), and Montgomery Watson is currently in the process of planning and conducting the largest ever survey at the Former Buckley Field, in Colorado, by using SRI airborne, ground-penetrating SAR. The purpose of this survey is the detection of surface and subsurface Unexploded Ordnance (UXO) and, in a broader sense, the site characterization for identification of contaminated as well as clear areas. In preparation for such a large-scale survey, JPL has been developing advanced algorithms and a high-performance testbed for processing of the massive amount of expected SAR data from this site. Two key requirements of this project are the accuracy and speed of SAR data processing. The first key feature of this testbed is a large degree of automation and a minimum degree of the need for human perception in the processing to achieve an acceptable processing rate of several hundred acres per day. For accurate UXO detection, novel algorithms have been developed and implemented. These algorithms analyze dual-polarized SAR data. They are based on the correlation of HH and VV SAR data and involve a rather large set of parameters for accurate detection of UXO. For each specific site, this set of parameters can be optimized by using ground truth data. In this paper, we discuss these algorithms and their successful application for the detection of surface and subsurface anti-tank mines by using a data set from Yuma Proving Ground, AZ, acquired by the SRI SAR.

  13. Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Brown, David A.

    New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied and a methodology is presented for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which also improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated

  14. SU-E-J-142: Performance Study of Automatic Image-Segmentation Algorithms in Motion Tracking Via MR-IGRT

    SciTech Connect

    Feng, Y; Olsen, J.; Parikh, P.; Noel, C; Wooten, H; Du, D; Mutic, S; Hu, Y; Kawrakow, I; Dempsey, J

    2014-06-01

    Purpose: Evaluate commonly used segmentation algorithms on a commercially available real-time MR image guided radiotherapy (MR-IGRT) system (ViewRay), compare the strengths and weaknesses of each method, with the purpose of improving motion tracking for more accurate radiotherapy. Methods: MR motion images of the bladder, kidney, duodenum, and a liver tumor were acquired for three patients using a commercial on-board MR imaging system and an imaging protocol used during MR-IGRT. A series of 40 frames was selected for each case to cover at least 3 respiratory cycles. Thresholding, Canny edge detection, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE), along with the ViewRay treatment planning and delivery system (TPDS), were included in the comparisons. To evaluate the segmentation results, an expert manual contouring of the organs or tumor from a physician was used as ground truth. Metric values of sensitivity, specificity, Jaccard similarity, and Dice coefficient were computed for comparison. Results: In the segmentation of a single image frame, all methods successfully segmented the bladder and kidney, but only FKM, KHM and TPDS were able to segment the liver tumor and the duodenum. For segmenting motion image series, the TPDS method had the highest sensitivity, Jaccard, and Dice coefficients in segmenting the bladder and kidney, while FKM and KHM had a slightly higher specificity. A similar pattern was observed when segmenting the liver tumor and the duodenum. The Canny method is not suitable for consistently segmenting motion frames in an automated process, while thresholding and RD-LSE cannot consistently segment a liver tumor and the duodenum. Conclusion: The study compared six different segmentation methods and showed the effectiveness of the ViewRay TPDS algorithm in segmenting motion images during MR-IGRT. Future studies include a selection of conformal segmentation methods based on image/organ-specific information
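
    The overlap metrics used in this comparison can be sketched as below for a binary automatic segmentation against a manual ground-truth mask; the masks in the example are synthetic squares for illustration only.

```python
# Minimal sketch: sensitivity, specificity, Jaccard, and Dice for binary masks.
import numpy as np

def overlap_metrics(auto_mask, truth_mask):
    a, t = auto_mask.astype(bool), truth_mask.astype(bool)
    tp = np.sum(a & t)      # voxels segmented by both
    tn = np.sum(~a & ~t)    # voxels segmented by neither
    fp = np.sum(a & ~t)     # automatic-only voxels
    fn = np.sum(~a & t)     # ground-truth-only voxels
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "jaccard": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }

# Toy usage: two overlapping square "organs" on a 100x100 grid
truth = np.zeros((100, 100), bool); truth[20:60, 20:60] = True
auto = np.zeros((100, 100), bool);  auto[25:65, 25:65] = True
print({k: round(v, 3) for k, v in overlap_metrics(auto, truth).items()})
```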

  15. OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS & HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION

    SciTech Connect

    Alan Black; Arnis Judzis

    2004-10-01

    The industry cost shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark ''best in class'' diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit-fluid concepts, modify as necessary and commercialize products. As of report date, TerraTek has concluded all major preparations for the high pressure drilling campaign. Baker Hughes encountered difficulties in providing additional pumping capacity before TerraTek's scheduled relocation to another facility, thus the program was delayed further to accommodate the full testing program.

  16. Optimization of Deep Drilling Performance--Development and Benchmark Testing of Advanced Diamond Product Drill Bits & HP/HT Fluids to Significantly Improve Rates of Penetration

    SciTech Connect

    Alan Black; Arnis Judzis

    2003-10-01

    This document details the progress to date on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year starting October 2002 through September 2003. The industry cost shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark ''best in class'' diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit--fluid prototypes and test at large scale; and Phase 3--Field trial smart bit--fluid concepts, modify as necessary and commercialize products. Accomplishments to date include the following: 4Q 2002--Project started; Industry Team was assembled; Kick-off meeting was held at DOE Morgantown; 1Q 2003--Engineering meeting was held at Hughes Christensen, The Woodlands, Texas to prepare preliminary plans for development and testing and review equipment needs; Operators started sending information regarding their needs for deep drilling challenges and priorities for large-scale testing experimental matrix; Aramco joined the Industry Team as DEA 148 objectives paralleled the DOE project; 2Q 2003--Engineering and planning for high pressure drilling at TerraTek commenced; 3Q 2003--Continuation of engineering and design work for high pressure drilling at TerraTek; Baker Hughes INTEQ Drilling Fluids and Hughes Christensen commence planning for Phase 1 testing--recommendations for bits and fluids.

  17. PSO Algorithm Particle Filters for Improving the Performance of Lane Detection and Tracking Systems in Difficult Roads

    PubMed Central

    Cheng, Wen-Chang

    2012-01-01

    In this paper we propose a robust lane detection and tracking method by combining particle filters with the particle swarm optimization method. This method mainly uses the particle filters to detect and track the local optimum of the lane model in the input image and then seeks the global optimal solution of the lane model by a particle swarm optimization method. The particle filter can effectively complete lane detection and tracking in complicated or variable lane environments. However, the result obtained is usually a local optimal system status rather than the global optimal system status. Thus, the particle swarm optimization method is used to further search for the globally optimal system status among all system statuses. Since the particle swarm optimization method is a global optimization algorithm based on iterative computing, it can find the global optimal lane model by simulating the food-finding behaviour of fish schools or insect swarms through the mutual cooperation of all particles. In verification testing, the test environments included highways and ordinary roads as well as straight and curved lanes, uphill and downhill lanes, lane changes, etc. Our proposed method can complete the lane detection and tracking more accurately and effectively than existing options. PMID:23235453
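
    A bare-bones particle swarm optimization loop of the kind used above to refine a lane-model fit is sketched below. The objective function, bounds, and coefficients are placeholder assumptions; the authors' lane model and its coupling with the particle filter are not reproduced here.

```python
# Sketch: a minimal global-best PSO loop (placeholder objective, not the lane model).
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # particle positions
    v = np.zeros_like(x)                                 # particle velocities
    pbest = x.copy()                                     # personal bests
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best, best_val = pso(lambda p: np.sum(p**2), dim=4)      # e.g. minimize the sphere function
```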

  18. Significant performance enhancement of yttrium-doped barium cerate proton conductor as electrolyte for solid oxide fuel cells through a Pd ingress-egress approach

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Ran, Ran; Li, Sidian; Jiao, Yong; Tade, Moses O.; Shao, Zongping

    2014-07-01

    Proton-conducting perovskite oxides are excellent electrolyte materials for SOFCs that may improve power density at reduced temperatures and increase fuel efficiency, thus encouraging the widespread implementation of this attractive technology. The main challenges in the application of these oxides in SOFCs are difficult sintering and insufficient conductivity in real cells. In this study, we propose a novel method to significantly enhance the performance of a yttrium-doped barium cerate proton conductor as an electrolyte for SOFCs through a Pd ingress-egress approach to the development of BaCe0.8Y0.1Pd0.1O3-δ (BCYP10). The capability of the Pd egress from the BCYP10 perovskite lattice is demonstrated by H2-TPR, XRD, EDX mapping of STEM and XPS. Significant improvement in the sinterability is observed after the introduction of Pd due to the increased ionic conductivity and the sintering aid effect of egressed Pd. The formation of a B-site cation defect structure after Pd egress and the consequent modification of perovskite grain boundaries with Pd nanoparticles leads to a proton conductivity of BCYP10 that is approximately 3 times higher than that of BCY under a reducing atmosphere. A single cell with a thin film BCYP10 electrolyte reaches a peak power density as high as 645 mW cm-2 at 700 °C.

  19. Comparison between human and model observer performance in low-contrast detection tasks in CT images: application to images reconstructed with filtered back projection and iterative algorithms

    PubMed Central

    Calzado, A; Geleijns, J; Joemai, R M S; Veldkamp, W J H

    2014-01-01

    Objective: To compare low-contrast detectability (LCDet) performance between a model [non–pre-whitening matched filter with an eye filter (NPWE)] and human observers in CT images reconstructed with filtered back projection (FBP) and iterative [adaptive iterative dose reduction three-dimensional (AIDR 3D; Toshiba Medical Systems, Zoetermeer, Netherlands)] algorithms. Methods: Images of the Catphan® phantom (Phantom Laboratories, New York, NY) were acquired with Aquilion ONE™ 320-detector row CT (Toshiba Medical Systems, Tokyo, Japan) at five tube current levels (20–500 mA range) and reconstructed with FBP and AIDR 3D. Samples containing either low-contrast objects (diameters, 2–15 mm) or background were extracted and analysed by the NPWE model and four human observers in a two-alternative forced choice detection task study. Proportion correct (PC) values were obtained for each analysed object and used to compare human and model observer performances. An efficiency factor (η) was calculated to normalize NPWE to human results. Results: Human and NPWE model PC values (normalized by the efficiency, η = 0.44) were highly correlated for the whole dose range. The Pearson's product-moment correlation coefficients (95% confidence interval) between human and NPWE were 0.984 (0.972–0.991) for AIDR 3D and 0.984 (0.971–0.991) for FBP, respectively. Bland–Altman plots based on PC results showed excellent agreement between human and NPWE [mean absolute difference 0.5 ± 0.4%; range of differences (−4.7%, 5.6%)]. Conclusion: The NPWE model observer can predict human performance in LCDet tasks in phantom CT images reconstructed with FBP and AIDR 3D algorithms at different dose levels. Advances in knowledge: Quantitative assessment of LCDet in CT can accurately be performed using software based on a model observer. PMID:24837275

  20. Ensemble algorithms in reinforcement learning.

    PubMed

    Wiering, Marco A; van Hasselt, Hado

    2008-08-01

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms. PMID:18632380
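
    Two of the ensemble rules named in this record, majority voting and Boltzmann multiplication, can be expressed compactly once each constituent algorithm exposes a state-action value table. The sketch below uses stand-in Q-tables; the paper's agents learn their value functions online, which is not shown here.

```python
# Sketch: combining several RL algorithms' preferences with majority voting (MV)
# and Boltzmann multiplication (BM). The Q-tables are stand-ins for learned values.
import numpy as np

def majority_voting(q_tables, state):
    n_actions = q_tables[0].shape[1]
    votes = np.zeros(n_actions)
    for q in q_tables:
        votes[np.argmax(q[state])] += 1          # each algorithm votes for its greedy action
    return int(np.argmax(votes))

def boltzmann_multiplication(q_tables, state, temperature=1.0, rng=None):
    rng = rng or np.random.default_rng()
    n_actions = q_tables[0].shape[1]
    combined = np.ones(n_actions)
    for q in q_tables:
        prefs = np.exp(q[state] / temperature)   # Boltzmann action probabilities
        combined *= prefs / prefs.sum()
    combined /= combined.sum()
    return int(rng.choice(n_actions, p=combined))
```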

  1. Fast chromatographic method for the determination of dyes in beverages by using high performance liquid chromatography--diode array detection data and second order algorithms.

    PubMed

    Culzoni, María J; Schenone, Agustina V; Llamas, Natalia E; Garrido, Mariano; Di Nezio, Maria S; Band, Beatriz S Fernández; Goicoechea, Héctor C

    2009-10-16

    A fast chromatographic methodology is presented for the analysis of three synthetic dyes in non-alcoholic beverages: amaranth (E123), sunset yellow FCF (E110) and tartrazine (E102). Seven soft drinks (purchased from a local supermarket) were homogenized, filtered and injected into the chromatographic system. Second order data were obtained by a rapid LC separation and DAD detection. A comparative study of the performance of two second order algorithms (MCR-ALS and U-PLS/RBL) applied to model the data is presented. Interestingly, the data present time shifts between different chromatograms that cannot be conveniently corrected when determining the above-mentioned dyes in beverage samples. This gives rise to a lack of trilinearity that cannot easily be removed by pre-processing and can hardly be modelled by the U-PLS/RBL algorithm. In contrast, MCR-ALS has been shown to be an excellent tool for modelling this kind of data, allowing acceptable figures of merit to be reached. Recovery values between 97% and 105% when analyzing artificial and real samples were indicative of the good performance of the method. In contrast with the complete separation, which consumes 10 mL of methanol and 3 mL of 0.08 mol L(-1) ammonium acetate, the proposed fast chromatography method requires only 0.46 mL of methanol and 1.54 mL of 0.08 mol L(-1) ammonium acetate. Consequently, the analysis time could be reduced to 14.2% of the time needed to perform the complete separation, saving both solvent and time and thereby reducing both the cost per analysis and the environmental impact. PMID:19748097

  2. New Effective Multithreaded Matching Algorithms

    SciTech Connect

    Manne, Fredrik; Halappanavar, Mahantesh

    2014-05-19

    Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of less quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
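
    For context, the sketch below shows the textbook greedy 1/2-approximation for weighted matching (sort edges by weight, take an edge whenever both endpoints are still free). The record's algorithm achieves the same approximation ratio with a different, more scalable local strategy, so this is only an illustration of the guarantee, not the authors' method.

```python
# Sketch: classical greedy 1/2-approximation for maximum-weight matching.
def greedy_half_approx_matching(edges):
    """edges: iterable of (weight, u, v) tuples. Returns a list of matched edges."""
    matched, used = [], set()
    for w, u, v in sorted(edges, reverse=True):    # consider heaviest edges first
        if u not in used and v not in used:
            matched.append((u, v, w))
            used.update((u, v))
    return matched

example = [(5.0, 'a', 'b'), (4.0, 'b', 'c'), (3.0, 'c', 'd'), (6.0, 'a', 'd')]
print(greedy_half_approx_matching(example))
```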

  3. The significant effect of the thickness of Ni film on the performance of the Ni/Au Ohmic contact to p-GaN

    SciTech Connect

    Li, X. J.; Zhao, D. G.; Jiang, D. S.; Liu, Z. S.; Chen, P.; Zhu, J. J.; Le, L. C.; Yang, J.; He, X. G.; Zhang, S. M.; Zhang, B. S.; Liu, J. P.; Yang, H.

    2014-10-28

    The significant effect of the thickness of Ni film on the performance of the Ohmic contact of Ni/Au to p-GaN is studied. The Ni/Au metal films with thicknesses of 15/50 nm on p-GaN led to better electrical characteristics, showing a lower specific contact resistivity after annealing in the presence of oxygen. Both the formation of a NiO layer and the evolution of metal structure on the sample surface and at the interface with p-GaN were examined by transmission electron microscopy and energy-dispersive x-ray spectroscopy. The experimental results indicate that an overly thin Ni film cannot form enough NiO to lower the barrier height and achieve Ohmic contact to p-GaN, while an overly thick Ni film transforms into an excessively thick NiO layer on the sample surface, which also degrades the electrical conductivity of the sample.

  4. Implementation and analysis of a fast backprojection algorithm

    NASA Astrophysics Data System (ADS)

    Gorham, LeRoy A.; Majumder, Uttam K.; Buxa, Peter; Backues, Mark J.; Lindgren, Andrew C.

    2006-05-01

    The convolution backprojection algorithm is an accurate synthetic aperture radar imaging technique, but it has seen limited use in the radar community due to its high computational costs. Therefore, significant research has been conducted for a fast backprojection algorithm, which surrenders some image quality for increased computational efficiency. This paper describes an implementation of both a standard convolution backprojection algorithm and a fast backprojection algorithm optimized for use on a Linux cluster and a field-programmable gate array (FPGA) based processing system. The performance of the different implementations is compared using synthetic ideal point targets and the SPIE XPatch Backhoe dataset.

  5. Multi-objective optimization of combustion, performance and emission parameters in a jatropha biodiesel engine using Non-dominated sorting genetic algorithm-II

    NASA Astrophysics Data System (ADS)

    Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar

    2014-03-01

    The present work studies and identifies the different variables that affect the output parameters involved in a single cylinder direct injection compression ignition (CI) engine using jatropha biodiesel. Response surface methodology based on Central composite design (CCD) is used to design the experiments. Mathematical models are developed for combustion parameters (Brake specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), the performance parameter brake thermal efficiency (BTE) and emission parameters (CO, NOx, unburnt HC and smoke) using regression techniques. These regression equations are further utilized for simultaneous optimization of combustion (BSFC, Pmax), performance (BTE) and emission (CO, NOx, HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NOx, HC and smoke, a multiobjective optimization problem is formulated. The non-dominated sorting genetic algorithm-II is used to predict the Pareto-optimal set of solutions. Experiments are performed at suitable optimal solutions for predicting the combustion, performance and emission parameters to check the adequacy of the proposed model. The Pareto-optimal set of solutions can be used as a guideline for end users to select an optimal combination of engine output and emission parameters depending upon their own requirements.
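
    The core ranking step of NSGA-II referenced above is fast non-dominated sorting, sketched below for objective vectors that are all to be minimized (a maximized quantity such as BTE can simply be negated). The objective tuples are placeholders; the engine models themselves are not reproduced.

```python
# Sketch: fast non-dominated sorting, the Pareto-ranking step inside NSGA-II.
def fast_non_dominated_sort(objectives):
    """objectives: list of tuples (all minimized). Returns a list of fronts (lists of indices)."""
    n = len(objectives)
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    counts = [0] * n                        # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objectives[i], objectives[j]):
                dominated_by[i].append(j)
            elif dominates(objectives[j], objectives[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# Example: (BSFC, -BTE, NOx) triples for four candidate engine settings (made-up numbers).
print(fast_non_dominated_sort([(240, -31, 5.1), (255, -33, 4.2), (250, -30, 5.5), (238, -32, 4.0)]))
```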

  6. Feature-Based Quality Evaluation of 3d Point Clouds - Study of the Performance of 3d Registration Algorithms

    NASA Astrophysics Data System (ADS)

    Ridene, T.; Goulette, F.; Chendeb, S.

    2013-08-01

    The production of realistic 3D map databases is continuously growing. We studied an approach to producing 3D mapping databases based on the fusion of heterogeneous 3D data, for which a rigid registration process was performed. Before starting the modeling process, the quality of the registration results must be validated, which is one of the most difficult and open research problems. In this paper, we suggest a new method for evaluating 3D point clouds based on feature extraction and comparison with a 2D reference model. This method is based on two metrics: binary and fuzzy.

  7. An Artificial Immune Univariate Marginal Distribution Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Qingbin; Kang, Shuo; Gao, Junxiang; Wu, Song; Tian, Yanping

    Hybridization is an extremely effective way of improving the performance of the Univariate Marginal Distribution Algorithm (UMDA). Owing to its diversity and memory mechanisms, the artificial immune algorithm has been widely used to construct hybrid algorithms with other optimization algorithms. This paper proposes a hybrid algorithm which combines the UMDA with the principles of the general artificial immune algorithm. Experimental results on the order-3 deceptive function show that the proposed hybrid algorithm can obtain more building blocks (BBs) than the UMDA.
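
    A plain binary UMDA loop, the baseline that the hybrid above augments with artificial-immune diversity and memory mechanisms, is sketched below. The fitness function, population sizes, and probability clipping are illustrative assumptions; the immune-inspired components are not shown.

```python
# Sketch: plain binary UMDA (no immune hybridization) on a toy OneMax problem.
import numpy as np

def umda(fitness, n_bits, pop_size=100, n_select=50, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                         # univariate marginal probabilities
    best, best_fit = None, -np.inf
    for _ in range(iters):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)   # sample from the marginals
        fits = np.array([fitness(ind) for ind in pop])
        if fits.max() > best_fit:
            best_fit, best = fits.max(), pop[fits.argmax()].copy()
        elite = pop[np.argsort(fits)[-n_select:]]    # truncation selection
        p = elite.mean(axis=0).clip(0.05, 0.95)      # re-estimate marginals, keep some diversity
    return best, best_fit

print(umda(lambda x: x.sum(), n_bits=30))            # OneMax: maximize the number of ones
```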

  8. THE HARPS-TERRA PROJECT. I. DESCRIPTION OF THE ALGORITHMS, PERFORMANCE, AND NEW MEASUREMENTS ON A FEW REMARKABLE STARS OBSERVED BY HARPS

    SciTech Connect

    Anglada-Escude, Guillem; Butler, R. Paul

    2012-06-01

    Doppler spectroscopy has uncovered or confirmed all the known planets orbiting nearby stars. Two main techniques are used to obtain precision Doppler measurements at optical wavelengths. The first approach is the gas cell method, which consists of least-squares matching of the spectrum of iodine imprinted on the spectrum of the star. The second method relies on the construction of a stabilized spectrograph externally calibrated in wavelength. The most precise stabilized spectrometer in operation is the High Accuracy Radial velocity Planet Searcher (HARPS), operated by the European Southern Observatory in La Silla Observatory, Chile. The Doppler measurements obtained with HARPS are typically obtained using the cross-correlation function (CCF) technique. This technique consists of multiplying the stellar spectrum by a weighted binary mask and finding the minimum of the product as a function of the Doppler shift. It is known that CCF is suboptimal in exploiting the Doppler information in the stellar spectrum. Here we describe an algorithm to obtain precision radial velocity measurements using least-squares matching of each observed spectrum to a high signal-to-noise ratio template derived from the same observations. This algorithm is implemented in our software HARPS-TERRA (Template-Enhanced Radial velocity Re-analysis Application). New radial velocity measurements on a representative sample of stars observed by HARPS are used to illustrate the benefits of the proposed method. We show that, compared with CCF, template matching provides a significant improvement in accuracy, especially when applied to M dwarfs.
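
    The essence of template matching for radial velocities is a least-squares comparison of each observed spectrum with a Doppler-shifted template, as in the toy sketch below. Real codes such as HARPS-TERRA additionally handle per-pixel weights, the blaze function, continuum normalization, and telluric masking, none of which appears here; all names below are illustrative.

```python
# Sketch: grid-search least-squares template matching for a radial-velocity shift.
import numpy as np

C_LIGHT = 299_792_458.0  # speed of light, m/s

def rv_by_template_matching(wave, flux, template_wave, template_flux, rv_grid_ms):
    chi2 = []
    for rv in rv_grid_ms:
        shifted = template_wave * (1.0 + rv / C_LIGHT)       # Doppler-shift the template grid
        model = np.interp(wave, shifted, template_flux)      # resample onto observed wavelengths
        chi2.append(np.sum((flux - model) ** 2))             # unweighted least squares
    return rv_grid_ms[int(np.argmin(chi2))]
```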

  9. Sensitivity study of a large-scale air pollution model by using high-performance computations and Monte Carlo algorithms

    NASA Astrophysics Data System (ADS)

    Ostromsky, Tz.; Dimov, I.; Georgieva, R.; Marinov, P.; Zlatev, Z.

    2013-10-01

    In this paper we present some new results of our work on sensitivity analysis of a large-scale air pollution model, more specifically the Danish Eulerian Model (DEM). The main purpose of this study is to analyse the sensitivity of ozone concentrations with respect to the rates of some chemical reactions. The current sensitivity study considers the rates of six important chemical reactions and is done for the areas of several European cities with different geographical locations, climate, industrialization and population density. One of the most widely used variance-based techniques for sensitivity analysis, Sobol estimates and their modifications, has been used in this study. A vast number of numerical experiments with a version of the Danish Eulerian Model specially adapted for this purpose (SA-DEM) were carried out to compute global Sobol sensitivity measures. SA-DEM was implemented and run on two powerful cluster supercomputers: IBM Blue Gene/P, the most powerful parallel supercomputer in Bulgaria, and IBM MareNostrum III, the most powerful parallel supercomputer in Spain. The refined (480 × 480) mesh version of the model was used in the experiments on MareNostrum III, which is a challenging computational problem even on such a powerful machine. Some optimizations of the code with respect to the parallel efficiency and the memory use were performed. Tables with performance results of a number of numerical experiments on IBM BlueGene/P and on IBM MareNostrum III are presented and analysed.
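
    First-order Sobol indices of the type computed in this study can be estimated with the standard pick-freeze Monte Carlo scheme, sketched below on a toy model. The model function, sample sizes, and uniform inputs are placeholder assumptions; SA-DEM itself is a large parallel code and is not represented here.

```python
# Sketch: Monte Carlo estimate of first-order Sobol indices (Saltelli pick-freeze scheme).
import numpy as np

def first_order_sobol(model, n_inputs, n_samples=4096, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_inputs))
    B = rng.random((n_samples, n_inputs))
    fA, fB = model(A), model(B)
    total_var = np.var(np.concatenate([fA, fB]))
    S = np.empty(n_inputs)
    for i in range(n_inputs):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]                       # freeze input i at the B values
        S[i] = np.mean(fB * (model(AB_i) - fA)) / total_var
    return S

# Toy model where the third input carries most of the variance.
print(first_order_sobol(lambda X: X[:, 0] + 2 * X[:, 1] + 5 * X[:, 2], n_inputs=3))
```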

  10. Graphics Processing Unit (GPU) implementation of image processing algorithms to improve system performance of the Control, Acquisition, Processing, and Image Display System (CAPIDS) of the Micro-Angiographic Fluoroscope (MAF).

    PubMed

    Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S

    2012-02-23

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one does not depend upon the result from the operation on the other, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massive parallel processing implementation. Thus for an algorithm which has a high amount of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat field correction, temporal filtering, image subtraction, roadmap mask generation and display window and leveling. A comparison between the previous and the upgraded version of CAPIDS has been presented, to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements (with respect to timing or frame rate) have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure and automatic image windowing and leveling during each frame. PMID:24027619

  11. Graphics processing unit (GPU) implementation of image processing algorithms to improve system performance of the control acquisition, processing, and image display system (CAPIDS) of the micro-angiographic fluoroscope (MAF)

    NASA Astrophysics Data System (ADS)

    Swetadri Vasan, S. N.; Ionita, Ciprian N.; Titus, A. H.; Cartwright, A. N.; Bednarek, D. R.; Rudin, S.

    2012-03-01

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one does not depend upon the result from the operation on the other, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massive parallel processing implementation. Thus for an algorithm which has a high amount of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat field correction, temporal filtering, image subtraction, roadmap mask generation and display window and leveling. A comparison between the previous and the upgraded version of CAPIDS has been presented, to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements (with respect to timing or frame rate) have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure and automatic image windowing and leveling during each frame.

  12. Graphics Processing Unit (GPU) implementation of image processing algorithms to improve system performance of the Control, Acquisition, Processing, and Image Display System (CAPIDS) of the Micro-Angiographic Fluoroscope (MAF)

    PubMed Central

    Vasan, S.N. Swetadri; Ionita, Ciprian N.; Titus, A.H.; Cartwright, A.N.; Bednarek, D.R.; Rudin, S.

    2012-01-01

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one does not depend upon the result from the operation on the other, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massive parallel processing implementation. Thus for an algorithm which has a high amount of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat field correction, temporal filtering, image subtraction, roadmap mask generation and display window and leveling. A comparison between the previous and the upgraded version of CAPIDS has been presented, to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements (with respect to timing or frame rate) have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure and automatic image windowing and leveling during each frame. PMID:24027619
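
    The pixel-independent corrections listed in these records (flat-field correction, temporal filtering, image subtraction) reduce to elementwise array operations, which is what makes them map so naturally onto a GPU. The NumPy sketch below is only a conceptual illustration of those operations; it is not the CAPIDS code, and a GPU port could, for example, substitute a NumPy-compatible GPU array library for the same expressions.

```python
# Sketch: pixel-independent image corrections expressed as elementwise array operations.
import numpy as np

def flat_field_correct(frame, dark, flat, eps=1e-6):
    # Per-pixel gain/offset correction using dark and flat calibration frames.
    return (frame - dark) / np.maximum(flat - dark, eps)

def temporal_filter(prev_filtered, new_frame, alpha=0.25):
    # Recursive (exponential) temporal filter: one multiply-add per pixel.
    return alpha * new_frame + (1.0 - alpha) * prev_filtered

def subtract_mask(frame, mask_frame):
    # Digital subtraction (e.g., DSA): contrast frame minus mask frame.
    return frame - mask_frame
```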

  13. Significant Improvements in Cognitive Performance Post-Transcranial, Red/Near-Infrared Light-Emitting Diode Treatments in Chronic, Mild Traumatic Brain Injury: Open-Protocol Study

    PubMed Central

    Zafonte, Ross; Krengel, Maxine H.; Martin, Paula I.; Frazier, Judith; Hamblin, Michael R.; Knight, Jeffrey A.; Meehan, William P.; Baker, Errol H.

    2014-01-01

    Abstract This pilot, open-protocol study examined whether scalp application of red and near-infrared (NIR) light-emitting diodes (LED) could improve cognition in patients with chronic, mild traumatic brain injury (mTBI). Application of red/NIR light improves mitochondrial function (especially in hypoxic/compromised cells) promoting increased adenosine triphosphate (ATP) important for cellular metabolism. Nitric oxide is released locally, increasing regional cerebral blood flow. LED therapy is noninvasive, painless, and non-thermal (cleared by the United States Food and Drug Administration [FDA], an insignificant risk device). Eleven chronic, mTBI participants (26–62 years of age, 6 males) with nonpenetrating brain injury and persistent cognitive dysfunction were treated for 18 outpatient sessions (Monday, Wednesday, Friday, for 6 weeks), starting at 10 months to 8 years post- mTBI (motor vehicle accident [MVA] or sports-related; and one participant, improvised explosive device [IED] blast injury). Four had a history of multiple concussions. Each LED cluster head (5.35 cm diameter, 500 mW, 22.2 mW/cm2) was applied for 10 min to each of 11 scalp placements (13 J/cm2). LEDs were placed on the midline from front-to-back hairline; and bilaterally on frontal, parietal, and temporal areas. Neuropsychological testing was performed pre-LED, and at 1 week, and 1 and 2 months after the 18th treatment. A significant linear trend was observed for the effect of LED treatment over time for the Stroop test for Executive Function, Trial 3 inhibition (p=0.004); Stroop, Trial 4 inhibition switching (p=0.003); California Verbal Learning Test (CVLT)-II, Total Trials 1–5 (p=0.003); and CVLT-II, Long Delay Free Recall (p=0.006). Participants reported improved sleep, and fewer post-traumatic stress disorder (PTSD) symptoms, if present. Participants and family reported better ability to perform social, interpersonal, and occupational functions. These open-protocol data suggest

  14. Significant improvements in cognitive performance post-transcranial, red/near-infrared light-emitting diode treatments in chronic, mild traumatic brain injury: open-protocol study.

    PubMed

    Naeser, Margaret A; Zafonte, Ross; Krengel, Maxine H; Martin, Paula I; Frazier, Judith; Hamblin, Michael R; Knight, Jeffrey A; Meehan, William P; Baker, Errol H

    2014-06-01

    This pilot, open-protocol study examined whether scalp application of red and near-infrared (NIR) light-emitting diodes (LED) could improve cognition in patients with chronic, mild traumatic brain injury (mTBI). Application of red/NIR light improves mitochondrial function (especially in hypoxic/compromised cells) promoting increased adenosine triphosphate (ATP) important for cellular metabolism. Nitric oxide is released locally, increasing regional cerebral blood flow. LED therapy is noninvasive, painless, and non-thermal (cleared by the United States Food and Drug Administration [FDA], an insignificant risk device). Eleven chronic, mTBI participants (26-62 years of age, 6 males) with nonpenetrating brain injury and persistent cognitive dysfunction were treated for 18 outpatient sessions (Monday, Wednesday, Friday, for 6 weeks), starting at 10 months to 8 years post- mTBI (motor vehicle accident [MVA] or sports-related; and one participant, improvised explosive device [IED] blast injury). Four had a history of multiple concussions. Each LED cluster head (5.35 cm diameter, 500 mW, 22.2 mW/cm(2)) was applied for 10 min to each of 11 scalp placements (13 J/cm(2)). LEDs were placed on the midline from front-to-back hairline; and bilaterally on frontal, parietal, and temporal areas. Neuropsychological testing was performed pre-LED, and at 1 week, and 1 and 2 months after the 18th treatment. A significant linear trend was observed for the effect of LED treatment over time for the Stroop test for Executive Function, Trial 3 inhibition (p=0.004); Stroop, Trial 4 inhibition switching (p=0.003); California Verbal Learning Test (CVLT)-II, Total Trials 1-5 (p=0.003); and CVLT-II, Long Delay Free Recall (p=0.006). Participants reported improved sleep, and fewer post-traumatic stress disorder (PTSD) symptoms, if present. Participants and family reported better ability to perform social, interpersonal, and occupational functions. These open-protocol data suggest that placebo

  15. Percentage of Biopsy Cores Positive for Malignancy and Biochemical Failure Following Prostate Cancer Radiotherapy in 3,264 Men: Statistical Significance Without Predictive Performance

    SciTech Connect

    Williams, Scott G. Buyyounouski, Mark K.; Pickles, Tom; Kestin, Larry; Martinez, Alvaro; Hanlon, Alexandra L.; Duchesne, Gillian M.

    2008-03-15

    Purpose: To define and incorporate the impact of the percentage of positive biopsy cores (PPC) into a predictive model of prostate cancer radiotherapy biochemical outcome. Methods and Materials: The data of 3264 men with clinically localized prostate cancer treated with external beam radiotherapy at four institutions were retrospectively analyzed. Standard prognostic and treatment factors plus the number of biopsy cores collected and the number positive for malignancy by transrectal ultrasound-guided biopsy were available. The primary endpoint was biochemical failure (bF, Phoenix definition). Multivariate proportional hazards analyses were performed and expressed as a nomogram and the model's predictive ability assessed using the concordance index (c-index). Results: The cohort consisted of 21% low-, 51% intermediate-, and 28% high-risk cancer patients, and 30% had androgen deprivation with radiotherapy. The median PPC was 50% (interquartile range [IQR] 29-67%), and median follow-up was 51 months (IQR 29-71 months). Percentage of positive biopsy cores displayed an independent association with the risk of bF (p = 0.01), as did age, prostate-specific antigen value, Gleason score, clinical stage, androgen deprivation duration, and radiotherapy dose (p < 0.001 for all). Including PPC increased the c-index from 0.72 to 0.73 in the overall model. The influence of PPC varied significantly with radiotherapy dose and clinical stage (p = 0.02 for both interactions), with doses <66 Gy and palpable tumors showing the strongest relationship between PPC and bF. Intermediate-risk patients were poorly discriminated regardless of PPC inclusion (c-index 0.65 for both models). Conclusions: Outcome models incorporating PPC show only minor additional ability to predict biochemical failure beyond those containing standard prognostic factors.
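
    The concordance index reported above measures how often a higher predicted risk corresponds to earlier biochemical failure among comparable patient pairs. A simplified pairwise version is sketched below; production survival-analysis libraries handle ties, censoring subtleties, and weighting more carefully, and the variable names are placeholders.

```python
# Sketch: simplified concordance index (c-index) for a risk score with right-censored outcomes.
def concordance_index(times, events, risk_scores):
    """times: follow-up times; events: 1 if failure observed, 0 if censored;
    risk_scores: higher score should mean earlier failure."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:   # pair is comparable
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable
```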

  16. Quantifying dynamic sensitivity of optimization algorithm parameters to improve hydrological model calibration

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Zhang, Chi; Fu, Guangtao; Zhou, Huicheng

    2016-02-01

    It is widely recognized that optimization algorithm parameters have significant impacts on algorithm performance, but quantifying the influence is very complex and difficult due to the high computational demands and the dynamic nature of the search parameters. The overall aim of this paper is to develop a global sensitivity analysis based framework to dynamically quantify the individual and interactive influence of algorithm parameters on algorithm performance. A variance decomposition sensitivity analysis method, Analysis of Variance (ANOVA), is used for sensitivity quantification, because it is capable of handling small samples and is more computationally efficient than other approaches. The Shuffled Complex Evolution algorithm developed at the University of Arizona (SCE-UA) is selected as the optimization algorithm for investigation, and two criteria, i.e., convergence speed and success rate, are used to measure the performance of SCE-UA. Results show that the proposed framework can effectively reveal the dynamic sensitivity of algorithm parameters in the search processes, including individual influences of parameters and their interactive impacts. Interactions between algorithm parameters have significant impacts on SCE-UA performance, which has not been reported in previous research. The proposed framework provides a means to understand the dynamics of algorithm parameter influence, and highlights the significance of considering interactive parameter influence to improve algorithm performance in the search processes.

  17. Entry vehicle performance analysis and atmospheric guidance algorithm for precision landing on Mars. M.S. Thesis - Massachusetts Inst. of Technology

    NASA Technical Reports Server (NTRS)

    Dieriam, Todd A.

    1990-01-01

    Future missions to Mars may require pin-point landing precision, possibly on the order of tens of meters. The ability to reach a target while meeting a dynamic pressure constraint to ensure safe parachute deployment is complicated at Mars by low atmospheric density, high atmospheric uncertainty, and the desire to employ only bank angle control. The vehicle aerodynamic performance requirements and the guidance necessary for a vehicle with a lift-to-drag ratio of 0.5 to 1.5 to maximize the achievable footprint while meeting the constraints are examined. A parametric study of the various factors related to entry vehicle performance in the Mars environment is undertaken to develop general vehicle aerodynamic design requirements. The combination of low lift-to-drag ratio and low atmospheric density at Mars results in a large phugoid motion involving the dynamic pressure, which complicates trajectory control. Vehicle ballistic coefficient is demonstrated to be the predominant characteristic affecting final dynamic pressure. Additionally, a speed brake is shown to be ineffective at reducing the final dynamic pressure. An adaptive precision entry atmospheric guidance scheme is presented. The guidance uses a numeric predictor-corrector algorithm to control downrange, an azimuth controller to govern crossrange, and an analytic control law to reduce the final dynamic pressure. Guidance performance is tested against a variety of dispersions, and the results from selected tests are presented. Precision entry using bank angle control only is demonstrated to be feasible at Mars.

  18. The evaluation of the OSGLR algorithm for restructurable controls

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.

    1986-01-01

    The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques and the OSGLR algorithm in particular is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.

  19. Image change detection algorithms: a systematic survey.

    PubMed

    Radke, Richard J; Andra, Srinivas; Al-Kofahi, Omar; Roysam, Badrinath

    2005-03-01

    Detecting regions of change in multiple images of the same scene taken at different times is of widespread interest due to a large number of applications in diverse disciplines, including remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure, and underwater sensing. This paper presents a systematic survey of the common processing steps and core decision rules in modern change detection algorithms, including significance and hypothesis testing, predictive models, the shading model, and background modeling. We also discuss important preprocessing methods, approaches to enforcing the consistency of the change mask, and principles for evaluating and comparing the performance of change detection algorithms. It is hoped that our classification of algorithms into a relatively small number of categories will provide useful guidance to the algorithm designer. PMID:15762326

  20. A comprehensive review of swarm optimization algorithms.

    PubMed

    Ab Wahab, Mohd Nadhir; Nefti-Meziani, Samia; Atyabi, Adham

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performance differences. The results indicate an overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), over the other approaches considered. PMID:25992655

  1. A Comprehensive Review of Swarm Optimization Algorithms

    PubMed Central

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performance differences. The results indicate an overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), over the other approaches considered. PMID:25992655

  2. A Dynamic Navigation Algorithm Considering Network Disruptions

    NASA Astrophysics Data System (ADS)

    Jiang, J.; Wu, L.

    2014-04-01

    In a traffic network, link disruptions or recoveries caused by sudden accidents, bad weather, and traffic congestion lead to significant increases or decreases in travel times on some network links. A similar situation also occurs in real-time emergency evacuation planning in indoor areas. As the dynamic nature of real-time network information generates better navigation solutions than static information, a real-time dynamic navigation algorithm for emergency evacuation with stochastic disruptions or recoveries in the network is presented in this paper. Compared with traditional existing algorithms, this new algorithm adjusts the pre-existing path to a new optimal one according to the changing link travel times. With real-time network information, it can quickly provide the optimal path to adapt to rapidly changing network properties. Theoretical analysis and experimental results demonstrate that the proposed algorithm achieves high time efficiency in obtaining exact solutions and that indirect information can be calculated in spare time.

  3. A survey of DNA motif finding algorithms

    PubMed Central

    Das, Modan K; Dai, Ho-Kwok

    2007-01-01

    Background Unraveling the mechanisms that regulate gene expression is a major challenge in biology. An important task in this challenge is to identify regulatory elements, especially the binding sites in deoxyribonucleic acid (DNA) for transcription factors. These binding sites are short DNA segments that are called motifs. Recent advances in genome sequence availability and in high-throughput gene expression analysis technologies have allowed for the development of computational methods for motif finding. As a result, a large number of motif finding algorithms have been implemented and applied to various motif models over the past decade. This survey reviews the latest developments in DNA motif finding algorithms. Results Earlier algorithms use promoter sequences of coregulated genes from single genome and search for statistically overrepresented motifs. Recent algorithms are designed to use phylogenetic footprinting or orthologous sequences and also an integrated approach where promoter sequences of coregulated genes and phylogenetic footprinting are used. All the algorithms studied have been reported to correctly detect the motifs that have been previously detected by laboratory experimental approaches, and some algorithms were able to find novel motifs. However, most of these motif finding algorithms have been shown to work successfully in yeast and other lower organisms, but perform significantly worse in higher organisms. Conclusion Despite considerable efforts to date, DNA motif finding remains a complex challenge for biologists and computer scientists. Researchers have taken many different approaches in developing motif discovery tools and the progress made in this area of research is very encouraging. Performance comparison of different motif finding tools and identification of the best tools have proven to be a difficult task because tools are designed based on algorithms and motif models that are diverse and complex and our incomplete understanding of

  4. In vivo polymerization of poly(3,4-ethylenedioxythiophene) in the living rat hippocampus does not cause a significant loss of performance in a delayed alternation task

    NASA Astrophysics Data System (ADS)

    Ouyang, Liangqi; Shaw, Crystal L.; Kuo, Chin-chen; Griffin, Amy L.; Martin, David C.

    2014-04-01

    -polymerization time intervals, the polymerization did not cause significant deficits in performance of the DA task, suggesting that hippocampal function was not impaired by PEDOT deposition. However, GFAP+ and ED-1+ cells were also found at the deposition two weeks after the polymerization, suggesting potential secondary scarring. Therefore, less extensive deposition or milder deposition conditions may be desirable to minimize this scarring while maintaining decreased system impedance.

  5. Automatic-control-algorithm effects on energy production

    SciTech Connect

    McNerney, G.M.

    1981-01-01

    Algorithm control strategy for unattended wind turbine operation is a potentially important aspect of wind energy production that has thus far escaped treatment in the literature. Early experience in automatic operation of the Sandia 17-m VAWT has demonstrated the need for a systematic study of control algorithms. To this end, a computer model has been developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model has been used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long-term energy production. An attempt has been made to generalize these results from local site and turbine characteristics to obtain general guidelines for control algorithm design.

  6. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms are playing an important role in the research field of image segmentation. In this paper, we briefly introduce the theories of four existing image segmentation algorithms based on swarm intelligence, including the fish swarm algorithm, artificial bee colony, the bacteria foraging algorithm, and particle swarm optimization. Then several benchmark images are tested in order to show the differences among these four algorithms in segmentation accuracy, time consumption, convergence, and robustness to salt-and-pepper noise and Gaussian noise. Through these comparisons, this paper gives a qualitative analysis of the performance differences among the four algorithms. The conclusions provide a useful guide for practical image segmentation.

  7. Application of fast BLMS algorithm in acoustic echo cancellation

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Li, Nian Q.

    2013-03-01

    The acoustic echo path is usually very long, ranging from several hundred to a few thousand taps. Frequency-domain adaptive filtering provides a solution to acoustic echo cancellation by significantly reducing the computational burden. In this paper, the fast block least-mean-square (BLMS) algorithm is realized in the frequency domain by using the fast Fourier transform (FFT). The adaptation of the filter parameters is actually performed in the frequency domain. The proposed algorithm ensures fast convergence and reduces the computational complexity. Simulation results indicate that the algorithm demonstrates good performance for acoustic echo cancellation in communication systems.
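
    A basic constrained, power-normalized frequency-domain block LMS (overlap-save) filter is sketched below to make the structure concrete: the convolution and the weight update are both carried out with FFTs over blocks, which is where the computational savings come from. The signals, filter length, and step size are illustrative assumptions, not the paper's configuration.

```python
# Sketch: overlap-save frequency-domain block LMS (FDAF) with per-bin power normalization.
import numpy as np

def fdaf_block_lms(x, d, filter_len=64, mu=0.5, beta=0.9, eps=1e-6):
    N = filter_len
    W = np.zeros(2 * N, dtype=complex)       # frequency-domain filter weights
    P = np.full(2 * N, eps)                  # running per-bin power estimate
    x_old = np.zeros(N)
    errors = []
    for b in range(len(x) // N):
        x_new = x[b * N:(b + 1) * N]
        X = np.fft.fft(np.concatenate([x_old, x_new]))    # overlap-save input block
        y = np.real(np.fft.ifft(X * W))[N:]               # last N output samples are valid
        e = d[b * N:(b + 1) * N] - y                      # block error (echo-cancelled signal)
        E = np.fft.fft(np.concatenate([np.zeros(N), e]))
        P = beta * P + (1 - beta) * np.abs(X) ** 2
        phi = np.real(np.fft.ifft(np.conj(X) * E / (P + eps)))
        phi[N:] = 0.0                                     # gradient constraint
        W += mu * np.fft.fft(phi)
        x_old = x_new
        errors.append(e)
    return W, np.concatenate(errors)

# Synthetic test: far-end signal x, echo path h, microphone signal d.
rng = np.random.default_rng(0)
x = rng.standard_normal(8000)
h = rng.standard_normal(64) * np.exp(-np.arange(64) / 10.0)
d = np.convolve(x, h)[:len(x)]
W, e = fdaf_block_lms(x, d)
```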

  8. Genetic algorithms, path relinking, and the flowshop sequencing problem.

    PubMed

    Reeves, C R; Yamada, T

    1998-01-01

    In a previous paper, a simple genetic algorithm (GA) was developed for finding (approximately) the minimum makespan of the n-job, m-machine permutation flowshop sequencing problem (PFSP). The performance of the algorithm was comparable to that of a naive neighborhood search technique and a proven simulated annealing algorithm. However, recent results have demonstrated the superiority of a tabu search method in solving the PFSP. In this paper, we reconsider the implementation of a GA for this problem and show that by taking into account the features of the landscape generated by the operators used, we are able to improve its performance significantly. PMID:10021740
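
    The objective being minimized here, the makespan of a job permutation in the permutation flowshop problem, has a simple dynamic-programming evaluation, sketched below. The processing-time matrix is a made-up example; the GA and path-relinking components of the paper are not shown.

```python
# Sketch: makespan evaluation for a permutation flowshop schedule.
def makespan(permutation, proc_times):
    """proc_times[j][k]: processing time of job j on machine k."""
    m = len(proc_times[0])
    completion = [0.0] * m                  # completion time of the previous job on each machine
    for j in permutation:
        for k in range(m):
            ready = completion[k] if k == 0 else max(completion[k], completion[k - 1])
            completion[k] = ready + proc_times[j][k]
    return completion[-1]

times = [[3, 2], [2, 4], [4, 1]]            # 3 jobs, 2 machines (made-up data)
print(makespan([0, 1, 2], times), makespan([2, 0, 1], times))
```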

  9. TrackEye tracking algorithm characterization

    NASA Astrophysics Data System (ADS)

    Valley, Michael T.; Shields, Robert W.; Reed, Jack M.

    2004-10-01

    TrackEye is a film digitization and target tracking system that offers the potential for quantitatively measuring the dynamic state variables (e.g., absolute and relative position, orientation, linear and angular velocity/acceleration, spin rate, trajectory, angle of attack, etc.) for moving objects using captured single or dual view image sequences. At the heart of the system is a set of tracking algorithms that automatically find and quantify the location of user selected image details such as natural test article features or passive fiducials that have been applied to cooperative test articles. This image position data is converted into real world coordinates and rates with user specified information such as the image scale and frame rate. Though tracking methods such as correlation algorithms are typically robust by nature, the accuracy and suitability of each TrackEye tracking algorithm is in general unknown even under good imaging conditions. The challenges of optimal algorithm selection and algorithm performance/measurement uncertainty are even more significant for long range tracking of high-speed targets where temporally varying atmospheric effects degrade the imagery. This paper will present the preliminary results from a controlled test sequence used to characterize the performance of the TrackEye tracking algorithm suite.

  10. TrackEye tracking algorithm characterization.

    SciTech Connect

    Reed, Jack W.; Shields, Rob W; Valley, Michael T.

    2004-08-01

    TrackEye is a film digitization and target tracking system that offers the potential for quantitatively measuring the dynamic state variables (e.g., absolute and relative position, orientation, linear and angular velocity/acceleration, spin rate, trajectory, angle of attack, etc.) for moving objects using captured single or dual view image sequences. At the heart of the system is a set of tracking algorithms that automatically find and quantify the location of user selected image details such as natural test article features or passive fiducials that have been applied to cooperative test articles. This image position data is converted into real world coordinates and rates with user specified information such as the image scale and frame rate. Though tracking methods such as correlation algorithms are typically robust by nature, the accuracy and suitability of each TrackEye tracking algorithm is in general unknown even under good imaging conditions. The challenges of optimal algorithm selection and algorithm performance/measurement uncertainty are even more significant for long range tracking of high-speed targets where temporally varying atmospheric effects degrade the imagery. This paper will present the preliminary results from a controlled test sequence used to characterize the performance of the TrackEye tracking algorithm suite.

  11. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  12. Cyclic cooling algorithm

    SciTech Connect

    Rempp, Florian; Mahler, Guenter; Michel, Mathias

    2007-09-15

    We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, for an arbitrary number of times on the same set of qbits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qbit may repeatedly be cooled without adding additional qbits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.

  13. Significant performance improvement in terms of reduced cathode flooding in polymer electrolyte fuel cell using a stainless-steel microcoil gas flow field

    NASA Astrophysics Data System (ADS)

    Tanaka, Shiro; Shudo, Toshio

    2014-02-01

    Flooding at the cathode is the greatest barrier to increasing the power density of polymer electrolyte fuel cells (PEFCs) and using them at high current densities. Previous studies have shown that flooding is caused by water accumulation in the gas diffusion layer, but only a few researchers have succeeded in overcoming this issue. In the present study, microcoils are used as the gas flow channel as well as the gas diffuser directly on the microporous layer (MPL), without using a conventional carbon-fiber gas diffusion layer (GDL), to enable flood-free performance. The current-voltage curves show flooding-free performance even under low air stoichiometry. However, the high-frequency resistance (HFR) in this case is slightly higher than that in grooved flow channels and GDLs. This is due to the differences in the electron conduction path, and the in-plane electron conductivity in the MPL is the key to enhancing the microcoil fuel cell performance.

  14. An efficient algorithm for function optimization: modified stem cells algorithm

    NASA Astrophysics Data System (ADS)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm, and Artificial Bee Colony (ABC) algorithm can give near-optimal solutions to linear and non-linear problems for many applications; however, in some cases, they can become trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells, and it successfully avoids the local optima problem. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The results demonstrate the superiority of the Modified Stem Cells Algorithm (MSCA).

  15. Efficient implementation of the adaptive scale pixel decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Bhatnagar, S.; Rau, U.; Zhang, M.

    2016-08-01

    Context: Most popular algorithms in use to remove the effects of a telescope's point spread function (PSF) in radio astronomy are variants of the CLEAN algorithm. Most of these algorithms model the sky brightness using the delta-function basis, which results in undesired artefacts when used to image extended emission. The adaptive scale pixel decomposition (Asp-Clean) algorithm models the sky brightness on a scale-sensitive basis and thus gives significantly better imaging performance when imaging fields that contain both resolved and unresolved emission. Aims: However, the runtime cost of Asp-Clean is higher than that of scale-insensitive algorithms. In this paper, we identify the most expensive step in the original Asp-Clean algorithm and present an efficient implementation of it, which significantly reduces the computational cost while keeping the imaging performance comparable to the original algorithm. The PSF sidelobe levels of modern wide-band telescopes are significantly reduced, allowing us to make approximations that reduce the computational cost, which in turn allows for the deconvolution of larger images on reasonable timescales. Methods: As in the original algorithm, scales in the image are estimated through function fitting. Here we introduce an analytical method to model extended emission, and a modified method for estimating the initial values used for the fitting procedure, which ultimately leads to a lower computational cost. Results: The new implementation was tested with simulated EVLA data and the imaging performance compared well with the original Asp-Clean algorithm. Tests show that the current algorithm can recover features at different scales with lower computational cost.
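
    The abstract above describes estimating scales through function fitting. Below is a minimal, hypothetical sketch of that idea: fitting a 2D Gaussian component to an image patch with SciPy to recover an amplitude, position, and scale. The function names, initial values, and synthetic data are illustrative and are not taken from the Asp-Clean implementation.

```python
# Minimal sketch: estimating the scale of one image component by fitting a
# 2D Gaussian to a patch, in the spirit of the function-fitting step
# described above. Names and initial values are illustrative, not Asp-Clean.
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma):
    x, y = coords
    return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))

def fit_component(patch, init_sigma=3.0):
    """Fit amplitude, position and scale (sigma) of the brightest component."""
    ny, nx = patch.shape
    y, x = np.mgrid[0:ny, 0:nx]
    peak = np.unravel_index(np.argmax(patch), patch.shape)
    p0 = [patch[peak], peak[1], peak[0], init_sigma]      # initial guess
    popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), patch.ravel(), p0=p0)
    return dict(zip(["amp", "x0", "y0", "sigma"], popt))

# Example with a synthetic blob plus a little noise
patch = gauss2d(np.mgrid[0:64, 0:64][::-1], 1.0, 30.0, 34.0, 5.0)
patch += 0.01 * np.random.default_rng(0).standard_normal(patch.shape)
print(fit_component(patch))
```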

  16. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.

  17. An Improved Back Propagation Neural Network Algorithm on Classification Problems

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd; Ransing, R. S.; Salleh, Mohd Najib Mohd; Ghazali, Rozaida; Hamid, Norhamreeza Abdul

    The back propagation algorithm is one of the most popular algorithms for training feed-forward neural networks. However, its convergence is slow, mainly because it relies on gradient descent. Previous research demonstrated that in the feed-forward algorithm, the slope of the activation function is directly influenced by a parameter referred to as 'gain'. This research proposed an algorithm for improving the performance of the back propagation algorithm by introducing an adaptive gain of the activation function; the gain value changes adaptively for each node. The influence of the adaptive gain on the learning ability of a neural network is analysed, multilayer feed-forward neural networks are assessed, and a physical interpretation of the relationship between the gain value, the learning rate, and the weight values is given. The efficiency of the proposed algorithm is compared with the conventional gradient descent method and verified by simulation on four classification problems. The simulation results demonstrate that the proposed method converged faster on the Wisconsin breast cancer data set, with an improvement ratio of nearly 2.8, 1.76 on the diabetes problem, 65% better on the thyroid data set, and 97% faster on the IRIS classification problem. The results clearly show that the proposed algorithm significantly improves the learning speed of the conventional back-propagation algorithm.
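
    As a rough illustration of the adaptive-gain idea described above, the toy sketch below trains a single-hidden-layer network in which each node's sigmoid has a gain parameter updated by gradient descent alongside the weights and biases. The architecture, data, and hyperparameters are invented for illustration; this is not the authors' implementation.

```python
# Toy sketch of back-propagation with an adaptive per-node "gain" on the
# sigmoid activation. Single hidden layer, random data; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))                     # inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)[:, None]    # toy binary targets

W1, b1 = rng.standard_normal((4, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)) * 0.5, np.zeros(1)
g1, g2 = np.ones(8), np.ones(1)                       # adaptive gains, one per node

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr, n = 1.0, len(X)

for epoch in range(2000):
    # forward pass: activation is sigmoid(gain * net input)
    z1 = X @ W1 + b1;   a1 = sigmoid(g1 * z1)
    z2 = a1 @ W2 + b2;  a2 = sigmoid(g2 * z2)

    # backward pass (squared error); gains get their own gradient updates
    d2 = (a2 - y) * a2 * (1 - a2)
    dW2 = a1.T @ (d2 * g2);  db2 = (d2 * g2).sum(0);  dg2 = (d2 * z2).sum(0)
    d1 = (d2 * g2) @ W2.T * a1 * (1 - a1)
    dW1 = X.T @ (d1 * g1);   db1 = (d1 * g1).sum(0);  dg1 = (d1 * z1).sum(0)

    W2 -= lr * dW2 / n; b2 -= lr * db2 / n; g2 -= lr * dg2 / n
    W1 -= lr * dW1 / n; b1 -= lr * db1 / n; g1 -= lr * dg1 / n

print("final loss:", float(((a2 - y) ** 2).mean()))
```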

  18. Learning evasive maneuvers using evolutionary algorithms and neural networks

    NASA Astrophysics Data System (ADS)

    Kang, Moung Hung

    In this research, evolutionary algorithms and recurrent neural networks are combined to evolve control knowledge that helps pilots avoid being struck by a missile, based on a two-dimensional air combat simulation model. The recurrent neural network represents the pilot's control knowledge, and evolutionary algorithms (Genetic Algorithms (GA), Evolution Strategies (ES), and Evolutionary Programming (EP)) are used to optimize the weights and/or topology of the recurrent neural network. The simulation model of the two-dimensional evasive maneuver problem is used to evaluate the performance of the evolved recurrent neural networks, with five typical air combat conditions selected for this evaluation. Analysis of Variance (ANOVA) tests and response graphs were used to analyze the results. Overall, there was little difference in the performance of the three evolutionary algorithms used to evolve the control knowledge. However, the number of generations each algorithm required to obtain the best performance differed significantly: ES converged the fastest, followed by EP and then GA. The recurrent neural networks evolved by the evolutionary algorithms outperformed the traditional recommendation for evasive maneuvers, the maximum gravitational turn, for each air combat condition. Furthermore, the recommended actions of the recurrent neural networks are reasonable and can be used for pilot training.
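
    The sketch below illustrates, under stated assumptions, how an evolution strategy can evolve the flat weight vector of a small recurrent controller. Because the air-combat simulation is not available, the fitness function is a stand-in episode that rewards tracking a moving 2-D target; the network size, mutation step, and population settings are illustrative only.

```python
# Minimal sketch of evolving recurrent-network weights with a (mu, lambda)
# evolution strategy. The fitness function is a placeholder for the
# air-combat simulation described above, which is not public.
import numpy as np

rng = np.random.default_rng(1)
N_IN, N_HID, N_OUT = 4, 6, 2
N_W = N_HID * (N_IN + N_HID + 1) + N_OUT * (N_HID + 1)   # flat weight vector size

def rnn_action(w, obs, h):
    """One step of a simple Elman-style recurrent controller."""
    i = 0
    Wx = w[i:i + N_HID * N_IN].reshape(N_HID, N_IN); i += N_HID * N_IN
    Wh = w[i:i + N_HID * N_HID].reshape(N_HID, N_HID); i += N_HID * N_HID
    bh = w[i:i + N_HID]; i += N_HID
    Wo = w[i:i + N_OUT * N_HID].reshape(N_OUT, N_HID); i += N_OUT * N_HID
    bo = w[i:i + N_OUT]
    h = np.tanh(Wx @ obs + Wh @ h + bh)
    return np.tanh(Wo @ h + bo), h

def fitness(w):
    """Placeholder episode: reward keeping the 2-D output near a moving target."""
    h, score = np.zeros(N_HID), 0.0
    for t in range(50):
        obs = np.array([np.sin(t / 5), np.cos(t / 7), 1.0, 0.1 * t])
        act, h = rnn_action(w, obs, h)
        score -= np.sum((act - obs[:2]) ** 2)      # higher is better
    return score

MU, LAM, SIGMA = 10, 40, 0.1
parents = [rng.standard_normal(N_W) * 0.1 for _ in range(MU)]
for gen in range(100):
    offspring = [parents[rng.integers(MU)] + SIGMA * rng.standard_normal(N_W)
                 for _ in range(LAM)]
    parents = sorted(offspring, key=fitness, reverse=True)[:MU]
print("best fitness:", fitness(parents[0]))
```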

  19. Management and non-supervisory perceptions surrounding the implementation and significance of high-performance work practices in a nuclear power plant

    NASA Astrophysics Data System (ADS)

    Ashbridge, Gayle Ann

    Change management has become an imperative for organizations as they move into the 21st century, yet up to 75 percent of change initiatives fail. Nuclear power plants face the same challenges as industrial firms, with the added challenge of deregulation; restructuring of the electric utility industry has raised a number of complex issues. Under traditional cost-of-service regulation, electric utilities were able to pass their costs on to consumers, who absorbed them. In the new competitive environment, customers choose their suppliers based on the most competitive price. The purpose of this study is to determine the degree of congruence between non-supervisory and supervisory personnel regarding the perceived implementation of high-performance workplace practices at a nuclear power plant. The study used as its foundation the practices identified in the Road to High Performance Workplaces: A Guide to Better Jobs and Better Business Results by the U.S. Department of Labor's Office of the American Workplace (1994). The population consisted of organizational members at one nuclear power plant. Over 300 individuals completed surveys on high-performance workplace practices; two surveys were administered, one to non-supervisory personnel and one to first-line supervisors and above. Implementation levels were determined through descriptive statistical analysis. Results revealed 32 areas of noncongruence between non-supervisory and supervisory personnel regarding the perceived implementation level of the high-performance workplace practices. Factor analysis further revealed that the order in which respondents place emphasis on the variables differs between the two groups. This study provides recommendations that may improve the nuclear power plant's alignment of activities, as well as recommendations for additional research on high-performance work practices.

  20. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
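
    Since the paper's algorithms range from exhaustive enumeration to simulated annealing, the generic simulated-annealing skeleton below shows the kind of combinatorial search involved. The scoring function and neighborhood move are placeholders (a toy binary "phase" vector scored against a hidden target), not the pedigree likelihood used for haplotype reconstruction.

```python
# Generic simulated-annealing skeleton for discrete configuration search.
# The score below is a placeholder, not the pedigree likelihood from the paper.
import math
import random

def simulated_annealing(initial, score, neighbor, t0=1.0, cooling=0.995, steps=5000):
    """Maximize `score` over configurations reachable via `neighbor`."""
    current, best = initial, initial
    s_cur = s_best = score(initial)
    t = t0
    for _ in range(steps):
        cand = neighbor(current)
        s_cand = score(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if s_cand >= s_cur or random.random() < math.exp((s_cand - s_cur) / t):
            current, s_cur = cand, s_cand
            if s_cur > s_best:
                best, s_best = current, s_cur
        t *= cooling
    return best, s_best

# Toy usage: binary "phase" vector, placeholder score rewarding agreement
# with a hidden target vector (stands in for a pedigree likelihood).
random.seed(0)
target = [random.randint(0, 1) for _ in range(30)]
score = lambda v: sum(a == b for a, b in zip(v, target))

def neighbor(v):
    w = list(v)
    w[random.randrange(len(w))] ^= 1       # flip one phase assignment
    return w

best, s = simulated_annealing([0] * 30, score, neighbor)
print("recovered", s, "of", len(target), "positions")
```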

  1. Versatility of the CFR (Constrained Fourier Reconstruction) algorithm for limited angle reconstruction

    SciTech Connect

    Fujieda, I.; Heiskanen, K.; Perez-Mendez, V.

    1989-08-01

    The Constrained Fourier Reconstruction (CFR) algorithm and the Iterative Reconstruction-Reprojection (IRR) algorithm are evaluated based on their accuracy for three types of limited-angle reconstruction problems. The CFR algorithm performs better for problems such as X-ray CT imaging of a nuclear reactor core with one large data gap caused by structural blocking of the source and detector pair. For gated heart imaging by X-ray CT and for radioisotope distribution imaging by PET or SPECT using a polygonal array of gamma cameras with insensitive gaps between camera boundaries, the IRR algorithm has a slight advantage over the CFR algorithm, but the difference is not significant. 3 refs., 5 figs.

  2. Model-free constrained data-driven iterative reference input tuning algorithm with experimental validation

    NASA Astrophysics Data System (ADS)

    Radac, Mircea-Bogdan; Precup, Radu-Emil

    2016-05-01

    This paper presents the design and experimental validation of a new model-free, data-driven iterative reference input tuning (IRIT) algorithm that solves a reference trajectory tracking problem as an optimization problem with control signal saturation and rate constraints. The IRIT algorithm design employs an experiment-based stochastic search algorithm to exploit the advantages of iterative learning control. The experimental results validate the IRIT algorithm applied to a non-linear aerodynamic position control system and show that it delivers significant control system performance improvement in only a few iterations and experiments conducted on the real-world process, with model-free parameter tuning.

  3. Performance of Students with Significant Cognitive Disabilities on Early-Grade Curriculum-Based Measures of Word and Passage Reading Fluency

    ERIC Educational Resources Information Center

    Lemons, Christopher J.; Zigmond, Naomi; Kloo, Amanda M.; Hill, David R.; Mrachko, Alicia A.; Paterra, Matthew F.; Bost, Thomas J.; Davis, Shawn M.

    2013-01-01

    Alternate assessments have been used for the last 10 years to evaluate schools' efforts to teach children with significant cognitive disabilities. However, few studies have examined the reading skills of children who participate in these assessments. The purpose of this study was to extend understanding of the reading skills of this…

  4. Modeling and convergence analysis of distributed coevolutionary algorithms.

    PubMed

    Subbu, Raj; Sanderson, Arthur C

    2004-04-01

    A theoretical foundation is presented for modeling and convergence analysis of a class of distributed coevolutionary algorithms applied to optimization problems in which the variables are partitioned among p nodes. An evolutionary algorithm at each of the p nodes performs a local evolutionary search based on its own set of primary variables, and the secondary variable set at each node is clamped during this phase. An infrequent intercommunication between the nodes updates the secondary variables at each node. The local search and intercommunication phases alternate, resulting in a cooperative search by the p nodes. First, we specify a theoretical basis for a class of centralized evolutionary algorithms in terms of construction and evolution of sampling distributions over the feasible space. Next, this foundation is extended to develop a model for a class of distributed coevolutionary algorithms. Convergence and convergence-rate analyses are pursued for basic classes of objective functions. Our theoretical investigation reveals that for certain unimodal and multimodal objectives, we can expect these algorithms to converge at a geometrical rate. The distributed coevolutionary algorithms are of most interest from the perspective of their performance advantage compared to centralized algorithms when they execute in a network environment with significant local access and internode communication delays. The relative performance of these algorithms is therefore evaluated in a distributed environment with realistic parameters of network behavior. PMID:15376831
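
    A minimal single-process sketch of the clamp-and-alternate scheme described above follows: the variable vector is partitioned across p "nodes," each node evolves only its own block while the remaining blocks are clamped, and the blocks are synchronized between phases. The sphere objective and all parameters are illustrative stand-ins, not the analyzed algorithm class.

```python
# Single-process sketch of the coevolutionary scheme above: variables are
# partitioned over p "nodes"; each node evolves its own block while the
# others are clamped, with synchronization between phases. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
DIM, P = 12, 3                                 # 12 variables split over 3 nodes
blocks = np.array_split(np.arange(DIM), P)     # index sets owned by each node
objective = lambda x: -np.sum(x ** 2)          # maximize (optimum at 0)

global_best = rng.uniform(-5, 5, DIM)          # shared current solution

def local_search(x, idx, pop=20, gens=15, sigma=0.3):
    """Evolve only the components in `idx`; all other components stay clamped."""
    best, best_f = x.copy(), objective(x)
    for _ in range(gens):
        for _ in range(pop):
            cand = best.copy()
            cand[idx] += sigma * rng.standard_normal(len(idx))  # mutate own block
            f = objective(cand)
            if f > best_f:
                best, best_f = cand, f
    return best

for phase in range(10):                         # alternate local search / sync
    results = [local_search(global_best, idx) for idx in blocks]
    # intercommunication: each node contributes its own (improved) block
    for idx, res in zip(blocks, results):
        global_best[idx] = res[idx]
print("objective after coevolution:", objective(global_best))
```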

  5. Soft-Etching Copper and Silver Electrodes for Significant Device Performance Improvement toward Facile, Cost-Effective, Bottom-Contacted, Organic Field-Effect Transistors.

    PubMed

    Wang, Zongrui; Dong, Huanli; Zou, Ye; Zhao, Qiang; Tan, Jiahui; Liu, Jie; Lu, Xiuqiang; Xiao, Jinchong; Zhang, Qichun; Hu, Wenping

    2016-03-01

    Poor charge injection and transport at the electrode/semiconductor contacts has so far been a severe performance hurdle for bottom-contact bottom-gate (BCBG) organic field-effect transistors (OFETs). Here, we have developed a simple, economical, and effective method to improve the carrier injection efficiency and obtained high-performance devices with low-cost and widely used source/drain (S/D) electrodes (Ag/Cu). Through the simple electrode etching process, the work function of the electrodes is better aligned with the semiconductors, which reduces the energy barrier and facilitates charge injection. In addition, the formation of a thinned electrode edge with desirable micro/nanostructures not only enlarges the contact side area, which benefits carrier injection, but also favors molecular self-organization for continuous crystal growth at the contact/active-channel interface, further improving charge injection and transport. These effects give rise to a large reduction in contact resistance and a marked improvement in the performance of low-cost bottom-contact OFETs. PMID:26967358

  6. Social significance of community structure: Statistical view

    NASA Astrophysics Data System (ADS)

    Li, Hui-Jia; Daniels, Jasmine J.

    2015-01-01

    Community structure analysis is a powerful tool for social networks that can simplify their topological and functional analysis considerably. However, since community detection methods have random factors and real social networks obtained from complex systems always contain error edges, evaluating the significance of a partitioned community structure is an urgent and important question. In this paper, integrating the specific characteristics of real society, we present a framework to analyze the significance of a social community. The dynamics of social interactions are modeled by identifying social leaders and corresponding hierarchical structures. Instead of a direct comparison with the average outcome of a random model, we compute the similarity of a given node with the leader by the number of common neighbors. To determine the membership vector, an efficient community detection algorithm is proposed based on the position of the nodes and their corresponding leaders. Then, using a log-likelihood score, the tightness of the community can be derived. Based on the distribution of community tightness, we establish a connection between p-value theory and network analysis, and then we obtain a significance measure in statistical form. Finally, the framework is applied to both benchmark networks and real social networks. Experimental results show that our work can be used in many fields, such as determining the optimal number of communities, analyzing the social significance of a given community, and comparing the performance among various algorithms.

  7. Social significance of community structure: statistical view.

    PubMed

    Li, Hui-Jia; Daniels, Jasmine J

    2015-01-01

    Community structure analysis is a powerful tool for social networks that can simplify their topological and functional analysis considerably. However, since community detection methods have random factors and real social networks obtained from complex systems always contain error edges, evaluating the significance of a partitioned community structure is an urgent and important question. In this paper, integrating the specific characteristics of real society, we present a framework to analyze the significance of a social community. The dynamics of social interactions are modeled by identifying social leaders and corresponding hierarchical structures. Instead of a direct comparison with the average outcome of a random model, we compute the similarity of a given node with the leader by the number of common neighbors. To determine the membership vector, an efficient community detection algorithm is proposed based on the position of the nodes and their corresponding leaders. Then, using a log-likelihood score, the tightness of the community can be derived. Based on the distribution of community tightness, we establish a connection between p-value theory and network analysis, and then we obtain a significance measure in statistical form. Finally, the framework is applied to both benchmark networks and real social networks. Experimental results show that our work can be used in many fields, such as determining the optimal number of communities, analyzing the social significance of a given community, and comparing the performance among various algorithms. PMID:25679651
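
    The similarity step described in both records above, comparing each node with its community leader by counting common neighbors, can be sketched as follows. The toy graph, the given community split, and the choice of the highest-degree node as leader are assumptions for illustration; the log-likelihood tightness score and p-value machinery are not reproduced.

```python
# Sketch of the similarity step described above: compare each node with a
# designated community "leader" by counting common neighbors. Leader choice
# (highest degree) and the toy graph/partition are illustrative only.
import networkx as nx

G = nx.karate_club_graph()

# an assumed community split; here simply the node-id halves of the graph
communities = {0: [n for n in G if n < 17], 1: [n for n in G if n >= 17]}
leaders = {c: max(nodes, key=G.degree) for c, nodes in communities.items()}

def common_neighbor_similarity(G, u, v):
    """Number of shared neighbors between node u and leader v."""
    return len(set(G[u]) & set(G[v]))

for c, nodes in communities.items():
    lead = leaders[c]
    sims = {n: common_neighbor_similarity(G, n, lead) for n in nodes if n != lead}
    print(f"community {c}, leader {lead}, top members by similarity:",
          sorted(sims, key=sims.get, reverse=True)[:5])
```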

  8. A comparison of performance of automatic cloud coverage assessment algorithm for Formosat-2 image using clustering-based and spatial thresholding methods

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Hsien

    2012-11-01

    Formosat-2 imagery is high-spatial-resolution (2 m GSD) remote sensing satellite data comprising one panchromatic band and four multispectral bands (blue, green, red, and near-infrared). An essential step in the daily processing of received Formosat-2 images is to estimate the cloud statistics of each image using an Automatic Cloud Coverage Assessment (ACCA) algorithm; the cloud statistics are subsequently recorded as important metadata for the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. For pre-processing analysis, unsupervised K-means classification, Sobel's method, a thresholding method, re-examination of non-cloudy pixels, and a cross-band filter method are applied in sequence to determine the cloud statistics. For post-processing analysis, the box-counting fractal method is applied. In other words, the cloud statistics are first determined via pre-processing analysis, and their correctness across the different spectral bands is then cross-examined qualitatively and quantitatively via post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, in this work, we first conduct a series of experiments on clustering-based and spatial thresholding methods, including Otsu's, Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods, for performance comparison. The results show that Otsu's and GE methods both perform better than the others for Formosat-2 images. Additionally, our proposed ACCA method, with Otsu's method selected as the thresholding method, successfully extracts the cloudy pixels of Formosat-2 images for accurate cloud statistics estimation.
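
    Because Otsu's method is the thresholding step the comparison found to work well, the sketch below gives a minimal pure-NumPy version of Otsu threshold selection applied to a synthetic single-band image. It is not the full ACCA pipeline, and the synthetic "cloud" patch is purely illustrative.

```python
# Minimal pure-NumPy sketch of Otsu's threshold selection applied to a
# synthetic single-band image. Not the full ACCA pipeline described above.
import numpy as np

def otsu_threshold(band, nbins=256):
    """Return the threshold maximizing between-class variance."""
    hist, edges = np.histogram(band, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                      # class-0 weight for each candidate cut
    w1 = 1.0 - w0
    m = np.cumsum(p * centers)             # cumulative mean
    mu0 = np.where(w0 > 0, m / np.maximum(w0, 1e-12), 0.0)
    mu1 = np.where(w1 > 0, (m[-1] - m) / np.maximum(w1, 1e-12), 0.0)
    between = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
    return centers[np.argmax(between)]

# Synthetic band: dark ground plus a bright "cloud" patch
rng = np.random.default_rng(3)
band = rng.normal(0.2, 0.05, (128, 128))
band[40:80, 40:90] = rng.normal(0.8, 0.05, (40, 50))
t = otsu_threshold(band)
cloud_mask = band > t
print(f"threshold={t:.3f}, cloud fraction={cloud_mask.mean():.2%}")
```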

  9. Prediction of human observer performance in a 2-alternative forced choice low-contrast detection task using channelized Hotelling observer: Impact of radiation dose and reconstruction algorithms

    SciTech Connect

    Yu Lifeng; Leng Shuai; Chen Lingyun; Kofler, James M.; McCollough, Cynthia H.; Carter, Rickey E.

    2013-04-15

    Purpose: Efficient optimization of CT protocols demands a quantitative approach to predicting human observer performance on specific tasks at various scan and reconstruction settings. The goal of this work was to investigate how well a channelized Hotelling observer (CHO) can predict human observer performance on 2-alternative forced choice (2AFC) lesion-detection tasks at various dose levels and two different reconstruction algorithms: a filtered-backprojection (FBP) and an iterative reconstruction (IR) method. Methods: A 35 × 26 cm² torso-shaped phantom filled with water was used to simulate an average-sized patient. Three rods with different diameters (small: 3 mm; medium: 5 mm; large: 9 mm) were placed in the center region of the phantom to simulate small, medium, and large lesions. The contrast relative to background was -15 HU at 120 kV. The phantom was scanned 100 times using automatic exposure control at each of 60, 120, 240, 360, and 480 quality reference mAs on a 128-slice scanner. After removing the three rods, the water phantom was again scanned 100 times to provide signal-absent background images at the exact same locations. By extracting regions of interest around the three rods and on the signal-absent images, the authors generated 21 2AFC studies. Each 2AFC study had 100 trials, with each trial consisting of a signal-present image and a signal-absent image side-by-side in randomized order. In total, 2100 trials were presented to both the model and human observers. Four medical physicists acted as human observers. For the model observer, the authors used a CHO with Gabor channels, which involves six channel passbands, five orientations, and two phases, leading to a total of 60 channels. The performance predicted by the CHO was compared with that obtained by the four medical physicists for each 2AFC study. Results: The human and model observers were highly correlated at each dose level for each lesion size for both FBP and IR.
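
    The decision rule of a channelized Hotelling observer can be sketched as follows: project each region of interest onto a channel set, form the Hotelling template from the channel-space mean difference and pooled covariance, and score paired 2AFC trials. The random channel matrix and synthetic regions of interest below are stand-ins for the 60 Gabor channels and the CT phantom data.

```python
# Minimal sketch of a channelized Hotelling observer (CHO) decision rule.
# The random "channels" and synthetic ROIs are stand-ins for the 60 Gabor
# channels and the CT phantom data described above.
import numpy as np

rng = np.random.default_rng(4)
npix, nchan, ntrain, ntest = 32 * 32, 10, 200, 100

U = rng.standard_normal((npix, nchan))            # stand-in channel matrix
signal = np.zeros(npix); signal[500:520] = 0.5    # synthetic low-contrast lesion

def make_rois(n, with_signal):
    noise = rng.standard_normal((n, npix))
    return noise + (signal if with_signal else 0.0)

# training: estimate channelized means and pooled covariance
vs = make_rois(ntrain, True) @ U                  # signal-present channel outputs
vn = make_rois(ntrain, False) @ U                 # signal-absent channel outputs
S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))           # pooled channel covariance
w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))   # Hotelling template

# testing: 2AFC -- the observer picks the image with the higher test statistic
ts = make_rois(ntest, True) @ U @ w
tn = make_rois(ntest, False) @ U @ w
print(f"2AFC percent correct: {np.mean(ts > tn):.2%}")
```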

  10. A Distributed Polygon Retrieval Algorithm Using MapReduce

    NASA Astrophysics Data System (ADS)

    Guo, Q.; Palanisamy, B.; Karimi, H. A.

    2015-07-01

    The burst of large-scale spatial terrain data due to the proliferation of data acquisition devices such as 3D laser scanners poses challenges to spatial data analysis and computation. Among many spatial analyses and computations, polygon retrieval is a fundamental operation that is often performed under real-time constraints. However, existing sequential algorithms fail to meet this demand for larger terrain datasets. Motivated by the MapReduce programming model, a widely adopted large-scale parallel data processing technique, we present a MapReduce-based polygon retrieval algorithm designed to reduce the IO and CPU loads of spatial data processing. By indexing the data with a quad-tree approach, a significant amount of unneeded data is filtered out in the filtering stage, which reduces the IO overhead. The indexed data also make it possible to query the relationship between the terrain data and the query area in a shorter time. The results of experiments performed on our Hadoop cluster demonstrate that our algorithm performs significantly better than existing distributed algorithms.
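
    The filtering idea described above can be sketched in plain Python standing in for a MapReduce job: a map-style step keys each polygon's bounding box by its quad-tree cell, and a filter/reduce-style step examines only the cells that intersect the query window. The world extent, tree depth, and synthetic polygons are illustrative assumptions.

```python
# Plain-Python sketch of the quad-tree filtering idea above, standing in for
# a Hadoop MapReduce job: "map" keys each polygon's bounding box by its
# quad-tree cell; "filter/reduce" keeps only cells touching the query window.
from collections import defaultdict

WORLD = (0.0, 0.0, 1024.0, 1024.0)   # (xmin, ymin, xmax, ymax)
DEPTH = 4                            # quad-tree depth -> 16 x 16 grid of cells

def cell_of(bbox, world=WORLD, depth=DEPTH):
    """Quad-tree cell index (ix, iy) of a bounding box centre."""
    n = 2 ** depth
    cx, cy = (bbox[0] + bbox[2]) / 2, (bbox[1] + bbox[3]) / 2
    ix = min(int((cx - world[0]) / (world[2] - world[0]) * n), n - 1)
    iy = min(int((cy - world[1]) / (world[3] - world[1]) * n), n - 1)
    return ix, iy

def cells_overlapping(query, world=WORLD, depth=DEPTH):
    """All cells whose extent intersects the query window."""
    n = 2 ** depth
    dx, dy = (world[2] - world[0]) / n, (world[3] - world[1]) / n
    return {(ix, iy) for ix in range(n) for iy in range(n)
            if not (query[2] < ix * dx or query[0] > (ix + 1) * dx or
                    query[3] < iy * dy or query[1] > (iy + 1) * dy)}

# map phase: index polygons (represented here by bounding boxes) by cell
polygons = {f"poly{i}": (i * 7.0 % 1000, i * 13.0 % 1000,
                         i * 7.0 % 1000 + 20, i * 13.0 % 1000 + 20)
            for i in range(500)}
index = defaultdict(list)
for name, bbox in polygons.items():
    index[cell_of(bbox)].append(name)

# filter/reduce phase: only cells touching the query window are examined
query = (100.0, 100.0, 300.0, 260.0)
candidates = [p for cell in cells_overlapping(query) for p in index[cell]]
print(len(candidates), "candidate polygons out of", len(polygons))
```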

  11. Automated detection of radiology reports that document non-routine communication of critical or significant results.

    PubMed

    Lakhani, Paras; Langlotz, Curtis P

    2010-12-01

    The purpose of this investigation is to develop an automated method to accurately detect radiology reports that indicate non-routine communication of critical or significant results. Such a classification system would be valuable for performance monitoring and accreditation. Using a database of 2.3 million free-text radiology reports, a rule-based query algorithm was developed after analyzing hundreds of radiology reports that indicated communication of critical or significant results to a healthcare provider. This algorithm consisted of words and phrases used by radiologists to indicate such communications, combined with specific handcrafted rules. The algorithm was iteratively refined and retested on hundreds of reports until the precision and recall did not change significantly between iterations. It was then validated on the entire database of 2.3 million reports, excluding those reports used during the testing and refinement process, with human review as the reference standard. The accuracy of the algorithm was determined using precision, recall, and F measure; confidence intervals were calculated using the adjusted Wald method. The developed algorithm for detecting critical result communication has a precision of 97.0% (95% CI, 93.5-98.8%), a recall of 98.2% (95% CI, 93.4-100%), and an F measure of 97.6% (β=1). Our query algorithm is accurate for identifying radiology reports that contain non-routine communication of critical or significant results; it can be applied to a radiology report database for quality control purposes and helps satisfy accreditation requirements. PMID:19826871
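
    A hypothetical sketch of the rule-based approach, a small set of communication phrases plus an exclusion rule, together with the precision/recall/F-measure computation used for evaluation, is given below. The phrase patterns, exclusion rule, and example reports are invented for illustration and are not the validated algorithm from the study.

```python
# Sketch of a rule-based detector for result-communication language in
# radiology reports, plus precision/recall/F-measure against a reference
# standard. The phrase list and exclusion rule are illustrative only.
import re

COMMUNICATION_PATTERNS = [
    r"\b(results?|findings?) (were|was) (communicated|discussed|relayed) (to|with)\b",
    r"\b(telephoned|called|paged) (to )?(dr\.?|the (referring|ordering)) \w*",
    r"\bcritical (result|finding)s? (were|was) reported\b",
]
EXCLUSIONS = [r"\bno (communication|critical (results?|findings?))\b"]

def flags_communication(report: str) -> bool:
    text = report.lower()
    if any(re.search(p, text) for p in EXCLUSIONS):
        return False
    return any(re.search(p, text) for p in COMMUNICATION_PATTERNS)

def precision_recall_f(predictions, truth, beta=1.0):
    tp = sum(p and t for p, t in zip(predictions, truth))
    fp = sum(p and not t for p, t in zip(predictions, truth))
    fn = sum(t and not p for p, t in zip(predictions, truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = ((1 + beta**2) * precision * recall / (beta**2 * precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

reports = [
    "Findings were communicated to Dr. Smith at 14:30.",
    "No acute abnormality. No critical findings.",
    "Critical results were reported to the ordering physician.",
]
truth = [True, False, True]
preds = [flags_communication(r) for r in reports]
print(preds, precision_recall_f(preds, truth))
```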

  12. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism for the bats are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
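
    For reference, the sketch below implements the standard bat algorithm loop (frequency-tuned velocity updates with per-bat loudness A and pulse-emission rate r) on a sphere benchmark. The cloud-model remodeling and Lévy-flight modification proposed in the paper are not reproduced, and the constants are typical illustrative values.

```python
# Sketch of the standard bat algorithm loop (frequency-tuned velocity update,
# loudness A, pulse-emission rate r) on a sphere benchmark. The cloud-model
# and Levy-flight modifications from the paper are not reproduced.
import numpy as np

rng = np.random.default_rng(5)
DIM, N_BATS, ITERS = 10, 30, 500
F_MIN, F_MAX, ALPHA, GAMMA = 0.0, 2.0, 0.9, 0.9
objective = lambda x: np.sum(x ** 2)            # minimize

x = rng.uniform(-5, 5, (N_BATS, DIM))           # positions
v = np.zeros((N_BATS, DIM))                     # velocities
A = np.full(N_BATS, 1.0)                        # loudness
r = np.full(N_BATS, 0.5)                        # pulse emission rate
fit = np.apply_along_axis(objective, 1, x)
best = x[np.argmin(fit)].copy()

for t in range(1, ITERS + 1):
    for i in range(N_BATS):
        freq = F_MIN + (F_MAX - F_MIN) * rng.random()
        v[i] += (x[i] - best) * freq
        cand = x[i] + v[i]
        if rng.random() > r[i]:                 # local walk around the best bat
            cand = best + 0.01 * A.mean() * rng.standard_normal(DIM)
        f_cand = objective(cand)
        if f_cand <= fit[i] and rng.random() < A[i]:
            x[i], fit[i] = cand, f_cand
            A[i] *= ALPHA                       # quieter as the bat homes in
            r[i] = 0.5 * (1 - np.exp(-GAMMA * t))
        if f_cand <= objective(best):
            best = cand.copy()
print("best objective:", objective(best))
```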

  13. Significant performance enhancement of a UASB reactor by using acyl homoserine lactones to facilitate the long filaments of Methanosaeta harundinacea 6Ac.

    PubMed

    Li, Lingyan; Zheng, Mingyue; Ma, Hailing; Gong, Shufen; Ai, Guomin; Liu, Xiaoli; Li, Jie; Wang, Kaijun; Dong, Xiuzhu

    2015-08-01

    Methanosaeta strains are frequently involved in granule formation during methanogenic wastewater treatment. To investigate the impact of Methanosaeta on the granulation and performance of upflow anaerobic sludge blanket (UASB) reactors, three 1-L working-volume reactors, denoted R1, R2, and R3, were operated and fed with a synthetic wastewater containing sodium acetate and glucose. R1 was inoculated with 1 L of activated sludge, while R2 and R3 were inoculated with 200 mL of concentrated pre-grown Methanosaeta harundinacea 6Ac culture and 800 mL of activated sludge. Additionally, R3 was dosed daily with 0.5 mL/L of acetyl ether extract of 6Ac spent culture containing its quorum-sensing signal, carboxyl acyl homoserine lactone (AHL). Compared to R1, R2 and R3 had a higher and more constant chemical oxygen demand (COD) removal efficiency and an alkaline pH (8.2) during the granulation phase; in particular, R3 maintained approximately 90 % COD removal. Moreover, R3 formed the best granules, and microscopic images showed fluorescent Methanosaeta-like filaments dominating the R3 granules, whereas rod cells dominated the R2 granules. Analysis of 16S rRNA gene libraries showed an increased diversity of methanogen species such as Methanosarcina and Methanospirillum in R2 and R3, and an increased bacterial diversity in R3 that included the syntrophic propionate degrader Syntrophobacter. Quantitative PCR determined that 6Ac made up more than 22 % of the total prokaryotes in R3, but only 3.6 % in R2. The carboxyl AHL was detected in R3. This work indicates that AHL-facilitated filaments of Methanosaeta contribute to the granulation and performance of UASB reactors, likely by immobilizing other functional microorganisms. PMID:25776059

  14. A baseline algorithm for face detection and tracking in video

    NASA Astrophysics Data System (ADS)

    Manohar, Vasant; Soundararajan, Padmanabhan; Korzhova, Valentina; Boonstra, Matthew; Goldgof, Dmitry; Kasturi, Rangachar

    2007-10-01

    Establishing benchmark datasets, performance metrics, and baseline algorithms has considerable research significance in gauging the progress in any application domain, primarily because it allows both users and developers to compare the performance of various algorithms on a common platform. In our earlier work, we focused on developing performance metrics and establishing a substantial dataset with ground truth for object detection and tracking tasks (text and face) in two video domains: broadcast news and meetings. In this paper, we present the results of a face detection and tracking algorithm on broadcast news videos with the objective of establishing a baseline performance for this task-domain pair. The detection algorithm uses a statistical approach originally developed by Viola and Jones and later extended by Lienhart; it uses Haar-like features and a cascade of boosted decision tree classifiers as the statistical model. In this work, we used the Intel Open Source Computer Vision Library (OpenCV) implementation of the Haar face detection algorithm. The optimal values for the tunable parameters of this implementation were found through an experimental design strategy commonly used in statistical analyses of industrial processes. Tracking was accomplished as continuous detection, with the detected objects in consecutive frames mapped using a greedy algorithm based on the distances between the centroids of bounding boxes. Results on the evaluation set containing 50 sequences (~2.5 min) using the developed performance metrics show good performance of the algorithm, reflecting the state of the art, which makes it an appropriate choice as the baseline algorithm for this problem.
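
    The detection step uses the OpenCV Haar cascade interface named in the abstract; the sketch below pairs it with a simple greedy centroid-distance matcher to carry track IDs across frames. The video path, distance threshold, and track bookkeeping are illustrative assumptions rather than the evaluated system or its tuned parameters.

```python
# Sketch of detection-based tracking: OpenCV's Haar cascade face detector per
# frame, then greedy matching of detections across consecutive frames by
# centroid distance. Video path, threshold and bookkeeping are illustrative.
import itertools
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [(x + w / 2, y + h / 2, (x, y, w, h)) for (x, y, w, h) in boxes]

def greedy_match(prev, curr, max_dist=50.0):
    """Greedily pair previous and current detections by centroid distance."""
    pairs, used = [], set()
    dists = sorted((((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2) ** 0.5, i, j)
                   for i, p in enumerate(prev) for j, c in enumerate(curr))
    for d, i, j in dists:
        if d <= max_dist and i not in used and all(j != jj for _, jj in pairs):
            pairs.append((i, j))
            used.add(i)
    return pairs

cap = cv2.VideoCapture("news_clip.mp4")      # placeholder input video
prev, track_ids, next_id = [], {}, itertools.count()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    curr = detect_faces(frame)
    matches = greedy_match(prev, curr)
    new_ids = {j: track_ids[i] for i, j in matches}        # carry IDs forward
    for j in range(len(curr)):
        new_ids.setdefault(j, next(next_id))               # start new tracks
    prev, track_ids = curr, new_ids
cap.release()
print("tracks created:", next(next_id))      # next unused ID == tracks created
```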

  15. Spatial compression algorithm for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
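
    A minimal sketch of the spatial-compression idea, assuming PyWavelets: transform each channel of a multivariate image, keep only the largest-magnitude wavelet coefficients, and reconstruct from the reduced set. The keep-fraction, wavelet choice, and random test image are illustrative; this is not the patented algorithm or its block variant.

```python
# Sketch of the spatial-compression idea above using PyWavelets: transform
# each channel, keep only the largest-magnitude wavelet coefficients, and
# reconstruct from that reduced set. Illustrative, not the patented algorithm.
import numpy as np
import pywt

rng = np.random.default_rng(6)
image = rng.random((128, 128, 4))            # stand-in multivariate image (4 channels)
KEEP = 0.05                                  # retain 5% of coefficients per channel

def compress_channel(band, wavelet="db2", level=3, keep=KEEP):
    coeffs = pywt.wavedec2(band, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)        # magnitude cut-off
    arr_small = np.where(np.abs(arr) >= thresh, arr, 0.0)
    rec = pywt.waverec2(
        pywt.array_to_coeffs(arr_small, slices, output_format="wavedec2"), wavelet)
    return arr_small, rec[: band.shape[0], : band.shape[1]]

kept, errors = 0, []
for c in range(image.shape[2]):
    arr_small, rec = compress_channel(image[:, :, c])
    kept += np.count_nonzero(arr_small)
    errors.append(np.sqrt(np.mean((rec - image[:, :, c]) ** 2)))
print("nonzero coefficients kept:", kept, " mean RMSE:", float(np.mean(errors)))
```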