Science.gov

Sample records for algorithmic framework based

  1. Scale-space point spread function based framework to boost infrared target detection algorithms

    NASA Astrophysics Data System (ADS)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2016-07-01

    Small target detection is one of the major concerns in the development of infrared surveillance systems. Detection algorithms based on Gaussian target modeling have attracted the most attention from researchers in this field. However, the lack of accurate target modeling limits the performance of this type of infrared small target detection algorithm. In this paper, the signal-to-clutter ratio (SCR) improvement mechanism based on the matched filter is described in detail, and the effect of the point spread function (PSF) on the intensity and spatial distribution of the target pixels is clarified comprehensively. A new parametric model for small infrared targets is then developed based on the PSF of the imaging system, which can be considered as a matched filter. Based on this model, a new framework to boost model-based infrared target detection algorithms is presented. To demonstrate its performance, the proposed model is adopted in the Laplacian scale-space algorithm, a well-known algorithm in the small infrared target detection field. Simulation results show that the proposed framework has better detection performance than the Gaussian one and improves the overall performance of the IRST system. A quantitative analysis shows that the new framework yields at least a 20% improvement in output SCR values compared with the Laplacian of Gaussian (LoG) algorithm.
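
    A minimal sketch of the two quantities discussed above, assuming a SciPy-style Laplacian-of-Gaussian filter bank and a conventional local-statistics SCR definition (the paper's PSF-based target model and its parameters are not reproduced here):

        # Hedged sketch: multi-scale LoG responses for an infrared frame and a simple
        # signal-to-clutter ratio estimate; kernel widths and the SCR definition are
        # illustrative assumptions, not the authors' exact parametrisation.
        import numpy as np
        from scipy.ndimage import gaussian_laplace

        def log_scale_space_response(frame, sigmas=(1.0, 1.5, 2.0)):
            """Maximum absolute LoG response over a small set of scales."""
            responses = [np.abs(gaussian_laplace(frame.astype(float), sigma=s)) for s in sigmas]
            return np.max(np.stack(responses), axis=0)

        def scr(frame, target_mask, background_mask):
            """SCR = (mean target intensity - mean background) / background std."""
            mu_t = frame[target_mask].mean()
            mu_b = frame[background_mask].mean()
            sigma_b = frame[background_mask].std()
            return (mu_t - mu_b) / (sigma_b + 1e-12)

        frame = np.random.rand(64, 64)          # stand-in infrared frame
        response = log_scale_space_response(frame)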

  2. A likelihood-based reconstruction algorithm for top-quark pairs and the KLFitter framework

    NASA Astrophysics Data System (ADS)

    Erdmann, Johannes; Guindon, Stefan; Kröninger, Kevin; Lemmer, Boris; Nackenhorst, Olaf; Quadt, Arnulf; Stolte, Philipp

    2014-06-01

    A likelihood-based reconstruction algorithm for arbitrary event topologies is introduced and, as an example, applied to the single-lepton decay mode of top-quark pair production. The algorithm comes with several options which further improve its performance, in particular the reconstruction efficiency, i.e., the fraction of events for which the observed jets and leptons can be correctly associated with the final-state particles of the corresponding event topology. The performance is compared to that of well-established reconstruction algorithms using a common framework for kinematic fitting. This framework has a modular structure which describes the physics processes and detector models independently. The implemented algorithms are generic and can easily be ported from one experiment to another.
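
    The combinatorial core of this kind of likelihood-based reconstruction can be illustrated with a toy permutation scan; the Breit-Wigner terms, mass values, and stand-in jet observables below are placeholders, not the KLFitter likelihood:

        # Hedged sketch: enumerate jet-to-parton assignments and keep the permutation
        # that maximizes a toy likelihood; real implementations use four-vectors,
        # transfer functions and b-tagging information.
        import itertools

        def breit_wigner(m, m0, gamma):
            return 1.0 / ((m * m - m0 * m0) ** 2 + (m0 * gamma) ** 2)

        def likelihood(assignment):
            m_whad = assignment["q1"] + assignment["q2"]      # fake "invariant masses"
            m_thad = m_whad + assignment["b_had"]
            return breit_wigner(m_whad, 80.4, 2.1) * breit_wigner(m_thad, 172.5, 1.3)

        jets = [45.0, 38.0, 90.0, 60.0]                       # stand-in jet observables
        roles = ["q1", "q2", "b_had", "b_lep"]
        best = max(itertools.permutations(jets),
                   key=lambda p: likelihood(dict(zip(roles, p))))
        print("best assignment:", dict(zip(roles, best)))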

  3. Framework for the implementation of vision-based fuzzy logic navigational algorithms for a mobile robot

    NASA Astrophysics Data System (ADS)

    Akec, John A.; Steiner, Simon J.

    1996-10-01

    Fuzzy logic has been promoted recently by many researchers for the design of navigational algorithms for mobile robots. The approach fits in well with a behavior-based autonomous systems framework, where common-sense rules can naturally be formulated to create rule-based navigational algorithms, and conflicts between behaviors may be resolved by assigning weights to different rules in the rule base. The applicability of these techniques has been demonstrated for robots using sensor devices such as ultrasonic and infrared detectors. However, the implementation issues relating to the development of vision-based, fuzzy-logic navigation algorithms do not appear, as yet, to have been fully explored. The salient features that need to be extracted from an image for recognition or collision avoidance purposes are very much application dependent, yet the needs of an autonomous mobile vehicle cannot be known fully a priori. Similarly, issues relating to the understanding of a vision-generated image based on geometric models of the observed objects have an important role to play, but they have not yet been addressed or incorporated into the fuzzy-logic-based algorithms proposed for navigational control. This paper addresses these issues and proposes a framework to clarify the implementation of navigation algorithms for mobile robots that use vision sensors and fuzzy logic for map building, target location, and collision avoidance. The scope for application of this approach is demonstrated.

  4. An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm

    PubMed Central

    Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya

    2015-01-01

    Web services have become the technology of choice for service-oriented computing to meet the interoperability demands of web applications. In the Internet era, the exponential growth in the number of web services makes quality of service (QoS) an essential parameter for discriminating among them. In this paper, a user-preference-based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When the user's request cannot be fulfilled by a single atomic service, several existing services must be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies the best-fit services for each task in the user request and, by choosing the number of candidate services for each task, reduces the time needed to generate the composition plans. To tackle the web service composition problem, a QoS-aware automatic web service composition (QAWSC) algorithm is proposed based on the QoS aspects of the web services and user preferences. The proposed framework also allows the user to provide feedback about the composite service, which improves the reputation of the services. PMID:26504894
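
    A hedged sketch of the ranking idea: score each candidate service by a weighted sum of normalised QoS attributes and sort. The attribute names, weights, and the assumption that larger values are always better are illustrative, not the UPWSR scoring details:

        # Hedged sketch: weighted QoS ranking of candidate services.
        def rank_services(services, weights):
            """services: list of dicts with QoS attributes; higher score ranks first."""
            def norm(values):
                lo, hi = min(values), max(values)
                return [(v - lo) / (hi - lo) if hi > lo else 1.0 for v in values]

            keys = list(weights)
            columns = {k: norm([s[k] for s in services]) for k in keys}
            scores = [sum(weights[k] * columns[k][i] for k in keys) for i in range(len(services))]
            return sorted(zip(scores, [s["name"] for s in services]), reverse=True)

        candidates = [
            {"name": "svcA", "availability": 0.99, "throughput": 120, "reputation": 4.2},
            {"name": "svcB", "availability": 0.95, "throughput": 200, "reputation": 3.8},
        ]
        print(rank_services(candidates, {"availability": 0.5, "throughput": 0.3, "reputation": 0.2}))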

  5. A hybrid-algorithm-based parallel computing framework for optimal reservoir operation

    NASA Astrophysics Data System (ADS)

    Li, X.; Wei, J.; Li, T.; Wang, G.

    2012-12-01

    To date, various optimization models have been developed to offer optimal operating policies for reservoirs. Each optimization model has its own merits and limitations, and no general algorithm exists even today. At times, several optimization models have to be combined to obtain the desired results. In this paper, we present a parallel computing framework that combines various optimization models in a different way than traditional serial computing. The framework consists of three functional processor types: the master processor, slave processors, and a transfer processor. The master processor holds the full computation scheme and allocates optimization models to the slave processors; the slave processors execute the allocated optimization models; and the transfer processor handles solution communication among all slave processors. On this basis, the proposed framework can run various optimization models in parallel. Because of the solution communication, the framework can also integrate the merits of the involved optimization models during iteration, so the performance of each optimization model is improved. Moreover, the framework effectively improves solution quality and increases solution speed by making full use of the computing power of parallel computers.
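
    A minimal sketch of the master/slave pattern described above, using Python multiprocessing; the random-search workers and the single shared incumbent stand in for the framework's optimization models and transfer-processor communication:

        # Hedged sketch: a master loop dispatches several workers each iteration and
        # shares the best solution found so far; the objective is a toy surrogate.
        import random
        from multiprocessing import Pool

        def objective(x):
            return -(x - 3.0) ** 2

        def worker(args):
            seed, incumbent = args
            rng = random.Random(seed)
            x = incumbent + rng.uniform(-1, 1)   # perturb the shared incumbent
            return x, objective(x)

        if __name__ == "__main__":
            best_x, best_f = 0.0, objective(0.0)
            with Pool(4) as pool:
                for it in range(20):             # master loop
                    results = pool.map(worker, [(it * 10 + i, best_x) for i in range(4)])
                    x, f = max(results, key=lambda r: r[1])
                    if f > best_f:
                        best_x, best_f = x, f
            print(best_x, best_f)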

  6. An infrared thermal image processing framework based on superpixel algorithm to detect cracks on metal surface

    NASA Astrophysics Data System (ADS)

    Xu, Changhang; Xie, Jing; Chen, Guoming; Huang, Weiping

    2014-11-01

    Infrared thermography has been used increasingly as an effective non-destructive technique to detect cracks on metal surfaces. Due to many factors, infrared thermal images have lower definition than visible images. The contrast between cracks and sound areas in different thermal image frames of a specimen varies greatly with the recorded time. An accurate detection can only be obtained by looking through the whole thermal video, which is laborious work. Moreover, the operator's experience has an important influence on the accuracy of the detection result. In this paper, an infrared thermal image processing framework based on a superpixel algorithm is proposed to accomplish crack detection automatically. Two popular superpixel algorithms are compared and one of them is selected to generate superpixels in this application. Combined superpixel features are selected from both the raw gray-level image and the high-pass filtered image. Fuzzy c-means clustering is used to cluster the superpixels in order to segment the infrared thermal image. Experimental results show that the proposed framework can recognize cracks on metal surfaces from infrared thermal images automatically.

  7. A modified surface energy balance algorithm for land (M-SEBAL) based on a trapezoidal framework

    NASA Astrophysics Data System (ADS)

    Long, Di; Singh, Vijay P.

    2012-02-01

    The surface energy balance algorithm for land (SEBAL) has been designed and widely used (and misused) worldwide to estimate evapotranspiration across varying spatial and temporal scales using satellite remote sensing over the past 15 years. It is, however, beset by the visual identification of a hot and a cold pixel to determine the temperature difference (dT) between the surface and the lower atmosphere, which is assumed to be linearly correlated with surface radiative temperature (Trad) throughout a scene. To reduce ambiguity in flux estimation by SEBAL due to the subjectivity in extreme pixel selection, this study first demonstrates that SEBAL corresponds to a rectangular framework in the contextual relationship between vegetation fraction (fc) and Trad, which can distort the spatial distribution of heat flux retrievals to varying degrees. The end members of SEBAL are replaced by a trapezoidal framework in the fc-Trad space in the modified surface energy balance algorithm for land (M-SEBAL). The warm edge of the trapezoidal framework is determined by analytically deriving the temperatures of the bare surface with the largest water stress and the fully vegetated surface with the largest water stress, implicit in both the energy balance and radiation budget equations. The areally averaged air temperature (Ta) across a study site is taken to be the cold edge of the trapezoidal framework. The coefficients of the linear relationship between dT and Trad can vary with fc but are assumed essentially invariant for the same fc or within the same fc class in M-SEBAL. SEBAL and M-SEBAL are applied to the soil moisture-atmosphere coupling experiment (SMACEX) site in central Iowa, U.S. Results show that M-SEBAL is capable of reproducing latent heat flux with an overall root-mean-square difference of 41.1 W m-2 and a mean absolute percentage difference of 8.9% with reference to eddy covariance tower-based measurements for three Landsat Thematic Mapper/Enhanced Thematic Mapper Plus image acquisition dates in
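
    The linear dT-Trad closure underlying both schemes can be written compactly as follows; the notation is assumed here for illustration, with M-SEBAL resolving the coefficients per fc class between the two edges rather than scene-wide:

        dT(f_c) = a(f_c) + b(f_c)\,T_{rad}, \qquad
        dT = 0 \ \text{at the cold edge}\ (T_{rad} = T_a), \qquad
        dT_{warm}(f_c) = T_{rad,\,warm}(f_c) - T_a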

  8. PEDLA: predicting enhancers with a deep learning-based algorithmic framework

    PubMed Central

    Liu, Feng; Li, Hao; Ren, Chao; Bo, Xiaochen; Shu, Wenjie

    2016-01-01

    Transcriptional enhancers are non-coding segments of DNA that play a central role in the spatiotemporal regulation of gene expression programs. However, systematically and precisely predicting enhancers remains a major challenge. Although existing methods have achieved some success in enhancer prediction, they still suffer from many issues. We developed a deep learning-based algorithmic framework named PEDLA (https://github.com/wenjiegroup/PEDLA), which can directly learn an enhancer predictor from massively heterogeneous data and generalize in ways that are mostly consistent across various cell types/tissues. We first trained PEDLA with 1,114-dimensional heterogeneous features in H1 cells, and demonstrated that the PEDLA framework integrates diverse heterogeneous features and gives state-of-the-art performance relative to five existing methods for enhancer prediction. We further extended PEDLA to iteratively learn from 22 training cell types/tissues. Our results showed that PEDLA manifested superior performance consistency in both training and independent test sets. On average, PEDLA achieved 95.0% accuracy and a 96.8% geometric mean (GM) of sensitivity and specificity across 22 training cell types/tissues, as well as 95.7% accuracy and a 96.8% GM across 20 independent test cell types/tissues. Together, our work illustrates the power of harnessing state-of-the-art deep learning techniques to consistently identify regulatory elements at a genome-wide scale from massively heterogeneous data across diverse cell types/tissues. PMID:27329130
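
    The reported geometric mean combines sensitivity and specificity in the standard way (generic definitions, not specific to PEDLA):

        \mathrm{GM} = \sqrt{\mathrm{Se}\times\mathrm{Sp}}, \qquad
        \mathrm{Se} = \frac{TP}{TP+FN}, \qquad
        \mathrm{Sp} = \frac{TN}{TN+FP}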

  9. PEDLA: predicting enhancers with a deep learning-based algorithmic framework.

    PubMed

    Liu, Feng; Li, Hao; Ren, Chao; Bo, Xiaochen; Shu, Wenjie

    2016-01-01

    Transcriptional enhancers are non-coding segments of DNA that play a central role in the spatiotemporal regulation of gene expression programs. However, systematically and precisely predicting enhancers remains a major challenge. Although existing methods have achieved some success in enhancer prediction, they still suffer from many issues. We developed a deep learning-based algorithmic framework named PEDLA (https://github.com/wenjiegroup/PEDLA), which can directly learn an enhancer predictor from massively heterogeneous data and generalize in ways that are mostly consistent across various cell types/tissues. We first trained PEDLA with 1,114-dimensional heterogeneous features in H1 cells, and demonstrated that the PEDLA framework integrates diverse heterogeneous features and gives state-of-the-art performance relative to five existing methods for enhancer prediction. We further extended PEDLA to iteratively learn from 22 training cell types/tissues. Our results showed that PEDLA manifested superior performance consistency in both training and independent test sets. On average, PEDLA achieved 95.0% accuracy and a 96.8% geometric mean (GM) of sensitivity and specificity across 22 training cell types/tissues, as well as 95.7% accuracy and a 96.8% GM across 20 independent test cell types/tissues. Together, our work illustrates the power of harnessing state-of-the-art deep learning techniques to consistently identify regulatory elements at a genome-wide scale from massively heterogeneous data across diverse cell types/tissues. PMID:27329130

  10. PEDLA: predicting enhancers with a deep learning-based algorithmic framework.

    PubMed

    Liu, Feng; Li, Hao; Ren, Chao; Bo, Xiaochen; Shu, Wenjie

    2016-06-22

    Transcriptional enhancers are non-coding segments of DNA that play a central role in the spatiotemporal regulation of gene expression programs. However, systematically and precisely predicting enhancers remains a major challenge. Although existing methods have achieved some success in enhancer prediction, they still suffer from many issues. We developed a deep learning-based algorithmic framework named PEDLA (https://github.com/wenjiegroup/PEDLA), which can directly learn an enhancer predictor from massively heterogeneous data and generalize in ways that are mostly consistent across various cell types/tissues. We first trained PEDLA with 1,114-dimensional heterogeneous features in H1 cells, and demonstrated that the PEDLA framework integrates diverse heterogeneous features and gives state-of-the-art performance relative to five existing methods for enhancer prediction. We further extended PEDLA to iteratively learn from 22 training cell types/tissues. Our results showed that PEDLA manifested superior performance consistency in both training and independent test sets. On average, PEDLA achieved 95.0% accuracy and a 96.8% geometric mean (GM) of sensitivity and specificity across 22 training cell types/tissues, as well as 95.7% accuracy and a 96.8% GM across 20 independent test cell types/tissues. Together, our work illustrates the power of harnessing state-of-the-art deep learning techniques to consistently identify regulatory elements at a genome-wide scale from massively heterogeneous data across diverse cell types/tissues.

  11. Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows

    NASA Astrophysics Data System (ADS)

    Srivastav, R. K.; Srinivasan, K.; Sudheer, K.

    2009-05-01

    bootstrap (MABB) based on the explicit objective functions of minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained through a search over a multi-dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) and non-parametric (MABB) components). This is achieved using an efficient evolutionary search-based optimization tool, namely the non-dominated sorting genetic algorithm II (NSGA-II). This approach reduces the drudgery involved in manual selection of the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of the River Beaver and River Weber in the USA. For both rivers, the proposed GA-based hybrid model, in which both parametric and non-parametric components are explored simultaneously, yields a much better prediction of the storage capacity than the MLE-based hybrid models, where model selection is done in two stages and thus probably results in a sub-optimal model. This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales as well.

  12. Adaptive Multi-Objective Sub-Pixel Mapping Framework Based on Memetic Algorithm for Hyperspectral Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Zhong, Y.; Zhang, L.

    2012-07-01

    Sub-pixel mapping techniques can specify the location of each class within a pixel based on the assumption of spatial dependence. Traditional sub-pixel mapping algorithms only consider spatial dependence at the pixel level; the spatial dependence of each sub-pixel is ignored and sub-pixel spatial relations are lost. In this paper, a novel multi-objective sub-pixel mapping framework based on a memetic algorithm, namely MSMF, is proposed. In MSMF, sub-pixel mapping is transformed into a multi-objective optimization problem that maximizes the spatial dependence index (SDI) and Moran's I simultaneously. A memetic algorithm, which combines global search strategies with local search heuristics, is utilized to solve the multi-objective problem. In this framework, the sub-pixel mapping problem can be solved using different evolutionary algorithms and local algorithms. In this paper, a memetic algorithm based on the clonal selection algorithm (CSA) and random swapping is designed as an example and applied in the proposed MSMF. In MSMF, CSA inherits the biological properties of the human immune system, i.e., cloning, mutation, and memory, to search for possible sub-pixel mapping solutions in the global space. After the exploration based on CSA, a local search based on random swapping is employed to dynamically decide which neighbourhood should be selected to stress exploitation in each generation. In addition, a solution set is used in MSMF to hold and update the obtained non-dominated solutions for the multi-objective problem. Experimental results demonstrate that the proposed approach outperforms traditional sub-pixel mapping algorithms and hence provides an effective option for sub-pixel mapping of hyperspectral remote sensing imagery.
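
    A toy sketch of the memetic loop (a global clone-and-mutate step followed by a local random-swap step); the binary map encoding and the single neighbour-agreement objective are simplified stand-ins for the SDI and Moran's I objectives:

        # Hedged sketch: global variation plus local swapping over candidate maps.
        import random

        def fitness(grid):
            # toy spatial-dependence score: count of equal horizontal neighbours
            return sum(grid[i] == grid[i + 1] for i in range(len(grid) - 1))

        def mutate(grid, rate=0.1):
            return [1 - g if random.random() < rate else g for g in grid]

        def local_swap(grid):
            i, j = random.sample(range(len(grid)), 2)
            g = grid[:]
            g[i], g[j] = g[j], g[i]
            return g

        pop = [[random.randint(0, 1) for _ in range(30)] for _ in range(10)]
        for _ in range(100):
            clones = [mutate(random.choice(pop)) for _ in range(20)]            # global step
            clones = [max((c, local_swap(c)), key=fitness) for c in clones]     # local step
            pop = sorted(pop + clones, key=fitness, reverse=True)[:10]
        print(max(map(fitness, pop)))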

  13. A junction-tree based learning algorithm to optimize network wide traffic control: A coordinated multi-agent framework

    SciTech Connect

    Zhu, Feng; Aziz, H. M. Abdul; Qian, Xinwu; Ukkusuri, Satish V.

    2015-01-31

    Our study develops a novel reinforcement learning algorithm for the challenging coordinated signal control problem. Traffic signals are modeled as intelligent agents interacting with the stochastic traffic environment. The model is built on the framework of coordinated reinforcement learning. The Junction Tree Algorithm (JTA) based reinforcement learning is proposed to obtain an exact inference of the best joint actions for all the coordinated intersections. Moreover, the algorithm is implemented and tested with a network containing 18 signalized intersections in VISSIM. Finally, our results show that the JTA based algorithm outperforms independent learning (Q-learning), real-time adaptive learning, and fixed timing plans in terms of average delay, number of stops, and vehicular emissions at the network level.
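
    A hedged sketch of the independent Q-learning baseline the JTA-based approach is compared against: one tabular agent at a single intersection with a toy queue model. The states, rewards, and learning rates are illustrative only:

        # Hedged sketch: tabular Q-learning over a coarse queue-length state.
        import random
        from collections import defaultdict

        Q = defaultdict(float)
        alpha, gamma, epsilon = 0.1, 0.9, 0.1
        actions = ["NS_green", "EW_green"]

        def step(state, action):
            # stand-in simulator: reward is the negative total queue after the action
            ns, ew = state
            ns, ew = (max(ns - 3, 0), ew + 1) if action == "NS_green" else (ns + 1, max(ew - 3, 0))
            return (min(ns, 10), min(ew, 10)), -(ns + ew)

        state = (5, 5)
        for _ in range(5000):
            a = random.choice(actions) if random.random() < epsilon else max(actions, key=lambda x: Q[(state, x)])
            nxt, r = step(state, a)
            best_next = max(Q[(nxt, x)] for x in actions)
            Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
            state = nxt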

  14. A junction-tree based learning algorithm to optimize network wide traffic control: A coordinated multi-agent framework

    DOE PAGES

    Zhu, Feng; Aziz, H. M. Abdul; Qian, Xinwu; Ukkusuri, Satish V.

    2015-01-31

    Our study develops a novel reinforcement learning algorithm for the challenging coordinated signal control problem. Traffic signals are modeled as intelligent agents interacting with the stochastic traffic environment. The model is built on the framework of coordinated reinforcement learning. The Junction Tree Algorithm (JTA) based reinforcement learning is proposed to obtain an exact inference of the best joint actions for all the coordinated intersections. Moreover, the algorithm is implemented and tested with a network containing 18 signalized intersections in VISSIM. Finally, our results show that the JTA based algorithm outperforms independent learning (Q-learning), real-time adaptive learning, and fixed timing plans in terms of average delay, number of stops, and vehicular emissions at the network level.

  15. Applying Probability Theory for the Quality Assessment of a Wildfire Spread Prediction Framework Based on Genetic Algorithms

    PubMed Central

    Cencerrado, Andrés; Cortés, Ana; Margalef, Tomàs

    2013-01-01

    This work presents a framework for assessing how the existing constraints at the time of attending an ongoing forest fire affect simulation results, both in terms of quality (accuracy) obtained and the time needed to make a decision. In the wildfire spread simulation and prediction area, it is essential to properly exploit the computational power offered by new computing advances. For this purpose, we rely on a two-stage prediction process to enhance the quality of traditional predictions, taking advantage of parallel computing. This strategy is based on an adjustment stage which is carried out by a well-known evolutionary technique: Genetic Algorithms. The core of this framework is evaluated according to the probability theory principles. Thus, a strong statistical study is presented and oriented towards the characterization of such an adjustment technique in order to help the operation managers deal with the two aspects previously mentioned: time and quality. The experimental work in this paper is based on a region in Spain which is one of the most prone to forest fires: El Cap de Creus. PMID:24453898

  16. Applying probability theory for the quality assessment of a wildfire spread prediction framework based on genetic algorithms.

    PubMed

    Cencerrado, Andrés; Cortés, Ana; Margalef, Tomàs

    2013-01-01

    This work presents a framework for assessing how the existing constraints at the time of attending an ongoing forest fire affect simulation results, both in terms of quality (accuracy) obtained and the time needed to make a decision. In the wildfire spread simulation and prediction area, it is essential to properly exploit the computational power offered by new computing advances. For this purpose, we rely on a two-stage prediction process to enhance the quality of traditional predictions, taking advantage of parallel computing. This strategy is based on an adjustment stage which is carried out by a well-known evolutionary technique: Genetic Algorithms. The core of this framework is evaluated according to the probability theory principles. Thus, a strong statistical study is presented and oriented towards the characterization of such an adjustment technique in order to help the operation managers deal with the two aspects previously mentioned: time and quality. The experimental work in this paper is based on a region in Spain which is one of the most prone to forest fires: El Cap de Creus. PMID:24453898

  17. Web-Based Application for Outliers Detection on Hotspot Data Using K-Means Algorithm and Shiny Framework

    NASA Astrophysics Data System (ADS)

    Mutiara Yoga Asmarani Suci, Agisha; Sukaesih Sitanggang, Imas

    2016-01-01

    Outlier analysis on hotspot data as an indicator of fire occurrences in Riau Province between 2001 and 2012 has been done previously, but it was of limited help in fire prevention efforts because the results could only be used by certain people and could not be accessed easily and quickly by users. The purpose of this research is to create a web-based application to detect outliers in hotspot data and to visualize the outliers by time and location. Outlier detection was performed in previous research using the k-means clustering method with global and collective outlier approaches on Riau Province hotspot data between 2001 and 2012. This work develops a web-based application using the Shiny framework with the R programming language. The application provides several functions, including summary and visualization of the selected data, clustering of hotspot data using the k-means algorithm, visualization of the clustering results and sum of squared errors (SSE), and display of global and collective outliers together with visualization of the outlier spread on a map of Riau Province.
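
    The original application is written in R with Shiny; the clustering-based outlier step it exposes can be sketched in Python roughly as follows, with the feature choice, k, and distance threshold all assumed for illustration:

        # Hedged sketch: k-means clustering followed by distance-based global outliers.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 2))                  # stand-in for hotspot features
        X = np.vstack([X, [[6.0, 6.0], [-7.0, 5.0]]])  # two injected anomalies

        km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
        dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
        threshold = dist.mean() + 3 * dist.std()
        print("outlier indices:", np.where(dist > threshold)[0])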

  18. An Algorithmic Framework for Multiobjective Optimization

    PubMed Central

    Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.

    2013-01-01

    Multiobjective (MO) optimization is an emerging field that is increasingly encountered in many application areas. Various metaheuristic techniques such as differential evolution (DE), genetic algorithms (GA), the gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted-sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with many objectives (especially more than two). In addition, extensive computational overhead emerges when dealing with hybrid algorithms. This paper discusses these issues and proposes an alternative framework that utilizes algorithmic concepts related to the problem structure to generate efficient, effective, high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795
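
    The weighted-sum scalarization mentioned above collapses the objective vector into a single function per weight vector (standard form, not the paper's specific construction); the Pareto front is traced by sweeping the weights:

        \min_{x\in\Omega} \; \sum_{i=1}^{k} w_i f_i(x), \qquad
        w_i \ge 0, \quad \sum_{i=1}^{k} w_i = 1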

  19. Four (Algorithms) in One (Bag): An Integrative Framework of Knowledge for Teaching the Standard Algorithms of the Basic Arithmetic Operations

    ERIC Educational Resources Information Center

    Raveh, Ira; Koichu, Boris; Peled, Irit; Zaslavsky, Orit

    2016-01-01

    In this article we present an integrative framework of knowledge for teaching the standard algorithms of the four basic arithmetic operations. The framework is based on a mathematical analysis of the algorithms, a connectionist perspective on teaching mathematics and an analogy with previous frameworks of knowledge for teaching arithmetic…

  20. Applying a multi-criteria genetic algorithm framework for brownfield reuse optimization: improving redevelopment options based on stakeholder preferences.

    PubMed

    Morio, Maximilian; Schädler, Sebastian; Finkel, Michael

    2013-11-30

    The reuse of underused or abandoned contaminated land, so-called brownfields, is increasingly seen as an important means for reducing the consumption of land and natural resources. Many existing decision support systems are not appropriate because they focus mainly on economic aspects, while neglecting sustainability issues. To fill this gap, we present a framework for spatially explicit, integrated planning and assessment of brownfield redevelopment options. A multi-criteria genetic algorithm allows us to determine optimal land use configurations with respect to assessment criteria and given constraints on the composition of land use classes, according to, e.g., stakeholder preferences. Assessment criteria include sustainability indicators as well as economic aspects, including remediation costs and land value. The framework is applied to a case study of a former military site near Potsdam, Germany. Emphasis is placed on the trade-off between possibly conflicting objectives (e.g., economic goals versus the need for sustainable development in the regional context of the brownfield site), which may represent different perspectives of involved stakeholders. The economic analysis reveals the trade-off between the increase in land value due to reuse and the costs for remediation required to make reuse possible. We identify various reuse options, which perform similarly well although they exhibit different land use patterns. High-cost high-value options dominated by residential land use and low-cost low-value options with less sensitive land use types may perform equally well economically. The results of the integrated analysis show that the quantitative integration of sustainability may change optimal land use patterns considerably.

  1. Towards a Framework for Evaluating and Comparing Diagnosis Algorithms

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia,David; Kuhn, Lukas; deKleer, Johan; vanGemund, Arjan; Feldman, Alexander

    2009-01-01

    Diagnostic inference involves the detection of anomalous system behavior and the identification of its cause, possibly down to a failed unit or to a parameter of a failed unit. Traditional approaches to solving this problem include expert/rule-based, model-based, and data-driven methods. Each approach (and the various techniques within each approach) uses different representations of the knowledge required to perform the diagnosis. The sensor data is expected to be combined with these internal representations to produce the diagnosis result. In spite of the availability of various diagnosis technologies, there have been only minimal efforts to develop a standardized software framework to run, evaluate, and compare different diagnosis technologies on the same system. This paper presents a framework that defines a standardized representation of the system knowledge, the sensor data, and the form of the diagnosis results, and provides a run-time architecture that can execute diagnosis algorithms, send sensor data to the algorithms at appropriate time steps from a variety of sources (including the actual physical system), and collect the resulting diagnoses. We also define a set of metrics that can be used to evaluate and compare the performance of the algorithms, and provide software to calculate the metrics.

  2. Experiences and evolutions of the ALICE DAQ Detector Algorithms framework

    NASA Astrophysics Data System (ADS)

    Chapeland, Sylvain; Carena, Franco; Carena, Wisla; Chibante Barroso, Vasco; Costa, Filippo; Denes, Ervin; Divia, Roberto; Fuchs, Ulrich; Grigore, Alexandru; Simonetti, Giuseppe; Soos, Csaba; Telesca, Adriana; Vande Vyvre, Pierre; von Haller, Barthelemy

    2012-12-01

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The 18 ALICE sub-detectors are regularly calibrated in order to achieve the most accurate physics measurements. Some of these procedures are done online in the DAQ (Data Acquisition System) so that calibration results can be used directly for detector electronics configuration before physics data taking, at run time for online event monitoring, and offline for data analysis. A framework was designed to collect statistics and compute calibration parameters, and has been used in production since 2008. This paper focuses on the recent features developed to benefit from the multi-core architecture of CPUs and to optimize the processing power available for the calibration tasks. It involves C++ base classes to effectively implement detector-specific code, with independent processing of events in parallel threads and aggregation of partial results. The Detector Algorithm (DA) framework provides utility interfaces for the handling of input and output (configuration, monitored physics data, results, logging) and for self-documentation of the produced executable. New algorithms are created quickly by inheriting the base functionality and implementing a few ad hoc virtual members, while the framework features are kept expandable thanks to the isolation of the detector calibration code. The DA control system also handles unexpected process behaviour, logs execution status, and collects performance statistics.

  3. Component-Based Framework for Subsurface Simulations

    SciTech Connect

    Palmer, Bruce J.; Fang, Yilin; Hammond, Glenn E.; Gurumoorthi, Vidhya

    2007-08-01

    Simulations in the subsurface environment represent a broad range of phenomena covering an equally broad range of scales. Developing modelling capabilities that can integrate models representing different phenomena acting at different scales presents formidable challenges from both the algorithmic and the computer science perspective. This paper describes the development of an integrated framework that will be used to combine different models into a single simulation. Initial work has focused on creating two frameworks, one for performing smooth particle hydrodynamics (SPH) simulations of fluid systems, the other for performing grid-based continuum simulations of reactive subsurface flow. The SPH framework is based on a parallel code developed for pore-scale simulations, while the continuum grid-based framework is based on the STOMP (Subsurface Transport Over Multiple Phases) code developed at PNNL. Future work will focus on combining the frameworks to perform multiscale, multiphysics simulations of reactive subsurface flow.

  4. Segmentation of cervical cell nuclei in high-resolution microscopic images: A new algorithm and a web-based software framework.

    PubMed

    Bergmeir, Christoph; García Silvente, Miguel; Benítez, José Manuel

    2012-09-01

    In order to automate cervical cancer screening tests, one of the most important and longstanding challenges is the segmentation of cell nuclei in the stained specimens. Though nuclei of isolated cells in high-quality acquisitions are often easy to segment, the problem lies in the segmentation of large numbers of nuclei with various characteristics under differing acquisition conditions in high-resolution scans of complete microscope slides. We implemented a system that enables processing of full-resolution images and propose a new algorithm for segmenting the nuclei under adequate control of the expert user. The system can work automatically or interactively guided, to allow for segmentation within the whole range of slide and image characteristics. It facilitates data storage and interaction of technical and medical experts, especially with its web-based architecture. The proposed algorithm localizes cell nuclei using a voting scheme and prior knowledge, before determining the exact shape of the nuclei by means of an elastic segmentation algorithm. After noise removal with mean-shift and median filtering, edges are extracted with a Canny edge detector. Motivated by the observation that cell nuclei are surrounded by cytoplasm and their shape is roughly elliptical, edges adjacent to the background are removed. A randomized Hough transform for ellipses finds candidate nuclei, which are then processed by a level-set algorithm. The algorithm is tested and compared to other algorithms on a database containing 207 images acquired from two different microscope slides, with promising results.
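
    Part of this pipeline can be sketched with scikit-image building blocks (Canny edges followed by an ellipse Hough transform); the sample image, parameter values, and the omission of mean-shift filtering, background-edge pruning, and level-set refinement are simplifications for illustration:

        # Hedged sketch: edge extraction and elliptical candidate detection.
        from skimage import data
        from skimage.color import rgb2gray
        from skimage.feature import canny
        from skimage.transform import hough_ellipse

        image = rgb2gray(data.coffee()[0:220, 160:420])   # stand-in for a microscopy tile
        edges = canny(image, sigma=2.0, low_threshold=0.55, high_threshold=0.8)

        result = hough_ellipse(edges, accuracy=20, threshold=250, min_size=100, max_size=120)
        result.sort(order='accumulator')                  # keep the best-voted candidate
        if len(result):
            yc, xc, a, b = (int(round(v)) for v in list(result[-1])[1:5])
            print("candidate ellipse (centre, axes):", (xc, yc), (a, b))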

  5. Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework.

    PubMed

    Matej, Samuel; Daube-Witherspoon, Margaret E; Karp, Joel S

    2016-05-01

    Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of time-of-flight (TOF) scanners, which are less sensitive to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (DIRECT: direct image reconstruction for TOF) that provides a convenient TOF data partitioning framework and leads to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both the TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to match the bias-versus-variance performance of iterative TOF reconstruction with a matched resolution model.

  6. Overarching framework for data-based modelling

    NASA Astrophysics Data System (ADS)

    Schelter, Björn; Mader, Malenka; Mader, Wolfgang; Sommerlade, Linda; Platt, Bettina; Lai, Ying-Cheng; Grebogi, Celso; Thiel, Marco

    2014-02-01

    One of the main modelling paradigms for complex physical systems is networks. When estimating the network structure from measured signals, several assumptions, such as stationarity, are typically made in the estimation process. Violating these assumptions renders standard analysis techniques fruitless. We here propose a framework to estimate the network structure from measurements of arbitrary non-linear, non-stationary, stochastic processes, together with a rigorous mathematical theory that underlies it. Based on this theory, we present a highly efficient algorithm and the corresponding statistics, which are immediately and sensibly applicable to measured signals. We demonstrate its performance in a simulation study. In experiments on transitions between vigilance stages in rodents, we infer small network structures with complex, time-dependent interactions; this suggests biomarkers for such transitions, the key to understanding and diagnosing numerous diseases such as dementia. We argue that the suggested framework combines features that other approaches followed so far lack.

  7. A framework for data-driven algorithm testing

    NASA Astrophysics Data System (ADS)

    Funk, Wolfgang; Kirchner, Daniel

    2005-03-01

    We describe the requirements, design, architecture and implementation of a framework that facilitates the setup, management and realisation of data-driven performance and acceptance tests for algorithms. The framework builds on standard components, supports distributed tests on heterogeneous platforms, is scalable and requires minimum integration efforts for algorithm providers by chaining command line driven applications. We use XML as test specification language, so tests can be set up in a declarative way without any programming effort and the test specification can easily be validated against an XML schema. We consider a test scenario where each test consists of one to many test processes and each process works on a representative set of input data that are accessible as data files. The test process is built up of operations that are executed successively in a predefined sequence. Each operation may be one of the algorithms under test or a supporting functionality (e.g. a file format conversion utility). The test definition and the test results are made persistent in a relational database. We decided to use a J2EE compliant application server as persistence engine, thus the natural choice is to implement the test client as Java application. Java is available for the most important operating systems, provides control of OS-processes, including the input and output channels and has extensive support for XML processing.

  8. Kodiak: An Implementation Framework for Branch and Bound Algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Andrew P.; Munoz, Cesar A.; Narkawicz, Anthony J.; Markevicius, Mantas

    2015-01-01

    Recursive branch and bound algorithms are often used to refine and isolate solutions to several classes of global optimization problems. A rigorous computation framework for the solution of systems of equations and inequalities involving nonlinear real arithmetic over hyper-rectangular variable and parameter domains is presented. It is derived from a generic branch and bound algorithm that has been formally verified, and utilizes self-validating enclosure methods, namely interval arithmetic and, for polynomials and rational functions, Bernstein expansion. Since bounds computed by these enclosure methods are sound, this approach may be used reliably in software verification tools. Advantage is taken of the partial derivatives of the constraint functions involved in the system, firstly to reduce the branching factor by the use of bisection heuristics and secondly to permit the computation of bifurcation sets for systems of ordinary differential equations. The associated software development, Kodiak, is presented, along with examples of three different branch and bound problem types it implements.
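
    A toy illustration of the branch-and-bound recursion on boxes, assuming naive interval bounds for a single constraint f(x) = x^2 - 2 <= 0; Kodiak itself uses verified interval arithmetic and Bernstein enclosures rather than this hand-rolled bound:

        # Hedged sketch: bisect the domain, bound f on each box, prune decided boxes.
        def f_interval(lo, hi):
            """Interval enclosure of f(x) = x*x - 2 on [lo, hi]."""
            candidates = [lo * lo, hi * hi]
            sq_lo = 0.0 if lo <= 0.0 <= hi else min(candidates)
            return sq_lo - 2, max(candidates) - 2

        def branch_and_bound(lo, hi, tol=1e-3):
            f_lo, f_hi = f_interval(lo, hi)
            if f_hi <= 0:                  # whole box certainly satisfies f <= 0
                return [(lo, hi)]
            if f_lo > 0:                   # whole box certainly violates f <= 0
                return []
            if hi - lo < tol:              # undecided but small: keep as "possible"
                return [(lo, hi)]
            mid = 0.5 * (lo + hi)
            return branch_and_bound(lo, mid, tol) + branch_and_bound(mid, hi, tol)

        boxes = branch_and_bound(0.0, 3.0)
        print("approximate feasible region:", boxes[0][0], "to", boxes[-1][1])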

  9. A Formal Framework for the Analysis of Algorithms That Recover From Loss of Separation

    NASA Technical Reports Server (NTRS)

    Butler, RIcky W.; Munoz, Cesar A.

    2008-01-01

    We present a mathematical framework for the specification and verification of state-based conflict resolution algorithms that recover from loss of separation. In particular, we propose rigorous definitions of horizontal and vertical maneuver correctness that yield horizontal and vertical separation, respectively, in a bounded amount of time. We also provide sufficient conditions for independent correctness, i.e., separation under the assumption that only one aircraft maneuvers, and for implicitly coordinated correctness, i.e., separation under the assumption that both aircraft maneuver. An important benefit of this approach is that different aircraft can execute different algorithms and implicit coordination will still be achieved, as long as they all meet the explicit criteria of the framework. Towards this end we have sought to make the criteria as general as possible. The framework presented in this paper has been formalized and mechanically verified in the Prototype Verification System (PVS).

  10. A Framework of Algorithms: Computing the Bias and Prestige of Nodes in Trust Networks

    PubMed Central

    Li, Rong-Hua; Yu, Jeffrey Xu; Huang, Xin; Cheng, Hong

    2012-01-01

    A trust network is a social network in which edges represent the trust relationship between two nodes. In a trust network, a fundamental question is how to assess and compute the bias and prestige of the nodes, where the bias of a node measures its trustworthiness and the prestige of a node measures its importance. A larger bias implies lower trustworthiness, and a larger prestige implies higher importance. In this paper, we define a vector-valued contractive function to characterize the bias vector, which results in a rich family of bias measurements, and we propose a framework of algorithms for computing the bias and prestige of nodes in trust networks. Based on our framework, we develop four algorithms that can calculate the bias and prestige of nodes effectively and robustly. The time and space complexities of all our algorithms are linear with respect to the size of the graph, so our algorithms scale to large datasets. We evaluate our algorithms using five real datasets. The experimental results demonstrate the effectiveness, robustness, and scalability of our algorithms. PMID:23239990

  11. Grid-based Visualization Framework

    NASA Astrophysics Data System (ADS)

    Thiebaux, M.; Tangmunarunkit, H.; Kesselman, C.

    2003-12-01

    Advances in science and engineering have put high demands on tools for high-performance large-scale visual data exploration and analysis. For example, earthquake scientists can now study earthquake phenomena from first principle physics-based simulations. These simulations can generate large amounts of data, possibly high spatial resolution, and long time series. Single-system visualization software running on commodity machines cannot scale up to the large amounts of data generated by these simulations. To address this problem, we propose a flexible and extensible Grid-based visualization framework for time-critical, interactively controlled visual browsing of spatially and temporally large datasets in a Grid environment. Our framework leverages Grid resources for scalable computation and data storage to maintain performance and interactivity with large visualization jobs. Our framework utilizes Globus Toolkit 2.4 components for security (i.e., GSI), resource allocation and management (i.e., DUROC, GRAM) and communication (i.e., Globus-IO) to couple commodity desktops with remote, scalable storage and computational resources in a Grid for interactive data exploration. There are two major components in this framework---Grid Data Transport (GDT) and the Grid Visualization Utility (GVU). GDT provides libraries for performing parallel data filtering and parallel data exchange among Grid resources. GDT allows arbitrary data filtering to be integrated into the system. It also facilitates multi-tiered pipeline topology construction of compute resources and displays. In addition to scientific visualization applications, GDT can be used to support other applications that require parallel processing and parallel transfer of partial ordered independent files, such as file-set transfer. On top of GDT, we have developed the Grid Visualization Utility (GVU), which is designed to assist visualization dataset management, including file formatting, data transport and automatic

  12. Rate distortion optimization for H.264 interframe coding: a general framework and algorithms.

    PubMed

    Yang, En-Hui; Yu, Xiang

    2007-07-01

    Rate-distortion (RD) optimization for H.264 interframe coding with complete baseline decoding compatibility is investigated on a frame basis. Using soft decision quantization (SDQ) rather than the standard hard decision quantization, we first establish a general framework in which motion estimation, quantization, and entropy coding (in H.264) for the current frame can be jointly designed to minimize a true RD cost given previously coded reference frames. We then propose three RD optimization algorithms--a graph-based algorithm for near-optimal SDQ in H.264 baseline encoding given motion estimation and quantization step sizes, an algorithm for near-optimal residual coding in H.264 baseline encoding given motion estimation, and an iterative overall algorithm to optimize H.264 baseline encoding for each individual frame given previously coded reference frames--with them embedded in the indicated order. The graph-based algorithm for near-optimal SDQ is the core; given motion estimation and quantization step sizes, it is guaranteed to perform optimal SDQ if the weak adjacent-block dependency utilized in the context-adaptive variable length coding of H.264 is ignored for optimization. The proposed algorithms have been implemented based on the reference encoder JM82 of H.264 with complete compatibility to the baseline profile. Experiments show that, for a set of typical video test sequences, the graph-based algorithm for near-optimal SDQ, the algorithm for near-optimal residual coding, and the overall algorithm achieve, on average, 6%, 8%, and 12% rate reduction, respectively, at the same PSNR (ranging from 30 to 38 dB) when compared with the RD optimization method implemented in the H.264 reference software.
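
    The RD cost minimized per frame is, in the usual Lagrangian formulation (a standard form assumed here rather than quoted from the paper),

        J = D + \lambda R,

    where D is the reconstruction distortion, R the bit rate of the coded frame, and \lambda the Lagrange multiplier coupling the two; soft decision quantization chooses the quantized coefficients to minimize J jointly rather than rounding each coefficient independently.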

  13. A Framework for Decision Support Systems Based on Zachman Framework

    NASA Astrophysics Data System (ADS)

    Ostadzadeh, S. Shervin; Habibi, Jafar; Ostadzadeh, S. Arash

    Recent challenges have created an inevitable tendency for enterprises to move towards organizing their information activities in a comprehensive way. In this respect, Enterprise Architecture (EA) has proven to be the leading option for the development and maintenance of information systems. EA clearly provides a thorough outline of the whole information system comprising an enterprise. To establish such an outline, a logical framework needs to be laid over the entire information system. The Zachman framework (ZF) has been widely accepted as a standard scheme for identifying and organizing the descriptive representations that have critical roles in enterprise management and system development. In this paper, we propose a framework based on ZF for Decision Support Systems (DSS). Furthermore, a modeling approach based on Model-Driven Architecture (MDA) is utilized to obtain compatible models for all cells in the framework. The efficiency of the proposed framework is examined through a case study.

  14. Optimized Uncertainty Quantification Algorithm Within a Dynamic Event Tree Framework

    SciTech Connect

    J. W. Nielsen; Akira Tokuhiro; Robert Hiromoto

    2014-06-01

    Methods for developing Phenomenological Identification and Ranking Tables (PIRT) for nuclear power plants have been a useful tool in providing insight into modelling aspects that are important to safety. These methods have involved expert knowledge of reactor plant transients and thermal-hydraulic codes to identify the areas of highest importance. Quantified PIRT (QPIRT) provides a rigorous method for quantifying the phenomena that can have the greatest impact. The transients that are evaluated and the timing of those events are typically developed in collaboration with Probabilistic Risk Analysis (PRA). Though quite effective in evaluating risk, traditional PRA methods lack the capability to evaluate complex dynamic systems where end states may vary as a function of the transition time from physical state to physical state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of complex dynamic systems. A limitation of DPRA is its potential for state or combinatorial explosion, which grows with the number of components as well as with the sampling of transition times from state to state of the entire system. This paper presents a method for performing QPIRT within a dynamic event tree framework such that the timing events resulting in the highest probabilities of failure are captured, with the QPIRT performed simultaneously with the discrete dynamic event tree evaluation. The simulation thus yields a formal QPIRT for each end state. Because the use of dynamic event trees results in state explosion as the number of possible component states increases, this paper utilizes a branch-and-bound algorithm to optimize the solution of the dynamic event trees and summarizes the methods used to implement it in solving the discrete dynamic event trees.

  15. Evaluating cloud retrieval algorithms with the ARM BBHRP framework

    SciTech Connect

    Mlawer,E.; Dunn,M.; Mlawer, E.; Shippert, T.; Troyan, D.; Johnson, K. L.; Miller, M. A.; Delamere, J.; Turner, D. D.; Jensen, M. P.; Flynn, C.; Shupe, M.; Comstock, J.; Long, C. N.; Clough, S. T.; Sivaraman, C.; Khaiyer, M.; Xie, S.; Rutan, D.; Minnis, P.

    2008-03-10

    Climate and weather prediction models require accurate calculations of vertical profiles of radiative heating. Although heating rate calculations cannot be directly validated due to the lack of corresponding observations, surface and top-of-atmosphere measurements can indirectly establish the quality of computed heating rates through validation of the calculated irradiances at the atmospheric boundaries. The ARM Broadband Heating Rate Profile (BBHRP) project, a collaboration of all the working groups in the program, was designed with these heating rate validations as a key objective. Given the large dependence of radiative heating rates on cloud properties, a critical component of BBHRP radiative closure analyses has been the evaluation of cloud microphysical retrieval algorithms. This evaluation is an important step in establishing the necessary confidence in the continuous profiles of computed radiative heating rates produced by BBHRP at the ARM Climate Research Facility (ACRF) sites that are needed for modeling studies. This poster details the continued effort to evaluate cloud property retrieval algorithms within the BBHRP framework, a key focus of the project this year. A requirement for the computation of accurate heating rate profiles is a robust cloud microphysical product that captures the occurrence, height, and phase of clouds above each ACRF site. Various approaches to retrieve the microphysical properties of liquid, ice, and mixed-phase clouds have been processed in BBHRP for the ACRF Southern Great Plains (SGP) and the North Slope of Alaska (NSA) sites. These retrieval methods span a range of assumptions concerning the parameterization of cloud location, particle density, size, shape, and involve different measurement sources. We will present the radiative closure results from several different retrieval approaches for the SGP site, including those from Microbase, the current 'reference' retrieval approach in BBHRP. At the NSA, mixed-phase clouds and

  16. Comparing, optimizing, and benchmarking quantum-control algorithms in a unifying programming framework

    NASA Astrophysics Data System (ADS)

    Machnes, S.; Sander, U.; Glaser, S. J.; de Fouquières, P.; Gruslys, A.; Schirmer, S.; Schulte-Herbrüggen, T.

    2011-08-01

    For paving the way to novel applications in quantum simulation, computation, and technology, increasingly large quantum systems have to be steered with high precision. It is a typical task amenable to numerical optimal control to turn the time course of pulses, i.e., piecewise constant control amplitudes, iteratively into an optimized shape. Here, we present a comparative study of optimal-control algorithms for a wide range of finite-dimensional applications. We focus on the most commonly used algorithms: GRAPE methods which update all controls concurrently, and Krotov-type methods which do so sequentially. Guidelines for their use are given and open research questions are pointed out. Moreover, we introduce a unifying algorithmic framework, DYNAMO (dynamic optimization platform), designed to provide the quantum-technology community with a convenient matlab-based tool set for optimal control. In addition, it gives researchers in optimal-control techniques a framework for benchmarking and comparing newly proposed algorithms with the state of the art. It allows a mix-and-match approach with various types of gradients, update and step-size methods as well as subspace choices. Open-source code including examples is made available at http://qlib.info.
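
    A minimal sketch of the quantity such pulse-update iterations evaluate: the gate fidelity of the propagator generated by piecewise-constant control amplitudes on a toy single-qubit system. The GRAPE/Krotov update rules themselves are not shown, and the Hamiltonians and target gate are assumptions for illustration:

        # Hedged sketch: piecewise-constant propagation and a normalized gate fidelity.
        import numpy as np
        from scipy.linalg import expm

        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sz = np.array([[1, 0], [0, -1]], dtype=complex)
        H0, Hc = 0.5 * sz, 0.5 * sx                  # drift and control Hamiltonians
        U_target = expm(-1j * np.pi / 2 * Hc)        # e.g. an X-type target gate

        def propagator(u, dt=0.1):
            U = np.eye(2, dtype=complex)
            for uk in u:                             # piecewise-constant slices
                U = expm(-1j * dt * (H0 + uk * Hc)) @ U
            return U

        def fidelity(u):
            U = propagator(u)
            return abs(np.trace(U_target.conj().T @ U)) ** 2 / 4

        print(fidelity(np.zeros(20)), fidelity(np.full(20, np.pi / 4)))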

  17. Comparing, optimizing, and benchmarking quantum-control algorithms in a unifying programming framework

    SciTech Connect

    Machnes, S.; Sander, U.; Glaser, S. J.; Schulte-Herbrueggen, T.; Fouquieres, P. de; Gruslys, A.; Schirmer, S.

    2011-08-15

    For paving the way to novel applications in quantum simulation, computation, and technology, increasingly large quantum systems have to be steered with high precision. It is a typical task amenable to numerical optimal control to turn the time course of pulses, i.e., piecewise constant control amplitudes, iteratively into an optimized shape. Here, we present a comparative study of optimal-control algorithms for a wide range of finite-dimensional applications. We focus on the most commonly used algorithms: GRAPE methods which update all controls concurrently, and Krotov-type methods which do so sequentially. Guidelines for their use are given and open research questions are pointed out. Moreover, we introduce a unifying algorithmic framework, DYNAMO (dynamic optimization platform), designed to provide the quantum-technology community with a convenient matlab-based tool set for optimal control. In addition, it gives researchers in optimal-control techniques a framework for benchmarking and comparing newly proposed algorithms with the state of the art. It allows a mix-and-match approach with various types of gradients, update and step-size methods as well as subspace choices. Open-source code including examples is made available at http://qlib.info.

  18. An algorithmic interactive planning framework in support of sustainable technologies

    NASA Astrophysics Data System (ADS)

    Prica, Marija D.

    This thesis addresses the difficult problem of generation expansion planning that employs the most effective technologies in today's changing electric energy industry. The electrical energy industry, in both the industrialized world and in developing countries, is experiencing transformation in a number of different ways. This transformation is driven by major technological breakthroughs (such as the influx of unconventional smaller-scale resources), by industry restructuring, changing environmental objectives, and the ultimate threat of resource scarcity. This thesis proposes a possible planning framework in support of sustainable technologies where sustainability is viewed as a mix of multiple attributes ranging from reliability and environmental impact to short- and long-term efficiency. The idea of centralized peak-load pricing, which accounts for the tradeoffs between cumulative operational effects and the cost of new investments, is the key concept in support of long-term planning in the changing industry. To start with, an interactive planning framework for generation expansion is posed as a distributed decision-making model. In order to reconcile the distributed sub-objectives of different decision makers with system-wide sustainability objectives, a new concept of distributed interactive peak load pricing is proposed. To be able to make the right decisions, the decision makers must have sufficient information about the estimated long-term electricity prices. The sub-objectives of power plant owners and load-serving entities are profit maximization. Optimized long-term expansion plans based on predicted electricity prices are communicated to the system-wide planning authority as long-run bids. The long-term expansion bids are cleared by the coordinating planner so that the system-wide long-term performance criteria are satisfied. The interactions between generation owners and the coordinating planning authority are repeated annually. We view the proposed

  19. Optimizing SRF Gun Cavity Profiles in a Genetic Algorithm Framework

    SciTech Connect

    Alicia Hofler, Pavel Evtushenko, Frank Marhauser

    2009-09-01

    Automation of DC photoinjector designs using a genetic algorithm (GA) based optimization is an accepted practice in accelerator physics. Allowing the gun cavity field profile shape to be varied can extend the utility of this optimization methodology to superconducting and normal conducting radio frequency (SRF/RF) gun based injectors. Finding optimal field and cavity geometry configurations can provide guidance for cavity design choices and verify existing designs. We have considered two approaches for varying the electric field profile. The first is to determine the optimal field profile shape that should be used independent of the cavity geometry, and the other is to vary the geometry of the gun cavity structure to produce an optimal field profile. The first method can provide a theoretical optimal and can illuminate where possible gains can be made in field shaping. The second method can produce more realistically achievable designs that can be compared to existing designs. In this paper, we discuss the design and implementation for these two methods for generating field profiles for SRF/RF guns in a GA based injector optimization scheme and provide preliminary results.

  20. Adaptive image contrast enhancement algorithm for point-based rendering

    NASA Astrophysics Data System (ADS)

    Xu, Shaoping; Liu, Xiaoping P.

    2015-03-01

    Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware by utilizing traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform the traditional triangle-based rendering in speed when it is applied to highly complex soft tissue cutting models. Nevertheless, the PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms, and experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm can be utilized in (or compatible with) various variants of the conventional PBR algorithm.

  1. On effectiveness of network sensor-based defense framework

    NASA Astrophysics Data System (ADS)

    Zhang, Difan; Zhang, Hanlin; Ge, Linqiang; Yu, Wei; Lu, Chao; Chen, Genshe; Pham, Khanh

    2012-06-01

    Cyber attacks are increasing in frequency, impact, and complexity, which demonstrates extensive network vulnerabilities with the potential for serious damage. Defending against cyber attacks calls for distributed collaborative monitoring, detection, and mitigation. To this end, we develop a network sensor-based defense framework, with the aim of handling network security awareness, mitigation, and prediction. We implement the prototypical system and show its effectiveness in detecting known attacks, such as port-scanning and distributed denial-of-service (DDoS). Based on this framework, we also implement statistical-based and sequential testing-based detection techniques and compare their detection performance. Future defensive algorithms can be provisioned within the proposed framework for combating cyber attacks.
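
    A minimal sketch of what a sequential testing-based detector of the kind mentioned above might look like, assuming a Poisson model of per-interval connection counts and Wald-style thresholds; the traffic model and all parameters are assumptions, not the paper's implementation.

    ```python
    # Sequential probability ratio test (SPRT) over a stream of per-interval
    # connection counts; Poisson rates and error rates are illustrative assumptions.
    import math

    def sprt(counts, lam0=5.0, lam1=20.0, alpha=0.01, beta=0.01):
        """Return 'attack', 'normal', or 'undecided' for a sequence of counts."""
        upper = math.log((1 - beta) / alpha)       # accept H1 (attack)
        lower = math.log(beta / (1 - alpha))       # accept H0 (normal)
        llr = 0.0
        for x in counts:
            # log-likelihood ratio of Poisson(lam1) vs Poisson(lam0) for one sample
            llr += x * math.log(lam1 / lam0) - (lam1 - lam0)
            if llr >= upper:
                return "attack"
            if llr <= lower:
                return "normal"
        return "undecided"

    print(sprt([4, 6, 5, 3, 7]))          # typical background traffic -> 'normal'
    print(sprt([18, 25, 22, 30, 19]))     # sustained surge, e.g. DDoS -> 'attack'
    ```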

  2. A Framework for Enterprise Operating Systems Based on Zachman Framework

    NASA Astrophysics Data System (ADS)

    Ostadzadeh, S. Shervin; Rahmani, Amir Masoud

    Nowadays, the Operating System (OS) isn't only the software that runs your computer. In the typical information-driven organization, the operating system is part of a much larger platform for applications and data that extends across the LAN, WAN and Internet. An OS cannot be an island unto itself; it must work with the rest of the enterprise. Enterprise-wide applications require an Enterprise Operating System (EOS). The adoption of enterprise operating systems has created an inevitable tendency for enterprises to organize their information activities in a comprehensive way. In this respect, Enterprise Architecture (EA) has proven to be the leading option for development and maintenance of enterprise operating systems. EA clearly provides a thorough outline of the whole information system comprising an enterprise. To establish such an outline, a logical framework needs to be laid upon the entire information system. The Zachman Framework (ZF) has been widely accepted as a standard scheme for identifying and organizing descriptive representations that have prominent roles in enterprise-wide system development. In this paper, we propose a framework based on ZF for enterprise operating systems. The presented framework helps developers to design and justify completely integrated business, IT, and operating systems, which results in improved project success rates.

  3. A Test Generation Framework for Distributed Fault-Tolerant Algorithms

    NASA Technical Reports Server (NTRS)

    Goodloe, Alwyn; Bushnell, David; Miner, Paul; Pasareanu, Corina S.

    2009-01-01

    Heavyweight formal methods such as theorem proving have been successfully applied to the analysis of safety critical fault-tolerant systems. Typically, the models and proofs performed during such analysis do not inform the testing process of actual implementations. We propose a framework for generating test vectors from specifications written in the Prototype Verification System (PVS). The methodology uses a translator to produce a Java prototype from a PVS specification. Symbolic (Java) PathFinder is then employed to generate a collection of test cases. A small example is employed to illustrate how the framework can be used in practice.

  4. Crystal Symmetry Algorithms in a High-Throughput Framework for Materials

    NASA Astrophysics Data System (ADS)

    Taylor, Richard

    The high-throughput framework AFLOW that has been developed and used successfully over the last decade is improved to include fully-integrated software for crystallographic symmetry characterization. The standards used in the symmetry algorithms conform with the conventions and prescriptions given in the International Tables of Crystallography (ITC). A standard cell choice with standard origin is selected, and the space group, point group, Bravais lattice, crystal system, lattice system, and representative symmetry operations are determined. Following the conventions of the ITC, the Wyckoff sites are also determined and their labels and site symmetry are provided. The symmetry code makes no assumptions on the input cell orientation, origin, or reduction and has been integrated in the AFLOW high-throughput framework for materials discovery by adding to the existing code base and making use of existing classes and functions. The software is written in object-oriented C++ for flexibility and reuse. A performance analysis and examination of the algorithms scaling with cell size and symmetry is also reported.

  5. An early illness recognition framework using a temporal Smith Waterman algorithm and NLP.

    PubMed

    Hajihashemi, Zahra; Popescu, Mihail

    2013-01-01

    In this paper we propose a framework for detecting health patterns based on non-wearable sensor sequence similarity and natural language processing (NLP). In TigerPlace, an aging-in-place facility in Columbia, MO, we deployed 47 sensor networks together with a nursing electronic health record (EHR) system to provide early illness recognition. The proposed framework utilizes sensor sequence similarity and NLP on EHR nursing comments to automatically notify the physician when health problems are detected. The reported methodology is inspired by genomic sequence annotation using similarity algorithms such as Smith Waterman (SW). Similarly, for each sensor sequence, we associate health concepts extracted from the nursing notes using MetaMap, an NLP tool provided by the Unified Medical Language System (UMLS). Since sensor sequences, unlike genomic ones, have an associated time dimension, we propose a temporal variant of SW (TSW) to account for time. The main challenges presented by our framework are finding the most suitable time sequence similarity and the aggregation of the retrieved UMLS concepts. On a pilot dataset from three TigerPlace residents, with a total of 1685 sensor days and 626 nursing records, we obtained an average precision of 0.64 and a recall of 0.37.
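
    A minimal sketch of how a time-aware Smith Waterman alignment between two sensor-event sequences could be set up; the match/mismatch/gap scores and the linear time-difference penalty (tau) are assumptions, not the authors' TSW scoring.

    ```python
    # Time-aware local alignment of two sensor-event sequences; scores and the
    # temporal penalty are illustrative assumptions, not the paper's parameters.
    def tsw_score(seq_a, seq_b, match=2.0, mismatch=-1.0, gap=-1.0, tau=10.0):
        """seq_a, seq_b: lists of (event_label, timestamp). Returns the best local score."""
        n, m = len(seq_a), len(seq_b)
        H = [[0.0] * (m + 1) for _ in range(n + 1)]
        best = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                (ea, ta), (eb, tb) = seq_a[i - 1], seq_b[j - 1]
                s = match if ea == eb else mismatch
                s -= abs(ta - tb) / tau          # temporal penalty grows with time offset
                H[i][j] = max(0.0,
                              H[i - 1][j - 1] + s,
                              H[i - 1][j] + gap,
                              H[i][j - 1] + gap)
                best = max(best, H[i][j])
        return best

    a = [("bed", 0), ("bath", 5), ("kitchen", 20)]
    b = [("bed", 1), ("bath", 7), ("kitchen", 40)]
    print(tsw_score(a, b))   # higher scores indicate similar, temporally aligned activity
    ```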

  6. Ontological Problem-Solving Framework for Assigning Sensor Systems and Algorithms to High-Level Missions

    PubMed Central

    Qualls, Joseph; Russomanno, David J.

    2011-01-01

    The lack of knowledge models to represent sensor systems, algorithms, and missions makes opportunistically discovering a synthesis of systems and algorithms that can satisfy high-level mission specifications impractical. A novel ontological problem-solving framework has been designed that leverages knowledge models describing sensors, algorithms, and high-level missions to facilitate automated inference of assigning systems to subtasks that may satisfy a given mission specification. To demonstrate the efficacy of the ontological problem-solving architecture, a family of persistence surveillance sensor systems and algorithms has been instantiated in a prototype environment to demonstrate the assignment of systems to subtasks of high-level missions. PMID:22164081

  7. Smell Detection Agent Based Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Vinod Chandra, S. S.

    2016-09-01

    In this paper, a novel nature-inspired optimization algorithm is presented in which the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and the subsequent iteration of the agents to resolve a path. It can be applied to computational problems that involve path finding, and its implementation can be treated as a shortest-path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is therefore useful for NP-hard problems related to path discovery as well as for many practical optimization problems.

  8. An efficient and effective region-based image retrieval framework.

    PubMed

    Jing, Feng; Li, Mingjing; Zhang, Hong-Jiang; Zhang, Bo

    2004-05-01

    An image retrieval framework that integrates efficient region-based representation in terms of storage and complexity and effective on-line learning capability is proposed. The framework consists of methods for region-based image representation and comparison, indexing using modified inverted files, relevance feedback, and learning region weighting. By exploiting a vector quantization method, both compact and sparse (vector) region-based image representations are achieved. Using the compact representation, an indexing scheme similar to the inverted file technology and an image similarity measure based on Earth Mover's Distance are presented. Moreover, the vector representation facilitates a weighted query point movement algorithm and the compact representation enables a classification-based algorithm for relevance feedback. Based on users' feedback information, a region weighting strategy is also introduced to optimally weight the regions and enable the system to self-improve. Experimental results on a database of 10,000 general-purpose images demonstrate the efficiency and effectiveness of the proposed framework.

  9. Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms

    PubMed Central

    Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon

    2011-01-01

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software—BioImage Suite (bioimagesuite.org). PMID:21249532

  10. An Intrusion Detection Algorithm Based On NFPA

    NASA Astrophysics Data System (ADS)

    Anming, Zhong

    A process-oriented intrusion detection algorithm based on a Probabilistic Automaton with No Final probabilities (NFPA) is introduced; the system call sequence of a process is used as the source data. By using information from the system call sequences of both normal and anomalous processes, anomaly detection and misuse detection are efficiently combined. Experiments show better performance of our algorithm compared to the classical algorithm in this field.
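
    The abstract gives no implementation detail, so the following sketch uses a first-order Markov model over system calls as a simple stand-in for probabilistic-automaton scoring of a process trace; the model, the probability floor, and the traces are assumptions, not the NFPA algorithm itself.

    ```python
    # Stand-in for probabilistic scoring of system-call sequences: a first-order
    # Markov model trained on normal traces flags traces with unusually low
    # per-transition log-likelihood. Illustrative only; not the NFPA construction.
    import math
    from collections import defaultdict

    def train(traces):
        counts = defaultdict(lambda: defaultdict(int))
        for trace in traces:
            for a, b in zip(trace, trace[1:]):
                counts[a][b] += 1
        return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
                for a, nxt in counts.items()}

    def score(trace, probs, floor=1e-6):
        """Average log-probability per transition; low values suggest an anomaly."""
        ll = 0.0
        for a, b in zip(trace, trace[1:]):
            ll += math.log(probs.get(a, {}).get(b, floor))
        return ll / max(1, len(trace) - 1)

    normal = [["open", "read", "write", "close"]] * 50
    model = train(normal)
    print(score(["open", "read", "write", "close"], model))   # near 0 (likely normal)
    print(score(["open", "exec", "socket", "close"], model))  # strongly negative (anomalous)
    ```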

  11. 3D magnetic sources' framework estimation using Genetic Algorithm (GA)

    NASA Astrophysics Data System (ADS)

    Ponte-Neto, C. F.; Barbosa, V. C.

    2008-05-01

    We present a method for inverting the total-field anomaly to determine the framework of simple 3D magnetic sources such as batholiths, dikes, sills, geological contacts, and kimberlite and lamproite pipes. We use a GA to obtain the magnetic sources' frameworks and their magnetic features simultaneously. Specifically, we estimate the magnetization direction (inclination and declination), the total dipole moment intensity, and the horizontal and vertical positions, in Cartesian coordinates, of a finite set of elementary magnetic dipoles. The spatial distribution of these magnetic dipoles composes the skeletal outlines of the geologic sources. We assume that the geologic sources have a homogeneous magnetization distribution and, thus, all dipoles have the same magnetization direction and dipole moment intensity. To implement the GA, we use real-valued encoding with crossover, mutation, and elitism. To obtain a unique and stable solution, we set lower and upper bounds on declination and inclination of [0°, 360°] and [-90°, 90°], respectively. We also set the criterion of minimum scattering of the dipole-position coordinates, to guarantee that the spatial distribution of the dipoles (defining the source skeleton) is as close as possible to a continuous distribution. To this end, we fix the upper and lower bounds of the dipole moment intensity and evaluate the dipole-position estimates. If the dipole scattering is greater than a value expected by the interpreter, the upper bound of the dipole moment intensity is reduced by 10%. We repeat this procedure until the dipole scattering and the data fitting are acceptable. We apply our method to noise-corrupted magnetic data from simulated 3D magnetic sources with simple geometries located at different depths. In tests simulating sources such as a sphere and a cube, all estimates of the dipole coordinates agree with the center of mass of these sources. For elongated prismatic sources oriented in an arbitrary direction, we estimate
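
    A generic sketch of the real-valued GA machinery described above (arithmetic crossover, Gaussian mutation, elitism, and bound constraints), with the magnetic dipole forward model replaced by a toy misfit function; all parameters, bounds, and the objective are illustrative assumptions.

    ```python
    # Generic real-valued GA loop with crossover, mutation, elitism, and bounds.
    # The misfit is a placeholder; a dipole forward model would replace it in practice.
    import numpy as np

    rng = np.random.default_rng(0)
    lower = np.array([0.0, -90.0, 0.0])        # e.g. declination, inclination, moment (assumed)
    upper = np.array([360.0, 90.0, 10.0])

    def misfit(x):                             # placeholder for ||observed - predicted anomaly||
        return np.sum((x - np.array([120.0, 30.0, 2.5])) ** 2)

    pop = rng.uniform(lower, upper, size=(40, 3))
    for gen in range(200):
        fit = np.array([misfit(p) for p in pop])
        order = np.argsort(fit)
        elite = pop[order[:4]]                 # elitism: keep the best individuals
        children = []
        while len(children) < len(pop) - len(elite):
            i, j = rng.choice(order[:20], size=2, replace=False)
            w = rng.uniform(0.0, 1.0, size=3)
            child = w * pop[i] + (1 - w) * pop[j]                      # arithmetic crossover
            child += rng.normal(0.0, 0.05, size=3) * (upper - lower)   # Gaussian mutation
            children.append(np.clip(child, lower, upper))              # enforce bounds
        pop = np.vstack([elite, children])
    print("best estimate:", pop[np.argmin([misfit(p) for p in pop])])
    ```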

  12. QPSO-based adaptive DNA computing algorithm.

    PubMed

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a new computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for improving DNA computing is proposed. This approach aims to run the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rate, and fitness function of the DNA computing algorithm are simultaneously tuned in an adaptive process; (2) the adaptive algorithm is driven by QPSO for goal-driven progress, faster operation, and flexibility with respect to the data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate the ability to provide effective optimization, considerable convergence speed, and high accuracy for the DNA computing algorithm.

  13. CRBLASTER: A Parallel-Processing Computational Framework for Embarrassingly Parallel Image-Analysis Algorithms

    NASA Astrophysics Data System (ADS)

    Mighell, Kenneth John

    2010-10-01

    The development of parallel-processing image-analysis codes is generally a challenging task that requires complicated choreography of interprocessor communications. If, however, the image-analysis algorithm is embarrassingly parallel, then the development of a parallel-processing implementation of that algorithm can be a much easier task to accomplish because, by definition, there is little need for communication between the compute processes. I describe the design, implementation, and performance of a parallel-processing image-analysis application, called crblaster, which does cosmic-ray rejection of CCD images using the embarrassingly parallel l.a.cosmic algorithm. crblaster is written in C using the high-performance computing industry standard Message Passing Interface (MPI) library. crblaster uses a two-dimensional image partitioning algorithm that partitions an input image into N rectangular subimages of nearly equal area; the subimages include sufficient additional pixels along common image partition edges such that the need for communication between computer processes is eliminated. The code has been designed to be used by research scientists who are familiar with C as a parallel-processing computational framework that enables the easy development of parallel-processing image-analysis programs based on embarrassingly parallel algorithms. The crblaster source code is freely available at the official application Web site at the National Optical Astronomy Observatory. Removing cosmic rays from a single 800 × 800 pixel Hubble Space Telescope WFPC2 image takes 44 s with the IRAF script lacos_im.cl running on a single core of an Apple Mac Pro computer with two 2.8 GHz quad-core Intel Xeon processors. crblaster is 7.4 times faster when processing the same image on a single core on the same machine. Processing the same image with crblaster simultaneously on all eight cores of the same machine takes 0.875 s—which is a speedup factor of 50.3 times faster than the
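
    A sketch of the overlap ("halo") partitioning idea that makes such algorithms embarrassingly parallel: each tile carries enough border pixels to be processed independently and stitched back without interprocess communication. crblaster itself is written in C with MPI; here a median filter stands in for the L.A.Cosmic step, and the tile count and halo width are assumptions.

    ```python
    # Overlap partitioning for independent per-tile processing (illustrative only).
    import numpy as np
    from scipy.ndimage import median_filter

    def split_with_halo(img, tiles=4, halo=2):
        """Split rows into `tiles` strips, each padded by `halo` rows of neighbours."""
        bounds = np.linspace(0, img.shape[0], tiles + 1, dtype=int)
        for r0, r1 in zip(bounds[:-1], bounds[1:]):
            lo = max(0, r0 - halo)
            hi = min(img.shape[0], r1 + halo)
            yield (r0, r1, lo, img[lo:hi])

    def process_tiles(img, tiles=4, halo=2):
        out = np.empty_like(img)
        for r0, r1, lo, tile in split_with_halo(img, tiles, halo):
            filtered = median_filter(tile, size=2 * halo + 1)   # independent per-tile work
            out[r0:r1] = filtered[r0 - lo:r1 - lo]              # keep only the interior rows
        return out

    image = np.random.default_rng(1).normal(size=(800, 800))
    result = process_tiles(image)   # tiles could equally be farmed out to separate processes
    ```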

  14. Optimisation of nonlinear motion cueing algorithm based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid

    2015-04-01

    Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver to the simulator driver a human sensation as close as possible to that of a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters, while respecting all motion platform physical limitations and minimising the human perception error between the real and the simulator driver. One of the main limitations of the classical washout filters is that they are tuned by the worst-case scenario tuning method. This is based on trial and error, and is affected by driving and programming experience, making this the most significant obstacle to full motion platform utilisation. This leads to inflexibility of the structure, production of false cues, and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. For this reason, the production of motion cues and the impact of different classical washout filter parameters on those cues remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA and tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in MATLAB/Simulink. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching

  15. CCD Base Line Subtraction Algorithms

    SciTech Connect

    Kotov, I.V.; OConnor, P.; Kotov, A.; Frank, J.; Perevoztchikov, V.; Takacs, P.

    2010-06-28

    High statistics astronomical surveys require photometric accuracy on a few percent level. The accuracy of sensor calibration procedures should match this goal. The first step in calibration procedures is the base line subtraction. The accuracy and robustness of different base line subtraction techniques used for Charge Coupled Device (CCD) sensors are discussed.

  16. A component based software framework for vision measurement

    NASA Astrophysics Data System (ADS)

    He, Lingsong; Bei, Lei

    2011-12-01

    In vision measurement applications, an optimal result is usually achieved by combining different processing steps and algorithms. This paper proposes a component-based software framework for vision measurement. First, commonly used processing algorithms of vision measurement are encapsulated into components that are contained in a component library. Each component, designed to have its own properties, also provides I/O interfaces for external calls. Second, a software bus is proposed into which components can be plugged and assembled to form a vision measurement application. Besides component management and data-line linking, the software bus also provides a message distribution service, which is used to drive all the plugged components to work properly. Third, an XML-based script language is proposed to record the plugging and assembling process of a vision measurement application, which can be used to rebuild the application later. Finally, based on this framework, a landmark extraction application used in camera calibration is introduced to show how the framework works.

  17. Color sorting algorithm based on K-means clustering algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, BaoFeng; Huang, Qian

    2009-11-01

    In the process of raisin production, there are a variety of color impurities which need to be removed effectively. A new, efficient raisin color-sorting algorithm is presented here. First, threshold-based image processing was applied for pre-processing, and the gray-scale distribution characteristic of the raisin image was found. In order to obtain the chromatic aberration image and reduce disturbance, we performed frame subtraction, subtracting the background image data from the target image data. Second, a Haar wavelet filter was used to obtain a smoothed image of the raisins. Image characteristics corresponding to the different colors, mildew, spots and other external features were then calculated so that they fully reflect the quality differences between raisins of different types. After this processing, the images were analyzed by the K-means clustering method, which adaptively extracts the statistical features; in accordance with these, the image data were divided into different categories, so that the categories of abnormal colors became distinct. Using this algorithm, the raisins of abnormal colors and those with mottles were eliminated. The sorting rate was up to 98.6%, and the ratio of normal raisins among the sorted-out grains was less than one eighth.
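
    A minimal sketch of K-means clustering of pixel colours followed by a simple abnormal-colour decision rule; the value of k, the colour space, the reference colour and the distance threshold are assumptions rather than the paper's calibrated values.

    ```python
    # K-means on pixel colours with a toy "abnormal cluster" rule (illustrative only).
    import numpy as np

    def kmeans(points, k=3, iters=50, seed=0):
        rng = np.random.default_rng(seed)
        centers = points[rng.choice(len(points), k, replace=False)]
        for _ in range(iters):
            # assign each pixel to its nearest cluster centre
            labels = np.argmin(((points[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
            for c in range(k):
                if np.any(labels == c):
                    centers[c] = points[labels == c].mean(axis=0)
        return labels, centers

    pixels = np.random.default_rng(2).uniform(0, 255, size=(5000, 3))  # stand-in RGB data
    labels, centers = kmeans(pixels, k=3)

    # Clusters whose mean colour is far from an assumed "normal raisin" colour are flagged.
    reference = np.array([120.0, 80.0, 60.0])
    abnormal_clusters = np.where(np.linalg.norm(centers - reference, axis=1) > 80)[0]
    print("clusters flagged as abnormal:", abnormal_clusters)
    ```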

  18. Multi-expert tracking algorithm based on improved compressive tracker

    NASA Astrophysics Data System (ADS)

    Feng, Yachun; Zhang, Hong; Yuan, Ding

    2015-12-01

    Object tracking is a challenging task in computer vision. Most state-of-the-art methods maintain an object model and update it using new examples obtained from incoming frames in order to deal with variation in appearance. Updating the object model frame-by-frame without any censorship mechanism inevitably introduces the model drift problem. In this paper, we adopt a multi-expert tracking framework, which is able to correct the effect of bad updates after they happen, such as those caused by severe occlusion. Hence, the proposed framework has exactly the ability that a robust tracking method should possess. The expert ensemble is constructed from a base tracker and its former snapshots. The tracking result is produced by the current tracker, which is selected by means of a simple loss function. We adopt an improved compressive tracker as the base tracker in our work and modify it to fit the multi-expert framework. The proposed multi-expert tracking algorithm significantly improves the robustness of the base tracker, especially in scenes with frequent occlusions and illumination variations. Experiments on challenging video sequences, with comparisons to several state-of-the-art trackers, demonstrate the effectiveness of our method, and our tracking algorithm runs in real time.

  19. Evaluation of five non-rigid image registration algorithms using the NIREP framework

    NASA Astrophysics Data System (ADS)

    Wei, Ying; Christensen, Gary E.; Song, Joo Hyun; Rudrauf, David; Bruss, Joel; Kuhl, Jon G.; Grabowski, Thomas J.

    2010-03-01

    Evaluating non-rigid image registration algorithm performance is a difficult problem since there is rarely a "gold standard" (i.e., known) correspondence between two images. This paper reports the analysis and comparison of five non-rigid image registration algorithms using the Non-Rigid Image Registration Evaluation Project (NIREP) (www.nirep.org) framework. The NIREP framework evaluates registration performance using centralized databases of well-characterized images and standard evaluation statistics (methods) which are implemented in a software package. The performance of five non-rigid registration algorithms (Affine, AIR, Demons, SLE and SICLE) was evaluated using 22 images from two NIREP neuroanatomical evaluation databases. Six evaluation statistics (relative overlap, intensity variance, normalized ROI overlap, alignment of calcarine sulci, inverse consistency error and transitivity error) were used to evaluate and compare image registration performance. The results indicate that the Demons registration algorithm produced the best registration results with respect to the relative overlap statistic but produced nearly the worst registration results with respect to the inverse consistency statistic. The fact that one registration algorithm produced the best result for one criterion and nearly the worst for another illustrates the need to use multiple evaluation statistics to fully assess performance.

  20. A framework for porting the NeuroBayes machine learning algorithm to FPGAs

    NASA Astrophysics Data System (ADS)

    Baehr, S.; Sander, O.; Heck, M.; Feindt, M.; Becker, J.

    2016-01-01

    The NeuroBayes machine learning algorithm is deployed for online data reduction at the pixel detector of Belle II. In order to test, characterize and easily adapt its implementation on FPGAs, a framework was developed. Within the framework, an HDL model written in Python using MyHDL is used for fast exploration of possible configurations. Using input data from physics simulations, figures of merit such as throughput, accuracy and resource demand of the implementation are evaluated in a fast and flexible way. Functional validation is supported by the use of unit tests and HDL simulation for chosen configurations.

  1. Secure OFDM communications based on hashing algorithms

    NASA Astrophysics Data System (ADS)

    Neri, Alessandro; Campisi, Patrizio; Blasi, Daniele

    2007-10-01

    In this paper we propose an OFDM (Orthogonal Frequency Division Multiplexing) wireless communication system that introduces mutual authentication and encryption at the physical layer, without impairing spectral efficiency, by exploiting some degrees of freedom of the base-band signal and using encrypted-hash algorithms. FEC (Forward Error Correction) is instead performed through variable-rate Turbo Codes. To avoid false rejections, i.e. rejections of enrolled (authorized) users, we designed and tested a robust hash algorithm. This robustness is obtained both by a segmentation of the hash domain (based on BCH codes) and by the FEC capabilities of Turbo Codes.

  2. Recent numerical and algorithmic advances within the volume tracking framework for modeling interfacial flows

    DOE PAGES

    François, Marianne M.

    2015-05-28

    A review of recent advances made in numerical methods and algorithms within the volume tracking framework is presented. The volume tracking method, also known as the volume-of-fluid method has become an established numerical approach to model and simulate interfacial flows. Its advantage is its strict mass conservation. However, because the interface is not explicitly tracked but captured via the material volume fraction on a fixed mesh, accurate estimation of the interface position, its geometric properties and modeling of interfacial physics in the volume tracking framework remain difficult. Several improvements have been made over the last decade to address these challenges. In this study, the multimaterial interface reconstruction method via power diagram, curvature estimation via heights and mean values and the balanced-force algorithm for surface tension are highlighted.

  3. Recent numerical and algorithmic advances within the volume tracking framework for modeling interfacial flows

    SciTech Connect

    François, Marianne M.

    2015-05-28

    A review of recent advances made in numerical methods and algorithms within the volume tracking framework is presented. The volume tracking method, also known as the volume-of-fluid method has become an established numerical approach to model and simulate interfacial flows. Its advantage is its strict mass conservation. However, because the interface is not explicitly tracked but captured via the material volume fraction on a fixed mesh, accurate estimation of the interface position, its geometric properties and modeling of interfacial physics in the volume tracking framework remain difficult. Several improvements have been made over the last decade to address these challenges. In this study, the multimaterial interface reconstruction method via power diagram, curvature estimation via heights and mean values and the balanced-force algorithm for surface tension are highlighted.

  4. An Automatic Learning-Based Framework for Robust Nucleus Segmentation.

    PubMed

    Xing, Fuyong; Xie, Yuanpu; Yang, Lin

    2016-02-01

    Computer-aided image analysis of histopathology specimens could potentially provide support for early detection and improved characterization of diseases such as brain tumor, pancreatic neuroendocrine tumor (NET), and breast cancer. Automated nucleus segmentation is a prerequisite for various quantitative analyses including automatic morphological feature computation. However, it remains a challenging problem due to the complex nature of histopathology images. In this paper, we propose a learning-based framework for robust and automatic nucleus segmentation with shape preservation. Given a nucleus image, it begins with a deep convolutional neural network (CNN) model to generate a probability map, on which an iterative region merging approach is performed for shape initializations. Next, a novel segmentation algorithm is exploited to separate individual nuclei combining a robust selection-based sparse shape model and a local repulsive deformable model. One of the significant benefits of the proposed framework is that it is applicable to different staining histopathology images. Due to the feature learning characteristic of the deep CNN and the high-level shape prior modeling, the proposed method is general enough to perform well across multiple scenarios. We have tested the proposed algorithm on three large-scale pathology image datasets using a range of different tissue and stain preparations, and the comparative experiments with recent state-of-the-art methods demonstrate the superior performance of the proposed approach.

  5. An improved filter-u least mean square vibration control algorithm for aircraft framework.

    PubMed

    Huang, Quanzhen; Luo, Jun; Gao, Zhiyuan; Zhu, Xiaojin; Li, Hengyu

    2014-09-01

    Active vibration control of aerospace vehicle structures is a very active research area, and the filter-u least mean square (FULMS) algorithm is one of the key methods. However, for practical reasons and because of technical limitations, vibration reference signal extraction has always been a difficult problem for the FULMS algorithm. To solve the vibration reference signal extraction problem, an improved FULMS vibration control algorithm is proposed in this paper. The reference signal is constructed from the controller structure and the data produced during the algorithm's operation, using a vibration response residual signal extracted directly from the vibrating structure. To test the proposed algorithm, an aircraft frame model is built and an experimental platform is constructed. The simulation and experimental results show that the proposed algorithm is more practical and achieves good vibration suppression performance.
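
    To illustrate the adaptive-filtering family that FULMS belongs to, the sketch below runs the simpler filtered-x LMS form on synthetic data, with the reference signal reconstructed from the measured residual and the controller output in the spirit of the proposed improvement. The secondary-path model, filter length, step size and disturbance are assumptions; this is not the authors' implementation.

    ```python
    # Feedback-style filtered-x LMS on synthetic data (illustrative assumptions only).
    import numpy as np

    rng = np.random.default_rng(0)
    n, L, mu = 4000, 16, 0.002
    d = np.sin(2 * np.pi * 0.05 * np.arange(n)) + 0.05 * rng.standard_normal(n)  # disturbance
    s = np.array([0.8, 0.3])          # assumed secondary (actuator-to-sensor) path
    s_hat = s.copy()                  # its identified model (assumed perfect here)

    w = np.zeros(L)                   # adaptive controller weights
    x_hist = np.zeros(L)              # reconstructed reference history
    xf_hist = np.zeros(L)             # reference filtered through s_hat (for the update)
    y_hist = np.zeros(len(s))
    xs_hist = np.zeros(len(s_hat))
    e_log = np.zeros(n)
    for k in range(n):
        y = w @ x_hist                             # controller output
        y_hist = np.roll(y_hist, 1); y_hist[0] = y
        e = d[k] + s @ y_hist                      # measured residual vibration
        e_log[k] = e
        x = e - s_hat @ y_hist                     # reference rebuilt from residual + controller data
        x_hist = np.roll(x_hist, 1); x_hist[0] = x
        xs_hist = np.roll(xs_hist, 1); xs_hist[0] = x
        xf = s_hat @ xs_hist                       # filtered reference for the LMS update
        xf_hist = np.roll(xf_hist, 1); xf_hist[0] = xf
        w -= mu * e * xf_hist                      # filtered-x LMS weight update
    print("residual power, first vs last 500 samples:",
          np.mean(e_log[:500] ** 2), np.mean(e_log[-500:] ** 2))
    ```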

  6. An improved filter-u least mean square vibration control algorithm for aircraft framework

    NASA Astrophysics Data System (ADS)

    Huang, Quanzhen; Luo, Jun; Gao, Zhiyuan; Zhu, Xiaojin; Li, Hengyu

    2014-09-01

    Active vibration control of aerospace vehicle structures is a very active research area, and the filter-u least mean square (FULMS) algorithm is one of the key methods. However, for practical reasons and because of technical limitations, vibration reference signal extraction has always been a difficult problem for the FULMS algorithm. To solve the vibration reference signal extraction problem, an improved FULMS vibration control algorithm is proposed in this paper. The reference signal is constructed from the controller structure and the data produced during the algorithm's operation, using a vibration response residual signal extracted directly from the vibrating structure. To test the proposed algorithm, an aircraft frame model is built and an experimental platform is constructed. The simulation and experimental results show that the proposed algorithm is more practical and achieves good vibration suppression performance.

  7. A framework for evaluating wavelet based watermarking for scalable coded digital item adaptation attacks

    NASA Astrophysics Data System (ADS)

    Bhowmik, Deepayan; Abhayaratne, Charith

    2009-02-01

    A framework for evaluating wavelet-based watermarking schemes against scalable coded visual media content adaptation attacks is presented. The framework, Watermark Evaluation Bench for Content Adaptation Modes (WEBCAM), aims to facilitate controlled evaluation of wavelet-based watermarking schemes under MPEG-21 part-7 digital item adaptations (DIA). WEBCAM accommodates all major wavelet-based watermarking schemes in a single generalised framework by considering a global parameter space, from which the optimum parameters for a specific algorithm may be chosen. WEBCAM considers the traversing of media content along various links and the required content adaptations at various nodes of media supply chains. In this paper, the content adaptation is emulated by JPEG2000 coded bit stream extraction for various spatial resolution and quality levels of the content. The proposed framework is beneficial not only as an evaluation tool but also as a design tool for new wavelet-based watermarking algorithms, by picking and mixing available tools and finding the optimum design parameters.

  8. Propensity scores-potential outcomes framework to incorporate severity probabilities in the highway safety manual crash prediction algorithm.

    PubMed

    Sasidharan, Lekshmi; Donnell, Eric T

    2014-10-01

    Accurate estimation of the expected number of crashes at different severity levels for entities with and without countermeasures plays a vital role in selecting countermeasures in the framework of the safety management process. The current practice is to use the American Association of State Highway and Transportation Officials' Highway Safety Manual crash prediction algorithms, which combine safety performance functions and crash modification factors, to estimate the effects of safety countermeasures on different highway and street facility types. Many of these crash prediction algorithms are based solely on crash frequency, or assume that severity outcomes are unchanged when planning for, or implementing, safety countermeasures. Failing to account for the uncertainty associated with crash severity outcomes, and assuming crash severity distributions remain unchanged in safety performance evaluations, limits the utility of the Highway Safety Manual crash prediction algorithms in assessing the effect of safety countermeasures on crash severity. This study demonstrates the application of a propensity scores-potential outcomes framework to estimate the probability distribution for the occurrence of different crash severity levels by accounting for the uncertainties associated with them. The probability of fatal and severe injury crash occurrence at lighted and unlighted intersections is estimated in this paper using data from Minnesota. The results show that the expected probability of occurrence of fatal and severe injury crashes at a lighted intersection was 1 in 35 crashes, and the estimated risk ratio indicates that the corresponding probability at an unlighted intersection was 1.14 times higher than at lighted intersections. The results from the potential outcomes-propensity scores framework are compared to results obtained from traditional binary logit models, without application of propensity scores matching. Traditional binary logit analysis suggests that

  9. jClustering, an Open Framework for the Development of 4D Clustering Algorithms

    PubMed Central

    Mateos-Pérez, José María; García-Villalba, Carmen; Pascau, Javier; Desco, Manuel; Vaquero, Juan J.

    2013-01-01

    We present jClustering, an open framework for the design of clustering algorithms in dynamic medical imaging. We developed this tool because of the difficulty involved in manually segmenting dynamic PET images and the lack of availability of source code for published segmentation algorithms. Providing an easily extensible open tool encourages publication of source code to facilitate the process of comparing algorithms and provide interested third parties with the opportunity to review code. The internal structure of the framework allows an external developer to implement new algorithms easily and quickly, focusing only on the particulars of the method being implemented and not on image data handling and preprocessing. This tool has been coded in Java and is presented as an ImageJ plugin in order to take advantage of all the functionalities offered by this imaging analysis platform. Both binary packages and source code have been published, the latter under a free software license (GNU General Public License) to allow modification if necessary. PMID:23990913

  10. Numerical Algorithms Based on Biorthogonal Wavelets

    NASA Technical Reports Server (NTRS)

    Ponenti, Pj.; Liandrat, J.

    1996-01-01

    Wavelet bases are used to generate spaces of approximation for the resolution of bidimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is then stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multi resolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.

  11. Algorithmic Differentiation for Calculus-based Optimization

    NASA Astrophysics Data System (ADS)

    Walther, Andrea

    2010-10-01

    For numerous applications, the computation and provision of exact derivative information plays an important role for optimizing the considered system but quite often also for its simulation. This presentation introduces the technique of Algorithmic Differentiation (AD), a method to compute derivatives of arbitrary order within working precision. Quite often an additional structure exploitation is indispensable for a successful coupling of these derivatives with state-of-the-art optimization algorithms. The talk will discuss two important situations where the problem-inherent structure allows a calculus-based optimization. Examples from aerodynamics and nano optics illustrate these advanced optimization approaches.
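
    A minimal sketch of forward-mode algorithmic differentiation via dual numbers, showing how derivative information is propagated through ordinary arithmetic to working precision; the overloaded operations and the example function are only illustrative.

    ```python
    # Forward-mode AD via dual numbers: each value carries its derivative along.
    import math

    class Dual:
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)  # product rule
        __rmul__ = __mul__
        def sin(self):
            return Dual(math.sin(self.val), math.cos(self.val) * self.dot)      # chain rule

    def f(x):                      # any code built from the overloaded operations
        return x * x * x + 3 * x.sin()

    x = Dual(1.2, 1.0)             # seed dx/dx = 1
    y = f(x)
    print(y.val, y.dot)            # f(1.2) and exact f'(1.2) = 3*1.2**2 + 3*cos(1.2)
    ```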

  12. A Reliability-Based Track Fusion Algorithm

    PubMed Central

    Xu, Li; Pan, Liqiang; Jin, Shuilin; Liu, Haibo; Yin, Guisheng

    2015-01-01

    The common track fusion algorithms in multi-sensor systems have some defects, such as serious imbalances between accuracy and computational cost, the same treatment of all the sensor information regardless of their quality, high fusion errors at inflection points. To address these defects, a track fusion algorithm based on the reliability (TFR) is presented in multi-sensor and multi-target environments. To improve the information quality, outliers in the local tracks are eliminated at first. Then the reliability of local tracks is calculated, and the local tracks with high reliability are chosen for the state estimation fusion. In contrast to the existing methods, TFR reduces high fusion errors at the inflection points of system tracks, and obtains a high accuracy with less computational cost. Simulation results verify the effectiveness and the superiority of the algorithm in dense sensor environments. PMID:25950174

  13. A reliability-based track fusion algorithm.

    PubMed

    Xu, Li; Pan, Liqiang; Jin, Shuilin; Liu, Haibo; Yin, Guisheng

    2015-01-01

    The common track fusion algorithms in multi-sensor systems have some defects, such as serious imbalances between accuracy and computational cost, the same treatment of all the sensor information regardless of their quality, high fusion errors at inflection points. To address these defects, a track fusion algorithm based on the reliability (TFR) is presented in multi-sensor and multi-target environments. To improve the information quality, outliers in the local tracks are eliminated at first. Then the reliability of local tracks is calculated, and the local tracks with high reliability are chosen for the state estimation fusion. In contrast to the existing methods, TFR reduces high fusion errors at the inflection points of system tracks, and obtains a high accuracy with less computational cost. Simulation results verify the effectiveness and the superiority of the algorithm in dense sensor environments.
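
    An illustrative sketch of the pipeline outlined above: remove outliers from each sensor's local track, score each sensor's reliability, and fuse only the reliable tracks with reliability weights. The reliability proxy (inverse track spread), the threshold and the synthetic data are assumptions, not the paper's definitions.

    ```python
    # Toy reliability-weighted track fusion (illustrative assumptions throughout).
    import numpy as np

    def remove_outliers(track, z=3.0):
        med = np.median(track, axis=0)
        dev = np.linalg.norm(track - med, axis=1)
        return track[dev < z * (np.median(dev) + 1e-9)]

    def fuse(local_tracks, min_reliability=0.5):
        cleaned = [remove_outliers(t) for t in local_tracks]
        # reliability decreases with the spread of the cleaned track (assumed proxy)
        reliabilities = np.array([1.0 / (1.0 + np.mean(np.var(t, axis=0))) for t in cleaned])
        selected = [t for t, r in zip(cleaned, reliabilities) if r >= min_reliability]
        weights = reliabilities[reliabilities >= min_reliability]
        estimates = np.array([t.mean(axis=0) for t in selected])
        return (weights[:, None] * estimates).sum(axis=0) / weights.sum()

    rng = np.random.default_rng(3)
    truth = np.array([100.0, 50.0])
    tracks = [truth + rng.normal(0, s, size=(30, 2)) for s in (0.5, 1.0, 5.0)]
    print("fused state estimate:", fuse(tracks))   # the noisiest sensor is excluded
    ```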

  14. Bell-Curve Based Evolutionary Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.

    1998-01-01

    The paper presents an optimization algorithm that falls into the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most Genetic Algorithms (GA) in research and applications in America, some alternatives, also in the category of evolutionary algorithms but using a direct, geometrical approach, have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by the use of a combination of n-dimensional geometry and the normal distribution, the bell-curve, in the generation of the offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions: parallel and orthogonal to the connecting line, the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. The principal advantage of BCB is its controllability via the normal distribution parameters and the geometrical construct variables.
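
    A sketch of the offspring construction described above: a weighted point on the line joining two parents, perturbed along the connecting line and orthogonally to it with normally distributed deviations. The weight and standard deviations below are illustrative parameters, not the paper's settings.

    ```python
    # BCB-style child generation from two parents (illustrative parameters).
    import numpy as np

    def bcb_child(p1, p2, rng, w=0.5, sigma_par=0.1, sigma_orth=0.1):
        direction = p2 - p1
        length = np.linalg.norm(direction)
        if length == 0:
            return p1 + rng.normal(0.0, sigma_orth, size=p1.shape)
        unit = direction / length
        base = p1 + w * direction                                   # weighted point on the line
        child = base + rng.normal(0.0, sigma_par) * length * unit   # parallel deviation
        orth = rng.normal(0.0, sigma_orth, size=p1.shape)
        orth -= (orth @ unit) * unit                                # remove the parallel component
        return child + orth * length                                # orthogonal deviation

    rng = np.random.default_rng(4)
    parent1, parent2 = np.array([0.0, 0.0]), np.array([1.0, 2.0])
    print(bcb_child(parent1, parent2, rng))
    ```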

  15. Thiophene-based covalent organic frameworks

    PubMed Central

    Bertrand, Guillaume H. V.; Michaelis, Vladimir K.; Ong, Ta-Chung; Griffin, Robert G.; Dincă, Mircea

    2013-01-01

    We report the synthesis and characterization of covalent organic frameworks (COFs) incorporating thiophene-based building blocks. We show that these are amenable to reticular synthesis, and that bent ditopic monomers, such as 2,5-thiophenediboronic acid, are defect-prone building blocks that are susceptible to synthetic variations during COF synthesis. The synthesis and characterization of an unusual charge transfer complex between thieno[3,2-b]thiophene-2,5-diboronic acid and tetracyanoquinodimethane enabled by the unique COF architecture is also presented. Together, these results delineate important synthetic advances toward the implementation of COFs in electronic devices. PMID:23479656

  16. Thiophene-based covalent organic frameworks.

    PubMed

    Bertrand, Guillaume H V; Michaelis, Vladimir K; Ong, Ta-Chung; Griffin, Robert G; Dincă, Mircea

    2013-03-26

    We report the synthesis and characterization of covalent organic frameworks (COFs) incorporating thiophene-based building blocks. We show that these are amenable to reticular synthesis, and that bent ditopic monomers, such as 2,5-thiophenediboronic acid, are defect-prone building blocks that are susceptible to synthetic variations during COF synthesis. The synthesis and characterization of an unusual charge transfer complex between thieno[3,2-b]thiophene-2,5-diboronic acid and tetracyanoquinodimethane enabled by the unique COF architecture is also presented. Together, these results delineate important synthetic advances toward the implementation of COFs in electronic devices.

  17. Adaptive inpainting algorithm based on DCT induced wavelet regularization.

    PubMed

    Li, Yan-Ran; Shen, Lixin; Suter, Bruce W

    2013-02-01

    In this paper, we propose an image inpainting optimization model whose objective function is a smoothed l1 norm of the weighted nondecimated discrete cosine transform (DCT) coefficients of the underlying image. By identifying the objective function of the proposed model as a sum of a differentiable term and a nondifferentiable term, we present a basic algorithm inspired by Beck and Teboulle's recent work on the model. Based on this basic algorithm, we propose an automatic way to determine the weights involved in the model and update them in each iteration. The DCT as an orthogonal transform is used in various applications. We view the rows of a DCT matrix as the filters associated with a multiresolution analysis. Nondecimated wavelet transforms with these filters are explored in order to analyze the images to be inpainted. Our numerical experiments verify that under the proposed framework, the filters from a DCT matrix demonstrate promise for the task of image inpainting.

  18. Integrated consensus-based frameworks for unmanned vehicle routing and targeting assignment

    NASA Astrophysics Data System (ADS)

    Barnawi, Waleed T.

    Unmanned aerial vehicles (UAVs) are increasingly deployed in complex and dynamic environments to perform multiple tasks cooperatively with other UAVs that contribute to overarching mission effectiveness. Studies by the Department of Defense (DoD) indicate future operations may include anti-access/area-denial (A2AD) environments which limit human teleoperator decision-making and control. This research addresses the problem of decentralized vehicle re-routing and task reassignments through consensus-based UAV decision-making. An Integrated Consensus-Based Framework (ICF) is formulated as a solution to the combined single task assignment problem and vehicle routing problem. The multiple assignment and vehicle routing problem is solved with the Integrated Consensus-Based Bundle Framework (ICBF). The frameworks are hierarchically decomposed into two levels. The bottom layer utilizes the renowned Dijkstra's Algorithm. The top layer addresses task assignment with two methods. The single assignment approach is called the Caravan Auction Algorithm (CarA) Algorithm. This technique extends the Consensus-Based Auction Algorithm (CBAA) to provide awareness for task completion by agents and adopt abandoned tasks. The multiple assignment approach called the Caravan Auction Bundle Algorithm (CarAB) extends the Consensus-Based Bundle Algorithm (CBBA) by providing awareness for lost resources, prioritizing remaining tasks, and adopting abandoned tasks. Research questions are investigated regarding the novelty and performance of the proposed frameworks. Conclusions regarding the research questions will be provided through hypothesis testing. Monte Carlo simulations will provide evidence to support conclusions regarding the research hypotheses for the proposed frameworks. The approach provided in this research addresses current and future military operations for unmanned aerial vehicles. However, the general framework implied by the proposed research is adaptable to any unmanned

  19. Fast Algorithms for Model-Based Diagnosis

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Barrett, Anthony; Vatan, Farrokh; Mackey, Ryan

    2005-01-01

    Two new methods for automated diagnosis of complex engineering systems involve the use of novel algorithms that are more efficient than prior algorithms used for the same purpose. Both the recently developed algorithms and the prior algorithms in question are instances of model-based diagnosis, which is based on exploring the logical inconsistency between an observation and a description of a system to be diagnosed. As engineering systems grow more complex and increasingly autonomous in their functions, the need for automated diagnosis increases concomitantly. In model-based diagnosis, the function of each component and the interconnections among all the components of the system to be diagnosed (for example, see figure) are represented as a logical system, called the system description (SD). Hence, the expected behavior of the system is the set of logical consequences of the SD. Faulty components lead to inconsistency between the observed behaviors of the system and the SD. The task of finding the faulty components (diagnosis) reduces to finding the components, the abnormalities of which could explain all the inconsistencies. Of course, the meaningful solution should be a minimal set of faulty components (called a minimal diagnosis), because the trivial solution, in which all components are assumed to be faulty, always explains all inconsistencies. Although the prior algorithms in question implement powerful methods of diagnosis, they are not practical because they essentially require exhaustive searches among all possible combinations of faulty components and therefore entail amounts of computation that grow exponentially with the number of components of the system.
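
    A toy illustration of the classical minimal-diagnosis formulation: a diagnosis is a minimal set of components that hits every conflict set derived from the system description and the observation. The brute-force subset search below is exactly the exponential-cost approach that the new algorithms are designed to avoid; the component names and conflict sets are made up.

    ```python
    # Minimal diagnoses as minimum-cardinality hitting sets of conflict sets
    # (brute-force search, shown only to illustrate the formulation and its cost).
    from itertools import combinations

    components = ["A1", "A2", "M1", "M2", "M3"]
    conflicts = [{"A1", "M1", "M2"}, {"A1", "A2", "M1", "M3"}]   # assumed conflict sets

    def minimal_diagnoses(components, conflicts):
        for size in range(1, len(components) + 1):
            found = [set(c) for c in combinations(components, size)
                     if all(set(c) & conf for conf in conflicts)]
            if found:
                return found            # all minimum-cardinality diagnoses
        return []

    print(minimal_diagnoses(components, conflicts))
    # e.g. [{'A1'}, {'M1'}] -- single faults that explain both conflicts
    ```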

  20. A task-based analytical framework for ultrasonic beamformer comparison.

    PubMed

    Nguyen, Nghia Q; Prager, Richard W; Insana, Michael F

    2016-08-01

    A task-based approach is employed to develop an analytical framework for ultrasound beamformer design and evaluation. In this approach, a Bayesian ideal-observer provides an idealized starting point and a way to measure information loss in practical beamformer designs. Different approximations of this ideal strategy are shown to lead to popular beamformers in the literature, including the matched filter, minimum variance (MV), and Wiener filter (WF) beamformers. Analysis of the approximations indicates that the WF beamformer should outperform the MV approach, especially in low echo signal-to-noise conditions. The beamformers are applied to five typical tasks from the BIRADS lexicon. Their performance is evaluated based on ability to discriminate idealized malignant and benign features. The numerical results show the advantages of the WF over the MV technique in general; although performance varies predictably in some contrast-limited tasks because of the model modifications required for the MV algorithm to avoid ill-conditioning. PMID:27586736

  1. Framework for Integrating Science Data Processing Algorithms Into Process Control Systems

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Crichton, Daniel J.; Chang, Albert Y.; Foster, Brian M.; Freeborn, Dana J.; Woollard, David M.; Ramirez, Paul M.

    2011-01-01

    A software framework called PCS Task Wrapper is responsible for standardizing the setup, process initiation, execution, and file management tasks surrounding the execution of science data algorithms, which are referred to by NASA as Product Generation Executives (PGEs). PGEs codify a scientific algorithm, some step in the overall scientific process involved in a mission science workflow. The PCS Task Wrapper provides a stable operating environment to the underlying PGE during its execution lifecycle. If the PGE requires a file, or metadata regarding the file, the PCS Task Wrapper is responsible for delivering that information to the PGE in a manner that meets its requirements. If the PGE requires knowledge of upstream or downstream PGEs in a sequence of executions, that information is also made available. Finally, if information regarding disk space, or node information such as CPU availability, etc., is required, the PCS Task Wrapper provides this information to the underlying PGE. After this information is collected, the PGE is executed, and its output Product file and Metadata generation is managed via the PCS Task Wrapper framework. The innovation is responsible for marshalling output Products and Metadata back to a PCS File Management component for use in downstream data processing and pedigree. In support of this, the PCS Task Wrapper leverages the PCS Crawler Framework to ingest (during pipeline processing) the output Product files and Metadata produced by the PGE. The architectural components of the PCS Task Wrapper framework include PGE Task Instance, PGE Config File Builder, Config File Property Adder, Science PGE Config File Writer, and PCS Met file Writer. This innovative framework is really the unifying bridge between the execution of a step in the overall processing pipeline, and the available PCS component services as well as the information that they collectively manage.

  2. Framework and algorithms for illustrative visualizations of time-varying flows on unstructured meshes

    DOE PAGES

    Rattner, Alexander S.; Guillen, Donna Post; Joshi, Alark; Garimella, Srinivas

    2016-03-17

    Photo- and physically realistic techniques are often insufficient for visualization of fluid flow simulations, especially for 3D and time-varying studies. Substantial research effort has been dedicated to the development of non-photorealistic and illustration-inspired visualization techniques for compact and intuitive presentation of such complex datasets. However, a great deal of work has been reproduced in this field, as many research groups have developed specialized visualization software. Additionally, interoperability between illustrative visualization software is limited due to diverse processing and rendering architectures employed in different studies. In this investigation, a framework for illustrative visualization is proposed, and implemented in MarmotViz, a ParaView plug-in, enabling its use on a variety of computing platforms with various data file formats and mesh geometries. Region-of-interest identification and feature-tracking algorithms incorporated into this tool are described. Implementations of multiple illustrative effect algorithms are also presented to demonstrate the use and flexibility of this framework. Here, by providing an integrated framework for illustrative visualization of CFD data, MarmotViz can serve as a valuable asset for the interpretation of simulations of ever-growing scale.

  3. Generalized Framework and Algorithms for Illustrative Visualization of Time-Varying Data on Unstructured Meshes

    SciTech Connect

    Alexander S. Rattner; Donna Post Guillen; Alark Joshi

    2012-12-01

    Photo- and physically-realistic techniques are often insufficient for visualization of simulation results, especially for 3D and time-varying datasets. Substantial research efforts have been dedicated to the development of non-photorealistic and illustration-inspired visualization techniques for compact and intuitive presentation of such complex datasets. While these efforts have yielded valuable visualization results, a great deal of work has been reproduced in studies as individual research groups often develop purpose-built platforms. Additionally, interoperability between illustrative visualization software is limited due to specialized processing and rendering architectures employed in different studies. In this investigation, a generalized framework for illustrative visualization is proposed, and implemented in marmotViz, a ParaView plugin, enabling its use on a variety of computing platforms with various data file formats and mesh geometries. Detailed descriptions of the region-of-interest identification and feature-tracking algorithms incorporated into this tool are provided. Additionally, implementations of multiple illustrative effect algorithms are presented to demonstrate the use and flexibility of this framework. By providing a framework and useful underlying functionality, the marmotViz tool can act as a springboard for future research in the field of illustrative visualization.

  4. LSB Based Quantum Image Steganography Algorithm

    NASA Astrophysics Data System (ADS)

    Jiang, Nan; Zhao, Na; Wang, Luo

    2016-01-01

    Quantum steganography is the technique which hides a secret message into quantum covers such as quantum images. In this paper, two blind LSB steganography algorithms in the form of quantum circuits are proposed based on the novel enhanced quantum representation (NEQR) for quantum images. One algorithm is plain LSB which uses the message bits to substitute for the pixels' LSB directly. The other is block LSB which embeds a message bit into a number of pixels that belong to one image block. The extracting circuits can regain the secret message only according to the stego cover. Analysis and simulation-based experimental results demonstrate that the invisibility is good, and the balance between the capacity and the robustness can be adjusted according to the needs of applications.
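
    The plain-LSB embedding rule described above has a direct classical analogue: replace the least significant bit of each cover pixel with one message bit. The sketch below is a classical (non-quantum) illustration of that rule on an 8-bit grayscale array; it does not reproduce the NEQR-based quantum circuits of the paper.

```python
import numpy as np

def lsb_embed(cover, bits):
    """Replace the LSB of the first len(bits) pixels with the message bits."""
    stego = cover.copy().ravel()
    bits = np.asarray(bits, dtype=np.uint8)
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits
    return stego.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Blind extraction: read the LSBs straight from the stego image."""
    return (stego.ravel()[:n_bits] & 1).astype(np.uint8)

cover = np.random.default_rng(1).integers(0, 256, size=(4, 4), dtype=np.uint8)
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = lsb_embed(cover, message)
assert list(lsb_extract(stego, len(message))) == message
```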

  5. Network-based recommendation algorithms: A review

    NASA Astrophysics Data System (ADS)

    Yu, Fei; Zeng, An; Gillard, Sébastien; Medo, Matúš

    2016-06-01

    Recommender systems are a vital tool that helps us to overcome the information overload problem. They are being used by most e-commerce web sites and attract the interest of a broad scientific community. A recommender system uses data on users' past preferences to choose new items that might be appreciated by a given individual user. While many approaches to recommendation exist, the approach based on a network representation of the input data has gained considerable attention in the past. We review here a broad range of network-based recommendation algorithms and for the first time compare their performance on three distinct real datasets. We present recommendation topics that go beyond the mere question of which algorithm to use, such as the possible influence of recommendation on the evolution of the systems that use it, and finally discuss open research directions and challenges.
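
    A representative member of the network-based family reviewed here is the probabilistic-spreading (ProbS, or mass-diffusion) algorithm on the user-item bipartite graph. The sketch below shows its two-step resource spreading for a single target user; the small interaction matrix is invented for the example, and real implementations typically vectorize the computation over all users.

```python
import numpy as np

def probs_scores(A, user):
    """Mass-diffusion (ProbS) recommendation scores for one target user.

    A    : (n_users, n_items) binary interaction matrix
    user : index of the target user
    """
    item_deg = A.sum(axis=0)          # how many users collected each item
    user_deg = A.sum(axis=1)          # how many items each user collected
    f = A[user].astype(float)         # unit resource placed on the user's items
    with np.errstate(divide="ignore", invalid="ignore"):
        # Step 1: items spread their resource equally to the users who collected them.
        to_users = A @ np.where(item_deg > 0, f / item_deg, 0.0)
        # Step 2: users spread the received resource back equally to their items.
        scores = A.T @ np.where(user_deg > 0, to_users / user_deg, 0.0)
    scores[A[user] > 0] = -np.inf     # do not recommend already-collected items
    return scores

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]])
print(np.argsort(probs_scores(A, user=0))[::-1])  # items ranked for user 0
```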

  6. Automated Vectorization of Decision-Based Algorithms

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision-based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.

  7. An algorithm for hyperspectral remote sensing of aerosols: 1. Development of theoretical framework

    NASA Astrophysics Data System (ADS)

    Hou, Weizhen; Wang, Jun; Xu, Xiaoguang; Reid, Jeffrey S.; Han, Dong

    2016-07-01

    This paper describes the first part of a series of investigations to develop algorithms for simultaneous retrieval of aerosol parameters and surface reflectance from a newly developed hyperspectral instrument, the GEOstationary Trace gas and Aerosol Sensor Optimization (GEO-TASO), by taking full advantage of available hyperspectral measurement information in the visible bands. We describe the theoretical framework of an inversion algorithm for the hyperspectral remote sensing of the aerosol optical properties, in which the major principal components (PCs) of surface reflectance are assumed to be known, and the spectrally dependent aerosol refractive indices are assumed to follow a power-law approximation with four unknown parameters (two for the real and two for the imaginary part of the refractive index). New capabilities for computing the Jacobians of four Stokes parameters of reflected solar radiation at the top of the atmosphere with respect to these unknown aerosol parameters and the weighting coefficients for each PC of surface reflectance are added into the UNified Linearized Vector Radiative Transfer Model (UNL-VRTM), which in turn facilitates the optimization in the inversion process. Theoretical derivations of the formulas for these new capabilities are provided, and the analytical solutions of Jacobians are validated against the finite-difference calculations with relative error less than 0.2%. Finally, a self-consistency check of the inversion algorithm is conducted for the idealized green-vegetation and rangeland surfaces that were spectrally characterized by the U.S. Geological Survey digital spectral library. It shows that the first six PCs can yield the reconstruction of spectral surface reflectance with errors less than 1%. Assuming that aerosol properties can be accurately characterized, the inversion yields a retrieval of hyperspectral surface reflectance with an uncertainty of 2% (and root-mean-square error of less than 0.003), which suggests self-consistency in the

  8. A Flexible and Efficient Algorithmic Framework for Constrained Matrix and Tensor Factorization

    NASA Astrophysics Data System (ADS)

    Huang, Kejun; Sidiropoulos, Nicholas D.; Liavas, Athanasios P.

    2016-10-01

    We propose a general algorithmic framework for constrained matrix and tensor factorization, which is widely used in signal processing and machine learning. The new framework is a hybrid between alternating optimization (AO) and the alternating direction method of multipliers (ADMM): each matrix factor is updated in turn, using ADMM, hence the name AO-ADMM. This combination can naturally accommodate a great variety of constraints on the factor matrices, and almost all possible loss measures for the fitting. Computation caching and warm start strategies are used to ensure that each update is evaluated efficiently, while the outer AO framework exploits recent developments in block coordinate descent (BCD)-type methods which help ensure that every limit point is a stationary point, as well as faster and more robust convergence in practice. Three special cases are studied in detail: non-negative matrix/tensor factorization, constrained matrix/tensor completion, and dictionary learning. Extensive simulations and experiments with real data are used to showcase the effectiveness and broad applicability of the proposed framework.
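
    To make the AO-ADMM structure concrete, the sketch below uses ADMM to solve one of the alternating subproblems, a nonnegativity-constrained least-squares update of a single factor matrix; the Cholesky factorization cached and reused across inner iterations mirrors the caching/warm-start idea. The step-size heuristic, iteration count, and data are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def admm_nnls_update(Y, W, H0, rho=None, n_iter=50):
    """ADMM update of H in  min_{H >= 0} ||Y - W H||_F^2  (one AO block)."""
    G = W.T @ W
    if rho is None:
        rho = np.trace(G) / G.shape[0]                    # simple step-size heuristic
    chol = cho_factor(G + rho * np.eye(G.shape[0]))       # cached across inner iterations
    WtY = W.T @ Y
    H, U = H0.copy(), np.zeros_like(H0)                   # U: scaled dual variable
    for _ in range(n_iter):
        H_aux = cho_solve(chol, WtY + rho * (H - U))      # unconstrained LS step
        H = np.maximum(0.0, H_aux + U)                    # projection onto H >= 0
        U = U + H_aux - H                                 # dual ascent
    return H

rng = np.random.default_rng(0)
W_true, H_true = rng.random((30, 5)), rng.random((5, 20))
Y = W_true @ H_true
H = admm_nnls_update(Y, W_true, rng.random((5, 20)))
print(np.linalg.norm(Y - W_true @ H) / np.linalg.norm(Y))  # small relative residual
```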

  9. Image enhancement based on edge boosting algorithm

    NASA Astrophysics Data System (ADS)

    Ngernplubpla, Jaturon; Chitsobhuk, Orachat

    2015-12-01

    In this paper, a technique for image enhancement based on a proposed edge boosting algorithm to reconstruct a high quality image from a single low resolution image is described. The difficulty in single-image super-resolution is that the generic image priors residing in the low resolution input image may not be sufficient to generate effective solutions. In order to achieve success in super-resolution reconstruction, efficient prior knowledge should be estimated. The statistics of gradient priors, in terms of a priority map based on separable gradient estimation, maximum likelihood edge estimation, and local variance, are introduced. The proposed edge boosting algorithm takes advantage of these gradient statistics to select the appropriate enhancement weights. Larger weights are applied to the higher frequency details, while the low frequency details are smoothed. The experimental results illustrate a significant performance improvement both quantitatively and perceptually. The proposed edge boosting algorithm produces high quality results with fewer artifacts, sharper edges, superior texture areas, and finer detail with low noise.

  10. Schwarz-Based Algorithms for Compressible Flows

    NASA Technical Reports Server (NTRS)

    Tidriri, M. D.

    1996-01-01

    We investigate in this paper the application of Schwarz-based algorithms to compressible flows. First we study the combination of these methods with defect-correction procedures. We then study the effect on the Schwarz-based methods of replacing the explicit treatment of the boundary conditions by an implicit one. In the last part of this paper we study the combination of these methods with Newton-Krylov matrix-free methods. Numerical experiments that show the performance of our approaches are then presented.

  11. An archetype-based testing framework.

    PubMed

    Chen, Rong; Garde, Sebastian; Beale, Thomas; Nyström, Mikael; Karlsson, Daniel; Klein, Gunnar O; Ahlfeldt, Hans

    2008-01-01

    With the introduction of EHR two-level modelling and archetype methodologies pioneered by openEHR and standardized by CEN/ISO, we are one step closer to semantic interoperability and future-proof adaptive healthcare information systems. Along with the opportunities, there are also challenges. Archetypes provide the full semantics of EHR data explicitly to surrounding systems in a platform-independent way, yet it is up to the receiving system to interpret the semantics and process the data accordingly. In this paper we propose a design of an archetype-based platform-independent testing framework for validating implementations of the openEHR archetype formalism as a means of improving quality and interoperability of EHRs.

  12. A Trust Based Clustering Framework for Securing Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Chatterjee, Pushpita; Sengupta, Indranil; Ghosh, S. K.

    In this paper we present a distributed self-organizing trust based clustering framework for securing ad hoc networks. The mobile nodes are vulnerable to security attacks, so ensuring the security of the network is essential. To enhance security, it is important to evaluate the trustworthiness of nodes without depending on central authorities. In our proposal the evidence of trustworthiness is captured in an efficient manner and from broader perspectives including direct interactions with neighbors, observing interactions of neighbors and through recommendations. Our prediction scheme uses a trust evaluation algorithm at each node to calculate the direct trust rating normalized as a fuzzy value between zero and one. The evidence theory of Dempster-Shafer [9], [10] is used in order to combine the evidences collected by a clusterhead itself and the recommendations from other neighbor nodes. Moreover, in our scheme we do not restrict to a single gateway node for inter cluster routing.
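
    The Dempster-Shafer combination step used to fuse a clusterhead's own evidence with neighbour recommendations follows Dempster's rule. A minimal sketch over the two-element frame {trust, distrust} is given below; the numeric mass assignments are invented for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions whose focal sets are frozensets over a frame."""
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b                      # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

T, D = frozenset({"trust"}), frozenset({"distrust"})
theta = T | D                                      # the whole frame (ignorance)
direct = {T: 0.6, D: 0.1, theta: 0.3}              # clusterhead's own observation
recomm = {T: 0.5, D: 0.2, theta: 0.3}              # a neighbour's recommendation
print(dempster_combine(direct, recomm))
```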

  13. A meta-learning system based on genetic algorithms

    NASA Astrophysics Data System (ADS)

    Pellerin, Eric; Pigeon, Luc; Delisle, Sylvain

    2004-04-01

    The design of an efficient machine learning process through self-adaptation is a great challenge. The goal of meta-learning is to build a self-adaptive learning system that is constantly adapting to its specific (and dynamic) environment. To that end, the meta-learning mechanism must improve its bias dynamically by updating the current learning strategy in accordance with its available experiences or meta-knowledge. We suggest using genetic algorithms as the basis of an adaptive system. In this work, we propose a meta-learning system based on a combination of the a priori and a posteriori concepts. A priori refers to input information and knowledge available at the beginning, used to build and evolve one or more sets of parameters by exploiting the context of the system's information. The self-learning component is based on genetic algorithms and neural Darwinism. A posteriori refers to the implicit knowledge discovered by estimation of the future states of parameters and is also applied to the finding of optimal parameter values. The in-progress research presented here suggests a framework for the discovery of knowledge that can support human experts in their intelligence information assessment tasks. The conclusion presents avenues for further research in genetic algorithms and their capability to learn to learn.

  14. Orthogonalizing EM: A design-based least squares algorithm

    PubMed Central

    Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z. G.

    2016-01-01

    We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For the ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p. Supplementary materials for this article are available online. PMID:27499558
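
    For ordinary least squares the OEM idea reduces to a simple fixed-point iteration: choose the augmenting rows Delta so that the stacked design [X; Delta] has orthogonal columns (e.g. Delta^T Delta = d*I - X^T X, with d at least the largest eigenvalue of X^T X), impute the missing responses as Delta*beta, and re-solve. Under that choice the E- and M-steps collapse to the update sketched below; this is a hedged reading of the OLS case only, with an illustrative iteration count and data.

```python
import numpy as np

def oem_ols(X, y, n_iter=500):
    """Orthogonalizing-EM iteration for ordinary least squares.

    With the active orthogonalization Delta^T Delta = d*I - X^T X, the E/M steps
    collapse to  beta <- beta + X^T (y - X beta) / d.
    """
    d = np.linalg.eigvalsh(X.T @ X).max()   # makes d*I - X^T X positive semidefinite
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        beta = beta + X.T @ (y - X @ beta) / d
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=200)
print(np.allclose(oem_ols(X, y), np.linalg.lstsq(X, y, rcond=None)[0], atol=1e-3))
```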

  15. Statistical algorithms for ontology-based annotation of scientific literature

    PubMed Central

    2014-01-01

    Background Ontologies encode relationships within a domain in robust data structures that can be used to annotate data objects, including scientific papers, in ways that ease tasks such as search and meta-analysis. However, the annotation process requires significant time and effort when performed by humans. Text mining algorithms can facilitate this process, but they render an analysis mainly based upon keyword, synonym and semantic matching. They do not leverage information embedded in an ontology's structure. Methods We present a probabilistic framework that facilitates the automatic annotation of literature by indirectly modeling the restrictions among the different classes in the ontology. Our research focuses on annotating human functional neuroimaging literature within the Cognitive Paradigm Ontology (CogPO). We use an approach that combines the stochastic simplicity of naïve Bayes with the formal transparency of decision trees. Our data structure is easily modifiable to reflect changing domain knowledge. Results We compare our results across naïve Bayes, Bayesian Decision Trees, and Constrained Decision Tree classifiers that keep a human expert in the loop, in terms of the F1-micro score quality measure. Conclusions Unlike traditional text mining algorithms, our framework can model the knowledge encoded by the dependencies in an ontology, albeit indirectly. We successfully exploit the fact that CogPO has explicitly stated restrictions, and implicit dependencies in the form of patterns in the expert curated annotations. PMID:25093071

  16. Interactive Genetic Algorithm - An Adaptive and Interactive Decision Support Framework for Design of Optimal Groundwater Monitoring Plans

    NASA Astrophysics Data System (ADS)

    Babbar-Sebens, M.; Minsker, B. S.

    2006-12-01

    In the water resources management field, decision making encompasses many kinds of engineering, social, and economic constraints and objectives. Representing all of these problem-dependent criteria through models (analytical or numerical) and various formulations (e.g., objectives, constraints, etc.) within an optimization-simulation system can be a very non-trivial issue. Most models and formulations utilized for discerning desirable traits in a solution can only approximate the decision maker's (DM) true preference criteria, and they often fail to consider important qualitative and incomputable phenomena related to the management problem. In our research, we have proposed novel decision support frameworks that allow DMs to actively participate in the optimization process. The DMs explicitly indicate their true preferences based on their subjective criteria and the results of various simulation models and formulations. The feedback from the DMs is then used to guide the search process towards solutions that are "all-rounders" from the perspective of the DM. The two main research questions explored in this work are: a) Does interaction between the optimization algorithm and a DM assist the system in searching for groundwater monitoring designs that are robust from the DM's perspective?, and b) How can an interactive search process be made more effective when human factors, such as human fatigue and cognitive learning processes, affect the performance of the algorithm? The application of these frameworks on a real-world groundwater long-term monitoring (LTM) case study in Michigan highlighted the following salient advantages: a) in contrast to the non-interactive optimization methodology, the proposed interactive frameworks were able to identify low cost monitoring designs whose interpolation maps respected the expected spatial distribution of the contaminants, b) for many same-cost designs, the interactive methodologies were able to propose multiple alternatives

  17. Automated DNA Base Pair Calling Algorithm

    1999-07-07

    The procedure solves the problem of calling the DNA base pair sequence from two channel electropherogram separations in an automated fashion. The core of the program involves a peak picking algorithm based upon first, second, and third derivative spectra for each electropherogram channel, signal levels as a function of time, peak spacing, base pair signal to noise sequence patterns, frequency vs ratio of the two channel histograms, and confidence levels generated during the run. The ratios of the two channels at peak centers can be used to accurately and reproducibly determine the base pair sequence. A further enhancement is a novel Gaussian deconvolution used to determine the peak heights used in generating the ratio.

  18. Differential Search Algorithm Based Edge Detection

    NASA Astrophysics Data System (ADS)

    Gunen, M. A.; Civicioglu, P.; Beşdok, E.

    2016-06-01

    In this paper, a new method has been presented for the extraction of edge information by using the Differential Search Optimization Algorithm. The proposed method is based on using a new heuristic image thresholding method for edge detection. The success of the proposed method has been examined on the fusion of two remotely sensed images. The applicability of the proposed method to edge detection and image fusion problems has been analysed in detail, and the empirical results show that the proposed method is useful for solving these problems.

  20. Tile-Based Two-Dimensional Phase Unwrapping for Digital Holography Using a Modular Framework.

    PubMed

    Antonopoulos, Georgios C; Steltner, Benjamin; Heisterkamp, Alexander; Ripken, Tammo; Meyer, Heiko

    2015-01-01

    A variety of physical and biomedical imaging techniques, such as digital holography, interferometric synthetic aperture radar (InSAR), or magnetic resonance imaging (MRI) enable measurement of the phase of a physical quantity additionally to its amplitude. However, the phase can commonly only be measured modulo 2π, as a so called wrapped phase map. Phase unwrapping is the process of obtaining the underlying physical phase map from the wrapped phase. Tile-based phase unwrapping algorithms operate by first tessellating the phase map, then unwrapping individual tiles, and finally merging them to a continuous phase map. They can be implemented computationally efficiently and are robust to noise. However, they are prone to failure in the presence of phase residues or erroneous unwraps of single tiles. We tried to overcome these shortcomings by creating novel tile unwrapping and merging algorithms as well as creating a framework that allows to combine them in modular fashion. To increase the robustness of the tile unwrapping step, we implemented a model-based algorithm that makes efficient use of linear algebra to unwrap individual tiles. Furthermore, we adapted an established pixel-based unwrapping algorithm to create a quality guided tile merger. These original algorithms as well as previously existing ones were implemented in a modular phase unwrapping C++ framework. By examining different combinations of unwrapping and merging algorithms we compared our method to existing approaches. We could show that the appropriate choice of unwrapping and merging algorithms can significantly improve the unwrapped result in the presence of phase residues and noise. Beyond that, our modular framework allows for efficient design and test of new tile-based phase unwrapping algorithms. The software developed in this study is freely available. PMID:26599984

  1. Tile-Based Two-Dimensional Phase Unwrapping for Digital Holography Using a Modular Framework

    PubMed Central

    Antonopoulos, Georgios C.; Steltner, Benjamin; Heisterkamp, Alexander; Ripken, Tammo; Meyer, Heiko

    2015-01-01

    A variety of physical and biomedical imaging techniques, such as digital holography, interferometric synthetic aperture radar (InSAR), or magnetic resonance imaging (MRI) enable measurement of the phase of a physical quantity additionally to its amplitude. However, the phase can commonly only be measured modulo 2π, as a so called wrapped phase map. Phase unwrapping is the process of obtaining the underlying physical phase map from the wrapped phase. Tile-based phase unwrapping algorithms operate by first tessellating the phase map, then unwrapping individual tiles, and finally merging them to a continuous phase map. They can be implemented computationally efficiently and are robust to noise. However, they are prone to failure in the presence of phase residues or erroneous unwraps of single tiles. We tried to overcome these shortcomings by creating novel tile unwrapping and merging algorithms as well as creating a framework that allows to combine them in modular fashion. To increase the robustness of the tile unwrapping step, we implemented a model-based algorithm that makes efficient use of linear algebra to unwrap individual tiles. Furthermore, we adapted an established pixel-based unwrapping algorithm to create a quality guided tile merger. These original algorithms as well as previously existing ones were implemented in a modular phase unwrapping C++ framework. By examining different combinations of unwrapping and merging algorithms we compared our method to existing approaches. We could show that the appropriate choice of unwrapping and merging algorithms can significantly improve the unwrapped result in the presence of phase residues and noise. Beyond that, our modular framework allows for efficient design and test of new tile-based phase unwrapping algorithms. The software developed in this study is freely available. PMID:26599984
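
    The core tile operations can be illustrated in one dimension: each tile is unwrapped independently (Itoh's method), and adjacent tiles are merged by adding the multiple of 2π that best matches their shared boundary. The sketch below is a deliberately simplified 1D analogue of the 2D tile-and-merge idea described above, not a reimplementation of the released C++ framework.

```python
import numpy as np

def unwrap_tile(wrapped):
    """Itoh's 1D unwrap: integrate the wrapped phase differences."""
    d = np.diff(wrapped)
    d = (d + np.pi) % (2 * np.pi) - np.pi          # wrap differences into (-pi, pi]
    return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(d)))

def merge_tiles(tiles):
    """Shift each unwrapped tile by a multiple of 2*pi to match its neighbour."""
    merged = tiles[0]
    for tile in tiles[1:]:
        k = np.round((merged[-1] - tile[0]) / (2 * np.pi))
        merged = np.concatenate((merged, tile + 2 * np.pi * k))
    return merged

true_phase = np.linspace(0, 12 * np.pi, 400)        # smooth ramp spanning many cycles
wrapped = np.angle(np.exp(1j * true_phase))         # measured phase, wrapped to (-pi, pi]
tiles = [unwrap_tile(t) for t in np.split(wrapped, 4)]
recovered = merge_tiles(tiles)
print(np.allclose(recovered, true_phase, atol=1e-6))
```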

  2. A framework for benchmarking of homogenisation algorithm performance on the global scale

    NASA Astrophysics Data System (ADS)

    Willett, K.; Williams, C.; Jolliffe, I. T.; Lund, R.; Alexander, L. V.; Brönnimann, S.; Vincent, L. A.; Easterbrook, S.; Venema, V. K. C.; Berry, D.; Warren, R. E.; Lopardo, G.; Auchmann, R.; Aguilar, E.; Menne, M. J.; Gallagher, C.; Hausfather, Z.; Thorarinsdottir, T.; Thorne, P. W.

    2014-09-01

    The International Surface Temperature Initiative (ISTI) is striving towards substantively improving our ability to robustly understand historical land surface air temperature change at all scales. A key recently completed first step has been collating all available records into a comprehensive open access, traceable and version-controlled databank. The crucial next step is to maximise the value of the collated data through a robust international framework of benchmarking and assessment for product intercomparison and uncertainty estimation. We focus on uncertainties arising from the presence of inhomogeneities in monthly mean land surface temperature data and the varied methodological choices made by various groups in building homogeneous temperature products. The central facet of the benchmarking process is the creation of global-scale synthetic analogues to the real-world database where both the "true" series and inhomogeneities are known (a luxury the real-world data do not afford us). Hence, algorithmic strengths and weaknesses can be meaningfully quantified and conditional inferences made about the real-world climate system. Here we discuss the necessary framework for developing an international homogenisation benchmarking system on the global scale for monthly mean temperatures. The value of this framework is critically dependent upon the number of groups taking part and so we strongly advocate involvement in the benchmarking exercise from as many data analyst groups as possible to make the best use of this substantial effort.

  3. An example-based brain MRI simulation framework

    NASA Astrophysics Data System (ADS)

    He, Qing; Roy, Snehashis; Jog, Amod; Pham, Dzung L.

    2015-03-01

    The simulation of magnetic resonance (MR) images plays an important role in the validation of image analysis algorithms such as image segmentation, due to lack of sufficient ground truth in real MR images. Previous work on MRI simulation has focused on explicitly modeling the MR image formation process. However, because of the overwhelming complexity of MR acquisition these simulations must involve simplifications and approximations that can result in visually unrealistic simulated images. In this work, we describe an example-based simulation framework, which uses an "atlas" consisting of an MR image and its anatomical models derived from the hard segmentation. The relationships between the MR image intensities and its anatomical models are learned using a patch-based regression that implicitly models the physics of the MR image formation. Given the anatomical models of a new brain, a new MR image can be simulated using the learned regression. This approach has been extended to also simulate intensity inhomogeneity artifacts based on the statistical model of training data. Results show that the example based MRI simulation method is capable of simulating different image contrasts and is robust to different choices of atlas. The simulated images resemble real MR images more than simulations produced by a physics-based model.

  4. Online adaptive decision fusion framework based on projections onto convex sets with application to wildfire detection in video

    NASA Astrophysics Data System (ADS)

    Günay, Osman; Töreyin, Behcet Uǧur; Çetin, Ahmet Enis

    2011-07-01

    In this paper, an online adaptive decision fusion framework is developed for image analysis and computer vision applications. In this framework, it is assumed that the compound algorithm consists of several sub-algorithms, each of which yields its own decision as a real number centered around zero, representing the confidence level of that particular sub-algorithm. Decision values are linearly combined with weights that are updated online according to an active fusion method based on performing orthogonal projections onto convex sets describing sub-algorithms. It is assumed that there is an oracle, who is usually a human operator, providing feedback to the decision fusion method. A video-based wildfire detection system is developed to evaluate the performance of the algorithm in handling the problems where data arrives sequentially. In this case, the oracle is the security guard of the forest lookout tower verifying the decision of the combined algorithm. Simulation results are presented.
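
    The weight update in such projection-based fusion schemes can be written as an orthogonal projection of the current weight vector onto the hyperplane on which the weighted combination reproduces the oracle's decision. The sketch below implements that generic affine-projection update; the sub-algorithm confidence values and oracle labels are synthetic, and the exact constraint sets used in the paper may differ.

```python
import numpy as np

def fuse_and_update(w, x, oracle_y):
    """Fuse sub-algorithm decisions and, when feedback is available, project the
    weights onto the set {w : w . x = oracle_y}."""
    y_hat = float(w @ x)                           # combined (fused) decision
    if oracle_y is not None:                       # oracle feedback available
        w = w + (oracle_y - y_hat) / (x @ x) * x   # orthogonal projection step
    return y_hat, w

n_sub = 4
w = np.full(n_sub, 1.0 / n_sub)                    # start from uniform weights
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.uniform(-1, 1, n_sub)                  # sub-algorithm confidence values
    truth = 1.0 if x[:2].sum() > 0 else -1.0       # hypothetical ground truth
    y_hat, w = fuse_and_update(w, x, oracle_y=truth)
print(w)
```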

  5. Cognitive radio resource allocation based on coupled chaotic genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zu, Yun-Xiao; Zhou, Jie; Zeng, Chang-Chang

    2010-11-01

    A coupled chaotic genetic algorithm for cognitive radio resource allocation which is based on genetic algorithm and coupled Logistic map is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations are conducted for cognitive radio resource allocation by using the coupled chaotic genetic algorithm, simple genetic algorithm and dynamic allocation algorithm respectively. The simulation results show that, compared with simple genetic and dynamic allocation algorithm, coupled chaotic genetic algorithm reduces the total transmission power and bit error rate in cognitive radio system, and has faster convergence speed.

  6. Transaction-Based Building Controls Framework, Volume 1: Reference Guide

    SciTech Connect

    Somasundaram, Sriram; Pratt, Robert G.; Akyol, Bora A.; Fernandez, Nicholas; Foster, Nikolas AF; Katipamula, Srinivas; Mayhorn, Ebony T.; Somani, Abhishek; Steckley, Andrew C.; Taylor, Zachary T.

    2014-12-01

    This document proposes a framework concept to achieve the objectives of raising buildings’ efficiency and energy savings potential, benefitting building owners and operators. We call it a transaction-based framework, wherein mutually-beneficial and cost-effective market-based transactions can be enabled between multiple players across different domains. Transaction-based building controls are one part of the transactional energy framework. While these controls realize benefits by enabling automatic, market-based intra-building efficiency optimizations, the transactional energy framework provides similar benefits using the same market-based structure, yet on a larger scale and beyond just buildings, to society at large.

  7. An open-source framework for stress-testing non-invasive foetal ECG extraction algorithms.

    PubMed

    Andreotti, Fernando; Behar, Joachim; Zaunseder, Sebastian; Oster, Julien; Clifford, Gari D

    2016-05-01

    avoided. Data, extraction algorithms and evaluation routines were released as part of the fecgsyn toolbox on Physionet under a GNU GPL open-source license. This contribution provides a standard framework for benchmarking and regulatory testing of NI-FECG extraction algorithms. PMID:27067286

  8. Spectro-Perfectionism: An Algorithmic Framework for Photon Noise-Limited Extraction of Optical Fiber Spectroscopy

    NASA Astrophysics Data System (ADS)

    Bolton, Adam S.; Schlegel, David J.

    2010-02-01

    We describe a new algorithm for the "perfect" extraction of one-dimensional (1D) spectra from two-dimensional (2D) digital images of optical fiber spectrographs, based on accurate 2D forward modeling of the raw pixel data. The algorithm is correct for arbitrarily complicated 2D point-spread functions (PSFs), as compared to the traditional optimal extraction algorithm, which is only correct for a limited class of separable PSFs. The algorithm results in statistically independent extracted samples in the 1D spectrum, and preserves the full native resolution of the 2D spectrograph without degradation. Both the statistical errors and the 1D resolution of the extracted spectrum are accurately determined, allowing a correct χ2 comparison of any model spectrum with the data. Using a model PSF similar to that found in the red channel of the Sloan Digital Sky Survey spectrograph, we compare the performance of our algorithm to that of cross-section based optimal extraction, and also demonstrate that our method allows coaddition and foreground estimation to be carried out as an integral part of the extraction step. This work demonstrates the feasibility of current and next-generation multifiber spectrographs for faint-galaxy surveys even in the presence of strong night-sky foregrounds. We describe the handling of subtleties arising from fiber-to-fiber cross talk, discuss some of the likely challenges in deploying our method to the analysis of a full-scale survey, and note that our algorithm could be generalized into an optimal method for the rectification and combination of astronomical imaging data.

  9. PDE Based Algorithms for Smooth Watersheds.

    PubMed

    Hodneland, Erlend; Tai, Xue-Cheng; Kalisch, Henrik

    2016-04-01

    Watershed segmentation is useful for a number of image segmentation problems with a wide range of practical applications. Traditionally, the tracking of the immersion front is done by applying a fast sorting algorithm. In this work, we explore a continuous approach based on a geometric description of the immersion front which gives rise to a partial differential equation. The main advantage of using a partial differential equation to track the immersion front is that the method becomes versatile and may easily be stabilized by introducing regularization terms. Coupling the geometric approach with a proper "merging strategy" creates a robust algorithm which minimizes over- and under-segmentation even without predefined markers. Since reliable markers defined prior to segmentation can be difficult to construct automatically for various reasons, being able to treat marker-free situations is a major advantage of the proposed method over earlier watershed formulations. The motivation for the methods developed in this paper is taken from high-throughput screening of cells. A fully automated segmentation of single cells enables the extraction of cell properties from large data sets, which can provide substantial insight into a biological model system. Applying smoothing to the boundaries can improve the accuracy in many image analysis tasks requiring a precise delineation of the plasma membrane of the cell. The proposed segmentation method is applied to real images containing fluorescently labeled cells, and the experimental results show that our implementation is robust and reliable for a variety of challenging segmentation tasks.

  10. Speech Enhancement based on Compressive Sensing Algorithm

    NASA Astrophysics Data System (ADS)

    Sulong, Amart; Gunawan, Teddy S.; Khalifa, Othman O.; Chebil, Jalel

    2013-12-01

    Various methods for speech enhancement have been proposed over the years, with design efforts focused mainly on quality and intelligibility; the method proposed here targets a high level of performance. Compressive sensing (CS) is a new paradigm for acquiring signals, fundamentally different from uniform-rate digitization followed by compression, which is often used for transmission or storage; a novel speech enhancement method based on CS is proposed. CS can reduce the number of degrees of freedom of a sparse or compressible signal by permitting only certain configurations of large and zero/small coefficients and by using structured sparsity models. CS therefore provides a way of reconstructing a compressed version of the speech contained in the original signal from only a small number of linear, non-adaptive measurements. The performance of the overall algorithm is evaluated in terms of speech quality, optimized using informal listening tests and the Perceptual Evaluation of Speech Quality (PESQ) measure. Experimental results show that the CS algorithm performs very well over a wide range of speech tests and gives good performance for speech enhancement, with better noise suppression than conventional approaches and no obvious degradation of speech quality.

  11. Comparison of cone beam artifacts reduction: two pass algorithm vs TV-based CS algorithm

    NASA Astrophysics Data System (ADS)

    Choi, Shinkook; Baek, Jongduk

    2015-03-01

    In cone beam computed tomography (CBCT), the severity of the cone beam artifacts increases as the cone angle increases. To reduce the cone beam artifacts, several modified FDK algorithms and compressed sensing based iterative algorithms have been proposed. In this paper, we used the two pass algorithm and the Gradient-Projection-Barzilai-Borwein (GPBB) algorithm to reduce the cone beam artifacts, and compared their performance using the structural similarity (SSIM) index. In the two pass algorithm, it is assumed that the cone beam artifacts are mainly caused by extreme-density (ED) objects, and therefore the algorithm reproduces the cone beam artifacts (i.e., the error image) produced by ED objects and then subtracts them from the original image. The GPBB algorithm is a compressed sensing based iterative algorithm which minimizes an energy function by calculating the gradient projection with the step size determined by the Barzilai-Borwein formulation; therefore it can estimate missing data caused by the cone beam artifacts. To evaluate the performance of the two algorithms, we used test objects consisting of 7 ellipsoids separated along the z direction, and cone beam artifacts were generated using a 30 degree cone angle. Even though the FDK algorithm produced severe cone beam artifacts with a large cone angle, the two pass algorithm reduced the cone beam artifacts with small residual errors caused by inaccuracies in the ED objects. In contrast, the GPBB algorithm completely removed the cone beam artifacts and restored the original shape of the objects.

  12. sp3-hybridized framework structure of group-14 elements discovered by genetic algorithm

    SciTech Connect

    Nguyen, Manh Cuong; Zhao, Xin; Wang, Cai-Zhuang; Ho, Kai-Ming

    2014-05-01

    Group-14 elements, including C, Si, Ge, and Sn, can form various stable and metastable structures. Finding new metastable structures of group-14 elements with desirable physical properties for new technological applications has attracted a lot of interest. Using a genetic algorithm, we discovered a new low-energy metastable distorted sp3-hybridized framework structure of the group-14 elements. It has P42/mnm symmetry with 12 atoms per unit cell. The void volume of this structure is as large as 139.7Å3 for Si P42/mnm, and it can be used for gas or metal-atom encapsulation. Band-structure calculations show that P42/mnm structures of Si and Ge are semiconducting with energy band gaps close to the optimal values for optoelectronic or photovoltaic applications. With metal-atom encapsulation, the P42/mnm structure would also be a candidate for rattling-mediated superconducting or used as thermoelectric materials.

  13. A Framework for Socio-Scientific Issues Based Education

    ERIC Educational Resources Information Center

    Presley, Morgan L.; Sickel, Aaron J.; Muslu, Nilay; Merle-Johnson, Dominike; Witzig, Stephen B.; Izci, Kemal; Sadler, Troy D.

    2013-01-01

    Science instruction based on student exploration of socio-scientific issues (SSI) has been presented as a powerful strategy for supporting science learning and the development of scientific literacy. This paper presents an instructional framework for SSI based education. The framework is based on a series of research studies conducted in a diverse…

  14. Implicit function-based phantoms for evaluation of registration algorithms

    NASA Astrophysics Data System (ADS)

    Gopalakrishnan, Girish; Poston, Timothy; Nagaraj, Nithin; Mullick, Rakesh; Knoplioch, Jerome

    2005-04-01

    Medical image fusion is increasingly enhancing diagnostic accuracy by synergizing information from multiple images, obtained by the same modality at different times or from complementary modalities such as structural information from CT and functional from PET. An active, crucial research topic in fusion is validation of the registration (point-to-point correspondence) used. Phantoms and other simulated studies are useful in the absence of, or as a preliminary to, definitive clinical tests. Software phantoms in particular have the added advantage of robustness, repeatability and reproducibility. Our virtual-lung-phantom-based scheme can test the accuracy of any registration algorithm and is flexible enough for added levels of complexity (addition of blur/anti-alias, rotate/warp, and modality-associated noise) to help evaluate the robustness of an image registration/fusion methodology. Such a framework extends easily to different anatomies. The feature of adding software-based fiducials both within and outside simulated anatomies proves more beneficial than experiments using data from external fiducials on a patient, and it would help the diagnosing clinician make a prudent choice of registration algorithm.

  15. A Winner Determination Algorithm for Combinatorial Auctions Based on Hybrid Artificial Fish Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Genrang; Lin, ZhengChun

    The problem of winner determination in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines the First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem, based on the theory of AFSA. Experimental results show that the HAFSA is a fast and efficient algorithm for the winner determination problem. Compared with the Ant Colony Optimization Algorithm, it shows good performance and has broad application prospects.

  16. Genetic algorithm-based form error evaluation

    NASA Astrophysics Data System (ADS)

    Cui, Changcai; Li, Bing; Huang, Fugui; Zhang, Rencheng

    2007-07-01

    Form error evaluation of geometrical products is a nonlinear optimization problem, for which solutions have been attempted by different methods of some complexity. A genetic algorithm (GA) was developed to deal with the problem; it proved simple to understand and realize, and its key techniques have been investigated in detail. Firstly, the fitness function of the GA was discussed emphatically as a bridge between the GA and the concrete problems to be solved. Secondly, the real-number-based representation of the desired solutions in the continuous-space optimization problem was discussed. Thirdly, several improved evolutionary strategies of the GA were described in detail. These evolutionary strategies were the selection operation of 'odd number selection plus roulette wheel selection', the crossover operation of 'arithmetic crossover between near relatives and far relatives' and the mutation operation of 'adaptive Gaussian' mutation. After evolution from generation to generation with these strategies, the initial population, produced stochastically around the least-squares solutions of the problem, is updated and improved iteratively until the best chromosome or individual of the GA appears. Finally, some examples were given to verify the evolutionary method. Experimental results show that the GA-based method can find desired solutions that are superior to the least-squares solutions, except for a few examples in which the GA-based method obtains results similar to those of the least-squares method. Compared with other optimization techniques, the GA-based method can obtain almost equal results but with less complicated models and less computation time.
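
    A compact real-coded GA using the operator types named above (roulette-wheel selection, arithmetic crossover, Gaussian mutation, and initialization around the least-squares solution) is sketched below for a minimum-zone flatness evaluation, where the fitness is the peak-to-valley deviation of measured points from a candidate plane z = a*x + b*y + c. The data, population size, and operator settings are illustrative assumptions rather than the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 10, 60), rng.uniform(0, 10, 60)])
z = 0.02 * pts[:, 0] - 0.01 * pts[:, 1] + 5.0 + rng.normal(0, 0.005, 60)  # measured surface

def flatness(ind):
    """Peak-to-valley deviation from the plane z = a*x + b*y + c (to be minimised)."""
    a, b, c = ind
    r = z - (a * pts[:, 0] + b * pts[:, 1] + c)
    return r.max() - r.min()

def ga(pop_size=40, n_gen=200, sigma=0.01):
    # Initialise the population around the least-squares plane.
    A = np.column_stack([pts, np.ones(len(pts))])
    ls = np.linalg.lstsq(A, z, rcond=None)[0]
    pop = ls + rng.normal(0, 0.05, (pop_size, 3))
    for _ in range(n_gen):
        fit = np.array([flatness(p) for p in pop])
        prob = fit.max() - fit + 1e-12
        prob /= prob.sum()                                 # roulette-wheel selection
        parents = pop[rng.choice(pop_size, size=pop_size, p=prob)]
        alpha = rng.uniform(0, 1, (pop_size, 1))           # arithmetic crossover
        children = alpha * parents + (1 - alpha) * parents[::-1]
        children += rng.normal(0, sigma, children.shape)   # Gaussian mutation
        pop = np.vstack([pop[np.argmin(fit)], children[1:]])  # elitism: keep the best
    fit = np.array([flatness(p) for p in pop])
    return pop[np.argmin(fit)], fit.min()

best, err = ga()
print("minimum-zone flatness estimate:", err)
```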

  17. A New Aloha Anti-Collision Algorithm Based on CDMA

    NASA Astrophysics Data System (ADS)

    Bai, Enjian; Feng, Zhu

    Tag collision is a common problem in RFID (radio frequency identification) systems. The problem affects the integrity of data transmission during communication in the RFID system. Based on an analysis of existing anti-collision algorithms, a novel anti-collision algorithm is presented. The new algorithm combines the group dynamic frame slotted Aloha algorithm with code division multiple access technology. The algorithm can effectively reduce the collision probability between tags. For the same number of tags, the algorithm is effective in reducing the reader recognition time and improving the overall system throughput rate.

  18. An improved localization algorithm based on genetic algorithm in wireless sensor networks.

    PubMed

    Peng, Bo; Li, Lei

    2015-04-01

    Wireless sensor networks (WSNs) are widely used in many applications. A WSN is a decentralized wireless network comprised of nodes that autonomously set up the network. Node localization, that is, determining the position of a node in the network, is an essential part of many sensor network operations and applications. The existing localization algorithms can be classified into two categories: range-based and range-free. Range-based localization algorithms place requirements on the hardware and are thus expensive to implement in practice. Range-free localization algorithms reduce the hardware cost. Because of the hardware limitations of WSN devices, solutions in range-free localization are being pursued as a cost-effective alternative to more expensive range-based approaches. However, these techniques usually have higher localization error compared to the range-based algorithms. DV-Hop is a typical range-free localization algorithm utilizing hop-distance estimation. In this paper, we propose an improved DV-Hop algorithm based on a genetic algorithm. Simulation results show that our proposed algorithm improves the localization accuracy compared with previous algorithms.
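
    DV-Hop itself is simple enough to sketch: anchors flood hop counts, each anchor converts its known inter-anchor distances into an average hop size, unknown nodes turn hop counts into distance estimates, and positions follow from least-squares multilateration. The toy network and the plain least-squares solver below are illustrative; the paper's contribution is to replace that last estimation step with a genetic-algorithm search, which is not reproduced here.

```python
import numpy as np

# Hypothetical toy network: 3 anchors with known positions, 1 unknown node, and
# hop counts obtained from the usual controlled flooding (given here directly).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
hops_between_anchors = np.array([[0, 4, 4],
                                 [4, 0, 5],
                                 [4, 5, 0]])
hops_to_unknown = np.array([3, 2, 4])

# Each anchor's average hop size = sum of distances to other anchors / sum of hops.
d_aa = np.linalg.norm(anchors[:, None] - anchors[None, :], axis=2)
hop_size = d_aa.sum(axis=1) / hops_between_anchors.sum(axis=1)

# The unknown node uses the hop size of its nearest anchor (fewest hops).
est_dist = hop_size[np.argmin(hops_to_unknown)] * hops_to_unknown

# Linearised least-squares multilateration (subtract the last anchor's equation).
x_n, y_n = anchors[-1]
A = 2 * (anchors[:-1] - anchors[-1])
b = (est_dist[-1] ** 2 - est_dist[:-1] ** 2
     + (anchors[:-1] ** 2).sum(axis=1) - x_n ** 2 - y_n ** 2)
position = np.linalg.lstsq(A, b, rcond=None)[0]
print("estimated position:", position)
```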

  19. Cordic based algorithms for software defined radio (SDR) baseband processing

    NASA Astrophysics Data System (ADS)

    Heyne, B.; Götze, J.

    2006-09-01

    This paper presents two Cordic based algorithms which may be used for digital baseband processing in OFDM and/or CDMA based communication systems. The first one is a linear least squares based multiuser detector for CDMA incorporating descrambling and despreading. The second algorithm is a pure Cordic based FFT implementation. Both algorithms can be implemented using solely Cordic based architectures (e.g. coprocessors or ASIPs). The algorithms exactly fit the needs of a multistandard terminal as they both are freely parameterizable. This applies both to the accuracy of the results and to the parameters of the performed function (e.g. the size of the FFT).
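
    The Cordic kernel that such baseband blocks build on rotates a vector with a sequence of shift-and-add micro-rotations; a minimal floating-point, rotation-mode sketch computing cos/sin is shown below. Fixed-point word lengths, scaling strategies, and the vectoring-mode variants used in actual multiuser detection or FFT hardware are omitted.

```python
import math

def cordic_cos_sin(theta, n_iter=32):
    """Rotation-mode CORDIC: rotate (1, 0) by theta using add/shift micro-rotations."""
    angles = [math.atan(2.0 ** -i) for i in range(n_iter)]
    K = 1.0
    for i in range(n_iter):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))   # accumulated gain correction
    x, y, z = 1.0, 0.0, theta
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0                    # rotate towards z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * K, y * K                                # (cos(theta), sin(theta))

c, s = cordic_cos_sin(0.7)
print(c - math.cos(0.7), s - math.sin(0.7))            # both errors are tiny
```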

  20. An incremental clustering algorithm based on Mahalanobis distance

    NASA Astrophysics Data System (ADS)

    Aik, Lim Eng; Choon, Tan Wee

    2014-12-01

    The classical fuzzy c-means clustering algorithm is insufficient for clustering non-spherical or elliptically distributed datasets. This paper replaces the Euclidean distance of classical fuzzy c-means clustering with the Mahalanobis distance and applies the Mahalanobis distance to incremental learning for its merits. A Mahalanobis distance based fuzzy incremental clustering learning algorithm is proposed. Experimental results show that the algorithm not only remedies the defect of the fuzzy c-means algorithm but also increases training accuracy.
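
    The substitution at the heart of the method is the distance measure itself: distances to a cluster centre are evaluated with the cluster's covariance instead of the identity. A minimal sketch of the Mahalanobis distance, and of why it suits elliptical clusters, is given below; the incremental fuzzy c-means machinery around it is not reproduced.

```python
import numpy as np

def mahalanobis(x, center, cov):
    """Mahalanobis distance of x from a cluster with the given centre and covariance."""
    diff = x - center
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

# Elongated (elliptical) cluster: Euclidean distance treats both points alike,
# while the Mahalanobis distance recognises that p1 lies along the cluster's main axis.
center = np.zeros(2)
cov = np.array([[9.0, 0.0], [0.0, 1.0]])      # spread mostly along the x axis
p1, p2 = np.array([3.0, 0.0]), np.array([0.0, 3.0])
print(np.linalg.norm(p1), np.linalg.norm(p2))                      # equal Euclidean distances
print(mahalanobis(p1, center, cov), mahalanobis(p2, center, cov))  # 1.0 vs 3.0
```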

  1. Adaptive RED algorithm based on minority game

    NASA Astrophysics Data System (ADS)

    Wei, Jiaolong; Lei, Ling; Qian, Jingjing

    2007-11-01

    With more and more applications appearing and technology developing in the Internet, relying on terminal systems alone cannot satisfy the complicated QoS demands of the network. Router mechanisms must participate in protecting responsive flows from non-responsive ones. Routers mainly use active queue management (AQM) mechanisms to avoid congestion. Focusing on the interaction between routers, this paper applies the minority game to describe the interaction of the users and observes its effect on the average queue length. Since the parameters α and β of ARED are hard to determine, adaptive RED based on the minority game can model the interactions of the participants and tune the ARED parameters α and β towards their best values. Adaptive RED based on the minority game optimizes ARED and smooths the average queue length. At the same time, this paper extends the network simulator platform NS by adding new elements. Simulations have been carried out and the results show that the new algorithm can achieve the anticipated objectives.

  2. A Framework for the Comparative Assessment of Neuronal Spike Sorting Algorithms towards More Accurate Off-Line and On-Line Microelectrode Arrays Data Analysis.

    PubMed

    Regalia, Giulia; Coelli, Stefania; Biffi, Emilia; Ferrigno, Giancarlo; Pedrocchi, Alessandra

    2016-01-01

    Neuronal spike sorting algorithms are designed to retrieve neuronal network activity on a single-cell level from extracellular multiunit recordings with Microelectrode Arrays (MEAs). In typical analysis of MEA data, one spike sorting algorithm is applied indiscriminately to all electrode signals. However, this approach neglects the dependency of algorithms' performances on the neuronal signals properties at each channel, which require data-centric methods. Moreover, sorting is commonly performed off-line, which is time and memory consuming and prevents researchers from having an immediate glance at ongoing experiments. The aim of this work is to provide a versatile framework to support the evaluation and comparison of different spike classification algorithms suitable for both off-line and on-line analysis. We incorporated different spike sorting "building blocks" into a Matlab-based software, including 4 feature extraction methods, 3 feature clustering methods, and 1 template matching classifier. The framework was validated by applying different algorithms on simulated and real signals from neuronal cultures coupled to MEAs. Moreover, the system has been proven effective in running on-line analysis on a standard desktop computer, after the selection of the most suitable sorting methods. This work provides a useful and versatile instrument for a supported comparison of different options for spike sorting towards more accurate off-line and on-line MEA data analysis. PMID:27239191

  3. A Framework for the Comparative Assessment of Neuronal Spike Sorting Algorithms towards More Accurate Off-Line and On-Line Microelectrode Arrays Data Analysis

    PubMed Central

    Pedrocchi, Alessandra

    2016-01-01

    Neuronal spike sorting algorithms are designed to retrieve neuronal network activity on a single-cell level from extracellular multiunit recordings with Microelectrode Arrays (MEAs). In typical analysis of MEA data, one spike sorting algorithm is applied indiscriminately to all electrode signals. However, this approach neglects the dependency of algorithms' performances on the neuronal signals properties at each channel, which require data-centric methods. Moreover, sorting is commonly performed off-line, which is time and memory consuming and prevents researchers from having an immediate glance at ongoing experiments. The aim of this work is to provide a versatile framework to support the evaluation and comparison of different spike classification algorithms suitable for both off-line and on-line analysis. We incorporated different spike sorting “building blocks” into a Matlab-based software, including 4 feature extraction methods, 3 feature clustering methods, and 1 template matching classifier. The framework was validated by applying different algorithms on simulated and real signals from neuronal cultures coupled to MEAs. Moreover, the system has been proven effective in running on-line analysis on a standard desktop computer, after the selection of the most suitable sorting methods. This work provides a useful and versatile instrument for a supported comparison of different options for spike sorting towards more accurate off-line and on-line MEA data analysis. PMID:27239191

  4. Development and Evaluation of Vectorised and Multi-Core Event Reconstruction Algorithms within the CMS Software Framework

    NASA Astrophysics Data System (ADS)

    Hauth, T.; Innocente, V.; Piparo, D.

    2012-12-01

    The processing of data acquired by the CMS detector at LHC is carried out with an object-oriented C++ software framework: CMSSW. With the increasing luminosity delivered by the LHC, the treatment of recorded data requires extraordinarily large computing resources, also in terms of CPU usage. A possible solution to cope with this task is the exploitation of the features offered by the latest microprocessor architectures. Modern CPUs present several vector units, the capacity of which is growing steadily with the introduction of new processor generations. Moreover, an increasing number of cores per die is offered by the main vendors, even on consumer hardware. Most recent C++ compilers provide facilities to take advantage of such innovations, either through explicit statements in the program sources or by automatically adapting the generated machine instructions to the available hardware, without the need to modify the existing code base. Programming techniques to implement reconstruction algorithms and optimised data structures are presented that aim at scalable vectorization and parallelization of the calculations. One of their features is the use of new language features of the C++11 standard. Portions of the CMSSW framework are illustrated which have been found to be especially profitable for the application of vectorization and multi-threading techniques. Specific utility components have been developed to help vectorization and parallelization. They can easily become part of a larger common library. To conclude, careful measurements are described, which show the execution speedups achieved via vectorised and multi-threaded code in the context of CMSSW.

  5. An evolutionary algorithm for global optimization based on self-organizing maps

    NASA Astrophysics Data System (ADS)

    Barmada, Sami; Raugi, Marco; Tucci, Mauro

    2016-10-01

    In this article, a new population-based algorithm for real-parameter global optimization is presented, which is denoted as self-organizing centroids optimization (SOC-opt). The proposed method uses a stochastic approach which is based on the sequential learning paradigm for self-organizing maps (SOMs). A modified version of the SOM is proposed where each cell contains an individual, which performs a search for a locally optimal solution and is also influenced by the search for a global optimum. The movement of the individuals in the search space is based on a discrete-time dynamic filter, and various choices of this filter are possible to obtain different dynamics of the centroids. In this way, a general framework is defined where well-known algorithms represent a particular case. The proposed algorithm is validated through a set of problems, which include non-separable problems, and compared with state-of-the-art algorithms for global optimization.

  6. Microphysical particle properties derived from inversion algorithms developed in the framework of EARLINET

    NASA Astrophysics Data System (ADS)

    Müller, Detlef; Böckmann, Christine; Kolgotin, Alexei; Schneidenbach, Lars; Chemyakin, Eduard; Rosemann, Julia; Znak, Pavel; Romanov, Anton

    2016-10-01

    We present a summary on the current status of two inversion algorithms that are used in EARLINET (European Aerosol Research Lidar Network) for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on a manually controlled inversion of optical data which allows for detailed sensitivity studies. The algorithms allow us to derive particle effective radius as well as volume and surface area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index still is a challenge in view of the accuracy required for these parameters in climate change studies in which light absorption needs to be known with high accuracy. It is an extreme challenge to retrieve the real part with an accuracy better than 0.05 and the imaginary part with accuracy better than 0.005-0.1 or ±50 %. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. On the basis of a few exemplary simulations with synthetic optical data we discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work. One algorithm was used with the purpose of testing how well microphysical parameters can be derived if the real part of the complex refractive index is known to at least 0.05 or 0.1. The other algorithm was used to find out how well microphysical parameters can be derived if this constraint for the real part is not applied. The optical data used in our study cover a range of Ångström exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types. We also tested

  7. Combined string searching algorithm based on Knuth-Morris-Pratt and Boyer-Moore algorithms

    NASA Astrophysics Data System (ADS)

    Tsarev, R. Yu; Chernigovskiy, A. S.; Tsareva, E. A.; Brezitskaya, V. V.; Nikiforov, A. Yu; Smirnov, N. A.

    2016-04-01

    The string searching task can be classified as a classic information processing task. Users either encounter the solution of this task while working with text processors or browsers, employing standard built-in tools, or the task is solved unseen by the users while they work with various computer programmes. Nowadays there are many algorithms for solving the string searching problem, and the main criterion of their effectiveness is searching speed. The larger the shift of the pattern relative to the string on a mismatch between pattern and string characters, the faster the algorithm runs. This article offers a combined algorithm developed on the basis of the well-known Knuth-Morris-Pratt and Boyer-Moore string searching algorithms. These algorithms are based on two different basic principles of pattern matching: the Knuth-Morris-Pratt algorithm is based upon forward pattern matching, while Boyer-Moore is based upon backward pattern matching. By uniting the two, the combined algorithm obtains a larger shift on a mismatch between pattern and string characters. The article provides an example that illustrates the operation of the Boyer-Moore, Knuth-Morris-Pratt, and combined algorithms and shows the advantage of the latter in solving the string searching problem.
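
    The abstract's central idea, taking the larger of the two shifts available on a mismatch, can be sketched as follows. This is an illustrative reconstruction, not the authors' exact formulation: the pattern is scanned left to right, and on a mismatch the shift is the maximum of the Knuth-Morris-Pratt failure-function shift and a bad-character shift adapted to forward scanning.

    def kmp_failure(pat):
        """fail[i] = length of the longest proper prefix of pat[:i+1] that is also its suffix."""
        fail = [0] * len(pat)
        k = 0
        for i in range(1, len(pat)):
            while k and pat[i] != pat[k]:
                k = fail[k - 1]
            if pat[i] == pat[k]:
                k += 1
            fail[i] = k
        return fail

    def combined_search(text, pat):
        """Yield all start positions of pat in text, shifting by the larger of the
        KMP (failure-function) shift and a bad-character shift on each mismatch."""
        n, m = len(text), len(pat)
        if m == 0 or m > n:
            return
        fail = kmp_failure(pat)
        s = 0
        while s <= n - m:
            j = 0
            while j < m and text[s + j] == pat[j]:
                j += 1
            if j == m:
                yield s
                s += max(1, m - fail[m - 1])        # shift after a full match
                continue
            kmp_shift = 1 if j == 0 else j - fail[j - 1]
            c = text[s + j]                          # mismatched text character
            # rightmost occurrence of c strictly left of position j, or jump past c entirely
            occ = next((k for k in range(j - 1, -1, -1) if pat[k] == c), -1)
            bc_shift = j - occ if occ >= 0 else j + 1
            s += max(kmp_shift, bc_shift)

    # list(combined_search("here is a simple example text", "example"))  -> [17]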

  8. A Viola-Jones based hybrid face detection framework

    NASA Astrophysics Data System (ADS)

    Murphy, Thomas M.; Broussard, Randy; Schultz, Robert; Rakvic, Ryan; Ngo, Hau

    2013-12-01

    Improvements in face detection performance would benefit many applications. The OpenCV library implements a standard solution, the Viola-Jones detector, with a statistically boosted rejection cascade of binary classifiers. Empirical evidence has shown that Viola-Jones underdetects in some instances. This research shows that a truncated cascade augmented by a neural network could recover these undetected faces. A hybrid framework is constructed, with a truncated Viola-Jones cascade followed by an artificial neural network, used to refine the face decision. Optimally, a truncation stage is selected that captures all faces and allows the neural network to remove the false alarms. A feedforward backpropagation network with one hidden layer is trained to discriminate faces based upon the thresholding (detection) values of intermediate stages of the full rejection cascade. A clustering algorithm is used as a precursor to the neural network, to group significantly overlapping detections. Evaluated on the CMU/VASC Image Database, comparison with an unmodified OpenCV approach shows: (1) a 37% increase in detection rates if constrained by the requirement of no increase in false alarms, (2) a 48% increase in detection rates if some additional false alarms are tolerated, and (3) an 82% reduction in false alarms with no reduction in detection rates. These results demonstrate improved face detection and could address the need for such improvement in various applications.

  9. An Airway Network Flow Assignment Approach Based on an Efficient Multiobjective Optimization Framework

    PubMed Central

    Guan, Xiangmin; Zhang, Xuejun; Zhu, Yanbo; Sun, Dengfeng; Lei, Jiaxing

    2015-01-01

    To reduce airspace congestion and flight delay simultaneously, this paper formulates the airway network flow assignment (ANFA) problem as a multiobjective optimization model and presents a new multiobjective optimization framework to solve it. Firstly, an effective multi-island parallel evolution algorithm with multiple evolution populations is employed to improve the optimization capability. Secondly, the nondominated sorting genetic algorithm II is applied for each population. In addition, a cooperative coevolution algorithm is adapted to divide the ANFA problem into several low-dimensional biobjective optimization problems which are easier to deal with. Finally, in order to maintain the diversity of solutions and to avoid prematurity, a dynamic adjustment operator based on solution congestion degree is specifically designed for the ANFA problem. Simulation results using the real traffic data from China air route network and daily flight plans demonstrate that the proposed approach can improve the solution quality effectively, showing superiority to the existing approaches such as the multiobjective genetic algorithm, the well-known multiobjective evolutionary algorithm based on decomposition, and a cooperative coevolution multiobjective algorithm as well as other parallel evolution algorithms with different migration topology. PMID:26180840

  10. An Airway Network Flow Assignment Approach Based on an Efficient Multiobjective Optimization Framework.

    PubMed

    Guan, Xiangmin; Zhang, Xuejun; Zhu, Yanbo; Sun, Dengfeng; Lei, Jiaxing

    2015-01-01

    To reduce airspace congestion and flight delay simultaneously, this paper formulates the airway network flow assignment (ANFA) problem as a multiobjective optimization model and presents a new multiobjective optimization framework to solve it. Firstly, an effective multi-island parallel evolution algorithm with multiple evolution populations is employed to improve the optimization capability. Secondly, the nondominated sorting genetic algorithm II is applied for each population. In addition, a cooperative coevolution algorithm is adapted to divide the ANFA problem into several low-dimensional biobjective optimization problems which are easier to deal with. Finally, in order to maintain the diversity of solutions and to avoid prematurity, a dynamic adjustment operator based on solution congestion degree is specifically designed for the ANFA problem. Simulation results using the real traffic data from China air route network and daily flight plans demonstrate that the proposed approach can improve the solution quality effectively, showing superiority to the existing approaches such as the multiobjective genetic algorithm, the well-known multiobjective evolutionary algorithm based on decomposition, and a cooperative coevolution multiobjective algorithm as well as other parallel evolution algorithms with different migration topology. PMID:26180840

  11. A knowledge-based framework for image enhancement in aviation security.

    PubMed

    Singh, Maneesha; Singh, Sameer; Partridge, Derek

    2004-12-01

    The main aim of this paper is to present a knowledge-based framework for automatically selecting the best image enhancement algorithm from several available on a per image basis in the context of X-ray images of airport luggage. The approach detailed involves a system that learns to map image features that represent its viewability to one or more chosen enhancement algorithms. Viewability measures have been developed to provide an automatic check on the quality of the enhanced image, i.e., is it really enhanced? The choice is based on ground-truth information generated by human X-ray screening experts. Such a system, for a new image, predicts the best-suited enhancement algorithm. Our research details the various characteristics of the knowledge-based system and shows extensive results on real images.

  12. Tree-based shortest-path routing algorithm

    NASA Astrophysics Data System (ADS)

    Long, Y. H.; Ho, T. K.; Rad, A. B.; Lam, S. P. S.

    1998-12-01

    A tree-based shortest-path routing algorithm is introduced in this paper. With this algorithm, every network node can maintain a shortest-path routing tree topology of the network with itself as the root. Every node constructs its own routing tree based upon its neighbors' routing trees. Initially, the routing tree at each node contains only the root, the node itself. As information is exchanged, every node's routing tree evolves until a complete tree is obtained. The algorithm is a trade-off between the distance vector algorithm and the link state algorithm. Loops are automatically deleted, so there is no count-to-infinity effect. A simple routing tree information storage approach and a protocol data unit format to transmit the tree information are given. Some special issues, such as adaptation to topology changes, implementation of the algorithm on a LAN, convergence, and computation overhead, are also discussed in the paper.
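
    A compressed, path-vector-style sketch of the idea that each node grows its routing table from its neighbours' tables while discarding any route that already passes through itself, which is what removes loops and the count-to-infinity effect. The storage format and protocol data unit of the paper are not reproduced, and the synchronous iteration below stands in for the asynchronous information exchange.

    def build_routing_tables(links):
        """links: dict {node: {neighbor: cost}} for an undirected graph.
        Returns {node: {dest: (cost, path)}} where each path starts at node,
        i.e. an implicit shortest-path routing tree rooted at every node."""
        nodes = list(links)
        tables = {v: {v: (0, [v])} for v in nodes}       # initially each tree is just its root
        changed = True
        while changed:                                   # iterate until no table changes
            changed = False
            for v in nodes:
                for nbr, link_cost in links[v].items():
                    for dest, (cost, path) in list(tables[nbr].items()):
                        if v in path:                    # route would loop back: discard it
                            continue
                        new_cost = link_cost + cost
                        if dest not in tables[v] or new_cost < tables[v][dest][0]:
                            tables[v][dest] = (new_cost, [v] + path)
                            changed = True
        return tables

    # example: square with a diagonal
    # links = {'A': {'B': 1, 'C': 4}, 'B': {'A': 1, 'C': 1, 'D': 5},
    #          'C': {'A': 4, 'B': 1, 'D': 1}, 'D': {'B': 5, 'C': 1}}
    # build_routing_tables(links)['A']   # routing tree (as parent paths) rooted at A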

  13. A Curriculum Framework Based on Archetypal Phenomena and Technologies.

    ERIC Educational Resources Information Center

    Zubrowski, Bernie

    2002-01-01

    Presents an alternative paradigm of curriculum development based on the theory of situated cognition. This approach starts with context rather than concept, gives greater weight to students' interpretative frameworks, and provides for a more holistic development. Presents a grade 1-8 framework that uses archetypal phenomena and technologies as the…

  14. A Framework for Concept-Based Digital Course Libraries

    ERIC Educational Resources Information Center

    Dicheva, Darina; Dichev, Christo

    2004-01-01

    This article presents a general framework for building concept-based digital course libraries. The framework is based on the idea of using a conceptual structure that represents a subject domain ontology for classification of the course library content. Two aspects, domain conceptualization, which supports findability, and ontologies, which support…

  15. Situational Analysis: A Framework for Evidence-Based Practice

    ERIC Educational Resources Information Center

    Annan, Jean

    2005-01-01

    Situational analysis is a framework for professional practice and research in educational psychology. The process is guided by a set of practice principles requiring that psychologists' work is evidence-based, ecological, collaborative and constructive. The framework is designed to provide direction for psychologists who wish to tailor their…

  16. Solving SAT Problem Based on Hybrid Differential Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Kunqi; Zhang, Jingmin; Liu, Gang; Kang, Lishan

    The satisfiability (SAT) problem is an NP-complete problem. Based on an analysis of the problem, SAT is translated equivalently into an optimization problem of minimizing an objective function. A hybrid differential evolution algorithm is proposed to solve the satisfiability problem. It makes full use of the strong local search capability of the hill-climbing algorithm and the strong global search capability of the differential evolution algorithm, which compensates for their individual disadvantages, improves the efficiency of the algorithm, and avoids the stagnation phenomenon. The experimental results show that the hybrid algorithm is efficient in solving the SAT problem.
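
    A minimal sketch of the overall scheme, assuming a standard DE/rand/1/bin variant: an assignment is encoded as a real vector, the objective is the number of unsatisfied clauses, and each trial vector is refined by a hill-climbing pass before selection. Operator details and parameter values are illustrative, not those of the paper.

    import random

    def unsat_count(assign, clauses):
        """Objective: number of unsatisfied clauses (0 means satisfiable assignment found).
        Clauses use DIMACS-style literals, e.g. (1, -3, 4)."""
        return sum(not any((lit > 0) == assign[abs(lit) - 1] for lit in cl) for cl in clauses)

    def hill_climb(assign, clauses):
        """Greedy local search: keep flipping any variable that lowers the objective."""
        best, improved = unsat_count(assign, clauses), True
        while improved and best > 0:
            improved = False
            for i in range(len(assign)):
                assign[i] = not assign[i]
                c = unsat_count(assign, clauses)
                if c < best:
                    best, improved = c, True
                else:
                    assign[i] = not assign[i]       # undo the flip
        return assign, best

    def de_sat(n_vars, clauses, pop_size=30, F=0.5, CR=0.9, gens=200, seed=0):
        rng = random.Random(seed)
        decode = lambda x: [xi > 0.5 for xi in x]   # real vector -> boolean assignment
        pop = [[rng.random() for _ in range(n_vars)] for _ in range(pop_size)]
        fit = [unsat_count(decode(x), clauses) for x in pop]
        for _ in range(gens):
            for i in range(pop_size):
                a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
                j_rand = rng.randrange(n_vars)
                trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                         if (rng.random() < CR or k == j_rand) else pop[i][k]
                         for k in range(n_vars)]
                trial = [min(max(t, 0.0), 1.0) for t in trial]
                assign, f_trial = hill_climb(decode(trial), clauses)
                if f_trial <= fit[i]:               # selection (Lamarckian write-back)
                    pop[i], fit[i] = [1.0 if v else 0.0 for v in assign], f_trial
                if f_trial == 0:
                    return assign                   # satisfying assignment found
        best = min(range(pop_size), key=lambda i: fit[i])
        return decode(pop[best])

    # example: (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
    # de_sat(3, [(1, -2), (2, 3), (-1, -3)])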

  17. LMI-Based Generation of Feedback Laws for a Robust Model Predictive Control Algorithm

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Carson, John M., III

    2007-01-01

    This technical note provides a mathematical proof of Corollary 1 from the paper 'A Nonlinear Model Predictive Control Algorithm with Proven Robustness and Resolvability' that appeared in the 2006 Proceedings of the American Control Conference. The proof was omitted for brevity in the publication. The paper was based on algorithms developed for the FY2005 R&TD (Research and Technology Development) project for Small-body Guidance, Navigation, and Control [2]. The framework established by the Corollary is for a robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. Additional details of the framework are available in the publication.

  18. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem.

    PubMed

    Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen

    2016-01-01

    The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor for its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust on a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
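
    A compact sketch of the list-based cooling idea: keep a list of temperatures, always use the current maximum for the Metropolis test, and, when a worse tour is accepted, replace that maximum with the temperature implied by the accepted move. The 2-opt neighbourhood and the exact list-update rule below are simplifications of the published algorithm.

    import math
    import random

    def tour_length(tour, dist):
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def lbsa_tsp(dist, list_len=120, iters=20000, seed=0):
        rng = random.Random(seed)
        n = len(dist)
        tour = list(range(n))
        rng.shuffle(tour)
        cur = tour_length(tour, dist)
        best, best_len = tour[:], cur
        temps = []                                   # initialise the temperature list
        while len(temps) < list_len:
            i, j = sorted(rng.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            delta = abs(tour_length(cand, dist) - cur)
            if delta > 0:
                temps.append(delta)
        for _ in range(iters):
            t_max = max(temps)
            i, j = sorted(rng.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]    # 2-opt reversal
            cand_len = tour_length(cand, dist)
            delta = cand_len - cur
            if delta <= 0:
                tour, cur = cand, cand_len
            else:
                r = rng.random()
                if r > 0.0 and math.exp(-delta / t_max) > r:        # Metropolis with max temp
                    tour, cur = cand, cand_len
                    temps.remove(t_max)
                    temps.append(-delta / math.log(r))              # adapt the temperature list
            if cur < best_len:
                best, best_len = tour[:], cur
        return best, best_len

    # example with random points:
    # pts = [(random.random(), random.random()) for _ in range(30)]
    # dist = [[math.dist(p, q) for q in pts] for p in pts]
    # lbsa_tsp(dist)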

  19. Extensions of kmeans-type algorithms: a new clustering framework by integrating intracluster compactness and intercluster separation.

    PubMed

    Huang, Xiaohui; Ye, Yunming; Zhang, Haijun

    2014-08-01

    Kmeans-type clustering aims at partitioning a data set into clusters such that the objects in a cluster are compact and the objects in different clusters are well separated. However, most kmeans-type clustering algorithms rely on only intracluster compactness while overlooking intercluster separation. In this paper, a series of new clustering algorithms is proposed by extending existing kmeans-type algorithms to integrate both intracluster compactness and intercluster separation. First, a set of new objective functions for clustering is developed. Based on these objective functions, the corresponding updating rules for the algorithms are then derived analytically. The properties and performances of these algorithms are investigated on several synthetic and real-life data sets. Experimental studies demonstrate that our proposed algorithms outperform the state-of-the-art kmeans-type clustering algorithms with respect to four metrics: accuracy, RandIndex, Fscore, and normalized mutual information.
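
    One simple way to make the combined objective concrete is to assign each point to the cluster that minimises its within-cluster distance penalised by that cluster's separation from the global centroid, as in the sketch below; the weighting gamma and the update loop are an illustrative simplification rather than the analytically derived rules of the paper.

    import numpy as np

    def kmeans_compact_separate(X, k=3, gamma=0.1, iters=100, seed=0):
        """K-means-type clustering that also rewards intercluster separation (illustrative)."""
        rng = np.random.default_rng(seed)
        X = np.asarray(X, dtype=float)
        centers = X[rng.choice(len(X), size=k, replace=False)].copy()
        g = X.mean(axis=0)                               # global centroid
        labels = np.full(len(X), -1)
        for _ in range(iters):
            # assignment: within-cluster distance penalised by separation from the global centroid
            d_within = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)   # (n, k)
            d_sep = ((centers - g) ** 2).sum(axis=-1)                              # (k,)
            new_labels = np.argmin(d_within - gamma * d_sep[None, :], axis=1)
            if np.array_equal(new_labels, labels):
                break
            labels = new_labels
            for j in range(k):                           # centroid update (empty clusters re-seeded)
                pts = X[labels == j]
                centers[j] = pts.mean(axis=0) if len(pts) else X[rng.integers(len(X))]
        return labels, centers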

  20. Developing a theoretical framework for complex community-based interventions.

    PubMed

    Angeles, Ricardo N; Dolovich, Lisa; Kaczorowski, Janusz; Thabane, Lehana

    2014-01-01

    Applying existing theories to research, in the form of a theoretical framework, is necessary to advance knowledge from what is already known toward the next steps to be taken. This article proposes a guide on how to develop a theoretical framework for complex community-based interventions using the Cardiovascular Health Awareness Program as an example. Developing a theoretical framework starts with identifying the intervention's essential elements. Subsequent steps include the following: (a) identifying and defining the different variables (independent, dependent, mediating/intervening, moderating, and control); (b) postulating mechanisms how the independent variables will lead to the dependent variables; (c) identifying existing theoretical models supporting the theoretical framework under development; (d) scripting the theoretical framework into a figure or sets of statements as a series of hypotheses, if/then logic statements, or a visual model; (e) content and face validation of the theoretical framework; and (f) revising the theoretical framework. In our example, we combined the "diffusion of innovation theory" and the "health belief model" to develop our framework. Using the Cardiovascular Health Awareness Program as the model, we demonstrated a stepwise process of developing a theoretical framework. The challenges encountered are described, and an overview of the strategies employed to overcome these challenges is presented.

  1. MERIS burned area algorithm in the framework of the ESA Fire CCI Project

    NASA Astrophysics Data System (ADS)

    Oliva, P.; Calado, T.; Gonzalez, F.

    2012-04-01

    The Fire-CCI project aims at generating long and reliable time series of burned area (BA) maps based on existing information provided by European satellite sensors. In this context, a BA algorithm is currently being developed using the Medium Resolution Imaging Spectrometer (MERIS) sensor. The algorithm is being tested over a series of ten study sites with an area of 500x500 km2 each, for the period of 2003 to 2009. The study sites are located in Canada, Colombia, Brazil, Portugal, Angola, South Africa, Kazakhstan, Borneo, Russia and Australia and include a variety of vegetation types characterized by different fire regimes. The algorithm has to take into account several limiting aspects that range from the MERIS sensor characteristics (e.g. the lack of SWIR bands) to the noise present in the data. In addition, the lack of data in some areas, caused either by cloud contamination or because the sensor does not acquire full-resolution data over the study area, is a limitation that is difficult to overcome. To overcome these drawbacks, the design of the BA algorithm is based on the analysis of maximum composites of spectral indices characterized by low values of temporal standard deviation in space and associated with MODIS hot spots. Accordingly, for each study site and year, composites of maximum values of BAI are computed, and the corresponding Julian day of the maximum value and the number of observations in the period are registered per pixel. Then we compute the temporal standard deviation for pixels with a number of observations greater than 10, using spatial matrices of 3x3 pixels. To classify the BAI values as burned or non-burned, we extract statistics using the MODIS hot spots. A pixel is finally classified as burned if it satisfies the following conditions: i) it is associated with hot spots; ii) the BAI maximum is higher than a certain threshold; and iii) the standard deviation of the Julian day is less than a given number of days.
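
    The final per-pixel decision therefore reduces to a conjunction of three tests, which can be sketched as follows; the threshold values are placeholders, since the abstract does not state them.

    import numpy as np

    def classify_burned(bai_max, julian_day_std, has_hotspot, n_obs,
                        bai_threshold=150.0, max_day_std=20.0, min_obs=10):
        """Per-pixel burned/unburned decision (threshold values are illustrative placeholders).

        bai_max        : composite of maximum BAI values per pixel
        julian_day_std : temporal standard deviation (3x3 neighbourhood) of the day of the BAI maximum
        has_hotspot    : boolean mask of association with MODIS hot spots
        n_obs          : number of valid observations per pixel in the compositing period
        """
        enough_data = n_obs > min_obs                   # std is only computed for these pixels
        return (enough_data
                & has_hotspot                           # i) associated with hot spots
                & (bai_max > bai_threshold)             # ii) BAI maximum above threshold
                & (julian_day_std < max_day_std))       # iii) stable date of the maximum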

  2. Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction

    PubMed Central

    Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon

    2016-01-01

    Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients’ psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence of the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable a personalized psychiatric emergency care service, a web of objects-based framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web of objects of sensor observations and psychiatric rating scores are used to assess the dweller’s mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm reported 83.03 percent prediction accuracy in an empirical performance study. PMID:27608023
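
    Once the HMM parameters have been estimated, the most likely psychiatric state sequence for an observation sequence is obtained by a Viterbi decode. The sketch below shows a standard Viterbi pass for a discrete-observation HMM; the three states and binary symptom observations in the commented example are invented for illustration, whereas the actual model works on biosensor and rating-scale observations.

    import numpy as np

    def viterbi(obs, start_p, trans_p, emit_p):
        """Most likely hidden-state path for a discrete-observation HMM.
        obs: observation indices; start_p: (S,); trans_p: (S, S); emit_p: (S, O)."""
        S, T = len(start_p), len(obs)
        logd = np.full((T, S), -np.inf)
        back = np.zeros((T, S), dtype=int)
        logd[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
        for t in range(1, T):
            scores = logd[t - 1][:, None] + np.log(trans_p)       # (prev state, current state)
            back[t] = np.argmax(scores, axis=0)
            logd[t] = scores[back[t], np.arange(S)] + np.log(emit_p[:, obs[t]])
        path = [int(np.argmax(logd[-1]))]
        for t in range(T - 1, 0, -1):                             # backtrack
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    # toy example with states ("stable", "at risk", "emergency") and a binary symptom observation
    # start = np.array([0.7, 0.2, 0.1])
    # trans = np.array([[0.8, 0.15, 0.05], [0.3, 0.5, 0.2], [0.1, 0.3, 0.6]])
    # emit  = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]])
    # viterbi([0, 1, 1, 1], start, trans, emit)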

  3. Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction.

    PubMed

    Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon

    2016-01-01

    Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients' psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence of the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable a personalized psychiatric emergency care service, a web of objects-based framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web of objects of sensor observations and psychiatric rating scores are used to assess the dweller's mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm reported 83.03 percent prediction accuracy in an empirical performance study.

  4. Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction.

    PubMed

    Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon

    2016-01-01

    Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients' psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence of the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable a personalized psychiatric emergency care service, a web of objects-based framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web of objects of sensor observations and psychiatric rating scores are used to assess the dweller's mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm reported 83.03 percent prediction accuracy in an empirical performance study. PMID:27608023

  5. Microphysical particle properties derived from inversion algorithms developed in the framework of EARLINET

    NASA Astrophysics Data System (ADS)

    Müller, D.; Böckmann, C.; Kolgotin, A.; Schneidenbach, L.; Chemyakin, E.; Rosemann, J.; Znak, P.; Romanov, A.

    2015-12-01

    We present a summary on the current status of two inversion algorithms that are used in EARLINET for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on manually controlled inversion of optical data which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithms allow us to derive particle effective radius, and volume and surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index still is a challenge in view of the accuracy required for these parameters in climate change studies in which light-absorption needs to be known with high accuracy. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high and low absorbing aerosols. We discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work on the basis of a few exemplary simulations with synthetic optical data. The optical data used in our study cover a range of Ångström exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types. We also tested aerosol scenarios that are considered highly unlikely, e.g., the lidar ratios fall outside the commonly accepted range of values measured with Raman lidar, even though the underlying microphysical particle properties are not uncommon. The goal of this part of the study is to test robustness of the algorithms toward their ability to identify aerosol types that have not been measured so far, but cannot be ruled out based on our current knowledge of

  6. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms are playing an important role in the research field of image segmentation. In this paper, we briefly introduce the theories of four existing image segmentation algorithms based on swarm intelligence including fish swarm algorithm, artificial bee colony, bacteria foraging algorithm and particle swarm optimization. Then some image benchmarks are tested in order to show the differences of the segmentation accuracy, time consumption, convergence and robustness for Salt & Pepper noise and Gaussian noise of these four algorithms. Through these comparisons, this paper gives qualitative analyses for the performance variance of the four algorithms. The conclusions in this paper would give a significant guide for the actual image segmentation.

  7. Multiscale Edge Detection Using a Finite Element Framework for Hexagonal Pixel-Based Images.

    PubMed

    Gardiner, Bryan; Coleman, Sonya A; Scotney, Bryan W

    2016-04-01

    In recent years, the processing of hexagonal pixel-based images has been investigated, and as a result, a number of edge detection algorithms for direct application to such image structures have been developed. We build on this paper by presenting a novel and efficient approach to the design of hexagonal image processing operators using linear basis and test functions within the finite element framework. Development of these scalable first order and Laplacian operators using this approach presents a framework both for obtaining large-scale neighborhood operators in an efficient manner and for obtaining edge maps at different scales by efficient reuse of the seven-point linear operator. We evaluate the accuracy of these proposed operators and compare the algorithmic performance using the efficient linear approach with conventional operator convolution for generating edge maps at different scale levels. PMID:26890865

  8. Barzilai-Borwein method in graph drawing algorithm based on Kamada-Kawai algorithm

    NASA Astrophysics Data System (ADS)

    Hasal, Martin; Pospisil, Lukas; Nowakova, Jana

    2016-06-01

    An extension of the Kamada-Kawai algorithm, which was designed for calculating layouts of simple undirected graphs, is presented in this paper. Graphs drawn by the Kamada-Kawai algorithm exhibit symmetries and tend to produce aesthetically pleasing and crossing-free layouts for planar graphs. Minimization in the Kamada-Kawai algorithm is based on the Newton-Raphson method, which needs the Hessian matrix of second derivatives for the minimized node. A disadvantage of the Kamada-Kawai embedder algorithm is its computational requirements. This is caused by the search for the minimal potential energy of the whole system, which is minimized node by node: the node with the highest energy is minimized against all nodes until the local equilibrium state is reached. In this paper, the Barzilai-Borwein (BB) minimization algorithm, which needs only the gradient for minimum searching, is used instead of the Newton-Raphson method. This significantly improves the computational time and requirements.
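
    A minimal sketch of the substitution, assuming the standard Kamada-Kawai stress function: the gradient of the stress is computed in closed form and the step length is chosen by the Barzilai-Borwein (BB1) rule, so no Hessian is needed. Unlike the original node-by-node scheme, this sketch updates all node positions at once.

    import numpy as np

    def kk_stress_grad(pos, d, k):
        """Kamada-Kawai stress E = 0.5 * sum_{i<j} k_ij (|x_i - x_j| - d_ij)^2 and its gradient."""
        diff = pos[:, None, :] - pos[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, 1.0)                       # avoid division by zero
        resid = k * (dist - d)
        np.fill_diagonal(resid, 0.0)
        stress = 0.5 * np.sum(np.triu(k * (dist - d) ** 2, 1))
        grad = np.sum((resid / dist)[:, :, None] * diff, axis=1)
        return stress, grad

    def bb_layout(d, k, iters=300, alpha0=1e-3, seed=0):
        """Gradient-only layout: Barzilai-Borwein step sizes instead of Newton-Raphson."""
        rng = np.random.default_rng(seed)
        x = rng.random((d.shape[0], 2))
        _, g = kk_stress_grad(x, d, k)
        alpha = alpha0
        for _ in range(iters):
            x_new = x - alpha * g
            _, g_new = kk_stress_grad(x_new, d, k)
            s, y = x_new - x, g_new - g
            sy = np.sum(s * y)
            alpha = np.sum(s * s) / sy if abs(sy) > 1e-12 else alpha0   # BB1 step length
            x, g = x_new, g_new
        return x

    # usage: d = matrix of graph-theoretic (shortest-path) distances,
    #        k = spring constants, e.g. k_ij = 1 / d_ij**2 for i != j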

  9. Fast diffraction computation algorithms based on FFT

    NASA Astrophysics Data System (ADS)

    Logofatu, Petre Catalin; Nascov, Victor; Apostol, Dan

    2010-11-01

    The discovery of the Fast Fourier transform (FFT) algorithm by Cooley and Tukey meant for diffraction computation what the invention of computers meant for computation in general. The computation time reduction is more significant for large input data, but generally the FFT reduces the computation time by several orders of magnitude. This was the beginning of an entire revolution in optical signal processing and resulted in an abundance of fast algorithms for diffraction computation in a variety of situations. The property that allowed the creation of these fast algorithms is that, as it turns out, most diffraction formulae contain at their core one or more Fourier transforms which may be rapidly calculated using the FFT. The key to discovering a new fast algorithm is to reformulate the diffraction formulae so as to identify and isolate the Fourier transforms they contain. In this way, the fast scaled transformation, the fast Fresnel transformation and the fast Rayleigh-Sommerfeld transform were designed. Remarkable improvements were the generalization of the DFT to the scaled DFT, which allowed freedom to choose the dimensions of the output window for the Fraunhofer-Fourier and Fresnel diffraction; the mathematical concept of linearized convolution, which thwarts the circular character of the discrete Fourier transform and allows the use of the FFT; and last but not least the linearized discrete scaled convolution, a new concept for which we claim priority.
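
    The core observation, that the Fraunhofer (far-field) diffraction pattern is, up to phase and scale prefactors, the Fourier transform of the aperture field, can be sketched in a few lines with the FFT; the constant prefactors are omitted and the grid parameters in the commented example are illustrative.

    import numpy as np

    def fraunhofer_intensity(aperture, wavelength, aperture_dx, z):
        """Far-field intensity of a sampled aperture field via the FFT (sketch).

        aperture    : 2D complex array of the field in the aperture plane
        wavelength  : wavelength, in the same length unit as aperture_dx and z
        aperture_dx : sample spacing in the aperture plane
        z           : propagation distance (must satisfy the far-field condition)
        """
        n = aperture.shape[0]
        # centred FFT: the far field is proportional to the Fourier transform of the aperture
        far_field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
        # spatial frequencies map to observation-plane coordinates x = lambda * z * f
        freqs = np.fft.fftshift(np.fft.fftfreq(n, d=aperture_dx))
        x_obs = wavelength * z * freqs
        return x_obs, np.abs(far_field) ** 2

    # example: circular aperture of radius 0.5 mm on a 1024x1024 grid, 633 nm light, z = 1 m
    # n, dx = 1024, 2e-6
    # y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * dx
    # ap = (x**2 + y**2 < (0.5e-3)**2).astype(complex)
    # coords, intensity = fraunhofer_intensity(ap, 633e-9, dx, 1.0)   # Airy-pattern intensity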

  10. Raytracing Based upon the Symplectic Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Li, C.

    2014-12-01

    Raytracing is a basic problem in seismic imaging: the reliability of the imaging depends on the accuracy of both the spatial trajectory and the traveltime of the ray, and raytracing is used broadly in seismology. A seismic ray travelling through an inhomogeneous medium follows the eikonal equation, a first-order differential equation for traveltime that satisfies a Hamilton system. In a Cartesian coordinate system, we use a separable Hamilton system function. In this paper, the symplectic algorithm method with a bi-cubic convolution algorithm was used to solve the Hamilton system and deal with the raytracing problem. Compared with the Fast Marching Method (FMM), the results show that the symplectic algorithm method (SAM) can keep the solution of the eikonal equation stable. Due to the use of the symplectic algorithm, the method can produce a reliable seismic wavefront with an accurate ray trajectory (Fig.1). Meanwhile, the numerical modeling shows that the use of SAM can not only keep the stability of the Hamilton system with fast computation but also improve the accuracy of the seismic ray tracing (Fig.2).
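
    For one common separable form of the ray Hamiltonian, H(x, p) = (|p|^2 - s(x)^2)/2 with s the slowness, a symplectic (semi-implicit) Euler step updates the slowness vector first and then the position using the updated value. The sketch below uses the analytic gradient of a linear-velocity medium in place of the bi-cubic interpolation of a gridded model mentioned in the abstract; all values in the commented example are illustrative.

    import numpy as np

    def trace_ray(x0, p0, slowness, grad_slowness, d_sigma=1.0, steps=2000):
        """Symplectic (semi-implicit) Euler integration of H(x, p) = 0.5 * (|p|^2 - s(x)^2):
        the slowness vector p is updated first, then the position x with the new p."""
        x, p = np.array(x0, float), np.array(p0, float)
        path = [x.copy()]
        for _ in range(steps):
            p = p + d_sigma * slowness(x) * grad_slowness(x)   # dp/dsigma = s * grad(s)
            x = x + d_sigma * p                                # dx/dsigma = p (uses updated p)
            path.append(x.copy())
        return np.array(path)

    # illustrative medium: velocity increasing linearly with depth z, v = v0 + g*z
    # v0, g = 2000.0, 0.5
    # slowness = lambda x: 1.0 / (v0 + g * x[1])
    # grad_slowness = lambda x: np.array([0.0, -g / (v0 + g * x[1]) ** 2])
    # source at the surface, take-off angle 30 degrees from vertical, |p| = s at the source
    # x0 = np.array([0.0, 0.0])
    # s0 = slowness(x0)
    # p0 = s0 * np.array([np.sin(np.radians(30.0)), np.cos(np.radians(30.0))])
    # ray = trace_ray(x0, p0, slowness, grad_slowness, d_sigma=4000.0, steps=500)
    # note: sigma is not arc length; the distance covered per step is roughly s * d_sigma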

  11. Function-Based Algorithms for Biological Sequences

    ERIC Educational Resources Information Center

    Mohanty, Pragyan Sheela P.

    2015-01-01

    Two problems at two different abstraction levels of computational biology are studied. At the molecular level, efficient pattern matching algorithms in DNA sequences are presented. For gene order data, an efficient data structure is presented capable of storing all gene re-orderings in a systematic manner. A common characteristic of presented…

  12. LGBTQ relationally based positive psychology: An inclusive and systemic framework.

    PubMed

    Domínguez, Daniela G; Bobele, Monte; Coppock, Jacqueline; Peña, Ezequiel

    2015-05-01

    Positive psychologists have contributed to our understandings of how positive emotions and flexible cognition enhance resiliency. However, positive psychologists' research has been slow to address the relational resources and interactions that help nonheterosexual families overcome adversity. Addressing overlooked lesbian, gay, bisexual, transgender, or queer (LGBTQ) and systemic factors in positive psychology, this article draws on family resilience literature and LGBTQ literature to theorize a systemic positive psychology framework for working with nonheterosexual families. We developed the LGBTQ relationally based positive psychology framework that integrates positive psychology's strengths-based perspective with the systemic orientation of Walsh's (1996) family resilience framework along with the cultural considerations proposed by LGBTQ family literature. We theorize that the LGBTQ relationally based positive psychology framework takes into consideration the sociopolitical adversities impacting nonheterosexual families and sensitizes positive psychologists, including those working in organized care settings, to the systemic interactions of same-sex loving relationships.

  13. Developing JSequitur to Study the Hierarchical Structure of Biological Sequences in a Grammatical Inference Framework of String Compression Algorithms.

    PubMed

    Galbadrakh, Bulgan; Lee, Kyung-Eun; Park, Hyun-Seok

    2012-12-01

    Grammatical inference methods are expected to find grammatical structures hidden in biological sequences. One hopes that studies of grammar serve as an appropriate tool for theory formation. Thus, we have developed JSequitur for automatically generating the grammatical structure of biological sequences in an inference framework of string compression algorithms. Our original motivation was to find any grammatical traits of several cancer genes that can be detected by string compression algorithms. Through this research, we could not find any meaningful unique traits of the cancer genes yet, but we could observe some interesting traits in regards to the relationship among gene length, similarity of sequences, the patterns of the generated grammar, and compression rate.

  14. Framework for Supporting Web-Based Collaborative Applications

    NASA Astrophysics Data System (ADS)

    Dai, Wei

    The article proposes an intelligent framework for supporting Web-based applications. The framework focuses on innovative use of existing resources and technologies in the form of services and leverages the theoretical foundation of services science and research from services computing. The main focus of the framework is to deliver benefits to users with various roles such as service requesters, service providers, and business owners to maximize their productivity when engaging with each other via the Web. The article opens with the research motivations and questions, analyses the existing state of research in the field, and describes the approach to implementing the proposed framework. Finally, an e-health application is discussed to evaluate the effectiveness of the framework, where participants such as general practitioners (GPs), patients, and health-care workers collaborate via the Web.

  15. Using remote sensing in support of environmental management: A framework for selecting products, algorithms and methods.

    PubMed

    de Klerk, Helen M; Gilbertson, Jason; Lück-Vogel, Melanie; Kemp, Jaco; Munch, Zahn

    2016-11-01

    Traditionally, to map environmental features using remote sensing, practitioners will use training data to develop models on various satellite data sets using a number of classification approaches and use test data to select a single 'best performer' from which the final map is made. We use a combination of an omission/commission plot to evaluate various results and compile a probability map based on consistently strong performing models across a range of standard accuracy measures. We suggest that this easy-to-use approach can be applied in any study using remote sensing to map natural features for management action. We demonstrate this approach using optical remote sensing products of different spatial and spectral resolution to map the endemic and threatened flora of quartz patches in the Knersvlakte, South Africa. Quartz patches can be mapped using either SPOT 5 (used due to its relatively fine spatial resolution) or Landsat8 imagery (used because it is freely accessible and has higher spectral resolution). Of the variety of classification algorithms available, we tested maximum likelihood and support vector machine, and applied these to raw spectral data, the first three PCA summaries of the data, and the standard normalised difference vegetation index. We found that there is no 'one size fits all' solution to the choice of a 'best fit' model (i.e. combination of classification algorithm or data sets), which is in agreement with the literature that classifier performance will vary with data properties. We feel this lends support to our suggestion that rather than the identification of a 'single best' model and a map based on this result alone, a probability map based on the range of consistently top performing models provides a rigorous solution to environmental mapping.
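
    The probability-map idea can be sketched as follows: fit several classifier/data-set combinations, keep only those whose test scores are consistently high across the accuracy measures used, and average their binary predictions per pixel. The classifiers and score threshold below are generic stand-ins (a Gaussian maximum-likelihood classifier is approximated here by quadratic discriminant analysis), not the exact configuration of the study.

    import numpy as np
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score, f1_score

    def probability_map(feature_sets, y_train, y_test, pixels_by_set, min_score=0.85):
        """Average the per-pixel predictions of all consistently strong models.

        feature_sets  : dict name -> (X_train, X_test), one entry per data set / transform
        pixels_by_set : dict name -> full-scene feature array (n_pixels, n_features)
        """
        models = {'max_likelihood': QuadraticDiscriminantAnalysis(),
                  'svm': SVC(kernel='rbf', gamma='scale')}
        kept = []
        for set_name, (X_tr, X_te) in feature_sets.items():
            for model_name, model in models.items():
                clf = model.fit(X_tr, y_train)
                pred = clf.predict(X_te)
                # keep only models that score well on *all* accuracy measures used
                if min(accuracy_score(y_test, pred), f1_score(y_test, pred)) >= min_score:
                    kept.append(clf.predict(pixels_by_set[set_name]))
        if not kept:
            raise ValueError("no model met the score threshold")
        return np.mean(kept, axis=0)     # per-pixel probability of the target class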

  16. A Framework for Batched and GPU-Resident Factorization Algorithms Applied to Block Householder Transformations

    SciTech Connect

    Dong, Tingzing Tim; Tomov, Stanimire Z; Luszczek, Piotr R; Dongarra, Jack J

    2015-01-01

    As modern hardware keeps evolving, an increasingly effective approach to developing energy efficient and high-performance solvers is to design them to work on many small size and independent problems. Many applications already need this functionality, especially for GPUs, which are currently known to be about four to five times more energy efficient than multicore CPUs. We describe the development of one-sided factorizations that work for a set of small dense matrices in parallel, and we illustrate our techniques on the QR factorization based on Householder transformations. We refer to this mode of operation as a batched factorization. Our approach is based on representing the algorithms as a sequence of batched BLAS routines for GPU-only execution. This is in contrast to the hybrid CPU-GPU algorithms that rely heavily on using the multicore CPU for specific parts of the workload. But for a system to benefit fully from the GPU's significantly higher energy efficiency, avoiding the use of the multicore CPU must be a primary design goal, so the system can rely more heavily on the more efficient GPU. Additionally, this will result in the removal of the costly CPU-to-GPU communication. Furthermore, we do not use a single symmetric multiprocessor (on the GPU) to factorize a single problem at a time. We illustrate how our performance analysis, and the use of profiling and tracing tools, guided the development and optimization of our batched factorization to achieve up to a 2-fold speedup and a 3-fold energy efficiency improvement compared to our highly optimized batched CPU implementations based on the MKL library (when using two sockets of Intel Sandy Bridge CPUs). Compared to a batched QR factorization featured in the CUBLAS library for GPUs, we achieved up to 5x speedup on the K40 GPU.

  17. Using remote sensing in support of environmental management: A framework for selecting products, algorithms and methods.

    PubMed

    de Klerk, Helen M; Gilbertson, Jason; Lück-Vogel, Melanie; Kemp, Jaco; Munch, Zahn

    2016-11-01

    Traditionally, to map environmental features using remote sensing, practitioners will use training data to develop models on various satellite data sets using a number of classification approaches and use test data to select a single 'best performer' from which the final map is made. We use a combination of an omission/commission plot to evaluate various results and compile a probability map based on consistently strong performing models across a range of standard accuracy measures. We suggest that this easy-to-use approach can be applied in any study using remote sensing to map natural features for management action. We demonstrate this approach using optical remote sensing products of different spatial and spectral resolution to map the endemic and threatened flora of quartz patches in the Knersvlakte, South Africa. Quartz patches can be mapped using either SPOT 5 (used due to its relatively fine spatial resolution) or Landsat8 imagery (used because it is freely accessible and has higher spectral resolution). Of the variety of classification algorithms available, we tested maximum likelihood and support vector machine, and applied these to raw spectral data, the first three PCA summaries of the data, and the standard normalised difference vegetation index. We found that there is no 'one size fits all' solution to the choice of a 'best fit' model (i.e. combination of classification algorithm or data sets), which is in agreement with the literature that classifier performance will vary with data properties. We feel this lends support to our suggestion that rather than the identification of a 'single best' model and a map based on this result alone, a probability map based on the range of consistently top performing models provides a rigorous solution to environmental mapping. PMID:27543751

  18. A Model Independent S/W Framework for Search-Based Software Testing

    PubMed Central

    Baik, Jongmoon

    2014-01-01

    In the Model-Based Testing (MBT) area, Search-Based Software Testing (SBST) has been employed to generate test cases from the model of a system under test. However, many types of models have been used in MBT. If the type of a model changes from one to another, all functions of a search technique must be reimplemented because the types of models are different, even if the same search technique has been applied. It requires too much time and effort to implement the same algorithm over and over again. We propose a model-independent software framework for SBST, which can reduce redundant work. The framework provides a reusable common software platform to reduce time and effort. The software framework not only presents design patterns to find test cases for a target model but also reduces development time by using common functions provided in the framework. We show the effectiveness and efficiency of the proposed framework with two case studies. The framework improves productivity by about 50% when changing the type of a model. PMID:25302314

  19. Argumentation in Science Education: A Model-Based Framework

    ERIC Educational Resources Information Center

    Bottcher, Florian; Meisert, Anke

    2011-01-01

    The goal of this article is threefold: First, the theoretical background for a model-based framework of argumentation to describe and evaluate argumentative processes in science education is presented. Based on the general model-based perspective in cognitive science and the philosophy of science, it is proposed to understand arguments as reasons…

  20. A Vehicle Detection Algorithm Based on Deep Belief Network

    PubMed Central

    Cai, Yingfeng; Chen, Long

    2014-01-01

    Vision based vehicle detection is a critical technology that plays an important role in not only vehicle active safety but also road video surveillance applications. Traditional shallow-model-based vehicle detection algorithms still cannot meet the requirement of accurate vehicle detection in these applications. In this work, a novel deep learning based vehicle detection algorithm with a 2D deep belief network (2D-DBN) is proposed. In the algorithm, the proposed 2D-DBN architecture uses second-order planes instead of first-order vectors as input and uses bilinear projection for retaining discriminative information so as to determine the size of the deep architecture, which enhances the success rate of vehicle detection. On-road experimental results demonstrate that the algorithm performs better than state-of-the-art vehicle detection algorithms on testing data sets. PMID:24959617

  1. Robust facial expression recognition algorithm based on local metric learning

    NASA Astrophysics Data System (ADS)

    Jiang, Bin; Jia, Kebin

    2016-01-01

    In facial expression recognition tasks, different facial expressions are often confused with each other. Motivated by the fact that a learned metric can significantly improve the accuracy of classification, a facial expression recognition algorithm based on local metric learning is proposed. First, k-nearest neighbors of the given testing sample are determined from the total training data. Second, chunklets are selected from the k-nearest neighbors. Finally, the optimal transformation matrix is computed by maximizing the total variance between different chunklets and minimizing the total variance of instances in the same chunklet. The proposed algorithm can find the suitable distance metric for every testing sample and improve the performance on facial expression recognition. Furthermore, the proposed algorithm can be used for vector-based and matrix-based facial expression recognition. Experimental results demonstrate that the proposed algorithm could achieve higher recognition rates and be more robust than baseline algorithms on the JAFFE, CK, and RaFD databases.

  2. Comparison of Beam-Based Alignment Algorithms for the ILC

    SciTech Connect

    Smith, J.C.; Gibbons, L.; Patterson, J.R.; Rubin, D.L.; Sagan, D.; Tenenbaum, P.; /SLAC

    2006-03-15

    The main linac of the International Linear Collider (ILC) requires more sophisticated alignment techniques than those provided by survey alone. Various Beam-Based Alignment (BBA) algorithms have been proposed to achieve the desired low emittance preservation. Dispersion Free Steering, Ballistic Alignment and the Kubo method are compared. Alignment algorithms are also tested in the presence of an Earth-like stray field.

  3. A danger-theory-based immune network optimization algorithm.

    PubMed

    Zhang, Ruirui; Li, Tao; Xiao, Xin; Shi, Yuanquan

    2013-01-01

    Existing artificial immune optimization algorithms reflect a number of shortcomings, such as premature convergence and poor local search ability. This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet. The danger theory emphasizes that danger signals generated from changes of environments will guide different levels of immune responses, and the areas around danger signals are called danger zones. By defining the danger zone to calculate danger signals for each antibody, the algorithm adjusts antibodies' concentrations through its own danger signals and then triggers immune responses of self-regulation. So the population diversity can be maintained. Experimental results show that the algorithm has more advantages in the solution quality and diversity of the population. Compared with influential optimization algorithms, CLONALG, opt-aiNet, and dopt-aiNet, the algorithm has smaller error values and higher success rates and can find solutions to meet the accuracies within the specified function evaluation times.

  4. Improved artificial bee colony algorithm based gravity matching navigation method.

    PubMed

    Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang

    2014-07-18

    Gravity matching navigation algorithm is one of the key technologies for gravity aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it possible to be applied to the gravity matching navigation field. However, existing search mechanisms of basic ABC algorithms cannot meet the need for high accuracy in gravity aided navigation. Firstly, proper modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism is presented in this paper which is based on an improved ABC algorithm using external speed information. At last, modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and results show that the matching rate of the method is high enough to obtain a precise matching position.

  5. Fast parallel algorithm for slicing STL based on pipeline

    NASA Astrophysics Data System (ADS)

    Ma, Xulong; Lin, Feng; Yao, Bo

    2016-05-01

    In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing process for large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of the number of threads and the number of layers are investigated in a series of experiments. The experimental results show that the number of threads and the number of layers are two significant factors for the speedup ratio. The trend of speedup versus the number of threads reveals a positive relationship that agrees well with Amdahl's law, and the trend of speedup versus the number of layers also shows a positive relationship, in agreement with Gustafson's law. The new algorithm uses topological information to compute contours and parallelism to obtain speedup. Another parallel algorithm, based on data parallelism, is used in the experiments to show that the pipeline parallel mode is more efficient. A final case study further demonstrates the performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm can make full use of multi-core CPU hardware and accelerate the slicing process; compared with the data-parallel slicing algorithm, the new algorithm adopts a pipeline parallel model and achieves a much higher speedup ratio and efficiency.
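
    A minimal sketch of a three-stage slicing pipeline (read triangles, intersect them with a layer plane, collect contour segments) connected by queues. It illustrates only the pipeline structure, not the topology-based contour linking or the performance results of the paper; in CPython, real speedup would additionally require processes or a GIL-free runtime rather than plain threads.

    import queue
    import threading

    STOP = object()

    def slice_triangle(tri, z):
        """Segment where a triangle (three (x, y, z) vertices) crosses the plane z = const."""
        pts = []
        for a, b in ((0, 1), (1, 2), (2, 0)):
            (xa, ya, za), (xb, yb, zb) = tri[a], tri[b]
            if (za - z) * (zb - z) < 0:                      # edge strictly crosses the plane
                t = (z - za) / (zb - za)
                pts.append((xa + t * (xb - xa), ya + t * (yb - ya)))
        return tuple(pts) if len(pts) == 2 else None

    def pipeline_slice(triangles, z):
        q1, q2 = queue.Queue(), queue.Queue()
        segments = []

        def stage_read():                                    # stage 1: feed triangles
            for tri in triangles:
                q1.put(tri)
            q1.put(STOP)

        def stage_intersect():                               # stage 2: geometric work
            while (tri := q1.get()) is not STOP:
                seg = slice_triangle(tri, z)
                if seg:
                    q2.put(seg)
            q2.put(STOP)

        def stage_collect():                                 # stage 3: gather contour segments
            while (seg := q2.get()) is not STOP:
                segments.append(seg)

        threads = [threading.Thread(target=f) for f in (stage_read, stage_intersect, stage_collect)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return segments

    # example: one layer of a unit tetrahedron at z = 0.25
    # tet = [((0,0,0),(1,0,0),(0,1,0)), ((0,0,0),(1,0,0),(0,0,1)),
    #        ((0,0,0),(0,1,0),(0,0,1)), ((1,0,0),(0,1,0),(0,0,1))]
    # pipeline_slice(tet, 0.25)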

  6. PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.

    PubMed

    Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A

    2016-06-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelised using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained.

  8. Fast image matching algorithm based on projection characteristics

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one dimension and then performs matching and identification through one-dimensional correlation. Because the projections are normalized, correct matching is still achieved when the image brightness or signal amplitude is scaled proportionally. Experimental results show that the projection-characteristics-based image registration method proposed in this article greatly improves the matching speed while preserving the matching accuracy.
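
    A minimal numpy sketch of the projection idea, assuming a simple exhaustive search: each candidate window is compared with the template through normalized 1-D row and column projections rather than the full 2-D patch, so a proportional change in brightness does not affect the match. This is an illustration of the principle, not the paper's optimized implementation (which would, for example, precompute projections with cumulative sums).

        import numpy as np

        def normalized(v):
            v = v - v.mean()
            n = np.linalg.norm(v)
            return v / n if n > 0 else v

        def projection_match(image, template):
            """Exhaustive search in which every candidate window is compared through
            its 1-D row and column projections instead of the full 2-D patch."""
            th, tw = template.shape
            t_row = normalized(template.sum(axis=1))
            t_col = normalized(template.sum(axis=0))
            best, best_score = (0, 0), -np.inf
            for r in range(image.shape[0] - th + 1):
                for c in range(image.shape[1] - tw + 1):
                    win = image[r:r + th, c:c + tw]
                    score = float(np.dot(normalized(win.sum(axis=1)), t_row) +
                                  np.dot(normalized(win.sum(axis=0)), t_col))
                    if score > best_score:
                        best, best_score = (r, c), score
            return best

        rng = np.random.default_rng(1)
        image = rng.random((48, 48))
        template = image[20:30, 12:22] * 1.7      # brightness-scaled copy of a patch
        print(projection_match(image, template))  # expected (20, 12)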

  9. A Web Services Composition Design framework based on Agent Organization

    NASA Astrophysics Data System (ADS)

    Li, JiaJia; Li, Bin; Zhang, Xiaowei

    Computing environments are becoming more open, distributed and pervasive. The web services compositions built for these dynamic environments will need to become more adaptable and adaptive to unexpected events. This paper defines an approach to web services composition based on agent organization. The functions of the three layers, the classification of agents, and the agent model and agent design in this framework are introduced in detail. A reliable and flexible web services composition is realized using this framework.

  10. Research on classified real-time flood forecasting framework based on K-means cluster and rough set.

    PubMed

    Xu, Wei; Peng, Yong

    2015-01-01

    This research presents a new classified real-time flood forecasting framework. In this framework, historical floods are classified by K-means clustering according to the spatial and temporal distribution of precipitation, the time variance of precipitation intensity and other hydrological factors. Based on the classification results, a rough set is used to extract the identification rules for real-time flood forecasting. Then, the parameters of the conceptual hydrological model are calibrated for each category using a genetic algorithm. In real-time forecasting, the corresponding category of parameters is selected for flood forecasting according to the obtained flood information. This research tests the new classified framework on Guanyinge Reservoir and compares the framework with the traditional flood forecasting method. It finds that the performance of the new classified framework is significantly better in terms of accuracy. Furthermore, the framework can be applied in catchments with fewer historical floods. PMID:26442493
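
    A toy Python sketch of the classification step, under assumed inputs: each historical flood is summarized by a few precipitation descriptors, K-means groups the events, and at forecast time the parameter set calibrated for the nearest cluster centre is selected. The descriptors and parameter names (CN, k_routing) are placeholders, and the rough-set rule extraction and genetic-algorithm calibration are not reproduced.

        import numpy as np

        def kmeans(X, k, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), k, replace=False)]
            for _ in range(iters):
                labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
                centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                    else centers[j] for j in range(k)])
            return centers, labels

        # Toy descriptors per historical flood: [mean intensity, peak intensity, duration]
        rng = np.random.default_rng(42)
        hist = np.vstack([rng.normal([5, 20, 12], 1.5, (30, 3)),    # long, moderate events
                          rng.normal([15, 60, 4], 2.0, (30, 3))])   # short, intense events
        centers, labels = kmeans(hist, k=2)

        # One calibrated parameter set per category (placeholder model parameters)
        params_by_class = {0: {"CN": 72, "k_routing": 0.35}, 1: {"CN": 85, "k_routing": 0.20}}

        new_event = np.array([14.0, 55.0, 5.0])            # incoming flood descriptors
        cls = int(np.argmin(((new_event - centers) ** 2).sum(-1)))
        print("use parameters:", params_by_class[cls])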

  12. Location-based information retrieval framework

    NASA Astrophysics Data System (ADS)

    Hariharan, Gurushyam; Mehta, Sandeep

    2003-03-01

    The recent advances in mobile communication technologies and their widespread use call for a host of new value-added services for the mobile user. In their current avatar, these devices are little more than communication equipment. Now consumer-oriented, mobile, internet-connected devices that are location aware (capable of determining and transmitting their current geographical location) are becoming available everywhere. The availability of internet access and location awareness in portable devices like cell phones, Personal Digital Assistants, etc. opens up a host of new opportunities for services that can capitalize on the location of the user. Besides providing navigational information to the user, additional push-based information can be sent to the user based on his profile and his preferences. The domain is wide and the number of applications is enormous. This paper presents a design and implementation of a basic location-aware service.

  13. Pilot based frameworks for Weather Research Forecasting

    NASA Astrophysics Data System (ADS)

    Ganapathi, Dinesh Prasanth

    The Weather Research and Forecasting (WRF) domain consists of complex workflows that demand the use of Distributed Computing Infrastructure (DCI). Weather forecasting requires that weather researchers use different sets of initial conditions and one or a combination of physics models on the same set of input data. For this type of simulation an ensemble-based computing approach becomes imperative. Most DCIs have local job-schedulers that have no smart way of dealing with the execution of an ensemble-type computational problem, as the job-schedulers are built to cater only to the bare essentials of resource allocation. This means the weather scientists have to submit multiple jobs to the job-scheduler. In this dissertation we use Pilot-Job based tools to decouple workload submission and resource allocation, thereby streamlining the complex workflows in the Weather Research and Forecasting domain and reducing their overall time to completion. We also achieve location-independent job execution, data movement, placement and processing. Next, we create the necessary enablers to run an ensemble of tasks capable of running on multiple heterogeneous distributed computing resources, thereby creating the opportunity to minimize the overall time consumed in running the models. Our experiments show that the tools developed exhibit very good strong- and weak-scaling characteristics. These results have the potential to change the way weather researchers submit traditional WRF jobs to DCIs by giving them a powerful weapon in their arsenal that can exploit the combined power of various heterogeneous DCIs that could otherwise be difficult to harness owing to interoperability issues.

  14. An Emerging Framework for Analyzing School-Based Professional Community.

    ERIC Educational Resources Information Center

    Kruse, Sharon D.; Louis, Karen Seashore

    This paper attempts to blend the literature on professionalism with the literature of community, thus positing a framework for a school-based professional community. Sociologists have long distinguished between occupations--even high status ones--and professions. Among the key distinctions of professionalism are: a technical knowledge base shared…

  15. Construct Definition Using Cognitively Based Evidence: A Framework for Practice

    ERIC Educational Resources Information Center

    Ketterlin-Geller, Leanne R.; Yovanoff, Paul; Jung, EunJu; Liu, Kimy; Geller, Josh

    2013-01-01

    In this article, we highlight the need for a precisely defined construct in score-based validation and discuss the contribution of cognitive theories to accurately and comprehensively defining the construct. We propose a framework for integrating cognitively based theoretical and empirical evidence to specify and evaluate the construct. We apply…

  16. Curriculum Research: Toward a Framework for "Research-based Curricula"

    ERIC Educational Resources Information Center

    Clements, Douglas H.

    2007-01-01

    Government agencies and members of the educational research community have petitioned for research-based curricula. The ambiguity of the phrase "research-based", however, undermines attempts to create a shared research foundation for the development of, and informed choices about, classroom curricula. This article presents a framework for the…

  17. A Moment-Based Condensed History Algorithm

    SciTech Connect

    Tolar, D.R.; Larsen, E.W.

    2000-06-15

    ''Condensed History'' algorithms are Monte Carlo models for electron transport problems. They describe the aggregate effect of the multiple collisions that occur when an electron travels a path length s_0. This path length is the distance each Monte Carlo electron travels between Condensed History steps. Conventional Condensed History schemes employ a splitting routine over the range 0 ≤ s ≤ s_0. For example, the Random Hinge method splits each path-length step into two substeps: one with length ξ·s_0 and one with length (1 − ξ)·s_0, where ξ is a random number with 0 < ξ < 1. Here we develop a new Condensed History algorithm to improve the accuracy of electron transport simulations by preserving the mean position, and the variance in the mean, of electrons that have traveled a path length s and are traveling with direction cosine μ. These means and variances are obtained from the zeroth-, first-, and second-order spatial moments of the Boltzmann transport equation. Hence, our method is a Monte Carlo application of the ''Method of Moments''.
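
    The following Python sketch illustrates only the Random Hinge splitting referred to above: each condensed-history step of length s0 is split at a random fraction xi, with a single aggregate deflection applied at the hinge. The Gaussian deflection model and the slab geometry are illustrative assumptions, not the moment-preserving scheme proposed in the paper.

        import numpy as np

        def random_hinge_step(x, mu, s0, rng, sigma=0.05):
            """Advance one condensed-history step of total path length s0 using the
            Random Hinge rule: drift xi*s0, scatter once, then drift (1-xi)*s0.
            Slab geometry: x is depth, mu is the direction cosine."""
            xi = rng.random()
            x += mu * xi * s0                      # first substep
            theta = rng.normal(0.0, sigma)         # toy aggregate deflection at the hinge
            mu = np.clip(mu * np.cos(theta) + np.sqrt(1 - mu**2) * np.sin(theta), -1, 1)
            x += mu * (1.0 - xi) * s0              # second substep
            return x, mu

        rng = np.random.default_rng(0)
        depths = []
        for _ in range(2000):                      # 2000 electron histories
            x, mu = 0.0, 1.0                       # electron starts along the axis
            for _ in range(20):                    # 20 condensed-history steps
                x, mu = random_hinge_step(x, mu, s0=0.1, rng=rng)
            depths.append(x)
        print("mean depth", np.mean(depths), "variance", np.var(depths))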

  18. A transport-based condensed history algorithm

    SciTech Connect

    Tolar Jr, D R

    1999-01-06

    Condensed history algorithms are approximate electron transport Monte Carlo methods in which the cumulative effects of multiple collisions are modeled in a single step of (user-specified) path length s_0. This path length is the distance each Monte Carlo electron travels between collisions. Current condensed history techniques utilize a splitting routine over the range 0 ≤ s ≤ s_0. For example, the PENELOPE method splits each step into two substeps: one with length ξ·s_0 and one with length (1 − ξ)·s_0, where ξ is a random number with 0 < ξ < 1. Because s_0 is fixed (not sampled from an exponential distribution), conventional condensed history schemes are not transport processes. Here the authors describe a new condensed history algorithm that is a transport process. The method simulates a transport equation that approximates the exact Boltzmann equation. The new transport equation has a larger mean free path than, and preserves two angular moments of, the Boltzmann equation. Thus, the new process is solved more efficiently by Monte Carlo, and it conserves both particles and scattering power.

  19. CUDT: A CUDA Based Decision Tree Algorithm

    PubMed Central

    Sheu, Ruey-Kai; Chiu, Chun-Chieh

    2014-01-01

    The decision tree is one of the most popular classification methods in data mining, and many approaches have been proposed that focus on improving its performance. However, those algorithms were developed to run on traditional distributed systems, and the latency of processing the huge volumes of data generated by ubiquitous sensing nodes cannot be improved without new technology. To improve data processing latency in large-scale data mining, in this paper we design and implement a new parallelized decision tree algorithm on CUDA (compute unified device architecture), a GPGPU solution provided by NVIDIA. In the proposed system, the CPU is responsible for flow control while the GPU is responsible for computation. We have conducted many experiments to evaluate the system performance of CUDT and made a comparison with the traditional CPU version. The results show that CUDT is 5-55 times faster than Weka-j48 and achieves an 18-fold speedup over SPRINT for large data sets. PMID:25140346

  20. PCA-LBG-based algorithms for VQ codebook generation

    NASA Astrophysics Data System (ADS)

    Tsai, Jinn-Tsong; Yang, Po-Yuan

    2015-04-01

    Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector of each group. The LBG algorithm then refines the initial codebook formed from the vectors selected by the PCA-based grouping. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of variables that are not linearly correlated. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithms are expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results than existing methods reported in the literature.
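
    A compact numpy sketch of the PCA-LBG-Median variant as described above, under the assumption that grouping is done by splitting the training vectors into equal-frequency groups along the first principal component; the group medians seed the codebook, which standard LBG (Lloyd-style) iterations then refine. Details such as the stopping rule are simplified.

        import numpy as np

        def pca_first_component(X):
            Xc = X - X.mean(axis=0)
            # Leading right singular vector = first principal component direction
            _, _, vt = np.linalg.svd(Xc, full_matrices=False)
            return vt[0]

        def initial_codebook_median(X, n_codes):
            scores = (X - X.mean(axis=0)) @ pca_first_component(X)
            order = np.argsort(scores)
            groups = np.array_split(order, n_codes)        # equal-frequency groups
            return np.array([np.median(X[g], axis=0) for g in groups])

        def lbg_refine(X, codebook, iters=20):
            for _ in range(iters):
                d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
                assign = d.argmin(axis=1)
                for j in range(len(codebook)):
                    if np.any(assign == j):
                        codebook[j] = X[assign == j].mean(axis=0)
            return codebook

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 16))          # e.g. 4x4 image blocks as 16-D vectors
        cb = lbg_refine(X, initial_codebook_median(X, n_codes=8))
        print(cb.shape)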

  1. A modified density-based clustering algorithm and its implementation

    NASA Astrophysics Data System (ADS)

    Ban, Zhihua; Liu, Jianguo; Yuan, Lulu; Yang, Hua

    2015-12-01

    This paper presents an improved density-based clustering algorithm built on the approach of clustering by fast search and find of density peaks. A distance threshold is introduced for the purpose of economizing memory. In order to reduce the probability that two points share the same density value, a similarity measure is utilized to define proximity. We have tested the modified algorithm on a large data set, several small data sets and shape data sets. It turns out that the proposed algorithm obtains acceptable results and can be applied more widely.
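
    For reference, a short numpy sketch of the underlying density-peaks quantities (local density rho and the distance delta to the nearest higher-density point) from which cluster centres are picked; the distance threshold and similarity-based proximity measure introduced by this paper are not reproduced here.

        import numpy as np

        def density_peaks(X, dc):
            """Rodriguez-Laio style quantities: rho = neighbours within cutoff dc,
            delta = distance to the nearest point with higher density."""
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            rho = (d < dc).sum(axis=1) - 1
            delta = np.empty(len(X))
            for i in range(len(X)):
                higher = np.where(rho > rho[i])[0]
                delta[i] = d[i].max() if higher.size == 0 else d[i, higher].min()
            return rho, delta

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
        rho, delta = density_peaks(X, dc=0.5)
        # Cluster centres are the points with both large rho and large delta
        centers = np.argsort(rho * delta)[-2:]
        print("centre indices:", centers)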

  2. A novel bit-quad-based Euler number computing algorithm.

    PubMed

    Yao, Bin; He, Lifeng; Kang, Shiying; Chao, Yuyan; Zhao, Xiao

    2015-01-01

    The Euler number of a binary image is an important topological property in computer vision and pattern recognition. This paper proposes a novel bit-quad-based Euler number computing algorithm. Based on graph theory and an analysis of bit-quad patterns, our algorithm only needs to count two bit-quad patterns. Moreover, by using the information obtained while processing the previous bit-quad, the average number of pixels to be checked for processing a bit-quad is only 1.75. Experimental results demonstrate that our method significantly outperforms conventional Euler number computing algorithms. PMID:26636023
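
    The paper's optimized two-pattern counting is not spelled out in the abstract; as a baseline, the sketch below implements the classical Gray bit-quad formula that such algorithms are typically compared against, assuming 4-connectivity for the foreground (E = (Q1 - Q3 + 2*QD)/4, with the sign of the QD term flipped for 8-connectivity).

        import numpy as np

        def euler_number_bit_quads(img, connectivity=4):
            """Euler number of a binary image via Gray's bit-quad counts.
            Q1: quads with one foreground pixel, Q3: three, QD: diagonal pairs."""
            b = np.pad(img.astype(np.uint8), 1)
            # Code each 2x2 quad as a 4-bit pattern: top-left, top-right, bottom-left, bottom-right
            q = (b[:-1, :-1] << 3) | (b[:-1, 1:] << 2) | (b[1:, :-1] << 1) | b[1:, 1:]
            counts = np.bincount(q.ravel(), minlength=16)
            q1 = counts[[1, 2, 4, 8]].sum()
            q3 = counts[[7, 11, 13, 14]].sum()
            qd = counts[[6, 9]].sum()
            if connectivity == 4:
                return (q1 - q3 + 2 * qd) // 4
            return (q1 - q3 - 2 * qd) // 4      # 8-connectivity

        img = np.zeros((6, 6), dtype=int)
        img[1:5, 1:5] = 1
        img[2:4, 2:4] = 0                       # a square ring: 1 component, 1 hole
        print(euler_number_bit_quads(img))      # expected 0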

  4. Analysis of a wavelet-based robust hash algorithm

    NASA Astrophysics Data System (ADS)

    Meixner, Albert; Uhl, Andreas

    2004-06-01

    This paper is a quantitative evaluation of a wavelet-based, robust authentication hashing algorithm. Based on the results of a series of robustness and tampering sensitivity tests, we describe possible shortcomings and propose various modifications to the algorithm to improve its performance. The second part of the paper describes an attack against the scheme. It allows an attacker to modify a tampered image such that its hash value closely matches the hash value of the original.

  5. Optree: a learning-based adaptive watershed algorithm for neuron segmentation.

    PubMed

    Uzunbaş, Mustafa Gökhan; Chen, Chao; Metaxas, Dimitris

    2014-01-01

    We present a new algorithm for automatic and interactive segmentation of neuron structures from electron microscopy (EM) images. Our method selects a collection of nodes from the watershed merging tree as the proposed segmentation. This is achieved by building a conditional random field (CRF) whose underlying graph is the merging tree. The maximum a posteriori (MAP) prediction of the CRF is the output segmentation. Our algorithm outperforms state-of-the-art methods. Both the inference and the training are very efficient because the graph is tree-structured. Furthermore, we develop an interactive segmentation framework which selects uncertain regions for a user to proofread. The uncertainty is measured by the marginals of the graphical model. Based on user corrections, our framework modifies the merging tree and thus improves the segmentation globally. PMID:25333106

  6. A continuum mechanics-based framework for optimizing boundary and finite element meshes associated with underground excavations-framework

    NASA Astrophysics Data System (ADS)

    Zsáki, Attila M.; Curran, John H.

    2005-11-01

    Many field problems, from stress analysis and heat transfer to contaminant transport, deal with disturbances in a continuum caused by a source (defined by its discrete geometry) and a region of interest (where a solution is sought). Depending on the location of regions of interest in relation to the sources, the level of geometric detail necessary to represent the sources in a model can vary considerably. A practical application of stress analysis in mining is the evaluation of the effects of continuous excavation on the states of stress around mine openings. Labour-intensive model preparation and lengthy computation, coupled with the interpretation of analysis results, can have considerable impact on the successful operation of an underground mine, where stope failures can cost tens of millions of dollars and possibly lead to closure of the mine. A framework is proposed, based on continuum mechanics principles, to automatically optimize the level of geometric detail required for an analysis by simplifying the model geometry using expanded and modified algorithms that originated in computer graphics. This reduction in model size directly translates to savings in computational time. The results obtained from an optimized model have accuracy comparable to the uncertainty in the input data (e.g. rock mass properties, geology, etc.). This first paper defines the optimization framework, while a companion paper investigates its efficiency and application to practical mining and excavation-related problems.

  7. Tools and Algorithms to Link Horizontal Hydrologic and Vertical Hydrodynamic Models and Provide a Stochastic Modeling Framework

    NASA Astrophysics Data System (ADS)

    Salah, Ahmad M.; Nelson, E. James; Williams, Gustavious P.

    2010-04-01

    We present algorithms and tools we developed to automatically link an overland flow model to a hydrodynamic water quality model with different spatial and temporal discretizations. These tools run the linked models, which provide a stochastic simulation framework. We also briefly present the tools and algorithms we developed to facilitate and analyze stochastic simulations of the linked models. We demonstrate the algorithms by linking the Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model for overland flow with the CE-QUAL-W2 model for water quality and reservoir hydrodynamics. GSSHA uses a two-dimensional horizontal grid while CE-QUAL-W2 uses a two-dimensional vertical grid. We implemented the algorithms and tools in the Watershed Modeling System (WMS), which allows modelers to easily create and use models. The algorithms are general and could be used for other models. Our tools create and analyze stochastic simulations to help understand uncertainty in the model application. While a number of examples of linked models exist, the ability to perform automatic, unassisted linking is a step forward and provides the framework to easily implement stochastic modeling studies.

  8. Algorithmic framework for group analysis of differential equations and its application to generalized Zakharov-Kuznetsov equations

    NASA Astrophysics Data System (ADS)

    Huang, Ding-jiang; Ivanova, Nataliya M.

    2016-02-01

    In this paper, we explain in more detail the modern treatment of the problem of group classification of (systems of) partial differential equations (PDEs) from the algorithmic point of view. More precisely, we revise the classical Lie algorithm for constructing symmetries of differential equations, describe the group classification algorithm and discuss the process of reduction of (systems of) PDEs to (systems of) equations with a smaller number of independent variables in order to construct invariant solutions. The group classification algorithm and reduction process are illustrated by the example of the generalized Zakharov-Kuznetsov (GZK) equations of the form u_t + (F(u))_{xxx} + (G(u))_{xyy} + (H(u))_x = 0. As a result, a complete group classification of the GZK equations is performed and a number of new interesting nonlinear invariant models which have non-trivial invariance algebras are obtained. Lie symmetry reductions and exact solutions for two important invariant models, i.e., the classical and modified Zakharov-Kuznetsov equations, are constructed. The algorithmic framework for group analysis of differential equations presented in this paper can also be applied to other nonlinear PDEs.

  9. A ray-based algorithm for multi-dimensional linear conversion

    SciTech Connect

    Tracy, Eugene R.; Kaufman, Allan N.; Jaun, Andre

    2004-04-19

    A numerical algorithm is proposed for connecting the incoming and outgoing wave fields in studies of linear conversion. This is the first such ray-based algorithm for wave conversion in multiple spatial dimensions. It is demonstrated that, aside from the overall phase of the coupling, one can directly evaluate all quantities needed for the connection coefficients from the ray geometry. The ray dynamics is generated using the determinant of the dispersion matrix as the Hamiltonian. Using information available while following an incoming ray, the algorithm automatically detects that the ray has entered a conversion region, evaluates the transmission and conversion coefficients, and launches the transmitted ray. The algorithm does not require any prior knowledge of the geometry of the conversion region. The algorithm is illustrated using a two-dimensional toroidal model with resonant conversion from a magnetosonic to an ion-hybrid wave.

  10. Static algorithm based on MPLS and QoS routing

    NASA Astrophysics Data System (ADS)

    Yang, Ting; Sun, Yugeng; Liu, Bin

    2004-04-01

    This paper proposes a new static routing algorithm applying Traffic Engineering, which integrates Multiprotocol Label Switching (MPLS) and Quality of Service (QoS) Routing. Because MPLS is used, the algorithm applies centralized control to the transmission paths of different service types. At the same time, by selecting LSPs based on the state of the network and the QoS requirements, the algorithm makes resource usage globally optimal. It avoids the shortcoming of traditional routing, in which network congestion is produced by unbalanced resource usage. A unified objective strategy in the algorithm produces effective solutions for the NP-hard problem of satisfying multiple requirements in a single routing computation. Finally, the paper shows that the algorithm is feasible and preferable through computer simulation and theoretical deduction.

  11. Markov random-field-based anomaly screening algorithm

    NASA Astrophysics Data System (ADS)

    Bello, Martin G.

    1995-06-01

    A novel anomaly screening algorithm is described which makes use of a regression diagnostic associated with the fitting of Markov Random Field (MRF) models. This regression diagnostic quantifies the extent to which a given neighborhood of pixels is atypical relative to local background characteristics. The screening algorithm consists first in the calculation of MRF-based anomaly statistic values. Next, 'blob' features, such as pixel count and maximal pixel intensity, are calculated and ranked over the image in order to 'filter' the blobs down to a final subset of most likely candidates. Receiver operating characteristics obtained from applying the described screening algorithm to the detection of minelike targets in high- and low-frequency side-scan sonar imagery are presented, together with results obtained from other screening algorithms for comparison, demonstrating performance comparable to trained human operators. In addition, real-time implementation considerations associated with each algorithmic component of the described procedure are identified.

  12. Level-1 pixel based tracking trigger algorithm for LHC upgrade

    NASA Astrophysics Data System (ADS)

    Moon, C.-S.; Savoy-Navarro, A.

    2015-10-01

    The Pixel Detector is the innermost detector of the tracking system of the Compact Muon Solenoid (CMS) experiment at the CERN Large Hadron Collider (LHC). It precisely determines the interaction point (primary vertex) of the events and the possible secondary vertexes due to heavy flavours (b and c quarks); it is part of the overall tracking system that allows reconstructing the tracks of the charged particles in the events and, combined with the magnetic field, measuring their momentum. The pixel detector allows measuring the tracks in the region closest to the interaction point. The Level-1 (real-time) pixel based tracking trigger is a novel trigger system that is currently being studied for the LHC upgrade. An important goal is developing real-time track reconstruction algorithms able to cope with very high rates and a high flux of data in a very harsh environment. The pixel detector has an especially crucial role in precisely identifying the primary vertex of the rare physics events from the large pile-up (PU) of events. The goal of adding the pixel information already at the real-time level of the selection is to help reduce the total level-1 trigger rate while keeping a high selection capability. This is quite an innovative and challenging objective for the experiment's upgrade for the High Luminosity LHC (HL-LHC). The special case addressed here is the CMS experiment. This document describes exercises focusing on the development of a fast pixel track reconstruction in which the pixel track is matched with a Level-1 electron object, using a ROOT-based simulation framework.

  13. Adaptive bad pixel correction algorithm for IRFPA based on PCNN

    NASA Astrophysics Data System (ADS)

    Leng, Hanbing; Zhou, Zuofeng; Cao, Jianzhong; Yi, Bo; Yan, Aqi; Zhang, Jian

    2013-10-01

    Bad pixels and response non-uniformity are the primary obstacles when an IRFPA is used in different thermal imaging systems. The bad pixels of an IRFPA include fixed bad pixels and random bad pixels. The former are caused by material or manufacturing defects and their positions are fixed; the latter are caused by temperature drift and their positions change over time. Traditional radiometric-calibration-based bad pixel detection and compensation algorithms are only valid for fixed bad pixels. Scene-based bad pixel correction algorithms are the effective way to eliminate both kinds of bad pixels. Currently, the most widely used scene-based bad pixel correction algorithm is based on the adaptive median filter (AMF). In this algorithm, bad pixels are regarded as image noise and replaced by the filtered value. However, missed corrections and false corrections often occur when the AMF is used to handle complex infrared scenes. To solve this problem, a new adaptive bad pixel correction algorithm based on pulse coupled neural networks (PCNN) is proposed. Potential bad pixels are detected by the PCNN in the first step, then image sequences are used periodically to confirm the real bad pixels and exclude the false ones, and finally bad pixels are replaced by the filtered result. Experiments with real infrared images obtained from a camera show the effectiveness of the proposed algorithm.

  14. An optical water type framework for selecting and blending retrievals from bio-optical algorithms in lakes and coastal waters.

    PubMed

    Moore, Timothy S; Dowell, Mark D; Bradt, Shane; Verdu, Antonio Ruiz

    2014-03-01

    Bio-optical models are based on relationships between the spectral remote sensing reflectance and the optical properties of in-water constituents. The wavelength range where this information can be exploited changes depending on the water characteristics. In low chlorophyll-a waters, the blue/green region of the spectrum is more sensitive to changes in chlorophyll-a concentration, whereas the red/NIR region becomes more important in turbid and/or eutrophic waters. In this work we present an approach to manage the shift from blue/green ratios to red/NIR-based chlorophyll-a algorithms for optically complex waters. Based on a combined in situ data set of coastal and inland waters, measures of overall algorithm uncertainty were roughly equal for two chlorophyll-a algorithms (the standard NASA OC4 algorithm based on blue/green bands and a MERIS 3-band algorithm based on red/NIR bands), with RMS errors of 0.416 and 0.437 in log chlorophyll-a units, respectively. However, it is clear that each algorithm performs better over different chlorophyll-a ranges. When a blending approach is used based on an optical water type classification, the overall RMS error was reduced to 0.320. Bias and relative error were also reduced when evaluating the blended chlorophyll-a product compared to either of the single-algorithm products. As a demonstration for ocean color applications, the algorithm blending approach was applied to MERIS imagery over Lake Erie. We also examined the use of this approach in several coastal marine environments, and examined the long-term frequency of the OWTs in MODIS-Aqua imagery over Lake Erie. PMID:24839311
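
    A heavily simplified Python sketch of the blending step: the per-pixel chlorophyll-a estimate is a membership-weighted average of a blue/green retrieval and a red/NIR retrieval. The two retrieval functions and the water-type membership rule below are placeholders only; the actual OC4 and MERIS 3-band coefficients and the OWT classification scheme are not reproduced from the paper.

        import numpy as np

        def chl_blue_green(rrs):       # placeholder for an OC4-style blue/green retrieval
            return 0.3 * (rrs["Rrs490"] / rrs["Rrs555"]) ** -1.5

        def chl_red_nir(rrs):          # placeholder for a MERIS 3-band red/NIR retrieval
            return 40.0 * (rrs["Rrs708"] / rrs["Rrs665"] - 0.9)

        def owt_memberships(rrs):
            """Toy fuzzy membership in two water types ('clear' vs 'turbid/eutrophic').
            A real implementation would score the spectrum against OWT mean spectra."""
            turbidity = np.clip(rrs["Rrs665"] / rrs["Rrs490"], 0.0, 1.0)
            return np.array([1.0 - turbidity, turbidity])

        def blended_chl(rrs):
            w = owt_memberships(rrs)
            w = w / w.sum()
            estimates = np.array([chl_blue_green(rrs), chl_red_nir(rrs)])
            return float(w @ estimates)

        pixel = {"Rrs490": 0.004, "Rrs555": 0.003, "Rrs665": 0.002, "Rrs708": 0.0021}
        print(blended_chl(pixel))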

  15. Introducing a framework to improve estimation of actual evapotranspiration using MODIS images with SEBAL algorithm

    NASA Astrophysics Data System (ADS)

    Mianabadi, Ameneh; Alizadeh, Amin; Sanaeinejad, Hossein; Ghahraman, Bijan; Davary, Kamran; Coenders-Gerrits, Miriam

    2015-04-01

    An accurate estimate of actual evapotranspiration ideally makes use of daily MODIS images. Under cloudy conditions, however, it is difficult to obtain appropriate images, and it is time-consuming to interpret all of them. Therefore, in this paper we try to choose the appropriate images to improve the estimation of actual evapotranspiration. For this purpose, we introduce a framework for choosing the dates that produce the best estimate of actual evapotranspiration. Locating the dry (hot pixel) and wet (cold pixel) endpoints of the evapotranspiration spectrum is also important. We dealt with this problem by employing a statistical procedure for the automated selection of cold and hot pixels, and we also visually reviewed the locations of hot and cold pixels using a land cover image to ensure that the most appropriate pixels had been selected. To integrate evapotranspiration over time, linear and spline interpolation techniques were applied. In addition, based on the precipitation rates during the 5 days before the image date and the mean seasonal amount of evapotranspiration, we found a logarithmic equation that produces the best estimate of evapotranspiration during the given period. Results showed that the logarithmic equation produced a more accurate estimation of evapotranspiration than linear interpolation.

  16. A framework for disseminating evidence-based health promotion practices.

    PubMed

    Harris, Jeffrey R; Cheadle, Allen; Hannon, Peggy A; Forehand, Mark; Lichiello, Patricia; Mahoney, Eustacia; Snyder, Susan; Yarrow, Judith

    2012-01-01

    Wider adoption of evidence-based, health promotion practices depends on developing and testing effective dissemination approaches. To assist in developing these approaches, we created a practical framework drawn from the literature on dissemination and our experiences disseminating evidence-based practices. The main elements of our framework are 1) a close partnership between researchers and a disseminating organization that takes ownership of the dissemination process and 2) use of social marketing principles to work closely with potential user organizations. We present 2 examples illustrating the framework: EnhanceFitness, for physical activity among older adults, and American Cancer Society Workplace Solutions, for chronic disease prevention among workers. We also discuss 7 practical roles that researchers play in dissemination and related research: sorting through the evidence, conducting formative research, assessing readiness of user organizations, balancing fidelity and reinvention, monitoring and evaluating, influencing the outer context, and testing dissemination approaches. PMID:22172189

  17. A Flocking Based algorithm for Document Clustering Analysis

    SciTech Connect

    Cui, Xiaohui; Gao, Jinzhu; Potok, Thomas E

    2006-01-01

    Social animals or insects in nature often exhibit a form of emergent collective behavior known as flocking. In this paper, we present a novel flocking-based approach for document clustering analysis. Our flocking clustering algorithm uses stochastic and heuristic principles discovered from observing bird flocks or fish schools. Unlike other partition clustering algorithms such as K-means, the flocking-based algorithm does not require initial partition seeds. The algorithm generates a clustering of a given set of data through the embedding of the high-dimensional data items on a two-dimensional grid for easy clustering result retrieval and visualization. Inspired by the self-organized behavior of bird flocks, we represent each document object with a flock boid. The simple local rules followed by each flock boid result in the entire document flock generating complex global behaviors, which eventually result in a clustering of the documents. We evaluate the efficiency of our algorithm with both a synthetic dataset and a real document collection that includes 100 news articles collected from the Internet. Our results show that the flocking clustering algorithm achieves better performance compared to the K-means and the Ant clustering algorithms for real document clustering.

  18. AdaBoost-based algorithm for network intrusion detection.

    PubMed

    Hu, Weiming; Hu, Wei; Maybank, Steve

    2008-04-01

    Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data. PMID:18348941
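
    A minimal Python sketch of AdaBoost with threshold decision stumps on continuous features, illustrating the weight update and weighted-majority vote described above; the paper's stumps for categorical features, adaptable initial weights and overfitting safeguards are not reproduced, and the toy labels merely stand in for attack/normal classes.

        import numpy as np

        def best_stump(X, y, w):
            """Find the (feature, threshold, polarity) stump minimising weighted error."""
            best = None
            for j in range(X.shape[1]):
                for thr in np.unique(X[:, j]):
                    for pol in (1, -1):
                        pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                        err = w[pred != y].sum()
                        if best is None or err < best[0]:
                            best = (err, j, thr, pol)
            return best

        def adaboost(X, y, rounds=10):
            w = np.full(len(y), 1.0 / len(y))          # uniform initial sample weights
            ensemble = []
            for _ in range(rounds):
                err, j, thr, pol = best_stump(X, y, w)
                err = max(err, 1e-10)
                alpha = 0.5 * np.log((1 - err) / err)  # weak-classifier weight
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                w *= np.exp(-alpha * y * pred)         # reweight misclassified samples up
                w /= w.sum()
                ensemble.append((alpha, j, thr, pol))
            return ensemble

        def predict(ensemble, X):
            score = sum(a * np.where(p * (X[:, j] - t) >= 0, 1, -1) for a, j, t, p in ensemble)
            return np.sign(score)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 4))
        y = np.where(X[:, 0] + 0.5 * X[:, 2] > 0, 1, -1)   # toy "attack vs normal" labels
        model = adaboost(X, y)
        print("training accuracy:", (predict(model, X) == y).mean())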

  19. Genetic Algorithm Based Neural Networks for Nonlinear Optimization

    1994-09-28

    This software develops a novel approach to nonlinear optimization using genetic algorithm based neural networks. To the best of our knowledge, this approach represents the first attempt at applying both neural network and genetic algorithm techniques to solve a nonlinear optimization problem. The approach constructs a neural network structure and an appropriately shaped energy surface whose minima correspond to optimal solutions of the problem. A genetic algorithm is employed to perform a parallel and powerful search of the energy surface.

  20. Quantum Image Encryption Algorithm Based on Quantum Image XOR Operations

    NASA Astrophysics Data System (ADS)

    Gong, Li-Hua; He, Xiang-Tao; Cheng, Shan; Hua, Tian-Xiang; Zhou, Nan-Run

    2016-07-01

    A novel encryption algorithm for quantum images based on quantum image XOR operations is designed. The quantum image XOR operations are designed by using the hyper-chaotic sequences generated with Chen's hyper-chaotic system to control the controlled-NOT operation, which is used to encode gray-level information. The initial conditions of Chen's hyper-chaotic system are the keys, which guarantee the security of the proposed quantum image encryption algorithm. Numerical simulations and theoretical analyses demonstrate that the proposed quantum image encryption algorithm has a larger key space, higher key sensitivity, stronger resistance to statistical analysis and lower computational complexity than its classical counterparts.

  1. Heuristic-based scheduling algorithm for high level synthesis

    NASA Technical Reports Server (NTRS)

    Mohamed, Gulam; Tan, Han-Ngee; Chng, Chew-Lye

    1992-01-01

    A new scheduling algorithm is proposed which uses a combination of a resource utilization chart, a heuristic algorithm that estimates the minimum number of hardware units based on operator mobilities, and a list-scheduling technique to achieve fast and near-optimal schedules. The scheduling time of this algorithm is almost independent of the length of the operators' mobilities, as can be seen from the benchmark example presented (a fifth-order digital elliptic wave filter) when the cycle time was increased from 17 to 18 and then to 21 cycles. It is implemented in C on a SUN3/60 workstation.

  2. Restart-Based Genetic Algorithm for the Quadratic Assignment Problem

    NASA Astrophysics Data System (ADS)

    Misevicius, Alfonsas

    The power of genetic algorithms (GAs) has been demonstrated for various domains of computer science, including combinatorial optimization. In this paper, we propose a new conceptual modification of the genetic algorithm entitled a "restart-based genetic algorithm" (RGA). An effective implementation of RGA for a well-known combinatorial optimization problem, the quadratic assignment problem (QAP), is discussed. The results obtained from the computational experiments on QAP instances from the publicly available library QAPLIB show excellent performance of RGA. This is especially true for the real-life-like QAP instances.

  3. A novel image encryption algorithm based on DNA subsequence operation.

    PubMed

    Zhang, Qiang; Xue, Xianglian; Wei, Xiaopeng

    2012-01-01

    We present a novel image encryption algorithm based on DNA subsequence operations. Unlike traditional DNA encryption methods, our algorithm does not use complex biological operations; it simply uses the idea of DNA subsequence operations (such as the elongation operation, truncation operation, deletion operation, etc.), combined with the logistic chaotic map, to scramble the locations and values of the image's pixels. The experimental results and security analysis show that the proposed algorithm is easy to implement, achieves a good encryption effect, has a large key space and strong sensitivity to the secret key, and is able to resist exhaustive and statistical attacks.
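
    The DNA subsequence operations themselves are not sketched here; the short Python example below illustrates only the chaotic component mentioned above, assuming the logistic map seeded by the secret key is used to generate a pixel-position permutation, which the receiver inverts with the same key.

        import numpy as np

        def logistic_sequence(x0, n, r=3.99):
            seq = np.empty(n)
            x = x0
            for i in range(n):
                x = r * x * (1.0 - x)        # logistic chaotic map
                seq[i] = x
            return seq

        def scramble(img, key):
            flat = img.ravel()
            perm = np.argsort(logistic_sequence(key, flat.size))   # key-dependent permutation
            return flat[perm].reshape(img.shape), perm

        def unscramble(scrambled, perm):
            flat = np.empty(scrambled.size, dtype=scrambled.dtype)
            flat[perm] = scrambled.ravel()   # invert the permutation
            return flat.reshape(scrambled.shape)

        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
        enc, perm = scramble(img, key=0.3141592)
        dec = unscramble(enc, perm)
        print(np.array_equal(img, dec))      # True: the scrambling is invertible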

  4. Ray-tracing-based reconstruction algorithms for digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Zhou, Weihua; Lu, Jianping; Zhou, Otto; Chen, Ying

    2015-03-01

    As a breast-imaging technique, digital breast tomosynthesis has great potential to improve the diagnosis of early breast cancer over mammography. Ray-tracing-based reconstruction algorithms, such as ray-tracing back projection, maximum-likelihood expectation maximization (MLEM), ordered-subset MLEM (OS-MLEM), and the simultaneous algebraic reconstruction technique (SART), have been developed as reconstruction methods for different breast tomosynthesis systems. This paper provides a comparative study investigating these algorithms through computer simulation and a phantom study. Experimental results suggested that, among the four investigated reconstruction algorithms, OS-MLEM and SART performed better in interplane artifact removal, with fast convergence.

  5. Phase shift extraction algorithm based on Euclidean matrix norm.

    PubMed

    Deng, Jian; Wang, Hankun; Zhang, Desi; Zhong, Liyun; Fan, Jinping; Lu, Xiaoxu

    2013-05-01

    In this Letter, the character of the Euclidean matrix norm (EMN) of the intensity difference between phase-shifting interferograms, which changes sinusoidally with the phase shifts, is presented. Based on this character, an EMN phase shift extraction algorithm is proposed. Both simulation and experimental research show that phase shifts can be determined easily and with high precision using the proposed EMN algorithm. Importantly, the proposed EMN algorithm supplies a powerful tool for the rapid calibration of phase shifts.
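
    The Letter's exact formula is not given in the abstract; the sketch below is one plausible reading of the stated property, assuming two frames I_k = A + B*cos(phi + delta_k) with common background and modulation and a spatial phase covering many fringes. Under these assumptions the EMN of the frame difference scales with |sin(delta/2)|, so the magnitude of the shift can be recovered by inverting that relation.

        import numpy as np

        def interferogram(phase, delta, A=1.0, B=0.5):
            return A + B * np.cos(phase + delta)

        def emn_phase_shift(I1, I2):
            """Estimate |delta1 - delta2| from the Euclidean norm of the frame difference.
            Assumes common background/modulation and spatial phase covering many fringes."""
            diff_norm = np.linalg.norm(I1 - I2)           # ~ B*sqrt(2N)*|sin(d/2)|
            ac_norm = np.linalg.norm(I1 - I1.mean())      # ~ B*sqrt(N/2)
            ratio = np.clip(diff_norm / (2.0 * ac_norm), 0.0, 1.0)
            return 2.0 * np.arcsin(ratio)

        # Synthetic tilted-fringe phase map and two frames shifted by pi/3
        y, x = np.mgrid[0:256, 0:256]
        phase = 0.15 * x + 0.05 * y
        I1 = interferogram(phase, 0.0)
        I2 = interferogram(phase, np.pi / 3)
        print(emn_phase_shift(I1, I2))      # close to 1.047 (= pi/3)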

  6. Genetic algorithm based fuzzy control of spacecraft autonomous rendezvous

    NASA Technical Reports Server (NTRS)

    Karr, C. L.; Freeman, L. M.; Meredith, D. L.

    1990-01-01

    The U.S. Bureau of Mines is currently investigating ways to combine the control capabilities of fuzzy logic with the learning capabilities of genetic algorithms. Fuzzy logic allows for the uncertainty inherent in most control problems to be incorporated into conventional expert systems. Although fuzzy logic based expert systems have been used successfully for controlling a number of physical systems, the selection of acceptable fuzzy membership functions has generally been a subjective decision. High performance fuzzy membership functions for a fuzzy logic controller that manipulates a mathematical model simulating the autonomous rendezvous of spacecraft are learned using a genetic algorithm, a search technique based on the mechanics of natural genetics. The membership functions learned by the genetic algorithm provide for a more efficient fuzzy logic controller than membership functions selected by the authors for the rendezvous problem. Thus, genetic algorithms are potentially an effective and structured approach for learning fuzzy membership functions.

  7. A new augmentation based algorithm for extracting maximal chordal subgraphs

    DOE PAGES

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2014-10-18

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.

  8. A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs

    PubMed Central

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2014-01-01

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms’ parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph. PMID:25767331
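
    A sequential, non-parallel sketch of the augmentation idea using networkx (is_chordal and a spanning forest as the chordal starting point): edges of the input graph are repeatedly added whenever they keep the subgraph chordal, and passes repeat until no edge can be added, so the result is maximal. This is an illustration of the augmentation strategy, not the authors' parallel algorithm.

        import networkx as nx

        def augment_to_maximal_chordal(G):
            """Greedy augmentation: start from a spanning forest (chordal by
            construction) and keep adding edges of G that preserve chordality."""
            H = nx.Graph()
            H.add_nodes_from(G.nodes)
            H.add_edges_from(nx.minimum_spanning_edges(G, data=False))  # spanning forest
            remaining = set(G.edges) - set(H.edges)
            changed = True
            while changed:                       # repeat passes so the result is maximal
                changed = False
                for e in list(remaining):
                    H.add_edge(*e)
                    if nx.is_chordal(H):
                        remaining.discard(e)
                        changed = True
                    else:
                        H.remove_edge(*e)
            return H

        G = nx.erdos_renyi_graph(30, 0.2, seed=1)
        H = augment_to_maximal_chordal(G)
        print(G.number_of_edges(), "->", H.number_of_edges(), "chordal:", nx.is_chordal(H))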

  10. Methodology Evaluation Framework for Component-Based System Development.

    ERIC Educational Resources Information Center

    Dahanayake, Ajantha; Sol, Henk; Stojanovic, Zoran

    2003-01-01

    Explains component-based development (CBD) for distributed information systems and presents an evaluation framework, which highlights the extent to which a methodology is component oriented. Compares prominent CBD methods, discusses ways of modeling, and suggests that this is a first step towards a components-oriented systems development…

  11. A Proposed Framework for Conducting Data-Based Test Analysis

    ERIC Educational Resources Information Center

    Slaney, Kathleen L.; Maraun, Michael D.

    2008-01-01

    The authors argue that the current state of applied data-based test analytic practice is unstructured and unmethodical due in large part to the fact that there is no clearly specified, widely accepted test analytic framework for judging the performances of particular tests in particular contexts. Drawing from the extant test theory literature,…

  12. 63. VIEW FROM BASE AREA INSIDE FRAMEWORK OF STEEL WINDMILL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    63. VIEW FROM BASE AREA INSIDE FRAMEWORK OF STEEL WINDMILL TOWER WITH ELI WINDMILL ON THE GROUND AT STOLL RESIDENCE ABOUT 1-1/2 MILES WEST OF NEBRASKA CITY ON STEAM WAGON ROAD. - Kregel Windmill Company Factory, 1416 Central Avenue, Nebraska City, Otoe County, NE

  13. 64. VIEW FROM BASE AREA INSIDE FRAMEWORK OF STEEL WINDMILL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    64. VIEW FROM BASE AREA INSIDE FRAMEWORK OF STEEL WINDMILL TOWER WITH ELI WINDMILL ON THE GROUND AT STOLL RESIDENCE ABOUT 1-1/2 MILES WEST OF NEBRASKA CITY ON STEAM WAGON ROAD. - Kregel Windmill Company Factory, 1416 Central Avenue, Nebraska City, Otoe County, NE

  14. Cloud computing-based TagSNP selection algorithm for human genome data.

    PubMed

    Hung, Che-Lun; Chen, Wen-Pei; Hua, Guan-Jie; Zheng, Huiru; Tsai, Suh-Jen Jane; Lin, Yaw-Ling

    2015-01-01

    Single nucleotide polymorphisms (SNPs) play a fundamental role in human genetic variation and are used in medical diagnostics, phylogeny construction, and drug design. They provide the highest-resolution genetic fingerprint for identifying disease associations and human features. Haplotypes are regions of linked genetic variants that are closely spaced on the genome and tend to be inherited together. Genetics research has revealed SNPs within certain haplotype blocks that introduce few distinct common haplotypes into most of the population. Haplotype block structures are used in association-based methods to map disease genes. In this paper, we propose an efficient algorithm for identifying haplotype blocks in the genome. In chromosomal haplotype data retrieved from the HapMap project website, the proposed algorithm identified longer haplotype blocks than an existing algorithm. To enhance its performance, we extended the proposed algorithm into a parallel algorithm that copies data in parallel via the Hadoop MapReduce framework. The proposed MapReduce-paralleled combinatorial algorithm performed well on real-world data obtained from the HapMap dataset; the improvement in computational efficiency was proportional to the number of processors used. PMID:25569088

  16. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem.

    PubMed

    Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen

    2016-01-01

    The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms. PMID:27034650
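
    A minimal Python sketch of the list-based cooling schedule described above: an initial temperature list is built from random moves, the list maximum drives the Metropolis criterion, and the list is adapted from the temperatures implied by accepted worse moves. Parameter values and the exact adaptation rule are illustrative assumptions, not the authors' settings.

    # Minimal list-based simulated annealing (LBSA) sketch for the TSP.
    import math, random

    def tour_length(tour, dist):
        return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

    def neighbour(tour):
        """2-opt style move: reverse a random segment of the tour."""
        i, j = sorted(random.sample(range(len(tour)), 2))
        return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

    def lbsa(dist, list_len=40, outer=200, inner=100):
        n = len(dist)
        tour = list(range(n))
        random.shuffle(tour)
        cost = tour_length(tour, dist)
        # build the initial temperature list from the cost changes of random moves
        temps = [abs(tour_length(neighbour(tour), dist) - cost) + 1e-9
                 for _ in range(list_len)]
        best, best_cost = tour[:], cost
        for _ in range(outer):
            t_max = max(temps)
            accepted_temps = []
            for _ in range(inner):
                cand = neighbour(tour)
                cand_cost = tour_length(cand, dist)
                delta = cand_cost - cost
                if delta <= 0:
                    tour, cost = cand, cand_cost
                else:
                    r = random.random()
                    if r < math.exp(-delta / t_max):   # Metropolis with list maximum
                        tour, cost = cand, cand_cost
                        # temperature that would make this acceptance exact
                        accepted_temps.append(-delta / math.log(r))
                if cost < best_cost:
                    best, best_cost = tour[:], cost
            if accepted_temps:
                # adapt the list: replace the maximum with the mean observed temperature
                temps.remove(t_max)
                temps.append(sum(accepted_temps) / len(accepted_temps))
        return best, best_cost

    if __name__ == "__main__":
        random.seed(1)
        pts = [(random.random(), random.random()) for _ in range(20)]
        d = [[math.dist(a, b) for b in pts] for a in pts]
        print(round(lbsa(d)[1], 3))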

  17. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem

    PubMed Central

    Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun

    2016-01-01

    The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms. PMID:27034650

  18. An algorithm for hyperspectral remote sensing of aerosols: theoretical framework, information content analysis and application to GEO-TASO

    NASA Astrophysics Data System (ADS)

    Hou, W.; Wang, J.; Xu, X.; Leitch, J. W.; Delker, T.; Chen, G.

    2015-12-01

    This paper includes a series of studies that aim to develop a hyperspectral remote sensing technique for retrieving aerosol properties from GEO-TASO (Geostationary Trace gas and Aerosol Sensor Optimization), a newly developed instrument that measures radiation at 0.4-0.7 μm wavelengths with a spectral resolution of 0.02 nm. The GEO-TASO instrument is a prototype of TEMPO (Tropospheric Emissions: Monitoring of Pollution), which will be launched in 2022 to measure aerosols, O3, and other trace gases from a geostationary orbit over North America. The theoretical framework of the optimized inversion algorithm and the information content analysis, such as the degrees of freedom for signal (DFS), are discussed for hyperspectral remote sensing in visible bands, as well as the application to GEO-TASO, which has been mounted on the NASA HU-25C aircraft and has gathered several days of airborne hyperspectral data for our studies. Based on optimization theory, and in contrast to the traditional lookup table (LUT) retrieval technique, our inversion method retrieves the aerosol parameters and surface reflectance simultaneously; UNL-VRTM (UNified Linearized Radiative Transfer Model) is employed for the forward model and Jacobian calculations, while principal component analysis (PCA) is used to constrain the hyperspectral surface reflectance. The information content analysis provides theoretical guidance for the practical inversion study about which aerosol parameters can be retrieved from GeoTASO hyperspectral remote sensing. The inversion is conducted iteratively until the modeled spectral radiance fits the GeoTASO measurements, using a quasi-Newton method called L-BFGS-B (Large-scale BFGS Bound-constrained). Finally, the retrieved aerosol optical depth and other aerosol parameters are compared against those retrieved by AERONET and/or in situ measurements such as DISCOVER-AQ during the aircraft campaign.
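
    To make the optimization step concrete, the toy sketch below fits a stand-in forward model to a synthetic spectrum with the bound-constrained quasi-Newton solver L-BFGS-B from scipy. The forward model, parameter names and bounds are placeholders, not UNL-VRTM or the actual GEO-TASO retrieval.

    # Toy illustration of an L-BFGS-B retrieval: fit model parameters to a
    # "measured" spectrum. The forward model is a trivial stand-in for UNL-VRTM.
    import numpy as np
    from scipy.optimize import minimize

    wavelengths = np.linspace(0.4, 0.7, 50)          # microns

    def forward(params, wl):
        """Stand-in forward model: reflectance = surface + AOD * exp(-wl / scale)."""
        aod, surface = params
        return surface + aod * np.exp(-wl / 0.3)

    # synthetic "measurement" with noise
    true = np.array([0.25, 0.05])
    measured = forward(true, wavelengths) + np.random.normal(0, 0.002, wavelengths.size)

    def cost(params):
        resid = forward(params, wavelengths) - measured
        return 0.5 * np.dot(resid, resid)

    result = minimize(cost, x0=[0.1, 0.1], method="L-BFGS-B",
                      bounds=[(0.0, 3.0), (0.0, 1.0)])
    print(result.x)   # retrieved AOD and surface reflectance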

  19. An optical water type framework for selecting and blending retrievals from bio-optical algorithms in lakes and coastal waters

    PubMed Central

    Moore, Timothy S.; Dowell, Mark D.; Bradt, Shane; Verdu, Antonio Ruiz

    2014-01-01

    Bio-optical models are based on relationships between the spectral remote sensing reflectance and optical properties of in-water constituents. The wavelength range where this information can be exploited changes depending on the water characteristics. In low chlorophyll-a waters, the blue/green region of the spectrum is more sensitive to changes in chlorophyll-a concentration, whereas the red/NIR region becomes more important in turbid and/or eutrophic waters. In this work we present an approach to manage the shift from blue/green ratios to red/NIR-based chlorophyll-a algorithms for optically complex waters. Based on a combined in situ data set of coastal and inland waters, measures of overall algorithm uncertainty were roughly equal for two chlorophyll-a algorithms—the standard NASA OC4 algorithm based on blue/green bands and a MERIS 3-band algorithm based on red/NIR bands—with RMS error of 0.416 and 0.437 for each in log chlorophyll-a units, respectively. However, it is clear that each algorithm performs better at different chlorophyll-a ranges. When a blending approach is used based on an optical water type classification, the overall RMS error was reduced to 0.320. Bias and relative error were also reduced when evaluating the blended chlorophyll-a product compared to either of the single algorithm products. As a demonstration for ocean color applications, the algorithm blending approach was applied to MERIS imagery over Lake Erie. We also examined the use of this approach in several coastal marine environments, and examined the long-term frequency of the OWTs to MODIS-Aqua imagery over Lake Erie. PMID:24839311
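
    A minimal sketch of the blending idea: per-pixel chlorophyll-a is a weighted average of two algorithm outputs, with weights given by optical-water-type memberships. Both retrieval functions and the membership values are placeholders; the calibrated OC4 and MERIS 3-band coefficients are not reproduced here.

    # Weighted blending of two chlorophyll-a retrievals by water-type membership.
    import numpy as np

    def chl_blue_green(rrs):      # placeholder for an OC4-style blue/green algorithm
        return 10 ** (0.3 - 2.5 * np.log10(rrs["blue"] / rrs["green"]))

    def chl_red_nir(rrs):         # placeholder for a red/NIR 3-band style algorithm
        return 60.0 * (rrs["nir"] / rrs["red"]) - 10.0

    def blended_chl(rrs, membership):
        """membership: dict of weights for the 'clear' and 'turbid' water types."""
        w_clear, w_turbid = membership["clear"], membership["turbid"]
        total = w_clear + w_turbid
        return (w_clear * chl_blue_green(rrs) + w_turbid * chl_red_nir(rrs)) / total

    pixel = {"blue": 0.004, "green": 0.003, "red": 0.002, "nir": 0.0006}
    print(blended_chl(pixel, {"clear": 0.7, "turbid": 0.3}))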

  20. Creating a nursing strategic planning framework based on evidence.

    PubMed

    Shoemaker, Lorie K; Fischer, Brenda

    2011-03-01

    This article describes an evidence-informed strategic planning process and framework used by a Magnet-recognized public health system in California. This article includes (1) an overview of the organization and its strategic planning process, (2) the structure created within nursing for collaborative strategic planning and decision making, (3) the strategic planning framework developed based on the organization's balanced scorecard domains and the new Magnet model, and (4) the process undertaken to develop the nursing strategic priorities. Outcomes associated with the structure, process, and key initiatives are discussed throughout the article.

  1. Creating a nursing strategic planning framework based on evidence.

    PubMed

    Shoemaker, Lorie K; Fischer, Brenda

    2011-03-01

    This article describes an evidence-informed strategic planning process and framework used by a Magnet-recognized public health system in California. This article includes (1) an overview of the organization and its strategic planning process, (2) the structure created within nursing for collaborative strategic planning and decision making, (3) the strategic planning framework developed based on the organization's balanced scorecard domains and the new Magnet model, and (4) the process undertaken to develop the nursing strategic priorities. Outcomes associated with the structure, process, and key initiatives are discussed throughout the article. PMID:21320657

  2. Development of a rule-based algorithm for rice cultivation mapping using Landsat 8 time series

    NASA Astrophysics Data System (ADS)

    Karydas, Christos G.; Toukiloglou, Pericles; Minakou, Chara; Gitas, Ioannis Z.

    2015-06-01

    In the framework of the ERMES project (FP7 66983), an algorithm for mapping rice cultivation extents using medium-high resolution satellite data was developed. ERMES (An Earth obseRvation Model based RicE information Service) aims to develop a prototype downstream service for rice yield modelling based on a combination of Earth Observation and in situ data. The algorithm was designed as a set of rules applied on a time series of Landsat 8 images, acquired throughout the rice cultivation season of 2014 from the plain of Thessaloniki, Greece. The rules rely on the use of spectral indices, such as the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Water Index (NDWI), and the Normalized Seasonal Wetness Index (NSWI), extracted from the Landsat 8 dataset. The algorithm is subdivided into two phases: a) a hard classification phase, resulting in a binary map (rice/no-rice), where pixels are judged according to their performance in all the images of the time series, with index thresholds defined through a trial-and-error approach; b) a soft classification phase, resulting in a fuzzy map, by assigning scores to the pixels which passed (as `rice') the first phase. Finally, a user-defined threshold on the fuzzy score discriminates rice from no-rice pixels in the output map. The algorithm was tested in a subset of the Thessaloniki plain against a set of selected field data. The results indicated an overall accuracy of the algorithm higher than 97%. The algorithm was also applied in a study area in Spain (Valencia), and a preliminary test indicated a similar performance, i.e. about 98%. Currently, the algorithm is being modified so as to map rice extents early in the cultivation season (by the end of June), with a view to contributing more substantially to the rice yield prediction service of ERMES. Both algorithm modes (late and early) are planned to be tested in additional Mediterranean study areas in Greece, Italy, and Spain.
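
    The sketch below illustrates the two-phase rule idea under assumed thresholds: per-date spectral indices are thresholded (hard phase), and the per-pixel score averaged over the time series gives a fuzzy map that a user-defined threshold turns into a rice/no-rice map. The thresholds and the flooding/growth rules are illustrative, not the calibrated ERMES values.

    # Two-phase rule-based rice mapping sketch over a Landsat-like time series.
    import numpy as np

    def ndvi(nir, red):
        return (nir - red) / (nir + red + 1e-9)

    def ndwi(green, nir):
        return (green - nir) / (green + nir + 1e-9)

    def rice_fuzzy_map(time_series, flood_date_idx, fuzzy_threshold=0.6):
        """time_series: list of dicts of 2-D arrays with keys 'green', 'red', 'nir',
        one per acquisition date. Early-season flooding plus later high NDVI is
        scored; the mean score across dates is the fuzzy rice map."""
        scores = []
        for t, bands in enumerate(time_series):
            if t <= flood_date_idx:
                # transplanting period: flooded paddies -> high NDWI, low NDVI
                rule = (ndwi(bands["green"], bands["nir"]) > 0.1) & \
                       (ndvi(bands["nir"], bands["red"]) < 0.3)
            else:
                # growing period: dense canopy -> high NDVI
                rule = ndvi(bands["nir"], bands["red"]) > 0.5
            scores.append(rule.astype(float))
        fuzzy = np.mean(scores, axis=0)
        return fuzzy, fuzzy >= fuzzy_threshold   # fuzzy map and binary rice map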

  3. Image enhancement algorithm based on improved lateral inhibition network

    NASA Astrophysics Data System (ADS)

    Yun, Haijiao; Wu, Zhiyong; Wang, Guanjun; Tong, Gang; Yang, Hua

    2016-05-01

    Images captured by cameras often contain substantial noise and blurred details. To solve this problem, we propose a novel image enhancement algorithm combined with an improved lateral inhibition network. Firstly, we build a mathematical model of a lateral inhibition network in conjunction with biological visual perception; this model helps to realize enhanced contrast and improved edge definition in images. Secondly, we propose that the adaptive lateral inhibition coefficient follow an exponential distribution, making the model more flexible and more universal. Finally, we add median filtering and a compensation measure factor to build a framework with high-pass filtering functionality, thus eliminating image noise, improving edge contrast, and addressing problems with blurred image edges. Our experimental results show that our algorithm is able to eliminate noise and blurring and to enhance the details of visible and infrared images.
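
    A rough sketch of contrast enhancement with a lateral-inhibition-style center-surround kernel followed by median filtering; the adaptive, exponentially distributed inhibition coefficients and compensation factor of the paper are replaced by a fixed kernel and fixed strength for brevity.

    # Center-surround (lateral inhibition) enhancement followed by median filtering.
    import numpy as np
    from scipy.ndimage import convolve, median_filter

    def lateral_inhibition_enhance(img, strength=0.125):
        img = img.astype(float)
        # center excites, the 8-neighborhood inhibits (weights sum to 1)
        kernel = -strength * np.ones((3, 3))
        kernel[1, 1] = 1 + 8 * strength
        enhanced = convolve(img, kernel, mode="reflect")
        denoised = median_filter(enhanced, size=3)       # suppress impulse noise
        return np.clip(denoised, 0, 255).astype(np.uint8)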

  4. A curriculum framework based on archetypal phenomena and technologies

    NASA Astrophysics Data System (ADS)

    Zubrowski, Bernie

    2002-07-01

    The current crop of published curriculum materials for elementary and middle school makes various claims about their relevancy to the student and their alignment with national standards. Although it may appear that they show improvement in their pedagogical practices and use of recent research, it is argued that they still are founded on questionable assumptions about student learning. The general approach of these curriculum programs is examined in relationship to issues such as the context of learning, the relationship between domain general and domain specific knowledge, and the essential role that aesthetics and personal frameworks play in conceptual change. An alternative paradigm of curriculum development is presented based on the theory of situated cognition. This approach starts with context rather than concept, gives greater weight to students' interpretative frameworks, and provides for a more holistic development. A grade 1-8 framework is presented having archetypal phenomena and technologies as the focus of investigations.

  5. Texture Analysis of Chaotic Coupled Map Lattices Based Image Encryption Algorithm

    NASA Astrophysics Data System (ADS)

    Khan, Majid; Shah, Tariq; Batool, Syeda Iram

    2014-09-01

    Recently, data security has become essential in many settings, such as web communication, multimedia systems, medical imaging, telemedicine and military communication. However, many existing schemes face several issues, for example a lack of robustness and security. In this letter, after examining the fundamental properties of chaotic trigonometric maps and coupled map lattices, we present a chaos-based image encryption algorithm built on coupled map lattices. The proposed mechanism reduces the periodic effect of the ergodic dynamical systems in chaos-based image encryption. To assess the security of the image encrypted by this scheme, the correlation of two adjacent pixels and texture features were analyzed. The algorithm aims to minimize the problems that arise in image encryption.

  6. Flexible Phrase Based Query Handling Algorithms.

    ERIC Educational Resources Information Center

    Wilbur, W. John; Kim, Won

    2001-01-01

    Flexibility in query handling can be important if one types a search engine query that is misspelled, contains terms not in the database, or requires knowledge of a controlled vocabulary. Presents results of experiments that suggest the optimal form of similarity functions that are applicable to the task of phrase based retrieval to find either…

  7. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    NASA Astrophysics Data System (ADS)

    Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan

    2010-12-01

    This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. Entropy criterion is applied to select the effective Gabor feature which is a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. The RDA combines strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It solves the small sample size and ill-posed problems suffered from QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.

  8. A robust DCT domain watermarking algorithm based on chaos system

    NASA Astrophysics Data System (ADS)

    Xiao, Mingsong; Wan, Xiaoxia; Gan, Chaohua; Du, Bo

    2009-10-01

    Digital watermarking is a technique that can be used for protecting and enforcing the intellectual property (IP) rights of digital media, such as digital images involved in copyright transactions. There are many kinds of digital watermarking algorithms. However, existing digital watermarking algorithms are not robust enough against geometric attacks and signal processing operations. In this paper, a robust watermarking algorithm based on a chaos array in the DCT (discrete cosine transform) domain for gray images is proposed. The algorithm provides a one-to-one method to extract the watermark. Experimental results have proved that this new method has high accuracy and is highly robust against geometric attacks, signal processing operations and geometric transformations. Furthermore, anyone without knowledge of the key cannot find the position of the embedded watermark. As a result, the watermark is not easy to modify, so this scheme is secure and robust.
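
    A hedged sketch of additive watermark embedding in a mid-frequency DCT coefficient of 8x8 blocks. The chaos-array-driven embedding positions of the proposed algorithm are replaced here by one fixed coefficient, so this only illustrates the general DCT-domain embedding step.

    # Additive embedding of watermark bits into one DCT coefficient per 8x8 block.
    import numpy as np
    from scipy.fft import dctn, idctn

    def embed_watermark(image, bits, alpha=8.0, coeff=(3, 2)):
        """image: 2-D uint8 array with sides divisible by 8; bits: iterable of 0/1."""
        out = image.astype(float).copy()
        bit_iter = iter(bits)
        for r in range(0, image.shape[0], 8):
            for c in range(0, image.shape[1], 8):
                try:
                    bit = next(bit_iter)
                except StopIteration:
                    return np.clip(out, 0, 255).astype(np.uint8)
                block = dctn(out[r:r + 8, c:c + 8], norm="ortho")
                block[coeff] += alpha if bit else -alpha   # additive embedding
                out[r:r + 8, c:c + 8] = idctn(block, norm="ortho")
        return np.clip(out, 0, 255).astype(np.uint8)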

  9. Validation of a Bayesian-based isotope identification algorithm

    NASA Astrophysics Data System (ADS)

    Sullivan, C. J.; Stinnett, J.

    2015-06-01

    Handheld radio-isotope identifiers (RIIDs) are widely used in Homeland Security and other nuclear safety applications. However, most commercially available devices have serious problems in their ability to correctly identify isotopes. It has been reported that this flaw is largely due to the overly simplistic identification algorithms on-board the RIIDs. This paper reports on the experimental validation of a new isotope identification algorithm using a Bayesian statistics approach to identify the source while allowing for calibration drift and unknown shielding. We present here results on further testing of this algorithm and a study on the observed variation in the gamma peak energies and areas from a wavelet-based peak identification algorithm.

  10. Quantum Image Encryption Algorithm Based on Image Correlation Decomposition

    NASA Astrophysics Data System (ADS)

    Hua, Tianxiang; Chen, Jiamin; Pei, Dongju; Zhang, Wenquan; Zhou, Nanrun

    2015-02-01

    A novel quantum gray-level image encryption and decryption algorithm based on image correlation decomposition is proposed. The correlation among image pixels is established by utilizing the superposition and measurement principle of quantum states, and the whole quantum image is divided into a series of sub-images. These sub-images are stored in a complete binary tree array constructed previously and are then randomly processed by one of the operations of quantum random-phase gate, quantum revolving gate and Hadamard transform. The encrypted image can be obtained by superimposing the resulting sub-images with the superposition principle of quantum states. For the encryption algorithm, the keys are the parameters of the random-phase gate, the rotation angle, the binary sequence and the orthonormal basis states. The security and the computational complexity of the proposed algorithm are analyzed. The proposed encryption algorithm can resist brute-force attack due to its very large key space and has lower computational complexity than its classical counterparts.

  11. Decomposition-based multiobjective evolutionary algorithm for community detection in dynamic social networks.

    PubMed

    Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng

    2014-01-01

    Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms.

  12. A compressed sensing based reconstruction algorithm for synchrotron source propagation-based X-ray phase contrast computed tomography

    NASA Astrophysics Data System (ADS)

    Melli, Seyed Ali; Wahid, Khan A.; Babyn, Paul; Montgomery, James; Snead, Elisabeth; El-Gayed, Ali; Pettitt, Murray; Wolkowski, Bailey; Wesolowski, Michal

    2016-01-01

    Synchrotron source propagation-based X-ray phase contrast computed tomography is increasingly used in pre-clinical imaging. However, it typically requires a large number of projections, and subsequently a large radiation dose, to produce high quality images. To improve the applicability of this imaging technique, reconstruction algorithms that can reduce the radiation dose and acquisition time without degrading image quality are needed. The proposed research focused on using a novel combination of Douglas-Rachford splitting and randomized Kaczmarz algorithms to solve large-scale total variation based optimization in a compressed sensing framework to reconstruct 2D images from a reduced number of projections. Visual assessment and quantitative performance evaluations of a synthetic abdomen phantom and a real reconstructed image of an ex-vivo slice of canine prostate tissue demonstrate that the proposed algorithm is competitive in the reconstruction process compared with other well-known algorithms. An additional potential benefit of reducing the number of projections would be a reduction in the time available for motion artifacts to occur if the sample moves during image acquisition. Use of this reconstruction algorithm to reduce the required number of projections in synchrotron source propagation-based X-ray phase contrast computed tomography is an effective form of dose reduction that may pave the way for imaging of in-vivo samples.
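
    The sketch below shows only the randomized Kaczmarz building block for a linear system Ax = b, with rows sampled proportionally to their squared norms; the Douglas-Rachford splitting and the total-variation term of the full reconstruction algorithm are omitted.

    # Randomized Kaczmarz iteration: project x onto the hyperplane of a random row.
    import numpy as np

    def randomized_kaczmarz(A, b, iters=5000, seed=0):
        rng = np.random.default_rng(seed)
        row_norms = np.sum(A ** 2, axis=1)
        probs = row_norms / row_norms.sum()
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            i = rng.choice(A.shape[0], p=probs)
            x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        A = rng.normal(size=(200, 50))
        x_true = rng.normal(size=50)
        x_hat = randomized_kaczmarz(A, A @ x_true)
        print(np.linalg.norm(x_hat - x_true))   # should be close to zero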

  13. Adaptive NUC algorithm for uncooled IRFPA based on neural networks

    NASA Astrophysics Data System (ADS)

    Liu, Ziji; Jiang, Yadong; Lv, Jian; Zhu, Hongbin

    2010-10-01

    With developments in uncooled infrared focal plane array (UFPA) technology, many new advanced uncooled infrared sensors are used in defensive weapons, scientific research, industry and commercial applications. A major difference in imaging techniques between an IRFPA imaging system and a visible CCD camera is that an IRFPA needs non-uniformity correction and dead pixel compensation, usually called infrared image pre-processing. Two-point or multi-point correction algorithms based on calibration are commonly used to correct the non-uniformity of IRFPAs, but they are limited by pixel nonlinearity and instability. Therefore, adaptive non-uniformity correction techniques have been developed. Two of these adaptive non-uniformity correction algorithms are most widely discussed: one is based on a temporal high-pass filter, and the other is based on a neural network. In this paper, a new NUC algorithm based on improved neural networks is introduced, and the improved neural network is compared with other adaptive correction techniques from several angles, such as correction effect, calculation efficiency and hardware implementation. According to the results and discussion, it can be concluded that the adaptive algorithm offers improved performance compared to traditional calibration-based techniques. This new algorithm not only provides better sensitivity, but also increases the system dynamic range. As sensor applications expand, it will be very useful in future infrared imaging systems.
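
    A minimal sketch of a neural-network style adaptive NUC in the spirit of LMS-based approaches: per-pixel gain and offset are updated so the corrected value tracks a local spatial average over the frame sequence. The learning rate, neighborhood size and update rule are illustrative assumptions, not the improved network proposed in the paper.

    # Adaptive per-pixel gain/offset correction driven by a spatial low-pass target.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def adaptive_nuc(frames, lr=1e-3):
        """frames: list of 2-D raw IRFPA frames (float arrays). Returns corrected frames."""
        gain = np.ones_like(frames[0], dtype=float)
        offset = np.zeros_like(frames[0], dtype=float)
        corrected_seq = []
        for raw in frames:
            corrected = gain * raw + offset
            desired = uniform_filter(corrected, size=3)   # spatial low-pass target
            error = corrected - desired
            gain -= lr * error * raw                      # LMS updates per pixel
            offset -= lr * error
            corrected_seq.append(corrected)
        return corrected_seq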

  14. A superpixel-based framework for automatic tumor segmentation on breast DCE-MRI

    NASA Astrophysics Data System (ADS)

    Yu, Ning; Wu, Jia; Weinstein, Susan P.; Gaonkar, Bilwaj; Keller, Brad M.; Ashraf, Ahmed B.; Jiang, YunQing; Davatzikos, Christos; Conant, Emily F.; Kontos, Despina

    2015-03-01

    Accurate and efficient automated tumor segmentation in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is highly desirable for computer-aided tumor diagnosis. We propose a novel automatic segmentation framework which incorporates mean-shift smoothing, superpixel-wise classification, pixel-wise graph-cuts partitioning, and morphological refinement. A set of 15 breast DCE-MR images, obtained from the American College of Radiology Imaging Network (ACRIN) 6657 I-SPY trial, were manually segmented to generate tumor masks (as ground truth) and breast masks (as regions of interest). Four state-of-the-art segmentation approaches based on diverse models were also utilized for comparison. Based on five standard evaluation metrics for segmentation, the proposed framework consistently outperformed all other approaches. The performance of the proposed framework was: 1) 0.83 for Dice similarity coefficient, 2) 0.96 for pixel-wise accuracy, 3) 0.72 for VOC score, 4) 0.79 mm for mean absolute difference, and 5) 11.71 mm for maximum Hausdorff distance, which surpassed the second best method (i.e., adaptive geodesic transformation), a semi-automatic algorithm depending on precise initialization. Our results suggest promising potential applications of our segmentation framework in assisting analysis of breast carcinomas.

  15. Can We Do Better in Unimodal Biometric Systems? A Rank-Based Score Normalization Framework.

    PubMed

    Moutafis, Panagiotis; Kakadiaris, Ioannis A

    2015-12-01

    Biometric systems use score normalization techniques and fusion rules to improve recognition performance. The large amount of research on score fusion for multimodal systems raises an important question: can we utilize the available information from unimodal systems more effectively? In this paper, we present a rank-based score normalization framework that addresses this problem. Specifically, our approach consists of three algorithms: 1) partition the matching scores into subsets and normalize each subset independently; 2) utilize the gallery versus gallery matching scores matrix (i.e., gallery-based information); and 3) dynamically augment the gallery in an online fashion. We invoke the theory of stochastic dominance along with results of prior research to demonstrate when and why our approach yields increased performance. Our framework: 1) can be used in conjunction with any score normalization technique and any fusion rule; 2) is amenable to parallel programming; and 3) is suitable for both verification and open-set identification. To assess the performance of our framework, we use the UHDB11 and FRGC v2 face datasets. Specifically, the statistical hypothesis tests performed illustrate that the performance of our framework improves as we increase the number of samples per subject. Furthermore, the corresponding statistical analysis demonstrates that increased separation between match and nonmatch scores is obtained for each probe. Besides the benefits and limitations highlighted by our experimental evaluation, results under optimal and pessimal conditions are also presented to offer better insights.

  16. A Turn-Projected State-Based Conflict Resolution Algorithm

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Lewis, Timothy A.

    2013-01-01

    State-based conflict detection and resolution (CD&R) algorithms detect conflicts and resolve them on the basis of current state information without the use of additional intent information from aircraft flight plans. Therefore, the prediction of the trajectory of aircraft is based solely upon the position and velocity vectors of the traffic aircraft. Most CD&R algorithms project the traffic state using only the current state vectors. However, the past state vectors can be used to make a better prediction of the future trajectory of the traffic aircraft. This paper explores the idea of using past state vectors to detect traffic turns and resolve conflicts caused by these turns using a non-linear projection of the traffic state. A new algorithm based on this idea is presented and validated using a fast-time simulator developed for this study.
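
    A rough sketch of the turn-projection idea: two consecutive traffic state vectors give a turn-rate estimate, and the traffic trajectory is projected along a constant-rate turn rather than a straight line before checking separation against the ownship's linear projection. The separation threshold, look-ahead time and sampling step are illustrative, not the paper's CD&R logic.

    # Turn-aware conflict detection from current and previous traffic state vectors.
    import numpy as np

    def detect_conflict(own_pos, own_vel, traf_pos, traf_vel, traf_vel_prev,
                        dt_state=1.0, lookahead=120.0, sep=5 * 1852.0, step=1.0):
        """2-D positions in metres, velocities in m/s; dt_state is the time between
        the traffic aircraft's previous and current state reports."""
        # turn rate from the heading change between consecutive traffic states
        h_now = np.arctan2(traf_vel[1], traf_vel[0])
        h_prev = np.arctan2(traf_vel_prev[1], traf_vel_prev[0])
        omega = np.arctan2(np.sin(h_now - h_prev), np.cos(h_now - h_prev)) / dt_state
        speed = np.hypot(traf_vel[0], traf_vel[1])
        for t in np.arange(0.0, lookahead, step):
            own_future = np.asarray(own_pos) + t * np.asarray(own_vel)
            if abs(omega) > 1e-6:
                heading = h_now + omega * t                 # constant-turn projection
                dx = speed / omega * (np.sin(heading) - np.sin(h_now))
                dy = -speed / omega * (np.cos(heading) - np.cos(h_now))
            else:
                dx, dy = t * traf_vel[0], t * traf_vel[1]   # straight-line fallback
            traf_future = np.asarray(traf_pos) + np.array([dx, dy])
            if np.linalg.norm(own_future - traf_future) < sep:
                return True, t                              # conflict and time to loss
        return False, None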

  17. An algorithmic method for reducing conductance-based neuron models.

    PubMed

    Sorensen, Michael E; DeWeerth, Stephen P

    2006-08-01

    Although conductance-based neural models provide a realistic depiction of neuronal activity, their complexity often limits effective implementation and analysis. Neuronal model reduction methods provide a means to reduce model complexity while retaining the original model's realism and relevance. Such methods, however, typically include ad hoc components that require that the modeler already be intimately familiar with the dynamics of the original model. We present an automated, algorithmic method for reducing conductance-based neuron models using the method of equivalent potentials (Kepler et al., Biol Cybern 66(5):381-387, 1992). Our results demonstrate that this algorithm is able to reduce the complexity of the original model with minimal performance loss, and requires minimal prior knowledge of the model's dynamics. Furthermore, by utilizing a cost function based on the contribution of each state variable to the total conductance of the model, the performance of the algorithm can be significantly improved.

  18. Study on Increasing the Accuracy of Classification Based on Ant Colony algorithm

    NASA Astrophysics Data System (ADS)

    Yu, M.; Chen, D.-W.; Dai, C.-Y.; Li, Z.-L.

    2013-05-01

    The application of GIS advances the ability to analyze remote sensing image data. The classification and extraction of remote sensing images is the primary information source for GIS in LUCC applications, and increasing classification accuracy is therefore an important topic in remote sensing research. Adding features and developing new classification methods are two ways to improve classification accuracy. The ant colony algorithm, an agent-based method from the field of nature-inspired computation, offers a uniform intelligent computation mode, and its application to remote sensing image classification is a new, preliminary use of swarm intelligence. Studying the applicability of the ant colony algorithm with multiple features and exploring its advantages and performance are therefore of considerable significance. The study takes the outskirts of Fuzhou, an area with complicated land use in Fujian Province, as the study area. A multi-source database was built, integrating spectral information (TM1-5, TM7, NDVI, NDBI), topographic characteristics (DEM, Slope, Aspect) and textural information (Mean, Variance, Homogeneity, Contrast, Dissimilarity, Entropy, Second Moment, Correlation). Classification rules based on the different characteristics are discovered from the samples through the ant colony algorithm, and a classification test is performed based on these rules. At the same time, we compare the accuracies with traditional maximum likelihood, C4.5 and rough set classifications. The study showed that the accuracy of classification based on the ant colony algorithm is higher than that of the other methods. In addition, the near-term land use and cover changes in Fuzhou are studied and mapped using remote sensing technology based on the ant colony algorithm.

  19. A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme

    NASA Astrophysics Data System (ADS)

    Ghoman, Satyajit S.

    The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3 DOE), in the context of aircraft wing optimization. M3 DOE provides the user with the capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) a combination of a series of structural and aerodynamic analyses. The modularity of M3 DOE allows it to be part of other inclusive optimization frameworks. M3 DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3 DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3 DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of the candidate population is updated iteratively using the evolutionary algorithm technique of
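
    A minimal sketch of the POD step used for design-space order reduction: the dominant modes of an ensemble of candidate design vectors ("snapshots") are extracted with an SVD, and each candidate is then represented by a few modal coefficients. The coupling with the evolutionary search is not shown.

    # POD basis extraction from a snapshot matrix via SVD, with an energy criterion.
    import numpy as np

    def pod_basis(snapshots, energy=0.99):
        """snapshots: (n_dof, n_candidates) matrix, one design vector per column."""
        mean = snapshots.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
        cum = np.cumsum(s ** 2) / np.sum(s ** 2)
        k = int(np.searchsorted(cum, energy)) + 1     # modes kept for given energy
        return mean, U[:, :k]

    def project(design, mean, basis):
        return basis.T @ (design - mean.ravel())      # reduced coordinates

    def reconstruct(coeffs, mean, basis):
        return mean.ravel() + basis @ coeffs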

  20. Study of a Quantum Framework for Search Based Software Engineering

    NASA Astrophysics Data System (ADS)

    Wu, Nan; Song, Fangmin; Li, Xiangdong

    2013-06-01

    Search Based Software Engineering (SBSE) is widely used in software engineering to identify optimal solutions. The traditional methods and algorithms used in SBSE are criticized for their high costs. In this paper, we propose a rapid modified-Grover quantum searching method for SBSE; theoretically, this method can be applied to any search-space structure and any type of searching problem.

  1. Texture orientation-based algorithm for detecting infrared maritime targets.

    PubMed

    Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai

    2015-05-20

    Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutters, such as ocean waves, clouds or sea fog, usually have high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. This algorithm first extracts suspected targets by analyzing the intersubband correlation between horizontal and vertical wavelet subbands of the original IMI on the first scale. Then the self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI is combined to remove false alarms further. Experiments show that compared with traditional algorithms, this algorithm can suppress background clutter much better and realize better single-frame detection for infrared maritime targets. Besides, in order to guarantee accurate target extraction further, the pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of this proposed strategy is backed strongly by experimental data acquired under different environmental conditions.

  2. A new root-based direction-finding algorithm

    NASA Astrophysics Data System (ADS)

    Wasylkiwskyj, Wasyl; Kopriva, Ivica; Doroslovački, Miloš; Zaghloul, Amir I.

    2007-04-01

    Polynomial rooting direction-finding (DF) algorithms are a computationally efficient alternative to search-based DF algorithms and are particularly suitable for uniform linear arrays of physically identical elements provided that mutual interaction among the array elements can be either neglected or compensated for. A popular algorithm in such situations is Root Multiple Signal Classification (Root MUSIC (RM)), wherein the estimation of the directions of arrivals (DOA) requires the computation of the roots of a (2N - 2) -order polynomial, where N represents number of array elements. The DOA are estimated from the L pairs of roots closest to the unit circle, where L represents number of sources. In this paper we derive a modified root polynomial (MRP) algorithm requiring the calculation of only L roots in order to estimate the L DOA. We evaluate the performance of the MRP algorithm numerically and show that it is as accurate as the RM algorithm but with a significantly simpler algebraic structure. In order to demonstrate that the theoretically predicted performance can be achieved in an experimental setting, a decoupled array is emulated in hardware using phase shifters. The results are in excellent agreement with theory.

  3. Auto-focus algorithm based on statistical blur estimation

    NASA Astrophysics Data System (ADS)

    Kulkarni, Prajit

    2013-03-01

    Conventional auto-focus techniques in movable-lens camera systems use a measure of image sharpness to determine the lens position that brings the scene into focus. This paper presents a novel wavelet-domain approach to determine the position of best focus. In contrast to current techniques, the proposed algorithm estimates the level of blur in the captured image at each lens position. Image blur is quantified by fitting a Generalized Gaussian Density (GGD) curve to a high-pass version of the image using second-order statistics. The system then moves the lens to the position that yields the least measure of image blur. The algorithm overcomes shortcomings of sharpness-based approaches, namely, the application of large band-pass filters, sensitivity to image noise and need for calibration under different imaging conditions. Since noise has no effect on the proposed blur metric, the algorithm works with a short filter and is devoid of parameter tuning. Furthermore, the algorithm could be simplified to use a single high-pass filter to reduce complexity. These advantages, along with the optimization presented in the paper, make the proposed algorithm very attractive for hardware implementation on cell phones. Experiments prove that the algorithm performs well in the presence of noise as well as resolution and data scaling.

  4. Toward an Ontology-Based Framework for Clinical Research Databases

    PubMed Central

    Kong, Y. Megan; Dahlke, Carl; Xiang, Qun; Qian, Yu; Karp, David; Scheuermann, Richard H.

    2010-01-01

    Clinical research includes a wide range of study designs from focused observational studies to complex interventional studies with multiple study arms, treatment and assessment events, and specimen procurement procedures. Participant characteristics from case report forms need to be integrated with molecular characteristics from mechanistic experiments on procured specimens. In order to capture and manage this diverse array of data, we have developed the Ontology-Based eXtensible conceptual model (OBX) to serve as a framework for clinical research data in the Immunology Database and Analysis Portal (ImmPort). By designing OBX around the logical structure of the Basic Formal Ontology (BFO) and the Ontology for Biomedical Investigations (OBI), we have found that a relatively simple conceptual model can represent the relatively complex domain of clinical research. In addition, the common framework provided by BFO makes it straightforward to develop data dictionaries based on reference and application ontologies from the OBO Foundry. PMID:20460173

  5. LAHS: A novel harmony search algorithm based on learning automata

    NASA Astrophysics Data System (ADS)

    Enayatifar, Rasul; Yousefi, Moslem; Abdullah, Abdul Hanan; Darus, Amer Nordin

    2013-12-01

    This study presents a learning automata-based harmony search (LAHS) for unconstrained optimization of continuous problems. The harmony search (HS) algorithm performance strongly depends on the fine tuning of its parameters, including the harmony consideration rate (HMCR), pitch adjustment rate (PAR) and bandwidth (bw). Inspired by the spur-in-time responses in the musical improvisation process, learning capabilities are employed in the HS to select these parameters based on spontaneous reactions. An extensive numerical investigation is conducted on several well-known test functions, and the results are compared with the HS algorithm and its prominent variants, including the improved harmony search (IHS), global-best harmony search (GHS) and self-adaptive global-best harmony search (SGHS). The numerical results indicate that the LAHS is more efficient in finding optimum solutions and outperforms the existing HS algorithm variants.
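
    For context, the sketch below is a baseline harmony search for a continuous minimization problem; in LAHS the parameters HMCR, PAR and bw would be selected adaptively by learning automata, whereas here they are fixed constants for illustration.

    # Baseline harmony search with fixed HMCR, PAR and bandwidth parameters.
    import random

    def harmony_search(obj, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=5000):
        dim = len(bounds)
        memory = [[random.uniform(*bounds[d]) for d in range(dim)] for _ in range(hms)]
        costs = [obj(h) for h in memory]
        for _ in range(iters):
            new = []
            for d in range(dim):
                if random.random() < hmcr:
                    value = random.choice(memory)[d]          # memory consideration
                    if random.random() < par:                 # pitch adjustment
                        value += bw * random.uniform(-1, 1) * (bounds[d][1] - bounds[d][0])
                else:
                    value = random.uniform(*bounds[d])        # random selection
                new.append(min(max(value, bounds[d][0]), bounds[d][1]))
            worst = max(range(hms), key=costs.__getitem__)
            c = obj(new)
            if c < costs[worst]:                              # replace the worst harmony
                memory[worst], costs[worst] = new, c
        best = min(range(hms), key=costs.__getitem__)
        return memory[best], costs[best]

    if __name__ == "__main__":
        sphere = lambda x: sum(v * v for v in x)
        print(harmony_search(sphere, [(-5, 5)] * 5))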

  6. A fast image encryption algorithm based on chaotic map

    NASA Astrophysics Data System (ADS)

    Liu, Wenhao; Sun, Kehui; Zhu, Congxu

    2016-09-01

    Derived from Sine map and iterative chaotic map with infinite collapse (ICMIC), a new two-dimensional Sine ICMIC modulation map (2D-SIMM) is proposed based on a close-loop modulation coupling (CMC) model, and its chaotic performance is analyzed by means of phase diagram, Lyapunov exponent spectrum and complexity. It shows that this map has good ergodicity, hyperchaotic behavior, large maximum Lyapunov exponent and high complexity. Based on this map, a fast image encryption algorithm is proposed. In this algorithm, the confusion and diffusion processes are combined for one stage. Chaotic shift transform (CST) is proposed to efficiently change the image pixel positions, and the row and column substitutions are applied to scramble the pixel values simultaneously. The simulation and analysis results show that this algorithm has high security, low time complexity, and the abilities of resisting statistical analysis, differential, brute-force, known-plaintext and chosen-plaintext attacks.

  7. A Graph Based Backtracking Algorithm for Solving General CSPs

    NASA Technical Reports Server (NTRS)

    Pang, Wanlin; Goodwin, Scott D.

    2003-01-01

    Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph based backtracking algorithm called omega-CDBT, which shares merits and overcomes the weaknesses of both decomposition and search approaches.
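
    The omega-CDBT algorithm itself is not specified in the abstract; the plain chronological backtracking solver below is shown only to make the CSP setting concrete.

    # Plain chronological backtracking over binary constraints.
    def backtrack(variables, domains, constraints, assignment=None):
        """constraints: dict mapping a pair (x, y) to a predicate on their values."""
        assignment = assignment or {}
        if len(assignment) == len(variables):
            return assignment
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            consistent = all(
                pred(assignment[a] if a != var else value,
                     assignment[b] if b != var else value)
                for (a, b), pred in constraints.items()
                if var in (a, b) and all(x in assignment or x == var for x in (a, b)))
            if consistent:
                assignment[var] = value
                result = backtrack(variables, domains, constraints, assignment)
                if result is not None:
                    return result
                del assignment[var]
        return None

    if __name__ == "__main__":
        # tiny graph-colouring instance: adjacent nodes get different colours
        neq = lambda a, b: a != b
        print(backtrack(["A", "B", "C"],
                        {v: ["red", "green"] for v in "ABC"},
                        {("A", "B"): neq, ("B", "C"): neq}))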

  8. Realization and optimization of AES algorithm on the TMS320DM6446 based on DaVinci technology

    NASA Astrophysics Data System (ADS)

    Jia, Wen-bin; Xiao, Fu-hai

    2013-03-01

    The application of the AES algorithm in a digital cinema system prevents video data from being illegally stolen or maliciously tampered with, and thus addresses its security problems. At the same time, in order to meet the requirements of real-time, live and transparent encryption of high-speed audio and video data streams in the information security field, and based on an in-depth analysis of the AES algorithm principle, this paper proposes specific realization methods for the AES algorithm in a digital video system, together with optimization solutions, on the TMS320DM6446 hardware platform with the DaVinci software framework. The test results show that digital movies encrypted with AES-128 cannot be played normally, which ensures the security of the digital movies. A comparison of the performance of the AES-128 algorithm before and after optimization verifies the correctness and validity of the improved algorithm.

  9. Developing a Comprehensive, Empirically Based Research Framework for Classroom-Based Assessment

    ERIC Educational Resources Information Center

    Hill, Kathryn; McNamara, Tim

    2012-01-01

    This paper presents a comprehensive framework for researching classroom-based assessment (CBA) processes, and is based on a detailed empirical study of two Australian school classrooms where students aged 11 to 13 were studying Indonesian as a foreign language. The framework can be considered innovative in several respects. It goes beyond the…

  10. An extended framework for adaptive playback-based video summarization

    NASA Astrophysics Data System (ADS)

    Peker, Kadir A.; Divakaran, Ajay

    2003-11-01

    In our previous work, we described an adaptive fast playback framework for video summarization where we changed the playback rate using the motion activity feature so as to maintain a constant "pace." This method provides an effective way of skimming through video, especially when the motion is not too complex and the background is mostly still, such as in surveillance video. In this paper, we present an extended summarization framework that, in addition to motion activity, uses semantic cues such as face or skin color appearance, speech and music detection, or other domain dependent semantically significant events to control the playback rate. The semantic features we use are computationally inexpensive and can be computed in compressed domain, yet are robust, reliable, and have a wide range of applicability across different content types. The presented framework also allows for adaptive summaries based on preference, for example, to include more dramatic vs. action elements, or vice versa. The user can switch at any time between the skimming and the normal playback modes. The continuity of the video is preserved, and complete omission of segments that may be important to the user is avoided by using adaptive fast playback instead of skipping over long segments. The rule-set and the input parameters can be further modified to fit a certain domain or application. Our framework can be used by itself, or as a subsequent presentation stage for a summary produced by any other summarization technique that relies on generating a sub-set of the content.
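
    A toy sketch of the rate-control rule described above: playback speed rises when motion activity is low and is pulled back toward normal speed when semantic cues (faces, speech, music) are detected. The weights and caps are illustrative assumptions, not the framework's rule-set.

    # Playback-rate rule combining motion activity with semantic cues.
    def playback_rate(motion_activity, face=False, speech=False, music=False,
                      target_pace=1.0, max_rate=8.0):
        """motion_activity: nonnegative scalar for the current segment."""
        rate = target_pace / max(motion_activity, target_pace / max_rate)
        if face or speech:
            rate = min(rate, 2.0)      # slow down for semantically important segments
        elif music:
            rate = min(rate, 4.0)
        return max(1.0, rate)

    # e.g. a quiet surveillance segment vs. a segment with detected speech
    print(playback_rate(0.2), playback_rate(0.2, speech=True))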

  11. A General Framework for Multiphysics Modeling Based on Numerical Averaging

    NASA Astrophysics Data System (ADS)

    Lunati, I.; Tomin, P.

    2014-12-01

    In recent years, multiphysics (hybrid) modeling has attracted increasing attention as a tool to bridge the gap between pore-scale processes and a continuum description at the meter-scale (laboratory scale). This approach is particularly appealing for complex nonlinear processes, such as multiphase flow, reactive transport, density-driven instabilities, and geomechanical coupling. We present a general framework that can be applied to all these classes of problems. The method is based on ideas from the Multiscale Finite-Volume method (MsFV), which was originally developed for Darcy-scale applications. Recently, we have reformulated MsFV starting with a local-global splitting, which allows us to retain the original degree of coupling for the local problems and to use spatiotemporal adaptive strategies. The new framework is based on the simple idea that different characteristic temporal scales are inherited from different spatial scales, and the global and the local problems are solved with different temporal resolutions. The global (coarse-scale) problem is constructed based on a numerical volume-averaging paradigm and a continuum (Darcy-scale) description is obtained by introducing additional simplifications (e.g., by assuming that pressure is the only independent variable at the coarse scale, we recover an extended Darcy's law). We demonstrate that it is possible to adaptively and dynamically couple the Darcy-scale and the pore-scale descriptions of multiphase flow in a single conceptual and computational framework. Pore-scale problems are solved only in the active front region where fluid distribution changes with time. In the rest of the domain, only a coarse description is employed. This framework can be applied to other important problems such as reactive transport and crack propagation. As it is based on a numerical upscaling paradigm, our method can be used to explore the limits of validity of macroscopic models and to illuminate the meaning of

  12. Arcade: A Web-Java Based Framework for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.

  13. A universal framework for non-deteriorating time-domain numerical algorithms in Maxwell's electrodynamics

    NASA Astrophysics Data System (ADS)

    Fedoseyev, A.; Kansa, E. J.; Tsynkov, S.; Petropavlovskiy, S.; Osintcev, M.; Shumlak, U.; Henshaw, W. D.

    2016-10-01

    We present the implementation of the Lacuna method, which removes a key difficulty that currently hampers many existing methods for computing unsteady electromagnetic waves on unbounded regions: numerical accuracy and/or stability may deteriorate over long times due to the treatment of artificial outer boundaries. We describe a universal algorithm and software that correct this problem by employing the Huygens' principle and lacunae of Maxwell's equations. The algorithm provides a temporally uniform guaranteed error bound (no deterioration at all), and the software will enable robust electromagnetic simulations in a high-performance computing environment. The methodology applies to any geometry, any scheme, and any boundary condition. It eliminates the long-time deterioration regardless of its origin and how it manifests itself. In retrospect, the lacunae method was first proposed by V. Ryaben'kii and subsequently developed by S. Tsynkov. We have completed development of an innovative numerical methodology for high-fidelity, error-controlled modeling of a broad variety of electromagnetic and other wave phenomena. Proof-of-concept 3D computations have been conducted that convincingly demonstrate the feasibility and efficiency of the proposed approach. Our algorithms are being implemented as robust commercial software tools in a standalone module to be combined with existing numerical schemes in several widely used computational electromagnetic codes.

  14. A fast quantum mechanics based contour extraction algorithm

    NASA Astrophysics Data System (ADS)

    Lan, Tian; Sun, Yangguang; Ding, Mingyue

    2009-02-01

    A fast algorithm was proposed to decrease the computational cost of the contour extraction approach based on quantum mechanics. The contour extraction approach based on quantum mechanics is a novel method recently proposed by us, which is presented at the same conference in another paper of ours titled "a statistical approach to contour extraction based on quantum mechanics". In our approach, contour extraction is modeled as the locus of a moving particle described by quantum mechanics, which is obtained as the most probable locus of the particle simulated over a large number of iterations. In quantum mechanics, the probability that a particle appears at a point is equivalent to the square amplitude of the wave function. Furthermore, the expression of the wave function can be derived from digital images, making the probability of the locus of a particle available. We employed the Markov Chain Monte Carlo (MCMC) method to estimate the square amplitude of the wave function. Finally, our fast quantum mechanics based contour extraction algorithm (referred to as our fast algorithm hereafter) was evaluated on a number of different images, including synthetic and medical images. It was demonstrated that our fast algorithm achieves significant improvements in accuracy and robustness compared with well-known state-of-the-art contour extraction techniques, and a dramatic reduction of time complexity compared to the statistical approach to contour extraction based on quantum mechanics.

  15. NIC-based Reduction Algorithms for Large-scale Clusters

    SciTech Connect

    Petrini, F; Moody, A T; Fernandez, J; Frachtenberg, E; Panda, D K

    2004-07-30

    Efficient algorithms for reduction operations across a group of processes are crucial for good performance in many large-scale, parallel scientific applications. While previous algorithms limit processing to the host CPU, we utilize the programmable processors and local memory available on modern cluster network interface cards (NICs) to explore a new dimension in the design of reduction algorithms. In this paper, we present the benefits and challenges, design issues and solutions, analytical models, and experimental evaluations of a family of NIC-based reduction algorithms. Performance and scalability evaluations were conducted on the ASCI Linux Cluster (ALC), a 960-node, 1920-processor machine at Lawrence Livermore National Laboratory, which uses the Quadrics QsNet interconnect. We find NIC-based reductions on modern interconnects to be more efficient than host-based implementations in both scalability and consistency. In particular, at large-scale--1812 processes--NIC-based reductions of small integer and floating-point arrays provided respective speedups of 121% and 39% over the host-based, production-level MPI implementation.

  16. Evaluation of machine learning algorithms for treatment outcome prediction in patients with epilepsy based on structural connectome data.

    PubMed

    Munsell, Brent C; Wee, Chong-Yaw; Keller, Simon S; Weber, Bernd; Elger, Christian; da Silva, Laura Angelica Tomaz; Nesland, Travis; Styner, Martin; Shen, Dinggang; Bonilha, Leonardo

    2015-09-01

    The objective of this study is to evaluate machine learning algorithms aimed at predicting surgical treatment outcomes in groups of patients with temporal lobe epilepsy (TLE) using only the structural brain connectome. Specifically, the brain connectome is reconstructed using white matter fiber tracts from presurgical diffusion tensor imaging. To achieve our objective, a two-stage connectome-based prediction framework is developed that gradually selects a small number of abnormal network connections that contribute to the surgical treatment outcome, and in each stage a linear kernel operation is used to further improve the accuracy of the learned classifier. Using a 10-fold cross validation strategy, the first stage in the connectome-based framework is able to separate patients with TLE from normal controls with 80% accuracy, and the second stage in the connectome-based framework is able to correctly predict the surgical treatment outcome of patients with TLE with 70% accuracy. Compared to existing state-of-the-art methods that use VBM data, the proposed two-stage connectome-based prediction framework is a suitable alternative with comparable prediction performance. Our results additionally show that machine learning algorithms that exclusively use structural connectome data can predict treatment outcomes in epilepsy with similar accuracy compared with "expert-based" clinical decision. In summary, using the unprecedented information provided in the brain connectome, machine learning algorithms may uncover pathological changes in brain network organization and improve outcome forecasting in the context of epilepsy.
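
    A generic scikit-learn sketch in the spirit of the described framework: select a small number of discriminative connectome edges, train a linear-kernel classifier, and evaluate with 10-fold cross-validation on synthetic data. This stands in for, and does not reproduce, the paper's exact two-stage method.

    # Feature selection on connectome edges followed by a linear SVM, 10-fold CV.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 4005))          # e.g. upper triangle of a 90x90 connectome
    y = rng.integers(0, 2, size=80)          # outcome labels (synthetic here)

    pipeline = make_pipeline(
        StandardScaler(),
        SelectKBest(f_classif, k=50),        # keep a small set of candidate connections
        SVC(kernel="linear"))
    scores = cross_val_score(pipeline, X, y, cv=10)
    print(scores.mean())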

  17. A Bayesian least squares support vector machines based framework for fault diagnosis and failure prognosis

    NASA Astrophysics Data System (ADS)

    Khawaja, Taimoor Saleem

    A high-belief low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear non-Gaussian systems. The methodology assumes the availability of real-time process measurements, definition of a set of fault indicators and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm, set within a Bayesian Inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVM machines are founded on the principle of Structural Risk Minimization (SRM) which tends to find a good trade-off between low empirical risk and small capacity. The key features in SVM are the use of non-linear kernels, the absence of local minima, the sparseness of the solution and the capacity control obtained by optimizing the margin. The Bayesian Inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis. Additional levels of inference provide the much coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel Anomaly Detector is suggested based on the LS-SVM machines. The proposed scheme uses only baseline data to construct a 1-class LS-SVM machine which, when presented with online data is able to distinguish between normal behavior

  18. A motion detection-based framework for improving image quality of CCTV security systems.

    PubMed

    Chiu, Shih-Hsuan; Lu, Chuan-Pin; Wen, Che-Yen

    2006-09-01

    Closed-circuit television (CCTV) security systems have been widely used in banks, convenience stores, and other facilities. They are useful to deter crime and depict criminal activity. However, CCTV cameras that provide an overview of a monitored region can be useful for criminal investigation but sometimes can also be used for object identification (e.g., vehicle numbers, persons, etc.). In this paper, we propose a framework for improving the image quality of CCTV security systems. This framework is based upon motion detection technology. There are two cameras in the framework: one camera (camera A) is fixed focus with a zoom lens for moving-object detection, and the other one (camera B) is variable focus with an auto-zoom lens to capture higher resolution images of the objects of interest. When camera A detects a moving object in the monitored area, camera B, driven by an auto-zoom focus control algorithm, will take a higher resolution image of the object of interest. Experimental results show that the proposed framework can improve the likelihood that images obtained from stationary unattended CCTV cameras are sufficient to enable law enforcement officials to identify suspects and other objects of interest.

  19. Measuring Disorientation Based on the Needleman-Wunsch Algorithm

    ERIC Educational Resources Information Center

    Güyer, Tolga; Atasoy, Bilal; Somyürek, Sibel

    2015-01-01

    This study offers a new method to measure navigation disorientation in web-based systems, which are a powerful learning medium for distance and open education. The Needleman-Wunsch algorithm is used to measure disorientation in a more precise manner. The process combines theoretical and applied knowledge from two previously distinct research areas,…
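
    As a concrete reference, a minimal Needleman-Wunsch global alignment between a learner's visited-page sequence and an ideal navigation path is sketched below. Turning the alignment score into a disorientation index by normalizing against the ideal path's self-alignment is a hypothetical illustration, not necessarily the scoring used in the study.

      def needleman_wunsch(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
          """Global alignment score between two page-visit sequences."""
          n, m = len(seq_a), len(seq_b)
          score = [[0] * (m + 1) for _ in range(n + 1)]
          for i in range(1, n + 1):
              score[i][0] = i * gap
          for j in range(1, m + 1):
              score[0][j] = j * gap
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  diag = score[i - 1][j - 1] + (match if seq_a[i - 1] == seq_b[j - 1] else mismatch)
                  score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
          return score[n][m]

      # a low score relative to the ideal path's self-alignment suggests disorientation
      ideal   = ["home", "unit1", "quiz1", "unit2"]
      visited = ["home", "unit1", "home", "unit1", "quiz1", "unit2"]
      disorientation = 1 - needleman_wunsch(visited, ideal) / needleman_wunsch(ideal, ideal)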

  20. SPRITE: Sparsity-based super-resolution algorithm

    NASA Astrophysics Data System (ADS)

    Ngolè Mboula, F. M.; Starck, J.-L.; Ronayette, S.; Okumura, K.; Amiaux, J.

    2015-06-01

    SPRITE (Sparse Recovery of InstrumenTal rEsponse) computes a well-resolved compact source image from several undersampled and noisy observations. The algorithm is based on sparse regularization; adding a sparse penalty in the recovery leads to far better accuracy in terms of ellipticity error, especially at low S/N.

  1. Matrix-based, finite-difference algorithms for computational acoustics

    NASA Technical Reports Server (NTRS)

    Davis, Sanford

    1990-01-01

    A compact numerical algorithm is introduced for simulating multidimensional acoustic waves. The algorithm is expressed in terms of a set of matrix coefficients on a three-point spatial grid that approximates the acoustic wave equation with a discretization error of O(h^5). The method is based on tracking a local phase variable and its implementation suggests a convenient coordinate splitting along with natural intermediate boundary conditions. Results are presented for oblique plane waves and compared with other procedures. Preliminary computations of acoustic diffraction are also considered.

  2. Multiple Lookup Table-Based AES Encryption Algorithm Implementation

    NASA Astrophysics Data System (ADS)

    Gong, Jin; Liu, Wenyi; Zhang, Huixin

    A new AES (Advanced Encryption Standard) encryption algorithm implementation is proposed in this paper. It is based on five lookup tables, which are generated from the S-box (the substitution table in AES). The obvious advantages are reduced code size, improved implementation efficiency, and help for new learners in understanding the AES encryption algorithm and the GF(2^8) multiplication that is necessary to correctly implement AES [1]. The method can be applied on processors with a word length of 32 bits or above, on FPGAs, and elsewhere, and can correspondingly be implemented in VHDL, Verilog, VB and other languages.
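
    As a small illustration of the arithmetic such lookup tables encode, the sketch below computes GF(2^8) multiplication with the AES reduction polynomial and precomputes the multiply-by-2 and multiply-by-3 tables used in MixColumns. It is a generic sketch, not the paper's five-table construction.

      def xtime(a):
          """Multiply by x (i.e. by 2) in GF(2^8) with the AES polynomial 0x11B."""
          a <<= 1
          return (a ^ 0x1B) & 0xFF if a & 0x100 else a

      def gf_mul(a, b):
          """Generic GF(2^8) multiplication by shift-and-add over xtime."""
          result = 0
          while b:
              if b & 1:
                  result ^= a
              a, b = xtime(a), b >> 1
          return result

      # Precomputed tables for the MixColumns constants 2 and 3, so the hot
      # path needs only table lookups and XORs instead of bit-level arithmetic.
      MUL2 = [gf_mul(x, 2) for x in range(256)]
      MUL3 = [gf_mul(x, 3) for x in range(256)]

      assert gf_mul(0x57, 0x13) == 0xFE   # worked example from the AES specification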

  3. Moving target detection algorithm based on Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Wang, Zhihua; Kai, Du; Zhang, Xiandong

    2013-07-01

    In a real-time video surveillance system, background noise and disturbances have a significant impact on the detection of moving objects. The traditional Gaussian mixture model (GMM) adapts well to a variety of complex backgrounds, but it converges slowly and is vulnerable to illumination changes. This paper proposes an improved moving target detection algorithm based on the Gaussian mixture model that increases the rate at which foreground regions are absorbed into the background model by introducing a changing factor, and that handles sudden illumination changes with a three-frame differencing method. The results show that this algorithm improves the accuracy of moving object detection and has good stability and real-time performance.
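
    A minimal stand-in for the combination described above is sketched below: a single running Gaussian per pixel replaces the full mixture, and a three-frame difference flags sudden changes. Threshold and learning-rate values are illustrative assumptions.

      import numpy as np

      def detect_moving_pixels(frames, alpha=0.05, k=2.5, diff_thresh=15):
          """Toy per-pixel background model plus three-frame differencing.

          A single running Gaussian per pixel stands in for the full mixture;
          pixels far from the background mean OR flagged by the three-frame
          difference are reported as foreground.
          """
          mean = frames[0].astype(float)
          var = np.full_like(mean, 100.0)
          masks = []
          for i in range(2, len(frames)):
              f = frames[i].astype(float)
              gauss_fg = np.abs(f - mean) > k * np.sqrt(var)
              d1 = np.abs(f - frames[i - 1].astype(float)) > diff_thresh
              d2 = np.abs(frames[i - 1].astype(float) - frames[i - 2].astype(float)) > diff_thresh
              fg = gauss_fg | (d1 & d2)               # Gaussian test OR three-frame difference
              upd = ~fg                               # update only background-looking pixels
              mean[upd] += alpha * (f - mean)[upd]
              var[upd] += alpha * ((f - mean) ** 2 - var)[upd]
              masks.append(fg)
          return masks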

  4. A filter-based evolutionary algorithm for constrained optimization.

    SciTech Connect

    Clevenger, Lauren M.; Hart, William Eugene; Ferguson, Lauren Ann

    2004-02-01

    We introduce a filter-based evolutionary algorithm (FEA) for constrained optimization. The filter used by an FEA explicitly imposes the concept of dominance on a partially ordered solution set. We show that the algorithm is provably robust for both linear and nonlinear problems and constraints. FEAs use a finite pattern of mutation offsets, and our analysis is closely related to recent convergence results for pattern search methods. We discuss how properties of this pattern impact the ability of an FEA to converge to a constrained local optimum.

  5. An Optimal Seed Based Compression Algorithm for DNA Sequences

    PubMed Central

    Gopalakrishnan, Gopakumar; Karunakaran, Muralikrishnan

    2016-01-01

    This paper proposes a seed based lossless compression algorithm to compress a DNA sequence which uses a substitution method that is similar to the Lempel-Ziv compression scheme. The proposed method exploits the repetition structures that are inherent in DNA sequences by creating an offline dictionary which contains all such repeats along with the details of mismatches. By ensuring that only promising mismatches are allowed, the method achieves a compression ratio that is on par with or better than the existing lossless DNA sequence compression algorithms. PMID:27555868

  6. A VGI data integration framework based on linked data model

    NASA Astrophysics Data System (ADS)

    Wan, Lin; Ren, Rongrong

    2015-12-01

    This paper addresses geographic data integration and sharing for multiple online VGI data sets. We propose a semantic-enabled framework for a cooperative application environment over online VGI sources to solve a target class of geospatial problems. Based on linked data technologies, one of the core components of the semantic web, we construct relationship links among geographic features distributed across diverse VGI platforms using linked data modeling methods, deploy these semantic-enabled entities on the web, and eventually form an interconnected geographic data network to support geospatial information cooperative application across multiple VGI data sources. The mapping and transformation from VGI sources to the RDF linked data model is presented to guarantee a single data representation model among the different online social geographic data sources. We propose a mixed strategy that combines spatial distance similarity and feature name attribute similarity as the standard for comparing and matching geographic features in different VGI data sets. Our work focuses on applying Markov logic networks to interlink the same entities across different VGI-based linked data sets, and the automatic generation of the co-reference object identification model from geographic linked data is discussed in detail. The result is a large geographic linked data network spanning loosely coupled VGI web sites. Experiments built on the framework and an evaluation of the method show that the framework is reasonable and practicable.

  7. Framework Support For Knowledge-Based Software Development

    NASA Astrophysics Data System (ADS)

    Huseth, Steve

    1988-03-01

    The advent of personal engineering workstations has brought substantial information processing power to the individual programmer. Advanced tools and environment capabilities supporting the software lifecycle are just beginning to become generally available. However, many of these tools are addressing only part of the software development problem by focusing on rapid construction of self-contained programs by a small group of talented engineers. Additional capabilities are required to support the development of large programming systems where a high degree of coordination and communication is required among large numbers of software engineers, hardware engineers, and managers. A major player in realizing these capabilities is the framework supporting the software development environment. In this paper we discuss our research toward a Knowledge-Based Software Assistant (KBSA) framework. We propose the development of an advanced framework containing a distributed knowledge base that can support the data representation needs of tools, provide environmental support for the formalization and control of the software development process, and offer a highly interactive and consistent user interface.

  8. Microarray missing data imputation based on a set theoretic framework and biological knowledge.

    PubMed

    Gan, Xiangchao; Liew, Alan Wee-Chung; Yan, Hong

    2006-01-01

    Gene expressions measured using microarrays usually suffer from the missing value problem. However, in many data analysis methods, a complete data matrix is required. Although existing missing value imputation algorithms have shown good performance in dealing with missing values, they also have their limitations. For example, some algorithms perform well only when strong local correlation exists in the data, while others provide the best estimate when the data is dominated by global structure. In addition, these algorithms do not take into account any biological constraint in their imputation. In this paper, we propose a set theoretic framework based on projection onto convex sets (POCS) for missing data imputation. POCS allows us to incorporate different types of a priori knowledge about missing values into the estimation process. The main idea of POCS is to formulate every piece of prior knowledge into a corresponding convex set and then use a convergence-guaranteed iterative procedure to obtain a solution in the intersection of all these sets. In this work, we design several convex sets, taking into consideration the biological characteristics of the data: the first set mainly exploits the local correlation structure among genes in microarray data, while the second set captures the global correlation structure among arrays. The third set (actually a series of sets) exploits the biological phenomenon of synchronization loss in microarray experiments. In cyclic systems, synchronization loss is a common phenomenon and we construct a series of sets based on this phenomenon for our POCS imputation algorithm. Experiments show that our algorithm can achieve a significant reduction of error compared to the KNNimpute, SVDimpute and LSimpute methods.
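
    A minimal POCS-style iteration is sketched below with two stand-in convex sets: agreement with the observed entries, and membership in a fixed low-dimensional subspace representing global correlation structure. The paper's local-correlation and synchronization-loss sets are not reproduced; names and parameter values are illustrative.

      import numpy as np

      def pocs_impute(data, observed, k=5, n_iter=50):
          """Alternating projections onto two convex sets.

          Set 1: matrices that agree with the observed entries.
          Set 2: matrices whose rows lie in the subspace spanned by the top-k
          right singular vectors of the zero-filled data (a stand-in for a
          global correlation structure set).
          """
          x = np.where(observed, data, 0.0)
          _, _, vt = np.linalg.svd(x, full_matrices=False)
          vk = vt[:k].T                                  # fixed subspace basis
          for _ in range(n_iter):
              x = x @ vk @ vk.T                          # project onto set 2
              x = np.where(observed, data, x)            # project onto set 1
          return x

      # toy usage on a random low-rank matrix with ~20% missing entries
      rng = np.random.default_rng(1)
      truth = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 20))
      observed = rng.random(truth.shape) > 0.2
      imputed = pocs_impute(np.where(observed, truth, 0.0), observed, k=5)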

  9. A Gaussian Process Based Online Change Detection Algorithm for Monitoring Periodic Time Series

    SciTech Connect

    Chandola, Varun; Vatsavai, Raju

    2011-01-01

    Online time series change detection is a critical component of many monitoring systems, such as space and air-borne remote sensing instruments, cardiac monitors, and network traffic profilers, which continuously analyze observations recorded by sensors. Data collected by such sensors typically has a periodic (seasonal) component. Most existing time series change detection methods are not directly applicable to handle such data, either because they are not designed to handle periodic time series or because they cannot operate in an online mode. We propose an online change detection algorithm which can handle periodic time series. The algorithm uses a Gaussian process based non-parametric time series prediction model and monitors the difference between the predictions and actual observations within a statistically principled control chart framework to identify changes. A key challenge in using Gaussian process in an online mode is the need to solve a large system of equations involving the associated covariance matrix which grows with every time step. The proposed algorithm exploits the special structure of the covariance matrix and can analyze a time series of length T in O(T^2) time while maintaining a O(T) memory footprint, compared to O(T^4) time and O(T^2) memory requirement of standard matrix manipulation methods. We experimentally demonstrate the superiority of the proposed algorithm over several existing time series change detection algorithms on a set of synthetic and real time series. Finally, we illustrate the effectiveness of the proposed algorithm for identifying land use land cover changes using Normalized Difference Vegetation Index (NDVI) data collected for an agricultural region in Iowa state, USA. Our algorithm is able to detect different types of changes in a NDVI validation data set (with ~80% accuracy) which occur due to crop type changes as well as disruptive changes (e.g., natural disasters).
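
    The sketch below illustrates the control-chart idea with a naive sliding-window GP predictor: each new point is compared against the z-sigma band of a GP fitted to the preceding window. The paper's periodic kernel and its O(T^2)-time incremental covariance updates are not reproduced; the window size, kernel, and thresholds are assumptions.

      import numpy as np

      def rbf_kernel(a, b, length_scale=5.0, signal_var=1.0):
          d = a.reshape(-1, 1) - b.reshape(1, -1)
          return signal_var * np.exp(-0.5 * (d / length_scale) ** 2)

      def gp_change_detection(y, window=50, noise=0.1, z=3.0):
          """Flag observations outside the z-sigma GP prediction band."""
          flags = []
          for t in range(window, len(y)):
              x_train = np.arange(t - window, t, dtype=float)
              y_train = y[t - window:t]
              k = rbf_kernel(x_train, x_train) + noise * np.eye(window)
              k_star = rbf_kernel(np.array([float(t)]), x_train)
              mu = (k_star @ np.linalg.solve(k, y_train)).item()
              var = (rbf_kernel(np.array([float(t)]), np.array([float(t)]))
                     - k_star @ np.linalg.solve(k, k_star.T)).item() + noise
              flags.append(abs(y[t] - mu) > z * np.sqrt(var))
          return np.array(flags)

      # toy usage: a noisy periodic series with a level shift half-way through
      t = np.arange(300, dtype=float)
      series = np.sin(2 * np.pi * t / 25) + 0.1 * np.random.default_rng(0).normal(size=300)
      series[150:] += 2.0
      change_flags = gp_change_detection(series)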

  10. Evaluation of machine learning algorithms for treatment outcome prediction in patients with epilepsy based on structural connectome data

    PubMed Central

    Munsell, Brent C.; Wee, Chong-Yaw; Keller, Simon S.; Weber, Bernd; Elger, Christian; da Silva, Laura Angelica Tomaz; Nesland, Travis; Styner, Martin; Shen, Dinggang; Bonilha, Leonardo

    2015-01-01

    The objective of this study is to evaluate machine learning algorithms aimed at predicting surgical treatment outcomes in groups of patients with temporal lobe epilepsy (TLE) using only the structural brain connectome. Specifically, the brain connectome is reconstructed using white matter fiber tracts from presurgical diffusion tensor imaging. To achieve our objective, a two-stage connectome-based prediction framework is developed that gradually selects a small number of abnormal network connections that contribute to the surgical treatment outcome, and in each stage a linear kernel operation is used to further improve the accuracy of the learned classifier. Using a 10-fold cross validation strategy, the first stage in the connectome-based framework is able to separate patients with TLE from normal controls with 80% accuracy, and the second stage in the connectome-based framework is able to correctly predict the surgical treatment outcome of patients with TLE with 70% accuracy. Compared to existing state-of-the-art methods that use VBM data, the proposed two-stage connectome-based prediction framework is a suitable alternative with comparable prediction performance. Our results additionally show that machine learning algorithms that exclusively use structural connectome data can predict treatment outcomes in epilepsy with accuracy similar to “expert-based” clinical decisions. In summary, using the unprecedented information provided in the brain connectome, machine learning algorithms may uncover pathological changes in brain network organization and improve outcome forecasting in the context of epilepsy. PMID:26054876

  11. An improved EZBC algorithm based on block bit length

    NASA Astrophysics Data System (ADS)

    Wang, Renlong; Ruan, Shuangchen; Liu, Chengxiang; Wang, Wenda; Zhang, Li

    2011-12-01

    Embedded ZeroBlock Coding and context modeling (EZBC) algorithm has high compression performance. However, it consumes large amounts of memory space because an Amplitude Quadtree of wavelet coefficients and other two link lists would be built during the encoding process. This is one of the big challenges for EZBC to be used in real time or hardware applications. An improved EZBC algorithm based on bit length of coefficients was brought forward in this article. It uses Bit Length Quadtree to complete the coding process and output the context for Arithmetic Coder. It can achieve the same compression performance as EZBC and save more than 75% memory space required in the encoding process. As Bit Length Quadtree can quickly locate the wavelet coefficients and judge their significance, the improved algorithm can dramatically accelerate the encoding speed. These improvements are also beneficial for hardware. PACS: 42.30.Va, 42.30.Wb

  12. Audio Watermarking Algorithm Based on Centroid and Statistical Features

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoming; Yin, Xiong

    Experimental testing shows that the relative relation in the number of samples among the neighboring bins and the audio frequency centroid are two robust features to the Time Scale Modification (TSM) attacks. Accordingly, an audio watermark algorithm based on frequency centroid and histogram is proposed by modifying the frequency coefficients. The audio histogram with equal-sized bins is extracted from a selected frequency coefficient range referred to the audio centroid. The watermarked audio signal is perceptibly similar to the original one. The experimental results show that the algorithm is very robust to resample TSM and a variety of common attacks. Subjective quality evaluation of the algorithm shows that embedded watermark introduces low, inaudible distortion of host audio signal.

  13. Optimization algorithm based characterization scheme for tunable semiconductor lasers.

    PubMed

    Chen, Quanan; Liu, Gonghai; Lu, Qiaoyin; Guo, Weihua

    2016-09-01

    In this paper, an optimization algorithm based characterization scheme for tunable semiconductor lasers is proposed and demonstrated. In the optimization process, the ratio between the power at the desired frequency and the power outside the desired frequency is used as the figure of merit, which approximately represents the side-mode suppression ratio. In practice, we use tunable optical band-pass and band-stop filters to measure the power at the desired frequency and the power outside it separately. With the assistance of optimization algorithms, such as the particle swarm optimization (PSO) algorithm, we can obtain stable operation conditions for tunable lasers at designated frequencies directly and efficiently. PMID:27607701
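
    A generic particle swarm search over laser tuning currents is sketched below; the measurement callback stands in for the band-pass/band-stop power-ratio readout described above and is a placeholder, as are all parameter values.

      import numpy as np

      def pso_maximize(measure_fom, bounds, n_particles=20, n_iter=60, seed=0):
          """Standard global-best PSO over a box-constrained search space.

          `measure_fom(x)` would return the measured figure of merit (the power
          ratio standing in for the side-mode suppression ratio) at operating
          point x; here it is a placeholder for the instrument readout.
          """
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
          v = np.zeros_like(x)
          pbest, pbest_val = x.copy(), np.array([measure_fom(p) for p in x])
          gbest = pbest[np.argmax(pbest_val)].copy()
          w, c1, c2 = 0.7, 1.5, 1.5                      # common PSO weights
          for _ in range(n_iter):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              vals = np.array([measure_fom(p) for p in x])
              improved = vals > pbest_val
              pbest[improved], pbest_val[improved] = x[improved], vals[improved]
              gbest = pbest[np.argmax(pbest_val)].copy()
          return gbest, pbest_val.max()

      # toy stand-in for the measured power ratio over two tuning currents
      fom = lambda c: -((c[0] - 12.0) ** 2 + (c[1] - 30.0) ** 2)
      best_point, best_val = pso_maximize(fom, bounds=[(0, 50), (0, 50)])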

  14. A Multi-Scale Settlement Matching Algorithm Based on ARG

    NASA Astrophysics Data System (ADS)

    Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia

    2016-06-01

    Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on Attributed Relational Graphs (ARGs) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. It then determines candidate sets through merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  15. Uplink Scheduling of Navigation Constellation Based on Immune Genetic Algorithm

    PubMed Central

    Tang, Yinyin; Wang, Yueke; Chen, Jianyun; Li, Xianbin

    2016-01-01

    The uplink of navigation data such as satellite ephemeris is a complex satellite range scheduling problem. Large-scale optimization problems of this kind cannot be tackled with traditional heuristic methods, and the efficiency of the standard genetic algorithm is unsatisfactory. We propose a multi-objective immune genetic algorithm (IGA) for uplink scheduling of a navigation constellation. The method targets the objectives of balancing traffic and maximizing the number of scheduled tasks, based on a satellite-ground index encoding method, individual diversity evaluation and a memory library. Numerical results show that the multi-hierarchical encoding method improves computational efficiency, the fuzzy deviation toleration method speeds up convergence, and the method achieves the balance target with a negligible loss in task number (approximately 2.98%). The proposed algorithm is a general method and thus can be used in similar problems. PMID:27736986

  16. An ellipse detection algorithm based on edge classification

    NASA Astrophysics Data System (ADS)

    Yu, Liu; Chen, Feng; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    In order to enhance the speed and accuracy of ellipse detection, an ellipse detection algorithm based on edge classification is proposed. Redundant edge points are removed by serializing edges into point sequences and applying a distance constraint between edge points. Effective classification is achieved using the angle between edge points as the criterion, which greatly increases the probability that randomly selected edge points fall on the same ellipse. Ellipse fitting accuracy is significantly improved by optimizing the RED algorithm, with the Euclidean distance used to measure the distance from an edge point to the elliptical boundary. Experimental results show that the algorithm detects ellipses well even when edges contain interference or block each other, and that it has higher detection precision and lower time consumption than the RED algorithm.

  17. Sublingual vein extraction algorithm based on hyperspectral tongue imaging technology.

    PubMed

    Li, Qingli; Wang, Yiting; Liu, Hongying; Guan, Yana; Xu, Liang

    2011-04-01

    Among the parts of the human tongue surface, the sublingual vein is one of the most important ones which may have pathological relationship with some diseases. To analyze this information quantitatively, one primitive work is to extract sublingual veins accurately from tongue body. In this paper, a hyperspectral tongue imaging system instead of a digital camera is used to capture sublingual images. A hidden Markov model approach is presented to extract the sublingual veins from the hyperspectral sublingual images. This approach characterizes the spectral correlation and the band-to-band variability using a hidden Markov process, where the model parameters are estimated by the spectra of the pixel vectors forming the observation sequences. The proposed algorithm, the pixel-based sublingual vein segmentation algorithm, and the spectral angle mapper algorithm are tested on a total of 150 scenes of hyperspectral sublingual veins images to evaluate the performance of the new method. The experimental results demonstrate that the proposed algorithm can extract the sublingual veins more accurately than the traditional algorithms and can perform well even in a noisy environment. PMID:21030208

  18. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation

    PubMed Central

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might not be valid anymore. In order to overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, using a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but also can identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records that contains demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it. PMID:26221133

  19. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation.

    PubMed

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might not be valid anymore. In order to overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, using a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but also can identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records that contains demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it. PMID:26221133

  20. A rapid place name locating algorithm based on ontology qualitative retrieval, ranking and recommendation

    NASA Astrophysics Data System (ADS)

    Fan, Hong; Zhu, Anfeng; Zhang, Weixia

    2015-12-01

    In order to meet the need for rapid positioning of 12315 complaints, and aiming at the natural language expressions found in telephone complaints, a semantic retrieval framework is proposed based on natural language parsing and geographical name ontology reasoning. Within it, a search result ranking and recommendation algorithm is proposed that considers both geo-name conceptual similarity and spatial geometric relation similarity. The experiments show that this method can assist the operator in quickly locating 12315 complaints, increasing customer satisfaction for industry and commerce.

  1. Evolutionary algorithm based offline/online path planner for UAV navigation.

    PubMed

    Nikolos, I K; Valavanis, K P; Tsourveloudis, N C; Kostaras, A N

    2003-01-01

    An evolutionary algorithm based framework, a combination of modified breeder genetic algorithms incorporating characteristics of classic genetic algorithms, is utilized to design an offline/online path planner for unmanned aerial vehicle (UAV) autonomous navigation. The path planner calculates a curved path line with desired characteristics in a three-dimensional (3-D) rough terrain environment, represented using B-spline curves, with the coordinates of its control points being the evolutionary algorithm artificial chromosome genes. Given a 3-D rough environment and assuming flight envelope restrictions, two problems are solved: i) UAV navigation using an offline planner in a known environment, and ii) UAV navigation using an online planner in a completely unknown environment. The offline planner produces a single B-spline curve that connects the starting and target points with a predefined initial direction. The online planner, based on the offline one, uses on-board radar readings to gradually produce a smooth 3-D trajectory aimed at reaching a predetermined target in an unknown environment; the produced trajectory consists of smaller B-spline curves smoothly connected with each other. Both planners have been tested under different scenarios, and they have been proven effective in guiding a UAV to its final destination, providing near-optimal curved paths quickly and efficiently.
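
    To make the encoding concrete, the sketch below evaluates a uniform cubic B-spline from a set of 3-D control points, i.e. from what such a planner would treat as one chromosome. The segment-matrix formulation and the toy gene values are illustrative assumptions.

      import numpy as np

      def cubic_bspline_path(control_points, samples_per_segment=20):
          """Evaluate a uniform cubic B-spline through 3-D control points.

          In an evolutionary planner, the rows of `control_points` would be the
          genes of a chromosome; the returned polyline is the candidate flight
          path handed to the fitness function.
          """
          p = np.asarray(control_points, dtype=float)
          # uniform cubic B-spline basis matrix (per-segment form)
          m = np.array([[-1, 3, -3, 1],
                        [ 3, -6, 3, 0],
                        [-3, 0, 3, 0],
                        [ 1, 4, 1, 0]]) / 6.0
          t = np.linspace(0.0, 1.0, samples_per_segment, endpoint=False)
          basis = np.stack([t**3, t**2, t, np.ones_like(t)], axis=1) @ m
          segments = [basis @ p[i:i + 4] for i in range(len(p) - 3)]
          return np.vstack(segments)

      # toy chromosome: 6 control points in (x, y, altitude)
      genes = np.array([[0, 0, 1], [1, 2, 1.5], [2, 1, 2], [3, 3, 2.5],
                        [4, 2, 2], [5, 4, 1.5]])
      path = cubic_bspline_path(genes)        # candidate path to be scored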

  2. Measurement Theory in Deutsch's Algorithm Based on the Truth Values

    NASA Astrophysics Data System (ADS)

    Nagata, Koji; Nakamura, Tadao

    2016-08-01

    We propose a new measurement theory for handling qubits, based on the truth values, i.e., the truth T (1) for true and the falsity F (0) for false. The results of measurement are either 0 or 1. To implement Deutsch's algorithm, we need both observability and controllability of a quantum state. The new measurement theory can satisfy both. In particular, we systematically describe our assertion based on further mathematical analysis using raw data in a thoughtful experiment.

  3. Fractional Fourier domain optical image hiding using phase retrieval algorithm based on iterative nonlinear double random phase encoding.

    PubMed

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2014-09-22

    We present a novel image hiding method based on phase retrieval algorithm under the framework of nonlinear double random phase encoding in fractional Fourier domain. Two phase-only masks (POMs) are efficiently determined by using the phase retrieval algorithm, in which two cascaded phase-truncated fractional Fourier transforms (FrFTs) are involved. No undesired information disclosure, post-processing of the POMs or digital inverse computation appears in our proposed method. In order to achieve the reduction in key transmission, a modified image hiding method based on the modified phase retrieval algorithm and logistic map is further proposed in this paper, in which the fractional orders and the parameters with respect to the logistic map are regarded as encryption keys. Numerical results have demonstrated the feasibility and effectiveness of the proposed algorithms.

  4. Blind Adaptive Interference Suppression Based on Set-Membership Constrained Constant-Modulus Algorithms With Dynamic Bounds

    NASA Astrophysics Data System (ADS)

    de Lamare, Rodrigo C.; Diniz, Paulo S. R.

    2013-03-01

    This work presents blind constrained constant modulus (CCM) adaptive algorithms based on the set-membership filtering (SMF) concept and incorporates dynamic bounds for interference suppression applications. We develop stochastic gradient and recursive least squares type algorithms based on the CCM design criterion in accordance with the specifications of the SMF concept. We also propose a blind framework that includes channel and amplitude estimators that take into account parameter estimation dependency, multiple access interference (MAI) and inter-symbol interference (ISI) to address the important issue of bound specification in multiuser communications. A convergence and tracking analysis of the proposed algorithms is carried out along with the development of analytical expressions to predict their performance. Simulations for a number of scenarios of interest with a DS-CDMA system show that the proposed algorithms outperform previously reported techniques with a smaller number of parameter updates and a reduced risk of overbounding or underbounding.

  5. A model-based framework for the detection of spiculated masses on mammography

    SciTech Connect

    Sampat, Mehul P.; Bovik, Alan C.; Whitman, Gary J.; Markey, Mia K.

    2008-05-15

    The detection of lesions on mammography is a repetitive and fatiguing task. Thus, computer-aided detection systems have been developed to aid radiologists. The detection accuracy of current systems is much higher for clusters of microcalcifications than for spiculated masses. In this article, the authors present a new model-based framework for the detection of spiculated masses. The authors have invented a new class of linear filters, spiculated lesion filters, for the detection of converging lines or spiculations. These filters are highly specific narrowband filters, which are designed to match the expected structures of spiculated masses. As a part of this algorithm, the authors have also invented a novel technique to enhance spicules on mammograms. This entails filtering in the radon domain. They have also developed models to reduce the false positives due to normal linear structures. A key contribution of this work is that the parameters of the detection algorithm are based on measurements of physical properties of spiculated masses. The results of the detection algorithm are presented in the form of free-response receiver operating characteristic curves on images from the Mammographic Image Analysis Society and Digital Database for Screening Mammography databases.

  6. Towards uncertainty quantification and parameter estimation for Earth system models in a component-based modeling framework

    NASA Astrophysics Data System (ADS)

    Peckham, Scott D.; Kelbert, Anna; Hill, Mary C.; Hutton, Eric W. H.

    2016-05-01

    Component-based modeling frameworks make it easier for users to access, configure, couple, run and test numerical models. However, they do not typically provide tools for uncertainty quantification or data-based model verification and calibration. To better address these important issues, modeling frameworks should be integrated with existing, general-purpose toolkits for optimization, parameter estimation and uncertainty quantification. This paper identifies and then examines the key issues that must be addressed in order to make a component-based modeling framework interoperable with general-purpose packages for model analysis. As a motivating example, one of these packages, DAKOTA, is applied to a representative but nontrivial surface process problem of comparing two models for the longitudinal elevation profile of a river to observational data. Results from a new mathematical analysis of the resulting nonlinear least squares problem are given and then compared to results from several different optimization algorithms in DAKOTA.

  7. Applications of the automatic change detection for disaster monitoring by the knowledge-based framework

    NASA Astrophysics Data System (ADS)

    Tadono, T.; Hashimoto, S.; Onosato, M.; Hori, M.

    2012-11-01

    Change detection is a fundamental approach to the utilization of satellite remote sensing imagery, especially in multi-temporal analysis, for example when extracting areas damaged by a natural disaster. Recently, the amount of data obtained by Earth observation satellites has increased significantly owing to the increasing number and types of observing sensors, the enhancement of their spatial resolution, and improvements in their data processing systems. In applications for disaster monitoring, in particular, fast and accurate analysis of broad geographical areas is required to facilitate efficient rescue efforts, so robust automatic image interpretation is expected to be necessary. Several algorithms have been proposed in the field of automatic change detection, but they still lack robustness across multiple purposes, independence from the observing instrument, and accuracy better than manual interpretation. We are developing a framework for automatic image interpretation using ontology-based knowledge representation. This framework permits the description, accumulation, and use of knowledge drawn from image interpretation. Local relationships among certain concepts defined in the ontology are described as knowledge modules and are collected in the knowledge base. The knowledge representation uses a Bayesian network as a tool to describe various types of knowledge in a uniform manner. Knowledge modules are synthesized and used for target-specified inference. The results of applying the framework to two types of disasters, without any modification or tuning, are shown in this paper.

  8. New color-based tracking algorithm for joints of the upper extremities

    NASA Astrophysics Data System (ADS)

    Wu, Xiangping; Chow, Daniel H. K.; Zheng, Xiaoxiang

    2007-11-01

    To track the joints of the upper limb of stroke sufferers for rehabilitation assessment, a new tracking algorithm is proposed in this paper that utilizes a color-based particle filter and a novel strategy for handling occlusions. Objects are represented by their color histogram models and a particle filter is introduced to track the objects within a probabilistic framework. A Kalman filter, acting as a local optimizer, is integrated into the sampling stage of the particle filter; it steers samples to a region with high likelihood, so fewer samples are required. A color clustering method and anatomical constraints are used to deal with the occlusion problem. Compared with the basic particle filtering method, the experimental results show that the new algorithm reduces the number of samples, and hence the computational cost, and better handles complete occlusion over a few frames.
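
    One predict-weight-resample cycle of a color-histogram particle filter is sketched below. The Bhattacharyya-similarity weighting is a common choice assumed here for illustration; the Kalman-guided sampling and the anatomical occlusion handling of the paper are not included, and all parameter values are placeholders.

      import numpy as np

      def color_histogram(patch, bins=8):
          """Normalized joint RGB histogram of an image patch (H x W x 3, uint8)."""
          h, _ = np.histogramdd(patch.reshape(-1, 3), bins=(bins,) * 3,
                                range=((0, 256),) * 3)
          return (h / h.sum()).ravel()

      def particle_filter_step(particles, frame, ref_hist, patch=15, sigma_pos=4.0,
                               sigma_obs=0.1, rng=np.random.default_rng(0)):
          """One predict-weight-resample cycle of a color-based particle filter.

          `particles` holds (x, y) hypotheses for the joint position; weights come
          from the Bhattacharyya similarity between the reference histogram and
          the histogram of the patch around each particle.
          """
          particles = particles + rng.normal(0, sigma_pos, particles.shape)   # predict
          half, weights = patch // 2, np.zeros(len(particles))
          hgt, wid = frame.shape[:2]
          for i, (x, y) in enumerate(particles.astype(int)):
              x = np.clip(x, half, wid - half - 1)
              y = np.clip(y, half, hgt - half - 1)
              hist = color_histogram(frame[y - half:y + half + 1, x - half:x + half + 1])
              bhatta = np.sum(np.sqrt(hist * ref_hist))                       # similarity
              weights[i] = np.exp(-(1.0 - bhatta) / (2 * sigma_obs ** 2))
          weights /= weights.sum()
          estimate = weights @ particles                                      # weighted mean
          idx = rng.choice(len(particles), size=len(particles), p=weights)    # resample
          return particles[idx], estimate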

  9. Implementation of Finite Volume based Navier Stokes Algorithm Within General Purpose Flow Network Code

    NASA Technical Reports Server (NTRS)

    Schallhorn, Paul; Majumdar, Alok

    2012-01-01

    This paper describes a finite volume based numerical algorithm that allows multi-dimensional computation of fluid flow within a system level network flow analysis. There are several thermo-fluid engineering problems where higher fidelity solutions are needed that are not within the capacity of system level codes. The proposed algorithm will allow NASA's Generalized Fluid System Simulation Program (GFSSP) to perform multi-dimensional flow calculation within the framework of GFSSP's typical system level flow network consisting of fluid nodes and branches. The paper presents several classical two-dimensional fluid dynamics problems that have been solved by GFSSP's multi-dimensional flow solver. The numerical solutions are compared with analytical and benchmark solutions for Poiseuille flow, Couette flow, and flow in a driven cavity.

  10. A framework for probabilistic atlas-based organ segmentation

    NASA Astrophysics Data System (ADS)

    Dong, Chunhua; Chen, Yen-Wei; Foruzan, Amir Hossein; Han, Xian-Hua; Tateyama, Tomoko; Wu, Xing

    2016-03-01

    Probabilistic atlas based on human anatomical structure has been widely used for organ segmentation. The challenge is how to register the probabilistic atlas to the patient volume. Additionally, there is the disadvantage that the conventional probabilistic atlas may cause a bias toward the specific patient study due to a single reference. Hence, we propose a template matching framework based on an iterative probabilistic atlas for organ segmentation. Firstly, we find a bounding box for the organ based on human anatomical localization. Then, the probabilistic atlas is used as a template to find the organ in this bounding box by using template matching technology. Comparing our method with conventional and recently developed atlas-based methods, our results show an improvement in the segmentation accuracy for multiple organs (p < 0.00001).

  11. Microwave-based medical diagnosis using particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Modiri, Arezoo

    This dissertation proposes and investigates a novel architecture intended for microwave-based medical diagnosis (MBMD). Furthermore, this investigation proposes novel modifications of particle swarm optimization algorithm for achieving enhanced convergence performance. MBMD has been investigated through a variety of innovative techniques in the literature since the 1990's and has shown significant promise in early detection of some specific health threats. In comparison to the X-ray- and gamma-ray-based diagnostic tools, MBMD does not expose patients to ionizing radiation; and due to the maturity of microwave technology, it lends itself to miniaturization of the supporting systems. This modality has been shown to be effective in detecting breast malignancy, and hence, this study focuses on the same modality. A novel radiator device and detection technique is proposed and investigated in this dissertation. As expected, hardware design and implementation are of paramount importance in such a study, and a good deal of research, analysis, and evaluation has been done in this regard which will be reported in ensuing chapters of this dissertation. It is noteworthy that an important element of any detection system is the algorithm used for extracting signatures. Herein, the strong intrinsic potential of the swarm-intelligence-based algorithms in solving complicated electromagnetic problems is brought to bear. This task is accomplished through addressing both mathematical and electromagnetic problems. These problems are called benchmark problems throughout this dissertation, since they have known answers. After evaluating the performance of the algorithm for the chosen benchmark problems, the algorithm is applied to MBMD tumor detection problem. The chosen benchmark problems have already been tackled by solution techniques other than particle swarm optimization (PSO) algorithm, the results of which can be found in the literature. However, due to the relatively high level

  12. An ORCID based synchronization framework for a national CRIS ecosystem

    PubMed Central

    Mendes Moreira, João; Cunha, Alcino; Macedo, Nuno

    2015-01-01

    PTCRIS (Portuguese Current Research Information System) is a program aiming at the creation and sustained development of a national integrated information ecosystem, to support research management according to the best international standards and practices. This paper reports on the experience of designing and prototyping a synchronization framework for PTCRIS based on ORCID (Open Researcher and Contributor ID). This framework embraces the "input once, re-use often" principle, and will enable a substantial reduction of the research output management burden by allowing automatic information exchange between the various national systems. The design of the framework followed best practices in rigorous software engineering, namely well-established principles in the research field of consistency management, and relied on formal analysis techniques and tools for its validation and verification. The notion of consistency between the services was formally specified and discussed with the stakeholders before the technical aspects on how to preserve said consistency were explored. Formal specification languages and automated verification tools were used to analyze the specifications and generate usage scenarios, useful for validation with the stakeholder and essential to certificate compliant services. PMID:26308833

  13. In silico discovery of metal-organic frameworks for precombustion CO2 capture using a genetic algorithm

    PubMed Central

    Chung, Yongchul G.; Gómez-Gualdrón, Diego A.; Li, Peng; Leperi, Karson T.; Deria, Pravas; Zhang, Hongda; Vermeulen, Nicolaas A.; Stoddart, J. Fraser; You, Fengqi; Hupp, Joseph T.; Farha, Omar K.; Snurr, Randall Q.

    2016-01-01

    Discovery of new adsorbent materials with a high CO2 working capacity could help reduce CO2 emissions from newly commissioned power plants using precombustion carbon capture. High-throughput computational screening efforts can accelerate the discovery of new adsorbents but sometimes require significant computational resources to explore the large space of possible materials. We report the in silico discovery of high-performing adsorbents for precombustion CO2 capture by applying a genetic algorithm to efficiently search a large database of metal-organic frameworks (MOFs) for top candidates. High-performing MOFs identified from the in silico search were synthesized and activated and show a high CO2 working capacity and a high CO2/H2 selectivity. One of the synthesized MOFs shows a higher CO2 working capacity than any MOF reported in the literature under the operating conditions investigated here. PMID:27757420

  14. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
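
    The selection rule can be made concrete with the sketch below, which scores each candidate experiment by the Shannon entropy of the outcome distribution predicted by a set of probable models and keeps the best one. The exhaustive scan replaces the paper's nested-entropy-sampling search, and all function and parameter names are placeholders.

      import numpy as np

      def entropy(p):
          """Shannon entropy of a discrete outcome distribution."""
          p = p[p > 0]
          return -np.sum(p * np.log(p))

      def most_informative_experiment(candidate_experiments, models, predict_outcome,
                                      n_outcomes=16):
          """Pick the experiment whose predicted outcome distribution has maximum entropy.

          `models` is a set of probable model parameter vectors (e.g. posterior
          samples); `predict_outcome(model, experiment)` maps each pair to a
          discrete outcome bin.
          """
          best_exp, best_h = None, -np.inf
          for exp in candidate_experiments:
              outcomes = np.array([predict_outcome(m, exp) for m in models])
              p = np.bincount(outcomes, minlength=n_outcomes) / len(outcomes)
              h = entropy(p)
              if h > best_h:
                  best_exp, best_h = exp, h
          return best_exp, best_h

      # toy usage: choose where to measure a noisy line y = a*x + b
      rng = np.random.default_rng(0)
      models = rng.normal(size=(200, 2))                    # posterior samples of (a, b)
      quantize = lambda m, x: int(np.clip((m[0] * x + m[1]) * 2 + 8, 0, 15))
      x_best, h_best = most_informative_experiment(np.linspace(0, 4, 21), models, quantize)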

  15. Point of Care and Factor Concentrate-Based Coagulation Algorithms

    PubMed Central

    Theusinger, Oliver M.; Stein, Philipp; Levy, Jerrold H.

    2015-01-01

    In the last years it has become evident that the use of blood products should be reduced whenever possible. There is increasing evidence regarding serious adverse events, including higher mortality and morbidity, related to transfusions. The use of point of care (POC) devices integrated in algorithms is one of the important mechanisms to limit blood product exposure. Any type of algorithm, especially the POC-based ones, allows goal-directed transfusions of blood products and even better targeted factor concentrate substitutions. Different types of algorithms in different surgical settings (cardiac surgery, trauma, liver surgery etc.) have been established with growing interest in their use as they offer objective therapy for management and reduction of blood product use. The use of POC devices with evidence-based algorithms is important in the bleeding patient independent of its origin (traumatic vs. surgical). The use of factor concentrates compared to the classical blood products can be cost-saving, beneficial for the patient, and in agreement with the WHO-requested standard of care. The empiric and uncontrolled use of blood products such as fresh frozen plasma, red blood cells, and platelets without POC monitoring should no longer be followed with regard to actual evidence in literature. Furthermore, the use of factor concentrates may provide better outcomes and potential for cost saving. PMID:26019707

  16. Development of antibiotic regimens using graph based evolutionary algorithms.

    PubMed

    Corns, Steven M; Ashlock, Daniel A; Bryden, Kenneth M

    2013-12-01

    This paper examines the use of evolutionary algorithms in the development of antibiotic regimens given to production animals. A model is constructed that combines the lifespan of the animal and the bacteria living in the animal's gastro-intestinal tract from the early finishing stage until the animal reaches market weight. This model is used as the fitness evaluation for a set of graph based evolutionary algorithms to assess the impact of diversity control on the evolving antibiotic regimens. The graph based evolutionary algorithms have two objectives: to find an antibiotic treatment regimen that maintains the weight gain and health benefits of antibiotic use and to reduce the risk of spreading antibiotic resistant bacteria. This study examines different regimens of tylosin phosphate use on bacteria populations divided into Gram positive and Gram negative types, with a focus on Campylobacter spp. Treatment regimens were found that provided decreased antibiotic resistance relative to conventional methods while providing nearly the same benefits as conventional antibiotic regimes. By using a graph to control the information flow in the evolutionary algorithm, a variety of solutions along the Pareto front can be found automatically for this and other multi-objective problems.

  17. Assessing excellence in translational cancer research: a consensus based framework

    PubMed Central

    2013-01-01

    Background It takes several years on average to translate basic research findings into clinical research and eventually deliver patient benefits. An expert-based excellence assessment can help improve this process by: identifying high performing Comprehensive Cancer Centres; best practices in translational cancer research; improving the quality and efficiency of the translational cancer research process. This can help build networks of excellent Centres by aiding focused partnerships. In this paper we report on a consensus building exercise that was undertaken to construct an excellence assessment framework for translational cancer research in Europe. Methods We used mixed methods to reach consensus: a systematic review of existing translational research models critically appraised for suitability in performance assessment of Cancer Centres; a survey among European stakeholders (researchers, clinicians, patient representatives and managers) to score a list of potential excellence criteria, a focus group with selected representatives of survey participants to review and rescore the excellence criteria; an expert group meeting to refine the list; an open validation round with stakeholders and a critical review of the emerging framework by an independent body: a committee formed by the European Academy of Cancer Sciences. Results The resulting excellence assessment framework has 18 criteria categorized in 6 themes. Each criterion has a number of questions/sub-criteria. Stakeholders favoured using qualitative excellence criteria to evaluate the translational research “process” rather than quantitative criteria or judging only the outputs. Examples of criteria include checking if the Centre has mechanisms that can be rated as excellent for: involvement of basic researchers and clinicians in translational research (quality of supervision and incentives provided to clinicians to do a PhD in translational research) and well designed clinical trials based on ground

  18. A constitutive model for magnetostriction based on thermodynamic framework

    NASA Astrophysics Data System (ADS)

    Ho, Kwangsoo

    2016-08-01

    This work presents a general framework for the continuum-based formulation of dissipative materials with magneto-mechanical coupling in the viewpoint of irreversible thermodynamics. The thermodynamically consistent model developed for the magnetic hysteresis is extended to include the magnetostrictive effect. The dissipative and hysteretic response of magnetostrictive materials is captured through the introduction of internal state variables. The evolution rate of magnetostrictive strain as well as magnetization is derived from thermodynamic and dissipative potentials in accordance with the general principles of thermodynamics. It is then demonstrated that the constitutive model is competent to describe the magneto-mechanical behavior by comparing simulation results with the experimental data reported in the literature.

  19. Research and design of web application framework based on AJAX

    NASA Astrophysics Data System (ADS)

    Zhang, Yan-feng; Liu, San-jun

    2013-03-01

    AJAX is an emerging presentation-layer technology for the Web which allows dynamic, fast, and flexible Web applications to be built. AJAX eliminates the dependence on full-page form submissions in the traditional HTTP communication mode, achieving fast and lightweight asynchronous communication. This paper first introduces the working principle of the AJAX technology, and then combines AJAX with Web services technology to design a new AJAX-based Web application framework that achieves asynchronous communication between the browser and back-end services.

  20. Staff line detection and revision algorithm based on subsection projection and correlation algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Yin-xian; Yang, Ding-li

    2013-03-01

    Staff line detection plays a key role in OMR technology and is a precondition for the subsequent segmentation and recognition of music sheets. To handle the horizontal inclination and curvature of staff lines and the vertical inclination of the image, which often occur in music scores, an improved approach based on subsection projection is put forward to detect the original staff lines and revise them, in an effort to implement staff line detection more successfully. Experimental results show the presented algorithm can detect and revise staff lines quickly and effectively.

  1. Multipurpose image watermarking algorithm based on multistage vector quantization.

    PubMed

    Lu, Zhe-Ming; Xu, Dian-Guo; Sun, Sheng-He

    2005-06-01

    The rapid growth of digital multimedia and Internet technologies has made copyright protection, copy protection, and integrity verification three important issues in the digital world. To solve these problems, the digital watermarking technique has been presented and widely researched. Traditional watermarking algorithms are mostly based on discrete transform domains, such as the discrete cosine transform, discrete Fourier transform (DFT), and discrete wavelet transform (DWT). Most of these algorithms are good for only one purpose. Recently, some multipurpose digital watermarking methods have been presented, which can achieve the goal of content authentication and copyright protection simultaneously. However, they are based on DWT or DFT. Lately, several robust watermarking schemes based on vector quantization (VQ) have been presented, but they can only be used for copyright protection. In this paper, we present a novel multipurpose digital image watermarking method based on the multistage vector quantizer structure, which can be applied to image authentication and copyright protection. In the proposed method, the semi-fragile watermark and the robust watermark are embedded in different VQ stages using different techniques, and both of them can be extracted without the original image. Simulation results demonstrate the effectiveness of our algorithm in terms of robustness and fragility. PMID:15971780

  2. A test sheet generating algorithm based on intelligent genetic algorithm and hierarchical planning

    NASA Astrophysics Data System (ADS)

    Gu, Peipei; Niu, Zhendong; Chen, Xuting; Chen, Wei

    2013-03-01

    In recent years, computer-based testing has become an effective method to evaluate students' overall learning progress so that appropriate guiding strategies can be recommended. Research has been done to develop intelligent test assembling systems which can automatically generate test sheets based on given parameters of test items. A good multi-subject test sheet depends not only on the quality of the test items but also on the construction of the sheet. Effective and efficient construction of test sheets according to multiple subjects and criteria is a challenging problem. In this paper, a multi-subject test sheet generation problem is formulated and a test sheet generating approach based on an intelligent genetic algorithm and hierarchical planning (GAHP) is proposed to tackle this problem. The proposed approach utilizes hierarchical planning to simplify the multi-subject testing problem and adopts a genetic algorithm to process the layered criteria, enabling the construction of good test sheets according to multiple test item requirements. Experiments are conducted and the results show that the proposed approach is capable of effectively generating multi-subject test sheets that meet specified requirements and achieve good performance.
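
    As a rough, hypothetical illustration of the genetic-algorithm part of such an approach (the hierarchical planning layer and the paper's actual criteria are not reproduced), the sketch below evolves a binary selection of items from a toy item bank toward per-subject score targets; the item bank, targets, and parameters are invented for the example.

        import random
        random.seed(1)

        # Toy item bank: (subject, difficulty, score); the subjects and targets are invented.
        items = [(random.choice("ABC"), random.random(), random.choice([2, 5, 10]))
                 for _ in range(60)]
        TARGET = {"A": 30, "B": 40, "C": 30}          # desired total score per subject

        def fitness(chromosome):
            totals = {s: 0 for s in TARGET}
            for gene, (subject, _difficulty, score) in zip(chromosome, items):
                if gene:
                    totals[subject] += score
            # smaller deviation from the per-subject targets => higher fitness
            return -sum(abs(totals[s] - TARGET[s]) for s in TARGET)

        def evolve(pop_size=40, generations=100, p_mut=0.02):
            population = [[random.randint(0, 1) for _ in items] for _ in range(pop_size)]
            for _ in range(generations):
                population.sort(key=fitness, reverse=True)
                parents = population[: pop_size // 2]
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, len(items))
                    child = a[:cut] + b[cut:]                         # one-point crossover
                    child = [1 - g if random.random() < p_mut else g for g in child]
                    children.append(child)
                population = parents + children
            return max(population, key=fitness)

        best_sheet = evolve()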

  3. A test sheet generating algorithm based on intelligent genetic algorithm and hierarchical planning

    NASA Astrophysics Data System (ADS)

    Gu, Peipei; Niu, Zhendong; Chen, Xuting; Chen, Wei

    2012-04-01

    In recent years, computer-based testing has become an effective method to evaluate students' overall learning progress so that appropriate guiding strategies can be recommended. Research has been done to develop intelligent test assembling systems which can automatically generate test sheets based on given parameters of test items. A good multi-subject test sheet depends not only on the quality of the test items but also on the construction of the sheet. Effective and efficient construction of test sheets according to multiple subjects and criteria is a challenging problem. In this paper, a multi-subject test sheet generation problem is formulated and a test sheet generating approach based on an intelligent genetic algorithm and hierarchical planning (GAHP) is proposed to tackle this problem. The proposed approach utilizes hierarchical planning to simplify the multi-subject testing problem and adopts a genetic algorithm to process the layered criteria, enabling the construction of good test sheets according to multiple test item requirements. Experiments are conducted and the results show that the proposed approach is capable of effectively generating multi-subject test sheets that meet specified requirements and achieve good performance.

  4. Multi-Objective Community Detection Based on Memetic Algorithm

    PubMed Central

    2015-01-01

    Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single-objective optimization methods have intrinsic drawbacks for identifying multiple significant community structures, some methods formulate community detection as a multi-objective problem and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability but have difficulty in locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining a multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. Firstly, nondominated solutions generated by evolutionary operations and solutions in the dominant population are set as the initial individuals for the local search procedure. Then, a new direction vector, named the pseudonormal vector, is proposed to integrate the two objective functions into a single fitness function. Finally, a network-specific local search strategy based on the label propagation rule is employed to search for locally optimal solutions efficiently. Extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. Firstly, experiments on the influence of the local search procedure demonstrate that it can speed up convergence to better partitions and make the algorithm more stable. Secondly, comparisons with a set of classic community detection methods illustrate that the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks, which is beneficial for analyzing networks at multiple resolution levels. PMID:25932646
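
    A minimal sketch of a label-propagation-style local search, which refines a given partition by letting each node adopt the most frequent label among its neighbours, is shown below; it is a generic stand-in for the paper's network-specific strategy, and the graph and iteration counts are illustrative.

        import random
        from collections import Counter
        import networkx as nx

        def label_propagation_refine(graph, labels, n_iter=10, seed=0):
            """Local-search step: each node adopts the most frequent label among its
            neighbours, refining a partition produced by the evolutionary stage."""
            random.seed(seed)
            labels = dict(labels)
            nodes = list(graph.nodes())
            for _ in range(n_iter):
                random.shuffle(nodes)
                changed = False
                for v in nodes:
                    neighbour_labels = [labels[u] for u in graph.neighbors(v)]
                    if not neighbour_labels:
                        continue
                    best = Counter(neighbour_labels).most_common(1)[0][0]
                    if best != labels[v]:
                        labels[v], changed = best, True
                if not changed:
                    break
            return labels

        G = nx.karate_club_graph()
        initial = {v: v for v in G}                     # start from singleton communities
        communities = label_propagation_refine(G, initial)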

  5. Creating "Intelligent" Ensemble Averages Using a Process-Based Framework

    NASA Astrophysics Data System (ADS)

    Baker, Noel; Taylor, Patrick

    2014-05-01

    The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is used to add value to individual model projections and construct a consensus projection. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, individual models reproduce certain climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequally weighting multi-model ensembles. The intention is to produce improved ("intelligent") unequally weighted ensemble averages. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables (e.g., outgoing longwave radiation and surface temperature). Several climate process metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument in combination with surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing the equal-weighted ensemble average and an ensemble weighted using the process-based metric. Additionally, this study investigates the dependence of the metric weighting scheme on the climate state using a combination of model simulations including a non-forced preindustrial control experiment, historical simulations, and
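
    The weighting step itself is straightforward; the toy sketch below contrasts an equal-weighted ensemble mean with a mean weighted by a per-model skill score, where the projections and skill values are invented for illustration and stand in for metrics computed against CERES-based observations.

        import numpy as np

        # Toy example: model projections and a per-model skill metric (higher = better).
        projections = np.array([2.8, 3.4, 2.1, 3.9, 2.6])   # e.g. regional warming per model
        metric_skill = np.array([0.9, 0.4, 0.7, 0.2, 0.8])  # hypothetical process-based scores

        equal_weight_mean = projections.mean()
        weights = metric_skill / metric_skill.sum()          # normalise skill into weights
        metric_weighted_mean = np.dot(weights, projections)

        print(f"equal-weight mean:    {equal_weight_mean:.2f}")
        print(f"metric-weighted mean: {metric_weighted_mean:.2f}")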

  6. A survey on evolutionary algorithm based hybrid intelligence in bioinformatics.

    PubMed

    Li, Shan; Kang, Liying; Zhao, Xing-Ming

    2014-01-01

    With the rapid advance in genomics, proteomics, metabolomics, and other types of omics technologies during the past decades, a tremendous amount of data related to molecular biology has been produced. It is becoming a big challenge for bioinformaticians to analyze and interpret these data with conventional intelligent techniques, for example, support vector machines. Recently, hybrid intelligent methods, which integrate several standard intelligent approaches, are becoming more and more popular due to their robustness and efficiency. Specifically, hybrid intelligent approaches based on evolutionary algorithms (EAs) are widely used in various fields due to the efficiency and robustness of EAs. In this review, we give an introduction to the applications of hybrid intelligent methods, in particular those based on evolutionary algorithms, in bioinformatics. In particular, we focus on their applications to three common problems that arise in bioinformatics, that is, feature selection, parameter estimation, and reconstruction of biological networks.

  7. The positioning algorithm based on feature variance of billet character

    NASA Astrophysics Data System (ADS)

    Yi, Jiansong; Hong, Hanyu; Shi, Yu; Chen, Hongyang

    2015-12-01

    In the process of recognizing steel billets on the production line, the key problem is how to determine the position of the billet in complex scenes. To solve this problem, this paper presents a positioning algorithm based on the feature variance of billet characters. Using a recursive largest intra-cluster variance method based on multilevel filtering, the billet characters are segmented completely from the complex scenes. Since there are three rows of characters on each steel billet, we determine whether the connected regions that satisfy the feature-variance condition lie on a straight line, and can then accurately locate the steel billet. The experimental results demonstrate that the proposed method is competitive with other methods in positioning the characters and also reduces the running time. The algorithm can provide a better basis for subsequent character recognition.

  8. Voronoi-based localisation algorithm for mobile sensor networks

    NASA Astrophysics Data System (ADS)

    Guan, Zixiao; Zhang, Yongtao; Zhang, Baihai; Dong, Lijing

    2016-11-01

    Localisation is an essential and important part of wireless sensor networks (WSNs), and many applications require location information. So far, fewer researchers have studied mobile sensor networks (MSNs) than static sensor networks (SSNs). However, MSNs are required in more and more areas because they can reduce the number of anchor nodes and improve the localisation accuracy. In this paper, we first propose a range-free Voronoi-based Monte Carlo localisation algorithm (VMCL) for MSNs, which improves the localisation accuracy by making better use of the information that a sensor node gathers. We then propose an optimal region selection strategy of the Voronoi diagram based on VMCL, called ORSS-VMCL, to increase the efficiency and accuracy of VMCL by adapting the size of the Voronoi area during the filtering process. Simulation results show that both algorithms, especially ORSS-VMCL, outperform traditional MCL in accuracy.
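
    A much-simplified Monte Carlo localisation step in this spirit is sketched below: particles are propagated by a bounded random motion and then filtered by requiring that they fall in the Voronoi cell of the anchor the node currently hears best; the anchor layout, motion bound, and filtering rule are illustrative assumptions rather than the paper's exact VMCL or ORSS-VMCL procedure.

        import numpy as np

        rng = np.random.default_rng(0)
        anchors = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 9.0]])   # illustrative anchor positions

        def nearest_anchor(point):
            return int(np.argmin(np.linalg.norm(anchors - point, axis=1)))

        def vmcl_step(particles, heard_anchor, v_max=1.0, n_particles=500):
            """One localisation step: bounded random motion prediction, then keep only
            particles lying in the Voronoi cell of the anchor heard best, and resample."""
            moved = particles + rng.uniform(-v_max, v_max, size=particles.shape)
            kept = np.array([p for p in moved if nearest_anchor(p) == heard_anchor])
            if len(kept) == 0:                  # degenerate case: keep all and retry next step
                kept = moved
            idx = rng.integers(0, len(kept), n_particles)
            return kept[idx]

        particles = rng.uniform(0, 10, size=(500, 2))
        for _ in range(10):
            particles = vmcl_step(particles, heard_anchor=2)
        position_estimate = particles.mean(axis=0)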

  9. Independent component analysis based two-step phase retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Xiaofei; Shou, Junwei; Lu, Xiaoxu; Yin, Zhenxing; Tian, Jindong; Li, Dong; Zhong, Liyun

    2016-10-01

    Based on independent component analysis (ICA), we achieve phase retrieval from two-frame phase-shifting interferograms with unknown phase shifts. First, we remove the background of each interferogram with a Gaussian high-pass filter. Second, the background-removed interferograms are decomposed into a group of mutually independent components by recombining the pixel positions of the interferograms. Third, the phase shifts and the measured phase are retrieved with high accuracy from the ratio of the independent components. Both simulations and experimental results show that, compared with existing two-step phase retrieval algorithms, the proposed ICA-based two-step algorithm improves the accuracy of phase retrieval.
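
    A compact simulation of this pipeline, assuming scikit-learn's FastICA as the separation step and a synthetic pair of interferograms, is sketched below; the recovered components carry ICA's usual sign and ordering ambiguity, and the filter width and test phase are illustrative.

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from sklearn.decomposition import FastICA

        # Two synthetic phase-shifting interferograms with an unknown shift.
        x, y = np.meshgrid(np.linspace(-3, 3, 256), np.linspace(-3, 3, 256))
        phase = 4 * np.exp(-(x ** 2 + y ** 2))          # test phase map
        delta = 1.2                                     # "unknown" phase shift
        I1 = 120 + 100 * np.cos(phase)
        I2 = 120 + 100 * np.cos(phase + delta)

        # Step 1: suppress the low-frequency background with a Gaussian high-pass filter.
        highpass = lambda img: img - gaussian_filter(img, sigma=20)
        D1, D2 = highpass(I1), highpass(I2)

        # Step 2: treat the two frames as mixtures of cos(phase) and sin(phase) and separate them.
        X = np.stack([D1.ravel(), D2.ravel()], axis=1)
        S = FastICA(n_components=2, random_state=0).fit_transform(X)

        # Step 3: the phase follows from the ratio of the two components
        # (up to ICA's usual sign and ordering ambiguity).
        recovered = np.arctan2(S[:, 1], S[:, 0]).reshape(phase.shape)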

  10. Image compression using a novel edge-based coding algorithm

    NASA Astrophysics Data System (ADS)

    Keissarian, Farhad; Daemi, Mohammad F.

    2001-08-01

    In this paper, we present a novel edge-based coding algorithm for image compression. The proposed coding scheme is a predictive version of the original algorithm, which we presented earlier in the literature. In the original version, an image is block coded according to the level of visual activity of individual blocks, following a novel edge-oriented classification stage. Each block is then represented by a set of parameters associated with the pattern appearing inside the block. The use of these parameters at the receiver reduces the cost of reconstruction significantly. In the present study, we extend and improve the performance of the existing technique by exploiting the expected spatial redundancy across neighboring blocks. Satisfactory coded images are obtained at bit rates competitive with other block-based coding techniques.

  11. A Survey on Evolutionary Algorithm Based Hybrid Intelligence in Bioinformatics

    PubMed Central

    Li, Shan; Zhao, Xing-Ming

    2014-01-01

    With the rapid advance in genomics, proteomics, metabolomics, and other types of omics technologies during the past decades, a tremendous amount of data related to molecular biology has been produced. It is becoming a big challenge for bioinformaticians to analyze and interpret these data with conventional intelligent techniques, for example, support vector machines. Recently, hybrid intelligent methods, which integrate several standard intelligent approaches, are becoming more and more popular due to their robustness and efficiency. Specifically, hybrid intelligent approaches based on evolutionary algorithms (EAs) are widely used in various fields due to the efficiency and robustness of EAs. In this review, we give an introduction to the applications of hybrid intelligent methods, in particular those based on evolutionary algorithms, in bioinformatics. In particular, we focus on their applications to three common problems that arise in bioinformatics, that is, feature selection, parameter estimation, and reconstruction of biological networks. PMID:24729969

  12. Improved total variation algorithms for wavelet-based denoising

    NASA Astrophysics Data System (ADS)

    Easley, Glenn R.; Colonna, Flavia

    2007-04-01

    Many improvements of wavelet-based restoration techniques suggest the use of the total variation (TV) algorithm. The concept of combining wavelet and total variation methods seems effective, but the reasons for the success of this combination have so far been poorly understood. We propose a variant of the total variation method that is designed to avoid artifacts such as oil-painting effects and is better suited than standard TV techniques for use with wavelet-based estimates. We then illustrate the effectiveness of this new TV-based method using some of the latest wavelet transforms, such as contourlets and shearlets.
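
    The combination can be prototyped in a few lines: a wavelet-thresholded estimate is computed first and a TV step is then applied to it. The sketch below uses standard Chambolle TV from scikit-image as a stand-in for the paper's modified TV variant, with an illustrative noise level and threshold.

        import numpy as np
        import pywt
        from skimage import data, img_as_float
        from skimage.restoration import denoise_tv_chambolle

        image = img_as_float(data.camera())
        noisy = image + 0.08 * np.random.default_rng(0).standard_normal(image.shape)

        # Wavelet-based estimate: soft-threshold the detail coefficients.
        coeffs = pywt.wavedec2(noisy, "db4", level=3)
        threshold = 0.1
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(c, threshold, mode="soft") for c in level) for level in coeffs[1:]
        ]
        wavelet_estimate = pywt.waverec2(coeffs, "db4")

        # TV step applied to the wavelet estimate (standard Chambolle TV used here as a
        # stand-in for the modified TV variant proposed in the paper).
        combined_estimate = denoise_tv_chambolle(wavelet_estimate, weight=0.05)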

  13. A Model-Based Probabilistic Inversion Framework for Wire Fault Detection Using TDR

    NASA Technical Reports Server (NTRS)

    Schuet, Stefan R.; Timucin, Dogan A.; Wheeler, Kevin R.

    2010-01-01

    Time-domain reflectometry (TDR) is one of the standard methods for diagnosing faults in electrical wiring and interconnect systems, with a long-standing history focused mainly on hardware development of both high-fidelity systems for laboratory use and portable hand-held devices for field deployment. While these devices can easily assess distance to hard faults such as sustained opens or shorts, their ability to assess subtle but important degradation such as chafing remains an open question. This paper presents a unified framework for TDR-based chafing fault detection in lossy coaxial cables by combining an S-parameter based forward modeling approach with a probabilistic (Bayesian) inference algorithm. Results are presented for the estimation of nominal and faulty cable parameters from laboratory data.

  14. Tatool: a Java-based open-source programming framework for psychological studies.

    PubMed

    von Bastian, Claudia C; Locher, André; Ruflin, Michael

    2013-03-01

    Tatool (Training and Testing Tool) was developed to assist researchers with programming training software, experiments, and questionnaires. Tatool is Java-based, and thus is a platform-independent and object-oriented framework. The architecture was designed to meet the requirements of experimental designs and provides a large number of predefined functions that are useful in psychological studies. Tatool comprises features crucial for training studies (e.g., configurable training schedules, adaptive training algorithms, and individual training statistics) and allows for running studies online via Java Web Start. The accompanying "Tatool Online" platform provides the possibility to manage studies and participants' data easily with a Web-based interface. Tatool is published open source under the GNU Lesser General Public License, and is available at www.tatool.ch. PMID:22723043

  15. Tatool: a Java-based open-source programming framework for psychological studies.

    PubMed

    von Bastian, Claudia C; Locher, André; Ruflin, Michael

    2013-03-01

    Tatool (Training and Testing Tool) was developed to assist researchers with programming training software, experiments, and questionnaires. Tatool is Java-based, and thus is a platform-independent and object-oriented framework. The architecture was designed to meet the requirements of experimental designs and provides a large number of predefined functions that are useful in psychological studies. Tatool comprises features crucial for training studies (e.g., configurable training schedules, adaptive training algorithms, and individual training statistics) and allows for running studies online via Java Web Start. The accompanying "Tatool Online" platform provides the possibility to manage studies and participants' data easily with a Web-based interface. Tatool is published open source under the GNU Lesser General Public License, and is available at www.tatool.ch.

  16. A novel pipeline based FPGA implementation of a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Thirer, Nonel

    2014-05-01

    In recent years, more and more bio-inspired computation techniques have been applied to problems for which no analytical solution is available. One efficient algorithm is the Genetic Algorithm (GA), which imitates the biological evolution process and finds a solution through the mechanism of "natural selection", in which the fittest individuals have higher chances of survival. A genetic algorithm is an iterative procedure which operates on a population of individuals called "chromosomes" or "possible solutions" (usually represented by a binary code). The GA applies several operations to the population individuals to produce a new population, as in biological evolution. To provide a high-speed solution, pipelined FPGA hardware implementations are used, with an n-stage pipeline for an n-phase genetic algorithm. FPGA pipeline implementations are constrained by the different execution times of the stages and by the FPGA chip resources. To mitigate these difficulties, we propose a bio-inspired technique that modifies the crossover step by using non-identical twins: two chosen chromosomes (parents) produce two new chromosomes (children) rather than only one as in the classical GA. We analyze the contribution of this method to reducing the execution time in asynchronous and synchronous pipelines, as well as the possibility of a cheaper FPGA implementation by using smaller populations. The full hardware architecture of an FPGA implementation for our target ALTERA development card is presented and analyzed.
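
    In software terms, the non-identical-twins crossover simply returns both complementary offspring from one cut point, as in the sketch below (a Python illustration only; the paper's contribution is the pipelined FPGA realization).

        import random

        def twin_crossover(parent_a, parent_b):
            """One-point crossover that returns two complementary, non-identical children."""
            cut = random.randrange(1, len(parent_a))
            child_1 = parent_a[:cut] + parent_b[cut:]
            child_2 = parent_b[:cut] + parent_a[cut:]
            return child_1, child_2

        parent_a = [random.randint(0, 1) for _ in range(16)]
        parent_b = [random.randint(0, 1) for _ in range(16)]
        child_1, child_2 = twin_crossover(parent_a, parent_b)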

  17. A disturbance based control/structure design algorithm

    NASA Technical Reports Server (NTRS)

    Mclaren, Mark D.; Slater, Gary L.

    1989-01-01

    Some authors take a classical approach to the simultaneous structure/control optimization by attempting to simultaneously minimize the weighted sum of the total mass and a quadratic form, subject to all of the structural and control constraints. Here, the optimization will be based on the dynamic response of a structure to an external unknown stochastic disturbance environment. Such a response to excitation approach is common to both the structural and control design phases, and hence represents a more natural control/structure optimization strategy than relying on artificial and vague control penalties. The design objective is to find the structure and controller of minimum mass such that all the prescribed constraints are satisfied. Two alternative solution algorithms are presented which have been applied to this problem. Each algorithm handles the optimization strategy and the imposition of the nonlinear constraints in a different manner. Two controller methodologies, and their effect on the solution algorithm, will be considered. These are full state feedback and direct output feedback, although the problem formulation is not restricted solely to these forms of controller. In fact, although full state feedback is a popular choice among researchers in this field (for reasons that will become apparent), its practical application is severely limited. The controller/structure interaction is inserted by the imposition of appropriate closed-loop constraints, such as closed-loop output response and control effort constraints. Numerical results will be obtained for a representative flexible structure model to illustrate the effectiveness of the solution algorithms.

  18. An adaptive gyroscope-based algorithm for temporal gait analysis.

    PubMed

    Greene, Barry R; McGrath, Denise; O'Neill, Ross; O'Donovan, Karol J; Burns, Adrian; Caulfield, Brian

    2010-12-01

    Body-worn kinematic sensors have been widely proposed as the optimal solution for portable, low-cost, ambulatory monitoring of gait. This study aims to evaluate an adaptive gyroscope-based algorithm for automated temporal gait analysis using body-worn wireless gyroscopes. Gyroscope data from nine healthy adult subjects performing four walks at four different speeds were compared against data acquired simultaneously using two force plates and an optical motion capture system. Data from a poliomyelitis patient, exhibiting pathological gait and walking with and without the aid of a crutch, were also compared to the force plate. Results show that the mean true error between the adaptive gyroscope algorithm and the force plate was -4.5 ± 14.4 ms and 43.4 ± 6.0 ms for IC and TC points, respectively, in healthy subjects. Similarly, the mean true error when data from the polio patient were compared against the force plate was -75.61 ± 27.53 ms and 99.20 ± 46.00 ms for IC and TC points, respectively. A comparison of the present algorithm against temporal gait parameters derived from an optical motion analysis system showed good agreement for nine healthy subjects at four speeds. These results show that the algorithm reported here could constitute the basis of a robust, portable, low-cost system for ambulatory monitoring of gait.
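
    As a rough indication of how gait events can be read from a shank angular-velocity signal, the sketch below marks mid-swing peaks and takes nearby local minima as initial-contact (IC) and terminal-contact (TC) candidates; the synthetic signal, thresholds, and windows are illustrative and do not reproduce the adaptive algorithm evaluated in the study.

        import numpy as np
        from scipy.signal import find_peaks

        fs = 100.0                                       # sample rate in Hz (illustrative)
        t = np.arange(0, 10, 1 / fs)
        # Synthetic shank angular velocity with one positive mid-swing peak per stride.
        omega = np.sin(2 * np.pi * 1.0 * t) + 0.05 * np.random.default_rng(1).standard_normal(t.size)

        swing_peaks, _ = find_peaks(omega, height=0.5, distance=int(0.5 * fs))
        ic_candidates, tc_candidates = [], []
        for p in swing_peaks:
            after = omega[p:p + int(0.5 * fs)]           # local minimum after the peak ~ IC
            if after.size:
                ic_candidates.append(p + int(np.argmin(after)))
            before = omega[max(p - int(0.5 * fs), 0):p]  # local minimum before the peak ~ TC
            if before.size:
                tc_candidates.append(p - before.size + int(np.argmin(before)))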

  19. Digital watermarking algorithm based on HVS in wavelet domain

    NASA Astrophysics Data System (ADS)

    Zhang, Qiuhong; Xia, Ping; Liu, Xiaomei

    2013-10-01

    As a new technique for protecting the copyright of digital productions, the digital watermarking technique has drawn extensive attention. A digital watermarking algorithm based on the discrete wavelet transform (DWT) is presented in this paper according to human visual system properties, and some attack analyses are given. Experimental results show that the watermarking scheme proposed in this paper is imperceptible and robust to cropping, and also has good robustness to cutting, compression, filtering, and noise addition.
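
    A bare-bones DWT embedding of this kind is sketched below: a bipolar watermark is added to one detail subband with a small strength factor and detected by correlation; the wavelet, subband choice, strength, and non-blind detection are illustrative simplifications, and the paper's HVS-based weighting is not reproduced.

        import numpy as np
        import pywt
        from skimage import data, img_as_float

        host = img_as_float(data.camera())
        rng = np.random.default_rng(7)

        # One-level DWT; embed a bipolar watermark in the diagonal detail band.
        cA, (cH, cV, cD) = pywt.dwt2(host, "haar")
        watermark = rng.choice([-1.0, 1.0], size=cD.shape)
        alpha = 0.02                                     # embedding strength
        marked = pywt.idwt2((cA, (cH, cV, cD + alpha * watermark)), "haar")

        # Simple (non-blind) correlation check for illustration.
        _, (_, _, cD_test) = pywt.dwt2(marked, "haar")
        correlation = np.mean((cD_test - cD) * watermark) / alpha    # close to 1 when intact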

  20. DNA-based watermarks using the DNA-Crypt algorithm

    PubMed Central

    Heider, Dominik; Barnekow, Angelika

    2007-01-01

    Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate, and the stability over time, which is represented by the number of generations. In silico experiments using Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms. PMID:17535434
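
    The least-significant-base idea can be illustrated with a toy mapping that writes one bit into the third base of each codon, as below; this ignores codon synonymy, the fuzzy controller, and the mutation-correcting codes that DNA-Crypt provides, and the bit-to-base assignment is an invented example.

        # Toy illustration of hiding bits in the "least significant base" of each codon.
        # It ignores codon synonymy, the fuzzy controller and the mutation-correcting
        # codes handled by DNA-Crypt; the bit-to-base assignment below is invented.
        BASE_BIT = {"A": 0, "C": 1, "G": 0, "T": 1}
        FLIP = {"A": "C", "C": "A", "G": "T", "T": "G"}

        def embed(sequence, bits):
            seq = list(sequence)
            for i, bit in enumerate(bits):
                pos = 3 * i + 2                          # third (wobble) base of each codon
                if BASE_BIT[seq[pos]] != bit:
                    seq[pos] = FLIP[seq[pos]]
            return "".join(seq)

        def extract(sequence, n_bits):
            return [BASE_BIT[sequence[3 * i + 2]] for i in range(n_bits)]

        marked = embed("ATGGCTTTAGAACGT", [1, 0, 1, 1])
        assert extract(marked, 4) == [1, 0, 1, 1]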

  1. Nausea and vomiting of pregnancy. Evidence-based treatment algorithm.

    PubMed Central

    Levichek, Zina; Atanackovic, Gardana; Oepkes, Dick; Maltepe, Carolyn; Einarson, Adrienne; Magee, Laura; Koren, Gideon

    2002-01-01

    QUESTION: One of my patients suffers from a moderate-to-severe form of morning sickness. She responded only partially to doxylamine and pyridoxine (Diclectin), and I wish to try adding another medication. What should my priority be? ANSWER: An algorithm used by Motherisk to manage thousands of patients takes a hierarchical approach to this condition. This approach is evidence based with regard to fetal safety as well as efficacy. PMID:11889884

  2. Physics-based signal processing algorithms for micromachined cantilever arrays

    DOEpatents

    Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W

    2013-11-19

    A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The methods utilize deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever producing a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.

  3. Fast wavelet based algorithms for linear evolution equations

    NASA Technical Reports Server (NTRS)

    Engquist, Bjorn; Osher, Stanley; Zhong, Sifen

    1992-01-01

    A class of fast wavelet-based algorithms was devised for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin, which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations with spatially varying coefficients. A significant speedup over standard methods is obtained when the approach is applied to hyperbolic equations in one space dimension and parabolic equations in multiple dimensions.

  4. New image watermarking algorithm based on mixed scales wavelets

    NASA Astrophysics Data System (ADS)

    El Hajji, Mohamed; Douzi, Hassan; Mammass, Driss; Harba, Rachid; Ros, Frédéric

    2012-01-01

    Watermarking is a technology for embedding secure information in digital content such as audio, images, and video. An effective watermarking algorithm is proposed based on a discrete wavelet transform (DWT) using mixed scales representation. The watermark is embedded in dominant blocks using quantization index modulation (QIM). These dominant blocks correspond to the texture and contour zones. Experimental results demonstrate that the proposed method is robust against various attacks and improves watermark invisibility.
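
    The QIM step on its own is easy to sketch: each selected coefficient is quantized onto one of two interleaved lattices according to the bit to be embedded, and the bit is recovered by checking which lattice the received value is closer to. The example below is generic and does not reproduce the dominant-block selection or the mixed-scales DWT of the paper.

        import numpy as np

        def qim_embed(coefficients, bits, step=8.0):
            """Quantization index modulation: quantize each coefficient onto one of
            two interleaved lattices depending on the bit to embed."""
            coefficients = np.asarray(coefficients, dtype=float)
            offsets = np.where(np.asarray(bits) == 0, 0.0, step / 2.0)
            return np.round((coefficients - offsets) / step) * step + offsets

        def qim_extract(coefficients, step=8.0):
            """Decide each bit by which lattice the received coefficient is closer to."""
            c = np.asarray(coefficients, dtype=float)
            d0 = np.abs(c - np.round(c / step) * step)
            d1 = np.abs(c - (np.round((c - step / 2) / step) * step + step / 2))
            return (d1 < d0).astype(int)

        coeffs = np.random.default_rng(3).normal(0, 20, size=16)
        bits = np.random.default_rng(4).integers(0, 2, size=16)
        marked = qim_embed(coeffs, bits)
        assert np.array_equal(qim_extract(marked), bits)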

  5. A background suppression algorithm for infrared image based on shearlet

    NASA Astrophysics Data System (ADS)

    Zou, Ruibin; Shi, Caicheng; Qin, Xiao

    2015-04-01

    Because of the relatively long distance between the infrared imaging system and the target, or the wide field of view of the infrared optics, the imaging area of an infrared target is only a few pixels, appearing as an isolated spot in the field of view. Only the intensity information (gray value) is available for target detection. At the same time, infrared images have many shortcomings, such as strong noise and interference, so the small target is usually buried in the background and noise. The small target is therefore difficult to detect, and in general it is impossible to detect it reliably in a single frame. In short, the core of an infrared small target detection algorithm is background and noise suppression based on a single frame. To address these problems, a shearlet-based background suppression algorithm for infrared images is proposed. The algorithm exploits the shearlet transform, which is specifically designed to capture anisotropic and directional information at various scales. This transform provides an optimally efficient representation of images, greatly reducing the amount of information while retaining the usable information. The paper first introduces the principle of shearlets, then presents the theory of the algorithm and explains the implementation steps, and finally gives simulation results. In Matlab simulations on several sets of infrared images, the results conform to the theory of shearlet-based background suppression. The results show that this method can effectively suppress the background, improve the SCR, and achieve satisfactory performance against a sky background. The method is effective for target detection, identification, and tracking in future infrared imaging systems.

  6. A DSP-based neural network non-uniformity correction algorithm for IRFPA

    NASA Astrophysics Data System (ADS)

    Liu, Chong-liang; Jin, Wei-qi; Cao, Yang; Liu, Xiu

    2009-07-01

    An effective neural network non-uniformity correction (NUC) algorithm based on a DSP is proposed in this paper. The non-uniform response of infrared focal plane array (IRFPA) detectors produces corrupted images with fixed-pattern noise (FPN). We introduce and analyze the artificial neural network scene-based non-uniformity correction (SBNUC) algorithm, and describe the design of a DSP-based NUC development platform for IRFPAs. The DSP hardware platform has low power consumption, with a 32-bit fixed-point DSP TMS320DM643 as the kernel processor. The dependability and expandability of the software have been improved by the DSP/BIOS real-time operating system and Reference Framework 5. To achieve real-time performance, the calibration parameter update is set at a lower task priority than video input and output in DSP/BIOS, so that updating the calibration parameters does not affect the video streams. The work flow of the system and the strategy for real-time realization are introduced. Experiments on real infrared imaging sequences demonstrate that this algorithm requires only a few frames to obtain high-quality corrections. It is computationally efficient and suitable for all kinds of non-uniformity.

  7. CoP Sensing Framework on Web-Based Environment

    NASA Astrophysics Data System (ADS)

    Mustapha, S. M. F. D. Syed

    Web technologies and Web applications have shown similarly high growth rates in terms of daily usage and user acceptance. Web applications have not only penetrated traditional domains such as education and business but have also encroached into areas such as politics, social life, lifestyle, and culture. The emergence of Web technologies has enabled Web access even for a person on the move, through PDAs or mobile phones connected using Wi-Fi, HSDPA, or other communication protocols. These two phenomena drive the need to build Web-based systems as supporting tools for fulfilling many everyday activities. In doing this, one research focus has been the implementation challenges of building Web-based support systems in different types of environment. This chapter describes the implementation issues in building a community learning framework that can be supported on a Web-based platform. The Community of Practice (CoP) has been chosen as the community learning theory for the case study and analysis, as it challenges the creativity of the architectural design of the Web system in order to capture the presence of learning activities. The chapter details the characteristics of CoP needed to understand the inherent intricacies of modeling it in a Web-based environment, the evidence of CoP activity that needs to be traced automatically so that the evidence-capturing process is unobtrusive, and the technologies needed for a full adoption of a Web-based support system for the community learning framework.

  8. Managing evidence-based health care: a diagnostic framework.

    PubMed

    Newman, K; Pyne, T; Cowling, A

    1998-01-01

    This paper proposes a diagnostic framework useful to Trust managers who are faced with the task of devising and implementing strategies for improvements in clinical effectiveness, and is based on a recent study incorporating clinicians, managers, and professional staff in four NHS Trusts in the North Thames Region. The gap framework is inspired by the gap model developed by Zeithaml, Parasuraman and Berry from their research into service quality and incorporates Dave Sackett's schema as well as a personal competency profile needed for the practice of evidence based health-care (EBHC). The paper highlights the four organisational and personal failures (gaps) which contribute to the fifth gap, namely the discrepancy between clinically relevant research evidence and its implementation in health care. To close the gaps, Trusts need to set the goal and tackle the cultural, organisational, attitudinal and more material aspects such as investment in the information infrastructure, education and training of doctors. Doctors need to go through a process from awareness to action facilitated through a combination of personal and organisational incentives and rewards as well as training in the requisite skills. Researchers should take steps to improve the quality of the evidence and its accessibility and purchasers should reinforce the use of EBHC by withdrawing funding for care which has proved to be ineffective, inappropriate or inferior.

  9. A service-based framework for pharmacogenomics data integration

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Bai, Xiaoying; Li, Jing; Ding, Cong

    2010-08-01

    Data are central to scientific research and practices. The advance of experiment methods and information retrieval technologies leads to explosive growth of scientific data and databases. However, due to the heterogeneous problems in data formats, structures and semantics, it is hard to integrate the diversified data that grow explosively and analyse them comprehensively. As more and more public databases are accessible through standard protocols like programmable interfaces and Web portals, Web-based data integration becomes a major trend to manage and synthesise data that are stored in distributed locations. Mashup, a Web 2.0 technique, presents a new way to compose content and software from multiple resources. The paper proposes a layered framework for integrating pharmacogenomics data in a service-oriented approach using the mashup technology. The framework separates the integration concerns from three perspectives including data, process and Web-based user interface. Each layer encapsulates the heterogeneous issues of one aspect. To facilitate the mapping and convergence of data, the ontology mechanism is introduced to provide consistent conceptual models across different databases and experiment platforms. To support user-interactive and iterative service orchestration, a context model is defined to capture information of users, tasks and services, which can be used for service selection and recommendation during a dynamic service composition process. A prototype system is implemented and cases studies are presented to illustrate the promising capabilities of the proposed approach.

  10. An interoperability test framework for HL7-based systems.

    PubMed

    Namli, Tuncay; Aluc, Gunes; Dogac, Asuman

    2009-05-01

    Health Level Seven (HL7) is a prominent messaging standard in the eHealth domain, and with HL7 v2, it addresses only the messaging layer. However, HL7 implementations also deal with the other layers of interoperability, namely the business process layer and the communication layer. This need is addressed in HL7 v3 by providing a number of normative transport specification profiles. Furthermore, there are storyboards describing HL7 v3 message choreographies between specific roles in specific events. Having alternative transport protocols and descriptive message choreographies introduces great flexibility in implementing HL7 standards, yet, this brings in the need for test frameworks that can accommodate different protocols and permit the dynamic definition of test scenarios. In this paper, we describe a complete test execution framework for HL7-based systems that provides high-level constructs allowing dynamic set up of test scenarios involving all the layers in the interoperability stack. The computer-interpretable test description language developed offers a configurable system with pluggable adaptors. The Web-based GUIs make it possible to test systems over the Web anytime, anywhere, and with any party willing to do so.

  11. An interoperability test framework for HL7-based systems.

    PubMed

    Namli, Tuncay; Aluc, Gunes; Dogac, Asuman

    2009-05-01

    Health Level Seven (HL7) is a prominent messaging standard in the eHealth domain, and with HL7 v2, it addresses only the messaging layer. However, HL7 implementations also deal with the other layers of interoperability, namely the business process layer and the communication layer. This need is addressed in HL7 v3 by providing a number of normative transport specification profiles. Furthermore, there are storyboards describing HL7 v3 message choreographies between specific roles in specific events. Having alternative transport protocols and descriptive message choreographies introduces great flexibility in implementing HL7 standards, yet, this brings in the need for test frameworks that can accommodate different protocols and permit the dynamic definition of test scenarios. In this paper, we describe a complete test execution framework for HL7-based systems that provides high-level constructs allowing dynamic set up of test scenarios involving all the layers in the interoperability stack. The computer-interpretable test description language developed offers a configurable system with pluggable adaptors. The Web-based GUIs make it possible to test systems over the Web anytime, anywhere, and with any party willing to do so. PMID:19304492

  12. An image reconstruction framework based on boundary voltages for ultrasound modulated electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2016-11-01

    A new image reconstruction framework based on boundary voltages is presented for ultrasound modulated electrical impedance tomography (UMEIT). Combining the electric and acoustic modalities, UMEIT reconstructs the conductivity distribution from a larger set of measurements carrying position information. The proposed image reconstruction framework begins by approximately constructing the sensitivity matrix of the imaging object containing an inclusion. The conductivity is then recovered from the boundary voltages of the imaging object. To solve the nonlinear inverse problem, an optimization method is adopted and an iterative method is tested. Compared with that for electrical resistance tomography (ERT), the newly constructed sensitivity matrix is more sensitive to the inclusion, even in the center of the imaging object, and it contains more effective information about the inclusions. Finally, image reconstruction is carried out by the conjugate gradient algorithm, and the results show that reconstructed images of higher quality can be obtained for UMEIT with a faster convergence rate. Both the theory and the image reconstruction results validate the feasibility of the proposed framework for UMEIT and confirm that UMEIT is a potential imaging technique.

  13. An Overview of NCA-Based Algorithms for Transcriptional Regulatory Network Inference

    PubMed Central

    Wang, Xu; Alshawaqfeh, Mustafa; Dang, Xuan; Wajid, Bilal; Noor, Amina; Qaraqe, Marwa; Serpedin, Erchin

    2015-01-01

    In systems biology, the regulation of gene expressions involves a complex network of regulators. Transcription factors (TFs) represent an important component of this network: they are proteins that control which genes are turned on or off in the genome by binding to specific DNA sequences. Transcription regulatory networks (TRNs) describe gene expressions as a function of regulatory inputs specified by interactions between proteins and DNA. A complete understanding of TRNs helps to predict a variety of biological processes and to diagnose, characterize and eventually develop more efficient therapies. Recent advances in biological high-throughput technologies, such as DNA microarray data and next-generation sequence (NGS) data, have made the inference of transcription factor activities (TFAs) and TF-gene regulations possible. Network component analysis (NCA) represents an efficient computational framework for TRN inference from the information provided by microarrays, ChIP-on-chip and the prior information about TF-gene regulation. However, NCA suffers from several shortcomings. Recently, several algorithms based on the NCA framework have been proposed to overcome these shortcomings. This paper first overviews the computational principles behind NCA, and then, it surveys the state-of-the-art NCA-based algorithms proposed in the literature for TRN reconstruction.

  14. Multilevel and motion model-based ultrasonic speckle tracking algorithms.

    PubMed

    Yeung, F; Levinson, S F; Parker, K J

    1998-03-01

    A multilevel motion model-based approach to ultrasonic speckle tracking has been developed that addresses the inherent trade-offs associated with traditional single-level block matching (SLBM) methods. The multilevel block matching (MLBM) algorithm uses variable matching block and search window sizes in a coarse-to-fine scheme, preserving the relative immunity to noise associated with the use of a large matching block while preserving the motion field detail associated with the use of a small matching block. To decrease further the sensitivity of the multilevel approach to noise, speckle decorrelation and false matches, a smooth motion model-based block matching (SMBM) algorithm has been implemented that takes into account the spatial inertia of soft tissue elements. The new algorithms were compared to SLBM through a series of experiments involving manual translation of soft tissue phantoms, motion field computer simulations of rotation, compression and shear deformation, and an experiment involving contraction of human forearm muscles. Measures of tracking accuracy included mean squared tracking error, peak signal-to-noise ratio (PSNR) and blinded observations of optical flow. Measures of tracking efficiency included the number of sum squared difference calculations and the computation time. In the phantom translation experiments, the SMBM algorithm successfully matched the accuracy of SLBM using both large and small matching blocks while significantly reducing the number of computations and computation time when a large matching block was used. For the computer simulations, SMBM yielded better tracking accuracies and spatial resolution when compared with SLBM using a large matching block. For the muscle experiment, SMBM outperformed SLBM both in terms of PSNR and observations of optical flow. We believe that the smooth motion model-based MLBM approach represents a meaningful development in ultrasonic soft tissue motion measurement. PMID:9587997
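
    The coarse-to-fine idea can be sketched for a single tracked point as below: a large block with a wide search window gives a noise-robust initial displacement, and smaller blocks with narrower windows refine it; the level sizes and the SSD criterion are illustrative, and the smooth-motion model of SMBM is not included.

        import numpy as np

        def match_block(ref, tgt, top, left, tgt_top, tgt_left, block, search):
            """Shift the target window within +/- search pixels and keep the offset that
            minimizes the sum of squared differences (SSD) with the reference block."""
            patch = ref[top:top + block, left:left + block]
            best, best_dv, best_dh = np.inf, 0, 0
            for dv in range(-search, search + 1):
                for dh in range(-search, search + 1):
                    r, c = tgt_top + dv, tgt_left + dh
                    if r < 0 or c < 0 or r + block > tgt.shape[0] or c + block > tgt.shape[1]:
                        continue
                    ssd = np.sum((patch - tgt[r:r + block, c:c + block]) ** 2)
                    if ssd < best:
                        best, best_dv, best_dh = ssd, dv, dh
            return best_dv, best_dh

        def multilevel_track(ref, tgt, top, left, levels=((32, 8), (16, 4), (8, 2))):
            """Coarse-to-fine block matching for one point (simplified MLBM sketch)."""
            dv_total, dh_total = 0, 0
            for block, search in levels:
                dv, dh = match_block(ref, tgt, top, left,
                                     top + dv_total, left + dh_total, block, search)
                dv_total += dv
                dh_total += dh
            return dv_total, dh_total

        rng = np.random.default_rng(0)
        ref = rng.normal(size=(128, 128))
        tgt = np.roll(ref, shift=(3, -2), axis=(0, 1))       # known translation for checking
        print(multilevel_track(ref, tgt, top=48, left=48))   # expected (3, -2)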

  15. Algorithms for effective querying of compound graph-based pathway databases

    PubMed Central

    2009-01-01

    Background Graph-based pathway ontologies and databases are widely used to represent data about cellular processes. This representation makes it possible to programmatically integrate cellular networks and to investigate them using the well-understood concepts of graph theory in order to predict their structural and dynamic properties. An extension of this graph representation, namely hierarchically structured or compound graphs, in which a member of a biological network may recursively contain a sub-network of a somehow logically similar group of biological objects, provides many additional benefits for analysis of biological pathways, including reduction of complexity by decomposition into distinct components or modules. In this regard, it is essential to effectively query such integrated large compound networks to extract the sub-networks of interest with the help of efficient algorithms and software tools. Results Towards this goal, we developed a querying framework, along with a number of graph-theoretic algorithms from simple neighborhood queries to shortest paths to feedback loops, that is applicable to all sorts of graph-based pathway databases, from PPIs (protein-protein interactions) to metabolic and signaling pathways. The framework is unique in that it can account for compound or nested structures and ubiquitous entities present in the pathway data. In addition, the queries may be related to each other through "AND" and "OR" operators, and can be recursively organized into a tree, in which the result of one query might be a source and/or target for another, to form more complex queries. The algorithms were implemented within the querying component of a new version of the software tool PATIKAweb (Pathway Analysis Tool for Integration and Knowledge Acquisition) and have proven useful for answering a number of biologically significant questions for large graph-based pathway databases. Conclusion The PATIKA Project Web site is http

  16. Matched field localization based on CS-MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Shuangle; Tang, Ruichun; Peng, Linhui; Ji, Xiaopeng

    2016-04-01

    The problems caused by too few or too many snapshots and by coherent sources in underwater acoustic positioning are considered. A matched field localization algorithm based on CS-MUSIC (Compressive Sensing Multiple Signal Classification) is proposed, built on a sparse mathematical model of underwater positioning. The signal matrix is calculated through the SVD (Singular Value Decomposition) of the observation matrix. The observation matrix in the sparse mathematical model is replaced by the signal matrix, and a new, more concise sparse mathematical model is obtained, which reduces not only the scale of the localization problem but also the noise level; the new sparse mathematical model is then solved by the CS-MUSIC algorithm, which combines the CS (Compressive Sensing) and MUSIC (Multiple Signal Classification) methods. The algorithm proposed in this paper effectively overcomes the difficulties caused by correlated sources and a shortage of snapshots, and it also reduces the time complexity and noise level of the localization problem by using the SVD of the observation matrix when the number of snapshots is large, as demonstrated in this paper.

  17. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing.

    PubMed

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-01-01

    Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users when they are walking in the buildings as the source of location fingerprint data. Through the variation characteristics of users' smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users' explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
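
    Reading the AP-Cluster step as an affinity-propagation-style clustering (an assumption on our part), the sketch below clusters toy crowd-sourced RSSI fingerprints and keeps one representative per cluster; the access-point count and signal values are invented for illustration.

        import numpy as np
        from sklearn.cluster import AffinityPropagation

        # Toy crowd-sourced fingerprints: each row is an RSSI vector over four access points.
        rng = np.random.default_rng(0)
        anchor_profiles = np.array([[-40, -70, -80, -90],
                                    [-85, -45, -75, -70],
                                    [-80, -80, -50, -65]], dtype=float)
        fingerprints = np.vstack([profile + rng.normal(0, 3, size=(30, 4))
                                  for profile in anchor_profiles])

        # Cluster the crowdsourced fingerprints and keep one representative per cluster.
        clustering = AffinityPropagation(random_state=0).fit(fingerprints)
        representatives = clustering.cluster_centers_
        labels = clustering.labels_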

  18. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing

    PubMed Central

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-01-01

    Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users when they are walking in the buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623

  19. An Enhanced Differential Evolution Algorithm Based on Multiple Mutation Strategies

    PubMed Central

    Xiang, Wan-li; Meng, Xue-lei; An, Mei-qing; Li, Yin-zhen; Gao, Ming-xia

    2015-01-01

    The differential evolution algorithm is a simple yet efficient metaheuristic for global optimization over continuous spaces. However, standard DE, especially DE/best/1/bin, suffers from premature convergence. In order to take advantage of the direction-guidance information of the best individual in DE/best/1/bin while avoiding local traps, an enhanced differential evolution algorithm based on multiple mutation strategies, named EDE, is proposed in this paper. The EDE algorithm integrates an opposition-based learning initialization technique to improve the initial solution quality; a new combined mutation strategy, composed of DE/current/1/bin together with DE/pbest/1/bin, to accelerate standard DE and prevent it from clustering around the global best individual; and a perturbation scheme to further avoid premature convergence. In addition, we introduce two linear time-varying functions, which are used to decide which solution search equation is chosen at the mutation and perturbation phases, respectively. Experimental results on twenty-five benchmark functions show that EDE is far better than standard DE. In further comparisons, EDE is compared with five other state-of-the-art approaches, and the results show that EDE is superior or at least equal to these methods on most of the benchmark functions. PMID:26609304
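
    A compact sketch of the combined-mutation idea on a toy sphere function is given below: the initial population is improved by opposition-based candidates, and each generation switches between a DE/current/1-style move and a DE/pbest/1-style move with a linearly varying probability; the switching rule, parameters, and omission of the perturbation scheme are simplifications of the EDE algorithm.

        import numpy as np

        def sphere(x):                                   # illustrative benchmark objective
            return float(np.sum(x ** 2))

        def ede_sketch(dim=10, pop_size=30, max_gen=200, F=0.5, CR=0.9, p=0.2, seed=0):
            rng = np.random.default_rng(seed)
            low, high = -5.0, 5.0
            # Opposition-based learning initialization: keep the better half of the
            # random population and its "opposite" points.
            pop0 = rng.uniform(low, high, (pop_size, dim))
            union = np.vstack([pop0, low + high - pop0])
            fit_union = np.array([sphere(x) for x in union])
            keep = np.argsort(fit_union)[:pop_size]
            pop, fit = union[keep], fit_union[keep]
            for gen in range(max_gen):
                w = gen / max_gen                        # linear time-varying switch probability
                for i in range(pop_size):
                    r1, r2 = rng.choice([j for j in range(pop_size) if j != i], 2, replace=False)
                    if rng.random() < w:                 # DE/pbest/1: pull toward a top-p individual
                        pbest = pop[rng.choice(np.argsort(fit)[:max(1, int(p * pop_size))])]
                        mutant = pbest + F * (pop[r1] - pop[r2])
                    else:                                # DE/current/1: explore around the current vector
                        mutant = pop[i] + F * (pop[r1] - pop[r2])
                    cross = rng.random(dim) < CR         # binomial crossover
                    cross[rng.integers(dim)] = True
                    trial = np.where(cross, mutant, pop[i])
                    f_trial = sphere(trial)
                    if f_trial < fit[i]:                 # greedy selection
                        pop[i], fit[i] = trial, f_trial
            return pop[np.argmin(fit)], float(fit.min())

        best_x, best_f = ede_sketch()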

  20. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing.

    PubMed

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-04-09

    Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users when they are walking in the buildings as the source of location fingerprint data. Through the variation characteristics of users' smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users' explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established.

  1. Optimization algorithm of digital watermarking anti-coalition attacks in DWT-domain based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Que, Dashun; Li, Gang; Yue, Peng

    2007-12-01

    An adaptive optimization watermarking algorithm based on the Genetic Algorithm (GA) and the discrete wavelet transform (DWT) is proposed in this paper. The core of this algorithm is a GA-based fitness-function optimization model for digital watermarking. The embedding intensity of the watermark can be adjusted adaptively, and the algorithm effectively ensures the imperceptibility of the watermark while its robustness is maintained. The optimization model may provide a new idea for making digital watermarking algorithms resistant to coalition attacks. The paper reports many experiments, including watermark embedding and extraction, the influence of the weighting factor, embedding the same watermark into different cover images, embedding different watermarks into the same cover image, and a comparative analysis between this optimization algorithm and a human visual system (HVS) based algorithm. The simulation results and further analysis show the effectiveness and advantages of the new algorithm, which is also versatile and extensible and has a better ability to withstand coalition attacks. Moreover, the robustness and security of the watermarking algorithm are improved by applying a scrambling transformation and chaotic encryption when preprocessing the watermark.

  2. Library support for problem-based learning: an algorithmic approach.

    PubMed

    Ispahany, Nighat; Torraca, Kathren; Chilov, Marina; Zimbler, Elaine R; Matsoukas, Konstantina; Allen, Tracy Y

    2007-01-01

    Academic health sciences libraries can take various approaches to support the problem-based learning component of the curriculum. This article presents one such approach taken to integrate information navigation skills into the small group discussion part of the Pathophysiology course in the second year of the Dental school curriculum. Along with presenting general resources for the course, the Library Toolkit introduced an algorithmic approach to finding answers to sample clinical case questions. While elements of Evidence-Based Practice were introduced, the emphasis was on teaching students to navigate relevant resources and apply various database search techniques to find answers to the clinical problems presented.

  3. Genetic Algorithm based Decentralized PI Type Controller: Load Frequency Control

    NASA Astrophysics Data System (ADS)

    Dwivedi, Atul; Ray, Goshaidas; Sharma, Arun Kumar

    2016-12-01

    This work presents the design of a decentralized PI-type linear quadratic (LQ) controller based on a genetic algorithm (GA). The proposed design technique allows considerable flexibility in defining the control objectives, does not require knowledge of the system matrices, and avoids solving the algebraic Riccati equation. To illustrate the results of this work, a load-frequency control problem is considered. Simulation results reveal that the proposed GA-based scheme is an attractive alternative approach to the load-frequency control problem from both performance and design points of view.

  4. Missile placement analysis based on improved SURF feature matching algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Kaida; Zhao, Wenjie; Li, Dejun; Gong, Xiran; Sheng, Qian

    2015-03-01

    Precise battle damage assessment from video images used to analyze missile placement is a new study area. This article proposes an improved speeded-up robust features algorithm, named restricted speeded-up robust features (RSURF), which combines the combat application of TV-command-guided missiles with the characteristics of video images. Its restrictions are reflected in two aspects: the first restricts the extraction area of feature points; the second restricts the number of feature points. The process of missile placement analysis based on video images was designed, and a video splicing process and random sample consensus purification were implemented. The RSURF algorithm is shown to have good real-time performance while guaranteeing accuracy.

  5. An improved piecewise linear chaotic map based image encryption algorithm.

    PubMed

    Hu, Yuping; Zhu, Congxu; Wang, Zhijian

    2014-01-01

    An image encryption algorithm based on an improved piecewise linear chaotic map (MPWLCM) model is proposed. The algorithm uses the MPWLCM to permute and diffuse the plain image simultaneously. Exploiting the chaotic system's sensitivity to initial key values and system parameters, as well as its ergodicity, two pseudorandom sequences are designed and used in the permutation and diffusion processes. Pixels are not processed in index order; instead, processing alternates between the beginning and the end of the image. Cipher feedback is introduced in the diffusion process. Test results and security analysis show that the scheme achieves good encryption results and that its key space is large enough to resist brute-force attacks.
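
    For illustration, the sketch below runs a textbook piecewise linear chaotic map to drive a pixel permutation and a simple XOR diffusion. The map form, parameters and the diffusion step are generic assumptions and do not reproduce the exact MPWLCM scheme of the cited paper.

        import numpy as np

        def pwlcm(x, p):
            # Standard piecewise linear chaotic map on (0, 1) with control parameter p.
            if x >= 0.5:                    # symmetric branch
                x = 1.0 - x
            return x / p if x < p else (x - p) / (0.5 - p)

        def chaotic_sequence(x0, p, n):
            seq, x = np.empty(n), x0
            for i in range(n):
                x = pwlcm(x, p)
                seq[i] = x
            return seq

        img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
        flat = img.flatten()
        perm = np.argsort(chaotic_sequence(0.3456, 0.271, flat.size))     # permutation order
        keystream = (chaotic_sequence(0.7123, 0.387, flat.size) * 256).astype(np.uint8)
        cipher = flat[perm] ^ keystream                                   # toy diffusion by XOR
        print(cipher[:8])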

  6. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    PubMed Central

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal with the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in noise, because the proper poles are difficult to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherence parameters (ICP) are predicted through the relation between corresponding poles of each subband. Then, an iterative algorithm that minimizes the 2-norm of the signal difference is introduced to reduce the signal fusion error. Application to simulated data verifies that the proposed method obtains better fusion results at low SNR. PMID:26781194
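
    A rough sketch of matrix pencil pole estimation for a single noisy subband signal; the pencil parameter, the singular-value truncation threshold and the test signal are assumptions, and the iterative multiband fusion step of the paper is not reproduced.

        import numpy as np

        def matrix_pencil_poles(y, M, L=None):
            """Estimate M signal poles from samples y via the matrix pencil method."""
            N = len(y)
            L = L if L is not None else N // 3     # pencil parameter, typically N/3..N/2
            # Hankel data matrix Y[i, j] = y[i + j]; Y1 drops the last column, Y2 the first.
            Y = np.array([y[i:i + L + 1] for i in range(N - L)])
            Y1, Y2 = Y[:, :-1], Y[:, 1:]
            # A truncated pseudoinverse suppresses the noise subspace before the pencil solve.
            vals = np.linalg.eigvals(np.linalg.pinv(Y1, rcond=1e-2) @ Y2)
            return sorted(vals, key=np.abs, reverse=True)[:M]

        n = np.arange(64)
        true_poles = [np.exp(0.02 + 1j * 0.9), np.exp(-0.01 + 1j * 0.4)]
        y = sum(p ** n for p in true_poles) + 0.01 * np.random.randn(64)
        print(matrix_pencil_poles(y, M=2))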

  7. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method.

    PubMed

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal with the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in noise, because the proper poles are difficult to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherence parameters (ICP) are predicted through the relation between corresponding poles of each subband. Then, an iterative algorithm that minimizes the 2-norm of the signal difference is introduced to reduce the signal fusion error. Application to simulated data verifies that the proposed method obtains better fusion results at low SNR.

  8. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    NASA Astrophysics Data System (ADS)

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal with the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in noise, because the proper poles are difficult to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherence parameters (ICP) are predicted through the relation between corresponding poles of each subband. Then, an iterative algorithm that minimizes the 2-norm of the signal difference is introduced to reduce the signal fusion error. Application to simulated data verifies that the proposed method obtains better fusion results at low SNR.

  9. A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem

    PubMed Central

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, mobile users should first be classified. To address the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. Context information is also taken as a classification attribute for the mobile user, and the context is divided into public and private context classes. The processes and operators of the algorithm are then analyzed. Finally, an experiment on mobile user data with the algorithm classifies mobile users into Basic service, E-service, Plus service, and Total service classes and also yields rules about the mobile users. Compared with the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389

  10. A modified decision tree algorithm based on genetic algorithm for mobile user classification problem.

    PubMed

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, mobile users should first be classified. To address the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. Context information is also taken as a classification attribute for the mobile user, and the context is divided into public and private context classes. The processes and operators of the algorithm are then analyzed. Finally, an experiment on mobile user data with the algorithm classifies mobile users into Basic service, E-service, Plus service, and Total service classes and also yields rules about the mobile users. Compared with the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389

  11. A modified decision tree algorithm based on genetic algorithm for mobile user classification problem.

    PubMed

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, mobile users should first be classified. To address the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. Context information is also taken as a classification attribute for the mobile user, and the context is divided into public and private context classes. The processes and operators of the algorithm are then analyzed. Finally, an experiment on mobile user data with the algorithm classifies mobile users into Basic service, E-service, Plus service, and Total service classes and also yields rules about the mobile users. Compared with the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity.

  12. Algorithmic support for commodity-based parallel computing systems.

    SciTech Connect

    Leung, Vitus Joseph; Bender, Michael A.; Bunde, David P.; Phillips, Cynthia Ann

    2003-10-01

    The Computational Plant or Cplant is a commodity-based distributed-memory supercomputer under development at Sandia National Laboratories. Distributed-memory supercomputers run many parallel programs simultaneously. Users submit their programs to a job queue. When a job is scheduled to run, it is assigned to a set of available processors. Job runtime depends not only on the number of processors but also on the particular set of processors assigned to it. Jobs should be allocated to localized clusters of processors to minimize communication costs and to avoid bandwidth contention caused by overlapping jobs. This report introduces new allocation strategies and performance metrics based on space-filling curves and one-dimensional allocation strategies. These algorithms are general and simple. Preliminary simulations and Cplant experiments indicate that both space-filling curves and one-dimensional packing improve processor locality compared to the sorted free list strategy previously used on Cplant. These new allocation strategies are implemented in Release 2.0 of the Cplant System Software that was phased into the Cplant systems at Sandia by May 2002. Experimental results then demonstrated that the average number of communication hops between the processors allocated to a job strongly correlates with the job's completion time. This report also gives processor-allocation algorithms for minimizing the average number of communication hops between the assigned processors for grid architectures. The associated clustering problem is as follows: Given n points in R^d, find k points that minimize their average pairwise L1 distance. Exact and approximate algorithms are given for these optimization problems. One of these algorithms has been implemented on Cplant and will be included in Cplant System Software, Version 2.1, to be released. In more preliminary work, we suggest improvements to the scheduler separate from the allocator.
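
    A toy sketch of the space-filling-curve idea: free nodes of a 2D mesh are ordered along a Morton (Z-order) curve and a job receives a contiguous run of that ordering that keeps its nodes close together. The mesh size, job size and scoring metric are illustrative assumptions, not the Cplant implementation.

        import itertools

        def morton(x, y, bits=8):
            """Interleave the bits of x and y to obtain a Z-order curve index."""
            z = 0
            for i in range(bits):
                z |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
            return z

        def allocate(free_nodes, k):
            """Pick k free nodes with a small average pairwise L1 distance."""
            ordered = sorted(free_nodes, key=lambda p: morton(*p))
            best, best_cost = None, float("inf")
            for i in range(len(ordered) - k + 1):
                window = ordered[i:i + k]
                cost = sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
                           for a, b in itertools.combinations(window, 2))
                if cost < best_cost:
                    best, best_cost = window, cost
            return best

        free = [(x, y) for x in range(8) for y in range(8) if (x + y) % 3]   # toy free set
        print(allocate(free, k=6))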

  13. Reproducibility of UAV-based earth topography reconstructions based on Structure-from-Motion algorithms

    NASA Astrophysics Data System (ADS)

    Clapuyt, Francois; Vanacker, Veerle; Van Oost, Kristof

    2016-05-01

    Combining UAV-based aerial pictures with the Structure-from-Motion (SfM) algorithm provides an efficient, low-cost and rapid framework for remote sensing and monitoring of dynamic natural environments. This methodology is particularly suitable for repeated topographic surveys in remote or poorly accessible areas. However, temporal analysis of landform topography requires high measurement accuracy and reproducibility of the methodology, as differencing of digital surface models leads to error propagation. In order to assess the repeatability of the SfM technique, we surveyed a study area characterized by gentle topography with a UAV platform equipped with a standard reflex camera, and varied the focal length of the camera and the location of georeferencing targets between flights. Comparison of different SfM-derived topography datasets shows that the precision of measurements is on the order of centimetres for identical replications, which highlights the excellent performance of the SfM workflow, all parameters being equal. The uncertainty is one order of magnitude larger for 3D topographic reconstructions involving independent sets of ground control points, which results from the fact that errors in the localisation of ground control points strongly propagate into the final results.

  14. Standardized evaluation of algorithms for computer-aided diagnosis of dementia based on structural MRI: the CADDementia challenge.

    PubMed

    Bron, Esther E; Smits, Marion; van der Flier, Wiesje M; Vrenken, Hugo; Barkhof, Frederik; Scheltens, Philip; Papma, Janne M; Steketee, Rebecca M E; Méndez Orellana, Carolina; Meijboom, Rozanna; Pinto, Madalena; Meireles, Joana R; Garrett, Carolina; Bastos-Leite, António J; Abdulkadir, Ahmed; Ronneberger, Olaf; Amoroso, Nicola; Bellotti, Roberto; Cárdenas-Peña, David; Álvarez-Meza, Andrés M; Dolph, Chester V; Iftekharuddin, Khan M; Eskildsen, Simon F; Coupé, Pierrick; Fonov, Vladimir S; Franke, Katja; Gaser, Christian; Ledig, Christian; Guerrero, Ricardo; Tong, Tong; Gray, Katherine R; Moradi, Elaheh; Tohka, Jussi; Routier, Alexandre; Durrleman, Stanley; Sarica, Alessia; Di Fatta, Giuseppe; Sensi, Francesco; Chincarini, Andrea; Smith, Garry M; Stoyanov, Zhivko V; Sørensen, Lauge; Nielsen, Mads; Tangaro, Sabina; Inglese, Paolo; Wachinger, Christian; Reuter, Martin; van Swieten, John C; Niessen, Wiro J; Klein, Stefan

    2015-05-01

    Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare as different data sets and methodology were used for evaluation. In addition, it is unclear how the algorithms would perform on previously unseen data, and thus, how they would perform in clinical practice when there is no real opportunity to adapt the algorithm to the data at hand. To address these comparability, generalizability and clinical applicability issues, we organized a grand challenge that aimed to objectively compare algorithms based on a clinically representative multi-center data set. Using clinical practice as the starting point, the goal was to reproduce the clinical diagnosis. Therefore, we evaluated algorithms for multi-class classification of three diagnostic groups: patients with probable Alzheimer's disease, patients with mild cognitive impairment and healthy controls. The diagnosis based on clinical criteria was used as reference standard, as it was the best available reference despite its known limitations. For evaluation, a previously unseen test set was used consisting of 354 T1-weighted MRI scans with the diagnoses blinded. Fifteen research teams participated with a total of 29 algorithms. The algorithms were trained on a small training set (n=30) and optionally on data from other sources (e.g., the Alzheimer's Disease Neuroimaging Initiative, the Australian Imaging Biomarkers and Lifestyle flagship study of aging). The best performing algorithm yielded an accuracy of 63.0% and an area under the receiver-operating-characteristic curve (AUC) of 78.8%. In general, the best performances were achieved using feature extraction based on voxel-based morphometry or a combination of features that included volume, cortical thickness, shape and intensity. The challenge is open for new submissions via the web-based framework: http://caddementia.grand-challenge.org.

  15. Watershed model calibration framework developed using an influence coefficient algorithm and a genetic algorithm and analysis of pollutant discharge characteristics and load reduction in a TMDL planning area.

    PubMed

    Cho, Jae Heon; Lee, Jong Ho

    2015-11-01

    Manual calibration is common in rainfall-runoff model applications. However, rainfall-runoff models include several complicated parameters; thus, significant time and effort are required to manually calibrate the parameters individually and repeatedly. Automatic calibration has relative merit regarding time efficiency and objectivity but shortcomings regarding understanding indigenous processes in the basin. In this study, a watershed model calibration framework was developed using an influence coefficient algorithm and genetic algorithm (WMCIG) to automatically calibrate the distributed models. The optimization problem used to minimize the sum of squares of the normalized residuals of the observed and predicted values was solved using a genetic algorithm (GA). The final model parameters were determined from the iteration with the smallest sum of squares of the normalized residuals of all iterations. The WMCIG was applied to a Gomakwoncheon watershed located in an area that presents a total maximum daily load (TMDL) in Korea. The proportion of urbanized area in this watershed is low, and the diffuse pollution loads of nutrients such as phosphorus are greater than the point-source pollution loads because of the concentration of rainfall that occurs during the summer. The pollution discharges from the watershed were estimated for each land-use type, and the seasonal variations of the pollution loads were analyzed. Consecutive flow measurement gauges have not been installed in this area, and it is difficult to survey the flow and water quality in this area during the frequent heavy rainfall that occurs during the wet season. The Hydrological Simulation Program-Fortran (HSPF) model was used to calculate the runoff flow and water quality in this basin. Using the water quality results, a load duration curve was constructed for the basin, the exceedance frequency of the water quality standard was calculated for each hydrologic condition class, and the percent reduction
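
    The calibration idea can be illustrated with a toy example: an evolutionary search over model parameters that minimizes the sum of squared normalized residuals between observed and simulated values. The stand-in "model", parameter bounds and search settings are assumptions; HSPF itself is not invoked, and the simple truncation-selection loop below is a simplified surrogate for the paper's genetic algorithm.

        import numpy as np

        rng = np.random.default_rng(0)
        observed = np.array([3.1, 5.4, 2.2, 7.8, 4.0])        # hypothetical observed flows

        def simulate(params):
            # Stand-in for a watershed model run (e.g. an HSPF execution).
            a, b = params
            return a * np.array([1.0, 1.8, 0.7, 2.6, 1.3]) + b

        def objective(params):
            resid = (observed - simulate(params)) / observed  # normalized residuals
            return np.sum(resid ** 2)

        bounds = np.array([[0.0, 5.0], [0.0, 3.0]])
        pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))
        for _ in range(100):
            fitness = np.array([objective(p) for p in pop])
            parents = pop[np.argsort(fitness)[:20]]                             # selection
            children = parents[rng.integers(0, 20, 40)] + rng.normal(0, 0.05, (40, 2))
            pop = np.clip(children, bounds[:, 0], bounds[:, 1])                 # mutation + bounds
        best = pop[np.argmin([objective(p) for p in pop])]
        print(best, objective(best))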

  16. Watershed model calibration framework developed using an influence coefficient algorithm and a genetic algorithm and analysis of pollutant discharge characteristics and load reduction in a TMDL planning area.

    PubMed

    Cho, Jae Heon; Lee, Jong Ho

    2015-11-01

    Manual calibration is common in rainfall-runoff model applications. However, rainfall-runoff models include several complicated parameters; thus, significant time and effort are required to manually calibrate the parameters individually and repeatedly. Automatic calibration has relative merit regarding time efficiency and objectivity but shortcomings regarding understanding indigenous processes in the basin. In this study, a watershed model calibration framework was developed using an influence coefficient algorithm and genetic algorithm (WMCIG) to automatically calibrate the distributed models. The optimization problem used to minimize the sum of squares of the normalized residuals of the observed and predicted values was solved using a genetic algorithm (GA). The final model parameters were determined from the iteration with the smallest sum of squares of the normalized residuals of all iterations. The WMCIG was applied to a Gomakwoncheon watershed located in an area that presents a total maximum daily load (TMDL) in Korea. The proportion of urbanized area in this watershed is low, and the diffuse pollution loads of nutrients such as phosphorus are greater than the point-source pollution loads because of the concentration of rainfall that occurs during the summer. The pollution discharges from the watershed were estimated for each land-use type, and the seasonal variations of the pollution loads were analyzed. Consecutive flow measurement gauges have not been installed in this area, and it is difficult to survey the flow and water quality in this area during the frequent heavy rainfall that occurs during the wet season. The Hydrological Simulation Program-Fortran (HSPF) model was used to calculate the runoff flow and water quality in this basin. Using the water quality results, a load duration curve was constructed for the basin, the exceedance frequency of the water quality standard was calculated for each hydrologic condition class, and the percent reduction

  17. On long-only information-based portfolio diversification framework

    NASA Astrophysics Data System (ADS)

    Santos, Raphael A.; Takada, Hellinton H.

    2014-12-01

    Using the concepts from information theory, it is possible to improve the traditional frameworks for long-only asset allocation. In modern portfolio theory, the investor has two basic procedures: the choice of a portfolio that maximizes its risk-adjusted excess return or the mixed allocation between the maximum Sharpe portfolio and the risk-free asset. In the literature, the first procedure was already addressed using information theory. One contribution of this paper is the consideration of the second procedure in the information theory context. The performance of these approaches was compared with three traditional asset allocation methodologies: the Markowitz's mean-variance, the resampled mean-variance and the equally weighted portfolio. Using simulated and real data, the information theory-based methodologies were verified to be more robust when dealing with the estimation errors.

  18. A SAT Based Effective Algorithm for the Directed Hamiltonian Cycle Problem

    NASA Astrophysics Data System (ADS)

    Jäger, Gerold; Zhang, Weixiong

    The Hamiltonian cycle problem (HCP) is an important combinatorial problem with applications in many areas. While thorough theoretical and experimental analyses have been made on the HCP in undirected graphs, little is known for the HCP in directed graphs (DHCP). The contribution of this work is an effective algorithm for the DHCP. Our algorithm explores and exploits the close relationship between the DHCP and the Assignment Problem (AP) and utilizes a technique based on Boolean satisfiability (SAT). By combining effective algorithms for the AP and SAT, our algorithm significantly outperforms previous exact DHCP algorithms including an algorithm based on the award-winning Concorde TSP algorithm.

  19. A Slow Retrieval Algorithm for Satellite and Surface Based Instruments

    NASA Technical Reports Server (NTRS)

    Weaver, C.; Flittner, D.

    2007-01-01

    We present results of a retrieval algorithm for satellite and ground based instruments using the Arizona radiative transfer code. A state vector describing the atmospheric and surface condition is iteratively modified until the calculated radiances match the observed values. Elements of the state vector include: aerosol concentrations, radius, optical properties, mass-weighted altitudes, chlorophyll concentration and wind speed. While computationally expensive, many assumptions used in other retrieval algorithms are not invoked. We present co-located retrievals for MODIS, SeaWiFS and nearby AERONET sites. MODIS AQUA and SeaWiFS: Ten MODIS (0.412-2.110 microns) and eight SeaWiFS (0.412-0.865 microns) radiances include channels where aerosols absorb and reflect radiation. We focus on retrieving biomass burning aerosols that are advected over the open ocean. Since chlorophyll absorbs at frequencies where black carbon absorbs, our retrieval algorithm accounts for chlorophyll absorption by simultaneously retrieving both aerosol and chlorophyll amounts. Our retrieved chlorophyll concentrations are similar to those from the Ocean Color Group. AERONET: Both almucantar and principal plane radiances are used to retrieve the state of the atmosphere and ocean conditions. Our retrieved aerosol size distributions and optical properties are consistent with the aerosol inversions from the AERONET group.

  20. Building simplification algorithms based on user cognition in mobile environment

    NASA Astrophysics Data System (ADS)

    Shen, Jie; Shi, Junfei; Wang, Meizhen; Wu, Chenyan

    2008-10-01

    With the development of LBS, mobile maps should adaptively satisfy users' cognitive requirements. User cognition in a mobile environment is much more objective-oriented and also imposes a heavier burden than user cognition in a static environment. The holistic ideas and methods of map generalization are not fully suitable for mobile maps. This paper takes building simplification in habitation generalization as an example, analyzes the characteristics of user cognition in a mobile environment and the basic rules of building simplification, surveys the state of the art of building simplification algorithms in static and mobile environments, and puts forward the idea of hierarchical building simplification based on user cognition. The Hunan Road business district of Nanjing was used as the test area, building data in ESRI shapefile format as the test data, and the simplification algorithm was implemented. The method takes the user as the center, calculates the distance between the user and the building to be simplified, and uses that distance as the basis for choosing different simplification algorithms for different spaces. This contribution aims to present buildings hierarchically at different levels of detail through real-time simplification.

  1. Sonoluminescence Bubble Measurements using Vision-Based Algorithms

    NASA Technical Reports Server (NTRS)

    Hall, Nancy R.; Mackey, Jeffrey R.; Matula, Thomas J.

    2003-01-01

    Vision-based measurement methods were used to measure bubble sizes in this sonoluminescence experiment. Bubble imaging was accomplished by placing the bubble between a bright light source and a microscope-CCD camera system. A collimated light-emitting diode was operated in a pulsed mode with an adjustable time delay with respect to the piezoelectric transducer drive signal. The light-emitting diode produced a bubble shadowgraph consisting of a multiple exposure made by numerous light pulses imaged onto a charge-coupled device camera. Each image was transferred from the camera to a computer-controlled machine vision system via a frame grabber. The frame grabber was equipped with on-board memory to accommodate sequential image buffering while images were transferred to the host processor and analyzed. This configuration allowed the host computer to perform diameter measurements, centroid position measurements and shape estimation in "real-time" as the next image was being acquired. Bubble size measurement accuracy with an uncertainty of 3 microns was achieved using standard lenses and machine vision algorithms. Bubble centroid position accuracy was also within the 3 micron tolerance of the vision system. This uncertainty estimation accounted for the optical spatial resolution, digitization errors and the edge detection algorithm accuracy. The vision algorithms include camera calibration, thresholding, edge detection, edge position determination, distance between two edges computations and centroid position computations.
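
    A simplified sketch of the measurement chain described above: threshold the shadowgraph, keep the largest connected blob, and report its centroid and an equivalent diameter. The synthetic image and the pixel-to-micron calibration factor are assumptions; the original system's calibration and edge-detection details are not reproduced.

        import numpy as np
        from scipy import ndimage

        img = np.full((200, 200), 220, dtype=np.uint8)           # bright background
        yy, xx = np.mgrid[:200, :200]
        img[(yy - 90) ** 2 + (xx - 120) ** 2 < 30 ** 2] = 40     # dark bubble shadow

        mask = img < 128                                         # thresholding
        labels, n = ndimage.label(mask)
        largest = np.argmax(ndimage.sum(mask, labels, range(1, n + 1))) + 1
        cy, cx = ndimage.center_of_mass(labels == largest)       # centroid in pixels
        area = np.count_nonzero(labels == largest)
        diameter_px = 2.0 * np.sqrt(area / np.pi)                # equivalent diameter
        microns_per_px = 3.0                                     # assumed calibration
        print(cy, cx, diameter_px * microns_per_px)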

  2. An Evolved Wavelet Library Based on Genetic Algorithm

    PubMed Central

    Vaithiyanathan, D.; Seshasayanan, R.; Kunaraj, K.; Keerthiga, J.

    2014-01-01

    As the size of the images being captured increases, there is a need for a robust image compression algorithm that satisfies the bandwidth limitations of the transmission channels and preserves the image resolution without considerable loss in image quality. Many conventional image compression algorithms use the wavelet transform, which can significantly reduce the number of bits needed to represent a pixel, and the processes of quantization and thresholding further increase the compression. In this paper the authors evolve two sets of wavelet filter coefficients using a genetic algorithm (GA), one for the whole image except the edge areas and the other for the portions near the edges in the image (i.e., global and local filters). Images are initially separated into several groups based on their frequency content, edges, and textures, and the wavelet filter coefficients are evolved separately for each group. As there is a possibility of the GA settling in a local maximum, we introduce a new shuffling operator to prevent this effect. The GA used to evolve the filter coefficients primarily focuses on maximizing the peak signal-to-noise ratio (PSNR). The filter coefficients evolved by the proposed method outperform the existing methods by a 0.31 dB improvement in the average PSNR and a 0.39 dB improvement in the maximum PSNR. PMID:25405225

  3. Novel similarity-based clustering algorithm for grouping broadcast news

    NASA Astrophysics Data System (ADS)

    Ibrahimov, Oktay V.; Sethi, Ishwar K.; Dimitrova, Nevenka

    2002-03-01

    The goal of the current paper is to introduce a novel clustering algorithm designed for grouping transcribed textual documents obtained from audio and video segments. Since audio transcripts are normally highly erroneous documents, one of the major challenges at the text processing stage is to reduce the negative impact of errors introduced at the speech recognition stage. Other difficulties come from the nature of conversational speech. In the paper we describe the main difficulties of spoken documents and suggest an approach that restricts their negative effects. We also present a clustering algorithm that groups transcripts on the basis of the informative closeness of documents. To carry out such partitioning we give an intuitive definition of the informative field of a transcript and use it in our algorithm. To assess the informative closeness of transcripts, we apply a chi-square similarity measure, which is also described in the paper. Our experiments with the chi-square similarity measure showed its robustness and high efficacy. In particular, a performance analysis carried out for chi-square and three other similarity measures (cosine, Dice, and Jaccard) showed that chi-square is more robust to the specific features of spoken documents.
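
    A small sketch of a chi-square style dissimilarity between two transcripts represented as term-frequency distributions; the tokenization and the exact normalization used in the cited paper are assumptions.

        from collections import Counter

        def chi_square_distance(doc_a, doc_b):
            ta, tb = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
            na, nb = sum(ta.values()), sum(tb.values())
            d = 0.0
            for w in set(ta) | set(tb):
                pa, pb = ta[w] / na, tb[w] / nb      # relative term frequencies
                d += (pa - pb) ** 2 / (pa + pb)      # chi-square style term
            return d                                 # smaller = more similar

        print(chi_square_distance("storm hits coast tonight storm",
                                  "coast braces as storm nears"))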

  4. Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization

    PubMed Central

    Chiu, Chung-Cheng; Ting, Chih-Chung

    2016-01-01

    Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to an unnatural look and loss of detail in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It builds on a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves the enhancement effects of VCEA. CegaHE adjusts the gaps between two gray values based on an adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. Besides, it also alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods. PMID:27338412
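
    For reference, the sketch below is plain histogram equalization; the gap-adjustment step that distinguishes CegaHE from HE is specific to the cited paper and is not reproduced here. The test image and dtype handling are illustrative assumptions.

        import numpy as np

        def equalize(img):
            hist = np.bincount(img.ravel(), minlength=256)
            cdf = hist.cumsum().astype(np.float64)
            cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalized CDF in [0, 1]
            lut = np.round(cdf * 255).astype(np.uint8)          # gray-level mapping
            return lut[img]

        img = np.random.randint(60, 120, size=(64, 64)).astype(np.uint8)   # low-contrast input
        print(img.min(), img.max(), equalize(img).min(), equalize(img).max())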

  5. Performance evaluation of algorithms for SAW-based temperature measurement.

    PubMed

    Schuster, Stefan; Scheiblhofer, Stefan; Reindl, Leonhard; Stelzer, Andreas

    2006-06-01

    Whenever harsh environmental conditions such as high temperatures, accelerations, radiation, etc., prohibit usage of standard temperature sensors, surface acoustic wave-based temperature sensors are the first choice for highly reliable wireless temperature measurement. Interrogation of these sensors is often based on frequency modulated or frequency stepped continuous wave-based radars (FMCW/FSCW). We investigate known algorithms regarding their achievable temperature accuracy and their applicability in practice. Furthermore, some general rules of thumb for FMCW/FSCW radar-based range estimation by means of the Cramer-Rao lower bound (CRLB) for frequency and phase estimation are provided. The theoretical results are verified on both simulated and measured data. PMID:16846150

  6. Visual tracking method based on cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm based on the obligate brood-parasitic behavior of some cuckoo species in combination with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS to the visual tracking problem is presented. The relationship between optimization and visual tracking is comparatively studied, and the sensitivity and adjustment of the CS parameters in the tracking system are studied experimentally. To demonstrate the tracking ability of a CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker against six state-of-the-art trackers, namely, the particle filter, mean-shift, PSO, ensemble, fragments, and compressive trackers, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.
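
    An illustrative Lévy-flight step of the kind cuckoo search uses to propose new candidate states (here, a 2D target position), generated with Mantegna's algorithm; the beta value, step scale and positions are common default assumptions rather than the paper's settings.

        import numpy as np
        from math import gamma, sin, pi

        def levy_step(dim, beta=1.5, rng=np.random.default_rng()):
            sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                     (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = rng.normal(0, sigma, dim)
            v = rng.normal(0, 1, dim)
            return u / np.abs(v) ** (1 / beta)       # heavy-tailed step lengths

        best = np.array([120.0, 80.0])               # current best target position (assumed)
        other = np.array([118.0, 83.0])              # another candidate solution (assumed)
        candidate = best + 0.01 * levy_step(2) * (best - other)
        print(candidate)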

  7. Primitive fitting based on the efficient multiBaySAC algorithm.

    PubMed

    Kang, Zhizhong; Li, Zhen

    2015-01-01

    Although RANSAC is proven to be robust, the original RANSAC algorithm selects hypothesis sets at random, generating numerous iterations and high computational costs because many hypothesis sets are contaminated with outliers. This paper presents a conditional sampling method, multiBaySAC (Bayes SAmple Consensus), that fuses the BaySAC algorithm with candidate model parameters statistical testing for unorganized 3D point clouds to fit multiple primitives. This paper first presents a statistical testing algorithm for a candidate model parameter histogram to detect potential primitives. As the detected initial primitives were optimized using a parallel strategy rather than a sequential one, every data point in the multiBaySAC algorithm was assigned to multiple prior inlier probabilities for initial multiple primitives. Each prior inlier probability determined the probability that a point belongs to the corresponding primitive. We then implemented in parallel a conditional sampling method: BaySAC. With each iteration of the hypothesis testing process, hypothesis sets with the highest inlier probabilities were selected and verified for the existence of multiple primitives, revealing the fitting for multiple primitives. Moreover, the updated version of the initial probability was implemented based on a memorable form of Bayes' Theorem, which describes the relationship between prior and posterior probabilities of a data point by determining whether the hypothesis set to which a data point belongs is correct. The proposed approach was tested using real and synthetic point clouds. The results show that the proposed multiBaySAC algorithm can achieve a high computational efficiency (averaging 34% higher than the efficiency of the sequential RANSAC method) and fitting accuracy (exhibiting good performance in the intersection of two primitives), whereas the sequential RANSAC framework clearly suffers from over- and under-segmentation problems. Future work will aim at further

  8. A Hadoop-Based Algorithm of Generating DEM Grid from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.

    2015-04-01

    Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy and significantly detailed surface information on terrain and surface objects within a short time, from which high-quality digital elevation models (DEMs) can be extracted. Point cloud data generated from the pre-processed data must be classified by segmentation algorithms so as to separate terrain points from other points, followed by a procedure that interpolates the selected points into DEM data. The whole procedure takes a long time and large computing resources because of the high point density, an issue that a number of studies have focused on. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is appropriate for improving the efficiency of DEM generation algorithms. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner were utilized as the original data to generate a DEM with a Hadoop-based algorithm implemented on Linux, followed by a traditional procedure programmed in C++ as a comparative experiment. The algorithms' efficiency, coding complexity, and performance-cost ratio were then discussed for the comparison. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid; the non-Hadoop implementation can achieve high performance when memory is large enough, whereas the Hadoop implementation achieves a higher performance-cost ratio when the point set is very large.
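
    A sketch of the map/reduce decomposition for gridding ground points into a DEM, written in the style of Hadoop Streaming (stdin/stdout, tab-separated records): the mapper bins each point into a grid cell and the reducer averages elevations per cell. The cell size, input format and averaging rule are assumptions, not the implementation of the cited paper.

        import sys

        CELL = 1.0   # DEM grid spacing in metres (assumed)

        def mapper(lines):
            for line in lines:                       # one "x y z" point per line
                x, y, z = map(float, line.split())
                yield f"{int(x // CELL)}_{int(y // CELL)}\t{z}"

        def reducer(sorted_lines):
            key, total, count = None, 0.0, 0
            for line in sorted_lines:                # Hadoop delivers keys grouped
                k, z = line.split("\t")
                if k != key and key is not None:
                    yield f"{key}\t{total / count}"
                    total, count = 0.0, 0
                key, total, count = k, total + float(z), count + 1
            if key is not None:
                yield f"{key}\t{total / count}"

        if __name__ == "__main__":
            step = mapper if (len(sys.argv) < 2 or sys.argv[1] == "map") else reducer
            sys.stdout.writelines(out + "\n" for out in step(sys.stdin))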

  9. Primitive Fitting Based on the Efficient multiBaySAC Algorithm

    PubMed Central

    Kang, Zhizhong; Li, Zhen

    2015-01-01

    Although RANSAC is proven to be robust, the original RANSAC algorithm selects hypothesis sets at random, generating numerous iterations and high computational costs because many hypothesis sets are contaminated with outliers. This paper presents a conditional sampling method, multiBaySAC (Bayes SAmple Consensus), that fuses the BaySAC algorithm with candidate model parameters statistical testing for unorganized 3D point clouds to fit multiple primitives. This paper first presents a statistical testing algorithm for a candidate model parameter histogram to detect potential primitives. As the detected initial primitives were optimized using a parallel strategy rather than a sequential one, every data point in the multiBaySAC algorithm was assigned to multiple prior inlier probabilities for initial multiple primitives. Each prior inlier probability determined the probability that a point belongs to the corresponding primitive. We then implemented in parallel a conditional sampling method: BaySAC. With each iteration of the hypothesis testing process, hypothesis sets with the highest inlier probabilities were selected and verified for the existence of multiple primitives, revealing the fitting for multiple primitives. Moreover, the updated version of the initial probability was implemented based on a memorable form of Bayes’ Theorem, which describes the relationship between prior and posterior probabilities of a data point by determining whether the hypothesis set to which a data point belongs is correct. The proposed approach was tested using real and synthetic point clouds. The results show that the proposed multiBaySAC algorithm can achieve a high computational efficiency (averaging 34% higher than the efficiency of the sequential RANSAC method) and fitting accuracy (exhibiting good performance in the intersection of two primitives), whereas the sequential RANSAC framework clearly suffers from over- and under-segmentation problems. Future work will aim at further

  10. Teachers Implementing Context-Based Teaching Materials: A Framework for Case-Analysis in Chemistry

    ERIC Educational Resources Information Center

    Vos, Martin A. J.; Taconis, Ruurd; Jochems, Wim M. G.; Pilot, Albert

    2010-01-01

    We present a framework for analysing the interplay between context-based teaching material and teachers, and for evaluating the adequacy of the resulting implementation of context-based pedagogy in chemistry classroom practice. The development of the framework is described, including an account of its theoretical foundations. The framework needs…

  11. A remote quantitative Fugl-Meyer assessment framework for stroke patients based on wearable sensor networks.

    PubMed

    Yu, Lei; Xiong, Daxi; Guo, Liquan; Wang, Jiping

    2016-05-01

    To extend the use of wearable sensor networks for stroke patient training and assessment in non-clinical settings, this paper proposes a novel remote quantitative Fugl-Meyer assessment (FMA) framework, in which two accelerometers and seven flex sensors were used to monitor the movement function of the upper limb, wrist and fingers. An extreme learning machine based ensemble regression model was established to map the sensor data to clinical FMA scores, while the RRelief algorithm was applied to find the optimal feature subset. Because the FMA scale is time-consuming and complicated, seven training exercises were designed to replace the 33 upper-limb-related items in the FMA scale. Twenty-four stroke inpatients participated in the experiments in clinical settings, and 5 of them were also involved in experiments in home settings after they left the hospital. The experimental results in both clinical and home settings showed that the proposed quantitative FMA model can precisely predict FMA scores from wearable sensor data; the coefficient of determination can reach as high as 0.917. This indicates that the proposed framework can provide a potential approach to remote quantitative rehabilitation training and evaluation.
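
    A minimal extreme learning machine regressor of the kind mentioned above (random hidden layer, least-squares output weights); the feature dimension, hidden-layer size and synthetic data are assumptions, and the ensemble and RRelief feature-selection stages are not reproduced.

        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.normal(size=(120, 9))                    # e.g. 9 sensor-derived features (assumed)
        y = X @ rng.normal(size=9) + 0.1 * rng.normal(size=120)   # synthetic stand-in scores

        n_hidden = 50
        W = rng.normal(size=(9, n_hidden))               # random input weights (kept fixed)
        b = rng.normal(size=n_hidden)
        H = np.tanh(X @ W + b)                           # hidden-layer activations
        beta = np.linalg.pinv(H) @ y                     # output weights via pseudoinverse

        pred = np.tanh(X @ W + b) @ beta
        print(np.corrcoef(y, pred)[0, 1] ** 2)           # squared correlation as a rough R^2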

  12. An active contour framework based on the Hermite transform for shape segmentation of cardiac MR images

    NASA Astrophysics Data System (ADS)

    Barba-J, Leiner; Escalante-Ramírez, Boris

    2016-04-01

    Early detection of cardiac affections is fundamental for prescribing a correct treatment that allows preserving the patient's life. Since heart disease is one of the main causes of death in most countries, analysis of cardiac images is of great value for cardiac assessment. Cardiac MR has become essential for heart evaluation. In this work we present a segmentation framework for shape analysis in cardiac magnetic resonance (MR) images. The method consists of an active contour model that is guided by the spectral coefficients obtained from the Hermite transform (HT) of the data. The HT is used as a model to code image features of the analyzed images. Region- and boundary-based energies are coded using the zero- and first-order coefficients. An additional shape constraint based on an elliptical function is used to control the active contour deformations. The proposed framework is applied to the segmentation of the endocardial and epicardial boundaries of the left ventricle using MR images with a short-axis view. The segmentation is sequential for both regions: the endocardium is segmented first, followed by the epicardium. The algorithm is evaluated with several MR images at different phases of the cardiac cycle, demonstrating the effectiveness of the proposed method. Several metrics are used for performance evaluation.

  13. A Decision Support Framework For Science-Based, Multi-Stakeholder Deliberation: A Coral Reef Example

    EPA Science Inventory

    We present a decision support framework for science-based assessment and multi-stakeholder deliberation. The framework consists of two parts: a DPSIR (Drivers-Pressures-States-Impacts-Responses) analysis to identify the important causal relationships among anthropogenic environ...

  14. O-buffer: a framework for sample-based graphics.

    PubMed

    Qu, Huamin; Kaufman, Arie E

    2004-01-01

    We present an innovative modeling and rendering primitive, called the O-buffer, as a framework for sample-based graphics. The 2D or 3D O-buffer is, in essence, a conventional image or a volume, respectively, except that samples are not restricted to a regular grid. A sample position in the O-buffer is recorded as an offset to the nearest grid point of a regular base grid (hence the name O-buffer). The O-buffer can greatly improve the expressive power of images and volumes. Image quality can be improved by storing more spatial information with samples and by avoiding multiple resamplings. It can be exploited to represent and render unstructured primitives, such as points, particles, and curvilinear or irregular volumes. The O-buffer is therefore a unified representation for a variety of graphics primitives and supports mixing them in the same scene. It is a semiregular structure which lends itself to efficient construction and rendering. O-buffers may assume a variety of forms including 2D O-buffers, 3D O-buffers, uniform O-buffers, nonuniform O-buffers, adaptive O-buffers, layered-depth O-buffers, and O-buffer trees. We demonstrate the effectiveness of the O-buffer in a variety of applications, such as image-based rendering, point sample rendering, and volume rendering. PMID:18579969

  15. Custodianship as an Ethical Framework for Biospecimen-Based Research

    PubMed Central

    Yassin, Rihab; Lockhart, Nicole; Riego, Mariana González del; Pitt, Karen; Thomas, Jeffrey W.; Weiss, Linda; Compton, Carolyn

    2010-01-01

    Human biological specimens (biospecimens) are increasingly important for research that aims to advance human health. Yet, despite significant proliferation in specimen-based research and discoveries during the past decade, research remains challenged by inequitable access to high-quality biospecimens that are collected under rigorous ethical standards. This is primarily caused by the complex level of control and ownership exerted by the myriad of stakeholders involved in the biospecimen research process. This article discusses the ethical model of custodianship as a framework for biospecimen-based research to promote fair research access and resolve issues of control and potential conflicts between biobanks, investigators, human research participants (human subjects), and sponsors. Custodianship is the caretaking obligation for biospecimens from initial collection to final dissemination of research findings. It endorses key practices and operating principles for responsible oversight of biospecimens collected for research. Embracing the custodial model would ensure transparency in research, fairness to human research participants, and shared accountability among all stakeholders involved in biospecimen-based research. PMID:20332272

  16. Algorithm for detecting human faces based on convex-hull

    NASA Astrophysics Data System (ADS)

    Park, Minsick; Park, Chang-Woo; Park, Mignon; Lee, Chang-Hoon

    2002-03-01

    In this paper, we propose a new method to detect faces in color images based on the convex hull. We detect two kinds of regions: skin-like and hair-like regions. After preprocessing, we apply the convex hull to these regions and find a face from their intersection relationship. The proposed algorithm can accomplish face detection in images containing rotated and turned faces as well as several faces. To validate the effectiveness of the proposed method, we conducted experiments with various cases.

  17. Research of optical rotation measurement system based on centroid algorithm

    NASA Astrophysics Data System (ADS)

    Cao, Junjie; Jia, Hongzhi; Shen, Xinrong; Jiang, Shixin

    2016-09-01

    An optical rotation measurement system based on a digital signal processor, a modulated laser, and a stepper-motor rotating stage is established. A centroid algorithm, featuring fast and simple calculation, is introduced to process the light signals measured with and without the sample and to obtain the optical rotation angle from the step difference between the two centroids. The system performance is verified experimentally with standard quartz tubes and glucose solutions. After various measurements, the relative error and precision of the system are determined to be 0.4% and 0.004°, which demonstrates the reliable repeatability and high accuracy of the whole measurement system.
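
    A small sketch of the centroid idea: the optical rotation is read off as the shift between the centroids of the detected intensity profile measured with and without the sample. The Gaussian profiles and the angle axis are synthetic assumptions, not the instrument's actual signals.

        import numpy as np

        angles = np.linspace(-5.0, 5.0, 2001)                         # degrees
        profile = lambda c: np.exp(-0.5 * ((angles - c) / 0.8) ** 2)  # toy detector signal

        reference = profile(0.00)            # without sample
        sample = profile(1.37)               # with optically active sample

        centroid = lambda s: np.sum(angles * s) / np.sum(s)
        rotation = centroid(sample) - centroid(reference)
        print(round(rotation, 4))            # optical rotation in degrees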

  18. Polygon star identification based on ant colony algorithm

    NASA Astrophysics Data System (ADS)

    Ma, Baolin; Wu, Jie; Zhang, Hongbo

    2014-11-01

    In order to enhance the star identification rate under different fields of view and reduce memory storage, this paper presents a polygon star identification method based on the ant colony optimization (ACO) algorithm. First, a fast cluster analysis is performed. Second, an argument is calculated for each guide star, and the advantages of ACO in fast path optimization are used to build the feature polygon. Third, the optimization results are compared with the optimization data of the guide database to perform matching and identification. Simulations show that the method simplifies the search process and the storage structure while preserving the completeness of the characteristic patterns of the star image. Its robustness and reliability are better than those of traditional triangle identification.

  19. Research on Palmprint Identification Method Based on Quantum Algorithms

    PubMed Central

    Zhang, Zhanzhan

    2014-01-01

    Quantum image recognition is a technology that uses quantum algorithms to process image information. It can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that the quantum filtering algorithm achieves a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation thanks to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and the Grover algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm only needs on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165
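
    A classical state-vector simulation of Grover search illustrates the square-root-of-N behaviour referred to above; the database size and marked item are toy assumptions, and this is not the palmprint-matching pipeline itself.

        import numpy as np

        N, target = 256, 77
        state = np.full(N, 1 / np.sqrt(N))              # uniform superposition
        iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))

        for _ in range(iterations):
            state[target] *= -1                         # oracle: flip the marked amplitude
            state = 2 * state.mean() - state            # inversion about the mean

        print(iterations, float(state[target] ** 2))    # ~12 iterations, probability near 1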

  20. Algorithm-Based Fault Tolerance for Numerical Subroutines

    NASA Technical Reports Server (NTRS)

    Tumon, Michael; Granat, Robert; Lou, John

    2007-01-01

    A software library implements a new methodology for detecting faults in numerical subroutines, thus enabling application programs that contain the subroutines to recover transparently from single-event upsets. The software library in question is fault-detecting middleware that is wrapped around the numerical subroutines. Conventional serial versions (based on LAPACK and FFTW) and a parallel version (based on ScaLAPACK) exist. The source code of the application program that contains the numerical subroutines is not modified, and the middleware is transparent to the user. The methodology used is a type of algorithm-based fault tolerance (ABFT). In ABFT, a checksum is computed before a computation and compared with the checksum of the computational result; an error is declared if the difference between the checksums exceeds some threshold. Novel normalization methods are used in the checksum comparison to ensure correct fault detection independent of algorithm inputs. In tests of this software reported in the peer-reviewed literature, the library was shown to enable detection of 99.9 percent of significant faults while generating no false alarms.
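
    The classic ABFT sketch below carries row/column checksums through a matrix multiplication and compares them afterwards; the threshold and the commented fault-injection line are illustrative assumptions, and the library's own normalization scheme is more sophisticated than this.

        import numpy as np

        rng = np.random.default_rng(0)
        A, B = rng.normal(size=(50, 40)), rng.normal(size=(40, 60))

        col_check = np.ones(50) @ A          # column checksums of A (a 40-vector)
        row_check = B @ np.ones(60)          # row checksums of B (a 40-vector)

        C = A @ B
        # C[5, 7] += 1e3                     # uncomment to inject a single-event upset

        expected = col_check @ row_check     # equals the sum of all entries of C if fault-free
        actual = C.sum()
        print(abs(expected - actual) < 1e-6 * abs(expected))   # True when no fault is detected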

  1. Human emotion detector based on genetic algorithm using lip features

    NASA Astrophysics Data System (ADS)

    Brown, Terrence; Fetanat, Gholamreza; Homaifar, Abdollah; Tsou, Brian; Mendoza-Schrock, Olga

    2010-04-01

    We predicted human emotion using a genetic algorithm (GA) based lip feature extractor applied to facial images to classify all seven universal emotions: fear, happiness, dislike, surprise, anger, sadness and neutrality. First, we isolated the mouth from the input images using methods such as region of interest (ROI) acquisition, grayscaling, histogram equalization, filtering, and edge detection. Next, the GA determined the optimal or near-optimal ellipse parameters that enclose the mouth and separate it into upper and lower lips. The two ellipses then went through fitness calculation, followed by training using a database of Japanese women's faces expressing all seven emotions. Finally, our proposed algorithm was tested using a published database consisting of emotions from several persons. The final results are presented in confusion matrices. Our results show an accuracy that varies from 20% to 60% across the seven emotions. The errors were mainly due to inaccuracies in the classification and to the different expressions in the given emotion database. Detailed analysis of these errors pointed to the limitation of detecting emotion based on lip features alone. Similar work [1] has been done in the literature for emotion detection in only one person; we have successfully extended our GA-based solution to include several subjects.

  2. Registration algorithm of point clouds based on multiscale normal features

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua

    2015-01-01

    The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed registration method mainly includes three parts: the selection of key points, the calculation of feature descriptors, and the determining and optimization of correspondences. First, key points are selected from the point cloud based on the changes of magnitude of multiscale curvatures obtained by using principal components analysis. Then the feature descriptor of each key point is proposed, which consists of 21 elements based on multiscale normal vectors and curvatures. The correspondences in a pair of two point clouds are determined according to the descriptor's similarity of key points in the source point cloud and target point cloud. Correspondences are optimized by using a random sampling consistency algorithm and clustering technology. Finally, singular value decomposition is applied to optimized correspondences so that the rigid transformation matrix between two point clouds is obtained. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better antinoise performance.
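
    A small sketch of the final step described above: given matched key-point pairs, the rigid transform is recovered in closed form with an SVD (the Kabsch solution). The synthetic correspondences are assumptions; descriptor matching and RANSAC filtering are not shown.

        import numpy as np

        def rigid_transform(src, dst):
            """Return R, t minimizing sum ||R @ src_i + t - dst_i||^2."""
            cs, cd = src.mean(axis=0), dst.mean(axis=0)
            H = (src - cs).T @ (dst - cd)                # cross-covariance of centered sets
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                     # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, cd - R @ cs

        rng = np.random.default_rng(3)
        src = rng.normal(size=(30, 3))
        theta = 0.4
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                           [np.sin(theta),  np.cos(theta), 0.0],
                           [0.0, 0.0, 1.0]])
        dst = src @ R_true.T + np.array([0.5, -1.0, 2.0])
        R, t = rigid_transform(src, dst)
        print(np.allclose(R, R_true), np.round(t, 3))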

  3. A registration based nonuniformity correction algorithm for infrared line scanner

    NASA Astrophysics Data System (ADS)

    Liu, Zhe; Ma, Yong; Huang, Jun; Fan, Fan; Ma, Jiayi

    2016-05-01

    A scene-based algorithm is developed for nonuniformity correction in focal plane of line scanning infrared imaging systems (LSIR) based on registration. By utilizing the 2D shift between consecutive frames, an implicit scheme is proposed to determine correction coefficients. All nonuniform biases are corrected to the same designated value, without estimating and removing biases explicitly, permitting quick computation for high-quality nonuniformity correction. Firstly, scene motion is estimated by image registration and consecutive frames exhibiting required 2D subpixel shift are collected. Secondly, we retrieve the difference matrix of adjacent biases by utilizing the 2D shift between consecutive frames. Thirdly, we perform specified elementary transformations and corresponding cumulative sums to the difference matrix to obtain a bias compensator. This bias compensator converts nonuniform biases to a designated detector's bias. Finally, based on the different bias compensators obtained from several frame pairs, we calculate an averaged bias compensator for nonuniformity correction with less error. Quantitative comparisons with other nonuniformity correction methods demonstrate that the proposed algorithm achieves better fixed-pattern noise reduction with low computational complexity.

  4. A trait-based framework for stream algal communities.

    PubMed

    Lange, Katharina; Townsend, Colin Richard; Matthaei, Christoph David

    2016-01-01

    The use of trait-based approaches to detect effects of land use and climate change on terrestrial plant and aquatic phytoplankton communities is increasing, but such a framework is still needed for benthic stream algae. Here we present a conceptual framework of morphological, physiological, behavioural and life-history traits relating to resource acquisition and resistance to disturbance. We tested this approach by assessing the relationships between multiple anthropogenic stressors and algal traits at 43 stream sites. Our "natural experiment" was conducted along gradients of agricultural land-use intensity (0-95% of the catchment in high-producing pasture) and hydrological alteration (0-92% streamflow reduction resulting from water abstraction for irrigation) as well as related physicochemical variables (total nitrogen concentration and deposited fine sediment). Strategic choice of study sites meant that agricultural intensity and hydrological alteration were uncorrelated. We studied the relationships of seven traits (with 23 trait categories) to our environmental predictor variables using general linear models and an information-theoretic model-selection approach. Life form, nitrogen fixation and spore formation were key traits that showed the strongest relationships with environmental stressors. Overall, FI (farming intensity) exerted stronger effects on algal communities than hydrological alteration. The large-bodied, non-attached, filamentous algae that dominated under high farming intensities have limited dispersal abilities but may cope with unfavourable conditions through the formation of spores. Antagonistic interactions between FI and flow reduction were observed for some trait variables, whereas no interactions occurred for nitrogen concentration and fine sediment. Our conceptual framework was well supported by tests of ten specific hypotheses predicting effects of resource supply and disturbance on algal traits. Our study also shows that investigating a

  5. A MAP-based algorithm for spectroscopic semi-blind deconvolution.

    PubMed

    Liu, Hai; Zhang, Tianxu; Yan, Luxin; Fang, Houzhang; Chang, Yi

    2012-08-21

    Spectroscopic data often suffer from the common problems of overlapping bands and random noise. In this paper, we show that the issue of overlapping peaks can be treated as a maximum a posteriori (MAP) problem and solved by minimizing an objective functional that includes a likelihood term and two prior terms. In the MAP framework, the likelihood probability density function (PDF) is constructed based on a spectral observation model, a robust Huber-Markov model is used as the spectral prior PDF, and the kernel prior is described by a parametric Gaussian function. Moreover, we describe an efficient optimization scheme that alternates between latent spectrum recovery and blur kernel estimation until convergence. The major novelty of the proposed algorithm is that it can estimate the kernel slit width and the latent spectrum simultaneously. Comparative results with other deconvolution methods suggest that the proposed method can recover spectral structural details as well as suppress noise effectively. PMID:22768389
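
    A plausible form of the MAP objective implied by the abstract, written under the stated assumptions (Gaussian observation noise, a Huber-Markov prior on the spectrum gradient, and a Gaussian kernel of slit width w with prior width w0); the exact terms and weights used in the paper may differ:

```latex
(\hat{x}, \hat{w}) \;=\; \arg\min_{x,\,w}\;
\frac{1}{2\sigma^{2}}\,\bigl\lVert\, y - k_{w} \ast x \,\bigr\rVert_{2}^{2}
\;+\; \lambda \sum_{i} \rho_{\mathrm{Huber}}\!\bigl((\nabla x)_{i}\bigr)
\;+\; \mu\,(w - w_{0})^{2}
```

    Here y is the observed spectrum, x the latent spectrum, and k_w the parametric Gaussian kernel; alternating minimization updates x with w fixed and then w with x fixed until convergence.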

  6. A hybrid skull-stripping algorithm based on adaptive balloon snake models

    NASA Astrophysics Data System (ADS)

    Liu, Hung-Ting; Sheu, Tony W. H.; Chang, Herng-Hua

    2013-02-01

    Skull-stripping is one of the most important preprocessing steps in neuroimage analysis. We propose a hybrid algorithm based on an adaptive balloon snake model to handle this challenging task. The proposed framework consists of two stages: first, the fuzzy possibilistic c-means (FPCM) is used for voxel clustering, which provides a labeled image for the snake contour initialization. In the second stage, the contour is initialized outside the brain surface based on the FPCM result and evolves under the guidance of the balloon snake model, which drives the contour with an adaptive inward normal force to capture the boundary of the brain. The similarity indices indicate that our method outperformed the BSE and BET methods in skull-stripping the MR image volumes in the IBSR data set. Experimental results show the effectiveness of this new scheme and its potential in a wide variety of skull-stripping scenarios.

  7. Microsystem design framework based on tool adaptations and library developments

    NASA Astrophysics Data System (ADS)

    Karam, Jean Michel; Courtois, Bernard; Rencz, Marta; Poppe, Andras; Szekely, Vladimir

    1996-09-01

    Besides foundry facilities, Computer-Aided Design (CAD) tools are also required to move microsystems from research prototypes to an industrial market. This paper describes a computer-aided design framework for microsystems, based on selected existing software packages adapted and extended for microsystem technology, assembled with libraries where models are available in the form of standard cells described at different levels (symbolic, system/behavioral, layout). In microelectronics, CAD has already attained a highly sophisticated and professional level, where complete fabrication sequences are simulated and the device and system operation is completely tested before manufacturing. In comparison, the art of microsystem design and modelling is still in its infancy. However, at least for the numerical simulation of the operation of single microsystem components, such as mechanical resonators, thermo-elements, and elastic diaphragms, reliable simulation tools are available. For the different engineering disciplines (electronics, mechanics, optics, etc.) many CAD tools for the design, simulation, and verification of specific devices are available, but there is no CAD environment within which a complete (micro)system simulation can be performed, because of the differing nature of the devices. In general, there are two approaches to overcoming this limitation: the first would be to develop a new framework tailored to microsystem engineering; the second, much more realistic, is to take the existing CAD tools that contain the most promising features and extend them so that they can be used for the simulation and verification of microsystems and the devices involved. These tools are assembled with libraries in a microsystem design environment allowing a continuous design flow. The approach is driven by the wish to make microsystems accessible to a large community of people, including SMEs and non-specialized academic institutions.

  8. A modified SUnSAL-TV algorithm for hyperspectral unmixing based on spatial homogeneity analysis

    NASA Astrophysics Data System (ADS)

    Yuqian, Wang; Zhenfeng, Shao; Lei, Zhang; Weixun, Zhou

    2014-03-01

    The sparse regression framework has been introduced in many works to solve the linear spectral unmixing problem, based on the knowledge that a pixel is usually mixed by fewer endmembers than are contained in spectral libraries or in the entire hyperspectral data set. Traditional sparse unmixing techniques focus on analyzing the spectral properties of hyperspectral imagery without incorporating spatial information, but integrating spatial information can improve the performance of the linear unmixing process. The algorithm called sparse unmixing via variable splitting augmented Lagrangian and total variation (SUnSAL-TV) adds a total variation spatial regularizer, in addition to the sparsity-inducing regularizer, to the unmixing objective function. The total variation spatial regularization helps promote smoothness of the fractional abundances. However, abundance smoothness varies across the image. In this paper, the spatial smoothness is estimated through homogeneity analysis, and the spatial regularizer is weighted for each pixel by a homogeneity index. The modified algorithm, called homogeneity-analysis-based SUnSAL-TV (SUnSAL-TVH), integrates the spatial information with a finer modelling of spatial smoothness and is expected to be less sensitive to noise and more stable. Experiments on synthetic data sets indicate the validity of our algorithm.
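
    A small sketch of how a per-pixel homogeneity index might be derived and used to weight the TV term, assuming the index is computed from the local variance of a representative band; the paper's exact homogeneity measure is not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def homogeneity_weights(band_image, window=5, eps=1e-6):
    """Per-pixel homogeneity index from local variance of one band:
    low local variance -> homogeneous region -> larger TV weight."""
    mean = uniform_filter(band_image, size=window)
    mean_sq = uniform_filter(band_image ** 2, size=window)
    local_var = np.maximum(mean_sq - mean ** 2, 0.0)
    return 1.0 / (1.0 + local_var / (local_var.mean() + eps))

def weighted_tv(abundance, weights):
    """Anisotropic total variation of one fractional-abundance map, weighted per pixel."""
    dx = np.abs(np.diff(abundance, axis=1, append=abundance[:, -1:]))
    dy = np.abs(np.diff(abundance, axis=0, append=abundance[-1:, :]))
    return float(np.sum(weights * (dx + dy)))
```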

  9. A novel image-domain-based cone-beam computed tomography enhancement algorithm

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Li, Tianfang; Yang, Yong; Heron, Dwight E.; Saiful Huq, M.

    2011-05-01

    Kilo-voltage (kV) cone-beam computed tomography (CBCT) plays an important role in image-guided radiotherapy. However, due to a large cone-beam angle, scatter effects significantly degrade the CBCT image quality and limit its clinical application. The goal of this study is to develop an image enhancement algorithm to reduce the low-frequency CBCT image artifacts, which are also called the bias field. The proposed algorithm is based on the hypothesis that image intensities of different types of materials in CBCT images are approximately globally uniform (in other words, a piecewise property). A maximum a posteriori probability framework was developed to estimate the bias field contribution from a given CBCT image. The performance of the proposed CBCT image enhancement method was tested using phantoms and clinical CBCT images. Compared to the original CBCT images, the corrected images using the proposed method achieved a more uniform intensity distribution within each tissue type and significantly reduced cupping and shading artifacts. In a head and a pelvic case, the proposed method reduced the Hounsfield unit (HU) errors within the region of interest from 300 HU to less than 60 HU. In a chest case, the HU errors were reduced from 460 HU to less than 110 HU. The proposed CBCT image enhancement algorithm demonstrated a promising result by the reduction of the scatter-induced low-frequency image artifacts commonly encountered in kV CBCT imaging.

  10. Vision-based vehicle detection and tracking algorithm design

    NASA Astrophysics Data System (ADS)

    Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi

    2009-12-01

    Vision-based detection of vehicles in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. Practical vehicle detection in a passenger car requires accurate and robust sensing performance. A multivehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filtering, feature detection, template matching, and epipolar constraint techniques in order to detect the corresponding pairs of vehicles. After the initial detection, the system executes a tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained based on the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.

  11. [MicroRNA Target Prediction Based on Support Vector Machine Ensemble Classification Algorithm of Under-sampling Technique].

    PubMed

    Chen, Zhiru; Hong, Wenxue

    2016-02-01

    Considering the low prediction accuracy on positive samples and the poor overall classification performance caused by the unbalanced sample data of microRNA (miRNA) targets, we propose a support vector machine with integrated under-sampling and weighting (SVM-IUSM) algorithm, an ensemble learning algorithm based on under-sampling. The algorithm adopts SVM as the base learner and AdaBoost as the integration framework, and embeds clustering-based under-sampling into the iterative process, aiming to reduce the degree of imbalance between positive and negative samples. Meanwhile, in the adaptive adjustment of sample weights, the SVM-IUSM algorithm eliminates abnormal negative samples with a robust sample-weight smoothing mechanism so as to avoid over-learning. Finally, the integrated miRNA target classifier is obtained by combining multiple weak classifiers through a voting mechanism. Experiments revealed that SVM-IUSM, compared with other algorithms on unbalanced data sets, not only improves the accuracy on positive targets and the overall classification performance, but also enhances the generalization ability of the miRNA target classifier. PMID:27382743
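
    A simplified sketch of the clustering-based under-sampling step and one SVM member of the ensemble, using scikit-learn; the AdaBoost weighting and the sample-weight smoothing described above are omitted, and the cluster count and kernel settings are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def cluster_undersample(X_neg, n_clusters, random_state=0):
    """Cluster the majority (negative) class and keep the sample closest to each
    cluster centre, so the reduced negative set preserves its overall structure."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(X_neg)
    keep = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(X_neg[members] - km.cluster_centers_[c], axis=1)
        keep.append(members[np.argmin(d)])
    return X_neg[keep]

def train_balanced_svm(X_pos, X_neg, C=1.0, gamma="scale"):
    """Train one SVM member of the ensemble on the positives plus an
    under-sampled negative set of the same size."""
    X_neg_small = cluster_undersample(X_neg, n_clusters=len(X_pos))
    X = np.vstack([X_pos, X_neg_small])
    y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_neg_small))])
    return SVC(C=C, gamma=gamma, probability=True).fit(X, y)
```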

  13. Framework for traits-based assessment in ecotoxicology.

    PubMed

    Rubach, Mascha N; Ashauer, Roman; Buchwalter, David B; De Lange, Hj; Hamer, Mick; Preuss, Thomas G; Töpke, Katrien; Maund, Stephen J

    2011-04-01

    A key challenge in ecotoxicology is to assess the potential risks of chemicals to the wide range of species in the environment on the basis of laboratory toxicity data derived from a limited number of species. These species are then assumed to be suitable surrogates for a wider class of related taxa. For example, Daphnia spp. are used as the indicator species for freshwater aquatic invertebrates. Extrapolation from these datasets to natural communities poses a challenge because the extent to which test species are representative of their various taxonomic groups is often largely unknown, and different taxonomic groups and chemicals are variously represented in the available datasets. Moreover, it has been recognized that physiological and ecological factors can each be powerful determinants of vulnerability to chemical stress, thus differentially influencing toxicant effects at the population and community level. Recently it was proposed that detailed study of species traits might eventually permit better understanding, and thus prediction, of the potential for adverse effects of chemicals to a wider range of organisms than those amenable for study in the laboratory. This line of inquiry stems in part from the ecology literature, in which species traits are being used for improved understanding of how communities are constructed, as well as how communities might respond to, and recover from, disturbance (see other articles in this issue). In the present work, we develop a framework for the application of traits-based assessment. The framework is based on the population vulnerability conceptual model of Van Straalen in which vulnerability is determined by traits that can be grouped into 3 major categories, i.e., external exposure, intrinsic sensitivity, and population sustainability. Within each of these major categories, we evaluate specific traits as well as how they could contribute to the assessment of the potential effects of a toxicant on an organism. We then

  13. A quantum mechanics-based algorithm for vessel segmentation in retinal images

    NASA Astrophysics Data System (ADS)

    Youssry, Akram; El-Rafei, Ahmed; Elramly, Salwa

    2016-06-01

    Blood vessel segmentation is an important step in retinal image analysis and is one of the steps required for computer-aided detection of ophthalmic diseases. In this paper, a novel quantum mechanics-based algorithm for retinal vessel segmentation is presented. The algorithm consists of three major steps. The first step is preprocessing to prepare the images for further processing. The second step is feature extraction, where a set of four features is generated at each image pixel; these features are then combined using a nonlinear transformation for dimensionality reduction. The final step applies a recently proposed quantum mechanics-based framework for image processing. In this step, pixels are mapped to quantum systems that are allowed to evolve from an initial state to a final state governed by Schrödinger's equation. The evolution is controlled by the Hamiltonian operator, which is a function of the extracted features at each pixel. A measurement step is subsequently performed to determine whether each pixel belongs to the vessel or non-vessel class. Several functional forms of the Hamiltonian are proposed, and the best-performing form is selected. The algorithm is tested on the publicly available DRIVE database. The average results for sensitivity, specificity, and accuracy are 80.29%, 97.34%, and 95.83%, respectively. These results are compared with some recently published techniques, showing the superior performance of the proposed method. Finally, the implementation of the algorithm on a quantum computer and the challenges facing this implementation are discussed.
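
    A toy sketch of the evolve-and-measure idea for a single pixel, assuming a simple single-qubit Hamiltonian whose off-diagonal coupling is proportional to the (already reduced) pixel feature; the functional forms explored in the paper are not reproduced here.

```python
import numpy as np
from scipy.linalg import expm

def vessel_probability(feature, t=1.0):
    """Map a scalar pixel feature to a single-qubit Hamiltonian, evolve |0> under
    Schrödinger's equation for time t, and return the probability of measuring |1>
    (interpreted here as 'vessel'). The Hamiltonian form is an illustrative choice."""
    H = np.array([[0.0, feature],
                  [feature, 0.0]], dtype=complex)   # off-diagonal coupling proportional to feature
    psi0 = np.array([1.0, 0.0], dtype=complex)      # initial state |0>
    psi_t = expm(-1j * H * t) @ psi0                 # unitary evolution exp(-iHt)
    return float(np.abs(psi_t[1]) ** 2)             # Born-rule measurement of |1>

# Example: stronger features rotate the state further toward the 'vessel' outcome.
for f in (0.1, 0.5, 1.2):
    print(f, vessel_probability(f))
```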

  14. An evidence-based conceptual framework of healthy cooking.

    PubMed

    Raber, Margaret; Chandra, Joya; Upadhyaya, Mudita; Schick, Vanessa; Strong, Larkin L; Durand, Casey; Sharma, Shreela

    2016-12-01

    Eating out of the home has been positively associated with body weight, obesity, and poor diet quality. While cooking at home has declined steadily over the last several decades, the benefits of home cooking have gained attention in recent years and many healthy cooking projects have emerged around the United States. The purpose of this study was to develop an evidence-based conceptual framework of healthy cooking behavior in relation to chronic disease prevention. A systematic review of the literature was undertaken using broad search terms. Studies analyzing the impact of cooking behaviors across a range of disciplines were included. Experts in the field reviewed the resulting constructs in a small focus group. The model was developed from the extant literature on the subject with 59 studies informing 5 individual constructs (frequency, techniques and methods, minimal usage, flavoring, and ingredient additions/replacements), further defined by a series of individual behaviors. Face validity of these constructs was supported by the focus group. A validated conceptual model is a significant step toward better understanding the relationship between cooking, disease and disease prevention and may serve as a base for future assessment tools and curricula. PMID:27413657

  15. Internal modelling under Risk-Based Capital (RBC) framework

    NASA Astrophysics Data System (ADS)

    Ling, Ang Siew; Hin, Pooi Ah

    2015-12-01

    Very often, the methods for internal modelling under the Risk-Based Capital framework make use of data in the form of a run-off triangle. The present research instead extracts, from a group of n customers, the historical data for the sum insured s_i of the i-th customer together with the amount paid y_ij and the amount a_ij reported but not yet paid in the j-th development year, for j = 1, 2, 3, 4, 5, 6. We model the future value (y_i,j+1, a_i,j+1) as dependent on the present-year value (y_ij, a_ij) and the sum insured s_i via a conditional distribution derived from a multivariate power-normal mixture distribution. For a group of given customers with different original purchase dates, the distribution of the aggregate claim liabilities may be obtained from the proposed model. The prediction interval based on this distribution is found to cover the observed aggregate claim liabilities well.

  16. Mathematical framework for activity-based cancer biomarkers.

    PubMed

    Kwong, Gabriel A; Dudani, Jaideep S; Carrodeguas, Emmanuel; Mazumdar, Eric V; Zekavat, Seyedeh M; Bhatia, Sangeeta N

    2015-10-13

    Advances in nanomedicine are providing sophisticated functions to precisely control the behavior of nanoscale drugs and diagnostics. Strategies that co-opt protease activity as molecular triggers are increasingly important in nanoparticle design, yet the pharmacokinetics of these systems are challenging to understand without a quantitative framework to reveal nonintuitive associations. We describe a multicompartment mathematical model to predict strategies for ultrasensitive detection of cancer using synthetic biomarkers, a class of activity-based probes that amplify cancer-derived signals into urine as a noninvasive diagnostic. Using a model formulation made of a PEG core conjugated with protease-cleavable peptides, we explore a vast design space and identify guidelines for increasing sensitivity that depend on critical parameters such as enzyme kinetics, dosage, and probe stability. According to this model, synthetic biomarkers that circulate in stealth but then activate at sites of disease have the theoretical capacity to discriminate tumors as small as 5 mm in diameter, a threshold sensitivity that is otherwise challenging for medical imaging and blood biomarkers to achieve. This model may be adapted to describe the behavior of additional activity-based approaches to allow cross-platform comparisons, and to predict allometric scaling across species. PMID:26417077

  17. An adaptive grid-based all hexahedral meshing algorithm based on 2-refinement.

    SciTech Connect

    Edgel, Jared; Benzley, Steven E.; Owen, Steven James

    2010-08-01

    Most adaptive mesh generation algorithms employ a 3-refinement method. This method, although easy to employ, provides a mesh that is often too coarse in some areas and over-refined in others. Because 3-refinement generates 27 new hexes in place of a single hex, there is little control over mesh density. This paper presents an adaptive all-hexahedral grid-based meshing algorithm that employs a 2-refinement method, in which the hex to be refined is divided into eight new hexes. This allows greater control over mesh density than a 3-refinement procedure and yields a mesh that is efficient for analysis, providing a high element density in specific locations and a reduced density elsewhere. In addition, the tool can be used effectively for inside-out hexahedral grid-based schemes, using Cartesian structured grids for the base mesh, which have shown great promise in accommodating automatic all-hexahedral algorithms. The algorithm uses a two-layer transition zone to increase element quality and to keep transitions from lower to higher mesh densities smooth, and templates were introduced to allow both convex and concave refinement.
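
    A minimal sketch of 2-refinement on an axis-aligned Cartesian base cell, contrasting the 8 children it produces with the 27 of 3-refinement; the transition templates and quality-preserving layers described above are omitted.

```python
import itertools

def two_refine(cell):
    """Split one axis-aligned hexahedral cell, given as ((xmin, ymin, zmin), (xmax, ymax, zmax)),
    into 8 children by bisecting each axis (2-refinement), instead of the 27 children
    produced by trisecting each axis (3-refinement)."""
    (x0, y0, z0), (x1, y1, z1) = cell
    xs = (x0, (x0 + x1) / 2, x1)
    ys = (y0, (y0 + y1) / 2, y1)
    zs = (z0, (z0 + z1) / 2, z1)
    children = []
    for i, j, k in itertools.product(range(2), repeat=3):
        children.append(((xs[i], ys[j], zs[k]), (xs[i + 1], ys[j + 1], zs[k + 1])))
    return children

def refine_flagged(cells, needs_refinement):
    """One adaptive pass over a Cartesian base grid: refine only the flagged cells."""
    out = []
    for cell in cells:
        out.extend(two_refine(cell) if needs_refinement(cell) else [cell])
    return out
```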

  1. Updated Evidence-Based Treatment Algorithm in Pulmonary Arterial Hypertension

    PubMed Central

    Barst, Robyn J.; Gibbs, J. Simon; Ghofrani, Hossein A.; Hoeper, Marius M.; McLaughlin, Vallerie V.; Rubin, Lewis J.; Sitbon, Olivier; Tapson, Victor; Galiè, Nazzareno

    2009-01-01

    Uncontrolled and controlled clinical trials with different compounds and procedures are reviewed to define the risk-benefit profiles for therapeutic options in pulmonary arterial hypertension (PAH). A grading system for the level of evidence of treatments based on the controlled clinical trials performed with each compound is used to propose an evidence-based treatment algorithm. The algorithm includes drugs approved by regulatory agencies for the treatment of PAH and/or drugs available for other indications. The different treatments have been evaluated mainly in idiopathic PAH, heritable PAH, and in PAH associated with the scleroderma spectrum of diseases or with anorexigen use. Extrapolation of these recommendations to other PAH subgroups should be done with caution. Oral anticoagulation is proposed for most patients; diuretic treatment and supplemental oxygen are indicated in cases of fluid retention and hypoxemia, respectively. High doses of calcium channel blockers are indicated only in the minority of patients who respond to acute vasoreactivity testing. Nonresponders to acute vasoreactivity testing, or responders who remain in World Health Organization (WHO) functional class III, should be considered candidates for treatment with either an oral phosphodiesterase-5 inhibitor or an oral endothelin-receptor antagonist. Continuous intravenous administration of epoprostenol remains the treatment of choice in WHO functional class IV patients. Combination therapy is recommended for patients treated with PAH monotherapy who remain in New York Heart Association functional class III. Atrial septostomy and lung transplantation are indicated for refractory patients or where medical treatment is unavailable. PMID:19555861

  2. Digital super-resolution microscopy using example-based algorithm

    NASA Astrophysics Data System (ADS)

    Ishikawa, Shinji; Hayasaki, Yoshio

    2015-05-01

    We propose a super-resolution microscopy technique based on a confocal optical setup and an example-based algorithm. The example-based super-resolution algorithm uses an example database constructed by learning many pairs of high-resolution and low-resolution patches. Each high-resolution patch is a part of the high-resolution image of an object model expressed in a computer, and the corresponding low-resolution patch is calculated from the high-resolution patch taking into account the spatial response of the optical microscope. In the reconstruction process, a low-resolution image observed by the confocal optical setup with an image sensor is converted into a super-resolved high-resolution image by selecting patches from the example database with a pattern matching method. We demonstrate that adequate selection of the patch size and a weighted superposition method achieve super-resolution even at a low signal-to-noise ratio.

  3. Chaos Time Series Prediction Based on Membrane Optimization Algorithms

    PubMed Central

    Li, Meng; Yi, Liangzhong; Pei, Zheng; Gao, Zhisheng

    2015-01-01

    This paper puts forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the phase-space reconstruction parameters (τ, m) and the least squares support vector machine (LS-SVM) parameters (γ, σ). Accurately predicting the trends of parameters in the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt optimal actions. The model presented in this paper is used to forecast the band occupancy rates of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, we compare it with similar conventional models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best according to three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). PMID:25874249
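
    The phase-space reconstruction step can be sketched as a delay embedding; the (τ, m) values and the LS-SVM training itself would be supplied by the membrane computing optimizer described above.

```python
import numpy as np

def delay_embed(series, m, tau):
    """Phase-space reconstruction: embed a scalar time series into m-dimensional
    delay vectors x(i) = [s(i), s(i+tau), ..., s(i+(m-1)*tau)]."""
    series = np.asarray(series, dtype=float)
    n_vectors = len(series) - (m - 1) * tau
    if n_vectors <= 0:
        raise ValueError("series too short for the chosen (m, tau)")
    return np.column_stack([series[j * tau: j * tau + n_vectors] for j in range(m)])

def make_training_pairs(series, m, tau):
    """One-step-ahead training pairs: each delay vector maps to the next sample."""
    X = delay_embed(series, m, tau)
    y = np.asarray(series, dtype=float)[(m - 1) * tau + 1:]
    return X[:-1], y
```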

  4. Rank-based algorithms for analysis of microarrays

    NASA Astrophysics Data System (ADS)

    Liu, Wei-min; Mei, Rui; Bartell, Daniel M.; Di, Xiaojun; Webster, Teresa A.; Ryder, Tom

    2001-06-01

    Analysis of microarray data often involves extracting information from the raw intensities of spots of cells and making certain calls. Rank-based algorithms are powerful tools for providing probability values of hypothesis tests, especially when the distribution of the intensities is unknown. For our current gene expression arrays, a gene is detected by a set of probe pairs consisting of perfect match and mismatch cells. The one-sided upper-tail Wilcoxon signed-rank test is used in our algorithms for absolute calls (whether a gene is detected or not), as well as comparative calls (whether a gene is increasing, decreasing, or showing no significant change in a sample compared with another sample). We also test the possibility of using only perfect match cells to make calls. This paper focuses on absolute calls. We have developed error analysis methods and software tools that allow us to compare the accuracy of the calls in the presence or absence of mismatch cells at different target concentrations. The use of nonparametric rank-based tests is not limited to absolute and comparative calls on gene expression chips. They can also be applied to other oligonucleotide microarrays for genotyping and mutation detection, as well as to spotted arrays.
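
    A minimal sketch of an absolute call using SciPy's Wilcoxon signed-rank test; the probe intensities and the significance threshold are illustrative, and production detection calls typically involve additional discrimination thresholds not shown here.

```python
import numpy as np
from scipy.stats import wilcoxon

def detection_call(pm, mm, alpha=0.05):
    """Absolute call for one probe set: one-sided upper-tail Wilcoxon signed-rank
    test on perfect-match minus mismatch intensities. 'Present' if the PM signals
    are significantly larger than the MM signals."""
    pm = np.asarray(pm, dtype=float)
    mm = np.asarray(mm, dtype=float)
    stat, p = wilcoxon(pm - mm, alternative="greater")
    return ("present" if p < alpha else "absent"), p

# Example with a probe set of 11 probe pairs (values made up for illustration).
pm = [220, 310, 180, 400, 295, 260, 350, 210, 330, 280, 240]
mm = [150, 200, 170, 220, 190, 210, 230, 160, 250, 200, 190]
print(detection_call(pm, mm))
```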

  5. A cooperative control algorithm for camera based observational systems.

    SciTech Connect

    Young, Joseph G.

    2012-01-01

    Over the last several years, there has been considerable growth in camera based observation systems for a variety of safety, scientific, and recreational applications. In order to improve the effectiveness of these systems, we frequently desire the ability to increase the number of observed objects, but solving this problem is not as simple as adding more cameras. Quite often, there are economic or physical restrictions that prevent us from adding additional cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. In order to accomplish this goal, we present a new cooperative control algorithm for a camera based observational system. Specifically, we present a receding horizon control where we model the underlying optimal control problem as a mixed integer linear program. The benefit of this design is that we can coordinate the actions between each camera while simultaneously respecting its kinematics. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but we use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to intelligently observe the entire region of interest in an effective and thorough manner.

  6. Algorithm design for a gun simulator based on image processing

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Wei, Ping; Ke, Jun

    2015-08-01

    In this paper, an algorithm is designed for shooting games under strong background light. Six LEDs are uniformly distributed along the edge of a game machine screen: at the four corners and in the middle of the top and bottom edges. Three LEDs are lit in the odd frames, and the other three are lit in the even frames. The simulator is furnished with one camera, which is used to obtain an image of the LEDs by applying inter-frame differencing between the even and odd frames. In the resulting images, the six LEDs appear as six bright spots. To obtain the LEDs' coordinates rapidly, we propose a method based on the area of the bright spots. After calibrating the camera with a pinhole model, four equations can be established from the relationship between the image coordinate system and the world coordinate system under perspective transformation. The center point of the image of the LEDs is taken as the virtual shooting point. The perspective transformation matrix is applied to the coordinates of the center point to obtain the virtual shooting point's coordinates in the world coordinate system. When a game player shoots at a target about two meters away, the coordinate error calculated with the method discussed in this paper is less than 10 mm. We can obtain 65 coordinate results per second, which meets the requirement of a real-time system and shows that the algorithm is reliable and effective.
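
    A sketch of the LED spot detection step (frame differencing, thresholding, and centroid extraction); the threshold and minimum spot area are illustrative assumptions, and the subsequent perspective mapping to world coordinates is not shown.

```python
import numpy as np
from scipy import ndimage

def led_centroids(odd_frame, even_frame, threshold=40, min_area=3):
    """Locate the LED spots by inter-frame differencing of consecutive odd/even frames,
    thresholding, and taking the centroid of each connected bright region."""
    diff = np.abs(odd_frame.astype(np.int32) - even_frame.astype(np.int32))
    mask = diff > threshold
    labels, n = ndimage.label(mask)
    # Keep only regions large enough to be an LED spot, not single-pixel noise.
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_area]
    centroids = ndimage.center_of_mass(mask, labels, keep)   # list of (row, col)
    return np.array(centroids)

# The mean of the detected centroids can serve as the virtual aiming point that is
# then mapped to world coordinates through the calibrated perspective transform.
```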

  7. Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zu, Yun-Xiao; Zhou, Jie

    2012-01-01

    Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm is proposed, and a fitness function is provided. Simulations are conducted using the adaptive niche immune genetic algorithm, the simulated annealing algorithm, the quantum genetic algorithm and the simple genetic algorithm, respectively. The results show that the adaptive niche immune genetic algorithm performs better than the other three algorithms in terms of the multi-user cognitive radio network resource allocation, and has quick convergence speed and strong global searching capability, which effectively reduces the system power consumption and bit error rate.

  8. A game theoretic framework for incentive-based models of intrinsic motivation in artificial systems.

    PubMed

    Merrick, Kathryn E; Shafi, Kamran

    2013-01-01

    An emerging body of research is focusing on understanding and building artificial systems that can achieve open-ended development influenced by intrinsic motivations. In particular, research in robotics and machine learning is yielding systems and algorithms with increasing capacity for self-directed learning and autonomy. Traditional software architectures and algorithms are being augmented with intrinsic motivations to drive cumulative acquisition of knowledge and skills. Intrinsic motivations have recently been considered in reinforcement learning, active learning and supervised learning settings among others. This paper considers game theory as a novel setting for intrinsic motivation. A game theoretic framework for intrinsic motivation is formulated by introducing the concept of optimally motivating incentive as a lens through which players perceive a game. Transformations of four well-known mixed-motive games are presented to demonstrate the perceived games when players' optimally motivating incentive falls in three cases corresponding to strong power, affiliation and achievement motivation. We use agent-based simulations to demonstrate that players with different optimally motivating incentive act differently as a result of their altered perception of the game. We discuss the implications of these results both for modeling human behavior and for designing artificial agents or robots. PMID:24198797

  9. Evidence-Based Leadership Development: The 4L Framework

    ERIC Educational Resources Information Center

    Scott, Shelleyann; Webber, Charles F.

    2008-01-01

    Purpose: This paper aims to use the results of three research initiatives to present the life-long learning leader 4L framework, a model for leadership development intended for use by designers and providers of leadership development programming. Design/methodology/approach: The 4L model is a conceptual framework that emerged from the analysis of…

  10. Alternative Model-Based and Design-Based Frameworks for Inference from Samples to Populations: From Polarization to Integration

    ERIC Educational Resources Information Center

    Sterba, Sonya K.

    2009-01-01

    A model-based framework, due originally to R. A. Fisher, and a design-based framework, due originally to J. Neyman, offer alternative mechanisms for inference from samples to populations. We show how these frameworks can utilize different types of samples (nonrandom or random vs. only random) and allow different kinds of inference (descriptive vs.…

  11. Patch forest: a hybrid framework of random forest and patch-based segmentation

    NASA Astrophysics Data System (ADS)

    Xie, Zhongliu; Gillies, Duncan

    2016-03-01

    The development of an accurate, robust and fast segmentation algorithm has long been a research focus in medical computer vision. State-of-the-art practices often involve non-rigidly registering a target image with a set of training atlases for label propagation over the target space to perform segmentation, a.k.a. multi-atlas label propagation (MALP). In recent years, the patch-based segmentation (PBS) framework has gained wide attention due to its advantage of relaxing the strict voxel-to-voxel correspondence to a series of pair-wise patch comparisons for contextual pattern matching. Despite a high accuracy reported in many scenarios, computational efficiency has consistently been a major obstacle for both approaches. Inspired by recent work on random forest, in this paper we propose a patch forest approach, which by equipping the conventional PBS with a fast patch search engine, is able to boost segmentation speed significantly while retaining an equal level of accuracy. In addition, a fast forest training mechanism is also proposed, with the use of a dynamic grid framework to efficiently approximate data compactness computation and a 3D integral image technique for fast box feature retrieval.

  12. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    NASA Astrophysics Data System (ADS)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on the CPU is considerably time-consuming, especially for large images. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of image size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the features of the different memory types, an improved scheme of our method is developed that exploits shared memory on the GPU instead of global memory and further increases efficiency. Experimental results show that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
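
    For reference, the per-pixel operation that the CUDA kernels parallelize can be sketched on the CPU as follows; this NumPy/SciPy version is only a sequential baseline for checking results, not the GPU implementation.

```python
import numpy as np
from scipy.ndimage import laplace

def laplacian_sharpen(image, strength=1.0):
    """CPU reference for Laplacian sharpening: subtract the Laplacian response
    (SciPy uses the kernel [[0,1,0],[1,-4,1],[0,1,0]]) from the original image.
    Clipping to [0, 255] assumes an 8-bit input image."""
    img = image.astype(np.float32)
    sharpened = img - strength * laplace(img)
    return np.clip(sharpened, 0, 255).astype(image.dtype)
```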

  13. A neighboring structure reconstructed matching algorithm based on LARK features

    NASA Astrophysics Data System (ADS)

    Xue, Taobei; Han, Jing; Zhang, Yi; Bai, Lianfa

    2015-11-01

    To address the low contrast and high noise of infrared images, and the randomness and partial occlusion of their objects, this paper presents a neighboring structure reconstructed matching (NSRM) algorithm based on LARK features. The neighboring structure relationships of the local window are modeled with a non-negative linear reconstruction method to build a neighboring structure relationship matrix. The LARK feature matrix and the NSRM matrix are then processed separately to obtain two different similarity images. By fusing and analyzing the two similarity images, infrared objects are detected and marked using non-maximum suppression. The NSRM approach is also extended to detect infrared objects with incompact structure. High performance is demonstrated on an infrared body data set, indicating a lower false detection rate than conventional methods in complex natural scenes.

  14. Pathological leucocyte segmentation algorithm based on hyperspectral imaging technique

    NASA Astrophysics Data System (ADS)

    Guan, Yana; Li, Qingli; Wang, Yiting; Liu, Hongying; Zhu, Ziqiang

    2012-05-01

    White blood cells (WBCs) are significant components of the human blood system and have a pathological relationship with several blood-related diseases. To analyze the disease information accurately, the most essential task is to segment the WBCs. We propose a new method for pathological WBC segmentation based on a hyperspectral imaging system. This imaging system is used to capture WBC images and is characterized by acquiring 1-D spectral information and 2-D spatial information for each pixel. A spectral information divergence algorithm is presented to segment pathological WBCs into four parts. In order to evaluate the performance of the new approach, K-means and spectral angle mapper-based segmentation methods are tested for comparison on six groups of blood smears. Experimental results show that the presented method segments pathological WBCs more accurately, regardless of their irregular shapes, sizes, and gray values.
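
    The spectral information divergence used for per-pixel classification can be sketched as follows; the four reference classes are an assumption for illustration.

```python
import numpy as np

def spectral_information_divergence(x, y, eps=1e-12):
    """Symmetric spectral information divergence between two pixel spectra:
    SID(x, y) = sum_i p_i*log(p_i/q_i) + sum_i q_i*log(q_i/p_i),
    where p and q are the spectra normalized to unit sum."""
    p = np.asarray(x, dtype=float) + eps
    q = np.asarray(y, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def classify_pixel(spectrum, references):
    """Assign a pixel to the reference spectrum (e.g. nucleus, cytoplasm, erythrocyte,
    background) with the smallest divergence; 'references' maps class name -> spectrum."""
    return min(references, key=lambda name: spectral_information_divergence(spectrum, references[name]))
```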

  15. A Lagrange multiplier based divide and conquer finite element algorithm

    NASA Technical Reports Server (NTRS)

    Farhat, C.

    1991-01-01

    A novel domain decomposition method based on a hybrid variational principle is presented. Prior to any computation, a given finite element mesh is torn into a set of totally disconnected submeshes. First, an incomplete solution is computed in each subdomain. Next, the compatibility of the displacement field at the interface nodes is enforced via discrete, polynomial and/or piecewise polynomial Lagrange multipliers. In the static case, each floating subdomain induces a local singularity that is resolved very efficiently. The interface problem associated with this domain decomposition method is, in general, indefinite and of variable size. A dedicated conjugate projected gradient algorithm is developed for solving the latter problem when it is not feasible to explicitly assemble the interface operator. When implemented on local memory multiprocessors, the proposed methodology requires less interprocessor communication than the classical method of substructuring. It is also suitable for parallel/vector computers with shared memory and compares favorably with factorization based parallel direct methods.

  16. Vibration-based damage detection algorithm for WTT structures

    NASA Astrophysics Data System (ADS)

    Nguyen, Tuan-Cuong; Kim, Tae-Hwan; Choi, Sang-Hoon; Ryu, Joo-Young; Kim, Jeong-Tae

    2016-04-01

    In this paper, the integrity of a wind turbine tower (WTT) structure is nondestructively estimated using its vibration responses. Firstly, a damage detection algorithm using changes in modal characteristics to predict damage locations and severities in structures is outlined. Secondly, a finite element (FE) model based on a real WTT structure is established using the commercial software Midas FEA. Thirdly, forced vibration tests are performed on the FE model of the WTT structure under various damage scenarios. The changes in modal parameters such as natural frequencies and mode shapes are examined for damage monitoring in the structure. Finally, the feasibility of the vibration-based damage detection method is numerically verified by predicting the locations and severities of the damage in the FE model of the WTT structure.

  17. A prior-based metal artifact reduction algorithm for x-ray CT.

    PubMed

    Li, Ming; Zheng, Jian; Zhang, Tao; Guan, Yihui; Xu, Pin; Sun, Mingshan

    2015-01-01

    In computed tomography (CT), metal objects in the scanning field are accompanied by physical phenomena that cause projections to be inconsistent. These inconsistencies produce bright and dark shadows or streaks in analytically reconstructed images. Interpolation-based metal artifact reduction (MAR) algorithms usually replace the inconsistent projection data with surrogate data estimated from the surrounding uncorrupted projections; secondary artifacts are generated when these estimates are inaccurate, so better projection estimation is critical. This paper proposes an image post-processing strategy that creates an intermediate image, named the prior image, and obtains better estimates of the surrogate data by forward projecting this prior image. The proposed method consists of three steps within the forward-projection MAR framework. First, metallic implants in the uncorrected images are segmented using a Markov random field (MRF) model. Then a prior image is generated via an edge-preserving filter and a recovery procedure for the adjacent anatomical structures. Finally, the projection is completed by forward projecting this prior image, and the corrected image is reconstructed by the filtered backprojection (FBP) method. Studies on both phantom and clinical data are carried out to verify the performance of the proposed method. Comparisons with previous MAR algorithms demonstrate that the proposed method performs better in metal artifact suppression and anatomical structure preservation. PMID:25882733

  18. A Competency-Based Guided-Learning Algorithm Applied on Adaptively Guiding E-Learning

    ERIC Educational Resources Information Center

    Hsu, Wei-Chih; Li, Cheng-Hsiu

    2015-01-01

    This paper presents a new algorithm called competency-based guided-learning algorithm (CBGLA), which can be applied on adaptively guiding e-learning. Computational process analysis and mathematical derivation of competency-based learning (CBL) were used to develop the CBGLA. The proposed algorithm could generate an effective adaptively guiding…

  19. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures

    SciTech Connect

    Neylon, J. Sheng, K.; Yu, V.; Low, D. A.; Kupelian, P.; Santhanam, A.; Chen, Q.

    2014-10-15

    Purpose: Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. Methods: The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria

  1. Framework for springback compensation based on mechanical factor evaluation

    NASA Astrophysics Data System (ADS)

    Oya, Tetsuo; Doke, Naoyuki

    2013-05-01

    Springback is an inevitable phenomenon in sheet metal forming, and many studies on its prediction and compensation have been presented. The use of high-strength steels is now widespread; therefore, the demand for effective springback compensation systems is increasing. In this study, a novel approach to springback compensation is presented. The proposed framework consists of a springback solver, a design system, and an optimization process. The springback solver is a finite element procedure in which a degenerated shell element is used instead of the typical shell element. This allows the designer to access directly the resultant stresses, such as the bending moment, which is a major cause of springback. With our system, mechanically reasonable springback compensation is possible, whereas conventional compensation methods use only geometric information, which may lead to unrealistic solutions. The authors have developed a system based on the proposed procedure to demonstrate the effectiveness of the presented strategy and have applied it to several forming situations. In this paper, an overview of our approach and the latest progress are reported.

  2. A ROOT/IO based software framework for CMS

    SciTech Connect

    Tanenbaum, William

    2004-08-26

    The implementation of persistency in the Compact Muon Solenoid (CMS) software framework uses the core I/O functionality of ROOT. We discuss the current ROOT/IO implementation, its evolution from the prior Objectivity/DB implementation, and the plans and ongoing work for the conversion to "POOL", provided by the LHC Computing Grid (LCG) persistency project. The CMS experiment [1] is one of the four approved LHC experiments. Data taking is scheduled to begin in 2007 and will last at least ten years. The CMS software and computing task [2] will be 10-1000 times larger than that of current HEP experiments. Therefore it is essential that the software be modular, flexible, and maintainable, as well as providing high performance and quality. One of the technologies utilized has been a C++ based object-oriented database management system (ODBMS). Originally, the specific implementation used for object persistency was a commercial product, Objectivity/DB [3]. In 2001, it became apparent that Objectivity was not the optimal long-term solution for data persistency and that it was necessary to abandon Objectivity on a very short time scale. A decision was made to use ROOT/IO [4] directly as a component of an interim persistency implementation. In the near future, the LHC Computing Grid persistency project will provide POOL [5] as an implementation for persistency. This paper primarily covers the conversion from Objectivity/DB to ROOT/IO; the ongoing transition to POOL is also briefly discussed.

  3. Multiprocessor sort-merge join algorithm for relational data bases

    SciTech Connect

    Thompson, W.C. III; Ries, D.R.

    1981-01-01

    Using multiprocessor systems for rapid processing of relational operations in relational databases is currently a topic of some interest. This paper presents a new multiprocessor algorithm for merge joins of relations. Considerable gains in speed in comparison with existing algorithms are exhibited by this algorithm.
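
    The core merge-join logic can be sketched as below; in the multiprocessor setting of the paper, each relation would first be partitioned on the join key so that the merge runs on the partitions in parallel. The relations and key layout here are illustrative.

```python
def sort_merge_join(left, right, key=lambda row: row[0]):
    """Join two relations (lists of tuples) on a key by sorting both and merging."""
    left = sorted(left, key=key)
    right = sorted(right, key=key)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        ki, kj = key(left[i]), key(right[j])
        if ki < kj:
            i += 1
        elif ki > kj:
            j += 1
        else:
            # Emit the cross product of all rows sharing this key value.
            j_end = j
            while j_end < len(right) and key(right[j_end]) == ki:
                j_end += 1
            while i < len(left) and key(left[i]) == ki:
                out.extend(left[i] + r for r in right[j:j_end])
                i += 1
            j = j_end
    return out

# Example: join R(dept_id, dept) with S(dept_id, employee) on dept_id.
R = [(1, "sales"), (2, "hr")]
S = [(1, "ann"), (1, "bob"), (3, "eve")]
print(sort_merge_join(R, S))   # [(1, 'sales', 1, 'ann'), (1, 'sales', 1, 'bob')]
```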

  4. CACONET: Ant Colony Optimization (ACO) Based Clustering Algorithm for VANET.

    PubMed

    Aadil, Farhan; Bajwa, Khalid Bashir; Khan, Salabat; Chaudary, Nadeem Majeed; Akram, Adeel

    2016-01-01

    A vehicular ad hoc network (VANET) is a wirelessly connected network of vehicular nodes. A number of techniques, such as message ferrying, data aggregation, and vehicular node clustering, aim to improve communication efficiency in VANETs. Cluster heads (CHs), selected in the process of clustering, manage inter-cluster and intra-cluster communication. The lifetime of clusters and the number of CHs determine the efficiency of the network. In this paper a clustering algorithm based on Ant Colony Optimization (ACO) for VANETs (CACONET) is proposed. CACONET forms optimized clusters for robust communication. CACONET is compared empirically with state-of-the-art baseline techniques like Multi-Objective Particle Swarm Optimization (MOPSO) and Comprehensive Learning Particle Swarm Optimization (CLPSO). Experiments varying the grid size of the network, the transmission range of nodes, and the number of nodes in the network were performed to evaluate the comparative effectiveness of these algorithms. For optimized clustering, the parameters considered are the transmission range, direction, and speed of the nodes. The results indicate that CACONET significantly outperforms MOPSO and CLPSO. PMID:27149517

  5. Fast Field Calibration of MIMU Based on the Powell Algorithm

    PubMed Central

    Ma, Lin; Chen, Wanwan; Li, Bin; You, Zheng; Chen, Zhigang

    2014-01-01

    The calibration of micro inertial measurement units is important in ensuring the precision of navigation systems, which are equipped with microelectromechanical system sensors that suffer from various errors. However, traditional calibration methods cannot meet the demand for fast field calibration. This paper presents a fast field calibration method based on the Powell algorithm. As the key points of this calibration, the norm of the accelerometer measurement vector is equal to the gravity magnitude, and the norm of the gyro measurement vector is equal to the rotational velocity inputs. To resolve the error parameters by judging the convergence of the nonlinear equations, the Powell algorithm is applied by establishing a mathematical error model of the novel calibration. All parameters can then be obtained in this manner. A comparison of the proposed method with the traditional calibration method through navigation tests shows the classic performance of the proposed calibration method. The proposed calibration method also saves more time compared with the traditional calibration method. PMID:25177801
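
    A minimal sketch of the accelerometer part of such a field calibration, assuming a simple scale-and-bias error model (the paper's actual error model and gyro treatment are not reproduced here): Powell's derivative-free method, as available in scipy.optimize, is used to drive the norm of every calibrated static measurement toward the gravity magnitude.

```python
import numpy as np
from scipy.optimize import minimize

G = 9.80665  # gravity magnitude, m/s^2

def calibrated(params, raw):
    """Apply a simple scale-and-bias model (hypothetical 6-parameter model)."""
    scale, bias = params[:3], params[3:]
    return (raw - bias) * scale

def cost(params, raw_samples):
    # Residual: the norm of every static accelerometer sample should equal g.
    norms = np.linalg.norm(calibrated(params, raw_samples), axis=1)
    return np.sum((norms - G) ** 2)

# Synthetic static measurements in several orientations (for illustration only).
rng = np.random.default_rng(0)
true_scale, true_bias = np.array([1.02, 0.98, 1.05]), np.array([0.1, -0.2, 0.05])
dirs = rng.normal(size=(50, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
raw = dirs * G / true_scale + true_bias + rng.normal(scale=1e-3, size=(50, 3))

x0 = np.concatenate([np.ones(3), np.zeros(3)])      # initial guess: ideal sensor
res = minimize(cost, x0, args=(raw,), method="Powell")
print("estimated scale:", res.x[:3])
print("estimated bias: ", res.x[3:])
```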

  6. New algorithm for iris recognition based on video sequences

    NASA Astrophysics Data System (ADS)

    Bourennane, Salah; Fossati, Caroline; Ketchantang, William

    2010-07-01

    Among existing biometrics, iris recognition systems are among the most accurate personal biometric identification systems. However, the acquisition of a workable iris image requires strict cooperation of the user; otherwise, the image will be rejected by a verification module because of its poor quality, inducing a high false reject rate (FRR). The FRR may also increase when iris localization fails or when the pupil is too dilated. To improve the existing methods, we propose to use video sequences acquired in real time by a camera. In order to keep the same computational load to identify the iris, we propose a new method to estimate the iris characteristics. First, we propose a new iris texture characterization based on the Fourier-Mellin transform, which is less sensitive to pupil dilation than previous methods. Then, we develop a new iris localization algorithm that is robust to variations of quality (partial occlusions due to eyelids and eyelashes, light reflections, etc.), and finally, we introduce a new and fast criterion for selecting suitable images from an iris video sequence for accurate recognition. The accuracy of each step of the algorithm in the whole proposed recognition process is tested and evaluated using our own iris video database and several public image databases, such as CASIA, UBIRIS, and BATH.

  7. An algorithm based on negative probabilities for a separability criterion

    NASA Astrophysics Data System (ADS)

    de Ponte, M. A.; Mizrahi, S. S.; Moussa, M. H. Y.

    2015-09-01

    Here, we demonstrate that entangled states can be written in the separable form $\rho = \sum_k p_k\, \rho_k^{(1)} \otimes \cdots \otimes \rho_k^{(N)}$ (the superscripts 1 to N referring to the parts and the $p_k$ to the nonnegative probabilities), although some of the coefficients $p_k$ assume negative values, while others are larger than 1, so as to keep their sum equal to 1. We recognize this feature as a signature of non-separability or pseudoseparability. We systematize that kind of decomposition through an algorithm for the explicit separation of density matrices, and we apply it to illustrate the separation of some particular bipartite and tripartite states, including a multipartite one-parameter Werner-like state. We also work out an arbitrary bipartite state and show that in the particular case where this state reduces to an X-type density matrix, our algorithm leads to the separability conditions on the parameters, confirmed by the Peres-Horodecki partial transposition recipe. We finally propose a measure for quantifying the degree of entanglement based on these peculiar negative (and greater than one) probabilities.
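
    Since the abstract cites the Peres-Horodecki partial-transposition recipe as the reference check, the following sketch shows that test for a two-qubit Werner-like state; it is not the authors' negative-probability decomposition algorithm.

```python
import numpy as np

def partial_transpose(rho, dA, dB):
    """Partial transpose over subsystem B of a dA*dB x dA*dB density matrix."""
    r = rho.reshape(dA, dB, dA, dB)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def is_ppt(rho, dA, dB, tol=1e-12):
    """Peres-Horodecki test: a negative eigenvalue of rho^{T_B} signals entanglement."""
    eigs = np.linalg.eigvalsh(partial_transpose(rho, dA, dB))
    return eigs.min() >= -tol

# Two-qubit Werner-like state rho = p |psi-><psi-| + (1-p) I/4 (entangled for p > 1/3).
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
for p in (0.2, 0.5, 0.9):
    rho = p * np.outer(psi, psi) + (1 - p) * np.eye(4) / 4
    print(f"p = {p}: PPT (i.e. separable for 2x2) -> {is_ppt(rho, 2, 2)}")
```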

  8. CACONET: Ant Colony Optimization (ACO) Based Clustering Algorithm for VANET

    PubMed Central

    Bajwa, Khalid Bashir; Khan, Salabat; Chaudary, Nadeem Majeed; Akram, Adeel

    2016-01-01

    A vehicular ad hoc network (VANET) is a wirelessly connected network of vehicular nodes. A number of techniques, such as message ferrying, data aggregation, and vehicular node clustering, aim to improve communication efficiency in VANETs. Cluster heads (CHs), selected in the process of clustering, manage inter-cluster and intra-cluster communication. The lifetime of clusters and the number of CHs determine the efficiency of the network. In this paper, a clustering algorithm based on Ant Colony Optimization (ACO) for VANETs (CACONET) is proposed. CACONET forms optimized clusters for robust communication. CACONET is compared empirically with state-of-the-art baseline techniques like Multi-Objective Particle Swarm Optimization (MOPSO) and Comprehensive Learning Particle Swarm Optimization (CLPSO). Experiments varying the grid size of the network, the transmission range of nodes, and the number of nodes in the network were performed to evaluate the comparative effectiveness of these algorithms. For optimized clustering, the parameters considered are the transmission range, direction, and speed of the nodes. The results indicate that CACONET significantly outperforms MOPSO and CLPSO. PMID:27149517

  9. Geotube: a network-based framework for Geoscience dissemination

    NASA Astrophysics Data System (ADS)

    Grieco, Giovanni; Porta, Marina; Merlini, Anna Elisabetta; Caironi, Valeria; Reggiori, Donatella

    2016-04-01

    Geotube is a project promoted by the Il Geco cultural association for the dissemination of Geoscience education in schools through open multimedia environments. The approach is based on the following keystones: • A deep and permanent epistemological reflection supported by confrontation within the International Scientific Community • A close link with the territory • A local-to-global inductive approach to basic concepts in Geosciences • The construction of an open framework to stimulate creativity The project has been developed as an educational activity for secondary schools (11- to 18-year-old students). It provides for the creation of a network of institutions to be involved in order to ensure the required diversified expertise. They can comprise universities, natural parks, mountain communities, municipalities, schools, private companies working in the sector, and so on. A single project lasts for one school year (October to June) and requires 8-12 work hours at school, one or two half-day or full-day excursions, and a final event presenting the outputs. The possible outputs comprise a pdf or ppt guidebook, a script, and a video completely shot and edited by the students. The framework is open in order to adapt to the needs of the single class or workgroup, the level and type of school, the time available, and different subjects in Geosciences. In the last two years the two parts of the project have been successfully tested separately, while the full project will be presented to schools in its full form in April 2016, in collaboration with the University of Milan, Campo dei Fiori Natural Park, Piambello Mountain Community and Cunardo Municipality. The production of Geotube outputs has been tested in a high school for three consecutive years. Students produced scripts and videos on the following subjects: geologic hazards, volcanoes and earthquakes, and climate change. The excursions have been tested with two different high schools. Firstly two areas have been

  10. Semantics-Based Interoperability Framework for the Geosciences

    NASA Astrophysics Data System (ADS)

    Sinha, A.; Malik, Z.; Raskin, R.; Barnes, C.; Fox, P.; McGuinness, D.; Lin, K.

    2008-12-01

    Interoperability between heterogeneous data, tools and services is required to transform data to knowledge. To meet geoscience-oriented societal challenges such as forcing of climate change induced by volcanic eruptions, we suggest the need to develop semantic interoperability for data, services, and processes. Because such scientific endeavors require integration of multiple databases associated with global enterprises, implicit semantic-based integration is impossible. Instead, explicit semantics are needed to facilitate interoperability and integration. Although different types of integration models are available (syntactic or semantic), we suggest that semantic interoperability is likely to be the most successful pathway. Clearly, the geoscience community would benefit from utilization of existing XML-based data models, such as GeoSciML, WaterML, etc., to rapidly advance semantic interoperability and integration. We recognize that such integration will require a "meanings-based search, reasoning and information brokering", which will be facilitated through inter-ontology relationships (ontologies defined for each discipline). We suggest that Markup languages (MLs) and ontologies can be seen as "data integration facilitators", working at different abstraction levels. Therefore, we propose to use an ontology-based data registration and discovery approach to complement mark-up languages through semantic data enrichment. Ontologies allow the use of formal and descriptive logic statements which permit expressive query capabilities for data integration through reasoning. We have developed domain ontologies (EPONT) to capture the concept behind data. EPONT ontologies are associated with existing ontologies such as SUMO, DOLCE and SWEET. Although significant efforts have gone into developing data (object) ontologies, we advance the idea of developing semantic frameworks for additional ontologies that deal with processes and services. This evolutionary step will

  11. HOLON: a Web-based framework for fostering guideline applications.

    PubMed Central

    Silverman, B. G.; Moidu, K.; Clemente, B. E.; Reis, L.; Ravichandar, D.; Safran, C.

    1997-01-01

    HOLON is a research and development effort in extending middleware in the healthcare field to support application development, in general, and guideline applications, in particular. This framework makes use of open standards for architecture, software, guideline KBs, clinical repository models, information encodings, and intelligent system modules and agents. By pursuing the use of such standards in our middleware components, we hope eventually to maximize reusability of the HOLON framework by others who also adhere to these open standards. This research reflects lessons learned about the extensions needed in these standards if healthcare middleware frameworks are to transparently support application developers and their users over the web. PMID:9357651

  12. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis applications. The proposed tool, denoted PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.
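
    The PSO-Snake hybrid itself is not reproduced here, but its population-based ingredient can be sketched as a generic global-best PSO; the snake energy of a tracked feature would play the role of the objective function. The toy objective and all parameter values below are illustrative only.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = np.clip(x + v, lo, hi)                                   # keep inside bounds
        val = np.apply_along_axis(objective, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy objective: distance to a "bright point" at (3, -2) in image coordinates.
target = np.array([3.0, -2.0])
best, best_val = pso(lambda p: np.sum((p - target) ** 2), bounds=[(-10, 10), (-10, 10)])
print(best, best_val)
```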

  13. A framework for simulating ultrasound imaging based on first order nonlinear pressure-velocity relations.

    PubMed

    Du, Yigang; Fan, Rui; Li, Yong; Chen, Siping; Jensen, Jørgen Arendt

    2016-07-01

    An ultrasound imaging framework modeled with the first order nonlinear pressure-velocity relations (NPVR) and implemented by a half-time staggered solution and pseudospectral method is presented in this paper. The framework is capable of simulating linear and nonlinear ultrasound propagation and reflections in a heterogeneous medium with different sound speeds and densities. It can be initialized with arbitrary focus, excitation and apodization for multiple individual channels in both 2D and 3D spatial fields. The simulated channel data can be generated using this framework, and an ultrasound image can be obtained by beamforming the simulated channel data. Various results simulated by different algorithms are illustrated for comparison. The root mean square (RMS) errors for each compared pulse are calculated. The linear propagation is validated by an angular spectrum approach (ASA) with an RMS error of 3% at the focal point for a 2D field, and by Field II with RMS errors of 0.8% and 1.5% at the electronic and the elevation focuses for 3D fields, respectively. The accuracy of the NPVR based nonlinear propagation is investigated by comparing with the Abersim simulation for pulsed fields and with the nonlinear ASA for monochromatic fields. The RMS errors of the nonlinear pulses calculated by the NPVR and Abersim are respectively 2.4%, 7.4%, 17.6% and 36.6%, corresponding to initial pressure amplitudes of 50 kPa, 200 kPa, 500 kPa and 1 MPa at the transducer. By increasing the sampling frequency for the strong nonlinearity, the RMS error for the 1 MPa initial pressure amplitude is reduced from 36.6% to 27.3%. PMID:27107165

  14. Graph-based optimization algorithm and software on kidney exchanges.

    PubMed

    Chen, Yanhua; Li, Yijiang; Kalbfleisch, John D; Zhou, Yan; Leichtman, Alan; Song, Peter X-K

    2012-07-01

    Kidney transplantation is typically the most effective treatment for patients with end-stage renal disease. However, the supply of kidneys is far short of the fast-growing demand. Kidney paired donation (KPD) programs provide an innovative approach for increasing the number of available kidneys. In a KPD program, willing but incompatible donor-candidate pairs may exchange donor organs to achieve mutual benefit. Recently, research on exchanges initiated by altruistic donors (ADs) has attracted great attention because the resultant organ exchange mechanisms offer advantages that increase the effectiveness of KPD programs. Currently, most KPD programs focus on rule-based strategies of prioritizing kidney donation. In this paper, we consider and compare two graph-based organ allocation algorithms to optimize an outcome-based strategy defined by the overall expected utility of kidney exchanges in a KPD program with both incompatible pairs and ADs. We develop an interactive software-based decision support system to model, monitor, and visualize a conceptual KPD program, which aims to assist clinicians in the evaluation of different kidney allocation strategies. Using this system, we demonstrate empirically that an outcome-based strategy for kidney exchanges leads to improvement in both the quantity and quality of kidney transplantation through comprehensive simulation experiments. PMID:22542649
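
    The optimization model is only outlined in the abstract, so the sketch below shows one common graph-based formulation: enumerate short donation cycles in a directed compatibility graph with networkx and greedily select vertex-disjoint cycles by total utility. The graph, the weights, and the greedy rule are hypothetical stand-ins for the paper's outcome-based optimizer.

```python
import networkx as nx

# Directed compatibility graph: an edge u -> v means the donor of pair u
# can give to the candidate of pair v; weights are hypothetical utilities.
G = nx.DiGraph()
edges = [("P1", "P2", 1.0), ("P2", "P1", 0.9),
         ("P2", "P3", 0.8), ("P3", "P4", 1.0),
         ("P4", "P2", 0.7), ("P4", "P1", 0.6), ("P1", "P4", 0.5)]
G.add_weighted_edges_from(edges)

def cycle_utility(cycle):
    return sum(G[u][v]["weight"] for u, v in zip(cycle, cycle[1:] + cycle[:1]))

# Enumerate short cycles (KPD programs usually cap cycle length, e.g. at 3).
cycles = [c for c in nx.simple_cycles(G) if len(c) <= 3]

# Greedy selection of vertex-disjoint cycles by utility; a real optimizer
# would solve this as an integer program over all feasible cycles and chains.
chosen, used = [], set()
for c in sorted(cycles, key=cycle_utility, reverse=True):
    if not used.intersection(c):
        chosen.append(c)
        used.update(c)

for c in chosen:
    print("exchange cycle:", " -> ".join(c + [c[0]]), "utility:", round(cycle_utility(c), 2))
```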

  15. A Python Plug-in Based Computational Framework for Spatially Distributed Environmental and Earth Sciences Modelling

    NASA Astrophysics Data System (ADS)

    Willgoose, G. R.

    2009-12-01

    One of the pioneering landform evolution models, SIBERIA, while developed in the 1980s, is still widely used in the science community and is a key component of engineering software used to assess the long-term stability of man-made landforms such as rehabilitated mine sites and nuclear waste repositories. While SIBERIA is very reliable, computationally fast and well tested (both its underlying science and the computer code), the range of emerging applications has challenged the ability of the author to maintain and extend the underlying computer code. Moreover, the architecture of the SIBERIA code is not well suited to collaborative extension of its capabilities without often triggering forking of the code base. This paper describes a new modelling framework designed to supersede SIBERIA (as well as other earth sciences codes by the author) called TelluSim. The design is such that it is potentially more than simply a new landform evolution model; TelluSim is a more general dynamical system modelling framework using time evolving GIS data as its spatial discretisation. TelluSim is designed as an open modular framework facilitating open-sourcing of the code, while addressing compromises made in the original design of SIBERIA in the 1980s. An important aspect of the design of TelluSim was to minimise the overhead in interfacing the modules with TelluSim, and to minimise any requirement for recoding of existing software, thus eliminating a major disadvantage of more complex frameworks. The presentation will discuss in more detail the reasoning behind the design of TelluSim, and experiences of the advantages and disadvantages of using Python relative to other approaches (e.g. Matlab, R). The paper will discuss examples of how TelluSim has facilitated the incorporation and testing of new algorithms, and environmental processes, and the support for novel science and data testing methodologies. It will also discuss plans to link TelluSim with other open source
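
    TelluSim's actual API is not described in the record; the sketch below only illustrates the general plug-in idea it alludes to: process modules register themselves by name and a driver composes them over a gridded state at each time step. All class and function names are invented for illustration.

```python
import numpy as np

# Minimal plug-in registry: modules register themselves under a name and are
# composed by a driver that steps a gridded state forward in time.
PLUGINS = {}

def plugin(name):
    def register(cls):
        PLUGINS[name] = cls
        return cls
    return register

@plugin("diffusion")
class Diffusion:
    def __init__(self, rate=0.1):
        self.rate = rate
    def step(self, z, dt):
        # 4-neighbour linear diffusion on a regular grid (periodic boundaries)
        lap = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
               np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)
        return z + self.rate * lap * dt

@plugin("uplift")
class Uplift:
    def __init__(self, rate=1e-3):
        self.rate = rate
    def step(self, z, dt):
        return z + self.rate * dt

def run(process_names, z0, dt=1.0, nsteps=100):
    processes = [PLUGINS[n]() for n in process_names]
    z = z0.copy()
    for _ in range(nsteps):
        for p in processes:
            z = p.step(z, dt)
    return z

z_final = run(["uplift", "diffusion"], np.random.rand(32, 32))
print(z_final.mean())
```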

  16. MTG2: an efficient algorithm for multivariate linear mixed model analysis based on genomic information

    PubMed Central

    Lee, S. H.; van der Werf, J. H. J.

    2016-01-01

    Summary: We have developed an algorithm for genetic analysis of complex traits using genome-wide SNPs in a linear mixed model framework. Compared to current standard REML software based on the mixed model equation, our method is substantially faster. The advantage is largest when there is only a single genetic covariance structure. The method is particularly useful for multivariate analysis, including multi-trait models and random regression models for studying reaction norms. We applied our proposed method to publicly available mice and human data and discuss the advantages and limitations. Availability and implementation: MTG2 is available in https://sites.google.com/site/honglee0707/mtg2. Contact: hong.lee@une.edu.au Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26755623

  17. Soft learning vector quantization and clustering algorithms based on non-Euclidean norms: single-norm algorithms.

    PubMed

    Karayiannis, Nicolaos B; Randolph-Gips, Mary M

    2005-03-01

    This paper presents the development of soft clustering and learning vector quantization (LVQ) algorithms that rely on a weighted norm to measure the distance between the feature vectors and their prototypes. The development of LVQ and clustering algorithms is based on the minimization of a reformulation function under the constraint that the generalized mean of the norm weights be constant. According to the proposed formulation, the norm weights can be computed from the data in an iterative fashion together with the prototypes. An error analysis provides some guidelines for selecting the parameter involved in the definition of the generalized mean in terms of the feature variances. The algorithms produced from this formulation are easy to implement and they are almost as fast as clustering algorithms relying on the Euclidean norm. An experimental evaluation on four data sets indicates that the proposed algorithms consistently outperform clustering algorithms relying on the Euclidean norm and are strong competitors to non-Euclidean algorithms, which are computationally more demanding.

  18. Alternative Model-Based and Design-Based Frameworks for Inference From Samples to Populations: From Polarization to Integration

    PubMed Central

    Sterba, Sonya K.

    2010-01-01

    A model-based framework, due originally to R. A. Fisher, and a design-based framework, due originally to J. Neyman, offer alternative mechanisms for inference from samples to populations. We show how these frameworks can utilize different types of samples (nonrandom or random vs. only random) and allow different kinds of inference (descriptive vs. analytic) to different kinds of populations (finite vs. infinite). We describe the extent of each framework's implementation in observational psychology research. After clarifying some important limitations of each framework, we describe how these limitations are overcome by a newer hybrid model/design-based inferential framework. This hybrid framework allows both kinds of inference to both kinds of populations, given a random sample. We illustrate implementation of the hybrid framework using the High School and Beyond data set. PMID:20411042

  19. Petri nets SM-cover based on heuristic coloring algorithm

    NASA Astrophysics Data System (ADS)

    Tkacz, Jacek; Doligalski, Michał

    2015-09-01

    In this paper, a heuristic coloring algorithm for interpreted Petri nets is presented. Coloring is used to determine the State Machine (SM) subnets. The presented algorithm reduces the Petri net in order to lower the computational complexity and finds one of its possible State Machine covers. The proposed algorithm uses elements of the interpretation of Petri nets. The obtained result may not be the best, but it is sufficient for use in rapid prototyping of logic controllers. The found SM-cover will also be used in the development of algorithms for decomposition and for the modular synthesis and implementation of parallel logic controllers. The correctness of the developed heuristic algorithm was verified using the Gentzen formal reasoning system.

  20. Enabling big geoscience data analytics with a cloud-based, MapReduce-enabled and service-oriented workflow framework.

    PubMed

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed that leverage cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data, and a service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists.
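
    The framework itself relies on HBase and a cloud MapReduce stack; as a stand-in, the following toy sketch conveys the map/reduce pattern on tiled data using only the Python standard library and NumPy, with per-tile partial statistics merged in the reduce phase.

```python
from functools import reduce
from multiprocessing import Pool
import numpy as np

def mapper(tile):
    """Map phase: emit a (key, partial statistics) pair for one data tile."""
    region, values = tile
    return (region, (values.sum(), values.size))

def reducer(acc, item):
    """Reduce phase: merge partial sums per region key."""
    key, (s, n) = item
    total_s, total_n = acc.get(key, (0.0, 0))
    acc[key] = (total_s + s, total_n + n)
    return acc

if __name__ == "__main__":
    # Hypothetical tiles of a multi-dimensional geoscience variable, keyed by region.
    rng = np.random.default_rng(1)
    tiles = [("tropics" if i % 2 else "poles", rng.normal(size=1000)) for i in range(20)]

    with Pool(4) as pool:
        mapped = pool.map(mapper, tiles)            # parallel map over tiles

    totals = reduce(reducer, mapped, {})            # sequential reduce
    means = {k: s / n for k, (s, n) in totals.items()}
    print(means)
```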

  1. Enabling big geoscience data analytics with a cloud-based, MapReduce-enabled and service-oriented workflow framework.

    PubMed

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed that leverage cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data, and a service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012

  2. Enabling Big Geoscience Data Analytics with a Cloud-Based, MapReduce-Enabled and Service-Oriented Workflow Framework

    PubMed Central

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed that leverage cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data, and a service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012

  3. Parallelization of exoplanets detection algorithms based on field rotation; example of the MOODS algorithm for SPHERE

    NASA Astrophysics Data System (ADS)

    Mattei, D.; Smith, I.; Ferrari, A.; Carbillet, M.

    2010-10-01

    Post-processing for exoplanet detection using direct imaging requires large data cubes and/or sophisticated signal processing techniques. For alt-azimuthal mounts, a projection effect called field rotation makes the potential planet rotate in a known manner on the set of images. For ground-based telescopes that use extreme adaptive optics and advanced coronagraphy, techniques based on field rotation are already broadly used and still being improved. In most such techniques, for a given initial position of the planet, the planet intensity estimate is a linear function of the set of images. However, due to field rotation, the modified instrumental response applied is not shift invariant like usual linear filters. Testing all possible initial positions is therefore very time-consuming. To reduce the processing time, we propose to compute each subset of initial positions on a different machine using parallel programming. In particular, the MOODS algorithm dedicated to the VLT-SPHERE instrument, which jointly estimates the light contributions of the star and the potential exoplanet, is parallelized on the Observatoire de la Cote d'Azur cluster. Different parallelization methods (OpenMP, MPI, Jobs Array) have been elaborated for the initial MOODS code and compared to each other. The one finally chosen splits the initial positions over the available processors while best accounting for the different constraints of the cluster structure: memory, job submission queues, number of available CPUs, and cluster average load. In the end, a standard set of images is satisfactorily processed in a few hours instead of a few days.
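
    A minimal sketch of the parallelization idea (splitting the set of candidate initial positions across workers), using Python's concurrent.futures instead of the OpenMP/MPI/job-array variants evaluated in the paper; the per-position estimator below is only a placeholder.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def planet_intensity(position, images):
    """Placeholder for the per-position linear estimator; the real MOODS
    estimator jointly fits star and planet contributions along the rotated track."""
    x, y = position
    return float(images[:, y, x].mean())

def process_chunk(args):
    positions, images = args
    return [(p, planet_intensity(p, images)) for p in positions]

if __name__ == "__main__":
    images = np.random.rand(50, 64, 64)              # stand-in data cube
    positions = [(x, y) for x in range(64) for y in range(64)]
    chunks = np.array_split(np.array(positions), 8)  # one chunk per worker

    with ProcessPoolExecutor(max_workers=8) as ex:
        results = ex.map(process_chunk, [(c.tolist(), images) for c in chunks])

    detections = [item for chunk in results for item in chunk]
    best = max(detections, key=lambda t: t[1])
    print("brightest candidate position:", best[0])
```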

  4. Sensor based framework for secure multimedia communication in VANET.

    PubMed

    Rahim, Aneel; Khan, Zeeshan Shafi; Bin Muhaya, Fahad T; Sher, Muhammad; Kim, Tai-Hoon

    2010-01-01

    Secure multimedia communication enhances the safety of passengers by providing visual pictures of accidents and danger situations. In this paper we propose a framework for secure multimedia communication in Vehicular Ad-Hoc Networks (VANETs). Our proposed framework is mainly divided into four components: redundant information, priority assignment, malicious data verification and malicious node verification. The proposed scheme has been validated with the help of the NS-2 network simulator and the Evalvid tool. PMID:22163462

  5. A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.

    2005-01-01

    We compare Genetic Algorithms (GA's) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GA's, PC-based methods do not update populations of solutions. Instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GA's on a diverse set of problems. To handle high dimensional surfaces, in the PC method investigated here p is restricted to a product distribution. Each distribution in that product is controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GA's in rate of descent, in avoiding traps in false minima, and in long-term optimization.

  6. A Semantics-Based Information Distribution Framework for Large Web-Based Course Forum System

    ERIC Educational Resources Information Center

    Chim, Hung; Deng, Xiaotie

    2008-01-01

    We propose a novel data distribution framework for developing a large Web-based course forum system. In the distributed architectural design, each forum server is fully equipped with the ability to support some course forums independently. The forum servers collaborating with each other constitute the whole forum system. Therefore, the workload of…

  7. The guitar chord-generating algorithm based on complex network

    NASA Astrophysics Data System (ADS)

    Ren, Tao; Wang, Yi-fan; Du, Dan; Liu, Miao-miao; Siddiqi, Awais

    2016-02-01

    This paper aims to generate chords for popular songs automatically based on complex networks. Firstly, according to the characteristics of guitar tablature, six chord networks of popular songs by six pop singers are constructed and the properties of all networks are summarized. By analyzing the diverse chord networks, the accompaniment regularities and features are revealed, with which chords can be generated automatically. Secondly, in terms of the characteristics of popular songs, a two-tiered network containing a verse network and a chorus network is constructed. With this network, the verse and chorus can be composed respectively with the random walk algorithm. Thirdly, the musical motif is considered for generating chords, with which bad chord progressions can be revised. This method makes the accompaniments sound more melodious. Finally, a popular song is chosen for generating chords, and the newly generated accompaniment sounds better than those done by the composers.
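
    A toy illustration of generating a progression by a weighted random walk on a chord-transition network; the network below is a hypothetical example, not one of the six singer networks built in the paper, and the motif-based revision step is omitted.

```python
import random

# Hypothetical chord-transition network: edge weights count how often one chord
# follows another in a corpus of guitar tablatures.
chord_graph = {
    "C":  {"G": 5, "Am": 3, "F": 4},
    "G":  {"C": 6, "Am": 2, "Em": 1},
    "Am": {"F": 4, "G": 3, "C": 2},
    "F":  {"C": 5, "G": 4},
    "Em": {"Am": 3, "C": 2},
}

def random_walk(graph, start, length, rng=random):
    """Generate a chord progression by a weighted random walk on the network."""
    progression = [start]
    current = start
    for _ in range(length - 1):
        neighbours = graph[current]
        chords = list(neighbours)
        weights = [neighbours[c] for c in chords]
        current = rng.choices(chords, weights=weights, k=1)[0]
        progression.append(current)
    return progression

verse = random_walk(chord_graph, "C", 8)    # verse network stand-in
chorus = random_walk(chord_graph, "G", 8)   # chorus network stand-in
print("verse: ", " ".join(verse))
print("chorus:", " ".join(chorus))
```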

  8. Sparsity-based algorithm for detecting faults in rotating machines

    NASA Astrophysics Data System (ADS)

    He, Wangpeng; Ding, Yin; Zi, Yanyang; Selesnick, Ivan W.

    2016-05-01

    This paper addresses the detection of periodic transients in vibration signals so as to detect faults in rotating machines. For this purpose, we present a method to estimate periodic-group-sparse signals in noise. The method is based on the formulation of a convex optimization problem. A fast iterative algorithm is given for its solution. A simulated signal is formulated to verify the performance of the proposed approach for periodic feature extraction. The detection performance of comparative methods is compared with that of the proposed approach via RMSE values and receiver operating characteristic (ROC) curves. Finally, the proposed approach is applied to single fault diagnosis of a locomotive bearing and compound faults diagnosis of motor bearings. The processed results show that the proposed approach can effectively detect and extract the useful features of bearing outer race and inner race defect.

  9. Algorithm of semicircular laser spot detection based on circle fitting

    NASA Astrophysics Data System (ADS)

    Wang, Zhengzhou; Xu, Ruihua; Hu, Bingliang

    2013-07-01

    In order to obtain the exact center of an asymmetrical, semicircular-aperture laser spot, a laser spot detection method based on circle fitting is proposed in this paper. The laser spot image is first thresholded by a gray-morphology algorithm, and a rough edge of the laser spot is detected in both the vertical and horizontal directions. Short arcs and isolated edge points are then deleted by contour growing, the best circle contour is obtained by iterative fitting, and the final standard circle is fitted at the end. The experimental results show that the precision of the method is clearly better than that of the gravity model method used in the traditional large laser automatic alignment system. The accuracy with which the method locates the center of an asymmetrical, semicircular laser spot meets the requirements of the system.
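
    As a hedged sketch of the fitting step only, the algebraic (Kasa) least-squares circle fit below recovers the full circle parameters from a noisy semicircular arc of edge points; the paper's preprocessing (gray-morphology thresholding, contour growing, iterative refitting) is not reproduced.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit to edge points.
    Solves x^2 + y^2 + D*x + E*y + F = 0 for D, E, F."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r

# Semicircular arc with noise: only half of the spot contour is visible.
theta = np.linspace(0.0, np.pi, 200)
x = 50 + 20 * np.cos(theta) + np.random.normal(scale=0.3, size=theta.size)
y = 80 + 20 * np.sin(theta) + np.random.normal(scale=0.3, size=theta.size)
print(fit_circle(x, y))   # approximately (50, 80, 20)
```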

  10. Secure steganographic communication algorithm based on self-organizing patterns

    NASA Astrophysics Data System (ADS)

    Saunoriene, Loreta; Ragulskis, Minvydas

    2011-11-01

    A secure steganographic communication algorithm based on patterns evolving in a Beddington-de Angelis-type predator-prey model with self- and cross-diffusion is proposed in this paper. Small perturbations of the initial states of the system around the state of equilibrium result in the evolution of self-organizing patterns. Small differences between initial perturbations also result in slight differences in the evolving patterns. It is shown that the generation of interpretable target patterns cannot be considered a secure means of communication, because contours of the secret image can be retrieved from the cover image using statistical techniques when it represents only small perturbations of the initial states of the system. An alternative approach, in which the cover image represents the self-organizing pattern that has evolved from initial states perturbed using the dot-skeleton representation of the secret image, can be considered a safe visual communication technique protecting both the secret image and the communicating parties.

  11. Improvements on EMG-based handwriting recognition with DTW algorithm.

    PubMed

    Li, Chengzhang; Ma, Zheren; Yao, Lin; Zhang, Dingguo

    2013-01-01

    Previous works have shown that Dynamic Time Warping (DTW) algorithm is a proper method of feature extraction for electromyography (EMG)-based handwriting recognition. In this paper, several modifications are proposed to improve the classification process and enhance recognition accuracy. A two-phase template making approach has been introduced to generate templates with more salient features, and modified Mahalanobis Distance (mMD) approach is used to replace Euclidean Distance (ED) in order to minimize the interclass variance. To validate the effectiveness of such modifications, experiments were conducted, in which four subjects wrote lowercase letters at a normal speed and four-channel EMG signals from forearms were recorded. Results of offline analysis show that the improvements increased the average recognition accuracy by 9.20%.
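
    A compact reference implementation of the DTW distance the work builds on; the per-sample cost below is a plain absolute difference, which is where the paper's modified Mahalanobis distance would be substituted.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])          # local cost (swap in mMD here)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = np.sin(np.linspace(0, 2 * np.pi, 60))          # stored letter template
query = np.sin(np.linspace(0, 2 * np.pi, 75)) + 0.05      # same letter, different speed
other = np.cos(np.linspace(0, 2 * np.pi, 75))             # a different letter
print(dtw_distance(query, template) < dtw_distance(other, template))
```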

  12. Fuzzy Genetic Algorithm Based on Principal Operation and Inequity Degree

    NASA Astrophysics Data System (ADS)

    Li, Fachao; Jin, Chenxia

    In this paper, starting from the structure of fuzzy information and distinguishing principal indexes from assistant indexes, we give a comparison of fuzzy information based on the synthesizing effect and an operation of fuzzy optimization based on principal-index transformation; further, we propose an axiom system of fuzzy inequity degree from the essence of constraints and give an instructive metric method. Then, combining the genetic algorithm, we give fuzzy optimization methods based on principal operation and inequity degree (denoted BPO&ID-FGA for short). Finally, we consider its convergence using Markov chain theory and analyze its performance through an example. All these indicate that BPO&ID-FGA not only effectively merges decision consciousness into the optimization process but also possesses better global convergence, so it can be applied to many fuzzy optimization problems.

  13. A segmental hidden semi-Markov model (HSMM)-based diagnostics and prognostics framework and methodology

    NASA Astrophysics Data System (ADS)

    Dong, Ming; He, David

    2007-07-01

    Diagnostics and prognostics are two important aspects in a condition-based maintenance (CBM) program. However, these two tasks are often separately performed. For example, data might be collected and analysed separately for diagnosis and prognosis. This practice increases the cost and reduces the efficiency of CBM and may affect the accuracy of the diagnostic and prognostic results. In this paper, a statistical modelling methodology for performing both diagnosis and prognosis in a unified framework is presented. The methodology is developed based on segmental hidden semi-Markov models (HSMMs). An HSMM is a hidden Markov model (HMM) with temporal structures. Unlike HMM, an HSMM does not follow the unrealistic Markov chain assumption and therefore provides more powerful modelling and analysis capability for real problems. In addition, an HSMM allows modelling the time duration of the hidden states and therefore is capable of prognosis. To facilitate the computation in the proposed HSMM-based diagnostics and prognostics, new forward-backward variables are defined and a modified forward-backward algorithm is developed. The existing state duration estimation methods are inefficient because they require a huge storage and computational load. Therefore, a new approach is proposed for training HSMMs in which state duration probabilities are estimated on the lattice (or trellis) of observations and states. The model parameters are estimated through the modified forward-backward training algorithm. The estimated state duration probability distributions combined with state-changing point detection can be used to predict the useful remaining life of a system. The evaluation of the proposed methodology was carried out through a real world application: health monitoring of hydraulic pumps. In the tests, the recognition rates for all states are greater than 96%. For each individual pump, the recognition rate is increased by 29.3% in comparison with HMMs. Because of the temporal

  14. A Framework for Geographic Object-Based Image Analysis (GEOBIA) based on geographic ontology

    NASA Astrophysics Data System (ADS)

    Gu, H. Y.; Li, H. T.; Yan, L.; Lu, X. J.

    2015-06-01

    GEOBIA (Geographic Object-Based Image Analysis) is not only a hot topic of current remote sensing and geographical research; it is also believed to be a paradigm in remote sensing and GIScience. The lack of a systematic approach designed to conceptualize and formalize the class definitions makes GEOBIA a highly subjective and difficult method to reproduce. This paper aims to put forward a framework for GEOBIA based on geographic ontology theory, which realizes a true reappearance of the "Geographic entities - Image objects - Geographic objects" chain. It consists of three steps: first, geographical entities are described by a geographic ontology; second, a semantic network model is built based on OWL (ontology web language); finally, geographical objects are classified with decision rules or other classifiers. A case study of a farmland ontology was conducted to describe the framework. The strength of this framework is that it provides interpretation strategies and a global framework for GEOBIA with properties such as objectivity, comprehensiveness and universality, which avoids inconsistencies caused by different experts' experience and provides an objective model for image analysis.

  15. A new SPECT reconstruction algorithm based on the Novikov explicit inversion formula

    NASA Astrophysics Data System (ADS)

    Kunyansky, Leonid A.

    2001-04-01

    We present a new reconstruction algorithm for single-photon emission computed tomography. The algorithm is based on the Novikov explicit inversion formula for the attenuated Radon transform with non-uniform attenuation. Our reconstruction technique can be viewed as a generalization of both the filtered backprojection algorithm and the Tretiak-Metz algorithm. We test the performance of the present algorithm in a variety of numerical experiments. Our numerical examples show that the algorithm is capable of accurate image reconstruction even in the case of strongly non-uniform attenuation coefficient, similar to that occurring in a human thorax.

  16. Android platform based smartphones for a logistical remote association repair framework.

    PubMed

    Lien, Shao-Fan; Wang, Chun-Chieh; Su, Juhng-Perng; Chen, Hong-Ming; Wu, Chein-Hsing

    2014-01-01

    The maintenance of large-scale systems is an important issue for logistics support planning. In this paper, we developed a Logistical Remote Association Repair Framework (LRARF) to aid repairmen in keeping the system available. LRARF includes four subsystems: smart mobile phones, a Database Management System (DBMS), a Maintenance Support Center (MSC) and wireless networks. The repairman uses smart mobile phones to capture QR-codes and the images of faulty circuit boards. The captured QR-codes and images are transmitted to the DBMS so the invalid modules can be recognized via the proposed algorithm. In this paper, the Linear Projective Transform (LPT) is employed for fast QR-code calibration. Moreover, the ANFIS-based data mining system is used for module identification and searching automatically for the maintenance manual corresponding to the invalid modules. The inputs of the ANFIS-based data mining system are the QR-codes and image features; the output is the module ID. DBMS also transmits the maintenance manual back to the maintenance staff. If modules are not recognizable, the repairmen and center engineers can obtain the relevant information about the invalid modules through live video. The experimental results validate the applicability of the Android-based platform in the recognition of invalid modules. In addition, the live video can also be recorded synchronously on the MSC for later use. PMID:24967603

  17. Android Platform Based Smartphones for a Logistical Remote Association Repair Framework

    PubMed Central

    Lien, Shao-Fan; Wang, Chun-Chieh; Su, Juhng-Perng; Chen, Hong-Ming; Wu, Chein-Hsing

    2014-01-01

    The maintenance of large-scale systems is an important issue for logistics support planning. In this paper, we developed a Logistical Remote Association Repair Framework (LRARF) to aid repairmen in keeping the system available. LRARF includes four subsystems: smart mobile phones, a Database Management System (DBMS), a Maintenance Support Center (MSC) and wireless networks. The repairman uses smart mobile phones to capture QR-codes and the images of faulty circuit boards. The captured QR-codes and images are transmitted to the DBMS so the invalid modules can be recognized via the proposed algorithm. In this paper, the Linear Projective Transform (LPT) is employed for fast QR-code calibration. Moreover, the ANFIS-based data mining system is used for module identification and searching automatically for the maintenance manual corresponding to the invalid modules. The inputs of the ANFIS-based data mining system are the QR-codes and image features; the output is the module ID. DBMS also transmits the maintenance manual back to the maintenance staff. If modules are not recognizable, the repairmen and center engineers can obtain the relevant information about the invalid modules through live video. The experimental results validate the applicability of the Android-based platform in the recognition of invalid modules. In addition, the live video can also be recorded synchronously on the MSC for later use. PMID:24967603

  18. Android platform based smartphones for a logistical remote association repair framework.

    PubMed

    Lien, Shao-Fan; Wang, Chun-Chieh; Su, Juhng-Perng; Chen, Hong-Ming; Wu, Chein-Hsing

    2014-06-25

    The maintenance of large-scale systems is an important issue for logistics support planning. In this paper, we developed a Logistical Remote Association Repair Framework (LRARF) to aid repairmen in keeping the system available. LRARF includes four subsystems: smart mobile phones, a Database Management System (DBMS), a Maintenance Support Center (MSC) and wireless networks. The repairman uses smart mobile phones to capture QR-codes and the images of faulty circuit boards. The captured QR-codes and images are transmitted to the DBMS so the invalid modules can be recognized via the proposed algorithm. In this paper, the Linear Projective Transform (LPT) is employed for fast QR-code calibration. Moreover, the ANFIS-based data mining system is used for module identification and searching automatically for the maintenance manual corresponding to the invalid modules. The inputs of the ANFIS-based data mining system are the QR-codes and image features; the output is the module ID. DBMS also transmits the maintenance manual back to the maintenance staff. If modules are not recognizable, the repairmen and center engineers can obtain the relevant information about the invalid modules through live video. The experimental results validate the applicability of the Android-based platform in the recognition of invalid modules. In addition, the live video can also be recorded synchronously on the MSC for later use.

  19. A genetic-based algorithm for personalized resistance training

    PubMed Central

    Kiely, J; Suraci, B; Collins, DJ; de Lorenzo, D; Pickering, C; Grimaldi, KA

    2016-01-01

    Association studies have identified dozens of genetic variants linked to training responses and sport-related traits. However, no intervention studies utilizing the idea of personalised training based on an athlete's genetic profile have been conducted. Here we propose an algorithm that allows achieving greater results in response to high- or low-intensity resistance training programs by predicting an athlete's potential for the development of power and endurance qualities with a panel of 15 performance-associated gene polymorphisms. To develop and validate such an algorithm we performed two studies in independent cohorts of male athletes (study 1: athletes from different sports (n = 28); study 2: soccer players (n = 39)). In both studies athletes completed an eight-week high- or low-intensity resistance training program, which either matched or mismatched their individual genotype. Two variables of explosive power and aerobic fitness, as measured by the countermovement jump (CMJ) and aerobic 3-min cycle test (Aero3), were assessed pre and post 8 weeks of resistance training. In study 1, the athletes from the matched groups (i.e. high-intensity trained with power genotype or low-intensity trained with endurance genotype) significantly increased results in CMJ (P = 0.0005) and Aero3 (P = 0.0004). In contrast, athletes from the mismatched group (i.e. high-intensity trained with endurance genotype or low-intensity trained with power genotype) demonstrated non-significant improvements in CMJ (P = 0.175) and less prominent results in Aero3 (P = 0.0134). In study 2, soccer players from the matched group also demonstrated significantly greater (P < 0.0001) performance changes in both tests compared to the mismatched group. Among non- or low responders of both studies, 82% of athletes (both for CMJ and Aero3) were from the mismatched group (P < 0.0001). Our results indicate that matching the individual's genotype with the appropriate training modality leads to more effective

  20. Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm

    PubMed Central

    Veladi, H.

    2014-01-01

    A pushover analysis method based on semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to the conventional design methods to show the power or weakness of the algorithm. PMID:25202717

  1. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    Decoding algorithms based on the trellis representation of a code (block or convolutional) drastically reduce decoding complexity. The best known and most commonly used trellis-based decoding algorithm is the Viterbi algorithm. It is a maximum likelihood decoding algorithm. Convolutional codes with the Viterbi decoding have been widely used for error control in digital communications over the last two decades. This chapter is concerned with the application of the Viterbi decoding algorithm to linear block codes. First, the Viterbi algorithm is presented. Then, optimum sectionalization of a trellis to minimize the computational complexity of a Viterbi decoder is discussed and an algorithm is presented. Some design issues for IC (integrated circuit) implementation of a Viterbi decoder are considered and discussed. Finally, a new decoding algorithm based on the principle of compare-select-add is presented. This new algorithm can be applied to both block and convolutional codes and is more efficient than the conventional Viterbi algorithm based on the add-compare-select principle. This algorithm is particularly efficient for rate 1/n antipodal convolutional codes and their high-rate punctured codes. It reduces computational complexity by one-third compared with the Viterbi algorithm.
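
    A minimal sketch of the add-compare-select recursion over a generic trellis (not the compare-select-add variant the chapter introduces); the trellis, branch metric, and state count below are toy values.

```python
import numpy as np

def viterbi(branch_metric, transitions, n_states, n_steps, start_state=0):
    """Generic add-compare-select Viterbi search over a trellis.

    branch_metric(t, s, s2) -> cost of the edge s -> s2 at section t
    transitions[s] -> states reachable from s
    Returns the minimum-cost state path."""
    INF = float("inf")
    cost = np.full(n_states, INF)
    cost[start_state] = 0.0
    back = np.zeros((n_steps, n_states), dtype=int)
    for t in range(n_steps):
        new_cost = np.full(n_states, INF)
        for s in range(n_states):
            if cost[s] == INF:
                continue
            for s2 in transitions[s]:
                c = cost[s] + branch_metric(t, s, s2)   # add
                if c < new_cost[s2]:                    # compare-select
                    new_cost[s2], back[t, s2] = c, s
        cost = new_cost
    # trace back the survivor path from the best final state
    state = int(np.argmin(cost))
    path = [state]
    for t in range(n_steps - 1, -1, -1):
        state = int(back[t, state])
        path.append(state)
    return list(reversed(path))

# Toy 2-state trellis whose metric prefers state 0 except at section 2.
transitions = {0: [0, 1], 1: [0, 1]}
metric = lambda t, s, s2: 0.1 if s2 == (1 if t == 2 else 0) else 1.0
print(viterbi(metric, transitions, n_states=2, n_steps=5))
```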

  2. Research of image matching algorithm based on local features

    NASA Astrophysics Data System (ADS)

    Sun, Wei

    2015-07-01

    To address the low efficiency of the SIFT algorithm when an exhaustive method is used to search for the nearest and next-nearest neighbors of feature points, this paper introduces the K-D tree algorithm to index the feature points extracted from database images according to the tree structure. At the same time, using the concept of a weighted priority, the algorithm is further improved to enhance the efficiency of feature matching.
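
    A short sketch of the indexing idea: descriptors of the database images are placed in a k-d tree once, and each query descriptor retrieves its nearest and next-nearest neighbours for Lowe's ratio test. The descriptors below are random stand-ins, and the weighted-priority refinement from the paper is not included.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_descriptors(query_desc, db_desc, ratio=0.8):
    """Nearest/next-nearest matching with Lowe's ratio test via a k-d tree."""
    tree = cKDTree(db_desc)                      # index database descriptors once
    dists, idx = tree.query(query_desc, k=2)     # 1st and 2nd nearest neighbours
    keep = dists[:, 0] < ratio * dists[:, 1]     # accept only distinctive matches
    return [(int(q), int(idx[q, 0])) for q in np.nonzero(keep)[0]]

rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 128)).astype(np.float32)        # e.g. SIFT descriptors
queries = db[:20] + rng.normal(scale=0.01, size=(20, 128)).astype(np.float32)
print(match_descriptors(queries, db)[:5])
```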

  3. Voluntary, human rights-based family planning: a conceptual framework.

    PubMed

    Hardee, Karen; Kumar, Jan; Newman, Karen; Bakamjian, Lynn; Harris, Shannon; Rodríguez, Mariela; Brown, Win

    2014-03-01

    At the 2012 Family Planning Summit in London, world leaders committed to providing effective family planning information and services to 120 million additional women and girls by the year 2020. Amid positive response, some expressed concern that the numeric goal could signal a retreat from the human rights-centered approach that underpinned the 1994 International Conference on Population and Development. Achieving the FP2020 goal will take concerted and coordinated efforts among diverse stakeholders and a new programmatic approach supported by the public health and human rights communities. This article presents a new conceptual framework designed to serve as a path toward fulfilling the FP2020 goal. This new unifying framework, which incorporates human rights laws and principles within family-planning-program and quality-of-care frameworks, brings what have been parallel lines of thought together in one construct to make human rights issues related to family planning practical.

  4. A pipelined FPGA implementation of an encryption algorithm based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Thirer, Nonel

    2013-05-01

    With the evolution of digital data storage and exchange, it is essential to protect confidential information from any unauthorized access. High-performance encryption algorithms have been developed and implemented in software and hardware, and many methods of attacking the ciphertext have also been developed. In recent years, the genetic algorithm has gained much interest in the cryptanalysis of ciphertexts and also in encryption ciphers. This paper analyses the possibility of using the genetic algorithm as a multiple key sequence generator for an AES (Advanced Encryption Standard) cryptographic system, and of using a three-stage pipeline (with four main blocks: Input data, AES Core, Key generator, Output data) to provide fast encryption and storage/transmission of a large amount of data.

  5. Mutation-Based Artificial Fish Swarm Algorithm for Bound Constrained Global Optimization

    NASA Astrophysics Data System (ADS)

    Rocha, Ana Maria A. C.; Fernandes, Edite M. G. P.

    2011-09-01

    The mutation-based artificial fish swarm (AFS) algorithm presented herein includes mutation operators to prevent the algorithm from falling into local solutions, to diversify the search, and to accelerate convergence to the global optima. Three mutation strategies are introduced into the AFS algorithm to define the trial points that emerge from random, leaping and searching behaviors. Computational results show that the new algorithm outperforms other well-known global stochastic solution methods.

  6. Block clustering based on difference of convex functions (DC) programming and DC algorithms.

    PubMed

    Le, Hoai Minh; Le Thi, Hoai An; Dinh, Tao Pham; Huynh, Van Ngai

    2013-10-01

    We investigate difference of convex functions (DC) programming and the DC algorithm (DCA) to solve the block clustering problem in the continuous framework, which traditionally requires solving a hard combinatorial optimization problem. DC reformulation techniques and exact penalty in DC programming are developed to build an appropriate equivalent DC program of the block clustering problem. They lead to an elegant and explicit DCA scheme for the resulting DC program. Computational experiments show the robustness and efficiency of the proposed algorithm and its superiority over standard algorithms such as two-mode K-means, two-mode fuzzy clustering, and block classification EM.

  7. Automated ethernet-based test setup for long wave infrared camera analysis and algorithm evaluation

    NASA Astrophysics Data System (ADS)

    Edeler, Torsten; Ohliger, Kevin; Lawrenz, Sönke; Hussmann, Stephan

    2009-06-01

    In this paper we consider a new approach to automated camera calibration and specification. The proposed setup is optimized for uncooled long-wave infrared (thermal) cameras, although the concept itself is not restricted to such cameras. Every component of the setup, such as the black-body source, climate chamber, remote power switch, and the camera itself, is connected to a network via Ethernet, and a Windows XP workstation controls all components using the TCL scripting language. Besides communicating with the components, the scripting tool can also run MATLAB code via the MATLAB kernel. Data exchange during the measurement is possible and offers a variety of advantages, from a drastic reduction in the amount of data to a considerable speedup of the measuring procedure through data analysis during measurement. A parameter-based software framework is presented for creating generic test cases, in which modifications to the test scenario do not require any programming skills. In the second part of the paper, measurement results of a self-developed GigE Vision thermal camera are presented and correction algorithms providing high-quality image output are shown. These algorithms are fully implemented in the FPGA of the camera to provide real-time processing while maintaining GigE Vision as the standard transmission protocol and interface to arbitrary software tools. Artifacts taken into account are spatial noise, defective pixels, and offset drift due to self-heating after power-on.

  8. The framework of weighted subset-hood Mamdani fuzzy rule based system rule extraction (MFRBS-WSBA) for forecasting electricity load demand

    NASA Astrophysics Data System (ADS)

    Mansor, Rosnalini; Kasim, Maznah Mat; Othman, Mahmod

    2016-08-01

    Fuzzy rules are very important elements that should be taken into consideration seriously when applying any fuzzy system. This paper proposes the framework of a Mamdani Fuzzy Rule-Based System with Weighted Subset-hood Based Algorithm (MFRBS-WSBA) for fuzzy rule extraction in electricity load demand forecasting. The framework consists of six main steps: (1) data collection and selection; (2) data preprocessing; (3) variable selection; (4) fuzzy model; (5) comparison with other FIS; and (6) performance evaluation. The objective of this paper is to present the fourth step of the framework, which applies the new WSBA rule-extraction method to electricity load forecasting. Electricity load demand data from Malaysia are used as the numerical data in this framework. The preliminary results show that the WSBA method can be an alternative method for extracting fuzzy rules to forecast electricity load demand.

  9. Visual Contrast Enhancement Algorithm Based on Histogram Equalization

    PubMed Central

    Ting, Chih-Chung; Wu, Bing-Fei; Chung, Meng-Liang; Chiu, Chung-Cheng; Wu, Ya-Ching

    2015-01-01

    Image enhancement techniques primarily improve the contrast of an image to lend it a better appearance. One of the popular enhancement methods is histogram equalization (HE) because of its simplicity and effectiveness. However, it is rarely applied to consumer electronics products because it can cause excessive contrast enhancement and feature loss problems. These problems make the images processed by HE look unnatural and introduce unwanted artifacts in them. In this study, a visual contrast enhancement algorithm (VCEA) based on HE is proposed. VCEA considers the requirements of the human visual perception in order to address the drawbacks of HE. It effectively solves the excessive contrast enhancement problem by adjusting the spaces between two adjacent gray values of the HE histogram. In addition, VCEA reduces the effects of the feature loss problem by using the obtained spaces. Furthermore, VCEA enhances the detailed textures of an image to generate an enhanced image with better visual quality. Experimental results show that images obtained by applying VCEA have higher contrast and are more suited to human visual perception than those processed by HE and other HE-based methods. PMID:26184219
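
    The abstract describes VCEA only qualitatively, so the sketch below shows just the classical histogram-equalization mapping that VCEA starts from; the spacing adjustment between adjacent gray values that VCEA adds on top of this mapping is not reproduced here, and the function name and parameters are illustrative rather than taken from the paper.

        import numpy as np

        def histogram_equalization(image, levels=256):
            """Classical HE mapping for an 8-bit grayscale image (the baseline VCEA builds on)."""
            hist = np.bincount(image.ravel(), minlength=levels)
            cdf = np.cumsum(hist).astype(np.float64)
            cdf_min = cdf[cdf > 0][0]
            # Stretch the cumulative histogram over the full gray-level range.
            mapping = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1)).astype(np.uint8)
            return mapping[image]

    VCEA would then redistribute the mapped levels so that the spacing between adjacent output gray values stays within perceptually motivated bounds, which is where the excessive-enhancement and feature-loss problems are addressed.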

  10. A MATLAB GUI based algorithm for modelling Magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Timur, Emre; Onsen, Funda

    2016-04-01

    The magnetotelluric method is an electromagnetic survey technique that images the electrical resistivity distribution of subsurface layers. It simultaneously measures the total electromagnetic field components, namely the time-varying magnetic field B(t) and the induced electric field E(t). Forward modeling of the magnetotelluric method is beneficial for survey planning, for understanding the method (especially for students), and as part of the iteration process in inverting measured data. The MTINV program can be used to model and interpret geophysical electromagnetic (EM) magnetotelluric (MT) measurements using a horizontally layered earth model. This program uses either the apparent resistivity and phase components of the MT data together or the apparent resistivity data alone. Parameter optimization, which is based on a linearized inversion method, can be utilized in 1D interpretations. In this study, a new MATLAB GUI-based algorithm has been written for the 1D forward modeling of the magnetotelluric response function for multiple layers, intended for use in educational studies. The code also includes an automatic Gaussian noise option for a desired noise ratio. Numerous applications were carried out for 2-, 3-, and 4-layer models, and the resulting theoretical data were interpreted using MTINV in order to evaluate the initial parameters and the effect of noise. Keywords: Education, Forward Modelling, Inverse Modelling, Magnetotelluric
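
    The authors' MATLAB GUI code is not reproduced in the abstract; the following is a minimal Python sketch of the standard 1D layered-earth impedance recursion that any such forward modeller evaluates, with apparent resistivity and phase derived from the surface impedance. Function and variable names are illustrative.

        import numpy as np

        MU0 = 4e-7 * np.pi  # magnetic permeability of free space (H/m)

        def mt1d_forward(resistivities, thicknesses, frequencies):
            """Apparent resistivity and phase for a 1-D layered earth (standard impedance recursion)."""
            rho_a, phase = [], []
            for f in frequencies:
                w = 2.0 * np.pi * f
                # Impedance of the bottom half-space.
                k = np.sqrt(1j * w * MU0 / resistivities[-1])
                Z = 1j * w * MU0 / k
                # Recurse upward through the finite-thickness layers.
                for rho, h in zip(resistivities[-2::-1], thicknesses[::-1]):
                    k = np.sqrt(1j * w * MU0 / rho)
                    Z0 = 1j * w * MU0 / k
                    Z = Z0 * (Z + Z0 * np.tanh(k * h)) / (Z0 + Z * np.tanh(k * h))
                rho_a.append(abs(Z) ** 2 / (w * MU0))
                phase.append(np.degrees(np.angle(Z)))
            return np.array(rho_a), np.array(phase)

    For example, mt1d_forward([100, 10, 1000], [500, 2000], np.logspace(-3, 3, 40)) returns the apparent-resistivity and phase sounding curves of a three-layer model over six decades of frequency.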

  11. An ontology-based collaborative service framework for agricultural information

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In recent years, China has developed modern agriculture energetically. An effective information framework is an important way to provide farms with agricultural information services and improve farmer's production technology and their income. The mountain areas in central China are dominated by agri...

  12. Framework Based Guidance Navigation and Control Flight Software Development

    NASA Technical Reports Server (NTRS)

    McComas, David

    2007-01-01

    This viewgraph presentation describes NASA's guidance navigation and control flight software development background. The contents include: 1) NASA/Goddard Guidance Navigation and Control (GN&C) Flight Software (FSW) Development Background; 2) GN&C FSW Development Improvement Concepts; and 3) GN&C FSW Application Framework.

  13. A Modified MinMax k-Means Algorithm Based on PSO

    PubMed Central

    2016-01-01

    The MinMax k-means algorithm is widely used to tackle the effect of bad initialization by minimizing the maximum intra-cluster error. Two parameters, the exponent parameter and the memory parameter, are involved in the executive process. Since different parameter values lead to different clustering errors, it is crucial to choose appropriate parameters. In the original algorithm, a practical framework is given that extends MinMax k-means to automatically adapt the exponent parameter to the data set. It has been believed that once the maximum exponent parameter is set, the program can reach the lowest intra-cluster errors. However, our experiments show that this is not always correct. In this paper, we modify the MinMax k-means algorithm by using PSO to determine the parameter values that allow the algorithm to attain the lowest clustering errors. The proposed clustering method is tested on several popular data sets under different initial conditions and is compared with the k-means algorithm and the original MinMax k-means algorithm. The experimental results indicate that our proposed algorithm can reach the lowest clustering errors automatically. PMID:27656201

  15. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    Building on an analysis of the cosine light field with a determined analytic expression and of the pseudo-inverse method, the object is illuminated by a preset light field defined by a discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the computational ghost imaging algorithm based on the discrete Fourier transform measurement matrix is deduced theoretically and compared with the compressive computational ghost imaging algorithm based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are performed to verify the theoretical analysis. When the number of sampling measurements is close to the number of object pixels, the rank of the discrete Fourier transform matrix equals that of the random measurement matrix, the PSNR of the images reconstructed by the FGI and PGI algorithms is similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the images reconstructed by the FGI algorithm decreases slowly, while the PSNR of the images reconstructed by the PGI and CGI algorithms decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise with a low-pass filter and thus denoise the reconstruction, achieving a higher denoising capability than the CGI algorithm. Overall, the FGI algorithm improves both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
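
    As a rough illustration of the measurement and reconstruction steps described above (not the authors' exact cosine light field), the sketch below uses rows of a discrete Fourier transform matrix as illumination patterns and recovers the object with the pseudo-inverse; a physical system would realize these patterns as preset cosine illuminations, and all names here are illustrative.

        import numpy as np

        def ghost_imaging_dft(obj, n_measurements):
            """Sketch: pseudo-inverse reconstruction from measurements made with DFT-matrix patterns."""
            x = obj.ravel().astype(float)
            n = x.size
            dft = np.fft.fft(np.eye(n)) / np.sqrt(n)   # full n x n DFT matrix
            phi = dft[:n_measurements]                 # first rows serve as "illumination patterns"
            y = phi @ x                                # bucket (single-pixel) measurements
            x_hat = np.linalg.pinv(phi) @ y            # pseudo-inverse reconstruction
            return x_hat.real.reshape(obj.shape)

    Reducing n_measurements keeps only the low-frequency rows, so the pseudo-inverse reconstruction degrades gracefully and acts like a low-pass filter, which is consistent with the slow PSNR decay and denoising behavior reported in the abstract.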

  16. A stochastic context free grammar based framework for analysis of protein sequences

    PubMed Central

    Dyrka, Witold; Nebel, Jean-Christophe

    2009-01-01

    Background In the last decade, there have been many applications of formal language theory in bioinformatics such as RNA structure prediction and detection of patterns in DNA. However, in the field of proteomics, the size of the protein alphabet and the complexity of the relationships between amino acids have mainly limited the application of formal language theory to the production of grammars whose expressive power is not higher than stochastic regular grammars. These grammars, like other state-of-the-art methods, cannot cover higher-order dependencies such as nested and crossing relationships that are common in proteins. In order to overcome some of these limitations, we propose a Stochastic Context Free Grammar based framework for the analysis of protein sequences where grammars are induced using a genetic algorithm. Results This framework was implemented in a system aiming at the production of binding site descriptors. These descriptors not only allow detection of protein regions that are involved in these sites, but also provide insight into their structure. Grammars were induced using quantitative properties of amino acids to deal with the size of the protein alphabet. Moreover, we imposed some structural constraints on grammars to reduce the extent of the rule search space. Finally, grammars based on different properties were combined to convey as much information as possible. Evaluation was performed on sites of various sizes and complexity described either by PROSITE patterns, domain profiles or a set of patterns. Results show the produced binding site descriptors are human-readable and, hence, highlight biologically meaningful features. Moreover, they achieve good accuracy in both annotation and detection. In addition, findings suggest that, unlike current state-of-the-art methods, our system may be particularly suited to deal with patterns shared by non-homologous proteins. Conclusion A new Stochastic Context Free Grammar based framework has been

  17. An Improved Direction Finding Algorithm Based on Toeplitz Approximation

    PubMed Central

    Wang, Qing; Chen, Hua; Zhao, Guohuang; Chen, Bin; Wang, Pichao

    2013-01-01

    In this paper, a novel direction-of-arrival (DOA) estimation algorithm, the Toeplitz fourth-order cumulants multiple signal classification (TFOC-MUSIC) algorithm, is proposed by combining a fast MUSIC-like algorithm, the modified fourth-order cumulants MUSIC (MFOC-MUSIC) algorithm, with Toeplitz approximation. In the proposed algorithm, the redundant information in the cumulants is removed. In addition, the computational complexity is reduced owing to the decreased dimension of the fourth-order cumulants (FOC) matrix, which equals the number of virtual array elements; that is, the effective array aperture of the physical array remains unchanged. However, because of the finite number of sampling snapshots, the reduced-rank FOC matrix carries an estimation error and the DOA estimation performance degrades. To improve the estimation performance, Toeplitz approximation is introduced to recover the Toeplitz structure of the reduced-dimension FOC matrix, like the ideal matrix whose Toeplitz structure yields optimal estimates. The theoretical formulas of the proposed algorithm are derived, and simulation results are presented. The simulations show that, in comparison with the MFOC-MUSIC algorithm, the TFOC-MUSIC algorithm yields excellent performance in both spatially white and spatially colored noise environments. PMID:23296331
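
    The Toeplitz approximation step itself is simple to illustrate: replace every diagonal of the estimated cumulant matrix by its average, so that the result has the Toeplitz structure an ideal (infinite-snapshot) matrix would possess. The sketch below is a generic version of that projection, not the paper's full TFOC-MUSIC pipeline; for Hermitian cumulant matrices one would additionally enforce conjugate symmetry.

        import numpy as np

        def toeplitz_approximation(R):
            """Average each diagonal of a square matrix R to recover the nearest Toeplitz structure."""
            n = R.shape[0]
            T = np.empty_like(R)
            for k in range(-(n - 1), n):
                mean_k = np.diagonal(R, offset=k).mean()
                rows = np.arange(max(0, -k), min(n, n - k))
                T[rows, rows + k] = mean_k   # constant value along the k-th diagonal
            return T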

  18. A CORBA-based object framework with patient identification translation and dynamic linking. Methods for exchanging patient data.

    PubMed

    Wang, C; Ohe, K

    1999-03-01

    Exchanging and integrating patient data across heterogeneous databases and institutional boundaries poses many problems. We focused on two issues: (1) how to identify identical patients between different systems and institutions in the absence of universal patient identifiers; and (2) how to link patient data across heterogeneous databases and institutional boundaries. To solve these problems, we created a patient identification (ID) translation model and a dynamic linking method in the Common Object Request Broker Architecture (CORBA) environment. The algorithm for patient ID translation is based on patient attribute matching plus computer-assisted human checking; the method for dynamic linking is temporal mapping. By implementing these methods in computer systems with the help of distributed object computing technology, we built a prototype of a CORBA-based object framework in which the patient ID translation and dynamic linking methods were embedded. Our experiments with a Web-based user interface using the object framework and dynamic linking through the object framework were successful. These methods are important for exchanging and integrating patient data across heterogeneous databases and institutional boundaries.

  19. WS-BP: An efficient wolf search based back-propagation algorithm

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd; Rehman, M. Z.; Khan, Abdullah

    2015-05-01

    Wolf Search (WS) is a heuristic-based optimization algorithm. Inspired by the preying and survival capabilities of wolves, the algorithm is highly capable of searching large candidate-solution spaces. This paper investigates the use of the WS algorithm in combination with the back-propagation neural network (BPNN) algorithm to overcome the local minima problem and to improve convergence in gradient descent. The performance of the proposed Wolf Search based Back-Propagation (WS-BP) algorithm is compared with the Artificial Bee Colony Back-Propagation (ABC-BP), Bat Based Back-Propagation (Bat-BP), and conventional BPNN algorithms. Specifically, the OR and XOR datasets are used for training the network. The simulation results show that the WS-BP algorithm effectively avoids local minima and converges to the global minimum.

  20. One high-accuracy camera calibration algorithm based on computer vision images

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    Camera calibration is the first step of computer vision and one of the most active research fields nowadays. In order to improve measurement precision, the internal parameters of the camera should be accurately calibrated. Therefore, a high-accuracy camera calibration algorithm is proposed based on images of planar or three-dimensional targets. Using this algorithm, the internal parameters of the camera are calibrated from the existing planar target in a vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is clearly improved compared with the conventional linear algorithm, the Tsai general algorithm, and the Zhang Zhengyou calibration algorithm. The proposed algorithm can satisfy the needs of computer vision and provides a reference for precise measurement of relative position and attitude.
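
    The paper's own algorithm is not detailed in the abstract; as a point of reference, the sketch below shows a standard planar-target (chessboard) calibration with OpenCV, i.e. the kind of Zhang-style baseline the proposed method is compared against. The pattern size, square size, and function names are assumptions for the example, and the sketch assumes all images load successfully.

        import cv2
        import numpy as np

        def calibrate_from_chessboard(image_files, pattern_size=(9, 6), square_size=0.025):
            """Minimal planar-target calibration sketch using OpenCV (Zhang-style baseline)."""
            # 3-D coordinates of the chessboard corners in the target plane (Z = 0).
            objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
            objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size
            obj_points, img_points = [], []
            for fname in image_files:
                gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
                found, corners = cv2.findChessboardCorners(gray, pattern_size)
                if found:
                    obj_points.append(objp)
                    img_points.append(corners)
            # Estimate the intrinsic matrix and distortion coefficients.
            rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
                obj_points, img_points, gray.shape[::-1], None, None)
            return rms, K, dist

    The returned RMS reprojection error is the usual figure of merit when comparing calibration algorithms such as those listed in the abstract.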

  1. Classification of Medical Datasets Using SVMs with Hybrid Evolutionary Algorithms Based on Endocrine-Based Particle Swarm Optimization and Artificial Bee Colony Algorithms.

    PubMed

    Lin, Kuan-Cheng; Hsieh, Yi-Hsiu

    2015-10-01

    The classification and analysis of data is an important issue in today's research. Selecting a suitable set of features makes it possible to classify an enormous quantity of data quickly and efficiently. Feature selection is generally viewed as a feature subset selection problem, i.e. a combinatorial optimization problem. Evolutionary algorithms using random search methods have proven highly effective in obtaining solutions to optimization problems in a diversity of applications. In this study, we developed a hybrid evolutionary algorithm based on endocrine-based particle swarm optimization (EPSO) and artificial bee colony (ABC) algorithms in conjunction with a support vector machine (SVM) for the selection of optimal feature subsets for the classification of datasets. The results of experiments using specific UCI medical datasets demonstrate that the classification accuracy of the proposed hybrid evolutionary algorithm is superior to that of the basic PSO, EPSO, and ABC algorithms, using subsets with a reduced number of features.
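
    The EPSO/ABC hybrid itself is not specified in the abstract; the sketch below only illustrates the wrapper structure it shares with simpler approaches, namely candidate feature masks scored by cross-validated SVM accuracy, with a plain random search standing in for the swarm. The function names and the search strategy are assumptions for illustration, not the authors' method.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def fitness(mask, X, y):
            """Cross-validated SVM accuracy on the selected feature subset."""
            if not mask.any():
                return 0.0
            return cross_val_score(SVC(), X[:, mask], y, cv=5).mean()

        def random_search_feature_selection(X, y, n_iter=100, seed=0):
            """Stand-in for the swarm search: evaluate random feature masks and keep the best."""
            rng = np.random.default_rng(seed)
            best_mask, best_fit = None, -1.0
            for _ in range(n_iter):
                mask = rng.random(X.shape[1]) < 0.5   # candidate feature subset
                fit = fitness(mask, X, y)
                if fit > best_fit:
                    best_mask, best_fit = mask, fit
            return best_mask, best_fit

    In the paper, the random mask generation is replaced by swarm updates (EPSO and ABC moves), but the fitness evaluation by an SVM is the same wrapper idea.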

  2. Biased Randomized Algorithm for Fast Model-Based Diagnosis

    NASA Technical Reports Server (NTRS)

    Williams, Colin; Vartan, Farrokh

    2005-01-01

    A biased randomized algorithm has been developed to enable the rapid computational solution of a propositional- satisfiability (SAT) problem equivalent to a diagnosis problem. The closest competing methods of automated diagnosis are described in the preceding article "Fast Algorithms for Model-Based Diagnosis" and "Two Methods of Efficient Solution of the Hitting-Set Problem" (NPO-30584), which appears elsewhere in this issue. It is necessary to recapitulate some of the information from the cited articles as a prerequisite to a description of the present method. As used here, "diagnosis" signifies, more precisely, a type of model-based diagnosis in which one explores any logical inconsistencies between the observed and expected behaviors of an engineering system. The function of each component and the interconnections among all the components of the engineering system are represented as a logical system. Hence, the expected behavior of the engineering system is represented as a set of logical consequences. Faulty components lead to inconsistency between the observed and expected behaviors of the system, represented by logical inconsistencies. Diagnosis - the task of finding the faulty components - reduces to finding the components, the abnormalities of which could explain all the logical inconsistencies. One seeks a minimal set of faulty components (denoted a minimal diagnosis), because the trivial solution, in which all components are deemed to be faulty, always explains all inconsistencies. In the methods of the cited articles, the minimal-diagnosis problem is treated as equivalent to a minimal-hitting-set problem, which is translated from a combinatorial to a computational problem by mapping it onto the Boolean-satisfiability and integer-programming problems. The integer-programming approach taken in one of the prior methods is complete (in the sense that it is guaranteed to find a solution if one exists) and slow and yields a lower bound on the size of the

  3. A framework for graph-based synthesis, analysis, and visualization of HPC cluster job data.

    SciTech Connect

    Mayo, Jackson R.; Kegelmeyer, W. Philip, Jr.; Wong, Matthew H.; Pebay, Philippe Pierre; Gentile, Ann C.; Thompson, David C.; Roe, Diana C.; De Sapio, Vincent; Brandt, James M.

    2010-08-01

    The monitoring and system analysis of high performance computing (HPC) clusters is of increasing importance to the HPC community. Analysis of HPC job data can be used to characterize system usage and to diagnose and examine failure modes and their effects. This analysis is not straightforward, however, due to the complex relationships that exist between jobs. These relationships are based on a number of factors, including shared compute nodes between jobs, proximity of jobs in time, etc. Graph-based techniques represent an approach that is particularly well suited to this problem, and provide an effective technique for discovering important relationships in job queuing and execution data. The efficacy of these techniques is rooted in the use of a semantic graph as a knowledge representation tool. In a semantic graph, job data, represented in a combination of numerical and textual forms, can be flexibly processed into edges, with corresponding weights, expressing relationships between jobs, nodes, users, and other relevant entities. This graph-based representation permits formal manipulation by a number of analysis algorithms. This report presents a methodology and software implementation that leverages semantic graph-based techniques for the system-level monitoring and analysis of HPC clusters based on job queuing and execution data. Ontology development and graph synthesis are discussed with respect to the domain of HPC job data. The framework developed automates the synthesis of graphs from a database of job information. It also provides a front end, enabling visualization of the synthesized graphs. Additionally, an analysis engine is incorporated that provides performance analysis, graph-based clustering, and failure prediction capabilities for HPC systems.
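
    To make the edge-synthesis idea concrete, here is a small sketch (not the report's ontology or implementation) that links jobs which shared compute nodes and weights each edge by the number of shared nodes; the job-record schema used here is an assumption.

        import networkx as nx

        def build_job_graph(jobs):
            """Relate jobs that shared compute nodes; edge weight = number of shared nodes.

            `jobs` is assumed to be a list of dicts like
            {"id": "1234", "user": "alice", "nodes": {"n01", "n02"}, "start": 0, "end": 3600}.
            """
            g = nx.Graph()
            for job in jobs:
                g.add_node(job["id"], user=job["user"], start=job["start"], end=job["end"])
            for i, a in enumerate(jobs):
                for b in jobs[i + 1:]:
                    shared = a["nodes"] & b["nodes"]
                    if shared:
                        g.add_edge(a["id"], b["id"], weight=len(shared))
            return g

    Temporal proximity, common users, and other relationships defined by the ontology would be added as further weighted edge types in the same way, after which clustering and failure-prediction algorithms can operate directly on the graph.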

  4. The ESA Cloud CCI project: Generation of Multi Sensor consistent Cloud Properties with an Optimal Estimation Based Retrieval Algorithm

    NASA Astrophysics Data System (ADS)

    Jerg, M.; Stengel, M.; Hollmann, R.; Poulsen, C.

    2012-04-01

    The ultimate objective of the ESA Climate Change Initiative (CCI) Cloud project is to provide long-term coherent cloud property data sets exploiting and improving on the synergetic capabilities of past, existing, and upcoming European and American satellite missions. The synergetic approach allows not only for improved accuracy and extended temporal and spatial sampling of retrieved cloud properties beyond what single instruments alone provide, but potentially also for improved (inter-)calibration and enhanced homogeneity and stability of the derived time series. Such advances are required by the scientific community to facilitate further progress in satellite-based climate monitoring, which leads to a better understanding of climate. Some of the primary objectives of ESA Cloud CCI are (1) the development of inter-calibrated radiance data sets, so-called Fundamental Climate Data Records, for ESA and non-ESA instruments through international collaboration, (2) the development of an optimal estimation based retrieval framework for cloud-related essential climate variables such as cloud cover, cloud top height and temperature, and liquid and ice water path, and (3) the development of two multi-annual global data sets for the mentioned cloud properties including uncertainty estimates. These two data sets are characterized by different combinations of satellite systems: the AVHRR heritage product comprising (A)ATSR, AVHRR and MODIS, and the novel (A)ATSR-MERIS product, which is based on a synergetic retrieval using both instruments. Both datasets cover the years 2007-2009 in the first project phase. ESA Cloud CCI will also carry out a comprehensive validation of the cloud property products and provide a common database within the framework of the Global Energy and Water Cycle Experiment (GEWEX). The presentation will give an overview of the ESA Cloud CCI project and its goals and approaches and then continue with results from the Round Robin algorithm
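
    The abstract does not spell out the retrieval mathematics, but optimal estimation retrievals of this kind conventionally minimize the standard cost function (a Rodgers-type form, given here for reference rather than as the project's exact formulation):

        \[
        J(\mathbf{x}) \;=\; \left[\mathbf{y} - F(\mathbf{x})\right]^{T} \mathbf{S}_{\epsilon}^{-1}
        \left[\mathbf{y} - F(\mathbf{x})\right]
        \;+\; \left(\mathbf{x} - \mathbf{x}_{a}\right)^{T} \mathbf{S}_{a}^{-1}
        \left(\mathbf{x} - \mathbf{x}_{a}\right)
        \]

    Here x is the state vector of cloud properties, y the measured radiances, F the forward model, x_a the a priori state, and S_epsilon, S_a the measurement and a priori error covariances; the uncertainty estimates mentioned above follow from the curvature of J at its minimum.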

  5. MAP Algorithms for Decoding Linear Block Codes Based on Sectionalized Trellis Diagrams

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1999-01-01

    The MAP algorithm is a trellis-based maximum a posteriori probability decoding algorithm. It is the heart of the turbo (or iterative) decoding which achieves an error performance near the Shannon limit. Unfortunately, the implementation of this algorithm requires large computation and storage. Furthermore, its forward and backward recursions result in long decoding delay. For practical applications, this decoding algorithm must be simplified and its decoding complexity and delay must be reduced. In this paper, the MAP algorithm and its variations, such as Log-MAP and Max-Log-MAP algorithms, are first applied to sectionalized trellises for linear block codes and carried out as two-stage decodings. Using the structural properties of properly sectionalized trellises, the decoding complexity and delay of the MAP algorithms can be reduced. Computation-wise optimum sectionalizations of a trellis for MAP algorithms are investigated. Also presented in this paper are bi-directional and parallel MAP decodings.
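
    For reference, the forward-backward recursions at the core of the MAP (BCJR) decoder discussed in this and the following record take the standard form below; the sectionalized-trellis and two-stage variants reorganize these computations rather than change them (the notation is the usual one, not taken from the paper):

        \[
        \alpha_k(s) = \sum_{s'} \alpha_{k-1}(s')\,\gamma_k(s', s), \qquad
        \beta_{k-1}(s') = \sum_{s} \beta_k(s)\,\gamma_k(s', s),
        \]
        \[
        L(u_k) = \ln \frac{\sum_{(s',s)\,:\,u_k = +1} \alpha_{k-1}(s')\,\gamma_k(s', s)\,\beta_k(s)}
                          {\sum_{(s',s)\,:\,u_k = -1} \alpha_{k-1}(s')\,\gamma_k(s', s)\,\beta_k(s)}.
        \]

    The log-MAP variant carries out the same recursions in the log domain with the Jacobian (max*) operation, and max-log-MAP drops the correction term, which is where the complexity reduction mentioned in the abstract comes from.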

  6. MAP Algorithms for Decoding Linear Block Codes Based on Sectionalized Trellis Diagrams

    NASA Technical Reports Server (NTRS)

    Lui, Ye; Lin, Shu; Fossorier, Marc P. C.

    2000-01-01

    The maximum a posteriori probability (MAP) algorithm is a trellis-based MAP decoding algorithm. It is the heart of turbo (or iterative) decoding that achieves an error performance near the Shannon limit. Unfortunately, the implementation of this algorithm requires large computation and storage. Furthermore, its forward and backward recursions result in long decoding delay. For practical applications, this decoding algorithm must be simplified and its decoding complexity and delay must be reduced. In this paper, the MAP algorithm and its variations, such as log-MAP and max-log-MAP algorithms, are first applied to sectionalized trellises for linear block codes and carried out as two-stage decodings. Using the structural properties of properly sectionalized trellises, the decoding complexity and delay of the MAP algorithms can be reduced. Computation-wise optimum sectionalizations of a trellis for MAP algorithms are investigated. Also presented in this paper are bidirectional and parallel MAP decodings.

  7. A GPU-based framework for simulation of medical ultrasound

    NASA Astrophysics Data System (ADS)

    Kutter, Oliver; Karamalis, Athanasios; Wein, Wolfgang; Navab, Nassir

    2009-02-01

    Simulation of ultrasound (US) images from volumetric medical image data has been shown to be an important tool in medical image analysis. However, there is a trade off between the accuracy of the simulation and its real-time performance. In this paper, we present a framework for acceleration of ultrasound simulation on the graphics processing unit (GPU) of commodity computer hardware. Our framework can accommodate ultrasound modeling with varying degrees of complexity. To demonstrate the flexibility of our proposed method, we have implemented several models of acoustic propagation through 3D volumes. We conducted multiple experiments to evaluate the performance of our method for its application in multi-modal image registration and training. The results demonstrate the high performance of the GPU accelerated simulation outperforming CPU implementations by up to two orders of magnitude and encourage the investigation of even more realistic acoustic models.

  8. Improvement of unsupervised texture classification based on genetic algorithms

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Togami, Yuuki; Arai, Kohei

    2004-11-01

    In a previous conference paper, the authors proposed a new unsupervised texture classification method based on genetic algorithms (GA). In that method, the GA is employed to determine the location and size of the typical textures in the target image. The proposed method consists of the following procedures: 1) determination of the number of classification categories; 2) each chromosome used in the GA consists of the coordinates of the center pixel of each training-area candidate and its size; 3) 50 chromosomes are generated using random numbers; 4) the fitness of each chromosome is calculated; the fitness is the product of the Classification Reliability in Mixed Texture Cases (CRMTC) and the Stability of NZMV against Scanning Field of View Size (SNSFS); 5) in the selection operation of the GA, the elite preservation strategy is employed; 6) in the crossover operation, multi-point crossover is employed and two parent chromosomes are selected by the roulette strategy; 7) in the mutation operation, the loci where bit inversion occurs are decided by a mutation rate; 8) return to procedure 4. However, this method has not been automated because it requires not only the target image but also the number of categories for classification. In this paper, we describe improvements for implementing automated texture classification. Experiments are conducted to evaluate the classification capability of the proposed method using images from Brodatz's photo album and an actual airborne multispectral scanner. The experimental results show that the proposed method can select appropriate texture samples and provide reasonable classification results.

  9. A robust digital watermarking algorithm based on framelet and SVD

    NASA Astrophysics Data System (ADS)

    Xiao, Moyan; He, Zhibiao; Quan, Tingwei

    2015-12-01

    Compared with the wavelet transform, the framelet transform has good time-frequency analysis ability and a redundant characteristic. SVD (Singular Value Decomposition) can obtain stable features of images that are not easily destroyed. To further improve the watermarking technique, a robust digital watermarking algorithm based on framelets and SVD is proposed. First, the Arnold transform is applied to the grayscale watermark image. Second, a framelet transform is performed on each host block, which is divided according to the size of the watermark. The scrambled watermark is then embedded into the largest singular values produced by applying SVD to each coarse band obtained from the framelet transform of the host image block. Finally, the inverse framelet transform is applied after the inverse SVD transform to obtain the embedded coarse band. Experimental results show that the proposed method achieves good robustness and security against traditional image processing attacks including noise, cropping, filtering, and JPEG compression. Moreover, the watermark imperceptibility of our method is better than that of the wavelet-based method, and it has stronger robustness than a pure framelet method without SVD.
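
    The embedding step can be sketched compactly: after the framelet transform of a host block, the scrambled watermark is added into the leading singular values of the coarse band. The snippet below shows only that SVD embedding stage with an assumed strength factor alpha; the Arnold scrambling, the framelet transform itself, and the extraction procedure are not reproduced.

        import numpy as np

        def embed_in_singular_values(band, watermark_block, alpha=0.05):
            """Additively embed a (scrambled) watermark block into the largest singular values."""
            U, S, Vt = np.linalg.svd(band, full_matrices=False)
            k = min(S.size, watermark_block.size)
            S_w = S.copy()
            S_w[:k] += alpha * watermark_block.ravel()[:k]   # embedding strength controlled by alpha
            return U @ np.diag(S_w) @ Vt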

  10. A rank-based Prediction Algorithm of Learning User's Intention

    NASA Astrophysics Data System (ADS)

    Shen, Jie; Gao, Ying; Chen, Cang; Gong, HaiPing

    Internet search has become an important part of people's daily life. People can find many types of information to meet different needs through search engines on the Internet. There are two issues with current search engines: first, users must predetermine the type of information they want and then switch to the corresponding search-engine interface. Second, most search engines support multiple search functions, each with its own separate interface, so when users need different types of information they must switch between different interfaces. In practice, most queries correspond to several types of information results. These queries can retrieve relevant results from various search engines; for example, the query "Palace" covers websites introducing the National Palace Museum, blogs, Wikipedia, pictures, and video information. This paper presents a new aggregation algorithm for all kinds of search results. It can filter and sort the search results by learning three aspects, namely the query words, the search results, and the search history logs, in order to detect the user's intention. Experiments demonstrate that this rank-based method for multiple types of search results is effective. It meets users' search needs well, enhances user satisfaction, provides an effective and rational model for optimizing search engines, and improves the user's search experience.

  11. Genetic Algorithm (GA)-Based Inclinometer Layout Optimization

    PubMed Central

    Liang, Weijie; Zhang, Ping; Chen, Xianping; Cai, Miao; Yang, Daoguo

    2015-01-01

    This paper presents numerical simulation results of an airflow inclinometer with sensitivity studies and thermal optimization of the printed circuit board (PCB) layout for an airflow inclinometer based on a genetic algorithm (GA). Due to the working principle of the gas sensor, the changes of the ambient temperature may cause dramatic voltage drifts of sensors. Therefore, eliminating the influence of the external environment for the airflow is essential for the performance and reliability of an airflow inclinometer. In this paper, the mechanism of an airflow inclinometer and the influence of different ambient temperatures on the sensitivity of the inclinometer will be examined by the ANSYS-FLOTRAN CFD program. The results show that with changes of the ambient temperature on the sensing element, the sensitivity of the airflow inclinometer is inversely proportional to the ambient temperature and decreases when the ambient temperature increases. GA is used to optimize the PCB thermal layout of the inclinometer. The finite-element simulation method (ANSYS) is introduced to simulate and verify the results of our optimal thermal layout, and the results indicate that the optimal PCB layout greatly improves (by more than 50%) the sensitivity of the inclinometer. The study may be useful in the design of PCB layouts that are related to sensitivity improvement of gas sensors. PMID:25897500

  12. Nonlinear model-based control algorithm for a distillation column using software sensor.

    PubMed

    Jana, Amiya Kumar; Samanta, Amar Nath; Ganguly, Saibal

    2005-04-01

    This paper presents the design of model-based globally linearizing control (GLC) structure for a distillation process within the differential geometric framework. The model of a nonideal binary distillation column, whose characteristics were highly nonlinear and strongly interactive, is used as a real process. The classical GLC law is comprised of a transformer (input-output linearizing state feedback), a nonlinear state observer, and an external PI controller. The tray temperature based short-cut observer (TTBSCO) has been used as a state estimator within the control structure, in which all tray temperatures were considered to be measured. Accordingly, the liquid phase composition of each tray was calculated online using the derived temperature-composition correlation. In the simulation experiment, the proposed GLC coupled with TTBSCO (GLC-TTBSCO) outperformed a conventional PI controller based on servo performances with and without measurement noise as well as on regulatory behaviors. In the subsequent part, the GLC law has been synthesized in conjunction with tray temperature based reduced-order observer (GLC-TTBROO) where the distillate and bottom compositions of the distillation process have been inferred from top and bottom product temperatures respectively, which were measured online. Finally, the comparative performance of the GLC-TTBSCO and the GLC-TTBROO has been addressed under parametric uncertainty and the GLC-TTBSCO algorithm provided slightly better performance than the GLC-TTBROO. The resulting control laws are rather general and can be easily adopted for other binary distillation columns.

  13. Mechanochemical synthesis of an yttrium based metal-organic framework.

    PubMed

    Singh, Niraj K; Hardi, Meenakshi; Balema, Viktor P

    2013-02-01

    For the first time a metal hydride has been used for the preparation of a metal-organic framework. MIL-78 has been synthesized by the solid-state mechanochemical reaction between yttrium hydride and trimesic acid. The process does not involve solvents and does not generate liquid by-products, thus proving the viability of the solid-state approach to the synthesis of MOFs.

  14. Quality-aware features-based noise level estimator for block matching and three-dimensional filtering algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Shaoping; Hu, Lingyan; Yang, Xiaohui

    2016-01-01

    The performance of conventional denoising algorithms is usually controlled by one or several parameters whose optimal settings depend on the contents of the processed images and the characteristics of the noises. Among these parameters, noise level is a fundamental parameter that is always assumed to be known by most of the existing denoising algorithms (so-called nonblind denoising algorithms), which largely limits the applicability of these nonblind denoising algorithms in many applications. Moreover, these nonblind algorithms do not always achieve the best denoised images in visual quality even when fed with the actual noise level parameter. To address these shortcomings, in this paper we propose a new quality-aware features-based noise level estimator (NLE), which consists of quality-aware features extraction and optimal noise level parameter prediction. First, considering that image local contrast features convey important structural information that is closely related to image perceptual quality, we utilize the marginal statistics of two local contrast operators, i.e., the gradient magnitude and the Laplacian of Gaussian (LOG), to extract quality-aware features. The proposed quality-aware features have very low computational complexity, making them well suited for time-constrained applications. Then we propose a learning-based framework where the noise level parameter is estimated based on the quality-aware features. Based on the proposed NLE, we develop a blind block matching and three-dimensional filtering (BBM3D) denoising algorithm which is capable of effectively removing additive white Gaussian noise, even coupled with impulse noise. The noise level parameter of the BBM3D algorithm is automatically tuned according to the quality-aware features, guaranteeing the best performance. As such, the classical block matching and three-dimensional algorithm can be transformed into a blind one in an unsupervised manner. Experimental results demonstrate that the
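
    The abstract names the two local-contrast operators but not the exact marginal statistics or the learned mapping to a noise level, so the sketch below only illustrates the feature-extraction stage with an assumed set of summary statistics; the regression model that predicts the noise level from these features is omitted.

        import numpy as np
        from scipy import ndimage

        def quality_aware_features(image, sigma=0.5):
            """Marginal statistics of gradient-magnitude and LoG responses as quality-aware features."""
            img = image.astype(float)
            gm = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))   # gradient magnitude
            log = ndimage.gaussian_laplace(img, sigma=sigma)                         # Laplacian of Gaussian
            feats = []
            for resp in (gm, log):
                feats += [resp.mean(), resp.std(), np.percentile(resp, 75), np.percentile(resp, 95)]
            return np.array(feats)

    A learning-based estimator would then map these features to a noise level, which in turn sets the parameter of the BM3D stage described in the abstract.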

  15. Autonomous photogrammetric network design based on changing environment genetic algorithms

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Lu, Nai-Guang; Dong, Mingli

    2008-10-01

    To obtain good accuracy, designers must consider how to place the cameras. Camera placement design is usually a multidimensional optimization problem, so genetic algorithms have been used to solve it. However, genetic algorithms can suffer from premature convergence: the search may settle at a local minimum or exhibit oscillating behavior, which leads to inaccurate designs. We therefore address the problem using changing-environment genetic algorithms: the work proposes giving the species groups different environments at different stages in order to improve performance. Computer simulation results show accelerated convergence and an improved ability to select good individuals. This work could also be used in other applications.

  16. Multiple sequence alignment algorithm based on a dispersion graph and ant colony algorithm.

    PubMed

    Chen, Weiyang; Liao, Bo; Zhu, Wen; Xiang, Xuyu

    2009-10-01

    In this article, we describe a representation for the process of multiple sequence alignment (MSA) and use it to solve the MSA problem. With this representation, we take every possible alignment result into account by defining the representation of gap insertion, the heuristic information value of every optional path, and the scoring rule. On the basis of the proposed multidimensional graph, we use the ant colony algorithm to find a better path that corresponds to a better alignment result. We propose instances of three-dimensional and four-dimensional graphs and introduce a special ichnographic representation to analyze MSA. The software is still experimental, and we give an example of finding the best alignment result using the three-dimensional graph and the ant colony algorithm. Experimental results show that our method can improve solution quality on MSA benchmarks. PMID:19130503

  17. Evaluation of Demons- and FEM-Based Registration Algorithms for Lung Cancer.

    PubMed

    Yang, Juan; Li, Dengwang; Yin, Yong; Zhao, Fen; Wang, Hongjun

    2016-04-01

    We evaluated and compared the accuracy of 2 deformable image registration algorithms in 4-dimensional computed tomography images for patients with lung cancer. Ten patients with non-small cell lung cancer or small cell lung cancer were enrolled in this institutional review board-approved study. The displacement vector fields relative to a specific reference image were calculated by using the diffeomorphic demons (DD) algorithm and the finite element method (FEM)-based algorithm. The registration accuracy was evaluated by using normalized mutual information (NMI), the sum of squared intensity difference (SSD), modified Hausdorff distance (dH_M), and the ratio of gross tumor volume (rGTV) difference between the reference image and the deformed phase image. We also compared the registration speed of the 2 algorithms. For all patients, the FEM-based algorithm showed a stronger ability to align the 2 images than the DD algorithm. The means (±standard deviation) of NMI were 0.86 (±0.05) and 0.90 (±0.05) using the DD algorithm and the FEM-based algorithm, respectively. The means of SSD were 0.006 (±0.003) and 0.003 (±0.002) using the DD algorithm and the FEM-based algorithm, respectively. The means of dH_M were 0.04 (±0.02) and 0.03 (±0.03) using the DD algorithm and the FEM-based algorithm, respectively. The means of rGTV were 3.9% (±1.01%) and 2.9% (±1.1%) using the DD algorithm and the FEM-based algorithm, respectively. However, the FEM-based algorithm takes longer than the DD algorithm, with an average running time of 31.4 minutes compared to 21.9 minutes over all patients. The preliminary results showed that the FEM-based algorithm was more accurate than the DD algorithm at the cost of registration speed. PMID:25817713
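
    Of the evaluation metrics above, NMI is easy to illustrate; the sketch below computes one common joint-histogram definition, (H(A)+H(B))/H(A,B). The bin count and this particular normalization are assumptions, since the abstract does not state which variant was used.

        import numpy as np

        def normalized_mutual_information(a, b, bins=64):
            """NMI between two images from their joint histogram: (H(A) + H(B)) / H(A, B)."""
            joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            p_ab = joint / joint.sum()
            p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)
            def entropy(p):
                p = p[p > 0]
                return -np.sum(p * np.log2(p))
            return (entropy(p_a) + entropy(p_b)) / entropy(p_ab.ravel())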

  18. Learning algorithms for feedforward networks based on finite samples

    SciTech Connect

    Rao, N.S.V.; Protopopescu, V.; Mann, R.C.; Oblow, E.M.; Iyengar, S.S.

    1994-09-01

    Two classes of convergent algorithms for learning continuous functions (and also regression functions) that are represented by feedforward networks, are discussed. The first class of algorithms, applicable to networks with unknown weights located only in the output layer, is obtained by utilizing the potential function methods of Aizerman et al. The second class, applicable to general feedforward networks, is obtained by utilizing the classical Robbins-Monro style stochastic approximation methods. Conditions relating the sample sizes to the error bounds are derived for both classes of algorithms using martingale-type inequalities. For concreteness, the discussion is presented in terms of neural networks, but the results are applicable to general feedforward networks, in particular to wavelet networks. The algorithms can be directly adapted to concept learning problems.

  19. HTML Extraction Algorithm Based on Property and Data Cell

    NASA Astrophysics Data System (ADS)

    Purnamasari, Detty; Wayan Simri Wicaksana, I.; Harmanto, Suryadi; Yuniar Banowosari, Lintang

    2013-06-01

    The data available on the Internet comes in various models and formats. One form of data representation is the table. Table extraction is used to process more than one table from different sources on the Internet. Currently this is done by copy-and-paste, which is not an automatic process. This article presents an approach to preparing the table area so that tables in HTML format can be extracted and converted into a database, making it easier to combine data from many sources. The article tests algorithm 1, which is used to determine the actual number of columns and rows of the table, as well as algorithm 2, which is used to determine the boundary line of the property cells. Tests were conducted on 100 tables in HTML format, and the results give an accuracy of 99.9% for algorithm 1 and 84% for algorithm 2.
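
    The abstract only names what algorithm 1 computes (the actual number of rows and columns), not how; the sketch below performs that task in a straightforward way with BeautifulSoup, keeping per-column bookkeeping so that colspan and rowspan attributes are honored. It is illustrative bookkeeping, not the authors' algorithm.

        from bs4 import BeautifulSoup

        def table_dimensions(html):
            """Count actual rows and columns of the first HTML table, honoring colspan/rowspan."""
            table = BeautifulSoup(html, "html.parser").find("table")
            carry = []                       # carry[i] > 0: column i is still covered by a rowspan
            n_rows = 0
            for tr in table.find_all("tr"):
                n_rows += 1
                col = 0
                for cell in tr.find_all(["td", "th"]):
                    while col < len(carry) and carry[col] > 0:   # skip columns blocked from above
                        col += 1
                    colspan = int(cell.get("colspan", 1))
                    rowspan = int(cell.get("rowspan", 1))
                    carry.extend([0] * (col + colspan - len(carry)))
                    for c in range(col, col + colspan):
                        carry[c] = rowspan                       # this cell occupies these columns
                    col += colspan
                carry = [max(0, c - 1) for c in carry]           # one row of coverage consumed
            return n_rows, len(carry)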

  20. Hash based parallel algorithms for mining association rules

    SciTech Connect

    Shintani, Takahiko; Kitsuregawa, Masaru

    1996-12-31

    In this paper, we propose four parallel algorithms (NPA, SPA, HPA and HPA-ELD) for mining association rules on shared-nothing parallel machines to improve performance. In NPA, the candidate itemsets are simply copied to all processors, which can lead to memory overflow for large transaction databases. The remaining three algorithms partition the candidate itemsets over the processors. If they are partitioned simply (SPA), the transaction data have to be broadcast to all processors. HPA partitions the candidate itemsets using a hash function to eliminate this broadcasting, which also reduces the comparison workload significantly. HPA-ELD fully utilizes the available memory space by detecting the extremely large itemsets and copying them, which is also very effective at flattening the load over the processors. We implemented these algorithms in a shared-nothing environment. Performance evaluations show that the best algorithm, HPA-ELD, attains good linearity in speedup ratio and is effective for handling skew.
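
    The hash-partitioning idea in HPA can be illustrated without any message passing: each processor owns the candidate itemsets that hash to it, and while scanning a transaction, each k-subset is counted only by its owner. The single-process sketch below simulates that ownership split; in the real shared-nothing setting the subsets would be sent to the owning node rather than counted locally, and the data layout here is an assumption.

        from itertools import combinations

        def hpa_partition_counts(transactions, candidates, n_procs):
            """Simulate HPA-style hash partitioning of candidate itemsets across processors."""
            k = len(next(iter(candidates)))                  # all candidates are k-itemsets (sorted tuples)

            def owner(itemset):
                return hash(itemset) % n_procs               # hash-partition the candidates

            counts = [{c: 0 for c in candidates if owner(c) == p} for p in range(n_procs)]
            for t in transactions:
                for subset in combinations(sorted(t), k):    # k-subsets of each transaction
                    p = owner(subset)
                    if subset in counts[p]:                  # only the owning "processor" counts it
                        counts[p][subset] += 1
            return counts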