Scale-space point spread function based framework to boost infrared target detection algorithms
NASA Astrophysics Data System (ADS)
Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan
2016-07-01
Small target detection is one of the major concerns in the development of infrared surveillance systems. Detection algorithms based on Gaussian target modeling have attracted the most attention from researchers in this field. However, the lack of an accurate target model limits the performance of this type of infrared small target detection algorithm. In this paper, the signal-to-clutter ratio (SCR) improvement mechanism of the matched filter is described in detail, and the effect of the Point Spread Function (PSF) on the intensity and spatial distribution of target pixels is clarified comprehensively. A new parametric model for small infrared targets is then developed based on the PSF of the imaging system, which can be treated as a matched filter. Based on this model, a new framework to boost model-based infrared target detection algorithms is presented. To demonstrate its performance, the proposed model is adopted in the Laplacian scale-space algorithm, a well-known method in the small infrared target detection field. Simulation results show that the proposed framework has better detection performance than the Gaussian one and improves the overall performance of an IRST system. Quantitative analysis of the proposed algorithm shows at least a 20% improvement in output SCR values compared with the Laplacian of Gaussian (LoG) algorithm.
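To make the baseline concrete, here is a minimal Python sketch (not the authors' PSF-matched method) that runs a Laplacian-of-Gaussian filter over a synthetic scene and reports an output SCR; the scene, target amplitude, and scale are illustrative assumptions.

```python
# Minimal sketch: LoG small-target detection with an output SCR estimate.
# The synthetic scene, target amplitude, and sigma are assumed values.
import numpy as np
from scipy.ndimage import gaussian_laplace

rng = np.random.default_rng(0)
scene = rng.normal(20.0, 2.0, (128, 128))        # cluttered background
yx = (64, 64)                                    # assumed target location
y, x = np.mgrid[0:128, 0:128]
scene += 15.0 * np.exp(-((y - yx[0])**2 + (x - yx[1])**2) / (2 * 1.5**2))

# The negated LoG responds with a positive peak on bright blob-like targets.
response = -gaussian_laplace(scene, sigma=1.5)

# Output SCR: peak response over background mean, divided by clutter std.
target = response[yx]
bg = np.delete(response.ravel(), np.ravel_multi_index(yx, response.shape))
scr = (target - bg.mean()) / bg.std()
print(f"output SCR = {scr:.1f}")
```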
An Adaptive Data Collection Algorithm Based on a Bayesian Compressed Sensing Framework
Liu, Zhi; Zhang, Mengmeng; Cui, Jian
2014-01-01
For Wireless Sensor Networks (WSNs), energy efficiency is always a key consideration in system design. Compressed sensing is a new theory with promising prospects in WSNs; however, constructing a sparse projection matrix remains an open problem. In this paper, a new adaptive algorithm that integrates routing and data collection, based on a Bayesian compressed sensing framework, is proposed. By introducing new target-node selection metrics, embedding the routing structure, and maximizing the differential entropy in each collection round, an adaptive projection vector is constructed. Simulations show that, compared to reference algorithms, the proposed algorithm decreases computational complexity and improves energy efficiency. PMID:24818659
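As a rough illustration of the entropy-maximization step, here is a hedged sketch: under a Gaussian, Bayesian-compressed-sensing-style posterior, the candidate projection that maximizes the posterior-entropy reduction is the one maximizing a'Σa. The candidate pool, dimensions, and noise level are assumptions, not the paper's protocol.

```python
# Hedged sketch of greedy entropy-driven adaptive sensing under a Gaussian
# posterior; candidate rows stand in for reachable sensor nodes.
import numpy as np

rng = np.random.default_rng(1)
n, noise_var = 20, 0.01
Sigma = np.eye(n)                      # prior covariance over the sparse signal
candidates = rng.normal(size=(50, n))  # assumed pool of projection vectors

for round_ in range(5):
    gains = np.einsum("ij,jk,ik->i", candidates, Sigma, candidates)
    a = candidates[np.argmax(gains)]   # most informative projection vector
    # Rank-one Bayesian update of the posterior covariance.
    Sa = Sigma @ a
    Sigma -= np.outer(Sa, Sa) / (noise_var + a @ Sa)
    print(f"round {round_}: entropy proxy logdet = {np.linalg.slogdet(Sigma)[1]:.2f}")
```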
An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm
Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya
2015-01-01
Web services have become the technology of choice for service-oriented computing to meet the interoperability demands of web applications. In the Internet era, the exponential growth in the number of web services makes quality of service (QoS) an essential parameter for discriminating between them. In this paper, a user preference based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When the user's request cannot be fulfilled by a single atomic service, several existing services must be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies best-fit services for each task in the user request and, by limiting the number of candidate services for each task, reduces the time needed to generate composition plans. To tackle the web service composition problem, a QoS aware automatic web service composition (QAWSC) algorithm based on the QoS aspects of the web services and user preferences is also proposed. The framework additionally allows the user to provide feedback about the composite service, which improves the reputation of the services. PMID:26504894
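The QoS-weighted ranking idea can be sketched in a few lines; the attribute set, normalization, and weights below are generic assumptions, not the UPWSR scoring itself.

```python
# Sketch of preference-weighted QoS ranking. Attributes where lower is
# better (e.g. response time) are flipped before min-max normalization.
import numpy as np

services = ["svcA", "svcB", "svcC"]
# columns: response time (ms, lower better), availability (%), throughput (rps)
qos = np.array([[120.0, 99.90, 300.0],
                [ 80.0, 99.00, 250.0],
                [200.0, 99.99, 400.0]])
lower_is_better = np.array([True, False, False])
prefs = np.array([0.5, 0.3, 0.2])      # assumed user preference weights

q = np.where(lower_is_better, -qos, qos)            # flip "cost" attributes
q = (q - q.min(0)) / (q.max(0) - q.min(0) + 1e-12)  # normalize per column
scores = q @ prefs
for name, s in sorted(zip(services, scores), key=lambda t: -t[1]):
    print(f"{name}: {s:.3f}")
```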
PEDLA: predicting enhancers with a deep learning-based algorithmic framework
Liu, Feng; Li, Hao; Ren, Chao; Bo, Xiaochen; Shu, Wenjie
2016-01-01
Transcriptional enhancers are non-coding segments of DNA that play a central role in the spatiotemporal regulation of gene expression programs. However, systematically and precisely predicting enhancers remains a major challenge. Although existing methods have achieved some success in enhancer prediction, they still suffer from many issues. We developed a deep learning-based algorithmic framework named PEDLA (https://github.com/wenjiegroup/PEDLA), which can directly learn an enhancer predictor from massively heterogeneous data and generalize in ways that are mostly consistent across various cell types/tissues. We first trained PEDLA with 1,114-dimensional heterogeneous features in H1 cells, and demonstrated that the PEDLA framework integrates diverse heterogeneous features and gives state-of-the-art performance relative to five existing methods for enhancer prediction. We further extended PEDLA to iteratively learn from 22 training cell types/tissues. Our results showed that PEDLA manifested superior performance consistency in both training and independent test sets. On average, PEDLA achieved 95.0% accuracy and a 96.8% geometric mean (GM) of sensitivity and specificity across the 22 training cell types/tissues, as well as 95.7% accuracy and a 96.8% GM across 20 independent test cell types/tissues. Together, our work illustrates the power of harnessing state-of-the-art deep learning techniques to consistently identify regulatory elements at a genome-wide scale from massively heterogeneous data across diverse cell types/tissues. PMID:27329130
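The headline figures above combine accuracy with the geometric mean of sensitivity and specificity; a small sketch of these metrics follows, computed on toy labels rather than PEDLA's outputs.

```python
# Sketch of the quoted metrics: accuracy and the geometric mean (GM) of
# sensitivity and specificity, on toy binary labels.
import numpy as np

def gm_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sens = tp / (tp + fn)            # recall on enhancers
    spec = tn / (tn + fp)            # recall on non-enhancers
    acc = (tp + tn) / y_true.size
    return acc, np.sqrt(sens * spec)

acc, gm = gm_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
print(f"accuracy={acc:.3f}, GM={gm:.3f}")
```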
Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows
NASA Astrophysics Data System (ADS)
Srivastav, R. K.; Srinivasan, K.; Sudheer, K.
2009-05-01
bootstrap (MABB)) based on the explicit objective functions of minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by searching over a multi-dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) and non-parametric (MABB) components). This is achieved using an efficient evolutionary search based optimization tool, namely the non-dominated sorting genetic algorithm II (NSGA-II). This approach reduces the drudgery involved in manually selecting the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution, and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of the River Beaver and River Weber in the USA. For both rivers, the proposed GA-based hybrid model, in which both parametric and non-parametric components are explored simultaneously, yields a much better prediction of the storage capacity than the MLE-based hybrid models, in which the hybrid model selection is done in two stages and probably results in a sub-optimal model. This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales.
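For reference, the parametric half of such a hybrid is a periodic lag-one autoregressive process; the sketch below generates synthetic multi-season flows from toy seasonal statistics. The paper's model additionally couples this with a matched block bootstrap and NSGA-II selection, which are not reproduced here.

```python
# Hedged sketch of a PAR(1) generator for multi-season streamflows.
# Seasonal means, standard deviations, and correlations are toy values.
import numpy as np

rng = np.random.default_rng(7)
mu = np.array([100.0, 250.0, 180.0, 90.0])   # seasonal mean flows
sd = np.array([20.0, 60.0, 40.0, 15.0])      # seasonal standard deviations
rho = np.array([0.5, 0.6, 0.4, 0.3])         # lag-1 season-to-season correlation

def par1(n_years):
    flows = np.zeros((n_years, 4))
    prev_z = 0.0                             # standardized flow of prior season
    for t in range(n_years):
        for s in range(4):
            z = rho[s] * prev_z + np.sqrt(1 - rho[s] ** 2) * rng.normal()
            flows[t, s] = mu[s] + sd[s] * z
            prev_z = z
    return flows

print(par1(3).round(1))
```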
Zhu, Feng; Aziz, H. M. Abdul; Qian, Xinwu; Ukkusuri, Satish V.
2015-01-31
Our study develops a novel reinforcement learning algorithm for the challenging coordinated signal control problem. Traffic signals are modeled as intelligent agents interacting with the stochastic traffic environment. The model is built on the framework of coordinated reinforcement learning. Junction Tree Algorithm (JTA) based reinforcement learning is proposed to obtain an exact inference of the best joint actions for all the coordinated intersections. The algorithm is implemented and tested on a network of 18 signalized intersections in VISSIM. Our results show that the JTA-based algorithm outperforms independent learning (Q-learning), real-time adaptive learning, and fixed timing plans in terms of average delay, number of stops, and vehicular emissions at the network level.
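As a point of reference, the independent Q-learning baseline mentioned above fits in a few lines; the state/action encoding and the random stand-in for the traffic simulator are assumptions.

```python
# Sketch of the independent Q-learning baseline: one tabular agent per
# intersection, ignoring neighbors. Reward is a simplified negative delay.
import numpy as np

rng = np.random.default_rng(3)
n_states, n_actions = 8, 2            # e.g. queue-length bins x {extend, switch}
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

state = 0
for step in range(10_000):
    action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    # Random stand-in for the traffic simulator (VISSIM in the paper):
    next_state = rng.integers(n_states)
    reward = -float(next_state)       # fewer queued vehicles -> higher reward
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q.round(2))
```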
NASA Astrophysics Data System (ADS)
Wang, Ke; Huang, Zhi; Zhong, Zhihua
2014-11-01
Due to large variations in the environment, with an ever-changing background and vehicles of different shapes, colors, and appearances, implementing a real-time on-board vehicle recognition system with high adaptability, efficiency, and robustness in complicated environments remains challenging. This paper introduces a simultaneous detection and tracking framework for robust on-board vehicle recognition based on monocular vision. The framework combines a novel layered machine learning scheme with a particle filter to build a multi-vehicle detection and tracking system. In the vehicle detection stage, a layered machine learning method is presented, which combines coarse search and fine search to locate targets using an AdaBoost-based training algorithm. A pavement segmentation method based on characteristic similarity is proposed to estimate the most likely pavement area; restricting vehicle detection to this downsized area enhances efficiency and accuracy. In the vehicle tracking stage, a multi-objective tracking algorithm based on target state management and particle filtering is proposed. The proposed system is evaluated on roadway video captured under a variety of traffic, illumination, and weather conditions. The evaluation shows that, under proper illumination and with clear vehicle appearance, the proposed system achieves a 91.2% detection rate and a 2.6% false detection rate. Compared with typical algorithms, the presented algorithm reduces the false detection rate by nearly half at the cost of a 2.7%-8.6% decrease in detection rate. The resulting multi-vehicle detection and tracking system is promising for on-board vehicle recognition with high precision, strong robustness, and low computational cost.
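A minimal bootstrap particle filter conveys the flavor of the tracking stage; this is a generic one-dimensional illustration, not the paper's multi-objective tracker, and the constant-velocity motion and noise levels are assumptions.

```python
# Bootstrap particle filter sketch: predict with constant-velocity dynamics,
# weight by measurement likelihood, then resample to avoid degeneracy.
import numpy as np

rng = np.random.default_rng(5)
n = 500
particles = rng.normal([0.0, 1.0], [1.0, 0.1], (n, 2))  # columns: position, velocity
weights = np.full(n, 1.0 / n)

def step(particles, weights, z, dt=1.0, q=0.05, r=0.5):
    # Predict: propagate dynamics plus process noise.
    particles[:, 0] += particles[:, 1] * dt + rng.normal(0, q, n)
    particles[:, 1] += rng.normal(0, q, n)
    # Update: weight by the Gaussian likelihood of the measurement z.
    weights = weights * np.exp(-0.5 * ((z - particles[:, 0]) / r) ** 2)
    weights /= weights.sum()
    # Resample (multinomial).
    idx = rng.choice(n, n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)

for t in range(1, 6):
    z = t * 1.0 + rng.normal(0, 0.5)          # simulated measurement
    particles, weights = step(particles, weights, z)
    print(f"t={t}: estimate={particles[:, 0].mean():+.2f}, truth={t:.2f}")
```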
Cencerrado, Andrés; Cortés, Ana; Margalef, Tomàs
2013-01-01
This work presents a framework for assessing how the constraints existing at the time of attending an ongoing forest fire affect simulation results, both in terms of the quality (accuracy) obtained and the time needed to make a decision. In the wildfire spread simulation and prediction area, it is essential to properly exploit the computational power offered by new computing advances. For this purpose, we rely on a two-stage prediction process that enhances the quality of traditional predictions by taking advantage of parallel computing. This strategy is based on an adjustment stage carried out by a well-known evolutionary technique: genetic algorithms. The core of this framework is evaluated according to the principles of probability theory. A thorough statistical study is presented, oriented towards characterizing the adjustment technique in order to help operation managers deal with the two aspects previously mentioned: time and quality. The experimental work in this paper is based on one of the regions of Spain most prone to forest fires: El Cap de Creus. PMID:24453898
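A hedged sketch of the adjustment stage follows: a genetic algorithm searches over simulator inputs so that simulated spread matches an observed perimeter, after which the calibrated inputs would drive the prediction stage. The toy linear "simulator" and parameter ranges are assumptions standing in for a real fire-spread model.

```python
# GA calibration sketch: evolve (wind, moisture) so the simulated burned-area
# proxy matches the observed one. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(11)
observed = 42.0                                  # observed burned-area proxy

def simulate(wind, moisture):
    return 30.0 + 8.0 * wind - 5.0 * moisture    # stand-in for the fire simulator

pop = rng.uniform([0, 0], [5, 3], (40, 2))       # candidate (wind, moisture) pairs
for gen in range(30):
    err = np.abs([simulate(w, m) - observed for w, m in pop])
    elite = pop[np.argsort(err)[:10]]            # selection
    children = elite[rng.integers(10, size=30)] + rng.normal(0, 0.2, (30, 2))
    pop = np.vstack([elite, children])           # elitism + mutation

best = pop[np.argmin(np.abs([simulate(w, m) - observed for w, m in pop]))]
print(f"calibrated wind={best[0]:.2f}, moisture={best[1]:.2f}")
```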
NASA Astrophysics Data System (ADS)
Mutiara Yoga Asmarani Suci, Agisha; Sukaesih Sitanggang, Imas
2016-01-01
Outlier analysis of hotspot data, as an indicator of fire occurrences in Riau Province between 2001 and 2012, was performed in earlier work, but it was of limited help in fire prevention efforts because the results could only be used by certain people and could not be accessed easily and quickly by users. The purpose of this research is to create a web-based application that detects outliers in hotspot data and visualizes them by time and location. Outlier detection was done in the previous research using the k-means clustering method with a global and collective outlier approach on Riau Province hotspot data between 2001 and 2012. This work develops a web-based application using the Shiny framework and the R programming language. The application provides several functions, including summaries and visualization of the selected data, clustering of hotspot data using the k-means algorithm, visualization of the clustering results and the sum of squared errors (SSE), and display of global and collective outliers together with a visualization of the outlier spread on a map of Riau Province.
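A minimal sketch of the k-means-plus-distance-threshold idea for global outliers follows (written in Python rather than the application's R/Shiny stack); the synthetic coordinates and the 95th-percentile cutoff are assumptions.

```python
# Distance-based global outlier detection on top of k-means: points far from
# their assigned centroid are flagged. Toy "hotspot" coordinates below.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
hotspots = np.vstack([rng.normal((0, 0), 0.3, (100, 2)),
                      rng.normal((3, 3), 0.3, (100, 2)),
                      [[8.0, -2.0]]])            # an injected anomalous point

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(hotspots)
dist = np.linalg.norm(hotspots - km.cluster_centers_[km.labels_], axis=1)
outliers = np.where(dist > np.percentile(dist, 95))[0]
print("flagged hotspot indices:", outliers)
```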
An Algorithmic Framework for Multiobjective Optimization
Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.
2013-01-01
Multiobjective (MO) optimization is an emerging area that is increasingly encountered in many fields globally. Various metaheuristic techniques, such as differential evolution (DE), genetic algorithms (GA), the gravitational search algorithm (GSA), and particle swarm optimization (PSO), have been used in conjunction with scalarization techniques, such as the weighted-sum approach and the normal-boundary intersection (NBI) method, to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with many objectives (especially more than two). In addition, extensive computational overhead emerges when dealing with hybrid algorithms. This paper addresses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure to generate efficient, effective, high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795
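The weighted-sum scalarization mentioned above reduces an MO problem to a family of single-objective problems; a minimal sketch with two toy objectives follows. Sweeping the weight traces out (the convex parts of) the Pareto front.

```python
# Weighted-sum scalarization sketch: solve w*f1 + (1-w)*f2 for several w.
# The two quadratic objectives are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2      # objective 1
f2 = lambda x: x[0] ** 2 + (x[1] - 2) ** 2      # objective 2

front = []
for w in np.linspace(0.05, 0.95, 10):
    res = minimize(lambda x: w * f1(x) + (1 - w) * f2(x), x0=[0.0, 0.0])
    front.append((f1(res.x), f2(res.x)))

for p in front[:3]:
    print(f"f1={p[0]:.3f}, f2={p[1]:.3f}")
```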
ERIC Educational Resources Information Center
Raveh, Ira; Koichu, Boris; Peled, Irit; Zaslavsky, Orit
2016-01-01
In this article we present an integrative framework of knowledge for teaching the standard algorithms of the four basic arithmetic operations. The framework is based on a mathematical analysis of the algorithms, a connectionist perspective on teaching mathematics and an analogy with previous frameworks of knowledge for teaching arithmetic…
Statistical algorithms for a comprehensive test ban treaty discrimination framework
Foote, N.D.; Anderson, D.N.; Higbee, K.T.; Miller, N.E.; Redgate, T.; Rohay, A.C.; Hagedorn, D.N.
1996-10-01
Seismic discrimination is the process of identifying a candidate seismic event as an earthquake or explosion using information from seismic waveform features (seismic discriminants). In the Comprehensive Test Ban Treaty (CTBT) setting, low-energy seismic activity must be detected and identified. A defensible CTBT discrimination decision requires an understanding of the false-negative (declaring an event to be an earthquake given it is an explosion) and false-positive (declaring an event to be an explosion given it is an earthquake) rates. These rates are derived from a statistical discrimination framework. A discrimination framework can be as simple as a single statistical algorithm, or it can be a mathematical construct that integrates many different types of statistical algorithms and CTBT technologies. In either case, the result is the identification of an event and a numerical assessment of the accuracy of that identification, that is, the false-negative and false-positive rates. In Anderson et al., eight statistical discrimination algorithms are evaluated for their ability to give results that effectively contribute to a decision process and that can be interpreted with physical (seismic) theory. These algorithms can serve as discrimination frameworks individually or as components of a larger framework. The eight algorithms are linear discrimination (LDA), quadratic discrimination (QDA), variably regularized discrimination (VRDA), flexible discrimination (FDA), logistic discrimination, k-nearest neighbor (KNN), kernel discrimination, and classification and regression trees (CART). In this report, the performance of these eight algorithms, as applied to regional seismic data, is documented. Based on the findings in Anderson et al. and this analysis, CART is an appropriate algorithm for an automated CTBT setting.
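Two of the listed discriminants, LDA and QDA, are easy to demonstrate. The sketch below fits both to synthetic two-feature "waveform" data and reports the false-negative and false-positive rates defined above; the features and labels are toy stand-ins, not regional seismic discriminants.

```python
# LDA vs QDA discrimination sketch on synthetic two-class feature vectors,
# with event-identification error rates as defined in the abstract.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)

rng = np.random.default_rng(4)
X_eq = rng.normal([1.0, 0.0], 0.5, (200, 2))    # "earthquake" features
X_ex = rng.normal([0.0, 1.0], 0.5, (200, 2))    # "explosion" features
X = np.vstack([X_eq, X_ex])
y = np.array([0] * 200 + [1] * 200)             # 0 = earthquake, 1 = explosion

for model in (LinearDiscriminantAnalysis(), QuadraticDiscriminantAnalysis()):
    pred = model.fit(X, y).predict(X)
    fn = np.mean(pred[y == 1] == 0)             # explosion called earthquake
    fp = np.mean(pred[y == 0] == 1)             # earthquake called explosion
    print(f"{type(model).__name__}: false-negative={fn:.3f}, false-positive={fp:.3f}")
```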
Towards a Framework for Evaluating and Comparing Diagnosis Algorithms
NASA Technical Reports Server (NTRS)
Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia,David; Kuhn, Lukas; deKleer, Johan; vanGemund, Arjan; Feldman, Alexander
2009-01-01
Diagnostic inference involves the detection of anomalous system behavior and the identification of its cause, possibly down to a failed unit or to a parameter of a failed unit. Traditional approaches to solving this problem include expert/rule-based, model-based, and data-driven methods. Each approach (and the various techniques within each approach) uses a different representation of the knowledge required to perform the diagnosis. The sensor data are expected to be combined with these internal representations to produce the diagnosis result. In spite of the availability of various diagnosis technologies, there have been only minimal efforts to develop a standardized software framework to run, evaluate, and compare different diagnosis technologies on the same system. This paper presents a framework that defines a standardized representation of the system knowledge, the sensor data, and the form of the diagnosis results, and that provides a run-time architecture which can execute diagnosis algorithms, send sensor data to the algorithms at appropriate time steps from a variety of sources (including the actual physical system), and collect the resulting diagnoses. We also define a set of metrics that can be used to evaluate and compare the performance of the algorithms, and provide software to calculate the metrics.
Experiences and evolutions of the ALICE DAQ Detector Algorithms framework
NASA Astrophysics Data System (ADS)
Chapeland, Sylvain; Carena, Franco; Carena, Wisla; Chibante Barroso, Vasco; Costa, Filippo; Denes, Ervin; Divia, Roberto; Fuchs, Ulrich; Grigore, Alexandru; Simonetti, Giuseppe; Soos, Csaba; Telesca, Adriana; Vande Vyvre, Pierre; von Haller, Barthelemy
2012-12-01
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The 18 ALICE sub-detectors are calibrated regularly in order to achieve the most accurate physics measurements. Some of these procedures are done online in the DAQ (Data Acquisition System) so that calibration results can be used directly for detector electronics configuration before physics data taking, at run time for online event monitoring, and offline for data analysis. A framework was designed to collect statistics and compute calibration parameters, and it has been used in production since 2008. This paper focuses on the features developed recently to benefit from the multi-core architecture of CPUs and to optimize the processing power available for the calibration tasks. These include C++ base classes for implementing detector-specific code effectively, with independent processing of events in parallel threads and aggregation of partial results. The Detector Algorithm (DA) framework provides utility interfaces for handling input and output (configuration, monitored physics data, results, logging) and for self-documentation of the produced executable. New algorithms are created quickly by inheriting the base functionality and implementing a few ad hoc virtual members, while the framework features remain expandable thanks to the isolation of the detector calibration code. The DA control system also handles unexpected process behaviour, logs execution status, and collects performance statistics.
NASA Astrophysics Data System (ADS)
Tóth, Gábor
2006-05-01
We describe a general algorithm suitable for executing and coupling the components of a software framework on a parallel computer. The requirements for a flexible, efficient, and robust algorithm are defined precisely, and the motivation for the requirements is demonstrated with several examples. In short, the requirements are the following: (i) the algorithm should allow an arbitrary distribution of processors among the components, (ii) it should allow an arbitrary coupling schedule between the components, (iii) it should not use any inter-processor communication beyond what is already required by the components and their couplings, and (iv) it should never get into a deadlock. We show that the proposed algorithm, based on the Temporal and Predefined Ordering of Tasks (TPOT), satisfies all these requirements. The TPOT algorithm has been implemented in the Space Weather Modeling Framework. The flexibility and efficiency of the algorithm are demonstrated with several examples.
Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework
NASA Astrophysics Data System (ADS)
Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.
2016-05-01
Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of time-of-flight (TOF) scanners, which are less sensitive to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (DIRECT: direct image reconstruction for TOF) that provides a convenient TOF data partitioning framework and leads to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting, incorporating models of both the TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT, with appropriate resolution and regularization filters, is able to match the bias-versus-variance performance of iterative TOF reconstruction with a matched resolution model.
Service Discovery Framework Supported by EM Algorithm and Bayesian Classifier
NASA Astrophysics Data System (ADS)
Peng, Yanbin
Service-oriented computing has become a mainstream research field. Meanwhile, machine learning is a promising AI technology that can enhance the performance of traditional algorithms. Aiming to solve the service discovery problem, this paper introduces a Bayesian classifier into a web service discovery framework, which improves service querying speed. In this framework, the services in the service library become the training set of the Bayesian classifier, and a service query becomes a testing sample. Service matchmaking can then be executed within the related service class, which contains fewer services and thus saves time. Because the class of each service in the training set is unknown, the EM algorithm is used to estimate the prior probabilities and likelihood functions. Experimental results show that the method supported by the EM algorithm and Bayesian classifier outperforms other methods in time complexity.
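A hedged sketch of the core idea follows, using scikit-learn's Gaussian mixture (fit by EM) as the class model: latent service classes are inferred from unlabeled descriptions, a query is assigned to its most probable class by Bayes' rule, and matchmaking proceeds only within that class. The TF-IDF features and toy corpus are illustrative assumptions, not the paper's setup.

```python
# EM-learned service classes + Bayes-rule query classification sketch.
# Matching is restricted to the query's predicted class to save time.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.mixture import GaussianMixture

services = ["book flight ticket", "reserve airline seat",
            "weather forecast city", "temperature report today"]
vec = TfidfVectorizer()
X = vec.fit_transform(services).toarray()

# GaussianMixture.fit runs EM to estimate priors and likelihoods.
gm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0).fit(X)
labels = gm.predict(X)

query = vec.transform(["cheap flight booking"]).toarray()
cls = gm.predict(query)[0]
candidates = [s for s, c in zip(services, labels) if c == cls]
print("match within class", cls, ":", candidates)
```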
The hierarchical fair competition (HFC) framework for sustainable evolutionary algorithms.
Hu, Jianjun; Goodman, Erik; Seo, Kisung; Fan, Zhun; Rosenberg, Rondal
2005-01-01
Many current Evolutionary Algorithms (EAs) suffer from a tendency to converge prematurely or stagnate without progress on complex problems. This may be due to the loss of, or failure to discover, certain valuable genetic material, or to losing the capability to discover new genetic material before convergence limits the algorithm's ability to search widely. In this paper, the Hierarchical Fair Competition (HFC) model, including several variants, is proposed as a generic framework for sustainable evolutionary search that transforms the convergent nature of the current EA framework into a non-convergent search process. That is, the structure of HFC does not allow the population to converge to the vicinity of any set of optimal or locally optimal solutions. The sustainable search capability of HFC is achieved by ensuring a continuous supply and incorporation of genetic material in a hierarchical manner, and by culturing and maintaining, but continually renewing, populations of individuals of intermediate fitness levels. HFC employs an assembly-line structure in which subpopulations are hierarchically organized into different fitness levels, reducing the selection pressure within each subpopulation while maintaining the global selection pressure needed to exploit the good genetic material found. Three EAs based on the HFC principle are tested: two on the even-10-parity genetic programming benchmark problem and a real-world analog circuit synthesis problem, and another on the HIFF genetic algorithm (GA) benchmark problem. The significant gains in robustness, scalability, and efficiency achieved by HFC with little additional computing effort, and its tolerance of small population sizes, demonstrate its effectiveness on these problems and show its promise for improving other existing EAs on difficult problems. A paradigm shift from that of most EAs is proposed: rather than trying to escape from local optima or delay convergence at a
NASA Astrophysics Data System (ADS)
Ciesielski, Krzysztof Chris; Udupa, Jayaram K.; Falcão, A. X.; Miranda, P. A. V.
2012-02-01
We present a general graph-cut segmentation framework GGC, in which the delineated objects returned by the algorithms optimize the energy functions associated with the ℓp norm, 1 ≤ p ≤ ∞. Two classes of well-known algorithms belong to GGC: the standard graph cut GC (such as the min-cut/max-flow algorithm) and the relative fuzzy connectedness algorithms RFC (including iterative RFC, IRFC). The norm-based description of GGC provides a more elegant and mathematically better-recognized framework for our earlier results from [18, 19]. Moreover, it allows a precise theoretical comparison of GGC-representable algorithms with the algorithms discussed in a recent paper [22] (min-cut/max-flow graph cut, random walker, shortest path/geodesic, Voronoi diagram, power watershed/shortest path forest), which optimize, via ℓp norms, the intermediate segmentation step of labeling scene voxels, but for which the final object need not optimize the ℓp energy function used. This comparison of the GGC-representable algorithms with those encompassed in the framework described in [22] constitutes the main contribution of this work.
Overarching framework for data-based modelling
NASA Astrophysics Data System (ADS)
Schelter, Björn; Mader, Malenka; Mader, Wolfgang; Sommerlade, Linda; Platt, Bettina; Lai, Ying-Cheng; Grebogi, Celso; Thiel, Marco
2014-02-01
One of the main modelling paradigms for complex physical systems is the network. When estimating the network structure from measured signals, several assumptions, such as stationarity, are typically made in the estimation process. Violating these assumptions renders standard analysis techniques fruitless. We here propose a framework to estimate the network structure from measurements of arbitrary non-linear, non-stationary, stochastic processes, together with a rigorous mathematical theory that underlies it. Based on this theory, we present a highly efficient algorithm and the corresponding statistics, which are immediately and sensibly applicable to measured signals. We demonstrate its performance in a simulation study. In experiments on transitions between vigilance stages in rodents, we infer small network structures with complex, time-dependent interactions; this suggests biomarkers for such transitions, key to understanding and diagnosing numerous diseases such as dementia. We argue that the suggested framework combines features that other approaches followed so far lack.
Kodiak: An Implementation Framework for Branch and Bound Algorithms
NASA Technical Reports Server (NTRS)
Smith, Andrew P.; Munoz, Cesar A.; Narkawicz, Anthony J.; Markevicius, Mantas
2015-01-01
Recursive branch and bound algorithms are often used to refine and isolate solutions to several classes of global optimization problems. A rigorous computation framework for the solution of systems of equations and inequalities involving nonlinear real arithmetic over hyper-rectangular variable and parameter domains is presented. It is derived from a generic branch and bound algorithm that has been formally verified, and utilizes self-validating enclosure methods, namely interval arithmetic and, for polynomials and rational functions, Bernstein expansion. Since bounds computed by these enclosure methods are sound, this approach may be used reliably in software verification tools. Advantage is taken of the partial derivatives of the constraint functions involved in the system, firstly to reduce the branching factor by the use of bisection heuristics and secondly to permit the computation of bifurcation sets for systems of ordinary differential equations. The associated software development, Kodiak, is presented, along with examples of three different branch and bound problem types it implements.
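The flavor of such a search is easy to convey; below is a toy Python sketch that isolates the roots of one polynomial using naive interval bounds, bisection, and pruning. Kodiak itself is a C++ library and uses richer enclosures such as Bernstein expansion, which this sketch does not reproduce.

```python
# Toy interval branch-and-bound: enclose f(x) = x^2 - 2 over a box, prune
# boxes whose enclosure excludes zero, bisect until boxes are tiny.
def f_bounds(lo, hi):
    cands = [lo * lo, hi * hi]
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(cands)   # interval square
    return sq_lo - 2.0, max(cands) - 2.0

def roots(lo, hi, tol=1e-6):
    flo, fhi = f_bounds(lo, hi)
    if flo > 0.0 or fhi < 0.0:
        return []                      # enclosure excludes zero: prune branch
    if hi - lo < tol:
        return [(lo, hi)]              # box small enough: report enclosure
    mid = 0.5 * (lo + hi)
    return roots(lo, mid, tol) + roots(mid, hi, tol)  # bisect and recurse

print(roots(-3.0, 3.0))                # tight boxes around +/- sqrt(2)
```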
A Formal Framework for the Analysis of Algorithms That Recover From Loss of Separation
NASA Technical Reports Server (NTRS)
Butler, RIcky W.; Munoz, Cesar A.
2008-01-01
We present a mathematical framework for the specification and verification of state-based conflict resolution algorithms that recover from loss of separation. In particular, we propose rigorous definitions of horizontal and vertical maneuver correctness that yield horizontal and vertical separation, respectively, in a bounded amount of time. We also provide sufficient conditions for independent correctness, i.e., separation under the assumption that only one aircraft maneuvers, and for implicitly coordinated correctness, i.e., separation under the assumption that both aircraft maneuver. An important benefit of this approach is that different aircraft can execute different algorithms and implicit coordination will still be achieved, as long as they all meet the explicit criteria of the framework. Towards this end we have sought to make the criteria as general as possible. The framework presented in this paper has been formalized and mechanically verified in the Prototype Verification System (PVS).
Projection Classification Based Iterative Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Ruiqiu; Li, Chen; Gao, Wenhua
2015-05-01
Iterative algorithms perform well in 3D image reconstruction because they do not require complete projection data. They can therefore be applied to the inspection of BGA solder joints, which typically relies on x-ray laminography and yields poorer reconstructed images, but their convergence speed is low. This paper explores a projection classification based method that separates the object into three parts, i.e., solute, solution, and air, and assumes that the reconstruction value decreases linearly from the solution toward the other two parts on either side. The SART and CAV algorithms are then improved based on the proposed idea. Simulation experiments with incomplete projection images indicate the fast convergence speed of the improved iterative algorithms and the effectiveness of the proposed method; the fewer the projection images, the greater their advantage.
A Framework of Algorithms: Computing the Bias and Prestige of Nodes in Trust Networks
Li, Rong-Hua; Yu, Jeffrey Xu; Huang, Xin; Cheng, Hong
2012-01-01
A trust network is a social network in which edges represent trust relationships between nodes. In a trust network, a fundamental question is how to assess and compute the bias and prestige of the nodes, where the bias of a node measures its trustworthiness and the prestige of a node measures its importance. A larger bias implies lower trustworthiness of the node, and a larger prestige implies higher importance. In this paper, we define a vector-valued contractive function to characterize the bias vector, which yields a rich family of bias measurements, and we propose a framework of algorithms for computing the bias and prestige of nodes in trust networks. Based on this framework, we develop four algorithms that calculate the bias and prestige of nodes effectively and robustly. The time and space complexities of all our algorithms are linear in the size of the graph, so our algorithms scale to large datasets. We evaluate our algorithms on five real datasets; the experimental results demonstrate their effectiveness, robustness, and scalability. PMID:23239990
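As an illustration of this family of fixed-point computations, here is a hedged sketch in the spirit of the bias/prestige iteration of Mishra and Bhattacharya, which the paper's contractive framework generalizes; the rating matrix and update constants are toy assumptions.

```python
# Fixed-point bias/prestige sketch: prestige aggregates bias-discounted
# incoming trust; bias measures how far a rater's ratings deviate from the
# prestige of the nodes it rates.
import numpy as np

# Signed trust ratings in [-1, 1]; W[i, j] = rating given by node i to node j.
W = np.array([[0.0, 0.9, 0.8],
              [0.2, 0.0, 0.9],
              [1.0, 1.0, 0.0]])
given = (W != 0).sum(axis=1)            # out-degree of each rater
recvd = (W != 0).sum(axis=0)            # in-degree of each ratee

prestige = np.zeros(3)
bias = np.zeros(3)
for _ in range(50):
    # Discount each rating by the rater's bias, then average per ratee.
    adj = W * (1.0 - bias)[:, None]
    prestige = adj.sum(axis=0) / np.maximum(recvd, 1)
    # A rater's bias: mean deviation of its ratings from the ratees' prestige.
    dev = (W - prestige[None, :]) * (W != 0)
    bias = 0.5 * np.abs(dev).sum(axis=1) / np.maximum(given, 1)

print("prestige:", prestige.round(3), "bias:", bias.round(3))
```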
Knowledge-based tracking algorithm
NASA Astrophysics Data System (ADS)
Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.
1990-10-01
This paper describes the Knowledge-Based Tracking (KBT) algorithm, for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower-than-normal setting to detect low-RCS targets. This lower threshold produces a larger-than-normal false alarm rate, so additional signal processing, including spectral filtering, CFAR, and knowledge-based acceptance testing, is performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing, and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-association out of N-scan rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by an ST-100 array processor and a VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beam-splitting, and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single-scan performance, with a nominal real-time delay of less than one second between illumination and display.
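The M-out-of-N confirmation logic is simple to state in code; a minimal sketch follows, with an illustrative hit pattern and thresholds rather than flight-test parameters.

```python
# M-out-of-N scan confirmation sketch: declare a detection once at least M of
# the last N scans produced an associated threshold crossing.
from collections import deque

def confirm(hits, m=3, n=5):
    window = deque(maxlen=n)
    for scan, hit in enumerate(hits):
        window.append(hit)
        if sum(window) >= m:
            return scan                 # scan index at which track is confirmed
    return None

print(confirm([1, 0, 1, 0, 1, 1, 1]))   # -> 4 (three hits within five scans)
```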
Improved Rotating Kernel Transformation Based Contourlet Domain Image Denoising Framework
Guo, Qing; Dong, Fangmin; Ren, Xuhong; Feng, Shiyu; Gao, Bruce Zhi
2016-01-01
A contourlet domain image denoising framework based on a novel Improved Rotating Kernel Transformation (IRKT) is proposed, in which the differences between subbands in the contourlet domain are taken into account. In detail: (1) the novel IRKT is proposed to calculate the direction statistics of the image, and its validity is verified by comparing the extracted edge information with a state-of-the-art edge detection algorithm; (2) the direction statistics, which represent the differences between subbands, are introduced as weights into threshold-function-based contourlet domain denoising approaches, yielding the novel framework. The proposed framework is used to improve the contourlet soft-thresholding (CTSoft) and contourlet bivariate-thresholding (CTB) algorithms. Denoising results on conventional test images and on Optical Coherence Tomography (OCT) medical images show that the proposed methods improve the existing contourlet based thresholding denoising algorithms, especially for the medical images. PMID:27148597
Optimized Uncertainty Quantification Algorithm Within a Dynamic Event Tree Framework
J. W. Nielsen; Akira Tokuhiro; Robert Hiromoto
2014-06-01
Methods for developing Phenomenological Identification and Ranking Tables (PIRT) for nuclear power plants have been a useful tool in providing insight into modelling aspects that are important to safety. These methods have relied on expert knowledge of reactor plant transients and thermal-hydraulic codes to identify the areas of highest importance. Quantified PIRT (QPIRT) provides a rigorous method for quantifying the phenomena that can have the greatest impact. The transients that are evaluated, and the timing of those events, are typically developed in collaboration with Probabilistic Risk Analysis (PRA). Though quite effective in evaluating risk, traditional PRA methods lack the capability to evaluate complex dynamic systems where end states may vary as a function of the transition time from physical state to physical state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of such systems. A limitation of DPRA is its potential for state (combinatorial) explosion, which grows with the number of components and with the sampling of state-to-state transition times across the entire system. This paper presents a method for performing QPIRT within a dynamic event tree framework, such that the timing events that result in the highest probabilities of failure are captured and a QPIRT is performed simultaneously with a discrete dynamic event tree evaluation, yielding a formal QPIRT for each end state. Because the use of dynamic event trees results in state explosion as the number of possible component states increases, this paper utilizes a branch-and-bound algorithm to optimize the solution of the dynamic event trees. The paper summarizes the methods used to implement the branch-and-bound algorithm in solving the discrete dynamic event trees.
NASA Astrophysics Data System (ADS)
Obara, Lukasz; Żarnecki, Aleksander Filip
2015-09-01
Pi of the Sky is a system of wide field-of-view robotic telescopes that search for short-timescale astrophysical phenomena, especially prompt optical GRB emission. The system was designed for autonomous operation, monitoring a large fraction of the sky with a 12m-13m magnitude range and a time resolution of the order of 1-100 seconds. LUIZA is a dedicated framework, implemented in C++, developed for efficient off-line processing of the Pi of the Sky data. A photometric algorithm based on ASAS photometry was implemented in LUIZA and compared with an algorithm based on pixel cluster reconstruction and a simple aperture photometry algorithm. The optimized photometry algorithms were then applied to a sample of test images, which were modified to include different patterns of stellar variability (the training sample). Different statistical estimators are considered for developing the general variable star identification algorithm. The algorithm will then be used to search for short-period variable stars in the real data.
Evaluating cloud retrieval algorithms with the ARM BBHRP framework
Mlawer,E.; Dunn,M.; Mlawer, E.; Shippert, T.; Troyan, D.; Johnson, K. L.; Miller, M. A.; Delamere, J.; Turner, D. D.; Jensen, M. P.; Flynn, C.; Shupe, M.; Comstock, J.; Long, C. N.; Clough, S. T.; Sivaraman, C.; Khaiyer, M.; Xie, S.; Rutan, D.; Minnis, P.
2008-03-10
Climate and weather prediction models require accurate calculations of vertical profiles of radiative heating. Although heating rate calculations cannot be directly validated due to the lack of corresponding observations, surface and top-of-atmosphere measurements can indirectly establish the quality of computed heating rates through validation of the calculated irradiances at the atmospheric boundaries. The ARM Broadband Heating Rate Profile (BBHRP) project, a collaboration of all the working groups in the program, was designed with these heating rate validations as a key objective. Given the large dependence of radiative heating rates on cloud properties, a critical component of BBHRP radiative closure analyses has been the evaluation of cloud microphysical retrieval algorithms. This evaluation is an important step in establishing the necessary confidence in the continuous profiles of computed radiative heating rates produced by BBHRP at the ARM Climate Research Facility (ACRF) sites that are needed for modeling studies. This poster details the continued effort to evaluate cloud property retrieval algorithms within the BBHRP framework, a key focus of the project this year. A requirement for the computation of accurate heating rate profiles is a robust cloud microphysical product that captures the occurrence, height, and phase of clouds above each ACRF site. Various approaches to retrieve the microphysical properties of liquid, ice, and mixed-phase clouds have been processed in BBHRP for the ACRF Southern Great Plains (SGP) and the North Slope of Alaska (NSA) sites. These retrieval methods span a range of assumptions concerning the parameterization of cloud location, particle density, size, shape, and involve different measurement sources. We will present the radiative closure results from several different retrieval approaches for the SGP site, including those from Microbase, the current 'reference' retrieval approach in BBHRP. At the NSA, mixed-phase clouds and
Algebraic and algorithmic frameworks for optimized quantum measurements
NASA Astrophysics Data System (ADS)
Laghaout, Amine; Andersen, Ulrik L.
2015-10-01
von Neumann projections are the main operations by which information can be extracted from the quantum to the classical realm. They are, however, static processes that do not adapt to the states they measure. Advances in the field of adaptive measurement have shown that this limitation can be overcome by "wrapping" the von Neumann projectors in a higher-dimensional circuit which exploits the interplay between measurement outcomes and measurement settings. Unfortunately, the design of adaptive measurements has often been ad hoc and setup-specific. We shall here develop a unified framework for designing optimized measurements. Our approach is twofold: the first part is algebraic and formulates the problem of measurement as a simple matrix diagonalization problem. The second is algorithmic and models the optimal interaction between measurement outcomes and measurement settings as a cascaded network of conditional probabilities. Finally, we demonstrate that several figures of merit, such as Bell factors, can be improved by optimized measurements. This leads us to the promising observation that measurement detectors which, taken individually, have a low quantum efficiency can be arranged into circuits where, collectively, the limitations of inefficiency are compensated for.
Machnes, S.; Sander, U.; Glaser, S. J.; Schulte-Herbrueggen, T.; Fouquieres, P. de; Gruslys, A.; Schirmer, S.
2011-08-15
For paving the way to novel applications in quantum simulation, computation, and technology, increasingly large quantum systems have to be steered with high precision. Turning the time course of pulses, i.e., piecewise-constant control amplitudes, iteratively into an optimized shape is a typical task amenable to numerical optimal control. Here, we present a comparative study of optimal-control algorithms for a wide range of finite-dimensional applications. We focus on the most commonly used algorithms: GRAPE methods, which update all controls concurrently, and Krotov-type methods, which do so sequentially. Guidelines for their use are given and open research questions are pointed out. Moreover, we introduce a unifying algorithmic framework, DYNAMO (dynamic optimization platform), designed to provide the quantum-technology community with a convenient MATLAB-based tool set for optimal control. In addition, it gives researchers in optimal-control techniques a framework for benchmarking and comparing newly proposed algorithms with the state of the art. It allows a mix-and-match approach with various types of gradients, update and step-size methods, as well as subspace choices. Open-source code including examples is made available at http://qlib.info.
An algorithmic interactive planning framework in support of sustainable technologies
NASA Astrophysics Data System (ADS)
Prica, Marija D.
This thesis addresses the difficult problem of generation expansion planning that employs the most effective technologies in today's changing electric energy industry. The electrical energy industry, in both the industrialized world and in developing countries, is experiencing transformation in a number of different ways. This transformation is driven by major technological breakthroughs (such as the influx of unconventional smaller-scale resources), by industry restructuring, changing environmental objectives, and the ultimate threat of resource scarcity. This thesis proposes a possible planning framework in support of sustainable technologies where sustainability is viewed as a mix of multiple attributes ranging from reliability and environmental impact to short- and long-term efficiency. The idea of centralized peak-load pricing, which accounts for the tradeoffs between cumulative operational effects and the cost of new investments, is the key concept in support of long-term planning in the changing industry. To start with, an interactive planning framework for generation expansion is posed as a distributed decision-making model. In order to reconcile the distributed sub-objectives of different decision makers with system-wide sustainability objectives, a new concept of distributed interactive peak load pricing is proposed. To be able to make the right decisions, the decision makers must have sufficient information about the estimated long-term electricity prices. The sub-objectives of power plant owners and load-serving entities are profit maximization. Optimized long-term expansion plans based on predicted electricity prices are communicated to the system-wide planning authority as long-run bids. The long-term expansion bids are cleared by the coordinating planner so that the system-wide long-term performance criteria are satisfied. The interactions between generation owners and the coordinating planning authority are repeated annually. We view the proposed
Research on Routing Selection Algorithm Based on Genetic Algorithm
NASA Astrophysics Data System (ADS)
Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna
The genetic algorithm is a stochastic search and optimization method based on natural selection and genetic mechanisms. In recent years, because of its potential for solving complicated problems and its successful application in industrial projects, the genetic algorithm has received wide attention from domestic and international scholars. Routing selection has been defined as part of the standard communication model of IP version 6. This paper proposes a service model for routing selection, and designs and implements a new routing selection algorithm based on a genetic algorithm. The experimental simulation results show that this algorithm can obtain better solutions in less time and a more balanced network load, which enhances the search ratio and the availability of network resources, and improves the quality of service.
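A hedged sketch of the general technique, not the paper's exact operators: a GA evolving node paths between a source and destination on a toy random topology, with fitness penalizing total link cost and mutation rerouting a random intermediate hop.

```python
# GA route-selection sketch: chromosomes are node paths, selection keeps the
# cheapest paths, mutation replaces one intermediate hop.
import random

random.seed(0)
N = 8
cost = [[random.randint(1, 10) if i != j else 0 for j in range(N)] for i in range(N)]
SRC, DST = 0, N - 1

def random_path():
    mid = random.sample(range(1, N - 1), random.randint(1, 3))
    return [SRC] + mid + [DST]

def fitness(path):
    return -sum(cost[a][b] for a, b in zip(path, path[1:]))  # lower cost is fitter

pop = [random_path() for _ in range(30)]
for gen in range(40):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    children = []
    for _ in range(20):
        p = random.choice(elite)[:]
        p[random.randrange(1, len(p) - 1)] = random.randrange(1, N - 1)  # mutate
        children.append(p)
    pop = elite + children

best = max(pop, key=fitness)
print("best route:", best, "cost:", -fitness(best))
```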
Optimizing SRF Gun Cavity Profiles in a Genetic Algorithm Framework
Alicia Hofler, Pavel Evtushenko, Frank Marhauser
2009-09-01
Automation of DC photoinjector design using genetic algorithm (GA) based optimization is an accepted practice in accelerator physics. Allowing the gun cavity field profile shape to vary extends the utility of this optimization methodology to superconducting and normal-conducting radio frequency (SRF/RF) gun based injectors. Finding optimal field and cavity geometry configurations can guide cavity design choices and verify existing designs. We have considered two approaches for varying the electric field profile. The first is to determine the optimal field profile shape independent of the cavity geometry; the other is to vary the geometry of the gun cavity structure to produce an optimal field profile. The first method can provide a theoretical optimum and can illuminate where gains can be made in field shaping. The second method can produce more realistically achievable designs that can be compared to existing designs. In this paper, we discuss the design and implementation of these two methods for generating field profiles for SRF/RF guns in a GA-based injector optimization scheme and provide preliminary results.
Adaptive image contrast enhancement algorithm for point-based rendering
NASA Astrophysics Data System (ADS)
Xu, Shaoping; Liu, Xiaoping P.
2015-03-01
Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware by utilizing traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform the traditional triangle-based rendering in speed when it is applied to highly complex soft tissue cutting models. Nevertheless, the PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms, and experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm can be utilized in (or compatible with) various variants of the conventional PBR algorithm.
Ontological Problem-Solving Framework for Dynamically Configuring Sensor Systems and Algorithms
Qualls, Joseph; Russomanno, David J.
2011-01-01
The deployment of ubiquitous sensor systems and algorithms has led to many challenges, such as matching sensor systems to compatible algorithms which are capable of satisfying a task. Compounding the challenges is the lack of the requisite knowledge models needed to discover sensors and algorithms and to subsequently integrate their capabilities to satisfy a specific task. A novel ontological problem-solving framework has been designed to match sensors to compatible algorithms to form synthesized systems, which are capable of satisfying a task and then assigning the synthesized systems to high-level missions. The approach designed for the ontological problem-solving framework has been instantiated in the context of a persistence surveillance prototype environment, which includes profiling sensor systems and algorithms to demonstrate proof-of-concept principles. Even though the problem-solving approach was instantiated with profiling sensor systems and algorithms, the ontological framework may be useful with other heterogeneous sensing-system environments. PMID:22163793
Economic Dispatch Using Genetic Algorithm Based Hybrid Approach
Tahir Nadeem Malik; Aftab Ahmad; Shahab Khushnood
2006-07-01
Power Economic Dispatch (ED) is a vital and essential daily optimization procedure in power system operation. Present-day large generating units with multi-valve steam turbines exhibit large variations in their input-output characteristic functions, so non-convexity appears in the characteristic curves. Various mathematical and optimization techniques have been developed and applied to solve the economic dispatch problem. Most of these are calculus-based optimization algorithms that rely on successive linearization, using the first- and second-order derivatives of the objective function and its constraint equations as the search direction. They usually require the heat-input/power-output characteristics of generators to be monotonically increasing or piecewise linear. These simplifying assumptions result in an inaccurate dispatch. Genetic algorithms have been used to solve the economic dispatch problem both independently and in conjunction with other AI tools and mathematical programming approaches. Genetic algorithms have an inherent ability to reach the global minimum region of the search space in a short time, but then take longer to converge to the solution. GA-based hybrid approaches get around this problem and produce encouraging results. This paper presents a brief survey of hybrid approaches for economic dispatch; an architecture for an extensible computational framework as a common environment for conventional, genetic-algorithm, and hybrid solutions to power economic dispatch; and the implementation of three algorithms in the developed framework. The framework was tested on standard test systems for performance evaluation. (authors)
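To make the GA approach concrete, here is a hedged sketch of a plain (non-hybrid) GA for a three-unit dispatch with quadratic fuel costs, where the power-balance constraint is handled by a penalty term; all coefficients, limits, and the demand are toy values.

```python
# GA economic-dispatch sketch: minimize sum of quadratic fuel costs subject
# to total generation meeting demand (enforced via a penalty).
import numpy as np

rng = np.random.default_rng(9)
a = np.array([0.008, 0.009, 0.007])   # cost = a*P^2 + b*P + c per unit
b = np.array([7.0, 6.3, 6.8])
c = np.array([200.0, 180.0, 140.0])
pmin, pmax, demand = 100.0, 600.0, 850.0

def cost(P):
    fuel = np.sum(a * P**2 + b * P + c, axis=-1)
    return fuel + 1e3 * np.abs(P.sum(axis=-1) - demand)   # balance penalty

pop = rng.uniform(pmin, pmax, (60, 3))
for gen in range(200):
    order = np.argsort(cost(pop))
    elite = pop[order[:15]]                               # selection
    children = elite[rng.integers(15, size=45)] + rng.normal(0, 5.0, (45, 3))
    pop = np.clip(np.vstack([elite, children]), pmin, pmax)

best = pop[np.argmin(cost(pop))]
print("dispatch (MW):", best.round(1), "total:", best.sum().round(1))
```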
On effectiveness of network sensor-based defense framework
NASA Astrophysics Data System (ADS)
Zhang, Difan; Zhang, Hanlin; Ge, Linqiang; Yu, Wei; Lu, Chao; Chen, Genshe; Pham, Khanh
2012-06-01
Cyber attacks are increasing in frequency, impact, and complexity, exposing extensive network vulnerabilities with the potential for serious damage. Defending against cyber attacks calls for distributed collaborative monitoring, detection, and mitigation. To this end, we develop a network sensor-based defense framework aimed at network security awareness, mitigation, and prediction. We implement a prototype system and show its effectiveness in detecting known attacks, such as port scanning and distributed denial-of-service (DDoS). Based on this framework, we also implement statistical detection and sequential-testing detection techniques and compare their respective detection performance. Future defensive algorithms can be provisioned in the proposed framework for combating cyber attacks.
A Test Generation Framework for Distributed Fault-Tolerant Algorithms
NASA Technical Reports Server (NTRS)
Goodloe, Alwyn; Bushnell, David; Miner, Paul; Pasareanu, Corina S.
2009-01-01
Heavyweight formal methods such as theorem proving have been successfully applied to the analysis of safety critical fault-tolerant systems. Typically, the models and proofs performed during such analysis do not inform the testing process of actual implementations. We propose a framework for generating test vectors from specifications written in the Prototype Verification System (PVS). The methodology uses a translator to produce a Java prototype from a PVS specification. Symbolic (Java) PathFinder is then employed to generate a collection of test cases. A small example is employed to illustrate how the framework can be used in practice.
Optimizing medical data quality based on multiagent web service framework.
Wu, Ching-Seh; Khoury, Ibrahim; Shah, Hemant
2012-07-01
One of the most important issues in e-healthcare information systems is to optimize the quality of medical data extracted from distributed and heterogeneous environments, which can greatly improve diagnostic and treatment decision making. This paper proposes a multiagent web service framework based on service-oriented architecture for the optimization of medical data quality in the e-healthcare information system. Based on the design of the multiagent web service framework, an evolutionary algorithm (EA) for the dynamic optimization of medical data quality is proposed. The framework consists of two main components: first, an EA is used to dynamically optimize the composition of medical processes into an optimal task sequence according to specific quality attributes; second, a multiagent framework discovers, monitors, and reports any inconsistency between the optimized task sequence and the actual medical records. To demonstrate the proposed framework, experimental results for a breast cancer case study are provided. Furthermore, to show the unique performance of our algorithm, a comparison with other works in the literature is presented. PMID:22614723
Crystal Symmetry Algorithms in a High-Throughput Framework for Materials
NASA Astrophysics Data System (ADS)
Taylor, Richard
The high-throughput framework AFLOW, which has been developed and used successfully over the last decade, is improved to include fully integrated software for crystallographic symmetry characterization. The standards used in the symmetry algorithms conform with the conventions and prescriptions given in the International Tables for Crystallography (ITC). A standard cell choice with standard origin is selected, and the space group, point group, Bravais lattice, crystal system, lattice system, and representative symmetry operations are determined. Following the conventions of the ITC, the Wyckoff sites are also determined and their labels and site symmetry are provided. The symmetry code makes no assumptions on the input cell orientation, origin, or reduction and has been integrated into the AFLOW high-throughput framework for materials discovery by adding to the existing code base and making use of existing classes and functions. The software is written in object-oriented C++ for flexibility and reuse. A performance analysis examining the algorithms' scaling with cell size and symmetry is also reported.
Framework for performance evaluation of face recognition algorithms
NASA Astrophysics Data System (ADS)
Black, John A., Jr.; Gargesha, Madhusudhana; Kahol, Kanav; Kuchi, Prem; Panchanathan, Sethuraman
2002-07-01
Face detection and recognition is becoming increasingly important in the contexts of surveillance, credit card fraud detection, assistive devices for the visually impaired, etc. A number of face recognition algorithms have been proposed in the literature. The availability of a comprehensive face database is crucial to test the performance of these face recognition algorithms. However, while existing publicly available face databases contain face images with a wide variety of pose angles, illumination angles, gestures, face occlusions, and illuminant colors, these images have not been adequately annotated, thus limiting their usefulness for evaluating the relative performance of face detection algorithms. For example, many of the images in existing databases are not annotated with the exact pose angles at which they were taken. In order to compare the performance of various face recognition algorithms presented in the literature, there is a need for a comprehensive, systematically annotated database populated with face images that have been captured (1) at a variety of pose angles (to permit testing of pose invariance), (2) with a wide variety of illumination angles (to permit testing of illumination invariance), and (3) under a variety of commonly encountered illumination color temperatures (to permit testing of illumination color invariance). In this paper, we present a methodology for creating such an annotated database that employs a novel set of apparatus for the rapid capture of face images from a wide variety of pose angles and illumination angles. Four different types of illumination are used, including daylight, skylight, incandescent, and fluorescent. The entire set of images, as well as the annotations and the experimental results, is being placed in the public domain and made available for download over the worldwide web.
An algorithmic framework for Mumford-Shah regularization of inverse problems in imaging
NASA Astrophysics Data System (ADS)
Hohm, Kilian; Storath, Martin; Weinmann, Andreas
2015-11-01
The Mumford-Shah model is a very powerful variational approach for edge preserving regularization of image reconstruction processes. However, it is algorithmically challenging because one has to deal with a non-smooth and non-convex functional. In this paper, we propose a new efficient algorithmic framework for Mumford-Shah regularization of inverse problems in imaging. It is based on a splitting into specific subproblems that can be solved exactly. We derive fast solvers for the subproblems which are key for an efficient overall algorithm. Our method neither requires a priori knowledge of the gray or color levels nor of the shape of the discontinuity set. We demonstrate the wide applicability of the method for different modalities. In particular, we consider the reconstruction from Radon data, inpainting, and deconvolution. Our method can be easily adapted to many further imaging setups. The relevant condition is that the proximal mapping of the data fidelity can be evaluated within a reasonable time. In other words, it can be used whenever classical Tikhonov regularization is possible.
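The closing condition invites a concrete example: for deconvolution, where the forward operator is a (circular) convolution, the proximal mapping of the quadratic data fidelity has a closed form in the Fourier domain. The Python sketch below is a hedged illustration of that single subproblem, assuming periodic boundary conditions and numpy; it is not the paper's full splitting scheme, and the function names are ours.

```python
import numpy as np

def prox_deconv(v, b, kernel_fft, tau):
    """Proximal map of x -> (tau/2) * ||k * x - b||^2 for circular convolution.

    Solves (I + tau A^T A) x = v + tau A^T b, which the FFT diagonalizes,
    so the prox is exact and costs a few FFTs per evaluation.
    """
    V, B = np.fft.fft2(v), np.fft.fft2(b)
    K = kernel_fft
    X = (V + tau * np.conj(K) * B) / (1.0 + tau * np.abs(K) ** 2)
    return np.real(np.fft.ifft2(X))

# Usage sketch: kernel_fft = np.fft.fft2(psf, s=image.shape), then call
# prox_deconv inside the splitting iteration of choice.
```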
Research on Bayes matting algorithm based on Gaussian mixture model
NASA Astrophysics Data System (ADS)
Quan, Wei; Jiang, Shan; Han, Cheng; Zhang, Chao; Jiang, Zhengang
2015-12-01
The digital matting problem is a classical problem in imaging. It aims at separating non-rectangular foreground objects from a background image and compositing them with a new background image. Accurate matting determines the quality of the composited image. A Bayesian matting algorithm based on a Gaussian mixture model is proposed to solve this matting problem. First, the traditional Bayesian framework is improved by introducing a Gaussian mixture model. Then, a weighting factor is added in order to suppress the noise in the composited images. Finally, the effect is further improved by refining the user's input. This algorithm is applied to matting jobs on classical images, and the results are compared with the traditional Bayesian method. It is shown that our algorithm performs better on fine detail such as hair and eliminates noise well, and that it is very effective for objects with intricate boundaries.
Ontological Problem-Solving Framework for Assigning Sensor Systems and Algorithms to High-Level Missions
Qualls, Joseph; Russomanno, David J.
2011-01-01
The lack of knowledge models to represent sensor systems, algorithms, and missions makes opportunistically discovering a synthesis of systems and algorithms that can satisfy high-level mission specifications impractical. A novel ontological problem-solving framework has been designed that leverages knowledge models describing sensors, algorithms, and high-level missions to facilitate automated inference of assigning systems to subtasks that may satisfy a given mission specification. To demonstrate the efficacy of the ontological problem-solving architecture, a family of persistent-surveillance sensor systems and algorithms has been instantiated in a prototype environment to demonstrate the assignment of systems to subtasks of high-level missions. PMID:22164081
Optical Sensor Based Corn Algorithm Evaluation
Technology Transfer Automated Retrieval System (TEKTRAN)
Optical sensor based algorithms for corn fertilization have been developed by researchers in several states. The goal of this international research project was to evaluate these different algorithms and determine their robustness over a large geographic area. Concurrently, the goal of this project was to...
Initiative learning algorithm based on rough set
NASA Astrophysics Data System (ADS)
Wang, Guoyin; He, Xiao
2003-03-01
Rough set theory is emerging as a new tool for dealing with fuzzy and uncertain data. In this paper, a theory is developed to express, measure, and process uncertain information and uncertain knowledge based on our results on the uncertainty measure of decision tables and decision rule systems. Based on Skowron's propositional default rule generation algorithm, we develop an initiative learning model with a rough set based initiative rule generation algorithm. Simulation results illustrate its efficiency.
Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms
Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon
2011-01-01
Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software—BioImage Suite (bioimagesuite.org). PMID:21249532
QPSO-Based Adaptive DNA Computing Algorithm
Karakose, Mehmet; Cigdem, Ugur
2013-01-01
DNA (deoxyribonucleic acid) computing, a new computation model based on DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This approach aims to run the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions provided by the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are simultaneously tuned in an adaptive process; (2) the adaptive algorithm uses QPSO for goal-driven progress, faster operation, and flexibility with respect to data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate effective optimization, considerable convergence speed, and high accuracy relative to the standard DNA computing algorithm. PMID:23935409
Carbon- and Nitrogen-Based Organic Frameworks.
Sakaushi, Ken; Antonietti, Markus
2015-06-16
This Account provides an overview of organic, covalent, porous frameworks and solid-state materials mainly composed of the elements carbon and nitrogen. The structures under consideration are rather diverse and cover a wide spectrum. This Account will summarize current works on the synthetic concepts leading toward those systems and cover the application side where emphasis is set on the exploration of those systems as candidates for unusual high-performance catalysis, electrocatalysis, electrochemical energy storage, and artificial photosynthesis. These issues are motivated by the new global energy cycles and the fact that sustainable technologies should not be based on rare and expensive resources. We therefore present the strategic design of functionality in cost-effective, affordable artificial materials starting from a spectrum of simple synthetic options to end up with carbon- and nitrogen-based porous frameworks. Following the synthetic strategies, we demonstrate how the electronic structure of polymeric frameworks can be tuned and how this can modify property profiles in a very unexpected fashion. Covalent triazine-based frameworks (CTFs), for instance, showed both enormously high energy and high power density in lithium and sodium battery systems. Other C,N-based organic frameworks, such as triazine-based graphitic carbon nitride, are suggested to show promising band gaps for many (photo)electrochemical reactions. Nitrogen-rich carbonaceous frameworks, which are developed from C,N-based organic framework strategies, are highlighted in order to address their promising electrocatalytic properties, such as in the hydrogen evolution reaction, oxygen reduction reaction (ORR), and oxygen evolution reaction (OER). With careful design, those materials can be multifunctional catalysts, such as a bifunctional ORR/OER electrocatalyst. Although the majority of new C,N-based materials are still not competitive with the best (usually nonsustainable candidates) for each
Swarm-based algorithm for phase unwrapping.
da Silva Maciel, Lucas; Albertazzi, Armando G
2014-08-20
A novel algorithm for phase unwrapping based on swarm intelligence is proposed. The algorithm was designed based on three main goals: maximum coverage of reliable information, focused effort for better efficiency, and reliable unwrapping. Experiments were performed, and a new agent was designed to follow a simple set of five rules in order to collectively achieve these goals. These rules consist of random walking for unwrapping and searching, ambiguity evaluation by comparing unwrapped regions, and a replication behavior responsible for the good distribution of agents throughout the image. The results were comparable with the results from established methods. The swarm-based algorithm was able to suppress ambiguities better than the flood-fill algorithm without relying on lengthy processing times. In addition, future developments such as parallel processing and better-quality evaluation present great potential for the proposed method. PMID:25321125
Vector Quantization Algorithm Based on Associative Memories
NASA Astrophysics Data System (ADS)
Guzmán, Enrique; Pogrebnyak, Oleksiy; Yáñez, Cornelio; Manrique, Pablo
This paper presents a vector quantization algorithm for image compression based on extended associative memories (EAM). The proposed algorithm is divided into two stages. First, an associative network is generated by applying the learning phase of the extended associative memories between a codebook generated by the LBG algorithm and a training set. This associative network, named the EAM-codebook, represents a new codebook that is used in the next stage; it establishes a relation between the training set and the LBG codebook. Second, the vector quantization process is performed by means of the recalling stage of the EAM, using the EAM-codebook as the associative memory. This process generates the set of class indices to which each input vector belongs. With respect to the LBG algorithm, the main advantages offered by the proposed algorithm are high processing speed and low demand on resources (system memory); results on image compression and quality are presented.
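For context, the LBG codebook that seeds the first stage is trained with a generalized Lloyd iteration: assign each training vector to its nearest codeword, then recenter each codeword on its cell. A minimal numpy sketch follows; the function name and defaults are illustrative, and the EAM learning/recall stages themselves are not reproduced here.

```python
import numpy as np

def lbg_codebook(train, size, iters=50, eps=1e-4, seed=0):
    """Generalized Lloyd / LBG iteration: nearest-codeword assignment
    followed by centroid updates, until the mean distortion stalls."""
    rng = np.random.default_rng(seed)
    code = train[rng.choice(len(train), size, replace=False)].astype(float)
    prev = np.inf
    for _ in range(iters):
        d = ((train[:, None, :] - code[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)                      # class index per input vector
        dist = d[np.arange(len(train)), idx].mean()
        for k in range(size):                  # move codewords to centroids
            members = train[idx == k]
            if len(members):
                code[k] = members.mean(0)
        if prev - dist < eps * dist:           # distortion no longer improving
            break
        prev = dist
    return code, idx

# Usage: codebook, indices = lbg_codebook(blocks_as_vectors, size=256)
```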
Optimisation of nonlinear motion cueing algorithm based on genetic algorithm
NASA Astrophysics Data System (ADS)
Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid
2015-04-01
Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver to the simulator driver the sensation closest to that of a real vehicle driver without exceeding the physical limitations of the simulator. This paper provides an optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising the human perception error between the real and simulator drivers. One of the main limitations of classical washout filters is that they are tuned by the worst-case-scenario tuning method. This is based on trial and error and is affected by driving and programming experience, making it the most significant obstacle to full motion platform utilisation. It leads to inflexibility of the structure, produces false cues, and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. For this reason, the production of motion cues and the impact of different parameters of classical washout filters on motion cues remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA and tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching
SOM-based algorithms for qualitative variables.
Cottrell, Marie; Ibbou, Smaïl; Letrémy, Patrick
2004-01-01
It is well known that the SOM algorithm achieves a clustering of data which can be interpreted as an extension of Principal Component Analysis, because of its topology-preserving property. But the SOM algorithm can only process real-valued data. In previous papers, we have proposed several methods based on the SOM algorithm to analyze categorical data, which is the case in survey data. In this paper, we present these methods in a unified manner. The first one (Kohonen Multiple Correspondence Analysis, KMCA) deals only with the modalities, while the two others (Kohonen Multiple Correspondence Analysis with individuals, KMCA_ind, and Kohonen algorithm on DISJonctive table, KDISJ) can take into account the individuals and the modalities simultaneously. PMID:15555858
Color sorting algorithm based on K-means clustering algorithm
NASA Astrophysics Data System (ADS)
Zhang, BaoFeng; Huang, Qian
2009-11-01
In raisin production there are a variety of color impurities, which need to be removed effectively. A new, efficient raisin color-sorting algorithm is presented here. First, threshold-based image processing was applied for image pre-processing, and the gray-scale distribution characteristic of the raisin image was found. In order to obtain the chromatic aberration image and reduce disturbance, frame subtraction was performed, in which the background image data are subtracted from the target image data. Second, a Haar wavelet filter was used to obtain a smoothed image of the raisins. According to the different colors and to mildew, spots, and other external features, image characteristics were computed so as to fully reflect the quality differences between raisins of different types. After the processing above, the images were analyzed by the K-means clustering method, which achieves adaptive extraction of the statistical features; in accordance with these, the image data were divided into different categories, making the categories of abnormal colors distinct. Using this algorithm, raisins of abnormal colors and those with mottles were eliminated. The sorting rate was up to 98.6%, and the ratio of normal raisins among the sorted-out grains was less than one eighth.
Multi-expert tracking algorithm based on improved compressive tracker
NASA Astrophysics Data System (ADS)
Feng, Yachun; Zhang, Hong; Yuan, Ding
2015-12-01
Object tracking is a challenging task in computer vision. Most state-of-the-art methods maintain an object model and update it using new examples obtained from incoming frames in order to deal with appearance variation. Updating the object model frame by frame without any censorship mechanism inevitably introduces model drift. In this paper, we adopt a multi-expert tracking framework that is able to correct the effect of bad updates after they happen, such as those caused by severe occlusion. Hence, the proposed framework has exactly the ability that a robust tracking method should possess. The expert ensemble is constructed from a base tracker and its former snapshots. The tracking result is produced by the current tracker, which is selected by means of a simple loss function. We adopt an improved compressive tracker as the base tracker in our work and modify it to fit the multi-expert framework. The proposed multi-expert tracking algorithm significantly improves the robustness of the base tracker, especially in scenes with frequent occlusions and illumination variations. Experiments on challenging video sequences, with comparisons to several state-of-the-art trackers, demonstrate the effectiveness of our method, and our tracking algorithm can run in real time.
A Machine Learning Based Framework for Adaptive Mobile Learning
NASA Astrophysics Data System (ADS)
Al-Hmouz, Ahmed; Shen, Jun; Yan, Jun
Advances in wireless technology and handheld devices have created significant interest in mobile learning (m-learning) in recent years. Students nowadays are able to learn anywhere and at any time. Mobile learning environments must also cater for different user preferences and various devices with limited capability, where not all of the information is relevant and critical to each learning environment. To address this issue, this paper presents a framework that depicts the process of adapting learning content to satisfy individual learner characteristics by taking his/her learning style into consideration. We use a machine learning based algorithm for acquiring, representing, storing, reasoning about, and updating each learner's acquired profile.
Optimal caching algorithm based on dynamic programming
NASA Astrophysics Data System (ADS)
Guo, Changjie; Xiang, Zhe; Zhong, Yuzhuo; Long, Jidong
2001-07-01
With the dramatic growth of multimedia streams, the efficient distribution of stored videos has become a major concern. There are two basic caching strategies: the whole-caching strategy and the caching strategy based on layered encoded video; the latter can satisfy the requirements of highly heterogeneous access to the Internet. Conventional caching strategies assign each object a cache gain by calculating popularity or density popularity, and determine which videos and which layers should be cached. In this paper, we first investigate the proxy-based delivery model for stored video and propose two novel caching algorithms, DPLayer (for the layered encoded caching scheme) and DPWhole (for the whole-caching scheme), for multimedia proxy caching. The two algorithms are based on the resource allocation model of dynamic programming and select the optimal subset of objects to be cached in the proxy. Simulations prove that our algorithms achieve better performance than other existing schemes. We also analyze the computational and space complexity of the algorithms, and introduce a regulative parameter to compress the state space of the dynamic programming problem and reduce the complexity of the algorithms.
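At its core, selecting which objects (or layers) to cache under a capacity budget is a knapsack-style dynamic program over (size, cache-gain) pairs. The Python sketch below is a hedged illustration of that resource-allocation pattern, not the paper's exact DPLayer/DPWhole formulation; the item data in the example are invented.

```python
def select_cache(items, capacity):
    """0/1 knapsack by dynamic programming: choose the subset of video
    objects (or layers) maximizing total cache gain within proxy capacity.

    items: list of (size, gain) pairs with integer sizes.
    """
    n = len(items)
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i, (size, gain) in enumerate(items, 1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]
            if size <= c:
                best[i][c] = max(best[i][c], best[i - 1][c - size] + gain)
    chosen, c = [], capacity           # backtrack to recover the subset
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:
            chosen.append(i - 1)
            c -= items[i - 1][0]
    return best[n][capacity], chosen[::-1]

# Example: gain, picks = select_cache([(4, 10), (3, 7), (5, 9)], capacity=8)
```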
An Argumentation Framework based on Paraconsistent Logic
NASA Astrophysics Data System (ADS)
Umeda, Yuichi; Takahashi, Takehisa; Sawamura, Hajime
Argumentation is one of the most representative intelligent activities of humans. Therefore, it is natural to think that it could have many implications for artificial intelligence and computer science as well. Specifically, argumentation may be considered a most primitive capability for interaction among computational agents. In this paper we present an argumentation framework based on the four-valued paraconsistent logic. The tolerance and acceptance of inconsistency that this logic has as its logical feature allow for arguments on inconsistent knowledge bases, with which we are often confronted. We introduce various concepts for argumentation, such as arguments, attack relations, argument justification, and preferential criteria of arguments based on social norms, in a way proper to the four-valued paraconsistent logic. Then, we provide the fixpoint semantics and dialectical proof theory for our argumentation framework. We also give the proofs of soundness and completeness.
Structure-based algorithms for microvessel classification
Smith, Amy F.; Secomb, Timothy W.; Pries, Axel R.; Smith, Nicolas P.; Shipley, Rebecca J.
2014-01-01
Objective Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries and venules. PMID:25403335
A component based software framework for vision measurement
NASA Astrophysics Data System (ADS)
He, Lingsong; Bei, Lei
2011-12-01
In vision measurement applications, an optimal result is usually achieved by combining different processing steps and algorithms. This paper proposes a component based software framework for vision measurement. First, commonly used processing algorithms for vision measurement are encapsulated into components that are contained in a component library. Each component, which is designed to have its own properties, also provides I/O interfaces for external calls. Second, a software bus is proposed which can plug in components and assemble them to form a vision measurement application. Besides component management and data-line linking, the software bus also provides a message distribution service, which is used to drive all the plugged-in components to work properly. Third, an XML-based script language is proposed to record the plugging and assembling process of a vision measurement application, so that the application can be rebuilt later. Finally, based on this framework, an application of landmark extraction applied in camera calibration is introduced to show how the framework works.
Evaluation of five non-rigid image registration algorithms using the NIREP framework
NASA Astrophysics Data System (ADS)
Wei, Ying; Christensen, Gary E.; Song, Joo Hyun; Rudrauf, David; Bruss, Joel; Kuhl, Jon G.; Grabowski, Thomas J.
2010-03-01
Evaluating non-rigid image registration algorithm performance is a difficult problem since there is rarely a "gold standard" (i.e., known) correspondence between two images. This paper reports the analysis and comparison of five non-rigid image registration algorithms using the Non-Rigid Image Registration Evaluation Project (NIREP) (www.nirep.org) framework. The NIREP framework evaluates registration performance using centralized databases of well-characterized images and standard evaluation statistics (methods) which are implemented in a software package. The performance of five non-rigid registration algorithms (Affine, AIR, Demons, SLE and SICLE) was evaluated using 22 images from two NIREP neuroanatomical evaluation databases. Six evaluation statistics (relative overlap, intensity variance, normalized ROI overlap, alignment of calcarine sulci, inverse consistency error and transitivity error) were used to evaluate and compare image registration performance. The results indicate that the Demons registration algorithm produced the best registration results with respect to the relative overlap statistic but produced nearly the worst registration results with respect to the inverse consistency statistic. The fact that one registration algorithm produced the best result for one criterion and nearly the worst for another illustrates the need to use multiple evaluation statistics to fully assess performance.
Numerical Algorithms Based on Biorthogonal Wavelets
NASA Technical Reports Server (NTRS)
Ponenti, Pj.; Liandrat, J.
1996-01-01
Wavelet bases are used to generate spaces of approximation for the resolution of bidimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is then stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multiresolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.
Algorithmic Differentiation for Calculus-based Optimization
NASA Astrophysics Data System (ADS)
Walther, Andrea
2010-10-01
For numerous applications, the computation and provision of exact derivative information plays an important role for optimizing the considered system but quite often also for its simulation. This presentation introduces the technique of Algorithmic Differentiation (AD), a method to compute derivatives of arbitrary order within working precision. Quite often, additional structure exploitation is indispensable for a successful coupling of these derivatives with state-of-the-art optimization algorithms. The talk will discuss two important situations where the problem-inherent structure allows a calculus-based optimization. Examples from aerodynamics and nano-optics illustrate these advanced optimization approaches.
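To make the AD idea concrete, a minimal forward-mode sketch using dual numbers is shown below: it propagates a value and its derivative through overloaded arithmetic, yielding derivatives at working precision. The class is illustrative only; a production AD tool overloads many more operations and supports higher orders, as the abstract notes.

```python
class Dual:
    """Forward-mode AD value: carries f and df/dx through arithmetic."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1     # d/dx = 6x + 2

x = Dual(2.0, 1.0)                   # seed dx/dx = 1
y = f(x)
print(y.val, y.dot)                  # 17.0 14.0
```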
Analogue factoring algorithm based on polychromatic interference
NASA Astrophysics Data System (ADS)
Tamma, Vincenzo; Garuccio, Augusto; Shih, Yanhua
2010-08-01
We present a novel factorization algorithm which can be computed using an analogue computer based on a polychromatic source with a given wavelength bandwidth, a multi-path interferometer, and a spectrometer. The core of this algorithm rests on measuring the periodicity of a "factoring" function given by an exponential sum with continuous argument, by recording a sequence of interferograms associated with suitable units of displacement in the interferometer. A remarkable rescaling property of such interferograms allows, in principle, the prime number decomposition of several large integers. The information about factors is encoded in the location of the interferogram maxima.
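Numerically, the factor test behind such interferometric schemes can be illustrated with a truncated exponential (Gauss) sum: when a trial divisor x divides N, every phase is a multiple of 2*pi and the normalized sum has unit magnitude, while non-factors interfere destructively. The Python sketch below is a hedged illustration; the truncation length M and the acceptance threshold are our choices, and small M is known to admit spurious "ghost" candidates.

```python
import numpy as np

def gauss_sum_magnitude(N, x, M=16):
    """|(1/M) * sum_{m<M} exp(2*pi*i * m^2 * N / x)|.

    Exactly 1 when x divides N; typically O(1/sqrt(M)) otherwise.
    """
    m = np.arange(M)
    return abs(np.exp(2j * np.pi * m * m * N / x).mean())

N = 1403                                   # 23 * 61
for x in range(2, 70):
    mag = gauss_sum_magnitude(N, x)
    if mag > 0.95:                         # illustrative threshold
        print(x, round(mag, 3))            # prints the factors 23 and 61
```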
Satellite mission scheduling algorithm based on GA
NASA Astrophysics Data System (ADS)
Sun, Baolin; Mao, Lifei; Wang, Wenxiang; Xie, Xing; Qin, Qianqing
2007-11-01
The Satellite Mission Scheduling (SMS) problem involves scheduling tasks to be performed by a satellite, where new task requests can arrive at any time, non-deterministically, and must be scheduled in real time. This paper describes a new genetic-algorithm-based approach to the Satellite Mission Scheduling problem (SMSGA). It investigates algorithmic approaches for determining an optimal or near-optimal sequence of tasks, allocated to a satellite payload over time, with dynamic tasking considerations. The simulation results show that the proposed approach is effective and efficient in application to real problems.
A framework for porting the NeuroBayes machine learning algorithm to FPGAs
NASA Astrophysics Data System (ADS)
Baehr, S.; Sander, O.; Heck, M.; Feindt, M.; Becker, J.
2016-01-01
The NeuroBayes machine learning algorithm is deployed for online data reduction at the pixel detector of Belle II. In order to test, characterize, and easily adapt its implementation on FPGAs, a framework was developed. Within the framework, an HDL model written in Python using MyHDL is used for fast exploration of possible configurations. Using input data from physics simulations, figures of merit such as throughput, accuracy, and resource demand of the implementation are evaluated in a fast and flexible way. Functional validation is supported by unit tests and HDL simulation for chosen configurations.
A Reliability-Based Track Fusion Algorithm
Xu, Li; Pan, Liqiang; Jin, Shuilin; Liu, Haibo; Yin, Guisheng
2015-01-01
Common track fusion algorithms in multi-sensor systems have some defects, such as a serious imbalance between accuracy and computational cost, identical treatment of all sensor information regardless of quality, and high fusion errors at inflection points. To address these defects, a track fusion algorithm based on reliability (TFR) is presented for multi-sensor and multi-target environments. To improve the information quality, outliers in the local tracks are eliminated first. Then the reliability of the local tracks is calculated, and the local tracks with high reliability are chosen for the state estimation fusion. In contrast to existing methods, TFR reduces the high fusion errors at the inflection points of system tracks and obtains high accuracy at less computational cost. Simulation results verify the effectiveness and superiority of the algorithm in dense sensor environments. PMID:25950174
Bell-Curve Based Evolutionary Optimization Algorithm
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.
1998-01-01
The paper presents an optimization algorithm that falls in the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most Genetic Algorithms (GA) in research and applications in America, alternatives that are also in the category of evolutionary algorithms but use a direct, geometrical approach have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by the use of a combination of n-dimensional geometry and the normal distribution, the bell curve, in the generation of the offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions, parallel and orthogonal to the connecting line, the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. The principal advantage of BCB is its controllability via the normal distribution parameters and the geometrical construct variables.
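The child-generation construct described above translates almost directly into code. The numpy sketch below is a hedged reading of it, with the weight w and the two spread parameters as illustrative values rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def bcb_child(p1, p2, w=0.5, s_par=0.1, s_orth=0.1):
    """Bell-curve child: a weighted point on the parent-connecting line,
    perturbed along and across that line with normal deviations."""
    line = p2 - p1
    norm = np.linalg.norm(line)
    if norm == 0.0:                       # identical parents: no geometry
        return p1.copy()
    u = line / norm                       # unit vector along the line
    child = p1 + w * line                 # weighted point on the line
    child = child + rng.normal(0.0, s_par) * norm * u   # parallel deviation
    r = rng.normal(0.0, s_orth, size=p1.shape)
    r = r - (r @ u) * u                   # keep only the orthogonal part
    return child + norm * r               # orthogonal deviation

# child = bcb_child(np.array([0.0, 0.0]), np.array([1.0, 1.0]))
```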
An Automatic Learning-Based Framework for Robust Nucleus Segmentation.
Xing, Fuyong; Xie, Yuanpu; Yang, Lin
2016-02-01
Computer-aided image analysis of histopathology specimens could potentially provide support for early detection and improved characterization of diseases such as brain tumor, pancreatic neuroendocrine tumor (NET), and breast cancer. Automated nucleus segmentation is a prerequisite for various quantitative analyses including automatic morphological feature computation. However, it remains a challenging problem due to the complex nature of histopathology images. In this paper, we propose a learning-based framework for robust and automatic nucleus segmentation with shape preservation. Given a nucleus image, it begins with a deep convolutional neural network (CNN) model to generate a probability map, on which an iterative region merging approach is performed for shape initializations. Next, a novel segmentation algorithm is exploited to separate individual nuclei combining a robust selection-based sparse shape model and a local repulsive deformable model. One of the significant benefits of the proposed framework is that it is applicable to different staining histopathology images. Due to the feature learning characteristic of the deep CNN and the high level shape prior modeling, the proposed method is general enough to perform well across multiple scenarios. We have tested the proposed algorithm on three large-scale pathology image datasets using a range of different tissue and stain preparations, and the comparative experiments with recent state-of-the-art methods demonstrate the superior performance of the proposed approach. PMID:26415167
François, Marianne M.
2015-05-28
A review of recent advances made in numerical methods and algorithms within the volume tracking framework is presented. The volume tracking method, also known as the volume-of-fluid method has become an established numerical approach to model and simulate interfacial flows. Its advantage is its strict mass conservation. However, because the interface is not explicitly tracked but captured via the material volume fraction on a fixed mesh, accurate estimation of the interface position, its geometric properties and modeling of interfacial physics in the volume tracking framework remain difficult. Several improvements have been made over the last decade to address these challenges. In this study, the multimaterial interface reconstruction method via power diagram, curvature estimation via heights and mean values and the balanced-force algorithm for surface tension are highlighted.
An improved filter-u least mean square vibration control algorithm for aircraft framework.
Huang, Quanzhen; Luo, Jun; Gao, Zhiyuan; Zhu, Xiaojin; Li, Hengyu
2014-09-01
Active vibration control of aerospace vehicle structures is a very hot topic, and the filter-u least mean square (FULMS) algorithm is one of the key methods in this area. But for practical reasons and technical limitations, extraction of the vibration reference signal has always been a difficult problem for the FULMS algorithm. To solve the reference-signal extraction problem, an improved FULMS vibration control algorithm is proposed in this paper. The reference signal is constructed based on the controller structure and the data available during the algorithm's operation, using a vibration response residual signal extracted directly from the vibrating structure. To test the proposed algorithm, an aircraft frame model is built and an experimental platform is constructed. The simulation and experimental results show that the proposed algorithm is more practical, with good vibration suppression performance. PMID:25273765
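For orientation, the adaptive core of this controller family is the filtered-reference LMS update. The Python sketch below shows the feedforward (filtered-x) half, assuming a known FIR secondary-path model and n_taps >= len(sec_path); the feedback part that distinguishes filtered-u, and the paper's residual-based reference construction, are omitted, so this is an illustration rather than the proposed algorithm.

```python
import numpy as np

def fxlms(ref, desired, sec_path, n_taps=32, mu=1e-3):
    """Filtered-x LMS skeleton: the controller output reaches the error
    sensor through a secondary-path FIR model, so the reference is
    filtered through that same model before each weight update."""
    w = np.zeros(n_taps)               # adaptive FIR controller weights
    xbuf = np.zeros(n_taps)            # reference history
    ybuf = np.zeros(len(sec_path))     # control-output history
    fbuf = np.zeros(n_taps)            # filtered-reference history
    err = np.zeros(len(ref))
    for n in range(len(ref)):
        xbuf = np.roll(xbuf, 1); xbuf[0] = ref[n]
        y = w @ xbuf                                   # control signal
        ybuf = np.roll(ybuf, 1); ybuf[0] = y
        err[n] = desired[n] - sec_path @ ybuf          # residual at sensor
        fbuf = np.roll(fbuf, 1)
        fbuf[0] = sec_path @ xbuf[:len(sec_path)]      # filter x through S(z)
        w += mu * err[n] * fbuf                        # LMS weight update
    return err
```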
Task-Based Flocking Algorithm for Mobile Robot Cooperation
NASA Astrophysics Data System (ADS)
He, Hongsheng; Ge, Shuzhi Sam; Tong, Guofeng
In this paper, a task-based flocking algorithm that coordinates a swarm of robots is presented and evaluated on a standard simulation platform. The task-based flocking algorithm (TFA) is an effective framework for mobile robot cooperation. Flocking behaviors are integrated into the cooperation of the multi-robot system to organize a robot team to achieve a common goal. The goal of the whole team is achieved through the collaboration of the individual robots' tasks. The flocking model is presented, and a flocking energy function is defined on that model to analyze the stability of the flock and the task-switching criterion. The simulation study is conducted in a five-versus-five soccer game, where each robot dynamically selects its task according to the game status and the whole robot team behaves as a flock. The simulation results and experiments show that the task-based flocking algorithm can effectively coordinate and control the robot flock to achieve the goal.
DE and NLP Based QPLS Algorithm
NASA Astrophysics Data System (ADS)
Yu, Xiaodong; Huang, Dexian; Wang, Xiong; Liu, Bo
As a novel evolutionary computing technique, Differential Evolution (DE) has been shown to be an effective optimization method for complex problems and has achieved many successful applications in engineering. In this paper, a new Quadratic Partial Least Squares (QPLS) algorithm based on Nonlinear Programming (NLP) is presented, in which DE is used to solve the NLP so as to calculate the optimal input weights and the parameters of the inner relationship. Simulation results based on the soft measurement of the diesel oil solidifying point on a real crude distillation unit demonstrate the superiority of the proposed algorithm over linear PLS and over QPLS based on Sequential Quadratic Programming (SQP), in terms of fitting accuracy and computational cost.
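As a reference point for the optimizer itself, a compact DE/rand/1/bin sketch is given below; the control parameters and the sphere test function are illustrative and unrelated to the paper's soft-sensor application.

```python
import numpy as np

def de_optimize(obj, lo, hi, pop_size=30, F=0.6, CR=0.9, gens=200, seed=0):
    """DE/rand/1/bin: difference-vector mutation, binomial crossover,
    greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([obj(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = rng.choice(np.delete(np.arange(pop_size), i), 3,
                             replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True      # force one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f = obj(trial)
            if f < fit[i]:                       # greedy selection
                pop[i], fit[i] = trial, f
    return pop[fit.argmin()], fit.min()

# Example: minimize the sphere function in 5 dimensions
best, val = de_optimize(lambda x: float(np.sum(x * x)),
                        lo=np.full(5, -5.0), hi=np.full(5, 5.0))
```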
A graph spectrum based geometric biclustering algorithm.
Wang, Doris Z; Yan, Hong
2013-01-21
Biclustering is capable of performing simultaneous clustering on two dimensions of a data matrix and has many applications in pattern classification. For example, in microarray experiments, a subset of genes is co-expressed in a subset of conditions, and biclustering algorithms can be used to detect the coherent patterns in the data for further analysis of function. In this paper, we present a graph spectrum based geometric biclustering (GSGBC) algorithm. In the geometrical view, biclusters can be seen as different linear geometrical patterns in high dimensional spaces. Based on this, the modified Hough transform is used to find the Hough vector (HV) corresponding to sub-bicluster patterns in 2D spaces. A graph can be built regarding each HV as a node. The graph spectrum is utilized to identify the eigengroups in which the sub-biclusters are grouped naturally to produce larger biclusters. Through a comparative study, we find that the GSGBC achieves as good a result as GBC and outperforms other kinds of biclustering algorithms. Also, compared with the original geometrical biclustering algorithm, it reduces the computing time complexity significantly. We also show that biologically meaningful biclusters can be identified by our method from real microarray gene expression data. PMID:23079285
Fast Algorithms for Model-Based Diagnosis
NASA Technical Reports Server (NTRS)
Fijany, Amir; Barrett, Anthony; Vatan, Farrokh; Mackey, Ryan
2005-01-01
Two improved new methods for automated diagnosis of complex engineering systems involve the use of novel algorithms that are more efficient than prior algorithms used for the same purpose. Both the recently developed algorithms and the prior algorithms in question are instances of model-based diagnosis, which is based on exploring the logical inconsistency between an observation and a description of a system to be diagnosed. As engineering systems grow more complex and increasingly autonomous in their functions, the need for automated diagnosis increases concomitantly. In model-based diagnosis, the function of each component and the interconnections among all the components of the system to be diagnosed (for example, see figure) are represented as a logical system, called the system description (SD). Hence, the expected behavior of the system is the set of logical consequences of the SD. Faulty components lead to inconsistency between the observed behaviors of the system and the SD. The task of finding the faulty components (diagnosis) reduces to finding the components, the abnormalities of which could explain all the inconsistencies. Of course, the meaningful solution should be a minimal set of faulty components (called a minimal diagnosis), because the trivial solution, in which all components are assumed to be faulty, always explains all inconsistencies. Although the prior algorithms in question implement powerful methods of diagnosis, they are not practical because they essentially require exhaustive searches among all possible combinations of faulty components and therefore entail the amounts of computation that grow exponentially with the number of components of the system.
CS based confocal microwave imaging algorithm for breast cancer detection.
Sun, Y P; Zhang, S; Cui, Z; Qu, L L
2016-04-29
Based on compressive sensing (CS) technology, a high-resolution confocal microwave imaging algorithm is proposed for breast cancer detection. By exploiting the spatial sparsity of the target space, the image reconstruction problem is cast within the CS framework and solved by sparsity-constrained optimization. The effectiveness and validity of the proposed CS imaging method are verified with full-wave synthetic data from a numerical breast phantom generated using the finite-difference time-domain (FDTD) method. The imaging results show that the proposed scheme improves imaging quality while significantly reducing the amount of data measurements and the collection time compared with the traditional delay-and-sum imaging algorithm. PMID:27177106
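The sparsity-constrained reconstruction step can be illustrated with a generic iterative shrinkage-thresholding (ISTA) solver for the l1-regularized least-squares problem. The Python sketch below uses a random toy sensing matrix rather than the paper's FDTD-derived model, so it shows the optimization pattern only.

```python
import numpy as np

def ista(A, y, lam=0.1, steps=500):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1, the sparse
    reconstruction at the core of CS-based imaging."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - y)              # gradient of the data-fidelity term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Toy test: a sparse scene measured with a random Gaussian matrix
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100)) / np.sqrt(40)
x_true = np.zeros(100); x_true[[5, 42, 77]] = [1.0, -0.8, 0.5]
x_hat = ista(A, A @ x_true, lam=0.02)      # recovers the three spikes
```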
Adaptive inpainting algorithm based on DCT induced wavelet regularization.
Li, Yan-Ran; Shen, Lixin; Suter, Bruce W
2013-02-01
In this paper, we propose an image inpainting optimization model whose objective function is a smoothed l1 norm of the weighted nondecimated discrete cosine transform (DCT) coefficients of the underlying image. By identifying the objective function of the proposed model as a sum of a differentiable term and a nondifferentiable term, we present a basic algorithm inspired by Beck and Teboulle's recent work on the model. Based on this basic algorithm, we propose an automatic way to determine the weights involved in the model and update them in each iteration. The DCT as an orthogonal transform is used in various applications. We view the rows of a DCT matrix as the filters associated with a multiresolution analysis. Nondecimated wavelet transforms with these filters are explored in order to analyze the images to be inpainted. Our numerical experiments verify that under the proposed framework, the filters from a DCT matrix demonstrate promise for the task of image inpainting. PMID:23060331
jClustering, an open framework for the development of 4D clustering algorithms.
Mateos-Pérez, José María; García-Villalba, Carmen; Pascau, Javier; Desco, Manuel; Vaquero, Juan J
2013-01-01
We present jClustering, an open framework for the design of clustering algorithms in dynamic medical imaging. We developed this tool because of the difficulty involved in manually segmenting dynamic PET images and the lack of availability of source code for published segmentation algorithms. Providing an easily extensible open tool encourages publication of source code to facilitate the process of comparing algorithms and provide interested third parties with the opportunity to review code. The internal structure of the framework allows an external developer to implement new algorithms easily and quickly, focusing only on the particulars of the method being implemented and not on image data handling and preprocessing. This tool has been coded in Java and is presented as an ImageJ plugin in order to take advantage of all the functionalities offered by this imaging analysis platform. Both binary packages and source code have been published, the latter under a free software license (GNU General Public License) to allow modification if necessary. PMID:23990913
LSB Based Quantum Image Steganography Algorithm
NASA Astrophysics Data System (ADS)
Jiang, Nan; Zhao, Na; Wang, Luo
2016-01-01
Quantum steganography is the technique which hides a secret message into quantum covers such as quantum images. In this paper, two blind LSB steganography algorithms in the form of quantum circuits are proposed based on the novel enhanced quantum representation (NEQR) for quantum images. One algorithm is plain LSB which uses the message bits to substitute for the pixels' LSB directly. The other is block LSB which embeds a message bit into a number of pixels that belong to one image block. The extracting circuits can regain the secret message only according to the stego cover. Analysis and simulation-based experimental results demonstrate that the invisibility is good, and the balance between the capacity and the robustness can be adjusted according to the needs of applications.
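Because the embedding rule itself is classical, the plain-LSB scheme can be sketched on an ordinary image array as a classical analogue of the proposed quantum circuits; the numpy sketch below shows blind embedding and extraction (the block-LSB variant would instead spread each message bit over the pixels of one block).

```python
import numpy as np

def lsb_embed(cover, bits):
    """Plain-LSB embedding (classical analogue of the quantum circuit):
    overwrite the least significant bit of the first len(bits) pixels."""
    flat = cover.flatten().astype(np.uint8)       # copy; cover is untouched
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, np.uint8)
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Blind extraction: the message is read from the stego image alone."""
    return (stego.flatten()[:n_bits] & 1).astype(np.uint8)

cover = np.arange(16, dtype=np.uint8).reshape(4, 4)
msg = [1, 0, 1, 1, 0, 0, 1, 0]
stego = lsb_embed(cover, msg)
assert list(lsb_extract(stego, len(msg))) == msg
```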
Automated Vectorization of Decision-Based Algorithms
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision- based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.
Network-based recommendation algorithms: A review
NASA Astrophysics Data System (ADS)
Yu, Fei; Zeng, An; Gillard, Sébastien; Medo, Matúš
2016-06-01
Recommender systems are a vital tool that helps us to overcome the information overload problem. They are being used by most e-commerce web sites and attract the interest of a broad scientific community. A recommender system uses data on users' past preferences to choose new items that might be appreciated by a given individual user. While many approaches to recommendation exist, the approach based on a network representation of the input data has gained considerable attention in the past. We review here a broad range of network-based recommendation algorithms and for the first time compare their performance on three distinct real datasets. We present recommendation topics that go beyond the mere question of which algorithm to use, such as the possible influence of recommendation on the evolution of systems that use it, and finally discuss open research directions and challenges.
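One canonical member of the family reviewed here is mass diffusion (often called ProbS), which spreads unit resource from the target user's collected items across the user-item bipartite graph and back. The numpy sketch below is a hedged illustration on a toy rating matrix; the function and variable names are ours.

```python
import numpy as np

def probs_scores(A, user):
    """Mass-diffusion (ProbS) recommendation on a user-item bipartite graph.

    A: binary user-item matrix (n_users x n_items); user: target row index.
    Resource starts on the user's items, spreads to users, then back to
    items; already-collected items are masked out of the ranking.
    """
    ku = A.sum(axis=1, keepdims=True)          # user degrees
    ki = A.sum(axis=0, keepdims=True)          # item degrees
    f0 = A[user].astype(float)                 # initial resource on items
    to_users = (A / np.where(ki == 0, 1, ki)) @ f0          # items -> users
    scores = (A / np.where(ku == 0, 1, ku)).T @ to_users    # users -> items
    scores[A[user] == 1] = -np.inf             # do not recommend known items
    return scores

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]])
print(np.argsort(-probs_scores(A, user=0)))    # item indices, best first
```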
Thiophene-based covalent organic frameworks
Bertrand, Guillaume H. V.; Michaelis, Vladimir K.; Ong, Ta-Chung; Griffin, Robert G.; Dincă, Mircea
2013-01-01
We report the synthesis and characterization of covalent organic frameworks (COFs) incorporating thiophene-based building blocks. We show that these are amenable to reticular synthesis, and that bent ditopic monomers, such as 2,5-thiophenediboronic acid, are defect-prone building blocks that are susceptible to synthetic variations during COF synthesis. The synthesis and characterization of an unusual charge transfer complex between thieno[3,2-b]thiophene-2,5-diboronic acid and tetracyanoquinodimethane enabled by the unique COF architecture is also presented. Together, these results delineate important synthetic advances toward the implementation of COFs in electronic devices. PMID:23479656
Integrated consensus-based frameworks for unmanned vehicle routing and targeting assignment
NASA Astrophysics Data System (ADS)
Barnawi, Waleed T.
Unmanned aerial vehicles (UAVs) are increasingly deployed in complex and dynamic environments to perform multiple tasks cooperatively with other UAVs, contributing to overarching mission effectiveness. Studies by the Department of Defense (DoD) indicate that future operations may include anti-access/area-denial (A2AD) environments, which limit human teleoperator decision-making and control. This research addresses the problem of decentralized vehicle re-routing and task reassignment through consensus-based UAV decision-making. An Integrated Consensus-Based Framework (ICF) is formulated as a solution to the combined single-task assignment problem and vehicle routing problem, while the multiple-assignment and vehicle routing problem is solved with the Integrated Consensus-Based Bundle Framework (ICBF). The frameworks are hierarchically decomposed into two levels. The bottom layer utilizes the well-known Dijkstra's algorithm. The top layer addresses task assignment with two methods. The single-assignment approach is called the Caravan Auction (CarA) algorithm. This technique extends the Consensus-Based Auction Algorithm (CBAA) to provide awareness of task completion by agents and to adopt abandoned tasks. The multiple-assignment approach, called the Caravan Auction Bundle (CarAB) algorithm, extends the Consensus-Based Bundle Algorithm (CBBA) by providing awareness of lost resources, prioritizing remaining tasks, and adopting abandoned tasks. Research questions are investigated regarding the novelty and performance of the proposed frameworks, and conclusions regarding these questions are supported through hypothesis testing, with Monte Carlo simulations providing the evidence. The approach provided in this research addresses current and future military operations for unmanned aerial vehicles. However, the general framework implied by the proposed research is adaptable to any unmanned
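To fix ideas, the consensus-auction pattern that CarA extends can be sketched for the single-assignment case over a fully connected network. The Python below is a simplified synchronous illustration with an invented score matrix; it is not the dissertation's CarA algorithm (task-completion awareness and abandoned-task adoption are omitted).

```python
import numpy as np

def cbaa(scores, rounds=10):
    """Simplified synchronous consensus-based auction: each unassigned
    agent bids on its best still-winnable task, and all agents agree on
    the highest bid per task (max-consensus). scores[i, j] is the value
    agent i assigns to task j; returns the winning agent per task."""
    n_agents, n_tasks = scores.shape
    y = np.zeros(n_tasks)                  # highest known bid per task
    win = -np.ones(n_tasks, dtype=int)     # consensus winner per task
    for _ in range(rounds):
        for i in range(n_agents):
            if i in win:                   # agent currently holds a task
                continue
            beatable = scores[i] > y       # tasks this agent can outbid
            if not beatable.any():
                continue
            j = np.argmax(np.where(beatable, scores[i], -np.inf))
            y[j], win[j] = scores[i, j], i # bid and broadcast; the outbid
                                           # agent rebids in a later round
    return win

scores = np.array([[0.9, 0.2, 0.5],
                   [0.8, 0.7, 0.1],
                   [0.3, 0.6, 0.8]])
print(cbaa(scores))                        # -> [0 1 2]
```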
A task-based analytical framework for ultrasonic beamformer comparison.
Nguyen, Nghia Q; Prager, Richard W; Insana, Michael F
2016-08-01
A task-based approach is employed to develop an analytical framework for ultrasound beamformer design and evaluation. In this approach, a Bayesian ideal-observer provides an idealized starting point and a way to measure information loss in practical beamformer designs. Different approximations of this ideal strategy are shown to lead to popular beamformers in the literature, including the matched filter, minimum variance (MV), and Wiener filter (WF) beamformers. Analysis of the approximations indicates that the WF beamformer should outperform the MV approach, especially in low echo signal-to-noise conditions. The beamformers are applied to five typical tasks from the BIRADS lexicon. Their performance is evaluated based on their ability to discriminate idealized malignant and benign features. The numerical results show the advantages of the WF over the MV technique in general, although performance varies predictably in some contrast-limited tasks because of the model modifications required for the MV algorithm to avoid ill-conditioning. PMID:27586736
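As a rough illustration of the two weight rules discussed above, the sketch below computes minimum-variance weights and a Wiener-style shrinkage of them in numpy; the covariance matrix, steering vector, and the scalar gain form are placeholder assumptions rather than the paper's derivation.

    # Schematic comparison of MV and Wiener-filter beamformer weights for
    # one focal point. R is the (assumed known) echo covariance matrix and
    # a the array steering vector; both are placeholders for real data.
    import numpy as np

    def mv_weights(R, a):
        # Minimum-variance (Capon): w = R^-1 a / (a^H R^-1 a)
        Ri_a = np.linalg.solve(R, a)
        return Ri_a / (a.conj() @ Ri_a)

    def wf_weights(R, a, signal_power, noise_power):
        # Wiener filtering adds a scalar shrinkage that grows as echo SNR
        # drops, which is one way to see why WF degrades more gracefully
        # than MV in low echo-SNR conditions (assumed scalar-gain form).
        gain = signal_power / (signal_power + noise_power)
        return gain * mv_weights(R, a)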
Image enhancement based on edge boosting algorithm
NASA Astrophysics Data System (ADS)
Ngernplubpla, Jaturon; Chitsobhuk, Orachat
2015-12-01
In this paper, a technique for image enhancement based on a proposed edge boosting algorithm, which reconstructs a high quality image from a single low resolution image, is described. The difficulty in single-image super-resolution is that the generic image priors residing in the low resolution input image may not be sufficient to generate effective solutions. In order to achieve success in super-resolution reconstruction, efficient prior knowledge should be estimated. The statistics of gradient priors, in terms of a priority map based on separable gradient estimation, maximum likelihood edge estimation, and local variance, are introduced. The proposed edge boosting algorithm takes advantage of these gradient statistics to select appropriate enhancement weights. Larger weights are applied to the higher frequency details, while the low frequency details are smoothed. The experimental results illustrate significant performance improvement, both quantitatively and perceptually. It can be seen that the proposed edge boosting algorithm produces high quality results with fewer artifacts, sharper edges, superior texture areas, and finer detail with low noise.
Schwarz-Based Algorithms for Compressible Flows
NASA Technical Reports Server (NTRS)
Tidriri, M. D.
1996-01-01
We investigate in this paper the application of Schwarz-based algorithms to compressible flows. First we study the combination of these methods with defect-correction procedures. We then study the effect on the Schwarz-based methods of replacing the explicit treatment of the boundary conditions by an implicit one. In the last part of this paper we study the combination of these methods with Newton-Krylov matrix-free methods. Numerical experiments that show the performance of our approaches are then presented.
Framework for Integrating Science Data Processing Algorithms Into Process Control Systems
NASA Technical Reports Server (NTRS)
Mattmann, Chris A.; Crichton, Daniel J.; Chang, Albert Y.; Foster, Brian M.; Freeborn, Dana J.; Woollard, David M.; Ramirez, Paul M.
2011-01-01
A software framework called PCS Task Wrapper is responsible for standardizing the setup, process initiation, execution, and file management tasks surrounding the execution of science data algorithms, which are referred to by NASA as Product Generation Executives (PGEs). PGEs codify a scientific algorithm, some step in the overall scientific process involved in a mission science workflow. The PCS Task Wrapper provides a stable operating environment to the underlying PGE during its execution lifecycle. If the PGE requires a file, or metadata regarding the file, the PCS Task Wrapper is responsible for delivering that information to the PGE in a manner that meets its requirements. If the PGE requires knowledge of upstream or downstream PGEs in a sequence of executions, that information is also made available. Finally, if information regarding disk space, or node information such as CPU availability, etc., is required, the PCS Task Wrapper provides this information to the underlying PGE. After this information is collected, the PGE is executed, and its output Product file and Metadata generation is managed via the PCS Task Wrapper framework. The innovation is responsible for marshalling output Products and Metadata back to a PCS File Management component for use in downstream data processing and pedigree. In support of this, the PCS Task Wrapper leverages the PCS Crawler Framework to ingest (during pipeline processing) the output Product files and Metadata produced by the PGE. The architectural components of the PCS Task Wrapper framework include PGE Task Instance, PGE Config File Builder, Config File Property Adder, Science PGE Config File Writer, and PCS Met file Writer. This innovative framework is really the unifying bridge between the execution of a step in the overall processing pipeline, and the available PCS component services as well as the information that they collectively manage.
Optimization of experimental design in fMRI: a general framework using a genetic algorithm.
Wager, Tor D; Nichols, Thomas E
2003-02-01
This article describes a method for selecting design parameters and a particular sequence of events in fMRI so as to maximize statistical power and psychological validity. Our approach uses a genetic algorithm (GA), a class of flexible search algorithms that optimize designs with respect to single or multiple measures of fitness. Two strengths of the GA framework are that (1) it operates with any sort of model, allowing for very specific parameterization of experimental conditions, including nonstandard trial types and experimentally observed scanner autocorrelation, and (2) it is flexible with respect to fitness criteria, allowing optimization over known or novel fitness measures. We describe how genetic algorithms may be applied to experimental design for fMRI, and we use the framework to explore the space of possible fMRI design parameters, with the goal of providing information about optimal design choices for several types of designs. In our simulations, we considered three fitness measures: contrast estimation efficiency, hemodynamic response estimation efficiency, and design counterbalancing. Although there are inherent trade-offs between these three fitness measures, GA optimization can produce designs that outperform random designs on all three criteria simultaneously. PMID:12595184
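The following bare-bones Python sketch shows the generic GA loop the article builds on (selection, crossover, mutation over candidate event sequences); the fitness function is a stub standing in for the paper's contrast-efficiency, HRF-estimation, and counterbalancing measures, and the operators are illustrative choices.

    # Bare-bones genetic algorithm over event sequences. The fitness
    # callable is a stub; in the paper it would combine contrast
    # efficiency, HRF estimation efficiency, and counterbalancing.
    import random

    def ga_optimize(fitness, n_events, n_types, pop=100, gens=200, p_mut=0.01):
        population = [[random.randrange(n_types) for _ in range(n_events)]
                      for _ in range(pop)]
        for _ in range(gens):
            scored = sorted(population, key=fitness, reverse=True)
            parents = scored[:pop // 2]              # truncation selection
            children = []
            while len(children) < pop - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_events)  # one-point crossover
                child = a[:cut] + b[cut:]
                child = [random.randrange(n_types) if random.random() < p_mut
                         else gene for gene in child]  # point mutation
                children.append(child)
            population = parents + children
        return max(population, key=fitness)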
Utilizing knowledge-base semantics in graph-based algorithms
Darwiche, A.
1996-12-31
Graph-based algorithms convert a knowledge base with a graph structure into one with a tree structure (a join-tree) and then apply tree-inference on the result. Nodes in the join-tree are cliques of variables and tree-inference is exponential in w*, the size of the maximal clique in the join-tree. A central property of join-trees that validates tree-inference is the running-intersection property: the intersection of any two cliques must belong to every clique on the path between them. We present two key results in connection to graph-based algorithms. First, we show that the running-intersection property, although sufficient, is not necessary for validating tree-inference. We present a weaker property for this purpose, called running-interaction, that depends on non-structural (semantical) properties of a knowledge base. We also present a linear algorithm that may reduce w* of a join-tree, possibly destroying its running-intersection property, while maintaining its running-interaction property and, hence, its validity for tree-inference. Second, we develop a simple algorithm for generating trees satisfying the running-interaction property. The algorithm bypasses triangulation (the standard technique for constructing join-trees) and does not construct a join-tree first. We show that the proposed algorithm may in some cases generate trees that are more efficient than those generated by modifying a join-tree.
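A small Python sketch of the running-intersection test described above may be useful; the clique and adjacency representations are assumptions for illustration.

    # Checking the running-intersection property of a candidate join-tree:
    # for every pair of cliques, their shared variables must appear in
    # every clique on the unique tree path between them.
    def has_running_intersection(cliques, adj):
        """cliques: list of variable sets; adj[i]: neighbors of clique i."""
        n = len(cliques)

        def path(u, v):  # unique path in a tree, found by DFS
            stack, seen = [(u, [u])], {u}
            while stack:
                node, p = stack.pop()
                if node == v:
                    return p
                for w in adj[node]:
                    if w not in seen:
                        seen.add(w)
                        stack.append((w, p + [w]))
            return []

        for i in range(n):
            for j in range(i + 1, n):
                shared = cliques[i] & cliques[j]
                if not all(shared <= cliques[k] for k in path(i, j)):
                    return False
        return True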
Rattner, Alexander S.; Guillen, Donna Post; Joshi, Alark; Garimella, Srinivas
2016-03-17
Photo- and physically realistic techniques are often insufficient for visualization of fluid flow simulations, especially for 3D and time-varying studies. Substantial research effort has been dedicated to the development of non-photorealistic and illustration-inspired visualization techniques for compact and intuitive presentation of such complex datasets. However, a great deal of work has been reproduced in this field, as many research groups have developed specialized visualization software. Additionally, interoperability between illustrative visualization software is limited due to diverse processing and rendering architectures employed in different studies. In this investigation, a framework for illustrative visualization is proposed, and implemented in MarmotViz, a ParaView plug-in, enabling its use on a variety of computing platforms with various data file formats and mesh geometries. Region-of-interest identification and feature-tracking algorithms incorporated into this tool are described. Implementations of multiple illustrative effect algorithms are also presented to demonstrate the use and flexibility of this framework. Here, by providing an integrated framework for illustrative visualization of CFD data, MarmotViz can serve as a valuable asset for the interpretation of simulations of ever-growing scale.
A modern solver framework to manage solution algorithms in the Community Earth System Model
Evans, Katherine J; Worley, Patrick H; Nichols, Dr Jeff A; White III, James B; Salinger, Andy; Price, Stephen; Lemieux, Jean-Francois; Lipscomb, William; Perego, Mauro; Vertenstein, Mariana; Edwards, Jim
2012-01-01
Global Earth-system models (ESM) can now produce simulations that resolve ~50 km features and include finer-scale, interacting physical processes. In order to achieve these scale-length solutions, ESMs require smaller time steps, which limits parallel performance. Solution methods that overcome these bottlenecks can be quite intricate, and there is no single set of algorithms that performs well across the range of problems of interest. This creates significant implementation challenges, which are further compounded by the complexity of ESMs. Therefore, prototyping and evaluating new algorithms in these models requires a software framework that is flexible, extensible, and easily introduced into the existing software. We describe our efforts to create a parallel solver framework that links the Trilinos library of solvers to Glimmer-CISM, a continental ice sheet model used in the Community Earth System Model (CESM). We demonstrate this framework within both current and developmental versions of Glimmer-CISM and provide strategies for its integration into the rest of the CESM.
Alexander S. Rattner; Donna Post Guillen; Alark Joshi
2012-12-01
Photo- and physically-realistic techniques are often insufficient for visualization of simulation results, especially for 3D and time-varying datasets. Substantial research efforts have been dedicated to the development of non-photorealistic and illustration-inspired visualization techniques for compact and intuitive presentation of such complex datasets. While these efforts have yielded valuable visualization results, a great deal of work has been reproduced across studies, as individual research groups often develop purpose-built platforms. Additionally, interoperability between illustrative visualization software is limited due to specialized processing and rendering architectures employed in different studies. In this investigation, a generalized framework for illustrative visualization is proposed, and implemented in marmotViz, a ParaView plugin, enabling its use on a variety of computing platforms with various data file formats and mesh geometries. Detailed descriptions of the region-of-interest identification and feature-tracking algorithms incorporated into this tool are provided. Additionally, implementations of multiple illustrative effect algorithms are presented to demonstrate the use and flexibility of this framework. By providing a framework and useful underlying functionality, the marmotViz tool can act as a springboard for future research in the field of illustrative visualization.
A Resampling Based Clustering Algorithm for Replicated Gene Expression Data.
Li, Han; Li, Chun; Hu, Jie; Fan, Xiaodan
2015-01-01
In gene expression data analysis, clustering is a fruitful exploratory technique to reveal the underlying molecular mechanism by identifying groups of co-expressed genes. To reduce the noise, usually multiple experimental replicates are performed. An integrative analysis of the full replicate data, instead of reducing the data to the mean profile, carries the promise of yielding more precise and robust clusters. In this paper, we propose a novel resampling based clustering algorithm for genes with replicated expression measurements. Assuming those replicates are exchangeable, we formulate the problem in the bootstrap framework, and aim to infer the consensus clustering based on the bootstrap samples of replicates. In our approach, we adopt the mixed effect model to accommodate the heterogeneous variances and implement a quasi-MCMC algorithm to conduct statistical inference. Experiments demonstrate that by taking advantage of the full replicate data, our algorithm produces more reliable clusters and has robust performance in diverse scenarios, especially when the data is subject to multiple sources of variance. PMID:26671802
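A schematic Python version of the resampling idea follows: each bootstrap round resamples replicates, clusters the resulting profiles, and votes into a co-clustering matrix. K-means stands in for the paper's mixed-effect model and quasi-MCMC inference, so this is a sketch of the bootstrap-consensus skeleton only.

    # Bootstrap-consensus clustering over experimental replicates.
    import numpy as np
    from sklearn.cluster import KMeans

    def consensus_cluster(data, k, n_boot=100, seed=0):
        """data: array of shape (genes, replicates, timepoints)."""
        rng = np.random.default_rng(seed)
        g, r, _ = data.shape
        co = np.zeros((g, g))
        for _ in range(n_boot):
            idx = rng.integers(0, r, size=r)         # resample replicates
            profiles = data[:, idx, :].mean(axis=1)
            labels = KMeans(n_clusters=k, n_init=10).fit_predict(profiles)
            co += labels[:, None] == labels[None, :]
        co /= n_boot                                  # co-clustering rates
        final = KMeans(n_clusters=k, n_init=10).fit_predict(co)
        return final, co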
Orthogonalizing EM: A design-based least squares algorithm
Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z. G.
2016-01-01
We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For the ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p. Supplementary materials for this article are available online. PMID:27499558
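The OEM update for ordinary least squares can be written in a few lines, since the orthogonalizing rows never need to be formed explicitly; the sketch below is a minimal illustration under the assumption d >= largest eigenvalue of X'X, not the authors' optimized implementation.

    # Minimal OEM iteration for ordinary least squares: the augmented rows
    # that orthogonalize the design only enter through X'X and X'y.
    import numpy as np

    def oem_ols(X, y, n_iter=500):
        XtX, Xty = X.T @ X, X.T @ y
        d = np.linalg.eigvalsh(XtX)[-1]       # largest eigenvalue of X'X
        beta = np.zeros(X.shape[1])
        for _ in range(n_iter):
            # E- and M-step combined; from a zero start this converges to
            # the minimum-norm (Moore-Penrose) least squares solution,
            # matching the abstract's claim for singular designs.
            beta = beta + (Xty - XtX @ beta) / d
        return beta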
Automated DNA Base Pair Calling Algorithm
1999-07-07
The procedure solves the problem of calling the DNA base pair sequence from two channel electropherogram separations in an automated fashion. The core of the program involves a peak picking algorithm based upon first, second, and third derivative spectra for each electropherogram channel, signal levels as a function of time, peak spacing, base pair signal to noise sequence patterns, frequency vs ratio of the two channel histograms, and confidence levels generated during the run. The ratios of the two channels at peak centers can be used to accurately and reproducibly determine the base pair sequence. A further enhancement is a novel Gaussian deconvolution used to determine the peak heights used in generating the ratio.
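A toy Python version of derivative-based peak picking on one channel is sketched below; the smoothing window and height threshold are invented parameters, and the real program additionally uses third derivatives, peak spacing, and confidence levels.

    # Illustrative derivative-based peak picker: a peak center is flagged
    # where the smoothed first derivative crosses zero going negative and
    # the second derivative is negative, subject to a signal threshold.
    import numpy as np

    def pick_peaks(signal, min_height=50.0, smooth=5):
        kernel = np.ones(smooth) / smooth
        s = np.convolve(signal, kernel, mode="same")   # light smoothing
        d1 = np.gradient(s)
        d2 = np.gradient(d1)
        peaks = []
        for i in range(1, len(s) - 1):
            if d1[i - 1] > 0 >= d1[i] and d2[i] < 0 and s[i] >= min_height:
                peaks.append(i)
        return peaks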
An algorithm for hyperspectral remote sensing of aerosols: 1. Development of theoretical framework
NASA Astrophysics Data System (ADS)
Hou, Weizhen; Wang, Jun; Xu, Xiaoguang; Reid, Jeffrey S.; Han, Dong
2016-07-01
This paper describes the first part of a series of investigations to develop algorithms for simultaneous retrieval of aerosol parameters and surface reflectance from a newly developed hyperspectral instrument, the GEOstationary Trace gas and Aerosol Sensor Optimization (GEO-TASO), by taking full advantage of available hyperspectral measurement information in the visible bands. We describe the theoretical framework of an inversion algorithm for the hyperspectral remote sensing of the aerosol optical properties, in which major principal components (PCs) for surface reflectance are assumed known, and the spectrally dependent aerosol refractive indices are assumed to follow a power-law approximation with four unknown parameters (two for the real and two for the imaginary part of the refractive index). New capabilities for computing the Jacobians of four Stokes parameters of reflected solar radiation at the top of the atmosphere with respect to these unknown aerosol parameters and the weighting coefficients for each PC of surface reflectance are added into the UNified Linearized Vector Radiative Transfer Model (UNL-VRTM), which in turn facilitates the optimization in the inversion process. Theoretical derivations of the formulas for these new capabilities are provided, and the analytical solutions of Jacobians are validated against the finite-difference calculations with relative error less than 0.2%. Finally, a self-consistency check of the inversion algorithm is conducted for the idealized green-vegetation and rangeland surfaces that were spectrally characterized by the U.S. Geological Survey digital spectral library. It shows that the first six PCs can yield the reconstruction of spectral surface reflectance with errors less than 1%. Assuming that aerosol properties can be accurately characterized, the inversion yields a retrieval of hyperspectral surface reflectance with an uncertainty of 2% (and root-mean-square error of less than 0.003), which suggests self-consistency in the
An archetype-based testing framework.
Chen, Rong; Garde, Sebastian; Beale, Thomas; Nyström, Mikael; Karlsson, Daniel; Klein, Gunnar O; Ahlfeldt, Hans
2008-01-01
With the introduction of EHR two-level modelling and archetype methodologies pioneered by openEHR and standardized by CEN/ISO, we are one step closer to semantic interoperability and future-proof adaptive healthcare information systems. Along with the opportunities, there are also challenges. Archetypes provide the full semantics of EHR data explicitly to surrounding systems in a platform-independent way, yet it is up to the receiving system to interpret the semantics and process the data accordingly. In this paper we propose a design of an archetype-based platform-independent testing framework for validating implementations of the openEHR archetype formalism as a means of improving quality and interoperability of EHRs. PMID:18487764
Differential Search Algorithm Based Edge Detection
NASA Astrophysics Data System (ADS)
Gunen, M. A.; Civicioglu, P.; Beşdok, E.
2016-06-01
In this paper, a new method is presented for the extraction of edge information by using the Differential Search Optimization Algorithm. The proposed method is based on using a new heuristic image thresholding method for edge detection. The success of the proposed method has been examined on the fusion of two remotely sensed images. The applicability of the proposed method to edge detection and image fusion problems has been analysed in detail, and the empirical results show that the proposed method is useful for solving these problems.
Binary image authentication based on watermarking algorithm
NASA Astrophysics Data System (ADS)
Masoodifar, Behrang; Hashemi, S. Mojtaba; Zarei, Omid
2011-06-01
A digital image watermark embedding and extraction algorithm is presented based on the Finite Ridgelet Transform (FRT), which can efficiently represent images with linear singularities. The ridgelet transform also has directional sensitivity, so that among the transformed coefficients the most significant one represents the most energetic direction of straight edges in an image. In this paper, the effect of the ridgelet transform is compared with that of the wavelet transform in a watermarking application. In the experiments, noise at different PSNR levels is added to the watermarked image, and the results demonstrate both robustness and transparency.
Telemedicine framework using case-based reasoning with evidences.
Sene, A; Kamsu-Foguem, B; Rumeau, P
2015-08-01
Telemedicine is the practice of medicine through information exchanged from one location to another via electronic communications, with the aim of improving the delivery of health care services. This research article describes a telemedicine framework with knowledge engineering using taxonomic reasoning of ontology modeling and semantic similarity. In addition to being a precious support in the procedure of medical decision-making, this framework can be used to strengthen the significant collaborations and traceability that are important for the development of official deployment of telemedicine applications. Adequate mechanisms for information management with traceability of the reasoning process are also essential in the fields of epidemiology and public health. In this paper we enrich the case-based reasoning process by taking into account former evidence-based knowledge. We use the regular four-step approach and implement an additional step (iii): (i) establish diagnosis, (ii) retrieve treatment, (iii) apply evidence, (iv) adaptation, (v) retain. Each step is performed using tools from knowledge engineering and information processing (natural language processing, ontology, indexation, algorithms, etc.). The case representation is done by the taxonomy component of a medical ontology model. The proposed approach is illustrated with an example from the oncology domain. Medical ontology allows good and efficient modeling of the patient and his treatment. We highlight the role of evidence and specialists' opinions in the effectiveness and safety of care. PMID:26001421
MIRA: mutual information-based reporter algorithm for metabolic networks
Cicek, A. Ercument; Roeder, Kathryn; Ozsoyoglu, Gultekin
2014-01-01
Motivation: Discovering the transcriptional regulatory architecture of the metabolism has been an important topic to understand the implications of transcriptional fluctuations on metabolism. The reporter algorithm (RA) was proposed to determine the hot spots in metabolic networks, around which transcriptional regulation is focused owing to a disease or a genetic perturbation. Using a z-score-based scoring scheme, RA calculates the average statistical change in the expression levels of genes that are neighbors to a target metabolite in the metabolic network. The RA approach has been used in numerous studies to analyze cellular responses to the downstream genetic changes. In this article, we propose a mutual information-based multivariate reporter algorithm (MIRA) with the goal of eliminating the following problems in detecting reporter metabolites: (i) conventional statistical methods suffer from small sample sizes, (ii) as z-score ranges from minus to plus infinity, calculating average scores can lead to canceling out opposite effects and (iii) analyzing genes one by one, then aggregating results can lead to information loss. MIRA is a multivariate and combinatorial algorithm that calculates the aggregate transcriptional response around a metabolite using mutual information. We show that MIRA’s results are biologically sound, empirically significant and more reliable than RA. Results: We apply MIRA to gene expression analysis of six knockout strains of Escherichia coli and show that MIRA captures the underlying metabolic dynamics of the switch from aerobic to anaerobic respiration. We also apply MIRA to an Autism Spectrum Disorder gene expression dataset. Results indicate that MIRA reports metabolites that highly overlap with recently found metabolic biomarkers in the autism literature. Overall, MIRA is a promising algorithm for detecting metabolic drug targets and understanding the relation between gene expression and metabolic activity. Availability and
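The following Python sketch illustrates the core idea of scoring a metabolite's neighborhood with mutual information instead of averaged z-scores; the histogram-based MI estimator and the mean-signature aggregation are simplifying assumptions, not MIRA's actual combinatorial procedure.

    # Score a metabolite by the mutual information between an aggregate of
    # its neighboring genes' expression and the condition label.
    import numpy as np

    def mutual_information(x, y, bins=4):
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        p = joint / joint.sum()
        px = p.sum(axis=1, keepdims=True)
        py = p.sum(axis=0, keepdims=True)
        nz = p > 0
        return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

    def reporter_score(expr, neighbor_genes, condition):
        """expr: dict gene -> array over samples; condition: 0/1 labels."""
        signature = np.mean([expr[g] for g in neighbor_genes], axis=0)
        return mutual_information(signature, np.asarray(condition, float))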
NASA Astrophysics Data System (ADS)
Babbar-Sebens, M.; Minsker, B. S.
2006-12-01
In the water resources management field, decision making encompasses many kinds of engineering, social, and economic constraints and objectives. Representing all of these problem-dependent criteria through models (analytical or numerical) and various formulations (e.g., objectives, constraints, etc.) within an optimization-simulation system can be a very non-trivial issue. Most models and formulations utilized for discerning desirable traits in a solution can only approximate the decision maker's (DM) true preference criteria, and they often fail to consider important qualitative and incomputable phenomena related to the management problem. In our research, we have proposed novel decision support frameworks that allow DMs to actively participate in the optimization process. The DMs explicitly indicate their true preferences based on their subjective criteria and the results of various simulation models and formulations. The feedback from the DMs is then used to guide the search process towards solutions that are "all-rounders" from the perspective of the DM. The two main research questions explored in this work are: a) does interaction between the optimization algorithm and a DM assist the system in searching for groundwater monitoring designs that are robust from the DM's perspective? and b) how can an interactive search process be made more effective when human factors, such as human fatigue and cognitive learning processes, affect the performance of the algorithm? The application of these frameworks to a real-world groundwater long-term monitoring (LTM) case study in Michigan highlighted the following salient advantages: a) in contrast to the non-interactive optimization methodology, the proposed interactive frameworks were able to identify low cost monitoring designs whose interpolation maps respected the expected spatial distribution of the contaminants, b) for many same-cost designs, the interactive methodologies were able to propose multiple alternatives
A new CT metal artifacts reduction algorithm based on fractional-order sinogram inpainting.
Zhang, Yi; Pu, Yi-Fei; Hu, Jin-Rong; Liu, Yan; Zhou, Ji-Liu
2011-01-01
In this paper, we propose a new metal artifacts reduction algorithm based on fractional-order total-variation sinogram inpainting model for X-ray computed tomography (CT). The numerical algorithm for our fractional-order framework is also analyzed. Simulations show that, both quantitatively and qualitatively, our method is superior to conditional interpolation methods and the classic integral-order total variation model. PMID:21876286
Benchmarking framework for myocardial tracking and deformation algorithms: an open access database.
Tobon-Gomez, C; De Craene, M; McLeod, K; Tautz, L; Shi, W; Hennemuth, A; Prakosa, A; Wang, H; Carr-White, G; Kapetanakis, S; Lutz, A; Rasche, V; Schaeffter, T; Butakoff, C; Friman, O; Mansi, T; Sermesant, M; Zhuang, X; Ourselin, S; Peitgen, H-O; Pennec, X; Razavi, R; Rueckert, D; Frangi, A F; Rhode, K S
2013-08-01
In this paper we present a benchmarking framework for the validation of cardiac motion analysis algorithms. The reported methods are the response to an open challenge that was issued to the medical imaging community through a MICCAI workshop. The database included magnetic resonance (MR) and 3D ultrasound (3DUS) datasets from a dynamic phantom and 15 healthy volunteers. Participants processed 3D tagged MR datasets (3DTAG), cine steady state free precession MR datasets (SSFP) and 3DUS datasets, amounting to 1158 image volumes. Ground-truth for motion tracking was based on 12 landmarks (4 walls at 3 ventricular levels). They were manually tracked by two observers in the 3DTAG data over the whole cardiac cycle, using an in-house application with 4D visualization capabilities. The median of the inter-observer variability was computed for the phantom dataset (0.77 mm) and for the volunteer datasets (0.84 mm). The ground-truth was registered to 3DUS coordinates using a point based similarity transform. Four institutions responded to the challenge by providing motion estimates for the data: Fraunhofer MEVIS (MEVIS), Bremen, Germany; Imperial College London - University College London (IUCL), UK; Universitat Pompeu Fabra (UPF), Barcelona, Spain; Inria-Asclepios project (INRIA), France. Details on the implementation and evaluation of the four methodologies are presented in this manuscript. The manually tracked landmarks were used to evaluate tracking accuracy of all methodologies. For 3DTAG, median values were computed over all time frames for the phantom dataset (MEVIS=1.20mm, IUCL=0.73 mm, UPF=1.10mm, INRIA=1.09 mm) and for the volunteer datasets (MEVIS=1.33 mm, IUCL=1.52 mm, UPF=1.09 mm, INRIA=1.32 mm). For 3DUS, median values were computed at end diastole and end systole for the phantom dataset (MEVIS=4.40 mm, UPF=3.48 mm, INRIA=4.78 mm) and for the volunteer datasets (MEVIS=3.51 mm, UPF=3.71 mm, INRIA=4.07 mm). For SSFP, median values were computed at end diastole and
PDE Based Algorithms for Smooth Watersheds.
Hodneland, Erlend; Tai, Xue-Cheng; Kalisch, Henrik
2016-04-01
Watershed segmentation is useful for a number of image segmentation problems with a wide range of practical applications. Traditionally, the tracking of the immersion front is done by applying a fast sorting algorithm. In this work, we explore a continuous approach based on a geometric description of the immersion front which gives rise to a partial differential equation. The main advantage of using a partial differential equation to track the immersion front is that the method becomes versatile and may easily be stabilized by introducing regularization terms. Coupling the geometric approach with a proper "merging strategy" creates a robust algorithm which minimizes over- and under-segmentation even without predefined markers. Since reliable markers defined prior to segmentation can be difficult to construct automatically for various reasons, being able to treat marker-free situations is a major advantage of the proposed method over earlier watershed formulations. The motivation for the methods developed in this paper is taken from high-throughput screening of cells. A fully automated segmentation of single cells enables the extraction of cell properties from large data sets, which can provide substantial insight into a biological model system. Applying smoothing to the boundaries can improve the accuracy in many image analysis tasks requiring a precise delineation of the plasma membrane of the cell. The proposed segmentation method is applied to real images containing fluorescently labeled cells, and the experimental results show that our implementation is robust and reliable for a variety of challenging segmentation tasks. PMID:26625408
Speech Enhancement based on Compressive Sensing Algorithm
NASA Astrophysics Data System (ADS)
Sulong, Amart; Gunawan, Teddy S.; Khalifa, Othman O.; Chebil, Jalel
2013-12-01
Various methods of speech enhancement have been proposed over the years, with design focused mainly on quality and intelligibility. The novel speech enhancement approach proposed here uses compressive sensing (CS), a new paradigm for acquiring signals that is fundamentally different from uniform-rate digitization followed by compression, often used for transmission or storage. CS can reduce the number of degrees of freedom of a sparse/compressible signal by permitting only certain configurations of large and zero/small coefficients and structured sparsity models. CS therefore provides a way of reconstructing a compressed version of the speech in the original signal from only a small number of linear and non-adaptive measurements. The overall algorithm is evaluated on speech quality using informal listening tests and the Perceptual Evaluation of Speech Quality (PESQ). Experimental results show that the CS algorithm performs very well across a wide range of speech tests, giving good speech enhancement performance with better noise suppression than conventional approaches and without obvious degradation of speech quality.
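As a hedged illustration of CS recovery from a small number of linear measurements, the sketch below implements orthogonal matching pursuit in numpy; the paper's actual solver and dictionary are not specified here, so Phi and the sparsity level are assumptions.

    # Orthogonal matching pursuit: greedily select atoms of Phi that best
    # correlate with the residual, re-fitting by least squares each round.
    import numpy as np

    def omp(Phi, y, sparsity):
        residual, support = y.copy(), []
        for _ in range(sparsity):
            corr = np.abs(Phi.T @ residual)
            corr[support] = -np.inf               # do not re-pick atoms
            support.append(int(np.argmax(corr)))
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        x = np.zeros(Phi.shape[1])
        x[support] = coef
        return x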
Tile-Based Two-Dimensional Phase Unwrapping for Digital Holography Using a Modular Framework.
Antonopoulos, Georgios C; Steltner, Benjamin; Heisterkamp, Alexander; Ripken, Tammo; Meyer, Heiko
2015-01-01
A variety of physical and biomedical imaging techniques, such as digital holography, interferometric synthetic aperture radar (InSAR), or magnetic resonance imaging (MRI) enable measurement of the phase of a physical quantity additionally to its amplitude. However, the phase can commonly only be measured modulo 2π, as a so called wrapped phase map. Phase unwrapping is the process of obtaining the underlying physical phase map from the wrapped phase. Tile-based phase unwrapping algorithms operate by first tessellating the phase map, then unwrapping individual tiles, and finally merging them to a continuous phase map. They can be implemented computationally efficiently and are robust to noise. However, they are prone to failure in the presence of phase residues or erroneous unwraps of single tiles. We tried to overcome these shortcomings by creating novel tile unwrapping and merging algorithms as well as creating a framework that allows to combine them in modular fashion. To increase the robustness of the tile unwrapping step, we implemented a model-based algorithm that makes efficient use of linear algebra to unwrap individual tiles. Furthermore, we adapted an established pixel-based unwrapping algorithm to create a quality guided tile merger. These original algorithms as well as previously existing ones were implemented in a modular phase unwrapping C++ framework. By examining different combinations of unwrapping and merging algorithms we compared our method to existing approaches. We could show that the appropriate choice of unwrapping and merging algorithms can significantly improve the unwrapped result in the presence of phase residues and noise. Beyond that, our modular framework allows for efficient design and test of new tile-based phase unwrapping algorithms. The software developed in this study is freely available. PMID:26599984
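A one-dimensional toy version of the tile pipeline can convey the idea: unwrap each tile independently, then merge neighbors by removing the multiple-of-2π offset seen at their shared border. Real tiles are two-dimensional and the merger is quality-guided, so this Python sketch is only the skeleton.

    # Tile-based unwrapping in 1D: unwrap tiles with numpy, then align
    # each tile to its predecessor by a multiple of 2*pi.
    import numpy as np

    def unwrap_tiles_1d(wrapped, tile=256):
        tiles = [np.unwrap(wrapped[i:i + tile])
                 for i in range(0, len(wrapped), tile)]
        out = [tiles[0]]
        for t in tiles[1:]:
            # Offset estimated from the jump across the tile boundary,
            # rounded to the nearest multiple of 2*pi.
            jump = out[-1][-1] - t[0]
            k = np.round(jump / (2 * np.pi))
            out.append(t + 2 * np.pi * k)
        return np.concatenate(out)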
An example-based brain MRI simulation framework
NASA Astrophysics Data System (ADS)
He, Qing; Roy, Snehashis; Jog, Amod; Pham, Dzung L.
2015-03-01
The simulation of magnetic resonance (MR) images plays an important role in the validation of image analysis algorithms such as image segmentation, due to the lack of sufficient ground truth in real MR images. Previous work on MRI simulation has focused on explicitly modeling the MR image formation process. However, because of the overwhelming complexity of MR acquisition, these simulations must involve simplifications and approximations that can result in visually unrealistic simulated images. In this work, we describe an example-based simulation framework, which uses an "atlas" consisting of an MR image and its anatomical models derived from the hard segmentation. The relationships between the MR image intensities and its anatomical models are learned using a patch-based regression that implicitly models the physics of the MR image formation. Given the anatomical models of a new brain, a new MR image can be simulated using the learned regression. This approach has been extended to also simulate intensity inhomogeneity artifacts based on the statistical model of training data. Results show that the example-based MRI simulation method is capable of simulating different image contrasts and is robust to different choices of atlas. The simulated images resemble real MR images more than simulations produced by a physics-based model.
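A minimal Python sketch of the patch-based regression idea follows, with k-nearest-neighbor regression standing in for the paper's learned regression; the patch size and k value are assumptions.

    # Learn a mapping from anatomical-model patches to MR intensities on
    # the atlas, then predict intensities for a new subject's model.
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    def extract_patches(vol, radius=1):
        r = radius
        pad = np.pad(vol, r, mode="edge")
        idx = [(i, j, k) for i in range(vol.shape[0])
               for j in range(vol.shape[1]) for k in range(vol.shape[2])]
        return np.array([pad[i:i + 2*r + 1,
                             j:j + 2*r + 1,
                             k:k + 2*r + 1].ravel() for i, j, k in idx])

    def simulate(atlas_model, atlas_mr, new_model, k=5):
        X, y = extract_patches(atlas_model), atlas_mr.ravel()
        reg = KNeighborsRegressor(n_neighbors=k).fit(X, y)
        return reg.predict(extract_patches(new_model)).reshape(new_model.shape)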
A Support Vector Machine Blind Equalization Algorithm Based on Immune Clone Algorithm
NASA Astrophysics Data System (ADS)
Yecai, Guo; Rui, Ding
To address the effect of the parameter selection method of the support vector machine (SVM) on its application to blind equalization, an SVM constant modulus blind equalization algorithm based on the immune clone selection algorithm (CSA-SVM-CMA) is proposed. In the proposed algorithm, the immune clone algorithm is used to optimize the parameters of the SVM, on the basis of its advantages in preventing evolutionary precocity, avoiding local optima, and converging quickly. The proposed algorithm improves the parameter selection efficiency of the SVM constant modulus blind equalization algorithm (SVM-CMA) and overcomes the defect of manually set parameters. Accordingly, the CSA-SVM-CMA has a faster convergence rate and smaller mean square error than the SVM-CMA. Computer simulations in underwater acoustic channels have proved the validity of the algorithm.
Comparison of cone beam artifacts reduction: two pass algorithm vs TV-based CS algorithm
NASA Astrophysics Data System (ADS)
Choi, Shinkook; Baek, Jongduk
2015-03-01
In a cone beam computed tomography (CBCT), the severity of the cone beam artifacts increases as the cone angle increases. To reduce the cone beam artifacts, several modified FDK algorithms and compressed sensing based iterative algorithms have been proposed. In this paper, we used the two pass algorithm and the Gradient-Projection-Barzilai-Borwein (GPBB) algorithm to reduce the cone beam artifacts, and compared their performance using the structural similarity (SSIM) index. In the two pass algorithm, it is assumed that the cone beam artifacts are mainly caused by extreme-density (ED) objects, and therefore the algorithm reproduces the cone beam artifacts (i.e., error image) produced by ED objects and then subtracts it from the original image. The GPBB algorithm is a compressed sensing based iterative algorithm which minimizes an energy function for calculating the gradient projection with the step size determined by the Barzilai-Borwein formulation; therefore it can estimate missing data caused by the cone beam artifacts. To evaluate the performance of the two algorithms, we used testing objects consisting of 7 ellipsoids separated along the z direction, and cone beam artifacts were generated using a 30-degree cone angle. Even though the FDK algorithm produced severe cone beam artifacts with a large cone angle, the two pass algorithm reduced the cone beam artifacts with small residual errors caused by inaccuracy of the ED objects. In contrast, the GPBB algorithm completely removed the cone beam artifacts and restored the original shape of the objects.
Transaction-Based Building Controls Framework, Volume 1: Reference Guide
Somasundaram, Sriram; Pratt, Robert G.; Akyol, Bora A.; Fernandez, Nicholas; Foster, Nikolas AF; Katipamula, Srinivas; Mayhorn, Ebony T.; Somani, Abhishek; Steckley, Andrew C.; Taylor, Zachary T.
2014-04-28
This document proposes a framework concept to achieve the objectives of raising buildings' efficiency and energy savings potential, benefitting building owners and operators. We call it a transaction-based framework, wherein mutually-beneficial and cost-effective market-based transactions can be enabled between multiple players across different domains. Transaction-based building controls are one part of the transactional energy framework. While these controls realize benefits by enabling automatic, market-based intra-building efficiency optimizations, the transactional energy framework provides similar benefits using the same market-based structure, yet on a larger scale and beyond just buildings, to the society at large.
Genetic algorithm-based form error evaluation
NASA Astrophysics Data System (ADS)
Cui, Changcai; Li, Bing; Huang, Fugui; Zhang, Rencheng
2007-07-01
Form error evaluation of geometrical products is a nonlinear optimization problem, for which solutions have been attempted by different methods of some complexity. A genetic algorithm (GA) was developed to deal with the problem, which proved simple to understand and realize, and its key techniques have been investigated in detail. Firstly, the fitness function of the GA was discussed emphatically, as a bridge between the GA and the concrete problems to be solved. Secondly, the real-numbers-based representation of the desired solutions in the continuous-space optimization problem was discussed. Thirdly, several improved evolutionary strategies of the GA were described with emphasis. These evolutionary strategies were the selection operation of 'odd number selection plus roulette wheel selection', the crossover operation of 'arithmetic crossover between near relatives and far relatives' and the mutation operation of 'adaptive Gaussian' mutation. After evolving from generation to generation with these evolutionary strategies, the initial population, produced stochastically around the least-squared solutions of the problem, would be updated and improved iteratively until the best chromosome or individual of the GA appeared. Finally, some examples were given to verify the evolutionary method. Experimental results show that the GA-based method can find desired solutions that are superior to the least-squared solutions, except for a few examples in which the GA-based method obtains results similar to those of the least-squared method. Compared with other optimization techniques, the GA-based method can obtain almost equal results but with less complicated models and computation time.
Cranial-base surgery: a reconstructive algorithm.
Georgantopoulou, A; Hodgkinson, P D; Gerber, C J
2003-01-01
Skull-base surgery is associated with a high risk of cerebrospinal fluid (CSF) leak, infection, and functional and aesthetic deformity. Appropriate reconstruction of cranial-base defects following surgery helps to prevent these complications. Between March 1998 and May 2000, 28 patients (age: 1-68 years) underwent reconstruction of the anterior and middle cranial fossae. The indications for surgery were tumours, trauma involving the anterior cranial fossa, midline dermoid cysts with intracranial extension, late post-traumatic CSF leak, craniofacial deformity and recurrent frontal mucocoele. We used local anteriorly based pericranial flaps (23 flaps, alone or in combination with other flaps), bipedicled galeal flaps (seven patients) and free flaps (nine patients; radial forearm fascial/fasciocutaneous flaps, rectus abdominis muscle flap and latissimus dorsi muscle flap). Follow-up has been 4-24 months. We had no deaths, no flap failure and no incidence of infection. Complications included two CSF leaks, three intracranial haematomas and one pulsatile enophthalmos. All patients had a very good aesthetic result. We present an algorithm for skull-base reconstruction and comment on the design and vascularity of the bipedicled galeal flap. The monitoring of intracranial flaps and the difficulties of perioperative management of free flaps in neurosurgical patients are also discussed. PMID:12706142
Implicit function-based phantoms for evaluation of registration algorithms
NASA Astrophysics Data System (ADS)
Gopalakrishnan, Girish; Poston, Timothy; Nagaraj, Nithin; Mullick, Rakesh; Knoplioch, Jerome
2005-04-01
Medical image fusion is increasingly enhancing diagnostic accuracy by synergizing information from multiple images, obtained by the same modality at different times or from complementary modalities such as structural information from CT and functional from PET. An active, crucial research topic in fusion is validation of the registration (point-to-point correspondence) used. Phantoms and other simulated studies are useful in the absence of, or as a preliminary to, definitive clinical tests. Software phantoms in particular have the added advantages of robustness, repeatability and reproducibility. Our virtual-lung-phantom-based scheme can test the accuracy of any registration algorithm and is flexible enough for added levels of complexity (addition of blur/anti-alias, rotate/warp, and modality-associated noise) to help evaluate the robustness of an image registration/fusion methodology. Such a framework extends easily to different anatomies. The feature of adding software-based fiducials both within and outside simulated anatomies proves more beneficial than experiments using data from external fiducials on a patient. It would help the diagnosing clinician make a prudent choice of registration algorithm.
An open-source framework for stress-testing non-invasive foetal ECG extraction algorithms.
Andreotti, Fernando; Behar, Joachim; Zaunseder, Sebastian; Oster, Julien; Clifford, Gari D
2016-05-01
avoided. Data, extraction algorithms and evaluation routines were released as part of the fecgsyn toolbox on Physionet under a GNU GPL open-source license. This contribution provides a standard framework for benchmarking and regulatory testing of NI-FECG extraction algorithms. PMID:27067286
NASA Astrophysics Data System (ADS)
Bolton, Adam S.; Schlegel, David J.
2010-02-01
We describe a new algorithm for the "perfect" extraction of one-dimensional (1D) spectra from two-dimensional (2D) digital images of optical fiber spectrographs, based on accurate 2D forward modeling of the raw pixel data. The algorithm is correct for arbitrarily complicated 2D point-spread functions (PSFs), as compared to the traditional optimal extraction algorithm, which is only correct for a limited class of separable PSFs. The algorithm results in statistically independent extracted samples in the 1D spectrum, and preserves the full native resolution of the 2D spectrograph without degradation. Both the statistical errors and the 1D resolution of the extracted spectrum are accurately determined, allowing a correct χ2 comparison of any model spectrum with the data. Using a model PSF similar to that found in the red channel of the Sloan Digital Sky Survey spectrograph, we compare the performance of our algorithm to that of cross-section based optimal extraction, and also demonstrate that our method allows coaddition and foreground estimation to be carried out as an integral part of the extraction step. This work demonstrates the feasibility of current and next-generation multifiber spectrographs for faint-galaxy surveys even in the presence of strong night-sky foregrounds. We describe the handling of subtleties arising from fiber-to-fiber cross talk, discuss some of the likely challenges in deploying our method to the analysis of a full-scale survey, and note that our algorithm could be generalized into an optimal method for the rectification and combination of astronomical imaging data.
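The forward-modeling step reduces to a weighted least-squares solve once the PSF model matrix is built; the Python sketch below shows that reduction with dense linear algebra, whereas a real pipeline would exploit sparsity (and the paper additionally decorrelates the extracted samples).

    # Schematic forward-model extraction: the raw 2D frame, flattened, is
    # modeled as A f + noise, where column j of A is the pixelized 2D PSF
    # of spectral bin j and f is the 1D spectrum.
    import numpy as np

    def extract_spectrum(image, A, ivar):
        """image: flattened raw pixels (n_pix,); A: (n_pix, n_bins) model
        matrix; ivar: per-pixel inverse variance (n_pix,)."""
        lhs = A.T @ (ivar[:, None] * A)    # inverse covariance of the flux
        rhs = A.T @ (ivar * image)
        flux = np.linalg.solve(lhs, rhs)   # chi^2-minimizing 1D spectrum
        cov = np.linalg.inv(lhs)           # errors of extracted samples
        return flux, cov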
NASA Astrophysics Data System (ADS)
Zheng, Genrang; Lin, ZhengChun
The problem of winner determination in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines the First Suite Heuristic Algorithm (FSHA) and the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem, building on the theories of AFSA. Experimental results show that the HAFSA is a rapid and efficient algorithm for the winner determination problem. Compared with the Ant Colony Optimization Algorithm, it shows good performance with broad and promising applications.
A Collaborative Recommend Algorithm Based on Bipartite Community
Fu, Yuchen; Liu, Quan; Cui, Zhiming
2014-01-01
The recommendation algorithm based on bipartite networks is superior to traditional methods in accuracy and diversity, which proves that considering the network topology of recommendation systems can help us to improve recommendation results. However, existing algorithms mainly focus on the overall topology structure, while local characteristics can also play an important role in collaborative recommendation processing. Therefore, on account of the data characteristics and application requirements of collaborative recommendation systems, we propose a link community partitioning algorithm based on label propagation and a collaborative recommendation algorithm based on the bipartite community. We then designed numerical experiments to verify the algorithms' validity on benchmark and real databases. PMID:24955393
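The label-propagation component can be sketched compactly in Python; the adjacency representation and tie-breaking rule are assumptions, and applying the pass to link (line-graph) nodes rather than vertices is what yields link communities.

    # Label propagation: every node repeatedly adopts the label most
    # common among its neighbors until labels stabilize.
    import random
    from collections import Counter

    def label_propagation(adj, max_iters=100, seed=0):
        """adj: dict node -> iterable of neighboring nodes."""
        rng = random.Random(seed)
        labels = {v: v for v in adj}              # unique initial labels
        nodes = list(adj)
        for _ in range(max_iters):
            rng.shuffle(nodes)
            changed = False
            for v in nodes:
                if not adj[v]:
                    continue
                counts = Counter(labels[u] for u in adj[v])
                best = max(counts.values())
                choice = rng.choice([l for l, c in counts.items()
                                     if c == best])  # break ties randomly
                if labels[v] != choice:
                    labels[v], changed = choice, True
            if not changed:
                break
        return labels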
A Framework for Socio-Scientific Issues Based Education
ERIC Educational Resources Information Center
Presley, Morgan L.; Sickel, Aaron J.; Muslu, Nilay; Merle-Johnson, Dominike; Witzig, Stephen B.; Izci, Kemal; Sadler, Troy D.
2013-01-01
Science instruction based on student exploration of socio-scientific issues (SSI) has been presented as a powerful strategy for supporting science learning and the development of scientific literacy. This paper presents an instructional framework for SSI based education. The framework is based on a series of research studies conducted in a diverse…
A New Aloha Anti-Collision Algorithm Based on CDMA
NASA Astrophysics Data System (ADS)
Bai, Enjian; Feng, Zhu
Tag collision is a common problem in RFID (radio frequency identification) systems. It affects the integrity of data transmission during communication in an RFID system. Based on an analysis of existing anti-collision algorithms, a novel anti-collision algorithm is presented. The new algorithm combines the group dynamic frame slotted Aloha algorithm with code division multiple access technology. The algorithm can effectively reduce the collision probability between tags. For the same number of tags, the algorithm is effective in reducing the reader recognition time and improving the overall system throughput rate.
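A toy Monte Carlo sketch in Python conveys why adding codes helps: a tag is lost only when another tag picks both the same slot and the same code. The frame size and code count below are free parameters for illustration, not values from the paper.

    # Framed slotted Aloha with CDMA codes: count tags read per frame.
    import random

    def read_cycle(n_tags, n_slots, n_codes, seed=0):
        rng = random.Random(seed)
        slots = {}
        for tag in range(n_tags):
            key = (rng.randrange(n_slots), rng.randrange(n_codes))
            slots.setdefault(key, []).append(tag)
        # A tag is identified only if it is alone in its (slot, code) pair.
        return sum(1 for tags in slots.values() if len(tags) == 1)

    # Example: with 100 tags and 64 slots, adding codes multiplies the
    # number of collision-free slot-code pairs.
    print(read_cycle(100, 64, 1), read_cycle(100, 64, 4))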
A new frame-based registration algorithm.
Yan, C H; Whalen, R T; Beaupre, G S; Sumanaweera, T S; Yen, S Y; Napel, S
1998-01-01
This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p < or = 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required. PMID:9472834
sp3-hybridized framework structure of group-14 elements discovered by genetic algorithm
Nguyen, Manh Cuong; Zhao, Xin; Wang, Cai-Zhuang; Ho, Kai-Ming
2014-05-01
Group-14 elements, including C, Si, Ge, and Sn, can form various stable and metastable structures. Finding new metastable structures of group-14 elements with desirable physical properties for new technological applications has attracted a lot of interest. Using a genetic algorithm, we discovered a new low-energy metastable distorted sp3-hybridized framework structure of the group-14 elements. It has P42/mnm symmetry with 12 atoms per unit cell. The void volume of this structure is as large as 139.7 Å3 for Si P42/mnm, and it can be used for gas or metal-atom encapsulation. Band-structure calculations show that the P42/mnm structures of Si and Ge are semiconducting with energy band gaps close to the optimal values for optoelectronic or photovoltaic applications. With metal-atom encapsulation, the P42/mnm structure would also be a candidate for rattling-mediated superconductivity or for use as a thermoelectric material.
Generic XML-based framework for metadata portals
NASA Astrophysics Data System (ADS)
Schindler, Uwe; Diepenbroek, Michael
2008-12-01
We present a generic and flexible framework for building geoscientific metadata portals independent of content standards for metadata and protocols. Data can be harvested with commonly used protocols (e.g., the Open Archives Initiative Protocol for Metadata Harvesting) and metadata standards like DIF or ISO 19115. The new Java-based portal software supports any XML encoding and makes metadata searchable through Apache Lucene. Software administrators are free to define searchable fields, independent of their type, using XPath. In addition, by extending the full-text search engine (FTS) Apache Lucene, we have significantly improved queries for numerical and date/time ranges by supplying a new trie-based algorithm, thus enabling high-performance space/time retrievals in FTS-based geo portals. The harvested metadata are stored in separate indexes, which makes it possible to combine these into different portals. The portal-specific Java API and web service interface is highly flexible and supports custom front-ends for users, provides automatic query completion (AJAX), and dynamic visualization with conventional mapping tools. The software has been made freely available through the open source concept.
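The administrator-defined searchable fields might look like the following Python sketch, where XPath expressions are evaluated against each harvested record; the field names and paths are invented DIF-like examples, and a plain dictionary stands in for the Lucene index.

    # XPath-driven field extraction: the portal stays independent of any
    # particular metadata schema because fields are declared as XPaths.
    from lxml import etree

    FIELD_XPATHS = {                  # administrator-defined, per portal
        "title": "string(//Entry_Title)",
        "west":  "number(//Spatial_Coverage/Westernmost_Longitude)",
        "start": "string(//Temporal_Coverage/Start_Date)",
    }

    def index_record(xml_bytes):
        doc = etree.fromstring(xml_bytes)
        return {name: doc.xpath(path)
                for name, path in FIELD_XPATHS.items()}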
Cordic based algorithms for software defined radio (SDR) baseband processing
NASA Astrophysics Data System (ADS)
Heyne, B.; Götze, J.
2006-09-01
This paper presents two Cordic based algorithms which may be used for digital baseband processing in OFDM and/or CDMA based communication systems. The first one is a linear least squares based multiuser detector for CDMA incorporating descrambling and despreading. The second algorithm is a pure Cordic based FFT implementation. Both algorithms can be implemented using solely Cordic based architectures (e.g. coprocessors or ASIPs). The algorithms exactly fit the needs of a multistandard terminal as they both are freely parameterizable. This applies to the accuracy of the results as well as to the parameters of the performed function (e.g. size of the FFT).
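As a concrete illustration of the shift-and-add arithmetic that makes Cordic attractive for such coprocessors and ASIPs, the following minimal Python sketch rotates a 2D vector in Cordic rotation mode. It is an illustrative floating-point model (a hardware version would use fixed-point adds and barrel shifts), not the paper's implementation:

import math

def cordic_rotate(x, y, angle, n_iters=32):
    """Rotate (x, y) by `angle` radians with CORDIC rotation-mode iterations.
    Each step needs only additions and halvings (shifts in hardware);
    convergence requires |angle| <= sum(atan(2**-i)) ~ 1.74 rad."""
    # Precompute the constant CORDIC gain so the result can be rescaled.
    K = 1.0
    for i in range(n_iters):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = angle
    for i in range(n_iters):
        d = 1.0 if z >= 0.0 else -1.0          # rotate toward z = 0
        x, y = x - d * y * 2.0 ** (-i), y + d * x * 2.0 ** (-i)
        z -= d * math.atan(2.0 ** (-i))
    return K * x, K * y                         # undo the accumulated gain

An FFT butterfly's twiddle-factor multiplication is exactly such a rotation, which is why a pure Cordic FFT maps naturally onto the same datapath.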
A hybrid features based image matching algorithm
NASA Astrophysics Data System (ADS)
Tu, Zhenbiao; Lin, Tao; Sun, Xiao; Dou, Hao; Ming, Delie
2015-12-01
In this paper, we present a novel image matching method to find the correspondences between two sets of image interest points. The proposed method is based on a revised third-order tensor graph matching method and introduces an energy function that takes four kinds of energy terms into account. The third-order tensor method can hardly deal with situations where the number of interest points is huge. To deal with this problem, we use a potential matching set and a vote mechanism to decompose the matching task into several sub-tasks. Moreover, the third-order tensor method sometimes finds only a local optimum solution. We therefore use a clustering method to divide the feature points into groups and only sample feature triangles between different groups, which makes it much easier for the algorithm to find the global optimum. Experiments on different image databases show that our new method obtains correct matching results with relatively high efficiency.
Developing a Framework of Work-Based Foundation Skills.
ERIC Educational Resources Information Center
Pennsylvania State Univ., University Park. Inst. for the Study of Adult Literacy.
The Framework of Work Based Foundation Skills Project was undertaken to facilitate development of Pennsylvania's new workforce investment system Team PA CareerLink by identifying and developing common definitions of the foundation skills all workers need to function effectively in any workplace. A framework of 19 work-based foundation skills and…
Combined string searching algorithm based on Knuth-Morris-Pratt and Boyer-Moore algorithms
NASA Astrophysics Data System (ADS)
Tsarev, R. Yu; Chernigovskiy, A. S.; Tsareva, E. A.; Brezitskaya, V. V.; Nikiforov, A. Yu; Smirnov, N. A.
2016-04-01
The string searching task can be classified as a classic information processing task. Users either encounter the solution of this task while working with text processors or browsers, employing standard built-in tools, or the task is solved unseen by the users while they are working with various computer programmes. Nowadays there are many algorithms for solving the string searching problem. The main criterion of these algorithms' effectiveness is searching speed: the larger the shift of the pattern relative to the string in case of a character mismatch, the higher the algorithm's running speed. This article offers a combined algorithm, developed on the basis of the well-known Knuth-Morris-Pratt and Boyer-Moore string searching algorithms. These algorithms rest on two different basic principles of pattern matching: Knuth-Morris-Pratt is based upon forward pattern matching, and Boyer-Moore upon backward pattern matching. By uniting these two algorithms, the combined algorithm achieves a larger shift in case of a character mismatch. The article provides an example that illustrates the results of the Boyer-Moore, Knuth-Morris-Pratt and combined algorithms and shows the advantage of the latter in solving the string searching problem.
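A minimal Python sketch of the combining idea described above, under the assumption that the pattern is compared left to right at each alignment and, on a mismatch, the scan advances by the larger of a KMP-style prefix-function shift and a Boyer-Moore-style bad-character shift. This is an illustrative reconstruction, not the authors' exact procedure:

def prefix_table(p):
    """Knuth-Morris-Pratt prefix-function table."""
    t = [0] * len(p)
    k = 0
    for i in range(1, len(p)):
        while k and p[i] != p[k]:
            k = t[k - 1]
        if p[i] == p[k]:
            k += 1
        t[i] = k
    return t

def combined_search(text, pat):
    """Return start indices of pat in text, shifting by the larger of
    a KMP-style shift and a bad-character (Boyer-Moore-style) shift."""
    t, n, m = prefix_table(pat), len(text), len(pat)
    hits, s = [], 0
    while s <= n - m:
        j = 0
        while j < m and text[s + j] == pat[j]:
            j += 1
        if j == m:
            hits.append(s)
            s += m - t[m - 1] if m > 1 else 1
            continue
        kmp_shift = j - t[j - 1] if j > 0 else 1
        occ = pat.rfind(text[s + j], 0, j)   # rightmost occurrence before j
        bad_char_shift = j - occ if occ >= 0 else j + 1
        s += max(kmp_shift, bad_char_shift)
    return hits

Because each of the two shifts is individually safe (neither can skip over a genuine occurrence of the pattern), taking their maximum is also safe.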
An Innovative Thinking-Based Intelligent Information Fusion Algorithm
Hu, Liang; Liu, Gang; Zhou, Jin
2013-01-01
This study proposes an intelligent algorithm that can realize information fusion with reference to the research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. Its five key parts, including information sense and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. The algorithm fully develops the innovative thinking skills of knowledge in information fusion and is an attempt to convert the abstract concepts of brain cognitive science into specific and operable research routes and strategies. Furthermore, the influences of each parameter of this algorithm on algorithm performance are analyzed and compared with those of classical intelligent algorithms through tests. Test results suggest that the proposed algorithm can obtain the optimum problem solution with fewer objective evaluations, improve optimization effectiveness, and achieve effective fusion of information. PMID:23956699
Tree-based shortest-path routing algorithm
NASA Astrophysics Data System (ADS)
Long, Y. H.; Ho, T. K.; Rad, A. B.; Lam, S. P. S.
1998-12-01
A tree-based shortest path routing algorithm is introduced in this paper. With this algorithm, every network node can maintain a shortest path routing tree topology of the network with itself as the root. Every node constructs its own routing tree based upon its neighbors' routing trees. Initially, the routing tree at each node has only the root, the node itself. As information is exchanged, every node's routing tree evolves until a complete tree is obtained. This algorithm is a trade-off between the distance vector algorithm and the link state algorithm. Loops are automatically deleted, so there is no count-to-infinity effect. A simple routing tree information storage approach and a protocol data unit format to transmit the tree information are given. Some special issues, such as adaptation to topology changes, implementation of the algorithm on LANs, convergence and computation overhead, are also discussed in the paper.
NASA Astrophysics Data System (ADS)
Hauth, T.; Innocente, V.; Piparo, D.
2012-12-01
The processing of data acquired by the CMS detector at LHC is carried out with an object-oriented C++ software framework: CMSSW. With the increasing luminosity delivered by the LHC, the treatment of recorded data requires extraordinarily large computing resources, also in terms of CPU usage. A possible solution to cope with this task is the exploitation of the features offered by the latest microprocessor architectures. Modern CPUs present several vector units, the capacity of which is growing steadily with the introduction of new processor generations. Moreover, an increasing number of cores per die is offered by the main vendors, even on consumer hardware. Most recent C++ compilers provide facilities to take advantage of such innovations, either by explicit statements in the program sources or by automatically adapting the generated machine instructions to the available hardware, without the need to modify the existing code base. Programming techniques to implement reconstruction algorithms and optimised data structures are presented that aim at scalable vectorization and parallelization of the calculations. One of their features is the usage of new language features of the C++11 standard. Portions of the CMSSW framework are illustrated which have been found to be especially profitable for the application of vectorization and multi-threading techniques. Specific utility components have been developed to help vectorization and parallelization. They can easily become part of a larger common library. To conclude, careful measurements are described, which show the execution speedups achieved via vectorised and multi-threaded code in the context of CMSSW.
Regalia, Giulia; Coelli, Stefania; Biffi, Emilia; Ferrigno, Giancarlo; Pedrocchi, Alessandra
2016-01-01
Neuronal spike sorting algorithms are designed to retrieve neuronal network activity on a single-cell level from extracellular multiunit recordings with Microelectrode Arrays (MEAs). In typical analysis of MEA data, one spike sorting algorithm is applied indiscriminately to all electrode signals. However, this approach neglects the dependency of algorithms' performances on the neuronal signals properties at each channel, which require data-centric methods. Moreover, sorting is commonly performed off-line, which is time and memory consuming and prevents researchers from having an immediate glance at ongoing experiments. The aim of this work is to provide a versatile framework to support the evaluation and comparison of different spike classification algorithms suitable for both off-line and on-line analysis. We incorporated different spike sorting "building blocks" into a Matlab-based software, including 4 feature extraction methods, 3 feature clustering methods, and 1 template matching classifier. The framework was validated by applying different algorithms on simulated and real signals from neuronal cultures coupled to MEAs. Moreover, the system has been proven effective in running on-line analysis on a standard desktop computer, after the selection of the most suitable sorting methods. This work provides a useful and versatile instrument for a supported comparison of different options for spike sorting towards more accurate off-line and on-line MEA data analysis. PMID:27239191
Solving SAT Problem Based on Hybrid Differential Evolution Algorithm
NASA Astrophysics Data System (ADS)
Liu, Kunqi; Zhang, Jingmin; Liu, Gang; Kang, Lishan
The satisfiability (SAT) problem is NP-complete. Based on an analysis of the problem, SAT is translated equivalently into the minimization of an objective function. A hybrid differential evolution algorithm is proposed to solve the satisfiability problem. It makes full use of the strong local search capacity of the hill-climbing algorithm and the strong global search capability of the differential evolution algorithm, which compensates for their respective disadvantages, improves the efficiency of the algorithm and avoids the stagnation phenomenon. The experimental results show that the hybrid algorithm is efficient in solving the SAT problem.
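A minimal Python sketch of the hybrid scheme under stated assumptions: clauses are given DIMACS-style as lists of signed integers, the objective to minimize is the number of unsatisfied clauses, differential evolution searches over real vectors binarized at 0.5, and a greedy hill-climbing pass provides the local search. The function names and parameter defaults are illustrative, not the authors':

import random

def unsat_count(bits, clauses):
    """Objective: number of unsatisfied clauses (0 means satisfying assignment)."""
    return sum(not any((bits[abs(l) - 1] == 1) == (l > 0) for l in clause)
               for clause in clauses)

def hill_climb(bits, clauses):
    """Greedy local search: keep flipping any variable that lowers the count."""
    best, improved = unsat_count(bits, clauses), True
    while improved and best > 0:
        improved = False
        for i in range(len(bits)):
            bits[i] ^= 1
            score = unsat_count(bits, clauses)
            if score < best:
                best, improved = score, True
            else:
                bits[i] ^= 1          # undo a non-improving flip
    return bits, best

def de_sat(clauses, n_vars, pop_size=30, gens=200, F=0.5, CR=0.9):
    """Differential evolution over [0,1]^n vectors, binarized at 0.5."""
    to_bits = lambda v: [int(x > 0.5) for x in v]
    pop = [[random.random() for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample(pop[:i] + pop[i + 1:], 3)
            trial = [a[k] + F * (b[k] - c[k]) if random.random() < CR else pop[i][k]
                     for k in range(n_vars)]
            bits, score = hill_climb(to_bits(trial), clauses)
            if score == 0:
                return bits                        # satisfying assignment found
            if score <= unsat_count(to_bits(pop[i]), clauses):
                pop[i] = [float(x) for x in bits]  # replace if no worse
    return None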
LMI-Based Generation of Feedback Laws for a Robust Model Predictive Control Algorithm
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Carson, John M., III
2007-01-01
This technical note provides a mathematical proof of Corollary 1 from the paper 'A Nonlinear Model Predictive Control Algorithm with Proven Robustness and Resolvability' that appeared in the 2006 Proceedings of the American Control Conference. The proof was omitted for brevity in the publication. The paper was based on algorithms developed for the FY2005 R&TD (Research and Technology Development) project for Small-body Guidance, Navigation, and Control [2]. The framework established by the Corollary is for a robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. Additional details of the framework are available in the publication.
A Viola-Jones based hybrid face detection framework
NASA Astrophysics Data System (ADS)
Murphy, Thomas M.; Broussard, Randy; Schultz, Robert; Rakvic, Ryan; Ngo, Hau
2013-12-01
Improvements in face detection performance would benefit many applications. The OpenCV library implements a standard solution, the Viola-Jones detector, with a statistically boosted rejection cascade of binary classifiers. Empirical evidence has shown that Viola-Jones underdetects in some instances. This research shows that a truncated cascade augmented by a neural network can recover these undetected faces. A hybrid framework is constructed, with a truncated Viola-Jones cascade followed by an artificial neural network used to refine the face decision. Optimally, a truncation stage is selected that captures all faces and allows the neural network to remove the false alarms. A feedforward backpropagation network with one hidden layer is trained to discriminate faces based upon the thresholding (detection) values of intermediate stages of the full rejection cascade. A clustering algorithm is used as a precursor to the neural network to group significantly overlapping detections. Evaluated on the CMU/VASC Image Database, comparison with an unmodified OpenCV approach shows: (1) a 37% increase in detection rates if constrained by the requirement of no increase in false alarms, (2) a 48% increase in detection rates if some additional false alarms are tolerated, and (3) an 82% reduction in false alarms with no reduction in detection rates. These results demonstrate improved face detection and could address the need for such improvement in various applications.
Guan, Xiangmin; Zhang, Xuejun; Zhu, Yanbo; Sun, Dengfeng; Lei, Jiaxing
2015-01-01
Considering reducing the airspace congestion and the flight delay simultaneously, this paper formulates the airway network flow assignment (ANFA) problem as a multiobjective optimization model and presents a new multiobjective optimization framework to solve it. Firstly, an effective multi-island parallel evolution algorithm with multiple evolution populations is employed to improve the optimization capability. Secondly, the nondominated sorting genetic algorithm II is applied for each population. In addition, a cooperative coevolution algorithm is adapted to divide the ANFA problem into several low-dimensional biobjective optimization problems which are easier to deal with. Finally, in order to maintain the diversity of solutions and to avoid prematurity, a dynamic adjustment operator based on solution congestion degree is specifically designed for the ANFA problem. Simulation results using the real traffic data from China air route network and daily flight plans demonstrate that the proposed approach can improve the solution quality effectively, showing superiority to the existing approaches such as the multiobjective genetic algorithm, the well-known multiobjective evolutionary algorithm based on decomposition, and a cooperative coevolution multiobjective algorithm as well as other parallel evolution algorithms with different migration topology. PMID:26180840
A Curriculum Framework Based on Archetypal Phenomena and Technologies.
ERIC Educational Resources Information Center
Zubrowski, Bernie
2002-01-01
Presents an alternative paradigm of curriculum development based on the theory of situated cognition. This approach starts with context rather than concept, gives greater weight to students' interpretative frameworks, and provides for a more holistic development. Presents a grade 1-8 framework that uses archetypal phenomena and technologies as the…
A Framework for Concept-Based Digital Course Libraries
ERIC Educational Resources Information Center
Dicheva, Darina; Dichev, Christo
2004-01-01
This article presents a general framework for building concept-based digital course libraries. The framework is based on the idea of using a conceptual structure that represents a subject domain ontology for classification of the course library content. Two aspects, domain conceptualization, which supports findability and ontologies, which support…
Machine learning algorithms for damage detection: Kernel-based approaches
NASA Astrophysics Data System (ADS)
Santos, Adam; Figueiredo, Eloi; Silva, M. F. M.; Sales, C. S.; Costa, J. C. W. A.
2016-02-01
This paper presents four kernel-based algorithms for damage detection under varying operational and environmental conditions, namely based on the one-class support vector machine, support vector data description, kernel principal component analysis and greedy kernel principal component analysis. Acceleration time-series from an array of accelerometers were obtained from a laboratory structure and used for performance comparison. The main contribution of this study is the applicability of the proposed algorithms for damage detection as well as the comparison of the classification performance between these algorithms and four others already considered reliable approaches in the literature. All proposed algorithms proved to have better classification performance than the previous ones.
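Of the four kernel methods named above, the one-class support vector machine is the most readily sketched. The snippet below is a minimal illustration using scikit-learn (an assumption; the paper does not specify an implementation), with randomly generated stand-in vectors in place of the real accelerometer-derived features:

import numpy as np
from sklearn.svm import OneClassSVM

# Stand-in data: rows are damage-sensitive feature vectors (e.g., AR-model
# coefficients extracted from the acceleration time-series). Random values
# here are placeholders for the real features.
X_baseline = np.random.randn(200, 10)   # undamaged state, varying conditions
X_new = np.random.randn(50, 10)         # new observations to classify

# Train on the baseline state only; nu upper-bounds the outlier fraction.
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_baseline)
labels = detector.predict(X_new)  # +1: consistent with baseline, -1: potential damage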
Raytracing Based upon the Symplectic Algorithm
NASA Astrophysics Data System (ADS)
Wang, Y.; Li, C.
2014-12-01
Raytracing is a basic problem in seismic imaging: the reliability of the imaging depends on the accuracy of both the spatial trajectory and the traveltime of the ray, and raytracing is widely used in seismology. A seismic ray traveling through inhomogeneous media follows the eikonal equation, a first-order differential equation in traveltime that satisfies a Hamiltonian system. In the Cartesian coordinate system, we use a separable Hamiltonian function. In this paper, the symplectic algorithm method with a bi-cubic convolution algorithm was used to solve the Hamiltonian system for the raytracing problem. Compared with the Fast Marching Method (FMM), the result shows that the symplectic algorithm method (SAM) can keep the solution of the eikonal equation stable. Owing to the use of the symplectic algorithm, the method can produce a reliable seismic wavefront with an accurate ray trajectory (Fig. 1). Meanwhile, the numerical modeling shows that the use of SAM can not only keep the Hamiltonian system stable with fast computation but also improve the accuracy of the seismic ray tracing (Fig. 2).
Function-Based Algorithms for Biological Sequences
ERIC Educational Resources Information Center
Mohanty, Pragyan Sheela P.
2015-01-01
Two problems at two different abstraction levels of computational biology are studied. At the molecular level, efficient pattern matching algorithms in DNA sequences are presented. For gene order data, an efficient data structure is presented capable of storing all gene re-orderings in a systematic manner. A common characteristic of presented…
Draft framework for watershed-based trading
1996-05-30
Effluent trading is an innovative way for water quality agencies and community stakeholders to develop common-sense, cost-effective solutions for water quality problems in their watersheds. Trading can allow communities to grow and prosper while retaining their commitment to water quality. The bulk of this framework discusses effluent trading in watersheds. Remaining sections discuss transactions that, while not technically fulfilling the definition of "effluent" trade, do involve the exchange of valued water quality or other ecological improvements between partners responding to market initiatives. This document therefore includes activities such as trades within a facility (intra-plant trading) and wetland mitigation banking.
Analysis of image thresholding segmentation algorithms based on swarm intelligence
NASA Astrophysics Data System (ADS)
Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo
2013-03-01
Swarm intelligence-based image thresholding segmentation algorithms play an important role in the research field of image segmentation. In this paper, we briefly introduce the theories of four existing image segmentation algorithms based on swarm intelligence: the fish swarm algorithm, artificial bee colony, the bacteria foraging algorithm and particle swarm optimization. Several benchmark images are then tested to show how the four algorithms differ in segmentation accuracy, time consumption, convergence, and robustness to salt-and-pepper and Gaussian noise. Through these comparisons, this paper gives qualitative analyses of the performance variance of the four algorithms. The conclusions in this paper provide a significant guide for practical image segmentation.
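As an illustration of how a swarm method can drive threshold selection, the following minimal Python sketch uses particle swarm optimization to maximize Otsu's between-class variance over a 256-bin histogram. Pairing PSO with the Otsu criterion is an assumption made for illustration; the paper compares four swarm algorithms without prescribing this particular objective:

import numpy as np

def between_class_variance(hist, t):
    """Otsu's criterion for threshold t on a 256-bin grayscale histogram."""
    p = hist / hist.sum()
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = (np.arange(t) * p[:t]).sum() / w0
    mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

def pso_threshold(hist, n=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO over one dimension: particle positions are thresholds."""
    x = np.random.uniform(1, 255, n)
    v = np.zeros(n)
    pbest = x.copy()
    pval = np.array([between_class_variance(hist, int(t)) for t in x])
    gbest = pbest[pval.argmax()]
    for _ in range(iters):
        r1, r2 = np.random.rand(n), np.random.rand(n)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 1, 255)
        val = np.array([between_class_variance(hist, int(t)) for t in x])
        better = val > pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmax()]
    return int(gbest)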
Advances on image interpolation based on ant colony algorithm.
Rukundo, Olivier; Cao, Hanqiang
2016-01-01
This paper presents an advance on image interpolation based on the ant colony algorithm (AACA) for high-resolution image scaling. The difference between the proposed algorithm and the previously proposed optimization of bilinear interpolation based on the ant colony algorithm (OBACA) is that AACA uses a global weighting scheme, whereas OBACA uses a local one. The strength of the proposed global weighting in AACA lies in employing solely the pheromone matrix information present on any group of four adjacent pixels to decide whether a case deserves the maximum global weight value. Experimental results are provided to show the higher performance of the proposed AACA algorithm with respect to the algorithms mentioned in this paper. PMID:27047729
Barzilai-Borwein method in graph drawing algorithm based on Kamada-Kawai algorithm
NASA Astrophysics Data System (ADS)
Hasal, Martin; Pospisil, Lukas; Nowakova, Jana
2016-06-01
An extension of the Kamada-Kawai algorithm, which was designed for calculating layouts of simple undirected graphs, is presented in this paper. Graphs drawn by the Kamada-Kawai algorithm exhibit symmetries and tend to produce aesthetically pleasing and crossing-free layouts for planar graphs. Minimization in the Kamada-Kawai algorithm is based on the Newton-Raphson method, which needs the Hessian matrix of second derivatives at the minimized node. The disadvantage of the Kamada-Kawai embedder algorithm is its computational requirements. This is caused by the search for the minimal potential energy of the whole system, which is minimized node by node: the node with the highest energy is minimized against all nodes until the local equilibrium state is reached. In this paper, the Barzilai-Borwein (BB) minimization algorithm, which needs only the gradient for minimum searching, is used instead of the Newton-Raphson method. It significantly improves the computational time and requirements.
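A minimal Python sketch of the Barzilai-Borwein step that replaces the Newton-Raphson update: only two consecutive gradients are needed to form the step size α_k = (sᵀs)/(sᵀy), with s = x_k − x_{k−1} and y = g_k − g_{k−1}. The `grad` callback and the safeguards are illustrative assumptions, not the paper's code:

import numpy as np

def bb_minimize(grad, x0, iters=100, alpha0=1e-3):
    """Gradient descent with the Barzilai-Borwein step size.
    No Hessian is needed, in contrast to Newton-Raphson."""
    x_prev, g_prev = x0, grad(x0)
    x = x_prev - alpha0 * g_prev          # one plain gradient step to start
    for _ in range(iters):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        denom = s @ y
        alpha = (s @ s) / denom if abs(denom) > 1e-12 else alpha0
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

In the graph-drawing setting, `grad` would return the gradient of the Kamada-Kawai potential energy with respect to the coordinates of the node being minimized.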
BIRAM: a content-based image retrieval framework for medical images
NASA Astrophysics Data System (ADS)
Moreno, Ramon A.; Furuie, Sergio S.
2006-03-01
In the medical field, digital images are becoming more and more important for the diagnosis and therapy of patients. At the same time, the development of new technologies has increased the amount of image data produced in a hospital. This creates a demand for access methods that offer more than text-based queries for retrieval of the information. This paper proposes a framework for the retrieval of medical images that allows the use of different algorithms for searching medical images by similarity. The framework also enables the search for textual information from an associated medical report and DICOM header information. The proposed system can be used to support clinical decision making and is intended to be integrated with an open-source picture archiving and communication system (PACS). BIRAM has the following advantages: (i) it can receive several types of algorithms for image similarity search; (ii) it allows the codification of the report according to a medical dictionary, improving the indexing and retrieval of the information; (iii) the algorithms can be selectively applied to images with the appropriate characteristics, for instance, only to magnetic resonance images. The framework was implemented in the Java language using an MS Access 97 database. The proposed framework can still be improved by the use of regions of interest (ROI), indexing with slim-trees and integration with a PACS server.
MERIS burned area algorithm in the framework of the ESA Fire CCI Project
NASA Astrophysics Data System (ADS)
Oliva, P.; Calado, T.; Gonzalez, F.
2012-04-01
The Fire-CCI project aims at generating long and reliable time series of burned area (BA) maps based on existing information provided by European satellite sensors. In this context, a BA algorithm is currently being developed using the Medium Resolution Imaging Spectrometer (MERIS) sensor. The algorithm is being tested over a series of ten study sites with an area of 500×500 km² each, for the period 2003 to 2009. The study sites are located in Canada, Colombia, Brazil, Portugal, Angola, South Africa, Kazakhstan, Borneo, Russia and Australia and include a variety of vegetation types characterized by different fire regimes. The algorithm has to take into account several limiting aspects that range from the MERIS sensor characteristics (e.g. the lack of SWIR bands) to the noise present in the data. In addition, the lack of data in some areas, caused either by cloud contamination or because the sensor does not acquire full-resolution data over the study area, is a limitation that is difficult to overcome. To overcome these drawbacks, the design of the BA algorithm is based on the analysis of maximum composites of spectral indices characterized by low values of temporal standard deviation in space and associated with MODIS hot spots. Accordingly, for each study site and year, composites of maximum BAI values are computed, and the corresponding Julian day of the maximum value and the number of observations in the period are registered per pixel. We then compute the temporal standard deviation for pixels with more than 10 observations using spatial matrices of 3×3 pixels. To classify the BAI values as burned or non-burned, we extract statistics using the MODIS hot spots. A pixel is finally classified as burned if it satisfies the following conditions: i) it is associated with hot spots; ii) the BAI maximum is higher than a certain threshold; and iii) the standard deviation of the Julian day is less than a given number of days.
NASA Astrophysics Data System (ADS)
Tang, Jie; Nett, Brian E.; Chen, Guang-Hong
2009-10-01
Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms at several dose levels for a constant undersampling factor. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions from our measurements are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
NASA Astrophysics Data System (ADS)
Müller, D.; Böckmann, C.; Kolgotin, A.; Schneidenbach, L.; Chemyakin, E.; Rosemann, J.; Znak, P.; Romanov, A.
2015-12-01
We present a summary on the current status of two inversion algorithms that are used in EARLINET for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on manually controlled inversion of optical data which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithms allow us to derive particle effective radius, and volume and surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index still is a challenge in view of the accuracy required for these parameters in climate change studies in which light-absorption needs to be known with high accuracy. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high and low absorbing aerosols. We discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work on the basis of a few exemplary simulations with synthetic optical data. The optical data used in our study cover a range of Ångström exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types. We also tested aerosol scenarios that are considered highly unlikely, e.g., the lidar ratios fall outside the commonly accepted range of values measured with Raman lidar, even though the underlying microphysical particle properties are not uncommon. The goal of this part of the study is to test robustness of the algorithms toward their ability to identify aerosol types that have not been measured so far, but cannot be ruled out based on our current knowledge of
Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction.
Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon
2016-01-01
Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients' psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence of the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable personalized psychiatric emergency care, a web of objects-based service framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web of objects of sensor observations and psychiatric rating scores is used to assess the dweller's mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm reported 83.03 percent prediction accuracy in an empirical performance study. PMID:27608023
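For reference, a minimal Python sketch of the standard Viterbi decoding used to recover the most likely hidden state sequence of an HMM. It assumes discrete observations and strictly positive probabilities; the paper's Viterbi path counting and SVI-based training are not reproduced here:

import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete-observation HMM.
    obs: observation indices (T,), pi: initial probs (K,),
    A: transition matrix (K, K), B: emission matrix (K, M)."""
    T, K = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])   # best log-prob ending in each state
    back = np.zeros((T, K), dtype=int)         # backpointers
    for t in range(1, T):
        trans = logd[:, None] + np.log(A)      # trans[i, j]: come from i, go to j
        back[t] = trans.argmax(axis=0)
        logd = trans.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):              # follow backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]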
Robust facial expression recognition algorithm based on local metric learning
NASA Astrophysics Data System (ADS)
Jiang, Bin; Jia, Kebin
2016-01-01
In facial expression recognition tasks, different facial expressions are often confused with each other. Motivated by the fact that a learned metric can significantly improve the accuracy of classification, a facial expression recognition algorithm based on local metric learning is proposed. First, k-nearest neighbors of the given testing sample are determined from the total training data. Second, chunklets are selected from the k-nearest neighbors. Finally, the optimal transformation matrix is computed by maximizing the total variance between different chunklets and minimizing the total variance of instances in the same chunklet. The proposed algorithm can find the suitable distance metric for every testing sample and improve the performance on facial expression recognition. Furthermore, the proposed algorithm can be used for vector-based and matrix-based facial expression recognition. Experimental results demonstrate that the proposed algorithm could achieve higher recognition rates and be more robust than baseline algorithms on the JAFFE, CK, and RaFD databases.
A vehicle detection algorithm based on deep belief network.
Wang, Hai; Cai, Yingfeng; Chen, Long
2014-01-01
Vision-based vehicle detection is a critical technology that plays an important role not only in vehicle active safety but also in road video surveillance applications. Traditional shallow-model-based vehicle detection algorithms still cannot meet the requirement of accurate vehicle detection in these applications. In this work, a novel deep learning based vehicle detection algorithm with a 2D deep belief network (2D-DBN) is proposed. In the algorithm, the proposed 2D-DBN architecture uses second-order planes instead of first-order vectors as input and uses bilinear projection for retaining discriminative information so as to determine the size of the deep architecture, which enhances the success rate of vehicle detection. On-road experimental results demonstrate that the algorithm performs better than state-of-the-art vehicle detection algorithms on testing data sets. PMID:24959617
Identification of Traceability Barcode Based on Phase Correlation Algorithm
NASA Astrophysics Data System (ADS)
Lang, Liying; Zhang, Xiaofang
In this paper, the phase correlation algorithm based on the Fourier transform, a widely used method of image registration, is applied to traceability barcode identification. A rotation-invariant phase correlation algorithm, which combines a polar coordinate transform with phase correlation, can also recognize barcodes that are partly destroyed or rotated. The paper provides an analysis and a Matlab simulation of the algorithm; the results show that the algorithm has the advantages of good real-time behaviour and high performance, and that it improves the matching precision and reduces the computation by optimizing the rotation-invariant phase correlation.
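A minimal Python/NumPy sketch of the core phase correlation step: the translation between two equally sized images appears as the peak of the inverse FFT of the normalized cross-power spectrum. The rotation-invariant variant mentioned above would first resample both images on a (log-)polar grid so that rotation becomes a translation; that step is omitted here:

import numpy as np

def phase_correlation(img, template):
    """Estimate the translation between two equally sized grayscale images."""
    F1 = np.fft.fft2(img)
    F2 = np.fft.fft2(template)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12           # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    return dy, dx                            # shifts wrap modulo the image size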
Comparison of Beam-Based Alignment Algorithms for the ILC
Smith, J.C.; Gibbons, L.; Patterson, J.R.; Rubin, D.L.; Sagan, D.; Tenenbaum, P.; /SLAC
2006-03-15
The main linac of the International Linear Collider (ILC) requires more sophisticated alignment techniques than those provided by survey alone. Various Beam-Based Alignment (BBA) algorithms have been proposed to achieve the desired low emittance preservation. Dispersion Free Steering, Ballistic Alignment and the Kubo method are compared. Alignment algorithms are also tested in the presence of an Earth-like stray field.
A Danger-Theory-Based Immune Network Optimization Algorithm
Li, Tao; Xiao, Xin; Shi, Yuanquan
2013-01-01
Existing artificial immune optimization algorithms reflect a number of shortcomings, such as premature convergence and poor local search ability. This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet. The danger theory emphasizes that danger signals generated from changes of environments will guide different levels of immune responses, and the areas around danger signals are called danger zones. By defining the danger zone to calculate danger signals for each antibody, the algorithm adjusts antibodies' concentrations through its own danger signals and then triggers immune responses of self-regulation. So the population diversity can be maintained. Experimental results show that the algorithm has more advantages in the solution quality and diversity of the population. Compared with influential optimization algorithms, CLONALG, opt-aiNet, and dopt-aiNet, the algorithm has smaller error values and higher success rates and can find solutions to meet the accuracies within the specified function evaluation times. PMID:23483853
Community detection based on modularity and an improved genetic algorithm
NASA Astrophysics Data System (ADS)
Shang, Ronghua; Bai, Jing; Jiao, Licheng; Jin, Chao
2013-03-01
Complex networks are widely applied in every aspect of human society, and community detection is a research hotspot in complex networks. Many algorithms use modularity as the objective function, which can simplify the algorithm. In this paper, a community detection method based on modularity and an improved genetic algorithm (MIGA) is put forward. MIGA takes the modularity Q as the objective function, which can simplify the algorithm, and uses prior information (the number of community structures), which makes the algorithm more targeted and improves the stability and accuracy of community detection. Meanwhile, MIGA takes the simulated annealing method as the local search method, which can improve the ability of local search by adjusting the parameters. Compared with the state-of-the-art algorithms, simulation results on computer-generated and four real-world networks reflect the effectiveness of MIGA.
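Since MIGA uses the modularity Q as its objective, the following minimal Python sketch shows the quantity being optimized, Q = (1/2m) Σ_ij [A_ij − k_i k_j/(2m)] δ(c_i, c_j), for a graph given as an adjacency dictionary. The data layout is an illustrative assumption; a genetic algorithm would call such a function as its fitness evaluation:

def modularity(adj, community):
    """Newman's modularity Q for an undirected graph.
    adj: dict node -> set of neighbours; community: dict node -> label."""
    two_m = sum(len(nbrs) for nbrs in adj.values())   # every edge counted twice
    q = 0.0
    for i in adj:
        for j in adj:
            if community[i] == community[j]:
                a_ij = 1.0 if j in adj[i] else 0.0
                q += a_ij - len(adj[i]) * len(adj[j]) / two_m
    return q / two_m

In a GA such as MIGA, each individual encodes a candidate community assignment, and modularity(adj, decode(individual)) serves as the fitness to maximize.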
Improved artificial bee colony algorithm based gravity matching navigation method.
Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang
2014-01-01
Gravity matching navigation algorithm is one of the key technologies for gravity aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it possible to apply it to the gravity matching navigation field. However, existing search mechanisms of basic ABC algorithms cannot meet the need for high accuracy in gravity aided navigation. Firstly, proper modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism is presented in this paper which is based on an improved ABC algorithm using external speed information. Finally, a modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and results show that the matching rate of the method is high enough to obtain a precise matching position. PMID:25046019
Fast parallel algorithm for slicing STL based on pipeline
NASA Astrophysics Data System (ADS)
Ma, Xulong; Lin, Feng; Yao, Bo
2016-04-01
In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing process for large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms can't make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of the number of threads and the number of layers are investigated in a series of experiments. The experimental results show that the thread count and layer count are two remarkable factors for the speedup ratio. The tendency of speedup versus the number of threads reveals a positive relationship that agrees well with Amdahl's law, and the tendency of speedup versus the number of layers also keeps a positive relationship, agreeing with Gustafson's law. The new algorithm uses topological information to compute contours with a parallel method of speedup. Another parallel algorithm based on data parallelism is used in the experiments to show that the pipeline parallel mode is more efficient. A case study finally demonstrates the performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm can make full use of multi-core CPU hardware and accelerate the slicing process; compared with the data-parallel slicing algorithm, the new algorithm adopts a pipeline parallel model and achieves a much higher speedup ratio and efficiency.
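A minimal Python sketch of the pipeline parallel mode itself: worker threads chained by bounded queues, each stage consuming the previous stage's output so all stages run concurrently. The stage functions named in the usage comment (parsing triangles, intersecting them with slicing planes, linking segments into contours) are hypothetical placeholders for the paper's stages, and CPython's GIL means this sketch illustrates the structure rather than the multi-core speedup a C implementation would obtain:

import threading, queue

def run_pipeline(stages, items):
    """Chain one worker thread per stage with bounded queues."""
    qs = [queue.Queue(maxsize=64) for _ in range(len(stages) + 1)]

    def worker(fn, qin, qout):
        while True:
            item = qin.get()
            if item is None:        # poison pill: forward it and stop
                qout.put(None)
                return
            qout.put(fn(item))

    threads = [threading.Thread(target=worker, args=(fn, qs[i], qs[i + 1]))
               for i, fn in enumerate(stages)]
    for t in threads:
        t.start()
    for item in items:
        qs[0].put(item)
    qs[0].put(None)
    results = []
    while (out := qs[-1].get()) is not None:
        results.append(out)
    for t in threads:
        t.join()
    return results

# Hypothetical slicing stages:
# layers = run_pipeline([parse_triangle, intersect_planes, link_segments], raw_records)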
Fast image matching algorithm based on projection characteristics
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun
2011-06-01
Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting the speed of matching and puts forward a brand-new fast matching algorithm based on projection. By projecting the grayscale image, this algorithm converts the two-dimensional information of the image into one dimension and then matches and identifies through one-dimensional correlation; meanwhile, because normalization has been performed, it can still match correctly when the image brightness or signal amplitude increases proportionally. Experimental results show that the projection-based image registration method proposed in this article greatly improves the matching speed while ensuring matching accuracy.
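A minimal Python/NumPy sketch of the projection idea: each 2-D patch is reduced to normalized row and column sums, and matching is scored by 1-D correlation of these profiles instead of full 2-D correlation. Scoring equally sized patches, as shown here, is an illustrative simplification; a full matcher would slide the template over candidate positions:

import numpy as np

def projection_profiles(img):
    """Reduce a 2-D grayscale image to normalized row/column sums."""
    rows = img.sum(axis=1).astype(float)
    cols = img.sum(axis=0).astype(float)
    rows /= np.linalg.norm(rows) + 1e-12     # normalization makes the score
    cols /= np.linalg.norm(cols) + 1e-12     # invariant to brightness scaling
    return rows, cols

def match_score(patch, template):
    """1-D correlation of projection profiles; patch and template must
    have the same shape."""
    r1, c1 = projection_profiles(patch)
    r2, c2 = projection_profiles(template)
    return float(r1 @ r2 + c1 @ c2) / 2.0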
A knowledge-based clustering algorithm driven by Gene Ontology.
Cheng, Jill; Cline, Melissa; Martin, John; Finkelstein, David; Awad, Tarif; Kulp, David; Siani-Rose, Michael A
2004-08-01
We have developed an algorithm for inferring the degree of similarity between genes by using the graph-based structure of Gene Ontology (GO). We applied this knowledge-based similarity metric to a clique-finding algorithm for detecting sets of related genes with biological classifications. We also combined it with an expression-based distance metric to produce a co-cluster analysis, which accentuates genes with both similar expression profiles and similar biological characteristics and identifies gene clusters that are more stable and biologically meaningful. These algorithms are demonstrated in the analysis of MPRO cell differentiation time series experiments. PMID:15468759
PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.
Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A
2016-06-01
New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelised using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with the OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained. PMID:26454270
Framework for Supporting Web-Based Collaborative Applications
NASA Astrophysics Data System (ADS)
Dai, Wei
The article proposes an intelligent framework for supporting Web-based applications. The framework focuses on innovative use of existing resources and technologies in the form of services and leverages the theoretical foundation of services science and research from services computing. The main focus of the framework is to deliver benefits to users with various roles, such as service requesters, service providers, and business owners, to maximize their productivity when engaging with each other via the Web. The article opens with research motivations and questions, analyses the existing state of research in the field, and describes the approach to implementing the proposed framework. Finally, an e-health application is discussed to evaluate the effectiveness of the framework, where participants such as general practitioners (GPs), patients, and health-care workers collaborate via the Web.
Dong, Tingzing Tim; Tomov, Stanimire Z; Luszczek, Piotr R; Dongarra, Jack J
2015-01-01
As modern hardware keeps evolving, an increasingly effective approach to developing energy efficient and high-performance solvers is to design them to work on many small size and independent problems. Many applications already need this functionality, especially for GPUs, which are currently known to be about four to five times more energy efficient than multicore CPUs. We describe the development of one-sided factorizations that work for a set of small dense matrices in parallel, and we illustrate our techniques on the QR factorization based on Householder transformations. We refer to this mode of operation as a batched factorization. Our approach is based on representing the algorithms as a sequence of batched BLAS routines for GPU-only execution. This is in contrast to the hybrid CPU-GPU algorithms that rely heavily on using the multicore CPU for specific parts of the workload. But for a system to benefit fully from the GPU's significantly higher energy efficiency, avoiding the use of the multicore CPU must be a primary design goal, so the system can rely more heavily on the more efficient GPU. Additionally, this will result in the removal of the costly CPU-to-GPU communication. Furthermore, we do not use a single symmetric multiprocessor (on the GPU) to factorize a single problem at a time. We illustrate how our performance analysis, and the use of profiling and tracing tools, guided the development and optimization of our batched factorization to achieve up to a 2-fold speedup and a 3-fold energy efficiency improvement compared to our highly optimized batched CPU implementations based on the MKL library (when using two sockets of Intel Sandy Bridge CPUs). Compared to a batched QR factorization featured in the CUBLAS library for GPUs, we achieved up to 5x speedup on the K40 GPU.
de Klerk, Helen M; Gilbertson, Jason; Lück-Vogel, Melanie; Kemp, Jaco; Munch, Zahn
2016-11-01
Traditionally, to map environmental features using remote sensing, practitioners will use training data to develop models on various satellite data sets using a number of classification approaches and use test data to select a single 'best performer' from which the final map is made. We use a combination of an omission/commission plot to evaluate various results and compile a probability map based on consistently strong performing models across a range of standard accuracy measures. We suggest that this easy-to-use approach can be applied in any study using remote sensing to map natural features for management action. We demonstrate this approach using optical remote sensing products of different spatial and spectral resolution to map the endemic and threatened flora of quartz patches in the Knersvlakte, South Africa. Quartz patches can be mapped using either SPOT 5 (used due to its relatively fine spatial resolution) or Landsat8 imagery (used because it is freely accessible and has higher spectral resolution). Of the variety of classification algorithms available, we tested maximum likelihood and support vector machine, and applied these to raw spectral data, the first three PCA summaries of the data, and the standard normalised difference vegetation index. We found that there is no 'one size fits all' solution to the choice of a 'best fit' model (i.e. combination of classification algorithm or data sets), which is in agreement with the literature that classifier performance will vary with data properties. We feel this lends support to our suggestion that rather than the identification of a 'single best' model and a map based on this result alone, a probability map based on the range of consistently top performing models provides a rigorous solution to environmental mapping. PMID:27543751
A Framework for a WAP-Based Course Registration System
ERIC Educational Resources Information Center
AL-Bastaki, Yousif; Al-Ajeeli, Abid
2005-01-01
This paper describes a WAP-based course registration system designed and implemented to facilitate the process of student registration at Bahrain University. The framework will support many opportunities for applying WAP based technology to services such as wireless commerce, cashless payment... and location-based services. The paper…
Argumentation in Science Education: A Model-Based Framework
ERIC Educational Resources Information Center
Bottcher, Florian; Meisert, Anke
2011-01-01
The goal of this article is threefold: First, the theoretical background for a model-based framework of argumentation to describe and evaluate argumentative processes in science education is presented. Based on the general model-based perspective in cognitive science and the philosophy of science, it is proposed to understand arguments as reasons…
Finite-sample based learning algorithms for feedforward networks
Rao, N.S.V.; Protopopescu, V.; Mann, R.C.; Oblow, E.M.; Iyengar, S.S.
1995-04-01
We discuss two classes of convergent algorithms for learning continuous functions (and also regression functions) that are represented by FeedForward Networks (FFN). The first class of algorithms, applicable to networks with unknown weights located only in the output layer, is obtained by utilizing the potential function methods of Aizerman et al. The second class, applicable to general feedforward networks, is obtained by utilizing the classical Robbins-Monro style stochastic approximation methods. Conditions relating the sample sizes to the error bounds are derived for both classes of algorithms using martingale-type inequalities. For concreteness, the discussion is presented in terms of neural networks, but the results are applicable to general feedforward networks, in particular to wavelet networks. The algorithms can also be directly applied to concept learning problems. A main distinguishing feature of this work is that the sample sizes are based on explicit algorithms rather than information-based methods.
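For concreteness, a minimal Python sketch of the Robbins-Monro style update underlying the second class of algorithms: w_{t+1} = w_t − a_t g_t with a step-size schedule a_t = c/(t+1), so that Σ a_t diverges while Σ a_t² converges. The `sample_grad` callback (a noisy gradient estimate from one training sample) and the schedule constant are illustrative assumptions:

import numpy as np

def robbins_monro(sample_grad, w0, steps=10000, c=1.0):
    """Stochastic approximation with the classical 1/t step-size schedule."""
    w = np.asarray(w0, dtype=float)
    for t in range(steps):
        w -= (c / (t + 1)) * sample_grad(w)   # noisy gradient from one sample
    return w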
A Moment-Based Condensed History Algorithm
Tolar, D.R.; Larsen, E.W.
2000-06-15
"Condensed History" algorithms are Monte Carlo models for electron transport problems. They describe the aggregate effect of the multiple collisions that occur when an electron travels a path length s₀. This path length is the distance each Monte Carlo electron travels between Condensed History steps. Conventional Condensed History schemes employ a splitting routine over the range 0 ≤ s ≤ s₀. For example, the Random Hinge method splits each path-length step into two substeps: one with length ξs₀ and one with length (1 − ξ)s₀, where ξ is a random number with 0 < ξ < 1. Here we develop a new Condensed History algorithm to improve the accuracy of electron transport simulations by preserving the mean position, and the variance in the mean, of electrons that have traveled a path length s and are traveling with direction cosine μ. These means and variances are obtained from the zeroth-, first-, and second-order spatial moments of the Boltzmann transport equation. Hence, our method is a Monte Carlo application of the "Method of Moments".
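To make the splitting concrete, a minimal Python sketch of one Random Hinge step as described above: a straight flight of length ξs₀, the aggregate deflection, then a straight flight of length (1 − ξ)s₀. The `scatter` callback, which samples the aggregate multiple-scattering deflection over the full step, is a hypothetical placeholder:

import random
import numpy as np

def random_hinge_step(pos, direction, s0, scatter):
    """One Random Hinge Condensed History step of total path length s0.
    pos and direction are NumPy vectors; direction is a unit vector."""
    xi = random.random()                      # hinge point, 0 < xi < 1
    pos = pos + xi * s0 * direction           # substep 1: straight flight xi*s0
    direction = scatter(direction, s0)        # aggregate deflection at the hinge
    pos = pos + (1.0 - xi) * s0 * direction   # substep 2: flight of (1 - xi)*s0
    return pos, direction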
CUDT: A CUDA Based Decision Tree Algorithm
Sheu, Ruey-Kai; Chiu, Chun-Chieh
2014-01-01
The decision tree is one of the well-known classification methods in data mining, and many studies have focused on improving decision tree performance. However, those algorithms were developed for, and run on, traditional distributed systems, and the latency obviously cannot be improved when processing the huge data generated by ubiquitous sensing nodes without the help of new technology. To improve data processing latency in huge-data mining, in this paper we design and implement a new parallelized decision tree algorithm on CUDA (compute unified device architecture), a GPGPU solution provided by NVIDIA. In the proposed system, the CPU is responsible for flow control while the GPU is responsible for computation. We have conducted many experiments to evaluate the system performance of CUDT and made a comparison with the traditional CPU version. The results show that CUDT is 5∼55 times faster than Weka-j48 and achieves an 18-fold speedup over SPRINT for large data sets. PMID:25140346
A Model Independent S/W Framework for Search-Based Software Testing
Baik, Jongmoon
2014-01-01
In the Model-Based Testing (MBT) area, Search-Based Software Testing (SBST) has been employed to generate test cases from the model of a system under test. However, many types of models have been used in MBT, and if the type of model changes, all functions of a search technique must be reimplemented because the model types differ, even when the same search technique is applied. It takes too much time and effort to implement the same algorithm over and over again. We propose a model-independent software framework for SBST that can reduce this redundant work. The framework provides a reusable common software platform to reduce time and effort. It not only presents design patterns to find test cases for a target model but also reduces development time through the common functions provided in the framework. We show the effectiveness and efficiency of the proposed framework with two case studies. The framework improves productivity by about 50% when the type of a model is changed. PMID:25302314
PCA-LBG-based algorithms for VQ codebook generation
NASA Astrophysics Data System (ADS)
Tsai, Jinn-Tsong; Yang, Po-Yuan
2015-04-01
Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector from each group. The LBG algorithm then finds a codebook starting from the initial codebook vectors supplied by the PCA. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of variables that are not linearly correlated. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithms are expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results than existing methods reported in the literature.
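A minimal numpy sketch of the three PCA-based initializations and the subsequent LBG refinement, under our own simplifications (first principal component only, equal-size groups in projection order); it is not the authors' exact procedure.

```python
import numpy as np

def pca_lbg_init(train, n_codes, mode="centroid", rng=None):
    """Initial VQ codebook built from the first principal component."""
    rng = rng or np.random.default_rng(0)
    centered = train - train.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[0]                  # projected values on first PC
    order = np.argsort(proj)
    groups = np.array_split(order, n_codes)  # group vectors by projection
    codes = []
    for g in groups:
        if mode == "centroid":
            codes.append(train[g].mean(axis=0))
        elif mode == "median":
            # middle vector of the group in projection order
            # (a simplification of the group's median vector)
            codes.append(train[g[len(g) // 2]])
        else:                                # "random"
            codes.append(train[g[rng.integers(len(g))]])
    return np.asarray(codes)

def lbg_refine(train, codes, iters=10):
    """Standard LBG refinement starting from the PCA-based codebook."""
    for _ in range(iters):
        d = ((train[:, None, :] - codes[None, :, :]) ** 2).sum(-1)
        nearest = d.argmin(axis=1)
        for k in range(len(codes)):
            members = train[nearest == k]
            if len(members):
                codes[k] = members.mean(axis=0)
    return codes
```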
A modified density-based clustering algorithm and its implementation
NASA Astrophysics Data System (ADS)
Ban, Zhihua; Liu, Jianguo; Yuan, Lulu; Yang, Hua
2015-12-01
This paper presents an improved density-based clustering algorithm based on the method of clustering by fast search and find of density peaks. A distance threshold is introduced for the purpose of economizing memory. In order to reduce the probability that two points share the same density value, similarity is utilized to define the proximity measure. We have tested the modified algorithm on a large data set, several small data sets, and shape data sets. It turns out that the proposed algorithm can obtain acceptable results and can be applied more widely.
A novel bit-quad-based Euler number computing algorithm.
Yao, Bin; He, Lifeng; Kang, Shiying; Chao, Yuyan; Zhao, Xiao
2015-01-01
The Euler number of a binary image is an important topological property in computer vision and pattern recognition. This paper proposes a novel bit-quad-based Euler number computing algorithm. Based on graph theory and analysis of bit-quad patterns, our algorithm only needs to count two bit-quad patterns. Moreover, by using the information obtained while processing the previous bit-quad, the average number of pixels to be checked for processing a bit-quad is only 1.75. Experimental results demonstrated that our method significantly outperforms conventional Euler number computing algorithms. PMID:26636023
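For reference, here is a compact numpy version of the classical bit-quad (Gray) formula that this line of work optimizes; the paper's algorithm reduces the counting to two patterns with an average of 1.75 pixel checks per quad, which this plain sketch does not attempt.

```python
import numpy as np

def euler_number(img, connectivity=8):
    """Euler number of a binary image from 2x2 bit-quad counts
    (classical Gray formula; shown for context, not the paper's
    optimized two-pattern counting scheme)."""
    p = np.pad(img.astype(np.uint8), 1)
    # The four pixels of every 2x2 quad, as shifted views.
    q = (p[:-1, :-1], p[:-1, 1:], p[1:, :-1], p[1:, 1:])
    s = q[0] + q[1] + q[2] + q[3]
    c1 = int(np.sum(s == 1))                     # one foreground pixel
    c3 = int(np.sum(s == 3))                     # three foreground pixels
    cd = int(np.sum((s == 2) & (q[0] == q[3])))  # diagonal pairs
    if connectivity == 8:
        return (c1 - c3 - 2 * cd) // 4
    return (c1 - c3 + 2 * cd) // 4               # 4-connectivity
```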
Real-time blind image deconvolution based on coordinated framework of FPGA and DSP
NASA Astrophysics Data System (ADS)
Wang, Ze; Li, Hang; Zhou, Hua; Liu, Hongjun
2015-10-01
Image restoration takes a crucial place in several important application domains. As algorithms become more complex and their computational requirements increase, there has been a significant rise in the need for accelerated implementations. In this paper, we focus on an efficient real-time image processing system for blind iterative deconvolution by means of the Richardson-Lucy (R-L) algorithm. We study the characteristics of the algorithm, and an image restoration processing system based on the coordinated framework of FPGA and DSP (CoFD) is presented. Single-precision floating-point processing units with small-scale cascades and special FFT/IFFT processing modules are adopted to guarantee the accuracy of the processing. Finally, comparative experiments are conducted. The system can process a blurred image of 128×128 pixels within 32 milliseconds, and is up to three or four times faster than traditional multi-DSP systems.
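A minimal Python sketch of the non-blind R-L update that such a system accelerates in hardware; the blind variant alternates this same update between the image estimate and the PSF estimate. The stabilizing epsilon and iteration count are our choices.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iters=20):
    """Plain R-L iterations (the core kernel accelerated by the CoFD
    system above; not a model of its FPGA/DSP partitioning)."""
    blurred = blurred.astype(float)
    estimate = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1, ::-1]               # adjoint of the blur operator
    for _ in range(iters):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + 1e-12)
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate
```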
Hybrid framework based on evidence theory for blood cell image segmentation
NASA Astrophysics Data System (ADS)
Baghli, Ismahan; Nakib, Amir; Sellam, Elie; Benazzouz, Mourtada; Chikh, Amine; Petit, Eric
2014-03-01
The segmentation of microscopic images is an important issue in biomedical image processing. Many works can be found in the literature; however, there is no gold-standard method able to provide good results for all kinds of microscopic images, so authors propose methods for a given kind of microscopic image. This paper presents a new segmentation framework based on evidence theory, called ESA (Evidential Segmentation Algorithm), to segment blood cell images. The proposed algorithm solves the segmentation problem of blood cell images. Herein, our goal is to extract the components of a given cell image by using evidence theory, which allows more flexibility in classifying the pixels. The obtained results showed the efficiency of the proposed algorithm compared to other competing methods.
Pilot based frameworks for Weather Research Forecasting
NASA Astrophysics Data System (ADS)
Ganapathi, Dinesh Prasanth
The Weather Research Forecasting (WRF) domain consists of complex workflows that demand the use of Distributed Computing Infrastructure (DCI). Weather forecasting requires that weather researchers use different sets of initial conditions and one or a combination of physics models on the same set of input data. For this type of simulation, an ensemble-based computing approach becomes imperative. Most DCIs have local job-schedulers that have no smart way of dealing with the execution of an ensemble-type computational problem, as the job-schedulers are built to cater to the bare essentials of resource allocation. This means the weather scientists have to submit multiple jobs to the job-scheduler. In this dissertation we use Pilot-Job based tools to decouple workload submission and resource allocation, thereby streamlining the complex workflows in the Weather Research and Forecasting domain and reducing their overall time to completion. We also achieve location-independent job execution, data movement, placement, and processing. Next, we create the necessary enablers to run an ensemble of tasks capable of running on multiple heterogeneous distributed computing resources, thereby creating the opportunity to minimize the overall time consumed in running the models. Our experiments show that the tools developed exhibit very good strong- and weak-scaling characteristics. These results bear the potential to change the way weather researchers submit traditional WRF jobs to DCIs by giving them a powerful tool that can exploit the combined power of various heterogeneous DCIs that would otherwise be difficult to harness owing to interoperability issues.
Simple-random-sampling-based multiclass text classification algorithm.
Liu, Wuying; Wang, Lin; Yi, Mianzhu
2014-01-01
Multiclass text classification (MTC) is a challenging issue and the corresponding MTC algorithms can be used in many applications. The space-time overhead of such algorithms is a serious concern in the era of big data. Through an investigation of the token frequency distribution in a Chinese web document collection, this paper reexamines the power law and proposes a simple-random-sampling-based MTC (SRSMTC) algorithm. Supported by a token-level memory to store labeled documents, the SRSMTC algorithm uses a text retrieval approach to solve text classification problems. The experimental results on the TanCorp data set show that the SRSMTC algorithm can achieve state-of-the-art performance at greatly reduced space-time requirements. PMID:24778587
A ray-based algorithm for multi-dimensional linear conversion
Tracy, Eugene R.; Kaufman, Allan N.; Jaun, Andre
2004-04-19
A numerical algorithm is proposed for connecting the incoming and outgoing wave fields in studies of linear conversion. This is the first such ray-based algorithm for wave conversion in multiple spatial dimensions. It is demonstrated that, aside from the overall phase of the coupling, one can directly evaluate all quantities needed for the connection coefficients from the ray geometry. The ray dynamics is generated using the determinant of the dispersion matrix as the Hamiltonian. Using information available while following an incoming ray, the algorithm automatically detects that the ray has entered a conversion region, evaluates the transmission and conversion coefficients, and launches the transmitted ray. The algorithm does not require any prior knowledge of the geometry of the conversion region. The algorithm is illustrated using a two-dimensional toroidal model with resonant conversion from a magnetosonic to an ion-hybrid wave.
Optree: a learning-based adaptive watershed algorithm for neuron segmentation.
Uzunbaş, Mustafa Gökhan; Chen, Chao; Metaxas, Dimitris
2014-01-01
We present a new algorithm for automatic and interactive segmentation of neuron structures from electron microscopy (EM) images. Our method selects a collection of nodes from the watershed merging tree as the proposed segmentation. This is achieved by building a conditional random field (CRF) whose underlying graph is the merging tree. The maximum a posteriori (MAP) prediction of the CRF is the output segmentation. Our algorithm outperforms state-of-the-art methods. Both the inference and the training are very efficient as the graph is tree-structured. Furthermore, we develop an interactive segmentation framework which selects uncertain regions for a user to proofread. The uncertainty is measured by the marginals of the graphical model. Based on user corrections, our framework modifies the merging tree and thus improves the segmentation globally. PMID:25333106
Research on classified real-time flood forecasting framework based on K-means cluster and rough set.
Xu, Wei; Peng, Yong
2015-01-01
This research presents a new classified real-time flood forecasting framework. In this framework, historical floods are classified by a K-means cluster according to the spatial and temporal distribution of precipitation, the time variance of precipitation intensity, and other hydrological factors. Based on the classified results, a rough set is used to extract the identification rules for real-time flood forecasting. Then, the parameters of different categories within the conceptual hydrological model are calibrated using a genetic algorithm. In real-time forecasting, the corresponding category of parameters is selected for flood forecasting according to the obtained flood information. This research tests the new classified framework on Guanyinge Reservoir and compares the framework with the traditional flood forecasting method. It finds that the performance of the new classified framework is significantly better in terms of accuracy. Furthermore, the framework remains applicable in catchments with few historical floods. PMID:26442493
Curriculum Research: Toward a Framework for "Research-based Curricula"
ERIC Educational Resources Information Center
Clements, Douglas H.
2007-01-01
Government agencies and members of the educational research community have petitioned for research-based curricula. The ambiguity of the phrase "research-based", however, undermines attempts to create a shared research foundation for the development of, and informed choices about, classroom curricula. This article presents a framework for the…
An Emerging Framework for Analyzing School-Based Professional Community.
ERIC Educational Resources Information Center
Kruse, Sharon D.; Louis, Karen Seashore
This paper attempts to blend the literature on professionalism with the literature of community, thus positing a framework for a school-based professional community. Sociologists have long distinguished between occupations--even high status ones--and professions. Among the key distinctions of professionalism are: a technical knowledge base shared…
Construct Definition Using Cognitively Based Evidence: A Framework for Practice
ERIC Educational Resources Information Center
Ketterlin-Geller, Leanne R.; Yovanoff, Paul; Jung, EunJu; Liu, Kimy; Geller, Josh
2013-01-01
In this article, we highlight the need for a precisely defined construct in score-based validation and discuss the contribution of cognitive theories to accurately and comprehensively defining the construct. We propose a framework for integrating cognitively based theoretical and empirical evidence to specify and evaluate the construct. We apply…
The new approach for infrared target tracking based on the particle filter algorithm
NASA Astrophysics Data System (ADS)
Sun, Hang; Han, Hong-xia
2011-08-01
Target tracking against complex backgrounds in infrared image sequences is an active research field. It provides an important basis for applications such as video monitoring, precision guidance, video compression, and human-computer interaction. As a typical algorithm in the tracking framework based on filtering and data association, the particle filter, with its non-parametric estimation characteristic, can deal with nonlinear and non-Gaussian problems and is therefore widely used. Various importance densities have been proposed to keep the particle filter valid when the target is occluded or when tracking must recover from failure; however, capturing changes of the state space requires a sufficient number of particles, and this number grows exponentially with the state dimension, which increases the amount of computation. In this paper, the particle filter and mean shift algorithms are combined. The classic mean shift tracking algorithm is easily trapped in local minima and cannot reach the global optimum against complex backgrounds. We therefore extend the classic mean shift tracking framework from two perspectives: adaptive multiple-information fusion and combination with the particle filter framework. Based on the first perspective, we propose an improved mean shift infrared target tracking algorithm based on multiple-information fusion. After analyzing the infrared characteristics of the target, the algorithm first extracts gray-level and edge features and guides both with the target motion information, yielding motion-guided gray-level and motion-guided edge features. A new adaptive fusion mechanism then integrates these two features into the mean shift tracking framework. Finally, we design an automatic target model updating strategy.
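A generic bootstrap particle-filter step in Python, for orientation; in the tracker described above, `likelihood` would score particles with the adaptively fused, motion-guided gray-level and edge features, which we leave as abstract placeholders here along with `motion`.

```python
import numpy as np

rng = np.random.default_rng(2)

def particle_filter_step(particles, weights, motion, likelihood):
    """One generic bootstrap particle-filter step: predict, weight, resample.
    `motion` propagates particles; `likelihood` scores them against the
    current frame (both are caller-supplied placeholders)."""
    particles = motion(particles) + rng.normal(0.0, 2.0, particles.shape)
    weights = weights * likelihood(particles)
    weights /= weights.sum()
    # Systematic resampling to fight weight degeneracy.
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.minimum(idx, n - 1)
    return particles[idx], np.full(n, 1.0 / n)
```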
NASA Astrophysics Data System (ADS)
Yu, Miao; Liu, Cunjia; Chen, Wen-hua; Chambers, Jonathon
2014-06-01
In this work, we propose a new ground moving target indicator (GMTI) radar based ground vehicle tracking method which exploits domain knowledge. Multiple state models are considered and a Monte-Carlo sampling based algorithm is preferred due to the manoeuvring of the ground vehicle and the non-linearity of the GMTI measurement model. Unlike the commonly used algorithms such as the interacting multiple model particle filter (IMMPF) and bootstrap multiple model particle filter (BS-MMPF), we propose a new algorithm integrating the more efficient auxiliary particle filter (APF) into a Bayesian framework. Moreover, since the movement of the ground vehicle is likely to be constrained by the road, this information is taken as the domain knowledge and applied together with the tracking algorithm for improving the tracking performance. Simulations are presented to show the advantages of both the new algorithm and incorporation of the road information by evaluating the root mean square error (RMSE).
A universal variational framework for sparsity-based image inpainting.
Li, Fang; Zeng, Tieyong
2014-10-01
In this paper, we extend an existing universal variational framework for image inpainting with new numerical algorithms. Given a regularization operator Φ and denoting by u the latent image, the basic model is to minimize the ℓ_p (p = 0, 1) norm of Φu while preserving the pixel values outside the inpainting region. Utilizing the operator splitting technique, the original problem can be approximated by a new problem with an extra variable. With the alternating minimization method, the new problem can be decomposed into two subproblems with exact solutions. There are many choices for Φ in our approach, such as the gradient operator, wavelet transform, framelet transform, or other tight frames. Moreover, with slight modification, we can decouple our framework into two relatively independent parts: 1) denoising and 2) linear combination. Therefore, we can take any denoising method, including the BM3D filter, in the denoising step. The numerical experiments on various image inpainting tasks, such as scratch and text removal, randomly missing pixel filling, and block completion, clearly demonstrate the superior performance of the proposed methods. Furthermore, the theoretical convergence of the proposed algorithms is proved. PMID:25122574
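A minimal sketch of the decoupled denoise/linear-combination iteration described above, with Gaussian smoothing standing in for the regularizer-derived denoiser (the paper allows wavelets, framelets, or BM3D in that slot); the iteration count and sigma are our choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def inpaint(f, known, iters=200):
    """Alternate (1) a denoising step on the current estimate and
    (2) a linear-combination step that restores the known pixels.
    `known` is a boolean mask of pixels outside the inpainting region."""
    u = np.where(known, f, f[known].mean())  # crude initialization of the hole
    for _ in range(iters):
        v = gaussian_filter(u, sigma=1.0)    # (1) denoising step
        u = np.where(known, f, v)            # (2) keep known pixel values
    return u
```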
Heuristic-based scheduling algorithm for high level synthesis
NASA Technical Reports Server (NTRS)
Mohamed, Gulam; Tan, Han-Ngee; Chng, Chew-Lye
1992-01-01
A new scheduling algorithm is proposed which uses a combination of a resource utilization chart, a heuristic algorithm to estimate the minimum number of hardware units based on operator mobilities, and a list-scheduling technique to achieve fast and near-optimal schedules. The schedule time of this algorithm is almost independent of the length of operator mobilities, as can be seen from the benchmark example presented (a fifth-order digital elliptic wave filter) when the cycle time was increased from 17 to 18 and then to 21 cycles. It is implemented in C on a SUN3/60 workstation.
Quantum Image Encryption Algorithm Based on Quantum Image XOR Operations
NASA Astrophysics Data System (ADS)
Gong, Li-Hua; He, Xiang-Tao; Cheng, Shan; Hua, Tian-Xiang; Zhou, Nan-Run
2016-03-01
A novel encryption algorithm for quantum images based on quantum image XOR operations is designed. The quantum image XOR operations are designed by using the hyper-chaotic sequences generated with Chen's hyper-chaotic system to control the controlled-NOT operation, which is used to encode gray-level information. The initial conditions of Chen's hyper-chaotic system are the keys, which guarantee the security of the proposed quantum image encryption algorithm. Numerical simulations and theoretical analyses demonstrate that the proposed quantum image encryption algorithm has a larger key space, higher key sensitivity, stronger resistance to statistical analysis, and lower computational complexity than its classical counterparts.
Rate control algorithm based on frame complexity estimation for MVC
NASA Astrophysics Data System (ADS)
Yan, Tao; An, Ping; Shen, Liquan; Zhang, Zhaoyang
2010-07-01
Rate control has not been well studied for multi-view video coding (MVC). In this paper, we propose an efficient rate control algorithm for MVC that improves the quadratic rate-distortion (R-D) model and reasonably allocates bit-rate among the views based on correlation analysis. The proposed algorithm consists of four levels to control the bit rate more accurately, of which the frame layer allocates bits according to frame complexity and temporal activity. Extensive experiments show that the proposed algorithm can efficiently implement bit allocation and rate control according to the coding parameters.
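For concreteness, here is the core per-frame computation in such rate controllers: solving the quadratic R-D model R = a*MAD/Q + b*MAD/Q^2 for the quantization step Q. The parameter names are generic placeholders, not the paper's notation.

```python
import math

def qstep_from_quadratic_rd(target_bits, mad, a, b):
    """Solve R = a*MAD/Q + b*MAD/Q^2 for Q, where `target_bits` is the frame's
    bit budget (allocated by the layers above according to frame complexity
    and temporal activity) and a, b are model parameters fitted from past
    frames. Multiplying by Q^2 gives R*Q^2 - a*MAD*Q - b*MAD = 0."""
    disc = (a * mad) ** 2 + 4.0 * target_bits * b * mad
    return (a * mad + math.sqrt(disc)) / (2.0 * target_bits)  # positive root
```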
A Novel Image Encryption Algorithm Based on DNA Subsequence Operation
Zhang, Qiang; Xue, Xianglian; Wei, Xiaopeng
2012-01-01
We present a novel image encryption algorithm based on DNA subsequence operation. Different from traditional DNA encryption methods, our algorithm does not use complex biological operations; it just uses the idea of DNA subsequence operations (such as the elongation, truncation, and deletion operations) combined with the logistic chaotic map to scramble the locations and values of the pixels in the image. The experimental results and security analysis show that the proposed algorithm is easy to implement, achieves a good encryption effect, has a large key space and strong sensitivity to the secret key, and can resist exhaustive and statistical attacks. PMID:23093912
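A toy Python sketch of the idea: encode each pixel byte as DNA bases and let a logistic-map keystream drive the scrambling of subsequences. The rotation operation below is our simplified stand-in for the paper's elongation/truncation/deletion operations, and all constants are illustrative.

```python
import numpy as np

BASES = "ACGT"  # 2 bits per base: A=00, C=01, G=10, T=11

def logistic(x0, n, r=3.99):
    """Logistic chaotic map keystream; x0 acts as the secret key."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return np.array(xs)

def encrypt(pixels, key_x0):
    """Encode bytes as 4 DNA bases each, then rotate each 4-base subsequence
    by a chaos-driven amount (a stand-in for the paper's DNA operations)."""
    dna = [BASES[(p >> s) & 3] for p in pixels for s in (6, 4, 2, 0)]
    chaos = logistic(key_x0, len(dna) // 4)
    for i, c in enumerate(chaos):
        k = int(c * 3) + 1                    # rotation amount in {1, 2, 3}
        seg = dna[4 * i:4 * i + 4]
        dna[4 * i:4 * i + 4] = seg[k:] + seg[:k]
    out = []
    for i in range(0, len(dna), 4):           # decode bases back to bytes
        v = 0
        for b in dna[i:i + 4]:
            v = (v << 2) | BASES.index(b)
        out.append(v)
    return out
```

Decryption reverses the rotations using the same key-seeded keystream, which is what makes the key sensitivity claims testable.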
Genetic Algorithm Based Neural Networks for Nonlinear Optimization
1994-09-28
This software develops a novel approach to nonlinear optimization using genetic algorithm based neural networks. To the best of our knowledge, this approach represents the first attempt at applying both neural network and genetic algorithm techniques to solve a nonlinear optimization problem. The approach constructs a neural network structure and an appropriately shaped energy surface whose minima correspond to optimal solutions of the problem. A genetic algorithm is employed to perform a parallel and powerful search of the energy surface.
Ray-tracing-based reconstruction algorithms for digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Zhou, Weihua; Lu, Jianping; Zhou, Otto; Chen, Ying
2015-03-01
As a breast-imaging technique, digital breast tomosynthesis has great potential to improve the diagnosis of early breast cancer over mammography. Ray-tracing-based reconstruction algorithms, such as ray-tracing back projection, maximum-likelihood expectation maximization (MLEM), ordered-subset MLEM (OS-MLEM), and the simultaneous algebraic reconstruction technique (SART), have been developed as reconstruction methods for different breast tomosynthesis systems. This paper provides a comparative study to investigate these algorithms by computer simulation and a phantom study. Experimental results suggested that, among the four investigated reconstruction algorithms, OS-MLEM and SART performed better in interplane artifact removal, with fast convergence.
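A generic SART iteration in Python, shown with an explicit system matrix for clarity; production tomosynthesis codes apply A and A^T with a ray tracer rather than storing the matrix, and the relaxation parameter here is our choice.

```python
import numpy as np

def sart(A, b, iters=20, lam=1.0):
    """Generic SART iterations: A's rows are ray integrals through the
    volume, b holds the measured projections, and updates are normalized
    by the row and column sums of A."""
    x = np.zeros(A.shape[1])
    row_sums = A.sum(axis=1) + 1e-12          # normalization over each ray
    col_sums = A.sum(axis=0) + 1e-12          # normalization over each voxel
    for _ in range(iters):
        resid = (b - A @ x) / row_sums
        x = x + lam * (A.T @ resid) / col_sums
        x = np.clip(x, 0.0, None)             # enforce non-negativity
    return x
```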
Algorithm for calculating torque base in vehicle traction control system
NASA Astrophysics Data System (ADS)
Li, Hongzhi; Li, Liang; Song, Jian; Wu, Kaihui; Qiao, Yanjuan; Liu, Xingchun; Xia, Yongguang
2012-11-01
Existing research on the traction control system (TCS) mainly focuses on control methods, such as PID control, fuzzy logic control, etc., aiming at achieving an ideal slip rate of the drive wheel over long control periods. The initial output of the TCS (referred to as the torque base in this paper), which has a great impact on the driving performance of the vehicle in the early cycles, remains to be investigated. In order to improve the control performance of the TCS in the first several cycles, an algorithm is proposed to determine the torque base. First, torque bases are calculated by two different methods, one based on state judgment and the other based on vehicle dynamics. The confidence level of the torque base calculated from vehicle dynamics is also obtained. The final torque base is then determined from the two torque bases and the confidence level. Hardware-in-the-loop (HIL) simulation and vehicle tests emulating sudden starts on low-friction roads have been conducted to verify the proposed algorithm. The control performance of a PID-controlled TCS with and without the proposed torque base algorithm is compared, showing that the proposed algorithm improves the performance of the TCS over the first several cycles and increases vehicle speed by about 5% in comparison. The proposed research provides a more proper initial value for TCS control and improves the performance of the first several control cycles of the TCS.
A novel iris segmentation algorithm based on small eigenvalue analysis
NASA Astrophysics Data System (ADS)
Harish, B. S.; Aruna Kumar, S. V.; Guru, D. S.; Ngo, Minh Ngoc
2015-12-01
In this paper, a simple and robust algorithm is proposed for iris segmentation. The proposed method consists of two steps. In the first step, the iris and pupil are segmented using the Robust Spatial Kernel FCM (RSKFCM) algorithm. RSKFCM is based on the traditional Fuzzy c-Means (FCM) algorithm, incorporates spatial information, and uses a kernel metric as the distance measure. In the second step, the small eigenvalue transformation is applied to localize the iris boundary. The transformation is based on statistical and geometrical properties of the small eigenvalue of the covariance matrix of a set of edge pixels. Extensive experiments are carried out on standard benchmark iris datasets (viz. CASIA-IrisV4 and UBIRIS.v2). We compared our proposed method with existing iris segmentation methods. Our proposed method has the least time complexity, O(n(i+p)). The experimental results emphasize that the proposed algorithm outperforms the existing iris segmentation methods.
Medical image compression algorithm based on wavelet transform
NASA Astrophysics Data System (ADS)
Chen, Minghong; Zhang, Guoping; Wan, Wei; Liu, Minmin
2005-02-01
With the rapid development of electronic imaging and multimedia technology, telemedicine is being applied to modern medical services in hospitals. Digital medical images are characterized by high resolution, high precision, and vast data volume. An optimized compression algorithm can alleviate restrictions on transmission speed and data storage. This paper describes the characteristics of the human vision system based on its physiological structure, analyses the characteristics of medical images in telemedicine, and then proposes an optimized compression algorithm based on the wavelet zerotree. After the image is smoothed, it is decomposed with Haar filters. Then the wavelet coefficients are quantized adaptively. Therefore, we can maximize compression efficiency and achieve a better subjective visual image. This algorithm can be applied to image transmission in telemedicine. In the end, we examined the feasibility of this algorithm with an image transmission experiment over the network.
A new augmentation based algorithm for extracting maximal chordal subgraphs
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2014-10-18
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
A SAR ATR algorithm based on coherent change detection
Harmony, D.W.
2000-12-01
This report discusses an automatic target recognition (ATR) algorithm for synthetic aperture radar (SAR) imagery that is based on coherent change detection techniques. The algorithm relies on templates created from training data to identify targets. Objects are identified or rejected as targets by comparing their SAR signatures with templates using the same complex correlation scheme developed for coherent change detection. Preliminary results are presented in addition to future recommendations.
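For context, here is the windowed complex-correlation (coherence) statistic that coherent change detection relies on, sketched with scipy; the window size and stabilizing epsilon are our choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(f, g, win=5):
    """Sample coherence between two co-registered complex SAR images over a
    sliding window. The ATR scheme above compares object signatures with
    templates using this same complex-correlation statistic."""
    cross = f * np.conj(g)
    # uniform_filter works on real arrays, so average real/imag separately.
    num = uniform_filter(np.real(cross), win) \
        + 1j * uniform_filter(np.imag(cross), win)
    den = np.sqrt(uniform_filter(np.abs(f) ** 2, win)
                  * uniform_filter(np.abs(g) ** 2, win))
    return np.abs(num) / (den + 1e-12)        # values in [0, 1]
```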
Effect of object identification algorithms on feature based verification scores
NASA Astrophysics Data System (ADS)
Weniger, Michael; Friederichs, Petra
2015-04-01
Many modern spatial verification techniques rely on feature identification algorithms. We study the importance of the choice of algorithm and its parameters for the resulting scores. SAL is used as an example to show that these choices have a statistically significant impact on the distributions of object dependent scores. Non-continuous operators used for feature identification are identified as the underlying reason for the observed stability issues, with implications for many feature based verification techniques.
NASA Astrophysics Data System (ADS)
Zsáki, Attila M.; Curran, John H.
2005-11-01
Many field problems, from stress analysis, heat transfer to contaminant transport, deal with disturbances in a continuum caused by a source (defined by its discrete geometry) and a region of interest (where a solution is sought). Depending on the location of regions of interest in relation to the sources, the level of geometric detail necessary to represent the sources in a model can vary considerably. A practical application of stress analysis in mining is the evaluation of the effects of continuous excavation on the states of stress around mine openings. Labour intensive model preparation and lengthy computation coupled with the interpretation of analysis results can have considerable impact on the successful operation of an underground mine, where stope failures can cost tens of millions of dollars and possibly lead to closure of the mine. A framework is proposed based on continuum mechanics principles to automatically optimize the level of geometric detail required for an analysis by simplifying the model geometry using expanded and modified algorithms that originated in computer graphics. This reduction in model size directly translates to savings in computational time. The results obtained from an optimized model have accuracy comparable to the uncertainty in input data (e.g. rock mass properties, geology, etc.). This first paper defines the optimization framework, while a companion paper investigates its efficiency and application to practical mining and excavation-related problems.
NASA Astrophysics Data System (ADS)
Huang, Ding-jiang; Ivanova, Nataliya M.
2016-02-01
In this paper, we explain in more detail the modern treatment of the problem of group classification of (systems of) partial differential equations (PDEs) from the algorithmic point of view. More precisely, we revise the classical Lie algorithm of construction of symmetries of differential equations, describe the group classification algorithm, and discuss the process of reduction of (systems of) PDEs to (systems of) equations with a smaller number of independent variables in order to construct invariant solutions. The group classification algorithm and reduction process are illustrated by the example of the generalized Zakharov-Kuznetsov (GZK) equations of the form $u_t + (F(u))_{xxx} + (G(u))_{xyy} + (H(u))_x = 0$. As a result, a complete group classification of the GZK equations is performed and a number of new interesting nonlinear invariant models which have non-trivial invariance algebras are obtained. Lie symmetry reductions and exact solutions for two important invariant models, i.e., the classical and modified Zakharov-Kuznetsov equations, are constructed. The algorithmic framework for group analysis of differential equations presented in this paper can also be applied to other nonlinear PDEs.
NASA Astrophysics Data System (ADS)
Salah, Ahmad M.; Nelson, E. James; Williams, Gustavious P.
2010-04-01
We present algorithms and tools we developed to automatically link an overland flow model to a hydrodynamic water quality model with different spatial and temporal discretizations. These tools run the linked models and provide a stochastic simulation framework. We also briefly present the tools and algorithms we developed to facilitate and analyze stochastic simulations of the linked models. We demonstrate the algorithms by linking the Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model for overland flow with the CE-QUAL-W2 model for water quality and reservoir hydrodynamics. GSSHA uses a two-dimensional horizontal grid while CE-QUAL-W2 uses a two-dimensional vertical grid. We implemented the algorithms and tools in the Watershed Modeling System (WMS), which allows modelers to easily create and use models. The algorithms are general and could be used for other models. Our tools create and analyze stochastic simulations to help understand uncertainty in the model application. While a number of examples of linked models exist, the ability to perform automatic, unassisted linking is a step forward and provides the framework to easily implement stochastic modeling studies.
A novel image compression algorithm based on the biorthogonal invariant set multiwavelet
NASA Astrophysics Data System (ADS)
Li, Yongjun; Li, Yunsong; Liu, Weijia
2015-05-01
On the basis of the theory of biorthogonal invariant set multiwavelets (BISM) established by Micchelli and Xu, a BISM filter is designed, and the decomposition and reconstruction algorithms for this filter are given in this paper. The filter has many desirable characteristics, such as symmetry, compact support, orthogonality, and low complexity. In this filter, the self-affine triangle domain serves as the support interval, and a constant function serves as the scaling function. Advantages for image compression, such as low algorithmic complexity, high concentration of energy and entropy after transformation, and the absence of blocking effects (which facilitates parallel computing), are analyzed. Finally, the validity of the image compression algorithm based on the biorthogonal invariant set multiwavelet is verified within an approximate JPEG2000 framework.
List-Based Simulated Annealing Algorithm for Traveling Salesman Problem
Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun
2016-01-01
Simulated annealing (SA) is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor for its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms. PMID:27034650
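A condensed Python reading of the scheme (our simplification, not the reference implementation): the maximum temperature in the list drives Metropolis acceptance, and each accepted worse move feeds an adapted temperature back into the list.

```python
import math, random

def lbsa_tsp(dist, n_temps=100, iters=20000, seed=0):
    """Simplified list-based SA for TSP with 2-opt neighbours; `dist` is a
    square distance matrix (nested lists or a 2D array)."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)

    def length(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    cur = length(tour)
    # Initial temperature list, seeded at the scale of the tour length.
    temps = sorted((rng.uniform(0.5, 1.0) * cur for _ in range(n_temps)),
                   reverse=True)
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j][::-1] + tour[j:]   # 2-opt neighbour
        delta = length(cand) - cur
        t_max = temps[0]
        if delta <= 0:
            tour, cur = cand, cur + delta
        else:
            r = rng.random()
            if 0.0 < r < math.exp(-delta / t_max):
                tour, cur = cand, cur + delta
                # Adapt the list: replace t_max by the temperature that would
                # have accepted this move with probability exactly r.
                temps[0] = -delta / math.log(r)
                temps.sort(reverse=True)
    return tour, cur
```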
Moore, Timothy S; Dowell, Mark D; Bradt, Shane; Verdu, Antonio Ruiz
2014-03-01
Bio-optical models are based on relationships between the spectral remote sensing reflectance and optical properties of in-water constituents. The wavelength range where this information can be exploited changes depending on the water characteristics. In low chlorophyll-a waters, the blue/green region of the spectrum is more sensitive to changes in chlorophyll-a concentration, whereas the red/NIR region becomes more important in turbid and/or eutrophic waters. In this work we present an approach to manage the shift from blue/green-ratio to red/NIR-based chlorophyll-a algorithms for optically complex waters. Based on a combined in situ data set of coastal and inland waters, measures of overall algorithm uncertainty were roughly equal for two chlorophyll-a algorithms (the standard NASA OC4 algorithm based on blue/green bands and a MERIS 3-band algorithm based on red/NIR bands), with RMS errors of 0.416 and 0.437 in log chlorophyll-a units, respectively. However, it is clear that each algorithm performs better over different chlorophyll-a ranges. When a blending approach is used based on an optical water type (OWT) classification, the overall RMS error was reduced to 0.320. Bias and relative error were also reduced when evaluating the blended chlorophyll-a product compared to either of the single-algorithm products. As a demonstration for ocean color applications, the algorithm blending approach was applied to MERIS imagery over Lake Erie. We also examined the use of this approach in several coastal marine environments, and examined the long-term frequency of the OWTs in MODIS-Aqua imagery over Lake Erie. PMID:24839311
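The blending step itself is simple; here is a sketch under our assumptions, where `membership` is a placeholder for the optical-water-type classifier and `chl_algos` holds the per-type retrieval algorithms (e.g. an OC4-style blue/green ratio for clear types and a red/NIR 3-band model for turbid types).

```python
import numpy as np

def blended_chl(rrs, chl_algos, membership):
    """Blend candidate chlorophyll-a retrievals by optical-water-type (OWT)
    membership. `rrs` is a reflectance spectrum; `chl_algos` is a list of
    per-OWT retrieval functions; `membership` returns per-type weights that
    sum to one (all names here are placeholders, not the paper's API)."""
    w = membership(rrs)                       # weights over water types
    chl = np.array([algo(rrs) for algo in chl_algos])
    return float(np.dot(w, chl))              # membership-weighted blend
```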
A framework for disseminating evidence-based health promotion practices.
Harris, Jeffrey R; Cheadle, Allen; Hannon, Peggy A; Forehand, Mark; Lichiello, Patricia; Mahoney, Eustacia; Snyder, Susan; Yarrow, Judith
2012-01-01
Wider adoption of evidence-based, health promotion practices depends on developing and testing effective dissemination approaches. To assist in developing these approaches, we created a practical framework drawn from the literature on dissemination and our experiences disseminating evidence-based practices. The main elements of our framework are 1) a close partnership between researchers and a disseminating organization that takes ownership of the dissemination process and 2) use of social marketing principles to work closely with potential user organizations. We present 2 examples illustrating the framework: EnhanceFitness, for physical activity among older adults, and American Cancer Society Workplace Solutions, for chronic disease prevention among workers. We also discuss 7 practical roles that researchers play in dissemination and related research: sorting through the evidence, conducting formative research, assessing readiness of user organizations, balancing fidelity and reinvention, monitoring and evaluating, influencing the outer context, and testing dissemination approaches. PMID:22172189
Image enhancement algorithm based on improved lateral inhibition network
NASA Astrophysics Data System (ADS)
Yun, Haijiao; Wu, Zhiyong; Wang, Guanjun; Tong, Gang; Yang, Hua
2016-05-01
There is often substantial noise and blurred detail in images captured by cameras. To solve this problem, we propose a novel image enhancement algorithm combined with an improved lateral inhibition network. Firstly, we built a mathematical model of a lateral inhibition network in conjunction with biological visual perception; this model helps to enhance contrast and improve edge definition in images. Secondly, we proposed that the adaptive lateral inhibition coefficient adhere to an exponential distribution, making the model more flexible and more universal. Finally, we added median filtering and a compensation measure factor to build a framework with high-pass filtering functionality, eliminating image noise and improving edge contrast, thereby addressing the problem of blurred image edges. Our experimental results show that our algorithm is able to eliminate noise and blurring and enhance the details of visible and infrared images.
Compressed Sensing Photoacoustic Imaging Based on Fast Alternating Direction Algorithm
Liu, Xueyan; Peng, Dong; Guo, Wei; Ma, Xibo; Yang, Xin; Tian, Jie
2012-01-01
Photoacoustic imaging (PAI) has been employed to reconstruct endogenous optical contrast present in tissues. At the cost of longer calculations, a compressive sensing reconstruction scheme can achieve artifact-free imaging with fewer measurements. In this paper, an effective acceleration framework using the alternating direction method (ADM) was proposed for recovering images from limited-view and noisy observations. Results of the simulation demonstrated that the proposed algorithm could perform favorably in comparison to two recently introduced algorithms in computational efficiency and data fidelity. In particular, it ran considerably faster than these two methods. PAI with ADM can improve convergence speed with fewer ultrasonic transducers, enabling a high-performance and cost-effective PAI system for biomedical applications. PMID:23365553
Flexible Phrase Based Query Handling Algorithms.
ERIC Educational Resources Information Center
Wilbur, W. John; Kim, Won
2001-01-01
Flexibility in query handling can be important if one types a search engine query that is misspelled, contains terms not in the database, or requires knowledge of a controlled vocabulary. Presents results of experiments that suggest the optimal form of similarity functions that are applicable to the task of phrase based retrieval to find either…
Particle flow reconstruction based on the directed tree clustering algorithm
Chakraborty, D.; Lima, J. G. R.; McIntosh, R.; Zutshi, V.
2006-10-27
We present the status of particle flow algorithm development at Northern Illinois University. A key element in our approach is the calorimeter-based directed tree clustering algorithm. We have attempted to identify and tackle the essential challenges and analyze the effect of several different approaches to the reconstruction of jet energies and the Z-boson mass. A number of possibilities have been studied, such as analog vs. digital energy measurement, hit density-based clustering and the use of single or multiple energy thresholds. We plan to use this PFA-based reconstruction to compare some of the proposed detector technologies and geometries.
Cloud Computing-Based TagSNP Selection Algorithm for Human Genome Data
Hung, Che-Lun; Chen, Wen-Pei; Hua, Guan-Jie; Zheng, Huiru; Tsai, Suh-Jen Jane; Lin, Yaw-Ling
2015-01-01
Single nucleotide polymorphisms (SNPs) play a fundamental role in human genetic variation and are used in medical diagnostics, phylogeny construction, and drug design. They provide the highest-resolution genetic fingerprint for identifying disease associations and human features. Haplotypes are regions of linked genetic variants that are closely spaced on the genome and tend to be inherited together. Genetics research has revealed SNPs within certain haplotype blocks that introduce few distinct common haplotypes into most of the population. Haplotype block structures are used in association-based methods to map disease genes. In this paper, we propose an efficient algorithm for identifying haplotype blocks in the genome. In chromosomal haplotype data retrieved from the HapMap project website, the proposed algorithm identified longer haplotype blocks than an existing algorithm. To enhance its performance, we extended the proposed algorithm into a parallel algorithm that copies data in parallel via the Hadoop MapReduce framework. The proposed MapReduce-paralleled combinatorial algorithm performed well on real-world data obtained from the HapMap dataset; the improvement in computational efficiency was proportional to the number of processors used. PMID:25569088
Development of a rule-based algorithm for rice cultivation mapping using Landsat 8 time series
NASA Astrophysics Data System (ADS)
Karydas, Christos G.; Toukiloglou, Pericles; Minakou, Chara; Gitas, Ioannis Z.
2015-06-01
In the framework of the ERMES project (FP7 66983), an algorithm for mapping rice cultivation extents using medium-high resolution satellite data was developed. ERMES (An Earth obseRvation Model based RicE information Service) aims to develop a prototype of a downstream service for rice yield modelling based on a combination of Earth Observation and in situ data. The algorithm was designed as a set of rules applied on a time series of Landsat 8 images, acquired throughout the rice cultivation season of 2014 from the plain of Thessaloniki, Greece. The rules rely on the use of spectral indices, such as the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Water Index (NDWI), and the Normalized Seasonal Wetness Index (NSWI), extracted from the Landsat 8 dataset. The algorithm is subdivided into two phases: a) a hard classification phase, resulting in a binary map (rice/no-rice), where pixels are judged according to their performance in all the images of the time series, with index thresholds defined after a trial-and-error approach; b) a soft classification phase, resulting in a fuzzy map, by assigning scores to the pixels which passed (as 'rice') the first phase. Finally, a user-defined threshold on the fuzzy score discriminates rice from no-rice pixels in the output map. The algorithm was tested on a subset of the Thessaloniki plain against a set of selected field data. The results indicated an overall accuracy of the algorithm higher than 97%. The algorithm was also applied to a study area in Spain (Valencia), and a preliminary test indicated a similar performance, i.e. about 98%. Currently, the algorithm is being modified so as to map rice extents early in the cultivation season (by the end of June), with a view to contributing more substantially to the rice yield prediction service of ERMES. Both algorithm modes (late and early) are planned to be tested in additional Mediterranean study areas in Greece, Italy, and Spain.
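To make the two-phase rule structure concrete, here is a toy per-pixel scoring function; every threshold below is an illustrative placeholder, not a calibrated ERMES value.

```python
import numpy as np

def rice_score(ndvi, ndwi, nswi):
    """Toy two-phase rule set for one pixel's Landsat 8 time series (arrays
    ordered by acquisition date). Thresholds are illustrative only."""
    # Hard phase: flooded early season (high wetness), vigorous growth later.
    flooded_early = ndwi[:3].max() > 0.2 or nswi[:3].max() > 0.3
    green_peak = ndvi[3:].max() > 0.6
    if not (flooded_early and green_peak):
        return 0.0                            # rejected as no-rice
    # Soft phase: score how strongly the pixel expresses rice phenology;
    # a user-defined threshold on this score produces the final map.
    return 0.5 * min(ndvi[3:].max() / 0.9, 1.0) \
         + 0.5 * min(ndwi[:3].max() / 0.5, 1.0)
```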
A robust DCT domain watermarking algorithm based on chaos system
NASA Astrophysics Data System (ADS)
Xiao, Mingsong; Wan, Xiaoxia; Gan, Chaohua; Du, Bo
2009-10-01
Digital watermarking is a technique that can be used for protecting and enforcing the intellectual property (IP) rights of digital media, such as digital images involved in copyright transactions. Many digital watermarking algorithms exist; however, existing algorithms are not robust enough against geometric attacks and signal processing operations. In this paper, a robust watermarking algorithm for gray images based on a chaos array in the DCT (discrete cosine transform) domain is proposed. The algorithm provides a one-to-one method to extract the watermark. Experimental results have proved that this new method has high accuracy and is highly robust against geometric attacks, signal processing operations, and geometric transformations. Furthermore, anyone without knowledge of the key cannot find the position of the embedded watermark. As a result, the watermark is not easy to modify, so this scheme is secure and robust.
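A minimal sketch of chaos-positioned DCT embedding in Python; the block choice, coefficient location, and strength alpha are our illustrative parameters, and collisions between chaos-selected blocks are ignored for brevity.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):  return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
def idct2(b): return idct(idct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

def embed(img, bits, key_x0, alpha=8.0):
    """Chaos-positioned DCT watermarking sketch: a logistic map seeded by the
    key selects which 8x8 block carries each bit, and the bit is added to a
    fixed mid-frequency coefficient. Without the key (x0), an attacker cannot
    reproduce the block sequence."""
    out = img.astype(float).copy()
    h, w = (s // 8 for s in img.shape)
    x = key_x0
    for bit in bits:
        x = 3.99 * x * (1.0 - x)              # chaotic block selection
        r, c = divmod(int(x * h * w), w)
        block = out[8 * r:8 * r + 8, 8 * c:8 * c + 8]
        coef = dct2(block)
        coef[3, 4] += alpha if bit else -alpha  # mid-frequency embedding
        out[8 * r:8 * r + 8, 8 * c:8 * c + 8] = idct2(coef)
    return out
```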
Genetic-algorithm-based tri-state neural networks
NASA Astrophysics Data System (ADS)
Uang, Chii-Maw; Chen, Wen-Gong; Horng, Ji-Bin
2002-09-01
A new method, using genetic algorithms, for constructing a tri-state neural network is presented. The global searching features of genetic algorithms are adopted to help find the interconnection weight matrix of a bipolar neural network. The construction method is modeled on biological nervous systems, which evolve parameters encoded in genes. Taking advantage of conventional (binary) genetic algorithms, a two-level chromosome structure is proposed for training the tri-state neural network. A Matlab program was developed to simulate network performance. The results show that the proposed genetic-algorithm method not only constructs the interconnection weight matrix accurately, but also yields better network performance.
Filter model based dwell time algorithm for ion beam figuring
NASA Astrophysics Data System (ADS)
Li, Yun; Xing, Tingwen; Jia, Xin; Wei, Haoming
2010-10-01
The process of Ion Beam Figuring (IBF) can be described by a two-dimensional convolution equation that includes the dwell time. Solving for the dwell time is a key problem in IBF. Theoretically, the dwell time can be obtained from a two-dimensional deconvolution; however, the problem is often ill-posed, and a suitable solution is hard to obtain. In this article, a dwell time algorithm is proposed that exploits the characteristics of IBF. The Beam Removal Function (BRF) in IBF is usually Gaussian and can be regarded as an inverted Gaussian filter whose stop-band attenuates different frequencies by different amounts. The dwell time algorithm proposed in this article is based on this concept. The Curved Surface Smooth Extension (CSSE) method and the Fast Fourier Transform (FFT) algorithm are also used. The simulation results show that this algorithm is highly precise, effective, and suitable for practical application.
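In this model the surface error map E is the BRF B convolved with the dwell time T, i.e. E = B * T. A minimal sketch of a regularized frequency-domain inversion in the spirit of the filter view described above (the eps roll-off constant is an illustrative tuning knob, not the paper's rule):

    import numpy as np

    def dwell_time(error_map, brf, eps=1e-2):
        """Sketch of a filter-style dwell-time solver for IBF.

        error_map: measured surface error; brf: beam removal function
        sampled on the same grid and centered.  In the Fourier domain
        E(f) = B(f) T(f); a Gaussian BRF is low-pass, so we invert only
        where |B(f)| is large and roll off elsewhere (Wiener-like inverse).
        """
        E = np.fft.fft2(error_map)
        B = np.fft.fft2(np.fft.ifftshift(brf))
        T = E * np.conj(B) / (np.abs(B) ** 2 + eps)
        dwell = np.real(np.fft.ifft2(T))
        return np.clip(dwell, 0.0, None)  # dwell time must be non-negative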
Validation of a Bayesian-based isotope identification algorithm
NASA Astrophysics Data System (ADS)
Sullivan, C. J.; Stinnett, J.
2015-06-01
Handheld radio-isotope identifiers (RIIDs) are widely used in Homeland Security and other nuclear safety applications. However, most commercially available devices have serious problems in their ability to correctly identify isotopes. It has been reported that this flaw is largely due to the overly simplistic identification algorithms on-board the RIIDs. This paper reports on the experimental validation of a new isotope identification algorithm using a Bayesian statistics approach to identify the source while allowing for calibration drift and unknown shielding. We present here results on further testing of this algorithm and a study on the observed variation in the gamma peak energies and areas from a wavelet-based peak identification algorithm.
[An algorithm of a wavelet-based medical image quantization].
Hou, Wensheng; Wu, Xiaoying; Peng, Chenglin
2002-12-01
The compression of medical images is key to the study of telemedicine and PACS. We have studied the statistical distribution of wavelet subimage coefficients and concluded that it closely resembles a Laplacian distribution. Based on the statistical properties of image wavelet decomposition, an image quantization algorithm is proposed in which the sample standard deviation is selected as the key quantization threshold in every wavelet subimage. Tests have shown that the main advantages of this algorithm are simple computation and the predictability of coefficients within each quantization threshold range, while high compression efficiency can be obtained. This algorithm can therefore potentially be used in telemedicine and PACS. PMID:12561372
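A minimal sketch of the subband quantizer, assuming PyWavelets and a plain uniform quantizer (the paper's exact quantizer details may differ):

    import numpy as np
    import pywt

    def quantize_subbands(image, wavelet='db4', level=3):
        # Decompose, then use each detail subband's sample standard
        # deviation as its quantization step, as the abstract suggests
        # (illustrative uniform quantizer).
        coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
        out = [coeffs[0]]  # keep the approximation subband unquantized
        for detail in coeffs[1:]:
            out.append(tuple(np.round(d / max(d.std(), 1e-12)) for d in detail))
        return out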
Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms
NASA Astrophysics Data System (ADS)
Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan
2010-12-01
This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select the effective Gabor features, a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA): it solves the small-sample-size and ill-posed problems that QDA and LDA suffer from through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.
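For reference, a commonly used form of the RDA covariance regularization (a textbook presentation, not necessarily this paper's exact parameterization) blends each class covariance with the pooled covariance and then shrinks toward a scaled identity; the pair (lambda, gamma) is what the PSO step tunes:

    \hat{\Sigma}_k(\lambda) = (1-\lambda)\,\hat{\Sigma}_k + \lambda\,\hat{\Sigma},
    \qquad
    \hat{\Sigma}_k(\lambda,\gamma) = (1-\gamma)\,\hat{\Sigma}_k(\lambda)
        + \frac{\gamma}{d}\,\operatorname{tr}\!\big[\hat{\Sigma}_k(\lambda)\big]\, I

With lambda = 1, gamma = 0 this reduces to LDA's pooled covariance; with lambda = 0, gamma = 0 it reduces to QDA.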
An Agent-based Framework for Web Query Answering.
ERIC Educational Resources Information Center
Wang, Huaiqing; Liao, Stephen; Liao, Lejian
2000-01-01
Discusses discrepancies between user queries on the Web and the answers provided by information sources; proposes an agent-based framework for Web mining tasks; introduces an object-oriented deductive data model and a flexible query language; and presents a cooperative mechanism for query answering. (Author/LRW)
Towards multifunctional lanthanide-based metal-organic frameworks.
Tobin, Gerard; Comby, Steve; Zhu, Nianyong; Clérac, Rodolphe; Gunnlaugsson, Thorfinnur; Schmitt, Wolfgang
2015-09-01
We report the synthesis, structure and physicochemical attributes of a new holmium(III)-based metal-organic framework whose 3D network structure gives rise to porosity; the reported structure-type can be varied using a range of different lanthanide ions to tune the photophysical properties and produce ligand-sensitised near-infrared (NIR) and visible light emitters. PMID:26207535
Methodology Evaluation Framework for Component-Based System Development.
ERIC Educational Resources Information Center
Dahanayake, Ajantha; Sol, Henk; Stojanovic, Zoran
2003-01-01
Explains component-based development (CBD) for distributed information systems and presents an evaluation framework, which highlights the extent to which a methodology is component oriented. Compares prominent CBD methods, discusses ways of modeling, and suggests that this is a first step towards a components-oriented systems development…
The Evidence-Based Reasoning Framework: Assessing Scientific Reasoning
ERIC Educational Resources Information Center
Brown, Nathaniel J. S.; Furtak, Erin Marie; Timms, Michael; Nagashima, Sam O.; Wilson, Mark
2010-01-01
Recent science education reforms have emphasized the importance of students engaging with and reasoning from evidence to develop scientific explanations. A number of studies have created frameworks based on Toulmin's (1958/2003) argument pattern, whereas others have developed systems for assessing the quality of students' reasoning to support…
A Proposed Framework for Conducting Data-Based Test Analysis
ERIC Educational Resources Information Center
Slaney, Kathleen L.; Maraun, Michael D.
2008-01-01
The authors argue that the current state of applied data-based test analytic practice is unstructured and unmethodical due in large part to the fact that there is no clearly specified, widely accepted test analytic framework for judging the performances of particular tests in particular contexts. Drawing from the extant test theory literature,…
A novel sparse coding algorithm for classification of tumors based on gene expression data.
Kolali Khormuji, Morteza; Bazrafkan, Mehrnoosh
2016-06-01
High-dimensional genomic and proteomic data play an important role in many applications in medicine such as prognosis of diseases, diagnosis, prevention and molecular biology, to name a few. Classifying such data is a challenging task due to issues such as the curse of dimensionality, noise and redundancy. Recently, some researchers have used sparse representation (SR) techniques to analyze high-dimensional biological data, in particular for classifying cancer patients based on gene expression datasets. A common problem with all SR-based biological data classification methods is that they cannot utilize the topological (geometrical) structure of the data. More precisely, these methods transfer the data into a sparse feature space without preserving the local structure of the data points. In this paper, we propose a novel SR-based cancer classification algorithm based on gene expression data that takes the geometrical information of all data into account. Precisely speaking, we incorporate the local linear embedding algorithm into the sparse coding framework, by which we can preserve the geometrical structure of all data. For performance comparison, we applied our algorithm to six tumor gene expression datasets, demonstrating that the proposed method achieves higher classification accuracy than state-of-the-art SR-based tumor classification algorithms. PMID:26337064
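One way to make this concrete (an illustrative formulation of the stated idea, not the paper's exact objective) is to add an LLE-style locality penalty, with reconstruction weights W from the local linear embedding step, to the usual sparse coding objective over a dictionary D and codes alpha_i:

    \min_{D,\{\alpha_i\}} \sum_i \Big( \|x_i - D\alpha_i\|_2^2
        + \lambda \|\alpha_i\|_1 \Big)
        + \gamma \sum_i \Big\| \alpha_i - \sum_j W_{ij}\,\alpha_j \Big\|_2^2

The third term forces the codes of neighboring samples to preserve the same local linear geometry as the original gene expression profiles.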
Adaptive NUC algorithm for uncooled IRFPA based on neural networks
NASA Astrophysics Data System (ADS)
Liu, Ziji; Jiang, Yadong; Lv, Jian; Zhu, Hongbin
2010-10-01
With developments in uncooled infrared focal plane array (UFPA) technology, many new advanced uncooled infrared sensors are used in defensive weapons, scientific research, industry and commercial applications. A major difference between an IRFPA imaging system and a visible CCD camera is that an IRFPA needs nonuniformity correction (NUC) and dead pixel compensation, usually called infrared image pre-processing. Calibration-based two-point or multi-point correction algorithms can correct the non-uniformity of IRFPAs, but they are limited by pixel nonlinearity and instability. Therefore, adaptive non-uniformity correction techniques have been developed, of which two are most often discussed: one based on a temporal high-pass filter, and another based on neural networks. In this paper, a new NUC algorithm based on an improved neural network is introduced, and the improved neural network is compared with other adaptive correction techniques from several angles, such as correction effect, computational efficiency and hardware implementation. According to the results and discussion, it can be concluded that the adaptive algorithm offers improved performance compared with traditional calibration-mode techniques. The new algorithm not only provides better sensitivity, but also increases the system dynamic range. As sensor applications expand, it will be very useful in future infrared imaging systems.
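A minimal sketch of the classical neural-network NUC baseline that such work improves on (Scribner-style LMS updates of per-pixel gain and offset toward a local spatial average; the learning rate and neighborhood choice are illustrative):

    import numpy as np

    def nn_nuc_step(frame, gain, offset, lr=1e-3):
        """One LMS update of a Scribner-style neural-network NUC.

        Each pixel's corrected value is pulled toward the mean of its
        4-neighborhood, which plays the role of the desired output.
        """
        corrected = gain * frame + offset
        # 4-neighbor spatial average as the target image.
        target = 0.25 * (np.roll(corrected, 1, 0) + np.roll(corrected, -1, 0) +
                         np.roll(corrected, 1, 1) + np.roll(corrected, -1, 1))
        err = corrected - target
        gain -= lr * err * frame   # steepest-descent parameter updates
        offset -= lr * err
        return corrected, gain, offset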
A Pressure Based Multi-Fluid Algorithm for Multiphase Flow
NASA Astrophysics Data System (ADS)
Ming, P. J.; Zhang, W. P.; Lei, G. D.; Zhu, M. G.
A new finite-volume-based numerical algorithm for predicting multiphase flow phenomena is presented. The method is formulated on an orthogonal coordinate system in collocated primitive variables. The SIMPLE-like algorithms are based on a prediction and correction procedure and are extended to all speed ranges. The object of the present work is to extend the single-phase SIMPLE algorithm to multiphase flow. An overview of the algorithm is given and the relevant numerical issues are discussed extensively, including the implicit treatment of the momentum interaction with “partial elimination” of the drag term, the introduction of under-relaxation factors, the formulation of momentum interpolation, and the pressure correction equation. The turbulence model is a k-ɛ model in which turbulence is assumed to be dictated by the continuous phase; thus only the transport equation for the continuous-phase turbulence energy kc needs to be solved, while an algebraic turbulence model is used for the dispersed phase. The authors also designed a general FORTRAN 90 program for the new algorithm based on the in-house code General Transport Equation Analyzer (GTEA). The performance of the new method is assessed by solving a 3D bubbly two-phase flow in a vertical pipe. Good agreement is achieved between the numerical results and experimental data in the literature.
Texture Analysis of Chaotic Coupled Map Lattices Based Image Encryption Algorithm
NASA Astrophysics Data System (ADS)
Khan, Majid; Shah, Tariq; Batool, Syeda Iram
2014-09-01
In recent years, data security has become essential in areas such as Internet communication, multimedia systems, medical imaging, telemedicine and military communication. Many existing schemes, however, suffer from problems such as a lack of robustness and security. In this letter, after investigating the fundamental properties of chaotic trigonometric maps and coupled map lattices, we present a chaos-based image encryption algorithm built on coupled map lattices. The proposed mechanism reduces the periodic effect of ergodic dynamical systems in chaos-based image encryption. To assess the security of the encrypted image under this scheme, the correlation of adjacent pixels and texture characteristics were analyzed. The algorithm aims to minimize the problems arising in image encryption.
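A minimal sketch of a diffusively coupled map lattice, using the logistic map for brevity (the paper uses chaotic trigonometric maps, so this is illustrative only):

    import numpy as np

    def cml_step(x, eps=0.1, r=3.99):
        # Diffusively coupled logistic lattice with periodic boundaries:
        # x_{n+1}(i) = (1-eps) f(x_n(i)) + eps/2 [f(x_n(i-1)) + f(x_n(i+1))]
        f = r * x * (1.0 - x)
        return (1.0 - eps) * f + 0.5 * eps * (np.roll(f, 1) + np.roll(f, -1))

    # Iterate past a transient, then use the lattice state as a keystream.
    x = np.random.default_rng(42).random(256)
    for _ in range(1000):
        x = cml_step(x)
    keystream = (x * 256).astype(np.uint8)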
A Turn-Projected State-Based Conflict Resolution Algorithm
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Lewis, Timothy A.
2013-01-01
State-based conflict detection and resolution (CD&R) algorithms detect conflicts and resolve them on the basis of current state information, without the use of additional intent information from aircraft flight plans. Therefore, the prediction of the trajectory of aircraft is based solely upon the position and velocity vectors of the traffic aircraft. Most CD&R algorithms project the traffic state using only the current state vectors. However, the past state vectors can be used to make a better prediction of the future trajectory of the traffic aircraft. This paper explores the idea of using past state vectors to detect traffic turns and resolve conflicts caused by these turns using a non-linear projection of the traffic state. A new algorithm based on this idea is presented and validated using a fast-time simulator developed for this study.
Template based illumination compensation algorithm for multiview video coding
NASA Astrophysics Data System (ADS)
Li, Xiaoming; Jiang, Lianlian; Ma, Siwei; Zhao, Debin; Gao, Wen
2010-07-01
Recently, the multiview video coding (MVC) standard has been finalized as an extension of H.264/AVC by the Joint Video Team (JVT). In the Joint Multiview Video Model (JMVM) project for the standardization, illumination compensation (IC) is adopted as a useful tool. In this paper, a novel template-based illumination compensation algorithm is proposed. The basic idea of the algorithm is that the illumination of the current block is strongly correlated with that of its adjacent template. Based on this idea, a template-based illumination compensation method is first presented, and then a template model selection strategy is devised to improve the illumination compensation performance. The experimental results show that the proposed algorithm can improve coding efficiency significantly.
Modeling asset price processes based on mean-field framework
NASA Astrophysics Data System (ADS)
Ieda, Masashi; Shiino, Masatoshi
2011-12-01
We propose a model of the dynamics of financial assets based on the mean-field framework. This framework allows us to construct a model which includes the interaction among the financial assets reflecting the market structure. Our study is on the cutting edge in the sense of a microscopic approach to modeling the financial market. To demonstrate the effectiveness of our model concretely, we provide a case study, which is the pricing problem of the European call option with short-time memory noise.
Dshell++: A Component Based, Reusable Space System Simulation Framework
NASA Technical Reports Server (NTRS)
Lim, Christopher S.; Jain, Abhinandan
2009-01-01
This paper describes the multi-mission Dshell++ simulation framework for high fidelity, physics-based simulation of spacecraft, robotic manipulation and mobility systems. Dshell++ is a C++/Python library which uses modern script-driven object-oriented techniques to allow component reuse and a dynamic run-time interface for complex, high-fidelity simulation of spacecraft and robotic systems. The goal of the Dshell++ architecture is to manage the inherent complexity of physics-based simulations while supporting component model reuse across missions. The framework provides several features that support a large degree of simulation configurability and usability.
NASA Astrophysics Data System (ADS)
Melli, Seyed Ali; Wahid, Khan A.; Babyn, Paul; Montgomery, James; Snead, Elisabeth; El-Gayed, Ali; Pettitt, Murray; Wolkowski, Bailey; Wesolowski, Michal
2016-01-01
Synchrotron source propagation-based X-ray phase contrast computed tomography is increasingly used in pre-clinical imaging. However, it typically requires a large number of projections, and subsequently a large radiation dose, to produce high quality images. To improve the applicability of this imaging technique, reconstruction algorithms that can reduce the radiation dose and acquisition time without degrading image quality are needed. The proposed research focused on using a novel combination of Douglas-Rachford splitting and randomized Kaczmarz algorithms to solve large-scale total variation based optimization in a compressed sensing framework to reconstruct 2D images from a reduced number of projections. Visual assessment and quantitative performance evaluations of a synthetic abdomen phantom and a real reconstructed image of an ex-vivo slice of canine prostate tissue demonstrate that the proposed algorithm is competitive in the reconstruction process compared with other well-known algorithms. An additional potential benefit of reducing the number of projections is a shorter acquisition, leaving less time for motion artifacts to occur if the sample moves during image acquisition. Use of this reconstruction algorithm to reduce the required number of projections in synchrotron source propagation-based X-ray phase contrast computed tomography is an effective form of dose reduction that may pave the way for imaging of in-vivo samples.
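A minimal sketch of the randomized Kaczmarz component, assuming a dense system for clarity (the paper couples such row-action sweeps with Douglas-Rachford splitting to handle the total-variation term, which is omitted here):

    import numpy as np

    def randomized_kaczmarz(A, b, iters=20000, seed=0):
        # Solve Ax = b by sampling rows with probability proportional to
        # ||a_i||^2 and projecting the iterate onto each row's hyperplane.
        rng = np.random.default_rng(seed)
        norms = (A ** 2).sum(axis=1)
        p = norms / norms.sum()
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            i = rng.choice(A.shape[0], p=p)
            x += (b[i] - A[i] @ x) / norms[i] * A[i]
        return x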
Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng
2014-01-01
Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms. PMID:24723806
Moore, Timothy S.; Dowell, Mark D.; Bradt, Shane; Verdu, Antonio Ruiz
2014-01-01
Bio-optical models are based on relationships between the spectral remote sensing reflectance and optical properties of in-water constituents. The wavelength range where this information can be exploited changes depending on the water characteristics. In low chlorophyll-a waters, the blue/green region of the spectrum is more sensitive to changes in chlorophyll-a concentration, whereas the red/NIR region becomes more important in turbid and/or eutrophic waters. In this work we present an approach to manage the shift from blue/green ratios to red/NIR-based chlorophyll-a algorithms for optically complex waters. Based on a combined in situ data set of coastal and inland waters, measures of overall algorithm uncertainty were roughly equal for two chlorophyll-a algorithms—the standard NASA OC4 algorithm based on blue/green bands and a MERIS 3-band algorithm based on red/NIR bands—with RMS error of 0.416 and 0.437 for each in log chlorophyll-a units, respectively. However, it is clear that each algorithm performs better at different chlorophyll-a ranges. When a blending approach based on an optical water type (OWT) classification is used, the overall RMS error was reduced to 0.320. Bias and relative error were also reduced when evaluating the blended chlorophyll-a product compared to either of the single algorithm products. As a demonstration for ocean color applications, the algorithm blending approach was applied to MERIS imagery over Lake Erie. We also examined the use of this approach in several coastal marine environments, and examined the long-term frequency of the OWTs by applying the classification to MODIS-Aqua imagery over Lake Erie. PMID:24839311
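A minimal sketch of the blending idea, assuming per-pixel class memberships are already available (the variable names are hypothetical; the paper derives its weights from the OWT classification):

    import numpy as np

    def blended_chl(chl_oc4, chl_red_nir, w_turbid):
        """Blend two chlorophyll-a products by optical-water-type membership.

        w_turbid in [0, 1] is the per-pixel membership of the
        turbid/eutrophic class; clear-water pixels lean on the blue/green
        product, turbid pixels on the red/NIR product.
        """
        return (1.0 - w_turbid) * chl_oc4 + w_turbid * chl_red_nir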
Knowledge-Based Framework: its specification and new related discussions
NASA Astrophysics Data System (ADS)
Rodrigues, Douglas; Zaniolo, Rodrigo R.; Branco, Kalinka R. L. J. C.
2015-09-01
Unmanned Aerial Vehicles (UAVs) are a common application of critical embedded systems. The heterogeneity prevalent in these vehicles in terms of avionics services is particularly relevant to the elaboration of multi-application missions. Moreover, this heterogeneity in UAV services often manifests itself in characteristics such as reliability, security and performance; different service implementations typically offer different guarantees in terms of these characteristics and in terms of associated costs. In particular, we explore the notion of Service-Oriented Architecture (SOA) in the context of UAVs as safety-critical embedded systems for the composition of services to fulfil application-specified performance and dependability guarantees. We therefore propose a framework for the deployment of these services and their variants, called the Knowledge-Based Framework for Dynamically Changing Applications (KBF), and we specify its services module, discussing all the related issues.
A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis
Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano
2015-01-01
As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m − 1)-dimensional manifold in the decision space under some mild conditions. However, how to utilize this regularity to design multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution, whose centroid is an (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper. PMID:25874246
Improved motion information-based infrared dim target tracking algorithms
NASA Astrophysics Data System (ADS)
Lei, Liu; Zhijian, Huang
2014-11-01
Accurate and fast tracking of infrared (IR) dim targets is very important for infrared precision guidance, early warning, video surveillance, etc. However, under complex backgrounds involving clutter, varying illumination, and occlusion, traditional tracking methods often converge to a local maximum and lose the real infrared target. To cope with these problems, three improved tracking algorithms based on motion information are proposed in this paper: an improved mean shift algorithm, an improved optical flow method and an improved particle filter method. The basic principles and the implementation procedures of these modified algorithms for target tracking are described. Using these algorithms, experiments on real-life IR and color images were performed. The implementation processes and results are analyzed, and the algorithms are evaluated both subjectively and objectively. The results prove that the proposed methods have satisfying tracking effectiveness and robustness; meanwhile, they have high tracking efficiency and can be used for real-time tracking.
Auto-focus algorithm based on statistical blur estimation
NASA Astrophysics Data System (ADS)
Kulkarni, Prajit
2013-03-01
Conventional auto-focus techniques in movable-lens camera systems use a measure of image sharpness to determine the lens position that brings the scene into focus. This paper presents a novel wavelet-domain approach to determine the position of best focus. In contrast to current techniques, the proposed algorithm estimates the level of blur in the captured image at each lens position. Image blur is quantified by fitting a Generalized Gaussian Density (GGD) curve to a high-pass version of the image using second-order statistics. The system then moves the lens to the position that yields the least measure of image blur. The algorithm overcomes shortcomings of sharpness-based approaches, namely, the application of large band-pass filters, sensitivity to image noise and need for calibration under different imaging conditions. Since noise has no effect on the proposed blur metric, the algorithm works with a short filter and is devoid of parameter tuning. Furthermore, the algorithm could be simplified to use a single high-pass filter to reduce complexity. These advantages, along with the optimization presented in the paper, make the proposed algorithm very attractive for hardware implementation on cell phones. Experiments prove that the algorithm performs well in the presence of noise as well as resolution and data scaling.
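A minimal sketch of the blur metric, assuming the standard moment-ratio estimator for the GGD shape parameter from second-order statistics as the abstract describes (the bracketing interval and other constants are illustrative):

    import numpy as np
    from scipy.special import gamma
    from scipy.optimize import brentq

    def ggd_shape(coeffs):
        """Estimate the GGD shape of zero-mean high-pass coefficients by
        moment matching (generalized Gaussian ratio method).

        For a GGD with shape b:  E[x^2] / E[|x|]^2 = G(1/b) G(3/b) / G(2/b)^2.
        Smaller shape means heavier tails (sharper image); defocus blur
        drives the estimate toward the Gaussian case b = 2.
        """
        x = np.ravel(coeffs)
        rho_hat = np.mean(x ** 2) / np.mean(np.abs(x)) ** 2
        rho = lambda b: gamma(1.0 / b) * gamma(3.0 / b) / gamma(2.0 / b) ** 2
        # The bracket below assumes typical natural-image statistics.
        return brentq(lambda b: rho(b) - rho_hat, 0.1, 10.0)

The auto-focus loop would then evaluate this shape estimate at each lens position and pick the position whose high-pass coefficients are least Gaussian.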
A new root-based direction-finding algorithm
NASA Astrophysics Data System (ADS)
Wasylkiwskyj, Wasyl; Kopriva, Ivica; DoroslovačKi, Miloš; Zaghloul, Amir I.
2007-04-01
Polynomial rooting direction-finding (DF) algorithms are a computationally efficient alternative to search-based DF algorithms and are particularly suitable for uniform linear arrays of physically identical elements provided that mutual interaction among the array elements can be either neglected or compensated for. A popular algorithm in such situations is Root Multiple Signal Classification (Root MUSIC (RM)), wherein the estimation of the directions of arrivals (DOA) requires the computation of the roots of a (2N - 2) -order polynomial, where N represents number of array elements. The DOA are estimated from the L pairs of roots closest to the unit circle, where L represents number of sources. In this paper we derive a modified root polynomial (MRP) algorithm requiring the calculation of only L roots in order to estimate the L DOA. We evaluate the performance of the MRP algorithm numerically and show that it is as accurate as the RM algorithm but with a significantly simpler algebraic structure. In order to demonstrate that the theoretically predicted performance can be achieved in an experimental setting, a decoupled array is emulated in hardware using phase shifters. The results are in excellent agreement with theory.
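For comparison, a compact sketch of the standard RM baseline described above, for a uniform linear array covariance matrix (the MRP variant, which roots only an L-order polynomial, is the paper's contribution and is not reproduced here):

    import numpy as np

    def root_music(R, L, d=0.5):
        """Baseline Root-MUSIC DOA estimates from an N x N covariance R.

        L sources, element spacing d in wavelengths.  Roots a
        (2N-2)-order polynomial built from the noise subspace.
        """
        N = R.shape[0]
        _, V = np.linalg.eigh(R)          # eigenvalues in ascending order
        En = V[:, : N - L]                # noise subspace
        C = En @ En.conj().T
        # Polynomial coefficients are the sums along the diagonals of C,
        # from offset N-1 (highest power) down to -(N-1).
        coeffs = [np.trace(C, offset=k) for k in range(N - 1, -N, -1)]
        roots = np.roots(coeffs)
        roots = roots[np.abs(roots) < 1.0]          # keep roots inside circle
        roots = roots[np.argsort(np.abs(np.abs(roots) - 1.0))][:L]
        return np.degrees(np.arcsin(np.angle(roots) / (2.0 * np.pi * d)))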
Wavelets based algorithm for the evaluation of enhanced liver areas
NASA Astrophysics Data System (ADS)
Alvarez, Matheus; Rodrigues de Pina, Diana; Giacomini, Guilherme; Gomes Romeiro, Fernando; Barbosa Duarte, Sérgio; Yamashita, Seizo; de Arruda Miranda, José Ricardo
2014-03-01
Hepatocellular carcinoma (HCC) is a primary tumor of the liver. After local therapies, tumor evaluation is based on the mRECIST criteria, which involve measuring the maximum diameter of the viable lesion. This paper describes a computational algorithm that measures the maximum diameter of the tumor from the contrast-enhanced area of the lesions. Sixty-three computed tomography (CT) slices from 23 patients were assessed. Non-contrasted liver and typical HCC nodules were evaluated, and a virtual phantom was developed for this purpose. Detection and quantification by the algorithm were optimized using the virtual phantom. After that, we compared the algorithm's findings for the maximum diameter of the target lesions against radiologist measurements. The computed maximum diameters are in good agreement with the results obtained by radiologist evaluation, indicating that the algorithm was able to properly detect the tumor limits. A comparison of the maximum diameter estimated by the radiologist versus the algorithm revealed differences on the order of 0.25 cm for large-sized tumors (diameter > 5 cm), whereas agreement to better than 1.0 cm was found for small-sized tumors. Differences between algorithm and radiologist measurements were small for small-sized tumors, with a trend toward a small increase for tumors greater than 5 cm. Therefore, traditional methods for measuring lesion diameter should be complemented with non-subjective measurement methods, which would allow a more correct evaluation of the contrast-enhanced areas of HCC according to the mRECIST criteria.
Texture orientation-based algorithm for detecting infrared maritime targets.
Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai
2015-05-20
Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutters, such as ocean waves, clouds or sea fog, usually have high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. This algorithm first extracts suspected targets by analyzing the intersubband correlation between horizontal and vertical wavelet subbands of the original IMI on the first scale. Then self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are combined to further remove false alarms. Experiments show that, compared with traditional algorithms, this algorithm can suppress background clutter much better and achieve better single-frame detection of infrared maritime targets. In addition, to further guarantee accurate target extraction, the pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of this proposed strategy is backed strongly by experimental data acquired under different environmental conditions. PMID:26192503
A Graph Based Backtracking Algorithm for Solving General CSPs
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Goodwin, Scott D.
2003-01-01
Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to the development of a class of CSP-solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph based backtracking algorithm called omega-CDBT, which shares the merits and overcomes the weaknesses of both the decomposition and search approaches.
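A minimal sketch of plain chronological backtracking over a binary constraint graph, the baseline search that decomposition-guided algorithms such as omega-CDBT build on (the data layout is illustrative):

    def backtrack(assignment, variables, domains, constraints):
        """Chronological backtracking for a binary CSP.

        constraints: dict mapping an ordered pair (x, y) to a predicate
        on their values; each constraint is assumed stored in both
        orientations.  This is the constraint graph in dictionary form.
        """
        if len(assignment) == len(variables):
            return assignment
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            if all(check(value, assignment[other])
                   for (x, other), check in constraints.items()
                   if x == var and other in assignment):
                assignment[var] = value
                result = backtrack(assignment, variables, domains, constraints)
                if result is not None:
                    return result
                del assignment[var]
        return None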
A fast image encryption algorithm based on chaotic map
NASA Astrophysics Data System (ADS)
Liu, Wenhao; Sun, Kehui; Zhu, Congxu
2016-09-01
Derived from the Sine map and the iterative chaotic map with infinite collapse (ICMIC), a new two-dimensional Sine ICMIC modulation map (2D-SIMM) is proposed based on a close-loop modulation coupling (CMC) model, and its chaotic performance is analyzed by means of phase diagram, Lyapunov exponent spectrum and complexity. It shows that this map has good ergodicity, hyperchaotic behavior, a large maximum Lyapunov exponent and high complexity. Based on this map, a fast image encryption algorithm is proposed in which the confusion and diffusion processes are combined into one stage. Chaotic shift transform (CST) is proposed to efficiently change the image pixel positions, and row and column substitutions are applied to scramble the pixel values simultaneously. The simulation and analysis results show that this algorithm has high security, low time complexity, and the ability to resist statistical analysis, differential, brute-force, known-plaintext and chosen-plaintext attacks.
LAHS: A novel harmony search algorithm based on learning automata
NASA Astrophysics Data System (ADS)
Enayatifar, Rasul; Yousefi, Moslem; Abdullah, Abdul Hanan; Darus, Amer Nordin
2013-12-01
This study presents a learning automata-based harmony search (LAHS) for unconstrained optimization of continuous problems. The harmony search (HS) algorithm performance strongly depends on the fine tuning of its parameters, including the harmony consideration rate (HMCR), pitch adjustment rate (PAR) and bandwidth (bw). Inspired by the spur-in-time responses in the musical improvisation process, learning capabilities are employed in the HS to select these parameters based on spontaneous reactions. An extensive numerical investigation is conducted on several well-known test functions, and the results are compared with the HS algorithm and its prominent variants, including the improved harmony search (IHS), global-best harmony search (GHS) and self-adaptive global-best harmony search (SGHS). The numerical results indicate that the LAHS is more efficient in finding optimum solutions and outperforms the existing HS algorithm variants.
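A sketch of the core HS improvisation step with fixed parameters; LAHS replaces the fixed HMCR, PAR and bw with values chosen by learning automata, so this is the baseline, not the LAHS selection rule:

    import numpy as np

    def improvise(memory, lower, upper, hmcr=0.9, par=0.3, bw=0.01, rng=None):
        """One harmony-search improvisation over a harmony memory.

        memory: (HMS, dim) array of stored solutions; lower/upper: bounds.
        """
        rng = rng or np.random.default_rng()
        hms, dim = memory.shape
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                   # memory consideration
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:                # pitch adjustment
                    new[j] += bw * rng.uniform(-1, 1)
            else:                                     # random selection
                new[j] = rng.uniform(lower[j], upper[j])
        return np.clip(new, lower, upper)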
A specialized framework for Medical Diagnostic Knowledge Based Systems.
Lanzola, G.; Stefanelli, M.
1991-01-01
For a knowledge based system (KBS) to exhibit intelligent behavior, it must be endowed not only with domain knowledge but also with knowledge representing the expert's strategies. The elicitation task is inherently difficult for strategic knowledge, because strategy is often tacit and, even when it has been made explicit, it is not easy to describe it in a form that may be directly translated and implemented into a program. This paper describes a Specialized Framework for Medical Diagnostic Knowledge Based Systems able to help an expert in the process of building KBSs in a medical domain. The framework is based on an epistemological model of diagnostic reasoning which has proved helpful in describing the diagnostic process in terms of the tasks of which it is composed. PMID:1807566
NIC-based Reduction Algorithms for Large-scale Clusters
Petrini, F; Moody, A T; Fernandez, J; Frachtenberg, E; Panda, D K
2004-07-30
Efficient algorithms for reduction operations across a group of processes are crucial for good performance in many large-scale, parallel scientific applications. While previous algorithms limit processing to the host CPU, we utilize the programmable processors and local memory available on modern cluster network interface cards (NICs) to explore a new dimension in the design of reduction algorithms. In this paper, we present the benefits and challenges, design issues and solutions, analytical models, and experimental evaluations of a family of NIC-based reduction algorithms. Performance and scalability evaluations were conducted on the ASCI Linux Cluster (ALC), a 960-node, 1920-processor machine at Lawrence Livermore National Laboratory, which uses the Quadrics QsNet interconnect. We find NIC-based reductions on modern interconnects to be more efficient than host-based implementations in both scalability and consistency. In particular, at large-scale--1812 processes--NIC-based reductions of small integer and floating-point arrays provided respective speedups of 121% and 39% over the host-based, production-level MPI implementation.
Nonuniformity correction algorithm based on Gaussian mixture model
NASA Astrophysics Data System (ADS)
Mou, Xin-gang; Zhang, Gui-lin; Hu, Ruo-lan; Zhou, Xiao
2011-08-01
As an important tool for acquiring information about a target scene, infrared detectors are widely used in the imaging guidance field. Because of the limits of materials and technique, the performance of infrared imaging systems is strongly affected by spatial nonuniformity in the photoresponse of the detectors in the array. The temporal high-pass filter (THPF) is a popular adaptive NUC algorithm because of its simplicity and effectiveness. However, such algorithms still suffer from ghosting artifacts caused by blind updating of parameters, and their performance degrades noticeably when they are applied to scenes lacking motion. To tackle this problem, a novel adaptive NUC algorithm based on a Gaussian mixture model (GMM) is put forward, extending the traditional THPF. The drift of the detectors is assumed to obey a single Gaussian distribution, and the parameters are updated selectively based on the scene. The GMM is applied in the new algorithm for background modeling, in which the background is updated selectively so as to avoid the influence of foreground targets on the background update, thus eliminating ghosting artifacts. The performance of the proposed algorithm is evaluated on infrared image sequences with simulated and real fixed-pattern noise. The results show more reliable fixed-pattern noise reduction, tracking of the parameter drift, and good adaptability to scene changes.
Constrained Multiobjective Optimization Algorithm Based on Immune System Model.
Qian, Shuqu; Ye, Yongqiang; Jiang, Bin; Wang, Jianhong
2016-09-01
An immune optimization algorithm, based on the model of the biological immune system, is proposed to solve multiobjective optimization problems with multimodal nonlinear constraints. First, the initial population is divided into a feasible nondominated population and an infeasible/dominated population. The feasible nondominated individuals focus on exploring the nondominated front through clone and hypermutation based on a proposed affinity design approach, while the infeasible/dominated individuals are exploited and improved via the simulated binary crossover and polynomial mutation operations. Then, to accelerate the convergence of the proposed algorithm, a transformation technique is applied to the combined population of the above two offspring populations. Finally, a crowded-comparison strategy is used to create the next generation population. In numerical experiments, a series of benchmark constrained multiobjective optimization problems are considered to evaluate the performance of the proposed algorithm, and it is also compared to several state-of-the-art algorithms in terms of the inverted generational distance and hypervolume indicators. The results indicate that the new method achieves competitive performance and even statistically significantly better results than previous algorithms do on most of the benchmark suite. PMID:26285230
A superpixel-based framework for automatic tumor segmentation on breast DCE-MRI
NASA Astrophysics Data System (ADS)
Yu, Ning; Wu, Jia; Weinstein, Susan P.; Gaonkar, Bilwaj; Keller, Brad M.; Ashraf, Ahmed B.; Jiang, YunQing; Davatzikos, Christos; Conant, Emily F.; Kontos, Despina
2015-03-01
Accurate and efficient automated tumor segmentation in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is highly desirable for computer-aided tumor diagnosis. We propose a novel automatic segmentation framework which incorporates mean-shift smoothing, superpixel-wise classification, pixel-wise graph-cuts partitioning, and morphological refinement. A set of 15 breast DCE-MR images, obtained from the American College of Radiology Imaging Network (ACRIN) 6657 I-SPY trial, were manually segmented to generate tumor masks (as ground truth) and breast masks (as regions of interest). Four state-of-the-art segmentation approaches based on diverse models were also utilized for comparison. Based on five standard evaluation metrics for segmentation, the proposed framework consistently outperformed all other approaches. The performance of the proposed framework was: 1) 0.83 for Dice similarity coefficient, 2) 0.96 for pixel-wise accuracy, 3) 0.72 for VOC score, 4) 0.79 mm for mean absolute difference, and 5) 11.71 mm for maximum Hausdorff distance, which surpassed the second best method (i.e., adaptive geodesic transformation), a semi-automatic algorithm depending on precise initialization. Our results suggest promising potential applications of our segmentation framework in assisting analysis of breast carcinomas.
Soil Moisture Algorithm Validation with Ground Based Networks
Technology Transfer Automated Retrieval System (TEKTRAN)
Validating satellite-based soil moisture algorithms and products is particularly challenging due to the disparity of scales between the two observation methods: conventional measurements of soil moisture are made at a point, whereas satellite sensors provide an integrated area/volume value over a large ar...
Density shrinking algorithm for community detection with path based similarity
NASA Astrophysics Data System (ADS)
Wu, Jianshe; Hou, Yunting; Jiao, Yang; Li, Yong; Li, Xiaoxiao; Jiao, Licheng
2015-09-01
Community structure is ubiquitous in real world complex networks. Finding the communities is the key to understanding the functions of those networks. A lot of work has been done on designing algorithms for community detection, but it remains a challenge in the field. Traditional modularity optimization suffers from the resolution limit problem. Recent research shows that combining the density-based technique with modularity optimization can overcome the resolution limit, and an efficient algorithm named DenShrink was provided. The main procedure of DenShrink repeatedly finds and merges micro-communities (in the broad sense) into super nodes until they can no longer be merged. Analysis in this paper shows that if this procedure is replaced by finding and merging only dense pairs, both detection accuracy and runtime can be noticeably improved. Thus an improved density-based algorithm, ImDS, is provided. Because of their time complexity, path-based similarity indexes are difficult to apply to community detection with high performance. In this paper, the path-based Katz index is simplified and used in the ImDS algorithm.
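For reference, the full (unsimplified) Katz index sums the contributions of all paths between two nodes, damped geometrically by path length:

    S_{\mathrm{Katz}} = \sum_{l=1}^{\infty} \beta^{l} A^{l}
        = (I - \beta A)^{-1} - I,
    \qquad 0 < \beta < 1/\lambda_{\max}(A),

where A is the adjacency matrix. The matrix inverse is what makes the index expensive on large networks; the simplification used in ImDS truncates this to short paths (the exact simplified form is not reproduced here).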
Optimal fractional order PID design via Tabu Search based algorithm.
Ateş, Abdullah; Yeroglu, Celaleddin
2016-01-01
This paper presents an optimization method based on the Tabu Search Algorithm (TSA) to design a Fractional-Order Proportional-Integral-Derivative (FOPID) controller. All parameter computations of the FOPID employ random initial conditions, using the proposed optimization method. Illustrative examples demonstrate the performance of the proposed FOPID controller design method. PMID:26652128
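For context, the FOPID (PI^lambda D^mu) controller generalizes the PID transfer function with fractional integration and differentiation orders, so the TSA searches over five parameters rather than three:

    C(s) = K_p + K_i\, s^{-\lambda} + K_d\, s^{\mu}

with (K_p, K_i, K_d, lambda, mu) as the decision variables; lambda = mu = 1 recovers the classical PID controller.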
Measuring Disorientation Based on the Needleman-Wunsch Algorithm
ERIC Educational Resources Information Center
Güyer, Tolga; Atasoy, Bilal; Somyürek, Sibel
2015-01-01
This study offers a new method to measure navigation disorientation in web based systems which is powerful learning medium for distance and open education. The Needleman-Wunsch algorithm is used to measure disorientation in a more precise manner. The process combines theoretical and applied knowledge from two previously distinct research areas,…
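A minimal sketch of the Needleman-Wunsch scoring recursion as it might be applied to two navigation sequences (the scoring values are illustrative; the paper's exact disorientation measure is not reproduced here):

    def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
        # Global alignment score between two page-visit sequences; the
        # optimal score, relative to the self-alignment of the ideal path,
        # can serve as a similarity (non-disorientation) measure.
        m, n = len(a), len(b)
        F = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            F[i][0] = i * gap
        for j in range(1, n + 1):
            F[0][j] = j * gap
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                s = match if a[i - 1] == b[j - 1] else mismatch
                F[i][j] = max(F[i - 1][j - 1] + s,   # align two pages
                              F[i - 1][j] + gap,     # gap in sequence b
                              F[i][j - 1] + gap)     # gap in sequence a
        return F[m][n]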
Realization and optimization of AES algorithm on the TMS320DM6446 based on DaVinci technology
NASA Astrophysics Data System (ADS)
Jia, Wen-bin; Xiao, Fu-hai
2013-03-01
Applying the AES algorithm in a digital cinema system prevents video data from being illegally stolen or maliciously tampered with, solving its security problems. At the same time, to meet the requirements of real-time, on-scene and transparent encryption of high-speed audio and video data streams in the information security field, this paper analyzes the AES algorithm principle in depth and, on the TMS320DM6446 hardware platform with the DaVinci software framework, proposes concrete methods for realizing the AES algorithm in a digital video system together with optimization solutions. The test results show that digital movies encrypted by AES-128 cannot be played normally, which ensures the security of digital movies. By comparing the performance of the AES-128 algorithm before and after optimization, the correctness and validity of the improved algorithm are verified.
Study of a Quantum Framework for Search Based Software Engineering
NASA Astrophysics Data System (ADS)
Wu, Nan; Song, Fangmin; Li, Xiangdong
2013-06-01
The Search Based Software Engineering (SBSE) is widely used in the software engineering to identify optimal solutions. The traditional methods and algorithms used in SBSE are criticized due to their high costs. In this paper, we propose a rapid modified-Grover quantum searching method for SBSE, and theoretically this method can be applied to any search-space structure and any type of searching problems.
A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme
NASA Astrophysics Data System (ADS)
Ghoman, Satyajit S.
The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3DOE), in context of aircraft wing optimization. M3DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) combination of a series of structural and aerodynamic analyses. The modularity of M3DOE allows it to be a part of other inclusive optimization frameworks. M3DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization, and cruise range maximization are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of candidate population is updated iteratively using evolutionary algorithm technique of
Detection of parametric curves based on genetic algorithm
NASA Astrophysics Data System (ADS)
Li, Haimin; Wu, Chengke
1998-09-01
Detection of curves with special shapes has attracted great interest in the fields of image processing and recognition. Commonly used algorithms such as the Hough Transform and the Generalized Radon Transform are global search methods; when the number of parameters increases, their efficiency decreases rapidly because of the expansion of the parameter space. To solve this problem, a new method based on a Genetic Algorithm is presented which incorporates a local search procedure to improve its performance. Experimental results show that the proposed method greatly improves search efficiency.
Research of Electronic Image Stabilization Algorithm Based on Orbital Character
NASA Astrophysics Data System (ADS)
Xian, Xiaodong; Hou, Peipei; Liang, Shan; Gan, Ping
Monocular vision is a key technology for locomotive anti-collision warning systems, and its ranging precision influences the system's performance. Because video jitter reduces ranging accuracy, this paper proposes a new electronic image stabilization (EIS) algorithm based on track characteristics, which obtains the global motion vector by extracting and matching local feature templates. By processing local feature templates instead of the global image, the speed of the system is improved markedly. Simulation results indicate that this algorithm can effectively eliminate the image shift produced by video jitter, corrects the resulting deviation in ranging precision, and satisfies the real-time requirements of the system.
Multiple Lookup Table-Based AES Encryption Algorithm Implementation
NASA Astrophysics Data System (ADS)
Gong, Jin; Liu, Wenyi; Zhang, Huixin
A new AES (Advanced Encryption Standard) encryption algorithm implementation is proposed in this paper. It is based on five lookup tables generated from the S-box (the substitution table in AES). The obvious advantages are reduced code size, improved implementation efficiency, and helping new learners to understand the AES encryption algorithm and the GF(2^8) multiplication that is necessary to implement AES correctly [1]. This method can be applied on processors with a word length of 32 bits or above, on FPGAs, and elsewhere, and can correspondingly be implemented in VHDL, Verilog, VB and other languages.
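As an illustration of the table-based idea (a sketch of the standard T-table construction, not necessarily the paper's exact five-table layout), each 32-bit table entry fuses the S-box lookup with one MixColumns column, so a full round collapses to four table lookups and XORs per output column:

    def xtime(a):
        # Multiply by x (i.e., by 2) in GF(2^8) with the AES reduction
        # polynomial x^8 + x^4 + x^3 + x + 1 (0x11B).
        a <<= 1
        return (a ^ 0x11B) & 0xFF if a & 0x100 else a

    def gmul(a, b):
        # Russian-peasant multiplication in GF(2^8).
        r = 0
        while b:
            if b & 1:
                r ^= a
            a, b = xtime(a), b >> 1
        return r

    def t_table_entry(sbox_val):
        # One 32-bit entry of the first encryption T-table: SubBytes fused
        # with the MixColumns column (2, 1, 1, 3).
        return (gmul(2, sbox_val) << 24) | (sbox_val << 16) | \
               (sbox_val << 8) | gmul(3, sbox_val)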
An Optimal Seed Based Compression Algorithm for DNA Sequences
Gopalakrishnan, Gopakumar; Karunakaran, Muralikrishnan
2016-01-01
This paper proposes a seed-based lossless compression algorithm for DNA sequences which uses a substitution method similar to the Lempel-Ziv compression scheme. The proposed method exploits the repeat structures inherent in DNA sequences by creating an offline dictionary which contains all such repeats along with the details of mismatches. By ensuring that only promising mismatches are allowed, the method achieves a compression ratio on par with or better than existing lossless DNA sequence compression algorithms. PMID:27555868
A Framework for Game-Based Security Proofs
NASA Astrophysics Data System (ADS)
Nowak, David
To be accepted, a cryptographic scheme must come with a proof that it satisfies some standard security properties. However, because cryptographic schemes are based on non-trivial mathematics, proofs are error-prone and difficult to check. The main contributions of this paper are a refinement of the game-based approach to security proofs, and its implementation on top of the proof assistant Coq. The proof assistant checks that the proof is correct and deals with the mundane part of the proof. An interesting feature of our framework is that our proofs are formal enough to be mechanically checked, but still readable enough to be humanly checked. We illustrate the use of our framework by proving in a systematic way the so-called semantic security of the encryption scheme Elgamal and its hashed version.
A General Framework for Multiphysics Modeling Based on Numerical Averaging
NASA Astrophysics Data System (ADS)
Lunati, I.; Tomin, P.
2014-12-01
In recent years, multiphysics (hybrid) modeling has attracted increasing attention as a tool to bridge the gap between pore-scale processes and a continuum description at the meter-scale (laboratory scale). This approach is particularly appealing for complex nonlinear processes, such as multiphase flow, reactive transport, density-driven instabilities, and geomechanical coupling. We present a general framework that can be applied to all these classes of problems. The method is based on ideas from the Multiscale Finite-Volume method (MsFV), originally developed for Darcy-scale applications. Recently, we have reformulated MsFV starting with a local-global splitting, which allows us to retain the original degree of coupling for the local problems and to use spatiotemporal adaptive strategies. The new framework is based on the simple idea that different characteristic temporal scales are inherited from different spatial scales, and the global and the local problems are solved with different temporal resolutions. The global (coarse-scale) problem is constructed based on a numerical volume-averaging paradigm and a continuum (Darcy-scale) description is obtained by introducing additional simplifications (e.g., by assuming that pressure is the only independent variable at the coarse scale, we recover an extended Darcy's law). We demonstrate that it is possible to adaptively and dynamically couple the Darcy-scale and the pore-scale descriptions of multiphase flow in a single conceptual and computational framework. Pore-scale problems are solved only in the active front region where fluid distribution changes with time. In the rest of the domain, only a coarse description is employed. This framework can be applied to other important problems such as reactive transport and crack propagation. As it is based on a numerical upscaling paradigm, our method can be used to explore the limits of validity of macroscopic models and to illuminate the meaning of
Arcade: A Web-Java Based Framework for Distributed Computing
NASA Technical Reports Server (NTRS)
Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.
Design of synthetic biological logic circuits based on evolutionary algorithm.
Chuang, Chia-Hua; Lin, Chun-Liang; Chang, Yen-Chang; Jennawasin, Tanagorn; Chen, Po-Kuei
2013-08-01
The construction of an artificial biological logic circuit using a systematic strategy is recognised as one of the most important topics for the development of synthetic biology. In this study, a real-structured genetic algorithm (RSGA), which combines the general advantages of the traditional real genetic algorithm with those of the structured genetic algorithm, is proposed to deal with the biological logic circuit design problem. A general model with the cis-regulatory input function and appropriate promoter activity functions is proposed to synthesise a wide variety of fundamental logic gates such as NOT, Buffer, AND, OR, NAND, NOR and XOR. The results obtained can be extended to synthesise advanced combinational and sequential logic circuits by topologically distinct connections. The resulting optimal designs of these logic gates and circuits are established via the RSGA. The in silico computer-based modelling technology has been verified, showing its great advantages for this purpose. PMID:23919952
Robust illumination-invariant tracking algorithm based on HOGs
NASA Astrophysics Data System (ADS)
Miramontes-Jaramillo, Daniel; Kober, Vitaly; Díaz-Ramírez, Víctor H.
2015-09-01
Common tracking systems are usually affected by environmental and technical interferences such as non-uniform illumination, sensor noise, geometrical scene distortion, etc. Among these issues, non-uniform illumination is particularly challenging because it destroys important spatial characteristics of objects in observed scenes. We propose a two-step tracking algorithm: first, preprocessing locally normalizes the illumination difference between the target object and the observed frames; second, the normalized object is tracked by means of a designed template structure based on histograms of oriented gradients (HOGs) and a kinematic prediction model. The algorithm performance is tested in terms of recognition and localization errors in real scenarios. To achieve a high processing rate, we use GPU parallel processing technologies. Finally, our algorithm is compared with other state-of-the-art trackers.
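A minimal sketch of the two steps described above, assuming grayscale floating-point frames; the normalization window, the HOG cell parameters, and the cosine similarity are illustrative choices, and the kinematic prediction model and GPU acceleration are omitted:

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.feature import hog

def local_normalize(frame, win=15, eps=1e-6):
    """Step 1: local illumination normalization (subtract local mean, divide by local std)."""
    f = frame.astype(float)
    mean = uniform_filter(f, win)
    sq_mean = uniform_filter(f ** 2, win)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    return (f - mean) / (std + eps)

def hog_score(template_patch, candidate_patch):
    """Step 2: compare equally sized normalized patches via their HOG descriptors."""
    h1 = hog(template_patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    h2 = hog(candidate_patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.dot(h1, h2) / (np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-12)
```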
A Multi-Scale Settlement Matching Algorithm Based on ARG
NASA Astrophysics Data System (ADS)
Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia
2016-06-01
Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks by the small-scale road network and constructs local ARGs in each block. Then it ascertains candidate sets through merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.
An ellipse detection algorithm based on edge classification
NASA Astrophysics Data System (ADS)
Yu, Liu; Chen, Feng; Huang, Jianming; Wei, Xiangquan
2015-12-01
In order to enhance the speed and accuracy of ellipse detection, an ellipse detection algorithm based on edge classification is proposed. Redundant edge points are removed by serializing edges into point sequences and imposing a distance constraint between edge points. Effective classification is achieved using the angle between edge points as the criterion, which greatly increases the probability that randomly selected edge points fall on the same ellipse. Ellipse fitting accuracy is significantly improved by an optimization of the RED algorithm that uses the Euclidean distance from each edge point to the elliptical boundary. Experimental results show that the algorithm detects ellipses well even when edges suffer interference or occlude each other, and that it achieves higher detection precision and lower time consumption than the RED algorithm.
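The abstract does not give the classification criteria themselves; the following is a generic OpenCV baseline showing the common skeleton of such detectors (edge extraction, discarding short edge chains, least-squares ellipse fitting). The file name and thresholds are placeholders:

```python
import cv2

gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # placeholder input image
edges = cv2.Canny(gray, 80, 160)                       # edge map
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

ellipses = []
for c in contours:
    if len(c) < 20:                 # discard short edge chains before fitting
        continue
    (cx, cy), (major, minor), angle = cv2.fitEllipse(c)   # least-squares ellipse fit
    ellipses.append(((cx, cy), (major, minor), angle))
```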
An ordinary differential equation based solution path algorithm.
Wu, Yichao
2011-01-01
Efron, Hastie, Johnstone and Tibshirani (2004) proposed Least Angle Regression (LAR), a solution path algorithm for least squares regression. They pointed out that a slight modification of LAR gives the LASSO (Tibshirani, 1996) solution path. However, it is largely unknown how to extend this solution path algorithm to models beyond least squares regression. In this work, we propose an extension of LAR to generalized linear models and the quasi-likelihood model by showing that the corresponding solution path is piecewise given by solutions of ordinary differential equation systems. Our contribution is twofold. First, we provide a theoretical understanding of how the corresponding solution path propagates. Second, we propose an ordinary differential equation based algorithm to obtain the whole solution path. PMID:21532936
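For reference, the least squares LAR/LASSO path that this work generalizes can be computed directly with scikit-learn; beyond least squares the path is no longer piecewise linear, which is what motivates the ODE formulation. A small sketch on synthetic data:

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.standard_normal(100)

# method='lasso' applies the LAR modification that yields the LASSO path
alphas, active, coefs = lars_path(X, y, method="lasso")
print(coefs.shape)   # (n_features, n_knots): coefficients are linear between knots
```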
Optimization algorithm based characterization scheme for tunable semiconductor lasers.
Chen, Quanan; Liu, Gonghai; Lu, Qiaoyin; Guo, Weihua
2016-09-01
In this paper, an optimization algorithm based characterization scheme for tunable semiconductor lasers is proposed and demonstrated. In the process of optimization, the ratio between the power at the desired frequency and the power outside the desired frequency is used as the figure of merit, which approximately represents the side-mode suppression ratio. In practice, we use tunable optical band-pass and band-stop filters to obtain the power at the desired frequency and the power outside the desired frequency separately. With the assistance of optimization algorithms, such as the particle swarm optimization (PSO) algorithm, we can obtain stable operating conditions for tunable lasers at designated frequencies directly and efficiently. PMID:27607701
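A minimal global-best PSO sketch of such a characterization loop; here merit is a placeholder for the measured in-band to out-of-band power ratio, and the inertia and acceleration coefficients are generic textbook values rather than the paper's settings:

```python
import numpy as np

def pso_maximize(merit, lb, ub, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Basic global-best PSO; `merit` is the black-box figure of merit to maximize."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    rng = np.random.default_rng(0)
    x = rng.uniform(lb, ub, (n_particles, len(lb)))   # positions, e.g. tuning currents
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([merit(p) for p in x])
    gbest = pbest[np.argmax(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)
        f = np.array([merit(p) for p in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmax(pbest_f)]
    return gbest, pbest_f.max()
```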
Validation of Patellar Stabilization Surgical Algorithm Based on Congruence
Kejriwal, Ritwik; Dalrymple, Rhydian; Annear, Peter
2016-01-01
Background: Multiple algorithms exist for proximal and/or distal stabilisation surgery for patellar instability, with no consensus in the literature. Aim: To validate our surgical algorithm based on patellofemoral congruence for patellar instability. Algorithm: Once patellar stabilisation surgery is clinically indicated, we determine patellofemoral congruence abnormality based on quadriceps-active CT and intraoperative arthroscopic assessment. Arthroscopic lateral release is carried out if indicated. For patients with minimal incongruence after lateral release, MPFL reconstruction alone (MPFL group) is performed; for significant incongruence, we perform tibial tubercle transfer with MPFL reconstruction (TTT group). Methods: Retrospective study with prospective follow-up of patients operated on between 2008 and 2015. We excluded patients with skeletal immaturity, previous patellofemoral surgery, and distalisation of the tibial tubercle. Chart review, pre- and postoperative quadriceps-active CT, Kujala score, and the patient's subjective stability were analysed. Results: 98 patients were reviewed, with a mean follow-up of 37 weeks. 14 patients had MPFL reconstruction alone. Recurrence of instability occurred in 4% of patients, all in the TTT group. The reoperation rate was 19%, almost all in the TTT group, with removal of hardware being the most common reason. There was no significant difference in TTTG between the two groups on preoperative CT measurement. Conclusion: A patellar stabilisation surgical algorithm based on congruence is valid in preventing further instability. The reoperation rate is high because the majority of patients received the TTT procedure.
Evolving Stochastic Learning Algorithm based on Tsallis entropic index
NASA Astrophysics Data System (ADS)
Anastasiadis, A. D.; Magoulas, G. D.
2006-03-01
In this paper, inspired by our previous algorithm, which was based on the theory of Tsallis statistical mechanics, we develop a new evolving stochastic learning algorithm for neural networks. The new algorithm combines deterministic and stochastic search steps by employing a different adaptive stepsize for each network weight, and applies a form of noise that is characterized by the nonextensive entropic index q, regulated by a weight decay term. The behavior of the learning algorithm can be made more stochastic or deterministic depending on the trade-off between the temperature T and the q values. This is achieved by introducing a formula that defines a time-dependent relationship between these two important learning parameters. Our experimental study verifies that there are indeed improvements in the convergence speed of this new evolving stochastic learning algorithm, which makes learning faster than with the original Hybrid Learning Scheme (HLS). In addition, experiments are conducted to explore the influence of the entropic index q and temperature T on the convergence speed and stability of the proposed method.
Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation
Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi
2015-01-01
Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions may no longer be valid. To overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, built on a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, the proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. An experiment on a designed two-dimensional benchmark dataset shows that the proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but can also identify clusters which are adjacent, overlapping, or under background noise. Finally, we compared the LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this dataset and can extract more knowledge from it. PMID:26221133
A novel Retinex algorithm based on alternating direction optimization
NASA Astrophysics Data System (ADS)
Fu, Xueyang; Lin, Qin; Guo, Wei; Huang, Yue; Zeng, Delu; Ding, Xinghao
2013-10-01
The goal of the Retinex theory is to remove the effects of illumination from observed images. To address this typical ill-posed inverse problem, many existing Retinex algorithms obtain an enhanced image by making different assumptions either on the illumination or on the reflectance. One significant limitation of these algorithms is that if the assumption is false, the result is unsatisfactory. In this paper, we first build a Retinex model which includes two variables: the illumination and the reflectance. We propose an efficient and effective algorithm based on alternating direction optimization to solve this problem, where the FFT (Fast Fourier Transform) is used to speed up the computation. Compared with most existing Retinex algorithms, the proposed method solves for the illumination and reflectance images without converting the images to the logarithmic domain. One advantage of this approach is that, unlike traditional Retinex algorithms, our method can simultaneously estimate the illumination image and the reflectance image, the latter of which is the ideal image free of illumination effects. Since our method directly separates the illumination and the reflectance, and the two variables constrain each other mutually during the computation, the result is robust to some degree. Another advantage is that our method has a lower computational cost and can be applied to real-time processing.
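A compressed illustration of the FFT-accelerated idea, assuming the model S ≈ L · R with S scaled to [0, 1]: the quadratic, gradient-penalized illumination subproblem has a closed-form solution in the Fourier domain. The paper's actual alternating direction scheme iterates exact updates of both variables; this sketch performs only a single split:

```python
import numpy as np

def estimate_illumination(S, alpha=100.0):
    """Closed-form FFT solve of min_L ||L - S||^2 + alpha * ||grad L||^2
    (screened-Poisson smoothing, standing in for the L-subproblem)."""
    fy = np.fft.fftfreq(S.shape[0])[:, None]
    fx = np.fft.fftfreq(S.shape[1])[None, :]
    denom = 1.0 + alpha * (2 * np.pi) ** 2 * (fx ** 2 + fy ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(S) / denom))

def retinex_split(S, alpha=100.0, eps=1e-3):
    L = np.maximum(estimate_illumination(S, alpha), eps)  # smooth illumination
    R = np.clip(S / L, 0.0, 1.0)                          # illumination-free reflectance
    return L, R
```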
Measurement Theory in Deutsch's Algorithm Based on the Truth Values
NASA Astrophysics Data System (ADS)
Nagata, Koji; Nakamura, Tadao
2016-08-01
We propose a new measurement theory for qubit handling, based on the truth values, i.e., the truth T (1) for true and the falsity F (0) for false. The results of measurement are either 0 or 1. To implement Deutsch's algorithm, we need both observability and controllability of a quantum state. The new measurement theory satisfies both requirements. In particular, we systematically support our assertion with mathematical analysis of raw data from a carefully designed experiment.
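For context, Deutsch's algorithm itself is small enough to simulate directly with state vectors; the sketch below returns the truth value of the first qubit after one oracle query (0 for a constant f, 1 for a balanced f):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def deutsch(f):
    """Deutsch's algorithm for f: {0,1} -> {0,1}; decides constant vs balanced
    from a single oracle query."""
    # Oracle U_f |x>|y> = |x>|y XOR f(x)>, built as a 4x4 permutation matrix
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    psi = np.kron(H, H) @ np.array([0.0, 1.0, 0.0, 0.0])  # start in |0>|1>, apply H (x) H
    psi = np.kron(H, np.eye(2)) @ (U @ psi)               # oracle, then H on first qubit
    p_zero = psi[0] ** 2 + psi[1] ** 2                    # probability first qubit reads 0
    return "constant" if p_zero > 0.5 else "balanced"

print(deutsch(lambda x: 0))   # constant -> measurement yields truth value 0
print(deutsch(lambda x: x))   # balanced -> measurement yields truth value 1
```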
Evidence-based toxicology: a comprehensive framework for causation.
Guzelian, Philip S; Victoroff, Michael S; Halmes, N Christine; James, Robert C; Guzelian, Christopher P
2005-04-01
opinions, expressed by experts (or consensus groups of experts) relying on their education, training, experience, wisdom, prestige, intuition, skill and improvisation. In response, evidence-based medicine (EBM) was developed, to make a conscientious, explicit and judicious use of current best evidence in deciding about the care of individual patients. Now globally embraced, EBM employs a structured, 'transparent' protocol for carrying out a deliberate, objective, unbiased and systematic review of the evidence about a formally framed question. Not only in medicine, but now in dentistry, engineering and other fields that have adapted the methods of EBM, it is the quality of the evidence and the rigor of the analysis through evidence-based logic (EBL), rather than the professional standing of the reviewer, that leads to evidence-based conclusions about what is known. Recent studies have disclosed that toxicologists (individually or in expert groups), not unlike their medical counterparts prior to EBM, show distressing variations in their biases with regard to data selection, data interpretation and data evaluation when performing reviews for causation analyses. Moreover, toxicologists often fail to acknowledge explicitly (particularly in regulatory and policy-making arenas) when shortcomings in the evidence necessitate reliance upon authority-based opinions, rather than evidence-based conclusions (Guzelian PS, Guzelian CP. Authority-based explanation. Science 2004; 303: 1468-69). Accordingly, for answering questions about general and specific causation, we have constructed a framework for evidence-based toxicology (EBT), derived from the accepted principles of EBM and expressed succinctly as three stages, comprising 12 total steps. These are: 1) collecting and evaluating the relevant data (Source, Exposure, Dose, Diagnosis); 2) collecting and evaluating the relevant knowledge (Frame the question, Assemble the relevant (delimited) literature, Assess and critique the literature
A Gaussian Process Based Online Change Detection Algorithm for Monitoring Periodic Time Series
Chandola, Varun; Vatsavai, Raju
2011-01-01
Online time series change detection is a critical component of many monitoring systems, such as space and airborne remote sensing instruments, cardiac monitors, and network traffic profilers, which continuously analyze observations recorded by sensors. Data collected by such sensors typically has a periodic (seasonal) component. Most existing time series change detection methods are not directly applicable to such data, either because they are not designed to handle periodic time series or because they cannot operate in an online mode. We propose an online change detection algorithm which can handle periodic time series. The algorithm uses a Gaussian process based non-parametric time series prediction model and monitors the difference between the predictions and actual observations within a statistically principled control chart framework to identify changes. A key challenge in using a Gaussian process in an online mode is the need to solve a large system of equations involving the associated covariance matrix, which grows with every time step. The proposed algorithm exploits the special structure of the covariance matrix and can analyze a time series of length T in O(T^2) time while maintaining an O(T) memory footprint, compared to the O(T^4) time and O(T^2) memory requirements of standard matrix manipulation methods. We experimentally demonstrate the superiority of the proposed algorithm over several existing time series change detection algorithms on a set of synthetic and real time series. Finally, we illustrate the effectiveness of the proposed algorithm for identifying land use land cover changes using Normalized Difference Vegetation Index (NDVI) data collected for an agricultural region in Iowa state, USA. Our algorithm is able to detect different types of changes in an NDVI validation data set (with ~80% accuracy) which occur due to crop type changes as well as disruptive changes (e.g., natural disasters).
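A simplified sketch of the prediction-plus-control-chart idea using a periodic GP kernel; it refits once on a training window rather than performing the paper's O(T^2) incremental online updates, and the window length, period, and k-sigma rule are illustrative:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

def detect_changes(t, y, train=48, period=12.0, k=3.0):
    """Fit a periodic GP on the first `train` points, then flag later observations
    that fall outside the +/- k*sigma prediction band (control-chart rule)."""
    kernel = ExpSineSquared(length_scale=1.0, periodicity=period) + WhiteKernel(0.1)
    gp = GaussianProcessRegressor(kernel=kernel).fit(t[:train, None], y[:train])
    mu, sd = gp.predict(t[train:, None], return_std=True)
    return np.flatnonzero(np.abs(y[train:] - mu) > k * sd) + train
```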
NASA Astrophysics Data System (ADS)
Khawaja, Taimoor Saleem
A high-belief low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear non-Gaussian systems. The methodology assumes the availability of real-time process measurements, the definition of a set of fault indicators and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm, set within a Bayesian Inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVMs are founded on the principle of Structural Risk Minimization (SRM), which seeks a good trade-off between low empirical risk and small capacity. The key features of SVMs are the use of non-linear kernels, the absence of local minima, the sparseness of the solution and the capacity control obtained by optimizing the margin. The Bayesian Inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis. Additional levels of inference provide the much coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel Anomaly Detector is suggested based on LS-SVM machines. The proposed scheme uses only baseline data to construct a 1-class LS-SVM machine which, when presented with online data, is able to distinguish between normal behavior
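The computational appeal of LS-SVMs is that training reduces to a single linear system. A sketch of one common formulation with an RBF kernel follows; gamma and sigma are illustrative hyperparameters, and the Bayesian inference levels described above are omitted:

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Train an LS-SVM (labels +/-1) by solving one KKT linear system:
        [ 0   1^T           ] [ b     ]   [ 0 ]
        [ 1   K + I / gamma ] [ alpha ] = [ y ]
    with an RBF kernel K."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    n = len(y)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]

    def predict(Xnew):
        sqn = np.sum((Xnew[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        return np.sign(np.exp(-sqn / (2 * sigma ** 2)) @ alpha + b)
    return predict
```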
NASA Astrophysics Data System (ADS)
Fan, Hong; Zhu, Anfeng; Zhang, Weixia
2015-12-01
In order to meet the need for rapid positioning of 12315 complaints, and aiming at the natural language expression of telephone complaints, a semantic retrieval framework is proposed based on natural language parsing and geographical-name ontology reasoning. Within it, a search result ranking and recommendation algorithm is proposed that considers both geo-name conceptual similarity and spatial geometric-relation similarity. The experiments show that this method can assist operators in quickly locating 12315 complaints, increasing customer satisfaction with industry and commerce services.
Munsell, Brent C.; Wee, Chong-Yaw; Keller, Simon S.; Weber, Bernd; Elger, Christian; da Silva, Laura Angelica Tomaz; Nesland, Travis; Styner, Martin; Shen, Dinggang; Bonilha, Leonardo
2015-01-01
The objective of this study is to evaluate machine learning algorithms aimed at predicting surgical treatment outcomes in groups of patients with temporal lobe epilepsy (TLE) using only the structural brain connectome. Specifically, the brain connectome is reconstructed using white matter fiber tracts from presurgical diffusion tensor imaging. To achieve our objective, a two-stage connectome-based prediction framework is developed that gradually selects a small number of abnormal network connections that contribute to the surgical treatment outcome, and in each stage a linear kernel operation is used to further improve the accuracy of the learned classifier. Using a 10-fold cross validation strategy, the first stage in the connectome-based framework is able to separate patients with TLE from normal controls with 80% accuracy, and the second stage is able to correctly predict the surgical treatment outcome of patients with TLE with 70% accuracy. Compared to existing state-of-the-art methods that use VBM data, the proposed two-stage connectome-based prediction framework is a suitable alternative with comparable prediction performance. Our results additionally show that machine learning algorithms that exclusively use structural connectome data can predict treatment outcomes in epilepsy with accuracy similar to that of “expert-based” clinical decisions. In summary, using the unprecedented information provided in the brain connectome, machine learning algorithms may uncover pathological changes in brain network organization and improve outcome forecasting in the context of epilepsy. PMID:26054876
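A schematic stand-in for the two-stage idea, assuming vectorized connectome edges as features: connection selection is nested inside a linear-kernel classifier pipeline so the 10-fold cross validation stays unbiased. The data here are random placeholders, not the study's DTI-derived features:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hypothetical stand-ins: rows = subjects, columns = vectorized connectome edges,
# labels = outcome class.
rng = np.random.default_rng(0)
X, y = rng.standard_normal((60, 4000)), rng.integers(0, 2, 60)

# Select a small number of discriminative connections, then a linear-kernel SVM;
# selection happens inside each CV fold, avoiding information leakage.
clf = make_pipeline(SelectKBest(f_classif, k=50), SVC(kernel="linear"))
print(cross_val_score(clf, X, y, cv=10).mean())
```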
A motion detection-based framework for improving image quality of CCTV security systems.
Chiu, Shih-Hsuan; Lu, Chuan-Pin; Wen, Che-Yen
2006-09-01
Closed-circuit television (CCTV) security systems have been widely used in banks, convenience stores, and other facilities. They are useful to deter crime and depict criminal activity. However, while CCTV cameras that provide an overview of a monitored region are useful for criminal investigation, their images are not always adequate for object identification (e.g., vehicle numbers, persons, etc.). In this paper, we propose a framework for improving the image quality of CCTV security systems. This framework is based upon motion detection technology. There are two cameras in the framework: one camera (camera A) is fixed focus with a zoom lens for moving-object detection, and the other (camera B) is variable focus with an auto-zoom lens to capture higher resolution images of the objects of interest. When camera A detects a moving object in the monitored area, camera B, driven by an auto-zoom focus control algorithm, takes a higher resolution image of the object of interest. Experimental results show that the proposed framework can improve the likelihood that images obtained from stationary unattended CCTV cameras are sufficient to enable law enforcement officials to identify suspects and other objects of interest. PMID:17018091
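A minimal OpenCV sketch of camera A's role, assuming a single fixed camera and frame differencing for motion detection; capture_with_camera_b is a placeholder for the auto-zoom focus control of camera B, which is not specified here:

```python
import cv2

def capture_with_camera_b(x, y, w, h):
    """Placeholder: steer/zoom camera B to the detected region of interest."""
    print("camera B target region:", x, y, w, h)

def monitor(camera_a_index=0, area_threshold=500):
    """Frame-differencing motion detection on the fixed overview camera (camera A)."""
    cap = cv2.VideoCapture(camera_a_index)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)                       # inter-frame change
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > area_threshold:          # ignore small noise blobs
                capture_with_camera_b(*cv2.boundingRect(c))
        prev = gray
```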
Microwave-based medical diagnosis using particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Modiri, Arezoo
This dissertation proposes and investigates a novel architecture intended for microwave-based medical diagnosis (MBMD). Furthermore, this investigation proposes novel modifications of the particle swarm optimization algorithm for achieving enhanced convergence performance. MBMD has been investigated through a variety of innovative techniques in the literature since the 1990s and has shown significant promise in early detection of some specific health threats. In comparison to X-ray- and gamma-ray-based diagnostic tools, MBMD does not expose patients to ionizing radiation; and due to the maturity of microwave technology, it lends itself to miniaturization of the supporting systems. This modality has been shown to be effective in detecting breast malignancy, and hence, this study focuses on the same modality. A novel radiator device and detection technique is proposed and investigated in this dissertation. As expected, hardware design and implementation are of paramount importance in such a study, and a good deal of research, analysis, and evaluation has been done in this regard, which will be reported in the ensuing chapters of this dissertation. It is noteworthy that an important element of any detection system is the algorithm used for extracting signatures. Herein, the strong intrinsic potential of swarm-intelligence-based algorithms in solving complicated electromagnetic problems is brought to bear. This task is accomplished through addressing both mathematical and electromagnetic problems. These problems are called benchmark problems throughout this dissertation, since they have known answers. After evaluating the performance of the algorithm for the chosen benchmark problems, the algorithm is applied to the MBMD tumor detection problem. The chosen benchmark problems have already been tackled by solution techniques other than the particle swarm optimization (PSO) algorithm, the results of which can be found in the literature. However, due to the relatively high level
A vertical handoff decision algorithm based on ARMA prediction model
NASA Astrophysics Data System (ADS)
Li, Ru; Shen, Jiao; Chen, Jun; Liu, Qiuhuan
2011-12-01
With the development of computer technology and the increasing demand for mobile communications, next generation wireless networks will be composed of various wireless networks (e.g., WiMAX and WiFi). Vertical handoff is a key technology of next generation wireless networks, and during the vertical handoff procedure the handoff decision is a crucial issue for efficient mobility. Based on the autoregressive moving average (ARMA) prediction model, we propose a vertical handoff decision algorithm which aims to improve the performance of vertical handoff and avoid unnecessary handoffs. Based on the current received signal strength (RSS) and the previous RSS, the proposed approach adopts the ARMA model to predict the next RSS. It then uses the predicted RSS to determine whether to trigger the link-layer triggering event and complete the vertical handoff. The simulation results indicate that the proposed algorithm outperforms the threshold-based RSS scheme in both handoff performance and the number of handoffs.
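A minimal sketch of the decision step, assuming an RSS history in dBm; the ARMA order, sample values, and handoff threshold are illustrative (statsmodels' ARIMA with d = 0 serves as the ARMA fit):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def predict_next_rss(rss_history, order=(2, 0, 1)):
    """Fit an ARMA(p, q) model (d = 0) to the RSS history and forecast one step."""
    fit = ARIMA(np.asarray(rss_history, dtype=float), order=order).fit()
    return float(fit.forecast(steps=1)[0])

rss = [-72.1, -71.8, -73.0, -72.4, -74.1, -75.0, -74.6, -76.2]  # dBm, illustrative
if predict_next_rss(rss) < -75.0:   # hypothetical handoff threshold
    print("trigger link-layer event and start vertical handoff")
```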
Simulation-based optimization framework for reuse of agricultural drainage water in irrigation.
Allam, A; Tawfik, A; Yoshimura, C; Fleifle, A
2016-05-01
A simulation-based optimization framework for agricultural drainage water (ADW) reuse has been developed through the integration of a water quality model (QUAL2Kw) and a genetic algorithm. This framework was applied to the Gharbia drain in the Nile Delta, Egypt, in summer and winter 2012. First, the water quantity and quality of the drain was simulated using the QUAL2Kw model. Second, uncertainty analysis and sensitivity analysis based on Monte Carlo simulation were performed to assess QUAL2Kw's performance and to identify the most critical variables for determination of water quality, respectively. Finally, a genetic algorithm was applied to maximize the total reuse quantity from seven reuse locations with the condition not to violate the standards for using mixed water in irrigation. The water quality simulations showed that organic matter concentrations are critical management variables in the Gharbia drain. The uncertainty analysis showed the reliability of QUAL2Kw to simulate water quality and quantity along the drain. Furthermore, the sensitivity analysis showed that the 5-day biochemical oxygen demand, chemical oxygen demand, total dissolved solids, total nitrogen and total phosphorous are highly sensitive to point source flow and quality. Additionally, the optimization results revealed that the reuse quantities of ADW can reach 36.3% and 40.4% of the available ADW in the drain during summer and winter, respectively. These quantities meet 30.8% and 29.1% of the drainage basin requirements for fresh irrigation water in the respective seasons. PMID:26921569
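A sketch of the optimization stage only, assuming a real-coded GA in which each gene is the reuse quantity at one of the seven locations; quality_penalty stands in for the QUAL2Kw-driven check against the mixed-water irrigation standards, which is not reproduced here:

```python
import numpy as np

def ga_maximize_reuse(quality_penalty, lb, ub, pop=50, gens=200, pm=0.1):
    """Real-coded GA: maximize total reuse across locations, penalizing solutions
    whose simulated mixed-water quality violates the irrigation standards."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    rng = np.random.default_rng(0)
    P = rng.uniform(lb, ub, (pop, len(lb)))

    def fitness(x):
        return x.sum() - quality_penalty(x)   # reuse volume minus violation penalty

    for _ in range(gens):
        f = np.array([fitness(x) for x in P])
        idx = rng.integers(0, pop, (pop, 2))  # binary tournament selection
        parents = np.where((f[idx[:, 0]] > f[idx[:, 1]])[:, None],
                           P[idx[:, 0]], P[idx[:, 1]])
        w = rng.random(parents.shape)         # arithmetic crossover with reversed mates
        children = w * parents + (1 - w) * parents[::-1]
        mutate = rng.random(children.shape) < pm
        children[mutate] += rng.normal(0.0, 0.05, children.shape)[mutate]
        P = np.clip(children, lb, ub)
    f = np.array([fitness(x) for x in P])
    return P[np.argmax(f)]
```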
Entropy-Based Search Algorithm for Experimental Design
NASA Astrophysics Data System (ADS)
Malakar, N. K.; Knuth, K. H.
2011-03-01
The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
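The selection rule at the core of this approach is compact: compute the Shannon entropy of the predicted outcome distribution for each candidate experiment and choose the maximum. A minimal sketch (the nested-sampling search over a continuous experiment space is omitted):

```python
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def most_informative(candidate_probs):
    """candidate_probs[i] = distribution over discrete outcomes predicted by the
    probable models for experiment i; pick the experiment with maximal entropy."""
    return max(range(len(candidate_probs)),
               key=lambda i: shannon_entropy(candidate_probs[i]))

# Models nearly agree on the first experiment, disagree on the second:
print(most_informative([[0.9, 0.1], [0.5, 0.5]]))   # -> 1 (the informative one)
```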
Point of Care and Factor Concentrate-Based Coagulation Algorithms
Theusinger, Oliver M.; Stein, Philipp; Levy, Jerrold H.
2015-01-01
In recent years it has become evident that the use of blood products should be reduced whenever possible. There is increasing evidence of serious adverse events, including higher mortality and morbidity, related to transfusions. The use of point of care (POC) devices integrated in algorithms is one of the important mechanisms to limit blood product exposure. Any type of algorithm, especially the POC-based ones, allows goal-directed transfusion of blood products and even better targeted factor concentrate substitution. Different types of algorithms in different surgical settings (cardiac surgery, trauma, liver surgery, etc.) have been established, with growing interest in their use as they offer objective therapy for the management and reduction of blood product use. The use of POC devices with evidence-based algorithms is important in the bleeding patient, independent of the bleeding's origin (traumatic vs. surgical). The use of factor concentrates compared to the classical blood products can be cost-saving, beneficial for the patient, and in agreement with the WHO-requested standard of care. The empiric and uncontrolled use of blood products such as fresh frozen plasma, red blood cells, and platelets without POC monitoring should no longer be followed, given the actual evidence in the literature. Furthermore, the use of factor concentrates may provide better outcomes and potential for cost saving. PMID:26019707
A VGI data integration framework based on linked data model
NASA Astrophysics Data System (ADS)
Wan, Lin; Ren, Rongrong
2015-12-01
This paper addresses a geographic data integration and sharing method for multiple online VGI data sets. We propose a semantic-enabled framework for an online VGI-source cooperative application environment to solve a target class of geospatial problems. Based on linked data technologies, one of the core components of the Semantic Web, we construct relationship links among geographic features distributed across diverse VGI platforms using linked data modeling methods, deploy these semantic-enabled entities on the web, and eventually form an interconnected geographic data network to support cooperative geospatial information application across multiple VGI data sources. The mapping and transformation from VGI sources to the RDF linked data model is presented to guarantee a unified data representation model among the different online social geographic data sources. We propose a mixed strategy, combining spatial distance similarity and feature name attribute similarity, as the measure for comparing and matching geographic features across the various VGI data sets. Our work focuses on applying Markov logic networks to interlink identical entities across different VGI-based linked data sets; the automatic generation of the co-reference object identification model for geographic linked data is discussed in detail. The result is a large geographic linked data network spanning loosely coupled VGI web sites. Experiments built on the framework and an evaluation of the method show that the framework is reasonable and practicable.
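A minimal rdflib sketch of the linking step, assuming the two similarity scores have already been computed; the URIs, weight, and threshold are illustrative, and the Markov logic network inference is not shown:

```python
from rdflib import Graph, URIRef
from rdflib.namespace import OWL

def link_if_coreferent(g, feat_a, feat_b, spatial_sim, name_sim, w=0.6, tau=0.8):
    """Assert owl:sameAs between two VGI features when the mixed similarity
    (spatial distance similarity + feature-name similarity) passes a threshold."""
    score = w * spatial_sim + (1 - w) * name_sim
    if score >= tau:
        g.add((feat_a, OWL.sameAs, feat_b))
    return score

g = Graph()
a = URIRef("http://example.org/osm/way/123")        # illustrative feature URIs
b = URIRef("http://example.org/flickr/place/456")
link_if_coreferent(g, a, b, spatial_sim=0.9, name_sim=0.85)
print(g.serialize(format="turtle"))
```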
Framework Support For Knowledge-Based Software Development
NASA Astrophysics Data System (ADS)
Huseth, Steve
1988-03-01
The advent of personal engineering workstations has brought substantial information processing power to the individual programmer. Advanced tools and environment capabilities supporting the software lifecycle are just beginning to become generally available. However, many of these tools are addressing only part of the software development problem by focusing on rapid construction of self-contained programs by a small group of talented engineers. Additional capabilities are required to support the development of large programming systems where a high degree of coordination and communication is required among large numbers of software engineers, hardware engineers, and managers. A major player in realizing these capabilities is the framework supporting the software development environment. In this paper we discuss our research toward a Knowledge-Based Software Assistant (KBSA) framework. We propose the development of an advanced framework containing a distributed knowledge base that can support the data representation needs of tools, provide environmental support for the formalization and control of the software development process, and offer a highly interactive and consistent user interface.
Novel tree-based algorithms for computational electromagnetics
NASA Astrophysics Data System (ADS)
Aronsson, Jonatan
Tree-based methods have wide applications for solving large-scale problems in electromagnetics, astrophysics, quantum chemistry, fluid mechanics, acoustics, and many more areas. This thesis focuses on their applicability for solving large-scale problems in electromagnetics. The Barnes-Hut (BH) algorithm and the Fast Multipole Method (FMM) are introduced along with a survey of important previous work. The required theory for applying those methods to problems in electromagnetics is presented with particular emphasis on the capacitance extraction problem and broadband full-wave scattering. A novel single source approximation is introduced for approximating clusters of electrostatic sources in multi-layered media. The approximation is derived by matching the spectra of the field in the vicinity of the stationary phase point. Combined with the BH algorithm, a new algorithm is shown to be an efficient method for evaluating electrostatic fields in multilayered media. Specifically, the new BH algorithm is well suited for fast capacitance extraction. The BH algorithm is also adapted to the scalar Helmholtz kernel by using the same methodology to derive an accurate single source approximation. The result is a fast algorithm that is suitable for accelerating the solution of the Electric Field Integral Equation (EFIE) for electrically small structures. Finally, a new version of FMM is presented that is stable and efficient from the low frequency regime to mid-range frequencies. By applying analytical derivatives to the field expansions at the observation points, the proposed method can rapidly evaluate vectorial kernels that arise in the FMM-accelerated solution of EFIE, the Magnetic Field Integral Equation (MFIE), and the Combined Field Integral Equation (CFIE).
Knowledge-based recognition algorithm for long-range infrared bridge images
NASA Astrophysics Data System (ADS)
Cao, Zhiguo; Sun, Qi; Zhang, Tianxu
2001-10-01
The recognition of bridges in long-range infrared images presents a number of problems due to the complexity of the background, high noise interference, the small size of the target and the low contrast between a bridge and its surrounding water area. To counter these barriers, we have developed a new knowledge-based recognition algorithm. It first detects candidate bridge sub-regions and then focuses on them. According to the degree to which they match our pre-built framework, different credits are given, so that false objects are excluded and the real target is eventually found. The experimental results demonstrate that our localized method is consistently superior to the traditional global algorithms adopted by most former researchers.
NASA Technical Reports Server (NTRS)
Schallhorn, Paul; Majumdar, Alok
2012-01-01
This paper describes a finite volume based numerical algorithm that allows multi-dimensional computation of fluid flow within a system level network flow analysis. There are several thermo-fluid engineering problems where higher fidelity solutions are needed that are not within the capacity of system level codes. The proposed algorithm will allow NASA's Generalized Fluid System Simulation Program (GFSSP) to perform multi-dimensional flow calculation within the framework of GFSSP's typical system level flow network consisting of fluid nodes and branches. The paper presents several classical two-dimensional fluid dynamics problems that have been solved by GFSSP's multi-dimensional flow solver. The numerical solutions are compared with the analytical and benchmark solutions of Poiseuille flow, Couette flow and flow in a driven cavity.
Lunar Crescent Detection Based on Image Processing Algorithms
NASA Astrophysics Data System (ADS)
Fakhar, Mostafa; Moalem, Peyman; Badri, Mohamad Ali
2014-11-01
For many years lunar crescent visibility has been studied by astronomers. Different criteria have been used to predict and evaluate the visibility status of new Moon crescents. Powerful equipment such as telescopes and binoculars has changed the capability of observations. Most conventional statistical criteria made wrong predictions when new observations (based on modern equipment) were reported. In order to verify such reports and modify the criteria, not only should the previous statistical parameters be considered, but also new and effective parameters such as high magnification, the contour effect, low signal to noise, eyestrain and weather conditions. In this paper a new method is presented for lunar crescent detection based on the processing of lunar crescent images. The method includes two main steps: first, an image processing algorithm that improves the signal to noise ratio and detects lunar crescents based on the circular Hough transform (CHT); second, an algorithm based on image histogram processing to detect the crescent visually. The final decision is made by comparing the results of the visual and CHT algorithms. In order to evaluate the proposed method, a database of 31 images is tested. The illustrated method can distinguish and extract crescents that even the eye cannot recognize. The proposed method significantly reduces artifacts, increases SNR and can be used easily both by astronomers and by those who want to develop a new criterion as a reliable method to verify empirical observations.
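A sketch of the CHT step, assuming an 8-bit grayscale image; the blur kernel and Hough parameters are illustrative. The point is that a thin crescent still lies on a circular limb, so a circle transform applies:

```python
import cv2
import numpy as np

gray = cv2.imread("crescent.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
blur = cv2.GaussianBlur(gray, (9, 9), 2)                  # improve SNR before the transform

circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1.5, minDist=200,
                           param1=100, param2=20, minRadius=40, maxRadius=200)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(gray, (x, y), r, 255, 1)               # overlay the detected limb
```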
Staff line detection and revision algorithm based on subsection projection and correlation algorithm
NASA Astrophysics Data System (ADS)
Yang, Yin-xian; Yang, Ding-li
2013-03-01
Staff line detection plays a key role in OMR technology and is a precondition for the subsequent segmentation and recognition of music sheets. To handle the horizontal inclination and curvature of staff lines and the vertical inclination of the image, which often occur in music scores, an improved approach based on subsection projection is put forward to detect the original staff lines and revise them, in an effort to implement staff line detection more successfully. Experimental results show that the presented algorithm can detect and revise staff lines quickly and effectively.
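A minimal sketch of the subsection projection idea, assuming a binarized score image with ink pixels equal to 1; the number of strips and the peak threshold are illustrative, and the revision step that chains candidates across strips is only indicated:

```python
import numpy as np

def staff_lines_by_subsection(binary, n_sections=8, rel_peak=0.5):
    """Split the page into vertical strips and take a horizontal projection in each,
    so inclined or curved staff lines still produce sharp per-strip peaks."""
    h, w = binary.shape
    lines = []
    for s in range(n_sections):
        strip = binary[:, s * w // n_sections:(s + 1) * w // n_sections]
        proj = strip.sum(axis=1)                              # ink count per row
        rows = np.flatnonzero(proj > rel_peak * proj.max())   # candidate line rows
        lines.append(rows)
    return lines   # neighboring strips can then be chained to revise skewed lines
```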
A test sheet generating algorithm based on intelligent genetic algorithm and hierarchical planning
NASA Astrophysics Data System (ADS)
Gu, Peipei; Niu, Zhendong; Chen, Xuting; Chen, Wei
2013-03-01
In recent years, computer-based testing has become an effective method to evaluate students' overall learning progress so that appropriate guiding strategies can be recommended. Research has been done to develop intelligent test assembling systems which can automatically generate test sheets based on given parameters of test items. A good multisubject test sheet depends on not only the quality of the test items but also the construction of the sheet. Effective and efficient construction of test sheets according to multiple subjects and criteria is a challenging problem. In this paper, a multi-subject test sheet generation problem is formulated and a test sheet generating approach based on intelligent genetic algorithm and hierarchical planning (GAHP) is proposed to tackle this problem. The proposed approach utilizes hierarchical planning to simplify the multi-subject testing problem and adopts genetic algorithm to process the layered criteria, enabling the construction of good test sheets according to multiple test item requirements. Experiments are conducted and the results show that the proposed approach is capable of effectively generating multi-subject test sheets that meet specified requirements and achieve good performance.
Multi-Objective Community Detection Based on Memetic Algorithm
2015-01-01
Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single objective optimization methods have intrinsic drawbacks in identifying multiple significant community structures, some methods formulate community detection as a multi-objective problem and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability but have difficulty in locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining a multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. Firstly, nondominated solutions generated by evolutionary operations and solutions in the dominant population are set as initial individuals for the local search procedure. Then, a new direction vector called the pseudonormal vector is proposed to integrate the two objective functions into a single fitness function. Finally, a network-specific local search strategy based on the label propagation rule is employed to search for locally optimal solutions efficiently. Extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. Firstly, experiments on the influence of the local search procedure demonstrate that it can speed up convergence to better partitions and make the algorithm more stable. Secondly, comparisons with a set of classic community detection methods illustrate that the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks, which is beneficial for analyzing networks at multiple resolution levels. PMID:25932646
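For context, the label-propagation pass that the local search builds on is available in NetworkX; the sketch below runs it on a standard test graph and scores the partition by modularity, while the multi-objective evolutionary loop itself is omitted:

```python
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()

# One label-propagation pass: the cheap local-search building block; the
# surrounding selection/variation/Pareto-ranking machinery is not shown.
partition = list(community.label_propagation_communities(G))
print(len(partition), community.modularity(G, partition))
```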
A framework for grouping nanoparticles based on their measurable characteristics
Sayes, Christie M; Smith, P Alex; Ivanov, Ivan V
2013-01-01
Background: There is a need to take a broader look at nanotoxicological studies. Eventually, the field will demand that some generalizations be made. To begin to address this issue, we posed a question: are metal colloids on the nanometer-size scale a homogeneous group? In general, most people can agree that the physicochemical properties of nanomaterials can be linked and related to their induced toxicological responses. Methods: The focus of this study was to determine how a set of selected physicochemical properties of five specific metal-based colloidal materials on the nanometer-size scale – silver, copper, nickel, iron, and zinc – could be used as nanodescriptors that facilitate the grouping of these metal-based colloids. Results: The example of the framework pipeline processing provided in this paper shows the utility of specific statistical and pattern recognition techniques in grouping nanoparticles based on experimental data about their physicochemical properties. Interestingly, the results of the analyses suggest that a seemingly homogeneous group of nanoparticles could be separated into sub-groups depending on interdependencies observed in their nanodescriptors. Conclusion: These particles represent an important category of nanomaterials that are currently mass produced. Each has been reputed to induce toxicological and/or cytotoxicological effects. Here, we propose an experimental methodology coupled with mathematical and statistical modeling that can serve as a prototype for a rigorous framework that aids in the ability to group nanomaterials together and to facilitate the subsequent analysis of trends in data based on quantitative modeling of nanoparticle-specific structure–activity relationships. The computational part of the proposed framework is rather general and can be applied to other groups of nanomaterials as well. PMID:24098078
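A sketch of the kind of pipeline the framework describes: standardize the measured nanodescriptors, project them with PCA, and cut a hierarchical clustering into candidate sub-groups. The descriptor matrix here is a random placeholder, not the study's data:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows = colloid samples (e.g., Ag, Cu, Ni, Fe, Zn batches), columns = measured
# nanodescriptors (size, zeta potential, dissolution, ...); values are stand-ins.
rng = np.random.default_rng(0)
X = rng.standard_normal((25, 6))

Z = StandardScaler().fit_transform(X)           # put descriptors on a common scale
scores = PCA(n_components=2).fit_transform(Z)   # pattern-recognition projection
groups = fcluster(linkage(scores, method="ward"), t=3, criterion="maxclust")
print(groups)                                   # sub-group label per sample
```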
Artificial Bee Colony Algorithm Based on Information Learning.
Gao, Wei-Feng; Huang, Ling-Ling; Liu, San-Yang; Dai, Cai
2015-12-01
Inspired by the fact that the division of labor and cooperation play extremely important roles in the human history development, this paper develops a novel artificial bee colony algorithm based on information learning (ILABC, for short). In ILABC, at each generation, the whole population is divided into several subpopulations by the clustering partition and the size of subpopulation is dynamically adjusted based on the last search experience, which results in a clear division of labor. Furthermore, the two search mechanisms are designed to facilitate the exchange of information in each subpopulation and between different subpopulations, respectively, which acts as the cooperation. Finally, the comparison results on a number of benchmark functions demonstrate that the proposed method performs competitively and effectively when compared to the selected state-of-the-art algorithms. PMID:25594992
A survey on evolutionary algorithm based hybrid intelligence in bioinformatics.
Li, Shan; Kang, Liying; Zhao, Xing-Ming
2014-01-01
With the rapid advance in genomics, proteomics, metabolomics, and other types of omics technologies during the past decades, a tremendous amount of data related to molecular biology has been produced. It is becoming a big challenge for the bioinformatists to analyze and interpret these data with conventional intelligent techniques, for example, support vector machines. Recently, the hybrid intelligent methods, which integrate several standard intelligent approaches, are becoming more and more popular due to their robustness and efficiency. Specifically, the hybrid intelligent approaches based on evolutionary algorithms (EAs) are widely used in various fields due to the efficiency and robustness of EAs. In this review, we give an introduction about the applications of hybrid intelligent methods, in particular those based on evolutionary algorithm, in bioinformatics. In particular, we focus on their applications to three common problems that arise in bioinformatics, that is, feature selection, parameter estimation, and reconstruction of biological networks. PMID:24729969
A model-based framework for the detection of spiculated masses on mammography
Sampat, Mehul P.; Bovik, Alan C.; Whitman, Gary J.; Markey, Mia K.
2008-05-15
The detection of lesions on mammography is a repetitive and fatiguing task. Thus, computer-aided detection systems have been developed to aid radiologists. The detection accuracy of current systems is much higher for clusters of microcalcifications than for spiculated masses. In this article, the authors present a new model-based framework for the detection of spiculated masses. The authors have invented a new class of linear filters, spiculated lesion filters, for the detection of converging lines or spiculations. These filters are highly specific narrowband filters, which are designed to match the expected structures of spiculated masses. As a part of this algorithm, the authors have also invented a novel technique to enhance spicules on mammograms. This entails filtering in the Radon domain. They have also developed models to reduce the false positives due to normal linear structures. A key contribution of this work is that the parameters of the detection algorithm are based on measurements of the physical properties of spiculated masses. The results of the detection algorithm are presented in the form of free-response receiver operating characteristic curves on images from the Mammographic Image Analysis Society and Digital Database for Screening Mammography databases.
A framework for probabilistic atlas-based organ segmentation
NASA Astrophysics Data System (ADS)
Dong, Chunhua; Chen, Yen-Wei; Foruzan, Amir Hossein; Han, Xian-Hua; Tateyama, Tomoko; Wu, Xing
2016-03-01
Probabilistic atlases based on human anatomical structure have been widely used for organ segmentation. The challenge is how to register the probabilistic atlas to the patient volume. Additionally, a conventional probabilistic atlas may introduce a bias toward the specific patient study because it is built from a single reference. Hence, we propose a template matching framework based on an iterative probabilistic atlas for organ segmentation. First, we find a bounding box for the organ based on human anatomical localization. Then, the probabilistic atlas is used as a template to find the organ in this bounding box using template matching technology. Comparing our method with conventional and recently developed atlas-based methods, the results show an improvement in segmentation accuracy for multiple organs (p < 0.00001).
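A 2-D sketch of the matching step, assuming the atlas slice is smaller than the bounding-box region it slides over; skimage's normalized cross-correlation plays the role of the template matching technology (the iterative atlas refinement is omitted):

```python
import numpy as np
from skimage.feature import match_template

def locate_organ(region, atlas_template):
    """Slide the probabilistic atlas (as a template) over the bounding-box region
    and return the offset with maximal normalized cross-correlation."""
    ncc = match_template(region, atlas_template)
    return np.unravel_index(np.argmax(ncc), ncc.shape), ncc.max()
```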
A novel pipeline based FPGA implementation of a genetic algorithm
NASA Astrophysics Data System (ADS)
Thirer, Nonel
2014-05-01
To solve problems for which an analytical solution is not available, more and more bio-inspired computation techniques have been applied in recent years. One efficient algorithm is the Genetic Algorithm (GA), which imitates the biological evolution process, finding the solution through the mechanism of "natural selection", where the strong have higher chances to survive. A genetic algorithm is an iterative procedure which operates on a population of individuals called "chromosomes" or "possible solutions" (usually represented by a binary code). The GA performs several processes on the population individuals to produce a new population, as in biological evolution. To provide a high speed solution, pipelined FPGA hardware implementations are used, with an n-stage pipeline for an n-phase genetic algorithm. FPGA pipeline implementations are constrained by the different execution times of each stage and by the FPGA chip resources. To minimize these difficulties, we propose a bio-inspired technique that modifies the crossover step by using non-identical twins. Thus two chosen chromosomes (parents) build two new chromosomes (children), not only one as in the classical GA. We analyze the contribution of this method to reducing the execution time in asynchronous and synchronous pipelines, and also the possibility of a cheaper FPGA implementation by using smaller populations. The full hardware architecture of an FPGA implementation for our target ALTERA development card is presented and analyzed.
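A software analogue of the proposed operator: one uniform-mask crossover emits two complementary children ("non-identical twins"), so each pass through the crossover stage doubles its output compared with a single-child scheme:

```python
import numpy as np

def twin_crossover(parent_a, parent_b, rng):
    """Produce non-identical twins: two complementary recombinations of the parents."""
    mask = rng.random(parent_a.shape) < 0.5
    child1 = np.where(mask, parent_a, parent_b)
    child2 = np.where(mask, parent_b, parent_a)   # the complementary twin
    return child1, child2

rng = np.random.default_rng(0)
a, b = rng.integers(0, 2, 16), rng.integers(0, 2, 16)   # binary chromosomes
print(twin_crossover(a, b, rng))
```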
Historical feature pattern extraction based network attack situation sensing algorithm.
Zeng, Yong; Liu, Dacheng; Lei, Zhou
2014-01-01
The situation sequence contains a series of complicated, multivariate random trends that are sudden and uncertain, and whose underlying principles are difficult for traditional algorithms to recognize and describe. To solve this problem, estimating the parameters of a very long situation sequence is essential but also very difficult, so this paper proposes a situation prediction method based on historical feature pattern extraction (HFPE). First, the HFPE algorithm searches the recorded historical situation sequence for similar indications and weighs the link intensity between each observed indication and its subsequent effect. It then calculates the probability that a given effect reappears under the current indication and makes a prediction after weighting. Meanwhile, the HFPE method provides an evolution algorithm that derives the prediction deviation from the viewpoints of pattern and accuracy; this algorithm can continuously improve the adaptability of HFPE through gradual fine-tuning. The method preserves the rules in the sequence as far as possible, needs no data preprocessing, and can continuously track and adapt to variations in the situation sequence. PMID:24892054
A disturbance based control/structure design algorithm
NASA Technical Reports Server (NTRS)
Mclaren, Mark D.; Slater, Gary L.
1989-01-01
Some authors take a classical approach to the simultaneous structure/control optimization by attempting to simultaneously minimize the weighted sum of the total mass and a quadratic form, subject to all of the structural and control constraints. Here, the optimization will be based on the dynamic response of a structure to an external unknown stochastic disturbance environment. Such a response to excitation approach is common to both the structural and control design phases, and hence represents a more natural control/structure optimization strategy than relying on artificial and vague control penalties. The design objective is to find the structure and controller of minimum mass such that all the prescribed constraints are satisfied. Two alternative solution algorithms are presented which have been applied to this problem. Each algorithm handles the optimization strategy and the imposition of the nonlinear constraints in a different manner. Two controller methodologies, and their effect on the solution algorithm, will be considered. These are full state feedback and direct output feedback, although the problem formulation is not restricted solely to these forms of controller. In fact, although full state feedback is a popular choice among researchers in this field (for reasons that will become apparent), its practical application is severely limited. The controller/structure interaction is inserted by the imposition of appropriate closed-loop constraints, such as closed-loop output response and control effort constraints. Numerical results will be obtained for a representative flexible structure model to illustrate the effectiveness of the solution algorithms.
An ORCID based synchronization framework for a national CRIS ecosystem.
Mendes Moreira, João; Cunha, Alcino; Macedo, Nuno
2015-01-01
PTCRIS (Portuguese Current Research Information System) is a program aiming at the creation and sustained development of a national integrated information ecosystem, to support research management according to the best international standards and practices. This paper reports on the experience of designing and prototyping a synchronization framework for PTCRIS based on ORCID (Open Researcher and Contributor ID). This framework embraces the "input once, re-use often" principle, and will enable a substantial reduction of the research output management burden by allowing automatic information exchange between the various national systems. The design of the framework followed best practices in rigorous software engineering, namely well-established principles in the research field of consistency management, and relied on formal analysis techniques and tools for its validation and verification. The notion of consistency between the services was formally specified and discussed with the stakeholders before the technical aspects on how to preserve said consistency were explored. Formal specification languages and automated verification tools were used to analyze the specifications and generate usage scenarios, useful for validation with the stakeholder and essential to certificate compliant services. PMID:26308833
Digital watermarking algorithm based on HVS in wavelet domain
NASA Astrophysics Data System (ADS)
Zhang, Qiuhong; Xia, Ping; Liu, Xiaomei
2013-10-01
As a new technique for protecting the copyright of digital productions, the digital watermark has drawn extensive attention. A digital watermarking algorithm based on the discrete wavelet transform (DWT) and properties of the human visual system is presented in this paper, followed by several attack analyses. Experimental results show that the proposed watermarking scheme is invisible, robust to cropping, and also has good robustness to cutting, compression, filtering, and noise addition.
NCUBE - A clustering algorithm based on a discretized data space
NASA Technical Reports Server (NTRS)
Eigen, D. J.; Northouse, R. A.
1974-01-01
Cluster analysis involves the unsupervised grouping of data. The process provides an automatic procedure for generating known training samples for pattern classification. NCUBE, the clustering algorithm presented, is based upon the concept of imposing a gridwork on the data space. The NCUBE computer implementation of this concept provides an easily derived form of piecewise linear discrimination. This piecewise linear discrimination permits the separation of some types of data groups that are not linearly separable.
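A minimal sketch of the grid idea is shown below: each point is mapped to the index of the hypercube cell it falls in, and points sharing a cell form a coarse group. The cell size is an assumption, and the merging of adjacent occupied cells that NCUBE would need for piecewise linear discrimination is left out; this is an illustrative simplification, not the published implementation.

```python
import numpy as np
from collections import defaultdict

def grid_cluster(points, cell_size):
    """Impose a gridwork on the data space: points are grouped by the
    hypercube cell they fall into (illustrative simplification)."""
    clusters = defaultdict(list)
    for p in points:
        cell = tuple(np.floor(p / cell_size).astype(int))  # per-axis cell index
        clusters[cell].append(p)
    return clusters

pts = np.array([[0.1, 0.2], [0.3, 0.1], [5.0, 5.1], [5.2, 4.9]])
for cell, members in grid_cluster(pts, cell_size=1.0).items():
    print(cell, len(members))
```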
Physics-based signal processing algorithms for micromachined cantilever arrays
Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W
2013-11-19
A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The method utilizes the deflection of a micromachined cantilever, which represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever to produce a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.
DNA-based watermarks using the DNA-Crypt algorithm
Heider, Dominik; Barnekow, Angelika
2007-01-01
Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA, or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate, and the stability over time, which is represented by the number of generations. In silico experiments using Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms. PMID:17535434
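The least-significant-base principle admits a compact illustration. In a four-fold degenerate codon family (for example GGA/GGC/GGG/GGT, all encoding glycine), the third "wobble" base can be rewritten without changing the protein, so it can carry two bits per codon. The toy sketch below shows only this carrier idea; DNA-Crypt's binary encryption, mutation-correction codes, and fuzzy controller are not reproduced.

```python
# Toy sketch of the least-significant-base idea: in a four-fold degenerate
# codon family the wobble (third) base is free to carry two payload bits.
BITS_TO_BASE = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

def embed(codons, bits):
    """Replace the wobble base of each codon with two payload bits."""
    pairs = [bits[i:i + 2] for i in range(0, len(bits), 2)]
    return [c[:2] + BITS_TO_BASE[p] for c, p in zip(codons, pairs)]

def extract(codons):
    return ''.join(BASE_TO_BITS[c[2]] for c in codons)

carrier = ['GGA', 'GGA', 'GGA', 'GGA']   # four glycine codons
marked = embed(carrier, '10110100')
print(marked, extract(marked))           # protein unchanged, payload recovered
```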
Fast wavelet based algorithms for linear evolution equations
NASA Technical Reports Server (NTRS)
Engquist, Bjorn; Osher, Stanley; Zhong, Sifen
1992-01-01
A class of fast wavelet-based algorithms was devised for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin, which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations with spatially varying coefficients. A significant speedup over standard methods is obtained when the approach is applied to hyperbolic equations in one space dimension and parabolic equations in multiple dimensions.
New image watermarking algorithm based on mixed scales wavelets
NASA Astrophysics Data System (ADS)
El Hajji, Mohamed; Douzi, Hassan; Mammass, Driss; Harba, Rachid; Ros, Frédéric
2012-01-01
Watermarking is a technology for embedding secure information in digital content such as audio, images, and video. An effective watermarking algorithm is proposed based on a discrete wavelet transform (DWT) using mixed scales representation. The watermark is embedded in dominant blocks using quantization index modulation (QIM). These dominant blocks correspond to the texture and contour zones. Experimental results demonstrate that the proposed method is robust against various attacks and improves watermark invisibility.
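Quantization index modulation itself is easy to sketch: each selected coefficient is re-quantized onto one of two interleaved lattices, and the lattice choice encodes one bit. The fragment below is a self-contained, hedged illustration; the step size delta and the coefficient values are assumptions, and the paper's dominant-block selection in the DWT domain is not modeled.

```python
import numpy as np

def qim_embed(coeffs, bits, delta=8.0):
    """Embed one bit per coefficient: quantize onto the lattice with
    offset 0 (bit 0) or delta/2 (bit 1)."""
    c = np.asarray(coeffs, dtype=float)
    offsets = np.where(np.asarray(bits) == 1, delta / 2.0, 0.0)
    return np.round((c - offsets) / delta) * delta + offsets

def qim_extract(coeffs, delta=8.0):
    """Decode each bit by which lattice the coefficient lies closer to."""
    c = np.asarray(coeffs, dtype=float)
    d0 = np.abs(c - np.round(c / delta) * delta)
    d1 = np.abs(c - (np.round((c - delta / 2) / delta) * delta + delta / 2))
    return (d1 < d0).astype(int)

bits = np.array([1, 0, 1, 1, 0])
marked = qim_embed([13.2, -4.7, 25.1, 8.8, -16.3], bits)
print(qim_extract(marked))   # recovers the embedded bits
```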
A genetic algorithm for ground-based telescope observation scheduling
NASA Astrophysics Data System (ADS)
Mahoney, William; Veillet, Christian; Thanjavur, Karun
2012-09-01
A prototype genetic algorithm (GA) is being developed to provide assisted and ultimately automated observation scheduling functionality. Harnessing the logic developed for manual queue preparation, the GA can build suitable sets of queues for the potential combinations of environmental and atmospheric conditions. Evolving one step further, the GA can select the most suitable observation for any moment in time, based on allocated priorities, agency balances, and real-time sky conditions.
A background suppression algorithm for infrared image based on shearlet
NASA Astrophysics Data System (ADS)
Zou, Ruibin; Shi, Caicheng; Qin, Xiao
2015-04-01
Because of the large distance between the infrared imaging system and the target, or the wide field of the infrared optics, the imaging area of an infrared target is only a few pixels, appearing as an isolated spot in the field of view, so only intensity (gray value) information is available for target detection. At the same time, infrared images have many shortcomings, such as heavy noise and interference, so a small target is often buried in the background and noise and is relatively difficult to detect; in general, reliable detection of such a target cannot be made from a single unprocessed frame. In summary, the core of infrared small target detection algorithms is background and noise suppression based on a single frame image. Addressing infrared small target detection and the above problems, a shearlet-based background suppression algorithm for infrared images is proposed. The algorithm exploits the advantages of shearlets, which are specifically designed to represent anisotropic and directional information at various scales and provide an optimally efficient representation of images, greatly reducing the amount of information to be handled. The paper first introduces the principle of shearlets, then presents the theory of the algorithm and explains the implementation steps, and finally gives simulation results. Matlab simulations of this method on several sets of infrared images conform to the theory of shearlet-based background suppression. The results show that this method can effectively suppress the background, improve the SCR, and achieve a satisfactory effect against sky backgrounds. The method is very effective for target detection, identification, and tracking in future infrared imaging systems.
NASA Astrophysics Data System (ADS)
Peckham, Scott D.; Kelbert, Anna; Hill, Mary C.; Hutton, Eric W. H.
2016-05-01
Component-based modeling frameworks make it easier for users to access, configure, couple, run and test numerical models. However, they do not typically provide tools for uncertainty quantification or data-based model verification and calibration. To better address these important issues, modeling frameworks should be integrated with existing, general-purpose toolkits for optimization, parameter estimation and uncertainty quantification. This paper identifies and then examines the key issues that must be addressed in order to make a component-based modeling framework interoperable with general-purpose packages for model analysis. As a motivating example, one of these packages, DAKOTA, is applied to a representative but nontrivial surface process problem of comparing two models for the longitudinal elevation profile of a river to observational data. Results from a new mathematical analysis of the resulting nonlinear least squares problem are given and then compared to results from several different optimization algorithms in DAKOTA.
NASA Astrophysics Data System (ADS)
Tadono, T.; Hashimoto, S.; Onosato, M.; Hori, M.
2012-11-01
Change detection is a fundamental approach in the utilization of satellite remote sensing imagery, especially in multi-temporal analysis such as extracting areas damaged by a natural disaster. Recently, the amount of data obtained by Earth observation satellites has increased significantly owing to the increasing number and types of observing sensors, the enhancement of their spatial resolution, and improvements in their data processing systems. In applications for disaster monitoring, in particular, fast and accurate analysis of broad geographical areas is required to facilitate efficient rescue efforts, so robust automatic image interpretation is expected to be necessary. Several algorithms for automatic change detection have been proposed in the past; however, they still lack robustness across multiple purposes, independence from particular instruments, and accuracy better than manual interpretation. We are trying to develop a framework for automatic image interpretation using ontology-based knowledge representation. This framework permits the description, accumulation, and use of knowledge drawn from image interpretation. Local relationships among certain concepts defined in the ontology are described as knowledge modules and are collected in the knowledge base. The knowledge representation uses a Bayesian network as a tool to describe various types of knowledge in a uniform manner. Knowledge modules are synthesized and used for target-specified inference. This paper shows results obtained by applying the framework, without any modification or tuning, to two types of disasters.
An optimized hybrid encode based compression algorithm for hyperspectral image
NASA Astrophysics Data System (ADS)
Wang, Cheng; Miao, Zhuang; Feng, Weiyi; He, Weiji; Chen, Qian; Gu, Guohua
2013-12-01
Compression is a kernel procedure in hyperspectral image processing because the massive data volume creates great difficulty in data storage and transmission. In this paper, a novel hyperspectral compression algorithm based on hybrid encoding, which combines band-optimized grouping with the wavelet transform, is proposed. Given the characteristics of the correlation coefficients between adjacent spectral bands, an optimized band grouping and reference frame selection method is first utilized to group bands adaptively. Then, according to the number of bands in each group, redundancy in the spatial and spectral domains is removed through spatial-domain entropy coding and a minimum-residual-based linear prediction method. Embedded code streams are then obtained by encoding the residual images using an improved embedded zerotree wavelet based SPIHT encoding method. In the experiments, hyperspectral images collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) were used to validate the performance of the proposed algorithm. The results show that the proposed approach achieves good performance in reconstructed image quality and computational complexity. The average peak signal to noise ratio (PSNR) is increased by 0.21-0.81 dB compared with other off-the-shelf algorithms at the same compression ratio.
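As a rough illustration of the band-grouping step, the sketch below computes the correlation coefficient between each pair of adjacent bands and starts a new group wherever it drops below a threshold. The threshold value and the synthetic data cube are assumptions; the paper's reference-frame selection and subsequent coding stages are not modeled.

```python
import numpy as np

def group_bands(cube, threshold=0.95):
    """Split a hyperspectral cube (bands, rows, cols) into groups of
    spectrally correlated adjacent bands; the threshold is illustrative."""
    flat = cube.reshape(cube.shape[0], -1)
    groups, current = [], [0]
    for b in range(1, flat.shape[0]):
        r = np.corrcoef(flat[b - 1], flat[b])[0, 1]   # adjacent-band correlation
        if r >= threshold:
            current.append(b)
        else:
            groups.append(current)
            current = [b]
    groups.append(current)
    return groups

rng = np.random.default_rng(1)
base = rng.random((2, 16, 16))
cube = np.concatenate([base[[0]] + 0.01 * rng.random((5, 16, 16)),
                       base[[1]] + 0.01 * rng.random((5, 16, 16))])
print(group_bands(cube))   # typically two groups of five bands each
```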
Image-based surface matching algorithm oriented to structural biology.
Merelli, Ivan; Cozzi, Paolo; D'Agostino, Daniele; Clematis, Andrea; Milanesi, Luciano
2011-01-01
Emerging technologies for structure matching based on surface descriptions have demonstrated their effectiveness in many research fields. In particular, they can be successfully applied to in silico studies of structural biology. Protein activities, in fact, are related to the external characteristics of these macromolecules and the ability to match surfaces can be important to infer information about their possible functions and interactions. In this work, we present a surface-matching algorithm, based on encoding the outer morphology of proteins in images of local description, which allows us to establish point-to-point correlations among macromolecular surfaces using image-processing functions. Discarding methods relying on biological analysis of atomic structures and expensive computational approaches based on energetic studies, this algorithm can successfully be used for macromolecular recognition by employing local surface features. Results demonstrate that the proposed algorithm can be employed both to identify surface similarities in context of macromolecular functional analysis and to screen possible protein interactions to predict pairing capability. PMID:21566253
Assessing excellence in translational cancer research: a consensus based framework
2013-01-01
Background It takes several years on average to translate basic research findings into clinical research and eventually deliver patient benefits. An expert-based excellence assessment can help improve this process by: identifying high performing Comprehensive Cancer Centres; best practices in translational cancer research; improving the quality and efficiency of the translational cancer research process. This can help build networks of excellent Centres by aiding focused partnerships. In this paper we report on a consensus building exercise that was undertaken to construct an excellence assessment framework for translational cancer research in Europe. Methods We used mixed methods to reach consensus: a systematic review of existing translational research models critically appraised for suitability in performance assessment of Cancer Centres; a survey among European stakeholders (researchers, clinicians, patient representatives and managers) to score a list of potential excellence criteria, a focus group with selected representatives of survey participants to review and rescore the excellence criteria; an expert group meeting to refine the list; an open validation round with stakeholders and a critical review of the emerging framework by an independent body: a committee formed by the European Academy of Cancer Sciences. Results The resulting excellence assessment framework has 18 criteria categorized in 6 themes. Each criterion has a number of questions/sub-criteria. Stakeholders favoured using qualitative excellence criteria to evaluate the translational research “process” rather than quantitative criteria or judging only the outputs. Examples of criteria include checking if the Centre has mechanisms that can be rated as excellent for: involvement of basic researchers and clinicians in translational research (quality of supervision and incentives provided to clinicians to do a PhD in translational research) and well designed clinical trials based on ground
A constitutive model for magnetostriction based on thermodynamic framework
NASA Astrophysics Data System (ADS)
Ho, Kwangsoo
2016-08-01
This work presents a general framework for the continuum-based formulation of dissipative materials with magneto-mechanical coupling in the viewpoint of irreversible thermodynamics. The thermodynamically consistent model developed for the magnetic hysteresis is extended to include the magnetostrictive effect. The dissipative and hysteretic response of magnetostrictive materials is captured through the introduction of internal state variables. The evolution rate of magnetostrictive strain as well as magnetization is derived from thermodynamic and dissipative potentials in accordance with the general principles of thermodynamics. It is then demonstrated that the constitutive model is competent to describe the magneto-mechanical behavior by comparing simulation results with the experimental data reported in the literature.
An Overview of NCA-Based Algorithms for Transcriptional Regulatory Network Inference
Wang, Xu; Alshawaqfeh, Mustafa; Dang, Xuan; Wajid, Bilal; Noor, Amina; Qaraqe, Marwa; Serpedin, Erchin
2015-01-01
In systems biology, the regulation of gene expressions involves a complex network of regulators. Transcription factors (TFs) represent an important component of this network: they are proteins that control which genes are turned on or off in the genome by binding to specific DNA sequences. Transcription regulatory networks (TRNs) describe gene expressions as a function of regulatory inputs specified by interactions between proteins and DNA. A complete understanding of TRNs helps to predict a variety of biological processes and to diagnose, characterize and eventually develop more efficient therapies. Recent advances in biological high-throughput technologies, such as DNA microarray data and next-generation sequence (NGS) data, have made the inference of transcription factor activities (TFAs) and TF-gene regulations possible. Network component analysis (NCA) represents an efficient computational framework for TRN inference from the information provided by microarrays, ChIP-on-chip and the prior information about TF-gene regulation. However, NCA suffers from several shortcomings. Recently, several algorithms based on the NCA framework have been proposed to overcome these shortcomings. This paper first overviews the computational principles behind NCA, and then, it surveys the state-of-the-art NCA-based algorithms proposed in the literature for TRN reconstruction.
GACEM: Genetic Algorithm Based Classifier Ensemble in a Multi-sensor System
Xu, Rongwu; He, Lin
2008-01-01
Multi-sensor systems (MSS) have been increasingly applied in pattern classification, while the search for the optimal classification framework is still an open problem. The development of the classifier ensemble seems to provide a promising solution. The classifier ensemble is a learning paradigm in which many classifiers are jointly used to solve a problem, and it has been proven an effective method for enhancing classification ability. In this paper, by introducing the concepts of Meta-feature (MF) and Trans-function (TF) for describing the relationship between the nature and the measurement of the observed phenomenon, classification in a multi-sensor system can be unified in the classifier ensemble framework. An approach called Genetic Algorithm based Classifier Ensemble in Multi-sensor system (GACEM) is then presented, in which a genetic algorithm is utilized to optimize both the selection of feature subsets and the decision combination simultaneously. GACEM first trains a number of classifiers based on different combinations of feature vectors and then selects the classifiers whose weight is higher than a pre-set threshold to make up the ensemble. An empirical study shows that, compared with conventional feature-level voting and decision-level voting, GACEM not only achieves better and more robust performance but also simplifies the system markedly.
Creating "Intelligent" Ensemble Averages Using a Process-Based Framework
NASA Astrophysics Data System (ADS)
Baker, Noel; Taylor, Patrick
2014-05-01
The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is used to add value to individual model projections and construct a consensus projection. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, individual models reproduce certain climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequal weighting multi-model ensembles. The intention is to produce improved ("intelligent") unequal-weight ensemble averages. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Several climate process metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument in combination with surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing the equal-weighted ensemble average and an ensemble weighted using the process-based metric. Additionally, this study investigates the dependence of the metric weighting scheme on the climate state using a combination of model simulations including a non-forced preindustrial control experiment, historical simulations, and
NASA Astrophysics Data System (ADS)
Paton, F. L.; Maier, H. R.; Dandy, G. C.
2014-08-01
Cities around the world are increasingly involved in climate action and mitigating greenhouse gas (GHG) emissions. However, in the context of responding to climate pressures in the water sector, very few studies have investigated the impacts of changing water use on GHG emissions, even though water resource adaptation often requires greater energy use. Consequently, reducing GHG emissions, and thus focusing on both mitigation and adaptation responses to climate change in planning and managing urban water supply systems, is necessary. Furthermore, the minimization of GHG emissions is likely to conflict with other objectives. Thus, applying a multiobjective evolutionary algorithm (MOEA), which can evolve an approximation of entire trade-off (Pareto) fronts of multiple objectives in a single run, would be beneficial. Consequently, the main aim of this paper is to incorporate GHG emissions into a MOEA framework to take into consideration both adaptation and mitigation responses to climate change for a city's water supply system. The approach is applied to a case study based on Adelaide's southern water supply system to demonstrate the framework's practical management implications. Results indicate that trade-offs exist between GHG emissions and risk-based performance, as well as GHG emissions and economic cost. Solutions containing rainwater tanks are expensive, while GHG emissions greatly increase with increased desalinated water supply. Consequently, while desalination plants may be good adaptation options to climate change due to their climate-independence, rainwater may be a better mitigation response, albeit more expensive.
Performance evaluation of PCA-based spike sorting algorithms.
Adamos, Dimitrios A; Kosmidis, Efstratios K; Theophilidis, George
2008-09-01
Deciphering the electrical activity of individual neurons from multi-unit noisy recordings is critical for understanding complex neural systems. A widely used spike sorting algorithm is being evaluated for single-electrode nerve trunk recordings. The algorithm is based on principal component analysis (PCA) for spike feature extraction. In the neuroscience literature it is generally assumed that the use of the first two or most commonly three principal components is sufficient. We estimate the optimum PCA-based feature space by evaluating the algorithm's performance on simulated series of action potentials. A number of modifications are made to the open source nev2lkit software to enable systematic investigation of the parameter space. We introduce a new metric to define clustering error considering over-clustering more favorable than under-clustering as proposed by experimentalists for our data. Both the program patch and the metric are available online. Correlated and white Gaussian noise processes are superimposed to account for biological and artificial jitter in the recordings. We report that the employment of more than three principal components is in general beneficial for all noise cases considered. Finally, we apply our results to experimental data and verify that the sorting process with four principal components is in agreement with a panel of electrophysiology experts. PMID:18565614
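The core PCA-plus-clustering pipeline evaluated here can be sketched in a few lines with scikit-learn, shown below on synthetic spikes. The component count, cluster count, and waveform templates are assumptions, and the authors' nev2lkit modifications and clustering-error metric are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def sort_spikes(waveforms, n_components=4, n_units=3):
    """Project detected spike waveforms onto the leading principal
    components, then cluster in that feature space."""
    feats = PCA(n_components=n_components).fit_transform(waveforms)
    return KMeans(n_clusters=n_units, n_init=10, random_state=0).fit_predict(feats)

# Synthetic spikes: three templates plus additive noise, 40 samples each
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 40)
templates = [np.sin(2 * np.pi * f * t) * np.exp(-4 * t) for f in (2, 3, 5)]
spikes = np.vstack([tpl + 0.05 * rng.standard_normal(40)
                    for tpl in templates for _ in range(50)])
print(np.bincount(sort_spikes(spikes)))   # roughly 50 spikes per unit
```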
Matched field localization based on CS-MUSIC algorithm
NASA Astrophysics Data System (ADS)
Guo, Shuangle; Tang, Ruichun; Peng, Linhui; Ji, Xiaopeng
2016-04-01
The problems caused by too few or too many snapshots and by coherent sources in underwater acoustic positioning are considered. A matched field localization algorithm based on CS-MUSIC (Compressive Sensing Multiple Signal Classification) is proposed, built on a sparse mathematical model of underwater positioning. The signal matrix is calculated through the SVD (Singular Value Decomposition) of the observation matrix. The observation matrix in the sparse mathematical model is replaced by the signal matrix, and a new concise sparse mathematical model is obtained, which reduces both the scale of the localization problem and the noise level; the new sparse mathematical model is then solved by the CS-MUSIC algorithm, a combination of the CS (Compressive Sensing) and MUSIC (Multiple Signal Classification) methods. The algorithm proposed in this paper can effectively overcome the difficulties caused by correlated sources and a shortage of snapshots, and it can also reduce the time complexity and noise level of the localization problem by using the SVD of the observation matrix when the number of snapshots is large, which will be proved in this paper.
A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing.
Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian
2016-01-01
Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires a huge amount of human and material resources. To solve this high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowdsourced information provided by a large number of users as they walk through buildings as the source of location fingerprint data. Through the variation characteristics of users' smartphone sensors, indoor anchors (doors) are identified and their locations are regarded as reference positions for the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users' explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
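Assuming the AP-Cluster step refers to affinity-propagation clustering, the sketch below shows how representative fingerprints (exemplars) could be extracted from crowdsourced RSSI scans with scikit-learn. The access-point count and signal values are synthetic stand-ins, and the anchor identification and map-linking steps are not modeled.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Crowdsourced RSSI fingerprints (rows: scans, cols: access points);
# the values are synthetic stand-ins for user-contributed data.
rng = np.random.default_rng(42)
spots = np.array([[-40, -70, -85], [-80, -45, -60], [-65, -90, -50]])
fingerprints = np.vstack([s + rng.normal(0, 2, size=(30, 3)) for s in spots])

# Affinity propagation picks exemplars without a preset cluster count
ap = AffinityPropagation(random_state=0).fit(fingerprints)
representatives = fingerprints[ap.cluster_centers_indices_]
print(len(representatives), "representative fingerprints")
```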
An Enhanced Differential Evolution Algorithm Based on Multiple Mutation Strategies
Xiang, Wan-li; Meng, Xue-lei; An, Mei-qing; Li, Yin-zhen; Gao, Ming-xia
2015-01-01
The differential evolution algorithm is a simple yet efficient metaheuristic for global optimization over continuous spaces. However, standard DE suffers from premature convergence, especially DE/best/1/bin. In order to exploit the direction guidance of the best individual in DE/best/1/bin while avoiding local traps, an enhanced differential evolution algorithm based on multiple mutation strategies, named EDE, is proposed in this paper. The EDE algorithm integrates an initialization technique, opposition-based learning initialization, to improve the initial solution quality; a new combined mutation strategy composed of DE/current/1/bin together with DE/pbest/1/bin to accelerate standard DE and prevent it from clustering around the global best individual; and a perturbation scheme to further avoid premature convergence. In addition, we introduce two linear time-varying functions, which decide which solution search equation is chosen at the mutation and perturbation phases, respectively. Experimental results on twenty-five benchmark functions show that EDE is far better than standard DE. In further comparisons with five other state-of-the-art approaches, EDE is still superior to, or at least equal to, these methods on most of the benchmark functions. PMID:26609304
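For orientation, the sketch below implements the two mutation strategies being combined (DE/current/1 and DE/pbest/1) together with binomial crossover. The control parameters F, CR, and p, the 50/50 strategy choice, and the sphere test function are assumptions; the paper's time-varying selection functions, opposition-based initialization, and perturbation scheme are omitted.

```python
import numpy as np

def de_mutation(pop, i, F=0.5, p=0.2, rng=None, fitness=None):
    """Combined mutation: DE/current/1 uses the current vector as base;
    DE/pbest/1 uses a randomly chosen top-p individual (toy sketch)."""
    rng = rng or np.random.default_rng()
    n = len(pop)
    r1, r2 = rng.choice([j for j in range(n) if j != i], 2, replace=False)
    if rng.random() < 0.5:                       # DE/current/1
        base = pop[i]
    else:                                        # DE/pbest/1
        top = np.argsort(fitness)[:max(1, int(p * n))]
        base = pop[rng.choice(top)]
    return base + F * (pop[r1] - pop[r2])

def binomial_crossover(target, mutant, CR=0.9, rng=None):
    rng = rng or np.random.default_rng()
    mask = rng.random(target.size) < CR
    mask[rng.integers(target.size)] = True       # keep at least one mutant gene
    return np.where(mask, mutant, target)

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 4))
fit = np.sum(pop ** 2, axis=1)                   # sphere test function
trial = binomial_crossover(pop[0], de_mutation(pop, 0, rng=rng, fitness=fit), rng=rng)
print(trial)
```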
Performance of a community detection algorithm based on semidefinite programming
NASA Astrophysics Data System (ADS)
Ricci-Tersenghi, Federico; Javanmard, Adel; Montanari, Andrea
2016-03-01
The problem of detecting communities in a graph is perhaps one of the most studied inference problems, given its simplicity and widespread diffusion among several disciplines. A very common benchmark for this problem is the stochastic block model, or planted partition problem, where a phase transition takes place in the detection of the planted partition as the signal-to-noise ratio is changed. Optimal detection algorithms based on spectral methods exist, but we show that these are extremely sensitive to slight modifications in the generative model. Recently Javanmard, Montanari and Ricci-Tersenghi [1] used statistical physics arguments and numerical simulations to show that finding communities in the stochastic block model via semidefinite programming is quasi optimal. Further, the resulting semidefinite relaxation can be solved efficiently and is very robust with respect to changes in the generative model. In this paper we study in detail several practical aspects of this new algorithm based on semidefinite programming for the detection of the planted partition. The algorithm turns out to be very fast, allowing the solution of problems with O(10^5) variables in a few seconds on a laptop computer.
PACS model based on digital watermarking and its core algorithms
NASA Astrophysics Data System (ADS)
Que, Dashun; Wen, Xianlin; Chen, Bi
2009-10-01
A PACS model based on digital watermarking is proposed by analyzing medical image features and PACS requirements from the point of view of information security, its core being a digital watermarking server and the corresponding processing modules. Two kinds of digital watermarking algorithms are studied: a non-region-of-interest (NROI) digital watermarking algorithm based on the wavelet domain and block means, and a reversible watermarking algorithm based on extended difference expansion and a pseudo-random matrix. The former is a robust lossy watermarking scheme in which embedding in the NROI via the wavelet transform protects the focus area (ROI) of the image, and the block-mean approach enhances the anti-attack capability; the latter is a fragile lossless watermarking scheme that is simple to implement and can realize tamper localization effectively, with the pseudo-random matrix enhancing the correlation and security between pixels. Extensive experimental work has been completed for this paper, including the realization of the digital watermarking PACS model, the watermarking processing module and its anti-attack experiments, the digital watermarking server, and network transmission simulation experiments on medical images. Theoretical analysis and experimental results show that the designed PACS model can effectively ensure the confidentiality, authenticity, integrity, and security of medical image information.
Genetic Algorithm based Decentralized PI Type Controller: Load Frequency Control
NASA Astrophysics Data System (ADS)
Dwivedi, Atul; Ray, Goshaidas; Sharma, Arun Kumar
2016-05-01
This work presents a design of a decentralized PI-type Linear Quadratic (LQ) controller based on a genetic algorithm (GA). The proposed design technique allows considerable flexibility in defining the control objectives; it requires no knowledge of the system matrices and avoids solving the algebraic Riccati equation. To illustrate the results of this work, a load-frequency control problem is considered. Simulation results reveal that the proposed GA-based scheme is an alternative and attractive approach to the load-frequency control problem from both performance and design points of view.
Linear vs. function-based dose algorithm designs.
Stanford, N
2011-03-01
The performance requirements prescribed in IEC 62387-1, 2007 recommend linear, additive algorithms for external dosimetry [IEC. Radiation protection instrumentation--passive integrating dosimetry systems for environmental and personal monitoring--Part 1: General characteristics and performance requirements. IEC 62387-1 (2007)]. Neither of the two current standards for performance of external dosimetry in the USA addresses the additivity of dose results [American National Standards Institute, Inc. American National Standard for dosimetry personnel dosimetry performance criteria for testing. ANSI/HPS N13.11 (2009); Department of Energy. Department of Energy Standard for the performance testing of personnel dosimetry systems. DOE/EH-0027 (1986)]. While there are significant merits to adopting a purely linear solution to estimating doses from multi-element external dosemeters, differences in the standards result in technical as well as perception challenges in designing a single algorithm approach that will satisfy both IEC and USA external dosimetry performance requirements. The dosimetry performance testing standards in the USA do not incorporate type testing, but rely on biennial performance tests to demonstrate proficiency in a wide range of pure and mixed fields. The test results are used exclusively to judge the system proficiency, with no specific requirements on the algorithm design. Technical challenges include mixed beta/photon fields with a beta dose as low as 0.30 mSv mixed with 0.05 mSv of low-energy photons. Perception-based challenges, resulting from over 20 y of experience with this type of performance testing in the USA, include the common belief that the overall quality of the dosemeter performance can be judged from performance in pure fields. This paper presents synthetic testing results from currently accredited function-based algorithms and newly developed purely linear algorithms. A comparison of the performance data highlights the benefits of each
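The additivity at issue is easy to see in code: a purely linear algorithm reports each dose quantity as a fixed weighted sum of the element readings, so the dose assessed for a mixed field equals the sum of the doses assessed for its components. The weight matrix and readings below are hypothetical illustrations, not taken from any accredited system.

```python
import numpy as np

# Purely linear algorithm: reported doses are a fixed weight matrix times
# the element readings, so mixed-field responses are additive by design.
W = np.array([[0.9, 0.1, 0.0, 0.0],    # shallow dose row (hypothetical)
              [0.2, 0.7, 0.1, 0.0]])   # deep dose row (hypothetical)

def linear_dose(readings):
    return W @ readings

photon = np.array([0.05, 0.05, 0.04, 0.03])   # element readings, photon field
beta = np.array([0.30, 0.02, 0.00, 0.00])     # element readings, beta field
# Additivity: dose(photon + beta) == dose(photon) + dose(beta)
print(linear_dose(photon + beta), linear_dose(photon) + linear_dose(beta))
```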
A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method
Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang
2016-01-01
Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method and has given good results in several experiments. However, this method is fragile in noise, because the proper poles are not easy to obtain at low signal-to-noise ratio (SNR). To eliminate the influence of noise, this paper proposes a matrix pencil based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherence parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm that minimizes the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method achieves better fusion results at low SNR. PMID:26781194
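The pole-estimation step can be sketched with plain numpy: build a Hankel matrix from the samples, keep its dominant row space, and read the poles off the shifted/unshifted pencil. This is a generic matrix pencil illustration under an assumed pencil parameter and a toy two-pole signal; the ICP prediction and iterative fusion stages of the paper are not reproduced.

```python
import numpy as np
from scipy.linalg import hankel

def matrix_pencil_poles(y, n_poles, L=None):
    """Estimate signal poles z_k from samples y[n] ~ sum_k a_k * z_k**n
    via the matrix pencil method: form a Hankel matrix, keep the dominant
    rank-n_poles row space, and solve the shifted/unshifted pencil."""
    N = len(y)
    L = L or N // 2                          # pencil parameter (rule of thumb)
    Y = hankel(y[:N - L], y[N - L - 1:])     # (N-L) x (L+1) Hankel matrix
    _, _, Vh = np.linalg.svd(Y, full_matrices=False)
    W = Vh[:n_poles]                         # dominant row space of Y
    return np.linalg.eigvals(W[:, 1:] @ np.linalg.pinv(W[:, :-1]))

# Two-pole test signal with light noise
n = np.arange(64)
z_true = np.array([np.exp(-0.02 + 1j * 0.6), np.exp(-0.05 + 1j * 1.9)])
rng = np.random.default_rng(0)
y = z_true[0] ** n + 0.7 * z_true[1] ** n + 0.01 * rng.standard_normal(64)
print(np.round(matrix_pencil_poles(y, 2), 3), np.round(z_true, 3))
```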
Clustering-based robust three-dimensional phase unwrapping algorithm.
Arevalillo-Herráez, Miguel; Burton, David R; Lalor, Michael J
2010-04-01
Relatively recent techniques that produce phase volumes have motivated the study of three-dimensional (3D) unwrapping algorithms that inherently incorporate the third dimension into the process. We propose a novel 3D unwrapping algorithm that can be considered to be a generalization of the minimum spanning tree (MST) approach. The technique combines characteristics of some of the most robust existing methods: it uses a quality map to guide the unwrapping process, a region growing mechanism to progressively unwrap the signal, and also cut surfaces to avoid error propagation. The approach has been evaluated in the context of noncontact measurement of dynamic objects, suggesting a better performance than MST-based approaches. PMID:20357860
Reconstruction algorithms for optoacoustic imaging based on fiber optic detectors
NASA Astrophysics Data System (ADS)
Lamela, Horacio; Díaz-Tendero, Gonzalo; Gutiérrez, Rebeca; Gallego, Daniel
2011-06-01
Optoacoustic Imaging (OAI), a novel hybrid imaging technology, offers high contrast, molecular specificity, and excellent resolution to overcome limitations of the current clinical modalities for the detection of solid tumors. The exact time-domain reconstruction formula produces images with excellent resolution but poor contrast. Some approximate time-domain filtered back-projection reconstruction algorithms have been reported to solve this problem. A wavelet transform implementation of the filtering can be used to sharpen object boundaries while simultaneously preserving the high contrast of the reconstructed objects. In this paper, several algorithms based on back projection (BP) techniques are suggested to process OA images in conjunction with signal filtering for ultrasonic point detectors and integral detectors. We apply these techniques first directly to a numerically generated sample image and then to the laser-digitized image of a tissue phantom, obtaining in both cases the best results in resolution and contrast with a wavelet-based filter.
An improved piecewise linear chaotic map based image encryption algorithm.
Hu, Yuping; Zhu, Congxu; Wang, Zhijian
2014-01-01
An image encryption algorithm based on an improved piecewise linear chaotic map (MPWLCM) model is proposed. The algorithm uses the MPWLCM to permute and diffuse the plain image simultaneously. Owing to the sensitivity to initial key values and system parameters and the ergodicity of the chaotic system, two pseudorandom sequences are designed and used in the permutation and diffusion processes. The order of processing pixels does not follow the pixel index; instead, pixels are taken from the beginning and the end alternately. Cipher feedback is introduced in the diffusion process. Test results and security analysis show that the scheme not only achieves good encryption results but also has a key space large enough to resist brute-force attack. PMID:24592159
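A minimal permutation-diffusion sketch driven by a standard PWLCM is given below; the map here is the ordinary piecewise linear chaotic map, not the paper's improved MPWLCM, and the alternating begin/end pixel order and cipher feedback are not reproduced. The key values are arbitrary.

```python
import numpy as np

def pwlcm_step(x, p):
    """One iteration of the piecewise linear chaotic map on (0, 1)."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm_step(1.0 - x, p)      # the map is symmetric about 0.5

def keystream(x0, p, n):
    xs, x = np.empty(n), x0
    for i in range(n):
        x = pwlcm_step(x, p)
        xs[i] = x
    return xs

def encrypt(img, key=(0.3456, 0.1234)):
    """Toy permutation-diffusion cipher driven by two PWLCM streams."""
    flat = img.flatten()
    s1 = keystream(key[0], key[1], flat.size)        # permutation stream
    s2 = keystream(key[1], key[0] / 2, flat.size)    # diffusion stream
    perm = np.argsort(s1)                            # chaotic permutation
    mask = (s2 * 256).astype(np.uint8)
    return (flat[perm] ^ mask).reshape(img.shape), perm

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
cipher, perm = encrypt(img)
print(cipher[0])   # decryption inverts the XOR, then undoes the permutation
```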
Hybrid regularization image restoration algorithm based on total variation
NASA Astrophysics Data System (ADS)
Zhang, Hongmin; Wang, Yan
2013-09-01
To reduce the noise amplification and ripple phenomena that appear in results restored by the traditional Richardson-Lucy deconvolution method, a novel hybrid regularization image restoration algorithm based on total variation is proposed in this paper. The key idea is that hybrid regularization terms are employed according to the characteristics of different regions in the image itself. At the same time, the threshold between the different regularization terms is selected according to the golden section point, which takes into account the human eye's visual perception. Experimental results show that the restoration results of the proposed method are better than those of the total variation Richardson-Lucy algorithm in both PSNR and MSE, and it also has a better visual effect.
Missile placement analysis based on improved SURF feature matching algorithm
NASA Astrophysics Data System (ADS)
Yang, Kaida; Zhao, Wenjie; Li, Dejun; Gong, Xiran; Sheng, Qian
2015-03-01
Precise battle damage assessment using video images to analyze missile placement is a new area of study. This article proposes an improved speeded-up robust features algorithm, named restricted speeded-up robust features (RSURF), which combines the combat application of TV-command-guided missiles with the characteristics of video images. Its restrictions are reflected in two aspects: one is to restrict the extraction area of feature points; the second is to restrict the number of feature points. The process of missile placement analysis based on video images was designed, and a video splicing process and random sample consensus purification were implemented. The RSURF algorithm is shown to have good real-time performance while guaranteeing accuracy.
A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem
Liu, Dong-sheng; Fan, Shu-jiang
2014-01-01
In order to offer mobile customers better service, mobile users should first be classified. Addressing the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user and divide the context into public and private context classes. We then analyze the processes and operators of the algorithm. Finally, we conduct an experiment on mobile users with the algorithm; mobile users can be classified into Basic service, E-service, Plus service, and Total service user classes, and some rules about the mobile users can also be derived. Compared with the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389
Algorithms for effective querying of compound graph-based pathway databases
2009-01-01
Background Graph-based pathway ontologies and databases are widely used to represent data about cellular processes. This representation makes it possible to programmatically integrate cellular networks and to investigate them using the well-understood concepts of graph theory in order to predict their structural and dynamic properties. An extension of this graph representation, namely hierarchically structured or compound graphs, in which a member of a biological network may recursively contain a sub-network of a somehow logically similar group of biological objects, provides many additional benefits for analysis of biological pathways, including reduction of complexity by decomposition into distinct components or modules. In this regard, it is essential to effectively query such integrated large compound networks to extract the sub-networks of interest with the help of efficient algorithms and software tools. Results Towards this goal, we developed a querying framework, along with a number of graph-theoretic algorithms from simple neighborhood queries to shortest paths to feedback loops, that is applicable to all sorts of graph-based pathway databases, from PPIs (protein-protein interactions) to metabolic and signaling pathways. The framework is unique in that it can account for compound or nested structures and ubiquitous entities present in the pathway data. In addition, the queries may be related to each other through "AND" and "OR" operators, and can be recursively organized into a tree, in which the result of one query might be a source and/or target for another, to form more complex queries. The algorithms were implemented within the querying component of a new version of the software tool PATIKAweb (Pathway Analysis Tool for Integration and Knowledge Acquisition) and have proven useful for answering a number of biologically significant questions for large graph-based pathway databases. Conclusion The PATIKA Project Web site is http
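The flavor of these queries is easy to demonstrate on a flat toy graph with networkx: neighborhood, shortest-path, and feedback-loop queries are each one call. The pathway fragment below is illustrative, and the compound (nested) structures and AND/OR query composition that PATIKAweb handles are not modeled.

```python
import networkx as nx

# Toy pathway graph: nodes are molecules, edges are directed interactions.
G = nx.DiGraph()
G.add_edges_from([("EGF", "EGFR"), ("EGFR", "RAS"), ("RAS", "RAF"),
                  ("RAF", "MEK"), ("MEK", "ERK"), ("ERK", "EGFR")])

# Neighborhood query: everything reachable from RAS within 2 steps
print(sorted(nx.ego_graph(G, "RAS", radius=2).nodes))

# Shortest-path query between a source and a target of interest
print(nx.shortest_path(G, "EGF", "ERK"))

# Feedback-loop query: directed cycles through the network
print(list(nx.simple_cycles(G)))
```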
Algorithmic support for commodity-based parallel computing systems.
Leung, Vitus Joseph; Bender, Michael A.; Bunde, David P.; Phillips, Cynthia Ann
2003-10-01
The Computational Plant or Cplant is a commodity-based distributed-memory supercomputer under development at Sandia National Laboratories. Distributed-memory supercomputers run many parallel programs simultaneously. Users submit their programs to a job queue. When a job is scheduled to run, it is assigned to a set of available processors. Job runtime depends not only on the number of processors but also on the particular set of processors assigned to it. Jobs should be allocated to localized clusters of processors to minimize communication costs and to avoid bandwidth contention caused by overlapping jobs. This report introduces new allocation strategies and performance metrics based on space-filling curves and one dimensional allocation strategies. These algorithms are general and simple. Preliminary simulations and Cplant experiments indicate that both space-filling curves and one-dimensional packing improve processor locality compared to the sorted free list strategy previously used on Cplant. These new allocation strategies are implemented in Release 2.0 of the Cplant System Software that was phased into the Cplant systems at Sandia by May 2002. Experimental results then demonstrated that the average number of communication hops between the processors allocated to a job strongly correlates with the job's completion time. This report also gives processor-allocation algorithms for minimizing the average number of communication hops between the assigned processors for grid architectures. The associated clustering problem is as follows: Given n points in R^d, find k points that minimize their average pairwise L1 distance. Exact and approximate algorithms are given for these optimization problems. One of these algorithms has been implemented on Cplant and will be included in Cplant System Software, Version 2.1, to be released. In more preliminary work, we suggest improvements to the scheduler separate from the allocator.
Tatool: a Java-based open-source programming framework for psychological studies.
von Bastian, Claudia C; Locher, André; Ruflin, Michael
2013-03-01
Tatool (Training and Testing Tool) was developed to assist researchers with programming training software, experiments, and questionnaires. Tatool is Java-based, and thus is a platform-independent and object-oriented framework. The architecture was designed to meet the requirements of experimental designs and provides a large number of predefined functions that are useful in psychological studies. Tatool comprises features crucial for training studies (e.g., configurable training schedules, adaptive training algorithms, and individual training statistics) and allows for running studies online via Java Web Start. The accompanying "Tatool Online" platform provides the possibility to manage studies and participants' data easily with a Web-based interface. Tatool is published open source under the GNU Lesser General Public License, and is available at www.tatool.ch. PMID:22723043
A Model-Based Probabilistic Inversion Framework for Wire Fault Detection Using TDR
NASA Technical Reports Server (NTRS)
Schuet, Stefan R.; Timucin, Dogan A.; Wheeler, Kevin R.
2010-01-01
Time-domain reflectometry (TDR) is one of the standard methods for diagnosing faults in electrical wiring and interconnect systems, with a long-standing history focused mainly on hardware development of both high-fidelity systems for laboratory use and portable hand-held devices for field deployment. While these devices can easily assess distance to hard faults such as sustained opens or shorts, their ability to assess subtle but important degradation such as chafing remains an open question. This paper presents a unified framework for TDR-based chafing fault detection in lossy coaxial cables by combining an S-parameter based forward modeling approach with a probabilistic (Bayesian) inference algorithm. Results are presented for the estimation of nominal and faulty cable parameters from laboratory data.
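The probabilistic half of such a framework can be illustrated with a deliberately toy forward model: a single reflection arriving at t = 2d/v, a flat prior over the fault distance d, and Gaussian timing noise. The paper's S-parameter cable model is far richer; the constants below are invented for the sketch.

```python
# Toy grid-based Bayesian inversion of a TDR arrival time for fault distance.
import numpy as np

v = 2.0e8                        # assumed propagation speed, m/s
t_obs = 52.0e-9                  # measured round-trip time, s (made up)
sigma = 1.5e-9                   # assumed timing noise std, s

d_grid = np.linspace(0.0, 10.0, 2001)          # candidate fault distances, m
t_pred = 2.0 * d_grid / v                      # simplified forward model
log_like = -0.5 * ((t_obs - t_pred) / sigma) ** 2
posterior = np.exp(log_like - log_like.max())  # flat prior over the grid
posterior /= posterior.sum()

mean = (d_grid * posterior).sum()
std = np.sqrt(((d_grid - mean) ** 2 * posterior).sum())
print(f"fault distance ~ {mean:.2f} m +/- {std:.2f} m")
```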
CoP Sensing Framework on Web-Based Environment
NASA Astrophysics Data System (ADS)
Mustapha, S. M. F. D. Syed
Web technologies and Web applications have shown similarly high growth rates in daily usage and user acceptance. Web applications have not only penetrated traditional domains such as education and business but have also encroached into areas such as politics, social life, lifestyle, and culture. The emergence of Web technologies has enabled Web access even for the person on the move, through PDAs or mobile phones connected using Wi-Fi, HSDPA, or other communication protocols. These two phenomena are the inducement factors toward the need for building Web-based systems as supporting tools for fulfilling many mundane activities. In doing this, one focus of research has been the implementation challenges in building Web-based support systems for different types of environment. This chapter describes the implementation issues in building a community learning framework that can be supported on a Web-based platform. The Community of Practice (CoP) has been chosen as the community learning theory for the case study and analysis, as it challenges the creativity of the architectural design of the Web system in capturing the presence of learning activities. The chapter describes the characteristics of the CoP, to expose the inherent intricacies of modeling it in a Web-based environment; the evidence of CoP that needs to be traced automatically and unobtrusively, so that the evidence-capturing process does not intrude on the learners; and the technologies needed for full adoption of a Web-based support system for the community learning framework.
NASA Astrophysics Data System (ADS)
Que, Dashun; Li, Gang; Yue, Peng
2007-12-01
An adaptive optimization watermarking algorithm based on the Genetic Algorithm (GA) and the discrete wavelet transform (DWT) is proposed in this paper. The core of this algorithm is a GA-based fitness-function optimization model for digital watermarking. The embedding intensity of the watermark can be adjusted adaptively, and the algorithm effectively ensures the imperceptibility of the watermark while preserving its robustness. The optimization model may provide a new approach to making digital watermarking algorithms resistant to coalition attacks. Many experiments were carried out, including watermark embedding and extraction experiments, experiments on the influence of the weighting factor, experiments embedding the same watermark into different cover images and different watermarks into the same cover image, and a comparative analysis between this optimization algorithm and a human visual system (HVS) based algorithm. The simulation results and further analysis show the effectiveness and advantages of the new algorithm, which is also versatile and extensible, and has better resistance to coalition attacks. Moreover, the robustness and security of the watermarking algorithm are improved by scrambling transformation and chaotic encryption during watermark preprocessing.
A service-based framework for pharmacogenomics data integration
NASA Astrophysics Data System (ADS)
Wang, Kun; Bai, Xiaoying; Li, Jing; Ding, Cong
2010-08-01
Data are central to scientific research and practices. The advance of experimental methods and information retrieval technologies leads to explosive growth of scientific data and databases. However, due to the heterogeneous problems in data formats, structures and semantics, it is hard to integrate the diversified data that grow explosively and analyse them comprehensively. As more and more public databases are accessible through standard protocols like programmable interfaces and Web portals, Web-based data integration becomes a major trend to manage and synthesise data that are stored in distributed locations. Mashup, a Web 2.0 technique, presents a new way to compose content and software from multiple resources. The paper proposes a layered framework for integrating pharmacogenomics data in a service-oriented approach using the mashup technology. The framework separates the integration concerns from three perspectives including data, process and Web-based user interface. Each layer encapsulates the heterogeneous issues of one aspect. To facilitate the mapping and convergence of data, the ontology mechanism is introduced to provide consistent conceptual models across different databases and experiment platforms. To support user-interactive and iterative service orchestration, a context model is defined to capture information of users, tasks and services, which can be used for service selection and recommendation during a dynamic service composition process. A prototype system is implemented and case studies are presented to illustrate the promising capabilities of the proposed approach.
ROOT based Offline and Online Analysis (ROAn): An analysis framework for X-ray detector data
NASA Astrophysics Data System (ADS)
Lauf, Thomas; Andritschke, Robert
2014-10-01
The ROOT based Offline and Online Analysis (ROAn) framework was developed to perform data analysis on data from Depleted P-channel Field Effect Transistor (DePFET) detectors, a type of active pixel sensor developed at the MPI Halbleiterlabor (HLL). ROAn is highly flexible and extensible, thanks to ROOT's features like run-time type information and reflection. ROAn provides an analysis program which allows the user to perform configurable step-by-step analysis on arbitrary data, an associated suite of algorithms focused on DePFET data analysis, and a viewer program for displaying and processing online or offline detector data streams. The analysis program encapsulates the applied algorithms in objects called steps, which produce analysis results. The dependency between results, and thus the order of calculation, is resolved automatically by the program. To optimize algorithms for studying detector effects, analysis parameters are often changed. Such changes of input parameters are detected in subsequent analysis runs and only the necessary recalculations are triggered. This saves time and simultaneously keeps the results consistent. The viewer program offers a configurable Graphical User Interface (GUI) and process chain, which allows the user to adapt the program to different tasks such as offline viewing of file data, online monitoring of running detector systems, or performing online data analysis (histogramming, calibration, etc.). Because of its modular design, ROAn can be extended easily, e.g., adapted to new detector types and analysis processes.
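The "recompute only what changed" behaviour described above can be sketched by caching each step's result under a fingerprint of its parameters and its inputs' fingerprints; the class and method names below are illustrative assumptions, not the ROAn API.

```python
# Minimal sketch: dependency-aware recalculation of analysis steps.
class Step:
    def __init__(self, name, func, params, inputs=()):
        self.name, self.func, self.params, self.inputs = name, func, params, inputs
        self._key = self._result = None

    def fingerprint(self):
        return (self.name, tuple(sorted(self.params.items())),
                tuple(s.fingerprint() for s in self.inputs))

    def result(self):
        key = self.fingerprint()
        if key != self._key:                    # params or upstream changed
            args = [s.result() for s in self.inputs]
            self._result = self.func(*args, **self.params)
            self._key = key
        return self._result                     # otherwise reuse the cache

load = Step("load", lambda scale: [scale * x for x in range(5)], {"scale": 1})
hist = Step("hist", lambda data, nbins: max(data) / nbins, {"nbins": 10}, (load,))
print(hist.result())        # computes both steps
hist.params["nbins"] = 20   # only 'hist' is recomputed on the next call
print(hist.result())
```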
NASA Astrophysics Data System (ADS)
Clapuyt, Francois; Vanacker, Veerle; Van Oost, Kristof
2016-05-01
The combination of UAV-based aerial imagery and the Structure-from-Motion (SfM) algorithm provides an efficient, low-cost and rapid framework for remote sensing and monitoring of dynamic natural environments. This methodology is particularly suitable for repeated topographic surveys in remote or poorly accessible areas. However, temporal analysis of landform topography requires high accuracy of measurements and reproducibility of the methodology, as differencing of digital surface models leads to error propagation. In order to assess the repeatability of the SfM technique, we surveyed a study area characterized by gentle topography with a UAV platform equipped with a standard reflex camera, and varied the focal length of the camera and the location of georeferencing targets between flights. Comparison of the different SfM-derived topography datasets shows that the precision of measurements is on the order of centimetres for identical replications, which highlights the excellent performance of the SfM workflow, all parameters being equal. The measurement error is one order of magnitude larger for 3D topographic reconstructions involving independent sets of ground control points, which results from the fact that the accuracy of the localisation of ground control points strongly propagates into the final results.
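The practical consequence for DEM differencing can be shown in a few lines: per-survey errors combine in quadrature into a limit of detection, below which apparent elevation change is indistinguishable from noise. The error magnitudes below are placeholders, not the study's figures.

```python
# Sketch: DEM-of-difference with a propagated 95% limit of detection.
import numpy as np

dem_t1 = np.random.default_rng(0).normal(100.0, 0.5, (50, 50))
dem_t2 = dem_t1 + 0.15                      # synthetic uniform change, m
sigma1, sigma2 = 0.03, 0.30                 # assumed survey precisions, m

diff = dem_t2 - dem_t1
lod = 1.96 * np.hypot(sigma1, sigma2)       # errors add in quadrature
significant = np.abs(diff) > lod

print(f"limit of detection: {lod:.2f} m")
print(f"cells with detectable change: {significant.mean():.0%}")
```

With centimetre-level precision for both surveys (say 0.03 m each), the same 0.15 m change would clear a roughly 0.08 m detection limit; with decimetre-level errors it vanishes entirely, which is why survey reproducibility dominates change-detection studies.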
Griffiths, Roland R; Johnson, Matthew W
2005-01-01
Hypnotic drugs, including benzodiazepine receptor ligands, barbiturates, antihistamines, and melatonin receptor ligands, are useful in treating insomnia, but clinicians should consider the relative abuse liability of these drugs when prescribing them. Two types of problematic hypnotic self-administration are distinguished. First, recreational abuse occurs when medications are used purposefully for the subjective "high." This type of abuse usually occurs in polydrug abusers, who are most often young and male. Second, chronic quasi-therapeutic abuse is a problematic use of hypnotic drugs in which patients continue long-term use despite medical recommendations to the contrary. Relative abuse liability is defined as an interaction between the relative reinforcing effects (i.e., the capacity to maintain drug self-administration behavior, thereby increasing the likelihood of nonmedical problematic use) and the relative toxicity (i.e., adverse effects having the capacity to harm the individual and/or society). An algorithm is provided that differentiates relative likelihood of abuse and relative toxicity of 19 hypnotic compounds: pentobarbital, methaqualone, diazepam, flunitrazepam, lorazepam, GHB (gamma-hydroxybutyrate, also known as sodium oxybate), temazepam, zaleplon, eszopiclone, triazolam, zopiclone, flurazepam, zolpidem, oxazepam, estazolam, diphenhydramine, quazepam, trazodone, and ramelteon. Factors in the analysis include preclinical and clinical assessment of reinforcing effects, preclinical and clinical assessment of withdrawal, actual abuse, acute sedation/memory impairment, and overdose lethality. The analysis shows that both the likelihood of abuse and the toxicity vary from high to none across these compounds. The primary clinical implication of the range of differences in abuse liability is that concern about recreational abuse, inappropriate long-term use, or adverse effects should not deter physicians from prescribing hypnotics when clinically indicated.
Zhang, Gang; Huang, Yonghui; Zhong, Ling; Ou, Shanxing; Zhang, Yi; Li, Ziping
2015-01-01
Objective. This study aims to establish a model for analyzing the clinical experience of veteran TCM doctors. We propose an ensemble learning based framework to analyze clinical records with ICD-10 label information for effective diagnosis and acupoint recommendation. Methods. We propose an ensemble learning framework for the analysis task. A set of base learners composed of decision trees (DT) and support vector machines (SVM) are trained by bootstrapping the training dataset. The base learners are sorted by accuracy and diversity through the nondominated sort (NDS) algorithm and combined through a deep ensemble learning strategy. Results. We evaluate the proposed method against two currently successful methods on a clinical diagnosis dataset with manually labeled ICD-10 information. ICD-10 label annotation and acupoint recommendation are evaluated for the three methods. The proposed method achieves an accuracy rate of 88.2% ± 2.8% measured by zero-one loss for the first evaluation session and 79.6% ± 3.6% measured by Hamming loss for the second, which are superior to the other two methods. Conclusion. The proposed ensemble model can effectively capture the knowledge and experience implied in historical clinical data records. The computational cost of training the set of base learners is relatively low. PMID:26504897
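A condensed sketch of the recipe, assuming scikit-learn and a synthetic dataset: bootstrap a pool of decision-tree and SVM base learners, then keep a subset scored on accuracy plus pairwise disagreement. The greedy selection and the 0.5 diversity weight are simplifications standing in for the paper's non-dominated sort and deep combination strategy.

```python
# Sketch: bootstrapped DT/SVM pool with accuracy-plus-diversity selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

pool = []
for i in range(20):                          # bootstrap replicas of the data
    Xb, yb = resample(X_tr, y_tr, random_state=i)
    model = DecisionTreeClassifier(random_state=i) if i % 2 else SVC()
    pool.append(model.fit(Xb, yb))

preds = np.array([m.predict(X_va) for m in pool])
acc = (preds == y_va).mean(axis=1)

chosen = [int(acc.argmax())]                 # greedy stand-in for NDS:
while len(chosen) < 7:                       # accuracy plus disagreement
    div = np.array([(preds[i] != preds[chosen]).mean() for i in range(len(pool))])
    score = acc + 0.5 * div
    score[chosen] = -np.inf
    chosen.append(int(score.argmax()))

vote = (preds[chosen].mean(axis=0) > 0.5).astype(int)   # majority vote
print("validation accuracy:", (vote == y_va).mean())
```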
A Reference Based Analysis Framework for Analyzing System Call Traces
Chandola, Varun; Kumar, Vipin; Boriah, Shyam
2010-01-01
Reference based analysis (RBA) is a novel data mining tool for exploring a test data set with respect to a reference data set. The power of RBA lies in its ability to transform any complex data type, such as symbolic sequences and multivariate categorical data instances, into a multivariate continuous representation. The transformed representation not only allows visualization of the complex data, which cannot otherwise be visualized in its original form, but also allows enhanced anomaly detection in the transformed feature space. We demonstrate the application of the RBA framework in analyzing system call traces and show how the transformation results in improved intrusion detection performance over state-of-the-art data-mining-based intrusion detection methods developed for system call traces.
Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization.
Chiu, Chung-Cheng; Ting, Chih-Chung
2016-01-01
Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to an unnatural look and loss of details in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It builds on a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves the enhancement effects of VCEA. CegaHE adjusts the gaps between two gray values based on an adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. It also alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods. PMID:27338412
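For reference, classical histogram equalization, the baseline that CegaHE modifies, fits in a few lines of NumPy; the gap-adjustment equation itself is specific to the paper and not reproduced here.

```python
# Classical histogram equalization: map gray levels through the CDF.
import numpy as np

def equalize(img):
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf_min = cdf[cdf > 0].min()
    cdf = (cdf - cdf_min) / (cdf[-1] - cdf_min)      # normalize to [0, 1]
    return (cdf[img] * 255).clip(0, 255).astype(np.uint8)

img = np.random.default_rng(1).integers(60, 120, (64, 64), dtype=np.uint8)
out = equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())  # contrast stretched
```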
Visual tracking method based on cuckoo search algorithm
NASA Astrophysics Data System (ADS)
Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei
2015-07-01
Cuckoo search (CS) is a new meta-heuristic optimization algorithm that is based on the obligate brood parasitic behavior of some cuckoo species in combination with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS to the visual tracking problem is presented. The relationship between optimization and visual tracking is comparatively studied, and the sensitivity and adjustment of the CS parameters in the tracking system are experimentally studied. To demonstrate the tracking ability of the CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker against six state-of-the-art trackers, namely particle filter, mean-shift, PSO, ensemble tracker, fragments tracker, and compressive tracker, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.
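The heavy-tailed Lévy step at the heart of CS is usually generated with Mantegna's algorithm, as in the simplified sketch below; the quadratic objective stands in for whatever appearance-matching cost a CS tracker would actually optimize, and the nest-abandonment step of full CS is omitted.

```python
# Simplified cuckoo search with Mantegna-style Levy flight steps.
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, rng, beta=1.5):
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)      # heavy-tailed step lengths

def cuckoo_search(cost, dim=2, n_nests=15, iters=200, alpha=0.01):
    rng = np.random.default_rng(0)
    nests = rng.uniform(-5, 5, (n_nests, dim))
    fit = np.array([cost(x) for x in nests])
    for _ in range(iters):
        best = nests[fit.argmin()]
        for i in range(n_nests):
            cand = nests[i] + alpha * levy_step(dim, rng) * (nests[i] - best)
            j = rng.integers(n_nests)       # compare against a random nest
            if cost(cand) < fit[j]:
                nests[j], fit[j] = cand, cost(cand)
    return nests[fit.argmin()]

print(cuckoo_search(lambda x: np.sum(x ** 2)))   # converges near the origin
```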
An Agent-Based Framework for Building Decision Support System in Supply Chain Management
NASA Astrophysics Data System (ADS)
Kazemi, A.; Fazel Zarandi, M. H.
In this study, two scenarios are presented for solving the Production-Distribution Planning Problem (PDPP) within a Decision Support System (DSS) framework. In the first scenario, a Traditional Decision Support System (TDSS) is presented for the PDPP and a Genetic Algorithm (GA) is used to solve it. In the second scenario, a Multi-agent Decision Support System (MADSS) is considered for the PDPP and three algorithms are used to solve it: a Genetic Algorithm (GA), Tabu Search (TS) and Simulated Annealing (SA). An algorithm is then suggested using a multi-agent system and the A-Teams concept. The obtained results reveal that the use of MADSS delivers better solutions.
Detection algorithm of big bandwidth chirp signals based on STFT
NASA Astrophysics Data System (ADS)
Wang, Jinzhen; Wu, Juhong; Su, Shaoying; Chen, Zengping
2014-10-01
To solve the problem of detecting wideband chirp signals under low signal-to-noise ratio (SNR) conditions, an effective signal detection algorithm based on the Short-Time Fourier Transform (STFT) is proposed. Considering that the noise spectrum is dispersed while the chirp spectrum is concentrated, the STFT is performed on the chirp signal with a Gauss window at a fixed step, and the frequencies of the peak spectrum obtained from each STFT correspond to the time of each stepped window. The frequencies are then binarized, and an approach similar to the mnk method in the time domain is used to detect the chirp pulse signal and determine the coarse starting and ending times. Finally, the data segments containing the coarse starting and ending times are subdivided evenly into many segments, on which the STFT is applied respectively; by this, the precise starting and ending times are obtained. Simulations show that when the SNR is higher than -28 dB, the detection probability is not less than 99% with zero false-alarm probability, and good estimation accuracy of the starting and ending times is achieved. The algorithm is easy to implement and requires less computation than the FFT when the STFT window width and step length are selected properly, so the presented algorithm has good engineering value.
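A compact sketch of the detection idea, assuming SciPy: STFT the signal with a Gaussian window, binarize the per-frame spectral peaks, and declare a pulse where enough consecutive frames fire (a simple m-out-of-n rule standing in for the paper's mnk-style test). The threshold and window settings are illustrative, not tuned.

```python
# Sketch: coarse chirp-pulse detection from binarized STFT frame peaks.
import numpy as np
from scipy.signal import chirp, stft

fs = 1e6
t = np.arange(2000) / fs
sig = np.zeros_like(t)
sig[300:800] = chirp(t[:500], f0=1e4, t1=t[499], f1=4e5)   # pulsed LFM chirp
noisy = sig + np.random.default_rng(0).normal(0.0, 0.5, t.size)

freqs, times, Z = stft(noisy, fs=fs, window=("gaussian", 8), nperseg=64)
peak = np.abs(Z).max(axis=0)                   # chirp energy is concentrated
binary = peak > 2.0 * np.median(peak)          # binarize the frame peaks
hits = np.convolve(binary.astype(int), np.ones(5, dtype=int), mode="same")
present = hits >= 4                            # 4-out-of-5 consecutive frames

if present.any():
    start = times[present.argmax()]
    end = times[::-1][present[::-1].argmax()]
    print(f"coarse pulse: {start * 1e6:.0f} to {end * 1e6:.0f} us")
```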
Building simplification algorithms based on user cognition in mobile environment
NASA Astrophysics Data System (ADS)
Shen, Jie; Shi, Junfei; Wang, Meizhen; Wu, Chenyan
2008-10-01
With the development of LBS (location-based services), mobile maps should adaptively satisfy the cognitive requirements of users. User cognition in a mobile environment is much more objective-oriented, and the cognitive burden is heavier than for users in a static environment. The holistic ideas and methods of map generalization are not fully suitable for mobile maps. This paper takes building simplification in habitation generalization as an example, analyzes the characteristics of user cognition in the mobile environment and the basic rules of building simplification, surveys the state of the art of building simplification algorithms in static and mobile environments, and puts forward the idea of hierarchical building simplification based on user cognition. The Hunan Road business district of Nanjing was taken as the test area, building data in ESRI shapefile format were used as the test data, and the simplification algorithm was implemented. The method is user-centered: it calculates the distance between the user and the building to be simplified and uses that distance as the basis for choosing different simplification algorithms for different spaces. This contribution aims to present buildings hierarchically at different levels of detail through real-time simplification.
Feature extraction algorithm for space targets based on fractal theory
NASA Astrophysics Data System (ADS)
Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin
2007-11-01
In order to offer the potential of extending the life of satellites and reducing launch and operating costs, satellite servicing, including on-orbit repair, upgrading and refueling of spacecraft, will become much more frequent. Future space operations can be executed more economically and reliably using machine vision systems, which can meet the real-time and tracking-reliability requirements of image tracking for space surveillance systems. Machine vision has been applied to research on the relative pose of spacecraft, and feature extraction is the basis of relative pose estimation. In this paper, a fractal-geometry-based edge extraction algorithm is presented that can be used for determining and tracking the relative pose of an observed satellite during proximity operations in a machine vision system. The method computes the fractal-dimension distribution of the gray-level image using the Differential Box-Counting (DBC) approach of fractal theory to restrain noise. After this, consecutive edges are detected using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target but also keeps the inner details. Meanwhile, edge extraction is processed only in the moving area, which greatly reduces computation. Simulation results compare edge detection using the presented method with other detection methods. The results indicate that the presented algorithm is a valid method for solving the relative-pose problem for spacecraft.
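The fractal-dimension map that drives the edge extractor rests on the DBC estimate, which can be sketched as follows; the box sizes and the ramp/noise test patches are arbitrary choices for illustration.

```python
# Bare-bones differential box-counting (DBC) fractal-dimension estimate.
import numpy as np

def dbc_dimension(img, sizes=(2, 4, 8, 16)):
    """Fit log N(s) against log(1/s) over box sizes s."""
    M = img.shape[0]                      # assume square, side divisible by s
    counts = []
    for s in sizes:
        h = 256.0 * s / M                 # box height in gray levels
        n = 0
        for i in range(0, M, s):
            for j in range(0, M, s):
                block = img[i:i + s, j:j + s]
                n += int(block.max() // h) - int(block.min() // h) + 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0, 255, 64), (64, 1))     # ramp: low dimension
rough = rng.integers(0, 256, (64, 64)).astype(float)   # noise: high dimension
print(dbc_dimension(smooth), dbc_dimension(rough))
```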
An improved Richardson-Lucy algorithm based on local prior
NASA Astrophysics Data System (ADS)
Yongpan, Wang; Huajun, Feng; Zhihai, Xu; Qi, Li; Chaoyue, Dai
2010-07-01
Ringing is one of the most common disturbing artifacts in image deconvolution. With a fully known kernel, the standard Richardson-Lucy (RL) algorithm succeeds in many motion deblurring processes, but the resulting images still contain visible ringing. When the estimated kernel differs from the real one, the result of the standard RL iterative algorithm is worse. To suppress the ringing artifacts caused by failures in blur kernel estimation, this paper improves the RL algorithm based on a local prior. Firstly, the standard deviation of pixels in a local window is computed to find the smooth region, and the image gradient in that region is constrained so that its distribution is consistent with the deblurred image gradient. Secondly, in order to suppress the ringing near the edge of a rigid body in the image, a new mask is obtained by computing the sharp edges of the image produced in the first step. If the kernel is large-scale, with a rigid foreground and a smooth background, this step produces a significant inhibitory effect on ringing artifacts. Thirdly, the boundary constraint is strengthened if the boundary is relatively smooth. As a result of the steps above, high-quality deblurred images can be obtained even when the estimated kernels are not perfectly accurate. On the basis of blurred images and the related kernel information captured by additional hardware, our approach proved to be effective.
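For context, the standard RL update that the paper builds on is shown below in NumPy; the local-prior constraint, the edge mask, and the boundary handling from the paper are not reproduced.

```python
# Standard Richardson-Lucy: est <- est * K^T(observed / K(est)).
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iters=30, eps=1e-7):
    est = np.full_like(observed, observed.mean())
    psf_T = psf[::-1, ::-1]                    # adjoint = flipped kernel
    for _ in range(iters):
        reblurred = fftconvolve(est, psf, mode="same")
        ratio = observed / (reblurred + eps)
        est *= fftconvolve(ratio, psf_T, mode="same")
    return est

psf = np.ones((1, 9)) / 9.0                    # horizontal motion blur
truth = np.zeros((32, 32)); truth[8:24, 14:18] = 1.0
blurred = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
print(np.abs(blurred - truth).mean(), "->", np.abs(restored - truth).mean())
```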
An Evolved Wavelet Library Based on Genetic Algorithm
Vaithiyanathan, D.; Seshasayanan, R.; Kunaraj, K.; Keerthiga, J.
2014-01-01
As the size of the images being captured increases, there is a need for a robust image compression algorithm which satisfies the bandwidth limitation of the transmission channels and preserves the image resolution without considerable loss in image quality. Many conventional image compression algorithms use the wavelet transform, which can significantly reduce the number of bits needed to represent a pixel, and the process of quantization and thresholding further increases the compression. In this paper the authors evolve two sets of wavelet filter coefficients using a genetic algorithm (GA), one for the whole image except the edge areas and the other for the portions near the edges in the image (i.e., global and local filters). Images are initially separated into several groups based on their frequency content, edges, and textures, and the wavelet filter coefficients are evolved separately for each group. As there is a possibility of the GA settling in a local maximum, we introduce a new shuffling operator to prevent the GA from this effect. The GA used to evolve filter coefficients primarily focuses on maximizing the peak signal to noise ratio (PSNR). The filter coefficients evolved by the proposed method outperform the existing methods by a 0.31 dB improvement in the average PSNR and a 0.39 dB improvement in the maximum PSNR. PMID:25405225
Sonoluminescence Bubble Measurements using Vision-Based Algorithms
NASA Technical Reports Server (NTRS)
Hall, Nancy R.; Mackey, Jeffrey R.; Matula, Thomas J.
2003-01-01
Vision-based measurement methods were used to measure bubble sizes in this sonoluminescence experiment. Bubble imaging was accomplished by placing the bubble between a bright light source and a microscope-CCD camera system. A collimated light-emitting diode was operated in a pulsed mode with an adjustable time delay with respect to the piezoelectric transducer drive signal. The light-emitting diode produced a bubble shadowgraph consisting of a multiple exposure made by numerous light pulses imaged onto a charge-coupled device camera. Each image was transferred from the camera to a computer-controlled machine vision system via a frame grabber. The frame grabber was equipped with on-board memory to accommodate sequential image buffering while images were transferred to the host processor and analyzed. This configuration allowed the host computer to perform diameter measurements, centroid position measurements and shape estimation in real time as the next image was being acquired. Bubble size measurement accuracy with an uncertainty of 3 microns was achieved using standard lenses and machine vision algorithms. Bubble centroid position accuracy was also within the 3 micron tolerance of the vision system. This uncertainty estimation accounted for the optical spatial resolution, digitization errors and the edge detection algorithm accuracy. The vision algorithms include camera calibration, thresholding, edge detection, edge position determination, distance between two edges computations and centroid position computations.
Ballast: A Ball-based Algorithm for Structural Motifs
He, Lu; Vandin, Fabio; Pandurangan, Gopal
2013-01-01
Structural motifs encapsulate local sequence-structure-function relationships characteristic of related proteins, enabling the prediction of functional characteristics of new proteins, providing molecular-level insights into how those functions are performed, and supporting the development of variants specifically maintaining or perturbing function in concert with other properties. Numerous computational methods have been developed to search through databases of structures for instances of specified motifs. However, it remains an open problem how best to leverage the local geometric and chemical constraints underlying structural motifs in order to develop motif-finding algorithms that are both theoretically and practically efficient. We present a simple, general, efficient approach, called Ballast (ball-based algorithm for structural motifs), to match given structural motifs to given structures. Ballast combines the best properties of previously developed methods, exploiting the composition and local geometry of a structural motif and its possible instances in order to effectively filter candidate matches. We show that on a wide range of motif-matching problems, Ballast efficiently and effectively finds good matches, and we provide theoretical insights into why it works well. By supporting generic measures of compositional and geometric similarity, Ballast provides a powerful substrate for the development of motif-matching algorithms. PMID:23383999
Bron, Esther E; Smits, Marion; van der Flier, Wiesje M; Vrenken, Hugo; Barkhof, Frederik; Scheltens, Philip; Papma, Janne M; Steketee, Rebecca M E; Méndez Orellana, Carolina; Meijboom, Rozanna; Pinto, Madalena; Meireles, Joana R; Garrett, Carolina; Bastos-Leite, António J; Abdulkadir, Ahmed; Ronneberger, Olaf; Amoroso, Nicola; Bellotti, Roberto; Cárdenas-Peña, David; Álvarez-Meza, Andrés M; Dolph, Chester V; Iftekharuddin, Khan M; Eskildsen, Simon F; Coupé, Pierrick; Fonov, Vladimir S; Franke, Katja; Gaser, Christian; Ledig, Christian; Guerrero, Ricardo; Tong, Tong; Gray, Katherine R; Moradi, Elaheh; Tohka, Jussi; Routier, Alexandre; Durrleman, Stanley; Sarica, Alessia; Di Fatta, Giuseppe; Sensi, Francesco; Chincarini, Andrea; Smith, Garry M; Stoyanov, Zhivko V; Sørensen, Lauge; Nielsen, Mads; Tangaro, Sabina; Inglese, Paolo; Wachinger, Christian; Reuter, Martin; van Swieten, John C; Niessen, Wiro J; Klein, Stefan
2015-05-01
Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare as different data sets and methodology were used for evaluation. In addition, it is unclear how the algorithms would perform on previously unseen data, and thus, how they would perform in clinical practice when there is no real opportunity to adapt the algorithm to the data at hand. To address these comparability, generalizability and clinical applicability issues, we organized a grand challenge that aimed to objectively compare algorithms based on a clinically representative multi-center data set. Using clinical practice as the starting point, the goal was to reproduce the clinical diagnosis. Therefore, we evaluated algorithms for multi-class classification of three diagnostic groups: patients with probable Alzheimer's disease, patients with mild cognitive impairment and healthy controls. The diagnosis based on clinical criteria was used as reference standard, as it was the best available reference despite its known limitations. For evaluation, a previously unseen test set was used consisting of 354 T1-weighted MRI scans with the diagnoses blinded. Fifteen research teams participated with a total of 29 algorithms. The algorithms were trained on a small training set (n=30) and optionally on data from other sources (e.g., the Alzheimer's Disease Neuroimaging Initiative, the Australian Imaging Biomarkers and Lifestyle flagship study of aging). The best performing algorithm yielded an accuracy of 63.0% and an area under the receiver-operating-characteristic curve (AUC) of 78.8%. In general, the best performances were achieved using feature extraction based on voxel-based morphometry or a combination of features that included volume, cortical thickness, shape and intensity. The challenge is open for new submissions via the web-based framework: http://caddementia.grand-challenge.org. PMID:25652394
Network-Based Inference Framework for Identifying Cancer Genes from Gene Expression Data
Yang, Bo; Zhang, Junying; Yin, Yaling; Zhang, Yuanyuan
2013-01-01
Great efforts have been devoted to alleviating the uncertainty of detected cancer genes, as accurate identification of oncogenes is of tremendous significance and helps unravel the biological behavior of tumors. In this paper, we present a differential network-based framework to detect biologically meaningful cancer-related genes. Firstly, a gene regulatory network construction algorithm is proposed, in which a boosting regression based on likelihood score and informative prior is employed for improving the accuracy of identification. Secondly, with the algorithm, two gene regulatory networks are constructed from case and control samples independently. Thirdly, by subtracting the two networks, a differential-network model is obtained and then used to rank differentially expressed hub genes for the identification of cancer biomarkers. Compared with two existing gene-based methods (t-test and lasso), the method shows a significant improvement in accuracy both on synthetic datasets and on two real breast cancer datasets. Furthermore, six identified genes (TSPYL5, CD55, CCNE2, DCK, BBC3, and MUC1) susceptible to breast cancer were verified through literature mining, GO analysis, and pathway functional enrichment analysis. Among these oncogenes, TSPYL5 and CCNE2 are already known as prognostic biomarkers in breast cancer, CD55 has been suspected of playing an important role in breast cancer prognosis based on literature evidence, and the other three genes are newly discovered breast cancer biomarkers. More generally, the differential-network schema can be extended to other complex diseases for the detection of disease-associated genes. PMID:24073403
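The differential-network step can be illustrated with plain correlation thresholding standing in for the paper's boosting-regression network construction; everything below (the threshold, the injected edge) is synthetic.

```python
# Sketch: differential network = edges present in only one condition;
# rank genes by how much their connectivity changes.
import numpy as np

rng = np.random.default_rng(0)
case = rng.normal(size=(60, 30))             # samples x genes
control = rng.normal(size=(60, 30))
case[:, 1] = case[:, 0] + 0.1 * rng.normal(size=60)   # edge rewired in cases

def network(expr, thresh=0.6):
    adj = np.abs(np.corrcoef(expr.T)) > thresh
    np.fill_diagonal(adj, False)
    return adj

diff = network(case) ^ network(control)      # condition-specific edges
degree_change = diff.sum(axis=0)
print("top candidate genes:", np.argsort(degree_change)[::-1][:5])
```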
A SAT Based Effective Algorithm for the Directed Hamiltonian Cycle Problem
NASA Astrophysics Data System (ADS)
Jäger, Gerold; Zhang, Weixiong
The Hamiltonian cycle problem (HCP) is an important combinatorial problem with applications in many areas. While thorough theoretical and experimental analyses have been made on the HCP in undirected graphs, little is known for the HCP in directed graphs (DHCP). The contribution of this work is an effective algorithm for the DHCP. Our algorithm explores and exploits the close relationship between the DHCP and the Assignment Problem (AP) and utilizes a technique based on Boolean satisfiability (SAT). By combining effective algorithms for the AP and SAT, our algorithm significantly outperforms previous exact DHCP algorithms including an algorithm based on the award-winning Concorde TSP algorithm.
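The AP/DHCP relationship the algorithm exploits is easy to state in code: a Hamiltonian cycle is exactly an assignment with no fixed points whose permutation forms a single n-cycle, so solving the AP and checking the cycle structure gives the relaxation-based core of such a method. The SAT machinery for excluding subtours is beyond this sketch.

```python
# Sketch: assignment-problem relaxation check for a directed Hamiltonian cycle.
import numpy as np
from scipy.optimize import linear_sum_assignment

def ap_relaxation(adj):
    n = len(adj)
    cost = np.where(adj, 0, 1_000_000)        # forbid non-edges
    np.fill_diagonal(cost, 1_000_000)         # no self-successor
    rows, succ = linear_sum_assignment(cost)
    if cost[rows, succ].sum() >= 1_000_000:
        return None                           # no valid assignment at all
    cycle, v = [0], succ[0]
    while v != 0:
        cycle.append(v)
        v = succ[v]
    return cycle if len(cycle) == n else None # single cycle <=> Hamiltonian

adj = np.array([[0, 1, 0, 1],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [1, 1, 0, 0]], dtype=bool)
print(ap_relaxation(adj))                     # e.g. [0, 1, 2, 3]
```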
MegaMol--A Prototyping Framework for Particle-Based Visualization.
Grottel, Sebastian; Krone, Michael; Muller, Christoph; Reina, Guido; Ertl, Thomas
2015-02-01
Visualization applications nowadays not only face increasingly larger datasets, but have to solve increasingly complex research questions. They often require more than a single algorithm, and consequently a software solution will exceed the possibilities of simple research prototypes. Well-established systems intended for such complex visual analysis purposes have usually been designed for classical, mesh-based graphics approaches. For particle-based data, however, existing visualization frameworks are too generic - e.g. lacking possibilities for consistent low-level GPU optimization for high-performance graphics - and at the same time are too limited - e.g. by enforcing the use of structures suboptimal for some computations. Thus, we developed the system software MegaMol for visualization research on particle-based data. On the one hand, flexible data structures and a functional module design allow for easy adaptation to changing research questions, e.g. studying vapors in thermodynamics, solid material in physics, or complex functional macromolecules like proteins in biochemistry. Therefore, MegaMol is designed as a development framework. On the other hand, common functionality for data handling and advanced rendering implementations are available and beneficial for all applications. We present several case studies of work implemented using our system as well as a comparison to other freely available or open source systems. PMID:26357030
A point matching algorithm based on reference point pair
NASA Astrophysics Data System (ADS)
Zou, Huanxin; Zhu, Youqing; Zhou, Shilin; Lei, Lin
2016-03-01
Outliers and occlusions are important sources of degradation in real applications of point matching. In this paper, a novel point matching algorithm based on reference point pairs is proposed. In each iteration, it first eliminates dubious matches to obtain relatively accurate matching points (the reference point pairs), and then calculates the shape contexts of the removed points with reference to them. After re-matching the removed points, the reference point pairs are combined with them to achieve better correspondences. Experiments on synthetic data validate the advantages of our method in comparison with some classical methods.
CAD Model Retrieval Based on Graduated Assignment Algorithm
NASA Astrophysics Data System (ADS)
Tao, Songqiao
2015-06-01
A retrieval approach for CAD models based on the graduated assignment algorithm is proposed in this paper. First, CAD models are transformed into face adjacency graphs (FAGs). Second, the vertex compatibility matrix and edge compatibility matrix between the FAGs of the query and data models are calculated, and the similarity metric for the two compared models is established from their compatibility matrices, which serves as the optimization objective function for selecting the vertex mapping matrix M between the two models. Finally, Sinkhorn's alternating normalization of M's rows and columns is adopted to find the optimal vertex mapping matrix M. Experimental results have shown that the proposed approach supports CAD model retrieval.
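Sinkhorn's alternating normalization, the final step above, is compact enough to show directly; the score matrix below is a made-up example.

```python
# Sinkhorn normalization: turn positive match scores into a (near)
# doubly stochastic vertex mapping matrix M.
import numpy as np

def sinkhorn(scores, iters=50):
    M = np.exp(scores)                        # ensure positivity
    for _ in range(iters):
        M /= M.sum(axis=1, keepdims=True)     # normalize rows
        M /= M.sum(axis=0, keepdims=True)     # then columns
    return M

scores = np.array([[2.0, 0.1, 0.0],
                   [0.2, 1.8, 0.1],
                   [0.0, 0.3, 2.2]])
M = sinkhorn(scores)
print(M.round(2))                             # near-permutation: strong diagonal
print(M.sum(axis=0).round(2), M.sum(axis=1).round(2))
```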
Polygon star identification based on ant colony algorithm
NASA Astrophysics Data System (ADS)
Ma, Baolin; Wu, Jie; Zhang, Hongbo
2014-11-01
In order to enhance the rate of star identification under different fields of view and reduce memory storage, this paper presents a polygon star identification method based on the ant colony optimization (ACO) algorithm. First, fast cluster analysis is performed. Second, an argument is calculated for each guide star, and the strength of ACO in fast path optimization is used to build the feature polygon. Third, the optimization results are compared with the data in the guide database to complete matching and identification. Simulations show that the method simplifies the search process and the storage structure while preserving the completeness of the characteristic patterns of the star image. Its robustness and reliability are better than those of traditional triangle identification.
A multiobjective memetic algorithm based on particle swarm optimization.
Liu, Dasheng; Tan, K C; Goh, C K; Ho, W K
2007-02-01
In this paper, a new memetic algorithm (MA) for multiobjective (MO) optimization is proposed, which combines the global search ability of particle swarm optimization with a synchronous local search heuristic for directed local fine-tuning. A new particle updating strategy is proposed based upon the concept of fuzzy global-best to deal with the problem of premature convergence and diversity maintenance within the swarm. The proposed features are examined to show their individual and combined effects in MO optimization. The comparative study shows the effectiveness of the proposed MA, which produces solution sets that are highly competitive in terms of convergence, diversity, and distribution. PMID:17278557
A Rigid Image Registration Based on the Nonsubsampled Contourlet Transform and Genetic Algorithms
Meskine, Fatiha; Chikr El Mezouar, Miloud; Taleb, Nasreddine
2010-01-01
Image registration is a fundamental task used in image processing to match two or more images taken at different times, from different sensors or from different viewpoints. The objective is to find in a huge search space of geometric transformations, an acceptable accurate solution in a reasonable time to provide better registered images. Exhaustive search is computationally expensive and the computational cost increases exponentially with the number of transformation parameters and the size of the data set. In this work, we present an efficient image registration algorithm that uses genetic algorithms within a multi-resolution framework based on the Non-Subsampled Contourlet Transform (NSCT). An adaptable genetic algorithm for registration is adopted in order to minimize the search space. This approach is used within a hybrid scheme applying the two techniques fitness sharing and elitism. Two NSCT based methods are proposed for registration. A comparative study is established between these methods and a wavelet based one. Because the NSCT is a shift-invariant multidirectional transform, the second method is adopted for its search speeding up property. Simulation results clearly show that both proposed techniques are really promising methods for image registration compared to the wavelet approach, while the second technique has led to the best performance results of all. Moreover, to demonstrate the effectiveness of these methods, these registration techniques have been successfully applied to register SPOT, IKONOS and Synthetic Aperture Radar (SAR) images. The algorithm has been shown to work perfectly well for multi-temporal satellite images as well, even in the presence of noise. PMID:22163672
A Hadoop-Based Algorithm for Generating DEM Grids from Point Cloud Data
NASA Astrophysics Data System (ADS)
Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.
2015-04-01
Airborne LiDAR technology has proven to be among the most powerful tools for obtaining high-density, high-accuracy and significantly detailed surface information about terrain and surface objects within a short time, from which a Digital Elevation Model (DEM) of high quality can be extracted. Point cloud data generated from the pre-processed data must be classified by segmentation algorithms to separate terrain points from other points, followed by a procedure of interpolating the selected points to turn them into DEM data. Because of the high point density, the whole procedure takes a long time and huge computing resources, which has been the focus of a number of studies. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is appropriate for DEM generation algorithms to improve efficiency. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner were used as the original data to generate a DEM with a Hadoop-based algorithm implemented in Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The efficiency, coding complexity, and performance-cost ratio of the two implementations were then compared. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid; the non-Hadoop implementation can achieve high performance when memory is big enough, but the Hadoop implementation achieves a higher performance-cost ratio when the point set is of vast quantity.
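The Map/Reduce decomposition of DEM gridding reduces to "map each point to its grid cell, reduce each cell to an elevation", sketched here in plain Python with a mean in place of the pipeline's interpolation of classified ground points; Hadoop itself is not required to see the structure.

```python
# Miniature Map/Reduce DEM gridding: key = grid cell, value = elevation.
from collections import defaultdict
from statistics import mean

points = [(12.3, 4.1, 101.2), (12.7, 4.4, 101.0),   # (x, y, z) samples
          (13.6, 4.2, 99.5), (12.1, 5.8, 102.3)]
cell = 1.0                                           # DEM grid resolution

def map_phase(pts):
    for x, y, z in pts:
        yield (int(x // cell), int(y // cell)), z    # emit (cell key, z)

def reduce_phase(pairs):
    cells = defaultdict(list)
    for key, z in pairs:
        cells[key].append(z)
    return {key: mean(zs) for key, zs in cells.items()}

dem = reduce_phase(map_phase(points))
for key, z in sorted(dem.items()):
    print(key, round(z, 2))
```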
Primitive Fitting Based on the Efficient multiBaySAC Algorithm
Kang, Zhizhong; Li, Zhen
2015-01-01
Although RANSAC is proven to be robust, the original RANSAC algorithm selects hypothesis sets at random, generating numerous iterations and high computational costs because many hypothesis sets are contaminated with outliers. This paper presents a conditional sampling method, multiBaySAC (Bayes SAmple Consensus), that fuses the BaySAC algorithm with statistical testing of candidate model parameters for unorganized 3D point clouds to fit multiple primitives. This paper first presents a statistical testing algorithm for a candidate model parameter histogram to detect potential primitives. As the detected initial primitives were optimized using a parallel strategy rather than a sequential one, every data point in the multiBaySAC algorithm was assigned multiple prior inlier probabilities for the initial primitives. Each prior inlier probability determined the probability that a point belongs to the corresponding primitive. We then implemented in parallel a conditional sampling method: BaySAC. With each iteration of the hypothesis testing process, hypothesis sets with the highest inlier probabilities were selected and verified for the existence of multiple primitives, revealing the fitting for multiple primitives. Moreover, the update of the initial probability was implemented based on a memorable form of Bayes' Theorem, which describes the relationship between the prior and posterior probabilities of a data point by determining whether the hypothesis set to which the data point belongs is correct. The proposed approach was tested using real and synthetic point clouds. The results show that the proposed multiBaySAC algorithm can achieve a high computational efficiency (averaging 34% higher than the efficiency of the sequential RANSAC method) and fitting accuracy (exhibiting good performance in the intersection of two primitives), whereas the sequential RANSAC framework clearly suffers from over- and under-segmentation problems. Future work will aim at further
Research on Palmprint Identification Method Based on Quantum Algorithms
Zhang, Zhanzhan
2014-01-01
Quantum image recognition is a technology that uses quantum algorithms to process image information, and it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that quantum filtering obtains a better result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation thanks to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165
An Evolution Based Biosensor Receptor DNA Sequence Generation Algorithm
Kim, Eungyeong; Lee, Malrey; Gatton, Thomas M.; Lee, Jaewan; Zang, Yupeng
2010-01-01
A biosensor is composed of a bioreceptor, an associated recognition molecule, and a signal transducer that can selectively detect target substances for analysis. DNA based biosensors utilize receptor molecules that allow hybridization with the target analyte. However, most DNA biosensor research uses oligonucleotides as the target analytes and does not address the potential problems of real samples. The identification of recognition molecules suitable for real target analyte samples is an important step towards further development of DNA biosensors. This study examines the characteristics of DNA used as bioreceptors and proposes a hybrid evolution-based DNA sequence generating algorithm, based on DNA computing, to identify suitable DNA bioreceptor recognition molecules for stable hybridization with real target substances. The Traveling Salesman Problem (TSP) approach is applied in the proposed algorithm to evaluate the safety and fitness of the generated DNA sequences. This approach improves efficiency and stability for enhanced and variable-length DNA sequence generation and allows extension to generation of variable-length DNA sequences with diverse receptor recognition requirements. PMID:22315543
Algorithm-Based Fault Tolerance for Numerical Subroutines
NASA Technical Reports Server (NTRS)
Tumon, Michael; Granat, Robert; Lou, John
2007-01-01
A software library implements a new methodology of detecting faults in numerical subroutines, thus enabling application programs that contain the subroutines to recover transparently from single-event upsets. The software library in question is fault-detecting middleware that is wrapped around the numerical subroutines. Conventional serial versions (based on LAPACK and FFTW) and a parallel version (based on ScaLAPACK) exist. The source code of the application program that contains the numerical subroutines is not modified, and the middleware is transparent to the user. The methodology used is a type of algorithm-based fault tolerance (ABFT). In ABFT, a checksum is computed before a computation and compared with the checksum of the computational result; an error is declared if the difference between the checksums exceeds some threshold. Novel normalization methods are used in the checksum comparison to ensure correct fault detections independent of algorithm inputs. In tests of this software reported in the peer-reviewed literature, this library was shown to enable detection of 99.9 percent of significant faults while generating no false alarms.
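The core ABFT identity for matrix multiplication, e^T(AB) = (e^T A)B, and a normalized threshold test can be demonstrated in a few lines; the tolerance rule below is an illustrative stand-in for the library's own normalization methods.

```python
# ABFT check for C = A @ B via a column-sum checksum.
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))

pre_checksum = A.sum(axis=0) @ B           # (e^T A) B, computed beforehand
C = A @ B                                  # the protected computation
post_checksum = C.sum(axis=0)              # e^T (A B)

# Scale the tolerance with the data so the test tracks rounding error.
tol = 1e-12 * np.abs(C).max() * C.shape[0]
residual = np.abs(pre_checksum - post_checksum).max()
print("fault detected" if residual > tol else "checksums agree")

C[7, 7] += 1.0                             # inject a single-event upset
residual = np.abs(pre_checksum - C.sum(axis=0)).max()
print("fault detected" if residual > tol else "checksums agree")
```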
Registration algorithm of point clouds based on multiscale normal features
NASA Astrophysics Data System (ADS)
Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua
2015-01-01
The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed registration method mainly includes three parts: the selection of key points, the calculation of feature descriptors, and the determining and optimization of correspondences. First, key points are selected from the point cloud based on the changes of magnitude of multiscale curvatures obtained by using principal components analysis. Then the feature descriptor of each key point is proposed, which consists of 21 elements based on multiscale normal vectors and curvatures. The correspondences in a pair of two point clouds are determined according to the descriptor's similarity of key points in the source point cloud and target point cloud. Correspondences are optimized by using a random sampling consistency algorithm and clustering technology. Finally, singular value decomposition is applied to optimized correspondences so that the rigid transformation matrix between two point clouds is obtained. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better antinoise performance.
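The closing SVD step is the classical Kabsch/Umeyama solution: given optimized correspondences, the rigid rotation and translation follow in closed form, as sketched below on synthetic points.

```python
# Closed-form rigid transform from point correspondences via SVD.
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t such that R @ src_i + t ~ dst_i (points as rows)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), np.round(t, 3))
```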
Human emotion detector based on genetic algorithm using lip features
NASA Astrophysics Data System (ADS)
Brown, Terrence; Fetanat, Gholamreza; Homaifar, Abdollah; Tsou, Brian; Mendoza-Schrock, Olga
2010-04-01
We predicted human emotion using a Genetic Algorithm (GA) based lip feature extractor from facial images to classify all seven universal emotions of fear, happiness, dislike, surprise, anger, sadness and neutrality. First, we isolated the mouth from the input images using special methods, such as Region of Interest (ROI) acquisition, grayscaling, histogram equalization, filtering, and edge detection. Next, the GA determined the optimal or near optimal ellipse parameters that circumvent and separate the mouth into upper and lower lips. The two ellipses then went through fitness calculation and were followed by training using a database of Japanese women's faces expressing all seven emotions. Finally, our proposed algorithm was tested using a published database consisting of emotions from several persons. The final results were then presented in confusion matrices. Our results showed an accuracy that varies from 20% to 60% for each of the seven emotions. The errors were mainly due to inaccuracies in the classification, and also due to the different expressions in the given emotion database. Detailed analysis of these errors pointed to the limitation of detecting emotion based on the lip features alone. Similar work [1] has been done in the literature for emotion detection in only one person, we have successfully extended our GA based solution to include several subjects.
Vision-based vehicle detection and tracking algorithm design
NASA Astrophysics Data System (ADS)
Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi
2009-12-01
The vision-based vehicle detection in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. The feasibility of vehicle detection in a passenger car requires accurate and robust sensing performance. A multivehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filter, feature detector, template matching, and epipolar constraint techniques in order to detect the corresponding pairs of vehicles. After the initial detection, the system executes the tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained based on the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.
A systems-based conceptual framework for auditing
Swanson, G.A.; Marsh, H.L.
1993-12-31
Since the stock market crash of 1929, the auditing profession has rapidly emerged in advanced economies. The procedures of the profession have generally evolved out of political processes without the systematic guidance of a cohesive conceptual framework. That eclectic approach has produced generally accepted accounting principles that encourage publication of global organizational performance assessments, such as net income, that have little obvious connection to the concrete processes they purport to describe. The approach, furthermore, allows fragmented and disconnected auditing procedures. In this paper, an interpretive and analytical study of auditing based on observable, measurable entities is developed through living systems theory. This approach provides a means of applying to auditing the well-developed investigation methods and procedures of the sciences. 8 refs.
Cho, Jae Heon; Lee, Jong Ho
2015-11-01
Manual calibration is common in rainfall-runoff model applications. However, rainfall-runoff models include several complicated parameters; thus, significant time and effort are required to calibrate the parameters manually, individually and repeatedly. Automatic calibration has relative merit regarding time efficiency and objectivity but shortcomings regarding understanding of the indigenous processes in the basin. In this study, a watershed model calibration framework using an influence coefficient algorithm and genetic algorithm (WMCIG) was developed to automatically calibrate distributed models. The optimization problem used to minimize the sum of squares of the normalized residuals of the observed and predicted values was solved using a genetic algorithm (GA). The final model parameters were determined from the iteration with the smallest sum of squares of the normalized residuals of all iterations. The WMCIG was applied to the Gomakwoncheon watershed, located in an area subject to a total maximum daily load (TMDL) in Korea. The proportion of urbanized area in this watershed is low, and the diffuse pollution loads of nutrients such as phosphorus are greater than the point-source pollution loads because rainfall is concentrated in the summer. The pollution discharges from the watershed were estimated for each land-use type, and the seasonal variations of the pollution loads were analyzed. Consecutive flow measurement gauges have not been installed in this area, and it is difficult to survey the flow and water quality during the frequent heavy rainfall of the wet season. The Hydrological Simulation Program-Fortran (HSPF) model was used to calculate the runoff flow and water quality in this basin. Using the water quality results, a load duration curve was constructed for the basin, the exceedance frequency of the water quality standard was calculated for each hydrologic condition class, and the percent reduction
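The calibration loop under the paper's objective, minimizing the sum of squared normalized residuals, can be sketched with a toy two-parameter reservoir model standing in for HSPF and SciPy's differential evolution standing in for the GA; all numbers below are invented.

```python
# Sketch: automatic calibration against sum of squared normalized residuals.
import numpy as np
from scipy.optimize import differential_evolution

rain = np.array([5.0, 20.0, 3.0, 0.0, 12.0, 1.0, 8.0])

def simulate(params, rain):
    coeff, storage = params
    flow, s = [], 0.0
    for r in rain:
        s = storage * s + r                 # simple linear reservoir
        flow.append(coeff * s)
    return np.array(flow)

observed = simulate((0.35, 0.6), rain) * \
    (1 + 0.05 * np.random.default_rng(0).normal(size=rain.size))

def objective(params):
    sim = simulate(params, rain)
    return np.sum(((observed - sim) / (observed + 1e-9)) ** 2)

result = differential_evolution(objective,
                                bounds=[(0.01, 1.0), (0.0, 0.99)], seed=0)
print(result.x, result.fun)                 # should land near (0.35, 0.6)
```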
A class of kernel based real-time elastography algorithms.
Kibria, Md Golam; Hasan, Md Kamrul
2015-08-01
In this paper, a novel real-time kernel-based and gradient-based Phase Root Seeking (PRS) algorithm for ultrasound elastography is proposed. The signal-to-noise ratio of the resulting strain image is improved by minimizing the cross-correlation discrepancy between the pre- and post-compression radio frequency signals with an adaptive temporal stretching method, and by employing built-in smoothing through an exponentially weighted neighborhood kernel in the displacement calculation. Unlike conventional PRS algorithms, displacement due to tissue compression is estimated from the root of the weighted average of the zero-lag cross-correlation phases of the corresponding analytic pre- and post-compression windows in the neighborhood kernel. In addition to the proposed algorithm, the other time- and frequency-domain elastography algorithms proposed by our group (Ara et al., 2013; Hussain et al., 2012; Hasan et al., 2012) are also implemented in real time in Java, with the computations executed serially or in parallel on multiple processors with efficient memory management. Simulation results using a finite element modeling phantom show that the proposed method significantly improves strain image quality in terms of elastographic signal-to-noise ratio (SNRe), elastographic contrast-to-noise ratio (CNRe) and mean structural similarity (MSSIM) for strains as high as 4% as compared to other techniques reported in the literature. Strain images obtained for an experimental phantom as well as in vivo breast data of malignant and benign masses also show the efficacy of our proposed method over the other reported techniques. PMID:25929595
An estimation of generalized Bradley-Terry models based on the em algorithm.
Fujimoto, Yu; Hino, Hideitsu; Murata, Noboru
2011-06-01
The Bradley-Terry model is a statistical representation of preference or ranking data built from pairwise comparisons of items. For estimation of the model, several methods based on the sum of weighted Kullback-Leibler divergences have been proposed in various contexts. The purpose of this letter is to interpret the estimation mechanism of the Bradley-Terry model from the viewpoint of flatness, a fundamental notion in information geometry. Based on this point of view, a new estimation method is proposed within the framework of the em algorithm. The proposed method differs in its objective function from conventional methods, especially in its treatment of unobserved comparisons, and it is consistently interpreted in a probability simplex. An estimation method with weight adaptation is also proposed from the viewpoint of sensitivity. Experimental results show that the proposed method works appropriately and that weight adaptation improves the accuracy of the estimates. PMID:21395441
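For context, the classical way to fit basic Bradley-Terry strengths is the minorization-maximization (MM) iteration sketched below; the letter's em-algorithm formulation differs, notably in its treatment of unobserved comparisons. The data layout and stopping tolerance here are illustrative assumptions.

```python
import numpy as np

def bradley_terry_mm(wins, n_iter=500, tol=1e-10):
    """MM iteration for Bradley-Terry strengths p.
    wins[i, j] = number of times item i beat item j."""
    n = wins.shape[0]
    p = np.ones(n) / n
    n_ij = wins + wins.T                 # comparisons between each pair
    w = wins.sum(axis=1)                 # total wins per item
    for _ in range(n_iter):
        denom = n_ij / (p[:, None] + p[None, :])
        np.fill_diagonal(denom, 0.0)
        p_new = w / denom.sum(axis=1)
        p_new /= p_new.sum()             # strengths live on a probability simplex
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p
```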
A modified SUnSAL-TV algorithm for hyperspectral unmixing based on spatial homogeneity analysis
NASA Astrophysics Data System (ADS)
Wang, Yuqian; Shao, Zhenfeng; Zhang, Lei; Zhou, Weixun
2014-03-01
The sparse regression framework has been adopted by many works to solve the linear spectral unmixing problem, motivated by the observation that a pixel is usually a mixture of far fewer endmembers than are contained in a spectral library or the entire hyperspectral data set. Traditional sparse unmixing techniques focus on the spectral properties of hyperspectral imagery without incorporating spatial information, yet integrating spatial information can improve the performance of the linear unmixing process. The algorithm known as sparse unmixing via variable splitting augmented Lagrangian and total variation (SUnSAL-TV) adds a total variation spatial regularizer alongside the sparsity-inducing regularizer in the unmixing objective function. Total variation regularization promotes smoothness of the fractional abundances; however, the degree of smoothness varies across the image. In this paper, spatial smoothness is estimated through homogeneity analysis, and the spatial regularizer is then weighted for each pixel by a homogeneity index. The modified algorithm, called homogeneity analysis based SUnSAL-TV (SUnSAL-TVH), integrates spatial information with a finer model of spatial smoothness and is expected to be less sensitive to noise and more stable. Experiments on synthetic data sets indicate the validity of our algorithm.
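For reference, a sketch of the objective being modified, in the usual SUnSAL-TV form; the per-pixel homogeneity weighting described above would scale the TV term pixelwise (the weight symbol h_i is our illustrative notation, not the paper's):

```latex
\min_{\mathbf{X}\ge 0}\;
\tfrac{1}{2}\lVert \mathbf{A}\mathbf{X}-\mathbf{Y}\rVert_F^2
+ \lambda_1 \lVert \mathbf{X}\rVert_{1,1}
+ \lambda_{\mathrm{TV}} \sum_{\{i,j\}\in\mathcal{E}} h_i\,
  \lVert \mathbf{x}_i-\mathbf{x}_j\rVert_1
```

where A is the spectral library, Y holds the observed pixel spectra, X the fractional abundances, and E the set of horizontal and vertical neighboring pixel pairs; setting every h_i = 1 recovers plain SUnSAL-TV.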
NASA Astrophysics Data System (ADS)
Singh, R.; Verma, H. K.
2013-12-01
This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, TLBO requires no algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB programming is used to implement the proposed algorithms. Experimental results show that TLBO estimates the filter parameters more accurately than the BB-BC optimization algorithm and has a faster convergence rate than the PSO algorithm. TLBO is suited to applications where accuracy is more essential than convergence speed.
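A minimal sketch of the two TLBO phases on a generic objective; the IIR-specific error surface is abstracted behind `f`, and the population size and iteration count are illustrative choices (TLBO itself has no algorithm-specific parameters to tune).

```python
import numpy as np

rng = np.random.default_rng(1)

def tlbo_minimize(f, lo, hi, pop_size=30, iters=200):
    dim = len(lo)
    X = rng.uniform(lo, hi, (pop_size, dim))
    fx = np.array([f(x) for x in X])
    for _ in range(iters):
        # Teacher phase: pull the class toward the best learner.
        teacher = X[np.argmin(fx)]
        TF = rng.integers(1, 3)          # teaching factor, 1 or 2
        Xn = np.clip(X + rng.random((pop_size, dim)) * (teacher - TF * X.mean(axis=0)), lo, hi)
        fn = np.array([f(x) for x in Xn])
        better = fn < fx
        X[better], fx[better] = Xn[better], fn[better]
        # Learner phase: each learner moves relative to a random peer.
        for i in range(pop_size):
            j = int(rng.integers(pop_size))
            if j == i:
                continue
            step = X[j] - X[i] if fx[j] < fx[i] else X[i] - X[j]
            cand = np.clip(X[i] + rng.random(dim) * step, lo, hi)
            fc = f(cand)
            if fc < fx[i]:
                X[i], fx[i] = cand, fc
    return X[np.argmin(fx)]

# For IIR design, f(x) would be the output error between the unknown plant
# and the candidate filter with coefficient vector x over a test signal.
```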
A Progressive Image Compression Method Based on EZW Algorithm
NASA Astrophysics Data System (ADS)
Du, Ke; Lu, Jianming; Yahagi, Takashi
A simple method based on the EZW algorithm is presented for improving image compression performance. Recent success in wavelet image coding is mainly attributed to recognition of the importance of data organization and representation. Several very competitive wavelet coders have been developed, namely, Shapiro's EZW (Embedded Zerotree Wavelets)(1), Said and Pearlman's SPIHT (Set Partitioning In Hierarchical Trees)(2), and Bing-Bing Chai's SLCCA (Significance-Linked Connected Component Analysis for Wavelet Image Coding)(3). The EZW algorithm is based on five key concepts: (1) a DWT (Discrete Wavelet Transform) or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, (4) universal lossless data compression achieved via adaptive arithmetic coding, and (5) degeneration of DWT coefficients from high-scale subbands to low-scale subbands. In this paper, we improve on the self-similarity statistical characteristic of concept (5) and present a progressive image compression method.
A cooperative control algorithm for camera based observational systems.
Young, Joseph G.
2012-01-01
Over the last several years, there has been considerable growth in camera based observation systems for a variety of safety, scientific, and recreational applications. In order to improve the effectiveness of these systems, we frequently desire the ability to increase the number of observed objects, but solving this problem is not as simple as adding more cameras. Quite often, there are economic or physical restrictions that prevent us from adding additional cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. In order to accomplish this goal, we present a new cooperative control algorithm for a camera based observational system. Specifically, we present a receding horizon control where we model the underlying optimal control problem as a mixed integer linear program. The benefit of this design is that we can coordinate the actions between each camera while simultaneously respecting its kinematics. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but we use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to intelligently observe the entire region of interest in an effective and thorough manner.
Particle swarm optimization algorithm based low cost magnetometer calibration
NASA Astrophysics Data System (ADS)
Ali, A. S.; Siddharth, S.; Syed, Z.; El-Sheimy, N.
2011-12-01
Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes, and a microprocessor that provide inertial digital data from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on Earth's magnetic field. Unfortunately, magnetic field measurements obtained with low-cost sensors are corrupted by several errors, including manufacturing defects and external electromagnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high-accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO) based calibration algorithm is presented to estimate the bias and scale factor of a low-cost magnetometer. The main advantage of this technique is that it requires neither an explicit error model nor awareness of the nonlinearity. The bias and scale factor errors estimated by the proposed algorithm improve the heading accuracy, and the results are statistically significant. The approach can also aid the development of Pedestrian Navigation Devices (PNDs) when combined with INS and GPS/Wi-Fi, especially in indoor environments.
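A minimal sketch of the idea: with the sensor rotated through many orientations, every corrected measurement should have the magnitude of the local geomagnetic field, so bias and scale factors are chosen to minimize deviations from that magnitude. The six-parameter error model and the PSO hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def calib_cost(params, raw, field_mag):
    """params = [bx, by, bz, sx, sy, sz]; corrected samples should all
    have norm equal to the geomagnetic field magnitude."""
    bias, scale = params[:3], params[3:]
    corrected = (raw - bias) / scale
    return float(((np.linalg.norm(corrected, axis=1) - field_mag) ** 2).sum())

def pso_minimize(f, lo, hi, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5):
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest

# usage (52.0 is an illustrative local field magnitude in microtesla):
# params = pso_minimize(lambda p: calib_cost(p, raw, 52.0), lo, hi)
```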
Digital super-resolution microscopy using example-based algorithm
NASA Astrophysics Data System (ADS)
Ishikawa, Shinji; Hayasaki, Yoshio
2015-05-01
We propose a super-resolution microscopy method that combines a confocal optical setup with an example-based algorithm. The example-based super-resolution algorithm is driven by an example database constructed by learning many pairs of high-resolution and low-resolution patches. Each high-resolution patch is a part of the high-resolution image of an object model expressed in a computer, and the corresponding low-resolution patch is calculated from it, taking into account the spatial properties of the optical microscope. In the reconstruction process, a low-resolution image observed by the confocal optical setup with an image sensor is converted to a super-resolved high-resolution image assembled from patches selected from the example database by pattern matching. We demonstrate that adequate selection of the patch size, together with the weighted superposition method, achieves super-resolution even at a low signal-to-noise ratio.
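A minimal sketch of the reconstruction step: each observed low-resolution patch is matched against the database and replaced by the high-resolution patch of its nearest example. The flat database layout and patch size are illustrative; the paper additionally weights and superposes overlapping patches.

```python
import numpy as np

def reconstruct(low_img, lr_patches, hr_patches, p=5):
    """lr_patches, hr_patches: (N, p*p) arrays of paired low/high-resolution
    examples. Replace each observed patch by its nearest database example."""
    H, W = low_img.shape
    out = np.zeros_like(low_img, dtype=float)
    for r in range(0, H - p + 1, p):
        for c in range(0, W - p + 1, p):
            patch = low_img[r:r + p, c:c + p].reshape(-1)
            idx = int(np.argmin(((lr_patches - patch) ** 2).sum(axis=1)))
            out[r:r + p, c:c + p] = hr_patches[idx].reshape(p, p)
    return out
```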
Algorithm design for a gun simulator based on image processing
NASA Astrophysics Data System (ADS)
Liu, Yu; Wei, Ping; Ke, Jun
2015-08-01
In this paper, an algorithm is designed for shooting games under strong background light. Six LEDs are uniformly distributed on the edge of a game machine screen, located at the four corners and in the middle of the top and bottom edges. Three LEDs are lit in the odd frames, and the other three are lit in the even frames. The simulator is furnished with one camera, which isolates the images of the LEDs by applying an inter-frame difference between the even and odd frames. In the resulting difference images, the six LEDs appear as six bright spots. To obtain the LEDs' coordinates rapidly, we propose a method based on the area of the bright spots. After calibrating the camera with a pinhole model, four equations can be derived from the relationship between the image coordinate system and the world coordinate system under perspective transformation. The center point of the image of the LEDs is taken as the virtual shooting point, and the perspective transformation matrix is applied to its coordinate to obtain the virtual shooting point's coordinate in the world coordinate system. When a game player shoots a target about two meters away, the coordinate error of the method discussed in this paper is less than 10 mm. We can obtain 65 coordinate results per second, which meets the requirement of a real-time system and shows that the algorithm is reliable and effective.
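A minimal sketch of the LED localization step: difference consecutive frames, threshold, and take the centroid of each sufficiently large bright blob. SciPy's connected-component labeling stands in for the paper's area-based method; the threshold and minimum area are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def led_centroids(frame_odd, frame_even, thresh=40, min_area=4):
    """Inter-frame differencing isolates the LEDs that changed state;
    each bright blob's centroid is taken as one LED image coordinate."""
    diff = np.abs(frame_odd.astype(int) - frame_even.astype(int))
    labels, n = ndimage.label(diff > thresh)
    centroids = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        if ys.size >= min_area:          # area test rejects small noise blobs
            centroids.append((xs.mean(), ys.mean()))
    return centroids
```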
Chaos Time Series Prediction Based on Membrane Optimization Algorithms
Li, Meng; Yi, Liangzhong; Pei, Zheng; Gao, Zhisheng; Peng, Hong
2015-01-01
This paper puts forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the phase space reconstruction parameters (τ, m) and the least squares support vector machine (LS-SVM) parameters (γ, σ) using the membrane computing optimization algorithm. Accurate prediction of the changing trends of parameters in the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt optimal actions. The model presented in this paper is used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, this paper compares it with conventional similar models. The experimental results show that for both single-step and multistep prediction, the proposed model performs best on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). PMID:25874249
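A minimal sketch of the phase space reconstruction that the parameters (τ, m) control: the scalar series is delay-embedded, and each reconstructed state vector is paired with the next sample as its one-step-ahead prediction target (the embedding itself is standard; feeding the pairs to the LS-SVM is the paper's next step).

```python
import numpy as np

def delay_embed(series, m, tau):
    """Rows of X are delay vectors [x(t), x(t+tau), ..., x(t+(m-1)*tau)];
    y holds the corresponding one-step-ahead targets."""
    n = len(series) - (m - 1) * tau
    X = np.column_stack([series[i * tau: i * tau + n] for i in range(m)])
    y = series[(m - 1) * tau + 1: (m - 1) * tau + n]
    return X[:-1], y
```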
Rank-based algorithms for analysis of microarrays
NASA Astrophysics Data System (ADS)
Liu, Wei-min; Mei, Rui; Bartell, Daniel M.; Di, Xiaojun; Webster, Teresa A.; Ryder, Tom
2001-06-01
Analysis of microarray data often involves extracting information from the raw intensities of cell spots and making certain calls. Rank-based algorithms are powerful tools for providing probability values for hypothesis tests, especially when the distribution of the intensities is unknown. For our current gene expression arrays, a gene is detected by a set of probe pairs consisting of perfect match and mismatch cells. The one-sided upper-tail Wilcoxon signed rank test is used in our algorithms for absolute calls (whether a gene is detected or not) as well as comparative calls (whether a gene is increasing, decreasing, or showing no significant change in one sample compared with another). We also test the possibility of using only perfect match cells to make calls. This paper focuses on absolute calls. We have developed error analysis methods and software tools that allow us to compare the accuracy of the calls in the presence or absence of mismatch cells at different target concentrations. The use of nonparametric rank-based tests is not limited to absolute and comparative calls on gene expression chips; they can also be applied to other oligonucleotide microarrays for genotyping and mutation detection, as well as to spotted arrays.
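A minimal sketch of an absolute call of this kind, using SciPy's one-sided Wilcoxon signed-rank test on the perfect-match minus mismatch differences of one probe set; the p-value cutoff is an illustrative assumption, not the authors' calibrated threshold.

```python
import numpy as np
from scipy.stats import wilcoxon

def absolute_call(pm, mm, alpha=0.04):
    """One gene's detection call from its probe-pair intensities: upper-tail
    test of whether PM intensities exceed MM intensities."""
    diffs = np.asarray(pm, float) - np.asarray(mm, float)
    stat, p = wilcoxon(diffs, alternative="greater")
    return ("present" if p < alpha else "absent"), p
```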
GPU-based parallel algorithm for blind image restoration using midfrequency-based methods
NASA Astrophysics Data System (ADS)
Xie, Lang; Luo, Yi-han; Bao, Qi-liang
2013-08-01
GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for the GPU hardware architecture is of great significance. To address the high computational complexity and poor real-time performance of blind image restoration, the midfrequency-based algorithm for blind image restoration is analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is used to restore the image with hardly any recursion or iteration. Combining the algorithm's data-intensive, data-parallel character with the GPU's single-instruction multiple-thread execution model, a new parallel midfrequency-based algorithm for blind image restoration is proposed that is suitable for GPU stream computing. In this algorithm, the GPU is used to accelerate the estimation of class-G point spread functions and the midfrequency-based filtering. For better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of data access and of host-device communication. The kernel parallelism structure is determined by the decomposition of the filtering data so that the transmission rate works around the memory bandwidth limitation. The results show that the new algorithm significantly increases operational speed and effectively improves the real-time performance of image restoration, especially for high-resolution images.
A quantum mechanics-based algorithm for vessel segmentation in retinal images
NASA Astrophysics Data System (ADS)
Youssry, Akram; El-Rafei, Ahmed; Elramly, Salwa
2016-03-01
Blood vessel segmentation is an important step in retinal image analysis and one of the steps required for computer-aided detection of ophthalmic diseases. In this paper, a novel quantum mechanics-based algorithm for retinal vessel segmentation is presented. The algorithm consists of three major steps. The first is preprocessing of the images to prepare them for further processing. The second is feature extraction, where a set of four features is generated at each image pixel; these features are then combined using a nonlinear transformation for dimensionality reduction. The final step applies a recently proposed quantum mechanics-based framework for image processing: pixels are mapped to quantum systems that evolve from an initial state to a final state governed by Schrödinger's equation, with the evolution controlled by a Hamiltonian operator that is a function of the extracted features at each pixel. A measurement step then determines whether the pixel belongs to the vessel or non-vessel class. Several functional forms of the Hamiltonian are proposed, and the best-performing form is selected. The algorithm is tested on the publicly available DRIVE database. The average results for sensitivity, specificity, and accuracy are 80.29%, 97.34%, and 95.83%, respectively. These results compare favorably with recently published techniques, showing the superior performance of the proposed method. Finally, the implementation of the algorithm on a quantum computer and the challenges facing this implementation are discussed.
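A minimal sketch of the per-pixel quantum step: treat each pixel as a two-level system whose Hamiltonian depends on its (reduced) feature value, evolve it under Schrödinger's equation, and "measure" the vessel state. The feature-to-Hamiltonian mapping below is an illustrative stand-in, not one of the paper's selected functional forms.

```python
import numpy as np
from scipy.linalg import expm

def vessel_probability(feature, t=1.0):
    """Evolve |0> under H(feature) and return the Born-rule probability of
    measuring the 'vessel' state |1>."""
    H = np.array([[0.0, feature],
                  [feature, 0.0]], dtype=complex)   # illustrative H(feature)
    U = expm(-1j * H * t)                           # exp(-iHt), with hbar = 1
    psi = U @ np.array([1.0, 0.0], dtype=complex)
    return float(np.abs(psi[1]) ** 2)

def segment(features, threshold=0.5):
    probs = np.vectorize(vessel_probability)(features)
    return probs > threshold                        # vessel / non-vessel mask
```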
O-buffer: a framework for sample-based graphics.
Qu, Huamin; Kaufman, Arie E
2004-01-01
We present an innovative modeling and rendering primitive, called the O-buffer, as a framework for sample-based graphics. The 2D or 3D O-buffer is, in essence, a conventional image or a volume, respectively, except that samples are not restricted to a regular grid. A sample position in the O-buffer is recorded as an offset to the nearest grid point of a regular base grid (hence the name O-buffer). The O-buffer can greatly improve the expressive power of images and volumes. Image quality can be improved by storing more spatial information with samples and by avoiding multiple resamplings. It can be exploited to represent and render unstructured primitives, such as points, particles, and curvilinear or irregular volumes. The O-buffer is therefore a unified representation for a variety of graphics primitives and supports mixing them in the same scene. It is a semiregular structure which lends itself to efficient construction and rendering. O-buffers may assume a variety of forms including 2D O-buffers, 3D O-buffers, uniform O-buffers, nonuniform O-buffers, adaptive O-buffers, layered-depth O-buffers, and O-buffer trees. We demonstrate the effectiveness of the O-buffer in a variety of applications, such as image-based rendering, point sample rendering, and volume rendering. PMID:18579969
Chen, Zhiru; Hong, Wenxue
2016-02-01
Considering the low prediction accuracy on positive samples and the poor overall classification caused by the unbalanced sample data of MicroRNA (miRNA) targets, we propose in this paper a support vector machine (SVM)-integration of under-sampling and weight (SVM-IUSM) algorithm, an under-sampling method based on ensemble learning. The algorithm adopts SVM as the base learner and AdaBoost as the integration framework, and embeds clustering-based under-sampling into the iterative process, aiming to reduce the imbalance between positive and negative samples. Meanwhile, in the adaptive adjustment of sample weights, the SVM-IUSM algorithm eliminates abnormal negative samples with a robust sample-weight smoothing mechanism so as to avoid over-learning. Finally, the integrated miRNA target classifier is obtained by combining multiple weak classifiers through a voting mechanism. Experiments reveal that SVM-IUSM, compared with other algorithms on unbalanced data sets, not only improves the accuracy on positive targets and the overall classification effect, but also enhances the generalization ability of the miRNA target classifier. PMID:27382743
A Decision Support Framework For Science-Based, Multi-Stakeholder Deliberation: A Coral Reef Example
We present a decision support framework for science-based assessment and multi-stakeholder deliberation. The framework consists of two parts: a DPSIR (Drivers-Pressures-States-Impacts-Responses) analysis to identify the important causal relationships among anthropogenic environ...
Optimized Laplacian image sharpening algorithm based on graphic processing unit
NASA Astrophysics Data System (ADS)
Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah
2014-12-01
In classical Laplacian image sharpening, all pixels are processed one by one, which entails a large amount of computation; traditional Laplacian sharpening on a CPU is therefore considerably time-consuming, especially for large images. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of image size on performance as well as the relationship between data transfer time and parallel computing time. Further, exploiting the characteristics of the different GPU memory types, an improved scheme is developed that uses shared memory instead of global memory, further increasing efficiency. Experimental results show that the two novel algorithms outperform the traditional sequential OpenCV-based method in computing speed.
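For reference, the per-pixel operation both the CPU and GPU versions compute is a small convolution; a sequential sketch (NumPy/SciPy rather than CUDA, with the textbook 4-neighbor kernel) is:

```python
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def sharpen(img, strength=1.0):
    """Laplacian sharpening: subtracting the Laplacian response steepens
    intensity transitions around edges."""
    lap = convolve(img.astype(float), LAPLACIAN, mode="nearest")
    return np.clip(img - strength * lap, 0, 255)
```

The GPU version assigns one thread per output pixel, which is why shared memory helps: neighboring threads reuse each other's input pixels.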
Vibration-based damage detection algorithm for WTT structures
NASA Astrophysics Data System (ADS)
Nguyen, Tuan-Cuong; Kim, Tae-Hwan; Choi, Sang-Hoon; Ryu, Joo-Young; Kim, Jeong-Tae
2016-04-01
In this paper, the integrity of a wind turbine tower (WTT) structure is nondestructively estimated using its vibration responses. Firstly, a damage detection algorithm using changes in modal characteristics to predict damage locations and severities in structures is outlined. Secondly, a finite element (FE) model of a real WTT structure is established using the commercial software Midas FEA. Thirdly, forced vibration tests are performed on the FE model of the WTT structure under various damage scenarios, and the changes in modal parameters such as natural frequencies and mode shapes are examined for damage monitoring. Finally, the feasibility of the vibration-based damage detection method is numerically verified by predicting the locations and severities of damage in the FE model of the WTT structure.
Controller design based on μ analysis and PSO algorithm.
Lari, Ali; Khosravi, Alireza; Rajabi, Farshad
2014-03-01
In this paper, an evolutionary algorithm is employed to address the controller design problem based on μ analysis. Conventional solutions to the μ synthesis problem, such as the D-K iteration method, often lead to high-order, impractical controllers. In the proposed approach, a constrained optimization problem based on μ analysis is defined, and an evolutionary approach is employed to solve it, the goal being a more practical controller of lower order. A benchmark system, the two-tank system, is considered to evaluate the performance of the proposed approach. Simulation results show that the proposed controller performs more effectively than the high-order H(∞) controller and responds closely to the high-order D-K iteration controller, the common solution to the μ synthesis problem. PMID:24314832
A trait-based framework for stream algal communities.
Lange, Katharina; Townsend, Colin Richard; Matthaei, Christoph David
2016-01-01
The use of trait-based approaches to detect effects of land use and climate change on terrestrial plant and aquatic phytoplankton communities is increasing, but such a framework is still needed for benthic stream algae. Here we present a conceptual framework of morphological, physiological, behavioural and life-history traits relating to resource acquisition and resistance to disturbance. We tested this approach by assessing the relationships between multiple anthropogenic stressors and algal traits at 43 stream sites. Our "natural experiment" was conducted along gradients of agricultural land-use intensity (0-95% of the catchment in high-producing pasture) and hydrological alteration (0-92% streamflow reduction resulting from water abstraction for irrigation) as well as related physicochemical variables (total nitrogen concentration and deposited fine sediment). Strategic choice of study sites meant that agricultural intensity and hydrological alteration were uncorrelated. We studied the relationships of seven traits (with 23 trait categories) to our environmental predictor variables using general linear models and an information-theoretic model-selection approach. Life form, nitrogen fixation and spore formation were key traits that showed the strongest relationships with environmental stressors. Overall, FI (farming intensity) exerted stronger effects on algal communities than hydrological alteration. The large-bodied, non-attached, filamentous algae that dominated under high farming intensities have limited dispersal abilities but may cope with unfavourable conditions through the formation of spores. Antagonistic interactions between FI and flow reduction were observed for some trait variables, whereas no interactions occurred for nitrogen concentration and fine sediment. Our conceptual framework was well supported by tests of ten specific hypotheses predicting effects of resource supply and disturbance on algal traits. Our study also shows that investigating a
Microsystem design framework based on tool adaptations and library developments
NASA Astrophysics Data System (ADS)
Karam, Jean Michel; Courtois, Bernard; Rencz, Marta; Poppe, Andras; Szekely, Vladimir
1996-09-01
Besides foundry facilities, Computer-Aided Design (CAD) tools are also required to move microsystems from research prototypes to an industrial market. This paper describes a CAD framework for microsystems, based on selected existing software packages adapted and extended for microsystem technology, assembled with libraries in which models are available as standard cells described at different levels (symbolic, system/behavioral, layout). In microelectronics, CAD has already attained a highly sophisticated and professional level, where complete fabrication sequences are simulated and the device and system operation is completely tested before manufacturing. In comparison, the art of microsystem design and modelling is still in its infancy. However, at least for the numerical simulation of the operation of single microsystem components, such as mechanical resonators, thermo-elements, and elastic diaphragms, reliable simulation tools are available. For the different engineering disciplines (electronics, mechanics, optics, etc.) many CAD tools for the design, simulation and verification of specific devices are available, but there is no single CAD environment within which a (micro-)system simulation can be performed, owing to the different nature of the devices. In general, there are two approaches to overcoming this limitation: the first would be to develop a new framework tailored to microsystem engineering; the second, much more realistic, is to take the existing CAD tools that contain the most promising features and extend them so that they can be used for the simulation and verification of microsystems and the devices involved. These tools are assembled with libraries in a microsystem design environment allowing a continuous design flow. The approach is driven by the wish to make microsystems accessible to a large community of people, including SMEs and non-specialized academic institutions.
A content based framework for mass retrieval in mammograms
NASA Astrophysics Data System (ADS)
Kaur, Simranjit; Sharma, Vipul; Singh, Sukhwinder; Gupta, Savita
2014-03-01
In recent years, there has been phenomenal growth in the volume of digital mammograms produced in hospitals and medical centers. There is thus a need for efficient access methods and retrieval tools to search, browse and retrieve images from large repositories to aid diagnosis and research. This paper presents a Content Based Medical Image Retrieval (CBMIR) system for mass retrieval in mammograms using a two-stage framework. For mass segmentation, a semi-automatic method based on a seeded region growing approach is also proposed. Shape features are extracted at the first stage to find lesions of similar shape, and the second stage refines the results by finding lesions of similar pathology using texture features. The shape features used in this study are Compactness, Convexity, Spicularity, Radial Distance (RD) based features, Zernike Moments (ZM) and Fourier Descriptors (FD). The texture of mass lesions is characterized by Gray Level Co-occurrence Matrix (GLCM) features, Gray Level Run Length Matrix (GLRLM) features and Fourier Power Spectrum (FPS) features. Feature selection is performed with the Correlation based Feature Selection (CFS) technique to select the best subset of shape and texture features, as the high dimensionality of the feature vector may limit computational efficiency. This study used the IRMA version of the DDSM LJPEG data to evaluate the retrieval performance of the various shape and texture features. The experimental results show that the proposed CBMIR system, using merely the compactness or the shape features selected by CFS, provides better distinction among the four categories of mass shape (Round, Oval, Lobulated and Irregular) at the first stage, while FPS-based texture features provide better distinction between pathologies (Benign and Malignant) at the second stage.
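As one concrete example of the shape features above, compactness has a standard closed form, 4πA/P², which equals 1 for a disc and decreases for irregular or spiculated masses. A minimal sketch on a binary lesion mask (scikit-image's perimeter estimator is an illustrative choice):

```python
import numpy as np
from skimage import measure

def compactness(mask):
    """4*pi*Area / Perimeter^2 of a binary lesion mask: near 1 for round
    masses, smaller for lobulated or irregular ones."""
    area = float(mask.sum())
    perimeter = measure.perimeter(mask)
    return 4.0 * np.pi * area / (perimeter ** 2)
```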
ANNIT - An Efficient Inversion Algorithm based on Prediction Principles
NASA Astrophysics Data System (ADS)
Růžek, B.; Kolář, P.
2009-04-01
Solving inverse problems is a central task in geophysics. The amount of data is continuously increasing, modeling methods are improving, and computing facilities are advancing rapidly, so the development of new and efficient algorithms and computer codes for both forward and inverse modeling remains timely. ANNIT contributes to this stream as a tool for the efficient solution of sets of non-linear equations. Typical geophysical problems are based on a parametric approach: the system is characterized by a vector of parameters p, and its response by a vector of data d. The forward problem is usually represented by the unique mapping F(p)=d. The inverse problem is much more complex; the inverse mapping p=G(d) is available in analytical or closed form only exceptionally, and in general it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D, M are sought within which the forward mapping F is sufficiently smooth that the inverse mapping G exists, (ii) a numerical approximation of G in the subspaces {pD, pM} is found, and (iii) a candidate solution is predicted using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are sought by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping is made using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, and (c) linear prediction (also known as "Kriging"). The ANNIT algorithm also has a built-in archive of already evaluated models; archived models are re-used in a suitable way so that the number of forward evaluations is minimized. ANNIT is now implemented in both MATLAB and SCILAB. Numerical tests show good
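A minimal sketch of steps (i)-(iii) using the second of the three listed approximators: sample a population of models, run the forward mapping, fit a radial basis function approximation of the inverse mapping d -> p, and predict a candidate model for the observed data. SciPy's RBFInterpolator stands in for ANNIT's own RBF network; the uniform sampling and the forward model are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)

def predict_candidate(forward, p_lo, p_hi, d_obs, n_samples=200):
    """Approximate G: d -> p on a sampled population and use it to predict
    a candidate solution for the observed data d_obs."""
    P = rng.uniform(p_lo, p_hi, (n_samples, len(p_lo)))  # population of models
    D = np.array([forward(p) for p in P])                # forward runs F(p) = d
    G = RBFInterpolator(D, P, smoothing=1e-8)            # numerical inverse map
    return G(np.atleast_2d(d_obs))[0]                    # predicted candidate
```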
Mathematical framework for activity-based cancer biomarkers
Kwong, Gabriel A.; Dudani, Jaideep S.; Carrodeguas, Emmanuel; Mazumdar, Eric V.; Zekavat, Seyedeh M.; Bhatia, Sangeeta N.
2015-01-01
Advances in nanomedicine are providing sophisticated functions to precisely control the behavior of nanoscale drugs and diagnostics. Strategies that coopt protease activity as molecular triggers are increasingly important in nanoparticle design, yet the pharmacokinetics of these systems are challenging to understand without a quantitative framework to reveal nonintuitive associations. We describe a multicompartment mathematical model to predict strategies for ultrasensitive detection of cancer using synthetic biomarkers, a class of activity-based probes that amplify cancer-derived signals into urine as a noninvasive diagnostic. Using a model formulation made of a PEG core conjugated with protease-cleavable peptides, we explore a vast design space and identify guidelines for increasing sensitivity that depend on critical parameters such as enzyme kinetics, dosage, and probe stability. According to this model, synthetic biomarkers that circulate in stealth but then activate at sites of disease have the theoretical capacity to discriminate tumors as small as 5 mm in diameter—a threshold sensitivity that is otherwise challenging for medical imaging and blood biomarkers to achieve. This model may be adapted to describe the behavior of additional activity-based approaches to allow cross-platform comparisons, and to predict allometric scaling across species. PMID:26417077
An evidence-based conceptual framework of healthy cooking.
Raber, Margaret; Chandra, Joya; Upadhyaya, Mudita; Schick, Vanessa; Strong, Larkin L; Durand, Casey; Sharma, Shreela
2016-12-01
Eating out of the home has been positively associated with body weight, obesity, and poor diet quality. While cooking at home has declined steadily over the last several decades, the benefits of home cooking have gained attention in recent years and many healthy cooking projects have emerged around the United States. The purpose of this study was to develop an evidence-based conceptual framework of healthy cooking behavior in relation to chronic disease prevention. A systematic review of the literature was undertaken using broad search terms. Studies analyzing the impact of cooking behaviors across a range of disciplines were included. Experts in the field reviewed the resulting constructs in a small focus group. The model was developed from the extant literature on the subject with 59 studies informing 5 individual constructs (frequency, techniques and methods, minimal usage, flavoring, and ingredient additions/replacements), further defined by a series of individual behaviors. Face validity of these constructs was supported by the focus group. A validated conceptual model is a significant step toward better understanding the relationship between cooking, disease and disease prevention and may serve as a base for future assessment tools and curricula. PMID:27413657
Internal modelling under Risk-Based Capital (RBC) framework
NASA Astrophysics Data System (ADS)
Ling, Ang Siew; Hin, Pooi Ah
2015-12-01
Methods for internal modelling under the Risk-Based Capital framework very often make use of data in the form of a run-off triangle. The present research instead extracts, from a group of n customers, the historical data for the sum insured s_i of the i-th customer together with the amount paid y_{ij} and the amount a_{ij} reported but not yet paid in the j-th development year, for j = 1, 2, 3, 4, 5, 6. We model the future value (y_{i,j+1}, a_{i,j+1}) as dependent on the present-year value (y_{ij}, a_{ij}) and the sum insured s_i via a conditional distribution derived from a multivariate power-normal mixture distribution. For a group of given customers with different original purchase dates, the distribution of the aggregate claims liabilities may be obtained from the proposed model. The prediction interval based on this distribution is found to cover the observed aggregate claim liabilities well.
Algorithm for Stabilizing a POD-Based Dynamical System
NASA Technical Reports Server (NTRS)
Kalb, Virginia L.
2010-01-01
This algorithm provides a new way to improve the accuracy and asymptotic behavior of a low-dimensional system based on the proper orthogonal decomposition (POD). Given a data set representing the evolution of a system of partial differential equations (PDEs), such as the Navier-Stokes equations for incompressible flow, one may obtain a low-dimensional model in the form of ordinary differential equations (ODEs) that should model the dynamics of the flow. Temporal sampling of the direct numerical simulation of the PDEs produces a spatial time series. The POD extracts the temporal and spatial eigenfunctions of this data set. Truncating to retain only the most energetic modes and then Galerkin-projecting these modes onto the PDEs yields a dynamical system of ordinary differential equations for the time-dependent behavior of the flow. In practice, the steps leading to this system of ODEs entail numerically computing first-order derivatives of the mean data field and the eigenfunctions, and computing many inner products. This is far from a perfect process, and it often results in a lack of long-term stability of the system and incorrect asymptotic behavior of the model. This algorithm implements a new stabilization method that uses the temporal eigenfunctions to derive correction terms for the coefficients of the dynamical system, significantly reducing these errors.
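A minimal sketch of the decomposition that precedes the stabilization: stack snapshots into a matrix, remove the mean field, and let the SVD supply the spatial modes and the temporal coefficients from which correction terms are derived. Truncation by cumulative energy fraction is an illustrative choice.

```python
import numpy as np

def pod_modes(snapshots, energy=0.99):
    """snapshots: (n_space, n_time) samples from the simulation. Returns
    spatial modes, temporal coefficients a(t), and the retained rank."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)   # cumulative modal energy
    r = int(np.searchsorted(cum, energy)) + 1  # smallest rank reaching target
    return U[:, :r], s[:r, None] * Vt[:r], r
```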
A SIFT feature based registration algorithm in automatic seal verification
NASA Astrophysics Data System (ADS)
He, Jin; Ding, Xuewen; Zhang, Hao; Liu, Tiegen
2012-11-01
A SIFT (Scale Invariant Feature Transform) feature based registration algorithm is presented as preparation for seal verification, especially verification against high-quality counterfeit sample seals. The similarities and the spatial relationships between matched SIFT features are combined for the registration. SIFT features extracted from the binary model seal and sample seal images are matched according to their similarities, and the matching rate is used to identify sample seals that are similar to their model seal. For a similar sample seal, false matches are eliminated according to positional relationships. Then the homography between the model seal and the sample seal is constructed and named H_S; the theoretical homography is named H. The accuracy of registration is evaluated by the Frobenius norm of H - H_S. In experiments, translation, filling and rotation transformations are applied to seals with different shapes, stroke numbers and structures. After registering the transformed seals with their model seals, the maximum Frobenius norm of H - H_S is no more than 0.03. The results show that this algorithm accomplishes accurate registration that is invariant to translation, filling, and rotation, with no restriction on seal shape, stroke number or structure.
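A minimal sketch of the evaluation with OpenCV: match SIFT descriptors, estimate the homography H_S with RANSAC, and score registration by the Frobenius norm of H - H_S against the known transformation H. The Lowe ratio-test threshold and the RANSAC reprojection tolerance are the usual defaults, assumptions here.

```python
import cv2
import numpy as np

def registration_error(model_img, sample_img, H_true):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(model_img, None)
    k2, d2 = sift.detectAndCompute(sample_img, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H_est, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    H_est /= H_est[2, 2]                 # fix scale before comparing
    return np.linalg.norm(H_true - H_est, ord="fro")
```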
Fast Field Calibration of MIMU Based on the Powell Algorithm
Ma, Lin; Chen, Wanwan; Li, Bin; You, Zheng; Chen, Zhigang
2014-01-01
The calibration of micro inertial measurement units is important in ensuring the precision of navigation systems, which are equipped with microelectromechanical system sensors that suffer from various errors. However, traditional calibration methods cannot meet the demand for fast field calibration. This paper presents a fast field calibration method based on the Powell algorithm. The calibration rests on two key constraints: the norm of the accelerometer measurement vector must equal the gravity magnitude, and the norm of the gyro measurement vector must equal the applied rotational velocity. To resolve the error parameters by judging the convergence of the nonlinear equations, the Powell algorithm is applied to a mathematical error model of the proposed calibration, from which all parameters are obtained. A comparison of the proposed method with the traditional calibration method through navigation tests demonstrates the performance of the proposed calibration method, which also saves time compared with the traditional method. PMID:25177801
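A minimal sketch of the accelerometer half of that constraint with SciPy's Powell method: choose bias and scale so that every static measurement, once corrected, has the magnitude of gravity. The six-parameter error model and the initial guess are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

G = 9.80665  # gravity magnitude, m/s^2

def accel_residual(params, raw):
    """params = [bx, by, bz, sx, sy, sz]; corrected static accelerometer
    vectors should all have norm equal to gravity."""
    bias, scale = params[:3], params[3:]
    corrected = (raw - bias) / scale
    return float(((np.linalg.norm(corrected, axis=1) - G) ** 2).sum())

def calibrate(raw_static):
    x0 = np.array([0, 0, 0, 1, 1, 1], dtype=float)  # no bias, unit scale
    res = minimize(accel_residual, x0, args=(raw_static,), method="Powell")
    return res.x
```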
CACONET: Ant Colony Optimization (ACO) Based Clustering Algorithm for VANET.
Aadil, Farhan; Bajwa, Khalid Bashir; Khan, Salabat; Chaudary, Nadeem Majeed; Akram, Adeel
2016-01-01
A vehicular ad hoc network (VANET) is a wirelessly connected network of vehicular nodes. A number of techniques, such as message ferrying, data aggregation, and vehicular node clustering, aim to improve communication efficiency in VANETs. Cluster heads (CHs), selected in the process of clustering, manage inter-cluster and intra-cluster communication, and the lifetime of clusters and the number of CHs determine the efficiency of the network. In this paper, a Clustering algorithm based on Ant Colony Optimization (ACO) for VANETs (CACONET) is proposed. CACONET forms optimized clusters for robust communication and is compared empirically with state-of-the-art baseline techniques, namely Multi-Objective Particle Swarm Optimization (MOPSO) and Comprehensive Learning Particle Swarm Optimization (CLPSO). Experiments varying the grid size of the network, the transmission range of nodes, and the number of nodes were performed to evaluate the comparative effectiveness of these algorithms. For optimized clustering, the parameters considered are the transmission range and the direction and speed of the nodes. The results indicate that CACONET significantly outperforms MOPSO and CLPSO. PMID:27149517
A Competency-Based Guided-Learning Algorithm Applied on Adaptively Guiding E-Learning
ERIC Educational Resources Information Center
Hsu, Wei-Chih; Li, Cheng-Hsiu
2015-01-01
This paper presents a new algorithm called competency-based guided-learning algorithm (CBGLA), which can be applied on adaptively guiding e-learning. Computational process analysis and mathematical derivation of competency-based learning (CBL) were used to develop the CBGLA. The proposed algorithm could generate an effective adaptively guiding…
Graph-based optimization algorithm and software on kidney exchanges.
Chen, Yanhua; Li, Yijiang; Kalbfleisch, John D; Zhou, Yan; Leichtman, Alan; Song, Peter X-K
2012-07-01
Kidney transplantation is typically the most effective treatment for patients with end-stage renal disease. However, the supply of kidneys is far short of the fast-growing demand. Kidney paired donation (KPD) programs provide an innovative approach for increasing the number of available kidneys. In a KPD program, willing but incompatible donor-candidate pairs may exchange donor organs to achieve mutual benefit. Recently, research on exchanges initiated by altruistic donors (ADs) has attracted great attention because the resultant organ exchange mechanisms offer advantages that increase the effectiveness of KPD programs. Currently, most KPD programs focus on rule-based strategies of prioritizing kidney donation. In this paper, we consider and compare two graph-based organ allocation algorithms to optimize an outcome-based strategy defined by the overall expected utility of kidney exchanges in a KPD program with both incompatible pairs and ADs. We develop an interactive software-based decision support system to model, monitor, and visualize a conceptual KPD program, which aims to assist clinicians in the evaluation of different kidney allocation strategies. Using this system, we demonstrate empirically that an outcome-based strategy for kidney exchanges leads to improvement in both the quantity and quality of kidney transplantation through comprehensive simulation experiments. PMID:22542649
A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures
Neylon, J.; Sheng, K.; Yu, V.; Low, D. A.; Kupelian, P.; Santhanam, A.; Chen, Q.
2014-10-15
Purpose: Real-time adaptive planning and treatment have been infeasible due in part to their high computational complexity. There have been many recent efforts to use graphics processing units (GPUs) to accelerate computational performance and improve dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. Methods: The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructures the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data are resampled according to radial sampling and parallel ray-spacing parameters, making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrates a trade-off between computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between speedup and accuracy, the authors employed an exhaustive parameter search over all sampling parameters that define the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma analysis method (γ) was used to compare the dose distributions using 2% dose-difference and 2 mm distance-to-agreement criteria.
ERIC Educational Resources Information Center
Sterba, Sonya K.
2009-01-01
A model-based framework, due originally to R. A. Fisher, and a design-based framework, due originally to J. Neyman, offer alternative mechanisms for inference from samples to populations. We show how these frameworks can utilize different types of samples (nonrandom or random vs. only random) and allow different kinds of inference (descriptive vs.…
Evidence-Based Leadership Development: The 4L Framework
ERIC Educational Resources Information Center
Scott, Shelleyann; Webber, Charles F.
2008-01-01
Purpose: This paper aims to use the results of three research initiatives to present the life-long learning leader 4L framework, a model for leadership development intended for use by designers and providers of leadership development programming. Design/methodology/approach: The 4L model is a conceptual framework that emerged from the analysis of…
An improved SIFT algorithm based on KFDA in image registration
NASA Astrophysics Data System (ADS)
Chen, Peng; Yang, Lijuan; Huo, Jinfeng
2016-03-01
As a stable feature matching algorithm, SIFT has been widely used in many fields. To further improve the robustness of the SIFT algorithm, an improved SIFT algorithm based on Kernel Fisher Discriminant Analysis (KFDA-SIFT) is presented for image registration. The algorithm applies KFDA to the SIFT descriptors to obtain a feature extraction matrix, uses the new descriptors to conduct feature matching, and finally applies RANSAC to purify the matches. The experiments show that the presented algorithm is robust to image changes in scale, illumination, perspective, expression and small pose, with higher matching accuracy.
NASA Astrophysics Data System (ADS)
Gu, Xuejun; Jelen, Urszula; Li, Jinsheng; Jia, Xun; Jiang, Steve B.
2011-06-01
Targeting the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on a graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations are conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases, there is improvement with the 3D-density correction over the conventional FSPB algorithm, and for most cases the improvement is significant. Regarding efficiency, because of the appropriate arrangement of memory access and the use of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, the new algorithm, though slightly sacrificing computational efficiency (~5-15% lower), has significantly improved dose calculation accuracy, making it more suitable for online IMRT replanning.
NASA Astrophysics Data System (ADS)
Mattei, D.; Smith, I.; Ferrari, A.; Carbillet, M.
2010-10-01
Post-processing for exoplanet detection using direct imaging requires large data cubes and/or sophisticated signal processing techniques. For alt-azimuthal mounts, a projection effect called field rotation makes a potential planet rotate in a known manner across the set of images. For ground-based telescopes that use extreme adaptive optics and advanced coronagraphy, techniques based on field rotation are already widely used and still being improved. In most such techniques, for a given initial position of the planet, the planet intensity estimate is a linear function of the set of images. However, due to field rotation, the modified instrumental response is not shift-invariant, unlike usual linear filters; testing all possible initial positions is therefore very time-consuming. To reduce the processing time, we propose to handle each subset of initial positions on a different machine using parallel programming. In particular, the MOODS algorithm dedicated to the VLT-SPHERE instrument, which jointly estimates the light contributions of the star and the potential exoplanet, is parallelized on the Observatoire de la Côte d'Azur cluster. Different parallelization methods (OpenMP, MPI, Jobs Array) have been developed for the initial MOODS code and compared to each other. The one finally chosen splits the initial positions across the available processors while best accounting for the different constraints of the cluster structure: memory, job submission queues, number of available CPUs, and cluster average load. In the end, a standard set of images is satisfactorily processed in a few hours instead of a few days.
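The described split of candidate initial positions across workers can be illustrated with a small Python sketch; this is a generic parallelization pattern, not the MOODS code, and `estimate_intensity` is a hypothetical stand-in for the per-position linear estimation.

```python
# Minimal sketch of distributing candidate initial planet positions over
# worker processes; estimate_intensity is a hypothetical placeholder.
from multiprocessing import Pool
import itertools

def estimate_intensity(cube, position):
    # Placeholder: average the pixel over the image cube.
    x, y = position
    return sum(frame[y][x] for frame in cube) / len(cube)

def process_chunk(args):
    cube, positions = args
    return [(p, estimate_intensity(cube, p)) for p in positions]

def parallel_search(cube, positions, n_workers=8):
    # Split the initial positions into one chunk per worker, as the
    # abstract describes for the cluster parallelization.
    chunks = [positions[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        results = pool.map(process_chunk, [(cube, c) for c in chunks])
    return list(itertools.chain.from_iterable(results))
```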
A game theoretic framework for incentive-based models of intrinsic motivation in artificial systems
Merrick, Kathryn E.; Shafi, Kamran
2013-01-01
An emerging body of research is focusing on understanding and building artificial systems that can achieve open-ended development influenced by intrinsic motivations. In particular, research in robotics and machine learning is yielding systems and algorithms with increasing capacity for self-directed learning and autonomy. Traditional software architectures and algorithms are being augmented with intrinsic motivations to drive cumulative acquisition of knowledge and skills. Intrinsic motivations have recently been considered in reinforcement learning, active learning and supervised learning settings among others. This paper considers game theory as a novel setting for intrinsic motivation. A game theoretic framework for intrinsic motivation is formulated by introducing the concept of optimally motivating incentive as a lens through which players perceive a game. Transformations of four well-known mixed-motive games are presented to demonstrate the perceived games when players' optimally motivating incentive falls in three cases corresponding to strong power, affiliation and achievement motivation. We use agent-based simulations to demonstrate that players with different optimally motivating incentive act differently as a result of their altered perception of the game. We discuss the implications of these results both for modeling human behavior and for designing artificial agents or robots. PMID:24198797
DDoS Attack Detection Algorithms Based on Entropy Computing
NASA Astrophysics Data System (ADS)
Li, Liying; Zhou, Jianying; Xiao, Ning
Distributed Denial of Service (DDoS) attacks pose a severe threat to the Internet. It is difficult to find the exact signature of an attack, and it is hard to distinguish whether an unusually high volume of traffic is caused by an attack or by a huge number of users occasionally accessing the target machine at the same time. Entropy detection is an effective method for detecting DDoS attacks; it is mainly used to calculate the randomness of the distribution of certain attributes in the network packets' headers. In this paper, we focus on DDoS attack detection technology. We improve the previous entropy detection algorithm and propose two enhanced detection methods based on cumulative entropy and on time, respectively. Experimental results show that these methods lead to more accurate and effective DDoS detection.
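The basic windowed entropy detector on which the paper's cumulative-entropy and time-based variants build can be sketched as follows; the choice of attribute (source IP) and the threshold margin are illustrative assumptions, not the paper's parameters.

```python
import math
from collections import Counter

def shannon_entropy(values):
    # H = -sum(p * log2(p)) over the empirical attribute distribution.
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def ddos_suspected(window_src_ips, baseline_entropy, margin=0.5):
    # A sharp entropy drop relative to a learned baseline suggests traffic
    # concentrated on few header values, a typical DDoS signature.
    return shannon_entropy(window_src_ips) < baseline_entropy - margin
```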
An optimization-based iterative algorithm for recovering fluorophore location
NASA Astrophysics Data System (ADS)
Yi, Huangjian; Peng, Jinye; Jin, Chen; He, Xiaowei
2015-10-01
Fluorescence molecular tomography (FMT) is a non-invasive technique that allows three-dimensional visualization of a fluorophore in vivo in small animals. In practical applications of FMT, however, image reconstruction is challenging since it is a highly ill-posed problem, owing to the diffusive behaviour of light transport in tissue and the limited measurement data. In this paper, we present an iterative algorithm based on an optimization problem for three-dimensional reconstruction of the fluorescent target. The method alternates the weighted algebraic reconstruction technique (WART) with the steepest descent method (SDM) for image reconstruction. Numerical simulation experiments and a physical phantom experiment are performed to validate the method. Furthermore, compared to the conjugate gradient method, the proposed method provides better three-dimensional (3D) localization of the fluorescent target.
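A schematic of the described alternation, assuming the problem has been linearized to Ax = b with a nonnegative fluorophore distribution x; the relaxation and weighting choices below are illustrative, not the paper's exact WART scheme.

```python
import numpy as np

def wart_sdm(A, b, n_outer=20, relax=0.5):
    # Alternate one ART-style row sweep with one steepest-descent step
    # on ||Ax - b||^2, projecting onto x >= 0 after each outer iteration.
    x = np.zeros(A.shape[1])
    row_norms = np.sum(A**2, axis=1) + 1e-12
    for _ in range(n_outer):
        for i in range(A.shape[0]):          # algebraic reconstruction sweep
            r = b[i] - A[i] @ x
            x += relax * r / row_norms[i] * A[i]
        g = A.T @ (A @ x - b)                # steepest-descent direction
        Ag = A @ g
        alpha = (g @ g) / (Ag @ Ag + 1e-12)  # exact line-search step size
        x -= alpha * g
        np.maximum(x, 0.0, out=x)            # nonnegativity of fluorophore
    return x
```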
Secure steganographic communication algorithm based on self-organizing patterns
NASA Astrophysics Data System (ADS)
Saunoriene, Loreta; Ragulskis, Minvydas
2011-11-01
A secure steganographic communication algorithm based on patterns evolving in a Beddington-DeAngelis-type predator-prey model with self- and cross-diffusion is proposed in this paper. Small perturbations of the initial states of the system around the state of equilibrium result in the evolution of self-organizing patterns, and small differences between initial perturbations result in only slight differences in the evolving patterns. It is shown that the generation of interpretable target patterns cannot be considered a secure means of communication, because contours of the secret image can be retrieved from the cover image using statistical techniques if the cover merely represents small perturbations of the initial states of the system. An alternative approach, in which the cover image is the self-organizing pattern that has evolved from initial states perturbed by the dot-skeleton representation of the secret image, can be considered a safe visual communication technique protecting both the secret image and the communicating parties.
The guitar chord-generating algorithm based on complex network
NASA Astrophysics Data System (ADS)
Ren, Tao; Wang, Yi-fan; Du, Dan; Liu, Miao-miao; Siddiqi, Awais
2016-02-01
This paper aims to generate chords for popular songs automatically based on complex networks. First, according to the characteristics of guitar tablature, six chord networks of popular songs by six pop singers are constructed and the properties of all the networks are summarized. By analyzing the diverse chord networks, the accompaniment regularities and features are revealed, with which chords can be generated automatically. Second, in terms of the characteristics of popular songs, a two-tiered network containing a verse network and a chorus network is constructed; with this network, the verse and chorus can be composed separately using a random walk algorithm. Third, the musical motif is taken into account when generating chords, so that bad chord progressions can be revised; this makes the accompaniments sound more melodious. Finally, a popular song is chosen for chord generation, and the newly generated accompaniment sounds better than those written by the composers.
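The generation step can be sketched as a weighted random walk on a chord-transition network; the network contents below are illustrative, not the networks built in the paper.

```python
import random

def generate_chords(network, start, length):
    # network maps a chord to its successors with transition weights,
    # e.g. built by counting transitions in guitar tablature:
    # {"C": {"G": 3, "Am": 2, "F": 5}, "G": {"C": 4, "Em": 1}, ...}
    seq = [start]
    for _ in range(length - 1):
        successors = network[seq[-1]]
        chords = list(successors)
        weights = [successors[c] for c in chords]
        seq.append(random.choices(chords, weights=weights)[0])
    return seq

# Usage: generate_chords(net, "C", 8) yields an 8-chord progression.
```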
Efficiency of tabu-search-based conformational search algorithms.
Grebner, Christoph; Becker, Johannes; Stepanenko, Svetlana; Engels, Bernd
2011-07-30
Efficient conformational search or sampling approaches play an integral role in molecular modeling, leading to a strong demand for even faster and more reliable conformer search algorithms. This article compares the efficiency of a molecular dynamics method, a simulated annealing method, and the basin hopping (BH) approach (which are widely used in this field) with a previously suggested tabu-search-based approach called gradient only tabu search (GOTS). The study emphasizes the success of the GOTS procedure and, more importantly, shows that an approach which combines BH and GOTS outperforms the single methods in efficiency and speed. We also show that ring structures built by a hydrogen bond are useful as starting points for conformational search investigations of peptides and organic ligands with biological activities, especially in structures that contain multiple rings. PMID:21541959
Physiology-based diagnosis algorithm for arteriovenous fistula stenosis detection.
Yeih, Dong-Feng; Wang, Yuh-Shyang; Huang, Yi-Chun; Chen, Ming-Fong; Lu, Shey-Shi
2014-01-01
In this paper, a diagnosis algorithm for arteriovenous fistula (AVF) stenosis is developed based on auscultatory features, signal processing, and machine learning. The AVF sound signals are recorded by electronic stethoscopes at pre-defined positions before and after percutaneous transluminal angioplasty (PTA) treatment. Several new signal features of stenosis are identified and quantified, and the physiological explanations for these features are provided. Utilizing support vector machine method, an average of 90% two-fold cross-validation hit-rate can be obtained, with angiography as the gold standard. This offers a non-invasive easy-to-use diagnostic method for medical staff or even patients themselves for early detection of AVF stenosis. PMID:25571021
Algorithm of semicircular laser spot detection based on circle fitting
NASA Astrophysics Data System (ADS)
Wang, Zhengzhou; Xu, Ruihua; Hu, Bingliang
2013-07-01
In order to obtain the exact center of an asymmetrical, semicircular-aperture laser spot, a laser spot detection method based on circle fitting is proposed in this paper. The laser spot image is first thresholded using a gray-morphology algorithm; the rough edge of the laser spot is detected in both the vertical and horizontal directions; short arcs and isolated edge points are removed by contour growing; the best circle contour is obtained by iterative fitting; and the final standard circle is fitted in the end. The experimental results show that the precision of the method is clearly better than that of the gravity (centroid) model method used in the traditional large laser automatic alignment system. The accuracy achieved for the asymmetrical, semicircular laser spot center meets the requirements of the system.
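The circle-fitting core can be illustrated with the algebraic least-squares (Kasa) fit below; the paper's thresholding, edge detection, contour growing, and iterative outlier rejection steps are omitted.

```python
import numpy as np

def fit_circle(x, y):
    # Solve x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense;
    # the center is (-D/2, -E/2) and r^2 = (D^2 + E^2)/4 - F.
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, rhs, rcond=None)[0]
    a, b = -D / 2.0, -E / 2.0
    return a, b, np.sqrt(a**2 + b**2 - F)
```

Iterating the fit while discarding edge points far from the current circle approximates the iterative purification the abstract describes.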
Personal Navigation Algorithms Based on Wireless Networks and Inertial Sensors
NASA Astrophysics Data System (ADS)
Kaňa, Zdenek; Bradáč, Zdenek; Fiedler, Petr
2014-08-01
The work aims at the development of a positioning algorithm suitable for low-cost indoor or urban pedestrian navigation applications. Sensor fusion was applied to increase the localization accuracy. Because of the required low application cost, only low-grade inertial sensors and wireless-network-based ranging were taken into account. The wireless network was assumed to be preinstalled for other required functionality (for example, building control); therefore only the received signal strength (RSS) range measurement technique was considered. A wireless channel loss mapping method is proposed to overcome the natural uncertainties and restrictions in RSS range measurements. The available sensor and environment models are summarized first, and the most appropriate ones are then selected. Their effective and novel application to the navigation task, and a favorable fusion (particle filtering) of all available information, are the main objectives of this thesis.
Sparsity-based algorithm for detecting faults in rotating machines
NASA Astrophysics Data System (ADS)
He, Wangpeng; Ding, Yin; Zi, Yanyang; Selesnick, Ivan W.
2016-05-01
This paper addresses the detection of periodic transients in vibration signals in order to detect faults in rotating machines. For this purpose, we present a method to estimate periodic-group-sparse signals in noise. The method is based on the formulation of a convex optimization problem, and a fast iterative algorithm is given for its solution. A simulated signal is used to verify the performance of the proposed approach for periodic feature extraction. The detection performance of comparative methods is compared with that of the proposed approach via RMSE values and receiver operating characteristic (ROC) curves. Finally, the proposed approach is applied to single-fault diagnosis of a locomotive bearing and compound-fault diagnosis of motor bearings. The results show that the proposed approach can effectively detect and extract the useful features of bearing outer-race and inner-race defects.
A Hybrid Metaheuristic for Biclustering Based on Scatter Search and Genetic Algorithms
NASA Astrophysics Data System (ADS)
Nepomuceno, Juan A.; Troncoso, Alicia; Aguilar–Ruiz, Jesús S.
In this paper a hybrid metaheuristic for biclustering based on Scatter Search and Genetic Algorithms is presented. A general scheme of Scatter Search is used to obtain high-quality biclusters, while the method for generating the initial population and the combination operator are based on Genetic Algorithms. Experimental results from yeast cell cycle and human B-cell lymphoma data are reported. Finally, the performance of the proposed hybrid algorithm is compared with that of a recently published genetic algorithm.
Lee, S. H.; van der Werf, J. H. J.
2016-01-01
Summary: We have developed an algorithm for genetic analysis of complex traits using genome-wide SNPs in a linear mixed model framework. Compared to current standard REML software based on the mixed model equation, our method is substantially faster. The advantage is largest when there is only a single genetic covariance structure. The method is particularly useful for multivariate analysis, including multi-trait models and random regression models for studying reaction norms. We applied our proposed method to publicly available mice and human data and discuss the advantages and limitations. Availability and implementation: MTG2 is available in https://sites.google.com/site/honglee0707/mtg2. Contact: hong.lee@une.edu.au Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26755623
Patch forest: a hybrid framework of random forest and patch-based segmentation
NASA Astrophysics Data System (ADS)
Xie, Zhongliu; Gillies, Duncan
2016-03-01
The development of an accurate, robust and fast segmentation algorithm has long been a research focus in medical computer vision. State-of-the-art practices often involve non-rigidly registering a target image with a set of training atlases for label propagation over the target space to perform segmentation, a.k.a. multi-atlas label propagation (MALP). In recent years, the patch-based segmentation (PBS) framework has gained wide attention due to its advantage of relaxing the strict voxel-to-voxel correspondence to a series of pair-wise patch comparisons for contextual pattern matching. Despite a high accuracy reported in many scenarios, computational efficiency has consistently been a major obstacle for both approaches. Inspired by recent work on random forest, in this paper we propose a patch forest approach, which by equipping the conventional PBS with a fast patch search engine, is able to boost segmentation speed significantly while retaining an equal level of accuracy. In addition, a fast forest training mechanism is also proposed, with the use of a dynamic grid framework to efficiently approximate data compactness computation and a 3D integral image technique for fast box feature retrieval.
NASA Astrophysics Data System (ADS)
Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.
2016-03-01
Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis. The proposed tool, denoted the PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.
A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.
2005-01-01
We compare Genetic Algorithms (GAs) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GAs, PC-based methods do not update populations of solutions; instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p, chosen so that any p optimizing it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit-length encoding of parameters into GA alleles, and it has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information-theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GAs on a diverse set of problems. To handle high-dimensional surfaces, in the PC method investigated here p is restricted to a product distribution, with each distribution in that product controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GAs in rate of descent, avoidance of trapping in false minima, and long-term optimization.
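A toy sketch of the product-distribution idea, not the authors' algorithm: each agent keeps an independent categorical distribution over its moves and sharpens it by Boltzmann reweighting of Monte Carlo cost estimates; the temperature and mixing rate are illustrative assumptions.

```python
import numpy as np

def pc_optimize(cost, n_agents, n_choices, iters=100, samples=200, T=1.0):
    # p[a] is agent a's categorical distribution; the joint distribution
    # over moves is the product of the per-agent distributions.
    rng = np.random.default_rng(0)
    p = np.full((n_agents, n_choices), 1.0 / n_choices)
    for _ in range(iters):
        draws = np.stack([rng.choice(n_choices, size=samples, p=p[a])
                          for a in range(n_agents)], axis=1)
        costs = np.array([cost(s) for s in draws])
        for a in range(n_agents):
            # Monte Carlo estimate of E[cost | agent a plays move m].
            est = np.array([costs[draws[:, a] == m].mean()
                            if np.any(draws[:, a] == m) else costs.mean()
                            for m in range(n_choices)])
            w = np.exp(-(est - est.min()) / T)   # Boltzmann reweighting
            p[a] = 0.9 * p[a] + 0.1 * w / w.sum()
    return p
```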
A ROOT/IO based software framework for CMS
Tanenbaum, William
2004-08-26
The implementation of persistency in the Compact Muon Solenoid (CMS) Software Framework uses the core I/O functionality of ROOT. We discuss the current ROOT/IO implementation, its evolution from the prior Objectivity/DB implementation, and the plans and ongoing work for the conversion to POOL, provided by the LHC Computing Grid (LCG) persistency project. The CMS experiment [1] is one of the four approved LHC experiments. Data taking is scheduled to begin in 2007 and will last at least ten years. The CMS software and computing task [2] will be 10-1000 times larger than that of current HEP experiments; therefore it is essential that the software be modular, flexible, and maintainable as well as provide high performance and quality. One of the technologies utilized has been a C++ based object-oriented database management system (ODBMS). Originally, the specific implementation used for object persistency was a commercial product, Objectivity/DB [3]. In 2001, it became apparent that Objectivity was not the optimal long-term solution for data persistency, and that it was necessary to abandon it on a very short time scale. A decision was made to use ROOT/IO [4] directly as a component of an interim persistency implementation. In the near future, the LHC Computing Grid persistency project will provide POOL [5] as an implementation for persistency. This paper primarily covers the conversion from Objectivity/DB to ROOT/IO; the ongoing transition to POOL is also briefly discussed.
Framework for springback compensation based on mechanical factor evaluation
NASA Astrophysics Data System (ADS)
Oya, Tetsuo; Doke, Naoyuki
2013-05-01
Springback is an inevitable phenomenon in sheet metal forming, and much research on its prediction and compensation has been presented. The use of high-strength steels is now widespread; therefore, the demand for an effective springback compensation system is increasing. In this study, a novel approach to springback compensation is presented. The proposed framework consists of a springback solver, a design system, and an optimization process. The springback solver is a finite element procedure in which a degenerated shell element is used instead of the typical shell element. This gives the designer direct access to the resultant stresses, such as the bending moment, which is a major cause of springback. With our system, mechanically reasonable springback compensation is possible, whereas the conventional compensation method uses only geometrical information, which may lead to non-realistic solutions. The authors have developed a system based on the proposed procedure to demonstrate the effectiveness of the presented strategy and applied it to several forming situations. In this paper, an overview of our approach and the latest progress are reported.
Geotube: a network-based framework for Geoscience dissemination
NASA Astrophysics Data System (ADS)
Grieco, Giovanni; Porta, Marina; Merlini, Anna Elisabetta; Caironi, Valeria; Reggiori, Donatella
2016-04-01
Geotube is a project promoted by the Il Geco cultural association for the dissemination of Geoscience education in schools through open multimedia environments. The approach is based on the following keystones: • A deep and permanent epistemological reflection supported by confrontation within the international scientific community • A close link with the territory • A local-to-global inductive approach to basic concepts in Geosciences • The construction of an open framework to stimulate creativity. The project has been developed as an educational activity for secondary schools (11- to 18-year-old students). It provides for the creation of a network of institutions to be involved in order to ensure the required diversified expertise; these can comprise universities, natural parks, mountain communities, municipalities, schools, private companies working in the sector, and so on. A single project lasts for one school year (October to June) and requires 8-12 hours of work at school, one or two half-day or full-day excursions, and a final event presenting the outputs. The possible outputs comprise a PDF or PPT guidebook, a script, and a video completely shot and edited by the students. The framework is open in order to adapt to the needs of the single class or workgroup, the level and type of school, the time available, and different subjects in Geosciences. In the last two years the two parts of the project have been successfully tested separately, while the full project will be presented to schools in its full form in April 2016, in collaboration with the University of Milan, Campo dei Fiori Natural Park, Piambello Mountain Community and Cunardo Municipality. The production of Geotube outputs has been tested in a high school for three consecutive years; students produced scripts and videos on geologic hazards, volcanoes and earthquakes, and climate change. The excursions have been tested with two different high schools. Firstly two areas have been
Semantics-Based Interoperability Framework for the Geosciences
NASA Astrophysics Data System (ADS)
Sinha, A.; Malik, Z.; Raskin, R.; Barnes, C.; Fox, P.; McGuinness, D.; Lin, K.
2008-12-01
Interoperability between heterogeneous data, tools and services is required to transform data into knowledge. To meet geoscience-oriented societal challenges, such as the forcing of climate change induced by volcanic eruptions, we suggest the need to develop semantic interoperability for data, services, and processes. Because such scientific endeavors require the integration of multiple databases associated with global enterprises, implicit semantics-based integration is impossible; instead, explicit semantics are needed to facilitate interoperability and integration. Although different types of integration models are available (syntactic or semantic), we suggest that semantic interoperability is likely to be the most successful pathway. Clearly, the geoscience community would benefit from utilizing existing XML-based data models, such as GeoSciML, WaterML, etc., to rapidly advance semantic interoperability and integration. We recognize that such integration will require a "meanings-based search, reasoning and information brokering", which will be facilitated through inter-ontology relationships (ontologies defined for each discipline). We suggest that markup languages (MLs) and ontologies can be seen as "data integration facilitators" working at different abstraction levels. Therefore, we propose to use an ontology-based data registration and discovery approach to complement markup languages through semantic data enrichment. Ontologies allow the use of formal and descriptive logic statements, which permits expressive query capabilities for data integration through reasoning. We have developed domain ontologies (EPONT) to capture the concepts behind data. EPONT ontologies are associated with existing ontologies such as SUMO, DOLCE and SWEET. Although significant effort has gone into developing data (object) ontologies, we advance the idea of developing semantic frameworks for additional ontologies that deal with processes and services. This evolutionary step will
HOLON: a Web-based framework for fostering guideline applications.
Silverman, B. G.; Moidu, K.; Clemente, B. E.; Reis, L.; Ravichandar, D.; Safran, C.
1997-01-01
HOLON is a research and development effort in extending middleware in the healthcare field to support application development, in general, and guideline applications, in particular. This framework makes use of open standards for architecture, software, guideline KBs, clinical repository models, information encodings, and intelligent system modules and agents. By pursuing the use of such standards in our middleware components, we hope eventually to maximize reusability of the HOLON framework by others who also adhere to these open standards. This research reflects lessons learned about the extensions needed in these standards if healthcare middleware frameworks are to transparently support application developers and their users over the web. PMID:9357651
A genetic-based algorithm for personalized resistance training.
Jones, N; Kiely, J; Suraci, B; Collins, D J; de Lorenzo, D; Pickering, C; Grimaldi, K A
2016-06-01
Association studies have identified dozens of genetic variants linked to training responses and sport-related traits. However, no intervention studies utilizing the idea of personalised training based on athlete's genetic profile have been conducted. Here we propose an algorithm that allows achieving greater results in response to high- or low-intensity resistance training programs by predicting athlete's potential for the development of power and endurance qualities with the panel of 15 performance-associated gene polymorphisms. To develop and validate such an algorithm we performed two studies in independent cohorts of male athletes (study 1: athletes from different sports (n = 28); study 2: soccer players (n = 39)). In both studies athletes completed an eight-week high- or low-intensity resistance training program, which either matched or mismatched their individual genotype. Two variables of explosive power and aerobic fitness, as measured by the countermovement jump (CMJ) and aerobic 3-min cycle test (Aero3) were assessed pre and post 8 weeks of resistance training. In study 1, the athletes from the matched groups (i.e. high-intensity trained with power genotype or low-intensity trained with endurance genotype) significantly increased results in CMJ (P = 0.0005) and Aero3 (P = 0.0004). Whereas, athletes from the mismatched group (i.e. high-intensity trained with endurance genotype or low-intensity trained with power genotype) demonstrated non-significant improvements in CMJ (P = 0.175) and less prominent results in Aero3 (P = 0.0134). In study 2, soccer players from the matched group also demonstrated significantly greater (P < 0.0001) performance changes in both tests compared to the mismatched group. Among non- or low responders of both studies, 82% of athletes (both for CMJ and Aero3) were from the mismatched group (P < 0.0001). Our results indicate that matching the individual's genotype with the appropriate training modality leads to more effective
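Purely illustrative arithmetic of the genotype-to-modality matching idea; the variants and scores below are hypothetical stand-ins, not the paper's validated 15-polymorphism panel or its scoring.

```python
# Hypothetical allele scores: higher totals indicate a power-oriented
# genotype (ACTN3 R577X is a commonly cited power-associated variant;
# VARIANT_X is a made-up placeholder).
POWER_SCORES = {
    ("ACTN3", "RR"): 2, ("ACTN3", "RX"): 1, ("ACTN3", "XX"): 0,
    ("VARIANT_X", "AA"): 2, ("VARIANT_X", "AB"): 1, ("VARIANT_X", "BB"): 0,
}

def recommend_training(genotype):
    # genotype: e.g. {"ACTN3": "RR", "VARIANT_X": "AB"}
    score = sum(POWER_SCORES.get((g, a), 0) for g, a in genotype.items())
    max_score = 2 * len(genotype)
    # Power-dominant genotypes are matched to high-intensity training,
    # endurance-dominant ones to low-intensity training.
    return "high-intensity" if score > max_score / 2 else "low-intensity"
```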
Du, Yigang; Fan, Rui; Li, Yong; Chen, Siping; Jensen, Jørgen Arendt
2016-07-01
An ultrasound imaging framework modeled with the first-order nonlinear pressure-velocity relations (NPVR) and implemented by a half-time staggered solution and a pseudospectral method is presented in this paper. The framework is capable of simulating linear and nonlinear ultrasound propagation and reflections in a heterogeneous medium with different sound speeds and densities. It can be initialized with arbitrary focus, excitation and apodization for multiple individual channels in both 2D and 3D spatial fields. Simulated channel data can be generated with this framework, and an ultrasound image can be obtained by beamforming the simulated channel data. Various results simulated by different algorithms are illustrated for comparison, and the root mean square (RMS) errors for each compared pulse are calculated. The linear propagation is validated by an angular spectrum approach (ASA) with an RMS error of 3% at the focal point for a 2D field, and by Field II with RMS errors of 0.8% and 1.5% at the electronic and elevation focuses for 3D fields, respectively. The accuracy of the NPVR-based nonlinear propagation is investigated by comparing with Abersim simulations for pulsed fields and with the nonlinear ASA for monochromatic fields. The RMS errors of the nonlinear pulses calculated by the NPVR and Abersim are respectively 2.4%, 7.4%, 17.6% and 36.6%, corresponding to initial pressure amplitudes of 50 kPa, 200 kPa, 500 kPa and 1 MPa at the transducer. By increasing the sampling frequency for strong nonlinearity, the RMS error for the 1 MPa initial pressure amplitude is reduced from 36.6% to 27.3%. PMID:27107165
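At the core of a pseudospectral solver like the one described is the FFT-based spatial derivative; the minimal 1D illustration below is generic and independent of the authors' NPVR implementation.

```python
import numpy as np

def spectral_derivative(f, dx):
    # Differentiate a periodic, uniformly sampled field by multiplying
    # its Fourier transform by ik and transforming back.
    k = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
```

In a staggered pressure-velocity scheme, derivatives of this kind update velocity from pressure gradients and pressure from velocity divergence at alternating half time steps.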
A Semantics-Based Information Distribution Framework for Large Web-Based Course Forum System
ERIC Educational Resources Information Center
Chim, Hung; Deng, Xiaotie
2008-01-01
We propose a novel data distribution framework for developing a large Web-based course forum system. In the distributed architectural design, each forum server is fully equipped with the ability to support some course forums independently. The forum servers collaborating with each other constitute the whole forum system. Therefore, the workload of…
NASA Astrophysics Data System (ADS)
Willgoose, G. R.
2009-12-01
One of the pioneering landform evolution models, SIBERIA, though developed in the 1980s, is still widely used in the science community and is a key component of engineering software used to assess the long-term stability of man-made landforms such as rehabilitated mine sites and nuclear waste repositories. While SIBERIA is very reliable, computationally fast and well tested (both its underlying science and the computer code), the range of emerging applications has challenged the ability of the author to maintain and extend the underlying computer code. Moreover, the architecture of the SIBERIA code is not well suited to collaborative extension of its capabilities without often triggering forking of the code base. This paper describes a new modelling framework, called TelluSim, designed to supersede SIBERIA (as well as other earth science codes by the author). By design it is potentially more than a new landform evolution model: TelluSim is a more general dynamical-system modelling framework using time-evolving GIS data as its spatial discretisation. TelluSim is designed as an open, modular framework facilitating open-sourcing of the code, while addressing compromises made in the original design of SIBERIA in the 1980s. An important aspect of the design of TelluSim was to minimise the overhead in interfacing modules with TelluSim, and to minimise any requirement for recoding existing software, so eliminating a major disadvantage of more complex frameworks. The presentation will discuss in more detail the reasoning behind the design of TelluSim, and experiences of the advantages and disadvantages of using Python relative to other approaches (e.g. Matlab, R). The paper will discuss examples of how TelluSim has facilitated the incorporation and testing of new algorithms and environmental processes, and the support for novel science and data testing methodologies. It will also discuss plans to link TelluSim with other open source
NASA Astrophysics Data System (ADS)
Wang, Weibao; Overall, Gary; Riggs, Travis; Silveston-Keith, Rebecca; Whitney, Julie; Chiu, George; Allebach, Jan P.
2013-01-01
Assessment of macro-uniformity is a capability that is important for the development and manufacture of printer products. Our goal is to develop a metric that will predict macro-uniformity, as judged by human subjects, by scanning and analyzing printed pages. We consider two different machine learning frameworks for the metric: linear regression and the support vector machine. We have implemented the image quality ruler, based on the recommendations of the INCITS W1.1 macro-uniformity team. Using 12 subjects at Purdue University and 20 subjects at Lexmark, evenly balanced with respect to gender, we conducted subjective evaluations with a set of 35 uniform b/w prints from seven different printers with five levels of tint coverage. Our results suggest that the image quality ruler method provides a reliable means to assess macro-uniformity. We then defined and implemented separate features to measure graininess, mottle, large area variation, jitter, and large-scale non-uniformity. The algorithms that we used are largely based on ISO image quality standards. Finally, we used these features computed for a set of test pages and the subjects' image quality ruler assessments of these pages to train the two different predictors - one based on linear regression and the other based on the support vector machine (SVM). Using five-fold cross-validation, we confirmed the efficacy of our predictor.
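A schematic of the two predictor types with five-fold cross-validation using scikit-learn; the feature matrix and ruler scores below are random placeholders, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# X: per-page features (graininess, mottle, large-area variation, jitter,
# large-scale non-uniformity); y: image-quality-ruler scores.
rng = np.random.default_rng(0)
X = rng.random((35, 5))        # placeholder data, 35 printed pages
y = rng.random(35) * 30        # placeholder ruler scores

for name, model in [("linear regression", LinearRegression()),
                    ("SVM (SVR)", SVR(kernel="rbf", C=10.0))]:
    mae = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name}: cross-validated MAE = {mae:.2f}")
```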
A Test Scheduling Algorithm Based on Two-Stage GA
NASA Astrophysics Data System (ADS)
Yu, Y.; Peng, X. Y.; Peng, Y.
2006-10-01
In this paper, we present a new algorithm to co-optimize core wrapper design and SOC test scheduling. The SOC test scheduling problem is first formulated as a two-dimensional floorplan problem, and a sequence-pair architecture is used to represent it. We then propose a two-stage GA (Genetic Algorithm) to solve the SOC test scheduling problem. Experiments on the ITC'02 benchmark show that our algorithm can effectively reduce test time and thus decrease SOC test cost.
Sensor based framework for secure multimedia communication in VANET.
Rahim, Aneel; Khan, Zeeshan Shafi; Bin Muhaya, Fahad T; Sher, Muhammad; Kim, Tai-Hoon
2010-01-01
Secure multimedia communication enhances the safety of passengers by providing visual pictures of accidents and dangerous situations. In this paper we propose a framework for secure multimedia communication in Vehicular Ad-Hoc Networks (VANETs). Our proposed framework is divided into four main components: redundant information, priority assignment, malicious data verification and malicious node verification. The proposed scheme has been validated with the help of the NS-2 network simulator and the Evalvid tool. PMID:22163462
Formal analysis, hardness, and algorithms for extracting internal structure of test-based problems.
Jaśkowski, Wojciech; Krawiec, Krzysztof
2011-01-01
Problems in which some elementary entities interact with each other are common in computational intelligence. This scenario, typical for coevolving artificial life agents, learning strategies for games, and machine learning from examples, can be formalized as a test-based problem and conveniently embedded in the common conceptual framework of coevolution. In test-based problems, candidate solutions are evaluated on a number of test cases (agents, opponents, examples). It has been recently shown that every test of such problem can be regarded as a separate objective, and the whole problem as multi-objective optimization. Research on reducing the number of such objectives while preserving the relations between candidate solutions and tests led to the notions of underlying objectives and internal problem structure, which can be formalized as a coordinate system that spatially arranges candidate solutions and tests. The coordinate system that spans the minimal number of axes determines the so-called dimension of a problem and, being an inherent property of every problem, is of particular interest. In this study, we investigate in-depth the formalism of a coordinate system and its properties, relate them to properties of partially ordered sets, and design an exact algorithm for finding a minimal coordinate system. We also prove that this problem is NP-hard and come up with a heuristic which is superior to the best algorithm proposed so far. Finally, we apply the algorithms to three abstract problems and demonstrate that the dimension of the problem is typically much lower than the number of tests, and for some problems converges to the intrinsic parameter of the problem--its a priori dimension. PMID:21815770
Rainfall Estimation From The Persiann Satellite-based Algorithm
NASA Astrophysics Data System (ADS)
Hsu, K.; Sorooshian, S.; Gao, X.; Gupta, H.; Imam, B.
Satellite-based rainfall estimates are important for many regions of the world where ground-based measurements are not well established and where continuous sensing is required. For years, many algorithms using geostationary satellite infrared imagery were developed. However, because cloud-top temperatures do not correspond well to surface rainfall at the pixel level, rainfall retrievals from algorithms developed using pixel-by-pixel relationships are less accurate at high spatial-temporal scales. In this study, a rainfall estimation system named PERSIANN (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks) is introduced. This system uses neural network function classification/approximation procedures to compute an estimate of rainfall rate at each 0.25° × 0.25° pixel of the infrared brightness temperature image provided by geostationary satellites. More effective features were included in the input by scanning the infrared pixel array with a 5 × 5 moving window surrounding an estimation pixel. Five statistics, including the means and standard deviations of various window temperatures, were extracted. Further, a classification scheme named the self-organizing feature map was used to classify those five features into a large number of rain/no-rain groups associated with different cloud characteristics. For each group, a multivariate linear function relates the values of the input features to the output rain rate at 30-minute intervals. One additional feature of the PERSIANN system is that the system parameters are routinely adjustable from limited observations, such as passive microwave TRMM TMI and DMSP SSM/I rainfall rates and ground-based radar/gauge observations; therefore, updated rainfall estimates are continually provided. The PERSIANN system is currently in operation, and global six-hour rainfall products (50°S-50°N) are available through Hydrological Data and
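A simplified illustration of the windowed input features (the pixel temperature plus the means and standard deviations of 3 × 3 and 5 × 5 neighborhoods); PERSIANN's exact feature definitions may differ.

```python
import numpy as np

def window_features(tb, i, j):
    # tb: 2D array of IR brightness temperatures; (i, j) must be at least
    # two pixels from the image border so the 5x5 window fits.
    feats = [tb[i, j]]                       # pixel temperature itself
    for half in (1, 2):                      # 3x3 then 5x5 window
        w = tb[i - half:i + half + 1, j - half:j + half + 1]
        feats.extend([w.mean(), w.std()])
    return np.array(feats)                   # feature vector per pixel
```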
Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew
2015-01-01
Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing- and data-intensive, and data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques leveraging cloud computing, MapReduce, and Service Oriented Architecture (SOA) are proposed. Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data, and a service-oriented workflow architecture is built to support on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012
A pipelined FPGA implementation of an encryption algorithm based on genetic algorithm
NASA Astrophysics Data System (ADS)
Thirer, Nonel
2013-05-01
With the evolution of digital data storage and exchange, it is essential to protect confidential information from any unauthorized access. High-performance encryption algorithms have been developed and implemented in software and hardware, and many methods of attacking ciphertext have also been developed. In recent years, the genetic algorithm has gained much interest in the cryptanalysis of ciphertexts and also in encryption ciphers. This paper analyses the possibility of using the genetic algorithm as a multiple key sequence generator for an AES (Advanced Encryption Standard) cryptographic system, and of using a three-stage pipeline (with four main blocks: input data, AES core, key generator, output data) to provide fast encryption and storage/transmission of large amounts of data.
A New Class of Priority-based Weighted Fair Scheduling Algorithm
NASA Astrophysics Data System (ADS)
Yang, Li; Pan, ChengSheng; Zhang, ErHan; Liu, HaiYan
Traditional fair queuing scheduling algorithms (WFQ, WF2Q) are fair and efficient for data applications, but for real-time applications such as voice and interactive video they cannot guarantee strict delay bounds. In view of this, we propose a weighted fair scheduling algorithm based on a strict preemptive priority class: the algorithm adds an absolute priority queue on top of class-based weighted fair queuing (CBWFQ), and the network simulator NS2 is extended accordingly. Compared with the traditional algorithms, the simulation results show that the new algorithm improves delay, fairness and other network performance measures at the same throughput; that is, it guarantees the quality of service of real-time applications while also guaranteeing fair transmission of other traffic.
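A toy sketch of the combined scheduler, not the paper's exact algorithm: an absolute-priority queue is drained first, and the remaining classes are served by WFQ virtual finish times F = max(F_prev, V) + L/w, with simplified virtual-time bookkeeping.

```python
import heapq

class PriorityWfqScheduler:
    def __init__(self, weights):
        self.weights = weights                  # class -> WFQ weight
        self.finish = {c: 0.0 for c in weights} # last finish time per class
        self.rt_queue = []                      # absolute-priority (real-time)
        self.wfq_heap = []                      # (finish_time, tiebreak, pkt)
        self.vtime = 0.0                        # virtual time V

    def enqueue(self, pkt, cls, realtime=False):
        if realtime:
            self.rt_queue.append(pkt)
        else:
            f = max(self.finish[cls], self.vtime) + pkt["len"] / self.weights[cls]
            self.finish[cls] = f
            heapq.heappush(self.wfq_heap, (f, id(pkt), pkt))

    def dequeue(self):
        if self.rt_queue:                       # strict priority first
            return self.rt_queue.pop(0)
        if self.wfq_heap:
            f, _, pkt = heapq.heappop(self.wfq_heap)
            self.vtime = f                      # simplified virtual-time advance
            return pkt
        return None
```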
NASA Astrophysics Data System (ADS)
Dong, Ming; He, David
2007-07-01
Diagnostics and prognostics are two important aspects in a condition-based maintenance (CBM) program. However, these two tasks are often separately performed. For example, data might be collected and analysed separately for diagnosis and prognosis. This practice increases the cost and reduces the efficiency of CBM and may affect the accuracy of the diagnostic and prognostic results. In this paper, a statistical modelling methodology for performing both diagnosis and prognosis in a unified framework is presented. The methodology is developed based on segmental hidden semi-Markov models (HSMMs). An HSMM is a hidden Markov model (HMM) with temporal structures. Unlike HMM, an HSMM does not follow the unrealistic Markov chain assumption and therefore provides more powerful modelling and analysis capability for real problems. In addition, an HSMM allows modelling the time duration of the hidden states and therefore is capable of prognosis. To facilitate the computation in the proposed HSMM-based diagnostics and prognostics, new forward-backward variables are defined and a modified forward-backward algorithm is developed. The existing state duration estimation methods are inefficient because they require a huge storage and computational load. Therefore, a new approach is proposed for training HSMMs in which state duration probabilities are estimated on the lattice (or trellis) of observations and states. The model parameters are estimated through the modified forward-backward training algorithm. The estimated state duration probability distributions combined with state-changing point detection can be used to predict the useful remaining life of a system. The evaluation of the proposed methodology was carried out through a real world application: health monitoring of hydraulic pumps. In the tests, the recognition rates for all states are greater than 96%. For each individual pump, the recognition rate is increased by 29.3% in comparison with HMMs. Because of the temporal
HyDE Framework for Stochastic and Hybrid Model-Based Diagnosis
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Brownston, Lee
2012-01-01
Hybrid Diagnosis Engine (HyDE) is a general framework for stochastic and hybrid model-based diagnosis that offers flexibility to the diagnosis application designer. The HyDE architecture supports the use of multiple modeling paradigms at the component and system level. Several alternative algorithms are available for the various steps in diagnostic reasoning. This approach is extensible, with support for the addition of new modeling paradigms as well as diagnostic reasoning algorithms for existing or new modeling paradigms. HyDE is a general framework for stochastic hybrid model-based diagnosis of discrete faults; that is, spontaneous changes in operating modes of components. HyDE combines ideas from consistency-based and stochastic approaches to model- based diagnosis using discrete and continuous models to create a flexible and extensible architecture for stochastic and hybrid diagnosis. HyDE supports the use of multiple paradigms and is extensible to support new paradigms. HyDE generates candidate diagnoses and checks them for consistency with the observations. It uses hybrid models built by the users and sensor data from the system to deduce the state of the system over time, including changes in state indicative of faults. At each time step when observations are available, HyDE checks each existing candidate for continued consistency with the new observations. If the candidate is consistent, it continues to remain in the candidate set. If it is not consistent, then the information about the inconsistency is used to generate successor candidates while discarding the candidate that was inconsistent. The models used by HyDE are similar to simulation models. They describe the expected behavior of the system under nominal and fault conditions. The model can be constructed in modular and hierarchical fashion by building component/subsystem models (which may themselves contain component/ subsystem models) and linking them through shared variables/parameters. The
Performance-based seismic design of steel frames utilizing colliding bodies algorithm.
Veladi, H
2014-01-01
A pushover analysis method based on semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to the conventional design methods to show the power or weakness of the algorithm. PMID:25202717
A MATLAB GUI based algorithm for modelling Magnetotelluric data
NASA Astrophysics Data System (ADS)
Timur, Emre; Onsen, Funda
2016-04-01
The magnetotelluric method is an electromagnetic survey technique that images the electrical resistivity distribution of subsurface layers. It simultaneously measures the total electromagnetic field components, namely the time-varying magnetic field B(t) and the induced electric field E(t). Forward modeling of the magnetotelluric method is beneficial for survey planning, for understanding the method (especially for students), and as part of the iteration process in inverting measured data. The MTINV program can be used to model and interpret geophysical electromagnetic (EM) magnetotelluric (MT) measurements using a horizontally layered earth model. The program uses either the apparent resistivity and phase components of the MT data together, or the apparent resistivity data alone. Parameter optimization, based on a linearized inversion method, can be utilized in 1D interpretations. In this study, a new MATLAB GUI-based algorithm has been written for the 1D forward modeling of the magnetotelluric response function for multiple layers, for use in educational studies. The code also includes an automatic Gaussian noise option with a user-specified ratio. Numerous applications were carried out for 2-, 3- and 4-layer models, and the resulting theoretical data were interpreted using MTINV in order to evaluate the initial parameters and the effect of noise. Keywords: Education, Forward Modelling, Inverse Modelling, Magnetotelluric
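The layered-earth response such a program computes can be reproduced with the standard 1D MT impedance recursion; the sketch below is in Python rather than MATLAB and is independent of the MTINV code.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def mt1d_forward(rho, h, freqs):
    # rho: layer resistivities (ohm-m, top layer to bottom half-space);
    # h: thicknesses (m) of the upper len(rho)-1 layers; freqs in Hz.
    rho_a, phase = [], []
    for f in freqs:
        w = 2.0 * np.pi * f
        Z = np.sqrt(1j * w * MU0 * rho[-1])       # bottom half-space impedance
        for j in range(len(h) - 1, -1, -1):       # recurse upward
            k = np.sqrt(1j * w * MU0 / rho[j])
            Z0 = 1j * w * MU0 / k                 # intrinsic layer impedance
            R = (Z0 - Z) / (Z0 + Z)
            e = np.exp(-2.0 * k * h[j])
            Z = Z0 * (1.0 - R * e) / (1.0 + R * e)
        rho_a.append(abs(Z) ** 2 / (w * MU0))     # apparent resistivity
        phase.append(np.degrees(np.angle(Z)))     # impedance phase
    return np.array(rho_a), np.array(phase)
```

Gaussian noise at a requested ratio can then be added as, e.g., rho_a * (1 + ratio * np.random.randn(rho_a.size)).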
SIFT algorithm-based 3D pose estimation of femur.
Zhang, Xuehe; Zhu, Yanhe; Li, Changle; Zhao, Jie; Li, Ge
2014-01-01
To address the lack of 3D spatial information in the digital radiography of a patient's femur, a pose estimation method based on 2D-3D rigid registration is proposed in this study. The method uses two digital radiography images to realize preoperative 3D visualization of a fractured femur. Compared with pure digital radiography or computed tomography imaging diagnostic methods, the proposed method has the advantages of low cost, high precision, and minimal harmful radiation. First, stable matching point pairs between the frontal and lateral images of the patient femur and the universal femur are obtained using the Scale Invariant Feature Transform (SIFT) method. Then, the 3D pose estimation registration parameters of the femur are calculated using the Iterative Closest Point (ICP) algorithm. Finally, the deviation between the six-degree-of-freedom parameters calculated by the proposed method and the preset posture parameters is used to evaluate registration accuracy. After registration, the rotation error is less than 1.5° and the translation error is less than 1.2 mm, which indicates that the proposed method has high precision and robustness. The proposed method provides 3D image information for effective preoperative orthopedic diagnosis and surgery planning. PMID:25226990
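The rigid-body update inside each ICP iteration is commonly computed with the SVD-based (Kabsch) least-squares solution sketched below; this is the generic step, not the authors' specific code.

```python
import numpy as np

def rigid_transform(P, Q):
    # Least-squares rotation R (3x3) and translation t (3-vector)
    # minimizing sum ||R @ P[i] + t - Q[i]||^2 over point pairs.
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp
```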
A Framework for Geographic Object-Based Image Analysis (GEOBIA) based on geographic ontology
NASA Astrophysics Data System (ADS)
Gu, H. Y.; Li, H. T.; Yan, L.; Lu, X. J.
2015-06-01
GEOBIA (Geographic Object-Based Image Analysis) is not only a hot topic in current remote sensing and geographic research; it is also regarded as a paradigm in remote sensing and GIScience. The lack of a systematic approach for conceptualizing and formalizing class definitions makes GEOBIA a highly subjective and difficult method to reproduce. This paper puts forward a framework for GEOBIA based on geographic ontology theory that realizes the "geographic entities - image objects - geographic objects" correspondence. It consists of three steps: first, geographic entities are described by a geographic ontology; second, a semantic network model is built based on OWL (Web Ontology Language); finally, geographic objects are classified with decision rules or other classifiers. A case study of a farmland ontology is presented to illustrate the framework. The strength of this framework is that it provides an interpretation strategy and a global, objective, and universal framework for GEOBIA, which avoids the inconsistencies caused by differing expert experience and provides an objective model for image analysis.
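The final classification step can be pictured as rule evaluation over a small semantic network. The Python sketch below is a schematic stand-in for the ontology-driven stage: the class names, attribute names, and thresholds are illustrative assumptions, not the paper's farmland ontology, and a real system would derive the rules from the OWL model rather than a hard-coded dictionary.

```python
# Tiny semantic network mapping geographic classes to attribute constraints,
# followed by rule-based labelling of segmented image objects.
farmland_ontology = {
    "Farmland": {"ndvi_min": 0.3, "shape_index_max": 2.5},
    "Water":    {"ndvi_max": 0.0},
    "BuiltUp":  {"ndvi_max": 0.2, "brightness_min": 120.0},
}

def classify_object(obj):
    """Assign a geographic class to an image object given as a feature dict."""
    for label, rules in farmland_ontology.items():
        ok = True
        for rule, value in rules.items():
            feat, bound = rule.rsplit("_", 1)
            if bound == "min" and obj.get(feat, float("-inf")) < value:
                ok = False
            if bound == "max" and obj.get(feat, float("inf")) > value:
                ok = False
        if ok:
            return label
    return "Unclassified"

print(classify_object({"ndvi": 0.55, "shape_index": 1.8}))  # -> "Farmland"
```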
A Visual mining based framework for classification accuracy estimation
NASA Astrophysics Data System (ADS)
Arun, Pattathal Vijayakumar
2013-12-01
Classification techniques have been widely used in different remote sensing applications, and the correct classification of mixed pixels remains a challenging task. Traditional approaches adopt various statistical parameters but do not facilitate effective visualisation. Data mining tools are proving very helpful in the classification process. We propose a visual mining based framework for the accuracy assessment of classification techniques using open source tools such as WEKA and PREFUSE. In integration, these tools provide an efficient approach for obtaining information about improvements in classification accuracy and help in refining the training data set. We illustrate the framework by investigating the effects of various resampling methods on classification accuracy and find that bilinear (BL) resampling is best suited for preserving radiometric characteristics. We also investigate the optimal number of folds required for effective analysis of LISS-IV images.
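A quick way to see why the resampling method matters for radiometry is a down/up-sampling round trip. The toy experiment below compares interpolation orders on a synthetic band; it is an illustrative assumption (random data, RMSE as a distortion proxy), not the paper's WEKA/PREFUSE workflow or its LISS-IV data.

```python
import numpy as np
from scipy.ndimage import zoom

# order=0 is nearest neighbour, order=1 is bilinear (BL), order=3 is cubic.
# RMSE against the original band is a simple proxy for radiometric distortion.
rng = np.random.default_rng(0)
band = rng.uniform(0, 255, size=(200, 200))  # stand-in for a single image band
for order, name in [(0, "nearest"), (1, "bilinear"), (3, "cubic")]:
    down = zoom(band, 0.5, order=order)
    up = zoom(down, 2.0, order=order)
    rmse = np.sqrt(np.mean((band[:up.shape[0], :up.shape[1]] - up) ** 2))
    print(f"{name:9s} RMSE = {rmse:.2f}")
```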
Silicon Framework-Based Lithium Silicides at High Pressures.
Zhang, Shoutao; Wang, Yanchao; Yang, Guochun; Ma, Yanming
2016-07-01
The bandgap and optical properties of diamond silicon (Si) are not suitable for many advanced applications such as thin-film photovoltaic devices and light-emitting diodes. Thus, finding new Si allotropes with better bandgap and optical properties is desirable. Recently, a Si allotrope with a desirable bandgap of ∼1.3 eV was obtained by leaching Na from NaSi6 that was synthesized under high pressure [Nat. Mater. 2015, 14, 169], paving the way to finding new Si allotropes. Li is isoelectronic with Na, with a smaller atomic core and comparable electronegativity. Whether Li silicides share similar properties is unknown but of considerable interest. Here, a swarm intelligence-based structural prediction is used in combination with first-principles calculations to investigate the chemical reactions between Si and Li at high pressures, where seven new compositions (LiSi4, LiSi3, LiSi2, Li2Si3, Li2Si, Li3Si, and Li4Si) become stable above 8.4 GPa. The Si-Si bonding patterns in these compounds evolve with increasing Li content sequentially from frameworks to layers, linear chains, and eventually isolated Si ions. Nearest-neighbor Si atoms in Cmmm-structured LiSi4 form covalent open channels hosting one-dimensional Li atom chains, with structural features similar to those of NaSi6. The analysis of integrated crystal orbital Hamilton populations reveals that the Si-Si interactions are mainly responsible for the structural stability. Moreover, this structure is dynamically stable even at ambient pressure. Our results are also important for understanding the structures and electronic properties of Li-Si binary compounds at high pressures. PMID:27302244
Mutation-Based Artificial Fish Swarm Algorithm for Bound Constrained Global Optimization
NASA Astrophysics Data System (ADS)
Rocha, Ana Maria A. C.; Fernandes, Edite M. G. P.
2011-09-01
The mutation-based artificial fish swarm (AFS) algorithm presented here includes mutation operators that prevent the algorithm from falling into local solutions, diversify the search, and accelerate convergence to the global optimum. Three mutation strategies are introduced into the AFS algorithm to define the trial points that emerge from the random, leaping, and searching behaviors. Computational results show that the new algorithm outperforms other well-known global stochastic solution methods.
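The role of a mutation operator in such a swarm method is easy to illustrate. The sketch below shows one generic coordinate-wise mutation of a trial point in Python; it captures the idea of mixing random restarts with leap-like moves toward the current best, but the function name, the probabilities, and the two strategies are assumptions for illustration and differ in detail from the paper's three strategies.

```python
import numpy as np

def mutate_trial_point(x, best, bounds, p_mut=0.2, rng=None):
    """Illustrative mutation of a swarm trial point.

    With probability p_mut each coordinate is either reset uniformly in the
    search box (diversification) or pulled toward the current best point
    (a leap-like intensification move).
    """
    rng = rng if rng is not None else np.random.default_rng()
    low, high = bounds
    y = x.copy()
    for d in range(len(x)):
        if rng.random() < p_mut:
            if rng.random() < 0.5:
                y[d] = rng.uniform(low[d], high[d])            # random restart of one coordinate
            else:
                y[d] = x[d] + rng.random() * (best[d] - x[d])  # move toward the best point
    return np.clip(y, low, high)
```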
An Improved Direction Finding Algorithm Based on Toeplitz Approximation
Wang, Qing; Chen, Hua; Zhao, Guohuang; Chen, Bin; Wang, Pichao
2013-01-01
In this paper, a novel direction of arrival (DOA) estimation algorithm called the Toeplitz fourth-order cumulants multiple signal classification (TFOC-MUSIC) algorithm is proposed, combining a fast MUSIC-like algorithm termed the modified fourth-order cumulants MUSIC (MFOC-MUSIC) algorithm with Toeplitz approximation. In the proposed algorithm, the redundant information in the cumulants is removed. Moreover, the computational complexity is reduced owing to the decreased dimension of the fourth-order cumulants (FOC) matrix, which equals the number of virtual array elements; that is, the effective array aperture of the physical array remains unchanged. However, because of the finite number of sampling snapshots, the reduced-rank FOC matrix carries an estimation error, and the DOA estimation performance therefore degrades. To improve the estimation performance, Toeplitz approximation is introduced to recover the Toeplitz structure of the reduced-dimension FOC matrix, matching the ideal matrix, which is Toeplitz and yields optimal estimates. The theoretical formulas of the proposed algorithm are derived, and simulation results are presented. From the simulations, in comparison with the MFOC-MUSIC algorithm, it is concluded that the TFOC-MUSIC algorithm yields excellent performance in both spatially white and spatially colored noise environments. PMID:23296331
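The two building blocks, Toeplitz structure recovery and the MUSIC spectrum search, can be sketched compactly. The Python fragment below uses a second-order covariance matrix as a simplified stand-in for the paper's fourth-order cumulants matrix; the diagonal-averaging projection and the classic MUSIC pseudo-spectrum are standard, but the function names and the uniform-linear-array assumption are illustrative.

```python
import numpy as np

def toeplitz_approximation(R):
    """Project a square matrix onto Toeplitz structure by averaging each
    diagonal - the structure-recovery step applied before MUSIC."""
    n = R.shape[0]
    T = np.zeros_like(R)
    for k in range(-(n - 1), n):
        d = np.mean(np.diagonal(R, k))
        idx = np.arange(max(0, -k), min(n, n - k))
        T[idx, idx + k] = d
    return T

def music_spectrum(R, n_sources, angles, d=0.5):
    """Classic MUSIC pseudo-spectrum for a uniform linear array
    (element spacing d in wavelengths, angles in degrees)."""
    n = R.shape[0]
    w, v = np.linalg.eigh(R)                 # eigenvalues ascending
    En = v[:, : n - n_sources]               # noise subspace
    p = []
    for theta in angles:
        a = np.exp(-2j * np.pi * d * np.arange(n) * np.sin(np.radians(theta)))
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(p)
```

A typical use would be `music_spectrum(toeplitz_approximation(R_hat), n_sources, np.linspace(-90, 90, 361))`, with peaks of the pseudo-spectrum taken as DOA estimates.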
Biased Randomized Algorithm for Fast Model-Based Diagnosis
NASA Technical Reports Server (NTRS)
Williams, Colin; Vartan, Farrokh
2005-01-01
A biased randomized algorithm has been developed to enable the rapid computational solution of a propositional-satisfiability (SAT) problem equivalent to a diagnosis problem. The closest competing methods of automated diagnosis are described in the preceding article "Fast Algorithms for Model-Based Diagnosis" and in "Two Methods of Efficient Solution of the Hitting-Set Problem" (NPO-30584), which appears elsewhere in this issue. It is necessary to recapitulate some of the information from the cited articles as a prerequisite to a description of the present method. As used here, "diagnosis" signifies, more precisely, a type of model-based diagnosis in which one explores any logical inconsistencies between the observed and expected behaviors of an engineering system. The function of each component and the interconnections among all the components of the engineering system are represented as a logical system. Hence, the expected behavior of the engineering system is represented as a set of logical consequences. Faulty components lead to inconsistency between the observed and expected behaviors of the system, represented by logical inconsistencies. Diagnosis - the task of finding the faulty components - reduces to finding the components whose abnormalities could explain all the logical inconsistencies. One seeks a minimal set of faulty components (denoted a minimal diagnosis), because the trivial solution, in which all components are deemed to be faulty, always explains all inconsistencies. In the methods of the cited articles, the minimal-diagnosis problem is treated as equivalent to a minimal-hitting-set problem, which is translated from a combinatorial to a computational problem by mapping it onto the Boolean-satisfiability and integer-programming problems. The integer-programming approach taken in one of the prior methods is complete (in the sense that it is guaranteed to find a solution if one exists) and slow and yields a lower bound on the size of the
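The minimal-hitting-set formulation mentioned above is concrete enough to demonstrate directly. The brute-force Python sketch below finds a minimum-cardinality hitting set over a toy conflict set; it is illustrative only (the article's point is precisely that randomized SAT-based methods scale where exhaustive search does not), and the example conflicts are hypothetical.

```python
from itertools import combinations

def minimal_diagnosis(conflicts, components):
    """Exhaustive search for a minimum-cardinality hitting set.

    conflicts  : list of sets; each set names components that cannot all be
                 healthy given the observations
    components : iterable of all component names
    Returns the smallest set of components intersecting every conflict,
    i.e. a minimal diagnosis.
    """
    comps = sorted(components)
    for size in range(1, len(comps) + 1):
        for cand in combinations(comps, size):
            s = set(cand)
            if all(s & c for c in conflicts):
                return s
    return set(comps)  # trivial diagnosis: everything faulty

# Toy example: two conflicts over four components; B alone explains both.
print(minimal_diagnosis([{"A", "B"}, {"B", "C"}], ["A", "B", "C", "D"]))  # {'B'}
```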
Block clustering based on difference of convex functions (DC) programming and DC algorithms.
Le, Hoai Minh; Le Thi, Hoai An; Dinh, Tao Pham; Huynh, Van Ngai
2013-10-01
We investigate difference of convex functions (DC) programming and the DC algorithm (DCA) to solve the block clustering problem in the continuous framework, which traditionally requires solving a hard combinatorial optimization problem. DC reformulation techniques and exact penalty in DC programming are developed to build an appropriate equivalent DC program of the block clustering problem. They lead to an elegant and explicit DCA scheme for the resulting DC program. Computational experiments show the robustness and efficiency of the proposed algorithm and its superiority over standard algorithms such as two-mode K-means, two-mode fuzzy clustering, and block classification EM. PMID:23777526
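The DCA iteration itself is simple to illustrate, independent of the block clustering application. The one-dimensional toy below decomposes a nonconvex double-well into a difference of convex functions and runs the standard DCA step; the decomposition and function names are illustrative assumptions, not the paper's DC program for block clustering.

```python
import numpy as np

# Toy DC function f(x) = g(x) - h(x) with g(x) = x**4 and h(x) = 4*x**2,
# i.e. the double-well x**4 - 4*x**2 with global minima at +/- sqrt(2).
# Each DCA step linearizes h at x_k and minimizes the convex surrogate:
#     x_{k+1} = argmin_x  g(x) - h'(x_k) * x
# For g(x) = x**4 the surrogate minimizer is closed-form: x = cbrt(y / 4).
def dca(x0, n_iter=30):
    x = x0
    for _ in range(n_iter):
        y = 8.0 * x              # (sub)gradient of h at the current iterate
        x = np.cbrt(y / 4.0)     # exact minimizer of x**4 - y*x
    return x

print(dca(0.5))  # converges to sqrt(2) ~ 1.4142
```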
An automatic image inpainting algorithm based on FCM.
Liu, Jiansheng; Liu, Hui; Qiao, Shangping; Yue, Guangxue
2014-01-01
Many existing image inpainting algorithms require the area to be repaired to be specified manually by the user. To address this drawback of traditional image inpainting algorithms, this paper proposes an automatic image inpainting algorithm that identifies the repair area with the fuzzy C-means (FCM) algorithm. FCM classifies the image pixels into a number of categories according to a similarity principle, clustering similar pixels into the same category as far as possible. Given the gray value of the pixels to be inpainted, the category whose center is closest to that value is selected as the inpainting area, and the area is then restored with the TV model to realize automatic image inpainting. PMID:24516358
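The mask-building step is the essence of the method and fits in a few lines. The Python sketch below implements a compact FCM on gray values and derives the repair mask from the cluster nearest the specified gray level; note that OpenCV's Telea inpainting stands in for the TV model used in the paper, and the function names and parameters are illustrative assumptions.

```python
import numpy as np
import cv2

def fcm_gray(values, n_clusters=4, m=2.0, n_iter=50, seed=0):
    """Fuzzy C-means on 1D gray values; returns cluster centers and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(values), n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ values) / um.sum(axis=0)          # weighted cluster means
        dist = np.abs(values[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / (dist ** (2 / (m - 1)))                   # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

def auto_inpaint(img, target_gray):
    """Build the repair mask from the FCM cluster nearest the given gray value,
    then inpaint (Telea as a stand-in for the TV model)."""
    vals = img.reshape(-1).astype(float)
    centers, u = fcm_gray(vals)
    k = int(np.argmin(np.abs(centers - target_gray)))       # cluster nearest the damage gray level
    mask = (u.argmax(axis=1) == k).reshape(img.shape).astype(np.uint8) * 255
    return cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
```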
Android platform based smartphones for a logistical remote association repair framework.
Lien, Shao-Fan; Wang, Chun-Chieh; Su, Juhng-Perng; Chen, Hong-Ming; Wu, Chein-Hsing
2014-01-01
The maintenance of large-scale systems is an important issue for logistics support planning. In this paper, we developed a Logistical Remote Association Repair Framework (LRARF) to aid repairmen in keeping the system available. LRARF includes four subsystems: smart mobile phones, a Database Management System (DBMS), a Maintenance Support Center (MSC) and wireless networks. The repairman uses smart mobile phones to capture QR-codes and the images of faulty circuit boards. The captured QR-codes and images are transmitted to the DBMS so the invalid modules can be recognized via the proposed algorithm. In this paper, the Linear Projective Transform (LPT) is employed for fast QR-code calibration. Moreover, the ANFIS-based data mining system is used for module identification and searching automatically for the maintenance manual corresponding to the invalid modules. The inputs of the ANFIS-based data mining system are the QR-codes and image features; the output is the module ID. DBMS also transmits the maintenance manual back to the maintenance staff. If modules are not recognizable, the repairmen and center engineers can obtain the relevant information about the invalid modules through live video. The experimental results validate the applicability of the Android-based platform in the recognition of invalid modules. In addition, the live video can also be recorded synchronously on the MSC for later use. PMID:24967603
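The QR-code capture step on the smartphone/DBMS side can be sketched with OpenCV's built-in detector. This is a generic stand-in for the paper's LPT-calibrated pipeline, not its implementation; the image file name is hypothetical.

```python
import cv2

img = cv2.imread("board_photo.jpg")          # hypothetical photo of a faulty circuit board
detector = cv2.QRCodeDetector()
module_id, corners, _ = detector.detectAndDecode(img)
if module_id:
    print("QR payload (module ID candidate):", module_id)
    # The corner coordinates could drive a perspective rectification
    # analogous to the paper's Linear Projective Transform calibration.
else:
    print("No QR code found; fall back to image features / live video.")
```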
Lederman, Dror
2011-01-01
In this paper, the problem of endotracheal intubation confirmation is addressed. Endotracheal intubation is a complex procedure that requires a high level of skill and the use of secondary confirmation devices to ensure correct positioning of the tube. A novel confirmation approach, based on video image classification, is introduced. The approach identifies specific anatomical landmarks, including the esophagus, the upper trachea, and the main bifurcation of the trachea into the two primary bronchi (the "carina"), as indicators of correct or incorrect tube insertion and positioning. Classification of the images is performed using a parallel Gaussian mixture model (GMM) framework, composed of several GMMs connected schematically in parallel, where each GMM represents a different imaging angle. The performance of the proposed approach was evaluated using a dataset of cow-intubation videos and a dataset of human-intubation videos. Each video image was manually (visually) classified by a medical expert into one of three categories: upper-tracheal intubation, correct (carina) intubation, and esophageal intubation. The image classification algorithm was applied off-line using a leave-one-case-out method. The results show that the system correctly classified 1517 out of 1600 (94.8%) of the cow-intubation images and 340 out of 358 (95.0%) of the human images. The classification results compared favorably with a "standard" GMM approach utilizing texture-based features, as well as with a state-of-the-art classification method tested on the cow-intubation dataset. PMID:20878236
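The parallel-GMM decision rule, scoring a frame against a bank of class-specific mixture models and taking the most likely class, can be sketched with scikit-learn. This is a generic stand-in under stated assumptions (one GMM per class rather than per imaging angle, hypothetical feature vectors and labels), not the authors' framework.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

class ParallelGMMClassifier:
    """Bank of GMMs, one per class; a frame's feature vector is assigned to
    the class whose GMM gives the highest log-likelihood."""

    def __init__(self, n_components=4):
        self.n_components = n_components
        self.models = {}

    def fit(self, features, labels):
        for c in np.unique(labels):
            gmm = GaussianMixture(n_components=self.n_components, random_state=0)
            gmm.fit(features[labels == c])
            self.models[c] = gmm
        return self

    def predict(self, features):
        classes = sorted(self.models)
        scores = np.column_stack(
            [self.models[c].score_samples(features) for c in classes])
        return np.array(classes)[scores.argmax(axis=1)]

# Hypothetical usage with frame features and labels
# (0 = upper trachea, 1 = carina/correct, 2 = esophagus):
# clf = ParallelGMMClassifier().fit(train_feats, train_labels)
# preds = clf.predict(test_feats)
```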
An Active Sensor Algorithm for Corn Nitrogen Recommendations Based on a Chlorophyll Meter Algorithm
Technology Transfer Automated Retrieval System (TEKTRAN)
In previous work we found active canopy sensor reflectance assessments of corn (Zea mays L.) N status acquired at two growth stages (V11 and V15) have the greatest potential for directing in-season N applications, but emphasized an algorithm was needed to translate sensor readings into appropriate N...