Least significant qubit algorithm for quantum images
NASA Astrophysics Data System (ADS)
Sang, Jianzhi; Wang, Shen; Li, Qiong
2016-11-01
To study the feasibility of porting the classical image least significant bit (LSB) information hiding algorithm to a quantum computer, a least significant qubit (LSQb) information hiding algorithm for quantum images is proposed. In this paper, we focus on the novel quantum representation for color digital images (NCQI). Firstly, by designing a three-qubit comparator and unitary operators, the reasonability and feasibility of LSQb based on NCQI are presented. Then, the concrete LSQb information hiding algorithm is proposed, which embeds the secret qubits into the least significant qubits of the RGB channels of the quantum cover image. The quantum circuit of the LSQb information hiding algorithm is also illustrated. Furthermore, the secret-extraction algorithm and its circuit are illustrated using controlled-swap gates. Our algorithm has two merits: (1) it is fully blind, and (2) extracting the secret qubits requires no quantum measurement operation or other help from a classical computer. Finally, simulation and comparative analysis demonstrate the performance of our algorithm.
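To make the classical starting point concrete, here is a minimal sketch of the LSB embed/extract scheme the paper transplants to qubits. This is the classical analogue only, not the quantum circuit; the function names and sample values are illustrative.

```python
def embed_lsb(pixels, bits):
    """Embed one secret bit into the least significant bit of each
    channel value (the classical analogue of LSQb embedding)."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b   # clear the LSB, then set it to the secret bit
    return out

def extract_lsb(pixels, n):
    """Blind extraction: read back the n least significant bits.
    No reference to the original cover is needed."""
    return [p & 1 for p in pixels[:n]]

cover = [200, 17, 96, 255, 34, 129]   # e.g. flattened RGB channel values
secret = [1, 0, 1, 1, 0, 0]
stego = embed_lsb(cover, secret)
recovered = extract_lsb(stego, len(secret))
```

The "blind" property claimed for the quantum version corresponds here to `extract_lsb` needing only the stego values, never the cover.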
Algorithm performance evaluation
NASA Astrophysics Data System (ADS)
Smith, Richard N.; Greci, Anthony M.; Bradley, Philip A.
1995-03-01
Traditionally, the performance of adaptive antenna systems is measured using automated antenna array pattern measuring equipment. This measurement equipment produces a plot of the receive gain of the antenna array as a function of angle. However, communications system users more readily accept and understand bit error rate (BER) as a performance measure. The work reported on here was conducted to characterize adaptive antenna receiver performance in terms of overall communications system performance using BER as a performance measure. The adaptive antenna system selected for this work featured a linear array, least mean square (LMS) adaptive algorithm and a high speed phase shift keyed (PSK) communications modem.
Belief network algorithms: A study of performance
Jitnah, N.
1996-12-31
We present a survey of belief network algorithms and propose a domain-characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.
Discovering sequence similarity by the algorithmic significance method
Milosavljevic, A.
1993-02-01
The minimal-length encoding approach is applied to define the concept of sequence similarity. A sequence is defined to be similar to another sequence or to a set of keywords if it can be encoded in a small number of bits by taking advantage of common subwords. Minimal-length encoding of a sequence is computed in linear time, using a data compression algorithm that is based on a dynamic programming strategy and the directed acyclic word graph data structure. No assumptions about common word ("k-tuple") length are made in advance, and common words of any length are considered. The newly proposed algorithmic significance method provides an exact upper bound on the probability that sequence similarity has occurred by chance, thus eliminating the need for any arbitrary choice of similarity thresholds. Preliminary experiments indicate that a small number of keywords can positively identify a DNA sequence, which is extremely relevant in the context of partial sequencing by hybridization.
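The significance bound can be illustrated with a toy encoder. The greedy keyword/literal scheme and its bit costs below are invented stand-ins for the paper's DAWG-based minimal-length encoder; only the bound itself (save d bits over the null model, get chance probability at most 2^-d) follows the method described.

```python
import math

def encoding_length(seq, keywords):
    """Greedy toy encoder: at each position emit either a pointer to a
    matching keyword (1 flag bit + log2(#keywords) bits) or one literal
    base (1 flag bit + 2 bits)."""
    ptr_cost = 1 + math.ceil(math.log2(len(keywords)))
    i, bits = 0, 0
    while i < len(seq):
        hit = next((k for k in keywords if seq.startswith(k, i)), None)
        if hit is not None and 3 * len(hit) > ptr_cost:  # pointer is cheaper
            bits += ptr_cost
            i += len(hit)
        else:
            bits += 3
            i += 1
    return bits

def significance_bound(seq, keywords):
    """Algorithmic significance: if the sequence is encoded d bits more
    compactly than the i.i.d. null model (2 bits/base), then the
    probability of similarity arising by chance is at most 2**-d."""
    d = 2 * len(seq) - encoding_length(seq, keywords)
    return min(1.0, 2.0 ** (-d))

p = significance_bound("GATTACAGATTACA", ["GATTACA", "TTGACA"])
```

A sequence built from the keywords gets a vanishingly small bound, while an unrelated sequence gets the vacuous bound 1.0 and no arbitrary threshold is needed.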
High performance FDTD algorithm for GPGPU supercomputers
NASA Astrophysics Data System (ADS)
Zakirov, Andrey; Levchenko, Vadim; Perepelkina, Anastasia; Zempo, Yasunari
2016-10-01
An implementation of the FDTD method for the solution of optical and other electrodynamic problems of high computational cost is described. The implementation is based on the LRnLA algorithm DiamondTorre, which is developed specifically for GPGPU hardware. The specifics of the DiamondTorre algorithm for the staggered grid (Yee cell) and many-GPU devices are shown. The algorithm is implemented in software for real physics calculations. The software performance is estimated through algorithm parameters and a computer model. The real performance is tested on one GPU device, as well as on a many-GPU cluster. A performance of up to 0.65 × 10^12 cell updates per second is achieved for a 3D domain with 0.3 × 10^12 Yee cells in total.
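The Yee-grid leapfrog update at the core of any FDTD code can be sketched in one dimension. This is a scalar, single-core toy in normalized units, nothing like the DiamondTorre GPGPU implementation; the grid size, step count, and Gaussian source are illustrative.

```python
import math

def fdtd_1d(steps=200, n=400, courant=0.5):
    """Leapfrog update of Ez and Hy on a staggered (Yee) 1-D grid in
    normalized units; an additive soft source injects a Gaussian pulse.
    courant <= 1 keeps the scheme stable."""
    ez = [0.0] * n
    hy = [0.0] * (n - 1)
    for t in range(steps):
        for i in range(n - 1):                 # H half-step update
            hy[i] += courant * (ez[i + 1] - ez[i])
        for i in range(1, n - 1):              # E half-step update
            ez[i] += courant * (hy[i] - hy[i - 1])
        ez[n // 4] += math.exp(-((t - 30) / 10.0) ** 2)   # soft source
    return ez

ez = fdtd_1d()
```

Each "cell update" counted in the reported 0.65 × 10^12 updates/s corresponds to one pass of these two inner loops over one cell.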
Algorithms for improved performance in cryptographic protocols.
Schroeppel, Richard Crabtree; Beaver, Cheryl Lynn
2003-11-01
Public key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and encrypted messages and digital signatures possess high bandwidth. Accordingly, there are many environments (e.g. wireless, ad-hoc, remote sensing networks) where public-key requirements are prohibitive and cannot be used. The use of elliptic curves in public-key computations has provided a means by which computations and bandwidth can be somewhat reduced. We report here on the research conducted in an LDRD aimed to find even more efficient algorithms and to make public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent has been applied. Further we discovered some new problems and relations on which future cryptographic algorithms may be based.
Bootstrap performance profiles in stochastic algorithms assessment
Costa, Lino; Espírito Santo, Isabel A.C.P.; Oliveira, Pedro
2015-03-10
Optimization with stochastic algorithms has become a relevant research field. Due to its stochastic nature, its assessment is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
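The building block behind a bootstrap performance profile is resampling the per-run results to estimate the sampling distribution of an arbitrary statistic. A minimal percentile-bootstrap sketch, with invented run values:

```python
import random
import statistics

def bootstrap_ci(sample, stat=statistics.median, reps=2000, alpha=0.05, seed=1):
    """Percentile-bootstrap confidence interval for an arbitrary statistic.
    Works for small samples and makes no distributional assumptions."""
    rng = random.Random(seed)
    boot = sorted(stat(rng.choices(sample, k=len(sample))) for _ in range(reps))
    return boot[int(reps * alpha / 2)], boot[int(reps * (1 - alpha / 2)) - 1]

# Hypothetical best objective values from 8 runs of a stochastic solver:
runs = [0.91, 0.87, 0.95, 0.88, 0.90, 0.93, 0.86, 0.92]
lo, hi = bootstrap_ci(runs)
```

Swapping `stat` for `min`, `max`, or a quantile gives the different statistics the paper profiles, without changing the resampling machinery.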
Passive MMW algorithm performance characterization using MACET
NASA Astrophysics Data System (ADS)
Williams, Bradford D.; Watson, John S.; Amphay, Sengvieng A.
1997-06-01
As passive millimeter wave sensor technology matures, algorithms which are tailored to exploit the benefits of this technology are being developed. The expedient development of such algorithms requires an understanding of not only the gross phenomenology, but also specific quirks and limitations inherent in sensors and the data gathering methodology specific to this regime. This level of understanding is approached as the technology matures and increasing amounts of data become available for analysis. The Armament Directorate of Wright Laboratory, WL/MN, has spearheaded the advancement of passive millimeter-wave technology in algorithm development tools and modeling capability as well as sensor development. A passive MMW channel is available within WL/MN's popular multi-channel modeling program Irma, and a sample passive MMW algorithm is incorporated into the Modular Algorithm Concept Evaluation Tool, an algorithm development and evaluation system. The Millimeter Wave Analysis of Passive Signatures system provides excellent data collection capability in the 35, 60, and 95 GHz MMW bands. This paper exploits these assets for the study of the PMMW signature of a High Mobility Multi-Purpose Wheeled Vehicle in the three bands mentioned, and the effect of camouflage upon this signature and autonomous target recognition algorithm performance.
Algorithms for Detecting Significantly Mutated Pathways in Cancer
NASA Astrophysics Data System (ADS)
Vandin, Fabio; Upfal, Eli; Raphael, Benjamin J.
Recent genome sequencing studies have shown that the somatic mutations that drive cancer development are distributed across a large number of genes. This mutational heterogeneity complicates efforts to distinguish functional mutations from sporadic, passenger mutations. Since cancer mutations are hypothesized to target a relatively small number of cellular signaling and regulatory pathways, a common approach is to assess whether known pathways are enriched for mutated genes. However, restricting attention to known pathways will not reveal novel cancer genes or pathways. An alternative strategy is to examine mutated genes in the context of genome-scale interaction networks that include both well characterized pathways and additional gene interactions measured through various approaches. We introduce a computational framework for de novo identification of subnetworks in a large gene interaction network that are mutated in a significant number of patients. This framework includes two major features. First, we introduce a diffusion process on the interaction network to define a local neighborhood of "influence" for each mutated gene in the network. Second, we derive a two-stage multiple hypothesis test to bound the false discovery rate (FDR) associated with the identified subnetworks. We test these algorithms on a large human protein-protein interaction network using mutation data from two recent studies: glioblastoma samples from The Cancer Genome Atlas and lung adenocarcinoma samples from the Tumor Sequencing Project. We successfully recover pathways that are known to be important in these cancers, such as the p53 pathway. We also identify additional pathways, such as the Notch signaling pathway, that have been implicated in other cancers but not previously reported as mutated in these samples. Our approach is the first, to our knowledge, to demonstrate a computationally efficient strategy for de novo identification of statistically significant mutated subnetworks.
The Real World Significance of Performance Prediction
ERIC Educational Resources Information Center
Pardos, Zachary A.; Wang, Qing Yang; Trivedi, Shubhendu
2012-01-01
In recent years, the educational data mining and user modeling communities have been aggressively introducing models for predicting student performance on external measures such as standardized tests as well as within-tutor performance. While these models have brought statistically reliable improvement to performance prediction, the real world…
Evaluating Algorithm Performance Metrics Tailored for Prognostics
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai
2009-01-01
Prognostics has taken center stage in Condition Based Maintenance (CBM), where it is desired to estimate the Remaining Useful Life (RUL) of the system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtimes. Validation of such predictions is an important but difficult proposition, and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess key performance aspects expected of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms as compared to other conventional metrics. Specifically, four algorithms are compared: Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR). These algorithms vary in complexity and in their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms in a different manner and, depending on the requirements and constraints, suitable metrics may be chosen. Beyond these results, these metrics offer ideas about how metrics suitable to prognostics may be designed so that the evaluation procedure can be standardized.
Can search algorithms save large-scale automatic performance tuning?
Balaprakash, P.; Wild, S. M.; Hovland, P. D.
2011-01-01
Empirical performance optimization of computer codes using autotuners has received significant attention in recent years. Given the increased complexity of computer architectures and scientific codes, evaluating all possible code variants is prohibitively expensive for all but the simplest kernels. One way for autotuners to overcome this hurdle is through use of a search algorithm that finds high-performing code variants while examining relatively few variants. In this paper we examine the search problem in autotuning from a mathematical optimization perspective. As an illustration of the power and limitations of this optimization, we conduct an experimental study of several optimization algorithms on a number of linear algebra kernel codes. We find that the algorithms considered obtain performance gains similar to the optimal ones found by complete enumeration or by large random searches but in a tiny fraction of the computation time.
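The contrast between complete enumeration and a cheap search can be sketched on a toy tuning space. The parameter names and the cost function below are invented; a real autotuner would compile and time each variant instead of evaluating a formula.

```python
import itertools
import random

def cost(tile_i, tile_j, unroll):
    """Synthetic stand-in for a measured kernel runtime (lower is better);
    the optimum is deliberately placed at (64, 32, 4)."""
    return 1.0 + abs(tile_i - 64) / 64 + abs(tile_j - 32) / 32 + abs(unroll - 4) / 4

space = {"tile_i": [8, 16, 32, 64, 128],
         "tile_j": [8, 16, 32, 64],
         "unroll": [1, 2, 4, 8]}

# Complete enumeration: all 80 variants, giving the ground-truth optimum.
best_exhaustive = min(itertools.product(*space.values()), key=lambda v: cost(*v))

# Random search: only 20 of the 80 variants are "compiled and timed".
rng = random.Random(0)
trials = [tuple(rng.choice(v) for v in space.values()) for _ in range(20)]
best_random = min(trials, key=lambda v: cost(*v))
```

The paper's finding is that search strategies like this (and smarter model-based ones) land near the enumerated optimum at a small fraction of the evaluation cost.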
A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features
Amudha, P.; Karthik, S.; Sivakumari, S.
2015-01-01
Intrusion detection has become a main part of network security due to the huge number of attacks which affect computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to solve the intrusion detection problem. The algorithms are combined to obtain better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with other machine learning algorithms and found to be significantly different. PMID:26221625
Performance evaluation of SAR/GMTI algorithms
NASA Astrophysics Data System (ADS)
Garber, Wendy; Pierson, William; Mcginnis, Ryan; Majumder, Uttam; Minardi, Michael; Sobota, David
2016-05-01
There is a history and understanding of exploiting moving targets within ground moving target indicator (GMTI) data, including methods for modeling performance. However, many assumptions valid for GMTI processing are invalid for synthetic aperture radar (SAR) data. For example, traditional GMTI processing assumes targets are exo-clutter and a system that uses a GMTI waveform, i.e. low bandwidth (BW) and low pulse repetition frequency (PRF). Conversely, SAR imagery is typically formed to focus data at zero Doppler and requires high BW and high PRF. Therefore, many of the techniques used in performance estimation of GMTI systems are not valid for SAR data. However, as demonstrated by papers in the recent literature [1-11], there is interest in exploiting moving targets within SAR data. The techniques employed vary widely, including filter banks to form images at multiple Dopplers, performing smear detection, and attempting to address the issue through waveform design. The above work validates the need for moving target exploitation in SAR data, but it does not represent a theory allowing for the prediction or bounding of performance. This work develops an approach to estimate and/or bound performance for moving target exploitation specific to SAR data. Synthetic SAR data is generated across a range of sensor, environment, and target parameters to test the exploitation algorithms under specific conditions. This provides a design tool allowing radar systems to be tuned for specific moving target exploitation applications. In summary, we derive a set of rules that bound the performance of specific moving target exploitation algorithms under variable operating conditions.
Impact of Multiscale Retinex Computation on Performance of Segmentation Algorithms
NASA Technical Reports Server (NTRS)
Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Hines, Glenn D.
2004-01-01
Classical segmentation algorithms subdivide an image into its constituent components based upon some metric that defines commonality between pixels. Often, these metrics incorporate some measure of "activity" in the scene, e.g. the amount of detail that is in a region. The Multiscale Retinex with Color Restoration (MSRCR) is a general purpose, non-linear image enhancement algorithm that significantly affects the brightness, contrast and sharpness within an image. In this paper, we will analyze the impact the MSRCR has on segmentation results and performance.
Performance Comparison Of Evolutionary Algorithms For Image Clustering
NASA Astrophysics Data System (ADS)
Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.
2014-09-01
Evolutionary computation tools are able to process real-valued numerical sets in order to extract suboptimal solutions of a designed problem. Data clustering algorithms have been intensively used for image segmentation in remote sensing applications. Despite the wide use of evolutionary algorithms for data clustering, their clustering performance has been scarcely studied using clustering validation indexes. In this paper, recently proposed evolutionary algorithms (i.e., the Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA), and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (i.e., k-means, FCM, and SOM networks) have been used to cluster images, and their performances have been compared using four clustering validation indexes. Experimental results showed that evolutionary algorithms give more reliable cluster centers than classical clustering techniques, but their convergence time is quite long.
Performance evaluation of image processing algorithms in digital mammography
NASA Astrophysics Data System (ADS)
Zanca, Federica; Van Ongeval, Chantal; Jacobs, Jurgen; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde
2008-03-01
The purpose of the study is to evaluate the performance of different image processing algorithms in terms of representation of microcalcification clusters in digital mammograms. Clusters were simulated in clinical raw ("for processing") images. The entire dataset of images consisted of 200 normal mammograms, selected out of our clinical routine cases and acquired with a Siemens Novation DR system. In 100 of the normal images a total of 142 clusters were simulated; the remaining 100 normal mammograms served as true negative input cases. Both abnormal and normal images were processed with 5 commercially available processing algorithms: Siemens OpView1 and Siemens OpView2, Agfa Musica1, Sectra Mamea AB Sigmoid and IMS Raffaello Mammo 1.2. Five observers were asked to locate and score the cluster(s) in each image by means of a dedicated software tool. Observer performance was assessed using the JAFROC Figure of Merit. FROC curves, fitted using the IDCA method, have also been calculated. JAFROC analysis revealed significant differences among the image processing algorithms in the detection of microcalcification clusters (p=0.0000369). Calculated average Figures of Merit are: 0.758 for Siemens OpView2, 0.747 for IMS Raffaello Mammo 1.2, 0.736 for Agfa Musica1 processing, 0.706 for Sectra Mamea AB Sigmoid processing and 0.703 for Siemens OpView1. This study is a first step towards a quantitative assessment of image processing in terms of cluster detection in clinical mammograms. Although we showed a significant difference among the image processing algorithms, this method does not on its own allow for a global performance ranking of the investigated algorithms.
Performance analysis of LVQ algorithms: a statistical physics approach.
Ghosh, Anarta; Biehl, Michael; Hammer, Barbara
2006-01-01
Learning vector quantization (LVQ) constitutes a powerful and intuitive method for adaptive nearest prototype classification. However, original LVQ has been introduced based on heuristics and numerous modifications exist to achieve better convergence and stability. Recently, a mathematical foundation by means of a cost function has been proposed which, as a limiting case, yields a learning rule similar to classical LVQ2.1. It also motivates a modification which shows better stability. However, the exact dynamics as well as the generalization ability of many LVQ algorithms have not been thoroughly investigated so far. Using concepts from statistical physics and the theory of on-line learning, we present a mathematical framework to analyse the performance of different LVQ algorithms in a typical scenario in terms of their dynamics, sensitivity to initial conditions, and generalization ability. Significant differences in the algorithmic stability and generalization ability can be found already for slightly different variants of LVQ. We study five LVQ algorithms in detail: Kohonen's original LVQ1, unsupervised vector quantization (VQ), a mixture of VQ and LVQ, LVQ2.1, and a variant of LVQ which is based on a cost function. Surprisingly, basic LVQ1 shows very good performance in terms of stability, asymptotic generalization ability, and robustness to initializations and model parameters which, in many cases, is superior to recent alternative proposals.
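The LVQ1 update rule at the heart of the comparison is a one-liner: attract the winning prototype toward the sample when the class labels match, repel it otherwise. A sketch, with an illustrative learning rate and vectors:

```python
def lvq1_step(proto, x, same_class, eta=0.1):
    """One LVQ1 update of the winning prototype: move it a fraction eta
    of the way toward x if the labels agree, the same fraction away
    from x if they disagree."""
    sign = 1.0 if same_class else -1.0
    return [w + sign * eta * (xi - w) for w, xi in zip(proto, x)]

w0 = [0.0, 0.0]
x = [1.0, 1.0]
w_attracted = lvq1_step(w0, x, same_class=True)    # moves toward x
w_repelled = lvq1_step(w0, x, same_class=False)    # moves away from x
```

Variants such as LVQ2.1 and the cost-function-based rule analysed in the paper differ mainly in which prototypes are updated and with what effective sign and weighting.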
Turbopump Performance Improved by Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Oyama, Akira; Liou, Meng-Sing
2002-01-01
The development of design optimization technology for turbomachinery has been initiated using the multiobjective evolutionary algorithm under NASA's Intelligent Synthesis Environment and Revolutionary Aeropropulsion Concepts programs. As an alternative to the traditional gradient-based methods, evolutionary algorithms (EA's) are emergent design-optimization algorithms modeled after the mechanisms found in natural evolution. EA's search from multiple points, instead of moving from a single point. In addition, they require no derivatives or gradients of the objective function, leading to robustness and simplicity in coupling any evaluation codes. Parallel efficiency also becomes very high by using a simple master-slave concept for function evaluations, since such evaluations often consume the most CPU time, such as computational fluid dynamics. Application of EA's to multiobjective design problems is also straightforward because EA's maintain a population of design candidates in parallel. Because of these advantages, EA's are a unique and attractive approach to real-world design optimization problems.
Case study of isosurface extraction algorithm performance
Sutton, P M; Hansen, C D; Shen, H; Schikore, D
1999-12-14
Isosurface extraction is an important and useful visualization method. Over the past ten years, the field has seen numerous isosurface techniques published leaving the user in a quandary about which one should be used. Some papers have published complexity analysis of the techniques yet empirical evidence comparing different methods is lacking. This case study presents a comparative study of several representative isosurface extraction algorithms. It reports and analyzes empirical measurements of execution times and memory behavior for each algorithm. The results show that asymptotically optimal techniques may not be the best choice when implemented on modern computer architectures.
Human performance models and rear-end collision avoidance algorithms.
Brown, T L; Lee, J D; McGehee, D V
2001-01-01
Collision warning systems offer a promising approach to mitigate rear-end collisions, but substantial uncertainty exists regarding the joint performance of the driver and the collision warning algorithms. A simple deterministic model of driver performance was used to examine kinematics-based and perceptual-based rear-end collision avoidance algorithms over a range of collision situations, algorithm parameters, and assumptions regarding driver performance. The results show that the assumptions concerning driver reaction times have important consequences for algorithm performance, with underestimates dramatically undermining the safety benefit of the warning. Additionally, under some circumstances, when drivers rely on the warning algorithms, larger headways can result in more severe collisions. This reflects the nonlinear interaction among the collision situation, the algorithm, and driver response that should not be attributed to the complexities of driver behavior but to the kinematics of the situation. Comparisons made with experimental data demonstrate that a simple human performance model can capture important elements of system performance and complement expensive human-in-the-loop experiments. Actual or potential applications of this research include selection of an appropriate algorithm, more accurate specification of algorithm parameters, and guidance for future experiments.
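A kinematics-based warning algorithm of the kind examined here fires when the headway falls below the distance the follower needs in order to stop behind the lead vehicle. The stopping-distance formulation below is a textbook sketch, not necessarily the exact algorithm in the study; all numeric values are illustrative.

```python
def warning_range(v_follow, v_lead, t_react, a_follow, a_lead):
    """Minimum headway (m) at which a kinematics-based rear-end collision
    warning must fire so the follower can brake to a stop behind a
    braking lead vehicle.  Speeds in m/s; decelerations given as
    positive magnitudes in m/s^2."""
    stop_follow = v_follow * t_react + v_follow ** 2 / (2 * a_follow)
    stop_lead = v_lead ** 2 / (2 * a_lead)
    return max(0.0, stop_follow - stop_lead)

# Follower at 25 m/s closing on a lead vehicle braking from 15 m/s:
r_nominal = warning_range(25, 15, t_react=1.5, a_follow=6.0, a_lead=8.0)
r_optimistic = warning_range(25, 15, t_react=0.5, a_follow=6.0, a_lead=8.0)
```

Cutting the assumed reaction time from 1.5 s to 0.5 s shrinks the computed warning distance by roughly a third in this example, which is exactly the sensitivity to reaction-time assumptions the abstract highlights.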
Performance Analysis of Evolutionary Algorithms for Steiner Tree Problems.
Lai, Xinsheng; Zhou, Yuren; Xia, Xiaoyun; Zhang, Qingfu
2016-12-13
The Steiner tree problem (STP) aims to determine some Steiner nodes such that the minimum spanning tree over these Steiner nodes and a given set of special nodes has the minimum weight, which is NP-hard. STP includes several important cases. The Steiner tree problem in graphs (GSTP) is one of them. Many heuristics have been proposed for STP, and some of them have been proved to be performance-guarantee approximation algorithms for this problem. Since evolutionary algorithms (EAs) are general and popular randomized heuristics, it is significant to investigate the performance of EAs for STP. Several empirical investigations have shown that EAs are efficient for STP. However, up to now, there has been no theoretical work on the performance of EAs for STP. In this paper, we reveal that the (1 + 1) EA achieves a 3/2 approximation ratio for STP in a special class of quasi-bipartite graphs in expected runtime O(r(r + s - 1) · w_max), where r, s, and w_max are, respectively, the number of Steiner nodes, the number of special nodes, and the largest weight among all edges in the input graph. We also show that the (1 + 1) EA is better than two other heuristics on two GSTP instances, and that the (1 + 1) EA may be inefficient on a constructed GSTP instance.
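The (1+1) EA analysed here is simple to state: flip each bit of the current solution independently with probability 1/n and keep the offspring iff it is at least as fit. The sketch below runs it on OneMax rather than a Steiner-tree fitness, purely so the example stays self-contained; all parameters are illustrative.

```python
import random

def one_plus_one_ea(n=30, budget=3000, seed=7):
    """(1+1) EA on OneMax (maximize the number of ones): standard-bit
    mutation with rate 1/n, elitist replacement."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    best = sum(parent)
    for _ in range(budget):
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        if sum(child) >= best:          # keep the offspring iff no worse
            parent, best = child, sum(child)
    return best

best = one_plus_one_ea()
```

The runtime bounds in the paper count exactly these fitness evaluations; only the fitness function changes between OneMax and the quasi-bipartite STP instances.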
Computational and performance aspects of PCA-based face-recognition algorithms.
Moon, H; Phillips, P J
2001-01-01
Algorithms based on principal component analysis (PCA) form the basis of numerous studies in the psychological and algorithmic face-recognition literature. PCA is a statistical technique and its incorporation into a face-recognition algorithm requires numerous design decisions. We explicitly state the design decisions by introducing a generic modular PCA algorithm. This allows us to investigate these decisions, including those not documented in the literature. We experimented with different implementations of each module, and evaluated the different implementations using the September 1996 FERET evaluation protocol (the de facto standard for evaluating face-recognition algorithms). We experimented with (i) changing the illumination normalization procedure; (ii) studying effects on algorithm performance of compressing images with JPEG and wavelet compression algorithms; (iii) varying the number of eigenvectors in the representation; and (iv) changing the similarity measure in the classification process. We performed two experiments. In the first experiment, we obtained performance results on the standard September 1996 FERET large-gallery image sets. In the second experiment, we examined the variability in algorithm performance on different sets of facial images. The study was performed on 100 randomly generated image sets (galleries) of the same size. Our two most significant results are (i) changing the similarity measure produced the greatest change in performance, and (ii) that a difference in performance of ±10% is needed to distinguish between algorithms.
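Why the similarity measure is a swappable design decision is easy to see in a nearest-neighbour matcher over PCA-projected feature vectors. The gallery coordinates below are invented two-dimensional stand-ins for eigenspace projections; the three measures are interchangeable plug-ins.

```python
import math

def l2(a, b):
    """Euclidean distance."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def l1(a, b):
    """City-block distance."""
    return sum(abs(x - y) for x, y in zip(a, b))

def neg_cosine(a, b):
    """Negated cosine similarity, so smaller still means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    return -dot / (math.hypot(*a) * math.hypot(*b))

def identify(probe, gallery, dist):
    """Nearest-neighbour identification in the projected feature space."""
    return min(gallery, key=lambda gid: dist(probe, gallery[gid]))

gallery = {"A": [1.0, 0.2], "B": [0.1, 1.0]}   # invented eigenspace coordinates
probe = [0.9, 0.35]
```

Only the `dist` argument changes between runs, which is how a modular design isolates the effect of this one decision on recognition rates.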
Typical performance of approximation algorithms for NP-hard problems
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-11-01
Typical performance of approximation algorithms is studied for randomized minimum vertex cover problems. A wide class of random graph ensembles characterized by an arbitrary degree distribution is discussed with the presentation of a theoretical framework. Herein, three approximation algorithms are examined: linear-programming relaxation, loopy-belief propagation, and the leaf-removal algorithm. The former two algorithms are analyzed using a statistical-mechanical technique, whereas the average-case analysis of the last one is conducted using the generating function method. These algorithms have a threshold in the typical performance with increasing average degree of the random graph, below which they find true optimal solutions with high probability. Our study reveals that there exist only three cases, determined by the order of the typical performance thresholds. In addition, we provide some conditions for classification of the graph ensembles and demonstrate explicitly some examples for the difference in thresholds.
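Of the three algorithms examined, leaf removal is the most elementary: repeatedly take a degree-1 node and put its unique neighbour into the cover. The sketch below applies it to minimum vertex cover on an adjacency-dict graph; it finds a true optimum exactly when the graph's 2-core is empty (e.g. on forests), matching the threshold behaviour described.

```python
def leaf_removal_cover(adj):
    """Leaf-removal heuristic for minimum vertex cover: the neighbour of
    a degree-1 node is always a safe choice for the cover."""
    adj = {u: set(vs) for u, vs in adj.items()}   # private mutable copy
    cover = set()
    leaves = [u for u in adj if len(adj[u]) == 1]
    while leaves:
        u = leaves.pop()
        if len(adj[u]) != 1:          # degree changed since it was queued
            continue
        (v,) = adj[u]
        cover.add(v)
        for w in list(adj[v]):        # delete v and all its edges
            adj[w].discard(v)
            if len(adj[w]) == 1:
                leaves.append(w)
        adj[v].clear()
        adj[u].clear()
    return cover

# Path graph 0-1-2-3-4: the optimal cover is {1, 3}.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
cover = leaf_removal_cover(path)
```

On random graphs above the average-degree threshold, a nonempty 2-core survives the peeling and the heuristic leaves part of the problem unsolved, which is the failure mode the analysis characterizes.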
Generic algorithms for high performance scalable geocomputing
NASA Astrophysics Data System (ADS)
de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek
2016-04-01
During the last decade, the characteristics of computing hardware have changed a lot. For example, instead of a single general purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available for developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g.: threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. In the resulting model, the low-level details of how this is done are separated from the model-specific logic representing the modeled system.
Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.
2005-01-01
The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm under added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX) and power spectral density analysis of pilot control inputs, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows that pilot-induced oscillations on the straight-in approach were less prevalent with the nonlinear algorithm than with the optimal algorithm. On the offset approach, the nonlinear algorithm's augmented turbulence cues increased workload but were deemed more realistic by the pilots than the cues of the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm and the least rudder pedal activity for the optimal algorithm.
Performance Improvements of the Phoneme Recognition Algorithm.
1984-06-01
At the present time, there are commercially available speech recognition machines that perform limited speech recognition. There are still major drawbacks ... to recognize. Even though the training period has been made fairly painless to the user, it still severely limits the vocabulary the machine can ... this information to perform the recognition routines. Alterations to the templates' spectrum file were limited to changing values in the ...
Performance characterization of a combined material identification and screening algorithm
NASA Astrophysics Data System (ADS)
Green, Robert L.; Hargreaves, Michael D.; Gardner, Craig M.
2013-05-01
Portable analytical devices based on a gamut of technologies (Infrared, Raman, X-Ray Fluorescence, Mass Spectrometry, etc.) are now widely available. These tools have seen increasing adoption for field-based assessment by diverse users including military, emergency response, and law enforcement. Frequently, end-users of portable devices are non-scientists who rely on embedded software and the associated algorithms to convert collected data into actionable information. Two classes of problems commonly encountered in field applications are identification and screening. Identification algorithms are designed to scour a library of known materials and determine whether the unknown measurement is consistent with a stored response (or combination of stored responses). Such algorithms can be used to identify a material from many thousands of possible candidates. Screening algorithms evaluate whether at least a subset of features in an unknown measurement correspond to one or more specific substances of interest and are typically configured to detect from a small list of potential target analytes. Thus, screening algorithms are much less broadly applicable than identification algorithms; however, they typically provide higher detection rates, which makes them attractive for specific applications such as chemical warfare agent or narcotics detection. This paper will present an overview and performance characterization of a combined identification/screening algorithm that has recently been developed. It will be shown that the combined algorithm provides enhanced detection capability more typical of screening algorithms while maintaining a broad identification capability. Additionally, we will highlight how this approach can enable users to incorporate situational awareness during a response.
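As a toy illustration of the identification/screening split (the three-channel "spectra", library entries, and thresholds below are invented; fielded instruments use much richer matching models):

```python
def cosine(a, b):
    # cosine similarity between two spectra
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

LIBRARY = {"acetone": [1.0, 0.2, 0.0],
           "ethanol": [0.1, 1.0, 0.3],
           "water":   [0.0, 0.1, 1.0]}
TARGETS = {"ethanol"}  # short watch-list for screening

def identify(spectrum, threshold=0.95):
    # identification: search the whole library for the best match
    name, score = max(((n, cosine(spectrum, ref)) for n, ref in LIBRARY.items()),
                      key=lambda t: t[1])
    return name if score >= threshold else None

def screen(spectrum, threshold=0.7):
    # screening: only check the watch-list, with a lower (more sensitive) bar
    return {n for n in TARGETS if cosine(spectrum, LIBRARY[n]) >= threshold}
```

Identification searches the full library under a strict threshold; screening checks only the short watch-list under a looser one, trading breadth for detection rate, as the abstract describes.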
A Hybrid Actuation System Demonstrating Significantly Enhanced Electromechanical Performance
NASA Technical Reports Server (NTRS)
Su, Ji; Xu, Tian-Bing; Zhang, Shujun; Shrout, Thomas R.; Zhang, Qiming
2004-01-01
A hybrid actuation system (HYBAS) utilizing advantages of a combination of electromechanical responses of an electroactive polymer (EAP), an electrostrictive copolymer, and an electroactive ceramic single crystal, PZN-PT single crystal, has been developed. The system employs the contribution of the actuation elements cooperatively and exhibits a significantly enhanced electromechanical performance compared to the performances of devices made of either constituent material, the electroactive polymer or the ceramic single crystal, individually. The theoretical modeling of the performance of the HYBAS is in good agreement with experimental observation. The consistency between the theoretical modeling and the experimental tests makes the design concept an effective route for the development of high performance actuating devices for many applications. The theoretical modeling, fabrication of the HYBAS and the initial experimental results will be presented and discussed.
Improved Ant Colony Clustering Algorithm and Its Performance Study
Gao, Wei
2016-01-01
Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533
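The pick/drop rule underlying ant colony clustering is compact enough to state directly; a sketch of the classic Lumer-Faieta-style probabilities (the constants k1 and k2 and the similarity measure are illustrative choices, not the paper's exact parameters):

```python
def local_similarity(item, neighbours, alpha=1.0):
    # average similarity of `item` to the items in its neighbourhood, in [0, 1]
    if not neighbours:
        return 0.0
    f = sum(1.0 - abs(item - n) / alpha for n in neighbours) / len(neighbours)
    return max(0.0, f)

def pick_prob(f, k1=0.1):
    # a lone, dissimilar item (small f) is picked up almost surely
    return (k1 / (k1 + f)) ** 2

def drop_prob(f, k2=0.15):
    # an ant drops its load where the neighbourhood is similar (large f)
    return (f / (k2 + f)) ** 2
```

Iterating these two rules over a grid of scattered items is what gradually gathers similar objects into heaps, mimicking corpse clustering in real colonies.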
Developmental Changes in Adolescents' Olfactory Performance and Significance of Olfaction
Klötze, Paula; Gerber, Friederike; Croy, Ilona; Hummel, Thomas
2016-01-01
The aim of the current work was to examine developmental changes in adolescents' olfactory performance and the personal significance of olfaction. In the first study, the olfactory identification abilities of 76 participants (31 males and 45 females aged between 10 and 18 years; M = 13.8, SD = 2.3) were evaluated with the Sniffin' Sticks identification test, presented in a cued and in an uncued manner. Verbal fluency was additionally examined for control purposes. In the second study, 131 participants (46 males and 85 females aged between 10 and 18 years; M = 14.4, SD = 2.2) filled in the importance of olfaction questionnaire. Odor identification abilities increased significantly with age and were significantly higher in girls as compared to boys. These effects were especially pronounced in the uncued task and partly related to verbal fluency. In line with this, the personal significance of olfaction increased with age and was generally higher among female than male participants. PMID:27332887
Modeling and performance analysis of GPS vector tracking algorithms
NASA Astrophysics Data System (ADS)
Lashley, Matthew
This dissertation provides a detailed analysis of GPS vector tracking algorithms and the advantages they have over traditional receiver architectures. Standard GPS receivers use a decentralized architecture that separates the tasks of signal tracking and position/velocity estimation. Vector tracking algorithms combine the two tasks into a single algorithm. The signals from the various satellites are processed collectively through a Kalman filter. The advantages of vector tracking over traditional, scalar tracking methods are thoroughly investigated. A method for making a valid comparison between vector and scalar tracking loops is developed. This technique avoids the ambiguities encountered when attempting to make a valid comparison between tracking loops (which are characterized by noise bandwidths and loop order) and the Kalman filters (which are characterized by process and measurement noise covariance matrices) that are used by vector tracking algorithms. The improvement in performance offered by vector tracking is calculated in several scenarios. Rule-of-thumb analysis techniques for scalar Frequency Lock Loops (FLL) are extended to the vector tracking case. The analysis tools provide a simple method for analyzing the performance of vector tracking loops. The analysis tools are verified using Monte Carlo simulations. Monte Carlo simulations are also used to study the effects of carrier-to-noise power density ratio (C/N0) estimation and the advantage offered by vector tracking over scalar tracking. The improvement from vector tracking ranges from 2.4 to 6.2 dB in various scenarios. The difference in the performance of the three vector tracking architectures is analyzed. The effects of using a federated architecture with and without information sharing between the receiver's channels are studied. A combination of covariance analysis and Monte Carlo simulation is used to analyze the performance of the three algorithms. The federated algorithm without
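The channel-fusion step at the heart of vector tracking is a Kalman filter; a minimal scalar predict/update cycle (one state, H = 1 — a drastic simplification of the navigation-state filter analyzed in the dissertation) shows the mechanics:

```python
def kf_predict(x, P, Q):
    # time update for a random-walk state: estimate carries over, variance grows
    return x, P + Q

def kf_update(x, P, z, R):
    # measurement update with a scalar state and H = 1
    K = P / (P + R)                     # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

x, P = 0.0, 1.0                         # prior: state 0 with variance 1
x, P = kf_predict(x, P, Q=0.0)
x, P = kf_update(x, P, z=1.0, R=1.0)    # fold in one pseudorange-like measurement
```

With P = R = 1 the gain is 0.5, so the estimate moves halfway toward the measurement and the variance halves; in vector tracking, one such filter absorbs the discriminator outputs of every channel at once.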
A high-performance genetic algorithm: using traveling salesman problem as a case.
Tsai, Chun-Wei; Tseng, Shih-Pang; Chiang, Ming-Chao; Yang, Chu-Sing; Hong, Tzung-Pei
2014-01-01
This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA.
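The caching idea can be sketched for bit-string chromosomes (a simplification; for the TSP one would track common edges rather than loci):

```python
def common_genes(population):
    # loci where every individual carries the same allele; per the paper's
    # observation, these have a high chance of surviving into the final
    # solution and can be frozen to skip redundant recomputation
    length = len(population[0])
    return {i: population[0][i]
            for i in range(length)
            if all(ind[i] == population[0][i] for ind in population)}

pop = ["1011", "1001", "1011"]
frozen = common_genes(pop)   # later generations search only the free loci
```

Only the loci absent from `frozen` then need to be re-evaluated by crossover, mutation, and fitness computation in subsequent generations.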
Performance evaluation of image processing algorithms on the GPU.
Castaño-Díez, Daniel; Moser, Dominik; Schoenegger, Andreas; Pruggnaller, Sabine; Frangakis, Achilleas S
2008-10-01
The graphics processing unit (GPU), which originally was used exclusively for visualization purposes, has evolved into an extremely powerful co-processor. In the meantime, through the development of elaborate interfaces, the GPU can be used to process data and deal with computationally intensive applications. The speed-up factors attained compared to the central processing unit (CPU) are dependent on the particular application, as the GPU architecture gives the best performance for algorithms that exhibit high data parallelism and high arithmetic intensity. Here, we evaluate the performance of the GPU on a number of common algorithms used for three-dimensional image processing. The algorithms were developed on a new software platform called "CUDA", which allows a direct translation of C code to the GPU. The implemented algorithms include spatial transformations, real-space and Fourier operations, as well as pattern recognition procedures, reconstruction algorithms and classification procedures. In our implementation, the direct porting of C code to the GPU achieves typical acceleration values on the order of 10-20 times compared to a state-of-the-art conventional processor, varying with the type of algorithm. The gained speed-up comes at no additional cost, since the software runs on the GPU of the graphics card of common workstations.
Significant Advances in the AIRS Science Team Version-6 Retrieval Algorithm
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena; Molnar, Gyula
2012-01-01
AIRS/AMSU is the state-of-the-art infrared and microwave atmospheric sounding system flying aboard EOS Aqua. The Goddard DISC has analyzed AIRS/AMSU observations, covering the period September 2002 until the present, using the AIRS Science Team Version-5 retrieval algorithm. These products have been used by many researchers to make significant advances in both climate and weather applications. The AIRS Science Team Version-6 retrieval, which will become operational in mid-2012, contains many significant theoretical and practical improvements compared to Version-5, which should further enhance the utility of AIRS products for both climate and weather applications. In particular, major changes have been made with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the retrieval procedure; 3) compute Outgoing Longwave Radiation; and 4) determine Quality Control. This paper will describe these advances found in the AIRS Version-6 retrieval algorithm and demonstrate the improvement of AIRS Version-6 products compared to those obtained using Version-5.
Preliminary flight evaluation of an engine performance optimization algorithm
NASA Technical Reports Server (NTRS)
Lambert, H. H.; Gilyard, G. B.; Chisholm, J. D.; Kerr, L. J.
1991-01-01
A performance seeking control (PSC) algorithm has undergone initial flight test evaluation in subsonic operation of a PW1128-engined F-15. This algorithm is designed to optimize the quasi-steady performance of an engine for three primary modes: (1) minimum fuel consumption; (2) minimum fan turbine inlet temperature (FTIT); and (3) maximum thrust. The flight test results have verified a thrust-specific fuel consumption reduction of 1 pct., up to 100 R decreases in FTIT, and increases of as much as 12 pct. in maximum thrust. PSC technology promises to be of value in next-generation tactical and transport aircraft.
Simple algorithms for calculating optical communication performance through turbulence
NASA Astrophysics Data System (ADS)
Shapiro, J. H.; Harney, R. C.
1981-01-01
Propagation through turbulence can impose severe limitations on the performance of atmospheric optical communication links. Previous studies have established quantitative results for turbulence-induced beam spread, angular spread, and scintillation. This paper develops communication-theory results for single-bit and message transmission through turbulence. Programmable calculator algorithms for evaluating these results are given, and used to examine system performance in some realistic scenarios. These algorithms make it possible for the uninitiated communication engineer to rapidly assess the effects of turbulence on an atmospheric optical communication link.
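For ideal coherent BPSK, the single-bit AWGN result reduces to P_b = (1/2) erfc(sqrt(E_b/N_0)); averaging it over normalized lognormal fades gives a feel for the turbulence penalty (the sigma value and sample count are illustrative, and this weak-turbulence sketch stands in for, rather than reproduces, the paper's calculator algorithms):

```python
import math
import random

def ber_awgn(ebn0):
    # coherent BPSK bit error rate in pure AWGN (ebn0 is linear, not dB)
    return 0.5 * math.erfc(math.sqrt(ebn0))

def ber_lognormal(ebn0, sigma=0.5, n=5000, seed=1):
    # average the conditional BER over lognormal intensity fades with mean 1
    rng = random.Random(seed)
    mu = -0.5 * sigma * sigma  # keeps E[fade] = 1
    return sum(ber_awgn(ebn0 * rng.lognormvariate(mu, sigma))
               for _ in range(n)) / n
```

Because the conditional BER is convex in the received energy, Jensen's inequality implies the fade-averaged error rate is never better than the unfaded one, which is the turbulence-induced degradation the paper quantifies.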
Benchmarking the performance of daily temperature homogenisation algorithms
NASA Astrophysics Data System (ADS)
Warren, Rachel; Bailey, Trevor; Jolliffe, Ian; Willett, Kate
2015-04-01
This work explores the creation of realistic synthetic data and its use as a benchmark for comparing the performance of different homogenisation algorithms on daily temperature data. Four different regions in the United States have been selected and three different inhomogeneity scenarios explored for each region. These benchmark datasets are beneficial as, unlike in the real world, the underlying truth is known a priori, thus allowing definite statements to be made about the performance of the algorithms run on them. Performance can be assessed in terms of the ability of algorithms to detect changepoints and also their ability to correctly remove inhomogeneities. The focus is on daily data, thus presenting new challenges in comparison to monthly data and pushing the boundaries of previous studies. The aims of this work are to evaluate and compare the performance of various homogenisation algorithms, aiding their improvement and enabling a quantification of the uncertainty remaining in the data even after they have been homogenised. An important outcome is also to evaluate how realistic the created benchmarks are. It is essential that any weaknesses in the benchmarks are taken into account when judging algorithm performance against them. This information in turn will help to improve future versions of the benchmarks. I intend to present a summary of this work including the method of benchmark creation, details of the algorithms run and some preliminary results. This work forms a three year PhD and feeds into the larger project of the International Surface Temperature Initiative which is working on a global scale and with monthly instead of daily data.
Thermal Performance Simulation of MWNT/NR Composites Based on the Levenberg-Marquardt Algorithm
NASA Astrophysics Data System (ADS)
Yu, Z. Z.; Liu, J. S.
2017-02-01
In this paper, the Levenberg-Marquardt algorithm was used to simulate the thermal performance of an aligned carbon nanotube-filled rubber composite, and the effects of temperature, filling amount, MWNT orientation and other factors on thermal performance were studied. The results showed that MWNT orientation can greatly improve the thermal conductivity of the composite material, and that the improvement from overall orientation was higher than that from local orientation. Volume fraction also affected thermal performance: thermal conductivity increased with increasing volume fraction. Temperature had no significant effect on the thermal conductivity. The simulation results correlated well with experimental results, which shows that the simulation algorithm is effective and feasible.
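The Levenberg-Marquardt iteration itself is short; a one-parameter sketch fitting y = exp(-k x) (a stand-in target, since the paper's conductivity model is not given) shows the damped Gauss-Newton step and the accept/reject rule that adapts the damping:

```python
import math

def lm_fit(xs, ys, k=0.5, lam=1e-3, iters=100):
    # one-parameter Levenberg-Marquardt fit of the model f(x) = exp(-k*x)
    def cost(kk):
        return sum((y - math.exp(-kk * x)) ** 2 for x, y in zip(xs, ys))
    c = cost(k)
    for _ in range(iters):
        J = [-x * math.exp(-k * x) for x in xs]           # df/dk at each point
        r = [y - math.exp(-k * x) for x, y in zip(xs, ys)]  # residuals
        g = sum(j * e for j, e in zip(J, r))               # J^T r
        h = sum(j * j for j in J)                          # J^T J
        step = g / (h + lam)                               # damped Gauss-Newton step
        c_new = cost(k + step)
        if c_new < c:
            k, c, lam = k + step, c_new, lam * 0.5         # accept: toward Gauss-Newton
        else:
            lam *= 10.0                                    # reject: toward gradient descent
    return k

xs = [0.1 * i for i in range(1, 11)]
ys = [math.exp(-2.0 * x) for x in xs]   # synthetic noiseless data, true k = 2
k_hat = lm_fit(xs, ys)
```

The damping parameter lam interpolates between Gauss-Newton (fast near the solution) and gradient descent (robust far from it), which is why LM is the default workhorse for nonlinear regression problems like this fit.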
A high-performance FFT algorithm for vector supercomputers
NASA Technical Reports Server (NTRS)
Bailey, David H.
1988-01-01
Many traditional algorithms for computing the fast Fourier transform (FFT) on conventional computers are unacceptable for advanced vector and parallel computers because they involve nonunit, power-of-two memory strides. A practical technique for computing the FFT that avoids all such strides and appears to be near-optimal for a variety of current vector and parallel computers is presented. Performance results of a program based on this technique are given. Notable among these results is that a FORTRAN implementation of this algorithm on the CRAY-2 runs up to 77 percent faster than Cray's assembly-coded library routine.
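The stride-avoiding idea in FFTs of this family is often presented as a "four-step" factorization of an n = n1*n2 point transform; a pure-Python sketch (naive short DFTs standing in for vectorized kernels, so this shows the data flow, not the performance):

```python
import cmath

def dft(x):
    # naive O(n^2) DFT; stands in for an optimized short-length FFT kernel
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

def four_step_fft(x, n1, n2):
    # four-step factorization of an n = n1*n2 point transform:
    # (1) n1 column transforms of length n2, (2) twiddle multiply,
    # (3) n2 row transforms of length n1, (4) reindex the output.
    n = n1 * n2
    cols = [dft([x[n1 * j2 + j1] for j2 in range(n2)]) for j1 in range(n1)]
    cols = [[cols[j1][k2] * cmath.exp(-2j * cmath.pi * j1 * k2 / n)
             for k2 in range(n2)] for j1 in range(n1)]
    out = [0j] * n
    for k2 in range(n2):
        row = dft([cols[j1][k2] for j1 in range(n1)])
        for k1 in range(n1):
            out[n2 * k1 + k2] = row[k1]   # output index k = n2*k1 + k2
    return out

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
out = four_step_fft(x, 2, 4)
ref = dft(x)
```

Each step works on short, contiguous vectors, which is what lets such algorithms avoid the large power-of-two strides that defeat vector memory systems.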
Performance characterization of the dynamic programming obstacle detection algorithm.
Gandhi, Tarak; Yang, Mau-Tsuen; Kasturi, Rangachar; Camps, Octavia I; Coraor, Lee D; McCandless, Jeffrey
2006-05-01
A computer vision-based system using images from an airborne aircraft can increase flight safety by aiding the pilot to detect obstacles in the flight path so as to avoid mid-air collisions. Such a system fits naturally with the development of an external vision system proposed by NASA for use in high-speed civil transport aircraft with limited cockpit visibility. The detection techniques should provide high detection probability for obstacles that can vary from subpixels to a few pixels in size, while maintaining a low false alarm probability in the presence of noise and severe background clutter. Furthermore, the detection algorithms must be able to report such obstacles in a timely fashion, imposing severe constraints on their execution time. For this purpose, we have implemented a number of algorithms to detect airborne obstacles using image sequences obtained from a camera mounted on an aircraft. This paper describes the methodology used for characterizing the performance of the dynamic programming obstacle detection algorithm and its special cases. The experimental results were obtained using several types of image sequences, with simulated and real backgrounds. The approximate performance of the algorithm is also theoretically derived using principles of statistical analysis in terms of the signal-to-noise ratio (SNR) required for the probabilities of false alarms and misdetections to be lower than prespecified values. The theoretical and experimental performance are compared in terms of the required SNR.
Atmospheric turbulence and sensor system effects on biometric algorithm performance
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Leonard, Kevin R.; Byrd, Kenneth A.; Potvin, Guy
2015-05-01
Biometric technologies composed of electro-optical/infrared (EO/IR) sensor systems and advanced matching algorithms are being used in various force protection/security and tactical surveillance applications. To date, most of these sensor systems have been widely used in controlled conditions with varying success (e.g., short range, uniform illumination, cooperative subjects). However, the limiting conditions of such systems have yet to be fully studied for long range applications and degraded imaging environments. Biometric technologies used for long range applications will invariably suffer from the effects of atmospheric turbulence degradation. Atmospheric turbulence causes blur, distortion and intensity fluctuations that can severely degrade image quality of electro-optic and thermal imaging systems and, for the case of biometric technology, translate to poor matching algorithm performance. In this paper, we evaluate the effects of atmospheric turbulence and sensor resolution on biometric matching algorithm performance. We use a subset of the Facial Recognition Technology (FERET) database and a commercial algorithm to analyze facial recognition performance on turbulence-degraded facial images. The goal of this work is to understand the feasibility of long-range facial recognition in degraded imaging conditions, and the utility of camera parameter trade studies to enable the design of the next generation of biometric sensor systems.
A Multifaceted Independent Performance Analysis of Facial Subspace Recognition Algorithms
Bajwa, Usama Ijaz; Taj, Imtiaz Ahmad; Anwar, Muhammad Waqas; Wang, Xuan
2013-01-01
Face recognition has emerged as the fastest growing biometric technology and has expanded considerably in the last few years. Many new algorithms and commercial systems have been proposed and developed. Most of them use Principal Component Analysis (PCA) as a base for their techniques. Different and even conflicting results have been reported by researchers comparing these algorithms. The purpose of this study is to provide an independent comparative analysis, considering both performance and computational complexity, of six appearance-based face recognition algorithms, namely PCA, 2DPCA, A2DPCA, (2D)2PCA, LPP and 2DLPP, under equal working conditions. This study was motivated by the lack of unbiased comprehensive comparative analysis of some recent subspace methods with diverse distance metric combinations. For comparison with other studies, the FERET, ORL and YALE databases have been used, with evaluation criteria as in the FERET evaluations, which closely simulate real-life scenarios. A comparison of results with previous studies is performed and anomalies are reported. An important contribution of this study is that it presents the suitable performance conditions for each of the algorithms under consideration. PMID:23451054
On the performances of computer vision algorithms on mobile platforms
NASA Astrophysics Data System (ADS)
Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.
2012-01-01
Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we have considered different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, image segmentation. Several tests have been done to compare the performance of the mobile platforms involved: Nokia N900, LG Optimus One, Samsung Galaxy SII.
Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.
NASA Astrophysics Data System (ADS)
Elliott, William Dewey
1995-01-01
A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N^2). One algorithm, which scales O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst case error bounds. The worst case error bounds for MDMA are derived in this work also. These bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions are converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region which allows efficient pre-computation of redundant information and prestorage of much of the cell-to-cell interaction. Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over
Performance evaluation of image segmentation algorithms on microscopic image data.
Beneš, Miroslav; Zitová, Barbara
2015-01-01
In our paper, we present a performance evaluation of image segmentation algorithms on microscopic image data. In spite of the existence of many algorithms for image data partitioning, there is as yet no universal "best" method. Moreover, images of microscopic samples can vary in character and quality, which can negatively influence the performance of image segmentation algorithms. Thus, the issue of selecting a suitable method for a given set of image data is of great interest. We carried out a large number of experiments with a variety of segmentation methods to evaluate the behaviour of individual approaches on the testing set of microscopic images (cross-section images taken in three different modalities from the field of art restoration). The segmentation results were assessed by several indices used for measuring the output quality of image segmentation algorithms. In the end, the benefit of a segmentation combination approach is studied, and the applicability of the achieved results to another representative of the microscopic data category, biological samples, is shown.
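The evaluation indices themselves are easy to state concretely; the Rand index, for example, scores a segmentation against a reference by pair-counting (whether this exact index was among those used in the paper is an assumption):

```python
from itertools import combinations

def rand_index(seg, ref):
    # fraction of pixel pairs on which two labelings agree: a pair is either
    # grouped together in both partitions or separated in both
    pairs = list(combinations(range(len(seg)), 2))
    agree = sum((seg[i] == seg[j]) == (ref[i] == ref[j]) for i, j in pairs)
    return agree / len(pairs)
```

Because only co-membership of pairs matters, the index is invariant to label permutations, which is exactly what a segmentation quality measure needs.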
Performance of redirected walking algorithms in a constrained virtual world.
Hodgson, Eric; Bachmann, Eric; Thrash, Tyler
2014-04-01
Redirected walking algorithms imperceptibly rotate a virtual scene about users of immersive virtual environment systems in order to guide them away from tracking area boundaries. Ideally, these distortions permit users to explore large unbounded virtual worlds while walking naturally within a physically limited space. Many potential virtual worlds are composed of corridors, passageways, or aisles. Assuming users are not expected to walk through walls or other objects within the virtual world, these constrained worlds limit the directions of travel as well as the number of opportunities to change direction. The resulting differences in user movement characteristics within the physical world have an impact on redirected walking algorithm performance. This work presents a comparison of generalized redirected walking (RDW) algorithm performance within a constrained virtual world. In contrast to previous studies involving unconstrained virtual worlds, experimental results indicate that the steer-to-orbit algorithm keeps users in a smaller area than the steer-to-center algorithm. Moreover, in comparison to steer-to-center, steer-to-orbit is shown to reduce potential wall contacts by over 29%.
Madenjian, Charles P.; David, Solomon R.; Pothoven, Steven A.
2012-01-01
We evaluated the performance of the Wisconsin bioenergetics model for lake trout Salvelinus namaycush that were fed ad libitum in laboratory tanks under regimes of low activity and high activity. In addition, we compared model performance under two different model algorithms: (1) balancing the lake trout energy budget on day t based on lake trout energy density on day t and (2) balancing the lake trout energy budget on day t based on lake trout energy density on day t + 1. Results indicated that the model significantly underestimated consumption for both inactive and active lake trout when algorithm 1 was used and that the degree of underestimation was similar for the two activity levels. In contrast, model performance substantially improved when using algorithm 2, as no detectable bias was found in model predictions of consumption for inactive fish and only a slight degree of overestimation was detected for active fish. The energy budget was accurately balanced by using algorithm 2 but not by using algorithm 1. Based on the results of this study, we recommend the use of algorithm 2 to estimate food consumption by fish in the field. Our study results highlight the importance of accurately accounting for changes in fish energy density when balancing the energy budget; furthermore, these results have implications for the science of evaluating fish bioenergetics model performance and for more accurate estimation of food consumption by fish in the field when fish energy density undergoes relatively rapid changes.
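The distinction between the two budget-balancing algorithms can be sketched with a toy daily energy budget; the variable names and the simplified budget (metabolic cost plus growth only) are illustrative assumptions, not the full Wisconsin model, which also accounts for egestion, excretion, and specific dynamic action:

```python
def daily_consumption(weights, energy_density, metabolic_cost, use_next_day_ed):
    """Toy daily energy-budget balance for a growing fish.

    Consumption (J) on day t must cover that day's metabolic cost plus
    the energy bound up in growth.  The two algorithm variants differ in
    which day's energy density (J/g) converts day-t growth (g) into
    energy: algorithm 1 uses ED on day t, algorithm 2 uses ED on day t+1.
    """
    consumption = []
    for t in range(len(weights) - 1):
        ed = energy_density[t + 1] if use_next_day_ed else energy_density[t]
        growth_energy = (weights[t + 1] - weights[t]) * ed
        consumption.append(growth_energy + metabolic_cost[t])
    return consumption
```

When energy density rises over time, as in rapidly growing fish, algorithm 1 prices growth at the older, lower energy density and therefore yields a smaller consumption estimate, consistent with the underestimation reported above.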
Proper bibeta ROC model: algorithm, software, and performance evaluation
NASA Astrophysics Data System (ADS)
Chen, Weijie; Hu, Nan
2016-03-01
Semi-parametric models are often used to fit data collected in receiver operating characteristic (ROC) experiments to obtain a smooth ROC curve and ROC parameters for statistical inference purposes. The proper bibeta model, as recently proposed by Mossman and Peng, enjoys several theoretical properties. In addition to having explicit density functions for the latent decision variable and an explicit functional form of the ROC curve, the two-parameter bibeta model also has simple closed-form expressions for the true-positive fraction (TPF), the false-positive fraction (FPF), and the area under the ROC curve (AUC). In this work, we developed a computational algorithm and R package implementing this model for ROC curve fitting. Our algorithm can deal with any ordinal data (categorical or continuous). To improve the accuracy, efficiency, and reliability of our software, we adopted several strategies in our computational algorithm, including: (1) the LABROC4 categorization to obtain the true maximum likelihood estimation of the ROC parameters; (2) a principled approach to initializing parameters; (3) analytical first-order and second-order derivatives of the likelihood function; (4) an efficient optimization procedure (the L-BFGS algorithm in the R package "nlopt"); and (5) an analytical delta method to estimate the variance of the AUC. We evaluated the performance of our software with intensive simulation studies and compared it with the conventional binormal and the proper binormal-likelihood-ratio models developed at the University of Chicago. Our simulation results indicate that our software is highly accurate, efficient, and reliable.
Performance comparison of motion estimation algorithms on digital video images
NASA Astrophysics Data System (ADS)
Ali, N. A.; Ja'Afar, A. S.; Anathakrishnan, K. S.
2009-12-01
This paper presents a comparative study of techniques for achieving high compression ratios in video coding. The focus is on Block Matching Motion Estimation (BMME) techniques, which are widely used in various coding standards. In BMME, the search pattern and the center-biased characteristics of the motion vector (MV) have a large impact on search speed and video quality. Three fast Block Matching Algorithms (BMAs) for motion estimation have been implemented, and their performance has been tested using MATLAB software. The Cross Diamond Search (CDS) is compared with the Full Search (FS) and Cross Search (CS) algorithms based on search points (search speed) and peak signal-to-noise ratio (PSNR) as the measure of video quality. The CDS algorithm was designed to fit the cross-center-biased (CCB) MV distribution characteristics of real-world video sequences. CDS compares favorably with the other algorithms for low-motion sequences in terms of speed, quality, and computational complexity. Keywords: block matching, motion estimation, digital video compression, cross-center biased, cross diamond search.
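For reference, the exhaustive Full Search baseline that fast BMAs such as CDS approximate can be sketched as follows; the block size, search range, and function names are illustrative:

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences: the usual block-matching cost."""
    return np.abs(block_a.astype(int) - block_b.astype(int)).sum()

def full_search(ref, cur, top, left, bsize=8, srange=4):
    """Exhaustive motion-vector search for one block of the current frame.

    Scans every candidate displacement within +/- srange pixels of the
    block's position in the reference frame and keeps the one with the
    minimum SAD.  Fast BMAs (Cross Search, Cross Diamond Search) visit
    only a subset of these candidates, exploiting the center-biased MV
    distribution.
    """
    block = cur[top:top + bsize, left:left + bsize]
    best_mv, best_cost = (0, 0), float("inf")
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate block falls outside the frame
            cost = sad(ref[y:y + bsize, x:x + bsize], block)
            if cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost
```

Full Search evaluates (2*srange + 1)^2 candidates per block, which is why search-point counts are the standard speed metric when comparing it against CDS and CS.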
A DRAM compiler algorithm for high performance VLSI embedded memories
NASA Technical Reports Server (NTRS)
Eldin, A. G.
1992-01-01
In many applications, the limited density of embedded SRAM does not allow integrating the memory on the same chip with other logic and functional blocks. In such cases, embedded DRAM provides the optimum combination of very high density, low power, and high performance. For ASICs to take full advantage of this design strategy, an efficient and highly reliable DRAM compiler must be used. The embedded DRAM architecture, cell, and peripheral circuit design considerations, and the algorithm of a high performance memory compiler, are presented.
Yan, Aimin; Wu, Xizeng; Liu, Hong
2010-01-01
Phase retrieval is an important task in x-ray phase contrast imaging. The robustness of phase retrieval is especially important for potential medical imaging applications such as phase contrast mammography. Recently the authors developed an iterative phase retrieval algorithm, the attenuation-partition based algorithm, for phase retrieval in inline phase-contrast imaging [1]. Applied to experimental images, the algorithm was proven to be fast and robust. However, a quantitative analysis of the performance of this new algorithm is desirable. In this work, we systematically compared the performance of this algorithm with two other widely used phase retrieval algorithms, namely the Gerchberg-Saxton (GS) algorithm and the Transport of Intensity Equation (TIE) algorithm. The systematic comparison is conducted by analyzing phase retrieval performance with a digital breast specimen model. We show that the proposed algorithm converges faster than the GS algorithm in the Fresnel diffraction regime and is more robust against image noise than the TIE algorithm. These results suggest the significance of the proposed algorithm for future medical applications with the x-ray phase contrast imaging technique. PMID:20720992
Performance analysis of approximate Affine Projection Algorithm in acoustic feedback cancellation.
Nikjoo S, Mohammad; Seyedi, Amir; Tehrani, Arash Saber
2008-01-01
Acoustic feedback is an annoying problem in several audio applications, and especially in hearing aids. Adaptive feedback cancellation techniques have attracted recent attention and show great promise in reducing the deleterious effects of feedback. In this paper, we investigated the performance of a class of adaptive feedback cancellation algorithms, namely the approximated Affine Projection Algorithms (APA). Mixed results were obtained with the natural speech and music data collected from five different commercial hearing aids in a variety of sub-oscillatory and oscillatory feedback conditions. The performance of the approximated APA was significantly better with music stimuli than with natural speech stimuli.
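A minimal sketch of one exact APA iteration may clarify the family being approximated; the step size mu and regularization delta are illustrative, and practical approximated-APA variants replace the matrix inverse with a cheaper recursive approximation:

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-6):
    """One iteration of the (exact) affine projection algorithm.

    w : (L,) current adaptive-filter estimate
    X : (L, P) matrix whose columns are the last P input vectors
    d : (P,) desired outputs for those P input vectors

    The update projects w so that the P most recent a priori errors are
    driven toward zero simultaneously (P = 1 reduces to NLMS).
    """
    e = d - X.T @ w                                   # a priori error vector
    gain = X @ np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
    return w + mu * gain
```

In feedback cancellation, w models the acoustic feedback path and d is the microphone signal; the projection order P trades convergence speed against the cost of the (P x P) solve, which is what the approximated variants attack.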
A performance comparison of integration algorithms in simulating flexible structures
NASA Technical Reports Server (NTRS)
Howe, R. M.
1989-01-01
Asymptotic formulas for the characteristic root errors, as well as transfer function gain and phase errors, are presented for a number of traditional and new integration methods. Normalized stability regions in the lambda-h plane are compared for the various methods. In particular, it is shown that a modified form of Euler integration with root matching is an especially efficient method for simulating lightly damped structural modes. The method has been used successfully for structural bending modes in the real-time simulation of missiles. Performance of this algorithm is compared with other special algorithms, including the state-transition method. A predictor-corrector version of the modified Euler algorithm permits it to be extended to the simulation of nonlinear models of the type likely to be obtained when using the discretized structure approach. Performance of the different integration methods is also compared for integration step sizes larger than those for which the asymptotic formulas are valid. It is concluded that many traditional integration methods, such as RK-4, are not competitive in the simulation of lightly damped structures.
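The difficulty plain Euler integration has with lightly damped modes, which motivates the modified Euler method studied here, can be illustrated by integrating a single damped oscillator with plain Euler and classical RK-4; this is a generic sketch, not the paper's root-matching algorithm:

```python
import math

def simulate(step, zeta=0.01, omega=1.0, h=0.1, steps=200):
    """Integrate the lightly damped mode x'' + 2*zeta*omega*x' + omega^2*x = 0
    from x(0) = 1, x'(0) = 0, and return x at the final time."""
    def f(x, v):
        return v, -2.0 * zeta * omega * v - omega ** 2 * x
    x, v = 1.0, 0.0
    for _ in range(steps):
        x, v = step(f, x, v, h)
    return x

def euler(f, x, v, h):
    """Plain explicit Euler: its characteristic roots lie outside the unit
    circle for oscillatory modes, so the amplitude spuriously grows."""
    dx, dv = f(x, v)
    return x + h * dx, v + h * dv

def rk4(f, x, v, h):
    """Classical fourth-order Runge-Kutta step."""
    k1 = f(x, v)
    k2 = f(x + h / 2 * k1[0], v + h / 2 * k1[1])
    k3 = f(x + h / 2 * k2[0], v + h / 2 * k2[1])
    k4 = f(x + h * k3[0], v + h * k3[1])
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            v + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
```

For this mode, Euler's numerical damping is negative (the solution grows), while RK-4 tracks the exact decaying oscillation closely; root-matched methods aim for RK-4-like root accuracy at a fraction of the derivative evaluations.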
New Classification Method Based on Support-Significant Association Rules Algorithm
NASA Astrophysics Data System (ADS)
Li, Guoxin; Shi, Wen
One of the most well-studied problems in data mining is mining for association rules. Research has also introduced association rule mining methods to conduct classification tasks, and such classification methods can be applied to customer segmentation. Currently, most association rule mining methods are based on a support-confidence framework, in which rules satisfying both minimum support and minimum confidence are returned to the analyst as strong association rules. However, this type of association rule mining lacks a rigorous statistical guarantee and can even be misleading. A new classification model for customer segmentation, based on an association rule mining algorithm, is proposed in this paper. The new model is based on the support-significance association rule mining method, in which the confidence measure for an association rule is replaced by its statistical significance, a better evaluation standard for association rules. A data experiment on customer segmentation using UCI data indicated the effectiveness of the new model.
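The support-confidence quantities that the significance measure replaces can be computed directly; this is a generic sketch of the standard definitions, not the proposed support-significance method:

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item of the itemset."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Estimated P(consequent | antecedent): the confidence of the rule
    antecedent -> consequent in the support-confidence framework."""
    return (support(transactions, set(antecedent) | set(consequent))
            / support(transactions, antecedent))
```

A support-significance approach keeps the minimum-support filter but replaces the confidence threshold with a statistical test (for example, testing whether the antecedent and consequent co-occur more often than independence would predict), which guards against the spurious "strong" rules the plain framework can emit.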
NASA Astrophysics Data System (ADS)
Habarulema, J. B.; McKinnell, L.-A.
2012-05-01
In this work, results obtained by investigating the application of different neural network backpropagation training algorithms are presented. This was done to assess the performance accuracy of each training algorithm in total electron content (TEC) estimation, using identical datasets in the model development and verification processes. The investigated training algorithms are standard backpropagation (SBP), backpropagation with weight delay (BPWD), backpropagation with momentum (BPM) term, backpropagation with chunkwise weight update (BPC), and backpropagation for batch (BPB) training. These five algorithms are inbuilt functions within the Stuttgart Neural Network Simulator (SNNS), and the main objective was to find the training algorithm that generates the minimum error between the TEC derived from Global Positioning System (GPS) observations and the modelled TEC data. Another investigated algorithm is the MatLab based Levenberg-Marquardt backpropagation (L-MBP), which achieves convergence after the least number of iterations during training. In this paper, neural network (NN) models were developed using hourly TEC data (for 8 years: 2000-2007) derived from GPS observations over a receiver station located at Sutherland (SUTH) (32.38° S, 20.81° E), South Africa. Verification of the NN models for all algorithms considered was performed on both "seen" and "unseen" data. Hourly TEC values over SUTH for 2003 formed the "seen" dataset. The "unseen" dataset consisted of hourly TEC data for 2002 and 2008 over Cape Town (CPTN) (33.95° S, 18.47° E) and SUTH, respectively. The models' verification showed that all algorithms investigated provide statistically comparable results, but differ significantly in the time required to achieve convergence during input-output data training/learning. This paper therefore provides a guide for neural network users in choosing appropriate algorithms based on the computational capabilities available for research.
Performance evaluation of operational atmospheric correction algorithms over the East China Seas
NASA Astrophysics Data System (ADS)
He, Shuangyan; He, Mingxia; Fischer, Jürgen
2017-01-01
To acquire high-quality operational data products for Chinese in-orbit and scheduled ocean color sensors, the performances of two operational atmospheric correction (AC) algorithms (ESA MEGS 7.4.1 and NASA SeaDAS 6.1) were evaluated over the East China Seas (ECS) using MERIS data. The spectral remote sensing reflectance Rrs(λ), aerosol optical thickness (AOT), and Ångström exponent (α) retrieved using the two algorithms were validated against in situ measurements obtained between May 2002 and October 2009. Match-ups of Rrs, AOT, and α between the in situ and MERIS data were obtained through strict exclusion criteria. Statistical analysis of Rrs(λ) showed a mean percentage difference (MPD) of 9%-13% in the 490-560 nm spectral range, and significant overestimation was observed at 413 nm (MPD>72%). The AOTs were overestimated (MPD>32%), and although the ESA algorithm outperformed the NASA algorithm in the blue-green bands, the situation was reversed in the red-near-infrared bands. The value of α was clearly underestimated by the ESA algorithm (MPD=41%) but not by the NASA algorithm (MPD=35%). To clarify why the NASA algorithm performed better in the retrieval of α, density scatter plots of α versus single scattering albedo (SSA) were prepared. These α-SSA density scatter plots showed that the aerosol models used by the NASA algorithm are more applicable over the ECS than those used by the ESA algorithm, although neither set of aerosol models is suitable for the ECS region. The results of this study provide a reference to both data users and data agencies regarding the use of operational data products and the investigation into the improvement of current AC schemes over the ECS.
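The MPD statistic quoted above is presumably the standard mean relative difference between retrieved and in situ values; a sketch under that assumption (the paper may normalize slightly differently):

```python
import numpy as np

def mean_percentage_difference(retrieved, in_situ):
    """Mean percentage difference (MPD) between retrieved and in situ
    values, in percent: mean(|retrieved - in_situ| / |in_situ|) * 100.

    This common definition is an assumption; validation studies sometimes
    use signed differences or normalize by the mean of the two values.
    """
    retrieved = np.asarray(retrieved, dtype=float)
    in_situ = np.asarray(in_situ, dtype=float)
    return float(np.mean(np.abs(retrieved - in_situ) / np.abs(in_situ)) * 100.0)
```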
Jimenez, Edward Steven,
2013-09-01
The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize, since performance bottlenecks occur that are non-existent in single-threaded algorithms, such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e., fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.
NASA Astrophysics Data System (ADS)
Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.
2015-08-01
Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to missing data gaps. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough to assist other machine learning (ML) regressors (extreme learning machines, support vector machines, and Gaussian process regression) in reconstructing Hs. The results show that all the ML methods explored achieve a good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).
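The wrapper idea of evolving binary feature masks can be sketched with a tiny genetic algorithm; the fitness function here is a stand-in for the paper's extreme-learning-machine reconstruction error, and all parameters and names are illustrative:

```python
import random

def ga_select_features(n_features, fitness, pop_size=20, gens=30, p_mut=0.1, seed=0):
    """Tiny elitist genetic algorithm over binary feature masks.

    Each individual is a 0/1 mask over the candidate wave parameters;
    `fitness` returns the reconstruction error of a regressor trained on
    the selected subset (lower is better).  Illustrative only: the paper
    hybridizes the GA with an extreme learning machine as the evaluator.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                       # best (lowest error) first
        survivors = pop[:pop_size // 2]             # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_features)      # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_features):             # bit-flip mutation
                if rng.random() < p_mut:
                    child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)
```

Because the best mask always survives, the final answer is never worse than the best random initial mask; the evolutionary pressure then searches the exponentially large subset space far more cheaply than exhaustive enumeration.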
Petillion, Saskia; Swinnen, Ans; Defraene, Gilles; Verhoeven, Karolien; Weltens, Caroline; Van den Heuvel, Frank
2014-07-08
The comparison of the pencil beam dose calculation algorithm with modified Batho heterogeneity correction (PBC-MB) and the analytical anisotropic algorithm (AAA), and the mutual comparison of advanced dose calculation algorithms used in breast radiotherapy, have focused on the differences between the physical dose distributions. Studies on the radiobiological impact of the algorithm (both on tumor control and on the prediction of moderate breast fibrosis) are lacking. We, therefore, investigated the radiobiological impact of the dose calculation algorithm in whole breast radiotherapy. The clinical dose distributions of 30 breast cancer patients, calculated with PBC-MB, were recalculated with fixed monitor units using more advanced algorithms: AAA and Acuros XB. For the latter, both dose reporting modes were used (i.e., dose-to-medium and dose-to-water). Next, the tumor control probability (TCP) and the normal tissue complication probability (NTCP) of each dose distribution were calculated with the Poisson model and with the relative seriality model, respectively. The endpoint for the NTCP calculation was moderate breast fibrosis five years post treatment. The differences were checked for significance with the paired t-test. The more advanced algorithms predicted a significantly lower TCP and NTCP of moderate breast fibrosis than found during the corresponding clinical follow-up study based on PBC calculations. The differences varied between 1% and 2.1% for the TCP and between 2.9% and 5.5% for the NTCP of moderate breast fibrosis. The significant differences were eliminated by determination of algorithm-specific model parameters using least square fitting. Application of the new parameters to a second group of 30 breast cancer patients proved their appropriateness. In this study, we assessed the impact of the dose calculation algorithms used in whole breast radiotherapy on the parameters of the radiobiological models. The radiobiological impact was eliminated by the determination of algorithm-specific model parameters.
Performance Trend of Different Algorithms for Structural Design Optimization
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.
1996-01-01
Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of different optimizers through the development of a computer code, CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium, and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimization technique SUMT) outperformed the others. At the optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and alleviating this discrepancy can improve the efficiency of optimizers.
Zhang, Lei; Wang, Linlin; Du, Bochuan; Wang, Tianjiao; Tian, Pu; Tian, Suyan
2016-01-01
Among non-small cell lung cancer (NSCLC), adenocarcinoma (AC), and squamous cell carcinoma (SCC) are two major histology subtypes, accounting for roughly 40% and 30% of all lung cancer cases, respectively. Since AC and SCC differ in their cell of origin, location within the lung, and growth pattern, they are considered as distinct diseases. Gene expression signatures have been demonstrated to be an effective tool for distinguishing AC and SCC. Gene set analysis is regarded as irrelevant to the identification of gene expression signatures. Nevertheless, we found that one specific gene set analysis method, significance analysis of microarray-gene set reduction (SAMGSR), can be adopted directly to select relevant features and to construct gene expression signatures. In this study, we applied SAMGSR to a NSCLC gene expression dataset. When compared with several novel feature selection algorithms, for example, LASSO, SAMGSR has equivalent or better performance in terms of predictive ability and model parsimony. Therefore, SAMGSR is a feature selection algorithm, indeed. Additionally, we applied SAMGSR to AC and SCC subtypes separately to discriminate their respective stages, that is, stage II versus stage I. Few overlaps between these two resulting gene signatures illustrate that AC and SCC are technically distinct diseases. Therefore, stratified analyses on subtypes are recommended when diagnostic or prognostic signatures of these two NSCLC subtypes are constructed. PMID:27446945
Novel adaptive playout algorithm for voice over IP applications and performance assessment over WANs
NASA Astrophysics Data System (ADS)
Hintoglu, Mustafa H.; Ergul, Faruk R.
2001-07-01
Special purpose hardware and application software have been developed to implement and test Voice over IP protocols. The hardware has interface units to which ISDN telephone sets can be connected, as well as Ethernet and RS-232 interfaces for connections to LANs and controlling PCs. The software has modules specific to telephone operations and simulation activities. The simulator acts as a WAN environment, generating delays in delivering speech packets according to a specified delay distribution. Using the WAN simulator, different algorithms can be tested and their performances compared. The novel algorithm developed correlates silence periods with received voice packets and delays playout until confidence is established that a significant phrase or sentence is stored in the playout buffer. The performance of this approach has been found to be either superior or comparable to the performances of existing algorithms tested. The new algorithm has the advantage that at least a complete phrase or sentence is played out, thereby increasing intelligibility considerably. The penalty of larger delays, compared to published algorithms operating under bursty traffic conditions, is compensated by the higher quality of service offered. In the paper, details of the developed system and the test results obtained are presented.
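Classic adaptive playout estimators track a smoothed network delay and its variation and set the playout point a few variations beyond the mean; the sketch below shows that standard autoregressive form (an illustrative baseline, not the paper's phrase/sentence-buffering algorithm, whose silence-correlation logic is omitted):

```python
def playout_delay_estimate(delays, alpha=0.998002, beta=4.0):
    """Autoregressive playout-delay estimate for one talkspurt.

    d tracks the smoothed one-way network delay of arriving packets and
    v its mean absolute variation; the playout deadline is d + beta * v,
    so beta controls the late-packet-loss vs. added-latency trade-off.
    alpha and beta are the illustrative values commonly quoted for this
    family of estimators.
    """
    d = v = None
    for n in delays:
        if d is None:
            d, v = float(n), 0.0
        else:
            d = alpha * d + (1.0 - alpha) * n
            v = alpha * v + (1.0 - alpha) * abs(d - n)
    return d + beta * v
```

Under jitter-free delivery the estimate collapses to the network delay itself; bursty jitter inflates v and pushes the playout point out, which is the behavior an adaptive playout algorithm trades against intelligibility.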
GPU based cloud system for high-performance arrhythmia detection with parallel k-NN algorithm.
Tae Joon Jun; Hyun Ji Park; Hyuk Yoo; Young-Hak Kim; Daeyoung Kim
2016-08-01
In this paper, we propose a GPU-based cloud system for high-performance arrhythmia detection. The Pan-Tompkins algorithm is used for QRS detection, and we optimized the beat classification algorithm with K-Nearest Neighbor (K-NN). To support high-performance beat classification on the system, we parallelized the beat classification algorithm with CUDA to execute it on virtualized GPU devices on the cloud system. The MIT-BIH Arrhythmia database is used for validation of the algorithm. The system achieved a detection rate of about 93.5%, which is comparable to previous studies, while our algorithm shows 2.5 times faster execution time compared to a CPU-only detection algorithm.
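The beat-classification step reduces to a standard majority-vote k-NN over per-beat feature vectors; a CPU sketch of that logic follows (illustrative names and toy features; the CUDA version parallelizes the distance computations across many query beats):

```python
import numpy as np

def knn_classify(train_X, train_y, query, k=3):
    """Majority-vote k-NN on feature vectors (e.g. features of one beat).

    Computes Euclidean distances from the query to every training beat,
    takes the k nearest, and returns the most frequent label among them.
    """
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

Since every query-to-training distance is independent, the distance matrix maps directly onto GPU threads, which is where the reported speedup over a CPU-only implementation comes from.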
Sankaran, Ramanan; Angel, Jordan; Brown, W. Michael
2015-04-08
The growth in size of networked high performance computers along with novel accelerator-based node architectures has further emphasized the importance of communication efficiency in high performance computing. The world's largest high performance computers are usually operated as shared user facilities due to the costs of acquisition and operation. Applications are scheduled for execution in a shared environment and are placed on nodes that are not necessarily contiguous on the interconnect. Furthermore, the placement of tasks on the nodes allocated by the scheduler is sub-optimal, leading to performance loss and variability. Here, we investigate the impact of task placement on the performance of two massively parallel application codes on the Titan supercomputer, a turbulent combustion flow solver (S3D) and a molecular dynamics code (LAMMPS). Benchmark studies show a significant deviation from ideal weak scaling and variability in performance. The inter-task communication distance was determined to be one of the significant contributors to the performance degradation and variability. A genetic algorithm-based parallel optimization technique was used to optimize the task ordering. This technique provides an improved placement of the tasks on the nodes, taking into account the application's communication topology and the system interconnect topology. As a result, application benchmarks after task reordering through genetic algorithm show a significant improvement in performance and reduction in variability, therefore enabling the applications to achieve better time to solution and scalability on Titan during production.
The performance of the quantum adiabatic algorithm on spike Hamiltonians
NASA Astrophysics Data System (ADS)
Kong, Linghang; Crosson, Elizabeth
Spike Hamiltonians arise from optimization instances for which the adiabatic algorithm provably outperforms classical simulated annealing. In this work, we study the efficiency of the adiabatic algorithm for solving the "Hamming weight with a spike" problem by analyzing the scaling of the spectral gap at the critical point for various sizes of the barrier. Our main result is a rigorous lower bound on the minimum spectral gap for the adiabatic evolution when the bit-symmetric cost function has a thin but polynomially high barrier, which is based on a comparison argument and an improved variational ansatz for the ground state. We also adapt the discrete WKB method for the case of abruptly changing potentials and compare it with the predictions of the spin coherent instanton method which was previously used by Farhi, Goldstone and Gutmann. Finally, our improved ansatz for the ground state leads to a method for predicting the location of avoided crossings in the excited energy states of the thin spike Hamiltonian, and we use a recursion relation to understand the ordering of some of these avoided crossings as a step towards analyzing the previously observed diabatic cascade phenomenon.
Graff, Mario; Poli, Riccardo; Flores, Juan J
2013-01-01
Modeling the behavior of algorithms is the realm of evolutionary algorithm theory. From a practitioner's point of view, theory must provide some guidelines regarding which algorithm/parameters to use in order to solve a particular problem. Unfortunately, most theoretical models of evolutionary algorithms are difficult to apply to realistic situations. However, in recent work (Graff and Poli, 2008, 2010), where we developed a method to practically estimate the performance of evolutionary program-induction algorithms (EPAs), we started addressing this issue. The method was quite general; however, it suffered from some limitations: it required the identification of a set of reference problems, it required hand picking a distance measure in each particular domain, and the resulting models were opaque, typically being linear combinations of 100 features or more. In this paper, we propose a significant improvement of this technique that overcomes the three limitations of our previous method. We achieve this through the use of a novel set of features for assessing problem difficulty for EPAs which are very general, essentially based on the notion of finite difference. To show the capabilities of our technique and to compare it with our previous performance models, we create models for the same two important classes of problems (symbolic regression on rational functions and Boolean function induction) used in our previous work. We model a variety of EPAs. The comparison showed that for the majority of the algorithms and problem classes, the new method produced much simpler and more accurate models than before. To further illustrate the practicality of the technique and its generality (beyond EPAs), we have also used it to predict the performance of both autoregressive models and EPAs on the problem of wind speed forecasting, obtaining simpler and more accurate models that in all cases outperform our previous performance models.
Performance Analysis of Apriori Algorithm with Different Data Structures on Hadoop Cluster
NASA Astrophysics Data System (ADS)
Singh, Sudhakar; Garg, Rakhi; Mishra, P. K.
2015-10-01
Mining frequent itemsets from massive datasets has always been one of the most important problems in data mining. Apriori is the most popular and simplest algorithm for frequent itemset mining. To enhance the efficiency and scalability of Apriori, a number of algorithms have been proposed addressing the design of efficient data structures, minimizing database scans, and parallel and distributed processing. MapReduce is the emerging parallel and distributed technology for processing big datasets on a Hadoop cluster. To mine big datasets it is essential to re-design data mining algorithms on this new paradigm. In this paper, we implement three variations of the Apriori algorithm using the data structures hash tree, trie, and hash table trie (i.e., trie with a hash technique) on the MapReduce paradigm. We emphasize and investigate the significance of these three data structures for the Apriori algorithm on a Hadoop cluster, which has not been given attention yet. Experiments carried out on both real-life and synthetic datasets show that the hash table trie data structure performs far better than trie and hash tree in terms of execution time, and that the hash tree performs worst of the three.
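The level-wise candidate generation and pruning that Apriori performs, independent of which counting structure is used, can be sketched as follows. This plain single-machine version is an illustrative assumption; it omits the MapReduce distribution and the hash tree/trie counting structures that the paper actually studies:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Classic Apriori: level-wise candidate generation, pruning, and support counting."""
    transactions = [frozenset(t) for t in transactions]
    # frequent 1-itemsets
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s for s, c in counts.items() if c >= min_support}
    result = {s: c for s, c in counts.items() if c >= min_support}
    k = 2
    while frequent:
        # join step: unions of frequent (k-1)-itemsets that form k-itemsets
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        # prune step: every (k-1)-subset of a surviving candidate must be frequent
        candidates = {c for c in candidates
                      if all(frozenset(sub) in frequent for sub in combinations(c, k - 1))}
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {c for c, n in counts.items() if n >= min_support}
        result.update({c: n for c, n in counts.items() if n >= min_support})
        k += 1
    return result
```

The data-structure comparison in the paper concerns the support-counting step (the `c <= t` test here), which dominates the run time on large datasets.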
NASA Astrophysics Data System (ADS)
Hou, Zhen-Long; Wei, Xiao-Hui; Huang, Da-Nian; Sun, Xu
2015-09-01
We apply reweighted inversion focusing to full tensor gravity gradiometry data using message-passing interface (MPI) and compute unified device architecture (CUDA) parallel computing algorithms, and then combine MPI with CUDA to formulate a hybrid algorithm. Parallel computing performance metrics are introduced to analyze and compare the performance of the algorithms, and we summarize the rules for the performance evaluation of parallel algorithms. We use model data and real data from the Vinton salt dome to test the algorithms. We find a good match between model and real density data, and verify the high efficiency and feasibility of parallel computing algorithms in the inversion of full tensor gravity gradiometry data.
Burg algorithm for enhancing measurement performance in wavelength scanning interferometry
NASA Astrophysics Data System (ADS)
Woodcock, Rebecca; Muhamedsalih, Hussam; Martin, Haydn; Jiang, Xiangqian
2016-06-01
Wavelength scanning interferometry (WSI) is a technique for measuring surface topography that is capable of resolving step discontinuities and does not require any mechanical movement of the apparatus or measurand, allowing measurement times to be reduced substantially in comparison to related techniques. The axial (height) resolution and measurement range in WSI depend in part on the algorithm used to evaluate the spectral interferograms. Previously reported Fourier transform based methods have a number of limitations, which are in part due to the short data lengths obtained. This paper compares the performance of auto-regressive model based techniques for frequency estimation in WSI. Specifically, the Burg method is compared with established Fourier transform based approaches using both simulation and experimental data taken from a WSI measurement of a step-height sample.
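A minimal sketch of Burg's method, the auto-regressive frequency estimator the paper evaluates: the lattice recursion below fits AR coefficients by minimizing the combined forward and backward prediction error, and a frequency is then read off the roots of the AR polynomial. The function names and the order-2 usage are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def burg(x, order):
    """Burg's method: fit AR coefficients by minimizing the summed forward and
    backward prediction error power via a lattice (reflection) recursion."""
    x = np.asarray(x, dtype=float)
    a = np.array([1.0])
    f, b = x[1:].copy(), x[:-1].copy()   # forward / backward prediction errors
    for _ in range(order):
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))  # reflection coefficient
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]              # Levinson-style coefficient update
        f, b = f + k * b, b + k * f      # tuple assignment: old errors used on the RHS
        f, b = f[1:], b[:-1]             # shrink the valid error range by one sample
    return a

def dominant_frequency(x, order=2):
    """Estimate a sinusoid's normalized frequency from the AR polynomial roots."""
    roots = np.roots(burg(x, order))
    return float(np.max(np.abs(np.angle(roots))) / (2 * np.pi))
```

On short records like WSI spectral interferograms, this parametric estimator can resolve a frequency far below the Fourier resolution limit of 1/N, which is the motivation for the comparison in the paper.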
Violante-Carvalho, Nelson
2005-12-01
Synthetic Aperture Radar (SAR) onboard satellites is the only source of directional wave spectra with continuous and global coverage. Millions of SAR Wave Mode (SWM) imagettes have been acquired since the launch in the early 1990s of the first European Remote Sensing Satellite ERS-1 and its successors ERS-2 and ENVISAT, which has opened up many possibilities, especially for wave data assimilation purposes. The main aim of data assimilation is to improve forecasting by introducing available observations into the modeling procedures in order to minimize the differences between model estimates and measurements. However, there are limitations in the retrieval of the directional spectrum from SAR images due to nonlinearities in the mapping mechanism. The Max-Planck Institut (MPI) scheme, the first proposed and most widely used algorithm to retrieve directional wave spectra from SAR images, is employed to compare significant wave heights retrieved from ERS-1 SAR against buoy measurements and against the WAM wave model. It is shown that for periods shorter than 12 seconds the WAM model performs better than the MPI scheme, despite the fact that the model is used as first guess for the MPI method; that is, the retrieval degrades the first guess. For periods longer than 12 seconds, the part of the spectrum that is directly measured by SAR, the performance of the MPI scheme is at least as good as that of the WAM model.
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2002-01-01
As part of the NASA Aviation Safety Program, a unique model-based diagnostics method that employs neural networks and genetic algorithms for aircraft engine performance diagnostics has been developed and demonstrated at the NASA Glenn Research Center against a nonlinear gas turbine engine model. Neural networks are applied to estimate the internal health condition of the engine, and genetic algorithms are used for sensor fault detection, isolation, and quantification. This hybrid architecture combines the excellent nonlinear estimation capabilities of neural networks with the capability to rank the likelihood of various faults given a specific sensor suite signature. The method requires a significantly smaller data training set than a neural network approach alone does, and it performs the combined engine health monitoring objectives of performance diagnostics and sensor fault detection and isolation in the presence of nominal and degraded engine health conditions.
Yeguas, Enrique; Joan-Arinyo, Robert; Luzón, M. Victoria
2011-01-01
The availability of a model to measure the performance of evolutionary algorithms is very important, especially when these algorithms are applied to solve problems with high computational requirements. That model would compute an index of the quality of the solution reached by the algorithm as a function of run-time. Conversely, if we fix an index of quality for the solution, the model would give the number of iterations to be expected. In this work, we develop a statistical model to describe the performance of PBIL and CHC evolutionary algorithms applied to solve the root identification problem. This problem is basic in constraint-based, geometric parametric modeling, as an instance of general constraint-satisfaction problems. The performance model is empirically validated over a benchmark with very large search spaces.
The control algorithm improving performance of electric load simulator
NASA Astrophysics Data System (ADS)
Guo, Chenxia; Yang, Ruifeng; Zhang, Peng; Fu, Mengyao
2017-01-01
In order to improve the dynamic performance and signal tracking accuracy of an electric load simulator, the influence of the moment of inertia, stiffness, friction, gaps and other factors on system performance was analyzed in this paper on the basis of the working principle of the load simulator. A PID controller based on a wavelet neural network was used to compensate the friction nonlinearity, while a gap inverse model was used to compensate the gap nonlinearity. The compensation results were simulated in MATLAB. The simulations showed that the follow-up behavior of the system's sine response improved after compensation: the tracking error was significantly reduced, the accuracy was greatly improved, and the dynamic performance of the system was improved.
Verma, T.; Painuly, N.K.; Mishra, S.P.; Shajahan, M.; Singh, N.; Bhatt, M.L.B.; Jamal, N.; Pant, M.C.
2016-01-01
Background: Inclusion of inhomogeneity corrections in intensity-modulated small fields always makes conformal irradiation of lung tumors very complicated for accurate dose delivery. Objective: In the present study, the performance of five algorithms, namely Monte Carlo, Pencil Beam, Convolution, Fast Superposition and Superposition, was evaluated in lung cancer Intensity Modulated Radiotherapy planning. Materials and Methods: Treatment plans for ten lung cancer patients previously planned with the Monte Carlo algorithm were re-planned using the same treatment planning indices (gantry angle, rank, power, etc.) in the other four algorithms. Results: The values of the radiotherapy planning parameters were recorded for all ten patients: mean dose, volume of the 95% isodose line, Conformity Index and Homogeneity Index for the target; maximum dose, mean dose and % volume receiving 20 Gy or more for the contralateral lung; % volume receiving 30 Gy or more, % volume receiving 25 Gy or more and mean dose received by the heart; % volume receiving 35 Gy or more, % volume receiving 50 Gy or more and mean dose to the esophagus; % volume receiving 45 Gy or more and maximum dose received by the spinal cord; and total monitor units and volume of the 50% isodose line. Performance of the different algorithms was also evaluated statistically. Conclusion: The MC and PB algorithms were found to be better as far as tumor coverage, dose distribution homogeneity in the Planning Target Volume and minimal dose to organs at risk are concerned. The Superposition algorithm was found to be better than Convolution and Fast Superposition. For centrally located tumors, it is recommended to use Monte Carlo algorithms for the optimal use of radiotherapy. PMID:27853720
2011-04-01
Comparison of Performance Effectiveness of Linear Control Algorithms Developed for a Simplified Ground Vehicle Suspension System. Ross Brown, Motile Robotics, Inc., research contractor at U.S...
Chan, J.C.-W.; Huang, C.; DeFries, R.
2001-01-01
Two ensemble methods, bagging and boosting, were investigated for improving algorithm performance. Our results confirmed the theoretical explanation [1] that bagging improves unstable, but not stable, learning algorithms. While boosting enhanced accuracy of a weak learner, its behavior is subject to the characteristics of each learning algorithm.
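The bagging procedure referenced here can be sketched with a deliberately unstable base learner, a one-dimensional decision stump, trained on bootstrap resamples and combined by majority vote. All names and the stump learner are illustrative assumptions, not the classifiers used in the study:

```python
import random
from collections import Counter

def train_stump(data):
    """Fit a 1-D threshold classifier ("decision stump") by exhaustive search.
    data: list of (x, label) with label in {0, 1}."""
    best = None
    for threshold in sorted({x for x, _ in data}):
        for sign in (1, -1):
            correct = sum(1 for x, y in data
                          if (1 if sign * (x - threshold) >= 0 else 0) == y)
            if best is None or correct > best[0]:
                best = (correct, threshold, sign)
    _, threshold, sign = best
    return lambda x: 1 if sign * (x - threshold) >= 0 else 0

def bagged_predict(data, n_models, x, seed=0):
    """Bagging: majority vote of stumps trained on bootstrap resamples of data."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_models):
        boot = [rng.choice(data) for _ in data]   # bootstrap resample (with replacement)
        votes.append(train_stump(boot)(x))
    return Counter(votes).most_common(1)[0][0]
```

Averaging over resamples reduces the variance of an unstable learner, which is the mechanism behind the abstract's observation that bagging helps unstable but not stable algorithms.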
Performance of a parallel algorithm for standard cell placement on the Intel Hypercube
NASA Technical Reports Server (NTRS)
Jones, Mark; Banerjee, Prithviraj
1987-01-01
A parallel simulated annealing algorithm for standard cell placement on the Intel Hypercube is presented. A novel tree broadcasting strategy is used extensively for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than uniprocessor simulated annealing algorithms.
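The simulated-annealing placement loop at the heart of this work can be sketched sequentially as below, using half-perimeter wirelength as the cost and single-cell move/swap proposals. The parallel tree-broadcast machinery is not reproduced, and all names are illustrative:

```python
import math
import random

def wirelength(placement, nets):
    """Half-perimeter wirelength summed over all nets (each net = set of cell ids)."""
    total = 0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal_placement(n_cells, grid, nets, t0=5.0, cooling=0.995, moves=4000, seed=0):
    """Simulated annealing: move a random cell to a random slot (swapping if the
    slot is occupied); accept uphill moves with probability exp(-dCost/T)."""
    rng = random.Random(seed)
    slots = [(x, y) for x in range(grid) for y in range(grid)]
    place = rng.sample(slots, n_cells)           # distinct starting locations
    cost, t = wirelength(place, nets), t0
    for _ in range(moves):
        i = rng.randrange(n_cells)
        new_slot = rng.choice(slots)
        j = place.index(new_slot) if new_slot in place else None
        old = place[i]
        place[i] = new_slot
        if j is not None and j != i:
            place[j] = old                       # displaced cell takes the old slot
        new_cost = wirelength(place, nets)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost                      # accept
        else:
            place[i] = old                       # reject: undo the move
            if j is not None and j != i:
                place[j] = new_slot
        t = max(t * cooling, 1e-3)               # geometric cooling schedule
    return place, cost
```

The parallel version in the paper runs many such move evaluations concurrently and uses the tree broadcast to keep each processor's view of cell locations consistent.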
Chung, King; Zeng, Fan-Gang; Acker, Kyle N
2006-10-01
Although cochlear implant (CI) users have enjoyed good speech recognition in quiet, they still have difficulties understanding speech in noise. We conducted three experiments to determine whether a directional microphone and an adaptive multichannel noise reduction algorithm could enhance CI performance in noise and whether Speech Transmission Index (STI) can be used to predict CI performance in various acoustic and signal processing conditions. In Experiment I, CI users listened to speech in noise processed by 4 hearing aid settings: omni-directional microphone, omni-directional microphone plus noise reduction, directional microphone, and directional microphone plus noise reduction. The directional microphone significantly improved speech recognition in noise. Both directional microphone and noise reduction algorithm improved overall preference. In Experiment II, normal hearing individuals listened to the recorded speech produced by 4- or 8-channel CI simulations. The 8-channel simulation yielded similar speech recognition results as in Experiment I, whereas the 4-channel simulation produced no significant difference among the 4 settings. In Experiment III, we examined the relationship between STIs and speech recognition. The results suggested that STI could predict actual and simulated CI speech intelligibility with acoustic degradation and the directional microphone, but not the noise reduction algorithm. Implications for intelligibility enhancement are discussed.
Sera White
2012-04-01
This thesis presents a research study using one year of driving data obtained from plug-in hybrid electric vehicles (PHEV) located in Sacramento and San Francisco, California to determine the effectiveness of incorporating geographic information into vehicle performance algorithms. Sacramento and San Francisco were chosen because of the availability of high resolution (1/9 arc second) digital elevation data. First, I present a method for obtaining instantaneous road slope, given a latitude and longitude, and introduce its use into common driving intensity algorithms. I show that for trips characterized by >40m of net elevation change (from key on to key off), the use of instantaneous road slope significantly changes the results of driving intensity calculations. For trips exhibiting elevation loss, algorithms ignoring road slope overestimated driving intensity by as much as 211 Wh/mile, while for trips exhibiting elevation gain these algorithms underestimated driving intensity by as much as 333 Wh/mile. Second, I describe and test an algorithm that incorporates vehicle route type into computations of city and highway fuel economy. Route type was determined by intersecting trip GPS points with ESRI StreetMap road types and assigning each trip as either city or highway route type according to whichever road type comprised the largest distance traveled. The fuel economy results produced by the geographic classification were compared to the fuel economy results produced by algorithms that assign route type based on average speed or driving style. Most results were within 1 mile per gallon (~3%) of one another; the largest difference was 1.4 miles per gallon for charge depleting highway trips. The methods for acquiring and using geographic data introduced in this thesis will enable other vehicle technology researchers to incorporate geographic data into their research problems.
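The instantaneous road-slope idea can be sketched as a central-difference gradient on an elevation grid, plus the gravitational potential-energy term that a slope contributes to driving intensity. This is a simplified illustration with assumed names and a frictionless energy term; the thesis's actual algorithms and DEM handling are not reproduced:

```python
import math

def road_slope(elev_grid, cell_size, lat_idx, lon_idx):
    """Slope magnitude (rise/run) from a digital elevation grid via central differences.
    elev_grid[row][col] is elevation in meters; cell_size is grid spacing in meters."""
    dz_dx = (elev_grid[lat_idx][lon_idx + 1] - elev_grid[lat_idx][lon_idx - 1]) / (2 * cell_size)
    dz_dy = (elev_grid[lat_idx + 1][lon_idx] - elev_grid[lat_idx - 1][lon_idx]) / (2 * cell_size)
    return math.hypot(dz_dx, dz_dy)

def grade_energy_wh_per_mile(mass_kg, grade):
    """Extra energy per mile needed to climb a constant grade (gravity only,
    losses ignored) -- the term that slope-blind intensity algorithms omit."""
    g = 9.81                    # m/s^2
    meters_per_mile = 1609.34
    joules = mass_kg * g * grade * meters_per_mile
    return joules / 3600.0      # J -> Wh
```

For a hypothetical 1500 kg vehicle, a steady 2% grade adds roughly 130 Wh/mile of climbing energy, the same order as the 211-333 Wh/mile discrepancies reported above.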
Performance of new GPU-based scan-conversion algorithm implemented using OpenGL.
Steelman, William A; Richard, William D
2011-04-01
A new GPU-based scan-conversion algorithm implemented using OpenGL is described. The compute performance of this new algorithm running on a modern GPU is compared to the performance of three common scan-conversion algorithms (nearest-neighbor, linear interpolation and bilinear interpolation) implemented in software using a modern CPU. The quality of the images produced by the algorithm, as measured by signal-to-noise power, is also compared to the quality of the images produced using these three common scan-conversion algorithms.
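Two of the reference interpolators mentioned, nearest-neighbor and bilinear, can be sketched as plain per-pixel sampling functions. These are illustrative CPU versions with assumed names, not the paper's OpenGL implementation:

```python
def nearest_sample(img, x, y):
    """Nearest-neighbor lookup at fractional image coordinates (x, y)."""
    return img[round(y)][round(x)]

def bilinear_sample(img, x, y):
    """Bilinear interpolation between the four pixels surrounding (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)   # clamp at the image border
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0             # fractional offsets within the cell
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bottom * fy
```

Ultrasound scan conversion applies such a sampler for every Cartesian output pixel, mapping it back to fractional (range, angle) coordinates in the polar acquisition grid; the GPU version does the same per-fragment.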
NASA Astrophysics Data System (ADS)
Singh, Sartajvir; Talwar, Rajneesh
2016-12-01
Detection of snow cover changes is vital for avalanche hazard analysis and for flash floods that arise due to variation in temperature. Hence, multitemporal change detection is one of the practical means to estimate snow cover changes over a large area using remotely sensed data. Previous studies have examined how the accuracy of change detection analysis is affected by different topographic effects over the Northwestern Indian Himalayas. The present work emphasizes the intercomparison of different topographic effects on the discrimination performance of fuzzy-based change vector analysis (FCVA), a change detection algorithm that extracts change magnitude and change direction from pixels that may have multiple or partial class memberships. The qualitative and quantitative analysis of the proposed FCVA algorithm is performed with and without topographic correction. The experimental outcomes confirmed that in the change category discrimination procedure, FCVA with topographic correction achieved 86.8% overall accuracy, and a 4.8% decline (82% overall accuracy) was found for FCVA without topographic correction. This study suggests that by incorporating a topographic correction model for satellite imagery of mountainous regions, the performance of the FCVA algorithm can be significantly improved in terms of determining actual change categories.
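The change-magnitude and change-direction quantities at the core of change vector analysis can be sketched per pixel as follows. This is a crisp two-band illustration with assumed names; the paper's fuzzy membership extension is not reproduced:

```python
import math

def change_vector(pixel_t1, pixel_t2):
    """Change vector analysis: magnitude and direction of the per-pixel spectral
    change between two acquisition dates (pixel = tuple of band values)."""
    diffs = [b2 - b1 for b1, b2 in zip(pixel_t1, pixel_t2)]
    magnitude = math.sqrt(sum(d * d for d in diffs))
    # direction for the two-band case: angle of the change vector in band space
    direction = math.degrees(math.atan2(diffs[1], diffs[0])) if len(diffs) >= 2 else 0.0
    return magnitude, direction

def classify_change(magnitude, threshold):
    """Magnitude thresholding separates change from no-change pixels; the
    direction then indicates the category of change."""
    return "change" if magnitude > threshold else "no-change"
```

FCVA replaces the hard threshold with fuzzy memberships so a pixel can partially belong to several change categories, which is what the discrimination accuracy figures above measure.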
Hira, Zena M; Trigeorgis, George; Gillies, Duncan F
2014-01-01
Microarray databases are a large source of genetic data, which, upon proper analysis, could enhance our understanding of biology and medicine. Many microarray experiments have been designed to investigate the genetic mechanisms of cancer, and analytical approaches have been applied in order to classify different types of cancer or distinguish between cancerous and non-cancerous tissue. However, microarrays are high-dimensional datasets with high levels of noise, and this causes problems when using machine learning methods. A popular approach to this problem is to search for a set of features that will simplify the structure and to some degree remove the noise from the data. The most widely used approach to feature extraction is principal component analysis (PCA), which assumes a multivariate Gaussian model of the data. More recently, non-linear methods have been investigated. Among these, manifold learning algorithms, for example Isomap, aim to project the data from a higher dimensional space onto a lower dimensional one. We have proposed a priori manifold learning for finding a manifold in which a representative set of microarray data is fused with relevant data taken from the KEGG pathway database. Once the manifold has been constructed, the raw microarray data is projected onto it and clustering and classification can take place. In contrast to earlier fusion based methods, the prior knowledge from the KEGG database is not used in, and does not bias, the classification process; it merely acts as an aid to find the best space in which to search the data. In our experiments we have found that using our new manifold method gives better classification results than using either PCA or conventional Isomap.
NASA Astrophysics Data System (ADS)
Zhao, Wei; Niu, Tianye; Xing, Lei; Xie, Yaoqin; Xiong, Guanglei; Elmore, Kimberly; Zhu, Jun; Wang, Luyao; Min, James K.
2016-02-01
Increased noise is a general concern for dual-energy material decomposition. Here, we develop an image-domain material decomposition algorithm for dual-energy CT (DECT) by incorporating an edge-preserving filter into the Local HighlY constrained backPRojection reconstruction (HYPR-LR) framework. With effective use of the non-local mean, the proposed algorithm, which is referred to as HYPR-NLM, reduces the noise in dual-energy decomposition while preserving the accuracy of quantitative measurement and spatial resolution of the material-specific dual-energy images. We demonstrate the noise reduction and resolution preservation of the algorithm with an iodine concentration numerical phantom by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). We also show the superior performance of HYPR-NLM over the existing methods by using two sets of cardiac perfusion imaging data. The DECT material decomposition comparison study shows that all four algorithms yield acceptable quantitative measurements of iodine concentration. Direct matrix inversion yields the highest noise level, followed by HYPR-LR and Iter-DECT. HYPR-NLM in an iterative formulation significantly reduces image noise, and the image noise is comparable to or even lower than that generated using Iter-DECT. For the HYPR-NLM method, there are marginal edge effects in the difference image, suggesting the high-frequency details are well preserved. In addition, when the search window size increases from 11 × 11 to 19 × 19, there are no significant changes or marginal edge effects in the HYPR-NLM difference images. The conclusions drawn from the comparison study are: (1) HYPR-NLM significantly reduces the DECT material decomposition noise while preserving quantitative measurements and high-frequency edge information, and (2) HYPR-NLM is robust with respect to parameter selection.
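The direct matrix inversion baseline named in the comparison can be sketched as a per-pixel 2x2 linear solve. This is an illustrative sketch with assumed names and an assumed attenuation matrix, not the paper's implementation:

```python
import numpy as np

def direct_decomposition(img_low, img_high, A):
    """Image-domain two-material decomposition by direct matrix inversion:
    per pixel, [low, high]^T = A @ [m1, m2]^T with A the 2x2 matrix of
    material attenuation coefficients at the two energies."""
    Ainv = np.linalg.inv(np.asarray(A, dtype=float))
    stacked = np.stack([np.ravel(img_low), np.ravel(img_high)])  # shape (2, Npix)
    m = Ainv @ stacked
    shape = np.shape(img_low)
    return m[0].reshape(shape), m[1].reshape(shape)
```

Because the inversion is applied independently at each pixel with no spatial regularization, measurement noise propagates (and is typically amplified) into the material images, which is why the abstract reports the highest noise for this baseline.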
Subsonic flight test evaluation of a performance seeking control algorithm on an F-15 airplane
NASA Technical Reports Server (NTRS)
Gilyard, Glenn B.; Orme, John S.
1992-01-01
The subsonic flight test evaluation phase of the NASA F-15 (powered by F100 engines) performance seeking control program was completed for single-engine operation at part- and military-power settings. The subsonic performance seeking control algorithm optimizes the quasi-steady-state performance of the propulsion system for three modes of operation: the minimum fuel flow mode minimizes fuel consumption, the minimum fan turbine inlet temperature mode reduces engine operating temperature, and the maximum thrust mode maximizes thrust at military power. Decreases in thrust-specific fuel consumption of 1 to 2 percent were measured in the minimum fuel flow mode; these fuel savings are significant, especially for supersonic cruise aircraft. Decreases of up to approximately 100 degrees Rankine in fan turbine inlet temperature were measured in the minimum temperature mode. Temperature reductions of this magnitude would more than double turbine life if inlet temperature were the only life factor. Measured thrust increases of up to approximately 15 percent in the maximum thrust mode cause substantial increases in aircraft acceleration. The system dynamics of the closed-loop algorithm operation were good. The subsonic flight phase has validated the performance seeking control technology, which can significantly benefit the next generation of fighter and transport aircraft.
NASA Technical Reports Server (NTRS)
Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra
1989-01-01
In Part 1, the architecture of NETRA is presented, along with a performance evaluation of NETRA using several common vision algorithms. The performance of algorithms when they are mapped onto one cluster is described. It is shown that SIMD, MIMD, and systolic algorithms can be easily mapped onto processor clusters, and almost linear speedups are possible. For some algorithms, analytical performance results are compared with implementation performance results, and the analysis is observed to be very accurate. A performance analysis of parallel algorithms mapped across clusters is also presented. Mappings across clusters illustrate the importance and use of shared as well as distributed memory in achieving high performance. The parameters for evaluation are derived from the characteristics of the parallel algorithms, and these parameters are used to evaluate the alternative communication strategies in NETRA. Furthermore, the effect of communication interference from other processors in the system on the execution of an algorithm is studied. Using the analysis, the performance of many algorithms with different characteristics is presented. It is observed that if communication speeds are matched with the computation speeds, good speedups are possible when algorithms are mapped across clusters.
2014-09-01
Estimating Driver Performance Using Multiple Electroencephalography (EEG)-Based Regression Algorithms. Gregory Apker, Brent Lance, Scott... Aberdeen Proving Ground, MD 21005-5425. ARL-TR-7074, September 2014.
High performance genetic algorithm for VLSI circuit partitioning
NASA Astrophysics Data System (ADS)
Dinu, Simona
2016-12-01
Partitioning is one of the biggest challenges in computer-aided design for VLSI circuits (very large-scale integrated circuits). This work addresses the min-cut balanced circuit partitioning problem: dividing the graph that models the circuit into k almost equal-sized sub-graphs while minimizing the number of edges cut, i.e., the number of edges connecting the sub-graphs. The problem may be formulated as a combinatorial optimization problem. It is known to be NP-hard, and thus it is important to design an efficient heuristic algorithm to solve it. The approach proposed in this study is a parallel implementation of a genetic algorithm, namely an island model. The information exchange between the evolving subpopulations is modeled using a fuzzy controller, which determines an optimal balance between exploration and exploitation of the solution space. The results of simulations show that the proposed algorithm outperforms the standard sequential genetic algorithm both in terms of solution quality and convergence speed. As a direction for future study, this research can be further extended to incorporate local search operators embedding problem-specific knowledge. In addition, the adaptive configuration of mutation and crossover rates is another avenue for future research.
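An island-model genetic algorithm for min-cut balanced partitioning can be sketched as below: each island evolves balanced assignments independently and periodically migrates its best individual to a neighboring island. This is a simplified two-way-partition sketch with assumed names; the paper's fuzzy controller for steering the information exchange is not modeled:

```python
import random

def cut_size(assign, edges):
    """Number of edges crossing the two partitions (assign[v] in {0, 1})."""
    return sum(1 for u, v in edges if assign[u] != assign[v])

def balanced_random(n, rng):
    bits = [0] * (n // 2) + [1] * (n - n // 2)
    rng.shuffle(bits)
    return bits

def island_ga(n, edges, islands=2, pop=20, gens=100, migrate_every=10, seed=0):
    rng = random.Random(seed)
    pops = [[balanced_random(n, rng) for _ in range(pop)] for _ in range(islands)]
    for g in range(gens):
        for isl in pops:
            isl.sort(key=lambda a: cut_size(a, edges))
            del isl[pop // 2:]                        # keep the better half
            while len(isl) < pop:
                child = rng.choice(isl[:5])[:]
                # balance-preserving mutation: swap one 0-vertex with one 1-vertex
                zeros = [i for i, b in enumerate(child) if b == 0]
                ones = [i for i, b in enumerate(child) if b == 1]
                i, j = rng.choice(zeros), rng.choice(ones)
                child[i], child[j] = child[j], child[i]
                isl.append(child)
        if g % migrate_every == 0:                    # ring migration of each island's best
            bests = [min(isl, key=lambda a: cut_size(a, edges))[:] for isl in pops]
            for k, isl in enumerate(pops):
                isl[-1] = bests[(k - 1) % len(pops)]
    return min((a for isl in pops for a in isl), key=lambda a: cut_size(a, edges))
```

Migration lets islands explore different regions of the search space while still sharing good building blocks; the paper's contribution is tuning that exchange adaptively with a fuzzy controller.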
Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.
1999-01-01
Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc, at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc correctly identified the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of the empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reversed faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res. 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier
Performance Comparison of Superresolution Array Processing Algorithms. Revised
2007-11-02
A.J. Barabell, J. Capon, D.F. DeLong, K.D. Senne (Group 44); J.R. Johnson (Group 96). Project Report... adaptive superresolution direction finding and spatial nulling to support signal copy in the presence of strong cochannel interference. The need for such... superresolution array processing methods have their origin in spectral estimation for time series. Since the sampling of a function in time is analogous to...
Performance Measurement and Analysis of Certain Search Algorithms
1979-05-01
Chapter 3 extends the worst-case tree search model of Pohl and others to arbitrary heuristic functions, resulting in cost formulas whose arguments... cases tested. Similarly, Nilsson, Pohl, and Vanderbrug conjectured that increasing the value of a weighting parameter W in A* search will decrease the... algorithm schema [Pohl 1970a] are essentially the same as A*. The 8-puzzle can be modeled exactly as a collection of points (tile configurations) and...
Performance of an Adaptive Matched Filter Using the Griffiths Algorithm
1988-12-01
GOES-R Geostationary Lightning Mapper Performance Specifications and Algorithms
NASA Technical Reports Server (NTRS)
Mach, Douglas M.; Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Petersen, William A.; Boldi, Robert A.; Carey, Lawrence D.; Bateman, Monte G.; Buchler, Dennis E.; McCaul, E. William, Jr.
2008-01-01
The Geostationary Lightning Mapper (GLM) is a single channel, near-IR imager/optical transient event detector, used to detect, locate and measure total lightning activity over the full disk. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series will carry a GLM that will provide continuous day and night observations of lightning. The mission objectives for the GLM are to: (1) provide continuous, full-disk lightning measurements for storm warning and nowcasting, (2) provide early warning of tornadic activity, and (3) accumulate a long-term database to track decadal changes of lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13-year data record of global lightning activity. The GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms and applications. The science data will consist of lightning "events", "groups", and "flashes". The algorithm is being designed to be an efficient user of the computational resources; this may include parallelization of the code and the concept of sub-dividing the GLM FOV into regions to be processed in parallel. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama, Oklahoma, Central Florida, and the Washington DC Metropolitan area) are being used to develop the prelaunch algorithms and applications, and also to improve our knowledge of thunderstorm initiation and evolution.
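The event-to-group-to-flash hierarchy can be illustrated with a toy greedy spatio-temporal clustering: optical events that occur close together in time and space are merged into the same flash. This is purely an assumption for illustration; the operational GLM clustering rules differ:

```python
def cluster_events(events, dt, dr):
    """Toy greedy spatio-temporal clustering of optical events into flashes.
    events: (time_s, lat_deg, lon_deg) tuples. An event joins a flash if it is
    within dt seconds and dr degrees of any event already in that flash."""
    flashes = []
    for t, lat, lon in sorted(events):
        for flash in flashes:
            if any(abs(t - t2) <= dt and abs(lat - la2) <= dr and abs(lon - lo2) <= dr
                   for t2, la2, lo2 in flash):
                flash.append((t, lat, lon))
                break
        else:
            flashes.append([(t, lat, lon)])   # no nearby flash: start a new one
    return flashes
```

Processing events per region, as the abstract's FOV-subdivision idea suggests, would let clusterings like this run in parallel since distant regions cannot share a flash.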
Performance of a community detection algorithm based on semidefinite programming
NASA Astrophysics Data System (ADS)
Ricci-Tersenghi, Federico; Javanmard, Adel; Montanari, Andrea
2016-03-01
The problem of detecting communities in a graph is perhaps one of the most studied inference problems, given its simplicity and widespread diffusion among several disciplines. A very common benchmark for this problem is the stochastic block model, or planted partition problem, where a phase transition takes place in the detection of the planted partition as the signal-to-noise ratio is changed. Optimal algorithms for the detection exist which are based on spectral methods, but we show these are extremely sensitive to slight modifications in the generative model. Recently Javanmard, Montanari and Ricci-Tersenghi [1] have used statistical physics arguments and numerical simulations to show that finding communities in the stochastic block model via semidefinite programming is quasi-optimal. Further, the resulting semidefinite relaxation can be solved efficiently, and is very robust with respect to changes in the generative model. In this paper we study in detail several practical aspects of this new algorithm based on semidefinite programming for the detection of the planted partition. The algorithm turns out to be very fast, allowing the solution of problems with O(10^5) variables in a few seconds on a laptop computer.
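As an illustrative aside, the planted-partition setting above can be demonstrated in miniature. The sketch below is a minimal *spectral* baseline (power iteration on the mean-centered adjacency matrix), not the paper's SDP relaxation; the two-clique toy graph and all names are our own constructions.

```python
# Toy planted-partition recovery via power iteration on B = A - (d_avg/n) J.
# This is the spectral baseline the abstract contrasts with SDP, applied to a
# deterministic toy graph: two 5-cliques joined by a single edge.

def planted_partition_spectral(adj):
    n = len(adj)
    d_avg = sum(map(sum, adj)) / n            # average degree
    v = [float(i + 1) for i in range(n)]      # generic start vector
    for _ in range(200):                      # power iteration on B
        s = sum(v)
        w = [sum(adj[i][j] * v[j] for j in range(n)) - d_avg * s / n
             for i in range(n)]
        norm = max(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]
    # the sign pattern of the top eigenvector of B encodes the split
    return [1 if x >= 0 else -1 for x in v]

n = 10
adj = [[0.0] * n for _ in range(n)]
for block in (range(0, 5), range(5, 10)):
    for i in block:
        for j in block:
            if i != j:
                adj[i][j] = 1.0
adj[4][5] = adj[5][4] = 1.0                   # one cross-community edge

labels = planted_partition_spectral(adj)
```

Subtracting the mean degree term suppresses the uninformative "all-ones" direction, so the balanced community vector dominates the iteration.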
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia; Coraor, Lee
2000-01-01
The research reported here is a part of NASA's Synthetic Vision System (SVS) project for the development of a High Speed Civil Transport Aircraft (HSCT). One of the components of the SVS is a module for detection of potential obstacles in the aircraft's flight path by analyzing the images captured by an on-board camera in real-time. Design of such a module includes the selection and characterization of robust, reliable, and fast techniques and their implementation for execution in real-time. This report describes the results of our research in realizing such a design. It is organized into three parts. Part I. Data modeling and camera characterization; Part II. Algorithms for detecting airborne obstacles; and Part III. Real time implementation of obstacle detection algorithms on the Datacube MaxPCI architecture. A list of publications resulting from this grant as well as a list of relevant publications resulting from prior NASA grants on this topic are presented.
Kang, Ki-woon; Chang, Hyuk-jae; Shim, Hackjoon; Kim, Young-jin; Choi, Byoung-wook; Yang, Woo-in; Shim, Jee-young; Ha, Jongwon; Chung, Namsik
2012-04-01
Automatic computer-assisted detection (auto-CAD) of significant coronary artery disease (CAD) in coronary computed tomography angiography (cCTA) has been shown to have relatively high accuracy. However, to date, scarce data are available regarding the performance of auto-CAD in the setting of acute chest pain. This study sought to demonstrate the feasibility of an auto-CAD algorithm for cCTA in patients presenting with acute chest pain. We retrospectively investigated 398 consecutive patients (229 male, mean age 50±21 years) who had acute chest pain and underwent cCTA between Apr 2007 and Jan 2011 in the emergency department (ED). All cCTA data were analyzed using an auto-CAD algorithm for the detection of >50% CAD on cCTA. The accuracy of auto-CAD was compared with the formal radiology report. In 380 of 398 patients (18 were excluded due to failure of data processing), per-patient analysis of auto-CAD revealed the following: sensitivity 94%, specificity 63%, positive predictive value (PPV) 76%, and negative predictive value (NPV) 89%. After the exclusion of 37 cases that were interpreted as invalid by the auto-CAD algorithm, the NPV further increased to 97% after accounting for false-negative cases in the formal radiology report that were confirmed by subsequent invasive angiography during the index visit. We successfully demonstrated the high accuracy of an auto-CAD algorithm, compared with the formal radiology report, for the detection of >50% CAD on cCTA in the setting of acute chest pain. The auto-CAD algorithm can be used to facilitate the decision-making process in the ED.
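The four accuracy figures reported above come from a standard confusion-matrix calculation, sketched below. The counts are made up for illustration (chosen so that sensitivity works out to 94%); they are not the study's actual data.

```python
# Standard diagnostic-accuracy metrics from confusion-matrix counts:
# sensitivity, specificity, positive and negative predictive value.

def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# illustrative counts only, not the paper's data
m = diagnostic_metrics(tp=94, fp=30, tn=51, fn=6)
```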
Monte Carlo Particle Transport: Algorithm and Performance Overview
Gentile, N; Procassini, R; Scott, H
2005-06-02
Monte Carlo methods are frequently used for neutron and radiation transport. These methods have several advantages, such as relative ease of programming and dealing with complex meshes. Disadvantages include long run times and statistical noise. Monte Carlo photon transport calculations also often suffer from inaccuracies in matter temperature due to the lack of implicitness. In this paper we discuss the Monte Carlo algorithm as it is applied to neutron and photon transport, detail the differences between neutron and photon Monte Carlo, and give an overview of the ways the numerical method has been modified to deal with issues that arise in photon Monte Carlo simulations.
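The core sampling loop of analog Monte Carlo transport can be shown in a few lines. The sketch below is a deliberately minimal 1-D purely absorbing slab (exponential path-length sampling only: no scattering, no implicitness); parameters are illustrative and not from the paper.

```python
# Minimal analog Monte Carlo transport: photons stream through a purely
# absorbing 1-D slab; the distance to first collision is sampled from the
# exponential distribution s = -ln(xi)/mu.
import math
import random

def slab_transmission(mu, thickness, n_particles, seed=1):
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):
        s = -math.log(rng.random()) / mu   # free path length
        if s > thickness:                  # no collision inside the slab
            transmitted += 1
    return transmitted / n_particles

t = slab_transmission(mu=1.0, thickness=1.0, n_particles=200_000)
# the analytic transmission is exp(-mu * thickness) ~ 0.368
```

The gap between the estimate and the analytic answer illustrates the statistical noise the abstract lists as a disadvantage; it shrinks like 1/sqrt(N).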
Xie, Lin; Cui, Xiaowei; Zhao, Sihao; Lu, Mingquan
2017-01-01
It is well known that the multipath effect remains a dominant error source affecting the positioning accuracy of Global Navigation Satellite System (GNSS) receivers. Significant efforts have been made by researchers and receiver manufacturers to mitigate multipath error in the past decades. Recently, multipath mitigation using dual-polarization antennas has become a research hotspot because it provides another degree of freedom to distinguish the line-of-sight (LOS) signal from the LOS and multipath composite signal without extensively increasing the complexity of the receiver. A number of multipath mitigation techniques using dual-polarization antennas have been proposed, all of which report performance improvements over single-polarization methods. However, due to the unpredictability of multipath, techniques based on dual-polarization are not always effective, and few studies discuss the conditions under which multipath mitigation using a dual-polarization antenna can outperform that using a single-polarization antenna; this is a fundamental question for dual-polarization multipath mitigation (DPMM) and for the design of multipath mitigation algorithms. In this paper we analyze the characteristics of the signal received by a dual-polarization antenna and use maximum likelihood estimation (MLE) to assess the theoretical performance of DPMM in different received-signal cases. Based on the assessment we answer this fundamental question and identify the dual-polarization antenna's capability in mitigating short-delay multipath, the most challenging type of multipath for the majority of multipath mitigation techniques. Considering these effective conditions, we propose a dual-polarization sequential iterative maximum likelihood estimation (DP-SIMLE) algorithm for DPMM. The simulation results verify our theory and show the superior performance of the proposed DP-SIMLE algorithm over the traditional one using only an RHCP antenna.
Xie, Lin; Cui, Xiaowei; Zhao, Sihao; Lu, Mingquan
2017-02-13
It is well known that the multipath effect remains a dominant error source affecting the positioning accuracy of Global Navigation Satellite System (GNSS) receivers. Significant efforts have been made by researchers and receiver manufacturers to mitigate multipath error in the past decades. Recently, multipath mitigation using dual-polarization antennas has become a research hotspot because it provides another degree of freedom to distinguish the line-of-sight (LOS) signal from the LOS and multipath composite signal without extensively increasing the complexity of the receiver. A number of multipath mitigation techniques using dual-polarization antennas have been proposed, all of which report performance improvements over single-polarization methods. However, due to the unpredictability of multipath, techniques based on dual-polarization are not always effective, and few studies discuss the conditions under which multipath mitigation using a dual-polarization antenna can outperform that using a single-polarization antenna; this is a fundamental question for dual-polarization multipath mitigation (DPMM) and for the design of multipath mitigation algorithms. In this paper we analyze the characteristics of the signal received by a dual-polarization antenna and use maximum likelihood estimation (MLE) to assess the theoretical performance of DPMM in different received-signal cases. Based on the assessment we answer this fundamental question and identify the dual-polarization antenna's capability in mitigating short-delay multipath, the most challenging type of multipath for the majority of multipath mitigation techniques. Considering these effective conditions, we propose a dual-polarization sequential iterative maximum likelihood estimation (DP-SIMLE) algorithm for DPMM. The simulation results verify our theory and show the superior performance of the proposed DP-SIMLE algorithm over the traditional one using only an RHCP antenna.
NASA Astrophysics Data System (ADS)
Hou, Rui; Yu, Junle
2011-12-01
Optical burst switching (OBS) has been regarded as the next-generation optical switching technology. In this paper, the routing problem in OBS based on the particle swarm optimization (PSO) algorithm is studied and analyzed. Simulation results indicate that the PSO-based routing algorithm outperforms the conventional shortest-path-first algorithm in space cost and computation cost. These conclusions have theoretical significance for the improvement of OBS routing protocols.
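For readers unfamiliar with PSO, the sketch below shows the bare algorithm (inertia plus cognitive and social pulls) minimizing a toy quadratic cost. It does not reproduce the paper's OBS route encoding; coefficients and names are our own conventional choices.

```python
# Bare-bones particle swarm optimization minimizing a toy cost function.
# Each particle keeps a velocity, its personal best, and tracks the swarm's
# global best; standard inertia/cognitive/social coefficients.
import random

def pso(cost, dim, n_particles=20, iters=200, seed=3):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=2)
```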
NASA Astrophysics Data System (ADS)
Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.
2015-08-01
We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS)-based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAllenging Minisatellite Payload (CHAMP) and Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction of random errors (standard deviations) of optimized bending angles down to about half of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
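The statistical-optimization step named above combines a background profile with observations, weighted by their error covariances. The sketch below reduces this to the textbook scalar-per-level case with uncorrelated errors; the paper's actual method uses geographically varying covariance *matrices*, which we do not reproduce.

```python
# Textbook core of statistical optimization, x = xb + B (B + R)^-1 (y - xb),
# reduced to independent scalar levels: each level is weighted by its
# background variance var_b against its observation variance var_o.

def optimize_profile(xb, y, var_b, var_o):
    return [b + vb / (vb + vo) * (obs - b)
            for b, obs, vb, vo in zip(xb, y, var_b, var_o)]

# level 0: observation trusted (tiny var_o); level 1: background trusted
x = optimize_profile(xb=[1.0, 1.0], y=[2.0, 2.0],
                     var_b=[1e6, 1e-6], var_o=[1e-6, 1e6])
```

The estimate follows whichever source has the smaller error variance at each level, which is exactly why realistic covariance estimates matter for high-altitude initialization.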
NASA Astrophysics Data System (ADS)
Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.
2015-01-01
We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS) based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAMP and COSMIC measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction in random errors (standard deviations) of optimized bending angles down to about two-thirds of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
Performance of intensity-based non-normalized pointwise algorithms in dynamic speckle analysis.
Stoykova, E; Nazarova, D; Berberova, N; Gotchev, A
2015-09-21
Intensity-based pointwise non-normalized algorithms for 2D evaluation of activity in optical metrology with dynamic speckle analysis are studied and compared. They are applied to a temporal sequence of correlated speckle patterns formed under laser illumination of the object surface. The performance of each algorithm is assessed through the histogram of estimates it produces. A new algorithm is proposed that provides the same quality of the 2D activity map for less computational effort. The algorithms are applied both to synthetic and experimental data.
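One generic member of the algorithm family discussed above is the per-pixel sum of absolute successive frame differences. The sketch below is only an illustrative example of a non-normalized pointwise estimator, not the specific new algorithm the abstract proposes.

```python
# Non-normalized pointwise activity estimator for a stack of speckle frames:
# at each pixel, accumulate |I_{k+1} - I_k| over the temporal sequence.
# High values flag temporally active (changing) surface regions.

def activity_map(frames):
    rows, cols = len(frames[0]), len(frames[0][0])
    act = [[0.0] * cols for _ in range(rows)]
    for k in range(len(frames) - 1):
        for r in range(rows):
            for c in range(cols):
                act[r][c] += abs(frames[k + 1][r][c] - frames[k][r][c])
    return act

static = [[[5, 5], [5, 5]] for _ in range(4)]   # no temporal change
active = [[[k, 0], [0, k]] for k in range(4)]   # fluctuating diagonal
```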
Binocular self-calibration performed via adaptive genetic algorithm based on laser line imaging
NASA Astrophysics Data System (ADS)
Apolinar Muñoz Rodríguez, J.; Mejía Alanís, Francisco Carlos
2016-07-01
An accurate technique to perform binocular self-calibration by means of an adaptive genetic algorithm based on a laser line is presented. In this calibration, the genetic algorithm computes the vision parameters through simulated binary crossover (SBX). To carry it out, the genetic algorithm constructs an objective function from the binocular geometry of the laser line projection. Then, the SBX minimizes the objective function via chromosome recombination. In this algorithm, an adaptive procedure determines the search space via the line position to obtain the minimum convergence. Thus, the chromosomes of vision parameters provide the minimization. The aim of the proposed adaptive genetic algorithm is to calibrate and recalibrate the binocular setup without references and physical measurements. This procedure improves on traditional genetic algorithms, which calibrate the vision parameters by means of references and an unknown search space, because the proposed adaptive algorithm avoids the errors produced by missing references. Additionally, three-dimensional vision is carried out based on the laser line position and the vision parameters. The contribution of the proposed algorithm is corroborated by an evaluation of the accuracy of the binocular calibration, which is compared with that of traditional genetic algorithms.
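The SBX operator named in the abstract has a compact standard form, sketched below for a single real-valued gene. A useful property, checked in the usage example, is that the two children stay centered on the parents (c1 + c2 = p1 + p2); the distribution index eta controls how close children stay to their parents.

```python
# Simulated binary crossover (SBX) for one real-valued gene.  The spread
# factor beta is sampled from the standard SBX polynomial distribution
# controlled by the index eta; larger eta keeps children nearer the parents.
import random

def sbx(p1, p2, eta=2.0, rng=random):
    u = rng.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return c1, c2

random.seed(7)
c1, c2 = sbx(1.0, 3.0)
```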
On the estimation algorithm used in adaptive performance optimization of turbofan engines
NASA Technical Reports Server (NTRS)
Espana, Martin D.; Gilyard, Glenn B.
1993-01-01
The performance seeking control algorithm is designed to continuously optimize the performance of propulsion systems. The algorithm uses a nominal model of the propulsion system and estimates, in flight, the engine deviation parameters characterizing the engine's deviations with respect to nominal conditions. In practice, because of measurement biases and/or model uncertainties, the estimated engine deviation parameters may not reflect the engine's actual off-nominal condition. This factor necessarily impacts the overall performance seeking control scheme and is exacerbated by the open-loop character of the algorithm. The effects produced by unknown measurement biases on the estimation algorithm are evaluated. This evaluation allows for identification of the most critical measurements for application of the performance seeking control algorithm to an F100 engine. An observability study establishes an equivalence relation between the biases and the engine deviation parameters; therefore, it cannot be decided whether the estimated engine deviation parameters represent the actual engine deviation or simply reflect the measurement biases. A new algorithm, based on the engine's (steady-state) optimization model, is proposed and tested with flight data. Compared with previous Kalman filter schemes based on local engine dynamic models, the new algorithm is easier to design and tune, and it reduces the computational burden of the onboard computer.
On the performance of variable forgetting factor recursive least-squares algorithms
NASA Astrophysics Data System (ADS)
Elisei-Iliescu, Camelia; Paleologu, Constantin; Tamaş, Rǎzvan
2016-12-01
The recursive least-squares (RLS) is a very popular adaptive algorithm, which is widely used in many system identification problems. The parameter that crucially influences the performance of the RLS algorithm is the forgetting factor. The value of this parameter leads to a compromise between tracking, misadjustment, and stability. In this paper, we present some insights on the performance of variable forgetting factor RLS (VFF-RLS) algorithms, in the context of system identification. Besides the classical RLS algorithm, we mainly focus on two recently proposed VFF-RLS algorithms. The novelty of the experimental setup is that we use real-world signals provided by Romanian Air Traffic Services Administration, i.e., voice and noise signals corresponding to real communication channels. In this context, the Air Traffic Control (ATC) communication represents a challenging task, usually involving non-stationary environments and stability issues.
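To make the role of the forgetting factor concrete, the sketch below runs scalar RLS (one unknown coefficient) on noise-free data with a fixed lambda. The VFF schemes the abstract studies adapt lambda online; here it is constant for brevity, and the excitation sequence is an arbitrary deterministic choice of ours.

```python
# Scalar recursive least-squares with fixed forgetting factor lam,
# identifying w in y = w * x.  Smaller lam tracks changes faster but raises
# misadjustment; lam = 1 recovers ordinary growing-window least squares.

def rls_identify(xs, ys, lam=0.99):
    w, p = 0.0, 1000.0                     # initial estimate, inverse correlation
    for x, y in zip(xs, ys):
        k = p * x / (lam + x * p * x)      # Kalman-style gain
        w += k * (y - w * x)               # a priori error update
        p = (p - k * x * p) / lam          # covariance update with forgetting
    return w

xs = [((i * 37) % 11) - 5.0 for i in range(200)]   # deterministic excitation
ys = [0.7 * x for x in xs]                          # true system: w = 0.7
w_hat = rls_identify(xs, ys)
```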
Performance evaluation of simple linear iterative clustering algorithm on medical image processing.
Cong, Jinyu; Wei, Benzheng; Yin, Yilong; Xi, Xiaoming; Zheng, Yuanjie
2014-01-01
The Simple Linear Iterative Clustering (SLIC) algorithm is increasingly applied to many kinds of image processing because of its excellent perceptually meaningful characteristics. In order to better meet the needs of medical image processing and provide a technical reference for applying SLIC to medical image segmentation, two indicators, boundary accuracy and superpixel uniformity, are introduced alongside other indicators to systematically analyze the performance of the SLIC algorithm, compared with the Normalized Cuts and Turbopixels algorithms. Extensive experimental results show that SLIC is faster and less sensitive to the image type and the superpixel-number setting than similar algorithms such as Turbopixels and Normalized Cuts. It also performs well in terms of boundary recall, robustness to fuzzy boundaries, the superpixel-size setting, and overall segmentation performance on medical image segmentation.
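The core idea behind SLIC can be sketched as k-means in a joint (intensity, x, y) feature space with a compactness weight trading color proximity against spatial proximity. The miniature below omits what makes real SLIC fast (the restricted 2S x 2S search window) and uses a tiny grayscale toy image of our own.

```python
# SLIC-like superpixel assignment in miniature: k-means over (intensity, row,
# col) features; m weights spatial distance against intensity distance.

def slic_like(img, centers, m=0.5, iters=10):
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    for _ in range(iters):
        for r in range(h):                    # assignment step
            for c in range(w):
                def dist(ct):
                    ci, cr, cc = ct
                    return ((img[r][c] - ci) ** 2
                            + m * ((r - cr) ** 2 + (c - cc) ** 2))
                labels[r][c] = min(range(len(centers)),
                                   key=lambda k: dist(centers[k]))
        for k in range(len(centers)):         # update step: mean feature
            pts = [(img[r][c], r, c) for r in range(h) for c in range(w)
                   if labels[r][c] == k]
            if pts:
                n = len(pts)
                centers[k] = tuple(sum(p[i] for p in pts) / n for i in range(3))
    return labels

# tiny image: dark left half, bright right half
img = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]
labels = slic_like(img, centers=[(0.0, 1.5, 0.5), (1.0, 1.5, 2.5)])
```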
NASA Astrophysics Data System (ADS)
Goswami, D.; Chakraborty, S.
2014-11-01
Laser machining is a promising non-contact process for effective machining of difficult-to-process advanced engineering materials. Increasing interest in the use of lasers for various machining operations can be attributed to its several unique advantages, like high productivity, non-contact processing, elimination of finishing operations, adaptability to automation, reduced processing cost, improved product quality, greater material utilization, minimum heat-affected zone and green manufacturing. To achieve the best desired machining performance and high quality characteristics of the machined components, it is extremely important to determine the optimal values of the laser machining process parameters. In this paper, fireworks algorithm and cuckoo search (CS) algorithm are applied for single as well as multi-response optimization of two laser machining processes. It is observed that although almost similar solutions are obtained for both these algorithms, CS algorithm outperforms fireworks algorithm with respect to average computation time, convergence rate and performance consistency.
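A skeletal version of the cuckoo search (CS) side of the comparison is sketched below on a toy objective. Real CS uses Levy-flight steps; here a decaying Gaussian step around the current best stands in for them, and all parameters are our own illustrative choices, not the paper's machining setup.

```python
# Skeletal cuckoo search: candidate nests are drawn around the current best
# (a Gaussian stand-in for Levy flights), and a fraction pa of the worst
# nests is abandoned and rebuilt at random each generation.
import random

def cuckoo_search(cost, dim, n_nests=15, iters=300, pa=0.25, seed=5):
    rng = random.Random(seed)
    nests = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    vals = [cost(n) for n in nests]
    for t in range(iters):
        step = 0.5 * 0.99 ** t                     # slowly decaying step size
        best = min(range(n_nests), key=lambda i: vals[i])
        for i in range(n_nests):
            cand = [b + step * rng.gauss(0.0, 1.0) for b in nests[best]]
            if cost(cand) < vals[i]:
                nests[i], vals[i] = cand, cost(cand)
        worst = sorted(range(n_nests), key=lambda i: vals[i])[-int(pa * n_nests):]
        for i in worst:                            # abandon worst nests
            nests[i] = [rng.uniform(-5, 5) for _ in range(dim)]
            vals[i] = cost(nests[i])
    i = min(range(n_nests), key=lambda i: vals[i])
    return nests[i], vals[i]

best, best_val = cuckoo_search(lambda x: sum(xi * xi for xi in x), dim=2)
```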
Small computer algorithms for comparing therapeutic performances of single-plane iridium implants.
Murphy, D J; Doss, L L
1984-01-01
We present a uniform method for selecting an optimum implant geometry by presenting techniques for evaluating the therapeutically significant maximum dose rate (herein referred to as the "maximum dose rate"), the reference isodose (85% of the maximum dose rate), and the area enclosed by the reference isodose contour. The therapeutic performances of planar iridium implants may be compared by evaluating their respective maximum dose rates, reference isodoses, and areas within the reference isodose contours. Because these parameters are mathematically defined, they reproducibly describe each implant geometry. We chose a small microcomputer to develop these comparison algorithms so that the radiotherapist need not have large, expensive computer facilities available to conduct his own studies. The development of these algorithms led to some significant conclusions and recommendations regarding the placement of interstitial implants. Using seeds that are centrally located in the array to evaluate the maximum dose contour avoids underestimating the array's maximum dose rate. This could occur if edge or corner seeds were used. Underestimating the maximum dose rate (and hence the reference isodose contour area) may have a serious therapeutic outcome, because the actual total treatment dosage may be excessive. As ribbon spacing is increased, there is a point beyond which the reference isodose contours become decoupled. At this point, a single relatively uniform reference isodose contour separates into several contours. This effect not only complicates the planimetry calculations, but it also adversely affects the therapeutic efficacy of the implant by producing therapeutically "cold" regions.
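The quantities compared above can be illustrated with a crude point-source model: dose summed as strength over squared distance for each seed, a maximum probed near the center of the array, and a reference isodose at 85% of that maximum. This is only a geometry sketch (no tissue attenuation, anisotropy, or real source data); the array layout and constants are ours.

```python
# Inverse-square dose sketch for a planar seed array.  Following the
# abstract, the maximum dose is probed near a *central* seed (not an edge or
# corner seed) and the reference isodose is 85% of that maximum.

def dose(point, seeds, strength=1.0, eps=0.05):
    px, py = point
    # eps regularizes the singularity exactly at a seed position
    return sum(strength / ((px - sx) ** 2 + (py - sy) ** 2 + eps ** 2)
               for sx, sy in seeds)

seeds = [(x, y) for x in (0, 1, 2) for y in (0, 1)]   # 3x2 planar array
d_center = dose((1.0, 0.5), seeds)     # between the central seeds
d_corner = dose((-1.0, -1.0), seeds)   # outside the array
reference_isodose = 0.85 * d_center
```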
Algorithms and architectures for high performance analysis of semantic graphs.
Hendrickson, Bruce Alan
2005-09-01
analysis. Since intelligence datasets can be extremely large, the focus of this work is on the use of parallel computers. We have been working to develop scalable parallel algorithms that will be at the core of a semantic graph analysis infrastructure. Our work has involved two different thrusts, corresponding to two different computer architectures. The first architecture of interest is distributed memory, message passing computers. These machines are ubiquitous and affordable, but they are challenging targets for graph algorithms. Much of our distributed-memory work to date has been collaborative with researchers at Lawrence Livermore National Laboratory and has focused on finding short paths on distributed memory parallel machines. Our implementation on 32K processors of BlueGene/Light finds shortest paths between two specified vertices in just over a second for random graphs with 4 billion vertices.
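The kernel in question, single-source shortest paths on an unweighted graph, is sketched serially below. The paper's contribution is the distributed-memory version on BlueGene; this only shows the level-synchronous BFS structure that such implementations parallelize, on a toy graph of our own.

```python
# Serial sketch of unweighted single-source shortest path (BFS).  In a
# level-synchronous parallel BFS, each frontier expansion below becomes one
# bulk communication step across the distributed graph partitions.
from collections import deque

def shortest_path_length(adj, src, dst):
    dist = {src: 0}
    frontier = deque([src])
    while frontier:
        u = frontier.popleft()
        if u == dst:
            return dist[u]
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    return None                      # dst unreachable from src

adj = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
hops = shortest_path_length(adj, 0, 4)
```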
NASA Technical Reports Server (NTRS)
Battiste, Vernol; Lawton, George; Lachter, Joel; Brandt, Summer; Koteskey, Robert; Dao, Arik-Quang; Kraut, Josh; Ligda, Sarah; Johnson, Walter W.
2012-01-01
Managing the interval between arrival aircraft is a major part of the en route and TRACON controller's job. In an effort to reduce controller workload and low-altitude vectoring, algorithms have been developed to allow pilots to take responsibility for achieving and maintaining proper spacing. Additionally, algorithms have been developed to create dynamic weather-free arrival routes in the presence of convective weather. In a recent study we examined an algorithm to handle dynamic re-routing in the presence of convective weather and two distinct spacing algorithms. The spacing algorithms originated from different core algorithms; both were enhanced with trajectory intent data for the study. These two algorithms were used simultaneously in a human-in-the-loop (HITL) simulation in which pilots performed weather-impacted arrival operations into Louisville International Airport while also performing interval management (IM) on some trials. The controllers retained responsibility for separation and for managing the en route airspace, and in some trials for managing IM. The goal was a stress test of dynamic arrival algorithms with ground and airborne spacing concepts. The flight deck spacing algorithms, or controller-managed spacing, not only had to be robust to the dynamic nature of aircraft re-routing around weather but also had to be compatible with two alternative algorithms for achieving the spacing goal. Flight deck interval management spacing in this simulation provided a clear reduction in controller workload relative to when controllers were responsible for spacing the aircraft. At the same time, spacing was much less variable with the flight deck automated spacing. Even though the approaches taken by the two spacing algorithms to achieve the interval management goals were slightly different, they proved compatible in achieving the interval management goal of 130 s by the TRACON boundary.
Deb, Suash; Yang, Xin-She
2014-01-01
Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario. PMID:25202730
Fong, Simon; Deb, Suash; Yang, Xin-She; Zhuang, Yan
2014-01-01
Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario.
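The local-optimum problem described above is easy to demonstrate, and the simplest stand-in for a swarm-guided global search is plain k-means with random restarts, keeping the run with the best inertia. The sketch below uses 1-D data of our own; it is not one of the paper's nature-inspired hybrids, only an illustration of why escaping bad initial centroids improves clustering quality.

```python
# K-means (1-D) with random restarts: each restart draws fresh initial
# centroids, and the run with the lowest inertia (sum of squared distances
# to the nearest center) is kept, sidestepping bad local optima.
import random

def kmeans_1d(data, k, rng, iters=20):
    centers = rng.sample(data, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            clusters[min(range(k), key=lambda j: (x - centers[j]) ** 2)].append(x)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    inertia = sum(min((x - c) ** 2 for c in centers) for x in data)
    return centers, inertia

def kmeans_restarts(data, k, n_restarts=10, seed=11):
    rng = random.Random(seed)
    return min((kmeans_1d(data, k, rng) for _ in range(n_restarts)),
               key=lambda r: r[1])

data = [0.0, 0.1, 0.2, 9.0, 9.1, 9.2]
centers, inertia = kmeans_restarts(data, k=2)
```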
NASA Astrophysics Data System (ADS)
Nuutinen, Mikko; Virtanen, Toni; Häkkinen, Jukka
2016-03-01
Evaluating algorithms used to assess image and video quality requires performance measures. Traditional performance measures (e.g., Pearson's linear correlation coefficient, Spearman's rank-order correlation coefficient, and root mean square error) compare quality predictions of algorithms to subjective mean opinion scores (mean opinion score/differential mean opinion score). We propose a subjective root-mean-square error (SRMSE) performance measure for evaluating the accuracy of algorithms used to assess image and video quality. The SRMSE performance measure takes into account dispersion between observers. The other important property of the SRMSE performance measure is its measurement scale, which is calibrated to units of the number of average observers. The results of the SRMSE performance measure indicate the extent to which the algorithm can replace the subjective experiment (as the number of observers). Furthermore, we have presented the concept of target values, which define the performance level of the ideal algorithm. We have calculated the target values for all sample sets of the CID2013, CVD2014, and LIVE multiply distorted image quality databases. The target values and MATLAB implementation of the SRMSE performance measure are available on the project page of this study.
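To convey the idea of an error measure expressed in observer units, the sketch below divides the algorithm's RMSE against mean opinion scores by the mean inter-observer standard deviation. This is our own simplification for illustration only; the published SRMSE definition and its calibration to "number of average observers" are more involved.

```python
# Illustrative dispersion-aware error: RMSE of predictions against mean
# opinion scores (MOS), normalized by the mean inter-observer standard
# deviation.  A value near 1 means the algorithm errs about as much as a
# single typical observer; NOT the exact published SRMSE formula.
import math

def srmse_like(predictions, mos, observer_sd):
    rmse = math.sqrt(sum((p - m) ** 2 for p, m in zip(predictions, mos))
                     / len(mos))
    return rmse / (sum(observer_sd) / len(observer_sd))

score = srmse_like(predictions=[3.0, 4.0], mos=[3.5, 4.5],
                   observer_sd=[1.0, 1.0])
```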
St. Hilaire, Melissa A.; Sullivan, Jason P.; Anderson, Clare; Cohen, Daniel A.; Barger, Laura K.; Lockley, Steven W.; Klerman, Elizabeth B.
2012-01-01
There is currently no “gold standard” marker of cognitive performance impairment resulting from sleep loss. We utilized pattern recognition algorithms to determine which features of data collected under controlled laboratory conditions could most reliably identify cognitive performance impairment in response to sleep loss using data from only one testing session, such as would occur in the “real world” or field conditions. A training set for testing the pattern recognition algorithms was developed using objective Psychomotor Vigilance Task (PVT) and subjective Karolinska Sleepiness Scale (KSS) data collected from laboratory studies during which subjects were sleep deprived for 26 – 52 hours. The algorithm was then tested in data from both laboratory and field experiments. The pattern recognition algorithm was able to identify performance impairment with a single testing session in individuals studied under laboratory conditions using PVT, KSS, length of time awake and time of day information with sensitivity and specificity as high as 82%. When this algorithm was tested on data collected under real-world conditions from individuals whose data were not in the training set, accuracy of predictions for individuals categorized with low performance impairment were as high as 98%. Predictions for medium and severe performance impairment were less accurate. We conclude that pattern recognition algorithms may be a promising method for identifying performance impairment in individuals using only current information about the individual’s behavior. Single testing features (e.g., number of PVT lapses) with high correlation with performance impairment in the laboratory setting may not be the best indicators of performance impairment under real-world conditions. Pattern recognition algorithms should be further tested for their ability to be used in conjunction with other assessments of sleepiness in real-world conditions to quantify performance impairment in
High-performance spectral element algorithms and implementations.
Fischer, P. F.; Tufo, H. M.
1999-08-28
We describe the development and implementation of a spectral element code for multimillion gridpoint simulations of incompressible flows in general two- and three-dimensional domains. Parallel performance is presented for up to 2048 nodes of the Intel ASCI-Red machine at Sandia.
Ding, Xiaoyu; Lee, Jong-Hwan; Lee, Seong-Whan
2013-04-01
Nonnegative matrix factorization (NMF) is a blind source separation (BSS) algorithm which is based on the distinct constraint of nonnegativity of the estimated parameters as well as of the measured data. In this study, to assess the potential feasibility of NMF for fMRI data, the four most popular NMF algorithms, corresponding to two types of update rule, (1) least-squares-based updates [i.e., alternating least-squares NMF (ALSNMF) and projected gradient descent NMF] and (2) multiplicative updates (i.e., NMF based on Euclidean distance and NMF based on a divergence cost function), were investigated by using them to estimate task-related neuronal activities. These algorithms were applied first to individual data from a single subject and, subsequently, to group data sets from multiple subjects. On the single-subject level, although all four algorithms detected task-related activation from simulated data, the performance of the multiplicative-update NMFs deteriorated significantly when evaluated using visuomotor task fMRI data, for which they failed to estimate any task-related neuronal activities. In group-level analysis on both simulated data and real fMRI data, ALSNMF outperformed the other three algorithms. These findings suggest that ALSNMF appears to be the most promising option among the tested NMF algorithms for extracting task-related neuronal activities from fMRI data.
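The least-squares-based update favored in this study can be sketched compactly. Below is a minimal ALSNMF iteration in Python/NumPy, assuming the classical heuristic of solving each unconstrained least-squares subproblem and projecting negative entries to zero; the authors' fMRI-specific preprocessing, initialization, and convergence criteria are not reproduced.

```python
import numpy as np

def als_nmf(V, r, n_iter=200, seed=0):
    """Alternating least-squares NMF: V (m x n) ~ W (m x r) @ H (r x n).

    Each half-step solves an unconstrained least-squares problem and then
    projects negative entries to zero (the classical ALS heuristic)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))
    H = None
    for _ in range(n_iter):
        # Fix W, solve min ||V - W H||_F for H, then clip negatives to zero.
        H = np.linalg.lstsq(W, V, rcond=None)[0].clip(min=0)
        # Fix H, solve for W via the transposed problem, then clip.
        W = np.linalg.lstsq(H.T, V.T, rcond=None)[0].T.clip(min=0)
    return W, H

# Toy example: a nonnegative matrix of exact rank 2.
V = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])
W, H = als_nmf(V, r=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because an exact rank-2 nonnegative factorization of this toy matrix exists, the relative reconstruction error should be small while both factors stay elementwise nonnegative.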
Performance Assessment Method for a Forged Fingerprint Detection Algorithm
NASA Astrophysics Data System (ADS)
Shin, Yong Nyuo; Jun, In-Kyung; Kim, Hyun; Shin, Woochang
The threat of invasion of privacy and of the illegal appropriation of information both increase with the expansion of the biometrics service environment to open systems. However, while certificates or smart cards can easily be cancelled and reissued if found to be missing, there is no way to recover the unique biometric information of an individual following a security breach. With the recognition that this threat factor may disrupt the large-scale civil service operations approaching implementation, such as electronic ID cards and e-Government systems, many agencies and vendors around the world continue to develop forged fingerprint detection technology, but no objective performance assessment method has, to date, been reported. Therefore, in this paper, we propose a methodology designed to evaluate the objective performance of the forged fingerprint detection technology that is currently attracting a great deal of attention.
A High Performance Cloud-Based Protein-Ligand Docking Prediction Algorithm
Chen, Jui-Le; Yang, Chu-Sing
2013-01-01
The potential of predicting druggability for a particular disease by integrating biological and computer science technologies has witnessed success in recent years. Although the computer science technologies can be used to reduce the costs of pharmaceutical research, the computation time of structure-based protein-ligand docking prediction remains unsatisfactory. Hence, in this paper, a novel docking prediction algorithm, named fast cloud-based protein-ligand docking prediction algorithm (FCPLDPA), is presented to accelerate the docking prediction algorithm. The proposed algorithm works by leveraging two high-performance operators: (1) the novel migration (information exchange) operator is designed specially for cloud-based environments to reduce the computation time; (2) the efficient operator is aimed at filtering out the worst search directions. Our simulation results illustrate that the proposed method outperforms the other docking algorithms compared in this paper in terms of both the computation time and the quality of the end result. PMID:23762864
He, Lifeng; Chao, Yuyan
2015-09-01
Labeling connected components and calculating the Euler number in a binary image are two fundamental processes for computer vision and pattern recognition. This paper presents an ingenious method for identifying a hole in a binary image in the first scan of connected-component labeling. Our algorithm can perform connected component labeling and Euler number computing simultaneously, and it can also calculate the connected component (object) number and the hole number efficiently. The additional cost for calculating the hole number is only O(H), where H is the hole number in the image. Our algorithm can be implemented almost in the same way as a conventional equivalent-label-set-based connected-component labeling algorithm. We prove the correctness of our algorithm and use experimental results for various kinds of images to demonstrate the power of our algorithm.
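The relation the paper exploits, Euler number = objects − holes, can be illustrated with a straightforward (non-optimized) sketch: foreground components are labeled with 8-connectivity via BFS, and holes are counted as 4-connected background components that do not touch the image border. This is only a reference implementation of the quantities involved, not the authors' one-scan algorithm.

```python
from collections import deque

def label_components(img, connect8=True):
    """Label connected foreground components with BFS; returns (labels, count)."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    nbrs = ([(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
            if connect8 else [(-1, 0), (1, 0), (0, -1), (0, 1)])
    count = 0
    for i in range(h):
        for j in range(w):
            if img[i][j] and labels[i][j] == 0:
                count += 1
                labels[i][j] = count
                q = deque([(i, j)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in nbrs:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and labels[ny][nx] == 0:
                            labels[ny][nx] = count
                            q.append((ny, nx))
    return labels, count

def euler_number(img):
    """Euler number E = objects - holes, with 8-connected foreground and
    4-connected background (holes = background components off the border)."""
    h, w = len(img), len(img[0])
    _, objects = label_components(img, connect8=True)
    bg = [[1 - v for v in row] for row in img]
    bg_labels, bg_count = label_components(bg, connect8=False)
    border = {bg_labels[i][j] for i in range(h) for j in range(w)
              if bg_labels[i][j] and (i in (0, h - 1) or j in (0, w - 1))}
    holes = bg_count - len(border)
    return objects - holes, objects, holes

# A ring (one object enclosing one hole) plus an isolated pixel.
img = [[0, 0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0, 0],
       [0, 1, 0, 1, 0, 0],
       [0, 1, 1, 1, 0, 0],
       [0, 0, 0, 0, 0, 1]]
e, objects, holes = euler_number(img)   # 2 objects, 1 hole, so E = 1
```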
CPU vs. GPU - Performance comparison for the Gram-Schmidt algorithm
NASA Astrophysics Data System (ADS)
Brandes, T.; Arnold, A.; Soddemann, T.; Reith, D.
2012-08-01
The Gram-Schmidt method is a classical method for determining QR decompositions, which is commonly used in many applications in computational physics, such as orthogonalization of quantum mechanical operators or Lyapunov stability analysis. In this paper, we discuss how well the Gram-Schmidt method performs on different hardware architectures, including both state-of-the-art GPUs and CPUs. We explain, in detail, how a smart interplay between hardware and software can be used to speed up those rather compute intensive applications as well as the benefits and disadvantages of several approaches. In addition, we compare some highly optimized standard routines of the BLAS libraries against our own optimized routines on both processor types. Particular attention was paid to the strong hierarchical memory of modern GPUs and CPUs, which requires cache-aware blocking techniques for optimal performance. Our investigations show that the performance depends strongly on the employed algorithm and compiler, and somewhat less on the employed hardware. Remarkably, the performance of the NVIDIA CUDA BLAS routines improved significantly from CUDA 3.2 to CUDA 4.0. Still, BLAS routines tend to be slightly slower than manually optimized code on GPUs, while we were not able to outperform the BLAS routines on CPUs. Comparing optimized implementations on different hardware architectures, we find that a NVIDIA GeForce GTX580 GPU is about 50% faster than a corresponding Intel X5650 Westmere hexacore CPU. The self-written codes are included as supplementary material.
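For reference, the modified Gram-Schmidt (MGS) variant that such benchmarks typically start from is a few lines; the cache-aware blocked versions discussed in the paper restructure exactly these loops. A textbook Python/NumPy sketch (not the authors' optimized code):

```python
import numpy as np

def mgs_qr(A):
    """QR factorization via modified Gram-Schmidt.

    Column-oriented textbook form; high-performance variants block these
    loops for cache reuse without changing the arithmetic."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = A.copy()
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(Q[:, k])
        Q[:, k] /= R[k, k]
        # Immediately orthogonalize all remaining columns against q_k; this
        # immediate update is what distinguishes MGS numerically from
        # classical Gram-Schmidt.
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ Q[:, j]
            Q[:, j] -= R[k, j] * Q[:, k]
    return Q, R

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
Q, R = mgs_qr(A)   # Q has orthonormal columns, R is upper triangular, A = Q R
```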
Performance of SIMAC algorithm suite for tactical missile warning
NASA Astrophysics Data System (ADS)
Montgomery, Joel B.; Montgomery, Christine T.; Sanderson, Richard B.; McCalmont, John F.
2009-05-01
Self protection of airborne assets has been important to the Air Force and DoD community for many years. The greatest threats to aircraft continue to be man-portable air defense missiles and ground fire. AFRL has been pursuing a near-IR sensor approach that has been shown to have better performance than midwave IR systems at much lower cost. SIMAC couples multiple spatial and temporal filtering techniques to provide the needed clutter suppression in NIR missile warning systems. Results from flight tests will be discussed.
Simulated performance of remote sensing ocean colour algorithms during the 1996 PRIME cruise
NASA Astrophysics Data System (ADS)
Westbrook, A. G.; Pinkerton, M. H.; Aiken, J.; Pilgrim, D. A.
Coincident pigment and underwater radiometric data were collected during a cruise along the 20°W meridian from 60°N to 37°N in the north-eastern Atlantic Ocean as part of the Natural Environment Research Council (NERC) thematic programme: plankton reactivity in the marine environment (PRIME). These data were used to simulate the retrieval of two bio-optical variables from remotely sensed measurements of ocean colour (for example by the NASA Sea-viewing wide field-of-view sensor, SeaWiFS), using two-band semi-empirical algorithms. The variables considered were the diffuse attenuation coefficient at 490 nm (Kd(490), units: m⁻¹) and the phytoplankton pigment concentration expressed as optically-weighted chlorophyll-a concentration (Ca, units: mg m⁻³). There was good agreement between the measured and the retrieved bio-optical values. Algorithms based on the PRIME data were generated to compare the performance of local algorithms (algorithms which apply to a restricted area and/or season) with global algorithms (algorithms developed on data from a wide variety of water masses). The use of local algorithms improved the average accuracy, but not the precision, of the retrievals: errors were still ±36% (Kd) and ±117% (Ca) using local algorithms.
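Two-band semi-empirical algorithms of this kind typically take a power-law form in a blue/green band ratio, Kd(490) = Kw + A·R^B. A minimal sketch is below; the coefficient values are illustrative placeholders in the spirit of SeaWiFS-era fits, not the PRIME local or global coefficients.

```python
def two_band_kd490(ratio, kw=0.016, a=0.15, b=-1.54):
    """Generic two-band semi-empirical form Kd(490) = Kw + A * R^B.

    `ratio` is a blue/green water-leaving radiance (or reflectance) band
    ratio; kw approximates pure-water attenuation. All coefficients here
    are illustrative placeholders, not the fitted PRIME values."""
    return kw + a * ratio ** b

# A larger blue/green ratio (clearer water) should give smaller attenuation.
kd_clear = two_band_kd490(2.0)
kd_turbid = two_band_kd490(0.5)
```

With B negative, the retrieved attenuation decreases monotonically as the blue/green ratio grows, which is the qualitative behaviour such algorithms encode.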
Gropp, William D.
2014-06-23
With the coming end of Moore's law, it has become essential to develop new algorithms and techniques that can provide the performance needed by demanding computational science applications, especially those that are part of the DOE science mission. This work was part of a multi-institution, multi-investigator project that explored several approaches to develop algorithms that would be effective at the extreme scales and with the complex processor architectures that are expected at the end of this decade. The work by this group developed new performance models that have already helped guide the development of highly scalable versions of an algebraic multigrid solver, new programming approaches designed to support numerical algorithms on heterogeneous architectures, and a new, more scalable version of conjugate gradient, an important algorithm in the solution of very large linear systems of equations.
HPC-NMF: A High-Performance Parallel Algorithm for Nonnegative Matrix Factorization
Kannan, Ramakrishnan; Sukumar, Sreenivas R.; Ballard, Grey M.; Park, Haesun
2016-08-22
NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient distributed algorithms to solve the problem for big data sets. We propose a high-performance distributed-memory parallel algorithm that computes the factorization by iteratively solving alternating non-negative least squares (NLS) subproblems for W and H. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). As opposed to previous implementations, our algorithm is also flexible: it performs well for both dense and sparse matrices, and allows the user to choose any one of multiple algorithms for solving the updates to the low-rank factors W and H within the alternating iterations.
A High-Performance Neural Prosthesis Enabled by Control Algorithm Design
Gilja, Vikash; Nuyujukian, Paul; Chestek, Cindy A.; Cunningham, John P.; Yu, Byron M.; Fan, Joline M.; Churchland, Mark M.; Kaufman, Matthew T.; Kao, Jonathan C.; Ryu, Stephen I.; Shenoy, Krishna V.
2012-01-01
Neural prostheses translate neural activity from the brain into control signals for guiding prosthetic devices, such as computer cursors and robotic limbs, and thus offer disabled patients greater interaction with the world. However, relatively low performance remains a critical barrier to successful clinical translation; current neural prostheses are considerably slower with less accurate control than the native arm. Here we present a new control algorithm, the recalibrated feedback intention-trained Kalman filter (ReFIT-KF), that incorporates assumptions about the nature of closed loop neural prosthetic control. When tested with rhesus monkeys implanted with motor cortical electrode arrays, the ReFIT-KF algorithm outperforms existing neural prostheses in all measured domains and halves acquisition time. This control algorithm permits sustained uninterrupted use for hours and generalizes to more challenging tasks without retraining. Using this algorithm, we demonstrate repeatable high performance for years after implantation across two monkeys, thereby increasing the clinical viability of neural prostheses. PMID:23160043
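The ReFIT-KF builds on the standard Kalman filter predict/update cycle; a generic sketch of that cycle is shown below. The ReFIT-specific ingredients (intention-based recalibration of training data and the closed-loop feedback assumptions) are not reproduced here, and the toy tracking example with all its parameter values is purely illustrative.

```python
import numpy as np

def kalman_step(x, P, y, A, W, C, Q):
    """One predict/update cycle of a linear Kalman filter.

    State model:       x_t = A x_{t-1} + w,  w ~ N(0, W)
    Observation model: y_t = C x_t + q,      q ~ N(0, Q)"""
    # Predict.
    x_pred = A @ x
    P_pred = A @ P @ A.T + W
    # Update with the Kalman gain.
    S = C @ P_pred @ C.T + Q
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Toy 1-D example: noisy direct observations of a constant underlying state.
A = np.eye(1); W = 1e-4 * np.eye(1); C = np.eye(1); Q = 0.25 * np.eye(1)
x, P = np.zeros(1), np.eye(1)
rng = np.random.default_rng(1)
for _ in range(200):
    y = np.array([2.0]) + rng.normal(0.0, 0.5, 1)
    x, P = kalman_step(x, P, y, A, W, C, Q)
```

After a few hundred updates the state estimate settles near the true value and the posterior variance shrinks well below the observation noise variance.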
Performance study of LMS based adaptive algorithms for unknown system identification
NASA Astrophysics Data System (ADS)
Javed, Shazia; Ahmad, Noor Atinah
2014-07-01
Adaptive filtering techniques have gained much popularity in the modeling of the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and their sensitivity to the spectral properties of input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of improved versions of LMS algorithms on their robustness and misalignment.
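The ASI setup the paper uses can be sketched in a few lines: a white input excites an unknown FIR system, the desired signal is the noisy system output, and the adaptive weights are updated by LMS or its normalized variant. A minimal Python/NumPy sketch with illustrative step sizes and filter length:

```python
import numpy as np

def identify_fir(h_true, algo="nlms", mu=0.5, n=2000, noise=0.01, seed=0):
    """Identify an unknown FIR system in the ASI model.

    White input x, desired d = h_true * x + output noise; weights updated by
    plain LMS (w += mu e u) or NLMS (step normalized by input power)."""
    rng = np.random.default_rng(seed)
    L = len(h_true)
    w = np.zeros(L)
    x = rng.standard_normal(n)
    for k in range(L, n):
        u = x[k - L:k][::-1]                 # regressor, most recent sample first
        d = h_true @ u + noise * rng.standard_normal()
        e = d - w @ u
        if algo == "nlms":
            w += mu * e * u / (1e-8 + u @ u)  # normalized step: robust to input power
        else:
            w += mu * e * u                   # plain LMS: mu must be kept small
    return w

h_true = np.array([0.8, -0.4, 0.2])
w_nlms = identify_fir(h_true, algo="nlms", mu=0.5)
w_lms = identify_fir(h_true, algo="lms", mu=0.01)
```

Both variants should converge to the unknown impulse response here; the normalization in NLMS is what buys insensitivity to the input signal power, at the cost of the extra division per update.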
Dependence of Adaptive Cross-correlation Algorithm Performance on the Extended Scene Image Quality
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2008-01-01
Recently, we reported an adaptive cross-correlation (ACC) algorithm to estimate with high accuracy the shift, as large as several pixels, between two extended-scene sub-images captured by a Shack-Hartmann wavefront sensor. It determines the positions of all extended-scene image cells relative to a reference cell in the same frame using an FFT-based iterative image-shifting algorithm. It works with both point-source spot images and extended-scene images. We have demonstrated previously, based on some measured images, that the ACC algorithm can determine image shifts with an accuracy as high as 0.01 pixel for shifts as large as 3 pixels, and yields similar results for both point-source spot images and extended-scene images. The shift estimate accuracy of the ACC algorithm depends on illumination level, background, and scene content in addition to the amount of the shift between two image cells. In this paper we investigate how the performance of the ACC algorithm depends on the quality and the frequency content of extended-scene images captured by a Shack-Hartmann camera. We also compare the performance of the ACC algorithm with those of several other approaches, and introduce a failsafe criterion for ACC algorithm-based extended-scene Shack-Hartmann sensors.
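The FFT-based core of such shift estimation can be illustrated at integer-pixel resolution: the translation appears as the peak of the circular cross-correlation computed in the Fourier domain. This sketch shows only that core; the ACC algorithm adds the iterative subpixel re-shifting on top of it.

```python
import numpy as np

def fft_shift_estimate(ref, img):
    """Estimate the (dy, dx) translation of `img` relative to `ref` from the
    peak of the FFT-based circular cross-correlation (integer-pixel version)."""
    xc = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    # Map peak indices to signed shifts in [-N/2, N/2).
    return tuple(int(p) if p < s // 2 else int(p) - s
                 for p, s in zip(peak, xc.shape))

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
img = np.roll(np.roll(ref, 3, axis=0), -5, axis=1)   # true shift (3, -5)
dy, dx = fft_shift_estimate(ref, img)                # recovers (3, -5)
```

The correlation peak is sharpest when the scene has broad frequency content, which is one way to see why the abstract's question about image quality and frequency content matters for shift accuracy.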
Quantitative performance evaluation of the EM algorithm applied to radiographic images
NASA Astrophysics Data System (ADS)
Brailean, James C.; Giger, Maryellen L.; Chen, Chin-Tu; Sullivan, Barry J.
1991-07-01
In this study, the authors evaluate quantitatively the performance of the Expectation Maximization (EM) algorithm as a restoration technique for radiographic images. The 'perceived' signal-to-noise ratio (SNR) of simple radiographic patterns processed by the EM algorithm is calculated on the basis of a statistical decision theory model that includes both the observer's visual response function and a noise component internal to the eye-brain system. The relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to quantitatively compare the effects of the EM algorithm to two popular image enhancement techniques: contrast enhancement (windowing) and unsharp mask filtering.
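For the Poisson imaging model, the EM restoration iteration is the multiplicative Richardson-Lucy update f ← f · [psfᵀ(g / (psf∗f))]. A 1-D sketch of that update is below, with an illustrative box PSF and bar pattern; it is a generic instance of the EM update evaluated in the study, not the authors' implementation.

```python
import numpy as np

def em_restore(blurred, psf, n_iter=100):
    """EM (Richardson-Lucy) restoration for a Poisson imaging model.

    Multiplicative update f <- f * [psf_flipped * (g / (psf * f))], with
    convolutions truncated at the borders ('same' mode)."""
    f = np.full_like(blurred, blurred.mean())     # flat positive initial estimate
    psf_flip = psf[::-1]                          # adjoint of the blur operator
    for _ in range(n_iter):
        est = np.convolve(f, psf, mode="same")
        ratio = blurred / np.maximum(est, 1e-12)  # guard against divide-by-zero
        f = f * np.convolve(ratio, psf_flip, mode="same")
    return f

# A bright bar blurred by a small box PSF (noiseless for clarity).
truth = np.zeros(32); truth[12:16] = 10.0
psf = np.array([0.25, 0.5, 0.25])
blurred = np.convolve(truth, psf, mode="same")
restored = em_restore(blurred, psf)
```

The update preserves nonnegativity automatically, one of the reasons EM is attractive for count-limited radiographic data; after enough iterations the restored bar is visibly sharper than the blurred input.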
NASA Astrophysics Data System (ADS)
Iwan Solihin, Mahmud; Fauzi Zanil, Mohd
2016-11-01
Cuckoo Search (CS) and Differential Evolution (DE) algorithms are considerably robust meta-heuristic algorithms for solving constrained optimization problems. In this study, the performance of CS and DE is compared in solving constrained optimization problems from selected benchmark functions. Selection of the benchmark functions is based on active or inactive constraints and the dimensionality of variables (i.e., the number of solution variables). In addition, a specific constraint handling and stopping criterion technique is adopted in the optimization algorithm. The results show that the CS approach outperforms DE in terms of repeatability and the quality of the optimum solutions.
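To make the comparison concrete, here is a minimal DE/rand/1/bin sketch with a simple static-penalty constraint handler on a toy constrained problem. Both the penalty rule and all parameter values are illustrative choices; the paper's specific constraint-handling and stopping-criterion technique may differ.

```python
import numpy as np

def de_constrained(f, g, bounds, pop_size=30, F=0.7, CR=0.9, n_gen=200, seed=0):
    """DE/rand/1/bin with static-penalty constraint handling.

    Fitness = f(x) + penalty * sum(max(0, g_i(x))), where g(x) <= 0 means
    feasible. A common (illustrative) handler, not the paper's exact rule."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    d = len(lo)
    pop = lo + rng.random((pop_size, d)) * (hi - lo)
    fit = lambda x: f(x) + 1e6 * np.sum(np.maximum(0.0, g(x)))
    costs = np.array([fit(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)       # rand/1 mutation
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True                   # force one gene
            trial = np.where(cross, mutant, pop[i])         # binomial crossover
            tc = fit(trial)
            if tc <= costs[i]:                              # greedy selection
                pop[i], costs[i] = trial, tc
    best = pop[np.argmin(costs)]
    return best, costs.min()

# Minimize x^2 + y^2 subject to x + y >= 1 (optimum at x = y = 0.5, f = 0.5).
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: np.array([1.0 - x[0] - x[1]])
best, cost = de_constrained(f, g, bounds=[(-2, 2), (-2, 2)])
```

The chosen test problem has its optimum on the constraint boundary (an active constraint), the kind of case the study singles out when selecting benchmark functions.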
Performance comparison of some evolutionary algorithms on job shop scheduling problems
NASA Astrophysics Data System (ADS)
Mishra, S. K.; Rao, C. S. P.
2016-09-01
Job Shop Scheduling, viewed as a state-space search problem, belongs to the NP-hard category due to its complexity and the combinatorial explosion of states. Several naturally inspired evolutionary methods have been developed to solve Job Shop Scheduling Problems. In this paper the evolutionary methods, namely Particle Swarm Optimization, Artificial Intelligence, Invasive Weed Optimization, Bacterial Foraging Optimization, and Music Based Harmony Search Algorithms, are applied and fine-tuned to model and solve Job Shop Scheduling Problems. About 250 benchmark instances have been used to compare and evaluate the performance of these algorithms. The capabilities of each of these algorithms in solving Job Shop Scheduling Problems are outlined.
Performance evaluation of recommendation algorithms on Internet of Things services
NASA Astrophysics Data System (ADS)
Mashal, Ibrahim; Alsaryrah, Osama; Chung, Tein-Yaw
2016-06-01
Internet of Things (IoT) is the next wave of industry revolution that will initiate many services, such as personal health care and green energy monitoring, which people may subscribe to for their convenience. Recommending IoT services to users based on the objects they own will become crucial for the success of IoT. In this work, we introduce the concept of service recommender systems in IoT by a formal model. As a first attempt in this direction, we have proposed a hyper-graph model for an IoT recommender system in which each hyper-edge connects users, objects, and services. Next, we studied the usefulness of traditional recommendation schemes and their hybrid approaches for IoT service recommendation (IoTSRS) based on existing well-known metrics. The preliminary results show that existing approaches perform reasonably well but further extension is required for IoTSRS. Several challenges are discussed to point out the direction of future development in IoTSRS.
Performance Evaluation of an Option-Based Learning Algorithm in Multi-Car Elevator Systems
NASA Astrophysics Data System (ADS)
Valdivielso Chian, Alex; Miyamoto, Toshiyuki
In this letter, we present the evaluation of an option-based learning algorithm, developed to perform a conflict-free allocation of calls among cars in a multi-car elevator system. We evaluate its performance in terms of the service time, its flexibility in the task-allocation, and the load balancing.
Performance of QoS-based multicast routing algorithms for real-time communication
NASA Astrophysics Data System (ADS)
Verma, Sanjeev; Pankaj, Rajesh K.; Leon-Garcia, Alberto
1997-10-01
In recent years, there has been a lot of interest in providing real-time multimedia services like digital audio and video over packet-switched networks such as the Internet and ATM. These services require a certain quality of service (QoS) from the network. The routing algorithm should take the QoS requirements of an application into account while selecting the most suitable route for the application. In this paper, we introduce a new routing metric and use it with two different heuristics to compute the multicast tree for guaranteed-QoS applications that need a firm end-to-end delay bound. We then compare the performance of our algorithms with the other proposed QoS-based routing algorithms. Simulations were run over a number of random networks to measure the performance of different algorithms. We studied routing algorithms along with resource reservation and admission control to measure the call throughput over a number of random networks. Simulation results show that our algorithms give a much better performance in terms of call throughput than other proposed schemes like QOSPF.
Performance evaluation of trigger algorithm for the MACE telescope
NASA Astrophysics Data System (ADS)
Yadav, Kuldeep; Yadav, K. K.; Bhatt, N.; Chouhan, N.; Sikder, S. S.; Behere, A.; Pithawa, C. K.; Tickoo, A. K.; Rannot, R. C.; Bhattacharyya, S.; Mitra, A. K.; Koul, R.
The MACE (Major Atmospheric Cherenkov Experiment) telescope, with a light collector diameter of 21 m, is being set up at Hanle (32.8° N, 78.9° E, 4200 m asl), India, to explore the gamma-ray sky in the tens of GeV energy range. The imaging camera of the telescope comprises 1088 pixels covering a total field-of-view of 4.3° × 4.0°, with a trigger field-of-view of 2.6° × 3.0° and a uniform pixel resolution of 0.12°. In order to achieve a low energy trigger threshold of less than 30 GeV, a two-level trigger scheme is being designed for the telescope. The first level trigger is generated within the 16 pixels of a Camera Integrated Module (CIM) based on a 4 nearest neighbour (4NN) close cluster configuration within a coincidence gate window of 5 ns, while the second level trigger is generated by combining the first level triggers from neighbouring CIMs. Each pixel of the telescope is expected to operate at a single pixel threshold of 8-10 photo-electrons, where the single channel rate, dominated by after-pulsing, is expected to be ~500 kHz. The hardware implementation of the trigger logic is based on complex programmable logic devices (CPLDs). The basic design concept, hardware implementation and performance evaluation of the trigger system, in terms of threshold energy and trigger rate estimates based on Monte Carlo data for the MACE telescope, will be presented in this meeting.
Schold, Jesse D; Arrington, Charlotte J; Levine, Greg
2010-09-01
In the past several years, emphasis on quality metrics in the field of organ transplantation has increased significantly, largely because of the new conditions of participation issued by the Centers for Medicare and Medicaid Services. These regulations directly associate patients' outcomes and measured performance of centers with the distribution of public funding to institutions. Moreover, insurers and marketing ventures have used publicly available outcomes data from transplant centers for business decision making and advertisement purposes. We gave a 10-question survey to attendees of the Transplant Management Forum at the 2009 meeting of the United Network for Organ Sharing to ascertain how centers have responded to the increased oversight of performance. Of 63 responses, 55% indicated a low or near low performance rating at their center in the past 3 years. Respondents from low-performing centers were significantly more likely to indicate increased selection criteria for candidates (81% vs 38%, P = .001) and donors (77% vs 31%, P < .001) as well as alterations in clinical protocols (84% vs 52%, P = .007). Among respondents indicating lost insurance contracts (31%), these differences were also highly significant. Based on respondents' perceptions, outcomes of performance evaluations are associated with significant changes in clinical practice at transplant centers. The transplant community and policy makers should practice vigilance that performance evaluations and regulatory oversight do not inadvertently lead to diminished access to care among viable candidates or decreased transplant volume.
2017-01-01
Background Machine learning techniques may be an effective and efficient way to classify open-text reports on doctor’s activity for the purposes of quality assurance, safety, and continuing professional development. Objective The objective of the study was to evaluate the accuracy of machine learning algorithms trained to classify open-text reports of doctor performance and to assess the potential for classifications to identify significant differences in doctors’ professional performance in the United Kingdom. Methods We used 1636 open-text comments (34,283 words) relating to the performance of 548 doctors collected from a survey of clinicians’ colleagues using the General Medical Council Colleague Questionnaire (GMC-CQ). We coded 77.75% (1272/1636) of the comments into 5 global themes (innovation, interpersonal skills, popularity, professionalism, and respect) using a qualitative framework. We trained 8 machine learning algorithms to classify comments and assessed their performance using several training samples. We evaluated doctor performance using the GMC-CQ and compared scores between doctors with different classifications using t tests. Results Individual algorithm performance was high (range F score=.68 to .83). Interrater agreement between the algorithms and the human coder was highest for codes relating to “popular” (recall=.97), “innovator” (recall=.98), and “respected” (recall=.87) codes and was lower for the “interpersonal” (recall=.80) and “professional” (recall=.82) codes. A 10-fold cross-validation demonstrated similar performance in each analysis. When combined together into an ensemble of multiple algorithms, mean human-computer interrater agreement was .88. Comments that were classified as “respected,” “professional,” and “interpersonal” related to higher doctor scores on the GMC-CQ compared with comments that were not classified (P<.05). Scores did not vary between doctors who were rated as popular or
Assessing the performance of vessel wall tracking algorithms: the importance of the test phantom
NASA Astrophysics Data System (ADS)
Ramnarine, K. V.; Kanber, B.; Panerai, R. B.
2004-01-01
There is widespread clinical interest in assessing the mechanical properties of tissues and vessel walls. This study investigated the importance of the test phantom in providing a realistic assessment of clinical wall tracking performance for a variety of ultrasound modalities. B-mode, colour Doppler and Tissue Doppler Imaging (TDI) cineloop images were acquired using a Philips HDI5000 scanner and L12-5 probe. In-vivo longitudinal sections of 30 common carotid arteries and in-vitro images of pulsatile flow of a blood mimicking fluid through walled and wall-less tissue and vessel mimicking flow phantoms were analysed. Vessel wall tracking performance was assessed for our new probabilistic B-mode algorithm (PROBAL), and 3 different techniques implemented by Philips Medical Systems, based on B-mode edge detection (LDOT), colour Doppler (CVIQ) and TDI (TDIAWM). Precision (standard deviation/mean) of the peak systole dilations for respective PROBAL, LDOT, CVIQ and TDIAWM techniques were: 15.4 +/- 8.4%, 23 +/- 12.7%, 10 +/- 10% and 10.3 +/- 8.1% for the common carotid arteries; 6.4%, 22%, 11.6% and 34.5% for the wall-less flow phantom, 5.3%, 9.8%, 23.4% and 2.7% for the C-flex walled phantom and 3.9%, 2.6%, 1% and 3.2% for the latex walled phantom. The test phantom design and construction had a significant effect on the measurement of wall tracking performance.
NASA Astrophysics Data System (ADS)
Kreuz, Thomas; Andrzejak, Ralph G.; Mormann, Florian; Kraskov, Alexander; Stögbauer, Harald; Elger, Christian E.; Lehnertz, Klaus; Grassberger, Peter
2004-06-01
In a growing number of publications it is claimed that epileptic seizures can be predicted by analyzing the electroencephalogram (EEG) with different characterizing measures. However, many of these studies suffer from a severe lack of statistical validation. Only rarely are results passed to a statistical test and verified against some null hypothesis H0 in order to quantify their significance. In this paper we propose a method to statistically validate the performance of measures used to predict epileptic seizures. From measure profiles rendered by applying a moving-window technique to the electroencephalogram we first generate an ensemble of surrogates by a constrained randomization using simulated annealing. Subsequently the seizure prediction algorithm is applied to the original measure profile and to the surrogates. If detectable changes before seizure onset exist, the highest performance values should be obtained for the original measure profiles, and the null hypothesis “The measure is not suited for seizure prediction” can be rejected. We demonstrate our method by applying two measures of synchronization to a quasicontinuous EEG recording and by evaluating their predictive performance using a straightforward seizure prediction statistics. We would like to stress that the proposed method is rather universal and can be applied to many other prediction and detection problems.
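The logic of the surrogate test can be sketched in a few lines: compute the prediction statistic on the original measure profile and on an ensemble of surrogates, and take the rank of the original as the p-value. The sketch below uses plain shuffling as the simplest stand-in for the paper's constrained simulated-annealing surrogates, with an entirely artificial measure profile and statistic.

```python
import numpy as np

def surrogate_pvalue(profile, onset, stat, n_surr=99, seed=0):
    """One-sided surrogate test: p-value is the rank of the original statistic
    among shuffled surrogates (plain shuffling shown; the paper builds
    constrained surrogates via simulated annealing instead)."""
    rng = np.random.default_rng(seed)
    s0 = stat(profile, onset)
    exceed = sum(stat(rng.permutation(profile), onset) >= s0
                 for _ in range(n_surr))
    return (1 + exceed) / (1 + n_surr)

# Toy statistic: mean of the measure in a pre-onset window minus the global mean.
def preictal_rise(profile, onset, window=20):
    return profile[onset - window:onset].mean() - profile.mean()

rng = np.random.default_rng(42)
profile = rng.normal(0.0, 1.0, 200)
profile[160:180] += 2.0          # a genuine pre-onset change before "onset" = 180
p = surrogate_pvalue(profile, onset=180, stat=preictal_rise)
```

Shuffling destroys the temporal localization of the pre-onset change while preserving the amplitude distribution, so a genuine preictal rise survives as a small p-value while a spurious one does not.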
Thrust stand evaluation of engine performance improvement algorithms in an F-15 airplane
NASA Technical Reports Server (NTRS)
Conners, Timothy R.
1992-01-01
An investigation is underway to determine the benefits of a new propulsion system optimization algorithm in an F-15 airplane. The performance seeking control (PSC) algorithm optimizes the quasi-steady-state performance of an F100 derivative turbofan engine for several modes of operation. The PSC algorithm uses an onboard software engine model that calculates thrust, stall margin, and other unmeasured variables for use in the optimization. As part of the PSC test program, the F-15 aircraft was operated on a horizontal thrust stand. Thrust was measured with highly accurate load cells. The measured thrust was compared to onboard model estimates and to results from posttest performance programs. Thrust changes using the various PSC modes were recorded. Those results were compared to benefits using the less complex highly integrated digital electronic control (HIDEC) algorithm. The PSC maximum thrust mode increased intermediate power thrust by 10 percent. The PSC engine model did very well at estimating measured thrust and closely followed the transients during optimization. Quantitative results from the evaluation of the algorithms and performance calculation models are included with emphasis on measured thrust results. The report presents a description of the PSC system and a discussion of factors affecting the accuracy of the thrust stand load measurements.
Independent component analysis algorithm FPGA design to perform real-time blind source separation
NASA Astrophysics Data System (ADS)
Meyer-Baese, Uwe; Odom, Crispin; Botella, Guillermo; Meyer-Baese, Anke
2015-05-01
The conditions that arise in the cocktail party problem prevail across many fields, creating a need for blind source separation (BSS). These fields include array processing, wireless and other communications, medical signal processing, speech processing, audio and acoustics, and biomedical engineering. The cocktail party problem and BSS led to the development of independent component analysis (ICA) algorithms, which prove useful for applications needing real-time signal processing. The goal of this research was to perform an extensive study of the ability and efficiency of ICA algorithms to perform blind source separation on mixed signals in software, and to implement the best candidate in hardware with a Field Programmable Gate Array (FPGA). The Algebraic ICA (A-ICA), Fast ICA, and Equivariant Adaptive Separation via Independence (EASI) algorithms were examined and compared. The best algorithm was the one requiring the least complexity and fewest resources while effectively separating mixed sources; this was the EASI algorithm, which was then implemented on an FPGA to analyze its performance in real time.
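The EASI rule that the study found most hardware-friendly can be sketched as follows. This is the generic textbook serial update with a cubic nonlinearity, not the authors' FPGA design; the mixing matrix, step size, and sources below are illustrative assumptions.

```python
import numpy as np

def easi_separate(x, mu=0.005, epochs=3):
    """Serial EASI update: W <- W + mu*(I - y y^T - g(y) y^T + y g(y)^T) W,
    with nonlinearity g(y) = y**3. x has shape (n_sources, n_samples)."""
    n = x.shape[0]
    W = np.eye(n)
    I = np.eye(n)
    for _ in range(epochs):
        for k in range(x.shape[1]):
            y = W @ x[:, k]
            g = y ** 3
            W = W + mu * (I - np.outer(y, y) - np.outer(g, y) + np.outer(y, g)) @ W
    return W

# Two toy sources mixed by an illustrative matrix:
t = np.arange(4000)
s = np.vstack([np.sin(0.1 * t),
               np.random.default_rng(0).uniform(-1, 1, t.size)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
W = easi_separate(A @ s)
y = W @ (A @ s)
```

The (I - y y^T) term drives the output covariance toward identity, while the antisymmetric nonlinear term rotates the whitened outputs toward independence; only outer products and a small matrix multiply are needed per sample, which is what makes the rule attractive for FPGAs.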
NASA Astrophysics Data System (ADS)
Finney, Greg A.; Persons, Christopher M.; Henning, Stephan; Hazen, Jessie; Whitley, Daniel
2014-06-01
IERUS Technologies, Inc. and the University of Alabama in Huntsville have partnered to perform characterization and development of algorithms and hardware for adaptive optics. To date the algorithm work has focused on implementation of the stochastic parallel gradient descent (SPGD) algorithm. SPGD is a metric-based approach in which a scalar metric is optimized by taking random perturbative steps for many actuators simultaneously. This approach scales to systems with a large number of actuators while maintaining bandwidth, while conventional methods are negatively impacted by the very large matrix multiplications that are required. The metric approach enables the use of higher speed sensors with fewer (or even a single) sensing element(s), enabling a higher control bandwidth. Furthermore, the SPGD algorithm is model-free, and thus is not strongly impacted by the presence of nonlinearities which degrade the performance of conventional phase reconstruction methods. Finally, for high energy laser applications, SPGD can be performed using the primary laser beam without the need for an additional beacon laser. The conventional SPGD algorithm was modified to use an adaptive gain to improve convergence while maintaining low steady state error. Results from laboratory experiments using phase plates as atmosphere surrogates will be presented, demonstrating areas in which the adaptive gain yields better performance and areas which require further investigation.
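The SPGD loop described above can be sketched compactly. This is a minimal illustration: a noise-free quadratic stands in for the optical metric, and the decaying gain schedule is an assumption, not the adaptive-gain rule developed by the authors.

```python
import numpy as np

def spgd(metric, n_act, iters=500, amp=0.1, gain0=2.0, seed=1):
    """Stochastic parallel gradient descent: perturb all actuators at once
    with random +/-amp steps and climb the two-sided metric difference."""
    rng = np.random.default_rng(seed)
    u = np.zeros(n_act)
    for k in range(iters):
        delta = amp * rng.choice([-1.0, 1.0], size=n_act)
        dJ = metric(u + delta) - metric(u - delta)
        gain = gain0 / (1.0 + k / 100.0)   # illustrative decaying gain
        u = u + gain * dJ * delta          # parallel update of all actuators
    return u

# Noise-free surrogate metric peaking at a hidden optimal actuator vector:
u_opt = np.linspace(-0.8, 0.8, 8)
metric = lambda u: -np.sum((u - u_opt) ** 2)
u_hat = spgd(metric, n_act=8)
```

Note that each iteration costs only two metric evaluations regardless of the number of actuators, which is the scaling advantage over matrix-based reconstructors mentioned above.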
A new multiobjective performance criterion used in PID tuning optimization algorithms
Sahib, Mouayad A.; Ahmed, Bestoun S.
2015-01-01
In PID controller design, an optimization algorithm is commonly employed to search for the optimal controller parameters. The optimization algorithm is based on a specific performance criterion defined by an objective or cost function. To this end, different objective functions have been proposed in the literature to optimize the response of the controlled system. These functions include numerous weighted time and frequency domain variables. However, for an optimum desired response it is difficult to select the appropriate objective function or identify the best weight values required to optimize the PID controller design. This paper presents a new time domain performance criterion based on multiobjective Pareto front solutions. The proposed objective function is tested in the PID controller design for an automatic voltage regulator (AVR) system using a particle swarm optimization algorithm. Simulation results show that the proposed performance criterion can greatly improve PID tuning optimization in comparison with traditional objective functions. PMID:26843978
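The kind of search the paper builds on can be sketched with a minimal particle swarm tuning a PID loop. Everything here is an illustrative assumption: the first-order plant, the actuator limit, the ITAE-style cost, and the swarm constants are placeholders, not the AVR model or criterion from the paper.

```python
import numpy as np

def itae_cost(gains, tau=1.0, dt=0.01, T=5.0, setpoint=1.0):
    """Integral of time-weighted absolute error for a PID-controlled
    first-order plant dy/dt = (u - y)/tau, with actuator limit |u| <= 10."""
    kp, ki, kd = gains
    y, integ, e_prev, cost = 0.0, 0.0, setpoint, 0.0
    for k in range(int(T / dt)):
        e = setpoint - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        u = max(-10.0, min(10.0, u))          # illustrative actuator limit
        y += dt * (u - y) / tau
        cost += (k * dt) * abs(e) * dt
        e_prev = e
    return cost

def pso_tune(n_particles=12, iters=30, seed=2):
    """Minimal global-best PSO over (Kp, Ki, Kd) in [0, 5]^3."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 5.0, (n_particles, 3))
    pos[0] = [1.0, 1.0, 0.0]                  # seed one sane particle
    vel = np.zeros_like(pos)
    pbest, pcost = pos.copy(), np.array([itae_cost(p) for p in pos])
    g = pbest[np.argmin(pcost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, 0.0, 5.0)
        cost = np.array([itae_cost(p) for p in pos])
        improved = cost < pcost
        pbest[improved], pcost[improved] = pos[improved], cost[improved]
        g = pbest[np.argmin(pcost)].copy()
    return g, pcost.min()

best_gains, best_cost = pso_tune()
```

The paper's contribution replaces the scalar cost above with a Pareto-front-based criterion; the swarm mechanics are unchanged.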
NASA Astrophysics Data System (ADS)
Julge, Kalev; Ellmann, Artu; Gruno, Anti
2014-01-01
Numerous filtering algorithms have been developed to distinguish the ground surface from nonground points acquired by airborne laser scanning. These algorithms automatically attempt to determine the ground points using various features, such as predefined parameters and statistical analysis, and their efficiency also depends on landscape characteristics. The aim of this contribution is to test the performance of six common filtering algorithms embedded in three freeware programs: adaptive TIN, elevation threshold with expand window, maximum local slope, progressive morphology, multiscale curvature, and linear prediction. They were tested on four relatively large (4 to 8 km2) and diverse landscape areas, which included steep sloped hills, urban areas, ridge-like eskers, and a river valley. The results show that in diverse test areas each algorithm yields different commission and omission errors. It appears that adaptive TIN is suitable in urban areas, while the multiscale curvature algorithm is best suited to wooded areas. The multiscale curvature algorithm yielded the overall best results, with average root-mean-square error values of 0.35 m.
Experimental Performance of a Genetic Algorithm for Airborne Strategic Conflict Resolution
NASA Technical Reports Server (NTRS)
Karr, David A.; Vivona, Robert A.; Roscoe, David A.; DePascale, Stephen M.; Consiglio, Maria
2009-01-01
The Autonomous Operations Planner, a research prototype flight-deck decision support tool to enable airborne self-separation, uses a pattern-based genetic algorithm to resolve predicted conflicts between the ownship and traffic aircraft. Conflicts are resolved by modifying the active route within the ownship's flight management system according to a predefined set of maneuver pattern templates. The performance of this pattern-based genetic algorithm was evaluated in the context of batch-mode Monte Carlo simulations running over 3600 flight hours of autonomous aircraft in en-route airspace under conditions ranging from typical current traffic densities to several times that level. Encountering over 8900 conflicts during two simulation experiments, the genetic algorithm was able to resolve all but three conflicts, while maintaining a required time of arrival constraint for most aircraft. Actual elapsed running time for the algorithm was consistent with conflict resolution in real time. The paper presents details of the genetic algorithm's design, along with mathematical models of the algorithm's performance and observations regarding the effectiveness of using complementary maneuver patterns when multiple resolutions by the same aircraft were required.
Lamberti, Alfredo; Vanlanduit, Steve; De Pauw, Ben; Berghmans, Francis
2014-01-01
The working principle of fiber Bragg grating (FBG) sensors is mostly based on the tracking of the Bragg wavelength shift. To accomplish this task, different algorithms have been proposed, from conventional maximum and centroid detection algorithms to more recently-developed correlation-based techniques. Several studies regarding the performance of these algorithms have been conducted, but they did not take into account spectral distortions, which appear in many practical applications. This paper addresses this issue and analyzes the performance of four different wavelength tracking algorithms (maximum detection, centroid detection, cross-correlation and fast phase-correlation) when applied to distorted FBG spectra used for measuring dynamic loads. Both simulations and experiments are used for the analyses. The dynamic behavior of distorted FBG spectra is simulated using the transfer-matrix approach, and the amount of distortion of the spectra is quantified using dedicated distortion indices. The algorithms are compared in terms of achievable precision and accuracy. To corroborate the simulation results, experiments were conducted using three FBG sensors glued on a steel plate and subjected to a combination of transverse force and vibration loads. The analysis of the results showed that the fast phase-correlation algorithm guarantees the best combination of versatility, precision and accuracy. PMID:25521386
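Two of the four trackers compared above can be sketched directly. This is a minimal illustration on a synthetic Gaussian FBG spectrum with an integer-sample shift; the paper's fast phase-correlation variant and any sub-sample interpolation are not reproduced.

```python
import numpy as np

def centroid_shift(ref, meas, wl):
    """Bragg shift as the difference of intensity-weighted spectral centroids."""
    c_ref = np.sum(wl * ref) / np.sum(ref)
    c_meas = np.sum(wl * meas) / np.sum(meas)
    return c_meas - c_ref

def xcorr_shift(ref, meas, dwl):
    """Bragg shift from the peak of the full cross-correlation,
    resolved to the nearest wavelength sample."""
    c = np.correlate(meas, ref, mode="full")
    lag = np.argmax(c) - (len(ref) - 1)
    return lag * dwl

# Synthetic Gaussian reflection spectrum shifted by 0.08 nm:
wl = np.arange(1549.0, 1551.0, 0.01)   # nm, 0.01 nm sampling
ref = np.exp(-((wl - 1550.0) / 0.05) ** 2)
meas = np.exp(-((wl - 1550.08) / 0.05) ** 2)
```

On a clean Gaussian both trackers agree; the paper's point is how they diverge once the measured spectrum is distorted, where correlation-based estimates degrade more gracefully than the centroid.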
NASA Astrophysics Data System (ADS)
MOHAMMED, M. A. SI; BOUSSADIA, H.; BELLAR, A.; ADNANE, A.
2017-01-01
This paper presents a brief synthesis and performance analysis of different attitude filtering algorithms (attitude determination algorithms, attitude estimation algorithms, and nonlinear observers) applied to a Low Earth Orbit satellite, in terms of accuracy, convergence time, memory footprint, and computation time. The latter is calculated in two ways: using a personal computer, and using the On-Board Computer 750 (OBC 750) that is used in many SSTL Earth observation missions. This comparative study can serve as a design aid for choosing among attitude determination, attitude estimation, and attitude observer algorithms. The simulation results clearly indicate that the nonlinear observer is the most logical choice.
The high performance parallel algorithm for Unified Gas-Kinetic Scheme
NASA Astrophysics Data System (ADS)
Li, Shiyi; Li, Qibing; Fu, Song; Xu, Jinxiu
2016-11-01
A high performance parallel algorithm for UGKS is developed to simulate three-dimensional internal and external flows on arbitrary grid systems. The physical and velocity domains are divided into different blocks and distributed according to a two-dimensional Cartesian topology, with intra-communicators in the physical domain for data exchange and separate intra-communicators in the velocity domain for sum reduction of the moment integrals. Numerical results for three-dimensional cavity flow and flow past a sphere agree well with existing studies and validate the applicability of the algorithm. The scalability of the algorithm is tested on both small (1-16) and large (729-5832) numbers of processors. The measured speed-up ratio is nearly linear and thus the efficiency is around 1, which reveals the good scalability of the present algorithm.
Significant Differences in Pediatric Psychotropic Side Effects: Implications for School Performance
ERIC Educational Resources Information Center
Kubiszyn, Thomas; Mire, Sarah; Dutt, Sonia; Papathopoulos, Katina; Burridge, Andrea Backsheider
2012-01-01
Some side effects (SEs) of increasingly prescribed psychotropic medications can impact student performance in school. SE risk varies, even among drugs from the same class (e.g., antidepressants). Knowing which SEs occur significantly more often than others may enable school psychologists to enhance collaborative risk-benefit analysis, medication…
Montilla, I; Béchet, C; Le Louarn, M; Reyes, M; Tallon, M
2010-11-01
Extremely Large Telescopes (ELTs) are very challenging with respect to their adaptive optics (AO) requirements. Their diameters and the specifications required by the astronomical science for which they are being designed imply a huge increment in the number of degrees of freedom in the deformable mirrors. Faster algorithms are needed to implement the real-time reconstruction and control in AO at the required speed. We present the results of a study of the AO correction performance of three different algorithms applied to the case of a 42-m ELT: one considered as a reference, the matrix-vector multiply (MVM) algorithm; and two considered fast, the fractal iterative method (FrIM) and the Fourier transform reconstructor (FTR). The MVM and the FrIM both provide a maximum a posteriori estimation, while the FTR provides a least-squares one. The algorithms are tested on the European Southern Observatory (ESO) end-to-end simulator, OCTOPUS. The performance is compared using a natural guide star single-conjugate adaptive optics configuration. The results demonstrate that the methods have similar performance in a large variety of simulated conditions. However, with respect to system misregistrations, the fast algorithms demonstrate an interesting robustness.
A fast and high performance multiple data integration algorithm for identifying human disease genes
2015-01-01
Background Integrating multiple data sources is indispensable for improving disease gene identification, not only because disease genes associated with similar genetic diseases tend to lie close to one another in various biological networks, but also because gene-disease associations are complex. Although various algorithms have been proposed to identify disease genes, their prediction performance and computational time still need to be improved. Results In this study, we propose a fast and high performance multiple data integration algorithm for identifying human disease genes. A posterior probability of each candidate gene associated with individual diseases is calculated by using a Bayesian analysis method and a binary logistic regression model. Two prior probability estimation strategies and two feature vector construction methods are developed to test the performance of the proposed algorithm. Conclusions The proposed algorithm not only generates predictions with high AUC scores, but also runs very fast. When only a single PPI network is employed, the AUC score is 0.769 using F2 as feature vectors, and the average running time for each leave-one-out experiment is only around 1.5 seconds. When three biological networks are integrated, the AUC score using F3 as feature vectors increases to 0.830, and each leave-one-out experiment takes only about 12.54 seconds. This is better than many existing algorithms. PMID:26399620
CCA performance of a new source list/EZW hybrid compression algorithm
NASA Astrophysics Data System (ADS)
Huber, A. Kris; Budge, Scott E.; Moon, Todd K.; Bingham, Gail E.
2001-11-01
A new data compression algorithm for encoding astronomical source lists is presented. Two experiments in combined compression and analysis (CCA) are described, the first using simulated imagery based upon a tractable source list model, and the second using images from SPIRIT III, a spaceborne infrared sensor. A CCA system consisting of the source list compressor followed by a zerotree-wavelet residual encoder is compared to alternatives based on three other astronomical image compression algorithms. CCA performance is expressed in terms of image distortion along with relevant measures of point source detection and estimation quality. Some variations of performance with compression bit rate and point source flux are characterized. While most of the compression algorithms reduce high-frequency quantum noise at certain bit rates, conclusive evidence is not found that such denoising brings an improvement in point source detection or estimation performance of the CCA systems. The proposed algorithm is a top performer in every measure of CCA performance; the computational complexity is relatively high, however.
Performance of the reconstruction algorithms of the FIRST experiment pixel sensors vertex detector
NASA Astrophysics Data System (ADS)
Rescigno, R.; Finck, Ch.; Juliani, D.; Spiriti, E.; Baudot, J.; Abou-Haidar, Z.; Agodi, C.; Alvarez, M. A. G.; Aumann, T.; Battistoni, G.; Bocci, A.; Böhlen, T. T.; Boudard, A.; Brunetti, A.; Carpinelli, M.; Cirrone, G. A. P.; Cortes-Giraldo, M. A.; Cuttone, G.; De Napoli, M.; Durante, M.; Gallardo, M. I.; Golosio, B.; Iarocci, E.; Iazzi, F.; Ickert, G.; Introzzi, R.; Krimmer, J.; Kurz, N.; Labalme, M.; Leifels, Y.; Le Fevre, A.; Leray, S.; Marchetto, F.; Monaco, V.; Morone, M. C.; Oliva, P.; Paoloni, A.; Patera, V.; Piersanti, L.; Pleskac, R.; Quesada, J. M.; Randazzo, N.; Romano, F.; Rossi, D.; Rousseau, M.; Sacchi, R.; Sala, P.; Sarti, A.; Scheidenberger, C.; Schuy, C.; Sciubba, A.; Sfienti, C.; Simon, H.; Sipala, V.; Tropea, S.; Vanstalle, M.; Younis, H.
2014-12-01
Hadrontherapy treatments use charged particles (e.g. protons and carbon ions) to treat tumors. During a therapeutic treatment with carbon ions, the beam undergoes nuclear fragmentation processes giving rise to significant yields of secondary charged particles. An accurate prediction of these production rates is necessary to estimate precisely the dose deposited into the tumors and the surrounding healthy tissues. Nowadays, only a limited set of double differential carbon fragmentation cross-sections is available. Experimental data are necessary to benchmark Monte Carlo simulations for their use in hadrontherapy. The purpose of the FIRST experiment is to study nuclear fragmentation processes of ions with kinetic energy in the range from 100 to 1000 MeV/u. Tracks are reconstructed using information from a pixel silicon detector based on the CMOS technology. The performances achieved using this device for hadrontherapy purposes are discussed. For each reconstruction step (clustering, tracking and vertexing), different methods are implemented. The algorithm performances and the accuracy on reconstructed observables are evaluated on the basis of simulated and experimental data.
NASA Astrophysics Data System (ADS)
Mantini, D.; Hild, K. E., II; Alleva, G.; Comani, S.
2006-02-01
Independent component analysis (ICA) algorithms have been successfully used for signal extraction tasks in the field of biomedical signal processing. We studied the performances of six algorithms (FastICA, CubICA, JADE, Infomax, TDSEP and MRMI-SIG) for fetal magnetocardiography (fMCG). Synthetic datasets were used to check the quality of the separated components against the original traces. Real fMCG recordings were simulated with linear combinations of typical fMCG source signals: maternal and fetal cardiac activity, ambient noise, maternal respiration, sensor spikes and thermal noise. Clusters of different dimensions (19, 36 and 55 sensors) were prepared to represent different MCG systems. Two types of signal-to-interference ratios (SIR) were measured. The first involves averaging over all estimated components and the second is based solely on the fetal trace. The computation time to reach a minimum of 20 dB SIR was measured for all six algorithms. No significant dependency on gestational age or cluster dimension was observed. Infomax performed poorly when a sub-Gaussian source was included; TDSEP and MRMI-SIG were sensitive to additive noise, whereas FastICA, CubICA and JADE showed the best performances. Of all six methods considered, FastICA had the best overall performance in terms of both separation quality and computation times.
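The second, fetal-trace-only SIR above can be sketched as follows. The projection-based definition below is a standard one and is an assumption; the paper may define its SIR differently.

```python
import numpy as np

def sir_db(source, estimate):
    """Output signal-to-interference ratio in dB: project the estimated
    component onto the true source and compare signal vs. residual power."""
    s = source / np.linalg.norm(source)
    signal = np.dot(estimate, s) * s       # part of estimate explained by source
    interference = estimate - signal       # everything else
    return 10.0 * np.log10(np.sum(signal ** 2) / np.sum(interference ** 2))

# Fetal trace recovered with a small orthogonal interference term:
t = 2.0 * np.pi * np.arange(1000) / 1000.0
fetal = np.sin(t)
estimate = fetal + 0.1 * np.cos(t)        # cos is orthogonal to sin here
```

With an interference amplitude of 0.1 relative to the source, the power ratio is 100, i.e. 20 dB, matching the convergence threshold used in the timing comparison above.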
Signal and image processing algorithm performance in a virtual and elastic computing environment
NASA Astrophysics Data System (ADS)
Bennett, Kelly W.; Robertson, James
2013-05-01
The U.S. Army Research Laboratory (ARL) supports the development of classification, detection, tracking, and localization algorithms using multiple sensing modalities including acoustic, seismic, E-field, magnetic field, PIR, and visual and IR imaging. Multimodal sensors collect large amounts of data in support of algorithm development. The resulting large volume of data, and its associated high-performance computing needs, challenges existing computing infrastructures. Purchasing computer power as a commodity using a Cloud service offers low-cost, pay-as-you-go pricing models, scalability, and elasticity that may provide solutions to develop and optimize algorithms without having to procure additional hardware and resources. This paper provides a detailed look at using a commercial cloud service provider, such as Amazon Web Services (AWS), to develop and deploy simple signal and image processing algorithms in a cloud and run the algorithms on a large set of data archived in the ARL Multimodal Signatures Database (MMSDB). Analytical results provide performance comparisons with existing infrastructure. A discussion of using cloud computing with government data covers best security practices available within cloud services, such as AWS.
Performance of 12 DIR algorithms in low-contrast regions for mass and density conserving deformation
Yeo, U. J.; Supple, J. R.; Franich, R. D.; Taylor, M. L.; Smith, R.; Kron, T.
2013-10-15
Purpose: Deformable image registration (DIR) has become a key tool for adaptive radiotherapy to account for inter- and intrafraction organ deformation. Of contemporary interest, the application to deformable dose accumulation requires accurate deformation even in low contrast regions where dose gradients may exist within near-uniform tissues. One expects high-contrast features to generally be deformed more accurately by DIR algorithms. The authors systematically assess the accuracy of 12 DIR algorithms and quantitatively examine, in particular, low-contrast regions, where accuracy has not previously been established.Methods: This work investigates DIR algorithms in three dimensions using deformable gel (DEFGEL) [U. J. Yeo, M. L. Taylor, L. Dunn, R. L. Smith, T. Kron, and R. D. Franich, “A novel methodology for 3D deformable dosimetry,” Med. Phys. 39, 2203–2213 (2012)], for application to mass- and density-conserving deformations. CT images of DEFGEL phantoms with 16 fiducial markers (FMs) implanted were acquired in deformed and undeformed states for three different representative deformation geometries. Nonrigid image registration was performed using 12 common algorithms in the public domain. The optimum parameter setup was identified for each algorithm and each was tested for deformation accuracy in three scenarios: (I) original images of the DEFGEL with 16 FMs; (II) images with eight of the FMs mathematically erased; and (III) images with all FMs mathematically erased. The deformation vector fields obtained for scenarios II and III were then applied to the original images containing all 16 FMs. The locations of the FMs estimated by the algorithms were compared to actual locations determined by CT imaging. The accuracy of the algorithms was assessed by evaluation of three-dimensional vectors between true marker locations and predicted marker locations.Results: The mean magnitude of 16 error vectors per sample ranged from 0.3 to 3.7, 1.0 to 6.3, and 1.3 to 7
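The accuracy metric used above, the magnitude of the 3D vectors between true and predicted fiducial marker locations, is straightforward to compute. The coordinates below are made-up illustrations, not DEFGEL data.

```python
import numpy as np

def registration_errors(true_fm, predicted_fm):
    """Per-marker 3D error magnitudes and their mean, in the same units
    (e.g. mm) as the input coordinates. Inputs have shape (n_markers, 3)."""
    errors = np.linalg.norm(predicted_fm - true_fm, axis=1)
    return errors, errors.mean()

# 16 illustrative fiducial markers; the DIR mispredicts each by a (3, 4, 0) offset:
true_fm = np.arange(48.0).reshape(16, 3)
predicted_fm = true_fm + np.array([3.0, 4.0, 0.0])
errors, mean_error = registration_errors(true_fm, predicted_fm)
```

Reporting the full distribution of per-marker errors, rather than the mean alone, is what lets the study separate algorithms that fail only in low-contrast regions from those that fail globally.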
Hasan, Laiq; Al-Ars, Zaid
2009-01-01
In this paper, we present an efficient and high performance linear recursive variable expansion (RVE) implementation of the Smith-Waterman (S-W) algorithm and compare it with a traditional linear systolic array implementation. The results demonstrate that the linear RVE implementation performs up to 2.33 times better than the traditional linear systolic array implementation, at the cost of utilizing 2 times more resources.
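For reference, the recurrence that both implementations accelerate is the standard Smith-Waterman dynamic program with a linear gap penalty; the scoring constants below are illustrative.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Best local alignment score via the standard O(len(a)*len(b))
    dynamic program; H[i][j] is clamped at zero so alignments can restart."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

Hardware designs such as the systolic array and RVE implementations above exploit the fact that cells on the same anti-diagonal of H are mutually independent and can be computed in parallel.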
Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander
2011-01-01
This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a new proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
The FPGA realization of a real-time Bayer image restoration algorithm with better performance
NASA Astrophysics Data System (ADS)
Ma, Huaping; Liu, Shuang; Zhou, Jiangyong; Tang, Zunlie; Deng, Qilin; Zhang, Hongliu
2014-11-01
As FPGA implementations of Bayer color interpolation have become widespread, better performance, real-time processing, and lower resource consumption have become the users' main demands. In order to achieve high-speed, high-quality Bayer image restoration with low resource consumption, the color reconstruction is designed and optimized in this article at both the interpolation-algorithm and FPGA-implementation levels. The hardware realization is then completed on an FPGA development platform, achieving real-time, high-fidelity image processing with low resource consumption in embedded image acquisition systems.
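A common software baseline against which such optimized designs are measured is plain bilinear interpolation of the Bayer mosaic, sketched below. This is a generic reference method, assumes an RGGB layout, and is not the optimized reconstruction of the article.

```python
import numpy as np

def conv3(img, k):
    """3x3 cross-correlation with zero padding (no SciPy dependency)."""
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += k[di, dj] * p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def demosaic_bilinear(mosaic):
    """Bilinear demosaic of an RGGB Bayer mosaic: each channel is the
    neighbor-weighted average of its own samples, normalized per pixel
    so borders are handled correctly."""
    h, w = mosaic.shape
    ii, jj = np.mgrid[0:h, 0:w]
    masks = {
        "r": ((ii % 2 == 0) & (jj % 2 == 0)).astype(float),
        "g": ((ii + jj) % 2 == 1).astype(float),
        "b": ((ii % 2 == 1) & (jj % 2 == 1)).astype(float),
    }
    k_rb = np.array([[0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25]])
    k_g = np.array([[0.0, 0.25, 0.0], [0.25, 1.0, 0.25], [0.0, 0.25, 0.0]])
    kernels = {"r": k_rb, "g": k_g, "b": k_rb}
    rgb = np.zeros((h, w, 3))
    for c, name in enumerate("rgb"):
        m = masks[name]
        rgb[:, :, c] = conv3(mosaic * m, kernels[name]) / conv3(m, kernels[name])
    return rgb
```

Because every interpolated pixel depends only on a 3x3 neighborhood, this structure maps directly onto a line-buffered FPGA pipeline, which is why bilinear reconstruction is the usual starting point for hardware designs.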
Global Precipitation Measurement (GPM) Microwave Imager Falling Snow Retrieval Algorithm Performance
NASA Astrophysics Data System (ADS)
Skofronick Jackson, Gail; Munchak, Stephen J.; Johnson, Benjamin T.
2015-04-01
Retrievals of falling snow from space represent an important data set for understanding the Earth's atmospheric, hydrological, and energy cycles. While satellite-based remote sensing provides global coverage of falling snow events, the science is relatively new and retrievals are still undergoing development with challenges and uncertainties remaining. This work reports on the development and post-launch testing of retrieval algorithms for the NASA Global Precipitation Measurement (GPM) mission Core Observatory satellite launched in February 2014. In particular, we will report on GPM Microwave Imager (GMI) radiometer instrument algorithm performance with respect to falling snow detection and estimation. Since GPM's launch, the at-launch GMI precipitation algorithms, based on a Bayesian framework, have been used with the new GPM data. The at-launch database is generated using proxy satellite data merged with surface measurements (instead of models). One year after launch, the Bayesian database will begin to be replaced with the more realistic observational data from the GPM spacecraft radar retrievals and GMI data. It is expected that the observational database will be much more accurate for falling snow retrievals because that database will take full advantage of the 166 and 183 GHz snow-sensitive channels. Furthermore, much retrieval algorithm work has been done to improve GPM retrievals over land. The Bayesian framework for GMI retrievals is dependent on the a priori database used in the algorithm and how profiles are selected from that database. Thus, a land classification sorts land surfaces into ~15 different categories for surface-specific databases (radiometer brightness temperatures are quite dependent on surface characteristics). In addition, our work has shown that knowing if the land surface is snow-covered, or not, can improve the performance of the algorithm. Improvements were made to the algorithm that allow for daily inputs of ancillary snow cover
NASA Astrophysics Data System (ADS)
Farhi, Edward; Gosset, David; Hen, Itay; Sandvik, A. W.; Shor, Peter; Young, A. P.; Zamponi, Francesco
2012-11-01
In this paper we study the performance of the quantum adiabatic algorithm on random instances of two combinatorial optimization problems, 3-regular 3-XORSAT and 3-regular max-cut. The cost functions associated with these two clause-based optimization problems are similar as they are both defined on 3-regular hypergraphs. For 3-regular 3-XORSAT the clauses contain three variables and for 3-regular max-cut the clauses contain two variables. The quantum adiabatic algorithms we study for these two problems use interpolating Hamiltonians which are amenable to sign-problem free quantum Monte Carlo and quantum cavity methods. Using these techniques we find that the quantum adiabatic algorithm fails to solve either of these problems efficiently, although for different reasons.
A Hybrid Neural Network-Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2001-01-01
In this paper, a model-based diagnostic method, which utilizes Neural Networks and Genetic Algorithms, is investigated. Neural networks are applied to estimate the engine internal health, and Genetic Algorithms are applied for sensor bias detection and estimation. This hybrid approach takes advantage of the nonlinear estimation capability provided by neural networks while improving the robustness to measurement uncertainty through the application of Genetic Algorithms. The hybrid diagnostic technique also has the ability to rank multiple potential solutions for a given set of anomalous sensor measurements in order to reduce false alarms and missed detections. The performance of the hybrid diagnostic technique is evaluated through some case studies derived from a turbofan engine simulation. The results show this approach is promising for reliable diagnostics of aircraft engines.
NASA Astrophysics Data System (ADS)
Islam, Syed Zahurul; Islam, Syed Zahidul; Jidin, Razali; Ali, Mohd. Alauddin Mohd.
2010-06-01
Computer vision and digital image processing comprise a variety of applications, including convolution, edge detection, and contrast enhancement. This paper presents an execution time optimization analysis of the Sobel and Canny image processing algorithms for moving-object edge detection. The Sobel and Canny edge detection algorithms are described with pseudo code and detailed flow charts, and implemented in C and MATLAB, respectively, on different platforms to evaluate performance and execution time for moving cars. It is shown that the Sobel algorithm is very effective for multiple moving cars and blurred images, with efficient execution time. Moreover, the convolution operation of Canny takes 94-95% of the total execution time and yields thin and smooth but redundant edges. This also makes Sobel the more robust choice for detecting the edges of moving cars.
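The Sobel stage benchmarked above reduces to two 3x3 convolutions plus a magnitude. A minimal software version using the standard Sobel kernels (the C/MATLAB benchmarking harness of the paper is not reproduced):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude from the standard Sobel kernels, computed on
    replicate-padded input so the output keeps the image shape."""
    gx_k = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gy_k = gx_k.T
    p = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for di in range(3):
        for dj in range(3):
            win = p[di:di + img.shape[0], dj:dj + img.shape[1]]
            gx += gx_k[di, dj] * win
            gy += gy_k[di, dj] * win
    return np.hypot(gx, gy)

# Vertical step edge: bright right half of a 10x10 frame.
img = np.zeros((10, 10))
img[:, 5:] = 1.0
mag = sobel_magnitude(img)
```

Each output pixel needs only nine multiply-accumulates per kernel and no iterative stages, which is the structural reason Sobel outruns Canny's smoothing-plus-hysteresis pipeline in the timing comparison above.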
Performance evaluation of a visual display calibration algorithm for iPad
NASA Astrophysics Data System (ADS)
De Paepe, Lode; De Bock, Peter; Vanovermeire, Olivier; Kimpe, Tom
2012-02-01
iPad devices have become very popular, including in the healthcare community, and there is an ever-growing demand to use tablets for displaying and reviewing medical images. However, a major problem is the lack of calibration and quality assurance of the iPad display. Medical displays used for review and diagnosis of medical images need to be calibrated to the DICOM GSDF standard to ensure sufficient image quality and reproducibility. This paper presents a convenient and reliable solution: an optimized visual calibration algorithm to calibrate and perform quality assurance (QA) tests on iPad devices. The algorithm allows a user to calibrate an iPad in only a few minutes, while a follow-up visual QA test to verify calibration status takes less than a minute. In the calibration phase, the user changes the position of a slider until a pattern barely becomes visible, for a small number of grey levels. In the QA test phase, the user needs to detect subtle patterns of varying size, contrast, and average luminance level. It is extremely important to accurately quantify the performance of the algorithm, so extensive tests have been performed: multiple devices have been evaluated under various lighting conditions and viewing angles, with a group of test users consisting of both non-clinical and clinical people. Results show that the algorithm is consistently able to calibrate an iPad device to DICOM GSDF with an average deviation smaller than 5% for indoor use and smaller than 12% for outdoor use. Tests have also shown that the algorithm is very reproducible and that there is little difference in performance between users.
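The reported "average deviation" metric can be illustrated with a small sketch. The helper below is an assumption about how percentage deviation from the GSDF target curve might be summarized, not the product's actual QA code:

```python
def calibration_deviation(measured, target):
    """Mean absolute percentage deviation between measured luminance
    values and target values (e.g., from the DICOM GSDF curve) at
    matched grey levels. Returns a percentage."""
    pairs = list(zip(measured, target))
    return sum(abs(m - t) / t for m, t in pairs) / len(pairs) * 100.0
```

For example, a display measuring 10.5 and 20.0 cd/m² where the GSDF target is 10.0 and 20.0 would report a 2.5% average deviation, well inside the 5% indoor figure quoted above.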
Performance enhancement for crystallization unit of a sugar plant using genetic algorithm technique
NASA Astrophysics Data System (ADS)
Tewari, P. C.; Khanduja, Rajiv; Gupta, Mahesh
2012-05-01
This paper deals with performance enhancement for the crystallization unit of a sugar plant using a genetic algorithm. The crystallization unit of a sugar industry has three main subsystems arranged in series. Assuming exponentially distributed failure and repair times, the mathematical formulation of the problem is done using a probabilistic approach, and differential equations are developed on the basis of the Markov birth-death process. These equations are then solved using normalizing conditions so as to determine the steady-state availability of the crystallization unit. The performance of each subsystem of the crystallization unit has also been optimized using a genetic algorithm. Thus, the findings of the present paper will be highly useful to plant management for the timely execution of proper maintenance decisions and, hence, for enhancing system performance.
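The steady-state availability computation can be sketched for a single two-state (up/down) subsystem and a series arrangement. This is a minimal illustration of the Markov birth-death formulation with hypothetical rates, not the paper's full three-subsystem model:

```python
import numpy as np

def unit_availability(lam, mu):
    """Steady-state availability of a two-state unit with failure rate
    lam and repair rate mu: solve pi Q = 0 with sum(pi) = 1."""
    Q = np.array([[-lam, lam],
                  [mu, -mu]], dtype=float)
    # Replace one balance equation by the normalising condition.
    A = np.vstack([Q.T[:-1], np.ones(2)])
    b = np.array([0.0, 1.0])
    pi = np.linalg.solve(A, b)
    return pi[0]  # probability of the 'up' state, mu / (lam + mu)

def system_availability(rates):
    """Series system: available only when every subsystem is up."""
    prod = 1.0
    for lam, mu in rates:
        prod *= unit_availability(lam, mu)
    return prod
```

With three identical subsystems in series, the system availability is the cube of the per-unit value, which is why even modest per-unit improvements found by the genetic algorithm pay off at the unit level.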
NASA Technical Reports Server (NTRS)
Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.
1997-01-01
Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling with Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state space explosion, where the number of states increases exponentially with the size of the high-level model. Thus, the corresponding Markov model becomes prohibitively large, and solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that can tolerate such large state spaces and that do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on the fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally efficient technique for Gauss-Seidel-based solvers that avoids the need for generating rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
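The on-the-fly idea can be sketched as a Gauss-Seidel solver that asks a callback for each sparse row instead of storing the matrix. This is an illustrative sketch only, with `row_generator` a hypothetical interface, not the authors' solver:

```python
import numpy as np

def gauss_seidel_on_the_fly(row_generator, b, n, sweeps=100):
    """Gauss-Seidel for A x = b where rows of A are produced on demand.

    row_generator(i) returns row i as (indices, values); the full matrix
    is never stored, mirroring the on-the-fly approach described above.
    Assumes a nonzero diagonal and a matrix for which the sweep converges
    (e.g. diagonally dominant)."""
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):
            idx, vals = row_generator(i)
            diag, s = 0.0, b[i]
            for j, a_ij in zip(idx, vals):
                if j == i:
                    diag = a_ij
                else:
                    s -= a_ij * x[j]  # uses freshly updated entries
            x[i] = s / diag
    return x
```

The trade-off is exactly the one the abstract describes: memory drops to O(n) for the iterate, at the cost of regenerating rows on every sweep, which the locality-aware caching of modified adaptive Gauss-Seidel is designed to mitigate.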
Using modified fruit fly optimisation algorithm to perform the function test and case studies
NASA Astrophysics Data System (ADS)
Pan, Wen-Tsao
2013-06-01
Evolutionary computation is a computing paradigm established by simulating natural evolutionary processes based on Darwinian theory, and it is a common research method; over time it has grown to include concepts from animal foraging and group behaviour. The main contribution of this paper is to reinforce the search for the optimal solution in the fruit fly optimization algorithm (FOA), in order to avoid getting trapped in local extrema. This study discussed three common evolutionary computation methods and compared them with the modified fruit fly optimization algorithm (MFOA). It investigated the ability of the algorithms to compute the extreme values of three mathematical functions, as well as their execution speed and the forecast ability of a forecasting model built from optimized general regression neural network (GRNN) parameters. The findings indicated that there was no obvious difference between particle swarm optimization and the MFOA in the ability to compute extreme values; however, both were better than the artificial fish swarm algorithm and the FOA. In addition, the MFOA performed better than particle swarm optimization in execution speed, and the forecast ability of the forecasting model built using the MFOA's GRNN parameters was better than that of the other three forecasting models.
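A minimal sketch of the basic FOA search loop follows; the MFOA in the paper adds mechanisms to escape local extrema, which are omitted here, and the bounds and step size are hypothetical:

```python
import random

def foa_minimize(f, dim=2, pop=20, iters=200, step=1.0, seed=0):
    """Simplified fruit fly optimisation: flies scatter randomly around
    the swarm location ('smell-based' search), and the swarm relocates
    to the best position found ('vision-based' move)."""
    rng = random.Random(seed)
    loc = [rng.uniform(-5, 5) for _ in range(dim)]
    best, best_val = list(loc), f(loc)
    for _ in range(iters):
        for _ in range(pop):
            cand = [x + rng.uniform(-step, step) for x in loc]
            v = f(cand)
            if v < best_val:
                best, best_val = cand, v
        loc = list(best)  # swarm flies toward the best position found
    return best, best_val
```

On a smooth test function such as the sphere, this greedy scatter-and-relocate loop homes in on the minimum; on multimodal functions it is exactly the scheme that risks stalling at a local extremum, which motivates the modification studied in the paper.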
NASA Astrophysics Data System (ADS)
Al-Akkoumi, Mouhammad K.; Harris, Alan; Huck, Robert C.; Sluss, James J., Jr.; Giuma, Tayeb A.
2008-02-01
Free-space optical (FSO) communications links are envisioned as a viable option for providing temporary high-bandwidth communication links between moving platforms, especially for deployment in battlefield situations. For successful deployment in such real-time environments, fast and accurate alignment and tracking of the FSO equipment is essential. In this paper, a two-wavelength diversity scheme using 1.55 μm and 10 μm is investigated in conjunction with a previously described tracking algorithm to maintain line-of-sight connectivity in battlefield scenarios. An analytical model of a mobile FSO communications link is described. Following the analytical model, simulation results are presented for an FSO link between an unmanned aerial surveillance vehicle, the Global Hawk, and a mobile ground vehicle, an M1 Abrams Main Battle Tank. The scenario is analyzed under varying weather conditions to verify that continuous connectivity is maintained by the tracking algorithm. Simulation results are generated to describe the performance of the tracking algorithm with respect to both received optical power levels and variations in beam divergence. Adjustments to the tracking algorithm suggested by these power and divergence variations are described for future tracking algorithm development.
Scott, Joshua I; Xue, Xiao; Wang, Ming; Kline, R Joseph; Hoffman, Benjamin C; Dougherty, Daniel; Zhou, Chuanzhen; Bazan, Guillermo; O'Connor, Brendan T
2016-06-08
Polymer semiconductors based on donor-acceptor monomers have recently resulted in significant gains in field effect mobility in organic thin film transistors (OTFTs). These polymers incorporate fused aromatic rings and have been designed to have stiff planar backbones, resulting in strong intermolecular interactions, which subsequently result in stiff and brittle films. The complex synthesis typically required for these materials may also result in increased production costs. Thus, the development of methods to improve mechanical plasticity while lowering material consumption during fabrication will significantly improve opportunities for adoption in flexible and stretchable electronics. To achieve these goals, we consider blending a brittle donor-acceptor polymer, poly[4-(4,4-dihexadecyl-4H-cyclopenta[1,2-b:5,4-b']dithiophen-2-yl)-alt-[1,2,5]thiadiazolo[3,4-c]pyridine] (PCDTPT), with ductile poly(3-hexylthiophene). We found that the ductility of the blend films is significantly improved compared to that of neat PCDTPT films, and when the blend film is employed in an OTFT, the performance is largely maintained. The ability to maintain charge transport character is due to vertical segregation within the blend, while the improved ductility is due to intermixing of the polymers throughout the film thickness. Importantly, the application of large strains to the ductile films is shown to orient both polymers, which further increases charge carrier mobility. These results highlight a processing approach to achieve high performance polymer OTFTs that are electrically and mechanically optimized.
NASA Astrophysics Data System (ADS)
Jafari, H.; Heidarzadeh, H.; Rostami, A.; Rostami, G.; Dolatyari, M.
2017-01-01
A photoconductive fractal antenna significantly improves the performance of photomixing-based continuous-wave (CW) terahertz (THz) systems. An analysis has been carried out for the generation of CW THz radiation by the photomixer photoconductive antenna technique. To increase the active area for generation, and hence the THz radiation power, we used interdigitated electrodes coupled with a fractal tree antenna. In this paper, both the semiconductor and the electromagnetic problems are considered. Photomixer devices with Thue-Morse fractal tree antennas in two configurations (narrow and wide) are discussed. This new approach gives better performance, especially in increasing the THz output power of photomixer devices, when compared with conventional structures. In addition, applying the interdigitated electrodes considerably improved the THz photocurrent, producing THz radiation power several times higher than photomixers with a simple gap.
Algorithmic, LOCS and HOCS (chemistry) exam questions: performance and attitudes of college students
NASA Astrophysics Data System (ADS)
Zoller, Uri
2002-02-01
The performance of freshman biology and physics-mathematics majors and chemistry majors, as well as pre- and in-service chemistry teachers, in two Israeli universities on algorithmic (ALG), lower-order cognitive skills (LOCS), and higher-order cognitive skills (HOCS) chemistry exam questions was studied. The driving force for the study was an interest in moving science and chemistry instruction from an algorithmic and factual recall orientation dominated by LOCS to a decision-making, problem-solving and critical system thinking approach dominated by HOCS. College students' responses to the specially designed ALG, LOCS and HOCS chemistry exam questions were scored and analysed for differences and correlations between the performance means, within and across universities, by question category. This was followed by a combined student interview and 'speaking aloud' problem-solving session for assessing the thinking processes involved in solving these types of questions and the students' attitudes towards them. The main findings were: (1) students in both universities performed consistently in each of the three categories in the order ALG > LOCS > HOCS; their 'ideological' preference was HOCS > algorithmic/LOCS (referred to as 'computational questions'), but their pragmatic preference was the reverse; (2) success on algorithmic/LOCS questions does not imply success on HOCS questions; algorithmic questions constitute a category of their own as far as students' success in solving them is concerned. Our study and its results support the effort being made, worldwide, to integrate HOCS-fostering teaching and assessment strategies and to develop HOCS-oriented science-technology-environment-society (STES)-type curricula within science and chemistry education.
Performance Evaluation of Different Ground Filtering Algorithms for Uav-Based Point Clouds
NASA Astrophysics Data System (ADS)
Serifoglu, C.; Gungor, O.; Yilmaz, V.
2016-06-01
Digital Elevation Model (DEM) generation is one of the leading application areas in geomatics. Since a DEM represents the bare earth surface, the very first step of generating a DEM is to separate the ground and non-ground points, which is called ground filtering. Once the point cloud is filtered, the ground points are interpolated to generate the DEM. LiDAR (Light Detection and Ranging) point clouds have been used in many applications thanks to their success in representing the objects they belong to. Hence, various ground filtering algorithms have been reported in the literature to filter LiDAR data. Since LiDAR data acquisition is still a costly process, using point clouds generated from UAV images to produce DEMs is a reasonable alternative. In this study, point clouds with three different densities were generated from aerial photos taken from a UAV (Unmanned Aerial Vehicle) to examine the effect of point density on filtering performance. The point clouds were then filtered by means of five different ground filtering algorithms: Progressive Morphological 1D (PM1D), Progressive Morphological 2D (PM2D), Maximum Local Slope (MLS), Elevation Threshold with Expand Window (ETEW) and Adaptive TIN (ATIN). The filtering performance of each algorithm was investigated qualitatively and quantitatively. The results indicated that the ATIN and PM2D algorithms showed the best overall ground filtering performance, while the MLS and ETEW algorithms were the least successful. It was concluded that point clouds generated from UAVs can be a good alternative to LiDAR data.
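The simplest of the filter families, an elevation-threshold approach in the spirit of ETEW, can be sketched as follows. The cell size and threshold are hypothetical parameters, and the algorithms evaluated in the paper are considerably more elaborate (expanding windows, morphological operators, adaptive TIN densification):

```python
import math

def threshold_ground_filter(points, cell=5.0, dz=0.5):
    """Crude ground filter: within each grid cell, points within dz of
    the cell's minimum elevation are labelled ground.

    points: iterable of (x, y, z) tuples.
    Returns (ground, non_ground) lists of points."""
    cells = {}
    for x, y, z in points:
        key = (math.floor(x / cell), math.floor(y / cell))
        cells.setdefault(key, []).append((x, y, z))
    ground, non_ground = [], []
    for pts in cells.values():
        zmin = min(p[2] for p in pts)
        for p in pts:
            (ground if p[2] - zmin <= dz else non_ground).append(p)
    return ground, non_ground
```

Even this toy version shows why point density matters: sparse cells may contain no true ground return at all, so the cell minimum (and everything near it) is misclassified, which is the failure mode the density comparison in the study probes.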
NASA Astrophysics Data System (ADS)
Ivanov, Martin; Warrach-Sagi, Kirsten; Wulfmeyer, Volker
2017-02-01
A new approach for rigorous spatial analysis of the downscaling performance of regional climate model (RCM) simulations is introduced. It is based on a multiple comparison of the local tests at the grid cells and is also known as 'field' or 'global' significance. The block length for the local resampling tests is precisely determined to adequately account for the time series structure. New performance measures for estimating the added value of downscaled data relative to the large-scale forcing fields are developed. The methodology is exemplarily applied to a standard EURO-CORDEX hindcast simulation with the Weather Research and Forecasting (WRF) model coupled with the land surface model NOAH at 0.11° grid resolution. Daily precipitation climatology for the 1990-2009 period is analysed for Germany for winter and summer in comparison with high-resolution gridded observations from the German Weather Service. The field significance test controls the proportion of falsely rejected local tests in a meaningful way and is robust to spatial dependence. Hence, the spatial patterns of the statistically significant local tests are also meaningful. We interpret them from a process-oriented perspective. While the downscaled precipitation distributions are statistically indistinguishable from the observed ones in most regions in summer, the biases of some distribution characteristics are significant over large areas in winter. WRF-NOAH generates appropriate stationary fine-scale climate features in the daily precipitation field over regions of complex topography in both seasons and appropriate transient fine-scale features almost everywhere in summer. As the added value of global climate model (GCM)-driven simulations cannot be smaller than this perfect-boundary estimate, this work demonstrates in a rigorous manner the clear additional value of dynamical downscaling over global climate simulations. The evaluation methodology has a broad spectrum of applicability as it is
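The local resampling test mentioned above can be sketched as a moving-block bootstrap of the mean, where contiguous blocks are resampled to preserve short-range serial dependence. This is an illustrative local test with hypothetical parameters, not the paper's full field-significance procedure:

```python
import random

def block_bootstrap_mean_test(sample, reference_mean, block_len=5,
                              n_boot=2000, seed=0):
    """Moving-block bootstrap p-value for the difference between the
    sample mean and a reference mean. Serial dependence up to roughly
    block_len lags is preserved by resampling contiguous blocks."""
    rng = random.Random(seed)
    n = len(sample)
    mean = sum(sample) / n
    obs = abs(mean - reference_mean)
    # Centre the series on the null hypothesis, keeping its structure.
    centred = [x - mean + reference_mean for x in sample]
    cblocks = [centred[i:i + block_len] for i in range(n - block_len + 1)]
    exceed = 0
    for _ in range(n_boot):
        resampled = []
        while len(resampled) < n:
            resampled.extend(rng.choice(cblocks))
        boot_mean = sum(resampled[:n]) / n
        if abs(boot_mean - reference_mean) >= obs:
            exceed += 1
    return (exceed + 1) / (n_boot + 1)
```

In the paper's setting one such local test runs at every grid cell, and the field significance step then asks whether the number and pattern of local rejections exceed what spatially dependent chance would produce.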
Imbir, Kamil K.
2016-01-01
Activation mechanisms such as arousal are known to be responsible for the slowdown observed in the Emotional Stroop and modified Stroop tasks. Using the duality-of-mind perspective, we may conclude that both ways of processing information (automatic and controlled) should have their own mechanisms of activation, namely arousal for the experiential mind and subjective significance for the rational mind. To investigate the consequences of both, a factorial manipulation was prepared. Other factors that influence Stroop task processing, such as valence, concreteness, frequency, and word length, were controlled. Subjective significance was expected to influence arousal effects. In the first study, the task was to name the font color of activation-charged words. In the second study, activation-charged words were combined with an incongruent condition of the classical Stroop task around a fixation point; the task was to indicate the font color for color-meaning words. In both studies, subjective significance was found to shape the impact of arousal on performance, in terms of reduced slowdown for words charged with subjective significance. PMID:26869974
Allen, Larry A.; Yood, Marianne Ulcickas; Wagner, Edward H.; Bowles, Erin J. Aiello; Pardee, Roy; Wellman, Robert; Habel, Laurel; Nekhlyudov, Larissa; Davis, Robert L.; Adedayo, Onitilo; Magid, David J.
2012-01-01
Background: Cardiotoxicity is a known complication of certain breast cancer therapies, but rates come from clinical trials with design features that limit external validity. The ability to accurately identify cardiotoxicity from administrative data would enhance safety information. Objective: To characterize the performance of claims-based algorithms for identification of cardiac dysfunction in a cancer population. Research Design: We sampled 400 charts among 6,460 women diagnosed with incident breast cancer, tumor size ≥2 cm or node positivity, treated within 8 US health care systems during 1999–2007. We abstracted medical records for clinical diagnoses of heart failure (HF) and cardiomyopathy (CM) or evidence of reduced left ventricular ejection fraction. We then assessed the performance of 3 different ICD-9-based algorithms. Results: The HF/CM coding algorithm designed a priori to balance performance characteristics provided a sensitivity of 62% (95% confidence interval 40–80%), specificity of 99% (97–99%), positive predictive value (PPV) of 69% (45–85%), and negative predictive value (NPV) of 98% (96–99%). When applied only to incident HF/CM (ICD-9 codes and gold standard diagnosis both appearing after breast cancer diagnosis) in patients exposed to anthracycline and/or trastuzumab therapy, the PPV was 42% (14–76%). Conclusions: Claims-based algorithms have moderate sensitivity and high specificity for identifying HF/CM among patients with invasive breast cancer. Because the prevalence of HF/CM among the breast cancer population is low, ICD-9 codes have high NPV but only moderate PPV. These findings suggest a significant degree of misclassification due to HF/CM overcoding versus incomplete clinical documentation of HF/CM in the medical record. PMID:22643199
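The reported performance measures all follow directly from a 2x2 confusion matrix against the chart-review gold standard. The counts in the usage example below are hypothetical, chosen only to land near the reported percentages:

```python
def algorithm_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix
    (tp/fp/fn/tn: true/false positives and negatives)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```

With illustrative counts such as tp=13, fp=6, fn=8, tn=373 (a low-prevalence sample of 400), sensitivity is about 62% and PPV about 68%, while specificity and NPV sit near 98-99%, mirroring the pattern described above: rare conditions make NPV easy and PPV hard.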
Focused R&D For Electrochromic Smart Windows: Significant Performance and Yield Enhancements
Mark Burdis; Neil Sbar
2003-01-31
There is a need to improve the energy efficiency of building envelopes as they are the primary factor governing the heating, cooling, lighting and ventilation requirements of buildings--influencing 53% of building energy use. In particular, windows contribute significantly to the overall energy performance of building envelopes, thus there is a need to develop advanced energy efficient window and glazing systems. Electrochromic (EC) windows represent the next generation of advanced glazing technology that will (1) reduce the energy consumed in buildings, (2) improve the overall comfort of the building occupants, and (3) improve the thermal performance of the building envelope. "Switchable" EC windows provide, on demand, dynamic control of visible light, solar heat gain, and glare without blocking the view. As exterior light levels change, the window's performance can be electronically adjusted to suit conditions. A schematic illustrating how SageGlass® electrochromic windows work is shown in Figure I.1. SageGlass® EC glazings offer the potential to save cooling and lighting costs, with the added benefit of improving thermal and visual comfort. Control over solar heat gain will also result in the use of smaller HVAC equipment. If a step change in the energy efficiency and performance of buildings is to be achieved, there is a clear need to bring EC technology to the marketplace. This project addresses accelerating the widespread introduction of EC windows in buildings and thus maximizing total energy savings in the U.S. and worldwide. We report on R&D activities to improve the optical performance needed to broadly penetrate the full range of architectural markets. Also, processing enhancements have been implemented to reduce manufacturing costs. Finally, tests are being conducted to demonstrate the durability of the EC device and the dual pane insulating glass unit (IGU) to be at least equal to that of conventional windows.
van Holle, Lionel; Bauchau, Vincent
2014-01-01
Purpose: Disproportionality methods measure how unexpected the observed number of adverse events is. Time-to-onset (TTO) methods measure how unexpected the TTO distribution of a vaccine-event pair is compared with what is expected from other vaccines and events. Our purpose is to compare the performance associated with each method. Methods: For the disproportionality algorithms, we defined 336 combinations of stratification factors (sex, age, region and year) and threshold values of the multi-item gamma Poisson shrinker (MGPS). For the TTO algorithms, we defined 18 combinations of significance level and time window. We used spontaneous reports of adverse events recorded for eight vaccines. The vaccine product labels were used as proxies for true safety signals. Algorithms were ranked according to their positive predictive value (PPV) for each vaccine separately; a median rank was attributed to each algorithm across vaccines. Results: The algorithm with the highest median rank was based on TTO with a significance level of 0.01 and a time window of 60 days after immunisation. It had an overall PPV 2.5 times higher than that of the highest-ranked MGPS algorithm (16th rank overall), which was fully stratified and had a threshold value of 0.8. A TTO algorithm with roughly the same sensitivity as the highest-ranked MGPS had better specificity but a longer time to detection. Conclusions: Within the scope of this study, the majority of the TTO algorithms presented a higher PPV than any MGPS algorithm. Considering the complementarity of TTO and disproportionality methods, a signal detection strategy combining them merits further investigation. PMID:24038719
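MGPS itself involves empirical-Bayes shrinkage, but the underlying disproportionality idea can be sketched with its simpler relative, the proportional reporting ratio (PRR), shown here with hypothetical report counts:

```python
def proportional_reporting_ratio(a, b, c, d):
    """PRR for a vaccine-event pair from a 2x2 contingency table:
        a: reports of the event for the vaccine of interest
        b: reports of other events for that vaccine
        c: reports of the event for all other vaccines
        d: reports of other events for all other vaccines
    A PRR well above 1 flags a disproportionately reported pair.
    (PRR is a simpler stand-in for the MGPS evaluated in the paper.)"""
    return (a / (a + b)) / (c / (c + d))
```

For example, 20 reports of an event among 100 for one vaccine versus 10 among 900 for all others gives a PRR of 18, the kind of disproportion a threshold-based algorithm would flag; the TTO methods compared above instead ask whether those 20 reports cluster unusually in time after immunisation.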
Tu, Chengjian; Shen, Shichen; Sheng, Quanhu; Shyr, Yu; Qu, Jun
2017-01-30
Reliable quantification of low-abundance proteins in complex proteomes is challenging, largely owing to the limited number of spectra/peptides identified. In this study we developed a straightforward method to improve quantitative accuracy and precision by strategically retrieving the less confident peptides that were previously filtered out under the standard target-decoy search strategy. The filtered-out MS/MS spectra matched to confidently identified proteins were recovered, and the peptide-spectrum-match (PSM) FDR was re-calculated and controlled at a confident level of FDR ≤ 1%, while protein FDR was maintained at ~1%. We evaluated the performance of this strategy in both spectral count- and ion current-based methods. An increase of >60% in total quantified spectra/peptides was achieved for a spike-in sample set and a public dataset from CPTAC, respectively. Incorporating the peptide retrieval strategy significantly improved quantitative accuracy and precision, especially for low-abundance proteins (e.g. one-hit proteins). Moreover, the capacity for confidently discovering significantly altered proteins was also enhanced substantially, as demonstrated with two spike-in datasets. In summary, improved quantitative performance was achieved by this peptide recovery strategy without compromising confidence of protein identification, and the strategy can be readily implemented in a broad range of quantitative proteomics techniques, including label-free and labeling approaches.
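The target-decoy FDR control underlying the retrieval strategy can be sketched as a score-ordered walk that keeps the largest set of PSMs whose estimated FDR (decoys/targets) stays below the cutoff. This is a simplified illustration, not the authors' exact pipeline:

```python
def fdr_filter(psms, max_fdr=0.01):
    """Accept the largest score-sorted prefix of PSMs whose estimated
    FDR (decoy hits / target hits) stays at or below max_fdr.

    psms: list of (score, is_decoy) pairs, higher score = better match.
    Returns the scores of the accepted target PSMs."""
    accepted, best = [], []
    decoys = targets = 0
    for score, is_decoy in sorted(psms, key=lambda p: -p[0]):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        accepted.append((score, is_decoy))
        if targets and decoys / targets <= max_fdr:
            best = list(accepted)
    return [s for s, d in best if not d]
```

The paper's recovery step amounts to re-running this kind of control on a restricted pool: only spectra matching already-confident proteins, so lower-scoring PSMs can pass at the same 1% FDR.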
Wolfe, Amy K.; Malone, Elizabeth L.; Heerwagen, Judith H.; Dion, Jerome P.
2014-04-01
The people who use Federal buildings — Federal employees, operations and maintenance staff, and the general public — can significantly impact a building's environmental performance and the consumption of energy, water, and materials. Many factors influence building occupants' use of resources (use behaviors), including work process requirements; the ability to fulfill agency missions; new and possibly unfamiliar high-efficiency/high-performance building technologies; a lack of understanding, education, and training; inaccessible information or ineffective feedback mechanisms; and cultural norms and institutional rules and requirements, among others. While many strategies have been used to introduce new occupant use behaviors that promote sustainability and reduced resource consumption, few have been verified in the scientific literature or have properly documented case study results. This paper documents validated strategies that have been shown to encourage new use behaviors that can result in significant, persistent, and measurable reductions in resource consumption. From the peer-reviewed literature, the paper identifies relevant strategies for Federal facilities and commercial buildings that focus on the individual, groups of individuals (e.g., work groups), and institutions — their policies, requirements, and culture. The paper documents methods with evidence of success in changing use behaviors and enabling occupants to effectively interact with new technologies/designs. It also provides a case study of the strategies used at a Federal facility — Fort Carson, Colorado. The paper documents gaps in the current literature and approaches, and provides topics for future research.
Doerr, Carola; Lengler, Johannes
2016-10-04
Black-box complexity theory provides lower bounds for the runtime of black-box optimizers like evolutionary algorithms and other search heuristics and serves as an inspiration for the design of new genetic algorithms. Several black-box models covering different classes of algorithms exist, each highlighting a different aspect of the algorithms under consideration. In this work we add to the existing black-box notions a new elitist black-box model, in which algorithms are required to base all decisions solely on (the relative performance of) a fixed number of the best search points sampled so far. Our elitist model thus combines features of the ranking-based and the memory-restricted black-box models with an enforced usage of truncation selection. We provide several examples for which the elitist black-box complexity is exponentially larger than the respective complexities in all previous black-box models, thus showing that the elitist black-box complexity can be much closer to the runtime of typical evolutionary algorithms. We also introduce the concept of p-Monte Carlo black-box complexity, which measures the time it takes to optimize a problem with failure probability at most p. Even for small p, the p-Monte Carlo black-box complexity of a function class F can be smaller by an exponential factor than its typically regarded Las Vegas complexity (which measures the expected time it takes to optimize F).
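A concrete instance of an elitist black-box algorithm is the classic (1+1) EA, which remembers only the single best search point and uses truncation selection. The sketch below runs it on the OneMax benchmark, a standard example in this literature rather than one taken from the paper:

```python
import random

def one_plus_one_ea(n, fitness, max_iters=100000, seed=1):
    """(1+1) EA: keep one parent, flip each bit independently with
    probability 1/n, and accept the offspring iff it is at least as fit
    (elitist truncation selection). Maximises fitness over bit strings
    of length n; stops early when fitness reaches n."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for _ in range(max_iters):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]  # standard bit mutation
        fy = fitness(y)
        if fy >= fx:  # elitist acceptance: never keep a worse point
            x, fx = y, fy
        if fx == n:
            break
    return x, fx
```

Since the algorithm's state is exactly one search point and its fitness, it fits the (1+1) elitist black-box model; on OneMax (fitness = number of one-bits) it finds the optimum in O(n log n) steps in expectation.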
2013-10-18
AFRL-RV-PS-TR-2013-0106
PERFORMANCE ANALYSIS OF THE ENHANCED BIO-INSPIRED PLANNING ALGORITHM FOR RAPID SITUATION ...
Qualified requestors may obtain copies of this report from the Defense Technical Information Center (DTIC) (http://www.dtic.mil). Sponsor/Monitor: Kirtland AFB, NM 87117-5776. Distribution/Availability Statement: Approved
Evaluation of odometry algorithm performances using a railway vehicle dynamic model
NASA Astrophysics Data System (ADS)
Allotta, B.; Pugi, L.; Ridolfi, A.; Malvezzi, M.; Vettori, G.; Rindi, A.
2012-05-01
In modern railway Automatic Train Protection and Automatic Train Control systems, odometry is a safety-relevant on-board subsystem which estimates the instantaneous speed and the travelled distance of the train; high reliability of the odometry estimate is fundamental, since an error on the train position may lead to a potentially dangerous overestimation of the distance available for braking. To improve the accuracy of the odometry estimate, data fusion of different inputs coming from a redundant sensor layout may be used. Simplified two-dimensional models of railway vehicles have usually been used for Hardware-in-the-Loop test rig testing of conventional odometry algorithms and of on-board safety-relevant subsystems (like the Wheel Slide Protection braking system) in which the train speed is estimated from measurements of the wheel angular speed. Two-dimensional models are not suitable for developing solutions like inertial localisation algorithms (using 3D accelerometers and 3D gyroscopes) or for introducing a Global Positioning System (or similar) receiver or a magnetometer. In order to test these algorithms correctly and increase odometry performance, a three-dimensional multibody model of a railway vehicle has been developed, using Matlab-Simulink™, including an efficient contact model which can simulate degraded adhesion conditions (the development and prototyping of odometry algorithms involve the simulation of realistic environmental conditions). In this paper, the authors show how a 3D railway vehicle model, able to simulate the complex interactions arising between different on-board subsystems, can be useful for evaluating the performance of odometry algorithms and of safety-relevant on-board subsystems.
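The data-fusion idea can be sketched with a complementary filter that blends integrated longitudinal acceleration (accurate short-term, drifts long-term) with wheel-speed measurements (drift-free, but unreliable under degraded adhesion). This is a minimal illustration with a hypothetical blending weight, not the localisation algorithms tested in the paper:

```python
def fuse_speed(wheel_speeds, accels, dt, alpha=0.98):
    """Complementary-filter speed estimate: propagate the previous
    estimate with the accelerometer, then correct it toward the
    wheel-speed measurement with weight (1 - alpha).

    wheel_speeds, accels: equal-length measurement sequences [m/s], [m/s^2].
    dt: sample interval [s]. Returns the fused speed estimates."""
    v = wheel_speeds[0]  # initialise from the first wheel measurement
    estimates = []
    for w, a in zip(wheel_speeds, accels):
        v = alpha * (v + a * dt) + (1 - alpha) * w
        estimates.append(v)
    return estimates
```

During a slip or slide event the wheel-speed channel goes wrong while the inertial term carries the estimate, which is exactly the scenario the degraded-adhesion contact model in the 3D simulator is meant to reproduce for testing.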
NASA Astrophysics Data System (ADS)
Chatterjee, A.; Ghoshal, S. P.; Mukherjee, V.
In this paper, a conventional thermal power system equipped with an automatic voltage regulator, an IEEE-type dual-input power system stabilizer (PSS) PSS3B, and an integral-controlled automatic generation control loop is considered. A distributed generation (DG) system consisting of an aqua electrolyzer, photovoltaic cells, a diesel engine generator, and other energy storage devices such as a flywheel energy storage system and a battery energy storage system is modeled. This hybrid distributed system is connected to the grid. While integrating this DG with the conventional thermal power system, improved transient performance is noticed. Further improvement in the transient performance of this grid-connected DG is observed with the use of a superconducting magnetic energy storage device. The tunable parameters of the proposed hybrid power system model are optimized by the artificial bee colony (ABC) algorithm. The optimal solutions offered by the ABC algorithm are compared with those offered by a genetic algorithm (GA). It is also revealed that the optimizing performance of the ABC is better than that of the GA for this specific application.
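A compact sketch of the ABC search loop follows. The onlooker-bee phase is omitted for brevity, so this is a simplified illustration of the algorithm family rather than the tuning setup used in the paper, and all parameters are hypothetical:

```python
import random

def abc_minimize(f, bounds, n_food=10, iters=200, limit=20, seed=0):
    """Simplified artificial bee colony: employed bees perturb food
    sources toward/away from a random neighbour and keep improvements;
    a source abandoned after `limit` failed trials is replaced by a
    scout with a fresh random position."""
    rng = random.Random(seed)
    dim = len(bounds)
    fresh = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    foods = [fresh() for _ in range(n_food)]
    vals = [f(x) for x in foods]
    trials = [0] * n_food
    best_val = min(vals)
    for _ in range(iters):
        for i in range(n_food):  # employed-bee phase
            k = rng.randrange(n_food)
            j = rng.randrange(dim)
            cand = list(foods[i])
            cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
            v = f(cand)
            if v < vals[i]:
                foods[i], vals[i], trials[i] = cand, v, 0
                best_val = min(best_val, v)
            else:
                trials[i] += 1
                if trials[i] > limit:  # scout: abandon exhausted source
                    foods[i] = fresh()
                    vals[i] = f(foods[i])
                    trials[i] = 0
    return best_val
```

In a controller-tuning application like the one above, `f` would be a transient-performance cost (e.g. an integral error criterion from a simulation run) and `bounds` the admissible ranges of the stabilizer and AGC gains.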
Orion Guidance and Control Ascent Abort Algorithm Design and Performance Results
NASA Technical Reports Server (NTRS)
Proud, Ryan W.; Bendle, John R.; Tedesco, Mark B.; Hart, Jeremy J.
2009-01-01
During the ascent flight phase of NASA's Constellation Program, the Ares launch vehicle propels the Orion crew vehicle to an agreed-upon insertion target. If a failure occurs at any point in time during ascent, then a system must be in place to abort the mission and return the crew to a safe landing with a high probability of success. To achieve continuous abort coverage one of two sets of effectors is used. Either the Launch Abort System (LAS), consisting of the Attitude Control Motor (ACM) and the Abort Motor (AM), or the Service Module (SM), consisting of the SM Orion Main Engine (OME), Auxiliary (Aux) jets, and Reaction Control System (RCS) jets, is used. The LAS effectors are used for aborts from liftoff through the first 30 seconds of second stage flight. The SM effectors are used from that point through Main Engine Cutoff (MECO). There are two distinct sets of Guidance and Control (G&C) algorithms that are designed to maximize the performance of these abort effectors. This paper will outline the necessary inputs to the G&C subsystem, the preliminary design of the G&C algorithms, the ability of the algorithms to predict what abort modes are achievable, and the resulting success of the abort system. Abort success will be measured against the Preliminary Design Review (PDR) abort performance metrics and overall performance will be reported. Finally, potential improvements to the G&C design will be discussed.
Palmer, M P; Abreu, E L; Mastrangelo, A; Murray, M M
2009-07-01
Collagen-platelet composites have recently been successfully used as scaffolds to stimulate anterior cruciate ligament (ACL) wound healing in large animal models. These materials are typically kept on ice until use to prevent premature gelation; however, with surgical use, placement of a cold solution then requires up to an hour while the solution comes to body temperature (at which point gelation occurs). Bringing the solution to a higher temperature before injection would likely decrease this intra-operative wait; however, the effects of this on composite performance are not known. The hypothesis tested here was that increasing the temperature of the gel at the time of injection would significantly decrease the time to gelation, but would not significantly alter the mechanical properties of the composite or its ability to support functional tissue repair. Primary outcome measures included the maximum elastic modulus (stiffness) of the composite in vitro and the in vivo yield load of an ACL transection treated with an injected collagen-platelet composite. In vitro findings were that injection temperatures over 30 degrees C resulted in a faster visco-elastic transition; however, the warmed composites had a 50% decrease in their maximum elastic modulus. In vivo studies found that warming the gels prior to injection also resulted in a decrease in the yield load of the healing ACL at 14 weeks. These studies suggest that increasing injection temperature of collagen-platelet composites results in a decrease in performance of the composite in vitro and in the strength of the healing ligament in vivo and this technique should be used only with great caution.
Friesen, J Brent; Pauli, Guido F
2008-01-09
A standard test mix consisting of 21 commercially available natural products of agricultural significance, termed the GUESSmix, was employed to measure the countercurrent chromatography performance characteristics of a very popular quaternary solvent system family made up of hexane-ethyl acetate-methanol-water (HEMWat). The polarity range of the GUESSmix combined with the elution-extrusion countercurrent chromatography (EECCC) technique and the newly developed reciprocal symmetry (ReS) and reciprocal shifted symmetry (ReSS) plots allow liquid-liquid distribution ratios (K(D)) to be plotted for every compound eluted on a scale of zero to infinity. It was demonstrated that 16 of the 21 GUESSmix compounds are found in the optimal range of resolution (0.25 < K(D) < 16) of at least one HEMWat solvent system. The HEMWat solvent systems represented by the ratios 4:6:5:5, 4:6:4:6, and 3:7:4:6 possess the most densely populated optimal ranges of resolution for this standard mix. ReS plots have been shown to reveal the symmetrical reversibility of the EECCC method in reference to K(D) = 1. This study lays the groundwork for evaluation and comparison of solvent system families proposed in the literature, as well as the creation of new solvent system families with desired performance characteristics.
He, Ting; Zu, Lianhai; Zhang, Yan; Mao, Chengliang; Xu, Xiaoxiang; Yang, Jinhu; Yang, Shihe
2016-08-23
Semiconductor nanowires that have been extensively studied are typically in a crystalline phase. Much less studied are amorphous semiconductor nanowires, due to the difficulty of their synthesis, despite a set of characteristics desirable for photoelectric devices, such as higher surface area, higher surface activity, and higher light harvesting. In this work of combined experiment and computation, taking Zn2GeO4 (ZGO) as an example, we propose a site-specific heteroatom substitution strategy through a solution-phase ions-alternative-deposition route to prepare amorphous/crystalline Si-incorporated ZGO nanowires with tunable band structures. The substitution of Si atoms for the Zn or Ge atoms distorts the bonding network to a different extent, leading to the formation of amorphous Zn1.7Si0.3GeO4 (ZSGO) or crystalline Zn2(GeO4)0.88(SiO4)0.12 (ZGSO) nanowires, respectively, with different bandgaps. The amorphous ZSGO nanowire arrays exhibit significantly enhanced performance in photoelectrochemical water splitting, such as a higher and more stable photocurrent and faster photoresponse and recovery, relative to the crystalline ZGSO and ZGO nanowires in this work, as well as ZGO photocatalysts reported previously. The remarkable performance highlights the advantages of the amorphous ZSGO nanowires for photoelectric devices, such as higher light harvesting capability, faster charge separation, lower charge recombination, and higher surface catalytic activity.
Zeng, Zhiping; Yu, Dingshan; He, Ziming; Liu, Jing; Xiao, Fang-Xing; Zhang, Yan; Wang, Rong; Bhattacharyya, Dibakar; Tan, Timothy Thatt Yang
2016-01-01
Covalent bonding of graphene oxide quantum dots (GOQDs) onto an amino-modified polyvinylidene fluoride (PVDF) membrane has generated a new type of nano-carbon functionalized membrane with significantly enhanced antibacterial and antibiofouling properties. A continuous filtration test using E. coli-containing feedwater shows that the relative flux drop over GOQD-modified PVDF is 23%, which is significantly lower than those over pristine PVDF (86%) and GO-sheet modified PVDF (62%) after 10 h of filtration. The presence of the GOQD coating layer effectively inactivates E. coli and S. aureus cells and prevents biofilm formation on the membrane surface, producing excellent antimicrobial activity and potentially antibiofouling capability, superior to those of previously reported two-dimensional GO-sheet and one-dimensional CNT modified membranes. The distinctive antimicrobial and antibiofouling performance could be attributed to the unique structure and uniform dispersion of GOQDs, enabling the exposure of a larger fraction of active edges and facilitating the formation of oxidation stress. Furthermore, the GOQD-modified membrane possesses satisfactory long-term stability and durability due to the strong covalent interaction between PVDF and GOQDs. This study opens up a new synthetic avenue in the fabrication of efficient surface-functionalized polymer membranes for potential wastewater treatment and biomolecule separation. PMID:26832603
Performance of MODIS Thermal Emissive Bands On-orbit Calibration Algorithms
NASA Technical Reports Server (NTRS)
Xiong, Xiaoxiong; Chang, T.
2009-01-01
The on-board blackbody (BB) serves as the thermal calibration source and the space view (SV) provides measurements for the sensor's background and offsets. The MODIS on-board BB is a v-grooved plate with its temperature measured using 12 platinum resistive thermistors (PRT) uniformly embedded in the BB substrate. All the BB thermistors were characterized pre-launch with reference to the NIST temperature standards. Unlike typical BB operations in many heritage sensors, which have no temperature control capability, the MODIS on-board BB can be operated at any temperature between instrument ambient (about 270 K) and 315 K and can also be varied continuously within this range. This feature has significantly enhanced MODIS' capability of tracking and updating the TEB nonlinear calibration coefficients over its entire mission. Following a brief description of MODIS TEB on-orbit calibration methodologies and its on-board BB operational activities, this paper provides a comprehensive performance assessment of the MODIS TEB quadratic calibration algorithm. It examines the scan-by-scan, orbit-by-orbit, daily, and seasonal variations of detector responses and the associated impact due to changes in the CFPA and instrument temperatures. Specifically, this paper analyzes the contribution of each individual thermal emissive source term (BB, scan cavity, and scan mirror) and the impact on Level 1B data product quality due to pre-launch and on-orbit calibration uncertainties. A comparison of Terra and Aqua TEB on-orbit performance, lessons learned, and suggestions for future improvements are also presented.
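The quadratic calibration algorithm assessed above retrieves scene radiance from background-corrected detector counts. A minimal sketch of that functional form (the coefficient names a0, b1, a2 follow the common MODIS TEB convention; the numeric values here are purely illustrative, not flight coefficients):

```python
def teb_radiance(dn, a0, b1, a2):
    """Quadratic TEB calibration: radiance retrieved from the
    background-corrected digital count dn."""
    return a0 + b1 * dn + a2 * dn * dn

# Illustrative coefficients only; real values are derived on orbit from the
# on-board blackbody and space-view measurements.
L = teb_radiance(10.0, 1.0, 2.0, 0.5)
```

On orbit, varying the BB temperature over its controllable range gives the multiple (dn, radiance) pairs needed to track the nonlinear coefficients.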
Performance of an efficient image-registration algorithm in processing MR renography data
Conlin, Christopher C.; Zhang, Jeff L.; Rousset, Florian; Vachet, Clement; Zhao, Yangyang; Morton, Kathryn A.; Carlston, Kristi; Gerig, Guido; Lee, Vivian S.
2015-01-01
Purpose: To evaluate the performance of an edge-based registration technique in correcting for respiratory motion artifacts in MR renographic data and to examine the efficiency of a semi-automatic software package in processing renographic data from a cohort of clinical patients. Materials and Methods: The developed software incorporates an image-registration algorithm based on the generalized Hough transform of edge maps. It was used to estimate GFR, RPF, and MTT from 36 patients who underwent free-breathing MR renography at 3T using saturation-recovery turbo-FLASH. Processing time required for each patient was recorded. Renal parameter estimates and model-fitting residues from the software were compared to those from a previously reported technique. Inter-reader variability in the software was quantified by the standard deviation of parameter estimates among three readers. GFR estimates from our software were also compared to a reference standard from nuclear medicine. Results: The time taken to process one patient's data with the software averaged 12 ± 4 minutes. The applied image registration effectively reduced motion artifacts in dynamic images by providing renal tracer-retention curves with significantly smaller fitting residues (P < 0.01) than unregistered data or data registered by the previously reported technique. Inter-reader variability was less than 10% for all parameters. GFR estimates from the proposed method showed greater concordance with reference values (P < 0.05). Conclusion: These results suggest that the proposed software can process MR renography data efficiently and accurately. Its incorporated registration technique based on the generalized Hough transform effectively reduces respiratory motion artifacts in free-breathing renographic acquisitions. PMID:26174884
Field Significance of Performance Measures in the Context of Regional Climate Model Verification
NASA Astrophysics Data System (ADS)
Ivanov, Martin; Warrach-Sagi, Kirsten; Wulfmeyer, Volker
2015-04-01
The purpose of this study is to rigorously evaluate the skill of dynamically downscaled global climate simulations. We investigate a dynamical downscaling of the ERA-Interim reanalysis using the Weather Research and Forecasting (WRF) model, coupled with the NOAH land surface model within the scope of EURO-CORDEX. WRF has a horizontal resolution of 0.11° and contains the following physics: the Yonsei University atmospheric boundary layer parameterization, the Morrison two-moment microphysics, the Kain-Fritsch-Eta convection and the Community Atmosphere Model radiation schemes. Daily precipitation is verified over Germany for summer and winter against high-resolution observation data from the German weather service for the first time. The ability of WRF to reproduce the statistical distribution of daily precipitation is evaluated using metrics based on distribution characteristics. Skill against the large-scale ERA-Interim data gives insight into the potential additional skill of dynamical downscaling. To quantify it, we transform the absolute performance measures into relative skill measures against ERA-Interim. Their field significance is rigorously estimated and locally significant regions are highlighted. Statistical distributions are better reproduced in summer than in winter. In both seasons WRF is too dry over mountain tops, due to high precipitation events that are underestimated and too rare and small precipitation events that are underestimated but too frequent. In winter WRF is too wet at windward sides and land-sea transition regions due to too frequent weak and moderate precipitation events. In summer it is too dry over land-sea transition regions, due to underestimated small precipitation and too rare moderate precipitation events, and too wet in some river valleys due to too frequent high precipitation events. Additional skill relative to ERA-Interim is documented for overall measures as well as measures regarding the spread and tails of the statistical distribution, but not regarding mean seasonal precipitation. The added
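The transformation of an absolute performance measure into relative skill against ERA-Interim can be sketched as follows; the function and the sample numbers are illustrative assumptions for an error-type measure where smaller is better, not the study's exact formula:

```python
def relative_skill(pm_model, pm_ref):
    """Skill of the downscaled model relative to a reference:
    positive when the model's (error-type) performance measure is smaller,
    zero when they match, negative when downscaling adds no value."""
    return (pm_ref - pm_model) / pm_ref

# Hypothetical mean absolute errors for daily precipitation (mm/day)
skill = relative_skill(1.2, 1.5)   # downscaled WRF vs ERA-Interim
```

Field significance then asks whether the count of locally significant skill values over the whole domain exceeds what spatial chance alone would produce.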
Xu, Hang; Su, Shi; Tang, Wuji; Wei, Meng; Wang, Tao; Wang, Dongjin; Ge, Weihong
2015-09-01
A large number of warfarin pharmacogenetics algorithms have been published. Our research aimed to evaluate the performance of selected pharmacogenetic algorithms in patients with surgery of heart valve replacement and heart valvuloplasty during the phases of initial and stable anticoagulation treatment. 10 pharmacogenetic algorithms were selected by searching PubMed. We compared the performance of the selected algorithms in a cohort of 193 patients during the phases of initial and stable anticoagulation therapy. Predicted dose was compared to therapeutic dose using the percentage of predicted doses that fall within 20% of the actual dose (percentage within 20%) and the mean absolute error (MAE). The average warfarin dose for patients was 3.05 ± 1.23 mg/day for initial treatment and 3.45 ± 1.18 mg/day for stable treatment. The percentages of the predicted dose within 20% of the therapeutic dose were 44.0 ± 8.8% and 44.6 ± 9.7% for the initial and stable phases, respectively. The MAEs of the selected algorithms were 0.85 ± 0.18 mg/day and 0.93 ± 0.19 mg/day, respectively. All algorithms had better performance in the ideal group than in the low-dose and high-dose groups. The only exception is the Wadelius et al. algorithm, which had better performance in the high-dose group. The algorithms had similar performance except for the Wadelius et al. and Miao et al. algorithms, which had poor accuracy in our study cohort. The Gage et al. algorithm had better performance in both the initial and stable phases of treatment. Algorithms had relatively higher accuracy in the >50 years group of patients in the stable phase.
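The two headline metrics, percentage within 20% and MAE, can be computed as in this short sketch (the dose arrays are hypothetical examples, not data from the study):

```python
def dose_metrics(predicted, actual):
    """Return (percentage of predictions within 20% of the actual dose,
    mean absolute error), both over paired dose lists in mg/day."""
    n = len(predicted)
    within = sum(1 for p, a in zip(predicted, actual) if abs(p - a) <= 0.2 * a)
    mae = sum(abs(p - a) for p, a in zip(predicted, actual)) / n
    return 100.0 * within / n, mae

# Hypothetical therapeutic vs algorithm-predicted doses (mg/day)
actual = [2.5, 3.0, 4.0, 5.0]
predicted = [2.4, 3.5, 4.5, 3.0]
pct20, mae = dose_metrics(predicted, actual)
```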
Assessment of next-best-view algorithms performance with various 3D scanners and manipulator
NASA Astrophysics Data System (ADS)
Karaszewski, M.; Adamczyk, M.; Sitnik, R.
2016-09-01
The problem of calculating three dimensional (3D) sensor position (and orientation) during the digitization of real-world objects (called next best view planning or NBV) has been an active topic of research for over 20 years. While many solutions have been developed, it is hard to compare their quality based only on the exemplary results presented in papers. We implemented 13 of the most popular NBV algorithms and evaluated their performance by digitizing five objects of various properties, using three measurement heads with different working volumes mounted on a 6-axis robot with a rotating table for placing objects. The results obtained for the 13 algorithms were then compared based on four criteria: the number of directional measurements, digitization time, total positioning distance, and surface coverage required to digitize test objects with available measurement heads.
Moreno, E; Laffond, E; Muñoz-Bellido, F; Gracia, M T; Macías, E; Moreno, A; Dávila, I
2016-12-01
The European Network on Drug Allergy (ENDA) has proposed an algorithm for diagnosing immediate beta-lactam (BL) allergy. We evaluated its performance in real life. During 1994-2014, 1779 patients with suspected immediate reactions to BL were evaluated following ENDA's short diagnostic algorithm. Five hundred and nine patients (28.6%) were diagnosed with BL hypersensitivity. Of them, 457 (25.7%) were diagnosed at first evaluation [403 by skin tests (ST), 12 by positive IgE and 42 by controlled provocation tests (CPT)]. At second evaluation (SE), 52 additional patients (10.2% of allergic patients) were diagnosed [50 (2.8%) by ST and 2 (0.1%) by CPT]. Time between reaction and study was significantly longer in patients diagnosed at SE (median 5 vs 42 months; IQR 34 vs 170; P < 0.0001). Anaphylaxis was significantly associated with a diagnosis at SE. The ENDA/EAACI protocol was appropriate and safe when evaluating BL immediate reactions. Re-evaluation should be performed, particularly when anaphylaxis and a long interval to diagnosis are present.
Performance Analysis of Different Backoff Algorithms for WBAN-Based Emerging Sensor Networks
Khan, Pervez; Ullah, Niamat; Ali, Farman; Ullah, Sana; Hong, Youn-Sik; Lee, Ki-Young; Kim, Hoon
2017-01-01
The Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) procedure of the IEEE 802.15.6 Medium Access Control (MAC) protocol for the Wireless Body Area Network (WBAN) uses an Alternative Binary Exponential Backoff (ABEB) procedure. The backoff algorithm plays an important role in avoiding collision in wireless networks. The Binary Exponential Backoff (BEB) algorithm used in different standards does not obtain the optimum performance due to enormous Contention Window (CW) gaps induced by packet collisions. Therefore, the IEEE 802.15.6 CSMA/CA has developed the ABEB procedure to avoid the large CW gaps upon each collision. However, the ABEB algorithm may lead to a high collision rate (as the CW size is incremented on every alternate collision) and poor utilization of the channel due to the gap between subsequent CWs. To minimize the gap between subsequent CW sizes, we adopted the Prioritized Fibonacci Backoff (PFB) procedure. This procedure leads to a smooth and gradual increase in the CW size after each collision, which eventually decreases the waiting time, and the contending node can access the channel promptly with little delay, while ABEB leads to irregular and fluctuating CW values, which eventually increase collision and waiting time before a re-transmission attempt. We analytically approach this problem by employing a Markov chain to design the PFB scheme for the CSMA/CA procedure of the IEEE 802.15.6 standard. The performance of the PFB algorithm is compared against the ABEB function of WBAN CSMA/CA. The results show that the PFB procedure adopted for IEEE 802.15.6 CSMA/CA outperforms the ABEB procedure. PMID:28257112
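The contrast between the three backoff strategies can be illustrated by generating their contention-window sequences. This is a simplified sketch of the growth patterns only; it ignores the standard's per-priority CW bounds and randomized slot draws, and the function names are this sketch's own:

```python
def beb_windows(cw_min, n):
    """Binary Exponential Backoff: CW doubles after every collision."""
    cw, out = cw_min, []
    for _ in range(n):
        out.append(cw)
        cw *= 2
    return out

def abeb_windows(cw_min, n):
    """Alternative BEB (as in IEEE 802.15.6): CW doubles only on every
    second (alternate) collision, leaving flat steps between jumps."""
    cw, out = cw_min, []
    for i in range(n):
        out.append(cw)
        if i % 2 == 1:
            cw *= 2
    return out

def fib_windows(cw_min, n):
    """Fibonacci-style backoff: each CW is the sum of the two previous
    windows, giving a smooth, gradual increase."""
    a, b, out = cw_min, cw_min, []
    for _ in range(n):
        out.append(a)
        a, b = b, a + b
    return out
```

Comparing `beb_windows(8, 5)`, `abeb_windows(8, 5)` and `fib_windows(8, 5)` shows the exponential jumps, the flat-then-jump ABEB pattern, and the gradual Fibonacci growth that PFB exploits.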
Francescato, Maria Pia; Stel, Giuliana; Stenner, Elisabetta; Geat, Mario
2015-01-01
Physical activity in patients with type 1 diabetes (T1DM) is hindered because of the high risk of glycemic imbalances. A recently proposed algorithm (named Ecres) estimates the supplemental carbohydrates well enough for exercises lasting one hour, but its performance for prolonged exercise requires validation. Nine T1DM patients (5M/4F; 35-65 years; HbA1c 54 ± 13 mmol · mol(-1)) performed, under free-life conditions, a 3-h walk at 30% heart rate reserve while insulin concentrations, whole-body carbohydrate oxidation rates (determined by indirect calorimetry) and supplemental carbohydrates (93% sucrose), together with glycemia, were measured every 30 min. Data were subsequently compared with the corresponding values estimated by the algorithm. No significant difference was found between the estimated insulin concentrations and the laboratory-measured values (p = NS). The carbohydrate oxidation rate decreased significantly with time (from 0.84 ± 0.31 to 0.53 ± 0.24 g · min(-1), respectively; p < 0.001) and was estimated well by the algorithm (p = NS). Estimated carbohydrate requirements were practically equal to the corresponding measured values (p = NS), the difference between the two quantities amounting to -1.0 ± 6.1 g, independent of the elapsed exercise time (time effect, p = NS). The results confirm that Ecres provides a satisfactory estimate of the carbohydrates required to avoid glycemic imbalances during moderate-intensity aerobic physical activity, opening the prospect of an intriguing method that could liberate patients from the fear of exercise-induced hypoglycemia.
ERIC Educational Resources Information Center
Clauser, Brian E.; Ross, Linette P.; Clyman, Stephen G.; Rose, Kathie M.; Margolis, Melissa J.; Nungester, Ronald J.; Piemme, Thomas E.; Chang, Lucy; El-Bayoumi, Gigi; Malakoff, Gary L.; Pincetl, Pierre S.
1997-01-01
Describes an automated scoring algorithm for a computer-based simulation examination of physicians' patient-management skills. Results with 280 medical students show that scores produced using this algorithm are highly correlated to actual clinician ratings. Scores were also effective in discriminating between case performance judged passing or…
Assessing the performance of data assimilation algorithms which employ linear error feedback
NASA Astrophysics Data System (ADS)
Mallia-Parfitt, Noeleene; Bröcker, Jochen
2016-10-01
Data assimilation means to find an (approximate) trajectory of a dynamical model that (approximately) matches a given set of observations. A direct evaluation of the trajectory against the available observations is likely to yield a too optimistic view of performance, since the observations were already used to find the solution. A possible remedy is presented which simply consists of estimating that optimism, thereby giving a more realistic picture of the "out of sample" performance. Our approach is inspired by methods from statistical learning employed for model selection and assessment purposes in statistics. Applying similar ideas to data assimilation algorithms yields an operationally viable means of assessment. The approach can be used to improve the performance of models or the data assimilation itself. This is illustrated by optimising the feedback gain for data assimilation employing linear feedback.
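The optimism effect described above can be reproduced in a toy scalar example: a trajectory fitted by linear error feedback scores better against the observations it assimilated than against an independent replicate of the same truth. Everything below (the model, gain, and noise levels) is an illustrative assumption, not the paper's setup:

```python
import random

def assimilate(obs, a, K):
    """Linear error feedback: nudge each model forecast toward the
    corresponding observation with gain K."""
    xhat = [obs[0]]
    for y in obs[1:]:
        fc = a * xhat[-1]               # model forecast
        xhat.append(fc + K * (y - fc))  # feedback correction
    return xhat

random.seed(0)
a, n = 0.9, 2000
truth = [1.0]
for _ in range(n - 1):
    truth.append(a * truth[-1] + random.gauss(0, 0.1))
obs1 = [x + random.gauss(0, 0.3) for x in truth]   # assimilated observations
obs2 = [x + random.gauss(0, 0.3) for x in truth]   # independent replicate

xhat = assimilate(obs1, a, K=0.9)
mse_in = sum((h - y) ** 2 for h, y in zip(xhat, obs1)) / n
mse_out = sum((h - y) ** 2 for h, y in zip(xhat, obs2)) / n
# With a large gain the trajectory chases the assimilated noise, so the
# in-sample error is optimistic relative to the out-of-sample error.
```

Estimating the gap mse_out - mse_in without actually holding out data is exactly the kind of optimism correction the paper borrows from statistical learning.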
The royal road for genetic algorithms: Fitness landscapes and GA performance
Mitchell, M.; Holland, J.H.; Forrest, S. (Dept. of Computer Science)
1991-01-01
Genetic algorithms (GAs) play a major role in many artificial-life systems, but there is often little detailed understanding of why the GA performs as it does, and little theoretical basis on which to characterize the types of fitness landscapes that lead to successful GA performance. In this paper we propose a strategy for addressing these issues. Our strategy consists of defining a set of features of fitness landscapes that are particularly relevant to the GA, and experimentally studying how various configurations of these features affect the GA's performance along a number of dimensions. In this paper we informally describe an initial set of proposed feature classes, describe in detail one such class ("Royal Road" functions), and present some initial experimental results concerning the role of crossover and "building blocks" on landscapes constructed from features of this class. 27 refs., 1 fig., 5 tabs.
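The Royal Road function itself is compact enough to state in code. This sketch assumes the commonly described variant with a 64-bit string partitioned into eight 8-bit blocks, where each fully set block contributes its length to fitness:

```python
def royal_road(bits, block=8):
    """Royal Road fitness: each block of `block` consecutive ones
    contributes `block`; a partially set block contributes nothing."""
    return sum(block for i in range(0, len(bits), block)
               if all(bits[i:i + block]))

# The all-ones optimum scores the full string length; a single flipped
# bit zeroes out the contribution of its entire block.
optimum = [1] * 64
score = royal_road(optimum)
```

The cliff-like structure (one bit flip costs a whole block) is what makes these landscapes a probe for crossover and building-block recombination.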
Wood, Thomas W.; Heasler, Patrick G.; Daly, Don S.
2010-07-15
Almost all of the "architectures" for radiation detection systems in Department of Energy (DOE) and other USG programs rely on some version of layered detector deployment. Efficacy analyses of layered (or more generally extended) detection systems in many contexts often assume statistical independence among detection events and thus predict monotonically increasing system performance with the addition of detection layers. We show this to be a false conclusion for the ROC curves typical of most current technology gamma detectors, and more generally show that statistical independence is often an unwarranted assumption for systems in which there is ambiguity about the objects to be detected. In such systems, a model of correlation among detection events allows optimization of system algorithms for interpretation of detector signals. These algorithms are framed as optimal discriminant functions in joint signal space, and may be applied to gross counting or spectroscopic detector systems. We have shown how system algorithms derived from this model dramatically improve detection probabilities compared to the standard serial detection operating paradigm for these systems. These results would not surprise anyone who has confronted the problem of correlated errors (or failure rates) in analogous contexts, but it seems to be largely underappreciated among those analyzing the radiation detection problem: independence is widely assumed and experimental studies typically fail to measure correlation. This situation, if not rectified, will lead to several unfortunate results, including (1) overconfidence in system efficacy, (2) overinvestment in layers of similar technology, and (3) underinvestment in diversity among detection assets.
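The core point, that correlated layers add less coverage than an independence assumption predicts, can be illustrated with a small Monte Carlo sketch. The signal amplitude, threshold, and correlation values below are illustrative assumptions, not figures from the report:

```python
import random

def detect_layers(rho, thresh=1.0, trials=20000, seed=1):
    """Probability that at least one of two detector layers alarms on a
    threat signal, when the two layers' noises have correlation rho."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + (1 - rho ** 2) ** 0.5 * rng.gauss(0, 1)
        s1, s2 = 1.5 + z1, 1.5 + z2   # same signal amplitude in both layers
        if s1 > thresh or s2 > thresh:
            hits += 1
    return hits / trials

p_indep = detect_layers(rho=0.0)   # what an independence analysis predicts
p_corr = detect_layers(rho=0.9)    # what correlated layers actually deliver
```

When layer noises correlate, the events "layer 1 missed" and "layer 2 missed" co-occur, so adding a similar second layer recovers far fewer misses than the independent calculation suggests.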
Performance evaluation of a routing algorithm based on Hopfield Neural Network for network-on-chip
NASA Astrophysics Data System (ADS)
Esmaelpoor, Jamal; Ghafouri, Abdollah
2015-12-01
Network on chip (NoC) has emerged as a solution to overcome the growing complexity and design challenges of the system on chip. A proper routing algorithm is a key issue of an NoC design. An appropriate routing method balances load across the network channels and keeps path lengths as short as possible. This survey investigates the performance of a routing algorithm based on a Hopfield Neural Network. It uses dynamic programming to provide optimal paths and network monitoring in real time. The aim of this article is to analyse the possibility of using a neural network as a router. The algorithm takes into account the path with the lowest delay (cost) from source to destination. In other words, the path a message takes from source to destination depends on the network traffic situation at the time, and it is the fastest one. The simulation results show that the proposed approach improves average delay, throughput and network congestion efficiently. At the same time, the increase in power consumption is almost negligible.
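The paper's router is a Hopfield network, but the lowest-delay criterion it optimizes can be stated with a plain shortest-path baseline. This Dijkstra sketch over a hypothetical 2x2 mesh with made-up link delays shows the target behavior, not the neural implementation:

```python
import heapq

def lowest_delay_path(links, src, dst):
    """Dijkstra over per-link delays: the reference answer any routing
    algorithm (neural or otherwise) should reproduce for src -> dst."""
    dist, prev, seen = {src: 0.0}, {}, set()
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in links.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# Hypothetical 2x2 mesh; link weights stand in for current traffic delays
mesh = {"00": [("01", 1.0), ("10", 4.0)],
        "01": [("11", 1.0)],
        "10": [("11", 1.0)],
        "11": []}
path, delay = lowest_delay_path(mesh, "00", "11")
```

Because the weights represent instantaneous traffic, recomputing the path as they change mirrors the real-time adaptivity attributed to the Hopfield router.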
K-Means Re-Clustering-Algorithmic Options with Quantifiable Performance Comparisons
Meyer, A W; Paglieroni, D; Asteneh, C
2002-12-17
This paper presents various architectural options for implementing a K-Means Re-Clustering algorithm suitable for unsupervised segmentation of hyperspectral images. Performance metrics are developed based upon quantitative comparisons of convergence rates and segmentation quality. A methodology for making these comparisons is developed and used to establish K values that produce the best segmentations with minimal processing requirements. Convergence rates depend on the initial choice of cluster centers. Consequently, this same methodology may be used to evaluate the effectiveness of different initialization techniques.
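The dependence of convergence rate on the choice of initial cluster centers can be demonstrated with a plain Lloyd's-algorithm sketch; the 1-D data and both initializations below are hypothetical, far smaller than the paper's hyperspectral setting:

```python
import random

def kmeans_1d(points, centers, max_iter=100):
    """Lloyd's algorithm on 1-D data; returns (final centers, iterations)."""
    for it in range(1, max_iter + 1):
        clusters = [[] for _ in centers]
        for p in points:                       # assignment step
            j = min(range(len(centers)), key=lambda k: abs(p - centers[k]))
            clusters[j].append(p)
        new = [sum(c) / len(c) if c else centers[j]   # update step
               for j, c in enumerate(clusters)]
        if new == centers:                     # assignments stable: converged
            return centers, it
        centers = new
    return centers, max_iter

# Two well-separated 1-D clusters; compare two hypothetical initializations.
random.seed(2)
pts = ([random.gauss(0.0, 0.5) for _ in range(50)] +
       [random.gauss(5.0, 0.5) for _ in range(50)])
good_c, good_it = kmeans_1d(pts, [0.0, 5.0])  # seeds near the true modes
poor_c, poor_it = kmeans_1d(pts, [4.0, 4.5])  # both seeds inside one mode
```

Both runs reach essentially the same segmentation, but the poorly seeded run needs extra iterations, which is the kind of convergence-rate difference the paper's methodology quantifies at scale.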
NASA Technical Reports Server (NTRS)
Orme, John S.; Schkolnik, Gerard S.
1995-01-01
Performance Seeking Control (PSC), an onboard, adaptive, real-time optimization algorithm, relies upon an onboard propulsion system model. Flight results illustrated propulsion system performance improvements as calculated by the model. These improvements were subject to uncertainty arising from modeling error. Thus, to quantify uncertainty in the PSC performance improvements, modeling accuracy must be assessed. A flight test approach to verify PSC-predicted increases in thrust (FNP) and absolute levels of fan stall margin is developed and applied to flight test data. Application of the excess thrust technique shows that increases in FNP agree to within 3 percent of full-scale measurements for most conditions. Accuracy to these levels is significant because uncertainty bands may now be applied to the performance improvements provided by PSC. Assessment of PSC fan stall margin modeling accuracy was completed with analysis of in-flight stall tests. Results indicate that the model overestimates the stall margin by 5 to 10 percent. Because PSC achieves performance gains by using available stall margin, this overestimation may represent performance improvements to be recovered with increased modeling accuracy. Assessment of thrust and stall margin modeling accuracy provides a critical piece for a comprehensive understanding of PSC's capabilities and limitations.
NASA Astrophysics Data System (ADS)
Zittersteijn, Michiel; Schildknecht, Thomas; Vananti, Alessandro; Dolado Perez, Juan Carlos; Martinot, Vincent
2016-07-01
Currently several thousands of objects are being tracked in the MEO and GEO regions through optical means. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the correlation and orbit determination problems simultaneously and is able to efficiently process large data sets with minimal manual intervention. This problem is also known as the Multiple Target Tracking (MTT) problem. The complexity of the MTT problem is defined by its dimension S. Current research tends to focus on the S = 2 MTT problem, because for S = 2 the problem is solvable in polynomial time. However, with S = 2 the decision to associate a set of observations is based on the minimum amount of information; in ambiguous situations (e.g., satellite clusters) this leads to incorrect associations. The S > 2 MTT problem is an NP-hard combinatorial optimization problem. In previous work an Elitist Genetic Algorithm (EGA) was proposed as a method to approximately solve this problem, and it was shown that the EGA is able to find a good approximate solution with polynomial time complexity. The EGA relies on solving the Lambert problem in order to perform the necessary orbit determinations, which means that the algorithm is restricted to orbits described by Keplerian motion. The work presented in this paper focuses on the impact that this restriction has on the algorithm performance.
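The paper does not spell out its EGA here, but the defining ingredient, elitism, is easy to illustrate on a generic bitstring fitness function. The sketch below is a minimal elitist GA with invented parameters, not the authors' observation-association encoding:

```python
import random

def elitist_ga(fitness, n_bits, pop_size=30, gens=60, elite=2, p_mut=0.02, seed=1):
    """Minimal elitist GA on bitstrings: the best `elite` individuals survive
    each generation unchanged; the rest come from parent selection in the top
    half, one-point crossover, and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        ranked = sorted(pop, key=fitness, reverse=True)
        next_pop = [ind[:] for ind in ranked[:elite]]        # elitism
        while len(next_pop) < pop_size:
            a, b = rng.sample(ranked[:pop_size // 2], 2)     # parents from top half
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]                        # one-point crossover
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)
```

Elitism guarantees the best fitness found never regresses between generations, which is what makes the approximate solution quality improve monotonically with generation count.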
NASA Astrophysics Data System (ADS)
Peille, Philippe; Ceballos, Maria Teresa; Cobo, Beatriz; Wilms, Joern; Bandler, Simon; Smith, Stephen J.; Dauser, Thomas; Brand, Thorsten; den Hartog, Roland; de Plaa, Jelle; Barret, Didier; den Herder, Jan-Willem; Piro, Luigi; Barcons, Xavier; Pointecouteau, Etienne
2016-07-01
The X-ray Integral Field Unit (X-IFU) microcalorimeter on board Athena, with its focal plane comprising 3840 Transition Edge Sensors (TESs) operating at 90 mK, will provide unprecedented spectral-imaging capability in the 0.2-12 keV energy range. It will rely on on-board digital processing of current pulses induced by the heat deposited in the TES absorber, so as to recover the energy of each individual event. Assessing the capabilities of the pulse reconstruction is required to understand the overall scientific performance of the X-IFU, notably in terms of energy resolution degradation with both increasing energies and count rates. Using synthetic data streams generated by the X-IFU End-to-End simulator, we present here a comprehensive benchmark of various pulse reconstruction techniques, ranging from standard optimal filtering to more advanced algorithms based on noise covariance matrices. Besides deriving the spectral resolution achieved by the different algorithms, a first assessment of the computing power and ground calibration needs is presented. Overall, all methods show similar performance, with the reconstruction based on noise covariance matrices showing the best improvement with respect to the standard optimal filtering technique. Due to prohibitive calibration needs, this method might, however, not be applicable to the X-IFU, and the best compromise currently appears to be the so-called resistance space analysis, which also features very promising high count rate capabilities.
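As context for the baseline technique: with a known pulse shape and white noise, standard optimal filtering reduces to a matched-filter (least-squares) amplitude estimate, and the event energy is then read off that amplitude via a calibration. A minimal sketch, not the X-IFU flight implementation, with an invented template:

```python
def optimal_filter_amplitude(record, template):
    """Least-squares amplitude of a known pulse template in white noise:
    A = <template, record> / <template, template> (a matched filter).
    For colored noise, both vectors would first be whitened using the
    noise covariance, which is where covariance-matrix methods come in."""
    num = sum(s * d for s, d in zip(template, record))
    den = sum(s * s for s in template)
    return num / den
```

The covariance-matrix reconstructions the abstract mentions generalize exactly this estimate, at the cost of calibrating the full noise covariance.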
NASA Technical Reports Server (NTRS)
Ramachandran, Ganesh K.; Akopian, David; Heckler, Gregory W.; Winternitz, Luke B.
2011-01-01
Location technologies have many applications in wireless communications, military and space missions, etc. The US Global Positioning System (GPS) and other existing and emerging Global Navigation Satellite Systems (GNSS) are expected to provide accurate location information to enable such applications. While GNSS systems perform very well in strong signal conditions, their operation in many urban, indoor, and space applications is not robust, or even impossible, due to weak signals and strong distortions. The search for less costly, faster and more sensitive receivers is still in progress. As the research community addresses more and more complicated phenomena, there is a demand for flexible multimode reference receivers, associated SDKs, and development platforms that may accelerate and facilitate the research. One such concept is the software GPS/GNSS receiver (GPS SDR), which permits easy access to algorithmic libraries and the possibility of integrating more advanced algorithms without hardware or essential software updates. The GNU-SDR and GPS-SDR open source receiver platforms are popular examples. This paper evaluates the performance of recently proposed block-correlator techniques for acquisition and tracking of GPS signals using the open source GPS-SDR platform.
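Block-correlator acquisition ultimately rests on correlating the received samples against shifted replicas of the satellite's spreading code and picking the shift with maximum power. A toy serial-search sketch (noise-free, using a length-7 m-sequence rather than a real GPS C/A code, and ignoring Doppler search entirely):

```python
def acquire(received, code):
    """Serial-search code acquisition: correlate the received samples against
    every circular shift of the replica code and return the shift whose
    correlation power is maximal."""
    n = len(code)
    best_shift, best_power = 0, float("-inf")
    for shift in range(n):
        corr = sum(received[(i + shift) % n] * code[i] for i in range(n))
        if corr * corr > best_power:
            best_shift, best_power = shift, corr * corr
    return best_shift
```

Block-correlator techniques restructure this search to compute many shifts at once (e.g., FFT-based circular correlation), which is what makes acquisition fast enough for weak-signal integration.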
A numerical algorithm with preference statements to evaluate the performance of scientists.
Ricker, Martin
Academic evaluation committees have become increasingly receptive to using the number of published indexed articles, as well as citations, to evaluate the performance of scientists. It is, however, impossible to develop a stand-alone, objective numerical algorithm for the evaluation of academic activities, because any evaluation necessarily includes subjective preference statements. In a market, prices represent preference statements, but scientists work largely in a non-market context. I propose a numerical algorithm that serves to determine the distribution of reward money in Mexico's evaluation system, using relative prices of scientific goods and services as input. The relative prices would be determined by an evaluation committee. In this way, large evaluation systems (like Mexico's Sistema Nacional de Investigadores) could work semi-automatically, but not arbitrarily or superficially, to determine quantitatively the academic performance of scientists every few years. Data for 73 scientists from the Biology Institute of Mexico's National University are analyzed, and it is shown that the reward assignment and academic priorities depend heavily on those preferences. A maximum number of products or activities to be evaluated is recommended, to encourage quality over quantity.
Query Processing Performance and Searching over Encrypted Data by using an Efficient Algorithm
NASA Astrophysics Data System (ADS)
Sharma, Manish; Chaudhary, Atul; Kumar, Santosh
2013-01-01
Data is the central asset of today's dynamically operating organizations and their business. This data is usually stored in a database. A major consideration is the security of that data against unauthorized access and intruders. Data encryption is a strong option for securing data in a database, especially in organizations where security risks are high, but it carries the potential disadvantage of performance degradation: applying encryption to a database forces a compromise between security and efficient query processing. The work in this paper tries to fill this gap. It allows users to query an encrypted column directly, without decrypting all the records, which improves the performance of the system. The proposed algorithm works well in the case of range and fuzzy match queries.
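One common way to support range queries over an encrypted column without decrypting every record, consistent with the goal described (though not necessarily the paper's exact scheme), is bucketization: the server stores each ciphertext with a coarse bucket label, answers a range query with only the candidate buckets, and the client decrypts and filters that small set. A simplified sketch; the deterministic SHA-256 "encryption" and all names are illustrative stand-ins, not real cryptography:

```python
import hashlib

def encrypt(value, key):
    """Deterministic stand-in for real encryption (illustration only)."""
    return hashlib.sha256(f"{key}:{value}".encode()).hexdigest()

class BucketizedIndex:
    """The 'server' sees only (ciphertext, bucket) pairs; a range query
    fetches candidate buckets, and the 'client' decrypts and filters them."""

    def __init__(self, key, width=10):
        self.key, self.width = key, width
        self.rows = []      # server-side storage: ciphertext + coarse bucket
        self._plain = {}    # client-side decryption stand-in for this sketch

    def insert(self, value):
        c = encrypt(value, self.key)
        self.rows.append((c, value // self.width))
        self._plain[c] = value

    def range_query(self, lo, hi):
        buckets = set(range(lo // self.width, hi // self.width + 1))
        candidates = [c for c, b in self.rows if b in buckets]  # server side
        values = [self._plain[c] for c in candidates]           # client decrypts
        return sorted(v for v in values if lo <= v <= hi)       # client filters
```

The trade-off the abstract describes is visible here: wider buckets leak less about the plaintext order but force the client to decrypt more false-positive candidates per query.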
Parallel and Grid-Based Data Mining - Algorithms, Models and Systems for High-Performance KDD
NASA Astrophysics Data System (ADS)
Congiusta, Antonio; Talia, Domenico; Trunfio, Paolo
Data Mining is often a computing-intensive and time-consuming process. For this reason, several Data Mining systems have been implemented on parallel computing platforms to achieve high performance in the analysis of large data sets. Moreover, when large data repositories are coupled with geographical distribution of data, users and systems, more sophisticated technologies are needed to implement high-performance distributed KDD systems. Since computational Grids emerged as privileged platforms for distributed computing, a growing number of Grid-based KDD systems has been proposed. In this chapter we first discuss different ways to exploit parallelism in the main Data Mining techniques and algorithms, and then we discuss Grid-based KDD systems. Finally, we introduce the Knowledge Grid, an environment which makes use of standard Grid middleware to support the development of parallel and distributed knowledge discovery applications.
Code of Federal Regulations, 2010 CFR
2010-07-01
§ 141.723 Requirements to respond to significant deficiencies identified in sanitary surveys performed by EPA (Enhanced Treatment for Cryptosporidium; Requirements for Sanitary Surveys Performed by EPA).
Basic Performance of the Standard Retrieval Algorithm for the Dual-frequency Precipitation Radar
NASA Astrophysics Data System (ADS)
Seto, S.; Iguchi, T.; Kubota, T.
2013-12-01
applied again using the adjusted k-Z relations. By iterating a combination of the HB method and the DFR method, the k-Z relations are improved; this is termed the HB-DFR method (Seto et al. 2013). Though the k-Z relations are adjusted simultaneously for all range bins by the SRT method, the HB-DFR method can adjust the k-Z relation at each range bin independently of the other range bins. Therefore, in this method, the DSD is represented on a two-dimensional plane. The HB-DFR method has been incorporated in the DPR Level 2 standard algorithm (L2). The basic performance of L2 is tested with a synthetic dataset produced from the TRMM/PR standard product. In L2, when only the KuPR radar measurement is used, precipitation estimates are in good agreement with the corresponding rain rate estimates in the PR standard product. However, when both KuPR and KaPR radar measurements are used and the HB-DFR method is applied, the precipitation rate estimates deviate from the estimates in the PR standard product. This is partly because of the poor performance of the HB-DFR method and partly because of the overestimation of PIA by the dual-frequency SRT. Improvements to the standard algorithm, particularly for the dual-frequency measurement, will be presented.
Student-Led Project Teams: Significance of Regulation Strategies in High- and Low-Performing Teams
ERIC Educational Resources Information Center
Ainsworth, Judith
2016-01-01
We studied group and individual co-regulatory and self-regulatory strategies of self-managed student project teams using data from intragroup peer evaluations and a postproject survey. We found that high team performers shared their research and knowledge with others, collaborated to advise and give constructive criticism, and demonstrated moral…
Yao, Ming-Shui; Tang, Wen-Xiang; Wang, Guan-E; Nath, Bhaskar; Xu, Gang
2016-07-01
A strategy for combining metal oxides and metal-organic frameworks is proposed, for the first time, to design new materials for sensing volatile organic compounds. The prepared ZnO@ZIF-CoZn core-sheath nanowire arrays show greatly enhanced performance, not only in selectivity but also in response, recovery behavior, and working temperature.
ERIC Educational Resources Information Center
Hilger, Allison I.; Zelaznik, Howard; Smith, Anne
2016-01-01
Purpose: Stuttering involves a breakdown in the speech motor system. We address whether stuttering in its early stage is specific to the speech motor system or whether its impact is observable across motor systems. Method: As an extension of Olander, Smith, and Zelaznik (2010), we measured bimanual motor timing performance in 115 children: 70…
NASA Astrophysics Data System (ADS)
Ciany, Charles M.; Zurawski, William C.
2009-05-01
Raytheon has extensively processed high-resolution sidescan sonar images with its CAD/CAC algorithms to provide classification of targets in a variety of shallow underwater environments. The Raytheon CAD/CAC algorithm is based on non-linear image segmentation into highlight, shadow, and background regions, followed by extraction, association, and scoring of features from candidate highlight and shadow regions of interest (ROIs). The targets are classified by thresholding an overall classification score, which is formed by summing the individual feature scores. The algorithm performance is measured in terms of probability of correct classification as a function of false alarm rate, and is determined by both the choice of classification features and the manner in which the classifier rates and combines these features to form its overall score. In general, the algorithm performs very reliably against targets that exhibit "strong" highlight and shadow regions in the sonar image, i.e., both the highlight echo and its associated shadow region from the target are distinct relative to the ambient background. However, many real-world undersea environments can produce sonar images in which a significant percentage of the targets exhibit either "weak" highlight or shadow regions in the sonar image. The challenge of achieving robust performance in these environments has traditionally been addressed by modifying the individual feature scoring algorithms to optimize the separation between the corresponding highlight or shadow feature scores of targets and non-targets. This study examines an alternate approach that employs principles of Fisher fusion to determine a set of optimal weighting coefficients that are applied to the individual feature scores before summing to form the overall classification score. The results demonstrate improved performance of the CAD/CAC algorithm on at-sea data sets.
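The idea of Fisher-style fusion is to weight each feature score by its class-mean separation over pooled variance before summing, instead of summing scores with equal weight. A minimal per-feature sketch under the simplifying assumption of independent features (all data and names are invented, not the Raytheon feature set):

```python
def mean_var(xs):
    """Sample mean and (population) variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

def fisher_weights(target_feats, clutter_feats):
    """One weight per feature: class-mean separation over pooled variance,
    i.e. a per-feature Fisher criterion (features assumed independent)."""
    weights = []
    for k in range(len(target_feats[0])):
        mt, vt = mean_var([f[k] for f in target_feats])
        mc, vc = mean_var([f[k] for f in clutter_feats])
        weights.append((mt - mc) / (vt + vc + 1e-12))  # guard zero variance
    return weights

def overall_score(weights, feats):
    """Weighted sum of feature scores; the caller thresholds this."""
    return sum(w * f for w, f in zip(weights, feats))
```

Features whose target and clutter score distributions overlap heavily get small weights, so a "weak" highlight or shadow feature no longer drags down the summed classification score.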
NASA Astrophysics Data System (ADS)
Lee, Y. H.; Chiang, K. W.
2012-07-01
In this study, a 3D Map Matching (3D MM) algorithm is embedded into the current INS/GPS fusion algorithm to enhance the sustainability and accuracy of INS/GPS integration systems, especially in the height component. In addition, this study proposes an effective solution to a limitation of current commercial vehicular navigation systems: they fail to distinguish whether the vehicle is moving on an elevated highway or on the road under it, because those systems do not have sufficient height resolution. To validate the performance of the proposed 3D MM embedded INS/GPS integration algorithm, two scenarios were considered in the test area: paths under freeways, and streets between tall buildings, where the GPS signal is easily obstructed or interfered with. The test platform was mounted on top of a land vehicle, with additional systems inside the vehicle. The IMUs applied include SPAN-LCI (0.1 deg/hr gyro bias) from NovAtel, which was used as the reference system, and two MEMS IMUs with different specifications for verifying the performance of the proposed algorithm. The preliminary results indicate that the proposed algorithms are able to significantly improve the accuracy of the positional components in GPS-denied environments with the use of INS/GPS integrated systems in SPP mode.
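At its core, map matching snaps a navigation fix to the nearest candidate road segment. A 2D simplification is sketched below (the full 3D MM algorithm described above additionally uses the height component to separate stacked roads); segments and coordinates are invented:

```python
import math

def snap_to_segment(p, a, b):
    """Project point p onto segment ab; returns (snapped_point, distance)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    # clamp the projection parameter to [0, 1] so we stay on the segment
    t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    sx, sy = ax + t * dx, ay + t * dy
    return (sx, sy), math.hypot(px - sx, py - sy)

def map_match(p, segments):
    """Snap fix p to the closest segment among the candidates."""
    return min((snap_to_segment(p, a, b) for a, b in segments), key=lambda r: r[1])[0]
```

With a 3D map, the same projection is done in (x, y, z), which is what lets the matcher tell an elevated highway from the road beneath it.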
The cosmological and performative significance of a Thai cult of healing through meditation.
Tambiah, S J
1977-04-01
A cult of healing through meditation that was observed in Bangkok, Thailand in 1974 is described, and the cult is interpreted in terms of two axes, the cosmological and the performative, and the dialectical, reciprocal and complementary relations between them. The various ramifications of the cosmology are discussed--the categorization of the cosmos itself as a hierarchical scheme, the relations between man and non-human forms of existence, the ideas concerning power and its manner of acquisition and use, the relation between power and restraint, etc. The epistemological basis of the cult, which attempts cure through meditation, and the features of the ritual as they contribute to its performative efficacy are highlighted. The essay concludes by suggesting that there is a single scheme (episteme) underlying religious ideas and applications of knowledge such as meditation, medicine, alchemy, and astrology.
NASA Astrophysics Data System (ADS)
Rimbalová, Jarmila; Vilčeková, Silvia
2013-11-01
The practice of facilities management is rapidly evolving with the increasing interest in the discourse of sustainable development. The industry and its market are forecast to develop to include non-core functions: activities traditionally not associated with this profession, but which are increasingly being addressed by facilities managers. The scale of growth in the built environment, and the consequential growth of the facility management sector, is anticipated to be enormous. Key Performance Indicators (KPIs) are measures that provide essential information about the performance of facility services delivery. In selecting KPIs, it is critical to limit them to those factors that are essential to the organization reaching its goals. It is also important to keep the number of KPIs small, to keep everyone's attention focused on achieving the same KPIs. This paper deals with the determination of weights for KPIs of facilities management in terms of the design and use of sustainable buildings.
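Once weights are determined, a facility's overall index is just a weighted sum of target-normalized KPI values. A minimal sketch with invented numbers (the paper's actual weighting method and KPI set are not reproduced here):

```python
def kpi_index(values, targets, weights):
    """Overall facility index: weighted sum of target-normalized KPI values.
    The weights encode the committee's priorities and must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * (v / t) for v, t, w in zip(values, targets, weights))
```

For example, with two KPIs at 90% and 80% of target and weights 0.6/0.4, the index is 0.6·0.9 + 0.4·0.8 = 0.86, so shifting weight between KPIs directly shifts which shortfall dominates the index.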
Hilger, Allison I.; Zelaznik, Howard
2016-01-01
Purpose Stuttering involves a breakdown in the speech motor system. We address whether stuttering in its early stage is specific to the speech motor system or whether its impact is observable across motor systems. Method As an extension of Olander, Smith, and Zelaznik (2010), we measured bimanual motor timing performance in 115 children: 70 children who stutter (CWS) and 45 children who do not stutter (CWNS). The children repeated the clapping task yearly for up to 5 years. We used a synchronization-continuation rhythmic timing paradigm. Two analyses were completed: a cross-sectional analysis of data from the children in the initial year of the study (ages 4;0 [years;months] to 5;11) compared clapping performance between CWS and CWNS. A second, multiyear analysis assessed clapping behavior across the ages 3;5–9;5 to examine any potential relationship between clapping performance and eventual persistence or recovery of stuttering. Results Preschool CWS were not different from CWNS on rates of clapping or variability in interclap interval. In addition, no relationship was found between bimanual motor timing performance and eventual persistence in or recovery from stuttering. The disparity between the present findings for preschoolers and those of Olander et al. (2010) most likely arises from the smaller sample size used in the earlier study. Conclusion From the current findings, on the basis of data from relatively large samples of stuttering and nonstuttering children tested over multiple years, we conclude that a bimanual motor timing deficit is not a core feature of early developmental stuttering. PMID:27391252
Significant Returns in Engagement and Performance with a Free Teaching App
ERIC Educational Resources Information Center
Green, Alan
2016-01-01
Pedagogical research shows that teaching methods other than traditional lectures may result in better outcomes. However, lecture remains the dominant method in economics, likely due to high implementation costs of methods shown to be effective in the literature. In this article, the author shows significant benefits of using a teaching app for…
ERIC Educational Resources Information Center
Westover, Jennifer M.; Martin, Emma J.
2014-01-01
Literacy skills are fundamental for all learners. For students with significant disabilities, strong literacy skills provide a gateway to generative communication, genuine friendships, improved access to academic opportunities, access to information technology, and future employment opportunities. Unfortunately, many educators lack the knowledge…
NASA Astrophysics Data System (ADS)
Pacheco-Vega, Arturo
2016-09-01
In this work a new set of correlation equations is developed and introduced to accurately describe the thermal performance of compact heat exchangers with possible condensation. The feasible operating conditions for the thermal system correspond to dry-surface, dropwise condensation, and film condensation. Using a prescribed form for each condition, a global regression analysis for the best-fit correlation to experimental data is carried out with a simulated annealing optimization technique. The experimental data were taken from the literature and algorithmically classified into three groups (related to the possible operating conditions) with a previously-introduced Gaussian-mixture-based methodology. Prior to their use in the analysis, the correct data classification was assessed and confirmed via artificial neural networks. Predictions from the correlations obtained for the different conditions are within the uncertainty of the experiments and substantially more accurate than those commonly used.
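Best-fit correlation coefficients via simulated annealing can be sketched as follows, here fitting an assumed power-law form y = c·x^m to illustrative data; this is a generic SA regression, not the paper's prescribed correlation forms or tuning schedule:

```python
import math
import random

def sse(params, data):
    """Sum of squared errors of the power-law model y = c * x**m."""
    c, m = params
    return sum((c * x ** m - y) ** 2 for x, y in data)

def anneal(data, start, t0=1.0, cooling=0.995, steps=4000, seed=0):
    """Simulated annealing over (c, m): always accept improvements, accept
    uphill moves with Boltzmann probability exp(-dcost / T), cool geometrically."""
    rng = random.Random(seed)
    cur, cur_cost = start, sse(start, data)
    best, best_cost = cur, cur_cost
    temp = t0
    for _ in range(steps):
        cand = (cur[0] + rng.gauss(0, 0.05), cur[1] + rng.gauss(0, 0.05))
        cost = sse(cand, data)
        if cost < cur_cost or rng.random() < math.exp((cur_cost - cost) / temp):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = cand, cost
        temp *= cooling
    return best, best_cost
```

The occasional uphill acceptance at high temperature is what lets the search escape local minima of the regression cost before the geometric cooling freezes it near a good fit.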
Cosmo-SkyMed Di Seconda Generazione Innovative Algorithms and High Performance SAR Data Processors
NASA Astrophysics Data System (ADS)
Mari, S.; Porfilio, M.; Valentini, G.; Serva, S.; Fiorentino, C. A. M.
2016-08-01
In the frame of the COSMO-SkyMed di Seconda Generazione (CSG) programme, extensive research activities have been conducted on SAR data processing, with particular emphasis on high resolution processors, wide-field product noise and coregistration algorithms. As regards high resolution, it is essential to create a model for the management of all those elements that are usually considered negligible but alter the target phase response when it is "integrated" for several seconds. Concerning SAR wide-field product noise removal, one of the major problems is the ability to compensate for all the phenomena that affect the received signal intensity. Research activities are aimed at developing adaptive-iterative techniques for the compensation of inaccuracies in the knowledge of radar antenna pointing, achieving compensation of the order of thousandths of a degree. Moreover, several modifications of the image coregistration algorithm have been studied, aimed at improving performance and reducing the computational effort.
Chen, Minhua; Silva, Jorge; Paisley, John; Wang, Chunping; Dunson, David; Carin, Lawrence
2013-01-01
Nonparametric Bayesian methods are employed to constitute a mixture of low-rank Gaussians, for data x ∈ ℝN that are of high dimension N but are constrained to reside in a low-dimensional subregion of ℝN. The number of mixture components and their rank are inferred automatically from the data. The resulting algorithm can be used for learning manifolds and for reconstructing signals from manifolds, based on compressive sensing (CS) projection measurements. The statistical CS inversion is performed analytically. We derive the required number of CS random measurements needed for successful reconstruction, based on easily-computed quantities, drawing on block-sparsity properties. The proposed methodology is validated on several synthetic and real datasets. PMID:23894225
NASA Astrophysics Data System (ADS)
Sivakumar, P. Bagavathi; Mohandas, V. P.
Stock price prediction and stock trend prediction are the two major research problems of financial time series analysis. In this work, a performance comparison of various attribute-set reduction algorithms was made for short-term stock price prediction. Forward selection, backward elimination, optimized selection, optimized selection based on brute force, weight-guided selection, and optimized selection based on evolutionary principles and strategies were used. Different selection schemes and crossover types were explored. To supplement learning and modeling, a support vector machine was also used in combination. The algorithms were applied to real Indian stock data, namely CNX Nifty. The experimental study was conducted using the open source data mining tool RapidMiner. The performance was compared in terms of root mean squared error, squared error and execution time. The obtained results indicate the superiority of evolutionary algorithms; the optimized selection algorithm based on evolutionary principles outperforms the others.
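Of the attribute-reduction schemes compared, forward selection is the simplest to sketch: greedily add the attribute whose inclusion most improves an evaluation score, stopping when no addition helps. The toy scoring function below is invented (a real run would score subsets by cross-validated model error, e.g. RMSE of an SVM):

```python
def forward_selection(features, evaluate):
    """Greedy forward selection: repeatedly add the feature whose inclusion
    most improves the evaluation score; stop when no addition helps."""
    selected, best_score = [], float("-inf")
    remaining = list(features)
    while remaining:
        # score every one-feature extension of the current subset
        score, feat = max((evaluate(selected + [f]), f) for f in remaining)
        if score <= best_score:
            break
        selected.append(feat)
        remaining.remove(feat)
        best_score = score
    return selected, best_score
```

Backward elimination is the mirror image (start from all features, drop the least useful), while the evolutionary variants search subset space globally instead of greedily, which is why they can escape the local optima this greedy loop stops at.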
NASA Astrophysics Data System (ADS)
Kuschenerus, Mieke; Cullen, Robert
2016-08-01
To ensure the reliability and precision of wave height estimates for future satellite altimetry missions such as Sentinel 6, reliable parameter retrieval algorithms that can extract significant wave heights up to 20 m have to be established, and the retrieval methods need to be validated extensively over a wide range of possible significant wave heights. Although current missions require wave height retrievals up to 20 m, there is little evidence of systematic validation of parameter retrieval methods for sea states with wave heights above 10 m. This paper defines a set of simulated sea states with significant wave heights up to 20 m that allow simulation of radar altimeter response echoes for extreme sea states in SAR and low resolution mode. The simulated radar responses are used to derive significant wave height estimates, which can be compared with the initial models, allowing precision estimates of the applied parameter retrieval methods. We thus establish a validation method for significant wave height retrieval in sea states causing high significant wave heights, to allow improved understanding and planning of future satellite altimetry mission validation.
In this paper we develop and computationally test three implicit enumeration algorithms for solving the asymmetric traveling salesman problem. All three algorithms use the assignment problem relaxation of the traveling salesman problem with subtour elimination, similar to the previous approaches by … previous subtour elimination algorithms and (2) the 1-arborescence approach of Held and Karp for the asymmetric traveling salesman problem.
An Educational System for Learning Search Algorithms and Automatically Assessing Student Performance
ERIC Educational Resources Information Center
Grivokostopoulou, Foteini; Perikos, Isidoros; Hatzilygeroudis, Ioannis
2017-01-01
In this paper, first we present an educational system that assists students in learning and tutors in teaching search algorithms, an artificial intelligence topic. Learning is achieved through a wide range of learning activities. Algorithm visualizations demonstrate the operational functionality of algorithms according to the principles of active…
Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm
Veladi, H.
2014-01-01
A pushover analysis method based on semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to the conventional design methods to show the power or weakness of the algorithm. PMID:25202717
FOCUSED R&D FOR ELECTROCHROMIC SMART WINDOWS: SIGNIFICANT PERFORMANCE AND YIELD ENHANCEMENTS
Marcus Milling
2004-09-23
Developments made under this program will play a key role in underpinning the technology for producing EC devices. It is anticipated that the work begun during this period will continue to improve materials properties, drive yields up and costs down, increase durability, and make manufacture simpler and more cost-effective. It is hoped that this will contribute to a successful and profitable industry, which will help reduce energy consumption and improve comfort for building occupants worldwide. The first major task involved improvements to the materials used in the process. The improvements made as a result of the work done during this project have contributed to enhanced performance, including dynamic range, uniformity and electrical characteristics. Another major objective of the project was to develop technology to improve yield, reduce cost, and facilitate manufacturing of EC products. Improvements directly attributable to the work carried out as part of this project, and seen in the overall EC device performance, have been accompanied by an improvement in the repeatability and consistency of the production process. Innovative test facilities for characterizing devices in a timely and well-defined manner have been developed. The equipment has been designed in such a way as to make scaling up to accommodate the higher throughput necessary for manufacturing relatively straightforward. Finally, the third major goal was to assure the durability of the EC product, both through developments aimed at improving product performance and through novel procedures to test the durability of this new product. Both aspects have been demonstrated: a number of different durability tests have been carried out, both in-house and by independent third-party testers, and several novel durability tests have been developed.
Best, J; Bilgi, H; Heider, D; Schotten, C; Manka, P; Bedreli, S; Gorray, M; Ertle, J; van Grunsven, L A; Dechêne, A
2016-12-01
Background: Hepatocellular carcinoma (HCC) is one of the leading causes of death in cirrhotic patients worldwide. The detection rate for early stage HCC remains low despite screening programs, so the majority of HCC cases are detected at advanced tumor stages with limited treatment options. To facilitate earlier diagnosis, this study aims to validate the added benefit of combining AFP with the novel biomarkers AFP-L3 and DCP, and an associated novel diagnostic algorithm called GALAD. Material and methods: Between 2007 and 2008 and from 2010 to 2012, 285 patients newly diagnosed with HCC and 402 control patients suffering from chronic liver disease were enrolled. AFP, AFP-L3, and DCP were measured using the µTASWako i30 automated immunoanalyzer. The diagnostic performance of the biomarkers was measured as single parameters and in a logistic regression model. Furthermore, a diagnostic algorithm (GALAD) based on gender, age, and the biomarkers mentioned above was validated. Results: AFP, AFP-L3, and DCP showed comparable sensitivities and specificities for HCC detection. The combination of all biomarkers had the highest sensitivity, with decreased specificity. In contrast, utilization of the biomarker-based GALAD score resulted in a superior specificity of 93.3 % and sensitivity of 85.6 %. For BCLC 0/A stage HCC, the GALAD algorithm provided the highest overall AUROC, 0.9242, which was superior to any other marker combination. Conclusions: We demonstrated in our cohort the superior detection of early stage HCC with the combined use of the respective biomarkers, and in particular GALAD, even in AFP-negative tumors.
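A GALAD-style score is a logistic-regression combination of gender, age, AFP, AFP-L3 and DCP. The published GALAD coefficients are not reproduced here; the sketch below uses placeholder values purely to show the functional form of such a score:

```python
import math

def logistic_score(intercept, coeffs, covariates):
    """Logistic-regression style risk score in (0, 1): sigmoid of a linear
    predictor. `coeffs` and `covariates` must align elementwise; the
    coefficient values a real score uses come from fitting on patient data."""
    z = intercept + sum(c * x for c, x in zip(coeffs, covariates))
    return 1.0 / (1.0 + math.exp(-z))
```

Thresholding the score trades sensitivity against specificity, which is exactly the comparison the AUROC figures above summarize over all thresholds.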
NASA Astrophysics Data System (ADS)
Kizilkaya, Elif A.; Gupta, Surendra M.
2005-11-01
In this paper, we compare the impact of different disassembly line balancing (DLB) algorithms on the performance of our recently introduced Dynamic Kanban System for Disassembly Line (DKSDL) to accommodate the vagaries of uncertainties associated with disassembly and remanufacturing processing. We consider a case study to illustrate the impact of various DLB algorithms on the DKSDL. The approach to the solution, scenario settings, results and the discussions of the results are included.
Asfour, Leila; Asfour, Victoria; McCormack, David; Attia, Rizwan
2014-09-01
A best evidence topic in cardiac surgery was written according to a structured protocol. The question addressed was: is there a difference in cardiothoracic surgery outcomes, in terms of morbidity or mortality, between patients operated on by a sleep-deprived surgeon and those operated on by a non-sleep-deprived surgeon? The reported search criteria yielded 77 papers, of which 15 were deemed to represent the best evidence on the topic. Three studies related directly to cardiothoracic surgery and 12 studies to non-cardiothoracic surgery. Recommendations are based on 18 121 cardiothoracic patients and 214 666 non-cardiothoracic surgical patients. The studies used different definitions of sleep deprivation, based either on surgeons' sleeping hours or on out-of-hours operating. Surgical outcomes reviewed included: mortality rate; neurological, renal, pulmonary and infectious complications; length of stay; length of intensive care stay; cardiopulmonary bypass times; and aortic cross-clamp times. In the cardiothoracic studies, there were no significant differences in mortality or intraoperative complications between patients operated on by sleep-deprived and non-sleep-deprived surgeons. One study showed a significant increase in the rate of septicaemia in patients operated on by severely sleep-deprived surgeons (3.6%) compared with the moderately sleep-deprived (0.9%) and non-sleep-deprived groups (0.8%) (P = 0.03). In the non-cardiothoracic studies, 7 of the 12 studies demonstrated significantly higher reoperation rates in trauma cases (P < 0.02) and kidney transplants (night = 16.8% vs day = 6.4%, P < 0.01), as well as higher overall mortality (P = 0.028) and morbidity (P < 0.0001). There is little direct evidence in the literature demonstrating the effect of sleep deprivation in cardiothoracic surgeons on morbidity or mortality. However, overall the non-cardiothoracic studies have demonstrated that operative time and sleep deprivation can have a
Vogt, Emelie; MacQuarrie, David; Neary, John Patrick
2012-11-01
Ballistocardiography (BCG) is a non-invasive technology that has been used to record ultra-low-frequency vibrations of the heart, allowing for the measurement of cardiac cycle events, including the timing and amplitudes of contraction. Recent developments in BCG have made this technology simple to use, as well as time- and cost-efficient, in comparison with other more complicated and invasive techniques used to evaluate cardiac performance. Technological advances have accelerated considerably since the advent of microprocessors and laptop computers. Along with the history of BCG, this paper reviews the present and future potential benefits of using BCG to measure cardiac cycle events and its application to clinical and applied research.
Cramer, Michael J; Dumke, Charles L; Hailes, Walter S; Cuddy, John S; Ruby, Brent C
2015-10-01
A variety of dietary choices are marketed to enhance glycogen recovery after physical activity. Past research informs recommendations regarding the timing, dose, and nutrient composition needed to facilitate glycogen recovery. This study examined the effects of isoenergetic sport supplements (SS) vs. fast food (FF) on glycogen recovery and exercise performance. Eleven males completed two experimental trials in a randomized, counterbalanced order. Each trial included a 90-min glycogen-depletion ride followed by a 4-hr recovery period. Absolute amounts of macronutrients (1.54 ± 0.27 g·kg-1 carbohydrate, 0.24 ± 0.04 g·kg-1 fat, and 0.18 ± 0.03 g·kg-1 protein) as either SS or FF were provided at 0 and 2 hr. Muscle biopsies were collected from the vastus lateralis at 0 and 4 hr post exercise. Blood samples were analyzed at 0, 30, 60, 120, 150, 180, and 240 min post exercise for insulin and glucose, with blood lipids analyzed at 0 and 240 min. A 20-km time trial (TT) was completed following the final muscle biopsy. There were no differences in the blood glucose and insulin responses. Similarly, rates of glycogen recovery were not different across the diets (6.9 ± 1.7 and 7.9 ± 2.4 mmol·kg-1 wet weight·hr-1 for SS and FF, respectively). There was also no difference across the diets for TT performance (34.1 ± 1.8 and 34.3 ± 1.7 min for SS and FF, respectively). These data indicate that short-term food options to initiate glycogen resynthesis can include dietary options not typically marketed as sports nutrition products, such as fast food menu items.
The significance of orbital anatomy and periocular wrinkling when performing laser skin resurfacing.
Trelles, M A; Pardo, L; Benedetto, A V; García-Solana, L; Torrens, J
2000-03-01
Knowledge of orbital anatomy and the interaction of muscle contractions, gravitational forces and photoaging is fundamental in understanding the limitations of carbon dioxide (CO2) laser skin resurfacing when rejuvenating the skin of the periocular area. Laser resurfacing does not change the mimetic behavior of the facial muscles, nor does it influence gravitational forces. When resurfacing periocular tissue, scleral show and ectropion are potential consequences of an overzealous attempt at improving the sagging malar fat pad and eyelid laxity by performing an excessive number of laser passes at the lateral portion of the lower eyelid. This results in an inadvertent widening of the palpebral fissure due to the lateral pull of the orbicularis oculi. Retrospectively, 85 patients who had undergone periorbital resurfacing with a CO2 laser using a new treatment approach were studied. The Sharplan 40C CO2 Feather Touch laser was programmed with a circular scanning pattern and used just for the shoulders of the wrinkles. A final laser pass was performed with the same program over the entire lower eyelid skin surface, excluding the outer lateral portion (a truncated triangle-like area) corresponding to the lateral canthus. Only a single laser pass was delivered to the lateral canthal triangle, to avoid widening the lateral opening of the eyelid, which might lead to the potential complications of scleral show and ectropion. When the area of the crow's feet is to be treated, three passes on the skin of this entire lateral orbital surface are completed by moving laterally and upward toward the hairline. Patients examined on days 1, 7, 15, 30 and 60, and at one year after laser resurfacing, showed good results. At two months after treatment, the clinical improvement was rated by the patient and physician as being "very good" in 81 of the 85 patients reviewed. These patients underwent laser resurfacing without complications. The proposed technique of
Ahn, Hye Shin; Jang, Mijung; Yun, Bo La; Kim, Bohyoung; Ko, Eun Sook; Han, Boo-Kyung; Chang, Jung Min; Yi, Ann; Cho, Nariya; Moon, Woo Kyung; Choi, Hye Young
2014-01-01
Objective To compare full-field digital mammography (FFDM) with and without use of an advanced post-processing algorithm in terms of image quality, lesion detection, diagnostic performance, and priority rank. Materials and Methods During a 22-month period, we prospectively enrolled 100 cases of specimen FFDM (Brestige®), performed alone or in combination with a post-processing algorithm developed by the manufacturer: group A (SMA), specimen mammography without application of "Mammogram enhancement ver. 2.0"; group B (SMB), specimen mammography with application of "Mammogram enhancement ver. 2.0". The two sets of specimen mammographies were randomly reviewed by five experienced radiologists. Image quality, lesion detection, diagnostic performance, and priority rank with regard to image preference were evaluated. Results Three aspects of image quality (overall quality, contrast, and noise) of SMB were significantly superior to those of SMA (p < 0.05). SMB was significantly superior to SMA for visualizing calcifications (p < 0.05). Diagnostic performance, as evaluated by cancer score, was similar between SMA and SMB. SMB was preferred to SMA by four of the five reviewers. Conclusion The post-processing algorithm may improve image quality and image preference in FFDM compared with images processed without the software. PMID:24843234
Knox, Jeanette Bresson Ladegaard; Svendsen, Mette Nordahl
2015-08-01
This article examines the storytelling aspect in philosophizing with rehabilitating cancer patients in small Socratic dialogue groups (SDG). Recounting an experience to illustrate a philosophical question chosen by the participants is the traditional point of departure for the dialogical exchange. However, narrating is much more than a beginning point or the skeletal framework of events and it deserves more scholarly attention than hitherto given. Storytelling pervades the whole Socratic process and impacts the conceptual analysis in a SDG. In this article we show how the narrative aspect became a rich resource for the compassionate bond between participants and how their stories cultivated the abstract reflection in the group. In addition, the aim of the article is to reveal the different layers in the performance of storytelling, or of authoring experience. By picking, poking and dissecting an experience through a collaborative effort, most participants had their initial experience existentially refined and the chosen concept of which the experience served as an illustration transformed into a moral compass to be used in self-orientation post cancer.
Singh, Arvinder; Chandra, Amreesh
2015-01-01
Amongst the materials investigated for supercapacitor electrodes, carbon-based materials are the most widely studied. However, pure carbon materials suffer from inherent physical processes which limit the maximum specific energy and power that can be achieved in an energy storage device. Therefore, the use of carbon-based composites with suitable nanomaterials is attaining prominence. The synergistic effect between pseudocapacitive nanomaterials (high specific energy) and carbon (high specific power) is expected to deliver the desired improvements. We report the fabrication of a high-capacitance asymmetric supercapacitor based on electrodes of composites of SnO2 and V2O5 with multiwall carbon nanotubes and a neutral 0.5 M Li2SO4 aqueous electrolyte. The advantages of the fabricated asymmetric supercapacitors are compared with the results published in the literature. The widened operating voltage window is due to the higher over-potential of electrolyte decomposition and a large difference in the work functions of the metal oxides used. The charge-balanced device returns a specific capacitance of ~198 F g−1 with a corresponding specific energy of ~89 Wh kg−1 at 1 A g−1. The proposed composite systems have shown great potential for fabricating high-performance supercapacitors. PMID:26494197
Singh, Arvinder; Chandra, Amreesh
2015-10-23
Amongst the materials investigated for supercapacitor electrodes, carbon-based materials are the most widely studied. However, pure carbon materials suffer from inherent physical processes which limit the maximum specific energy and power that can be achieved in an energy storage device. Therefore, the use of carbon-based composites with suitable nanomaterials is attaining prominence. The synergistic effect between pseudocapacitive nanomaterials (high specific energy) and carbon (high specific power) is expected to deliver the desired improvements. We report the fabrication of a high-capacitance asymmetric supercapacitor based on electrodes of composites of SnO2 and V2O5 with multiwall carbon nanotubes and a neutral 0.5 M Li2SO4 aqueous electrolyte. The advantages of the fabricated asymmetric supercapacitors are compared with the results published in the literature. The widened operating voltage window is due to the higher over-potential of electrolyte decomposition and a large difference in the work functions of the metal oxides used. The charge-balanced device returns a specific capacitance of ~198 F g(-1) with a corresponding specific energy of ~89 Wh kg(-1) at 1 A g(-1). The proposed composite systems have shown great potential for fabricating high-performance supercapacitors.
Zhao, W; Niu, T; Xing, L; Xiong, G; Elmore, K; Min, J; Zhu, J; Wang, L
2015-06-15
Purpose: To significantly improve dual energy CT (DECT) imaging by establishing a new theoretical framework of image-domain material decomposition with incorporation of edge-preserving techniques. Methods: The proposed algorithm, HYPR-NLM, combines the edge-preserving non-local mean filter (NLM) with the HYPR-LR (Local HighlY constrained backPRojection Reconstruction) framework. Image denoising in the HYPR-LR framework depends on the noise level of the composite image, which is the average of the different energy images; for DECT, the composite image is the average of the high- and low-energy images. To further reduce noise, one may increase the filter window size of HYPR-LR, but this leads to resolution degradation. By incorporating NLM filtering into the HYPR-LR framework, HYPR-NLM reduces material-decomposition noise by exploiting energy-information redundancy as well as non-local means. We demonstrate the noise reduction and resolution preservation of the algorithm with both an iodine concentration numerical phantom and clinical patient data, by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR, and iterative image-domain material decomposition (Iter-DECT). Results: The results show that the iterative material decomposition method reduces noise to the lowest level and provides improved DECT images. HYPR-NLM significantly reduces noise while preserving the accuracy of quantitative measurement and resolution. For the iodine concentration numerical phantom, the averaged noise levels are about 2.0, 0.7, 0.2 and 0.4 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. For the patient data, the noise levels of the water images are about 0.36, 0.16, 0.12 and 0.13 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. Difference images of both HYPR-LR and Iter-DECT show edge effects, while no significant edge effect is shown for HYPR-NLM, suggesting spatial resolution is well preserved for HYPR-NLM. Conclusion: HYPR
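The non-local mean filter at the core of HYPR-NLM can be illustrated with a toy 1-D version. This is only a sketch: the abstract applies NLM to 2-D CT images inside the HYPR-LR framework, and the patch radius and kernel width `h` below are arbitrary illustrative choices, not parameters from the paper.

```python
import math

def nlm_1d(signal, patch=1, h=0.5):
    """Toy 1-D non-local means: each sample is replaced by a weighted average
    of all samples, with weights derived from patch similarity (Gaussian
    kernel of width h). Edge samples use clamped (replicated) boundaries."""
    n = len(signal)

    def patch_at(i):
        # patch of radius `patch` around index i, clamped at the borders
        return [signal[min(max(j, 0), n - 1)] for j in range(i - patch, i + patch + 1)]

    out = []
    for i in range(n):
        pi = patch_at(i)
        wsum, acc = 0.0, 0.0
        for j in range(n):
            pj = patch_at(j)
            d2 = sum((a - b) ** 2 for a, b in zip(pi, pj))  # patch distance
            w = math.exp(-d2 / (h * h))                      # similarity weight
            wsum += w
            acc += w * signal[j]
        out.append(acc / wsum)
    return out
```

Because the weights come from patch similarity rather than spatial distance alone, similar structures anywhere in the signal reinforce each other, which is the redundancy HYPR-NLM exploits while preserving edges.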
NASA Technical Reports Server (NTRS)
Spinhirne, James D.; Palm, Stephen P.; Hlavka, Dennis L.; Hart, William D.
2007-01-01
The Geoscience Laser Altimeter System (GLAS), launched in early 2003, is the first polar-orbiting satellite lidar. The instrument design includes high performance observations of the distribution and optical scattering cross sections of atmospheric clouds and aerosol. The backscatter lidar operates at two wavelengths, 532 and 1064 nm. For the atmospheric cloud and aerosol measurements, the 532 nm channel was designed for ultra-high efficiency with solid state photon counting detectors and etalon filtering. Data processing algorithms were developed to calibrate and normalize the signals and produce global scale data products of the height distribution of cloud and aerosol layers and their optical depths and particulate scattering cross sections up to the limit of optical attenuation. The paper concentrates on the effectiveness and limitations of the lidar channel design and data product algorithms. Both atmospheric receiver channels meet and exceed their design goals. Geiger-mode avalanche photodiode modules are used for the 532 nm signal. The operational experience is that some signal artifacts and non-linearity require correction in data processing. As with all photon counting detectors, a pulse-pile-up calibration is an important aspect of the measurement. Additional signal corrections were found to be necessary relating to a saturation signal-run-on effect and, for daytime data, a small range-dependent variation in the responsivity. It was possible to correct for these signal errors in data processing and achieve the requirement to accurately profile aerosol and cloud cross sections down to 10^-7 m^-1 sr^-1. The analysis procedure employs a precise calibration against molecular scattering in the mid-stratosphere. The 1064 nm channel detection employs a high-speed analog APD for surface and atmospheric measurements, where the detection sensitivity is limited by detector noise and is over an order of magnitude less than at 532 nm. A unique feature of
Muller, Christophe; Marcou, Gilles; Horvath, Dragos; Aires-de-Sousa, João; Varnek, Alexandre
2012-12-21
Machine learning methods (SVM and the JRip rule learner) have been used in conjunction with the Condensed Graph of Reaction (CGR) approach to identify errors in the atom-to-atom mapping of chemical reactions produced by ChemAxon's automated mapping tool. The modeling has been performed on the first three enzymatic classes of metabolic reactions from the KEGG database. Each reaction has been converted into a CGR representing a pseudomolecule with conventional (single, double, aromatic, etc.) bonds and dynamic bonds characterizing chemical transformations. The ChemAxon tool was used to automatically detect the matching atom pairs in reagents and products. These automated mappings were analyzed by a human expert and classified as "correct" or "wrong". ISIDA fragment descriptors generated from the CGRs of both correct and wrong mappings were used as attributes in machine learning. The learned models have been validated in n-fold cross-validation on the training set, followed by a challenge to detect correct and wrong mappings within an external test set of reactions never used for learning. Results show that both SVM and JRip models detect most of the wrongly mapped reactions. We believe that this approach could be used to identify erroneous atom-to-atom mappings produced by any automated algorithm.
A high-performance seizure detection algorithm based on Discrete Wavelet Transform (DWT) and EEG
Chen, Duo; Wan, Suiren; Xiang, Jing; Bao, Forrest Sheng
2017-01-01
In the past decade, the Discrete Wavelet Transform (DWT), a powerful time-frequency tool, has been widely used in computer-aided signal analysis of epileptic electroencephalography (EEG), such as the detection of seizures. One of the important hurdles in the application of DWT is its settings, which have been chosen empirically or arbitrarily in previous works. This study aimed to develop a framework for automatically searching the optimal DWT settings to improve accuracy and to reduce the computational cost of seizure detection. To address this, we decomposed EEG data using 7 commonly used wavelet families, up to the maximum theoretical level of each mother wavelet. The wavelets and decomposition levels providing the highest accuracy in each wavelet family were then searched in an exhaustive selection of frequency bands, yielding optimal accuracy at low computational cost. The selection of frequency bands and features removed approximately 40% of redundancies. The developed algorithm achieved promising performance on two well-tested EEG datasets (accuracy >90% for both datasets). The experimental results demonstrate that the settings of DWT affect its performance on seizure detection substantially. Compared with existing wavelet-based seizure detection methods, the new approach is more accurate and more transferable among datasets. PMID:28278203
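The DWT feature extraction described above can be sketched with the simplest member of the wavelet families, the Haar wavelet. This is a minimal illustration only: the study searches 7 wavelet families and all admissible decomposition levels (presumably with a wavelet library), whereas this sketch hand-codes one family and uses band energy as a representative feature.

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: returns (approximation, detail) bands."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))  # low-pass: scaled local sums
        detail.append((a - b) / math.sqrt(2))  # high-pass: scaled local differences
    return approx, detail

def wavedec(signal, levels):
    """Multi-level decomposition: detail band per level plus final approximation."""
    bands, current = [], list(signal)
    for _ in range(levels):
        current, detail = haar_dwt(current)
        bands.append(detail)
    bands.append(current)
    return bands

def band_energy(coeffs):
    """Energy of a coefficient band -- a common seizure-detection feature."""
    return sum(c * c for c in coeffs)
```

Because the Haar transform is orthonormal, the energies of the approximation and detail bands sum to the energy of the input, which is what makes per-band energies meaningful as frequency-localized features.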
System Performance of an Integrated Airborne Spacing Algorithm with Ground Automation
NASA Technical Reports Server (NTRS)
Swieringa, Kurt A.; Wilson, Sara R.; Baxley, Brian T.
2016-01-01
The National Aeronautics and Space Administration's (NASA's) first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature ATM technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools to enable precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise spacing behind another aircraft. Recent simulations and IM algorithm development at NASA have focused on trajectory-based IM operations, where aircraft equipped with IM avionics are expected to achieve a spacing goal, assigned by air traffic controllers, at the final approach fix. The recently published IM Minimum Operational Performance Standards describe five types of IM operations. This paper discusses the results and conclusions of a human-in-the-loop simulation that investigated three of those IM operations. The results presented in this paper focus on system performance and integration metrics. Overall, the IM operations conducted in this simulation integrated well with ground-based decision support tools, and certain types of IM operations were able to provide improved spacing precision at the final approach fix; however, some issues were identified that should be addressed prior to implementing IM procedures in real-world operations.
NASA Astrophysics Data System (ADS)
Mizusawa, Masataka; Kurihara, Masahito
Although the maze (or gridworld) is one of the most widely used benchmark problems for real-time search algorithms, it is not sufficiently clear how the density of randomly positioned obstacles affects the structure of the state spaces and the performance of the algorithms. In particular, recent studies of so-called phase transition phenomena, which can cause dramatic changes in performance within a relatively small parameter range, suggest that performance should be evaluated parametrically, with the parameter range wide enough to cover potential transition areas. In this paper, we present two measures for characterizing the hardness of randomly generated mazes parameterized by obstacle ratio and relate them to the performance of real-time search algorithms. The first measure is the entropy calculated from the probability of existence of solutions. The second is a measure based on the total initial heuristic error between the actual cost and its heuristic estimate. We show that the maze problems are the most difficult under both measures when the obstacle ratio is around 41%. We then solve the parameterized maze problems with the well-known real-time search algorithms RTA*, LRTA*, and MARTA* to relate their performance to the proposed measures. Evaluating the number of steps required for a single problem-solving run by the three algorithms, and the number required for the convergence of the learning process in LRTA*, we show that they all peak when the obstacle ratio is around 41%. The results support the relevance of the proposed measures. We also discuss the performance of the algorithms in terms of other statistical measures to obtain a deeper quantitative understanding of their behavior.
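The first measure above, the entropy of the solvability indicator, can be sketched directly: sample random mazes at obstacle ratio p, test corner-to-corner reachability with BFS, and compute the binary entropy of the empirical solvability probability. The grid size, trial count and corner start/goal placement below are illustrative assumptions, not the paper's experimental setup.

```python
import random
from collections import deque
from math import log2

def solvable(n, p, rng):
    """Random n x n maze with obstacle probability p; BFS from (0,0) to (n-1,n-1)."""
    grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    grid[0][0] = grid[n - 1][n - 1] = False  # keep start and goal open
    seen = {(0, 0)}
    frontier = deque([(0, 0)])
    while frontier:
        x, y = frontier.popleft()
        if (x, y) == (n - 1, n - 1):
            return True
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n and not grid[nx][ny] and (nx, ny) not in seen:
                seen.add((nx, ny))
                frontier.append((nx, ny))
    return False

def solution_entropy(n, p, trials=500, seed=0):
    """Binary entropy of the 'maze is solvable' event, estimated by sampling."""
    rng = random.Random(seed)
    q = sum(solvable(n, p, rng) for _ in range(trials)) / trials
    if q in (0.0, 1.0):
        return 0.0
    return -q * log2(q) - (1 - q) * log2(1 - q)
```

The entropy vanishes when mazes are almost always solvable (low p) or almost never solvable (high p), and peaks where solvability is most uncertain, which is the region the paper locates around a 41% obstacle ratio.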
NASA Technical Reports Server (NTRS)
Yool, S. R.; Star, J. L.; Estes, J. E.; Botkin, D. B.; Eckhardt, D. W.
1986-01-01
The earth's forests fix carbon from the atmosphere during photosynthesis. Scientists are concerned that massive forest removals may promote an increase in atmospheric carbon dioxide, with possible global warming and related environmental effects. Space-based remote sensing may enable the production of the accurate world forest maps needed to examine this concern objectively. To test the limits of remote sensing for large-area forest mapping, we use Landsat data acquired over a site in the forested mountains of southern California to examine the relative capacities of a variety of popular image processing algorithms to discriminate different forest types. Results indicate that certain algorithms are best suited to forest classification. Differences in performance between the algorithms tested appear related to variations in their sensitivities to spectral variations caused by background reflectance, differential illumination, and spatial pattern by species. The results emphasize the complexity of the relationships between the land-cover regime, remotely sensed data, and the algorithms used to process these data.
Wang, Mengjun; Devarajan, Karthik; Singal, Amit G; Marrero, Jorge A; Dai, Jianliang; Feng, Ziding; Rinaudo, Jo Ann S; Srivastava, Sudhir; Evans, Alison; Hann, Hie-Won; Lai, Yinzhi; Yang, Hushan; Block, Timothy M; Mehta, Anand
2016-02-01
Biomarkers for the early diagnosis of hepatocellular carcinoma (HCC) are needed to decrease mortality from this cancer. However, as new biomarkers have been slow to be brought to clinical practice, we have developed a diagnostic algorithm that utilizes commonly used clinical measurements in those at risk of developing HCC. Briefly, as α-fetoprotein (AFP) is routinely used, an algorithm that incorporated AFP values along with four other clinical factors was developed. Discovery analysis was performed on electronic data from patients who had liver disease (cirrhosis) alone or HCC in the background of cirrhosis. The discovery set consisted of 360 patients from two independent locations. A logistic regression algorithm was developed that incorporated log-transformed AFP values with age, gender, alkaline phosphatase, and alanine aminotransferase levels. We define this as the Doylestown algorithm. In the discovery set, the Doylestown algorithm improved the overall performance of AFP by 10%. In subsequent external validation in over 2,700 patients from three independent sites, the Doylestown algorithm improved detection of HCC as compared with AFP alone by 4% to 20%. In addition, at a fixed specificity of 95%, the Doylestown algorithm improved the detection of HCC as compared with AFP alone by 2% to 20%. In conclusion, the Doylestown algorithm consolidates clinical laboratory values, with age and gender, which are each individually associated with HCC risk, into a single value that can be used for HCC risk assessment. As such, it should be applicable and useful to the medical community that manages those at risk for developing HCC.
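The Doylestown algorithm described above is a logistic regression over log-transformed AFP, age, gender, alkaline phosphatase and alanine aminotransferase. The sketch below shows the shape of such a score; the coefficient values are placeholders invented for illustration and are NOT the published model weights.

```python
import math

# PLACEHOLDER coefficients for a Doylestown-style logistic score.
# These numbers are hypothetical; only the functional form (logistic model
# over log10(AFP), age, gender, ALP, ALT) follows the description above.
COEF = {
    "intercept": -5.0,   # hypothetical
    "log_afp":    1.0,   # hypothetical
    "age":        0.05,  # hypothetical
    "male":       0.5,   # hypothetical
    "alp":        0.004, # hypothetical
    "alt":       -0.002, # hypothetical
}

def risk_score(afp, age, male, alp, alt, coef=COEF):
    """Probability-like HCC risk score in (0, 1) via the logistic function."""
    z = (coef["intercept"]
         + coef["log_afp"] * math.log10(max(afp, 1e-6))  # log-transformed AFP
         + coef["age"] * age
         + coef["male"] * (1 if male else 0)
         + coef["alp"] * alp
         + coef["alt"] * alt)
    return 1.0 / (1.0 + math.exp(-z))
```

The point of such a consolidation is that several weakly informative clinical values collapse into a single number that can be thresholded at a chosen specificity.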
NASA Astrophysics Data System (ADS)
Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim
2012-12-01
This article deals with the application of the Spatial Time-Frequency Distribution (STFD) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is performed for the method under consideration with respect to the method using the data covariance matrix, when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression of the Direction Of Arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters appearing in the derived expression on algorithm performance. It is particularly observed that for low Signal to Noise Ratio (SNR) and high Signal to sensor Perturbation Ratio (SPR) the STFD method gives better performance, while for high SNR and the same SPR both methods perform similarly.
SU-E-T-605: Performance Evaluation of MLC Leaf-Sequencing Algorithms in Head-And-Neck IMRT
Jing, J; Lin, H; Chow, J
2015-06-15
Purpose: To investigate the efficiency of three multileaf collimator (MLC) leaf-sequencing algorithms, proposed by Galvin et al, Chen et al and Siochi et al, using external beam treatment plans for head-and-neck intensity modulated radiation therapy (IMRT). Methods: IMRT plans for the head-and-neck were created using the CORVUS treatment planning system. The plans were optimized and the fluence maps for all photon beams determined. The three MLC leaf-sequencing algorithms were used to calculate the final photon segmental fields and their monitor units in delivery. For comparison purposes, the maximum intensity of the fluence map was kept constant across plans. The number of beam segments and the total number of monitor units were calculated for the three algorithms. Results: We found that the algorithm of Galvin et al required the largest total number of monitor units, about 70% more than the other two algorithms. Moreover, the algorithms of Galvin et al and Siochi et al produced relatively fewer beam segments than that of Chen et al. Although the number of beam segments and the total number of monitor units calculated by the different algorithms varied with the head-and-neck plan, the algorithms of Galvin et al and Siochi et al performed well with fewer beam segments, though the algorithm of Galvin et al required a larger total number of monitor units than that of Siochi et al. Conclusion: Although the performance of a leaf-sequencing algorithm varies with different IMRT plans having different fluence maps, an evaluation is possible based on the calculated numbers of beam segments and monitor units. In this study, the algorithm of Siochi et al was found to be more efficient for head-and-neck IMRT. The Project Sponsored by the Fundamental Research Funds for the Central Universities (J2014HGXJ0094) and the Scientific Research Foundation for the
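The segment/monitor-unit bookkeeping behind such a comparison can be made concrete with a deliberately naive leaf sequencer. This is not any of the three compared algorithms: it is a unit-weight sweep that repeatedly opens each leaf pair over the first contiguous run of remaining fluence, purely to illustrate how an integer fluence map decomposes into MLC apertures and how segments and monitor units are counted.

```python
def leaf_sequence(fluence):
    """Decompose an integer fluence map into MLC segments.

    A segment is one aperture per row: either None (leaves closed) or a
    (left, right) interval of open bixels; every segment carries unit weight.
    Returns (segments, total_monitor_units)."""
    fmap = [row[:] for row in fluence]
    segments, mu = [], 0
    while any(v > 0 for row in fmap for v in row):
        apertures = []
        for row in fmap:
            # open the leaves over the first run of positive bixels, if any
            left = next((i for i, v in enumerate(row) if v > 0), None)
            if left is None:
                apertures.append(None)          # this leaf pair stays closed
                continue
            right = left
            while right + 1 < len(row) and row[right + 1] > 0:
                right += 1
            apertures.append((left, right))
            for i in range(left, right + 1):
                row[i] -= 1                     # deliver one unit through the aperture
        segments.append(apertures)
        mu += 1
    return segments, mu
```

Real sequencers (including the three above) trade off exactly these two quantities, using non-unit segment weights and smarter interval choices to reduce the segment count, the total monitor units, or both.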
Guelpa, Valérian; Laurent, Guillaume J.; Sandoz, Patrick; Zea, July Galeano; Clévy, Cédric
2014-01-01
This paper presents a visual measurement method able to sense 1D rigid body displacements with very high resolutions, large ranges and high processing rates. Sub-pixelic resolution is obtained thanks to a structured pattern placed on the target. The pattern is made of twin periodic grids with slightly different periods. The periodic frames are suited for Fourier-like phase calculations—leading to high resolution—while the period difference allows the removal of phase ambiguity and thus a high range-to-resolution ratio. The paper presents the measurement principle as well as the processing algorithms (source files are provided as supplementary materials). The theoretical and experimental performances are also discussed. The processing time is around 3 μs for a line of 780 pixels, which means that the measurement rate is mostly limited by the image acquisition frame rate. A 3-σ repeatability of 5 nm is experimentally demonstrated which has to be compared with the 168 μm measurement range. PMID:24625736
NASA Astrophysics Data System (ADS)
Erlingis, J. M.; Gourley, J. J.; Kirstetter, P.; Anagnostou, E. N.; Kalogiros, J. A.; Anagnostou, M.
2015-12-01
An Intensive Observation Period (IOP) for the Integrated Precipitation and Hydrology Experiment (IPHEx), part of NASA's Ground Validation campaign for the Global Precipitation Measurement Mission satellite took place from May-June 2014 in the Smoky Mountains of western North Carolina. The National Severe Storms Laboratory's mobile dual-pol X-band radar, NOXP, was deployed in the Pigeon River Basin during this time and employed various scanning strategies, including more than 1000 Range Height Indicator (RHI) scans in coordination with another radar and research aircraft. Rain gauges and disdrometers were also positioned within the basin to verify precipitation estimates and estimation of microphysical parameters. The performance of the SCOP-ME post-processing algorithm on NOXP data is compared with real-time and near real-time precipitation estimates with varying spatial resolutions and quality control measures (Stage IV gauge-corrected radar estimates, Multi-Radar/Multi-Sensor System Quantitative Precipitation Estimates, and CMORPH satellite estimates) to assess the utility of a gap-filling radar in complex terrain. Additionally, the RHI scans collected in this IOP provide a valuable opportunity to examine the evolution of microphysical characteristics of convective and stratiform precipitation as they impinge on terrain. To further the understanding of orographically enhanced precipitation, multiple storms for which RHI data are available are considered.
Guelpa, Valérian; Laurent, Guillaume J; Sandoz, Patrick; Zea, July Galeano; Clévy, Cédric
2014-03-12
This paper presents a visual measurement method able to sense 1D rigid body displacements with very high resolutions, large ranges and high processing rates. Sub-pixelic resolution is obtained thanks to a structured pattern placed on the target. The pattern is made of twin periodic grids with slightly different periods. The periodic frames are suited for Fourier-like phase calculations, leading to high resolution, while the period difference allows the removal of phase ambiguity and thus a high range-to-resolution ratio. The paper presents the measurement principle as well as the processing algorithms (source files are provided as supplementary materials). The theoretical and experimental performances are also discussed. The processing time is around 3 µs for a line of 780 pixels, which means that the measurement rate is mostly limited by the image acquisition frame rate. A 3-σ repeatability of 5 nm is experimentally demonstrated, which has to be compared with the 168 µm measurement range.
Performance of a worm algorithm in ϕ4 theory at finite quartic coupling
NASA Astrophysics Data System (ADS)
Korzec, Tomasz; Vierhaus, Ingmar; Wolff, Ulli
2011-07-01
Worm algorithms have been very successful in simulating sigma models with fixed-length spins, which result from scalar field theories in the limit of infinite quartic coupling λ. Here we investigate their algorithmic efficiency more closely at finite and even vanishing λ for the one-component model in dimensions D = 2, 3, 4.
Performance analysis of structured gradient algorithm. [for adaptive beamforming linear arrays
NASA Technical Reports Server (NTRS)
Godara, Lal C.
1990-01-01
The structured gradient algorithm uses a structured estimate of the array correlation matrix (ACM) to estimate the gradient required for the constrained least-mean-square (LMS) algorithm. This structure reflects the structure of the exact array correlation matrix for an equispaced linear array and is obtained by spatial averaging of the elements of the noisy correlation matrix. In its standard form the LMS algorithm does not exploit the structure of the array correlation matrix. The gradient is estimated by multiplying the array output with the receiver outputs. An analysis of the two algorithms is presented to show that the covariance of the gradient estimated by the structured method is less sensitive to the look direction signal than that estimated by the standard method. The effect of the number of elements on the signal sensitivity of the two algorithms is studied.
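The spatial-averaging step described above, which forces on the noisy estimate the Toeplitz structure that an equispaced linear array's exact correlation matrix must have, can be sketched as follows. This is a generic NumPy illustration of diagonal averaging, not the paper's implementation:

```python
import numpy as np

def structured_acm_estimate(R):
    """Toeplitz-structure a noisy array correlation matrix by spatial
    averaging: replace every element on the k-th diagonal with the mean
    of that diagonal, since the exact correlation matrix of an
    equispaced linear array is constant along each diagonal."""
    n = R.shape[0]
    S = np.zeros_like(R, dtype=float)
    for k in range(-(n - 1), n):
        d = np.diagonal(R, offset=k)          # k-th diagonal of the noisy estimate
        idx = np.arange(len(d))
        if k >= 0:
            S[idx, idx + k] = d.mean()        # upper diagonals
        else:
            S[idx - k, idx] = d.mean()        # lower diagonals
    return S
```

For an estimate that is already Toeplitz, the averaging leaves it unchanged; otherwise it projects onto the structured form whose gradient covariance the paper analyzes.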
2014-12-01
signals classification (MUSIC) subspace direction-finding algorithm are evaluated in this thesis. Additionally, two performance enhancements are presented: one that reduces the MUSIC computational load and one that provides a method of utilizing collector motion to resolve DOA ambiguities.
ERIC Educational Resources Information Center
Meir, Daniel D.; Lazinger, Susan S.
1998-01-01
Reports on a survey measuring the performance of a merging algorithm used to generate the now-defunct ULM (Union List of Monographs) file for ALEPH, Israel's research library network. Discusses automatic detection and merging of duplicate bibliographic records, problems created by lack of a standard for Hebrew spelling, and methods for measuring…
Verster, Joris C; Bekker, Evelijne M; de Roos, Marlise; Minova, Anita; Eijken, Erik J E; Kooij, J J Sandra; Buitelaar, Jan K; Kenemans, J Leon; Verbaten, Marinus N; Olivier, Berend; Volkerts, Edmund R
2008-05-01
Although patients with attention-deficit hyperactivity disorder (ADHD) have reported improved driving performance on methylphenidate, limited evidence exists to support an effect of treatment on driving performance, and some regions prohibit driving on methylphenidate. A randomized, crossover trial examining the effects of methylphenidate versus placebo on highway driving in 18 adults with ADHD was carried out. After three days of no treatment, patients received either their usual methylphenidate dose (mean: 14.7 mg; range: 10-30 mg) or placebo, and then the opposite treatment after a six- to seven-day washout period. Patients performed a 100 km driving test during normal traffic, 1.5 h after treatment administration. Standard deviation of lateral position (SDLP), the weaving of the car, was the primary outcome measure. Secondary outcome measurements included the standard deviation of speed and patient reports of driving performance. Driving performance was significantly better in the methylphenidate than in the placebo condition, as reflected by the SDLP difference (2.3 cm, 95% CI = 0.8-3.8, P = 0.004). Variation in speed was similar on treatment and on placebo (-0.05 km/h, 95% CI = -0.4 to 0.2, P = 0.70). Among adults with ADHD with a history of a positive clinical response to methylphenidate, methylphenidate significantly improves driving performance.
Performance of decoder-based algorithms for signal synchronization for DSSS waveforms
NASA Astrophysics Data System (ADS)
Matache, A.; Valles, E. L.
This paper presents results on the implementation of pilotless carrier synchronization algorithms at low SNRs using joint decoding and decision-directed tracking. A software test bed was designed to simulate the effects of decision-directed carrier synchronization (DDCS) techniques. These techniques are compared to non-decision-directed algorithms used in phase-locked loops (PLLs) or Costas loops. In previous work by the authors, results for direct M-ary modulation constellations with no code spreading were introduced. This paper focuses on the application of the proposed family of decision-directed algorithms to direct sequence spread spectrum (DSSS) waveforms, typical of GPS signals. The current algorithm can utilize feedback from turbo codes in addition to the prior support of LDPC codes.
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Haftka, R. T.; Robinson, J. C.
1982-01-01
The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit and implicit algorithms are discussed. A promising set of implicit algorithms with variable time steps, known as the GEAR package, is described. Four test problems, used for evaluating and comparing various algorithms, were selected, and finite element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system, and a model of the space shuttle orbiter wing. Results generally indicate a preference for implicit over explicit algorithms for solution of transient structural heat transfer problems when the governing equations are stiff.
NASA Astrophysics Data System (ADS)
Yoon, Jin-Seon; Kim, Nam; Suh, HoHyung; Jeon, Seok Hee
2000-03-01
In this paper, gratings for optical interconnection are designed using a genetic algorithm (GA) as a robust and efficient scheme. The real-time optical interconnection system architecture is composed of an LC-SLM, a CCD array detector, an IBM PC, a He-Ne laser, and a Fourier transform lens. A pixelated binary phase grating is displayed on the LC-SLM and can interconnect incoming beams to desired output spots freely in real time. To adapt the GA for finding near-globally-optimal solutions, a chromosome is coded as a binary string of length 32 × 32, the stochastic tournament method is used to decrease stochastic sampling error, and single-point crossover with a 16 × 16 block size is applied. The effects of several parameters on the grating design are analyzed. First, for the crossover probability: a grating designed with a crossover probability of 0.75 has a high diffraction efficiency of 74.7% and a uniformity of 1.73 × 10^-1, with a mutation probability of 0.001 and a population size of 300. Second, for the mutation probability: a grating designed with a mutation probability of 0.001 has a high efficiency of 74.4% and a uniformity of 1.61 × 10^-1, with a crossover probability of 1.0 and a population size of 300. Third, for the population size: a grating designed with a population size of 300 at generation 400 has above 74% diffraction efficiency, with a mutation probability of 0.001 and a crossover probability of 1.0.
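The GA machinery the abstract describes (binary chromosomes, single-point crossover, low-probability mutation) can be sketched minimally. This is a generic 1-D illustration, not the authors' 16 × 16 block crossover:

```python
import random

def single_point_crossover(a, b, rng=random):
    """Classic single-point crossover on two equal-length binary
    chromosomes; the paper's variant operates on 16x16 blocks of a
    32x32 grating, which this 1-D sketch simplifies."""
    assert len(a) == len(b)
    cut = rng.randrange(1, len(a))            # cut point strictly inside
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(chrom, p=0.001, rng=random):
    """Flip each bit independently with probability p (the paper's
    best-performing mutation probability is 0.001)."""
    return [bit ^ 1 if rng.random() < p else bit for bit in chrom]
```

A fitness function scoring diffraction efficiency and uniformity of the resulting grating would drive the selection step, which is omitted here.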
Experimental Investigation of the Performance of Image Registration and De-aliasing Algorithms
2009-09-01
spread function. In the literature these types of algorithms are sometimes included under the broad umbrella of superresolution. However, in the current... We use one of these patterns to visually demonstrate successful de-aliasing. Subject terms: image de-aliasing, superresolution, microscanning, undersampled point spread function.
Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc
2014-05-15
Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in the case of low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered with an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed in the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the
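The image-quality measure used in the study, contrast-to-noise ratio, has a standard definition that can be sketched as follows. This is one common convention; the authors' exact ROI choices are not specified in the abstract:

```python
import numpy as np

def contrast_to_noise_ratio(roi, background):
    """CNR = |mean(ROI) - mean(background)| / std(background): the
    contrast between a region of interest (e.g. the ventricle) and a
    background region, normalized by background noise."""
    roi = np.asarray(roi, dtype=float)
    bg = np.asarray(background, dtype=float)
    return abs(roi.mean() - bg.mean()) / bg.std()
```

Higher CNR at a fixed dose is what distinguishes the LDPC and HDTV reconstructions from PCF in the reported results.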
Hummel, H E; Eisinger, M T; Hein, D F; Breuer, M; Schmid, S; Leithold, G
2012-01-01
Pheromone effects, discovered some 130 years ago but scientifically defined just half a century ago, are a great bonus for basic and applied biology. Specifically, pest management efforts have been advanced in many insect orders, whether for purposes of monitoring, mass trapping, or mating disruption. By finding and applying a new search algorithm, nearly 20,000 entries in the pheromone literature have been counted, a number much higher than originally anticipated. This compilation contains identified and thus synthesizable structures for all major orders of insects. Among them are hundreds of agriculturally significant insect pests whose aggregated damages and costly control measures range in the multibillions of dollars annually. Unfortunately, and despite a lot of effort within the international entomological scene, the number of efficient and cheap engineering solutions for dispensing pheromones under variable field conditions is uncomfortably lagging behind. Some innovative approaches are cited from the relevant literature in an attempt to rectify this situation. Recently, specifically designed electrospun organic nanofibers have offered a lot of promise. With their use, the mating communication of vineyard insects like Lobesia botrana (Lep.: Tortricidae) can be disrupted for periods of seven weeks.
NASA Technical Reports Server (NTRS)
Matic, Roy M.; Mosley, Judith I.
1994-01-01
Future space-based, remote sensing systems will have data transmission requirements that exceed available downlink capacity, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based, image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.
Lee, Chih; Huang, Chun-Hsi
2010-01-01
Efforts have been devoted to accelerating the construction of suffix trees. However, little attention has been given to post-construction operations on suffix trees. Therefore, we investigate the effects of improved spatial locality on certain post-construction operations on suffix trees. We used a maximal exact repeat finding algorithm, MERF, on which the software REPuter is based, as an example, and conducted experiments on the 16 chromosomes of the yeast Saccharomyces cerevisiae. Two versions of suffix trees were customized for the algorithm and two variants of MERF were implemented accordingly. We showed that in all cases, the cache-oblivious MERF is faster and displays consistently lower cache miss rates than its non-optimized counterpart.
NASA Astrophysics Data System (ADS)
Yadav, Deepti; Arora, M. K.; Tiwari, K. C.; Ghosh, J. K.
2016-04-01
Hyperspectral imaging is a powerful tool in the field of remote sensing and has been used for many applications such as mineral detection, detection of landmines, and target detection. Major issues in target detection using HSI are spectral variability, noise, small target size, huge data dimensions, high computation cost, and complex backgrounds. Many popular detection algorithms do not work for difficult targets (e.g., small or camouflaged ones) and may produce high false-alarm rates. Thus, target/background discrimination is a key issue, and analyzing a target's behaviour in realistic environments is therefore crucial for the accurate interpretation of hyperspectral imagery. Using standard libraries to study a target's spectral behaviour has the limitation that targets are measured in different environmental conditions than in the application. This study uses spectral data of the same targets that were used during collection of the HSI image. This paper analyzes target spectra in such a way that each target can be spectrally distinguished from a mixture of spectral data. An artificial neural network (ANN) has been used to identify the spectral range for reducing data, and its efficacy for improving target detection is then verified. The ANN results propose discriminating band ranges for the targets; these ranges were further used to perform target detection with four popular spectral-matching target detection algorithms. Further, the results of the algorithms were analyzed using ROC curves to evaluate the effectiveness of the ranges suggested by the ANN over the full spectrum for detection of the desired targets. In addition, a comparative assessment of the algorithms is also performed using ROC.
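As one example of the kind of spectral-matching detector the study compares, the Spectral Angle Mapper (SAM) can be sketched generically. The abstract does not name the four algorithms used, so SAM here is an assumed representative, not necessarily one of them:

```python
import numpy as np

def spectral_angle(pixel, target):
    """Spectral Angle Mapper: the angle (radians) between a pixel
    spectrum and a reference target spectrum. Smaller angles mean a
    closer spectral match; thresholding the angle yields a detector."""
    p = np.asarray(pixel, dtype=float)
    t = np.asarray(target, dtype=float)
    cos = np.dot(p, t) / (np.linalg.norm(p) * np.linalg.norm(t))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Restricting the dot product to the ANN-selected band range, rather than the full spectrum, is the kind of comparison the ROC analysis in the paper evaluates.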
Liu, Chun; Kroll, Andreas
2016-01-01
Multi-robot task allocation determines the task sequence and distribution for a group of robots in multi-robot systems, which is a constrained combinatorial optimization problem and is more complex in the case of cooperative tasks because they introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm (a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation) is developed in this paper. Moreover, the impact of mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed when solving several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm can obtain better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems.
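The four mutation operators analyzed (swap, insertion, inversion, displacement) act on a task-sequence permutation. A minimal sketch, assuming integer task indices and valid positions:

```python
def swap(seq, i, j):
    """Exchange the tasks at positions i and j."""
    s = list(seq)
    s[i], s[j] = s[j], s[i]
    return s

def insertion(seq, i, j):
    """Remove the task at position i and reinsert it at position j."""
    s = list(seq)
    s.insert(j, s.pop(i))
    return s

def inversion(seq, i, j):
    """Reverse the sub-sequence between positions i and j (inclusive)."""
    s = list(seq)
    s[i:j + 1] = reversed(s[i:j + 1])
    return s

def displacement(seq, i, j, k):
    """Cut the slice [i, j] out and reinsert it at position k."""
    s = list(seq)
    block = s[i:j + 1]
    del s[i:j + 1]
    return s[:k] + block + s[k:]
```

Each operator returns a permutation of the same tasks, so feasibility with respect to task coverage is preserved; a GA would pick positions at random and keep elites per subpopulation, as the paper describes.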
Kirschstein, Timo; Wolters, Alexander; Lenz, Jan-Hendrik; Fröhlich, Susanne; Hakenberg, Oliver; Kundt, Günther; Darmüntzel, Martin; Hecker, Michael; Altiner, Attila; Müller-Hilke, Brigitte
2016-01-01
Objective: The amendment of the Medical Licensing Act (ÄAppO) in Germany in 2002 led to the introduction of graded assessments in the clinical part of medical studies. This, in turn, lent new weight to the importance of written tests, even though the minimum requirements for exam quality are sometimes difficult to reach. Introducing exam quality as a criterion for the award of performance-based allocation of funds is expected to steer the attention of faculty members towards more quality and perpetuate higher standards. However, at present there is a lack of suitable algorithms for calculating exam quality. Methods: In the spring of 2014, the students' dean commissioned the "core group" for curricular improvement at the University Medical Center in Rostock to revise the criteria for the allocation of performance-based funds for teaching. In a first approach, we developed an algorithm that was based on the results of the most common type of exam in medical education, multiple choice tests. It included item difficulty and discrimination, reliability, as well as the distribution of grades achieved. Results: This algorithm quantitatively describes exam quality of multiple choice exams. However, it can also be applied to exams involving short essay questions and the OSCE. It thus allows for the quantitation of exam quality in the various subjects and, in analogy to impact factors and third-party grants, a ranking among faculty. Conclusion: Our algorithm can be applied to all test formats in which item difficulty, the discriminatory power of the individual items, reliability of the exam and the distribution of grades are measured. Even though the content validity of an exam is not considered here, we believe that our algorithm is suitable as a general basis for performance-based allocation of funds. PMID:27275509
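The ingredients the algorithm combines (item difficulty, discrimination, reliability, grade distribution) have standard psychometric definitions. A minimal sketch of two of them, proportion-correct difficulty and Kuder-Richardson 20 reliability, on a 0/1 response matrix; this is not the Rostock group's actual weighting formula:

```python
import numpy as np

def item_difficulty(responses):
    """Proportion of examinees answering each item correctly.
    responses: examinees x items matrix of 0/1 scores."""
    return np.asarray(responses, dtype=float).mean(axis=0)

def kr20(responses):
    """Kuder-Richardson 20 reliability for dichotomous items:
    (k/(k-1)) * (1 - sum(p*q) / var(total score))."""
    X = np.asarray(responses, dtype=float)
    k = X.shape[1]
    p = X.mean(axis=0)                 # per-item difficulty
    total_var = X.sum(axis=1).var()    # variance of total scores
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)
```

Item discrimination (e.g. a point-biserial correlation of item score with total score) would be computed per item in the same spirit and combined with these into a quality score.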
NASA Astrophysics Data System (ADS)
Stagnaro, Mattia; Colli, Matteo; Lanza, Luca Giovanni; Chan, Pak Wai
2016-11-01
Eight rainfall events recorded from May to September 2013 at Hong Kong International Airport (HKIA) have been selected to investigate the performance of post-processing algorithms used to calculate the rainfall intensity (RI) from tipping-bucket rain gauges (TBRGs). We assumed a drop-counter catching-type gauge as a working reference and compared rainfall intensity measurements with two calibrated TBRGs operated at a time resolution of 1 min. The two TBRGs differ in their internal mechanics, one being a traditional single-layer dual-bucket assembly, while the other has two layers of buckets. The drop-counter gauge operates at a time resolution of 10 s, while the time of tipping is recorded for the two TBRGs. The post-processing algorithms employed for the two TBRGs are based on the assumption that the tip volume is uniformly distributed over the inter-tip period. A series of data of an ideal TBRG is reconstructed using the virtual time of tipping derived from the drop-counter data. From the comparison between the ideal gauge and the measurements from the two real TBRGs, the performances of different post-processing and correction algorithms are statistically evaluated over the set of recorded rain events. The improvement obtained by adopting the inter-tip time algorithm in the calculation of the RI is confirmed. However, by comparing the performance of the real and ideal TBRGs, the beneficial effect of the inter-tip algorithm is shown to be relevant for the mid-low range (6-50 mm
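The inter-tip assumption above, each tip's volume spread uniformly over the inter-tip period, can be sketched directly. The 0.2 mm tip size is an assumed example value, not a specification of the HKIA gauges:

```python
def intertip_intensities(tip_times, tip_volume_mm=0.2):
    """Rainfall intensity (mm/h) from tipping-bucket tip timestamps
    (seconds), assuming each tip's volume is uniformly distributed
    over the inter-tip period, as in the post-processing algorithm
    described above."""
    out = []
    for t0, t1 in zip(tip_times, tip_times[1:]):
        out.append(tip_volume_mm / (t1 - t0) * 3600.0)
    return out
```

Comparing such inter-tip estimates against a reconstructed ideal gauge (from the drop-counter reference) is the core of the statistical evaluation in the study.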
The use of algorithmic behavioural transfer functions in parametric EO system performance models
NASA Astrophysics Data System (ADS)
Hickman, Duncan L.; Smith, Moira I.
2015-10-01
The use of mathematical models to predict the overall performance of an electro-optic (EO) system is well-established as a methodology and is used widely to support requirements definition, system design, and performance predictions. Traditionally these models have been based upon cascades of transfer functions grounded in established physical theory, such as the calculation of signal levels from radiometry equations, as well as the use of statistical models. However, the performance of an EO system is increasingly dominated by the on-board processing of the image data, and this automated interpretation of image content is complex in nature and presents significant modelling challenges. Models and simulations of EO systems tend either to involve processing of image data as part of a performance simulation (image-flow) or to use a series of mathematical functions that attempt to define the overall system characteristics (parametric). The former approach is generally more accurate but statistically and theoretically weak in terms of specific operational scenarios, and is also time consuming. The latter approach is generally faster but is unable to provide accurate predictions of a system's performance under operational conditions. An alternative and novel architecture is presented in this paper which combines the processing speed attributes of parametric models with the accuracy of image-flow representations in a statistically valid framework. An additional dimension needed to create an effective simulation is a robust software design whose architecture reflects the structure of the EO system and its interfaces. As such, the design of the simulator can be viewed as a software prototype of a new EO system or an abstraction of an existing design. This new approach has been used successfully to model a number of complex military systems and has been shown to combine improved performance estimation with speed of computation. Within the paper details of the approach
Wang, C. L.
2016-05-17
On the basis of the FluoroBancroft linear-algebraic method [S.B. Andersson, Opt. Exp. 16, 18714 (2008)], three highly-resolved positioning methods were proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function (LRF), the non-linear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. The proposed algorithms give an average 0.03-0.08 pixel position error, much smaller than that (0.29 pixel) from a traditional maximum photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and an equivalent or better instrument resolution in powder diffraction than the MPA. Moreover, these characteristics will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis.
NASA Astrophysics Data System (ADS)
Wang, Xingwei; Song, XiaoFei; Chapman, Brian E.; Zheng, Bin
2012-03-01
We developed a new pulmonary vascular tree segmentation/extraction algorithm. The purpose of this study was to assess whether adding this new algorithm to our previously developed computer-aided detection (CAD) scheme of pulmonary embolism (PE) could improve the CAD performance (in particular reducing false positive detection rates). A dataset containing 12 CT examinations with 384 verified pulmonary embolism regions associated with 24 three-dimensional (3-D) PE lesions was selected in this study. Our new CAD scheme includes the following image processing and feature classification steps. (1) A 3-D based region growing process followed by a rolling-ball algorithm was utilized to segment lung areas. (2) The complete pulmonary vascular trees were extracted by combining two approaches: an intensity-based region growing to extract the larger vessels and a vessel enhancement filtering to extract the smaller vessel structures. (3) A toboggan algorithm was implemented to identify suspicious PE candidates in segmented lung or vessel areas. (4) A three-layer artificial neural network (ANN) with the topology 27-10-1 was developed to reduce false positive detections. (5) A k-nearest neighbor (KNN) classifier optimized by a genetic algorithm was used to compute detection scores for the PE candidates. (6) A grouping scoring method was designed to detect the final PE lesions in three dimensions. The study showed that integrating the pulmonary vascular tree extraction algorithm into the CAD scheme reduced false positive rates by 16.2%. For the case-based 3-D PE lesion detection results, the integrated CAD scheme achieved 62.5% detection sensitivity with 17.1 false-positive lesions per examination.
NASA Technical Reports Server (NTRS)
Mach, Douglas M.; Christian, Hugh J.; Blakeslee, Richard; Boccippio, Dennis J.; Goodman, Steve J.; Boeck, William
2006-01-01
We describe the clustering algorithm used by the Lightning Imaging Sensor (LIS) and the Optical Transient Detector (OTD) for combining the lightning pulse data into events, groups, flashes, and areas. Events are single pixels that exceed the LIS/OTD background level during a single frame (2 ms). Groups are clusters of events that occur within the same frame and in adjacent pixels. Flashes are clusters of groups that occur within 330 ms and either 5.5 km (for LIS) or 16.5 km (for OTD) of each other. Areas are clusters of flashes that occur within 16.5 km of each other. Many investigators are utilizing the LIS/OTD flash data; therefore, we test how variations in the algorithms for the event-group and group-flash clustering affect the flash count for a subset of the LIS data. We divided the subset into areas with low (1-3), medium (4-15), high (16-63), and very high (64+) flashes to see how changes in the clustering parameters affect the flash rates in these different sizes of areas. We found that as long as the cluster parameters are within about a factor of two of the current values, the flash counts do not change by more than about 20%. Therefore, the flash clustering algorithm used by the LIS and OTD sensors creates flash rates that are relatively insensitive to reasonable variations in the clustering algorithms.
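The event-to-group step (same frame, adjacent pixels) is a connected-components problem. A minimal union-find sketch, assuming 8-connected pixel adjacency, which the abstract does not specify:

```python
def cluster_events_into_groups(events):
    """Cluster lightning events, given as (frame, row, col) tuples,
    into groups: events in the same frame occupying adjacent pixels
    (8-connected here, an assumption) end up in one group."""
    parent = {e: e for e in events}

    def find(x):
        # Path-halving union-find root lookup.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    index = set(events)
    for (f, r, c) in events:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nb = (f, r + dr, c + dc)          # same-frame neighbor
                if nb != (f, r, c) and nb in index:
                    parent[find(nb)] = find((f, r, c))

    groups = {}
    for e in events:
        groups.setdefault(find(e), []).append(e)
    return list(groups.values())
```

The group-flash and flash-area steps would apply analogous clustering with the 330 ms and 5.5/16.5 km thresholds quoted above.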
ERIC Educational Resources Information Center
Strecht, Pedro; Cruz, Luís; Soares, Carlos; Mendes-Moreira, João; Abreu, Rui
2015-01-01
Predicting the success or failure of a student in a course or program is a problem that has recently been addressed using data mining techniques. In this paper we evaluate some of the most popular classification and regression algorithms on this problem. We address two problems: prediction of approval/failure and prediction of grade. The former is…
Cognitive Correlates of Performance in Algorithms in a Computer Science Course for High School
ERIC Educational Resources Information Center
Avancena, Aimee Theresa; Nishihara, Akinori
2014-01-01
Computer science for high school faces many challenging issues. One of these is whether the students possess the appropriate cognitive ability for learning the fundamentals of computer science. Online tests were created based on known cognitive factors and fundamental algorithms and were implemented among the second grade students in the…
Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process
NASA Astrophysics Data System (ADS)
Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh
2016-06-01
Layered manufacturing machines use the stereolithography (STL) file format to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, it results in geometrical distortion and chordal error. Parts manufactured with this file might not satisfy geometric dimensioning and tolerance requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The mesh subdivision algorithms considered in this work are the modified butterfly subdivision technique, the Loop subdivision technique and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is more suitable for the geometry under consideration. Only the wheel cap part is then manufactured on a Stratasys MOJO FDM machine. The surface roughness of the part is measured on a Talysurf surface roughness tester.
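Of the three schemes compared, triangular midpoint subdivision is the simplest: each triangle is split into four by inserting edge midpoints. A minimal sketch (vertex positions are merely interpolated; Loop and modified-butterfly schemes additionally reposition or weight vertices):

```python
def midpoint_subdivide(vertices, triangles):
    """One pass of triangular midpoint subdivision. vertices is a list
    of coordinate tuples; triangles is a list of (a, b, c) index
    triples. Each triangle becomes four, sharing edge midpoints."""
    verts = list(vertices)
    cache = {}  # edge (min, max) -> midpoint vertex index, so shared edges reuse midpoints

    def mid(a, b):
        key = (min(a, b), max(a, b))
        if key not in cache:
            verts.append(tuple((va + vb) / 2.0
                               for va, vb in zip(verts[a], verts[b])))
            cache[key] = len(verts) - 1
        return cache[key]

    out = []
    for a, b, c in triangles:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return verts, out
```

Because midpoint subdivision stays on the original facets, it refines the mesh without moving it closer to the true curved surface, which is the trade-off against Loop and butterfly schemes that the study's volume comparison probes.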
DeMaere, Matthew Z.
2016-01-01
Background Chromosome conformation capture, coupled with high throughput DNA sequencing in protocols like Hi-C and 3C-seq, has been proposed as a viable means of generating data to resolve the genomes of microorganisms living in naturally occurring environments. Metagenomic Hi-C and 3C-seq datasets have begun to emerge, but the feasibility of resolving genomes when closely related organisms (strain-level diversity) are present in the sample has not yet been systematically characterised. Methods We developed a computational simulation pipeline for metagenomic 3C and Hi-C sequencing to evaluate the accuracy of genomic reconstructions at, above, and below an operationally defined species boundary. We simulated datasets and measured accuracy over a wide range of parameters. Five clustering algorithms were evaluated (2 hard, 3 soft) using an adaptation of the extended B-cubed validation measure. Results When all genomes in a sample are below 95% sequence identity, all of the tested clustering algorithms performed well. When sequence data contain genomes above 95% identity (our operational definition of strain-level diversity), a naive soft-clustering extension of the Louvain method achieves the highest performance. Discussion Previously, only hard-clustering algorithms have been applied to metagenomic 3C and Hi-C data, yet none of these perform well when strain-level diversity exists in a metagenomic sample. Our simple extension of the Louvain method performed the best in these scenarios; however, accuracy remained well below the levels observed for samples without strain-level diversity. Strain resolution is also highly dependent on the amount of available 3C sequence data, suggesting that depth of sequencing must be carefully considered during experimental design. Finally, there appears to be great scope to improve the accuracy of strain resolution through further algorithm development. PMID:27843713
CUDA-based high-performance computing of the S-BPF algorithm with no-waiting pipelining
NASA Astrophysics Data System (ADS)
Deng, Lin; Yan, Bin; Chang, Qingmei; Han, Yu; Zhang, Xiang; Xi, Xiaoqi; Li, Lei
2015-10-01
The backprojection-filtration (BPF) algorithm has become a good solution for local reconstruction in cone-beam computed tomography (CBCT). However, the reconstruction speed of BPF is a severe limitation for clinical applications. The selective-backprojection filtration (S-BPF) algorithm improves the parallel performance of BPF through selective backprojection. Furthermore, the general-purpose graphics processing unit (GP-GPU) is a popular tool for accelerating the reconstruction, and much work has aimed at optimizing the cone-beam backprojection. As the cone-beam backprojection becomes faster, data transport takes a much larger share of the reconstruction time than before. This paper focuses on minimizing the total reconstruction time of the S-BPF algorithm by hiding the data transport among hard disk, CPU, and GPU. Based on an analysis of the S-BPF algorithm, three strategies are implemented: (1) asynchronous calls are used to overlap CPU and GPU execution, (2) an innovative strategy is applied to obtain the differentiated backprojection (DBP) image while effectively hiding the transport time, and (3) two streams for data transport and computation are synchronized by cudaEvent calls in the inverse finite Hilbert transform on the GPU. Our main contribution is a reconstruction of the S-BPF algorithm in which the GPU computes continuously with no data transport time cost: a 512³ volume is reconstructed in less than 0.7 s on a single Tesla K20 GPU from 182 projection views with 512² pixels per projection. The time cost of our implementation is about half that of the version without overlapping.
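The overlap idea, keeping the compute unit busy while the next chunk of data is in flight, can be illustrated with a CPU-side double-buffering sketch. Python threads stand in for CUDA streams here, and `load_chunk` and `backproject` are hypothetical stand-ins, so this is an analogy for the paper's scheme, not its CUDA code:

```python
import threading, queue
import numpy as np

def reconstruct_pipelined(load_chunk, backproject, n_chunks):
    """Overlap data loading with computation via a one-slot buffer.

    load_chunk(i) -> ndarray; backproject(chunk) -> partial volume.
    While chunk i is being backprojected, the loader thread is already
    fetching chunk i+1 -- the CPU analogue of running a transfer stream
    alongside a compute stream on the GPU.
    """
    buf = queue.Queue(maxsize=1)

    def loader():
        for i in range(n_chunks):
            buf.put(load_chunk(i))      # blocks while the slot is full

    threading.Thread(target=loader, daemon=True).start()
    volume = None
    for _ in range(n_chunks):
        chunk = buf.get()               # waits only if the loader lags
        part = backproject(chunk)
        volume = part if volume is None else volume + part
    return volume

# Toy run: three 4x4 "projection chunks", each doubled and accumulated
vol = reconstruct_pipelined(lambda i: np.full((4, 4), float(i)),
                            lambda c: c * 2.0, n_chunks=3)
```

When loading and computing take comparable time, the pipeline's total time approaches the maximum of the two rather than their sum, which is the effect the paper reports as roughly halving the reconstruction time.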
NASA Technical Reports Server (NTRS)
Lyster, Peter M.; Guo, J.; Clune, T.; Larson, J. W.; Atlas, Robert (Technical Monitor)
2001-01-01
The computational complexity of algorithms for Four Dimensional Data Assimilation (4DDA) at NASA's Data Assimilation Office (DAO) is discussed. In 4DDA, observations are assimilated with the output of a dynamical model to generate best estimates of the states of the system. It is thus a mapping problem, whereby scattered observations are converted into regular, accurate maps of wind, temperature, moisture, and other variables. The DAO is developing and using 4DDA algorithms that provide these datasets, or analyses, in support of Earth System Science research. Two large-scale algorithms are discussed. The first approach, the Goddard Earth Observing System Data Assimilation System (GEOS DAS), uses an atmospheric general circulation model (GCM) and an observation-space based analysis system, the Physical-space Statistical Analysis System (PSAS). GEOS DAS is very similar to global meteorological weather forecasting data assimilation systems, but is used at NASA for climate research. Systems of this size typically run at between 1 and 20 gigaflop/s. The second approach, the Kalman filter, uses a more consistent algorithm to determine the forecast error covariance matrix than does GEOS DAS. For atmospheric assimilation, the gridded dynamical fields typically have more than 10^6 variables, so the full error covariance matrix may be in excess of a teraword. For the Kalman filter this problem can easily scale to petaflop/s proportions. We discuss the computational complexity of GEOS DAS and our implementation of the Kalman filter. We also discuss and quantify some of the technical issues and limitations in developing efficient (in terms of wall clock time) and scalable parallel implementations of the algorithms.
Feng, Lu; Fedrigo, Enrico; Béchet, Clémentine; Brunner, Elisabeth; Pirani, Werther
2012-06-01
The European Southern Observatory (ESO) is studying the next generation giant telescope, called the European Extremely Large Telescope (E-ELT). With a 42 m diameter primary mirror, it is a significant step beyond currently existing telescopes. The E-ELT with its instruments therefore poses new challenges in terms of cost and computational complexity for the control system, including its adaptive optics (AO). Since the conventional matrix-vector multiplication (MVM) method successfully used so far for AO wavefront reconstruction cannot be efficiently scaled to the size of the AO systems on the E-ELT, faster algorithms are needed. Among the recently developed wavefront reconstruction algorithms, three are studied in this paper from the point of view of design, implementation, and absolute speed on three multicore multi-CPU platforms. We focus on a single-conjugate AO system for the E-ELT. The algorithms are the MVM, the Fourier transform reconstructor (FTR), and the fractal iterative method (FRiM). This study examines how these algorithms scale with an increasing number of CPUs involved in the computation. We discuss implementation strategies, depending on various CPU architecture constraints, and we present the first quantitative execution times at the E-ELT scale. MVM suffers from a large computational burden, making the current computing platform undersized to reach timings short enough for AO wavefront reconstruction. In our study, the FTR currently provides the fastest reconstruction. FRiM is a recently developed algorithm, and several strategies are investigated and presented here to implement it for real-time AO wavefront reconstruction and to optimize its execution time. The difficulty of parallelizing the algorithm on such architectures is highlighted. We also show that FRiM can provide interesting scalability using a sparse matrix approach.
Johnson, Robin R.; Popovic, Djordje P.; Olmstead, Richard E.; Stikic, Maja; Levendowski, Daniel J.; Berka, Chris
2011-01-01
A great deal of research over the last century has focused on drowsiness/alertness detection, as fatigue-related physical and cognitive impairments pose a serious risk to public health and safety. Available drowsiness/alertness detection solutions are unsatisfactory for a number of reasons: 1) lack of generalizability, 2) failure to address individual variability in generalized models, and/or 3) they lack a portable, un-tethered application. The current study aimed to address these issues, and determine if an individualized electroencephalography (EEG) based algorithm could be defined to track performance decrements associated with sleep loss, as this is the first step in developing a field deployable drowsiness/alertness detection system. The results indicated that an EEG-based algorithm, individualized using a series of brief "identification" tasks, was able to effectively track performance decrements associated with sleep deprivation. Future development will address the need for the algorithm to predict performance decrements due to sleep loss, and provide field applicability. PMID:21419826
NASA Astrophysics Data System (ADS)
Bercea, Gheorghe-Teodor; McRae, Andrew T. T.; Ham, David A.; Mitchell, Lawrence; Rathgeber, Florian; Nardi, Luigi; Luporini, Fabio; Kelly, Paul H. J.
2016-10-01
We present a generic algorithm for numbering and then efficiently iterating over the data values attached to an extruded mesh. An extruded mesh is formed by replicating an existing mesh, assumed to be unstructured, to form layers of prismatic cells. Applications of extruded meshes include, but are not limited to, the representation of three-dimensional high aspect ratio domains employed by geophysical finite element simulations. These meshes are structured in the extruded direction. The algorithm presented here exploits this structure to avoid the performance penalty traditionally associated with unstructured meshes. We evaluate the implementation of this algorithm in the Firedrake finite element system on a range of low compute intensity operations which constitute worst cases for data layout performance exploration. The experiments show that having structure along the extruded direction enables the cost of the indirect data accesses to be amortized after 10-20 layers as long as the underlying mesh is well ordered. We characterize the resulting spatial and temporal reuse in a representative set of both continuous-Galerkin and discontinuous-Galerkin discretizations. On meshes with realistic numbers of layers the performance achieved is between 70 and 90 % of a theoretical hardware-specific limit.
Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; ...
2015-07-14
Sparse matrix-vector multiply (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large scale computations. In this paper, our target systems are high end multi-core architectures, and we use a message passing interface + open multiprocessing (MPI+OpenMP) hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of the distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topology. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.
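The core trick of a symmetric SpMVM, storing only half the matrix and applying each entry twice, can be sketched as follows. This is an illustrative serial version in coordinate format, not the paper's distributed MPI+OpenMP code:

```python
import numpy as np

def sym_spmv(n, rows, cols, vals, x):
    """y = A @ x using only the stored upper triangle of symmetric A.

    rows/cols/vals list the nonzeros with rows[k] <= cols[k]. Each
    off-diagonal entry is applied twice -- to y[i] and, by symmetry,
    to y[j] -- so only about half the matrix is stored and moved.
    """
    y = np.zeros(n)
    for i, j, a in zip(rows, cols, vals):
        y[i] += a * x[j]
        if i != j:
            y[j] += a * x[i]            # the mirrored entry A[j, i]
    return y

# A = [[2, 1], [1, 3]] stored as its upper triangle only
y = sym_spmv(2, rows=[0, 0, 1], cols=[0, 1, 1],
             vals=[2.0, 1.0, 3.0], x=np.array([1.0, 1.0]))
```

In the distributed setting the same halving applies to the data that must cross the network, which is why exploiting symmetry pays off directly in the "CPU core hours" metric.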
NASA Astrophysics Data System (ADS)
Yin, Zhendong; Zong, Zhiyuan; Sun, Hongjian; Wu, Zhilu; Yang, Zhutian
2012-12-01
In this article, an efficient multiuser detector based on the artificial fish swarm algorithm (AFSA-MUD) is proposed and investigated for direct-sequence ultrawideband systems under two channels: the additive white Gaussian noise channel and the IEEE 802.15.3a multipath channel. Two issues from the literature remain open: the computational complexity of classical optimum multiuser detection (OMD) rises exponentially with the number of users, and the bit error rate (BER) performance of other sub-optimal multiuser detectors is not satisfactory. The proposed method strikes a good tradeoff between complexity and performance through the various behaviors of artificial fish in a simplified Euclidean solution space, which is constructed from the solutions of several sub-optimal multiuser detectors: the minimum mean square error detector, the decorrelating detector, and the successive interference cancellation detector. As a result of this novel scheme, the convergence of AFSA-MUD is greatly accelerated and the number of iterations is significantly reduced. The experimental results demonstrate that the BER performance and the near-far effect resistance of the proposed algorithm are quite close to those of OMD, while its computational complexity is much lower than that of traditional OMD. Moreover, as the number of active users increases, the BER performance of AFSA-MUD remains almost the same as that of OMD.
NASA Astrophysics Data System (ADS)
Cho, Hoonkyung; Chun, Joohwan; Song, Sungchan
2016-09-01
Tracking dim moving targets in infrared image sequences in the presence of high clutter and noise has recently been under intensive investigation. The track-before-detect (TBD) algorithm, which processes the image sequence over a number of frames before deciding on the target track and existence, is known to be especially attractive in very low SNR environments (⩽ 3 dB). In this paper, we present a three-dimensional (3-D) TBD with dynamic programming (TBD-DP) algorithm using multiple IR image sensors. Since the traditional two-dimensional TBD algorithm cannot track and detect targets moving along the viewing direction, we use 3-D TBD with multiple sensors and rigorously analyze the detection performance (false alarm and detection probabilities) based on the Fisher-Tippett-Gnedenko theorem. The 3-D TBD-DP algorithm, which does not require a separate image registration step, uses the pixel intensity values jointly read off from multiple image frames to compute the merit function required in the DP process. We therefore also establish the relationship between the pixel coordinates of the image frames and the reference coordinates.
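The merit-function accumulation at the heart of TBD with dynamic programming can be sketched in one dimension. This is illustrative only, with our own toy data; the paper's algorithm is 3-D, multi-sensor, and uses a statistically derived threshold:

```python
import numpy as np

def tbd_dp(frames, max_shift=1):
    """Accumulate a per-pixel merit over frames by dynamic programming.

    frames : (n_frames, n_pixels) intensities (a 1-D image per frame).
    Between frames a hypothesized target may move at most max_shift
    pixels, so each pixel inherits the best merit among its neighbors.
    A detection would be declared where the final merit exceeds a
    threshold chosen for the desired false alarm probability.
    """
    merit = frames[0].astype(float).copy()
    n = merit.size
    for frame in frames[1:]:
        prev, merit = merit, np.empty(n)
        for p in range(n):
            lo, hi = max(0, p - max_shift), min(n, p + max_shift + 1)
            merit[p] = frame[p] + prev[lo:hi].max()
    return merit

# A target of intensity 5 drifting right by one pixel per frame
frames = np.array([[0, 5, 0, 0],
                   [0, 0, 5, 0],
                   [0, 0, 0, 5]])
merit = tbd_dp(frames)
```

Because the merit integrates target energy along feasible tracks while noise adds incoherently, a target well below the single-frame detection threshold can still dominate after enough frames, which is what makes TBD attractive at very low SNR.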
NASA Astrophysics Data System (ADS)
Basu, S.; Ganguly, S.; Nemani, R. R.; Mukhopadhyay, S.; Milesi, C.; Votava, P.; Michaelis, A.; Zhang, G.; Cook, B. D.; Saatchi, S. S.; Boyda, E.
2014-12-01
Accurate tree cover delineation is a useful instrument in the derivation of Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree cover delineation in high to coarse resolution satellite imagery, but most of them do not scale to terabytes of data, typical in these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data as obtained from the National Agriculture Imagery Program (NAIP) for deriving tree cover estimates for the whole of the Continental United States, using a High Performance Computing Architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Fields (CRF), which helps capture the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by incorporating expert knowledge through the relabeling of misclassified image patches. This leads to a significant improvement in the true positive rates and a reduction in false positive rates. The tree cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 sq. miles. Our framework produced correct detection rates of around 85% for fragmented forests and 70% for urban tree cover areas, with false positive rates lower than 3% for both regions. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR high-resolution canopy height model show the effectiveness of our algorithm in generating accurate high-resolution tree cover maps.
Kececioglu, O Fatih; Gani, Ahmet; Sekkeli, Mustafa
2016-01-01
The main objective of this paper is to introduce a new approach for measuring and calculating fundamental power components in the presence of various distorted waveforms, including those containing harmonics. Active power, reactive power, apparent power, and power factor are measured and calculated using the Goertzel algorithm instead of the commonly used fast Fourier transform. The main advantage of the Goertzel algorithm is that it minimizes the computational load and the number of trigonometric evaluations. The parameters measured with the new technique are applied to a fixed capacitor-thyristor controlled reactor based static VAr compensation system to achieve accurate power factor correction for the first time. The study is validated both in simulation and experimentally.
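The Goertzel recurrence itself is compact. A minimal sketch for the power of a single DFT bin follows (our own variable names, not the paper's implementation); when only the fundamental is needed, as here, this is much cheaper than a full FFT:

```python
import math

def goertzel_power(samples, k, n):
    """|X[k]|^2 of the length-n DFT via the Goertzel recurrence.

    One real multiply-accumulate per sample and a single cosine
    evaluated up front -- the low computational load that makes
    Goertzel attractive for extracting the fundamental component
    used in P, Q, S and power-factor calculation.
    """
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples[:n]:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

# A pure tone in bin 5 concentrates its power there: |X[5]|^2 = (n/2)^2
n = 64
tone = [math.sin(2 * math.pi * 5 * i / n) for i in range(n)]
power = goertzel_power(tone, 5, n)
```

Running the recurrence once per frequency of interest (e.g. the fundamental of the voltage and current channels) yields the magnitudes and, with the complex variant, the phase needed for the power-factor computation.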
Device, Algorithm and Integrated Modeling Research for Performance-Driven Multi-Modal Optical Sensors
2012-12-17
relates to the area of hyperspectral imaging. Introduction, Problem Identification: a fundamental problem in ground target tracking using airborne EO/IR … A tracking algorithm was developed which combines spectral and polarimetric imagery to enhance target detection, followed by a novel approach … [Figure captions: example result showing improved detection with the combined method; adaptive track-based feature-aided tracking architecture.]
1983-10-01
Concurrency Control Algorithms. Computer Corporation of America: Wente K. Lin, Philip A. Bernstein, Nathan Goodman, and Jerry Nolte. Approved for public release. Computer Corporation of America, Four Cambridge Center, Cambridge, MA 02142. Reference: [4] Lin, W.K., "Concurrency Control in a Multiple Copy Distributed Database System," 4th Berkeley Workshop on …
NASA Technical Reports Server (NTRS)
Rogers, David
1991-01-01
G/SPLINES is a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's genetic algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINES algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least squares computations, and allows significantly larger problems to be considered.
NASA Astrophysics Data System (ADS)
Cua, G. B.; Fischer, M.; Heaton, T. H.; Wiemer, S.; Giardini, D.
2008-12-01
The Virtual Seismologist (VS) method is a regional network-based approach to earthquake early warning that estimates earthquake magnitude and location based on the available envelopes of ground motion amplitudes from the seismic network monitoring a given region, predefined prior information, and appropriate attenuation relationships. Bayes' theorem allows for the introduction of prior information (possibilities include network topology or station health status, regional hazard maps, earthquake forecasts, the Gutenberg-Richter magnitude-frequency relationship) into the source estimation process. Peak ground motion amplitudes (PGA and PGV) are then predicted throughout the region of interest using the estimated magnitude and location and the appropriate attenuation relationships. Implementation of the VS algorithm in California and Switzerland is funded by the Seismic Early Warning for Europe (SAFER) project. The VS algorithm is one of three early warning algorithms whose real-time performance on California datasets is being evaluated as part of the California Integrated Seismic Network (CISN) early warning effort funded by the United States Geological Survey (USGS). Real-time operation of the VS codes at the Southern California Seismic Network (SCSN) began in July 2008, and will be extended to Northern California in the following months. In Switzerland, the VS codes have been run on offline waveform data from over 125 earthquakes recorded by the Swiss Digital Seismic Network (SDSN) and the Swiss Strong Motion Network (SSMN). We discuss the performance of the VS codes on these datasets in terms of available warning time and accuracy of magnitude and location estimates.
NASA Astrophysics Data System (ADS)
Zhang, Yan; Uchida, Masato; Tsuru, Masato; Oie, Yuji
We present a TCP flow-level performance evaluation of error rate aware scheduling algorithms in Evolved UTRA and UTRAN networks. With the introduction of the error rate, which is the probability of transmission failure under a given wireless condition and instantaneous transmission rate, the transmission efficiency can be improved without sacrificing the balance between system performance and user fairness. The performance comparison with and without error rate awareness is carried out for various TCP traffic models, user channel conditions, schedulers with different fairness constraints, and automatic repeat request (ARQ) types. The results indicate that error rate awareness makes the resource allocation more reasonable and effectively improves both system and individual performance, especially for users with poor channel conditions.
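How an error rate can be folded into a proportional-fair-style scheduling metric can be sketched as follows. The field names and numbers are illustrative assumptions, not the paper's exact scheduler:

```python
def pick_user(users):
    """Index of the user maximizing expected throughput / average rate.

    users: dicts with 'rate' (instantaneous transmission rate),
    'err' (probability the transmission fails), and 'avg' (long-term
    average rate, the proportional-fair normalizer). Weighting by
    (1 - err) makes the metric error-rate aware: a high nominal rate
    on an unreliable channel loses its advantage.
    """
    return max(range(len(users)),
               key=lambda i: users[i]['rate'] * (1.0 - users[i]['err'])
                             / users[i]['avg'])

users = [{'rate': 10.0, 'err': 0.5, 'avg': 1.0},   # fast but unreliable
         {'rate': 6.0,  'err': 0.0, 'avg': 1.0}]   # slower, clean channel
chosen = pick_user(users)    # expected 6.0 beats expected 5.0
```

Without the `(1 - err)` factor the first user would always win the slot and then frequently fail, wasting radio resources; the error-aware metric steers those slots to users who will actually complete their transmissions.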
NASA Technical Reports Server (NTRS)
Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.
2007-01-01
This report summarizes the results of delay measurement and piloted performance tests that were conducted to assess the effectiveness of the adaptive compensator and the state space compensator for alleviating the phase distortion of transport delay in the visual system of the VMS at the NASA Langley Research Center. Piloted simulation tests were conducted to assess the effectiveness of the two novel compensators in comparison to the McFarland predictor and the baseline system with no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. The glideslope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper rating of the handling qualities were employed for the analyses. The overall analyses show that the adaptive predictor results in slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator. The analyses also show that the state space predictor is fairly superior for short delay and significantly superior for long delay compared to the McFarland compensator.
Algorithms Performance Investigation of a Generalized Spreader-Bar Detection System
Robinson, Sean M.; Ashbaker, Eric D.; Hensley, Walter K.; Schweppe, John E.; Sandness, Gerald A.; Erikson, Luke E.; Ely, James H.
2010-10-01
A “generic” gantry-crane-mounted spreader bar detector has been simulated in the Monte-Carlo radiation transport code MCNP [1]. This model is intended to represent the largest realistically feasible number of detector crystals in a single gantry-crane model intended to sit atop an InterModal Cargo Container (IMCC). Detectors were chosen from among large commonly-available sodium iodide (NaI) crystal scintillators and spaced as evenly as is thought possible with a detector apparatus attached to a gantry crane. Several scenarios were simulated with this model, based on a single IMCC being moved between a ship’s deck or cargo hold and the dock. During measurement, the gantry crane will carry that IMCC through the air and lower it onto a receiving vehicle (e.g. a chassis or a bomb cart). The case of an IMCC being moved through the air from an unknown radiological environment to the ground is somewhat complex; for this initial study a single location was picked at which to simulate background. An HEU source based on earlier validated models was used, and placed at varying depths in a wood cargo. Many statistical realizations of these scenarios are constructed from simulations of the component spectra, simulated to have high statistics. The resultant data are analyzed with several different algorithms. The simulated data were evaluated by each algorithm, with a threshold set to a statistical-only false alarm probability of 0.001 and the resultant Minimum Detectable Amounts were generated for each Cargo depth possible within the IMCC. Using GADRAS as an anomaly detector provided the greatest detection sensitivity, and it is expected that an algorithm similar to this will be of great use to the detection of highly shielded sources.
NASA Astrophysics Data System (ADS)
Lai, Xide; Chen, Xiaoming; Zhang, Xiang; Lei, Mingchuan
2016-11-01
This paper presents an approach to automatic hydraulic optimization of a hydraulic machine's blade system that combines a blade geometric modeller and parametric generator with an automatic CFD solution procedure and a multi-objective genetic algorithm. In order to evaluate many design options and quickly estimate the blade system's hydraulic performance, an approximate model that can substitute for the original model inside the optimization loop is employed in the hydraulic optimization of the blade, using function approximation. As the approximate model is constructed from database samples containing a set of blade geometries and their resulting hydraulic performances, it can faithfully imitate the real blade's performance as predicted by the original model. As hydraulic machine designers are accustomed to designing with 2D blade profiles on stream surfaces that are then stacked into a 3D blade geometric model in the form of NURBS surfaces, the geometric variables to be optimized were defined by a series of profiles on stream surfaces. The approach depends on the cooperation between a genetic algorithm, a database, and user defined objective functions and constraints, which comprise hydraulic performances and structural and geometric constraint functions. An example covering the optimization design of a mixed-flow pump impeller is presented.
Parallel SOR Iterative Algorithms and Performance Evaluation on a Linux Cluster
2005-06-01
Red-black two-color SOR implementation. Two other iterative methods, the Jacobi method and the Gauss-Seidel (G-S) method, are considered … The optimal value of ω lies in (0, 2); the choice ω = 1 corresponds to the Gauss-Seidel iteration. Red-black SOR … In this paper, a parallel algorithm for the red-black SOR method with domain decomposition, a multi-color variant, is presented for matrix or grid structures.
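A minimal red-black SOR sweep for the 2-D Poisson problem shows why the two half-sweeps parallelize: every red point depends only on black neighbors and vice versa. This is illustrative Python under our own discretization conventions, not the report's cluster implementation:

```python
import numpy as np

def red_black_sor(b, omega=1.5, iters=200):
    """Red-black SOR for the 2-D Poisson equation on a uniform grid.

    b holds h^2 * f (right-hand side scaled by the grid spacing
    squared); boundary values of u stay zero. Grid points are colored
    like a checkerboard, so each half-sweep could update all points
    of one color simultaneously -- the property a domain-decomposed
    parallel implementation exploits. omega = 1 reduces to the
    Gauss-Seidel iteration.
    """
    u = np.zeros_like(b, dtype=float)
    n, m = b.shape
    for _ in range(iters):
        for color in (0, 1):                   # red sweep, then black
            for i in range(1, n - 1):
                for j in range(1, m - 1):
                    if (i + j) % 2 != color:
                        continue
                    gs = 0.25 * (u[i-1, j] + u[i+1, j]
                                 + u[i, j-1] + u[i, j+1] - b[i, j])
                    u[i, j] += omega * (gs - u[i, j])
    return u

# Point source in the middle of a 5x5 grid, homogeneous boundary
b = np.zeros((5, 5)); b[2, 2] = -1.0
u = red_black_sor(b)
```

Plain lexicographic SOR updates each point using already-updated neighbors, which serializes the sweep; the two-color ordering recovers Gauss-Seidel-like convergence while leaving each half-sweep embarrassingly parallel.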
Lu, Bin; Yan, Hong-Bing; Mu, Chao-Wei; Gao, Yang; Hou, Zhi-Hui; Wang, Zhi-Qiang; Liu, Kun; Parinella, Ashley H.; Leipsic, Jonathon A.
2015-01-01
Objective To investigate the effect of a novel motion-correction algorithm (SnapShot Freeze, SSF) on image quality and diagnostic accuracy in patients undergoing prospectively ECG-triggered coronary CT angiography (CCTA) without administering rate-lowering medications. Materials and Methods Forty-six consecutive patients suspected of CAD prospectively underwent CCTA using prospective ECG-triggering without rate control and invasive coronary angiography (ICA). Image quality, interpretability, and diagnostic performance of SSF were compared with conventional multisegment reconstruction without SSF, using ICA as the reference standard. Results All subjects (35 men, 57.6 ± 8.9 years) successfully underwent ICA and CCTA. Mean heart rate was 68.8 ± 8.4 beats/min (range: 50–88 beats/min) without rate-controlling medications during CT scanning. Overall median image quality score (graded 1–4) was significantly increased from 3.0 to 4.0 by the new algorithm in comparison to conventional reconstruction. Overall interpretability was significantly improved, with a significant reduction in the number of non-diagnostic segments (690 of 694, 99.4% vs 659 of 694, 94.9%; P<0.001). However, only the right coronary artery (RCA) showed a statistically significant difference (45 of 46, 97.8% vs 35 of 46, 76.1%; P = 0.004) on a per-vessel basis in this regard. Diagnostic accuracy for detecting ≥50% stenosis was improved using the motion-correction algorithm on per-vessel [96.2% (177/184) vs 87.0% (160/184); P = 0.002] and per-segment [96.1% (667/694) vs 86.6% (601/694); P <0.001] levels, but there was not a statistically significant improvement on a per-patient level [97.8 (45/46) vs 89.1 (41/46); P = 0.203]. By artery analysis, diagnostic accuracy was improved only for the RCA [97.8% (45/46) vs 78.3% (36/46); P = 0.007]. Conclusion The intracycle motion correction algorithm significantly improved image quality and diagnostic interpretability in patients undergoing CCTA with prospective ECG triggering and
Zeng, Xianghua; Xu, Cheng; He, Dengming; Li, Maoshi; Zhang, Huiyan; Wu, Quanxin; Xiang, Dedong; Wang, Yuming
2015-01-01
Aim To compare the performance of several simple, noninvasive models comprising various serum markers in diagnosing significant liver fibrosis in the same sample of patients with chronic hepatitis B (CHB) with the same judgment standard. Methods A total of 308 patients with CHB who had undergone liver biopsy, laboratory tests, and liver stiffness measurement (LSM) at the Southwest Hospital, Chongqing, China between March 2010 and April 2014 were retrospectively studied. Receiver operating characteristic (ROC) curves and area under ROC curves (AUROCs) were used to analyze the results of the models, which incorporated age-platelet (PLT) index (API model), aspartate transaminase (AST) to alanine aminotransferase (ALT) ratio (AAR model), AST to PLT ratio index (APRI model), γ-glutamyl transpeptidase (GGT) to PLT ratio index (GPRI model), GGT-PLT-albumin index (S index model), age-AST-PLT-ALT index (FIB-4 model), and age-AST-PLT-ALT-international normalized ratio index (Fibro-Q model). Results The AUROCs of the S index, GPRI, FIB-4, APRI, API, Fibro-Q, AAR, and LSM for predicting significant liver fibrosis were 0.726 (P < 0.001), 0.726 (P < 0.001), 0.621 (P = 0.001), 0.619 (P = 0.001), 0.580 (P = 0.033), 0.569 (P = 0.066), 0.495 (P = 0.886), and 0.757 (P < 0.001), respectively. The S index and GPRI had the highest correlation with histopathological scores (r = 0.373, P < 0.001; r = 0.372, P < 0.001, respectively) and LSM values (r = 0.516, P < 0.001; r = 0.513, P < 0.001, respectively). When LSM was combined with S index and GPRI, the AUROCs were 0.753 (P < 0.001) and 0.746 (P < 0.001), respectively. Conclusion S index and GPRI had the best diagnostic performance for significant liver fibrosis and were robust predictors of significant liver fibrosis in patients with CHB for whom transient elastography was unavailable. PMID:26088852
Architecture-Aware Algorithms for Scalable Performance and Resilience on Heterogeneous Architectures
Dongarra, Jack
2013-03-14
There is a widening gap between the peak performance of high performance computers and the performance realized by full applications. Over the next decade, extreme-scale systems will present major new challenges to software development that could widen the gap so much that it prevents the productive use of future DOE Leadership computers.
Khan, Mohammad Ibrahim; Kamal, Md Sarwar
2015-03-01
Markov chains are very effective for prediction, particularly over long data sets. In DNA sequencing, it is always important to determine the presence of certain nucleotides based on the previous history of the data set. We employed the Chapman-Kolmogorov equation to accomplish the Markov-chain task. The Chapman-Kolmogorov equation is the key to addressing the proper places in the DNA chain, and it is a very powerful tool in mathematics as well as in any other prediction-based research. It incorporates the scores of DNA sequences calculated by various techniques. Our research utilizes the fundamentals of the Warshall Algorithm (WA) and Dynamic Programming (DP) to measure the scores of DNA segments. The outcome of the experiment is that the Warshall Algorithm is well suited to short DNA sequences, whereas Dynamic Programming is better suited to long DNA sequences. Beyond these findings, it is very important to measure the risk factors of local sequencing during the matching of local sequence alignments, whatever the length.
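The Chapman-Kolmogorov equation underlying this approach states that n-step transition probabilities compose by summing over intermediate states, so the n-step transition matrix is the n-th power of the one-step matrix. A minimal sketch over nucleotide states (the transition probabilities below are hypothetical, not taken from the paper):

```python
import numpy as np

# Hypothetical one-step transition matrix over nucleotides A, C, G, T
# (rows: current base, columns: next base); values are illustrative only.
STATES = ["A", "C", "G", "T"]
P = np.array([
    [0.4, 0.2, 0.3, 0.1],
    [0.1, 0.5, 0.2, 0.2],
    [0.3, 0.2, 0.4, 0.1],
    [0.2, 0.2, 0.2, 0.4],
])

def n_step_transition(P, n):
    """Chapman-Kolmogorov: P^(n)_ij = sum_k P^(m)_ik * P^(n-m)_kj for any
    0 < m < n, i.e. the n-step matrix is the n-th matrix power of P."""
    return np.linalg.matrix_power(P, n)

# Probability of observing G two positions after an A:
P2 = n_step_transition(P, 2)
prob_A_to_G_in_2 = P2[STATES.index("A"), STATES.index("G")]
```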
Zhang, Yue; Yan, Baiqian; Ou-Yang, Jun; Zhu, Benpeng; Chen, Shi; Yang, Xiaofei; Wang, Xianghao
2016-01-28
Through principles of spin-valve giant magnetoresistance (SV-GMR) effect and its application in magnetic sensors, we have investigated electric-field control of the output performance of a bridge-structured Co/Cu/NiFe/IrMn SV-GMR sensor on a PZN-PT piezoelectric substrate using the micro-magnetic simulation. We centered on the influence of the variation of uniaxial magnetic anisotropy constant (K) of Co on the output of the bridge, and K was manipulated via the stress of Co, which is generated from the strain of a piezoelectric substrate under an electric field. The results indicate that when K varies between 2 × 10⁴ J/m³ and 10 × 10⁴ J/m³, the output performance can be significantly manipulated: The linear range alters from between −330 Oe and 330 Oe to between −650 Oe and 650 Oe, and the sensitivity is tuned by almost 7 times, making it possible to measure magnetic fields with very different ranges. According to the converse piezoelectric effect, we have found that this variation of K can be realized by applying an electric field with the magnitude of about 2–20 kV/cm on a PZN-PT piezoelectric substrate, which is realistic in application. This result means that electric control of the SV-GMR effect has potential application in developing SV-GMR sensors with improved performance.
Cox, Marsha E.; DiNello, Robert K.; Geisberg, Mark; Abbott, April; Roberts, Pacita L.; Hooton, Thomas M.
2015-01-01
Urinary tract infections (UTIs) are frequently encountered in clinical practice and most commonly caused by Escherichia coli and other Gram-negative uropathogens. We tested RapidBac, a rapid immunoassay for bacteriuria developed by Silver Lake Research Corporation (SLRC), compared with standard bacterial culture using 966 clean-catch urine specimens submitted to a clinical microbiology laboratory in an urban academic medical center. RapidBac was performed in accordance with instructions, providing a positive or negative result in 20 min. RapidBac identified as positive 245/285 (sensitivity 86%) samples with significant bacteriuria, defined as the presence of a Gram-negative uropathogen or Staphylococcus saprophyticus at ≥10³ CFU/ml. The sensitivities for Gram-negative bacteriuria at ≥10⁴ CFU/ml and ≥10⁵ CFU/ml were 96% and 99%, respectively. The specificity of the test, detecting the absence of significant bacteriuria, was 94%. The sensitivity and specificity of RapidBac were similar on samples from inpatient and outpatient settings, from male and female patients, and across age groups from 18 to 89 years old, although specificity was higher in men (100%) compared with that in women (92%). The RapidBac test for bacteriuria may be effective as an aid in the point-of-care diagnosis of UTIs especially in emergency and primary care settings. PMID:26063858
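The reported sensitivity and specificity follow from the standard confusion-matrix definitions; a minimal sketch using the counts quoted in the abstract:

```python
def sensitivity(true_pos, false_neg):
    """Fraction of truly positive samples the test flags as positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of truly negative samples the test flags as negative."""
    return true_neg / (true_neg + false_pos)

# From the abstract: 245 of 285 samples with significant bacteriuria
# were detected, giving a sensitivity of about 0.86.
sens = sensitivity(245, 285 - 245)
```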
NASA Astrophysics Data System (ADS)
Afik, Eldad
2015-09-01
Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, namely, fluid dynamics in microfluidic devices, bacteria taxis, cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when near to each other. It is capable of real time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% with a false-detection rate of only 1%.
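The circle Hough transform this algorithm builds on can be sketched as follows: each edge point votes for every candidate ring centre lying one radius away, and true centres accumulate vote peaks. This is a minimal single-radius version with synthetic data, not the author's optimized implementation:

```python
import numpy as np

def hough_circle(points, radius, shape, n_angles=64):
    """Minimal circle Hough transform for a single known radius: each edge
    point casts votes at all centres located `radius` away from it."""
    acc = np.zeros(shape, dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for (x, y) in points:
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        ok = (cx >= 0) & (cx < shape[0]) & (cy >= 0) & (cy < shape[1])
        np.add.at(acc, (cx[ok], cy[ok]), 1)  # unbuffered vote accumulation
    return acc

# Synthetic diffraction ring centred at (20, 30) with radius 10:
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
ring = list(zip(20 + 10 * np.cos(t), 30 + 10 * np.sin(t)))
acc = hough_circle(ring, radius=10, shape=(64, 64))
center = np.unravel_index(np.argmax(acc), acc.shape)
```

The peak of the accumulator recovers the ring centre; the paper's contribution lies in making this robust and fast for many overlapping rings.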
NASA Astrophysics Data System (ADS)
Brossier, R.
2011-04-01
Full waveform inversion (FWI) is an appealing seismic data-fitting procedure for the derivation of high-resolution quantitative models of the subsurface at various scales. Full modelling and inversion of visco-elastic waves from multiple seismic sources allow for the recovering of different physical parameters, although they remain computationally challenging tasks. An efficient massively parallel, frequency-domain FWI algorithm is implemented here on large-scale distributed-memory platforms for imaging two-dimensional visco-elastic media. The resolution of the elastodynamic equations, as the forward problem of the inversion, is performed in the frequency domain on unstructured triangular meshes, using a low-order finite element discontinuous Galerkin method. The linear system resulting from discretization of the forward problem is solved with a parallel direct solver. The inverse problem, which is presented as a non-linear local optimization problem, is solved in parallel with a quasi-Newton method, and this allows for reliable estimation of multiple classes of visco-elastic parameters. Two levels of parallelism are implemented in the algorithm, based on message passing interfaces and multi-threading, for optimal use of computational time and the core-memory resources available on modern distributed-memory multi-core computational platforms. The algorithm allows for imaging of realistic targets at various scales, ranging from near-surface geotechnic applications to crustal-scale exploration.
Kumaravel, Rasadurai; Narayanaswamy, Kumaratharan
2015-01-01
Multi-carrier code division multiple access (MC-CDMA) is a promising multi-carrier modulation (MCM) technique for high-data-rate wireless communication over frequency-selective fading channels. MC-CDMA combines code division multiple access (CDMA) and orthogonal frequency division multiplexing (OFDM): the OFDM part reduces multipath fading and inter-symbol interference (ISI), and the CDMA part increases spectrum utilization. Advantages of this technique are its robustness under multipath propagation and improved security with minimized ISI. Nevertheless, due to the loss of orthogonality at the receiver in a mobile environment, multiple access interference (MAI) appears. MAI is one of the factors that degrade the bit error rate (BER) performance of MC-CDMA systems. Multiuser detection (MUD) and turbo coding are the two dominant techniques for enhancing the BER performance of MC-CDMA systems and overcoming the effects of MAI. In this paper, a low-complexity iterative soft sensitive-bits algorithm (SBA) aided logarithmic maximum a posteriori (Log-MAP) turbo MUD is proposed. Simulation results show that the proposed method provides better BER performance with low-complexity decoding by mitigating the detrimental effects of MAI.
Performance analysis of large-scale applications based on wavefront algorithms
Hoisie, A.; Lubeck, O.; Wasserman, H.
1998-12-31
The authors introduced a performance model for parallel, multidimensional, wavefront calculations with machine performance characterized using the LogGP framework. The model accounts for overlap in the communication and computation components. The agreement with experimental data is very good under a variety of model sizes, data partitionings, blocking strategies, and on three different parallel architectures. Using the model, the authors analyzed performance of a deterministic transport code on a hypothetical 100 Tflops future parallel system of interest to ASCI.
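The pipelined-wavefront idea behind such models can be sketched with a simplified fill-plus-steady-state time estimate. This is a toy stand-in under stated assumptions, not the paper's LogGP-based model, which additionally accounts for overlap of communication and computation:

```python
def wavefront_time(px, py, n_blocks, t_block, t_msg):
    """Simplified 2D wavefront sweep estimate: the farthest processor on a
    px-by-py grid starts after a pipeline fill of (px + py - 2) stages, then
    processes n_blocks blocks; each stage is assumed to cost one block of
    compute (t_block) plus one message exchange (t_msg)."""
    stages = (px + py - 2) + n_blocks
    return stages * (t_block + t_msg)

# A single processor degenerates to pure compute time:
serial = wavefront_time(1, 1, n_blocks=10, t_block=1.0, t_msg=0.0)
```

Even this crude form exposes the model's key tradeoff: smaller blocks shorten the pipeline-fill term but pay the per-message cost more often.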
Performance optimization of EDFA-Raman hybrid optical amplifier using genetic algorithm
NASA Astrophysics Data System (ADS)
Singh, Simranjit; Kaler, R. S.
2015-05-01
For the first time, a novel net-gain analytical model of an EDFA-Raman hybrid optical amplifier (HOA) is designed, and its various parameters are optimized using a genetic algorithm. Our method has proven robust in the simultaneous analysis of multiple parameters, such as Raman length, EDFA length, and their pump powers, to obtain the highest possible gain. The optimized HOA is further investigated and characterized at the system level in the scenario of a 100 × 10 Gbps dense wavelength division multiplexed (DWDM) system with 25 GHz channel spacing. With the optimized HOA, a flat gain of >18 dB is obtained over the frequency region 187 to 189.5 THz with a gain variation of less than 1.35 dB, without using any gain-flattening technique. The obtained noise figure is also the lowest value (<2 dB/channel) ever reported for the proposed hybrid optical amplifier at reduced channel spacing with an acceptable bit error rate.
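A genetic algorithm of the kind used here can be sketched as selection, crossover, and mutation over the amplifier parameter vector. The gain function below is a smooth toy stand-in (the paper's net-gain analytical model is not reproduced), and the parameter bounds are hypothetical:

```python
import random

random.seed(0)
# Hypothetical bounds: Raman fibre length (km), EDFA length (m), pump power (mW)
BOUNDS = [(0.1, 10.0), (1.0, 20.0), (50.0, 500.0)]

def gain(ind):
    """Toy stand-in for the net-gain model: a smooth surface with a single
    interior optimum at (5, 12, 300)."""
    r, e, p = ind
    return -((r - 5) ** 2) - 0.1 * (e - 12) ** 2 - 0.0005 * (p - 300) ** 2

def clip(v, lo, hi):
    return max(lo, min(hi, v))

def evolve(pop_size=30, generations=60, mut_scale=0.1):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=gain, reverse=True)
        parents = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]      # arithmetic crossover
            child = [clip(g + random.gauss(0, mut_scale * (hi - lo)), lo, hi)
                     for g, (lo, hi) in zip(child, BOUNDS)]  # Gaussian mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=gain)

best = evolve()
```

The same loop applies unchanged once `gain` is replaced by the actual net-gain model evaluated over the HOA parameters.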
NASA Technical Reports Server (NTRS)
Skofronick-Jackson, Gail; Munchak, Stephen J.; Ringerud, Sarah
2016-01-01
Retrievals of falling snow from space represent an important data set for understanding the Earth's atmospheric, hydrological, and energy cycles, especially during climate change. Estimates of falling snow must be captured to obtain the true global precipitation water cycle, snowfall accumulations are required for hydrological studies, and without knowledge of the frozen particles in clouds one cannot adequately understand the energy and radiation budgets. While satellite-based remote sensing provides global coverage of falling snow events, the science is relatively new and retrievals are still undergoing development, with challenges remaining. This work reports on the development and testing of retrieval algorithms for the Global Precipitation Measurement (GPM) mission Core Satellite, launched in February 2014.
2011-01-01
Background Verbal autopsies provide valuable information for studying mortality patterns in populations that lack reliable vital registration data. Methods for transforming verbal autopsy results into meaningful information for health workers and policymakers, however, are often costly or complicated to use. We present a simple additive algorithm, the Tariff Method (termed Tariff), which can be used for assigning individual cause of death and for determining cause-specific mortality fractions (CSMFs) from verbal autopsy data. Methods Tariff calculates a score, or "tariff," for each cause, for each sign/symptom, across a pool of validated verbal autopsy data. The tariffs are summed for a given response pattern in a verbal autopsy, and this sum (score) provides the basis for predicting the cause of death in a dataset. We implemented this algorithm and evaluated the method's predictive ability, both in terms of chance-corrected concordance at the individual cause assignment level and in terms of CSMF accuracy at the population level. The analysis was conducted separately for adult, child, and neonatal verbal autopsies across 500 pairs of train-test validation verbal autopsy data. Results Tariff is capable of outperforming physician-certified verbal autopsy in most cases. In terms of chance-corrected concordance, the method achieves 44.5% in adults, 39% in children, and 23.9% in neonates. CSMF accuracy was 0.745 in adults, 0.709 in children, and 0.679 in neonates. Conclusions Verbal autopsies can be an efficient means of obtaining cause of death data, and Tariff provides an intuitive, reliable method for generating individual cause assignment and CSMFs. The method is transparent and flexible and can be readily implemented by users without training in statistics or computer science. PMID:21816107
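The additive scoring at the heart of Tariff can be sketched in a few lines: sum the per-cause tariffs of the endorsed signs/symptoms and assign the highest-scoring cause. The tariff values below are invented for illustration; real tariffs are learned from validated verbal-autopsy data:

```python
# Hypothetical tariffs (per-cause scores for each sign/symptom),
# for illustration only.
TARIFFS = {
    "stroke":    {"paralysis": 8.0, "headache": 2.0, "fever": -1.0},
    "pneumonia": {"cough": 6.0, "fever": 4.0, "paralysis": -2.0},
}

def tariff_score(cause, symptoms):
    """Sum the tariffs of the endorsed symptoms for one candidate cause."""
    return sum(TARIFFS[cause].get(s, 0.0) for s in symptoms)

def predict_cause(symptoms):
    """Assign the cause whose summed tariff score is highest."""
    return max(TARIFFS, key=lambda c: tariff_score(c, symptoms))

print(predict_cause({"cough", "fever"}))  # pneumonia
```

Because prediction is just a sum and an argmax, the method can be applied with a lookup table and a hand calculator, which is what makes it practical for users without statistical training.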
Maiti, Abhik; Chakravarty, Debashish
2016-01-01
3D reconstruction of geo-objects from their digital images is a time-efficient and convenient way of studying the structural features of the object being modelled. This paper presents a 3D reconstruction methodology which can be used to generate photo-realistic 3D watertight surfaces of different irregularly shaped objects from digital image sequences of the objects. The 3D reconstruction approach described here is robust and simple, and can be readily used to reconstruct the watertight 3D surface of any object from its digital image sequence. Here, digital images of different objects are used to build sparse, followed by dense, 3D point clouds of the objects. These image-derived point clouds are then used to generate photo-realistic 3D surfaces, using different surface reconstruction algorithms such as Poisson reconstruction and the ball-pivoting algorithm. Different control parameters of these algorithms are identified which affect the quality and computation time of the reconstructed 3D surface. The effects of these control parameters on the generation of 3D surfaces from point clouds of different densities are studied. It is shown that the reconstructed surface quality of Poisson reconstruction depends significantly on samples per node (SN), with greater SN values resulting in better-quality surfaces. Likewise, the quality of the 3D surface generated using the ball-pivoting algorithm is found to be highly dependent upon the clustering radius and angle threshold values. The results obtained from this study give the reader valuable insight into the effects of the different control parameters on the reconstructed surface quality.
Li, Hua; Dolly, Steven; Chen, Hsin-Chen; Anastasio, Mark A; Low, Daniel A; Li, Harold H; Michalski, Jeff M; Thorstad, Wade L; Gay, Hiram; Mutic, Sasa
2016-07-01
CT image reconstruction is typically evaluated based on the ability to reduce the radiation dose to as-low-as-reasonably-achievable (ALARA) while maintaining acceptable image quality. However, the determination of common image quality metrics, such as noise, contrast, and contrast-to-noise ratio, is often insufficient for describing clinical radiotherapy task performance. In this study we designed and implemented a new comparative analysis method associating image quality, radiation dose, and patient size with radiotherapy task performance, with the purpose of guiding the clinical radiotherapy usage of CT reconstruction algorithms. The iDose4 iterative reconstruction algorithm was selected as the target for comparison, wherein filtered back-projection (FBP) reconstruction was regarded as the baseline. Both phantom and patient images were analyzed. A layer-adjustable anthropomorphic pelvis phantom capable of mimicking 38-58 cm lateral diameter-sized patients was imaged and reconstructed by the FBP and iDose4 algorithms with varying noise-reduction levels, respectively. The resulting image sets were quantitatively assessed by two image quality indices, noise and contrast-to-noise ratio, and two clinical task-based indices, target CT Hounsfield number (for electron density determination) and structure contouring accuracy (for dose-volume calculations). Additionally, CT images of 34 patients reconstructed with iDose4 at six noise reduction levels were qualitatively evaluated by two radiation oncologists using a five-point scoring mechanism. For the phantom experiments, iDose4 achieved noise reduction up to 66.1% and CNR improvement up to 53.2% compared to FBP, without considering the changes of spatial resolution among images or the clinical acceptance of reconstructed images. Such improvements consistently appeared across different iDose4 noise reduction levels, exhibiting limited interlevel noise (< 5 HU) and target CT number variations (< 1 HU). The radiation
NASA Astrophysics Data System (ADS)
Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.
2014-12-01
As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish if an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem abstracts a fictional town on a lake which hopes to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of potentially crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using 6 algorithms: Borg, MOEAD, eMOEA, eNSGAII, GDE3, and NSGAII to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate related management applications where there is the potential for crossing irreversible, nonlinear thresholds. We show that many modern MOEAs can fail on this test problem, indicating its suitability as a
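The nonlinear threshold dynamics referred to can be sketched with one standard form of the lake phosphorus model (Carpenter et al.). Parameter values and the stochastic inflow below are illustrative assumptions, not the study's exact formulation:

```python
import numpy as np

def simulate_lake(loadings, b=0.42, q=2.0, x0=0.0, rng=None):
    """One standard form of the lake phosphorus dynamics:
    X_{t+1} = X_t + a_t + inflow_t + X_t^q / (1 + X_t^q) - b * X_t,
    where a_t is the town's chosen loading, inflow_t is stochastic
    non-point-source pollution, b is the natural removal rate, and the
    sigmoid term models nonlinear recycling from sediments. Parameter
    values here are illustrative."""
    rng = rng or np.random.default_rng(0)
    x = x0
    traj = []
    for a in loadings:
        inflow = rng.lognormal(mean=np.log(0.02), sigma=0.1)
        x = x + a + inflow + x ** q / (1 + x ** q) - b * x
        traj.append(x)
    return traj

# Low, constant loading keeps the lake in its low-phosphorus basin:
traj = simulate_lake([0.01] * 50)
```

The difficulty for optimizers is that the recycling term creates a second, high-phosphorus stable equilibrium; loadings that push the state past the tipping point can be effectively irreversible.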
NASA Astrophysics Data System (ADS)
Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Jeronimo, Jose; Thoma, George R.
2007-03-01
Cervicography is a technique for visual screening of uterine cervix images for cervical cancer. One of our research goals is the automated detection in these images of acetowhite (AW) lesions, which are sometimes correlated with cervical cancer. These lesions are characterized by the whitening of regions along the squamocolumnar junction on the cervix when treated with 5% acetic acid. Image preprocessing is required prior to invoking AW detection algorithms on cervicographic images for two reasons: (1) to remove Specular Reflections (SR) caused by camera flash, and (2) to isolate the cervix region-of-interest (ROI) from image regions that are irrelevant to the analysis. These image regions may contain medical instruments, film markup, or other non-cervix anatomy or regions, such as vaginal walls. We have qualitatively and quantitatively evaluated the performance of alternative preprocessing algorithms on a test set of 120 images. For cervix ROI detection, all approaches use a common feature set, but with varying combinations of feature weights, normalization, and clustering methods. For SR detection, while one approach uses a Gaussian Mixture Model on an intensity/saturation feature set, a second approach uses Otsu thresholding on a top-hat transformed input image. Empirical results are analyzed to derive conclusions on the performance of each approach.
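The Otsu thresholding step mentioned for SR detection chooses the intensity threshold that maximizes the between-class variance of the two resulting pixel classes. A generic sketch (not the authors' full pipeline, which first applies a top-hat transform to the input image):

```python
import numpy as np

def otsu_threshold(values, n_bins=256):
    """Otsu's method: scan candidate thresholds and keep the one that
    maximizes between-class variance w0 * w1 * (mu0 - mu1)^2."""
    hist, edges = np.histogram(values, bins=n_bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    mu_total = (hist * centers).sum()
    best_t, best_var = centers[0], -1.0
    w0 = 0.0        # cumulative weight of the lower class
    mu0_sum = 0.0   # cumulative (weight * intensity) of the lower class
    for i in range(n_bins - 1):
        w0 += hist[i]
        mu0_sum += hist[i] * centers[i]
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = mu0_sum / w0
        mu1 = (mu_total - mu0_sum) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

# Bimodal toy "image": dark tissue plus a small bright specular population.
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(60, 10, 5000), rng.normal(220, 10, 500)])
t = otsu_threshold(pixels)
```

Pixels above the returned threshold would be flagged as candidate specular reflections.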
NASA Astrophysics Data System (ADS)
Yu, Xiaonan; Tong, Shoufeng; Dong, Yan; Song, Yansong; Hao, Shicong; Lu, Jing
2016-06-01
An avalanche photodiode (APD) receiver for intersatellite laser communication links is proposed and its performance is experimentally demonstrated. In the proposed system, a series of analog circuits are used not only to adjust the temperature and control the bias voltage but also to monitor the current and recover the clock from the communication data. In addition, the temperature compensation and multiplication gain control algorithm are embedded in the microcontroller to improve the performance of the receiver. As shown in the experiment, with the change of communication rate from 10 to 2000 Mbps, the detection sensitivity of the APD receiver varies from -47 to -34 dBm. Moreover, due to the existence of the multiplication gain control algorithm, the dynamic range of the APD receiver is effectively improved, while the dynamic range at 10, 100, and 1000 Mbps is 38.7, 37.7, and 32.8 dB, respectively. As a result, the experimental results agree well with the theoretical predictions, and the receiver will improve the flexibility of the intersatellite links without increasing the cost.
A framework for benchmarking of homogenisation algorithm performance on the global scale
NASA Astrophysics Data System (ADS)
Willett, K.; Williams, C.; Jolliffe, I. T.; Lund, R.; Alexander, L. V.; Brönnimann, S.; Vincent, L. A.; Easterbrook, S.; Venema, V. K. C.; Berry, D.; Warren, R. E.; Lopardo, G.; Auchmann, R.; Aguilar, E.; Menne, M. J.; Gallagher, C.; Hausfather, Z.; Thorarinsdottir, T.; Thorne, P. W.
2014-09-01
The International Surface Temperature Initiative (ISTI) is striving towards substantively improving our ability to robustly understand historical land surface air temperature change at all scales. A key recently completed first step has been collating all available records into a comprehensive open access, traceable and version-controlled databank. The crucial next step is to maximise the value of the collated data through a robust international framework of benchmarking and assessment for product intercomparison and uncertainty estimation. We focus on uncertainties arising from the presence of inhomogeneities in monthly mean land surface temperature data and the varied methodological choices made by various groups in building homogeneous temperature products. The central facet of the benchmarking process is the creation of global-scale synthetic analogues to the real-world database where both the "true" series and inhomogeneities are known (a luxury the real-world data do not afford us). Hence, algorithmic strengths and weaknesses can be meaningfully quantified and conditional inferences made about the real-world climate system. Here we discuss the necessary framework for developing an international homogenisation benchmarking system on the global scale for monthly mean temperatures. The value of this framework is critically dependent upon the number of groups taking part and so we strongly advocate involvement in the benchmarking exercise from as many data analyst groups as possible to make the best use of this substantial effort.
Concepts for benchmarking of homogenisation algorithm performance on the global scale
NASA Astrophysics Data System (ADS)
Willett, K.; Williams, C.; Jolliffe, I.; Lund, R.; Alexander, L.; Brönniman, S.; Vincent, L. A.; Easterbrook, S.; Venema, V.; Berry, D.; Warren, R.; Lopardo, G.; Auchmann, R.; Aguilar, E.; Menne, M.; Gallagher, C.; Hausfather, Z.; Thorarinsdottir, T.; Thorne, P. W.
2014-06-01
The International Surface Temperature Initiative (ISTI) is striving towards substantively improving our ability to robustly understand historical land surface air temperature change at all scales. A key recently completed first step has been collating all available records into a comprehensive open access, traceable and version-controlled databank. The crucial next step is to maximise the value of the collated data through a robust international framework of benchmarking and assessment for product intercomparison and uncertainty estimation. We focus on uncertainties arising from the presence of inhomogeneities in monthly surface temperature data and the varied methodological choices made by various groups in building homogeneous temperature products. The central facet of the benchmarking process is the creation of global scale synthetic analogs to the real-world database where both the "true" series and inhomogeneities are known (a luxury the real world data do not afford us). Hence algorithmic strengths and weaknesses can be meaningfully quantified and conditional inferences made about the real-world climate system. Here we discuss the necessary framework for developing an international homogenisation benchmarking system on the global scale for monthly mean temperatures. The value of this framework is critically dependent upon the number of groups taking part and so we strongly advocate involvement in the benchmarking exercise from as many data analyst groups as possible to make the best use of this substantial effort.
NASA Astrophysics Data System (ADS)
Bernabe, Sergio; Igual, Francisco D.; Botella, Guillermo; Garcia, Carlos; Prieto-Matias, Manuel; Plaza, Antonio
2015-10-01
Recent advances in heterogeneous high performance computing (HPC) have opened new avenues for demanding remote sensing applications. Perhaps one of the most popular algorithms in target detection and identification is the automatic target detection and classification algorithm (ATDCA), widely used in the hyperspectral image analysis community. Previous research has already investigated the mapping of ATDCA on graphics processing units (GPUs) and field programmable gate arrays (FPGAs), showing impressive speedup factors that allow its exploitation in time-critical scenarios. Based on these studies, our work explores the performance portability of a tuned OpenCL implementation across a range of processing devices including multicore processors, GPUs and other accelerators. This approach differs from previous papers, which focused on achieving the optimal performance on each platform. Here, we are more interested in the following issues: (1) evaluating if a single code written in OpenCL allows us to achieve acceptable performance across all of them, and (2) assessing the gap between our portable OpenCL code and those hand-tuned versions previously investigated. Our study includes the analysis of different tuning techniques that expose data parallelism as well as enable an efficient exploitation of the complex memory hierarchies found in these new heterogeneous devices. Experiments have been conducted using hyperspectral data sets collected by NASA's Airborne Visible Infrared Imaging Spectrometer (AVIRIS) and the Hyperspectral Digital Imagery Collection Experiment (HYDICE) sensors. To the best of our knowledge, this kind of analysis has not been previously conducted in the hyperspectral imaging processing literature, and in our opinion it is very important in order to really calibrate the possibility of using heterogeneous platforms for efficient hyperspectral imaging processing in real remote sensing missions.
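ATDCA is commonly formulated via orthogonal subspace projection: repeatedly project every pixel onto the subspace orthogonal to the targets found so far, and take the pixel with the largest residual norm as the next target. A minimal NumPy sketch on hypothetical toy data (illustrating the algorithm only, not the tuned OpenCL kernels discussed in the paper):

```python
import numpy as np

def atdca(pixels, n_targets):
    """ATDCA via orthogonal subspace projection.
    pixels: (num_pixels, num_bands) array of spectra.
    Returns the indices of the selected target pixels."""
    # Initialize with the brightest pixel (largest spectral norm).
    targets = [int(np.argmax(np.linalg.norm(pixels, axis=1)))]
    for _ in range(n_targets - 1):
        U = pixels[targets].T                            # bands x targets
        P = np.eye(U.shape[0]) - U @ np.linalg.pinv(U)   # orthogonal projector
        residual = np.linalg.norm(pixels @ P.T, axis=1)  # energy outside span(U)
        targets.append(int(np.argmax(residual)))
    return targets

# Toy 3-band scene: two distinct "materials" embedded in background noise.
rng = np.random.default_rng(0)
scene = rng.normal(0, 0.01, (100, 3))
scene[10] = [1.0, 0.0, 0.0]
scene[42] = [0.0, 1.0, 0.0]
found = atdca(scene, 2)
```

The projection step is the dominant cost and maps naturally onto the data-parallel OpenCL kernels the study benchmarks across devices.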
Clarke, Frank Eldridge; Jones, Blair F.
1972-01-01
Nine ground-water samples from the principal shallow and deep North Sahara aquifers of Algeria and Tunisia were examined to determine the relation of their chemical composition to corrosion and mineral encrustation thought to be contributing to observed decline in well capacities within a UNESCO/UNDP Special Fund Project area. Although the shallow and deep waters differ significantly in certain quality factors, all are sulfochloride types with corrosion potentials ranging from moderate to extreme. None appear to be sufficiently supersaturated with troublesome mineral species to cause rapid or severe encrustation of filter pipes or other well parts. However, calcium carbonate encrustation of deep-well cooling towers and related irrigation pipes can be expected because of loss of carbon dioxide and water during evaporative cooling. Corrosion products, particularly iron sulfide, can be expected to deposit in wells producing waters from the deep aquifers. This could reduce filterpipe openings and increase casing roughness sufficiently to cause significant reduction in well capacity. It seems likely, however, that normal pressure reduction due to exploitation of the artesian systems is a more important control of well performance. If troublesome corrosion and related encrustation are confirmed by downhole inspection, use of corrosion-resisting materials, such as fiber-glass casing and saw-slotted filter pipe (shallow wells only), or stainless-steel screen, will minimize the effects of the waters represented by these samples. A combination of corrosion-resisting stainless steel filter pipe electrically insulated from the casing with a nonconductive spacer and cathodic protection will minimize external corrosion of steel casing, if this is found to be a problem. However, such installations are difficult to make in very deep wells and difficult to control in remote areas. Both the shallow waters and the deep waters examined in this study will tend to cause soil
Evaluation of Advanced Air Bag Deployment Algorithm Performance using Event Data Recorders
Gabler, Hampton C.; Hinch, John
2008-01-01
This paper characterizes the field performance of occupant restraint systems designed with advanced air bag features including those specified in the US Federal Motor Vehicle Safety Standard (FMVSS) No. 208 for advanced air bags, through the use of Event Data Recorders (EDRs). Although advanced restraint systems have been extensively tested in the laboratory, we are only beginning to understand the performance of these systems in the field. Because EDRs record many of the inputs to the advanced air bag control module, these devices can provide unique insights into the characteristics of field performance of air bags. The study was based on 164 advanced air bag cases extracted from NASS/CDS 2002-2006 with associated EDR data. In this dataset, advanced driver air bags were observed to deploy with a 50% probability at a longitudinal delta-V of 9 mph for the first stage, and at 26 mph for both inflator stages. In general, advanced air bag performance was as expected; however, the study identified cases of air bag deployments at delta-Vs as low as 3-4 mph, non-deployments at delta-Vs over 26 mph, and possible delayed air bag deployments. PMID:19026234
A computer algorithm for performing interactive algebraic computation on the GE Image-100 system
NASA Technical Reports Server (NTRS)
Hart, W. D.; Kim, H. H.
1979-01-01
A subroutine which performs specialized algebraic computations upon ocean color scanner multispectral data is presented. The computed results are displayed on a video display. The subroutine exists as a component of the aircraft sensor analysis package. The user specifies the parameters of the computations by directly interacting with the computer. A description of the conversational options is also given.
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
NASA Astrophysics Data System (ADS)
Iguchi, Toshio; Seto, Shinta; Awaka, Jun; Meneghini, Robert; Kubota, Takuji; Chandra, V. Chandra; Yoshida, Naofumi; Urita, Shinji; Kwiatkowski, John; Hanado, Hiroshi
2014-05-01
The GPM core satellite is scheduled to be launched on February 28, 2014. This paper will report results of the early performance test of the Dual-Frequency Precipitation Radar (DPR) on the GPM core satellite in orbit. The DPR, which was developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), consists of two radars: Ku-band precipitation radar (KuPR, 13.6 GHz) and Ka-band radar (KaPR, 35.5 GHz). KuPR is very similar to TRMM/PR, but its sensitivity is better than that of PR. The higher sensitivity is achieved by increasing the transmitting power and the number of independent samples. A technique of variable pulse repetition frequency (PRF) is used to optimize the sampling window for precipitation echoes and the number of independent samples. KaPR has a high sensitivity mode in order to detect light rain and some snow, which are predominant in high latitudes. The beams of KuPR and KaPR can be matched by adjusting the phase offset to each element of the phased array antenna in the across-track direction and the transmitting time offset between the two radars in the along-track direction. Beam matching is essential for the use of the dual-frequency algorithm to retrieve accurate rainfall rates. The hardware performance of DPR will be checked immediately after the launch. In addition to the basic characteristics of the radar such as the transmitting power, sensitivity, and resolutions, other characteristics peculiar to the DPR such as beam matching will be tested. The performance of the DPR algorithm will be evaluated by comparing the level 2 products with the corresponding TRMM/PR data in statistical ways. Such statistics include not only the radar reflectivity and rain rate histograms, but also precipitation detectability and rain classification.
Profile Classification Module of GPM-DPR algorithm: performance of first dataset
NASA Astrophysics Data System (ADS)
Le, M.; Chandra, C. V.; Awaka, J.
2014-12-01
The Global Precipitation Measurement (GPM) mission was successfully launched in February 2014. It is the next satellite mission to obtain global precipitation measurements, following the success of TRMM. The GPM core satellite is equipped with a dual-frequency precipitation radar (DPR) operating at Ku and Ka band. DPR is expected to improve our knowledge of precipitation. The profile classification module of GPM-DPR is a critical module in the retrieval system for spaceborne radar. It involves two aspects: 1) precipitation type classification; and 2) melting region detection. The dual-frequency classification method implemented in the DPR algorithm relies on microphysical properties through the difference in measured radar reflectivities at the two frequencies, a quantity often called the measured dual-frequency ratio (DFRm). Two aspects control the DFRm vertical profile: a) non-Rayleigh scattering; and b) path-integrated attenuation. The DFRm is determined by the forward- and backscattering properties of the mixed phase and rain and the backscattering properties of the ice. It holds rich information to assist in precipitation type classification and melting layer detection. In order to quantify DFRm features, a set of indices is defined: V1 = (DFRm_max - DFRm_min)/(DFRm_max + DFRm_min), where DFRm_max and DFRm_min are the local maximum and minimum values of DFRm. V2 is the absolute value of the mean slope of DFRm below the DFRm local minimum point. To further enlarge the difference between rain types, a third DFRm index is defined as V3 = V1/V2. V3 is an effective parameter and provides a separable threshold for different rain types. The melting layer top is defined as the height at which the slope of the DFRm profile reaches its peak value. Similarly, the melting layer bottom is defined as the height at which the DFRm profile has its local minimum value. These criteria compare well with other existing criteria. The dual-frequency classification method has been evaluated
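The index definitions above can be sketched in code. The function below is an illustrative reconstruction from the abstract alone, not the operational DPR module; the array layout (top of profile first) and the simple global peak-finding are assumptions.

```python
import numpy as np

def dfrm_indices(dfrm, heights):
    """Compute the V1/V2/V3 indices of a DFRm profile (illustrative sketch).

    dfrm    : measured dual-frequency ratio per range gate (dB), top of profile first
    heights : corresponding gate heights (km), decreasing
    """
    i_max = int(np.argmax(dfrm))                      # DFRm local maximum (mixed phase)
    i_min = i_max + int(np.argmin(dfrm[i_max:]))      # local minimum below the maximum
    v1 = (dfrm[i_max] - dfrm[i_min]) / (dfrm[i_max] + dfrm[i_min])
    # V2: absolute value of the mean slope of DFRm below the local-minimum point
    slopes = np.diff(dfrm[i_min:]) / np.diff(heights[i_min:])
    v2 = float(abs(slopes.mean())) if slopes.size else float("nan")
    v3 = v1 / v2                                      # separable threshold for rain types
    return v1, v2, v3
```

With a synthetic profile whose DFRm peaks in the mixed-phase region and then falls to a minimum in rain, V1 lands in (0, 1) and V3 grows as the below-minimum slope flattens.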
NASA Astrophysics Data System (ADS)
Pradeep, M. V. K.; Balbir, S. M. S.; Norani, M. M.
2016-11-01
Demand for electricity in Malaysia has risen substantially in light of the nation's rapid economic development. The current method of generating electricity is the combustion of fossil fuels, which has detrimental effects on the environment and causes social and economic strain owing to highly volatile fuel prices. Thus the need for a sustainable energy source is paramount, and one that is quickly gaining acceptance is solar energy. However, due to the various environmental and geographical factors that affect the generation of solar electricity, solar electricity generating systems (SEGS) cannot compete with the high conversion efficiencies of conventional energy sources. In order to effectively monitor SEGS, this study proposes a performance monitoring system capable of detecting drops in the system's performance for parallel networks through a diagnostic mechanism. The performance monitoring system consists of a microcontroller connected to relevant sensors for data acquisition. The acquired data are transferred to a microcomputer for software-based monitoring and analysis. In order to enhance the interception of sunlight by the SEGS, a sensor-based sun-tracking system is interfaced to the same controller, allowing the PV panel to orient itself autonomously to the angle of maximum sunlight exposure.
Genetic algorithm to design Laue lenses with optimal performance for focusing hard X- and γ-rays
NASA Astrophysics Data System (ADS)
Camattari, Riccardo; Guidi, Vincenzo
2014-10-01
Hard X- and γ-rays can be focused by using a Laue lens as a concentrator. Such optics can improve the detection of radiation in several applications, from the observation of the most violent phenomena in the sky to nuclear medicine applications for diagnostic and therapeutic purposes. We implemented a code named LaueGen, which is based on a genetic algorithm and aims to design optimized Laue lenses. A genetic algorithm was selected because optimizing a Laue lens is a complex and discrete problem. The output of the code is the design of a Laue lens composed of diffracting crystals that are selected and arranged so as to maximize the lens performance. The code can manage crystals of any material and crystallographic orientation. The program is structured so that the user can control all the initial lens parameters. As a result, LaueGen is highly versatile and can be used to design very small lenses, for example, for nuclear medicine, or very large lenses, for example, for satellite-borne astrophysical missions.
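A minimal sketch of the genetic-algorithm idea behind such a design code: pick one crystal material per lens ring so that a figure of merit is maximized. The candidate materials, reflectivity numbers, and penalty term below are placeholders for illustration, not real diffraction data or the LaueGen objective.

```python
import random

# Hypothetical per-material reflectivities and a toy figure of merit.
CANDIDATES = {"Si": 0.30, "Ge": 0.45, "Cu": 0.25, "GaAs": 0.40}
N_RINGS = 12

def fitness(genome):
    # Summed reflectivity, mildly penalizing adjacent rings with the same
    # material (a stand-in for the real, physics-based lens performance).
    score = sum(CANDIDATES[g] for g in genome)
    score -= 0.05 * sum(a == b for a, b in zip(genome, genome[1:]))
    return score

def evolve(pop_size=40, generations=60, p_mut=0.1, seed=1):
    rng = random.Random(seed)
    mats = list(CANDIDATES)
    pop = [[rng.choice(mats) for _ in range(N_RINGS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # elitist truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, N_RINGS)         # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(N_RINGS):                # per-gene mutation
                if rng.random() < p_mut:
                    child[i] = rng.choice(mats)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Because the problem is discrete (a material assignment per ring), a population-based search like this is a natural fit where gradient methods do not apply.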
Implementing Legacy-C Algorithms in FPGA Co-Processors for Performance Accelerated Smart Payloads
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.; Hartzell, Christine
2008-01-01
Accurate, on-board classification of instrument data is used to increase science return by autonomously identifying regions of interest for priority transmission or generating summary products to conserve transmission bandwidth. Due to on-board processing constraints, such classification has been limited to using the simplest functions on a small subset of the full instrument data. FPGA co-processor designs for SVM classifiers will lead to significant improvement in on-board classification capability and accuracy.
Elliott, Peter C; Smith, Geoff; Ernest, Christine S; Murphy, Barbara M; Worcester, Marian U C; Higgins, Rosemary O; Le Grande, Michael R; Goble, Alan J; Andrewes, David; Tatoulis, James
2010-01-01
Candidates for cardiac bypass surgery often experience cognitive decline. Such decline is likely to affect their everyday cognitive functioning. The aim of the present study was to compare cardiac patients' ratings of their everyday cognitive functioning against significant others' ratings and selected neuropsychological tests. Sixty-nine patients completed a battery of standardised cognitive tests. Patients and significant others also completed the Everyday Function Questionnaire independently of each other. Patient and significant other ratings of patients' everyday cognitive difficulties were found to be similar. Despite the similarities in ratings of difficulties, some everyday cognitive tasks were attributed to different processes. Patients' and significant others' ratings were most closely associated with the neuropsychological test of visual memory. Tests of the patients' verbal memory and fluency were only related to significant others' ratings. Test scores of attention and planning were largely unrelated to ratings by either patients or their significant others.
Performance Evaluation of the Approaches and Algorithms using Hamburg Airport Operations
NASA Technical Reports Server (NTRS)
Zhu, Zhifan; Lee, Hanbong; Jung, Yoon; Okuniek, Nikolai; Gerdes, Ingrid; Schier, Sebastian
2016-01-01
The German Aerospace Center (DLR) and the National Aeronautics and Space Administration (NASA) have been independently developing and testing their own concepts and tools for airport surface traffic management. Although these concepts and tools have been tested individually for European and US airports, they have never been compared or analyzed side-by-side. This paper presents the collaborative research devoted to the evaluation and analysis of two different surface management concepts. Hamburg Airport was used as a common test bed airport for the study. First, two independent simulations using the same traffic scenario were conducted: one by the DLR team using the Controller Assistance for Departure Optimization (CADEO) and the Taxi Routing for Aircraft: Creation and Controlling (TRACC) in a real-time simulation environment, and one by the NASA team based on the Spot and Runway Departure Advisor (SARDA) in a fast-time simulation environment. A set of common performance metrics was defined. The simulation results showed that both approaches produced operational benefits in efficiency, such as reducing taxi times, while maintaining runway throughput. Both approaches generated the gate pushback schedule to meet the runway schedule, such that the runway utilization was maximized. The conflict-free taxi guidance by TRACC helped avoid taxi conflicts and reduced taxiing stops, but the taxi benefit needed to be assessed together with runway throughput to analyze the overall performance objective.
Performance Evaluation of the Approaches and Algorithms for Hamburg Airport Operations
NASA Technical Reports Server (NTRS)
Zhu, Zhifan; Jung, Yoon; Lee, Hanbong; Schier, Sebastian; Okuniek, Nikolai; Gerdes, Ingrid
2016-01-01
In this work, fast-time simulations were conducted by NASA using SARDA tools at Hamburg airport, and real-time simulations were conducted by DLR using CADEO and TRACC with the NLR ATM Research Simulator (NARSIM). The outputs are analyzed using a set of common metrics developed collaboratively by DLR and NASA. The proposed metrics are derived from the International Civil Aviation Organization's (ICAO) Key Performance Areas (KPAs) in capability, efficiency, predictability and environment, and adapted to simulation studies. The results are examined to explore and compare the merits and shortcomings of the two approaches using the common performance metrics. Particular attention is paid to the concept of closed-loop, trajectory-based taxiing as well as the application of the US concept to a European airport. Both teams consider the trajectory-based surface operation concept a critical technology advance, not only in addressing current surface traffic management problems, but also in its potential application to unmanned vehicle maneuvering on the airport surface, such as autonomous towing or TaxiBot [6][7] and even Remotely Piloted Aircraft (RPA). Based on this work, a future integration of TRACC and SOSS is described, aiming to bring the conflict-free trajectory-based operation concept to US airports.
Olson, I. A.; Diack, H.; Harrold, Pamela J.
1973-01-01
The “literacy” of a fresh intake of medical students was assessed with standardized vocabulary tests and correlated with examination performance during the first year. Although most students lacked an upper social class upbringing, medical parents, or a classical education, the group performed to a high standard in the tests, comparable with an English honours intake. On the other hand, there appears to be no correlation between an extensive working vocabulary and the ability to perform well in any aspect of the course, apart from the community studies. A qualification in Latin confers no advantage at all on the aspiring doctor. PMID:4685320
NASA Astrophysics Data System (ADS)
Schneider, Barry I.
2016-10-01
Over the past 40 years there has been remarkable progress in the quantitative treatment of complex many-body problems in atomic and molecular physics (AMP). This has happened as a consequence of the development of new and powerful numerical methods, translating these algorithms into practical software and the associated evolution of powerful computing platforms ranging from desktops to high performance computational instruments capable of massively parallel computation. We are taking the opportunity afforded by this CCP2015 to review computational progress in scattering theory and the interaction of strong electromagnetic fields with atomic and molecular systems from the early 1960s until the present time to show how these advances have revealed a remarkable array of interesting and in many cases unexpected features. The article is by no means complete and certainly reflects the views and experiences of the author.
NASA Astrophysics Data System (ADS)
Izzuan Jaafar, Hazriq; Mohd Ali, Nursabillilah; Mohamed, Z.; Asmiza Selamat, Nur; Faiz Zainal Abidin, Amar; Jamian, J. J.; Kassim, Anuar Mohamed
2013-12-01
This paper presents the development of optimal PID and PD controllers for controlling a nonlinear gantry crane system. A Binary Particle Swarm Optimization (BPSO) algorithm that uses a Priority-based Fitness Scheme is adopted to obtain five optimal controller gains. The optimal gains are tested on a control structure that combines PID and PD controllers to examine system responses, including trolley displacement and payload oscillation. The dynamic model of the gantry crane system is derived using the Lagrange equation. Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady-state error (SSE) and overshoot (OS). The proposed technique demonstrates that implementation of the Priority-based Fitness Scheme in BPSO is effective and able to move the trolley as fast as possible to the various desired positions.
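The BPSO mechanics referred to above can be sketched as follows. The crane dynamics and the paper's Priority-based Fitness Scheme are not reproduced here; a separable quadratic cost with hypothetical target gains stands in for the simulated system response, so only the binary-PSO machinery (bit strings, sigmoid velocity transfer, personal/global bests) is shown.

```python
import math
import random

# Each particle is a bit string encoding five controller gains,
# 8 bits per gain, linearly scaled to [0, 10].
N_GAINS, BITS = 5, 8
DIM = N_GAINS * BITS
TARGET = [2.0, 4.0, 1.0, 6.0, 3.0]   # hypothetical "ideal" gains (illustrative only)

def decode(bits):
    gains = []
    for k in range(N_GAINS):
        word = bits[k * BITS:(k + 1) * BITS]
        gains.append(int("".join(map(str, word)), 2) / 255.0 * 10.0)
    return gains

def cost(bits):
    # Stand-in objective; the paper instead evaluates Ts, SSE and OS.
    return sum((g - t) ** 2 for g, t in zip(decode(bits), TARGET))

def bpso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=3):
    rng = random.Random(seed)
    x = [[rng.randint(0, 1) for _ in range(DIM)] for _ in range(n_particles)]
    v = [[0.0] * DIM for _ in range(n_particles)]
    pbest = [row[:] for row in x]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(DIM):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                # Sigmoid transfer: velocity sets the probability that the bit is 1.
                x[i][d] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-v[i][d])) else 0
            if cost(x[i]) < cost(pbest[i]):
                pbest[i] = x[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return decode(gbest), cost(gbest)
```

In the paper's setting, `cost` would be replaced by a simulation run of the crane model returning a priority-ordered combination of settling time, steady-state error and overshoot.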
Yin, Jiandong; Sun, Hongzan; Yang, Jiawen; Guo, Qiyong
2014-01-01
The arterial input function (AIF) plays a crucial role in the quantification of cerebral perfusion parameters. The traditional method for AIF detection is based on manual operation, which is time-consuming and subjective. Two automatic methods have been reported that are based on two frequently used clustering algorithms: fuzzy c-means (FCM) and K-means. However, it is still not clear which is better for AIF detection. Hence, we compared the performance of these two clustering methods using both simulated and clinical data. The results demonstrate that K-means analysis can yield more accurate and robust AIF results, although it takes longer to execute than the FCM method. We consider that this longer execution time is trivial relative to the total time required for image manipulation in a PACS setting, and is acceptable if an ideal AIF is obtained. Therefore, the K-means method is preferable to FCM in AIF detection.
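A runnable sketch of the K-means side of this comparison, assuming each voxel is represented by its concentration-time curve; the deterministic seeding and the "high and early peak" arterial score below are illustrative choices, not the published method.

```python
import numpy as np

def kmeans_curves(curves, k=2, iters=50):
    """Cluster concentration-time curves (rows of `curves`) with plain K-means."""
    # Deterministic seeding spread across peak amplitudes
    # (a simple stand-in for k-means++ initialization).
    order = curves.max(axis=1).argsort()
    idx = np.linspace(0, len(order) - 1, k).astype(int)
    centers = curves[order[idx]].copy()
    for _ in range(iters):
        dists = ((curves[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = curves[labels == j].mean(axis=0)
    return labels, centers

def pick_aif_cluster(centers):
    # Arterial input peaks high and early: reward amplitude, penalize late peaks.
    scores = centers.max(axis=1) / (1.0 + centers.argmax(axis=1))
    return int(scores.argmax())
```

On synthetic data with a tall, early "arterial" curve family and a low, late "tissue" family, the arterial cluster is recovered and its mean curve can serve as the AIF estimate.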
Vaitheeswaran, Ranganathan; Sathiya, Narayanan V K; Bhangle, Janhavi R; Nirhali, Amit; Kumar, Namita; Basu, Sumit; Maiya, Vikram
2011-04-01
The study aims to introduce a hybrid optimization algorithm for anatomy-based intensity modulated radiotherapy (AB-IMRT). Our proposal is that, by integrating an exact optimization algorithm with a heuristic one, the advantages of both can be combined, yielding an efficient and fast global optimizer. Our hybrid approach combines the Gaussian elimination algorithm (an exact optimizer) with the fast simulated annealing algorithm (a heuristic global optimizer) for the optimization of beam weights in AB-IMRT. The algorithm has been implemented in MATLAB. The optimization efficiency of the hybrid algorithm is assessed by (i) analysis of its numerical characteristics and (ii) analysis of its clinical capabilities. The numerical and clinical characteristics of the hybrid algorithm are compared with the Gaussian elimination method (GEM) and fast simulated annealing (FSA). The numerical characteristics include convergence, consistency, number of iterations and overall optimization speed, which were analyzed for eight patient cases. The clinical capabilities of the hybrid algorithm are demonstrated in cases of (a) prostate and (b) brain. The analyses reveal that (i) the convergence speed of the hybrid algorithm is approximately three times higher than that of the FSA algorithm; (ii) the convergence (percentage reduction in the cost function) of the hybrid algorithm is about 20% better than that of the GEM algorithm; (iii) the hybrid algorithm produces relatively better treatment plans in terms of Conformity Index (CI) [~2%-5% improvement] and Homogeneity Index (HI) [~4%-10% improvement] compared to the GEM and FSA algorithms; (iv) the sparing of organs at risk in hybrid algorithm-based plans is better than in GEM-based plans and comparable to FSA-based plans; and (v) the beam weights resulting from the hybrid algorithm are
A parallel-vector algorithm for rapid structural analysis on high-performance computers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.
1990-01-01
A fast, accurate Choleski method for the solution of symmetric systems of linear equations is presented. This direct method is based on a variable-band storage scheme and takes advantage of column heights to reduce the number of operations in the Choleski factorization. The method employs parallel computation in the outermost DO-loop and vector computation via the loop unrolling technique in the innermost DO-loop. The method avoids computations with zeros outside the column heights and, as an option, zeros inside the band. The close relationship between the Choleski and Gauss elimination methods is examined. The minor changes required to convert the Choleski code to a Gauss code to solve non-positive-definite symmetric systems of equations are identified. The results of two large-scale structural analyses performed on supercomputers demonstrate the accuracy and speed of the method.
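The variable-band idea can be sketched as follows, keeping the matrix dense for clarity and simply skipping all arithmetic above each row's profile; the paper's solver additionally stores only the active column segments and adds the parallel and loop-unrolled structure.

```python
import numpy as np

def skyline_cholesky(a, first):
    """Choleski factorization exploiting column heights (illustrative sketch).

    a     : symmetric positive-definite matrix (dense here, for clarity)
    first : first[i] = column index of the first nonzero entry in row i
    Returns the lower-triangular factor L with a = L @ L.T.
    """
    n = a.shape[0]
    l = np.zeros_like(a, dtype=float)
    for i in range(n):
        for j in range(first[i], i + 1):
            # Entries outside either row's profile are known zeros and skipped;
            # fill-in during factorization stays inside the skyline profile.
            lo = max(first[i], first[j])
            s = a[i, j] - np.dot(l[i, lo:j], l[j, lo:j])
            l[i, j] = np.sqrt(s) if i == j else s / l[j, j]
    return l
```

For a banded stiffness-like matrix the inner dot products run only over the short active segments, which is where the operation-count savings over a full dense factorization come from.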
Evaluation of Algorithm Performance in ChIP-Seq Peak Detection
Wilbanks, Elizabeth G.; Facciotti, Marc T.
2010-01-01
Next-generation DNA sequencing coupled with chromatin immunoprecipitation (ChIP-seq) is revolutionizing our ability to interrogate whole genome protein-DNA interactions. Identification of protein binding sites from ChIP-seq data has required novel computational tools, distinct from those used for the analysis of ChIP-Chip experiments. The growing popularity of ChIP-seq spurred the development of many different analytical programs (at last count, we noted 31 open source methods), each with some purported advantage. Given that the literature is dense and empirical benchmarking challenging, selecting an appropriate method for ChIP-seq analysis has become a daunting task. Herein we compare the performance of eleven different peak calling programs on common empirical, transcription factor datasets and measure their sensitivity, accuracy and usability. Our analysis provides an unbiased critical assessment of available technologies, and should assist researchers in choosing a suitable tool for handling ChIP-seq data. PMID:20628599
NASA Astrophysics Data System (ADS)
Li, Xifei; Liu, Jian; Meng, Xiangbo; Tang, Yongji; Banis, Mohammad Norouzi; Yang, Jinli; Hu, Yuhai; Li, Ruying; Cai, Mei; Sun, Xueliang
2014-02-01
LiCoO2 in commercial lithium-ion batteries suffers from poor cycling performance at high cutoff voltages. In this study, we employ an atomic layer deposition (ALD) technique to surface-modify a LiCoO2 material with various thickness-controlled metal oxide (TiO2, ZrO2 and Al2O3) coatings to improve its battery performance. The effects of the metal oxide coatings on the electrochemical performance of the LiCoO2 electrode are studied in detail. It is demonstrated that a uniform and dense coating applied via the ALD route to LiCoO2 powder can lower battery performance, because the coating layers impede lithium diffusion and electron transport. In contrast, it is revealed that a direct coating on prefabricated LiCoO2 electrodes performs much better than a coating on LiCoO2 powders. It is further shown that the improved electrochemical performance of the coated LiCoO2 electrode depends strongly on the coating material. Of the three coating materials, the Al2O3 coating results in the best cycling stability while the ZrO2 coating contributes to the best rate capability. It is thus suggested that the coating materials are functionally specific, and for the best improvement of a cathode, a particular coating material should be sought.
NASA Astrophysics Data System (ADS)
Ramcharan, A. M.; Kemanian, A.; Richard, T.
2013-12-01
The largest terrestrial carbon pool is soil, storing more carbon than present in above ground biomass (Jobbagy and Jackson, 2000). In this context, soil organic carbon has gained attention as a managed sink for atmospheric CO2 emissions. The variety of models that describe soil carbon cycling reflects the relentless effort to characterize the complex nature of soil and the carbon within it. Previous works have laid out the range of mathematical approaches to soil carbon cycling but few have compared model structure performance in diverse agricultural scenarios. As interest in increasing the temporal and spatial scale of models grows, assessing the performance of different model structures is essential to drawing reasonable conclusions from model outputs. This research will address this challenge using the Evolutionary Algorithm Borg-MOEA to optimize the functionality of carbon models in a multi-objective approach to parameter estimation. Model structure performance will be assessed through analysis of multi-objective trade-offs using experimental data from twenty long-term carbon experiments across the globe. Preliminary results show a successful test of this proof of concept using a non-linear soil carbon model structure. Soil carbon dynamics were based on the amount of carbon inputs to the soil and the degree of organic matter saturation of the soil. The degree of organic matter saturation of the soil was correlated with the soil clay content. Six parameters of the non-linear soil organic carbon model were successfully optimized to steady-state conditions using Borg-MOEA and datasets from five agricultural locations in the United States. Given that more than 50% of models rely on linear soil carbon decomposition dynamics, a linear model structure was also optimized and compared to the non-linear case. Results indicate linear dynamics had a significantly lower optimization performance. Results show promise in using the Evolutionary Algorithm Borg-MOEA to assess
Piro, M. H. A.; Simunovic, S.
2016-03-17
Several global optimization methods are reviewed that attempt to ensure that the integral Gibbs energy of a closed isothermal isobaric system is a global minimum to satisfy the necessary and sufficient conditions for thermodynamic equilibrium. In particular, the integral Gibbs energy function of a multicomponent system containing non-ideal phases may be highly non-linear and non-convex, which makes finding a global minimum a challenge. Consequently, a poor numerical approach may lead one to the false belief of equilibrium. Furthermore, confirming that one reaches a global minimum and that this is achieved with satisfactory computational performance becomes increasingly more challenging in systems containing many chemical elements and a correspondingly large number of species and phases. Several numerical methods that have been used for this specific purpose are reviewed with a benchmark study of three of the more promising methods using five case studies of varying complexity. A modification of the conventional Branch and Bound method is presented that is well suited to a wide array of thermodynamic applications, including complex phases with many constituents and sublattices, and ionic phases that must adhere to charge neutrality constraints. Also, a novel method is presented that efficiently solves the system of linear equations by exploiting the unique structure of the Hessian matrix, which reduces the calculation from an O(N^3) operation to an O(N) operation. As a result, this combined approach demonstrates efficiency, reliability and capabilities that are favorable for integration of thermodynamic computations into multi-physics codes with inherent performance considerations.
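The O(N^3)-to-O(N) reduction depends on the particular Hessian structure, which the abstract does not detail. As a generic illustration of the same principle, the Thomas algorithm below solves a tridiagonal system in O(N), where dense Gaussian elimination would cost O(N^3); the tridiagonal form is purely an assumption for the example, not the paper's actual Hessian.

```python
def thomas_solve(lower, diag, upper, rhs):
    """Solve a tridiagonal system in O(N).

    lower, upper : sub- and super-diagonals, length n-1
    diag, rhs    : main diagonal and right-hand side, length n
    """
    n = len(diag)
    d, b = list(diag), list(rhs)
    for i in range(1, n):                  # forward elimination: one pass, O(N)
        m = lower[i - 1] / d[i - 1]
        d[i] -= m * upper[i - 1]
        b[i] -= m * b[i - 1]
    x = [0.0] * n
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):         # back substitution: one pass, O(N)
        x[i] = (b[i] - upper[i] * x[i + 1]) / d[i]
    return x
```

The general lesson matches the abstract's point: when the coefficient matrix has known structure, a specialized solve replaces generic elimination and the cost collapses by orders of magnitude.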
NASA Astrophysics Data System (ADS)
Behr, Y.; Cua, G. B.; Clinton, J. F.; Heaton, T. H.
2012-12-01
The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) originally formulated by Cua and Heaton (2007). Implementation of VS into real-time EEW codes has been an on-going effort of the Swiss Seismological Service at ETH Zürich since 2006, with support from ETH Zürich, various European projects, and the United States Geological Survey (USGS). VS is one of three EEW algorithms - the other two being ElarmS (Allen and Kanamori, 2003) and On-Site (Wu and Kanamori, 2005; Boese et al., 2008) algorithms - that form the basis of the California Integrated Seismic Network (CISN) ShakeAlert system, a USGS-funded prototype end-to-end EEW system that could potentially be implemented in California. In Europe, VS is currently operating as a real-time test system in Switzerland. As part of the on-going EU project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction), VS will be installed and tested at other European networks. VS has been running in real-time on stations of the Southern California Seismic Network (SCSN) since July 2008, and on stations of the Berkeley Digital Seismic Network (BDSN) and the USGS Menlo Park strong motion network in northern California since February 2009. In Switzerland, VS has been running in real-time on stations monitored by the Swiss Seismological Service (including stations from Austria, France, Germany, and Italy) since 2010. We present summaries of the real-time performance of VS in Switzerland and California over the past two and three years respectively. The empirical relationships used by VS to estimate magnitudes and ground motion, originally derived from southern California data, are demonstrated to perform well in northern California and Switzerland. Implementation in real-time and off-line testing in Europe will potentially be extended to southern Italy, western Greece, Istanbul, Romania, and Iceland. Integration of the VS algorithm into both the CISN Advanced
Dodge, Cristina T; Tamm, Eric P; Cody, Dianna D; Liu, Xinming; Jensen, Corey T; Wei, Wei; Kundra, Vikas; Rong, X John
2016-03-08
The purpose of this study was to characterize image quality and dose performance with GE CT iterative reconstruction techniques, adaptive statistical iterative reconstruction (ASiR), and model-based iterative reconstruction (MBIR), over a range of typical to low-dose intervals using the Catphan 600 and the anthropomorphic Kyoto Kagaku abdomen phantoms. The scope of the project was to quantitatively describe the advantages and limitations of these approaches. The Catphan 600 phantom, supplemented with a fat-equivalent oval ring, was scanned using a GE Discovery HD750 scanner at 120 kVp, 0.8 s rotation time, and pitch factors of 0.516, 0.984, and 1.375. The mA was selected for each pitch factor to achieve CTDIvol values of 24, 18, 12, 6, 3, 2, and 1 mGy. Images were reconstructed at 2.5 mm thickness with filtered back-projection (FBP); 20%, 40%, and 70% ASiR; and MBIR. The potential for dose reduction and low-contrast detectability were evaluated from noise and contrast-to-noise ratio (CNR) measurements in the CTP 404 module of the Catphan. Hounsfield units (HUs) of several materials were evaluated from the cylinder inserts in the CTP 404 module, and the modulation transfer function (MTF) was calculated from the air insert. The results were confirmed in the anthropomorphic Kyoto Kagaku abdomen phantom at 6, 3, 2, and 1 mGy. MBIR reduced noise levels five-fold and increased CNR by a factor of five compared to FBP below 6 mGy CTDIvol, resulting in a substantial improvement in image quality. Compared to ASiR and FBP, HU in images reconstructed with MBIR were consistently lower, and this discrepancy was reversed by higher pitch factors in some materials. MBIR improved the conspicuity of the high-contrast spatial resolution bar pattern, and MTF quantification confirmed the superior spatial resolution performance of MBIR versus FBP and ASiR at higher dose levels. While ASiR and FBP were relatively insensitive to changes in dose and pitch, the spatial resolution for MBIR
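The noise and CNR measurements described above can be sketched as follows; the ROI statistics and HU values are synthetic placeholders, not data from the study:

```python
import numpy as np

def cnr(roi_object, roi_background):
    """Contrast-to-noise ratio from two regions of interest (HU arrays):
    absolute mean difference divided by background noise (standard deviation)."""
    noise = roi_background.std(ddof=1)
    return abs(roi_object.mean() - roi_background.mean()) / noise

# Synthetic example: a low-contrast insert 10 HU above background,
# with ~5 HU of background noise (values are illustrative only).
rng = np.random.default_rng(1)
bg = rng.normal(40.0, 5.0, 10_000)
obj = rng.normal(50.0, 5.0, 10_000)
print(cnr(obj, bg))  # close to 2 for these parameters
```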
NASA Astrophysics Data System (ADS)
Kim, Kwansu; Kim, Hyungyu; Lee, Joonghyuk; Kim, Seockhyun; Paek, Insu
2016-09-01
A control algorithm for a floating wind turbine installed on a large semi-submersible platform is investigated in this study. The floating wind turbine differs from other typical semi-submersible floating wind turbines in that the platform is so large that the platform motion is not affected by the blade pitch control. For simulation, the hydrodynamic force data were obtained from ANSYS/AQWA and implemented in Bladed. For the basic pitch controller, the well-known technique of increasing damping by reducing the controller bandwidth below the platform pitch mode was implemented. Also, to reduce the tower load in the pitch control region, a tower damper based on the nacelle angular acceleration signal was designed. Compared with the results obtained from an onshore wind turbine controller applied to the floating wind turbine, the floating wind turbine controller could reduce the tower moments effectively; however, the standard deviation of power increased significantly.
Depeursinge, Adrien; Iavindrasana, Jimison; Hidki, Asmâa; Cohen, Gilles; Geissbuhler, Antoine; Platon, Alexandra; Poletti, Pierre-Alexandre; Müller, Henning
2010-02-01
In this paper, we compare five common classifier families in their ability to categorize six lung tissue patterns in high-resolution computed tomography (HRCT) images of patients affected with interstitial lung diseases (ILD) and with healthy tissue. The evaluated classifiers are naive Bayes, k-nearest neighbor, J48 decision trees, multilayer perceptron, and support vector machines (SVM). The dataset used contains 843 regions of interest (ROI) of healthy and five pathologic lung tissue patterns identified by two radiologists at the University Hospitals of Geneva. Correlation of the feature space composed of 39 texture attributes is studied. A grid search for optimal parameters is carried out for each classifier family. Two complementary metrics are used to characterize the performances of classification. These are based on McNemar's statistical tests and global accuracy. SVM reached best values for each metric and allowed a mean correct prediction rate of 88.3% with high class-specific precision on testing sets of 423 ROIs.
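A minimal sketch of the McNemar-based comparison of paired classifiers; the correctness vectors here are toy data, not the Geneva ROIs:

```python
def mcnemar_statistic(correct_a, correct_b):
    """McNemar's chi-square statistic (with continuity correction) from
    per-sample correctness of two classifiers on the same test set.
    Only the discordant pairs b (A right, B wrong) and c (A wrong, B right)
    enter the statistic."""
    b = sum(1 for a_ok, b_ok in zip(correct_a, correct_b) if a_ok and not b_ok)
    c = sum(1 for a_ok, b_ok in zip(correct_a, correct_b) if not a_ok and b_ok)
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

# Toy example: classifier A wins 9 discordant pairs, B wins 3,
# and both are correct on the remaining 8 samples.
a = [True] * 9 + [False] * 3 + [True] * 8
b = [False] * 9 + [True] * 3 + [True] * 8
print(mcnemar_statistic(a, b))  # (|9-3|-1)^2 / 12 = 25/12 ≈ 2.083
```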
Liu, Jianyang; Li, Youfu
Structured-light methods for 3-D shape measurement have been studied extensively over the last decades. A common way to model such a system is the binocular stereovision-like model, in which the projector is treated as a camera, unifying the projector-camera system with the well-established traditional binocular stereovision framework. After calibrating the projector and camera, 3-D shape information is obtained by conventional triangulation. However, such a stereovision-like system suffers from the short baseline problem, which limits the measurement accuracy. Hence, in this work, we present a new projecting-imaging model based on fringe projection profilometry (FPP). In this model, we first derive a rigorous mathematical relationship between the height of an object's surface, the phase difference distribution map, and the parameters of the setup. Based on this model, we then study how uncertainty in the relevant parameters, particularly the baseline length, affects the 3-D shape measurement accuracy. We provide an extensive uncertainty analysis of the proposed model through partial derivative analysis, relative error analysis, and sensitivity analysis. Moreover, a Monte Carlo simulation experiment is conducted to show how the short baseline affects the measurement performance of the projector-camera system.
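A hedged sketch of such a Monte Carlo uncertainty study, assuming the classical FPP phase-height relation h = L0·Δφ/(Δφ + 2π·f0·d) rather than the paper's own model; all parameter values are illustrative:

```python
import math
import random

def height(dphi, L0, d, f0):
    """Classical FPP phase-height relation h = L0*dphi / (dphi + 2*pi*f0*d).
    L0: camera-to-reference distance, d: baseline, f0: fringe frequency.
    This textbook relation and the values below stand in for the paper's model."""
    return L0 * dphi / (dphi + 2 * math.pi * f0 * d)

# Propagate a 1% baseline uncertainty through the model by Monte Carlo.
random.seed(0)
L0, d, f0, dphi = 1000.0, 100.0, 0.1, 2.0   # mm, mm, 1/mm, rad
samples = [height(dphi, L0, random.gauss(d, 0.01 * d), f0) for _ in range(20_000)]
mean = sum(samples) / len(samples)
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / (len(samples) - 1))
print(f"h ~ {mean:.2f} mm, std ~ {std:.2f} mm")
```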
NASA Astrophysics Data System (ADS)
Abedi, Kambiz; Mirjalili, Seyed Mohammad
2015-03-01
Recently, the majority of research on designing Photonic Crystal Waveguides (PCW) has focused on extracting the relations between the output slow-light properties of PCWs and their structural parameters through a huge number of tedious, non-systematic simulations, in order to introduce better designs. This paper proposes a novel systematic approach that can be considered a shortcut to alleviate the difficulties and human involvement in designing PCWs. In the proposed method, the problem of PCW design is first formulated as an optimization problem. Then, an optimizer is employed to automatically find the optimum design for the formulated PCWs. Meanwhile, different constraints are also considered during optimization with the purpose of applying physical limitations to the final optimum structure. As a case study, the structure of a Bragg-like Corrugation Slotted PCW (BCSPCW) is optimized using the proposed method. One of the most computationally powerful techniques in Computational Intelligence (CI), Particle Swarm Optimization (PSO), is employed as the optimizer to automatically find the optimum structure for the BCSPCW. The optimization is performed under five constraints to guarantee the feasibility of the final optimized structures and to avoid band mixing. Numerical results demonstrate that the proposed method finds an optimum BCSPCW structure with substantial improvements of 172% in bandwidth and 100% in Normalized Delay-Bandwidth Product (NDBP) compared to the best current structure in the literature. Moreover, a time-domain analysis at the end of the paper verifies the performance of the optimized structure and shows that it has low distortion and attenuation simultaneously.
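A minimal, generic PSO sketch of the optimizer family named above (not the authors' constrained BCSPCW formulation; the hyperparameters and test function are illustrative):

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each velocity blends inertia,
    a pull toward the particle's personal best, and a pull toward the
    swarm's global best; positions are clamped to the bounds."""
    rng = random.Random(seed)
    lo, hi = zip(*bounds)
    dim = len(bounds)
    xs = [[rng.uniform(lo[j], hi[j]) for j in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                vs[i][j] = (w * vs[i][j]
                            + c1 * rng.random() * (pbest[i][j] - xs[i][j])
                            + c2 * rng.random() * (gbest[j] - xs[i][j]))
                xs[i][j] = min(max(xs[i][j] + vs[i][j], lo[j]), hi[j])
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f

# Minimize the 2-D sphere function; the optimum is at the origin.
best, best_f = pso(lambda x: sum(t * t for t in x), [(-5, 5)] * 2)
print(best_f)  # close to 0
```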
Kaplan, David E.; Dai, Feng; Aytaman, Ayse; Baytarian, Michelle; Fox, Rena; Hunt, Kristel; Knott, Astrid; Pedrosa, Marcos; Pocha, Christine; Mehta, Rajni; Duggal, Mona; Skanderson, Melissa; Valderrama, Adriana; Taddei, Tamar
2015-01-01
HCC cohort, the overall eCTP score matched 96% of patients to within 1 point of the chart-validated CTP score (Spearman correlation, 0.81). In the cirrhosis cohort, 98% were matched to within 1 point of their actual CTP score (Spearman, 0.85). When applied to a cohort of 30,840 patients with cirrhosis, each unit change in eCTP was associated with 39% increase in the relative risk of death or transplantation. The Harrell C statistic for the eCTP (0.678) was numerically higher than those for other disease severity indices for predicting 5-year transplant-free survival. Adding other predictive models to the eCTP resulted in minimal differences in its predictive performance. CONCLUSION We developed and validated an algorithm to extrapolate an eCTP score from data in a large administrative database with excellent correlation to actual CTP score on chart review. When applied to an administrative database, this algorithm is a highly useful predictor of survival when compared with multiple other published liver disease severity indices. PMID:26188137
NASA Astrophysics Data System (ADS)
Michel, Dominik; Miralles, Diego; Jimenez, Carlos; Ershadi, Ali; McCabe, Matthew F.; Hirschi, Martin; Seneviratne, Sonia I.; Jung, Martin; Wood, Eric F.; (Bob) Su, Z.; Timmermans, Joris; Chen, Xuelong; Fisher, Joshua B.; Mu, Quiaozen; Fernandez, Diego
2015-04-01
Research on climate variations and the development of predictive capabilities largely rely on globally available reference data series of the different components of the energy and water cycles. Several efforts have recently aimed at producing large-scale and long-term reference data sets of these components, e.g. based on in situ observations and remote sensing, in order to allow for diagnostic analyses of the drivers of temporal variations in the climate system. Evapotranspiration (ET) is an essential component of the energy and water cycle, which cannot be monitored directly on a global scale by remote sensing techniques. In recent years, several global multi-year ET data sets have been derived from remote sensing-based estimates, observation-driven land surface model simulations or atmospheric reanalyses. The LandFlux-EVAL initiative presented an ensemble-evaluation of these data sets over the time periods 1989-1995 and 1989-2005 (Mueller et al. 2013). The WACMOS-ET project (http://wacmoset.estellus.eu) started in the year 2012 and constitutes an ESA contribution to the GEWEX initiative LandFlux. It focuses on advancing the development of ET estimates at global, regional and tower scales. WACMOS-ET aims at developing a Reference Input Data Set exploiting European Earth Observations assets and deriving ET estimates produced by a set of four ET algorithms covering the period 2005-2007. The algorithms used are the SEBS (Su et al., 2002), Penman-Monteith from MODIS (Mu et al., 2011), the Priestley and Taylor JPL model (Fisher et al., 2008) and GLEAM (Miralles et al., 2011). The algorithms are run with Fluxnet tower observations, reanalysis data (ERA-Interim), and satellite forcings. They are cross-compared and validated against in-situ data. In this presentation the performance of the different ET algorithms with respect to different temporal resolutions, hydrological regimes, land cover types (including grassland, cropland, shrubland, vegetation mosaic, savanna
Gündeş, S G; Gulenc, S; Bingol, R
2001-12-01
To compare the performance of current chromogenic yeast identification methods, three commercial systems (API 20C Aux, Fungichrom I and Candifast) were evaluated in parallel, along with conventional tests, to identify yeasts commonly isolated in this clinical microbiology laboratory. In all, 116 clinical isolates (68 Candida albicans, 12 C. parapsilosis, 12 C. glabrata and 24 other yeasts) were tested. Germ-tube production, microscopical morphology and other conventional methods were used as standards to definitively identify yeast isolates. The percentage of isolates identified correctly varied between 82.7% and 95.6%. Overall, the performance obtained with Fungichrom I was highest, with 95.6% identification (111 of 116 isolates). The performance of API 20C Aux was higher, with 87% (101 of 116 isolates), than that of Candifast, with 82.7% (96 of 116). The Fungichrom I method was found to be rapid, as 90% of strains were identified after incubation for 24 h at 30 degrees C. Both of the chromogenic yeast identification systems provided a simple, accurate alternative to API 20C Aux and conventional assimilation methods for the rapid identification of most commonly encountered isolates of Candida spp. Fungichrom seemed to be the most appropriate system for use in a clinical microbiology laboratory, due to its good performance with regard to sensitivity, ease of use and reading, rapidity and the cost per test.
ERIC Educational Resources Information Center
Moffitt, Terrie E.; Silva, P. A.
1987-01-01
Examined children whose Wechsler Intelligence Scale for Children-Revised (WISC-R) verbal and performance Intelligence Quotient discrepancies placed them beyond the 90th percentile. Longitudinal study showed 23 percent of the discrepant cases to be discrepant at two or more ages. Studied frequency of perinatal difficulties, early childhood…
ERIC Educational Resources Information Center
Park, Sungho; Singer, George H. S.; Gibson, Mary
2005-01-01
The study uses an alternating treatment design to evaluate the functional effect of teacher's affect on students' task performance. Tradition in special education holds that teachers should engage students using positive and enthusiastic affect for task presentations and praise. To test this assumption, we compared two affective conditions. Three…
The Superior Lambert Algorithm
NASA Astrophysics Data System (ADS)
der, G.
2011-09-01
Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most
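The abstract names Laguerre's iteration as the root-finding engine; the following is a generic sketch of Laguerre's method applied to a polynomial, not to the extended Godal time equation itself:

```python
import cmath

def laguerre(coeffs, x0=0.0, tol=1e-12, max_iter=100):
    """Laguerre's method for one root of a polynomial given by coeffs
    (highest degree first). Its near-global convergence is one reason
    it suits robust iteration inside Lambert solvers."""
    n = len(coeffs) - 1
    x = complex(x0)
    for _ in range(max_iter):
        # Evaluate p, p', p'' simultaneously by Horner's scheme.
        p = pd = pdd = 0j
        for c in coeffs:
            pdd = pdd * x + 2 * pd
            pd = pd * x + p
            p = p * x + c
        if abs(p) < tol:
            return x
        g = pd / p
        h = g * g - pdd / p
        root = cmath.sqrt((n - 1) * (n * h - g * g))
        # Pick the denominator of larger magnitude for stability.
        d = g + root if abs(g + root) > abs(g - root) else g - root
        x -= n / d
    return x

# x^3 - 2x - 5 = 0 has a real root near 2.0945514815.
r = laguerre([1, 0, -2, -5], x0=1.0)
print(r.real)
```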
NASA Astrophysics Data System (ADS)
Jun, Xie Cheng; Su, Yan; Wei, Zhang
2006-08-01
In this paper, a modified algorithm is introduced that improves the Rice coding algorithm, and image compression with the CDF (2,2) wavelet lifting scheme is investigated. Our experiments show that its lossless image compression performance is much better than Huffman, Zip, lossless JPEG, and RAR, and slightly better than (or equal to) the well-known SPIHT: the lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7%, and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's, improving its time efficiency by 162%; the decoder is about 12.3 times faster, improving its time efficiency by about 148%. Instead of requiring the largest number of wavelet transform levels, this algorithm achieves high coding efficiency when the number of wavelet transform levels is larger than 3. For source models with distributions similar to the Laplacian, it improves coding efficiency and realizes progressive transmission coding and decoding.
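A minimal sketch of plain Rice coding, the baseline the paper modifies; the unary-then-binary bit order shown is one common convention, not necessarily the paper's:

```python
def rice_encode(n, k):
    """Rice code of a non-negative integer n with parameter k >= 1:
    the quotient n >> k in unary (q ones and a terminating 0),
    then the remainder in k binary bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits, k):
    """Invert rice_encode: count the unary prefix, then read k remainder bits."""
    q = bits.index("0")
    r = int(bits[q + 1:q + 1 + k], 2)
    return (q << k) | r

# Small k suits sources with geometrically decaying symbol probabilities,
# such as the Laplacian-like residuals mentioned above.
code = rice_encode(9, k=2)   # q = 2, r = 1 -> "110" + "01"
print(code)                  # "11001"
assert rice_decode(code, 2) == 9
```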
Ouroboros: A Tool for Building Generic, Hybrid, Divide & Conquer Algorithms
Johnson, J R; Foster, I
2003-05-01
A hybrid divide and conquer algorithm is one that switches from a divide and conquer to an iterative strategy at a specified problem size. Such algorithms can provide significant performance improvements relative to alternatives that use a single strategy. However, the identification of the optimal problem size at which to switch for a particular algorithm and platform can be challenging. We describe an automated approach to this problem that first conducts experiments to explore the performance space on a particular platform and then uses the resulting performance data to construct an optimal hybrid algorithm on that platform. We implement this technique in a tool, "Ouroboros", that automatically constructs a high-performance hybrid algorithm from a set of registered algorithms. We present results obtained with this tool for several classical divide and conquer algorithms, including matrix multiply and sorting, and report speedups of up to six times achieved over non-hybrid algorithms.
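The crossover pattern that Ouroboros automates can be sketched with a hybrid mergesort; the cutoff of 16 is a hand-picked illustration, not a measured optimum:

```python
def insertion_sort(a, lo, hi):
    """Iterative strategy used below the crossover size (sorts a[lo:hi])."""
    for i in range(lo + 1, hi):
        x, j = a[i], i - 1
        while j >= lo and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x

def hybrid_mergesort(a, lo=0, hi=None, cutoff=16):
    """Divide and conquer down to `cutoff`, then switch strategies.
    Ouroboros would pick the cutoff from measured performance data."""
    if hi is None:
        hi = len(a)
    if hi - lo <= cutoff:
        insertion_sort(a, lo, hi)
        return a
    mid = (lo + hi) // 2
    hybrid_mergesort(a, lo, mid, cutoff)
    hybrid_mergesort(a, mid, hi, cutoff)
    merged, i, j = [], lo, mid
    while i < mid and j < hi:
        if a[i] <= a[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(a[j]); j += 1
    merged.extend(a[i:mid])
    merged.extend(a[j:hi])
    a[lo:hi] = merged
    return a

import random
random.seed(2)
data = [random.randrange(1000) for _ in range(500)]
print(hybrid_mergesort(data[:]) == sorted(data))  # True
```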
Andretta, I; Pomar, C; Rivest, J; Pomar, J; Radünz, J
2016-07-01
This study was developed to assess the impact on performance, nutrient balance, serum parameters and feeding costs resulting from the switching of conventional to precision-feeding programs for growing-finishing pigs. A total of 70 pigs (30.4±2.2 kg BW) were used in a performance trial (84 days). The five treatments used in this experiment were a three-phase group-feeding program (control) obtained with fixed blending proportions of feeds A (high nutrient density) and B (low nutrient density); against four individual daily-phase feeding programs in which the blending proportions of feeds A and B were updated daily to meet 110%, 100%, 90% or 80% of the lysine requirements estimated using a mathematical model. Feed intake was recorded automatically by a computerized device in the feeders, and the pigs were weighed weekly during the project. Body composition traits were estimated by scanning with an ultrasound device and densitometer every 28 days. Nitrogen and phosphorus excretions were calculated by the difference between retention (obtained from densitometer measurements) and intake. Feeding costs were assessed using 2013 ingredient cost data. Feed intake, feed efficiency, back fat thickness, body fat mass and serum contents of total protein and phosphorus were similar among treatments. Feeding pigs in a daily-basis program providing 110%, 100% or 90% of the estimated individual lysine requirements also did not influence BW, body protein mass, weight gain and nitrogen retention in comparison with the animals in the group-feeding program. However, feeding pigs individually with diets tailored to match 100% of nutrient requirements made it possible to reduce (P<0.05) digestible lysine intake by 26%, estimated nitrogen excretion by 30% and feeding costs by US$7.60/pig (-10%) relative to group feeding. Precision feeding is an effective approach to make pig production more sustainable without compromising growth performance.
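The daily updating of blending proportions can be sketched with a simple lever rule; the lysine densities below are hypothetical, not values from the trial:

```python
def blend_proportion(target, dense, dilute):
    """Fraction of high-density feed A needed so that an A/B blend meets
    a target nutrient concentration (simple lever rule), clamped to [0, 1]."""
    p = (target - dilute) / (dense - dilute)
    return min(max(p, 0.0), 1.0)

# Feed A: 11 g/kg digestible lysine, feed B: 5 g/kg (illustrative).
# For a pig whose estimated requirement today is 8 g/kg:
p_a = blend_proportion(8.0, dense=11.0, dilute=5.0)
print(p_a)  # 0.5 -> equal parts A and B today
```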
NASA Astrophysics Data System (ADS)
Ciany, Charles M.; Zurawski, William C.
2007-04-01
Raytheon has extensively processed high-resolution sonar images with its CAD/CAC algorithms to provide real-time classification of mine-like bottom objects in a wide range of shallow-water environments. The algorithm performance is measured in terms of probability of correct classification (Pcc) as a function of false alarm rate, and is impacted by variables associated with both the physics of the problem and the signal processing design choices. Some examples of prominent variables pertaining to the choices of signal processing parameters are image resolution (i.e., pixel dimensions), image normalization scheme, and pixel intensity quantization level (i.e., number of bits used to represent the intensity of each image pixel). Improvements in image resolution associated with the technology transition from sidescan to synthetic aperture sonars have prompted the use of image decimation algorithms to reduce the number of pixels per image that are processed by the CAD/CAC algorithms, in order to meet real-time processor throughput requirements. Additional improvements in digital signal processing hardware have also facilitated the use of an increased quantization level in converting the image data from analog to digital format. This study evaluates modifications to the normalization algorithm and image pixel quantization level within the image processing prior to CAD/CAC processing, and examines their impact on the resulting CAD/CAC algorithm performance. The study utilizes a set of at-sea data from multiple test exercises in varying shallow water environments.
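The decimation and requantization steps discussed above can be sketched as follows; block-averaging and uniform quantization are stand-in assumptions, since the fielded algorithms are not specified in this abstract:

```python
import numpy as np

def decimate(img, factor):
    """Reduce pixel count by averaging factor-by-factor tiles --
    one simple decimation scheme for meeting throughput limits."""
    h, w = (s - s % factor for s in img.shape)
    return img[:h, :w].reshape(h // factor, factor,
                               w // factor, factor).mean(axis=(1, 3))

def requantize(img, bits):
    """Map intensities onto 2**bits levels over the image's dynamic range."""
    lo, hi = img.min(), img.max()
    levels = (1 << bits) - 1
    return np.round((img - lo) / (hi - lo) * levels).astype(np.uint16)

rng = np.random.default_rng(3)
sonar = rng.uniform(0, 1, (64, 64))    # placeholder for a sonar image
small = decimate(sonar, 4)             # 16x16 image, 1/16 the pixels
q = requantize(small, bits=8)          # 256 intensity levels
print(small.shape, int(q.max()))
```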
NASA Astrophysics Data System (ADS)
Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo
2012-08-01
We study the behavior of an algorithm derived from the cavity method for the prize-collecting Steiner tree (PCST) problem on graphs. The algorithm is based on the zero-temperature limit of the cavity equations and as such is formally simple (a fixed point equation resolved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristics and the dhea solver, a branch-and-cut integer linear programming based approach. The comparison shows that the cavity algorithm outperforms the two algorithms in most large instances both in running time and quality of the solution. Finally, we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two post-processing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.
Warshawsky, A.S.; Uzelac, M.J.; Pimper, J.E.
1989-05-01
The Crew III algorithm for assessing time- and dose-dependent combat crew performance subsequent to nuclear irradiation was incorporated into the Janus combat simulation system. Battle outcomes using this algorithm were compared to outcomes based on the currently used time-independent "cookie-cutter" assessment methodology. The results illustrate quantifiable differences in battle outcome between the two assessment techniques. Results suggest that tactical nuclear weapons are more effective than currently assumed if performance degradation attributed to radiation doses between 150 to 3000 rad is taken into account. 6 refs., 9 figs.
2013-01-01
Background In a mass casualty situation, medical personnel must rapidly assess and prioritize patients for treatment and transport. Triage is an important tool for medical management in disaster situations. Lack of common international and Swedish triage guidelines could lead to confusion. Attending the Advanced Trauma Life Support (ATLS) provider course is becoming compulsory in the northern part of Europe. The aim of the ATLS guidelines is provision of effective management of single critically injured patients, not mass casualty incidents. However, the use of the ABCDE algorithms from ATLS has been proposed to be valuable, even in a disaster environment. The objective of this study was to determine whether the mnemonic ABCDE, as instructed in the ATLS provider course, affects the ability of Swedish physicians to correctly triage patients in a simulated mass casualty incident. Methods The study group included 169 ATLS provider students from 10 courses and course sites in Sweden; 153 students filled in an anonymous test just before the course and just after the course. The tests contained 3 questions based on overall priority. The assignment was to triage 15 hypothetical patients who had been involved in a bus crash. Triage was performed according to the ABCDE algorithm. In the triage, the ATLS students used a colour-coded algorithm with red for priority 1, yellow for priority 2, green for priority 3 and black for dead. The students were instructed to identify and prioritize 3 of the most critically injured patients, who should be the first to leave the scene. The same test was used before and after the course. Results The triage section of the test was completed by 142 of the 169 participants both before and after the course. The results indicate that there was no significant difference in triage knowledge among Swedish physicians who attended the ATLS provider course. The results also showed that Swedish physicians have little experience of real mass
Cusella-De Angelis, Maria Gabriella; Laino, Gregorio; Piattelli, Adriano; Pacifici, Maurizio; De Rosa, Alfredo; Papaccio, Gianpaolo
2007-01-01
Background Scaffold surface features are thought to be important regulators of stem cell performance and endurance in tissue engineering applications, but details about these fundamental aspects of stem cell biology remain largely unclear. Methodology and Findings In the present study, smooth clinical-grade lactide-co-glycolic acid 85:15 (PLGA) scaffolds were carved as membranes and treated with NMP (N-methyl-pyrrolidone) to create controlled subtractive pits or microcavities. Scanning electron and confocal microscopy revealed that the NMP-treated membranes contained: (i) large microcavities of 80–120 µm in diameter and 40–100 µm in depth, which we termed primary; and (ii) smaller microcavities of 10–20 µm in diameter and 3–10 µm in depth located within the primary cavities, which we termed secondary. We asked whether a microcavity-rich scaffold had distinct bone-forming capabilities compared to a smooth one. To do so, mesenchymal stem cells derived from human dental pulp were seeded onto the two types of scaffold and monitored over time for cytoarchitectural characteristics, differentiation status and production of important factors, including bone morphogenetic protein-2 (BMP-2) and vascular endothelial growth factor (VEGF). We found that the microcavity-rich scaffold enhanced cell adhesion: the cells created intimate contact with secondary microcavities and were polarized. These cytological responses were not seen with the smooth-surface scaffold. Moreover, cells on the microcavity-rich scaffold released larger amounts of BMP-2 and VEGF into the culture medium and expressed higher alkaline phosphatase activity. When this type of scaffold was transplanted into rats, superior bone formation was elicited compared to cells seeded on the smooth scaffold. Conclusion In conclusion, surface microcavities appear to support a more vigorous osteogenic response of stem cells and should be used in the design of therapeutic substrates to improve bone repair and
NASA Astrophysics Data System (ADS)
Fan, Jiahua; Tseng, Hsin-Wu; Kupinski, Matthew; Cao, Guangzhi; Sainath, Paavana; Hsieh, Jiang
2013-03-01
Radiation dose to the patient has become a major concern today for Computed Tomography (CT) imaging in clinical practice. Various hardware and algorithm solutions have been designed to reduce dose. Among them, iterative reconstruction (IR) has been widely expected to be an effective dose reduction approach for CT. However, there is no clear understanding of the exact amount of dose saving an IR approach can offer for various clinical applications. We know that quantitative image quality assessment should be task-based. This work applied mathematical model observers to study the detectability performance of CT scan data reconstructed using an advanced IR approach as well as the conventional filtered back-projection (FBP) approach. The purpose of this work is to establish a practical and robust approach for CT IR detectability image quality evaluation and to assess the dose saving capability of the IR method under study. Low contrast (LC) objects embedded in head-size and body-size phantoms were imaged multiple times at different dose levels. Independent signal-present and signal-absent pairs were generated for model observer training and testing. Receiver Operating Characteristic (ROC) curves for the location-known-exact task and localization ROC (LROC) curves for the location-unknown task, as well as their corresponding area under the curve (AUC) values, were calculated. Results showed that approximately a 3 times dose reduction has been achieved using the IR method under study.
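The AUC values described above can be computed from model observer test statistics via the Mann-Whitney construction; the scores below are illustrative, not study data:

```python
def auc(signal_scores, noise_scores):
    """Empirical AUC as the Mann-Whitney probability that a model observer
    rates a signal-present image above a signal-absent one (ties count half)."""
    wins = sum((s > n) + 0.5 * (s == n)
               for s in signal_scores for n in noise_scores)
    return wins / (len(signal_scores) * len(noise_scores))

present = [2.1, 1.4, 3.0, 2.4]   # observer scores, signal-present images
absent = [1.0, 1.4, 0.7, 2.2]    # observer scores, signal-absent images
print(auc(present, absent))      # 13.5 wins of 16 pairs -> 0.84375
```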
Mark F. Adams; Seung-Hoe Ku; Patrick Worley; Ed D'Azevedo; Julian C. Cummings; C.S. Chang
2009-10-01
Particle-in-cell (PIC) methods have proven to be effective in discretizing the Vlasov-Maxwell system of equations describing the core of toroidal burning plasmas for many decades. Recent physical understanding of the importance of edge physics for stability and transport in tokamaks has led to development of the first fully toroidal edge PIC code - XGC1. The edge region poses special problems in meshing for PIC methods due to the lack of closed flux surfaces, which makes field-line following meshes and coordinate systems problematic. We present a solution to this problem with a semi-field line following mesh method in a cylindrical coordinate system. Additionally, modern supercomputers require highly concurrent algorithms and implementations, with all levels of the memory hierarchy being efficiently utilized to realize optimal code performance. This paper presents a mesh and particle partitioning method, suitable to our meshing strategy, for use on highly concurrent cache-based computing platforms.
NASA Astrophysics Data System (ADS)
Behr, Yannik; Clinton, John; Cua, Georgia; Cauzzi, Carlo; Heimers, Stefan; Kästli, Philipp; Becker, Jan; Heaton, Thomas
2013-04-01
The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) originally formulated by Cua and Heaton (2007). Implementation of VS into real-time EEW codes has been an on-going effort of the Swiss Seismological Service at ETH Zürich since 2006, with support from ETH Zürich, various European projects, and the United States Geological Survey (USGS). VS is one of three EEW algorithms that form the basis of the California Integrated Seismic Network (CISN) ShakeAlert system, a USGS-funded prototype end-to-end EEW system that could potentially be implemented in California. In Europe, VS is currently operating as a real-time test system in Switzerland. As part of the on-going EU project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction), VS installations in southern Italy, western Greece, Istanbul, Romania, and Iceland are planned or underway. In Switzerland, VS has been running in real-time on stations monitored by the Swiss Seismological Service (including stations from Austria, France, Germany, and Italy) since 2010. While originally based on the Earthworm system, VS has recently been ported to the SeisComp3 system. Besides taking advantage of SeisComp3's picking and phase-association capabilities, this port greatly simplifies the potential installation of VS at other networks, in particular those already running SeisComp3. We present the architecture of the new SeisComp3-based version and compare its results from off-line tests with the real-time performance of VS in Switzerland over the past two years. We further show that the empirical relationships used by VS to estimate magnitudes and ground motion, originally derived from southern California data, perform well in Switzerland.
Ojala, J; Hyödynmaa, S; Barańczyk, R; Góra, E; Waligórski, M P R
2014-03-01
Electron radiotherapy is applied to treat the chest wall close to the mediastinum. The performance of the GGPB and eMC algorithms implemented in the Varian Eclipse treatment planning system (TPS) was studied in this region for 9 and 16 MeV beams, against Monte Carlo (MC) simulations, point dosimetry in a water phantom and dose distributions calculated in virtual phantoms. For the 16 MeV beam, the accuracy of these algorithms was also compared over the lung-mediastinum interface region of an anthropomorphic phantom, against MC calculations and thermoluminescence dosimetry (TLD). In the phantom with a lung-equivalent slab the results were generally congruent, the eMC results for the 9 MeV beam slightly overestimating the lung dose, and the GGPB results for the 16 MeV beam underestimating the lung dose. Over the lung-mediastinum interface, for 9 and 16 MeV beams, the GGPB code underestimated the lung dose and overestimated the dose in water close to the lung, compared to the congruent eMC and MC results. In the anthropomorphic phantom, results of TLD measurements and MC and eMC calculations agreed, while the GGPB code underestimated the lung dose. Good agreement between TLD measurements and MC calculations attests to the accuracy of "full" MC simulations as a reference for benchmarking TPS codes. Application of the GGPB code in chest wall radiotherapy may result in significant underestimation of the lung dose and overestimation of dose to the mediastinum, affecting plan optimization over volumes close to the lung-mediastinum interface, such as the lung or heart.
Siman, W; Kappadath, S; Mawlawi, O
2015-06-15
Purpose: {sup 90}Y PET/CT imaging and quantification have recently been suggested as an approach to treatment verification. However, due to the low positron yield (32 ppm), {sup 90}Y PET/CT images are very noisy. Iterative reconstruction techniques that employ regularization, e.g. the block sequential regularized expectation maximization (BSREM) algorithm (recently implemented on GE scanners as QClear™), have the potential to increase quantitative accuracy with a lower noise penalty compared to OSEM. Our aim is to investigate the performance of regularized reconstruction algorithms in {sup 90}Y PET/CT studies. Methods: A NEMA IEC phantom filled with 3 GBq {sup 90}YCl{sub 2} (to simulate patient treatment) was imaged on a GE-D690 for 1800 s/bed. The sphere-to-background ratio was 7. The data were reconstructed using OSEM and BSREM with PSF modeling and TOF correction while varying the number of iterations (IT) from 1 to 6 with a fixed 24 subsets. For BSREM, the edge-preservation parameter (γ) was 2 and the penalty parameter (β) was varied from 350 to 950. In all cases a post-reconstruction filter of 5.2 mm (2 pixels) transaxial and standard z-axis was used. Sphere average activity concentration (AC) and background standard deviation (SD) were then calculated from VOIs drawn in the spheres and background. Results: Increasing IT from 1 to 6, the %SD in OSEM increased from 30% to 80%, whereas the %SD in BSREM images increased by <5% for all βs. BSREM with β=350 did not offer any improvement over OSEM (convergence of the mean was achieved at 2 IT in this study). Increasing β from 350 to 950 reduced the AC accuracy of small spheres (<20 mm) by 10% and reduced noise from 40% to 20%, which resulted in a CNR increase from 11 to 17. Conclusion: In count-limited studies such as {sup 90}Y PET/CT, BSREM can be used to suppress image noise and increase CNR at the expense of a relatively small decrease in quantitative accuracy. The BSREM parameters need to be optimized for each study depending on the radionuclide and count density. Research
Ojala, Jarkko J; Kapanen, Mika K; Hyödynmaa, Simo J; Wigren, Tuija K; Pitkänen, Maunu A
2014-03-06
The accuracy of dose calculation is a key challenge in stereotactic body radiotherapy (SBRT) of the lung. We have benchmarked three photon beam dose calculation algorithms--pencil beam convolution (PBC), anisotropic analytical algorithm (AAA), and Acuros XB (AXB)--implemented in a commercial treatment planning system (TPS), Varian Eclipse. Dose distributions from full Monte Carlo (MC) simulations were regarded as a reference. In the first stage, for four patients with central lung tumors, treatment plans using the 3D conformal radiotherapy (CRT) technique applying 6 MV photon beams were made using the AXB algorithm, with planning criteria according to the Nordic SBRT study group. The plans were recalculated (with the same number of monitor units (MUs) and identical field settings) using the BEAMnrc and DOSXYZnrc MC codes. The MC-calculated dose distributions were compared to the corresponding AXB-calculated dose distributions to assess the accuracy of the AXB algorithm, against which the other TPS algorithms were then compared. In the second stage, treatment plans were made for ten patients with the 3D CRT technique using both the PBC algorithm and the AAA. The plans were recalculated (with the same number of MUs and identical field settings) with the AXB algorithm and then compared to the original plans. Throughout the study, the comparisons were made as a function of the size of the planning target volume (PTV), using various dose-volume histogram (DVH) and other parameters to quantitatively assess the plan quality. In the first stage, 3D gamma analyses with threshold criteria of 3%/3 mm and 2%/2 mm were also applied. The AXB-calculated dose distributions showed a relatively high level of agreement with the full MC simulation in light of the 3D gamma analysis and DVH comparison, especially with large PTVs, but with smaller PTVs larger discrepancies were found. Gamma agreement index (GAI) values between 95.5% and 99.6% were achieved for all the plans with the threshold criteria of 3%/3 mm, but 2%/2 mm
A genetic algorithm for solving supply chain network design model
NASA Astrophysics Data System (ADS)
Firoozi, Z.; Ismail, N.; Ariafar, S. H.; Tang, S. H.; Ariffin, M. K. M. A.
2013-09-01
Network design is by nature costly, and optimization models play a significant role in reducing the unnecessary cost components of a distribution network. This study proposes a genetic algorithm to solve a distribution network design model. The structure of the chromosome in the proposed algorithm is defined in a novel way that, in addition to producing feasible solutions, also reduces the computational complexity of the algorithm. Computational results are presented to show the algorithm's performance.
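The abstract does not disclose the chromosome encoding, so the sketch below is a generic binary-encoded genetic algorithm for a toy facility-location-style network, with a repair step standing in for the feasibility-preserving role of the paper's novel chromosome. All site costs and shipping costs are made up:

```python
import random

random.seed(1)
open_cost = [40, 30, 50, 35]          # fixed cost of opening each candidate site
ship = [[4, 9, 7, 6], [8, 3, 9, 5], [6, 8, 2, 7],
        [9, 4, 8, 3], [5, 7, 6, 9]]  # customer-by-site shipping cost

def cost(bits):
    """Total cost: open-site fixed costs plus each customer's cheapest
    shipping cost among the open sites."""
    total = sum(c for c, b in zip(open_cost, bits) if b)
    for row in ship:
        total += min(c for c, b in zip(row, bits) if b)
    return total

def repair(bits):
    """Feasibility guard: at least one site must be open."""
    if not any(bits):
        bits[random.randrange(len(bits))] = 1
    return bits

def evolve(pop_size=30, gens=60, pmut=0.1):
    n = len(open_cost)
    pop = [repair([random.randint(0, 1) for _ in range(n)])
           for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = min(random.sample(pop, 2), key=cost)   # binary tournament
            p2 = min(random.sample(pop, 2), key=cost)
            cut = random.randrange(1, n)                # one-point crossover
            child = [g ^ 1 if random.random() < pmut else g
                     for g in p1[:cut] + p2[cut:]]      # bit-flip mutation
            nxt.append(repair(child))
        pop = nxt
    return min(pop, key=cost)

best = evolve()
```

The repair step keeps every chromosome feasible, which is the design goal the paper attributes to its chromosome structure.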
Im, Piljae; Munk, Jeffrey D; Gehl, Anthony C
2015-06-01
A research project “Evaluation of Variable Refrigerant Flow (VRF) Systems Performance and the Enhanced Control Algorithm on Oak Ridge National Laboratory’s (ORNL’s) Flexible Research Platform” was performed to (1) install and validate the performance of Samsung VRF systems compared with the baseline rooftop unit (RTU) variable-air-volume (VAV) system and (2) evaluate the enhanced control algorithm for the VRF system on the two-story flexible research platform (FRP) in Oak Ridge, Tennessee. Based on the VRF system designed by Samsung and ORNL, the system was installed from February 18 through April 15, 2014. The final commissioning and system optimization were completed on June 2, 2014, and the initial test for system operation was started the following day, June 3, 2014. In addition, the enhanced control algorithm was implemented and updated on June 18. After a series of additional commissioning actions, the energy performance data from the RTU and the VRF system were monitored from July 7, 2014, through February 28, 2015. Data monitoring and analysis were performed for the cooling season and heating season separately, and the calibrated simulation model was developed and used to estimate the energy performance of the RTU and VRF systems. This final report includes discussion of the design and installation of the VRF system, the data monitoring and analysis plan, the cooling season and heating season data analysis, and the building energy modeling study
Performance evaluation and optimization of BM4D-AV denoising algorithm for cone-beam CT images
NASA Astrophysics Data System (ADS)
Huang, Kuidong; Tian, Xiaofei; Zhang, Dinghua; Zhang, Hua
2015-12-01
The broadening application of cone-beam computed tomography (CBCT) in medical diagnostics and nondestructive testing necessitates advanced denoising algorithms for its 3D images. The block-matching and four-dimensional filtering algorithm with adaptive variance (BM4D-AV) is applied to 3D image denoising in this research. To optimize it, the key filtering parameters of the BM4D-AV algorithm are first assessed on simulated CBCT images, and a table of optimized filtering parameters is obtained. Then, considering the complexity of the noise in realistic CBCT images, possible noise standard deviations in BM4D-AV are evaluated to establish a selection principle for realistic denoising. The results of the corresponding experiments demonstrate that the BM4D-AV algorithm with optimized parameters presents an excellent denoising effect on realistic 3D CBCT images.
Bowen, J.; Dozier, G.
1996-12-31
This paper introduces a hybrid evolutionary hill-climbing algorithm that quickly solves constraint satisfaction problems (CSPs). This hybrid uses opportunistic arc and path revision in an interleaved fashion to reduce the size of the search space and to recognize when to quit if a CSP is based on an inconsistent constraint network. This hybrid outperforms a well-known hill-climbing algorithm, the Iterative Descent Method, on a test suite of 750 randomly generated CSPs.
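The paper's hybrid interleaves arc and path revision with hill climbing; as a sketch of the hill-climbing half alone, here is a plain min-conflicts repair loop on a toy graph-colouring CSP. This is not the authors' algorithm or test suite, only the standard technique it builds on:

```python
import random

# Toy binary CSP: adjacent variables must take different values
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
colors = [0, 1, 2]

def conflicts(assign, var):
    """Number of constraints var currently violates."""
    return sum(1 for n in neighbors[var] if assign[n] == assign[var])

def min_conflicts(max_steps=1000, seed=0):
    rng = random.Random(seed)
    assign = {v: rng.choice(colors) for v in neighbors}
    for _ in range(max_steps):
        conflicted = [v for v in assign if conflicts(assign, v) > 0]
        if not conflicted:
            return assign                      # consistent assignment found
        var = rng.choice(conflicted)           # repair a conflicted variable
        assign[var] = min(colors,
                          key=lambda c: sum(1 for n in neighbors[var]
                                            if assign[n] == c))
    return None                                # give up: maybe inconsistent

solution = min_conflicts()
```

The step budget plays the role the paper assigns to consistency checking: without it, hill climbing on an inconsistent network would never know when to quit.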
Benchmarking monthly homogenization algorithms
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
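The first performance metric above, the centered root mean square error, can be written in a few lines. A minimal sketch with illustrative toy series:

```python
import math

def centered_rmse(homogenized, truth):
    """Centered RMSE: RMSE after removing each series' own mean, so a
    constant offset (which homogenization cannot recover) is not penalized."""
    mh = sum(homogenized) / len(homogenized)
    mt = sum(truth) / len(truth)
    return math.sqrt(sum(((h - mh) - (t - mt)) ** 2
                         for h, t in zip(homogenized, truth)) / len(truth))

truth = [10.1, 10.4, 9.8, 10.0, 10.6]      # "true" homogeneous series
offset_only = [t + 2.0 for t in truth]     # pure shift: CRMSE is zero
crmse_offset = centered_rmse(offset_only, truth)
crmse_noisy = centered_rmse([10.1, 11.4, 9.8, 10.0, 10.6], truth)
```

A series that differs from the truth only by a constant bias scores zero, while an unresolved break or spurious adjustment raises the metric.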
Samei, Ehsan; Richard, Samuel
2015-01-15
Purpose: Different computed tomography (CT) reconstruction techniques offer different image quality attributes of resolution and noise, challenging the ability to compare their dose reduction potential against each other. The purpose of this study was to evaluate and compare the task-based imaging performance of CT systems to enable the assessment of the dose performance of a model-based iterative reconstruction (MBIR) relative to that of an adaptive statistical iterative reconstruction (ASIR) and a filtered back projection (FBP) technique. Methods: The ACR CT phantom (model 464) was imaged across a wide range of mA settings on a 64-slice CT scanner (GE Discovery CT750 HD, Waukesha, WI). Based on previous work, the resolution was evaluated in terms of a task-based modulation transfer function (MTF) using a circular-edge technique and images from the contrast inserts located in the ACR phantom. Noise performance was assessed in terms of the noise-power spectrum (NPS) measured from the uniform section of the phantom. The task-based MTF and NPS were combined with a task function to yield a task-based estimate of imaging performance, the detectability index (d′). The detectability index was computed as a function of dose for two imaging tasks corresponding to the detection of a relatively small and a relatively large feature (1.5 and 25 mm, respectively). The performance of MBIR in terms of the d′ was compared with that of ASIR and FBP to assess its dose reduction potential. Results: Results indicated that MBIR exhibits variable spatial resolution with respect to object contrast and noise while significantly reducing image noise. The NPS measurements for MBIR indicated a noise texture with a low-pass quality compared to the typical midpass noise found in FBP-based CT images. At comparable dose, the d′ for MBIR was higher than those of FBP and ASIR by at least 61% and 19% for the small feature and the large feature tasks, respectively. Compared to FBP and ASIR, MBIR
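One common way to combine a task-based MTF and NPS into a detectability index is the non-prewhitening model-observer form d′ = ∫W²·MTF² df / sqrt(∫W²·MTF²·NPS df). Whether this exact 1-D form matches the study's computation is an assumption, and the MTF, task function and NPS samples below are toys:

```python
import math

def dprime_npw(freqs, task_w, mtf, nps):
    """Non-prewhitening detectability index from 1-D samples of the task
    function W(f), the system MTF(f) and the noise-power spectrum NPS(f)."""
    df = freqs[1] - freqs[0]
    num = sum((w * m) ** 2 for w, m in zip(task_w, mtf)) * df
    den = sum((w * m) ** 2 * n for w, m, n in zip(task_w, mtf, nps)) * df
    return num / math.sqrt(den)

freqs = [0.05 * i for i in range(40)]              # spatial frequency, cycles/mm
mtf = [math.exp(-2.0 * f) for f in freqs]          # toy low-pass MTF
task = [math.exp(-(f / 0.5) ** 2) for f in freqs]  # toy "large feature" task
d_low_noise = dprime_npw(freqs, task, mtf, [0.5] * len(freqs))
d_high_noise = dprime_npw(freqs, task, mtf, [2.0] * len(freqs))
```

Quadrupling a white NPS (roughly a quarter of the dose) halves d′, which is the mechanism by which d′-versus-dose curves quantify dose reduction potential.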
Threshold-Based OSIC Detection Algorithm for Per-Antenna-Coded TIMO-OFDM Systems
NASA Astrophysics Data System (ADS)
Wang, Xinzheng; Chen, Ming; Zhu, Pengcheng
A threshold-based ordered successive interference cancellation (OSIC) detection algorithm is proposed for per-antenna-coded (PAC) two-input multiple-output (TIMO) orthogonal frequency division multiplexing (OFDM) systems. Successive interference cancellation (SIC) is performed selectively according to channel conditions. Compared with the conventional OSIC algorithm, the proposed algorithm reduces complexity significantly with only a slight performance degradation.
NASA Astrophysics Data System (ADS)
Primorac, E.; Kuhlenbeck, H.; Freund, H.-J.
2016-07-01
The structure of a thin MoO3 layer on Au(111) with a c(4 × 2) superstructure was studied with LEED I/V analysis. As proposed previously (Quek et al., Surf. Sci. 577 (2005) L71), the atomic structure of the layer is similar to that of a MoO3 single layer as found in regular α-MoO3. The layer on Au(111) has a glide plane parallel to the short unit vector of the c(4 × 2) unit cell, and the molybdenum atoms are bridge-bonded to two surface gold atoms with the structure of the gold surface being slightly distorted. The structural refinement was performed with the CMA-ES evolutionary strategy algorithm, which reached a Pendry R-factor of ∼0.044. In the second part, the performance of CMA-ES is compared with that of the differential evolution method, a genetic algorithm and the Powell optimization algorithm, employing I/V curves calculated with tensor LEED.
Automatic control algorithm effects on energy production
NASA Technical Reports Server (NTRS)
Mcnerney, G. M.
1981-01-01
A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long-term energy production. The results for the local site and turbine characteristics were then generalized to obtain guidelines for control algorithm design.
ERIC Educational Resources Information Center
Gilger, J. W.; Geary, D. C.
1985-01-01
Compared the performance of 56 children on the 11 subscales of the Luria-Nebraska Neuropsychological Battery-Children's Revision. Results revealed significant differences on Receptive Speech and Expressive Language subscales, suggesting a possible differential sensitivity of the children's Luria-Nebraska to verbal and nonverbal cognitive deficits.…
NASA Astrophysics Data System (ADS)
Laban, Shaban; El-Desouky, Aly
2013-04-01
Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). The CLIPS expert system shell has been used as the main rule engine for implementing the algorithm rules. The Python programming language and the module "PyCLIPS" are used for building the necessary code for the algorithm implementation. More than 1.7 million intervals constituting the Concise List of Frames (CLF) from 20 different seismic stations have been used for evaluating the proposed algorithm and for evaluating station behaviour and performance. The initial results showed that the proposed algorithm can help in better understanding the operation and performance of those stations. Different important information, such as alerts and some station performance parameters, can be derived from the proposed algorithm. For IMS interval-based data, at any period of time it is possible to analyze station behavior, determine the missing data, generate necessary alerts, and measure some station performance attributes. The details of the proposed algorithm, its methodology, implementation, experimental results, advantages, and limitations are presented. Finally, future directions and recommendations are discussed.
NASA Astrophysics Data System (ADS)
Wang, L.; Wang, T. G.; Wu, J. H.; Cheng, G. P.
2016-09-01
A novel multi-objective optimization algorithm incorporating evolution strategies and vector mechanisms, referred to as VD-MOEA, is proposed and applied to the aerodynamic-structural integrated design of a wind turbine blade. In the algorithm, a set of uniformly distributed vectors is constructed to guide the population in moving forward to the Pareto front rapidly and to maintain population diversity with high efficiency. Two- and three-objective designs of a 1.5 MW wind turbine blade are subsequently carried out for the optimization objectives of maximum annual energy production, minimum blade mass, and minimum extreme root thrust. The results show that the Pareto optimal solutions can be obtained in a single simulation run and are uniformly distributed in the objective space, maximally maintaining the population diversity. In comparison to conventional evolutionary algorithms, VD-MOEA displays a dramatic improvement in algorithm performance in both convergence and diversity preservation for handling complex problems with many variables, objectives and constraints. This provides a reliable high-performance optimization approach for the aerodynamic-structural integrated design of wind turbine blades.
Matheoud, Roberta; Della Monica, Patrizia; Loi, Gianfranco; Vigna, Luca; Krengli, Marco; Inglese, Eugenio; Brambilla, Marco
2011-01-30
The purpose of this study was to analyze the behavior of a contouring algorithm for PET images based on adaptive thresholding depending on lesion size and target-to-background (TB) ratio under different image reconstruction parameters. Based on this analysis, the image reconstruction scheme that maximizes the goodness of fit of the thresholding algorithm was selected. A phantom study employing spherical targets was designed to determine slice-specific threshold (TS) levels which produce accurate cross-sectional areas. A wide range of TB ratios was investigated. Multiple regression methods were used to fit the data and to construct algorithms depending both on target cross-sectional area and TB ratio, using various reconstruction schemes employing a wide range of iteration numbers and amounts of post-filtering Gaussian smoothing. Analysis of covariance was used to test the influence of iteration number and smoothing on threshold determination. The degree of convergence of the ordered-subset expectation maximization (OSEM) algorithm does not influence TS determination. Among the approaches tested, OSEM at two iterations and eight subsets with a 6-8 mm post-reconstruction Gaussian three-dimensional filter provided the best fit, with a coefficient of determination R² = 0.90 for cross-sectional areas ≤ 133 mm² and R² = 0.95 for cross-sectional areas > 133 mm². The amount of post-reconstruction smoothing was directly incorporated into the adaptive thresholding algorithms. The feasibility of the method was tested in two patients with lymph node FDG accumulation and in five patients using the bladder to mimic an anatomical structure of large size and uniform uptake, with satisfactory results. Slice-specific adaptive thresholding algorithms look promising as a reproducible method for delineating PET target volumes with good accuracy.
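An adaptive threshold of the kind fitted here maps lesion size and TB ratio to a threshold level, which is then applied to the image. The coefficients and functional form below are hypothetical placeholders, not the published regression:

```python
def adaptive_threshold_fraction(tb_ratio, area_mm2):
    """Hypothetical fitted model: the threshold (as a fraction of the lesion
    maximum) rises for small lesions and for low target-to-background ratios.
    The branch at 133 mm^2 mirrors the two fitted regimes in the study;
    the coefficients are illustrative only."""
    base = 0.40 if area_mm2 > 133 else 0.50
    return base + 0.25 / tb_ratio

def contour(pixels, background, maximum, tb_ratio, area_mm2):
    """Keep the pixels at or above the slice-specific threshold level TS."""
    frac = adaptive_threshold_fraction(tb_ratio, area_mm2)
    ts = background + frac * (maximum - background)
    return [p for p in pixels if p >= ts]

profile = [1.0, 1.2, 3.5, 8.0, 9.6, 10.0, 7.5, 2.0, 1.1]   # toy 1-D slice
segmented = contour(profile, background=1.0, maximum=10.0,
                    tb_ratio=7.0, area_mm2=100.0)
```
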
Lopes Antunes, Ana Carolina; Dórea, Fernanda; Halasa, Tariq; Toft, Nils
2016-05-01
Surveillance systems are critical for accurate, timely monitoring and effective disease control. In this study, we investigated the performance of univariate process monitoring control algorithms in detecting changes in seroprevalence for endemic diseases. We also assessed the effect of sample size (number of sentinel herds tested in the surveillance system) on the performance of the algorithms. Three univariate process monitoring control algorithms were compared: the Shewhart p chart (PSHEW), cumulative sum (CUSUM) and exponentially weighted moving average (EWMA). Increases in seroprevalence were simulated from 0.10 to 0.15 and 0.20 over 4, 8, 24, 52 and 104 weeks. Each epidemic scenario was run with 2000 iterations. The cumulative sensitivity (CumSe) and timeliness were used to evaluate the algorithms' performance with a 1% false alarm rate. Using these performance evaluation criteria, it was possible to assess the accuracy and timeliness of the surveillance system working in real-time. The results showed that EWMA and PSHEW had higher CumSe (when compared with CUSUM) from week 1 until the end of the period for all simulated scenarios. Changes in seroprevalence from 0.10 to 0.20 were more easily detected (higher CumSe) than changes from 0.10 to 0.15 for all three algorithms. Similar results were found with EWMA and PSHEW, based on the median time to detection. Changes in the seroprevalence were detected later with CUSUM, compared to EWMA and PSHEW, for the different scenarios. Increasing the sample size 10-fold halved the time to detection (CumSe=1), whereas increasing the sample size 100-fold reduced the time to detection by a factor of 6. This study investigated the performance of three univariate process monitoring control algorithms in monitoring endemic diseases. It was shown that automated systems based on these detection methods identified changes in seroprevalence at different times. Increasing the number of tested herds would lead to faster
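Two of the three charts can be sketched directly. The baseline of 0.10, the step to 0.20 at week 20, and the chart parameters below are illustrative choices, not the study's calibrated 1% false-alarm settings:

```python
def ewma_alarm_times(series, target, lam=0.2, limit=0.02):
    """EWMA chart: z_t = lam*x_t + (1-lam)*z_{t-1}; alarm whenever the
    smoothed statistic drifts above target + limit."""
    z, alarms = target, []
    for t, x in enumerate(series):
        z = lam * x + (1 - lam) * z
        if z > target + limit:
            alarms.append(t)
    return alarms

def cusum_alarm_times(series, target, k=0.01, h=0.05):
    """One-sided CUSUM for increases: accumulate excesses over target + k
    and alarm when the cumulative sum crosses the decision limit h."""
    s, alarms = 0.0, []
    for t, x in enumerate(series):
        s = max(0.0, s + (x - target - k))
        if s > h:
            alarms.append(t)
    return alarms

# Simulated weekly seroprevalence: baseline 0.10, rising to 0.20 at week 20
prev = [0.10] * 20 + [0.20] * 20
ewma_alarms = ewma_alarm_times(prev, target=0.10)
cusum_alarms = cusum_alarm_times(prev, target=0.10)
```

Both charts stay silent during the baseline weeks and flag the shift shortly after week 20; a Shewhart p chart would instead test each week's proportion against fixed control limits.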
ERIC Educational Resources Information Center
Robertson, Alexander M.; Willett, Peter
1996-01-01
Describes a genetic algorithm (GA) that assigns weights to query terms in a ranked-output document retrieval system. Experiments showed the GA often found weights slightly superior to those produced by deterministic weighting (F4). Many times, however, the two methods gave the same results and sometimes the F4 results were superior, indicating…
Offline Performance of the Filter Bank EEW Algorithm in the 2014 M6.0 South Napa Earthquake
NASA Astrophysics Data System (ADS)
Meier, M. A.; Heaton, T. H.; Clinton, J. F.
2014-12-01
Medium size events like the M6.0 South Napa earthquake are very challenging for EEW: the damage such events produce can be severe, but it is generally confined to relatively small zones around the epicenter and the shaking duration is short. This leaves a very short window for timely EEW alerts. Algorithms that wait for several stations to trigger before sending out EEW alerts are typically not fast enough for these kinds of events because their blind zone (the zone where strong ground motions start before the warnings arrive) typically covers all or most of the area that experiences strong ground motions. At the same time, single station algorithms are often too unreliable to provide useful alerts. The filter bank EEW algorithm is a new algorithm designed to provide maximally accurate and precise earthquake parameter estimates with minimal data input, with the goal of producing reliable EEW alerts when only a very small number of stations have been reached by the p-wave. It combines the strengths of single station and network based algorithms in that it starts parameter estimates as soon as 0.5 seconds of data are available from the first station, but then perpetually incorporates additional data from the same or from any number of other stations. The algorithm analyzes the time dependent frequency content of real time waveforms with a filter bank. It then uses an extensive training data set to find earthquake records from the past that have had similar frequency content at a given time since the p-wave onset. The source parameters of the most similar events are used to parameterize a likelihood function for the source parameters of the ongoing event, which can then be maximized to find the most likely parameter estimates. Our preliminary results show that the filter bank EEW algorithm correctly estimated the magnitude of the South Napa earthquake to be ~M6 with only 1 second of data at the nearest station to the epicenter. This estimate is then
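A toy of the idea, band-energy features plus a lookup in a training set, with the filter bank crudely approximated by binned DFT amplitudes and the likelihood machinery reduced to a single nearest neighbour (none of this is the actual algorithm's parameterization):

```python
import math

def band_features(wave, n_bands=4):
    """Crude stand-in for a filter bank: bin the DFT amplitude spectrum
    into n_bands frequency bands and sum the amplitudes in each band."""
    n = len(wave)
    amps = []
    for k in range(n // 2):
        re = sum(wave[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(-wave[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        amps.append(math.hypot(re, im))
    size = len(amps) // n_bands
    return [sum(amps[i * size:(i + 1) * size]) for i in range(n_bands)]

def estimate_magnitude(features, training):
    """Nearest neighbour in feature space over (features, magnitude) pairs."""
    def dist(f):
        return sum((a - b) ** 2 for a, b in zip(features, f))
    return min(training, key=lambda rec: dist(rec[0]))[1]

n = 32
low = [math.sin(2 * math.pi * t / n) for t in range(n)]        # past large-event record
high = [math.sin(2 * math.pi * 10 * t / n) for t in range(n)]  # past small-event record
training = [(band_features(low), 6.0), (band_features(high), 3.0)]
query = [0.9 * x for x in low]      # incoming window resembling the large event
mag_estimate = estimate_magnitude(band_features(query), training)
```

The real algorithm keeps many similar past events and turns them into a likelihood over source parameters rather than copying a single neighbour's magnitude.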
Alan Black; Arnis Judzis
2005-09-30
This document details the progress to date on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year starting October 2004 through September 2005. The industry cost shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark ''best in class'' diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit--fluid concepts, modify as necessary and commercialize products. As of report date, TerraTek has concluded all Phase 1 testing and is planning Phase 2 development.
Malegori, Cristina; Nascimento Marques, Emanuel José; de Freitas, Sergio Tonetto; Pimentel, Maria Fernanda; Pasquini, Celio; Casiraghi, Ernestina
2017-04-01
The main goal of this study was to investigate the analytical performance of a state-of-the-art device, one of the smallest dispersion NIR spectrometers on the market (MicroNIR 1700), making a critical comparison with a benchtop FT-NIR spectrometer in terms of prediction accuracy. In particular, the aim of this study was to estimate, in a non-destructive manner, the titratable acidity and ascorbic acid content of acerola fruit during ripening, with a view to the direct in-field applicability of this new miniaturised handheld device. Acerola (Malpighia emarginata DC.) is a super-fruit characterised by a considerable amount of ascorbic acid, ranging from 1.0% to 4.5%. However, during ripening, acerola colour changes and the fruit may lose as much as half of its ascorbic acid content. Because the variability of the chemical parameters followed a non-strictly linear profile, two different regression algorithms were compared: PLS and SVM. Regression models obtained with MicroNIR spectra give better results using the SVM algorithm, for both ascorbic acid and titratable acidity estimation. FT-NIR data give comparable results using both SVM and PLS algorithms, with lower errors for SVM regression. The prediction ability of the two instruments was statistically compared using the Passing-Bablok regression algorithm; the outcomes are critically discussed together with the regression models, showing the suitability of the portable MicroNIR for in-field monitoring of chemical parameters of interest in acerola fruits.
Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.
1997-01-01
Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate "yes" or "no" decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525
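A classic concrete instance of such a performance guarantee, chosen here as an illustration rather than taken from the abstract, is the maximal-matching heuristic for vertex cover, which is provably within a factor of 2 of optimal:

```python
def vertex_cover_2approx(edges):
    """Repeatedly pick an uncovered edge and add both endpoints. The
    chosen edges form a matching, and any cover must contain at least
    one endpoint of each matched edge, so the result is at most twice
    the optimum."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Toy graph: the optimum cover is {0, 3}, of size 2
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
cover = vertex_cover_2approx(edges)
```

The heuristic runs in linear time yet its output is guaranteed to be no more than 2x the optimal cover size, the kind of provable bound the abstract describes.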
Passive microwave algorithm development and evaluation
NASA Technical Reports Server (NTRS)
Petty, Grant W.
1995-01-01
The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three-year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.
Sampling Within k-Means Algorithm to Cluster Large Datasets
Bejarano, Jeremy; Bose, Koushiki; Brannan, Tyler; Thomas, Anita; Adragni, Kofi; Neerchal, Nagaraj; Ostrouchov, George
2011-08-01
Due to current data collection technology, our ability to gather data has surpassed our ability to analyze it. In particular, k-means, one of the simplest and fastest clustering algorithms, is ill-equipped to handle extremely large datasets on even the most powerful machines. Our new algorithm uses a sample from a dataset to decrease runtime by reducing the amount of data analyzed. We perform a simulation study to compare our sampling-based k-means to the standard k-means algorithm by analyzing both the speed and accuracy of the two methods. Results show that our algorithm is significantly more efficient than the existing algorithm with comparable accuracy. Further work on this project might include a more comprehensive study, both on more varied test datasets and on real weather datasets. This is especially important considering that this preliminary study was performed on rather tame datasets. Also, future studies should analyze the performance of the algorithm for varied values of k. Lastly, this paper showed that the algorithm was accurate for relatively low sample sizes. We would like to analyze this further to see how accurate the algorithm is for even lower sample sizes. We could find the lowest sample sizes, by manipulating width and confidence level, for which the algorithm would be acceptably accurate. In order for our algorithm to be a success, it needs to meet two benchmarks: match the accuracy of the standard k-means algorithm and significantly reduce runtime. Both goals are accomplished for all six datasets analyzed. However, on datasets of three and four dimensions, as the data becomes more difficult to cluster, both algorithms fail to obtain the correct classifications on some trials. Nevertheless, our algorithm consistently matches the performance of the standard algorithm while becoming remarkably more efficient with time. Therefore, we conclude that analysts can use our algorithm, expecting accurate results in considerably less time.
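The sampling strategy described above can be sketched as a toy implementation: run Lloyd-style k-means on a random subsample, then assign every point in the full dataset to its nearest learned centroid. The function name, sampling fraction, and plain-Python distance computations here are our own illustrative choices, not the authors' code:

```python
import random

def sampled_kmeans(points, k, sample_frac=0.1, iters=20, seed=0):
    """Cluster a random sample of the data with Lloyd's k-means,
    then label the full dataset by nearest learned centroid."""
    rng = random.Random(seed)
    n = max(k, int(len(points) * sample_frac))
    sample = rng.sample(points, n)
    centroids = rng.sample(sample, k)   # initial centroids from the sample
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in sample:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            groups[j].append(p)
        for j, g in enumerate(groups):
            if g:   # recompute centroid as the mean of its group
                centroids[j] = tuple(sum(dim) / len(g) for dim in zip(*g))
    # One pass over the FULL dataset: assign to nearest centroid
    labels = [min(range(k),
                  key=lambda c: sum((a - b) ** 2
                                    for a, b in zip(p, centroids[c])))
              for p in points]
    return centroids, labels
```

Only the sample participates in the iterative (expensive) phase; the full dataset is touched once, which is where the runtime saving comes from.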
2007-06-05
tive to the AMF, [1] and [5] discovered that multi-channel and two-dimensional parametric estimation approaches could (1) reduce the computational...dimensional (2-D) parametric estimation using the 2-D least-squares-based lattice algorithm [4]. The specifics of the inverse are found in the next...non- parametric estimation techniques • Least square error (LSE) vs mean square error (MSE) • Primarily multi-channel (M-C) structures; also try 2-D
Combining ptychographical algorithms with the Hybrid Input-Output (HIO) algorithm.
Konijnenberg, A P; Coene, W M J; Pereira, S F; Urbach, H P
2016-12-01
In this article we combine the well-known Ptychographical Iterative Engine (PIE) with the Hybrid Input-Output (HIO) algorithm. The important insight is that the HIO feedback function should be kept strictly separate from the reconstructed object, which is done by introducing a separate feedback function per probe position. We have also combined HIO with floating PIE (fPIE) and extended PIE (ePIE). Simulations indicate that the combined algorithm performs significantly better in many situations. Although we have limited our research to a combination with HIO, the same insight can be used to combine ptychographical algorithms with any phase retrieval algorithm that uses a feedback function.
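For reference, one HIO iteration can be sketched as follows. This toy version uses a simple real-space support constraint (as in classical coherent diffractive imaging) rather than the per-probe-position feedback functions of the combined algorithm above; the array names and NumPy formulation are our own assumptions:

```python
import numpy as np

def hio_step(g, support, fourier_mag, beta=0.9):
    """One Hybrid Input-Output (HIO) iteration for phase retrieval.

    g           : current real-space feedback function (the HIO state)
    support     : boolean mask of where the object may be nonzero
    fourier_mag : measured Fourier-domain magnitudes
    """
    # Fourier-magnitude projection: keep the phases, impose the moduli
    G = np.fft.fft2(g)
    G = fourier_mag * np.exp(1j * np.angle(G))
    g_prime = np.real(np.fft.ifft2(G))
    # HIO feedback: accept g' inside the support, damp it outside.
    # Note the feedback function g is kept separate from the estimate g'.
    g_next = np.where(support, g_prime, g - beta * g_prime)
    return g_next, g_prime
```

A useful sanity check: a solution that satisfies both constraints is a fixed point of this update.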
Chang, Yao-Tang; Wu, Chi-Lin; Cheng, Hsu-Chih
2014-04-24
The rapid development of wireless broadband communication technology has affected the location accuracy of worldwide radio monitoring stations that employ time-difference-of-arrival (TDOA) location technology. In this study, TDOA-based location technology was implemented in Taiwan for the first time according to International Telecommunications Union Radiocommunication (ITU-R) recommendations regarding monitoring and location applications. To improve location accuracy, various scenarios, such as a three-dimensional environment (considering an unequal locating antenna configuration), were investigated. Subsequently, the proposed integrated cross-correlation and genetic algorithm was evaluated in the metropolitan area of Tainan. The results indicated that the location accuracy at a circular error probability of 50% was less than 60 m when a multipath effect was present in the area. Moreover, compared with hyperbolic algorithms that have been applied in conventional TDOA-based location systems, the proposed algorithm yielded 17-fold and 19-fold improvements in the mean difference when the location position of the interference station was favorable and unfavorable, respectively. Hence, various forms of radio interference, such as low-transmission-power, burst, and weak signals, and metropolitan interference, were shown to be easily identified, located, and removed.
Boerner, Bettina; Tini, Gabrielo M.; Fachinger, Patrick; Graber, Sereina M.; Irani, Sarosh
2017-01-01
Objectives Olfactory function strongly impacts quality of life (QoL). Continuous positive airway pressure is an effective treatment for obstructive sleep apnea (OSA) and is often applied by nasal masks (nCPAP). The influence of nCPAP on the olfactory performance of OSA patients is unknown. The aim of this study was to assess the sense of smell before initiation of nCPAP and after three months of treatment, in moderate and severe OSA patients. Methods The sense of smell was assessed in 35 patients suffering from daytime sleepiness and moderate to severe OSA (apnea/hypopnea index ≥ 15/h), with the aid of a validated test battery (Sniffin' Sticks), before initiation of nCPAP therapy and after three months of treatment. Additionally, adherent subjects were included in a double-blind randomized three-week CPAP-withdrawal trial (sub-therapeutic CPAP pressure). Results Twenty-five of the 35 patients used the nCPAP therapy for more than four hours per night and for more than 70% of nights (adherent group). The olfactory performance of these patients improved significantly (p = 0.007) after three months of nCPAP therapy. When considering the entire group of patients, olfaction also improved significantly (p = 0.001). In the randomized phase the sense of smell of six patients deteriorated under sub-therapeutic CPAP pressure (p = 0.046), whereas five patients in the maintenance CPAP group showed no significant difference (p = 0.501). Conclusions Olfactory performance improved significantly after three months of nCPAP therapy in patients suffering from moderate and severe OSA. It seems that this effect of nCPAP is reversible under sub-therapeutic CPAP pressure. Trial registration ISRCTN11128866 PMID:28158212
Wang, He; Garden, Adam S.; Zhang, Lifei; Wei, Xiong; Ahamad, Anesa; Kuban, Deborah A.; Komaki, Ritsuko; O’Daniel, Jennifer; Zhang, Yongbin; Mohan, Radhe; Dong, Lei
2008-01-01
Purpose Auto-propagation of anatomical regions of interest (ROIs) from the planning CT to daily CT is an essential step in image-guided adaptive radiotherapy. The goal of this study was to quantitatively evaluate the performance of the algorithm in typical clinical applications. Method and Materials We previously adopted an image intensity-based deformable registration algorithm to find the correspondence between two images. In this study, the ROIs delineated on the planning CT image were mapped onto daily CT or four-dimensional (4D) CT images using the same transformation. Post-processing methods, such as boundary smoothing and modification, were used to enhance the robustness of the algorithm. Auto-propagated contours for eight head-and-neck patients with a total of 100 repeat CTs, one prostate patient with 24 repeat CTs, and nine lung cancer patients with a total of 90 4D-CT images were evaluated against physician-drawn contours and physician-modified deformed contours using the volume-overlap-index (VOI) and mean absolute surface-to-surface distance (ASSD). Results The deformed contours were reasonably well matched with daily anatomy on repeat CT images. The VOI and mean ASSD were 83% and 1.3 mm when compared to the independently drawn contours. A better agreement (greater than 97% and less than 0.4 mm) was achieved if the physician was only asked to correct the deformed contours. The algorithm was robust in the presence of random noise in the image. Conclusion The deformable algorithm may be an effective method to propagate the planning ROIs to subsequent CT images of changed anatomy, although a final review by physicians is highly recommended. PMID:18722272
Wang He; Garden, Adam S.; Zhang Lifei; Wei Xiong; Ahamad, Anesa; Kuban, Deborah A.; Komaki, Ritsuko; O'Daniel, Jennifer; Zhang Yongbin; Mohan, Radhe; Dong Lei
2008-09-01
Purpose: Auto-propagation of anatomic regions of interest from the planning computed tomography (CT) scan to the daily CT is an essential step in image-guided adaptive radiotherapy. The goal of this study was to quantitatively evaluate the performance of the algorithm in typical clinical applications. Methods and Materials: We had previously adopted an image intensity-based deformable registration algorithm to find the correspondence between two images. In the present study, the regions of interest delineated on the planning CT image were mapped onto daily CT or four-dimensional CT images using the same transformation. Postprocessing methods, such as boundary smoothing and modification, were used to enhance the robustness of the algorithm. Auto-propagated contours for 8 head-and-neck cancer patients with a total of 100 repeat CT scans, 1 prostate patient with 24 repeat CT scans, and 9 lung cancer patients with a total of 90 four-dimensional CT images were evaluated against physician-drawn contours and physician-modified deformed contours using the volume overlap index and mean absolute surface-to-surface distance. Results: The deformed contours were reasonably well matched with the daily anatomy on the repeat CT images. The volume overlap index and mean absolute surface-to-surface distance were 83% and 1.3 mm, respectively, compared with the independently drawn contours. Better agreement (>97% and <0.4 mm) was achieved if the physician was only asked to correct the deformed contours. The algorithm was also robust in the presence of random noise in the image. Conclusion: The deformable algorithm might be an effective method to propagate the planning regions of interest to subsequent CT images of changed anatomy, although a final review by physicians is highly recommended.
Statistically significant relational data mining
Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann; Pinar, Ali; Robinson, David Gerald; Berger-Wolf, Tanya; Bhowmick, Sanjukta; Casleton, Emily; Kaiser, Mark; Nordman, Daniel J.; Wilson, Alyson G.
2014-02-01
This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor such models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.
Inference from matrix products: a heuristic spin glass algorithm
Hastings, Matthew B
2008-01-01
We present an algorithm for finding ground states of two-dimensional spin-glass systems based on ideas from matrix product states in quantum information theory. The algorithm works directly at zero temperature and defines an approximation to the energy whose accuracy depends on a parameter k. We test the algorithm against exact methods on random field and random bond Ising models, and we find that accurate results require a k that scales roughly polynomially with the system size. The algorithm also performs well when tested on small systems with arbitrary interactions, where no fast, exact algorithms exist. The time required is significantly less than that of Monte Carlo schemes.
Versatility of the CFR algorithm for limited angle reconstruction
Fujieda, I.; Heiskanen, K.; Perez-Mendez, V.
1990-04-01
The constrained Fourier reconstruction (CFR) algorithm and the iterative reconstruction-reprojection (IRR) algorithm are evaluated based on their accuracy for three types of limited-angle reconstruction problems. The CFR algorithm performs better for problems such as X-ray CT imaging of a nuclear reactor core with one large data gap due to structural blocking of the source and detector pair. For gated heart imaging by X-ray CT, and for radioisotope distribution imaging by PET or SPECT using a polygonal array of gamma cameras with insensitive gaps between camera boundaries, the IRR algorithm has a slight advantage over the CFR algorithm, but the difference is not significant.
NASA Astrophysics Data System (ADS)
Nechad, B.; Ruddick, K.; Schroeder, T.; Oubelkheir, K.; Blondeau-Patissier, D.; Cherukuru, N.; Brando, V.; Dekker, A.; Clementson, L.; Banks, A. C.; Maritorena, S.; Werdell, J.; Sá, C.; Brotas, V.; Caballero de Frutos, I.; Ahn, Y.-H.; Salama, S.; Tilstone, G.; Martinez-Vicente, V.; Foley, D.; McKibben, M.; Nahorniak, J.; Peterson, T.; Siliò-Calzada, A.; Röttgers, R.; Lee, Z.; Peters, M.; Brockmann, C.
2015-02-01
The use of in situ measurements is essential in the validation and evaluation of the algorithms that provide coastal water quality data products from ocean colour satellite remote sensing. Over the past decade, various types of ocean colour algorithms have been developed to deal with the optical complexity of coastal waters. Yet a comprehensive inter-comparison has been lacking, owing to the limited availability of quality-checked in situ databases. The CoastColour Round Robin (CCRR) project, funded by the European Space Agency (ESA), was designed to bring together a variety of reference datasets and to use these to test algorithms and assess their accuracy for retrieving water quality parameters. This information was then intended to help end-users of remote sensing products select the most accurate algorithms for their coastal region. To facilitate this, an inter-comparison of the performance of algorithms for the retrieval of in-water properties over coastal waters was carried out. The comparison used three types of datasets on which ocean colour algorithms were tested. The description and comparison of these three datasets are the focus of this paper; they include the Medium Resolution Imaging Spectrometer (MERIS) Level 2 match-ups, in situ reflectance measurements, and data generated by a radiative transfer model (HydroLight). These datasets are available from doi.pangaea.de/10.1594/PANGAEA.841950. The datasets mainly consisted of 6484 marine reflectance spectra associated with various geometrical (sensor viewing and solar angles) and sky conditions and water constituents: Total Suspended Matter (TSM) and Chlorophyll a (CHL) concentrations, and the absorption of Coloured Dissolved Organic Matter (CDOM). Inherent optical properties were also provided in the simulated datasets (5000 simulations) and from 3054 match-up locations. The distributions of reflectance at selected MERIS bands and band ratios, CHL
[An Algorithm for Correcting Fetal Heart Rate Baseline].
Li, Xiaodong; Lu, Yaosheng
2015-10-01
Fetal heart rate (FHR) baseline estimation is of significance for the computerized analysis of fetal heart rate and the assessment of fetal state. In our work, a fetal heart rate baseline correction algorithm was presented to make an existing baseline more accurate and better fitted to the tracings. Firstly, the deviation of the existing FHR baseline was found and corrected. A new baseline was then obtained after applying smoothing methods. To assess the performance of the FHR baseline correction algorithm, a new FHR baseline estimation algorithm that combined a baseline estimation algorithm with the baseline correction algorithm was compared with two existing FHR baseline estimation algorithms. The results showed that the new FHR baseline estimation algorithm performed well in both accuracy and efficiency, and they also proved the effectiveness of the FHR baseline correction algorithm.
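The correct-then-smooth structure described above might be sketched as follows; the deviation rule, threshold, and window length here are hypothetical stand-ins, since the paper's actual correction rules are not given in the abstract:

```python
def correct_baseline(fhr, baseline, max_dev=10.0, win=31):
    """Hypothetical sketch of a correct-then-smooth FHR baseline fix.

    Where the existing baseline deviates from the trace by more than
    max_dev beats/min, pull it onto the trace; then re-smooth the
    corrected baseline with a centered moving average of width win.
    """
    corrected = [b if abs(f - b) <= max_dev else f
                 for f, b in zip(fhr, baseline)]
    half = win // 2
    smoothed = []
    for i in range(len(corrected)):
        seg = corrected[max(0, i - half): i + half + 1]
        smoothed.append(sum(seg) / len(seg))
    return smoothed
```

The key property is that large baseline deviations are corrected first, so the final smoothing step cannot spread an erroneous offset along the tracing.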
NASA Astrophysics Data System (ADS)
Won, Jihye; Park, Kwan-Dong
2015-04-01
Real-time PPP-RTK positioning algorithms were developed for the purpose of obtaining precise coordinates of moving platforms. In this implementation, corrections for the satellite orbit and satellite clock were taken from the IGS-RTS products, while the ionospheric delay was removed through the ionosphere-free combination and the tropospheric delay was either taken care of using the Global Pressure and Temperature (GPT) model or estimated as a stochastic parameter. To improve the convergence speed, all the available GPS and GLONASS measurements were used and the Extended Kalman Filter parameters were optimized. To validate our algorithms, we collected GPS and GLONASS data from a geodetic-quality receiver installed on the roof of a moving vehicle in an open-sky environment and used IGS final products of satellite orbits and clock offsets. The horizontal positioning error fell below 10 cm within 5 minutes, and the error stayed below 10 cm even after the vehicle started moving. When the IGS-RTS products and the GPT model were used instead of the IGS precise products, the positioning accuracy of the moving vehicle was maintained at better than 20 cm once convergence was achieved, at around 6 minutes.
NASA Astrophysics Data System (ADS)
Tabik, S.; Romero, L. F.; Mimica, P.; Plata, O.; Zapata, E. L.
2012-09-01
A broad area in astronomy focuses on simulating extragalactic objects based on Very Long Baseline Interferometry (VLBI) radio-maps. Several algorithms in this scope simulate what the observed radio-maps would be if emitted from a predefined extragalactic object. This work analyzes the performance and scaling of this kind of algorithm on multi-socket, multi-core architectures. In particular, we evaluate a sharing approach, a privatizing approach, and a hybrid approach on systems with a complex memory hierarchy that includes a shared Last Level Cache (LLC). In addition, we investigate which manual processes can be systematized and then automated in future work. The experiments show that the data-privatizing model scales efficiently on medium-scale multi-socket, multi-core systems (up to 48 cores), while, regardless of algorithmic and scheduling optimizations, the sharing approach is unable to reach acceptable scalability on more than one socket. However, the hybrid model with a specific level of data-sharing provides the best scalability over all the multi-socket, multi-core systems used.
Ojala, Jarkko; Kapanen, Mika; Hyödynmaa, Simo
2016-06-01
The new version 13.6.23 of the electron Monte Carlo (eMC) algorithm in the Varian Eclipse™ treatment planning system includes a model for the 4 MeV electron beam and some general improvements to dose calculation. This study provides the first overall accuracy assessment of this algorithm against full Monte Carlo (MC) simulations for electron beams from 4 MeV to 16 MeV, with most emphasis on the lower energy range. Beams in a homogeneous water phantom and clinical treatment plans were investigated, including measurements in the water phantom. Two different material sets were used with full MC: (1) the one applied in the eMC algorithm and (2) the one included in Eclipse™ for other algorithms. The results of clinical treatment plans were also compared to those of the older eMC version 11.0.31. In the water phantom the dose differences against the full MC were mostly less than 3%, with distance-to-agreement (DTA) values within 2 mm. Larger discrepancies were obtained in build-up regions, at depths near the maximum electron ranges, and with small apertures. For the clinical treatment plans the overall dose differences were mostly within 3% or 2 mm with the first material set. Larger differences were observed for a large 4 MeV beam entering a curved patient surface with extended SSD, and also in regions of large dose gradients. Still, the DTA values were within 3 mm. The discrepancies between the eMC and the full MC were generally larger for the second material set. Version 11.0.31 always performed worse than version 13.6.23.
NASA Technical Reports Server (NTRS)
Fijany, Amir; Collier, James B.; Citak, Ari
1997-01-01
A team comprising the US Army Corps of Engineers (Omaha District and Engineering and Support Center, Huntsville), Jet Propulsion Laboratory (JPL), Stanford Research Institute (SRI), and Montgomery Watson is currently in the process of planning and conducting the largest-ever survey at the Former Buckley Field (60,000 acres), in Colorado, using SRI airborne, ground-penetrating, Synthetic Aperture Radar (SAR). The purpose of this survey is the detection of surface and subsurface Unexploded Ordnance (UXO) and, in a broader sense, site characterization for identification of contaminated as well as clear areas. In preparation for such a large-scale survey, JPL has been developing advanced algorithms and a high-performance testbed for processing of the massive amount of expected SAR data from this site. Two key requirements of this project are the accuracy (in terms of UXO detection) and speed of SAR data processing. The first key feature of this testbed is a large degree of automation and a minimal need for human perception in the processing, to achieve an acceptable processing rate of several hundred acres per day. For accurate UXO detection, novel algorithms have been developed and implemented. These algorithms analyze dual-polarized (HH and VV) SAR data. They are based on the correlation of HH and VV SAR data and involve a rather large set of parameters for accurate detection of UXO. For each specific site, this set of parameters can be optimized by using ground truth data (i.e., known surface and subsurface UXOs). In this paper, we discuss these algorithms and their successful application for detection of surface and subsurface anti-tank mines using a data set from Yuma Proving Ground, AZ, acquired by the SRI SAR.
Latzel, M; Büttner, P; Sarau, G; Höflich, K; Heilmann, M; Chen, W; Wen, X; Conibeer, G; Christiansen, S H
2017-02-03
Nanotextured surfaces provide an ideal platform for efficiently capturing and emitting light. However, the increased surface area in combination with surface defects induced by nanostructuring, e.g. using reactive ion etching (RIE), negatively affects the device's active region and, thus, drastically decreases device performance. In this work, the influence of structural defects and surface states on the optical and electrical performance of InGaN/GaN nanorod (NR) light emitting diodes (LEDs) fabricated by top-down RIE of c-plane GaN with InGaN quantum wells was investigated. After proper surface treatment a significantly improved device performance could be shown. Therefore, wet chemical removal of damaged material in KOH solution followed by atomic layer deposition of only 10 nm of alumina as a wide-bandgap oxide for passivation were successfully applied. Raman spectroscopy revealed that the initially compressively strained InGaN/GaN LED layer stack turned into a virtually completely relaxed GaN and partially relaxed InGaN combination after RIE etching of NRs. Time-correlated single photon counting provides evidence that both treatments, chemical etching and alumina deposition, reduce the number of pathways for non-radiative recombination. Steady-state photoluminescence revealed that the luminescent performance of the NR LEDs is increased by about 50% after KOH and 80% after additional alumina passivation. Finally, complete NR LED devices with a suspended graphene contact were fabricated, for which the effectiveness of the alumina passivation was successfully demonstrated by electroluminescence measurements.
Algorithms for quad-double precision floating point arithmetic
Hida, Yozo; Li, Xiaoye S.; Bailey, David H.
2000-10-30
A quad-double number is an unevaluated sum of four IEEE double precision numbers, capable of representing at least 212 bits of significance. We present the algorithms for various arithmetic operations (including the four basic operations and various algebraic and transcendental operations) on quad-double numbers. The performance of the algorithms, implemented in C++, is also presented.
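Quad-double arithmetic is built from error-free transformations; the most basic is Knuth's two-sum, which computes the rounded sum of two doubles together with the exact rounding error. A minimal sketch in Python (whose floats are IEEE doubles):

```python
def two_sum(a, b):
    """Knuth's two-sum: return (s, e) with s = fl(a + b) and
    a + b = s + e exactly (barring overflow), using only double
    precision operations and no magnitude comparison. This error-free
    transformation is the building block of double-double and
    quad-double addition."""
    s = a + b
    bb = s - a                      # approximate contribution of b to s
    e = (a - (s - bb)) + (b - bb)   # exact rounding error of the sum
    return s, e
```

For example, `two_sum(1.0, 1e-17)` returns `(1.0, 1e-17)`: the tiny addend is lost in `s` but recovered exactly in `e`. Chaining such transformations over the four components is what lets a quad-double carry roughly 212 bits of significance.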
Iterative phase retrieval algorithms. I: optimization.
Guo, Changliang; Liu, Shi; Sheridan, John T
2015-05-20
Two modified Gerchberg-Saxton (GS) iterative phase retrieval algorithms are proposed. The first we refer to as the spatial phase perturbation GS algorithm (SPP GSA). The second is a combined GS hybrid input-output algorithm (GS/HIOA). In this paper (Part I), it is demonstrated that the SPP GS and GS/HIO algorithms are both much better at avoiding stagnation during phase retrieval, allowing them to successfully locate superior solutions compared with either the GS or the HIO algorithms. The performances of the SPP GS and GS/HIO algorithms are also compared. Then, the error reduction (ER) algorithm is combined with the HIO algorithm (ER/HIOA) to retrieve the input object image and the phase, given only some knowledge of its extent and the amplitude in the Fourier domain. In Part II, the algorithms developed here are applied to carry out known plaintext and ciphertext attacks on amplitude encoding and phase encoding double random phase encryption systems. Significantly, ER/HIOA is then used to carry out a ciphertext-only attack on AE DRPE systems.
A new frame-based registration algorithm
NASA Technical Reports Server (NTRS)
Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Sumanaweera, T. S.; Yen, S. Y.; Napel, S.
1998-01-01
This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p ≤ 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mismatch. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required.
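The abstract does not spell out the least-squares weighting scheme, but the kind of weighted rigid fit such registration rests on can be sketched with the SVD-based Kabsch construction; the function name, interface, and uniform weighting in the example are illustrative assumptions, not the paper's method:

```python
import numpy as np

def weighted_rigid_fit(P, Q, w):
    """Weighted least-squares rigid registration (Kabsch/Procrustes):
    find rotation R and translation t minimizing
        sum_i w[i] * || R @ P[i] + t - Q[i] ||^2
    for corresponding point sets P and Q.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    w = np.asarray(w, float)
    w = w / w.sum()
    p0 = (w[:, None] * P).sum(axis=0)           # weighted centroids
    q0 = (w[:, None] * Q).sum(axis=0)
    H = ((P - p0) * w[:, None]).T @ (Q - q0)    # weighted covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = (U @ D @ Vt).T
    t = q0 - R @ p0
    return R, t
```

Down-weighting unreliable rod localizations (small `w[i]`) is what makes such a scheme robust when parts of the frame are poorly imaged.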
Algorithms for radio networks with dynamic topology
NASA Astrophysics Data System (ADS)
Shacham, Nachum; Ogier, Richard; Rutenburg, Vladislav V.; Garcia-Luna-Aceves, Jose
1991-08-01
The objective of this project was the development of advanced algorithms and protocols that efficiently use network resources to provide optimal or nearly optimal performance in future communication networks with highly dynamic topologies and subject to frequent link failures. As reflected by this report, we have achieved our objective and have significantly advanced the state-of-the-art in this area. The research topics of the papers summarized include the following: efficient distributed algorithms for computing shortest pairs of disjoint paths; minimum-expected-delay alternate routing algorithms for highly dynamic unreliable networks; algorithms for loop-free routing; multipoint communication by hierarchically encoded data; efficient algorithms for extracting the maximum information from event-driven topology updates; methods for the neural network solution of link scheduling and other difficult problems arising in communication networks; and methods for robust routing in networks subject to sophisticated attacks.
A novel algorithm for Bluetooth ECG.
Pandya, Utpal T; Desai, Uday B
2012-11-01
In wireless transmission of ECG, data latency becomes significant when battery power level and data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless transmission issues and other ECG measurement noises. Here, a novel algorithm, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes errors in the bit pattern of the received data, if any occurred during wireless transmission, and then removes baseline drift. Afterward, a modified moving average is implemented everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for signal acquisition. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and in different patient positions. This module transmits ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and Savitzky-Golay algorithms both visually and numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing noise, and its use can be extended to any signal where peaks are important for diagnostic purposes.
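The core idea of smoothing everywhere except around QRS complexes can be illustrated with a toy filter. This is a hypothetical sketch, not the published PRASMMA implementation: the amplitude-threshold peak detector, the threshold fraction, and the guard-band width are all invented for illustration:

```python
import numpy as np

def peak_sparing_moving_average(sig, win=5, peak_thresh=0.6, guard=8):
    """Smooth a signal with a moving average everywhere EXCEPT around
    large peaks (QRS-like regions), which are passed through unfiltered
    so their amplitude is preserved for diagnosis."""
    sig = np.asarray(sig, float)
    smooth = np.convolve(sig, np.ones(win) / win, mode="same")
    # crude peak detection: samples above a fraction of the max amplitude
    peaks = np.flatnonzero(np.abs(sig) > peak_thresh * np.abs(sig).max())
    keep = np.zeros(sig.size, bool)
    for p in peaks:                      # spare a guard band around each peak
        keep[max(0, p - guard):p + guard + 1] = True
    return np.where(keep, sig, smooth)
```

A plain moving average would flatten the R-wave peak; sparing a window around it keeps the peak amplitude intact while still suppressing noise elsewhere.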
Alan Black; Arnis Judzis
2004-10-01
The industry cost shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit-fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark "best in class" diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit-fluid concepts, modify as necessary and commercialize products. As of the report date, TerraTek has concluded all major preparations for the high pressure drilling campaign. Baker Hughes encountered difficulties in providing additional pumping capacity before TerraTek's scheduled relocation to another facility, thus the program was delayed further to accommodate the full testing program.
Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics
NASA Astrophysics Data System (ADS)
Brown, David A.
New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied, and a methodology is presented for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated.
Xie, Jiale; Guo, Chunxian; Li, Chang Ming
2013-10-14
Cu2O-ZnO nanowire solar cells have the advantages of light weight and high stability while possessing a large active material interface for potentially high power conversion efficiencies. In particular, electrochemically fabricated devices have attracted increasing attention due to their low-cost and simple fabrication process. However, most of them are "partially" electrochemically fabricated by vacuum deposition onto a preexisting ZnO layer. There are a few examples made via all-electrochemical deposition, but the power conversion efficiency (PCE) is too low (0.13%) for practical applications. Herein we use an all-electrochemical approach to directly deposit ZnO NWs onto FTO followed by electrochemical doping with Ga to produce a heterojunction solar cell. The Ga doping greatly improves light utilization while significantly suppressing charge recombination. A 2.5% molar ratio of Ga to ZnO delivers the best performance with a short circuit current density (Jsc) of 3.24 mA cm(-2) and a PCE of 0.25%, which is significantly higher than in the absence of Ga doping. Moreover, the use of electrochemically deposited ZnO powder-buffered Cu2O from a mixed Cu(2+)-ZnO powder solution and oxygen plasma treatment could reduce the density of defect sites in the heterojunction interface to further increase Jsc and PCE to 4.86 mA cm(-2) and 0.34%, respectively, resulting in the highest power conversion efficiency among all-electrochemically fabricated Cu2O-ZnO NW solar cells. This approach offers great potential for a low-cost solution-based process to mass-manufacture high-performance Cu2O-ZnO NW solar cells.
Alan Black; Arnis Judzis
2003-10-01
This document details the progress to date on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year starting October 2002 through September 2003. The industry cost shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit-fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark "best in class" diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit-fluid concepts, modify as necessary and commercialize products. Accomplishments to date include the following: 4Q 2002--Project started; Industry Team was assembled; Kick-off meeting was held at DOE Morgantown; 1Q 2003--Engineering meeting was held at Hughes Christensen, The Woodlands, Texas to prepare preliminary plans for development and testing and review equipment needs; Operators started sending information regarding their needs for deep drilling challenges and priorities for the large-scale testing experimental matrix; Aramco joined the Industry Team as DEA 148 objectives paralleled the DOE project; 2Q 2003--Engineering and planning for high pressure drilling at TerraTek commenced; 3Q 2003--Continuation of engineering and design work for high pressure drilling at TerraTek; Baker Hughes INTEQ Drilling Fluids and Hughes Christensen commenced planning for Phase 1 testing--recommendations for bits and fluids.
Feng, Y; Olsen, J.; Parikh, P.; Noel, C; Wooten, H; Du, D; Mutic, S; Hu, Y; Kawrakow, I; Dempsey, J
2014-06-01
Purpose: Evaluate commonly used segmentation algorithms on a commercially available real-time MR image guided radiotherapy (MR-IGRT) system (ViewRay), and compare the strengths and weaknesses of each method, with the purpose of improving motion tracking for more accurate radiotherapy. Methods: MR motion images of the bladder, kidney, duodenum, and a liver tumor were acquired for three patients using a commercial on-board MR imaging system and an imaging protocol used during MR-IGRT. A series of 40 frames was selected for each case to cover at least 3 respiratory cycles. Thresholding, Canny edge detection, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE), along with the ViewRay treatment planning and delivery system (TPDS), were included in the comparisons. To evaluate the segmentation results, expert manual contours of the organs or tumor drawn by a physician were used as ground truth. Values of sensitivity, specificity, Jaccard similarity, and the Dice coefficient were computed for comparison. Results: In the segmentation of single image frames, all methods successfully segmented the bladder and kidney, but only FKM, KHM and TPDS were able to segment the liver tumor and the duodenum. For segmenting motion image series, the TPDS method had the highest sensitivity, Jaccard, and Dice coefficients in segmenting bladder and kidney, while FKM and KHM had a slightly higher specificity. A similar pattern was observed when segmenting the liver tumor and the duodenum. The Canny method is not suitable for consistently segmenting motion frames in an automated process, while thresholding and RD-LSE cannot consistently segment a liver tumor and the duodenum. Conclusion: The study compared six different segmentation methods and showed the effectiveness of the ViewRay TPDS algorithm in segmenting motion images during MR-IGRT. Future studies include a selection of conformal segmentation methods based on image/organ-specific information.
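The comparison metrics named above are standard overlap measures between a predicted binary mask and the ground-truth mask; a minimal sketch:

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Sensitivity, specificity, Jaccard similarity, and Dice
    coefficient for two binary segmentation masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)    # true positives
    tn = np.sum(~pred & ~truth)  # true negatives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "jaccard": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }
```

Note that Dice weights the overlap more heavily than Jaccard (Dice = 2J/(1+J)), so the two rank segmentations identically but differ in scale.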
Yu, Yong-Jie; Wu, Hai-Long; Shao, Sheng-Zhi; Kang, Chao; Zhao, Juan; Wang, Yu; Zhu, Shao-Hua; Yu, Ru-Qin
2011-09-15
A novel strategy that combines second-order calibration based on trilinear decomposition algorithms with high performance liquid chromatography with diode array detection (HPLC-DAD) was developed to mathematically separate overlapped peaks and to quantify quinolones in honey samples. The HPLC-DAD data were obtained within a short time in isocratic mode. The developed method could be applied to determine 12 quinolones at the same time, even in the presence of uncalibrated interfering components in a complex background. To assess the performance of the proposed strategy for the determination of quinolones in honey samples, the figures of merit were employed. The limits of quantitation for all analytes were within the range 1.2-56.7 μg kg(-1). The work presented in this paper illustrates the suitability and interesting potential of combining a second-order calibration method with a second-order analytical instrument for multi-residue analysis in honey samples.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
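As an illustration of the basic concepts (selection, crossover, mutation), a toy genetic algorithm on the classic "one-max" problem, maximizing the number of 1 bits, might look like this. The parameters are arbitrary and this is not the project's software tool:

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, gens=60,
                      p_mut=0.05, seed=1):
    """Toy genetic algorithm: tournament selection, one-point
    crossover, and per-bit flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():                       # tournament selection of size 2
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# "one-max": fitness of a bit string is simply its number of 1 bits
best = genetic_algorithm(sum)
```

The same loop structure applies to any encoding once a problem-specific fitness function replaces `sum`.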
Calzado, A; Geleijns, J; Joemai, R M S; Veldkamp, W J H
2014-01-01
Objective: To compare low-contrast detectability (LCDet) performance between a model [non–pre-whitening matched filter with an eye filter (NPWE)] and human observers in CT images reconstructed with filtered back projection (FBP) and iterative [adaptive iterative dose reduction three-dimensional (AIDR 3D; Toshiba Medical Systems, Zoetermeer, Netherlands)] algorithms. Methods: Images of the Catphan® phantom (Phantom Laboratories, New York, NY) were acquired with Aquilion ONE™ 320-detector row CT (Toshiba Medical Systems, Tokyo, Japan) at five tube current levels (20–500 mA range) and reconstructed with FBP and AIDR 3D. Samples containing either low-contrast objects (diameters, 2–15 mm) or background were extracted and analysed by the NPWE model and four human observers in a two-alternative forced choice detection task study. Proportion correct (PC) values were obtained for each analysed object and used to compare human and model observer performances. An efficiency factor (η) was calculated to normalize NPWE to human results. Results: Human and NPWE model PC values (normalized by the efficiency, η = 0.44) were highly correlated for the whole dose range. The Pearson's product-moment correlation coefficients (95% confidence interval) between human and NPWE were 0.984 (0.972–0.991) for AIDR 3D and 0.984 (0.971–0.991) for FBP, respectively. Bland–Altman plots based on PC results showed excellent agreement between human and NPWE [mean absolute difference 0.5 ± 0.4%; range of differences (−4.7%, 5.6%)]. Conclusion: The NPWE model observer can predict human performance in LCDet tasks in phantom CT images reconstructed with FBP and AIDR 3D algorithms at different dose levels. Advances in knowledge: Quantitative assessment of LCDet in CT can accurately be performed using software based on a model observer. PMID:24837275
2016-01-01
The new WHO 2011 guidelines on TB screening among HIV-infected individuals recommend screening using four TB symptoms (current cough, fever, weight loss, and night sweats). This study aimed to assess the performance of the WHO 2011 TB symptom screening algorithm for diagnosing pulmonary TB in HIV patients and to identify possible risk factors for TB. An institution-based cross-sectional study was conducted from February 2012 to November 2012. A total of 250 HIV-infected patients aged ≥18 years visiting the University of Gondar Hospital ART clinic were enrolled. Information about WHO TB clinical symptoms and other known risk factors for TB was collected using a structured questionnaire. Spot-morning-spot sputum samples were collected, and direct AFB microscopy, sputum culture, and RD9 molecular typing were performed. Statistical data analysis was performed using SPSS Version 20.0 software. Of the 250 study participants, fever was reported in 169 (67.6%), whereas cough and night sweats were reported in 167 (66.8%) and 152 (60.8%), respectively. A total of 11 (4.4%) TB cases were identified. Of these, 82% (9/11) of TB patients reported cough, so the negative predictive value was 98%. In addition, 66% (158/239) of TB-negative patients reported cough, so the positive predictive value of cough was 5%. According to the new WHO TB symptom screening algorithm, out of 250 HIV-infected persons, 83% (5/6) have been investigated by TB symptom screening and AFB smear microscopy. Therefore, the 2011 WHO TB symptom screening tool for the diagnosis of pulmonary TB is likely to reduce diagnostic delay and lower TB morbidity and mortality, particularly in HIV-prevalent settings. PMID:28058048
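The predictive values quoted above follow directly from the reported 2x2 counts (9 of 11 TB cases and 158 of 239 TB-negative patients reported cough); a quick check:

```python
def screening_stats(tp, fp, fn, tn):
    """Sensitivity and predictive values of a screening symptom
    from the cells of a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),   # cases flagged by the symptom
        "ppv": tp / (tp + fp),           # flagged patients who are cases
        "npv": tn / (tn + fn),           # unflagged patients who are not
    }

# Counts taken from the abstract above.
stats = screening_stats(tp=9, fp=158, fn=2, tn=81)
```

This reproduces the abstract's figures: PPV of cough about 5% and NPV about 98%, i.e. cough is useful for ruling TB out, not for ruling it in.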
Culzoni, María J; Schenone, Agustina V; Llamas, Natalia E; Garrido, Mariano; Di Nezio, Maria S; Band, Beatriz S Fernández; Goicoechea, Héctor C
2009-10-16
A fast chromatographic methodology is presented for the analysis of three synthetic dyes in non-alcoholic beverages: amaranth (E123), sunset yellow FCF (E110) and tartrazine (E102). Seven soft drinks (purchased from a local supermarket) were homogenized, filtered and injected into the chromatographic system. Second order data were obtained by a rapid LC separation and DAD detection. A comparative study of the performance of two second order algorithms (MCR-ALS and U-PLS/RBL) applied to model the data is presented. Interestingly, the data present a time shift between different chromatograms that cannot be conveniently corrected, complicating the determination of the above-mentioned dyes in beverage samples. This originates a lack of trilinearity that cannot be removed by pre-processing and can hardly be modelled by the U-PLS/RBL algorithm. On the contrary, MCR-ALS has shown to be an excellent tool for modelling this kind of data, allowing acceptable figures of merit to be reached. Recovery values, which ranged between 97% and 105% when analyzing artificial and real samples, were indicative of the good performance of the method. In contrast with the complete separation, which consumes 10 mL of methanol and 3 mL of 0.08 mol L(-1) ammonium acetate, the proposed fast chromatography method requires only 0.46 mL of methanol and 1.54 mL of 0.08 mol L(-1) ammonium acetate. Consequently, analysis time could be reduced to 14.2% of the time necessary to perform the complete separation, saving both solvents and time and thereby reducing both the cost per analysis and the environmental impact.
Chua, Xing Juan; Tan, Shu Min; Chia, Xinyi; Sofer, Zdenek; Luxa, Jan; Pumera, Martin
2017-03-02
Molybdenum disulfide (MoS2 ) is at the forefront of materials research. It shows great promise for electrochemical applications, especially for hydrogen evolution reaction (HER) catalysis. There is a significant discrepancy in the literature on the reported catalytic activity for HER catalysis on MoS2 . Here we test the electrochemical performance of MoS2 obtained from seven sources and we show that these sources provide MoS2 of various phase purity (2H and 3R, and their mixtures) and composition, which is responsible for their different electrochemical properties. The overpotentials for HER at -10 mA cm(-2) for MoS2 from seven different sources range from -0.59 V to -0.78 V vs. reversible hydrogen electrode (RHE). This is of very high importance as with much interest in 2D-MoS2 , the use of the top-down approach would usually involve the application of commercially available MoS2 . These commercially available MoS2 are rarely characterized for composition and phase purity. These key parameters are responsible for large variance of reported catalytic properties of MoS2 .
Li, X. J.; Zhao, D. G. Jiang, D. S.; Liu, Z. S.; Chen, P.; Zhu, J. J.; Le, L. C.; Yang, J.; He, X. G.; Zhang, S. M.; Zhang, B. S.; Liu, J. P.; Yang, H.
2014-10-28
The significant effect of the thickness of the Ni film on the performance of the Ni/Au Ohmic contact to p-GaN is studied. Ni/Au metal films with a thickness of 15/50 nm on p-GaN led to better electrical characteristics, showing a lower specific contact resistivity after annealing in the presence of oxygen. Both the formation of a NiO layer and the evolution of the metal structure on the sample surface and at the interface with p-GaN were examined by transmission electron microscopy and energy-dispersive x-ray spectroscopy. The experimental results indicate that too thin a Ni film cannot form enough NiO to decrease the barrier height and achieve Ohmic contact to p-GaN, while too thick a Ni film transforms into too thick a NiO layer on the sample surface, which also deteriorates the electrical conductivity of the sample.
NASA Astrophysics Data System (ADS)
Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar
2014-03-01
The present work studies and identifies the different variables that affect the output parameters of a single cylinder direct injection compression ignition (CI) engine using jatropha biodiesel. Response surface methodology based on central composite design (CCD) is used to design the experiments. Mathematical models are developed for the combustion parameters (brake specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), the performance parameter brake thermal efficiency (BTE) and the emission parameters (CO, NOx, unburnt HC and smoke) using regression techniques. These regression equations are further utilized for simultaneous optimization of the combustion (BSFC, Pmax), performance (BTE) and emission (CO, NOx, HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NOx, HC and smoke, a multiobjective optimization problem is formulated. The nondominated sorting genetic algorithm-II (NSGA-II) is used to predict the Pareto optimal sets of solutions. Experiments are performed at suitable optimal solutions to predict the combustion, performance and emission parameters and check the adequacy of the proposed model. The Pareto optimal sets of solutions can be used as guidelines for end users to select an optimal combination of engine output and emission parameters depending upon their own requirements.
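The kernel of NSGA-II's nondominated sorting is the Pareto dominance test between candidate solutions; a minimal sketch (all objectives treated as minimized, which is not the engine model itself but the generic operation):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized here)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the nondominated subset of a list of objective vectors,
    i.e. the first front produced by nondominated sorting."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Objectives to be maximized (such as BTE) are handled by negating them before applying the same test. NSGA-II repeatedly peels off such fronts to rank an entire population.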
NASA Astrophysics Data System (ADS)
Swetadri Vasan, S. N.; Ionita, Ciprian N.; Titus, A. H.; Cartwright, A. N.; Bednarek, D. R.; Rudin, S.
2012-03-01
We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one does not depend upon the result from the operation on the other, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massive parallel processing implementation. Thus for an algorithm which has a high amount of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat field correction, temporal filtering, image subtraction, roadmap mask generation and display window and leveling. A comparison between the previous and the upgraded version of CAPIDS has been presented, to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements (with respect to timing or frame rate) have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure and automatic image windowing and leveling during each frame.
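Flat field correction is a good example of a pixel-independent operation: each output pixel depends only on the corresponding pixels of the raw, dark, and flat frames, which is exactly what makes the GPU parallelization straightforward. Below is a vectorized NumPy stand-in for such a kernel; the normalization formula and the epsilon guard are illustrative assumptions, not the CAPIDS code:

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Pixel-wise flat-field correction: subtract the dark frame and
    rescale by the per-pixel gain derived from the flat frame. Every
    output pixel is computed independently of its neighbors."""
    gain = np.mean(flat - dark) / np.clip(flat - dark, 1e-6, None)
    return (raw - dark) * gain
```

On a GPU the same arithmetic would be issued as one thread per pixel; the NumPy expression maps onto that model directly because no pixel reads another pixel's result.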
NASA Astrophysics Data System (ADS)
Ostromsky, Tz.; Dimov, I.; Georgieva, R.; Marinov, P.; Zlatev, Z.
2013-10-01
In this paper we present some new results of our work on sensitivity analysis of a large-scale air pollution model, more specifically the Danish Eulerian Model (DEM). The main purpose of this study is to analyse the sensitivity of ozone concentrations with respect to the rates of some chemical reactions. The current sensitivity study considers the rates of six important chemical reactions and is done for the areas of several European cities with different geographical locations, climate, industrialization and population density. Some of the most widely used variance-based techniques for sensitivity analysis, Sobol estimates and their modifications, have been used in this study. A vast number of numerical experiments with a version of the Danish Eulerian Model specially adapted for the purpose (SA-DEM) were carried out to compute global Sobol sensitivity measures. SA-DEM was implemented and run on two powerful cluster supercomputers: IBM Blue Gene/P, the most powerful parallel supercomputer in Bulgaria, and IBM MareNostrum III, the most powerful parallel supercomputer in Spain. The refined (480 × 480) mesh version of the model was used in the experiments on MareNostrum III, which is a challenging computational problem even on such a powerful machine. Some optimizations of the code with respect to parallel efficiency and memory use were performed. Tables with performance results of a number of numerical experiments on IBM Blue Gene/P and on IBM MareNostrum III are presented and analysed.
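A first-order Sobol index of the kind computed above can be estimated with a Monte Carlo pick-freeze scheme. The sketch below uses a Saltelli-style estimator on a toy linear model rather than SA-DEM, with inputs assumed uniform on [0, 1]:

```python
import numpy as np

def sobol_first_order(model, d, n=100_000, seed=0):
    """Monte Carlo estimate of first-order Sobol indices via the
    pick-freeze construction: for each input i, re-evaluate the model
    with only column i swapped between two independent sample sets."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # swap only input i
        S[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S

# Toy model Y = X1 + 0.5*X2: analytic first-order indices are 0.8 and 0.2
S = sobol_first_order(lambda X: X[:, 0] + 0.5 * X[:, 1], d=2)
```

Each index measures the fraction of output variance attributable to one input alone, which is how the ozone sensitivities to individual reaction rates are ranked.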
Anglada-Escude, Guillem; Butler, R. Paul
2012-06-01
Doppler spectroscopy has uncovered or confirmed all the known planets orbiting nearby stars. Two main techniques are used to obtain precision Doppler measurements at optical wavelengths. The first approach is the gas cell method, which consists of least-squares matching of the spectrum of iodine imprinted on the spectrum of the star. The second method relies on the construction of a stabilized spectrograph externally calibrated in wavelength. The most precise stabilized spectrometer in operation is the High Accuracy Radial velocity Planet Searcher (HARPS), operated by the European Southern Observatory at La Silla Observatory, Chile. Doppler measurements with HARPS are typically obtained using the cross-correlation function (CCF) technique, which consists of multiplying the stellar spectrum by a weighted binary mask and finding the minimum of the product as a function of the Doppler shift. It is known that the CCF is suboptimal in exploiting the Doppler information in the stellar spectrum. Here we describe an algorithm to obtain precision radial velocity measurements using least-squares matching of each observed spectrum to a high signal-to-noise ratio template derived from the same observations. This algorithm is implemented in our software HARPS-TERRA (Template-Enhanced Radial velocity Re-analysis Application). New radial velocity measurements on a representative sample of stars observed by HARPS are used to illustrate the benefits of the proposed method. We show that, compared with the CCF, template matching provides a significant improvement in accuracy, especially when applied to M dwarfs.
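The template-matching idea can be sketched as a coarse grid search over trial velocities: Doppler-shift the template, interpolate it onto the observed wavelength grid, and keep the velocity minimizing the squared residuals. This is an illustrative toy, not HARPS-TERRA; the linear interpolation and unweighted chi-square are simplifying assumptions:

```python
import numpy as np

C = 299_792.458  # speed of light, km/s

def rv_by_template_matching(wave, flux, t_wave, t_flux, v_grid):
    """Least-squares match of an observed spectrum (wave, flux) to a
    template (t_wave, t_flux) over a grid of trial radial velocities."""
    chi2 = []
    for v in v_grid:
        # non-relativistic Doppler shift of the template wavelengths
        shifted = np.interp(wave, t_wave * (1 + v / C), t_flux)
        chi2.append(np.sum((flux - shifted) ** 2))
    return v_grid[int(np.argmin(chi2))]
```

A production code would refine the grid minimum (e.g. by a parabolic fit) and weight residuals by the per-pixel noise, but the structure of the fit is the same.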
Naeser, Margaret A; Zafonte, Ross; Krengel, Maxine H; Martin, Paula I; Frazier, Judith; Hamblin, Michael R; Knight, Jeffrey A; Meehan, William P; Baker, Errol H
2014-06-01
This pilot, open-protocol study examined whether scalp application of red and near-infrared (NIR) light-emitting diodes (LED) could improve cognition in patients with chronic, mild traumatic brain injury (mTBI). Application of red/NIR light improves mitochondrial function (especially in hypoxic/compromised cells) promoting increased adenosine triphosphate (ATP) important for cellular metabolism. Nitric oxide is released locally, increasing regional cerebral blood flow. LED therapy is noninvasive, painless, and non-thermal (cleared by the United States Food and Drug Administration [FDA], an insignificant risk device). Eleven chronic, mTBI participants (26-62 years of age, 6 males) with nonpenetrating brain injury and persistent cognitive dysfunction were treated for 18 outpatient sessions (Monday, Wednesday, Friday, for 6 weeks), starting at 10 months to 8 years post-mTBI (motor vehicle accident [MVA] or sports-related; and one participant, improvised explosive device [IED] blast injury). Four had a history of multiple concussions. Each LED cluster head (5.35 cm diameter, 500 mW, 22.2 mW/cm(2)) was applied for 10 min to each of 11 scalp placements (13 J/cm(2)). LEDs were placed on the midline from front-to-back hairline; and bilaterally on frontal, parietal, and temporal areas. Neuropsychological testing was performed pre-LED, and at 1 week, and 1 and 2 months after the 18th treatment. A significant linear trend was observed for the effect of LED treatment over time for the Stroop test for Executive Function, Trial 3 inhibition (p=0.004); Stroop, Trial 4 inhibition switching (p=0.003); California Verbal Learning Test (CVLT)-II, Total Trials 1-5 (p=0.003); and CVLT-II, Long Delay Free Recall (p=0.006). Participants reported improved sleep, and fewer post-traumatic stress disorder (PTSD) symptoms, if present. Participants and family reported better ability to perform social, interpersonal, and occupational functions. These open-protocol data suggest that placebo
Williams, Scott G. Buyyounouski, Mark K.; Pickles, Tom; Kestin, Larry; Martinez, Alvaro; Hanlon, Alexandra L.; Duchesne, Gillian M.
2008-03-15
Purpose: To define and incorporate the impact of the percentage of positive biopsy cores (PPC) into a predictive model of prostate cancer radiotherapy biochemical outcome. Methods and Materials: The data of 3264 men with clinically localized prostate cancer treated with external beam radiotherapy at four institutions were retrospectively analyzed. Standard prognostic and treatment factors plus the number of biopsy cores collected and the number positive for malignancy by transrectal ultrasound-guided biopsy were available. The primary endpoint was biochemical failure (bF, Phoenix definition). Multivariate proportional hazards analyses were performed and expressed as a nomogram and the model's predictive ability assessed using the concordance index (c-index). Results: The cohort consisted of 21% low-, 51% intermediate-, and 28% high-risk cancer patients, and 30% had androgen deprivation with radiotherapy. The median PPC was 50% (interquartile range [IQR] 29-67%), and median follow-up was 51 months (IQR 29-71 months). Percentage of positive biopsy cores displayed an independent association with the risk of bF (p = 0.01), as did age, prostate-specific antigen value, Gleason score, clinical stage, androgen deprivation duration, and radiotherapy dose (p < 0.001 for all). Including PPC increased the c-index from 0.72 to 0.73 in the overall model. The influence of PPC varied significantly with radiotherapy dose and clinical stage (p = 0.02 for both interactions), with doses <66 Gy and palpable tumors showing the strongest relationship between PPC and bF. Intermediate-risk patients were poorly discriminated regardless of PPC inclusion (c-index 0.65 for both models). Conclusions: Outcome models incorporating PPC show only minor additional ability to predict biochemical failure beyond those containing standard prognostic factors.
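The concordance index used above to assess the nomogram can be sketched for right-censored survival data as a standard Harrell-style pairwise count (a generic formulation, not the authors' code):

```python
def concordance_index(time, event, risk):
    """c-index: among usable pairs, the fraction in which the model
    assigns higher risk to the subject who fails earlier.
    0.5 is chance-level discrimination; 1.0 is perfect."""
    num = den = 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # a pair is usable if subject i had the event before
            # subject j's (event or censoring) time
            if event[i] and time[i] < time[j]:
                den += 1
                if risk[i] > risk[j]:
                    num += 1            # concordant pair
                elif risk[i] == risk[j]:
                    num += 0.5          # tied risks count half
    return num / den
```

The paper's reported change from 0.72 to 0.73 on adding PPC is a shift in exactly this quantity.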
NASA Technical Reports Server (NTRS)
Dieriam, Todd A.
1990-01-01
Future missions to Mars may require pin-point landing precision, possibly on the order of tens of meters. The ability to reach a target while meeting a dynamic pressure constraint to ensure safe parachute deployment is complicated at Mars by low atmospheric density, high atmospheric uncertainty, and the desire to employ only bank angle control. The vehicle aerodynamic performance requirements and the guidance necessary for a vehicle with a lift-to-drag ratio of 0.5 to 1.5 to maximize the achievable footprint while meeting the constraints are examined. A parametric study of the various factors related to entry vehicle performance in the Mars environment is undertaken to develop general vehicle aerodynamic design requirements. The combination of low lift-to-drag ratio and low atmospheric density at Mars results in a large phugoid motion in the dynamic pressure, which complicates trajectory control. Vehicle ballistic coefficient is demonstrated to be the predominant characteristic affecting final dynamic pressure. Additionally, a speed brake is shown to be ineffective at reducing the final dynamic pressure. An adaptive precision entry atmospheric guidance scheme is presented. The guidance uses a numeric predictor-corrector algorithm to control downrange, an azimuth controller to govern crossrange, and an analytic control law to reduce the final dynamic pressure. Guidance performance is tested against a variety of dispersions, and the results from selected tests are presented. Precision entry using bank angle control only is demonstrated to be feasible at Mars.
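The numeric predictor-corrector idea can be sketched in miniature: predict the downrange for the current bank-angle command, then correct the command with a secant step on the range error. The toy range model and all constants below are invented for illustration; a real implementation would integrate the full entry dynamics for each prediction.

```python
import math

def predictor_corrector_bank(predict_range, target, bank=45.0, iters=8):
    """Hedged sketch of numeric predictor-corrector guidance: predict downrange
    for the current bank command, then correct the command via a secant step
    on the range error (a toy stand-in for full trajectory integration)."""
    prev_bank = bank + 5.0
    prev_err = predict_range(prev_bank) - target
    for _ in range(iters):
        err = predict_range(bank) - target       # predictor: fly-out and compare
        if abs(err) < 1e-6 or err == prev_err:
            break
        bank, prev_bank, prev_err = (             # corrector: secant update
            bank - err * (bank - prev_bank) / (err - prev_err), bank, err)
    return bank

# Toy range model: a shallower bank (more lift-up) flies farther.
toy_range = lambda bank_deg: 800.0 * math.cos(math.radians(bank_deg))  # km
cmd = predictor_corrector_bank(toy_range, target=500.0)
print(round(cmd, 1))  # bank angle whose predicted downrange is 500 km
```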
Gilles, Luc
2005-02-20
Recent progress has been made to compute efficiently the open-loop minimum-variance reconstructor (MVR) for multiconjugate adaptive optics systems by a combination of sparse matrix and iterative techniques. Using spectral analysis, I show that a closed-loop laser guide star multiconjugate adaptive optics control algorithm consisting of MVR cascaded with an integrator control law is unstable. To solve this problem, a computationally efficient pseudo-open-loop control (POLC) method was recently proposed. I give a theoretical proof of the stability of this method and demonstrate its superior performance and robustness against misregistration errors compared with conventional least-squares control. This can be accounted for by the fact that POLC incorporates turbulence statistics through its regularization term, which can be interpreted as spatial filtering, yielding increased robustness to misregistration. For the Gemini-South 8-m telescope multiconjugate system and for median Cerro Pachon seeing, the performance of POLC in terms of rms wave-front error averaged over a 1-arc min field of view is approximately three times superior to that of a least-squares reconstructor. Performance degradation due to 30% translational misregistration on all three mirrors is approximately a 30% increased rms wave-front error, whereas a least-squares reconstructor is unstable at such a misregistration level.
NASA Technical Reports Server (NTRS)
Abrams, D.; Williams, C.
1999-01-01
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.
Stimpson, Shane; Collins, Benjamin; Kochunas, Brendan
2017-03-10
The MPACT code, being developed collaboratively by the University of Michigan and Oak Ridge National Laboratory, is the primary deterministic neutron transport solver being deployed within the Virtual Environment for Reactor Applications (VERA) as part of the Consortium for Advanced Simulation of Light Water Reactors (CASL). In many applications of the MPACT code, transport-corrected scattering has proven to be an obstacle in terms of stability, and considerable effort has been made to resolve the convergence issues that arise from it. Most of the convergence problems seem related to the transport-corrected cross sections, particularly when used in the 2-D method of characteristics (MOC) solver, which is the focus of this work. In this paper, the stability and performance of the 2-D MOC solver in MPACT is evaluated for two iteration schemes: Gauss-Seidel and Jacobi. With the Gauss-Seidel approach, as the MOC solver loops over groups, it uses the flux solution from the previous group to construct the inscatter source for the next group. Alternatively, the Jacobi approach uses only the fluxes from the previous outer iteration to determine the inscatter source for each group. Consequently, for the Jacobi iteration, the loop over groups can be moved from the outermost loop, as is the case with the Gauss-Seidel sweeper, to the innermost loop, allowing for a substantial increase in efficiency by minimizing the overhead of retrieving segment, region, and surface index information from the ray tracing data. Several test problems are assessed: (1) Babcock & Wilcox 1810 Core I, (2) Dimple S01A-Sq, (3) VERA Progression Problem 5a, and (4) VERA Problem 2a. The Jacobi iteration exhibits better stability than Gauss-Seidel, allowing converged solutions to be obtained over a much wider range of iteration control parameters. Additionally, the MOC solve time with the Jacobi approach is roughly 2.0-2.5× faster per sweep. While the performance and stability of
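The difference between the two iteration schemes can be sketched on a toy infinite-medium multigroup problem, where the "transport sweep" collapses to a scalar divide. All cross sections below are invented; the point is only where each scheme takes its inscatter fluxes from:

```python
import numpy as np

# Toy 3-group infinite-medium problem: total cross sections and a
# downscatter-dominated scattering matrix S[g_to, g_from] (values invented).
sigma_t = np.array([1.0, 1.2, 1.5])
S = np.array([[0.2, 0.0, 0.0],
              [0.3, 0.4, 0.0],
              [0.1, 0.3, 0.6]])
q_ext = np.array([1.0, 0.0, 0.0])   # external source in group 1 only

def outer_iteration(phi_old, jacobi):
    """One outer iteration. Gauss-Seidel builds the inscatter source from
    already-updated groups; Jacobi uses only the previous outer iterate."""
    phi = phi_old.copy()
    for g in range(3):
        src = phi_old if jacobi else phi
        phi[g] = (q_ext[g] + S[g] @ src) / sigma_t[g]
    return phi

for jacobi in (False, True):
    phi = np.zeros(3)
    for _ in range(200):
        phi = outer_iteration(phi, jacobi)
    print(np.round(phi, 4))  # both schemes converge to the same group fluxes
```

Because the Jacobi source depends only on the previous outer iterate, the group loop carries no ordering dependence and can be moved innermost, which is the restructuring the paper exploits.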
The evaluation of the OSGLR algorithm for restructurable controls
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.
1986-01-01
The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques, and the OSGLR algorithm in particular, is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions, including maneuvers, nonzero flap deflections, different turbulence levels, and steady winds, were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.
A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models
NASA Astrophysics Data System (ADS)
Li, Qia; Micchelli, Charles A.; Shen, Lixin; Xu, Yuesheng
2012-09-01
Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss-Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed.
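The flavor of a proximity-operator fixed-point iteration can be shown on the simpler L1-regularized least-squares problem. This is plain proximal-gradient (ISTA) with the soft-thresholding prox, not the authors' L1/TV scheme or its Gauss-Seidel acceleration; the problem data are synthetic:

```python
import numpy as np

def soft(x, t):
    """Proximity operator of t*||.||_1: componentwise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, step, iters=500):
    """Fixed-point iteration x = prox_{step*lam*||.||_1}(x - step * A^T(Ax - b)),
    whose fixed points are exactly the minimizers of 0.5||Ax-b||^2 + lam||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x - step * A.T @ (A @ x - b), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]                      # sparse ground truth
b = A @ x_true
x_hat = ista(A, b, lam=0.1, step=1.0 / np.linalg.norm(A, 2) ** 2)
print(np.round(x_hat, 2))  # sparse estimate close to x_true
```

The characterization "solutions are fixed points of a proximity map" is what licenses iterating the map; acceleration schemes such as the componentwise Gauss-Seidel sweep of the paper reorder how the coordinates of that map are evaluated.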
Ouyang, Liangqi; Shaw, Crystal L; Kuo, Chin-Chen; Griffin, Amy L; Martin, David C
2014-04-01
-polymerization time intervals, the polymerization did not cause significant deficits in performance of the DA task, suggesting that hippocampal function was not impaired by PEDOT deposition. However, GFAP+ and ED-1+ cells were also found at the deposition site two weeks after the polymerization, suggesting potential secondary scarring. Therefore, less extensive deposition or milder deposition conditions may be desirable to minimize this scarring while maintaining decreased system impedance.
NASA Astrophysics Data System (ADS)
Ouyang, Liangqi; Shaw, Crystal L.; Kuo, Chin-chen; Griffin, Amy L.; Martin, David C.
2014-04-01
-polymerization time intervals, the polymerization did not cause significant deficits in performance of the DA task, suggesting that hippocampal function was not impaired by PEDOT deposition. However, GFAP+ and ED-1+ cells were also found at the deposition site two weeks after the polymerization, suggesting potential secondary scarring. Therefore, less extensive deposition or milder deposition conditions may be desirable to minimize this scarring while maintaining decreased system impedance.
A Fast and Efficient Algorithm for Mining Top-k Nodes in Complex Networks
NASA Astrophysics Data System (ADS)
Liu, Dong; Jing, Yun; Zhao, Jing; Wang, Wenjun; Song, Guojie
2017-02-01
One of the key problems in social network analysis is influence maximization, which has great significance in both theory and practical applications. Given a complex network and a positive integer k, influence maximization asks for the k seed nodes that trigger the largest expected number of activations among the remaining nodes. Existing algorithms are mainly divided into propagation-based and topology-based algorithms. Propagation-based algorithms optimize the influence spread process directly, so their influence spread significantly outperforms that of topology-based algorithms, but they can still take days to complete on large networks. Topology-based algorithms, in contrast, rely on intuitive parameter statistics and static topological properties; their running times are extremely short, but their influence spread is unstable. In this paper, we propose a novel topology-based algorithm based on a local index rank (LIR). The influence spread of our algorithm is close to that of propagation-based algorithms and sometimes exceeds it. Moreover, the running time of our algorithm is millions of times shorter than that of propagation-based algorithms. Our experimental results show that our algorithm has a good and stable performance under both the IC and LT models.
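The topology-based idea can be sketched with a simple degree-based local index. The exact LIR definition in the paper may differ; this hedged version picks "local leaders" (nodes no neighbor out-ranks by degree) and ranks them by degree, which captures the intent of spreading seeds across separate dense neighborhoods:

```python
from collections import defaultdict

def top_k_local_leaders(edges, k):
    """Hedged sketch of a local-index ranking: a node is a 'local leader' if no
    neighbour has a strictly larger degree; leaders are ranked by degree, and
    plain degree ranking fills in if fewer than k leaders exist."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(nb) for v, nb in adj.items()}
    leaders = [v for v in adj if all(deg[v] >= deg[u] for u in adj[v])]
    leaders.sort(key=lambda v: -deg[v])
    rest = sorted((v for v in adj if v not in set(leaders)),
                  key=lambda v: -deg[v])
    return (leaders + rest)[:k]

# Two star-like communities joined by a bridge: the hubs 0 and 5 get picked,
# whereas a pure global-degree top-2 could also pick two nodes from one community.
edges = [(0, 1), (0, 2), (0, 3), (0, 4),
         (5, 6), (5, 7), (5, 8), (5, 9), (4, 9)]
top2 = top_k_local_leaders(edges, 2)
print(top2)  # the two hubs, 0 and 5
```

Because the index of each node depends only on its immediate neighborhood, one pass over the adjacency lists suffices, which is why such methods run orders of magnitude faster than simulating the spread process.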
A Fast and Efficient Algorithm for Mining Top-k Nodes in Complex Networks.
Liu, Dong; Jing, Yun; Zhao, Jing; Wang, Wenjun; Song, Guojie
2017-02-27
One of the key problems in social network analysis is influence maximization, which has great significance in both theory and practical applications. Given a complex network and a positive integer k, influence maximization asks for the k seed nodes that trigger the largest expected number of activations among the remaining nodes. Existing algorithms are mainly divided into propagation-based and topology-based algorithms. Propagation-based algorithms optimize the influence spread process directly, so their influence spread significantly outperforms that of topology-based algorithms, but they can still take days to complete on large networks. Topology-based algorithms, in contrast, rely on intuitive parameter statistics and static topological properties; their running times are extremely short, but their influence spread is unstable. In this paper, we propose a novel topology-based algorithm based on a local index rank (LIR). The influence spread of our algorithm is close to that of propagation-based algorithms and sometimes exceeds it. Moreover, the running time of our algorithm is millions of times shorter than that of propagation-based algorithms. Our experimental results show that our algorithm has a good and stable performance under both the IC and LT models.
A Fast and Efficient Algorithm for Mining Top-k Nodes in Complex Networks
Liu, Dong; Jing, Yun; Zhao, Jing; Wang, Wenjun; Song, Guojie
2017-01-01
One of the key problems in social network analysis is influence maximization, which has great significance in both theory and practical applications. Given a complex network and a positive integer k, influence maximization asks for the k seed nodes that trigger the largest expected number of activations among the remaining nodes. Existing algorithms are mainly divided into propagation-based and topology-based algorithms. Propagation-based algorithms optimize the influence spread process directly, so their influence spread significantly outperforms that of topology-based algorithms, but they can still take days to complete on large networks. Topology-based algorithms, in contrast, rely on intuitive parameter statistics and static topological properties; their running times are extremely short, but their influence spread is unstable. In this paper, we propose a novel topology-based algorithm based on a local index rank (LIR). The influence spread of our algorithm is close to that of propagation-based algorithms and sometimes exceeds it. Moreover, the running time of our algorithm is millions of times shorter than that of propagation-based algorithms. Our experimental results show that our algorithm has a good and stable performance under both the IC and LT models. PMID:28240238
Fast autodidactic adaptive equalization algorithms
NASA Astrophysics Data System (ADS)
Hilal, Katia
Autodidactic equalization by adaptive filtering is addressed in a mobile radio communication context. A general method, using an adaptive stochastic gradient Bussgang-type algorithm, is given to deduce two low-computation-cost algorithms: one equivalent to the initial algorithm, and the other having improved convergence properties thanks to a block criterion minimization. Two existing algorithms are reworked: the Godard algorithm and the decision-directed algorithm. Using a normalization procedure and block normalization, their performances are improved and their common points are evaluated. These common points are used to propose an algorithm retaining the advantages of the two initial algorithms; it thus inherits the robustness of the Godard algorithm and the precision and phase correction of the decision-directed algorithm. The work is completed by a study of the stable states of Bussgang-type algorithms and of the stability of the Godard algorithms, initial and normalized. Simulation of these algorithms, carried out in a mobile radio communications context under severe propagation-channel conditions, gave a 75% reduction in the number of samples required for processing compared with the initial algorithms. The improvement in residual error was much smaller. These performances are close to making autodidactic equalization usable in mobile radio systems.
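The Godard algorithm referenced above is a Bussgang-type stochastic-gradient update that needs no training sequence: it penalizes deviations of the output modulus from a constant. A minimal sketch of the p=2 case (the constant-modulus algorithm) on a QPSK signal through a mild two-tap channel; the step size, tap count, and channel are invented:

```python
import numpy as np

def cma_equalize(x, num_taps=5, mu=1e-3, r2=1.0, passes=2):
    """Godard p=2 (constant-modulus) blind equalizer: stochastic-gradient
    descent on the dispersion cost E[(|y|^2 - R2)^2], with y = w^H u."""
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0                      # center-spike initialization
    for _ in range(passes):
        for n in range(num_taps, len(x)):
            u = x[n - num_taps:n][::-1]         # regressor, most recent first
            y = np.vdot(w, u)                   # equalizer output w^H u
            e = y * (np.abs(y) ** 2 - r2)       # Godard error term
            w -= mu * np.conj(e) * u            # gradient step on the taps
    return w

rng = np.random.default_rng(1)
s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], 4000) / np.sqrt(2)  # unit-modulus QPSK
x = np.convolve(s, [1.0, 0.3])[: len(s)]        # mild intersymbol interference
w = cma_equalize(x)
y = np.array([np.vdot(w, x[n - 5:n][::-1]) for n in range(5, len(x))])
dispersion = np.mean((np.abs(y) ** 2 - 1) ** 2)
print(round(float(dispersion), 3))  # much smaller than for the unequalized signal
```

The decision-directed algorithm replaces the modulus error with the distance to the nearest constellation point, which is more precise once the eye is open but less robust at start-up, exactly the trade-off the combined algorithm in the abstract targets.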
NASA Astrophysics Data System (ADS)
Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed
2017-01-01
For the first time, a new variable selection method based on swarm intelligence, namely the firefly algorithm, is coupled with three different multivariate calibration models, namely concentration residual augmented classical least squares, artificial neural network, and support vector regression, applied to UV spectral data. A comparative study between the firefly algorithm and the well-known genetic algorithm was developed, and it revealed the superiority of this new, powerful algorithm. Moreover, different statistical tests were performed and no significant differences were found between the models regarding their predictive ability. This ensures that simpler and faster models were obtained without any deterioration of the quality of the calibration.
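The core firefly update, in which each fly moves toward brighter (better-scoring) flies with distance-decaying attractiveness plus a shrinking random walk, can be sketched on a toy continuous objective. The paper applies it to wavelength selection; here all parameters and the test function are illustrative only:

```python
import numpy as np

def firefly_minimize(f, dim=2, n=15, iters=60, beta0=1.0, gamma=0.05,
                     alpha=0.2, seed=0):
    """Minimal firefly algorithm: dimmer flies move toward brighter (lower-f)
    ones with attractiveness beta0*exp(-gamma*r^2), plus a random walk whose
    amplitude shrinks over the iterations."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-4, 4, size=(n, dim))
    for t in range(iters):
        fit = np.array([f(xi) for xi in x])
        step = alpha * (1 - t / iters)              # anneal the randomness
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:                 # j is brighter: attract i
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + step * rng.uniform(-0.5, 0.5, dim)
                    fit[i] = f(x[i])
    return x[np.argmin([f(xi) for xi in x])]

sphere = lambda v: float(np.sum(v ** 2))            # toy objective, optimum at 0
best = firefly_minimize(sphere)
print(np.round(best, 2))  # near the origin
```

For variable selection the same mechanics are typically applied to binary or thresholded position vectors, with the calibration model's cross-validation error as the brightness.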
A segmentation algorithm for noisy images
Xu, Y.; Olman, V.; Uberbacher, E.C.
1996-12-31
This paper presents a 2-D image segmentation algorithm and addresses issues related to its performance on noisy images. The algorithm segments an image by first constructing a minimum spanning tree representation of the image and then partitioning the spanning tree into subtrees representing different homogeneous regions. The spanning tree is partitioned in such a way that the sum of gray-level variations over all partitioned subtrees is minimized, under the constraints that each subtree has at least a specified number of pixels and that two adjacent subtrees have significantly different "average" gray levels. Two types of noise, transmission errors and additive Gaussian noise, are considered and their effects on the segmentation algorithm are studied. Evaluation results show that the segmentation algorithm is robust in the presence of these two types of noise.
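The spanning-tree idea can be sketched on a tiny grayscale image: weight the 4-neighbor grid edges by gray-level difference, grow the minimum spanning tree with Kruskal's algorithm, and treat heavy tree edges as cuts so the remaining subtrees form homogeneous regions. The paper's variance-minimizing partition with region-size constraints is more elaborate; this shows only the skeleton:

```python
def segment(img, cut_thresh):
    """Hedged MST-segmentation sketch: Kruskal + union-find over the grid graph,
    merging across an edge only when its gray-level difference is small."""
    h, w = len(img), len(img[0])
    edges = []
    for r in range(h):
        for c in range(w):
            if c + 1 < w:
                edges.append((abs(img[r][c] - img[r][c + 1]), (r, c), (r, c + 1)))
            if r + 1 < h:
                edges.append((abs(img[r][c] - img[r + 1][c]), (r, c), (r + 1, c)))
    parent = {(r, c): (r, c) for r in range(h) for c in range(w)}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]   # path halving
            p = parent[p]
        return p

    # Process edges in MST (weight) order; light edges merge regions, while
    # heavier spanning-tree edges are left as the cuts between regions.
    for wgt, a, b in sorted(edges):
        ra, rb = find(a), find(b)
        if ra != rb and wgt <= cut_thresh:
            parent[ra] = rb
    labels = {p: find(p) for p in parent}
    return len(set(labels.values())), labels

img = [[10, 11, 90, 91],
       [10, 12, 92, 90],
       [11, 10, 91, 93]]
n_regions, _ = segment(img, cut_thresh=5)
print(n_regions)  # 2: the dark left half and the bright right half
```

Robustness to the noise types studied in the paper comes from thresholding tree edges rather than raw pixel pairs, so isolated corrupted pixels split off as small subtrees instead of dragging whole regions together.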
A Comprehensive Review of Swarm Optimization Algorithms
2015-01-01
Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significantly different performances. The results indicate the overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other considered approaches. PMID:25992655
A comprehensive review of swarm optimization algorithms.
Ab Wahab, Mohd Nadhir; Nefti-Meziani, Samia; Atyabi, Adham
2015-01-01
Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significantly different performances. The results indicate the overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other considered approaches.
2012-01-01
Background Multi-target therapeutics has been shown to be effective for treating complex diseases, and currently, it is a common practice to combine multiple drugs to treat such diseases to optimize the therapeutic outcomes. However, considering the huge number of possible ways to mix multiple drugs at different concentrations, it is practically difficult to identify the optimal drug combination through exhaustive testing. Results In this paper, we propose a novel stochastic search algorithm, called the adaptive reference update (ARU) algorithm, that can provide an efficient and systematic way for optimizing multi-drug cocktails. The ARU algorithm iteratively updates the drug combination to improve its response, where the update is made by comparing the response of the current combination with that of a reference combination, based on which the beneficial update direction is predicted. The reference combination is continuously updated based on the drug response values observed in the past, thereby adapting to the underlying drug response function. To demonstrate the effectiveness of the proposed algorithm, we evaluated its performance based on various multi-dimensional drug functions and compared it with existing algorithms. Conclusions Simulation results show that the ARU algorithm significantly outperforms existing stochastic search algorithms, including the Gur Game algorithm. In fact, the ARU algorithm can more effectively identify potent drug combinations and it typically spends fewer iterations for finding effective combinations. Furthermore, the ARU algorithm is robust to random fluctuations and noise in the measured drug response, which makes the algorithm well-suited for practical drug optimization applications. PMID:23134742
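The compare-with-a-reference idea behind this class of search algorithms can be sketched as follows. Here the reference is simply the incumbent combination's response, whereas the published ARU rule adapts the reference from past observations; the response surface and all constants are invented:

```python
import random

def reference_guided_search(response, dims, levels=10, iters=400, seed=0):
    """Hedged sketch of reference-guided stochastic drug-combination search:
    perturb one concentration index at a time and keep the change only if the
    measured response beats the reference combination (here the incumbent;
    the published ARU rule updates the reference adaptively)."""
    rng = random.Random(seed)
    x = [rng.randrange(levels) for _ in range(dims)]
    ref_response = response(x)
    for _ in range(iters):
        d = rng.randrange(dims)                      # pick one drug to perturb
        trial = list(x)
        trial[d] = min(levels - 1, max(0, trial[d] + rng.choice([-1, 1])))
        r = response(trial)                          # 'measure' the new cocktail
        if r > ref_response:                         # beneficial direction: accept
            x, ref_response = trial, r
    return x

# Toy smooth response surface with a single optimum at indices (7, 3).
resp = lambda v: -((v[0] - 7) ** 2 + (v[1] - 3) ** 2)
best = reference_guided_search(resp, dims=2)
print(best)  # settles at [7, 3]
```

The appeal in the drug setting is that each iteration needs only one new measurement, so the search budget maps directly onto the number of wet-lab experiments.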
Two Meanings of Algorithmic Mathematics.
ERIC Educational Resources Information Center
Maurer, Stephen B.
1984-01-01
Two mathematical topics are interpreted from the viewpoints of traditional (performing algorithms) and contemporary (creating algorithms and thinking in terms of them for solving problems and developing theory) algorithmic mathematics. The two topics are Horner's method for evaluating polynomials and Gauss's method for solving systems of linear…
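Horner's method, the first of the two topics, rewrites the polynomial as nested multiplications so a degree-n polynomial costs n multiplications and n additions:

```python
def horner(coeffs, x):
    """Evaluate a polynomial by Horner's rule.

    coeffs are highest-degree first, so
    p(x) = ((c0*x + c1)*x + c2)*x + ... + cn."""
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

# p(x) = 2x^3 - 6x^2 + 2x - 1, evaluated at x = 3:
print(horner([2, -6, 2, -1], 3))  # 5
```

The "two meanings" distinction in the abstract is visible even here: performing the rule by hand versus recognizing the loop invariant (acc always holds the value of the leading coefficients evaluated at x) that justifies creating it.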
A survey of DNA motif finding algorithms
Das, Modan K; Dai, Ho-Kwok
2007-01-01
Background Unraveling the mechanisms that regulate gene expression is a major challenge in biology. An important task in this challenge is to identify regulatory elements, especially the binding sites in deoxyribonucleic acid (DNA) for transcription factors. These binding sites are short DNA segments that are called motifs. Recent advances in genome sequence availability and in high-throughput gene expression analysis technologies have allowed for the development of computational methods for motif finding. As a result, a large number of motif finding algorithms have been implemented and applied to various motif models over the past decade. This survey reviews the latest developments in DNA motif finding algorithms. Results Earlier algorithms use promoter sequences of coregulated genes from single genome and search for statistically overrepresented motifs. Recent algorithms are designed to use phylogenetic footprinting or orthologous sequences and also an integrated approach where promoter sequences of coregulated genes and phylogenetic footprinting are used. All the algorithms studied have been reported to correctly detect the motifs that have been previously detected by laboratory experimental approaches, and some algorithms were able to find novel motifs. However, most of these motif finding algorithms have been shown to work successfully in yeast and other lower organisms, but perform significantly worse in higher organisms. Conclusion Despite considerable efforts to date, DNA motif finding remains a complex challenge for biologists and computer scientists. Researchers have taken many different approaches in developing motif discovery tools and the progress made in this area of research is very encouraging. Performance comparison of different motif finding tools and identification of the best tools have proven to be a difficult task because tools are designed based on algorithms and motif models that are diverse and complex and our incomplete understanding of
Analysis of image thresholding segmentation algorithms based on swarm intelligence
NASA Astrophysics Data System (ADS)
Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo
2013-03-01
Swarm intelligence-based image thresholding segmentation algorithms are playing an important role in the research field of image segmentation. In this paper, we briefly introduce the theories of four existing image segmentation algorithms based on swarm intelligence: the fish swarm algorithm, artificial bee colony, the bacteria foraging algorithm, and particle swarm optimization. Then several image benchmarks are tested in order to show the differences in segmentation accuracy, time consumption, convergence, and robustness to salt-and-pepper noise and Gaussian noise among these four algorithms. Through these comparisons, this paper gives a qualitative analysis of the performance differences among the four algorithms. The conclusions in this paper provide significant guidance for practical image segmentation.
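As a concrete instance of swarm-based thresholding, a minimal particle swarm can search for the gray level maximizing Otsu's between-class variance, a common fitness in this literature. The histogram and all PSO constants below are invented for illustration:

```python
import random

def between_class_variance(hist, t):
    """Otsu criterion: weighted squared distance between the two class means
    obtained by splitting the histogram at threshold t."""
    total = sum(hist)
    w0 = sum(hist[:t])
    w1 = total - w0
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = sum(i * h for i, h in enumerate(hist[:t])) / w0
    mu1 = sum(i * h for i, h in enumerate(hist[t:], start=t)) / w1
    return (w0 / total) * (w1 / total) * (mu0 - mu1) ** 2

def pso_threshold(hist, n=10, iters=40, seed=0):
    """Minimal particle swarm over a single threshold variable."""
    rng = random.Random(seed)
    L = len(hist)
    pos = [rng.uniform(1, L - 1) for _ in range(n)]
    vel = [0.0] * n
    pbest = list(pos)
    gbest = max(pos, key=lambda p: between_class_variance(hist, int(p)))
    for _ in range(iters):
        for i in range(n):
            vel[i] = (0.7 * vel[i]                                  # inertia
                      + 1.5 * rng.random() * (pbest[i] - pos[i])    # cognitive
                      + 1.5 * rng.random() * (gbest - pos[i]))      # social
            pos[i] = min(L - 1, max(1, pos[i] + vel[i]))
            f = between_class_variance(hist, int(pos[i]))
            if f > between_class_variance(hist, int(pbest[i])):
                pbest[i] = pos[i]
            if f > between_class_variance(hist, int(gbest)):
                gbest = pos[i]
    return int(gbest)

# Bimodal toy histogram: a dark mode near bin 2 and a bright mode near bin 11.
hist = [0, 5, 20, 5, 0, 0, 0, 0, 0, 0, 4, 18, 6, 0, 0, 0]
t = pso_threshold(hist)
print(t)  # a threshold in the valley between the two modes
```

For a single threshold an exhaustive scan would of course be faster; the swarm formulation pays off in the multilevel-threshold case, where the search space grows combinatorially.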
Automatic design of decision-tree algorithms with evolutionary algorithms.
Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A
2013-01-01
This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.
Benchmarking homogenization algorithms for monthly data
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratiannil, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.; Willett, K.
2013-09-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies. The algorithms were validated against a realistic benchmark dataset. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including i) the centered root mean square error relative to the true homogeneous values at various averaging scales, ii) the error in linear trend estimates and iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that currently automatic algorithms can perform as well as manual ones.
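The first metric above, the centered root mean square error, removes each series' mean before comparing, so a constant offset in a homogenized series (e.g. a differing reference period) is not counted as error. A minimal sketch with hypothetical monthly values:

```python
import math

def centered_rmse(est, truth):
    """Centred RMSE: RMS difference of the two series after subtracting each
    series' own mean, so constant offsets do not contribute."""
    me = sum(est) / len(est)
    mt = sum(truth) / len(truth)
    return math.sqrt(sum(((e - me) - (t - mt)) ** 2
                         for e, t in zip(est, truth)) / len(est))

homogenized = [10.2, 10.8, 11.1, 10.9]   # hypothetical monthly temperatures
true_series = [10.0, 10.6, 11.0, 10.8]
err = centered_rmse(homogenized, true_series)
print(round(err, 3))  # 0.05: the constant 0.2-ish offset is mostly removed
```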
Real-time Algorithms for Sparse Neuronal System Identification.
Sheikhattar, Alireza; Babadi, Behtash
2016-08-01
We consider the problem of sparse adaptive neuronal system identification, where the goal is to estimate the sparse time-varying neuronal model parameters in an online fashion from neural spiking observations. We develop two adaptive filters based on greedy estimation techniques and regularized log-likelihood maximization. We apply the proposed algorithms to simulated spiking data as well as experimentally recorded data from the ferret's primary auditory cortex during performance of auditory tasks. Our results reveal significant performance gains achieved by the proposed algorithms in terms of sparse identification and trackability, compared to existing algorithms.
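One common way to make an adaptive filter track sparse parameters online is to add a zero-attracting (soft-threshold) step to LMS. This is a generic sketch of that idea on synthetic data, not the paper's greedy or regularized log-likelihood estimators for spiking observations:

```python
import numpy as np

def sparse_lms(x, d, num_taps=8, mu=0.05, lam=1e-3):
    """LMS with a soft-threshold step after each update, which drives small
    taps to exactly zero and so tracks sparse time-varying filters."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]              # [x[n], x[n-1], ...]
        e = d[n] - w @ u                                 # prediction error
        w += mu * e * u                                  # standard LMS step
        w = np.sign(w) * np.maximum(np.abs(w) - lam, 0)  # sparsify small taps
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
w_true = np.zeros(8)
w_true[[1, 5]] = [0.8, -0.5]                             # sparse unknown system
d = np.convolve(x, w_true)[: len(x)]
w_hat = sparse_lms(x, d)
print(np.round(w_hat, 2))  # nonzero essentially only at taps 1 and 5
```

The thresholding trades a small bias on the active taps for exact zeros elsewhere, which is the sparse-identification/trackability trade-off the abstract's comparisons quantify.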
TrackEye tracking algorithm characterization
NASA Astrophysics Data System (ADS)
Valley, Michael T.; Shields, Robert W.; Reed, Jack M.
2004-10-01
TrackEye is a film digitization and target tracking system that offers the potential for quantitatively measuring the dynamic state variables (e.g., absolute and relative position, orientation, linear and angular velocity/acceleration, spin rate, trajectory, angle of attack, etc.) for moving objects using captured single or dual view image sequences. At the heart of the system is a set of tracking algorithms that automatically find and quantify the location of user selected image details such as natural test article features or passive fiducials that have been applied to cooperative test articles. This image position data is converted into real world coordinates and rates with user specified information such as the image scale and frame rate. Though tracking methods such as correlation algorithms are typically robust by nature, the accuracy and suitability of each TrackEye tracking algorithm is in general unknown even under good imaging conditions. The challenges of optimal algorithm selection and algorithm performance/measurement uncertainty are even more significant for long range tracking of high-speed targets where temporally varying atmospheric effects degrade the imagery. This paper will present the preliminary results from a controlled test sequence used to characterize the performance of the TrackEye tracking algorithm suite.
TrackEye tracking algorithm characterization.
Reed, Jack W.; Shields, Rob W; Valley, Michael T.
2004-08-01
TrackEye is a film digitization and target tracking system that offers the potential for quantitatively measuring the dynamic state variables (e.g., absolute and relative position, orientation, linear and angular velocity/acceleration, spin rate, trajectory, angle of attack, etc.) for moving objects using captured single or dual view image sequences. At the heart of the system is a set of tracking algorithms that automatically find and quantify the location of user selected image details such as natural test article features or passive fiducials that have been applied to cooperative test articles. This image position data is converted into real world coordinates and rates with user specified information such as the image scale and frame rate. Though tracking methods such as correlation algorithms are typically robust by nature, the accuracy and suitability of each TrackEye tracking algorithm is in general unknown even under good imaging conditions. The challenges of optimal algorithm selection and algorithm performance/measurement uncertainty are even more significant for long range tracking of high-speed targets where temporally varying atmospheric effects degrade the imagery. This paper will present the preliminary results from a controlled test sequence used to characterize the performance of the TrackEye tracking algorithm suite.