NASA Astrophysics Data System (ADS)
Williams, Arnold C.; Pachowicz, Peter W.
2004-09-01
Current mine detection research indicates that no single sensor or single look from a sensor will detect mines/minefields in a real-time manner at a performance level suitable for a forward maneuver unit. Hence, the integrated development of detectors and fusion algorithms is of primary importance. A problem in this development process has been the evaluation of these algorithms with relatively small data sets, leading to anecdotal and frequently overtrained results. These anecdotal results are often unreliable and conflicting among various sensors and algorithms. Consequently, the physical phenomena that ought to be exploited and the performance benefits of this exploitation are often ambiguous. The Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate has collected large amounts of multisensor data such that statistically significant evaluations of detection and fusion algorithms can be obtained. Even with these large data sets, care must be taken in algorithm design and data processing to achieve statistically significant performance results for combined detectors and fusion algorithms. This paper discusses statistically significant detection and combined multilook fusion results for the Ellipse Detector (ED) and the Piecewise Level Fusion Algorithm (PLFA). These performance results are characterized by ROC curves obtained by processing multilook, high-resolution SAR data from the Veridian X-Band radar. We discuss the implications of these results for mine detection and the importance of statistical significance, sample size, ground truth, and algorithm design in performance evaluation.
Algorithm for Detecting Significant Locations from Raw GPS Data
NASA Astrophysics Data System (ADS)
Kami, Nobuharu; Enomoto, Nobuyuki; Baba, Teruyuki; Yoshikawa, Takashi
We present a fast algorithm for probabilistically extracting significant locations from raw GPS data based on data point density. Extracting significant locations from raw GPS data is the first essential step of algorithms designed for location-aware applications. Assuming that a location is significant if users spend a certain time around that area, most current algorithms compare spatial/temporal variables, such as stay duration and roaming diameter, with given fixed thresholds to extract significant locations. However, the appropriate threshold values are not known a priori, and algorithms with fixed thresholds are inherently error-prone, especially under high noise levels. Moreover, for N data points, they are generally O(N²) algorithms since pairwise distance computation is required. We developed a fast algorithm for selective data point sampling around significant locations based on density information, constructing random histograms using locality sensitive hashing. Evaluations show competitive performance in detecting significant locations even under high noise levels.
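A minimal sketch of the density idea, assuming quantization-based locality sensitive hashing with randomly offset grids; the cell size, number of hashes, and occupancy threshold are illustrative choices, not the authors' implementation:

```python
import random
from collections import Counter

def density_filter(points, cell=0.001, n_hashes=8, min_avg=10):
    """Keep GPS points that land in dense regions, estimated with
    randomly offset grid histograms (a simple quantization LSH)."""
    offsets = [(random.uniform(0, cell), random.uniform(0, cell))
               for _ in range(n_hashes)]
    hists = []
    for ox, oy in offsets:
        h = Counter((int((lat + ox) / cell), int((lon + oy) / cell))
                    for lat, lon in points)
        hists.append((h, ox, oy))
    keep = []
    for lat, lon in points:
        counts = [h[(int((lat + ox) / cell), int((lon + oy) / cell))]
                  for h, ox, oy in hists]
        if sum(counts) / n_hashes >= min_avg:  # average bucket occupancy
            keep.append((lat, lon))
    return keep
```

Unlike pairwise-distance approaches, this runs in O(N · n_hashes); the random offsets average out the arbitrary placement of grid boundaries.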
High-performance combinatorial algorithms
Pinar, Ali
2003-10-31
Combinatorial algorithms have long played an important role in many applications of scientific computing, such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for the development of a high-performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high-performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore, there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications.
Benchmarking image fusion algorithm performance
NASA Astrophysics Data System (ADS)
Howell, Christopher L.
2012-06-01
Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with regard to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be baselined to whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information-theory-based image metrics are presented and compared to perception test results. The results show an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the established performance benchmark.
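As one concrete example of an information-theoretic image metric of the kind compared against perception results here, the sketch below computes mutual information between a fused image and a source band from their joint histogram; the bin count and the choice of MI itself are illustrative assumptions, not the paper's specific metric:

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """MI between two images, from their joint gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# One common fusion score: information the fused image carries about
# both source bands.
# score = mutual_information(fused, band1) + mutual_information(fused, band2)
```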
Performance analysis of cone detection algorithms.
Mariotti, Letizia; Devaney, Nicholas
2015-04-01
Many algorithms have been proposed to help clinicians evaluate cone density and spacing, as these may be related to the onset of retinal diseases. However, there has been no rigorous comparison of the performance of these algorithms. In addition, the performance of such algorithms is typically determined by comparison with human observers. Here we propose a technique to simulate realistic images of the cone mosaic. We use the simulated images to test the performance of three popular cone detection algorithms, and we introduce an algorithm which is used by astronomers to detect stars in astronomical images. We use Free-Response Receiver Operating Characteristic (FROC) curves to evaluate and compare the performance of the four algorithms. This allows us to optimize the performance of each algorithm. We observe that performance is significantly enhanced by up-sampling the images. We investigate the effect of noise and image quality on cone mosaic parameters estimated using the different algorithms, finding that the estimated regularity is the most sensitive parameter. PMID:26366758
Belief network algorithms: A study of performance
Jitnah, N.
1996-12-31
We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.
TIRS stray light correction: algorithms and performance
NASA Astrophysics Data System (ADS)
Gerace, Aaron; Montanaro, Matthew; Beckmann, Tim; Tyrrell, Kaitlin; Cozzo, Alexandra; Carney, Trevor; Ngan, Vicki
2015-09-01
The Thermal Infrared Sensor (TIRS) onboard Landsat 8 was tasked with continuing thermal band measurements of the Earth as part of the Landsat program. From first light in early 2013, there were obvious indications that stray light was contaminating the thermal image data collected from the instrument. Traditional calibration techniques did not perform adequately, as non-uniform banding was evident in the corrected data, and error in absolute estimates of temperature over trusted buoy sites varied seasonally and, in the worst cases, exceeded 9 K. The development of an operational technique to remove the effects of the stray light has become a high priority to enhance the utility of the TIRS data. This paper introduces the current algorithm being tested by Landsat's calibration and validation team to remove stray light from TIRS image data. The integration of the algorithm into the EROS test system is discussed, with strategies for operationalizing the method emphasized. Techniques for assessing the methodologies used are presented and potential refinements to the algorithm are suggested. Initial results indicate that the proposed algorithm significantly reduces stray light artifacts in the image data. Specifically, visual and quantitative evidence suggests that the algorithm practically eliminates banding in the image data. Additionally, the seasonal variation in absolute errors is flattened and, in the worst case, errors of over 9 K are reduced to within 2 K. Future work focuses on refining the algorithm based on these findings and applying traditional calibration techniques to enhance the final image product.
Discovering simple DNA sequences by the algorithmic significance method.
Milosavljević, A; Jurka, J
1993-08-01
A new method, 'algorithmic significance', is proposed as a tool for discovery of patterns in DNA sequences. The main idea is that patterns can be discovered by finding ways to encode the observed data concisely. In this sense, the method can be viewed as a formal version of Occam's Razor. In this paper the method is applied to discover significantly simple DNA sequences. We define DNA sequences to be simple if they contain repeated occurrences of certain 'words' and thus can be encoded in a small number of bits. This definition includes minisatellites and microsatellites. A standard dynamic programming algorithm for data compression is applied to compute the minimal encoding lengths of sequences in linear time. An electronic mail server for identification of simple sequences based on the proposed method has been installed at the Internet address pythia@anl.gov. PMID:8402207
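The core computation is an encoding-length comparison: if a sequence can be encoded d bits more concisely than the 2-bits-per-base null model, its significance is bounded by 2^(-d). A toy sketch of that bound, using a general-purpose compressor as a stand-in for the authors' dynamic-programming word encoder:

```python
import zlib

def algorithmic_significance(seq):
    """Bound on the probability that `seq` is this compressible
    under the uniform null model (A, C, G, T at 2 bits per base)."""
    null_bits = 2 * len(seq)
    coded_bits = 8 * len(zlib.compress(seq.encode(), 9))  # stand-in encoder
    d = null_bits - coded_bits            # bits saved by the encoding
    return d, 2.0 ** (-d) if d > 0 else 1.0

d, p = algorithmic_significance("ACACACACACAC" * 20)
print(f"saved {d} bits, significance bound p <= {p:.3g}")
```

A simple tandem repeat compresses far below 2 bits per base, so d is large and the significance bound is tiny; a random sequence saves no bits and the bound stays at 1.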
Algorithms for Detecting Significantly Mutated Pathways in Cancer
NASA Astrophysics Data System (ADS)
Vandin, Fabio; Upfal, Eli; Raphael, Benjamin J.
Recent genome sequencing studies have shown that the somatic mutations that drive cancer development are distributed across a large number of genes. This mutational heterogeneity complicates efforts to distinguish functional mutations from sporadic, passenger mutations. Since cancer mutations are hypothesized to target a relatively small number of cellular signaling and regulatory pathways, a common approach is to assess whether known pathways are enriched for mutated genes. However, restricting attention to known pathways will not reveal novel cancer genes or pathways. An alternative strategy is to examine mutated genes in the context of genome-scale interaction networks that include both well-characterized pathways and additional gene interactions measured through various approaches. We introduce a computational framework for de novo identification of subnetworks in a large gene interaction network that are mutated in a significant number of patients. This framework includes two major features. First, we introduce a diffusion process on the interaction network to define a local neighborhood of "influence" for each mutated gene in the network. Second, we derive a two-stage multiple hypothesis test to bound the false discovery rate (FDR) associated with the identified subnetworks. We test these algorithms on a large human protein-protein interaction network using mutation data from two recent studies: glioblastoma samples from The Cancer Genome Atlas and lung adenocarcinoma samples from the Tumor Sequencing Project. We successfully recover pathways that are known to be important in these cancers, such as the p53 pathway. We also identify additional pathways, such as the Notch signaling pathway, that have been implicated in other cancers but not previously reported as mutated in these samples. Our approach is the first, to our knowledge, to demonstrate a computationally efficient strategy for de novo identification of statistically significant mutated subnetworks.
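A minimal sketch in the spirit of the diffusion step described: iterate a random walk with restart from each mutated gene over a normalized adjacency matrix, yielding an "influence" vector over all genes. The restart probability and tolerance are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def influence(adj, source, restart=0.3, tol=1e-8):
    """Influence of gene `source` on every node, via random walk with
    restart on adjacency matrix `adj` (assumes no isolated nodes)."""
    W = adj / adj.sum(axis=0, keepdims=True)   # column-stochastic walk matrix
    e = np.zeros(adj.shape[0])
    e[source] = 1.0
    f = e.copy()
    while True:
        f_new = (1 - restart) * W @ f + restart * e  # diffuse, then restart
        if np.abs(f_new - f).max() < tol:
            return f_new
        f = f_new
```

Genes whose influence neighborhoods overlap across many patients' mutations are then candidates for a significantly mutated subnetwork, subject to the FDR-controlling test.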
Algorithms for improved performance in cryptographic protocols.
Schroeppel, Richard Crabtree; Beaver, Cheryl Lynn
2003-11-01
Public key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and encrypted messages and digital signatures are large, consuming significant bandwidth. Accordingly, there are many environments (e.g. wireless, ad-hoc, remote sensing networks) where these requirements are prohibitive and public-key cryptography cannot be used. The use of elliptic curves in public-key computations has provided a means by which computations and bandwidth can be somewhat reduced. We report here on research conducted in an LDRD aimed at finding even more efficient algorithms and at making public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent has been applied for. Further, we discovered some new problems and relations on which future cryptographic algorithms may be based.
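To illustrate the bandwidth point (not the LDRD's own algorithms): an elliptic-curve key exchange achieves roughly RSA-3072-level security with 256-bit keys. A minimal sketch using the third-party `cryptography` package:

```python
from cryptography.hazmat.primitives.asymmetric import ec

# Each party generates a P-256 key pair; public values are ~32-byte
# coordinates, far smaller than comparably secure RSA moduli (~384 bytes).
alice = ec.generate_private_key(ec.SECP256R1())
bob = ec.generate_private_key(ec.SECP256R1())

# Each side combines its private key with the peer's public key.
shared_a = alice.exchange(ec.ECDH(), bob.public_key())
shared_b = bob.exchange(ec.ECDH(), alice.public_key())
assert shared_a == shared_b  # both derive the same 32-byte shared secret
```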
Passive MMW algorithm performance characterization using MACET
NASA Astrophysics Data System (ADS)
Williams, Bradford D.; Watson, John S.; Amphay, Sengvieng A.
1997-06-01
As passive millimeter wave sensor technology matures, algorithms which are tailored to exploit the benefits of this technology are being developed. The expedient development of such algorithms requires an understanding of not only the gross phenomenology, but also the specific quirks and limitations inherent in the sensors and the data gathering methodology specific to this regime. This level of understanding is approached as the technology matures and increasing amounts of data become available for analysis. The Armament Directorate of Wright Laboratory, WL/MN, has spearheaded the advancement of passive millimeter-wave technology in algorithm development tools and modeling capability as well as sensor development. A passive MMW channel is available within WL/MN's multi-channel modeling program Irma, and a sample passive MMW algorithm is incorporated into the Modular Algorithm Concept Evaluation Tool (MACET), an algorithm development and evaluation system. The Millimeter Wave Analysis of Passive Signatures system provides excellent data collection capability in the 35, 60, and 95 GHz MMW bands. This paper exploits these assets to study the PMMW signature of a High Mobility Multi-Purpose Wheeled Vehicle in the three bands mentioned, and the effect of camouflage upon this signature and upon autonomous target recognition algorithm performance.
Bootstrap performance profiles in stochastic algorithms assessment
Costa, Lino; Espírito Santo, Isabel A.C.P.; Oliveira, Pedro
2015-03-10
Optimization with stochastic algorithms has become a relevant research field. Due to its stochastic nature, its assessment is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
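A minimal sketch of the underlying bootstrap step, assuming each algorithm's raw data is a vector of best objective values over independent runs; the statistic, replicate count, and data are illustrative:

```python
import numpy as np

def bootstrap_ci(runs, stat=np.median, n_boot=5000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for `stat` of a sample."""
    rng = np.random.default_rng(seed)
    runs = np.asarray(runs)
    reps = np.array([stat(rng.choice(runs, size=runs.size, replace=True))
                     for _ in range(n_boot)])
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return stat(runs), (lo, hi)

# e.g. best objective values from 15 runs of some stochastic solver
print(bootstrap_ci([0.12, 0.15, 0.11, 0.18, 0.13, 0.12, 0.16,
                    0.14, 0.11, 0.19, 0.13, 0.12, 0.17, 0.15, 0.14]))
```

Because resampling works for almost any statistic, the same machinery yields profiles for medians, quartiles, or worst-case performance, not just the mean, even with small run counts.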
The Real World Significance of Performance Prediction
ERIC Educational Resources Information Center
Pardos, Zachary A.; Wang, Qing Yang; Trivedi, Shubhendu
2012-01-01
In recent years, the educational data mining and user modeling communities have been aggressively introducing models for predicting student performance on external measures such as standardized tests as well as within-tutor performance. While these models have brought statistically reliable improvement to performance prediction, the real world…
Performance of a streaming mesh refinement algorithm.
Thompson, David C.; Pebay, Philippe Pierre
2004-08-01
In SAND report 2004-1617, we outline a method for edge-based tetrahedral subdivision that does not rely on saving state or communication to produce compatible tetrahedralizations. This report analyzes the performance of the technique by characterizing (a) mesh quality, (b) execution time, and (c) traits of the algorithm that could affect quality or execution time differently for different meshes. It also details the method used to debug the several hundred subdivision templates that the algorithm relies upon. Mesh quality is on par with other similar refinement schemes, and throughput on modern hardware can exceed 600,000 output tetrahedra per second.
Evaluating Algorithm Performance Metrics Tailored for Prognostics
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai
2009-01-01
Prognostics has taken center stage in Condition Based Maintenance (CBM), where it is desired to estimate the Remaining Useful Life (RUL) of the system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtimes. Validation of such predictions is an important but difficult proposition, and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess the key performance aspects expected of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms as compared to other conventional metrics. Specifically, four algorithms, namely Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR), are compared. These algorithms vary in complexity and in their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms in a different manner, and depending on the requirements and constraints, suitable metrics may be chosen. Beyond these results, these metrics offer ideas about how metrics suitable to prognostics may be designed so that the evaluation procedure can be standardized.
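One prognostics-specific metric in this line of work is the α-λ accuracy test: at a chosen fraction λ of the unit's life, the predicted RUL must fall within ±α of the true RUL. A minimal sketch under those definitions; the parameter defaults and interface are illustrative:

```python
def alpha_lambda(t, rul_pred, t_start, t_eof, alpha=0.2, lam=0.5):
    """True if the RUL prediction issued nearest time t_lambda lies in
    [(1-alpha)*RUL_true, (1+alpha)*RUL_true].

    t        : times at which predictions were issued
    rul_pred : RUL predicted at each time in t
    t_start  : start of operation; t_eof: actual end of life
    """
    t_lambda = t_start + lam * (t_eof - t_start)      # evaluation instant
    i = min(range(len(t)), key=lambda k: abs(t[k] - t_lambda))
    rul_true = t_eof - t[i]
    return (1 - alpha) * rul_true <= rul_pred[i] <= (1 + alpha) * rul_true
```

Unlike a plain error average, this rewards predictions that tighten appropriately as the end of life approaches.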
A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features
Amudha, P.; Karthik, S.; Sivakumari, S.
2015-01-01
Intrusion detection has become a main part of network security due to the huge number of attacks which affect computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to find better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with other machine learning algorithms and found to be significantly different. PMID:26221625
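The fitness such a swarm optimizes is, in essence, cross-validated accuracy of a classifier trained on a candidate feature subset. A minimal sketch of that evaluation with scikit-learn; the decision-tree classifier is an illustrative stand-in, not the paper's hybrid:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def subset_fitness(X, y, mask):
    """Mean 10-fold CV accuracy using only the features where mask == 1."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 0.0                      # empty feature subsets score nothing
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, cols], y, cv=10).mean()
```

Each MABC/EPSO candidate is a binary mask over the traffic features; the swarm moves toward masks with higher fitness.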
Predicting the performance of a spatial gamut mapping algorithm
NASA Astrophysics Data System (ADS)
Bakke, Arne M.; Farup, Ivar; Hardeberg, Jon Y.
2009-01-01
Gamut mapping algorithms are currently being developed to take advantage of the spatial information in an image to improve the utilization of the destination gamut. These algorithms try to preserve the spatial information between neighboring pixels in the image, such as edges and gradients, without sacrificing global contrast. Experiments have shown that such algorithms can result in significantly improved reproduction of some images compared with non-spatial methods. However, due to the spatial processing of images, they introduce unwanted artifacts when used on certain types of images. In this paper we perform basic image analysis to predict whether a spatial algorithm is likely to perform better or worse than a good non-spatial algorithm. Our approach starts by measuring the relative fraction of the image made up of uniformly colored areas, as well as the fraction of areas that contain detail in out-of-gamut regions. A weighted difference is computed from these numbers, and we show that the result has a high correlation with the observed performance of the spatial algorithm in a previously conducted psychophysical experiment.
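A minimal sketch of such a predictor, assuming a local-standard-deviation test for uniform areas and a caller-supplied out-of-gamut mask; the thresholds and weights are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_benefit_score(gray, oog_mask, t_flat=2.0, t_detail=8.0,
                          w_flat=1.0, w_detail=1.0):
    """Positive: spatial gamut mapping likely helps; negative: artifact risk.

    gray     : single-channel image as a 2-D array
    oog_mask : boolean array, True where the pixel is out of gamut
    """
    g = gray.astype(np.float64)
    mean = uniform_filter(g, size=5)
    var = uniform_filter(g * g, size=5) - mean * mean
    local_std = np.sqrt(np.maximum(var, 0.0))
    flat_frac = (local_std < t_flat).mean()                # uniform areas
    detail_frac = ((local_std > t_detail) & oog_mask).mean()  # OOG detail
    return w_detail * detail_frac - w_flat * flat_frac
```

Uniform regions are where spatial processing tends to introduce halos, while out-of-gamut detail is where it pays off, hence the weighted difference.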
Impact of Multiscale Retinex Computation on Performance of Segmentation Algorithms
NASA Technical Reports Server (NTRS)
Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Hines, Glenn D.
2004-01-01
Classical segmentation algorithms subdivide an image into its constituent components based upon some metric that defines commonality between pixels. Often, these metrics incorporate some measure of "activity" in the scene, e.g. the amount of detail that is in a region. The Multiscale Retinex with Color Restoration (MSRCR) is a general purpose, non-linear image enhancement algorithm that significantly affects the brightness, contrast and sharpness within an image. In this paper, we will analyze the impact the MSRCR has on segmentation results and performance.
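For reference, the multiscale retinex core (before the color-restoration step of MSRCR) is a weighted sum of log-ratios of the image to Gaussian-smoothed versions of itself. A minimal single-channel sketch; the scales and equal weights are common illustrative choices, not NASA's tuned values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(img, sigmas=(15, 80, 250)):
    """MSR(x) = (1/S) * sum_s [log I(x) - log (G_sigma_s * I)(x)]."""
    img = img.astype(np.float64) + 1.0          # avoid log(0)
    msr = np.zeros_like(img)
    for s in sigmas:
        msr += np.log(img) - np.log(gaussian_filter(img, s))
    msr /= len(sigmas)
    # stretch to a displayable [0, 1] range
    return (msr - msr.min()) / (msr.max() - msr.min() + 1e-12)
```

Because the output redistributes local contrast non-linearly, any segmentation metric built on pixel "activity" will see a different image after enhancement, which is precisely the interaction the paper analyzes.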
NASA Astrophysics Data System (ADS)
Zhao, Zhanlue
This dissertation consists of two parts. The first part deals with the performance appraisal of estimation algorithms. The second part focuses on the application of estimation algorithms to target tracking. Performance appraisal is crucial for understanding, developing and comparing various estimation algorithms. In particular, with the evolution of estimation theory and the increase in problem complexity, performance appraisal is becoming more and more challenging for engineers who need to draw comprehensive conclusions. However, the existing theoretical results are inadequate for practical reference. The first part of this dissertation is dedicated to performance measures, including local performance measures, global performance measures and a model distortion measure. The second part focuses on the application of recursive best linear unbiased estimation (BLUE), or linear minimum mean square error (LMMSE) estimation, to the nonlinear measurement problem in target tracking. The Kalman filter has been the dominant basis for dynamic state filtering for several decades. Beyond the Kalman filter, a more fundamental basis for recursive best linear unbiased filtering has been thoroughly investigated in a series of papers by my advisor Dr. X. Rong Li. Based on the so-called quasi-recursive best linear unbiased filtering technique, the linear-Gaussian assumptions of the Kalman filter can be relaxed, so that a general linear filtering technique for nonlinear systems can be achieved. An approximate optimal BLUE filter is implemented for nonlinear measurements in target tracking; it outperforms the existing method significantly in terms of accuracy, credibility and robustness.
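The generic BLUE/LMMSE measurement update behind such filters needs only first and second moments, not linearity or Gaussianity. A minimal sketch that approximates those moments by Monte Carlo sampling through a nonlinear measurement function; the sampling approach is an illustrative stand-in for the dissertation's specific construction:

```python
import numpy as np

def blue_update(x_mean, P, z, h, R, n_samp=10000, seed=0):
    """BLUE update: x+ = x̄ + Pxz Pzz⁻¹ (z - ẑ), P+ = P - Pxz Pzz⁻¹ Pxzᵀ.

    h : nonlinear measurement function mapping a state vector to a
        measurement vector; R : measurement noise covariance.
    """
    rng = np.random.default_rng(seed)
    xs = rng.multivariate_normal(x_mean, P, n_samp)
    zs = np.apply_along_axis(h, 1, xs)          # push samples through h
    z_mean = zs.mean(axis=0)
    Pxz = (xs - x_mean).T @ (zs - z_mean) / n_samp
    Pzz = np.atleast_2d(np.cov(zs.T, bias=True)) + R
    K = Pxz @ np.linalg.inv(Pzz)                # BLUE gain
    return x_mean + K @ (z - z_mean), P - K @ Pxz.T
```

For a linear h with Gaussian noise this reduces to the Kalman update; for nonlinear measurements (e.g. range/bearing) it remains the best linear unbiased estimate given the approximated moments.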
Performance Comparison Of Evolutionary Algorithms For Image Clustering
NASA Astrophysics Data System (ADS)
Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.
2014-09-01
Evolutionary computation tools are able to process real-valued numerical sets in order to extract suboptimal solutions of a designed problem. Data clustering algorithms have been intensively used for image segmentation in remote sensing applications. Despite the wide usage of evolutionary algorithms for data clustering, their clustering performance has been scarcely studied using clustering validation indexes. In this paper, recently proposed evolutionary algorithms (i.e., Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA) and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (i.e., k-means, FCM, SOM networks) have been used to cluster images, and their performances have been compared using four clustering validation indexes. Experimental results showed that evolutionary algorithms give more reliable cluster centers than classical clustering techniques, but their convergence time is quite long.
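A minimal sketch of the comparison step, scoring a clustering of a pixel-feature matrix with standard validation indexes from scikit-learn; the specific indexes here are common choices, not necessarily the four used in the paper:

```python
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score, davies_bouldin_score,
                             calinski_harabasz_score)

def score_clustering(X, labels):
    """Validity indexes for one clustering of feature matrix X."""
    return {"silhouette": silhouette_score(X, labels),           # higher is better
            "davies_bouldin": davies_bouldin_score(X, labels),   # lower is better
            "calinski_harabasz": calinski_harabasz_score(X, labels)}

# X: (n_pixels, n_bands) samples drawn from a multispectral image
# labels = KMeans(n_clusters=5, n_init=10).fit_predict(X)
# print(score_clustering(X, labels))
```

The same scoring function applies whether the cluster centers came from k-means or from an evolutionary optimizer, which is what makes index-based comparison across the two families possible.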
Grant, Adam M
2008-01-01
Does task significance increase job performance? Correlational designs and confounded manipulations have prevented researchers from assessing the causal impact of task significance on job performance. To address this gap, 3 field experiments examined the performance effects, relational mechanisms, and boundary conditions of task significance. In Experiment 1, fundraising callers who received a task significance intervention increased their levels of job performance relative to callers in 2 other conditions and to their own prior performance. In Experiment 2, task significance increased the job dedication and helping behavior of lifeguards, and these effects were mediated by increases in perceptions of social impact and social worth. In Experiment 3, conscientiousness and prosocial values moderated the effects of task significance on the performance of new fundraising callers. The results provide fresh insights into the effects, relational mechanisms, and boundary conditions of task significance, offering noteworthy implications for theory, research, and practice on job design, social information processing, and work motivation and performance. PMID:18211139
El Hag, Imad A.; Elsiddig, Kamal E.; Elsafi, Mohamed E.M.O; Elfaki, Mona E.E.; Musa, Ahmed M.; Musa, Brima Y.; Elhassan, Ahmed M.
2013-01-01
Background Tuberculosis is a major health problem in developing countries. The distinction between tuberculous lymphadenitis, non-specific lymphadenitis and malignant lymph node enlargement has to be made at primary health care levels using easy, simple and cheap methods. Objective To develop a reliable clinical algorithm for primary care settings to triage cases of non-specific, tuberculous and malignant lymphadenopathies. Methods Calculation of the odds ratios (OR) of the chosen predictor variables was carried out using logistic regression. The numerical score values of the predictor variables were weighted against their respective OR. The performance of the score was evaluated by the Receiver Operating Characteristic (ROC) curve. Results Four predictor variables: Mantoux reading, erythrocyte sedimentation rate (ESR), nocturnal fever and discharging sinuses correlated significantly with TB diagnosis and were included in the reduced model to establish score A. For score B, the reduced model included Mantoux reading, ESR, lymph-node size and lymph-node number as predictor variables for malignant lymph nodes. Score A ranged from 0 to 12, and a cut-off point of 6 gave a best sensitivity and specificity of 91% and 90% respectively, whilst score B ranged from -3 to 8, and a cut-off point of 3 gave a best sensitivity and specificity of 83% and 76% respectively. The calculated area under the ROC curve was 0.964 (95% CI, 0.949-0.980) and 0.856 (95% CI, 0.787-0.925) for scores A and B respectively, indicating good performance. Conclusion The developed algorithm can efficiently triage cases with tuberculous and malignant lymphadenopathies for treatment or referral to specialised centres for further work-up.
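A minimal sketch of the score-building procedure described: fit a logistic model, weight each predictor by the log of its odds ratio rounded to an integer, and check the resulting score with an ROC curve. The data layout and rounding rule are illustrative assumptions, not the study's exact recipe:

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def build_score(X, y):
    """X columns e.g. [mantoux_mm, esr, nocturnal_fever, sinus]; y: TB yes/no."""
    model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    odds_ratios = np.exp(model.params[1:])                # one OR per predictor
    weights = np.round(np.log(odds_ratios)).astype(int)   # integer score weights
    score = X @ weights                                   # per-patient score
    return weights, roc_auc_score(y, score)               # AUC of the score
```

Integer weights keep the score computable by hand at a primary care clinic, while the AUC verifies how much discriminating power the simplification retains.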
Turbopump Performance Improved by Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Oyama, Akira; Liou, Meng-Sing
2002-01-01
The development of design optimization technology for turbomachinery has been initiated using the multiobjective evolutionary algorithm under NASA's Intelligent Synthesis Environment and Revolutionary Aeropropulsion Concepts programs. As an alternative to the traditional gradient-based methods, evolutionary algorithms (EA's) are emergent design-optimization algorithms modeled after the mechanisms found in natural evolution. EA's search from multiple points, instead of moving from a single point. In addition, they require no derivatives or gradients of the objective function, leading to robustness and simplicity in coupling with any evaluation code. Parallel efficiency also becomes very high by using a simple master-slave concept for function evaluations, since such evaluations, for example computational fluid dynamics runs, often consume the most CPU time. Application of EA's to multiobjective design problems is also straightforward because EA's maintain a population of design candidates in parallel. Because of these advantages, EA's are a unique and attractive approach to real-world design optimization problems.
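A minimal sketch of the master-slave evaluation pattern for one generation, assuming an expensive, pure objective function; `multiprocessing.Pool` plays the master's role of farming candidates out to worker processes:

```python
from multiprocessing import Pool

def evaluate(design):
    """Stand-in for an expensive objective, e.g. a CFD run."""
    return sum(x ** 2 for x in design)

def evaluate_generation(population, n_workers=8):
    # Master-slave: the master distributes designs; workers evaluate them.
    with Pool(n_workers) as pool:
        return pool.map(evaluate, population)

if __name__ == "__main__":
    pop = [[i * 0.1, 1.0 - i * 0.1] for i in range(20)]
    print(evaluate_generation(pop))
```

Because the individuals in a generation are independent, speedup is close to linear in worker count whenever a single evaluation dominates the cost, which is the situation the abstract describes.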
Case study of isosurface extraction algorithm performance
Sutton, P M; Hansen, C D; Shen, H; Schikore, D
1999-12-14
Isosurface extraction is an important and useful visualization method. Over the past ten years, the field has seen numerous isosurface techniques published leaving the user in a quandary about which one should be used. Some papers have published complexity analysis of the techniques yet empirical evidence comparing different methods is lacking. This case study presents a comparative study of several representative isosurface extraction algorithms. It reports and analyzes empirical measurements of execution times and memory behavior for each algorithm. The results show that asymptotically optimal techniques may not be the best choice when implemented on modern computer architectures.
Generic algorithms for high performance scalable geocomputing
NASA Astrophysics Data System (ADS)
de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek
2016-04-01
During the last decade, the characteristics of computing hardware have changed a lot. For example, instead of a single general purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available for developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g. threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. In the resulting model, the low-level details of how this is done are separated from the model-specific logic representing the modeled system.
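A minimal sketch of hiding task distribution behind a generic grid algorithm, assuming a 3×3 mean (focal) filter computed per row-chunk with a one-row halo; Fern itself is C++, so this Python sketch only mirrors the structure:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy.ndimage import uniform_filter

def _chunk_mean(args):
    block, top, bottom = args                 # block includes halo rows
    out = uniform_filter(block, size=3)
    return out[top: block.shape[0] - bottom]  # strip the halo again

def focal_mean(grid, n_workers=4):
    """3x3 focal mean; work is distributed over CPU cores internally,
    so the caller never sees the chunking."""
    bounds = np.linspace(0, grid.shape[0], n_workers + 1, dtype=int)
    jobs = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        top, bottom = int(lo > 0), int(hi < grid.shape[0])
        jobs.append((grid[lo - top: hi + bottom], top, bottom))
    with ProcessPoolExecutor(n_workers) as ex:
        return np.vstack(list(ex.map(_chunk_mean, jobs)))

# Call focal_mean(...) from under an `if __name__ == "__main__":` guard
# when using process pools.
```

The model developer calls `focal_mean(grid)` as a plain map-algebra operation; the halo bookkeeping and core scheduling stay inside the library, which is the separation of concerns the abstract argues for.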
Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.
2005-01-01
The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach were less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
Performance characterization of a combined material identification and screening algorithm
NASA Astrophysics Data System (ADS)
Green, Robert L.; Hargreaves, Michael D.; Gardner, Craig M.
2013-05-01
Portable analytical devices based on a gamut of technologies (Infrared, Raman, X-Ray Fluorescence, Mass Spectrometry, etc.) are now widely available. These tools have seen increasing adoption for field-based assessment by diverse users including military, emergency response, and law enforcement. Frequently, end-users of portable devices are non-scientists who rely on embedded software and the associated algorithms to convert collected data into actionable information. Two classes of problems commonly encountered in field applications are identification and screening. Identification algorithms are designed to scour a library of known materials and determine whether the unknown measurement is consistent with a stored response (or combination of stored responses). Such algorithms can be used to identify a material from many thousands of possible candidates. Screening algorithms evaluate whether at least a subset of features in an unknown measurement corresponds to one or more specific substances of interest, and are typically configured to detect potential target analytes from a small list. Thus, screening algorithms are much less broadly applicable than identification algorithms; however, they typically provide higher detection rates, which makes them attractive for specific applications such as chemical warfare agent or narcotics detection. This paper will present an overview and performance characterization of a combined identification/screening algorithm that has recently been developed. It will be shown that the combined algorithm provides enhanced detection capability more typical of screening algorithms while maintaining a broad identification capability. Additionally, we will highlight how this approach can enable users to incorporate situational awareness during a response.
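A minimal sketch contrasting the two modes on spectra stored as vectors, using cosine similarity as the comparison metric; the thresholds and the metric itself are illustrative assumptions, not the vendor's algorithm:

```python
import numpy as np

def _cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(spectrum, library, threshold=0.95):
    """Search the full library; report the best match if it is good enough."""
    best = max(library, key=lambda name: _cos(spectrum, library[name]))
    score = _cos(spectrum, library[best])
    return (best, score) if score >= threshold else (None, score)

def screen(spectrum, targets, threshold=0.80):
    """Check only a short target list; flag anything above threshold."""
    return [name for name, ref in targets.items()
            if _cos(spectrum, ref) >= threshold]
```

The asymmetry in thresholds captures the trade-off the paper describes: screening tolerates weaker matches on a few analytes to raise detection rates, while identification demands a strong match against the whole library.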
Quantitative comparison of the performance of SAR segmentation algorithms.
Caves, R; Quegan, S; White, R
1998-01-01
Methods to evaluate the performance of segmentation algorithms for synthetic aperture radar (SAR) images are developed, based on known properties of coherent speckle and a scene model in which areas of constant backscatter coefficient are separated by abrupt edges. Local and global measures of segmentation homogeneity are derived and applied to the outputs of two segmentation algorithms developed for SAR data, one based on iterative edge detection and segment growing, the other based on global maximum a posteriori (MAP) estimation using simulated annealing. The quantitative statistically based measures appear consistent with visual impressions of the relative quality of the segmentations produced by the two algorithms. On simulated data meeting algorithm assumptions, both algorithms performed well but MAP methods appeared visually and measurably better. On real data, MAP estimation was markedly the better method and retained performance comparable to that on simulated data, while the performance of the other algorithm deteriorated sharply. Improvements in the performance measures will require a more realistic scene model and techniques to recognize oversegmentation. PMID:18276219
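A minimal sketch of a local homogeneity test in that spirit: for L-look SAR intensity, fully developed speckle over constant backscatter has a coefficient of variation of 1/√L, so segments whose empirical CV greatly exceeds that value are likely inhomogeneous. The acceptance factor is an illustrative assumption:

```python
import numpy as np

def inhomogeneous_segments(intensity, labels, looks, factor=1.2):
    """Return segment ids whose coefficient of variation exceeds the
    theoretical speckle value 1/sqrt(L) by the given factor."""
    cv_speckle = 1.0 / np.sqrt(looks)
    flagged = []
    for seg in np.unique(labels):
        vals = intensity[labels == seg]
        cv = vals.std() / vals.mean()
        if cv > factor * cv_speckle:
            flagged.append(int(seg))
    return flagged
```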
Significant Advances in the AIRS Science Team Version-6 Retrieval Algorithm
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena; Molnar, Gyula
2012-01-01
AIRS/AMSU is the state-of-the-art infrared and microwave atmospheric sounding system flying aboard EOS Aqua. The Goddard DISC has analyzed AIRS/AMSU observations, covering the period September 2002 until the present, using the AIRS Science Team Version-5 retrieval algorithm. These products have been used by many researchers to make significant advances in both climate and weather applications. The AIRS Science Team Version-6 retrieval, which will become operational in mid-2012, contains many significant theoretical and practical improvements compared to Version-5 which should further enhance the utility of AIRS products for both climate and weather applications. In particular, major changes have been made with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the retrieval procedure; 3) compute Outgoing Longwave Radiation; and 4) determine Quality Control. This paper will describe these advances found in the AIRS Version-6 retrieval algorithm and demonstrate the improvement of AIRS Version-6 products compared to those obtained using Version-5.
Lytro camera technology: theory, algorithms, performance analysis
NASA Astrophysics Data System (ADS)
Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio
2013-03-01
The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of Lytro camera from a system level perspective, considering the Lytro camera as a black box, and uses our interpretation of Lytro image data saved by the camera. We present our findings based on our interpretation of Lytro camera file structure, image calibration and image rendering; in this context, artifacts and final image resolution are discussed.
A Hybrid Actuation System Demonstrating Significantly Enhanced Electromechanical Performance
NASA Technical Reports Server (NTRS)
Su, Ji; Xu, Tian-Bing; Zhang, Shujun; Shrout, Thomas R.; Zhang, Qiming
2004-01-01
A hybrid actuation system (HYBAS) utilizing the combined electromechanical responses of an electroactive polymer (EAP), an electrostrictive copolymer, and an electroactive ceramic single crystal, PZN-PT, has been developed. The system employs the contributions of the actuation elements cooperatively and exhibits a significantly enhanced electromechanical performance compared to devices made of each constituent material, the electroactive polymer or the ceramic single crystal, individually. The theoretical modeling of the performance of the HYBAS is in good agreement with experimental observation. The consistency between the theoretical modeling and the experimental tests makes the design concept an effective route for the development of high performance actuating devices for many applications. The theoretical modeling, fabrication of the HYBAS and the initial experimental results will be presented and discussed.
Improved Ant Colony Clustering Algorithm and Its Performance Study.
Gao, Wei
2016-01-01
Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533
Developmental Changes in Adolescents' Olfactory Performance and Significance of Olfaction
Klötze, Paula; Gerber, Friederike; Croy, Ilona; Hummel, Thomas
2016-01-01
The aim of the current work was to examine developmental changes in adolescents' olfactory performance and the personal significance of olfaction. In the first study, the olfactory identification ability of 76 participants (31 males and 45 females aged between 10 and 18 years; M = 13.8, SD = 2.3) was evaluated with the Sniffin' Sticks identification test, presented in a cued and in an uncued manner. Verbal fluency was additionally examined for control purposes. In the second study, 131 participants (46 males and 85 females aged between 10 and 18 years; M = 14.4, SD = 2.2) filled in the importance of olfaction questionnaire. Odor identification ability increased significantly with age and was significantly higher in girls as compared to boys. These effects were especially pronounced in the uncued task and partly related to verbal fluency. In line with this, the personal significance of olfaction increased with age and was generally higher among female compared to male participants. PMID:27332887
Modeling and performance analysis of GPS vector tracking algorithms
NASA Astrophysics Data System (ADS)
Lashley, Matthew
This dissertation provides a detailed analysis of GPS vector tracking algorithms and the advantages they have over traditional receiver architectures. Standard GPS receivers use a decentralized architecture that separates the tasks of signal tracking and position/velocity estimation. Vector tracking algorithms combine the two tasks into a single algorithm. The signals from the various satellites are processed collectively through a Kalman filter. The advantages of vector tracking over traditional, scalar tracking methods are thoroughly investigated. A method for making a valid comparison between vector and scalar tracking loops is developed. This technique avoids the ambiguities encountered when attempting to make a valid comparison between tracking loops (which are characterized by noise bandwidths and loop order) and the Kalman filters (which are characterized by process and measurement noise covariance matrices) that are used by vector tracking algorithms. The improvement in performance offered by vector tracking is calculated in multiple different scenarios. Rule-of-thumb analysis techniques for scalar Frequency Lock Loops (FLL) are extended to the vector tracking case. The analysis tools provide a simple method for analyzing the performance of vector tracking loops. The analysis tools are verified using Monte Carlo simulations. Monte Carlo simulations are also used to study the effects of carrier to noise power density (C/N0) ratio estimation and the advantage offered by vector tracking over scalar tracking. The improvement from vector tracking ranges from 2.4 to 6.2 dB in various scenarios. The difference in the performance of the three vector tracking architectures is analyzed. The effects of using a federated architecture with and without information sharing between the receiver's channels are studied. A combination of covariance analysis and Monte Carlo simulation is used to analyze the performance of the three algorithms.
A High-Performance Genetic Algorithm: Using Traveling Salesman Problem as a Case
Tsai, Chun-Wei; Tseng, Shih-Pang; Yang, Chu-Sing
2014-01-01
This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA. PMID:24892038
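A minimal sketch of the observation being exploited, shown on a bit-string GA for simplicity (the paper's case is TSP tours): positions where the entire population agrees are recorded and excluded from further variation. Function and parameter names are illustrative:

```python
import numpy as np

def frozen_loci(population):
    """Indices where every individual carries the same gene value."""
    pop = np.asarray(population)
    return np.flatnonzero((pop == pop[0]).all(axis=0))

def mutate(individual, frozen, rate=0.01, rng=None):
    """Flip bits (0/1 integer genes) at the usual rate, but never touch
    frozen positions, saving the redundant re-evaluation of them."""
    rng = rng or np.random.default_rng()
    mask = rng.random(individual.size) < rate
    mask[frozen] = False
    individual[mask] ^= 1
    return individual
```

Since genes shared by the whole population have a high probability of surviving to the final solution anyway, freezing them shrinks the effective search space in later generations with little quality loss.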
Dentate Gyrus Circuitry Features Improve Performance of Sparse Approximation Algorithms
Petrantonakis, Panagiotis C.; Poirazi, Panayiota
2015-01-01
Memory-related activity in the Dentate Gyrus (DG) is characterized by sparsity. Memory representations are seen as activated neuronal populations of granule cells, the main encoding cells in DG, which are estimated to engage 2–4% of the total population. This sparsity is assumed to enhance the ability of DG to perform pattern separation, one of the most valuable contributions of DG during memory formation. In this work, we investigate how features of the DG such as its excitatory and inhibitory connectivity diagram can be used to develop theoretical algorithms performing Sparse Approximation, a widely used strategy in the Signal Processing field. Sparse approximation stands for the algorithmic identification of few components from a dictionary that approximate a certain signal. The ability of DG to achieve pattern separation by sparsifying its representations is exploited here to improve the performance of the state-of-the-art sparse approximation algorithm "Iterative Soft Thresholding" (IST) by adding new algorithmic features inspired by the DG circuitry. Lateral inhibition of granule cells, either direct or indirect, via mossy cells, is shown to enhance the performance of the IST. Apart from revealing the potential of DG-inspired theoretical algorithms, this work presents new insights regarding the function of particular cell types in the pattern separation task of the DG. PMID:25635776
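For reference, the baseline the authors build on, Iterative Soft Thresholding, solves min ½‖y − Ax‖² + λ‖x‖₁ by alternating a gradient step with a shrinkage step. A minimal sketch, without the DG-inspired lateral-inhibition additions:

```python
import numpy as np

def ist(A, y, lam=0.1, n_iter=500):
    """Iterative Soft Thresholding for the sparse approximation y ≈ A x."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # step below 1/L ensures convergence
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = x + step * A.T @ (y - A @ x)       # gradient step on the residual
        x = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # soft-shrink
    return x
```

The soft-threshold plays the sparsifying role that lateral inhibition plays in the DG analogy: both suppress weakly activated components so only a few "granule cells" remain active per pattern.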
Logit Model based Performance Analysis of an Optimization Algorithm
NASA Astrophysics Data System (ADS)
Hernández, J. A.; Ospina, J. D.; Villada, D.
2011-09-01
In this paper, the performance of the Multi Dynamics Algorithm for Global Optimization (MAGO) is studied through simulation using five standard test functions. To guarantee that the algorithm converges to a global optimum, a set of experiments searching for the best combination of the only two MAGO parameters, the number of iterations and the number of potential solutions, is considered. These parameters are sequentially varied while increasing the dimension of several test functions, and performance curves are obtained. The MAGO was originally designed to perform well with small populations; therefore, the self-adaptation task with small populations is more challenging as the problem dimension grows. The results showed that the convergence probability to an optimal solution increases with growing numbers of iterations and potential solutions. However, the success rates slow down when the dimension of the problem escalates. A logit model is used to determine the mutual effects between the parameters of the algorithm.
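A minimal sketch of the logit analysis described: regress success/failure of repeated runs on the two algorithm parameters and read off their effects on the log-odds of convergence. The data layout and the interaction term are illustrative assumptions:

```python
import numpy as np
import statsmodels.api as sm

def fit_success_logit(iters, pop, success):
    """One row per run: iterations, population size; success is 0/1
    (whether the run reached the known optimum)."""
    X = np.column_stack([iters, pop, iters * pop])  # include interaction term
    model = sm.Logit(success, sm.add_constant(X)).fit(disp=0)
    return model  # model.params: effect of each parameter on the log-odds

# Predicted convergence probability for a new setting, e.g. 2000 iterations
# with 50 potential solutions:
# p = model.predict(sm.add_constant([[2000, 50, 2000 * 50]]))
```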
Leukoaraiosis Significantly Worsens Driving Performance of Ordinary Older Drivers
Zheng, Rencheng; Fang, Fang; Ohori, Masanori; Nakamura, Hiroki; Kumagai, Yasuhiho; Okada, Hiroshi; Teramura, Kazuhiko; Nakayama, Satoshi; Irimajiri, Akinori; Taoka, Hiroshi; Okada, Satoshi
2014-01-01
Background Leukoaraiosis is defined as extracellular space caused mainly by atherosclerotic or demyelinated changes in the brain tissue and is commonly found in the brains of healthy older people. A significant association between leukoaraiosis and traffic crashes was reported in our previous study; however, the reason for this is still unclear. Method This paper presents a comprehensive evaluation of driving performance in ordinary older drivers with leukoaraiosis. First, the degree of leukoaraiosis was examined in 33 participants, who underwent an actual-vehicle driving examination on a standard driving course, and a driver skill rating was also collected while the driver carried out a paced auditory serial addition test, which is a calculating task given verbally. At the same time, a steering entropy method was used to estimate steering operation performance. Results The experimental results indicated that a normal older driver with leukoaraiosis was readily affected by external disturbances and made more operation errors and steered less smoothly than one without leukoaraiosis during driving; at the same time, their steering skill significantly deteriorated. Conclusions Leukoaraiosis worsens the driving performance of older drivers because of their increased vulnerability to distraction. PMID:25295736
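The steering entropy method mentioned here quantifies smoothness as the entropy of steering-prediction errors: a smooth driver is predictable, so errors concentrate near zero. A minimal sketch following the commonly cited formulation (second-order extrapolation, nine bins scaled by the 90th-percentile error α); the details illustrate the method generally, not this study's exact implementation:

```python
import numpy as np

def steering_entropy(theta):
    """Entropy (base 9) of steering-angle prediction errors."""
    th = np.asarray(theta, dtype=float)
    # second-order (Taylor) extrapolation from the three previous samples
    pred = th[2:-1] + (th[2:-1] - th[1:-2]) \
         + 0.5 * ((th[2:-1] - th[1:-2]) - (th[1:-2] - th[:-3]))
    err = th[3:] - pred
    a = np.quantile(np.abs(err), 0.90)        # alpha: 90% of errors within +/- a
    edges = np.array([-5, -2.5, -1, -0.5, 0.5, 1, 2.5, 5]) * a
    p, _ = np.histogram(err, bins=np.concatenate(([-np.inf], edges, [np.inf])))
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * (np.log(p) / np.log(9))).sum())
```

Higher entropy means less predictable steering corrections, which is how increased vulnerability to distraction shows up in the driving data.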
Preliminary flight evaluation of an engine performance optimization algorithm
NASA Technical Reports Server (NTRS)
Lambert, H. H.; Gilyard, G. B.; Chisholm, J. D.; Kerr, L. J.
1991-01-01
A performance seeking control (PSC) algorithm has undergone initial flight test evaluation in subsonic operation of a PW1128-engined F-15. This algorithm is designed to optimize the quasi-steady performance of an engine for three primary modes: (1) minimum fuel consumption; (2) minimum fan turbine inlet temperature (FTIT); and (3) maximum thrust. The flight test results have verified a thrust-specific fuel consumption reduction of 1 pct., decreases of up to 100 R in FTIT, and increases of as much as 12 pct. in maximum thrust. PSC technology promises to be of value in next-generation tactical and transport aircraft.
Performance of recovery time improvement algorithms for software RAIDs
Riegel, J.; Menon, Jai
1996-12-31
A software RAID is a RAID implemented purely in software running on a host computer. One problem with software RAIDs is that they do not have access to special hardware such as NVRAM. Thus, software RAIDs may need to check every parity group of an array for consistency following a host crash or power failure. This process of checking parity groups is called recovery, and it results in long delays when the software RAID is restarted. In this paper, we review two previously proposed algorithms to reduce this recovery time for software RAIDs: the PGS Bitmap algorithm and the List algorithm. We compare the performance of these two algorithms using trace-driven simulations. Our results show that the PGS Bitmap algorithm can reduce recovery time by a factor of 12 with a response time penalty of less than 1%, or by a factor of 50 with a response time penalty of less than 2% and a memory requirement of around 9 Kbytes. The List algorithm can reduce recovery time by a factor of 50 but cannot achieve a response time penalty of less than 16%.
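A conceptual sketch of the dirty-parity-group tracking that such bitmap schemes rely on; the class shape and method names are illustrative, not the PGS Bitmap algorithm's actual data structures:

```python
class ParityGroupBitmap:
    """Track which parity groups might be inconsistent after a crash.

    Before data in a parity group is updated, its bit is set (and must be
    persisted first); once the group's parity is consistent again, the bit
    may be cleared, possibly lazily. Crash recovery then re-checks only
    groups whose bit is set instead of scanning the whole array.
    """
    def __init__(self, n_groups):
        self.dirty = [False] * n_groups

    def begin_write(self, group):
        self.dirty[group] = True      # must reach stable storage before data

    def end_write(self, group):
        self.dirty[group] = False     # clearing can be deferred or batched

    def groups_to_recover(self):
        return [g for g, d in enumerate(self.dirty) if d]
```

The trade-off reported in the abstract follows directly: persisting bits on the write path adds a small response-time penalty, in exchange for recovery that touches only the flagged groups.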
Atmospheric turbulence and sensor system effects on biometric algorithm performance
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Leonard, Kevin R.; Byrd, Kenneth A.; Potvin, Guy
2015-05-01
Biometric technologies composed of electro-optical/infrared (EO/IR) sensor systems and advanced matching algorithms are being used in various force protection/security and tactical surveillance applications. To date, most of these sensor systems have been widely used in controlled conditions with varying success (e.g., short range, uniform illumination, cooperative subjects). However the limiting conditions of such systems have yet to be fully studied for long range applications and degraded imaging environments. Biometric technologies used for long range applications will invariably suffer from the effects of atmospheric turbulence degradation. Atmospheric turbulence causes blur, distortion and intensity fluctuations that can severely degrade image quality of electro-optic and thermal imaging systems and, for the case of biometrics technology, translate to poor matching algorithm performance. In this paper, we evaluate the effects of atmospheric turbulence and sensor resolution on biometric matching algorithm performance. We use a subset of the Facial Recognition Technology (FERET) database and a commercial algorithm to analyze facial recognition performance on turbulence degraded facial images. The goal of this work is to understand the feasibility of long-range facial recognition in degraded imaging conditions, and the utility of camera parameter trade studies to enable the design of the next generation biometrics sensor systems.
Significant improvements in long trace profiler measurement performance
Takacs, P.Z.; Bresloff, C.J.
1996-07-01
Modifications made to the Long Trace Profiler (LTP II) system at the Advanced Photon Source at Argonne National Laboratory have significantly improved the accuracy and repeatability of the instrument. The use of a Dove prism in the reference beam path corrects for phasing problems between mechanical errors and thermally-induced system errors. A single reference correction now completely removes both error signals from the measured surface profile. The addition of a precision air conditioner keeps the temperature in the metrology enclosure constant to within ±0.1°C over a 24 hour period and has significantly improved the stability and repeatability of the system. We illustrate the performance improvements with several sets of measurements. The improved environmental control has reduced thermal drift error to about 0.75 microradian RMS over a 7.5 hour time period. Measurements made in the forward scan direction and the reverse scan direction differ by only about 0.5 microradian RMS over a 500 mm trace length. We are now able to put a 1-sigma error bar of 0.3 microradian on an average of 10 slope profile measurements over a 500 mm long trace length, and we are now able to put a 0.2 microradian error bar on an average of 10 measurements over a 200 mm trace length. The corresponding 1-sigma height error bar for this measurement is 1.1 nm.
Significant improvements in Long Trace Profiler measurement performance
Takacs, P.Z.; Bresloff, C.J.
1996-12-31
Modifications made to the Long Trace Profiler (LTP II) system at the Advanced Photon Source at Argonne National Laboratory have significantly improved the accuracy and repeatability of the instrument. The use of a Dove prism in the reference beam path corrects for phasing problems between mechanical errors and thermally-induced system errors. A single reference correction now completely removes both error signals from the measured surface profile. The addition of a precision air conditioner keeps the temperature in the metrology enclosure constant to within {+-} 0.1 C over a 24 hour period and has significantly improved the stability and repeatability of the system. The authors illustrate the performance improvements with several sets of measurements. The improved environmental control has reduced thermal drift error to about 0.75 microradian RMS over a 7.5 hour time period. Measurements made in the forward scan direction and the reverse scan direction differ by only about 0.5 microradian RMS over a 500 mm trace length. They are now able to put 1-sigma error bar of 0.3 microradian on an average of 10 slope profile measurements over a 500 mm long trace length, and they are now able to put a 0.2 microradian error bar on an average of 10 measurements over a 200 mm trace length. The corresponding 1-sigma height error bar for this measurement is 1.1 nm.
On the performances of computer vision algorithms on mobile platforms
NASA Astrophysics Data System (ADS)
Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.
2012-01-01
Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens up new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we consider different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performance of the involved mobile platforms: Nokia N900, LG Optimus One, and Samsung Galaxy SII.
Ehsan, Shoaib; Kanwal, Nadia; Clark, Adrian F; McDonald-Maier, Klaus D
2012-01-01
Speeded-Up Robust Features is a feature extraction algorithm designed for real-time execution, although this is rarely achievable on low-power hardware such as that in mobile robots. One way to reduce the computation is to discard some of the scale-space octaves, and previous research has simply discarded the higher octaves. This paper shows that this approach is not always the most sensible and presents an algorithm for choosing which octaves to discard based on the properties of the imagery. Results obtained with this best-octaves algorithm show that it is able to achieve a significant reduction in computation without compromising matching performance. PMID:21712160
Performance impact of dynamic parallelism on different clustering algorithms
NASA Astrophysics Data System (ADS)
DiMarco, Jeffrey; Taufer, Michela
2013-05-01
In this paper, we aim to quantify the performance gains of dynamic parallelism. The newest version of CUDA, CUDA 5, introduces dynamic parallelism, which allows GPU threads to create new threads, without CPU intervention, and adapt to their data. This effectively eliminates the superfluous back-and-forth communication between the GPU and CPU through nested kernel computations. The change in performance is measured using two well-known clustering algorithms that exhibit data dependencies: K-means clustering and hierarchical clustering. K-means has a sequential data dependence wherein iterations occur in a linear fashion, while hierarchical clustering has a tree-like dependence that produces split tasks. Analyzing the performance of these data-dependent algorithms gives us a better understanding of the benefits and potential drawbacks of CUDA 5's new dynamic parallelism feature.
Performance evaluation of image segmentation algorithms on microscopic image data.
Beneš, Miroslav; Zitová, Barbara
2015-01-01
In our paper, we present a performance evaluation of image segmentation algorithms on microscopic image data. In spite of the existence of many algorithms for image data partitioning, there is still no universal 'best' method. Moreover, images of microscopic samples vary in character and quality, which can negatively influence the performance of image segmentation algorithms. Thus, the issue of selecting a suitable method for a given set of image data is of great interest. We carried out a large number of experiments with a variety of segmentation methods to evaluate the behaviour of individual approaches on a testing set of microscopic images (cross-section images taken in three different modalities from the field of art restoration). The segmentation results were assessed by several indices used for measuring the output quality of image segmentation algorithms. Finally, the benefit of a segmentation combination approach is studied, and the applicability of the achieved results to another representative of the microscopic data category, biological samples, is shown. PMID:25233873
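The scoring step of such an evaluation is straightforward to reproduce. The sketch below, which assumes scikit-learn and uses the adjusted Rand index as a stand-in for the (unspecified) indices used in the paper, scores a candidate segmentation against a ground-truth labeling:

```python
# Score a segmentation against ground truth with one common index.
import numpy as np
from sklearn.metrics import adjusted_rand_score

ground_truth = np.array([[0, 0, 1], [0, 1, 1], [2, 2, 2]])
segmentation = np.array([[0, 0, 1], [0, 1, 1], [2, 2, 1]])

# Label images are flattened so each pixel is one sample.
score = adjusted_rand_score(ground_truth.ravel(), segmentation.ravel())
print(f"adjusted Rand index: {score:.3f}")
```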
Performance analysis of bearing-only target location algorithms
NASA Astrophysics Data System (ADS)
Gavish, Motti; Weiss, Anthony J.
1992-07-01
The performance of two well-known bearing-only location techniques, the maximum likelihood (ML) and the Stansfield estimators, is examined. Analytical expressions are obtained for the bias and the covariance matrix of the estimation error, which permit performance comparison for any case of interest. It is shown that the Stansfield algorithm provides biased estimates even for large numbers of measurements, in contrast with the ML method. The rms error of the Stansfield technique is not necessarily larger than that of the ML technique. However, it is shown that the ML technique is superior to the Stansfield method when the number of measurements is large enough. Simulation results verify the predicted theoretical performance.
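For context, the ML estimator discussed above amounts to a nonlinear least-squares fit of the target position to the measured bearings (assuming independent Gaussian bearing noise). A minimal sketch with synthetic observer positions:

```python
# Bearing-only ML estimate as nonlinear least squares (illustrative setup).
import numpy as np
from scipy.optimize import least_squares

sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # observer positions
target = np.array([40.0, 30.0])
rng = np.random.default_rng(0)
bearings = (np.arctan2(target[1] - sensors[:, 1],
                       target[0] - sensors[:, 0])
            + rng.normal(0.0, 0.01, len(sensors)))          # noisy measurements

def residuals(xy):
    pred = np.arctan2(xy[1] - sensors[:, 1], xy[0] - sensors[:, 0])
    # Wrap angle differences into (-pi, pi] so residuals behave near +/- pi.
    return np.angle(np.exp(1j * (bearings - pred)))

fit = least_squares(residuals, x0=np.array([1.0, 1.0]))
print(fit.x)   # ML estimate of the target position
```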
S-index: Measuring significant, not average, citation performance
NASA Astrophysics Data System (ADS)
Antonoyiannakis, Manolis
2009-03-01
We recently [1] introduced the "citation density curve" (or cumulative impact factor curve) that captures the full citation performance of a journal: its size, impact factor, the maximum number of citations per paper, the relative size of the different-cited portions of the journal, etc. The citation density curve displays a universal behavior across journals. We exploit this universality to extract a simple metric (the "S-index") to characterize the citation impact of "significant" papers in each journal. In doing so, we go beyond the journal impact factor, which only measures the impact of the average paper. The conventional wisdom of ranking journals according to their impact factors is thus challenged. Having shown the utility and robustness of the S-index in comparing and ranking journals of different sizes but within the same field, we explore the concept further, going beyond a single field, and beyond journals. Can we compare different scientific fields, departments, or universities? And how should one generalize the citation density curve and the S-index to address these questions? [1] M. Antonoyiannakis and S. Mitra, "Is PRL too large to have an 'impact'?", Editorial, Physical Review Letters, December 2008.
Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin
2015-10-19
The feasibility of software-defined optical networking (SDON) for practical applications critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed to achieve high network capacity with reduced computation cost, a significant attribute in a scalable centrally controlled SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and computation complexity in the routing table update procedure through a simulation study. PMID:26480397
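The flavor of such a heuristic is easy to convey in code. The sketch below (assuming networkx; the request-scoring rule is paraphrased as intensity times end-to-end distance, and the wavelength count is arbitrary) processes the "hottest" requests first, routes each on a shortest path, and assigns the first wavelength free on every link:

```python
# Shortest-path routing with first-fit wavelength assignment,
# hottest-request-first (scoring rule paraphrased, not the paper's exact one).
import networkx as nx

W = 4                                                # wavelengths per link
G = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")])
used = {tuple(sorted(e)): set() for e in G.edges}    # wavelengths in use per link

requests = [("A", "D", 3), ("B", "D", 1), ("A", "C", 2)]   # (src, dst, intensity)
requests.sort(key=lambda r: r[2] * nx.shortest_path_length(G, r[0], r[1]),
              reverse=True)

for src, dst, _ in requests:
    path = nx.shortest_path(G, src, dst)
    links = [tuple(sorted(p)) for p in zip(path, path[1:])]
    # First fit: lowest-indexed wavelength free on every link of the path.
    free = next((w for w in range(W)
                 if all(w not in used[l] for l in links)), None)
    if free is not None:
        for l in links:
            used[l].add(free)
    print(src, dst, "->", "wavelength", free)
```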
Madenjian, Charles P.; David, Solomon R.; Pothoven, Steven A.
2012-01-01
We evaluated the performance of the Wisconsin bioenergetics model for lake trout Salvelinus namaycush that were fed ad libitum in laboratory tanks under regimes of low activity and high activity. In addition, we compared model performance under two different model algorithms: (1) balancing the lake trout energy budget on day t based on lake trout energy density on day t and (2) balancing the lake trout energy budget on day t based on lake trout energy density on day t + 1. Results indicated that the model significantly underestimated consumption for both inactive and active lake trout when algorithm 1 was used and that the degree of underestimation was similar for the two activity levels. In contrast, model performance substantially improved when using algorithm 2, as no detectable bias was found in model predictions of consumption for inactive fish and only a slight degree of overestimation was detected for active fish. The energy budget was accurately balanced by using algorithm 2 but not by using algorithm 1. Based on the results of this study, we recommend the use of algorithm 2 to estimate food consumption by fish in the field. Our study results highlight the importance of accurately accounting for changes in fish energy density when balancing the energy budget; furthermore, these results have implications for the science of evaluating fish bioenergetics model performance and for more accurate estimation of food consumption by fish in the field when fish energy density undergoes relatively rapid changes.
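The difference between the two algorithms reduces to which day's energy density is used to value growth when the daily budget is balanced. A toy sketch under one plausible reading of the two rules (the full Wisconsin model has many more terms; the loss term below lumps respiration and waste together):

```python
# Toy energy-budget balancing illustrating the two algorithms.
def growth_energy(W, ED, t, algorithm):
    if algorithm == 1:
        # Day-t energy density prices the whole day's growth.
        return (W[t + 1] - W[t]) * ED[t]
    # Algorithm 2: true change in body energy between day t and day t+1.
    return W[t + 1] * ED[t + 1] - W[t] * ED[t]

def consumption(W, ED, losses, t, algorithm):
    return growth_energy(W, ED, t, algorithm) + losses[t]   # joules on day t

W = [100.0, 103.0]     # fish mass (g) on days t and t+1
ED = [5000.0, 5200.0]  # energy density (J/g), changing rapidly
losses = [40000.0]     # respiration + egestion + excretion (J), assumed
print(consumption(W, ED, losses, 0, algorithm=1),
      consumption(W, ED, losses, 0, algorithm=2))
```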
Proper bibeta ROC model: algorithm, software, and performance evaluation
NASA Astrophysics Data System (ADS)
Chen, Weijie; Hu, Nan
2016-03-01
Semi-parametric models are often used to fit data collected in receiver operating characteristic (ROC) experiments to obtain a smooth ROC curve and ROC parameters for statistical inference purposes. The proper bibeta model, as recently proposed by Mossman and Peng, enjoys several theoretical properties. In addition to having explicit density functions for the latent decision variable and an explicit functional form of the ROC curve, the two-parameter bibeta model also has simple closed-form expressions for the true-positive fraction (TPF), false-positive fraction (FPF), and the area under the ROC curve (AUC). In this work, we developed a computational algorithm and R package implementing this model for ROC curve fitting. Our algorithm can deal with any ordinal data (categorical or continuous). To improve the accuracy, efficiency, and reliability of our software, we adopted several strategies in our computational algorithm, including: (1) the LABROC4 categorization to obtain the true maximum likelihood estimation of the ROC parameters; (2) a principled approach to initializing parameters; (3) analytical first-order and second-order derivatives of the likelihood function; (4) an efficient optimization procedure (the L-BFGS algorithm in the R package "nlopt"); and (5) an analytical delta method to estimate the variance of the AUC. We evaluated the performance of our software with intensive simulation studies and compared it with the conventional binormal and the proper binormal-likelihood-ratio models developed at the University of Chicago. Our simulation results indicate that our software is highly accurate, efficient, and reliable.
NASA Astrophysics Data System (ADS)
Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.
2015-08-01
Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to gaps of missing data. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On the one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough to assist other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) in reconstructing Hs. The results show that all the ML methods explored achieve a good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).
Framework for performance evaluation of face recognition algorithms
NASA Astrophysics Data System (ADS)
Black, John A., Jr.; Gargesha, Madhusudhana; Kahol, Kanav; Kuchi, Prem; Panchanathan, Sethuraman
2002-07-01
Face detection and recognition is becoming increasingly important in the contexts of surveillance, credit card fraud detection, assistive devices for the visually impaired, etc. A number of face recognition algorithms have been proposed in the literature. The availability of a comprehensive face database is crucial to testing the performance of these face recognition algorithms. However, while existing publicly available face databases contain face images with a wide variety of pose angles, illumination angles, gestures, face occlusions, and illuminant colors, these images have not been adequately annotated, thus limiting their usefulness for evaluating the relative performance of face detection algorithms. For example, many of the images in existing databases are not annotated with the exact pose angles at which they were taken. In order to compare the performance of various face recognition algorithms presented in the literature, there is a need for a comprehensive, systematically annotated database populated with face images that have been captured (1) at a variety of pose angles (to permit testing of pose invariance), (2) with a wide variety of illumination angles (to permit testing of illumination invariance), and (3) under a variety of commonly encountered illumination color temperatures (to permit testing of illumination color invariance). In this paper, we present a methodology for creating such an annotated database that employs a novel set of apparatus for the rapid capture of face images from a wide variety of pose angles and illumination angles. Four different types of illumination are used, including daylight, skylight, incandescent, and fluorescent. The entire set of images, as well as the annotations and the experimental results, is being placed in the public domain and made available for download over the World Wide Web.
A DRAM compiler algorithm for high performance VLSI embedded memories
NASA Technical Reports Server (NTRS)
Eldin, A. G.
1992-01-01
In many applications, the limited density of embedded SRAM does not allow integrating the memory on the same chip with other logic and functional blocks. In such cases, embedded DRAM provides the optimum combination of very high density, low power, and high performance. For ASICs to take full advantage of this design strategy, an efficient and highly reliable DRAM compiler must be used. The embedded DRAM architecture, cell, and peripheral circuit design considerations and the algorithm of a high performance memory compiler are presented.
Zhang, Lei; Wang, Linlin; Du, Bochuan; Wang, Tianjiao; Tian, Pu
2016-01-01
Among non-small cell lung cancers (NSCLC), adenocarcinoma (AC) and squamous cell carcinoma (SCC) are the two major histology subtypes, accounting for roughly 40% and 30% of all lung cancer cases, respectively. Since AC and SCC differ in their cell of origin, location within the lung, and growth pattern, they are considered distinct diseases. Gene expression signatures have been demonstrated to be an effective tool for distinguishing AC and SCC. Gene set analysis is generally regarded as irrelevant to the identification of gene expression signatures. Nevertheless, we found that one specific gene set analysis method, significance analysis of microarray-gene set reduction (SAMGSR), can be adopted directly to select relevant features and construct gene expression signatures. In this study, we applied SAMGSR to an NSCLC gene expression dataset. When compared with several novel feature selection algorithms, for example LASSO, SAMGSR has equivalent or better performance in terms of predictive ability and model parsimony. Therefore, SAMGSR is indeed a feature selection algorithm. Additionally, we applied SAMGSR to the AC and SCC subtypes separately to discriminate their respective stages, that is, stage II versus stage I. The few overlaps between the two resulting gene signatures illustrate that AC and SCC are indeed distinct diseases. Therefore, stratified analyses on subtypes are recommended when diagnostic or prognostic signatures of these two NSCLC subtypes are constructed. PMID:27446945
Yan, Aimin; Wu, Xizeng; Liu, Hong
2010-01-01
Phase retrieval is an important task in x-ray phase contrast imaging. The robustness of phase retrieval is especially important for potential medical imaging applications such as phase contrast mammography. Recently the authors developed an iterative phase retrieval algorithm, the attenuation-partition based algorithm, for phase retrieval in inline phase-contrast imaging [1]. Applied to experimental images, the algorithm proved to be fast and robust. However, a quantitative analysis of the performance of this new algorithm is desirable. In this work, we systematically compared the performance of this algorithm with two other widely used phase retrieval algorithms, namely the Gerchberg-Saxton (GS) algorithm and the Transport of Intensity Equation (TIE) algorithm. The systematic comparison is conducted by analyzing phase retrieval performance with a digital breast specimen model. We show that the proposed algorithm converges faster than the GS algorithm in the Fresnel diffraction regime and is more robust against image noise than the TIE algorithm. These results suggest the significance of the proposed algorithm for future medical applications of the x-ray phase contrast imaging technique. PMID:20720992
Stereo matching: performance study of two global algorithms
NASA Astrophysics Data System (ADS)
Arunagiri, Sarala; Jordan, Victor J.; Teller, Patricia J.; Deroba, Joseph C.; Shires, Dale R.; Park, Song J.; Nguyen, Lam H.
2011-06-01
Techniques such as clinometry, stereoscopy, interferometry, and polarimetry are used for Digital Elevation Model (DEM) generation from Synthetic Aperture Radar (SAR) images. The choice of technique depends on the SAR configuration, the means used for image acquisition, and the relief type. The most popular techniques are interferometry for regions of high coherence and stereoscopy for regions such as steep forested mountain slopes. Stereo matching, which finds the disparity map or correspondence points between two images acquired from different sensor positions, is a core process in stereoscopy. Additionally, automatic stereo processing, which involves stereo matching, is an important process in other applications including vision-based obstacle avoidance for unmanned air vehicles (UAVs), extraction of weak targets in clutter, and automatic target detection. Due to its high computational complexity, stereo matching has traditionally been, and continues to be, one of the most heavily investigated topics in computer vision. A stereo matching algorithm performs a subset of the following four steps: cost computation, cost (support) aggregation, disparity computation/optimization, and disparity refinement. Based on the method used for cost computation, algorithms are classified into feature-, phase-, and area-based algorithms; and they are classified as local or global based on how they perform disparity computation/optimization. We present a comparative performance study of two pairs, i.e., four versions, of global stereo matching codes. Each pair uses a different minimization technique: a simulated annealing or graph cut algorithm. And, the codes of a pair differ in terms of the employed global cost function: absolute difference (AD) or a variation of normalized cross correlation (NCC). The performance comparison is in terms of execution time, the global minimum cost achieved, power and energy consumption, and the quality of generated output. The results of
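To make the four steps concrete, a local baseline is easy to write down. The sketch below is illustrative only (the paper's codes are global and minimize with simulated annealing or graph cuts): it computes the absolute-difference cost volume and picks disparities by winner-take-all.

```python
# AD cost volume + winner-take-all disparity (a local baseline).
import numpy as np

def ad_disparity(left, right, max_disp):
    h, w = left.shape
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        # Cost of matching left pixel (y, x) with right pixel (y, x - d).
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, :w - d])
    return cost.argmin(axis=0)          # winner-take-all disparity map

left = np.random.rand(32, 32)
right = np.roll(left, -2, axis=1)       # synthetic 2-pixel shift (wraps at edge)
print(ad_disparity(left, right, 4))
```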
Performance evaluation of operational atmospheric correction algorithms over the East China Seas
NASA Astrophysics Data System (ADS)
He, Shuangyan; He, Mingxia; Fischer, Jürgen
2016-04-01
To acquire high-quality operational data products for Chinese in-orbit and scheduled ocean color sensors, the performances of two operational atmospheric correction (AC) algorithms (ESA MEGS 7.4.1 and NASA SeaDAS 6.1) were evaluated over the East China Seas (ECS) using MERIS data. The spectral remote sensing reflectance Rrs(λ), aerosol optical thickness (AOT), and Ångström exponent (α) retrieved using the two algorithms were validated using in situ measurements obtained between May 2002 and October 2009. Match-ups of Rrs, AOT, and α between the in situ and MERIS data were obtained through strict exclusion criteria. Statistical analysis of Rrs(λ) showed a mean percentage difference (MPD) of 9%-13% in the 490-560 nm spectral range, and significant overestimation was observed at 413 nm (MPD>72%). The AOTs were overestimated (MPD>32%), and although the ESA algorithm outperformed the NASA algorithm in the blue-green bands, the situation was reversed in the red-near-infrared bands. The value of α was markedly underestimated by the ESA algorithm (MPD=41%), and less so by the NASA algorithm (MPD=35%). To clarify why the NASA algorithm performed better in the retrieval of α, scatter plots of the α versus single scattering albedo (SSA) density were prepared. These α-SSA density scatter plots showed that the aerosol models used by the NASA algorithm are more applicable over the ECS than those used by the ESA algorithm, although neither aerosol model is well suited to the ECS region. The results of this study provide a reference for both data users and data agencies regarding the use of operational data products and the investigation into the improvement of current AC schemes over the ECS.
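The match-up statistic is simple to reproduce. A sketch of one common mean-percentage-difference definition follows (the paper's exact formula may differ; the values below are illustrative):

```python
# Mean percentage difference (MPD) for match-up validation.
import numpy as np

def mpd(retrieved, in_situ):
    retrieved, in_situ = np.asarray(retrieved), np.asarray(in_situ)
    return 100.0 * np.mean(np.abs(retrieved - in_situ) / np.abs(in_situ))

rrs_insitu = np.array([0.0042, 0.0051, 0.0036])   # sr^-1, illustrative values
rrs_meris = np.array([0.0047, 0.0055, 0.0042])
print(f"MPD = {mpd(rrs_meris, rrs_insitu):.1f}%")
```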
Performance evaluation of PCA-based spike sorting algorithms.
Adamos, Dimitrios A; Kosmidis, Efstratios K; Theophilidis, George
2008-09-01
Deciphering the electrical activity of individual neurons from multi-unit noisy recordings is critical for understanding complex neural systems. A widely used spike sorting algorithm is being evaluated for single-electrode nerve trunk recordings. The algorithm is based on principal component analysis (PCA) for spike feature extraction. In the neuroscience literature it is generally assumed that the use of the first two or most commonly three principal components is sufficient. We estimate the optimum PCA-based feature space by evaluating the algorithm's performance on simulated series of action potentials. A number of modifications are made to the open source nev2lkit software to enable systematic investigation of the parameter space. We introduce a new metric to define clustering error considering over-clustering more favorable than under-clustering as proposed by experimentalists for our data. Both the program patch and the metric are available online. Correlated and white Gaussian noise processes are superimposed to account for biological and artificial jitter in the recordings. We report that the employment of more than three principal components is in general beneficial for all noise cases considered. Finally, we apply our results to experimental data and verify that the sorting process with four principal components is in agreement with a panel of electrophysiology experts. PMID:18565614
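The pipeline under evaluation is compact enough to sketch. Below, a toy version with scikit-learn: aligned spike waveforms are projected onto the first n principal components and clustered. The study's point is that n > 3 can pay off, so n_components is left as a knob; the waveforms here are synthetic stand-ins.

```python
# PCA feature extraction followed by clustering, as in spike sorting.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# 200 detected spikes, 48 samples each (toy stand-in for real waveforms).
spikes = np.concatenate([rng.normal(0.0, 1.0, (100, 48)),
                         rng.normal(2.0, 1.0, (100, 48))])

features = PCA(n_components=4).fit_transform(spikes)   # feature extraction
labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
print(np.bincount(labels))
```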
Jimenez, Edward Steven,
2013-09-01
The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms, such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel pipelines, hardware interpolators, and a varying memory hierarchy) that will allow for additional performance improvements.
Performance Trend of Different Algorithms for Structural Design Optimization
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.
1996-01-01
Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of different optimizers through the development of the computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from their performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimization technique SUMT) outperformed the others. At the optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and alleviating it can improve the efficiency of the optimizers.
Restoration algorithms and system performance evaluation for active imagers
NASA Astrophysics Data System (ADS)
Gilles, Jérôme
2007-10-01
This paper deals with two fields related to active imaging systems. First, we explore image processing algorithms to restore artefacts like speckle, scintillation, and image dancing caused by atmospheric turbulence. Next, we examine how to evaluate the performance of this kind of system. For this task, we propose a modified version of the German TRM3 metric, which yields MTF-like measures. We use the database acquired during the NATO-TG40 field trials for our tests.
Empirical study of self-configuring genetic programming algorithm performance and behaviour
NASA Astrophysics Data System (ADS)
Semenkin, E.; Semenkina, M.
2015-01-01
The behaviour of the self-configuring genetic programming algorithm with a modified uniform crossover operator, which applies selective pressure at the recombination stage, is studied on symbolic programming problems. The interplay of the operator's probabilistic rates is studied, and the effect of operator variants on algorithm performance is investigated. Algorithm modifications based on the results of these investigations are suggested. The performance improvement of the algorithm is demonstrated by a comparative analysis of the suggested algorithms on benchmark and real-world problems.
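The modified crossover can be illustrated in a few lines. In this hedged sketch, each gene is inherited from the fitter parent with probability p > 0.5, which is one simple way to add selective pressure at recombination (p = 0.5 recovers plain uniform crossover; the self-configuring machinery of the paper is omitted):

```python
# Uniform crossover with selective pressure toward the fitter parent.
import random

def biased_uniform_crossover(fitter, weaker, p=0.7):
    return [f if random.random() < p else w for f, w in zip(fitter, weaker)]

parent_a, fit_a = [1, 0, 1, 1, 0, 1], 0.9   # illustrative genomes and fitness
parent_b, fit_b = [0, 1, 0, 0, 1, 0], 0.4
fitter, weaker = (parent_a, parent_b) if fit_a >= fit_b else (parent_b, parent_a)
print(biased_uniform_crossover(fitter, weaker))
```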
Gietzelt, Matthias; Wolf, Klaus-Hendrik; Marschollek, Michael; Haux, Reinhold
2013-07-01
Calibration of accelerometers can be reduced to a 3D-ellipsoid fitting problem. Changing extrinsic factors such as temperature, pressure, or humidity, as well as intrinsic factors such as battery status, demand that the measurements be calibrated continually. Thus, there is a need for fast calibration algorithms, e.g. for online analyses. The primary aim of this paper is to propose a non-iterative calibration algorithm for accelerometers with a focus on minimal execution time and low memory consumption. The secondary aim is to benchmark existing calibration algorithms based on 3D-ellipsoid fitting methods. We compared the algorithms regarding calibration quality and execution time, as well as the number of quasi-static measurements needed for a stable calibration. As the evaluation criterion for calibration, the norm of calibrated real-life measurements during inactivity as well as simulation data was used. The algorithms showed high calibration quality, but their execution times differed significantly. The calibration method proposed in this paper showed the shortest execution time and very good performance regarding the number of measurements needed to produce stable results. Furthermore, this algorithm was successfully implemented on a sensor node and calibrates the measured data on-the-fly while continuously storing the measured data to a microSD card. PMID:23566707
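A non-iterative fit of this kind can be obtained by linearizing the ellipsoid equation and solving a single least-squares system. The sketch below handles only the axis-aligned case (per-axis offsets and scales; the cross-axis terms a full 3D-ellipsoid fit would include are omitted for brevity) and is not the paper's algorithm itself:

```python
# Least-squares (non-iterative) axis-aligned ellipsoid fit: find offsets o
# and scales s such that sum(((a - o) / s)**2) == 1 for quasi-static
# samples a, with gravity normalized to 1 g.
import numpy as np

def calibrate(samples):                       # samples: (N, 3) raw readings
    a = np.asarray(samples, float)
    D = np.hstack([a**2, a])                  # one row per sample
    u, *_ = np.linalg.lstsq(D, np.ones(len(a)), rcond=None)
    Ap, Bp = u[:3], u[3:]                     # scaled quadratic/linear coeffs
    offset = -Bp / (2.0 * Ap)
    k = 1.0 / (1.0 + np.sum(Ap * offset**2))  # undo the common scale factor
    return offset, 1.0 / np.sqrt(k * Ap)      # offsets, per-axis scales

rng = np.random.default_rng(2)
g = rng.normal(size=(500, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)      # points on the unit sphere
raw = g * [1.1, 0.9, 1.05] + [0.02, -0.05, 0.1]    # true scales and offsets
print(calibrate(raw))
```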
Sankaran, Ramanan; Angel, Jordan; Brown, W. Michael
2015-04-08
The growth in size of networked high performance computers along with novel accelerator-based node architectures has further emphasized the importance of communication efficiency in high performance computing. The world's largest high performance computers are usually operated as shared user facilities due to the costs of acquisition and operation. Applications are scheduled for execution in a shared environment and are placed on nodes that are not necessarily contiguous on the interconnect. Furthermore, the placement of tasks on the nodes allocated by the scheduler is sub-optimal, leading to performance loss and variability. Here, we investigate the impact of task placement on the performance of two massively parallel application codes on the Titan supercomputer, a turbulent combustion flow solver (S3D) and a molecular dynamics code (LAMMPS). Benchmark studies show a significant deviation from ideal weak scaling and variability in performance. The inter-task communication distance was determined to be one of the significant contributors to the performance degradation and variability. A genetic algorithm-based parallel optimization technique was used to optimize the task ordering. This technique provides an improved placement of the tasks on the nodes, taking into account the application's communication topology and the system interconnect topology. As a result, application benchmarks after task reordering through genetic algorithm show a significant improvement in performance and reduction in variability, therefore enabling the applications to achieve better time to solution and scalability on Titan during production.
Performance Analysis of Apriori Algorithm with Different Data Structures on Hadoop Cluster
NASA Astrophysics Data System (ADS)
Singh, Sudhakar; Garg, Rakhi; Mishra, P. K.
2015-10-01
Mining frequent itemsets from massive datasets has always been an important problem of data mining. Apriori is the most popular and simplest algorithm for frequent itemset mining. To enhance the efficiency and scalability of Apriori, a number of algorithms have been proposed addressing the design of efficient data structures, minimizing database scans, and parallel and distributed processing. MapReduce is the emerging parallel and distributed technology to process big datasets on a Hadoop cluster. To mine big datasets it is essential to re-design data mining algorithms on this new paradigm. In this paper, we implement three variations of the Apriori algorithm using the data structures hash tree, trie, and hash table trie (i.e., a trie with a hashing technique) on the MapReduce paradigm. We emphasize and investigate the significance of these three data structures for Apriori on a Hadoop cluster, which has not been given attention yet. Experiments are carried out on both real-life and synthetic datasets, and they show that the hash table trie data structure performs far better than the trie and the hash tree in terms of execution time. Moreover, the performance of the hash tree is the worst of the three.
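The "hash table trie" is essentially a trie whose child links are hash maps. The sketch below shows the candidate support-counting step with nested Python dicts; the MapReduce layer is omitted, so this corresponds only in spirit to what a single mapper would do:

```python
# Candidate support counting over a hash-table trie (nested dicts).
from itertools import combinations

def build_trie(candidates):                  # candidates: sorted tuples
    root = {}
    for itemset in candidates:
        node = root
        for item in itemset:
            node = node.setdefault(item, {})
        node["#count"] = 0
    return root

def count(trie, transactions, k):
    for t in transactions:
        for subset in combinations(sorted(t), k):
            node = trie
            for item in subset:
                node = node.get(item)
                if node is None:
                    break
            else:                            # subset is a candidate: count it
                node["#count"] += 1

trie = build_trie([("a", "b"), ("a", "c"), ("b", "c")])
count(trie, [{"a", "b", "c"}, {"a", "c"}], k=2)
print(trie)
```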
Performance of humans vs. exploration algorithms on the Tower of London Test.
Fimbel, Eric; Lauzon, Stéphane; Rainville, Constant
2009-01-01
The Tower of London Test (TOL), used to assess executive functions, was inspired by Artificial Intelligence tasks used to test problem-solving algorithms. In this study, we compare the performance of humans and of exploration algorithms. Instead of absolute execution times, we focus on how the execution time varies with the task and/or the number of moves. This approach, used in Algorithmic Complexity, provides a fair comparison between humans and computers, although humans are several orders of magnitude slower. On easy tasks (1 to 5 moves), healthy elderly persons performed like exploration algorithms using bounded memory resources, i.e., the execution time grew exponentially with the number of moves. This result was replicated with a group of healthy young participants. However, for difficult tasks (5 to 8 moves) the execution time of young participants did not increase significantly, whereas for exploration algorithms the execution time keeps increasing exponentially. A pre- and post-test control task showed a 25% improvement in visuo-motor skills, but this was insufficient to explain the result. The findings suggest that naive participants used systematic exploration to solve the problem but, under the effect of practice, developed markedly more efficient strategies using the information acquired during the test. PMID:19787066
A Genetic Algorithm for Learning Significant Phrase Patterns in Radiology Reports
Patton, Robert M; Potok, Thomas E; Beckerman, Barbara G; Treadwell, Jim N
2009-01-01
Radiologists disagree with each other over the characteristics and features of what constitutes a normal mammogram and the terminology to use in the associated radiology report. Recently, the focus has been on classifying abnormal or suspicious reports, but even this process needs further layers of clustering and gradation, so that individual lesions can be more effectively classified. Using a genetic algorithm, the approach described here successfully learns phrase patterns for two distinct classes of radiology reports (normal and abnormal). These patterns can then be used as a basis for automatically analyzing, categorizing, clustering, or retrieving relevant radiology reports for the user.
Implementation and performance of a domain decomposition algorithm in Sisal
DeBoni, T.; Feo, J.; Rodrigue, G.; Muller, J.
1993-09-23
Sisal is a general-purpose functional language that hides the complexity of parallel processing, expedites parallel program development, and guarantees determinacy. Parallelism and management of concurrent tasks are realized automatically by the compiler and runtime system. Spatial domain decomposition is a widely used method that focuses computational resources on the most active, or important, areas of a domain. Many complex programming issues are introduced in parallelizing this method, including: dynamic spatial refinement, dynamic grid partitioning and fusion, task distribution, data distribution, and load balancing. In this paper, we describe a spatial domain decomposition algorithm programmed in Sisal. We explain the compilation process and present the execution performance of the resultant code on two different multiprocessor systems: a multiprocessor vector supercomputer and a cache-coherent scalar multiprocessor.
Performance analysis of bearings-only tracking algorithm
NASA Astrophysics Data System (ADS)
van Huyssteen, David; Farooq, Mohamad
1998-07-01
A number of 'bearings-only' target motion analysis algorithms have appeared in the literature over the years, all suited to tracking an object based solely on noisy measurements of its angular position. In their paper 'Utilization of Modified Polar (MP) Coordinates for Bearings-Only Tracking', Aidala and Hammel advocate a filter in which the observable and unobservable states are naturally decoupled. While the MP filter has certain advantages over Cartesian and pseudolinear extended Kalman filters, it does not escape the requirement that the observer steer an optimal maneuvering course to guarantee acceptable performance. This paper demonstrates by simulation the consequences when the observer deviates from this profile, even when the deviation is still sufficient to produce full state observability.
Detrending moving average algorithm: Frequency response and scaling performances.
Carbone, Anna; Kiyono, Ken
2016-06-01
The Detrending Moving Average (DMA) algorithm has been widely used in its several variants for characterizing long-range correlations of random signals and sets (one-dimensional sequences or high-dimensional arrays) over either time or space. In this paper, mainly based on analytical arguments, the scaling performances of the centered DMA, including higher-order ones, are investigated by means of a continuous time approximation and a frequency response approach. Our results are also confirmed by numerical tests. The study is carried out for higher-order DMA operating with moving average polynomials of different degree. In particular, detrending power degree, frequency response, asymptotic scaling, upper limit of the detectable scaling exponent, and finite scale range behavior will be discussed. PMID:27415389
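The centered, order-0 variant of the algorithm fits in a dozen lines, which makes the scaling analysis easy to reproduce numerically. A sketch follows (the window sizes and the white-noise test signal are arbitrary choices of this illustration):

```python
# Centered (order-0) DMA: detrend the integrated signal with a centered
# moving average of window n, then read the scaling exponent off the
# log-log slope of the fluctuation function F(n).
import numpy as np

def dma_fluctuation(x, n):                     # n odd, so the window centers
    y = np.cumsum(x - np.mean(x))              # integrated signal
    trend = np.convolve(y, np.ones(n) / n, mode="same")
    half = n // 2
    resid = y[half:-half] - trend[half:-half]  # drop edge effects
    return np.sqrt(np.mean(resid**2))

rng = np.random.default_rng(3)
x = rng.normal(size=2**14)                     # white noise: expect alpha ~ 0.5
ns = np.array([5, 9, 17, 33, 65, 129])
F = [dma_fluctuation(x, n) for n in ns]
alpha = np.polyfit(np.log(ns), np.log(F), 1)[0]
print(f"estimated scaling exponent: {alpha:.2f}")
```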
Burg algorithm for enhancing measurement performance in wavelength scanning interferometry
NASA Astrophysics Data System (ADS)
Woodcock, Rebecca; Muhamedsalih, Hussam; Martin, Haydn; Jiang, Xiangqian
2016-06-01
Wavelength scanning interferometry (WSI) is a technique for measuring surface topography that is capable of resolving step discontinuities and does not require any mechanical movement of the apparatus or measurand, allowing measurement times to be reduced substantially in comparison to related techniques. The axial (height) resolution and measurement range in WSI depend in part on the algorithm used to evaluate the spectral interferograms. Previously reported Fourier-transform-based methods have a number of limitations, in part due to the short data lengths obtained. This paper compares the performance of auto-regressive-model-based techniques for frequency estimation in WSI. Specifically, the Burg method is compared with established Fourier-transform-based approaches using both simulation and experimental data taken from a WSI measurement of a step-height sample.
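For reference, Burg's recursion itself is short. The sketch below fits AR coefficients to one pixel's (synthetic) spectral interferogram and locates the spectral peak, which in WSI maps to surface height; the signal model and the AR order are assumptions of this illustration:

```python
# Burg AR estimation and peak-frequency readout for one short signal.
import numpy as np

def burg(x, order):
    f, b = x[1:].astype(float), x[:-1].astype(float)   # prediction errors
    a = np.zeros(0)
    for _ in range(order):
        k = -2.0 * (f @ b) / ((f @ f) + (b @ b))       # reflection coefficient
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([a[::-1], [1.0]])
        f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]  # Levinson-style update
    return a                                           # A(z) = 1 + a1 z^-1 + ...

rng = np.random.default_rng(4)
n = np.arange(64)                                      # short data record
x = np.cos(2 * np.pi * 0.12 * n + 0.3) + 0.1 * rng.normal(size=64)

a = burg(x, order=2)
freqs = np.linspace(0.0, 0.5, 2048)
z = np.exp(-2j * np.pi * freqs)                        # unit-circle evaluation
A = 1 + sum(ak * z**(m + 1) for m, ak in enumerate(a))
print(f"estimated frequency: {freqs[np.argmin(np.abs(A))]:.3f} cycles/sample")
```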
Detrending moving average algorithm: Frequency response and scaling performances
NASA Astrophysics Data System (ADS)
Carbone, Anna; Kiyono, Ken
2016-06-01
The Detrending Moving Average (DMA) algorithm has been widely used in its several variants for characterizing long-range correlations of random signals and sets (one-dimensional sequences or high-dimensional arrays) over either time or space. In this paper, mainly based on analytical arguments, the scaling performances of the centered DMA, including higher-order ones, are investigated by means of a continuous time approximation and a frequency response approach. Our results are also confirmed by numerical tests. The study is carried out for higher-order DMA operating with moving average polynomials of different degree. In particular, detrending power degree, frequency response, asymptotic scaling, upper limit of the detectable scaling exponent, and finite scale range behavior will be discussed.
Angus, Simon D.; Piotrowska, Monika Joanna
2014-01-01
Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for a relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model, a constrained, non-linear search for better-performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) in tumour cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17–18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost
NASA Astrophysics Data System (ADS)
Hou, Zhen-Long; Wei, Xiao-Hui; Huang, Da-Nian; Sun, Xu
2015-09-01
We apply reweighted inversion focusing to full tensor gravity gradiometry data using message-passing interface (MPI) and compute unified device architecture (CUDA) parallel computing algorithms, and then combine MPI with CUDA to formulate a hybrid algorithm. Parallel computing performance metrics are introduced to analyze and compare the performance of the algorithms, and we summarize the rules for the performance evaluation of parallel algorithms. We use model data and real data from the Vinton salt dome to test the algorithms. We find a good match between the model and real density data, and verify the high efficiency and feasibility of parallel computing algorithms in the inversion of full tensor gravity gradiometry data.
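The metrics in question are presumably the standard ones; as a reminder, under the usual definitions speedup is S_p = T_1/T_p and parallel efficiency is E_p = S_p/p:

```python
# Standard parallel-performance metrics (assumed definitions).
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    return speedup(t_serial, t_parallel) / p

print(speedup(120.0, 8.0), efficiency(120.0, 8.0, 32))  # illustrative timings
```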
Performance comparison of neural network training algorithms in modeling of bimodal drug delivery.
Ghaffari, A; Abdollahi, H; Khoshayand, M R; Bozchalooi, I Soltani; Dadgar, A; Rafiee-Tehrani, M
2006-12-11
The major aim of this study was to model the effect of two causal factors, i.e. coating weight gain and the amount of pectin-chitosan in the coating solution, on the in vitro release profile of theophylline for bimodal drug delivery. An artificial neural network (ANN), as a multilayer perceptron feedforward network, was used to develop a predictive model of the formulations. Five different training algorithms belonging to three classes, gradient descent, quasi-Newton (Levenberg-Marquardt, LM) and genetic algorithm (GA), were used to train an ANN containing a single hidden layer of four nodes. The next objective of the current study was to compare the performance of the aforementioned algorithms with regard to predictive ability. The ANNs were trained with those algorithms using the available experimental data as the training set. The divergence of the RMSE between the output and target values of the test set was monitored and used as a criterion to stop training. Two versions of the gradient descent backpropagation algorithm, i.e. incremental backpropagation (IBP) and batch backpropagation (BBP), outperformed the others. No significant differences were found between the predictive abilities of IBP and BBP, although the convergence speed of BBP is three- to four-fold higher than that of IBP. Although both the gradient descent backpropagation and LM methodologies gave comparable results for the data modeling, training of ANNs with the genetic algorithm was erratic. The precision of predictive ability was measured for each training algorithm, and their performances were in the order: IBP, BBP > LM > QP (quick propagation) > GA. According to the BBP-ANN implementation, an increase in coating levels and a decrease in the amount of pectin-chitosan generally retarded drug release. Moreover, the latter causal factor, namely the amount of pectin-chitosan, played a slightly more dominant role in determining the dissolution profiles. PMID:16959449
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2002-01-01
As part of the NASA Aviation Safety Program, a unique model-based diagnostics method that employs neural networks and genetic algorithms for aircraft engine performance diagnostics has been developed and demonstrated at the NASA Glenn Research Center against a nonlinear gas turbine engine model. Neural networks are applied to estimate the internal health condition of the engine, and genetic algorithms are used for sensor fault detection, isolation, and quantification. This hybrid architecture combines the excellent nonlinear estimation capabilities of neural networks with the capability to rank the likelihood of various faults given a specific sensor suite signature. The method requires a significantly smaller data training set than a neural network approach alone does, and it performs the combined engine health monitoring objectives of performance diagnostics and sensor fault detection and isolation in the presence of nominal and degraded engine health conditions.
An efficient algorithm to perform multiple testing in epistasis screening
2013-01-01
Background Research in epistasis or gene-gene interaction detection for human complex traits has grown over the last few years. It has been marked by promising methodological developments, improved translation efforts of statistical epistasis to biological epistasis, and attempts to integrate different omics information sources into the epistasis screening to enhance power. The quest for gene-gene interactions poses severe multiple-testing problems. In this context, the maxT algorithm is one technique to control the false-positive rate. However, the memory needed by this algorithm rises linearly with the number of hypothesis tests. Gene-gene interaction studies will require memory proportional to the squared number of SNPs. A genome-wide epistasis search would therefore require terabytes of memory. Hence, cache problems are likely to occur, increasing the computation time. In this work we present a new version of maxT, requiring an amount of memory independent of the number of genetic effects to be investigated. This algorithm was implemented in C++ in our epistasis screening software MBMDR-3.0.3. We evaluate the new implementation in terms of memory efficiency and speed using simulated data. The software is illustrated on real-life data for Crohn’s disease. Results In the case of a binary (affected/unaffected) trait, the parallel workflow of MBMDR-3.0.3 analyzes all gene-gene interactions in a dataset of 100,000 SNPs typed on 1000 individuals within 4 days and 9 hours, using 999 permutations of the trait to assess statistical significance, on a cluster composed of 10 blades, each containing four Quad-Core AMD Opteron(tm) 2352 2.1 GHz processors. In the case of a continuous trait, a similar run takes 9 days. Our program found 14 SNP-SNP interactions with a multiple-testing corrected p-value of less than 0.05 on the real-life Crohn’s disease (CD) data. Conclusions Our software is the first implementation of the MB-MDR methodology able to solve large-scale SNP
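The memory argument is easy to see in code: per permutation, only the maximum statistic over all tests is kept, so storage is independent of the number of hypotheses. A toy sketch with a simple mean-difference statistic standing in for the MB-MDR test statistic:

```python
# maxT with O(B) memory: keep one running maximum per permutation.
import numpy as np

rng = np.random.default_rng(5)
y = rng.integers(0, 2, 200)                  # binary trait
X = rng.normal(size=(200, 1000))             # 1000 effects to test (toy data)

def stats(y, X):
    # Simple two-group mean-difference statistic per column.
    return np.abs(X[y == 1].mean(0) - X[y == 0].mean(0))

t_obs = stats(y, X)
B = 999
max_t = np.array([stats(rng.permutation(y), X).max() for _ in range(B)])

# Single-step maxT adjusted p-values.
p_adj = (1 + (max_t[:, None] >= t_obs[None, :]).sum(0)) / (B + 1)
print(p_adj.min())
```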
Brambley, Michael R.; Katipamula, Srinivas
2006-10-06
Pacific Northwest National Laboratory (PNNL) is assisting the U.S. Department of Energy (DOE) Distributed Energy (DE) Program by developing advanced control algorithms that would lead to development of tools to enhance performance and reliability, and reduce emissions of distributed energy technologies, including combined heat and power technologies. This report documents phase 2 of the program, providing a detailed functional specification for algorithms for performance monitoring and commissioning verification, scheduled for development in FY 2006. The report identifies the systems for which algorithms will be developed, the specific functions of each algorithm, metrics which the algorithms will output, and inputs required by each algorithm.
Computational Performance Assessment of k-mer Counting Algorithms.
Pérez, Nelson; Gutierrez, Miguel; Vera, Nelson
2016-04-01
This article assesses several tools for k-mer counting, with the purpose of creating a reference framework for bioinformatics researchers to identify the computational requirements, parallelization, advantages, disadvantages, and bottlenecks of each of the algorithms proposed in the tools. The k-mer counters evaluated in this article were BFCounter, DSK, Jellyfish, KAnalyze, KHMer, KMC2, MSPKmerCounter, Tallymer, and Turtle. The measured parameters were: occupied RAM space, processing time, parallelization, and read and write disk access. A dataset consisting of 36,504,800 reads corresponding to human chromosome 14 was used. The assessment was performed for two k-mer lengths: 31 and 55. The results obtained were as follows: pure Bloom-filter-based tools and disk-partitioning techniques showed lower RAM use. The tools that took the least execution time were the ones that used disk-partitioning techniques. The techniques that achieved the greatest parallelization were the ones that used disk partitioning, hash tables with a lock-free approach, or multiple hash tables. PMID:26982880
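The baseline all these tools improve on is a plain hash-table counter; the disk-partitioning trick amounts to splitting k-mers across tables by a hash value so each table stays small. A sketch (in-memory only; real tools spill partitions to disk, and Python's string hash varies between runs):

```python
# Hash-based k-mer counting with a crude partitioning flavor.
from collections import Counter

def count_kmers(reads, k, partitions=4):
    tables = [Counter() for _ in range(partitions)]
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            tables[hash(kmer) % partitions][kmer] += 1
    return tables

for part in count_kmers(["ACGTACGTGG", "CGTACGTACC"], k=4):
    print(dict(part))
```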
Shaswary, Elyas; Xu, Yuan; Tavakkoli, Jahan
2016-07-01
Time-delay estimation has countless applications in ultrasound medical imaging. Previously, we proposed a new time-delay estimation algorithm based on the summation of the sign function to compute the time-delay estimate (Shaswary et al., 2015). We reported that the proposed algorithm performs similarly to the normalized cross-correlation (NCC) and sum squared differences (SSD) algorithms, even though it is significantly more computationally efficient. In this paper, we study the performance of the proposed algorithm using statistical analysis and image quality analysis in ultrasound elastography imaging. Field II simulation software was used to generate ultrasound radio frequency (RF) echo signals for the statistical analysis, and a clinical ultrasound scanner (Sonix® RP scanner, Ultrasonix Medical Corp., Richmond, BC, Canada) was used to scan a commercial ultrasound elastography tissue-mimicking phantom for the image quality analysis. The statistical analysis confirmed that, overall, the proposed algorithm performs similarly to the NCC and SSD algorithms. The image quality analysis indicated that the proposed algorithm produces strain images with marginally higher signal-to-noise and contrast-to-noise ratios than the NCC and SSD algorithms. PMID:27010697
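The precise sign-function formulation is given in the cited 2015 paper; as a reference point, the NCC baseline it is compared against can be sketched as a search over integer lags for the maximum correlation coefficient. The sketch below (hypothetical function name, positive lags only) illustrates that baseline.

```python
import numpy as np

def ncc_time_delay(x, y, max_lag):
    """Return the integer lag (in samples) that maximizes the
    normalized cross-correlation between a window of x and the
    correspondingly lagged window of y."""
    n = min(len(x), len(y)) - max_lag
    ref = x[:n]
    best_lag, best_ncc = 0, -np.inf
    for lag in range(max_lag + 1):
        seg = y[lag:lag + n]
        ncc = np.dot(ref - ref.mean(), seg - seg.mean()) / (
            n * ref.std() * seg.std() + 1e-12)
        if ncc > best_ncc:
            best_lag, best_ncc = lag, ncc
    return best_lag
```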
NASA Astrophysics Data System (ADS)
Zhao, Wei; Niu, Tianye; Xing, Lei; Xie, Yaoqin; Xiong, Guanglei; Elmore, Kimberly; Zhu, Jun; Wang, Luyao; Min, James K.
2016-02-01
Increased noise is a general concern for dual-energy material decomposition. Here, we develop an image-domain material decomposition algorithm for dual-energy CT (DECT) by incorporating an edge-preserving filter into the Local HighlY constrained backPRojection reconstruction (HYPR-LR) framework. With effective use of the non-local mean, the proposed algorithm, referred to as HYPR-NLM, reduces the noise in dual-energy decomposition while preserving the accuracy of quantitative measurement and the spatial resolution of the material-specific dual-energy images. We demonstrate the noise reduction and resolution preservation of the algorithm with an iodine concentration numerical phantom by comparing HYPR-NLM to direct matrix inversion, HYPR-LR, and iterative image-domain material decomposition (Iter-DECT). We also show the superior performance of HYPR-NLM over the existing methods by using two sets of cardiac perfusion imaging data. The DECT material decomposition comparison study shows that all four algorithms yield acceptable quantitative measurements of iodine concentration. Direct matrix inversion yields the highest noise level, followed by HYPR-LR and Iter-DECT. HYPR-NLM in an iterative formulation significantly reduces image noise, to a level comparable to or even lower than that of Iter-DECT. For the HYPR-NLM method, there are only marginal edge effects in the difference image, suggesting that the high-frequency details are well preserved. In addition, when the search window size increases from 11×11 to 19×19, there are no significant changes or marginal edge effects in the HYPR-NLM difference images. The conclusions drawn from the comparison study are: (1) HYPR-NLM significantly reduces the DECT material decomposition noise while preserving quantitative measurements and high-frequency edge information, and (2) HYPR-NLM is robust with respect to parameter selection.
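As context for the noise comparison, the direct-matrix-inversion baseline named above amounts to solving a 2x2 linear system at every pixel. A minimal sketch, assuming a known 2x2 attenuation matrix `A` that maps the two material densities to the low- and high-energy measurements (names illustrative):

```python
import numpy as np

def direct_decomposition(img_low, img_high, A):
    """Direct matrix inversion for image-domain dual-energy material
    decomposition: at every pixel, solve A @ [m1, m2] = [low, high].
    This is the noise-amplifying baseline that HYPR-LR, Iter-DECT,
    and HYPR-NLM improve upon.
    """
    Ainv = np.linalg.inv(A)
    stacked = np.stack([img_low.ravel(), img_high.ravel()])  # 2 x Npix
    m = Ainv @ stacked
    return m[0].reshape(img_low.shape), m[1].reshape(img_high.shape)
```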
Performance of a parallel algorithm for standard cell placement on the Intel Hypercube
NASA Technical Reports Server (NTRS)
Jones, Mark; Banerjee, Prithviraj
1987-01-01
A parallel simulated annealing algorithm for standard cell placement on the Intel Hypercube is presented. A novel tree broadcasting strategy is used extensively for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than uniprocessor simulated annealing algorithms.
Chan, J.C.-W.; Huang, C.; DeFries, R.
2001-01-01
Two ensemble methods, bagging and boosting, were investigated for improving algorithm performance. Our results confirmed the theoretical explanation [1] that bagging improves unstable, but not stable, learning algorithms. While boosting enhanced accuracy of a weak learner, its behavior is subject to the characteristics of each learning algorithm.
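A toy reproduction of this contrast is straightforward with scikit-learn (assumed available): bagging is applied to an unstable learner (a deep decision tree), boosting to a weak one (a depth-1 stump). The dataset and hyperparameters below are arbitrary illustrations.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
tree = DecisionTreeClassifier(random_state=0)                 # unstable learner
stump = DecisionTreeClassifier(max_depth=1, random_state=0)   # weak learner

models = {
    "single tree": tree,
    "bagged trees": BaggingClassifier(tree, n_estimators=50, random_state=0),
    "boosted stumps": AdaBoostClassifier(stump, n_estimators=50, random_state=0),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```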
In-depth performance analysis of an EEG based neonatal seizure detection algorithm
Mathieson, S.; Rennie, J.; Livingstone, V.; Temko, A.; Low, E.; Pressler, R.M.; Boylan, G.B.
2016-01-01
Objective: To describe a novel neurophysiology-based performance analysis of automated seizure detection algorithms for neonatal EEG, characterizing features of detected and non-detected seizures and causes of false detections to identify areas for algorithmic improvement.

Methods: EEGs of 20 term neonates were recorded (10 seizure, 10 non-seizure). Seizures were annotated by an expert and characterized using a novel set of 10 criteria. ANSeR seizure detection algorithm (SDA) seizure annotations were compared to the expert's to derive detected and non-detected seizures at three SDA sensitivity thresholds. Differences in seizure characteristics between groups were compared using univariate and multivariate analysis. False detections were characterized.

Results: The expert detected 421 seizures. The SDA at thresholds 0.4, 0.5, and 0.6 detected 60%, 54%, and 45% of seizures. At all thresholds, multivariate analyses demonstrated that the odds of detecting a seizure increased with 4 criteria: seizure amplitude, duration, rhythmicity, and number of EEG channels involved at seizure peak. Major causes of false detections included respiration and sweat artefacts or a highly rhythmic background, often during intermediate sleep.

Conclusion: This rigorous analysis allows estimation of how key seizure features are exploited by SDAs.

Significance: This study resulted in a beta version of ANSeR with significantly improved performance. PMID:27072097
Sera White
2012-04-01
This thesis presents a research study using one year of driving data obtained from plug-in hybrid electric vehicles (PHEV) located in Sacramento and San Francisco, California to determine the effectiveness of incorporating geographic information into vehicle performance algorithms. Sacramento and San Francisco were chosen because of the availability of high resolution (1/9 arc second) digital elevation data. First, I present a method for obtaining instantaneous road slope, given a latitude and longitude, and introduce its use into common driving intensity algorithms. I show that for trips characterized by >40m of net elevation change (from key on to key off), the use of instantaneous road slope significantly changes the results of driving intensity calculations. For trips exhibiting elevation loss, algorithms ignoring road slope overestimated driving intensity by as much as 211 Wh/mile, while for trips exhibiting elevation gain these algorithms underestimated driving intensity by as much as 333 Wh/mile. Second, I describe and test an algorithm that incorporates vehicle route type into computations of city and highway fuel economy. Route type was determined by intersecting trip GPS points with ESRI StreetMap road types and assigning each trip as either city or highway route type according to whichever road type comprised the largest distance traveled. The fuel economy results produced by the geographic classification were compared to the fuel economy results produced by algorithms that assign route type based on average speed or driving style. Most results were within 1 mile per gallon (approximately 3%) of one another; the largest difference was 1.4 miles per gallon for charge depleting highway trips. The methods for acquiring and using geographic data introduced in this thesis will enable other vehicle technology researchers to incorporate geographic data into their research problems.
Dual Engine application of the Performance Seeking Control algorithm
NASA Technical Reports Server (NTRS)
Mueller, F. D.; Nobbs, S. G.; Stewart, J. F.
1993-01-01
The Dual Engine Performance Seeking Control (PSC) flight/propulsion optimization program has been developed and will be flown during the second quarter of 1993. Previously, only single engine optimization was possible due to the limited capability of the on-board computer. The implementation of Dual Engine PSC has been made possible with the addition of a new state-of-the-art, higher throughput computer. As a result, the single engine PSC performance improvements already flown will be demonstrated on both engines, simultaneously. Dual Engine PSC will make it possible to directly compare aircraft performance with and without the improvements generated by PSC. With the additional thrust achieved with PSC, significant improvements in acceleration times and time to climb will be possible. PSC is also able to reduce deceleration time from supersonic speeds. This paper traces the history of the PSC program, describes the basic components of PSC, discusses the development and implementation of Dual Engine PSC including additions to the code, and presents predictions of the impact of Dual Engine PSC on aircraft performance.
A novel ROC approach for performance evaluation of target detection algorithms
NASA Astrophysics Data System (ADS)
Ganapathy, Priya; Skipper, Julie A.
2007-04-01
Receiver operating characteristic (ROC) analysis is an emerging automated target recognition system performance assessment tool. The ROC metric, area under the curve (AUC), is a universally accepted measure of classification accuracy. In the presented approach, the detection algorithm output, i.e., a response plane (RP), must consist of grayscale values wherein a maximum value (e.g., 255) corresponds to the highest probability of a target location. AUC computation involves the comparison of the RP and the ground truth to classify RP pixels as true positives (TP), true negatives (TN), false positives (FP), or false negatives (FN). Ideally, the background and all objects other than targets are TN. Historically, evaluation methods have excluded the background, and only a few spoof objects likely to be considered a hit by detection algorithms were a priori demarcated as TN. This can potentially exaggerate an algorithm's performance. Here, a new ROC approach has been developed that divides the entire image into mutually exclusive target (TP) and background (TN) grid squares of adjustable size. Based on the overlap of the thresholded RP with the TP and TN grids, the FN and FP fractions are computed. Variation of the grid square size can bias the ROC results by artificially altering specificity, so an assessment of relative performance under a constant grid square size is adopted in our approach. A pilot study was performed to assess the method's ability to capture RP changes under three different detection algorithm parameter settings on ten images with different backgrounds and target orientations. An ANOVA-based comparison of the AUCs for the three settings showed a significant difference (p<0.001) at the 95% confidence level.
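A minimal sketch of the grid-based ROC idea described above, assuming a grayscale response plane and a binary ground-truth mask supplied as NumPy arrays (the function name, grid size, and threshold sweep are hypothetical defaults):

```python
import numpy as np

def grid_roc_auc(rp, target_mask, grid=16, thresholds=range(0, 256, 8)):
    """Grid-based ROC analysis of a grayscale response plane `rp`.

    The image is tiled into grid x grid squares; a square is positive
    (target) if it overlaps the ground-truth mask and negative
    (background) otherwise, so the whole background contributes TN.
    A square counts as detected at a threshold if any RP pixel in it
    exceeds that threshold.  Assumes at least one square of each class.
    """
    h, w = rp.shape
    is_target, square_max = [], []
    for r in range(0, h - grid + 1, grid):
        for c in range(0, w - grid + 1, grid):
            is_target.append(bool(target_mask[r:r + grid, c:c + grid].any()))
            square_max.append(rp[r:r + grid, c:c + grid].max())
    pos = np.array(is_target)
    score = np.array(square_max)
    tpr = [(score[pos] > t).mean() for t in thresholds]
    fpr = [(score[~pos] > t).mean() for t in thresholds]
    return abs(np.trapz(tpr, fpr))        # area under the ROC curve
```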
Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.
1999-01-01
Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc, at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of the empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reverse faulting events. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8. Phys. Earth Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction. J. Geophys. Res. 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier
Subsonic flight test evaluation of a performance seeking control algorithm on an F-15 airplane
NASA Technical Reports Server (NTRS)
Gilyard, Glenn B.; Orme, John S.
1992-01-01
The subsonic flight test evaluation phase of the NASA F-15 (powered by F100 engines) performance seeking control program was completed for single-engine operation at part- and military-power settings. The subsonic performance seeking control algorithm optimizes the quasi-steady-state performance of the propulsion system for three modes of operation: the minimum fuel flow mode minimizes fuel consumption, the minimum temperature mode reduces fan turbine inlet temperature, and the maximum thrust mode maximizes thrust at military power. Decreases in thrust-specific fuel consumption of 1 to 2 percent were measured in the minimum fuel flow mode; these fuel savings are significant, especially for supersonic cruise aircraft. Decreases of up to approximately 100 degrees R in fan turbine inlet temperature were measured in the minimum temperature mode. Temperature reductions of this magnitude would more than double turbine life if inlet temperature were the only life factor. Measured thrust increases of up to approximately 15 percent in the maximum thrust mode cause substantial increases in aircraft acceleration. The system dynamics of the closed-loop algorithm operation were good. The subsonic flight phase has validated the performance seeking control technology, which can significantly benefit the next generation of fighter and transport aircraft.
NASA Technical Reports Server (NTRS)
Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra
1989-01-01
In Part 1, the architecture of NETRA is presented, along with a performance evaluation of NETRA using several common vision algorithms. The performance of algorithms when mapped onto one cluster is described. It is shown that SIMD, MIMD, and systolic algorithms can be easily mapped onto processor clusters, and almost linear speedups are possible. For some algorithms, analytical performance results are compared with implementation results, and it is observed that the analysis is very accurate. A performance analysis of parallel algorithms mapped across clusters is then presented. Mappings across clusters illustrate the importance and use of shared as well as distributed memory in achieving high performance. The parameters for evaluation are derived from the characteristics of the parallel algorithms, and these parameters are used to evaluate the alternative communication strategies in NETRA. Furthermore, the effect of communication interference from other processors in the system on the execution of an algorithm is studied. Using the analysis, the performance of many algorithms with different characteristics is presented. It is observed that if communication speeds are matched with the computation speeds, good speedups are possible when algorithms are mapped across clusters.
Performance and development plans for the Inner Detector trigger algorithms at ATLAS
NASA Astrophysics Data System (ADS)
Martin-Haugh, Stewart
2015-12-01
A description of the design and performance of the newly re-implemented tracking algorithms for the ATLAS trigger for LHC Run 2, commencing in spring 2015, is presented. The ATLAS High Level Trigger (HLT) has been restructured to run as a more flexible single-stage process, rather than the two separate Level 2 and Event Filter stages used during Run 1. To make optimal use of this new scenario, a new tracking strategy has been implemented for Run 2. This strategy uses a FastTrackFinder algorithm to directly seed the subsequent Precision Tracking, and results in improved track parameter resolution, significantly faster execution times than achieved during Run 1, and better efficiency. The timings of the algorithms for electron and tau track triggers are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves. The online deployment and commissioning are also discussed.
High-Performance Algorithm for Solving the Diagnosis Problem
NASA Technical Reports Server (NTRS)
Fijany, Amir; Vatan, Farrokh
2009-01-01
An improved method of model-based diagnosis of a complex engineering system is embodied in an algorithm that involves considerably less computation than do prior such algorithms. This method and algorithm are based largely on developments reported in several NASA Tech Briefs articles: The Complexity of the Diagnosis Problem (NPO-30315), Vol. 26, No. 4 (April 2002), page 20; Fast Algorithms for Model-Based Diagnosis (NPO-30582), Vol. 29, No. 3 (March 2005), page 69; Two Methods of Efficient Solution of the Hitting-Set Problem (NPO-30584), Vol. 29, No. 3 (March 2005), page 73; and Efficient Model-Based Diagnosis Engine (NPO-40544), on the following page. Some background information from the cited articles is prerequisite to a meaningful summary of the innovative aspects of the present method and algorithm. In model-based diagnosis, the function of each component and the relationships among all the components of the engineering system to be diagnosed are represented as a logical system denoted the system description (SD). Hence, the expected normal behavior of the engineering system is the set of logical consequences of the SD. Faulty components lead to inconsistencies between the observed behaviors of the system and the SD. Diagnosis, the task of finding the faulty components, is reduced to finding those components whose abnormalities could explain all the inconsistencies. The solution of the diagnosis problem should be a minimal diagnosis, which is a minimal set of faulty components. The calculation of a minimal diagnosis is inherently a hard problem, the solution of which requires amounts of computation time and memory that increase exponentially with the number of components of the engineering system. Among the developments to reduce the computational burden, as reported in the cited articles, is the mapping of the diagnosis problem onto the integer-programming (IP) problem. This mapping makes it possible to utilize a variety of algorithms developed previously.
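To make the notions of conflicts and minimal diagnoses concrete, the brute-force formulation can be sketched in a few lines; the cited work avoids exactly this exponential enumeration by mapping the search to an integer program. The helper below is purely illustrative.

```python
from itertools import combinations

def minimal_diagnoses(conflicts):
    """Brute-force minimal hitting sets: each conflict is a set of
    components at least one of which must be faulty; a minimal
    diagnosis is a smallest component set intersecting every conflict.
    Runtime grows exponentially with the number of components, which
    is the blow-up the integer-programming mapping is meant to tame.
    """
    components = sorted(set().union(*conflicts))
    for size in range(1, len(components) + 1):
        hits = [set(c) for c in combinations(components, size)
                if all(set(c) & conflict for conflict in conflicts)]
        if hits:
            return hits      # all minimum-cardinality diagnoses

# usage: minimal_diagnoses([{'A', 'B'}, {'B', 'C'}]) -> [{'B'}]
```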
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia; Coraor, Lee
2000-01-01
The research reported here is a part of NASA's Synthetic Vision System (SVS) project for the development of a High Speed Civil Transport Aircraft (HSCT). One of the components of the SVS is a module for detection of potential obstacles in the aircraft's flight path by analyzing the images captured by an on-board camera in real-time. Design of such a module includes the selection and characterization of robust, reliable, and fast techniques and their implementation for execution in real-time. This report describes the results of our research in realizing such a design. It is organized into three parts. Part I. Data modeling and camera characterization; Part II. Algorithms for detecting airborne obstacles; and Part III. Real time implementation of obstacle detection algorithms on the Datacube MaxPCI architecture. A list of publications resulting from this grant as well as a list of relevant publications resulting from prior NASA grants on this topic are presented.
Performance evaluation of power control algorithms in wireless cellular networks
NASA Astrophysics Data System (ADS)
Temaneh-Nyah, C.; Iita, V.
2014-10-01
Power control in a mobile communication network aims to control the transmission power levels in such a way that the required quality of service (QoS) for the users is guaranteed with the lowest possible transmission powers. Most studies of power control algorithms in the literature are based on simplifying assumptions, which compromises the validity of the results when applied in a real environment. In this paper, a CDMA network was simulated. The real environment was accounted for by defining the analysis area, specifying the base stations and mobile stations by their geographical coordinates, and modeling the mobility of the mobile stations. The simulation also allowed a number of network parameters, including the network traffic and the wireless channel models, to be modified. Finally, we present the simulation results of a convergence-speed-based comparative analysis of three uplink power control algorithms.
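The three algorithms compared in the paper are not named in the abstract; as an illustration of the class, the classic distributed SINR-balancing iteration (Foschini-Miljanic) is sketched below, in which each mobile repeatedly scales its transmit power by the ratio of its target SINR to its currently measured SINR.

```python
import numpy as np

def distributed_power_control(G, noise, gamma_target, iters=100):
    """Foschini-Miljanic distributed power control.

    G[i, j] is the link gain from transmitter j to receiver i, noise
    is the receiver noise power, and gamma_target the SINR target.
    The iteration converges whenever the targets are jointly feasible.
    """
    p = np.ones(G.shape[0])             # initial transmit powers
    for _ in range(iters):
        signal = np.diag(G) * p
        interference = G @ p - signal + noise
        sinr = signal / interference
        p = p * (gamma_target / sinr)   # scale toward the target SINR
    return p
```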
A high performance hardware implementation image encryption with AES algorithm
NASA Astrophysics Data System (ADS)
Farmani, Ali; Jafari, Mohamad; Miremadi, Seyed Sohrab
2011-06-01
This paper describes the implementation of a high-speed encryption algorithm with high throughput for encrypting images. We select the highly secure symmetric-key encryption algorithm AES (Advanced Encryption Standard) and increase its speed and throughput using a four-stage pipeline, a control unit based on logic gates, an optimized design of the multiplier blocks in the MixColumns phase, and simultaneous production of keys and rounds. This procedure makes AES suitable for fast image encryption. A 128-bit AES was implemented on an Altera FPGA, achieving a throughput of 6 Gbps at 471 MHz. Encrypting a 32×32 test image takes 1.15 ms.
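The pipelining and key-scheduling optimizations above are hardware-level; for a software illustration of the cipher itself, image bytes can be encrypted with a standard AES library such as Python's `cryptography` package (a sketch under that assumption, not the paper's FPGA design):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_image_bytes(pixels: bytes, key: bytes):
    """Encrypt raw image pixel data with AES-128 in CTR mode.

    Returns (nonce, ciphertext); the nonce is needed for decryption.
    The FPGA design in the paper pipelines the AES rounds in hardware;
    this sketch only illustrates the cipher applied to image data.
    """
    nonce = os.urandom(16)
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return nonce, encryptor.update(pixels) + encryptor.finalize()

# usage: nonce, ct = encrypt_image_bytes(open("img.raw", "rb").read(),
#                                        os.urandom(16))   # 16-byte key
```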
GOES-R Geostationary Lightning Mapper Performance Specifications and Algorithms
NASA Technical Reports Server (NTRS)
Mach, Douglas M.; Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Petersen, William A.; Boldi, Robert A.; Carey, Lawrence D.; Bateman, Monte G.; Buchler, Dennis E.; McCaul, E. William, Jr.
2008-01-01
The Geostationary Lightning Mapper (GLM) is a single-channel, near-IR imager/optical transient event detector, used to detect, locate, and measure total lightning activity over the full disk. The next-generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series will carry a GLM that will provide continuous day and night observations of lightning. The mission objectives for the GLM are to: (1) provide continuous, full-disk lightning measurements for storm warning and nowcasting, (2) provide early warning of tornadic activity, and (3) accumulate a long-term database to track decadal changes in lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13-year data record of global lightning activity. The GOES-R Risk Reduction Team and the Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms and applications. The science data will consist of lightning "events", "groups", and "flashes". The algorithm is being designed to be an efficient user of the computational resources; this may include parallelization of the code and sub-dividing the GLM field of view into regions to be processed in parallel. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama, Oklahoma, Central Florida, and the Washington, DC metropolitan area) are being used to develop the prelaunch algorithms and applications, and also to improve our knowledge of thunderstorm initiation and evolution.
Performance of a community detection algorithm based on semidefinite programming
NASA Astrophysics Data System (ADS)
Ricci-Tersenghi, Federico; Javanmard, Adel; Montanari, Andrea
2016-03-01
The problem of detecting communities in a graph is perhaps one of the most studied inference problems, given its simplicity and widespread diffusion among several disciplines. A very common benchmark for this problem is the stochastic block model, or planted partition problem, where a phase transition takes place in the detection of the planted partition as the signal-to-noise ratio changes. Optimal algorithms for the detection exist which are based on spectral methods, but we show these are extremely sensitive to slight modifications of the generative model. Recently, Javanmard, Montanari, and Ricci-Tersenghi [1] used statistical physics arguments and numerical simulations to show that finding communities in the stochastic block model via semidefinite programming is quasi-optimal. Further, the resulting semidefinite relaxation can be solved efficiently and is very robust with respect to changes in the generative model. In this paper we study in detail several practical aspects of this new algorithm based on semidefinite programming for the detection of the planted partition. The algorithm turns out to be very fast, allowing the solution of problems with O(10^5) variables in a few seconds on a laptop computer.
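A compact way to experiment with a semidefinite relaxation on small graphs is a generic SDP modeling tool such as cvxpy (assumed available); the specialized solver studied in the paper is what makes O(10^5) variables practical, whereas the sketch below only scales to a few hundred nodes. The density-recentred objective is one common formulation and may differ in detail from the paper's.

```python
import cvxpy as cp
import numpy as np

def sdp_two_communities(A):
    """Semidefinite relaxation for the planted partition: maximize
    <B, X> over PSD matrices X with unit diagonal, where B is the
    adjacency matrix recentred by the average edge density; the two
    groups are read off the sign of the leading eigenvector of X.
    """
    n = A.shape[0]
    B = A - A.mean()                       # remove the average density
    X = cp.Variable((n, n), PSD=True)
    cp.Problem(cp.Maximize(cp.trace(B @ X)), [cp.diag(X) == 1]).solve()
    eigvals, eigvecs = np.linalg.eigh(X.value)
    return np.sign(eigvecs[:, -1])         # community labels in {-1, +1}
```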
Copps, Kevin D.; Carnes, Brian R.
2008-04-01
We examine algorithms for the finite element approximation of thermal contact models. We focus on the implementation of thermal contact algorithms in SIERRA Mechanics. Following the mathematical formulation of models for tied contact and resistance contact, we present three numerical algorithms: (1) the multi-point constraint (MPC) algorithm, (2) a resistance algorithm, and (3) a new generalized algorithm. We compare and contrast both the correctness and performance of the algorithms in three test problems. We tabulate the convergence rates of global norms of the temperature solution on sequentially refined meshes. We present the results of a parameter study of the effect of contact search tolerances. We outline best practices in using the software for predictive simulations, and suggest future improvements to the implementation.
Distributed concurrency control performance: A study of algorithms, distribution, and replication
Carey, M.J.; Livny, M.
1988-01-01
Many concurrency control algorithms have been proposed for use in distributed database systems. Despite the large number of available algorithms, and the fact that distributed database systems are becoming a commercial reality, distributed concurrency control performance tradeoffs are still not well understood. In this paper the authors attempt to shed light on some of the important issues by studying the performance of four representative algorithms - distributed 2PL, wound-wait, basic timestamp ordering, and a distributed optimistic algorithm - using a detailed simulation model of a distributed DBMS. The authors examine the performance of these algorithms for various levels of contention, "distributedness" of the workload, and data replication. The results should prove useful to designers of future distributed database systems.
2014-01-01
Background: Eukaryotic transcriptional regulation is known to be highly connected through networks of cooperative transcription factors (TFs). Measuring the cooperativity of TFs is helpful for understanding the biological relevance of these TFs in regulating genes. Recent advances in computational techniques have led to various predictions of cooperative TF pairs in yeast. As each algorithm integrated different data resources and was developed based on different rationales, each possessed its own merit and claimed to outperform the others. However, the claims were prone to subjectivity because each algorithm was compared with only a few other algorithms, using only a small set of performance indices. This motivated us to propose a series of indices to objectively evaluate the prediction performance of existing algorithms and, based on the proposed performance indices, to conduct a comprehensive performance evaluation.

Results: We collected 14 sets of predicted cooperative TF pairs (PCTFPs) in yeast from 14 existing algorithms in the literature. Using the eight performance indices we adopted/proposed, the cooperativity of each PCTFP was measured, and for each performance index a ranking score according to the mean cooperativity of the set was given to each set of PCTFPs under evaluation. The ranking scores of a set of PCTFPs vary with different performance indices, implying that an algorithm used in predicting cooperative TF pairs has strengths in some respects but may have weaknesses in others. We finally made a comprehensive ranking for these 14 sets. The results showed that Wang J's study obtained the best performance evaluation on the prediction of cooperative TF pairs in yeast.

Conclusions: In this study, we adopted/proposed eight performance indices to make a comprehensive performance evaluation of the prediction results of 14 existing cooperative TF identification algorithms. Most importantly, these proposed indices can be easily applied to
NASA Astrophysics Data System (ADS)
Wang, H.; Chang, W.; Cruz, J. R.
Algebraic soft-decision Reed-Solomon (RS) decoding algorithms with improved error-correcting capability and comparable complexity to standard algebraic hard-decision algorithms could be very attractive for possible implementation in the next generation of read channels. In this work, we investigate the performance of a low-complexity Chase (LCC)-type soft-decision RS decoding algorithm, recently proposed by Bellorado and Kavčić, on perpendicular magnetic recording channels for sector-long RS codes of practical interest. Previous results for additive white Gaussian noise channels have shown that for a moderately long high-rate code, the LCC algorithm can achieve a coding gain comparable to the Koetter-Vardy algorithm with much lower complexity. We present a set of numerical results that show that this algorithm provides small coding gains, on the order of a fraction of a dB, with similar complexity to the hard-decision algorithms currently used, and that larger coding gains can be obtained if we use more test patterns, which significantly increases its computational complexity.
Schoenberg, Mike R; Duff, Kevin; Dorfman, Karen; Adams, Russell L
2004-05-01
Data from the WAIS-III standardization sample (The Psychological Corporation, 1997) were used to generate VIQ and PIQ estimation formulae using demographic variables and current WAIS-III subtest performances. The sample (n = 2450) was randomly divided into two groups; the first was used to develop the formulas and the second to validate the regression equations. Age, education, ethnicity, gender, and region of the country, as well as Vocabulary, Matrix Reasoning, and Picture Completion subtest raw scores, were used as predictor variables. Prediction formulas were generated using a single verbal and two performance subtest algorithms. The VIQ OPIE-3 model combined Vocabulary raw scores with demographic variables. The PIQ estimation algorithm used Matrix Reasoning and Picture Completion raw scores with demographic variables. The formulas for estimating premorbid VIQ and PIQ were highly significant and accurate. Differences in estimated VIQ and PIQ scores were evaluated, and the OPIE-3 algorithms were found to accurately predict VIQ and PIQ differences within the WAIS-III standardization sample. PMID:15587673
Binocular self-calibration performed via adaptive genetic algorithm based on laser line imaging
NASA Astrophysics Data System (ADS)
Apolinar Muñoz Rodríguez, J.; Mejía Alanís, Francisco Carlos
2016-07-01
An accurate technique to perform binocular self-calibration by means of an adaptive genetic algorithm based on a laser line is presented. In this calibration, the genetic algorithm computes the vision parameters through simulated binary crossover (SBX). To carry this out, the genetic algorithm constructs an objective function from the binocular geometry of the laser line projection. Then, the SBX minimizes the objective function via chromosome recombination. In this algorithm, the adaptive procedure determines the search space via the line position so as to attain the minimum; thus, the chromosomes of vision parameters provide the minimization. The approach of the proposed adaptive genetic algorithm is to calibrate and recalibrate the binocular setup without references or physical measurements. This procedure improves on traditional genetic algorithms, which calibrate the vision parameters by means of references and an unknown search space, because the proposed adaptive algorithm avoids the errors produced by missing references. Additionally, three-dimensional vision is carried out based on the laser line position and the vision parameters. The contribution of the proposed algorithm is corroborated by an evaluation of the accuracy of the binocular calibration, performed in comparison with traditional genetic algorithms.
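SBX itself is a standard operator (Deb and Agrawal's simulated binary crossover) and can be sketched independently of the calibration application: each pair of parent genes is recombined through a random spread factor whose distribution is controlled by the index eta. The function name is illustrative.

```python
import numpy as np

def sbx_crossover(p1, p2, eta=2.0, seed=None):
    """Simulated binary crossover (SBX) for real-coded genetic
    algorithms.  The spread factor beta is drawn so that offspring
    mimic the spread of one-point crossover on binary strings; larger
    eta keeps children closer to their parents.
    """
    rng = np.random.default_rng(seed)
    u = rng.random(p1.shape)
    beta = np.where(u <= 0.5,
                    (2 * u) ** (1 / (eta + 1)),
                    (1 / (2 * (1 - u))) ** (1 / (eta + 1)))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return c1, c2
```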
On the estimation algorithm used in adaptive performance optimization of turbofan engines
NASA Technical Reports Server (NTRS)
Espana, Martin D.; Gilyard, Glenn B.
1993-01-01
The performance seeking control algorithm is designed to continuously optimize the performance of propulsion systems. The performance seeking control algorithm uses a nominal model of the propulsion system and estimates, in flight, the engine deviation parameters characterizing the engine's deviations with respect to nominal conditions. In practice, because of measurement biases and/or model uncertainties, the estimated engine deviation parameters may not reflect the engine's actual off-nominal condition. This factor has a direct impact on the overall performance seeking control scheme, exacerbated by the open-loop character of the algorithm. The effects produced by unknown measurement biases on the estimation algorithm are evaluated. This evaluation allows for identification of the most critical measurements for application of the performance seeking control algorithm to an F100 engine. An equivalence relation between the biases and engine deviation parameters stems from an observability study; therefore, it is undecided whether the estimated engine deviation parameters represent the actual engine deviation or whether they simply reflect the measurement biases. A new algorithm, based on the engine's (steady-state) optimization model, is proposed and tested with flight data. When compared with previous Kalman filter schemes, based on local engine dynamic models, the new algorithm is easier to design and tune and it reduces the computational burden of the onboard computer.
Performance of Thorup's Shortest Path Algorithm for Large-Scale Network Simulation
NASA Astrophysics Data System (ADS)
Sakumoto, Yusuke; Ohsaki, Hiroyuki; Imase, Makoto
In this paper, we investigate the performance of Thorup's algorithm by comparing it to Dijkstra's algorithm for large-scale network simulations. One of the challenges toward the realization of large-scale network simulations is the efficient computation of shortest paths in a graph with N vertices and M edges. The time complexity for solving a single-source shortest path (SSSP) problem with Dijkstra's algorithm with a binary heap (DIJKSTRA-BH) is O((M+N)log N). A sophisticated alternative, Thorup's algorithm, has been proposed. The original version of Thorup's algorithm (THORUP-FR) has a time complexity of O(M+N). A simplified version of Thorup's algorithm (THORUP-KL) has a time complexity of O(Mα(N)+N), where α(N) is the functional inverse of the Ackermann function. In this paper, we compare the performance (i.e., execution time and memory consumption) of THORUP-KL and DIJKSTRA-BH, since it is known that THORUP-FR is at least ten times slower than Dijkstra's algorithm with a Fibonacci heap. We find that (1) THORUP-KL is almost always faster than DIJKSTRA-BH for large-scale network simulations, and (2) the performances of THORUP-KL and DIJKSTRA-BH deviate from their time complexities due to the presence of the memory cache in the microprocessor.
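The DIJKSTRA-BH baseline is easy to state precisely; a minimal Python version with a binary heap (via `heapq`) follows. Skipping stale heap entries stands in for the decrease-key operation that the textbook formulation assumes.

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths with a binary heap: the
    O((M+N) log N) baseline that Thorup's algorithm is compared
    against.  adj[u] is a list of (v, weight) pairs with weight >= 0.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale entry; already settled
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# usage: dijkstra({0: [(1, 2.0)], 1: [(0, 2.0)]}, source=0)
```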
NASA Astrophysics Data System (ADS)
Goswami, D.; Chakraborty, S.
2014-11-01
Laser machining is a promising non-contact process for effective machining of difficult-to-process advanced engineering materials. Increasing interest in the use of lasers for various machining operations can be attributed to several unique advantages, such as high productivity, non-contact processing, elimination of finishing operations, adaptability to automation, reduced processing cost, improved product quality, greater material utilization, minimal heat-affected zone, and green manufacturing. To achieve the best desired machining performance and high-quality characteristics of the machined components, it is extremely important to determine the optimal values of the laser machining process parameters. In this paper, the fireworks algorithm and the cuckoo search (CS) algorithm are applied for single- as well as multi-response optimization of two laser machining processes. It is observed that although the two algorithms obtain almost identical solutions, the CS algorithm outperforms the fireworks algorithm with respect to average computation time, convergence rate, and performance consistency.
Algorithms and architectures for high performance analysis of semantic graphs.
Hendrickson, Bruce Alan
2005-09-01
analysis. Since intelligence datasets can be extremely large, the focus of this work is on the use of parallel computers. We have been working to develop scalable parallel algorithms that will be at the core of a semantic graph analysis infrastructure. Our work has involved two different thrusts, corresponding to two different computer architectures. The first architecture of interest is distributed memory, message passing computers. These machines are ubiquitous and affordable, but they are challenging targets for graph algorithms. Much of our distributed-memory work to date has been collaborative with researchers at Lawrence Livermore National Laboratory and has focused on finding short paths on distributed memory parallel machines. Our implementation on 32K processors of BlueGene/Light finds shortest paths between two specified vertices in just over a second for random graphs with 4 billion vertices.
Fong, Simon; Deb, Suash; Yang, Xin-She; Zhuang, Yan
2014-01-01
Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of the initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. These algorithms help speed up the clustering process by converging on a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms, including the Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms, mimic swarming behavior, allowing the agents to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with the K-means clustering mechanism to enhance its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to standard metrics of clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied to image segmentation as a case study application. PMID:25202730
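A minimal illustration of the hybrid idea, assuming scikit-learn: candidate centroid sets play the role of search agents and are scored by K-means inertia, with the best candidate seeding the final clustering. A real Bat/Cuckoo/Firefly hybrid would move the agents with its own swarm dynamics rather than the pure random resampling used here; the function name is hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def optimizer_seeded_kmeans(X, k, n_candidates=100, seed=None):
    """Score candidate centroid sets by (one-step) K-means inertia
    and refine the best one with a full K-means run, avoiding the
    worst random initializations that cause poor local optima.
    """
    rng = np.random.default_rng(seed)
    best_inertia, best_init = np.inf, None
    for _ in range(n_candidates):
        init = X[rng.choice(len(X), size=k, replace=False)]
        km = KMeans(n_clusters=k, init=init, n_init=1, max_iter=1).fit(X)
        if km.inertia_ < best_inertia:
            best_inertia, best_init = km.inertia_, init
    return KMeans(n_clusters=k, init=best_init, n_init=1).fit(X)
```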
NASA Technical Reports Server (NTRS)
Battiste, Vernol; Lawton, George; Lachter, Joel; Brandt, Summer; Koteskey, Robert; Dao, Arik-Quang; Kraut, Josh; Ligda, Sarah; Johnson, Walter W.
2012-01-01
Managing the interval between arrival aircraft is a major part of the en route and TRACON controller's job. In an effort to reduce controller workload and low-altitude vectoring, algorithms have been developed to allow pilots to take responsibility for achieving and maintaining proper spacing. Additionally, algorithms have been developed to create dynamic weather-free arrival routes in the presence of convective weather. In a recent study we examined an algorithm to handle dynamic re-routing in the presence of convective weather and two distinct spacing algorithms. The spacing algorithms originated from different core algorithms; both were enhanced with trajectory intent data for the study. These two algorithms were used simultaneously in a human-in-the-loop (HITL) simulation in which pilots performed weather-impacted arrival operations into Louisville International Airport while also performing interval management (IM) on some trials. The controllers retained responsibility for separation, for managing the en route airspace, and, on some trials, for managing IM. The goal was a stress test of dynamic arrival algorithms with ground and airborne spacing concepts. The flight deck spacing algorithms and controller-managed spacing not only had to be robust to the dynamic nature of aircraft re-routing around weather but also had to be compatible with two alternative algorithms for achieving the spacing goal. Flight deck interval management spacing in this simulation provided a clear reduction in controller workload relative to when controllers were responsible for spacing the aircraft. At the same time, spacing was much less variable with the flight deck automated spacing. Even though the approaches taken by the two spacing algorithms to achieve the interval management goals were slightly different, they proved compatible in achieving the interval management goal of 130 sec by the TRACON boundary.
NASA Astrophysics Data System (ADS)
Nuutinen, Mikko; Virtanen, Toni; Häkkinen, Jukka
2016-03-01
Evaluating algorithms used to assess image and video quality requires performance measures. Traditional performance measures (e.g., Pearson's linear correlation coefficient, Spearman's rank-order correlation coefficient, and root mean square error) compare quality predictions of algorithms to subjective mean opinion scores (mean opinion score/differential mean opinion score). We propose a subjective root-mean-square error (SRMSE) performance measure for evaluating the accuracy of algorithms used to assess image and video quality. The SRMSE performance measure takes into account dispersion between observers. The other important property of the SRMSE performance measure is its measurement scale, which is calibrated to units of the number of average observers. The results of the SRMSE performance measure indicate the extent to which the algorithm can replace the subjective experiment (as the number of observers). Furthermore, we have presented the concept of target values, which define the performance level of the ideal algorithm. We have calculated the target values for all sample sets of the CID2013, CVD2014, and LIVE multiply distorted image quality databases. The target values and MATLAB implementation of the SRMSE performance measure are available on the project page of this study.
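The abstract does not spell out the SRMSE formula; one plausible reading, sketched below, normalizes each prediction error by the per-image dispersion of the observers' scores before taking the root mean square, so the result is expressed relative to observer variability rather than on the raw rating scale. The published definition should be consulted for the exact normalization; the function name is hypothetical.

```python
import numpy as np

def srmse(pred, mos, obs_std):
    """Assumed reading of a subjective RMSE: squared prediction errors
    are scaled by the per-image standard deviation of observer opinion
    scores (obs_std), so srmse == 1 means the algorithm deviates from
    the mean opinion score about as much as a typical observer does.
    """
    z = (np.asarray(pred) - np.asarray(mos)) / np.asarray(obs_std)
    return float(np.sqrt(np.mean(z ** 2)))
```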
St. Hilaire, Melissa A.; Sullivan, Jason P.; Anderson, Clare; Cohen, Daniel A.; Barger, Laura K.; Lockley, Steven W.; Klerman, Elizabeth B.
2012-01-01
There is currently no “gold standard” marker of cognitive performance impairment resulting from sleep loss. We utilized pattern recognition algorithms to determine which features of data collected under controlled laboratory conditions could most reliably identify cognitive performance impairment in response to sleep loss using data from only one testing session, such as would occur in the “real world” or field conditions. A training set for testing the pattern recognition algorithms was developed using objective Psychomotor Vigilance Task (PVT) and subjective Karolinska Sleepiness Scale (KSS) data collected from laboratory studies during which subjects were sleep deprived for 26 – 52 hours. The algorithm was then tested in data from both laboratory and field experiments. The pattern recognition algorithm was able to identify performance impairment with a single testing session in individuals studied under laboratory conditions using PVT, KSS, length of time awake and time of day information with sensitivity and specificity as high as 82%. When this algorithm was tested on data collected under real-world conditions from individuals whose data were not in the training set, accuracy of predictions for individuals categorized with low performance impairment were as high as 98%. Predictions for medium and severe performance impairment were less accurate. We conclude that pattern recognition algorithms may be a promising method for identifying performance impairment in individuals using only current information about the individual’s behavior. Single testing features (e.g., number of PVT lapses) with high correlation with performance impairment in the laboratory setting may not be the best indicators of performance impairment under real-world conditions. Pattern recognition algorithms should be further tested for their ability to be used in conjunction with other assessments of sleepiness in real-world conditions to quantify performance impairment in
The performance and development for the Inner Detector Trigger algorithms at ATLAS
NASA Astrophysics Data System (ADS)
Penc, Ondrej
2015-05-01
A redesign of the tracking algorithms for the ATLAS trigger for LHC's Run 2 starting in 2015 is in progress. The ATLAS HLT software has been restructured to run as a more flexible single stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. The new tracking strategy employed for Run 2 will use a Fast Track Finder (FTF) algorithm to seed subsequent Precision Tracking, and will result in improved track parameter resolution and faster execution times than achieved during Run 1. The performance of the new algorithms has been evaluated to identify those aspects where code optimisation would be most beneficial. The performance and timing of the algorithms for electron and muon reconstruction in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves.
Performance Assessment Method for a Forged Fingerprint Detection Algorithm
NASA Astrophysics Data System (ADS)
Shin, Yong Nyuo; Jun, In-Kyung; Kim, Hyun; Shin, Woochang
The threat of invasion of privacy and of the illegal appropriation of information both increase with the expansion of the biometrics service environment to open systems. However, while certificates or smart cards can easily be cancelled and reissued if found to be missing, there is no way to recover the unique biometric information of an individual following a security breach. With the recognition that this threat factor may disrupt the large-scale civil service operations approaching implementation, such as electronic ID cards and e-Government systems, many agencies and vendors around the world continue to develop forged fingerprint detection technology, but no objective performance assessment method has, to date, been reported. Therefore, in this paper, we propose a methodology designed to evaluate the objective performance of the forged fingerprint detection technology that is currently attracting a great deal of attention.
Fateen, Seif-Eddeen K.; Bonilla-Petriciolet, Adrian
2014-01-01
The search for reliable and efficient global optimization algorithms for solving phase stability and phase equilibrium problems in applied thermodynamics is an ongoing area of research. In this study, we evaluated and compared the reliability and efficiency of eight selected nature-inspired metaheuristic algorithms for solving difficult phase stability and phase equilibrium problems. These algorithms are the cuckoo search (CS), intelligent firefly (IFA), bat (BA), artificial bee colony (ABC), MAKHA, a hybrid between monkey algorithm and krill herd algorithm, covariance matrix adaptation evolution strategy (CMAES), magnetic charged system search (MCSS), and bare bones particle swarm optimization (BBPSO). The results clearly showed that CS is the most reliable of all methods as it successfully solved all thermodynamic problems tested in this study. CS proved to be a promising nature-inspired optimization method to perform applied thermodynamic calculations for process design. PMID:24967430
NASA Astrophysics Data System (ADS)
Tang, Jie; Nett, Brian E.; Chen, Guang-Hong
2009-10-01
Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor comparing different algorithms at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
Dependence of Adaptive Cross-correlation Algorithm Performance on the Extended Scene Image Quality
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2008-01-01
Recently, we reported an adaptive cross-correlation (ACC) algorithm to estimate with high accuracy the shift, as large as several pixels, between two extended-scene sub-images captured by a Shack-Hartmann wavefront sensor. It determines the positions of all extended-scene image cells relative to a reference cell in the same frame using an FFT-based iterative image-shifting algorithm, and it works with both point-source spot images and extended-scene images. We have demonstrated previously, based on measured images, that the ACC algorithm can determine image shifts with an accuracy as high as 0.01 pixel for shifts as large as 3 pixels, and yields similar results for both point-source spot images and extended-scene images. The shift estimate accuracy of the ACC algorithm depends on illumination level, background, and scene content, in addition to the amount of the shift between two image cells. In this paper we investigate how the performance of the ACC algorithm depends on the quality and the frequency content of extended-scene images captured by a Shack-Hartmann camera. We also compare the performance of the ACC algorithm with those of several other approaches, and introduce a failsafe criterion for ACC-algorithm-based extended-scene Shack-Hartmann sensors.
On the estimation algorithm for adaptive performance optimization of turbofan engines
NASA Technical Reports Server (NTRS)
Espana, Martin D.
1993-01-01
The performance seeking control (PSC) algorithm is designed to continuously optimize the performance of propulsion systems. The PSC algorithm uses a nominal propulsion system model and estimates, in flight, the engine deviation parameters (EDPs) characterizing the engine deviations with respect to nominal conditions. In practice, because of measurement biases and/or model uncertainties, the estimated EDPs may not reflect the engine's actual off-nominal condition. This factor has a direct impact on the PSC scheme and is exacerbated by the open-loop character of the algorithm. In this paper, the effects produced by unknown measurement biases on the estimation algorithm are evaluated. This evaluation allows identification of the most critical measurements for application of the PSC algorithm to an F100 engine. An equivalence relation between the biases and the EDPs stems from the analysis; therefore, it cannot be decided whether the estimated EDPs represent the actual engine deviation or simply reflect the measurement biases. A new algorithm, based on the engine's (steady-state) optimization model, is proposed and tested with flight data. When compared with previous Kalman filter schemes based on local engine dynamic models, the new algorithm is easier to design and tune and it reduces the computational burden of the onboard computer.
Performance study of LMS based adaptive algorithms for unknown system identification
Javed, Shazia; Ahmad, Noor Atinah
2014-07-10
Adaptive filtering techniques have gained much popularity in modeling the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, namely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS), and the affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of the input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of the improved versions of LMS on their robustness and misalignment.
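The LMS and NLMS updates at the heart of this comparison can be sketched in a few lines for the same system identification setup; the plant coefficients, filter length, and step sizes below are illustrative choices, not the paper's.

```python
import numpy as np

def identify(x, d, taps, mu, normalized=False, eps=1e-6):
    # Adapt an FIR filter w so that w applied to x tracks the desired signal d.
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # current and past inputs, newest first
        e = d[n] - w @ u                  # a-priori estimation error
        step = mu / (eps + u @ u) if normalized else mu
        w += step * e * u                 # LMS / NLMS coefficient update
    return w

rng = np.random.default_rng(2)
plant = np.array([0.8, -0.4, 0.2, 0.1])            # unknown system to identify
x = rng.standard_normal(5000)
d = np.convolve(x, plant)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(np.round(identify(x, d, taps=4, mu=0.01), 2))                  # LMS
print(np.round(identify(x, d, taps=4, mu=0.5, normalized=True), 2))  # NLMS
```

The per-sample input-power normalization is what gives NLMS its reduced sensitivity to the input's spectral properties, one of the comparison axes in the study.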
Gropp, William D.
2014-06-23
With the coming end of Moore's law, it has become essential to develop new algorithms and techniques that can provide the performance needed by demanding computational science applications, especially those that are part of the DOE science mission. This work was part of a multi-institution, multi-investigator project that explored several approaches to develop algorithms that would be effective at the extreme scales and with the complex processor architectures that are expected at the end of this decade. The work by this group developed new performance models that have already helped guide the development of highly scalable versions of an algebraic multigrid solver, new programming approaches designed to support numerical algorithms on heterogeneous architectures, and a new, more scalable version of conjugate gradient, an important algorithm in the solution of very large linear systems of equations.
Performance study of LMS based adaptive algorithms for unknown system identification
NASA Astrophysics Data System (ADS)
Javed, Shazia; Ahmad, Noor Atinah
2014-07-01
Adaptive filtering techniques have gained much popularity in modeling the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, namely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS), and the affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of the input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of the improved versions of LMS on their robustness and misalignment.
Performance evaluation of recommendation algorithms on Internet of Things services
NASA Astrophysics Data System (ADS)
Mashal, Ibrahim; Alsaryrah, Osama; Chung, Tein-Yaw
2016-06-01
The Internet of Things (IoT) is the next wave of the industry revolution that will initiate many services, such as personal health care and green energy monitoring, to which people may subscribe for their convenience. Recommending IoT services to users based on the objects they own will become crucial for the success of IoT. In this work, we introduce the concept of service recommender systems in IoT through a formal model. As a first attempt in this direction, we propose a hyper-graph model for the IoT recommender system in which each hyper-edge connects users, objects, and services. Next, we study the usefulness of traditional recommendation schemes and their hybrid approaches for IoT service recommendation (IoTSRS) based on existing well-known metrics. The preliminary results show that existing approaches perform reasonably well but require further extension for the IoTSRS. Several challenges are discussed to point out directions for future development of the IoTSRS.
Performance evaluation of trigger algorithm for the MACE telescope
NASA Astrophysics Data System (ADS)
Yadav, Kuldeep; Yadav, K. K.; Bhatt, N.; Chouhan, N.; Sikder, S. S.; Behere, A.; Pithawa, C. K.; Tickoo, A. K.; Rannot, R. C.; Bhattacharyya, S.; Mitra, A. K.; Koul, R.
The MACE (Major Atmospheric Cherenkov Experiment) telescope, with a light collector diameter of 21 m, is being set up at Hanle (32.8° N, 78.9° E, 4200 m asl), India, to explore the gamma-ray sky in the tens of GeV energy range. The imaging camera of the telescope comprises 1088 pixels covering a total field-of-view of 4.3° × 4.0°, with a trigger field-of-view of 2.6° × 3.0° and a uniform pixel resolution of 0.12°. In order to achieve a low energy trigger threshold of less than 30 GeV, a two-level trigger scheme is being designed for the telescope. The first-level trigger is generated within the 16 pixels of a Camera Integrated Module (CIM) based on a 4 nearest-neighbour (4NN) close-cluster configuration within a coincidence gate window of 5 ns, while the second-level trigger is generated by combining the first-level triggers from neighbouring CIMs. Each pixel of the telescope is expected to operate at a single-pixel threshold between 8 and 10 photo-electrons, where the single channel rate, dominated by after-pulsing, is expected to be ~500 kHz. The hardware implementation of the trigger logic is based on complex programmable logic devices (CPLDs). The basic design concept, hardware implementation, and performance evaluation of the trigger system in terms of threshold energy and trigger rate estimates based on Monte Carlo data for the MACE telescope will be presented at this meeting.
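As a rough illustration of what a close-cluster condition tests, here is a sketch that reads "4NN close cluster" as an edge-connected group of at least four fired pixels within a module's pixel map; the 4 × 4 layout and this reading are assumptions made for illustration only, since the real first-level trigger is implemented in CPLD hardware within a 5 ns coincidence window.

```python
import numpy as np

def has_close_cluster(fired, size=4):
    # fired: 2D boolean map of pixels above threshold (in coincidence).
    fired = np.asarray(fired, dtype=bool)
    seen = np.zeros_like(fired)
    rows, cols = fired.shape
    for r0 in range(rows):
        for c0 in range(cols):
            if not fired[r0, c0] or seen[r0, c0]:
                continue
            stack, count = [(r0, c0)], 0          # flood-fill one component
            seen[r0, c0] = True
            while stack:
                r, c = stack.pop()
                count += 1
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols \
                            and fired[rr, cc] and not seen[rr, cc]:
                        seen[rr, cc] = True
                        stack.append((rr, cc))
            if count >= size:
                return True
    return False

module = np.zeros((4, 4), dtype=bool)
module[1, 1] = module[1, 2] = module[2, 1] = module[2, 2] = True  # 2x2 cluster
print(has_close_cluster(module))   # True
```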
Performance of an advanced lump correction algorithm for gamma-ray assays of plutonium
Prettyman, T.H.; Sprinkle, J.K. Jr.; Sheppard, G.A.
1994-08-01
The results of an experimental study to evaluate the performance of an advanced lump correction algorithm for gamma-ray assays of plutonium are presented. The algorithm is applied to correct segmented gamma scanner (SGS) and tomographic gamma scanner (TGS) assays of plutonium samples in 55-gal. drums containing heterogeneous matrices. The relative ability of the SGS and TGS to separate matrix and lump effects is examined, and a technique to detect gross heterogeneity in SGS assays is presented.
Schold, Jesse D; Arrington, Charlotte J; Levine, Greg
2010-09-01
In the past several years, emphasis on quality metrics in the field of organ transplantation has increased significantly, largely because of the new conditions of participation issued by the Centers for Medicare and Medicaid Services. These regulations directly associate patients' outcomes and the measured performance of centers with the distribution of public funding to institutions. Moreover, insurers and marketing ventures have used publicly available outcomes data from transplant centers for business decision making and advertisement purposes. We gave a 10-question survey to attendees of the Transplant Management Forum at the 2009 meeting of the United Network for Organ Sharing to ascertain how centers have responded to the increased oversight of performance. Of 63 responses, 55% indicated a low or near-low performance rating at their center in the past 3 years. Respondents from low-performing centers were significantly more likely to indicate increased selection criteria for candidates (81% vs 38%, P = .001) and donors (77% vs 31%, P < .001) as well as alterations in clinical protocols (84% vs 52%, P = .007). Among respondents indicating lost insurance contracts (31%), these differences were also highly significant. Based on respondents' perceptions, outcomes of performance evaluations are associated with significant changes in clinical practice at transplant centers. The transplant community and policy makers should remain vigilant so that performance evaluations and regulatory oversight do not inadvertently lead to diminished access to care among viable candidates or decreased transplant volume. PMID:20929114
NASA Astrophysics Data System (ADS)
Kreuz, Thomas; Andrzejak, Ralph G.; Mormann, Florian; Kraskov, Alexander; Stögbauer, Harald; Elger, Christian E.; Lehnertz, Klaus; Grassberger, Peter
2004-06-01
In a growing number of publications it is claimed that epileptic seizures can be predicted by analyzing the electroencephalogram (EEG) with different characterizing measures. However, many of these studies suffer from a severe lack of statistical validation. Only rarely are results passed to a statistical test and verified against some null hypothesis H0 in order to quantify their significance. In this paper we propose a method to statistically validate the performance of measures used to predict epileptic seizures. From measure profiles rendered by applying a moving-window technique to the electroencephalogram, we first generate an ensemble of surrogates by a constrained randomization using simulated annealing. Subsequently the seizure prediction algorithm is applied to the original measure profile and to the surrogates. If detectable changes before seizure onset exist, the highest performance values should be obtained for the original measure profiles, and the null hypothesis “the measure is not suited for seizure prediction” can be rejected. We demonstrate our method by applying two measures of synchronization to a quasicontinuous EEG recording and by evaluating their predictive performance using a straightforward seizure prediction statistic. We would like to stress that the proposed method is rather universal and can be applied to many other prediction and detection problems.
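The logic of the test can be sketched as follows. Note that the paper generates surrogates by constrained randomization with simulated annealing; the plain random permutation below merely stands in for that step, and the performance statistic is a toy one, so this illustrates only the structure of the validation, not the constrained surrogate generation.

```python
import numpy as np

def prediction_performance(profile, onsets, horizon, threshold):
    # Toy statistic: fraction of seizures preceded by a threshold crossing
    # within the prediction horizon.
    hits = sum(np.any(profile[max(0, t - horizon):t] > threshold) for t in onsets)
    return hits / len(onsets)

rng = np.random.default_rng(3)
profile = rng.standard_normal(5000)      # measure profile from the moving window
onsets = [900, 2300, 4100]               # seizure onset indices (made up)
orig = prediction_performance(profile, onsets, horizon=120, threshold=2.0)

n_surr = 99
surr = [prediction_performance(rng.permutation(profile), onsets, 120, 2.0)
        for _ in range(n_surr)]
# One-sided rank test: H0 is rejected if the original clearly beats the surrogates.
p_value = (1 + sum(s >= orig for s in surr)) / (n_surr + 1)
print(orig, p_value)
```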
Independent component analysis algorithm FPGA design to perform real-time blind source separation
NASA Astrophysics Data System (ADS)
Meyer-Baese, Uwe; Odom, Crispin; Botella, Guillermo; Meyer-Baese, Anke
2015-05-01
The conditions that arise in the cocktail party problem prevail across many fields, creating a need for blind source separation (BSS). BSS has become prevalent in several fields of work, including array processing, communications, medical and speech signal processing, wireless communication, audio and acoustics, and biomedical engineering. The concept of the cocktail party problem and BSS led to the development of independent component analysis (ICA) algorithms. ICA proves useful for applications needing real-time signal processing. The goal of this research was to perform an extensive study of the ability and efficiency of ICA algorithms to perform blind source separation of mixed signals in software, and of their implementation in hardware with a field programmable gate array (FPGA). The algebraic ICA (A-ICA), FastICA, and equivariant adaptive separation via independence (EASI) ICA algorithms were examined and compared. The best algorithm was taken to be the one requiring the least complexity and the fewest resources while effectively separating the mixed sources; by this standard, the EASI algorithm was the best. The EASI ICA was implemented in hardware with an FPGA in order to analyze its performance in real time.
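For reference, one common form of the serial EASI update rule (after Cardoso and Laheld's equivariant adaptive algorithm) is sketched below; the cubic nonlinearity, step size, and two-source mixing matrix are illustrative assumptions, and the appropriate nonlinearity in practice depends on the source statistics.

```python
import numpy as np

def easi_separate(x, mu=0.002, n_pass=5):
    # x: (n_sources, n_samples) mixtures; returns an unmixing matrix W.
    n = x.shape[0]
    w = np.eye(n)
    for _ in range(n_pass):
        for t in range(x.shape[1]):
            y = w @ x[:, t]
            g = y ** 3                      # nonlinearity (illustrative choice)
            w += mu * (np.eye(n) - np.outer(y, y)
                       - np.outer(g, y) + np.outer(y, g)) @ w
    return w

rng = np.random.default_rng(4)
s = np.vstack([np.sign(np.sin(0.05 * np.arange(8000))),  # two independent sources
               rng.uniform(-1, 1, 8000)])
a = np.array([[1.0, 0.6], [0.4, 1.0]])                   # mixing matrix
w = easi_separate(a @ s)
print(np.round(w @ a, 2))  # near a scaled permutation of I if separation succeeded
```

The update involves only outer products and a matrix multiply per sample, which is part of what makes EASI attractive for a streaming FPGA datapath.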
Thrust stand evaluation of engine performance improvement algorithms in an F-15 airplane
NASA Technical Reports Server (NTRS)
Conners, Timothy R.
1992-01-01
An investigation is underway to determine the benefits of a new propulsion system optimization algorithm in an F-15 airplane. The performance seeking control (PSC) algorithm optimizes the quasi-steady-state performance of an F100 derivative turbofan engine for several modes of operation. The PSC algorithm uses an onboard software engine model that calculates thrust, stall margin, and other unmeasured variables for use in the optimization. As part of the PSC test program, the F-15 aircraft was operated on a horizontal thrust stand. Thrust was measured with highly accurate load cells. The measured thrust was compared to onboard model estimates and to results from posttest performance programs. Thrust changes using the various PSC modes were recorded. Those results were compared to benefits using the less complex highly integrated digital electronic control (HIDEC) algorithm. The PSC maximum thrust mode increased intermediate power thrust by 10 percent. The PSC engine model did very well at estimating measured thrust and closely followed the transients during optimization. Quantitative results from the evaluation of the algorithms and performance calculation models are included with emphasis on measured thrust results. The report presents a description of the PSC system and a discussion of factors affecting the accuracy of the thrust stand load measurements.
A new multiobjective performance criterion used in PID tuning optimization algorithms
Sahib, Mouayad A.; Ahmed, Bestoun S.
2015-01-01
In PID controller design, an optimization algorithm is commonly employed to search for the optimal controller parameters. The optimization algorithm is based on a specific performance criterion defined by an objective or cost function. To this end, different objective functions have been proposed in the literature to optimize the response of the controlled system. These functions include numerous weighted time- and frequency-domain variables. However, for an optimum desired response it is difficult to select the appropriate objective function or identify the best weight values required to optimize the PID controller design. This paper presents a new time-domain performance criterion based on the multiobjective Pareto front solutions. The proposed objective function is tested in the PID controller design for an automatic voltage regulator (AVR) application using the particle swarm optimization algorithm. Simulation results show that the proposed performance criterion can substantially improve PID tuning optimization in comparison with traditional objective functions. PMID:26843978
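To illustrate the general setup (though not the paper's Pareto-front criterion or its AVR model), here is a minimal sketch of PSO-based PID tuning against a scalar ITAE cost on a toy plant G(s) = 1/(s(s+2)); the plant, bounds, and all tuning constants are assumptions for illustration.

```python
import numpy as np

def step_cost(gains, dt=0.01, t_end=5.0):
    # ITAE of unity-feedback PID control of the toy plant x1'' + 2 x1' = u.
    kp, ki, kd = gains
    x1 = x2 = integ = prev_e = 0.0
    cost = 0.0
    for k in range(int(t_end / dt)):
        e = 1.0 - x1                      # unit-step reference
        if abs(e) > 1e6:                  # bail out on unstable gain sets
            return 1e9
        integ += e * dt
        deriv = (e - prev_e) / dt
        prev_e = e
        u = kp * e + ki * integ + kd * deriv
        x2 += (u - 2.0 * x2) * dt
        x1 += x2 * dt
        cost += (k * dt) * abs(e) * dt    # integral of t * |e|
    return cost

rng = np.random.default_rng(5)
n_part = 20
pos = rng.uniform(0.1, 5.0, (n_part, 3))          # particles = (kp, ki, kd)
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([step_cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)]
for _ in range(40):
    r1, r2 = rng.random((2, n_part, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.01, 10.0)
    cost = np.array([step_cost(p) for p in pos])
    better = cost < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], cost[better]
    gbest = pbest[np.argmin(pbest_cost)]
print(np.round(gbest, 2), round(float(step_cost(gbest)), 4))
```

Swapping in a different cost function (e.g., one derived from Pareto-front solutions as the paper proposes) changes only `step_cost`; the PSO loop is unchanged, which is why the choice of criterion dominates the tuning outcome.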
Experimental Performance of a Genetic Algorithm for Airborne Strategic Conflict Resolution
NASA Technical Reports Server (NTRS)
Karr, David A.; Vivona, Robert A.; Roscoe, David A.; DePascale, Stephen M.; Consiglio, Maria
2009-01-01
The Autonomous Operations Planner, a research prototype flight-deck decision support tool to enable airborne self-separation, uses a pattern-based genetic algorithm to resolve predicted conflicts between the ownship and traffic aircraft. Conflicts are resolved by modifying the active route within the ownship's flight management system according to a predefined set of maneuver pattern templates. The performance of this pattern-based genetic algorithm was evaluated in the context of batch-mode Monte Carlo simulations running over 3600 flight hours of autonomous aircraft in en-route airspace under conditions ranging from typical current traffic densities to several times that level. Encountering over 8900 conflicts during two simulation experiments, the genetic algorithm was able to resolve all but three conflicts, while maintaining a required time of arrival constraint for most aircraft. Actual elapsed running time for the algorithm was consistent with conflict resolution in real time. The paper presents details of the genetic algorithm's design, along with mathematical models of the algorithm's performance and observations regarding the effectiveness of using complementary maneuver patterns when multiple resolutions by the same aircraft were required.
Lamberti, Alfredo; Vanlanduit, Steve; De Pauw, Ben; Berghmans, Francis
2014-01-01
The working principle of fiber Bragg grating (FBG) sensors is mostly based on the tracking of the Bragg wavelength shift. To accomplish this task, different algorithms have been proposed, from conventional maximum and centroid detection algorithms to more recently-developed correlation-based techniques. Several studies regarding the performance of these algorithms have been conducted, but they did not take into account spectral distortions, which appear in many practical applications. This paper addresses this issue and analyzes the performance of four different wavelength tracking algorithms (maximum detection, centroid detection, cross-correlation and fast phase-correlation) when applied to distorted FBG spectra used for measuring dynamic loads. Both simulations and experiments are used for the analyses. The dynamic behavior of distorted FBG spectra is simulated using the transfer-matrix approach, and the amount of distortion of the spectra is quantified using dedicated distortion indices. The algorithms are compared in terms of achievable precision and accuracy. To corroborate the simulation results, experiments were conducted using three FBG sensors glued on a steel plate and subjected to a combination of transverse force and vibration loads. The analysis of the results showed that the fast phase-correlation algorithm guarantees the best combination of versatility, precision and accuracy. PMID:25521386
Performance Analysis of Selective Breeding Algorithm on One Dimensional Bin Packing Problems
NASA Astrophysics Data System (ADS)
Sriramya, P.; Parvathavarthini, B.
2012-12-01
The bin packing optimization problem packs a set of objects into a set of bins so that the amount of wasted space is minimized. The bin packing problem has many important applications; the objective is to find a feasible assignment of all weights to bins that minimizes the total number of bins used. The problem models several practical tasks in areas as diverse as industrial control, computer systems, machine scheduling, and VLSI chip layout. The selective breeding algorithm (SBA) is an iterative procedure that borrows ideas from artificial selection and the breeding process. By simulating artificial evolution in this way, the SBA can solve complex problems. One-dimensional bin packing benchmark problems are used to evaluate the performance of the SBA. The computational results show that the SBA finds optimal solutions for the tested benchmark problems, making it a good problem-solving technique for one-dimensional bin packing problems.
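The SBA itself is not specified in detail in this abstract; as a reference point for the same objective, the classic first-fit decreasing heuristic is sketched below with made-up item weights.

```python
def first_fit_decreasing(weights, capacity):
    bins = []                                  # remaining capacity per bin
    packing = []                               # item indices per bin
    order = sorted(range(len(weights)), key=lambda i: -weights[i])
    for i in order:
        for b, free in enumerate(bins):
            if weights[i] <= free:             # first open bin that still fits
                bins[b] -= weights[i]
                packing[b].append(i)
                break
        else:                                  # no bin fits: open a new one
            bins.append(capacity - weights[i])
            packing.append([i])
    return packing

items = [4, 8, 1, 4, 2, 1, 7, 3]
print(first_fit_decreasing(items, capacity=10))   # packs into 3 bins here
```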
NASA Astrophysics Data System (ADS)
Triana-Martinez, J.; Orjuela-Vargas, S. A.; Philips, W.
2013-03-01
This paper compares the speed performance of a set of classic image algorithms for evaluating texture in images, implemented using CUDA programming, and includes a summary of the general CUDA programming model. We select a set of texture algorithms, based on statistical analysis, that allow the use of repetitive functions, such as the co-occurrence matrix, Haralick features, and local binary pattern techniques. The memory allocation time between host and device memory is not taken into account. The results compare the texture algorithms in terms of speed when executed on CPU and GPU processors, and show that the algorithms can be accelerated more than 40 times when implemented in the CUDA environment.
Angelis, G I; Reader, A J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2011-07-01
Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [(11)C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG, for noisy PET data. Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice for
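For orientation, the multiplicative MLEM update that OSEM applies subset-by-subset can be sketched as follows; the tiny random system matrix stands in for a real PET projector and is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n_pix, n_bins = 50, 120
A = rng.uniform(0.0, 1.0, (n_bins, n_pix))    # system matrix (detection probabilities)
x_true = rng.uniform(0.5, 2.0, n_pix)
y = rng.poisson(A @ x_true)                   # noisy measured projections

sens = A.T @ np.ones(n_bins)                  # sensitivity image A^T 1
x = np.ones(n_pix)
for _ in range(100):
    expected = A @ x                          # forward projection
    ratio = np.where(expected > 0, y / expected, 0.0)
    x *= (A.T @ ratio) / sens                 # multiplicative MLEM update
print(float(np.corrcoef(x, x_true)[0, 1]))    # rough agreement with the truth
```

OSEM accelerates this by applying the same update using only a subset of the rows of A per sub-iteration, which is the source of both its speed and its tendency to cycle among subset solutions noted above.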
Significant Differences in Pediatric Psychotropic Side Effects: Implications for School Performance
ERIC Educational Resources Information Center
Kubiszyn, Thomas; Mire, Sarah; Dutt, Sonia; Papathopoulos, Katina; Burridge, Andrea Backsheider
2012-01-01
Some side effects (SEs) of increasingly prescribed psychotropic medications can impact student performance in school. SE risk varies, even among drugs from the same class (e.g., antidepressants). Knowing which SEs occur significantly more often than others may enable school psychologists to enhance collaborative risk-benefit analysis, medication…
Westover, Jennifer M; Martin, Emma J
2014-12-01
Literacy skills are fundamental for all learners. For students with significant disabilities, strong literacy skills provide a gateway to generative communication, genuine friendships, improved access to academic opportunities, access to information technology, and future employment opportunities. Unfortunately, many educators lack the knowledge to design or implement appropriate evidence-based literacy instruction for students with significant disabilities. Furthermore, students with significant disabilities often receive the majority of their instruction from paraeducators. This single-subject design study examined the effects of performance feedback on the delivery skills of paraeducators during systematic and explicit literacy instruction for students with significant disabilities. The specific skills targeted for feedback were planned opportunities for student responses and correct academic responses. Findings suggested that delivery of feedback on performance resulted in increased pacing, accuracy in student responses, and subsequent attainment of literacy skills for students with significant disabilities. Implications for the use of performance feedback as an evaluation and training tool for increasing effective instructional practices are provided. PMID:25271082
A fast and high performance multiple data integration algorithm for identifying human disease genes
2015-01-01
Background: Integrating multiple data sources is indispensable for improving disease gene identification, not only because disease genes associated with similar genetic diseases tend to lie close to one another in various biological networks, but also because gene-disease associations are complex. Although various algorithms have been proposed to identify disease genes, their prediction performance and computational time can still be improved. Results: In this study, we propose a fast and high performance multiple data integration algorithm for identifying human disease genes. A posterior probability of each candidate gene associated with individual diseases is calculated by using a Bayesian analysis method and a binary logistic regression model. Two prior probability estimation strategies and two feature vector construction methods are developed to test the performance of the proposed algorithm. Conclusions: The proposed algorithm not only generates predictions with high AUC scores, but also runs very fast. When only a single PPI network is employed, the AUC score is 0.769 using F2 as feature vectors, and the average running time for each leave-one-out experiment is only around 1.5 seconds. When three biological networks are integrated, the AUC score using F3 as feature vectors increases to 0.830, and the average running time for each leave-one-out experiment is only about 12.54 seconds. This is better than many existing algorithms. PMID:26399620
Montilla, I; Béchet, C; Le Louarn, M; Reyes, M; Tallon, M
2010-11-01
Extremely Large Telescopes (ELTs) are very challenging with respect to their adaptive optics (AO) requirements. Their diameters and the specifications demanded by the astronomical science for which they are being designed imply a huge increase in the number of degrees of freedom in the deformable mirrors. Faster algorithms are needed to implement the real-time reconstruction and control in AO at the required speed. We present the results of a study of the AO correction performance of three different algorithms applied to the case of a 42-m ELT: one considered as a reference, the matrix-vector multiply (MVM) algorithm, and two considered fast, the fractal iterative method (FrIM) and the Fourier transform reconstructor (FTR). The MVM and the FrIM both provide a maximum a posteriori estimation, while the FTR provides a least-squares one. The algorithms are tested on the European Southern Observatory (ESO) end-to-end simulator, OCTOPUS. The performance is compared using a natural guide star single-conjugate adaptive optics configuration. The results demonstrate that the methods have similar performance in a large variety of simulated conditions, but with respect to system misregistrations the fast algorithms demonstrate an interesting robustness. PMID:21045895
Performance of the reconstruction algorithms of the FIRST experiment pixel sensors vertex detector
NASA Astrophysics Data System (ADS)
Rescigno, R.; Finck, Ch.; Juliani, D.; Spiriti, E.; Baudot, J.; Abou-Haidar, Z.; Agodi, C.; Alvarez, M. A. G.; Aumann, T.; Battistoni, G.; Bocci, A.; Böhlen, T. T.; Boudard, A.; Brunetti, A.; Carpinelli, M.; Cirrone, G. A. P.; Cortes-Giraldo, M. A.; Cuttone, G.; De Napoli, M.; Durante, M.; Gallardo, M. I.; Golosio, B.; Iarocci, E.; Iazzi, F.; Ickert, G.; Introzzi, R.; Krimmer, J.; Kurz, N.; Labalme, M.; Leifels, Y.; Le Fevre, A.; Leray, S.; Marchetto, F.; Monaco, V.; Morone, M. C.; Oliva, P.; Paoloni, A.; Patera, V.; Piersanti, L.; Pleskac, R.; Quesada, J. M.; Randazzo, N.; Romano, F.; Rossi, D.; Rousseau, M.; Sacchi, R.; Sala, P.; Sarti, A.; Scheidenberger, C.; Schuy, C.; Sciubba, A.; Sfienti, C.; Simon, H.; Sipala, V.; Tropea, S.; Vanstalle, M.; Younis, H.
2014-12-01
Hadrontherapy treatments use charged particles (e.g. protons and carbon ions) to treat tumors. During a therapeutic treatment with carbon ions, the beam undergoes nuclear fragmentation processes giving rise to significant yields of secondary charged particles. An accurate prediction of these production rates is necessary to estimate precisely the dose deposited in the tumours and the surrounding healthy tissues. Nowadays, only a limited set of double differential carbon fragmentation cross-sections is available. Experimental data are necessary to benchmark Monte Carlo simulations for their use in hadrontherapy. The purpose of the FIRST experiment is to study nuclear fragmentation processes of ions with kinetic energy in the range from 100 to 1000 MeV/u. Tracks are reconstructed using information from a pixel silicon detector based on CMOS technology. The performance achieved using this device for hadrontherapy purposes is discussed. For each reconstruction step (clustering, tracking and vertexing), different methods are implemented. The algorithm performance and the accuracy of the reconstructed observables are evaluated on the basis of simulated and experimental data.
NASA Astrophysics Data System (ADS)
Browning, Tyler; Jackson, Christopher; Cayci, Furkan; Carhart, Gary W.; Liu, J. J.; Kiamilev, Fouad
2015-06-01
"Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm extracts sharp regions of an image obtained from a series of short exposure frames from fast, high-resolution image sensors, and fuses the sharp regions into a final, improved image. In our previous research, the LRF algorithm had been implemented on CPU and field programmable gate array (FPGA) platforms. The CPU did not have sufficient processing power to handle real-time processing of video. Last year, we presented a real-time LRF implementation using an FPGA. However, due to the slow register-transfer level (RTL) development and simulation time, it was difficult to adjust and discover optimal LRF settings such as Gaussian kernel radius and synthetic frame buffer size. To overcome this limitation, we implemented the LRF algorithm on an off-the-shelf graphical processing unit (GPU) in order to take advantage of built-in parallelization and significantly faster development time. Our initial results show that the unoptimized GPU implementation has almost comparable turbulence mitigation to the FPGA version. In our presentation, we will explore optimization of the LRF algorithm on the GPU to achieve higher performance results, and adding new performance capabilities such as image stabilization.
Performance assessment of an algorithm for the alignment of fMRI time series.
Ciulla, Carlo; Deek, Fadi P
2002-01-01
This paper reports on the performance assessment of an algorithm developed to align functional magnetic resonance imaging (fMRI) time series. The algorithm is based on the assumption that the human brain is subject to rigid-body motion, and it was devised by pipelining fiducial-marker and tensor-based registration methodologies. Feature extraction is performed on each fMRI volume to determine the tensors of inertia and the gradient image of the brain. A head coordinate system is determined on the basis of three fiducial markers found automatically at the head boundary by means of the tensors, and it is used to compute a point-based rigid matching transformation. Intensity correction is performed with sub-voxel accuracy by trilinear interpolation. Performance of the algorithm was preliminarily assessed using fMR brain images in which controlled motion had been simulated. Further experimentation was conducted with real fMRI time series. Rigid-body transformations were retrieved automatically, and the values of the motion parameters were compared to those obtained with Statistical Parametric Mapping (SPM99) and Automated Image Registration (AIR 3.08). Results indicate that the algorithm offers sub-voxel accuracy in performing both alignment and intensity correction of fMRI time series. PMID:12137364
NASA Astrophysics Data System (ADS)
Mantini, D.; Hild, K. E., II; Alleva, G.; Comani, S.
2006-02-01
Independent component analysis (ICA) algorithms have been successfully used for signal extraction tasks in the field of biomedical signal processing. We studied the performance of six algorithms (FastICA, CubICA, JADE, Infomax, TDSEP and MRMI-SIG) for fetal magnetocardiography (fMCG). Synthetic datasets were used to check the quality of the separated components against the original traces. Real fMCG recordings were simulated with linear combinations of typical fMCG source signals: maternal and fetal cardiac activity, ambient noise, maternal respiration, sensor spikes and thermal noise. Clusters of different dimensions (19, 36 and 55 sensors) were prepared to represent different MCG systems. Two types of signal-to-interference ratio (SIR) were measured: the first involves averaging over all estimated components, and the second is based solely on the fetal trace. The computation time to reach a minimum of 20 dB SIR was measured for all six algorithms. No significant dependence on gestational age or cluster dimension was observed. Infomax performed poorly when a sub-Gaussian source was included; TDSEP and MRMI-SIG were sensitive to additive noise, whereas FastICA, CubICA and JADE showed the best performance. Of all six methods considered, FastICA had the best overall performance in terms of both separation quality and computation time.
NASA Astrophysics Data System (ADS)
Wang, Ding; Xu, Wen; Schmidt, Henrik
2002-11-01
A large part of sonar performance prediction uncertainty is associated with the uncertain ocean acoustic environment. An optimal in situ measurement strategy, i.e., adaptively capturing the most critical uncertain environmental parameters within operational constraints, can minimize the sonar performance prediction uncertainty. Understanding the relative significance of individual environmental parameters to sonar performance prediction uncertainty is fundamental to the heuristics used to determine the most critical environmental parameters. Based on this understanding, an optimal parametrization of ocean acoustic environments can be defined, which will significantly simplify the adaptive sampling pattern. As an example, matched-field processing is used to localize an unknown sound source position in a realistic ocean environment. Typical shallow water environmental models are used, with some of the properties being stochastic variables. The ratio of the main lobe peak to the maximum side lobe peak of the ambiguity function and the main lobe peak displacement due to mismatch are chosen as performance metrics in two different scenarios. The relative significance of some environmental parameters, such as sediment thickness and the weights of empirical orthogonal functions (EOFs), has been computed. Some preliminary results are discussed.
Performance evaluation of imaging seeker tracking algorithm based on multi-features
NASA Astrophysics Data System (ADS)
Li, Yujue; Yan, Jinglong
2011-08-01
The paper presents a new efficient method for performance evaluation of imaging seeker tracking algorithms. The method utilizes multiple features associated with the tracking point of each video frame, computes a local score (LS) for every feature, and derives a global score (GS) for a given tracking algorithm according to a combination strategy. The method can be divided into three steps. In the first step, evaluation features are extracted from the neighborhood of each tracking point; these features may include the tracking error, the shape and area of the target, the tracking path, and so on. Then, for each feature, a local score is computed based on the number of targets tracked successfully, where a similarity measure between the neighborhood of the tracking point and the target template, together with an empirical threshold, defines whether tracking succeeded; for single-target tracking this number is simply 0 or 1. Finally, a weight is assigned to each feature according to its validity for performance assessment; the weighted local scores are combined and normalized between 0 and 1 to give the global score of the tracking algorithm, as sketched below. By comparing the global scores of tracking algorithms on a given type of scene, tracking performance can be evaluated quantitatively. The proposed method covers nearly all tracking error factors that can be introduced during target tracking, so the evaluation results are highly reliable. Experimental results, obtained with in-flight video from an infrared imaging seeker and several target tracking algorithms, illustrate the tracking performance and demonstrate the effectiveness and robustness of the proposed method.
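The LS-to-GS combination amounts to a weighted, normalized sum; here is a minimal sketch with made-up feature names, scores, and weights.

```python
def global_score(local_scores, weights):
    # Weighted combination of per-feature local scores, normalized to [0, 1].
    total_w = sum(weights.values())
    return sum(weights[f] * local_scores[f] for f in local_scores) / total_w

ls = {"tracking_error": 0.9, "target_shape": 0.7,
      "target_area": 0.8, "tracking_path": 0.6}     # LS per feature, in [0, 1]
w = {"tracking_error": 0.4, "target_shape": 0.2,
     "target_area": 0.2, "tracking_path": 0.2}      # validity-based weights
print(global_score(ls, w))                          # GS for one tracking algorithm
```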
Dórea, Fernanda C.; McEwen, Beverly J.; McNab, W. Bruce; Revie, Crawford W.; Sanchez, Javier
2013-01-01
Diagnostic test orders to an animal laboratory were explored as a data source for monitoring trends in the incidence of clinical syndromes in cattle. Four years of real data and over 200 simulated outbreak signals were used to compare pre-processing methods that could remove temporal effects in the data, as well as temporal aberration detection algorithms that provided high sensitivity and specificity. Weekly differencing demonstrated solid performance in removing day-of-week effects, even in series with low daily counts. For aberration detection, the results indicated that no single algorithm showed performance superior to all others across the range of outbreak scenarios simulated. Exponentially weighted moving average charts and Holt–Winters exponential smoothing demonstrated complementary performance, with the latter offering an automated method to adjust to changes in the time series that will likely occur in the future. Shewhart charts provided lower sensitivity but earlier detection in some scenarios. Cumulative sum charts did not appear to add value to the system; however, the poor performance of this algorithm was attributed to characteristics of the data monitored. These findings indicate that automated monitoring aimed at early detection of temporal aberrations will likely be most effective when a range of algorithms are implemented in parallel. PMID:23576782
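Of the algorithms compared, the EWMA control chart is perhaps the simplest to sketch; the smoothing constant, 3-sigma limit, and injected outbreak below are illustrative, and a production system would estimate the baseline from historical data only rather than from the monitored series itself.

```python
import numpy as np

def ewma_alarms(counts, lam=0.3, L=3.0):
    # Indices where the EWMA statistic exceeds its upper control limit.
    mu, sigma = np.mean(counts), np.std(counts)   # baseline (from history, in practice)
    z, alarms = mu, []
    for t, y in enumerate(counts):
        z = lam * y + (1 - lam) * z
        ucl = mu + L * sigma * np.sqrt(
            lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        if z > ucl:
            alarms.append(t)
    return alarms

rng = np.random.default_rng(9)
series = rng.poisson(20, 200).astype(float)       # daily test-order counts
series[150:160] += 15                             # injected outbreak signal
print(ewma_alarms(series))                        # flags days around 150-159
```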
Signal and image processing algorithm performance in a virtual and elastic computing environment
NASA Astrophysics Data System (ADS)
Bennett, Kelly W.; Robertson, James
2013-05-01
The U.S. Army Research Laboratory (ARL) supports the development of classification, detection, tracking, and localization algorithms using multiple sensing modalities including acoustic, seismic, E-field, magnetic field, PIR, and visual and IR imaging. Multimodal sensors collect large amounts of data in support of algorithm development, and the resulting data volume, with its associated high-performance computing needs, increasingly challenges existing computing infrastructures. Purchasing computing power as a commodity from a cloud service offers low-cost, pay-as-you-go pricing models, scalability, and elasticity that may make it possible to develop and optimize algorithms without having to procure additional hardware and resources. This paper provides a detailed look at using a commercial cloud service provider, such as Amazon Web Services (AWS), to develop and deploy simple signal and image processing algorithms in a cloud and to run the algorithms on a large set of data archived in the ARL Multimodal Signatures Database (MMSDB). Analytical results provide performance comparisons with existing infrastructure. A discussion of using cloud computing with government data covers the best security practices that exist within cloud services, such as AWS.
Kim, Byung S; Yoo, Sun K
2007-09-01
The use of wireless networks bears great practical importance for the instantaneous transmission of ECG signals during movement. In this paper, three typical wavelet-based ECG compression algorithms, Rajoub (RA), Embedded Zerotree Wavelet (EZ), and Wavelet Transform Higher-Order Statistics Coding (WH), were evaluated to find an appropriate ECG compression algorithm for scalable and reliable wireless tele-cardiology applications, particularly over a CDMA network. The short-term and long-term performance characteristics of the three algorithms were analyzed using normal, abnormal, and measurement noise-contaminated ECG signals from the MIT-BIH database. In addition to the processing delay measurement, compression efficiency and reconstruction sensitivity to error were also evaluated via simulation models including the noise-free channel model, random noise channel model, and CDMA channel model, as well as over an actual CDMA network currently operating in Korea. This study found that the EZ algorithm achieves the best compression efficiency in a low-noise environment, and that the WH algorithm is competitive for use in high-error environments, although its short-term performance degrades with abnormal or noise-contaminated ECG signals. PMID:17701824
Lu, Y; Sundararajan, N; Saratchandran, P
1998-01-01
This paper presents a detailed performance analysis of the minimal resource allocation network (M-RAN) learning algorithm. M-RAN is a sequential learning radial basis function neural network which combines the growth criterion of the resource allocating network (RAN) of Platt (1991) with a pruning strategy based on the relative contribution of each hidden unit to the overall network output; the resulting network tends toward a minimal topology for the RAN. The performance of this algorithm is compared with multilayer feedforward networks (MFNs) trained with (1) a variant of the standard backpropagation algorithm known as RPROP and (2) the dependence identification (DI) algorithm of Moody and Antsaklis, on several benchmark problems in the function approximation and pattern classification areas. For all these problems, the M-RAN algorithm is shown to realize networks with far fewer hidden neurons and better or equal approximation/classification accuracy. Further, the time taken for learning (training) is considerably shorter, as M-RAN does not require repeated presentation of the training data. PMID:18252454
Performance of 12 DIR algorithms in low-contrast regions for mass and density conserving deformation
Yeo, U. J.; Supple, J. R.; Franich, R. D.; Taylor, M. L.; Smith, R.; Kron, T.
2013-10-15
Purpose: Deformable image registration (DIR) has become a key tool for adaptive radiotherapy to account for inter- and intrafraction organ deformation. Of contemporary interest, the application to deformable dose accumulation requires accurate deformation even in low contrast regions where dose gradients may exist within near-uniform tissues. One expects high-contrast features to generally be deformed more accurately by DIR algorithms. The authors systematically assess the accuracy of 12 DIR algorithms and quantitatively examine, in particular, low-contrast regions, where accuracy has not previously been established.Methods: This work investigates DIR algorithms in three dimensions using deformable gel (DEFGEL) [U. J. Yeo, M. L. Taylor, L. Dunn, R. L. Smith, T. Kron, and R. D. Franich, “A novel methodology for 3D deformable dosimetry,” Med. Phys. 39, 2203–2213 (2012)], for application to mass- and density-conserving deformations. CT images of DEFGEL phantoms with 16 fiducial markers (FMs) implanted were acquired in deformed and undeformed states for three different representative deformation geometries. Nonrigid image registration was performed using 12 common algorithms in the public domain. The optimum parameter setup was identified for each algorithm and each was tested for deformation accuracy in three scenarios: (I) original images of the DEFGEL with 16 FMs; (II) images with eight of the FMs mathematically erased; and (III) images with all FMs mathematically erased. The deformation vector fields obtained for scenarios II and III were then applied to the original images containing all 16 FMs. The locations of the FMs estimated by the algorithms were compared to actual locations determined by CT imaging. The accuracy of the algorithms was assessed by evaluation of three-dimensional vectors between true marker locations and predicted marker locations.Results: The mean magnitude of 16 error vectors per sample ranged from 0.3 to 3.7, 1.0 to 6.3, and 1.3 to 7
Significant differences in pediatric psychotropic side effects: Implications for school performance.
Kubiszyn, Thomas; Mire, Sarah; Dutt, Sonia; Papathopoulos, Katina; Burridge, Andrea Backsheider
2012-03-01
Some side effects (SEs) of increasingly prescribed psychotropic medications can impact student performance in school. SE risk varies, even among drugs from the same class (e.g., antidepressants). Knowing which SEs occur significantly more often than others may enable school psychologists to enhance collaborative risk-benefit analysis, medication monitoring, data-based decision-making, and inform mitigation efforts. SE data from Full Prescribing Information (PI) on the FDA website for ADHD drugs, atypical antipsychotics, and antidepressants with pediatric indications were analyzed. Risk ratios (RR) are reported for each drug within a category compared with placebo. RR tables and graphs inform the reader about SE incidence differences for each drug and provide clear evidence of the wide variability in SE incidence in the FDA data. Breslow-Day and Cochran Mantel-Haenszel methods were used to test for drug-placebo SE differences and to test for significance across drugs within each category based on odds ratios (ORs). Significant drug-placebo differences were found for each drug compared with placebo, when odds were pooled across all drugs in a category compared with placebo, and between some drugs within categories. Unexpectedly, many large RR differences did not reach significance. Potential explanations are offered, including limitations of the FDA data sets and statistical and methodological issues. Future research directions are offered. The potential impact of certain SEs on school performance, mitigation strategies, and the potential role of the school psychologist is discussed, with consideration for ethical and legal limitations. PMID:22582933
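The risk ratio and odds ratio computations underlying these comparisons reduce to arithmetic on a 2 × 2 table of side-effect counts; the counts below are hypothetical, not drawn from the FDA data sets discussed.

```python
def rr_and_or(events_drug, n_drug, events_placebo, n_placebo):
    risk_d = events_drug / n_drug
    risk_p = events_placebo / n_placebo
    rr = risk_d / risk_p                                   # risk ratio
    odds_d = events_drug / (n_drug - events_drug)
    odds_p = events_placebo / (n_placebo - events_placebo)
    return rr, odds_d / odds_p                             # and odds ratio

# e.g. a SE reported by 30/200 on drug vs 12/200 on placebo (hypothetical)
rr, or_ = rr_and_or(30, 200, 12, 200)
print(round(rr, 2), round(or_, 2))    # RR = 2.5, OR ~ 2.77
```

Note that a large RR can still fail to reach significance when the underlying event counts are small, which is consistent with the pattern the authors report.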
The FPGA realization of a real-time Bayer image restoration algorithm with better performance
NASA Astrophysics Data System (ADS)
Ma, Huaping; Liu, Shuang; Zhou, Jiangyong; Tang, Zunlie; Deng, Qilin; Zhang, Hongliu
2014-11-01
As FPGA implementations of Bayer color interpolation algorithms have come into wide use, better performance, real-time processing, and lower resource consumption have become the users' main pursuits. In order to achieve high-speed, high-quality Bayer image restoration with low resource consumption, this article designs and optimizes the color reconstruction at both the interpolation-algorithm level and the FPGA-realization level. The hardware realization is then completed on an FPGA development platform, achieving real-time, high-fidelity image processing with low resource consumption in embedded image acquisition systems.
Global Precipitation Measurement (GPM) Microwave Imager Falling Snow Retrieval Algorithm Performance
NASA Astrophysics Data System (ADS)
Skofronick Jackson, Gail; Munchak, Stephen J.; Johnson, Benjamin T.
2015-04-01
Retrievals of falling snow from space represent an important data set for understanding the Earth's atmospheric, hydrological, and energy cycles. While satellite-based remote sensing provides global coverage of falling snow events, the science is relatively new and retrievals are still undergoing development with challenges and uncertainties remaining. This work reports on the development and post-launch testing of retrieval algorithms for the NASA Global Precipitation Measurement (GPM) mission Core Observatory satellite launched in February 2014. In particular, we will report on GPM Microwave Imager (GMI) radiometer instrument algorithm performance with respect to falling snow detection and estimation. Since GPM's launch, the at-launch GMI precipitation algorithms, based on a Bayesian framework, have been used with the new GPM data. The at-launch database is generated using proxy satellite data merged with surface measurements (instead of models). One year after launch, the Bayesian database will begin to be replaced with the more realistic observational data from the GPM spacecraft radar retrievals and GMI data. It is expected that the observational database will be much more accurate for falling snow retrievals because that database will take full advantage of the 166 and 183 GHz snow-sensitive channels. Furthermore, much retrieval algorithm work has been done to improve GPM retrievals over land. The Bayesian framework for GMI retrievals is dependent on the a priori database used in the algorithm and how profiles are selected from that database. Thus, a land classification sorts land surfaces into ~15 different categories for surface-specific databases (radiometer brightness temperatures are quite dependent on surface characteristics). In addition, our work has shown that knowing if the land surface is snow-covered, or not, can improve the performance of the algorithm. Improvements were made to the algorithm that allow for daily inputs of ancillary snow cover
Performance evaluation of dynamic assembly period algorithm in TCP over OBS networks
NASA Astrophysics Data System (ADS)
Peng, Shuping; Li, Zhengbin; He, Yongqi; Xu, Anshi
2007-11-01
Dynamic Assembly Period (DAP) is a novel assembly algorithm based on the dynamic TCP window. The algorithm tracks the variation of the current TCP window caused by burst loss events and dynamically updates the assembly period for the next assembly; an analytical model provides the theoretical foundation for the proposed algorithm. Several TCP flavors have been proposed to enhance the performance of TCP, such as Default, Tahoe, Reno, New Reno, and SACK, and are adopted in the current Internet. In this paper, we evaluate the performance of DAP under these different TCP flavors. The simulation results show that the performance of DAP under the Default TCP flavor is the best, and that the differences in DAP performance across flavors are correlated with the internal mechanisms of the flavors. We also compare the performance of DAP and FAP under the same TCP flavor; the results indicate that DAP outperforms FAP over a wide range of burst loss rates.
Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander
2011-01-01
This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms are compared with those of a newly proposed artificial neural network algorithm. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
Assessing SWOT discharge algorithms performance across a range of river types
NASA Astrophysics Data System (ADS)
Durand, M. T.; Smith, L. C.; Gleason, C. J.; Bjerklie, D. M.; Garambois, P. A.; Roux, H.
2014-12-01
Scheduled for launch in 2020, the Surface Water and Ocean Topography (SWOT) satellite mission will measure river height, width, and slope, globally, as well as characterizing storage change in lakes, and ocean surface dynamics. Four discharge algorithms have been formulated to solve the inverse problem of river discharge from SWOT observations. Three of these approaches are based on Manning's equation, while the fourth utilizes at-many-stations hydraulic geometry relating width and discharge. In all cases, SWOT will provide some but not all of the information required to estimate discharge. The focus of the inverse approaches is estimation of the unknown parameters. The algorithms use a range of a priori information. This paper will generate synthetic measurements of height, width, and slope for a number of rivers, including reaches of the Sacramento, Ohio, Mississippi, Platte, Amazon, Garonne, Po, Severn, St. Lawrence, and Tanana. These rivers have a wide range of flows, geometries, hydraulic regimes, floodplain interactions, and planforms. One-year synthetic datasets will be generated in each case. We will add white noise to the simulated quantities and generate scenarios with different repeat time. The focus will be on retrievability of the hydraulic parameters across a range of space-time sampling, rather than on ability to retrieve under the specific SWOT orbit. We will focus on several specific research questions affecting algorithm performance, including river characteristics, temporal sampling, and algorithm accuracy. The overall goal is to be able to predict which algorithms will work better for different kinds of rivers, and potentially to combine the outputs of the various algorithms to obtain more robust estimates. Preliminary results on the Sacramento River indicate that all algorithms perform well for this single-channel river, with diffusive hydraulics, with relative RMSE values ranging from 9% to 26% for the various algorithms. Preliminary
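Three of the four algorithms build on Manning's equation, which for a wide rectangular channel can be sketched as follows; SWOT observes width, surface height (hence depth change), and slope, while the roughness coefficient and bathymetry are the unknowns the inverse algorithms must infer. All values below are illustrative.

```python
def manning_discharge(width_m, depth_m, slope, n=0.03):
    # Q = (1/n) * A * R^(2/3) * S^(1/2) for a rectangular channel.
    area = width_m * depth_m
    hydraulic_radius = area / (width_m + 2.0 * depth_m)   # area / wetted perimeter
    return area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5 / n

print(round(manning_discharge(width_m=150.0, depth_m=3.0, slope=1e-4), 1))  # m^3/s
```

Since Q scales with 1/n and with depth raised to roughly the 5/3 power, errors in the inferred roughness and unobserved bathymetry dominate the discharge uncertainty, which is why parameter retrievability is the focus of the experiments described above.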
Measuring localization performance of super-resolution algorithms on very active samples.
Wolter, Steve; Endesfelder, Ulrike; van de Linde, Sebastian; Heilemann, Mike; Sauer, Markus
2011-04-11
Super-resolution fluorescence imaging based on single-molecule localization relies critically on the availability of efficient processing algorithms to distinguish, identify, and localize emissions of single fluorophores. In multiple current applications, such as three-dimensional, time-resolved or cluster imaging, high densities of fluorophore emissions are common. Here, we provide an analytic tool to test the performance and quality of localization microscopy algorithms and demonstrate that common algorithms encounter difficulties for samples with high fluorophore density. We demonstrate that, for typical single-molecule localization microscopy methods such as dSTORM and the commonly used rapidSTORM scheme, computational precision limits the acceptable density of concurrently active fluorophores to 0.6 per square micrometer and that the number of successfully localized fluorophores per frame is limited to 0.2 per square micrometer. PMID:21503016
A Hybrid Neural Network-Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2001-01-01
In this paper, a model-based diagnostic method, which utilizes Neural Networks and Genetic Algorithms, is investigated. Neural networks are applied to estimate the engine internal health, and Genetic Algorithms are applied for sensor bias detection and estimation. This hybrid approach takes advantage of the nonlinear estimation capability provided by neural networks while improving the robustness to measurement uncertainty through the application of Genetic Algorithms. The hybrid diagnostic technique also has the ability to rank multiple potential solutions for a given set of anomalous sensor measurements in order to reduce false alarms and missed detections. The performance of the hybrid diagnostic technique is evaluated through some case studies derived from a turbofan engine simulation. The results show this approach is promising for reliable diagnostics of aircraft engines.
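A toy counterpart of the hybrid scheme: the sketch below replaces the neural network with a trivial stand-in model and lets a small genetic algorithm search for the sensor-bias vector that best explains a set of measurements. The population size, mutation scale, stand-in model, and bias values are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
h = lambda: np.array([1.0, 2.0, 3.0])   # stand-in for the NN's predicted values
true_bias = np.array([0.0, 0.4, 0.0])   # injected fault on sensor 2
meas = h() + true_bias + rng.normal(0.0, 0.01, 3)

def fitness(pop):
    # Higher fitness = lower residual between measurements and model + bias.
    return -np.sum((meas - (h() + pop)) ** 2, axis=1)

pop = rng.uniform(-1.0, 1.0, (40, 3))
for _ in range(100):
    parents = pop[np.argsort(fitness(pop))[-20:]]     # truncation selection
    pop = parents[rng.integers(0, 20, 40)] + rng.normal(0.0, 0.05, (40, 3))

best = pop[np.argmax(fitness(pop))]
print(np.round(best, 2))    # should land near the injected bias vector
```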
Using modified fruit fly optimisation algorithm to perform the function test and case studies
NASA Astrophysics Data System (ADS)
Pan, Wen-Tsao
2013-06-01
Evolutionary computation is a computing paradigm established by simulating natural evolutionary processes, based on the concept of Darwinian theory, and it is a common research method. The main contribution of this paper is to strengthen the search for the optimal solution in the fruit fly optimization algorithm (FOA), in order to avoid convergence to local extremum solutions. Evolutionary computation has grown to include the concepts of animal foraging behaviour and group behaviour. This study discussed three common evolutionary computation methods and compared them with the modified fruit fly optimization algorithm (MFOA). It further investigated the algorithms' ability to compute the extreme values of three mathematical functions, as well as their execution speed and the forecast ability of forecasting models built using the optimized general regression neural network (GRNN) parameters. The findings indicated that there was no obvious difference between particle swarm optimization and the MFOA in regards to the ability to compute extreme values; however, they were both better than the artificial fish swarm algorithm and FOA. In addition, the MFOA performed better than particle swarm optimization in regards to algorithm execution speed, and the forecast ability of the forecasting model built using the MFOA-optimized GRNN parameters was better than that of the other three forecasting models.
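A minimal sketch of the basic FOA loop described above, minimizing a one-dimensional toy objective; the swarm size, iteration count, and unit search radius are assumptions, and the paper's MFOA modifications (which adjust the search to escape local extremum solutions) are not included.

```python
import numpy as np

def foa_minimize(f, iters=200, flies=30, seed=0):
    """Basic fruit fly optimization: random smell-based search around the
    swarm location, smell concentration S = 1/distance, then a vision-based
    move of the swarm to the best fly."""
    rng = np.random.default_rng(seed)
    x_axis, y_axis = rng.uniform(0.0, 1.0, 2)       # swarm location
    best_s, best_val = None, np.inf
    for _ in range(iters):
        x = x_axis + rng.uniform(-1.0, 1.0, flies)  # osphresis (smell) search
        y = y_axis + rng.uniform(-1.0, 1.0, flies)
        s = 1.0 / np.sqrt(x**2 + y**2)              # smell concentration
        vals = f(s)
        i = np.argmin(vals)
        if vals[i] < best_val:                      # vision-based move
            best_val, best_s = vals[i], s[i]
            x_axis, y_axis = x[i], y[i]
    return best_s, best_val

# Toy objective: (s - 3)^2 has its extreme value at s = 3.
s_opt, v_opt = foa_minimize(lambda s: (s - 3.0) ** 2)
print(f"s* = {s_opt:.4f}, f(s*) = {v_opt:.6f}")
```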
NASA Technical Reports Server (NTRS)
Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.
1997-01-01
Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling by Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state-space explosion, where the number of states increases exponentially with the size of the high-level model. Thus, the corresponding Markov model becomes prohibitively large, and solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that can tolerate such large state spaces and do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on the fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally efficient technique for Gauss-Seidel-based solvers that avoids the need for generating rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
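To make the on-the-fly idea concrete, here is a small sketch of a Gauss-Seidel solve in which rows of the matrix are produced on demand by a generator function rather than stored; the diagonally dominant tridiagonal toy matrix stands in for a state transition-rate matrix and is purely an assumption for illustration.

```python
import numpy as np

def gauss_seidel_on_the_fly(row_gen, b, n, sweeps=100, tol=1e-10):
    """Solve A x = b by Gauss-Seidel where row_gen(i) -> (indices, values)
    generates row i of A on demand, so A is never stored explicitly."""
    x = np.zeros(n)
    for _ in range(sweeps):
        delta = 0.0
        for i in range(n):
            idx, val = row_gen(i)
            diag = val[idx == i][0]
            off = np.sum(val[idx != i] * x[idx[idx != i]])
            new = (b[i] - off) / diag
            delta = max(delta, abs(new - x[i]))
            x[i] = new
        if delta < tol:          # sweep converged
            break
    return x

# Toy generator: rows of a diagonally dominant tridiagonal matrix,
# produced procedurally instead of being read from storage.
n = 6
def row_gen(i):
    idx = np.array([j for j in (i - 1, i, i + 1) if 0 <= j < n])
    return idx, np.where(idx == i, 4.0, -1.0)

print(np.round(gauss_seidel_on_the_fly(row_gen, b=np.ones(n), n=n), 6))
```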
Scott, Joshua I; Xue, Xiao; Wang, Ming; Kline, R Joseph; Hoffman, Benjamin C; Dougherty, Daniel; Zhou, Chuanzhen; Bazan, Guillermo; O'Connor, Brendan T
2016-06-01
Polymer semiconductors based on donor-acceptor monomers have recently resulted in significant gains in field effect mobility in organic thin film transistors (OTFTs). These polymers incorporate fused aromatic rings and have been designed to have stiff planar backbones, resulting in strong intermolecular interactions, which subsequently result in stiff and brittle films. The complex synthesis typically required for these materials may also result in increased production costs. Thus, the development of methods to improve mechanical plasticity while lowering material consumption during fabrication will significantly improve opportunities for adoption in flexible and stretchable electronics. To achieve these goals, we consider blending a brittle donor-acceptor polymer, poly[4-(4,4-dihexadecyl-4H-cyclopenta[1,2-b:5,4-b']dithiophen-2-yl)-alt-[1,2,5]thiadiazolo[3,4-c]pyridine] (PCDTPT), with ductile poly(3-hexylthiophene). We found that the ductility of the blend films is significantly improved compared to that of neat PCDTPT films, and when the blend film is employed in an OTFT, the performance is largely maintained. The ability to maintain charge transport character is due to vertical segregation within the blend, while the improved ductility is due to intermixing of the polymers throughout the film thickness. Importantly, the application of large strains to the ductile films is shown to orient both polymers, which further increases charge carrier mobility. These results highlight a processing approach to achieve high performance polymer OTFTs that are electrically and mechanically optimized. PMID:27200458
Proper nozzle location, bit profile, and cutter arrangement affect PDC-bit performance significantly
Garcia-Gavito, D.; Azar, J.J.
1994-09-01
During the past 20 years, the drilling industry has looked to new technology to halt the exponentially increasing costs of drilling oil, gas, and geothermal wells. This technology includes bit design innovations to improve overall drilling performance and reduce drilling costs. These innovations include the development of drag bits that use polycrystalline-diamond-compact (PDC) cutters, also called PDC bits, to drill long, continuous intervals of soft to medium-hard formations more economically than conventional three-cone roller bits. The cost advantage is the result of the higher rates of penetration (ROPs) and longer bit life obtained with PDC bits. An experimental study comparing the effects of PDC-bit design features on the dynamic pressure distribution at the bit/rock interface was conducted on a full-scale drilling rig. Results showed that nozzle location, bit profile, and cutter arrangement are significant factors in PDC-bit performance.
NASA Astrophysics Data System (ADS)
Chen, Yanli; Du, Lianhuan; Yang, Peihua; Sun, Peng; Yu, Xiang; Mai, Wenjie
2015-08-01
Here, we report robust, flexible CNT-based supercapacitor (SC) electrodes fabricated by electrodepositing polypyrrole (PPy) on freestanding vacuum-filtered CNT film. These electrodes demonstrate significantly improved mechanical properties (with an ultimate tensile strength of 16 MPa) and greatly enhanced electrochemical performance (5.6 times larger areal capacitance). The major drawback of conductive polymer electrodes is fast capacitance decay caused by structural breakdown, which degrades cycling stability; this decay is not observed in our case. All-solid-state SCs assembled with the robust CNT/PPy electrodes exhibit excellent flexibility, long lifetime (95% capacitance retention after 10,000 cycles) and high electrochemical performance (a total device volumetric capacitance of 4.9 F/cm3). Moreover, a flexible SC pack is demonstrated to light up 53 LEDs or drive a digital watch, indicating the broad potential application of our SCs for portable/wearable electronics.
NASA Astrophysics Data System (ADS)
Lu, Haibao; Gou, Jan
2012-04-01
A new nanopaper that exhibits exciting electrical and electromagnetic performance was fabricated by incorporating magnetically aligned carbon nanotubes (CNTs) with carbon nanofibers (CNFs). Electromagnetic CNTs were blended with and aligned into the nanopaper using a magnetic field to significantly improve the electrical and electromagnetic performance of the nanopaper and its enabled shape-memory polymer (SMP) composite. The morphology and structure of the aligned CNT arrays in the nanopaper were characterized with scanning electron microscopy (SEM). A continuous and compact network of CNFs and aligned CNTs indicated that the nanopaper could have highly conductive properties. Furthermore, the electromagnetic interference (EMI) shielding efficiency of the SMP composites with different weight contents of aligned CNT arrays was characterized. Finally, the aligned CNT arrays in the nanopapers were employed to achieve electrical actuation and accelerate the recovery speed of the SMP composites.
NASA Astrophysics Data System (ADS)
Greco, Mario; Huebner, Claudia; Marchi, Gabriele
2008-10-01
In the field of blind image deconvolution a promising new algorithm, based on Principal Component Analysis (PCA), has recently been proposed in the literature. The main advantages of the algorithm are the following: its computational complexity is generally lower than that of other deconvolution techniques (e.g., the widely used Iterative Blind Deconvolution - IBD - method); it is robust to white noise; and only the support of the blurring point spread function is required to perform single-observation deconvolution (i.e., when a single degraded observation of a scene is available), while the multiple-observation case (i.e., when multiple degraded observations of a scene are available) is completely unsupervised. The effectiveness of the PCA-based restoration algorithm has so far been confirmed only by visual inspection and, to the best of our knowledge, no objective image quality assessment has been performed. In this paper a generalization of the original algorithm is proposed; this previously unexplored issue is then considered and the achieved results are compared with those of the IBD method, which is used as a benchmark.
Algorithmic, LOCS and HOCS (chemistry) exam questions: performance and attitudes of college students
NASA Astrophysics Data System (ADS)
Zoller, Uri
2002-02-01
The performance of freshman biology and physics-mathematics majors and chemistry majors, as well as pre- and in-service chemistry teachers, in two Israeli universities on algorithmic (ALG), lower-order cognitive skills (LOCS), and higher-order cognitive skills (HOCS) chemistry exam questions was studied. The driving force for the study was an interest in moving science and chemistry instruction from an algorithmic and factual recall orientation dominated by LOCS, to a decision-making, problem-solving and critical system thinking approach, dominated by HOCS. College students' responses to the specially designed ALG, LOCS and HOCS chemistry exam questions were scored and analysed for differences and correlation between the performance means within and across universities by question category. This was followed by a combined student interview and 'speaking aloud' problem-solving session for assessing the thinking processes involved in solving these types of questions and the students' attitudes towards them. The main findings were: (1) students in both universities performed consistently in each of the three categories in the order ALG > LOCS > HOCS; their 'ideological' preference was HOCS > algorithmic/LOCS (the latter referred to as 'computational questions'), but their pragmatic preference was the reverse; (2) success on algorithmic/LOCS questions does not imply success on HOCS questions; algorithmic questions constitute a category of their own as far as students' success in solving them is concerned. Our study and its results support the effort being made, worldwide, to integrate HOCS-fostering teaching and assessment strategies and to develop HOCS-oriented science-technology-environment-society (STES)-type curricula within science and chemistry education.
Performance Evaluation of Different Ground Filtering Algorithms for Uav-Based Point Clouds
NASA Astrophysics Data System (ADS)
Serifoglu, C.; Gungor, O.; Yilmaz, V.
2016-06-01
Digital Elevation Model (DEM) generation is one of the leading application areas in geomatics. Since a DEM represents the bare earth surface, the very first step of generating a DEM is to separate the ground and non-ground points, which is called ground filtering. Once the point cloud is filtered, the ground points are interpolated to generate the DEM. LiDAR (Light Detection and Ranging) point clouds have been used in many applications thanks to their success in representing the objects they belong to. Hence, various ground filtering algorithms have been reported in the literature to filter LiDAR data. Since LiDAR data acquisition is still a costly process, using point clouds generated from UAV images to produce DEMs is a reasonable alternative. In this study, point clouds with three different densities were generated from the aerial photos taken from a UAV (Unmanned Aerial Vehicle) to examine the effect of point density on filtering performance. The point clouds were then filtered by means of five different ground filtering algorithms: Progressive Morphological 1D (PM1D), Progressive Morphological 2D (PM2D), Maximum Local Slope (MLS), Elevation Threshold with Expand Window (ETEW) and Adaptive TIN (ATIN). The filtering performance of each algorithm was investigated qualitatively and quantitatively. The results indicated that the ATIN and PM2D algorithms showed the best overall ground filtering performance. The MLS and ETEW algorithms were found to be the least successful. It was concluded that point clouds generated from UAVs can be a good alternative to LiDAR data.
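A brute-force toy version of the Maximum Local Slope idea mentioned above, assuming a small point cloud and invented radius and slope thresholds; production filters add spatial indexing and iterative refinement.

```python
import numpy as np

def max_local_slope_filter(pts, radius=2.0, max_slope=0.3):
    """Label a point as ground unless it rises above some neighbour
    within `radius` with a slope (rise/run) exceeding `max_slope`."""
    xyz = np.asarray(pts, float)
    ground = np.ones(len(xyz), dtype=bool)
    for i, p in enumerate(xyz):
        d = np.linalg.norm(xyz[:, :2] - p[:2], axis=1)   # planimetric distance
        nbr = (d > 0) & (d < radius)
        if not nbr.any():
            continue
        slope = (p[2] - xyz[nbr, 2]) / d[nbr]            # rise above neighbours
        if np.max(slope) > max_slope:                    # too steep: an object
            ground[i] = False
    return ground

# Toy cloud: a flat 5 x 5 m patch plus one 2 m "roof" point above it.
pts = [(x, y, 0.0) for x in range(5) for y in range(5)] + [(2.0, 2.0, 2.0)]
print(max_local_slope_filter(pts))   # last point is labelled non-ground
```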
Imbir, Kamil K
2016-01-01
Activation mechanisms such as arousal are known to be responsible for slowdown observed in the Emotional Stroop and modified Stroop tasks. Using the duality of mind perspective, we may conclude that both ways of processing information (automatic or controlled) should have their own mechanisms of activation, namely, arousal for an experiential mind, and subjective significance for a rational mind. To investigate the consequences of both, factorial manipulation was prepared. Other factors that influence Stroop task processing such as valence, concreteness, frequency, and word length were controlled. Subjective significance was expected to influence arousal effects. In the first study, the task was to name the color of font for activation charged words. In the second study, activation charged words were, at the same time, combined with an incongruent condition of the classical Stroop task around a fixation point. The task was to indicate the font color for color-meaning words. In both studies, subjective significance was found to shape the arousal impact on performance in terms of the slowdown reduction for words charged with subjective significance. PMID:26869974
NASA Astrophysics Data System (ADS)
Kim, Chul-Ho; Lee, Kee-Man; Lee, Sang-Heon
Power train system design is one of the key R&D areas in the development of a new automobile, because the system design yields an optimally sized engine with an adaptable power transmission that can meet the design requirements of the new vehicle. For electric vehicle design in particular, a highly reliable power train design algorithm is required for energy efficiency. In this study, an analytical simulation algorithm is developed to estimate the driving performance of a designed power train system of an electric vehicle. The principal theory of the simulation algorithm is conservation of energy, together with several analytical and experimental data such as rolling resistance, aerodynamic drag, and the mechanical efficiency of the power transmission. From the analytical calculation results, the running resistance of a designed vehicle is obtained as the operating conditions of the vehicle, such as road inclination and vehicle speed, change. The tractive performance of the model vehicle with a given power train system is also calculated at each gear ratio of the transmission. Through analysis of these two calculation results, running resistance and tractive performance, the driving performance of a designed electric vehicle is estimated, and it can be used to evaluate the adaptability of the designed power train system for the vehicle.
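A minimal energy-balance sketch of the two quantities computed above, running resistance and tractive force; every vehicle constant, the motor torque, and the gear ratios below are invented for illustration.

```python
import math

# Illustrative constants for a small EV; every value is an assumption.
MASS = 1400.0      # kg, vehicle plus payload
C_RR = 0.012       # rolling resistance coefficient
RHO_AIR = 1.2      # kg/m^3
CD_A = 0.65        # drag coefficient times frontal area, m^2
WHEEL_R = 0.3      # wheel radius, m
DRIVE_EFF = 0.92   # mechanical efficiency of the transmission
G = 9.81

def running_resistance(v_mps, grade_deg=0.0):
    """Road load from rolling, grade, and aerodynamic terms."""
    th = math.radians(grade_deg)
    rolling = MASS * G * C_RR * math.cos(th)
    grade = MASS * G * math.sin(th)
    aero = 0.5 * RHO_AIR * CD_A * v_mps ** 2
    return rolling + grade + aero

def tractive_force(motor_torque_nm, overall_ratio):
    """Wheel force delivered through one overall gear ratio."""
    return motor_torque_nm * overall_ratio * DRIVE_EFF / WHEEL_R

# Resistance vs. speed on a 2-degree incline, then force per gear ratio.
for kph in (30, 60, 90, 120):
    print(f"{kph:3d} km/h: resistance {running_resistance(kph / 3.6, 2.0):7.1f} N")
for ratio in (9.0, 6.5, 4.2):
    print(f"ratio {ratio}: tractive force {tractive_force(180.0, ratio):7.1f} N")
```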
Focused R&D For Electrochromic Smart Windows: Significant Performance and Yield Enhancements
Mark Burdis; Neil Sbar
2003-01-31
There is a need to improve the energy efficiency of building envelopes as they are the primary factor governing the heating, cooling, lighting and ventilation requirements of buildings--influencing 53% of building energy use. In particular, windows contribute significantly to the overall energy performance of building envelopes, thus there is a need to develop advanced energy efficient window and glazing systems. Electrochromic (EC) windows represent the next generation of advanced glazing technology that will (1) reduce the energy consumed in buildings, (2) improve the overall comfort of the building occupants, and (3) improve the thermal performance of the building envelope. "Switchable" EC windows provide, on demand, dynamic control of visible light, solar heat gain, and glare without blocking the view. As exterior light levels change, the window's performance can be electronically adjusted to suit conditions. A schematic illustrating how SageGlass® electrochromic windows work is shown in Figure I.1. SageGlass® EC glazings offer the potential to save cooling and lighting costs, with the added benefit of improving thermal and visual comfort. Control over solar heat gain will also result in the use of smaller HVAC equipment. If a step change in the energy efficiency and performance of buildings is to be achieved, there is a clear need to bring EC technology to the marketplace. This project addresses accelerating the widespread introduction of EC windows in buildings and thus maximizing total energy savings in the U.S. and worldwide. We report on R&D activities to improve the optical performance needed to broadly penetrate the full range of architectural markets. Also, processing enhancements have been implemented to reduce manufacturing costs. Finally, tests are being conducted to demonstrate the durability of the EC device and the dual pane insulating glass unit (IGU) to be at least equal to that of conventional windows.
Wolfe, Amy K.; Malone, Elizabeth L.; Heerwagen, Judith H.; Dion, Jerome P.
2014-04-01
The people who use Federal buildings — Federal employees, operations and maintenance staff, and the general public — can significantly impact a building’s environmental performance and the consumption of energy, water, and materials. Many factors influence building occupants’ use of resources (use behaviors), including work process requirements; the ability to fulfill agency missions; new and possibly unfamiliar high-efficiency/high-performance building technologies; a lack of understanding, education, and training; inaccessible information or ineffective feedback mechanisms; and cultural norms and institutional rules and requirements, among others. While many strategies have been used to introduce new occupant use behaviors that promote sustainability and reduced resource consumption, few have been verified in the scientific literature or have properly documented case study results. This paper documents validated strategies that have been shown to encourage new use behaviors that can result in significant, persistent, and measurable reductions in resource consumption. From the peer-reviewed literature, the paper identifies relevant strategies for Federal facilities and commercial buildings that focus on the individual, groups of individuals (e.g., work groups), and institutions — their policies, requirements, and culture. The paper documents methods with evidence of success in changing use behaviors and enabling occupants to effectively interact with new technologies/designs. It also provides a case study of the strategies used at a Federal facility — Fort Carson, Colorado. The paper documents gaps in the current literature and approaches, and provides topics for future research.
NASA Astrophysics Data System (ADS)
Kim, Bong Joo; Hwang, Gang Uk
In this paper, we analyze the extended real-time Polling Service (ertPS) algorithm in IEEE 802.16e systems, which is designed to support Voice-over-Internet-Protocol (VoIP) services with data packets of various sizes and silence suppression. The analysis uses a two-dimensional Markov Chain, where the grant size and the voice packet state are considered, and an approximation formula for the total throughput in the ertPS algorithm is derived. Next, to improve the performance of the ertPS algorithm, we propose an enhanced uplink resource allocation algorithm, called the e2rtPS algorithm, for VoIP services in IEEE 802.16e systems. The e2rtPS algorithm considers the queue status information and tries to alleviate the queue congestion as soon as possible by using remaining network resources. Numerical results are provided to show the accuracy of the approximation analysis for the ertPS algorithm and to verify the effectiveness of the e2rtPS algorithm.
Danilovic, D; Ohm, O J; Stroebel, J; Breivik, K; Hoff, P I; Markowitz, T
1998-05-01
We have developed an algorithmic method for automatic determination of stimulation thresholds in both cardiac chambers in patients with intact atrioventricular (AV) conduction. The algorithm utilizes ventricular sensing, may be used with any type of pacing leads, and may be downloaded via telemetry links into already implanted dual-chamber Thera pacemakers. Thresholds are determined with 0.5 V amplitude and 0.06 ms pulse-width resolution in unipolar, bipolar, or both lead configurations, with a programmable sampling interval from 2 minutes to 48 hours. Measured values are stored in the pacemaker memory for later retrieval and do not influence permanent output settings. The algorithm was intended to gather information on continuous behavior of stimulation thresholds, which is important in the formation of strategies for programming pacemaker outputs. Clinical performance of the algorithm was evaluated in eight patients who received bipolar tined steroid-eluting leads and were observed for a mean of 5.1 months. Patient safety was not compromised by the algorithm, except for the possibility of pacing during the physiologic refractory period. Methods for discrimination of incorrect data points were developed and incorrect values were discarded. Fine resolution threshold measurements collected during this study indicated that: (1) there were great differences in magnitude of threshold peaking in different patients; (2) the initial intensive threshold peaking was usually followed by another less intensive but longer-lasting wave of threshold peaking; (3) the pattern of tissue reaction in the atrium appeared different from that in the ventricle; and (4) threshold peaking in the bipolar lead configuration was greater than in the unipolar configuration. The algorithm proved to be useful in studying ambulatory thresholds. PMID:9604237
Experimental Investigation of the Performance of Image Registration and De-aliasing Algorithms
NASA Astrophysics Data System (ADS)
Crabtree, P.; Dao, P.
Various image de-aliasing algorithms and techniques have been developed to improve the resolution of sensor-aliased images captured with an undersampled point spread function. In the literature these types of algorithms are sometimes included under the broad umbrella of super-resolution. Image restoration is a more appropriate categorization for this work because we aim to restore image resolution lost due to sensor aliasing, but only up to the limit imposed by diffraction. Specifically, the work presented here is focused on image de-aliasing using microscanning. Much of the previous work in this area demonstrates improvement by using simulated imagery, or by using imagery for which the subpixel shifts are unknown and must be estimated. This paper takes an experimental approach to investigate performance for both the visible and long-wave infrared (LWIR) regions. Two linear translation stages are used to provide two-axis camera control via an RS-232 interface. The translation stages use stepper motors, but also include a microstepping capability which allows discrete steps of approximately 0.1 microns. However, there are several types of position error associated with these devices. Therefore, the microstepping error is investigated and partially quantified prior to performing microscan image capture and processing. We also consider the impact of a less than 100% fill factor on algorithm performance. For the visible region we use a CMOS camera and a resolution target to generate a contrast transfer function (CTF) for both the raw and microscanned images. This allows modulation transfer function (MTF) estimation, which gives a more complete and quantitative description of performance than simply estimating the limiting resolution and/or visual inspection. The difference between the MTF curves for the raw and microscanned images will be explored as a means to describe performance as a function of spatial frequency. Finally, our goal is to also demonstrate
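When the subpixel shifts are known exactly, as with the translation stages described above, microscan de-aliasing reduces to interleaving the shifted low-resolution frames onto a finer grid; the sketch below assumes an ideal N x N pattern of 1/N-pixel shifts and no optical blur or registration error.

```python
import numpy as np

def microscan_interleave(frames):
    """Interleave an N*N set of low-resolution frames, each offset by a
    known 1/N-pixel step, onto a grid N times finer in each axis."""
    n = int(np.sqrt(len(frames)))
    h, w = frames[0].shape
    hi = np.zeros((h * n, w * n))
    for k, f in enumerate(frames):
        dy, dx = divmod(k, n)        # frame k's (row, col) subpixel offset
        hi[dy::n, dx::n] = f
    return hi

# Toy demo: sample an 8 x 8 scene at four half-pixel offsets, then rebuild it.
scene = np.add.outer(np.arange(8), np.arange(8)).astype(float)
lr = [scene[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]
print(np.array_equal(microscan_interleave(lr), scene))   # True
```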
Code of Federal Regulations, 2010 CFR
2010-10-01
... Contract Summary of Significant Performance Observation. 1553.216-70 Section 1553.216-70 Federal... 1553.216-70 EPA Form 1900-41A, CPAF Contract Summary of Significant Performance Observation. As prescribed in 1516.404-278, EPA Form 1900-41A shall be used to document significant performance...
Code of Federal Regulations, 2011 CFR
2011-10-01
... Contract Summary of Significant Performance Observation. 1553.216-70 Section 1553.216-70 Federal... 1553.216-70 EPA Form 1900-41A, CPAF Contract Summary of Significant Performance Observation. As prescribed in 1516.404-278, EPA Form 1900-41A shall be used to document significant performance...
Information theoretic bounds of ATR algorithm performance for sidescan sonar target classification
NASA Astrophysics Data System (ADS)
Myers, Vincent L.; Pinto, Marc A.
2005-05-01
With research on autonomous underwater vehicles for minehunting beginning to focus on cooperative and adaptive behaviours, some effort is being spent on developing automatic target recognition (ATR) algorithms that are able to operate with high reliability under a wide range of scenarios, particularly in areas of high clutter density, and without human supervision. Because of the great diversity of pattern recognition methods and continuously improving sensor technology, there is an acute requirement for objective performance measures that are independent of any particular sensor, algorithm or target definitions. This paper approaches the ATR problem from the point of view of information theory in an attempt to place bounds on the performance of target classification algorithms that are based on the acoustic shadow of proud targets. Performance is bounded by analysing the simplest of shape classification tasks, that of differentiating between a circular and square shadow, thus allowing us to isolate system design criteria and assess their effect on the overall probability of classification. The information that can be used for target recognition in sidescan sonar imagery is examined and common information theory relationships are used to derive properties of the ATR problem. Some common bounds with analytical solutions are also derived.
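As a numerical companion to such bounds, one can Monte Carlo-estimate the probability of correctly classifying the two shadow shapes with a simple maximum-correlation rule in white Gaussian noise; the templates, noise model, and SNR definition below are assumptions for illustration, not the paper's analytic bounds.

```python
import numpy as np

def make_shadow(size=32, kind="circle"):
    """Binary shadow template on a size x size grid (invented shapes)."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    if kind == "circle":
        return (x**2 + y**2 <= (size / 3.0) ** 2).astype(float)
    return ((np.abs(x) <= size / 3.4) & (np.abs(y) <= size / 3.4)).astype(float)

def p_correct(snr_db, trials=2000, seed=0):
    """Monte Carlo probability of correct circle-vs-square classification
    with a maximum-correlation rule in white Gaussian noise."""
    rng = np.random.default_rng(seed)
    tmpl = {k: make_shadow(kind=k) for k in ("circle", "square")}
    sigma = np.sqrt(tmpl["circle"].sum()) / (10 ** (snr_db / 20.0))
    hits = 0
    for _ in range(trials):
        truth = rng.choice(["circle", "square"])
        img = tmpl[truth] + rng.normal(0.0, sigma, tmpl[truth].shape)
        score = {k: np.sum(img * (t - t.mean())) for k, t in tmpl.items()}
        hits += max(score, key=score.get) == truth
    return hits / trials

for snr in (-10, -5, 0, 5):
    print(f"SNR {snr:+3d} dB: P(correct) ~ {p_correct(snr):.3f}")
```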
Orion Guidance and Control Ascent Abort Algorithm Design and Performance Results
NASA Technical Reports Server (NTRS)
Proud, Ryan W.; Bendle, John R.; Tedesco, Mark B.; Hart, Jeremy J.
2009-01-01
During the ascent flight phase of NASA's Constellation Program, the Ares launch vehicle propels the Orion crew vehicle to an agreed-to insertion target. If a failure occurs at any point in time during ascent, then a system must be in place to abort the mission and return the crew to a safe landing with a high probability of success. To achieve continuous abort coverage one of two sets of effectors is used. Either the Launch Abort System (LAS), consisting of the Attitude Control Motor (ACM) and the Abort Motor (AM), or the Service Module (SM), consisting of the SM Orion Main Engine (OME), Auxiliary (Aux) jets, and Reaction Control System (RCS) jets, is used. The LAS effectors are used for aborts from liftoff through the first 30 seconds of second-stage flight. The SM effectors are used from that point through Main Engine Cutoff (MECO). There are two distinct sets of Guidance and Control (G&C) algorithms that are designed to maximize the performance of these abort effectors. This paper will outline the necessary inputs to the G&C subsystem, the preliminary design of the G&C algorithms, the ability of the algorithms to predict which abort modes are achievable, and the resulting success of the abort system. Abort success will be measured against the Preliminary Design Review (PDR) abort performance metrics, and overall performance will be reported. Finally, potential improvements to the G&C design will be discussed.
NASA Astrophysics Data System (ADS)
Rajalakshmi, N.; Padma Subramanian, D.; Thamizhavel, K.
2015-03-01
The extent of real power loss and voltage deviation associated with overloaded feeders in a radial distribution system can be reduced by reconfiguration. Reconfiguration is normally achieved by changing the open/closed state of tie/sectionalizing switches. Finding the optimal switch combination is a complicated problem, as there are many switching combinations possible in a distribution system. Hence optimization techniques are finding greater importance in reducing the complexity of the reconfiguration problem. This paper presents the application of the firefly algorithm (FA) for optimal reconfiguration of a radial distribution system with distributed generators (DGs). The algorithm is tested on the IEEE 33-bus system installed with DGs, and the results are compared with a binary genetic algorithm. It is found that the binary FA is more effective than the binary genetic algorithm in achieving real power loss reduction and improving the voltage profile, and hence in enhancing the performance of the radial distribution system. Results are found to be optimum when DGs are added to the test system, which proves the impact of DGs on the distribution system.
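A generic sketch of a binary firefly search of the kind used for switch-state problems: real-valued firefly positions are squashed by a sigmoid into open/closed probabilities. The toy objective below simply counts mismatches against a target switch state and stands in for the paper's loss-flow evaluation; all parameters are assumptions.

```python
import numpy as np

def binary_firefly(f, n_bits, flies=20, iters=100, gamma=1.0,
                   beta0=1.0, alpha=0.3, seed=0):
    """Binary firefly search: dimmer flies move toward brighter (lower-
    objective) ones; latent positions map to bits through a sigmoid."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, (flies, n_bits))            # latent positions
    bits = lambda z: 1.0 / (1.0 + np.exp(-z)) > rng.random(z.shape)
    for _ in range(iters):
        val = np.array([f(b) for b in bits(x)])
        for i in range(flies):
            for j in range(flies):
                if val[j] < val[i]:                      # j is brighter
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness
                    x[i] += beta * (x[j] - x[i]) + alpha * rng.normal(0, 1, n_bits)
    b = bits(x)
    val = np.array([f(v) for v in b])
    return b[np.argmin(val)], val.min()

# Toy objective: "losses" are minimized when switches match a target state.
target = np.array([1, 0, 1, 1, 0], dtype=bool)
best, loss = binary_firefly(lambda s: np.sum(s ^ target), n_bits=5)
print(best.astype(int), loss)
```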
Performance of a rain retrieval algorithm using TRMM data in the Eastern Mediterranean
NASA Astrophysics Data System (ADS)
Katsanos, D.; Viltard, N.; Lagouvardos, K.; Kotroni, V.
2006-05-01
This study aims to make a regional characterization of the performance of the rain retrieval algorithm BRAIN. This algorithm estimates the rain rate from brightness temperatures measured by the TRMM Microwave Imager (TMI) onboard the TRMM satellite. In this stage of the study, a comparison between the rain estimated from the Precipitation Radar (PR) onboard TRMM (2A25 version 5) and the rain retrieved by the BRAIN algorithm is presented, for about 30 satellite overpasses over the Central and Eastern Mediterranean during the period October 2003-March 2004, in order to assess the behavior of the algorithm in the Eastern Mediterranean region. BRAIN was built and tested using PR rain estimates distributed randomly over the whole TRMM sampling region. Characterization of the differences between PR and BRAIN over a specific region is thus interesting because it might show some local trend for one or the other of the instruments. Checking the BRAIN results against the PR rain estimate appears to be consistent with former results, i.e., a somewhat marked discrepancy for the highest rain rates. This difference arises from a known problem that affects rain retrievals based on passive microwave radiometer measurements, but some of the higher radar rain rates could also be questioned. As an independent test, a good correlation between the rain retrieved by BRAIN and lightning data (obtained by the UK Met Office long-range detection system) is also emphasized in the paper.
Palmer, M.P.; Abreu, E.L.; Mastrangelo, A.; Murray, M.M.
2009-01-01
Collagen-platelet composites have recently been successfully used as scaffolds to stimulate anterior cruciate ligament (ACL) wound healing in large animal models. These materials are typically kept on ice until use to prevent premature gelation; however, with surgical use, placement of a cold solution then requires up to an hour while the solution comes to body temperature (at which point gelation occurs). Bringing the solution to a higher temperature before injection would likely decrease this intra-operative wait; however, the effects of this on composite performance are not known. The hypothesis tested here was that increasing the temperature of the gel at the time of injection would significantly decrease the time to gelation, but would not significantly alter the mechanical properties of the composite or its ability to support functional tissue repair. Primary outcome measures included the maximum elastic modulus (stiffness) of the composite in vitro and the in vivo yield load of an ACL transection treated with an injected collagen-platelet composite. In vitro findings were that injection temperatures over 30°C resulted in a faster visco-elastic transition; however, the warmed composites had a 50% decrease in their maximum elastic modulus. In vivo studies found that warming the gels prior to injection also resulted in a decrease in the yield load of the healing ACL at 14 weeks. These studies suggest that increasing injection temperature of collagen-platelet composites results in a decrease in performance of the composite in vitro and in the strength of the healing ligament in vivo and this technique should be used only with great caution. PMID:19030174
Performance comparison of multi-label learning algorithms on clinical data for chronic diseases.
Zufferey, Damien; Hofer, Thomas; Hennebert, Jean; Schumacher, Michael; Ingold, Rolf; Bromuri, Stefano
2015-10-01
We are motivated by the issue of classifying diseases of chronically ill patients to assist physicians in their everyday work. Our goal is to provide a performance comparison of state-of-the-art multi-label learning algorithms for the analysis of multivariate sequential clinical data from medical records of patients affected by chronic diseases. The multi-label learning approach is indeed a good candidate for modeling the overlapping medical conditions specific to chronically ill patients, and the availability of such a comparison study should enhance the evaluation of new algorithms. Regarding the method, we chose a summary-statistics approach for processing the sequential clinical data, so that the extracted features maintain an interpretable link to their corresponding medical records. The publicly available MIMIC-II dataset, which contains more than 19,000 patients with chronic diseases, is used in this study. For the comparison we selected the following multi-label algorithms: ML-kNN, AdaBoostMH, binary relevance, classifier chains, HOMER and RAkEL. Regarding the results, binary relevance approaches, despite their elementary design and their independence assumption concerning the chronic illnesses, perform optimally in most scenarios, in particular for the detection of relevant diseases. In addition, binary relevance approaches scale up to large datasets and are easy to learn. However, the RAkEL algorithm, despite its scalability problems when confronted with large datasets, performs well in the scenario consisting of ranking the labels according to the dominant disease of the patient. PMID:26275389
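Two of the compared families are easy to sketch with scikit-learn: binary relevance (one independent classifier per label) and a classifier chain (each label's model also sees the previous labels' predictions). The synthetic dataset below is a stand-in for the per-patient summary-statistics features; nothing here reproduces the study's MIMIC-II setup.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.multioutput import ClassifierChain

# Synthetic multilabel data standing in for clinical summary statistics.
X, Y = make_multilabel_classification(n_samples=600, n_features=20,
                                      n_classes=5, random_state=0)
Xtr, Xte, Ytr, Yte = train_test_split(X, Y, random_state=0)

# Binary relevance: one independent classifier per label.
br = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(Xtr, Ytr)
# Classifier chain: each label's model also sees earlier predictions.
cc = ClassifierChain(LogisticRegression(max_iter=1000),
                     random_state=0).fit(Xtr, Ytr)

for name, model in (("binary relevance", br), ("classifier chain", cc)):
    score = f1_score(Yte, model.predict(Xte), average="micro")
    print(f"{name}: micro-F1 = {score:.3f}")
```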
NASA Astrophysics Data System (ADS)
Ritter, Axel; Muñoz-Carpena, Rafael
2013-02-01
Success in the use of computer models for simulating environmental variables and processes requires objective model calibration and verification procedures. Several methods for quantifying the goodness-of-fit of observations against model-calculated values have been proposed, but none of them is free of limitations, and they are often ambiguous. When a single indicator is used it may lead to incorrect verification of the model. Instead, a combination of graphical results, absolute value error statistics (i.e., root mean square error), and normalized goodness-of-fit statistics (i.e., the Nash-Sutcliffe Efficiency coefficient, NSE) is currently recommended. Interpretation of NSE values is often subjective, and may be biased by the magnitude and number of data points, data outliers and repeated data. The statistical significance of the performance statistics is an aspect generally ignored that helps in reducing subjectivity in the proper interpretation of model performance. In this work, approximated probability distributions for two common indicators (NSE and root mean square error) are derived with bootstrapping (block bootstrapping when dealing with time series), followed by bias-corrected and accelerated calculation of confidence intervals. Hypothesis testing of the indicators exceeding threshold values is proposed in a unified framework for statistically accepting or rejecting the model performance. It is illustrated how model performance is not linearly related with NSE, which is critical for its proper interpretation. Additionally, the sensitivity of the indicators to model bias, outliers and repeated data is evaluated. The potential of the difference between root mean square error and mean absolute error for detecting outliers is explored, showing that this may be considered a necessary but not a sufficient condition of outlier presence. The usefulness of the approach for the evaluation of model performance is illustrated with case studies including those with
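For observations O_i and simulations S_i, NSE = 1 - sum_i (O_i - S_i)^2 / sum_i (O_i - mean(O))^2. The sketch below computes NSE and a moving-block-bootstrap confidence interval on synthetic data; the plain percentile interval is a simplification of the bias-corrected and accelerated (BCa) intervals used in the paper, and the block length and series are assumptions.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def block_bootstrap_ci(obs, sim, block=10, n_boot=2000, alpha=0.05, seed=0):
    """Percentile CI for NSE via a moving-block bootstrap, which preserves
    short-range autocorrelation in time series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rng = np.random.default_rng(seed)
    starts = np.arange(len(obs) - block + 1)
    stats = []
    for _ in range(n_boot):
        idx = np.concatenate([np.arange(s, s + block)
                              for s in rng.choice(starts, len(obs) // block)])
        stats.append(nse(obs[idx], sim[idx]))
    return tuple(np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)]))

rng = np.random.default_rng(1)
obs = np.sin(np.linspace(0, 8 * np.pi, 400)) + rng.normal(0, 0.2, 400)
sim = np.sin(np.linspace(0, 8 * np.pi, 400))
lo, hi = block_bootstrap_ci(obs, sim)
print(f"NSE = {nse(obs, sim):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```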
Nanoporosity Significantly Enhances the Biological Performance of Engineered Glass Tissue Scaffolds
Wang, Shaojie; Kowal, Tia J.; Marei, Mona K.
2013-01-01
Nanoporosity is known to impact the performance of implants and scaffolds such as bioactive glass (BG) scaffolds, either by providing a higher concentration of bioactive chemical species from enhanced surface area, or due to inherent nanoscale topology, or both. To delineate the role of these two characteristics, BG scaffolds have been fabricated with nearly identical surface area (81 and 83±2 m2/g) but significantly different pore size (av. 3.7 and 17.7 nm) by varying both the sintering temperature and the ammonia concentration during the solvent exchange phase of the sol-gel fabrication process. In vitro tests performed with MC3T3-E1 preosteoblast cells on such scaffolds show that initial cell attachment is increased on samples with the smaller nanopore size, providing the first direct evidence of the influence of nanopore topography on cell response to a bioactive structure. Furthermore, in vivo animal tests in New Zealand rabbits (subcutaneous implantation) indicate that nanopores promote colonization and cell penetration into these scaffolds, further demonstrating the favorable effects of nanopores in tissue-engineering-relevant BG scaffolds. PMID:23427819
He, Ting; Zu, Lianhai; Zhang, Yan; Mao, Chengliang; Xu, Xiaoxiang; Yang, Jinhu; Yang, Shihe
2016-08-23
Semiconductor nanowires that have been extensively studied are typically in a crystalline phase. Much less studied are amorphous semiconductor nanowires due to the difficulty for their synthesis, despite a set of characteristics desirable for photoelectric devices, such as higher surface area, higher surface activity, and higher light harvesting. In this work of combined experiment and computation, taking Zn2GeO4 (ZGO) as an example, we propose a site-specific heteroatom substitution strategy through a solution-phase ions-alternative-deposition route to prepare amorphous/crystalline Si-incorporated ZGO nanowires with tunable band structures. The substitution of Si atoms for the Zn or Ge atoms distorts the bonding network to a different extent, leading to the formation of amorphous Zn1.7Si0.3GeO4 (ZSGO) or crystalline Zn2(GeO4)0.88(SiO4)0.12 (ZGSO) nanowires, respectively, with different bandgaps. The amorphous ZSGO nanowire arrays exhibit significantly enhanced performance in photoelectrochemical water splitting, such as higher and more stable photocurrent, and faster photoresponse and recovery, relative to crystalline ZGSO and ZGO nanowires in this work, as well as ZGO photocatalysts reported previously. The remarkable performance highlights the advantages of the ZSGO amorphous nanowires for photoelectric devices, such as higher light harvesting capability, faster charge separation, lower charge recombination, and higher surface catalytic activity. PMID:27494205
Performance of MODIS Thermal Emissive Bands On-orbit Calibration Algorithms
NASA Technical Reports Server (NTRS)
Xiong, Xiaoxiong; Chang, T.
2009-01-01
The on-board blackbody (BB) serves as the thermal calibration source, and the space view (SV) provides measurements of the sensor's background and offsets. The MODIS on-board BB is a v-grooved plate with its temperature measured using 12 platinum resistive thermistors (PRTs) uniformly embedded in the BB substrate. All the BB thermistors were characterized pre-launch with reference to NIST temperature standards. Unlike typical BB operations in many heritage sensors, which have no temperature control capability, the MODIS on-board BB can be operated at any temperature between instrument ambient (about 270 K) and 315 K and can also be varied continuously within this range. This feature has significantly enhanced the capability of MODIS to track and update the TEB nonlinear calibration coefficients over its entire mission. Following a brief description of MODIS TEB on-orbit calibration methodologies and its on-board BB operational activities, this paper provides a comprehensive performance assessment of the MODIS TEB quadratic calibration algorithm. It examines the scan-by-scan, orbit-by-orbit, daily, and seasonal variations of detector responses and the associated impact due to changes in the CFPA and instrument temperatures. Specifically, this paper will analyze the contribution of each individual thermal emissive source term (BB, scan cavity, and scan mirror) and the impact on Level 1B data product quality due to pre-launch and on-orbit calibration uncertainties. A comparison of Terra and Aqua TEB on-orbit performance, lessons learned, and suggestions for future improvements will also be made.
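A schematic of the quadratic-calibration idea, assuming the offset and nonlinear terms are fixed from prior characterization and only the linear gain is refreshed from the blackbody view; the coefficients, counts, and units are invented, and the real algorithm includes further emissive source terms (scan cavity and scan mirror).

```python
a0, a2 = 0.5, 1.0e-7        # offset and nonlinear terms (invented values)

def update_gain(L_bb, dn_bb):
    """Solve L_bb = a0 + b1*dn_bb + a2*dn_bb**2 for the linear gain b1,
    using space-view-subtracted blackbody counts dn_bb."""
    return (L_bb - a0 - a2 * dn_bb**2) / dn_bb

def radiance(dn_ev, b1):
    """Apply the quadratic response to Earth-view counts."""
    return a0 + b1 * dn_ev + a2 * dn_ev**2

b1 = update_gain(L_bb=95.0, dn_bb=3200.0)   # gain refreshed from the BB view
print(f"b1 = {b1:.5f}")
print(f"L(dn=2500) = {radiance(2500.0, b1):.2f} (arbitrary radiance units)")
```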
Zeng, Zhiping; Yu, Dingshan; He, Ziming; Liu, Jing; Xiao, Fang-Xing; Zhang, Yan; Wang, Rong; Bhattacharyya, Dibakar; Tan, Timothy Thatt Yang
2016-01-01
Covalent bonding of graphene oxide quantum dots (GOQDs) onto amino-modified polyvinylidene fluoride (PVDF) membrane has generated a new type of nano-carbon functionalized membrane with significantly enhanced antibacterial and antibiofouling properties. A continuous filtration test using E. coli containing feedwater shows that the relative flux drop over GOQD-modified PVDF is 23%, which is significantly lower than those over pristine PVDF (86%) and GO-sheet modified PVDF (62%) after 10 h of filtration. The presence of the GOQD coating layer effectively inactivates E. coli and S. aureus cells and prevents biofilm formation on the membrane surface, producing excellent antimicrobial activity and potential antibiofouling capability, superior to those of previously reported two-dimensional GO-sheet and one-dimensional CNT modified membranes. The distinctive antimicrobial and antibiofouling performance could be attributed to the unique structure and uniform dispersion of GOQDs, enabling the exposure of a larger fraction of active edges and facilitating the formation of oxidative stress. Furthermore, the GOQD-modified membrane possesses satisfactory long-term stability and durability due to the strong covalent interaction between PVDF and GOQDs. This study opens up a new synthetic avenue in the fabrication of efficient surface-functionalized polymer membranes for potential wastewater treatment and biomolecule separation. PMID:26832603
NASA Astrophysics Data System (ADS)
Zeng, Zhiping; Yu, Dingshan; He, Ziming; Liu, Jing; Xiao, Fang-Xing; Zhang, Yan; Wang, Rong; Bhattacharyya, Dibakar; Tan, Timothy Thatt Yang
2016-02-01
Covalent bonding of graphene oxide quantum dots (GOQDs) onto amino-modified polyvinylidene fluoride (PVDF) membrane has generated a new type of nano-carbon functionalized membrane with significantly enhanced antibacterial and antibiofouling properties. A continuous filtration test using E. coli containing feedwater shows that the relative flux drop over GOQD-modified PVDF is 23%, which is significantly lower than those over pristine PVDF (86%) and GO-sheet modified PVDF (62%) after 10 h of filtration. The presence of the GOQD coating layer effectively inactivates E. coli and S. aureus cells and prevents biofilm formation on the membrane surface, producing excellent antimicrobial activity and potential antibiofouling capability, superior to those of previously reported two-dimensional GO-sheet and one-dimensional CNT modified membranes. The distinctive antimicrobial and antibiofouling performance could be attributed to the unique structure and uniform dispersion of GOQDs, enabling the exposure of a larger fraction of active edges and facilitating the formation of oxidative stress. Furthermore, the GOQD-modified membrane possesses satisfactory long-term stability and durability due to the strong covalent interaction between PVDF and GOQDs. This study opens up a new synthetic avenue in the fabrication of efficient surface-functionalized polymer membranes for potential wastewater treatment and biomolecule separation.
Field Significance of Performance Measures in the Context of Regional Climate Model Verification
NASA Astrophysics Data System (ADS)
Ivanov, Martin; Warrach-Sagi, Kirsten; Wulfmeyer, Volker
2015-04-01
The purpose of this study is to rigorously evaluate the skill of dynamically downscaled global climate simulations. We investigate a dynamical downscaling of the ERA-Interim reanalysis using the Weather Research and Forecasting (WRF) model, coupled with the NOAH land surface model, within the scope of EURO-CORDEX. WRF has a horizontal resolution of 0.11° and contains the following physics: the Yonsei University atmospheric boundary layer parameterization, the Morrison two-moment microphysics, the Kain-Fritsch-Eta convection and the Community Atmosphere Model radiation schemes. Daily precipitation is verified over Germany for summer and winter against high-resolution observation data from the German Weather Service for the first time. The ability of WRF to reproduce the statistical distribution of daily precipitation is evaluated using metrics based on distribution characteristics. Skill against the large-scale ERA-Interim data gives insight into the potential additional skill of dynamical downscaling. To quantify it, we transform the absolute performance measures into relative skill measures against ERA-Interim. Their field significance is rigorously estimated and locally significant regions are highlighted. Statistical distributions are better reproduced in summer than in winter. In both seasons WRF is too dry over mountain tops, because heavy precipitation is underestimated and too rare while light precipitation is underestimated yet too frequent. In winter WRF is too wet at windward sides and land-sea transition regions due to too frequent weak and moderate precipitation events. In summer it is too dry over land-sea transition regions, due to underestimated light and too rare moderate precipitation, and too wet in some river valleys, due to too frequent heavy precipitation. Additional skill relative to ERA-Interim is documented for overall measures as well as measures regarding the spread and tails of the statistical distribution, but not regarding mean seasonal precipitation. The added
NASA Astrophysics Data System (ADS)
Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing
2008-02-01
Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with its own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented reality-based applications on phones with built-in cameras. In this paper, we compare the performance of three feature- or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness and restrictions of each of the above methods. Our performance tests are conducted over two stages: the first stage of testing uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare accuracy in tracking, while the second stage analyzes the robustness of the algorithms by testing for manipulative factors like image scaling and frame skipping.
Assessment of next-best-view algorithms performance with various 3D scanners and manipulator
NASA Astrophysics Data System (ADS)
Karaszewski, M.; Adamczyk, M.; Sitnik, R.
2016-09-01
The problem of calculating three dimensional (3D) sensor position (and orientation) during the digitization of real-world objects (called next best view planning or NBV) has been an active topic of research for over 20 years. While many solutions have been developed, it is hard to compare their quality based only on the exemplary results presented in papers. We implemented 13 of the most popular NBV algorithms and evaluated their performance by digitizing five objects of various properties, using three measurement heads with different working volumes mounted on a 6-axis robot with a rotating table for placing objects. The results obtained for the 13 algorithms were then compared based on four criteria: the number of directional measurements, digitization time, total positioning distance, and surface coverage required to digitize test objects with available measurement heads.
Cottrell, R.Les; Logg, Connie; Chhaparia, Mahesh; Grigoriev, Maxim; Haro, Felipe; Nazir, Fawad; Sandford, Mark
2006-01-25
End-to-end fault and performance problem detection in wide area production networks is becoming increasingly hard as the complexity of the paths, the diversity of the performance, and the dependency on the network increase. Several monitoring infrastructures have been built to monitor different network metrics and collect monitoring information from thousands of hosts around the globe. Typically there are hundreds to thousands of time-series plots of network metrics which need to be looked at to identify network performance problems or anomalous variations in the traffic. Furthermore, most commercial products rely on a comparison with user-configured static thresholds and often require access to SNMP-MIB information, to which a typical end-user does not usually have access. In our paper we propose new techniques to detect network performance problems proactively in close to real time, and we do not rely on static thresholds and SNMP-MIB information. We describe and compare the use of several different algorithms that we have implemented to detect persistent network problems using anomalous variation analysis in real end-to-end Internet performance measurements. We also provide methods and/or guidance for how to set the user-settable parameters. The measurements are based on active probes running on 40 production network paths with bottlenecks varying from 0.5 Mbits/s to 1000 Mbits/s. For well-behaved data (no missed measurements and no very large outliers) with small seasonal changes, most algorithms identify similar events. We compare the algorithms' robustness with respect to false positives and missed events, especially when there are large seasonal effects in the data. Our proposed techniques cover a wide variety of network paths and traffic patterns. We also discuss the applicability of the algorithms in terms of their intuitiveness, their speed of execution as implemented, and areas of applicability. Our encouraging results compare and evaluate the accuracy of our detection
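One simple way to avoid static thresholds, in the spirit of the adaptive techniques described above (though not one of the paper's algorithms), is to flag deviations from a rolling median measured in robust standard deviations; the window length and multiplier below are the user-settable parameters, and the RTT-like series is synthetic.

```python
import numpy as np

def mad_anomalies(series, window=48, k=5.0):
    """Flag samples whose deviation from a rolling median exceeds k robust
    standard deviations (median absolute deviation scaled by 1.4826).
    The threshold adapts to the data, so no static metric threshold."""
    s = np.asarray(series, float)
    flags = np.zeros(s.size, dtype=bool)
    for i in range(window, s.size):
        ref = s[i - window:i]
        med = np.median(ref)
        mad = 1.4826 * np.median(np.abs(ref - med)) + 1e-9
        flags[i] = abs(s[i] - med) > k * mad
    return flags

# Toy RTT-like series with a persistent step change at sample 300.
rng = np.random.default_rng(0)
rtt = np.concatenate([rng.normal(50, 2, 300), rng.normal(80, 2, 100)])
print(np.flatnonzero(mad_anomalies(rtt))[:5])   # first flagged samples
```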
NASA Technical Reports Server (NTRS)
Bohse, J. R.; Bewtra, M.; Barnes, W. L.
1979-01-01
The rationale and procedures used in the radiometric calibration and correction of Heat Capacity Mapping Mission (HCMM) data are presented. Instrument-level testing and calibration of the Heat Capacity Mapping Radiometer (HCMR) were performed by the sensor contractor ITT Aerospace/Optical Division. The principal results are included. From the instrumental characteristics and calibration data obtained during ITT acceptance tests, an algorithm for post-launch processing was developed. Integrated spacecraft-level sensor calibration was performed at Goddard Space Flight Center (GSFC) approximately two months before launch. This calibration provided an opportunity to validate the data calibration algorithm. Instrumental parameters and results of the validation are presented and the performances of the instrument and the data system after launch are examined with respect to the radiometric results. Anomalies and their consequences are discussed. Flight data indicates a loss in sensor sensitivity with time. The loss was shown to be recoverable by an outgassing procedure performed approximately 65 days after the infrared channel was turned on. It is planned to repeat this procedure periodically.
Francescato, Maria Pia; Stel, Giuliana; Stenner, Elisabetta; Geat, Mario
2015-01-01
Physical activity in patients with type 1 diabetes (T1DM) is hindered by the high risk of glycemic imbalances. A recently proposed algorithm (named Ecres) estimates the supplemental carbohydrates well enough for exercises lasting one hour, but its performance for prolonged exercise requires validation. Nine T1DM patients (5M/4F; 35–65 years; HbA1c 54±13 mmol·mol-1) performed, under free-life conditions, a 3-h walk at 30% heart rate reserve while insulin concentrations, whole-body carbohydrate oxidation rates (determined by indirect calorimetry) and supplemental carbohydrates (93% sucrose), together with glycemia, were measured every 30 min. Data were subsequently compared with the corresponding values estimated by the algorithm. No significant difference was found between the estimated insulin concentrations and the laboratory-measured values (p = NS). The carbohydrate oxidation rate decreased significantly with time (from 0.84±0.31 to 0.53±0.24 g·min-1; p<0.001) and was estimated well enough by the algorithm (p = NS). Estimated carbohydrate requirements were practically equal to the corresponding measured values (p = NS), the difference between the two quantities amounting to –1.0±6.1 g, independent of the elapsed exercise time (time effect, p = NS). Results confirm that Ecres provides a satisfactory estimate of the carbohydrates required to avoid glycemic imbalances during moderate intensity aerobic physical activity, opening the prospect of an intriguing method that could liberate patients from the fear of exercise-induced hypoglycemia. PMID:25918842
Relevant priors prefetching algorithm performance for a picture archiving and communication system.
Andriole, K P; Avrin, D E; Yin, L; Gould, R G; Luth, D M; Arenson, R L
2000-05-01
Proper prefetching of relevant prior examinations from a picture archiving and communication system (PACS) archive, when a patient is scheduled for a new imaging study, and sending the historic images to the display station where the new examination is expected to be routed and subsequently read out, can greatly facilitate interpretation and review, as well as enhance radiology departmental workflow and PACS performance. In practice, it has proven extremely difficult to implement an automatic prefetch as successful as the experienced fileroom clerk. An algorithm based on defined metagroup categories for examination type mnemonics has been designed and implemented as one possible solution to the prefetch problem. The metagroups such as gastrointestinal (GI) tract, abdomen, chest, etc, can represent, in a small number of categories, the several hundreds of examination types performed by a typical radiology department. These metagroups can be defined in a table of examination mnemonics that maps a particular mnemonic to a metagroup or groups, and vice versa. This table is used to effect the prefetch rules of relevance. A given examination may relate to several prefetch categories, and preferences are easily configurable for a particular site. The prefetch algorithm metatable was implemented in database structured query language (SQL) using a many-to-many fetch category strategy. Algorithm performance was measured by analyzing the appropriateness of the priors fetched based on the examination type of the current study. Fetched relevant priors, missed relevant priors, fetched priors that were not relevant to the current examination, and priors not fetched that were not relevant were used to calculate sensitivity and specificity for the prefetch method. The time required for real-time requesting of priors not previously prefetched was also measured. The sensitivity of the prefetch algorithm was determined to be 98.3% and the specificity 100%. Time required for on
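As a rough illustration of the many-to-many metagroup idea (the paper implements it as an SQL metatable; plain Python is used here, and the mnemonics and group names are invented for the example):

```python
# hypothetical examination-type mnemonics mapped to metagroups
MNEMONIC_TO_GROUPS = {
    "CXR2": {"chest"},
    "CTCHEST": {"chest"},
    "KUB": {"abdomen", "gi"},
    "UGI": {"gi"},
}

def relevant_priors(current_mnemonic, prior_mnemonics):
    """Return prior studies whose metagroups intersect those of the
    newly scheduled examination (many-to-many relevance rule)."""
    groups = MNEMONIC_TO_GROUPS.get(current_mnemonic, set())
    return [m for m in prior_mnemonics
            if MNEMONIC_TO_GROUPS.get(m, set()) & groups]

print(relevant_priors("KUB", ["CXR2", "UGI", "KUB"]))  # ['UGI', 'KUB']
```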
No Significant Effect of Prefrontal tDCS on Working Memory Performance in Older Adults
Nilsson, Jonna; Lebedev, Alexander V.; Lövdén, Martin
2015-01-01
Transcranial direct current stimulation (tDCS) has been put forward as a non-pharmacological alternative for alleviating cognitive decline in old age. Although results have shown some promise, little is known about the optimal stimulation parameters for modulation in the cognitive domain. In this study, the effects of tDCS over the dorsolateral prefrontal cortex (dlPFC) on working memory performance were investigated in thirty older adults. An N-back task assessed working memory before, during and after anodal tDCS at a current strength of 1 mA and 2 mA, in addition to sham stimulation. The study used a single-blind, cross-over design. The results revealed no significant effect of tDCS on accuracy or response times during or after stimulation, for any of the current strengths. These results suggest that a single session of tDCS over the dlPFC is unlikely to improve working memory, as assessed by an N-back task, in old age. PMID:26696882
Hernandez, Wilmar
2005-01-01
In this paper, a sensor to measure the rollover angle of a car under performance tests is presented. Basically, the sensor consists of a dual-axis accelerometer, analog-electronic instrumentation stages, a data acquisition system and an adaptive filter based on a recursive least-squares (RLS) lattice algorithm. In short, the adaptive filter is used to improve the performance of the rollover sensor by carrying out an optimal prediction of the relevant signal coming from the sensor, which is buried in a broad-band noise background where we have little knowledge of the noise characteristics. The experimental results are satisfactory and show a significant improvement in the signal-to-noise ratio at the system output.
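The paper uses an RLS lattice filter; the transversal RLS sketch below illustrates the same least-squares prediction idea in its simplest form. The filter order, forgetting factor and synthetic signal are assumptions for the example.

```python
import numpy as np

def rls_predict(x, order=8, lam=0.99, delta=100.0):
    """One-step-ahead prediction of a noisy signal with transversal
    RLS; lam is the forgetting factor, delta initializes the inverse
    correlation matrix P."""
    w = np.zeros(order)
    P = np.eye(order) * delta
    y = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]          # regressor of past samples
        k = P @ u / (lam + u @ P @ u)     # gain vector
        y[n] = w @ u                      # a-priori prediction
        e = x[n] - y[n]
        w += k * e                        # coefficient update
        P = (P - np.outer(k, u @ P)) / lam
    return y

t = np.linspace(0.0, 1.0, 2000)
clean = np.sin(2 * np.pi * 3 * t)         # slow rollover-like signal
noisy = clean + np.random.default_rng(1).normal(0, 0.5, t.size)
estimate = rls_predict(noisy)             # tracks `clean` once converged
```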
Boosting runtime-performance of photon pencil beam algorithms for radiotherapy treatment planning.
Siggel, M; Ziegenhein, P; Nill, S; Oelfke, U
2012-10-01
Pencil beam algorithms are still considered standard photon dose calculation methods in radiotherapy treatment planning for many clinical applications. Despite their established role in radiotherapy planning, their performance and clinical applicability have to be continuously adapted to evolving complex treatment techniques such as adaptive radiation therapy (ART). We report on a new, highly efficient version of a well-established pencil beam convolution algorithm which relies purely on measured input data. A method was developed that improves raytracing efficiency by exploiting the capability of modern CPU architecture for a runtime reduction. Since most current desktop computers provide more than one calculation unit, we used symmetric multiprocessing extensively to parallelize the workload and thus decrease the algorithmic runtime. To maximize the advantage of code parallelization, we present two implementation strategies: one for the dose calculation in inverse planning software, and one for traditional forward planning. As a result, we achieved a superlinear speedup factor of approximately 18 for calculating the dose distribution of typical forward IMRT treatment plans on a 16-core personal computer with AMD processors. PMID:22071169
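The abstract describes the strategy (independent per-beamlet ray traces distributed across cores) rather than the kernel itself, so the following is only a toy illustration of that parallelization pattern using Python's multiprocessing; the exponential "kernel" is a stand-in, not the paper's measured-data convolution.

```python
import numpy as np
from multiprocessing import Pool

def beamlet_dose(args):
    """Toy depth-dose for one pencil beam; a stand-in for the
    measured-data convolution kernel."""
    weight, mu, depths = args
    return weight * np.exp(-mu * depths)

def forward_dose(weights, depths, workers=4):
    """Each beamlet ray trace is independent, so the workload is
    embarrassingly parallel across CPU cores."""
    tasks = [(w, 0.005, depths) for w in weights]
    with Pool(workers) as pool:
        per_beamlet = pool.map(beamlet_dose, tasks)
    return np.sum(per_beamlet, axis=0)

if __name__ == "__main__":
    dose = forward_dose(np.ones(64), np.arange(0.0, 300.0, 1.0))
```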
Performance evaluation of a routing algorithm based on Hopfield Neural Network for network-on-chip
NASA Astrophysics Data System (ADS)
Esmaelpoor, Jamal; Ghafouri, Abdollah
2015-12-01
Network on chip (NoC) has emerged as a solution to overcome the growing complexity and design challenges of systems on chip. A proper routing algorithm is a key issue of an NoC design. An appropriate routing method balances load across the network channels and keeps path length as short as possible. This survey investigates the performance of a routing algorithm based on a Hopfield Neural Network. It uses dynamic programming to provide optimal paths and network monitoring in real time. The aim of this article is to analyse the possibility of using a neural network as a router. The algorithm selects the path with the lowest delay (cost) from source to destination. In other words, the path a message takes from source to destination depends on the network traffic situation at the time, and it is the fastest one. The simulation results show that the proposed approach improves average delay, throughput and network congestion efficiently. At the same time, the increase in power consumption is almost negligible.
The royal road for genetic algorithms: Fitness landscapes and GA performance
Mitchell, M.; Holland, J.H.; Forrest, S.
1991-01-01
Genetic algorithms (GAs) play a major role in many artificial-life systems, but there is often little detailed understanding of why the GA performs as it does, and little theoretical basis on which to characterize the types of fitness landscapes that lead to successful GA performance. In this paper we propose a strategy for addressing these issues. Our strategy consists of defining a set of features of fitness landscapes that are particularly relevant to the GA, and experimentally studying how various configurations of these features affect the GA's performance along a number of dimensions. In this paper we informally describe an initial set of proposed feature classes, describe in detail one such class ("Royal Road" functions), and present some initial experimental results concerning the role of crossover and "building blocks" on landscapes constructed from features of this class. 27 refs., 1 fig., 5 tabs.
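For concreteness, here is a minimal sketch of the R1 Royal Road function from this line of work, with a random-search stand-in where the GA would go; the block size and string length follow the common 8x8 setup.

```python
import random

def royal_road_r1(bits, block=8):
    """Mitchell-Forrest-Holland R1: each fully set, contiguous block
    of `block` ones contributes `block` to the fitness."""
    return sum(block for i in range(0, len(bits), block)
               if all(bits[i:i + block]))

# random-search stand-in where the GA would go (64-bit string, 8x8 blocks)
random.seed(0)
best = max((tuple(random.randint(0, 1) for _ in range(64))
            for _ in range(1000)), key=royal_road_r1)
print(royal_road_r1(best))
```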
K-Means Re-Clustering-Algorithmic Options with Quantifiable Performance Comparisons
Meyer, A W; Paglieroni, D; Asteneh, C
2002-12-17
This paper presents various architectural options for implementing a K-Means Re-Clustering algorithm suitable for unsupervised segmentation of hyperspectral images. Performance metrics are developed based upon quantitative comparisons of convergence rates and segmentation quality. A methodology for making these comparisons is developed and used to establish K values that produce the best segmentations with minimal processing requirements. Convergence rates depend on the initial choice of cluster centers. Consequently, this same methodology may be used to evaluate the effectiveness of different initialization techniques.
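As a baseline reference for the convergence-rate metric discussed above, a plain K-Means loop is sketched below (the paper's re-clustering architectural variants are not reproduced); the data and value of k are illustrative.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain K-Means with an iteration counter, the quantity behind
    the convergence-rate comparisons described above."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for it in range(iters):
        # assign each point to its nearest center
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            return labels, centers, it        # converged after `it` updates
        centers = new
    return labels, centers, iters

X = np.random.default_rng(1).normal(size=(500, 5))   # stand-in for pixel spectra
labels, centers, n_iter = kmeans(X, k=4)
print(n_iter)
```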
2014-01-01
Background We have previously validated administrative data algorithms to identify patients with rheumatoid arthritis (RA) using rheumatology clinic records as the reference standard. Here we reassessed the accuracy of the algorithms using primary care records as the reference standard. Methods We performed a retrospective chart abstraction study using a random sample of 7500 adult patients under the care of 83 family physicians contributing to the Electronic Medical Record Administrative data Linked Database (EMRALD) in Ontario, Canada. Using physician-reported diagnoses as the reference standard, we computed and compared the sensitivity, specificity, and predictive values for over 100 administrative data algorithms for RA case ascertainment. Results We identified 69 patients with RA for a lifetime RA prevalence of 0.9%. All algorithms had excellent specificity (>97%). However, sensitivity varied (75-90%) among physician billing algorithms. Despite the low prevalence of RA, most algorithms had adequate positive predictive value (PPV; 51-83%). The algorithm of “[1 hospitalization RA diagnosis code] or [3 physician RA diagnosis codes with ≥1 by a specialist over 2 years]” had a sensitivity of 78% (95% CI 69–88), specificity of 100% (95% CI 100–100), PPV of 78% (95% CI 69–88) and NPV of 100% (95% CI 100–100). Conclusions Administrative data algorithms for detecting RA patients achieved a high degree of accuracy amongst the general population. However, results varied slightly from our previous report, which can be attributed to differences in the reference standards with respect to disease prevalence, spectrum of disease, and type of comparator group. PMID:24956925
NASA Technical Reports Server (NTRS)
Orme, John S.; Schkolnik, Gerard S.
1995-01-01
Performance Seeking Control (PSC), an onboard, adaptive, real-time optimization algorithm, relies upon an onboard propulsion system model. Flight results illustrated propulsion system performance improvements as calculated by the model. These improvements were subject to uncertainty arising from modeling error. Thus to quantify uncertainty in the PSC performance improvements, modeling accuracy must be assessed. A flight test approach to verify PSC-predicted increases in thrust (FNP) and absolute levels of fan stall margin is developed and applied to flight test data. Application of the excess thrust technique shows that increases of FNP agree to within 3 percent of full-scale measurements for most conditions. Accuracy to these levels is significant because uncertainty bands may now be applied to the performance improvements provided by PSC. Assessment of PSC fan stall margin modeling accuracy was completed with analysis of in-flight stall tests. Results indicate that the model overestimates the stall margin by between 5 to 10 percent. Because PSC achieves performance gains by using available stall margin, this overestimation may represent performance improvements to be recovered with increased modeling accuracy. Assessment of thrust and stall margin modeling accuracy provides a critical piece for a comprehensive understanding of PSC's capabilities and limitations.
NASA Astrophysics Data System (ADS)
Zittersteijn, Michiel; Schildknecht, Thomas; Vananti, Alessandro; Dolado Perez, Juan Carlos; Martinot, Vincent
2016-07-01
Currently several thousands of objects are being tracked in the MEO and GEO regions through optical means. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the correlation and orbit determination problems simultaneously, and is able to efficiently process large data sets with minimal manual intervention. This problem is also known as the Multiple Target Tracking (MTT) problem. The complexity of the MTT problem is defined by its dimension S. Current research tends to focus on the S = 2 MTT problem, because for S = 2 the problem is solvable in polynomial time. However, with S = 2 the decision to associate a set of observations is based on the minimum amount of information; in ambiguous situations (e.g., satellite clusters) this leads to incorrect associations. The S > 2 MTT problem is an NP-hard combinatorial optimization problem. In previous work an Elitist Genetic Algorithm (EGA) was proposed as a method to approximately solve this problem. It was shown that the EGA is able to find a good approximate solution with a polynomial time complexity. The EGA relies on solving the Lambert problem in order to perform the necessary orbit determinations. This means that the algorithm is restricted to orbits that are described by Keplerian motion. The work presented in this paper focuses on the impact that this restriction has on the algorithm performance.
Algorithms for thermal and mechanical contact in nuclear fuel performance analysis
Hales, J. D.; Andrs, D.; Gaston, D. R.
2013-07-01
The transfer of heat and force from UO2 pellets to the cladding is an essential element in typical nuclear fuel performance modeling. Traditionally, this has been accomplished in a one-dimensional fashion, with a slice of fuel interacting with a slice of cladding. In this manner, the location at which the transfer occurs is set a priori. While straightforward, this limits the applicability and accuracy of the model. We propose finite element algorithms for the transfer of heat and force where the location for the transfer is not predetermined. This enables analysis of individual fuel pellets with large sliding between the fuel and the cladding. The simplest of these approaches is a node-on-face constraint: heat and force are transferred from a node on the fuel to the cladding face opposite. Another option is a transfer based on quadrature point locations, which is applied here to the transfer of heat. The final algorithm outlined here is the so-called mortar method, with applicability to heat and force transfer. The mortar method promises to be a highly accurate approach which may be used for the transfer of other quantities and in other contexts, such as heat from cladding to a CFD mesh of the coolant. This paper reviews these approaches, discusses their strengths and weaknesses, and presents results from each on simplified nuclear fuel performance models. (authors)
NASA Technical Reports Server (NTRS)
Ramachandran, Ganesh K.; Akopian, David; Heckler, Gregory W.; Winternitz, Luke B.
2011-01-01
Location technologies have many applications in wireless communications, military and space missions, etc. The US Global Positioning System (GPS) and other existing and emerging Global Navigation Satellite Systems (GNSS) are expected to provide accurate location information to enable such applications. While GNSS systems perform very well in strong signal conditions, their operation in many urban, indoor, and space applications is not robust or even impossible due to weak signals and strong distortions. The search for less costly, faster and more sensitive receivers is still in progress. As the research community addresses more and more complicated phenomena, there is a demand for flexible multimode reference receivers, associated SDKs, and development platforms which may accelerate and facilitate the research. One such concept is the software GPS/GNSS receiver (GPS SDR), which permits facilitated access to algorithmic libraries and a possibility to integrate more advanced algorithms without hardware and essential software updates. The GNU-SDR and GPS-SDR open source receiver platforms are popular examples. This paper evaluates the performance of recently proposed block-correlator techniques for acquisition and tracking of GPS signals using the open source GPS-SDR platform.
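Block/FFT correlation for acquisition is a standard technique in software receivers; the sketch below shows the usual FFT-based parallel code-phase search, not necessarily the specific block-correlator formulation evaluated in the paper. The stand-in code sequence and sampling rate are assumptions.

```python
import numpy as np

def acquire(signal, code, fs, dopplers):
    """FFT-based parallel code-phase search: wipe off a trial Doppler,
    then circularly correlate against the replica code via FFTs."""
    n = len(signal)
    t = np.arange(n) / fs
    code_fft = np.conj(np.fft.fft(code))
    best = (0.0, None, None)                  # (peak power, Doppler, code phase)
    for fd in dopplers:
        wiped = signal * np.exp(-2j * np.pi * fd * t)
        corr = np.abs(np.fft.ifft(np.fft.fft(wiped) * code_fft))
        peak = int(corr.argmax())
        if corr[peak] > best[0]:
            best = (corr[peak], fd, peak)
    return best

rng = np.random.default_rng(2)
ca = rng.choice([-1.0, 1.0], size=1023)       # stand-in for a C/A code
rx = np.roll(ca, 200) + rng.normal(0, 1.0, 1023)
print(acquire(rx, ca, 1.023e6, np.arange(-5000, 5001, 500))[1:])
```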
Performance Comparison of Binary Search Tree and Framed ALOHA Algorithms for RFID Anti-Collision
NASA Astrophysics Data System (ADS)
Chen, Wen-Tzu
Binary search tree and framed ALOHA algorithms are commonly adopted to solve the anti-collision problem in RFID systems. In this letter, the read efficiency of these two anti-collision algorithms is compared through computer simulations. Simulation results indicate the framed ALOHA algorithm requires less total read time than the binary search tree algorithm. The initial frame length strongly affects the uplink throughput for the framed ALOHA algorithm.
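A minimal simulation of the framed ALOHA read process makes the frame-length effect easy to reproduce; all parameters are illustrative.

```python
import random

def successful_reads(n_tags, frame_len):
    """One framed-ALOHA frame: each tag picks a slot uniformly at
    random; a slot holding exactly one tag is a successful read."""
    slots = [0] * frame_len
    for _ in range(n_tags):
        slots[random.randrange(frame_len)] += 1
    return sum(1 for s in slots if s == 1)

def frames_to_read_all(n_tags, frame_len, seed=0):
    random.seed(seed)
    frames = 0
    while n_tags > 0:
        n_tags -= successful_reads(n_tags, frame_len)
        frames += 1
    return frames

# throughput is best when the frame length is close to the tag count
print(frames_to_read_all(100, 128))
```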
Competing Sudakov veto algorithms
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Verheyen, Rob
2016-07-01
We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
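A minimal sketch of the basic veto algorithm (without the paper's cutoff, second-variable and competition extensions): trial scales are drawn from an invertible overestimate g >= f and accepted with probability f/g.

```python
import random

def veto_sample(t_start, f, g, g_inverse_sudakov):
    """Draw the next emission scale t < t_start distributed according
    to f(t) * exp(-integral_t^t_start f), given an overestimate
    g >= f whose Sudakov factor can be inverted analytically."""
    t = t_start
    while True:
        t = g_inverse_sudakov(t, random.random())   # trial scale from g
        if random.random() < f(t) / g(t):           # accept with ratio f/g
            return t                                # else: veto, continue from t

# example: f(t) = 1/t with overestimate g(t) = 2/t;
# solving exp(-integral_t^t' g) = r for t gives t = t' * sqrt(r)
f = lambda t: 1.0 / t
g = lambda t: 2.0 / t
inv = lambda t_prev, r: t_prev * r ** 0.5
random.seed(3)
print(veto_sample(1.0, f, g, inv))
```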
Caraviello, D Z; Weigel, K A; Craven, M; Gianola, D; Cook, N B; Nordlund, K V; Fricke, P M; Wiltbank, M C
2006-12-01
The fertility of lactating dairy cows is economically important, but the mean reproductive performance of Holstein cows has declined during the past 3 decades. Traits such as first-service conception rate and pregnancy status at 150 d in milk (DIM) are influenced by numerous explanatory factors common to specific farms or individual cows on these farms. Machine learning algorithms offer great flexibility with regard to problems of multicollinearity, missing values, or complex interactions among variables. The objective of this study was to use machine learning algorithms to identify factors affecting the reproductive performance of lactating Holstein cows on large dairy farms. This study used data from farms in the Alta Genetics Advantage progeny-testing program. Production and reproductive records from 153 farms were obtained from on-farm DHI-Plus, Dairy Comp 305, or PCDART herd management software. A survey regarding management, facilities, labor, nutrition, reproduction, genetic selection, climate, and milk production was completed by managers of 103 farms; body condition scores were measured by a single evaluator on 63 farms; and temperature data were obtained from nearby weather stations. The edited data consisted of 31,076 lactation records, 14,804 cows, and 317 explanatory variables for first-service conception rate and 17,587 lactation records, 9,516 cows, and 341 explanatory variables for pregnancy status at 150 DIM. An alternating decision tree algorithm for first-service conception rate classified 75.6% of records correctly and identified the frequency of hoof trimming maintenance, type of bedding in the dry cow pen, type of cow restraint system, and duration of the voluntary waiting period as key explanatory variables. An alternating decision tree algorithm for pregnancy status at 150 DIM classified 71.4% of records correctly and identified bunk space per cow, temperature for thawing semen, percentage of cows with low body condition scores, number of
The performance of phylogenetic algorithms in estimating haplotype genealogies with migration.
Salzburger, Walter; Ewing, Greg B; Von Haeseler, Arndt
2011-05-01
Genealogies estimated from haplotypic genetic data play a prominent role in various biological disciplines in general and in phylogenetics, population genetics and phylogeography in particular. Several software packages have been developed specifically for the purpose of reconstructing genealogies from closely related, and hence highly similar, haplotype sequence data. Here, we use simulated data sets to test the performance of traditional phylogenetic algorithms (neighbour-joining, maximum parsimony and maximum likelihood) in estimating genealogies from nonrecombining haplotypic genetic data. We demonstrate that these methods are suitable for constructing genealogies from sets of closely related DNA sequences with or without migration. As genealogies based on phylogenetic reconstructions are fully resolved, though not necessarily bifurcating, and without reticulations, these approaches outperform widespread 'network' constructing methods. In our simulations of coalescent scenarios involving panmictic, symmetric and asymmetric migration, we found that phylogenetic reconstruction methods performed well, while the statistical parsimony approach as implemented in TCS performed poorly. Overall, parsimony as implemented in the PHYLIP package performed slightly better than other methods. We are not making the case that widespread 'network' constructing methods are bad, but rather that traditional phylogenetic tree finding methods are applicable to haplotypic data and exhibit reasonable performance with respect to accuracy and robustness. We also discuss some of the problems of converting a tree to a haplotype genealogy, in particular that the conversion is nonunique. PMID:21457168
Basic Performance of the Standard Retrieval Algorithm for the Dual-frequency Precipitation Radar
NASA Astrophysics Data System (ADS)
Seto, S.; Iguchi, T.; Kubota, T.
2013-12-01
applied again by using the adjusted k-Z relations. By iterating a combination of the HB method and the DFR method, the k-Z relations are improved. This is termed the HB-DFR method (Seto et al. 2013). Though k-Z relations are adjusted simultaneously for all range bins by the SRT method, the HB-DFR method can adjust the k-Z relation at a range bin independently of other range bins; DSD is therefore represented on a two-dimensional plane. The HB-DFR method has been incorporated in the DPR Level 2 standard algorithm (L2). The basic performance of L2 is tested with a synthetic dataset produced from the TRMM/PR standard product. In L2, when only the KuPR radar measurement is used, precipitation estimates are in good agreement with the corresponding rain rate estimates in the PR standard product. However, when both KuPR and KaPR radar measurements are used and the HB-DFR method is applied, the precipitation rate estimates deviate from the estimates in the PR standard product. This is partly because of the poor performance of the HB-DFR method and partly because of the overestimation of PIA by the dual-frequency SRT. Improvements to the standard algorithm, particularly for the dual-frequency measurement, will be presented.
NASA Astrophysics Data System (ADS)
Hegde, Rajeshwari; Balachandra, K.; Rao, Madhusudhan
2011-12-01
Acoustic echo cancellation is an essential signal enhancement tool in hands-free communication. Loudspeaker signals are picked up by a microphone and are fed back to the correspondent, resulting in an undesired echo. Nowadays, adaptive filtering techniques are typically employed to suppress this echo. In acoustic applications, long filters need to be adapted for sufficient echo suppression. Classical adaptation schemes such as LMS are quite expensive for accurate echo path modeling in highly reverberant environments. In order to cope with dynamic signals, the step size μ is often normalized by taking it inversely proportional to the energy of the input signal x. This normalized version of LMS (NLMS) is typically used in practice. This paper discusses various variable step-size NLMS-based algorithms which can be implemented in acoustic echo cancelling applications. The performance of these algorithms is evaluated in terms of ERLE and NSEC curves, and a comparison between them is made. A simple and novel double-talk detection scheme is also proposed in this paper.
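A fixed step-size NLMS loop, the baseline that the paper's variable step-size variants modify, can be sketched as follows; the echo path and signals are synthetic assumptions.

```python
import numpy as np

def nlms_echo_cancel(x, d, order=64, mu=0.5, eps=1e-6):
    """Fixed step-size NLMS: x is the far-end (loudspeaker) signal,
    d the microphone signal containing its echo; the update is
    normalized by the regressor energy."""
    w = np.zeros(order)
    e = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]                # most recent samples first
        e[n] = d[n] - w @ u                     # echo-cancelled output
        w += mu * e[n] * u / (eps + u @ u)      # normalized LMS update
    return e

rng = np.random.default_rng(4)
far = rng.normal(size=8000)                     # far-end speech stand-in
path = rng.normal(size=32) * np.exp(-np.arange(32) / 8.0)
mic = np.convolve(far, path)[:8000] + rng.normal(0, 0.01, 8000)
residual = nlms_echo_cancel(far, mic)           # error shrinks as w converges
```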
NASA Astrophysics Data System (ADS)
Ha, S. H.; Choi, S. B.; Lee, G. S.; Yoo, W. H.
2013-02-01
This paper presents a control performance evaluation of a railway vehicle featuring a semi-active suspension system using magnetorheological (MR) fluid dampers. In order to achieve this goal, a nine-degree-of-freedom railway vehicle model, which includes the car body and bogie, is established. The wheel-set data are loaded from measured values of a railway vehicle. The MR damper system is incorporated with the governing equation of motion of the railway vehicle model, which includes the secondary suspension. To illustrate the effectiveness of the controlled MR dampers on the suspension system of the railway vehicle, a control law using the sky-ground hook controller is adopted. This controller accounts for both vibration control of the car body and increased stability of the bogie by adopting a weighting parameter between the two performance requirements. The parameters are appropriately determined by employing a fuzzy algorithm associated with two fuzzy variables: the lateral speed of the car body and the lateral performance of the bogie. Computer simulation results of control performance, such as vibration control and stability analysis, are presented in the time and frequency domains.
Student-Led Project Teams: Significance of Regulation Strategies in High- and Low-Performing Teams
ERIC Educational Resources Information Center
Ainsworth, Judith
2016-01-01
We studied group and individual co-regulatory and self-regulatory strategies of self-managed student project teams using data from intragroup peer evaluations and a postproject survey. We found that high team performers shared their research and knowledge with others, collaborated to advise and give constructive criticism, and demonstrated moral…
Yao, Ming-Shui; Tang, Wen-Xiang; Wang, Guan-E; Nath, Bhaskar; Xu, Gang
2016-07-01
A strategy for combining metal oxides and metal-organic frameworks is proposed to design new materials for sensing volatile organic compounds, for the first time. The prepared ZnO@ZIF-CoZn core-sheath nanowire arrays show greatly enhanced performance, not only in selectivity but also in response, recovery behavior, and working temperature. PMID:27153113
Significant Returns in Engagement and Performance with a Free Teaching App
ERIC Educational Resources Information Center
Green, Alan
2016-01-01
Pedagogical research shows that teaching methods other than traditional lectures may result in better outcomes. However, lecture remains the dominant method in economics, likely due to high implementation costs of methods shown to be effective in the literature. In this article, the author shows significant benefits of using a teaching app for…
NASA Astrophysics Data System (ADS)
Lee, Y. H.; Chiang, K. W.
2012-07-01
In this study, a 3D Map Matching (3D MM) algorithm is embedded into a current INS/GPS fusion algorithm to enhance the sustainability and accuracy of INS/GPS integration systems, especially in the height component. In addition, this study proposes an effective solution to a limitation of current commercial vehicular navigation systems: they fail to distinguish whether the vehicle is moving on an elevated highway or on the road under it because they do not have sufficient height resolution. To validate the performance of the proposed 3D MM embedded INS/GPS integration algorithms, two scenarios were considered in the test area, paths under freeways and streets between tall buildings, where the GPS signal is easily obstructed or interfered with. The test platform was mounted on top of a land vehicle, with the systems inside the vehicle. The IMUs applied include the SPAN-LCI (0.1 deg/hr gyro bias) from NovAtel, which was used as the reference system, and two MEMS IMUs with different specifications for verifying the performance of the proposed algorithm. The preliminary results indicate that the proposed algorithms are able to improve the accuracy of the positional components significantly in GPS-denied environments with the use of INS/GPS integrated systems in SPP mode.
NASA Astrophysics Data System (ADS)
Garcia, C. A.; Ferreira, A.; Dogliotti, A. I.; Tavano, V. M.; High Latitude Oceanography Group-GOAL
2011-12-01
The PATagonia EXperiment (PATEX) is a Brazilian research project with the overall objective of characterizing the environmental constraints, phytoplankton assemblages, primary production rates, bio-optical characteristics, and air-sea CO2 fluxes of waters along the Argentinean shelf-break during austral spring and summer. A set of seven PATEX cruises was conducted from 2004 to 2009 (a total of 189 CTD stations) covering a broad region, in waters whose surface chlorophyll-a concentration (chla) varied from 0.10 to 22.30 mg m-3. This wide range of phytoplankton biomass reflected several stages of the phytoplankton blooms, with relatively higher chlorophyll associated with microplankton (picoplankton and/or nanoplankton) dominance during the spring (late summer) cruises in the shelf-break blooms. A special cruise (PATEX 5) was designed specifically for sampling a coccolithophorid bloom on the Patagonia inner shelf (Garcia et al., 2011, JGR, 116, C03025). Overall, distinct efficiencies in absorption and scattering properties were observed due to differences in algal cell size and pigment composition. Cluster analysis performed on both chla-specific absorption and scattering coefficients has shown the relative contributions of each of three cell-size classes. A hierarchical cluster analysis was also applied to in situ hyperspectral remote sensing reflectance spectra in order to classify the whole spectra set into coherent groups. Three spectrally distinct classes were well defined, and they are significantly associated with chla range. The NASA OC4v6 chlorophyll algorithm has shown a relatively good performance when combining bio-optical data from all cruises (r2=0.78, slope of 0.86 and intercept of 0.03), with a positive bias (Mean Relative Percentage Difference, RPD=11.53%). The impact of chlorophyll-specific absorption and scattering coefficients on the performance of empirical ocean algorithms is also assessed.
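The OC4 family has the maximum-band-ratio polynomial form sketched below; the coefficients shown are the values we believe correspond to OC4v6 and should be checked against the current NASA OBPG tables before use. The input reflectances in the usage line are purely illustrative.

```python
import numpy as np

def oc4_chl(rrs443, rrs489, rrs510, rrs555,
            a=(0.3272, -2.9940, 2.7218, -1.2259, -0.5683)):
    """Maximum band ratio algorithm in the OC4 form:
    chl = 10 ** (a0 + a1*R + a2*R**2 + a3*R**3 + a4*R**4),
    with R = log10(max(Rrs443, Rrs489, Rrs510) / Rrs555)."""
    R = np.log10(np.maximum.reduce([rrs443, rrs489, rrs510]) / rrs555)
    return 10.0 ** np.polyval(a[::-1], R)       # polyval wants highest power first

print(oc4_chl(0.004, 0.005, 0.004, 0.003))      # Rrs in sr^-1, illustrative values
```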
Optimizing the performance of single-mode laser diode system using genetic algorithm
NASA Astrophysics Data System (ADS)
Aydin, Elif; Yildirim, Remzi
2004-07-01
In this correspondence, micro-genetic algorithm (MGA) application results for optimizing the performance of electronic feedback of a laser diode are presented. The goal of optimization is to find the maximum bandwidth of the laser diode with electronic feedback used in fiber optic digital communication. A numerical analysis of the system theory of the single-mode laser diode to obtain numerical results of the gain, the pulse response, and the harmonic distortion for electronic feedback is also presented. The dependence of the system gain on the feedback gain and delay is examined. The pulse response is studied and it is shown that a transmission rate over 1 Gbyte/s can be achieved.
Chen, Minhua; Silva, Jorge; Paisley, John; Wang, Chunping; Dunson, David; Carin, Lawrence
2013-01-01
Nonparametric Bayesian methods are employed to constitute a mixture of low-rank Gaussians, for data x ∈ ℝN that are of high dimension N but are constrained to reside in a low-dimensional subregion of ℝN. The number of mixture components and their rank are inferred automatically from the data. The resulting algorithm can be used for learning manifolds and for reconstructing signals from manifolds, based on compressive sensing (CS) projection measurements. The statistical CS inversion is performed analytically. We derive the required number of CS random measurements needed for successful reconstruction, based on easily-computed quantities, drawing on block-sparsity properties. The proposed methodology is validated on several synthetic and real datasets. PMID:23894225
Lithium deficient mesoporous Li2-xMnSiO4 with significantly improved electrochemical performance
NASA Astrophysics Data System (ADS)
Wang, Haiyan; Hou, Tianli; Sun, Dan; Huang, Xiaobing; He, Hanna; Tang, Yougen; Liu, Younian
2014-02-01
Li2-xMnSiO4 compounds with mesoporous structure are first proposed in the present work. It is interesting to note that the lithium deficient compounds exhibit much higher electrochemical performance in comparison with the stoichiometric one. Among these compounds, Li1.8MnSiO4 shows the best electrochemical performance. It is found that mesoporous Li1.8MnSiO4 without carbon coating delivers a maximum discharge capacity of 110.9 mAh g-1 at 15 mA g-1, maintaining 90.8 mAh g-1 after 25 cycles, while that of the stoichiometric one is only 48.0 mAh g-1, with 12.5 mAh g-1 remaining. The superior properties are mainly due to the great improvement of electronic conductivity and structure stability, as well as suppressed charge-transfer resistance.
NASA Astrophysics Data System (ADS)
Rimbalová, Jarmila; Vilčeková, Silvia
2013-11-01
The practice of facilities management is rapidly evolving with the increasing interest in the discourse of sustainable development. The industry and its market are forecast to develop to include non-core functions, activities traditionally not associated with this profession, but which are increasingly being addressed by facilities managers. The scale of growth in the built environment and the consequential growth of the facility management sector is anticipated to be enormous. Key Performance Indicators (KPIs) are measures that provide essential information about the performance of facility services delivery. In selecting KPIs, it is critical to limit them to those factors that are essential to the organization reaching its goals. It is also important to keep the number of KPIs small, to keep everyone's attention focused on achieving the same KPIs. This paper deals with the determination of weights of KPIs for FM in terms of the design and use of sustainable buildings.
Performance-based seismic design of steel frames utilizing colliding bodies algorithm.
Veladi, H
2014-01-01
A pushover analysis method based on semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to the conventional design methods to show the power or weakness of the algorithm. PMID:25202717
NASA Astrophysics Data System (ADS)
Sivakumar, P. Bagavathi; Mohandas, V. P.
Stock price prediction and stock trend prediction are the two major research problems of financial time series analysis. In this work, a performance comparison of various attribute set reduction algorithms was made for short-term stock price prediction. Forward selection, backward elimination, optimized selection, optimized selection based on brute force, weight-guided selection, and optimized selection based on evolutionary principles and strategy were used. Different selection schemes and crossover types were explored. To supplement learning and modeling, a support vector machine was also used in combination. The algorithms were applied to real-time Indian stock data, namely CNX Nifty. The experimental study was conducted using the open source data mining tool RapidMiner. The performance was compared in terms of root mean squared error, squared error and execution time. The obtained results indicate the superiority of evolutionary algorithms; the optimized selection algorithm based on evolutionary principles outperforms the others.
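As one concrete example of the attribute-reduction schemes compared, a greedy forward-selection wrapper around an SVR (scikit-learn used here as a stand-in for the RapidMiner operators) might look like the following; the feature counts and data are invented.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def forward_selection(X, y, max_feats=5, cv=5):
    """Greedy forward selection: repeatedly add the attribute that
    most improves cross-validated R^2 of an SVR."""
    chosen, remaining = [], list(range(X.shape[1]))
    best_score = -np.inf
    while remaining and len(chosen) < max_feats:
        scores = {j: cross_val_score(SVR(), X[:, chosen + [j]], y, cv=cv).mean()
                  for j in remaining}
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best_score:
            break                               # no further improvement
        best_score = scores[j_best]
        chosen.append(j_best)
        remaining.remove(j_best)
    return chosen

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 10))                  # stand-in for lagged price features
y = X[:, 2] - 0.5 * X[:, 7] + rng.normal(0, 0.1, 200)
print(forward_selection(X, y))
```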
FOCUSED R&D FOR ELECTROCHROMIC SMART WINDOWS: SIGNIFICANT PERFORMANCE AND YIELD ENHANCEMENTS
Marcus Milling
2004-09-23
Developments made under this program will play a key role in underpinning the technology for producing EC devices. It is anticipated that the work begun during this period will continue to improve material properties, drive yields up and costs down, increase durability, and make manufacture simpler and more cost effective. It is hoped that this will contribute to a successful and profitable industry, which will help reduce energy consumption and improve comfort for building occupants worldwide. The first major task involved improvements to the materials used in the process. The improvements made as a result of the work done during this project have contributed to enhanced performance, including dynamic range, uniformity and electrical characteristics. Another major objective of the project was to develop technology to improve yield, reduce cost, and facilitate manufacturing of EC products. Improvements directly attributable to the work carried out as part of this project, seen in the overall EC device performance, have been accompanied by an improvement in the repeatability and consistency of the production process. Innovative test facilities for characterizing devices in a timely and well-defined manner have been developed. The equipment has been designed in such a way as to make scaling up to accommodate the higher throughput necessary for manufacturing relatively straightforward. Finally, the third major goal was to assure the durability of the EC product, both by developments aimed at improving the product performance and by development of novel procedures to test the durability of this new product. Both aspects have been demonstrated: a number of different durability tests were carried out, both in-house and by independent third-party testers, and several novel durability tests were developed.
NASA Astrophysics Data System (ADS)
Kizilkaya, Elif A.; Gupta, Surendra M.
2005-11-01
In this paper, we compare the impact of different disassembly line balancing (DLB) algorithms on the performance of our recently introduced Dynamic Kanban System for Disassembly Line (DKSDL), which accommodates the uncertainties associated with disassembly and remanufacturing processing. We consider a case study to illustrate the impact of various DLB algorithms on the DKSDL. The approach to the solution, scenario settings, results and discussion of the results are included.
Wang, Ping; He, Haili; Xu, Xiaolong; Jin, Yongdong
2014-02-12
In this work, we present a new method to synthesize phosphorus- and nitrogen-containing graphene nanosheets, which uses dicyandiamide to prevent the aggregation of graphene oxide and act as the nitrogen precursor, and phosphoric acid (H3PO4) as the activation reagent. We have found that through the H3PO4 activation, the samples exhibit remarkably enhanced supercapacitive performance, and depending on the amount of H3PO4 introduced, the specific capacitance of the samples gradually increases from 7.6 to 244.6 F g(-1). Meanwhile, the samples also exhibit good rate capability and excellent stability (up to 10 000 cycles). Through transmission electron microscopy, high-resolution transmission electron microscopy, X-ray diffraction, X-ray photoelectron spectroscopy and Brunauer-Emmett-Teller analyses, the large pore volume and phosphorus-related functional groups induced in the product by the H3PO4 treatment are assumed to be responsible for the enhancement. PMID:24456232
Cramer, Michael J; Dumke, Charles L; Hailes, Walter S; Cuddy, John S; Ruby, Brent C
2015-10-01
A variety of dietary choices are marketed to enhance glycogen recovery after physical activity. Past research informs recommendations regarding the timing, dose, and nutrient compositions to facilitate glycogen recovery. This study examined the effects of isoenergetic sport supplements (SS) vs. fast food (FF) on glycogen recovery and exercise performance. Eleven males completed two experimental trials in a randomized, counterbalanced order. Each trial included a 90-min glycogen depletion ride followed by a 4-hr recovery period. Absolute amounts of macronutrients (1.54 ± 0.27 g·kg-1 carbohydrate, 0.24 ± 0.04 g·kg-1 fat, and 0.18 ± 0.03 g·kg-1 protein) as either SS or FF were provided at 0 and 2 hr. Muscle biopsies were collected from the vastus lateralis at 0 and 4 hr post exercise. Blood samples were analyzed at 0, 30, 60, 120, 150, 180, and 240 min post exercise for insulin and glucose, with blood lipids analyzed at 0 and 240 min. A 20k time-trial (TT) was completed following the final muscle biopsy. There were no differences in the blood glucose and insulin responses. Similarly, rates of glycogen recovery were not different across the diets (6.9 ± 1.7 and 7.9 ± 2.4 mmol·kg wet weight-1·hr-1 for SS and FF, respectively). There was also no difference across the diets for TT performance (34.1 ± 1.8 and 34.3 ± 1.7 min for SS and FF, respectively). These data indicate that short-term food options to initiate glycogen resynthesis can include dietary options not typically marketed as sports nutrition products, such as fast food menu items. PMID:25811308
Singh, Arvinder; Chandra, Amreesh
2015-01-01
Amongst the materials being investigated for supercapacitor electrodes, carbon based materials are most investigated. However, pure carbon materials suffer from inherent physical processes which limit the maximum specific energy and power that can be achieved in an energy storage device. Therefore, use of carbon-based composites with suitable nano-materials is attaining prominence. The synergistic effect between the pseudocapacitive nanomaterials (high specific energy) and carbon (high specific power) is expected to deliver the desired improvements. We report the fabrication of high capacitance asymmetric supercapacitor based on electrodes of composites of SnO2 and V2O5 with multiwall carbon nanotubes and neutral 0.5 M Li2SO4 aqueous electrolyte. The advantages of the fabricated asymmetric supercapacitors are compared with the results published in the literature. The widened operating voltage window is due to the higher over-potential of electrolyte decomposition and a large difference in the work functions of the used metal oxides. The charge balanced device returns the specific capacitance of ~198 F g−1 with corresponding specific energy of ~89 Wh kg−1 at 1 A g−1. The proposed composite systems have shown great potential in fabricating high performance supercapacitors. PMID:26494197
Knox, Jeanette Bresson Ladegaard; Svendsen, Mette Nordahl
2015-08-01
This article examines the storytelling aspect in philosophizing with rehabilitating cancer patients in small Socratic dialogue groups (SDG). Recounting an experience to illustrate a philosophical question chosen by the participants is the traditional point of departure for the dialogical exchange. However, narrating is much more than a beginning point or the skeletal framework of events and it deserves more scholarly attention than hitherto given. Storytelling pervades the whole Socratic process and impacts the conceptual analysis in a SDG. In this article we show how the narrative aspect became a rich resource for the compassionate bond between participants and how their stories cultivated the abstract reflection in the group. In addition, the aim of the article is to reveal the different layers in the performance of storytelling, or of authoring experience. By picking, poking and dissecting an experience through a collaborative effort, most participants had their initial experience existentially refined and the chosen concept of which the experience served as an illustration transformed into a moral compass to be used in self-orientation post cancer. PMID:25894237
NASA Technical Reports Server (NTRS)
Spinhirne, James D.; Palm, Stephen P.; Hlavka, Dennis L.; Hart, William D.
2007-01-01
The Geoscience Laser Altimeter System (GLAS) launched in early 2003 is the first polar orbiting satellite lidar. The instrument design includes high performance observations of the distribution and optical scattering cross sections of atmospheric clouds and aerosol. The backscatter lidar operates at two wavelengths, 532 and 1064 nm. For the atmospheric cloud and aerosol measurements, the 532 nm channel was designed for ultra high efficiency with solid state photon counting detectors and etalon filtering. Data processing algorithms were developed to calibrate and normalize the signals and produce global scale data products of the height distribution of cloud and aerosol layers and their optical depths and particulate scattering cross sections up to the limit of optical attenuation. The paper will concentrate on the effectiveness and limitations of the lidar channel design and data product algorithms. Both atmospheric receiver channels meet and exceed their design goals. Geiger Mode Avalanche Photodiode modules are used for the 532 nm signal. The operational experience is that some signal artifacts and non-linearity require correction in data processing. As with all photon counting detectors, a pulse-pile-up calibration is an important aspect of the measurement. Additional signal corrections were found to be necessary relating to correction of a saturation signal-run-on effect and also, for daytime data, a small range dependent variation in the responsivity. It was possible to correct for these signal errors in data processing and achieve the requirement to accurately profile aerosol and cloud cross sections down to 10-7 m-1 sr-1. The analysis procedure employs a precise calibration against molecular scattering in the mid-stratosphere. The 1064 nm channel detection employs a high-speed analog APD for surface and atmospheric measurements where the detection sensitivity is limited by detector noise and is over an order of magnitude less than at 532 nm. A unique feature of
Performance analysis of hybrid algorithms for lossless compression of climate data
NASA Astrophysics Data System (ADS)
Mummadisetty, Bharath Chandra
Climate data are very important and, at the same time, voluminous. Every minute a new entry is recorded for different climate parameters in climate databases around the world. Given the explosive growth of data that needs to be transmitted and stored, there is a necessity to focus on developing better transmission and storage technologies. Data compression is known to be a viable and effective solution to reduce the bandwidth and storage requirements of bulk data. So, the goal is to develop the best compression methods for climate data. The methodology used is based on predictive analysis. The focus is to implement a hybrid algorithm which utilizes Artificial Neural Networks (ANNs) for prediction of climate data. An ANN is a very efficient tool for generating models that predict climate data with great accuracy. Two types of ANNs, the Multilayer Perceptron (MLP) and the Cascade Feedforward Neural Network (CFNN), are used. It is beneficial to take advantage of an ANN and combine its output with lossless compression algorithms such as differential encoding and Huffman coding to generate high compression ratios. The performance of the two techniques, based on the MLP and CFNN types, is compared using metrics including compression ratio, Mean Square Error (MSE) and Root Mean Square Error (RMSE). The two methods are also compared with a conventional method of differential encoding followed by Huffman coding. The results indicate that the MLP outperforms the CFNN. The compression ratios of both proposed methods are also higher than those obtained by the standard method. Compression ratios as high as 10.3, 9.8, and 9.54 are obtained for precipitation, photosynthetically active radiation, and solar radiation datasets, respectively.
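The hybrid idea, predict each sample and then entropy-code the residuals, can be sketched as follows; the trivial predictor stands in for the trained MLP/CFNN, and the data are toy values.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bit string) from frequencies."""
    heap = [[freq, i, {s: ""}] for i, (s, freq) in
            enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], count, merged])
        count += 1
    return heap[0][2]

def compress(values, predictor):
    """Hybrid scheme: subtract a model prediction from each sample and
    Huffman-code the (small, repetitive) residuals."""
    residuals = [v - predictor(i) for i, v in enumerate(values)]
    code = huffman_code(residuals)
    bits = "".join(code[r] for r in residuals)
    return bits, code

vals = [20, 21, 21, 22, 23, 23, 24, 24]         # toy temperature series
bits, code = compress(vals, predictor=lambda i: 20 + i // 2)
print(len(bits), "bits vs", len(vals) * 8, "uncompressed")
```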
Yang, Jinhu; Li, Ying; Zu, Lianhai; Tong, Lianming; Liu, Guanglei; Qin, Yao; Shi, Donglu
2015-04-22
Noble metals are well known for their surface plasmon resonance effect that enables strong light absorption, typically in the visible regions for gold and silver. However, unlike semiconductors, noble metals are commonly considered incapable of catalyzing reactions via photogenerated electron-hole pairs due to their continuous energy band structures. So far, photonically activated catalytic systems based on pure noble metal nanostructures have seldom been reported. Here, we report the development of three different novel plasmonic Au superstructures comprised of Au nanoparticles, multiply-twinned nanoparticles and nanoworms assembled on the surfaces of SiO2 nanospheres, respectively, via a well-designed synthetic strategy. It is found that these novel Au superstructures show enhanced broadband visible-light absorption due to plasmon resonance coupling within the superstructures, and thus can effectively focus the energy of photon fluxes to generate many more excited hot electrons and holes for promoting catalytic reactions. Accordingly, these Au superstructures exhibit significantly enhanced visible-light catalytic efficiency (up to ∼264% enhancement) for the commercial reaction of p-nitrophenol reduction. PMID:25840556
System Performance of an Integrated Airborne Spacing Algorithm with Ground Automation
NASA Technical Reports Server (NTRS)
Swieringa, Kurt A.; Wilson, Sara R.; Baxley, Brian T.
2016-01-01
The National Aeronautics and Space Administration's (NASA's) first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature ATM technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the Terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools to enable precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise spacing behind another aircraft. Recent simulations and IM algorithm development at NASA have focused on trajectory-based IM operations where aircraft equipped with IM avionics are expected to achieve a spacing goal, assigned by air traffic controllers, at the final approach fix. The recently published IM Minimum Operational Performance Standards describe five types of IM operations. This paper discusses the results and conclusions of a human-in-the-loop simulation that investigated three of those IM operations. The results presented in this paper focus on system performance and integration metrics. Overall, the IM operations conducted in this simulation integrated well with ground-based decision support tools, and certain types of IM operations were able to provide improved spacing precision at the final approach fix; however, some issues were identified that should be addressed prior to implementing IM procedures in real-world operations.
NASA Astrophysics Data System (ADS)
Mehrparvar, Behnam; Khoshnoudian, Taramarz
2012-03-01
Base isolated structures have been found to be at risk in near-fault regions as a result of long period pulses that may exist in near-source ground motions. Various control strategies, including passive, active and semi-active control systems, have been investigated to overcome this problem. This study focuses on the development of a semi-active control algorithm based on several performance levels anticipated from an isolated building during different levels of ground shaking corresponding to various earthquake hazard levels. The proposed performance-based algorithm is based on a modified version of the well-known semi-active skyhook control algorithm. The proposed control algorithm changes the control gain depending on the level of shaking imposed on the structure. The proposed control system has been evaluated using a series of analyses performed on a base isolated benchmark building subjected to seven pairs of scaled ground motion records. Simulation results show that the newly proposed algorithm is effective in improving the structural and nonstructural performance of the building for selected earthquakes.
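The paper's modified algorithm is not given in the abstract; below is a minimal sketch of a conventional clipped skyhook rule together with a purely hypothetical gain schedule to illustrate the performance-level idea. All thresholds and gains are invented for the example.

```python
def skyhook_damping(v_body, v_rel, c_min, c_max, gain):
    """Clipped skyhook law for a semi-active device: emulate a damper
    anchored to the 'sky' when the body velocity and the relative
    velocity have the same sign, otherwise command minimum damping."""
    if v_body * v_rel > 0:
        c = gain * abs(v_body) / max(abs(v_rel), 1e-9)
        return min(max(c, c_min), c_max)        # respect device limits
    return c_min

def scheduled_gain(peak_ground_accel_g):
    """Hypothetical gain schedule mimicking the performance-based
    idea: stronger shaking selects a higher skyhook gain."""
    if peak_ground_accel_g < 0.1:               # service-level event
        return 500.0
    if peak_ground_accel_g < 0.4:               # design-level event
        return 2000.0
    return 5000.0                               # maximum considered event

c = skyhook_damping(0.08, 0.02, c_min=1e3, c_max=2e4,
                    gain=scheduled_gain(0.25))
```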
Ariyawansa, K.A.
1991-04-01
A benchmark parallel implementation of the Van Slyke and Wets algorithm for stochastic linear programs, and the results of a carefully designed numerical experiment on the Sequent/Balance using the implementation, are presented. An important use of this implementation is as a benchmark to assess the performance of approximation algorithms for stochastic linear programs. These approximation algorithms are best suited for implementation on parallel vector processors like the Alliant FX/8. Therefore, the performance of the benchmark implementation on the Alliant FX/8 is of interest. In this paper, we present results observed when a portion of the numerical experiment is performed on the Alliant FX/8. These results indicate that the implementation makes satisfactory use of the concurrency capabilities of the Alliant FX/8. They also indicate that the vectorization capabilities of the Alliant FX/8 are not satisfactorily utilized by the implementation. 9 refs., 9 tabs.