Sample records for support reduction algorithm

  1. An environment-adaptive management algorithm for hearing-support devices incorporating listening situation and noise type classifiers.

    PubMed

    Yook, Sunhyun; Nam, Kyoung Won; Kim, Heepyung; Hong, Sung Hwa; Jang, Dong Pyo; Kim, In Young

    2015-04-01

    In order to provide more consistent sound intelligibility for the hearing-impaired person, regardless of environment, it is necessary to adjust the setting of the hearing-support (HS) device to accommodate various environmental circumstances. In this study, a fully automatic HS device management algorithm that can adapt to various environmental situations is proposed; it is composed of a listening-situation classifier, a noise-type classifier, an adaptive noise-reduction algorithm, and a management algorithm that can selectively turn on/off one or more of the three basic algorithms-beamforming, noise-reduction, and feedback cancellation-and can also adjust internal gains and parameters of the wide-dynamic-range compression (WDRC) and noise-reduction (NR) algorithms in accordance with variations in environmental situations. Experimental results demonstrated that the implemented algorithms can classify both listening situation and ambient noise type situations with high accuracies (92.8-96.4% and 90.9-99.4%, respectively), and the gains and parameters of the WDRC and NR algorithms were successfully adjusted according to variations in environmental situation. The average values of signal-to-noise ratio (SNR), frequency-weighted segmental SNR, Perceptual Evaluation of Speech Quality, and mean opinion test scores of 10 normal-hearing volunteers of the adaptive multiband spectral subtraction (MBSS) algorithm were improved by 1.74 dB, 2.11 dB, 0.49, and 0.68, respectively, compared to the conventional fixed-parameter MBSS algorithm. These results indicate that the proposed environment-adaptive management algorithm can be applied to HS devices to improve sound intelligibility for hearing-impaired individuals in various acoustic environments. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  2. Weather Radar Studies

    DTIC Science & Technology

    1988-03-31

    ... radar operation and data-collection activities, a large data-analysis effort has been under way in support of automatic wind-shear detection algorithm ... The indexed excerpt also lists a section on data reduction and algorithm development covering general-purpose software, Concurrent Computer systems, Sun workstations, radar data analysis (algorithm verification, other studies, translations, outside distributions), and Mesonet/LLWAS data analysis.

  3. A New Method of Facial Expression Recognition Based on SPE Plus SVM

    NASA Astrophysics Data System (ADS)

    Ying, Zilu; Huang, Mingwei; Wang, Zhen; Wang, Zhewei

    A novel method of facial expression recognition (FER) is presented, which uses stochastic proximity embedding (SPE) for data dimension reduction and a support vector machine (SVM) for expression classification. The proposed algorithm is applied to the Japanese Female Facial Expression (JAFFE) database for FER and achieves better performance than traditional algorithms such as PCA and LDA. The results further demonstrate the effectiveness of the proposed algorithm.

  4. WAR DSS: A DECISION SUPPORT SYSTEM FOR ENVIRONMENTALLY CONSCIOUS CHEMICAL PROCESS DESIGN

    EPA Science Inventory

    The second generation of the Waste Reduction (WAR) Algorithm is constructed as a decision support system (DSS) in the design of chemical manufacturing facilities. The WAR DSS is a software tool that can help reduce the potential environmental impacts (PEIs) of industrial chemical...

  5. Multi-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms.

    PubMed

    Bromuri, Stefano; Zufferey, Damien; Hennebert, Jean; Schumacher, Michael

    2014-10-01

    This research is motivated by the issue of classifying illnesses of chronically ill patients for decision support in clinical settings. Our main objective is to propose multi-label classification of multivariate time series contained in medical records of chronically ill patients, by means of quantization methods, such as bag of words (BoW), and multi-label classification algorithms. Our second objective is to compare supervised dimensionality reduction techniques to state-of-the-art multi-label classification algorithms. The hypothesis is that kernel methods and locality preserving projections make such algorithms good candidates to study multi-label medical time series. We combine BoW and supervised dimensionality reduction algorithms to perform multi-label classification on health records of chronically ill patients. The considered algorithms are compared with state-of-the-art multi-label classifiers in two real world datasets. Portavita dataset contains 525 diabetes type 2 (DT2) patients, with co-morbidities of DT2 such as hypertension, dyslipidemia, and microvascular or macrovascular issues. MIMIC II dataset contains 2635 patients affected by thyroid disease, diabetes mellitus, lipoid metabolism disease, fluid electrolyte disease, hypertensive disease, thrombosis, hypotension, chronic obstructive pulmonary disease (COPD), liver disease and kidney disease. The algorithms are evaluated using multi-label evaluation metrics such as hamming loss, one error, coverage, ranking loss, and average precision. Non-linear dimensionality reduction approaches behave well on medical time series quantized using the BoW algorithm, with results comparable to state-of-the-art multi-label classification algorithms. Chaining the projected features has a positive impact on the performance of the algorithm with respect to pure binary relevance approaches. The evaluation highlights the feasibility of representing medical health records using the BoW for multi-label classification tasks. The study also highlights that dimensionality reduction algorithms based on kernel methods, locality preserving projections or both are good candidates to deal with multi-label classification tasks in medical time series with many missing values and high label density. Copyright © 2014 Elsevier Inc. All rights reserved.
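
    As a rough, hedged illustration of the pipeline described above (bag-of-words quantization of time series followed by binary-relevance multi-label classification), the following Python sketch uses scikit-learn; the window length, codebook size, and classifier choice are illustrative assumptions, not the authors' settings.

```python
# Hedged sketch: BoW quantization of multivariate time series + binary-relevance
# multi-label classification. Illustrative choices throughout, not the authors' pipeline.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

def windows(series, win):
    """All length-`win` windows of a (T, d) series, flattened to vectors."""
    return np.array([series[i:i + win].ravel() for i in range(len(series) - win + 1)])

def bow_features(series_list, codebook, win):
    """Normalized histogram of nearest-codeword assignments for each series."""
    feats = []
    for s in series_list:
        words = codebook.predict(windows(s, win))
        hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
        feats.append(hist / hist.sum())
    return np.vstack(feats)

def fit_bow_multilabel(train_series, Y, n_words=64, win=10):
    """Y is an (n_samples, n_labels) binary indicator matrix (one column per diagnosis)."""
    codebook = KMeans(n_clusters=n_words, n_init=10, random_state=0)
    codebook.fit(np.vstack([windows(s, win) for s in train_series]))
    X = bow_features(train_series, codebook, win)
    clf = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, Y)   # binary relevance
    return codebook, clf
```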

  6. Estimation of diffusion coefficients from voltammetric signals by support vector and gaussian process regression

    PubMed Central

    2014-01-01

    Background: Support vector regression (SVR) and Gaussian process regression (GPR) were used for the analysis of electroanalytical experimental data to estimate diffusion coefficients. Results: For simulated cyclic voltammograms based on the EC, Eqr, and EqrC mechanisms these regression algorithms in combination with nonlinear kernel/covariance functions yielded diffusion coefficients with higher accuracy as compared to the standard approach of calculating diffusion coefficients relying on the Nicholson-Shain equation. The level of accuracy achieved by SVR and GPR is virtually independent of the rate constants governing the respective reaction steps. Further, the reduction of high-dimensional voltammetric signals by manual selection of typical voltammetric peak features decreased the performance of both regression algorithms compared to a reduction by downsampling or principal component analysis. After training on simulated data sets, diffusion coefficients were estimated by the regression algorithms for experimental data comprising voltammetric signals for three organometallic complexes. Conclusions: Estimated diffusion coefficients closely matched the values determined by the parameter fitting method, but reduced the required computational time considerably for one of the reaction mechanisms. The automated processing of voltammograms according to the regression algorithms yields better results than the conventional analysis of peak-related data. PMID:24987463
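
    A minimal sketch of the regression setup described above, assuming scikit-learn and illustrative hyperparameters: voltammograms are reduced by PCA and the diffusion coefficient is regressed with SVR or GPR. This is not the authors' code; kernel choices, component counts, and variable names are placeholders.

```python
# Hedged sketch: PCA reduction of simulated voltammograms + SVR/GPR regression of
# diffusion coefficients. Hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# X_sim: (n_voltammograms, n_current_samples) simulated current traces
# y_sim: (n_voltammograms,) diffusion coefficients used for training
def build_models(X_sim, y_sim, n_components=10):
    svr = make_pipeline(StandardScaler(), PCA(n_components=n_components),
                        SVR(kernel="rbf", C=10.0, epsilon=0.01))
    gpr = make_pipeline(StandardScaler(), PCA(n_components=n_components),
                        GaussianProcessRegressor(kernel=RBF() + WhiteKernel()))
    return svr.fit(X_sim, y_sim), gpr.fit(X_sim, y_sim)

# Estimates for measured voltammograms X_exp:
# svr_model, gpr_model = build_models(X_sim, y_sim)
# d_svr, d_gpr = svr_model.predict(X_exp), gpr_model.predict(X_exp)
```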

  7. Reduction from cost-sensitive ordinal ranking to weighted binary classification.

    PubMed

    Lin, Hsuan-Tien; Li, Ling

    2012-05-01

    We present a reduction framework from ordinal ranking to binary classification. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranker from the binary classifier. Based on the framework, we show that a weighted 0/1 loss of the binary classifier upper-bounds the mislabeling cost of the ranker, both error-wise and regret-wise. Our framework allows not only the design of good ordinal ranking algorithms based on well-tuned binary classification approaches, but also the derivation of new generalization bounds for ordinal ranking from known bounds for binary classification. In addition, our framework unifies many existing ordinal ranking algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms. In addition, the newly designed algorithms lead to better cost-sensitive ordinal ranking performance, as well as improved listwise ranking performance.
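
    The three-step framework lends itself to a short sketch: each ordinal example is expanded into K-1 weighted binary examples, any binary learner is trained on them, and the ranker counts positive predictions. The Python sketch below (scikit-learn assumed) uses uniform weights, which corresponds to the absolute cost; other cost matrices would change the weights, and the encoding of the threshold as an extra feature is one simple choice among several.

```python
# Hedged sketch of the ordinal-to-binary reduction: extended examples ((x, k), [y > k]),
# any binary classifier, then rank = 1 + number of positive threshold predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def to_extended(X, y, K):
    Xe, ye, we = [], [], []
    for x, label in zip(X, y):
        for k in range(1, K):                       # thresholds k = 1 .. K-1
            Xe.append(np.append(x, k))              # append threshold index as a feature
            ye.append(1 if label > k else 0)
            we.append(1.0)                          # uniform weights = absolute cost
    return np.array(Xe), np.array(ye), np.array(we)

def fit_ranker(X, y, K):
    Xe, ye, we = to_extended(X, y, K)
    clf = LogisticRegression(max_iter=1000).fit(Xe, ye, sample_weight=we)
    def rank(x):
        ks = np.arange(1, K)
        Xq = np.column_stack([np.tile(x, (K - 1, 1)), ks])
        return 1 + int(clf.predict(Xq).sum())       # rank = 1 + #{k : classifier says y > k}
    return rank
```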

  8. Scalable NIC-based reduction on large-scale clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, A.; Fernández, J. C.; Petrini, F.

    2003-01-01

    Many parallel algorithms require efficient support for reduction collectives. Over the years, researchers have developed optimal reduction algorithms by taking into account system size, data size, and the complexity of the reduction operations. However, all of these algorithms have assumed that the reduction processing takes place on the host CPU. Modern Network Interface Cards (NICs) sport programmable processors with substantial memory and thus introduce a fresh variable into the equation. This raises the following interesting challenge: can we take advantage of modern NICs to implement fast reduction operations? In this paper, we take on this challenge in the context of large-scale clusters. Through experiments on the 960-node, 1920-processor ASCI Linux Cluster (ALC) located at Lawrence Livermore National Laboratory, we show that NIC-based reductions indeed perform with reduced latency and improved consistency over host-based algorithms for the common case, and that these benefits scale as the system grows. In the largest configuration tested (1812 processors), our NIC-based algorithm can sum a single-element vector in 73 microseconds with 32-bit integers and in 118 microseconds with 64-bit floating-point numbers. These results represent improvements of 121% and 39%, respectively, with respect to the production-level MPI library.

  9. Evaluation of model-based versus non-parametric monaural noise-reduction approaches for hearing aids.

    PubMed

    Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker

    2012-08-01

    Single-channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular for non-stationary noise conditions. Two prototype model-based algorithms, a speech-model-based and an auditory-model-based algorithm, were compared to a state-of-the-art non-parametric minimum statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of all algorithms, particular attention was given to the usage of a similar Wiener-based gain rule. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory-model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms perform better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. The data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.

  10. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part II: Evaluation of Estimates Using Independent Data

    NASA Technical Reports Server (NTRS)

    Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.

    2006-01-01

    Rainfall rate estimates from spaceborne microwave radiometers are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain-rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). In Part I of this series, improvements of the TMI algorithm that are required to introduce latent heating as an additional algorithm product are described. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5°-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over earlier algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly 2.5°-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain-rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with (a) additional contextual information brought to the estimation problem and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous 0.5°-resolution rain-rate estimates appears to be consistent with the levels of error determined from TMI comparisons with collocated radar. Error model modifications for nonraining situations will be required, however. Sampling error represents only a portion of the total error in monthly 2.5°-resolution TMI estimates; the remaining error is attributed to random and systematic algorithm errors arising from the physical inconsistency and/or nonrepresentativeness of cloud-resolving-model-simulated profiles that support the algorithm.

  11. Tailoring the Employment of Offshore Wind Turbine Support Structure Load Mitigation Controllers

    NASA Astrophysics Data System (ADS)

    Shrestha, Binita; Kühn, Martin

    2016-09-01

    The currently available control concepts to mitigate aerodynamic and hydrodynamic induced support structure loads reduce either fore-aft or side-to-side damage under certain operational conditions. The load reduction is achieved together with an increase in loads in other components of the turbine e.g. pitch actuators or drive train, increasing the risk of unscheduled maintenance. The main objective of this paper is to demonstrate a methodology for reduction of support structure damage equivalent loads (DEL) in fore-aft and side-to-side directions using already available control concepts. A multi-objective optimization problem is formulated to minimize the DELs, while limiting the collateral effects of the control algorithms for load reduction. The optimization gives trigger values of sea state condition for the activation or deactivation of certain control concepts. As a result, by accepting the consumption of a small fraction of the load reserve in the design load envelope of other turbine components, a considerable reduction of the support structure loads is facilitated.

  12. A novel clinical decision support system using improved adaptive genetic algorithm for the assessment of fetal well-being.

    PubMed

    Ravindran, Sindhu; Jambek, Asral Bahari; Muthusamy, Hariharan; Neoh, Siew-Chin

    2015-01-01

    A novel clinical decision support system is proposed in this paper for evaluating fetal well-being from the cardiotocogram (CTG) dataset through an Improved Adaptive Genetic Algorithm (IAGA) and Extreme Learning Machine (ELM). IAGA employs a new scaling technique (called sigma scaling) to avoid premature convergence and applies adaptive crossover and mutation techniques with masking concepts to enhance population diversity. This search algorithm also utilizes three different fitness functions (two single-objective fitness functions and one multi-objective fitness function) to assess its performance. The classification results show that a promising classification accuracy of 94% is obtained with an optimal feature subset using IAGA. The classification results are also compared with those of other feature reduction techniques to substantiate its exhaustive search towards the global optimum. In addition, five other benchmark datasets are used to gauge the strength of the proposed IAGA algorithm.
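
    As a hedged illustration of the sigma-scaling idea mentioned above, the sketch below implements the textbook rule (expected selection count 1 + (f - mean)/(c*std), floored at a small positive value); the IAGA variant, its masking concepts, and its adaptive crossover/mutation are not reproduced here, and the constants are illustrative.

```python
# Hedged sketch of sigma scaling for selection; the paper's IAGA adds masking and
# adaptive crossover/mutation on top of ideas like this.
import numpy as np

def sigma_scaled_probabilities(fitness, c=2.0, floor=0.1):
    """Expected selection count 1 + (f - mean)/(c*std), floored so weak individuals
    keep a nonzero chance of selection, then normalized to probabilities."""
    f = np.asarray(fitness, dtype=float)
    std = f.std()
    if std == 0.0:                               # all fitness equal: uniform selection
        expected = np.ones_like(f)
    else:
        expected = np.clip(1.0 + (f - f.mean()) / (c * std), floor, None)
    return expected / expected.sum()

# Example: draw a mating pool of the same size as the population
# rng = np.random.default_rng(0)
# pool_idx = rng.choice(len(pop), size=len(pop), p=sigma_scaled_probabilities(fit))
```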

  13. Computation of nonparametric convex hazard estimators via profile methods.

    PubMed

    Jankowski, Hanna K; Wellner, Jon A

    2009-05-01

    This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.
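
    The two-step structure described above can be sketched as an outer one-dimensional search over the antimode wrapped around an inner maximizer. In the Python sketch below, profile_loglik is a placeholder for the support-reduction maximization at a fixed antimode, and golden-section search stands in for the paper's bisection scheme; both the names and the search method are illustrative assumptions.

```python
# Hedged sketch: locate the antimode maximizing a quasi-concave profile log-likelihood.
# profile_loglik(antimode) is assumed to call an inner maximizer (e.g. support reduction).
import math

def maximize_profile(profile_loglik, lo, hi, tol=1e-6):
    """Golden-section search for the maximizer of a unimodal function on [lo, hi]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0          # 1/phi, about 0.618
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = profile_loglik(c), profile_loglik(d)
    while b - a > tol:
        if fc > fd:                                # maximum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = profile_loglik(c)
        else:                                      # maximum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = profile_loglik(d)
    return 0.5 * (a + b)
```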

  14. A soft computing based approach using modified selection strategy for feature reduction of medical systems.

    PubMed

    Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat

    2013-01-01

    Systems with high-dimensional input spaces require long processing times and large memory usage. Most attribute selection algorithms suffer from limits on input dimensionality and from information storage problems. These problems are eliminated by the developed feature reduction software, which uses a new modified selection mechanism that adds middle-region solution candidates. The hybrid system software is constructed for reducing the input attributes of systems with a large number of input variables. The designed software also supports the roulette wheel selection mechanism. Linear order crossover is used as the recombination operator. In genetic-algorithm-based soft computing methods, locking into local solutions is another problem that is eliminated by the developed software. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. The obtained results show that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data.
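
    For illustration, the two genetic operators named above can be sketched as follows (Python/NumPy); this is a generic roulette-wheel selection and a linear-order-style crossover for permutation chromosomes, not the authors' implementation, and the gene encoding is an assumption.

```python
# Hedged sketch of roulette-wheel selection and a linear-order-style crossover for
# permutation chromosomes (e.g. orderings of candidate attributes); illustrative only.
import numpy as np

def roulette_wheel_select(population, fitness, rng):
    """Pick one individual with probability proportional to (non-negative) fitness."""
    f = np.asarray(fitness, dtype=float)
    return population[rng.choice(len(population), p=f / f.sum())]

def linear_order_crossover(p1, p2, rng):
    """Keep a random slice of p1 in place; fill the rest with p2's genes in their order."""
    n = len(p1)
    a, b = sorted(rng.choice(n, size=2, replace=False))
    child = [None] * n
    child[a:b + 1] = p1[a:b + 1]
    kept = set(p1[a:b + 1])
    fill = iter(g for g in p2 if g not in kept)
    return [gene if gene is not None else next(fill) for gene in child]

# rng = np.random.default_rng(0)
# child = linear_order_crossover([0, 1, 2, 3, 4], [4, 3, 2, 1, 0], rng)
```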

  15. A Soft Computing Based Approach Using Modified Selection Strategy for Feature Reduction of Medical Systems

    PubMed Central

    Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat

    2013-01-01

    Systems with high-dimensional input spaces require long processing times and large memory usage. Most attribute selection algorithms suffer from limits on input dimensionality and from information storage problems. These problems are eliminated by the developed feature reduction software, which uses a new modified selection mechanism that adds middle-region solution candidates. The hybrid system software is constructed for reducing the input attributes of systems with a large number of input variables. The designed software also supports the roulette wheel selection mechanism. Linear order crossover is used as the recombination operator. In genetic-algorithm-based soft computing methods, locking into local solutions is another problem that is eliminated by the developed software. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. The obtained results show that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data. PMID:23573172

  16. Runtime support for parallelizing data mining algorithms

    NASA Astrophysics Data System (ADS)

    Jin, Ruoming; Agrawal, Gagan

    2002-03-01

    With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.
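
    A hedged sketch of what a reduction-object style interface might look like: the algorithm supplies accumulate/merge operations and the runtime chooses a parallelization technique (full replication shown below). The class and function names are hypothetical, and the Python threading only illustrates the interface, not the paper's shared-memory runtime or its locking variants.

```python
# Hedged sketch of a "reduction object" interface with a full-replication runtime.
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

class CountReduction:
    """Example reduction object: per-key counts (e.g., itemset support counting)."""
    def __init__(self):
        self.counts = {}
    def accumulate(self, record):                  # called once per data record
        for key in record:
            self.counts[key] = self.counts.get(key, 0) + 1
    def merge(self, other):                        # combine two replicas
        for key, c in other.counts.items():
            self.counts[key] = self.counts.get(key, 0) + c
        return self

def parallel_reduce(chunks, make_reduction, n_threads=4):
    def work(chunk):
        r = make_reduction()                       # full replication: private copy per task
        for record in chunk:
            r.accumulate(record)
        return r
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        replicas = list(pool.map(work, chunks))
    return reduce(lambda a, b: a.merge(b), replicas)

# Example: support counts over transactions split into chunks
# result = parallel_reduce([[["a", "b"], ["a"]], [["b", "c"]]], CountReduction)
```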

  17. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 2; Evaluation of Estimates Using Independent Data

    NASA Technical Reports Server (NTRS)

    Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.

    2004-01-01

    Rainfall rate estimates from spaceborne instruments are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). Part I of this study describes improvements in the TMI algorithm that are required to introduce cloud latent heating and drying as additional algorithm products. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5°-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over forerunning algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm, and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly, 2.5°-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with (a) additional contextual information brought to the estimation problem and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous, 0.5°-resolution rain rate estimates appears to be consistent with the levels of error determined from TMI comparisons to collocated radar. Error model modifications for non-raining situations will be required, however. Sampling error appears to represent only a fraction of the total error in monthly, 2.5°-resolution TMI estimates; the remaining error is attributed to physical inconsistency or non-representativeness of cloud-resolving model simulated profiles supporting the algorithm.

  18. Real-time data reduction capabilities at the Langley 7 by 10 foot high speed tunnel

    NASA Technical Reports Server (NTRS)

    Fox, C. H., Jr.

    1980-01-01

    The 7 by 10 foot high speed tunnel performs a wide range of tests employing a variety of model installation methods. To support the reduction of static data from this facility, a generalized wind tunnel data reduction program had been developed for use on the Langley central computer complex. The capabilities of a version of this generalized program adapted for real time use on a dedicated on-site computer are discussed. The input specifications, instructions for the console operator, and full descriptions of the algorithms are included.

  19. Automatic welding detection by an intelligent tool pipe inspection

    NASA Astrophysics Data System (ADS)

    Arizmendi, C. J.; Garcia, W. L.; Quintero, M. A.

    2015-07-01

    This work provides a model based on machine learning techniques for weld recognition, using signals obtained through an in-line inspection tool called a “smart pig” in oil and gas pipelines. The model uses a signal noise reduction phase by means of pre-processing algorithms and attribute-selection techniques. The noise reduction techniques were selected after a literature review and testing with survey data. Subsequently, the model was trained using recognition and classification algorithms, specifically artificial neural networks and support vector machines. Finally, the trained model was validated with different data sets and the performance was measured with cross validation and ROC analysis. The results show that it is possible to identify welds automatically with an efficiency between 90 and 98 percent.

  20. Performance-scalable volumetric data classification for online industrial inspection

    NASA Astrophysics Data System (ADS)

    Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.

    2002-03-01

    Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders-of-magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability, by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.

  1. Phase retrieval with Fourier-weighted projections.

    PubMed

    Guizar-Sicairos, Manuel; Fienup, James R

    2008-03-01

    In coherent lensless imaging, the presence of image sidelobes, which arise as a natural consequence of the finite nature of the detector array, was early recognized as a convergence issue for phase retrieval algorithms that rely on an object support constraint. To mitigate the problem of truncated far-field measurement, a controlled analytic continuation by means of an iterative transform algorithm with weighted projections is proposed and tested. This approach avoids the use of sidelobe reduction windows and achieves full-resolution reconstructions.
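
    To make the iterative-transform idea concrete, the NumPy sketch below runs a standard error-reduction loop with a Fourier-domain weight array w standing in for the paper's weighted projections; the weighting scheme, support handling, and parameters are illustrative assumptions rather than the published algorithm.

```python
# Hedged sketch: error-reduction phase retrieval with a Fourier-domain weight w that
# de-emphasizes the truncated (unmeasured) frequencies; not the authors' exact method.
import numpy as np

def weighted_error_reduction(measured_mag, support, w, n_iter=200, seed=0):
    """measured_mag, w: 2-D arrays in the Fourier domain; support: boolean object mask."""
    rng = np.random.default_rng(seed)
    g = support * rng.random(support.shape)              # random start inside the support
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        phase = np.exp(1j * np.angle(G))
        # Weighted Fourier-magnitude projection: move toward the measured modulus
        # only as strongly as the weight allows at each frequency.
        G = (1.0 - w) * G + w * measured_mag * phase
        g = np.real(np.fft.ifft2(G))
        g = np.where(support & (g > 0), g, 0.0)          # object-domain support / positivity
    return g
```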

  2. Detection of anomaly in human retina using Laplacian Eigenmaps and vectorized matched filtering

    NASA Astrophysics Data System (ADS)

    Yacoubou Djima, Karamatou A.; Simonelli, Lucia D.; Cunningham, Denise; Czaja, Wojciech

    2015-03-01

    We present a novel method for automated anomaly detection on autofluorescence data provided by the National Institutes of Health (NIH). This is motivated by the need for new tools to improve the capability of diagnosing macular degeneration in its early stages, track the progression over time, and test the effectiveness of new treatment methods. In previous work, macular anomalies have been detected automatically through multiscale analysis procedures such as wavelet analysis or dimensionality reduction algorithms followed by a classification algorithm, e.g., Support Vector Machine. The method that we propose is a Vectorized Matched Filtering (VMF) algorithm combined with Laplacian Eigenmaps (LE), a nonlinear dimensionality reduction algorithm with locality preserving properties. By applying LE, we are able to represent the data in the form of eigenimages, some of which accentuate the visibility of anomalies. We pick significant eigenimages and proceed with the VMF algorithm that classifies anomalies across all of these eigenimages simultaneously. To evaluate our performance, we compare our method to two other schemes: a matched filtering algorithm based on anomaly detection on single images and a combination of PCA and VMF. LE combined with the VMF algorithm performs best, yielding a high rate of accurate anomaly detection. This shows the advantage of using a nonlinear approach to represent the data and the effectiveness of VMF, which operates on the images as a data cube rather than individual images.

  3. THE SECOND GENERATION OF THE WASTE REDUCTION (WAR) ALGORITHM: A DECISION SUPPORT SYSTEM FOR GREENER CHEMICAL PROCESSES

    EPA Science Inventory

    chemical process designers using simulation software generate alternative designs for one process. One criterion for evaluating these designs is their potential for adverse environmental impacts due to waste generated, energy consumed, and possibilities for fugitive emissions. Co...

  4. N-Dimensional LLL Reduction Algorithm with Pivoted Reflection

    PubMed Central

    Deng, Zhongliang; Zhu, Di

    2018-01-01

    The Lenstra-Lenstra-Lovász (LLL) lattice reduction algorithm and many of its variants have been widely used by cryptography, multiple-input-multiple-output (MIMO) communication systems and carrier phase positioning in global navigation satellite system (GNSS) to solve the integer least squares (ILS) problem. In this paper, we propose an n-dimensional LLL reduction algorithm (n-LLL), expanding the Lovász condition in LLL algorithm to n-dimensional space in order to obtain a further reduced basis. We also introduce pivoted Householder reflection into the algorithm to optimize the reduction time. For an m-order positive definite matrix, analysis shows that the n-LLL reduction algorithm will converge within finite steps and always produce better results than the original LLL reduction algorithm with n > 2. The simulations clearly prove that n-LLL is better than the original LLL in reducing the condition number of an ill-conditioned input matrix with 39% improvement on average for typical cases, which can significantly reduce the searching space for solving ILS problem. The simulation results also show that the pivoted reflection has significantly reduced the number of swaps in the algorithm by 57%, making n-LLL a more practical reduction algorithm. PMID:29351224
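
    For reference, the classical LLL reduction that n-LLL generalizes can be sketched in a few lines; the version below is the textbook algorithm (size reduction plus the Lovász condition with delta = 0.75) and does not include the n-dimensional extension or the pivoted Householder reflection proposed in the paper.

```python
# Hedged sketch of classical LLL row-basis reduction in floating-point arithmetic.
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt vectors B* and coefficients mu for the rows of B."""
    n = B.shape[0]
    Bs = np.zeros_like(B, dtype=float)
    mu = np.zeros((n, n))
    for i in range(n):
        Bs[i] = B[i]
        for j in range(i):
            mu[i, j] = np.dot(B[i], Bs[j]) / np.dot(Bs[j], Bs[j])
            Bs[i] -= mu[i, j] * Bs[j]
    return Bs, mu

def lll_reduce(B, delta=0.75):
    B = np.array(B, dtype=float)
    n = B.shape[0]
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):                     # size reduction
            q = int(np.rint(mu[k, j]))
            if q != 0:
                B[k] -= q * B[j]
                Bs, mu = gram_schmidt(B)
        # Lovász condition: ||B*_k||^2 >= (delta - mu_{k,k-1}^2) * ||B*_{k-1}||^2
        if np.dot(Bs[k], Bs[k]) >= (delta - mu[k, k - 1] ** 2) * np.dot(Bs[k - 1], Bs[k - 1]):
            k += 1
        else:
            B[[k, k - 1]] = B[[k - 1, k]]                  # swap and step back
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

# Example: lll_reduce([[201, 37], [1648, 297]]) returns a reduced basis of the same lattice.
```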

  5. Comparative intelligibility investigation of single-channel noise-reduction algorithms for Chinese, Japanese, and English.

    PubMed

    Li, Junfeng; Yang, Lin; Zhang, Jianping; Yan, Yonghong; Hu, Yi; Akagi, Masato; Loizou, Philipos C

    2011-05-01

    A large number of single-channel noise-reduction algorithms have been proposed based largely on mathematical principles. Most of these algorithms, however, have been evaluated with English speech. Given the different perceptual cues used by native listeners of different languages including tonal languages, it is of interest to examine whether there are any language effects when the same noise-reduction algorithm is used to process noisy speech in different languages. A comparative evaluation and investigation is undertaken in this study of various single-channel noise-reduction algorithms applied to noisy speech taken from three languages: Chinese, Japanese, and English. Clean speech signals (Chinese words and Japanese words) were first corrupted by three types of noise at two signal-to-noise ratios and then processed by five single-channel noise-reduction algorithms. The processed signals were finally presented to normal-hearing listeners for recognition. Intelligibility evaluation showed that the majority of noise-reduction algorithms did not improve speech intelligibility. Consistent with a previous study with the English language, the Wiener filtering algorithm produced small, but statistically significant, improvements in intelligibility for car and white noise conditions. Significant differences between the performances of noise-reduction algorithms across the three languages were observed.

  6. Simulation-Based Evaluation of Dose-Titration Algorithms for Rapid-Acting Insulin in Subjects with Type 2 Diabetes Mellitus Inadequately Controlled on Basal Insulin and Oral Antihyperglycemic Medications.

    PubMed

    Ma, Xiaosu; Chien, Jenny Y; Johnson, Jennal; Malone, James; Sinha, Vikram

    2017-08-01

    The purpose of this prospective, model-based simulation approach was to evaluate the impact of various rapid-acting mealtime insulin dose-titration algorithms on glycemic control (hemoglobin A1c [HbA1c]). Seven stepwise, glucose-driven insulin dose-titration algorithms were evaluated with a model-based simulation approach by using insulin lispro. Pre-meal blood glucose readings were used to adjust insulin lispro doses. Two control dosing algorithms were included for comparison: no insulin lispro (basal insulin+metformin only) or insulin lispro with fixed doses without titration. Of the seven dosing algorithms assessed, daily adjustment of insulin lispro dose, when glucose targets were met at pre-breakfast, pre-lunch, and pre-dinner, sequentially, demonstrated greater HbA1c reduction at 24 weeks, compared with the other dosing algorithms. Hypoglycemic rates were comparable among the dosing algorithms except for higher rates with the insulin lispro fixed-dose scenario (no titration), as expected. The inferior HbA1c response for the "basal plus metformin only" arm supports the additional glycemic benefit with prandial insulin lispro. Our model-based simulations support a simplified dosing algorithm that does not include carbohydrate counting, but that includes glucose targets for daily dose adjustment to maintain glycemic control with a low risk of hypoglycemia.

  7. On distribution reduction and algorithm implementation in inconsistent ordered information systems.

    PubMed

    Zhang, Yanqin

    2014-01-01

    As one part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation-based information systems. A matrix algorithm for distribution reduction acquisition is presented step by step, and a program implementing the algorithm is provided. The approach provides an effective tool for theoretical research on, and applications of, ordered information systems in practice. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, which shows the effectiveness of the algorithm in complicated information systems.

  8. The staircase method: integrals for periodic reductions of integrable lattice equations

    NASA Astrophysics Data System (ADS)

    van der Kamp, Peter H.; Quispel, G. R. W.

    2010-11-01

    We show, in full generality, that the staircase method (Papageorgiou et al 1990 Phys. Lett. A 147 106-14, Quispel et al 1991 Physica A 173 243-66) provides integrals for mappings, and correspondences, obtained as traveling wave reductions of (systems of) integrable partial difference equations. We apply the staircase method to a variety of equations, including the Korteweg-De Vries equation, the five-point Bruschi-Calogero-Droghei equation, the quotient-difference (QD)-algorithm and the Boussinesq system. We show that, in all these cases, if the staircase method provides r integrals for an n-dimensional mapping, with 2r < n, then one can introduce q <= 2r variables, which reduce the dimension of the mapping from n to q. These dimension-reducing variables are obtained as joint invariants of k-symmetries of the mappings. Our results support the idea that often the staircase method provides sufficiently many integrals for the periodic reductions of integrable lattice equations to be completely integrable. We also study reductions on other quad-graphs than the regular Z^2 lattice, and we prove linear growth of the multi-valuedness of iterates of high-dimensional correspondences obtained as reductions of the QD-algorithm.

  9. Data Mining Methods for Recommender Systems

    NASA Astrophysics Data System (ADS)

    Amatriain, Xavier; Jaimes*, Alejandro; Oliver, Nuria; Pujol, Josep M.

    In this chapter, we give an overview of the main Data Mining techniques used in the context of Recommender Systems. We first describe common preprocessing methods such as sampling or dimensionality reduction. Next, we review the most important classification techniques, including Bayesian Networks and Support Vector Machines. We describe the k-means clustering algorithm and discuss several alternatives. We also present association rules and related algorithms for an efficient training process. In addition to introducing these techniques, we survey their uses in Recommender Systems and present cases where they have been successfully applied.
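
    Since the chapter walks through the k-means clustering algorithm, a minimal NumPy version is sketched below, e.g. for grouping users by rating profile before recommendation; the initialization and stopping criterion are simple illustrative choices, not the chapter's exact presentation.

```python
# Hedged sketch of k-means (Lloyd's algorithm) with random initialization.
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]       # random initial centers
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: recompute each center as the mean of its assigned points.
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):                    # converged
            break
        centers = new_centers
    return labels, centers

# Example: cluster users by their item-rating vectors
# labels, centers = kmeans(ratings_matrix, k=5)
```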

  10. Simplified Antenna Group Determination of RS Overhead Reduced Massive MIMO for Wireless Sensor Networks.

    PubMed

    Lee, Byung Moo

    2017-12-29

    Massive multiple-input multiple-output (MIMO) systems can be applied to support numerous internet of things (IoT) devices using its excessive amount of transmitter (TX) antennas. However, one of the big obstacles for the realization of the massive MIMO system is the overhead of reference signal (RS), because the number of RS is proportional to the number of TX antennas and/or related user equipments (UEs). It has been already reported that antenna group-based RS overhead reduction can be very effective to the efficient operation of massive MIMO, but the method of deciding the number of antennas needed in each group remains an open question. In this paper, we propose a simplified determination scheme of the number of antennas needed in each group for RS overhead reduced massive MIMO to support many IoT devices. Supporting many distributed IoT devices is a framework to configure wireless sensor networks. Our contribution can be divided into two parts. First, we derive simple closed-form approximations of the achievable spectral efficiency (SE) by using zero-forcing (ZF) and matched filtering (MF) precoding for the RS overhead reduced massive MIMO systems with channel estimation error. The closed-form approximations include a channel error factor that can be adjusted according to the method of the channel estimation. Second, based on the closed-form approximation, we present an efficient algorithm determining the number of antennas needed in each group for the group-based RS overhead reduction scheme. The algorithm depends on the exact inverse functions of the derived closed-form approximations of SE. It is verified with theoretical analysis and simulation that the proposed algorithm works well, and thus can be used as an important tool for massive MIMO systems to support many distributed IoT devices.

  11. Simplified Antenna Group Determination of RS Overhead Reduced Massive MIMO for Wireless Sensor Networks

    PubMed Central

    2017-01-01

    Massive multiple-input multiple-output (MIMO) systems can be applied to support numerous internet of things (IoT) devices using its excessive amount of transmitter (TX) antennas. However, one of the big obstacles for the realization of the massive MIMO system is the overhead of reference signal (RS), because the number of RS is proportional to the number of TX antennas and/or related user equipments (UEs). It has been already reported that antenna group-based RS overhead reduction can be very effective to the efficient operation of massive MIMO, but the method of deciding the number of antennas needed in each group remains an open question. In this paper, we propose a simplified determination scheme of the number of antennas needed in each group for RS overhead reduced massive MIMO to support many IoT devices. Supporting many distributed IoT devices is a framework to configure wireless sensor networks. Our contribution can be divided into two parts. First, we derive simple closed-form approximations of the achievable spectral efficiency (SE) by using zero-forcing (ZF) and matched filtering (MF) precoding for the RS overhead reduced massive MIMO systems with channel estimation error. The closed-form approximations include a channel error factor that can be adjusted according to the method of the channel estimation. Second, based on the closed-form approximation, we present an efficient algorithm determining the number of antennas needed in each group for the group-based RS overhead reduction scheme. The algorithm depends on the exact inverse functions of the derived closed-form approximations of SE. It is verified with theoretical analysis and simulation that the proposed algorithm works well, and thus can be used as an important tool for massive MIMO systems to support many distributed IoT devices. PMID:29286339

  12. Theoretical and software considerations for general dynamic analysis using multilevel substructured models

    NASA Technical Reports Server (NTRS)

    Schmidt, R. J.; Dodds, R. H., Jr.

    1985-01-01

    The dynamic analysis of complex structural systems using the finite element method and multilevel substructured models is presented. The fixed-interface method is selected for substructure reduction because of its efficiency, accuracy, and adaptability to restart and reanalysis. This method is extended to reduction of substructures which are themselves composed of reduced substructures. The implementation and performance of the method in a general purpose software system is emphasized. Solution algorithms consistent with the chosen data structures are presented. It is demonstrated that successful finite element software requires the use of software executives to supplement the algorithmic language. The complexity of the implementation of restart and reanalysis procedures illustrates the need for executive systems to support the noncomputational aspects of the software. It is shown that significant computational efficiencies can be achieved through proper use of substructuring and reduction techniques without sacrificing solution accuracy. The restart and reanalysis capabilities and the flexible procedures for multilevel substructured modeling give economical yet accurate analyses of complex structural systems.
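
    As a hedged sketch of the fixed-interface idea (in the Craig-Bampton sense), the snippet below condenses a substructure's interior degrees of freedom onto constraint modes plus a few fixed-interface normal modes using NumPy/SciPy; it illustrates the reduction step only and is an assumption about the method's standard form, not the paper's multilevel software implementation.

```python
# Hedged sketch: fixed-interface (Craig-Bampton-type) reduction of one substructure.
import numpy as np
from scipy.linalg import eigh

def fixed_interface_reduction(K, M, boundary, n_modes=10):
    """K, M: full substructure stiffness/mass; boundary: indices of interface DOFs."""
    n = K.shape[0]
    b = np.asarray(boundary)
    i = np.setdiff1d(np.arange(n), b)                 # interior DOFs
    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    Mii = M[np.ix_(i, i)]
    # Constraint modes: static interior response to unit interface motion.
    Psi = -np.linalg.solve(Kii, Kib)
    # Fixed-interface normal modes: lowest eigenvectors of the clamped-interface problem.
    n_modes = min(n_modes, len(i))
    _, Phi = eigh(Kii, Mii, subset_by_index=[0, n_modes - 1])
    # Transformation from (interface DOFs, modal coordinates) to the full DOF set.
    T = np.zeros((n, len(b) + n_modes))
    T[b, :len(b)] = np.eye(len(b))
    T[np.ix_(i, np.arange(len(b)))] = Psi
    T[np.ix_(i, len(b) + np.arange(n_modes))] = Phi
    return T.T @ K @ T, T.T @ M @ T                   # reduced stiffness and mass
```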

  13. Parallel Algorithms for Groebner-Basis Reduction

    DTIC Science & Technology

    1987-09-25

    Excerpt from the report documentation page: Productivity Engineering in the UNIX Environment; Parallel Algorithms for Groebner-Basis Reduction, Technical Report.

  14. Robotic Lunar Lander Development Project Status

    NASA Technical Reports Server (NTRS)

    Hammond, Monica; Bassler, Julie; Morse, Brian

    2010-01-01

    This slide presentation reviews the status of the development of a robotic lunar lander. The goal of the project is to perform engineering tests and risk reduction activities to support the development of a small lunar lander for lunar surface science. This includes: (1) risk reduction for the flight of the robotic lander (i.e., testing and analyzing various phases of the project); (2) incremental development of the design of the robotic lander, which is to demonstrate autonomous, controlled descent and landing on airless bodies, and design of a thruster configuration for one-sixth of Earth's gravity; (3) cold gas test article in-flight demonstration testing; (4) warm gas testing of the robotic lander design; (5) development and testing of landing algorithms; (6) validation of the algorithms through analysis and test; and (7) tests of the flight propulsion system.

  15. Parallel Lattice Basis Reduction Using a Multi-threaded Schnorr-Euchner LLL Algorithm

    NASA Astrophysics Data System (ADS)

    Backes, Werner; Wetzel, Susanne

    In this paper, we introduce a new parallel variant of the LLL lattice basis reduction algorithm. Our new, multi-threaded algorithm is the first to provide an efficient, parallel implementation of the Schnorr-Euchner algorithm for today’s multi-processor, multi-core computer architectures. Experiments with sparse and dense lattice bases show a speed-up factor of about 1.8 for the 2-thread and about factor 3.2 for the 4-thread version of our new parallel lattice basis reduction algorithm in comparison to the traditional non-parallel algorithm.

  16. Gaussian diffusion sinogram inpainting for X-ray CT metal artifact reduction.

    PubMed

    Peng, Chengtao; Qiu, Bensheng; Li, Ming; Guan, Yihui; Zhang, Cheng; Wu, Zhongyi; Zheng, Jian

    2017-01-05

    Metal objects implanted in the bodies of patients usually generate severe streaking artifacts in reconstructed images of X-ray computed tomography, which degrade the image quality and affect the diagnosis of disease. Therefore, it is essential to reduce these artifacts to meet the clinical demands. In this work, we propose a Gaussian diffusion sinogram inpainting metal artifact reduction algorithm based on prior images to reduce these artifacts for fan-beam computed tomography reconstruction. In this algorithm, prior information that originated from a tissue-classified prior image is used for the inpainting of metal-corrupted projections, and it is incorporated into a Gaussian diffusion function. The prior knowledge is particularly designed to locate the diffusion position and improve the sparsity of the subtraction sinogram, which is obtained by subtracting the prior sinogram of the metal regions from the original sinogram. The sinogram inpainting algorithm is implemented through an approach of diffusing prior energy and is then solved by gradient descent. The performance of the proposed metal artifact reduction algorithm is compared with two conventional metal artifact reduction algorithms, namely the interpolation metal artifact reduction algorithm and the normalized metal artifact reduction algorithm. The experimental datasets used included both simulated and clinical datasets. Subjective evaluation of the results shows that the proposed metal artifact reduction algorithm causes fewer secondary artifacts than the two conventional metal artifact reduction algorithms, which lead to severe secondary artifacts resulting from imperfect interpolation and normalization. Additionally, the objective evaluation shows the proposed approach has the smallest normalized mean absolute deviation and the highest signal-to-noise ratio, indicating that the proposed method has produced the image with the best quality. For both the simulated and the clinical datasets, the proposed algorithm clearly reduced the metal artifacts.

  17. Evolutionary and Neural Computing Based Decision Support System for Disease Diagnosis from Clinical Data Sets in Medical Practice.

    PubMed

    Sudha, M

    2017-09-27

    As a recent trend, various computational intelligence and machine learning approaches have been used for mining inferences hidden in large clinical databases to assist the clinician in strategic decision making. In any target data, irrelevant information may be detrimental, causing confusion for the mining algorithm and degrading the prediction outcome. To address this issue, this study attempts to identify an intelligent approach to assist the disease diagnostic procedure using an optimal set of attributes instead of all attributes present in the clinical data set. In the proposed Application Specific Intelligent Computing (ASIC) decision support system, a rough-set-based genetic algorithm is employed in the pre-processing phase and a back propagation neural network is applied in the training and testing phase. ASIC has two phases: the first phase handles outliers, noisy data, and missing values to obtain qualitative target data and to generate appropriate attribute reduct sets from the input data using a rough-computing-based genetic algorithm centred on a relative fitness function measure. The succeeding phase of this system involves both training and testing of a back propagation neural network classifier on the selected reducts. The model performance is evaluated against widely adopted existing classifiers. The proposed ASIC system for clinical decision support has been tested with breast cancer, fertility diagnosis, and heart disease data sets from the University of California at Irvine (UCI) machine learning repository. The proposed system outperformed the existing approaches, attaining accuracy rates of 95.33%, 97.61%, and 93.04% for breast cancer, fertility, and heart disease diagnosis, respectively.

  18. Post-processing interstitialcy diffusion from molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Bhardwaj, U.; Bukkuru, S.; Warrier, M.

    2016-01-01

    An algorithm to rigorously trace the interstitialcy diffusion trajectory in crystals is developed. The algorithm incorporates unsupervised learning and graph optimization which obviate the need to input extra domain specific information depending on crystal or temperature of the simulation. The algorithm is implemented in a flexible framework as a post-processor to molecular dynamics (MD) simulations. We describe in detail the reduction of interstitialcy diffusion into known computational problems of unsupervised clustering and graph optimization. We also discuss the steps, computational efficiency and key components of the algorithm. Using the algorithm, thermal interstitialcy diffusion from low to near-melting point temperatures is studied. We encapsulate the algorithms in a modular framework with functionality to calculate diffusion coefficients, migration energies and other trajectory properties. The study validates the algorithm by establishing the conformity of output parameters with experimental values and provides detailed insights for the interstitialcy diffusion mechanism. The algorithm along with the help of supporting visualizations and analysis gives convincing details and a new approach to quantifying diffusion jumps, jump-lengths, time between jumps and to identify interstitials from lattice atoms.

  19. Post-processing interstitialcy diffusion from molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhardwaj, U., E-mail: haptork@gmail.com; Bukkuru, S.; Warrier, M.

    2016-01-15

    An algorithm to rigorously trace the interstitialcy diffusion trajectory in crystals is developed. The algorithm incorporates unsupervised learning and graph optimization which obviate the need to input extra domain specific information depending on crystal or temperature of the simulation. The algorithm is implemented in a flexible framework as a post-processor to molecular dynamics (MD) simulations. We describe in detail the reduction of interstitialcy diffusion into known computational problems of unsupervised clustering and graph optimization. We also discuss the steps, computational efficiency and key components of the algorithm. Using the algorithm, thermal interstitialcy diffusion from low to near-melting point temperatures is studied. We encapsulate the algorithms in a modular framework with functionality to calculate diffusion coefficients, migration energies and other trajectory properties. The study validates the algorithm by establishing the conformity of output parameters with experimental values and provides detailed insights for the interstitialcy diffusion mechanism. The algorithm along with the help of supporting visualizations and analysis gives convincing details and a new approach to quantifying diffusion jumps, jump-lengths, time between jumps and to identify interstitials from lattice atoms.

  20. Reduction of artifacts in computer simulation of breast Cooper's ligaments

    NASA Astrophysics Data System (ADS)

    Pokrajac, David D.; Kuperavage, Adam; Maidment, Andrew D. A.; Bakic, Predrag R.

    2016-03-01

    Anthropomorphic software breast phantoms have been introduced as a tool for quantitative validation of breast imaging systems. Efficacy of the validation results depends on the realism of phantom images. The recursive partitioning algorithm based upon the octree simulation has been demonstrated as versatile and capable of efficiently generating large number of phantoms to support virtual clinical trials of breast imaging. Previously, we have observed specific artifacts, (here labeled "dents") on the boundaries of simulated Cooper's ligaments. In this work, we have demonstrated that these "dents" result from the approximate determination of the closest simulated ligament to an examined subvolume (i.e., octree node) of the phantom. We propose a modification of the algorithm that determines the closest ligament by considering a pre-specified number of neighboring ligaments selected based upon the functions that govern the shape of ligaments simulated in the subvolume. We have qualitatively and quantitatively demonstrated that the modified algorithm can lead to elimination or reduction of dent artifacts in software phantoms. In a proof-of concept example, we simulated a 450 ml phantom with 333 compartments at 100 micrometer resolution. After the proposed modification, we corrected 148,105 dents, with an average size of 5.27 voxels (5.27nl). We have also qualitatively analyzed the corresponding improvement in the appearance of simulated mammographic images. The proposed algorithm leads to reduction of linear and star-like artifacts in simulated phantom projections, which can be attributed to dents. Analysis of a larger number of phantoms is ongoing.

  1. Neon reduction program on Cymer ArF light sources

    NASA Astrophysics Data System (ADS)

    Kanawade, Dinesh; Roman, Yzzer; Cacouris, Ted; Thornes, Josh; O'Brien, Kevin

    2016-03-01

    In response to significant neon supply constraints, Cymer has developed a multi-part plan to support its customers. Cymer's primary objective is to ensure that reliable system performance is maintained while minimizing gas consumption. Gas algorithms were optimized to ensure stable performance across all operating conditions. The Cymer neon support plan contains four elements: 1. a gas reduction program to reduce neon use by >50% while maintaining existing performance levels and availability; 2. short-term containment solutions for immediate relief; 3. qualification of additional gas suppliers; and 4. a long-term recycling/reclaim opportunity. The Cymer neon reduction program has shown excellent results, as demonstrated through the comparison of standard gas use versus the new >50%-reduced-neon performance for ArF immersion light sources. Testing included stressful conditions such as repetition rate, duty cycle, and energy target changes. No performance degradation has been observed over typical gas lives.

  2. Analyzing radiation absorption difference of dental substance by using Dual CT

    NASA Astrophysics Data System (ADS)

    Yu, H.; Lee, H. K.; Cho, J. H.; Yang, H. J.; Ju, Y. S.

    2015-07-01

    The purpose of this study was to evaluate the changes in noise and computed tomography (CT) number for each dental substance when using a metal artefact reduction algorithm; dual CT was used for this study. For the study, we produced samples of resin, titanium, gypsum, and wax, which are widely used by dentists. In addition, we made a nickel sample to increase the artefact. The study materials were prepared so that they could be inserted inside the phantom without difficulty. Scans were acquired before and after using the metal artefact reduction algorithm, and we conducted an analysis of the average CT number and noise in each case. As a result, there was no difference in CT number and noise before and after using the metal artefact reduction algorithm. However, when it comes to the noise value of each substance, wax's noise value was the lowest whereas titanium's noise value was the highest after applying the metal artefact reduction algorithm. For nickel, the CT number and noise measured in the artefact area showed a decreased noise value when the metal artefact reduction algorithm was applied. In conclusion, we suggest that the effectiveness of CT examination can be increased by applying the dual-energy metal artefact reduction algorithm.

  3. A framework for porting the NeuroBayes machine learning algorithm to FPGAs

    NASA Astrophysics Data System (ADS)

    Baehr, S.; Sander, O.; Heck, M.; Feindt, M.; Becker, J.

    2016-01-01

    The NeuroBayes machine learning algorithm is deployed for online data reduction at the pixel detector of Belle II. In order to test, characterize, and easily adapt its implementation on FPGAs, a framework was developed. Within the framework, an HDL model written in Python using MyHDL is used for fast exploration of possible configurations. Using input data from physics simulations, figures of merit such as throughput, accuracy, and resource demand of the implementation are evaluated in a fast and flexible way. Functional validation is supported by unit tests and HDL simulation of chosen configurations.

  4. VDA, a Method of Choosing a Better Algorithm with Fewer Validations

    PubMed Central

    Kluger, Yuval

    2011-01-01

    The multitude of bioinformatics algorithms designed for performing a particular computational task presents end-users with the problem of selecting the most appropriate computational tool for analyzing their biological data. The choice of the best available method is often based on expensive experimental validation of the results. We propose an approach to design validation sets for method comparison and performance assessment that are effective in terms of cost and discrimination power. Validation Discriminant Analysis (VDA) is a method for designing a minimal validation dataset to allow reliable comparisons between the performances of different algorithms. Implementation of our VDA approach achieves this reduction by selecting predictions that maximize the minimum Hamming distance between algorithmic predictions in the validation set. We show that VDA can be used to correctly rank algorithms according to their performances. These results are further supported by simulations and by realistic algorithmic comparisons in silico. VDA is a novel, cost-efficient method for minimizing the number of validation experiments necessary for reliable performance estimation and fair comparison between algorithms. Our VDA software is available at http://sourceforge.net/projects/klugerlab/files/VDA/ PMID:22046256
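
    A minimal sketch of the selection idea described above (maximizing the minimum Hamming distance between algorithmic predictions restricted to the chosen validation items). This is not the published VDA software; the greedy loop, binary predictions, and budget parameter are illustrative assumptions.

      import numpy as np
      from itertools import combinations

      def select_validation_set(preds, budget):
          """preds: (n_algorithms, n_items) binary predictions; budget: number of items
          that can be validated experimentally. Greedy max-min Hamming selection."""
          n_alg, n_items = preds.shape
          chosen = []
          for _ in range(budget):
              best_item, best_score = None, -1
              for j in range(n_items):
                  if j in chosen:
                      continue
                  cols = preds[:, chosen + [j]]
                  score = min(np.sum(cols[a] != cols[b])
                              for a, b in combinations(range(n_alg), 2))
                  if score > best_score:
                      best_item, best_score = j, score
              chosen.append(best_item)
          return chosen

      rng = np.random.default_rng(0)
      predictions = rng.integers(0, 2, size=(4, 50))    # 4 algorithms, 50 candidate items
      print(select_validation_set(predictions, budget=5))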

  5. Optimizing Controlling-Value-Based Power Gating with Gate Count and Switching Activity

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Kimura, Shinji

    In this paper, a new heuristic algorithm is proposed to optimize the power domain clustering in controlling-value-based (CV-based) power gating technology. In this algorithm, both the switching activity of sleep signals (p) and the overall numbers of sleep gates (gate count, N) are considered, and the sum of the product of p and N is optimized. The algorithm effectively exerts the total power reduction obtained from the CV-based power gating. Even when the maximum depth is kept to be the same, the proposed algorithm can still achieve power reduction approximately 10% more than that of the prior algorithms. Furthermore, detailed comparison between the proposed heuristic algorithm and other possible heuristic algorithms are also presented. HSPICE simulation results show that over 26% of total power reduction can be obtained by using the new heuristic algorithm. In addition, the effect of dynamic power reduction through the CV-based power gating method and the delay overhead caused by the switching of sleep transistors are also shown in this paper.

  6. An Enhanced PSO-Based Clustering Energy Optimization Algorithm for Wireless Sensor Network.

    PubMed

    Vimalarani, C; Subramanian, R; Sivanandam, S N

    2016-01-01

    Wireless Sensor Network (WSN) is a network which formed with a maximum number of sensor nodes which are positioned in an application environment to monitor the physical entities in a target area, for example, temperature monitoring environment, water level, monitoring pressure, and health care, and various military applications. Mostly sensor nodes are equipped with self-supported battery power through which they can perform adequate operations and communication among neighboring nodes. Maximizing the lifetime of the Wireless Sensor networks, energy conservation measures are essential for improving the performance of WSNs. This paper proposes an Enhanced PSO-Based Clustering Energy Optimization (EPSO-CEO) algorithm for Wireless Sensor Network in which clustering and clustering head selection are done by using Particle Swarm Optimization (PSO) algorithm with respect to minimizing the power consumption in WSN. The performance metrics are evaluated and results are compared with competitive clustering algorithm to validate the reduction in energy consumption.

  7. Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations

    NASA Astrophysics Data System (ADS)

    Bang, Youngsuk

    Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis used to determine important core attribute variations due to input parameter variations, and uncertainty quantification employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, the state variable, or the output response spaces, by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct an ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm to render reduction with the reduction errors bounded based on a user-defined error tolerance which represents the main challenge of existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications. Providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel hybrid ROM algorithms which can be readily integrated into existing methods and offer higher computational efficiency and defendable accuracy of the reduced models. For example, the snapshots ROM algorithm is hybridized with the range finding algorithm to render reduction in the state space, e.g. the flux in reactor calculations. In another implementation, the perturbation theory used to calculate first order derivatives of responses with respect to parameters is hybridized with a forward sensitivity analysis approach to render reduction in the parameter space. Reduction at the state and parameter spaces can be combined to render further reduction at the interface between different physics codes in a multi-physics model with the accuracy quantified in a similar manner to the single physics case. Although the proposed algorithms are generic in nature, we focus here on radiation transport models used in support of the design and analysis of nuclear reactor cores. In particular, we focus on replacing the traditional assembly calculations by ROM models to facilitate the generation of homogenized cross-sections for downstream core calculations. The implication is that assembly calculations could be done instantaneously therefore precluding the need for the expensive evaluation of the few-group cross-sections for all possible core conditions. Given the generic natures of the algorithms, we make an effort to introduce the material in a general form to allow non-nuclear engineers to benefit from this work.
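
    As one concrete example of the range-finding ingredient mentioned above, the sketch below shows a generic randomized range finder that grows an orthonormal basis for a snapshot matrix until a user-defined error tolerance is met; it is a textbook-style illustration under assumed inputs, not code from the dissertation.

      import numpy as np

      def range_finder(A, tol=1e-8, block=10, max_rank=200, seed=0):
          """Grow an orthonormal basis Q until ||A - Q Q^T A|| <= tol * ||A||."""
          rng = np.random.default_rng(seed)
          m, n = A.shape
          Q = np.empty((m, 0))
          normA = np.linalg.norm(A)
          while Q.shape[1] < min(max_rank, min(m, n)):
              Y = A @ rng.standard_normal((n, block))   # random samples of the range
              Y -= Q @ (Q.T @ Y)                        # project out the current basis
              Q = np.linalg.qr(np.hstack([Q, np.linalg.qr(Y)[0]]))[0]
              if np.linalg.norm(A - Q @ (Q.T @ A)) <= tol * normA:
                  break
          return Q

      # Example: a snapshot matrix with rapidly decaying singular values
      rng = np.random.default_rng(1)
      U = np.linalg.qr(rng.standard_normal((500, 40)))[0]
      V = np.linalg.qr(rng.standard_normal((300, 40)))[0]
      A = U @ np.diag(10.0 ** -np.arange(40)) @ V.T
      Q = range_finder(A)
      print("basis size:", Q.shape[1], "relative error:",
            np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A))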

  8. Optimal design of compact spur gear reductions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lattime, S. B.; Kimmel, J. A.; Coe, H. H.

    1992-01-01

    The optimal design of compact spur gear reductions includes the selection of bearing and shaft proportions in addition to gear mesh parameters. Designs for single mesh spur gear reductions are based on optimization of system life, system volume, and system weight including gears, support shafts, and the four bearings. The overall optimization allows component properties to interact, yielding the best composite design. A modified feasible directions search algorithm directs the optimization through a continuous design space. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for optimization. After finding the continuous optimum, the designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearings on the optimal configurations.

  9. Real-Time Adaptive Control of Flow-Induced Cavity Tones

    NASA Technical Reports Server (NTRS)

    Kegerise, Michael A.; Cabell, Randolph H.; Cattafesta, Louis N.

    2004-01-01

    An adaptive generalized predictive control (GPC) algorithm was formulated and applied to the cavity flow-tone problem. The algorithm employs gradient descent to update the GPC coefficients at each time step. The adaptive control algorithm demonstrated multiple Rossiter mode suppression at fixed Mach numbers ranging from 0.275 to 0.38. The algorithm was also able to maintain suppression of multiple cavity tones as the freestream Mach number was varied over a modest range (0.275 to 0.29). Controller performance was evaluated with a measure of output disturbance rejection and an input sensitivity transfer function. The results suggest that disturbances entering the cavity flow are colocated with the control input at the cavity leading edge. In that case, only tonal components of the cavity wall-pressure fluctuations can be suppressed and arbitrary broadband pressure reduction is not possible. In the control-algorithm development, the cavity dynamics are treated as linear and time invariant (LTI) for a fixed Mach number. The experimental results lend support to this treatment.

  10. Experimental evaluation of leaky least-mean-square algorithms for active noise reduction in communication headsets.

    PubMed

    Cartes, David A; Ray, Laura R; Collier, Robert D

    2002-04-01

    An adaptive leaky normalized least-mean-square (NLMS) algorithm has been developed to optimize stability and performance of active noise cancellation systems. The research addresses LMS filter performance issues related to insufficient excitation, nonstationary noise fields, and time-varying signal-to-noise ratio. The adaptive leaky NLMS algorithm is based on a Lyapunov tuning approach in which three candidate algorithms, each of which is a function of the instantaneous measured reference input, measurement noise variance, and filter length, are shown to provide varying degrees of tradeoff between stability and noise reduction performance. Each algorithm is evaluated experimentally for reduction of low frequency noise in communication headsets, and stability and noise reduction performance are compared with those of traditional NLMS and fixed-leakage NLMS algorithms. Acoustic measurements are made in a specially designed acoustic test cell which is based on the original work of Ryan et al. ["Enclosure for low frequency assessment of active noise reducing circumaural headsets and hearing protection," Can. Acoust. 21, 19-20 (1993)] and which provides a highly controlled and uniform acoustic environment. The stability and performance of the active noise reduction system, including a prototype communication headset, are investigated for a variety of noise sources ranging from stationary tonal noise to highly nonstationary measured F-16 aircraft noise over a 20 dB dynamic range. Results demonstrate significant improvements in stability of Lyapunov-tuned LMS algorithms over traditional leaky or nonleaky normalized algorithms, while providing noise reduction performance equivalent to that of the NLMS algorithm for idealized noise fields.
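
    For orientation, a textbook leaky NLMS update is sketched below (fixed leakage, not the Lyapunov-tuned variants evaluated in the paper); the filter length, step size, leakage factor, and toy identification example are assumptions.

      import numpy as np

      def leaky_nlms(x, d, L=8, mu=0.5, leak=1e-3, eps=1e-8):
          """x: reference signal, d: desired signal, L: filter length."""
          w = np.zeros(L)
          e = np.zeros(len(x))
          for n in range(L - 1, len(x)):
              xn = x[n - L + 1:n + 1][::-1]            # x[n], x[n-1], ..., x[n-L+1]
              e[n] = d[n] - w @ xn
              w = (1.0 - mu * leak) * w + mu * e[n] * xn / (xn @ xn + eps)
          return w, e

      # Toy demo: identify a short FIR path in additive noise
      rng = np.random.default_rng(0)
      x = rng.standard_normal(5000)
      h = np.array([0.6, -0.3, 0.1])
      d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
      w, e = leaky_nlms(x, d)
      print(np.round(w[:3], 3))                        # close to h, with a small leakage bias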

  11. AgRISTARS. Supporting research: Algorithms for scene modelling

    NASA Technical Reports Server (NTRS)

    Rassbach, M. E. (Principal Investigator)

    1982-01-01

    The requirements for a comprehensive analysis of LANDSAT or other visual data scenes are defined. The development of a general model of a scene and a computer algorithm for finding the particular model for a given scene is discussed. The modelling system includes a boundary analysis subsystem, which detects all the boundaries and lines in the image and builds a boundary graph; a continuous variation analysis subsystem, which finds gradual variations not well approximated by a boundary structure; and a miscellaneous features analysis, which includes texture, line parallelism, etc. The noise reduction capabilities of this method and its use in image rectification and registration are discussed.

  12. Decoding-Accuracy-Based Sequential Dimensionality Reduction of Spatio-Temporal Neural Activities

    NASA Astrophysics Data System (ADS)

    Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu

    Performance of a brain machine interface (BMI) critically depends on the selection of input data because information embedded in the neural activities is highly redundant. In addition, properly selected input data with a reduced dimension lead to improved decoding generalization ability and decreased computational effort, both of which are significant advantages for clinical applications. In the present paper, we propose an algorithm of sequential dimensionality reduction (SDR) that effectively extracts motor/sensory related spatio-temporal neural activities. The algorithm gradually reduces the input data dimension by dropping neural data spatio-temporally while undermining the decoding accuracy as little as possible. A support vector machine (SVM) was used as the decoder, and tone-induced neural activities in rat auditory cortices were decoded into the test tone frequencies. SDR reduced the input data dimension to a quarter and significantly improved the accuracy of decoding of novel data. Moreover, spatio-temporal neural activity patterns selected by SDR resulted in significantly higher accuracy than high spike rate patterns or conventionally used spatial patterns. These results suggest that the proposed algorithm can improve the generalization ability and decrease the computational effort of decoding.

  13. A greedy algorithm for species selection in dimension reduction of combustion chemistry

    NASA Astrophysics Data System (ADS)

    Hiremath, Varun; Ren, Zhuyin; Pope, Stephen B.

    2010-09-01

    Computational calculations of combustion problems involving large numbers of species and reactions with a detailed description of the chemistry can be very expensive. Numerous dimension reduction techniques have been developed in the past to reduce the computational cost. In this paper, we consider the rate controlled constrained-equilibrium (RCCE) dimension reduction method, in which a set of constrained species is specified. For a given number of constrained species, the 'optimal' set of constrained species is that which minimizes the dimension reduction error. The direct determination of the optimal set is computationally infeasible, and instead we present a greedy algorithm which aims at determining a 'good' set of constrained species; that is, one leading to near-minimal dimension reduction error. The partially-stirred reactor (PaSR) involving methane premixed combustion with chemistry described by the GRI-Mech 1.2 mechanism containing 31 species is used to test the algorithm. Results on dimension reduction errors for different sets of constrained species are presented to assess the effectiveness of the greedy algorithm. It is shown that the first four constrained species selected using the proposed greedy algorithm produce lower dimension reduction error than constraints on the major species: CH4, O2, CO2 and H2O. It is also shown that the first ten constrained species selected using the proposed greedy algorithm produce a non-increasing dimension reduction error with every additional constrained species; and produce the lowest dimension reduction error in many cases tested over a wide range of equivalence ratios, pressures and initial temperatures.

  14. Evaluation of a metal artifacts reduction algorithm applied to postinterventional flat panel detector CT imaging.

    PubMed

    Stidd, D A; Theessen, H; Deng, Y; Li, Y; Scholz, B; Rohkohl, C; Jhaveri, M D; Moftakhar, R; Chen, M; Lopes, D K

    2014-01-01

    Flat panel detector CT images are degraded by streak artifacts caused by radiodense implanted materials such as coils or clips. A new metal artifacts reduction prototype algorithm has been used to minimize these artifacts. The application of this new metal artifacts reduction algorithm was evaluated for flat panel detector CT imaging performed in a routine clinical setting. Flat panel detector CT images were obtained from 59 patients immediately following cerebral endovascular procedures or as surveillance imaging for cerebral endovascular or surgical procedures previously performed. The images were independently evaluated by 7 physicians for metal artifacts reduction on a 3-point scale at 2 locations: immediately adjacent to the metallic implant and 3 cm away from it. The number of visible vessels before and after metal artifacts reduction correction was also evaluated within a 3-cm radius around the metallic implant. The metal artifacts reduction algorithm was applied to the 59 flat panel detector CT datasets without complications. The metal artifacts in the reduction-corrected flat panel detector CT images were significantly reduced in the area immediately adjacent to the implanted metal object (P = .05) and in the area 3 cm away from the metal object (P = .03). The average number of visible vessel segments increased from 4.07 to 5.29 (P = .1235) after application of the metal artifacts reduction algorithm to the flat panel detector CT images. Metal artifacts reduction is an effective method to improve flat panel detector CT images degraded by metal artifacts. Metal artifacts are significantly decreased by the metal artifacts reduction algorithm, and there was a trend toward increased vessel-segment visualization. © 2014 by American Journal of Neuroradiology.

  15. Filtered-x generalized mixed norm (FXGMN) algorithm for active noise control

    NASA Astrophysics Data System (ADS)

    Song, Pucha; Zhao, Haiquan

    2018-07-01

    The standard adaptive filtering algorithm with a single error norm exhibits a slow convergence rate and poor noise reduction performance in certain environments. To overcome this drawback, a filtered-x generalized mixed norm (FXGMN) algorithm for active noise control (ANC) systems is proposed. The FXGMN algorithm is developed by using a convex mixture of lp and lq norms as the cost function, so that it can be viewed as a generalized version of most existing adaptive filtering algorithms, and it reduces to a specific algorithm by choosing certain parameters. In particular, it can be used to solve ANC under Gaussian and non-Gaussian noise environments (including impulsive noise with a symmetric α-stable (SαS) distribution). To further enhance the algorithm performance, namely convergence speed and noise reduction performance, a convex combination of the FXGMN algorithm (C-FXGMN) is presented. Moreover, the computational complexity of the proposed algorithms is analyzed, and a stability condition for the proposed algorithms is provided. Simulation results show that the proposed FXGMN and C-FXGMN algorithms can achieve better convergence speed and higher noise reduction compared to other existing algorithms under various noise input conditions, and the C-FXGMN algorithm outperforms the FXGMN algorithm.
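
    The sketch below gives one reading of the mixed-norm filtered-x update implied by the cost function described above, with a toy primary and secondary path; the exact normalization, sign conventions, and parameter choices in the paper may differ, and all values here are illustrative.

      import numpy as np

      def fxgmn(x, d, S, L=64, mu=1e-3, p=1.2, q=2.0, a=0.5):
          """Filtered-x update driven by the mixed-norm cost a*|e|^p + (1-a)*|e|^q,
          assuming a perfect estimate of the secondary path S."""
          w = np.zeros(L)
          xs = np.convolve(x, S)[:len(x)]              # reference filtered by the S estimate
          y_buf = np.zeros(len(S))                     # recent controller outputs
          e_hist = np.zeros(len(x))
          for n in range(L - 1, len(x)):
              xn = x[n - L + 1:n + 1][::-1]
              y_buf = np.roll(y_buf, 1)
              y_buf[0] = w @ xn                        # anti-noise before the secondary path
              e = d[n] - S @ y_buf                     # residual at the error microphone
              e_hist[n] = e
              grad = a * p * abs(e) ** (p - 1) + (1 - a) * q * abs(e) ** (q - 1)
              w += mu * grad * np.sign(e) * xs[n - L + 1:n + 1][::-1]
          return w, e_hist

      rng = np.random.default_rng(0)
      x = rng.standard_normal(20000)                   # reference noise
      P = np.array([0.9, 0.4, 0.2])                    # toy primary path
      S = np.array([0.7, 0.3])                         # toy secondary path
      d = np.convolve(x, P)[:len(x)]
      w, e = fxgmn(x, d, S)
      print("residual power, first vs last 2000 samples:",
            round(np.mean(e[:2000] ** 2), 4), round(np.mean(e[-2000:] ** 2), 4))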

  16. Two Improved Algorithms for Envelope and Wavefront Reduction

    NASA Technical Reports Server (NTRS)

    Kumfert, Gary; Pothen, Alex

    1997-01-01

    Two algorithms for reordering sparse, symmetric matrices or undirected graphs to reduce envelope and wavefront are considered. The first is a combinatorial algorithm introduced by Sloan and further developed by Duff, Reid, and Scott; we describe enhancements to the Sloan algorithm that improve its quality and reduce its run time. Our test problems fall into two classes with differing asymptotic behavior of their envelope parameters as a function of the weights in the Sloan algorithm. We describe an efficient O(n log n + m) time implementation of the Sloan algorithm, where n is the number of rows (vertices), and m is the number of nonzeros (edges). On a collection of test problems, the improved Sloan algorithm required, on the average, only twice the time required by the simpler Reverse Cuthill-McKee algorithm while improving the mean square wavefront by a factor of three. The second algorithm is a hybrid that combines a spectral algorithm for envelope and wavefront reduction with a refinement step that uses a modified Sloan algorithm. The hybrid algorithm reduces the envelope size and mean square wavefront obtained from the Sloan algorithm at the cost of greater running times. We illustrate how these reductions translate into tangible benefits for frontal Cholesky factorization and incomplete factorization preconditioning.
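
    The Sloan and hybrid algorithms are not available in common scientific Python libraries, but the Reverse Cuthill-McKee baseline they are compared against is. The sketch below reorders a scrambled banded matrix with SciPy's RCM and reports a simple envelope measure; the matrix size, bandwidth, and scrambling are chosen arbitrarily for illustration.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.csgraph import reverse_cuthill_mckee

      def envelope_size(A):
          """Sum over rows i of (i - column index of the first nonzero in row i)."""
          first = np.argmax(A != 0, axis=1)
          return int(np.sum(np.arange(A.shape[0]) - first))

      rng = np.random.default_rng(0)
      n = 400
      band = (sp.eye(n) + sp.eye(n, k=1) + sp.eye(n, k=-1)
              + sp.eye(n, k=2) + sp.eye(n, k=-2)).toarray()
      scramble = rng.permutation(n)
      A = band[np.ix_(scramble, scramble)]        # banded structure hidden by a random ordering

      perm = reverse_cuthill_mckee(sp.csr_matrix(A), symmetric_mode=True)
      A_rcm = A[np.ix_(perm, perm)]
      print("envelope scrambled:", envelope_size(A), " after RCM:", envelope_size(A_rcm))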

  17. SU-E-T-329: Dosimetric Impact of Implementing Metal Artifact Reduction Methods and Metal Energy Deposition Kernels for Photon Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, J; Followill, D; Howell, R

    2015-06-15

    Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline, uncorrected images and metal artifact reduction methods: Philips O-MAR, GE’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals. Thus, these two strategies do have the potential to improve accuracy for patients with metal implants in certain scenarios. This work was supported by Public Health Service grants CA 180803 and CA 10953 awarded by the National Cancer Institute, United States Department of Health and Human Services, and in part by Mobius Medical Systems.

  18. Metal implants on CT: comparison of iterative reconstruction algorithms for reduction of metal artifacts with single energy and spectral CT scanning in a phantom model.

    PubMed

    Fang, Jieming; Zhang, Da; Wilcox, Carol; Heidinger, Benedikt; Raptopoulos, Vassilios; Brook, Alexander; Brook, Olga R

    2017-03-01

    This study assessed the single energy metal artifact reduction (SEMAR) and spectral energy metal artifact reduction (MARS) algorithms in reducing artifacts generated by different metal implants. A phantom was scanned with and without SEMAR (Aquilion One, Toshiba) and MARS (Discovery CT750 HD, GE), with various metal implants. Images were evaluated objectively by measuring the standard deviation in regions of interest and subjectively by two independent reviewers grading on a scale of 0 (no artifact) to 4 (severe artifact). Reviewers also graded new artifacts introduced by the metal artifact reduction algorithms. SEMAR and MARS significantly decreased the variability of the density measurement adjacent to the metal implant, with a median SD (standard deviation of density measurement) of 52.1 HU without SEMAR vs. 12.3 HU with SEMAR, p < 0.001. Median SD without MARS of 63.1 HU decreased to 25.9 HU with MARS, p < 0.001. Median SD with SEMAR is significantly lower than median SD with MARS (p = 0.0011). SEMAR improved subjective image quality with a reduction in overall artifact grading from 3.2 ± 0.7 to 1.4 ± 0.9, p < 0.001. Improvement of overall image quality by MARS did not reach statistical significance (3.2 ± 0.6 to 2.6 ± 0.8, p = 0.088). There was significant introduction of new artifacts by the metal artifact reduction algorithm for MARS (2.4 ± 1.0), but it was minimal with SEMAR (0.4 ± 0.7), p < 0.001. CT iterative reconstruction algorithms with single and spectral energy are both effective in reduction of metal artifacts. The single energy-based algorithm provides better overall image quality than the spectral CT-based algorithm. The spectral metal artifact reduction algorithm introduces mild to moderate artifacts in the far field.

  19. Survival dimensionality reduction (SDR): development and clinical application of an innovative approach to detect epistasis in presence of right-censored data.

    PubMed

    Beretta, Lorenzo; Santaniello, Alessandro; van Riel, Piet L C M; Coenen, Marieke J H; Scorza, Raffaella

    2010-08-06

    Epistasis is recognized as a fundamental part of the genetic architecture of individuals. Several computational approaches have been developed to model gene-gene interactions in case-control studies, however, none of them is suitable for time-dependent analysis. Herein we introduce the Survival Dimensionality Reduction (SDR) algorithm, a non-parametric method specifically designed to detect epistasis in lifetime datasets. The algorithm requires neither specification about the underlying survival distribution nor about the underlying interaction model and proved satisfactorily powerful to detect a set of causative genes in synthetic epistatic lifetime datasets with a limited number of samples and high degree of right-censorship (up to 70%). The SDR method was then applied to a series of 386 Dutch patients with active rheumatoid arthritis that were treated with anti-TNF biological agents. Among a set of 39 candidate genes, none of which showed a detectable marginal effect on anti-TNF responses, the SDR algorithm did find that the rs1801274 SNP in the Fc gamma RIIa gene and the rs10954213 SNP in the IRF5 gene non-linearly interact to predict clinical remission after anti-TNF biologicals. Simulation studies and application in a real-world setting support the capability of the SDR algorithm to model epistatic interactions in candidate-genes studies in presence of right-censored data. http://sourceforge.net/projects/sdrproject/.

  20. Optimal clustering of MGs based on droop controller for improving reliability using a hybrid of harmony search and genetic algorithms.

    PubMed

    Abedini, Mohammad; Moradi, Mohammad H; Hosseinian, S M

    2016-03-01

    This paper proposes a novel method to address reliability and technical problems of microgrids (MGs), based on designing a number of self-adequate autonomous sub-MGs by adopting an MG clustering approach. In doing so, a multi-objective optimization problem is developed in which power loss reduction, voltage profile improvement, and reliability enhancement are considered as the objective functions. To solve the optimization problem, a hybrid algorithm named HS-GA is provided, based on genetic and harmony search algorithms, and a load flow method is given to model different types of DGs as droop controllers. The performance of the proposed method is evaluated in two case studies. The results provide support for the performance of the proposed method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Real-Time Feedback Control of Flow-Induced Cavity Tones. Part 1; Fixed-Gain Control

    NASA Technical Reports Server (NTRS)

    Kegerise, M. A.; Cabell, R. H.; Cattafesta, L. N., III

    2006-01-01

    A generalized predictive control (GPC) algorithm was formulated and applied to the cavity flow-tone problem. The control algorithm demonstrated multiple Rossiter-mode suppression at fixed Mach numbers ranging from 0.275 to 0.38. Controller performance was evaluated with a measure of output disturbance rejection and an input sensitivity transfer function. The results suggest that disturbances entering the cavity flow are collocated with the control input at the cavity leading edge. In that case, only tonal components of the cavity wall-pressure fluctuations can be suppressed and arbitrary broadband pressure reduction is not possible with the present sensor/actuator arrangement. In the control-algorithm development, the cavity dynamics were treated as linear and time invariant (LTI) for a fixed Mach number. The experimental results lend support to that treatment.

  2. Merged or monolithic? Using machine-learning to reconstruct the dynamical history of simulated star clusters

    NASA Astrophysics Data System (ADS)

    Pasquato, Mario; Chung, Chul

    2016-05-01

    Context. Machine-learning (ML) solves problems by learning patterns from data with limited or no human guidance. In astronomy, ML is mainly applied to large observational datasets, e.g. for morphological galaxy classification. Aims: We apply ML to gravitational N-body simulations of star clusters that are either formed by merging two progenitors or evolved in isolation, planning to later identify globular clusters (GCs) that may have a history of merging from observational data. Methods: We create mock-observations from simulated GCs, from which we measure a set of parameters (also called features in the machine-learning field). After carrying out dimensionality reduction on the feature space, the resulting datapoints are fed into various classification algorithms. Using repeated random subsampling validation, we check whether the groups identified by the algorithms correspond to the underlying physical distinction between mergers and monolithically evolved simulations. Results: The three algorithms we considered (C5.0 trees, k-nearest neighbour, and support-vector machines) all achieve a test misclassification rate of about 10% without parameter tuning, with support-vector machines slightly outperforming the others. The first principal component of feature space correlates with cluster concentration. If we exclude it from the regression, the performance of the algorithms is only slightly reduced.

  3. SU-C-207B-02: Maximal Noise Reduction Filter with Anatomical Structures Preservation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maitree, R; Guzman, G; Chundury, A

    Purpose: All medical images contain noise, which can result in an undesirable appearance and can reduce the visibility of anatomical details. A variety of techniques are utilized to reduce noise, such as increasing the image acquisition time and using post-processing noise reduction algorithms. However, these techniques either increase imaging time and cost or reduce tissue contrast and effective spatial resolution, which carry useful diagnostic information. The three main focuses of this study are: 1) to develop a novel approach that can adaptively and maximally reduce noise while preserving valuable details of anatomical structures, 2) to evaluate the effectiveness of available noise reduction algorithms in comparison to the proposed algorithm, and 3) to demonstrate that the proposed noise reduction approach can be used clinically. Methods: To achieve maximal noise reduction without destroying the anatomical details, the proposed approach automatically estimated the local image noise strength levels and detected the anatomical structures, i.e. tissue boundaries. Such information was used to adaptively adjust the strength of the noise reduction filter. The proposed algorithm was tested on 34 repeated swine head datasets and 54 patients' MRI and CT images. The performance was quantitatively evaluated by image quality metrics and manually validated for clinical usage by two radiation oncologists and one radiologist. Results: Qualitative measurements on the repeated swine head images demonstrated that the proposed algorithm efficiently removed noise while preserving the structures and tissue boundaries. In comparisons, the proposed algorithm obtained competitive noise reduction performance and outperformed other filters in preserving anatomical structures. Assessments from the manual validation indicate that the proposed noise reduction algorithm is quite adequate for some clinical usages. Conclusion: According to both clinical evaluation (human expert ranking) and qualitative assessment, the proposed approach has superior noise reduction and anatomical structure preservation capabilities over existing noise removal methods. Senior author Dr. Deshan Yang received research funding from ViewRay and Varian.

  4. Network Reduction Algorithm for Developing Distribution Feeders for Real-Time Simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagarajan, Adarsh; Nelson, Austin A; Prabakar, Kumaraguru

    As advanced grid-support functions (AGF) become more widely used in grid-connected photovoltaic (PV) inverters, utilities are increasingly interested in their impacts when implemented in the field. These effects can be understood by modeling feeders in real-time simulators and testing PV inverters using power hardware-in-the-loop (PHIL) techniques. This paper presents a novel feeder model reduction algorithm using a ruin & reconstruct methodology that enables large feeders to be solved and operated on real-time computing platforms. Two Hawaiian Electric feeder models in Synergi Electric's load flow software were converted to reduced order models in OpenDSS, and subsequently implemented in the OPAL-RT real-time digital testing platform. Smart PV inverters were added to the real-time model with AGF responses modeled after characterizing commercially available hardware inverters. Finally, hardware inverters were tested in conjunction with the real-time model using PHIL techniques so that the effects of AGFs on the feeders could be analyzed.

  5. Network Reduction Algorithm for Developing Distribution Feeders for Real-Time Simulators: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagarajan, Adarsh; Nelson, Austin; Prabakar, Kumaraguru

    As advanced grid-support functions (AGF) become more widely used in grid-connected photovoltaic (PV) inverters, utilities are increasingly interested in their impacts when implemented in the field. These effects can be understood by modeling feeders in real-time systems and testing PV inverters using power hardware-in-the-loop (PHIL) techniques. This paper presents a novel feeder model reduction algorithm using a Monte Carlo method that enables large feeders to be solved and operated on real-time computing platforms. Two Hawaiian Electric feeder models in Synergi Electric's load flow software were converted to reduced order models in OpenDSS, and subsequently implemented in the OPAL-RT real-time digital testing platform. Smart PV inverters were added to the real-time model with AGF responses modeled after characterizing commercially available hardware inverters. Finally, hardware inverters were tested in conjunction with the real-time model using PHIL techniques so that the effects of AGFs on the chosen feeders could be analyzed.

  6. Application of higher harmonic blade feathering for helicopter vibration reduction

    NASA Technical Reports Server (NTRS)

    Powers, R. W.

    1978-01-01

    Higher harmonic blade feathering for helicopter vibration reduction is considered. Recent wind tunnel tests confirmed the effectiveness of higher harmonic control in reducing articulated rotor vibratory hub loads. Several predictive analyses developed in support of the NASA program were shown to be capable of calculating single harmonic control inputs required to minimize a single 4P hub response. In addition, a multiple-input, multiple-output harmonic control predictive analysis was developed. All techniques developed thus far obtain a solution by extracting empirical transfer functions from sampled data. Algorithm data sampling and processing requirements are minimal to encourage adaptive control system application of such techniques in a flight environment.

  7. Classification of small lesions on dynamic breast MRI: Integrating dimension reduction and out-of-sample extension into CADx methodology

    PubMed Central

    Nagarajan, Mahesh B.; Huber, Markus B.; Schlossbauer, Thomas; Leinsinger, Gerda; Krol, Andrzej; Wismüller, Axel

    2014-01-01

    Objective While dimension reduction has been previously explored in computer aided diagnosis (CADx) as an alternative to feature selection, previous implementations of its integration into CADx do not ensure strict separation between training and test data required for the machine learning task. This compromises the integrity of the independent test set, which serves as the basis for evaluating classifier performance. Methods and Materials We propose, implement and evaluate an improved CADx methodology where strict separation is maintained. This is achieved by subjecting the training data alone to dimension reduction; the test data is subsequently processed with out-of-sample extension methods. Our approach is demonstrated in the research context of classifying small diagnostically challenging lesions annotated on dynamic breast magnetic resonance imaging (MRI) studies. The lesions were dynamically characterized through topological feature vectors derived from Minkowski functionals. These feature vectors were then subject to dimension reduction with different linear and non-linear algorithms applied in conjunction with out-of-sample extension techniques. This was followed by classification through supervised learning with support vector regression. Area under the receiver-operating characteristic curve (AUC) was evaluated as the metric of classifier performance. Results Of the feature vectors investigated, the best performance was observed with Minkowski functional ’perimeter’ while comparable performance was observed with ’area’. Of the dimension reduction algorithms tested with ’perimeter’, the best performance was observed with Sammon’s mapping (0.84 ± 0.10) while comparable performance was achieved with exploratory observation machine (0.82 ± 0.09) and principal component analysis (0.80 ± 0.10). Conclusions The results reported in this study with the proposed CADx methodology present a significant improvement over previous results reported with such small lesions on dynamic breast MRI. In particular, non-linear algorithms for dimension reduction exhibited better classification performance than linear approaches, when integrated into our CADx methodology. We also note that while dimension reduction techniques may not necessarily provide an improvement in classification performance over feature selection, they do allow for a higher degree of feature compaction. PMID:24355697
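
    The sketch below illustrates the strict train/test separation emphasized above, with PCA standing in for the dimension reduction step (the paper also uses Sammon's mapping and the exploratory observation machine, which are not shown) and synthetic features in place of the Minkowski-functional vectors; support vector regression scores are evaluated with AUC as in the study.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.svm import SVR
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      X = rng.standard_normal((120, 50))               # stand-in for lesion feature vectors
      y = (X[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(120) > 0).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      reducer = PCA(n_components=10).fit(X_tr)         # dimension reduction on training data only
      Z_tr = reducer.transform(X_tr)
      Z_te = reducer.transform(X_te)                   # out-of-sample extension of the test fold

      scores = SVR(kernel="rbf", C=1.0).fit(Z_tr, y_tr).predict(Z_te)
      print("AUC:", round(roc_auc_score(y_te, scores), 3))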

  8. Implementation of the U.S. Environmental Protection Agency's Waste Reduction (WAR) Algorithm in Cape-Open Based Process Simulators

    EPA Science Inventory

    The Sustainable Technology Division has recently completed an implementation of the U.S. EPA's Waste Reduction (WAR) Algorithm that can be directly accessed from a Cape-Open compliant process modeling environment. The WAR Algorithm add-in can be used in AmsterChem's COFE (Cape-Op...

  9. Challenges and Recent Developments in Hearing Aids: Part I. Speech Understanding in Noise, Microphone Technologies and Noise Reduction Algorithms

    PubMed Central

    Chung, King

    2004-01-01

    This review discusses the challenges in hearing aid design and fitting and the recent developments in advanced signal processing technologies to meet these challenges. The first part of the review discusses the basic concepts and the building blocks of digital signal processing algorithms, namely, the signal detection and analysis unit, the decision rules, and the time constants involved in the execution of the decision. In addition, mechanisms and the differences in the implementation of various strategies used to reduce the negative effects of noise are discussed. These technologies include the microphone technologies that take advantage of the spatial differences between speech and noise and the noise reduction algorithms that take advantage of the spectral difference and temporal separation between speech and noise. The specific technologies discussed in this paper include first-order directional microphones, adaptive directional microphones, second-order directional microphones, microphone matching algorithms, array microphones, multichannel adaptive noise reduction algorithms, and synchrony detection noise reduction algorithms. Verification data for these technologies, if available, are also summarized. PMID:15678225

  10. Comparing Binaural Pre-processing Strategies I: Instrumental Evaluation.

    PubMed

    Baumgärtel, Regina M; Krawczyk-Becker, Martin; Marquardt, Daniel; Völker, Christoph; Hu, Hongmei; Herzke, Tobias; Coleman, Graham; Adiloğlu, Kamil; Ernst, Stephan M A; Gerkmann, Timo; Doclo, Simon; Kollmeier, Birger; Hohmann, Volker; Dietz, Mathias

    2015-12-30

    In a collaborative research project, several monaural and binaural noise reduction algorithms have been comprehensively evaluated. In this article, eight selected noise reduction algorithms were assessed using instrumental measures, with a focus on the instrumental evaluation of speech intelligibility. Four distinct, reverberant scenarios were created to reflect everyday listening situations: a stationary speech-shaped noise, a multitalker babble noise, a single interfering talker, and a realistic cafeteria noise. Three instrumental measures were employed to assess predicted speech intelligibility and predicted sound quality: the intelligibility-weighted signal-to-noise ratio, the short-time objective intelligibility measure, and the perceptual evaluation of speech quality. The results show substantial improvements in predicted speech intelligibility as well as sound quality for the proposed algorithms. The evaluated coherence-based noise reduction algorithm was able to provide improvements in predicted audio signal quality. For the tested single-channel noise reduction algorithm, improvements in intelligibility-weighted signal-to-noise ratio were observed in all but the nonstationary cafeteria ambient noise scenario. Binaural minimum variance distortionless response beamforming algorithms performed particularly well in all noise scenarios. © The Author(s) 2015.

  11. Comparing Binaural Pre-processing Strategies I

    PubMed Central

    Krawczyk-Becker, Martin; Marquardt, Daniel; Völker, Christoph; Hu, Hongmei; Herzke, Tobias; Coleman, Graham; Adiloğlu, Kamil; Ernst, Stephan M. A.; Gerkmann, Timo; Doclo, Simon; Kollmeier, Birger; Hohmann, Volker; Dietz, Mathias

    2015-01-01

    In a collaborative research project, several monaural and binaural noise reduction algorithms have been comprehensively evaluated. In this article, eight selected noise reduction algorithms were assessed using instrumental measures, with a focus on the instrumental evaluation of speech intelligibility. Four distinct, reverberant scenarios were created to reflect everyday listening situations: a stationary speech-shaped noise, a multitalker babble noise, a single interfering talker, and a realistic cafeteria noise. Three instrumental measures were employed to assess predicted speech intelligibility and predicted sound quality: the intelligibility-weighted signal-to-noise ratio, the short-time objective intelligibility measure, and the perceptual evaluation of speech quality. The results show substantial improvements in predicted speech intelligibility as well as sound quality for the proposed algorithms. The evaluated coherence-based noise reduction algorithm was able to provide improvements in predicted audio signal quality. For the tested single-channel noise reduction algorithm, improvements in intelligibility-weighted signal-to-noise ratio were observed in all but the nonstationary cafeteria ambient noise scenario. Binaural minimum variance distortionless response beamforming algorithms performed particularly well in all noise scenarios. PMID:26721920

  12. SU-F-I-09: Improvement of Image Registration Using Total-Variation Based Noise Reduction Algorithms for Low-Dose CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukherjee, S; Farr, J; Merchant, T

    Purpose: To study the effect of total-variation based noise reduction algorithms on improving the image registration of low-dose CBCT for patient positioning in radiation therapy. Methods: In low-dose CBCT, the reconstructed image is degraded by excessive quantum noise. In this study, we developed a total-variation based noise reduction algorithm and studied the effect of the algorithm on noise reduction and image registration accuracy. To study the effect of noise reduction, we calculated the peak signal-to-noise ratio (PSNR). To study the improvement of image registration, we performed image registration between volumetric CT and MV-CBCT images of different head-and-neck patients and calculated the mutual information (MI) and Pearson correlation coefficient (PCC) as similarity metrics. The PSNR, MI, and PCC were calculated for both the noisy and noise-reduced CBCT images. Results: The algorithms were shown to be effective in reducing the noise level and improving the MI and PCC for the low-dose CBCT images tested. For the different head-and-neck patients, a maximum improvement in PSNR of 10 dB with respect to the noisy image was calculated. The improvement of MI and PCC was 9% and 2%, respectively. Conclusion: A total-variation based noise reduction algorithm was studied to improve the image registration between CT and low-dose CBCT. The algorithm showed promising results in reducing the noise from low-dose CBCT images and improving the similarity metrics in terms of MI and PCC.
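
    A small, generic illustration of the two ingredients discussed above, total-variation denoising and PSNR, using off-the-shelf scikit-image routines rather than the authors' implementation; the test image, noise level, and regularization weight are arbitrary.

      import numpy as np
      from skimage import data, img_as_float
      from skimage.restoration import denoise_tv_chambolle
      from skimage.metrics import peak_signal_noise_ratio

      clean = img_as_float(data.camera())
      rng = np.random.default_rng(0)
      noisy = np.clip(clean + 0.1 * rng.standard_normal(clean.shape), 0, 1)

      denoised = denoise_tv_chambolle(noisy, weight=0.1)

      print("PSNR noisy   :", round(peak_signal_noise_ratio(clean, noisy, data_range=1.0), 2), "dB")
      print("PSNR denoised:", round(peak_signal_noise_ratio(clean, denoised, data_range=1.0), 2), "dB")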

  13. Exploring the CAESAR database using dimensionality reduction techniques

    NASA Astrophysics Data System (ADS)

    Mendoza-Schrock, Olga; Raymer, Michael L.

    2012-06-01

    The Civilian American and European Surface Anthropometry Resource (CAESAR) database, containing over 40 anthropometric measurements on over 4000 humans, has been extensively explored for pattern recognition and classification purposes using the raw, original data [1-4]. However, some of the anthropometric variables would be impossible to collect in an uncontrolled environment. Here, we explore the use of dimensionality reduction methods in concert with a variety of classification algorithms for gender classification using only those variables that are readily observable in an uncontrolled environment. Several dimensionality reduction techniques are employed to learn the underlying structure of the data. These techniques include linear projections such as the classical Principal Components Analysis (PCA) and non-linear (manifold learning) techniques, such as Diffusion Maps and the Isomap technique. This paper briefly describes all three techniques, and compares three different classifiers, Naïve Bayes, Adaboost, and Support Vector Machines (SVM), for gender classification in conjunction with each of these three dimensionality reduction approaches.

  14. Squish: Near-Optimal Compression for Archival of Relational Datasets

    PubMed Central

    Gao, Yihan; Parameswaran, Aditya

    2017-01-01

    Relational datasets are being generated at an alarmingly rapid rate across organizations and industries. Compressing these datasets could significantly reduce storage and archival costs. Traditional compression algorithms, e.g., gzip, are suboptimal for compressing relational datasets since they ignore the table structure and relationships between attributes. We study compression algorithms that leverage the relational structure to compress datasets to a much greater extent. We develop Squish, a system that uses a combination of Bayesian Networks and Arithmetic Coding to capture multiple kinds of dependencies among attributes and achieve near-entropy compression rate. Squish also supports user-defined attributes: users can instantiate new data types by simply implementing five functions for a new class interface. We prove the asymptotic optimality of our compression algorithm and conduct experiments to show the effectiveness of our system: Squish achieves a reduction of over 50% in storage size relative to systems developed in prior work on a variety of real datasets. PMID:28180028

  15. JPSS Science Data Services for the Direct Readout Community

    NASA Technical Reports Server (NTRS)

    Chander, Gyanesh; Lutz, Bob

    2014-01-01

    The Suomi National Polar-orbiting Partnership (S-NPP) and Joint Polar Satellite System (JPSS) High Rate Data (HRD) link provides Direct Broadcast data to users in real time, utilizing their own remote field terminals. The Field Terminal Support (FTS) provides the resources needed to support the Direct Readout communities by providing software, documentation, and periodic updates to enable them to produce data products from S-NPP and JPSS. The FTS distribution server will also provide the necessary ancillary and auxiliary data needed for processing the broadcasts, as well as making orbital data available to assist in locating the satellites of interest. In addition, the FTS provides development support for the algorithms and software through the GSFC Direct Readout Laboratory (DRL) International Polar Orbiter Processing Package (IPOPP) and the University of Wisconsin (UWISC) Community Satellite Processing Package (CSPP), to enable users to integrate the algorithms into their remote terminals. The support the JPSS Program provides to the institutions developing and maintaining these two software packages will demonstrate the ability to produce ready-to-use products from the HRD link and provide a risk reduction effort at minimal cost. This paper discusses the key functions and system architecture of the FTS.

  16. The design and implementation of cost-effective algorithms for direct solution of banded linear systems on the vector processor system 32 supercomputer

    NASA Technical Reports Server (NTRS)

    Samba, A. S.

    1985-01-01

    The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail. The performance of the VCR on a three-parameter model problem is also illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is the 'Customized Reduction of Augmented Triangles' (CRAT). CRAT has the dominant characteristics of an efficient VPS 32 algorithm. CRAT is tailored to the pipeline architecture of the VPS 32 and as a consequence the algorithm is implicitly vectorizable.
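
    For reference, the conventional point cyclic reduction that VCR adapts can be sketched for a tridiagonal system as follows; this is a generic textbook version for systems of size 2^k - 1, not the VPS 32 implementation.

      import numpy as np

      def cyclic_reduction(a, b, c, d):
          """Solve a tridiagonal system with sub-, main- and super-diagonals a, b, c
          (a[0] and c[-1] are ignored) for n = 2**k - 1 unknowns."""
          a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
          n = len(b)
          a[0], c[-1] = 0.0, 0.0
          x = np.zeros(n)
          levels = int(round(np.log2(n + 1)))
          # Forward phase: eliminate every other remaining unknown, level by level.
          for l in range(levels - 1):
              h, step = 2 ** l, 2 ** (l + 1)
              for i in range(step - 1, n, step):
                  alpha = -a[i] / b[i - h]
                  beta = -c[i] / b[i + h]
                  b[i] += alpha * c[i - h] + beta * a[i + h]
                  d[i] += alpha * d[i - h] + beta * d[i + h]
                  a[i] = alpha * a[i - h]
                  c[i] = beta * c[i + h]
          # Back substitution from the coarsest level down.
          for l in reversed(range(levels)):
              h, step = 2 ** l, 2 ** (l + 1)
              for i in range(h - 1, n, step):
                  left = x[i - h] if i - h >= 0 else 0.0
                  right = x[i + h] if i + h < n else 0.0
                  x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
          return x

      # Check against a dense solve on a diagonally dominant system of size 2**4 - 1
      rng = np.random.default_rng(0)
      n = 15
      sub, sup = rng.uniform(-1, 0, n), rng.uniform(-1, 0, n)
      main = 4.0 + rng.uniform(0, 1, n)
      rhs = rng.standard_normal(n)
      A = np.diag(main) + np.diag(sub[1:], -1) + np.diag(sup[:-1], 1)
      print(np.allclose(cyclic_reduction(sub, main, sup, rhs), np.linalg.solve(A, rhs)))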

  17. Peak reduction for commercial buildings using energy storage

    NASA Astrophysics Data System (ADS)

    Chua, K. H.; Lim, Y. S.; Morris, S.

    2017-11-01

    Battery-based energy storage has emerged as a cost-effective solution for peak reduction due to the declining price of batteries. In this study, a battery-based energy storage system is developed and implemented to achieve optimal peak reduction for commercial customers given the limited energy capacity of the storage. The energy storage system is formed by three bi-directional power converters rated at 5 kVA and a battery bank with a capacity of 64 kWh. Three control algorithms, namely fixed-threshold, adaptive-threshold, and fuzzy-based control algorithms, have been developed and implemented in the energy storage system in a campus building. The control algorithms are evaluated and compared under different load conditions. The overall experimental results show that the fuzzy-based controller is the most effective of the three controllers for peak reduction. The fuzzy-based control algorithm is capable of incorporating a priori qualitative knowledge and expertise about the load characteristics of the buildings as well as the usable energy without over-discharging the batteries.
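
    A minimal sketch of the fixed-threshold strategy, the simplest of the three controllers described above (the adaptive-threshold and fuzzy-based controllers are not reproduced); the battery ratings loosely follow the figures quoted in the abstract, while the load profile, time step, and threshold are assumptions.

      import numpy as np

      def fixed_threshold_dispatch(load_kw, threshold_kw, e_max_kwh=64.0,
                                   p_max_kw=15.0, dt_h=0.5, soc0=1.0):
          """Discharge the battery to hold grid demand at the threshold; recharge with spare headroom."""
          soc = soc0 * e_max_kwh
          grid = np.empty_like(load_kw)
          for i, p_load in enumerate(load_kw):
              if p_load > threshold_kw:                           # discharge to shave the peak
                  p_bat = min(p_load - threshold_kw, p_max_kw, soc / dt_h)
                  soc -= p_bat * dt_h
              else:                                               # spare headroom recharges the battery
                  p_bat = -min(threshold_kw - p_load, p_max_kw, (e_max_kwh - soc) / dt_h)
                  soc -= p_bat * dt_h
              grid[i] = p_load - p_bat
          return grid

      # Toy half-hourly load profile for one day (kW)
      t = np.arange(48)
      load = 40 + 25 * np.exp(-((t - 28) ** 2) / 30.0)
      grid = fixed_threshold_dispatch(load, threshold_kw=50.0)
      print("peak without storage:", round(load.max(), 1),
            "kW, with storage:", round(grid.max(), 1), "kW")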

  18. Optimization of Selected Remote Sensing Algorithms for Embedded NVIDIA Kepler GPU Architecture

    NASA Technical Reports Server (NTRS)

    Riha, Lubomir; Le Moigne, Jacqueline; El-Ghazawi, Tarek

    2015-01-01

    This paper evaluates the potential of the embedded graphics processing unit in NVIDIA's Tegra K1 for onboard processing. Its performance is compared to a general-purpose multi-core CPU and a full-fledged GPU accelerator. The study uses two algorithms: Wavelet Spectral Dimension Reduction of Hyperspectral Imagery and the Automated Cloud-Cover Assessment (ACCA) algorithm. The Tegra K1 achieved 51% of the performance of a high-end 8-core server Intel Xeon CPU for the ACCA algorithm and 20% for the dimension reduction algorithm, while the Xeon consumed 13.5 times more power.

  19. A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose

    PubMed Central

    Rahman, Mohammad Mizanur; Suksompong, Prapun; Toochinda, Pisanu; Taparugssanagorn, Attaphongse

    2017-01-01

    Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms that have open-ended classification boundaries, such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), is found to suffer from false classification errors on irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance, algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) and a generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer, should be used. The simulation results presented in this paper show that the GRNN has higher correct classification efficiency and false alarm reduction capability than the RBFNN. Because the design of a GRNN and an RBFNN is complex and expensive owing to the large number of neurons required, a simple hyperspheric classification method based on the minimum, maximum, and mean (MMM) values of each class of the training dataset is presented. The MMM algorithm is simple and was found to be fast and efficient in correctly classifying data of the training classes and correctly rejecting data of extraneous odors, thereby reducing false alarms. PMID:28895910

  20. A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose.

    PubMed

    Rahman, Mohammad Mizanur; Charoenlarpnopparut, Chalie; Suksompong, Prapun; Toochinda, Pisanu; Taparugssanagorn, Attaphongse

    2017-09-12

    Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms that have open-ended classification boundaries, such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), is found to suffer from false classification errors on irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance, algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) and a generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer, should be used. The simulation results presented in this paper show that the GRNN has higher correct classification efficiency and false alarm reduction capability than the RBFNN. Because the design of a GRNN and an RBFNN is complex and expensive owing to the large number of neurons required, a simple hyperspheric classification method based on the minimum, maximum, and mean (MMM) values of each class of the training dataset is presented. The MMM algorithm is simple and was found to be fast and efficient in correctly classifying data of the training classes and correctly rejecting data of extraneous odors, thereby reducing false alarms.
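
    The MMM idea lends itself to a very small amount of code. The sketch below is one plausible reading of the abstract, not the published algorithm: each class is summarized by the per-sensor minimum, maximum, and mean of its training samples; a test sample is accepted by a class only if it falls inside that class's min-max box (ties broken by distance to the class mean) and is rejected as an extraneous odor otherwise. Class names, sensor values, and the `MMMClassifier` name are all hypothetical.

```python
import numpy as np

class MMMClassifier:
    """Hypersphere-like classifier from per-class min, max and mean vectors.

    One plausible interpretation of the MMM idea: accept a sample for a
    class only if it lies inside that class's min-max box, break ties by
    distance to the class mean, and reject samples that fit no class
    (treated as an extraneous odor, i.e. no alarm is raised).
    """

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mins_ = {c: X[y == c].min(axis=0) for c in self.classes_}
        self.maxs_ = {c: X[y == c].max(axis=0) for c in self.classes_}
        self.means_ = {c: X[y == c].mean(axis=0) for c in self.classes_}
        return self

    def predict(self, X):
        labels = []
        for x in np.asarray(X):
            candidates = [c for c in self.classes_
                          if np.all(x >= self.mins_[c]) and np.all(x <= self.maxs_[c])]
            if not candidates:
                labels.append("reject")   # extraneous odor -> correctly rejected
            else:
                labels.append(min(candidates,
                                  key=lambda c: np.linalg.norm(x - self.means_[c])))
        return labels

# Toy usage with two odor classes measured by three gas sensors
X = np.array([[0.20, 0.50, 0.10], [0.25, 0.45, 0.15],
              [0.80, 0.10, 0.70], [0.75, 0.20, 0.65]])
y = np.array(["banana", "banana", "mango", "mango"])
clf = MMMClassifier().fit(X, y)
print(clf.predict([[0.22, 0.48, 0.12], [5.0, 5.0, 5.0]]))
```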

  1. Survival dimensionality reduction (SDR): development and clinical application of an innovative approach to detect epistasis in presence of right-censored data

    PubMed Central

    2010-01-01

    Background Epistasis is recognized as a fundamental part of the genetic architecture of individuals. Several computational approaches have been developed to model gene-gene interactions in case-control studies; however, none of them is suitable for time-dependent analysis. Herein we introduce the Survival Dimensionality Reduction (SDR) algorithm, a non-parametric method specifically designed to detect epistasis in lifetime datasets. Results The algorithm requires neither specification of the underlying survival distribution nor of the underlying interaction model, and proved satisfactorily powerful in detecting a set of causative genes in synthetic epistatic lifetime datasets with a limited number of samples and a high degree of right-censorship (up to 70%). The SDR method was then applied to a series of 386 Dutch patients with active rheumatoid arthritis who were treated with anti-TNF biological agents. Among a set of 39 candidate genes, none of which showed a detectable marginal effect on anti-TNF responses, the SDR algorithm did find that the rs1801274 SNP in the FcγRIIa gene and the rs10954213 SNP in the IRF5 gene non-linearly interact to predict clinical remission after anti-TNF biologicals. Conclusions Simulation studies and application in a real-world setting support the capability of the SDR algorithm to model epistatic interactions in candidate-gene studies in the presence of right-censored data. Availability: http://sourceforge.net/projects/sdrproject/ PMID:20691091

  2. Continuous Adaptive Population Reduction (CAPR) for Differential Evolution Optimization.

    PubMed

    Wong, Ieong; Liu, Wenjia; Ho, Chih-Ming; Ding, Xianting

    2017-06-01

    Differential evolution (DE) has been applied extensively in drug combination optimization studies in the past decade. It allows for the identification of desired drug combinations with minimal experimental effort. This article proposes an adaptive population-sizing method for the DE algorithm. Our new method presents improvements in efficiency and convergence over the original DE algorithm and the constant stepwise population-reduction-based DE algorithm, which would lead to a reduced number of cells and animals required to identify an optimal drug combination. The method continuously adjusts the reduction of the population size in accordance with the stage of the optimization process. Our adaptive scheme limits the population reduction to occur only at the exploitation stage. We believe that continuously adjusting to a more effective population size during the evolutionary process is the major reason for the significant improvement in the convergence speed of the DE algorithm. The performance of the method is evaluated through a set of unimodal and multimodal benchmark functions. In combination with self-adaptive schemes for the mutation and crossover constants, this adaptive population reduction method can help shed light on the future direction of a completely parameter-tune-free self-adaptive DE algorithm.

  3. Multiscale computations with a wavelet-adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Rastigejev, Yevgenii Anatolyevich

    A wavelet-based adaptive multiresolution algorithm for the numerical solution of multiscale problems governed by partial differential equations is introduced. The main features of the method include fast algorithms for the calculation of wavelet coefficients and the approximation of derivatives on nonuniform stencils. The connection between the wavelet order and the size of the stencil is established. The algorithm is based on mathematically well-established wavelet theory, which allows us to provide error estimates of the solution that are used in conjunction with an appropriate threshold criterion to adapt the collocation grid. Efficient data structures for grid representation, as well as related computational algorithms to support the grid rearrangement procedure, are developed. The algorithm is applied to the simulation of phenomena described by the Navier-Stokes equations. First, we undertake the study of the ignition and subsequent viscous detonation of a H2 : O2 : Ar mixture in a one-dimensional shock tube. Subsequently, we apply the algorithm to solve the two- and three-dimensional benchmark problem of incompressible flow in a lid-driven cavity at large Reynolds numbers. For these cases we show that solutions of accuracy comparable to the benchmarks are obtained with more than an order of magnitude reduction in degrees of freedom. The simulations show the striking ability of the algorithm to adapt to a solution having different scales at different spatial locations so as to produce accurate results at a relatively low computational cost.

  4. A review on the multivariate statistical methods for dimensional reduction studies

    NASA Astrophysics Data System (ADS)

    Aik, Lim Eng; Kiang, Lam Chee; Mohamed, Zulkifley Bin; Hong, Tan Wei

    2017-05-01

    In this research study we review multivariate statistical methods for dimensionality reduction reported by various researchers. Reducing dimensionality is valuable for accelerating algorithm execution and may also improve the final classification or clustering accuracy. Noisy or even flawed input data often leads to less than desirable algorithm performance. Removing uninformative or misleading data components can indeed help an algorithm discover more general grouping regions and rules and, overall, achieve better performance on new data sets.
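
    As a concrete illustration of the general point made in this review, the short Python example below uses PCA, one widely used multivariate reduction method (not necessarily one surveyed by the authors), to discard noisy, uninformative dimensions before clustering; the synthetic data set and parameter choices are assumptions for demonstration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 200 samples: 3 informative dimensions (two well-separated groups)
# plus 47 pure-noise dimensions
informative = np.vstack([rng.normal(c, 0.5, size=(100, 3)) for c in (0.0, 3.0)])
noise = rng.normal(0.0, 1.0, size=(200, 47))
X = np.hstack([informative, noise])

# Keep enough components to explain 90% of the variance, then cluster
X_red = PCA(n_components=0.90).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_red)
print(X_red.shape, np.bincount(labels))
```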

  5. Integrand-level reduction of loop amplitudes by computational algebraic geometry methods

    NASA Astrophysics Data System (ADS)

    Zhang, Yang

    2012-09-01

    We present an algorithm for the integrand-level reduction of multi-loop amplitudes of renormalizable field theories, based on computational algebraic geometry. This algorithm uses (1) the Gröbner basis method to determine the basis for integrand-level reduction and (2) the primary decomposition of an ideal to classify all inequivalent solutions of unitarity cuts. The resulting basis and cut solutions can be used to reconstruct the integrand from unitarity cuts via polynomial fitting techniques. The basis determination part of the algorithm has been implemented in the Mathematica package BasisDet. The primary decomposition part can be readily carried out by algebraic geometry software using the output of the package BasisDet. The algorithm works in both D = 4 and D = 4 - 2ɛ dimensions, and we present some two- and three-loop examples of applications of this algorithm.

  6. Noise Reduction with Microphone Arrays for Speaker Identification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, Z

    Reducing acoustic noise in audio recordings is an ongoing problem that plagues many applications. This noise is hard to reduce because of interfering sources and the non-stationary behavior of the overall background noise. Many single-channel noise reduction algorithms exist, but they are limited in that the more the noise is reduced, the more the signal of interest is distorted, because the signal and noise overlap in frequency. Acoustic background noise specifically causes problems in the area of speaker identification. Recording a speaker in the presence of acoustic noise ultimately limits the performance and confidence of speaker identification algorithms. In situations where it is impossible to control the environment where the speech sample is taken, noise reduction filtering algorithms need to be developed to clean the recorded speech of background noise. Because single-channel noise reduction algorithms would distort the speech signal, the overall challenge of this project was to see whether spatial information provided by microphone arrays could be exploited to aid in speaker identification. The goals are: (1) test the feasibility of using microphone arrays to reduce background noise in speech recordings; (2) characterize and compare different multichannel noise reduction algorithms; (3) provide recommendations for using these multichannel algorithms; and (4) ultimately answer the question - can the use of microphone arrays aid in speaker identification?

  7. A comparative intelligibility study of single-microphone noise reduction algorithms.

    PubMed

    Hu, Yi; Loizou, Philipos C

    2007-09-01

    An evaluation of the intelligibility of noise reduction algorithms is reported. IEEE sentences and consonants were corrupted by four types of noise, including babble, car, street, and train, at two signal-to-noise ratio levels (0 and 5 dB), and then processed by eight speech enhancement methods encompassing four classes of algorithms: spectral subtractive, subspace, statistical model based, and Wiener-type algorithms. The enhanced speech was presented to normal-hearing listeners for identification. With the exception of a single noise condition, no algorithm produced significant improvements in speech intelligibility. Information transmission analysis of the consonant confusion matrices indicated that no algorithm significantly improved the place feature score, which is critically important for speech recognition. The algorithms that were found in previous studies to perform best in terms of overall quality were not the same algorithms that performed best in terms of speech intelligibility. The subspace algorithm, for instance, was previously found to perform the worst in terms of overall quality, but performed well in the present study in terms of preserving speech intelligibility. Overall, the analysis of consonant confusion matrices suggests that in order for noise reduction algorithms to improve speech intelligibility, they need to improve the place and manner feature scores.

  8. A Variational Formulation of Macro-Particle Algorithms for Kinetic Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Shadwick, B. A.

    2013-10-01

    Macro-particle-based simulation methods are in widespread use in plasma physics; their computational efficiency and intuitive nature are largely responsible for their longevity. In the main, these algorithms are formulated by approximating the continuous equations of motion. For systems governed by a variational principle (such as collisionless plasmas), approximating the equations of motion is known to introduce anomalous behavior, especially in system invariants. We present a variational formulation of particle algorithms for plasma simulation based on a reduction of the distribution function onto a finite collection of macro-particles. As in the usual Particle-In-Cell (PIC) formulation, these macro-particles have a definite momentum and are spatially extended. The primary advantage of this approach is the preservation of the link between symmetries and conservation laws. For example, nothing in the reduction introduces explicit time dependence to the system and, therefore, the continuous-time equations of motion exactly conserve energy; thus, these models are free of grid heating. In addition, the variational formulation allows for constructing models of arbitrary spatial and temporal order. In contrast, the overall accuracy of the usual PIC algorithm is at most second order due to the nature of the force interpolation between the gridded field quantities and the (continuous) particle position. Again in contrast to the usual PIC algorithm, here the macro-particle shape is arbitrary; the spatial extent is completely decoupled from both the grid size and the "smoothness" of the shape; smoother particle shapes are not necessarily larger. For simplicity, we restrict our discussion to one-dimensional, non-relativistic, un-magnetized, electrostatic plasmas. We comment on the extension to the electromagnetic case. Supported by the US DoE under contract numbers DE-FG02-08ER55000 and DE-SC0008382.

  9. A Computerized Decision Support System for Depression in Primary Care

    PubMed Central

    Kurian, Benji T.; Trivedi, Madhukar H.; Grannemann, Bruce D.; Claassen, Cynthia A.; Daly, Ella J.; Sunderajan, Prabha

    2009-01-01

    Objective: In 2004, results from The Texas Medication Algorithm Project (TMAP) showed better clinical outcomes for patients whose physicians adhered to a paper-and-pencil algorithm compared to patients who received standard clinical treatment for major depressive disorder (MDD). However, implementation of and fidelity to the treatment algorithm among various providers was observed to be inadequate. A computerized decision support system (CDSS) for the implementation of the TMAP algorithm for depression has since been developed to improve fidelity and adherence to the algorithm. Method: This was a 2-group, parallel design, clinical trial (one patient group receiving MDD treatment from physicians using the CDSS and the other patient group receiving usual care) conducted at 2 separate primary care clinics in Texas from March 2005 through June 2006. Fifty-five patients with MDD (DSM-IV criteria) with no significant difference in disease characteristics were enrolled, 32 of whom were treated by physicians using CDSS and 23 were treated by physicians using usual care. The study's objective was to evaluate the feasibility and efficacy of implementing a CDSS to assist physicians acutely treating patients with MDD compared to usual care in primary care. Primary efficacy outcomes for depression symptom severity were based on the 17-item Hamilton Depression Rating Scale (HDRS17) evaluated by an independent rater. Results: Patients treated by physicians employing CDSS had significantly greater symptom reduction, based on the HDRS17, than patients treated with usual care (P < .001). Conclusions: The CDSS algorithm, utilizing measurement-based care, was superior to usual care for patients with MDD in primary care settings. Larger randomized controlled trials are needed to confirm these findings. Trial Registration: clinicaltrials.gov Identifier: NCT00551083 PMID:19750065

  10. A computerized decision support system for depression in primary care.

    PubMed

    Kurian, Benji T; Trivedi, Madhukar H; Grannemann, Bruce D; Claassen, Cynthia A; Daly, Ella J; Sunderajan, Prabha

    2009-01-01

    In 2004, results from The Texas Medication Algorithm Project (TMAP) showed better clinical outcomes for patients whose physicians adhered to a paper-and-pencil algorithm compared to patients who received standard clinical treatment for major depressive disorder (MDD). However, implementation of and fidelity to the treatment algorithm among various providers was observed to be inadequate. A computerized decision support system (CDSS) for the implementation of the TMAP algorithm for depression has since been developed to improve fidelity and adherence to the algorithm. This was a 2-group, parallel design, clinical trial (one patient group receiving MDD treatment from physicians using the CDSS and the other patient group receiving usual care) conducted at 2 separate primary care clinics in Texas from March 2005 through June 2006. Fifty-five patients with MDD (DSM-IV criteria) with no significant difference in disease characteristics were enrolled, 32 of whom were treated by physicians using CDSS and 23 were treated by physicians using usual care. The study's objective was to evaluate the feasibility and efficacy of implementing a CDSS to assist physicians acutely treating patients with MDD compared to usual care in primary care. Primary efficacy outcomes for depression symptom severity were based on the 17-item Hamilton Depression Rating Scale (HDRS(17)) evaluated by an independent rater. Patients treated by physicians employing CDSS had significantly greater symptom reduction, based on the HDRS(17), than patients treated with usual care (P < .001). The CDSS algorithm, utilizing measurement-based care, was superior to usual care for patients with MDD in primary care settings. Larger randomized controlled trials are needed to confirm these findings. clinicaltrials.gov Identifier: NCT00551083.

  11. Speech enhancement based on modified phase-opponency detectors

    NASA Astrophysics Data System (ADS)

    Deshmukh, Om D.; Espy-Wilson, Carol Y.

    2005-09-01

    A speech enhancement algorithm based on a neural model was presented by Deshmukh et al. [149th Meeting of the Acoustical Society of America, 2005]. The algorithm consists of a bank of Modified Phase Opponency (MPO) filter pairs tuned to different center frequencies. This algorithm is able to enhance salient spectral features in speech signals even at low signal-to-noise ratios. However, the algorithm introduces musical noise and sometimes misses a spectral peak that is close in frequency to a stronger spectral peak. A refinement to the design of the MPO filters was recently made that takes advantage of the falling spectrum of the speech signal in sonorant regions. The modified set of filters leads to better separation of the noise and speech signals and more accurate enhancement of spectral peaks. The improvements also lead to a significant reduction in musical noise. Continuity algorithms based on the properties of speech signals are used to further reduce the musical-noise effect. The efficiency of the proposed method in enhancing the speech signal when the level of the background noise is fluctuating will be demonstrated. The performance of the improved speech enhancement method will be compared with various spectral subtraction-based methods. [Work supported by NSF BCS0236707.]

  12. Cloud-based mobility management in heterogeneous wireless networks

    NASA Astrophysics Data System (ADS)

    Kravchuk, Serhii; Minochkin, Dmytro; Omiotek, Zbigniew; Bainazarov, Ulan; Weryńska-Bieniasz, RóŻa; Iskakova, Aigul

    2017-08-01

    Mobility management is the key feature that supports the roaming of users between different systems. Handover is an essential aspect in the development of solutions supporting mobility scenarios. The handover process becomes more complex in a heterogeneous environment compared to a homogeneous one. Seamlessness and reduction of delay in servicing handover calls, which can reduce the handover dropping probability, also require complex algorithms to provide the desired QoS for mobile users. The challenging problem of increasing the scalability and availability of handover decision mechanisms is discussed. The aim of the paper is to propose a cloud-based handover-as-a-service concept to cope with the challenges that arise.

  13. Algorithm for AEEG data selection leading to wireless and long term epilepsy monitoring.

    PubMed

    Casson, Alexander J; Yates, David C; Patel, Shyam; Rodriguez-Villegas, Esther

    2007-01-01

    High-quality, wireless ambulatory EEG (AEEG) systems that can operate over extended periods of time are not currently feasible due to the high power consumption of wireless transmitters. Previous work has thus proposed data reduction by transmitting only sections of data that contain candidate epileptic activity. This paper investigates algorithms by which this data selection can be carried out. It is essential that the algorithm is low power and that all possible features are identified, even at the expense of more false detections. Given this, a brief review of spike detection algorithms is carried out with a view to using these algorithms to drive the data reduction process. A CWT-based algorithm is deemed most suitable; the algorithm is described in detail and its performance tested. It is found that over 90% of expert-marked spikes are identified while giving a 40% reduction in the amount of data to be transmitted and analysed. The performance varies with the recording duration in response to each detection, and this effect is also investigated. The proposed algorithm will form the basis of a new AEEG system that allows wireless and longer-term epilepsy monitoring.
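
    A data-selection step of the kind described can be prototyped in a few lines. The sketch below is not the paper's detector; it is an assumed CWT-based stand-in that computes Mexican-hat wavelet coefficients at a few scales, z-scores their envelope, and flags fixed-length windows whose peak response exceeds a threshold for transmission. The scales, window length, and threshold are illustrative parameters.

```python
import numpy as np
import pywt

def select_spike_sections(eeg, fs=256, scales=(8, 16, 24), z_thresh=4.0,
                          window_s=2.0):
    """Flag windows of an EEG channel that contain candidate spikes.

    Computes Mexican-hat CWT coefficients at a few scales, z-scores their
    envelope, and marks any window whose peak exceeds z_thresh so that
    only those sections need to be transmitted.
    """
    coef, _ = pywt.cwt(eeg, scales, "mexh", sampling_period=1.0 / fs)
    envelope = np.abs(coef).max(axis=0)
    z = (envelope - envelope.mean()) / envelope.std()
    win = int(window_s * fs)
    keep = []
    for start in range(0, len(eeg), win):
        if z[start:start + win].max() > z_thresh:
            keep.append((start, min(start + win, len(eeg))))
    return keep   # list of (start, stop) sample indices to transmit

# Synthetic trace: background noise plus one injected spike
fs = 256
eeg = np.random.randn(10 * fs) * 10.0
eeg[5 * fs:5 * fs + 20] += 80.0 * np.hanning(20)
print(select_spike_sections(eeg, fs))
```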

  14. Operational algorithm for ice-water classification on dual-polarized RADARSAT-2 images

    NASA Astrophysics Data System (ADS)

    Zakhvatkina, Natalia; Korosov, Anton; Muckenhuber, Stefan; Sandven, Stein; Babiker, Mohamed

    2017-01-01

    Synthetic Aperture Radar (SAR) data from RADARSAT-2 (RS2) in dual-polarization mode provide additional information for discriminating sea ice and open water compared to single-polarization data. We have developed an automatic algorithm based on dual-polarized RS2 SAR images to distinguish open water (rough and calm) and sea ice. Several technical issues inherent in RS2 data were solved in the pre-processing stage, including thermal noise reduction in HV polarization and correction of angular backscatter dependency in HH polarization. Texture features were explored and used in addition to supervised image classification based on the support vector machines (SVM) approach. The study was conducted in the ice-covered area between Greenland and Franz Josef Land. The algorithm has been trained using 24 RS2 scenes acquired in winter months in 2011 and 2012, and the results were validated against manually derived ice charts of the Norwegian Meteorological Institute. The algorithm was applied on a total of 2705 RS2 scenes obtained from 2013 to 2015, and the validation results showed that the average classification accuracy was 91 ± 4 %.
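
    The classification stage of such a pipeline reduces to training a kernel SVM on per-pixel (or per-patch) feature vectors labeled from ice charts. The sketch below assumes the texture and backscatter features have already been computed and uses synthetic numbers in their place; the feature values, class means, and SVM hyperparameters are illustrative, not those of the published algorithm.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Assumed inputs: per-patch feature vectors such as calibrated HH and HV
# backscatter plus a texture statistic, with labels 0 = open water and
# 1 = sea ice taken from manually derived ice charts.
rng = np.random.default_rng(1)
X_water = rng.normal([-18.0, -25.0, 0.2], [2.0, 2.0, 0.05], size=(300, 3))
X_ice = rng.normal([-12.0, -18.0, 0.6], [2.0, 2.0, 0.05], size=(300, 3))
X = np.vstack([X_water, X_ice])
y = np.r_[np.zeros(300), np.ones(300)]

# Standardize the features, then fit an RBF-kernel SVM
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```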

  15. Parallelization strategies for continuum-generalized method of moments on the multi-thread systems

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Handhika, T.; Ernastuti, Kerami, D.

    2017-07-01

    The Continuum-Generalized Method of Moments (C-GMM) addresses the shortfall of the Generalized Method of Moments (GMM), which is not as efficient as the Maximum Likelihood estimator, by using a continuum of moment conditions in a GMM framework. However, this computation takes a very long time because the regularization parameter must be optimized. These calculations are usually processed sequentially, even though all modern computers are now equipped with hierarchical memory systems and hyperthreading technology, which allow for parallel computing. This paper aims to speed up the C-GMM calculation by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are identified in the original C-GMM algorithm. Two parallel regions contribute significantly to the reduction of computational time: the outer loop and the inner loop. This parallel algorithm is implemented with the standard shared-memory application programming interface, Open Multi-Processing (OpenMP). The experiments show that outer-loop parallelization is the best strategy for any number of observations.
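
    The outer-loop strategy can be illustrated outside of OpenMP as well. The Python sketch below uses `multiprocessing` as an analog of the shared-memory parallelization described in the paper: the outer loop over candidate regularization parameters is distributed across worker processes while each inner evaluation stays sequential. The `objective` function is a dummy stand-in for the C-GMM criterion, and all names and values are assumptions.

```python
from multiprocessing import Pool
import numpy as np

def objective(reg_param, data):
    """Stand-in for the C-GMM criterion evaluated for one candidate
    regularization parameter (the expensive inner computation)."""
    # dummy workload: a quadratic with its minimum at 0.3
    return float(np.sum((data - reg_param) ** 2))

def evaluate_grid_parallel(data, grid, workers=4):
    """Parallelize the outer loop over candidate regularization parameters,
    mirroring the best-performing strategy reported in the paper."""
    with Pool(processes=workers) as pool:
        scores = pool.starmap(objective, [(g, data) for g in grid])
    return grid[int(np.argmin(scores))], scores

if __name__ == "__main__":
    data = np.full(100_000, 0.3)
    grid = np.linspace(0.0, 1.0, 21)
    best, _ = evaluate_grid_parallel(data, grid)
    print("best regularization parameter:", best)
```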

  16. Experimental determination of pore shapes using phase retrieval from q -space NMR diffraction

    NASA Astrophysics Data System (ADS)

    Demberg, Kerstin; Laun, Frederik Bernd; Bertleff, Marco; Bachert, Peter; Kuder, Tristan Anselm

    2018-05-01

    This paper presents an approach to solving the phase problem in nuclear magnetic resonance (NMR) diffusion pore imaging, a method that allows imaging the shape of arbitrary closed pores filled with an NMR-detectable medium for investigation of the microstructure of biological tissue and porous materials. Classical q -space imaging composed of two short diffusion-encoding gradient pulses yields, analogously to diffraction experiments, the modulus squared of the Fourier transform of the pore image which entails an inversion problem: An unambiguous reconstruction of the pore image requires both magnitude and phase. Here the phase information is recovered from the Fourier modulus by applying a phase retrieval algorithm. This allows omitting experimentally challenging phase measurements using specialized temporal gradient profiles. A combination of the hybrid input-output algorithm and the error reduction algorithm was used with dynamically adapting support (shrinkwrap extension). No a priori knowledge on the pore shape was fed to the algorithm except for a finite pore extent. The phase retrieval approach proved successful for simulated data with and without noise and was validated in phantom experiments with well-defined pores using hyperpolarized xenon gas.
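
    The core of the reconstruction can be sketched compactly. The code below implements only the plain error-reduction iteration with a fixed support and a positivity constraint; the hybrid input-output steps and the shrinkwrap (dynamically adapting support) used in the paper are omitted, and the toy pore, grid size, and iteration count are assumptions.

```python
import numpy as np

def error_reduction(fourier_modulus, support, n_iter=500, seed=0):
    """Error-reduction phase retrieval (sketch).

    fourier_modulus: measured |F| of the pore image on a Cartesian q-grid
    support: boolean mask of the allowed pore region (finite extent)
    Returns a real, non-negative estimate of the pore image.
    """
    rng = np.random.default_rng(seed)
    # start from the measured modulus with a random phase
    phase = np.exp(2j * np.pi * rng.random(fourier_modulus.shape))
    g = np.fft.ifft2(fourier_modulus * phase).real
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = fourier_modulus * np.exp(1j * np.angle(G))   # Fourier constraint
        g = np.fft.ifft2(G).real
        g = np.where(support & (g > 0), g, 0.0)          # support + positivity
    return g

# Toy example: recover a rectangular "pore" from its Fourier modulus
img = np.zeros((64, 64)); img[24:40, 20:36] = 1.0
modulus = np.abs(np.fft.fft2(img))
loose_support = np.zeros_like(img, dtype=bool); loose_support[16:48, 12:44] = True
rec = error_reduction(modulus, loose_support)
print(float(np.abs(rec).max()))
```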

  17. Experimental determination of pore shapes using phase retrieval from q-space NMR diffraction.

    PubMed

    Demberg, Kerstin; Laun, Frederik Bernd; Bertleff, Marco; Bachert, Peter; Kuder, Tristan Anselm

    2018-05-01

    This paper presents an approach to solving the phase problem in nuclear magnetic resonance (NMR) diffusion pore imaging, a method that allows imaging the shape of arbitrary closed pores filled with an NMR-detectable medium for investigation of the microstructure of biological tissue and porous materials. Classical q-space imaging composed of two short diffusion-encoding gradient pulses yields, analogously to diffraction experiments, the modulus squared of the Fourier transform of the pore image which entails an inversion problem: An unambiguous reconstruction of the pore image requires both magnitude and phase. Here the phase information is recovered from the Fourier modulus by applying a phase retrieval algorithm. This allows omitting experimentally challenging phase measurements using specialized temporal gradient profiles. A combination of the hybrid input-output algorithm and the error reduction algorithm was used with dynamically adapting support (shrinkwrap extension). No a priori knowledge on the pore shape was fed to the algorithm except for a finite pore extent. The phase retrieval approach proved successful for simulated data with and without noise and was validated in phantom experiments with well-defined pores using hyperpolarized xenon gas.

  18. Speckle noise reduction in quantitative optical metrology techniques by application of the discrete wavelet transformation

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    2002-06-01

    Effective suppression of speckle noise content in interferometric data images can help in improving accuracy and resolution of the results obtained with interferometric optical metrology techniques. In this paper, novel speckle noise reduction algorithms based on the discrete wavelet transformation are presented. The algorithms proceed by: (a) estimating the noise level contained in the interferograms of interest, (b) selecting wavelet families, (c) applying the wavelet transformation using the selected families, (d) wavelet thresholding, and (e) applying the inverse wavelet transformation, producing denoised interferograms. The algorithms are applied to the different stages of the processing procedures utilized for generation of quantitative speckle correlation interferometry data of fiber-optic based opto-electronic holography (FOBOEH) techniques, allowing identification of optimal processing conditions. It is shown that wavelet algorithms are effective for speckle noise reduction while preserving image features otherwise faded with other algorithms.
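
    A generic version of steps (a)-(e) can be written with the PyWavelets package, as sketched below. This is not the authors' algorithm: it estimates the noise level from the finest-scale detail coefficients, soft-thresholds all detail sub-bands with a universal threshold, and inverts the transform. The wavelet family, decomposition level, and synthetic test image are assumptions.

```python
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="db4", level=3):
    """Soft-threshold wavelet shrinkage of a noisy interferogram (sketch).

    The noise level is estimated from the finest-scale diagonal detail
    coefficients (median absolute deviation), a universal threshold is
    applied to all detail sub-bands, and the image is reconstructed by
    the inverse discrete wavelet transform.
    """
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(image.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

# Synthetic fringe pattern corrupted by additive noise
fringes = np.sin(np.linspace(0, 8 * np.pi, 128))[None, :] * np.ones((128, 1))
noisy = fringes + 0.3 * np.random.randn(128, 128)
print(wavelet_denoise(noisy).shape)
```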

  19. Environmental Optimization Using the WAste Reduction Algorithm (WAR)

    EPA Science Inventory

    Traditionally chemical process designs were optimized using purely economic measures such as rate of return. EPA scientists developed the WAste Reduction algorithm (WAR) so that environmental impacts of designs could easily be evaluated. The goal of WAR is to reduce environme...

  20. Finding minimum spanning trees more efficiently for tile-based phase unwrapping

    NASA Astrophysics Data System (ADS)

    Sawaf, Firas; Tatam, Ralph P.

    2006-06-01

    The tile-based phase unwrapping method employs an algorithm for finding the minimum spanning tree (MST) in each tile. We first examine the properties of a tile's representation from a graph-theory viewpoint, observing that it is possible to make use of a more efficient class of MST algorithms. We then describe a novel linear-time algorithm which reduces the size of the MST problem by at least half and, at best, solves it completely. We also show how this algorithm can be applied to a tile using a sliding-window technique. Finally, we show how the reduction algorithm can be combined with any other standard MST algorithm to achieve a more efficient hybrid, using Prim's algorithm for empirical comparison and noting that the reduction algorithm takes only 0.1% of the time taken by the overall hybrid.
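
    For reference, the standard MST step that such a reduction pre-pass feeds into is readily available in scientific libraries. The sketch below builds a tiny tile graph with assumed edge weights and extracts its minimum spanning tree with SciPy; it does not implement the paper's reduction algorithm or the sliding-window hybrid.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

# Assumed toy tile: 4 pixels (nodes) with edge weights derived from
# phase-quality measures; lower weight = more reliable unwrapping path.
weights = np.array([
    [0, 2, 0, 6],
    [2, 0, 3, 8],
    [0, 3, 0, 5],
    [6, 8, 5, 0],
], dtype=float)

mst = minimum_spanning_tree(csr_matrix(weights))
print(mst.toarray())            # non-zero entries are the edges kept
print("total weight:", mst.sum())
```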

  1. A new automated assessment method for contrast-detail images by applying support vector machine and its robustness to nonlinear image processing.

    PubMed

    Takei, Takaaki; Ikeda, Mitsuru; Imai, Kuniharu; Yamauchi-Kawaura, Chiyo; Kato, Katsuhiko; Isoda, Haruo

    2013-09-01

    The automated contrast-detail (C-D) analysis methods developed so far cannot be expected to work well on images processed with nonlinear methods, such as noise reduction methods. Therefore, we have devised a new automated C-D analysis method by applying a support vector machine (SVM) and tested its robustness to nonlinear image processing. We acquired the CDRAD (a commercially available C-D test object) images at a tube voltage of 120 kV and a milliampere-second product (mAs) of 0.5-5.0. A partial-diffusion-equation-based technique was used as the noise reduction method. Three radiologists and three university students participated in the observer performance study. The training data for our SVM method were the classification data scored by one radiologist for the CDRAD images acquired at 1.6 and 3.2 mAs and their noise-reduced images. We also compared the performance of our SVM method with the CDRAD Analyser algorithm. The mean C-D diagrams (that is, plots of the mean of the smallest visible hole diameter vs. hole depth) obtained from our SVM method agreed well with the ones averaged across the six human observers for both original and noise-reduced CDRAD images, whereas the mean C-D diagrams from the CDRAD Analyser algorithm disagreed with the ones from the human observers for both original and noise-reduced CDRAD images. In conclusion, our proposed SVM method for C-D analysis will work well for images processed with the nonlinear noise reduction method as well as for the original radiographic images.

  2. Processing Cones: A Computational Structure for Image Analysis.

    DTIC Science & Technology

    1981-12-01

    A computational structure for image analysis applications, referred to as a processing cone, is described and sample algorithms are presented. A fundamental characteristic of the structure is its hierarchical organization into two-dimensional arrays of decreasing resolution. In this architecture, a prototypical function is defined on a local window of data and applied uniformly to all windows in a parallel manner. Three basic modes of processing are supported in the cone: reduction operations (upward processing), horizontal operations (processing at a single level), and projection operations (downward processing).

  3. INCORPORATING ENVIRONMENTAL AND ECONOMIC CONSIDERATIONS INTO PROCESS DESIGN: THE WASTE REDUCTION (WAR) ALGORITHM

    EPA Science Inventory

    A general theory known as the WAste Reduction (WAR) algorithm has been developed to describe the flow and the generation of potential environmental impact through a chemical process. This theory integrates environmental impact assessment into chemical process design. Potential en...

  4. THE WASTE REDUCTION (WAR) ALGORITHM: ENVIRONMENTAL IMPACTS, ENERGY CONSUMPTION, AND ENGINEERING ECONOMICS

    EPA Science Inventory

    A general theory known as the WAste Reduction (WAR) algorithm has been developed to describe the flow and the generation of potential environmental impact through a chemical process. This theory defines potential environmental impact indexes that characterize the generation and t...

  5. Beam-hardening correction by a surface fitting and phase classification by a least square support vector machine approach for tomography images of geological samples

    NASA Astrophysics Data System (ADS)

    Khan, F.; Enzmann, F.; Kersten, M.

    2015-12-01

    In X-ray computed microtomography (μXCT), image processing is the most important operation prior to image analysis. Such processing mainly involves artefact reduction and image segmentation. We propose a new two-stage post-reconstruction procedure for an image of a geological rock core obtained by polychromatic cone-beam μXCT technology. In the first stage, the beam hardening (BH) is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. The final BH-corrected image is extracted from the residual data, or the difference between the surface elevation values and the original grey-scale values. For the second stage, we propose using a least-squares support vector machine (a non-linear classifier algorithm) to segment the BH-corrected data as a pixel-based multi-classification task. A combination of the two approaches was used to classify a complex multi-mineral rock sample. The Matlab code for this approach is provided in the Appendix. A minor drawback is that the proposed segmentation algorithm may become computationally demanding in the case of a high-dimensional training data set.

  6. Outcomes using exhaled nitric oxide measurements as an adjunct to primary care asthma management.

    PubMed

    Hewitt, Richard S; Modrich, Catherine M; Cowan, Jan O; Herbison, G Peter; Taylor, D Robin

    2009-12-01

    Exhaled nitric oxide (FENO) measurements may help to highlight when inhaled corticosteroid (ICS) therapy should or should not be adjusted in asthma. This is often difficult to judge. Our aim was to evaluate a decision-support algorithm incorporating FENO measurements in a nurse-led asthma clinic. Asthma management was guided by an algorithm based on high (>45 ppb), intermediate (30-45 ppb), or low (<30 ppb) FENO levels and asthma control status. This provided for one of eight possible treatment options, including diagnosis review and ICS dose adjustment. Well-controlled asthma increased from 41% at visit 1 to 68% at visit 5 (p=0.001). The mean fluticasone dose decreased from 312 mcg/day at visit 2 to 211 mcg/day at visit 5 (p=0.022). There was a high level of protocol deviations (25%), often related to concerns about reducing the ICS dose. The percentage fall in FENO associated with a change in asthma status from poor control to good control was 35%. An FENO-based algorithm provided for a reduction in ICS doses without compromising asthma control. However, the results may have been influenced by the education and support which patients received. Reluctance to reduce the ICS dose was an issue which may have influenced the overall results. Australian Clinical Trials Registry # 012605000354684.
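
    The branching structure of such an algorithm can be illustrated with a toy function. In the sketch below, only the FENO thresholds (>45, 30-45, <30 ppb) and the use of control status come from the abstract; the returned action strings are placeholders, not the eight treatment options of the published protocol, and the function name is hypothetical.

```python
def feno_treatment_option(feno_ppb, well_controlled):
    """Toy illustration of FENO-plus-control-status branching.

    Thresholds follow the abstract; the action strings are placeholders,
    not the published eight treatment options.
    """
    if feno_ppb > 45:
        band = "high"
    elif feno_ppb >= 30:
        band = "intermediate"
    else:
        band = "low"

    if band == "high":
        if well_controlled:
            return "maintain ICS dose and recheck FENO"
        return "review adherence/diagnosis; consider increasing ICS"
    if band == "intermediate":
        if well_controlled:
            return "maintain current ICS dose"
        return "consider a moderate ICS adjustment"
    if well_controlled:
        return "consider stepping down ICS"
    return "review diagnosis; consider non-eosinophilic causes"


print(feno_treatment_option(52, well_controlled=False))
```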

  7. A reduction package for cross-dispersed echelle spectrograph data in IDL

    NASA Astrophysics Data System (ADS)

    Hall, Jeffrey C.; Neff, James E.

    1992-12-01

    We have written in IDL a data reduction package that performs reduction and extraction of cross-dispersed echelle spectrograph data. The present package includes a complete set of tools for extracting data from any number of spectral orders with arbitrary tilt and curvature. Essential elements include debiasing and flatfielding of the raw CCD image, removal of scattered light background, either nonoptimal or optimal extraction of data, and wavelength calibration and continuum normalization of the extracted orders. A growing set of support routines permits examination of the frame being processed to provide continuing checks on the statistical properties of the data and on the accuracy of the extraction. We will display some sample reductions and discuss the algorithms used. The inherent simplicity and user-friendliness of the IDL interface make this package a useful tool for spectroscopists. We will provide an email distribution list for those interested in receiving the package, and further documentation will be distributed at the meeting.

  8. SU-E-I-04: Improving CT Quality for Radiation Therapy of Patients with High Body Mass Index Using Iterative Reconstruction Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noid, G; Tai, A; Li, X

    2015-06-15

    Purpose: Iterative reconstruction (IR) algorithms are developed to improve CT image quality (IQ) by reducing noise without diminishing spatial resolution or contrast. CT IQ for patients with a high body mass index (BMI) can suffer from increased noise due to photon starvation. The purpose of this study is to investigate and quantify the IQ enhancement for high-BMI patients through the application of IR algorithms. Methods: CT raw data collected for 6 radiotherapy (RT) patients with BMI greater than or equal to 30 were retrospectively analyzed. All CT data were acquired using a CT scanner (Somatom Definition AS Open, Siemens) installed in a linac room (CT-on-rails) using standard imaging protocols. The CT data were reconstructed using the Sinogram Affirmed Iterative Reconstruction (SAFIRE) and Filtered Back Projection (FBP) methods. IQ metrics of the obtained CTs were compared and correlated with patient depth and BMI. The patient depth was defined as the largest distance from anterior to posterior along the bilateral symmetry axis. Results: IR techniques are demonstrated to preserve contrast and reduce noise in comparison to traditional FBP. Driven by the reduction in noise, the contrast-to-noise ratio is roughly doubled by adopting the highest SAFIRE strength. A significant correlation was observed between patient depth and IR noise reduction through Pearson's correlation test (R = 0.9429/P = 0.0167). The mean patient depth was 30.4 cm and the average relative noise reduction for the strongest iterative reconstruction was 55%. Conclusion: The IR techniques produce a measurable enhancement to CT IQ by reducing the noise. Dramatic noise reduction is evident for the high-BMI patients. The improved CT IQ enables more accurate delineation of tumors and organs at risk and more accurate dose calculations for RT planning and delivery guidance. Supported by Siemens.

  9. On the classification techniques in data mining for microarray data classification

    NASA Astrophysics Data System (ADS)

    Aydadenta, Husna; Adiwijaya

    2018-03-01

    Cancer is one of the deadliest diseases; according to WHO data, cancer caused 8.8 million deaths in 2015, and this number will increase every year if not addressed earlier. Microarray data have become one of the most popular resources for cancer-identification studies in the health field, since microarray data can be used to look at the levels of gene expression in specific cell samples and thereby analyze thousands of genes simultaneously. By using data mining techniques, we can classify microarray samples and thus identify whether or not they indicate cancer. In this paper we discuss several studies that apply data mining techniques to microarray data, such as the Support Vector Machine (SVM), Artificial Neural Network (ANN), Naive Bayes, k-Nearest Neighbor (kNN), and C4.5, together with a simulation of the Random Forest algorithm combined with dimension reduction using Relief. The results in this paper show the performance measure (accuracy) of each classification algorithm (SVM, ANN, Naive Bayes, kNN, C4.5, and Random Forest), with the Random Forest algorithm achieving higher accuracy than the other classification algorithms. It is hoped that this paper can provide some information about the speed, accuracy, performance, and computational cost of each data mining classification technique applied to microarray data.
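
    A typical version of the Random Forest pipeline described here can be assembled with scikit-learn, as sketched below. Because Relief is not part of scikit-learn, a univariate ANOVA filter stands in for the dimension reduction step, and a synthetic data set substitutes for real microarray data; sample counts, feature counts, and hyperparameters are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for microarray data: few samples, many gene-expression
# features. An ANOVA filter replaces Relief for dimension reduction.
X, y = make_classification(n_samples=100, n_features=2000, n_informative=30,
                           random_state=0)

model = make_pipeline(SelectKBest(f_classif, k=100),
                      RandomForestClassifier(n_estimators=300, random_state=0))
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```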

  10. Equivalent Discrete-Time Channel Modeling for Molecular Communication With Emphasize on an Absorbing Receiver.

    PubMed

    Damrath, Martin; Korte, Sebastian; Hoeher, Peter Adam

    2017-01-01

    This paper introduces the equivalent discrete-time channel model (EDTCM) to the area of diffusion-based molecular communication (DBMC). Emphasis is on an absorbing receiver, which is based on the so-called first passage time concept. In the wireless communications community the EDTCM is well known. Therefore, it is anticipated that the EDTCM improves the accessibility of DBMC and supports the adaptation of classical wireless communication algorithms to the area of DBMC. Furthermore, the EDTCM has the capability to provide a remarkable reduction of computational complexity compared to random walk based DBMC simulators. Besides the exact EDTCM, three approximations thereof based on binomial, Gaussian, and Poisson approximation are proposed and analyzed in order to further reduce computational complexity. In addition, the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm is adapted to all four channel models. Numerical results show the performance of the exact EDTCM, illustrate the performance of the adapted BCJR algorithm, and demonstrate the accuracy of the approximations.

  11. Development of adaptive noise reduction filter algorithm for pediatric body images in a multi-detector CT

    NASA Astrophysics Data System (ADS)

    Nishimaru, Eiji; Ichikawa, Katsuhiro; Okita, Izumi; Ninomiya, Yuuji; Tomoshige, Yukihiro; Kurokawa, Takehiro; Ono, Yutaka; Nakamura, Yuko; Suzuki, Masayuki

    2008-03-01

    Recently, several kinds of post-processing image filters that reduce the noise of computed tomography (CT) images have been proposed. However, these image filters are mostly designed for adults. Because they are not very effective in small (< 20 cm) display fields of view (FOV), we cannot use them for pediatric body images (e.g., premature babies and infant children). We have developed a new noise reduction filter algorithm for pediatric body CT images. This algorithm is based on 3D post-processing in which the output pixel values are calculated by nonlinear interpolation in the z-direction on the original volumetric data sets. This algorithm does not need in-plane (axial plane) processing, so the spatial resolution does not change. In phantom studies, our algorithm reduced the SD by up to 40% without affecting the spatial resolution in the x-y plane and along the z-axis, and improved the CNR by up to 30%. This newly developed filter algorithm will be useful for diagnosis and radiation dose reduction in pediatric body CT imaging.

  12. Algorithmic framework for group analysis of differential equations and its application to generalized Zakharov-Kuznetsov equations

    NASA Astrophysics Data System (ADS)

    Huang, Ding-jiang; Ivanova, Nataliya M.

    2016-02-01

    In this paper, we explain in more detail the modern treatment of the problem of group classification of (systems of) partial differential equations (PDEs) from the algorithmic point of view. More precisely, we revise the classical Lie algorithm for constructing symmetries of differential equations, describe the group classification algorithm, and discuss the process of reduction of (systems of) PDEs to (systems of) equations with a smaller number of independent variables in order to construct invariant solutions. The group classification algorithm and reduction process are illustrated by the example of the generalized Zakharov-Kuznetsov (GZK) equations of the form u_t + (F(u))_{xxx} + (G(u))_{xyy} + (H(u))_x = 0. As a result, a complete group classification of the GZK equations is performed and a number of new interesting nonlinear invariant models with non-trivial invariance algebras are obtained. Lie symmetry reductions and exact solutions for two important invariant models, i.e., the classical and modified Zakharov-Kuznetsov equations, are constructed. The algorithmic framework for group analysis of differential equations presented in this paper can also be applied to other nonlinear PDEs.

  13. THE WASTE REDUCTION (WAR) ALGORITHM: ENVIRONMENTAL IMPACTS, ENERGY CONSUMPTION, AND ENGINEERING ECONOMICS

    EPA Science Inventory

    A general theory known as the Waste Reduction (WAR) Algorithm has been developed to describe the flow and the generation of potential environmental impact through a chemical process. The theory defines indexes that characterize the generation and the output of potential environm...

  14. Graph embedding and extensions: a general framework for dimensionality reduction.

    PubMed

    Yan, Shuicheng; Xu, Dong; Zhang, Benyu; Zhang, Hong-Jiang; Yang, Qiang; Lin, Stephen

    2007-01-01

    Over the past few decades, a large family of algorithms - supervised or unsupervised, stemming from statistics or geometry theory - has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding or its linear/kernel/tensor extension of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or a penalty graph that characterizes a statistical or geometric property that should be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. Using this framework as a tool, we propose a new supervised dimensionality reduction algorithm called Marginal Fisher Analysis (MFA), in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. We show that MFA effectively overcomes the limitations of the traditional Linear Discriminant Analysis (LDA) algorithm due to data distribution assumptions and available projection directions. Real face recognition experiments show the superiority of our proposed MFA in comparison to LDA, as well as for the corresponding kernel and tensor extensions.

  15. A procalcitonin-based algorithm to guide antibiotic therapy in secondary peritonitis following emergency surgery: a prospective study with propensity score matching analysis.

    PubMed

    Huang, Ting-Shuo; Huang, Shie-Shian; Shyu, Yu-Chiau; Lee, Chun-Hui; Jwo, Shyh-Chuan; Chen, Pei-Jer; Chen, Huang-Yang

    2014-01-01

    Procalcitonin (PCT)-based algorithms have been used to guide antibiotic therapy in several clinical settings. However, evidence supporting PCT-based algorithms for secondary peritonitis after emergency surgery is scanty. In this study, we aimed to investigate whether a PCT-based algorithm could safely reduce antibiotic exposure in this population. From April 2012 to March 2013, patients who had secondary peritonitis diagnosed at the emergency department and underwent emergency surgery were screened for eligibility. PCT levels were obtained pre-operatively, on post-operative days 1, 3, 5, and 7, and on subsequent days if needed. Antibiotics were discontinued if PCT was <1.0 ng/mL or had decreased by 80% versus day 1, with resolution of clinical signs. Primary endpoints were time to discontinuation of intravenous antibiotics for the first episode and adverse events. Historical controls were retrieved for propensity score matching. After matching, 30 patients in the PCT group and 60 in the control group were included for analysis. The median duration of antibiotic exposure in the PCT group was 3.4 days (interquartile range [IQR] 2.2 days), versus 6.1 days (IQR 3.2 days) in the control group (p < 0.001). The PCT algorithm significantly improves time to antibiotic discontinuation (p < 0.001, log-rank test). The rates of adverse events were comparable between the two groups. A multivariate-adjusted extended Cox model demonstrated that the PCT-based algorithm was significantly associated with an 87% reduction in the hazard of antibiotic exposure within 7 days (hazard ratio [HR] 0.13, 95% CI 0.07-0.21, p < 0.001), and a 68% reduction in the hazard after 7 days (adjusted HR 0.32, 95% CI 0.11-0.99, p = 0.047). Advanced age, coexisting pulmonary diseases, and higher severity of illness were significantly associated with longer durations of antibiotic use. The PCT-based algorithm safely reduces antibiotic exposure in this study. Further randomized trials are needed to confirm our findings and incorporate cost-effectiveness analysis. Australian New Zealand Clinical Trials Registry ACTRN12612000601831.
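
    The discontinuation rule quoted in the abstract maps directly onto a small predicate. The sketch below encodes that rule only (PCT below 1.0 ng/mL or at least an 80% fall from the day-1 value, together with resolution of clinical signs); the function name and the example values are illustrative.

```python
def stop_antibiotics(pct_today, pct_day1, clinical_signs_resolved):
    """Stopping rule from the abstract: discontinue antibiotics when PCT is
    below 1.0 ng/mL or has fallen by at least 80% from the post-operative
    day-1 value, provided clinical signs have resolved."""
    pct_criterion = pct_today < 1.0 or pct_today <= 0.2 * pct_day1
    return pct_criterion and clinical_signs_resolved

print(stop_antibiotics(pct_today=0.8, pct_day1=12.0, clinical_signs_resolved=True))
print(stop_antibiotics(pct_today=2.5, pct_day1=12.0, clinical_signs_resolved=True))
```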

  16. Controller reduction by preserving impulse response energy

    NASA Technical Reports Server (NTRS)

    Craig, Roy R., Jr.; Su, Tzu-Jeng

    1989-01-01

    A model-order reduction algorithm based on a Krylov recurrence formulation is developed to reduce the order of controllers. The reduced-order controller is obtained by projecting the full-order LQG controller onto a Krylov subspace in which either the controllability or the observability grammian is equal to the identity matrix. The reduced-order controller preserves the impulse response energy of the full-order controller and has a parameter-matching property. Two numerical examples drawn from other controller reduction literature are used to illustrate the efficacy of the proposed reduction algorithm.
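
    The flavor of Krylov-subspace controller reduction can be conveyed with a generic one-sided projection, sketched below; this is not the paper's grammian-normalized recurrence or its parameter-matching construction. An Arnoldi-style basis is built for the Krylov subspace generated by the system matrix and input vector, and the state-space matrices are projected onto it. The toy system and the reduced order are assumptions.

```python
import numpy as np

def arnoldi_basis(A, b, m):
    """Orthonormal basis for the Krylov subspace span{b, Ab, ..., A^(m-1) b}."""
    n = b.size
    Q = np.zeros((n, m))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m - 1):
        v = A @ Q[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            v -= (Q[:, i] @ v) * Q[:, i]
        Q[:, j + 1] = v / np.linalg.norm(v)
    return Q

def reduce_state_space(A, B, C, m):
    """One-sided Krylov projection of (A, B, C) onto an m-dimensional subspace."""
    Q = arnoldi_basis(A, B.ravel(), m)
    return Q.T @ A @ Q, Q.T @ B, C @ Q

# Toy full-order system
rng = np.random.default_rng(0)
n = 50
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
Ar, Br, Cr = reduce_state_space(A, B, C, m=8)
print(Ar.shape, Br.shape, Cr.shape)
```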

  17. Increasing feasibility of the field-programmable gate array implementation of an iterative image registration using a kernel-warping algorithm

    NASA Astrophysics Data System (ADS)

    Nguyen, An Hung; Guillemette, Thomas; Lambert, Andrew J.; Pickering, Mark R.; Garratt, Matthew A.

    2017-09-01

    Image registration is a fundamental image processing technique. It is used to spatially align two or more images that have been captured at different times, from different sensors, or from different viewpoints. Many algorithms have been proposed for this task, the most common being the well-known Lucas-Kanade (LK) and Horn-Schunck approaches. However, the main limitation of these approaches is the computational complexity required to implement the large number of iterations necessary for successful alignment of the images. Previously, a multi-pass image interpolation algorithm (MP-I2A) was developed to considerably reduce the number of iterations required for successful registration compared with the LK algorithm. This paper develops a kernel-warping algorithm (KWA), a modified version of the MP-I2A, which requires fewer iterations to successfully register two images and less memory space for field-programmable gate array (FPGA) implementation than the MP-I2A. These reductions increase the feasibility of implementing the proposed algorithm on FPGAs with very limited memory space and other hardware resources. A two-FPGA system, rather than a single-FPGA system, is successfully developed to implement the KWA in order to compensate for the insufficient hardware resources of a single FPGA and to increase the parallel processing ability and scalability of the system.

  18. Continuous intensity map optimization (CIMO): A novel approach to leaf sequencing in step and shoot IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao Daliang; Earl, Matthew A.; Luan, Shuang

    2006-04-15

    A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle³ treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf sequencing time of less than one minute per beam was observed.

  19. Symbolic integration of a class of algebraic functions. [by an algorithmic approach]

    NASA Technical Reports Server (NTRS)

    Ng, E. W.

    1974-01-01

    An algorithm is presented for the symbolic integration of a class of algebraic functions. This class consists of functions made up of rational expressions of an integration variable x and square roots of polynomials, trigonometric and hyperbolic functions of x. The algorithm is shown to consist of the following components: (1) the reduction of input integrands to canonical form; (2) intermediate internal representations of integrals; (3) classification of outputs; and (4) reduction and simplification of outputs to well-known functions.

  20. Multi-phase classification by a least-squares support vector machine approach in tomography images of geological samples

    NASA Astrophysics Data System (ADS)

    Khan, Faisal; Enzmann, Frieder; Kersten, Michael

    2016-03-01

    Image processing of X-ray-computed polychromatic cone-beam micro-tomography (μXCT) data of geological samples mainly involves artefact reduction and phase segmentation. For the former, the main beam-hardening (BH) artefact is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. A Matlab code for this approach is provided in the Appendix. The final BH-corrected image is extracted from the residual data or from the difference between the surface elevation values and the original grey-scale values. For the segmentation, we propose a novel least-squares support vector machine (LS-SVM, an algorithm for pixel-based multi-phase classification) approach. A receiver operating characteristic (ROC) analysis was performed on BH-corrected and uncorrected samples to show that BH correction is in fact an important prerequisite for accurate multi-phase classification. The combination of the two approaches was thus used to successfully classify three multi-phase rock core samples of varying complexity.

  1. Predicting Length of Stay in Intensive Care Units after Cardiac Surgery: Comparison of Artificial Neural Networks and Adaptive Neuro-fuzzy System.

    PubMed

    Maharlou, Hamidreza; Niakan Kalhori, Sharareh R; Shahbazi, Shahrbanoo; Ravangard, Ramin

    2018-04-01

    Accurate prediction of patients' length of stay is highly important. This study compared the performance of artificial neural network and adaptive neuro-fuzzy system algorithms to predict patients' length of stay in intensive care units (ICU) after cardiac surgery. A cross-sectional, analytical, and applied study was conducted. The required data were collected from 311 cardiac patients admitted to intensive care units after surgery at three hospitals of Shiraz, Iran, through a non-random convenience sampling method during the second quarter of 2016. Following the initial processing of influential factors, models were created and evaluated. The results showed that the adaptive neuro-fuzzy algorithm (with mean squared error [MSE] = 7 and R = 0.88) resulted in the creation of a more precise model than the artificial neural network (with MSE = 21 and R = 0.60). The adaptive neuro-fuzzy algorithm produces a more accurate model as it applies both the capabilities of a neural network architecture and experts' knowledge as a hybrid algorithm. It identifies nonlinear components, yielding remarkable results for predicting length of stay, which is a useful calculation output to support ICU management, enabling higher quality of administration and cost reduction.

  2. Energy and Quality-Aware Multimedia Signal Processing

    NASA Astrophysics Data System (ADS)

    Emre, Yunus

    Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture level and algorithm-level techniques that reduce energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating in scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low frequency subband coefficients and smaller values for high frequency subband coefficients. Next, we present use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications such as FIR filter and transform computation. We also present a novel sum of absolute difference (SAD) scheme that is based on most significant bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected. Such a scheme is highly effective in reducing the energy consumption of motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combination of voltage scaling, computation reduction and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for Discrete Cosine Transform shows on average, 33% to 46% reduction in energy consumption while incurring only 0.5dB to 1.5dB loss in PSNR.

  3. Reducing the Volume of NASA Earth-Science Data

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Braverman, Amy J.; Guillaume, Alexandre

    2010-01-01

    A computer program reduces data generated by NASA Earth-science missions into representative clusters characterized by centroids and membership information, thereby reducing the large volume of data to a level more amenable to analysis. The program effects an autonomous data-reduction/clustering process to produce a representative distribution and joint relationships of the data, without assuming a specific type of distribution and relationship and without resorting to domain-specific knowledge about the data. The program implements a combination of a data-reduction algorithm known as the entropy-constrained vector quantization (ECVQ) and an optimization algorithm known as the differential evolution (DE). The combination of algorithms generates the Pareto front of clustering solutions that presents the compromise between the quality of the reduced data and the degree of reduction. Similar prior data-reduction computer programs utilize only a clustering algorithm, the parameters of which are tuned manually by users. In the present program, autonomous optimization of the parameters by means of the DE supplants the manual tuning of the parameters. Thus, the program determines the best set of clustering solutions without human intervention.
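
    As a rough illustration of the ECVQ-plus-DE pairing described above, the hedged Python sketch below tunes the entropy-penalty weight of a toy entropy-constrained vector quantizer with SciPy's differential evolution. The penalty parameter `lam`, the scalarization weight `w`, and the synthetic data are assumptions for illustration; the actual program's objective and Pareto-front construction are not reproduced.

```python
# Hedged sketch: a toy entropy-constrained vector quantizer (ECVQ) whose
# penalty weight is tuned by differential evolution (DE), echoing the
# data-reduction idea described above. Parameter names and the scalarized
# objective are assumptions, not the program's actual formulation.
import numpy as np
from scipy.optimize import differential_evolution

def ecvq(X, k, lam, n_iter=30, seed=0):
    """Toy ECVQ: assign to the code word minimizing distance^2 + lam*(-log p_j)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)].copy()    # initial codebook
    p = np.full(k, 1.0 / k)                                # code-word probabilities
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        labels = (d2 + lam * (-np.log(p + 1e-12))).argmin(1)  # entropy-penalized
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(0)
        counts = np.bincount(labels, minlength=k)
        p = counts / counts.sum()
    distortion = d2[np.arange(len(X)), labels].mean()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()        # bits per sample
    return distortion, entropy

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, (200, 2)) for m in (0, 2, 4)])

def objective(params, w=0.5):
    # Scalarized trade-off between distortion and code entropy; sweeping the
    # weight w would trace an approximation of the Pareto front.
    distortion, entropy = ecvq(X, k=8, lam=params[0])
    return w * distortion + (1 - w) * entropy

result = differential_evolution(objective, bounds=[(0.0, 2.0)], maxiter=15, seed=2)
print(result.x, result.fun)
```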

  4. Adaptive intercolor error prediction coder for lossless color (RGB) picture compression

    NASA Astrophysics Data System (ADS)

    Mann, Y.; Peretz, Y.; Mitchell, Harvey B.

    2001-09-01

    Most of the current lossless compression algorithms, including the new international baseline JPEG-LS algorithm, do not exploit the interspectral correlations that exist between the color planes in an input color picture. To improve the compression performance (i.e., lower the bit rate) it is necessary to exploit these correlations. A major concern is to find efficient methods for exploiting the correlations that, at the same time, are compatible with and can be incorporated into the JPEG-LS algorithm. One such algorithm is the method of intercolor error prediction (IEP), which, when used with the JPEG-LS algorithm, results on average in a reduction of 8% in the overall bit rate. We show how the IEP algorithm can be simply modified so that it nearly doubles the reduction in bit rate, to 15%.

  5. An Eigensystem Realization Algorithm (ERA) for modal parameter identification and model reduction

    NASA Technical Reports Server (NTRS)

    Juang, J. N.; Pappa, R. S.

    1985-01-01

    A method, called the Eigensystem Realization Algorithm (ERA), is developed for modal parameter identification and model reduction of dynamic systems from test data. A new approach is introduced in conjunction with the singular value decomposition technique to derive the basic formulation of minimum order realization which is an extended version of the Ho-Kalman algorithm. The basic formulation is then transformed into modal space for modal parameter identification. Two accuracy indicators are developed to quantitatively identify the system modes and noise modes. For illustration of the algorithm, examples are shown using simulation data and experimental data for a rectangular grid structure.

  6. Distributed Function Mining for Gene Expression Programming Based on Fast Reduction.

    PubMed

    Deng, Song; Yue, Dong; Yang, Le-chan; Fu, Xiong; Feng, Ya-zhou

    2016-01-01

    For high-dimensional and massive data sets, traditional centralized gene expression programming (GEP) or improved algorithms lead to increased run-time and decreased prediction accuracy. To solve this problem, this paper proposes a new improved algorithm called distributed function mining for gene expression programming based on fast reduction (DFMGEP-FR). In DFMGEP-FR, fast attribution reduction in binary search algorithms (FAR-BSA) is proposed to quickly find the optimal attribution set, and the function consistency replacement algorithm is given to solve integration of the local function model. Thorough comparative experiments for DFMGEP-FR, centralized GEP and the parallel gene expression programming algorithm based on simulated annealing (parallel GEPSA) are included in this paper. For the waveform, mushroom, connect-4 and musk datasets, the comparative results show that the average time-consumption of DFMGEP-FR drops by 89.09%, 88.85%, 85.79% and 93.06%, respectively, in contrast to centralized GEP and by 12.5%, 8.42%, 9.62% and 13.75%, respectively, compared with parallel GEPSA. Six well-studied UCI test data sets demonstrate the efficiency and capability of our proposed DFMGEP-FR algorithm for distributed function mining.

  7. Laboratory for Engineering Man/Machine Systems (LEMS): System identification, model reduction and deconvolution filtering using Fourier based modulating signals and high order statistics

    NASA Technical Reports Server (NTRS)

    Pan, Jianqiang

    1992-01-01

    Several important problems in the fields of signal processing and model identification are addressed, including system structure identification, frequency response determination, high-order model reduction, high-resolution frequency analysis, and deconvolution filtering. Each of these topics involves a wide range of applications and has received considerable attention. Using the Fourier based sinusoidal modulating signals, it is shown that a discrete autoregressive model can be constructed for the least squares identification of continuous systems. Some identification algorithms are presented for both SISO and MIMO systems frequency response determination using only transient data. Also, several new schemes for model reduction were developed. Based upon the complex sinusoidal modulating signals, a parametric least squares algorithm for high resolution frequency estimation is proposed. Numerical examples show that the proposed algorithm gives better performance than the usual methods. Also, the problem of deconvolution and parameter identification of a general noncausal nonminimum phase ARMA system driven by non-Gaussian stationary random processes was studied. Algorithms are introduced for inverse cumulant estimation, both in the frequency domain via the FFT algorithms and in the time domain via the least squares algorithm.

  8. A projection pursuit algorithm to classify individuals using fMRI data: Application to schizophrenia.

    PubMed

    Demirci, Oguz; Clark, Vincent P; Calhoun, Vince D

    2008-02-15

    Schizophrenia is diagnosed based largely upon behavioral symptoms. Currently, no quantitative, biologically based diagnostic technique has yet been developed to identify patients with schizophrenia. Classification of individuals into schizophrenia patient and healthy control groups based on quantitative, biologically based data is of great interest to support and refine psychiatric diagnoses. We applied a novel projection pursuit technique to various components obtained with independent component analysis (ICA) of 70 subjects' fMRI activation maps obtained during an auditory oddball task. The validity of the technique was tested with a leave-one-out method and the detection performance varied between 80% and 90%. The findings suggest that the proposed data reduction algorithm is effective in classifying individuals into schizophrenia and healthy control groups and may eventually prove useful as a diagnostic tool.

  9. Economic evaluation of the one-hour rule-out and rule-in algorithm for acute myocardial infarction using the high-sensitivity cardiac troponin T assay in the emergency department.

    PubMed

    Ambavane, Apoorva; Lindahl, Bertil; Giannitsis, Evangelos; Roiz, Julie; Mendivil, Joan; Frankenstein, Lutz; Body, Richard; Christ, Michael; Bingisser, Roland; Alquezar, Aitor; Mueller, Christian

    2017-01-01

    The 1-hour (h) algorithm triages patients presenting with suspected acute myocardial infarction (AMI) to the emergency department (ED) towards "rule-out," "rule-in," or "observation," depending on baseline and 1-h levels of high-sensitivity cardiac troponin (hs-cTn). The economic consequences of applying the accelerated 1-h algorithm are unknown. We performed a post-hoc economic analysis in a large, diagnostic, multicenter study of hs-cTnT using central adjudication of the final diagnosis by two independent cardiologists. Length of stay (LoS), resource utilization (RU), and predicted diagnostic accuracy of the 1-h algorithm compared to standard of care (SoC) in the ED were estimated. The ED LoS, RU, and accuracy of the 1-h algorithm were compared to those achieved by the SoC at ED discharge. Expert opinion was sought to characterize clinical implementation of the 1-h algorithm, which required blood draws at ED presentation and 1 h, after which "rule-in" patients were transferred for coronary angiography, "rule-out" patients underwent outpatient stress testing, and "observation" patients received SoC. Unit costs were for the United Kingdom, Switzerland, and Germany. The sensitivity and specificity of the 1-h algorithm were 87% and 96%, respectively, compared to 69% and 98% for SoC. The mean ED LoS was 4.3 h for the 1-h algorithm and 6.5 h for SoC, a reduction of 33%. The 1-h algorithm was associated with reductions in RU, driven largely by the shorter LoS in the ED for patients with a diagnosis other than AMI. The estimated total costs per patient were £2,480 for the 1-h algorithm compared to £4,561 for SoC, a reduction of up to 46%. The analysis shows that the use of the 1-h algorithm is associated with a reduction in overall AMI diagnostic costs, provided it is carefully implemented in clinical practice. These results need to be prospectively validated in the future.

  10. NOAA's Joint Polar Satellite System's (JPSS) Proving Ground and Risk Reduction (PGRR) Program - Bringing JPSS Science into Support of Key NOAA Missions!

    NASA Astrophysics Data System (ADS)

    Sjoberg, W.; McWilliams, G.

    2017-12-01

    This presentation will focus on the continuity of the NOAA Joint Polar Satellite System (JPSS) Program's Proving Ground and Risk Reduction (PGRR) and key activities of the PGRR Initiatives. The PGRR Program was established in 2012, following the launch of the Suomi National Polar Partnership (SNPP) satellite. The JPSS Program Office has used two PGRR Project Proposals to establish an effective approach to managing its science and algorithm teams in order to focus on key NOAA missions. The presenter will provide details of the Initiatives and the processes used by the initiatives that have proven so successful. Details of the new 2017 PGRR Call-for-Proposals and the status of project selections will be discussed.

  11. A Geometric Method for Model Reduction of Biochemical Networks with Polynomial Rate Functions.

    PubMed

    Samal, Satya Swarup; Grigoriev, Dima; Fröhlich, Holger; Weber, Andreas; Radulescu, Ovidiu

    2015-12-01

    Model reduction of biochemical networks relies on the knowledge of slow and fast variables. We provide a geometric method, based on the Newton polytope, to identify slow variables of a biochemical network with polynomial rate functions. The gist of the method is the notion of tropical equilibration that provides approximate descriptions of slow invariant manifolds. Compared to extant numerical algorithms such as the intrinsic low-dimensional manifold method, our approach is symbolic and utilizes orders of magnitude instead of precise values of the model parameters. Application of this method to a large collection of biochemical network models supports the idea that the number of dynamical variables in minimal models of cell physiology can be small, in spite of the large number of molecular regulatory actors.

  12. [Multi-Target Recognition of Internal and External Defects of Potato by Semi-Transmission Hyperspectral Imaging and Manifold Learning Algorithm].

    PubMed

    Huang, Tao; Li, Xiao-yu; Jin, Rui; Ku, Jing; Xu, Sen-miao; Xu, Meng-ling; Wu, Zhen-zhong; Kong, De-guo

    2015-04-01

    The present paper puts forward a non-destructive detection method which combines semi-transmission hyperspectral imaging technology with a manifold learning dimension reduction algorithm and least squares support vector machine (LSSVM) to recognize internal and external defects in potatoes simultaneously. Three hundred fifteen potatoes were bought in a farmers market as research objects, and a semi-transmission hyperspectral image acquisition system was constructed to acquire the hyperspectral images of normal potatoes and of potatoes with external defects (bud and green rind) and an internal defect (hollow heart). In order to conform to actual production, the defective part was randomly positioned facing toward, to the side of, or away from the acquisition probe when the hyperspectral images of externally defective potatoes were acquired. The average spectra (390-1,040 nm) were extracted from the regions of interest for spectral preprocessing. Then three kinds of manifold learning algorithms were respectively utilized to reduce the dimension of the spectral data, including supervised locally linear embedding (SLLE), locally linear embedding (LLE) and isometric mapping (ISOMAP); the low-dimensional data obtained by the manifold learning algorithms were used as model input, and Error Correcting Output Code (ECOC) and LSSVM were combined to develop the multi-target classification model. By comparing and analyzing the results of the three models, we concluded that SLLE is the optimal manifold learning dimension reduction algorithm, and the SLLE-LSSVM model was determined to give the best recognition rate for recognizing internally and externally defective potatoes. For the test set data, the single recognition rates of normal, bud, green rind and hollow heart potatoes reached 96.83%, 86.96%, 86.96% and 95%, respectively, and the hybrid recognition rate was 93.02%. The results indicate that combining semi-transmission hyperspectral imaging technology with SLLE-LSSVM is a feasible qualitative analytical method which can simultaneously recognize internal and external potato defects and also provide a technical reference for rapid on-line non-destructive detection of internally and externally defective potatoes.

  13. Brain-Inspired Constructive Learning Algorithms with Evolutionally Additive Nonlinear Neurons

    NASA Astrophysics Data System (ADS)

    Fang, Le-Heng; Lin, Wei; Luo, Qiang

    In this article, inspired partially by the physiological evidence of brain’s growth and development, we developed a new type of constructive learning algorithm with evolutionally additive nonlinear neurons. The new algorithms have remarkable ability in effective regression and accurate classification. In particular, the algorithms are able to sustain a certain reduction of the loss function when the dynamics of the trained network are bogged down in the vicinity of the local minima. The algorithm augments the neural network by adding only a few connections as well as neurons whose activation functions are nonlinear, nonmonotonic, and self-adapted to the dynamics of the loss functions. Indeed, we analytically demonstrate the reduction dynamics of the algorithm for different problems, and further modify the algorithms so as to obtain an improved generalization capability for the augmented neural networks. Finally, through comparing with the classical algorithm and architecture for neural network construction, we show that our constructive learning algorithms as well as their modified versions have better performances, such as faster training speed and smaller network size, on several representative benchmark datasets including the MNIST dataset for handwriting digits.

  14. An Efficient Distributed Compressed Sensing Algorithm for Decentralized Sensor Network.

    PubMed

    Liu, Jing; Huang, Kaiyu; Zhang, Guoxian

    2017-04-20

    We consider the joint sparsity Model 1 (JSM-1) in a decentralized scenario, where a number of sensors are connected through a network and there is no fusion center. A novel algorithm, named distributed compact sensing matrix pursuit (DCSMP), is proposed to exploit the computational and communication capabilities of the sensor nodes. In contrast to the conventional distributed compressed sensing algorithms adopting a random sensing matrix, the proposed algorithm focuses on the deterministic sensing matrices built directly on the real acquisition systems. The proposed DCSMP algorithm can be divided into two independent parts, the common and innovation support set estimation processes. The goal of the common support set estimation process is to obtain an estimated common support set by fusing the candidate support set information from an individual node and its neighboring nodes. In the following innovation support set estimation process, the measurement vector is projected into a subspace that is perpendicular to the subspace spanned by the columns indexed by the estimated common support set, to remove the impact of the estimated common support set. We can then search the innovation support set using an orthogonal matching pursuit (OMP) algorithm based on the projected measurement vector and projected sensing matrix. In the proposed DCSMP algorithm, the process of estimating the common component/support set is decoupled from that of estimating the innovation component/support set. Thus, an inaccurately estimated common support set will have no impact on estimating the innovation support set. It is proven that, provided the estimated common support set contains the true common support set, the proposed algorithm can find the true innovation support set correctly. Moreover, since the innovation support set estimation process is independent of the common support set estimation process, there is no requirement on the cardinality of both sets; thus, the proposed DCSMP algorithm is capable of tackling the unknown sparsity problem successfully.
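
    The projection-then-OMP step for the innovation support, as described in the abstract, can be sketched in a few lines of Python. The sketch below is an assumption-laden illustration, not the authors' reference implementation: the projector, the toy sensing matrix, and the known sparsity levels are all illustrative choices.

```python
# Hedged sketch of the innovation-support step described above: project the
# measurement out of the span of the estimated common-support columns, then
# run ordinary OMP on the projected measurement and sensing matrix.
import numpy as np

def orthogonal_projector(A, support):
    """Projector onto the orthogonal complement of span(A[:, support])."""
    As = A[:, list(support)]
    return np.eye(A.shape[0]) - As @ np.linalg.pinv(As)

def omp(y, A, k):
    """Plain orthogonal matching pursuit selecting k atoms."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))       # most correlated atom
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s               # update residual
    return support

def innovation_support(y, A, common_support, k_innov):
    Q = orthogonal_projector(A, common_support)
    return omp(Q @ y, Q @ A, k_innov)                    # OMP on projected quantities

# Toy example: common support {0, 1}, innovation support {5}.
rng = np.random.default_rng(3)
A = rng.normal(size=(20, 12))
x = np.zeros(12); x[[0, 1, 5]] = [1.0, -0.8, 0.6]
y = A @ x
print(innovation_support(y, A, common_support=[0, 1], k_innov=1))   # expect [5]
```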

  15. Research on numerical method for multiple pollution source discharge and optimal reduction program

    NASA Astrophysics Data System (ADS)

    Li, Mingchang; Dai, Mingxin; Zhou, Bin; Zou, Bin

    2018-03-01

    In this paper, an optimal reduction program is derived using a nonlinear optimization algorithm, namely the genetic algorithm. The four main rivers in Jiangsu Province, China, are selected with the aim of reducing environmental pollution in the nearshore district. Dissolved inorganic nitrogen (DIN) is studied as the only pollutant. The environmental status and standards of the nearshore district are used to determine the required reductions in the discharges of the multiple river pollution sources. The results of the reduction program form a basis for marine environmental management.

  16. Support vector machines for nuclear reactor state estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zavaljevski, N.; Gross, K. C.

    2000-02-14

    Validation of nuclear power reactor signals is often performed by comparing signal prototypes with the actual reactor signals. The signal prototypes are often computed based on empirical data. The implementation of an estimation algorithm which can make predictions on limited data is an important issue. A new machine learning algorithm called support vector machines (SVMs), recently developed by Vladimir Vapnik and his coworkers, enables a high level of generalization with finite high-dimensional data. The improved generalization in comparison with standard methods like neural networks is due mainly to the following characteristics of the method. The input data space is transformed into a high-dimensional feature space using a kernel function, and the learning problem is formulated as a convex quadratic programming problem with a unique solution. In this paper the authors have applied the SVM method for data-based state estimation in nuclear power reactors. In particular, they implemented and tested kernels developed at Argonne National Laboratory for the Multivariate State Estimation Technique (MSET), a nonlinear, nonparametric estimation technique with a wide range of applications in nuclear reactors. The methodology has been applied to three data sets from experimental and commercial nuclear power reactor applications. The results are promising. The combination of MSET kernels with the SVM method has better noise reduction and generalization properties than the standard MSET algorithm.
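
    A minimal, hedged sketch of kernel-SVM state estimation in this spirit is shown below using scikit-learn's SVR. The Argonne MSET kernels are not reproduced here, so a standard RBF kernel and synthetic correlated sensor signals stand in for them.

```python
# Hedged sketch: kernel support-vector regression as a data-based signal
# estimator, in the spirit of the SVM state estimation described above.
# The MSET kernels are not reproduced; an RBF kernel is a stand-in.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic "reactor" signals: predict one sensor from three correlated ones.
t = np.linspace(0, 10, 500)
s1, s2, s3 = np.sin(t), np.cos(0.7 * t), np.sin(1.3 * t + 0.5)
target = 0.6 * s1 - 0.3 * s2 + 0.5 * s3 + rng.normal(0, 0.02, t.size)
X = np.column_stack([s1, s2, s3])

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:400], target[:400])            # train the signal prototype on early data
pred = model.predict(X[400:])               # estimate the expected signal
residual = target[400:] - pred              # large residuals would flag anomalies
print(f"RMS residual: {np.sqrt(np.mean(residual**2)):.4f}")
```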

  17. A Study of Lane Detection Algorithm for Personal Vehicle

    NASA Astrophysics Data System (ADS)

    Kobayashi, Kazuyuki; Watanabe, Kajiro; Ohkubo, Tomoyuki; Kurihara, Yosuke

    By the term “personal vehicle,” we mean a simple and lightweight vehicle expected to emerge as a personal ground transportation device. The motorcycle, electric wheelchair, and motor-powered bicycle are examples of personal vehicles and have been developed as useful transportation for personal use. Recently, a new type of intelligent personal vehicle called the Segway has been developed, which is controlled and stabilized by using on-board intelligent multiple sensors. The demand for such personal vehicles is increasing, in order 1) to enhance human mobility, 2) to support mobility for elderly persons, and 3) to reduce environmental burdens. With the rapidly growing personal vehicle market, the number of accidents caused by human error is also increasing, and these accidents are closely related to drive ability. To enhance or support drive ability as well as to prevent accidents, intelligent assistance is necessary. One of the most important elemental functions for a personal vehicle is robust lane detection. In this paper, we develop a robust lane detection method for personal vehicles in outdoor environments. The proposed lane detection method employs a 360-degree omnidirectional camera and a unique robust image processing algorithm. In order to detect lanes, a combination of the template matching technique and the Hough transform is employed. The validity of the proposed lane detection algorithm is confirmed on an actual developed vehicle under various outdoor sunlight conditions.
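
    The generic edge-plus-Hough stage of such a lane detector can be sketched with OpenCV as below. This is only an illustrative fragment under stated assumptions: the omnidirectional-camera unwrapping and the template-matching stage of the paper are omitted, and the input file name is hypothetical.

```python
# Hedged sketch of the Hough-transform stage of a lane detector (OpenCV).
# The paper's omnidirectional unwrapping and template-matching stage are
# omitted; only the generic edge + Hough line-extraction step is shown.
import cv2
import numpy as np

def detect_lane_lines(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)           # suppress sensor noise
    edges = cv2.Canny(blurred, 50, 150)                    # binary edge map
    # Probabilistic Hough transform: returns line segments (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=10)
    annotated = frame_bgr.copy()
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(annotated, (x1, y1), (x2, y2), (0, 255, 0), 2)
    return annotated, lines

if __name__ == "__main__":
    img = cv2.imread("road_frame.jpg")                     # hypothetical input frame
    if img is not None:
        out, lines = detect_lane_lines(img)
        cv2.imwrite("road_frame_lanes.jpg", out)
```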

  18. Evaluation of an iterative model-based CT reconstruction algorithm by intra-patient comparison of standard and ultra-low-dose examinations.

    PubMed

    Noël, Peter B; Engels, Stephan; Köhler, Thomas; Muenzel, Daniela; Franz, Daniela; Rasper, Michael; Rummeny, Ernst J; Dobritz, Martin; Fingerle, Alexander A

    2018-01-01

    Background The explosive growth of computed tomography (CT) has led to a growing public health concern about patient and population radiation dose. A recently introduced technique for dose reduction, which can be combined with tube-current modulation, over-beam reduction, and organ-specific dose reduction, is iterative reconstruction (IR). Purpose To evaluate the quality, at different radiation dose levels, of three reconstruction algorithms for diagnostics of patients with proven liver metastases under tumor follow-up. Material and Methods A total of 40 thorax-abdomen-pelvis CT examinations acquired from 20 patients in a tumor follow-up were included. All patients were imaged using the standard-dose and a specific low-dose CT protocol. Reconstructed slices were generated by using three different reconstruction algorithms: a classical filtered back projection (FBP); a first-generation iterative noise-reduction algorithm (iDose4); and a next-generation model-based IR algorithm (IMR). Results The overall detection of liver lesions tended to be higher with the IMR algorithm than with FBP or iDose4. The IMR dataset at standard dose yielded the highest overall detectability, while the low-dose FBP dataset showed the lowest detectability. For the low-dose protocols, a significantly improved detectability of the liver lesions can be reported compared to FBP or iDose4 (P = 0.01). The radiation dose decreased by an approximate factor of 5 between the standard-dose and the low-dose protocol. Conclusion The latest generation of IR algorithms significantly improved the diagnostic image quality and provided virtually noise-free images for ultra-low-dose CT imaging.

  19. System identification and model reduction using modulating function techniques

    NASA Technical Reports Server (NTRS)

    Shen, Yan

    1993-01-01

    Weighted least squares (WLS) and adaptive weighted least squares (AWLS) algorithms are initiated for continuous-time system identification using Fourier type modulating function techniques. Two stochastic signal models are examined using the mean square properties of the stochastic calculus: an equation error signal model with white noise residuals, and a more realistic white measurement noise signal model. The covariance matrices in each model are shown to be banded and sparse, and a joint likelihood cost function is developed which links the real and imaginary parts of the modulated quantities. The superior performance of the above algorithms is demonstrated by comparing them with the LS/MFT and the popular prediction error method (PEM) through 200 Monte Carlo simulations. A model reduction problem is formulated with the AWLS/MFT algorithm, and comparisons are made via six examples with a variety of model reduction techniques, including the well-known balanced realization method. Here the AWLS/MFT algorithm manifests higher accuracy in almost all cases, and exhibits its unique flexibility and versatility. Armed with this model reduction, the AWLS/MFT algorithm is extended into MIMO transfer function system identification problems. The impact due to the discrepancy in bandwidths and gains among subsystems is explored through five examples. Finally, as a comprehensive application, the stability derivatives of the longitudinal and lateral dynamics of an F-18 aircraft are identified using physical flight data provided by NASA. A pole-constrained SIMO and MIMO AWLS/MFT algorithm is devised and analyzed. Monte Carlo simulations illustrate its high-noise rejecting properties. Utilizing the flight data, comparisons among different MFT algorithms are tabulated and the AWLS is found to be strongly favored in almost all facets.

  20. A hybrid algorithm for speckle noise reduction of ultrasound images.

    PubMed

    Singh, Karamjeet; Ranade, Sukhjeet Kaur; Singh, Chandan

    2017-09-01

    Medical images are contaminated by multiplicative speckle noise, which significantly reduces the contrast of ultrasound images and creates a negative effect on various image interpretation tasks. In this paper, we propose a hybrid denoising approach which combines both local and nonlocal information in an efficient manner. The proposed hybrid algorithm consists of three stages: in the first stage, local statistics in the form of a guided filter are used to reduce the effect of speckle noise initially. Then, an improved speckle reducing bilateral filter (SRBF) is developed to further reduce the speckle noise from the medical images. Finally, to reconstruct the diffused edges we have used an efficient post-processing technique which jointly considers the advantages of both the bilateral and nonlocal mean (NLM) filters for the attenuation of speckle noise. The performance of the proposed hybrid algorithm is evaluated on synthetic, simulated and real ultrasound images. The experiments conducted on various test images demonstrate that our proposed hybrid approach outperforms various traditional speckle reduction approaches, including the recently proposed NLM and optimized Bayesian-based NLM. The results of various quantitative and qualitative measures, and visual inspection of denoised synthetic and real ultrasound images, demonstrate that the proposed hybrid algorithm has strong denoising capability and is able to preserve fine image details such as the edge of a lesion better than previously developed methods for speckle noise reduction. The denoising and edge-preserving capability of the hybrid algorithm is far better than that of existing traditional and recently proposed speckle reduction (SR) filters. The success of the proposed algorithm would help lay the foundation for developing hybrid algorithms for denoising of ultrasound images. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Biased normalized cuts for target detection in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Xuewen; Dorado-Munoz, Leidy P.; Messinger, David W.; Cahill, Nathan D.

    2016-05-01

    The Biased Normalized Cuts (BNC) algorithm is a useful technique for detecting targets or objects in RGB imagery. In this paper, we propose modifying BNC for the purpose of target detection in hyperspectral imagery. As opposed to other target detection algorithms that typically encode target information prior to dimensionality reduction, our proposed algorithm encodes target information after dimensionality reduction, enabling a user to detect different targets in interactive mode. To assess the proposed BNC algorithm, we utilize hyperspectral imagery (HSI) from the SHARE 2012 data campaign, and we explore the relationship between the number and the position of expert-provided target labels and the precision/recall of the remaining targets in the scene.

  2. Multitarget mixture reduction algorithm with incorporated target existence recursions

    NASA Astrophysics Data System (ADS)

    Ristic, Branko; Arulampalam, Sanjeev

    2000-07-01

    The paper derives a deferred logic data association algorithm based on the mixture reduction approach originally due to Salmond [SPIE vol. 1305, 1990]. The novelty of the proposed algorithm is that it provides recursive formulae for both data association and target existence (confidence) estimation, thus allowing automatic track initiation and termination. The track initiation performance of the proposed filter is investigated by computer simulations. It is observed that at moderately high levels of clutter density the proposed filter initiates tracks more reliably than the corresponding PDA filter. An extension of the proposed filter to the multi-target case is also presented. In addition, the paper compares the track maintenance performance of the MR algorithm with an MHT implementation.

  3. Denoising and 4D visualization of OCT images

    PubMed Central

    Gargesha, Madhusudhana; Jenkins, Michael W.; Rollins, Andrew M.; Wilson, David L.

    2009-01-01

    We are using Optical Coherence Tomography (OCT) to image structure and function of the developing embryonic heart in avian models. Fast OCT imaging produces very large 3D (2D + time) and 4D (3D volumes + time) data sets, which greatly challenge one's ability to visualize results. Noise in OCT images poses additional challenges. We created an algorithm with a quick, data set specific optimization for reduction of both shot and speckle noise and applied it to 3D visualization and image segmentation in OCT. When compared to baseline algorithms (median, Wiener, orthogonal wavelet, basic non-orthogonal wavelet), a panel of experts judged the new algorithm to give much improved volume renderings concerning both noise and 3D visualization. Specifically, the algorithm provided a better visualization of the myocardial and endocardial surfaces, and the interaction of the embryonic heart tube with surrounding tissue. Quantitative evaluation using an image quality figure of merit also indicated superiority of the new algorithm. Noise reduction aided semi-automatic 2D image segmentation, as quantitatively evaluated using a contour distance measure with respect to an expert segmented contour. In conclusion, the noise reduction algorithm should be quite useful for visualization and quantitative measurements (e.g., heart volume, stroke volume, contraction velocity, etc.) in OCT embryo images. With its semi-automatic, data set specific optimization, we believe that the algorithm can be applied to OCT images from other applications. PMID:18679509

  4. Optimal field-splitting algorithm in intensity-modulated radiotherapy: Evaluations using head-and-neck and female pelvic IMRT cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dou, Xin; Kim, Yusung, E-mail: yusung-kim@uiowa.edu; Bayouth, John E.

    2013-04-01

    To develop an optimal field-splitting algorithm of minimal complexity and verify the algorithm using head-and-neck (H and N) and female pelvic intensity-modulated radiotherapy (IMRT) cases. An optimal field-splitting algorithm was developed in which a large intensity map (IM) was split into multiple sub-IMs (≥2). The algorithm reduced the total complexity by minimizing the monitor units (MU) delivered and segment number of each sub-IM. The algorithm was verified through comparison studies with the algorithm as used in a commercial treatment planning system. Seven IMRT, H and N, and female pelvic cancer cases (54 IMs) were analyzed by MU, segment numbers, and dose distributions. The optimal field-splitting algorithm was found to reduce both total MU and the total number of segments. We found on average a 7.9 ± 11.8% and 9.6 ± 18.2% reduction in MU and segment numbers for H and N IMRT cases with an 11.9 ± 17.4% and 11.1 ± 13.7% reduction for female pelvic cases. The overall percent (absolute) reduction in the numbers of MU and segments were found to be on average −9.7 ± 14.6% (−15 ± 25 MU) and −10.3 ± 16.3% (−3 ± 5), respectively. In addition, all dose distributions from the optimal field-splitting method showed improved dose distributions. The optimal field-splitting algorithm shows considerable improvements in both total MU and total segment number. The algorithm is expected to be beneficial for the radiotherapy treatment of large-field IMRT.

  5. Research on fast algorithm of small UAV navigation in non-linear matrix reductionism method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao; Fang, Jiancheng; Sheng, Wei; Cao, Juanjuan

    2008-10-01

    The low Reynolds numbers of small UAVs result in unfavorable aerodynamic conditions for supporting controlled flight. Moreover, when operated near the ground, a small UAV is seriously affected by low-frequency interference caused by atmospheric disturbance. Therefore, the GNC system needs high-frequency attitude estimation and control to keep the UAV steady. As the dimensions of small UAVs shrink, their GNC systems increasingly adopt embedded design technology to achieve compactness, light weight and low power consumption. At the same time, the operational capability of the GNC system is also limited to a certain extent. Therefore, a high-speed navigation algorithm becomes an urgent demand of the GNC system. Aiming at this requirement, a nonlinear matrix reduction approach is adopted in this paper to create a new high-speed navigation algorithm which holds the radii of the meridian circle and prime vertical circle constant and linearizes the position matrix calculation formulae of the navigation equation. Compared with the normal navigation algorithm, this high-speed navigation algorithm reduces the computational load by 17.3%. Within the small UAV's mission radius (20 km), the position error is less than 0.13 m. The results of semi-physical experiments and small-UAV autopilot testing prove that this algorithm can realize high-frequency attitude estimation and control and properly reject low-frequency interference caused by atmospheric disturbance.

  6. Detecting double compressed MPEG videos with the same quantization matrix and synchronized group of pictures structure

    NASA Astrophysics Data System (ADS)

    Aghamaleki, Javad Abbasi; Behrad, Alireza

    2018-01-01

    Double compression detection is a crucial stage in digital image and video forensics. However, the detection of double compressed videos is challenging when the video forger uses the same quantization matrix and synchronized group of pictures (GOP) structure during the recompression history to conceal tampering effects. A passive approach is proposed for detecting double compressed MPEG videos with the same quantization matrix and synchronized GOP structure. To devise the proposed algorithm, the effects of recompression on P frames are mathematically studied. Then, based on the obtained guidelines, a feature vector is proposed to detect double compressed frames on the GOP level. Subsequently, sparse representations of the feature vectors are used for dimensionality reduction and enrich the traces of recompression. Finally, a support vector machine classifier is employed to detect and localize double compression in temporal domain. The experimental results show that the proposed algorithm achieves the accuracy of more than 95%. In addition, the comparisons of the results of the proposed method with those of other methods reveal the efficiency of the proposed algorithm.

  7. RES: Regularized Stochastic BFGS Algorithm

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.

  8. Robust feature extraction for rapid classification of damage in composites

    NASA Astrophysics Data System (ADS)

    Coelho, Clyde K.; Reynolds, Whitney; Chattopadhyay, Aditi

    2009-03-01

    The ability to detect anomalies in signals from sensors is imperative for structural health monitoring (SHM) applications. Many of the candidate algorithms for these applications either require a lot of training examples or are very computationally inefficient for large sample sizes. The damage detection framework presented in this paper uses a combination of Linear Discriminant Analysis (LDA) along with Support Vector Machines (SVM) to obtain a computationally efficient classification scheme for rapid damage state determination. LDA was used for feature extraction of damage signals from piezoelectric sensors on a composite plate and these features were used to train the SVM algorithm in parts, reducing the computational intensity associated with the quadratic optimization problem that needs to be solved during training. SVM classifiers were organized into a binary tree structure to speed up classification, which also reduces the total training time required. This framework was validated on composite plates that were impacted at various locations. The results show that the algorithm was able to correctly predict the different impact damage cases in composite laminates using less than 21 percent of the total available training data after data reduction.
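
    The LDA-then-SVM pipeline can be sketched with scikit-learn as below. This is a hedged illustration on surrogate data: the binary-tree organization of SVM classifiers, the chunked training scheme, and the real piezoelectric sensor features are not reproduced.

```python
# Hedged sketch: LDA feature extraction followed by an SVM classifier,
# mirroring the LDA+SVM damage-classification framework described above.
# The binary-tree SVM arrangement and the real sensor data are omitted.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Surrogate "sensor feature" data with four damage states.
X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# LDA projects onto at most (n_classes - 1) discriminant directions before the SVM.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=3),
                    SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```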

  9. Integrand Reduction Reloaded: Algebraic Geometry and Finite Fields

    NASA Astrophysics Data System (ADS)

    Sameshima, Ray D.; Ferroglia, Andrea; Ossola, Giovanni

    2017-01-01

    The evaluation of scattering amplitudes in quantum field theory allows us to compare the phenomenological prediction of particle theory with the measurement at collider experiments. The study of scattering amplitudes, in terms of their symmetries and analytic properties, provides a theoretical framework to develop techniques and efficient algorithms for the evaluation of physical cross sections and differential distributions. Tree-level calculations have been known for a long time. Loop amplitudes, which are needed to reduce the theoretical uncertainty, are more challenging since they involve a large number of Feynman diagrams, expressed as integrals of rational functions. At one-loop, the problem has been solved thanks to the combined effect of integrand reduction, such as the OPP method, and unitarity. However, plenty of work is still needed at higher orders, starting with the two-loop case. Recently, integrand reduction has been revisited using algebraic geometry. In this presentation, we review the salient features of integrand reduction for dimensionally regulated Feynman integrals, and describe an interesting technique for their reduction based on multivariate polynomial division. We also show a novel approach to improve its efficiency by introducing finite fields. Supported in part by the National Science Foundation under Grant PHY-1417354.

  10. Improving recovery of ECG signal with deterministic guarantees using split signal for multiple supports of matching pursuit (SS-MSMP) algorithm.

    PubMed

    Tawfic, Israa Shaker; Kayhan, Sema Koc

    2017-02-01

    Compressed sensing (CS) is a new field used for signal acquisition and sensor design that has produced a large drop in the cost of acquiring sparse signals. In this paper, new algorithms are developed to improve the performance of greedy algorithms: a new greedy pursuit algorithm, SS-MSMP (Split Signal for Multiple Support of Matching Pursuit), is introduced and theoretical analyses are given. The SS-MSMP is suggested for sparse data acquisition, in order to reconstruct analog and efficient signals via a small set of general measurements. This paper proposes a new fast method which depends on a study of the behavior of the support indices, picking the best estimate of the correlation between the residual and the measurement matrix. The term multiple supports reflects that, in each iteration, the best support indices are picked based on the maximum quality obtained by computing the correlation for a particular support length. The new algorithm builds upon our previously derived halting condition for Least Support Orthogonal Matching Pursuit (LS-OMP) for clean and noisy signals. For better reconstruction results, the SS-MSMP algorithm provides recovery of the support set for long signals such as those used in WBAN. Numerical experiments demonstrate that the new suggested algorithm performs well compared to existing algorithms in terms of many factors used to assess reconstruction performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Sensitivity Analysis for Probabilistic Neural Network Structure Reduction.

    PubMed

    Kowalski, Piotr A; Kusy, Maciej

    2018-05-01

    In this paper, we propose the use of local sensitivity analysis (LSA) for the structure simplification of the probabilistic neural network (PNN). Three algorithms are introduced. The first algorithm applies LSA to the PNN input layer reduction by selecting significant features of input patterns. The second algorithm utilizes LSA to remove redundant pattern neurons of the network. The third algorithm combines the proposed two and constitutes the solution of how they can work together. PNN with a product kernel estimator is used, where each multiplicand computes a one-dimensional Cauchy function. Therefore, the smoothing parameter is separately calculated for each dimension by means of the plug-in method. The classification qualities of the reduced and full structure PNN are compared. Furthermore, we evaluate the performance of PNN, for which global sensitivity analysis (GSA) and the common reduction methods are applied, both in the input layer and the pattern layer. The models are tested on the classification problems of eight repository data sets. A 10-fold cross validation procedure is used to determine the prediction ability of the networks. Based on the obtained results, it is shown that the LSA can be used as an alternative PNN reduction approach.
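
    A hedged sketch of the core ingredients, a PNN with a product Cauchy kernel and a sensitivity-driven input reduction, is given below. The finite-difference sensitivity and the heuristic smoothing parameters are stand-ins for the paper's analytical LSA and plug-in bandwidths, and the Iris data set is used only as a convenient surrogate.

```python
# Hedged sketch: a probabilistic neural network (PNN) with a product Cauchy
# kernel, plus a finite-difference stand-in for local sensitivity analysis
# of the input features. The smoothing parameters are a simple heuristic,
# not the plug-in method used in the paper.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

def pnn_scores(X_train, y_train, X, h):
    """Class-conditional density estimates with a product Cauchy kernel."""
    scores = []
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        u = (X[:, None, :] - Xc[None, :, :]) / h            # (n, n_c, d)
        k = np.prod(1.0 / (np.pi * (1.0 + u ** 2)), axis=2)  # product kernel
        scores.append(k.mean(axis=1))
    return np.column_stack(scores)

def feature_sensitivity(X_train, y_train, X_val, h, eps=1e-2):
    """Mean absolute change of the winning-class score per perturbed feature."""
    base = pnn_scores(X_train, y_train, X_val, h)
    winner = base.argmax(axis=1)
    rows = np.arange(len(X_val))
    sens = np.zeros(X_val.shape[1])
    for d in range(X_val.shape[1]):
        Xp = X_val.copy()
        Xp[:, d] += eps
        pert = pnn_scores(X_train, y_train, Xp, h)
        sens[d] = np.mean(np.abs(pert[rows, winner] - base[rows, winner])) / eps
    return sens

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)
h = X_tr.std(axis=0) * 0.5                                   # heuristic smoothing

sens = feature_sensitivity(X_tr, y_tr, X_te, h)
keep = sens >= np.median(sens)                               # drop low-sensitivity inputs
acc_full = (pnn_scores(X_tr, y_tr, X_te, h).argmax(1) == y_te).mean()
acc_red = (pnn_scores(X_tr[:, keep], y_tr, X_te[:, keep], h[keep]).argmax(1) == y_te).mean()
print(f"full-input accuracy: {acc_full:.3f}, reduced-input accuracy: {acc_red:.3f}")
```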

  12. Robust frequency diversity based algorithm for clutter noise reduction of ultrasonic signals using multiple sub-spectrum phase coherence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gongzhang, R.; Xiao, B.; Lardner, T.

    2014-02-18

    This paper presents a robust frequency diversity based algorithm for clutter reduction in ultrasonic A-scan waveforms. The performance of conventional spectral-temporal techniques like Split Spectrum Processing (SSP) is highly dependent on the parameter selection, especially when the signal to noise ratio (SNR) is low. Although spatial beamforming offers noise reduction with less sensitivity to parameter variation, phased array techniques are not always available. The proposed algorithm first selects an ascending series of frequency bands. A signal is reconstructed for each selected band in which a defect is present when all frequency components are in uniform sign. Combining all reconstructed signals through averaging gives a probability profile of potential defect position. To facilitate data collection and validate the proposed algorithm, Full Matrix Capture is applied on the austenitic steel and high nickel alloy (HNA) samples with 5 MHz transducer arrays. When processing A-scan signals with unrefined parameters, the proposed algorithm enhances SNR by 20 dB for both samples and consequently, defects are more visible in B-scan images created from the large amount of A-scan traces. Importantly, the proposed algorithm is considered robust, while SSP is shown to fail on the austenitic steel data and achieves less SNR enhancement on the HNA data.

  13. A new algorithm for five-hole probe calibration, data reduction, and uncertainty analysis

    NASA Technical Reports Server (NTRS)

    Reichert, Bruce A.; Wendt, Bruce J.

    1994-01-01

    A new algorithm for five-hole probe calibration and data reduction using a non-nulling method is developed. The significant features of the algorithm are: (1) two components of the unit vector in the flow direction replace pitch and yaw angles as flow direction variables; and (2) symmetry rules are developed that greatly simplify Taylor's series representations of the calibration data. In data reduction, four pressure coefficients allow total pressure, static pressure, and flow direction to be calculated directly. The new algorithm's simplicity permits an analytical treatment of the propagation of uncertainty in five-hole probe measurement. The objectives of the uncertainty analysis are to quantify uncertainty of five-hole results (e.g., total pressure, static pressure, and flow direction) and determine the dependence of the result uncertainty on the uncertainty of all underlying experimental and calibration measurands. This study outlines a general procedure that other researchers may use to determine five-hole probe result uncertainty and provides guidance to improve measurement technique. The new algorithm is applied to calibrate and reduce data from a rake of five-hole probes. Here, ten individual probes are mounted on a single probe shaft and used simultaneously. Use of this probe is made practical by the simplicity afforded by this algorithm.
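
    The calibration-and-reduction idea, fitting Taylor-series-style polynomial surfaces that map direction-sensitive pressure coefficients to the two flow-direction components, can be sketched as below. The coefficient definitions and the synthetic calibration data are illustrative assumptions, not the paper's calibration set or symmetry-reduced expansion.

```python
# Hedged sketch: polynomial (Taylor-series-style) calibration surfaces for a
# non-nulling five-hole probe, mapping two direction-sensitive pressure
# coefficients to two flow-direction components. The coefficient definitions
# and the synthetic calibration data are illustrative assumptions only.
import numpy as np

def design_matrix(cp_a, cp_b, order=3):
    """Bivariate polynomial terms cp_a^i * cp_b^j with i + j <= order."""
    cols = [cp_a ** i * cp_b ** j
            for i in range(order + 1) for j in range(order + 1 - i)]
    return np.column_stack(cols)

# Synthetic calibration set: an assumed smooth relation between the two pressure
# coefficients and the two flow-direction components (n_alpha, n_beta).
rng = np.random.default_rng(0)
cp_a = rng.uniform(-1, 1, 400)
cp_b = rng.uniform(-1, 1, 400)
n_alpha = 0.30 * cp_a - 0.05 * cp_a * cp_b + 0.02 * cp_a ** 3
n_beta = 0.28 * cp_b + 0.04 * cp_a ** 2 * cp_b

A = design_matrix(cp_a, cp_b)
coef_alpha, *_ = np.linalg.lstsq(A, n_alpha, rcond=None)   # calibration fit
coef_beta, *_ = np.linalg.lstsq(A, n_beta, rcond=None)

# Data reduction: evaluate the fitted surfaces at measured coefficients.
cp_a_meas, cp_b_meas = np.array([0.2]), np.array([-0.4])
A_meas = design_matrix(cp_a_meas, cp_b_meas)
print(A_meas @ coef_alpha, A_meas @ coef_beta)
```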

  14. Greedy Algorithms for Nonnegativity-Constrained Simultaneous Sparse Recovery

    PubMed Central

    Kim, Daeun; Haldar, Justin P.

    2016-01-01

    This work proposes a family of greedy algorithms to jointly reconstruct a set of vectors that are (i) nonnegative and (ii) simultaneously sparse with a shared support set. The proposed algorithms generalize previous approaches that were designed to impose these constraints individually. Similar to previous greedy algorithms for sparse recovery, the proposed algorithms iteratively identify promising support indices. In contrast to previous approaches, the support index selection procedure has been adapted to prioritize indices that are consistent with both the nonnegativity and shared support constraints. Empirical results demonstrate for the first time that the combined use of simultaneous sparsity and nonnegativity constraints can substantially improve recovery performance relative to existing greedy algorithms that impose less signal structure. PMID:26973368
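
    A hedged sketch of one way such a greedy scheme can combine the two constraints is shown below: atoms are scored by their summed positive correlations across all measurement vectors, and the coefficients on the shared support are refit with nonnegative least squares. The selection rule is an illustrative choice, not necessarily the authors' exact criterion.

```python
# Hedged sketch of a nonnegativity-constrained simultaneous OMP, in the
# spirit of the greedy algorithms described above (not the authors' exact
# selection rule): indices are scored by summed positive correlations and
# the shared-support coefficients are fit with NNLS.
import numpy as np
from scipy.optimize import nnls

def nn_simultaneous_omp(Y, A, k):
    """Recover nonnegative X (shared support of size k) from Y ≈ A @ X."""
    n_atoms, n_signals = A.shape[1], Y.shape[1]
    support, R = [], Y.copy()
    for _ in range(k):
        corr = A.T @ R                                    # (n_atoms, n_signals)
        score = np.clip(corr, 0.0, None).sum(axis=1)      # reward positive fits only
        score[support] = -np.inf                          # do not reselect atoms
        support.append(int(np.argmax(score)))
        X_s = np.column_stack([nnls(A[:, support], Y[:, j])[0]
                               for j in range(n_signals)])
        R = Y - A[:, support] @ X_s                       # update joint residual
    X = np.zeros((n_atoms, n_signals))
    X[support] = X_s
    return X, sorted(support)

# Toy example: three signals sharing the support {2, 7} with nonnegative weights.
rng = np.random.default_rng(4)
A = rng.normal(size=(30, 15))
X_true = np.zeros((15, 3)); X_true[[2, 7]] = rng.uniform(0.5, 2.0, (2, 3))
Y = A @ X_true
X_hat, support = nn_simultaneous_omp(Y, A, k=2)
print(support)   # expect [2, 7]
```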

  15. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    PubMed

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

    The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm that is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of subjective tests were processed by using analysis of variance to justify the statistic significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise difference between the NR algorithms.
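
    The parameter-search idea can be sketched as below, where SciPy's dual_annealing stands in for the authors' simulated-annealing implementation and a bumpy surrogate function stands in for their fitted regression-model objective over the two recursion parameters.

```python
# Hedged sketch: simulated annealing over two TRA-style recursion parameters.
# The surrogate objective below is an assumption standing in for the paper's
# fitted regression model, which is not available here.
import numpy as np
from scipy.optimize import dual_annealing

def surrogate_quality(params):
    """Assumed stand-in for the regression model of the speech-quality score."""
    a, b = params                       # two recursion parameters in (0, 1)
    # A bumpy objective with several local optima, as motivated in the abstract.
    return ((a - 0.92) ** 2 + (b - 0.65) ** 2
            + 0.02 * np.sin(40 * a) * np.cos(35 * b))

result = dual_annealing(surrogate_quality, bounds=[(0.0, 1.0), (0.0, 1.0)], seed=0)
print(result.x, result.fun)             # near-optimal recursion parameters
```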

  16. High-grade video compression of echocardiographic studies: a multicenter validation study of selected motion pictures expert groups (MPEG)-4 algorithms.

    PubMed

    Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Celeste, Fabrizio; Gentile, Francesco; Mantero, Antonio; Montericcio, Vincenzo; Muratori, Manuela

    2007-05-01

    Large files produced by standard compression algorithms slow the spread of digital echocardiography and tele-echocardiography. We validated high-grade compression of echocardiographic video with the new Motion Pictures Expert Groups (MPEG)-4 algorithms in a multicenter study. Seven expert cardiologists blindly scored (5-point scale) 165 uncompressed and compressed 2-dimensional and color Doppler video clips, based on combined diagnostic content and image quality (uncompressed files as references). One digital video and 3 MPEG-4 algorithms (WM9, MV2, and DivX) were used, the latter at 3 compression levels (0%, 35%, and 60%). Compressed file sizes decreased from 12-83 MB to 0.03-2.3 MB (1:1051-1:26 reduction ratios). The mean SD of differences was 0.81 for intraobserver variability (uncompressed and digital video files). Compared with uncompressed files, only the DivX mean score at 35% (P = .04) and 60% (P = .001) compression was significantly reduced. At subcategory analysis, these differences were still significant for gray-scale and fundamental imaging but not for color or second harmonic tissue imaging. Original image quality, session sequence, compression grade, and bitrate were all independent determinants of mean score. Our study supports the use of MPEG-4 algorithms to greatly reduce echocardiographic file sizes, thus facilitating archiving and transmission. Quality evaluation studies should account for the many independent variables that affect image quality grading.

  17. Optimized design of embedded DSP system hardware supporting complex algorithms

    NASA Astrophysics Data System (ADS)

    Li, Yanhua; Wang, Xiangjun; Zhou, Xinling

    2003-09-01

    The paper presents an optimized design method for a flexible and economical embedded DSP system that can implement complex processing algorithms such as biometric recognition and real-time image processing. It consists of a floating-point DSP, 512 Kbytes of data RAM, 1 Mbyte of FLASH program memory, a CPLD for flexible logic control of the input channel, and an RS-485 transceiver for local network communication. Because the design employs the high performance-price ratio DSP TMS320C6712 and a large FLASH memory, the system permits loading and executing complex algorithms with little algorithm optimization or code reduction. The CPLD provides flexible logic control for the whole DSP board, especially for the input channel, and allows convenient interfacing between different sensors and the DSP system. The transceiver circuit transfers data between the DSP and a host computer. The paper also introduces some key technologies that make the whole system work efficiently. Because of the characteristics described above, the hardware is a flexible platform for multi-channel data collection, image processing, and other signal processing with high performance and adaptability. The application section of the paper describes how the hardware is adapted to a biometric identification system with high identification precision. The results show that the hardware interfaces easily with a CMOS imager and is capable of carrying out complex biometric identification algorithms that require real-time processing.

  18. The ArF laser for the next-generation multiple-patterning immersion lithography supporting green operations

    NASA Astrophysics Data System (ADS)

    Ishida, Keisuke; Ohta, Takeshi; Miyamoto, Hirotaka; Kumazaki, Takahito; Tsushima, Hiroaki; Kurosu, Akihiko; Matsunaga, Takashi; Mizoguchi, Hakaru

    2016-03-01

    Multiple-patterning ArF immersion lithography is expected to be the key technology for meeting tighter leading-edge device requirements. One of the most important features of next-generation lasers will be the ability to support green operations while further improving cost of ownership and performance. In particular, dependence on rare gases such as neon and helium is becoming a critical issue for high-volume manufacturing. The new ArF excimer laser, GT64A, has been developed to reduce operational costs, guard against rare-gas shortages, and improve device yield in multiple-patterning lithography. GT64A has advantages in efficiency and stability based on the field-proven injection-lock twin-chamber platform (GigaTwin platform). By combining the GigaTwin platform with an advanced gas-control algorithm, the consumption of rare gases such as neon is reduced by half, and the newly designed Line Narrowing Module enables completely helium-free operation. For device-yield improvement, spectral bandwidth stability is important to increase image contrast and further reduce CD variation. A new spectral bandwidth control algorithm and a high-response actuator have been developed to compensate for the offset caused by thermal changes during intervals such as wafer-exchange operations. In addition, REDeeM Cloud™, a new monitoring system for managing light-source performance and operations, is on board and provides detailed light-source information such as wavelength, energy, and E95.

  19. Dose reduction potential of iterative reconstruction algorithms in neck CTA-a simulation study.

    PubMed

    Ellmann, Stephan; Kammerer, Ferdinand; Allmendinger, Thomas; Brand, Michael; Janka, Rolf; Hammon, Matthias; Lell, Michael M; Uder, Michael; Kramer, Manuel

    2016-10-01

    This study aimed to determine the degree of radiation dose reduction in neck CT angiography (CTA) achievable with Sinogram-affirmed iterative reconstruction (SAFIRE) algorithms. 10 consecutive patients scheduled for neck CTA were included in this study. CTA images of the external carotid arteries either were reconstructed with filtered back projection (FBP) at full radiation dose level or underwent simulated dose reduction by proprietary reconstruction software. The dose-reduced images were reconstructed using either SAFIRE 3 or SAFIRE 5 and compared with full-dose FBP images in terms of vessel definition. 5 observers performed a total of 3000 pairwise comparisons. SAFIRE allowed substantial radiation dose reductions in neck CTA while maintaining vessel definition. The possible levels of radiation dose reduction ranged from approximately 34 to approximately 90% and depended on the SAFIRE algorithm strength and the size of the vessel of interest. In general, larger vessels permitted higher degrees of radiation dose reduction, especially with higher SAFIRE strength levels. With small vessels, the superiority of SAFIRE 5 over SAFIRE 3 was lost. Neck CTA can be performed with substantially less radiation dose when SAFIRE is applied. The exact degree of radiation dose reduction should be adapted to the clinical question, in particular to the smallest vessel needing excellent definition.

  20. Medication Waste Reduction in Pediatric Pharmacy Batch Processes

    PubMed Central

    Veltri, Michael A.; Hamrock, Eric; Mollenkopf, Nicole L.; Holt, Kristen; Levin, Scott

    2014-01-01

    OBJECTIVES: To inform pediatric cart-fill batch scheduling for reductions in pharmaceutical waste using a case study and simulation analysis. METHODS: A pre- and post-intervention and simulation analysis was conducted over 3 months at a 205-bed children's center. An algorithm was developed to detect wasted medication based on time-stamped computerized provider order entry information. The algorithm was used to quantify pharmaceutical waste and associated costs for both the preintervention (1 batch per day) and postintervention (3 batches per day) schedules. Further, simulation was used to systematically test 108 batch schedules, outlining general characteristics that have an impact on the likelihood of waste. RESULTS: Switching from a 1-batch-per-day to a 3-batch-per-day schedule resulted in a 31.3% decrease in pharmaceutical waste (28.7% to 19.7%) and annual cost savings of $183,380. Simulation results demonstrate how increasing batch frequency facilitates a more just-in-time process that reduces waste. The most substantial gains are realized by shifting from a schedule of 1 batch per day to at least 2 batches per day. The simulation shows that waste reduction is also achievable by avoiding batch preparation during daily time periods when medication administration or medication discontinuations are frequent. Last, the simulation was used to show how reducing the preparation time per batch provides some, albeit minimal, opportunity to decrease waste. CONCLUSIONS: The case study and simulation analysis demonstrate characteristics of batch scheduling that may support pediatric pharmacy managers in redesigning processes to minimize pharmaceutical waste. PMID:25024671
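
    The abstract's waste-detection rule can be illustrated with a simple sketch over time-stamped order records; the field names (scheduled, discontinued, cost) and the assumption that a dose is prepared at the last batch run before its scheduled time are hypothetical simplifications of the actual CPOE-based algorithm.

      # Hedged sketch of a waste-detection rule on time-stamped CPOE records:
      # a batched dose is counted as wasted if its order was discontinued after
      # preparation but before the scheduled administration time.
      from datetime import datetime

      def count_wasted_doses(orders, batch_times):
          """orders: list of dicts with 'scheduled', optional 'discontinued', 'cost'.
          batch_times: sorted datetimes at which cart-fill batches are prepared."""
          wasted, cost = 0, 0.0
          for o in orders:
              # The dose is prepared at the last batch run before it is scheduled.
              prep = max((b for b in batch_times if b <= o["scheduled"]), default=None)
              if prep is None:
                  continue
              disc = o.get("discontinued")
              if disc is not None and prep <= disc < o["scheduled"]:
                  wasted += 1
                  cost += o.get("cost", 0.0)
          return wasted, cost

      # Usage: orders come from a CPOE export; batch_times encode the candidate
      # schedule (e.g. one vs. three runs per day), so schedules can be compared.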

  1. Medication waste reduction in pediatric pharmacy batch processes.

    PubMed

    Toerper, Matthew F; Veltri, Michael A; Hamrock, Eric; Mollenkopf, Nicole L; Holt, Kristen; Levin, Scott

    2014-04-01

    To inform pediatric cart-fill batch scheduling for reductions in pharmaceutical waste using a case study and simulation analysis. A pre- and post-intervention and simulation analysis was conducted over 3 months at a 205-bed children's center. An algorithm was developed to detect wasted medication based on time-stamped computerized provider order entry information. The algorithm was used to quantify pharmaceutical waste and associated costs for both the preintervention (1 batch per day) and postintervention (3 batches per day) schedules. Further, simulation was used to systematically test 108 batch schedules, outlining general characteristics that have an impact on the likelihood of waste. Switching from a 1-batch-per-day to a 3-batch-per-day schedule resulted in a 31.3% decrease in pharmaceutical waste (28.7% to 19.7%) and annual cost savings of $183,380. Simulation results demonstrate how increasing batch frequency facilitates a more just-in-time process that reduces waste. The most substantial gains are realized by shifting from a schedule of 1 batch per day to at least 2 batches per day. The simulation shows that waste reduction is also achievable by avoiding batch preparation during daily time periods when medication administration or medication discontinuations are frequent. Last, the simulation was used to show how reducing the preparation time per batch provides some, albeit minimal, opportunity to decrease waste. The case study and simulation analysis demonstrate characteristics of batch scheduling that may support pediatric pharmacy managers in redesigning processes to minimize pharmaceutical waste.

  2. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    PubMed

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitude and phase to reconstruct the missing areas.
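
    A minimal sketch of the error-reduction loop with an externally estimated Fourier magnitude follows; the plain averaging of similar patches stands in for the paper's error-weighted magnitude estimate, and the selection of similar known patches is assumed to have been done already.

      # Hedged sketch of one error-reduction (ER) pass: alternate between enforcing
      # an estimated Fourier magnitude and re-imposing the known (non-missing)
      # pixels. The magnitude estimate is a plain average of similar known patches.
      import numpy as np

      def er_reconstruct(target, known_mask, similar_patches, iters=200):
          """target: patch with missing pixels (arbitrary values where mask is False)."""
          mag_est = np.mean([np.abs(np.fft.fft2(p)) for p in similar_patches], axis=0)
          x = np.where(known_mask, target, target[known_mask].mean())
          for _ in range(iters):
              F = np.fft.fft2(x)
              # Keep the current phase, replace the magnitude by the estimate.
              F = mag_est * np.exp(1j * np.angle(F))
              x = np.real(np.fft.ifft2(F))
              # Image-domain constraint: known pixels are restored exactly.
              x[known_mask] = target[known_mask]
          return x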

  3. Integrated Multiscale Modeling of Molecular Computing Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gregory Beylkin

    2012-03-23

    Significant advances were made on all objectives of the research program. We have developed fast multiresolution methods for performing electronic structure calculations with emphasis on constructing efficient representations of functions and operators. We extended our approach to problems of scattering in solids, i.e. constructing fast algorithms for computing above the Fermi energy level. Part of the work was done in collaboration with Robert Harrison and George Fann at ORNL. Specific results (in part supported by this grant) are listed here and are described in greater detail. (1) We have implemented a fast algorithm to apply the Green's function for the free space (oscillatory) Helmholtz kernel. The algorithm maintains its speed and accuracy when the kernel is applied to functions with singularities. (2) We have developed a fast algorithm for applying periodic and quasi-periodic, oscillatory Green's functions and those with boundary conditions on simple domains. Importantly, the algorithm maintains its speed and accuracy when applied to functions with singularities. (3) We have developed a fast algorithm for obtaining and applying multiresolution representations of periodic and quasi-periodic Green's functions and Green's functions with boundary conditions on simple domains. (4) We have implemented modifications to improve the speed of adaptive multiresolution algorithms for applying operators which are represented via a Gaussian expansion. (5) We have constructed new nearly optimal quadratures for the sphere that are invariant under the icosahedral rotation group. (6) We obtained new results on approximation of functions by exponential sums and/or rational functions, one of the key methods that allows us to construct separated representations for Green's functions. (7) We developed a new fast and accurate reduction algorithm for obtaining optimal approximation of functions by exponential sums and/or their rational representations.

  4. An Approach to Data Center-Based KDD of Remote Sensing Datasets

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Mack, Robert; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

    The data explosion in remote sensing is straining the ability of data centers to deliver the data to the user community, yet many large-volume users actually seek a relatively small information component within the data, which they extract at their sites using Knowledge Discovery in Databases (KDD) techniques. To improve the efficiency of this process, the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC) has implemented a KDD subsystem that supports execution of the user's KDD algorithm at the data center, dramatically reducing the volume that is sent to the user. The data are extracted from the archive in a planned, organized "campaign"; the algorithms are executed, and the output products sent to the users over the network. The first campaign, now complete, has resulted in overall reductions in shipped volume from 3.3 TB to 0.4 TB.

  5. Data Centric Sensor Stream Reduction for Real-Time Applications in Wireless Sensor Networks

    PubMed Central

    Aquino, Andre Luiz Lins; Nakamura, Eduardo Freire

    2009-01-01

    This work presents a data-centric strategy to meet deadlines in soft real-time applications in wireless sensor networks. This strategy considers three main aspects: (i) the design of real-time applications to obtain the minimum deadlines; (ii) an analytic model to estimate the ideal sample size used by data-reduction algorithms; and (iii) two data-centric stream-based sampling algorithms to perform data reduction whenever necessary. Simulation results show that our data-centric strategies meet deadlines without losing data representativeness. PMID:22303145

  6. Improved classification accuracy by feature extraction using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Patriarche, Julia; Manduca, Armando; Erickson, Bradley J.

    2003-05-01

    A feature extraction algorithm has been developed for the purpose of improving classification accuracy. The algorithm uses a genetic algorithm / hill-climber hybrid to generate a set of linearly recombined features, which may be of reduced dimensionality compared with the original set. The genetic algorithm performs the global exploration, and a hill climber explores local neighborhoods. Hybridizing the genetic algorithm with a hill climber improves both the rate of convergence and the final overall cost function value; it also reduces the sensitivity of the genetic algorithm to parameter selection. The genetic algorithm includes the operators crossover, mutation, and deletion / reactivation - the last of these effects dimensionality reduction. The feature extractor is supervised and is capable of deriving a separate feature space for each tissue (which are reintegrated during classification). A non-anatomical digital phantom was developed as a gold standard for testing purposes. In tests with the phantom and with images of multiple sclerosis patients, classification with features derived by the feature extractor yielded lower error rates than classification using standard pulse sequences or features derived using principal components analysis. Using the multiple sclerosis patient data, the algorithm resulted in a mean 31% reduction in classification error for pure tissues.
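
    A compact sketch of a GA / hill-climber hybrid for linear feature extraction is shown below; the fitness callable, population size, mutation scales, and the modelling of deletion / reactivation as a binary mask over output features are illustrative assumptions.

      # Hedged sketch of the GA / hill-climber hybrid for linear feature extraction.
      # fitness(W) should return a cost to minimise (e.g. a classification error
      # estimate on features X @ W); all operator settings here are assumptions.
      import numpy as np

      rng = np.random.default_rng(0)

      def evolve_features(fitness, d_in, d_out, pop=20, gens=50):
          pop_W = [rng.normal(size=(d_in, d_out)) for _ in range(pop)]
          masks = [np.ones(d_out, bool) for _ in range(pop)]
          def cost(W, m): return fitness(W * m)      # deleted features contribute nothing
          for _ in range(gens):
              scores = np.array([cost(W, m) for W, m in zip(pop_W, masks)])
              order = np.argsort(scores)
              parents = [pop_W[i] for i in order[:pop // 2]]
              pmasks = [masks[i] for i in order[:pop // 2]]
              children, cmasks = [], []
              while len(children) < pop - len(parents):
                  i, j = rng.integers(len(parents), size=2)
                  mix = rng.random((d_in, d_out)) < 0.5           # uniform crossover
                  W = np.where(mix, parents[i], parents[j])
                  W = W + rng.normal(scale=0.05, size=W.shape)     # mutation
                  m = np.where(rng.random(d_out) < 0.05,           # deletion / reactivation
                               ~pmasks[i], pmasks[i])
                  # Local hill climb: keep a small perturbation only if it helps.
                  trial = W + rng.normal(scale=0.01, size=W.shape)
                  if cost(trial, m) < cost(W, m):
                      W = trial
                  children.append(W); cmasks.append(m)
              pop_W, masks = parents + children, pmasks + cmasks
          best = int(np.argmin([cost(W, m) for W, m in zip(pop_W, masks)]))
          return pop_W[best] * masks[best]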

  7. Image-classification-based global dimming algorithm for LED backlights in LCDs

    NASA Astrophysics Data System (ADS)

    Qibin, Feng; Huijie, He; Dong, Han; Lei, Zhang; Guoqiang, Lv

    2015-07-01

    Backlight dimming can help LCDs reduce power consumption and improve the contrast ratio (CR). With fixed parameters, a dimming algorithm cannot achieve satisfactory results for all kinds of images. This paper introduces an image-classification-based global dimming algorithm. The proposed classification method, designed specifically for backlight dimming, is based on the luminance and CR of the input images. The parameters for the backlight dimming level and pixel compensation adapt to the image classification. Simulation results show that the classification-based dimming algorithm achieves an 86.13% improvement in power reduction compared with dimming without classification, with almost the same display quality. A prototype was developed, and no distortions are perceived when playing videos. The practical average power reduction of the prototype TV is 18.72% compared with a common TV without dimming.
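
    A hedged sketch of the classify-then-dim idea follows; the luminance and contrast-ratio thresholds and the per-class (backlight level, compensation gain) pairs are placeholders, not the paper's tuned parameters.

      # Hedged sketch: classify a frame by mean luminance and contrast ratio, then
      # pick a class-specific backlight level and compensate pixels.
      import numpy as np

      def classify_frame(luma):
          mean_l = luma.mean() / 255.0
          cr = (luma.max() + 1.0) / (luma.min() + 1.0)
          if mean_l < 0.3:
              return "dark"
          if cr > 50:
              return "high_contrast"
          return "bright"

      def global_dimming(luma, params=None):
          """Return (backlight_level, compensated_frame) for an 8-bit luminance image."""
          if params is None:       # placeholder class parameters (level, gain)
              params = {"dark": (0.55, 1.6), "high_contrast": (0.75, 1.25),
                        "bright": (0.9, 1.1)}
          level, gain = params[classify_frame(luma)]
          compensated = np.clip(luma.astype(np.float32) * gain, 0, 255).astype(np.uint8)
          return level, compensated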

  8. Can image enhancement allow radiation dose to be reduced whilst maintaining the perceived diagnostic image quality required for coronary angiography?

    PubMed Central

    Joshi, Anuja; Gislason-Lee, Amber J; Keeble, Claire; Sivananthan, Uduvil M

    2017-01-01

    Objective: The aim of this research was to quantify the reduction in radiation dose facilitated by image processing alone for percutaneous coronary intervention (PCI) patient angiograms, without reducing the perceived image quality required to confidently make a diagnosis. Methods: Incremental amounts of image noise were added to five PCI angiograms, simulating the angiogram as having been acquired at corresponding lower dose levels (10–89% dose reduction). 16 observers with relevant experience scored the image quality of these angiograms in 3 states—with no image processing and with 2 different modern image processing algorithms applied. These algorithms are used on state-of-the-art and previous generation cardiac interventional X-ray systems. Ordinal regression allowing for random effects and the delta method were used to quantify the dose reduction possible by the processing algorithms, for equivalent image quality scores. Results: Observers rated the quality of the images processed with the state-of-the-art and previous generation image processing with a 24.9% and 15.6% dose reduction, respectively, as equivalent in quality to the unenhanced images. The dose reduction facilitated by the state-of-the-art image processing relative to previous generation processing was 10.3%. Conclusion: Results demonstrate that statistically significant dose reduction can be facilitated with no loss in perceived image quality using modern image enhancement; the most recent processing algorithm was more effective in preserving image quality at lower doses. Advances in knowledge: Image enhancement was shown to maintain perceived image quality in coronary angiography at a reduced level of radiation dose using computer software to produce synthetic images from real angiograms simulating a reduction in dose. PMID:28124572

  9. An algorithm for U-Pb isotope dilution data reduction and uncertainty propagation

    NASA Astrophysics Data System (ADS)

    McLean, N. M.; Bowring, J. F.; Bowring, S. A.

    2011-06-01

    High-precision U-Pb geochronology by isotope dilution-thermal ionization mass spectrometry is integral to a variety of Earth science disciplines, but its ultimate resolving power is quantified by the uncertainties of calculated U-Pb dates. As analytical techniques have advanced, formerly small sources of uncertainty are increasingly important, and thus previous simplifications for data reduction and uncertainty propagation are no longer valid. Although notable previous efforts have treated propagation of correlated uncertainties for the U-Pb system, the equations, uncertainties, and correlations have been limited in number and subject to simplification during propagation through intermediary calculations. We derive and present a transparent U-Pb data reduction algorithm that transforms raw isotopic data and measured or assumed laboratory parameters into the isotopic ratios and dates geochronologists interpret, without making assumptions about the relative size of sample components. To propagate uncertainties and their correlations, we describe, in detail, a linear algebraic algorithm that incorporates all input uncertainties and correlations without limiting or simplifying covariance terms to propagate them through intermediate calculations. Finally, a weighted mean algorithm is presented that utilizes matrix elements from the uncertainty propagation algorithm to propagate random and systematic uncertainties for data comparison with other U-Pb labs and other geochronometers. The linear uncertainty propagation algorithms are verified with Monte Carlo simulations of several typical analyses. We propose that our algorithms be considered by the community for implementation to improve the collaborative science envisioned by the EARTHTIME initiative.
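
    The linear-algebraic propagation step described above can be illustrated generically: for a reduction function f the output covariance is J S J^T, with S the full input covariance (including correlations) and J the Jacobian of f. The sketch below uses a numerical Jacobian and includes a Monte Carlo check mirroring the verification mentioned in the abstract; it is an illustration, not the published algorithm.

      # Hedged sketch of linear covariance propagation with a Monte Carlo check.
      import numpy as np

      def propagate(f, x0, S, eps=1e-6):
          """Return f(x0) and its covariance J S J^T via a numerical Jacobian."""
          x0 = np.asarray(x0, float)
          y0 = np.atleast_1d(f(x0))
          J = np.empty((y0.size, x0.size))
          for j in range(x0.size):
              dx = np.zeros_like(x0)
              dx[j] = eps * max(1.0, abs(x0[j]))
              J[:, j] = (np.atleast_1d(f(x0 + dx)) - y0) / dx[j]
          return y0, J @ S @ J.T

      def monte_carlo_check(f, x0, S, n=20000, seed=0):
          """Empirical mean and covariance of f under correlated input uncertainties."""
          rng = np.random.default_rng(seed)
          samples = rng.multivariate_normal(x0, S, size=n)
          Y = np.array([np.atleast_1d(f(x)) for x in samples])
          return Y.mean(axis=0), np.cov(Y, rowvar=False)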

  10. SU-E-J-218: Evaluation of CT Images Created Using a New Metal Artifact Reduction Reconstruction Algorithm for Radiation Therapy Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niemkiewicz, J; Palmiotti, A; Miner, M

    2014-06-01

    Purpose: Metal in patients creates streak artifacts in CT images. When used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine if the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. Also, CT images of patients with internal metal objects using standard and MAR reconstruction algorithms were compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. Also, the MAR reconstruction algorithm showed significant improvement in maintaining the HUs of non-metallic regions in the images taken of the phantom with metal. HU gamma analysis (2%, 2 mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation treatment planning accuracy.

  11. An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography.

    PubMed

    Treiber, O; Wanninger, F; Führ, H; Panzer, W; Regulla, D; Winkler, G

    2003-02-21

    This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography.

  12. An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography

    NASA Astrophysics Data System (ADS)

    Treiber, O.; Wanninger, F.; Führ, H.; Panzer, W.; Regulla, D.; Winkler, G.

    2003-02-01

    This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography.

  13. A Machine-Learning Algorithm Toward Color Analysis for Chronic Liver Disease Classification, Employing Ultrasound Shear Wave Elastography.

    PubMed

    Gatos, Ilias; Tsantis, Stavros; Spiliopoulos, Stavros; Karnabatidis, Dimitris; Theotokas, Ioannis; Zoumpoulis, Pavlos; Loupas, Thanasis; Hazle, John D; Kagadis, George C

    2017-09-01

    The purpose of the present study was to employ a computer-aided diagnosis system that classifies chronic liver disease (CLD) using ultrasound shear wave elastography (SWE) imaging, with a stiffness value-clustering and machine-learning algorithm. A clinical data set of 126 patients (56 healthy controls, 70 with CLD) was analyzed. First, an RGB-to-stiffness inverse mapping technique was employed. A five-cluster segmentation was then performed associating corresponding different-color regions with certain stiffness value ranges acquired from the SWE manufacturer-provided color bar. Subsequently, 35 features (7 for each cluster), indicative of physical characteristics existing within the SWE image, were extracted. A stepwise regression analysis toward feature reduction was used to derive a reduced feature subset that was fed into the support vector machine classification algorithm to classify CLD from healthy cases. The highest accuracy in classification of healthy to CLD subject discrimination from the support vector machine model was 87.3% with sensitivity and specificity values of 93.5% and 81.2%, respectively. Receiver operating characteristic curve analysis gave an area under the curve value of 0.87 (confidence interval: 0.77-0.92). A machine-learning algorithm that quantifies color information in terms of stiffness values from SWE images and discriminates CLD from healthy cases is introduced. New objective parameters and criteria for CLD diagnosis employing SWE images provided by the present study can be considered an important step toward color-based interpretation, and could assist radiologists' diagnostic performance on a daily basis after being installed in a PC and employed retrospectively, immediately after the examination. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  14. Identification of transformer fault based on dissolved gas analysis using hybrid support vector machine-modified evolutionary particle swarm optimisation

    PubMed Central

    2018-01-01

    Early detection of power transformer faults is important because it can reduce the maintenance cost of the transformer and ensure continuous electricity supply in power systems. The Dissolved Gas Analysis (DGA) technique is commonly used to identify the fault type of oil-filled power transformers, but the use of artificial intelligence methods combined with optimisation techniques has shown convincing results. In this work, a hybrid support vector machine (SVM) with a modified evolutionary particle swarm optimisation (EPSO) algorithm was proposed to determine the transformer fault type. The superiority of the modified PSO technique with SVM was evaluated by comparing the results with the actual fault diagnosis, unoptimised SVM, and previously reported work. Data reduction was also applied using stepwise regression prior to the SVM training process to reduce the training time. It was found that the proposed hybrid SVM-Modified EPSO (MEPSO)-Time Varying Acceleration Coefficient (TVAC) technique yields the highest percentage of correctly identified faults in a power transformer compared to other PSO algorithms. Thus, the proposed technique can be one of the potential solutions for identifying the transformer fault type from DGA data on site. PMID:29370230
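
    A hedged sketch of the optimisation stage follows: a particle swarm with time-varying acceleration coefficients (TVAC) searching over SVM hyperparameters (C, gamma) scored by cross-validation; the swarm size, coefficient schedules, and search ranges are assumptions, and the DGA feature matrix X and labels y are taken as given.

      # Hedged sketch: PSO with time-varying acceleration coefficients tuning an SVM.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def pso_tune_svm(X, y, iters=30, particles=12, seed=0):
          rng = np.random.default_rng(seed)
          # Particles live in log10 space of (C, gamma).
          pos = rng.uniform([-1, -4], [3, 1], size=(particles, 2))
          vel = np.zeros_like(pos)

          def score(p):
              clf = SVC(C=10.0 ** p[0], gamma=10.0 ** p[1])
              return cross_val_score(clf, X, y, cv=5).mean()

          pbest, pbest_val = pos.copy(), np.array([score(p) for p in pos])
          gbest = pbest[np.argmax(pbest_val)].copy()
          for t in range(iters):
              # TVAC: cognitive weight decreases, social weight increases over time.
              c1 = 2.5 - 2.0 * t / iters
              c2 = 0.5 + 2.0 * t / iters
              w = 0.9 - 0.5 * t / iters
              r1, r2 = rng.random((particles, 2)), rng.random((particles, 2))
              vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
              pos = np.clip(pos + vel, [-1, -4], [3, 1])
              vals = np.array([score(p) for p in pos])
              improved = vals > pbest_val
              pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
              gbest = pbest[np.argmax(pbest_val)].copy()
          return 10.0 ** gbest      # best (C, gamma)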

  15. Identification of transformer fault based on dissolved gas analysis using hybrid support vector machine-modified evolutionary particle swarm optimisation.

    PubMed

    Illias, Hazlee Azil; Zhao Liang, Wee

    2018-01-01

    Early detection of power transformer faults is important because it can reduce the maintenance cost of the transformer and ensure continuous electricity supply in power systems. The Dissolved Gas Analysis (DGA) technique is commonly used to identify the fault type of oil-filled power transformers, but the use of artificial intelligence methods combined with optimisation techniques has shown convincing results. In this work, a hybrid support vector machine (SVM) with a modified evolutionary particle swarm optimisation (EPSO) algorithm was proposed to determine the transformer fault type. The superiority of the modified PSO technique with SVM was evaluated by comparing the results with the actual fault diagnosis, unoptimised SVM, and previously reported work. Data reduction was also applied using stepwise regression prior to the SVM training process to reduce the training time. It was found that the proposed hybrid SVM-Modified EPSO (MEPSO)-Time Varying Acceleration Coefficient (TVAC) technique yields the highest percentage of correctly identified faults in a power transformer compared to other PSO algorithms. Thus, the proposed technique can be one of the potential solutions for identifying the transformer fault type from DGA data on site.

  16. Wavelet denoising of multiframe optical coherence tomography data

    PubMed Central

    Mayer, Markus A.; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.

    2012-01-01

    We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or denoising on the final averaged image. Instead it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain at about 100% we observe only a minor sharpness decrease, as measured by a full-width-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise. PMID:22435103
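
    A minimal sketch of the multiframe idea, assuming the PyWavelets package is available: decompose each frame, weight the detail coefficients by a simple significance measure (coefficient magnitude against the across-frame spread, standing in for the paper's local noise and structure estimate), average the weighted coefficients, and reconstruct.

      # Hedged sketch of multiframe wavelet-domain averaging (assumes PyWavelets).
      import numpy as np
      import pywt

      def multiframe_wavelet_denoise(frames, wavelet="db4", level=3):
          decs = [pywt.wavedec2(f, wavelet, level=level) for f in frames]
          fused = [np.mean([d[0] for d in decs], axis=0)]      # approximation: plain mean
          for lev in range(1, level + 1):
              bands = []
              for b in range(3):                               # (H, V, D) detail bands
                  stack = np.stack([d[lev][b] for d in decs])
                  noise = stack.std(axis=0) + 1e-12            # across-frame spread ~ noise
                  weights = np.abs(stack) / (np.abs(stack) + noise)
                  bands.append(np.sum(weights * stack, axis=0) /
                               (np.sum(weights, axis=0) + 1e-12))
              fused.append(tuple(bands))
          return pywt.waverec2(fused, wavelet)

      # Usage: denoised = multiframe_wavelet_denoise([frame1, ..., frame8])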

  17. Wavelet denoising of multiframe optical coherence tomography data.

    PubMed

    Mayer, Markus A; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y; Tornow, Ralf P

    2012-03-01

    We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or denoising on the final averaged image. Instead it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain at about 100% we observe only a minor sharpness decrease, as measured by a full-width-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise.

  18. SU-D-206-02: Evaluation of Partial Storage of the System Matrix for Cone Beam Computed Tomography Using a GPU Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matenine, D; Cote, G; Mascolo-Fortin, J

    2016-06-15

    Purpose: Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersections between the photons’ trajectories and the object, also called ray-tracing or system matrix computation. This work evaluates different ways to store the system matrix, aiming to reconstruct dense image grids in reasonable time. Methods: We propose an optimized implementation of the Siddon’s algorithm using graphics processing units (GPUs) with a novel data storage scheme. The algorithm computes a part of the system matrix on demand, typically, for one projection angle. The proposed method was enhanced with accelerating options: storage of larger subsets of the system matrix, systematic reuse of data via geometric symmetries, an arithmetic-rich parallel code and code configuration via machine learning. It was tested on geometries mimicking a cone beam CT acquisition of a human head. To realistically assess the execution time, the ray-tracing routines were integrated into a regularized Poisson-based reconstruction algorithm. The proposed scheme was also compared to a different approach, where the system matrix is fully pre-computed and loaded at reconstruction time. Results: Fast ray-tracing of realistic acquisition geometries, which often lack spatial symmetry properties, was enabled via the proposed method. Ray-tracing interleaved with projection and backprojection operations required significant additional time. In most cases, ray-tracing was shown to use about 66 % of the total reconstruction time. In absolute terms, tracing times varied from 3.6 s to 7.5 min, depending on the problem size. The presence of geometrical symmetries allowed for non-negligible ray-tracing and reconstruction time reduction. Arithmetic-rich parallel code and machine learning permitted a modest reconstruction time reduction, in the order of 1 %. Conclusion: Partial system matrix storage permitted the reconstruction of higher 3D image grid sizes and larger projection datasets at the cost of additional time, when compared to the fully pre-computed approach. This work was supported in part by the Fonds de recherche du Quebec - Nature et technologies (FRQ-NT). The authors acknowledge partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council of Canada (Grant No. 432290).

  19. SAMSAN- MODERN NUMERICAL METHODS FOR CLASSICAL SAMPLED SYSTEM ANALYSIS

    NASA Technical Reports Server (NTRS)

    Frisch, H. P.

    1994-01-01

    SAMSAN was developed to aid the control system analyst by providing a self-consistent set of computer algorithms that support large order control system design and evaluation studies, with an emphasis placed on sampled system analysis. Control system analysts have access to a vast array of published algorithms to solve an equally large spectrum of controls related computational problems. The analyst usually spends considerable time and effort bringing these published algorithms to an integrated operational status and often finds them less general than desired. SAMSAN reduces the burden on the analyst by providing a set of algorithms that have been well tested and documented, and that can be readily integrated for solving control system problems. Algorithm selection for SAMSAN has been biased toward numerical accuracy for large order systems with computational speed and portability being considered important but not paramount. In addition to containing relevant subroutines from EISPACK for eigen-analysis and from LINPACK for the solution of linear systems and related problems, SAMSAN contains the following not so generally available capabilities: 1) Reduction of a real non-symmetric matrix to block diagonal form via a real similarity transformation matrix which is well conditioned with respect to inversion, 2) Solution of the generalized eigenvalue problem with balancing and grading, 3) Computation of all zeros of the determinant of a matrix of polynomials, 4) Matrix exponentiation and the evaluation of integrals involving the matrix exponential, with option to first block diagonalize, 5) Root locus and frequency response for single variable transfer functions in the S, Z, and W domains, 6) Several methods of computing zeros for linear systems, and 7) The ability to generate documentation "on demand". All matrix operations in the SAMSAN algorithms assume non-symmetric matrices with real double precision elements. There is no fixed size limit on any matrix in any SAMSAN algorithm; however, it is generally agreed by experienced users, and in the numerical error analysis literature, that computation with non-symmetric matrices of order greater than about 200 should be avoided or treated with extreme care. SAMSAN attempts to support the needs of application oriented analysis by providing: 1) a methodology with unlimited growth potential, 2) a methodology to insure that associated documentation is current and available "on demand", 3) a foundation of basic computational algorithms that most controls analysis procedures are based upon, 4) a set of check out and evaluation programs which demonstrate usage of the algorithms on a series of problems which are structured to expose the limits of each algorithm's applicability, and 5) capabilities which support both a priori and a posteriori error analysis for the computational algorithms provided. The SAMSAN algorithms are coded in FORTRAN 77 for batch or interactive execution and have been implemented on a DEC VAX computer under VMS 4.7. An effort was made to assure that the FORTRAN source code was portable and thus SAMSAN may be adaptable to other machine environments. The documentation is included on the distribution tape or can be purchased separately. SAMSAN version 2.0 was developed in 1982 and updated to version 3.0 in 1988.

  20. ReSCA: decision support tool for remediation planning after the Chernobyl accident.

    PubMed

    Ulanovsky, A; Jacob, P; Fesenko, S; Bogdevitch, I; Kashparov, V; Sanzharova, N

    2011-03-01

    Radioactive contamination of the environment following the Chernobyl accident still has a substantial impact on the population of the affected territories in Belarus, Russia, and Ukraine. Population exposure can be reduced by performing remediation activities in these areas. Resulting from the IAEA Technical Co-operation Projects with these countries, the program ReSCA (Remediation Strategies after the Chernobyl Accident) has been developed to provide assistance to decision makers and to facilitate the selection of an optimized remediation strategy in rural settlements. The paper provides an in-depth description of the program, its algorithm, and its structure. © Springer-Verlag 2010

  1. Vibration Reduction of Helicopter Blade Using Variable Dampers: A Feasibility Study

    NASA Technical Reports Server (NTRS)

    Lee, George C.; Liang, Zach; Gan, Quan; Niu, Tiecheng

    2002-01-01

    In this report, the investigation of controlling helicopter-blade lead-lag vibration is described. Current practice of adding passive damping may be improved to handle the large dynamic range of the blade with several peaks of vibration resonance. To minimize extra-large damping forces that may damage the control system of the blade, passive dampers should have relatively small damping coefficients, which in turn limit their effectiveness. By providing variable damping, a much larger damping coefficient to suppress the vibration can be realized. If the damping force reaches the maximum allowed threshold, the damper will be automatically switched into the mode with the smaller damping coefficient to maintain a near-constant damping force. Furthermore, the proposed control system will also have a fail-safe feature to guarantee the basic performance of a typical passive damper. The proposed control strategy to avoid resonant regions in the frequency domain is to generate variable damping force in combination with the supporting stiffness to manipulate the restoring force and conservative energy of the controlled blade system. Two control algorithms were developed and verified with a prototype variable damper, a digital controller, and corresponding algorithms. Preliminary experiments show good potential for the proposed variable damper: about 66% and 82% reductions in displacement at 1/3 length and at the root of the blade, respectively.

  2. Speckle-reduction algorithm for ultrasound images in complex wavelet domain using genetic algorithm-based mixture model.

    PubMed

    Uddin, Muhammad Shahin; Tahtali, Murat; Lambert, Andrew J; Pickering, Mark R; Marchese, Margaret; Stuart, Iain

    2016-05-20

    Compared with other medical-imaging modalities, ultrasound (US) imaging is a valuable way to examine the body's internal organs, and two-dimensional (2D) imaging is currently the most common technique used in clinical diagnoses. Conventional 2D US imaging systems are highly flexible cost-effective imaging tools that permit operators to observe and record images of a large variety of thin anatomical sections in real time. Recently, 3D US imaging has also been gaining popularity due to its considerable advantages over 2D US imaging. It reduces dependency on the operator and provides better qualitative and quantitative information for an effective diagnosis. Furthermore, it provides a 3D view, which allows the observation of volume information. The major shortcoming of any type of US imaging is the presence of speckle noise. Hence, speckle reduction is vital in providing a better clinical diagnosis. The key objective of any speckle-reduction algorithm is to attain a speckle-free image while preserving the important anatomical features. In this paper we introduce a nonlinear multi-scale complex wavelet-diffusion based algorithm for speckle reduction and sharp-edge preservation of 2D and 3D US images. In the proposed method we use a Rayleigh and Maxwell-mixture model for 2D and 3D US images, respectively, where a genetic algorithm is used in combination with an expectation maximization method to estimate mixture parameters. Experimental results using both 2D and 3D synthetic, physical phantom, and clinical data demonstrate that our proposed algorithm significantly reduces speckle noise while preserving sharp edges without discernible distortions. The proposed approach performs better than the state-of-the-art approaches in both qualitative and quantitative measures.

  3. An improved NAS-RIF algorithm for image restoration

    NASA Astrophysics Data System (ADS)

    Gao, Weizhe; Zou, Jianhua; Xu, Rong; Liu, Changhai; Li, Hengnian

    2016-10-01

    Space optical images are inevitably degraded by atmospheric turbulence, errors in the optical system, and motion. In order to recover the true image, a novel nonnegativity and support constraints recursive inverse filtering (NAS-RIF) algorithm is proposed to restore the degraded image. First, the image noise is reduced by a Contourlet denoising algorithm. Second, reliable estimation of the object support region is used to accelerate convergence of the algorithm; optimal threshold segmentation is introduced to improve the object support region estimate. Finally, an object construction limit and the logarithm function are added to enhance the stability of the algorithm. Experimental results demonstrate that the proposed algorithm increases the PSNR and improves the quality of the restored images. The convergence speed of the proposed algorithm is faster than that of the original NAS-RIF algorithm.

  4. Blooming Artifact Reduction in Coronary Artery Calcification by A New De-blooming Algorithm: Initial Study.

    PubMed

    Li, Ping; Xu, Lei; Yang, Lin; Wang, Rui; Hsieh, Jiang; Sun, Zhonghua; Fan, Zhanming; Leipsic, Jonathon A

    2018-05-02

    The aim of this study was to investigate the use of a de-blooming algorithm in coronary CT angiography (CCTA) for optimal evaluation of calcified plaques. Calcified plaques were simulated on a coronary vessel phantom and a cardiac motion phantom. Two convolution kernels, standard (STND) and high-definition standard (HD STND), were used for image reconstruction. A dedicated de-blooming algorithm was used for image processing. We found a smaller bias in the measurement of stenosis when using the de-blooming algorithm (STND: bias 24.6% vs 15.0%, range 10.2% to 39.0% vs 4.0% to 25.9%; HD STND: bias 17.9% vs 11.0%, range 8.9% to 30.6% vs 0.5% to 21.5%). With use of the de-blooming algorithm, specificity for diagnosing significant stenosis increased from 45.8% to 75.0% (STND) and from 62.5% to 83.3% (HD STND), while the positive predictive value (PPV) increased from 69.8% to 83.3% (STND) and from 76.9% to 88.2% (HD STND). In the patient group, the reduction in calcification volume was 48.1 ± 10.3%, and the reduction in coronary diameter stenosis over calcified plaque was 52.4 ± 24.2%. Our results suggest that the novel de-blooming algorithm can effectively decrease the blooming artifacts caused by coronary calcified plaques and consequently improve the diagnostic accuracy of CCTA in assessing coronary stenosis.

  5. Clean birth kits to improve birth practices: development and testing of a country level decision support tool.

    PubMed

    Hundley, Vanora A; Avan, Bilal I; Ahmed, Haris; Graham, Wendy J

    2012-12-19

    Clean birth practices can prevent sepsis, one of the leading causes of both maternal and newborn mortality. Evidence suggests that clean birth kits (CBKs), as part of a package that includes education, are associated with a reduction in newborn mortality, omphalitis, and puerperal sepsis. However, questions remain about how best to approach the introduction of CBKs in country. We set out to develop a practical decision support tool for programme managers of public health systems who are considering the potential role of CBKs in their strategy for care at birth. Development and testing of the decision support tool was a three-stage process involving an international expert group and country-level testing. In Stage 1, the development of the tool was undertaken by the Birth Kit Working Group and involved a review of the evidence, a consensus meeting, drafting of the proposed tool, and expert review. In Stage 2, the tool was tested with users through nine interviews and a focus group with federal and provincial level decision makers in Pakistan. In Stage 3, the findings from the country-level testing were reviewed by the expert group. The decision support tool comprised three separate algorithms to guide the policy maker or programme manager through the specific steps required to make the country-level decision about whether to use CBKs. The algorithms were supported by a series of questions (that could be administered by interview, focus group, or questionnaire) to help the decision maker identify the information needed. The country-level testing revealed that the decision support tool was easy to follow and helpful in making decisions about the potential role of CBKs. Minor modifications were made and the final algorithms are presented. Testing of the tool with users in Pakistan suggests that the tool facilitates discussion and aids decision making. However, testing in other countries is needed to determine whether these results can be replicated and to identify how the tool can be adapted to meet country-specific needs.

  6. A rate-constrained fast full-search algorithm based on block sum pyramid.

    PubMed

    Song, Byung Cheol; Chun, Kang-Wook; Ra, Jong Beom

    2005-03-01

    This paper presents a fast full-search algorithm (FSA) for rate-constrained motion estimation. The proposed algorithm, which is based on the block sum pyramid frame structure, successively eliminates unnecessary search positions according to a rate-constrained criterion. The algorithm provides estimation performance identical to that of a conventional FSA with a rate constraint, while achieving a considerable reduction in computation.
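
    The successive-elimination idea can be illustrated with a one-level block-sum bound, since |sum(B) - sum(C)| is a lower bound on SAD(B, C); the rate model and the Lagrangian weight below are assumptions, and a full block-sum pyramid would simply add tighter bounds at finer levels.

      # Hedged sketch of rate-constrained successive elimination with a one-level
      # block-sum bound: candidates whose bound plus the motion-vector rate penalty
      # already exceed the best cost so far are skipped without a full SAD.
      import numpy as np

      def rate_bits(mv, pred=(0, 0)):
          # Simple proxy for the motion-vector coding rate.
          return abs(mv[0] - pred[0]) + abs(mv[1] - pred[1])

      def rc_full_search(cur, ref, bx, by, bs=16, rng=8, lam=4.0):
          block = cur[by:by + bs, bx:bx + bs].astype(np.int64)
          bsum = block.sum()
          best_cost, best_mv = np.inf, (0, 0)
          for dy in range(-rng, rng + 1):
              for dx in range(-rng, rng + 1):
                  y, x = by + dy, bx + dx
                  if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
                      continue
                  cand = ref[y:y + bs, x:x + bs].astype(np.int64)
                  penalty = lam * rate_bits((dx, dy))
                  # Block-sum lower bound on SAD; eliminate before the full SAD.
                  if abs(bsum - cand.sum()) + penalty >= best_cost:
                      continue
                  cost = np.abs(block - cand).sum() + penalty
                  if cost < best_cost:
                      best_cost, best_mv = cost, (dx, dy)
          return best_mv, best_cost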

  7. Overview and evolution of the LeRC PMAD DC test bed

    NASA Technical Reports Server (NTRS)

    Soeder, James F.; Frye, Robert J.

    1992-01-01

    Since the beginning of the Space Station Freedom Program (SSFP), the Lewis Research Center (LeRC) has been developing electrical power system test beds to support the overall design effort. Over this time, the SSFP has changed the design baseline numerous times; however, the test bed effort has endeavored to track these changes. Beginning in August 1989 with the baseline of an all-DC system, a test bed was developed to support the design baseline. The LeRC power management and distribution (PMAD) DC test bed and the changes made in the restructure are described. The changes included a size reduction of the primary power channel and of various power processing elements. A substantial reduction was also made in the amount of flight software, with the subsequent migration of these functions to ground control centers. The impact of these changes on the design of the power hardware, the controller algorithms, and the control software is presented, along with a description of their current status. An overview of testing using the test bed is given, which includes investigation of stability and source impedance, primary and secondary fault protection, and performance of a rotary utility transfer device. Finally, information is presented on the evolution of the test bed to support the verification and operational phases of the SSFP in light of these restructure scrubs.

  8. Overview and evolution of the LeRC PMAD DC Testbed

    NASA Technical Reports Server (NTRS)

    Soeder, James F.; Frye, Robert J.

    1992-01-01

    Since the beginning of the Space Station Freedom Program (SSFP), the Lewis Research Center (LeRC) has been developing electrical power system test beds to support the overall design effort. Over this time, the SSFP has changed the design baseline numerous times; however, the test bed effort has endeavored to track these changes. Beginning in August 1989 with the baseline of an all-DC system, a test bed was developed to support the design baseline. The LeRC power management and distribution (PMAD) DC test bed and the changes made in the restructure are described. The changes included a size reduction of the primary power channel and of various power processing elements. A substantial reduction was also made in the amount of flight software, with the subsequent migration of these functions to ground control centers. The impact of these changes on the design of the power hardware, the controller algorithms, and the control software is presented, along with a description of their current status. An overview of testing using the test bed is given, which includes investigation of stability and source impedance, primary and secondary fault protection, and performance of a rotary utility transfer device. Finally, information is presented on the evolution of the test bed to support the verification and operational phases of the SSFP in light of these restructure scrubs.

  9. Detection of surface cracking in steel pipes based on vibration data using a multi-class support vector machine classifier

    NASA Astrophysics Data System (ADS)

    Mustapha, S.; Braytee, A.; Ye, L.

    2017-04-01

    In this study, we focused on the development and verification of a robust framework for surface crack detection in steel pipes using measured vibration responses, with multiple progressive damage cases occurring in different locations within the structure. Feature selection, dimensionality reduction, and a multi-class support vector machine were established for this purpose. Nine damage cases, at different locations, orientations, and lengths, were introduced into the pipe structure. The pipe was impacted 300 times using an impact hammer; after each damage case, the vibration data were collected using 3 PZT wafers installed on the outer surface of the pipe. First, damage-sensitive features were extracted using the frequency response function approach, followed by recursive feature elimination for dimensionality reduction. Then, a multi-class support vector machine learning algorithm was employed to train the data and generate a statistical model. Once the model was established, decision values and distances from the hyperplane were generated for newly collected data using the trained model. This process was repeated on the data collected from each sensor. Overall, using a single sensor for training and testing led to a very high accuracy, reaching 98% in the assessment of the 9 damage cases used in this study.
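
    A hedged sketch of the classification stage follows, using an H1 frequency-response-function estimate as the feature vector, recursive feature elimination for dimensionality reduction, and a multi-class SVM; the sampling rate, segment length, retained feature count, and kernels are assumptions for illustration.

      # Hedged sketch: FRF features + recursive feature elimination + multi-class SVM.
      import numpy as np
      from scipy.signal import csd, welch
      from sklearn.feature_selection import RFE
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      def frf_features(force, response, fs=10000, nperseg=1024):
          """H1-style FRF magnitude per frequency bin, used as the feature vector."""
          f, Pxy = csd(force, response, fs=fs, nperseg=nperseg)
          _, Pxx = welch(force, fs=fs, nperseg=nperseg)
          return np.abs(Pxy / Pxx)

      def build_classifier(n_features=50):
          # Linear-kernel SVM inside RFE exposes coefficients for feature ranking;
          # the final classifier is an RBF multi-class SVM (one-vs-one internally).
          selector = RFE(SVC(kernel="linear"), n_features_to_select=n_features, step=10)
          return make_pipeline(StandardScaler(), selector, SVC(kernel="rbf"))

      # Usage: X = np.array([frf_features(f, r) for f, r in impacts]); y = damage_labels
      # model = build_classifier().fit(X, y); decisions = model.decision_function(X_new)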

  10. A pilot, randomised controlled trial of a rotational thromboelastometry-based algorithm to treat bleeding episodes in extracorporeal life support: the TEM Protocol in ECLS Study (TEMPEST).

    PubMed

    Buscher, Hergen; Zhang, David; Nair, Priya

    2017-10-01

    Minimal evidence to guide haemostatic therapy for bleeding in extracorporeal life support (ECLS) has resulted in wide variability in practice. We aimed to show that a goal-directed algorithm incorporating results from thromboelastometry (TEM) is feasible and safe for the timely management of bleeding episodes in adult patients receiving ECLS. The study was a pilot randomised controlled trial involving 16 adult patients who underwent ECLS, randomised over 10 months. The intervention group was treated according to a goal-directed algorithm based on TEM results during bleeding episodes. Apart from the intervention, both groups received standard care including conventional laboratory coagulation tests. Outcome measures were the need for blood product transfusion, haemorrhagic and thromboembolic complications, and survival. There was a statistically non-significant trend towards reduction in the amount of blood products transfused, occurrence of bleeding, and thrombotic complications when comparing the intervention arm with the control arm. Survival to hospital discharge was 69%. A significant correlation was found between fibrinogen levels and FIBTEM clot firmness at 10 minutes (R = 0.812; P < 0.001); activated partial thromboplastin time and clotting time HEPTEM/INTEM ratio (R = -0.719; P < 0.001); and platelet count and EXTEM clot firmness at 10 minutes (R = 0.783; P < 0.001). TEM allows assessment of coagulation status in a timely manner, and its use for the treatment of bleeding episodes in adult patients receiving ECLS appears feasible and safe. Clinical benefit should be investigated in larger multicentre randomised trials.

  11. Spatio-Temporal Process Variability in Watershed Scale Wetland Restoration Planning

    NASA Astrophysics Data System (ADS)

    Evenson, G. R.

    2012-12-01

    Watershed scale restoration decision making processes are increasingly informed by quantitative methodologies providing site-specific restoration recommendations - sometimes referred to as "systematic planning." The more advanced of these methodologies are characterized by a coupling of search algorithms and ecological models to discover restoration plans that optimize environmental outcomes. Yet while these methods have exhibited clear utility as decision support toolsets, they may be critiqued for flawed evaluations of spatio-temporally variable processes fundamental to watershed scale restoration. Hydrologic and non-hydrologic mediated process connectivity along with post-restoration habitat dynamics, for example, are commonly ignored yet known to appreciably affect restoration outcomes. This talk will present a methodology to evaluate such spatio-temporally complex processes in the production of watershed scale wetland restoration plans. Using the Tuscarawas Watershed in Eastern Ohio as a case study, a genetic algorithm will be coupled with the Soil and Water Assessment Tool (SWAT) to reveal optimal wetland restoration plans as measured by their capacity to maximize nutrient reductions. Then, a so-called "graphical" representation of the optimization problem will be implemented in-parallel to promote hydrologic and non-hydrologic mediated connectivity amongst existing wetlands and sites selected for restoration. Further, various search algorithm mechanisms will be discussed as a means of accounting for temporal complexities such as post-restoration habitat dynamics. Finally, generalized patterns of restoration plan optimality will be discussed as an alternative and possibly superior decision support toolset given the complexity and stochastic nature of spatio-temporal process variability.

  12. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  13. Wind noise in hearing aids: I. Effect of wide dynamic range compression and modulation-based noise reduction.

    PubMed

    Chung, King

    2012-01-01

    The objectives of this study were: (1) to examine the effect of wide dynamic range compression (WDRC) and modulation-based noise reduction (NR) algorithms on wind noise levels at the hearing aid output; and (2) to derive effective strategies for clinicians and engineers to reduce wind noise in hearing aids. Three digital hearing aids were fitted to KEMAR. The noise output was recorded at flow velocities of 0, 4.5, 9.0, and 13.5 m/s in a wind tunnel as the KEMAR head was turned from 0° to 360°. Flow noise levels were compared between the 1:1 linear and 3:1 WDRC conditions, and between NR-activated and NR-deactivated conditions when the hearing aid was programmed to the directional and omnidirectional modes. The results showed that: (1) WDRC increased low-level noise and reduced high-level noise; and (2) different noise reduction algorithms provided different amounts of wind noise reduction in different microphone modes, frequency regions, flow velocities, and head angles. Wind noise can be reduced by decreasing the gain for low-level inputs, increasing the compression ratio for high-level inputs, and activating modulation-based noise reduction algorithms.
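
    The compression behaviour summarized above can be illustrated with a toy wide-dynamic-range-compression input/output rule; the knee point, gain, and compression ratio below are illustrative assumptions, not values fitted to the hearing aids in the study.

      # Toy WDRC rule: constant gain below a knee point, 3:1 compression above it.
      # Knee point and gain values are illustrative assumptions only.
      def wdrc_output_level(input_db, knee_db=45.0, gain_below_db=20.0, ratio=3.0):
          if input_db <= knee_db:
              return input_db + gain_below_db                            # linear (1:1) region
          return knee_db + gain_below_db + (input_db - knee_db) / ratio  # compressive region

      for level_db in (30, 50, 70, 90):
          # Low-level inputs receive relatively more gain than high-level inputs, which is
          # consistent with the finding that WDRC raised low-level noise and reduced high-level noise.
          print(level_db, "->", round(wdrc_output_level(level_db), 1), "dB")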

  14. A novel computer algorithm for modeling and treating mandibular fractures: A pilot study.

    PubMed

    Rizzi, Christopher J; Ortlip, Timothy; Greywoode, Jewel D; Vakharia, Kavita T; Vakharia, Kalpesh T

    2017-02-01

    To describe a novel computer algorithm that can model mandibular fracture repair. To evaluate the algorithm as a tool to model mandibular fracture reduction and hardware selection. Retrospective pilot study combined with cross-sectional survey. A computer algorithm utilizing Aquarius Net (TeraRecon, Inc, Foster City, CA) and Adobe Photoshop CS6 (Adobe Systems, Inc, San Jose, CA) was developed to model mandibular fracture repair. Ten different fracture patterns were selected from nine patients who had already undergone mandibular fracture repair. The preoperative computed tomography (CT) images were processed with the computer algorithm to create virtual images that matched the actual postoperative three-dimensional CT images. A survey comparing the true postoperative image with the virtual postoperative images was created and administered to otolaryngology resident and attending physicians. They were asked to rate on a scale from 0 to 10 (0 = completely different; 10 = identical) the similarity between the two images in terms of the fracture reduction and fixation hardware. Ten mandible fracture cases were analyzed and processed. There were 15 survey respondents. The mean score for overall similarity between the images was 8.41 ± 0.91; the mean score for similarity of fracture reduction was 8.61 ± 0.98; and the mean score for hardware appearance was 8.27 ± 0.97. There were no significant differences between attending and resident responses. There were no significant differences based on fracture location. This computer algorithm can accurately model mandibular fracture repair. Images created by the algorithm are highly similar to true postoperative images. The algorithm can potentially assist a surgeon planning mandibular fracture repair. Level of Evidence: 4. Laryngoscope, 127:331-336, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  15. Formulations and algorithms for problems on rock mass and support deformation during mining

    NASA Astrophysics Data System (ADS)

    Seryakov, VM

    2018-03-01

    The analysis of problem formulations used in rock mechanics to calculate the stress-strain state of mine support and the surrounding rock mass shows that such formulations incompletely describe the mechanical features of joint deformation in the rock mass–support system. The present paper proposes an algorithm that takes into account the actual conditions of rock mass and support interaction, together with an implementation method that ensures efficient calculation of stresses in the rocks and the support.

  16. Performance evaluation of image denoising developed using convolutional denoising autoencoders in chest radiography

    NASA Astrophysics Data System (ADS)

    Lee, Donghoon; Choi, Sunghoon; Kim, Hee-Joung

    2018-03-01

    When processing medical images, image denoising is an important pre-processing step. Various image denoising algorithms have been developed in the past few decades. Recently, image denoising using the deep learning method has shown excellent performance compared to conventional image denoising algorithms. In this study, we introduce an image denoising technique based on a convolutional denoising autoencoder (CDAE) and evaluate clinical applications by comparing existing image denoising algorithms. We train the proposed CDAE model using 3000 chest radiograms as training data. To evaluate the performance of the developed CDAE model, we compare it with conventional denoising algorithms including the median filter, total variation (TV) minimization, and non-local mean (NLM) algorithms. Furthermore, to verify the clinical effectiveness of the developed denoising model with CDAE, we investigate the performance of the developed denoising algorithm on chest radiograms acquired from real patients. The results demonstrate that the proposed denoising algorithm developed using CDAE achieves a superior noise-reduction effect in chest radiograms compared to TV minimization and NLM algorithms, which are state-of-the-art algorithms for image noise reduction. For example, the peak signal-to-noise ratio and structure similarity index measure of CDAE were at least 10% higher compared to conventional denoising algorithms. In conclusion, the image denoising algorithm developed using CDAE effectively eliminated noise without loss of information on anatomical structures in chest radiograms. It is expected that the proposed denoising algorithm developed using CDAE will be effective for medical images with microscopic anatomical structures, such as terminal bronchioles.
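
    A minimal convolutional denoising autoencoder of the kind evaluated above can be sketched in Keras as follows; the patch size, network depth, filter counts, and training data here are assumptions for illustration, not the authors' 3000-radiogram configuration.

      # Toy CDAE: learns a mapping from noisy patches back to clean patches.
      import numpy as np
      from tensorflow.keras import layers, models

      inp = layers.Input(shape=(64, 64, 1))                     # small image patches (assumption)
      x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
      x = layers.MaxPooling2D(2, padding="same")(x)             # encoder
      x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
      x = layers.UpSampling2D(2)(x)                             # decoder
      out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

      cdae = models.Model(inp, out)
      cdae.compile(optimizer="adam", loss="mse")

      clean = np.random.rand(256, 64, 64, 1).astype("float32")  # placeholder training patches
      noisy = np.clip(clean + 0.1 * np.random.randn(*clean.shape), 0, 1).astype("float32")
      cdae.fit(noisy, clean, epochs=2, batch_size=32, verbose=0)  # train noisy -> clean mapping
      denoised = cdae.predict(noisy[:4], verbose=0)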

  17. Using Mathematics to Make Computing on Encrypted Data Secure and Practical

    DTIC Science & Technology

    2015-12-01

    LLL lattice basis reduction algorithm, G-Lattice, Cryptography, Security, Gentry-Szydlo Algorithm, Ring-LWE. ...with symmetry be further developed, in order to quantify the security of lattice-based cryptography, including especially the security of homomorphic... the Gentry-Szydlo algorithm, and the ideas should be applicable to a range of questions in cryptography. The new algorithm of Lenstra and Silverberg

  18. Simulated bi-SQUID Arrays Performing Direction Finding

    DTIC Science & Technology

    2015-09-01

    First, we applied the multiple signal classification (MUSIC) algorithm on linearly polarized signals. We included multiple signals in the output... both of the same frequency and different frequencies. Next, we explored a modified MUSIC algorithm called dimensionality reduction MUSIC (DR-MUSIC)... MUSIC algorithm is able to determine the AoA from the simulated SQUID data for linearly polarized signals. The MUSIC algorithm could accurately find

  19. The SCUBA Data Reduction Pipeline: ORAC-DR at the JCMT

    NASA Astrophysics Data System (ADS)

    Jenness, Tim; Economou, Frossie

    The ORAC data reduction pipeline, developed for UKIRT, has been designed to be a completely general approach to writing data reduction pipelines. This generality has enabled the JCMT to adapt the system for use with SCUBA with minimal development time using the existing SCUBA data reduction algorithms (Surf).

  20. Model reduction of nonsquare linear MIMO systems using multipoint matrix continued-fraction expansions

    NASA Technical Reports Server (NTRS)

    Guo, Tong-Yi; Hwang, Chyi; Shieh, Leang-San

    1994-01-01

    This paper deals with the multipoint Cauer matrix continued-fraction expansion (MCFE) for model reduction of linear multi-input multi-output (MIMO) systems with various numbers of inputs and outputs. A salient feature of the proposed MCFE approach to model reduction of MIMO systems with square transfer matrices is its equivalence to the matrix Pade approximation approach. The Cauer second form of the ordinary MCFE for a square transfer function matrix is generalized in this paper to a multipoint and nonsquare-matrix version. An interesting connection of the multipoint Cauer MCFE method to the multipoint matrix Pade approximation method is established. Also, algorithms for obtaining the reduced-degree matrix-fraction descriptions and reduced-dimensional state-space models from a transfer function matrix via the multipoint Cauer MCFE algorithm are presented. Practical advantages of using the multipoint Cauer MCFE are discussed and a numerical example is provided to illustrate the algorithms.

  1. Algorithm and program for information processing with the filin apparatus

    NASA Technical Reports Server (NTRS)

    Gurin, L. S.; Morkrov, V. S.; Moskalenko, Y. I.; Tsoy, K. A.

    1979-01-01

    The reduction of spectral radiation data from space sources is described. The algorithm and program for identifying segments of information obtained from the Filin telescope-spectrometer on Salyut-4 are presented. The information segments represent suspected X-ray sources. The proposed algorithm is an algorithm of the lowest level. Following evaluation, information free of uninformative segments is subject to further processing with algorithms of a higher level. The language used is FORTRAN 4.

  2. MuLoG, or How to Apply Gaussian Denoisers to Multi-Channel SAR Speckle Reduction?

    PubMed

    Deledalle, Charles-Alban; Denis, Loic; Tabti, Sonia; Tupin, Florence

    2017-09-01

    Speckle reduction is a longstanding topic in synthetic aperture radar (SAR) imaging. Since most current and planned SAR imaging satellites operate in polarimetric, interferometric, or tomographic modes, SAR images are multi-channel and speckle reduction techniques must jointly process all channels to recover polarimetric and interferometric information. The distinctive nature of SAR signal (complex-valued, corrupted by multiplicative fluctuations) calls for the development of specialized methods for speckle reduction. Image denoising is a very active topic in image processing with a wide variety of approaches and many denoising algorithms available, almost always designed for additive Gaussian noise suppression. This paper proposes a general scheme, called MuLoG (MUlti-channel LOgarithm with Gaussian denoising), to include such Gaussian denoisers within a multi-channel SAR speckle reduction technique. A new family of speckle reduction algorithms can thus be obtained, benefiting from the ongoing progress in Gaussian denoising, and offering several speckle reduction results often displaying method-specific artifacts that can be dismissed by comparison between results.
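
    The core idea of plugging a Gaussian denoiser into a speckle-reduction scheme can be illustrated, in a heavily simplified single-channel form, by a homomorphic (log-transform) filter; this sketch is not the MuLoG algorithm itself (no multi-channel coupling, no variable splitting, no debiasing), and the Gaussian filter below merely stands in for any off-the-shelf Gaussian denoiser.

      # Simplified single-channel illustration: log transform turns multiplicative speckle
      # into approximately additive noise, so a Gaussian denoiser can be applied there.
      import numpy as np
      from scipy.ndimage import gaussian_filter   # stand-in for any Gaussian denoiser

      def homomorphic_despeckle(intensity, sigma=2.0):
          log_img = np.log(np.maximum(intensity, 1e-6))     # intensity -> log domain
          log_den = gaussian_filter(log_img, sigma=sigma)   # plug in the Gaussian denoiser here
          return np.exp(log_den)                            # back to the intensity domain

      speckled = np.random.gamma(shape=1.0, scale=1.0, size=(128, 128))  # toy 1-look intensity
      filtered = homomorphic_despeckle(speckled)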

  3. Photoplethysmograph signal reconstruction based on a novel hybrid motion artifact detection-reduction approach. Part I: Motion and noise artifact detection.

    PubMed

    Chong, Jo Woon; Dao, Duy K; Salehizadeh, S M A; McManus, David D; Darling, Chad E; Chon, Ki H; Mendelson, Yitzhak

    2014-11-01

    Motion and noise artifacts (MNA) are a serious obstacle in utilizing photoplethysmogram (PPG) signals for real-time monitoring of vital signs. We present an MNA detection method which can provide a clean vs. corrupted decision on each successive PPG segment. For motion artifact detection, we compute four time-domain parameters: (1) standard deviation of peak-to-peak intervals, (2) standard deviation of peak-to-peak amplitudes, (3) standard deviation of systolic and diastolic interval ratios, and (4) mean standard deviation of pulse shape. We have adopted a support vector machine (SVM) which takes these parameters from clean and corrupted PPG signals and builds a decision boundary to classify them. We apply several distinct features of the PPG data to enhance classification performance. The algorithm we developed was verified on PPG data segments recorded in simulation, laboratory-controlled, and walking/stair-climbing experiments, and we compared several well-established MNA detection methods to our proposed algorithm. All compared detection algorithms were evaluated in terms of motion artifact detection accuracy, heart rate (HR) error, and oxygen saturation (SpO2) error. For laboratory-controlled finger recordings, forehead recordings, and daily-activity movement data, our proposed algorithm gives 94.4%, 93.4%, and 93.7% accuracy, respectively. Significant reductions in HR and SpO2 errors (2.3 bpm and 2.7%) were noted when the artifacts identified by SVM-MNA were removed from the original signal, compared with leaving them in place (17.3 bpm and 5.4%). The accuracy and error values of our proposed method were significantly higher and lower, respectively, than all other detection methods. Another advantage of our method is its ability to provide highly accurate onset and offset detection times of MNAs. This capability is important for an automated approach to signal reconstruction of only those data points that need to be reconstructed, which is the subject of the companion paper to this article. Finally, our MNA detection algorithm is realizable in real time, as the computation time on a 7-s PPG data segment was found to be only 7 ms with MATLAB code.
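
    The four time-domain parameters listed above can be computed from a single PPG segment along the following lines; the sampling rate, peak-spacing threshold, beat-resampling length, and the exact definition of the interval ratio are assumptions for illustration, not the authors' specification.

      # Sketch of the four MNA features described above, using SciPy peak detection.
      import numpy as np
      from scipy.signal import find_peaks

      def mna_features(ppg, fs=100.0):
          peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))    # systolic peaks
          onsets, _ = find_peaks(-ppg, distance=int(0.4 * fs))  # pulse onsets (troughs)
          std_pp_interval = np.std(np.diff(peaks) / fs)         # (1) peak-to-peak intervals
          std_pp_amplitude = np.std(ppg[peaks])                 # (2) peak amplitudes (proxy)
          ratios, pulses = [], []
          for k in range(len(onsets) - 1):
              beat_peaks = peaks[(peaks > onsets[k]) & (peaks < onsets[k + 1])]
              if beat_peaks.size == 0:
                  continue
              p = beat_peaks[0]
              ratios.append((p - onsets[k]) / max(onsets[k + 1] - p, 1))   # (3) interval ratio
              beat = ppg[onsets[k]:onsets[k + 1]]
              pulses.append(np.interp(np.linspace(0, 1, 50),
                                      np.linspace(0, 1, beat.size), beat))  # normalised pulse
          std_ratio = np.std(ratios)
          mean_std_shape = np.mean(np.std(np.vstack(pulses), axis=0))       # (4) pulse-shape spread
          return np.array([std_pp_interval, std_pp_amplitude, std_ratio, mean_std_shape])

      fs = 100.0
      t = np.arange(0, 10, 1 / fs)
      demo_ppg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)  # toy pulse train
      print(mna_features(demo_ppg, fs=fs))   # feature vector that would feed the SVM classifier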

  4. Pre-Launch Algorithms and Risk Reduction in Support of the Geostationary Lightning Mapper for GOES-R and Beyond

    NASA Technical Reports Server (NTRS)

    Goodman, Steven; Blakeslee, Richard; Koshak, William

    2008-01-01

    The Geostationary Lightning Mapper (GLM) is a single channel, near-IR optical transient event detector, used to detect, locate and measure total lightning activity over the full-disk as part of a 3-axis stabilized, geostationary weather satellite system. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series with a planned launch in 2014 will carry a GLM that will provide continuous day and night observations of lightning from the west coast of Africa (GOES-E) to New Zealand (GOES-W) when the constellation is fully operational. The mission objectives for the GLM are to 1) provide continuous, full-disk lightning measurements for storm warning and Nowcasting, 2) provide early warning of tornado activity, and 3) accumulate a long-term database to track decadal changes of lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-Present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13 year data record of global lightning activity. Instrument formulation studies were completed in March 2007 and the implementation phase to develop a prototype model and up to four flight units is expected to begin in the latter part of the year. In parallel with the instrument development, a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2B algorithms and applications. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama and the Washington DC Metropolitan area) are being used to develop the pre-launch algorithms and applications, and also improve our knowledge of thunderstorm initiation and evolution. Real time lightning mapping data provided to selected National Weather Service forecast offices in Southern and Eastern Region are also improving our understanding of the application of these data in the severe storm warning process and help to accelerate the development of the pre-launch algorithms and Nowcasting applications.

  5. Pre-Launch Algorithms and Risk Reduction in Support of the Geostationary Lightning Mapper for GOES-R and Beyond

    NASA Technical Reports Server (NTRS)

    Goodman, Steven; Blakeslee, Richard; Koshak, William; Petersen, Walt; Buechler, Dennis; Krehbiel, Paul; Gatlin, Patrick; Zubrick, Steven

    2008-01-01

    The Geostationary Lightning Mapper (GLM) is a single channel, near-IR optical transient event detector, used to detect, locate and measure total lightning activity over the full-disk as part of a 3-axis stabilized, geostationary weather satellite system. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series with a planned launch in 2014 will carry a GLM that will provide continuous day and night observations of lightning from the west coast of Africa (GOES-E) to New Zealand (GOES-W) when the constellation is fully operational. The mission objectives for the GLM are to 1) provide continuous, full-disk lightning measurements for storm warning and Nowcasting, 2) provide early warning of tornadic activity, and 3) accumulate a long-term database to track decadal changes of lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-Present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13 year data record of global lightning activity. Instrument formulation studies were completed in March 2007 and the implementation phase to develop a prototype model and up to four flight units is expected to begin in the latter part of the year. In parallel with the instrument development, a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2B algorithms and applications. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama and the Washington DC Metropolitan area) are being used to develop the pre-launch algorithms and applications, and also improve our knowledge of thunderstorm initiation and evolution. Real time lightning mapping data provided to selected National Weather Service forecast offices in Southern and Eastern Region are also improving our understanding of the application of these data in the severe storm warning process and help to accelerate the development of the pre-launch algorithms and Nowcasting applications. Abstract for the 3rd Conference on Meteorological

  6. Gradient Evolution-based Support Vector Machine Algorithm for Classification

    NASA Astrophysics Data System (ADS)

    Zulvia, Ferani E.; Kuo, R. J.

    2018-03-01

    This paper proposes a classification algorithm based on the support vector machine (SVM) and the gradient evolution (GE) algorithm. The SVM algorithm has been widely used in classification; however, its results are significantly influenced by its parameters. Therefore, this paper proposes an improvement of the SVM algorithm which can find the best SVM parameters automatically. The proposed algorithm employs a GE algorithm to automatically determine the SVM parameters. The GE algorithm acts as a global optimizer in finding the best parameters, which are then used by the SVM algorithm. The proposed GE-SVM algorithm is verified using several benchmark datasets and compared with other metaheuristic-based SVM algorithms. The experimental results show that the proposed GE-SVM algorithm obtains better results than the other algorithms tested in this paper.
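
    The general scheme of wrapping a global optimizer around SVM parameter selection can be sketched as below; a plain random search over (C, gamma) stands in for the authors' gradient evolution optimizer, and the dataset is only a convenience example.

      # Metaheuristic-style SVM parameter tuning sketch; random search is a stand-in for GE.
      import numpy as np
      from sklearn.datasets import load_iris
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      X, y = load_iris(return_X_y=True)
      rng = np.random.default_rng(1)

      best_params, best_score = None, -np.inf
      for _ in range(40):                                   # candidate (C, gamma) pairs
          C, gamma = 10 ** rng.uniform(-2, 3), 10 ** rng.uniform(-4, 1)
          score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()
          if score > best_score:
              best_params, best_score = (C, gamma), score   # keep the best-performing parameters

      print("best (C, gamma):", best_params, "cv accuracy:", round(best_score, 3))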

  7. Anisotropic field-of-view shapes for improved PROPELLER imaging

    PubMed Central

    Larson, Peder E.Z.; Lustig, Michael S.; Nishimura, Dwight G.

    2010-01-01

    The Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction (PROPELLER) method for magnetic resonance imaging data acquisition and reconstruction has the highly desirable property of being able to correct for motion during the scan, making it especially useful for imaging pediatric or uncooperative patients and diffusion imaging. This method nominally supports a circular field of view (FOV), but tailoring the FOV for noncircular shapes results in more efficient, shorter scans. This article presents new algorithms for tailoring PROPELLER acquisitions to the desired FOV shape and size that are flexible and precise. The FOV design also allows for rotational motion which provides better motion correction and reduced aliasing artifacts. Some possible FOV shapes demonstrated are ellipses, ovals and rectangles, and any convex, pi-symmetric shape can be designed. Standard PROPELLER reconstruction is used with minor modifications, and results with simulated motion presented confirm the effectiveness of the motion correction with these modified FOV shapes. These new acquisition design algorithms are simple and fast enough to be computed for each individual scan. Also presented are algorithms for further scan time reductions in PROPELLER echo-planar imaging (EPI) acquisitions by varying the sample spacing in two directions within each blade. PMID:18818039

  8. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    PubMed

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifacts on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data as an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization (ML-EM) algorithm, the ordered subset-expectation maximization (OS-EM) algorithm was examined. Also, a small region of interest (ROI) setting and reverse processing were applied to improve performance. Both algorithms reduced artifacts, although gray levels slightly decreased. The OS-EM algorithm and the small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. The two alternative iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and the small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
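
    The multiplicative ML-EM/OS-EM update underlying the reconstructions discussed above can be sketched on a toy linear system as follows; the projection matrix, subset split, and iteration count are illustrative assumptions and none of the CT-specific modelling of the study is included.

      # Minimal ML-EM / OS-EM update for a toy nonnegative system A x = b.
      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.uniform(0.0, 1.0, size=(120, 64))        # toy projection matrix (nonnegative)
      x_true = rng.uniform(0.5, 1.5, size=64)
      b = A @ x_true                                   # noiseless toy projection data

      x = np.ones(64)                                  # nonnegative starting image
      subsets = np.array_split(np.arange(A.shape[0]), 4)   # 4 ordered subsets of projections
      for _ in range(20):
          for idx in subsets:                          # one multiplicative update per subset
              Asub, bsub = A[idx], b[idx]
              ratio = bsub / np.maximum(Asub @ x, 1e-12)
              x *= (Asub.T @ ratio) / np.maximum(Asub.T @ np.ones(len(idx)), 1e-12)

      print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))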

  9. AWARE - The Automated EUV Wave Analysis and REduction algorithm

    NASA Astrophysics Data System (ADS)

    Ireland, J.; Inglis, A. R.; Shih, A. Y.; Christe, S.; Mumford, S.; Hayes, L. A.; Thompson, B. J.

    2016-10-01

    Extreme ultraviolet (EUV) waves are large-scale propagating disturbances observed in the solar corona, frequently associated with coronal mass ejections and flares. Since their discovery, over two hundred papers discussing their properties, causes and physics have been published. However, their fundamental nature and the physics of their interactions with other solar phenomena are still not understood. To further the understanding of EUV waves, and their relation to other solar phenomena, we have constructed the Automated Wave Analysis and REduction (AWARE) algorithm for the detection of EUV waves over the full Sun. The AWARE algorithm is based on a novel image processing approach to isolating the bright wavefront of the EUV wave as it propagates across the corona. AWARE detects the presence of a wavefront, and measures the distance, velocity and acceleration of that wavefront across the Sun. Results from AWARE are compared to results from other algorithms for some well-known EUV wave events. Suggestions are also given for further refinements to the basic algorithm presented here.

  10. Iterative metal artefact reduction (MAR) in postsurgical chest CT: comparison of three iMAR-algorithms.

    PubMed

    Aissa, Joel; Boos, Johannes; Sawicki, Lino Morris; Heinzler, Niklas; Krzymyk, Karl; Sedlmair, Martin; Kröpil, Patric; Antoch, Gerald; Thomas, Christoph

    2017-11-01

    The purpose of this study was to evaluate the impact of three novel iterative metal artefact (iMAR) algorithms on image quality and artefact degree in chest CT of patients with a variety of thoracic metallic implants. 27 postsurgical patients with thoracic implants who underwent routine clinical chest CT between March and May 2015 were retrospectively included. Images were retrospectively reconstructed with standard weighted filtered back projection (WFBP) and with three iMAR algorithms (iMAR-Algo1 = Cardiac algorithm, iMAR-Algo2 = Pacemaker algorithm and iMAR-Algo3 = ThoracicCoils algorithm). Subjective and objective image quality were assessed. Averaged over all artefacts, artefact degree was significantly lower for iMAR-Algo1 (58.9 ± 48.5 HU), iMAR-Algo2 (52.7 ± 46.8 HU) and iMAR-Algo3 (51.9 ± 46.1 HU) compared with WFBP (91.6 ± 81.6 HU, p < 0.01 for all). All iMAR reconstructed images showed significantly lower artefacts (p < 0.01) compared with WFBP, while there was no significant difference between the iMAR algorithms. iMAR-Algo2 and iMAR-Algo3 reconstructions decreased mild and moderate artefacts compared with WFBP and iMAR-Algo1 (p < 0.01). All three iMAR algorithms led to a significant reduction of metal artefacts and an increase in overall image quality compared with WFBP in chest CT of patients with metallic implants in subjective and objective analysis. iMAR-Algo2 and iMAR-Algo3 were best for mild artefacts; iMAR-Algo1 was superior for severe artefacts. Advances in knowledge: Iterative MAR led to significant artefact reduction and increased image quality compared with WFBP in CT after implantation of thoracic devices. Adjusting iMAR algorithms to patients' metallic implants can help to improve image quality in CT.

  11. Efficacy and Clinical Utility of a High-Attenuation Object Artifact Reduction Algorithm in Flat-Detector Image Reconstruction Compared With Standard Image Reconstruction.

    PubMed

    Naehle, Claas P; Hechelhammer, Lukas; Richter, Heiko; Ryffel, Fabian; Wildermuth, Simon; Weber, Johannes

    To evaluate the effectiveness and clinical utility of a metal artifact reduction (MAR) image reconstruction algorithm for the reduction of high-attenuation object (HAO)-related image artifacts. Images were quantitatively evaluated for image noise (noiseSD and noiserange) and qualitatively for artifact severity, gray-white-matter delineation, and diagnostic confidence with conventional reconstruction and after applying a MAR algorithm. Metal artifact reduction reduces noiseSD and noiserange (median [interquartile range]) at the level of the HAO at 1-cm distance compared with conventional reconstruction (noiseSD: 60.0 [71.4] vs 12.8 [16.1] and noiserange: 262.0 [236.8] vs 72.0 [28.3]; P < 0.0001). Artifact severity (reader 1 [mean ± SD]: 1.1 ± 0.6 vs 2.4 ± 0.5, reader 2: 0.8 ± 0.6 vs 2.0 ± 0.4) at the level of the HAO and diagnostic confidence (reader 1: 1.6 ± 0.7 vs 2.6 ± 0.5, reader 2: 1.0 ± 0.6 vs 2.3 ± 0.7) significantly improved with MAR (P < 0.0001). Metal artifact reduction did not affect gray-white-matter delineation. Metal artifact reduction effectively reduces image artifacts caused by HAOs and significantly improves diagnostic confidence without worsening gray-white-matter delineation.

  12. Algorithmic and user study of an autocompletion algorithm on a large medical vocabulary.

    PubMed

    Sevenster, Merlijn; van Ommering, Rob; Qian, Yuechen

    2012-02-01

    Autocompletion supports human-computer interaction in software applications that let users enter textual data. We will be inspired by the use case in which medical professionals enter ontology concepts, catering to the ongoing demand for structured and standardized data in medicine. The goal is to give an algorithmic analysis of one particular autocompletion algorithm, called the multi-prefix matching algorithm, which suggests terms whose words' prefixes contain all words in the string typed by the user, e.g., in this sense, opt ner me matches optic nerve meningioma. Second, we aim to investigate how well it supports users entering concepts from a large and comprehensive medical vocabulary (SNOMED CT). We give a concise description of the multi-prefix algorithm, and sketch how it can be optimized to meet required response time. Performance will be compared to a baseline algorithm, which gives suggestions that extend the string typed by the user to the right, e.g. optic nerve m gives optic nerve meningioma, but opt ner me does not. We conduct a user experiment in which 12 participants are invited to complete 40 SNOMED CT terms with the baseline algorithm and another set of 40 SNOMED CT terms with the multi-prefix algorithm. Our results show that users need significantly fewer keystrokes when supported by the multi-prefix algorithm than when supported by the baseline algorithm. The proposed algorithm is a competitive candidate for searching and retrieving terms from a large medical ontology. Copyright © 2011 Elsevier Inc. All rights reserved.
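
    The multi-prefix matching rule described above can be sketched as follows; treating each term word as usable by at most one query token is an assumption of this illustration, not necessarily the paper's exact rule.

      # Sketch: a query matches a term when every query token is a prefix of some term word.
      def multi_prefix_match(query, term):
          words = term.lower().split()
          for token in query.lower().split():
              hit = next((i for i, w in enumerate(words) if w.startswith(token)), None)
              if hit is None:
                  return False
              words.pop(hit)        # assumption: each term word satisfies only one query token
          return True

      terms = ["optic nerve meningioma", "optic neuritis", "nerve sheath tumour"]
      print([t for t in terms if multi_prefix_match("opt ner me", t)])     # multi-prefix style
      print([t for t in terms if multi_prefix_match("optic nerve m", t)])  # baseline-style query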

  13. LTI system order reduction approach based on asymptotical equivalence and the Co-operation of biology-related algorithms

    NASA Astrophysics Data System (ADS)

    Ryzhikov, I. S.; Semenkin, E. S.; Akhmedova, Sh A.

    2017-02-01

    A novel order reduction method for linear time-invariant systems is described. The method is based on reducing the initial problem to an optimization problem, using the proposed model representation, and solving that problem with an efficient optimization algorithm. The proposed way of determining the model allows all parameters of the lower-order model to be identified and, by construction, provides the model with the required steady state. As a powerful optimization tool, the meta-heuristic Co-Operation of Biology-Related Algorithms was used. Experimental results showed that the proposed approach outperforms other approaches and that the reduced-order model achieves a high level of accuracy.

  14. Symbolic LTL Compilation for Model Checking: Extended Abstract

    NASA Technical Reports Server (NTRS)

    Rozier, Kristin Y.; Vardi, Moshe Y.

    2007-01-01

    In Linear Temporal Logic (LTL) model checking, we check LTL formulas representing desired behaviors against a formal model of the system designed to exhibit these behaviors. To accomplish this task, the LTL formulas must be translated into automata [21]. We focus on LTL compilation by investigating LTL satisfiability checking via a reduction to model checking. Having shown that symbolic LTL compilation algorithms are superior to explicit automata construction algorithms for this task [16], we concentrate here on seeking a better symbolic algorithm. We present experimental data comparing algorithmic variations such as normal forms, encoding methods, and variable ordering and examine their effects on performance metrics including processing time and scalability. Safety-critical systems, such as air traffic control, life support systems, hazardous environment controls, and automotive control systems, pervade our daily lives, yet testing and simulation alone cannot adequately verify their reliability [3]. Model checking is a promising approach to formal verification for safety-critical systems which involves creating a formal mathematical model of the system and translating desired safety properties into a formal specification for this model. The complement of the specification is then checked against the system model. When the model does not satisfy the specification, model-checking tools accompany this negative answer with a counterexample, which points to an inconsistency between the system and the desired behaviors and aids debugging efforts.

  15. A Fourier dimensionality reduction model for big data interferometric imaging

    NASA Astrophysics Data System (ADS)

    Vijay Kartik, S.; Carrillo, Rafael E.; Thiran, Jean-Philippe; Wiaux, Yves

    2017-06-01

    Data dimensionality reduction in radio interferometry can provide savings of computational resources for image reconstruction through reduced memory footprints and lighter computations per iteration, which is important for the scalability of imaging methods to the big data setting of the next-generation telescopes. This article sheds new light on dimensionality reduction from the perspective of the compressed sensing theory and studies its interplay with imaging algorithms designed in the context of convex optimization. We propose a post-gridding linear data embedding to the space spanned by the left singular vectors of the measurement operator, providing a dimensionality reduction below image size. This embedding preserves the null space of the measurement operator and hence its sampling properties are also preserved in light of the compressed sensing theory. We show that this can be approximated by first computing the dirty image and then applying a weighted subsampled discrete Fourier transform to obtain the final reduced data vector. This Fourier dimensionality reduction model ensures a fast implementation of the full measurement operator, essential for any iterative image reconstruction method. The proposed reduction also preserves the independent and identically distributed Gaussian properties of the original measurement noise. For convex optimization-based imaging algorithms, this is key to justify the use of the standard ℓ2-norm as the data fidelity term. Our simulations confirm that this dimensionality reduction approach can be leveraged by convex optimization algorithms with no loss in imaging quality relative to reconstructing the image from the complete visibility data set. Reconstruction results in simulation settings with no direction dependent effects or calibration errors show promising performance of the proposed dimensionality reduction. Further tests on real data are planned as an extension of the current work. matlab code implementing the proposed reduction method is available on GitHub.

  16. A fast efficient implicit scheme for the gasdynamic equations using a matrix reduction technique

    NASA Technical Reports Server (NTRS)

    Barth, T. J.; Steger, J. L.

    1985-01-01

    An efficient implicit finite-difference algorithm for the gasdynamic equations utilizing matrix reduction techniques is presented. A significant reduction in arithmetic operations is achieved without loss of the stability characteristics or generality found in the Beam and Warming approximate factorization algorithm. Steady-state solutions to the conservative Euler equations in generalized coordinates are obtained for transonic flows and used to show that the method offers computational advantages over the conventional Beam and Warming scheme. Existing Beam and Warming codes can be retrofitted with minimal effort. The theoretical extension of the matrix reduction technique to the full Navier-Stokes equations in Cartesian coordinates is presented in detail. Linear stability, using a Fourier stability analysis, is demonstrated and discussed for the one-dimensional Euler equations.

  17. Noise reduction and image enhancement using a hardware implementation of artificial neural networks

    NASA Astrophysics Data System (ADS)

    David, Robert; Williams, Erin; de Tremiolles, Ghislain; Tannhof, Pascal

    1999-03-01

    In this paper, we present a neural-network-based solution developed for noise reduction and image enhancement using the ZISC, an IBM hardware processor which implements the Restricted Coulomb Energy algorithm and the K-Nearest Neighbor algorithm. Artificial neural networks offer reduced processing time compared with classical models, adaptability, and weighted pattern learning. The goal of the developed application is image enhancement in order to restore old movies (noise reduction, focus correction, etc.), to improve digital television images, or to process images which require adaptive processing (medical images, spatial images, special effects, etc.). Image results show a quantitative improvement over the noisy image as well as the efficiency of this system. Further enhancements are being examined to improve the output of the system.
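
    In the same spirit as the RCE/KNN pattern-matching approach above (but without modelling the ZISC hardware), a K-nearest-neighbour regressor can be trained to map a noisy 3x3 neighbourhood to an estimate of its clean centre pixel; the synthetic image, noise level, and neighbourhood size below are assumptions for illustration.

      # Toy KNN-based pixel restoration: 3x3 noisy neighbourhoods are the stored patterns,
      # the clean centre pixel is the learned response. Data here are synthetic.
      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor

      rng = np.random.default_rng(3)
      clean = np.tile(np.linspace(0, 1, 64), (64, 1))          # structured toy image (gradient)
      noisy = clean + 0.1 * rng.standard_normal(clean.shape)

      def patches(img):                                        # all interior 3x3 neighbourhoods
          return np.array([img[i - 1:i + 2, j - 1:j + 2].ravel()
                           for i in range(1, img.shape[0] - 1)
                           for j in range(1, img.shape[1] - 1)])

      knn = KNeighborsRegressor(n_neighbors=5).fit(patches(noisy), clean[1:-1, 1:-1].ravel())
      restored = knn.predict(patches(noisy)).reshape(62, 62)   # KNN estimate of interior pixels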

  18. Pre-Launch Algorithms and Risk Reduction in Support of the Geostationary Lightning Mapper for GOES-R and Beyond

    NASA Technical Reports Server (NTRS)

    Goodman, Steven J.; Blakeslee, R. J.; Koshak, W.; Petersen, W.; Buechler, D. E.; Krehbiel, P. R.; Gatlin, P.; Zubrick, S.

    2008-01-01

    The Geostationary Lightning Mapper (GLM) is a single channel, near-IR imager/optical transient event detector, used to detect, locate and measure total lightning activity over the full-disk as part of a 3-axis stabilized, geostationary weather satellite system. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series with a planned launch in 2014 will carry a GLM that will provide continuous day and night observations of lightning from the west coast of Africa (GOES-E) to New Zealand (GOES-W) when the constellation is fully operational. The mission objectives for the GLM are to 1) provide continuous, full-disk lightning measurements for storm warning and nowcasting, 2) provide early warning of tornadic activity, and 3) accumulate a long-term database to track decadal changes of lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-Present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13 year data record of global lightning activity. Instrument formulation studies were completed in March 2007 and the implementation phase to develop a prototype model and up to four flight models is expected to be underway in the latter part of 2007. In parallel with the instrument development, a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 ground processing algorithms and applications. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama and the Washington DC Metropolitan area)

  19. Hybrid Optimization of Object-Based Classification in High-Resolution Images Using Continuous Ant Colony Algorithm with Emphasis on Building Detection

    NASA Astrophysics Data System (ADS)

    Tamimi, E.; Ebadi, H.; Kiani, A.

    2017-09-01

    Automatic building detection from High Spatial Resolution (HSR) images is one of the most important issues in Remote Sensing (RS). Due to the limited number of spectral bands in HSR images, using other features will lead to improved accuracy. By adding these features, the presence probability of dependent features will be increased, which leads to accuracy reduction. In addition, some parameters should be determined in Support Vector Machine (SVM) classification. Therefore, it is necessary to simultaneously determine classification parameters and select independent features according to image type. An optimization algorithm is an efficient way to solve this problem. On the other hand, pixel-based classification faces several challenges, such as producing salt-and-pepper results and high computational time for high-dimensional data. Hence, in this paper, a novel method is proposed to optimize object-based SVM classification by applying a continuous Ant Colony Optimization (ACO) algorithm. The advantages of the proposed method are a relatively high automation level, independence from image scene and type, reduced post-processing for building edge reconstruction, and improved accuracy. The proposed method was evaluated against pixel-based SVM and Random Forest (RF) classification in terms of accuracy. In comparison with optimized pixel-based SVM classification, the results showed that the proposed method improved quality factor and overall accuracy by 17% and 10%, respectively. Also, the Kappa coefficient of the proposed method was improved by 6% relative to RF classification. The processing time of the proposed method was relatively low because the unit of image analysis was the image object. These results showed the superiority of the proposed method in terms of time and accuracy.

  20. Sidelobe reduction and capacity improvement of open-loop collaborative beamforming in wireless sensor networks

    PubMed Central

    2017-01-01

    Collaborative beamforming (CBF) with a finite number of collaborating nodes (CNs) produces sidelobes that are highly dependent on the collaborating nodes’ locations. The sidelobes cause interference and affect the communication rate of unintended receivers located within the transmission range. Nulling is not possible in an open-loop CBF since the collaborating nodes are unable to receive feedback from the receivers. Hence, the overall sidelobe reduction is required to avoid interference in the directions of the unintended receivers. However, the impact of sidelobe reduction on the capacity improvement at the unintended receiver has never been reported in previous works. In this paper, the effect of peak sidelobe (PSL) reduction in CBF on the capacity of an unintended receiver is analyzed. Three meta-heuristic optimization methods are applied to perform PSL minimization, namely genetic algorithm (GA), particle swarm algorithm (PSO) and a simplified version of the PSO called the weightless swarm algorithm (WSA). An average reduction of 20 dB in PSL alongside 162% capacity improvement is achieved in the worst case scenario with the WSA optimization. It is discovered that the PSL minimization in the CBF provides capacity improvement at an unintended receiver only if the CBF cluster is small and dense. PMID:28464000

  1. Dosimetric Evaluation of Metal Artefact Reduction using Metal Artefact Reduction (MAR) Algorithm and Dual-energy Computed Tomography (CT) Method

    NASA Astrophysics Data System (ADS)

    Laguda, Edcer Jerecho

    Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT is a fast acquisition imaging device with higher spatial resolution and higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented through a gray scale of independent values in Hounsfield units (HU). High HU-valued materials represent higher density. High density materials, such as metal, tend to erroneously increase the HU values around it due to reconstruction software limitations. This problem of increased HU values due to metal presence is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of metal objects that are of clinical relevance. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts for better image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on severe artefacts on CT images. This study uses Gemstone Spectral Imaging (GSI)-based MAR algorithm, projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method. Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), and the Dual-Energy Imaging Method was developed at Duke University. All three approaches were applied in this research for dosimetric evaluation on CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes. Two sets of plans, multi-arc plans and single-arc plans, using the Volumetric Modulated Arc therapy (VMAT) technique were designed to avoid or minimize influences from high-density objects. The second part of the research used projection-based MAR Algorithm and the Dual-Energy Method. Calculated Doses (Mean, Minimum, and Maximum Doses) to the planning treatment volume (PTV) were compared and homogeneity index (HI) calculated. Results: (1) Without the GSI-based MAR application, a percent error between mean dose and the absolute dose ranging from 3.4-5.7% per fraction was observed. In contrast, the error was decreased to a range of 0.09-2.3% per fraction with the GSI-based MAR algorithm. There was a percent difference ranging from 1.7-4.2% per fraction between with and without using the GSI-based MAR algorithm. (2) A range of 0.1-3.2% difference was observed for the maximum dose values, 1.5-10.4% for minimum dose difference, and 1.4-1.7% difference on the mean doses. Homogeneity indexes (HI) ranging from 0.068-0.065 for dual-energy method and 0.063-0.141 with projection-based MAR algorithm were also calculated. Conclusion: (1) Percent error without using the GSI-based MAR algorithm may deviate as high as 5.7%. This error invalidates the goal of Radiation Therapy to provide a more precise treatment. 
Thus, GSI-based MAR algorithm was desirable due to its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of different techniques but deviation was evident on the maximum and minimum doses. The HI for the dual-energy method almost achieved the desirable null values. In conclusion, the Dual-Energy method gave better dose calculation accuracy to the planning treatment volume (PTV) for images with metal artefacts than with or without GE MAR Algorithm.

  2. Optimization of waste heat utilization in cold end system of thermal power station based on neural network algorithm

    NASA Astrophysics Data System (ADS)

    Du, Zenghui

    2018-04-01

    At present, flue gas waste heat utilization projects for coal-fired boilers are often limited by low-temperature corrosion problems and conventional PID control. The flue gas temperature cannot be reduced to the temperature at which wet desulphurization is most efficient, so heat recovery falls short of its maximum. Therefore, this paper analyzes and resolves the remaining problems of the cold end system of a thermal power station, providing solutions and theoretical support for energy saving and emission reduction, unit upgrading, and improvement of the overall efficiency of the units.

  3. On computation of Gröbner bases for linear difference systems

    NASA Astrophysics Data System (ADS)

    Gerdt, Vladimir P.

    2006-04-01

    In this paper, we present an algorithm for computing Gröbner bases of linear ideals in a difference polynomial ring over a ground difference field. The input difference polynomials generating the ideal are also assumed to be linear. The algorithm is an adaptation to difference ideals of our polynomial algorithm based on Janet-like reductions.

  4. Xi-cam: Flexible High Throughput Data Processing for GISAXS

    NASA Astrophysics Data System (ADS)

    Pandolfi, Ronald; Kumar, Dinesh; Venkatakrishnan, Singanallur; Sarje, Abinav; Krishnan, Hari; Pellouchoud, Lenson; Ren, Fang; Fournier, Amanda; Jiang, Zhang; Tassone, Christopher; Mehta, Apurva; Sethian, James; Hexemer, Alexander

    With increasing capabilities and data demand for GISAXS beamlines, supporting software is under development to handle larger data rates, volumes, and processing needs. We aim to provide a flexible and extensible approach to GISAXS data treatment as a solution to these rising needs. Xi-cam is the CAMERA platform for data management, analysis, and visualization. The core of Xi-cam is an extensible plugin-based GUI platform which provides users an interactive interface to processing algorithms. Plugins are available for SAXS/GISAXS data and data series visualization, as well as forward modeling and simulation through HipGISAXS. With Xi-cam's advanced mode, data processing steps are designed as a graph-based workflow, which can be executed locally or remotely. Remote execution utilizes HPC or de-localized resources, allowing for effective reduction of high-throughput data. Xi-cam is open-source and cross-platform. The processing algorithms in Xi-cam include parallel cpu and gpu processing optimizations, also taking advantage of external processing packages such as pyFAI. Xi-cam is available for download online.

  5. Measuring radiation dose in computed tomography using elliptic phantom and free-in-air, and evaluating iterative metal artifact reduction algorithm

    NASA Astrophysics Data System (ADS)

    Morgan, Ashraf

    The need for an accurate and reliable way of measuring patient dose in multi-row detector computed tomography (MDCT) has increased significantly. This research focused on the possibility of measuring CT dose in air to estimate the Computed Tomography Dose Index (CTDI) for routine quality control purposes. A new elliptic CTDI phantom that better represents human geometry was manufactured for investigating the effect of subject shape on measured CTDI. Monte Carlo simulation was utilized in order to determine the dose distribution in comparison to the traditional cylindrical CTDI phantom. This research also investigated the effect of Siemens Healthcare's newly developed iMAR (iterative metal artifact reduction) algorithm; an arthroplasty phantom was designed and manufactured for that purpose. The design of new phantoms was part of the research, as they mimic human geometry better than the existing CTDI phantom. The standard CTDI phantom is a right cylinder that does not adequately represent the geometry of the majority of the patient population. Any dose reduction algorithm that is used during a patient scan will not be utilized when scanning the CTDI phantom, so a better-designed phantom will allow the use of dose reduction algorithms when measuring dose, which leads to better dose estimation and/or better understanding of dose delivery. Doses from a standard CTDI phantom and the newly designed phantoms were compared to doses measured in air. Iterative reconstruction is a promising technique for MDCT dose reduction and artifact correction. Iterative reconstruction algorithms have been developed to address specific imaging tasks, as is the case with Iterative Metal Artifact Reduction (iMAR), which was developed by Siemens and is to be used with the company's future computed tomography platform. The goal of iMAR is to reduce metal artifacts when imaging patients with metal implants and to recover the CT numbers of tissues adjacent to the implant. This research evaluated the capability of iMAR to recover CT numbers and reduce noise. Also, the use of iMAR should allow the use of lower tube voltages instead of the 140 kVp frequently used to image patients with shoulder implants. The evaluations of image quality and dose reduction were carried out using an arthroplasty phantom.

  6. Use of predictive algorithms in-home monitoring of chronic obstructive pulmonary disease and asthma: A systematic review.

    PubMed

    Sanchez-Morillo, Daniel; Fernandez-Granero, Miguel A; Leon-Jimenez, Antonio

    2016-08-01

    Major reported factors associated with the limited effectiveness of home telemonitoring interventions in chronic respiratory conditions include the lack of useful early predictors, poor patient compliance and the poor performance of conventional algorithms for detecting deteriorations. This article provides a systematic review of existing algorithms and the factors associated with their performance in detecting exacerbations and supporting clinical decisions in patients with chronic obstructive pulmonary disease (COPD) or asthma. An electronic literature search in Medline, Scopus, Web of Science and the Cochrane library was conducted to identify relevant articles published between 2005 and July 2015. A total of 20 studies (16 COPD, 4 asthma) that included research about the use of algorithms in telemonitoring interventions in asthma and COPD were selected. Differences in the applied definition of exacerbation, telemonitoring duration, acquired physiological signals and symptoms, type of technology deployed and algorithms used were found. Predictive models with good clinical reliability have yet to be defined, and are an important goal for the future development of telehealth in chronic respiratory conditions. New predictive models incorporating both symptoms and physiological signals are being tested in telemonitoring interventions with positive outcomes. However, the underpinning algorithms behind these models need to be validated in larger samples of patients, for longer periods of time and with well-established protocols. In addition, further research is needed to identify novel predictors that enable the early detection of deteriorations, especially in COPD. Only then will telemonitoring achieve the aim of preventing hospital admissions, contributing to the reduction of health resource utilization and improving the quality of life of patients. © The Author(s) 2016.

  7. Application of Avco data analysis and prediction techniques (ADAPT) to prediction of sunspot activity

    NASA Technical Reports Server (NTRS)

    Hunter, H. E.; Amato, R. A.

    1972-01-01

    The results of applying Avco Data Analysis and Prediction Techniques (ADAPT) to the derivation of new algorithms for the prediction of future sunspot activity are presented. The ADAPT-derived algorithms show a factor of 2 to 3 reduction in the expected 2-sigma errors in the estimates of the 81-day running average of the Zurich sunspot numbers. The report presents: (1) the best estimates for sunspot cycles 20 and 21, (2) a comparison of the ADAPT performance with conventional techniques, and (3) specific approaches to further reduction in the errors of estimated sunspot activity and to recovery of earlier sunspot historical data. The ADAPT programs are used both to derive regression algorithms for predicting the entire 11-year sunspot cycle from the preceding two cycles and to derive extrapolation algorithms for extrapolating a given sunspot cycle based on any available portion of the cycle.

  8. A Laplacian based image filtering using switching noise detector.

    PubMed

    Ranjbaran, Ali; Hassan, Anwar Hasni Abu; Jafarpour, Mahboobe; Ranjbaran, Bahar

    2015-01-01

    This paper presents a Laplacian-based image filtering method. Using a local noise estimator function in an energy-functional minimization scheme, we show that the Laplacian, which is traditionally known as an edge detection operator, can also be used for noise removal. The algorithm can be implemented on a 3x3 window and is easily tuned by the number of iterations. Image denoising reduces to decrementing each pixel value by its Laplacian value weighted by the local noise estimator. The only parameter controlling smoothness is the number of iterations. The noise reduction quality of the introduced method is evaluated and compared with classic algorithms such as Wiener and total-variation-based filters for Gaussian noise, and with the state-of-the-art BM3D method on selected images. The algorithm is simple, fast, and comparable with many classic denoising algorithms for Gaussian noise.
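    A minimal sketch of this kind of iterative, Laplacian-weighted smoothing is given below, assuming a simple normalised local-variance noise estimator; the function name, step size, and exact weighting are illustrative stand-ins rather than the authors' implementation.

    ```python
    import numpy as np
    from scipy.ndimage import convolve, uniform_filter

    def laplacian_denoise(img, n_iter=20, step=0.2):
        """Iteratively reduce each pixel by its 3x3 Laplacian response, weighted by
        a local noise estimate (an illustrative stand-in for the paper's estimator)."""
        lap_kernel = np.array([[ 0, -1,  0],
                               [-1,  4, -1],
                               [ 0, -1,  0]], dtype=float)  # negative 5-point Laplacian
        u = img.astype(float).copy()
        for _ in range(n_iter):
            lap = convolve(u, lap_kernel, mode='reflect')
            # Local noise estimator: normalised local variance in a 3x3 window.
            mean = uniform_filter(u, size=3)
            var = np.maximum(uniform_filter(u**2, size=3) - mean**2, 0.0)
            w = var / (var.max() + 1e-12)
            # Subtracting the weighted (negative) Laplacian acts as a damped diffusion step.
            u -= step * w * lap
        return u

    # Usage: denoised = laplacian_denoise(noisy_image, n_iter=30)
    ```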

  9. Optimized scheme in coal-fired boiler combustion based on information entropy and modified K-prototypes algorithm

    NASA Astrophysics Data System (ADS)

    Gu, Hui; Zhu, Hongxia; Cui, Yanfeng; Si, Fengqi; Xue, Rui; Xi, Han; Zhang, Jiayu

    2018-06-01

    An integrated combustion optimization scheme is proposed that jointly considers coal-fired boiler combustion efficiency and outlet NOx emissions. Continuous attribute discretization and reduction techniques are handled as optimization preparation by the E-Cluster and C_RED methods, in which the number of segments does not need to be provided in advance and adapts continuously to the data characteristics. To obtain multi-objective results with a clustering method for mixed data, a modified K-prototypes algorithm is then proposed. This algorithm can be divided into two stages: a K-prototypes stage with self-adaptation of the number of clusters, and a clustering stage for multi-objective optimization. Field tests were carried out at a 660 MW coal-fired boiler to provide real data as a case study for controllable attribute discretization and reduction in the boiler system and for obtaining optimization parameters under the multi-objective rule [max η_b, min y_NOx].
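    As a rough illustration of the clustering step, the following sketch implements the standard K-prototypes dissimilarity (squared Euclidean distance on numeric attributes plus a gamma-weighted mismatch count on categorical attributes) with a plain alternating assignment/update loop; the paper's modified, self-adaptive variant and the E-Cluster/C_RED preparation steps are not reproduced.

    ```python
    import numpy as np

    def kprototypes(X_num, X_cat, k, gamma=1.0, n_iter=20, seed=0):
        """Minimal K-prototypes for mixed data: numeric distance + gamma * mismatches."""
        rng = np.random.default_rng(seed)
        n = X_num.shape[0]
        idx = rng.choice(n, size=k, replace=False)
        cent_num, cent_cat = X_num[idx].astype(float), X_cat[idx].copy()
        for _ in range(n_iter):
            d_num = ((X_num[:, None, :] - cent_num[None, :, :]) ** 2).sum(axis=2)
            d_cat = (X_cat[:, None, :] != cent_cat[None, :, :]).sum(axis=2)
            labels = np.argmin(d_num + gamma * d_cat, axis=1)
            for j in range(k):
                members = labels == j
                if not members.any():
                    continue
                cent_num[j] = X_num[members].mean(axis=0)
                # Categorical prototype: per-column mode of the cluster members.
                for c in range(X_cat.shape[1]):
                    vals, counts = np.unique(X_cat[members, c], return_counts=True)
                    cent_cat[j, c] = vals[np.argmax(counts)]
        return labels, cent_num, cent_cat
    ```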

  10. Colorimetric calibration of wound photography with off-the-shelf devices

    NASA Astrophysics Data System (ADS)

    Bala, Subhankar; Sirazitdinova, Ekaterina; Deserno, Thomas M.

    2017-03-01

    Digital cameras are now widely used for photographic documentation in the medical sciences. However, the color reproducibility of the same object suffers under different illumination and lighting conditions. This variation in color representation is problematic when the images are used for segmentation and measurements based on color thresholds. In this paper, motivated by photographic follow-up of chronic wounds, we assess the impact of (i) gamma correction, (ii) white balancing, (iii) background unification, and (iv) reference card-based color correction. Automatic gamma correction and white balancing are applied to support the calibration procedure, where gamma correction is a nonlinear color transform. For unevenly illuminated images, non-uniform illumination correction is applied. In the last step, we apply colorimetric calibration using a reference color card of 24 patches with known colors. A lattice detection algorithm is used for locating the card. The least squares algorithm is applied for affine color calibration in the RGB model. We tested the algorithm on images with seven different types of illumination, with and without flash, using three different off-the-shelf cameras including smartphones. We analyzed the spread of the resulting color values of selected color patches before and after applying the calibration. Additionally, we checked the individual contribution of each step of the whole calibration process. Using all steps, we were able to achieve a maximum 81% reduction in the standard deviation of color patch values in the resulting images compared to the original images. This supports manual as well as automatic quantitative wound assessment with off-the-shelf devices.
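    The reference-card step can be sketched as an ordinary least-squares fit of a 3x4 affine transform from measured patch colors to the known reference colors, as below; patch localisation is assumed to have been done already, and all names are illustrative.

    ```python
    import numpy as np

    def fit_affine_color_correction(measured, reference):
        """Fit a 4x3 affine map taking measured patch RGBs (Nx3) to reference RGBs (Nx3)."""
        A = np.hstack([measured, np.ones((measured.shape[0], 1))])  # N x 4 (adds offset term)
        M, *_ = np.linalg.lstsq(A, reference, rcond=None)           # least-squares fit
        return M

    def apply_correction(image, M):
        """Apply the affine correction to an HxWx3 float image in [0, 1]."""
        h, w, _ = image.shape
        flat = image.reshape(-1, 3)
        flat_h = np.hstack([flat, np.ones((flat.shape[0], 1))])
        return np.clip(flat_h @ M, 0.0, 1.0).reshape(h, w, 3)

    # Usage with the 24 located card patches:
    # M = fit_affine_color_correction(measured_patches, reference_patches)
    # corrected = apply_correction(img_float, M)
    ```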

  11. Do All Roads Lead to Rome? ("or" Reductions for Dummy Travelers)

    ERIC Educational Resources Information Center

    Kilpelainen, Pekka

    2010-01-01

    Reduction is a central ingredient of computational thinking, and an important tool in algorithm design, in computability theory, and in complexity theory. Reduction has been recognized to be a difficult topic for students to learn. Previous studies on teaching reduction have concentrated on its use in special courses on the theory of computing. As…

  12. SU-C-207-05: A Comparative Study of Noise-Reduction Algorithms for Low-Dose Cone-Beam Computed Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukherjee, S; Yao, W

    2015-06-15

    Purpose: To study different noise-reduction algorithms and to improve the image quality of low-dose cone-beam CT for patient positioning in radiation therapy. Methods: In low-dose cone-beam CT, the reconstructed image is contaminated with excessive quantum noise. In this study, three well-developed noise reduction algorithms, namely (a) the penalized weighted least squares (PWLS) method, (b) the split-Bregman total variation (TV) method, and (c) the compressed sensing (CS) method, were studied and applied to images of a computer-simulated “Shepp-Logan” phantom and a physical CATPHAN phantom. Up to 20% additive Gaussian noise was added to the Shepp-Logan phantom. The CATPHAN phantom was scanned by a Varian OBI system with 100 kVp, 4 ms and 20 mA. For comparing the performance of these algorithms, the peak signal-to-noise ratio (PSNR) of the denoised images was computed. Results: The algorithms were shown to have potential for reducing the noise level of low-dose CBCT images. For the Shepp-Logan phantom, improvements in PSNR of 2 dB, 3.1 dB and 4 dB were observed using PWLS, TV and CS, respectively, while for CATPHAN the improvements were 1.2 dB, 1.8 dB and 2.1 dB, respectively. Conclusion: Penalized weighted least squares, total variation and compressed sensing methods were studied and compared for reducing the noise in a simulated phantom and a physical phantom scanned by low-dose CBCT. The techniques have shown promising results for noise reduction in terms of PSNR improvement. However, reducing the noise without compromising the smoothness and resolution of the image needs more extensive research.
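    The PSNR figure of merit used for the comparison can be computed as in the following sketch (a standard definition, not code taken from the study).

    ```python
    import numpy as np

    def psnr(reference, test, data_range=None):
        """Peak signal-to-noise ratio in dB between a reference and a test image."""
        reference = np.asarray(reference, dtype=float)
        test = np.asarray(test, dtype=float)
        if data_range is None:
            data_range = reference.max() - reference.min()
        mse = np.mean((reference - test) ** 2)
        return 10.0 * np.log10((data_range ** 2) / mse)

    # Example: PSNR improvement of a denoised slice over the noisy input.
    # gain_db = psnr(ground_truth, denoised) - psnr(ground_truth, noisy)
    ```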

  13. A new algorithm for reducing the workload of experts in performing systematic reviews.

    PubMed

    Matwin, Stan; Kouznetsov, Alexandre; Inkpen, Diana; Frunza, Oana; O'Blenis, Peter

    2010-01-01

    To determine whether a factorized version of the complement naïve Bayes (FCNB) classifier can reduce the time spent by experts reviewing journal articles for inclusion in systematic reviews of drug class efficacy for disease treatment. The proposed classifier was evaluated on a test collection built from 15 systematic drug class reviews used in previous work. The FCNB classifier was constructed to classify each article as containing high-quality, drug class-specific evidence or not. Weight engineering (WE) techniques were added to reduce underestimation for Medical Subject Headings (MeSH)-based and Publication Type (PubType)-based features. Cross-validation experiments were performed to evaluate the classifier's parameters and performance. Work saved over sampling (WSS) at no less than a 95% recall was used as the main measure of performance. The minimum workload reduction for a systematic review for one topic, achieved with a FCNB/WE classifier, was 8.5%; the maximum was 62.2% and the average over the 15 topics was 33.5%. This is 15.0% higher than the average workload reduction obtained using a voting perceptron-based automated citation classification system. The FCNB/WE classifier is simple, easy to implement, and produces significantly better results in reducing the workload than previously achieved. The results support it being a useful algorithm for machine-learning-based automation of systematic reviews of drug class efficacy for disease treatment.
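    Work saved over sampling at a fixed recall level is commonly computed as (TN + FN)/N − (1 − recall); a minimal sketch under that standard definition is given below, with labels assumed to be 1 for relevant citations.

    ```python
    def work_saved_over_sampling(y_true, y_pred, recall_target=0.95):
        """WSS at a given recall level: (TN + FN) / N - (1 - recall).
        Assumes binary labels where 1 = relevant (include) and 0 = irrelevant."""
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        n = len(y_true)
        return (tn + fn) / n - (1.0 - recall_target)

    # Example: y_true and y_pred collected over cross-validation folds of the classifier.
    ```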

  14. Detection of segments with fetal QRS complex from abdominal maternal ECG recordings using support vector machine

    NASA Astrophysics Data System (ADS)

    Delgado, Juan A.; Altuve, Miguel; Nabhan Homsi, Masun

    2015-12-01

    This paper introduces a robust method based on the Support Vector Machine (SVM) algorithm to detect the presence of Fetal QRS (fQRS) complexes in electrocardiogram (ECG) recordings provided by the PhysioNet/CinC Challenge 2013. ECG signals are first segmented into contiguous frames of 250 ms duration and then labeled into six classes. Fetal segments are tagged according to the position of the fQRS complex within each one. Next, segment feature extraction and dimensionality reduction are performed by applying principal component analysis to the Haar-wavelet transform coefficients. After that, two sub-datasets are generated to separate representative segments from atypical ones. The imbalanced-class problem is dealt with by applying sampling without replacement to each sub-dataset. Finally, two SVMs are trained and cross-validated using the two balanced sub-datasets separately. Experimental results show that the proposed approach achieves high performance rates in fetal heartbeat detection, reaching up to 90.95% accuracy, 92.16% sensitivity, 88.51% specificity, 94.13% positive predictive value and 84.96% negative predictive value. A comparative study is also carried out to show the performance of two other machine learning algorithms for fQRS complex estimation, namely K-nearest neighbors and Bayesian networks.
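    A minimal sketch of the described pipeline (250 ms segmentation, Haar-wavelet features, PCA, SVM) is shown below on synthetic data; the sampling rate, labels, and parameters are placeholders, not the challenge data or the authors' settings.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def haar_step(x):
        """One level of the Haar transform: pairwise averages and differences."""
        even, odd = x[..., ::2], x[..., 1::2]
        return np.concatenate([(even + odd) / 2.0, (even - odd) / 2.0], axis=-1)

    def segment(signal, fs, win_ms=250):
        """Cut an ECG channel into contiguous, non-overlapping 250 ms frames."""
        n = int(fs * win_ms / 1000)
        n_frames = len(signal) // n
        return signal[:n_frames * n].reshape(n_frames, n)

    # Illustrative training on synthetic abdominal-ECG-like data.
    fs = 1000
    rng = np.random.default_rng(0)
    frames = segment(rng.standard_normal(fs * 60), fs)        # 240 frames of 250 samples
    labels = rng.integers(0, 2, size=frames.shape[0])         # placeholder fQRS tags

    features = haar_step(frames)                              # Haar-wavelet features
    clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
    clf.fit(features, labels)
    print("training accuracy:", clf.score(features, labels))
    ```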

  15. A Toolbox to Improve Algorithms for Insulin-Dosing Decision Support

    PubMed Central

    Donsa, K.; Plank, J.; Schaupp, L.; Mader, J. K.; Truskaller, T.; Tschapeller, B.; Höll, B.; Spat, S.; Pieber, T. R.

    2014-01-01

    Summary Background Standardized insulin order sets for subcutaneous basal-bolus insulin therapy are recommended by clinical guidelines for the inpatient management of diabetes. The algorithm-based GlucoTab system electronically assists health care personnel by supporting clinical workflow and providing insulin-dose suggestions. Objective To develop a toolbox for improving clinical decision-support algorithms. Methods The toolbox has three main components. 1) Data preparation: Data from several heterogeneous sources is extracted, cleaned and stored in a uniform data format. 2) Simulation: The effects of algorithm modifications are estimated by simulating treatment workflows based on real data from clinical trials. 3) Analysis: Algorithm performance is measured, analyzed and simulated by using data from three clinical trials with a total of 166 patients. Results Use of the toolbox led to algorithm improvements as well as the detection of potential individualized subgroup-specific algorithms. Conclusion These results are a first step towards individualized algorithm modifications for specific patient subgroups. PMID:25024768

  16. Quantum algorithm for support matrix machines

    NASA Astrophysics Data System (ADS)

    Duan, Bojia; Yuan, Jiabin; Liu, Ying; Li, Dan

    2017-09-01

    We propose a quantum algorithm for support matrix machines (SMMs) that efficiently addresses an image classification problem by introducing a least-squares reformulation. This algorithm consists of two core subroutines: a quantum matrix inversion (Harrow-Hassidim-Lloyd, HHL) algorithm and a quantum singular value thresholding (QSVT) algorithm. The two algorithms can be implemented on a universal quantum computer with complexity O[log(npq)] and O[log(pq)], respectively, where n is the number of training samples and pq is the size of the feature space. By iterating the algorithms, we can find the parameters for the SMM classification model. Our analysis shows that both the HHL and QSVT algorithms achieve an exponential speedup over their classical counterparts.

  17. Derivation and validation of the Personal Support Algorithm: an evidence-based framework to inform allocation of personal support services in home and community care.

    PubMed

    Sinn, Chi-Ling Joanna; Jones, Aaron; McMullan, Janet Legge; Ackerman, Nancy; Curtin-Telegdi, Nancy; Eckel, Leslie; Hirdes, John P

    2017-11-25

    Personal support services enable many individuals to stay in their homes, but there are no standard ways to classify need for functional support in home and community care settings. The goal of this project was to develop an evidence-based clinical tool to inform service planning while allowing for flexibility in care coordinator judgment in response to patient and family circumstances. The sample included 128,169 Ontario home care patients assessed in 2013 and 25,800 Ontario community support clients assessed between 2014 and 2016. Independent variables were drawn from the Resident Assessment Instrument-Home Care and interRAI Community Health Assessment that are standardised, comprehensive, and fully compatible clinical assessments. Clinical expertise and regression analyses identified candidate variables that were entered into decision tree models. The primary dependent variable was the weekly hours of personal support calculated based on the record of billed services. The Personal Support Algorithm classified need for personal support into six groups with a 32-fold difference in average billed hours of personal support services between the highest and lowest group. The algorithm explained 30.8% of the variability in billed personal support services. Care coordinators and managers reported that the guidelines based on the algorithm classification were consistent with their clinical judgment and current practice. The Personal Support Algorithm provides a structured yet flexible decision-support framework that may facilitate a more transparent and equitable approach to the allocation of personal support services.

  18. Stable orthogonal local discriminant embedding for linear dimensionality reduction.

    PubMed

    Gao, Quanxue; Ma, Jingjie; Zhang, Hailin; Gao, Xinbo; Liu, Yamin

    2013-07-01

    Manifold learning is widely used in machine learning and pattern recognition. However, manifold learning only considers the similarity of samples belonging to the same class and ignores the within-class variation of the data, which impairs the generalization and stability of the algorithms. For this purpose, we construct an adjacency graph to model the intraclass variation that characterizes the most important properties, such as the diversity of patterns, and then incorporate the diversity into the discriminant objective function for linear dimensionality reduction. Finally, we introduce an orthogonality constraint on the basis vectors and propose an orthogonal algorithm called stable orthogonal local discriminant embedding. Experimental results on several standard image databases demonstrate the effectiveness of the proposed dimensionality reduction approach.

  19. An efficient General Transit Feed Specification (GTFS) enabled algorithm for dynamic transit accessibility analysis.

    PubMed

    Fayyaz S, S Kiavash; Liu, Xiaoyue Cathy; Zhang, Guohui

    2017-01-01

    The social functions of urbanized areas are highly dependent on and supported by convenient access to public transportation systems, particularly for less privileged populations with restrained auto ownership. To accurately evaluate public transit accessibility, it is critical to capture the spatiotemporal variation of transit services. This can be achieved by measuring the shortest paths or minimum travel time between origin-destination (OD) pairs at each time-of-day (e.g. every minute). In recent years, General Transit Feed Specification (GTFS) data has been gaining popularity for between-station travel time estimation due to its interoperability in spatiotemporal analytics. Many software packages, such as ArcGIS, provide toolboxes to enable travel time estimation with GTFS. They perform reasonably well in calculating travel time between OD pairs for a specific time-of-day (e.g. 8:00 AM), yet they can become computationally inefficient and impractical as the data dimensions increase (e.g. all times-of-day and large networks). In this paper, we introduce a new algorithm that is computationally elegant and mathematically efficient to address this issue. An open-source toolbox written in C++ is developed to implement the algorithm. We implemented the algorithm on the City of St. George's transit network to showcase the accessibility analysis enabled by the toolbox. The experimental evidence shows a significant reduction in computational time. The proposed algorithm and toolbox are easily transferable to other transit networks, allowing transit agencies and researchers to perform high-resolution transit performance analysis.
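    As a rough illustration of why timetable-based earliest-arrival queries can be made fast, the sketch below runs a connection-scan-style pass over departure-time-sorted connections; it ignores transfer times and is not the authors' C++ toolbox.

    ```python
    import math
    from collections import namedtuple

    # A "connection" is one hop of a trip: depart one stop, arrive at the next.
    Connection = namedtuple("Connection", "dep_stop arr_stop dep_time arr_time")

    def earliest_arrival(connections, source, target, start_time):
        """Earliest-arrival query via a single pass over connections sorted by
        departure time (Connection Scan style). Times are seconds after midnight."""
        best = {source: start_time}
        for c in sorted(connections, key=lambda c: c.dep_time):
            if best.get(c.dep_stop, math.inf) <= c.dep_time:       # reachable in time
                if c.arr_time < best.get(c.arr_stop, math.inf):    # improves arrival
                    best[c.arr_stop] = c.arr_time
        return best.get(target, math.inf)

    # Toy timetable between stops A, B, C (no transfer buffer modelled).
    conns = [
        Connection("A", "B", 8 * 3600, 8 * 3600 + 600),
        Connection("B", "C", 8 * 3600 + 900, 8 * 3600 + 1500),
        Connection("A", "C", 8 * 3600 + 300, 8 * 3600 + 2400),
    ]
    print(earliest_arrival(conns, "A", "C", 8 * 3600))  # 30300 s, i.e. 08:25 via B
    ```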

  20. An efficient General Transit Feed Specification (GTFS) enabled algorithm for dynamic transit accessibility analysis

    PubMed Central

    Fayyaz S., S. Kiavash; Zhang, Guohui

    2017-01-01

    The social functions of urbanized areas are highly dependent on and supported by convenient access to public transportation systems, particularly for less privileged populations with restrained auto ownership. To accurately evaluate public transit accessibility, it is critical to capture the spatiotemporal variation of transit services. This can be achieved by measuring the shortest paths or minimum travel time between origin-destination (OD) pairs at each time-of-day (e.g. every minute). In recent years, General Transit Feed Specification (GTFS) data has been gaining popularity for between-station travel time estimation due to its interoperability in spatiotemporal analytics. Many software packages, such as ArcGIS, provide toolboxes to enable travel time estimation with GTFS. They perform reasonably well in calculating travel time between OD pairs for a specific time-of-day (e.g. 8:00 AM), yet they can become computationally inefficient and impractical as the data dimensions increase (e.g. all times-of-day and large networks). In this paper, we introduce a new algorithm that is computationally elegant and mathematically efficient to address this issue. An open-source toolbox written in C++ is developed to implement the algorithm. We implemented the algorithm on the City of St. George's transit network to showcase the accessibility analysis enabled by the toolbox. The experimental evidence shows a significant reduction in computational time. The proposed algorithm and toolbox are easily transferable to other transit networks, allowing transit agencies and researchers to perform high-resolution transit performance analysis. PMID:28981544

  1. On-the-fly reduction of open loops

    NASA Astrophysics Data System (ADS)

    Buccioni, Federico; Pozzorini, Stefano; Zoller, Max

    2018-01-01

    Building on the open-loop algorithm we introduce a new method for the automated construction of one-loop amplitudes and their reduction to scalar integrals. The key idea is that the factorisation of one-loop integrands in a product of loop segments makes it possible to perform various operations on-the-fly while constructing the integrand. Reducing the integrand on-the-fly, after each segment multiplication, the construction of loop diagrams and their reduction are unified in a single numerical recursion. In this way we entirely avoid objects with high tensor rank, thereby reducing the complexity of the calculations in a drastic way. Thanks to the on-the-fly approach, which is applied also to helicity summation and for the merging of different diagrams, the speed of the original open-loop algorithm can be further augmented in a very significant way. Moreover, addressing spurious singularities of the employed reduction identities by means of simple expansions in rank-two Gram determinants, we achieve a remarkably high level of numerical stability. These features of the new algorithm, which will be made publicly available in a forthcoming release of the OpenLoops program, are particularly attractive for NLO multi-leg and NNLO real-virtual calculations.

  2. Iterative metal artefact reduction in CT: can dedicated algorithms improve image quality after spinal instrumentation?

    PubMed

    Aissa, J; Thomas, C; Sawicki, L M; Caspers, J; Kröpil, P; Antoch, G; Boos, J

    2017-05-01

    To investigate the value of dedicated computed tomography (CT) iterative metal artefact reduction (iMAR) algorithms in patients after spinal instrumentation. Post-surgical spinal CT images of 24 patients performed between March 2015 and July 2016 were retrospectively included. Images were reconstructed with standard weighted filtered back projection (WFBP) and with two dedicated iMAR algorithms (iMAR-Algo1, adjusted to spinal instrumentations and iMAR-Algo2, adjusted to large metallic hip implants) using a medium smooth kernel (B30f) and a sharp kernel (B70f). Frequencies of density changes were quantified to assess objective image quality. Image quality was rated subjectively by evaluating the visibility of critical anatomical structures including the central canal, the spinal cord, neural foramina, and vertebral bone. Both iMAR algorithms significantly reduced artefacts from metal compared with WFBP (p<0.0001). Results of subjective image analysis showed that both iMAR algorithms led to an improvement in visualisation of soft-tissue structures (median iMAR-Algo1=3; interquartile range [IQR]:1.5-3; iMAR-Algo2=4; IQR: 3.5-4) and bone structures (iMAR-Algo1=3; IQR:3-4; iMAR-Algo2=4; IQR:4-5) compared to WFBP (soft tissue: median 2; IQR: 0.5-2 and bone structures: median 2; IQR: 1-3; p<0.0001). Compared with iMAR-Algo1, objective artefact reduction and subjective visualisation of soft-tissue and bone structures were improved with iMAR-Algo2 (p<0.0001). Both iMAR algorithms reduced artefacts compared with WFBP, however, the iMAR algorithm with dedicated settings for large metallic implants was superior to the algorithm specifically adjusted to spinal implants. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  3. Artifact reduction of different metallic implants in flat detector C-arm CT.

    PubMed

    Hung, S-C; Wu, C-C; Lin, C-J; Guo, W-Y; Luo, C-B; Chang, F-C; Chang, C-Y

    2014-07-01

    Flat detector CT has been increasingly used as a follow-up examination after endovascular intervention. Metal artifact reduction has been successfully demonstrated in coil mass cases, but only in a small series. We attempted to objectively and subjectively evaluate the feasibility of metal artifact reduction with various metallic objects and coil lengths. We retrospectively reprocessed the flat detector CT data of 28 patients (15 men, 13 women; mean age, 55.6 years) after they underwent endovascular treatment (20 coiling ± stent placement, 6 liquid embolizers) or shunt drainage (n = 2) between January 2009 and November 2011 by using a metal artifact reduction correction algorithm. We measured CT value ranges and noise by using region-of-interest methods, and 2 experienced neuroradiologists rated the degrees of improved imaging quality and artifact reduction by comparing uncorrected and corrected images. After we applied the metal artifact reduction algorithm, the CT value ranges and the noise were substantially reduced (1815.3 ± 793.7 versus 231.7 ± 95.9 and 319.9 ± 136.6 versus 45.9 ± 14.0; both P < .001) regardless of the types of metallic objects and various sizes of coil masses. The rater study achieved an overall improvement of imaging quality and artifact reduction (85.7% and 78.6% of cases by 2 raters, respectively), with the greatest improvement in the coiling group, moderate improvement in the liquid embolizers, and the smallest improvement in ventricular shunting (overall agreement, 0.857). The metal artifact reduction algorithm substantially reduced artifacts and improved the objective image quality in every studied case. It also allowed improved diagnostic confidence in most cases. © 2014 by American Journal of Neuroradiology.

  4. A metal artifact reduction algorithm in CT using multiple prior images by recursive active contour segmentation

    PubMed Central

    Nam, Haewon

    2017-01-01

    We propose a novel metal artifact reduction (MAR) algorithm for CT images that completes a corrupted sinogram along the metal trace region. When metal implants are located inside the field of view, they create a barrier to the transmitted X-ray beam due to the high attenuation of metals, which significantly degrades image quality. To fill in the metal trace region efficiently, the proposed algorithm uses multiple prior images with residual error compensation in sinogram space. Multiple prior images are generated by applying a recursive active contour (RAC) segmentation algorithm to the pre-corrected image acquired by MAR with linear interpolation, where the number of prior images is controlled by RAC depending on the object complexity. A sinogram basis is then acquired by forward projection of the prior images. The metal trace region of the original sinogram is replaced by a linearly combined sinogram of the prior images. Then, an additional correction in the metal trace region is performed to compensate for the residual errors caused by non-ideal data acquisition conditions. The performance of the proposed MAR algorithm is compared with MAR with linear interpolation and the normalized MAR algorithm using simulated and experimental data. The results show that the proposed algorithm outperforms the other MAR algorithms, especially when the object is complex with multiple bone objects. PMID:28604794

  5. Validation Methodology to Allow Simulated Peak Reduction and Energy Performance Analysis of Residential Building Envelope with Phase Change Materials: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.

    2012-08-01

    Phase change materials (PCM) represent a potential technology to reduce peak loads and HVAC energy consumption in residential buildings. This paper summarizes NREL efforts to obtain accurate energy simulations when PCMs are modeled in residential buildings: the overall methodology to verify and validate the Conduction Finite Difference (CondFD) and PCM algorithms in EnergyPlus is presented in this study. It also shows preliminary results for three residential building enclosure technologies containing PCM: PCM-enhanced insulation, PCM-impregnated drywall and thin PCM layers. The results are compared based on predicted peak reduction and energy savings using two algorithms in EnergyPlus: the PCM and Conduction Finite Difference (CondFD) algorithms.

  6. Providing a parallel and distributed capability for JMASS using SPEEDES

    NASA Astrophysics Data System (ADS)

    Valinski, Maria; Driscoll, Jonathan; McGraw, Robert M.; Meyer, Bob

    2002-07-01

    The Joint Modeling And Simulation System (JMASS) is a Tri-Service simulation environment that supports engineering and engagement-level simulations. As JMASS is expanded to support other Tri-Service domains, the current set of modeling services must be expanded for High Performance Computing (HPC) applications by adding support for advanced time-management algorithms, parallel and distributed topologies, and high-speed communications. By providing support for these services, JMASS can better address modeling domains requiring parallel, computationally intense calculations such as clutter, vulnerability and lethality calculations, and underwater-based scenarios. A risk reduction effort implementing some HPC services for JMASS using the SPEEDES (Synchronous Parallel Environment for Emulation and Discrete Event Simulation) Simulation Framework has recently concluded. As an artifact of the JMASS-SPEEDES integration, not only can HPC functionality be brought to the JMASS program through SPEEDES, but an additional HLA-based capability can be demonstrated that further addresses interoperability issues. The JMASS-SPEEDES integration provided a means of adding HLA capability to preexisting JMASS scenarios through an implementation of the standard JMASS port communication mechanism that allows players to communicate.

  7. StreakDet data processing and analysis pipeline for space debris optical observations

    NASA Astrophysics Data System (ADS)

    Virtanen, Jenni; Flohrer, Tim; Muinonen, Karri; Granvik, Mikael; Torppa, Johanna; Poikonen, Jonne; Lehti, Jussi; Santti, Tero; Komulainen, Tuomo; Naranen, Jyri

    We describe a novel data processing and analysis pipeline for optical observations of space debris. The monitoring of space object populations requires reliable acquisition of observational data to support the development and validation of space debris environment models and the build-up and maintenance of a catalogue of orbital elements. In addition, data is needed for the assessment of conjunction events and for the support of contingency situations or launches. The currently available, mature image processing algorithms for detection and astrometric reduction of optical data cover objects that cross the sensor field-of-view comparatively slowly, and within a rather narrow, predefined range of angular velocities. By applying specific tracking techniques, the objects appear point-like or as short trails in the exposures. However, the general survey scenario is always a “track before detect” problem, resulting in streaks, i.e., object trails of arbitrary lengths, in the images. The scope of the ESA-funded StreakDet (Streak detection and astrometric reduction) project is to investigate solutions for detecting and reducing streaks from optical images, particularly in the low signal-to-noise ratio (SNR) domain, where algorithms are not readily available yet. For long streaks, the challenge is to extract precise position information and related registered epochs with sufficient precision. Although some considerations for low-SNR processing of streak-like features are available in the current image processing and computer vision literature, there is a need to discuss and compare these approaches for space debris analysis, in order to develop and evaluate prototype implementations. In the StreakDet project, we develop algorithms applicable to single images (as compared to consecutive frames of the same field) obtained with any observing scenario, including space-based surveys and both low- and high-altitude populations. The proposed processing pipeline starts from the segmentation of the acquired image (i.e., the extraction of all sources), followed by the astrometric and photometric characterization of the candidate streaks, and ends with orbital validation of the detected streaks. A central concept of the pipeline is streak classification, which guides the actual characterization process by aiming to identify the interesting sources and to filter out the uninteresting ones, as well as by allowing the tailoring of algorithms for specific streak classes (e.g. point-like vs. long, disintegrated streaks). To validate the single-image detections, the processing is finalized by orbital analysis, resulting in a preliminary orbital classification (Earth-bound vs. non-Earth-bound orbit) for the detected streaks.

  8. An Overview of a Trajectory-Based Solution for En Route and Terminal Area Self-Spacing: Fourth Revision

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S.

    2013-01-01

    This paper presents an overview of the fourth major revision to an algorithm specifically designed to support NASA's Airborne Precision Spacing concept. This airborne self-spacing concept is trajectory-based, allowing for spacing operations prior to the aircraft being on a common path. Because this algorithm is trajectory-based, it also has the inherent ability to support required-time-of-arrival (RTA) operations. This algorithm was also designed specifically to support a standalone, non-integrated implementation in the spacing aircraft. Revisions to this algorithm were based on a change to the expected operational environment.

  9. First-cycle blood counts and subsequent neutropenia, dose reduction, or delay in early-stage breast cancer therapy.

    PubMed

    Silber, J H; Fridman, M; DiPaola, R S; Erder, M H; Pauly, M V; Fox, K R

    1998-07-01

    If patients could be ranked according to their projected need for supportive care therapy, then more efficient and less costly treatment algorithms might be developed. This work reports on the construction of a model of neutropenia, dose reduction, or delay that rank-orders patients according to their need for costly supportive care such as granulocyte growth factors. A case series and consecutive sample of patients treated for breast cancer were studied. Patients had received standard-dose adjuvant chemotherapy for early-stage nonmetastatic breast cancer and were treated by four medical oncologists. Using 95 patients and validated with 80 additional patients, development models were constructed to predict one or more of the following events: neutropenia (absolute neutrophil count [ANC] < or = 250/microL), dose reduction > or = 15% of that scheduled, or treatment delay > or = 7 days. Two approaches to modeling were attempted. The pretreatment approach used only pretreatment predictors such as chemotherapy regimen and radiation history; the conditional approach included, in addition, blood count information obtained in the first cycle of treatment. The pretreatment model was unsuccessful at predicting neutropenia, dose reduction, or delay (c-statistic = 0.63). Conditional models were good predictors of subsequent events after cycle 1 (c-statistic = 0.87 and 0.78 for development and validation samples, respectively). The depth of the first-cycle ANC was an excellent predictor of events in subsequent cycles (P = .0001 to .004). Chemotherapy plus radiation also increased the risk of subsequent events (P = .0011 to .0901). Decline in hemoglobin (HGB) level during the first cycle of therapy was a significant predictor of events in the development study (P = .0074 and .0015), and although the trend was similar in the validation study, HGB decline failed to reach statistical significance. It is possible to rank patients according to their need of supportive care based on blood counts observed in the first cycle of therapy. Such rankings may aid in the choice of appropriate supportive care for patients with early-stage breast cancer.

  10. Analysis of miRNA expression profile based on SVM algorithm

    NASA Astrophysics Data System (ADS)

    Ting-ting, Dai; Chang-ji, Shan; Yan-shou, Dong; Yi-duo, Bian

    2018-05-01

    Based on a miRNA expression profile data set, a new data mining algorithm, tSVM-KNN (t-statistic with support vector machine and k-nearest neighbor), is proposed. The idea of the algorithm is as follows: first, feature selection on the data set is carried out with a unified measurement method; second, the SVM-KNN algorithm, which combines the support vector machine (SVM) and k-nearest neighbor (KNN) classifiers, is used as the classifier. Simulation results show that the SVM-KNN algorithm has better classification ability than SVM or KNN alone. In terms of the number of miRNA "tags" and recognition accuracy, the tSVM-KNN algorithm needs only 5 miRNAs to obtain 96.08% classification accuracy. Compared with similar algorithms, tSVM-KNN has obvious advantages.
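    A sketch of the two-step idea, t-statistic feature ranking followed by an SVM that falls back to KNN near the decision boundary, is given below; the margin threshold, kernel, and parameters are illustrative assumptions rather than the paper's settings.

    ```python
    import numpy as np
    from scipy.stats import ttest_ind
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    def t_statistic_rank(X, y, n_features=5):
        """Rank features (miRNAs) by absolute two-sample t statistic."""
        t, _ = ttest_ind(X[y == 0], X[y == 1], axis=0, equal_var=False)
        return np.argsort(-np.abs(t))[:n_features]

    class SVMKNN:
        """SVM decision, with a KNN fallback for samples lying close to the
        SVM decision boundary (|decision_function| below a margin threshold)."""
        def __init__(self, margin=0.5, k=5):
            self.svm = SVC(kernel="linear")
            self.knn = KNeighborsClassifier(n_neighbors=k)
            self.margin = margin

        def fit(self, X, y):
            self.svm.fit(X, y)
            self.knn.fit(X, y)
            return self

        def predict(self, X):
            scores = self.svm.decision_function(X)
            pred = self.svm.predict(X)
            unsure = np.abs(scores) < self.margin
            if unsure.any():
                pred[unsure] = self.knn.predict(X[unsure])
            return pred

    # Usage: top = t_statistic_rank(X_train, y_train)
    #        clf = SVMKNN().fit(X_train[:, top], y_train)
    ```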

  11. Grey Wolf based control for speed ripple reduction at low speed operation of PMSM drives.

    PubMed

    Djerioui, Ali; Houari, Azeddine; Ait-Ahmed, Mourad; Benkhoris, Mohamed-Fouad; Chouder, Aissa; Machmoum, Mohamed

    2018-03-01

    Speed ripple at low-speed, high-torque operation of Permanent Magnet Synchronous Machine (PMSM) drives is considered one of the major issues to be addressed. The presented work proposes an efficient PMSM speed controller based on the Grey Wolf (GW) algorithm to ensure high-performance control for speed ripple reduction at low speed operation. The main idea of the proposed control algorithm is to formulate a specific objective function in order to incorporate the advantage of the fast optimization process of the GW optimizer. The role of the GW optimizer is to find the optimal control inputs that satisfy the speed tracking requirements. The synthesis methodology of the proposed control algorithm is detailed, and the feasibility and performance of the proposed speed controller are confirmed by simulation and experimental results. The GW algorithm is a model-free controller and the parameters of its objective function are easy to tune. The GW controller is compared to a PI controller on a real test bench, and the superiority of the former is highlighted. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
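    A minimal Grey Wolf Optimizer sketch for a generic objective is given below; the PMSM-specific objective function from the paper is not reproduced, and a simple test function stands in.

    ```python
    import numpy as np

    def grey_wolf_optimizer(objective, dim, bounds, n_wolves=20, n_iter=100, seed=0):
        """Minimal Grey Wolf Optimizer: wolves move toward the three best solutions
        (alpha, beta, delta) with an exploration coefficient 'a' shrinking from 2 to 0."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        X = rng.uniform(lo, hi, size=(n_wolves, dim))
        for t in range(n_iter):
            fitness = np.apply_along_axis(objective, 1, X)
            order = np.argsort(fitness)
            alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
            a = 2.0 * (1.0 - t / n_iter)
            for i in range(n_wolves):
                new = np.zeros(dim)
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(dim), rng.random(dim)
                    A, C = 2 * a * r1 - a, 2 * r2
                    D = np.abs(C * leader - X[i])
                    new += leader - A * D
                X[i] = np.clip(new / 3.0, lo, hi)
        fitness = np.apply_along_axis(objective, 1, X)
        best = X[np.argmin(fitness)]
        return best, objective(best)

    # Example with a stand-in objective (sum of squares); the paper's objective would
    # instead penalise speed-tracking error and ripple.
    print(grey_wolf_optimizer(lambda x: float(np.sum(x ** 2)), dim=3, bounds=(-5, 5)))
    ```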

  12. A novel dynamic wavelength bandwidth allocation scheme over OFDMA PONs

    NASA Astrophysics Data System (ADS)

    Yan, Bo; Guo, Wei; Jin, Yaohui; Hu, Weisheng

    2011-12-01

    With the rapid growth of Internet applications, supporting differentiated services and enlarging system capacity have become new tasks for next-generation access systems. In recent years, research on OFDMA Passive Optical Networks (PON) has experienced extraordinary development owing to its large capacity and flexibility in scheduling. Although much work has been done to solve hardware-layer obstacles for OFDMA PON, scheduling algorithms for OFDMA PON systems are still at an early stage of discussion. In order to support QoS services on OFDMA PON systems, a novel dynamic wavelength bandwidth allocation (DWBA) algorithm is proposed in this paper. Per-stream QoS is supported in this algorithm. Through simulation, we show that our bandwidth allocation algorithm performs better in bandwidth utilization and differentiated service support.

  13. Comparison of the genetic algorithm and incremental optimisation routines for a Bayesian inverse modelling based network design

    NASA Astrophysics Data System (ADS)

    Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.

    2018-05-01

    The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising the posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods were assessed: an evolutionary algorithm, the genetic algorithm (GA), and a deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison to the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction compared with the GA, at only a quarter of the computational resources used by the smallest specified GA configuration. The GA solution set showed more inconsistency if the number of iterations or population size was small, and more so for a complex prior flux covariance matrix. When the GA terminated with a sub-optimal solution, that solution was similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA might outperform the IO. The first scenario considered an established network, where the optimisation was required to add five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. These results suggest that the best use of resources for the network design problem would be spent on improving the prior estimates of the flux uncertainties rather than investing these resources in running a complex evolutionary optimisation algorithm. The authors recommend that, if time and computational resources allow, multiple optimisation techniques be used as part of a comprehensive suite of sensitivity tests when performing such an optimisation exercise. This will provide a selection of best solutions which can be ranked based on their utility and practicality.

  14. Metal artefact reduction with cone beam CT: an in vitro study

    PubMed Central

    Bechara, BB; Moore, WS; McMahan, CA; Noujeim, M

    2012-01-01

    Background Metal in a patient's mouth has been shown to cause artefacts that can interfere with the diagnostic quality of cone beam CT. Recently, a manufacturer has made an algorithm and software available which reduces metal streak artefact (Picasso Master 3D® machine; Vatech, Hwaseong, Republic of Korea). Objectives The purpose of this investigation was to determine whether or not the metal artefact reduction algorithm was effective and enhanced the contrast-to-noise ratio. Methods A phantom was constructed incorporating three metallic beads and three epoxy resin-based bone substitutes to simulate bone next to metal. The phantom was placed in the centre of the field of view and at the periphery. 10 data sets were acquired at 50–90 kVp. The images obtained were analysed using a public domain software ImageJ (NIH Image, Bethesda, MD). Profile lines were used to evaluate grey level changes and area histograms were used to evaluate contrast. The contrast-to-noise ratio was calculated. Results The metal artefact reduction option reduced grey value variation and increased the contrast-to-noise ratio. The grey value varied least when the phantom was in the middle of the volume and the metal artefact reduction was activated. The image quality improved as the peak kilovoltage increased. Conclusion Better images of a phantom were obtained when the metal artefact reduction algorithm was used. PMID:22241878

  15. Affine Projection Algorithm with Improved Data-Selective Method Using the Condition Number

    NASA Astrophysics Data System (ADS)

    Ban, Sung Jun; Lee, Chang Woo; Kim, Sang Woo

    Recently, a data-selective method has been proposed to achieve low misalignment in the affine projection algorithm (APA) by keeping the condition number of the input data matrix small. We present an improved selection method, together with a complexity reduction algorithm, for the APA with the data-selective method. Experimental results show that the proposed algorithm achieves lower misalignment and a lower condition number of the input data matrix than both the conventional APA and the APA with the previous data-selective method.
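    The mechanism can be sketched as an affine projection update that is simply skipped whenever the regularised input Gram matrix is badly conditioned, as below; the selection rule, thresholds, and parameters are illustrative, not the paper's exact method.

    ```python
    import numpy as np

    def apa_with_selection(x, d, order=8, proj=4, mu=0.5, delta=1e-3, cond_max=1e3):
        """Affine projection adaptation of an FIR filter of length `order`,
        skipping updates whose (regularised) input Gram matrix is ill-conditioned."""
        w = np.zeros(order)
        y = np.zeros(len(d))
        for n in range(order + proj, len(x)):
            # Input matrix: the `proj` most recent regressor vectors (order x proj).
            X = np.column_stack([x[n - k - order + 1:n - k + 1][::-1] for k in range(proj)])
            R = X.T @ X + delta * np.eye(proj)
            if np.linalg.cond(R) > cond_max:
                y[n] = w @ X[:, 0]
                continue                      # data-selective step: skip this update
            e = d[n - proj + 1:n + 1][::-1] - X.T @ w
            w = w + mu * X @ np.linalg.solve(R, e)
            y[n] = w @ X[:, 0]
        return w, y

    # Toy system-identification usage (illustrative):
    # rng = np.random.default_rng(0); x = rng.standard_normal(5000)
    # h = rng.standard_normal(8)
    # d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
    # w_est, _ = apa_with_selection(x, d)    # w_est should approximate h
    ```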

  16. Supporting capacity sharing in the cloud manufacturing environment based on game theory and fuzzy logic

    NASA Astrophysics Data System (ADS)

    Argoneto, Pierluigi; Renna, Paolo

    2016-02-01

    This paper proposes a Framework for Capacity Sharing in Cloud Manufacturing (FCSCM) able to support capacity sharing among independent firms. The success of geographically distributed plants depends strongly on the use of appropriate tools to integrate their resources and demand forecasts in order to meet a specific production objective. The proposed framework is based on two different tools: a cooperative game algorithm, based on the Gale-Shapley model, and a fuzzy engine. The capacity allocation policy takes into account the utility functions of the involved firms. It is shown that the proposed capacity allocation policy induces all firms to report their requirement information truthfully. A discrete event simulation environment has been developed to test the proposed FCSCM. The numerical results show a drastic reduction in unsatisfied capacity under the cooperation model implemented in this work.
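    A minimal Gale-Shapley deferred-acceptance sketch, matching capacity-requesting firms to providers, is shown below as a simplified stand-in for the cooperative allocation step; the preference lists are toy data and all agents are assumed mutually acceptable.

    ```python
    def gale_shapley(requester_prefs, provider_prefs):
        """Deferred acceptance: requesters propose, providers keep their best offer.
        Both arguments map each agent to an ordered list of acceptable partners."""
        rank = {p: {r: i for i, r in enumerate(lst)} for p, lst in provider_prefs.items()}
        free = list(requester_prefs)
        next_choice = {r: 0 for r in requester_prefs}
        engaged = {}  # provider -> requester
        while free:
            r = free.pop()
            if next_choice[r] >= len(requester_prefs[r]):
                continue  # r has exhausted its list and stays unmatched
            p = requester_prefs[r][next_choice[r]]
            next_choice[r] += 1
            current = engaged.get(p)
            if current is None:
                engaged[p] = r
            elif rank[p][r] < rank[p][current]:
                engaged[p] = r
                free.append(current)          # displaced requester proposes again
            else:
                free.append(r)                # rejected, r tries its next choice
        return {r: p for p, r in engaged.items()}

    # Toy example: two firms requesting capacity from two plants.
    requesters = {"firmA": ["plant1", "plant2"], "firmB": ["plant1", "plant2"]}
    providers = {"plant1": ["firmB", "firmA"], "plant2": ["firmA", "firmB"]}
    print(gale_shapley(requesters, providers))  # {'firmA': 'plant2', 'firmB': 'plant1'}
    ```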

  17. Estimating rare events in biochemical systems using conditional sampling.

    PubMed

    Sundar, V S

    2017-01-28

    The paper focuses on the development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining this probability using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
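    A minimal subset simulation sketch for a toy problem, estimating P(g(X) > threshold) for standard normal X with component-wise (modified) Metropolis resampling, is given below; the mapping from the biochemical state vector described in the paper is not reproduced.

    ```python
    import numpy as np

    def subset_simulation(g, dim, threshold, n=1000, p0=0.1, seed=0):
        """Estimate P(g(X) > threshold) for X ~ N(0, I) as a product of
        conditional probabilities (subset simulation with modified Metropolis)."""
        rng = np.random.default_rng(seed)
        X = rng.standard_normal((n, dim))
        G = np.array([g(x) for x in X])
        prob = 1.0
        for _ in range(50):                      # safety cap on the number of levels
            n_keep = int(p0 * n)
            order = np.argsort(G)[::-1]          # descending by g(x)
            level = G[order[n_keep - 1]]         # p0-quantile of the current samples
            if level >= threshold:
                return prob * np.mean(G > threshold)
            prob *= p0
            seeds_X, seeds_G = X[order[:n_keep]], G[order[:n_keep]]
            new_X, new_G = [], []
            per_chain = n // n_keep
            for x, gx in zip(seeds_X, seeds_G):  # grow a Markov chain from each seed
                for _ in range(per_chain):
                    cand = x + 0.8 * rng.standard_normal(dim)
                    # Component-wise accept/reject against the standard normal density.
                    accept = rng.random(dim) < np.minimum(1.0, np.exp(0.5 * (x**2 - cand**2)))
                    prop = np.where(accept, cand, x)
                    gp = g(prop)
                    if gp > level:               # stay inside the conditional level set
                        x, gx = prop, gp
                    new_X.append(x.copy())
                    new_G.append(gx)
            X, G = np.array(new_X), np.array(new_G)
        return prob * np.mean(G > threshold)

    # Toy check: P(sum of 10 i.i.d. N(0,1) > 9) = 1 - Phi(9/sqrt(10)) ~ 2.2e-3.
    print(subset_simulation(lambda x: float(x.sum()), dim=10, threshold=9.0))
    ```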

  18. Iterative methods used in overlap astrometric reduction techniques do not always converge

    NASA Astrophysics Data System (ADS)

    Rapaport, M.; Ducourant, C.; Colin, J.; Le Campion, J. F.

    1993-04-01

    In this paper we prove that the classical Gauss-Seidel type iterative methods used for the solution of the reduced normal equations occurring in overlapping reduction methods of astrometry do not always converge. We exhibit examples of divergence. We then analyze an alternative algorithm proposed by Wang (1985). We prove the consistency of this algorithm and verify that it can converge when the Gauss-Seidel method diverges. We conjecture the convergence of Wang's method for the solution of astrometric problems using overlap techniques.
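    A toy numerical illustration of such divergence is given below: for a symmetric but indefinite 2x2 system, the Gauss-Seidel iterates blow up even though the system itself has a unique solution. This is only an illustration, not the astrometric normal equations studied in the paper.

    ```python
    import numpy as np

    def gauss_seidel(A, b, x0, n_iter=10):
        """Plain Gauss-Seidel sweeps for A x = b."""
        x = x0.astype(float)
        n = len(b)
        for _ in range(n_iter):
            for i in range(n):
                s = A[i] @ x - A[i, i] * x[i]
                x[i] = (b[i] - s) / A[i, i]
        return x

    # A is symmetric but not positive definite; the Gauss-Seidel iteration matrix
    # has spectral radius 4 > 1, so the iterates diverge.
    A = np.array([[1.0, 2.0],
                  [2.0, 1.0]])
    b = np.array([1.0, 1.0])
    print(np.linalg.solve(A, b))               # exact solution: [1/3, 1/3]
    print(gauss_seidel(A, b, np.zeros(2), 8))  # entries blow up -> divergence
    ```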

  19. Pollution balance method and the demonstration of its application to minimizing waste in a biochemical process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hilaly, A.K.; Sikdar, S.K.

    In this study, the authors introduced several modifications to the WAR (waste reduction) algorithm developed earlier. These modifications were made to systematically handle sensitivity analysis and various waste minimization tasks. A design hierarchy was formulated to promote appropriate waste reduction tasks at designated levels of the hierarchy. A sensitivity coefficient was used to measure the relative impacts of process variables on the pollution index of a process. The use of the WAR algorithm was demonstrated on a fermentation process for making penicillin.

  20. Gene Selection and Cancer Classification: A Rough Sets Based Approach

    NASA Astrophysics Data System (ADS)

    Sun, Lijun; Miao, Duoqian; Zhang, Hongyun

    Identification of informative gene subsets responsible for discerning between available samples of gene expression data is an important task in bioinformatics. Reducts, from rough set theory, corresponding to a minimal set of essential genes for discerning samples, are an efficient tool for gene selection. Due to the computational complexity of existing reduct algorithms, feature ranking is usually used as a first step to narrow down the gene space, and top-ranked genes are selected. In this paper, we define a novel criterion for scoring genes, based on the between-class difference in expression level and the gene's contribution to classification, and present an algorithm for generating all possible reducts from the informative genes. The algorithm takes the whole attribute set into account and finds short reducts with a significant reduction in computational complexity. An exploration of this approach on benchmark gene expression data sets demonstrates that it is successful in selecting highly discriminative genes, and the classification accuracy is impressive.

  1. High-Speed GPU-Based Fully Three-Dimensional Diffuse Optical Tomographic System

    PubMed Central

    Saikia, Manob Jyoti; Kanhirodan, Rajan; Mohan Vasu, Ram

    2014-01-01

    We have developed a graphics processing unit (GPU) based high-speed fully 3D system for diffuse optical tomography (DOT). The reduction in execution time of the 3D DOT algorithm, a severely ill-posed problem, is made possible through the use of (1) an algorithmic improvement that uses the Broyden approach for updating the Jacobian matrix and thereby updating the parameter matrix and (2) the multinode multithreaded GPU and CUDA (Compute Unified Device Architecture) software architecture. Two different GPU implementations of the DOT programs are developed in this study: (1) a conventional C language program augmented by GPU CUDA and CULA routines (C GPU), and (2) a MATLAB program supported by the MATLAB parallel computing toolkit for GPU (MATLAB GPU). The computation times of the algorithm on the host CPU and the GPU system are presented for the C and MATLAB implementations. The forward computation uses the finite element method (FEM), and the problem domain is discretized into 14610, 30823, and 66514 tetrahedral elements. The reconstruction time thus achieved for one iteration of the DOT reconstruction for 14610 elements is 0.52 seconds for the C-based GPU program for 2-plane measurements. The corresponding MATLAB-based GPU program took 0.86 seconds. The maximum reconstruction rate thus achieved is 2 frames per second. PMID:24891848

  2. High-Speed GPU-Based Fully Three-Dimensional Diffuse Optical Tomographic System.

    PubMed

    Saikia, Manob Jyoti; Kanhirodan, Rajan; Mohan Vasu, Ram

    2014-01-01

    We have developed a graphics processing unit (GPU) based high-speed fully 3D system for diffuse optical tomography (DOT). The reduction in execution time of the 3D DOT algorithm, a severely ill-posed problem, is made possible through the use of (1) an algorithmic improvement that uses the Broyden approach for updating the Jacobian matrix and thereby updating the parameter matrix and (2) the multinode multithreaded GPU and CUDA (Compute Unified Device Architecture) software architecture. Two different GPU implementations of the DOT programs are developed in this study: (1) a conventional C language program augmented by GPU CUDA and CULA routines (C GPU), and (2) a MATLAB program supported by the MATLAB parallel computing toolkit for GPU (MATLAB GPU). The computation times of the algorithm on the host CPU and the GPU system are presented for the C and MATLAB implementations. The forward computation uses the finite element method (FEM), and the problem domain is discretized into 14610, 30823, and 66514 tetrahedral elements. The reconstruction time thus achieved for one iteration of the DOT reconstruction for 14610 elements is 0.52 seconds for the C-based GPU program for 2-plane measurements. The corresponding MATLAB-based GPU program took 0.86 seconds. The maximum reconstruction rate thus achieved is 2 frames per second.
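    The Broyden rank-1 Jacobian update cited as the main algorithmic saving can be sketched as follows; the DOT forward model and reconstruction loop are only indicated in comments with hypothetical names.

    ```python
    import numpy as np

    def broyden_update(J, dx, df):
        """Broyden's rank-1 ('good') update: correct J so that J_new @ dx == df,
        changing J as little as possible (avoids recomputing the full Jacobian)."""
        dx = dx.reshape(-1, 1)
        df = df.reshape(-1, 1)
        return J + (df - J @ dx) @ dx.T / float(dx.T @ dx)

    # Illustrative use inside a Gauss-Newton-style reconstruction loop (names hypothetical):
    # J = forward_model_jacobian(mu)                 # expensive, computed once
    # for _ in range(n_iterations):
    #     residual = measured - forward_model(mu)
    #     delta_mu = np.linalg.lstsq(J, residual, rcond=None)[0]
    #     mu_new = mu + delta_mu
    #     df = forward_model(mu_new) - forward_model(mu)
    #     J = broyden_update(J, delta_mu, df)        # cheap rank-1 refresh of the Jacobian
    #     mu = mu_new
    ```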

  3. Semisupervised kernel marginal Fisher analysis for face recognition.

    PubMed

    Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun

    2013-01-01

    Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labelled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.

  4. An Overview of a Trajectory-Based Solution for En Route and Terminal Area Self-Spacing: Third Revision

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S.

    2012-01-01

    This paper presents an overview of the third major revision to an algorithm specifically designed to support NASA's Airborne Precision Spacing concept. This algorithm is referred to as the Airborne Spacing for Terminal Arrival Routes version 11 (ASTAR11). This airborne self-spacing concept is trajectory-based, allowing for spacing operations prior to the aircraft being on a common path. Because this algorithm is trajectory-based, it also has the inherent ability to support required time-of-arrival (RTA) operations. This algorithm was also designed specifically to support a standalone, non-integrated implementation in the spacing aircraft.

  5. Prehospital Acute ST-Elevation Myocardial Infarction Identification in San Diego: A Retrospective Analysis of the Effect of a New Software Algorithm.

    PubMed

    Coffey, Christanne; Serra, John; Goebel, Mat; Espinoza, Sarah; Castillo, Edward; Dunford, James

    2018-05-03

    A significant increase in false positive ST-elevation myocardial infarction (STEMI) electrocardiogram interpretations was noted after replacement of all of the City of San Diego's 110 monitor-defibrillator units with a new brand. These concerns were brought to the manufacturer and a revised interpretive algorithm was implemented. This study evaluated the effects of a revised interpretation algorithm to identify STEMI when used by San Diego paramedics. Data were reviewed 6 months before and 6 months after the introduction of a revised interpretation algorithm. True-positive and false-positive interpretations were identified. Factors contributing to an incorrect interpretation were assessed and patient demographics were collected. A total of 372 (234 preimplementation, 138 postimplementation) cases met inclusion criteria. There was a significant reduction in false positive STEMI (150 preimplementation, 40 postimplementation; p < 0.001) after implementation. The most common factors resulting in false positive before implementation were right bundle branch block, left bundle branch block, and atrial fibrillation. The new algorithm corrected for these misinterpretations with most postimplementation false positives attributed to benign early repolarization and poor data quality. Subsequent follow-up at 10 months showed maintenance of the observed reduction in false positives. This study shows that introducing a revised 12-lead interpretive algorithm resulted in a significant reduction in the number of false positive STEMI electrocardiogram interpretations in a large urban emergency medical services system. Rigorous testing and standardization of new interpretative software is recommended before introduction into a clinical setting to prevent issues resulting from inappropriate cardiac catheterization laboratory activations. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. The GOES-R Series Geostationary Lightning Mapper (GLM)

    NASA Technical Reports Server (NTRS)

    Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Mach, Douglas M.

    2011-01-01

    The Geostationary Operational Environmental Satellite (GOES-R) is the next series to follow the existing GOES system currently operating over the Western Hemisphere. Superior spacecraft and instrument technology will support expanded detection of environmental phenomena, resulting in more timely and accurate forecasts and warnings. Advancements over current GOES capabilities include a new capability for total lightning detection (cloud and cloud-to-ground flashes) from the Geostationary Lightning Mapper (GLM), which has recently completed its Critical Design Review and will move forward into the construction phase of instrument development. The GLM will operate continuously day and night with near-uniform spatial resolution of 8 km with a product refresh rate of less than 20 sec over the Americas and adjacent oceanic regions. This will aid in forecasting severe storms and tornado activity, and convective weather impacts on aviation safety and efficiency. In parallel with the instrument development (an engineering development unit and 4 flight models), a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms, cal/val performance monitoring tools, and new applications. Proxy total lightning data from the NASA Lightning Imaging Sensor (LIS) on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional ground-based lightning networks are being used to develop the pre-launch algorithms, test data sets, and applications, as well as improve our knowledge of thunderstorm initiation and evolution. In this presentation we review the planned implementation of the instrument and suite of operational algorithms.

  7. VISIR-I: small vessels, least-time nautical routes using wave forecasts

    NASA Astrophysics Data System (ADS)

    Mannarini, G.; Pinardi, N.; Coppini, G.; Oddo, P.; Iafrati, A.

    2015-09-01

    A new numerical model for the on-demand computation of optimal ship routes based on sea-state forecasts has been developed. The model, named VISIR (discoVerIng Safe and effIcient Routes), is designed to support decision-makers when planning a marine voyage. The first version of the system, VISIR-I, considers medium and small motor vessels with lengths of up to a few tens of meters and a displacement hull. The model is made up of three components: the route optimization algorithm, the mechanical model of the ship, and the environmental fields. The optimization algorithm is based on a graph-search method with time-dependent edge weights. The algorithm is also able to compute a voluntary ship speed reduction. The ship model accounts for calm water and added wave resistance by making use of just the principal particulars of the vessel as input parameters. The system also checks the optimal route for parametric roll, pure loss of stability, and surfriding/broaching-to hazard conditions. Significant wave height, wave spectrum peak period, and wave direction forecast fields are employed as input. Examples of VISIR-I routes in the Mediterranean Sea are provided. The optimal route may be longer in terms of miles sailed and yet it is faster and safer than the geodetic route between the same departure and arrival locations. Route diversions result from the safety constraints and the fact that the algorithm takes into account the full temporal evolution and spatial variability of the environmental fields.
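
    VISIR's optimizer is described above only as a graph-search method with time-dependent edge weights. The sketch below is a generic Dijkstra-style least-time search in which each edge's crossing time depends on the departure time (as it would when wave forecasts vary along a leg); the graph, the travel-time functions and the FIFO assumption are illustrative, not VISIR's actual implementation.

    ```python
    import heapq

    def least_time_route(adj, source, target, t0=0.0):
        """Dijkstra-like search with time-dependent edge weights.

        adj[u] is a list of (v, travel_time) pairs, where travel_time(t)
        returns the crossing time of the edge when leaving u at time t
        (e.g. derived from forecast wave fields along that leg).
        Assumes FIFO edges: departing later never lets you arrive earlier.
        """
        best = {source: t0}
        prev = {}
        heap = [(t0, source)]
        while heap:
            t, u = heapq.heappop(heap)
            if u == target:
                break
            if t > best.get(u, float("inf")):
                continue                      # stale heap entry
            for v, travel_time in adj[u]:
                arrival = t + travel_time(t)
                if arrival < best.get(v, float("inf")):
                    best[v] = arrival
                    prev[v] = u
                    heapq.heappush(heap, (arrival, v))
        path, node = [target], target
        while node != source:                 # walk the predecessor chain
            node = prev[node]
            path.append(node)
        return best[target], path[::-1]

    # Toy 3-node example with a time-varying leg A->B
    adj = {
        "A": [("B", lambda t: 2.0 + 0.5 * (t % 3)), ("C", lambda t: 4.0)],
        "B": [("C", lambda t: 1.0)],
        "C": [],
    }
    print(least_time_route(adj, "A", "C"))    # -> (3.0, ['A', 'B', 'C'])
    ```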

  8. Carbon monoxide mixing ratio inference from gas filter radiometer data

    NASA Technical Reports Server (NTRS)

    Wallio, H. A.; Reichle, H. G., Jr.; Casas, J. C.; Saylor, M. S.; Gormsen, B. B.

    1983-01-01

    A new algorithm has been developed which permits, for the first time, real time data reduction of nadir measurements taken with a gas filter correlation radiometer to determine tropospheric carbon monoxide concentrations. The algorithm significantly reduces the complexity of the equations to be solved while providing accuracy comparable to line-by-line calculations. The method is based on a regression analysis technique using a truncated power series representation of the primary instrument output signals to infer directly a weighted average of trace gas concentration. The results produced by a microcomputer-based implementation of this technique are compared with those produced by the more rigorous line-by-line methods. This algorithm has been used in the reduction of Measurement of Air Pollution from Satellites, Shuttle, and aircraft data.

  9. ADART: an adaptive algebraic reconstruction algorithm for discrete tomography.

    PubMed

    Maestre-Deusto, F Javier; Scavello, Giovanni; Pizarro, Joaquín; Galindo, Pedro L

    2011-08-01

    In this paper we suggest an algorithm based on the Discrete Algebraic Reconstruction Technique (DART) which is capable of computing high quality reconstructions from substantially fewer projections than required for conventional continuous tomography. Adaptive DART (ADART) goes a step further than DART on the reduction of the number of unknowns of the associated linear system, achieving a significant reduction in the pixel error rate of reconstructed objects. The proposed methodology automatically adapts the border definition criterion at each iteration, resulting in a reduction of the number of pixels belonging to the border, and consequently of the number of unknowns in the general algebraic reconstruction linear system to be solved, this reduction being especially important at the final stage of the iterative process. Experimental results show that reconstruction errors are considerably reduced using ADART when compared to original DART, both in clean and noisy environments.
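
    ADART itself is not specified in enough detail in the abstract to reproduce, but it builds on algebraic reconstruction. For orientation, the following is a minimal Kaczmarz-style ART sweep (continuous values, no discrete grey levels, no adaptive border handling); all names and the toy system are assumptions.

    ```python
    import numpy as np

    def art_sweep(A, b, x, relax=0.5, n_sweeps=10):
        """Kaczmarz-style algebraic reconstruction: repeatedly project the
        current image x onto each measurement hyperplane a_i . x = b_i."""
        for _ in range(n_sweeps):
            for a_i, b_i in zip(A, b):
                denom = a_i @ a_i
                if denom > 0:
                    x = x + relax * (b_i - a_i @ x) / denom * a_i
        return x

    # Tiny toy system: 3 "rays" through a 4-pixel image
    A = np.array([[1.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0, 0.0]])
    x_true = np.array([1.0, 0.0, 1.0, 0.0])
    b = A @ x_true
    print(np.round(art_sweep(A, b, np.zeros(4)), 3))
    ```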

  10. Effects of secondary loudspeaker properties on broadband feedforward active duct noise control.

    PubMed

    Chan, Yum-Ji; Huang, Lixi; Lam, James

    2013-07-01

    Dependence of the performance of feedforward active duct noise control on secondary loudspeaker parameters is investigated. Noise reduction performance can be improved if the force factor of the secondary loudspeaker is higher. For example, a broadband noise reduction improvement of up to 1.6 dB is predicted by increasing the force factor by 50%. In addition, a secondary loudspeaker with a larger force factor was found experimentally to give quicker convergence of the adaptive algorithm. In simulations, noise reduction with an adaptive algorithm is improved by using a secondary loudspeaker with a heavier moving mass. It is predicted that an extra broadband noise reduction of more than 7 dB can be gained using an adaptive filter if the force factor, moving mass and coil inductance of a commercially available loudspeaker are doubled. Methods to increase the force factor beyond those of commercially available loudspeakers are proposed.
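
    The abstract does not specify the adaptive algorithm; feedforward controllers of this kind typically use an LMS-family update. The following is a plain LMS identification loop meant only to illustrate the adaptation whose convergence speed the force factor affects; the filtered-x secondary-path modelling of a real ANC controller is omitted, and all signals and names are synthetic assumptions.

    ```python
    import numpy as np

    def lms(reference, desired, n_taps=16, mu=0.01):
        """Plain LMS: adapt FIR weights w so the filtered reference tracks
        the desired (primary-path) signal; the residual error plays the
        role of what the error microphone would measure."""
        w = np.zeros(n_taps)
        error = np.zeros_like(desired)
        for n in range(n_taps, len(desired)):
            x = reference[n - n_taps:n][::-1]   # most recent sample first
            error[n] = desired[n] - w @ x
            w += mu * error[n] * x              # stochastic gradient step
        return w, error

    # Toy demo: the "duct" primary path is a 5-tap FIR driven by broadband noise
    rng = np.random.default_rng(0)
    ref = rng.standard_normal(5000)
    path = np.array([0.6, 0.3, -0.2, 0.1, 0.05])
    primary = np.convolve(ref, path)[: len(ref)]
    w, err = lms(ref, primary)
    print("residual power ratio:",
          np.mean(err[1000:] ** 2) / np.mean(primary[1000:] ** 2))
    ```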

  11. Is appendiceal CT scan overused for evaluating patients with right lower quadrant pain?

    PubMed

    Safran, D B; Pilati, D; Folz, E; Oller, D

    2001-05-01

    Reports citing excellent sensitivity, specificity, and predictive accuracy of focused appendiceal computed tomography (CT) and showing an overall reduction in resource use and nontherapeutic laparotomies have led to increasing use of that imaging modality. Diagnostic algorithms have begun to incorporate appendiceal CT for patients presenting to the emergency department with right lower quadrant pain. We present a series of 4 cases in which use of appendiceal CT ultimately led to increased cost, resource use, and complexity in patient care. The results of these cases support an argument against unbridled use of appendiceal CT scanning and reinforce the need for clinical evaluation by the operating surgeon before routine performance of appendiceal CT scan.

  12. Matrix Multiplication Algorithm Selection with Support Vector Machines

    DTIC Science & Technology

    2015-05-01

    libraries that could intelligently choose the optimal algorithm for a particular set of inputs. Users would be oblivious to the underlying algorithmic... SAT.” J. Artif. Intell. Res. (JAIR), vol. 32, pp. 565–606, 2008. [9] M. G. Lagoudakis and M. L. Littman, “Algorithm selection using reinforcement... Artificial Intelligence, vol. 21, no. 05, pp. 961–976, 2007. [15] C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM

  13. Comparison of doses and NTCP to risk organs with enhanced inspiration gating and free breathing for left-sided breast cancer radiotherapy using the AAA algorithm.

    PubMed

    Edvardsson, Anneli; Nilsson, Martin P; Amptoulach, Sousana; Ceberg, Sofie

    2015-04-10

    The purpose of this study was to investigate the potential dose reduction to the heart, left anterior descending (LAD) coronary artery and the ipsilateral lung for patients treated with tangential and locoregional radiotherapy for left-sided breast cancer with enhanced inspiration gating (EIG) compared to free breathing (FB) using the AAA algorithm. The radiobiological implication of such dose sparing was also investigated. Thirty-two patients who received tangential or locoregional adjuvant radiotherapy with EIG for left-sided breast cancer were retrospectively enrolled in this study. Each patient was CT-scanned during FB and EIG. Similar treatment plans, with comparable target coverage, were created in the two CT-sets using the AAA algorithm. Further, the probabilities of radiation-induced cardiac mortality and pneumonitis were calculated using NTCP models. For tangential treatment, the median V25Gy for the heart and LAD was decreased for EIG from 2.2% to 0.2% and 40.2% to 0.1% (p < 0.001), respectively, whereas there was no significant difference in V20Gy for the ipsilateral lung (p = 0.109). For locoregional treatment, the median V25Gy for the heart and LAD was decreased for EIG from 3.3% to 0.2% and 51.4% to 5.1% (p < 0.001), respectively, and the median ipsilateral lung V20Gy decreased from 27.0% for FB to 21.5% (p = 0.020) for EIG. The median excess cardiac mortality probability decreased from 0.49% for FB to 0.02% for EIG (p < 0.001) for tangential treatment and from 0.75% to 0.02% (p < 0.001) for locoregional treatment. There was no significant difference in risk of radiation pneumonitis for tangential treatment (p = 0.179) whereas it decreased for locoregional treatment from 6.82% for FB to 3.17% for EIG (p = 0.004). In this study the AAA algorithm was used for dose calculation to the heart, LAD and left lung when comparing the EIG and FB techniques for tangential and locoregional radiotherapy of breast cancer patients. The results support the dose and NTCP reductions reported in previous studies where dose calculations were performed using the pencil beam algorithm.

  14. Reduced Order Model Basis Vector Generation: Generates Basis Vectors for ROMs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arrighi, Bill

    2016-03-03

    libROM is a library that implements order reduction via singular value decomposition (SVD) of sampled state vectors. It implements 2 parallel, incremental SVD algorithms and one serial, non-incremental algorithm. It also provides a mechanism for adaptive sampling of basis vectors.
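
    As a rough illustration of the underlying idea (not libROM's parallel incremental SVD algorithms), the numpy sketch below builds a reduced basis from a snapshot matrix by truncating its SVD at a chosen energy fraction; the function name and the energy criterion are assumptions.

    ```python
    import numpy as np

    def pod_basis(snapshots, energy=0.99):
        """Return an orthonormal reduced basis from a matrix whose columns
        are sampled state vectors, keeping enough left singular vectors to
        capture the requested fraction of the snapshot energy."""
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        cum = np.cumsum(s ** 2) / np.sum(s ** 2)
        r = int(np.searchsorted(cum, energy)) + 1
        return U[:, :r], s[:r]

    # Toy usage: 200-dof states that really live in a 3-dimensional subspace
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 50))
    X += 1e-6 * rng.standard_normal(X.shape)
    basis, sigma = pod_basis(X)
    print(basis.shape)          # typically (200, 3)
    ```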

  15. Adaptive Trajectory Prediction Algorithm for Climbing Flights

    NASA Technical Reports Server (NTRS)

    Schultz, Charles Alexander; Thipphavong, David P.; Erzberger, Heinz

    2012-01-01

    Aircraft climb trajectories are difficult to predict, and large errors in these predictions reduce the potential operational benefits of some advanced features for NextGen. The algorithm described in this paper improves climb trajectory prediction accuracy by adjusting trajectory predictions based on observed track data. It utilizes rate-of-climb and airspeed measurements derived from position data to dynamically adjust the aircraft weight modeled for trajectory predictions. In simulations with weight uncertainty, the algorithm is able to adapt to within 3 percent of the actual gross weight within two minutes of the initial adaptation. The root-mean-square of altitude errors for five-minute predictions was reduced by 73 percent. Conflict detection performance also improved, with a 15 percent reduction in missed alerts and a 10 percent reduction in false alerts. In a simulation with climb speed capture intent and weight uncertainty, the algorithm improved climb trajectory prediction accuracy by up to 30 percent and conflict detection performance, reducing missed and false alerts by up to 10 percent.

  16. Novel Signal Noise Reduction Method through Cluster Analysis, Applied to Photoplethysmography.

    PubMed

    Waugh, William; Allen, John; Wightman, James; Sims, Andrew J; Beale, Thomas A W

    2018-01-01

    Physiological signals can often become contaminated by noise from a variety of origins. In this paper, an algorithm is described for the reduction of sporadic noise from a continuous periodic signal. The design can be used where a sample of a periodic signal is required, for example, when an average pulse is needed for pulse wave analysis and characterization. The algorithm is based on cluster analysis for selecting similar repetitions or pulses from a periodic signal. This method selects individual pulses without noise, returns a clean pulse signal, and terminates when a sufficiently clean and representative signal is received. The algorithm is designed to be sufficiently compact to be implemented on a microcontroller embedded within a medical device. It has been validated through the removal of noise from an exemplar photoplethysmography (PPG) signal, showing increasing benefit as the noise contamination of the signal increases. The algorithm design is generalised to be applicable for a wide range of physiological (physical) signals.
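
    The abstract does not give the exact clustering criterion, so the sketch below only illustrates the general idea with a simple correlation-to-template rule that iteratively discards dissimilar beats; the threshold, iteration count and names are illustrative assumptions rather than the published method.

    ```python
    import numpy as np

    def clean_pulse(pulses, tol=0.98, max_iter=5):
        """Keep mutually similar pulses from a stack of candidate beats
        (rows = pulses resampled to equal length) and return their average.

        A pulse is retained while its correlation with the running template
        stays above `tol`; re-estimating the template sheds noisy beats."""
        keep = np.ones(len(pulses), dtype=bool)
        for _ in range(max_iter):
            template = pulses[keep].mean(axis=0)
            corr = np.array([np.corrcoef(p, template)[0, 1] for p in pulses])
            new_keep = corr > tol
            if new_keep.sum() < 3 or np.array_equal(new_keep, keep):
                break
            keep = new_keep
        return pulses[keep].mean(axis=0), keep

    # Toy demo: 20 clean raised-cosine "pulses" plus 4 corrupted by heavy noise
    t = np.linspace(0, 1, 100)
    clean = 0.5 * (1 - np.cos(2 * np.pi * t))
    pulses = np.tile(clean, (24, 1))
    pulses += 0.01 * np.random.default_rng(2).standard_normal(pulses.shape)
    pulses[:4] += np.random.default_rng(3).standard_normal((4, 100))
    avg, kept = clean_pulse(pulses)
    print(kept.sum(), "pulses retained")
    ```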

  17. Strong diffusion formulation of Markov chain ensembles and its optimal weaker reductions

    NASA Astrophysics Data System (ADS)

    Güler, Marifi

    2017-10-01

    Two self-contained diffusion formulations, in the form of coupled stochastic differential equations, are developed for the temporal evolution of state densities over an ensemble of Markov chains evolving independently under a common transition rate matrix. Our first formulation derives from Kurtz's strong approximation theorem of density-dependent Markov jump processes [Stoch. Process. Their Appl. 6, 223 (1978), 10.1016/0304-4149(78)90020-0] and, therefore, strongly converges with an error bound of the order of ln N/N for ensemble size N. The second formulation eliminates some fluctuation variables, and correspondingly some noise terms, within the governing equations of the strong formulation, with the objective of achieving a simpler analytic formulation and a faster computation algorithm when the transition rates are constant or slowly varying. There, the reduction of the structural complexity is optimal in the sense that the elimination of any given set of variables takes place with the lowest attainable increase in the error bound. The resultant formulations are supported by numerical simulations.

  18. Management of recent unstable fractures of the pelvic ring. An update conference supported by the Club Bassin Cotyle. (Pelvis-Acetabulum Club).

    PubMed

    Tonetti, J

    2013-02-01

    Traumatic injury to the pelvic ring is a result of high energy trauma in young patients. These osteo-ligamentous injuries are associated with numerous lesions, including retroperitoneal hematoma and urogenital, cutaneous and neurological (lumbosacral plexus) injuries. The goal of initial management is to restore vital indicators and urinary excretion function and to protect the patient from infectious complications. An emergency decisional algorithm helps manage haemodynamic instability. Initial bone and ligament procedures should reduce displacement and make it possible for the patient to wait until his condition is stable enough for definitive surgical fixation. The goal of surgical treatment is to avoid nonunion and malunion. Stable fixation of the posterior arch after reduction favors union. Different techniques can be used by the posterior, anterior ilio-inguinal or lateral percutaneous approaches. Anterior fixation is discussed to improve reduction and increase the stability obtained with a posterior procedure. Anterior external fixation is useful to temporarily reinforce posterior stabilization. Copyright © 2013. Published by Elsevier Masson SAS.

  19. Exploring high dimensional data with Butterfly: a novel classification algorithm based on discrete dynamical systems.

    PubMed

    Geraci, Joseph; Dharsee, Moyez; Nuin, Paulo; Haslehurst, Alexandria; Koti, Madhuri; Feilotter, Harriet E; Evans, Ken

    2014-03-01

    We introduce a novel method for visualizing high dimensional data via a discrete dynamical system. This method provides a 2D representation of the relationship between subjects according to a set of variables without geometric projections, transformed axes or principal components. The algorithm exploits a memory-type mechanism inherent in a certain class of discrete dynamical systems collectively referred to as the chaos game that are closely related to iterative function systems. The goal of the algorithm was to create a human readable representation of high dimensional patient data that was capable of detecting unrevealed subclusters of patients from within anticipated classifications. This provides a mechanism to further pursue a more personalized exploration of pathology when used with medical data. For clustering and classification protocols, the dynamical system portion of the algorithm is designed to come after some feature selection filter and before some model evaluation (e.g. clustering accuracy) protocol. In the version given here, a univariate feature selection step is performed (in practice more complex feature selection methods are used), a discrete dynamical system is driven by this reduced set of variables (which results in a set of 2D cluster models), these models are evaluated for their accuracy (according to a user-defined binary classification) and finally a visual representation of the top classification models is returned. Thus, in addition to the visualization component, this methodology can be used for both supervised and unsupervised machine learning as the top performing models are returned in the protocol we describe here. Butterfly, the algorithm we introduce and provide working code for, uses a discrete dynamical system to classify high dimensional data and provide a 2D representation of the relationship between subjects. We report results on three datasets (two in the article; one in the appendix) including a public lung cancer dataset that comes along with the included Butterfly R package. In the included R script, a univariate feature selection method is used for the dimension reduction step, but in the future we wish to use a more powerful multivariate feature reduction method based on neural networks (Kriesel, 2007). A script written in R (designed to run on R studio) accompanies this article that implements this algorithm and is available at http://butterflygeraci.codeplex.com/. For details on the R package or for help installing the software refer to the accompanying document, Supporting Material and Appendix.
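
    Butterfly's dynamical system is only characterized above as belonging to the chaos-game family; for intuition, the toy sketch below implements the classic chaos-game rule, mapping a sequence of discretized feature values for one subject to a 2D trace. It is a simplified analogue written in Python, not the published algorithm or its accompanying R package.

    ```python
    import numpy as np

    def chaos_game_trace(features, corners=4, ratio=0.5):
        """Map a sequence of discretised feature values to a 2D trace.

        Each value selects a corner of a regular polygon; the point moves a
        fixed fraction of the way toward that corner (the classic chaos-game
        rule), so the final position encodes the whole sequence."""
        angles = 2 * np.pi * np.arange(corners) / corners
        vertices = np.column_stack([np.cos(angles), np.sin(angles)])
        point = np.zeros(2)
        trace = []
        for f in features:
            point = point + ratio * (vertices[int(f) % corners] - point)
            trace.append(point.copy())
        return np.array(trace)

    # Toy usage: two "subjects" with different discretised expression patterns
    rng = np.random.default_rng(4)
    subject_a = rng.integers(0, 2, size=30)     # mostly corners 0 and 1
    subject_b = rng.integers(2, 4, size=30)     # mostly corners 2 and 3
    print(chaos_game_trace(subject_a)[-1], chaos_game_trace(subject_b)[-1])
    ```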

  20. Objective performance assessment of five computed tomography iterative reconstruction algorithms.

    PubMed

    Omotayo, Azeez; Elbakri, Idris

    2016-11-22

    Iterative algorithms are gaining clinical acceptance in CT. We performed objective phantom-based image quality evaluation of five commercial iterative reconstruction algorithms available on four different multi-detector CT (MDCT) scanners at different dose levels as well as the conventional filtered back-projection (FBP) reconstruction. Using the Catphan500 phantom, we evaluated image noise, contrast-to-noise ratio (CNR), modulation transfer function (MTF) and noise-power spectrum (NPS). The algorithms were evaluated over a CTDIvol range of 0.75-18.7 mGy on four major MDCT scanners: GE DiscoveryCT750HD (algorithms: ASIR™ and VEO™); Siemens Somatom Definition AS+ (algorithm: SAFIRE™); Toshiba Aquilion64 (algorithm: AIDR3D™); and Philips Ingenuity iCT256 (algorithm: iDose4™). Images were reconstructed using FBP and the respective iterative algorithms on the four scanners. Use of iterative algorithms decreased image noise and increased CNR, relative to FBP. In the dose range of 1.3-1.5 mGy, noise reduction using iterative algorithms was in the range of 11%-51% on GE DiscoveryCT750HD, 10%-52% on Siemens Somatom Definition AS+, 49%-62% on Toshiba Aquilion64, and 13%-44% on Philips Ingenuity iCT256. The corresponding CNR increase was in the range 11%-105% on GE, 11%-106% on Siemens, 85%-145% on Toshiba and 13%-77% on Philips respectively. Most algorithms did not affect the MTF, except for VEO™ which produced an increase in the limiting resolution of up to 30%. A shift in the peak of the NPS curve towards lower frequencies and a decrease in NPS amplitude were obtained with all iterative algorithms. VEO™ required long reconstruction times, while all other algorithms produced reconstructions in real time. Compared to FBP, iterative algorithms reduced image noise and increased CNR. The iterative algorithms available on different scanners achieved different levels of noise reduction and CNR increase while spatial resolution improvements were obtained only with VEO™. This study is useful in that it provides performance assessment of the iterative algorithms available from several mainstream CT manufacturers.

  1. PCA based clustering for brain tumor segmentation of T1w MRI images.

    PubMed

    Kaya, Irem Ersöz; Pehlivanlı, Ayça Çakmak; Sekizkardeş, Emine Gezmez; Ibrikci, Turgay

    2017-03-01

    Medical images are huge collections of information that are difficult to store and process, consuming extensive computing time. Therefore, reduction techniques are commonly used as a data pre-processing step to make the image data less complex so that high-dimensional data can be identified by an appropriate low-dimensional representation. PCA is one of the most popular multivariate methods for data reduction. This paper is focused on clustering of T1-weighted MRI images for brain tumor segmentation with dimension reduction by different common Principal Component Analysis (PCA) algorithms. Our primary aim is to present a comparison between different variations of PCA algorithms on MRIs for two cluster methods. The five most common PCA algorithms, namely the conventional PCA, Probabilistic Principal Component Analysis (PPCA), Expectation Maximization Based Principal Component Analysis (EM-PCA), the Generalized Hebbian Algorithm (GHA), and Adaptive Principal Component Extraction (APEX), were applied to reduce dimensionality in advance of two clustering algorithms, K-Means and Fuzzy C-Means. In the study, the T1-weighted MRI images of the human brain with brain tumor were used for clustering. In addition to the original size of 512 lines and 512 pixels per line, three more different sizes, 256 × 256, 128 × 128 and 64 × 64, were included in the study to examine their effect on the methods. The obtained results were compared in terms of both the reconstruction errors and the Euclidean distance errors among the clustered images containing the same number of principal components. According to the findings, the PPCA obtained the best results among all others. Furthermore, the EM-PCA and the PPCA helped the K-Means algorithm accomplish the best clustering performance in the majority of cases, as well as achieving significant results with both clustering algorithms for all sizes of T1w MRI images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
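
    A minimal scikit-learn sketch of the conventional-PCA plus K-Means arm of such a comparison, run on patch vectors from a synthetic slice; the patch-based sample construction, sizes and cluster count are assumptions for illustration, and the PPCA, EM-PCA, GHA and APEX variants are not shown.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    def pca_kmeans_segment(image, n_components=10, n_clusters=3, patch=8):
        """Cluster non-overlapping patches of a 2D slice after PCA.

        Each patch becomes one sample; PCA reduces the patch dimension
        before K-Means assigns every patch to a cluster (e.g. tumour,
        tissue, background in the paper's setting)."""
        h, w = image.shape
        patches = (image[:h - h % patch, :w - w % patch]
                   .reshape(h // patch, patch, w // patch, patch)
                   .swapaxes(1, 2)
                   .reshape(-1, patch * patch))
        reduced = PCA(n_components=n_components).fit_transform(patches)
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=0).fit_predict(reduced)
        return labels.reshape(h // patch, w // patch)

    # Toy usage on a synthetic 128 x 128 "slice" with a bright blob
    img = np.random.default_rng(5).normal(0.0, 0.1, (128, 128))
    img[40:70, 50:90] += 1.0
    print(np.bincount(pca_kmeans_segment(img).ravel()))
    ```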

  2. Comparing the ISO-recommended and the cumulative data-reduction algorithms in S-on-1 laser damage test by a reverse approach method

    NASA Astrophysics Data System (ADS)

    Zorila, Alexandru; Stratan, Aurel; Nemes, George

    2018-01-01

    We compare the ISO-recommended (the standard) data-reduction algorithm used to determine the surface laser-induced damage threshold of optical materials by the S-on-1 test with two newly suggested algorithms, both named "cumulative" algorithms/methods, a regular one and a limit-case one, intended to perform in some respects better than the standard one. To avoid additional errors due to real experiments, a simulated test is performed, named the reverse approach. This approach simulates the real damage experiments, by generating artificial test-data of damaged and non-damaged sites, based on an assumed, known damage threshold fluence of the target and on a given probability distribution function to induce the damage. In this work, a database of 12 sets of test-data containing both damaged and non-damaged sites was generated by using four different reverse techniques and by assuming three specific damage probability distribution functions. The same value for the threshold fluence was assumed, and a Gaussian fluence distribution on each irradiated site was considered, as usual for the S-on-1 test. Each of the test-data was independently processed by the standard and by the two cumulative data-reduction algorithms, the resulting fitted probability distributions were compared with the initially assumed probability distribution functions, and the quantities used to compare these algorithms were determined. These quantities characterize the accuracy and the precision in determining the damage threshold and the goodness of fit of the damage probability curves. The results indicate that the accuracy in determining the absolute damage threshold is best for the ISO-recommended method, the precision is best for the limit-case of the cumulative method, and the goodness of fit estimator (adjusted R-squared) is almost the same for all three algorithms.

  3. An Algorithm for Timely Transmission of Solicitation Messages in RPL for Energy-Efficient Node Mobility.

    PubMed

    Park, Jihong; Kim, Ki-Hyung; Kim, Kangseok

    2017-04-19

    The IPv6 Routing Protocol for Low Power and Lossy Networks (RPL) was proposed for various applications of IPv6 low power wireless networks. While RPL supports various routing metrics and is designed to be suitable for wireless sensor network environments, it does not consider the mobility of nodes. Therefore, there is a need for a method that is energy efficient and that provides stable and reliable data transmission by considering the mobility of nodes in RPL networks. This paper proposes an algorithm to support node mobility in RPL in an energy-efficient manner and describes its operating principle based on different scenarios. The proposed algorithm supports the mobility of nodes by dynamically adjusting the transmission interval of the messages that request the route based on the speed and direction of the motion of mobile nodes, as well as the costs between neighboring nodes. The performance of the proposed algorithm and previous algorithms for supporting node mobility were examined experimentally. From the experiment, it was observed that the proposed algorithm requires fewer messages per unit time for selecting a new parent node following the movement of a mobile node. Since fewer messages are used to select a parent node, the energy consumption is also less than that of previous algorithms.

  4. An Algorithm for Timely Transmission of Solicitation Messages in RPL for Energy-Efficient Node Mobility

    PubMed Central

    Park, Jihong; Kim, Ki-Hyung; Kim, Kangseok

    2017-01-01

    The IPv6 Routing Protocol for Low Power and Lossy Networks (RPL) was proposed for various applications of IPv6 low power wireless networks. While RPL supports various routing metrics and is designed to be suitable for wireless sensor network environments, it does not consider the mobility of nodes. Therefore, there is a need for a method that is energy efficient and that provides stable and reliable data transmission by considering the mobility of nodes in RPL networks. This paper proposes an algorithm to support node mobility in RPL in an energy-efficient manner and describes its operating principle based on different scenarios. The proposed algorithm supports the mobility of nodes by dynamically adjusting the transmission interval of the messages that request the route based on the speed and direction of the motion of mobile nodes, as well as the costs between neighboring nodes. The performance of the proposed algorithm and previous algorithms for supporting node mobility were examined experimentally. From the experiment, it was observed that the proposed algorithm requires fewer messages per unit time for selecting a new parent node following the movement of a mobile node. Since fewer messages are used to select a parent node, the energy consumption is also less than that of previous algorithms. PMID:28422084

  5. Relative risk reduction is useful metric to standardize effect size for public health interventions for translational research.

    PubMed

    Mirzazadeh, Ali; Malekinejad, Mohsen; Kahn, James G

    2015-03-01

    Heterogeneity of effect measures in intervention studies undermines the use of evidence to inform policy. Our objective was to develop a comprehensive algorithm to convert all types of effect measures to one standard metric, relative risk reduction (RRR). This work was conducted to facilitate synthesis of published intervention effects for our epidemic modeling of the health impact of human immunodeficiency virus (HIV) testing and counseling (HTC). We designed and implemented an algorithm to transform varied effect measures to RRR, representing the proportionate reduction in undesirable outcomes. Our extraction of 55 HTC studies identified 473 effect measures representing unique combinations of intervention-outcome-population characteristics, using five outcome metrics: pre-post proportion (70.6%), odds ratio (14.0%), mean difference (10.2%), risk ratio (4.4%), and RRR (0.9%). Outcomes were expressed as both desirable (29.5%, e.g., consistent condom use) and undesirable (70.5%, e.g., inconsistent condom use). Using four examples, we demonstrate our algorithm for converting varied effect measures to RRR and provide the conceptual basis for advantages of RRR over other metrics. Our review of the literature suggests that RRR, an easily understood and useful metric to convey risk reduction associated with an intervention, is underused by original and review studies. Copyright © 2015 Elsevier Inc. All rights reserved.
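
    The paper's full conversion algorithm is not reproduced in the abstract; the sketch below shows the kind of textbook conversions involved, assuming outcomes are expressed as undesirable events. The odds-ratio case uses the standard Zhang-Yu approximation and therefore also needs the control-group risk; the example numbers are invented.

    ```python
    def rrr_from_proportions(p_before, p_after):
        """Pre/post proportions of an undesirable outcome -> RRR."""
        return (p_before - p_after) / p_before

    def rrr_from_risk_ratio(rr):
        """Risk ratio of the undesirable outcome (intervention vs control)."""
        return 1.0 - rr

    def rrr_from_odds_ratio(odds_ratio, p_control):
        """Odds ratio plus control-group risk of the undesirable outcome,
        converted to a risk ratio via the Zhang-Yu approximation."""
        rr = odds_ratio / (1.0 - p_control + p_control * odds_ratio)
        return 1.0 - rr

    # Invented example: inconsistent condom use falls from 60% to 45%
    print(round(rrr_from_proportions(0.60, 0.45), 2))    # 0.25
    print(round(rrr_from_odds_ratio(0.55, 0.60), 2))     # about 0.25
    ```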

  6. Open source pipeline for ESPaDOnS reduction and analysis

    NASA Astrophysics Data System (ADS)

    Martioli, Eder; Teeple, Doug; Manset, Nadine; Devost, Daniel; Withington, Kanoa; Venne, Andre; Tannock, Megan

    2012-09-01

    OPERA is a Canada-France-Hawaii Telescope (CFHT) open source collaborative software project currently under development for an ESPaDOnS echelle spectro-polarimetric image reduction pipeline. OPERA is designed to be fully automated, performing calibrations and reduction, producing one-dimensional intensity and polarimetric spectra. The calibrations are performed on two-dimensional images. Spectra are extracted using an optimal extraction algorithm. While primarily designed for CFHT ESPaDOnS data, the pipeline is being written to be extensible to other echelle spectrographs. A primary design goal is to make use of fast, modern object-oriented technologies. Processing is controlled by a harness, which manages a set of processing modules that make use of a collection of native OPERA software libraries and standard external software libraries. The harness and modules are completely parametrized by site configuration and instrument parameters. The software is open-ended, permitting users of OPERA to extend the pipeline capabilities. All these features have been designed to provide a portable infrastructure that facilitates collaborative development, code re-usability and extensibility. OPERA is free software with support for both GNU/Linux and MacOSX platforms. The pipeline is hosted on SourceForge under the name "opera-pipeline".

  7. Maximum life spiral bevel reduction design

    NASA Technical Reports Server (NTRS)

    Savage, M.; Prasanna, M. G.; Coe, H. H.

    1992-01-01

    Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.

  8. Development of a new metal artifact reduction algorithm by using an edge preserving method for CBCT imaging

    NASA Astrophysics Data System (ADS)

    Kim, Juhye; Nam, Haewon; Lee, Rena

    2015-07-01

    In CT (computed tomography) images, metal materials such as tooth supplements or surgical clips can cause metal artifacts and degrade image quality. In severe cases, this may lead to misdiagnosis. In this research, we developed a new MAR (metal artifact reduction) algorithm by using an edge preserving filter and the MATLAB program (Mathworks, version R2012a). The proposed algorithm consists of 6 steps: image reconstruction from projection data, metal segmentation, forward projection, interpolation, application of an edge preserving smoothing filter, and new image reconstruction. For an evaluation of the proposed algorithm, we obtained both numerical simulation data and data for a Rando phantom. In the numerical simulation data, four metal regions were added into the Shepp-Logan phantom to produce metal artifacts. The projection data of the metal-inserted Rando phantom were obtained by using a prototype CBCT scanner manufactured by the medical engineering and medical physics (MEMP) laboratory research group in medical science at Ewha Womans University. After the proposed algorithm had been applied, the results were compared with the original image (with metal artifacts, without correction) and with a corrected image based on linear interpolation. Both visual and quantitative evaluations were done. Compared with the original image with metal artifacts and with the image corrected by using linear interpolation, both the numerical and the experimental phantom data demonstrated that the proposed algorithm reduced the metal artifact. In conclusion, the evaluation in this research showed that the proposed algorithm outperformed the interpolation based MAR algorithm. If an optimization and a stability evaluation of the proposed algorithm can be performed, the developed algorithm is expected to be an effective tool for eliminating metal artifacts even in commercial CT systems.
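
    The six listed steps follow the common sinogram-inpainting MAR pattern. The sketch below covers only the interpolation step (replacing metal-contaminated sinogram samples), with plain linear interpolation standing in for the authors' edge-preserving filtering; the array layout, names and toy data are assumptions.

    ```python
    import numpy as np

    def inpaint_metal_trace(sinogram, metal_trace):
        """Replace sinogram samples contaminated by metal: for each
        projection angle, linearly interpolate across the detector bins
        flagged in `metal_trace`.

        sinogram    : (n_angles, n_bins) array
        metal_trace : boolean array of the same shape, True where the ray
                      passed through segmented metal (obtained by forward
                      projecting the metal mask).
        """
        corrected = sinogram.copy()
        bins = np.arange(sinogram.shape[1])
        for i in range(sinogram.shape[0]):
            bad = metal_trace[i]
            if bad.any() and (~bad).any():
                corrected[i, bad] = np.interp(bins[bad], bins[~bad],
                                              sinogram[i, ~bad])
        return corrected

    # Toy usage: a smooth sinogram with a bright streak standing in for metal
    angles, bins = 180, 128
    sino = np.sin(np.linspace(0, np.pi, bins))[None, :] * np.ones((angles, 1))
    trace = np.zeros_like(sino, dtype=bool)
    trace[:, 60:68] = True
    sino_metal = sino.copy()
    sino_metal[trace] += 5.0
    print(np.abs(inpaint_metal_trace(sino_metal, trace) - sino).max())
    ```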

  9. Design and analysis of compound flexible skin based on deformable honeycomb

    NASA Astrophysics Data System (ADS)

    Zou, Tingting; Zhou, Li

    2017-04-01

    In this study, we focused on the development and verification of a robust framework for surface crack detection in steel pipes using measured vibration responses, in the presence of multiple progressive damage cases occurring in different locations within the structure. Feature selection, dimensionality reduction, and multi-class support vector machine classification were established for this purpose. Nine damage cases, at different locations, orientations, and lengths, were introduced into the pipe structure. The pipe was impacted 300 times using an impact hammer; after each damage case, the vibration data were collected using 3 PZT wafers which were installed on the outer surface of the pipe. At first, damage-sensitive features were extracted using the frequency response function approach, followed by recursive feature elimination for dimensionality reduction. Then, a multi-class support vector machine learning algorithm was employed to train the data and generate a statistical model. Once the model is established, decision values and distances from the hyperplane are generated for newly collected data using the trained model. This process was repeated on the data collected from each sensor. Overall, using a single sensor for training and testing led to a very high accuracy, reaching 98% in the assessment of the 9 damage cases used in this study.

  10. Print quality analysis for ink-saving algorithms

    NASA Astrophysics Data System (ADS)

    Ortiz Segovia, Maria V.; Bonnier, Nicolas; Allebach, Jan P.

    2012-01-01

    Ink-saving strategies for CMYK printers have evolved from their earlier stages where the 'draft' print mode was the main option available to control ink usage. The savings were achieved by printing alternate dots in an image at the expense of reducing print quality considerably. Nowadays, customers are not only unwilling to compromise quality but have higher expectations regarding both visual print quality and ink reduction solutions. Therefore, the need for more intricate ink-saving solutions with lower impact on print quality is evident. Printing-related factors such as the way the printer places the dots on the paper and the ink-substrate interaction play important and complex roles in the characterization and modeling of the printing process that make the ink reduction topic a challenging problem. In our study, we are interested in benchmarking ink-saving algorithms to find the connections between different ink reduction levels of a given ink-saving method and a set of print quality attributes. This study is mostly related to CMYK printers that use dispersed dot halftoning algorithms. The results of our efforts to develop such an evaluation scheme are presented in this paper.

  11. An analysis of spectral envelope-reduction via quadratic assignment problems

    NASA Technical Reports Server (NTRS)

    George, Alan; Pothen, Alex

    1994-01-01

    A new spectral algorithm for reordering a sparse symmetric matrix to reduce its envelope size was described. The ordering is computed by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. In this paper, we provide an analysis of the spectral envelope reduction algorithm. We described related 1- and 2-sum problems; the former is related to the envelope size, while the latter is related to an upper bound on the work involved in an envelope Cholesky factorization scheme. We formulate the latter two problems as quadratic assignment problems, and then study the 2-sum problem in more detail. We obtain lower bounds on the 2-sum by considering a projected quadratic assignment problem, and then show that finding a permutation matrix closest to an orthogonal matrix attaining one of the lower bounds justifies the spectral envelope reduction algorithm. The lower bound on the 2-sum is seen to be tight for reasonably 'uniform' finite element meshes. We also obtain asymptotically tight lower bounds for the envelope size for certain classes of meshes.
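
    A compact illustration of the spectral reordering step itself (form the Laplacian, take the eigenvector of the second-smallest eigenvalue, sort its components), using dense numpy linear algebra on a toy scrambled path graph; the envelope measure below is a simple row-wise distance-to-diagonal sum, not necessarily the paper's exact definition.

    ```python
    import numpy as np

    def spectral_order(A):
        """Reorder a symmetric sparsity pattern by sorting the Fiedler vector.

        A is a dense symmetric 0/1 pattern here for clarity; the returned
        permutation sorts the components of the eigenvector associated with
        the second-smallest eigenvalue of the graph Laplacian."""
        pattern = (A != 0).astype(float)
        np.fill_diagonal(pattern, 0.0)
        L = np.diag(pattern.sum(axis=1)) - pattern
        vals, vecs = np.linalg.eigh(L)          # eigenvalues ascending
        return np.argsort(vecs[:, 1])

    def envelope_size(A):
        """Sum over rows of the distance from the first nonzero to the diagonal."""
        n = A.shape[0]
        return sum(i - np.flatnonzero(A[i, :i + 1] != 0)[0] for i in range(n))

    # Toy usage: a path graph given in a scrambled order
    n = 30
    path = np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    perm = np.random.default_rng(6).permutation(n)
    scrambled = path[np.ix_(perm, perm)]
    p = spectral_order(scrambled)
    print(envelope_size(scrambled), "->", envelope_size(scrambled[np.ix_(p, p)]))
    ```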

  12. Development and evaluation of thermal model reduction algorithms for spacecraft

    NASA Astrophysics Data System (ADS)

    Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus

    2015-05-01

    This paper is concerned with the topic of the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München with the goal of shortening and automating the time-consuming and manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model for calculation of external heat flows and radiative couplings and into the reduction of the underlying mathematical model. For simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability in reduction of the mathematical model. In this evaluation, compatibility with the thermal analysis tool ESATAN-TMS is a major concern, which restricts the useful application of these methods. Additional model reduction methods have been developed which account for these constraints. The Matrix Reduction method allows the approximation of the differential equation to reference values exactly, except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for model reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.

  13. DESIGNING SUSTAINABLE PROCESSES WITH SIMULATION: THE WASTE REDUCTION (WAR) ALGORITHM

    EPA Science Inventory

    The WAR Algorithm, a methodology for determining the potential environmental impact (PEI) of a chemical process, is presented with modifications that account for the PEI of the energy consumed within that process. From this theory, four PEI indexes are used to evaluate the envir...

  14. A Fast Algorithm for the Convolution of Functions with Compact Support Using Fourier Extensions

    DOE PAGES

    Xu, Kuan; Austin, Anthony P.; Wei, Ke

    2017-12-21

    In this paper, we present a new algorithm for computing the convolution of two compactly supported functions. The algorithm approximates the functions to be convolved using Fourier extensions and then uses the fast Fourier transform to efficiently compute Fourier extension approximations to the pieces of the result. Finally, the complexity of the algorithm is O(N(log N)^2), where N is the number of degrees of freedom used in each of the Fourier extensions.

  15. A Fast Algorithm for the Convolution of Functions with Compact Support Using Fourier Extensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Kuan; Austin, Anthony P.; Wei, Ke

    In this paper, we present a new algorithm for computing the convolution of two compactly supported functions. The algorithm approximates the functions to be convolved using Fourier extensions and then uses the fast Fourier transform to efficiently compute Fourier extension approximations to the pieces of the result. Finally, the complexity of the algorithm is O(N(log N)^2), where N is the number of degrees of freedom used in each of the Fourier extensions.
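
    The Fourier-extension machinery is what distinguishes the cited algorithm; the sketch below shows only the simpler underlying idea of approximating the convolution of two sampled, compactly supported functions with zero-padded FFTs, which does not retain the spectral accuracy for non-periodic pieces that Fourier extensions provide.

    ```python
    import numpy as np

    def fft_convolve(f, g, dx):
        """Approximate (f * g)(x) = integral of f(t) g(x - t) dt for two
        functions sampled on a uniform grid with spacing dx, using
        zero-padded FFTs (plain discrete analogue, not Fourier extensions)."""
        n = len(f) + len(g) - 1
        n_fft = 1 << (n - 1).bit_length()        # next power of two
        F = np.fft.rfft(f, n_fft)
        G = np.fft.rfft(g, n_fft)
        return np.fft.irfft(F * G, n_fft)[:n] * dx

    # Toy check: box * box should give a triangle of height 1 and base 2
    dx = 1e-3
    x = np.arange(0.0, 1.0, dx)
    box = np.ones_like(x)
    tri = fft_convolve(box, box, dx)
    print(round(tri.max(), 3))                   # ~1.0
    ```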

  16. Algorithm for detection the QRS complexes based on support vector machine

    NASA Astrophysics Data System (ADS)

    Van, G. V.; Podmasteryev, K. V.

    2017-11-01

    The efficiency of computer ECG analysis depends on the accurate detection of QRS complexes. This paper presents an algorithm for QRS complex detection based on a support vector machine (SVM). The proposed algorithm is evaluated on annotated standard databases such as the MIT-BIH Arrhythmia database. The QRS detector obtained a sensitivity of Se = 98.32% and a specificity of Sp = 95.46% for the MIT-BIH Arrhythmia database. This algorithm can be used as the basis for software to diagnose the electrical activity of the heart.
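
    The abstract gives no details of the features or windowing used, so the following is a purely illustrative sketch of the general approach (hand-crafted features on candidate windows, then an SVM classifier), with synthetic data and entirely assumed names and feature choices.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def window_features(ecg, centers, half=16):
        """Simple per-window features around candidate sample indices:
        peak-to-peak amplitude, mean absolute slope, and energy."""
        feats = []
        for c in centers:
            w = ecg[c - half:c + half]
            feats.append([np.ptp(w), np.abs(np.diff(w)).mean(), np.sum(w ** 2)])
        return np.array(feats)

    # Toy training set: synthetic "QRS-like" spikes vs flat noise windows
    rng = np.random.default_rng(7)
    ecg = 0.05 * rng.standard_normal(4000)
    qrs_centers = np.arange(200, 3800, 400)
    for c in qrs_centers:
        ecg[c - 2:c + 3] += np.array([-0.2, 0.6, 1.0, 0.6, -0.2])
    noise_centers = qrs_centers + 200
    X = np.vstack([window_features(ecg, qrs_centers),
                   window_features(ecg, noise_centers)])
    y = np.r_[np.ones(len(qrs_centers)), np.zeros(len(noise_centers))]
    clf = SVC(kernel="rbf", C=1.0).fit(X, y)
    print(clf.score(X, y))
    ```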

  17. The Goes-R Geostationary Lightning Mapper (GLM): Algorithm and Instrument Status

    NASA Technical Reports Server (NTRS)

    Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Mach, Douglas

    2010-01-01

    The Geostationary Operational Environmental Satellite (GOES-R) is the next series to follow the existing GOES system currently operating over the Western Hemisphere. Superior spacecraft and instrument technology will support expanded detection of environmental phenomena, resulting in more timely and accurate forecasts and warnings. Advancements over current GOES capabilities include a new capability for total lightning detection (cloud and cloud-to-ground flashes) from the Geostationary Lightning Mapper (GLM), and improved capability for the Advanced Baseline Imager (ABI). The Geostationary Lightning Mapper (GLM) will map total lightning activity (in-cloud and cloud-to-ground lightning flashes) continuously day and night with near-uniform spatial resolution of 8 km with a product refresh rate of less than 20 sec over the Americas and adjacent oceanic regions. This will aid in forecasting severe storms and tornado activity, and convective weather impacts on aviation safety and efficiency. In parallel with the instrument development (a prototype and 4 flight models), a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms, cal/val performance monitoring tools, and new applications. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds are being used to develop the pre-launch algorithms and applications, and also improve our knowledge of thunderstorm initiation and evolution. A joint field campaign with Brazilian researchers in 2010-2011 will produce concurrent observations from a VHF lightning mapping array, Meteosat multi-band imagery, Tropical Rainfall Measuring Mission (TRMM) Lightning Imaging Sensor (LIS) overpasses, and related ground and in-situ lightning and meteorological measurements in the vicinity of Sao Paulo. These data will provide a new comprehensive proxy data set for algorithm and application development.

  18. Paroxysmal atrial fibrillation prediction based on HRV analysis and non-dominated sorting genetic algorithm III.

    PubMed

    Boon, K H; Khalil-Hani, M; Malarvili, M B

    2018-01-01

    This paper presents a method that is able to predict paroxysmal atrial fibrillation (PAF). The method uses shorter heart rate variability (HRV) signals when compared to existing methods, and achieves good prediction accuracy. PAF is a common cardiac arrhythmia that increases the health risk of a patient, and the development of an accurate predictor of the onset of PAF is clinically important because it increases the possibility to electrically stabilize and prevent the onset of atrial arrhythmias with different pacing techniques. We propose a multi-objective optimization algorithm based on the non-dominated sorting genetic algorithm III for optimizing the baseline PAF prediction system, which consists of the stages of pre-processing, HRV feature extraction, and the support vector machine (SVM) model. The pre-processing stage comprises heart rate correction, interpolation, and signal detrending. After that, time-domain, frequency-domain, and non-linear HRV features are extracted from the pre-processed data in the feature extraction stage. Then, these features are used as input to the SVM for predicting the PAF event. The proposed optimization algorithm is used to optimize the parameters and settings of various HRV feature extraction algorithms, select the best feature subsets, and tune the SVM parameters simultaneously for maximum prediction performance. The proposed method achieves an accuracy rate of 87.7%, which significantly outperforms most of the previous works. This accuracy rate is achieved even with the HRV signal length being reduced from the typical 30 min to just 5 min (a reduction of 83%). Furthermore, another significant result is that the sensitivity rate, which is considered more important than other performance metrics in this paper, can be improved with the trade-off of lower specificity. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Correlation coefficient based supervised locally linear embedding for pulmonary nodule recognition.

    PubMed

    Wu, Panpan; Xia, Kewen; Yu, Hengyong

    2016-11-01

    Dimensionality reduction techniques are developed to suppress the negative effects of the high dimensional feature space of lung CT images on classification performance in computer aided detection (CAD) systems for pulmonary nodule detection. An improved supervised locally linear embedding (SLLE) algorithm is proposed based on the concept of correlation coefficient. The Spearman's rank correlation coefficient is introduced to adjust the distance metric in the SLLE algorithm to ensure that more suitable neighborhood points could be identified, and thus to enhance the discriminating power of embedded data. The proposed Spearman's rank correlation coefficient based SLLE (SC²SLLE) is implemented and validated in our pilot CAD system using a clinical dataset collected from the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). Particularly, a representative CAD system for solitary pulmonary nodule detection is designed and implemented. After a sequence of medical image processing steps, 64 nodules and 140 non-nodules are extracted, and 34 representative features are calculated. The SC²SLLE, as well as the SLLE and LLE algorithms, are applied to reduce the dimensionality. Several quantitative measurements are also used to evaluate and compare the performances. Using a 5-fold cross-validation methodology, the proposed algorithm achieves 87.65% accuracy, 79.23% sensitivity, 91.43% specificity, and 8.57% false positive rate, on average. Experimental results indicate that the proposed algorithm outperforms the original locally linear embedding and SLLE coupled with the support vector machine (SVM) classifier. Based on the preliminary results from a limited number of nodules in our dataset, this study demonstrates the great potential to improve the performance of a CAD system for nodule detection using the proposed SC²SLLE. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Improved Measurement of Blood Pressure by Extraction of Characteristic Features from the Cuff Oscillometric Waveform

    PubMed Central

    Lim, Pooi Khoon; Ng, Siew-Cheok; Jassim, Wissam A.; Redmond, Stephen J.; Zilany, Mohammad; Avolio, Alberto; Lim, Einly; Tan, Maw Pin; Lovell, Nigel H.

    2015-01-01

    We present a novel approach to improve the estimation of systolic (SBP) and diastolic blood pressure (DBP) from oscillometric waveform data using variable characteristic ratios between SBP and DBP with mean arterial pressure (MAP). This was verified in 25 healthy subjects, aged 28 ± 5 years. The multiple linear regression (MLR) and support vector regression (SVR) models were used to examine the relationship between the SBP and the DBP ratio with ten features extracted from the oscillometric waveform envelope (OWE). An automatic algorithm based on relative changes in the cuff pressure and neighbouring oscillometric pulses was proposed to remove outlier points caused by movement artifacts. Substantial reductions in the mean and standard deviation of the blood pressure estimation errors were obtained upon artifact removal. Using the sequential forward floating selection (SFFS) approach, we were able to achieve a significant reduction in the mean and standard deviation of differences between the estimated SBP values and the reference scoring (MLR: mean ± SD = −0.3 ± 5.8 mmHg; SVR: −0.6 ± 5.4 mmHg) with only two features, i.e., Ratio2 and Area3, as compared to the conventional maximum amplitude algorithm (MAA) method (mean ± SD = −1.6 ± 8.6 mmHg). Comparing the performance of both MLR and SVR models, our results showed that the MLR model was able to achieve comparable performance to that of the SVR model despite its simplicity. PMID:26087370
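
    For context, the conventional maximum amplitude algorithm (MAA) that this work improves upon can be sketched as fixed characteristic ratios applied to the oscillometric waveform envelope; the ratio values and the toy envelope below are assumptions, and the paper's contribution is precisely to replace such fixed ratios with regression on OWE features.

    ```python
    import numpy as np

    def maa_blood_pressure(cuff_pressure, envelope, ratio_s=0.55, ratio_d=0.75):
        """Classic maximum-amplitude algorithm: MAP is the cuff pressure at
        the peak of the oscillometric waveform envelope (OWE); SBP and DBP
        are the cuff pressures where the envelope crosses fixed fractions
        of that peak on the high-pressure and low-pressure sides of the
        deflation curve, respectively."""
        peak = np.argmax(envelope)
        map_ = cuff_pressure[peak]
        above = slice(0, peak + 1)              # cuff pressures above MAP
        below = slice(peak, None)               # cuff pressures below MAP
        sbp = np.interp(ratio_s * envelope[peak],
                        envelope[above], cuff_pressure[above])
        dbp = np.interp(ratio_d * envelope[peak],
                        envelope[below][::-1], cuff_pressure[below][::-1])
        return sbp, map_, dbp

    # Toy envelope: Gaussian bump over a linearly deflating cuff
    cuff = np.linspace(180, 40, 300)            # mmHg, deflation
    env = np.exp(-((cuff - 95.0) / 25.0) ** 2)  # peak amplitude near 95 mmHg
    print([round(v, 1) for v in maa_blood_pressure(cuff, env)])
    ```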

  1. Monitoring carbon dioxide from space: Retrieval algorithm and flux inversion based on GOSAT data and using CarbonTracker-China

    NASA Astrophysics Data System (ADS)

    Yang, Dongxu; Zhang, Huifang; Liu, Yi; Chen, Baozhang; Cai, Zhaonan; Lü, Daren

    2017-08-01

    Monitoring atmospheric carbon dioxide (CO2) from space-borne state-of-the-art hyperspectral instruments can provide a high precision global dataset to improve carbon flux estimation and reduce the uncertainty of climate projection. Here, we introduce a carbon flux inversion system for estimating carbon flux with satellite measurements under the support of "The Strategic Priority Research Program of the Chinese Academy of Sciences—Climate Change: Carbon Budget and Relevant Issues". The carbon flux inversion system is composed of two separate parts: the Institute of Atmospheric Physics Carbon Dioxide Retrieval Algorithm for Satellite Remote Sensing (IAPCAS), and CarbonTracker-China (CT-China), developed at the Chinese Academy of Sciences. The Greenhouse gases Observing SATellite (GOSAT) measurements are used in the carbon flux inversion experiment. To improve the quality of the IAPCAS-GOSAT retrieval, we have developed a post-screening and bias correction method, resulting in 25%-30% of the data remaining after quality control. Based on these data, the seasonal variation of XCO2 (column-averaged CO2 dry-air mole fraction) is studied, and a strong relation with vegetation cover and population is identified. Then, the IAPCAS-GOSAT XCO2 product is used in carbon flux estimation by CT-China. The net ecosystem CO2 exchange is -0.34 Pg C yr-1 (±0.08 Pg C yr-1), with a large error reduction of 84%, which is a significant improvement on the error reduction when compared with in situ-only inversion.

  2. Sinogram-based adaptive iterative reconstruction for sparse view x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Trinca, D.; Zhong, Y.; Wang, Y.-Z.; Mamyrbayev, T.; Libin, E.

    2016-10-01

    With the availability of more powerful computing processors, iterative reconstruction algorithms have recently been successfully implemented as an approach to achieving significant dose reduction in X-ray CT. In this paper, we propose an adaptive iterative reconstruction algorithm for X-ray CT that is shown to provide results comparable to those obtained by proprietary algorithms, in terms of both reconstruction accuracy and execution time. The proposed algorithm is thus provided for free to the scientific community, for regular use, and for possible further optimization.

  3. Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States

    NASA Technical Reports Server (NTRS)

    Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.

    2017-01-01

    This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model with more than 12× reduction in the number of states relative to the original model is able to accurately predict system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.

  4. Real-time image annotation by manifold-based biased Fisher discriminant analysis

    NASA Astrophysics Data System (ADS)

    Ji, Rongrong; Yao, Hongxun; Wang, Jicheng; Sun, Xiaoshuai; Liu, Xianming

    2008-01-01

    Automatic Linguistic Annotation is a promising solution to bridge the semantic gap in content-based image retrieval. However, two crucial issues are not well addressed in state-of-the-art annotation algorithms: 1. the Small Sample Size (3S) problem in keyword classifier/model learning; 2. most annotation algorithms cannot be extended to real-time online use because of their low computational efficiency. This paper presents a novel Manifold-based Biased Fisher Discriminant Analysis (MBFDA) algorithm to address these two issues by transductive semantic learning and keyword filtering. To address the 3S problem, Co-Training based Manifold learning is adopted for keyword model construction. To achieve real-time annotation, a Biased Fisher Discriminant Analysis (BFDA)-based semantic feature reduction algorithm is presented for keyword confidence discrimination and semantic feature reduction. Different from all existing annotation methods, MBFDA views image annotation from a novel Eigen semantic feature (which corresponds to keywords) selection aspect. As demonstrated in experiments, our manifold-based biased Fisher discriminant analysis annotation algorithm outperforms classical and state-of-the-art annotation methods (1. K-NN Expansion; 2. One-to-All SVM; 3. PWC-SVM) in both computational time and annotation accuracy by a large margin.

  5. Computer-aided detection of early cancer in the esophagus using HD endoscopy images

    NASA Astrophysics Data System (ADS)

    van der Sommen, Fons; Zinger, Svitlana; Schoon, Erik J.; de With, Peter H. N.

    2013-02-01

    Esophageal cancer is the fastest rising type of cancer in the Western world. The recent development of High-Definition (HD) endoscopy has enabled the specialist physician to identify cancer at an early stage. Nevertheless, it still requires considerable effort and training to be able to recognize these irregularities associated with early cancer. As a first step towards a Computer-Aided Detection (CAD) system that supports the physician in finding these early stages of cancer, we propose an algorithm that is able to identify irregularities in the esophagus automatically, based on HD endoscopic images. The concept employs tile-based processing, so our system is not only able to identify that an endoscopic image contains early cancer, but it can also locate it. The identification is based on the following steps: (1) preprocessing, (2) feature extraction with dimensionality reduction, (3) classification. We evaluate the detection performance in RGB, HSI and YCbCr color space using the Color Histogram (CH) and Gabor features and we compare with other well-known features to describe texture. For classification, we employ a Support Vector Machine (SVM) and evaluate its performance using different parameters and kernel functions. In experiments, our system achieves a classification accuracy of 95.9% on 50×50 pixel tiles of tumorous and normal tissue and reaches an Area Under the Curve (AUC) of 0.990. In 22 clinical examples our algorithm was able to identify all (pre-)cancerous regions and annotate those regions reasonably well. The experimental and clinical validation are considered promising for a CAD system that supports the physician in finding early stage cancer.

  6. Comparing Binaural Pre-processing Strategies III

    PubMed Central

    Warzybok, Anna; Ernst, Stephan M. A.

    2015-01-01

    A comprehensive evaluation of eight signal pre-processing strategies, including directional microphones, coherence filters, single-channel noise reduction, binaural beamformers, and their combinations, was undertaken with normal-hearing (NH) and hearing-impaired (HI) listeners. Speech reception thresholds (SRTs) were measured in three noise scenarios (multitalker babble, cafeteria noise, and single competing talker). Predictions of three common instrumental measures were compared with the general perceptual benefit caused by the algorithms. The individual SRTs measured without pre-processing and individual benefits were objectively estimated using the binaural speech intelligibility model. Ten listeners with NH and 12 HI listeners participated. The participants varied in age and pure-tone threshold levels. Although HI listeners required a better signal-to-noise ratio to obtain 50% intelligibility than listeners with NH, no differences in SRT benefit from the different algorithms were found between the two groups. With the exception of single-channel noise reduction, all algorithms showed an improvement in SRT of between 2.1 dB (in cafeteria noise) and 4.8 dB (in single competing talker condition). Model predictions with binaural speech intelligibility model explained 83% of the measured variance of the individual SRTs in the no pre-processing condition. Regarding the benefit from the algorithms, the instrumental measures were not able to predict the perceptual data in all tested noise conditions. The comparable benefit observed for both groups suggests a possible application of noise reduction schemes for listeners with different hearing status. Although the model can predict the individual SRTs without pre-processing, further development is necessary to predict the benefits obtained from the algorithms at an individual level. PMID:26721922

  7. Verification of Energy Reduction Effect through Control Optimization of Supply Air Temperature in VRF-OAP System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Je; Yoon, Hyun; Im, Piljae

    This paper developed an algorithm that controls the supply air temperature in the variable refrigerant flow (VRF), outdoor air processing unit (OAP) system, according to indoor and outdoor temperature and humidity, and verified the effects after applying the algorithm to real buildings. The VRF-OAP system refers to a heating, ventilation, and air conditioning (HVAC) system to complement a ventilation function, which is not provided in the VRF system. It is a system that supplies air indoors by heat treatment of outdoor air through the OAP, as a number of indoor units and OAPs are connected to the outdoor unit in the VRF system simultaneously. This paper conducted experiments with regard to changes in efficiency and the cooling capabilities of each unit and system according to supply air temperature in the OAP using a multicalorimeter. Based on these results, an algorithm that controlled the temperature of the supply air in the OAP was developed considering indoor and outdoor temperatures and humidity. The algorithm was applied in the test building to verify the effects of energy reduction and the effects on indoor temperature and humidity. Loads were then changed by adjusting the number of conditioned rooms to verify the effect of the algorithm according to various load conditions. In the field test results, the energy reduction effect was approximately 15–17% at a 100% load, and 4–20% at a 75% load. However, no significant effects were shown at a 50% load. The indoor temperature and humidity reached a comfortable level.

  8. Verification of Energy Reduction Effect through Control Optimization of Supply Air Temperature in VRF-OAP System

    DOE PAGES

    Lee, Je; Yoon, Hyun; Im, Piljae; ...

    2017-12-27

    This paper developed an algorithm that controls the supply air temperature in the variable refrigerant flow (VRF), outdoor air processing unit (OAP) system, according to indoor and outdoor temperature and humidity, and verified the effects after applying the algorithm to real buildings. The VRF-OAP system refers to a heating, ventilation, and air conditioning (HVAC) system to complement a ventilation function, which is not provided in the VRF system. It is a system that supplies air indoors by heat treatment of outdoor air through the OAP, as a number of indoor units and OAPs are connected to the outdoor unit in the VRF system simultaneously. This paper conducted experiments with regard to changes in efficiency and the cooling capabilities of each unit and system according to supply air temperature in the OAP using a multicalorimeter. Based on these results, an algorithm that controlled the temperature of the supply air in the OAP was developed considering indoor and outdoor temperatures and humidity. The algorithm was applied in the test building to verify the effects of energy reduction and the effects on indoor temperature and humidity. Loads were then changed by adjusting the number of conditioned rooms to verify the effect of the algorithm according to various load conditions. In the field test results, the energy reduction effect was approximately 15–17% at a 100% load, and 4–20% at a 75% load. However, no significant effects were shown at a 50% load. The indoor temperature and humidity reached a comfortable level.

  9. Homology search with binary and trinary scoring matrices.

    PubMed

    Smith, Scott F

    2006-01-01

    Protein homology search can be accelerated with the use of bit-parallel algorithms in conjunction with constraints on the values contained in the scoring matrices. Trinary scoring matrices (containing only the values -1, 0, and 1) allow for significant acceleration without significant reduction in the receiver operating characteristic (ROC) score of a Smith-Waterman search. Binary scoring matrices (containing the values 0 and 1) result in some reduction in ROC score, but result in even more acceleration. Binary scoring matrices and five-bit saturating scores can be used for fast prefilters to the Smith-Waterman algorithm.
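
    The speed-up itself comes from bit-parallel operations, but the scoring constraint is easy to illustrate on its own. The sketch below is a plain, scalar Smith-Waterman local alignment restricted to scores in {-1, 0, +1}, with an ad hoc residue grouping chosen purely for illustration; it is not the bit-parallel implementation described above.

        def trinary_score(a, b):
            # Substitution scores limited to -1, 0, +1 (the "trinary" constraint).
            if a == b:
                return 1
            similar = {frozenset("ILV"), frozenset("KR"), frozenset("DE"), frozenset("FY")}
            return 0 if any({a, b} <= g for g in similar) else -1

        def smith_waterman(seq1, seq2, gap=-1):
            # Standard local-alignment dynamic programming; returns the best local score.
            rows, cols = len(seq1) + 1, len(seq2) + 1
            H = [[0] * cols for _ in range(rows)]
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    diag = H[i - 1][j - 1] + trinary_score(seq1[i - 1], seq2[j - 1])
                    H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
                    best = max(best, H[i][j])
            return best

        print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))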

  10. A multi-level solution algorithm for steady-state Markov chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham; Leutenegger, Scott T.

    1993-01-01

    A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.

  11. Parallel-vector unsymmetric Eigen-Solver on high performance computers

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Jiangning, Qin

    1993-01-01

    The popular QR algorithm for solving all eigenvalues of an unsymmetric matrix is reviewed. Among the basic components in the QR algorithm, it was concluded from this study, that the reduction of an unsymmetric matrix to a Hessenberg form (before applying the QR algorithm itself) can be done effectively by exploiting the vector speed and multiple processors offered by modern high-performance computers. Numerical examples of several test cases have indicated that the proposed parallel-vector algorithm for converting a given unsymmetric matrix to a Hessenberg form offers computational advantages over the existing algorithm. The time saving obtained by the proposed methods is increased as the problem size increased.
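
    The reduction step itself is available in standard libraries; the serial sketch below (not the parallel-vector variant described above) shows the Hessenberg factorization that precedes QR iteration and checks that it is a similarity transform.

        import numpy as np
        from scipy.linalg import hessenberg

        A = np.random.default_rng(0).standard_normal((200, 200))   # unsymmetric test matrix
        H, Q = hessenberg(A, calc_q=True)                          # A = Q @ H @ Q.T, H upper Hessenberg
        print(np.allclose(Q @ H @ Q.T, A))                         # similarity is preserved
        print(np.linalg.eigvals(H)[:3])                            # same spectrum as A, cheaper QR steps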

  12. Design and rationale of Heart and Lung Failure – Pediatric INsulin Titration Trial (HALF-PINT): A randomized clinical trial of tight glycemic control in hyperglycemic critically ill children☆

    PubMed Central

    Agus, Michael SD; Hirshberg, Ellie; Srinivasan, Vijay; Faustino, Edward Vincent; Luckett, Peter M; Curley, Martha AQ; Alexander, Jamin; Asaro, Lisa A; Coughlin-Wells, Kerry; Duva, Donna; French, Jaclyn; Hasbani, Natalie; Sisko, Martha T; Soto-Rivera, Carmen L; Steil, Garry; Wypij, David; Nadkarni, Vinay M

    2017-01-01

    Objectives Test whether hyperglycemic critically ill children with cardiovascular and/or respiratory failure experience more ICU-free days when assigned to tight glycemic control with a normoglycemic versus hyperglycemic blood glucose target range. Design Multi-center randomized clinical trial. Setting Pediatric ICUs at 35 academic hospitals. Patients Children aged 2 weeks to 17 years receiving inotropic support and/or acute mechanical ventilation, excluding cardiac surgical patients. Interventions Patients receive intravenous insulin titrated to either 80–110 mg/dL (4.4–6.1 mmol/L) or 150–180 mg/dL (8.3–10.0 mmol/L). The intervention begins upon confirmed hyperglycemia and ends when the patient meets study-defined ICU discharge criteria or after 28 days. Continuous glucose monitoring, a minimum glucose infusion, and an explicit insulin infusion algorithm are deployed to achieve the BG targets while minimizing hypoglycemia risk. Measurements and main results The primary outcome is ICU-free days (equivalent to 28-day hospital mortality-adjusted ICU length of stay). Secondary outcomes include 90-day hospital mortality, organ dysfunction scores, ventilator-free days, nosocomial infection rate, neurodevelopmental outcomes, and nursing workload. To detect an increase of 1.25 ICU-free days (corresponding to a 20% relative reduction in 28-day hospital mortality and a one-day reduction in ICU length of stay), 1414 patients are needed for 80% power using a two-sided 0.05 level test. Conclusions This trial tests whether hyperglycemic critically ill children randomized to 80–110 mg/dL benefit more than those randomized to 150–180 mg/dL. This study implements validated bedside support tools including continuous glucose monitoring and a computerized algorithm to enhance patient safety and ensure reproducible bedside decision-making in achieving glycemic control. PMID:28042054

  13. Fraction Reduction through Continued Fractions

    ERIC Educational Resources Information Center

    Carley, Holly

    2011-01-01

    This article presents a method of reducing fractions without factoring. The ideas presented may be useful as a project for motivated students in an undergraduate number theory course. The discussion is related to the Euclidean Algorithm and its variations may lead to projects or early examples involving efficiency of an algorithm.
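
    A minimal sketch of the idea (one of several variations the article may discuss): expand a/b as a continued fraction via the Euclidean algorithm, then rebuild the final convergent, which is the fraction in lowest terms, with no factoring involved.

        def continued_fraction(a, b):
            # Partial quotients of a/b from the Euclidean algorithm.
            terms = []
            while b:
                q, r = divmod(a, b)
                terms.append(q)
                a, b = b, r
            return terms

        def reduce_fraction(a, b):
            terms = continued_fraction(a, b)
            num, den = terms[-1], 1            # rebuild the convergent from the bottom up
            for q in reversed(terms[:-1]):
                num, den = q * num + den, num
            return num, den

        print(reduce_fraction(1071, 462))      # (51, 22), since gcd(1071, 462) = 21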

  14. CW-SSIM kernel based random forest for image classification

    NASA Astrophysics Data System (ADS)

    Fan, Guangzhe; Wang, Zhou; Wang, Jiheng

    2010-07-01

    Complex wavelet structural similarity (CW-SSIM) index has been proposed as a powerful image similarity metric that is robust to translation, scaling and rotation of images, but how to employ it in image classification applications has not been deeply investigated. In this paper, we incorporate CW-SSIM as a kernel function into a random forest learning algorithm. This leads to a novel image classification approach that does not require a feature extraction or dimension reduction stage at the front end. We use hand-written digit recognition as an example to demonstrate our algorithm. We compare the performance of the proposed approach with random forest learning based on other kernels, including the widely adopted Gaussian and the inner product kernels. Empirical evidences show that the proposed method is superior in its classification power. We also compared our proposed approach with the direct random forest method without kernel and the popular kernel-learning method support vector machine. Our test results based on both simulated and realworld data suggest that the proposed approach works superior to traditional methods without the feature selection procedure.

  15. Comparison of neural network applications for channel assignment in cellular TDMA networks and dynamically sectored PCS networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    1997-04-01

    The use of artificial neural networks (NNs) to address the channel assignment problem (CAP) for cellular time-division multiple access and code-division multiple access networks has previously been investigated by this author and many others. The investigations to date have been based on a hexagonal cell structure established by omnidirectional antennas at the base stations. No account was taken of the use of spatial isolation enabled by directional antennas to reduce interference between mobiles. Any reduction in interference translates into increased capacity and consequently alters the performance of the NNs. Previous studies have sought to improve the performance of Hopfield- Tank network algorithms and self-organizing feature map algorithms applied primarily to static channel assignment (SCA) for cellular networks that handle uniformly distributed, stationary traffic in each cell for a single type of service. The resulting algorithms minimize energy functions representing interference constraint and ad hoc conditions that promote convergence to optimal solutions. While the structures of the derived neural network algorithms (NNAs) offer the potential advantages of inherent parallelism and adaptability to changing system conditions, this potential has yet to be fulfilled the CAP for emerging mobile networks. The next-generation communication infrastructures must accommodate dynamic operating conditions. Macrocell topologies are being refined to microcells and picocells that can be dynamically sectored by adaptively controlled, directional antennas and programmable transceivers. These networks must support the time-varying demands for personal communication services (PCS) that simultaneously carry voice, data and video and, thus, require new dynamic channel assignment (DCA) algorithms. This paper examines the impact of dynamic cell sectoring and geometric conditioning on NNAs developed for SCA in omnicell networks with stationary traffic to improve the metrics of convergence rate and call blocking. Genetic algorithms (GAs) are also considered in PCS networks as a means to overcome the known weakness of Hopfield NNAs in determining global minima. The resulting GAs for DCA in PCS networks are compared to improved DCA algorithms based on Hopfield NNs for stationary cellular networks. Algorithm performance is compared on the basis of rate of convergence, blocking probability, analytic complexity, and parametric sensitivity to transient traffic demands and channel interference.

  16. An opposite view data replacement approach for reducing artifacts due to metallic dental objects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yazdi, Mehran; Lari, Meghdad Asadi; Bernier, Gaston

    Purpose: To present a conceptually new method for metal artifact reduction (MAR) that can be used on patients with multiple objects within the scan plane that are also small in size along the longitudinal (scanning) direction, such as dental fillings. Methods: The proposed algorithm, named opposite view replacement, achieves MAR by first detecting the projection data affected by metal objects and then replacing the affected projections by the corresponding opposite view projections, which are not affected by metal objects. The authors also applied a fading process to avoid producing any discontinuities in the boundary of the affected projection areas in the sinogram. A skull phantom with and without a variety of dental metal inserts was made to extract the performance metric of the algorithm. A head and neck case, typical of IMRT planning, was also tested. Results: The reconstructed CT images based on this new replacement scheme show a significant improvement in image quality for patients with metallic dental objects compared to the MAR algorithms based on the interpolation scheme. For the phantom, the authors showed that the artifact reduction algorithm can efficiently recover the CT numbers in the area next to the metallic objects. Conclusions: The authors presented a new and efficient method for artifact reduction due to multiple small metallic objects. The obtained results from phantoms and clinical cases fully validate the proposed approach.
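
    A minimal sketch of the replacement step, assuming a parallel-beam sinogram whose rows evenly span 360 degrees and a precomputed metal trace mask; the fading at trace boundaries described above is omitted.

        import numpy as np

        def opposite_view_replacement(sinogram, metal_mask):
            # sinogram, metal_mask: (n_angles, n_detectors); angles evenly cover 360 degrees.
            n_angles = sinogram.shape[0]
            half = n_angles // 2
            # The view at angle + 180 degrees sees the same rays with the detector axis reversed.
            opposite = np.roll(sinogram, -half, axis=0)[:, ::-1]
            repaired = sinogram.copy()
            repaired[metal_mask] = opposite[metal_mask]   # swap in the unaffected opposite views
            return repaired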

  17. Coherent structure dynamics and identification during the multistage transitions of polymeric turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Zhu, Lu; Xi, Li

    2018-04-01

    Drag reduction induced by polymer additives in wall-bounded turbulence has been studied for decades. A small dosage of polymer additives can drastically reduce the energy dissipation in turbulent flows and alter the flow structures at the same time. As the polymer-induced fluid elasticity increases, drag reduction goes through several stages of transition with drastically different flow statistics. While much attention in the area of polymer-turbulence interactions has been focused on the onset and the asymptotic stage of maximum drag reduction, the transition between the two intermediate stages – low-extent drag reduction (LDR) and high-extent drag reduction (HDR) – likely reflects a qualitative change in the underlying vortex dynamics according to our recent study [1]. In particular, we proposed that polymers start to suppress the lift-up and bursting of vortices at HDR, leading to the localization of turbulent structures. To test our hypothesis, a statistically robust conditional sampling algorithm, based on Jenong and Hussain [2]’s work, was adopted in this study. The comparison of conditional eddies between the Newtonian and the highly elastic turbulence shows that (i) the lifting “strength” of vortices is suppressed by polymers as reflected by the decreasing lifting angle of the conditional eddy and (ii) the curvature of vortices is also eliminated as the orientation of the head of the conditional eddy changes. In summary, the results of conditional sampling support our hypothesis of polymer-turbulence interactions during the LDR-HDR transition.

  18. Binaural noise reduction via cue-preserving MMSE filter and adaptive-blocking-based noise PSD estimation

    NASA Astrophysics Data System (ADS)

    Azarpour, Masoumeh; Enzner, Gerald

    2017-12-01

    Binaural noise reduction, with applications for instance in hearing aids, has been a very significant challenge. This task relates to the optimal utilization of the available microphone signals for the estimation of the ambient noise characteristics and for the optimal filtering algorithm to separate the desired speech from the noise. The additional requirements of low computational complexity and low latency further complicate the design. A particular challenge results from the desired reconstruction of binaural speech input with spatial cue preservation. The latter essentially diminishes the utility of multiple-input/single-output filter-and-sum techniques such as beamforming. In this paper, we propose a comprehensive and effective signal processing configuration with which most of the aforementioned criteria can be met suitably. This relates especially to the requirement of efficient online adaptive processing for noise estimation and optimal filtering while preserving the binaural cues. Regarding noise estimation, we consider three different architectures: interaural (ITF), cross-relation (CR), and principal-component (PCA) target blocking. An objective comparison with two other noise PSD estimation algorithms demonstrates the superiority of the blocking-based noise estimators, especially the CR-based and ITF-based blocking architectures. Moreover, we present a new noise reduction filter based on minimum mean-square error (MMSE), which belongs to the class of common gain filters, hence being rigorous in terms of spatial cue preservation but also efficient and competitive for the acoustic noise reduction task. A formal real-time subjective listening test procedure is also developed in this paper. The proposed listening test enables a real-time assessment of the proposed computationally efficient noise reduction algorithms in a realistic acoustic environment, e.g., considering time-varying room impulse responses and the Lombard effect. The listening test outcome reveals that the signals processed by the blocking-based algorithms are significantly preferred over the noisy signal in terms of instantaneous noise attenuation. Furthermore, the listening test data analysis confirms the conclusions drawn based on the objective evaluation.

  19. Extreme-Scale Algorithms & Software Resilience (EASIR) Architecture-Aware Algorithms for Scalable Performance and Resilience on Heterogeneous Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demmel, James W.

    This project addresses both communication-avoiding algorithms, and reproducible floating-point computation. Communication, i.e. moving data, either between levels of memory or processors over a network, is much more expensive per operation than arithmetic (measured in time or energy), so we seek algorithms that greatly reduce communication. We developed many new algorithms for both dense and sparse, and both direct and iterative linear algebra, attaining new communication lower bounds, and getting large speedups in many cases. We also extended this work in several ways: (1) We minimize writes separately from reads, since writes may be much more expensive than reads on emerging memory technologies, like Flash, sometimes doing asymptotically fewer writes than reads. (2) We extend the lower bounds and optimal algorithms to arbitrary algorithms that may be expressed as perfectly nested loops accessing arrays, where the array subscripts may be arbitrary affine functions of the loop indices (eg A(i), B(i,j+k, k+3*m-7, …) etc.). (3) We extend our communication-avoiding approach to some machine learning algorithms, such as support vector machines. This work has won a number of awards. We also address reproducible floating-point computation. We define reproducibility to mean getting bitwise identical results from multiple runs of the same program, perhaps with different hardware resources or other changes that should ideally not change the answer. Many users depend on reproducibility for debugging or correctness. However, dynamic scheduling of parallel computing resources, combined with nonassociativity of floating point addition, makes attaining reproducibility a challenge even for simple operations like summing a vector of numbers, or more complicated operations like the Basic Linear Algebra Subprograms (BLAS). We describe an algorithm that computes a reproducible sum of floating point numbers, independent of the order of summation. The algorithm depends only on a subset of the IEEE Floating Point Standard 754-2008, uses just 6 words to represent a “reproducible accumulator,” and requires just one read-only pass over the data, or one reduction in parallel. New instructions based on this work are being considered for inclusion in the future IEEE 754-2018 floating-point standard, and new reproducible BLAS are being considered for the next version of the BLAS standard.
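
    The report's reproducible accumulator is not reproduced here; as a simpler, related illustration of why non-associativity matters, the sketch below compares naive and Kahan-compensated summation. Compensation reduces, but does not eliminate, the dependence of the result on summation order.

        import math

        def kahan_sum(values):
            # Compensated summation: carry the low-order bits lost at each addition.
            total, carry = 0.0, 0.0
            for x in values:
                y = x - carry
                t = total + y
                carry = (t - total) - y      # rounding error introduced when forming t
                total = t
            return total

        data = [0.1] * 1_000_000
        print(sum(data))                     # naive sum accumulates rounding error
        print(kahan_sum(data))               # compensated sum is much closer to exact
        print(math.fsum(data))               # exactly rounded reference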

  20. An Overview of a Trajectory-Based Solution for En Route and Terminal Area Self-Spacing: Fifth Edition

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S.

    2015-01-01

    This paper presents an overview of the fifth revision to an algorithm specifically designed to support NASA's Airborne Precision Spacing concept. This algorithm is referred to as the Airborne Spacing for Terminal Arrival Routes version 12 (ASTAR12). This airborne self-spacing concept is trajectory-based, allowing for spacing operations prior to the aircraft being on a common path. Because this algorithm is trajectory-based, it also has the inherent ability to support required-time-of- arrival (RTA) operations. This algorithm was also designed specifically to support a standalone, non-integrated implementation in the spacing aircraft. This current revision to the algorithm includes a ground speed feedback term to compensate for slower than expected traffic aircraft speeds based on the accepted air traffic control tendency to slow aircraft below the nominal arrival speeds when they are farther from the airport.

  1. Data Reduction of Jittered Infrared Images Using the ORAC Pipeline

    NASA Astrophysics Data System (ADS)

    Currie, Malcolm; Wright, Gillian; Bridger, Alan; Economou, Frossie

    We relate our experiences using the ORAC data reduction pipeline for jittered images of stars and galaxies. The reduction recipes currently combine applications from several Starlink packages with intelligent Perl recipes to cater to UKIRT data. We describe the recipes and some of the algorithms used, and compare the quality of the resultant mosaics and photometry with the existing facilities.

  2. Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.

    PubMed

    Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo

    2015-08-01

    Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.

  3. Optimizing support vector machine learning for semi-arid vegetation mapping by using clustering analysis

    NASA Astrophysics Data System (ADS)

    Su, Lihong

    In remote sensing communities, support vector machine (SVM) learning has recently received increasing attention. SVM learning usually requires large memory and enormous amounts of computation time on large training sets. According to SVM algorithms, the SVM classification decision function is fully determined by support vectors, which compose a subset of the training sets. In this regard, a solution to optimize SVM learning is to efficiently reduce training sets. In this paper, a data reduction method based on agglomerative hierarchical clustering is proposed to obtain smaller training sets for SVM learning. Using a multiple angle remote sensing dataset of a semi-arid region, the effectiveness of the proposed method is evaluated by classification experiments with a series of reduced training sets. The experiments show that there is no loss of SVM accuracy when the original training set is reduced to 34% using the proposed approach. Maximum likelihood classification (MLC) also is applied on the reduced training sets. The results show that MLC can also maintain the classification accuracy. This implies that the most informative data instances can be retained by this approach.
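
    A minimal sketch of the reduction idea on synthetic data (not the multi-angle remote sensing set used above): cluster each class with agglomerative hierarchical clustering, keep one representative per cluster, and compare SVM accuracy before and after the reduction.

        import numpy as np
        from sklearn.cluster import AgglomerativeClustering
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        def reduce_per_class(X, y, n_clusters=100):
            Xr, yr = [], []
            for label in np.unique(y):
                Xc = X[y == label]
                cl = AgglomerativeClustering(n_clusters=n_clusters).fit(Xc)
                for c in range(n_clusters):
                    # Keep the member closest to the cluster mean as the representative.
                    members = Xc[cl.labels_ == c]
                    centre = members.mean(axis=0)
                    Xr.append(members[np.argmin(((members - centre) ** 2).sum(axis=1))])
                    yr.append(label)
            return np.array(Xr), np.array(yr)

        X_small, y_small = reduce_per_class(X_tr, y_tr)
        print(SVC().fit(X_tr, y_tr).score(X_te, y_te),        # full training set
              SVC().fit(X_small, y_small).score(X_te, y_te))  # reduced training set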

  4. Does provider adherence to a treatment guideline change clinical outcomes for patients with bipolar disorder? Results from the Texas Medication Algorithm Project.

    PubMed

    Dennehy, Ellen B; Suppes, Trisha; Rush, A John; Miller, Alexander L; Trivedi, Madhukar H; Crismon, M Lynn; Carmody, Thomas J; Kashner, T Michael

    2005-12-01

    Despite increasing adoption of clinical practice guidelines in psychiatry, there is little measurement of provider implementation of these recommendations, and the resulting impact on clinical outcomes. The current study describes one effort to measure these relationships in a cohort of public sector out-patients with bipolar disorder. Participants were enrolled in the algorithm intervention of the Texas Medication Algorithm Project (TMAP). Study methods and the adherence scoring algorithm have been described elsewhere. The current paper addresses the relationships between patient characteristics, provider experience with the algorithm, provider adherence, and clinical outcomes. Measurement of provider adherence includes evaluation of visit frequency, medication choice and dosing, and response to patient symptoms. An exploratory composite 'adherence by visit' score was developed for these analyses. A total of 1948 visits from 141 subjects were evaluated, and utilized a two-stage declining effects model. Providers with more experience using the algorithm tended to adhere less to treatment recommendations. Few patient factors significantly impacted provider adherence. Increased adherence to algorithm recommendations was associated with larger decreases in overall psychiatric symptoms and depressive symptoms over time, but did not impact either immediate or long-term reductions in manic symptoms. Greater provider adherence to treatment guideline recommendations was associated with greater reductions in depressive symptoms and overall psychiatric symptoms over time. Additional research is needed to refine measurement and to further clarify these relationships.

  5. LYDIAN: An Extensible Educational Animation Environment for Distributed Algorithms

    ERIC Educational Resources Information Center

    Koldehofe, Boris; Papatriantafilou, Marina; Tsigas, Philippas

    2006-01-01

    LYDIAN is an environment to support the teaching and learning of distributed algorithms. It provides a collection of distributed algorithms as well as continuous animations. Users can combine algorithms and animations with arbitrary network structures defining the interconnection and behavior of the distributed algorithm. Further, it facilitates…

  6. Development of Educational Support System for Algorithm using Flowchart

    NASA Astrophysics Data System (ADS)

    Ohchi, Masashi; Aoki, Noriyuki; Furukawa, Tatsuya; Takayama, Kanta

    Information technology has become indispensable for business and industrial development. However, the shortage of software developers has become a social problem. To address it, environments for learning algorithms and programming languages need to be developed and deployed. In this paper, we describe an algorithm-learning support system for programmers based on flowcharts. Because the proposed system uses a graphical user interface (GUI), it becomes easier for a programmer to understand the algorithm in a program.

  7. Prediction of Warfarin Dose Reductions in Puerto Rican Patients, Based on Combinatorial CYP2C9 and VKORC1 Genotypes

    PubMed Central

    Valentin, Isa Ivette; Vazquez, Joan; Rivera-Miranda, Giselle; Seip, Richard L; Velez, Meredith; Kocherla, Mohan; Bogaard, Kali; Cruz-Gonzalez, Iadelisse; Cadilla, Carmen L; Renta, Jessica Y; Feliu, Juan F; Ramos, Alga S; Alejandro-Cowan, Yirelia; Gorowski, Krystyna; Ruaño, Gualberto; Duconge, Jorge

    2012-01-01

    BACKGROUND The influence of CYP2C9 and VKORC1 polymorphisms on warfarin dose has been investigated in white, Asian, and African American populations but not in Puerto Rican Hispanic patients. OBJECTIVE To test the associations between genotypes, international normalized ratio (INR) measurements, and warfarin dosing and gauge the impact of these polymorphisms on warfarin dose, using a published algorithm. METHODS A retrospective warfarin pharmacogenetic association study in 106 Puerto Rican patients was performed. DNA samples from patients were assayed for 12 variants in both CYP2C9 and VKORC1 loci by HILOmet PhyzioType assay. Demographic and clinical nongenetic data were retrospectively collected from medical records. Allele and genotype frequencies were determined and Hardy-Weinberg equilibrium (HWE) was tested. RESULTS Sixty-nine percent of patients were carriers of at least one polymorphism in either the CYP2C9 or the VKORC1 gene. Double, triple, and quadruple carriers accounted for 22%, 5%, and 1%, respectively. No significant departure from HWE was found. Among patients with a given CYP2C9 genotype, warfarin dose requirements declined from GG to AA haplotypes; whereas, within each VKORC1 haplotype, the dose decreased as the number of CYP2C9 variants increased. The presence of these loss-of-function alleles was associated with more out-of-range INR measurements (OR = 1.38) but not with significant INR >4 during the initiation phase. Analyses based on a published pharmacogenetic algorithm predicted dose reductions of up to 4.9 mg/day in carriers and provided better dose prediction in an extreme subgroup of highly sensitive patients, but also suggested the need to improve predictability by developing a customized model for use in Puerto Rican patients. CONCLUSIONS This study laid important groundwork for supporting a prospective pharmacogenetic trial in Puerto Ricans to detect the benefits of incorporating relevant genomic information into a customized DNA-guided warfarin dosing algorithm. PMID:22274142

  8. Prediction of warfarin dose reductions in Puerto Rican patients, based on combinatorial CYP2C9 and VKORC1 genotypes.

    PubMed

    Valentin, Isa Ivette; Vazquez, Joan; Rivera-Miranda, Giselle; Seip, Richard L; Velez, Meredith; Kocherla, Mohan; Bogaard, Kali; Cruz-Gonzalez, Iadelisse; Cadilla, Carmen L; Renta, Jessica Y; Feliu, Juan F; Ramos, Alga S; Alejandro-Cowan, Yirelia; Gorowski, Krystyna; Ruaño, Gualberto; Duconge, Jorge

    2012-02-01

    The influence of CYP2C9 and VKORC1 polymorphisms on warfarin dose has been investigated in white, Asian, and African American populations but not in Puerto Rican Hispanic patients. To test the associations between genotypes, international normalized ratio (INR) measurements, and warfarin dosing and gauge the impact of these polymorphisms on warfarin dose, using a published algorithm. A retrospective warfarin pharmacogenetic association study in 106 Puerto Rican patients was performed. DNA samples from patients were assayed for 12 variants in both CYP2C9 and VKORC1 loci by HILOmet PhyzioType assay. Demographic and clinical nongenetic data were retrospectively collected from medical records. Allele and genotype frequencies were determined and Hardy-Weinberg equilibrium (HWE) was tested. Sixty-nine percent of patients were carriers of at least one polymorphism in either the CYP2C9 or the VKORC1 gene. Double, triple, and quadruple carriers accounted for 22%, 5%, and 1%, respectively. No significant departure from HWE was found. Among patients with a given CYP2C9 genotype, warfarin dose requirements declined from GG to AA haplotypes; whereas, within each VKORC1 haplotype, the dose decreased as the number of CYP2C9 variants increased. The presence of these loss-of-function alleles was associated with more out-of-range INR measurements (OR = 1.38) but not with significant INR >4 during the initiation phase. Analyses based on a published pharmacogenetic algorithm predicted dose reductions of up to 4.9 mg/day in carriers and provided better dose prediction in an extreme subgroup of highly sensitive patients, but also suggested the need to improve predictability by developing a customized model for use in Puerto Rican patients. This study laid important groundwork for supporting a prospective pharmacogenetic trial in Puerto Ricans to detect the benefits of incorporating relevant genomic information into a customized DNA-guided warfarin dosing algorithm.

  9. Relative Risk Reduction as a Metric to Standardize Effect Size for Public Health Interventions for Translational Research: Methods and Applications

    PubMed Central

    Mirzazadeh, A; Malekinejad, M; Kahn, JG

    2018-01-01

    Objective Heterogeneity of effect measures in intervention studies undermines the use of evidence to inform policy. Our objective was to develop a comprehensive algorithm to convert all types of effect measures to one standard metric, relative risk reduction (RRR). Study Design and Setting This work was conducted to facilitate synthesis of published intervention effects for our epidemic modeling of the health impact of HIV Testing and Counseling (HTC). We designed and implemented an algorithm to transform varied effect measures to RRR, representing the proportionate reduction in undesirable outcomes. Results Our extraction of 55 HTC studies identified 473 effect measures representing unique combinations of intervention-outcome-population characteristics, using five outcome metrics: pre-post proportion (70.6%), odds ratio (14.0%), mean difference (10.2%), risk ratio (4.4%), and RRR (0.9%). Outcomes were expressed as both desirable (29.5%, e.g., consistent condom use) and undesirable (70.5% e.g., inconsistent condom use). Using four examples, we demonstrate our algorithm for converting varied effect measures to RRR, and provide the conceptual basis for advantages of RRR over other metrics. Conclusion Our review of the literature suggests that RRR, an easily understood and useful metric to convey risk reduction associated with an intervention, is underutilized by original and review studies. PMID:25726522
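
    Two of the conversions are straightforward to sketch. The snippet below (illustrative only, for undesirable outcomes) converts a pre-post proportion pair and an odds ratio with a known baseline risk to RRR; the paper's full algorithm also covers desirable outcomes, risk ratios, and mean differences.

        def rrr_from_pre_post(p_pre, p_post):
            # Proportion with the undesirable outcome before/after the intervention.
            return 1.0 - p_post / p_pre

        def rrr_from_odds_ratio(odds_ratio, p_control):
            # Convert an odds ratio to a risk ratio (a common approximation that
            # needs the baseline risk p_control), then to relative risk reduction.
            risk_ratio = odds_ratio / (1.0 - p_control + p_control * odds_ratio)
            return 1.0 - risk_ratio

        print(rrr_from_pre_post(0.40, 0.25))        # 0.375: a 37.5% reduction
        print(rrr_from_odds_ratio(0.60, 0.40))      # ~0.29 with a 40% baseline risk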

  10. Noise reduction technologies implemented in head-worn preprocessors for improving cochlear implant performance in reverberant noise fields.

    PubMed

    Chung, King; Nelson, Lance; Teske, Melissa

    2012-09-01

    The purpose of this study was to investigate whether a multichannel adaptive directional microphone and a modulation-based noise reduction algorithm could enhance cochlear implant performance in reverberant noise fields. A hearing aid was modified to output electrical signals (ePreprocessor) and a cochlear implant speech processor was modified to receive electrical signals (eProcessor). The ePreprocessor was programmed to flat frequency response and linear amplification. Cochlear implant listeners wore the ePreprocessor-eProcessor system in three reverberant noise fields: 1) one noise source with variable locations; 2) three noise sources with variable locations; and 3) eight evenly spaced noise sources from 0° to 360°. Listeners' speech recognition scores were tested when the ePreprocessor was programmed to omnidirectional microphone (OMNI), omnidirectional microphone plus noise reduction algorithm (OMNI + NR), and adaptive directional microphone plus noise reduction algorithm (ADM + NR). They were also tested with their own cochlear implant speech processor (CI_OMNI) in the three noise fields. Additionally, listeners rated overall sound quality preferences on recordings made in the noise fields. Results indicated that ADM+NR produced the highest speech recognition scores and the most preferable rating in all noise fields. Factors requiring attention in the hearing aid-cochlear implant integration process are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. An efficient, large-scale, non-lattice-detection algorithm for exhaustive structural auditing of biomedical ontologies.

    PubMed

    Zhang, Guo-Qiang; Xing, Guangming; Cui, Licong

    2018-04-01

    One of the basic challenges in developing structural methods for systematic auditing of the quality of biomedical ontologies is the computational cost usually involved in exhaustive sub-graph analysis. We introduce ANT-LCA, a new algorithm for computing all non-trivial lowest common ancestors (LCA) of each pair of concepts in the hierarchical order induced by an ontology. The computation of LCA is a fundamental step in the non-lattice approach to ontology quality assurance. Distinct from existing approaches, ANT-LCA only computes LCAs for non-trivial pairs, those having at least one common ancestor. To skip all trivial pairs that may be of no practical interest, ANT-LCA employs a simple but innovative algorithmic strategy combining topological order and dynamic programming to keep track of non-trivial pairs. We provide correctness proofs and demonstrate a substantial reduction in computational time for the two largest biomedical ontologies: SNOMED CT and Gene Ontology (GO). ANT-LCA achieved an average computation time of 30 and 3 sec per version for SNOMED CT and GO, respectively, about 2 orders of magnitude faster than the best known approaches. Our algorithm overcomes a fundamental computational barrier in sub-graph based structural analysis of large ontological systems. It enables the implementation of a new breed of structural auditing methods that not only identifies potential problematic areas, but also automatically suggests changes to fix the issues. Such structural auditing methods can lead to more effective tools supporting ontology quality assurance work. Copyright © 2018 Elsevier Inc. All rights reserved.
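
    The sketch below illustrates the underlying computation on a toy DAG (edges point from child to parents), skipping trivial pairs exactly as described; it does not include the dynamic programming that lets ANT-LCA scale to SNOMED CT or GO.

        from itertools import combinations

        def ancestors(graph):
            # graph: {node: set(parents)}. Returns {node: set of all strict ancestors}.
            anc, order, seen = {}, [], set()

            def visit(n):                      # depth-first topological order (parents first)
                if n in seen:
                    return
                seen.add(n)
                for p in graph.get(n, ()):
                    visit(p)
                order.append(n)

            for n in graph:
                visit(n)
            for n in order:                    # parents are finished before their children
                anc[n] = set()
                for p in graph.get(n, ()):
                    anc[n] |= {p} | anc[p]
            return anc

        def nontrivial_lcas(graph):
            anc = ancestors(graph)
            result = {}
            for a, b in combinations(graph, 2):
                common = anc[a] & anc[b]
                if not common:                 # trivial pair: skipped, as in ANT-LCA
                    continue
                # Lowest = common ancestors with no other common ancestor below them.
                result[(a, b)] = {c for c in common
                                  if not any(c in anc[d] for d in common)}
            return result

        toy = {"root": set(), "x": {"root"}, "y": {"root"}, "a": {"x", "y"}, "b": {"x", "y"}}
        print(nontrivial_lcas(toy)[("a", "b")])    # {'x', 'y'}: two LCAs, a non-lattice pair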

  12. Efficient Bit-to-Symbol Likelihood Mappings

    NASA Technical Reports Server (NTRS)

    Moision, Bruce E.; Nakashima, Michael A.

    2010-01-01

    This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8- percent reduction in overall area relative to the prior design.

  13. A General Exponential Framework for Dimensionality Reduction.

    PubMed

    Wang, Su-Jing; Yan, Shuicheng; Yang, Jian; Zhou, Chun-Guang; Fu, Xiaolan

    2014-02-01

    As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low dimensional representations from high dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the size of neighbors; 2) the algorithm encounters the well known small sample size (SSS) problem; and 3) the algorithm de-emphasizes small distance pairs. To address these issues, here we propose exponential embedding using matrix exponential and provide a general framework for dimensionality reduction. In the framework, the matrix exponential can be roughly interpreted by the random walk over the feature similarity matrix, and thus is more robust. The positive definite property of matrix exponential deals with the SSS problem. The behavior of the decay function of exponential embedding is more significant in emphasizing small distance pairs. Under this framework, we apply matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal fisher analysis. Experiments conducted on the synthesized data, UCI, and the Georgia Tech face database show that the proposed new framework can well address the issues mentioned above.
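
    A rough sketch of the central step on toy data (not the paper's extensions of LPP, UDP, or MFA): build a pairwise similarity matrix, take its matrix exponential, which is symmetric positive definite, and embed with its leading eigenvectors.

        import numpy as np
        from scipy.linalg import expm
        from scipy.spatial.distance import cdist

        rng = np.random.default_rng(0)
        X = rng.standard_normal((100, 5))          # toy high-dimensional samples

        W = np.exp(-cdist(X, X) ** 2)              # pairwise similarity (heat kernel, sigma = 1)
        E = expm(W)                                # matrix exponential of the similarity matrix
        vals, vecs = np.linalg.eigh(E)             # E is symmetric, so eigh applies
        embedding = vecs[:, -2:]                   # keep the two leading eigenvectors
        print(embedding.shape)                     # (100, 2)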

  14. A case study of view-factor rectification procedures for diffuse-gray radiation enclosure computations

    NASA Technical Reports Server (NTRS)

    Taylor, Robert P.; Luck, Rogelio

    1995-01-01

    The view factors which are used in diffuse-gray radiation enclosure calculations are often computed by approximate numerical integrations. These approximately calculated view factors will usually not satisfy the important physical constraints of reciprocity and closure. In this paper several view-factor rectification algorithms are reviewed and a rectification algorithm based on a least-squares numerical filtering scheme is proposed with both weighted and unweighted classes. A Monte-Carlo investigation is undertaken to study the propagation of view-factor and surface-area uncertainties into the heat transfer results of the diffuse-gray enclosure calculations. It is found that the weighted least-squares algorithm is vastly superior to the other rectification schemes for the reduction of the heat-flux sensitivities to view-factor uncertainties. In a sample problem, which has proven to be very sensitive to uncertainties in view factor, the heat transfer calculations with weighted least-squares rectified view factors are very good with an original view-factor matrix computed to only one-digit accuracy. All of the algorithms had roughly equivalent effects on the reduction in sensitivity to area uncertainty in this case study.

  15. Data Reduction Algorithm Using Nonnegative Matrix Factorization with Nonlinear Constraints

    NASA Astrophysics Data System (ADS)

    Sembiring, Pasukat

    2017-12-01

    Processing of data with very large dimensions has been a hot topic in recent decades. Various techniques have been proposed to extract the desired information or structure. Non-Negative Matrix Factorization (NMF), which operates on non-negative data, has become one of the popular methods for shrinking dimensions. The main strength of this method is non-negativity: the object is modelled as a combination of basic non-negative parts, which provides a physical interpretation of how the object is constructed. NMF is a dimension reduction method that has been used widely in numerous applications including computer vision, text mining, pattern recognition, and bioinformatics. The mathematical formulation of NMF is not a convex optimization problem, and various types of algorithms have been proposed to solve it. The framework of Alternating Nonnegative Least Squares (ANLS) is a block coordinate descent approach that has been proven reliable theoretically and efficient empirically. This paper proposes a new algorithm to solve the NMF problem based on the ANLS framework. The algorithm inherits the convergence property of the ANLS framework and extends it to NMF formulations with nonlinear constraints.
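
    As a point of reference, a plain ANLS iteration built on nonnegative least squares looks as follows (a minimal sketch; the modified update for nonlinear constraints proposed in the paper is not reproduced here).

        import numpy as np
        from scipy.optimize import nnls

        def anls_nmf(V, rank, iters=50, seed=0):
            # Alternate between the two NNLS subproblems: fix W, solve H; fix H, solve W.
            rng = np.random.default_rng(seed)
            m, n = V.shape
            W = rng.random((m, rank))
            H = rng.random((rank, n))
            for _ in range(iters):
                H = np.column_stack([nnls(W, V[:, j])[0] for j in range(n)])
                W = np.vstack([nnls(H.T, V[i, :])[0] for i in range(m)])
            return W, H

        V = np.abs(np.random.default_rng(1).standard_normal((30, 20)))
        W, H = anls_nmf(V, rank=5)
        print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative reconstruction error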

  16. Vectorization of transport and diffusion computations on the CDC Cyber 205

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Shumays, I.K.

    1986-01-01

    The development and testing of alternative numerical methods and computational algorithms specifically designed for the vectorization of transport and diffusion computations on a Control Data Corporation (CDC) Cyber 205 vector computer are described. Two solution methods for the discrete ordinates approximation to the transport equation are summarized and compared. Factors of 4 to 7 reduction in run times for certain large transport problems were achieved on a Cyber 205 as compared with run times on a CDC-7600. The solution of tridiagonal systems of linear equations, central to several efficient numerical methods for multidimensional diffusion computations and essential for fluid flow and other physics and engineering problems, is also dealt with. Among the methods tested, a combined odd-even cyclic reduction and modified Cholesky factorization algorithm for solving linear symmetric positive definite tridiagonal systems is found to be the most effective for these systems on a Cyber 205. For large tridiagonal systems, computation with this algorithm is an order of magnitude faster on a Cyber 205 than computation with the best algorithm for tridiagonal systems on a CDC-7600.

  17. A regularized approach for geodesic-based semisupervised multimanifold learning.

    PubMed

    Fan, Mingyu; Zhang, Xiaoqin; Lin, Zhouchen; Zhang, Zhongfei; Bao, Hujun

    2014-05-01

    Geodesic distance, as an essential measurement for data dissimilarity, has been successfully used in manifold learning. However, most geodesic distance-based manifold learning algorithms have two limitations when applied to classification: 1) class information is rarely used in computing the geodesic distances between data points on manifolds and 2) little attention has been paid to building an explicit dimension reduction mapping for extracting the discriminative information hidden in the geodesic distances. In this paper, we regard geodesic distance as a kind of kernel, which maps data from linearly inseparable space to linear separable distance space. In doing this, a new semisupervised manifold learning algorithm, namely regularized geodesic feature learning algorithm, is proposed. The method consists of three techniques: a semisupervised graph construction method, replacement of original data points with feature vectors which are built by geodesic distances, and a new semisupervised dimension reduction method for feature vectors. Experiments on the MNIST, USPS handwritten digit data sets, MIT CBCL face versus nonface data set, and an intelligent traffic data set show the effectiveness of the proposed algorithm.

  18. Energy-efficient algorithm for classification of states of wireless sensor network using machine learning methods

    NASA Astrophysics Data System (ADS)

    Yuldashev, M. N.; Vlasov, A. I.; Novikov, A. N.

    2018-05-01

    This paper focuses on the development of an energy-efficient algorithm for classification of states of a wireless sensor network using machine learning methods. The proposed algorithm reduces energy consumption by: 1) elimination of monitoring of parameters that do not affect the state of the sensor network, 2) reduction of communication sessions over the network (the data are transmitted only if their values can affect the state of the sensor network). The studies of the proposed algorithm have shown that at classification accuracy close to 100%, the number of communication sessions can be reduced by 80%.
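
    A minimal sketch of the transmission-saving rule only (the classifier and the monitored parameters are application specific, and the names below are hypothetical): a node stays silent unless a reading change could alter the predicted network state.

        def should_transmit(previous_reading, new_reading, classifier, threshold=0.05):
            # Send data only if the change could affect the classified network state.
            if abs(new_reading - previous_reading) < threshold:
                return False                              # negligible change: stay silent
            return classifier(new_reading) != classifier(previous_reading)

        classify = lambda temperature: "alarm" if temperature > 30.0 else "normal"
        print(should_transmit(24.90, 24.93, classify))    # False: saves a radio session
        print(should_transmit(29.80, 31.20, classify))    # True: the state changed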

  19. Quantum speedup of the traveling-salesman problem for bounded-degree graphs

    NASA Astrophysics Data System (ADS)

    Moylett, Dominic J.; Linden, Noah; Montanaro, Ashley

    2017-03-01

    The traveling-salesman problem is one of the most famous problems in graph theory. However, little is currently known about the extent to which quantum computers could speed up algorithms for the problem. In this paper, we prove a quadratic quantum speedup when the degree of each vertex is at most 3 by applying a quantum backtracking algorithm to a classical algorithm by Xiao and Nagamochi. We then use similar techniques to accelerate a classical algorithm for when the degree of each vertex is at most 4, before speeding up higher-degree graphs via reductions to these instances.

  20. Test Generation Algorithm for Fault Detection of Analog Circuits Based on Extreme Learning Machine

    PubMed Central

    Zhou, Jingyu; Tian, Shulin; Yang, Chenglin; Ren, Xuelong

    2014-01-01

    This paper proposes a novel test generation algorithm based on the extreme learning machine (ELM), which is cost-effective and low-risk for the analog device under test (DUT). The method uses test patterns derived from the test generation algorithm to stimulate the DUT, and then samples the DUT's output responses for fault classification and detection. The proposed ELM-based test generation algorithm contains three main innovations. First, it saves time by classifying the response space with ELM. Second, it avoids a loss of test precision when the number of impulse-response samples is reduced. Third, a new test-signal generation process and test structure are presented, both of which are simple. Finally, these improvements are confirmed experimentally. PMID:25610458
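
    For readers unfamiliar with ELM, the sketch below shows why its training is fast: the hidden layer is random and fixed, so fitting reduces to a single linear least-squares solve for the output weights. This is a generic ELM classifier under those assumptions, not the paper's test-pattern generation procedure, and the class name and hyperparameters are illustrative.

        import numpy as np

        class SimpleELM:
            # Minimal extreme learning machine: random hidden layer + least squares.

            def __init__(self, n_hidden=100, seed=0):
                self.n_hidden = n_hidden
                self.rng = np.random.default_rng(seed)

            def fit(self, X, y):
                n_features = X.shape[1]
                n_classes = int(np.max(y)) + 1
                # Random, fixed input weights and biases (never trained).
                self.W = self.rng.standard_normal((n_features, self.n_hidden))
                self.b = self.rng.standard_normal(self.n_hidden)
                H = np.tanh(X @ self.W + self.b)
                # One-hot targets; output weights from a single least-squares solve.
                T = np.eye(n_classes)[np.asarray(y, dtype=int)]
                self.beta, *_ = np.linalg.lstsq(H, T, rcond=None)
                return self

            def predict(self, X):
                H = np.tanh(X @ self.W + self.b)
                return np.argmax(H @ self.beta, axis=1)

        # Usage: SimpleELM(n_hidden=200).fit(X_train, y_train).predict(X_test)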

  1. An evaluation of noise reduction algorithms for particle-based fluid simulations in multi-scale applications

    NASA Astrophysics Data System (ADS)

    Zimoń, M. J.; Prosser, R.; Emerson, D. R.; Borg, M. K.; Bray, D. J.; Grinberg, L.; Reese, J. M.

    2016-11-01

    Filtering of particle-based simulation data can lead to reduced computational costs and enable more efficient information transfer in multi-scale modelling. This paper compares the effectiveness of various signal processing methods to reduce numerical noise and capture the structures of nano-flow systems. In addition, a novel combination of these algorithms is introduced, showing the potential of hybrid strategies to improve further the de-noising performance for time-dependent measurements. The methods were tested on velocity and density fields, obtained from simulations performed with molecular dynamics and dissipative particle dynamics. Comparisons between the algorithms are given in terms of performance, quality of the results and sensitivity to the choice of input parameters. The results provide useful insights on strategies for the analysis of particle-based data and the reduction of computational costs in obtaining ensemble solutions.
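
    As a simple example of the kind of de-noising baseline such a comparison might include, the snippet below smooths a synthetic noisy velocity profile with a Savitzky-Golay filter; the signal, window length and polynomial order are assumptions, and none of the specific filters or hybrid strategies evaluated in the paper are reproduced here.

        import numpy as np
        from scipy.signal import savgol_filter

        # Synthetic noisy velocity profile standing in for particle-averaged data.
        x = np.linspace(0.0, 1.0, 200)
        true_velocity = np.sin(2.0 * np.pi * x)
        noisy = true_velocity + 0.15 * np.random.default_rng(1).standard_normal(x.size)

        # Savitzky-Golay smoothing as one simple de-noising baseline
        # (window length and polynomial order are illustrative choices).
        smoothed = savgol_filter(noisy, window_length=21, polyorder=3)
        print("rms error before/after:",
              np.sqrt(np.mean((noisy - true_velocity) ** 2)),
              np.sqrt(np.mean((smoothed - true_velocity) ** 2)))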

  2. SAT Encoding of Unification in EL

    NASA Astrophysics Data System (ADS)

    Baader, Franz; Morawska, Barbara

    Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. In a recent paper, we have shown that unification in EL is NP-complete, and thus of a complexity that is considerably lower than in other Description Logics of comparably restricted expressive power. In this paper, we introduce a new NP-algorithm for solving unification problems in EL, which is based on a reduction to satisfiability in propositional logic (SAT). The advantage of this new algorithm is, on the one hand, that it allows us to employ highly optimized state-of-the-art SAT solvers when implementing an EL-unification algorithm. On the other hand, this reduction provides us with a proof of the fact that EL-unification is in NP that is much simpler than the one given in our previous paper on EL-unification.

  3. Knowledge-Based Scheduling of Arrival Aircraft in the Terminal Area

    NASA Technical Reports Server (NTRS)

    Krzeczowski, K. J.; Davis, T.; Erzberger, H.; Lev-Ram, Israel; Bergh, Christopher P.

    1995-01-01

    A knowledge-based method for scheduling arrival aircraft in the terminal area has been implemented and tested in real-time simulation. The scheduling system automatically sequences, assigns landing times, and assigns runways to arrival aircraft by utilizing continuous updates of aircraft radar data and controller inputs. The scheduling algorithm is driven by a knowledge base which was obtained in over two thousand hours of controller-in-the-loop real-time simulation. The knowledge base contains a series of hierarchical 'rules' and decision logic that examines both performance criteria, such as delay reduction, as well as workload reduction criteria, such as conflict avoidance. The objective of the algorithm is to devise an efficient plan to land the aircraft in a manner acceptable to the air traffic controllers. This paper describes the scheduling algorithms, gives examples of their use, and presents data regarding their potential benefits to the air traffic system.

  4. Cut set-based risk and reliability analysis for arbitrarily interconnected networks

    DOEpatents

    Wyss, Gregory D.

    2000-01-01

    Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.

  5. Knowledge-based scheduling of arrival aircraft

    NASA Technical Reports Server (NTRS)

    Krzeczowski, K.; Davis, T.; Erzberger, H.; Lev-Ram, I.; Bergh, C.

    1995-01-01

    A knowledge-based method for scheduling arrival aircraft in the terminal area has been implemented and tested in real-time simulation. The scheduling system automatically sequences, assigns landing times, and assigns runways to arrival aircraft by utilizing continuous updates of aircraft radar data and controller inputs. The scheduling algorithm is driven by a knowledge base which was obtained in over two thousand hours of controller-in-the-loop real-time simulation. The knowledge base contains a series of hierarchical 'rules' and decision logic that examines both performance criteria, such as delay reduction, as well as workload reduction criteria, such as conflict avoidance. The objective of the algorithm is to devise an efficient plan to land the aircraft in a manner acceptable to the air traffic controllers. This paper will describe the scheduling algorithms, give examples of their use, and present data regarding their potential benefits to the air traffic system.

  6. TH-AB-207A-12: CT Lung Cancer Screening and the Effects of Further Dose Reduction On CAD Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, S; Lo, P; Hoffman, J

    Purpose: CT lung screening is already performed at low doses. In this study, we investigated the effects of further dose reduction on a lung-nodule CAD detection algorithm. Methods: The original raw CT data and images from 348 patients were obtained from our local database of National Lung Screening Trial (NLST) cases. 61 patients (17.5%) had at least one nodule reported on the NLST reader forms. All scans were acquired with fixed mAs (25 for standard-sized patients, 40 for large patients) on a 64-slice scanner (Sensation 64, Siemens Healthcare). All images were reconstructed with 1-mm slice thickness, B50 kernel. Based on a previously-published technique, we added noise to the raw data to simulate reduced-dose versions of each case at 50% and 25% of the original NLST dose (i.e. approximately 1.0 and 0.5 mGy CTDIvol). For each case at each dose level, a CAD detection algorithm was run and nodules greater than 4 mm in diameter were reported. These CAD results were compared to “truth”, defined as the approximate nodule centroids from the NLST forms. Sensitivities and false-positive rates (FPR) were calculated for each dose level, with a sub-analysis by nodule LungRADS category. Results: For larger category 4 nodules, median sensitivities were 100% at all three dose levels, and mean sensitivity decreased with dose. For the more challenging category 2 and 3 nodules, the dose dependence was less obvious. Overall, mean subject-level sensitivity varied from 38.5% at 100% dose to 40.4% at 50% dose, a difference of only 1.9%. However, median FPR quadrupled from 1 per case at 100% dose to 4 per case at 25% dose. Conclusions: Dose reduction affected nodule detectability differently depending on the LungRADS category, and FPR was very sensitive at sub-screening levels. Care should be taken to adapt CAD for the very challenging noise characteristics of screening. Funding support: NIH U01 CA181156; Disclosures (McNitt-Gray): Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.

  7. Improved multi-stage neonatal seizure detection using a heuristic classifier and a data-driven post-processor.

    PubMed

    Ansari, A H; Cherian, P J; Dereymaeker, A; Matic, V; Jansen, K; De Wispelaere, L; Dielman, C; Vervisch, J; Swarte, R M; Govaert, P; Naulaers, G; De Vos, M; Van Huffel, S

    2016-09-01

    After identifying the most seizure-relevant characteristics by a previously developed heuristic classifier, a data-driven post-processor using a novel set of features is applied to improve the performance. The main characteristics of the outputs of the heuristic algorithm are extracted by five sets of features including synchronization, evolution, retention, segment, and signal features. Then, a support vector machine and a decision-making layer remove the falsely detected segments. Four datasets, including 71 neonates (1023 h, 3493 seizures) recorded in two different university hospitals, are used to train and test the algorithm without removing the dubious seizures. The heuristic method resulted in a false alarm rate of 3.81 per hour and a good detection rate of 88% on the entire test databases. The post-processor effectively reduces the false alarm rate by 34%, while the good detection rate decreases by 2%. This post-processing technique improves the performance of the heuristic algorithm. The structure of this post-processor is generic, improves our understanding of the core visually determined EEG features of neonatal seizures and is applicable for other neonatal seizure detectors. The post-processor significantly decreases the false alarm rate at the expense of a small reduction of the good detection rate. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  8. Predicting complications of percutaneous coronary intervention using a novel support vector method.

    PubMed

    Lee, Gyemin; Gurm, Hitinder S; Syed, Zeeshan

    2013-01-01

    To explore the feasibility of a novel approach using an augmented one-class learning algorithm to model in-laboratory complications of percutaneous coronary intervention (PCI). Data from the Blue Cross Blue Shield of Michigan Cardiovascular Consortium (BMC2) multicenter registry for the years 2007 and 2008 (n=41 016) were used to train models to predict 13 different in-laboratory PCI complications using a novel one-plus-class support vector machine (OP-SVM) algorithm. The performance of these models in terms of discrimination and calibration was compared to the performance of models trained using the following classification algorithms on BMC2 data from 2009 (n=20 289): logistic regression (LR), one-class support vector machine classification (OC-SVM), and two-class support vector machine classification (TC-SVM). For the OP-SVM and TC-SVM approaches, variants of the algorithms with cost-sensitive weighting were also considered. The OP-SVM algorithm and its cost-sensitive variant achieved the highest area under the receiver operating characteristic curve for the majority of the PCI complications studied (eight cases). Similar improvements were observed for the Hosmer-Lemeshow χ(2) value (seven cases) and the mean cross-entropy error (eight cases). The OP-SVM algorithm based on an augmented one-class learning problem improved discrimination and calibration across different PCI complications relative to LR and traditional support vector machine classification. Such an approach may have value in a broader range of clinical domains.

  9. Predicting complications of percutaneous coronary intervention using a novel support vector method

    PubMed Central

    Lee, Gyemin; Gurm, Hitinder S; Syed, Zeeshan

    2013-01-01

    Objective To explore the feasibility of a novel approach using an augmented one-class learning algorithm to model in-laboratory complications of percutaneous coronary intervention (PCI). Materials and methods Data from the Blue Cross Blue Shield of Michigan Cardiovascular Consortium (BMC2) multicenter registry for the years 2007 and 2008 (n=41 016) were used to train models to predict 13 different in-laboratory PCI complications using a novel one-plus-class support vector machine (OP-SVM) algorithm. The performance of these models in terms of discrimination and calibration was compared to the performance of models trained using the following classification algorithms on BMC2 data from 2009 (n=20 289): logistic regression (LR), one-class support vector machine classification (OC-SVM), and two-class support vector machine classification (TC-SVM). For the OP-SVM and TC-SVM approaches, variants of the algorithms with cost-sensitive weighting were also considered. Results The OP-SVM algorithm and its cost-sensitive variant achieved the highest area under the receiver operating characteristic curve for the majority of the PCI complications studied (eight cases). Similar improvements were observed for the Hosmer–Lemeshow χ2 value (seven cases) and the mean cross-entropy error (eight cases). Conclusions The OP-SVM algorithm based on an augmented one-class learning problem improved discrimination and calibration across different PCI complications relative to LR and traditional support vector machine classification. Such an approach may have value in a broader range of clinical domains. PMID:23599229

  10. Imaging in children with unilateral ureteropelvic junction obstruction: time to reduce investigations?

    PubMed

    Abadir, Nadin; Schmidt, Maria; Laube, Guido F; Weitz, Marcus

    2017-09-01

    The objective of the study was the development of an abridged risk-stratified imaging algorithm for the management of children with unilateral ureteropelvic junction obstruction (UPJO). Data on timing, frequency and duration of diagnostic imaging in children with unilateral UPJO was extracted retrospectively. Based on these findings, an abridged imaging algorithm was developed without changing the intended management by the clinicians and the outcome of the individual patient. The potential reduction of imaging studies was analysed and stratified by risk and management groups. The reduction in imaging studies, seen for ultrasound (US) and functional imaging (FI), was 45% each. On average, this is equivalent to 3 US and 1 FI studies less for every patient within the study period. The change was more pronounced in the low-risk groups. Progression of UPJO never occurred after 2 years of age and all secondary surgeries were carried out until the age of 3. Although our findings need to be validated by further prospective research, the developed imaging algorithm represents a risk-stratified approach towards less imaging studies in children with unilateral UPJO, and a follow-up beyond 3 years of age should be considered only in selected cases at the discretion of the clinician. What is Known: • ultrasound and functional imaging represent an integral part of therapeutic decision-making in children with unilateral ureteropelvic junction obstruction • imaging studies cannot accurately assess which patients are in need of surgical intervention, therefore close, serial imaging is preferred What is New: • a new, risk-stratified imaging algorithm was developed for the first 3 years of life • applying this algorithm could lead to a considerable reduction of imaging studies, and also the associated risks and health-care costs.

  11. Discrete-time model reduction in limited frequency ranges

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Juang, Jer-Nan; Longman, Richard W.

    1991-01-01

    A mathematical formulation for model reduction of discrete-time systems such that the reduced-order model represents the system in a particular frequency range is discussed. The algorithm transforms the full-order system into balanced coordinates using frequency-weighted discrete controllability and observability grammians. In this form a criterion is derived to guide truncation of states based on their contribution to the frequency range of interest. Minimization of the criterion is accomplished without the need for numerical optimization. Balancing requires the computation of discrete frequency-weighted grammians. Closed-form solutions for the computation of frequency-weighted grammians are developed. Numerical examples are discussed to demonstrate the algorithm.
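
    To illustrate the grammian-based mechanics behind this kind of reduction, the sketch below performs ordinary (unweighted) discrete-time balanced truncation with SciPy; the paper's frequency weighting and truncation criterion are not implemented, and the square-root balancing route and function name are just one conventional choice.

        import numpy as np
        from scipy.linalg import solve_discrete_lyapunov, cholesky, svd

        def balanced_truncation_discrete(A, B, C, r):
            # Controllability / observability grammians from discrete Lyapunov equations
            # (assumes a stable, controllable and observable system).
            P = solve_discrete_lyapunov(A, B @ B.T)
            Q = solve_discrete_lyapunov(A.T, C.T @ C)
            Lc = cholesky(P, lower=True)
            Lo = cholesky(Q, lower=True)
            # Square-root balancing: Hankel singular values rank state importance.
            U, s, Vt = svd(Lo.T @ Lc)
            S_inv_sqrt = np.diag(1.0 / np.sqrt(s[:r]))
            T = Lc @ Vt[:r].T @ S_inv_sqrt          # reduced -> full coordinates
            Tinv = S_inv_sqrt @ U[:, :r].T @ Lo.T   # full -> reduced coordinates
            return Tinv @ A @ T, Tinv @ B, C @ T, s

        # Example: reduce a random stable 6-state system to 2 states.
        rng = np.random.default_rng(0)
        A = 0.9 * np.diag(rng.uniform(-1, 1, 6))
        B = rng.standard_normal((6, 2))
        C = rng.standard_normal((1, 6))
        Ar, Br, Cr, hsv = balanced_truncation_discrete(A, B, C, r=2)
        print("Hankel singular values:", np.round(hsv, 3))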

  12. Off-line data reduction

    NASA Astrophysics Data System (ADS)

    Gutowski, Marek W.

    1992-12-01

    Presented is a novel, heuristic algorithm, based on fuzzy set theory, allowing for significant off-line data reduction. Given equidistant data, the algorithm discards some points while retaining others with their original values. The fraction of original data points retained is typically 1/6 of the initial number. The reduced data set preserves all the essential features of the input curve. It is possible to reconstruct the original information to a high degree of precision by means of natural cubic splines, rational cubic splines or even linear interpolation. The main fields of application should be non-linear data fitting (substantial savings in CPU time) and graphics (storage space savings).
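
    A minimal stand-in for this kind of point-dropping reduction is sketched below: a sample is kept only when linear interpolation between the last kept point and the next sample misses it by more than a tolerance. The greedy rule and tolerance are assumptions and do not reproduce the paper's fuzzy-set retention criterion.

        import numpy as np

        def reduce_curve(y, tol):
            # Greedy point dropping: keep a sample only when the chord from the last
            # kept point to the next sample misses it by more than tol.  Retained
            # points keep their original values, as in the record above.
            keep = [0]
            for i in range(1, len(y) - 1):
                x0, x1 = keep[-1], i + 1
                predicted = y[x0] + (y[x1] - y[x0]) * (i - x0) / (x1 - x0)
                if abs(predicted - y[i]) > tol:
                    keep.append(i)
            keep.append(len(y) - 1)
            return np.asarray(keep)

        # Usage: idx = reduce_curve(samples, tol=0.01); store (idx, samples[idx]) and
        # reconstruct later with splines or linear interpolation.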

  13. An Impact-Location Estimation Algorithm for Subsonic Uninhabited Aircraft

    NASA Technical Reports Server (NTRS)

    Bauer, Jeffrey E.; Teets, Edward

    1997-01-01

    An impact-location estimation algorithm is being used at the NASA Dryden Flight Research Center to support range safety for uninhabited aerial vehicle flight tests. The algorithm computes an impact location based on the descent rate, mass, and altitude of the vehicle and current wind information. The predicted impact location is continuously displayed on the range safety officer's moving map display so that the flightpath of the vehicle can be routed to avoid ground assets if the flight must be terminated. The algorithm easily adapts to different vehicle termination techniques and has been shown to be accurate to the extent required to support range safety for subsonic uninhabited aerial vehicles. This paper describes how the algorithm functions, how the algorithm is used at NASA Dryden, and how various termination techniques are handled by the algorithm. Other approaches to predicting the impact location and the reasons why they were not selected for real-time implementation are also discussed.
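
    A minimal version of such an estimate converts altitude and descent rate into a time to impact and drifts the current position with the wind, as sketched below; this is an illustrative simplification that omits the vehicle mass and the termination-technique handling used by the Dryden algorithm.

        import numpy as np

        def estimate_impact_point(position_xy, altitude_agl, descent_rate, wind_vector_xy):
            # Crude impact-point estimate: fall time from altitude and descent rate,
            # then drift with the wind.  Illustrative only; mass and termination
            # technique are not modelled here.
            time_to_impact = altitude_agl / descent_rate          # seconds
            drift = np.asarray(wind_vector_xy) * time_to_impact   # metres downwind
            return np.asarray(position_xy) + drift

        # Example: 1500 m AGL, 12 m/s descent, 8 m/s wind from the west.
        print(estimate_impact_point([0.0, 0.0], 1500.0, 12.0, [8.0, 0.0]))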

  14. Electricity Load Forecasting Using Support Vector Regression with Memetic Algorithms

    PubMed Central

    Hu, Zhongyi; Xiong, Tao

    2013-01-01

    Electricity load forecasting is an important issue that is widely explored and examined in the power systems operation literature as well as in the literature on commercial transactions in electricity markets. Among the existing forecasting models, support vector regression (SVR) has gained much attention. Considering that the performance of SVR highly depends on its parameters, this study proposed a firefly algorithm (FA) based memetic algorithm (FA-MA) to appropriately determine the parameters of the SVR forecasting model. In the proposed FA-MA algorithm, the FA is applied to explore the solution space, and pattern search is used to conduct individual learning and thus enhance the exploitation of FA. Experimental results confirm that the proposed FA-MA based SVR model can not only yield more accurate forecasting results than four other evolutionary algorithm based SVR models and three well-known forecasting models but also outperform the hybrid algorithms in the related existing literature. PMID:24459425

  15. Electricity load forecasting using support vector regression with memetic algorithms.

    PubMed

    Hu, Zhongyi; Bao, Yukun; Xiong, Tao

    2013-01-01

    Electricity load forecasting is an important issue that is widely explored and examined in the power systems operation literature as well as in the literature on commercial transactions in electricity markets. Among the existing forecasting models, support vector regression (SVR) has gained much attention. Considering that the performance of SVR highly depends on its parameters, this study proposed a firefly algorithm (FA) based memetic algorithm (FA-MA) to appropriately determine the parameters of the SVR forecasting model. In the proposed FA-MA algorithm, the FA is applied to explore the solution space, and pattern search is used to conduct individual learning and thus enhance the exploitation of FA. Experimental results confirm that the proposed FA-MA based SVR model can not only yield more accurate forecasting results than four other evolutionary algorithm based SVR models and three well-known forecasting models but also outperform the hybrid algorithms in the related existing literature.

  16. Evaluation of a metal artifact reduction algorithm applied to post-interventional flat detector CT in comparison to pre-treatment CT in patients with acute subarachnoid haemorrhage.

    PubMed

    Mennecke, Angelika; Svergun, Stanislav; Scholz, Bernhard; Royalty, Kevin; Dörfler, Arnd; Struffert, Tobias

    2017-01-01

    Metal artefacts can impair accurate diagnosis of haemorrhage using flat detector CT (FD-CT), especially after aneurysm coiling. Within this work we evaluate a prototype metal artefact reduction algorithm by comparison of the artefact-reduced and the non-artefact-reduced FD-CT images to pre-treatment FD-CT and multi-slice CT images. Twenty-five patients with acute aneurysmal subarachnoid haemorrhage (SAH) were selected retrospectively. FD-CT and multi-slice CT before endovascular treatment as well as FD-CT data sets after treatment were available for all patients. The algorithm was applied to post-treatment FD-CT. The effect of the algorithm was evaluated utilizing the pre-post concordance of a modified Fisher score, a subjective image quality assessment, the range of the Hounsfield units within three ROIs, and the pre-post slice-wise Pearson correlation. The pre-post concordance of the modified Fisher score, the subjective image quality, and the pre-post correlation of the ranges of the Hounsfield units were significantly higher for artefact-reduced than for non-artefact-reduced images. Within the metal-affected slices, the pre-post slice-wise Pearson correlation coefficient was higher for artefact-reduced than for non-artefact-reduced images. The overall diagnostic quality of the artefact-reduced images was improved and reached the level of the pre-interventional FD-CT images. The metal-unaffected parts of the image were not modified. • After coiling subarachnoid haemorrhage, metal artefacts seriously reduce FD-CT image quality. • This new metal artefact reduction algorithm is feasible for flat-detector CT. • After coiling, MAR is necessary for diagnostic quality of affected slices. • Slice-wise Pearson correlation is introduced to evaluate improvement of MAR in future studies. • Metal-unaffected parts of image are not modified by this MAR algorithm.

  17. An Overview of a Trajectory-Based Solution for En Route and Terminal Area Self-Spacing: Sixth Revision

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S.

    2015-01-01

    This paper presents an overview of the sixth revision to an algorithm specifically designed to support NASA's Airborne Precision Spacing concept. This algorithm is referred to as the Airborne Spacing for Terminal Arrival Routes version 13 (ASTAR13). This airborne self-spacing concept contains both trajectory-based and state-based mechanisms for calculating the speeds required to achieve or maintain a precise spacing interval. The trajectory-based capability allows for spacing operations prior to the aircraft being on a common path. This algorithm was also designed specifically to support a standalone, non-integrated implementation in the spacing aircraft. This current revision to the algorithm adds the state-based capability in support of evolving industry standards relating to airborne self-spacing.

  18. Air Traffic Control Experimentation and Evaluation with the NASA ATS-6 Satellite : Volume 4. Data Reduction and Analysis Software.

    DOT National Transportation Integrated Search

    1976-09-01

    Software used for the reduction and analysis of the multipath prober, modem evaluation (voice, digital data, and ranging), and antenna evaluation data acquired during the ATS-6 field test program is described. Multipath algorithms include reformattin...

  19. Accelerating k-NN Algorithm with Hybrid MPI and OpenSHMEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Jian; Hamidouche, Khaled; Zheng, Jie

    2015-08-05

    Machine Learning algorithms are benefiting from the continuous improvement of programming models, including MPI, MapReduce and PGAS. The k-Nearest Neighbors (k-NN) algorithm is a widely used machine learning algorithm, applied to supervised learning tasks such as classification. Several parallel implementations of k-NN have been proposed in the literature and practice. However, on high-performance computing systems with high-speed interconnects, it is important to further accelerate existing designs of the k-NN algorithm by taking advantage of scalable programming models. To improve the performance of k-NN in a large-scale environment with an InfiniBand network, this paper proposes several alternative hybrid MPI+OpenSHMEM designs and performs a systemic evaluation and analysis on typical workloads. The hybrid designs leverage one-sided memory access to better overlap communication with computation than the existing pure MPI design, and propose better schemes for efficient buffer management. The implementation based on the k-NN program from MaTEx with MVAPICH2-X (Unified MPI+PGAS Communication Runtime over InfiniBand) shows up to 9.0% time reduction for training the KDD Cup 2010 workload over 512 cores, and 27.6% time reduction for a small workload with balanced communication and computation. Experiments with varying numbers of cores show that our design can maintain good scalability.

  20. SU-E-I-75: Evaluation of An Orthopedic Metal Artifact Reduction (O-MAR) Algorithm On Patients with Spinal Prostheses Near Spinal Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Z; Xia, P; Djemil, T

    Purpose: To evaluate the impact of a commercial orthopedic metal artifact reduction (O-MAR) algorithm on CT image quality and dose calculation for patients with spinal prostheses near spinal tumors. Methods: A CT electron density phantom was scanned twice: with tissue-simulating inserts only, and with a titanium insert replacing solid water. A patient plan was mapped to the phantom images in two ways: with the titanium inside or outside of the spinal tumor. Pinnacle and Eclipse were used to evaluate the dosimetric effects of O-MAR on 12-bit and 16-bit CT data, respectively. CT images from five patients with spinal prostheses were reconstructed with and without O-MAR. Two observers assessed the image quality improvement from O-MAR. Both pencil beam and Monte Carlo dose calculation in iPlan were used for the patient study. The percentage differences between non-O-MAR and O-MAR datasets were calculated for PTV-min, PTV-max, PTV-mean, PTV-V100, PTV-D90, OAR-V10Gy, OAR-max, and OAR-D0.1cc. Results: O-MAR improved image quality but did not significantly affect the dose distributions and DVHs for both 12-bit and 16-bit CT phantom data. All five patient cases demonstrated some degree of image quality improvement from O-MAR, ranging from small to large metal artifact reduction. For pencil beam, the largest discrepancy was observed for OAR-V10Gy at 5.4%, while the other seven parameters were ≤0.6%. For Monte Carlo, the differences between non-O-MAR and O-MAR datasets were ≤3.0%. Conclusion: Both phantom and patient studies indicated that O-MAR can substantially reduce metal artifacts on CT images, allowing better visualization of the anatomical structures and metal objects. The dosimetric impact of O-MAR was insignificant regardless of the metal location, image bit-depth, and dose calculation algorithm. O-MAR corrected images are recommended for radiation treatment planning on patients with spinal prostheses because of the improved image quality and no need to modify current dose constraints. This work was supported by a research grant from Philips Healthcare. Paul Klahr is an employee of Philips Healthcare.

  1. Maximum life spur gear design

    NASA Technical Reports Server (NTRS)

    Savage, M.; Mackulin, M. J.; Coe, H. H.; Coy, J. J.

    1991-01-01

    Optimization procedures allow one to design a spur gear reduction for maximum life and other end use criteria. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial guess values. The optimization algorithm is described, and the models for gear life and performance are presented. The algorithm is compact and has been programmed for execution on a desk top computer. Two examples are presented to illustrate the method and its application.

  2. Angular-contact ball-bearing internal load estimation algorithm using runtime adaptive relaxation

    NASA Astrophysics Data System (ADS)

    Medina, H.; Mutu, R.

    2017-07-01

    An algorithm to estimate internal loads in single-row angular contact ball bearings due to externally applied thrust loads and high operating speeds is presented. A new runtime adaptive relaxation procedure and blending function are proposed which ensure algorithm stability whilst also reducing the number of iterations needed to reach convergence, leading to an average reduction in computation time in excess of 80%. The model is validated against a 218 angular contact bearing and shows excellent agreement with published results.
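
    The general idea of a runtime-adapted relaxation factor can be sketched with a generic under-relaxed fixed-point iteration that speeds up while the residual falls and backs off when it grows; the update rule and constants below are assumptions, and the paper's bearing equations and blending function are not reproduced.

        import numpy as np

        def adaptive_relaxation_solve(g, x0, tol=1e-10, max_iter=500):
            # Fixed-point iteration x <- (1 - w) x + w g(x) with a runtime-adapted
            # relaxation factor w: grow w while converging, shrink it when diverging.
            x = np.asarray(x0, dtype=float)
            w, prev_res = 0.5, np.inf
            for _ in range(max_iter):
                gx = g(x)
                res = np.linalg.norm(gx - x)
                if res < tol:
                    break
                w = min(1.0, 1.1 * w) if res < prev_res else max(0.05, 0.5 * w)
                x = (1.0 - w) * x + w * gx
                prev_res = res
            return x

        # Example: solve x = cos(x) componentwise.
        print(adaptive_relaxation_solve(np.cos, np.array([0.0, 1.0])))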

  3. The Correlation Fractal Dimension of Complex Networks

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Liu, Zhenzhen; Wang, Mogei

    2013-05-01

    The fractality of complex networks is studied by estimating the correlation dimensions of the networks. Compared with previous algorithms for estimating the box dimension, our algorithm achieves a significant reduction in time complexity. For the four benchmark cases tested, that is, the Escherichia coli (E. coli) metabolic network, the Homo sapiens protein interaction network (H. sapiens PIN), the Saccharomyces cerevisiae protein interaction network (S. cerevisiae PIN) and the World Wide Web (WWW), experiments are provided to demonstrate the validity of our algorithm.
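
    A Grassberger-Procaccia-style correlation-dimension estimate can be computed from the distribution of pairwise shortest-path distances, as sketched below with networkx; the radii, graph and fitting range are illustrative assumptions, and the paper's exact estimator may differ in detail.

        import numpy as np
        import networkx as nx

        def correlation_dimension(G, radii):
            # Correlation integral C(r): fraction of node pairs within shortest-path
            # distance r.  The slope of log C(r) vs log r estimates the dimension.
            nodes = list(G)
            D = dict(nx.all_pairs_shortest_path_length(G))
            dists = np.array([D[u][v] for i, u in enumerate(nodes)
                              for v in nodes[i + 1:] if v in D[u]])
            C = np.array([(dists <= r).mean() for r in radii])
            slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
            return slope

        # Example on an illustrative small-world graph.
        G = nx.watts_strogatz_graph(500, 6, 0.1)
        print("estimated correlation dimension:",
              round(correlation_dimension(G, radii=[2, 3, 4, 5, 6]), 2))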

  4. Algorithm comparison for schedule optimization in MR fingerprinting.

    PubMed

    Cohen, Ouri; Rosen, Matthew S

    2017-09-01

    In MR Fingerprinting, the flip angles and repetition times are chosen according to a pseudorandom schedule. In previous work, we have shown that maximizing the discrimination between different tissue types by optimizing the acquisition schedule allows reductions in the number of measurements required. The ideal optimization algorithm for this application remains unknown, however. In this work we examine several different optimization algorithms to determine the one best suited for optimizing MR Fingerprinting acquisition schedules. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Bayesian Kernel Methods for Non-Gaussian Distributions: Binary and Multi-class Classification Problems

    DTIC Science & Technology

    2013-05-28

    those of the support vector machine and relevance vector machine, and the model runs more quickly than the other algorithms. When one class occurs...incremental support vector machine algorithm for online learning when fewer than 50 data points are available. (a) Papers published in peer-reviewed journals...learning environments, where data processing occurs one observation at a time and the classification algorithm improves over time with new

  6. Initiation of insulin glargine therapy in type 2 diabetes subjects suboptimally controlled on oral antidiabetic agents: results from the AT.LANTUS trial.

    PubMed

    Davies, M; Lavalle-González, F; Storms, F; Gomis, R

    2008-05-01

    For many patients with type 2 diabetes, oral antidiabetic agents (OADs) do not provide optimal glycaemic control, necessitating insulin therapy. Fear of hypoglycaemia is a major barrier to initiating insulin therapy. The AT.LANTUS study investigated optimal methods to initiate and maintain insulin glargine (LANTUS, glargine, Sanofi-aventis, Paris, France) therapy using two treatment algorithms. This subgroup analysis investigated the initiation of once-daily glargine therapy in patients suboptimally controlled on multiple OADs. This study was a 24-week, multinational (59 countries), multicenter (611), randomized study. Algorithm 1 was a clinic-driven titration and algorithm 2 was a patient-driven titration. Titration was based on target fasting blood glucose < or =100 mg/dl (< or =5.5 mmol/l). Algorithms were compared for incidence of severe hypoglycaemia [requiring assistance and blood glucose <50 mg/dl (<2.8 mmol/l)] and baseline to end-point change in haemoglobin A(1c) (HbA(1c)). Of the 4961 patients enrolled in the study, 865 were included in this subgroup analysis: 340 received glargine plus 1 OAD and 525 received glargine plus >1 OAD. Incidence of severe hypoglycaemia was <1%. HbA(1c) decreased significantly between baseline and end-point for patients receiving glargine plus 1 OAD (-1.4%, p < 0.001; algorithm 1 -1.3% vs. algorithm 2 -1.5%; p = 0.03) and glargine plus >1 OAD (-1.7%, p < 0.001; algorithm 1 -1.5% vs. algorithm 2 -1.8%; p = 0.001). This study shows that initiation of once-daily glargine with OADs results in significant reduction of HbA(1c) with a low risk of hypoglycaemia. The greater reduction in HbA(1c) was seen in patients randomized to the patient-driven algorithm (algorithm 2) on 1 or >1 OAD.

  7. Grid artifact reduction for direct digital radiography detectors based on rotated stationary grids with homomorphic filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Dong Sik; Lee, Sanggyun

    2013-06-15

    Purpose: Grid artifacts are caused when using the antiscatter grid in obtaining digital x-ray images. In this paper, research on grid artifact reduction techniques is conducted especially for the direct detectors, which are based on amorphous selenium. Methods: In order to analyze and reduce the grid artifacts, the authors consider a multiplicative grid image model and propose a homomorphic filtering technique. For minimal damage due to filters, which are used to suppress the grid artifacts, rotated grids with respect to the sampling direction are employed, and min-max optimization problems for searching optimal grid frequencies and angles for given sampling frequencies are established. The authors then propose algorithms for the grid artifact reduction based on the band-stop filters as well as low-pass filters. Results: The proposed algorithms are experimentally tested for digital x-ray images, which are obtained from direct detectors with the rotated grids, and are compared with other algorithms. It is shown that the proposed algorithms can successfully reduce the grid artifacts for direct detectors. Conclusions: By employing the homomorphic filtering technique, the authors can considerably suppress the strong grid artifacts with relatively narrow-bandwidth filters compared to the normal filtering case. Using rotated grids also significantly reduces the ringing artifact. Furthermore, for specific grid frequencies and angles, the authors can use simple homomorphic low-pass filters in the spatial domain, and thus alleviate the grid artifacts with very low implementation complexity.
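
    The core of the homomorphic approach can be sketched in a few lines: take the logarithm so the multiplicative grid pattern becomes additive, notch out the grid frequency in the Fourier domain, and exponentiate. The vertical-grid assumption, notch frequency and bandwidth below are placeholders; the paper's rotated grids and min-max search for optimal grid frequency and angle are not shown.

        import numpy as np

        def homomorphic_grid_suppression(image, grid_freq_cycles_per_px, bandwidth=0.01):
            # Log transform turns the multiplicative grid pattern into an additive one.
            log_img = np.log(np.clip(image, 1e-6, None))
            spectrum = np.fft.fft2(log_img)
            fx = np.fft.fftfreq(image.shape[1])[None, :]
            # Band-stop mask around +/- the grid frequency along x (vertical grid assumed).
            notch = np.abs(np.abs(fx) - grid_freq_cycles_per_px) > bandwidth
            filtered = np.fft.ifft2(spectrum * notch).real
            return np.exp(filtered)

        # Usage (illustrative grid frequency): cleaned = homomorphic_grid_suppression(
        #     raw_image, grid_freq_cycles_per_px=0.35)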

  8. Open-path FTIR data reduction algorithm with atmospheric absorption corrections: the NONLIN code

    NASA Astrophysics Data System (ADS)

    Phillips, William; Russwurm, George M.

    1999-02-01

    This paper describes the progress made to date in developing, testing, and refining a data reduction computer code, NONLIN, that alleviates many of the difficulties experienced in the analysis of open path FTIR data. Among the problems that currently affect FTIR open path data quality are: the inability to obtain a true I0 (background) spectrum, spectral interferences of atmospheric gases such as water vapor and carbon dioxide, and matching the spectral resolution and shift of the reference spectra to a particular field instrument. This algorithm is based on a non-linear fitting scheme and is therefore not constrained by many of the assumptions required for the application of linear methods such as classical least squares (CLS). As a result, a more realistic mathematical model of the spectral absorption measurement process can be employed in the curve fitting process. Applications of the algorithm have proven successful in circumventing open path data reduction problems. However, recent studies, by one of the authors, of the temperature and pressure effects on atmospheric absorption indicate that there exist temperature and water partial pressure effects that should be incorporated into the NONLIN algorithm for accurate quantification of gas concentrations. This paper investigates the sources of these phenomena. As a result of this study, a partial pressure correction has been employed in the NONLIN computer code. Two typical field spectra are examined to determine what effect the partial pressure correction has on gas quantification.

  9. Statistical iterative reconstruction for streak artefact reduction when using multidetector CT to image the dento-alveolar structures.

    PubMed

    Dong, J; Hayakawa, Y; Kober, C

    2014-01-01

    When metallic prosthetic appliances and dental fillings are present in the oral cavity, metal-induced streak artefacts are unavoidable in CT images. The aim of this study was to develop a method for artefact reduction using statistical reconstruction on multidetector row CT images. Adjacent CT images often depict similar anatomical structures. Therefore, reconstruction of images with weak artefacts was attempted using projection data from an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were then processed in sequence by successive iterative restoration, with the projection data generated from the adjacent reconstructed slice. First, the basic maximum likelihood-expectation maximization (MLEM) algorithm was applied. Next, the ordered subset-expectation maximization (OSEM) algorithm was examined. Alternatively, a small region of interest setting was designated. Finally, a general-purpose graphics processing unit was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector row CT images when the sequential processing method was applied. The ordered subset-expectation maximization and small region of interest settings reduced the processing duration without apparent detriment, and the general-purpose graphics processing unit delivered high performance. A statistical reconstruction method was thus applied for streak artefact reduction, and the alternative algorithms were effective. Both software and hardware tools, such as ordered subset-expectation maximization, a small region of interest and a general-purpose graphics processing unit, achieved fast artefact correction.
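
    The basic MLEM update referred to above is a multiplicative correction of the current image estimate by back-projected measurement ratios; a generic sketch for a linear model y ≈ Ax is given below (the CT system matrix, ordered subsets, region-of-interest handling and GPU implementation are not shown).

        import numpy as np

        def mlem(A, y, n_iter=50):
            # Classic MLEM multiplicative update for y ~ Poisson(Ax), x >= 0.
            x = np.ones(A.shape[1])
            sens = A.sum(axis=0)                       # sensitivity (column sums)
            for _ in range(n_iter):
                ratio = y / np.clip(A @ x, 1e-12, None)
                x *= (A.T @ ratio) / np.clip(sens, 1e-12, None)
            return x

        # Tiny synthetic example with a random non-negative system matrix.
        rng = np.random.default_rng(0)
        A = rng.random((40, 10))
        x_true = rng.random(10)
        print(np.round(mlem(A, A @ x_true, n_iter=500), 3))
        print(np.round(x_true, 3))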

  10. Error reduction in EMG signal decomposition

    PubMed Central

    Kline, Joshua C.

    2014-01-01

    Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159
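
    The combination step can be sketched as a consensus vote across independent decompositions of the same train: keep a firing time only when a majority of estimates place a firing within a small tolerance window. The tolerance and majority rule below are illustrative assumptions, not the published error-reduction algorithm.

        import numpy as np

        def consensus_firings(estimates, tol_ms=2.0, min_votes=None):
            # Keep a candidate firing time only if at least min_votes estimates
            # contain a firing within tol_ms of it; nearby duplicates are merged.
            min_votes = min_votes or (len(estimates) // 2 + 1)
            candidates = np.sort(np.concatenate(estimates))
            kept, last = [], -np.inf
            for t in candidates:
                votes = sum(np.any(np.abs(np.asarray(e) - t) <= tol_ms) for e in estimates)
                if votes >= min_votes and t - last > tol_ms:
                    kept.append(t)
                    last = t
            return np.asarray(kept)

        # Three noisy estimates of the same firing train (times in ms).
        print(consensus_firings([[10.0, 52.1, 90.3], [10.4, 51.7], [9.8, 52.0, 131.0]]))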

  11. Identifying Septal Support Reconstructions for Saddle Nose Deformity: The Cakmak Algorithm.

    PubMed

    Cakmak, Ozcan; Emre, Ismet Emrah; Ozkurt, Fazil Emre

    2015-01-01

    The saddle nose deformity is one of the most challenging problems in nasal surgery with a less predictable and reproducible result than other nasal procedures. The main feature of this deformity is loss of septal support with both functional and aesthetic implications. Most reports on saddle nose have focused on aesthetic improvement and neglected the reestablishment of septal support to improve airway. To explain how the Cakmak algorithm, an algorithm that describes various fixation techniques and grafts in different types of saddle nose deformities, aids in identifying saddle nose reconstructions that restore supportive nasal framework and provide the aesthetic improvements typically associated with procedures to correct saddle nose deformities. This algorithm presents septal support reconstruction of patients with saddle nose deformity based on the experience of the senior author in 206 patients with saddle nose deformity. Preoperative examination, intraoperative assessment, reconstruction techniques, graft materials, and patient evaluation of aesthetic success were documented, and 4 different types of saddle nose deformities were defined. The Cakmak algorithm classifies varying degrees of saddle nose deformity from type 0 to type 4 and helps identify the most appropriate surgical procedure to restore the supportive nasal framework and aesthetic dorsum. Among the 206 patients, 110 women and 96 men, mean (range) age was 39.7 years (15-68 years), and mean (range) of follow-up was 32 months (6-148 months). All but 12 patients had a history of previous nasal surgeries. Application of the Cakmak algorithm resulted in 36 patients categorized with type 0 saddle nose deformities; 79, type 1; 50, type 2; 20, type 3a; 7, type 3b; and 14, type 4. Postoperative photographs showed improvement of deformities, and patient surveys revealed aesthetic improvement in 201 patients and improvement in nasal breathing in 195 patients. Three patients developed postoperative infection and 21 patients underwent revision septal surgery. The goal of saddle nose reconstruction should be not only to restore an aesthetic dorsum but also to restore the supportive nasal framework. The surgeon should provide more projected and strengthened septal support before augmentation of saddle nose deformity to improve breathing and achieve a stable long-term result. The Cakmak algorithm is a mechanism that helps surgeons identify the most effective way to maximize septal support and aesthetic appeal. 4.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sedlacek, A. J., III; Finfrock, C.

    As a member of the science-support part of the ITT-led LISA development program, BNL is tasked with the acquisition of UV Raman spectral fingerprints and associated scattering cross-sections for those chemicals of interest to the program's sponsor. In support of this role, the present report contains the first installment of UV Raman spectral fingerprint data on the initial subset of chemicals. Because of the unique nature associated with the acquisition of spectral fingerprints for use in spectral pattern matching algorithms (i.e., CLS, PLS, ANN), great care has been taken to maximize the signal-to-noise ratio and to minimize unnecessary spectral subtractions, in an effort to provide the highest quality spectral fingerprints. This report is divided into 4 sections. The first is an Experimental section that outlines how the Raman measurements are performed. This is then followed by a section on Sample Handling. Following this, the spectral fingerprints are presented in the Results section, where the data reduction process is outlined. Finally, a Photographs section is included.

  13. A high-performance spatial database based approach for pathology imaging algorithm evaluation

    PubMed Central

    Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A.D.; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J.; Saltz, Joel H.

    2013-01-01

    Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The validated data were formatted based on the PAIS data model and loaded into a spatial database. To support efficient data loading, we have implemented a parallel data loading tool that takes advantage of multi-core CPUs to accelerate data injection. The spatial database manages both geometric shapes and image features or classifications, and enables spatial sampling, result comparison, and result aggregation through expressive structured query language (SQL) queries with spatial extensions. To provide scalable and efficient query support, we have employed a shared nothing parallel database architecture, which distributes data homogenously across multiple database partitions to take advantage of parallel computation power and implements spatial indexing to achieve high I/O throughput. Results: Our work proposes a high performance, parallel spatial database platform for algorithm validation and comparison. This platform was evaluated by storing, managing, and comparing analysis results from a set of brain tumor whole slide images. The tools we develop are open source and available to download. Conclusions: Pathology image algorithm validation and comparison are essential to iterative algorithm development and refinement. One critical component is the support for queries involving spatial predicates and comparisons. In our work, we develop an efficient data model and parallel database approach to model, normalize, manage and query large volumes of analytical image result data. 
Our experiments demonstrate that the data partitioning strategy and the grid-based indexing result in good data distribution across database nodes and reduce I/O overhead in spatial join queries through parallel retrieval of relevant data and quick subsetting of datasets. The set of tools in the framework provides a full pipeline to normalize, load, manage and query analytical results for algorithm evaluation. PMID:23599905

  14. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 1; Improved Method and Uncertainties

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.; hide

    2006-01-01

    A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day⁻¹) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day⁻¹, with proportionate reductions in latent heating sampling errors.

  15. Generation and assessment of turntable SAR data for the support of ATR development

    NASA Astrophysics Data System (ADS)

    Cohen, Marvin N.; Showman, Gregory A.; Sangston, K. James; Sylvester, Vincent B.; Gostin, Lamar; Scheer, C. Ruby

    1998-10-01

    Inverse synthetic aperture radar (ISAR) imaging on a turntable-tower test range permits convenient generation of high resolution two-dimensional images of radar targets under controlled conditions for testing SAR image processing and for supporting automatic target recognition (ATR) algorithm development. However, turntable ISAR images are often obtained under near-field geometries and hence may suffer geometric distortions not present in airborne SAR images. In this paper, turntable data collected at Georgia Tech's Electromagnetic Test Facility are used to begin to assess the utility of two- dimensional ISAR imaging algorithms in forming images to support ATR development. The imaging algorithms considered include a simple 2D discrete Fourier transform (DFT), a 2-D DFT with geometric correction based on image domain resampling, and a computationally-intensive geometric matched filter solution. Images formed with the various algorithms are used to develop ATR templates, which are then compared with an eye toward utilization in an ATR algorithm.

  16. Sparse Solutions for Single Class SVMs: A Bi-Criterion Approach

    NASA Technical Reports Server (NTRS)

    Das, Santanu; Oza, Nikunj C.

    2011-01-01

    In this paper we propose an innovative learning algorithm - a variation of the one-class nu Support Vector Machines (SVMs) learning algorithm - to produce sparser solutions with much reduced computational complexity. The proposed technique returns an approximate solution, nearly as good as the solution set obtained by the classical approach, by minimizing the original risk function along with a regularization term. We introduce a bi-criterion optimization that helps guide the search towards the optimal set in much reduced time. The outcome of the proposed learning technique was compared with the benchmark one-class Support Vector Machines algorithm, which more often leads to solutions with redundant support vectors. Throughout the analysis, the problem size for both optimization routines was kept consistent. We have tested the proposed algorithm on a variety of data sources under different conditions to demonstrate its effectiveness. In all cases the proposed algorithm closely preserves the accuracy of standard one-class nu SVMs while reducing both training time and test time by several factors.
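
    The benchmark referred to here is the standard one-class nu-SVM, in which the nu parameter lower-bounds the fraction of training points that become support vectors; the quick sketch below fits scikit-learn's OneClassSVM and reports the support-vector count as the sparsity baseline (the paper's bi-criterion sparse variant is not implemented).

        import numpy as np
        from sklearn.svm import OneClassSVM

        # Nominal training data: a single Gaussian blob standing in for "normal" samples.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((1000, 5))

        # Standard one-class nu-SVM baseline: nu lower-bounds the support-vector
        # fraction and upper-bounds the training outlier fraction.
        model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X)
        print("support vectors:", model.support_vectors_.shape[0], "of", X.shape[0])
        print("training outlier fraction:", np.mean(model.predict(X) == -1))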

  17. Comparative Analysis of Rank Aggregation Techniques for Metasearch Using Genetic Algorithm

    ERIC Educational Resources Information Center

    Kaur, Parneet; Singh, Manpreet; Singh Josan, Gurpreet

    2017-01-01

    Rank Aggregation techniques have found wide applications for metasearch along with other streams such as Sports, Voting System, Stock Markets, and Reduction in Spam. This paper presents the optimization of rank lists for web queries put by the user on different MetaSearch engines. A metaheuristic approach such as Genetic algorithm based rank…

  18. Reductions of topologically massive gravity I: Hamiltonian analysis of second order degenerate Lagrangians

    NASA Astrophysics Data System (ADS)

    Ćaǧatay Uçgun, Filiz; Esen, Oǧul; Gümral, Hasan

    2018-01-01

    We present Skinner-Rusk and Hamiltonian formalisms of second order degenerate Clément and Sarıoğlu-Tekin Lagrangians. The Dirac-Bergmann constraint algorithm is employed to obtain Hamiltonian realizations of Lagrangian theories. The Gotay-Nester-Hinds algorithm is used to investigate Skinner-Rusk formalisms of these systems.

  19. Recombination of the steering vector of the triangle grid array in quaternions and the reduction of the MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    Bai, Chen; Han, Dongjuan

    2018-04-01

    MUSIC is widely used for DOA estimation. The triangular grid is a common array arrangement, but its steering-vector calculation is more complicated than that of a rectangular array. In this paper, a quaternion-based algorithm reduces the dimension of the steering vector and simplifies the calculation.
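
    For reference, a standard complex-valued MUSIC pseudo-spectrum for a uniform linear array can be sketched as below; the quaternion recombination of the steering vector for triangular grids, which is the paper's contribution, is not reproduced, and the array geometry and parameters are assumptions.

        import numpy as np

        def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
            """Standard MUSIC pseudo-spectrum for a uniform linear array.

            X: snapshot matrix (n_sensors x n_snapshots); d: element spacing in
            wavelengths. Peaks of the returned spectrum indicate the DOAs.
            """
            n_sensors = X.shape[0]
            R = X @ X.conj().T / X.shape[1]              # sample covariance matrix
            _, eigvecs = np.linalg.eigh(R)               # eigenvalues in ascending order
            En = eigvecs[:, : n_sensors - n_sources]     # noise-subspace eigenvectors
            k = np.arange(n_sensors)[:, None]
            A = np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(angles)))  # steering vectors
            P = En @ En.conj().T                         # noise-subspace projector
            denom = np.einsum('ia,ij,ja->a', A.conj(), P, A)
            return angles, 1.0 / np.abs(denom)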

  20. Evaluation of Algorithms for a Miles-in-Trail Decision Support Tool

    NASA Technical Reports Server (NTRS)

    Bloem, Michael; Hattaway, David; Bambos, Nicholas

    2012-01-01

    Four machine learning algorithms were prototyped and evaluated for use in a proposed decision support tool that would assist air traffic managers as they set Miles-in-Trail restrictions. The tool would display probabilities that each possible Miles-in-Trail value should be used in a given situation. The algorithms were evaluated with an expected Miles-in-Trail cost that assumes traffic managers set restrictions based on the tool-suggested probabilities. Basic Support Vector Machine, random forest, and decision tree algorithms were evaluated, as was a softmax regression algorithm that was modified to explicitly reduce the expected Miles-in-Trail cost. The algorithms were evaluated with data from the summer of 2011 for air traffic flows bound to the Newark Liberty International Airport (EWR) over the ARD, PENNS, and SHAFF fixes. The algorithms were provided with 18 input features that describe the weather at EWR, the runway configuration at EWR, the scheduled traffic demand at EWR and the fixes, and other traffic management initiatives in place at EWR. Features describing other traffic management initiatives at EWR and the weather at EWR achieved relatively high information gain scores, indicating that they are the most useful for estimating Miles-in-Trail. In spite of a high variance or over-fitting problem, the decision tree algorithm achieved the lowest expected Miles-in-Trail costs when the algorithms were evaluated using 10-fold cross validation with the summer 2011 data for these air traffic flows.
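
    The feature screening described above can be approximated with a mutual-information ranking, as in the hedged sketch below; the feature matrix, labels, and the use of scikit-learn's mutual_info_classif are illustrative stand-ins, not the study's actual data or tooling.

        import numpy as np
        from sklearn.feature_selection import mutual_info_classif

        # Hypothetical data: rows are traffic situations, columns are the 18 input
        # features (weather, runway configuration, demand, other TMIs at EWR).
        rng = np.random.default_rng(1)
        X = rng.normal(size=(2000, 18))
        y = rng.choice([0, 5, 10, 15, 20], size=2000)    # MIT value actually used

        # Information-gain-style screening: rank features by mutual information
        # with the chosen Miles-in-Trail value.
        scores = mutual_info_classif(X, y, random_state=0)
        print("features ranked by information gain:", np.argsort(scores)[::-1][:5])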

  1. Fusing face-verification algorithms and humans.

    PubMed

    O'Toole, Alice J; Abdi, Hervé; Jiang, Fang; Phillips, P Jonathon

    2007-10-01

    It has been demonstrated recently that state-of-the-art face-recognition algorithms can surpass human accuracy at matching faces over changes in illumination. The ranking of algorithms and humans by accuracy, however, does not provide information about whether algorithms and humans perform the task comparably or whether algorithms and humans can be fused to improve performance. In this paper, we fused humans and algorithms using partial least squares regression (PLSR). In the first experiment, we applied PLSR to face-pair similarity scores generated by seven algorithms participating in the Face Recognition Grand Challenge. The PLSR produced an optimal weighting of the similarity scores, which we tested for generality with a jackknife procedure. Fusing the algorithms' similarity scores using the optimal weights produced a twofold reduction of error rate over the most accurate algorithm. Next, human-subject-generated similarity scores were added to the PLSR analysis. Fusing humans and algorithms increased the performance to near-perfect classification accuracy. These results are discussed in terms of maximizing face-verification accuracy with hybrid systems consisting of multiple algorithms and humans.
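
    A minimal sketch of this kind of score fusion, assuming a matrix of per-pair similarity scores and match labels (synthetic here), using scikit-learn's PLSRegression; the jackknife generality test and the actual Face Recognition Grand Challenge data are omitted.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # Synthetic stand-in: each row is a face pair, columns are similarity
        # scores from seven algorithms; truth = 1 means the pair is the same person.
        rng = np.random.default_rng(2)
        truth = rng.integers(0, 2, size=1000)
        scores = truth[:, None] * 0.8 + rng.normal(scale=0.5, size=(1000, 7))

        # PLSR learns a weighting of the similarity scores against the labels.
        pls = PLSRegression(n_components=3).fit(scores, truth.astype(float))
        fused = pls.predict(scores).ravel()

        # Thresholding the fused score gives the match / non-match decision.
        print("training accuracy of fused score:", ((fused > 0.5) == truth).mean())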

  2. Statistical analysis for validating ACO-KNN algorithm as feature selection in sentiment analysis

    NASA Astrophysics Data System (ADS)

    Ahmad, Siti Rohaidah; Yusop, Nurhafizah Moziyana Mohd; Bakar, Azuraliza Abu; Yaakub, Mohd Ridzwan

    2017-10-01

    This research paper aims to propose a hybrid of ant colony optimization (ACO) and k-nearest neighbor (KNN) algorithms as a feature selection method for selecting relevant features from customer review datasets. Information gain (IG), genetic algorithm (GA), and rough set attribute reduction (RSAR) were used as baseline algorithms in a performance comparison with the proposed algorithm. This paper will also discuss the significance test, which was used to evaluate the performance differences between the ACO-KNN, IG-GA, and IG-RSAR algorithms. This study evaluated the performance of the ACO-KNN algorithm using precision, recall, and F-score, which were validated using parametric statistical significance tests. The evaluation process has statistically proven that the ACO-KNN algorithm is significantly improved compared to the baseline algorithms. In addition, the experimental results have proven that ACO-KNN can be used as a feature selection technique in sentiment analysis to obtain a high-quality, optimal feature subset that can represent the actual data in customer review data.
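
    The parametric significance testing referred to above can be illustrated with a paired t-test over per-fold F-scores, as in the sketch below; the numbers are invented for illustration and are not the paper's results.

        import numpy as np
        from scipy.stats import ttest_rel

        # Hypothetical per-fold F-scores for the proposed and one baseline selector.
        f_aco_knn = np.array([0.86, 0.88, 0.84, 0.87, 0.89, 0.85, 0.88, 0.86, 0.87, 0.88])
        f_ig_ga = np.array([0.81, 0.83, 0.80, 0.82, 0.84, 0.80, 0.83, 0.81, 0.82, 0.83])

        # Paired (dependent-samples) t-test on the fold-wise differences.
        t_stat, p_value = ttest_rel(f_aco_knn, f_ig_ga)
        print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # small p => significant difference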

  3. The VIDA Framework as an Education Tool: Leveraging Volcanology Data for Educational Purposes

    NASA Astrophysics Data System (ADS)

    Faied, D.; Sanchez, A.

    2009-04-01

    While numerous global initiatives exist to address the potential hazards posed by volcanic eruption events and assess impacts from a civil security viewpoint, there does not yet exist a single, unified, international system of early warning and hazard tracking for eruptions. Numerous gaps exist in the risk reduction cycle, from data collection, to data processing, and finally dissemination of salient information to relevant parties. As part of the 2008 International Space University's Space Studies Program, a detailed gap analysis of the state of volcano disaster risk reduction was undertaken, and this paper presents the principal results. This gap analysis considered current sensor technologies, data processing algorithms, and utilization of data products by various international organizations. Recommendations for strategies to minimize or eliminate certain gaps are also provided. In the effort to address the gaps, a framework evolved at system level. This framework, known as VIDA, is a tool to develop user requirements for civil security in hazardous contexts, and a candidate system concept for a detailed design phase. While the basic intention of VIDA is to support disaster risk reduction efforts, there are several methods of leveraging raw science data to support education across a wide demographic. Basic geophysical data could be used to educate school children about the characteristics of volcanoes, satellite mappings could support informed growth and development of societies in at-risk areas, and raw sensor data could contribute to a wide range of university-level research projects. Satellite maps, basic geophysical data, and raw sensor data are combined and accessible in a way that allows the relationships between these data types to be explored and used in a training environment. Such a resource naturally lends itself to research efforts in the subject but also research in operational tools, system architecture, and human/machine interaction in civil protection or emergency scenarios.

  4. Virtual tutorials, Wikipedia books, and multimedia-based teaching for blended learning support in a course on algorithms and data structures

    NASA Astrophysics Data System (ADS)

    Knackmuß, Jenny; Creutzburg, Reiner

    2014-02-01

    The aim of this paper is to describe the benefit and support of virtual tutorials, Wikipedia books and multimedia-based teaching in a course on Algorithms and Data Structures. We describe our work and experiences gained from using virtual tutorials held in Netucate iLinc sessions and the use of various multimedia and animation elements for the support of deeper understanding of the ordinary lectures held in the standard classroom on Algorithms and Data Structures for undergraduate computer science students. We will describe the benefits, form, style and contents of those virtual tutorials. Furthermore, we mention the advantage of Wikipedia books to support the blended learning process using modern mobile devices. Finally, we give some first statistical measures of improved students' scores after introducing this new form of teaching support.

  5. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  6. Biomechanical Reconstruction Using the Tacit Learning System: Intuitive Control of Prosthetic Hand Rotation.

    PubMed

    Oyama, Shintaro; Shimoda, Shingo; Alnajjar, Fady S K; Iwatsuki, Katsuyuki; Hoshiyama, Minoru; Tanaka, Hirotaka; Hirata, Hitoshi

    2016-01-01

    Background: For mechanically reconstructing human biomechanical function, intuitive proportional control and robustness to unexpected situations are required. In particular, creating a functional hand prosthesis is a typical challenge in the reconstruction of lost biomechanical function. Nevertheless, currently available control algorithms are still in the development phase. The most advanced algorithms for controlling multifunctional prostheses are machine learning and pattern recognition of myoelectric signals. Despite the increase in computational speed, these methods cannot avoid the need for conscious user effort and are subject to classification errors. The "Tacit Learning System" is a simple but novel adaptive control strategy that can self-adapt its posture to environment changes. We introduced the strategy into prosthesis rotation control to reduce compensatory motion, and evaluated the system and its effects on the user. Methods: We conducted a non-randomized study involving eight prosthesis users who performed a bar relocation task with and without Tacit Learning System support. Hand piece and body motions were recorded continuously with goniometers, videos, and a motion-capture system. Findings: A reduction in upper-extremity rotatory compensation motion was observed during the relocation task in all participants. The estimated profile of total body energy consumption improved in five out of six participants. Interpretation: Our system rapidly accomplished nearly natural motion without unexpected errors. The Tacit Learning System not only adapts to human motions but also enhances the human ability to adapt to the system quickly, while the system amplifies compensation generated by the residual limb. The concept can be extended to various situations for reconstructing lost functions that can be compensated.

  7. Zero-block mode decision algorithm for H.264/AVC.

    PubMed

    Lee, Yu-Ming; Lin, Yinyi

    2009-03-01

    In a previous paper, we proposed a zero-block intermode decision algorithm for H.264 video coding based upon the number of zero-blocks of 4 x 4 DCT coefficients between the current macroblock and the co-located macroblock. The proposed algorithm can achieve significant improvement in computation, but the computation performance is limited for high bit-rate coding. To improve computation efficiency, in this paper, we suggest an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation and incorporates two adequate decision methods into semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is also applied to the intramode prediction in the P frame. The enhanced zero-block decision algorithm yields an average 27% reduction in total encoding time compared to the zero-block decision algorithm.
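
    The quantity driving the decision, the number of all-zero 4 x 4 blocks of quantized transform coefficients in a macroblock, can be sketched as follows; a floating-point DCT with a uniform quantization step is used here as a stand-in for the H.264 integer transform and quantizer.

        import numpy as np
        from scipy.fft import dctn

        def count_zero_blocks(macroblock, qstep=20.0):
            """Count all-zero 4x4 blocks of quantized DCT coefficients in a 16x16
            residual macroblock (floating-point approximation of H.264's transform)."""
            zeros = 0
            for by in range(0, 16, 4):
                for bx in range(0, 16, 4):
                    block = np.asarray(macroblock[by:by + 4, bx:bx + 4], dtype=float)
                    coeffs = dctn(block, norm='ortho')      # 4x4 2-D DCT
                    if not np.round(coeffs / qstep).any():  # every coefficient quantizes to zero
                        zeros += 1
            return zeros  # 0..16; more zero-blocks indicate a more stationary region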

  8. TPSLVM: a dimensionality reduction algorithm based on thin plate splines.

    PubMed

    Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming

    2014-10-01

    Dimensionality reduction (DR) has been considered as one of the most significant tools for data analysis. One type of DR algorithms is based on latent variable models (LVM). LVM-based models can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), our proposed TPSLVM is more powerful especially when the dimensionality of the latent space is low. Also, TPSLVM is robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM) as well as their combination BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction compared to PCA, GPLVM, ISOMAP, etc.

  9. New algorithms for optimal reduction of technical risks

    NASA Astrophysics Data System (ADS)

    Todinov, M. T.

    2013-06-01

    The article features exact algorithms for reduction of technical risk by (1) optimal allocation of resources in the case where the total potential loss from several sources of risk is a sum of the potential losses from the individual sources; (2) optimal allocation of resources to achieve a maximum reduction of system failure; and (3) making an optimal choice among competing risky prospects. The article demonstrates that the number of activities in a risky prospect is a key consideration in selecting the risky prospect. As a result, the maximum expected profit criterion, widely used for making risk decisions, is fundamentally flawed, because it does not consider the impact of the number of risk-reward activities in the risky prospects. A popular view, that if a single risk-reward bet with positive expected profit is unacceptable then a sequence of such identical risk-reward bets is also unacceptable, has been analysed and proved incorrect.
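
    The last claim can be made concrete with a small Monte Carlo illustration (not taken from the article): a single bet with positive expected profit can still lose with high probability, while a sequence of identical independent bets concentrates around its positive mean.

        import numpy as np

        rng = np.random.default_rng(3)
        p_win, gain, loss = 0.4, 3.0, 1.0   # expected profit per bet = 0.4*3 - 0.6*1 = +0.6
        trials = 100_000

        for n_bets in (1, 10, 100):
            wins = rng.binomial(n_bets, p_win, size=trials)
            profit = wins * gain - (n_bets - wins) * loss
            print(f"{n_bets:3d} bet(s): mean profit {profit.mean():7.1f}, "
                  f"P(loss) = {np.mean(profit < 0):.3f}")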

  10. TH-AB-207A-05: A Fully-Automated Pipeline for Generating CT Images Across a Range of Doses and Reconstruction Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, S; Lo, P; Hoffman, J

    Purpose: To evaluate the robustness of CAD or Quantitative Imaging methods, they should be tested on a variety of cases and under a variety of image acquisition and reconstruction conditions that represent the heterogeneity encountered in clinical practice. The purpose of this work was to develop a fully-automated pipeline for generating CT images that represent a wide range of dose and reconstruction conditions. Methods: The pipeline consists of three main modules: reduced-dose simulation, image reconstruction, and quantitative analysis. The first two modules of the pipeline can be operated in a completely automated fashion, using configuration files and running the modules in a batch queue. The input to the pipeline is raw projection CT data; this data is used to simulate different levels of dose reduction using a previously-published algorithm. Filtered-backprojection reconstructions are then performed using FreeCT-wFBP, a freely-available reconstruction software for helical CT. We also added support for an in-house, model-based iterative reconstruction algorithm using iterative coordinate-descent optimization, which may be run in tandem with the more conventional recon methods. The reduced-dose simulations and image reconstructions are controlled automatically by a single script, and they can be run in parallel on our research cluster. The pipeline was tested on phantom and lung screening datasets from a clinical scanner (Definition AS, Siemens Healthcare). Results: The images generated from our test datasets appeared to represent a realistic range of acquisition and reconstruction conditions that we would expect to find clinically. The time to generate images was approximately 30 minutes per dose/reconstruction combination on a hybrid CPU/GPU architecture. Conclusion: The automated research pipeline promises to be a useful tool for either training or evaluating performance of quantitative imaging software such as classifiers and CAD algorithms across the range of acquisition and reconstruction parameters present in the clinical environment. Funding support: NIH U01 CA181156; Disclosures (McNitt-Gray): Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.

  11. Numerical predictions and measurements in the lubrication of aeronautical engine and transmission components

    NASA Astrophysics Data System (ADS)

    Moraru, Laurentiu Eugen

    2005-11-01

    This dissertation treats a variety of aspects of the lubrication of mechanical components encountered in aeronautical engines and transmissions. The study covers dual clearance squeeze film dampers, mixed elastohydrodynamic lubrication (EHL) cases and thermal elastohydrodynamic contacts. The dual clearance squeeze film damper (SFD) invented by Fleming is investigated both theoretically and experimentally for cases when the sleeve that separates the two oil films is free to float and for cases when the separating sleeve is supported by a squirrel cage. The Reynolds equation is developed to handle each of these cases and it is solved analytically for short bearings. A rotordynamic model of a test rig is developed, for both the single and dual SFD cases. A computer code is written to calculate the motion of the test rig rotor. Experiments are performed in order to validate the theoretical results. Rotordynamics computations are found to favorably agree with measured data. A probabilistic model for mixed EHL is developed and implemented. The surface roughness of gears is measured and processed. The mixed EHL model incorporates the average flow model of Patir and Cheng and the elasto-plastic contact mechanics model of Chang, Etsion, and Bogy. The current algorithm allows for the computation of the load supported by an oil film and for the load supported by the elasto-plastically deformed asperities. This work also presents a way to incorporate the effect of the fluid-induced roughness deformation by utilizing the "amplitude reduction" results provided by the deterministic analyses. The Lobatto point Gaussian integration algorithm of Elrod and Brewe was extended for thermal lubrication problems involving compressible lubricants and it was implemented in thermal elastohydrodynamic cases. The unknown variables across the film are written in series of Legendre polynomials. The thermal Reynolds equation is obtained in terms of the series coefficients and it is proven that it can only explicitly contain the information from the first three Legendre polynomials. A computer code was written to implement the Lobatto point algorithm for an EHL line contact. Use of the Lobatto point calculation method has resulted in greater accuracy without the use of a larger number of grid points.

  12. Canonical Duality Theory and Algorithms for Solving Some Challenging Problems in Global Optimization and Decision Science

    DTIC Science & Technology

    2015-09-24

    algorithms for solving real-world problems. Within the past five years, 2 books, 5 journal special issues, and about 60 papers have been published... Four international conferences have been organized, including the 3rd World Congress of Global Optimization. A unified methodology and algorithm have... been developed with real-world applications. This grant has been used to support and co-support three post-doctors, three PhD students, one part

  13. An Overview of a Trajectory-Based Solution for En Route and Terminal Area Self-Spacing: Seventh Revision

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S.

    2015-01-01

    This paper presents an overview of the seventh revision to an algorithm specifically designed to support NASA's Airborne Precision Spacing concept. This paper supersedes the previous documentation and presents a modification to the algorithm referred to as the Airborne Spacing for Terminal Arrival Routes version 13 (ASTAR13). This airborne self-spacing concept contains both trajectory-based and state-based mechanisms for calculating the speeds required to achieve or maintain a precise spacing interval. The trajectory-based capability allows for spacing operations prior to the aircraft being on a common path. This algorithm was also designed specifically to support a standalone, non-integrated implementation in the spacing aircraft. This current revision to the algorithm adds the state-based capability in support of evolving industry standards relating to airborne self-spacing.

  14. An Overview of a Trajectory-Based Solution for En Route and Terminal Area Self-Spacing: Eighth Revision

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S.; Swieringa, Kurt S.

    2017-01-01

    This paper presents an overview of the eighth revision to an algorithm specifically designed to support NASA's Airborne Precision Spacing concept. This paper supersedes the previous documentation and presents a modification to the algorithm referred to as the Airborne Spacing for Terminal Arrival Routes version 13 (ASTAR13). This airborne self-spacing concept contains both trajectory-based and state-based mechanisms for calculating the speeds required to achieve or maintain a precise spacing interval with another aircraft. The trajectory-based capability allows for spacing operations prior to the aircraft being on a common path. This algorithm was also designed specifically to support a standalone, non-integrated implementation in the spacing aircraft. This current revision to the algorithm supports the evolving industry standards relating to airborne self-spacing.

  15. Multiphase flows of N immiscible incompressible fluids: A reduction-consistent and thermodynamically-consistent formulation and associated algorithm

    NASA Astrophysics Data System (ADS)

    Dong, S.

    2018-05-01

    We present a reduction-consistent and thermodynamically consistent formulation and an associated numerical algorithm for simulating the dynamics of an isothermal mixture consisting of N (N ⩾ 2) immiscible incompressible fluids with different physical properties (densities, viscosities, and pair-wise surface tensions). By reduction consistency we refer to the property that if only a set of M (1 ⩽ M ⩽ N - 1) fluids are present in the system then the N-phase governing equations and boundary conditions will exactly reduce to those for the corresponding M-phase system. By thermodynamic consistency we refer to the property that the formulation honors the thermodynamic principles. Our N-phase formulation is developed based on a more general method that allows for the systematic construction of reduction-consistent formulations, and the method suggests the existence of many possible forms of reduction-consistent and thermodynamically consistent N-phase formulations. Extensive numerical experiments have been presented for flow problems involving multiple fluid components and large density ratios and large viscosity ratios, and the simulation results are compared with the physical theories or the available physical solutions. The comparisons demonstrate that our method produces physically accurate results for this class of problems.

  16. Integrand reduction for two-loop scattering amplitudes through multivariate polynomial division

    NASA Astrophysics Data System (ADS)

    Mastrolia, Pierpaolo; Mirabella, Edoardo; Ossola, Giovanni; Peraro, Tiziano

    2013-04-01

    We describe the application of a novel approach for the reduction of scattering amplitudes, based on multivariate polynomial division, which we have recently presented. This technique yields the complete integrand decomposition for arbitrary amplitudes, regardless of the number of loops. It allows for the determination of the residue at any multiparticle cut, whose knowledge is a mandatory prerequisite for applying the integrand-reduction procedure. By using the division modulo Gröbner basis, we can derive a simple integrand recurrence relation that generates the multiparticle pole decomposition for integrands of arbitrary multiloop amplitudes. We apply the new reduction algorithm to the two-loop planar and nonplanar diagrams contributing to the five-point scattering amplitudes in N=4 super Yang-Mills and N=8 supergravity in four dimensions, whose numerator functions contain up to rank-two terms in the integration momenta. We determine all polynomial residues parametrizing the cuts of the corresponding topologies and subtopologies. We obtain the integral basis for the decomposition of each diagram from the polynomial form of the residues. Our approach is well suited for a seminumerical implementation, and its general mathematical properties provide an effective algorithm for the generalization of the integrand-reduction method to all orders in perturbation theory.

  17. A Log-Scaling Fault Tolerant Agreement Algorithm for a Fault Tolerant MPI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hursey, Joshua J; Naughton, III, Thomas J; Vallee, Geoffroy R

    The lack of fault tolerance is becoming a limiting factor for application scalability in HPC systems. MPI does not provide standardized fault-tolerance interfaces and semantics. The MPI Forum's Fault Tolerance Working Group is proposing a collective fault-tolerant agreement algorithm for the next MPI standard. Such algorithms play a central role in many fault-tolerant applications. This paper combines a log-scaling two-phase commit agreement algorithm with a reduction operation to provide the necessary functionality for the new collective without any additional messages. Error handling mechanisms are described that preserve the fault tolerance properties while maintaining overall scalability.
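
    Conceptually, agreement expressed as a reduction can be sketched with mpi4py as below; this shows only the "agreement via a collective reduction" idea and none of the log-scaling two-phase commit or fault-tolerance machinery of the proposed algorithm.

        # Run with, e.g.: mpiexec -n 4 python agree.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Hypothetical local outcome: 1 if this rank's work succeeded, else 0.
        local_ok = 1

        # MIN-reduction over the flags: the result is 1 only if every rank reported
        # success, so all ranks learn the same agreed-upon outcome.
        all_ok = comm.allreduce(local_ok, op=MPI.MIN)

        if rank == 0:
            print("commit" if all_ok == 1 else "rollback")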

  18. Glint-induced false alarm reduction in signature adaptive target detection

    NASA Astrophysics Data System (ADS)

    Crosby, Frank J.

    2002-07-01

    The signal adaptive target detection algorithm developed by Crosby and Riley uses target geometry to discern anomalies in local backgrounds. Detection is not restricted based on specific target signatures. The robustness of the algorithm is limited by an increased false alarm potential. The base algorithm is extended to eliminate one common source of false alarms in a littoral environment. This common source is glint reflected on the surface of water. The spectral and spatial transience of glint prevent straightforward characterization and complicate exclusion. However, the statistical basis of the detection algorithm and its inherent computations allow for glint discernment and the removal of its influence.

  19. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
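
    A rough sketch of the described pipeline (decimate, compress with JPEG, decompress, interpolate back, sharpen) using Pillow; the wavelet option and the patent's specific sharpening technique are not reproduced, and the unsharp-mask parameters are arbitrary.

        import io
        from PIL import Image, ImageFilter

        def encode(img: Image.Image, factor: int = 2, quality: int = 75) -> bytes:
            """Sender side: decimate in both dimensions, then JPEG-encode."""
            small = img.resize((img.width // factor, img.height // factor))
            buf = io.BytesIO()
            small.save(buf, format="JPEG", quality=quality)
            return buf.getvalue()

        def decode(payload: bytes, original_size: tuple) -> Image.Image:
            """Receiver side: JPEG-decode, interpolate back to the original array
            size, then sharpen edges to improve the perceptual quality."""
            small = Image.open(io.BytesIO(payload))
            upscaled = small.resize(original_size)
            return upscaled.filter(ImageFilter.UnsharpMask(radius=2, percent=120))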

  20. Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.

    2017-03-01

    Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained `One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data is excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.

  1. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASAs Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through FSW certification are an important focus of SLS's development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. To test and validate these M&FM algorithms a dedicated test-bed was developed for full Vehicle Management End-to-End Testing (VMET). For addressing fault management (FM) early in the development lifecycle for the SLS program, NASA formed the M&FM team as part of the Integrated Systems Health Management and Automation Branch under the Spacecraft Vehicle Systems Department at the Marshall Space Flight Center (MSFC). To support the development of the FM algorithms, the VMET developed by the M&FM team provides the ability to integrate the algorithms, perform test cases, and integrate vendor-supplied physics-based launch vehicle (LV) subsystem models. Additionally, the team has developed processes for implementing and validating the M&FM algorithms for concept validation and risk reduction. The flexibility of the VMET capabilities enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS, GNC, and others. One of the principal functions of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software test and validation processes. In any software development process there is inherent risk in the interpretation and implementation of concepts from requirements and test cases into flight software compounded with potential human errors throughout the development and regression testing lifecycle. 
Risk reduction is addressed by the M&FM group but in particular by the Analysis Team working with other organizations such as S&MA, Structures and Environments, GNC, Orion, Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission (LOM) and Loss of Crew (LOC) probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses to be tested in VMET to ensure reliable failure detection, and confirm responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - the ARINC 653-partitioned operating system, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM such as telemetry packing and processing. The baseline plan for use of VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as that used by FSW. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure their effectiveness and performance in the exterior FSW development and test processes. This paper is outlined in a systematic fashion analogous to a lifecycle process flow for engineering development of algorithms into software and testing. Section I describes the NASA SLS M&FM context, presenting the current infrastructure, leading principles, methods, and participants. Section II defines the testing philosophy of the M&FM algorithms as related to VMET, followed by Section III, which presents the modeling methods of the algorithms to be tested and validated in VMET. Its details are then further presented in Section IV, followed by Section V presenting integration, test status, and state analysis. Finally, Section VI addresses the summary and forward directions, followed by the appendices presenting relevant information on terminology and documentation.

  2. Streak detection and analysis pipeline for optical images

    NASA Astrophysics Data System (ADS)

    Virtanen, J.; Granvik, M.; Torppa, J.; Muinonen, K.; Poikonen, J.; Lehti, J.; Säntti, T.; Komulainen, T.; Flohrer, T.

    2014-07-01

    We describe a novel data processing and analysis pipeline for optical observations of moving objects, either of natural (asteroids, meteors) or artificial origin (satellites, space debris). The monitoring of the space object populations requires reliable acquisition of observational data to support the development and validation of population models, and to build and maintain catalogues of orbital elements. The orbital catalogues are, in turn, needed for the assessment of close approaches (for asteroids, with the Earth; for satellites, with each other) and for the support of contingency situations or launches. For both types of populations, there is also increasing interest to detect fainter objects corresponding to the small end of the size distribution. We focus on the low signal-to-noise (SNR) detection of objects with high angular velocities, resulting in long and faint object trails, or streaks, in the optical images. The currently available, mature image processing algorithms for detection and astrometric reduction of optical data cover objects that cross the sensor field-of-view comparably slowly, and, particularly for satellites, within a rather narrow, predefined range of angular velocities. By applying specific tracking techniques, the objects appear point-like or as short trails in the exposures. However, the general survey scenario is always a 'track-before-detect' problem, resulting in streaks of arbitrary lengths. Although some considerations for low-SNR processing of streak-like features are available in the current image processing and computer vision literature, algorithms are not readily available yet. In the ESA-funded StreakDet (Streak detection and astrometric reduction) project, we develop and evaluate an automated processing pipeline applicable to single images (as compared to consecutive frames of the same field) obtained with any observing scenario, including space-based surveys and both low- and high-altitude populations. The algorithmic flow starts from the segmentation of the acquired image (i.e., the extraction of all sources), followed by the astrometric and photometric characterization of the candidate streaks, and ends with orbital validation of the detected streaks. For the low-SNR extraction of objects, we put forward an approach which does not rely on a priori information, such as the object velocities, a typical assumption in earlier implementations. Our algorithm is based on local grayscale mean difference evaluation, followed by a threshold operation and spatial filtering of black-and-white (1-bit) data to remove stars and other non-streak features. For long streaks, the challenge is to extract position information and related registered epochs with sufficient precision. Moreover, satellite streaks can show up in complex morphologies because of their fast, and often irregular lightcurve variations. A central concept of the pipeline is streak classification which guides the actual characterization process by aiming to identify the interesting sources and to filter out the uninteresting ones, as well as by allowing the tailoring of algorithms for specific streak classes (e.g. PSF fitting for point-like vs. long, disintegrated streaks). Finally, to validate the single-image detections, the processing is finalized by orbital analysis using our statistical inverse methods (see, Muinonen et al., this conference), resulting in preliminary orbital classification (e.g., Earth-bound vs. non-Earth-bound orbits) for the detected streaks.
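
    The low-SNR extraction step described above (local grayscale mean difference, thresholding, and spatial filtering of the 1-bit map) can be sketched with scipy.ndimage as follows; the parameters are illustrative and this is not the StreakDet implementation.

        import numpy as np
        from scipy import ndimage

        def extract_streak_candidates(image, box=31, k=3.0, min_pixels=40):
            """Segment faint extended features: local mean difference -> threshold ->
            spatial filtering of the binary (1-bit) map to suppress stars and noise."""
            img = np.asarray(image, dtype=float)
            diff = img - ndimage.uniform_filter(img, size=box)   # local mean difference
            binary = diff > k * diff.std()                       # 1-bit threshold map
            binary = ndimage.binary_opening(binary, structure=np.ones((2, 2)))
            labels, n = ndimage.label(binary)                    # connected components
            sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
            keep = 1 + np.flatnonzero(sizes >= min_pixels)       # extended, streak-like components
            return np.isin(labels, keep), labels, keep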

  3. Integrating dimension reduction and out-of-sample extension in automated classification of ex vivo human patellar cartilage on phase contrast X-ray computed tomography.

    PubMed

    Nagarajan, Mahesh B; Coan, Paola; Huber, Markus B; Diemoz, Paul C; Wismüller, Axel

    2015-01-01

    Phase contrast X-ray computed tomography (PCI-CT) has been demonstrated as a novel imaging technique that can visualize human cartilage with high spatial resolution and soft tissue contrast. Different textural approaches have been previously investigated for characterizing chondrocyte organization on PCI-CT to enable classification of healthy and osteoarthritic cartilage. However, the large size of feature sets extracted in such studies motivates an investigation into algorithmic feature reduction for computing efficient feature representations without compromising their discriminatory power. For this purpose, geometrical feature sets derived from the scaling index method (SIM) were extracted from 1392 volumes of interest (VOI) annotated on PCI-CT images of ex vivo human patellar cartilage specimens. The extracted feature sets were subject to linear and non-linear dimension reduction techniques as well as feature selection based on evaluation of mutual information criteria. The reduced feature set was subsequently used in a machine learning task with support vector regression to classify VOIs as healthy or osteoarthritic; classification performance was evaluated using the area under the receiver-operating characteristic (ROC) curve (AUC). Our results show that the classification performance achieved by 9-D SIM-derived geometric feature sets (AUC: 0.96 ± 0.02) can be maintained with 2-D representations computed from both dimension reduction and feature selection (AUC values as high as 0.97 ± 0.02). Thus, such feature reduction techniques can offer a high degree of compaction to large feature sets extracted from PCI-CT images while maintaining their ability to characterize the underlying chondrocyte patterns.
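
    Schematically, the classification stage (a reduced feature representation fed to support vector regression and scored with the area under the ROC curve) looks like the following scikit-learn sketch; the random data stand in for the SIM-derived features and the numbers carry no relation to the reported AUCs.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.svm import SVR
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        # Synthetic stand-in for 9-D geometric features per VOI (1 = osteoarthritic).
        rng = np.random.default_rng(4)
        y = rng.integers(0, 2, size=600)
        X = rng.normal(size=(600, 9)) + 0.8 * y[:, None]

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        # Reduce to a 2-D representation, then use SVR's continuous output as the
        # classification score evaluated with ROC AUC.
        pca = PCA(n_components=2).fit(X_tr)
        svr = SVR(kernel="rbf").fit(pca.transform(X_tr), y_tr)
        print("AUC:", roc_auc_score(y_te, svr.predict(pca.transform(X_te))))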

  4. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony

    1990-01-01

    The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  5. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.

    1990-01-01

    Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  6. Static and dynamic structural characterization of nanomaterial catalysts

    NASA Astrophysics Data System (ADS)

    Masiel, Daniel Joseph

    Heterogeneous catalyst systems are pervasive in industry, technology and academia. These systems often involve nanostructured transition metal particles that have crucial interfaces with either their supports or solid products. Understanding the nature of these interfaces as well as the structure of the catalysts and support materials themselves is crucial for the advancement of catalysis in general. Recent developments in the field of transmission electron microscopy (TEM) including dynamic transmission electron microscopy (DTEM), electron tomography, and in situ techniques stand poised to provide fresh insight into nanostructured catalyst systems. Several electron microscopy techniques are applied in this study to elucidate the mechanism of silica nanocoil growth and to discern the role of the support material and catalyst size in carbon dioxide and steam reforming of methane. The growth of silica nanocoils by faceted cobalt nanoparticles is a process that was initially believed to take place via a vapor-liquid-solid growth mechanism similar to other nanowire growth techniques. The extensive TEM work described here suggests that the process may instead occur via transport of silicate and silica species over the nanoparticle surface. Electron tomography studies of the interface between the catalyst particles and the wire indicate that they grow from edges between facets. Studies on reduction of the Co3O4 nanoparticle precursors to the faceted pure cobalt catalysts were carried out using DTEM and in situ heating. Supported catalyst systems for methane reforming were studied using dark field scanning TEM to better understand sintering effects and the increased activity of Ni/Co catalysts supported by carbon nanotubes. Several novel electron microscopy techniques are described including annular dark field DTEM and a metaheuristic algorithm for solving the phase problem of coherent diffractive imaging. By inserting an annular dark field aperture into the back focal plane of the objective lens in a DTEM, time-resolved dark field images can be produced that have vastly improved contrast for supported catalyst materials compared to bright field DTEM imaging. A new algorithm called swarm optimized phase retrieval is described that uses a population-based approach to solve for the missing phases of diffraction data from discrete particles.

  7. What does voice-processing technology support today?

    PubMed Central

    Nakatsu, R; Suzuki, Y

    1995-01-01

    This paper describes the state of the art in applications of voice-processing technologies. In the first part, technologies concerning the implementation of speech recognition and synthesis algorithms are described. Hardware technologies such as microprocessors and DSPs (digital signal processors) are discussed. The software development environment, a key technology in developing application software ranging from DSP software to support software, is also described. In the second part, the state of the art of algorithms from the standpoint of applications is discussed. Several issues concerning evaluation of speech recognition/synthesis algorithms are covered, as well as issues concerning the robustness of algorithms in adverse conditions. PMID:7479720

  8. A Simple and Computationally Efficient Approach to Multifactor Dimensionality Reduction Analysis of Gene-Gene Interactions for Quantitative Traits

    PubMed Central

    Gui, Jiang; Moore, Jason H.; Williams, Scott M.; Andrews, Peter; Hillege, Hans L.; van der Harst, Pim; Navis, Gerjan; Van Gilst, Wiek H.; Asselbergs, Folkert W.; Gilbert-Diamond, Diane

    2013-01-01

    We present an extension of the two-class multifactor dimensionality reduction (MDR) algorithm that enables detection and characterization of epistatic SNP-SNP interactions in the context of a quantitative trait. The proposed Quantitative MDR (QMDR) method handles continuous data by modifying MDR’s constructive induction algorithm to use a T-test. QMDR replaces the balanced accuracy metric with a T-test statistic as the score to determine the best interaction model. We used a simulation to identify the empirical distribution of QMDR’s testing score. We then applied QMDR to genetic data from the ongoing prospective Prevention of Renal and Vascular End-Stage Disease (PREVEND) study. PMID:23805232
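
    A toy sketch of the scoring idea, assuming two SNPs coded 0/1/2 and a continuous trait: genotype cells are pooled into high- and low-risk groups by comparing each cell's mean trait with the overall mean, and the model is scored with a t-statistic between the pooled groups. This illustrates the constructive-induction-plus-T-test principle, not the published QMDR implementation.

        import numpy as np
        from scipy.stats import ttest_ind

        def qmdr_score(snp_a, snp_b, trait):
            """Score a two-SNP interaction model for a quantitative trait (toy sketch)."""
            overall = trait.mean()
            high = np.zeros(len(trait), dtype=bool)
            for ga in (0, 1, 2):
                for gb in (0, 1, 2):
                    cell = (snp_a == ga) & (snp_b == gb)
                    # Label the genotype cell high-risk if its mean trait exceeds the overall mean.
                    if cell.any() and trait[cell].mean() > overall:
                        high[cell] = True
            if high.all() or not high.any():
                return 0.0                      # degenerate pooling, no contrast to test
            t_stat, _ = ttest_ind(trait[high], trait[~high], equal_var=False)
            return abs(t_stat)                  # larger score = better interaction model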

  9. Identification of Flights for Cost-Efficient Climate Impact Reduction

    NASA Technical Reports Server (NTRS)

    Chen, Neil Y.; Kirschen, Philippe G.; Sridhar, Banavar; Ng, Hok K.

    2014-01-01

    The aircraft-induced climate impact has drawn attention in recent years. Aviation operations affect the environment mainly through the release of carbon dioxide and nitrogen oxides, and by the formation of contrails. Recent research has shown that altering trajectories can reduce aviation environmental cost by reducing Absolute Global Temperature Change Potential, a climate assessment metric that adapts a linear system for modeling the global temperature response to aviation emissions and contrails. However, these methods increase fuel consumption, which leads to higher fuel costs imposed on airlines. The goal of this work is to identify flights for which the reduction in environmental cost of climate impact outweighs the increase in operational cost on an individual aircraft basis. Environmental cost is quantified using the monetary social cost of carbon. The increase in operational cost considers the cost of additional fuel usage only. For this paper, an algorithm has been developed that modifies the trajectories of flights to evaluate the effect on environmental cost and operational cost of flights in the United States National Airspace System. The algorithm identifies flights for which the environmental cost of climate impact can be reduced and modifies their trajectories to achieve maximum environmental net benefit, which is the difference between the reduction in environmental cost and the additional operational cost. The results show that, on a selected day, 16% of the flights among eight major airlines, or 2,043 flights, can achieve environmental net benefit using weather forecast data, resulting in a net benefit of around $500,000. The results also suggest that long-haul flights would be better candidates for cost-efficient climate impact reduction than short-haul flights. The algorithm will help to identify the characteristics of flights that are capable of applying a cost-efficient climate impact reduction strategy.
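
    The per-flight selection criterion reduces to simple arithmetic, sketched below with made-up numbers: monetized climate benefit (social cost of carbon times the avoided warming-equivalent emissions) minus the extra fuel cost, keeping only flights where the difference is positive.

        import numpy as np

        # Hypothetical per-flight quantities (illustrative values only).
        climate_benefit_tonnes_co2e = np.array([12.0, 3.5, 0.8, 20.0])  # avoided impact
        extra_fuel_kg = np.array([400.0, 900.0, 300.0, 500.0])
        social_cost_per_tonne = 40.0      # assumed social cost of carbon, $/tonne
        fuel_cost_per_kg = 0.8            # assumed fuel price, $/kg

        net_benefit = (climate_benefit_tonnes_co2e * social_cost_per_tonne
                       - extra_fuel_kg * fuel_cost_per_kg)
        selected = net_benefit > 0        # reroute only flights with positive net benefit
        print(net_benefit, selected)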

  10. An Evaluation of the WSSC (Weapon System Support Cost) Cost Allocation Algorithms. II. Installation Support.

    DTIC Science & Technology

    1983-06-01

    S XX3OXX, or XX37XX is found. As a result, the following two host-financed tenant support accounts currently will be treated as unit operations costs ... C. Horngren, Cost Accounting: A Managerial Emphasis, Prentice-Hall Inc., Englewood Cliffs, NJ, 1972. ... D. B. Levine and J. M. Jondrow, "The ...

  11. Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Thurow, Brian S.

    2016-09-01

    A new algorithm for the reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing with the addition of a post-reconstruction filter to remove out-of-focus particles. This new algorithm is tested in terms of reconstruction quality on synthetic particle fields as well as a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of reconstructed particle position accuracy, but produces more elongated particles. The major advantage of the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume. It is shown that the new algorithm takes 1/9th the time to reconstruct the same volume as MART while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.

  12. The contour-buildup algorithm to calculate the analytical molecular surface.

    PubMed

    Totrov, M; Abagyan, R

    1996-01-01

    A new algorithm is presented to calculate the analytical molecular surface defined as a smooth envelope traced out by the surface of a probe sphere rolled over the molecule. The core of the algorithm is the sequential buildup of multi-arc contours on the van der Waals spheres. This algorithm yields a substantial reduction in both the memory and time requirements of surface calculations. Further, the contour-buildup principle is intrinsically "local", which makes calculations of the partial molecular surfaces even more efficient. Additionally, the algorithm is equally applicable not only to convex patches, but also to concave triangular patches which may have complex multiple intersections. The algorithm permits the rigorous calculation of the full analytical molecular surface for a 100-residue protein in about 2 seconds on an SGI Indigo with an R4400 processor at 150 MHz, with the performance scaling almost linearly with the protein size. The contour-buildup algorithm is faster than the original Connolly algorithm by an order of magnitude.

  13. Dynamic PET image reconstruction integrating temporal regularization associated with respiratory motion correction for applications in oncology

    NASA Astrophysics Data System (ADS)

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric

    2018-02-01

    Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks, by combining a respiratory motion correction approach with temporal regularization in a unique reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on reconstructed images with the proposed reconstruction methods resulted in 30% and 40% bias reduction in the tumor and lung region respectively for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). Incorporation of the respiratory motion correction using an elastic model along with a temporal regularization in the reconstruction process of the PET dynamic series led to substantial quantitative improvements and motion artifact reduction. Future work will include the integration of a linear FDG kinetic model, in order to directly reconstruct parametric images.

  14. Dynamic PET image reconstruction integrating temporal regularization associated with respiratory motion correction for applications in oncology.

    PubMed

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric

    2018-02-13

    Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks, by combining a respiratory motion correction approach with temporal regularization in a unique reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on reconstructed images with the proposed reconstruction methods resulted in 30% and 40% bias reduction in the tumor and lung region respectively for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). Incorporation of the respiratory motion correction using an elastic model along with a temporal regularization in the reconstruction process of the PET dynamic series led to substantial quantitative improvements and motion artifact reduction. Future work will include the integration of a linear FDG kinetic model, in order to directly reconstruct parametric images.

  15. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    PubMed

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
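    The idea of carrying out a large maximization as a sequence of smaller, self-contained updates can be illustrated with a toy mixture model (a minimal sketch only; the two parameter blocks below happen to be separable, whereas the paper's orthogonally partitioned EM handles genuinely coupled sub-problems with missing data and measurement error).

```python
# Toy sketch: the M-step of a two-component Gaussian mixture (unit variances)
# is carried out as two smaller, self-contained block updates. This only
# illustrates the "sequence of simpler steps" idea, not the authors' method.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

mu = np.array([-1.0, 1.0])   # component means (initial guess)
pi1 = 0.5                    # mixing weight of the second component

def normal_pdf(v, m):
    return np.exp(-0.5 * (v - m) ** 2) / np.sqrt(2 * np.pi)

for _ in range(100):
    # E-step: responsibilities of component 2
    p1 = (1 - pi1) * normal_pdf(x, mu[0])
    p2 = pi1 * normal_pdf(x, mu[1])
    r = p2 / (p1 + p2)

    # partitioned M-step, block 1: update the means, weight held fixed
    mu = np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                   np.sum(r * x) / np.sum(r)])

    # partitioned M-step, block 2: update the mixing weight, means held fixed
    pi1 = r.mean()

print("means:", np.round(mu, 2), "weight:", round(pi1, 2))
```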

  16. Design a software real-time operation platform for wave piercing catamarans motion control using linear quadratic regulator based genetic algorithm.

    PubMed

    Liang, Lihua; Yuan, Jia; Zhang, Songtao; Zhao, Peng

    2018-01-01

    This work presents an optimal linear quadratic regulator (LQR) based on a genetic algorithm (GA) to solve the two degrees of freedom (2 DoF) motion control problem in head seas for wave piercing catamarans (WPC). The proposed LQR-based GA control strategy selects the optimal weighting matrices (Q and R). Seakeeping control of the WPC is challenging because it is a multi-input multi-output (MIMO) system with uncertain coefficients. Besides the kinematic constraints of the WPC, external conditions must be considered, such as sea disturbances and the control of the actuators (a T-foil and two flaps). Moreover, this paper describes the MATLAB and LabVIEW software platforms used to simulate the motion reduction of the WPC. Finally, the real-time (RT) NI CompactRIO embedded controller is selected to test the effectiveness of the actuators based on the proposed techniques. Simulation and experimental results prove the correctness of the proposed algorithm. The heave and pitch reductions exceed 18% at different high speeds and in rough sea conditions, and the results also verify the feasibility of the NI CompactRIO embedded controller.
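    The general mechanics of selecting LQR weighting matrices with an evolutionary search can be sketched as follows. This is a hedged illustration only: the 2x2 plant, the cost metric and the GA-style (selection plus mutation) search are invented stand-ins, not the WPC model or the paper's genetic algorithm.

```python
# Hedged sketch of evolutionary selection of LQR weights: candidate diagonal
# Q matrices are scored by a simulated closed-loop cost. The plant below is
# a made-up 2x2 example, not the wave-piercing catamaran dynamics.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # toy heave-like dynamics
B = np.array([[0.0], [1.0]])
R = np.array([[1.0]])

def closed_loop_cost(q_diag, x0=np.array([1.0, 0.0]), dt=0.01, steps=2000):
    Q = np.diag(q_diag)
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)        # optimal gain for this Q
    x, J = x0.copy(), 0.0
    for _ in range(steps):                 # fixed performance metric:
        u = -K @ x                         # state deviation + control effort
        J += (x @ x + 0.1 * float(u @ u)) * dt
        x = x + dt * (A @ x + B @ u)
    return J

rng = np.random.default_rng(1)
pop = rng.uniform(0.1, 50.0, size=(20, 2))          # population of Q diagonals
for gen in range(30):
    fitness = np.array([closed_loop_cost(q) for q in pop])
    parents = pop[np.argsort(fitness)[:10]]          # selection (elitism)
    children = parents[rng.integers(0, 10, 10)] * rng.normal(1.0, 0.1, (10, 2))
    pop = np.vstack([parents, np.clip(children, 0.01, 100.0)])  # mutation

best = pop[np.argmin([closed_loop_cost(q) for q in pop])]
print("best Q diagonal found:", np.round(best, 2))
```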

  17. Marginal Fisher analysis and its variants for human gait recognition and content-based image retrieval.

    PubMed

    Xu, Dong; Yan, Shuicheng; Tao, Dacheng; Lin, Stephen; Zhang, Hong-Jiang

    2007-11-01

    Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for human gait recognition and content-based image retrieval (CBIR). In this paper, we present extensions of our recently proposed marginal Fisher analysis (MFA) to address these problems. For human gait recognition, we first present a direct application of MFA, then inspired by recent advances in matrix and tensor-based dimensionality reduction algorithms, we present matrix-based MFA for directly handling 2-D input in the form of gray-level averaged images. For CBIR, we deal with the relevance feedback problem by extending MFA to marginal biased analysis, in which within-class compactness is characterized only by the distances between each positive sample and its neighboring positive samples. In addition, we present a new technique to acquire a direct optimal solution for MFA without resorting to objective function modification as done in many previous algorithms. We conduct comprehensive experiments on the USF HumanID gait database and the Corel image retrieval database. Experimental results demonstrate that MFA and its extensions outperform related algorithms in both applications.

  18. Design a software real-time operation platform for wave piercing catamarans motion control using linear quadratic regulator based genetic algorithm

    PubMed Central

    Liang, Lihua; Zhang, Songtao; Zhao, Peng

    2018-01-01

    This work presents an optimal linear quadratic regulator (LQR) based on a genetic algorithm (GA) to solve the two degrees of freedom (2 DoF) motion control problem in head seas for wave piercing catamarans (WPC). The proposed LQR-based GA control strategy selects the optimal weighting matrices (Q and R). Seakeeping control of the WPC is challenging because it is a multi-input multi-output (MIMO) system with uncertain coefficients. Besides the kinematic constraints of the WPC, external conditions must be considered, such as sea disturbances and the control of the actuators (a T-foil and two flaps). Moreover, this paper describes the MATLAB and LabVIEW software platforms used to simulate the motion reduction of the WPC. Finally, the real-time (RT) NI CompactRIO embedded controller is selected to test the effectiveness of the actuators based on the proposed techniques. Simulation and experimental results prove the correctness of the proposed algorithm. The heave and pitch reductions exceed 18% at different high speeds and in rough sea conditions, and the results also verify the feasibility of the NI CompactRIO embedded controller. PMID:29709008

  19. Improved Cost-Base Design of Water Distribution Networks using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Moradzadeh Azar, Foad; Abghari, Hirad; Taghi Alami, Mohammad; Weijs, Steven

    2010-05-01

    Population growth and the progressive extension of urbanization in different parts of Iran cause an increasing demand for primary needs. Water, this vital liquid, is the most important natural need for human life. Meeting this need requires the design and construction of water distribution networks, which impose enormous costs on the country's budget. Any reduction in these costs allows more of society to be served at lower cost, so municipal councils need their investments to maximize benefits or minimize expenditures. To achieve this purpose, the engineering design depends on cost optimization techniques. This paper presents optimization models based on a genetic algorithm (GA) to find the minimum design cost of Mahabad City's (northwest Iran) water distribution network. By designing two models and comparing the resulting costs, the abilities of the GA were determined. The GA-based model could find optimum pipe diameters that reduce the design cost of the network. Results show that designing the water distribution network using the genetic algorithm could reduce project costs by at least 7% in comparison to the classic model. Keywords: Genetic Algorithm, Optimum Design of Water Distribution Network, Mahabad City, Iran.
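    The structure of a GA for pipe sizing can be sketched with a toy network: each gene is a diameter index drawn from a discrete catalogue, and the fitness is the pipe cost plus a penalty when head loss exceeds an allowance. All numbers (catalogue, lengths, flows, the Hazen-Williams-style constants) are invented for illustration and have nothing to do with the Mahabad network.

```python
# Toy GA for pipe sizing: choose a diameter per pipe from a discrete catalogue
# to minimize cost plus a penalty when total head loss exceeds an allowance.
# Costs, lengths, flows and the loss constants are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
diams = np.array([0.10, 0.15, 0.20, 0.25, 0.30])      # m
cost_per_m = np.array([40., 60., 90., 130., 180.])    # currency units per m
lengths = rng.uniform(100, 500, size=8)               # 8 pipes
flows = rng.uniform(0.005, 0.03, size=8)              # m^3/s
H_ALLOW = 25.0                                        # allowed total head loss (m)

def head_loss(d_idx):
    d = diams[d_idx]
    # Hazen-Williams-like loss; constants chosen only for the toy example
    return np.sum(10.67 * lengths * (flows / 130.0) ** 1.852 / d ** 4.87)

def fitness(d_idx):
    cost = np.sum(cost_per_m[d_idx] * lengths)
    penalty = 1e6 * max(0.0, head_loss(d_idx) - H_ALLOW)
    return cost + penalty

pop = rng.integers(0, len(diams), size=(40, 8))
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:20]]
    # one-point crossover between random parent pairs, then random mutation
    a, b = parents[rng.integers(0, 20, 20)], parents[rng.integers(0, 20, 20)]
    cut = rng.integers(1, 8, size=(20, 1))
    children = np.where(np.arange(8) < cut, a, b)
    mutate = rng.random((20, 8)) < 0.05
    children[mutate] = rng.integers(0, len(diams), mutate.sum())
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("diameters (m):", diams[best], " cost:", round(fitness(best), 0))
```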

  20. A model reduction approach to numerical inversion for a parabolic partial differential equation

    NASA Astrophysics Data System (ADS)

    Borcea, Liliana; Druskin, Vladimir; Mamonov, Alexander V.; Zaslavsky, Mikhail

    2014-12-01

    We propose a novel numerical inversion algorithm for the coefficients of parabolic partial differential equations, based on model reduction. The study is motivated by the application of controlled source electromagnetic exploration, where the unknown is the subsurface electrical resistivity and the data are time resolved surface measurements of the magnetic field. The algorithm presented in this paper considers inversion in one and two dimensions. The reduced model is obtained with rational interpolation in the frequency (Laplace) domain and a rational Krylov subspace projection method. It amounts to a nonlinear mapping from the function space of the unknown resistivity to the small dimensional space of the parameters of the reduced model. We use this mapping as a nonlinear preconditioner for the Gauss-Newton iterative solution of the inverse problem. The advantage of the inversion algorithm is twofold. First, the nonlinear preconditioner resolves most of the nonlinearity of the problem. Thus the iterations are less likely to get stuck in local minima and the convergence is fast. Second, the inversion is computationally efficient because it avoids repeated accurate simulations of the time-domain response. We study the stability of the inversion algorithm for various rational Krylov subspaces, and assess its performance with numerical experiments.

  1. Generation Algorithm of Discrete Line in Multi-Dimensional Grids

    NASA Astrophysics Data System (ADS)

    Du, L.; Ben, J.; Li, Y.; Wang, R.

    2017-09-01

    A Discrete Global Grid System (DGGS) is a digital multi-resolution earth reference model whose structure is well suited to the integration and mining of geographic spatial big data. Vectors are an important type of spatial data, and only through discretization can they be processed and analyzed in a grid system. Based on a set of constraint conditions, this paper puts forward a strict definition of discrete lines and builds a mathematical model of them using a base-vector combination method. The problem of mesh discrete lines in n-dimensional grids is transformed, via a hyperplane, into the problem of an optimal deviated path in n-1 dimensions, thereby realizing a dimension-reduction process in the expression of mesh discrete lines. On this basis, we designed a simple and efficient algorithm for dimension reduction and generation of the discrete lines. The experimental results show that our algorithm can be applied not only to the two-dimensional rectangular grid, but also to the two-dimensional hexagonal grid and the three-dimensional cubic grid. Moreover, when applied to the two-dimensional rectangular grid, it yields a discrete line that is closer to the corresponding line in Euclidean space.
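    The underlying stepping idea can be illustrated on a plain 2D rectangular grid: at each cell, take the base-vector move whose resulting cell centre deviates least from the ideal Euclidean line. This is only a toy analogue of the "optimal deviated path" notion, not the paper's n-dimensional hyperplane construction or its hexagonal/cubic variants.

```python
# Minimal sketch of discrete-line generation on a 2D rectangular grid:
# at each step pick the base-vector move whose new cell centre deviates
# least from the ideal Euclidean line between the endpoints.
import numpy as np

def discrete_line(p0, p1, max_steps=10_000):
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    base_vectors = [np.array(v, float) for v in ((1, 0), (-1, 0), (0, 1), (0, -1),
                                                 (1, 1), (1, -1), (-1, 1), (-1, -1))]

    def deviation(c):
        # perpendicular distance of a cell centre c from the line p0 -> p1
        return abs(d[0] * (c[1] - p0[1]) - d[1] * (c[0] - p0[0])) / np.linalg.norm(d)

    cell = p0.copy()
    path = [tuple(cell.astype(int))]
    for _ in range(max_steps):                       # step cap as a safety net
        if np.array_equal(cell, p1):
            break
        # consider only moves that make progress towards the end point
        moves = [v for v in base_vectors if np.dot(v, p1 - cell) > 0]
        cell = cell + min(moves, key=lambda v: deviation(cell + v))
        path.append(tuple(cell.astype(int)))
    return path

print(discrete_line((0, 0), (7, 3)))
```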

  2. Coupled dimensionality reduction and classification for supervised and semi-supervised multilabel learning

    PubMed Central

    Gönen, Mehmet

    2014-01-01

    Coupled training of dimensionality reduction and classification is proposed previously to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and semi-supervised learning tasks. PMID:24532862

  3. Coupled dimensionality reduction and classification for supervised and semi-supervised multilabel learning.

    PubMed

    Gönen, Mehmet

    2014-03-01

    Coupled training of dimensionality reduction and classification is proposed previously to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and semi-supervised learning tasks.

  4. Integrating Algorithm Visualization Video into a First-Year Algorithm and Data Structure Course

    ERIC Educational Resources Information Center

    Crescenzi, Pilu; Malizia, Alessio; Verri, M. Cecilia; Diaz, Paloma; Aedo, Ignacio

    2012-01-01

    In this paper we describe the results that we have obtained while integrating algorithm visualization (AV) movies (strongly tightened with the other teaching material), within a first-year undergraduate course on algorithms and data structures. Our experimental results seem to support the hypothesis that making these movies available significantly…

  5. Development and validation of an algorithm for laser application in wound treatment 1

    PubMed Central

    da Cunha, Diequison Rite; Salomé, Geraldo Magela; Massahud, Marcelo Renato; Mendes, Bruno; Ferreira, Lydia Masako

    2017-01-01

    ABSTRACT Objective: To develop and validate an algorithm for laser wound therapy. Method: Methodological study and literature review. For the development of the algorithm, a review was performed in the Health Sciences databases of the past ten years. The algorithm evaluation was performed by 24 participants, nurses, physiotherapists, and physicians. For data analysis, the Cronbach's alpha coefficient and the chi-square test for independence were used. The level of significance of the statistical test was established at 5% (p<0.05). Results: The professionals' responses regarding the ease of reading the algorithm indicated: 41.70%, great; 41.70%, good; 16.70%, regular. With regard to the algorithm being sufficient for supporting decisions related to wound evaluation and wound cleaning, 87.5% said yes to both questions. Regarding the participants' opinion that the algorithm contained enough information to support their decision regarding the choice of laser parameters, 91.7% said yes. The questionnaire presented reliability using the Cronbach's alpha coefficient test (α = 0.962). Conclusion: The developed and validated algorithm showed reliability for evaluation, wound cleaning, and use of laser therapy in wounds. PMID:29211197
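    The reliability statistic used above, Cronbach's alpha, is straightforward to compute from an item-response matrix. The short sketch below uses an invented response matrix, not the study's questionnaire data.

```python
# Cronbach's alpha for a questionnaire (rows = raters, columns = items).
# The response matrix below is invented for illustration only.
import numpy as np

def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = np.array([[4, 5, 4, 5],
                      [3, 4, 4, 4],
                      [5, 5, 5, 5],
                      [2, 3, 3, 2],
                      [4, 4, 5, 4]])
print("alpha =", round(cronbach_alpha(responses), 3))
```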

  6. Support the Design of Improved IUE NEWSIPS High Dispersion Extraction Algorithms: Improved IUE High Dispersion Extraction Algorithms

    NASA Technical Reports Server (NTRS)

    Lawton, Pat

    2004-01-01

    The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose of this work was to evaluate the use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, evaluate various extraction approaches, and design algorithms for the evaluation of IUE High Dispersion spectra. It was concluded that the use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.

  7. Integrating image quality in 2nu-SVM biometric match score fusion.

    PubMed

    Vatsa, Mayank; Singh, Richa; Noore, Afzel

    2007-10-01

    This paper proposes an intelligent 2nu-support vector machine based match score fusion algorithm to improve the performance of face and iris recognition by integrating the quality of images. The proposed algorithm applies redundant discrete wavelet transform to evaluate the underlying linear and non-linear features present in the image. A composite quality score is computed to determine the extent of smoothness, sharpness, noise, and other pertinent features present in each subband of the image. The match score and the corresponding quality score of an image are fused using 2nu-support vector machine to improve the verification performance. The proposed algorithm is experimentally validated using the FERET face database and the CASIA iris database. The verification performance and statistical evaluation show that the proposed algorithm outperforms existing fusion algorithms.
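    A minimal sketch of quality-augmented match-score fusion follows. Synthetic genuine/impostor scores stand in for the face and iris matchers, and scikit-learn's NuSVC is used as a stand-in for the paper's 2nu-SVM formulation; the redundant discrete wavelet transform quality assessment is not reproduced.

```python
# Minimal sketch of quality-augmented match-score fusion with a nu-SVM.
# Synthetic genuine/impostor scores and quality values stand in for the
# face and iris matchers; NuSVC replaces the paper's 2nu-SVM formulation.
import numpy as np
from sklearn.svm import NuSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# columns: face match score, iris match score, face quality, iris quality
genuine = np.column_stack([rng.normal(0.80, 0.10, n), rng.normal(0.70, 0.10, n),
                           rng.uniform(0.6, 1.0, n), rng.uniform(0.6, 1.0, n)])
impostor = np.column_stack([rng.normal(0.40, 0.12, n), rng.normal(0.35, 0.12, n),
                            rng.uniform(0.3, 1.0, n), rng.uniform(0.3, 1.0, n)])
X = np.vstack([genuine, impostor])
y = np.concatenate([np.ones(n), np.zeros(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = NuSVC(nu=0.2, kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print("verification accuracy on held-out scores:", round(clf.score(X_te, y_te), 3))
```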

  8. Desiderata for computable representations of electronic health records-driven phenotype algorithms

    PubMed Central

    Mo, Huan; Thompson, William K; Rasmussen, Luke V; Pacheco, Jennifer A; Jiang, Guoqian; Kiefer, Richard; Zhu, Qian; Xu, Jie; Montague, Enid; Carrell, David S; Lingren, Todd; Mentch, Frank D; Ni, Yizhao; Wehbe, Firas H; Peissig, Peggy L; Tromp, Gerard; Larson, Eric B; Chute, Christopher G; Pathak, Jyotishman; Speltz, Peter; Kho, Abel N; Jarvik, Gail P; Bejan, Cosmin A; Williams, Marc S; Borthwick, Kenneth; Kitchner, Terrie E; Roden, Dan M; Harris, Paul A

    2015-01-01

    Background Electronic health records (EHRs) are increasingly used for clinical and translational research through the creation of phenotype algorithms. Currently, phenotype algorithms are most commonly represented as noncomputable descriptive documents and knowledge artifacts that detail the protocols for querying diagnoses, symptoms, procedures, medications, and/or text-driven medical concepts, and are primarily meant for human comprehension. We present desiderata for developing a computable phenotype representation model (PheRM). Methods A team of clinicians and informaticians reviewed common features for multisite phenotype algorithms published in PheKB.org and existing phenotype representation platforms. We also evaluated well-known diagnostic criteria and clinical decision-making guidelines to encompass a broader category of algorithms. Results We propose 10 desired characteristics for a flexible, computable PheRM: (1) structure clinical data into queryable forms; (2) recommend use of a common data model, but also support customization for the variability and availability of EHR data among sites; (3) support both human-readable and computable representations of phenotype algorithms; (4) implement set operations and relational algebra for modeling phenotype algorithms; (5) represent phenotype criteria with structured rules; (6) support defining temporal relations between events; (7) use standardized terminologies and ontologies, and facilitate reuse of value sets; (8) define representations for text searching and natural language processing; (9) provide interfaces for external software algorithms; and (10) maintain backward compatibility. Conclusion A computable PheRM is needed for true phenotype portability and reliability across different EHR products and healthcare systems. These desiderata are a guide to inform the establishment and evolution of EHR phenotype algorithm authoring platforms and languages. PMID:26342218

  9. DC Bus Regulation with a Flywheel Energy Storage System

    NASA Technical Reports Server (NTRS)

    Kenny, Barbara H.; Kascak, Peter E.

    2003-01-01

    This paper describes the DC bus regulation control algorithm for the NASA flywheel energy storage system during charge, charge reduction and discharge modes of operation. The algorithm was experimentally verified with results given in a previous paper. This paper presents the necessary models for simulation with detailed block diagrams of the controller algorithm. It is shown that the flywheel system and the controller can be modeled in three levels of detail depending on the type of analysis required. The three models are explained and then compared using simulation results.

  10. Parallel Algorithms and Patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robey, Robert W.

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
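    One of the patterns named above, the prefix scan, can be shown with a sequential emulation of the work-efficient Blelloch exclusive scan; every inner-loop iteration is independent of the others, which is what makes the pattern parallelizable (the emulation simply runs them in order).

```python
# Sequential emulation of the Blelloch work-efficient exclusive prefix scan.
# Each inner loop's iterations are mutually independent and could run
# concurrently; here they are executed in order for clarity.
def exclusive_scan(values):
    n = len(values)
    size = 1 if n <= 1 else 1 << (n - 1).bit_length()   # pad to a power of two
    a = list(values) + [0] * (size - n)

    # up-sweep (reduction) phase
    step = 1
    while step < size:
        for i in range(2 * step - 1, size, 2 * step):    # independent updates
            a[i] += a[i - step]
        step *= 2

    # down-sweep phase
    a[size - 1] = 0
    step = size // 2
    while step >= 1:
        for i in range(2 * step - 1, size, 2 * step):    # independent updates
            a[i - step], a[i] = a[i], a[i] + a[i - step]
        step //= 2
    return a[:n]

print(exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))   # [0, 3, 4, 11, 11, 15, 16, 22]
```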

  11. Restoration of dimensional reduction in the random-field Ising model at five dimensions

    NASA Astrophysics Data System (ADS)

    Fytas, Nikolaos G.; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas

    2017-04-01

    The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D -2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible to those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D =5 . We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3 ≤D <6 to their values in the pure Ising model at D -2 dimensions, and we provide a clear verification of the Rushbrooke equality at all studied dimensions.

  12. Restoration of dimensional reduction in the random-field Ising model at five dimensions.

    PubMed

    Fytas, Nikolaos G; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas

    2017-04-01

    The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D-2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible to those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D=5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3≤D<6 to their values in the pure Ising model at D-2 dimensions, and we provide a clear verification of the Rushbrooke equality at all studied dimensions.

  13. False alarm reduction by the And-ing of multiple multivariate Gaussian classifiers

    NASA Astrophysics Data System (ADS)

    Dobeck, Gerald J.; Cobb, J. Tory

    2003-09-01

    The high-resolution sonar is one of the principal sensors used by the Navy to detect and classify sea mines in minehunting operations. For such sonar systems, substantial effort has been devoted to the development of automated detection and classification (D/C) algorithms. These have been spurred by several factors including (1) aids for operators to reduce work overload, (2) more optimal use of all available data, and (3) the introduction of unmanned minehunting systems. The environments where sea mines are typically laid (harbor areas, shipping lanes, and the littorals) give rise to many false alarms caused by natural, biologic, and man-made clutter. The objective of the automated D/C algorithms is to eliminate most of these false alarms while still maintaining a very high probability of mine detection and classification (PdPc). In recent years, the benefits of fusing the outputs of multiple D/C algorithms have been studied. We refer to this as Algorithm Fusion. The results have been remarkable, including reliable robustness to new environments. This paper describes a method for training several multivariate Gaussian classifiers such that their And-ing dramatically reduces false alarms while maintaining a high probability of classification. This training approach is referred to as the Focused-Training method. This work extends our 2001-2002 work where the Focused-Training method was used with three other types of classifiers: the Attractor-based K-Nearest Neighbor Neural Network (a type of radial-basis, probabilistic neural network), the Optimal Discrimination Filter Classifier (based on linear discrimination theory), and the Quadratic Penalty Function Support Vector Machine (QPFSVM). Although our experience has been gained in the area of sea mine detection and classification, the principles described herein are general and can be applied to a wide range of pattern recognition and automatic target recognition (ATR) problems.
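    The basic And-ing idea can be sketched with two multivariate Gaussian likelihood-ratio detectors on different feature subsets, each thresholded to keep detection probability high, whose decisions are then combined with a logical AND. Synthetic "target" and "clutter" features stand in for sonar data, and the threshold rule is a simple quantile, not the paper's Focused-Training procedure.

```python
# Hedged sketch of And-ing two multivariate Gaussian likelihood-ratio
# detectors trained on different feature subsets. Synthetic target/clutter
# features replace real sonar data; thresholds keep ~98% of training targets.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n = 2000
target = rng.multivariate_normal([2.0, 2.0, 1.5, 1.0], np.eye(4), n)
clutter = rng.multivariate_normal([0.0, 0.0, 0.0, 0.0], np.eye(4) * 2.0, n)

def fit_gaussian(x):
    return multivariate_normal(mean=x.mean(axis=0), cov=np.cov(x, rowvar=False))

def llr(feats, x):
    t = fit_gaussian(target[:, feats])
    c = fit_gaussian(clutter[:, feats])
    return t.logpdf(x[:, feats]) - c.logpdf(x[:, feats])

feats_a, feats_b = [0, 1], [2, 3]           # two detectors, different features
test = np.vstack([target, clutter])
labels = np.concatenate([np.ones(n), np.zeros(n)])

thr_a = np.quantile(llr(feats_a, target), 0.02)
thr_b = np.quantile(llr(feats_b, target), 0.02)

and_decision = (llr(feats_a, test) > thr_a) & (llr(feats_b, test) > thr_b)
pd = and_decision[labels == 1].mean()        # detection probability
pfa = and_decision[labels == 0].mean()       # false-alarm probability
print(f"Pd = {pd:.3f}, Pfa = {pfa:.3f}")
```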

  14. Effect of normalization methods on the performance of supervised learning algorithms applied to HTSeq-FPKM-UQ data sets: 7SK RNA expression as a predictor of survival in patients with colon adenocarcinoma.

    PubMed

    Shahriyari, Leili

    2017-11-03

    One of the main challenges in machine learning (ML) is choosing an appropriate normalization method. Here, we examine the effect of various normalization methods on analyzing FPKM upper quartile (FPKM-UQ) RNA sequencing data sets. We collect the HTSeq-FPKM-UQ files of patients with colon adenocarcinoma from the TCGA-COAD project. We compare the three most common normalization methods: scaling, standardizing using z-scores, and vector normalization, by visualizing the normalized data set and evaluating the performance of 12 supervised learning algorithms on the normalized data set. Additionally, for each of these normalization methods, we use two different normalization strategies: normalizing samples (files) or normalizing features (genes). Regardless of normalization methods, a support vector machine (SVM) model with the radial basis function kernel had the maximum accuracy (78%) in predicting the vital status of the patients. However, the fitting time of SVM depended on the normalization methods, and it reached its minimum fitting time when files were normalized to unit length. Furthermore, among all 12 learning algorithms and 6 different normalization techniques, the Bernoulli naive Bayes model after standardizing files had the best performance in terms of maximizing the accuracy as well as minimizing the fitting time. We also investigated the effect of dimensionality reduction methods on the performance of the supervised ML algorithms. Reducing the dimension of the data set did not increase the maximum accuracy of 78%. However, it led to the discovery of 7SK RNA gene expression as a predictor of survival in patients with colon adenocarcinoma with an accuracy of 78%. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
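    The six normalization variants being compared (three methods, each applied per sample or per feature) map directly onto standard scikit-learn preprocessors. In the sketch below, random gamma-distributed values stand in for the HTSeq-FPKM-UQ matrix and random labels for vital status, so the accuracies are meaningless; only the preprocessing pipelines are illustrated.

```python
# Sketch of the compared normalization choices: scaling, z-score and
# unit-length (vector) normalization, each applied per feature (genes) or
# per sample (files). Random data stand in for the expression matrix.
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler, Normalizer
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.gamma(shape=2.0, scale=50.0, size=(200, 300))   # samples x genes
y = rng.integers(0, 2, size=200)                        # toy vital-status labels

variants = {
    "scale features":       MinMaxScaler().fit_transform(X),
    "zscore features":      StandardScaler().fit_transform(X),
    "scale samples":        MinMaxScaler().fit_transform(X.T).T,
    "zscore samples":       StandardScaler().fit_transform(X.T).T,
    "unit-length samples":  Normalizer(norm="l2").fit_transform(X),
    "unit-length features": Normalizer(norm="l2").fit_transform(X.T).T,
}

for name, Xn in variants.items():
    acc = cross_val_score(SVC(kernel="rbf", gamma="scale"), Xn, y, cv=5).mean()
    print(f"{name:22s} SVM accuracy = {acc:.3f}")
```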

  15. Novel models and algorithms of load balancing for variable-structured collaborative simulation under HLA/RTI

    NASA Astrophysics Data System (ADS)

    Yue, Yingchao; Fan, Wenhui; Xiao, Tianyuan; Ma, Cheng

    2013-07-01

    High level architecture (HLA) is the open standard in the collaborative simulation field. Scholars have been paying close attention to theoretical research on and engineering applications of collaborative simulation based on HLA/RTI, which extends HLA in various aspects like functionality and efficiency. However, related studies on the load balancing problem of HLA collaborative simulation are insufficient. Without load balancing, collaborative simulation under HLA/RTI may encounter performance reduction or even fatal errors. In this paper, load balancing is further divided into static problems and dynamic problems. A multi-objective model is established and the randomness of model parameters is taken into consideration for static load balancing, which makes the model more credible. The Monte Carlo based optimization algorithm (MCOA) is proposed to achieve static load balance. For dynamic load balancing, a new type of dynamic load balancing problem is put forward with regard to the variable-structured collaborative simulation under HLA/RTI. In order to minimize the influence on the running collaborative simulation, the ordinal optimization based algorithm (OOA) is devised to shorten the optimization time. Furthermore, the two algorithms are adopted in simulation experiments of different scenarios, which demonstrate their effectiveness and efficiency. An engineering experiment on collaborative simulation under HLA/RTI of high speed electric multiple units (EMU) is also conducted to verify the credibility of the proposed models and the supportive utility of MCOA and OOA for practical engineering systems. The proposed research ensures compatibility with traditional HLA, enhances the ability to assign simulation loads onto computing units both statically and dynamically, improves the performance of the collaborative simulation system and makes full use of the hardware resources.

  16. Classification of Alzheimer's disease and prediction of mild cognitive impairment-to-Alzheimer's conversion from structural magnetic resonance imaging using feature ranking and a genetic algorithm.

    PubMed

    Beheshti, Iman; Demirel, Hasan; Matsuda, Hiroshi

    2017-04-01

    We developed a novel computer-aided diagnosis (CAD) system that uses feature-ranking and a genetic algorithm to analyze structural magnetic resonance imaging data; using this system, we can predict conversion of mild cognitive impairment (MCI)-to-Alzheimer's disease (AD) between one and three years before clinical diagnosis. The CAD system was developed in four stages. First, we used a voxel-based morphometry technique to investigate global and local gray matter (GM) atrophy in an AD group compared with healthy controls (HCs). Regions with significant GM volume reduction were segmented as volumes of interest (VOIs). Second, these VOIs were used to extract voxel values from the respective atrophy regions in AD, HC, stable MCI (sMCI) and progressive MCI (pMCI) patient groups. The voxel values were then extracted into a feature vector. Third, at the feature-selection stage, all features were ranked according to their respective t-test scores, and a genetic algorithm was designed to find the optimal feature subset. The Fisher criterion was used as part of the objective function in the genetic algorithm. Finally, the classification was carried out using a support vector machine (SVM) with 10-fold cross validation. We evaluated the proposed automatic CAD system by applying it to baseline values from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset (160 AD, 162 HC, 65 sMCI and 71 pMCI subjects). The experimental results indicated that the proposed system is capable of distinguishing between sMCI and pMCI patients, and would be appropriate for practical use in a clinical setting. Copyright © 2017 Elsevier Ltd. All rights reserved.
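    The ranking and classification stages can be sketched compactly: rank features by two-sample t-test scores, keep a subset, and classify with an SVM under 10-fold cross-validation. In this sketch the genetic-algorithm subset search is replaced by a simple top-k cut, and random data stand in for the ADNI gray-matter voxel features; a rigorous pipeline would also rank features inside each fold.

```python
# Sketch of t-test feature ranking followed by SVM classification with
# 10-fold CV. The GA subset search is replaced by a top-k cut, and random
# data stand in for gray-matter voxel features.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_group, n_feat = 80, 500
X_ad = rng.normal(0.0, 1.0, (n_per_group, n_feat))
X_ad[:, :20] += 0.8                       # 20 informative "atrophy" features
X_hc = rng.normal(0.0, 1.0, (n_per_group, n_feat))
X = np.vstack([X_ad, X_hc])
y = np.array([1] * n_per_group + [0] * n_per_group)

# rank all features by the absolute two-sample t statistic
t_scores, _ = ttest_ind(X[y == 1], X[y == 0], axis=0)
top_k = np.argsort(-np.abs(t_scores))[:30]            # keep the 30 best-ranked

acc = cross_val_score(SVC(kernel="linear"), X[:, top_k], y, cv=10).mean()
print("10-fold CV accuracy with top-ranked features:", round(acc, 3))
```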

  17. Optical diffraction tomography: accuracy of an off-axis reconstruction

    NASA Astrophysics Data System (ADS)

    Kostencka, Julianna; Kozacki, Tomasz

    2014-05-01

    Optical diffraction tomography is an increasingly popular method that allows for reconstruction of the three-dimensional refractive index distribution of semi-transparent samples using multiple measurements of an optical field transmitted through the sample for various illumination directions. The assembly of the angular measurements is usually performed with one of two methods: the filtered backprojection (FBPJ) or the filtered backpropagation (FBPP) tomographic reconstruction algorithm. The former approach, although conceptually very simple, provides an accurate reconstruction for the object regions located close to the plane of focus. However, since FBPJ ignores diffraction, its use for spatially extended structures is arguable. According to the theory of scattering, more precise restoration of a 3D structure should be achieved with the FBPP algorithm, which, unlike the former approach, incorporates diffraction. It is believed that with this method one can obtain a high-accuracy reconstruction in a large measurement volume exceeding the depth of focus of the imaging system. However, some studies have suggested that a considerable improvement of the FBPP results can be achieved with prior propagation of the transmitted fields back to the centre of the object. This, supposedly, enables reduction of errors due to the approximated diffraction formulas used in FBPP. In our view this finding casts doubt on the quality of the FBPP reconstruction in the regions far from the rotation axis. The objective of this paper is to investigate the limitations of the FBPP algorithm in terms of off-axis reconstruction and to compare its performance with the FBPJ approach. Moreover, in this work we propose some modifications to the FBPP algorithm that allow for more precise restoration of a sample structure in off-axis locations. The research is based on extensive numerical simulations supported by the wave-propagation method.

  18. Spatial and spectral analysis of corneal epithelium injury using hyperspectral images

    NASA Astrophysics Data System (ADS)

    Md Noor, Siti Salwa; Michael, Kaleena; Marshall, Stephen; Ren, Jinchang

    2017-12-01

    Eye assessment is essential in preventing blindness. Currently, the existing methods to assess corneal epithelium injury are complex and require expert knowledge. Hence, we have introduced a non-invasive technique using hyperspectral imaging (HSI) and an image analysis algorithm for corneal epithelium injury. Three groups of images were compared and analyzed, namely healthy eyes, injured eyes, and injured eyes with stain. Dimensionality reduction using principal component analysis (PCA) was applied to reduce the massive data volume and redundancy. The first 10 principal components (PCs) were selected for further processing. The mean vectors of all 45 pairwise combinations of the 10 PCs were computed and sent to two classifiers. A quadratic Bayes normal classifier (QDC) and a support vector classifier (SVC) were used in this study to discriminate the eleven eyes into the three groups. As a result, the combined classifier of QDC and SVC showed optimal performance with 2D PCA features (2DPCA-QDSVC) and was utilized to classify normal and abnormal tissues using color image segmentation. The result was compared with human segmentation. The outcome showed that the proposed algorithm produced extremely promising results to assist the clinician in quantifying corneal injury.

  19. Robust prediction of protein subcellular localization combining PCA and WSVMs.

    PubMed

    Tian, Jiang; Gu, Hong; Liu, Wenqi; Gao, Chiyang

    2011-08-01

    Automated prediction of protein subcellular localization is an important tool for genome annotation and drug discovery, and Support Vector Machines (SVMs) can effectively solve this problem in a supervised manner. However, the datasets obtained from real experiments are likely to contain outliers or noises, which can lead to poor generalization ability and classification accuracy. To explore this problem, we adopt strategies to lower the effect of outliers. First we design a method based on Weighted SVMs, different weights are assigned to different data points, so the training algorithm will learn the decision boundary according to the relative importance of the data points. Second we analyse the influence of Principal Component Analysis (PCA) on WSVM classification, propose a hybrid classifier combining merits of both PCA and WSVM. After performing dimension reduction operations on the datasets, kernel-based possibilistic c-means algorithm can generate more suitable weights for the training, as PCA transforms the data into a new coordinate system with largest variances affected greatly by the outliers. Experiments on benchmark datasets show promising results, which confirms the effectiveness of the proposed method in terms of prediction accuracy. Copyright © 2011 Elsevier Ltd. All rights reserved.
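    The PCA plus weighted-SVM combination can be sketched as follows. Here the per-sample weights come from a simple centroid-distance rule in PCA space rather than the paper's kernel-based possibilistic c-means, and the protein features are synthetic, so this is only an illustration of how weighting down-plays likely outliers.

```python
# Hedged sketch of PCA + weighted SVM: reduce dimension with PCA, then give
# each training point a weight that shrinks with its distance from its class
# centroid in PCA space, so outliers influence the decision boundary less.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
X = np.vstack([rng.normal(0, 1, (n, 50)) + 1.0, rng.normal(0, 1, (n, 50)) - 1.0])
y = np.array([1] * n + [0] * n)
X[:15] += rng.normal(0, 6, (15, 50))        # inject a few outliers into class 1

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pca = PCA(n_components=10).fit(X_tr)
Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)

weights = np.empty(len(Z_tr))
for cls in np.unique(y_tr):
    idx = y_tr == cls
    dist = np.linalg.norm(Z_tr[idx] - Z_tr[idx].mean(axis=0), axis=1)
    weights[idx] = 1.0 / (1.0 + dist / dist.mean())   # farther point, smaller weight

clf = SVC(kernel="rbf", gamma="scale").fit(Z_tr, y_tr, sample_weight=weights)
print("test accuracy:", round(clf.score(Z_te, y_te), 3))
```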

  20. NASA Tech Briefs, March 2005

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Topics covered include: Scheme for Entering Binary Data Into a Quantum Computer; Encryption for Remote Control via Internet or Intranet; Coupled Receiver/Decoders for Low-Rate Turbo Codes; Processing GPS Occultation Data To Characterize Atmosphere; Displacing Unpredictable Nulls in Antenna Radiation Patterns; Integrated Pointing and Signal Detector for Optical Receiver; Adaptive Thresholding and Parameter Estimation for PPM; Data-Driven Software Framework for Web-Based ISS Telescience; Software for Secondary-School Learning About Robotics; Fuzzy Logic Engine; Telephone-Directory Program; Simulating a Direction-Finder Search for an ELT; Formulating Precursors for Coating Metals and Ceramics; Making Macroscopic Assemblies of Aligned Carbon Nanotubes; Ball Bearings Equipped for In Situ Lubrication on Demand; Synthetic Bursae for Robots; Robot Forearm and Dexterous Hand; Making a Metal-Lined Composite-Overwrapped Pressure Vessel; Ex Vivo Growth of Bioengineered Ligaments and Other Tissues; Stroboscopic Goggles for Reduction of Motion Sickness; Articulating Support for Horizontal Resistive Exercise; Modified Penning-Malmberg Trap for Storing Antiprotons; Tumbleweed Rovers; Two-Photon Fluorescence Microscope for Microgravity Research; Biased Randomized Algorithm for Fast Model-Based Diagnosis; Fast Algorithms for Model-Based Diagnosis; Simulations of Evaporating Multicomponent Fuel Drops; Formation Flying of Tethered and Nontethered Spacecraft; and Two Methods for Efficient Solution of the Hitting- Set Problem.

  1. Prediction and causal reasoning in planning

    NASA Technical Reports Server (NTRS)

    Dean, T.; Boddy, M.

    1987-01-01

    Nonlinear planners are often touted as having an efficiency advantage over linear planners. The reason usually given is that nonlinear planners, unlike their linear counterparts, are not forced to make arbitrary commitments to the order in which actions are to be performed. This ability to delay commitment enables nonlinear planners to solve certain problems with far less effort than would be required of linear planners. Here, it is argued that this advantage is bought with a significant reduction in the ability of a nonlinear planner to accurately predict the consequences of actions. Unfortunately, the general problem of predicting the consequences of a partially ordered set of actions is intractable. In gaining the predictive power of linear planners, nonlinear planners sacrifice their efficiency advantage. There are, however, other advantages to nonlinear planning (e.g., the ability to reason about partial orders and incomplete information) that make it well worth the effort needed to extend nonlinear methods. A framework is supplied for causal inference that supports reasoning about partially ordered events and actions whose effects depend upon the context in which they are executed. As an alternative to a complete but potentially exponential-time algorithm, researchers provide a provably sound polynomial-time algorithm for predicting the consequences of partially ordered events.

  2. Cooperative vehicles for robust traffic congestion reduction: An analysis based on algorithmic, environmental and agent behavioral factors

    PubMed Central

    Desai, Prajakta; Desai, Aniruddha

    2017-01-01

    Traffic congestion continues to be a persistent problem throughout the world. As vehicle-to-vehicle communication develops, there is an opportunity of using cooperation among close proximity vehicles to tackle the congestion problem. The intuition is that if vehicles could cooperate opportunistically when they come close enough to each other, they could, in effect, spread themselves out among alternative routes so that vehicles do not all jam up on the same roads. Our previous work proposed a decentralized multiagent based vehicular congestion management algorithm entitled Congestion Avoidance and Route Allocation using Virtual Agent Negotiation (CARAVAN), wherein the vehicles acting as intelligent agents perform cooperative route allocation using inter-vehicular communication. This paper focuses on evaluating the practical applicability of this approach by testing its robustness and performance (in terms of travel time reduction), across variations in: (a) environmental parameters such as road network topology and configuration; (b) algorithmic parameters such as vehicle agent preferences and route cost/preference multipliers; and (c) agent-related parameters such as equipped/non-equipped vehicles and compliant/non-compliant agents. Overall, the results demonstrate the adaptability and robustness of the decentralized cooperative vehicles approach to providing global travel time reduction using simple local coordination strategies. PMID:28792513

  3. Cooperative vehicles for robust traffic congestion reduction: An analysis based on algorithmic, environmental and agent behavioral factors.

    PubMed

    Desai, Prajakta; Loke, Seng W; Desai, Aniruddha

    2017-01-01

    Traffic congestion continues to be a persistent problem throughout the world. As vehicle-to-vehicle communication develops, there is an opportunity of using cooperation among close proximity vehicles to tackle the congestion problem. The intuition is that if vehicles could cooperate opportunistically when they come close enough to each other, they could, in effect, spread themselves out among alternative routes so that vehicles do not all jam up on the same roads. Our previous work proposed a decentralized multiagent based vehicular congestion management algorithm entitled Congestion Avoidance and Route Allocation using Virtual Agent Negotiation (CARAVAN), wherein the vehicles acting as intelligent agents perform cooperative route allocation using inter-vehicular communication. This paper focuses on evaluating the practical applicability of this approach by testing its robustness and performance (in terms of travel time reduction), across variations in: (a) environmental parameters such as road network topology and configuration; (b) algorithmic parameters such as vehicle agent preferences and route cost/preference multipliers; and (c) agent-related parameters such as equipped/non-equipped vehicles and compliant/non-compliant agents. Overall, the results demonstrate the adaptability and robustness of the decentralized cooperative vehicles approach to providing global travel time reduction using simple local coordination strategies.

  4. Enhanced algorithms for stochastic programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishna, Alamuru S.

    1993-09-01

    In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduced the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive function. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than when the variance-reduction techniques were applied directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, both to provide a starting point and to speed up the algorithm by making use of the information obtained from the expected value solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
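    The payoff of approximating an expensive function by a cheap piecewise-linear one can be illustrated with a control-variate estimator: the mean of the cheap approximation is estimated with many samples, and only the (low-variance) difference needs expensive evaluations. The "expensive" function below is merely a stand-in for a recourse function.

```python
# Sketch of the piecewise-linear control-variate idea: approximate an
# "expensive" function f by a cheap piecewise-linear interpolant g, estimate
# E[g] accurately with many cheap samples, and correct it with a small number
# of expensive evaluations of f - g.
import numpy as np

rng = np.random.default_rng(0)

def f(x):                      # pretend this is expensive to evaluate
    return np.exp(0.8 * x) + 0.3 * np.sin(4 * x)

# cheap piecewise-linear approximation built from a handful of f evaluations
knots = np.linspace(-3, 3, 9)
g = lambda x: np.interp(x, knots, f(knots))

n_cheap, n_expensive = 200_000, 500
x_cheap = rng.normal(0.0, 1.0, n_cheap)        # N(0,1) inputs
x_exp = rng.normal(0.0, 1.0, n_expensive)

mean_g = g(x_cheap).mean()                     # cheap, low-variance estimate of E[g]
correction = (f(x_exp) - g(x_exp)).mean()      # few expensive evaluations
estimate = mean_g + correction                 # E[f] = E[g] + E[f - g]

naive = f(rng.normal(0.0, 1.0, n_expensive)).mean()
print("control-variate estimate:", round(estimate, 4))
print("naive estimate (same expensive budget):", round(naive, 4))
```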

  5. Fuzzy support vector machine: an efficient rule-based classification technique for microarrays.

    PubMed

    Hajiloo, Mohsen; Rabiee, Hamid R; Anooshahpour, Mahdi

    2013-01-01

    The abundance of gene expression microarray data has led to the development of machine learning algorithms applicable for tackling disease diagnosis, disease prognosis, and treatment selection problems. However, these algorithms often produce classifiers with weaknesses in terms of accuracy, robustness, and interpretability. This paper introduces fuzzy support vector machine which is a learning algorithm based on combination of fuzzy classifiers and kernel machines for microarray classification. Experimental results on public leukemia, prostate, and colon cancer datasets show that fuzzy support vector machine applied in combination with filter or wrapper feature selection methods develops a robust model with higher accuracy than the conventional microarray classification models such as support vector machine, artificial neural network, decision trees, k nearest neighbors, and diagonal linear discriminant analysis. Furthermore, the interpretable rule-base inferred from fuzzy support vector machine helps extracting biological knowledge from microarray data. Fuzzy support vector machine as a new classification model with high generalization power, robustness, and good interpretability seems to be a promising tool for gene expression microarray classification.

  6. Development and validation of a dual sensing scheme to improve accuracy of bradycardia and pause detection in an insertable cardiac monitor.

    PubMed

    Passman, Rod S; Rogers, John D; Sarkar, Shantanu; Reiland, Jerry; Reisfeld, Erin; Koehler, Jodi; Mittal, Suneet

    2017-07-01

    Undersensing of premature ventricular beats and low-amplitude R waves are primary causes for inappropriate bradycardia and pause detections in insertable cardiac monitors (ICMs). The purpose of this study was to develop and validate an enhanced algorithm to reduce inappropriately detected bradycardia and pause episodes. Independent data sets to develop and validate the enhanced algorithm were derived from a database of ICM-detected bradycardia and pause episodes in de-identified patients monitored for unexplained syncope. The original algorithm uses an auto-adjusting sensitivity threshold for R-wave sensing to detect tachycardia and avoid T-wave oversensing. In the enhanced algorithm, a second sensing threshold is used with a long blanking and fixed lower sensitivity threshold, looking for evidence of undersensed signals. Data reported includes percent change in appropriate and inappropriate bradycardia and pause detections as well as changes in episode detection sensitivity and positive predictive value with the enhanced algorithm. The validation data set, from 663 consecutive patients, consisted of 4904 (161 patients) bradycardia and 2582 (133 patients) pause episodes, of which 2976 (61%) and 996 (39%) were appropriately detected bradycardia and pause episodes. The enhanced algorithm reduced inappropriate bradycardia and pause episodes by 95% and 47%, respectively, with 1.7% and 0.6% reduction in appropriate episodes, respectively. The average episode positive predictive value improved by 62% (P < .001) for bradycardia detection and by 26% (P < .001) for pause detection, with an average relative sensitivity of 95% (P < .001) and 99% (P = .5), respectively. The enhanced dual sense algorithm for bradycardia and pause detection in ICMs substantially reduced inappropriate episode detection with a minimal reduction in true episode detection. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  7. Sensor-Based Vibration Signal Feature Extraction Using an Improved Composite Dictionary Matching Pursuit Algorithm

    PubMed Central

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-01-01

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm is feasible and effective. PMID:25207870
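    The single-atom greedy selection with an attenuation-based stopping rule can be sketched on a small cosine dictionary; the real method uses a composite Fourier plus modulation dictionary tailored to gear fault signals, which is not reproduced here.

```python
# Minimal single-atom matching pursuit over a small cosine dictionary with a
# residual-attenuation stopping rule, sketching the single-atom idea only.
import numpy as np

rng = np.random.default_rng(0)
n = 256
t = np.arange(n) / n
dictionary = np.stack([np.cos(2 * np.pi * f * t) for f in range(1, 65)])
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)   # unit atoms

# synthetic signal: two atoms plus noise
signal = 3 * dictionary[9] + 1.5 * dictionary[30] + 0.1 * rng.normal(size=n)

residual = signal.copy()
selected = []
prev_norm = np.linalg.norm(residual)
for _ in range(20):
    corr = dictionary @ residual                 # correlation with every atom
    k = int(np.argmax(np.abs(corr)))             # best single atom
    residual = residual - corr[k] * dictionary[k]
    selected.append((k, corr[k]))
    norm = np.linalg.norm(residual)
    if norm / prev_norm > 0.95:                  # attenuation-based termination
        break
    prev_norm = norm

print("selected atoms (index, coefficient):",
      [(k, round(float(c), 2)) for k, c in selected])
```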

  8. Sensor-based vibration signal feature extraction using an improved composite dictionary matching pursuit algorithm.

    PubMed

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-09-09

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm is feasible and effective.

  9. Extremum seeking-based optimization of high voltage converter modulator rise-time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheinker, Alexander; Bland, Michael; Krstic, Miroslav

    2013-02-01

    We digitally implement an extremum seeking (ES) algorithm, which optimizes the rise time of the output voltage of a high voltage converter modulator (HVCM) at the Los Alamos Neutron Science Center (LANSCE) HVCM test stand by iteratively, simultaneously tuning the first 8 switching edges of each of the three phase drive waveforms (24 variables total). We achieve a 50 μs rise time, which is a reduction by half compared to the 100 μs achieved at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory. Considering that HVCMs typically operate with an output voltage of 100 kV, with a 60 Hz repetition rate, the 50 μs rise time reduction will result in very significant energy savings. The ES algorithm proved successful, despite the noisy measurements and cost calculations, confirming the theoretical results that the algorithm is not affected by noise whose frequency components are independent of the perturbing frequencies.
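
    The sketch below shows a generic perturbation-based extremum-seeking loop of the textbook kind, not the LANSCE implementation: each parameter carries its own sinusoidal dither, and the change in the measured cost is demodulated against the dither to obtain a descent direction, which is why noisy cost evaluations are tolerated. The 24-variable quadratic cost, the gains, and the dither frequencies are illustrative assumptions.

```python
import numpy as np

def extremum_seeking(cost_fn, theta0, n_iter=20000, alpha=0.05, gain=0.02, dt=1.0):
    """Minimal perturbation-based extremum-seeking loop (generic sketch).

    Each parameter gets its own dither frequency below Nyquist; the change
    in the measured cost is demodulated against the dither, giving a
    gradient-descent-like drift without an explicit gradient or noise model.
    """
    theta = np.asarray(theta0, dtype=float).copy()
    n = theta.size
    omegas = 1.0 + 1.5 * np.arange(n) / n          # distinct dither frequencies (rad/sample)
    prev_cost = cost_fn(theta)
    for k in range(n_iter):
        dither = alpha * np.cos(omegas * k * dt)   # simultaneous perturbation of all parameters
        cost = cost_fn(theta + dither)             # single (possibly noisy) cost evaluation
        theta -= gain * dt * (cost - prev_cost) * np.cos(omegas * k * dt)
        prev_cost = cost
    return theta

# toy usage: 24 parameters (mimicking 24 switching edges) against a noisy quadratic cost
rng = np.random.default_rng(1)
target = rng.normal(size=24)
noisy_cost = lambda th: float(np.sum((th - target) ** 2) + 0.01 * rng.normal())
theta_opt = extremum_seeking(noisy_cost, np.zeros(24))
```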

  10. Effective dimensional reduction algorithm for eigenvalue problems for thin elastic structures: A paradigm in three dimensions

    PubMed Central

    Ovtchinnikov, Evgueni E.; Xanthis, Leonidas S.

    2000-01-01

    We present a methodology for the efficient numerical solution of eigenvalue problems of full three-dimensional elasticity for thin elastic structures, such as shells, plates and rods of arbitrary geometry, discretized by the finite element method. Such problems are solved by iterative methods, which, however, are known to suffer from slow convergence or even convergence failure, when the thickness is small. In this paper we show an effective way of resolving this difficulty by invoking a special preconditioning technique associated with the effective dimensional reduction algorithm (EDRA). As an example, we present an algorithm for computing the minimal eigenvalue of a thin elastic plate and we show both theoretically and numerically that it is robust with respect to both the thickness and discretization parameters, i.e. the convergence does not deteriorate with diminishing thickness or mesh refinement. This robustness is sine qua non for the efficient computation of large-scale eigenvalue problems for thin elastic structures. PMID:10655469

  11. Assessment of metal artifact reduction methods in pelvic CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdoli, Mehrsima; Mehranian, Abolfazl; Ailianou, Angeliki

    2016-04-15

    Purpose: Metal artifact reduction (MAR) produces images with improved quality potentially leading to confident and reliable clinical diagnosis and therapy planning. In this work, the authors evaluate the performance of five MAR techniques for the assessment of computed tomography images of patients with hip prostheses. Methods: Five MAR algorithms were evaluated using simulation and clinical studies. The algorithms included one-dimensional linear interpolation (LI) of the corrupted projection bins in the sinogram, two-dimensional interpolation (2D), a normalized metal artifact reduction (NMAR) technique, a metal deletion technique, and a maximum a posteriori completion (MAPC) approach. The algorithms were applied to ten simulated datasets as well as 30 clinical studies of patients with metallic hip implants. Qualitative evaluations were performed by two blinded experienced radiologists who ranked overall artifact severity and pelvic organ recognition for each algorithm by assigning scores from zero to five (zero indicating totally obscured organs with no structures identifiable and five indicating recognition with high confidence). Results: Simulation studies revealed that 2D, NMAR, and MAPC techniques performed almost equally well in all regions. LI falls behind the other approaches in terms of reducing dark streaking artifacts as well as preserving unaffected regions (p < 0.05). Visual assessment of clinical datasets revealed the superiority of NMAR and MAPC in the evaluated pelvic organs and in terms of overall image quality. Conclusions: Overall, all methods, except LI, performed equally well in artifact-free regions. Considering both clinical and simulation studies, 2D, NMAR, and MAPC seem to outperform the other techniques.
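
    Of the five methods compared, the one-dimensional linear interpolation (LI) approach is simple enough to sketch directly: corrupted bins along the metal trace of each projection are replaced by linear interpolation from their uncorrupted neighbours. The function below is a minimal Python sketch under that reading; the metal-trace segmentation and any reprojection steps of a full MAR pipeline are assumed to be available elsewhere.

```python
import numpy as np

def linear_interpolation_mar(sinogram, metal_mask):
    """Replace metal-corrupted bins in each projection row by 1-D linear
    interpolation from the nearest uncorrupted neighbours (LI-MAR sketch).

    sinogram:   (n_angles, n_bins) array of projection data.
    metal_mask: boolean array of the same shape, True where the metal trace is.
    """
    corrected = sinogram.astype(float).copy()
    bins = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_mask[i]
        if bad.any() and (~bad).any():
            corrected[i, bad] = np.interp(bins[bad], bins[~bad], sinogram[i, ~bad])
    return corrected
```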

  12. Radiation dose reduction using a neck detection algorithm for single spiral brain and cervical spine CT acquisition in the trauma setting.

    PubMed

    Ardley, Nicholas D; Lau, Ken K; Buchan, Kevin

    2013-12-01

    Cervical spine injuries occur in 4-8 % of adults with head trauma. Dual acquisition technique has been traditionally used for the CT scanning of brain and cervical spine. The purpose of this study was to determine the efficacy of radiation dose reduction by using a single acquisition technique that incorporated both anatomical regions with a dedicated neck detection algorithm. Thirty trauma patients for brain and cervical spine CT were included and were scanned with the single acquisition technique. The radiation doses from the single CT acquisition technique with the neck detection algorithm, which allowed appropriate independent dose administration relevant to brain and cervical spine regions, were recorded. Comparison was made both to the doses calculated from the simulation of the traditional dual acquisitions with matching parameters, and to the doses of retrospective dual acquisition legacy technique with the same sample size. The mean simulated dose for the traditional dual acquisition technique was 3.99 mSv, comparable to the average dose of 4.2 mSv from 30 previous patients who had CT of brain and cervical spine as dual acquisitions. The mean dose from the single acquisition technique was 3.35 mSv, resulting in a 16 % overall dose reduction. The images from the single acquisition technique were of excellent diagnostic quality. The new single acquisition CT technique incorporating the neck detection algorithm for brain and cervical spine significantly reduces the overall radiation dose by eliminating the unavoidable overlapping range between 2 anatomical regions which occurs with the traditional dual acquisition technique.

  13. Hearing through the noise: Biologically inspired noise reduction

    NASA Astrophysics Data System (ADS)

    Lee, Tyler Paul

    Vocal communication in the natural world demands that a listener perform a remarkably complicated task in real-time. Vocalizations mix with all other sounds in the environment as they travel to the listener, arriving as a jumbled low-dimensional signal. A listener must then use this signal to extract the structure corresponding to individual sound sources. How this computation is implemented in the brain remains poorly understood, yet an accurate description of such mechanisms would impact a variety of medical and technological applications of sound processing. In this thesis, I describe initial work on how neurons in the secondary auditory cortex of the Zebra Finch extract song from naturalistic background noise. I then build on our understanding of the function of these neurons by creating an algorithm that extracts speech from natural background noise using spectrotemporal modulations. The algorithm, implemented as an artificial neural network, can be flexibly applied to any class of signal or noise and performs better than an optimal frequency-based noise reduction algorithm for a variety of background noises and signal-to-noise ratios. One potential drawback to using spectrotemporal modulations for noise reduction, though, is that analyzing the modulations present in an ongoing sound requires a latency set by the slowest temporal modulation computed. The algorithm avoids this problem by reducing noise predictively, taking advantage of the large amount of temporal structure present in natural sounds. This predictive denoising has ties to recent work suggesting that the auditory system uses attention to focus on predicted regions of spectrotemporal space when performing auditory scene analysis.

  14. Two Procedures to Flag Radio Frequency Interference in the UV Plane

    NASA Astrophysics Data System (ADS)

    Sekhar, Srikrishna; Athreya, Ramana

    2018-07-01

    We present two algorithms to identify and flag radio frequency interference (RFI) in radio interferometric imaging data. The first algorithm utilizes the redundancy of visibilities inside a UV cell in the visibility plane to identify corrupted data, while varying the detection threshold in accordance with the observed reduction in noise with radial UV distance. In the second algorithm, we propose a scheme to detect faint RFI in the visibility time-channel (TC) plane of baselines. The efficacy of identifying RFI in the residual visibilities is reduced by the presence of ripples due to inaccurate subtraction of the strongest sources. This can be due to several reasons including primary beam asymmetries and other direction-dependent calibration errors. We eliminated these ripples by clipping the corresponding peaks in the associated Fourier plane. RFI was detected in the ripple-free TC plane but was flagged in the original visibilities. Application of these two algorithms to five different 150 MHz data sets from the GMRT resulted in a reduction in image noise of 20%–50% throughout the field along with a reduction in systematics and a corresponding increase in the number of detected sources. However, in comparing the mean flux densities before and after flagging RFI, we find a differential change with the fainter sources (25σ < S < 100 mJy) showing a change of ‑6% to +1% relative to the stronger sources (S > 100 mJy). We are unable to explain this effect, but it could be related to the CLEAN bias known for interferometers.
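
    A minimal sketch of the redundancy idea behind the first algorithm is given below: visibilities falling in the same UV cell are compared against a robust cell estimate and outliers are flagged. The radially varying detection threshold and the second, time-channel stage described in the paper are not modeled; the cell size and sigma threshold are illustrative assumptions.

```python
import numpy as np

def flag_uv_cell_outliers(u, v, vis, cell_size, n_sigma=5.0):
    """Flag visibilities that deviate strongly from the other samples falling
    in the same UV cell (sketch of the redundancy-based idea only).

    u, v: baseline coordinates; vis: complex visibilities.
    Returns a boolean array (True = flagged).
    """
    iu = np.floor(u / cell_size).astype(int)
    iv = np.floor(v / cell_size).astype(int)
    flags = np.zeros(vis.shape, dtype=bool)
    cells = {}
    for idx, key in enumerate(zip(iu, iv)):
        cells.setdefault(key, []).append(idx)
    for idx_list in cells.values():
        if len(idx_list) < 5:
            continue                                   # not enough redundancy to judge
        cell_vis = vis[idx_list]
        med = np.median(cell_vis.real) + 1j * np.median(cell_vis.imag)
        dev = np.abs(cell_vis - med)
        sigma = 1.4826 * np.median(dev)                # robust scale estimate
        if sigma == 0:
            continue
        flags[np.asarray(idx_list)[dev > n_sigma * sigma]] = True
    return flags
```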

  15. Observer Evaluation of a Metal Artifact Reduction Algorithm Applied to Head and Neck Cone Beam Computed Tomographic Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korpics, Mark; Surucu, Murat; Mescioglu, Ibrahim

    Purpose and Objectives: To quantify, through an observer study, the reduction in metal artifacts on cone beam computed tomographic (CBCT) images using a projection-interpolation algorithm, on images containing metal artifacts from dental fillings and implants in patients treated for head and neck (H&N) cancer. Methods and Materials: An interpolation-substitution algorithm was applied to H&N CBCT images containing metal artifacts from dental fillings and implants. Image quality with respect to metal artifacts was evaluated subjectively and objectively. First, 6 independent radiation oncologists were asked to rank randomly sorted blinded images (before and after metal artifact reduction) using a 5-point rating scalemore » (1 = severe artifacts; 5 = no artifacts). Second, the standard deviation of different regions of interest (ROI) within each image was calculated and compared with the mean rating scores. Results: The interpolation-substitution technique successfully reduced metal artifacts in 70% of the cases. From a total of 60 images from 15 H&N cancer patients undergoing image guided radiation therapy, the mean rating score on the uncorrected images was 2.3 ± 1.1, versus 3.3 ± 1.0 for the corrected images. The mean difference in ranking score between uncorrected and corrected images was 1.0 (95% confidence interval: 0.9-1.2, P<.05). The standard deviation of each ROI significantly decreased after artifact reduction (P<.01). Moreover, a negative correlation between the mean rating score for each image and the standard deviation of the oral cavity and bilateral cheeks was observed. Conclusion: The interpolation-substitution algorithm is efficient and effective for reducing metal artifacts caused by dental fillings and implants on CBCT images, as demonstrated by the statistically significant increase in observer image quality ranking and by the decrease in ROI standard deviation between uncorrected and corrected images.« less

  16. Impact of iterative metal artifact reduction on diagnostic image quality in patients with dental hardware.

    PubMed

    Weiß, Jakob; Schabel, Christoph; Bongers, Malte; Raupach, Rainer; Clasen, Stephan; Notohamiprodjo, Mike; Nikolaou, Konstantin; Bamberg, Fabian

    2017-03-01

    Background Metal artifacts often impair diagnostic accuracy in computed tomography (CT) imaging. Therefore, effective and workflow implemented metal artifact reduction algorithms are crucial to gain higher diagnostic image quality in patients with metallic hardware. Purpose To assess the clinical performance of a novel iterative metal artifact reduction (iMAR) algorithm for CT in patients with dental fillings. Material and Methods Thirty consecutive patients scheduled for CT imaging and dental fillings were included in the analysis. All patients underwent CT imaging using a second generation dual-source CT scanner (120 kV single-energy; 100/Sn140 kV in dual-energy, 219 mAs, gantry rotation time 0.28-1/s, collimation 0.6 mm) as part of their clinical work-up. Post-processing included standard kernel (B49) and an iterative MAR algorithm. Image quality and diagnostic value were assessed qualitatively (Likert scale) and quantitatively (HU ± SD) by two reviewers independently. Results All 30 patients were included in the analysis, with equal reconstruction times for iMAR and standard reconstruction (17 s ± 0.5 vs. 19 s ± 0.5; P > 0.05). Visual image quality was significantly higher for iMAR as compared with standard reconstruction (3.8 ± 0.5 vs. 2.6 ± 0.5; P < 0.0001, respectively) and showed improved evaluation of adjacent anatomical structures. Similarly, HU-based measurements of degree of artifacts were significantly lower in the iMAR reconstructions as compared with the standard reconstruction (0.9 ± 1.6 vs. -20 ± 47; P < 0.05, respectively). Conclusion The tested iterative, raw-data based reconstruction MAR algorithm allows for a significant reduction of metal artifacts and improved evaluation of adjacent anatomical structures in the head and neck area in patients with dental hardware.

  17. A Spectral Algorithm for Envelope Reduction of Sparse Matrices

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.

    1993-01-01

    The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
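
    The core of the spectral reordering can be sketched compactly: form the Laplacian of the matrix's nonzero pattern, take the eigenvector of the second-smallest eigenvalue, and sort its components. The sketch below uses a dense eigensolver for brevity (a Lanczos-type solver would be used at scale) and omits the closest-permutation refinement discussed in the paper.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import laplacian

def spectral_envelope_ordering(A):
    """Spectral reordering of a sparse symmetric matrix (sketch).

    Associates a graph Laplacian with the nonzero pattern of A, computes the
    eigenvector of the second-smallest eigenvalue (the Fiedler vector), and
    returns the permutation obtained by sorting its components.
    """
    A = csr_matrix(A)
    pattern = csr_matrix((abs(A) > 0).astype(float))   # adjacency structure only
    pattern.setdiag(0)
    pattern.eliminate_zeros()
    L = laplacian(pattern, normed=False)
    vals, vecs = np.linalg.eigh(L.toarray())           # dense solve keeps the sketch short
    fiedler = vecs[:, np.argsort(vals)[1]]             # second-smallest eigenvalue
    return np.argsort(fiedler)

# usage: perm = spectral_envelope_ordering(A); A_perm = A[perm][:, perm]
```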

  18. The high performance parallel algorithm for Unified Gas-Kinetic Scheme

    NASA Astrophysics Data System (ADS)

    Li, Shiyi; Li, Qibing; Fu, Song; Xu, Jinxiu

    2016-11-01

    A high performance parallel algorithm for UGKS is developed to simulate three-dimensional internal and external flows on arbitrary grid systems. The physical domain and velocity domain are divided into different blocks and distributed according to the two-dimensional Cartesian topology, with intra-communicators in the physical domain for data exchange and other intra-communicators in the velocity domain for sum reduction to moment integrals. Numerical results of three-dimensional cavity flow and flow past a sphere agree well with the results from the existing studies and validate the applicability of the algorithm. The scalability of the algorithm is tested both on small (1-16) and large (729-5832) scale processors. The tested speed-up ratio is near linear and thus the efficiency is around 1, which reveals the good scalability of the present algorithm.
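
    A communicator layout of the kind described, a 2-D Cartesian process grid with one sub-communicator along the physical-space axis and one along the velocity-space axis, can be sketched with mpi4py as below. This mirrors the stated decomposition only; it is not the authors' code, and the partial-moment example is an illustrative stand-in.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
# 2-D process grid: one axis for physical-space blocks, one for velocity-space blocks
dims = MPI.Compute_dims(comm.Get_size(), 2)
cart = comm.Create_cart(dims, periods=[False, False], reorder=True)
px, pv = cart.Get_coords(cart.Get_rank())

# sub-communicator across physical blocks (same velocity block): halo/data exchange
phys_comm = cart.Sub([True, False])
# sub-communicator across velocity blocks (same physical block): moment reduction
vel_comm = cart.Sub([False, True])

# example: each rank holds a partial moment integral over its velocity block;
# summing across vel_comm yields the full moment for that physical block
partial_moment = np.array([float(pv + 1)])
full_moment = np.empty_like(partial_moment)
vel_comm.Allreduce(partial_moment, full_moment, op=MPI.SUM)
```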

  19. A parameter estimation algorithm for spatial sine testing - Theory and evaluation

    NASA Technical Reports Server (NTRS)

    Rost, R. W.; Deblauwe, F.

    1992-01-01

    This paper presents the theory and an evaluation of a spatial sine testing parameter estimation algorithm that uses directly the measured forced mode of vibration and the measured force vector. The parameter estimation algorithm uses an ARMA model and a recursive QR algorithm is applied for data reduction. In this first evaluation, the algorithm has been applied to a frequency response matrix (which is a particular set of forced mode of vibration) using a sliding frequency window. The objective of the sliding frequency window is to execute the analysis simultaneously with the data acquisition. Since the pole values and the modal density are obtained from this analysis during the acquisition, the analysis information can be used to help determine the forcing vectors during the experimental data acquisition.

  20. Scalable and fault tolerant orthogonalization based on randomized distributed data aggregation

    PubMed Central

    Gansterer, Wilfried N.; Niederbrucker, Gerhard; Straková, Hana; Schulze Grotthoff, Stefan

    2013-01-01

    The construction of distributed algorithms for matrix computations built on top of distributed data aggregation algorithms with randomized communication schedules is investigated. For this purpose, a new aggregation algorithm for summing or averaging distributed values, the push-flow algorithm, is developed, which achieves superior resilience properties with respect to failures compared to existing aggregation methods. It is illustrated that on a hypercube topology it asymptotically requires the same number of iterations as the optimal all-to-all reduction operation and that it scales well with the number of nodes. Orthogonalization is studied as a prototypical matrix computation task. A new fault tolerant distributed orthogonalization method rdmGS, which can produce accurate results even in the presence of node failures, is built on top of distributed data aggregation algorithms. PMID:24748902

  1. Multimedia systems in ultrasound image boundary detection and measurements

    NASA Astrophysics Data System (ADS)

    Pathak, Sayan D.; Chalana, Vikram; Kim, Yongmin

    1997-05-01

    Ultrasound as a medical imaging modality offers the clinician a real-time view of the anatomy of the internal organs/tissues, their movement, and flow noninvasively. One of the applications of ultrasound is to monitor fetal growth by measuring biparietal diameter (BPD) and head circumference (HC). We have been working on automatic detection of fetal head boundaries in ultrasound images. These detected boundaries are used to measure BPD and HC. The boundary detection algorithm is based on active contour models and takes 32 seconds on an external high-end workstation, SUN SparcStation 20/71. Our goal has been to make this tool available within an ultrasound machine and at the same time significantly improve its performance utilizing multimedia technology. With the advent of high-performance programmable digital signal processors (DSP), a software solution within an ultrasound machine, instead of the traditional hardwired approach or requiring an external computer, is now possible. We have integrated our boundary detection algorithm into a programmable ultrasound image processor (PUIP) that fits into a commercial ultrasound machine. The PUIP provides both the high computing power and flexibility needed to support computationally-intensive image processing algorithms within an ultrasound machine. According to our data analysis, BPD/HC measurements made on PUIP lie within the interobserver variability. Hence, the errors in the automated BPD/HC measurements using the algorithm are on the same order as the average interobserver differences. On PUIP, it takes 360 ms to measure the values of BPD/HC on one head image. When processing multiple head images in sequence, it takes 185 ms per image, thus enabling 5.4 BPD/HC measurements per second. Reduction in the overall execution time from 32 seconds to a fraction of a second and making this multimedia system available within an ultrasound machine will help this image processing algorithm and other computation-intensive imaging applications become a practical tool for sonographers in the future.

  2. A Self-Organizing Spatial Clustering Approach to Support Large-Scale Network RTK Systems.

    PubMed

    Shen, Lili; Guo, Jiming; Wang, Lei

    2018-06-06

    The network real-time kinematic (RTK) technique can provide centimeter-level real time positioning solutions and play a key role in geo-spatial infrastructure. With ever-increasing popularity, network RTK systems will face issues in the support of large numbers of concurrent users. In the past, high-precision positioning services were oriented towards professionals and only supported a few concurrent users. Currently, precise positioning provides a spatial foundation for artificial intelligence (AI), and countless smart devices (autonomous cars, unmanned aerial-vehicles (UAVs), robotic equipment, etc.) require precise positioning services. Therefore, the development of approaches to support large-scale network RTK systems is urgent. In this study, we proposed a self-organizing spatial clustering (SOSC) approach which automatically clusters online users to reduce the computational load on the network RTK system server side. The experimental results indicate that both the SOSC algorithm and the grid algorithm can reduce the computational load efficiently, while the SOSC algorithm gives a more elastic and adaptive clustering solution with different datasets. The SOSC algorithm determines the cluster number and the mean distance to cluster center (MDTCC) according to the data set, while the grid approaches are all predefined. The side-effects of clustering algorithms on the user side are analyzed with real global navigation satellite system (GNSS) data sets. The experimental results indicate that 10 km can be safely used as the cluster radius threshold for the SOSC algorithm without significantly reducing the positioning precision and reliability on the user side.
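
    The abstract does not spell out the SOSC update rules, so the sketch below uses a generic online, radius-limited clustering as a stand-in: a user joins the nearest existing cluster if it lies within the 10 km threshold mentioned in the results, otherwise a new cluster is opened. Planar kilometre coordinates are assumed for simplicity; real GNSS positions would need geodesic distances.

```python
import numpy as np

def online_radius_clustering(positions_km, radius_km=10.0):
    """Assign users to clusters so that no user is farther than radius_km from
    its cluster centre (generic online sketch, not the paper's SOSC method).

    positions_km: (n_users, 2) array of planar coordinates in kilometres.
    Returns (labels, centres).
    """
    centres = []
    labels = np.empty(len(positions_km), dtype=int)
    for i, p in enumerate(positions_km):
        if centres:
            d = np.linalg.norm(np.asarray(centres) - p, axis=1)
            j = int(np.argmin(d))
            if d[j] <= radius_km:
                labels[i] = j                 # join the nearest existing cluster
                continue
        centres.append(p.copy())              # open a new cluster for this user
        labels[i] = len(centres) - 1
    return labels, np.asarray(centres)
```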

  3. DoD Force & Infrastructure Categories: A FYDP-Based Conceptual Model of Department of Defense Programs and Resources

    DTIC Science & Technology

    2002-09-01

    [Record excerpt: a fragment of the report's FYDP program-element mapping table, pairing program elements with support categories, e.g. 0605160D Counterproliferation Support; 0901502A/F/N Service Support to DTSA (1F2C, Int'l Engagement & Threat Reduction); 0901503A Service Support to OSD; Base Operations Support.]

  4. Peak-to-average power ratio reduction in orthogonal frequency division multiplexing-based visible light communication systems using a modified partial transmit sequence technique

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Deng, Honggui; Ren, Shuang; Tang, Chengying; Qian, Xuewen

    2018-01-01

    We propose an efficient partial transmit sequence technique based on a genetic algorithm and a peak-value optimization algorithm (GAPOA) to reduce the high peak-to-average power ratio (PAPR) in visible light communication systems based on orthogonal frequency division multiplexing (VLC-OFDM). After analyzing the pros and cons of the hill-climbing algorithm, we propose the POA, with its excellent local search ability, to further process the signals whose PAPR is still over the threshold after being processed by the genetic algorithm (GA). To verify the effectiveness of the proposed technique and algorithm, we evaluate the PAPR performance and the bit error rate (BER) performance and compare them with the partial transmit sequence (PTS) technique based on GA (GA-PTS), the PTS technique based on genetic and hill-climbing algorithms (GH-PTS), and PTS based on the shuffled frog leaping algorithm and hill-climbing algorithm (SFLAHC-PTS). The results show that our technique and algorithm have not only better PAPR performance but also lower computational complexity and BER than the GA-PTS, GH-PTS, and SFLAHC-PTS techniques.
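
    For reference, a plain exhaustive-search PTS is easy to state, and it is the baseline that GA-based searches such as GAPOA accelerate. The sketch below partitions the subcarriers into interleaved blocks, tries every combination of a small phase set, and keeps the lowest-PAPR combination; the block count, phase set, and QPSK test symbol are illustrative assumptions.

```python
import numpy as np
from itertools import product

def papr_db(x):
    """Peak-to-average power ratio of a time-domain symbol, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def pts_exhaustive(X, n_sub=4, phases=(1, -1, 1j, -1j)):
    """Exhaustive partial transmit sequence (PTS) search (sketch).

    X: frequency-domain OFDM symbol. Subcarriers are split into n_sub
    interleaved blocks; each block is rotated by a candidate phase factor
    and the combination with the lowest PAPR is kept.
    """
    N = len(X)
    blocks = np.zeros((n_sub, N), dtype=complex)
    for b in range(n_sub):
        blocks[b, b::n_sub] = X[b::n_sub]              # interleaved partition
    time_blocks = np.fft.ifft(blocks, axis=1)
    best, best_papr = None, np.inf
    for w in product(phases, repeat=n_sub):
        x = (np.asarray(w)[:, None] * time_blocks).sum(axis=0)
        p = papr_db(x)
        if p < best_papr:
            best, best_papr = x, p
    return best, best_papr

# usage: random QPSK symbol on 64 subcarriers
X = np.exp(1j * np.pi / 2 * np.random.randint(0, 4, 64))
x_opt, papr_opt = pts_exhaustive(X)
```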

  5. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-12-30

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.
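
    The stages named in the abstract (decimation, a predefined compressor such as JPEG, decompression, interpolation back to size, and edge sharpening) can be strung together from stock Pillow operations, as in the hedged sketch below. The specific sharpening techniques of the patent are not reproduced; the reduction factor and JPEG quality are illustrative.

```python
from io import BytesIO
from PIL import Image, ImageFilter

def compress_decompress(img, factor=2, quality=75):
    """Decimate -> JPEG encode -> decode -> interpolate back -> sharpen
    (sketch of the pipeline's stages using stock PIL operations)."""
    w, h = img.size
    reduced = img.resize((w // factor, h // factor), Image.LANCZOS)  # decimation
    buf = BytesIO()
    reduced.save(buf, format='JPEG', quality=quality)                # predefined compressor
    decoded = Image.open(BytesIO(buf.getvalue()))                    # inverse (JPEG decode)
    restored = decoded.resize((w, h), Image.BICUBIC)                 # interpolate to original size
    return restored.filter(ImageFilter.SHARPEN)                      # edge sharpening

# usage: out = compress_decompress(Image.open('frame.png').convert('RGB'))
```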

  6. Why don’t you use Evolutionary Algorithms in Big Data?

    NASA Astrophysics Data System (ADS)

    Stanovov, Vladimir; Brester, Christina; Kolehmainen, Mikko; Semenkina, Olga

    2017-02-01

    In this paper we raise the question of using evolutionary algorithms in the area of Big Data processing. We show that evolutionary algorithms provide evident advantages due to their high scalability and flexibility, their ability to solve global optimization problems and optimize several criteria at the same time for feature selection, instance selection and other data reduction problems. In particular, we consider the usage of evolutionary algorithms with all kinds of machine learning tools, such as neural networks and fuzzy systems. All our examples prove that Evolutionary Machine Learning is becoming more and more important in data analysis and we expect to see the further development of this field especially in respect to Big Data.

  7. Methods and Algorithms for Computer-aided Engineering of Die Tooling of Compressor Blades from Titanium Alloy

    NASA Astrophysics Data System (ADS)

    Khaimovich, A. I.; Khaimovich, I. N.

    2018-01-01

    The articles provides the calculation algorithms for blank design and die forming fitting to produce the compressor blades for aircraft engines. The design system proposed in the article allows generating drafts of trimming and reducing dies automatically, leading to significant reduction of work preparation time. The detailed analysis of the blade structural elements features was carried out, the taken limitations and technological solutions allowed to form generalized algorithms of forming parting stamp face over the entire circuit of the engraving for different configurations of die forgings. The author worked out the algorithms and programs to calculate three dimensional point locations describing the configuration of die cavity.

  8. Differentially Private Frequent Subgraph Mining

    PubMed Central

    Xu, Shengzhi; Xiong, Li; Cheng, Xiang; Xiao, Ke

    2016-01-01

    Mining frequent subgraphs from a collection of input graphs is an important topic in data mining research. However, if the input graphs contain sensitive information, releasing frequent subgraphs may pose considerable threats to individuals' privacy. In this paper, we study the problem of frequent subgraph mining (FGM) under the rigorous differential privacy model. We introduce a novel differentially private FGM algorithm, which is referred to as DFG. In this algorithm, we first privately identify frequent subgraphs from input graphs, and then compute the noisy support of each identified frequent subgraph. In particular, to privately identify frequent subgraphs, we present a frequent subgraph identification approach which can improve the utility of frequent subgraph identifications through candidates pruning. Moreover, to compute the noisy support of each identified frequent subgraph, we devise a lattice-based noisy support derivation approach, where a series of methods has been proposed to improve the accuracy of the noisy supports. Through formal privacy analysis, we prove that our DFG algorithm satisfies ε-differential privacy. Extensive experimental results on real datasets show that the DFG algorithm can privately find frequent subgraphs with high data utility. PMID:27616876
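
    Only the second phase, perturbing the supports of already-identified frequent subgraphs, is simple enough to sketch without the paper's details; the private identification step and the lattice-based refinement of DFG are not reproduced. The sketch below applies the standard Laplace mechanism, assuming each support count has sensitivity 1; a full algorithm would also split the overall privacy budget across phases and counts.

```python
import numpy as np

def noisy_supports(true_supports, epsilon, sensitivity=1.0, seed=None):
    """Add Laplace noise to the support counts of already-identified frequent
    subgraphs (sketch of one phase only, not the DFG algorithm itself).

    With sensitivity 1 (adding or removing one input graph changes each
    support by at most 1), Laplace(sensitivity/epsilon) noise gives
    epsilon-differential privacy per released count.
    """
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    return {g: s + rng.laplace(0.0, scale) for g, s in true_supports.items()}

# usage: noisy = noisy_supports({'g1': 120, 'g2': 87}, epsilon=0.5)
```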

  9. Progressive Classification Using Support Vector Machines

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri; Kocurek, Michael

    2009-01-01

    An algorithm for progressive classification of data, analogous to progressive rendering of images, makes it possible to compromise between speed and accuracy. This algorithm uses support vector machines (SVMs) to classify data. An SVM is a machine learning algorithm that builds a mathematical model of the desired classification concept by identifying the critical data points, called support vectors. Coarse approximations to the concept require only a few support vectors, while precise, highly accurate models require far more support vectors. Once the model has been constructed, the SVM can be applied to new observations. The cost of classifying a new observation is proportional to the number of support vectors in the model. When computational resources are limited, an SVM of the appropriate complexity can be produced. However, if the constraints are not known when the model is constructed, or if they can change over time, a method for adaptively responding to the current resource constraints is required. This capability is particularly relevant for spacecraft (or any other real-time systems) that perform onboard data analysis. The new algorithm enables the fast, interactive application of an SVM classifier to a new set of data. The classification process achieved by this algorithm is characterized as progressive because a coarse approximation to the true classification is generated rapidly and thereafter iteratively refined. The algorithm uses two SVMs: (1) a fast, approximate one and (2) slow, highly accurate one. New data are initially classified by the fast SVM, producing a baseline approximate classification. For each classified data point, the algorithm calculates a confidence index that indicates the likelihood that it was classified correctly in the first pass. Next, the data points are sorted by their confidence indices and progressively reclassified by the slower, more accurate SVM, starting with the items most likely to be incorrectly classified. The user can halt this reclassification process at any point, thereby obtaining the best possible result for a given amount of computation time. Alternatively, the results can be displayed as they are generated, providing the user with real-time feedback about the current accuracy of classification.
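
    A minimal sketch of the two-SVM scheme, using scikit-learn and assuming a binary problem, is shown below: a fast linear SVM labels everything, its decision margin serves as the confidence index, and a limited budget of the least-confident points is re-labelled by a slower RBF SVM. The specific confidence index and model pair of the described implementation are not given in the abstract, so these are illustrative choices.

```python
import numpy as np
from sklearn.svm import LinearSVC, SVC

def progressive_classify(X_train, y_train, X_new, budget):
    """Progressive SVM classification (binary-case sketch): a fast linear SVM
    labels everything, then the `budget` least-confident points are
    re-labelled by a slower, more accurate RBF SVM.
    """
    fast = LinearSVC().fit(X_train, y_train)
    slow = SVC(kernel='rbf', C=10.0).fit(X_train, y_train)

    labels = fast.predict(X_new)                   # coarse first pass
    conf = np.abs(fast.decision_function(X_new))   # margin as a confidence index
    order = np.argsort(conf)                       # least confident first
    refine = order[:budget]
    labels[refine] = slow.predict(X_new[refine])   # progressive refinement
    return labels
```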

  10. The Goes-R Geostationary Lightning Mapper (GLM)

    NASA Technical Reports Server (NTRS)

    Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Mach, Douglas

    2011-01-01

    The Geostationary Operational Environmental Satellite (GOES-R) is the next series to follow the existing GOES system currently operating over the Western Hemisphere. Superior spacecraft and instrument technology will support expanded detection of environmental phenomena, resulting in more timely and accurate forecasts and warnings. Advancements over current GOES capabilities include a new capability for total lightning detection (cloud and cloud-to-ground flashes) from the Geostationary Lightning Mapper (GLM), and improved storm diagnostic capability with the Advanced Baseline Imager. The GLM will map total lightning activity (in-cloud and cloud-to-ground lighting flashes) continuously day and night with near-uniform spatial resolution of 8 km with a product refresh rate of less than 20 sec over the Americas and adjacent oceanic regions. This will aid in forecasting severe storms and tornado activity, and convective weather impacts on aviation safety and efficiency. In parallel with the instrument development, a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms, cal/val performance monitoring tools, and new applications. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds are being used to develop the pre-launch algorithms and applications, and also improve our knowledge of thunderstorm initiation and evolution. In this paper we will report on new Nowcasting and storm warning applications being developed and evaluated at various NOAA Testbeds.

  11. Applying active learning to supervised word sense disambiguation in MEDLINE.

    PubMed

    Chen, Yukun; Cao, Hongxin; Mei, Qiaozhu; Zheng, Kai; Xu, Hua

    2013-01-01

    This study was to assess whether active learning strategies can be integrated with supervised word sense disambiguation (WSD) methods, thus reducing the number of annotated samples, while keeping or improving the quality of disambiguation models. We developed support vector machine (SVM) classifiers to disambiguate 197 ambiguous terms and abbreviations in the MSH WSD collection. Three different uncertainty sampling-based active learning algorithms were implemented with the SVM classifiers and were compared with a passive learner (PL) based on random sampling. For each ambiguous term and each learning algorithm, a learning curve that plots the accuracy computed from the test set as a function of the number of annotated samples used in the model was generated. The area under the learning curve (ALC) was used as the primary metric for evaluation. Our experiments demonstrated that active learners (ALs) significantly outperformed the PL, showing better performance for 177 out of 197 (89.8%) WSD tasks. Further analysis showed that to achieve an average accuracy of 90%, the PL needed 38 annotated samples, while the ALs needed only 24, a 37% reduction in annotation effort. Moreover, we analyzed cases where active learning algorithms did not achieve superior performance and identified three causes: (1) poor models in the early learning stage; (2) easy WSD cases; and (3) difficult WSD cases, which provide useful insight for future improvements. This study demonstrated that integrating active learning strategies with supervised WSD methods could effectively reduce annotation cost and improve the disambiguation models.
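
    The general uncertainty-sampling loop can be sketched with scikit-learn as below; the MSH WSD features and the three specific samplers of the study are not reproduced, and the labelled pool simply stands in for the human annotator. A binary task and an initial sample containing both classes are assumed.

```python
import numpy as np
from sklearn.svm import SVC

def uncertainty_sampling_loop(X_pool, y_pool, n_init=10, n_queries=30, batch=1):
    """Pool-based uncertainty sampling with an SVM (generic sketch).

    y_pool stands in for the oracle/annotator; assumes a binary task and
    that the initial random sample contains both classes.
    """
    rng = np.random.default_rng(0)
    labelled = list(rng.choice(len(X_pool), size=n_init, replace=False))
    for _ in range(n_queries):
        clf = SVC(kernel='linear').fit(X_pool[labelled], y_pool[labelled])
        unlabelled = np.setdiff1d(np.arange(len(X_pool)), labelled)
        margins = np.abs(clf.decision_function(X_pool[unlabelled]))  # small = uncertain
        query = unlabelled[np.argsort(margins)[:batch]]              # most uncertain points
        labelled.extend(query.tolist())                              # 'annotate' them
    return clf, labelled
```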

  12. Characterization (environmental Signature) and Function of the Main Instrumented (monitoring Water Quality Network in Real Time) Rivers Atoyac and Zahuapan in High Atoyac Basin; in Dry, Rain and Winter Season 2013-2014; Puebla-Tlaxcala Mexico

    NASA Astrophysics Data System (ADS)

    Tavera, E. M.; Rodriguez-Espinosa, P. F.; Morales-Garcia, S. S.; Muñoz-Sevilla, N. P.

    2014-12-01

    The Zahuapan and Atoyac rivers were characterized in the Upper Atoyac basin through the integration of physical and chemical parameters (environmental signature), determining the behavior and function of the basin as a tool for measuring and monitoring water quality and managing the water resources of one of the most polluted rivers in Mexico. To determine the environmental signature, the water was characterized through 11 physicochemical parameters: temperature (T), potential of hydrogen (pH), dissolved oxygen (DO), spectral absorption coefficient (SAC), oxidation-reduction potential (ORP), turbidity (Turb), conductivity, biochemical oxygen demand over 5 days (BOD5), chemical oxygen demand (COD), total suspended solids (TSS) and total dissolved solids (TDS). These were evaluated at 49 sites in the dry season, 47 in the rainy season and 23 in the winter season in the Atoyac and Zahuapan rivers of the Alto Atoyac basin, Puebla-Tlaxcala, Mexico, and a mathematical algorithm was found to assimilate and better represent the information obtained. The algorithm yields correlations greater than 0.85. The results allow us to propose the algorithm for use in the monitoring stations to process the assimilated information. This measurement and monitoring of water quality supports the real-time monitoring network project and the actions to clean up the Atoyac River in the urban area of the city of Puebla.

  13. Applying active learning to supervised word sense disambiguation in MEDLINE

    PubMed Central

    Chen, Yukun; Cao, Hongxin; Mei, Qiaozhu; Zheng, Kai; Xu, Hua

    2013-01-01

    Objectives This study was to assess whether active learning strategies can be integrated with supervised word sense disambiguation (WSD) methods, thus reducing the number of annotated samples, while keeping or improving the quality of disambiguation models. Methods We developed support vector machine (SVM) classifiers to disambiguate 197 ambiguous terms and abbreviations in the MSH WSD collection. Three different uncertainty sampling-based active learning algorithms were implemented with the SVM classifiers and were compared with a passive learner (PL) based on random sampling. For each ambiguous term and each learning algorithm, a learning curve that plots the accuracy computed from the test set as a function of the number of annotated samples used in the model was generated. The area under the learning curve (ALC) was used as the primary metric for evaluation. Results Our experiments demonstrated that active learners (ALs) significantly outperformed the PL, showing better performance for 177 out of 197 (89.8%) WSD tasks. Further analysis showed that to achieve an average accuracy of 90%, the PL needed 38 annotated samples, while the ALs needed only 24, a 37% reduction in annotation effort. Moreover, we analyzed cases where active learning algorithms did not achieve superior performance and identified three causes: (1) poor models in the early learning stage; (2) easy WSD cases; and (3) difficult WSD cases, which provide useful insight for future improvements. Conclusions This study demonstrated that integrating active learning strategies with supervised WSD methods could effectively reduce annotation cost and improve the disambiguation models. PMID:23364851

  14. Introducing parallelism to histogramming functions for GEM systems

    NASA Astrophysics Data System (ADS)

    Krawczyk, Rafał D.; Czarski, Tomasz; Kolasinski, Piotr; Pozniak, Krzysztof T.; Linczuk, Maciej; Byszuk, Adrian; Chernyshova, Maryna; Juszczyk, Bartlomiej; Kasprowicz, Grzegorz; Wojenski, Andrzej; Zabolotny, Wojciech

    2015-09-01

    This article is an assessment of the potential parallelization of histogramming algorithms in a GEM detector system. Histogramming and preprocessing algorithms in MATLAB were analyzed with regard to adding parallelism. A preliminary implementation of parallel strip histogramming resulted in a speedup. An analysis of the algorithms' parallelizability is presented. An overview of potential hardware and software support for implementing the parallel algorithm is discussed.

  15. Developing operation algorithms for vision subsystems in autonomous mobile robots

    NASA Astrophysics Data System (ADS)

    Shikhman, M. V.; Shidlovskiy, S. V.

    2018-05-01

    The paper analyzes algorithms for selecting keypoints on the image for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients and the support vector method. The combination of these methods allows successful selection of dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.
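
    A compact version of the described detection pipeline, HOG features fed to a linear SVM, can be put together from scikit-image and scikit-learn as below. Window size, HOG parameters, and the SVM regularization constant are illustrative assumptions rather than values from the paper.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def train_hog_svm(images, labels):
    """Train a HOG + linear SVM detector (sketch of the described pipeline).

    images: list of equally sized grayscale arrays (e.g. 64x128 windows);
    labels: 1 for person/obstacle, 0 for background.
    """
    feats = [hog(im, orientations=9, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2), block_norm='L2-Hys') for im in images]
    return LinearSVC(C=0.01).fit(np.array(feats), labels)

def score_window(clf, window):
    """Return the SVM decision value for one candidate window."""
    f = hog(window, orientations=9, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2), block_norm='L2-Hys')
    return float(clf.decision_function(f.reshape(1, -1))[0])
```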

  16. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert M.

    2013-01-01

    A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.

  17. Online Normalization Algorithm for Engine Turbofan Monitoring

    DTIC Science & Technology

    2014-10-02

    Lacaille, Jérôme (Snecma, 77550 Moissy-Cramayel, France); Bellas, Anastasios. [Truncated record excerpt] ...to understand the behavior of a turbofan engine, one first needs to deal with the variety of data acquisition contexts. Each time a set of measurements is... it auto-adapts itself with piecewise linear models. Turbofan engine abnormality diagnosis uses three steps: reduction of...

  18. A wavelet and least square filter based spatial-spectral denoising approach of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Li, Ting; Chen, Xiao-Mei; Chen, Gang; Xue, Bo; Ni, Guo-Qiang

    2009-11-01

    Noise reduction is a crucial step in hyperspectral imagery pre-processing. Owing to sensor characteristics, the noise of hyperspectral imagery appears in both the spatial and the spectral domain. However, most prevailing denoising techniques process the imagery in only one specific domain and do not exploit the multi-domain nature of hyperspectral imagery. In this paper, a new spatial-spectral noise reduction algorithm is proposed, which is based on wavelet analysis and least squares filtering techniques. First, in the spatial domain, a new stationary wavelet shrinking algorithm with an improved threshold function is utilized to adjust the noise level band-by-band. This new algorithm uses BayesShrink for threshold estimation, and amends the traditional soft-threshold function by adding shape tuning parameters. Compared with the soft or hard threshold function, the improved one, which is first-order differentiable and has a smooth transitional region between noise and signal, preserves more image edge detail and weakens pseudo-Gibbs artifacts. Then, in the spectral domain, a cubic Savitzky-Golay filter based on the least squares method is used to remove spectral noise and any artificial noise that may have been introduced during the spatial denoising. By appropriately selecting the filter window width according to prior knowledge, this algorithm smooths the spectral curve effectively. The performance of the new algorithm is evaluated on a set of Hyperion images acquired in 2007. The result shows that the new spatial-spectral denoising algorithm provides a more significant signal-to-noise-ratio improvement than traditional spatial or spectral methods, while better preserving the local spectral absorption features.
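
    The two-stage structure can be sketched with PyWavelets and SciPy as below: band-by-band 2-D wavelet shrinkage with a BayesShrink-style threshold, followed by a cubic Savitzky-Golay filter along the spectral axis. The paper's shape-tunable threshold function is replaced here by plain soft thresholding, so this is a simplified stand-in rather than the proposed algorithm; the wavelet choice and window width are illustrative.

```python
import numpy as np
import pywt
from scipy.signal import savgol_filter

def spatial_spectral_denoise(cube, wavelet='sym8', sg_window=9, sg_order=3):
    """Spatial wavelet shrinkage followed by spectral Savitzky-Golay smoothing
    (simplified two-stage sketch).

    cube: hyperspectral data as (bands, rows, cols); assumes at least
    sg_window bands and sg_window odd.
    """
    out = np.empty_like(cube, dtype=float)
    for b in range(cube.shape[0]):                           # band-by-band spatial stage
        coeffs = pywt.wavedec2(cube[b], wavelet, level=2)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise estimate from finest HH
        new_coeffs = [coeffs[0]]
        for detail in coeffs[1:]:
            thresholded = []
            for d in detail:
                sig_x2 = max(np.mean(d ** 2) - sigma ** 2, 1e-12)
                thr = sigma ** 2 / np.sqrt(sig_x2)           # BayesShrink-style threshold
                thresholded.append(pywt.threshold(d, thr, mode='soft'))
            new_coeffs.append(tuple(thresholded))
        out[b] = pywt.waverec2(new_coeffs, wavelet)[:cube.shape[1], :cube.shape[2]]
    # spectral stage: cubic Savitzky-Golay filter along the band axis
    return savgol_filter(out, sg_window, sg_order, axis=0)
```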

  19. An Out-of-Core GPU based dimensionality reduction algorithm for Big Mass Spectrometry Data and its application in bottom-up Proteomics.

    PubMed

    Awan, Muaaz Gul; Saeed, Fahad

    2017-08-01

    Modern high resolution Mass Spectrometry instruments can generate millions of spectra in a single systems biology experiment. Each spectrum consists of thousands of peaks, but only a small number of peaks actively contribute to the deduction of peptides. Therefore, pre-processing of MS data to detect noisy and non-useful peaks is an active area of research. Most of the sequential noise-reducing algorithms are impractical to use as a pre-processing step due to their high time-complexity. In this paper, we present a GPU based dimensionality-reduction algorithm, called G-MSR, for MS2 spectra. Our proposed algorithm uses novel data structures which optimize the memory and computational operations inside the GPU. These novel data structures include Binary Spectra and Quantized Indexed Spectra (QIS). The former helps in communicating essential information between CPU and GPU using a minimum amount of data, while the latter enables us to store and process a complex 3-D data structure in a 1-D array structure while maintaining the integrity of the MS data. Our proposed algorithm also takes into account the limited memory of GPUs and switches between in-core and out-of-core modes based upon the size of the input data. G-MSR achieves a peak speed-up of 386x over its sequential counterpart and is shown to process over a million spectra in just 32 seconds. The code for this algorithm is available as a GPL open-source at GitHub at the following link: https://github.com/pcdslab/G-MSR.

  20. Active Learning Using Hint Information.

    PubMed

    Li, Chun-Liang; Ferng, Chun-Sung; Lin, Hsuan-Tien

    2015-08-01

    The abundance of real-world data and limited labeling budget calls for active learning, an important learning paradigm for reducing human labeling efforts. Many recently developed active learning algorithms consider both uncertainty and representativeness when making querying decisions. However, exploiting representativeness with uncertainty concurrently usually requires tackling sophisticated and challenging learning tasks, such as clustering. In this letter, we propose a new active learning framework, called hinted sampling, which takes both uncertainty and representativeness into account in a simpler way. We design a novel active learning algorithm within the hinted sampling framework with an extended support vector machine. Experimental results validate that the novel active learning algorithm can result in a better and more stable performance than that achieved by state-of-the-art algorithms. We also show that the hinted sampling framework allows improving another active learning algorithm designed from the transductive support vector machine.

  1. Objective Measures of Listening Effort: Effects of Background Noise and Noise Reduction

    ERIC Educational Resources Information Center

    Sarampalis, Anastasios; Kalluri, Sridhar; Edwards, Brent; Hafter, Ervin

    2009-01-01

    Purpose: This work is aimed at addressing a seeming contradiction related to the use of noise-reduction (NR) algorithms in hearing aids. The problem is that although some listeners claim a subjective improvement from NR, it has not been shown to improve speech intelligibility, often even making it worse. Method: To address this, the hypothesis…

  2. The comparison of automated clustering algorithms for resampling representative conformer ensembles with RMSD matrix.

    PubMed

    Kim, Hyoungrae; Jang, Cheongyun; Yadav, Dharmendra K; Kim, Mi-Hyun

    2017-03-23

    The accuracy of any 3D-QSAR, pharmacophore, or 3D-similarity-based chemometric target-fishing model is highly dependent on a reasonable sample of active conformations. A number of diverse conformational sampling algorithms exist that exhaustively generate enough conformers; however, model-building methods rely on an explicit number of common conformers. In this work, we have attempted to make clustering algorithms that can automatically find a reasonable number of representative conformer ensembles from an asymmetric dissimilarity matrix generated with the OpenEye toolkit. RMSD was the key descriptor (variable), with each column of the N × N matrix considered as one of N variables describing the relationship (network) between one conformer (in a row) and the other N conformers. This approach was used to evaluate the performance of well-known clustering algorithms by comparing them in terms of generating representative conformer ensembles, and to test them over different matrix transformation functions with respect to stability. In the network, the representative conformer group could be resampled by four kinds of algorithms with implicit parameters. The directed dissimilarity matrix becomes the only input to the clustering algorithms. The Dunn index, Davies-Bouldin index, eta-squared values and omega-squared values were used to evaluate the clustering algorithms with respect to compactness and explanatory power. The evaluation also included the reduction (abstraction) rate of the data, the correlation between the sizes of the population and the samples, the computational complexity and the memory usage. Every algorithm could find representative conformers automatically without any user intervention, and they reduced the data to 14-19% of the original values within at most 1.13 s per sample. The clustering methods are simple and practical, as they are fast and do not require any explicit parameters. RCDTC presented the maximum Dunn and omega-squared values of the four algorithms, in addition to a consistent reduction rate between the population size and the sample size. The performance of the clustering algorithms was consistent over different transformation functions. Moreover, the clustering method can also be applied to molecular dynamics sampling simulation results.

  3. Hybrid Model Based on Genetic Algorithms and SVM Applied to Variable Selection within Fruit Juice Classification

    PubMed Central

    Fernandez-Lozano, C.; Canto, C.; Gestal, M.; Andrade-Garda, J. M.; Rabuñal, J. R.; Dorado, J.; Pazos, A.

    2013-01-01

    Given the background of the use of Neural Networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: the Support Vector Machine (SVM). Therefore, a hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using the SVM as a fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected. PMID:24453933
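
    The hybrid idea, a GA whose fitness is the cross-validated accuracy of an SVM trained on the selected variables, can be sketched with scikit-learn as below. The genetic operators, population size, and SVM settings are generic illustrative choices, not those of the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def ga_svm_feature_selection(X, y, n_gen=30, pop_size=20, p_mut=0.05, seed=0):
    """Select variables with a simple genetic algorithm whose fitness is the
    cross-validated accuracy of an SVM on the selected columns (sketch).
    Returns a boolean mask of selected features.
    """
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_feat))

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        return cross_val_score(SVC(kernel='rbf'), X[:, mask.astype(bool)], y, cv=3).mean()

    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)
            child = np.concatenate([a[:cut], b[cut:]])             # one-point crossover
            flip = rng.random(n_feat) < p_mut                      # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(ind) for ind in pop])]
    return best.astype(bool)
```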

  4. Performance of fusion algorithms for computer-aided detection and classification of mines in very shallow water obtained from testing in navy Fleet Battle Exercise-Hotel 2000

    NASA Astrophysics Data System (ADS)

    Ciany, Charles M.; Zurawski, William; Kerfoot, Ian

    2001-10-01

    The performance of Computer Aided Detection/Computer Aided Classification (CAD/CAC) fusion algorithms on side-scan sonar images was evaluated using data taken at the Navy's Fleet Battle Exercise-Hotel held in Panama City, Florida, in August 2000. A 2-of-3 binary fusion algorithm is shown to provide robust performance. The algorithm accepts the classification decisions and associated contact locations from three different CAD/CAC algorithms, clusters the contacts based on Euclidean distance, and then declares a valid target when a clustered contact is declared by at least 2 of the 3 individual algorithms. This simple binary fusion provided a 96 percent probability of correct classification at a false alarm rate of 0.14 false alarms per image per side. This performance represents a 3.8:1 reduction in false alarms over the best performing single CAD/CAC algorithm, with no loss in probability of correct classification.
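
    The 2-of-3 rule as described can be sketched directly: pool the contacts from the three algorithms, cluster them by Euclidean distance, and declare a target only when a cluster contains contacts from at least two sources. The clustering below is a simple greedy grouping under an assumed association radius; the operational algorithm's exact clustering rule is not specified in the abstract.

```python
import numpy as np

def fuse_two_of_three(contacts_a, contacts_b, contacts_c, radius):
    """2-of-3 binary fusion of CAD/CAC contact lists (sketch).

    Each contacts_* argument is an (n, 2) array of declared contact positions
    from one algorithm (use shape (0, 2) when an algorithm declares nothing).
    Returns the mean positions of clusters supported by at least two sources.
    """
    points = np.vstack([contacts_a, contacts_b, contacts_c])
    source = np.concatenate([np.full(len(c), i) for i, c in
                             enumerate((contacts_a, contacts_b, contacts_c))])
    unassigned = list(range(len(points)))
    declared = []
    while unassigned:
        seed = unassigned.pop(0)
        cluster = [seed]
        for j in list(unassigned):                       # greedy distance-based grouping
            if np.linalg.norm(points[j] - points[seed]) <= radius:
                cluster.append(j)
                unassigned.remove(j)
        if len(set(source[cluster])) >= 2:               # 2-of-3 rule
            declared.append(points[cluster].mean(axis=0))
    return np.asarray(declared)
```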

  5. Diffusion algorithms and data reduction routine for onsite real-time launch predictions for the transport of Delta-Thor exhaust effluents

    NASA Technical Reports Server (NTRS)

    Stephens, J. B.

    1976-01-01

    The National Aeronautics and Space Administration/Marshall Space Flight Center multilayer diffusion algorithms have been specialized for the prediction of the surface impact for the dispersive transport of the exhaust effluents from the launch of a Delta-Thor vehicle. This specialization permits these transport predictions to be made at the launch range in real time so that the effluent monitoring teams can optimize their monitoring grids. Basically, the data reduction routine requires only the meteorology profiles for the thermodynamics and kinematics of the atmosphere as an input. These profiles are graphed along with the resulting exhaust cloud rise history, the centerline concentrations and dosages, and the hydrogen chloride isopleths.

  6. Using High Resolution Design Spaces for Aerodynamic Shape Optimization Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Li, Wu; Padula, Sharon

    2004-01-01

    This paper explains why high resolution design spaces encourage traditional airfoil optimization algorithms to generate noisy shape modifications, which lead to inaccurate linear predictions of aerodynamic coefficients and potential failure of descent methods. By using auxiliary drag constraints for a simultaneous drag reduction at all design points and the least shape distortion to achieve the targeted drag reduction, an improved algorithm generates relatively smooth optimal airfoils with no severe off-design performance degradation over a range of flight conditions, in high resolution design spaces parameterized by cubic B-spline functions. Simulation results using FUN2D in Euler flows are included to show the capability of the robust aerodynamic shape optimization method over a range of flight conditions.

  7. Algorithms and analyses for stochastic optimization for turbofan noise reduction using parallel reduced-order modeling

    NASA Astrophysics Data System (ADS)

    Yang, Huanhuan; Gunzburger, Max

    2017-06-01

    Simulation-based optimization of acoustic liner design in a turbofan engine nacelle for noise reduction purposes can dramatically reduce the cost and time needed for experimental designs. Because uncertainties are inevitable in the design process, a stochastic optimization algorithm is posed based on the conditional value-at-risk measure so that an ideal acoustic liner impedance is determined that is robust in the presence of uncertainties. A parallel reduced-order modeling framework is developed that dramatically improves the computational efficiency of the stochastic optimization solver for a realistic nacelle geometry. The reduced stochastic optimization solver takes less than 500 seconds to execute. In addition, well-posedness and finite element error analyses of the state system and optimization problem are provided.

  8. [Clinical study using activity-based costing to assess cost-effectiveness of a wound management system utilizing modern dressings in comparison with traditional wound care].

    PubMed

    Ohura, Takehiko; Sanada, Hiromi; Mino, Yoshio

    2004-01-01

    In recent years, the concept of cost-effectiveness, including medical delivery and health service fee systems, has become widespread in Japanese health care. In the field of pressure ulcer management, the recent introduction of penalty subtraction in the care fee system emphasizes the need for prevention and cost-effective care of pressure ulcer. Previous cost-effectiveness research on pressure ulcer management tended to focus only on "hardware" costs such as those for pharmaceuticals and medical supplies, while neglecting other cost aspects, particularly those involving the cost of labor. Thus, cost-effectiveness in pressure ulcer care has not yet been fully established. To provide true cost effectiveness data, a comparative prospective study was initiated in patients with stage II and III pressure ulcers. Considering the potential impact of the pressure reduction mattress on clinical outcome, in particular, the same type of pressure reduction mattresses are utilized in all the cases in the study. The cost analysis method used was Activity-Based Costing, which measures material and labor cost aspects on a daily basis. A reduction in the Pressure Sore Status Tool (PSST) score was used to measure clinical effectiveness. Patients were divided into three groups based on the treatment method and on the use of a consistent algorithm of wound care: 1. MC/A group, modern dressings with a treatment algorithm (control cohort). 2. TC/A group, traditional care (ointment and gauze) with a treatment algorithm. 3. TC/NA group, traditional care (ointment and gauze) without a treatment algorithm. The results revealed that MC/A is more cost-effective than both TC/A and TC/NA. This suggests that appropriate utilization of modern dressing materials and a pressure ulcer care algorithm would contribute to reducing health care costs, improved clinical results, and, ultimately, greater cost-effectiveness.

  9. Dual-microphone and binaural noise reduction techniques for improved speech intelligibility by hearing aid users

    NASA Astrophysics Data System (ADS)

    Yousefian Jazi, Nima

    Spatial filtering and directional discrimination have been shown to be effective pre-processing approaches for noise reduction in microphone array systems. In dual-microphone hearing aids, fixed and adaptive beamforming techniques are the most common solutions for enhancing the desired speech and rejecting unwanted signals captured by the microphones. In fact, beamformers are widely utilized in systems where the spatial properties of the target source (usually in front of the listener) are assumed to be known. In this dissertation, several dual-microphone coherence-based speech enhancement techniques applicable to hearing aids are proposed. All proposed algorithms operate in the frequency domain and (like traditional beamforming techniques) are based purely on the spatial properties of the desired speech source; they do not require any knowledge of noise statistics for calculating the noise reduction filter. This gives the algorithms the ability to address adverse noise conditions, such as situations where one or more interfering talkers speak simultaneously with the target speaker. In such cases, adaptive beamformers lose their effectiveness in suppressing interference, since the noise reference channel cannot be built and updated accordingly. This difference is the main advantage of the proposed techniques over traditional adaptive beamformers. Furthermore, since the suggested algorithms are independent of noise estimation, they offer significant improvement in scenarios where the power level of the interfering sources is much higher than that of the target speech. The dissertation also shows that the principle behind the proposed algorithms can be extended to binaural hearing aids. The main purpose of the investigated techniques is to enhance speech intelligibility, measured through subjective listening tests with normal-hearing and cochlear implant listeners. The improvement in output speech quality achieved by the algorithms is also presented, to show that the proposed methods are potential candidates for future use in commercial hearing aids and cochlear implant devices.
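
    The sketch below shows one simple coherence-based gain of the general family described above: the magnitude-squared coherence between the two microphones is estimated per STFT bin and bins with low coherence are attenuated. It is a generic illustration under stated assumptions, not the dissertation's specific filter design.

    ```python
    # Hedged sketch: dual-microphone enhancement driven by magnitude-squared
    # coherence (MSC); the averaging constant and gain rule are illustrative choices.
    import numpy as np
    from scipy.signal import stft, istft

    def coherence_gain_enhance(x1, x2, fs, alpha=0.8, floor=0.1, nperseg=256):
        _, _, X1 = stft(x1, fs, nperseg=nperseg)
        _, _, X2 = stft(x2, fs, nperseg=nperseg)
        P11 = np.zeros(X1.shape[0]); P22 = np.zeros(X1.shape[0])
        P12 = np.zeros(X1.shape[0], dtype=complex)
        out = np.zeros_like(X1)
        for m in range(X1.shape[1]):
            P11 = alpha * P11 + (1 - alpha) * np.abs(X1[:, m]) ** 2
            P22 = alpha * P22 + (1 - alpha) * np.abs(X2[:, m]) ** 2
            P12 = alpha * P12 + (1 - alpha) * X1[:, m] * np.conj(X2[:, m])
            msc = np.abs(P12) ** 2 / (P11 * P22 + 1e-12)  # 0 (diffuse) .. 1 (coherent)
            gain = np.maximum(msc, floor)                 # keep coherent (frontal) bins
            out[:, m] = gain * 0.5 * (X1[:, m] + X2[:, m])
        _, y = istft(out, fs, nperseg=nperseg)
        return y
    ```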

  10. Can PSA Reflex Algorithm be a valid alternative to other PSA-based prostate cancer screening strategies?

    PubMed

    Caldarelli, G; Troiano, G; Rosadini, D; Nante, N

    2017-01-01

    The laboratory tests available for the differential diagnosis of prostate cancer are total PSA, free PSA, and the free/total PSA ratio. In Italy most doctors tend to request both total and free PSA for their patients, even when the total PSA value does not justify the further request of free PSA, with a consequent growth of costs for the National Health System. The aim of our study was to predict the savings in euros (for reagents) and the reduction in free PSA tests achievable by applying the "PSA Reflex" algorithm. We calculated the number of total PSA and free PSA tests performed in 2014 in the Hospital of Grosseto and, simulating the application of the "PSA Reflex" algorithm in the same year, we calculated the decrease in the number of free PSA requests and estimated the savings in reagents obtained from this reduction. In 2014, 25,955 total PSA tests were performed in the Hospital of Grosseto: 3,631 (14%) were greater than 10 ng/mL; 7,686 (29.6%) were between 2 and 10 ng/mL; and 14,638 (56.4%) were lower than 2 ng/mL. A total of 16,904 free PSA tests were performed. Simulating the use of the "PSA Reflex" algorithm, free PSA tests would have been performed only in cases with total PSA values between 2 and 10 ng/mL, with a saving of 54.5% of the free PSA tests and of 8,971 euros for reagents alone. Our study showed that the "PSA Reflex" algorithm is a valid alternative that reduces costs. The estimated intra-laboratory savings on reagents seem modest; however, they are accompanied by additional savings in the other diagnostic processes for prostate cancer.
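
    The decision rule itself is a single threshold test, sketched below with the cut-offs given in the abstract; it is illustrative, not a clinical decision tool.

    ```python
    # Hedged sketch of the "PSA Reflex" rule: free PSA is added only when total
    # PSA falls in the 2-10 ng/mL grey zone reported above.
    def psa_reflex(total_psa_ng_ml: float) -> bool:
        """Return True if a free PSA test should be added reflexively."""
        return 2.0 <= total_psa_ng_ml <= 10.0

    for value in (1.2, 4.7, 12.3):
        print(value, "->", "add free PSA" if psa_reflex(value) else "no free PSA")
    ```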

  11. Use of a channelized Hotelling observer to assess CT image quality and optimize dose reduction for iteratively reconstructed images.

    PubMed

    Favazza, Christopher P; Ferrero, Andrea; Yu, Lifeng; Leng, Shuai; McMillan, Kyle L; McCollough, Cynthia H

    2017-07-01

    The use of iterative reconstruction (IR) algorithms in CT generally decreases image noise and enables dose reduction. However, the amount of dose reduction possible using IR without sacrificing diagnostic performance is difficult to assess with conventional image quality metrics. Through this investigation, achievable dose reduction using a commercially available IR algorithm without loss of low contrast spatial resolution was determined with a channelized Hotelling observer (CHO) model and used to optimize a clinical abdomen/pelvis exam protocol. A phantom containing 21 low contrast disks-three different contrast levels and seven different diameters-was imaged at different dose levels. Images were created with filtered backprojection (FBP) and IR. The CHO was tasked with detecting the low contrast disks. CHO performance indicated dose could be reduced by 22% to 25% without compromising low contrast detectability (as compared to full-dose FBP images) whereas 50% or more dose reduction significantly reduced detection performance. Importantly, default settings for the scanner and protocol investigated reduced dose by upward of 75%. Subsequently, CHO-based protocol changes to the default protocol yielded images of higher quality and doses more consistent with values from a larger, dose-optimized scanner fleet. CHO assessment provided objective data to successfully optimize a clinical CT acquisition protocol.
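
    The sketch below illustrates the CHO machinery in general terms: images are reduced to channel responses, a Hotelling template is built from signal-present and signal-absent training data, and a detectability index summarizes performance. The Gaussian channel profiles and dimensions are assumptions, not the authors' channel set.

    ```python
    # Hedged sketch of a channelized Hotelling observer (CHO); channels, sizes,
    # and the detectability summary are generic illustrative choices.
    import numpy as np

    def gaussian_channels(size, widths=(2, 4, 8, 16)):
        """Radially symmetric Gaussian channels sampled on a size x size grid."""
        y, x = np.indices((size, size)) - (size - 1) / 2.0
        r2 = x ** 2 + y ** 2
        U = np.stack([np.exp(-r2 / (2 * w ** 2)).ravel() for w in widths], axis=1)
        return U / np.linalg.norm(U, axis=0)

    def cho_detectability(signal_imgs, absent_imgs):
        """signal_imgs, absent_imgs: arrays of shape (N, size, size)."""
        U = gaussian_channels(signal_imgs.shape[1])
        vs = signal_imgs.reshape(len(signal_imgs), -1) @ U   # channel outputs
        vn = absent_imgs.reshape(len(absent_imgs), -1) @ U
        S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))              # pooled covariance
        w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))      # Hotelling template
        ts, tn = vs @ w, vn @ w                              # decision statistics
        return (ts.mean() - tn.mean()) / np.sqrt(0.5 * (ts.var() + tn.var()))
    ```

    Comparing this index across dose levels and reconstruction settings (FBP vs. IR) is the kind of comparison used to decide how much dose can be removed before low-contrast detectability drops.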

  12. Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO

    PubMed Central

    Zhu, Zhichuan; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan

    2018-01-01

    Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results. Based on grid search, however, the MKL-SVM algorithm requires a long time for parameter optimization, and its identification accuracy depends on the fineness of the grid. In this paper, swarm intelligence is introduced and Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm to form the MKL-SVM-PSO algorithm, which realizes rapid global optimization of the parameters. In order to obtain the global optimal solution, different inertia weights, such as a constant inertia weight, a linear inertia weight, and a nonlinear inertia weight, are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of the training time of the MKL-SVM grid search algorithm, while achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Statistical analysis of the averages of 20 runs with different inertia weights shows that dynamic inertia weights are superior to the constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic inertia weights, the parameter optimization time of the nonlinear inertia weight is shorter and the average fitness value after convergence is much closer to the optimal fitness value, outperforming the linear inertia weight. In addition, an improved nonlinear inertia weight is verified. PMID:29853983
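
    The inertia-weight schedules compared above can be summarized in a short PSO sketch; the toy objective below stands in for the MKL-SVM cross-validation error, and all constants are illustrative assumptions rather than the paper's settings.

    ```python
    # Hedged sketch: particle swarm optimization with constant, linear, or
    # nonlinear inertia weight schedules.
    import numpy as np

    def inertia(kind, it, max_it, w_max=0.9, w_min=0.4):
        if kind == "constant":
            return 0.7
        if kind == "linear":
            return w_max - (w_max - w_min) * it / max_it
        return w_min + (w_max - w_min) * np.exp(-4.0 * it / max_it)  # one nonlinear form

    def pso(objective, dim, n=20, max_it=100, kind="nonlinear", c1=2.0, c2=2.0, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(-1.0, 1.0, (n, dim)); v = np.zeros((n, dim))
        pbest = x.copy(); pbest_f = np.apply_along_axis(objective, 1, x)
        gbest = pbest[pbest_f.argmin()].copy()
        for it in range(max_it):
            w = inertia(kind, it, max_it)
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            f = np.apply_along_axis(objective, 1, x)
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    best, val = pso(lambda z: float(np.sum(z ** 2)), dim=2)  # toy stand-in objective
    ```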

  13. Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO.

    PubMed

    Li, Yang; Zhu, Zhichuan; Hou, Alin; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan

    2018-01-01

    Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results. Based on grid search, however, the MKL-SVM algorithm requires a long time for parameter optimization, and its identification accuracy depends on the fineness of the grid. In this paper, swarm intelligence is introduced and Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm to form the MKL-SVM-PSO algorithm, which realizes rapid global optimization of the parameters. In order to obtain the global optimal solution, different inertia weights, such as a constant inertia weight, a linear inertia weight, and a nonlinear inertia weight, are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of the training time of the MKL-SVM grid search algorithm, while achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Statistical analysis of the averages of 20 runs with different inertia weights shows that dynamic inertia weights are superior to the constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic inertia weights, the parameter optimization time of the nonlinear inertia weight is shorter and the average fitness value after convergence is much closer to the optimal fitness value, outperforming the linear inertia weight. In addition, an improved nonlinear inertia weight is verified.

  14. Desiderata for computable representations of electronic health records-driven phenotype algorithms.

    PubMed

    Mo, Huan; Thompson, William K; Rasmussen, Luke V; Pacheco, Jennifer A; Jiang, Guoqian; Kiefer, Richard; Zhu, Qian; Xu, Jie; Montague, Enid; Carrell, David S; Lingren, Todd; Mentch, Frank D; Ni, Yizhao; Wehbe, Firas H; Peissig, Peggy L; Tromp, Gerard; Larson, Eric B; Chute, Christopher G; Pathak, Jyotishman; Denny, Joshua C; Speltz, Peter; Kho, Abel N; Jarvik, Gail P; Bejan, Cosmin A; Williams, Marc S; Borthwick, Kenneth; Kitchner, Terrie E; Roden, Dan M; Harris, Paul A

    2015-11-01

    Electronic health records (EHRs) are increasingly used for clinical and translational research through the creation of phenotype algorithms. Currently, phenotype algorithms are most commonly represented as noncomputable descriptive documents and knowledge artifacts that detail the protocols for querying diagnoses, symptoms, procedures, medications, and/or text-driven medical concepts, and are primarily meant for human comprehension. We present desiderata for developing a computable phenotype representation model (PheRM). A team of clinicians and informaticians reviewed common features for multisite phenotype algorithms published in PheKB.org and existing phenotype representation platforms. We also evaluated well-known diagnostic criteria and clinical decision-making guidelines to encompass a broader category of algorithms. We propose 10 desired characteristics for a flexible, computable PheRM: (1) structure clinical data into queryable forms; (2) recommend use of a common data model, but also support customization for the variability and availability of EHR data among sites; (3) support both human-readable and computable representations of phenotype algorithms; (4) implement set operations and relational algebra for modeling phenotype algorithms; (5) represent phenotype criteria with structured rules; (6) support defining temporal relations between events; (7) use standardized terminologies and ontologies, and facilitate reuse of value sets; (8) define representations for text searching and natural language processing; (9) provide interfaces for external software algorithms; and (10) maintain backward compatibility. A computable PheRM is needed for true phenotype portability and reliability across different EHR products and healthcare systems. These desiderata are a guide to inform the establishment and evolution of EHR phenotype algorithm authoring platforms and languages. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
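
    As a concrete (and deliberately tiny) illustration of desiderata (4)-(6), the sketch below expresses a phenotype definition as set operations over queryable records with one temporal constraint; the codes, tables, and cohort logic are hypothetical examples, not content of the desiderata themselves.

    ```python
    # Hedged sketch: phenotype criteria as set algebra over structured records,
    # including a temporal relation between a diagnosis and a medication start.
    from datetime import date

    diagnoses = {                      # patient_id -> [(ICD code, date)]
        "p1": [("E11.9", date(2020, 3, 1))],
        "p2": [("E11.9", date(2021, 5, 2))],
        "p3": [("I10", date(2020, 7, 9))],
    }
    medications = {                    # patient_id -> [(drug, date)]
        "p1": [("metformin", date(2020, 3, 15))],
        "p2": [("lisinopril", date(2021, 6, 1))],
    }

    def patients_with_code(code):
        return {p for p, dx in diagnoses.items() if any(c == code for c, _ in dx)}

    def patients_started_drug_within(drug, code, days):
        """Temporal relation: drug started within `days` after the diagnosis code."""
        hits = set()
        for p in patients_with_code(code):
            dx_dates = [d for c, d in diagnoses[p] if c == code]
            rx_dates = [d for m, d in medications.get(p, []) if m == drug]
            if any(0 <= (rx - dx).days <= days for dx in dx_dates for rx in rx_dates):
                hits.add(p)
        return hits

    # Case definition as set algebra: diagnosed diabetics who started metformin
    # within 90 days of the diagnosis.
    cases = patients_with_code("E11.9") & patients_started_drug_within("metformin", "E11.9", 90)
    print(cases)   # {'p1'}
    ```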

  15. Fusion of High Resolution Multispectral Imagery in Vulnerable Coastal and Land Ecosystems.

    PubMed

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier; Garcia-Pedrero, Angel; Rodriguez-Esparragon, Dionisio

    2017-01-25

    Ecosystems provide a wide variety of useful resources that enhance human welfare, but these resources are declining due to climate change and anthropogenic pressure. In this work, three vulnerable ecosystems are studied: shrublands, coastal areas with dune systems, and areas of shallow water. Given this decline in resources, remote sensing and image processing techniques could contribute to the management of these natural resources in a practical and cost-effective way, although some improvements are needed to obtain higher-quality information. An important quality improvement is fusion at the pixel level. Hence, the objective of this work is to assess which pansharpening technique provides the best fused image for the different types of ecosystems. After a preliminary evaluation of twelve classic and novel fusion algorithms, a total of four pansharpening algorithms were analyzed using six quality indices. The quality assessment was carried out not only for the whole set of multispectral bands, but also for the subsets of spectral bands covered by the wavelength range of the panchromatic image and outside of it. A better quality result is observed in the fused image when using only the bands covered by the panchromatic band range. It is important to highlight that these techniques were applied not only to land and urban areas, but also, in a novel analysis, to shallow-water ecosystems. Although the algorithms do not show large differences in land and coastal areas, coastal ecosystems require simpler algorithms, such as fast intensity-hue-saturation, whereas more heterogeneous ecosystems need advanced algorithms, such as weighted wavelet 'à trous' through fractal dimension maps for shrublands and mixed ecosystems. Moreover, a quality map analysis was carried out in order to study the fusion result in each band at the local level. Finally, to demonstrate the performance of these pansharpening techniques, an advanced Object-Based Image Analysis (OBIA) support vector machine classification was applied, and a thematic map for the shrubland ecosystem was obtained, which corroborates weighted wavelet 'à trous' through fractal dimension maps as the best fusion algorithm for this ecosystem.
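
    For reference, the simplest family mentioned above (fast intensity-hue-saturation) can be sketched in a few lines: a weighted intensity is computed from the multispectral bands and the panchromatic detail is injected into every band. Band weights, shapes, and the synthetic data are assumptions for illustration only.

    ```python
    # Hedged sketch of fast IHS (FIHS) pansharpening: ms is the upsampled
    # multispectral cube (bands, H, W) and pan the co-registered panchromatic band.
    import numpy as np

    def fast_ihs_pansharpen(ms, pan, weights=None):
        bands = ms.shape[0]
        w = np.full(bands, 1.0 / bands) if weights is None else np.asarray(weights)
        intensity = np.tensordot(w, ms, axes=1)   # weighted intensity component
        detail = pan - intensity                  # spatial detail to inject
        return ms + detail[None, :, :]            # same detail added to each band

    rng = np.random.default_rng(1)
    ms = rng.random((4, 64, 64))                  # synthetic 4-band stand-in
    pan = ms.mean(axis=0) + 0.05 * rng.random((64, 64))
    fused = fast_ihs_pansharpen(ms, pan)
    ```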

  16. A Mixed Approach to Similarity Metric Selection in Affinity Propagation-Based WiFi Fingerprinting Indoor Positioning.

    PubMed

    Caso, Giuseppe; de Nardis, Luca; di Benedetto, Maria-Gabriella

    2015-10-30

    The weighted k-nearest neighbors (WkNN) algorithm is by far the most popular choice in the design of fingerprinting indoor positioning systems based on WiFi received signal strength (RSS). WkNN estimates the position of a target device by selecting k reference points (RPs) based on the similarity of their fingerprints with the measured RSS values. The position of the target device is then obtained as a weighted sum of the positions of the k RPs. Two-step WkNN positioning algorithms were recently proposed, in which RPs are divided into clusters using the affinity propagation clustering algorithm, and one representative for each cluster is selected. Only cluster representatives are then considered during the position estimation, leading to a significant computational complexity reduction compared to traditional, flat WkNN. Flat and two-step WkNN share the issue of properly selecting the similarity metric so as to guarantee good positioning accuracy: in two-step WkNN, in particular, the metric impacts three different steps in the position estimation, that is cluster formation, cluster selection and RP selection and weighting. So far, however, the only similarity metric considered in the literature was the one proposed in the original formulation of the affinity propagation algorithm. This paper fills this gap by comparing different metrics and, based on this comparison, proposes a novel mixed approach in which different metrics are adopted in the different steps of the position estimation procedure. The analysis is supported by an extensive experimental campaign carried out in a multi-floor 3D indoor positioning testbed. The impact of similarity metrics and their combinations on the structure and size of the resulting clusters, 3D positioning accuracy and computational complexity are investigated. Results show that the adoption of metrics different from the one proposed in the original affinity propagation algorithm and, in particular, the combination of different metrics can significantly improve the positioning accuracy while preserving the efficiency in computational complexity typical of two-step algorithms.
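
    The flat WkNN estimator described above reduces to a few lines, sketched below with a Euclidean fingerprint distance (one of the metrics the paper compares); the fingerprints and coordinates are toy values.

    ```python
    # Hedged sketch of flat WkNN positioning: pick the k reference points whose
    # stored RSS fingerprints best match the measurement, then average their
    # positions with similarity-derived weights.
    import numpy as np

    def wknn_position(measured_rss, rp_fingerprints, rp_positions, k=3, eps=1e-6):
        """rp_fingerprints: (N, n_ap) RSS values; rp_positions: (N, d) coordinates."""
        dist = np.linalg.norm(rp_fingerprints - measured_rss, axis=1)
        idx = np.argsort(dist)[:k]                 # k most similar reference points
        weights = 1.0 / (dist[idx] + eps)          # closer fingerprints weigh more
        weights /= weights.sum()
        return weights @ rp_positions[idx]

    fps = np.array([[-40, -60, -70], [-55, -50, -65], [-70, -45, -55], [-65, -60, -48]], float)
    pos = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
    print(wknn_position(np.array([-50.0, -55.0, -66.0]), fps, pos, k=3))
    ```

    In the two-step variant described above, a similar similarity computation is used first to select a cluster via its representative and then to select and weight RPs within it.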

  17. A Mixed Approach to Similarity Metric Selection in Affinity Propagation-Based WiFi Fingerprinting Indoor Positioning

    PubMed Central

    Caso, Giuseppe; de Nardis, Luca; di Benedetto, Maria-Gabriella

    2015-01-01

    The weighted k-nearest neighbors (WkNN) algorithm is by far the most popular choice in the design of fingerprinting indoor positioning systems based on WiFi received signal strength (RSS). WkNN estimates the position of a target device by selecting k reference points (RPs) based on the similarity of their fingerprints with the measured RSS values. The position of the target device is then obtained as a weighted sum of the positions of the k RPs. Two-step WkNN positioning algorithms were recently proposed, in which RPs are divided into clusters using the affinity propagation clustering algorithm, and one representative for each cluster is selected. Only cluster representatives are then considered during the position estimation, leading to a significant computational complexity reduction compared to traditional, flat WkNN. Flat and two-step WkNN share the issue of properly selecting the similarity metric so as to guarantee good positioning accuracy: in two-step WkNN, in particular, the metric impacts three different steps in the position estimation, that is cluster formation, cluster selection and RP selection and weighting. So far, however, the only similarity metric considered in the literature was the one proposed in the original formulation of the affinity propagation algorithm. This paper fills this gap by comparing different metrics and, based on this comparison, proposes a novel mixed approach in which different metrics are adopted in the different steps of the position estimation procedure. The analysis is supported by an extensive experimental campaign carried out in a multi-floor 3D indoor positioning testbed. The impact of similarity metrics and their combinations on the structure and size of the resulting clusters, 3D positioning accuracy and computational complexity are investigated. Results show that the adoption of metrics different from the one proposed in the original affinity propagation algorithm and, in particular, the combination of different metrics can significantly improve the positioning accuracy while preserving the efficiency in computational complexity typical of two-step algorithms. PMID:26528984

  18. Fusion of High Resolution Multispectral Imagery in Vulnerable Coastal and Land Ecosystems

    PubMed Central

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier; Garcia-Pedrero, Angel; Rodriguez-Esparragon, Dionisio

    2017-01-01

    Ecosystems provide a wide variety of useful resources that enhance human welfare, but these resources are declining due to climate change and anthropogenic pressure. In this work, three vulnerable ecosystems are studied: shrublands, coastal areas with dune systems, and areas of shallow water. Given this decline in resources, remote sensing and image processing techniques could contribute to the management of these natural resources in a practical and cost-effective way, although some improvements are needed to obtain higher-quality information. An important quality improvement is fusion at the pixel level. Hence, the objective of this work is to assess which pansharpening technique provides the best fused image for the different types of ecosystems. After a preliminary evaluation of twelve classic and novel fusion algorithms, a total of four pansharpening algorithms were analyzed using six quality indices. The quality assessment was carried out not only for the whole set of multispectral bands, but also for the subsets of spectral bands covered by the wavelength range of the panchromatic image and outside of it. A better quality result is observed in the fused image when using only the bands covered by the panchromatic band range. It is important to highlight that these techniques were applied not only to land and urban areas, but also, in a novel analysis, to shallow-water ecosystems. Although the algorithms do not show large differences in land and coastal areas, coastal ecosystems require simpler algorithms, such as fast intensity-hue-saturation, whereas more heterogeneous ecosystems need advanced algorithms, such as weighted wavelet ‘à trous’ through fractal dimension maps for shrublands and mixed ecosystems. Moreover, a quality map analysis was carried out in order to study the fusion result in each band at the local level. Finally, to demonstrate the performance of these pansharpening techniques, an advanced Object-Based Image Analysis (OBIA) support vector machine classification was applied, and a thematic map for the shrubland ecosystem was obtained, which corroborates weighted wavelet ‘à trous’ through fractal dimension maps as the best fusion algorithm for this ecosystem. PMID:28125055

  19. Radioiodine therapy of hyperfunctioning thyroid nodules: usefulness of an implemented dose calculation algorithm allowing reduction of radioiodine amount.

    PubMed

    Schiavo, M; Bagnara, M C; Pomposelli, E; Altrinetti, V; Calamia, I; Camerieri, L; Giusti, M; Pesce, G; Reitano, C; Bagnasco, M; Caputo, M

    2013-09-01

    Radioiodine is a common option for the treatment of hyperfunctioning thyroid nodules. Because of the expected selective radioiodine uptake by the adenoma, relatively high "fixed" activities are often used. Alternatively, the activity is individually calculated from the prescription of a fixed target absorbed dose. We evaluated the use of an algorithm for personalized calculation of the radioiodine activity, which as a rule allows the administration of lower radioiodine activities. Seventy-five patients with a single hyperfunctioning thyroid nodule eligible for 131I treatment were studied. The 131I activities to be administered were estimated by the method described by Traino et al., originally developed for Graves' disease, assuming selective and homogeneous 131I uptake by the adenoma. The method takes into account the 131I uptake and its effective half-life, the target (adenoma) volume, and its expected volume reduction during treatment. A comparison with the activities calculated by other dosimetric protocols and with the "fixed" activity method was performed. 131I uptake was measured by external counting, thyroid nodule volume by ultrasonography, and thyroid hormones and TSH by ELISA. Remission of hyperthyroidism was observed in all but one patient; the volume reduction of the adenoma was very close to that assumed by our model. The effective half-life was highly variable between patients and critically affected the dose calculation. The administered activities were clearly lower than the "fixed" activities and those prescribed by other protocols. The proposed algorithm proved effective also for the treatment of single hyperfunctioning thyroid nodules and allowed a significant reduction of the administered 131I activities without loss of clinical efficacy.
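
    A simplified, MIRD-style version of the activity calculation is sketched below to show how the quantities listed above (uptake, effective half-life, target mass, prescribed dose) combine; it is not the Traino et al. algorithm, which additionally models the volume reduction of the nodule during treatment, and the dose factor used is a placeholder rather than a validated constant.

    ```python
    # Hedged sketch: administered activity from a target absorbed dose assuming
    # mono-exponential clearance, cumulated activity A0 * U * T_eff / ln(2), and a
    # placeholder per-mass dose factor s (Gy * g per MBq * day).
    import math

    def administered_activity_mbq(target_dose_gy, uptake_fraction, t_eff_days,
                                  nodule_mass_g, s_gy_g_per_mbq_day):
        """Solve D = (A0 * U * T_eff / ln 2) * s / m for the administered activity A0."""
        cumulated_per_mbq = uptake_fraction * t_eff_days / math.log(2.0)
        dose_per_mbq = cumulated_per_mbq * s_gy_g_per_mbq_day / nodule_mass_g
        return target_dose_gy / dose_per_mbq

    # Hypothetical example: 300 Gy target dose, 40% uptake, 5.5-day effective
    # half-life, 10 g nodule, placeholder dose factor.
    print(administered_activity_mbq(300.0, 0.40, 5.5, 10.0, s_gy_g_per_mbq_day=2.0))
    ```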

  20. Evaluation of renal nerve morphological changes and norepinephrine levels following treatment with novel bipolar radiofrequency delivery systems in a porcine model

    PubMed Central

    Cohen-Mazor, Meital; Mathur, Prabodh; Stanley, James R.L.; Mendelsohn, Farrell O.; Lee, Henry; Baird, Rose; Zani, Brett G.; Markham, Peter M.; Rocha-Singh, Krishna

    2014-01-01

    Objective: To evaluate the safety and effectiveness of different bipolar radiofrequency system algorithms in interrupting the renal sympathetic nerves and reducing renal norepinephrine in a healthy porcine model. Methods: A porcine model (N = 46) was used to investigate renal norepinephrine levels and changes to renal artery tissues and nerves following percutaneous renal denervation with radiofrequency bipolar electrodes mounted on a balloon catheter. Parameters of the radiofrequency system (i.e. electrode length and energy delivery algorithm), and the effects of single and longitudinal treatments along the artery were studied with a 7-day model in which swine received unilateral radiofrequency treatments. Additional sets of animals were used to examine norepinephrine and histological changes 28 days following bilateral percutaneous radiofrequency treatment or surgical denervation; untreated swine were used for comparison of renal norepinephrine levels. Results: Seven days postprocedure, norepinephrine concentrations decreased proportionally to electrode length, with 81, 60 and 38% reductions (vs. contralateral control) using 16, 4 and 2-mm electrodes, respectively. Applying a temperature-control algorithm with the 4-mm electrodes increased efficacy, with a mean 89.5% norepinephrine reduction following a 30-s treatment at 68°C. Applying this treatment along the entire artery length affected more nerves vs. a single treatment, resulting in superior norepinephrine reduction 28 days following bilateral treatment. Conclusion: Percutaneous renal artery application of bipolar radiofrequency energy demonstrated safety and resulted in a significant renal norepinephrine content reduction and renal nerve injury compared with untreated controls in porcine models. PMID:24875181
